An orchestration platform for the development, production, and observation of data assets.
APACHE-2.0 License
Published by sryza over 3 years ago
New
- Run `dagster instance migrate` to upgrade.
- Step and type filtering now offer fuzzy search, all log event types are now searchable, and visual bugs within the input have been repaired. Additionally, the default setting for “Hide non-matches” has been flipped to true.
- The `dagster-daemon` process now runs faster when running multiple schedulers or sensors from the same repository.
- `fs_io_manager` now defaults the base directory to `base_dir` via the Dagster instance's `local_artifact_storage` configuration. Previously, it defaulted to the directory where the pipeline was executed.
- `versioned_filesystem_io_manager` and `custom_path_fs_io_manager` now require `base_dir` as part of the resource configs. Previously, the `base_dir` defaulted to the directory where the pipeline was executed.
- To enable daemon-based backfills, run `dagster instance migrate` and configure your instance with the following settings in `dagster.yaml`:

```yaml
backfill:
  daemon_enabled: true
```

There is a corresponding flag in the Dagster helm chart to enable this instance configuration. See the Helm chart's `values.yaml` file for more information.
- Added a `description` parameter that takes in a human-readable string description and displays it on the corresponding landing page in Dagit.

Integrations
- `gcs_pickle_io_manager` now also retries on 403 Forbidden errors, which previously would only retry on 429 TooManyRequests.

Bug Fixes
- `Tuple` with nested inner types in solid definitions no longer causes GraphQL errors.
- `dagster new-repo` should now properly generate subdirectories and files, without needing to install `dagster` from source (e.g. with `pip install --editable`).

Dependencies
- `pendulum` datetime/timezone library.

Documentation
New
- Added a `dagster run delete` CLI command to delete a run and its associated event log entries.
- Added a `partition_days_offset` argument to the `@daily_schedule` decorator that allows you to customize which partition is used for each execution of your schedule. The default value of this parameter is `1`, which means that a schedule that runs on day N will fill in the partition for day N-1. To create a schedule that uses the partition for the current day, set this parameter to `0`, or increase it to make the schedule use an earlier day's partition. Similar arguments have also been added for the other partitioned schedule decorators (`@monthly_schedule`, `@weekly_schedule`, and `@hourly_schedule`).
- The `dagster new-repo` command now includes a `workspace.yaml` file for your new repository.
- When using a `workspace.yaml` file to load your pipelines, you can now specify an environment variable for the server's hostname and port. For example, this is now a valid workspace:

```yaml
load_from:
  - grpc_server:
      host:
        env: FOO_HOST
      port:
        env: FOO_PORT
```
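The `partition_days_offset` arithmetic described above is easy to state in plain Python; the following sketch (illustrative only, not Dagster's implementation) shows which partition date a daily schedule fills for a given execution day:

```python
from datetime import date, timedelta

def partition_for_execution(execution_date: date, partition_days_offset: int = 1) -> date:
    """Which partition a daily schedule fills when it runs on execution_date.

    With the default offset of 1, a run on day N fills the partition for day N-1;
    an offset of 0 targets the current day.
    """
    return execution_date - timedelta(days=partition_days_offset)

# Default offset: a run on 2021-02-10 fills the 2021-02-09 partition.
assert partition_for_execution(date(2021, 2, 10)) == date(2021, 2, 9)
# Offset 0: the run fills the current day's partition.
assert partition_for_execution(date(2021, 2, 10), partition_days_offset=0) == date(2021, 2, 10)
```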
Integrations
- The `K8sRunLauncher` and `CeleryK8sRunLauncher` no longer reload the pipeline being executed just before launching it. The previous behavior ensured that the latest version of the pipeline was always being used, but was inconsistent with other run launchers. Instead, to ensure that you're running the latest version of your pipeline, you can refresh your repository in Dagit by pressing the button next to the repository name.

Bug Fixes
- Fixed an issue that occurred when using the `--path-prefix` option.
- For a `ModeDefinition` that contains a single executor, that executor is now selected by default.
- Calling `reconstructable` on pipelines that were also decorated with hooks no longer raises an error.
- The `dagster-daemon liveness-check` command previously returned false when daemons surfaced non-fatal errors to be displayed in Dagit, leading to crash loops in Kubernetes. The command has been fixed to return false only when the daemon has stopped running.
- When pipelines have `OutputDefinition`s with `io_manager_key`s, or `InputDefinition`s with `root_manager_key`s, but any of the modes provided for the pipeline definition do not include a resource definition for the required key, Dagster now raises an error immediately instead of when the pipeline is executed.
- `dagster-dbt` has been updated to handle the new `run_results.json` schema for dbt 0.19.0.

Dependencies

Documentation
Community Contributions
- Added `/License` for packages that claim distribution under Apache-2.0 (thanks @bollwyvl!)

New
- The `dagster/dagster-k8s` and `dagster/dagster-celery-k8s` images can be used for all processes which don't require user code (Dagit, Daemon, and Celery workers when using the CeleryK8sExecutor). `user-code-example` can be used for the example user code deployments (`k8s-dagit`, `k8s-celery-worker`, `k8s-example`).
- The `configured` api on solids now enforces the `name` argument as positional. The `name` argument remains a keyword argument on executors. The `name` argument has been removed from resources and loggers to reflect that they are anonymous. Previously, you would receive an error message if the `name` argument was provided to `configured` on resources or loggers.
- In addition to the `minimum_interval_seconds` field, the overall sensor daemon interval can now be configured in the `dagster.yaml` instance settings with:

```yaml
sensor_settings:
  interval_seconds: 30 # (default)
```

This changes the interval at which the daemon checks for sensors which haven't run within their `minimum_interval_seconds`.
- `TypeCheck`
- The `dagster-daemon` process now runs each of its daemons in its own thread. This allows the scheduler, sensor loop, and daemon for launching queued runs to run in parallel, without slowing each other down. The `dagster-daemon` process will shut down if any of the daemon threads crash or hang, so that the execution environment knows that it needs to be restarted.
- `dagster new-repo` is a new CLI command that generates a Dagster repository with skeleton code in your filesystem. This CLI command is experimental and it may generate different files in future versions, even between dot releases. As of 0.10.5, `dagster new-repo` does not support Windows. See here for official API docs.
- When using a `grpc_server` repository location, Dagit will automatically detect changes and prompt you to reload when the remote server updates.

Integrations
Bugfixes
New
Bugfixes
Community Contributions
New
Bugfixes
Fixed an issue where run start times and end times were displayed in the wrong timezone in Dagit when using Postgres storage.
Schedules with partitions that weren’t able to execute due to not being able to find a partition will now display the name of the partition they were unable to find on the “Last tick” entry for that schedule.
Improved timing information display for queued and canceled runs within the Runs table view and on individual Run pages in Dagit.
Improvements to the tick history view for schedules and sensors.
Fixed formatting issues on the Dagit instance configuration page.
Miscellaneous Dagit bugfixes and improvements.
The `dagster pipeline launch` command will now respect run concurrency limits if they are applied on your instance.
Fixed an issue where re-executing a run created by a sensor would cause the daemon to stop executing any additional runs from that sensor.
Sensor runs with invalid run configuration will no longer create a failed run; instead, an error will appear on the page for the sensor, allowing you to fix the configuration issue.
General `dagstermill` housekeeping: test refactoring & type annotations, as well as repinning `ipykernel` to solve #3401.
Documentation
Community Contributions
- Reduced the size of the `k8s-example` image by 25% (104 MB) (thanks @alex-treebeard and @mrdavidlaing!)
- `snowflake_resource` can now be configured to use the SQLAlchemy connector (thanks @basilvetas!)

New
- When specifying `userDeployments.deployments` in the Helm chart, `replicaCount` now defaults to 1 if not specified.

Bugfixes
- `env`, `envConfigMaps`, and `envSecrets`.

Documentation
- Added docs on using the `QueuedRunCoordinator` to limit run concurrency.
to limit run concurrency.Published by prha almost 4 years ago
- See the migration guide below for moving from the `SystemCronScheduler` or `K8sScheduler` to the new scheduler.
- The `IOManager` abstraction provides a new, streamlined primitive for granular control over where and how solid outputs are stored and loaded. This is intended to replace the (deprecated) intermediate/system storage abstractions. See the IO Manager Overview for more information.
- Resources can now depend on other resources, using the `required_resource_keys` parameter on `@resource`.
- The `DynamicOutputDefinition` API: Dagster can now map the downstream dependencies over a dynamic output at runtime.

Dropping Python 2 support
Removal of deprecated APIs
These APIs were marked for deprecation with warnings in the 0.9.0 release, and have been removed in the 0.10.0 release.
- `input_hydration_config` has been removed. Use the `dagster_type_loader` decorator instead.
- `output_materialization_config` has been removed. Use `dagster_type_materializer` instead.
- Removed `SystemStorageDefinition`, `@system_storage`, and `default_system_storage_defs`. Use the new `IOManager` API instead. See the IO Manager Overview for more information.
- The `config_field` argument on decorators and definitions classes has been removed and replaced with `config_schema`. This is a drop-in rename.
- The argument `step_keys_to_execute` to the functions `reexecute_pipeline` and `reexecute_pipeline_iterator` has been removed. Use the `step_selection` argument to select subsets for execution instead.
- We have removed the `repository` key in your `workspace.yaml`; use `load_from` instead. See the Workspaces Overview for more information.

Breaking API Changes
- `SolidExecutionResult.compute_output_event_dict` has been renamed to `SolidExecutionResult.compute_output_events_dict`. A solid execution result is returned from methods such as `result_for_solid`. Any call sites will need to be updated.
- The `.compute` suffix is no longer applied to step keys. Step keys that were previously named `my_solid.compute` will now be named `my_solid`. If you are using any API method that takes a `step_selection` argument, you will need to update the step keys accordingly.
- The `pipeline_def` property has been removed from the `InitResourceContext` passed to functions decorated with `@resource`.

Helm Chart
- The schema for the `scheduler` values in the helm chart has changed. Instead of a simple toggle on/off, we now require an explicit `scheduler.type` to specify usage of the `DagsterDaemonScheduler`, `K8sScheduler`, or otherwise. If your specified `scheduler.type` has required config, these fields must be specified under `scheduler.config`.
- `snake_case` fields have been changed to `camelCase`. Please update your `values.yaml` as follows:
  - `pipeline_run` → `pipelineRun`
  - `dagster_home` → `dagsterHome`
  - `env_secrets` → `envSecrets`
  - `env_config_maps` → `envConfigMaps`
- The Helm values `celery` and `k8sRunLauncher` have now been consolidated under the Helm value `runLauncher` for simplicity. Use the field `runLauncher.type` to specify usage of the `K8sRunLauncher`, `CeleryK8sRunLauncher`, or otherwise. By default, the `K8sRunLauncher` is enabled.
- If you are using the `CeleryK8sRunLauncher`, you should explicitly enable your message broker of choice.
- `userDeployments` are now enabled by default.
- Event log messages streamed to `stdout` and `stderr` have been streamlined to be a single line per event.
Experimental support for memoization and versioning lets you execute pipelines incrementally, selecting which solids need to be rerun based on runtime criteria and versioning their outputs with configurable identifiers that capture their upstream dependencies.

To set up memoized step selection, users can provide a `MemoizableIOManager`, whose `has_output` function decides whether a given solid output needs to be computed or already exists. To execute a pipeline with memoized step selection, users can supply the `dagster/is_memoized_run` run tag to `execute_pipeline`.

To set the version on a solid or resource, users can supply the `version` field on the definition. To access the derived version for a step output, users can access the `version` field on the `OutputContext` passed to the `handle_output` and `load_input` methods of `IOManager` and the `has_output` method of `MemoizableIOManager`.
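To make the `has_output` contract concrete, here is a minimal pure-Python sketch of a memoizable IO manager backed by the filesystem. The class name and flattened method signatures are assumptions for illustration; Dagster's real `IOManager` methods receive context objects rather than raw keys:

```python
import os
import pickle
import tempfile

class FilesystemMemoizableIOManager:
    """Stores each output under a path derived from its version, so
    has_output() can report whether a versioned output already exists
    and the corresponding step can be skipped."""

    def __init__(self, base_dir):
        self.base_dir = base_dir

    def _path(self, step_key, output_name, version):
        return os.path.join(self.base_dir, step_key, f"{output_name}.{version}")

    def has_output(self, step_key, output_name, version):
        # True if this versioned output was already computed and stored.
        return os.path.exists(self._path(step_key, output_name, version))

    def handle_output(self, step_key, output_name, version, obj):
        path = self._path(step_key, output_name, version)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            pickle.dump(obj, f)

    def load_input(self, step_key, output_name, version):
        with open(self._path(step_key, output_name, version), "rb") as f:
            return pickle.load(f)

with tempfile.TemporaryDirectory() as d:
    manager = FilesystemMemoizableIOManager(d)
    assert not manager.has_output("my_solid", "result", "v1")
    manager.handle_output("my_solid", "result", "v1", [1, 2, 3])
    assert manager.has_output("my_solid", "result", "v1")   # step can now be skipped
    assert manager.load_input("my_solid", "result", "v1") == [1, 2, 3]
```

Changing the version (for example, because an upstream dependency changed) yields a different path, so the output is recomputed, which is the essence of the memoized step selection described above.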
Schedules that are executed using the new `DagsterDaemonScheduler` can now execute in any timezone by adding an `execution_timezone` parameter to the schedule. Daylight Saving Time transitions are also supported. See the Schedules Overview for more information and examples.
Helm
We've added schema validation to our Helm chart. You can now check that your values YAML file is correct by running:

```shell
helm lint helm/dagster -f helm/dagster/values.yaml
```
Added support for resource annotations throughout our Helm chart.
Added Helm deployment of the dagster daemon & daemon scheduler.
Added Helm support for configuring a compute log manager in your dagster instance.
User code deployments now include a user `ConfigMap` by default.
Changed the default liveness probe for Dagit to use `httpGet "/dagit_info"` instead of `tcpSocket:80`.
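For reference, the new probe corresponds to Kubernetes pod configuration roughly like the following (a sketch; the path comes from the text above, and port 80 is an assumption carried over from the old `tcpSocket:80` probe):

```yaml
livenessProbe:
  httpGet:
    path: /dagit_info
    port: 80
```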
Dagster-K8s [Kubernetes]
Dagster-Celery-K8s
- Added a new `dagster-docker` library with a `DockerRunLauncher` that launches each run in its own Docker container. (See Deploying with Docker docs for an example.)
- Added `create_databricks_job_solid` for creating solids that launch Databricks jobs.

```shell
# Run after migrating to 0.10.0
$ dagster instance migrate
```
This release includes several schema changes to the Dagster storages that improve performance and enable new features like sensors and run queueing. After upgrading to 0.10.0, run the `dagster instance migrate` command to migrate your instance storage to the latest schema. This will turn off any running schedules, so you will need to restart any previously running schedules after migrating the schema. Before turning them back on, you should follow the steps below to migrate to the `DagsterDaemonScheduler`.
DagsterDaemonScheduler
This release includes a new `DagsterDaemonScheduler` with improved fault tolerance and full support for timezones. We highly recommend upgrading to the new scheduler during this release. The existing schedulers, `SystemCronScheduler` and `K8sScheduler`, are deprecated and will be removed in a future release.
Instead of relying on system cron or k8s cron jobs, the `DagsterDaemonScheduler` uses the new `dagster-daemon` service to run schedules. This requires running the `dagster-daemon` service as a part of your deployment.

Refer to our deployment documentation for guides on how to set up and run the daemon process for local development, Docker, or Kubernetes deployments.
If you are currently using the SystemCronScheduler or K8sScheduler:
Stop any currently running schedules, to prevent any dangling cron jobs from being left behind. You can do this through the Dagit UI, or using the following command:
```shell
dagster schedule stop --location {repository_location_name} {schedule_name}
```
If you do not stop running schedules before changing schedulers, Dagster will throw an exception on startup due to the misconfigured running schedules.
In your `dagster.yaml` file, remove the `scheduler:` entry. If there is no `scheduler:` entry, the `DagsterDaemonScheduler` is automatically used as the default scheduler.

Start the `dagster-daemon` process. Guides can be found in our deployment documentation.
See our schedules troubleshooting guide for help if you experience any problems with the new scheduler.
If you are not using a legacy scheduler:
No migration steps are needed, but make sure you run `dagster instance migrate` as a part of upgrading to 0.10.0.
We have deprecated the intermediate storage machinery in favor of the new IO manager abstraction, which offers finer-grained control over how inputs and outputs are serialized and persisted. Check out the IO Managers Overview for more information.
We have deprecated the top level `"storage"` and `"intermediate_storage"` fields on `run_config`. If you are currently executing pipelines as follows:
```python
@pipeline
def my_pipeline():
    ...

execute_pipeline(
    my_pipeline,
    run_config={
        "intermediate_storage": {
            "filesystem": {"base_dir": ...}
        }
    },
)

execute_pipeline(
    my_pipeline,
    run_config={
        "storage": {
            "filesystem": {"base_dir": ...}
        }
    },
)
```
You should instead use the built-in IO manager `fs_io_manager`, which can be attached to your pipeline as a resource:
```python
@pipeline(
    mode_defs=[
        ModeDefinition(
            resource_defs={"io_manager": fs_io_manager}
        )
    ],
)
def my_pipeline():
    ...

execute_pipeline(
    my_pipeline,
    run_config={
        "resources": {
            "io_manager": {"config": {"base_dir": ...}}
        }
    },
)
```
There are corresponding IO managers for other intermediate storages, such as the S3- and ADLS2-based storages.

We have deprecated `IntermediateStorageDefinition` and `@intermediate_storage`.

If you have written a custom intermediate storage, you should migrate to custom IO managers defined using the `@io_manager` API. We have provided a helper method, `io_manager_from_intermediate_storage`, to help migrate your existing custom intermediate storages to IO managers.
```python
my_io_manager_def = io_manager_from_intermediate_storage(
    my_intermediate_storage_def
)

@pipeline(
    mode_defs=[
        ModeDefinition(
            resource_defs={
                "io_manager": my_io_manager_def
            }
        ),
    ],
)
def my_pipeline():
    ...
```
We have deprecated the `intermediate_storage_defs` argument to `ModeDefinition`, in favor of the new IO managers, which should be attached using the `resource_defs` argument.
`input_hydration_config` and `output_materialization_config`

Use `dagster_type_loader` instead of `input_hydration_config` and `dagster_type_materializer` instead of `output_materialization_config`.

On `DagsterType` and the type constructors in `dagster_pandas`, use the `loader` argument instead of `input_hydration_config` and the `materializer` argument instead of the `output_materialization_config` argument.
`repository` key in workspace YAML

We have removed the ability to specify a repository in your workspace using the `repository:` key. Use `load_from:` instead when specifying how to load the repositories in your workspace.
`python_environment` key in workspace YAML

The `python_environment:` key is now deprecated and will be removed in a future release.

Previously, when you wanted to load a repository location in your workspace using a different Python environment from Dagit's Python environment, you needed to use a `python_environment:` key under `load_from:` instead of the `python_file:` or `python_package:` keys. Now, you can simply customize the `executable_path` in your workspace entries without needing to change to the `python_environment:` key.
For example, the following workspace entry:
```yaml
- python_environment:
    executable_path: "/path/to/venvs/dagster-dev-3.7.6/bin/python"
    target:
      python_package:
        package_name: dagster_examples
        location_name: dagster_examples
```
should now be expressed as:
```yaml
- python_package:
    executable_path: "/path/to/venvs/dagster-dev-3.7.6/bin/python"
    package_name: dagster_examples
    location_name: dagster_examples
```
See our Workspaces Overview for more information and examples.
`config_field` property on definition classes

We have removed the property `config_field` on definition classes. Use `config_schema` instead.
We have removed the system storage abstractions, i.e. `SystemStorageDefinition` and `@system_storage` (deprecated in 0.9.0).

Please note that the intermediate storage abstraction is also deprecated and will be removed in 0.11.0. Use IO managers instead.

We have removed the `system_storage_defs` argument (deprecated in 0.9.0) to `ModeDefinition`, in favor of `intermediate_storage_defs`.

We have removed `default_system_storage_defs` (deprecated in 0.9.0).

`step_keys_to_execute`

We have removed the `step_keys_to_execute` argument to `reexecute_pipeline` and `reexecute_pipeline_iterator`, in favor of `step_selection`. This argument accepts the Dagster selection syntax, so, for example, `*solid_a+` represents `solid_a`, all of its upstream steps, and its immediate downstream steps.
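To illustrate what that selection string means, here is a toy resolver over a plain dependency map. This is a sketch of the syntax's meaning only, not Dagster's actual selection implementation:

```python
def select_steps(selection, deps):
    """Resolve a selection like "*solid_a+" against deps: {step: [upstream steps]}.

    A leading '*' adds all transitive upstream steps; a trailing '+' adds the
    immediate downstream steps of the named step.
    """
    name = selection.strip("*+")
    selected = {name}
    if selection.startswith("*"):
        # Walk the upstream edges transitively.
        stack = list(deps.get(name, []))
        while stack:
            up = stack.pop()
            if up not in selected:
                selected.add(up)
                stack.extend(deps.get(up, []))
    if selection.endswith("+"):
        # One hop downstream: steps that list `name` as an upstream dependency.
        selected |= {s for s, ups in deps.items() if name in ups}
    return selected

deps = {
    "extract": [],
    "solid_a": ["extract"],
    "transform": ["solid_a"],
    "load": ["transform"],
}
# "*solid_a+" selects solid_a, its transitive upstream, and its immediate downstream:
assert select_steps("*solid_a+", deps) == {"extract", "solid_a", "transform"}
```

Note that `load` is excluded: it is downstream of `transform`, not an immediate downstream step of `solid_a`.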
`date_partition_range`

Starting in 0.10.0, Dagster uses the pendulum library to ensure that schedules and partitions behave correctly with respect to timezones. As part of this change, the `delta` parameter to `date_partition_range` (which determined the time difference between partitions and was a `datetime.timedelta`) has been replaced by a `delta_range` parameter (which must be a string that's a valid argument to the `pendulum.period` function, such as `"days"`, `"hours"`, or `"months"`).
For example, the following partition range for a monthly partition set:
```python
date_partition_range(
    start=datetime.datetime(2018, 1, 1),
    end=datetime.datetime(2019, 1, 1),
    delta=datetime.timedelta(months=1)
)
```
should now be expressed as:
```python
date_partition_range(
    start=datetime.datetime(2018, 1, 1),
    end=datetime.datetime(2019, 1, 1),
    delta_range="months"
)
```
PartitionSetDefinition.create_schedule_definition
When you create a schedule from a partition set using `PartitionSetDefinition.create_schedule_definition`, you now must supply a `partition_selector` argument that tells the scheduler which partition to use for a given schedule time.

We have added two helper functions, `create_offset_partition_selector` and `identity_partition_selector`, that capture two common partition selectors (schedules that execute at a fixed offset from the partition times, e.g. a schedule that creates the previous day's partition each morning, and schedules that execute at the same time as the partition times).

The previous default partition selector was `last_partition`, which didn't always work as expected when using the default scheduler and has been removed in favor of the two helper partition selectors above.
For example, a schedule created from a daily partition set that fills in each partition the next day at 10AM would be created as follows:
```python
partition_set = PartitionSetDefinition(
    name="hello_world_partition_set",
    pipeline_name="hello_world_pipeline",
    partition_fn=date_partition_range(
        start=datetime.datetime(2021, 1, 1),
        delta_range="days",
        timezone="US/Central",
    ),
    run_config_fn_for_partition=my_run_config_fn,
)

schedule_definition = partition_set.create_schedule_definition(
    "daily_10am_schedule",
    "0 10 * * *",
    partition_selector=create_offset_partition_selector(
        lambda d: d.subtract(hours=10, days=1)
    ),
    execution_timezone="US/Central",
)
```
Following convention in the Helm docs, we now camel case all of our Helm values. To migrate to 0.10.0, you'll need to update your `values.yaml` with the following renames:
- `pipeline_run` → `pipelineRun`
- `dagster_home` → `dagsterHome`
- `env_secrets` → `envSecrets`
- `env_config_maps` → `envConfigMaps`
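The renames all follow the standard snake_case-to-camelCase rule, which can be sketched as:

```python
def snake_to_camel(name: str) -> str:
    # Keep the first underscore-separated part lowercase,
    # capitalize each subsequent part, and join them.
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

assert snake_to_camel("pipeline_run") == "pipelineRun"
assert snake_to_camel("dagster_home") == "dagsterHome"
assert snake_to_camel("env_secrets") == "envSecrets"
assert snake_to_camel("env_config_maps") == "envConfigMaps"
```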
`scheduler` in Helm values

When specifying the Dagster instance scheduler, rather than using a boolean field to switch between the current options of `K8sScheduler` and `DagsterDaemonScheduler`, we now require the scheduler type to be explicitly defined under `scheduler.type`. If the user-specified `scheduler.type` has required config, additional fields will need to be specified under `scheduler.config`.

`scheduler.type` and corresponding `scheduler.config` values are enforced via JSON Schema.
For example, if your Helm values previously were set like this to enable the DagsterDaemonScheduler
:
```yaml
scheduler:
  k8sEnabled: false
```
You should instead have:
```yaml
scheduler:
  type: DagsterDaemonScheduler
```
`celery` and `k8sRunLauncher` in Helm values

`celery` and `k8sRunLauncher` now live under `runLauncher.config.celeryK8sRunLauncher` and `runLauncher.config.k8sRunLauncher` respectively. Now, to enable celery, `runLauncher.type` must equal `CeleryK8sRunLauncher`. To enable the vanilla K8s run launcher, `runLauncher.type` must equal `K8sRunLauncher`.

`runLauncher.type` and corresponding `runLauncher.config` values are enforced via JSON Schema.

For example, if your Helm values previously were set like this to enable the `K8sRunLauncher`:
```yaml
celery:
  enabled: false
k8sRunLauncher:
  enabled: true
  jobNamespace: ~
  loadInclusterConfig: true
  kubeconfigFile: ~
  envConfigMaps: []
  envSecrets: []
```
You should instead have:
```yaml
runLauncher:
  type: K8sRunLauncher
  config:
    k8sRunLauncher:
      jobNamespace: ~
      loadInclusterConfig: true
      kubeconfigFile: ~
      envConfigMaps: []
      envSecrets: []
```
By default, `userDeployments` is enabled and the `runLauncher` is set to the `K8sRunLauncher`. Along with the latter change, all message brokers (e.g. `rabbitmq` and `redis`) are now disabled by default.

If you were using the `CeleryK8sRunLauncher`, one of `rabbitmq` or `redis` must now be explicitly enabled in your Helm values.
Published by catherinewu almost 4 years ago
Bugfixes
Published by catherinewu almost 4 years ago
New
Bugfixes
Published by catherinewu almost 4 years ago
New
Bugfixes
Community Contributions
Bugfixes
- Pinned the `mher/flower:0.9.5` image for the Flower pod.