ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
APACHE-2.0 License
This release comes with a number of bug fixes and enhancements.
With this release you can benefit from the new Lambda Labs GPU orchestrator integration in your pipelines. Lambda Labs is a cloud provider that offers GPU instances for machine learning workloads.
This release also implements a few important security improvements to the ZenML Server, mostly around Content Security Policies. In addition, users are now required to provide their previous password during the password change process.
The documentation was also significantly improved with the new AWS Cloud guide and the LLMOps guide, which covers various aspects of the LLM lifecycle.
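For context on the Content Security Policy improvements mentioned above: a CSP is delivered as an HTTP response header that restricts where a page may load resources from. A minimal generic illustration in Python (this is not ZenML's actual policy):

```python
# Build a minimal Content-Security-Policy header value.
# Illustrative only -- ZenML's real policy is more involved.
directives = [
    "default-src 'self'",      # only load resources from our own origin
    "script-src 'self'",       # no inline or third-party scripts
    "frame-ancestors 'none'",  # disallow embedding in iframes
]
csp_header = {"Content-Security-Policy": "; ".join(directives)}
print(csp_header["Content-Security-Policy"])
```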
We'd like to give a special thanks to @christianversloot who contributed to this release by adding support for Schedule.start_time to the HyperAI orchestrator.
pyyaml_include version for copier by @bcdurak in https://github.com/zenml-io/zenml/pull/2586
Add start_time support to HyperAI orchestrator scheduled pipelines by @christianversloot in https://github.com/zenml-io/zenml/pull/2572
Make secure an optional import by @stefannica in https://github.com/zenml-io/zenml/pull/2592
0.56.2 by @safoinme in https://github.com/zenml-io/zenml/pull/2565
Fix f prefix being needed for template strings by @strickvl in https://github.com/zenml-io/zenml/pull/2593
zenml go by @strickvl in https://github.com/zenml-io/zenml/pull/2571
Full Changelog: https://github.com/zenml-io/zenml/compare/0.56.2...0.56.3
Published by safoinme 7 months ago
This release introduces a wide array of new features, enhancements, and bug fixes, with a strong emphasis on elevating the user experience and streamlining machine
learning workflows. Most notably, you can now deploy models using Hugging Face inference endpoints thanks to an open-source community contribution of this model deployer stack component!
Note that 0.56.0 and 0.56.1 were yanked and removed from PyPI due to an issue with the
alembic versions + migration which could affect the database state. This release
fixes that issue.
This release also comes with a breaking change to the services
architecture.
A significant change in this release is the migration of the Service
(ZenML's technical term for deployment)
registration and deployment from local or remote environments to the ZenML server.
This change will be reflected in an upcoming dashboard tab, which will allow users to explore deployed models with their live
status and metadata. This architectural shift also simplifies the model deployer
abstraction and streamlines the model deployment process for users by moving from
limited built-in steps to a more documented and flexible approach.
Important note: If you have models that you previously deployed with ZenML, you might
want to redeploy them to have them stored in the ZenML server and tracked by ZenML,
ensuring they appear in the dashboard.
Additionally, the find_model_server method now retrieves models (services) from the ZenML server instead of local or remote deployment environments. As a result, any usage of find_model_server will only return newly deployed models stored in the server.
It is also no longer recommended to call service functions like service.start(). Instead, use model_deployer.start_model_server(service_id), which will allow ZenML to update the changed status of the service in the server.
Old syntax:
from zenml import pipeline, step
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    service.start(timeout=10)
New syntax:
from zenml import pipeline, step
from zenml.integrations.bentoml.model_deployers import BentoMLModelDeployer
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service through the model deployer, so that ZenML
    # can track the status change in the server
    model_deployer = BentoMLModelDeployer.get_active_model_deployer()
    model_deployer.start_model_server(service_id=service.service_id, timeout=10)
The parameter previously used in the deploy_model method to replace an existing service (when it matched the exact same pipeline name and step name, without taking other parameters or configurations into account) has been superseded by a new parameter, continuous_deployment_mode, which allows you to enable continuous deployment for the service. This ensures that the service is updated to the latest version if it belongs to the same pipeline and step and is not already running. Otherwise, any new deployment with a different configuration will create a new service.
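As an illustration of that decision logic, here is a minimal plain-Python sketch (the ServiceRecord type and resolve_service function are hypothetical, not part of ZenML's API):

```python
from dataclasses import dataclass

@dataclass
class ServiceRecord:
    """Hypothetical stand-in for a registered deployment service."""
    pipeline_name: str
    step_name: str
    running: bool

def resolve_service(existing, pipeline_name, step_name):
    """Sketch of continuous deployment mode: update the matching
    service for the same pipeline and step when it is not already
    running; otherwise create a new service."""
    for svc in existing:
        if (svc.pipeline_name == pipeline_name
                and svc.step_name == step_name
                and not svc.running):
            return "update", svc
    return "create", None

services = [ServiceRecord("train_pipeline", "deploy_model", running=False)]
action, _ = resolve_service(services, "train_pipeline", "deploy_model")
print(action)  # update
```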
from typing import Optional

from zenml import pipeline, step, get_step_context
from zenml.client import Client
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT,  # constant provided by ZenML
    )
    service = model_deployer.deploy_model(
        mlflow_deployment_config, continuous_deployment_mode=True
    )
    logger.info(f"The deployed service info: {model_deployer.get_model_server_info(service)}")
    return service
A Huggingface Model Deployer has been introduced, allowing you to seamlessly deploy models using Hugging Face inference endpoints. We'd like to give a special thanks to @dudeperf3ct, who contributed to this release by introducing the Huggingface Model Deployer. We'd also like to thank @moesio-f for their contribution to this release by adding a new attribute to the Kaniko image builder.
Additionally, we'd like to thank @christianversloot for his contributions to this release.
Use uv to speed up pip installs & the CI in general by @strickvl in https://github.com/zenml-io/zenml/pull/2442
Add pod_running_timeout attribute to Kaniko image builder by @moesio-f in https://github.com/zenml-io/zenml/pull/2509
Use fsspec for Huggingface integration by @avishniakov in https://github.com/zenml-io/zenml/pull/2542
Update pip check command to use uv by @strickvl in https://github.com/zenml-io/zenml/pull/2520
Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.5...0.56.2
Published by avishniakov 7 months ago
This is a patch release aiming to solve a dependency problem that was brought in with the new rate-limiting functionality. With 0.56.1 you no longer need starlette
to run client code or to run ZenML CLI commands.
We'd like to thank @christianversloot for his contribution to this release.
Full Changelog: https://github.com/zenml-io/zenml/compare/0.56.0...0.56.1
Published by safoinme 7 months ago
ZenML 0.56.0 introduces a wide array of new features, enhancements, and bug fixes, with a strong emphasis on elevating the user experience and streamlining machine learning workflows. Most notably, you can now deploy models using Hugging Face inference endpoints thanks to an open-source community contribution of this model deployer stack component!
This release also comes with a breaking change to the services
architecture.
A significant change in this release is the migration of the Service
(ZenML's technical term for deployment)
registration and deployment from local or remote environments to the ZenML server.
This change will be reflected in an upcoming dashboard tab, which will allow users to explore deployed models with their live
status and metadata. This architectural shift also simplifies the model deployer
abstraction and streamlines the model deployment process for users by moving from
limited built-in steps to a more documented and flexible approach.
Important note: If you have models that you previously deployed with ZenML, you might
want to redeploy them to have them stored in the ZenML server and tracked by ZenML,
ensuring they appear in the dashboard.
Additionally, the find_model_server method now retrieves models (services) from the ZenML server instead of local or remote deployment environments. As a result, any usage of find_model_server will only return newly deployed models stored in the server.
It is also no longer recommended to call service functions like service.start(). Instead, use model_deployer.start_model_server(service_id), which will allow ZenML to update the changed status of the service in the server.
Old syntax:
from zenml import pipeline, step
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    service.start(timeout=10)
New syntax:
from zenml import pipeline, step
from zenml.integrations.bentoml.model_deployers import BentoMLModelDeployer
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service through the model deployer, so that ZenML
    # can track the status change in the server
    model_deployer = BentoMLModelDeployer.get_active_model_deployer()
    model_deployer.start_model_server(service_id=service.service_id, timeout=10)
The parameter previously used in the deploy_model method to replace an existing service (when it matched the exact same pipeline name and step name, without taking other parameters or configurations into account) has been superseded by a new parameter, continuous_deployment_mode, which allows you to enable continuous deployment for the service. This ensures that the service is updated to the latest version if it belongs to the same pipeline and step and is not already running. Otherwise, any new deployment with a different configuration will create a new service.
from typing import Optional

from zenml import pipeline, step, get_step_context
from zenml.client import Client
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT,  # constant provided by ZenML
    )
    service = model_deployer.deploy_model(
        mlflow_deployment_config, continuous_deployment_mode=True
    )
    logger.info(f"The deployed service info: {model_deployer.get_model_server_info(service)}")
    return service
A Huggingface Model Deployer has been introduced, allowing you to seamlessly deploy models using Hugging Face inference endpoints. We'd like to give a special thanks to @dudeperf3ct, who contributed to this release by introducing the Huggingface Model Deployer. We'd also like to thank @moesio-f for their contribution to this release by adding a new attribute to the Kaniko image builder.
Additionally, we'd like to thank @christianversloot for his contributions to this release.
Use uv to speed up pip installs & the CI in general by @strickvl in https://github.com/zenml-io/zenml/pull/2442
Add pod_running_timeout attribute to Kaniko image builder by @moesio-f in https://github.com/zenml-io/zenml/pull/2509
Use fsspec for Huggingface integration by @avishniakov in https://github.com/zenml-io/zenml/pull/2542
Update pip check command to use uv by @strickvl in https://github.com/zenml-io/zenml/pull/2520
Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.5...0.56.0
Published by avishniakov 8 months ago
This patch contains a number of bug fixes and security improvements.
We improved the isolation of artifact stores so that various artifacts cannot be stored or accessed outside of the configured artifact store scope. Such unsafe operations are no longer allowed. This may have an impact on existing codebases if you have used unsafe file operations in the past.
To illustrate such a side effect, consider a remote S3 artifact store configured for the path s3://some_bucket/some_sub_folder. If your code calls artifact_store.open("s3://some_bucket/some_other_folder/dummy.txt", "w"), this operation is considered unsafe because it accesses data outside the scope of the artifact store. If you really need this to achieve your goals, consider switching to s3fs or similar libraries for such cases.
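The scope rule can be illustrated with a small prefix check (an illustrative sketch, not ZenML's actual implementation):

```python
def is_within_scope(root: str, path: str) -> bool:
    """Treat any path at or below the configured artifact store root
    as in scope; everything else is rejected as unsafe."""
    root = root.rstrip("/")
    return path == root or path.startswith(root + "/")

ROOT = "s3://some_bucket/some_sub_folder"
print(is_within_scope(ROOT, ROOT + "/dummy.txt"))                             # True
print(is_within_scope(ROOT, "s3://some_bucket/some_other_folder/dummy.txt"))  # False
```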
Also with this release, the server global configuration is no longer stored on the server file system to prevent exposure of sensitive information.
User entities are now uniquely constrained to prevent the creation of duplicate users under certain race conditions.
ruff versions by @strickvl in https://github.com/zenml-io/zenml/pull/2487
Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.4...0.55.5
Published by strickvl 8 months ago
This release brings a host of enhancements and fixes across the board, including
significant improvements to our services logging and status, the integration of
model saving to the registry via CLI methods, and more robust handling of
parallel pipelines and database entities. We've also made strides in optimizing
MLflow interactions, enhancing our documentation, and ensuring our CI processes
are more robust.
Additionally, we've tackled several bug fixes and performance improvements,
making our platform even more reliable and user-friendly.
We'd like to give a special thanks to @christianversloot and @francoisserra for
their contributions.
Add save models to registry setting of a model to CLI methods by @avishniakov in https://github.com/zenml-io/zenml/pull/2447
list_model_versions by @avishniakov in https://github.com/zenml-io/zenml/pull/2460
.coderabbit.yaml by @strickvl in https://github.com/zenml-io/zenml/pull/2470
log_model_metadata with explicit name and version by @avishniakov in https://github.com/zenml-io/zenml/pull/2465
Use uv pip compile for environment setup in CI by @strickvl in https://github.com/zenml-io/zenml/pull/2474
Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.3...0.55.4
Published by strickvl 8 months ago
This patch comes with a variety of bug fixes and documentation updates.
With this release you can now download files directly from artifact versions
that you get back from the client without the need to materialize them. If you
would like to bypass materialization entirely and just download the data or
files associated with a particular artifact version, you can use the download_files method:
from zenml.client import Client
client = Client()
artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
artifact.download_files("path/to/save.zip")
Bump google-cloud-aiplatform minimum required version to 1.34.0 by @francoisserra in https://github.com/zenml-io/zenml/pull/2428
Pin adlfs and s3fs versions by @strickvl in https://github.com/zenml-io/zenml/pull/2402
Add download_files method for ArtifactVersion by @strickvl in https://github.com/zenml-io/zenml/pull/2434
update_models and revert #2402 by @bcdurak in https://github.com/zenml-io/zenml/pull/2440
Published by bcdurak 9 months ago
This patch comes with a variety of new features, bug-fixes, and documentation updates.
Some of the most important changes include:
We'd like to give a special thanks to @christianversloot and @francoisserra for their contributions.
0.55.1 in migration testing by @avishniakov in https://github.com/zenml-io/zenml/pull/2368
Client() getters lazy loading by @avishniakov in https://github.com/zenml-io/zenml/pull/2323
is-slow-ci input from fast and slow integration testing by @strickvl in https://github.com/zenml-io/zenml/pull/2382
ExternalArtifact non-value features by @avishniakov in https://github.com/zenml-io/zenml/pull/2375
update_model decorator by @bcdurak in https://github.com/zenml-io/zenml/pull/2136
has_custom_name for legacy artifacts by @avishniakov in https://github.com/zenml-io/zenml/pull/2384
Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.1...0.55.2
Published by avishniakov 9 months ago
If you are actively using the Model Control Plane features, we suggest that you directly upgrade to 0.55.1, bypassing 0.55.0.
This is a patch release bringing backward compatibility for breaking changes introduced in 0.55.0, so that appropriate migration actions can be performed at the desired pace. Please refer to the 0.55.0 release notes for specific information on breaking changes and how to update your code to align with the new way of doing things. We also have updated our documentation to serve you better and introduced PipelineNamespace
models in our API.
This release also ships with database recovery in case an upgrade fails to migrate the database to a newer version of ZenML.
ModelVersion by @avishniakov in https://github.com/zenml-io/zenml/pull/2357
Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.0...0.55.1
Published by strickvl 9 months ago
Backports some important fixes that have been introduced in more recent versions
of ZenML to the 0.43.x release line.
Full Changelog: https://github.com/zenml-io/zenml/compare/0.43.0...0.43.1
Published by strickvl 9 months ago
Backports some important fixes that have been introduced in more recent versions
of ZenML to the 0.42.x release line.
Full Changelog: https://github.com/zenml-io/zenml/compare/0.42.1...0.42.2
Published by safoinme 9 months ago
This release comes with a range of new features, bug fixes and documentation updates. The most notable changes are the ability to lazily load Artifacts and their metadata, as well as the Model and its metadata, inside pipeline code using the pipeline context object, and the ability to link Artifacts to Model Versions implicitly via the save_artifact function.
Additionally, we've updated the documentation to include a new starter guide on how to manage artifacts, and a new production guide that walks you through how to configure your pipelines to run in production.
The ModelVersion concept was renamed to Model going forward, which affects code bases using the Model Control Plane feature. This change is not backward compatible.
@pipeline(model_version=ModelVersion(...)) -> @pipeline(model=Model(...))
Old syntax:
from zenml import pipeline, ModelVersion

@pipeline(model_version=ModelVersion(name="model_name", version="v42"))
def p():
    ...

New syntax:
from zenml import pipeline, Model

@pipeline(model=Model(name="model_name", version="v42"))
def p():
    ...

@step(model_version=ModelVersion(...)) -> @step(model=Model(...))
Old syntax:
from zenml import step, ModelVersion

@step(model_version=ModelVersion(name="model_name", version="v42"))
def s():
    ...

New syntax:
from zenml import step, Model

@step(model=Model(name="model_name", version="v42"))
def s():
    ...
Old syntax:
from zenml import pipeline, step, ModelVersion, get_step_context, get_pipeline_context

@pipeline(model_version=ModelVersion(name="model_name", version="v42"))
def p():
    model_version = get_pipeline_context().model_version
    ...

@step(model_version=ModelVersion(name="model_name", version="v42"))
def s():
    model_version = get_step_context().model_version
    ...

New syntax:
from zenml import pipeline, step, Model, get_step_context, get_pipeline_context

@pipeline(model=Model(name="model_name", version="v42"))
def p():
    model = get_pipeline_context().model
    ...

@step(model=Model(name="model_name", version="v42"))
def s():
    model = get_step_context().model
    ...
Old syntax:
model_version:
  name: model_name
  version: v42
  ...
New syntax:
model:
  name: model_name
  version: v42
  ...
ModelVersion.metadata -> Model.run_metadata
Old syntax:
from zenml import ModelVersion

def s():
    model_version = ModelVersion(name="model_name", version="production")
    some_metadata = model_version.metadata["some_metadata"].value
    ...

New syntax:
from zenml import Model

def s():
    model = Model(name="model_name", version="production")
    some_metadata = model.run_metadata["some_metadata"].value
    ...
Those using the older syntax are requested to update their code accordingly.
Full set of changes are highlighted here: https://github.com/zenml-io/zenml/pull/2267
save_artifact by @avishniakov in https://github.com/zenml-io/zenml/pull/2298
Add zenml version to migration testing scripts by @strickvl in https://github.com/zenml-io/zenml/pull/2294
Fix zenml up --blocking login by @strickvl in https://github.com/zenml-io/zenml/pull/2290
Add log_step_metadata utility function by @strickvl in https://github.com/zenml-io/zenml/pull/2322
Split skypilot flavors into different folders by @safoinme in https://github.com/zenml-io/zenml/pull/2332
Full Changelog: https://github.com/zenml-io/zenml/compare/0.54.1...0.55.0
Published by safoinme 9 months ago
Release 0.54.1 includes a mix of updates, new additions, and bug fixes. The most notable changes are the new production guide and multi-step VM support for the Skypilot orchestrator, which allows you to configure a step to run on a specific VM or run the entire pipeline on a single VM, along with some improvements to the Model Control Plane.
latest_version_id to the ModelResponse by @avishniakov in https://github.com/zenml-io/zenml/pull/2266
link_artifact from docs for MCP by @strickvl in https://github.com/zenml-io/zenml/pull/2272
sklearn versions > 1.3 by @Vishal-Padia in https://github.com/zenml-io/zenml/pull/2271
sklearn dependency to allow all versions by @strickvl in https://github.com/zenml-io/zenml/pull/2281
yamlfix script to use --no-yamlfix flag by @strickvl in https://github.com/zenml-io/zenml/pull/2280
yamlfix by @strickvl in https://github.com/zenml-io/zenml/pull/2282
teams.yml by @strickvl in https://github.com/zenml-io/zenml/pull/2283
zenml-tutorial path assignment in cli.go() by @safoinme in https://github.com/zenml-io/zenml/pull/2291
Full Changelog: https://github.com/zenml-io/zenml/compare/0.54.0...0.54.1
Published by strickvl 9 months ago
This release brings a range of new features, bug fixes and documentation
updates. The Model Control Plane has received a number of small bugfixes and
improvements, notably the ability to change model and model version names.
We've also added a whole new starter guide that walks you through
how to get started with ZenML, from creating your first pipeline to fetching
objects once your pipelines have run and much more. Be sure to check it out if
you're new to ZenML!
Speaking of documentation improvements, the Model Control Plane now has its own
dedicated documentation section introducing the concepts and features of the
Model Control Plane.
As always, this release comes with a number of bug fixes, docs additions and smaller improvements to our internal processes.
This release introduces breaking changes in the areas of the REST API concerning secrets and tags. As a consequence, a ZenML client running the previous ZenML version is no longer compatible with a ZenML server running the new version, and vice versa. To address this, simply ensure that all your ZenML clients use the same version as the server(s) they connect to.
We'd like to give a special thanks to @christianversloot for two PRs he
contributed to this release. One of them fixes a bug that prevented ZenML from
running on Windows and the other one adds a new materializer for the Polars library.
Also many thanks to @sean-hickey-wf for his contribution of an improvement to
the Slack Alerter stack component
which allows you to define custom blocks for the Slack message.
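For context on what custom blocks are: a Slack message is composed of Block Kit elements. A minimal payload might look like the following (illustrative only; the exact ZenML parameter for passing blocks to the alerter may differ):

```python
# A minimal Slack Block Kit payload sketch for a pipeline notification.
# This is plain Slack message structure, independent of ZenML's API.
custom_blocks = [
    {
        "type": "header",
        "text": {"type": "plain_text", "text": "Pipeline run finished"},
    },
    {
        "type": "section",
        "text": {"type": "mrkdwn", "text": "*Status:* :white_check_mark: success"},
    },
]
print(len(custom_blocks))  # 2
```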
Bump mlstacks version to 0.8.0 by @strickvl in https://github.com/zenml-io/zenml/pull/2196
Add mlstacks installation instructions to docs by @strickvl in https://github.com/zenml-io/zenml/pull/2228
Add hydrate flag to the client methods by @bcdurak in https://github.com/zenml-io/zenml/pull/2120
run_metadata by @bcdurak in https://github.com/zenml-io/zenml/pull/2230
Full Changelog: https://github.com/zenml-io/zenml/compare/0.53.1...0.54.0
Published by stefannica 10 months ago
This minor release contains a hot fix for a bug that was introduced in 0.53.0
where the secrets manager flavors were not removed from the database
properly. This release fixes that issue.
Full Changelog: https://github.com/zenml-io/zenml/compare/0.53.0...0.53.1
Published by avishniakov 10 months ago
This release is packed with a deeply reworked quickstart example and starter template, the removal of the secret manager stack component, an improved experience with Cloud Secret Stores, support for tags and metadata directly in Model Versions, some breaking changes to the Model Control Plane, and a few bugfixes.
Maybe it slipped you by, but ZenML has a brand new feature called the Model Control Plane (MCP)! The MCP allows users to register their Models directly on their ZenML server and associate metadata, runs, and artifacts with them. This gives users a model-centric view into ZenML, rather than a pipeline-centric one. Try it out now with the quickstart:
pip install "zenml[server,templates]"
mkdir zenml_starter_template
cd zenml_starter_template
copier copy --trust -r 2023.12.18 https://github.com/zenml-io/template-starter.git .
# or `zenml init --template starter`
# Now either read the README or run quickstart.ipynb
Alternatively, just clone the repo and run the quickstart.
Please note that while frontend features are Cloud Only, OSS users can still use the MCP via the CLI and Python SDK.
Upon upgrading, all Secrets Manager stack components will be removed from the Stacks that still contain them and from the database. This also implies that access to any remaining secrets managed through Secrets Manager stack components will be lost. If you still have secrets configured and managed through Secrets Manager stack components, please consider migrating all your existing secrets to the centralized secrets store before upgrading by means of the zenml secrets-manager secret migrate
CLI command. Also see the zenml secret --help
command for more information.
This is just a renaming to provide better alignment with industry standards, though it will affect:
ArtifactConfig(..., is_endpoint_artifact=True) now is ArtifactConfig(..., is_deployment_artifact=True)
zenml model endpoint_artifacts ... now is zenml model deployment_artifacts ...
Client().list_model_version_artifact_links(..., only_endpoint_artifacts=True) now is Client().list_model_version_artifact_links(..., only_deployment_artifacts=True)
ModelVersion(...).get_endpoint_artifact(...) now is ModelVersion(...).get_deployment_artifact(...)
Fix run_metadata value returning strings instead of other types by @avishniakov in https://github.com/zenml-io/zenml/pull/2149
Fix KubernetesSparkStepOperator import failures by @avishniakov in https://github.com/zenml-io/zenml/pull/2159
get_pipeline_context().model_version.get_artifact(...) flow by @avishniakov in https://github.com/zenml-io/zenml/pull/2162
Change importlib calls to importlib.metadata by @safoinme in https://github.com/zenml-io/zenml/pull/2160
zenml clean by @bcdurak in https://github.com/zenml-io/zenml/pull/2119
precommit by @strickvl in https://github.com/zenml-io/zenml/pull/2164
Full Changelog: https://github.com/zenml-io/zenml/compare/0.52.0...0.53.0
Published by stefannica 10 months ago
This release adds the ability to pass in pipeline parameters as YAML configuration and fixes a couple of minor issues affecting the W&B integration and the way expiring credentials are refreshed when service connectors are used.
The current pipeline YAML configurations are now being validated to ensure that configured parameters match what is available in the code. This means that if you have a pipeline configured with a parameter whose value differs from the one provided through code, the pipeline will fail to run. This is a breaking change, but a good one, as it will help you catch errors early on.
This is an example of a pipeline configuration that will fail to run:
parameters:
  some_param: 24
steps:
  my_step:
    parameters:
      input_2: 42
# run.py
from zenml import pipeline, step

@step
def my_step(input_1: int, input_2: int) -> None:
    pass

@pipeline
def my_pipeline(some_param: int):
    # here an error will be raised since `input_2` is
    # `42` in config, but `43` was provided in the code
    my_step(input_1=42, input_2=43)

if __name__ == "__main__":
    # here an error will be raised since `some_param` is
    # `24` in config, but `23` was provided in the code
    my_pipeline(23)
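The validation idea can be sketched outside ZenML (check_params is a hypothetical helper, not ZenML's actual API):

```python
def check_params(config_params: dict, code_params: dict) -> None:
    """Raise if a parameter is pinned to one value in the YAML config
    but called with a different value in code."""
    for name, code_value in code_params.items():
        config_value = config_params.get(name, code_value)
        if config_value != code_value:
            raise ValueError(
                f"Parameter {name!r} is {config_value!r} in config, "
                f"but {code_value!r} was provided in the code."
            )

check_params({"some_param": 24}, {"other_param": 1})  # passes: no conflict
try:
    check_params({"some_param": 24}, {"some_param": 23})
except ValueError as err:
    print(err)  # Parameter 'some_param' is 24 in config, but 23 was provided in the code.
```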
Full Changelog: https://github.com/zenml-io/zenml/compare/0.51.0...0.52.0
Published by htahir1 10 months ago
This release comes with a breaking change to the model version model, a new use-case example for NLP, and a range of bug fixes and enhancements to the artifact management and pipeline run management features.
mlstacks version in ZenML extra by @strickvl in https://github.com/zenml-io/zenml/pull/2091
run_metadata by @avishniakov in https://github.com/zenml-io/zenml/pull/2064
Add StepContext.inputs property by @fa9r in https://github.com/zenml-io/zenml/pull/2105
Full Changelog: https://github.com/zenml-io/zenml/compare/0.50.0...0.51.0
Published by safoinme 11 months ago
In this release, we introduce key updates aimed at improving user experience and security.
The ModelConfig
object has been renamed to ModelVersion
for a more intuitive interface.
Additionally, the release features enhancements such as optimized model hydration for better performance,
alongside a range of bug fixes and contributions from both new and returning community members.
Rename the ModelConfig object to ModelVersion, with other related changes to the Model Control Plane; once configured on a pipeline or step, it travels into all other user-facing places: step context, client, etc. by @avishniakov in #2044
Fix k3d deployments via mlstacks using the ZenML CLI wrapper #2059
Replace black with ruff format / bump mypy by @strickvl in https://github.com/zenml-io/zenml/pull/2082
bcrypt by @strickvl in https://github.com/zenml-io/zenml/pull/2083
Full Changelog: https://github.com/zenml-io/zenml/compare/0.47.0...0.50.0
Published by stefannica 11 months ago
This release fixes a bug that was introduced in 0.46.1 where the default user
was made inaccessible and was inadvertently duplicated. This release rescues
the original user and renames the duplicate.
Full Changelog: https://github.com/zenml-io/zenml/compare/0.46.1...0.47.0