zenml

ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.

Apache-2.0 License

Downloads
44.5K
Stars
3.6K


zenml - 0.56.3 Latest Release

Published by avishniakov 6 months ago

This release comes with a number of bug fixes and enhancements.

With this release you can benefit from a new Lambda Labs GPU orchestrator integration in your pipelines. Lambda Labs is a cloud provider that offers GPU instances for machine learning workloads.

In this release we have also implemented a few important security improvements to the ZenML Server, mostly around Content Security Policies. Users are also now required to provide their previous password when changing their password.
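
To make the Content-Security-Policy hardening above concrete, here is a minimal sketch, assuming nothing about ZenML's actual server code: a generic WSGI middleware that attaches a CSP header to every response. The app and policy names are illustrative only.

```python
# Illustrative sketch only -- NOT ZenML's server implementation. It shows the
# general mechanism behind a Content-Security-Policy hardening: a WSGI
# middleware that guarantees every response carries the CSP header.

def csp_middleware(app, policy="default-src 'self'"):
    """Wrap a WSGI app so every response carries a Content-Security-Policy header."""
    def wrapped(environ, start_response):
        def start_with_csp(status, headers, exc_info=None):
            # Drop any pre-existing CSP header, then attach the enforced policy.
            headers = [h for h in headers if h[0].lower() != "content-security-policy"]
            headers.append(("Content-Security-Policy", policy))
            return start_response(status, headers, exc_info)
        return app(environ, start_with_csp)
    return wrapped

# Usage with a trivial WSGI app (hypothetical example):
def hello_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = csp_middleware(hello_app)
```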

The documentation was also significantly improved, with a new AWS Cloud guide and an LLMOps guide covering various aspects of the LLM lifecycle.

🥳 Community Contributions 🥳

We'd like to give a special thanks to @christianversloot who contributed to this release by adding support for Schedule.start_time to the HyperAI orchestrator.

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.56.2...0.56.3

zenml - 0.56.2

Published by safoinme 7 months ago

This release introduces a wide array of new features, enhancements, and bug fixes, with a strong emphasis on elevating the user experience and streamlining machine
learning workflows. Most notably, you can now deploy models using Hugging Face inference endpoints thanks to an open-source community contribution of this model deployer stack component!

Note that 0.56.0 and 0.56.1 were yanked and removed from PyPI due to an issue with the
alembic versions + migration which could affect the database state. This release
fixes that issue.

This release also comes with a breaking change to the services
architecture.

Breaking Change

A significant change in this release is the migration of the Service (ZenML's technical term for deployment)
registration and deployment from local or remote environments to the ZenML server.
This change will be reflected in an upcoming tab in the dashboard which will
allow users to explore and see the deployed models in the dashboard with their live
status and metadata. This architectural shift also simplifies the model deployer
abstraction and streamlines the model deployment process for users by moving from
limited built-in steps to a more documented and flexible approach.

Important note: If you have models that you previously deployed with ZenML, you might
want to redeploy them to have them stored in the ZenML server and tracked by ZenML,
ensuring they appear in the dashboard.

Additionally, the find_model_server method now retrieves models (services) from the
ZenML server instead of local or remote deployment environments. As a result, any
usage of find_model_server will only return newly deployed models stored in the server.

It is also no longer recommended to call service functions like service.start().
Instead, use model_deployer.start_model_server(service_id), which will allow ZenML
to update the changed status of the service in the server.

Starting a service

Old syntax:

from zenml import step
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    service.start(timeout=10)

New syntax:

from zenml import step
from zenml.integrations.bentoml.model_deployers import BentoMLModelDeployer
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    model_deployer = BentoMLModelDeployer.get_active_model_deployer()
    model_deployer.start_model_server(service_id=service.service_id, timeout=10)

Enabling continuous deployment

The deploy_model method previously took a parameter that replaced an existing service
if it matched the exact same pipeline name and step name, without taking other
parameters or configurations into account. This has been superseded by a new parameter,
continuous_deployment_mode, which enables continuous deployment for the service.
With it enabled, the service is updated to the latest version if it belongs to the
same pipeline and step and is not already running; otherwise, any new deployment
with a different configuration creates a new service.

from typing import Optional

from zenml import step, get_step_context
from zenml.client import Client
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=300,  # seconds to wait for the server to start/stop
    )
    service = model_deployer.deploy_model(
        mlflow_deployment_config, continuous_deployment_mode=True
    )
    logger.info(f"The deployed service info: {model_deployer.get_model_server_info(service)}")
    return service

Major Features and Enhancements:

  • A new Huggingface Model Deployer has been introduced, allowing you to seamlessly
    deploy your Huggingface models using ZenML. (Thank you so much @dudeperf3ct for the contribution!)
  • Faster Integration and Dependency Management: ZenML now leverages the uv library,
    significantly improving the speed of integration installations and dependency management,
    resulting in a more streamlined and efficient workflow.
  • Enhanced Logging and Status Tracking: logging has been improved, providing better
    visibility into the state of your ZenML services.
  • Improved Artifact Store Isolation: ZenML now prevents unsafe operations that access
    data outside the scope of the artifact store, ensuring better isolation and security.
  • Admin Users: the notion of an admin user has been added to user accounts, and certain
    operations performed via the REST interface are now restricted to admins only.
  • Login Rate Limiting: the login API is now rate-limited to prevent abuse and protect
    the server from potential security threats.
  • LLM Template: the LLM project template is now supported in ZenML, so you can use it
    as a starting point for your pipelines.
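
To make the rate-limiting idea above concrete, here is a conceptual sketch (not ZenML's implementation) of a sliding-window limiter of the kind used to throttle a login endpoint. The class and parameter names are hypothetical.

```python
# Conceptual sketch only -- NOT ZenML's actual rate limiter. A sliding-window
# limiter that allows at most `max_attempts` login attempts per client within
# `window_seconds`.
import time
from collections import defaultdict, deque
from typing import Optional

class LoginRateLimiter:
    def __init__(self, max_attempts: int, window_seconds: float):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self._attempts = defaultdict(deque)  # client_id -> timestamps of allowed attempts

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        """Return True if the client may attempt a login right now."""
        now = time.monotonic() if now is None else now
        attempts = self._attempts[client_id]
        # Drop attempts that have fallen out of the sliding window.
        while attempts and now - attempts[0] > self.window:
            attempts.popleft()
        if len(attempts) >= self.max_attempts:
            return False
        attempts.append(now)
        return True

limiter = LoginRateLimiter(max_attempts=3, window_seconds=60.0)
results = [limiter.allow("1.2.3.4", now=float(t)) for t in range(4)]
print(results)  # [True, True, True, False]
```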

🥳 Community Contributions 🥳

We'd like to give a special thanks to @dudeperf3ct, who contributed to this release
by introducing the Huggingface Model Deployer. We'd also like to thank @moesio-f
for their contribution of a new attribute to the Kaniko image builder.
Additionally, we'd like to thank @christianversloot for his contributions to this release.

What's Changed

New Contributors

Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.5...0.56.2

zenml - 0.56.1 [YANKED]

Published by avishniakov 7 months ago

[NOTICE] This version introduced a services class that causes a bug for users migrating from older versions. 0.56.3 will be released shortly in place of this release. For now, this release has been yanked.

This is a patch release aiming to solve a dependency problem that was brought in with the new rate-limiting functionality. With 0.56.1 you no longer need starlette to run client code or to run ZenML CLI commands.

🥳 Community Contributions 🥳

We'd like to thank @christianversloot for his contribution to this release.

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.56.0...0.56.1

zenml - 0.56.0 [YANKED]

Published by safoinme 7 months ago

[NOTICE] This version introduced a services class that causes a bug for users migrating from older versions. 0.56.3 will be released shortly in place of this release. For now, this release has been yanked.

ZenML 0.56.0 introduces a wide array of new features, enhancements, and bug fixes,
with a strong emphasis on elevating the user experience and streamlining machine
learning workflows. Most notably, you can now deploy models using Hugging Face inference endpoints thanks to an open-source community contribution of this model deployer stack component!

This release also comes with a breaking change to the services
architecture.

Breaking Change

A significant change in this release is the migration of the Service (ZenML's technical term for deployment)
registration and deployment from local or remote environments to the ZenML server.
This change will be reflected in an upcoming tab in the dashboard which will
allow users to explore and see the deployed models in the dashboard with their live
status and metadata. This architectural shift also simplifies the model deployer
abstraction and streamlines the model deployment process for users by moving from
limited built-in steps to a more documented and flexible approach.

Important note: If you have models that you previously deployed with ZenML, you might
want to redeploy them to have them stored in the ZenML server and tracked by ZenML,
ensuring they appear in the dashboard.

Additionally, the find_model_server method now retrieves models (services) from the
ZenML server instead of local or remote deployment environments. As a result, any
usage of find_model_server will only return newly deployed models stored in the server.

It is also no longer recommended to call service functions like service.start().
Instead, use model_deployer.start_model_server(service_id), which will allow ZenML
to update the changed status of the service in the server.

Starting a service

Old syntax:

from zenml import step
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    service.start(timeout=10)

New syntax:

from zenml import step
from zenml.integrations.bentoml.model_deployers import BentoMLModelDeployer
from zenml.integrations.bentoml.services.bentoml_deployment import BentoMLDeploymentService

@step
def predictor(
    service: BentoMLDeploymentService,
) -> None:
    # starting the service
    model_deployer = BentoMLModelDeployer.get_active_model_deployer()
    model_deployer.start_model_server(service_id=service.service_id, timeout=10)

Enabling continuous deployment

The deploy_model method previously took a parameter that replaced an existing service
if it matched the exact same pipeline name and step name, without taking other
parameters or configurations into account. This has been superseded by a new parameter,
continuous_deployment_mode, which enables continuous deployment for the service.
With it enabled, the service is updated to the latest version if it belongs to the
same pipeline and step and is not already running; otherwise, any new deployment
with a different configuration creates a new service.

from typing import Optional

from zenml import step, get_step_context
from zenml.client import Client
from zenml.integrations.mlflow.services.mlflow_deployment import (
    MLFlowDeploymentConfig,
    MLFlowDeploymentService,
)
from zenml.logger import get_logger

logger = get_logger(__name__)

@step
def deploy_model() -> Optional[MLFlowDeploymentService]:
    # Deploy a model using the MLflow Model Deployer
    zenml_client = Client()
    model_deployer = zenml_client.active_stack.model_deployer
    mlflow_deployment_config = MLFlowDeploymentConfig(
        name="mlflow-model-deployment-example",
        description="An example of deploying a model using the MLflow Model Deployer",
        pipeline_name=get_step_context().pipeline_name,
        pipeline_step_name=get_step_context().step_name,
        model_uri="runs:/<run_id>/model",  # or "models:/<model_name>/<model_version>"
        model_name="model",
        workers=1,
        mlserver=False,
        timeout=300,  # seconds to wait for the server to start/stop
    )
    service = model_deployer.deploy_model(
        mlflow_deployment_config, continuous_deployment_mode=True
    )
    logger.info(f"The deployed service info: {model_deployer.get_model_server_info(service)}")
    return service

Major Features and Enhancements:

  • A new Huggingface Model Deployer has been introduced, allowing you to seamlessly
    deploy your Huggingface models using ZenML. (Thank you so much @dudeperf3ct for the contribution!)
  • Faster Integration and Dependency Management: ZenML now leverages the uv library,
    significantly improving the speed of integration installations and dependency management,
    resulting in a more streamlined and efficient workflow.
  • Enhanced Logging and Status Tracking: logging has been improved, providing better
    visibility into the state of your ZenML services.
  • Improved Artifact Store Isolation: ZenML now prevents unsafe operations that access
    data outside the scope of the artifact store, ensuring better isolation and security.
  • Admin Users: the notion of an admin user has been added to user accounts, and certain
    operations performed via the REST interface are now restricted to admins only.
  • Login Rate Limiting: the login API is now rate-limited to prevent abuse and protect
    the server from potential security threats.
  • LLM Template: the LLM project template is now supported in ZenML, so you can use it
    as a starting point for your pipelines.

🥳 Community Contributions 🥳

We'd like to give a special thanks to @dudeperf3ct, who contributed to this release
by introducing the Huggingface Model Deployer. We'd also like to thank @moesio-f
for their contribution of a new attribute to the Kaniko image builder.
Additionally, we'd like to thank @christianversloot for his contributions to this release.

All changes:

New Contributors

Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.5...0.56.0

zenml - 0.55.5

Published by avishniakov 8 months ago

This patch contains a number of bug fixes and security improvements.

We improved the isolation of artifact stores so that various artifacts cannot be stored or accessed outside of the configured artifact store scope. Such unsafe operations are no longer allowed. This may have an impact on existing codebases if you have used unsafe file operations in the past.

To illustrate such a side effect, suppose a remote S3 artifact store is configured for the path s3://some_bucket/some_sub_folder and your code calls artifact_store.open("s3://some_bucket/some_other_folder/dummy.txt", "w"). This operation is now considered unsafe because it accesses data outside the scope of the artifact store. If you really need this behavior, consider switching to s3fs or a similar library for such cases.
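
The scope check can be pictured with a small sketch. This is illustrative only, not ZenML's actual code; the function name is hypothetical:

```python
# Illustrative sketch only -- NOT ZenML's implementation. The kind of
# path-scope check that makes out-of-scope artifact store accesses fail.

def in_artifact_store_scope(path: str, root: str) -> bool:
    """Return True if `path` lies inside the artifact store root."""
    # Normalize the root so that "some_sub_folder_evil" does not pass as
    # a prefix match for "some_sub_folder".
    root_prefix = root.rstrip("/") + "/"
    return path == root.rstrip("/") or path.startswith(root_prefix)

root = "s3://some_bucket/some_sub_folder"
print(in_artifact_store_scope("s3://some_bucket/some_sub_folder/data.txt", root))    # True
print(in_artifact_store_scope("s3://some_bucket/some_other_folder/dummy.txt", root)) # False
```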

Also with this release, the server global configuration is no longer stored on the server file system to prevent exposure of sensitive information.

User entities are now uniquely constrained to prevent the creation of duplicate users under certain race conditions.
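
The uniqueness guarantee can be sketched in miniature. This illustrates the mechanism only, not ZenML's actual schema: with a UNIQUE constraint at the database level, the second of two racing inserts fails instead of silently creating a duplicate user.

```python
# Sketch of the mechanism only -- NOT ZenML's schema. A UNIQUE constraint
# makes the database itself reject a duplicate user created by a race.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT NOT NULL UNIQUE)")
conn.execute("INSERT INTO user (name) VALUES ('alice')")

try:
    # The "racing" second request tries to create the same user.
    conn.execute("INSERT INTO user (name) VALUES ('alice')")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)  # duplicate rejected: UNIQUE constraint failed: user.name
```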

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.4...0.55.5

zenml - 0.55.4

Published by strickvl 8 months ago

This release brings a host of enhancements and fixes across the board, including
significant improvements to our services logging and status, the integration of
model saving to the registry via CLI methods, and more robust handling of
parallel pipelines and database entities. We've also made strides in optimizing
MLflow interactions, enhancing our documentation, and ensuring our CI processes
are more robust.

Additionally, we've tackled several bug fixes and performance improvements,
making our platform even more reliable and user-friendly.

We'd like to give a special thanks to @christianversloot and @francoisserra for
their contributions.

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.3...0.55.4

zenml - 0.55.3

Published by strickvl 8 months ago

This patch comes with a variety of bug fixes and documentation updates.

With this release you can now download files directly from artifact versions
that you get back from the client without the need to materialize them. If you
would like to bypass materialization entirely and just download the data or
files associated with a particular artifact version, you can use the
download_files method:

from zenml.client import Client

client = Client()
artifact = client.get_artifact_version(name_id_or_prefix="iris_dataset")
artifact.download_files("path/to/save.zip")

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.2...0.55.3

zenml - 0.55.2

Published by bcdurak 9 months ago

This patch comes with a variety of new features, bug-fixes, and documentation updates.

Some of the most important changes include:

  • The ability to add tags to outputs through the step context
  • Allowing the secret stores to utilize the implicit authentication method of AWS/GCP/Azure Service Connectors
  • Lazy loading client methods in a pipeline context
  • Updates on the Vertex orchestrator to switch to the native VertexAI scheduler
  • The new HyperAI integration featuring a new orchestrator and service connector
  • Bumping the mlflow version to 2.10.0

We'd like to give a special thanks to @christianversloot and @francoisserra for their contributions.

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.1...0.55.2

zenml - 0.55.1

Published by avishniakov 9 months ago

If you are actively using the Model Control Plane features, we suggest that you directly upgrade to 0.55.1, bypassing 0.55.0.

This is a patch release bringing backward compatibility for breaking changes introduced in 0.55.0, so that appropriate migration actions can be performed at the desired pace. Please refer to the 0.55.0 release notes for specific information on breaking changes and how to update your code to align with the new way of doing things. We also have updated our documentation to serve you better and introduced PipelineNamespace models in our API.

Also, this release adds database recovery in case an upgrade fails to migrate the database to a newer version of ZenML.

What's Changed

New Contributors

Full Changelog: https://github.com/zenml-io/zenml/compare/0.55.0...0.55.1

zenml - 0.43.1

Published by strickvl 9 months ago

Backports some important fixes that have been introduced in more recent versions
of ZenML to the 0.43.x release line.

Full Changelog: https://github.com/zenml-io/zenml/compare/0.43.0...0.43.1

zenml - 0.42.2

Published by strickvl 9 months ago

Backports some important fixes that have been introduced in more recent versions
of ZenML to the 0.42.x release line.

Full Changelog: https://github.com/zenml-io/zenml/compare/0.42.1...0.42.2

zenml - 0.55.0

Published by safoinme 9 months ago

This release comes with a range of new features, bug fixes and documentation updates. The most notable changes are the ability to lazily load Artifacts (and their metadata) and Models (and their metadata) inside pipeline code using the pipeline context object, and the ability to link Artifacts to Model Versions implicitly via the save_artifact function.

Additionally, we've updated the documentation to include a new starter guide on how to manage artifacts, and a new production guide that walks you through how to configure your pipelines to run in production.

Breaking Change

The ModelVersion concept was renamed to Model going forward, which affects code bases using the Model Control Plane feature. This change is not backward compatible.

Pipeline decorator

@pipeline(model_version=ModelVersion(...)) -> @pipeline(model=Model(...))

Old syntax:

from zenml import pipeline, ModelVersion

@pipeline(model_version=ModelVersion(name="model_name",version="v42"))
def p():
  ...

New syntax:

from zenml import pipeline, Model

@pipeline(model=Model(name="model_name",version="v42"))
def p():
  ...

Step decorator

@step(model_version=ModelVersion(...)) -> @step(model=Model(...))

Old syntax:

from zenml import step, ModelVersion

@step(model_version=ModelVersion(name="model_name",version="v42"))
def s():
  ...

New syntax:

from zenml import step, Model

@step(model=Model(name="model_name",version="v42"))
def s():
  ...

Acquiring model configuration from pipeline/step context

Old syntax:

from zenml import pipeline, step, ModelVersion, get_step_context, get_pipeline_context

@pipeline(model_version=ModelVersion(name="model_name",version="v42"))
def p():
  model_version = get_pipeline_context().model_version
  ...

@step(model_version=ModelVersion(name="model_name",version="v42"))
def s():
  model_version = get_step_context().model_version
  ...

New syntax:

from zenml import pipeline, step, Model, get_step_context, get_pipeline_context

@pipeline(model=Model(name="model_name",version="v42"))
def p():
  model = get_pipeline_context().model
  ...

@step(model=Model(name="model_name",version="v42"))
def s():
  model = get_step_context().model
  ...

Usage of model configuration inside pipeline YAML config file

Old syntax:

model_version:
  name: model_name
  version: v42
  ...

New syntax:

model:
  name: model_name
  version: v42
  ...

ModelVersion.metadata -> Model.run_metadata

Old syntax:

from zenml import ModelVersion

def s():
  model_version = ModelVersion(name="model_name",version="production")
  some_metadata = model_version.metadata["some_metadata"].value
  ... 

New syntax:

from zenml import Model

def s():
  model = Model(name="model_name",version="production")
  some_metadata = model.run_metadata["some_metadata"].value
  ... 

Those using the older syntax are requested to update their code accordingly.

Full set of changes are highlighted here: https://github.com/zenml-io/zenml/pull/2267

What's Changed

New Contributors

Full Changelog: https://github.com/zenml-io/zenml/compare/0.54.1...0.55.0

zenml - 0.54.1

Published by safoinme 9 months ago

Release 0.54.1 includes a mix of updates, new additions, and bug fixes. The most notable changes are the new production guide,
support for multiple VMs in the Skypilot orchestrator, which lets you configure a step to run on a specific VM or run the entire pipeline on a single VM, and some improvements to the Model Control Plane.

What's Changed

New Contributors

Full Changelog: https://github.com/zenml-io/zenml/compare/0.54.0...0.54.1

zenml - 0.54.0

Published by strickvl 9 months ago

This release brings a range of new features, bug fixes and documentation
updates. The Model Control Plane has received a number of small bugfixes and
improvements, notably the ability to change model and model version names.

We've also added a whole new starter guide that walks you through
how to get started with ZenML, from creating your first pipeline to fetching
objects once your pipelines have run and much more. Be sure to check it out if
you're new to ZenML!

Speaking of documentation improvements, the Model Control Plane now has its own
dedicated documentation section introducing the concepts and features of the
Model Control Plane.

As always, this release comes with a number of bug fixes, docs additions and
smaller improvements to our internal processes.

Breaking Change

This PR introduces breaking changes in the areas of the REST API concerning secrets and tags. As a consequence, the ZenML Client running the previous ZenML version is no longer compatible with a ZenML Server running the new version and vice-versa. To address this, simply ensure that all your ZenML clients use the same version as the server(s) they connect to.

🥳 Community Contributions 🥳

We'd like to give a special thanks to @christianversloot for two PRs he
contributed to this release. One of them fixes a bug that prevented ZenML from
running on Windows and the other one adds a new materializer for the Polars library.

Also many thanks to @sean-hickey-wf for his contribution of an improvement to
the Slack Alerter stack component, which allows you to define custom blocks
for the Slack message.

What's Changed

New Contributors

Full Changelog: https://github.com/zenml-io/zenml/compare/0.53.1...0.54.0

zenml - 0.53.1

Published by stefannica 10 months ago

This minor release contains a hotfix for a bug introduced in 0.53.0 where the
secrets manager flavors were not properly removed from the database.

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.53.0...0.53.1

zenml - 0.53.0

Published by avishniakov 10 months ago

This release is packed with a deeply reworked quickstart example and starter template, the removal of the Secrets Manager stack component, an improved experience with Cloud Secret Stores, support for tags and metadata directly in Model Versions, some breaking changes to the Model Control Plane, and a few bugfixes.

New features

Try out the new quickstart and starter template featuring the Model Control Plane

Maybe it slipped you by, but ZenML has a brand new feature: the Model Control Plane (MCP)! The MCP allows users to register their Models directly with their ZenML server and associate metadata, runs, and artifacts with them. This gives users a model-centric view into ZenML, rather than a pipeline-centric one. Try it out now with the quickstart:

pip install "zenml[server,templates]"
mkdir zenml_starter_template
cd zenml_starter_template
copier copy --trust -r 2023.12.18 https://github.com/zenml-io/template-starter.git .

# or `zenml init --template starter`

# Now either read the README or run quickstart.ipynb

Alternatively, just clone the repo and run the quickstart.

Please note that while frontend features are Cloud Only, OSS users can still use the MCP via the CLI and Python SDK.

Breaking changes

Secret Manager stack components sunset

Upon upgrading, all Secrets Manager stack components will be removed from the Stacks that still contain them and from the database. This also implies that access to any remaining secrets managed through Secrets Manager stack components will be lost. If you still have secrets configured and managed through Secrets Manager stack components, please consider migrating all your existing secrets to the centralized secrets store before upgrading by means of the zenml secrets-manager secret migrate CLI command. Also see the zenml secret --help command for more information.

Renaming "endpoints" to "deployments" in Model Control Plane

This is just a renaming to provide better alignment with industry standards. However, it will affect:

  • ArtifactConfig(..., is_endpoint_artifact=True) now is ArtifactConfig(..., is_deployment_artifact=True)
  • CLI command zenml model endpoint_artifacts ... now is zenml model deployment_artifacts ...
  • Client().list_model_version_artifact_links(..., only_endpoint_artifacts=True) now is Client().list_model_version_artifact_links(..., only_deployment_artifacts=True)
  • ModelVersion(...).get_endpoint_artifact(...) now is ModelVersion(...).get_deployment_artifact(...)

Major bugfixes

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.52.0...0.53.0

zenml - 0.52.0

Published by stefannica 10 months ago

This release adds the ability to pass in pipeline parameters as YAML configuration and fixes a couple of minor issues affecting the W&B integration and the way expiring credentials are refreshed when service connectors are used.

Breaking Change

The current pipeline YAML configurations are now validated to ensure that configured parameters match what is available in the code. This means that if a pipeline is configured with a parameter whose value differs from the one provided in code, the pipeline will fail to run. This is a breaking change, but a good one: it helps you catch errors early on.

This is an example of a pipeline configuration that will fail to run:

# config.yaml
parameters:
    some_param: 24

steps:
  my_step:
    parameters:
      input_2: 42

# run.py
@step
def my_step(input_1: int, input_2: int) -> None:
    pass

@pipeline
def my_pipeline(some_param: int):
    # here an error will be raised since `input_2` is
    # `42` in config, but `43` was provided in the code
    my_step(input_1=42, input_2=43)

if __name__ == "__main__":
    # here an error will be raised since `some_param` is
    # `24` in config, but `23` was provided in the code
    my_pipeline(23)
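
The validation above can be reduced to a minimal sketch. The helper below is hypothetical, not ZenML's actual implementation; it only illustrates the check: a parameter pinned in the YAML config must match the value passed in code, otherwise the run fails early.

```python
# Minimal sketch of the idea -- NOT ZenML's actual validation code.

def validate_parameters(configured: dict, provided: dict) -> None:
    """Raise if a parameter has conflicting values in config and code."""
    for name, config_value in configured.items():
        if name in provided and provided[name] != config_value:
            raise ValueError(
                f"Parameter '{name}' is {config_value!r} in the YAML config "
                f"but {provided[name]!r} was passed in code."
            )

# The failing example above, reduced to this check:
try:
    validate_parameters(configured={"some_param": 24}, provided={"some_param": 23})
except ValueError as e:
    print(e)  # Parameter 'some_param' is 24 in the YAML config but 23 was passed in code.
```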

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.51.0...0.52.0

zenml - 0.51.0

Published by htahir1 10 months ago

This release comes with a breaking change to the model version model, a new use-case example for NLP, and a range of bug fixes and enhancements to the artifact management and pipeline run management features.

Breaking Change

New Example

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.50.0...0.51.0

zenml - 0.50.0

Published by safoinme 11 months ago

In this release, we introduce key updates aimed at improving user experience and security.
The ModelConfig object has been renamed to ModelVersion for a more intuitive interface.
Additionally, the release features enhancements such as optimized model hydration for better performance,
alongside a range of bug fixes and contributions from both new and returning community members.

Breaking Change

  • We have renamed the ModelConfig object to ModelVersion, along with other related changes to the model control plane.
    The goal is a simplified user-interface experience: once a ModelVersion is configured in a
    pipeline or step, it travels into all other user-facing places (step context, client, etc.). by @avishniakov in #2044
  • Introduced RBAC for server endpoints, ensuring users have appropriate permissions for actions on resources.
    Additionally, it improves data handling by dehydrating response models to redact inaccessible information, while
    service accounts retain full permissions due to current database constraints. by @schustmi in #1999
  • We have completely reworked our API models. While the request models are mostly the same, with the new hydration logic most of
    our response models now feature a body and a metadata field, which allow us to control the responses of our API and optimize the flow for
    anyone using it. by @bcdurak in #1971
  • We also worked on adding a new "Artifacts" tab to our dashboard. With these new changes, it will become much easier to understand
    and adjust how ZenML versions your data. Moreover, by using "ExternalArtifacts", you will be able to save and load artifacts manually and
    use them in your pipelines. by @fa9r in #1943

Enhancements

  • Improve alembic migration safety by @fa9r in #2073
  • Model Link Filtering by Artifact / Run Name by @fa9r in #2074

Bug Fixes

  • Fix tag<>resource ID generator to fix the issue of manipulating migrated tags properly #2056
  • Fixes for k3d deployments via mlstacks using the ZenML CLI wrapper #2059
  • Fix some filter options for pipeline runs by @schustmi #2078
  • Fix Label Studio image annotation example by @strickvl #2010
  • Alembic migration fix for databases with scheduled pipelines with 2+ runs by @bcdurak #2072
  • Model version endpoint fixes by @schustmi in #2060

ZenML Helm Chart Changes

  • Make helm chart more robust to accidental secret deletions by @stefannica in #2053
  • Separate helm hook resources from regular resources by @stefannica in #2055

Other Changes

New Contributors

Full Changelog: https://github.com/zenml-io/zenml/compare/0.47.0...0.50.0

zenml - 0.47.0

Published by stefannica 11 months ago

This release fixes a bug that was introduced in 0.46.1 where the default user
was made inaccessible and was inadvertently duplicated. This release rescues
the original user and renames the duplicate.

What's Changed

Full Changelog: https://github.com/zenml-io/zenml/compare/0.46.1...0.47.0