concrete-ml

Concrete ML: a privacy-preserving ML framework built on top of Concrete, with bindings to traditional ML frameworks.
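
For context, here is a minimal sketch of the scikit-learn-style bindings. It follows the currently documented Concrete-ML interface; the `fhe` argument to `predict` is an assumption for the older releases listed below, which used different flags (e.g. `execute_in_fhe`).

```python
# Minimal sketch of the scikit-learn-style API, based on the currently documented
# Concrete-ML interface; older releases used different flags (e.g. execute_in_fhe)
# instead of the `fhe` argument shown here.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()     # quantized drop-in for sklearn's LogisticRegression
model.fit(X_train, y_train)      # training runs in the clear
model.compile(X_train)           # compile the inference function to an FHE circuit
y_pred = model.predict(X_test, fhe="execute")  # inference on encrypted data
```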

concrete-ml - v0.6.1

Published by bcm-at-zama almost 2 years ago

Summary

This Concrete-ML release adds support for:

  • 16-bit built-in NN models,
  • 20+ bit purely leveled (i.e., very fast) linear models, allowing them to match floating-point models in terms of accuracy.

New tutorials show how to train large neural networks, either from scratch or by transfer learning, how to convert them into FHE-friendly models, and finally how to evaluate them in FHE and with simulation. The release adds tools that leverage FHE simulation to select optimal parameters that speed up neural network inference. Python 3.10 support is included in this release.
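
As a rough illustration of this workflow, the sketch below sweeps `p_error` with FHE simulation for a built-in neural network. It assumes the currently documented Concrete-ML API (the `module__n_layers` constructor parameter, `p_error` in `compile`, and `fhe="simulate"` in `predict`); v0.6.1 exposed simulation through the Virtual Library, so the exact calls may differ for that version.

```python
# Sketch: use FHE simulation to pick a p_error that speeds up a built-in NN.
# Assumptions: current Concrete-ML API (module__n_layers, p_error in compile,
# fhe="simulate" in predict); v0.6.1 exposed simulation via the Virtual Library.
import numpy
from sklearn.datasets import make_classification

from concrete.ml.sklearn import NeuralNetClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X, y = X.astype(numpy.float32), y.astype(numpy.int64)

model = NeuralNetClassifier(module__n_layers=2, max_epochs=10, verbose=0)
model.fit(X, y)

# Larger p_error values yield faster FHE circuits; simulation estimates the
# accuracy impact without paying the cost of real FHE execution.
for p_error in (2**-40, 2**-10, 2**-5):
    model.compile(X, p_error=p_error)
    accuracy = (model.predict(X, fhe="simulate") == y).mean()
    print(f"p_error={p_error:.1e}  simulated accuracy={accuracy:.3f}")
```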

Links

Docker Image: zamafhe/concrete-ml:v0.6.1
pip: https://pypi.org/project/concrete-ml/0.6.1
Documentation: https://docs.zama.ai/concrete-ml

v0.6.1

Feature

  • Support 20+ bits linear models (4f112ca)
  • Add python 3.10 support (aede49b)
  • Add a CIFAR-10 CNN with 8-bit accumulators and show p_error search (35715e2)
  • Add tutorials for transfer learning for CIFAR-10/100. (42405c5)
  • Add CIFAR-10 VGG CNN with split clear/FHE compilation. (637c272)
  • Change the license (a52d917)
  • Add support for global_p_error (b54fcac)

Fix

  • Flaky FHE vs VirtualLib overflow (1780cd5)
  • Ensure all operations in QNNs are done in FP64 (52e87b7)
  • Raise error when model results are mismatched between Concrete-ML and VL (b7fa8c1)
  • Set specific dependency versions (f2dfc3e)
  • Flaky client server API (1495214)
  • Issues with pytest and macOS (5196c68)

Documentation

  • Add a showcase of use-cases and tutorials (36adc09)
  • Add global_p_error (b6b4d7a)
  • Add CIFAR-10/100 examples for the fine-tuning approach. (45a4f66)
  • Fully connected NN on MNIST using 16b in VL (f0be5f3)
  • Provide an image filtering demo app (fd11f25)
concrete-ml - v0.5.1

Published by fd0r almost 2 years ago

Summary

The main objective of this release is to fix issues caused by recent updates in dependencies, and to extend Python support from 3.7.14 down to 3.7.1.

Links

Docker Image: zamafhe/concrete-ml:v0.5.1
pip: https://pypi.org/project/concrete-ml/0.5.1
Documentation: https://docs.zama.ai/concrete-ml

v0.5.1

Feature

  • Extend python 3.7.14 support to 3.7.1 (eb212bf)

Fix

  • Fixing an issue with LinearRegression (ebd06b4)
concrete-ml - v0.5.0

Published by fd0r almost 2 years ago

Summary

The main objective of this release is to add python 3.7 support.

Links

Docker Image: zamafhe/concrete-ml:v0.5.0
pip: https://pypi.org/project/concrete-ml/0.5.0
Documentation: https://docs.zama.ai/concrete-ml

v0.5.0

Feature

  • Python 3.7 support (fef90d1)
  • Remove constraints in numpy_reduce_sum (89668bf)
  • Check if a network imported with import_qat=True is quantized (24e8f88)

Documentation

  • Move titanic notebook to use_case_examples and cleaned SentimentClassification notebook (14502f0)
concrete-ml - v0.4.0

Published by fd0r almost 2 years ago

Summary

This version of Concrete-ML adds more support for quantization-aware training of neural networks, adds decision-tree ensemble regressors, and includes additional linear regression models. For custom models, first-class support for neural networks trained with Brevitas quantization-aware training was added: a dedicated function imports models containing Brevitas layers directly. Design rules for these Brevitas-based networks are detailed in the documentation. Moreover, quantization-aware training is now the default for built-in neural networks, giving good accuracy out of the box with low bit-widths for weights, activations, and accumulators. Tree-based RandomForest and XGBoost regression models are now supported, while the linear regressors are complemented by the Ridge, Lasso, and ElasticNet models. Many example notebooks were added, showing how to use the new models as well as more complex use cases such as sentiment analysis, MNIST classification, and Kaggle Titanic dataset classification.
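
As an illustration of the Brevitas import path, here is a minimal sketch based on the currently documented `compile_brevitas_qat_model` helper; the exact entry point in v0.4.0 may differ, and the tiny network below is purely illustrative, not taken from the release.

```python
# Sketch: import a Brevitas quantization-aware-trained network into Concrete-ML.
# Assumptions: the currently documented compile_brevitas_qat_model helper and a
# purely illustrative 2-layer network; the v0.4.0 entry point may differ.
import brevitas.nn as qnn
import torch
from torch import nn

from concrete.ml.torch.compile import compile_brevitas_qat_model


class TinyQATNet(nn.Module):
    """Small fully connected network quantized with Brevitas layers."""

    def __init__(self, n_inputs=10, n_classes=2, n_bits=3):
        super().__init__()
        self.quant_in = qnn.QuantIdentity(bit_width=n_bits, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(n_inputs, 32, bias=True, weight_bit_width=n_bits)
        self.act = qnn.QuantReLU(bit_width=n_bits, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(32, n_classes, bias=True, weight_bit_width=n_bits)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(self.quant_in(x))))


torch_model = TinyQATNet()
# ... train torch_model with Brevitas QAT here ...

# Calibration data is used to compute the quantization parameters of the graph
calibration_data = torch.randn(100, 10)
quantized_module = compile_brevitas_qat_model(torch_model, calibration_data)
```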

Links

Docker Image: zamafhe/concrete-ml:v0.4.0
pip: https://pypi.org/project/concrete-ml/0.4.0
Documentation: https://docs.zama.ai/concrete-ml/v/0.4/

v0.4.0

Feature

  • Add a XGBoost regression tutorial (#1911) (174f6f7)
  • Add encrypted sentiment analysis demo (fd684df)
  • Make net_inputs and net_ouputs optional (5ed9476)
  • Add RandomForestRegressor (0c7853c)
  • Add XGBRegressor (9736557)
  • Add tree-regressor (5b12d53)
  • Add Ridge, Lasso and ElasticNet regression models. (675c7b3)
  • Import Quantized Brevitas ONNX graphs and upgrade QAT notebook (13d8d74)

Fix

  • Remove pygraphviz dependency (f708ba7)
  • Flaky client server (#1927) (d162cd6)
  • Tweedie overflow non-deterministic bug (922d60e)
  • XGBRegressor verifies that n_targets is 1 (a15fe9d)
  • Make linear models with fit_intercept=False possible (90a50b4)
  • Add quantize_inputs_with_net_outputs_precision to calibration process (1dd9ba3)

Documentation

  • Update p_error with api call (d214216)
  • Improve ClassifierComparison notebook (f45b79f)
  • Integration of API docs with lazydocs (985d0d5)
  • Major Revision of Inner Workings and integration of Quantization Aware Training (accfd3e)
  • Improve contribution doc (0a0534a)
  • Adding new models to the docs (081d8f9)
  • Be more precise on installation (88fed0b)
  • Improve our README (b2d81e4)
  • Decrease net_ouputs values from 8 to 5 bits in notebooks (49717bf)
  • Add comment about tqdm in linear.md (ba06e5e)
  • Update the LICENSE (30b2b27)
  • Add tqdm and remove inference slicing in titanic notebook (4419438)
concrete-ml - v0.3.0

Published by andrei-stoian-zama about 2 years ago

Summary

Concrete-ML now lets users deploy models in a client-server setting, separating encryption and decryption from execution, which can now be done by a remote machine. The release also adds support for new models and new neural network layers, and allows importing ONNX directly, thus supporting some Keras/TensorFlow models. Furthermore, this release provides initial support for importing Quantization Aware Training neural networks, which contain quantizers in the operation graph and can be built with Brevitas.
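
To illustrate the client/server flow, here is a minimal sketch using the deployment classes documented in current Concrete-ML (FHEModelDev, FHEModelClient, FHEModelServer); the class and method names available in v0.3.0 may differ, and the directory paths are placeholders.

```python
# Sketch of the client/server flow, using the deployment classes documented in
# current Concrete-ML; names in v0.3.0 may differ, and the directory paths
# below are placeholders.
from sklearn.datasets import make_classification

from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer
from concrete.ml.sklearn import LogisticRegression

X, y = make_classification(n_samples=100, n_features=8, random_state=0)
model = LogisticRegression()
model.fit(X, y)
model.compile(X)

# Developer: package the compiled model (circuit + client specs) for deployment
FHEModelDev(path_dir="deployment", model=model).save()

# Client: generate keys and encrypt an input locally
client = FHEModelClient(path_dir="deployment", key_dir="keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(X[:1])

# Server: run inference on the encrypted data, never seeing it in the clear
server = FHEModelServer(path_dir="deployment")
server.load()
encrypted_result = server.run(encrypted_input, evaluation_keys)

# Client: decrypt and de-quantize the result locally
prediction = client.deserialize_decrypt_dequantize(encrypted_result)
print(prediction)
```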

Links

Docker Image: zamafhe/concrete-ml:v0.3.0
pip: https://pypi.org/project/concrete-ml/0.3.0
Documentation: https://docs.zama.ai/concrete-ml

v0.3.0

Feature

  • Allow recompiling from onnx model (9b69e73)
  • Adding support for p_error (fe03441)
  • Adding GELU activation (c732d15)
  • Add random_state to models for client server reproducibility (6f887d0)
  • Add QAT notebook (64c4512)
  • Import Brevitas QAT networks (6c40a0c)
  • Integration of the encrypt decrypt api (3c2a68a)
  • Support more input types in predict() (76e142c)
  • Support more input types in fit() (1fa74f7)
  • Add Round and Pow operators (d6880ce)
  • Ability to import Quantization Aware Training networks (c1bb947)
  • Compile user supplied ONNX to support keras/tf (fae3dc5)
  • Implement Generalized Linear Regression models (8e8e025)
  • Add SoftSign activation (3ce338e)
  • Adding more activations (e43ce5c)
  • Implement Poisson Regression (09eefa5)
  • Use the 8b of precision of Concrete Numpy (249c712)
  • Add ONNX flatten support (c5f215f)
  • Handle more tree-based classifiers (950cc6c)
  • Add Batch Normalization ONNX operator (7969739)
  • Add Where, Greater, Mul, Sub ONNX operator support (f939149)
  • Add ONNX Average Pooling and Pad operator (40f1ef9)
  • Add more activation functions (26b2221)

Fix

  • Make tree inference faster by creating new numpy boolean operators (206caa5)
  • Set a compatible version for protobuf (97ccfc0)
  • Improve IRIS FCNN FHE accuracy and visualization (02e497c)
  • Replace init call by set_params (111419e)
  • Fix wrong fit_benchmark in linear models (f257def)
  • Fix GridSearchCV on trees (b614285)
  • Support decision tree with custom classes (baa3b4d)

Documentation

  • Major refresh of 0.3 doc (e5e3205)
  • Add sentiment classification notebook (68ae7d0)
  • Restrict hyperparameters in titanic notebook for faster inference (9b63c8a)
  • QAT explanation (9430ba6)
  • Document ONNX compilation (972d05e)
  • Explain quantized vs float ops and fusing (fb9b409)
  • Add doc for pandas support (34652ce)
  • Add notebook and docs client server api (99ff1e7)
  • Developing custom models (1c6f571)
  • Explain built-in quantized Neural Networks (972d464)
  • Add a notebook for Kaggle Titanic competition (0a44853)
concrete-ml - v0.2.1

Published by bcm-at-zama about 2 years ago

Summary

Fixes issues caused by updates to dependencies that were not pinned to fixed versions.

Links

Docker Image: zamafhe/concrete-ml:v0.2.1
pip: https://pypi.org/project/concrete-ml/0.2.1
Documentation: https://docs.zama.ai/concrete-ml

v0.2.1

Fix

  • Set a compatible version for protobuf (4dd46cf)
  • Force ONNX package version to 1.11.0 in CLM 0.2 (fa90586)

concrete-ml - v0.2.0

Published by jfrery over 2 years ago

Summary

Use Concrete Numpy 0.5.
Add multi-class classification to XGBoost.
Fix some minor broken links and issues.

Links

Docker Image: zamafhe/concrete-ml:v0.2.0
pip: https://pypi.org/project/concrete-ml/0.2.0
Documentation: https://docs.zama.ai/concrete-ml/ (old link https://docs.zama.ai/concrete-ml/0.2.0 has been moved)

v0.2.0

Breaking Changes (as compared to 0.1.x)

  • The run method is renamed to encrypt_run_decrypt following changes in Concrete-Numpy 0.5.0. Individual APIs to encrypt, run, and decrypt separately will be available in a future release of Concrete-ML.
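
As a migration sketch, the rename looks as follows; `quantized_module` and `q_input` are placeholders for a compiled quantized module and an already-quantized input taken from existing 0.1.x code.

```python
# Migration sketch from Concrete-ML 0.1.x to 0.2.0; `quantized_module` and
# `q_input` are placeholders from existing code.

# Concrete-ML 0.1.x:
# result = quantized_module.forward_fhe.run(q_input)

# Concrete-ML 0.2.0: the same call now encrypts, runs, and decrypts in one step
result = quantized_module.forward_fhe.encrypt_run_decrypt(q_input)
```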

Feature

  • Some breaking changes in Concrete Numpy API. (ecbb26e)
  • Using Concrete Numpy 0.5 (ee5987b)
  • Add multiclass xgboost (9247f35)
  • Adding a set_version_and_push command to the makefile (ab853c2)
  • Add multiclass capability to decision trees (df0a64d)

Fix

  • Fixing Pypi's Homepage button link (bf52514)

Breaking

  • The API .forward_fhe.run() has been renamed to .forward_fhe.encrypt_run_decrypt() (ecbb26e)

concrete-ml - v0.1.1

Published by bcm-at-zama over 2 years ago

Summary

Use Concrete Numpy 0.4.
Fix some minor broken links and issues.

Links

Docker Image: zamafhe/concrete-ml:v0.1.1
pip: https://pypi.org/project/concrete-ml/0.1.1
Documentation: https://docs.zama.ai/concrete-ml/0.1.1

v0.1.1

Feature

  • Add multiclass capability to decision trees (6f9651e)
  • Update to Concrete Numpy 0.4.0 and update the theme. (27839be)

Fix

  • Fixing Pypi's Homepage button link (d1c0e5d)
  • Broken links in README.md (0c47668)

concrete-ml - v0.1.0

Published by bcm-at-zama over 2 years ago

Summary

First release of Concrete-ML package

Links

Docker Image: zamafhe/concrete-ml:v0.1.0
pip: https://pypi.org/project/concrete-ml/0.1.0
Documentation: https://docs.zama.ai/concrete-ml/0.1.0

v0.1.0

Feature

  • Add tests for more torch functions that are supported, mention them in the docs (0478854)
  • Add FHE in xgboost notebook (1367d4e)
  • Make all classifier demos run in FHE for the datasets and in VL for the domain grid (d95af58)
  • Remove workaround reshape and remaining 3dmatmul (28ea1eb)
  • Change predict to predict_proba for average_precision (a057881)
  • Allow FHE on xgboost (7b5c118)
  • Add CNN notebook (4acca2f)
  • Optimize QuantizeAdd to use TLUs when one of the inputs is a constant (1ffcdfb)
  • Different n_bits for weights/activations/outputs (321d151)
  • Add virtual lib management to SklearnLinearModelMixin (596d16e)
  • Add quantized CNN. (1a78593)
  • Start refactoring tree based models (8e62cf8)
  • Set symmetric quantization by default in PTQ (8fcd307)
  • Add random forest + benchmark (5630f17)
  • Allow base_score with xgboost (17d5cc4)
  • Add predict_proba to logistic regression (9aaeec5)
  • Add xgboost (699603d)
  • Add NN regression benchmarks (9de2ba4)
  • Add symmetric quantization (needed for tree output) (4a173ee)
  • Implement LinearSVC (d048077)
  • Implement LinearSVRegression (36df77e)
  • Remove identity nodes from ONNX models (9719c08)
  • Add binary + multiclass logistic regression (85c25df)
  • Improve r2 test for low variance targets. (44ec0b3)
  • Add sklearn linear regression model (060a4c6)
  • Add virtual lib basic class (ad32509)
  • Improve NN benchmarks (ae8313e)
  • Add NN benchmarks and sklearn wrapper for FHE NNs (e73a514)
  • More efficient numpy_gemm, since traced (609f1df)
  • Integrate hummingbird (01c3a4a)
  • Add ONNX quantized implementation for MatMul and Add (716fc43)
  • Allow multiple inputs for a QuantizedModule (1fa530d)
  • Allow QuantizedModule to handle complicated NN topologies (da91e40)
  • Let's allow (alpha, beta) == (1, 0) in Gemm (4b9927a)
  • Manage constant folding in PTQ (a0c56d7)
  • Replace numpy.isclose with r2 score (65f0a6e)
  • Replace the torch quantization functions with ones usable with ONNX (ecdeb50)
  • Add test when input is float to quantized module (d58910d)
  • Let user chose its error type (e5d7440)
  • Post training quantization for ONNX repr (8b051df)
  • Adding more activations and numpy functions (73d885c)
  • Let's have relu and relu6 (f64c3bf)
  • Add quantized tanh (ca9c6e5)
  • Add classification benchmarks, fix bugs in DecisionTreeClassifier (d66d7bf)
  • Provide quantized versions of ONNX ops (b63eca2)
  • Add darglint as a plugin of flake8 (bb568e2)
  • Use ONNX as intermediate format to convert torch models to numpy (072bd63)
  • Add decision trees + update notebook (db163f5)
  • Restore quantized model benchmarks (d1cfc4e)
  • Port quantization and torch from concrete-numpy. (a525e8b)

Fix

  • Remove fixmes, add HardSigmoid (847db99)
  • Docs (8096acc)
  • Safer default parameter for ensemble methods (8da0988)
  • Increase n_bits for clear vs quantized comparison for decision tree (b9f1206)
  • Fix notebook on macOS + some warnings (ab2a821)
  • Xgboost handle the edge case where n_estimators = 1 (3673584)
  • Issues in Classifier Comparison notebook (3053085)
  • One more bug about convergence (c6cee4e)
  • Fix convergence issues in tests (7b92bd8)
  • Remove metric evaluation for n_bits < 16 (7c4bd0e)
  • Wrong xgboost init (2ed49b6)
  • Workaround while #518 is being investigated (7f521f9)
  • Looks like a mistake (69e9b15)
  • Speedup qnn tests (9d07f5c)
  • Workaround for segfaults on macOS (798662f)
  • Remove check_r2_score with argmax predictions (7d52750)
  • Review (82abb12)
  • Fully connected notebook (1f7b92e)
  • When we test determinism, it is fine if there is an issue in the underlying Concrete Numpy (6595495)
  • Change the md5, even if the licence hasn't changed (182084f)
  • Decision tree bug (84a65e4)
  • Remove gpl lib + update sphinx-zama-theme ^2.2.0 (65aa1b2)
  • Remove Hardsigmoid and Tanhshrink for a moment, since there are issues with precision (51c0bc5)
  • Remove fc comparison fhe vs quantization (1c527be)
  • Use right imports in docs (9fe43bf)
  • Change qvalues to values in quantized module and fix iris notebook mistake (11c5616)
  • Wrong fixture for a list + flaky test for decision tree + add fixture for model check is good execution (cc3c0b6)
  • Add missing docstrings (0c164f5)
  • Fix docstrings which are incomplete thanks to darglint (45d4fca)

Documentation

  • Refresh notebooks (ff771aa)
  • Update the theme (0d1e672)
  • Update simple example readme (21d9a77)
  • Readme (029237a)
  • Update compute with quantization (b836811)
  • Rewrite the developer section for Quantization, show how to work with quantized operators (436e71e)
  • Add Pruning docs (33b044f)
  • Add info on skorch (6b3ca04)
  • Adding documentation (ed9ee3f)
  • Adding documentation (c4e73ec)
  • Improve quantization explanation in the User Guide (4508282)
  • Add a summary of our results (1046cc2)
  • Write Virtual Lib documentation for release (4f68f3f)
  • Add hummingbird usage (05103b3)
  • Update docs for release (95a1669)
  • Update our project setup doc (beef6c9)
  • Update README (51ed1be)
  • Add automatic ToC to README (4d51c96)
  • Add source in docs (37227c6)
  • Small update to the docker set up instructions (833d6e4)
  • Update contributing to mention make conformance (bff86ca)
  • No need to update releasing.md (179e235)
  • Add a pruning section. (c977a32)
  • No RF or SVM dedicated notebook (6307508)
  • Warn the user that GLM and PoissonRegression are currently not natively in the package (e3e0234)
  • Add Random Forest to our classifier comparison (858f193)
  • Add XGBClassifier to our classifier comparison (eff1b15)
  • Update our documentation (2b16560)
  • Add a comparison of our classifiers (ce0d24b)
  • Make the plan for the documentation (0306cee)
  • Add a sentence about quantized module 237 (a440de3)
  • Use 2.1.0 theme (4fb1445)
  • Add starter docs for how ONNX is used internally (16978b6)
  • Add relevant docs from concrete-numpy (235322a)
  • Check mdformat (c29504a)