A Keras-like abstraction layer on top of the Rust ML framework candle
MPL-2.0 License
This project started as my personal Rust exercise: an abstraction over the minimalist Rust ML framework Candle (https://github.com/huggingface/candle) that introduces a more convenient way of programming neural network machine learning models.
The behaviour is inspired by Python Keras (https://keras.io), and the initial step was based on the Rust-Keras-like code (https://github.com/AhmedBoin/Rust-Keras-Like).
So let's call the project Candle Lighter, because it helps light the candle and makes implementation even easier.
Examples can be found in the lib/examples/ directory.
To use it as a library, run `cargo add candlelighter`.
MAINTAINERS AND CONTRIBUTORS ARE HIGHLY WELCOME
Note: This project is far from production-ready and is used only for my own training purposes. No warranty or liability is given. I am a private person and do not target any commercial benefit.
Meta Layer | Type | State | Example |
---|---|---|---|
Sequential model | - | done | |
- | Feature scaling | in progress | DNN and TNN |
- | Dense | done | DNN |
- | Convolution | done | CNN |
- | Pooling | done | - |
- | Normalization | done | - |
- | Flatten | done | - |
- | Recurrent | done | RNN 1st throw |
- | Regulation | done | - |
- | Autoencoder | in progress | - |
- | Feature embedding | done | S2S 1st throw |
- | Attention | in progress | TNN 1st throw |
- | Mixture of Experts | in progress | ENN 1st throw |
- | Feature masking and -quantization | in progress | - |
- | KAN-Dense | in progress | - |
Model fine tuning (PEFT) | - | in progress | In development: DNN2 & DNN3 |
Parallel model (in sense of split) | - | in progress | PNN 1st throw |
Parallel model | Merging | in progress | PNN 1st throw |
Transformer models | see below | in progress | |
* BERT | Text similarity | done | LLM |
* LLAMA | Completion (Chat) | done | LLM2 |
Reinforcement models | see below | in progress | |
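The table above follows the Keras idea of stacking layers into a sequential container. As a minimal, self-contained sketch of that pattern in plain Rust (no candle dependency; all names here, such as `Layer`, `Dense`, and `Sequential`, are illustrative assumptions and not candlelighter's actual API):

```rust
// Illustrative sketch of the Keras-style "sequential stack of layers"
// pattern. These types are hypothetical and do NOT reflect
// candlelighter's real API.

trait Layer {
    fn forward(&self, input: Vec<f32>) -> Vec<f32>;
}

/// A dense (fully connected) layer with fixed weights, for illustration.
struct Dense {
    weights: Vec<Vec<f32>>, // shape: [out][in]
    bias: Vec<f32>,         // shape: [out]
}

impl Layer for Dense {
    fn forward(&self, input: Vec<f32>) -> Vec<f32> {
        // y_j = sum_i(w_ji * x_i) + b_j
        self.weights
            .iter()
            .zip(&self.bias)
            .map(|(row, b)| row.iter().zip(&input).map(|(w, x)| w * x).sum::<f32>() + b)
            .collect()
    }
}

/// Layers applied in order, as in a Keras Sequential model.
struct Sequential {
    layers: Vec<Box<dyn Layer>>,
}

impl Sequential {
    fn forward(&self, mut x: Vec<f32>) -> Vec<f32> {
        for layer in &self.layers {
            x = layer.forward(x);
        }
        x
    }
}

fn main() {
    // A single 2 -> 1 dense layer: y = 1*x0 + 2*x1 + 0.5
    let model = Sequential {
        layers: vec![Box::new(Dense {
            weights: vec![vec![1.0, 2.0]],
            bias: vec![0.5],
        })],
    };
    let y = model.forward(vec![3.0, 4.0]);
    println!("{:?}", y); // [11.5]
}
```

The point of the pattern is that each layer only needs to implement a common `forward` interface, so the container can chain arbitrary layer types, which is what makes the Keras style of model building convenient.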
Triple-licensed to be compatible with the Rust project and the source roots.
Licensed under the MPL-2.0, the MIT license, or the Apache License, Version 2.0, at your option.