Tengine is a lightweight, high-performance, modular inference engine for embedded devices.
Apache-2.0 License
Models | Input Size | Inference Time on ZCU102 + NVDLA (ms) |
---|---|---|
ResNet18 | 3x32x32 | 12.6 |
YOLOv3-Tiny-ReLU | 3x416x416 | 630.5 |
YOLOX-Nano-ReLU | 3x416x416 | 1138.8 |
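The latencies above convert to single-stream throughput as FPS = 1000 / latency (batch size 1, one inference at a time). A small sketch, using the times from the ZCU102 + NVDLA table:

```python
# Convert per-inference latency (ms) to single-stream throughput (FPS).
# Times are copied from the ZCU102 + NVDLA table above.
latencies_ms = {
    "ResNet18": 12.6,
    "YOLOv3-Tiny-ReLU": 630.5,
    "YOLOX-Nano-ReLU": 1138.8,
}

def fps(latency_ms: float) -> float:
    """Frames per second for one inference stream at batch size 1."""
    return 1000.0 / latency_ms

for model, ms in latencies_ms.items():
    print(f"{model}: {fps(ms):.1f} FPS")
```

So ResNet18 at 12.6 ms corresponds to roughly 79 FPS, while YOLOX-Nano-ReLU at 1138.8 ms is under 1 FPS on this target.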
Published by BUG1989 about 3 years ago
Published by BUG1989 about 3 years ago
Models | Inference Time on A311D (ms) |
---|---|
MobileNet v1 | 4.3 |
MobileNet v2 | 5.2 |
ResNet18 | 5.5 |
ResNet50 | 14.6 |
SqueezeNet v1.1 | 2.6 |
VGG16 | 18.7 |
YOLOv3 | 78.6 |
YOLOv5s | 68.9 |
YOLOX-S | 55.2 |
Published by BUG1989 over 3 years ago
Models | Inference Time on A311D (ms) |
---|---|
MobileNet v1 | 4.3 |
MobileNet v2 | 5.2 |
ResNet18 | 5.5 |
ResNet50 | 14.6 |
SqueezeNet v1.1 | 2.6 |
VGG16 | 18.7 |
YOLOv3 | 78.6 |
YOLOv5s | 68.9 |
Published by BUG1989 over 3 years ago
Models |
---|
MobileNet v1 |
MobileNet v2 |
ResNet18 |
SqueezeNet v1.1 |
YOLO-Fastest |
Published by BUG1989 over 3 years ago
Models | Inference Time (ms) |
---|---|
MobileNet v1 | 2.3 |
MobileNet v2 | 5.1 |
ResNet18 | 4.5 |
ResNet50 | 11.7 |
SqueezeNet v1.1 | 2.5 |
VGG16 | 22.8 |
YOLOv3 | 78.2 |
Published by BUG1989 over 3 years ago
Published by BUG1989 over 3 years ago
Published by BUG1989 almost 4 years ago
Published by BUG1989 almost 4 years ago
Published by BUG1989 about 4 years ago
Dynamic graph segmentation
C++ API (experimental)
Python API (experimental)
Support ARM Mali GPU via ACL
Support other GPUs via Vulkan (experimental)
Support fp16 inference on Armv8.2 (experimental)
uint8 reference ops (experimental)
Mish activation op
Published by BUG1989 over 4 years ago
Initial Tengine Lite release v0.1
Published by satosa-z almost 5 years ago
Published by cyberfire over 5 years ago
Separate the CPU operator implementations and the framework into two shared libraries (.so).
Add a serializer for TFLite, plus reference implementations of TFLite ops.
Add RNN/GRU/LSTM reference implementations.
Published by cyberfire almost 6 years ago
This release ships the new API 2.0, along with a few new features and bug fixes.
Published by cyberfire almost 6 years ago
Android build that runs ACL
MSSD can use the GPU for acceleration
Android build uses c++_shared instead of gnustl_shared
Published by cyberfire almost 6 years ago
Support GPU fp16 (works only with ACL 18.05).
Support for more TensorFlow and ONNX models.
Published by cyberfire over 6 years ago
This is the first version, implementing many of the basic features of an inference engine.