ncnn is a high-performance neural network inference framework optimized for the mobile platform
PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing. It was primarily developed by Facebook's AI Research lab (FAIR) and first released in September 2016.
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Open3D: A Modern Library for 3D Data Processing
Transformer-related optimizations, including BERT and GPT
A GPU-accelerated library containing highly optimized building blocks and an execution engine for data processing to accelerate deep learning training and inference applications.
Tengine is a lightweight, high-performance, modular inference engine for embedded devices
TNN: a uniform deep learning inference framework for mobile, desktop, and server, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Based on ncnn and Rapidnet, TNN further strengthens the support and …
Lightning-fast C++/CUDA neural network framework
A retargetable MLIR-based machine learning compiler and runtime toolkit.
A C++ library based on TensorRT integration
Deep Learning API and server in C++14, with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and TSNE
Enabling PyTorch on XLA Devices (e.g. Google TPU)
TengineKit - Free, Fast, Easy, Real-Time Face Detection & Face Landmarks & Face Attributes & Hand Detection & Hand Landmarks & Body Detection & Body Landmarks & Iris Landmarks & Yolov5 SDK On Mobile.
🍅🍅🍅 YOLOv5-Lite: evolved from YOLOv5; the model is only 900+ KB (int8) and 1.7 MB (fp16). Reaches 15 FPS on the Raspberry Pi 4B
C++ Implementation of PyTorch Tutorials for Everyone
This repository provides code for machine learning algorithms for edge devices developed at Microsoft Research India.
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU