
ncnn



ncnn is a high-performance neural network inference computing framework optimized for mobile platforms. ncnn has been designed with deployment and use on mobile phones in mind from the very beginning. It has no third-party dependencies, is cross-platform, and runs faster than all known open-source frameworks on mobile phone CPUs. With ncnn, developers can easily deploy deep learning models to mobile platforms, build intelligent apps, and bring artificial intelligence to your fingertips. ncnn is currently used in many Tencent applications, such as QQ, Qzone, WeChat, and Pitu.

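For a first taste of the C++ API, below is a minimal inference sketch assuming a model already converted to ncnn format; the file names squeezenet_v1.1.param / squeezenet_v1.1.bin and the blob names "data" / "prob" are placeholders for this example, not names required by ncnn.

```cpp
// Minimal ncnn inference sketch (placeholder model/blob names, adapt to your own model).
#include "net.h" // ncnn::Net, ncnn::Extractor
#include "mat.h" // ncnn::Mat

int main()
{
    ncnn::Net net;

    // Load the converted network description and weights (0 means success).
    if (net.load_param("squeezenet_v1.1.param"))
        return -1;
    if (net.load_model("squeezenet_v1.1.bin"))
        return -1;

    // Prepare a dummy 227x227 3-channel input; real code would convert
    // image pixels with ncnn::Mat::from_pixels_resize().
    ncnn::Mat in(227, 227, 3);
    in.fill(0.5f);

    // Run one forward pass.
    ncnn::Extractor ex = net.create_extractor();
    ex.input("data", in); // "data" is the assumed input blob name
    ncnn::Mat out;
    ex.extract("prob", out); // "prob" is the assumed output blob name

    return 0;
}
```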


Technical Exchange QQ Group
637093648 (lots of experts; join answer: 卷卷卷卷卷) (already full)

Telegram Group

https://t.me/ncnnyes

Discord Channel

https://discord.gg/YRsxgmF

Pocky QQ Group (MLIR YES!)
677104663 (lots of experts; join answer: multi-level intermediate representation)

"They have no idea how useful pnnx is" QQ Group
818998520 (new group!)

Download & Build status

https://github.com/Tencent/ncnn/releases/latest

how to build ncnn library on Linux / Windows / macOS / Raspberry Pi3, Pi4 / POWER / Android / NVIDIA Jetson / iOS / WebAssembly / AllWinner D1 / Loongson 2K1000

CI build targets: Source, Android, Android shared, HarmonyOS, HarmonyOS shared, iOS, iOS-Simulator, macOS, Mac-Catalyst, watchOS, watchOS-Simulator, tvOS, tvOS-Simulator, visionOS, visionOS-Simulator, Apple xcframework, Ubuntu 20.04, Ubuntu 22.04, Ubuntu 24.04, Windows (VS2015, VS2017, VS2019, VS2022), WebAssembly, Linux (arm, aarch64, mips, mips64, ppc64, riscv64, loongarch64)

Supports most commonly used CNN networks



HowTo

use ncnn with alexnet with detailed steps, recommended for beginners :)


use netron for ncnn model visualization

use ncnn with pytorch or onnx

ncnn low-level operation api

ncnn param and model file spec

ncnn operation param weight table

how to implement custom layer step by step
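As a rough sketch of what that custom layer guide walks through, the example below registers a hypothetical in-place layer before loading a param file that references it; the class name MyClip6, the layer type name, and the file names are invented for illustration.

```cpp
// Sketch: registering a custom layer (hypothetical "MyClip6" layer, made up for illustration).
#include "layer.h"
#include "net.h"

class MyClip6 : public ncnn::Layer
{
public:
    MyClip6()
    {
        one_blob_only = true;   // single input, single output
        support_inplace = true; // modify the blob in place
    }

    virtual int forward_inplace(ncnn::Mat& bottom_top_blob, const ncnn::Option& /*opt*/) const
    {
        // Clamp every value to [0, 6].
        for (int q = 0; q < bottom_top_blob.c; q++)
        {
            float* ptr = bottom_top_blob.channel(q);
            for (int i = 0; i < bottom_top_blob.w * bottom_top_blob.h; i++)
            {
                if (ptr[i] < 0.f) ptr[i] = 0.f;
                if (ptr[i] > 6.f) ptr[i] = 6.f;
            }
        }
        return 0;
    }
};

DEFINE_LAYER_CREATOR(MyClip6)

void load_with_custom_layer(ncnn::Net& net)
{
    // Registration must happen before load_param so the parser can
    // resolve the "MyClip6" layer type found in the param file.
    net.register_custom_layer("MyClip6", MyClip6_layer_creator);
    net.load_param("model_with_myclip6.param"); // placeholder file names
    net.load_model("model_with_myclip6.bin");
}
```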


FAQ

ncnn throw error

ncnn produce wrong result

ncnn vulkan


Features

  • Supports convolutional neural networks with multi-input and multi-branch structures, and can compute only part of the branches
  • No third-party library dependencies; does not rely on BLAS / NNPACK or any other computing framework
  • Pure C++ implementation, cross-platform, supports Android, iOS and more
  • Carefully hand-optimized ARM NEON assembly for extremely fast computation
  • Sophisticated memory management and data structure design, very low memory footprint
  • Supports multi-core parallel computing acceleration and ARM big.LITTLE CPU scheduling optimization
  • Supports GPU acceleration via the next-generation low-overhead Vulkan API (see the sketch after this list)
  • Extensible model design; supports 8-bit quantization and half-precision floating point storage; can import caffe/pytorch/mxnet/onnx/darknet/keras/tensorflow(mlir) models
  • Supports zero-copy loading of network models directly from memory
  • Can be extended by registering custom layer implementations
  • Well, it is strong, not afraid of being stuffed with 卷 QvQ
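For the Vulkan bullet above, the sketch below shows the typical opt-in, assuming ncnn was built with Vulkan support; the model file names are placeholders.

```cpp
// Sketch: opting in to Vulkan GPU compute before loading the model.
#include "net.h"

void load_with_gpu(ncnn::Net& net)
{
    // Request Vulkan compute; this assumes ncnn was built with NCNN_VULKAN.
    // Must be set before load_param()/load_model().
    net.opt.use_vulkan_compute = true;

    net.load_param("model.param"); // placeholder file names
    net.load_model("model.bin");
}
```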


supported platform matrix

  • ✅ = known work and runs fast with good optimization
  • ✔️ = known work, but speed may not be fast enough
  • ❔ = shall work, not confirmed
  • / = not applied
|            | Windows | Linux | Android | macOS | iOS |
| ---------- | ------- | ----- | ------- | ----- | --- |
| intel-cpu  | ✔️      | ✔️    | ❔      | ✔️    | /   |
| intel-gpu  | ✔️      | ✔️    | ❔      | ❔    | /   |
| amd-cpu    | ✔️      | ✔️    | ❔      | ✔️    | /   |
| amd-gpu    | ✔️      | ✔️    | ❔      | ❔    | /   |
| nvidia-gpu | ✔️      | ✔️    | ❔      | ❔    | /   |
| qcom-cpu   | ❔      | ✔️    | ✅      | /     | /   |
| qcom-gpu   | ❔      | ✔️    | ✔️      | /     | /   |
| arm-cpu    | ❔      | ❔    | ✅      | /     | /   |
| arm-gpu    | ❔      | ❔    | ✔️      | /     | /   |
| apple-cpu  | /       | /     | /       | ✔️    | ✅  |
| apple-gpu  | /       | /     | /       | ✔️    | ✔️  |
| ibm-cpu    | /       | ✔️    | /       | /     | /   |

Project examples



License

BSD 3 Clause