Fast inference engine for Transformer models
Open Source OCR Engine
Build your own AI friend
Port of OpenAI's Whisper model in C/C++
Port of Facebook's LLaMA model in C/C++
Speech-to-text, text-to-speech, and speaker recognition
Run local LLMs on any device. Open-source.
Clean and efficient FP8 GEMM kernels with fine-grained scaling
Awesome multilingual OCR toolkits based on PaddlePaddle
Structure-from-Motion and Multi-View Stereo
Audio Plugin for Audio to MIDI transcription using deep learning
A retargetable MLIR-based machine learning compiler and runtime toolkit
High-performance neural network inference framework for mobile
TT-NN operator library and TT-Metalium low-level kernel programming
ONNX Runtime: cross-platform, high performance ML inferencing
Open Source Computer Vision Library
CV-CUDA™ is an open-source, GPU-accelerated library
Distribute and run LLMs with a single file
C++-based high-performance parallel environment execution engine
OpenVINO™ Toolkit repository
LiteRT is the new name for TensorFlow Lite (TFLite)
LLMs as Copilots for Theorem Proving in Lean
Analyzing, storing and visualizing big data, scientifically
Google Testing and Mocking Framework
Diffusion model (SD, Flux, Wan, Qwen Image, Z-Image, ...) inference