A library for accelerating Transformer models on NVIDIA GPUs
LM Studio Apple MLX engine
A real-time inference engine for temporal logic specifications
High-performance reactive message-passing-based Bayesian inference engine
DeepEP: an efficient expert-parallel communication library
Ling is a MoE LLM provided and open-sourced by InclusionAI
A high-throughput and memory-efficient inference and serving engine
Jlama is a modern LLM inference engine for Java
Deep learning optimization library that makes distributed training easy
High-performance inference framework for large language models
Alibaba's high-performance LLM inference engine for diverse applications
950-line, minimal, extensible LLM inference engine built from scratch
Blazing fast, instant realtime GraphQL APIs on your DB
Lightweight, standalone C++ inference engine for Google's Gemma models
A high-performance inference engine for AI models
Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI
Open-source large language model family from Tencent Hunyuan
A lightweight vLLM implementation built from scratch (see the vLLM usage sketch after this list)
Mooncake is the serving platform for Kimi
Low-latency AI inference engine optimized for mobile devices
Code for running inference and fine-tuning with the SAM 3 model
Pruna is a model optimization framework built for developers
RGBD video generation model conditioned on camera input
Offline inference engine for art, real-time voice conversations
A Powerful Native Multimodal Model for Image Generation
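For context on the vLLM-style engines listed above, the following is a minimal offline-generation sketch using vLLM's public Python API (LLM, SamplingParams, generate). The model name, prompt, and sampling settings are placeholder assumptions and are not taken from any entry in the list.

```python
# Minimal offline-generation sketch with vLLM's Python API.
# Model name, prompt, and sampling settings are placeholders, not from the list above.
from vllm import LLM, SamplingParams

prompts = ["Explain expert parallelism in one sentence."]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load a small placeholder model; vLLM batches requests and manages KV-cache memory internally.
llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```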