Metric monocular depth estimation (vision model)
CLIP model fine-tuned for zero-shot fashion product classification
VaultGemma: 1B Gemma variant trained with differential privacy (DP) for private NLP tasks
Qwen3-Next: 80B instruct LLM with ultra-long context up to 1M tokens
Grok-2.5: large-scale xAI model for local inference with SGLang
Powerful 14B LLM with strong instruction and long-text handling
Robust BERT-based model for English with improved MLM training
Qwen2.5-VL-3B-Instruct: Multimodal model for chat, vision & video
Portuguese ASR model fine-tuned on XLSR-53 for 16kHz audio input
High-performance MoE model with multi-head latent attention (MLA), multi-token prediction (MTP), and multilingual reasoning
High-efficiency model for reasoning and agentic tasks
Meta's instruction-tuned 1.2B LLM for multilingual text generation
ClinicalBERT model trained on MIMIC notes for clinical NLP tasks
Small 3B base multimodal model, well suited to custom AI on edge hardware