Qwen3-Omni is a natively end-to-end, omni-modal LLM
Official code base for LeWorldModel: Stable End-to-End Joint-Embedding
The official PyTorch implementation of Google's Gemma models
A state-of-the-art open visual language model
Open Source Speech Language Model
Open-source industrial-grade ASR models
Fast-stable-diffusion + DreamBooth
Hunyuan Translation Model Version 1.5
Multimodal embedding and reranking models built on Qwen3-VL
A trainable PyTorch reproduction of AlphaFold 3
State-of-the-art text-to-video pre-trained model
OCR expert VLM powered by Hunyuan's native multimodal architecture
Implementation of "MobileCLIP" (CVPR 2024)
VMZ: Model Zoo for Video Modeling
High-resolution models for human tasks
Ling is a MoE LLM provided and open-sourced by InclusionAI
Personalize Any Characters with a Scalable Diffusion Transformer
Stable Virtual Camera: Generative View Synthesis with Diffusion Models
Genome modeling and design across all domains of life
Pretrained time-series foundation model developed by Google Research
General-purpose image editing model that delivers high-fidelity results
Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI
4M: Massively Multimodal Masked Modeling
This repository contains the official implementation of FastVLM
FAIR Sequence Modeling Toolkit 2