Uncover insights, surface problems, monitor, and fine-tune your LLM
Standardized Serverless ML Inference Platform on Kubernetes
Deep learning optimization library that makes distributed training easy
Data manipulation and transformation for audio signal processing
LLM training code for MosaicML foundation models
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction
Superduper: Integrate AI models and machine learning workflows
Create HTML profiling reports from pandas DataFrame objects
Integrate, train and manage any AI models and APIs with your database
State-of-the-art Parameter-Efficient Fine-Tuning
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
PyTorch extensions for fast R&D prototyping and Kaggle farming
The official Python client for the Hugging Face Hub
Trainable models and NN optimization tools
Uplift modeling and causal inference with machine learning algorithms
Everything you need to build state-of-the-art foundation models
Efficient few-shot learning with Sentence Transformers
Single-cell analysis in Python
Tensor search for humans
GPU environment management and cluster orchestration
Replace OpenAI GPT with another LLM in your app
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Probabilistic reasoning and statistical analysis in TensorFlow
PyTorch domain library for recommendation systems
A unified framework for scalable computing