Showing 64 open source projects for "sandbox:/mnt/data/project_plan.pod"

  • 1
    Chinese-LLaMA-Alpaca-2 v2.0

    Chinese LLaMA & Alpaca large language model + local CPU/GPU training

    This project open-sources the Chinese LLaMA model and an instruction-tuned Chinese Alpaca model to promote open research on large models in the Chinese NLP community. Building on the original LLaMA, these models expand the vocabulary with Chinese tokens and perform secondary pre-training on Chinese data, which improves basic Chinese semantic understanding. The Chinese Alpaca model is additionally fine-tuned on Chinese instruction data, which significantly improves its ability to understand and follow instructions.
    Downloads: 0 This Week
    Last Update:
    See Project
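
A generic illustration of the vocabulary-expansion step described above, using the Hugging Face transformers API: new (here, Chinese) tokens are added to a tokenizer and the model's embedding matrix is resized so the new rows can be trained during secondary pre-training. The base checkpoint and token list below are placeholders for illustration, not the project's actual merge pipeline.

```python
# Requires: pip install transformers torch sentencepiece
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Llama-2-7b-hf"   # placeholder base checkpoint (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Toy stand-in for a merged Chinese vocabulary; the real project merges a
# SentencePiece model trained on Chinese text into the LLaMA tokenizer.
new_tokens = ["你好", "世界", "模型"]
num_added = tokenizer.add_tokens(new_tokens)

# Grow the embedding (and tied output) matrix so the new token ids have trainable rows.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; new vocab size = {len(tokenizer)}")
```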
  • 2
    Metaseq

    Repo for external large-scale work

    ...The framework was used internally at Meta to train models like OPT (Open Pre-trained Transformer) and serves as a reference implementation for scaling transformer architectures efficiently across GPUs and nodes. It supports both pretraining and fine-tuning workflows with data pipelines for text, multilingual corpora, and custom tokenization schemes. Metaseq also includes APIs for evaluation, generation, and model serving, enabling seamless transitions from training to inference.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 3
    PRM800K

    800,000 step-level correctness labels on LLM solutions to MATH problems

    ...The repository releases the raw labels and the labeler instructions used in two project phases, enabling researchers to study how human raters graded intermediate reasoning. Data are stored as newline-delimited JSONL files tracked with Git LFS, where each line is a full solution sample that can contain many step-level labels and rich metadata such as labeler UUIDs, timestamps, generation identifiers, and quality-control flags. Each labeled step can include multiple candidate completions with ratings of -1, 0, or +1, optional human-written corrections (phase 1), and a chosen completion index, along with a final finish reason such as found_error, solution, bad_problem, or give_up.
    Downloads: 1 This Week
    Last Update:
    See Project
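
As a rough sketch of how such files might be consumed, the following Python snippet walks a PRM800K-style JSONL file and tallies step ratings and finish reasons. The field names used (`label`, `steps`, `completions`, `rating`, `finish_reason`) and the file name are assumptions for illustration; the released files and labeler instructions define the actual schema.

```python
import json
from collections import Counter

def summarize_labels(path):
    """Tally step-level ratings and finish reasons from a PRM800K-style JSONL file.

    Field names below (label, steps, completions, rating, finish_reason) are
    assumed for illustration; the released data defines the real schema.
    """
    ratings = Counter()
    finish_reasons = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:                       # one full solution sample per line
            sample = json.loads(line)
            label = sample.get("label", {})
            finish_reasons[label.get("finish_reason", "unknown")] += 1
            for step in label.get("steps", []):
                for completion in step.get("completions", []):
                    ratings[completion.get("rating")] += 1   # -1, 0, or +1
    return ratings, finish_reasons

if __name__ == "__main__":
    ratings, reasons = summarize_labels("phase2_train.jsonl")  # hypothetical file name
    print("step ratings:", dict(ratings))
    print("finish reasons:", dict(reasons))
```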
  • 4
    VALL-E

    PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)

    ...Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech, hundreds of times more than existing systems use. VALL-E exhibits in-context learning capabilities and can synthesize high-quality personalized speech from only a 3-second enrolled recording of an unseen speaker used as an acoustic prompt. Experimental results show that VALL-E significantly outperforms the state-of-the-art zero-shot TTS system in speech naturalness and speaker similarity. ...
    Downloads: 0 This Week
    Last Update:
    See Project
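
The core idea of "TTS as conditional language modeling over discrete codec tokens" can be sketched in a few lines of PyTorch: concatenate text tokens with codec tokens and train an autoregressive transformer to predict the next codec token. The toy model below is a conceptual illustration only, not the VALL-E architecture (which uses separate autoregressive and non-autoregressive stages over multiple codec codebooks).

```python
import torch
import torch.nn as nn

class TinyCodecLM(nn.Module):
    """Toy autoregressive model over [text tokens ; codec tokens] (illustrative only)."""
    def __init__(self, text_vocab=256, codec_vocab=1024, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.text_emb = nn.Embedding(text_vocab, d_model)
        self.codec_emb = nn.Embedding(codec_vocab, d_model)
        self.pos_emb = nn.Embedding(4096, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, codec_vocab)

    def forward(self, text_ids, codec_ids):
        # Embed text prompt and (prompt + target) codec tokens, then concatenate.
        x = torch.cat([self.text_emb(text_ids), self.codec_emb(codec_ids)], dim=1)
        pos = torch.arange(x.size(1), device=x.device)
        x = x + self.pos_emb(pos)
        # Causal mask so each position only attends to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        h = self.blocks(x, mask=mask)
        return self.head(h)   # logits over the codec vocabulary at every position

model = TinyCodecLM()
text = torch.randint(0, 256, (2, 20))      # toy phoneme/text ids
codes = torch.randint(0, 1024, (2, 150))   # toy codec tokens (prompt + target)
logits = model(text, codes)
# Next-token loss on the codec segment only: predict codes[t] from everything before it.
codec_logits = logits[:, text.size(1) - 1 : -1, :]
loss = nn.functional.cross_entropy(codec_logits.reshape(-1, 1024), codes.reshape(-1))
print(loss.item())
```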
  • 5
    minGPT

    A minimal PyTorch re-implementation of the OpenAI GPT

    ...Because the whole model is around 300 lines of code, users can follow each step—from embedding lookup, positional encodings, multi-head attention, feed-forward layers, to output heads—and thus demystify how GPT-style models work beneath the surface. It provides a practical sandbox for experimentation, letting learners tweak the architecture, dataset, or training loop without being overwhelmed by framework abstraction.
    Downloads: 0 This Week
    Last Update:
    See Project
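
To give a flavor of what those ~300 lines contain, here is a compressed, self-contained sketch of a GPT-style block (causal self-attention followed by an MLP, each with a residual connection). It mirrors the structure minGPT walks through but is an independent illustration, not the repository's code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    """Multi-head self-attention with a causal mask (GPT-style, illustrative)."""
    def __init__(self, d_model=128, n_heads=4, block_size=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)
        mask = torch.tril(torch.ones(block_size, block_size)).view(1, 1, block_size, block_size)
        self.register_buffer("mask", mask)

    def forward(self, x):
        B, T, C = x.shape
        q, k, v = self.qkv(x).split(C, dim=2)
        # Reshape to (batch, heads, tokens, head dim).
        q, k, v = (t.view(B, T, self.n_heads, self.d_head).transpose(1, 2) for t in (q, k, v))
        att = (q @ k.transpose(-2, -1)) / math.sqrt(self.d_head)
        att = att.masked_fill(self.mask[:, :, :T, :T] == 0, float("-inf"))  # no peeking ahead
        y = F.softmax(att, dim=-1) @ v
        return self.proj(y.transpose(1, 2).reshape(B, T, C))

class Block(nn.Module):
    """Transformer block: attention and MLP, each wrapped in a residual connection."""
    def __init__(self, d_model=128, n_heads=4, block_size=64):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.attn = CausalSelfAttention(d_model, n_heads, block_size)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, x):
        x = x + self.attn(self.ln1(x))
        return x + self.mlp(self.ln2(x))

x = torch.randn(2, 16, 128)     # (batch, tokens, embedding dim)
print(Block()(x).shape)         # torch.Size([2, 16, 128])
```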
  • 6
    Video Pre-Training

    Learning to Act by Watching Unlabeled Online Videos

    The Video PreTraining (VPT) repository provides code and model artifacts for a project where agents learn to act by watching human gameplay videos—specifically, gameplay of Minecraft—using behavioral cloning. The idea is to learn general priors of control from large-scale, unlabeled video data, and then optionally fine-tune those priors for more goal-directed behavior via environment interaction. The repository contains demonstration models of different widths, fine-tuned variants (e.g. for building houses or early-game tasks), and inference scripts that instantiate agents from pretrained weights. Key modules include the behavioral cloning logic, the agent wrapper, and data loading pipelines (with an accessible skeleton for loading Minecraft demonstration data). ...
    Downloads: 0 This Week
    Last Update:
    See Project
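
The behavioral-cloning core can be pictured as supervised learning on (frame, action) pairs extracted from gameplay video. The toy policy below illustrates that objective only; it is not VPT's actual architecture, and the action labels are assumed to already exist for the demonstration data.

```python
import torch
import torch.nn as nn

class TinyPolicy(nn.Module):
    """Map a video frame to a distribution over discrete actions (illustrative only)."""
    def __init__(self, n_actions=20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.LazyLinear(n_actions)   # infers the flattened size on first use

    def forward(self, frames):                 # frames: (B, 3, H, W) in [0, 1]
        return self.head(self.backbone(frames))

policy = TinyPolicy()
frames = torch.rand(8, 3, 128, 128)            # a batch of video frames
actions = torch.randint(0, 20, (8,))           # action labels from the demonstrations
loss = nn.functional.cross_entropy(policy(frames), actions)  # behavioral cloning objective
loss.backward()
print(float(loss))
```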
  • 7
    MAE (Masked Autoencoders)

    PyTorch implementation of MAE

    MAE (Masked Autoencoders) is a self-supervised learning framework for visual representation learning using masked image modeling. It trains a Vision Transformer (ViT) by randomly masking a high percentage of image patches (typically 75%) and reconstructing the missing content from the remaining visible patches. This forces the model to learn semantic structure and global context without supervision. The encoder processes only the visible patches, while a lightweight decoder reconstructs the full set of patches from the encoded visible patches together with mask tokens.
    Downloads: 0 This Week
    Last Update:
    See Project
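
The random-masking step at the heart of MAE is easy to sketch: split the image into patch embeddings, keep a random 25%, and feed only those to the encoder. The snippet below shows the per-sample shuffling trick in PyTorch as an illustration of the idea, not the repository's exact implementation.

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random (1 - mask_ratio) subset of patches per sample.

    patches: (B, N, D) tensor of patch embeddings.
    Returns the kept patches, a binary mask, and the indices that undo the shuffle.
    """
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                          # per-sample random scores
    ids_shuffle = noise.argsort(dim=1)                # random permutation of patch indices
    ids_restore = ids_shuffle.argsort(dim=1)          # inverse permutation (for the decoder)
    ids_keep = ids_shuffle[:, :n_keep]
    kept = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N)                           # 1 = masked, 0 = visible
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)         # back to the original patch order
    return kept, mask, ids_restore

patches = torch.randn(4, 196, 768)                    # e.g. 14x14 patches of a 224x224 image
kept, mask, _ = random_masking(patches)
print(kept.shape, mask.sum(dim=1))                    # (4, 49, 768), 147 masked per sample
```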
  • 8
    GPT Neo

    An implementation of model parallel GPT-2 and GPT-3-style models

    An implementation of model & data parallel GPT-3-like models using the mesh-tensorflow library. If you're just here to play with our pre-trained models, we strongly recommend you try out the HuggingFace Transformers integration. Training and inference are officially supported on TPU and should work on GPU as well. This repository will be (mostly) archived as we move focus to our GPU-specific repo, GPT-NeoX.
    Downloads: 1 This Week
    Last Update:
    See Project
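
Since the README points users to the Hugging Face Transformers integration for the pre-trained checkpoints, typical usage looks roughly like the following; `EleutherAI/gpt-neo-1.3B` is one of the published checkpoints, and the generation settings are arbitrary examples.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load a published GPT-Neo checkpoint through the Hugging Face integration
# (expect a multi-gigabyte download on first use).
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

out = generator(
    "Open source language models are",
    max_new_tokens=40,      # length of the continuation
    do_sample=True,         # sample instead of greedy decoding
    temperature=0.8,
)
print(out[0]["generated_text"])
```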
  • 9
    Denoiser

    Real Time Speech Enhancement in the Waveform Domain (Interspeech 2020)

    ...It uses a causal encoder-decoder architecture with skip connections, optimized with losses defined both in the time domain and frequency domain to better suppress noise while preserving speech. Unlike models that operate on spectrograms alone, this design enables lower latency and coherent waveform output. The implementation includes data augmentation techniques applied to the raw waveforms (e.g. noise mixing, reverberation) to improve model robustness and generalization to diverse noise types. The project supports both offline denoising (batch inference) and live audio processing (e.g. via loopback audio interfaces), making it practical for real-time use in calls or recording. ...
    Downloads: 1 This Week
    Last Update:
    See Project
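
The combination of time-domain and frequency-domain losses can be illustrated generically: an L1 loss on the raw waveform plus an L1 loss on STFT magnitudes. The snippet below is an illustration of that idea rather than the project's exact objective, and the FFT sizes are arbitrary.

```python
import torch

def time_plus_spectral_loss(estimate, clean, n_fft=1024, hop=256):
    """L1 on waveforms plus L1 on STFT magnitudes (illustrative combination)."""
    # Time-domain term: keeps the enhanced waveform close to the clean reference.
    time_loss = torch.nn.functional.l1_loss(estimate, clean)
    # Frequency-domain term: compares magnitude spectrograms.
    window = torch.hann_window(n_fft)
    spec_est = torch.stft(estimate, n_fft, hop, window=window, return_complex=True).abs()
    spec_ref = torch.stft(clean, n_fft, hop, window=window, return_complex=True).abs()
    spec_loss = torch.nn.functional.l1_loss(spec_est, spec_ref)
    return time_loss + spec_loss

clean = torch.randn(2, 16000)                       # 1 s of 16 kHz audio
noisy_estimate = clean + 0.05 * torch.randn_like(clean)
print(float(time_plus_spectral_loss(noisy_estimate, clean)))
```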
  • 10
    Image GPT

    Large-scale autoregressive pixel model for image generation by OpenAI

    ...While the repository is archived and provided as-is, it remains a valuable starting point for experimenting with autoregressive transformers applied directly to raw pixel data. By demonstrating GPT’s flexibility across modalities, Image-GPT influenced subsequent multimodal generative research.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 11
    PyTorch-BigGraph

    Generate embeddings from large-scale graph-structured data

    PyTorch-BigGraph (PBG) is a system for learning embeddings on massive graphs—think billions of nodes and edges—using partitioning and distributed training to keep memory and compute tractable. It shards entities into partitions and buckets edges so that each training pass only touches a small slice of parameters, which drastically reduces peak RAM and enables horizontal scaling across machines. PBG supports multi-relation graphs (knowledge graphs) with relation-specific scoring functions,...
    Downloads: 0 This Week
    Last Update:
    See Project
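
The partition-and-bucket idea is easy to illustrate outside of PBG itself: assign each entity to a partition, then group edges into (source-partition, destination-partition) buckets so that a training pass only needs the embeddings of the two partitions touched by the current bucket. The toy sketch below illustrates that bookkeeping; it is not PBG's API or on-disk format.

```python
from collections import defaultdict

NUM_PARTITIONS = 4

def partition_of(entity_id, num_partitions=NUM_PARTITIONS):
    """Assign an entity to a partition (a simple modulo stands in for PBG's chunking)."""
    return entity_id % num_partitions

def bucket_edges(edges, num_partitions=NUM_PARTITIONS):
    """Group (src, rel, dst) edges by (source partition, destination partition)."""
    buckets = defaultdict(list)
    for src, rel, dst in edges:
        buckets[(partition_of(src), partition_of(dst))].append((src, rel, dst))
    return buckets

# Toy graph: entity ids 0..11, a single relation type 0.
edges = [(0, 0, 5), (1, 0, 6), (2, 0, 9), (7, 0, 3), (8, 0, 11), (4, 0, 10)]
for bucket, contents in sorted(bucket_edges(edges).items()):
    # During training, only the embeddings for these two partitions need to be in RAM.
    print(f"bucket {bucket}: {len(contents)} edges -> load partitions {set(bucket)}")
```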
  • 12
    MUSE

    A library for Multilingual Unsupervised or Supervised word Embeddings

    MUSE is a framework for learning multilingual word embeddings that live in a shared space, enabling bilingual lexicon induction, cross-lingual retrieval, and zero-shot transfer. It supports both supervised alignment with seed dictionaries and unsupervised alignment that starts without parallel data by using adversarial initialization followed by Procrustes refinement. The code can align pre-trained monolingual embeddings (such as fastText) across dozens of languages and provides standardized evaluation scripts and dictionaries. By mapping languages into a common vector space, MUSE makes it straightforward to build cross-lingual applications where resources are scarce for some languages. ...
    Downloads: 0 This Week
    Last Update:
    See Project
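
The Procrustes refinement mentioned above has a closed-form solution: given row matrices X (source) and Y (target) holding embeddings for the seed dictionary pairs, the orthogonal map W minimizing ||XW - Y|| is W = U Vᵀ, where U Σ Vᵀ is the SVD of XᵀY. A small NumPy sketch of that step (illustrative, not MUSE's code):

```python
import numpy as np

def procrustes_align(X, Y):
    """Best orthogonal map W (so that X @ W ≈ Y) for paired embeddings.

    X: (n, d) source-language vectors for the seed dictionary.
    Y: (n, d) target-language vectors for the same word pairs.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)   # SVD of the cross-covariance matrix
    return U @ Vt                        # orthogonal by construction

rng = np.random.default_rng(0)
d, n = 300, 5000
# Synthetic "target" space: a random rotation of the source space plus noise.
true_W = np.linalg.qr(rng.normal(size=(d, d)))[0]
X = rng.normal(size=(n, d))
Y = X @ true_W + 0.01 * rng.normal(size=(n, d))

W = procrustes_align(X, Y)
print(np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))   # small residual, near the noise level
```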
  • 13
    Retrieval-Based Conversational Model

    Dual LSTM Encoder for Dialog Response Generation

    ...The core idea is to embed both the conversation context and potential replies into vector representations, then score how well each candidate fits the current dialogue, choosing the best match accordingly. Designed to work with datasets like the Ubuntu Dialogue Corpus, this codebase includes data preparation, model training, and evaluation components for building and assessing dialog models that can handle multi-turn conversations.
    Downloads: 0 This Week
    Last Update:
    See Project
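
The dual-encoder scoring idea can be sketched compactly: encode the context and a candidate reply with separate LSTMs, then score the pair with a bilinear form sigma(cᵀ M r). The model below is a minimal illustration under assumed vocabulary and hidden sizes, not the repository's implementation.

```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Score (context, response) pairs with two LSTM encoders (illustrative sketch)."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.context_rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.response_rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.M = nn.Parameter(torch.randn(hidden, hidden) * 0.01)  # bilinear scoring matrix

    def encode(self, rnn, token_ids):
        _, (h, _) = rnn(self.emb(token_ids))        # final hidden state as the summary vector
        return h[-1]                                # (batch, hidden)

    def forward(self, context_ids, response_ids):
        c = self.encode(self.context_rnn, context_ids)
        r = self.encode(self.response_rnn, response_ids)
        return torch.sigmoid((c @ self.M * r).sum(dim=1))   # sigma(c^T M r) for each pair

model = DualEncoder()
context = torch.randint(0, 10000, (4, 50))        # token ids for 4 conversation contexts
candidates = torch.randint(0, 10000, (4, 20))     # one candidate reply per context
print(model(context, candidates))                 # probability each reply fits its context
```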
  • 14
    DeepSeek-V3.2

    High-efficiency reasoning and agentic intelligence model

    ...The model was notably used in competitive AI challenges such as the 2025 International Mathematical Olympiad (IMO) and IOI, achieving top-tier results. DeepSeek-V3.2 also features a large-scale agentic task synthesis pipeline, which generates training data to enhance tool-use intelligence and multi-step reasoning. It introduces a new “thinking with tools” chat template, allowing it to reason and decide when to invoke specific tools during problem solving.
    Downloads: 0 This Week
    Last Update:
    See Project