Search Results for "/storage/emulated/0/android/data/net.sourceforge.uiq3.fx603p/files" - Page 5

Showing 158 open source projects for "/storage/emulated/0/android/data/net.sourceforge.uiq3.fx603p/files"

  • 1
    E2B Cookbook

    Examples of using E2B

    ...The repository acts as a practical learning resource for developers who want to integrate AI agents with secure cloud execution environments that allow large language models to run code and interact with tools. The examples illustrate how developers can build AI workflows capable of performing tasks such as data analysis, code execution, and application generation inside isolated sandbox environments. E2B itself provides secure Linux-based sandboxes that enable AI systems to safely run generated code and interact with real computing resources without compromising the host environment. The cookbook organizes examples across multiple frameworks and model providers, allowing developers to experiment with integrations involving models from OpenAI, Anthropic, and other ecosystems.
    Downloads: 0 This Week
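The sandboxing idea described above can be sketched locally. This is not E2B's API (which runs code in isolated remote Linux sandboxes); it is a toy stand-in using only process separation and a timeout, which is not a real security boundary:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run a Python snippet in a separate interpreter process and return
    its stdout. A toy analogue of a sandbox service: a real system like
    E2B isolates the process in a remote microVM; here we only get a
    child process and a timeout."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()
```

For example, `run_untrusted("print(2 + 2)")` returns `"4"` while keeping the generated code out of the host interpreter's memory space.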
  • 2
    second-brain-ai-assistant-course

    Learn to build your Second Brain AI assistant with LLMs

    ...The concept of a “second brain” refers to a personal knowledge repository containing notes, research, and documents that can be queried and analyzed using AI. Through a series of modules, the project explains how to design data pipelines, build retrieval-augmented generation systems, and implement agent-based reasoning workflows. The course also introduces practical techniques such as dataset generation, model fine-tuning, and deployment strategies for AI applications. Learners build a full system capable of retrieving information from stored resources and generating responses based on that data.
    Downloads: 0 This Week
  • 3
    Parallax

    Parallax is a distributed model serving framework

    Parallax is a decentralized inference framework designed to run large language models across distributed computing resources. Instead of relying on centralized GPU clusters in data centers, the system allows multiple heterogeneous machines to collaborate in serving AI inference workloads. Parallax divides model layers across different nodes and dynamically coordinates them to form a complete inference pipeline. A two-stage scheduling architecture determines how model layers are allocated to available hardware and how requests are routed across nodes during execution. ...
    Downloads: 2 This Week
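The first scheduling stage described above (allocating model layers to heterogeneous hardware) can be sketched with a simple proportional heuristic. The function and heuristic are illustrative, not Parallax's actual scheduler:

```python
def partition_layers(num_layers, capacities):
    """Assign contiguous layer ranges to nodes in proportion to each
    node's capacity (e.g. memory or throughput). A simplified sketch of
    pipeline partitioning in a distributed serving system."""
    total = sum(capacities)
    # Ideal (fractional) share of layers for each node.
    shares = [num_layers * c / total for c in capacities]
    counts = [int(s) for s in shares]
    # Hand leftover layers to the nodes with the largest remainders.
    leftover = num_layers - sum(counts)
    order = sorted(range(len(shares)),
                   key=lambda i: shares[i] - counts[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    ranges, start = [], 0
    for n in counts:
        ranges.append((start, start + n))  # [start, end) layer indices
        start += n
    return ranges
```

A node with twice the capacity serves twice the layers: `partition_layers(32, [1, 1, 2])` yields `[(0, 8), (8, 16), (16, 32)]`. The second stage, request routing across those ranges at execution time, is a separate problem.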
  • 4
    Prometheus-Eval

    Evaluate your LLM's response with Prometheus and GPT4

    ...The repository includes a Python package that provides a straightforward interface for running evaluations and integrating them into model development pipelines. It also provides training data and utilities for fine-tuning evaluator models so they can assess outputs according to custom scoring rubrics such as helpfulness, accuracy, or style.
    Downloads: 2 This Week
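Rubric-based evaluators of this kind emit free-text feedback followed by a score token that the harness parses. The `[RESULT] <n>` convention below is close to the format Prometheus-style judges use, but the exact function and format here are illustrative:

```python
import re

def parse_judgment(text, low=1, high=5):
    """Extract (feedback, score) from an evaluator model's raw output,
    assuming the judge ends with '[RESULT] <n>' on a 1-5 rubric scale.
    Returns (text, None) when the judge breaks the format, so callers
    can retry or discard the sample."""
    s = text.strip()
    match = re.search(r"\[RESULT\]\s*(\d+)\s*$", s)
    if not match:
        return s, None
    score = int(match.group(1))
    if not low <= score <= high:
        return s, None
    return s[:match.start()].strip(), score
```

Keeping the parse step strict is what makes scores usable in a pipeline: malformed judgments are flagged rather than silently miscounted.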
  • 5
    OmAgent

    Build multimodal language agents for fast prototype and production

    OmAgent is an open-source Python framework designed to simplify the development of multimodal language agents that can reason, plan, and interact with different types of data sources. The framework provides abstractions and infrastructure for building AI agents that operate on text, images, video, and audio while maintaining a relatively simple interface for developers. Instead of forcing developers to implement complex orchestration logic manually, the system manages task scheduling, worker coordination, and node optimization behind the scenes. ...
    Downloads: 2 This Week
  • 6
    ModernBERT

    Bringing BERT into modernity via both architecture changes and scaling

    ModernBERT is an open-source research project that modernizes the classic BERT encoder architecture by incorporating recent advances in transformer design, training techniques, and efficiency improvements. The goal of the project is to bring BERT-style models up to date with the capabilities of modern large language models while preserving the strengths of bidirectional encoder architectures used for tasks such as classification, retrieval, and semantic search. ModernBERT introduces...
    Downloads: 1 This Week
  • 7
    Qwen-Audio

    Chat & pretrained large audio language model proposed by Alibaba Cloud

Qwen-Audio is a large audio-language model developed by Alibaba Cloud, built to accept various types of audio input (speech, natural sounds, music, singing) along with text input, and to output text. There is also an instruction-tuned version, Qwen-Audio-Chat, which supports multi-round conversational interaction, combined audio and text input, creative tasks, and reasoning over audio. The model is trained with a multi-task objective spanning more than 30 audio tasks and achieves strong performance across multiple benchmarks...
    Downloads: 0 This Week
  • 8
    Canopy

    Retrieval Augmented Generation (RAG) framework

    Canopy is an open-source retrieval-augmented generation (RAG) framework developed by Pinecone to simplify the process of building applications that combine large language models with external knowledge sources. The system provides a complete pipeline for transforming raw text data into searchable embeddings, storing them in a vector database, and retrieving relevant context for language model responses. It is designed to handle many of the complex components required for a RAG workflow, including document chunking, embedding generation, prompt construction, and chat history management. Developers can use Canopy to quickly build chat systems that answer questions using their own data instead of relying solely on the pretrained knowledge of the language model. ...
    Downloads: 3 This Week
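The chunk-embed-retrieve flow at the heart of any RAG framework can be sketched end to end. This is a generic sketch, not Canopy's API: it uses a bag-of-words "embedding" and an in-memory list where Canopy would use a learned embedding model and a Pinecone vector index:

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Split text into fixed word windows. Real chunkers are token-aware
    and overlap-respecting; this is the simplest possible version."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Bag-of-words stand-in for a learned embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query - the 'R' in RAG.
    A framework would pass these hits to the LLM as prompt context."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

The framework's value is wrapping exactly these stages, plus prompt construction and chat-history management, behind one configurable pipeline.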
  • 9
    PRIME

    Scalable RL solution for advanced reasoning of language models

    ...PRIME provides training pipelines, datasets, and experimental infrastructure that allow researchers to train models with reinforcement learning tailored for reasoning improvement. The framework also includes data preprocessing utilities and example datasets such as mathematical reasoning tasks that are well suited for process-based reward signals.
    Downloads: 1 This Week
  • 10
    xLSTM

    Neural Network architecture based on ideas of the original LSTM

    ...By introducing innovations such as matrix-based memory and improved normalization techniques, xLSTM improves the ability of recurrent networks to capture long-range dependencies in sequential data. The architecture aims to provide competitive performance with transformer-based models while maintaining advantages such as linear computational scaling and efficient memory usage for long sequences. Researchers have demonstrated that xLSTM models can scale to billions of parameters and large training datasets while maintaining efficient inference speeds.
    Downloads: 1 This Week
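The matrix-based memory mentioned above can be sketched in a single step: a key/value outer product is written into a d x d memory matrix under forget and input gates, and a query reads it back. This is a greatly simplified illustration of the idea behind xLSTM's mLSTM cell: the real architecture uses learned, input-dependent gates plus normalization and stabilization terms omitted here:

```python
def outer(v, k):
    """Outer product v k^T as nested lists."""
    return [[vi * kj for kj in k] for vi in v]

def mlstm_step(C, k, v, q, f=0.9, i=1.0):
    """One step of a matrix-memory recurrence (simplified):
       C_t = f * C_{t-1} + i * v k^T   (write under scalar gates)
       h_t = C_t @ q                   (associative read-out)
    The matrix memory is what lets the cell store many key/value
    associations, unlike a classic LSTM's vector cell state."""
    d = len(v)
    kv = outer(v, k)
    C = [[f * C[r][c] + i * kv[r][c] for c in range(d)] for r in range(d)]
    h = [sum(C[r][c] * q[c] for c in range(d)) for r in range(d)]
    return C, h
```

Because the update is a fixed-size matrix recurrence rather than attention over all past tokens, compute per step stays constant, which is the source of the linear scaling claim.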
  • 11
    Huatuo-Llama-Med-Chinese

    Instruction-tuning LLM with Chinese Medical Knowledge

    ...The goal of the project is to improve the reliability and domain expertise of language models when answering medical questions or assisting with healthcare-related tasks. By combining domain-specific training data with instruction-tuning techniques, the project produces models capable of generating more accurate medical responses than general-purpose models.
    Downloads: 1 This Week
  • 12
    LangChain-Chatchat

    Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge

...The project's knowledge base is stored in a database, so initialize the database before running the project for the first time (and back up your knowledge files before doing so). Built on the open-source LLM and embedding models it supports, the project can be deployed fully offline and privately using only open-source models. It also supports calling the OpenAI GPT API, and access to additional models and model APIs will continue to be added.
    Downloads: 0 This Week
  • 13
    TigerBot

    TigerBot: A multi-language multi-task LLM

    ...The project provides both base models and chat-optimized variants that can be used for dialogue systems, question answering, and general language understanding tasks. In addition to model weights, the repository includes training scripts, inference tools, and configuration files that allow researchers and developers to reproduce experiments or fine-tune the models for specific applications.
    Downloads: 0 This Week
  • 14
    LLM Workflow Engine

    Power CLI and Workflow manager for LLMs (core package)

    ...Instead of focusing solely on chat interactions, the system is built to embed LLM calls into larger automation pipelines where model outputs can drive decision making or trigger additional processes. Developers can construct structured workflows using configuration files and integrate them with tools such as Ansible playbooks or custom scripts to automate complex tasks. The engine supports multiple AI providers through a plugin architecture, allowing connections to services like OpenAI, Hugging Face, Cohere, or other compatible APIs.
    Downloads: 0 This Week
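The core pattern of a config-driven pipeline is a list of named steps whose outputs feed forward. This sketch shows that pattern only; it is not LLM Workflow Engine's actual configuration schema, and the step registry here is hypothetical:

```python
def run_workflow(config, steps, text):
    """Run a linear workflow described by a config list. Each entry
    names a registered step; each step maps text to text, so one
    stage's output becomes the next stage's input. In a real engine
    a step might be an LLM call, an Ansible task, or a custom script."""
    for name in config:
        if name not in steps:
            raise KeyError(f"unknown step: {name}")
        text = steps[name](text)
    return text

# Hypothetical step registry; a plugin architecture would populate
# this from provider plugins (OpenAI, Hugging Face, Cohere, ...).
steps = {
    "strip": str.strip,
    "lower": str.lower,
    "first_line": lambda t: t.splitlines()[0],
}
```

With the workflow kept as data (e.g. loaded from a config file), changing the automation means editing a list, not code.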
  • 15
    LLM TLDR

    95% token savings. 155x faster queries. 16 languages

    LLM TLDR is a tool that leverages large language models (LLMs) to generate concise, coherent summaries (TL;DRs) of long documents, articles, or text files, helping users quickly understand large amounts of content without reading every word. It integrates with LLM APIs to handle input texts of varying lengths and complexity, applying techniques like chunking, context management, and multi-pass summarization to preserve accuracy even when the source is very large. The system supports both extractive and abstractive summarization styles so that users can choose whether they want condensed highlights or a more narrative paraphrase of key ideas. ...
    Downloads: 0 This Week
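The chunking and multi-pass technique described above is the classic map-reduce summarization pattern. The sketch below shows the control flow only: the stub "summarizer" keeps the first sentence where a real system would call an LLM, and the chunk size is illustrative, not LLM TLDR's actual parameter:

```python
def first_sentence(text):
    """Stub summarizer: keep the first sentence. A real pipeline would
    call an LLM here."""
    end = text.find(".")
    return text if end == -1 else text[: end + 1]

def summarize(text, chunk_words=50, summarizer=first_sentence):
    """Two-pass (map-reduce) summarization over word chunks.
    Long input is split so each piece fits the model's context window;
    per-chunk summaries are then merged and summarized once more."""
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    partials = [summarizer(c) for c in chunks]    # map pass
    return summarizer(" ".join(partials))         # reduce pass
```

The reduce pass is what preserves coherence: without it, the output is just a concatenation of fragments that never saw each other.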
  • 16
    Heretic

    Fully automatic censorship removal for language models

    ...The project can decensor many popular dense and some mixture-of-experts (MoE) models, supporting workflows that would otherwise require manual tuning. Beyond simple decensoring, Heretic includes research-oriented options for analyzing model internals and interpretability data.
    Downloads: 0 This Week
  • 17
    VLMEvalKit

    Open-source evaluation toolkit of large multi-modality models (LMMs)

    ...The toolkit provides a unified framework that allows researchers and developers to evaluate multimodal models across a wide range of datasets and standardized benchmarks with minimal setup. Instead of requiring complex data preparation pipelines or multiple repositories for each benchmark, the system enables evaluation through simple commands that automatically handle dataset loading, model inference, and metric computation. VLMEvalKit supports generation-based evaluation methods, allowing models to produce textual responses to visual inputs while measuring performance through techniques such as exact matching or language-model-assisted answer extraction.
    Downloads: 0 This Week
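The exact-matching mentioned above is the simplest metric such a harness computes. The normalization rules below are illustrative; VLMEvalKit's own rules may differ, and it can also fall back to LLM-assisted answer extraction when free-form generations don't match directly:

```python
import string

def normalize(answer):
    """Lowercase, strip punctuation and extra whitespace before comparing."""
    answer = answer.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(answer.split())

def exact_match_accuracy(predictions, references):
    """Fraction of predictions that equal the reference after
    normalization - exact-match accuracy over a benchmark."""
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(references)
```

Normalizing before comparison matters: a model answering "A dog." should score against a reference of "a dog", and without it exact match heavily undercounts correct generations.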
  • 18
    ChatGLM2-6B

    ChatGLM2-6B: An Open Bilingual Chat LLM

ChatGLM2-6B is the second-generation Chinese-English conversational LLM from ZhipuAI/Tsinghua. It upgrades the base model with GLM's hybrid pretraining objective, 1.4T tokens of bilingual pretraining data, and human preference alignment, delivering large gains on MMLU, CEval, GSM8K, and BBH. The context window extends up to 32K via FlashAttention, and Multi-Query Attention improves inference speed and memory use. The repo includes Python APIs, CLI and web demos, OpenAI-style/FastAPI servers, and quantized checkpoints for lightweight local deployment on GPUs or CPU/MPS.
    Downloads: 0 This Week
  • 19
    Prompt Poet

Streamlines and simplifies prompt design for developers

    ...By separating prompt structure from program logic, Prompt Poet encourages iterative prompt design and experimentation without requiring constant changes to application code. The framework supports dynamic prompts that adapt to runtime data, allowing developers to inject variables, context, and examples directly into templates. This approach is particularly useful in production environments where prompt consistency, maintainability, and versioning are important.
    Downloads: 0 This Week
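Separating prompt structure from program logic means the prompt lives as a template, as data, and runtime values are injected at render time. Prompt Poet itself builds on YAML and Jinja2 templates; the sketch below shows the same idea with only the standard library's `string.Template`:

```python
from string import Template

# The prompt is data, kept apart from application code, so it can be
# edited, versioned, and A/B-tested without touching any code paths.
PROMPT = Template(
    "You are $persona.\n"
    "Context:\n$context\n"
    "Question: $question"
)

def render_prompt(persona, context, question):
    """Inject runtime data (variables, retrieved context, examples)
    into the template at the moment the prompt is needed."""
    return PROMPT.substitute(persona=persona, context=context,
                             question=question)
```

Because `substitute` raises on a missing variable, a broken template fails loudly at render time instead of silently sending a malformed prompt to the model.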
  • 20
    AgentEvolver

    Towards Efficient Self-Evolving Agent System

    ...These mechanisms enable agents to continuously improve their capabilities while interacting with complex environments and tools. AgentEvolver also integrates environment sandboxes, experience management systems, and modular data pipelines to support large-scale experimentation.
    Downloads: 0 This Week
  • 21
    DriveLM

    Driving with Graph Visual Question Answering

    ...Instead of treating autonomous driving as a purely sensor-driven pipeline, DriveLM frames it as a reasoning problem where models answer structured questions about the environment to guide decision making. The system includes DriveLM-Data, a dataset built on driving environments such as nuScenes and CARLA, where human-written reasoning steps connect different layers of driving tasks. This design allows models to learn relationships between objects, behaviors, and navigation decisions through graph-structured logic.
    Downloads: 0 This Week
  • 22
    PKU Beaver

    Constrained Value Alignment via Safe Reinforcement Learning

    PKU Beaver is an open-source research project focused on improving the safety alignment of large language models through reinforcement learning from human feedback under explicit safety constraints. The framework introduces techniques that separate helpfulness and harmlessness signals during training, allowing models to optimize for useful responses while minimizing harmful behavior. To support this process, the project provides datasets containing human-labeled examples that encode both...
    Downloads: 0 This Week
  • 23
    WebGLM

    An Efficient Web-enhanced Question Answering System

    ...The system is based on the General Language Model architecture and was designed to enable language models to interact directly with web information during the question-answering process. Instead of relying solely on knowledge stored in the model’s training data, the system retrieves relevant web content and integrates it into the reasoning process. WebGLM introduces several components that coordinate this process, including a retrieval module that selects relevant web documents, a generator that produces answers, and a scoring system that evaluates the quality of generated responses. The architecture aims to improve the reliability and usefulness of AI systems that answer questions about current or external knowledge sources.
    Downloads: 0 This Week
  • 24
    code-act

    Official Repo for ICML 2024 paper

    ...This approach helps unify reasoning and action planning within large language model agents by using code as the primary interface between the model and the external world. The framework also includes training data, models, and evaluation tools designed to study how language models can become more capable autonomous agents.
    Downloads: 0 This Week
  • 25
    LongWriter

    Unleashing 10,000+ Word Generation from Long Context LLMs

    LongWriter is an open-source framework and set of large language models designed to enable ultra-long text generation that can exceed 10,000 words while maintaining coherence and structure. Traditional large language models can process large inputs but often struggle to generate long outputs due to limitations in training data and alignment strategies. LongWriter addresses this challenge by introducing a specialized dataset and training approach that encourages models to produce longer responses. The system uses an agent-based pipeline called AgentWrite that decomposes large writing tasks into smaller subtasks, allowing the model to produce long documents section by section. ...
    Downloads: 0 This Week
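The decomposition strategy above, plan an outline with per-section word budgets, then write section by section, can be sketched with stubs. In the real AgentWrite pipeline both the planner and the writer are LLM calls; everything here (function names, the even budget split, the placeholder writer) is illustrative:

```python
def plan(task, total_words, sections):
    """Stub planner: split the word budget evenly across section titles.
    In AgentWrite the LLM itself produces the outline and budgets."""
    budget = total_words // len(sections)
    return [(title, budget) for title in sections]

def write_section(title, budget):
    """Stub writer: emit a placeholder paragraph of the requested
    length. A real pipeline would prompt the model with the title, the
    budget, and the text written so far to keep sections coherent."""
    return f"## {title}\n" + " ".join(["word"] * budget)

def agent_write(task, total_words, sections):
    """Decompose one long writing task into per-section subtasks,
    write each, and concatenate - the shape of the AgentWrite idea."""
    outline = plan(task, total_words, sections)
    return "\n\n".join(write_section(t, b) for t, b in outline)
```

Each subtask stays comfortably inside the model's preferred output length, which is how the full document can exceed what the model would produce in one shot.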