Building Mixture-of-Experts from LLaMA with Continual Pre-training
A repository that contains models, datasets, and fine-tuning resources
Unified interfaces for instruction-tuning data
Build ChatGPT over your data, all with natural language
A quick guide to trending instruction fine-tuning datasets
Auto-GPT in the browser
The first open-source Chinese LLaMA2 model
Inference code and configs for the ReplitLM model family
Training and serving large-scale neural networks