Loubna Ben Allal

loubnabnl

AI & ML interests

LLMs, ML for code, Synthetic data

We've just published a detailed blog post on the creation of the Cosmopedia dataset. We hope it provides insights into generating synthetic data at scale for pre-training.
https://huggingface.co/blog/cosmopedia

Here are some key takeaways:
🎯 Prompt curation is crucial: the goal is to cover many topics while minimizing duplicates.
📚 You can leverage various resources for diversity: different seed data, generation formats, and target audiences (sketched below).
⚙️ A good technical stack matters: tools like llm-swarm enable generation at scale, alongside fast model training and evaluation.

Have a good read!

⭐ Today we're releasing The Stack v2 & StarCoder2: a series of 3B, 7B & 15B code generation models trained on 3.3 to 4.5 trillion tokens of code:

- StarCoder2-15B matches or outperforms CodeLlama 34B and approaches DeepSeek-Coder-33B on multiple benchmarks.
- StarCoder2-3B outperforms StarCoderBase-15B and similarly sized models.
- The Stack v2 is a 4x larger dataset than The Stack v1, resulting in 900B unique code tokens 🚀
As always, we released everything from models and datasets to curation code. Enjoy!
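
If you want to try the models, here's a minimal sketch using transformers (the 3B checkpoint id is assumed from the collection's naming, and the prompt is just an example; swap in the 7B or 15B variant as needed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id from the StarCoder2 collection; the 7B and 15B
# variants follow the same naming pattern.
checkpoint = "bigcode/starcoder2-3b"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# The model completes code from a given prefix.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```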

🔗 StarCoder2 collection: bigcode/starcoder2-65de6da6e87db3383572be1a
🔗 Paper: https://drive.google.com/file/d/17iGn3c-sYNiLyRSY-A85QOzgzGnGiVI3/view
🔗 Blog post: https://huggingface.co/blog/starcoder2
🔗 Code Leaderboard: bigcode/bigcode-models-leaderboard