On Ground Labs is charting a launch for 2026, grounded in field work with students, research partners, and public institutions. This is Tanay Pratap's initiative and, while the work is still exploratory, we are intentionally not seeking external funding yet.
The blueprint for the coming months is evolving through a set of academic and partnership experiments:
- Q4 2025: Train small language models from scratch, then apply LoRA supervised finetuning, and finally RL finetuning, to understand the entire stack from first principles.
- Partnerships: Explore collaborations with budget schools (≤ ₹20K monthly fee) to ground pilots in classrooms. If you can connect such institutions, please reach out.
- Applied research: Benchmark and finetune current SLMs like Qwen and Gemma for Socratic learning of mathematics and Python, and probe agentic development and deployment patterns.
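To make the training-stack experiment above concrete: the core idea behind LoRA finetuning is to freeze the pretrained weights and learn only a low-rank update. A minimal sketch in NumPy, with toy dimensions chosen purely for illustration (not any actual On Ground Labs code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy model dimension and LoRA rank (assumed values)

W = rng.normal(size=(d, d))         # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialised
alpha = 4                           # LoRA scaling factor

def lora_forward(x):
    # Base layer output plus the low-rank update, scaled by alpha / r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d))
# With B initialised to zero the adapter starts as a no-op,
# so the finetuned model initially matches the base model exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only A and B would be updated during finetuning, which is what makes the approach cheap enough to run on small models and modest hardware.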
The plan is to consolidate findings through Q4 2025 and early Q1 2026, then publish the work and deploy solutions with partner institutions. If you want to contribute, write to tanay.mit@gmail.com.