Small Language Models

Compact intelligence for edge deployment

In development · Launching 2026

Why Edge Models Matter

Grounded intelligence has to run where people are. That means ₹15K laptops in government schools, shared desktops in rural labs, and patchy bandwidth in community centers. Large models behind data-center APIs struggle in these settings—both economically and technically.

We're exploring tiny and small language models that can live on affordable hardware, sync opportunistically, and still deliver meaningful tutoring, research assistance, and translation support. The goal: real-time, privacy-preserving tools that extend capability without depending on hyperscale infrastructure.

Focus Areas

What we're prototyping

Curriculum-tuned models

Building compact models that are fine-tuned on open Indian curricula and reference texts, so that science and language support feels local and accurate.
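
A minimal sketch of what this could look like, using QLoRA-style parameter-efficient fine-tuning in Python. The base model, dataset, and hyperparameters here are placeholder assumptions, not our finalized recipe; fine-tuning runs on a GPU machine, and only the small adapter ships to edge devices.

```python
# Hedged sketch: QLoRA-style fine-tuning of a compact model on curriculum text.
# Model name, data, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

BASE = "Qwen/Qwen2.5-0.5B"  # assumption: any sub-1B causal LM could slot in here

tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,                     # QLoRA: 4-bit NF4 base weights
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(      # only tiny adapter matrices train
    r=16, lora_alpha=32, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
))

opt = torch.optim.AdamW(model.parameters(), lr=2e-4)
# Assumption: cleaned curriculum passages; in practice a dataset, not a list.
passages = ["Photosynthesis converts light energy into chemical energy..."]
for text in passages:
    batch = tok(text, return_tensors="pt", truncation=True, max_length=512)
    batch = {k: v.to(model.device) for k, v in batch.items()}
    loss = model(**batch, labels=batch["input_ids"]).loss  # causal-LM objective
    loss.backward()
    opt.step(); opt.zero_grad()

model.save_pretrained("curriculum-adapter")  # adapter is a few MB, cheap to ship
```

The adapter-only output matters for the sync story below: refreshing a device can mean transferring megabytes, not gigabytes.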

Low-bandwidth update loops

Designing synchronization strategies that let offline devices collect usage signals and periodically refresh without reliable connectivity.
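
The pattern we keep returning to is simple: record everything locally first, then flush opportunistically. A rough sketch of that loop follows; the sync endpoint and event schema are hypothetical placeholders, not a defined API.

```python
# Hedged sketch of an opportunistic sync loop: queue locally, flush when online.
# SYNC_URL and the event schema are hypothetical placeholders.
import json, sqlite3, time
import urllib.request

DB = sqlite3.connect("signals.db")
DB.execute("CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, payload TEXT)")

SYNC_URL = "https://example.org/sync"  # hypothetical collection endpoint

def record(event: dict) -> None:
    """Always succeeds locally, with or without connectivity."""
    DB.execute("INSERT INTO queue (payload) VALUES (?)", (json.dumps(event),))
    DB.commit()

def try_flush() -> None:
    """Best-effort upload; rows are deleted only after a confirmed 2xx."""
    rows = DB.execute("SELECT id, payload FROM queue LIMIT 100").fetchall()
    if not rows:
        return
    body = ("[" + ",".join(p for _, p in rows) + "]").encode()
    req = urllib.request.Request(SYNC_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            if 200 <= resp.status < 300:
                DB.executemany("DELETE FROM queue WHERE id = ?",
                               [(i,) for i, _ in rows])
                DB.commit()
    except OSError:
        pass  # offline or flaky link: keep the queue, retry on the next pass

record({"event": "quiz_answered", "correct": True, "ts": time.time()})
try_flush()  # run periodically, e.g. from cron or on network-change events
```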

Hardware experimentation

Stress-testing inference on affordable GPU-less machines—think Raspberry Pi clusters, Intel NUCs, and entry-level laptops—to document realistic deployment recipes.
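
As one example of the kind of recipe we want to publish, here is CPU-only inference via llama-cpp-python on a quantized GGUF model. The model file and thread count are illustrative assumptions, tuned per device in practice.

```python
# Hedged sketch: CPU-only inference with llama-cpp-python on a quantized model.
# The GGUF file name is a placeholder; quantization level is tuned per device.
from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_k_m.gguf",  # assumption: ~4-bit quantized small model
    n_ctx=2048,                      # smaller context saves RAM on Pi-class boards
    n_threads=4,                     # match the physical cores of the target machine
)

out = llm("Explain photosynthesis to a class 7 student:", max_tokens=128)
print(out["choices"][0]["text"])
```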

Current Needs

Device partners

Organisations willing to lend or benchmark on-the-ground hardware so we can publish deployment playbooks that others can reproduce.

Curriculum collaborators

Educators and researchers who can help validate fine-tuning datasets and ensure outputs align with classroom realities.

Evaluation feedback

Early testers who can measure performance, latency, and reliability in real settings, and share failure cases we need to handle before launch.
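
To make that ask concrete, this is roughly the shape of latency data we would want back. The generate callable below is a stand-in for whatever inference call a tester's device actually runs, and the percentile choices are our guess at a useful report.

```python
# Hedged sketch of a latency harness; `generate` is a placeholder for the
# device's real inference call.
import statistics, time

def benchmark(generate, prompts, runs=3):
    """Collect wall-clock latency samples and report p50/p95."""
    samples = []
    for _ in range(runs):
        for p in prompts:
            t0 = time.perf_counter()
            generate(p)
            samples.append(time.perf_counter() - t0)
    samples.sort()
    return {
        "p50_s": statistics.median(samples),
        "p95_s": samples[int(0.95 * (len(samples) - 1))],
        "n": len(samples),
    }

# Dummy call simulating a 50 ms model response, just to show the output shape.
print(benchmark(lambda p: time.sleep(0.05), ["Solve 3x + 5 = 20."]))
```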

Get Involved

This strand is part of the self-funded 2026 launch. We're building quietly with a few close partners and adding more as the roadmap firms up.

Hardware & deployment partners

Device manufacturers, labs, or NGOs interested in co-designing robust deployment recipes for classrooms and field stations.

Curriculum experts

Teachers and researchers willing to review fine-tuning data or co-create evaluation sets for regional languages.

If you can help shape it, email tanay@ongroundlabs.org

References

This work builds on foundational research in efficient fine-tuning and small, on-device language models:

Dettmers et al., 2023 – QLoRA: Efficient finetuning of quantized LLMs

Jun et al., 2024 – SLMs: Small language models for on-device intelligence

Narayanan et al., 2023 – Efficient deployment of LMs in low-resource environments