Valmik
What if agents could map an enterprise on their own?
The Problem
Every enterprise AI deployment starts with the same expensive bottleneck: someone has to define what this company's world looks like.
A consulting firm sends a team for six weeks. They interview stakeholders, trace processes, and hand-build an ontology — the formal definition of what entities exist (Customer, Deal, Support Ticket), how they relate (Account Owner manages Customer, Deal requires Approval), and what concepts matter to this specific organization. Palantir charges millions for this. It takes weeks or months. And the output is a static artifact that starts decaying the moment people change how they work.
This isn't a luxury problem. Every company building context graphs, knowledge graphs, or AI agents needs this layer. Without it, agents don't know what they're looking at. They can read a Slack message, but they don't know that "Deal Desk Review" is a concept that matters here, that it connects a Deal to an Approver, that it's a stage in this company's sales process. The ontology is what gives structure to raw enterprise data. Right now, the only way to get one is to pay consultants to build it by hand.
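For concreteness, here is a minimal sketch of what such a hand-built ontology fragment might look like, using the entities, relations, and company-specific concepts named above. The representation (Python dataclasses, field names) is our own illustration, not any particular vendor's format.

```python
from dataclasses import dataclass, field


@dataclass
class EntityType:
    """A kind of thing the organization reasons about (e.g. Customer, Deal)."""
    name: str
    description: str = ""
    standard: bool = True  # False for company-specific concepts no stock schema covers


@dataclass
class Relation:
    """A directed, named relationship between two entity types."""
    subject: str
    predicate: str
    object: str


@dataclass
class Ontology:
    entities: list[EntityType] = field(default_factory=list)
    relations: list[Relation] = field(default_factory=list)


# A tiny hand-built fragment using the examples from the text.
fragment = Ontology(
    entities=[
        EntityType("Customer"),
        EntityType("Deal"),
        EntityType("Support Ticket"),
        EntityType("Account Owner"),
        EntityType("Approver"),
        # Company-specific: lives in how this team uses Slack, not in any configured CRM field.
        EntityType("Deal Desk Review", "Stage where deals get pricing sign-off", standard=False),
    ],
    relations=[
        Relation("Account Owner", "manages", "Customer"),
        Relation("Deal", "requires", "Approval"),
        Relation("Deal Desk Review", "connects", "Deal"),
        Relation("Deal Desk Review", "involves", "Approver"),
    ],
)
```

Consultants produce something equivalent to this by hand, entity by entity, after weeks of interviews.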
Researchers have tried to automate pieces of this. Systems like OntoGenix and Ontogenia can generate ontology fragments from documents or user stories. But they all require someone to provide the input — either curated documents or hand-written requirements describing what the ontology should cover. The human bottleneck doesn't disappear. It just moves upstream. Nobody has shown that agents can discover a company's ontology directly from its live systems, without being told what to look for.
What We're Exploring
We think the ontology is already in the data. It's encoded in how people use their tools — the patterns in Slack conversations, the informal processes embedded in CRM workflows, the tribal knowledge baked into folder structures and channel names. The question is whether an agent can find it.
The picture we're working toward: you give an agent API credentials to a company's Slack, CRM, and document store. It explores autonomously. It comes back with a formal definition — here are the entity types that matter to this organization, here are the relationships between them, here are the company-specific concepts that don't exist in any standard schema. A "Deal Desk Review" that only exists in how this team uses Slack. A "Technical Validation" stage that lives in informal process, not in any configured CRM field. Structure that no API schema will tell you about.
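As a rough sketch of that exploration loop, the code below shows one pass of enumerate-sample-propose over a set of read-only service connectors. The `Connector` interface and the `propose` callback stand in for real Slack/CRM/document APIs and an LLM call; they are illustrative assumptions, not the actual system.

```python
from typing import Protocol


class Connector(Protocol):
    """Assumed read-only interface per service (Slack, CRM, document store)."""
    def list_collections(self) -> list[str]: ...          # channels, CRM object types, folders
    def sample(self, collection: str, n: int) -> list[str]: ...  # raw records as text


def explore(connectors: dict[str, Connector], propose, max_samples: int = 50) -> dict:
    """One pass of autonomous exploration: enumerate what each API exposes,
    sample raw records, and ask a model (`propose`) to name the entity types,
    relations, and company-specific concepts the samples imply."""
    evidence: list[tuple[str, str, str]] = []
    for service, conn in connectors.items():
        for collection in conn.list_collections():
            for record in conn.sample(collection, n=max_samples):
                evidence.append((service, collection, record))
    # `propose` stands in for an LLM call returning {"entities": [...], "relations": [...]}
    return propose(evidence)
```

A real agent would iterate: follow up on low-confidence concepts with targeted reads rather than finishing after a single pass.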
Getting there raises hard research questions, not least how to measure whether the agent got it right.
We're evaluating on SynCorp-2026 — a synthetic enterprise environment with real services where we control the ground truth. The test: how much of a company's conceptual structure can an agent discover on its own, starting from nothing but API access?
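One simple way to score that test is coverage of the known SynCorp-2026 ontology. The snippet below is a baseline sketch under our own assumptions: exact name matching, where a real scoring pass would need fuzzy or semantic matching, since an agent may name "Deal Desk Review" differently than the ground truth does.

```python
def coverage(discovered: set[str], ground_truth: set[str]) -> dict[str, float]:
    """Precision/recall of discovered concept names against the known ontology."""
    hits = discovered & ground_truth
    return {
        "precision": len(hits) / len(discovered) if discovered else 0.0,
        "recall": len(hits) / len(ground_truth) if ground_truth else 0.0,
    }


# Example: the agent found 3 of 4 ground-truth concepts plus one spurious entity.
print(coverage(
    {"Customer", "Deal", "Deal Desk Review", "Weekly Sync"},
    {"Customer", "Deal", "Deal Desk Review", "Technical Validation"},
))
# {'precision': 0.75, 'recall': 0.75}
```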