Ethos’s $22.75 million Series A, led by a16z, is a funding story with a technical subtext: the company is betting that expert networks can be rebuilt around richer signals than resumes, titles, and profile keywords.

That matters because the traditional workflow for finding domain expertise is still surprisingly blunt. A company searches a database, filters on job title, and hopes the resulting shortlist reflects real, project-specific knowledge. Ethos is trying to replace that static intake model with voice onboarding that elicits more context from experts up front, then converts those responses into a structured, queryable knowledge graph.

If that data model holds up at scale, the implications go beyond a better directory. It changes what an expert network can represent, how it can be queried, and how reliably it can surface people for narrowly defined work.

From resumes to a knowledge graph

The pitch behind voice onboarding is not just that speaking comes more naturally to people than filling out a form. It is that a conversational intake flow can capture data beyond job titles: domains worked in, types of problems solved, product contexts, company stage, technical depth, and the relationships between those experiences.

That is a different architectural starting point from legacy expert networks. Instead of indexing a sparse profile around a title and employer history, Ethos appears to be building a richer schema that can support a knowledge graph. In practice, that means the system can connect an expert to multiple domains and infer relationships between them, rather than flattening them into a single role label.

The distinction matters for discovery. A title such as “product manager” does not tell a company whether someone has experience with fintech compliance, infrastructure tooling, or startup finance automation. A graph built from voice-derived signals can preserve those distinctions and make them queryable.
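To make the distinction concrete, here is a minimal sketch of that kind of typed-edge graph. Everything in it is hypothetical: the expert names, relation types, and values are illustrative assumptions, not Ethos's actual schema, which has not been published.

```python
from collections import defaultdict

# Minimal illustrative graph: each expert node carries typed edges to
# attribute nodes, so the title alone never has to carry the signal.
# All names and relation types here are hypothetical.
edges = defaultdict(list)

def add_edge(expert, relation, value):
    edges[expert].append((relation, value))

add_edge("expert_1", "title", "product manager")
add_edge("expert_1", "domain", "fintech compliance")
add_edge("expert_1", "company_stage", "seed")

add_edge("expert_2", "title", "product manager")
add_edge("expert_2", "domain", "infrastructure tooling")
add_edge("expert_2", "company_stage", "growth")

def find(relation, value):
    """Return experts with a matching typed edge, not just a title match."""
    return sorted(e for e, rels in edges.items() if (relation, value) in rels)

# Both experts share a title, but only one matches the domain query.
print(find("title", "product manager"))      # both experts
print(find("domain", "fintech compliance"))  # only expert_1
```

The point of the toy is the shape of the query: filtering on a `domain` edge separates two people a title-only index would treat as interchangeable.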

That is especially relevant in AI workflows, where the input prompt is often already a natural-language problem statement. If the underlying expert database is also represented in a structured way, the system can move from coarse search to semantic retrieval.

Matching gets more interesting when the data gets richer

Ethos says its model helps answer complex client questions such as finding people who worked at a funded startup backed by top-tier investors and solved finance automation problems. That example is revealing because it is not a simple filter on one attribute. It requires multiple dimensions: company type, investor context, functional problem, and inferred subject-matter relevance.

This is where NLP-enabled matching becomes more than a marketing phrase. A natural-language query can be parsed into candidate constraints, then mapped to embeddings and graph traversals that rank experts by relevance across several signals at once. If the company’s knowledge graph is detailed enough, the matcher can move from “people with this title” to “people whose experience suggests they can answer this specific question.”
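The first step of that pipeline, turning a free-form request into structured constraints, can be sketched in a few lines. The phrase lists and constraint fields below are illustrative assumptions, not Ethos's parser; a production system would more plausibly use an LLM or a trained entity extractor rather than regular expressions.

```python
import re

# Toy constraint extraction from a natural-language client request.
# Fields and phrase patterns are made up for illustration.
PATTERNS = {
    "company_type": r"(funded startup|public company)",
    "investor_context": r"(top-tier investors|angel-backed)",
    "problem": r"(finance automation|fintech compliance)",
}

def parse_query(text):
    """Map a free-form request onto whichever constraint fields it mentions."""
    text = text.lower()
    constraints = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            constraints[field] = match.group(1)
    return constraints

query = ("Find people who worked at a funded startup backed by "
         "top-tier investors and solved for finance automation")
print(parse_query(query))
```

Once the request is decomposed this way, each constraint can be routed to the retrieval mechanism best suited to it: hard attributes to graph filters, fuzzy ones to embedding search.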

Technically, that creates room for a hybrid retrieval stack. Embeddings can capture semantic similarity across different ways of describing the same skill set. Graph structure can preserve hard relationships such as employer, domain, project type, or peer connections. And ranking can combine both, giving the system a way to handle fuzzy, high-context requests without reducing them to keyword search.
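A hybrid ranker of that kind can be sketched as a hard graph filter followed by an embedding-similarity sort. The vectors, expert names, and domain labels below are fabricated for illustration; this is one plausible shape for such a stack, not a description of Ethos's system.

```python
import math

# Hybrid ranking sketch: a hard graph constraint prunes candidates,
# then cosine similarity over toy embeddings orders the survivors.
# All data here is made up for illustration.
EXPERTS = {
    "expert_1": {"vec": [0.9, 0.1, 0.2], "domains": {"finance automation"}},
    "expert_2": {"vec": [0.8, 0.3, 0.1], "domains": {"infrastructure tooling"}},
    "expert_3": {"vec": [0.2, 0.9, 0.4], "domains": {"finance automation"}},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank(query_vec, required_domain):
    # Graph edge acts as a hard filter; embeddings rank by semantic fit.
    candidates = [(name, cosine(query_vec, e["vec"]))
                  for name, e in EXPERTS.items()
                  if required_domain in e["domains"]]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

# expert_2 is pruned by the domain constraint despite a similar vector;
# expert_1 outranks expert_3 on semantic similarity.
print(rank([1.0, 0.0, 0.1], "finance automation"))
```

The design point is the division of labor: relationships that must hold exactly live in the graph, while "different words, same skill set" lives in the embedding space.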

The upside is obvious: better signal quality for clients trying to source advice for specialized AI projects, product decisions, or market mapping exercises. The harder part is keeping that signal trustworthy as the network grows.

Scale will depend on onboarding, verification, and governance

Voice onboarding may improve data richness, but it also raises the difficulty of operating the network reliably. The more attributes the system collects, the more opportunities there are for inconsistency, ambiguity, and drift. Experts may describe similar work in different ways. Terms may be interpreted differently across industries. Some answers will be useful but hard to normalize.

That puts data quality and governance at the center of the product, not in the background. Ethos will need rules for how voice-derived responses are transcribed, structured, reviewed, and updated over time. It will need verification mechanisms that distinguish between self-reported expertise and evidence-backed experience. And it will need a way to manage privacy expectations when onboarding captures more conversational detail than a standard form ever would.
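One concrete version of those rules is a normalization step that maps free-text answers onto a canonical taxonomy and tags each claim by evidence status. The taxonomy entries and the `verified` flag below are illustrative assumptions about what such a pipeline could look like, not a description of Ethos's.

```python
# Toy normalization of voice-derived skill phrases. The canonical
# taxonomy and evidence model are hypothetical.
CANONICAL = {
    "automating the books": "finance automation",
    "closing the books automatically": "finance automation",
    "reg compliance for payments": "fintech compliance",
}

def normalize(raw_phrases, evidence=frozenset()):
    """Map transcribed phrases to canonical skills, flagging evidence-backed ones."""
    profile = []
    for phrase in raw_phrases:
        term = CANONICAL.get(phrase.lower())
        if term is None:
            continue  # park unmapped answers for human review rather than guessing
        profile.append({"skill": term, "verified": term in evidence})
    return profile

# Two different descriptions of the same work collapse to one canonical
# skill; only the evidence-backed claim is marked verified.
print(normalize(["Automating the books", "Reg compliance for payments"],
                evidence={"finance automation"}))
```

Even this toy surfaces the governance questions in the text: who maintains the taxonomy, what happens to unmapped answers, and what counts as evidence for the `verified` bit.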

Those concerns are not unique to Ethos, but the company’s approach makes them more visible. An expert network that relies on richer behavioral and contextual data is only as strong as the integrity of the pipeline that produces it. If the onboarding conversation is noisy or incomplete, the graph will be noisy or incomplete too.

For enterprise deployment, that means the product strategy is as much about controls as it is about matching quality. Teams buying advisory access will want to know how data is stored, what is inferred versus directly stated, how often profiles are refreshed, and how the system handles consent.

A different competitive axis for expert networks

Ethos is entering a market already defined by incumbents such as GLG, Third Bridge, and AlphaSights, where the core product has historically been access to vetted professionals and the operational machinery around that access. The company’s differentiator is not merely that it is using AI. It is that it is changing the structure of the underlying data.

That could matter in two ways. First, richer intake may produce more precise matches than systems anchored to titles and static forms. Second, the same data architecture could support adjacent tooling: better internal search, more specific recommendations, and potentially more automation around client requests.

In other words, the value proposition is not only a better expert network. It is a more machine-readable one.

For product teams and developers, that is the interesting part of the Ethos raise. If voice onboarding becomes a reliable way to build a knowledge graph around human expertise, then expert networks start to look less like directories and more like structured retrieval systems. That changes the design problem from profile collection to semantic modeling.

The risks are the point, not a footnote

The same properties that make the model attractive also make it fragile. Voice-derived data can be richer, but it can also be harder to audit. NLP-enabled matching can be more flexible, but it can also produce opaque rankings if the underlying signals are not well governed. A broader knowledge graph can increase recall, but only if the taxonomy and verification logic are disciplined enough to prevent false confidence.

That is why the governance layer is not optional. An expert network built on expanded data beyond job titles needs a clear approach to consent, retention, access control, and review. It also needs ongoing attention to bias: if the system learns from a narrow slice of experts or overweights certain signals, it can reproduce the same discovery problems it was meant to solve.

Ethos’s Series A suggests investors are willing to fund that bet now. The open question is whether the operational discipline required to sustain it can keep pace with the promise of richer matching. If it can, voice onboarding may become one of the more consequential interface changes in the expert network market: not because it sounds novel, but because it changes what the network knows about the people inside it.