A robust ecosystem of tools, from spreadsheets to regression analyses to SQL, makes structured data easy to work with. But these tools fall short when it comes to critical unstructured data: support tickets, customer conversations, financial disclosures, product reviews, and more.
The rise of Large Language Models (LLMs) has enabled a wave of prompt-driven tools for unstructured data analysis and retrieval-augmented generation (RAG). But these tools often lack the structure, extensibility, and statistical rigor of traditional tabular methods, making them hard to audit or integrate into established workflows.
Sturdy Statistics bridges that gap. It transforms unstructured text into structured, interpretable data using models that are transparent, verifiable, and robust. You don’t need to write prompts, tune embeddings, or trust a black box. Every insight can be inspected, audited, and traced back to specific passages in your data.
Analysts can analyze granular natural language data with SQL, with confidence in how the outputs were generated and the ability to easily verify every datapoint; the sketch below illustrates what that looks like.
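As a hedged illustration of querying text-derived structure with SQL (a minimal sketch, not the Sturdy Statistics API: the DataFrame, its column names, and the toy values are all hypothetical), the snippet below runs ordinary SQL over a table of per-document topic annotations with DuckDB.

import duckdb
import pandas as pd

# Hypothetical structured output: one row per (document, topic) annotation.
# Column names and values are illustrative only, not the SDK's actual schema.
annotations = pd.DataFrame({
    "doc_id":     ["GOOG_2024_Q1", "GOOG_2024_Q1", "NVDA_2024_Q2"],
    "topic":      ["Cloud Revenue", "Advertising", "Data Center Demand"],
    "prevalence": [0.42, 0.18, 0.61],
})

# Plain SQL over the structured annotations: auditable, reproducible, and easy
# to drop into an existing analytics workflow.
top_topics = duckdb.query("""
    SELECT topic, AVG(prevalence) AS avg_prevalence
    FROM annotations
    GROUP BY topic
    ORDER BY avg_prevalence DESC
""").to_df()
print(top_topics)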
In the following walkthrough, we introduce Sturdy Statistics’ ability to reveal structured insights from unstructured data, not with RAG or LLM black boxes but with rigorous statistical analysis that leverages traditional tabular data structures. We will analyze the past two years of earnings calls from Google, Microsoft, Amazon, NVIDIA, and META.
The core building block in the Sturdy Statistics NLP toolkit is the Index. Each Index is a set of documents and metadata that has been structured, or “indexed,” by our hierarchical Bayesian probability mixture model. Below, we connect to an Index that has already been trained by our earnings transcripts integration.
from sturdystats.index import Index  # Sturdy Statistics Python SDK (import path assumed)

# Connect to an Index that has already been trained on the earnings transcripts.
index = Index(id="index_c6394fde5e0a46d1a40fb6ddd549072e")
Found an existing index with id="index_c6394fde5e0a46d1a40fb6ddd549072e".
Topic Search
The first API we will explore is the topicSearch API. It provides a direct interface to the high-level themes our Index extracts. You can call it with no arguments to get a list of topics ordered by how often they occur in the dataset (prevalence). The resulting data is a structured rollup of all the data in the corpus: it aggregates the topic annotations across each word, paragraph, and document and generates high-level semantic statistics.
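A minimal sketch of that call is shown below; the method name comes from the API described above, while the return shape (treated here as a list of per-topic records) is an assumption.

# Calling with no arguments returns every topic, ordered by prevalence.
topics = index.topicSearch()

# Peek at the most prevalent themes; each record carries the topic's
# aggregated semantic statistics.
for topic in topics[:5]:
    print(topic)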
We visualize this thematic data in the sunburst below. The inner circle of the sunburst is the title of the plot, the middle layer is the topic groups, and the leaf nodes are the topics that belong to the corresponding topic group. The size of each node is proportional to how often it shows up in the dataset.
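To make the structure of the sunburst concrete, here is a hedged sketch using Plotly Express; the DataFrame columns (topic_group, topic, prevalence) and the toy values are stand-ins for the fields returned by topicSearch, not the actual schema.

import pandas as pd
import plotly.express as px

# Illustrative rollup: each row is a topic nested under its topic group.
df = pd.DataFrame({
    "topic_group": ["AI", "AI", "Cloud", "Cloud"],
    "topic":       ["GPU Supply", "Model Training", "Cloud Margins", "Data Centers"],
    "prevalence":  [0.21, 0.15, 0.18, 0.12],
})

# Inner circle: plot title; middle layer: topic groups; leaves: topics.
# Node size is proportional to prevalence.
fig = px.sunburst(
    df,
    path=[px.Constant("Earnings Calls"), "topic_group", "topic"],
    values="prevalence",
)
fig.show()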