Quickstart

A robust ecosystem of tools, from spreadsheets to regression analyses to SQL, makes structured data easy to work with. But these tools fall short on critical unstructured data: support tickets, customer conversations, financial disclosures, product reviews, and more.

The rise of Large Language Models (LLMs) has enabled a wave of prompt-driven tools for unstructured data analysis and retrieval-augmented generation (RAG). But these tools often lack the structure, extensibility, and statistical rigor of traditional tabular methods, making them hard to audit or integrate into established workflows.

Sturdy Statistics bridges that gap, transforming unstructured text into structured, interpretable data using models that are transparent, verifiable, and robust. You don't need to write prompts, tune embeddings, or trust a black box: every insight can be inspected, audited, and traced back to specific passages in your data.


Sturdy Statistics' automatic structure enables:

All with confidence in how the outputs were generated, and with the ability to easily verify every datapoint.

In the following walkthrough, we introduce Sturdy Statistics' ability to reveal structured insights from unstructured data: not with RAG or LLM black boxes, but with rigorous statistical analysis that leverages traditional tabular data structures. We will analyze the past two years of earnings calls from Google, Microsoft, Amazon, NVIDIA, and Meta.

Resources

To follow along:

For a deeper dive, explore:

The Index Object

The core building block in the Sturdy Statistics NLP toolkit is the Index. Each Index is a set of documents and metadata that has been structured, or "indexed," by our hierarchical Bayesian mixture model. Below, we connect to an Index that has already been trained by our earnings-transcripts integration.

# Connect to the pre-trained earnings-calls Index by its id.
index = Index(id="index_c6394fde5e0a46d1a40fb6ddd549072e")
Found an existing index with id="index_c6394fde5e0a46d1a40fb6ddd549072e".

Topic Search

The first API we will explore is topicSearch. This API provides a direct interface to the high-level themes our Index extracts. Called with no arguments, it returns a list of topics ordered by how often they occur in the dataset (prevalence). The result is a structured rollup of all the data in the corpus: it aggregates the topic annotations across every word, paragraph, and document and generates high-level semantic statistics.


Mentions is the number of paragraphs in which the topic occurs. Prevalence is the percentage of all data in the corpus that the topic comprises.
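These two statistics can be illustrated with a toy sketch. This is purely illustrative, not the Sturdy Statistics implementation: we assume each paragraph carries per-word topic assignments, mentions counts the paragraphs containing a topic, and prevalence is the topic's share of all word-level assignments.

```python
# Toy corpus: each paragraph is a list of per-word topic assignments.
# (Hypothetical data for illustration only.)
paragraphs = [
    ["cloud", "cloud", "ads"],
    ["ads", "ads", "ads"],
    ["cloud", "ai", "ai"],
]

def mentions(topic):
    # Number of paragraphs in which the topic occurs at least once.
    return sum(topic in p for p in paragraphs)

def prevalence(topic):
    # Share of all word-level assignments that belong to the topic.
    total = sum(len(p) for p in paragraphs)
    return sum(p.count(topic) for p in paragraphs) / total

mentions("cloud")    # 2: "cloud" appears in the first and third paragraphs
prevalence("cloud")  # 3/9: three of the nine word assignments are "cloud"
```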

topic_df = index.topicSearch()
topic_df.head()[["topic_id", "short_title", "topic_group_short_title", "mentions", "prevalence"]]
   topic_id  short_title                        topic_group_short_title     mentions  prevalence
0  159       Accelerated Computing Systems      Technological Developments  359.0     0.042775
1  139       Consumer Behavior Insights         Growth Strategies           585.0     0.033129
2  108       Cloud Performance Metrics          Investment and Financials   157.0     0.026985
3  115       Zuckerberg on Business Strategies  Corporate Strategy          420.0     0.026971
4  127       Comprehensive Security Solutions   Investment and Financials   146.0     0.023265
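Because topicSearch returns a pandas DataFrame, standard tabular tools apply directly. As a sketch, reconstructing the rows shown above, we can roll prevalence up from topics to topic groups with an ordinary groupby:

```python
import pandas as pd

# Reconstruct the head of the topicSearch results shown above.
topic_df = pd.DataFrame({
    "topic_group_short_title": [
        "Technological Developments", "Growth Strategies",
        "Investment and Financials", "Corporate Strategy",
        "Investment and Financials",
    ],
    "prevalence": [0.042775, 0.033129, 0.026985, 0.026971, 0.023265],
})

# Aggregate topic prevalence up to the topic-group level.
group_df = (
    topic_df.groupby("topic_group_short_title")["prevalence"]
    .sum()
    .sort_values(ascending=False)
)
print(group_df)
```

Among these top five topics, the two "Investment and Financials" topics combine to the largest group-level share. The same pattern works on the full DataFrame returned by the live API.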

Hierarchical Visualization

We visualize this thematic data in the sunburst chart below. The inner circle of the sunburst is the title of the plot, the middle layer is the topic groups, and the leaf nodes are the topics that belong to each topic group. The size of each node is proportional to how often it appears in the dataset.

import plotly.express as px

# Add a constant root label so every topic rolls up to a single center node.
topic_df["title"] = "Tech <br> Earnings Calls"
fig = px.sunburst(
    topic_df,
    path=["title", "topic_group_short_title", "short_title"],
    values="prevalence",
    hover_data=["topic_id", "mentions"],
)
fig = procFig(fig, height=500)  # procFig: styling helper used throughout this walkthrough
fig.show()