Independent research into the
efficiency of reasoning.
We build compilers, benchmarks, and self-modifying architectures to enable System 2 thinking in Large Language Models, moving beyond static context to active state management.
Research Log
2025 Agenda
[Redacted] Benchmark
Nov 25, 2025 · Upcoming open benchmark (placeholder entry).
Lab History (2022–2024)
TabLib
Dataset · The world's largest open-source dataset of tabular data: 627M tables extracted for training Large Data Models.
HuggingFace ↗
Sketch
Tool · An AI code-writing assistant for pandas that understands data context via approximate summarization algorithms.
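The "approximate summarization" idea can be illustrated with a standard data sketch. The snippet below is a minimal, hypothetical example (not Sketch's actual implementation): a K-minimum-values sketch that estimates the number of distinct values in a column while keeping only k hashes in memory.

```python
import hashlib


def _hash01(value: str) -> float:
    # Map a value to a uniform float in [0, 1) via a stable hash.
    h = hashlib.md5(value.encode("utf-8")).hexdigest()
    return int(h, 16) / 16**32


def kmv_estimate_distinct(values, k=256):
    """Estimate the distinct count with a K-minimum-values sketch.

    Keep the k smallest hash values seen; if the k-th smallest is h_k,
    the distinct count is approximately (k - 1) / h_k.
    """
    mins = sorted({_hash01(str(v)) for v in values})[:k]
    if len(mins) < k:
        # Fewer than k distinct hashes observed: the count is exact.
        return len(mins)
    return int((k - 1) / mins[-1])
```

With k=256 the relative error is roughly 1/√k ≈ 6%, so a summarizer can describe a column's cardinality to a code-writing model without streaming the whole column into context.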
GitHub ↗
Julyp
Product · Data-focused AI assistant (formerly Tabby Chat and Julyp) backed by on-demand JupyterLab environments with dynamic installs, cached data pipelines, dashboards, and full iframe/canvas rendering.
TableGen
Product · Experimental smart spreadsheet where each cell runs an agent with row and column context. Agents execute in parallel to fill tables and enable fast data manipulation.
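The one-agent-per-cell, fan-out-in-parallel pattern can be sketched as follows. This is a hypothetical illustration, not TableGen's code: `cell_agent` stands in for an LLM-backed agent call, and each cell's agent receives its row and column context.

```python
from concurrent.futures import ThreadPoolExecutor


def cell_agent(row, col, row_context, col_context):
    # Stand-in for an LLM-backed agent; here it just combines its context.
    return f"{col_context[col]}({row_context[row]})"


def fill_table(rows, cols):
    """Run one agent per cell, all in parallel, each seeing its row and column."""
    row_ctx = {r: r for r in rows}  # in practice: row entity, prior cells, etc.
    col_ctx = {c: c for c in cols}  # in practice: column prompt/instructions
    with ThreadPoolExecutor() as pool:
        futures = {
            (r, c): pool.submit(cell_agent, r, c, row_ctx, col_ctx)
            for r in rows
            for c in cols
        }
    # Exiting the pool waits for all agents; collect results per cell.
    return {cell: f.result() for cell, f in futures.items()}
```

Because each cell only depends on its own row and column context, all cells can be dispatched at once, which is what makes filling a table fast relative to a single sequential agent.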
Founder | Principal Investigator
Justin Waugh
Focused on the intersection of evolutionary algorithms, compilers, and reasoning efficiency. Previously at Unsupervised and Approximate Labs (v1). Background in Physics (University of Colorado Boulder).