# [Draft] Simplify for Scale
![[Draft] Simplify for Scale](/images/changelog/2026-02-11-v4-preview/v4-banner.png)
We've rebuilt Langfuse's infrastructure to address performance bottlenecks at scale.
After processing billions of events per month, we analyzed our query patterns and are introducing changes that allow us to scale with our customers.
The new architecture delivers faster charts and APIs through a simplified data model that eliminates expensive joins and reduces query complexity.
## Changes

### Observation-centric data model
The data model now centers on observations as the primary concept.
Context attributes (user_id, session_id, metadata) set at the trace level now propagate automatically to all child spans. This eliminates the need to repeatedly update trace metadata or patch outputs across observations.
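As a minimal sketch, assuming the v3-style Python SDK surface (method names may differ in the new major version), setting context once at the root looks like this:

```python
from langfuse import get_client

langfuse = get_client()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST

with langfuse.start_as_current_span(name="agent-run") as root:
    # Set context attributes once, at the trace level ...
    root.update_trace(
        user_id="user-123",
        session_id="session-456",
        metadata={"plan": "pro"},
    )

    # ... and child spans created below inherit them automatically;
    # no need to patch individual observations afterwards.
    with langfuse.start_as_current_span(name="retrieve-documents"):
        pass
    with langfuse.start_as_current_span(name="generate-answer"):
        pass

langfuse.flush()
```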
A single input and output per trace is no longer sufficient. Complex agentic workflows produce multiple observations per trace that are worth inspecting, and the new data model, combined with new filter capabilities, surfaces the relevant ones directly in the UI.
### UI performance improvements
Chart rendering time has been significantly reduced. Queries that previously took seconds or minutes now complete in milliseconds to a few seconds. Database operations have been optimized to reduce query complexity.
This applies to trace exploration, filtering, and analytics across large projects.
### New v2 API endpoints
We redesigned the observations and metrics APIs to test the new data model under the hood. They offer significant performance improvements (example after the list):
- Observations API v2: Optimized query execution with flexible filtering
- Metrics API v2: Faster aggregations across large projects
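As a hedged sketch, here is what querying observations by trace-level attributes could look like. The `/api/public/v2/observations` path and the filter parameters are assumptions based on the existing `/api/public` conventions; check the API reference for the final shape.

```python
import os
import requests

resp = requests.get(
    f"{os.environ['LANGFUSE_HOST']}/api/public/v2/observations",
    auth=(os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"]),
    params={
        "userId": "user-123",   # trace-level attributes are queryable
        "type": "GENERATION",   # directly on each observation
        "limit": 50,
    },
    timeout=30,
)
resp.raise_for_status()
for obs in resp.json()["data"]:
    print(obs["id"], obs.get("name"), obs.get("latency"))
```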
### Faster LLM-as-a-judge evaluations
Previously, each evaluation required a separate ClickHouse query, adding latency and limiting throughput.
Evaluations now run directly on observations, eliminating database query overhead (see the sketch after this list):
- Evaluations can be executed on every observation without performance degradation
- Execution time is now limited by LLM API rate limits rather than database queries
- We support higher evaluation concurrency
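To illustrate what observation-level evaluation enables, here is a hedged sketch of an external evaluation loop. The v2 route is assumed to mirror the existing `/api/public` pattern, `judge()` is a stand-in for a real LLM-as-a-judge call, and the scores endpoint shown is the existing public one; managed evaluators are configured in the Langfuse UI.

```python
import os
import requests

host = os.environ["LANGFUSE_HOST"]
auth = (os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"])

def judge(input_text, output_text) -> float:
    """Placeholder judge: swap in an LLM call returning a 0-1 score."""
    return 1.0 if output_text else 0.0

# Fetch candidate observations (assumed v2 route).
observations = requests.get(
    f"{host}/api/public/v2/observations",
    auth=auth,
    params={"type": "GENERATION", "limit": 100},
    timeout=30,
).json()["data"]

# Score each observation individually; scores can attach to an observation.
for obs in observations:
    requests.post(
        f"{host}/api/public/scores",
        auth=auth,
        json={
            "traceId": obs["traceId"],
            "observationId": obs["id"],
            "name": "helpfulness",
            "value": judge(obs.get("input"), obs.get("output")),
        },
        timeout=30,
    )
```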
## Technical changes
The rebuild is based on three core principles, illustrated in the sketch after this list:
- Immutable observations: Observations are written once and never modified, eliminating deduplication operations.
- No joins: Trace-level attributes propagate to observations in the SDK. Queries run on a single table without joins.
- Observation-centric model: Observations are the primary data source for APIs and UI.
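Purely for illustration (the field names are a sketch, not the actual ClickHouse schema), a denormalized, write-once observation record could look like this:

```python
# Illustrative only: trace-level context is written inline on every
# observation at ingestion, so reads never join against a traces table.
from dataclasses import dataclass, field

@dataclass(frozen=True)  # frozen mirrors the write-once, never-modified principle
class Observation:
    id: str
    trace_id: str
    name: str
    type: str                      # e.g. "SPAN" or "GENERATION"
    start_time: str
    # Trace-level attributes, duplicated onto each observation by the SDK:
    user_id: str | None = None
    session_id: str | None = None
    metadata: dict = field(default_factory=dict)
```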

Max shared how we use ClickHouse to keep product performance ahead of demand at the ClickHouse Open House (recording) in Amsterdam.
## How to unlock new performance
The new architecture is being rolled out gradually; you can track rollout progress and migration updates on GitHub discussions. Here's how to access the performance improvements:
### Enable beta UI experience
In the UI, a beta toggle will be available to opt into the new experience. Note that not all UI screens have been migrated to the new data model yet. Data from older SDK versions will be delayed by 5-10 minutes.
### Upgrade SDKs
Upgrade to the latest major SDK versions to explore your data in real time:
- Python SDK: Version x.x.x or higher
- JS/TS SDK: Version x.x.x or higher
Set trace-level attributes (user_id, session_id, trace metadata) as early as possible in your instrumentation so they propagate to all downstream spans automatically.
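For example, with the v3-style decorator API (a hedged sketch; method names may differ in the new major version), set trace-level context as the first step in the root observation, before any child spans start:

```python
from langfuse import observe, get_client

langfuse = get_client()

@observe()
def handle_request(question: str, user_id: str, session_id: str) -> str:
    # First thing in the root observation: set trace-level context
    # so every downstream observation inherits it.
    langfuse.update_current_trace(user_id=user_id, session_id=session_id)
    return answer(question)  # child observations created below inherit the context

@observe()
def answer(question: str) -> str:
    return "42"
```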
### Migrate LLM-as-a-judge evaluations
Upgrade your LLM-as-a-judge evaluations to run at the observation level instead of the trace level for significantly faster execution.
### Adopt new v2 API endpoints
Use the new Observations API v2 and Metrics API v2 endpoints to retrieve data with improved performance.
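As with the observations example above, this is a hedged sketch: it assumes Metrics API v2 keeps the JSON query shape of the current metrics endpoint, and the exact path and fields may differ.

```python
import json
import os
import requests

# Count generations per observation name over a time window (assumed query shape).
query = {
    "view": "observations",
    "metrics": [{"measure": "count", "aggregation": "count"}],
    "dimensions": [{"field": "name"}],
    "fromTimestamp": "2026-02-01T00:00:00Z",
    "toTimestamp": "2026-02-11T00:00:00Z",
}

resp = requests.get(
    f"{os.environ['LANGFUSE_HOST']}/api/public/v2/metrics",
    auth=(os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"]),
    params={"query": json.dumps(query)},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```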