Production Grade

Scale out, not up

Vertical scaling has a ceiling. When you hit it, you face a hard migration, a new architecture, or both. Tacnode Context Lake scales horizontally without limits — add nodes, capacity increases linearly, the system stays fully online.

No scale events that require downtime. No vertical scaling ceiling you'll eventually hit. No manual rebalancing between growth phases. Horizontal scalability isn't a feature — it's the foundation.

Cluster throughput: 50k req/s
node-1: 25k req/s
node-2: 25k req/s

2 nodes — throughput scales linearly. No downtime. No rebalancing windows. No ceiling.

Most Real-Time Data Stores Don't Scale Horizontally — They Shard Manually

"Horizontal scaling" is marketed broadly. In practice, many databases that claim it still require careful key design to avoid hot spots, manual configuration when adding nodes, and a maintenance window when removing them. Scaling out becomes an engineering project, not a runtime operation.

The deeper issue is that manual sharding shifts the burden of distribution to the application layer. Engineers must think about partition keys, access patterns, and shard balance continuously. Every new data type resets the design problem. Every traffic spike exposes the assumptions that were made at schema design time.
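The burden is visible even in a toy router. Below is a minimal Python sketch of application-level sharding under a skewed workload; the key scheme, shard count, and workload mix are hypothetical, not taken from any particular database:

```python
import hashlib
from collections import Counter

NUM_SHARDS = 4  # fixed at schema-design time; changing it means re-sharding


def shard_for(key: str) -> int:
    # Application-level routing: every service touching the data must
    # agree on this function *and* on NUM_SHARDS, indefinitely.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_SHARDS


# A skewed workload: one popular key dominates traffic.
requests = ["user:42"] * 900 + [f"user:{i}" for i in range(100)]
load = Counter(shard_for(k) for k in requests)
print(load.most_common(1))  # one shard absorbs ~90% of all requests
```

However carefully the hash is chosen, traffic that concentrates on one key concentrates on one shard, and the assumptions made at schema-design time fail at runtime.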

Vertical vs. Horizontal Scaling

Vertical scaling works until it doesn't. Every instance class has a maximum size. Every maximum size has a price that grows faster than the capacity it buys. At some threshold, vertical scaling stops being an option and becomes a migration.

|  | Vertical | Horizontal |
| --- | --- | --- |
| Capacity ceiling | Hard limit — largest available instance | None — add nodes indefinitely |
| Scale-event downtime | Often required — instance resize or migration | Zero — nodes join and leave live |
| Cost efficiency | Diminishing returns — large instances cost disproportionately more | Linear — pay for exactly the capacity you use |
| Rebalancing | N/A — single node holds all data | Automatic — partitions rebalance with no operator intervention |
| Failure blast radius | Total — one node is everything | Partial — failures affect only the partitions on the failed node |

Vertical scaling

1 large server, hardware ceiling

Every upgrade is a larger instance. Every upgrade eventually hits the ceiling. The ceiling forces a migration.

Horizontal scaling

n smaller nodes, ceiling: ∞

Add nodes, capacity grows. Remove nodes, cost drops. No migration. No ceiling. System stays online.

Where Horizontal Scaling Claims Break Down

The word "scalable" is in every database's marketing. The operational reality is usually more complicated — and the gaps surface at exactly the worst moment.

Manual Sharding

hotspot formation

Pattern: Keys are distributed across shards by engineering decision, not automatically. Hot shards form when traffic concentrates on popular keys.

Cost: Hotspots throttle the shard. Rebalancing requires downtime and careful coordination. Scaling becomes an operational project.

Fixed Partition Counts

partition ceiling

Pattern: The number of partitions is set at cluster creation. Adding nodes doesn't increase parallelism beyond the original partition count.

Cost: Scaling out adds hardware but not throughput. The system appears to scale but the bottleneck moves to a different layer.

Scale-Out With Downtime

scale-event downtime

Pattern: The database requires a maintenance window to add nodes, migrate data, or update the partition map.

Cost: Growth requires scheduled downtime. Real-time systems can't pause. Engineering teams defer scaling until they're forced to act.

Asymmetric Scale-In

asymmetric elasticity

Pattern: Adding nodes is supported. Removing nodes requires manual data migration or leaves partitions unbalanced.

Cost: Cost optimization is manual. Teams overprovision because shrinking is too risky. Elastic pricing becomes theoretical.

What Real Horizontal Scalability Requires

Elastic scaling without downtime isn't a configuration option. It's an architectural property — and it requires the system to be designed for it from the beginning.

Automatic Partition Rebalancing

With it: when a node joins or leaves, partitions redistribute automatically; no operator coordinates the migration.
Without it: rebalancing is a manual operation — an engineer must plan, schedule, and execute the data movement.

Consistent Hash Distribution

With it: keys are distributed by consistent hashing; no single node is the designated owner of popular key ranges, so hot spots are structurally prevented.
Without it: range-based partitioning assigns contiguous key ranges to nodes, and popular prefixes concentrate traffic on one shard.

Linear Throughput Growth

With it: adding a node increases aggregate throughput proportionally; the system gets measurably faster with every node added.
Without it: nodes add storage capacity but not throughput — a coordination bottleneck limits how much parallelism actually helps.
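The gap between those two outcomes can be made concrete with the Universal Scalability Law. In the sketch below, the 25k req/s per-node baseline echoes the cluster figures above, while the contention and crosstalk coefficients are illustrative assumptions, not measurements of any real system:

```python
def usl_throughput(n: int, base: float = 25_000,
                   contention: float = 0.0, crosstalk: float = 0.0) -> float:
    """Universal Scalability Law: throughput of an n-node cluster given a
    per-node baseline, a serialization penalty (contention), and a
    coherency penalty (crosstalk)."""
    return base * n / (1 + contention * (n - 1) + crosstalk * n * (n - 1))


for n in (1, 2, 4, 8):
    linear = usl_throughput(n)  # no coordination cost: scales linearly
    capped = usl_throughput(n, contention=0.05, crosstalk=0.02)
    print(f"{n} nodes: linear {linear:>9,.0f} req/s, "
          f"bottlenecked {capped:>9,.0f} req/s")
```

With any nonzero coherency cost, throughput eventually peaks and then declines as nodes are added: the hardware grows, the throughput doesn't.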

Live Scale-In

With it: removing a node drains and redistributes its partitions while the cluster remains fully online; data is never at risk.
Without it: scale-in is unsupported or requires taking the cluster offline, so teams resize upward only.

How Tacnode Delivers Elastic Horizontal Scale

Tacnode separates compute from storage, which means they scale independently. Compute is organized into nodegroups — independent execution units that each operate over the same underlying storage. Adding a nodegroup increases query throughput without moving data. Adding storage capacity scales durability independently of query capacity. Neither operation requires the other.

Node additions and removals are live operations. There is no rebalancing window. There is no pause in writes. There is no operator coordinating the migration. Ingestion-heavy workloads, analytical queries, and low-latency serving paths can be expanded independently — scaling one does not provision capacity for another.

Consistent hashing prevents the hot spot problem structurally. No key range is special. No node owns a disproportionate share of popular keys. Traffic distributes across the cluster proportionally to its size, not to the access patterns of the data it holds.
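The mechanism is easy to sketch. Below is a generic consistent-hash ring with virtual nodes (an illustration of the technique, not Tacnode's implementation), showing that adding a third node to a two-node ring moves only about a third of the keys rather than all of them:

```python
import bisect
import hashlib


class HashRing:
    """Minimal consistent-hash ring with virtual nodes (illustrative only)."""

    def __init__(self, nodes, vnodes: int = 100):
        self.vnodes = vnodes
        self.ring = []  # sorted (hash, node) pairs
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def add(self, node: str) -> None:
        # Each physical node owns many small arcs of the ring, so load
        # spreads evenly and no node owns a contiguous popular range.
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    def owner(self, key: str) -> str:
        # A key belongs to the first virtual node clockwise from its hash.
        i = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[i][1]


ring = HashRing(["node-1", "node-2"])
keys = [f"key-{i}" for i in range(10_000)]
before = {k: ring.owner(k) for k in keys}
ring.add("node-3")  # live scale-out: no pause, no operator-planned migration
moved = sum(before[k] != ring.owner(k) for k in keys)
print(f"{moved / len(keys):.0%} of keys moved")  # roughly 1/3, not all
```

Scale-in is symmetric in this scheme: removing a node hands its arcs to the clockwise neighbors, and only the keys on those arcs move.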

Automatic: partition rebalancing

Zero: scale-event downtime

Linear: throughput growth

See Tacnode scale without limits

Automatic rebalancing. Linear throughput growth. Zero-downtime scale events. No manual sharding. No maintenance windows. No ceiling.