Systems Series Part 7

🧠 Preload Has Short-Term Memory. Redis Has a Nervous System.

How request-local optimization turns into system-wide coordination once Redis becomes infrastructure.

Author: Suma Manjunath
Published on: October 22, 2025

[Image: Redis Nervous System]

One-line metric: Cut “trending” cold-load from ~2s to cache hits under 200ms during spikes by moving repeated work to Redis.
For: Backend engineers, Rails devs, platform architects
Reading time: 8–10 minutes
Prerequisites: Rails 7+, Redis 6+, ActiveRecord basics (eager loading, update_all), a cache store configured
Why now (urgency): Traffic bursts + background jobs turn per-request fixes into system-wide contention unless you add cross-request memory and coordination.

TL;DR: Preload fixes N+1 inside a single request; Redis extends that memory across requests and processes. The three patterns below: a shared cache for repeated reads, a write buffer for bulk jobs, and atomic counters for coordination such as rate limiting.

⚠️ Disclaimer: All scenarios, accounts, names, and data used in examples are not real. They are realistic scenarios provided only for educational and illustrative purposes.


1. The Breakup Letter to Preload

Preload, we need to talk.

You were great when we first met. You stopped me from asking the database “who’s this user?” like a goldfish with amnesia.
But you have no long-term memory.

Every new request, you start over.
You re-run the same query, recompute the same data, and act shocked that we’ve met before.

Preload is speed-dating. Redis is marriage.
Preload optimizes the moment.
Redis remembers the relationship.


2. Redis Patterns That Replace N+1 at the System Boundary

Preload solves N+1 inside a request. Teams at scale run into a second-order version: N+1 across requests and workers—the same expensive work repeated by many processes because the system forgets between calls.

This section keeps the breakup energy, but frames Redis as infrastructure patterns (shared memory, buffering, coordination) rather than “hero moments.”

These are recurring shapes that appear as concurrency and repetition increase—not a timeline of specific incidents.
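
The examples that follow assume Redis is wired in twice: once behind Rails.cache as the cache store, and once as a raw client (referenced here as $redis) for data structures the cache API doesn't expose. A minimal setup sketch, assuming the redis gem and a REDIS_URL environment variable rather than anything from the original app:

# config/environments/production.rb: Rails.cache backed by Redis
config.cache_store = :redis_cache_store, { url: ENV.fetch("REDIS_URL", "redis://localhost:6379/1") }

# config/initializers/redis.rb: raw client for Sets, counters, pipelines
require "redis"
$redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379/1"))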

🧠 A. When 1,000 Requests Ask the Same Question

# The old way: the DB is crying softly in the corner.
def trending_posts
  Post.joins(:views, :comments, :shares)
      .where('created_at > ?', 24.hours.ago)
      .group('posts.id')
      .order(Arel.sql('COUNT(*) DESC'))  # raw SQL in order must be wrapped in Arel.sql in modern Rails
      .limit(10)
end

# The Redis way: "I got you, fam."
def trending_posts
  Rails.cache.fetch('trending_posts', expires_in: 10.minutes) do
    Post.joins(:views, :comments, :shares)
        .where('created_at > ?', 24.hours.ago)
        .group('posts.id')
        .order(Arel.sql('COUNT(*) DESC'))
        .limit(10)
        .to_a  # materialize the records so the results, not the lazy relation, get cached
  end
end

One user pays the full 2-second penalty. The next 999? Instant.

System lens: This isn’t just caching. It’s shared memory — like the system’s hippocampus. Every request doesn’t need to rediscover the same truth, as long as you’re explicit about how long that truth is allowed to live (TTL) and what staleness you’re willing to tolerate.
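
If staleness tolerance matters, make it explicit in the fetch call. A minimal sketch (not the article's original code): expires_in bounds how long the cached answer may live, and race_condition_ttl lets one process recompute an expired entry while the others briefly serve the stale value instead of stampeding the database.

def trending_posts
  Rails.cache.fetch(
    'trending_posts',
    expires_in: 10.minutes,           # how long this "truth" is allowed to live
    race_condition_ttl: 30.seconds    # on expiry, one caller recomputes; others serve slightly stale data
  ) do
    Post.joins(:views, :comments, :shares)
        .where('created_at > ?', 24.hours.ago)
        .group('posts.id')
        .order(Arel.sql('COUNT(*) DESC'))
        .limit(10)
        .to_a
  end
end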

⚙️ B. When Background Jobs Became a Database Bully

Picture this: you’ve got 50,000 payment records to reconcile every day. Your background job’s doing its best — but your database is quietly weeping in the corner.

❌ Stage 1: The nightmare (N+1 everything)

payment_ids.each do |id|
  payment = Payment.find(id)              # 50,000 SELECTs
  payment.update!(reconciled: true)       # 50,000 UPDATEs
end
# Database: "Why do you hate me?" 🔥

Every payment triggers its own SELECT and UPDATE. That’s 100,000 round-trips for one job — you can hear the disks crying.

🟡 Stage 2: Smarter reads, still bad writes

# Better: batch the reads with find_each
Payment.unreconciled.find_each(batch_size: 1000) do |payment|
  payment.update!(reconciled: true)  # Still 50,000 individual UPDATEs
end
# Database: "Better... but I'm still exhausted" 😓

Batched SELECTs help, but you’re still hammering the database with 50,000 UPDATE statements.

✅ Stage 3: Redis as write buffer

class ReconcileJob
  def perform
    # Batch reads from DB (find_each handles this automatically)
    Payment.unreconciled.find_each(batch_size: 1000) do |payment|
      if payment.reconcile!
        # Accumulate successful IDs in Redis Set (blazing fast)
        $redis.sadd("reconciled_today", payment.id)
      end
    end

    # 🧠 ONE bulk database write at the end
    reconciled_ids = $redis.smembers("reconciled_today")
    Payment.where(id: reconciled_ids).update_all(reconciled: true)
    
    # 🧹 Clean up Redis memory
    $redis.del("reconciled_today")
  end
end

System lens: This is the same pattern CDNs, message queues, and write-ahead logs use—fast path for accumulation, slow path for durability. Redis acts as a write buffer: you collect state in memory at lightning speed, then sync to the database once. Your DB goes from 50,000 round-trips to 1 bulk operation.
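
A hedged refinement of the same buffer, assuming the redis-rb client behind $redis (reconcile! and the unreconciled scope are the article's hypothetical domain methods): a per-run key with a TTL keeps overlapping runs from colliding and crashed jobs from leaking keys, and batching the SADDs trims the Redis round-trips as well.

require 'securerandom'

class ReconcileJob
  BUFFER_TTL = 2.hours

  def perform
    # Per-run key: concurrent runs can't share state, and the TTL guarantees
    # the set disappears even if the job dies before cleanup.
    buffer_key = "reconciled:#{SecureRandom.uuid}"

    Payment.unreconciled.in_batches(of: 1000) do |batch|
      reconciled_ids = []
      batch.each { |payment| reconciled_ids << payment.id if payment.reconcile! }
      next if reconciled_ids.empty?

      $redis.sadd(buffer_key, reconciled_ids)    # one SADD per 1,000 payments, not one per payment
      $redis.expire(buffer_key, BUFFER_TTL.to_i) # refresh the safety-net TTL on every batch
    end

    # Still ONE bulk database write at the end.
    ids = $redis.smembers(buffer_key)
    Payment.where(id: ids).update_all(reconciled: true) if ids.any?
    $redis.del(buffer_key)
  end
end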

🚦 C. Accidental DDOS? Redis to the Rescue

class ApiController < ApplicationController
  before_action :enforce_rate_limit

  def enforce_rate_limit
    # Fixed one-minute window: the key rolls over every 60 seconds
    key = "requests:#{current_user.id}:#{Time.now.to_i / 60}"
    # Atomic increment in the shared store; expires_in cleans up stale windows
    count = Rails.cache.increment(key, 1, expires_in: 1.minute)

    if count > 100
      render json: { error: "Rate limit exceeded (100 requests/min)." }, status: 429
    end
  end
end

Redis counts faster than your user can refresh. It’s atomic rate limiting — no DB overhead, no background cron, just one shared atomic counter.

System lens: That’s not middleware. It’s coordination — ephemeral state distributed across processes in real time.
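
The same counter at the raw Redis level makes the coordination explicit. A minimal sketch using the $redis client from earlier (over_rate_limit? is a hypothetical helper, not part of the original app): INCR is atomic across every process sharing the Redis instance, and the expiry is set only when a window is first created.

# Fixed-window limiter on raw Redis; same idea as the Rails.cache version above.
def over_rate_limit?(user_id, limit: 100, window: 60)
  key = "requests:#{user_id}:#{Time.now.to_i / window}"

  count = $redis.incr(key)                  # atomic, even across many app servers and workers
  $redis.expire(key, window) if count == 1  # first hit in this window sets the TTL

  count > limit
end

# Usage in a controller filter (hypothetical):
#   head :too_many_requests if over_rate_limit?(current_user.id)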

3. The Architecture Lesson Hidden in Rails.cache

Most developers think of Redis as a key-value store. But what it really is — is memory with perspective.

Rails’ includes gives you short-term clarity. Redis gives your whole system long-term coordination.

Layer        | What it remembers | Lifetime          | Scope
-------------|-------------------|-------------------|-----------
Preload      | Query results     | 1 request         | Local
Redis cache  | Computed data     | Configurable TTL  | Global
Database     | Truth             | Forever           | Persistent

“Every scalable system eventually separates truth from time. Databases hold the truth. Redis holds the time.”
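
The layers compose rather than compete. A small sketch, assuming a hypothetical Post/Comment pair of models: includes handles N+1 within the one request that misses the cache, Rails.cache handles the repetition across requests, and the database stays the truth the cache is rebuilt from.

def homepage_posts
  # Redis layer: cross-request memory with an explicit lifetime.
  Rails.cache.fetch('homepage_posts:v1', expires_in: 5.minutes) do
    # Preload layer: on a cache miss, avoid N+1 on comments within this request.
    # Database layer: still the source of truth the cache is refilled from.
    Post.includes(:comments)
        .order(created_at: :desc)
        .limit(20)
        .to_a
  end
end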

🧭 Architecture Diagram — “How Redis Turns a Request into a System”

ASCII Version


                           ┌──────────────────────────────┐
Request ──▶ App Server ───▶│        REDIS  (CACHE)        │
           (Rails App)     │  Cross-request memory        │
                           │  TTL · eviction · atomic ops │
                           └──────────┬──────────┬────────┘
                                      │          │
                            Cache MISS│          │Cache HIT
                                      │          └──▶ (return cached value)
                                      ▼
                           ┌──────────────────────────────┐
                           │   DB  (SOURCE OF TRUTH)      │
                           │   Persistent truth / SQL     │
                           └──────────────┬───────────────┘
                                          │
                                          ▼
                            (query DB, refresh cache, return)
  
Mermaid Version

flowchart LR
    %% Lifetimes
    subgraph R["Request Scope (Preload - within a single request)"]
        A["Client Request"]
        B["App (Rails)"]
    end
    subgraph C["Cross-Request Memory (Redis - TTL, eviction, atomic ops)"]
        RDS["Redis Cache"]
    end
    subgraph D["Persistent Truth (Database)"]
        DB["Primary Database"]
    end

    A --> B --> RDS
    RDS -->|"HIT -> return cached value"| B
    RDS -->|"MISS"| DB
    DB -->|"Fetch source of truth"| B
    B -->|"Refresh cache (SETEX TTL)"| RDS
    B -->|"Response"| A

    %% Coordination across processes
    subgraph P["Coordination Across Processes"]
        W1["App Server #1"]
        W2["App Server #2"]
        J["Background Jobs"]
    end
    W1 -.->|"shared cache / locks"| RDS
    W2 -.->|"shared counters / rate limits"| RDS
    J -.->|"idempotency keys / buffers"| RDS

🪄 Caption:

Preload = short-term memory inside one request. Redis = cross-request memory and coordination layer. DB = durable truth.

Together they form a nervous system — neurons (requests) firing, Redis as the synapses, and the DB as the memory core.

4. The Actually Good Analogy

Preload is your brain during a conversation. You remember what was said 10 seconds ago so you don’t repeat yourself.

Redis is your brain’s long-term memory. It remembers what you learned yesterday, across rooms and contexts. You’re not starting every request with “who am I, where am I, what’s my schema again?”

That’s not optimization — that’s continuity of consciousness.

5. The Systems Takeaway

Redis isn’t a Rails add-on. It’s a design philosophy hiding behind a gem. The moment your system shares memory across processes, it stops being “an app” and starts being a distributed organism.

“At scale, the difference between latency and serenity is usually memory.”

Preload can be smart in the moment. Redis can be wise across time. Together, they can turn code into a system.

But here’s what nobody tells you about system memory: It can lie. It can disappear. It can stampede.

Next Redis article in the Systems Series: When Redis Turns Into the Hulk

