Code Tips · 11 min read

Email Queue Patterns for Next.js: Background Jobs, Rate Limiting, and Retry Logic

Build production-ready email queues for Next.js: in-memory patterns for simple apps, Redis + BullMQ for scale, serverless cron for Vercel, plus monitoring, testing, and dead letter queues.

React Emails Pro

February 28, 2026

Sending emails synchronously in a Next.js API route is fine for small apps. Once you ship to production, it's a ticking time bomb.

Your password reset handler blocks for 3 seconds while the email sends. A webhook times out waiting for 50 welcome emails to finish. An overnight batch job crashes halfway through 10,000 notifications with no way to resume.

If a request handler awaits resend.send() inline, it blocks the response, ties up compute, and turns every provider hiccup into a user-facing failure.

Here's how to build email queues that don't block requests, retry failures, and scale beyond localhost.


Why you need a queue (and when you don't)

Queues decouple email sending from request handling. Benefits:

  • Non-blocking: API routes return immediately instead of waiting for SMTP handshakes
  • Resilience: Failed sends retry automatically without user intervention
  • Rate limiting: Respect provider limits (Resend: 10/sec, SendGrid: varies by plan)
  • Observability: Track success/failure rates, see backlogs, debug issues
  • Batching: Send bulk emails without memory leaks or timeouts

When to skip the queue: Simple hobby projects sending fewer than 100 emails a day. Just use await resend.send() directly. The complexity isn't worth it yet.
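To make the blocking cost concrete, here is a toy model with no provider dependency: a 300 ms timer stands in for the real send call, a plain array stands in for the queue, and every name is illustrative.

```typescript
// Toy model: a "blocking" handler awaits the send before responding,
// while a "queued" handler hands the job off and returns immediately.
type Job = () => Promise<void>;
const pending: Job[] = [];

// Stand-in for a real provider call that takes ~300ms
const fakeSend: Job = () => new Promise((resolve) => setTimeout(resolve, 300));

async function blockingHandler(): Promise<string> {
  await fakeSend(); // the HTTP response waits on the provider
  return "ok";
}

function queuedHandler(): string {
  pending.push(fakeSend); // a worker drains `pending` out of band
  return "ok"; // returns immediately
}

const start = Date.now();
queuedHandler();
console.log(`queued handler returned after ${Date.now() - start}ms`);
await blockingHandler();
console.log(`blocking handler returned after ${Date.now() - start}ms`);
```

The queued handler's latency is independent of the provider, which is the entire benefit the rest of this post builds on.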

Pattern 1: In-Memory Queue (Simple, Fragile)

For small-scale production apps, an in-memory queue can work if you accept the tradeoff: queued jobs are lost on process restart (and serverless platforms recycle processes constantly, so this pattern only suits long-lived servers).

lib/email-queue.ts
import type * as React from "react";
import { resend } from "@/lib/resend"; // your configured email client

type EmailJob = {
  id: string;
  to: string;
  subject: string;
  react: React.ReactElement;
  retries: number;
};

class EmailQueue {
  private queue: EmailJob[] = [];
  private processing = false;
  private readonly maxRetries = 3;
  private readonly retryDelayMs = 5000;

  async enqueue(job: Omit<EmailJob, "id" | "retries">) {
    const emailJob: EmailJob = {
      ...job,
      id: crypto.randomUUID(),
      retries: 0,
    };
    
    this.queue.push(emailJob);
    console.log(`[Queue] Enqueued ${emailJob.id}`);
    
    // Trigger processing if not already running
    if (!this.processing) {
      this.process();
    }
  }

  private async process() {
    this.processing = true;

    while (this.queue.length > 0) {
      const job = this.queue.shift();
      if (!job) break;

      try {
        await this.send(job);
        console.log(`[Queue] Sent ${job.id}`);
      } catch (error) {
        console.error(`[Queue] Failed ${job.id}:`, error);

        if (job.retries < this.maxRetries) {
          job.retries++;
          console.log(`[Queue] Retry ${job.retries}/${this.maxRetries} for ${job.id}`);
          
          // Re-enqueue with exponential backoff: 5s, 10s, 20s, ...
          setTimeout(() => {
            this.queue.push(job);
            // Restart the loop if it drained and exited in the meantime
            if (!this.processing) {
              this.process();
            }
          }, this.retryDelayMs * 2 ** (job.retries - 1));
        } else {
          console.error(`[Queue] Dropped ${job.id} after max retries`);
          // TODO: Log to dead letter queue or alerting system
        }
      }
    }

    this.processing = false;
  }

  private async send(job: EmailJob) {
    const { render } = await import("@react-email/render");
    const html = await render(job.react); // render is async in @react-email/render v1+
    
    await resend.send({
      from: "noreply@yourdomain.com",
      to: job.to,
      subject: job.subject,
      html,
    });
  }
}

export const emailQueue = new EmailQueue();

Usage in an API route (the file needs a .tsx extension because the handler renders JSX):

app/api/auth/reset/route.tsx
import { emailQueue } from "@/lib/email-queue";
import PasswordResetEmail from "@/emails/password-reset";

export async function POST(req: Request) {
  const { email } = await req.json();
  
  // Validate, generate token, etc.
  const resetToken = generateResetToken(email);
  
  // Enqueue email (non-blocking)
  await emailQueue.enqueue({
    to: email,
    subject: "Reset your password",
    react: <PasswordResetEmail token={resetToken} />,
  });
  
  // Return immediately (don't wait for send)
  return Response.json({ success: true });
}
Pros: Zero dependencies, fast, simple.
Cons: Jobs lost on restart, no persistence, no distributed processing.
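The intended retry timing is exponential backoff; as a pure function it is just a doubling schedule (the 5 s base and factor of 2 are illustrative choices, matching the class's retryDelayMs):

```typescript
// Exponential backoff: the delay doubles with each retry attempt.
function backoffDelay(retry: number, baseDelayMs = 5000): number {
  return baseDelayMs * 2 ** (retry - 1);
}

// retry 1 → 5s, retry 2 → 10s, retry 3 → 20s
console.log([1, 2, 3].map((r) => backoffDelay(r))); // [ 5000, 10000, 20000 ]
```

Capping the delay (e.g. Math.min(backoffDelay(retry), 60_000)) is worth adding once retry counts grow, so a long outage doesn't push retries hours out.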

Pattern 2: Redis Queue (Production-Ready)

For real production apps, use Redis + BullMQ. It persists jobs, supports distributed workers, and has built-in retries, rate limiting, and monitoring.

terminal
npm install bullmq ioredis
# Upstash works too: point ioredis at its Redis-compatible TCP endpoint.
# (The REST-based @upstash/redis client is not compatible with BullMQ.)

lib/email-queue.ts
import { Queue, Worker } from "bullmq";
import { Redis } from "ioredis";
import { render } from "@react-email/render";
import { resend } from "@/lib/resend";

const connection = new Redis({
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT || "6379"),
  password: process.env.REDIS_PASSWORD,
  maxRetriesPerRequest: null, // Required for BullMQ
});

// Define job data type
type EmailJobData = {
  to: string;
  subject: string;
  html: string;
};

// Create queue
export const emailQueue = new Queue<EmailJobData>("emails", {
  connection,
  defaultJobOptions: {
    attempts: 3,
    backoff: {
      type: "exponential",
      delay: 5000, // Start at 5s, double each retry
    },
    removeOnComplete: 100, // Keep last 100 completed jobs
    removeOnFail: 500, // Keep last 500 failed jobs for debugging
  },
});

// Create worker (runs in separate process or serverless function)
export const emailWorker = new Worker<EmailJobData>(
  "emails",
  async (job) => {
    const { to, subject, html } = job.data;

    await resend.send({
      from: "noreply@yourdomain.com",
      to,
      subject,
      html,
    });

    console.log(`[Worker] Sent email to ${to}`);
  },
  {
    connection,
    limiter: {
      max: 10, // Max 10 jobs per second (Resend limit)
      duration: 1000,
    },
  }
);

// Error handling
emailWorker.on("failed", (job, err) => {
  console.error(`[Worker] Job ${job?.id} failed:`, err);
  // TODO: Send to error tracking (Sentry, etc.)
});

// Helper to enqueue emails
export async function enqueueEmail(
  to: string,
  subject: string,
  react: React.ReactElement
) {
  const html = await render(react); // render is async in @react-email/render v1+
  
  await emailQueue.add("send", { to, subject, html });
}

API route stays identical:

app/api/auth/reset/route.tsx
import { enqueueEmail } from "@/lib/email-queue";
import PasswordResetEmail from "@/emails/password-reset";

export async function POST(req: Request) {
  const { email } = await req.json();
  const resetToken = generateResetToken(email);
  
  await enqueueEmail(
    email,
    "Reset your password",
    <PasswordResetEmail token={resetToken} />
  );
  
  return Response.json({ success: true });
}
Worker deployment: Run the worker as a separate long-lived process (e.g., npm run worker on a VM or container), or trigger processing on a schedule (Vercel Cron, as in Pattern 3) or per message (AWS Lambda with an SQS trigger).

Pattern 3: Serverless Queue (Vercel-Friendly)

On serverless platforms (Vercel, Netlify), you can't run long-lived workers. Use Vercel Cron + Upstash Redis instead.

app/api/cron/process-emails/route.ts
import { Worker } from "bullmq";
import { connection } from "@/lib/email-queue"; // export the shared ioredis connection from Pattern 2

// Triggered by Vercel Cron every 1 minute
export async function GET(req: Request) {
  const authHeader = req.headers.get("authorization");
  
  if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }

  // Drain the queue for up to ~50 seconds on each invocation
  const worker = new Worker("emails", async (job) => {
    // ... same send logic as Pattern 2
  }, {
    connection, // Upstash Redis
    limiter: { max: 10, duration: 1000 },
  });

  // Run for 50 seconds (before 60s timeout)
  await new Promise((resolve) => setTimeout(resolve, 50000));
  await worker.close();

  return Response.json({ processed: true });
}

Configure in vercel.json:

vercel.json
{
  "crons": [
    {
      "path": "/api/cron/process-emails",
      "schedule": "* * * * *"
    }
  ]
}
Rate limit gotcha: A slow invocation can overlap with the next cron run, so two workers may drain the queue at once. Redis-backed rate limiting (BullMQ's limiter) keeps their combined throughput under provider limits; in-process counters don't.
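BullMQ's limiter state lives in Redis, which is what makes it safe across overlapping invocations. Its observable behavior is roughly a fixed-window counter; here is a minimal in-process sketch of that behavior (a simplification for illustration, not BullMQ's actual implementation):

```typescript
// Fixed-window limiter: at most `max` acquisitions per `durationMs` window.
class FixedWindowLimiter {
  private windowStart = 0;
  private count = 0;
  constructor(private max: number, private durationMs: number) {}

  tryAcquire(now: number): boolean {
    if (now - this.windowStart >= this.durationMs) {
      // New window: reset the counter
      this.windowStart = now;
      this.count = 0;
    }
    if (this.count >= this.max) return false; // window exhausted
    this.count += 1;
    return true;
  }
}

// 12 jobs arrive in the same second against a 10/sec limit
const limiter = new FixedWindowLimiter(10, 1000);
const allowed = Array.from({ length: 12 }, () => limiter.tryAcquire(0))
  .filter(Boolean).length;
console.log(allowed); // 10
```

Because the real counter is shared through Redis, every worker and cron invocation draws from the same window, so combined throughput stays under the provider limit.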

Observability: Know What's Happening

Production queues need visibility. Add:

  • Dashboard: Use BullMQ Board or build a simple admin page to see queue depth, failed jobs, retry counts
  • Metrics: Track emails_sent_total, emails_failed_total, queue_depth with Prometheus or Datadog
  • Alerts: Notify when queue depth > 1000 or failure rate > 5%
  • Dead letter queue: Store failed jobs after max retries for manual investigation

lib/email-queue.ts
// db and sendSlackAlert below are placeholders for your own persistence and alerting
emailWorker.on("failed", async (job, err) => {
  if (job && job.attemptsMade >= (job.opts.attempts ?? 3)) {
    // Store in dead letter queue
    await db.deadLetterQueue.create({
      jobId: job.id,
      data: job.data,
      error: err.message,
      failedAt: new Date(),
    });
    
    // Alert on-call
    await sendSlackAlert(`Email job ${job.id} failed permanently`);
  }
});

Testing Queue Behavior

Test your queue logic without actually sending emails. Mock the send function and verify retry/rate-limit behavior.

lib/email-queue.test.tsx
import { describe, it, expect, vi } from "vitest";
import { emailQueue, enqueueEmail } from "./email-queue";

vi.mock("@/lib/resend", () => ({
  resend: {
    send: vi.fn().mockResolvedValue({ id: "test-id" }),
  },
}));

describe("Email Queue", () => {
  it("enqueues email job", async () => {
    await enqueueEmail(
      "user@example.com",
      "Test",
      <div>Hello</div>
    );
    
    const jobs = await emailQueue.getJobs();
    expect(jobs).toHaveLength(1);
    expect(jobs[0].data.to).toBe("user@example.com");
  });

  it("retries failed jobs", async () => {
    const { resend } = await import("@/lib/resend");
    
    // Fail twice, then succeed
    vi.mocked(resend.send)
      .mockRejectedValueOnce(new Error("Network error"))
      .mockRejectedValueOnce(new Error("Timeout"))
      .mockResolvedValueOnce({ id: "success-id" });
    
    // Process job and verify retries
    // ... (test implementation depends on your queue setup)
  });
});
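The elided retry assertion depends on your queue setup, but the property under test can be stated without any queue library: a retry loop keeps calling the send function until it succeeds or attempts run out. A queue-agnostic sketch (sendWithRetry is a stand-in for the worker's retry loop, not part of any library):

```typescript
// Stand-in for the worker: call `send` up to maxAttempts times.
async function sendWithRetry(
  send: () => Promise<string>,
  maxAttempts = 3
): Promise<string> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await send();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}

// Stub that fails twice, then succeeds, mirroring the mock above
let calls = 0;
const flakySend = async (): Promise<string> => {
  calls += 1;
  if (calls === 1) throw new Error("Network error");
  if (calls === 2) throw new Error("Timeout");
  return "success-id";
};

console.log(await sendWithRetry(flakySend), calls); // success-id 3
```

Asserting both the final result and the call count is what catches off-by-one retry bugs.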

Production Checklist

Before shipping email queues to production:

  1. Persistence: Jobs survive process restarts (Redis/DB-backed)
  2. Retries: Exponential backoff, max 3-5 attempts
  3. Rate limiting: Respect provider limits (Resend: 10/sec)
  4. Dead letter queue: Store permanently failed jobs
  5. Monitoring: Track queue depth, send rate, failure rate
  6. Alerts: Notify on high failure rate or queue backlog
  7. Idempotency: Prevent duplicate sends (use unique job IDs)
  8. Testing: Unit tests for retry logic, integration tests for worker behavior

Idempotency tip: Use a deterministic, time-bucketed job ID like password-reset-{userId}-{hourBucket} and pass it as { jobId: customId } when adding the job. BullMQ skips an add whose jobId already exists, so duplicate requests within the window send one email. (A raw timestamp in the ID defeats the dedup, since every request gets a unique ID. Keep removeOnComplete long enough that the completed job still exists for the rest of the window.)
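One way to build such an ID is to bucket the timestamp so duplicate requests inside the same window collide on one ID (the function name and one-hour window here are illustrative, not from any library):

```typescript
// Deterministic, time-bucketed job ID: requests within the same window
// produce the same ID, so the queue can deduplicate them.
function idempotentJobId(
  kind: string,
  userId: string,
  nowMs: number,
  windowMs = 60 * 60 * 1000 // one-hour window (illustrative)
): string {
  const bucket = Math.floor(nowMs / windowMs);
  return `${kind}-${userId}-${bucket}`;
}

const a = idempotentJobId("password-reset", "u1", Date.parse("2026-02-28T10:05:00Z"));
const b = idempotentJobId("password-reset", "u1", Date.parse("2026-02-28T10:40:00Z"));
console.log(a === b); // true (same hour, same job ID)
```

Pass the result as the jobId option when adding the job; BullMQ will not add a second job whose ID already exists in the queue.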

When to Upgrade Your Queue

Start simple. Upgrade when you hit limits:

  • In-memory → Redis: When you need persistence or hit >100 emails/hour
  • Redis → Dedicated queue service: When you need multi-region workers, advanced routing, or enterprise SLAs (AWS SQS, GCP Pub/Sub, Inngest)
  • Redis → Message broker or event streaming: When emails are part of a broader event-driven architecture (RabbitMQ, Kafka)

For most Next.js apps, BullMQ + Redis is the sweet spot: production-ready, low ops overhead, scales to millions of jobs.


Recap

Synchronous email sending is fine for prototypes. Production apps need queues to handle failures, respect rate limits, and avoid blocking requests.

  • Pattern 1 (In-Memory): Simple but fragile. Use for low-volume apps where job loss is acceptable.
  • Pattern 2 (Redis + BullMQ): Production-ready. Use for most Next.js apps.
  • Pattern 3 (Serverless Cron): Vercel-friendly. Trade worker efficiency for serverless compatibility.

Add monitoring, testing, and a dead letter queue before you ship. Your on-call rotation will thank you.
