Deliverability · 14 min read

Resend DMARC Analyzer: Self-Hosted DMARC Report Monitoring for Next.js

Deploy the open-source Resend DMARC Analyzer to automatically parse DMARC aggregate reports, detect spoofing, and build a roadmap from p=none to p=reject — with code examples for extending it.

React Emails Pro

March 13, 2026

You set up DMARC, pointed rua= at a mailbox, and moved on. Six months later, that inbox has 14,000 unread XML attachments from Google, Microsoft, and Yahoo — and you've read exactly zero of them.

DMARC aggregate reports are the single best source of truth for email authentication health. They tell you which IPs are sending as your domain, whether SPF and DKIM are passing, and whether someone is spoofing you. But the reports arrive as compressed XML, sometimes multiple times per day, from every inbox provider on the internet. Nobody reads raw XML at scale.

Resend just open-sourced resend-dmarc-analyzer, a self-hosted Next.js app that ingests DMARC reports via webhooks, parses the XML automatically, and gives you a visual dashboard plus email digests. This guide walks through what it does, how to deploy it, and how to integrate it into a real-world email authentication workflow.

  • ~80% of the top 1M domains now publish a DMARC record (2025 Valimail report)
  • Fewer than 10% actually monitor their reports: most set p=none and never check RUA data
  • Domains without DMARC enforcement see 4.75x more spoofing (Agari)


Why DMARC reports matter (and why you're ignoring them)

If you've already set up SPF, DKIM, and a basic DMARC record (if not, start with our SPF/DKIM/DMARC checklist for transactional email), you're generating aggregate reports automatically. Every inbox provider that receives mail claiming to be from your domain sends a report back to the address in your rua= tag.

These reports contain critical data:

  • Source IPs — every server that sent mail as your domain
  • SPF results — pass/fail per IP, with alignment status
  • DKIM results — pass/fail per message, including which selector was used
  • Message volume — how many messages each source sent
  • Policy applied — whether the provider quarantined, rejected, or passed messages

Without reading these reports, you're flying blind. A third-party service might be sending email as your domain with broken DKIM. A marketing tool might have misconfigured SPF. Or someone could be actively spoofing your domain — and you'd never know.

Setting p=none without monitoring reports is security theater. You're telling providers "don't enforce anything" but also never checking what's happening. That's worse than having no DMARC at all, because it gives you false confidence.

What the Resend DMARC Analyzer does

The resend-dmarc-analyzer is an open-source, MIT-licensed Next.js application that turns raw DMARC aggregate reports into something a human can act on. It's built by the Resend team and runs on their inbound email webhook infrastructure.

Two ingestion modes

  • Webhook mode — Point your DMARC rua= address at a Resend-managed inbox. Reports arrive as email attachments, Resend fires a webhook, and the analyzer processes them in real time.
  • Paste mode — Copy raw DMARC XML into the web UI for instant, on-demand analysis. Useful for debugging individual reports.

What you get

  • Automatic decompression of .xml.gz and .zip attachments (the two formats providers use)
  • Parsed report dashboard showing source IPs, pass/fail status, and volume per sender
  • Email digest summaries sent via Resend using React Email templates
  • Webhook signature verification for security
  • Stateless architecture — no database required, processes reports on the fly

Tech stack

It's a standard Next.js application with a familiar stack:

  • Next.js 16 with App Router
  • React 19 and Tailwind CSS 4
  • Resend SDK for sending digest emails
  • React Email for email templates
  • fast-xml-parser for DMARC XML parsing
  • pako and jszip for decompression

How DMARC reporting works under the hood

Before deploying the analyzer, it helps to understand the reporting flow. DMARC defines two report types:

Aggregate reports (RUA)

Sent daily (sometimes more frequently) by inbox providers. Each report covers a time window and includes every message they received claiming your domain, grouped by source IP. The report is XML, typically gzipped or zipped, and sent as an email attachment to the address in your rua= tag.

Example DMARC aggregate report (simplified)
<?xml version="1.0" encoding="UTF-8"?>
<feedback>
  <report_metadata>
    <org_name>google.com</org_name>
    <email>noreply-dmarc-support@google.com</email>
    <date_range>
      <begin>1709856000</begin>
      <end>1709942400</end>
    </date_range>
  </report_metadata>
  <policy_published>
    <domain>yourdomain.com</domain>
    <adkim>r</adkim>
    <aspf>r</aspf>
    <p>none</p>
  </policy_published>
  <record>
    <row>
      <source_ip>198.51.100.42</source_ip>
      <count>1523</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>pass</dkim>
        <spf>pass</spf>
      </policy_evaluated>
    </row>
    <identifiers>
      <header_from>yourdomain.com</header_from>
    </identifiers>
    <auth_results>
      <dkim>
        <domain>yourdomain.com</domain>
        <result>pass</result>
        <selector>resend</selector>
      </dkim>
      <spf>
        <domain>yourdomain.com</domain>
        <result>pass</result>
      </spf>
    </auth_results>
  </record>
</feedback>

That XML tells you: Google received 1,523 messages from IP 198.51.100.42 claiming to be yourdomain.com. Both SPF and DKIM passed. Your policy is p=none, so Google delivered them regardless of auth results.
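One detail worth calling out: the begin and end values in date_range are Unix timestamps in seconds, not milliseconds, which is an easy mistake when parsing reports. A quick sketch of converting a report window (the helper name is mine):

```typescript
// DMARC date_range values are epoch seconds; JavaScript Dates want milliseconds
function reportWindow(begin: number, end: number): string {
  const fmt = (secs: number) => new Date(secs * 1000).toISOString();
  return `${fmt(begin)} to ${fmt(end)}`;
}

// The sample report above covers a single 24-hour window
console.log(reportWindow(1709856000, 1709942400));
// → 2024-03-08T00:00:00.000Z to 2024-03-09T00:00:00.000Z
```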

Forensic reports (RUF)

Sent per-message when authentication fails. These contain more detail (including headers) but are less commonly supported — many providers don't send them due to privacy concerns. The analyzer supports both RUA and RUF via separate webhook endpoints.

Most of the value comes from aggregate (RUA) reports. Focus your monitoring there. Forensic reports are a bonus.

Deploying the DMARC Analyzer step by step

Here's the complete setup, from cloning the repo to receiving your first parsed report.

Step 1: Clone and install

Terminal
git clone https://github.com/resend/resend-dmarc-analyzer.git
cd resend-dmarc-analyzer
pnpm install

Step 2: Configure environment variables

Create a .env.local file with your Resend credentials:

.env.local
# Your Resend API key (from https://resend.com/api-keys)
RESEND_API_KEY=re_xxxxxxxxxxxxx

# Webhook signing secret (from Resend webhook settings)
RESEND_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxx

# Where to send digest emails
RECIPIENT_EMAIL=team@yourdomain.com

Step 3: Set up Resend Inbound

In your Resend dashboard, configure an inbound email address (e.g., dmarc@inbound.yourdomain.com). This is where DMARC reports will arrive. Then add a webhook pointing to your deployed app's endpoint:

Webhook endpoints
# Aggregate reports (RUA)
https://your-app.vercel.app/api/webhooks/dmarc/rua

# Forensic reports (RUF) — optional
https://your-app.vercel.app/api/webhooks/dmarc/ruf

Step 4: Update your DMARC DNS record

Point your rua= tag to the Resend inbound address:

DNS (DMARC record)
_dmarc.yourdomain.com  TXT  "v=DMARC1; p=none; rua=mailto:dmarc@inbound.yourdomain.com; ruf=mailto:dmarc@inbound.yourdomain.com; fo=1"

Step 5: Deploy

Deploy to Vercel (or any Node.js host). The app is a standard Next.js project:

Terminal
# Deploy to Vercel
vercel deploy --prod

# Or run locally with ngrok for testing
pnpm dev
# In another terminal:
ngrok http 3000

For local development, use ngrok to expose your local server so Resend webhooks can reach it.

DNS changes (step 4) take time to propagate. You may not see reports for 24-48 hours after updating the DMARC record. In the meantime, use the paste mode to test with sample XML.
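While waiting on propagation, you can check what resolvers currently see from Node itself. A minimal sketch using the built-in DNS module (the helper names are mine, not part of the analyzer):

```typescript
import { resolveTxt } from "node:dns/promises";

// Split a DMARC TXT value like "v=DMARC1; p=none; rua=mailto:..." into a tag map
function parseDmarcRecord(txt: string): Record<string, string> {
  const tags: Record<string, string> = {};
  for (const part of txt.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key && rest.length > 0) tags[key] = rest.join("=");
  }
  return tags;
}

async function lookupDmarc(domain: string): Promise<Record<string, string>> {
  // TXT records come back as arrays of string chunks; join each record
  const records = (await resolveTxt(`_dmarc.${domain}`)).map((c) => c.join(""));
  const dmarc = records.find((r) => r.startsWith("v=DMARC1"));
  if (!dmarc) throw new Error(`No DMARC record found for ${domain}`);
  return parseDmarcRecord(dmarc);
}
```

Once the change has propagated, lookupDmarc("yourdomain.com") should return an object whose rua tag points at your Resend inbound address.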

How the webhook processing works

The analyzer's webhook route handles the full pipeline: signature verification, attachment extraction, decompression, XML parsing, and digest email sending. Here's what the flow looks like:

Webhook processing flow (simplified)
// 1. Verify the webhook signature (Resend signs webhooks with Svix)
import { Webhook } from "svix";
import { Resend } from "resend";
import { XMLParser } from "fast-xml-parser";
import pako from "pako";
import JSZip from "jszip";
// The analyzer's React Email digest template (import path illustrative)
import { DmarcDigestEmail } from "@/emails/dmarc-digest";

const webhook = new Webhook(process.env.RESEND_WEBHOOK_SECRET!);
const resend = new Resend(process.env.RESEND_API_KEY);

export async function POST(req: Request) {
  const payload = await req.text();
  const headers = Object.fromEntries(req.headers.entries());

  // Throws if the signature is invalid
  const event = webhook.verify(payload, headers) as any;

  // 2. Extract attachments from the inbound email
  const attachments = event.data.attachments ?? [];

  for (const attachment of attachments) {
    const buffer = Buffer.from(attachment.content, "base64");

    // 3. Decompress based on the file extension
    let xml: string;
    if (attachment.filename.endsWith(".gz")) {
      // gzip — used by Google, Yahoo
      xml = pako.inflate(buffer, { to: "string" });
    } else if (attachment.filename.endsWith(".zip")) {
      // zip — used by Microsoft, some others
      const zip = await JSZip.loadAsync(buffer);
      const file = Object.values(zip.files)[0];
      xml = await file.async("string");
    } else {
      xml = buffer.toString("utf-8");
    }

    // 4. Parse the DMARC XML
    const parser = new XMLParser();
    const report = parser.parse(xml);
    const feedback = report.feedback;

    // 5. Extract structured data (a report may contain one record or many)
    const records = Array.isArray(feedback.record)
      ? feedback.record
      : [feedback.record];

    const parsed = records.map((record: any) => ({
      sourceIp: record.row.source_ip,
      count: record.row.count,
      spf: record.row.policy_evaluated.spf,
      dkim: record.row.policy_evaluated.dkim,
      headerFrom: record.identifiers.header_from,
    }));

    // 6. Send digest email via Resend
    await resend.emails.send({
      from: "DMARC Monitor <dmarc@yourdomain.com>",
      to: process.env.RECIPIENT_EMAIL!,
      subject: `DMARC Report: ${feedback.report_metadata.org_name}`,
      react: DmarcDigestEmail({ report: parsed }),
    });
  }

  return new Response("ok");
}

The key architectural decision here is statelessness. The analyzer doesn't store reports in a database — it processes them on the fly and sends a digest. This makes deployment trivial (no database to manage) but means you don't get historical trend analysis out of the box.

If you need historical data, fork the repo and add a database write step after parsing. A simple Postgres table with source_ip, spf_result, dkim_result, message_count, and report_date columns gets you 90% of what paid DMARC tools offer.

Reading DMARC reports: what to look for

Once reports start flowing in, here's how to interpret the data and take action:

Healthy report (all good)

  • All source IPs belong to your sending provider (Resend, SES, etc.)
  • SPF: pass across all records
  • DKIM: pass across all records
  • No unexpected source IPs

Warning signs

  • Unknown source IPs — Someone is sending as your domain from an IP you don't recognize. Could be a forgotten third-party service or active spoofing.
  • SPF pass but DKIM fail — A legitimate sender (authorized via SPF) isn't signing with DKIM. Fix the DKIM configuration for that service.
  • DKIM pass but SPF fail — Your SPF record is missing an include: for a legitimate sending service.
  • Both fail from unknown IPs — Almost certainly spoofing. If you see significant volume, accelerate your move to p=quarantine or p=reject.
Act on report data:

  • Investigate every unknown source IP in your reports
  • Cross-reference IPs with your sending providers' published ranges
  • Move to p=quarantine after 2-4 weeks of clean reports
  • Set up alerts for SPF/DKIM failures above a threshold

Common mistakes to avoid:

  • Ignoring reports because "we set up DMARC already"
  • Jumping straight to p=reject without monitoring first
  • Assuming all failures are spoofing (they could be misconfigured services)
  • Waiting months before tightening your DMARC policy
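These patterns reduce to a small decision table. A sketch of that triage logic (the types and verdict labels are mine, not part of the analyzer):

```typescript
type AuthResult = "pass" | "fail";

interface ReportRecord {
  sourceIp: string;
  spf: AuthResult;
  dkim: AuthResult;
  knownIp: boolean; // is this IP one of your documented senders?
}

type Verdict =
  | "healthy"          // authorized sender, both checks passing
  | "fix-dkim"         // authorized via SPF but not signing correctly
  | "fix-spf"          // signing correctly but missing from your SPF record
  | "likely-spoofing"  // unknown IP failing everything
  | "investigate";     // anything else needs a human

function triage(r: ReportRecord): Verdict {
  if (r.spf === "pass" && r.dkim === "pass") {
    // Passing but unrecognized can be forwarded mail or a forgotten service
    return r.knownIp ? "healthy" : "investigate";
  }
  if (!r.knownIp && r.spf === "fail" && r.dkim === "fail") return "likely-spoofing";
  if (r.spf === "pass" && r.dkim === "fail") return "fix-dkim";
  if (r.dkim === "pass" && r.spf === "fail") return "fix-spf";
  return "investigate";
}
```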

The DMARC enforcement roadmap

The goal of monitoring DMARC reports is to reach p=reject — where inbox providers actively block unauthenticated mail claiming your domain. Here's the safe path to get there:

Phase 1: Monitor (weeks 1-4)

DNS
_dmarc.yourdomain.com  TXT  "v=DMARC1; p=none; rua=mailto:dmarc@inbound.yourdomain.com; fo=1"
  • Deploy the DMARC analyzer and start ingesting reports
  • Identify all legitimate sending sources (your app, marketing tools, support tools)
  • Fix any SPF/DKIM misalignments
  • Document every IP that should be sending as your domain
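Cross-referencing source IPs against your providers' published ranges is easy to automate. A minimal IPv4 CIDR check, sketched without dependencies (the addresses shown are documentation-only TEST-NET blocks):

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer
function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + parseInt(octet, 10), 0) >>> 0;
}

// True if `ip` falls inside the CIDR range, e.g. "198.51.100.0/24"
function inCidr(ip: string, cidr: string): boolean {
  const [range, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(range) & mask) >>> 0);
}

// Flag any report IP that isn't inside a documented sender range
function isKnownSender(ip: string, providerRanges: string[]): boolean {
  return providerRanges.some((cidr) => inCidr(ip, cidr));
}
```

Providers publish their ranges in different places (SPF includes, documentation pages, or AWS's ip-ranges.json for SES), so keeping that list current is the real maintenance cost.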

Phase 2: Quarantine (weeks 5-8)

DNS
_dmarc.yourdomain.com  TXT  "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc@inbound.yourdomain.com; fo=1"
  • Start with pct=25 to quarantine only 25% of failing messages
  • Monitor reports for false positives (legitimate mail being quarantined)
  • Gradually increase: pct=50, then pct=75, then pct=100

Phase 3: Reject (week 9+)

DNS
_dmarc.yourdomain.com  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc@inbound.yourdomain.com; fo=1"
  • Full enforcement — unauthenticated mail is rejected
  • Continue monitoring reports (new services may break if not configured)
  • Your domain is now protected against spoofing and phishing
Never skip the monitoring phase. Going straight to p=reject without reading reports will break legitimate email from services you forgot about — your marketing tool, your support desk, your billing system.

Extending the analyzer for production use

The open-source analyzer is a solid foundation. Here are practical extensions for production deployments:

Add persistent storage

The stateless design is great for getting started, but you'll want historical data for trend analysis. Add a simple database table:

lib/db/schema.ts
// Using Drizzle ORM (or your preferred ORM)
import { pgTable, text, integer, timestamp, boolean } from "drizzle-orm/pg-core";

export const dmarcReports = pgTable("dmarc_reports", {
  id: text("id").primaryKey(),
  reportingOrg: text("reporting_org").notNull(),
  sourceIp: text("source_ip").notNull(),
  messageCount: integer("message_count").notNull(),
  spfResult: text("spf_result").notNull(),    // "pass" | "fail"
  dkimResult: text("dkim_result").notNull(),   // "pass" | "fail"
  spfAligned: boolean("spf_aligned").notNull(),
  dkimAligned: boolean("dkim_aligned").notNull(),
  headerFrom: text("header_from").notNull(),
  disposition: text("disposition").notNull(),   // "none" | "quarantine" | "reject"
  reportDateStart: timestamp("report_date_start").notNull(),
  reportDateEnd: timestamp("report_date_end").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});

Add Slack alerts for failures

lib/alerts/slack-dmarc.ts
export async function alertDmarcFailure(record: {
  sourceIp: string;
  count: number;
  spf: string;
  dkim: string;
  headerFrom: string;
  reportingOrg: string;
}) {
  if (record.spf === "pass" && record.dkim === "pass") return;

  const severity = record.spf === "fail" && record.dkim === "fail"
    ? "🔴 CRITICAL"
    : "🟡 WARNING";

  await fetch(process.env.SLACK_WEBHOOK_URL!, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: [
        `${severity} DMARC authentication failure`,
        `*Source IP:* ${record.sourceIp}`,
        `*Volume:* ${record.count} messages`,
        `*SPF:* ${record.spf} | *DKIM:* ${record.dkim}`,
        `*Reported by:* ${record.reportingOrg}`,
        `*Domain:* ${record.headerFrom}`,
      ].join("\n"),
    }),
  });
}

Build a weekly trend dashboard

With persistent storage, you can query trends over time:

app/api/dmarc/trends/route.ts
import { db } from "@/lib/db";
import { dmarcReports } from "@/lib/db/schema";
import { sql, gte } from "drizzle-orm";
import { NextResponse } from "next/server";

export async function GET() {
  const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);

  const trends = await db
    .select({
      date: sql`DATE(report_date_start)`.as("date"),
      totalMessages: sql`SUM(message_count)`.as("total"),
      spfPass: sql`SUM(CASE WHEN spf_result = 'pass' THEN message_count ELSE 0 END)`.as("spf_pass"),
      dkimPass: sql`SUM(CASE WHEN dkim_result = 'pass' THEN message_count ELSE 0 END)`.as("dkim_pass"),
      uniqueSources: sql`COUNT(DISTINCT source_ip)`.as("sources"),
    })
    .from(dmarcReports)
    .where(gte(dmarcReports.reportDateStart, thirtyDaysAgo))
    .groupBy(sql`DATE(report_date_start)`)
    .orderBy(sql`DATE(report_date_start)`);

  return NextResponse.json(trends);
}

DMARC and your transactional email stack

DMARC enforcement directly impacts your transactional email deliverability. When you reach p=reject, inbox providers trust your domain more because you've proven you control who sends as you. This translates to better inbox placement for the emails that matter — password resets, magic links, invoices, and shipping notifications.

The connection to your email templates is direct: a well-authenticated domain means your carefully designed transactional emails actually reach the inbox. No authentication means your templates land in spam, no matter how good they look.

Related reading: our sender reputation monitoring guide covers the broader monitoring picture, including Gmail Postmaster Tools, bounce tracking, and engagement metrics. DMARC monitoring is one piece of a complete deliverability stack.

If you're building transactional email in Next.js, the stack looks like this:

  1. Templates — React Email components for each email type (welcome, password reset, invoice, etc.)
  2. Sending — Resend, SES, or Postmark for reliable delivery (see our provider comparison)
  3. Authentication — SPF, DKIM, and DMARC configured correctly (setup checklist)
  4. Monitoring — DMARC report analysis (this post), plus reputation monitoring and feedback loop tracking

Self-hosted vs. paid DMARC monitoring tools

The Resend DMARC Analyzer isn't the only option for monitoring DMARC reports. Here's how it compares to the alternatives:

Self-hosted (Resend Analyzer)
  • Free and open source (MIT license)
  • Full control over your data — reports never leave your infrastructure
  • Customizable — fork and add storage, alerts, dashboards
  • Integrates with your existing Next.js deployment
Paid SaaS (Valimail, Agari, dmarcian)
  • $100-500+/month depending on volume
  • Data stored on third-party servers
  • Richer out-of-box features: historical trends, threat intelligence, auto-remediation
  • Better for organizations managing 50+ domains

For a SaaS team managing 1-5 sending domains, the self-hosted analyzer gives you everything you need. For enterprise teams with dozens of domains and complex sending infrastructure, a paid tool may save time. Start self-hosted, upgrade later if needed.

