You set up DMARC, pointed rua= at a mailbox, and moved on. Six months later, that inbox has 14,000 unread XML attachments from Google, Microsoft, and Yahoo — and you've read exactly zero of them.
DMARC aggregate reports are the single best source of truth for email authentication health. They tell you which IPs are sending as your domain, whether SPF and DKIM are passing, and whether someone is spoofing you. But the reports arrive as compressed XML, sometimes multiple times per day, from every inbox provider on the internet. Nobody reads raw XML at scale.
Resend just open-sourced resend-dmarc-analyzer, a self-hosted Next.js app that ingests DMARC reports via webhooks, parses the XML automatically, and gives you a visual dashboard plus email digests. This guide walks through what it does, how to deploy it, and how to integrate it into a real-world email authentication workflow.
- ~80% of the top 1M domains now publish DMARC (2025 Valimail report)
- Fewer than 10% actually monitor reports: most set p=none and never check RUA data
- Domains without DMARC enforcement see 4.75x more spoofing (Agari)
Why DMARC reports matter (and why you're ignoring them)
If you've already set up SPF, DKIM, and a basic DMARC record (if not, start with our SPF/DKIM/DMARC checklist for transactional email), you're generating aggregate reports automatically. Every inbox provider that receives mail claiming to be from your domain sends a report back to the address in your rua= tag.
These reports contain critical data:
- Source IPs — every server that sent mail as your domain
- SPF results — pass/fail per IP, with alignment status
- DKIM results — pass/fail per message, including which selector was used
- Message volume — how many messages each source sent
- Policy applied — whether the provider quarantined, rejected, or passed messages
Without reading these reports, you're flying blind. A third-party service might be sending email as your domain with broken DKIM. A marketing tool might have misconfigured SPF. Or someone could be actively spoofing your domain — and you'd never know.
p=none without monitoring reports is security theater. You're telling providers "don't enforce anything" while never checking what's happening. That's worse than having no DMARC at all, because it gives you false confidence.

What the Resend DMARC Analyzer does
The resend-dmarc-analyzer is an open-source, MIT-licensed Next.js application that turns raw DMARC aggregate reports into something a human can act on. It's built by the Resend team and runs on their inbound email webhook infrastructure.
Two ingestion modes
- Webhook mode — Point your DMARC rua= address at a Resend-managed inbox. Reports arrive as email attachments, Resend fires a webhook, and the analyzer processes them in real time.
- Paste mode — Copy raw DMARC XML into the web UI for instant, on-demand analysis. Useful for debugging individual reports.
What you get
- Automatic decompression of .xml.gz and .zip attachments (the two formats providers use)
- Parsed report dashboard showing source IPs, pass/fail status, and volume per sender
- Email digest summaries sent via Resend using React Email templates
- Webhook signature verification for security
- Stateless architecture — no database required, processes reports on the fly
Tech stack
It's a standard Next.js application with a familiar stack:
- Next.js 16 with App Router
- React 19 and Tailwind CSS 4
- Resend SDK for sending digest emails
- React Email for email templates
- fast-xml-parser for DMARC XML parsing
- pako and jszip for decompression
How DMARC reporting works under the hood
Before deploying the analyzer, it helps to understand the reporting flow. DMARC defines two report types:
Aggregate reports (RUA)
Sent daily (sometimes more frequently) by inbox providers. Each report covers a time window and includes every message they received claiming your domain, grouped by source IP. The report is XML, typically gzipped or zipped, and sent as an email attachment to the address in your rua= tag.
<?xml version="1.0" encoding="UTF-8"?>
<feedback>
<report_metadata>
<org_name>google.com</org_name>
<email>noreply-dmarc-support@google.com</email>
<date_range>
<begin>1709856000</begin>
<end>1709942400</end>
</date_range>
</report_metadata>
<policy_published>
<domain>yourdomain.com</domain>
<adkim>r</adkim>
<aspf>r</aspf>
<p>none</p>
</policy_published>
<record>
<row>
<source_ip>198.51.100.42</source_ip>
<count>1523</count>
<policy_evaluated>
<disposition>none</disposition>
<dkim>pass</dkim>
<spf>pass</spf>
</policy_evaluated>
</row>
<identifiers>
<header_from>yourdomain.com</header_from>
</identifiers>
<auth_results>
<dkim>
<domain>yourdomain.com</domain>
<result>pass</result>
<selector>resend</selector>
</dkim>
<spf>
<domain>yourdomain.com</domain>
<result>pass</result>
</spf>
</auth_results>
</record>
</feedback>

That XML tells you: Google received 1,523 messages from IP 198.51.100.42 claiming to be yourdomain.com. Both SPF and DKIM passed. Your policy is p=none, so Google delivered them regardless of auth results.
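Note that the begin and end values in date_range are Unix timestamps in seconds, not milliseconds. A quick sketch of converting them (the helper name is illustrative):

```typescript
// DMARC date_range values are Unix epoch seconds, so multiply by
// 1000 before handing them to JavaScript's Date constructor.
function epochToIso(epochSeconds: number): string {
  return new Date(epochSeconds * 1000).toISOString();
}

// The reporting window from the sample report above:
console.log(epochToIso(1709856000)); // 2024-03-08T00:00:00.000Z
console.log(epochToIso(1709942400)); // 2024-03-09T00:00:00.000Z
```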
Forensic reports (RUF)
Sent per-message when authentication fails. These contain more detail (including headers) but are less commonly supported — many providers don't send them due to privacy concerns. The analyzer supports both RUA and RUF via separate webhook endpoints.
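Both report streams are controlled by tags in your DMARC TXT record (rua= for aggregate, ruf= for forensic). A minimal sketch of splitting a record into its tags; this helper is illustrative, not part of the analyzer:

```typescript
// Parse a DMARC TXT record into a tag -> value map.
// Tags are semicolon-separated key=value pairs; whitespace is insignificant.
function parseDmarcRecord(record: string): Record<string, string> {
  const tags: Record<string, string> = {};
  for (const part of record.split(";")) {
    const [key, ...rest] = part.split("=");
    if (key.trim() && rest.length > 0) {
      tags[key.trim()] = rest.join("=").trim();
    }
  }
  return tags;
}

const record =
  'v=DMARC1; p=none; rua=mailto:dmarc@inbound.yourdomain.com; ruf=mailto:dmarc@inbound.yourdomain.com; fo=1';
const tags = parseDmarcRecord(record);
console.log(tags.rua); // mailto:dmarc@inbound.yourdomain.com
console.log(tags.p);   // none
```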
Deploying the DMARC Analyzer step by step
Here's the complete setup, from cloning the repo to receiving your first parsed report.
Clone and install
git clone https://github.com/resend/resend-dmarc-analyzer.git
cd resend-dmarc-analyzer
pnpm install

Configure environment variables
Create a .env.local file with your Resend credentials:
# Your Resend API key (from https://resend.com/api-keys)
RESEND_API_KEY=re_xxxxxxxxxxxxx
# Webhook signing secret (from Resend webhook settings)
RESEND_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxx
# Where to send digest emails
RECIPIENT_EMAIL=team@yourdomain.com

Set up Resend Inbound
In your Resend dashboard, configure an inbound email address (e.g., dmarc@inbound.yourdomain.com). This is where DMARC reports will arrive. Then add a webhook pointing to your deployed app's endpoint:
# Aggregate reports (RUA)
https://your-app.vercel.app/api/webhooks/dmarc/rua
# Forensic reports (RUF) — optional
https://your-app.vercel.app/api/webhooks/dmarc/ruf

Update your DMARC DNS record
Point your rua= tag to the Resend inbound address:
_dmarc.yourdomain.com TXT "v=DMARC1; p=none; rua=mailto:dmarc@inbound.yourdomain.com; ruf=mailto:dmarc@inbound.yourdomain.com; fo=1"

Deploy
Deploy to Vercel (or any Node.js host). The app is a standard Next.js project:
# Deploy to Vercel
vercel deploy --prod
# Or run locally with ngrok for testing
pnpm dev
# In another terminal:
ngrok http 3000

For local development, use ngrok to expose your local server so Resend webhooks can reach it.
How the webhook processing works
The analyzer's webhook route handles the full pipeline: signature verification, attachment extraction, decompression, XML parsing, and digest email sending. Here's what the flow looks like:
import { Resend, Webhook } from "resend";
import { XMLParser } from "fast-xml-parser";
import pako from "pako";
import JSZip from "jszip";
// The analyzer's React Email digest template (import path illustrative)
import { DmarcDigestEmail } from "@/emails/dmarc-digest";

const resend = new Resend(process.env.RESEND_API_KEY);
// 1. Verify the webhook signature
const webhook = new Webhook(process.env.RESEND_WEBHOOK_SECRET!);
export async function POST(req: Request) {
const payload = await req.text();
const headers = Object.fromEntries(req.headers.entries());
// Throws if signature is invalid
const event = webhook.verify(payload, headers);
// 2. Extract attachments from the inbound email
const attachments = event.data.attachments ?? [];
for (const attachment of attachments) {
const buffer = Buffer.from(attachment.content, "base64");
// 3. Decompress based on content type
let xml: string;
if (attachment.filename.endsWith(".gz")) {
// gzip — used by Google, Yahoo
xml = pako.inflate(buffer, { to: "string" });
} else if (attachment.filename.endsWith(".zip")) {
// zip — used by Microsoft, some others
const zip = await JSZip.loadAsync(buffer);
const file = Object.values(zip.files)[0];
xml = await file.async("string");
} else {
xml = buffer.toString("utf-8");
}
// 4. Parse the DMARC XML
const parser = new XMLParser();
const report = parser.parse(xml);
const feedback = report.feedback;
// 5. Extract structured data
const records = Array.isArray(feedback.record)
? feedback.record
: [feedback.record];
const parsed = records.map((record) => ({
sourceIp: record.row.source_ip,
count: record.row.count,
spf: record.row.policy_evaluated.spf,
dkim: record.row.policy_evaluated.dkim,
headerFrom: record.identifiers.header_from,
}));
// 6. Send digest email via Resend
await resend.emails.send({
from: "DMARC Monitor <dmarc@yourdomain.com>",
to: process.env.RECIPIENT_EMAIL,
subject: `DMARC Report: ${feedback.report_metadata.org_name}`,
react: DmarcDigestEmail({ report: parsed }),
});
}
}

The key architectural decision here is statelessness. The analyzer doesn't store reports in a database — it processes them on the fly and sends a digest. This makes deployment trivial (no database to manage) but means you don't get historical trend analysis out of the box.
If you need history, a simple table with source_ip, spf_result, dkim_result, message_count, and report_date columns gets you 90% of what paid DMARC tools offer.

Reading DMARC reports: what to look for
Once reports start flowing in, here's how to interpret the data and take action:
Healthy report (all good)
- All source IPs belong to your sending provider (Resend, SES, etc.)
- SPF: pass across all records
- DKIM: pass across all records
- No unexpected source IPs
Warning signs
- Unknown source IPs — Someone is sending as your domain from an IP you don't recognize. Could be a forgotten third-party service or active spoofing.
- SPF pass but DKIM fail — A legitimate sender (authorized via SPF) isn't signing with DKIM. Fix the DKIM configuration for that service.
- DKIM pass but SPF fail — Your SPF record is missing an include: for a legitimate sending service.
- Both fail from unknown IPs — Almost certainly spoofing. If you see significant volume, accelerate your move to p=quarantine or p=reject.
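These warning signs can be encoded as a simple triage function. This is an illustrative sketch, not part of the analyzer; the record shape mirrors the parsed fields from the webhook pipeline, plus an assumed knownIp flag:

```typescript
type AuthResult = "pass" | "fail";

interface ParsedRecord {
  sourceIp: string;
  spf: AuthResult;
  dkim: AuthResult;
  knownIp: boolean; // whether sourceIp matches one of your providers' ranges
}

type Triage = "healthy" | "fix-dkim" | "fix-spf" | "likely-spoofing" | "investigate";

function triage(r: ParsedRecord): Triage {
  if (r.spf === "pass" && r.dkim === "pass") {
    // Passing from an unknown IP still deserves a look
    return r.knownIp ? "healthy" : "investigate";
  }
  if (r.spf === "fail" && r.dkim === "fail" && !r.knownIp) return "likely-spoofing";
  if (r.spf === "pass" && r.dkim === "fail") return "fix-dkim"; // authorized sender, broken signing
  if (r.dkim === "pass" && r.spf === "fail") return "fix-spf";  // missing include: for this sender
  return "investigate";
}
```

Feeding each parsed record through a function like this lets the digest email lead with the items that need action instead of a raw table.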
Do:
- Investigate every unknown source IP in your reports
- Cross-reference IPs with your sending providers' published ranges
- Move to p=quarantine after 2-4 weeks of clean reports
- Set up alerts for SPF/DKIM failures above a threshold

Don't:
- Ignore reports because 'we set up DMARC already'
- Jump straight to p=reject without monitoring first
- Assume all failures are spoofing (could be misconfigured services)
- Wait months before tightening your DMARC policy
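Cross-referencing source IPs against published ranges comes down to a CIDR membership check. A minimal IPv4-only sketch; the ranges below are documentation placeholders, not real provider CIDRs:

```typescript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipv4ToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

// True if `ip` falls inside the `cidr` block (IPv4 only).
function inCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipv4ToInt(ip) & mask) === (ipv4ToInt(base) & mask);
}

// Placeholder ranges — substitute your providers' published CIDRs.
const knownRanges = ["198.51.100.0/24", "203.0.113.0/24"];
const isKnown = (ip: string) => knownRanges.some((cidr) => inCidr(ip, cidr));

console.log(isKnown("198.51.100.42")); // true
console.log(isKnown("192.0.2.1"));     // false
```

Real sending providers publish their ranges in their SPF records or documentation, so the knownRanges list can usually be generated rather than hand-maintained.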
The DMARC enforcement roadmap
The goal of monitoring DMARC reports is to reach p=reject — where inbox providers actively block unauthenticated mail claiming your domain. Here's the safe path to get there:
Phase 1: Monitor (weeks 1-4)
_dmarc.yourdomain.com TXT "v=DMARC1; p=none; rua=mailto:dmarc@inbound.yourdomain.com; fo=1"

- Deploy the DMARC analyzer and start ingesting reports
- Identify all legitimate sending sources (your app, marketing tools, support tools)
- Fix any SPF/DKIM misalignments
- Document every IP that should be sending as your domain
Phase 2: Quarantine (weeks 5-8)
_dmarc.yourdomain.com TXT "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc@inbound.yourdomain.com; fo=1"

- Start with pct=25 to quarantine only 25% of failing messages
- Monitor reports for false positives (legitimate mail being quarantined)
- Gradually increase: pct=50, then pct=75, then pct=100
Phase 3: Reject (week 9+)
_dmarc.yourdomain.com TXT "v=DMARC1; p=reject; rua=mailto:dmarc@inbound.yourdomain.com; fo=1"

- Full enforcement — unauthenticated mail is rejected
- Continue monitoring reports (new services may break if not configured)
- Your domain is now protected against spoofing and phishing
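A useful gate for each phase transition is the DMARC pass rate across recent reports. A sketch of computing it from parsed records (the row shape and threshold are illustrative); a message passes DMARC when either aligned SPF or aligned DKIM passes:

```typescript
interface RecordRow {
  count: number;              // message volume for this source IP
  spf: "pass" | "fail";       // policy_evaluated SPF result
  dkim: "pass" | "fail";      // policy_evaluated DKIM result
}

// Fraction of messages that passed DMARC (either mechanism passing).
function dmarcPassRate(rows: RecordRow[]): number {
  const total = rows.reduce((sum, r) => sum + r.count, 0);
  if (total === 0) return 1; // no traffic, nothing failing
  const passed = rows
    .filter((r) => r.spf === "pass" || r.dkim === "pass")
    .reduce((sum, r) => sum + r.count, 0);
  return passed / total;
}

const rows: RecordRow[] = [
  { count: 1523, spf: "pass", dkim: "pass" },
  { count: 12, spf: "fail", dkim: "fail" },
];
// 1523 / 1535 ≈ 0.992 — high enough to consider the next pct step
console.log(dmarcPassRate(rows).toFixed(3)); // 0.992
```

Tracking this number week over week makes the "2-4 weeks of clean reports" criterion concrete instead of a gut call.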
p=reject without reading reports will break legitimate email from services you forgot about — your marketing tool, your support desk, your billing system.

Extending the analyzer for production use
The open-source analyzer is a solid foundation. Here are practical extensions for production deployments:
Add persistent storage
The stateless design is great for getting started, but you'll want historical data for trend analysis. Add a simple database table:
// Using Drizzle ORM (or your preferred ORM)
import { pgTable, text, integer, timestamp, boolean } from "drizzle-orm/pg-core";
export const dmarcReports = pgTable("dmarc_reports", {
id: text("id").primaryKey(),
reportingOrg: text("reporting_org").notNull(),
sourceIp: text("source_ip").notNull(),
messageCount: integer("message_count").notNull(),
spfResult: text("spf_result").notNull(), // "pass" | "fail"
dkimResult: text("dkim_result").notNull(), // "pass" | "fail"
spfAligned: boolean("spf_aligned").notNull(),
dkimAligned: boolean("dkim_aligned").notNull(),
headerFrom: text("header_from").notNull(),
disposition: text("disposition").notNull(), // "none" | "quarantine" | "reject"
reportDateStart: timestamp("report_date_start").notNull(),
reportDateEnd: timestamp("report_date_end").notNull(),
createdAt: timestamp("created_at").defaultNow().notNull(),
});

Add Slack alerts for failures
export async function alertDmarcFailure(record: {
sourceIp: string;
count: number;
spf: string;
dkim: string;
headerFrom: string;
reportingOrg: string;
}) {
if (record.spf === "pass" && record.dkim === "pass") return;
const severity = record.spf === "fail" && record.dkim === "fail"
? "🔴 CRITICAL"
: "🟡 WARNING";
await fetch(process.env.SLACK_WEBHOOK_URL!, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
text: [
`${severity} DMARC authentication failure`,
`*Source IP:* ${record.sourceIp}`,
`*Volume:* ${record.count} messages`,
`*SPF:* ${record.spf} | *DKIM:* ${record.dkim}`,
`*Reported by:* ${record.reportingOrg}`,
`*Domain:* ${record.headerFrom}`,
].join("\n"),
}),
});
}

Build a weekly trend dashboard
With persistent storage, you can query trends over time:
import { db } from "@/lib/db";
import { dmarcReports } from "@/lib/db/schema";
import { sql, gte } from "drizzle-orm";
import { NextResponse } from "next/server";
export async function GET() {
const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);
const trends = await db
.select({
date: sql`DATE(report_date_start)`.as("date"),
totalMessages: sql`SUM(message_count)`.as("total"),
spfPass: sql`SUM(CASE WHEN spf_result = 'pass' THEN message_count ELSE 0 END)`.as("spf_pass"),
dkimPass: sql`SUM(CASE WHEN dkim_result = 'pass' THEN message_count ELSE 0 END)`.as("dkim_pass"),
uniqueSources: sql`COUNT(DISTINCT source_ip)`.as("sources"),
})
.from(dmarcReports)
.where(gte(dmarcReports.reportDateStart, thirtyDaysAgo))
.groupBy(sql`DATE(report_date_start)`)
.orderBy(sql`DATE(report_date_start)`);
return NextResponse.json(trends);
}

DMARC and your transactional email stack
DMARC enforcement directly impacts your transactional email deliverability. When you reach p=reject, inbox providers trust your domain more because you've proven you control who sends as you. This translates to better inbox placement for the emails that matter — password resets, magic links, invoices, and shipping notifications.
The connection to your email templates is direct: a well-authenticated domain means your carefully designed transactional emails actually reach the inbox. No authentication means your templates land in spam, no matter how good they look.
If you're building transactional email in Next.js, the stack looks like this:
- Templates — React Email components for each email type (welcome, password reset, invoice, etc.)
- Sending — Resend, SES, or Postmark for reliable delivery (see our provider comparison)
- Authentication — SPF, DKIM, and DMARC configured correctly (setup checklist)
- Monitoring — DMARC report analysis (this post), plus reputation monitoring and feedback loop tracking
Self-hosted vs. paid DMARC monitoring tools
The Resend DMARC Analyzer isn't the only option for monitoring DMARC reports. Here's how it compares to the alternatives:
- Free and open source (MIT license)
- Full control over your data — reports never leave your infrastructure
- Customizable — fork and add storage, alerts, dashboards
- Integrates with your existing Next.js deployment
- $100-500+/month depending on volume
- Data stored on third-party servers
- Richer out-of-box features: historical trends, threat intelligence, auto-remediation
- Better for organizations managing 50+ domains
For a SaaS team managing 1-5 sending domains, the self-hosted analyzer gives you everything you need. For enterprise teams with dozens of domains and complex sending infrastructure, a paid tool may save time. Start self-hosted, upgrade later if needed.