Idempotency Keys for Webhooks: A Practical Guide
Webhooks get retried. Without idempotency, that means duplicate orders, double charges, and angry customers. Here is how to design a deduplication strategy that actually works.
The Duplicate Webhook Problem
Every webhook provider retries. Stripe retries failed deliveries for up to three days. Shopify retries 19 times over 48 hours. GitHub doesn't retry automatically, but failed deliveries can be redelivered manually or via its API, so your handler can still see the same delivery twice. Even your own infrastructure between the provider and your handler can replay a message — load balancers, queues, sidecars — anything that doesn't get a clean acknowledgement will try again.
If your handler isn't idempotent, every retry risks doing the same work twice. That means:
- A customer charged twice for one order
- A welcome email sent five times
- A row inserted three times in your database
- A downstream API called once per retry, multiplying side effects
The fix is well known but often implemented incorrectly. Let's walk through what actually works in production.
What Idempotency Actually Means
An operation is idempotent if running it once and running it ten times produce the same observable result. SET name = 'Alice' is idempotent. balance = balance + 100 is not.
For webhook handlers, the goal is: no matter how many times the same event arrives, the system ends up in the same state, and externally observable side effects happen exactly once.
Note that "same event" needs a precise definition. Two HTTP requests with the same body aren't necessarily the same event — the provider might genuinely be sending you two distinct charges that happen to have identical amounts.
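The distinction is easiest to see in a couple of lines. A sketch (the function names are just for illustration):

```javascript
// Idempotent: applying it twice leaves the same state as applying it once.
const setName = (user, name) => ({ ...user, name });

// Not idempotent: every application changes the observable result.
const addBalance = (user, amount) => ({ ...user, balance: user.balance + amount });

const u = { name: "Bob", balance: 0 };
const once = setName(u, "Alice");
const twice = setName(setName(u, "Alice"), "Alice");
// once and twice are identical: retries are harmless

const paid = addBalance(addBalance(u, 100), 100);
// paid.balance is 200, not 100: a retry doubled the side effect
```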
Step One: Find the Right Idempotency Key
Most providers give you one. Use theirs:
| Provider | Header / Field |
|----------|---------------|
| Stripe | event.id (e.g. evt_1ABC...) |
| GitHub | X-GitHub-Delivery header |
| Shopify | X-Shopify-Webhook-Id header |
| Twilio | MessageSid in the payload |
| Square | event.event_id |
If the provider doesn't supply one, derive a stable key from the payload. A common pattern:
key = sha256(provider + ":" + resource_id + ":" + event_type + ":" + occurred_at)
Avoid hashing the full body. Providers occasionally re-serialize JSON between retries (key order, whitespace), which produces different hashes for the same logical event.
Step Two: Atomically Check and Store
The naive implementation has a race condition:
// DON'T do this
const existing = await db.processedEvents.findOne({ id: eventId });
if (existing) return;
await processEvent(event);
await db.processedEvents.insert({ id: eventId });
If two retries land at the same moment, both reads return null, both proceed, both insert. You've processed the event twice and gotten a duplicate-key error on the second insert — but the damage is done.
The fix is to make the check and the claim a single atomic operation. In Postgres:
INSERT INTO processed_events (event_id, received_at)
VALUES ($1, NOW())
ON CONFLICT (event_id) DO NOTHING
RETURNING event_id;
If the insert returns a row, you won the race — proceed. If it returns nothing, another worker is handling it — return success without doing anything.
In Redis: SET key value NX EX 86400 gives you the same primitive.
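The shape of the claim step, simulated here with an in-memory Map so the semantics are easy to see. This is only an illustration: a Map is atomic within one Node process because the event loop is single-threaded, but across workers you need the Postgres or Redis primitive above.

```javascript
// claimEvent is the atomic check-and-store: it returns true exactly once
// per key, the same contract as INSERT ... ON CONFLICT or SET NX.
function claimEvent(store, eventId) {
  if (store.has(eventId)) return false; // someone already claimed it
  store.set(eventId, Date.now());
  return true;
}

async function handleWebhook(store, event) {
  if (!claimEvent(store, event.id)) {
    return { status: 200, deduped: true }; // ack without reprocessing
  }
  // ... perform side effects exactly once ...
  return { status: 200, deduped: false };
}
```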
Step Three: Handle the Side Effect Window
Even with atomic claiming, there's still a window where you've claimed the event but haven't finished processing it. If your worker crashes mid-flight, the next retry sees the claim and skips — but the side effects didn't all happen.
Three patterns to handle this:
1. Outbox pattern. Within the same database transaction, write the side effects to an outbox table. A separate worker reads the outbox and performs the actual side effects (sending emails, calling APIs). The outbox row is only deleted after success.
2. Status state machine. Track pending, in_progress, completed, failed on the processed event row. On retry, if state is in_progress and the timestamp is old, take it over. If completed, skip.
3. Idempotent downstream calls. Many APIs accept their own idempotency keys (Stripe's Idempotency-Key header, for example). Pass through your event ID so the downstream operation also dedupes.
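The takeover rule in pattern 2 can be captured as a pure decision function. A sketch, using the state names above and an assumed five-minute staleness threshold:

```javascript
const STALE_MS = 5 * 60 * 1000; // assumed threshold before taking over

// Given the stored row for an event (or null), decide what this worker does.
function decide(row, now = Date.now()) {
  if (!row) return "process";                 // never seen: claim and process
  if (row.state === "completed") return "skip";
  if (row.state === "in_progress") {
    // A stale in_progress row means a worker likely crashed mid-flight.
    return now - row.updatedAt > STALE_MS ? "take_over" : "skip";
  }
  return "process";                            // pending or failed: (re)try
}
```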
Step Four: Set a Retention Window
You can't keep idempotency keys forever. Pick a window longer than your provider's maximum retry duration:
- Stripe retries up to 3 days → keep keys 7 days
- Shopify retries up to 48 hours → keep keys 5 days
- GitHub doesn't retry automatically, but a human or script can redeliver at any time → keep keys at least 7 days
A simple cron that deletes rows older than the window keeps the table small.
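In Postgres, that cleanup can be a single statement, run on a schedule (the seven-day interval matches the Stripe guidance above):

```sql
DELETE FROM processed_events
WHERE received_at < NOW() - INTERVAL '7 days';
```

An index on received_at keeps this delete cheap as the table grows.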
Common Mistakes
Using the request body as the key. Bodies can be re-serialized; whitespace differs. Use the provider's event ID.
Storing keys in memory. Process restarts and horizontal scaling break this immediately.
Skipping the atomic insert. The race window is small but production traffic finds it. Always use INSERT ... ON CONFLICT or equivalent.
Treating idempotency as optional for "low-volume" endpoints. Volume isn't the issue — providers retry every endpoint. A handler that only fires once a day still needs to handle the occasional retry.
Idempotent reads, non-idempotent writes. A handler that checks "does this order exist?" then "create order" without a transaction has the same race condition as the naive example above. The check-and-write must be atomic.
How Hookbase Helps
Hookbase deduplicates events automatically before they ever reach your handler. Configure a deduplication window on your source, and we'll drop repeats based on the provider's event ID — or a custom JSONata expression you define. Your handler only sees one delivery per event.
Combined with our automatic retries and DLQ, you get effectively exactly-once processing end to end without writing the deduplication logic yourself.
Try it free — deduplication is available on all plans, including the free tier.