Article · March 5, 2026 · 13 min read
Tags: make, deduplication, webhooks, idempotency, automation

Make.com Duplicate Prevention: Stop Duplicate Records on Retry

Make.com duplicate prevention stops duplicate records when webhook retries, reruns, or manual replays fire twice. Learn Data Store gates and safe replay.

If Make.com can still write twice after a timeout

Start with Make.com error handling when webhook retries, reruns, or replay ambiguity are the root cause. If those duplicate writes already polluted HubSpot ownership, routing, or field state, route the CRM lane separately.


How to stop duplicate records when Make.com retries, reruns, or replays

Teams do not come looking for "duplicate prevention" in the abstract. They come after a webhook fires twice, a rerun creates another contact, or a timeout duplicates a finance write.

I have used this exact model in production audits because duplicate cleanup is always slower than putting a deterministic gate in front of the next write.

I see this in production audits of Make.com webhook lanes: the visible run status looks green while the business state underneath is already wrong. That is exactly why Make.com duplicate prevention needs a production model, not a checkbox in one module.

Most duplicate incidents come from three patterns happening together:

  • at-least-once webhook delivery from source platforms,
  • retries after downstream timeouts,
  • manual reruns without event-level state.

If you are reading this during a live incident, you are not alone. The failure mode is common, and the fix is repeatable. This guide gives the complete model I use in production rollouts: dedupe key design, Data Store gate, search-then-create write logic, error ownership, and monitoring.

If you searched for "Make.com duplicate records", "webhook fired twice", or "scenario rerun created another contact", the root issue is usually the same: no deterministic gate exists between repeated delivery and the next write.

If retries, reruns, or replay ambiguity are the core problem, start with Make.com error handling. If those duplicate writes already polluted HubSpot owners, lifecycle history, or CRM trust, route the damaged lane into HubSpot workflow automation or CRM data cleanup. For operating context, see About. For a concrete example of duplicate removal in a HubSpot lane, review Typeform to HubSpot dedupe.

Why Make.com creates duplicates (at-least-once delivery explained)

Webhook systems deliver events with at-least-once semantics. That means the same event can be delivered more than once, especially when acknowledgments are delayed or network responses are ambiguous.

A typical duplicate chain:

  1. The source sends webhook event evt_18492.
  2. Make.com starts execution and reaches the external write.
  3. The destination API succeeds, but slowly.
  4. The source does not see a timely acknowledgment and resends.
  5. Make.com receives the same business event again.
  6. Without a deterministic dedupe gate, the second write creates a duplicate record.
From the platform's perspective, each attempt is valid. From the business perspective, the second write is data corruption.

This is why "scenario success rate" is not enough. You need event-level correctness checks.

Scenario view highlighting duplicate-risk branches after webhook intake

Without event-level gating, retries and resends can both produce valid-looking but duplicate writes.

Does Make.com have built-in deduplication?

Short answer: not in the way production teams need it.

Make.com can filter, route, and retry, but it does not provide a complete business-event dedupe model out of the box. You still need to define:

  • stable idempotency key from source data,
  • state memory for each event,
  • explicit behavior when key reappears,
  • replay policy with ownership.

Teams often assume that using one filter or one search module equals dedupe. It does not. Dedupe is a full control path, not a single condition.

A reliable model needs to survive:

  • source resend,
  • timeout retry,
  • branch replay,
  • manual rerun by operator.

If any of those create a second write, your dedupe is incomplete.

How to use Data Store for deduplication (Make.com Data Store dedupe pattern)

Data Store is the most practical ledger for Make.com duplicate prevention because it keeps control state inside the same runtime.

Minimum record fields:

  • processing_id: unique key for the business event,
  • status: new, processing, completed, failed, dead_letter,
  • source: source app and event type,
  • created_at: first-seen timestamp,
  • updated_at: last transition timestamp,
  • error_code: failure class for operator triage.

Lookup policy before writes:

  • completed -> skip and log duplicate prevented,
  • processing -> block concurrent re-entry,
  • failed -> route to controlled retry,
  • missing -> create row and continue.

This one gate is where most duplicate incidents are prevented.

Data Store ledger used for event state and dedupe decisions

Data Store acts as a deterministic gate, not just passive logging.
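The lookup policy above can be sketched as a small gate function that runs before any write. This is a minimal Python sketch under assumptions: the dict-backed `ledger` stands in for a Make.com Data Store, and the `gate` function and its return strings are illustrative names, not Make.com APIs.

```python
from datetime import datetime, timezone

# Illustrative in-memory ledger; in Make.com this role is played by a Data Store.
ledger: dict[str, dict] = {}

def gate(processing_id: str, source: str) -> str:
    """Decide what to do with an incoming event before any external write."""
    record = ledger.get(processing_id)
    if record is None:
        # missing -> create the row first, then continue processing
        now = datetime.now(timezone.utc).isoformat()
        ledger[processing_id] = {
            "status": "processing", "source": source,
            "created_at": now, "updated_at": now, "error_code": None,
        }
        return "continue"
    status = record["status"]
    if status == "completed":
        return "skip"        # duplicate prevented: log and exit
    if status == "processing":
        return "blocked"     # concurrent re-entry: exit safely
    if status in ("failed", "dead_letter"):
        return "retry_lane"  # route to the controlled retry lane
    return "blocked"

print(gate("evt_18492", "typeform"))  # first arrival -> continue
print(gate("evt_18492", "typeform"))  # resend while in flight -> blocked
```

The point of the sketch is ordering: the row is created before the first side effect, so a resend arriving mid-run hits the `processing` branch instead of racing into a second write.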

Idempotency key pattern for webhook triggers

The rule is simple: the key comes from source intent, never from the execution attempt.

Good key sources:

  • webhook event ID,
  • form submission token,
  • invoice number plus source namespace,
  • deterministic payload hash from stable fields.

Bad key sources:

  • Make execution ID,
  • current timestamp,
  • random UUID generated per run.

If the key changes on retry, dedupe fails by design.

Practical key recipe I use:

  1. normalize source fields,
  2. build one deterministic processing_id,
  3. store it before first side effect,
  4. reuse same key across retries and replays.

This model also aligns with Make.com Data Store as a state machine, where the same key anchors every status transition.

Processing key generation before any write branch executes

If the key is unstable, no downstream dedupe logic can save you.
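The four-step key recipe can be sketched as a deterministic key builder. A minimal Python sketch, with assumptions labeled: the field names (`email`, `form_id`, `submission_token`), the `typeform` namespace, and the `proc_` prefix are illustrative choices, not a fixed standard.

```python
import hashlib

def build_processing_id(payload: dict) -> str:
    """Build one deterministic key from business-stable source fields only."""
    # 1. normalize source fields (trim whitespace, lowercase where identity-safe)
    email = payload.get("email", "").strip().lower()
    form_id = payload.get("form_id", "").strip()
    token = payload.get("submission_token", "").strip()
    # 2. build one deterministic processing_id from the normalized parts;
    #    a source namespace prevents collisions across apps
    material = "|".join(["typeform", form_id, token, email])
    return "proc_" + hashlib.sha256(material.encode("utf-8")).hexdigest()[:16]

a = build_processing_id({"email": " Ana@Example.com ", "form_id": "F1",
                         "submission_token": "tok_1"})
b = build_processing_id({"email": "ana@example.com", "form_id": "F1",
                         "submission_token": "tok_1"})
print(a == b)  # same business event -> same key, even with messy casing
```

Steps 3 and 4 of the recipe happen outside this function: store the key before the first side effect, and reuse it unchanged across retries and replays.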

How to prevent duplicate runs when webhook fires twice

When the same webhook arrives twice, your scenario must prove this is replay, not new intent.

Use this flow:

  1. webhook received,
  2. compute processing_id,
  3. lookup in Data Store,
  4. if completed, stop write path and log duplicate prevented,
  5. if missing, continue normal processing.

Add a short lock window for processing state so concurrent arrivals of the same event do not race into duplicate writes.

Common mistake: teams place dedupe check only once at scenario start, then run multiple downstream write branches without additional guards. Each critical write branch still needs protection.

How to stop duplicate records when scenario reruns

Scenario reruns happen during incident recovery, module retries, or manual operations.

To keep reruns safe:

  • separate ingestion from replay lane,
  • permit replay only for failed records,
  • reuse same processing_id,
  • transition status explicitly,
  • block completed -> processing unless manual override is approved.

This is where many lanes break. Operators rerun entire scenarios to recover one failed module and accidentally re-create records that already succeeded.

Use a replay runbook with strict entry criteria:

  • one failed event id,
  • expected missing side effect,
  • owner assigned,
  • post-replay verification.

If you need a deeper incident process, pair this with Make.com webhook debugging playbook and Make.com retry logic without duplicates.

Implementation path

Retries, reruns, or replays already creating duplicate records?

Use Make.com error handling for Data Store gates, replay-safe state, and owner-routed alerts. If duplicate writes already damaged HubSpot routing or CRM trust, fix that lane separately after the implementation layer is stable.

Make.com search then create pattern for HubSpot writes

Search-then-create is mandatory for contact writes.

Pattern:

  1. search HubSpot by normalized email and external key,
  2. if found, update allowed fields only,
  3. if not found, create contact,
  4. write result status to Data Store.

This avoids blind create behavior during retries and reduces duplicate contacts dramatically.
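The branch decision in steps 2 and 3 can be sketched as a pure function, assuming the HubSpot search step has already returned its hits. A minimal Python sketch; `allowed_fields` and the property names are illustrative, and the actual search and write calls are left to the HubSpot modules.

```python
def upsert_decision(search_hits: list[dict], allowed_fields: set[str],
                    incoming: dict) -> tuple[str, dict]:
    """Decide update vs create and restrict which fields we may touch."""
    if search_hits:
        # found -> update allowed fields only; never overwrite owned fields
        patch = {k: v for k, v in incoming.items() if k in allowed_fields}
        return ("update", patch)
    # not found -> create the contact with the full incoming payload
    return ("create", dict(incoming))

action, body = upsert_decision(
    search_hits=[{"id": "301"}],
    allowed_fields={"phone", "lifecyclestage"},
    incoming={"email": "ana@example.com", "phone": "+34600000000",
              "hubspot_owner_id": "do-not-touch"},
)
print(action, body)  # update {'phone': '+34600000000'}
```

The allowed-field set is the part teams skip: on the update branch it stops a retry from clobbering ownership or lifecycle fields that a human has since changed.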

You can see this implemented in HubSpot + Typeform reliability setup and validated in HubSpot workflow audit: 7 silent failures.

For service scope in this lane, see Make.com error handling.

Step-by-step: complete dedupe setup in Make.com

Step 1: Define one dedupe contract

Document:

  • primary key structure,
  • state model,
  • allowed transitions,
  • replay ownership,
  • retention policy.

No contract means every future scenario can reintroduce duplicate risk.

Step 2: Add dedupe router before first write

Configure router outputs by state:

  • completed -> skip,
  • processing -> lock safe exit,
  • failed -> retry queue,
  • missing -> continue.

Router configuration for dedupe branches and safe exits

Router logic should reflect business state, not only module status.

Step 3: Enforce status transitions

Status transitions must be explicit modules:

  • set processing before write,
  • set completed after confirmed success,
  • set failed with error details on handler path.

State transition map across processing, completed, and failed branches

Explicit transitions make replay decisions and audits fast and defensible.
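The transition rules, including the "block completed to processing unless a manual override is approved" rule from the rerun section, can be sketched as an allowed-transition list. A minimal Python sketch; the `ALLOWED` set encodes this article's state model, and `transition` is an illustrative helper.

```python
# Allowed ledger transitions; anything else is rejected and logged.
ALLOWED = {
    ("new", "processing"),
    ("processing", "completed"),
    ("processing", "failed"),
    ("failed", "processing"),   # controlled retry only
    ("failed", "dead_letter"),
}

def transition(record: dict, to_status: str, manual_override: bool = False) -> bool:
    """Apply a status change only if the transition is on the allowed list."""
    pair = (record["status"], to_status)
    if pair in ALLOWED or manual_override:
        record["status"] = to_status
        return True
    return False  # e.g. completed -> processing is blocked by default

rec = {"status": "completed"}
print(transition(rec, "processing"))  # rerun of a finished event -> False
```

Encoding the transitions as data rather than scattered filters means a new scenario branch cannot quietly invent a path like completed back to processing.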

Step 4: Add error handler ownership

On any critical write failure:

  1. update ledger to failed,
  2. include error class and message,
  3. alert owner channel with processing_id,
  4. stop execution.

Error handler path writing failed state and notifying owner

Write failure state first, then notify, then stop.

Step 5: Isolate retry queue

Use separate scheduled retry scenario:

  • fetch failed records,
  • replay deterministically,
  • set completed on success,
  • escalate to dead_letter after threshold.

Retry queue control lane for failed records and safe reprocessing

Separate retry lane prevents noisy recovery logic in ingress scenario.
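The scheduled retry pass can be sketched as one loop over failed records. A minimal Python sketch under assumptions: `MAX_ATTEMPTS` is an illustrative threshold, `replay` stands in for the deterministic replay of one event, and the dict again stands in for the Data Store.

```python
MAX_ATTEMPTS = 3  # illustrative threshold before dead_letter escalation

def retry_pass(ledger: dict[str, dict], replay) -> None:
    """Scheduled pass: replay failed records, escalate after the threshold."""
    for pid, rec in ledger.items():
        if rec["status"] != "failed":
            continue
        rec["attempts"] = rec.get("attempts", 0) + 1
        if rec["attempts"] > MAX_ATTEMPTS:
            rec["status"] = "dead_letter"  # stop automated retries, page an owner
            continue
        if replay(pid):                    # deterministic replay of one event
            rec["status"] = "completed"

ledger = {"evt_1": {"status": "failed"},
          "evt_2": {"status": "failed", "attempts": 3}}
retry_pass(ledger, replay=lambda pid: pid == "evt_1")
print(ledger["evt_1"]["status"], ledger["evt_2"]["status"])
# -> completed dead_letter
```

Because replay reuses the same processing_id, a record that already reached completed is never touched by this pass, which is what keeps the retry lane duplicate-safe.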

Validation tests before production

Run these tests before calling the lane stable:

  • Duplicate webhook send: resend the same event ID twice. Expected: second run skipped, zero new writes.
  • Downstream timeout: force a delayed API response. Expected: one business write only.
  • Partial branch failure: fail a mid-scenario module. Expected: state set to failed, no duplicate side effects.
  • Manual replay: replay one failed event. Expected: only the missing step completes.
  • Burst traffic: send 100 events quickly. Expected: backlog controlled, duplicate-created near zero.

If this matrix fails, do not ship yet.

Monitoring that keeps duplicate risk low

Daily metrics:

  • duplicate-created count,
  • duplicate-prevented count,
  • failed backlog age,
  • replay success rate,
  • owner response time.

Weekly controls:

  • sample 20 completed events end-to-end,
  • review 10 duplicate-prevented events for false positives,
  • verify no stale processing records beyond timeout,
  • archive old completed rows by retention policy.

For monitoring baseline, use Make.com monitoring in production.

Permanent prevention model (not one-off cleanup)

A lot of teams do one cleanup sprint and call it solved. Duplicates return when a new scenario or field mapping is added without the dedupe contract.

Permanent prevention requires:

  • shared key and state standards,
  • code review checklist for any new write path,
  • owner accountability for failed records,
  • incident postmortem updates to dedupe contract,
  • quarterly stress tests on retry and replay behavior.

This is where a reliability partner helps. If your lanes cross CRM and finance boundaries, Make.com error handling gives a fixed-scope rollout path.

Edge cases that still create duplicates if you ignore them

Even mature setups can leak duplicates when these cases are not handled explicitly.

Case 1: One person, multiple emails

If dedupe keys rely on email only, one person using two domains can bypass your checks. For B2B lanes, add secondary identity hints where possible, such as external user ID or company plus normalized name fingerprint.

Do not auto-merge aggressively. Use confidence tiers:

  • high confidence: deterministic key match, safe merge path,
  • medium confidence: queue for owner review,
  • low confidence: leave separated and monitor.

Case 2: Payload order changes on resend

Some sources reorder JSON fields or omit optional fields on retries. If you hash raw payload text, the hash changes and duplicates bypass the key gate.

Safer approach:

  • normalize field order before hash,
  • exclude unstable fields from key material,
  • include only business-stable identifiers.
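The safer approach can be sketched with canonical JSON serialization. A minimal Python sketch; the `UNSTABLE` field names are illustrative, and `sort_keys` plus fixed separators handle the field-order problem.

```python
import hashlib
import json

UNSTABLE = {"received_at", "retry_count", "delivery_id"}  # illustrative

def stable_hash(payload: dict) -> str:
    """Hash business-stable fields in a fixed order, ignoring resend noise."""
    stable = {k: v for k, v in payload.items() if k not in UNSTABLE}
    # sort_keys normalizes field order, so a reordered resend hashes
    # identically; fixed separators remove whitespace variation
    canonical = json.dumps(stable, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

first  = {"invoice": "INV-7", "amount": 120, "received_at": "10:00:01"}
resend = {"amount": 120, "invoice": "INV-7", "received_at": "10:00:09"}
print(stable_hash(first) == stable_hash(resend))  # True
```

Hashing the raw payload text instead of this canonical form is exactly the failure mode described above: a reordered or re-timestamped resend produces a new hash and walks straight past the key gate.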

Case 3: Multi-branch writes with one weak guard

Teams often protect one write branch and forget secondary branches such as task creation, lifecycle update, or notification persistence. A duplicate can be prevented in contact create but still appear in downstream objects.

Control rule:

  • every external side effect branch must read and respect same processing_id state.

Case 4: Cleanup scripts that ignore ledger status

Manual cleanup jobs can reprocess records already marked completed and recreate side effects. Any remediation script should check the ledger status before writing, exactly as the production scenario does.

How to recover when duplicates already exist

Prevention and cleanup are separate workstreams. If duplicates already exist, recover in phases.

Phase 1: freeze new duplicate creation.

  • deploy dedupe gate and state controls first,
  • confirm duplicate-created metric starts dropping.

Phase 2: classify existing duplicates.

  • exact duplicates by deterministic key,
  • probable duplicates for owner review,
  • conflicting records requiring business decision.

Phase 3: clean in controlled batches.

  • merge or archive in small windows,
  • verify reporting and owner assignments after each batch,
  • log every merge reason and operator.

Phase 4: harden anti-regression controls.

  • lock key contract in runbook,
  • require checklist before any new automation branch,
  • keep weekly duplicate-prevented and duplicate-created review.

In finance-adjacent lanes, treat cleanup evidence like audit material. You can compare this approach with the operational control depth shown in VAT automation case.

Team implementation checklist (first 2 weeks)

Use this rollout if your team is starting from partial controls.

Week 1:

  • map all write paths,
  • define one idempotency key standard,
  • implement Data Store lookup before first write,
  • add failed-state alerts with owner assignment.

Week 2:

  • isolate retry lane from ingress lane,
  • run duplicate and replay test matrix,
  • backfill monitoring dashboard for dedupe metrics,
  • document replay and cleanup runbook.

Expected output by day 14:

  • duplicate-created trend near zero,
  • duplicate-prevented events visible and explainable,
  • no unowned failed records older than SLA,
  • operator playbook available for incidents.

FAQ

Does Make.com have built-in deduplication for webhooks?

Make.com provides building blocks, but not a complete event-level dedupe system by default. You still need a stable idempotency key, a state ledger, and explicit replay behavior to keep duplicate writes out of production data.

What is the best idempotency key for Make.com webhook flows?

Use a key derived from source intent: event ID, submission token, or deterministic payload hash from stable fields. Do not use execution IDs or timestamps, because those change across retries and break dedupe.

How do I prevent duplicate records when webhook fires twice?

Lookup processing_id before every critical write. If state is completed, skip. If processing, lock and exit safely. If missing, continue. This is the minimum reliable pattern for duplicate prevention.

Should I use search-then-create or always update existing records?

Use search-then-create with strict match criteria and allowed-field ownership. Blind create paths produce duplicates under retry pressure, while blind update paths can overwrite valid data. Controlled branching is safer.

What if duplicate records already exist in HubSpot?

Clean existing duplicates in phases, then deploy prevention controls before next volume cycle. If cleanup happens without prevention, duplicates come back quickly. Start with Typeform to HubSpot dedupe and owner runbooks.

Next steps

Free checklist: HubSpot workflow reliability audit.

Get the PDF immediately after submission. Use it to catch duplicate contacts, retries, routing gaps, and required-field misses before your next workflow change.

Free 30-minute discovery call available after review. Paid reliability audit from €500 if fit is confirmed.

Next step

Need duplicate records to stop in Make.com, not just get cleaned later?

Start with Make.com error handling to harden retries, reruns, and replay paths. If Make.com already created duplicate contacts, wrong owners, or CRM drift, route the damaged lane into HubSpot workflow automation or CRM cleanup next.