Data Synchronization Patterns for Enterprise Integration
Five core data synchronization patterns (request-reply, fire-and-forget, pub-sub, saga, and batch) govern how D365 exchanges data with external systems reliably.
Data synchronization is the process of keeping master & transactional data consistent across multiple systems. In a multi-system landscape (D365, legacy ERP, data warehouse, e-commerce platform), synchronization is critical but complex. This guide covers sync pattern architectures, trade-offs, master data management challenges, conflict resolution strategies, & operational best practices.
TL;DR
- Five sync pattern styles: request-reply (pull), fire-and-forget (push), pub-sub (fanout), saga (orchestrated), & batch (scheduled). Each trades latency, coupling, & complexity.
- Master data synchronization requires a golden record authority & strategy: centralized master (single source of truth), federated masters (authority by entity), or hybrid consensus.
- Conflict resolution handles simultaneous updates across systems: last-write-wins, field-level merge, manual review, or eventual consistency with versioning.
- Bidirectional sync is tightly coupled & error-prone; prefer unidirectional (D365 as source, consume in other systems) unless you have strong justification.
- Idempotency ensures duplicate messages don’t corrupt data; implement via deduplication tables, unique constraints, or version-based upserts.
- Dead-letter queues, retry logic, & monitoring are essential. Track sync latency, error rates, & data quality metrics.
- Batch reconciliation (nightly full compare + correction) is cheaper insurance than trying to prevent every inconsistency in real time.
- Governance: document golden record authority, conflict rules, & retry policies. Data quality ownership must be clear.
Sync Pattern Taxonomy
Five core patterns for keeping systems synchronized:
| Pattern | Initiator | Acknowledgment | Latency | Coupling |
|---|---|---|---|---|
| Request-Reply (Pull) | Consumer pulls on demand | Synchronous response | Variable | Tight |
| Fire-and-Forget (Push) | Producer pushes immediately | Async (best-effort) | Sub-second | Loose |
| Pub-Sub (Fanout) | Producer publishes once | Multiple subscribers consume | Sub-second | Loosest |
| Saga (Orchestration) | Orchestrator coordinates multi-step sync | Transactional semantics | Seconds to minutes | Medium |
| Batch (Scheduled) | Scheduler triggers bulk sync | Asynchronous, logged | Hours | Loosest |
Request-Reply Pattern
Consumer calls a producer API synchronously: “Give me customer 12345.” Producer responds immediately with data.
Characteristics:
- Latency: Synchronous; depends on API response time (typically 100–500 ms).
- Coupling: Tight. Consumer & producer must be available & responsive.
- Data Freshness: Always current (you ask, get latest).
- Scalability: Limited by API throughput & consumer concurrency.
Use Cases:
- On-demand reads (user dashboard, lookup, mobile app).
- Low-volume, high-frequency queries.
- Reference data lookups (GL accounts, items, customers).
Example: A salesperson opens a customer record in Salesforce. Salesforce queries the D365 GL API to fetch the customer's credit balance. The API responds in 200 ms, and the salesperson sees the balance instantly.
Pitfalls: If producer is slow or unavailable, consumer blocks. Network latency compounds. Not suitable for high-volume sync.
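The pull interaction can be sketched as follows. This is a minimal in-memory stand-in, not a real D365 SDK: the `CustomerApi` class, `get_customer_balance` method, and the latency simulation are all illustrative assumptions.

```python
import time

# Request-reply sketch: consumer blocks on a synchronous call and always
# gets the latest value. CustomerApi is a hypothetical stand-in for a
# D365 API; latency_s simulates network/API response time.
class CustomerApi:
    def __init__(self, store, latency_s=0.0):
        self._store = store          # {customer_id: balance}
        self._latency_s = latency_s  # simulated producer response time

    def get_customer_balance(self, customer_id, timeout_s=1.0):
        # If the producer is slower than the consumer's timeout, the
        # consumer fails instead of blocking indefinitely (the main pitfall).
        if self._latency_s > timeout_s:
            raise TimeoutError(f"producer too slow for customer {customer_id}")
        time.sleep(self._latency_s)
        return self._store[customer_id]  # pull is always fresh

api = CustomerApi({"12345": 1500.00}, latency_s=0.0)
print(api.get_customer_balance("12345"))  # → 1500.0
```

Note how freshness and tight coupling come from the same place: the consumer gets the latest data only because it waits for the producer on every call.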
Fire-and-Forget Pattern
Producer sends update message to consumer asynchronously; doesn’t wait for acknowledgment.
Characteristics:
- Latency: Sub-second push, but consumer processes asynchronously (seconds to minutes after).
- Coupling: Loose. Producer & consumer decoupled via message queue.
- Reliability: Message queue guarantees delivery (if configured); producer moves on.
- Ordering: May be out of order if consumer processes in parallel.
Use Cases:
- One-directional push notifications (invoice posted, order confirmed).
- High-volume event streams.
- Scenarios where consumer delay is acceptable.
Example: An invoice is posted in F&O. A Business Event fires to Service Bus. A Logic App consuming the queue processes the invoice asynchronously, updating the accounting system, which acknowledges receipt. If the Logic App fails, the message remains in the queue for retry.
Pitfalls: Consumer must be idempotent (handle duplicates). Ordering must be ensured or handled in consumer logic. Harder to debug (async flow).
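The decoupling can be illustrated with a plain in-memory queue. A real implementation would use Service Bus or similar; the queue, event shape, and function names here are illustrative assumptions only.

```python
import queue

# Fire-and-forget sketch: the producer enqueues and returns immediately,
# with no acknowledgment. The consumer drains the queue later, at its own
# pace. In production this queue would be a durable broker (e.g. Service Bus).
outbox = queue.Queue()

def post_invoice(invoice_id):
    # Producer side: push the event and move on.
    outbox.put({"event": "InvoicePosted", "invoice_id": invoice_id})

processed = []

def consume_one():
    # Consumer side: processes asynchronously; in a real system this must
    # also be idempotent, since the broker may redeliver.
    msg = outbox.get()
    processed.append(msg["invoice_id"])

post_invoice("INV-001")
post_invoice("INV-002")   # producer is already done; consumer hasn't run yet
consume_one()
consume_one()
print(processed)  # → ['INV-001', 'INV-002']
```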
Publish-Subscribe Pattern
Producer publishes change to a topic. Multiple subscribers listen independently.
Characteristics:
- Decoupling: Loosest. Producer doesn’t know about subscribers.
- Scalability: Easy to add new subscribers without changing producer.
- Fanout: One change message reaches many systems.
- Ordering: Per subscriber (subscribers get messages in order, but different subscribers may process in different orders).
Use Cases:
- Customer master changes need to sync to: CRM, marketing platform, data warehouse, e-commerce site.
- One event triggers multiple downstream actions.
- Future flexibility: add new subscriber without re-architecting.
Example: Customer address changes in D365. Event published to “Customer Updated” topic. Three subscribers listen: CRM, data warehouse, e-commerce. Each processes independently, at its own pace. If e-commerce fails, CRM & warehouse still succeed.
Pitfalls: Harder to coordinate across multiple subscribers (if all must succeed for transaction). Duplicate handling is critical.
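A fanout dispatcher of this kind can be sketched as below. The topic name, handler names, and failure behavior are illustrative assumptions, not a real broker API.

```python
from collections import defaultdict

# Pub-sub sketch: the producer publishes once; every subscriber is invoked
# independently, so one failing subscriber does not block the others.
subscribers = defaultdict(list)   # topic -> list of handler callables

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    results = {}
    for handler in subscribers[topic]:
        try:
            results[handler.__name__] = handler(event)
        except Exception as exc:
            # Isolate the failure; the remaining subscribers still run.
            results[handler.__name__] = f"failed: {exc}"
    return results

def crm(event):       return f"CRM stored {event['id']}"
def warehouse(event): return f"DW stored {event['id']}"
def ecommerce(event): raise RuntimeError("site down")

for sub in (crm, warehouse, ecommerce):
    subscribe("CustomerUpdated", sub)

print(publish("CustomerUpdated", {"id": "C-1"}))
# → {'crm': 'CRM stored C-1', 'warehouse': 'DW stored C-1', 'ecommerce': 'failed: site down'}
```

The producer never references `crm`, `warehouse`, or `ecommerce` directly, which is exactly what makes adding a fourth subscriber a no-op for the producer.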
Saga & Orchestration Pattern
A saga is a long-running business process coordinating multiple systems with eventual consistency & compensating transactions.
Example Saga: Order-to-Cash
- Sales order created in D365.
- Inventory system reserves stock.
- Warehouse system creates pick list.
- Shipping system creates shipment.
- Billing system creates invoice.
If step 3 fails (warehouse unavailable), compensate: release stock reservation, cancel invoice. Orchestrator (workflow engine, e.g., Azure Logic Apps, Temporal, Step Functions) drives each step & rollback on failure.
Styles:
- Choreography: Each system listens to events & triggers the next step. Loose coupling, but hard to trace end-to-end flow.
- Orchestration: Central coordinator (orchestrator) calls each system in sequence. Tighter coupling, but clearer flow & easier debugging.
Characteristics:
- Latency: Seconds to minutes (multiple steps).
- Consistency: Eventual (not ACID).
- Complexity: High; requires rollback logic.
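An orchestrated saga with compensations can be sketched as a list of (action, undo) pairs. The step names and failure are illustrative; a production orchestrator (Logic Apps, Temporal, Step Functions) would also persist saga state so rollback survives a crash.

```python
# Saga orchestration sketch: each step pairs an action with a compensating
# action. On failure, the orchestrator runs compensations for the steps
# that already completed, in reverse order.
def run_saga(steps):
    completed = []
    for name, action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for undo in reversed(completed):   # roll back newest first
                undo()
            return f"rolled back at {name}"
    return "committed"

log = []

def reserve_stock(): log.append("stock reserved")
def release_stock(): log.append("stock released")
def create_pick():   raise RuntimeError("warehouse unavailable")

steps = [
    ("reserve_stock", reserve_stock, release_stock),
    ("create_pick",   create_pick,   lambda: None),
]
print(run_saga(steps))  # → rolled back at create_pick
print(log)              # → ['stock reserved', 'stock released']
```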
Master Data Synchronization
Master data (customers, vendors, items, GL accounts) is foundational. Synchronizing master data correctly is critical.
Golden Record Concept: One system is the authoritative source for each entity. D365 may be golden for customers & items, but legacy ERP is golden for GL accounts. Or a dedicated master data management (MDM) platform is golden for all.
Strategies:
1. Centralized Master: D365 is the single source of truth. All other systems sync FROM D365. Simple, clear authority.
2. Federated Masters: Different systems own different entities. D365 owns customers & items. Legacy ERP owns GL accounts. Data Lake owns reporting dimensions. Authority is distributed but clear per entity.
3. Hybrid with Consensus: No single master. Multiple systems create/update the same entity. Merge logic determines the “true” record (e.g., most recent timestamp, manual review). Complex but flexible.
Challenges:
- Latency Mismatch: System A changes a customer; the change propagates to system B. A system B user sees stale data until the sync completes. Set expectations with a documented data-freshness SLA.
- Incomplete Attributes: D365 stores customer tax ID; legacy ERP stores credit limit. Full customer record is split. Use a data vault or reference data service to unify.
- Deletion Handling: Can you delete a customer? If D365 deletes, but legacy ERP still has open invoices, what happens? Often, you soft-delete (mark inactive) instead of hard-delete.
Conflict Resolution Strategies
Conflicts occur when two systems update the same record simultaneously or nearly so.
Example: Customer address updated in D365 & Salesforce at the same time. Which address is correct?
Strategies:
| Strategy | Method | Pros | Cons |
|---|---|---|---|
| Last-Write-Wins (LWW) | Whichever system updates last wins | Simple, automatic | Data loss; may not be correct |
| Version-Based | Track version number; accept if version > current | Prevents stale overwrites | Requires version tracking |
| Field-Level Merge | Take non-null fields from both; merge | Preserves more data | May create invalid combinations |
| Source Authority | Honor update from authoritative system only; reject others | Enforces golden record | Tight coupling; other systems can’t update |
| Manual Review | Flag conflict; queue for human review | Guarantees correctness | Expensive, slow |
| Eventual Consistency | Accept inconsistency temporarily; reconcile later | Scales; no blocking | Data may be stale; reconciliation needed |
Recommendation: Enforce single-master architecture (only D365 updates customer address) to prevent conflicts. If multi-master is required, use version numbers & field-level merge, with manual review for conflicts.
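The recommended combination of version numbers and field-level merge can be sketched as below. The record shape (a dict carrying a `version` field) and the non-null merge rule are illustrative assumptions.

```python
# Conflict-resolution sketch: version-based acceptance plus field-level
# merge. Stale updates (version <= current) are rejected outright; newer
# updates merge field by field, with non-null incoming values winning.
def apply_update(current, incoming):
    if incoming["version"] <= current["version"]:
        return current                  # stale: keep the stored record
    merged = dict(current)
    for field, value in incoming.items():
        if value is not None:           # field-level merge: non-null wins
            merged[field] = value
    return merged

stored = {"version": 3, "address": "Old St 1", "phone": "555-1"}
update = {"version": 4, "address": "New St 9", "phone": None}
print(apply_update(stored, update))
# → {'version': 4, 'address': 'New St 9', 'phone': '555-1'}
```

Note that the stale-version check prevents the last-write-wins data loss described in the table above, while the merge preserves fields the incoming update didn't touch.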
Bidirectional Synchronization
Both systems can create/update the same record. Changes flow in both directions.
Complexity: High. Bidirectional sync is prone to infinite loops, conflicts, & data corruption.
Example Problem: A customer address is updated in D365 and syncs to CRM. A CRM user then updates the phone number, which syncs back to D365. If the two updates race and each sync carries the full record, a stale copy can overwrite the newer one, silently reverting either the address or the phone change, and each echoed update can trigger yet another sync cycle.
Solutions:
- Change Tracking + Versioning: Each system tracks which fields it changed. When receiving an update, merge only fields changed by the remote system; keep local changes. Requires discipline in both systems.
- Field-Level Authority: D365 owns address; CRM owns phone. Each system updates only its fields, ignores remote updates to its fields. Simplifies merging but requires strict field ownership.
- Operational Transform (OT) / CRDT: Advanced: use conflict-free replicated data types or operational transforms (like Google Docs) to merge concurrent edits. Overkill for most ERP scenarios.
Recommendation: Avoid bidirectional sync if possible. Choose unidirectional (D365 as source, consume in CRM) unless business requirement demands it.
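If bidirectional sync is unavoidable, the field-level authority approach can be sketched as below. The ownership map and system names are illustrative assumptions; in practice this map would be part of your documented governance.

```python
# Field-level authority sketch: each system owns specific fields, and a
# receiving system accepts only the fields the sender is authoritative
# for. This prevents the race in which a full-record echo reverts a change.
OWNERSHIP = {"address": "d365", "credit_limit": "d365", "phone": "crm"}

def merge_remote(local, remote_update, remote_system):
    merged = dict(local)
    for field, value in remote_update.items():
        if OWNERSHIP.get(field) == remote_system:
            merged[field] = value       # sender owns this field: accept
        # else: ignore -- the sender is not authoritative for this field
    return merged

d365 = {"address": "Main St 1", "phone": "555-0000"}
# CRM sends both fields, but only phone (CRM-owned) is applied in D365.
print(merge_remote(d365, {"address": "Other St 2", "phone": "555-9999"}, "crm"))
# → {'address': 'Main St 1', 'phone': '555-9999'}
```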
Idempotency & Resilience
Messaging is unreliable. Networks fail, services crash. Messages may be delivered multiple times, out of order, or delayed.
Idempotency: Ensure that processing the same message twice produces the same result as once. Implementation:
- Deduplication Table: Track message IDs. Before processing, check if ID exists. If yes, skip. If no, process & record ID.
- Unique Constraints: Use database unique constraints on (entity_id, version) or (entity_id, timestamp). If a duplicate arrives, constraint violation fails gracefully (skip).
- Version-Based Upsert: Store version with record. Only update if incoming version > stored version. Prevents stale updates.
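The deduplication-table and version-based-upsert approaches compose naturally; a minimal sketch follows. The in-memory set and dict stand in for what would be durable tables with unique constraints in production, and the message shape is an illustrative assumption.

```python
# Idempotency sketch: deduplicate on message ID, then apply a
# version-based upsert. Redelivered messages and stale versions are
# both skipped, so processing twice equals processing once.
seen_ids = set()   # would be a durable dedup table in production
records = {}       # entity_id -> {"version": int, "data": ...}

def handle(message):
    if message["id"] in seen_ids:
        return "duplicate: skipped"
    seen_ids.add(message["id"])
    current = records.get(message["entity_id"])
    if current and message["version"] <= current["version"]:
        return "stale: skipped"        # an out-of-order older update
    records[message["entity_id"]] = {"version": message["version"],
                                     "data": message["data"]}
    return "applied"

msg = {"id": "m-1", "entity_id": "C-1", "version": 2, "data": "v2"}
print(handle(msg))   # → applied
print(handle(msg))   # → duplicate: skipped  (same message redelivered)
```

The version check also gives out-of-order delivery a safe outcome: a delayed older message is ignored rather than overwriting newer data.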
Resilience Patterns:
- Retry with Exponential Backoff: On failure, retry after 1s, 2s, 4s, 8s, etc., up to max retries. Then dead-letter.
- Dead-Letter Queue (DLQ): After max retries, move message to DLQ. Investigate & manually replay when fixed.
- Circuit Breaker: If downstream system is failing (too many 5xx errors), stop sending requests for a cooldown period. Prevents cascading failures.
- Bulkhead (Isolation): Use separate queues/threads for critical vs non-critical syncs. If non-critical sync is slow, doesn’t block critical transactions.
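Retry with exponential backoff plus a dead-letter fallback can be sketched as below. The delay is kept tiny so the sketch runs quickly; in production `base_delay_s` would be on the order of a second, and the dead-letter store would be a real DLQ rather than a list.

```python
import time

# Resilience sketch: retry a failing operation with exponentially growing
# delays (1x, 2x, 4x, 8x the base), then dead-letter the message once
# retries are exhausted so a human can investigate and replay it.
dead_letter = []

def send_with_retry(operation, message, max_retries=4, base_delay_s=0.001):
    for attempt in range(max_retries + 1):
        try:
            return operation(message)
        except Exception:
            if attempt == max_retries:
                dead_letter.append(message)   # exhausted: park for humans
                return None
            time.sleep(base_delay_s * (2 ** attempt))

attempts = []
def flaky(msg):
    # Simulated downstream that fails twice, then recovers.
    attempts.append(msg)
    if len(attempts) < 3:
        raise ConnectionError("downstream 503")
    return "delivered"

print(send_with_retry(flaky, "invoice-1"))  # → delivered (on the third attempt)
```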
Monitoring & Data Quality Governance
Production sync requires operational visibility.
Key Metrics:
- Sync Latency: Time from change in source to change in destination. Target: seconds to minutes depending on pattern.
- Error Rate: % of messages failing. Target: < 0.1%. Alert if > 1%.
- DLQ Depth: Messages stuck in dead-letter queue. Target: 0. Alert if > 10.
- Record Duplication: Duplicate records created. Can indicate idempotency failure.
- Data Drift: Periodic comparison of source & destination records. % matching. Target: 99.5%+.
Monitoring Setup:
- Application Insights or DataDog dashboards tracking latency, error rate, throughput.
- Alerts on high error rate, high DLQ depth, high latency.
- Nightly reconciliation job comparing source & destination record counts & checksums.
Data Quality Governance:
- Document golden record authority per entity (e.g., “D365 is golden for customers; legacy ERP is golden for GL”).
- Document conflict resolution rules (“For address field: take D365 value if newer than 24 hours; otherwise take CRM”).
- Document retry & dead-letter policies (“Retry up to 5 times with exponential backoff; move to DLQ after 1 hour”).
- Ownership: who is responsible for investigating DLQ messages? Who owns data quality?
- SLA: what latency is acceptable? What is downtime impact?
Batch Reconciliation: Even with perfect sync logic, inconsistencies creep in. Run a nightly batch job comparing source & destination on a sample of records (or all, if feasible). Flag mismatches & auto-correct using golden record authority rules.
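A reconciliation pass of this kind can be sketched with per-record checksums. The record shape and the dict-based stores are illustrative; in production this would compare database extracts, and corrections would go through the normal sync path rather than a direct write.

```python
import hashlib
import json

# Nightly reconciliation sketch: compute a per-record checksum on both
# sides, flag records that are missing or differ, and correct drift by
# copying from the golden source (source authority wins).
def checksum(record):
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def reconcile(source, destination):
    drifted = [key for key in source
               if key not in destination
               or checksum(destination[key]) != checksum(source[key])]
    for key in drifted:
        destination[key] = dict(source[key])   # golden record authority
    return drifted

src = {"C-1": {"address": "Main St 1"}, "C-2": {"address": "Elm St 2"}}
dst = {"C-1": {"address": "Main St 1"}, "C-2": {"address": "STALE"}}
print(reconcile(src, dst))  # → ['C-2']
```

Checksumming a canonical serialization (sorted keys) keeps the comparison cheap and order-insensitive, which matters when comparing millions of records nightly.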
Frequently Asked Questions
Q: Should I use sync patterns or virtual entities for reference data?
A: If you need real-time, always-fresh reference data (GL accounts) in CRM, use virtual entities (cloud-native, CRM-friendly). If you need to replicate to on-premise systems or keep historical snapshots, use sync patterns.
Q: How do I handle large attribute values in sync messages?
A: If a record is > 256 KB (Service Bus limit), send only key + link. Consumer fetches full record via API. Or split attributes across multiple messages with a correlation key.
Q: Can I sync partial records (only changed fields)?
A: Yes. Send only changed fields in the message. Consumer uses PATCH/merge logic, not full replacement. Reduces message size & network traffic.
Q: What if a synced record is deleted in the source?
A: Send a delete marker (or soft-delete flag). Consumer must handle deletion: either hard-delete, soft-delete, or archive depending on business rules.
Q: How do I test sync logic?
A: Use local message queue (e.g., RabbitMQ, Kafka in Docker). Inject test messages, verify consumer processes & produces correct output. Test failure scenarios (consumer crash, duplicate message, out-of-order).
Q: Is eventual consistency okay for master data?
A: Depends. If system A shows stale data temporarily, is that acceptable? For reference data (items, GL accounts), yes. For customer credit limits (used in sales validation), maybe not. Document your SLA.
Q: How long should I retain messages in the queue?
A: Service Bus default is 14 days. If sync is down longer than 14 days, you’ll lose history. For critical syncs, increase retention or switch to a durable log (Kafka default 7 days, configurable).
Q: Can I prioritize certain sync messages?
A: Yes. Use Service Bus message priority or separate queues (critical vs non-critical). Process critical queue first.
Methodology
This guide synthesizes data synchronization patterns from enterprise integration literature (Enterprise Integration Patterns book, Microsoft architecture reference), distributed systems theory, & real-world D365 implementation experience. Topics cover sync pattern taxonomy (request-reply, fire-and-forget, pub-sub, saga, batch), master data management strategies, conflict resolution tactics, bidirectional sync complexities, idempotency & resilience mechanisms, & operational governance & monitoring.
Dataset & Sources: Gregor Hohpe & Bobby Woolf’s Enterprise Integration Patterns; Microsoft Azure integration services documentation; Apache Kafka & message queue architecture guides; distributed systems consensus & eventual consistency literature.
Analytical Approach: Compared sync patterns on latency, coupling, complexity, & scalability. Analyzed master data synchronization challenges & golden record authority models. Evaluated conflict resolution strategies with trade-off matrices. Discussed bidirectional sync pitfalls & mitigations. Emphasized idempotency & resilience as non-negotiable for production reliability.
Limitations: This guide covers standard patterns & common tools (Service Bus, Kafka, Logic Apps). Advanced CRDT & operational transform implementations are referenced but not detailed. Database-specific tuning (SQL indexing for deduplication, etc.) is beyond scope.
Data Currency: Accurate as of March 2026. Azure integration services & event-driven architectures are rapidly evolving. Consult Microsoft Learn & Apache project documentation for latest features.
Frequently Asked Questions
Q: When should I use request-reply versus fire-and-forget?
A: Use request-reply (pull) for on-demand reads (user dashboards, lookups). Use fire-and-forget (push) for notifications that don't need immediate confirmation (welcome email after customer created). Choose fire-and-forget for high-volume, asynchronous scenarios to decouple systems.
Q: What is a golden record, and which system should own it?
A: The golden record is the authoritative source for an entity (customer, vendor, item). In a federated approach, D365 owns customers; legacy ERP owns GL accounts. Choose one system per entity to avoid conflicts. Centralize all writes to the golden record; other systems read or sync from it.
Q: How do I prevent sync conflicts?
A: Enforce single-master architecture: only D365 can update customers. Other systems can read but not write. If multi-master is unavoidable, use version numbers to detect stale updates, implement field-level merge logic, or flag conflicts for manual review.
Q: Should I use bidirectional sync?
A: No. Bidirectional sync is error-prone, inviting infinite loops and data corruption. Prefer unidirectional (D365 as source) unless you have strong business justification. If required, use change tracking, versioning, and field-level authority to manage complexity.
Q: How do I catch inconsistencies that slip past real-time sync?
A: Run nightly batch reconciliation comparing source and destination record counts, checksums, and key fields. Flag mismatches and auto-correct using golden record authority. This catches inconsistencies that slip through real-time sync and prevents data drift from accumulating.
Q: What sync latency is acceptable?
A: Depends on use case. Reference data (items, GL accounts): 1–2 hours acceptable. Operational data (orders, invoices): 15–60 minutes. Real-time integrations (supply chain visibility): sub-second to seconds. Document your SLA and monitor actual latency weekly.
Related Reading
Dynamics 365 Enterprise Integration: The Complete Guide
Change Data Capture (CDC) Patterns for Dynamics 365 Integration
Master CDC strategies for D365. Learn capture mechanisms (Change Tracking, Dataverse CDC, Data Lake), polling vs push, delta detection, data warehousing, Debezium/Kafka patterns, and consistency challenges.