
6 API integration patterns that prevent chaos as you scale

The problem: your integrations work — until they don't

You've connected your CRM to the ERP. Billing syncs with accounting. Notifications arrive in Slack. Everything looks fine — until an API provider changes their response format, volume spikes 3x during a Black Friday campaign, or an external service goes down for 45 minutes and you lose 200 transactions that nobody notices.

Most integrations are built for the happy path. They work perfectly in demos. But in production, at real volume, with unpredictable external systems? They start failing silently.

The difference between a fragile integration and a resilient one isn't code complexity — it's choosing the right pattern.

Pattern 1: Synchronous Request-Response (and when to avoid it)

The simplest pattern: your application makes an HTTP request to an external API and waits for the response.

Example: Real-time stock verification with a supplier before confirming an order.

When it works well:

  • The response is needed immediately to continue the flow
  • The external API consistently responds under 500ms
  • Volume is under 100 requests per minute

When it becomes a problem:

  • The external API responds slowly → your application blocks
  • The external API goes down → your users see errors
  • Volume increases → you exceed the provider's rate limit

Rule of thumb: If your flow can tolerate a 5-30 second delay without user impact, don't use synchronous request-response. You have better alternatives.
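The blocking behavior above can be made explicit in a minimal Python sketch. `check_stock_sync` and the injected `call_supplier` callable are hypothetical stand-ins for the real HTTP call; the key point is the hard deadline, so a slow supplier can fail fast instead of stalling the order flow.

```python
import time

def check_stock_sync(call_supplier, timeout_s=0.5):
    """Synchronous request-response: block until the supplier answers,
    but enforce a hard deadline so a slow API can't stall the flow.

    `call_supplier` stands in for the real HTTP request and is expected
    to raise TimeoutError when the deadline passes."""
    start = time.monotonic()
    try:
        result = call_supplier(timeout=timeout_s)
    except TimeoutError:
        # Fail fast: return a clear error instead of blocking the user.
        return {"in_stock": None, "error": "supplier timeout"}
    elapsed = time.monotonic() - start
    return {"in_stock": result, "latency_s": round(elapsed, 3)}
```

Injecting the call as a parameter also makes the timeout path trivially testable without a live supplier.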

Pattern 2: Webhooks (push instead of pull)

Instead of periodically asking "has anything changed?", the external system notifies you automatically when an event occurs.

Concrete example: Stripe sends a webhook on every successful payment. Instead of checking every 30 seconds whether a payment arrived, you receive the notification instantly.

Correct implementation:

  • Dedicated endpoint with authentication (HMAC signature or secret token)
  • Fast response — process the webhook in under 5 seconds or return 200 and process async
  • Idempotency — treat every webhook as if it could arrive twice (because it will)
  • Retry logic on the provider side — most providers resend the webhook if they don't receive a 2xx

Implementation cost: €1,000-2,000 per webhook integration, including error handling and logging.

A distributor in Cluj that we worked with at NEXVA SYSTEM was receiving orders via EDI from 3 different retailers. Each had their own format. We implemented webhook receivers with automatic transformation — orders now reach their ERP in under 10 seconds, compared to 2 hours/day of manual processing.

Pattern 3: Message Queue (async with delivery guarantee)

When your integration can't lose data and doesn't need an instant response, a message queue is the ideal solution.

How it works: Your application puts a message in a queue (RabbitMQ, Amazon SQS, Redis Streams). A worker processes messages one by one. If processing fails, the message stays in the queue and retries automatically.

Real example: An e-commerce platform processes 500 orders per hour. Each order needs to sync with the ERP, be sent to fulfillment, and be recorded in analytics. Without a queue, a 10-minute ERP outage blocks everything. With a queue, orders accumulate and process automatically when the ERP comes back online.

Critical advantages:

  • Zero data loss — messages persist until processed
  • Decoupling — producer and consumer don't need to be online simultaneously
  • Simple scaling — add workers when volume grows
  • Natural backpressure — the queue absorbs traffic spikes

Infrastructure cost: Amazon SQS costs under €1/month for 1 million messages. Self-hosted RabbitMQ: €50-100/month on a dedicated server.
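The retry-until-processed behavior can be sketched with Python's in-memory `queue` module standing in for SQS or RabbitMQ (an assumption for illustration — an in-memory queue loses the persistence guarantee that makes the real thing valuable):

```python
import queue

def run_worker(q, process, max_attempts=3):
    """Drain the queue; a failed message is re-queued with an attempt
    counter, so a transient ERP outage delays orders instead of losing
    them. Messages that keep failing land in a dead-letter list."""
    dead_letter = []
    while True:
        try:
            attempts, msg = q.get_nowait()
        except queue.Empty:
            return dead_letter
        try:
            process(msg)
        except Exception:
            if attempts + 1 >= max_attempts:
                dead_letter.append(msg)      # give up: needs a human
            else:
                q.put((attempts + 1, msg))   # retry on a later pass
```

The producer only ever calls `q.put(...)`; it neither knows nor cares whether the consumer is currently up, which is the decoupling point from the list above.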

Pattern 4: Circuit Breaker (cascade failure protection)

When an external API goes down, the worst thing you can do is keep sending requests. Waiting for timeouts consumes resources, timeouts accumulate, and within 5 minutes your application is completely blocked — even though 90% of functionality doesn't depend on that API.

A circuit breaker works like an electrical fuse:

1. Closed (normal) — requests pass through normally

2. Open (tripped) — after N consecutive failures, stop requests and instantly return a fallback

3. Half-open (testing) — periodically send a test request. If it succeeds, return to normal

Practical example: Your application verifies delivery addresses via a geocoding API. The API goes down. Without a circuit breaker: every verification takes 30 seconds (timeout), and the user waits frustrated. With a circuit breaker: after 5 consecutive failures, the system accepts the address without validation and marks it for later verification. The user notices nothing.

Recommended thresholds:

  • Open: 5 consecutive failures or error rate > 50% in the last 60 seconds
  • Half-open wait time: 30-60 seconds
  • Test requests: 1 per half-open interval
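The three states and the thresholds above translate into a compact class. This is a minimal sketch (no thread safety, no error-rate window — just the consecutive-failure counter); the injected `clock` makes the open-interval logic testable:

```python
import time

class CircuitBreaker:
    """Closed -> open after `max_failures` consecutive failures;
    open -> half-open after `reset_after_s`; half-open -> closed on a
    successful test request."""

    def __init__(self, max_failures=5, reset_after_s=30, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.clock = clock
        self.failures = 0
        self.opened_at = None            # None means closed

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after_s:
                return fallback()        # open: fail fast, send nothing
            # half-open: let one test request through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()   # trip the breaker
            return fallback()
        self.failures = 0
        self.opened_at = None            # success: back to closed
        return result
```

In the geocoding example, `fallback` would accept the address unvalidated and flag it for later verification.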

Pattern 5: API Gateway (single entry point)

When you have 10+ integrations, each with their own credentials, rate limits, data formats, and retry logic, management becomes a nightmare.

An API Gateway centralizes:

  • Authentication — manages tokens and API keys for all external services
  • Rate limiting — prevents exceeding provider limits
  • Data transformation — converts different formats into a standard internal format
  • Centralized logging — one place to see all requests, errors, and latencies
  • Caching — frequent responses are served from cache, reducing costs and latency

Real example: A retail client integrates Stripe (payments), SmartBill (invoicing), FAN Courier (shipping), Google Analytics (tracking), and Mailchimp (email). Without a gateway: 5 separate modules, 5 sets of hardcoded credentials, 5 different retry logics. With a gateway: one integration layer, one monitoring configuration, one place to debug.

Implementation cost: €5,000-10,000 for a custom gateway. Alternatively, Kong or AWS API Gateway: €50-200/month.
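A toy version of that centralization, showing credentials, a per-provider rate limit, a response cache, and a shared request log in one place. Provider names, limits, and the `send` callable are illustrative assumptions, not a real configuration:

```python
import time

class ApiGateway:
    """Minimal gateway sketch: one place for credentials, rate limits,
    caching, and logging, instead of one of each per integration."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.credentials = {"stripe": "sk_test_demo"}   # illustrative only
        self.rate_limits = {"stripe": 100}              # requests per minute
        self.request_log = []                           # centralized logging
        self._windows = {}                              # provider -> timestamps
        self._cache = {}

    def call(self, provider, endpoint, send, cacheable=False):
        key = (provider, endpoint)
        if cacheable and key in self._cache:
            return self._cache[key]                     # served from cache
        now = self.clock()
        window = [t for t in self._windows.get(provider, []) if now - t < 60]
        if len(window) >= self.rate_limits.get(provider, float("inf")):
            raise RuntimeError(f"rate limit hit for {provider}")
        window.append(now)
        self._windows[provider] = window
        token = self.credentials[provider]   # auth injected, never hardcoded
        response = send(endpoint, token)
        self.request_log.append((provider, endpoint))
        if cacheable:
            self._cache[key] = response
        return response
```

Each integration module then calls `gateway.call(...)` instead of carrying its own credentials and retry logic.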

Pattern 6: Event Sourcing (complete change history)

Instead of storing only the current state, you store every event that modified the state. This pattern is powerful in integrations because it provides:

  • Complete audit trail — you know exactly what happened, when, and why
  • Replay — if an integration fails, you can reprocess events from the last valid point
  • Simplified debugging — instead of guessing what went wrong, you see the exact sequence of events

When it's worth the effort: Financial systems, logistics, or any domain where data loss or corruption has serious consequences. Not necessary for simple notification integrations or non-critical syncs.
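The append-and-replay mechanics fit in a few lines. The event shapes (`OrderCreated`, `OrderCancelled`) and the fold over order state are hypothetical examples of the pattern, not a prescribed schema:

```python
class EventStore:
    """Sketch of event sourcing: an append-only log plus a fold that
    rebuilds current state from the full history, giving the audit
    trail and replay properties described above."""

    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)   # never updated or deleted in place

    def replay(self):
        """Rebuild order state by folding over every recorded event."""
        state = {}
        for ev in self.events:
            order = state.setdefault(ev["order_id"],
                                     {"status": None, "total": 0})
            if ev["type"] == "OrderCreated":
                order["status"] = "open"
                order["total"] = ev["total"]
            elif ev["type"] == "OrderCancelled":
                order["status"] = "cancelled"
        return state
```

If a downstream integration corrupts its copy of the data, you rerun `replay()` against the log instead of reconciling by hand.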

How to choose the right pattern

| Criterion | Sync | Webhook | Queue | Circuit Breaker |
|-----------|------|---------|-------|-----------------|
| Instant response needed | ✅ | ❌ | ❌ | N/A |
| Zero data loss | ❌ | ⚠️ | ✅ | N/A |
| High volume | ❌ | ✅ | ✅ | N/A |
| External outage protection | ❌ | N/A | ✅ | ✅ |

In practice, you combine patterns. A robust integration uses webhooks for reception, queues for processing, circuit breakers for external calls, and a gateway for management. It's not complicated if you implement them from the start — it's only complicated when you add them after things have already gone wrong.

Mistakes we see frequently

1. Excessive polling

Checking an API every 10 seconds when the data changes 3 times per day wastes resources, eats into rate limits, and bills you for requests that return nothing new.

2. Lack of idempotency

If you process the same event twice and the result differs (for example, creating 2 invoices instead of 1), you have a fundamental design problem.

3. Ignoring event ordering

In a distributed system, events can arrive in a different order than they were emitted. "Order cancelled" can arrive before "Order created" if you don't have an ordering mechanism.

4. Non-existent error handling

"Retry forever" is not a strategy. Implement exponential backoff, dead-letter queues for repeatedly failing messages, and alerts for human intervention.
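A bounded retry plan can replace "retry forever" in a few lines. The `sleep` callable is injected (an assumption for testability — in a real worker it would be the scheduler's delay), and messages that exhaust their attempts go to a dead-letter list for human review:

```python
def retry_with_backoff(fn, dead_letter, msg, sleep,
                       max_attempts=4, base_s=1.0):
    """Exponential backoff with a dead-letter destination instead of
    unbounded retries: wait 1s, 2s, 4s, ... then give up and alert."""
    for attempt in range(max_attempts):
        try:
            return fn(msg)
        except Exception:
            if attempt == max_attempts - 1:
                dead_letter.append(msg)       # exhausted: needs a human
                return None
            sleep(base_s * 2 ** attempt)      # 1s, 2s, 4s, ...
```

Adding a little random jitter to each wait is also common, so a fleet of workers doesn't retry in lockstep against a recovering service.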

The real cost of robust integrations

For a company with 3-5 external integrations:

| Component | Cost |
|-----------|------|
| Architecture and design | €2,000-4,000 |
| Implementation (webhooks + queues + circuit breakers) | €8,000-15,000 |
| API Gateway setup | €3,000-5,000 |
| Monitoring and alerting | €1,000-2,000 |
| Total | €14,000-26,000 |
| Monthly maintenance | €300-600 |

Compare with the cost of the alternative: a 2-hour outage that loses 500 orders, or 3 months of desynchronized data requiring manual reconciliation.

How to get started

1. Audit existing integrations — where are the fragile points? Where have you lost data?

2. Classify by criticality — which integrations can't lose data vs. which tolerate delay?

3. Implement incrementally — start with circuit breakers on the most unstable integrations

4. Add monitoring — if you're not measuring latency and error rates, you can't improve

You don't need to rebuild everything at once. But every new integration should be built with the right patterns from the start — the cost is minimal upfront and massive if you need to refactor later.

Want us to evaluate your integration architecture together? Book a free consultation.
