GitHub user mlevkov edited a discussion: Proposal: HTTP Source Connector (Webhook Gateway)
Hi team,

Following up on the HTTP sink connector ([#2925](https://github.com/apache/iggy/pull/2925)), we'd like to propose its inverse — an HTTP **source** connector that acts as a webhook gateway. It embeds an HTTP server inside the Source plugin, accepts incoming POST requests, and produces messages to Iggy topics.

The connector ecosystem has 3 sources today, all poll-based (random, PostgreSQL, Elasticsearch). There's no push-based source for accepting inbound webhooks — the most common pattern for receiving real-time events from SaaS integrations (GitHub, Stripe, Slack), IoT devices, CI/CD pipelines, and inter-service communication.

## Pre-Implementation Review

We ran 5 specialized review agents against this design before writing any code. **25 findings (7 CRITICAL, 18 HIGH)**. 21 have been corrected in the design. **3 are architectural blockers** that require team input:

### Blocker 1: SDK Single-Topic Routing

`ProducedMessage` has no topic field. The runtime creates one `IggyProducer` for the last `[[streams]]` entry. Multi-topic routing (different webhook endpoints → different topics) is the connector's primary value proposition, but it **cannot be expressed** through the current SDK contract. We lean toward using the connector's own `IggyClient` to produce directly (bypassing the forwarding loop), with an SDK extension as a follow-up. **What does the team prefer?**

### Blocker 2: Shutdown Data Loss

`SourceManager::stop_connector` calls `cleanup_sender` (removing the flume channel) BEFORE calling `iggy_source_close`. Messages drained by `poll()` during shutdown hit a dead callback. **Would a PR to adjust the shutdown order be welcome?** (This affects all source connectors, not just HTTP.)

### Blocker 3: Config Consumer Failure

The connector creates its own `IggyClient` to consume a config topic (an event-sourced endpoint registry). If that connection drops, the HTTP server continues with a frozen registry — revoked endpoints stay active.
We've added health signaling (degraded status, optional rejection). **Is this failure mode acceptable?**

## Design Highlights

- **Push-to-pull bridge**: axum HTTP server → lock-free bounded buffer (`crossbeam::queue::ArrayQueue`) → `poll()` drains → Iggy
- **Event-sourced endpoint registry**: dedicated Iggy config topic for endpoint registration/revocation — enables hot reload and multi-instance coordination
- **Ephemeral endpoints**: cryptographically random URL paths (`/e/{random_id}`) — individually revocable, with optional per-endpoint HMAC validation
- **Lock-free hot path**: `ArcSwap` for the registry, `ArrayQueue` + `Notify` for the buffer — no mutex between TCP accept and HTTP 200
- **Structured concurrency**: 3 tasks (HTTP server, config consumer, poll loop) coordinated via a `CancellationToken` tree
- **RBAC is a deliberate non-goal** — authorization is delegated to the API gateway (above) and Iggy server permissions (below)

The full design document with all corrections, type definitions, and review findings is in the comment below. Looking forward to your feedback, especially on the three blockers.

GitHub link: https://github.com/apache/iggy/discussions/3039
