arpitrathore commented on PR #58929: URL: https://github.com/apache/airflow/pull/58929#issuecomment-4074661578
Hi @eladkal, thank you for the offer to help with reviews; I really appreciate it. I understand the appeal of co-locating this in the existing provider to simplify the process, but I want to share why I believe a separate provider is the right technical call, and I'm genuinely open to your thoughts.

**The two Python clients are fundamentally incompatible at the API level:**
- InfluxDB 2.x uses `influxdb-client`: `InfluxDBClient(url=..., token=..., org=...)`, queries via the Flux language, returns `FluxTable`, and its write API has batching/synchronous modes.
- InfluxDB 3.x uses `influxdb3-python`: `InfluxDBClient3(host=..., token=..., database=...)`, queries via SQL, returns a `pd.DataFrame`, and its write API is a single direct call.

There is zero shared code between the two hook implementations: different client classes, different connection schemas (org + bucket vs. database), different query languages, and different return types.

**Dependency concern:** a user who only runs InfluxDB 3.x would be forced to install `influxdb-client` (the 2.x library) as part of the provider, and vice versa. With separate providers, each can declare only what it needs.

**Connection type:** each needs its own `conn_type` (`influxdb` vs. `influxdb3`) with different form widgets and connection extras. These already exist as separate logical integrations in Airflow's connection registry.

That said, I recognize this is a governance/process decision as much as a technical one. If you're open to being the committer sponsor for this as a new incubating provider (per AIP-95), I'm committed to being a steward and actively maintaining it; I just need a second steward and your sponsorship to move forward under the new policy. Would you be willing to help in that capacity?
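To make the incompatibility concrete, here is a minimal sketch of how the constructor arguments diverge between the two clients. Plain dicts stand in for Airflow `Connection` objects, and the helper names (`influxdb2_kwargs`, `influxdb3_kwargs`) are hypothetical, not part of either provider:

```python
# Sketch only: illustrates the divergent connection schemas described above.
# Helper names and the dict-based "connection" are assumptions for this example.

def influxdb2_kwargs(conn: dict) -> dict:
    """Args for InfluxDB 2.x: influxdb_client.InfluxDBClient(url=..., token=..., org=...)."""
    return {
        "url": f"{conn['schema']}://{conn['host']}:{conn['port']}",
        "token": conn["token"],
        "org": conn["org"],  # 2.x scopes data by org + bucket
    }

def influxdb3_kwargs(conn: dict) -> dict:
    """Args for InfluxDB 3.x: influxdb_client_3.InfluxDBClient3(host=..., token=..., database=...)."""
    return {
        "host": conn["host"],
        "token": conn["token"],
        "database": conn["database"],  # 3.x scopes data by database only
    }

conn2 = {"schema": "https", "host": "influx.example.com", "port": 8086,
         "token": "tok", "org": "my-org"}
conn3 = {"host": "influx.example.com", "token": "tok", "database": "my-db"}

# The two kwarg sets share only "token"; nothing else maps across versions.
assert set(influxdb2_kwargs(conn2)) == {"url", "token", "org"}
assert set(influxdb3_kwargs(conn3)) == {"host", "token", "database"}
```

The overlap between the two keyword sets is just `token`, which is why a shared hook base class would buy essentially nothing here.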
