Hi team,

I have a design question: is it a good idea to write wrappers over existing Spark connectors to add functionality or improve usability around the options passed to the connector, as opposed to providing utility libraries that take parameters and call the underlying connectors as-is?

Example:
Option 1:
Extend the snowflake-connector to add some options and use it as:
>> spark.read.format("my-snowflake").option("myoption", "")
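
For concreteness, this is roughly what I have in mind for Option 1 (just a sketch, class and option names are made up; I'm assuming we can delegate to the stock connector's DSv1-style DefaultSource at net.snowflake.spark.snowflake.DefaultSource):

  import org.apache.spark.sql.SQLContext
  import org.apache.spark.sql.sources.{BaseRelation, DataSourceRegister, RelationProvider}

  // Registers the short name "my-snowflake" (would also need a
  // META-INF/services/org.apache.spark.sql.sources.DataSourceRegister entry)
  // and forwards to the stock Snowflake connector after handling our option.
  class MySnowflakeSource extends RelationProvider with DataSourceRegister {

    override def shortName(): String = "my-snowflake"

    override def createRelation(
        sqlContext: SQLContext,
        parameters: Map[String, String]): BaseRelation = {
      // Interpret/validate "myoption" here, then hand everything else through.
      val forwarded = parameters - "myoption"
      new net.snowflake.spark.snowflake.DefaultSource()
        .createRelation(sqlContext, forwarded)
    }
  }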

Option 2:
Write a utility that calls the connector internally:
>> util.readWithMyOption()
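
Option 2 would be little more than a helper, roughly (again, names are placeholders):

  import org.apache.spark.sql.{DataFrame, SparkSession}

  // Thin helper that applies our extra behaviour/defaults and then
  // calls the stock connector unchanged.
  object SnowflakeUtil {
    def readWithMyOption(
        spark: SparkSession,
        sfOptions: Map[String, String],
        table: String): DataFrame = {
      // Whatever "myoption" is meant to do happens here, outside the connector.
      spark.read
        .format("snowflake")       // stock connector, as-is
        .options(sfOptions)        // credentials, warehouse, etc.
        .option("dbtable", table)
        .load()
    }
  }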


Also, what kind of logic can be put inside a DataSourceV2 connector? Is it
a good idea to manipulate/transform the incoming dataset inside the connector
itself?

Thanks.
