Hello,

We are planning to set up a data pipeline that sends periodic, incremental 
updates from Teradata to Hadoop via Kafka. For a large DW table with hundreds 
of GB of data, is it okay (in terms of performance) to use Kafka for the 
initial bulk data load? Or would Sqoop with the Teradata connector be more 
appropriate?
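For context, the kind of bulk load I am considering with Sqoop would look 
roughly like the sketch below (host, database, table, and column names are 
placeholders, and the parallelism settings are just illustrative):

```shell
# Hypothetical one-time bulk import of a large Teradata table into HDFS.
# Assumes the Teradata JDBC driver is on Sqoop's classpath.
sqoop import \
  --connect jdbc:teradata://td-host/DATABASE=dw \
  --driver com.teradata.jdbc.TeraDriver \
  --username etl_user -P \
  --table LARGE_DW_TABLE \
  --split-by ID_COLUMN \
  --num-mappers 8 \
  --target-dir /data/dw/large_dw_table
```

After the initial load, the plan would be to switch to Kafka for the ongoing 
incremental updates.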


Thanks,
Po