I finally got it running by adding a few steps.
Add these jars to the lib directory:
mysql-binlog-connector-java-0.11.0.jar
mysql-connector-java-5.1.47.jar
nifi-cdc-api-1.9.2.jar
nifi-cdc-mysql-processors-1.9.2.jar
nifi-distributed-cache-client-service-api-1.9.2.jar
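In case it helps anyone scripting that step, here is a minimal sketch in
Java (the paths are assumptions; adjust them for your install) that copies
the downloaded jars into NiFi's lib directory. NiFi only loads jars from
lib at startup, so restart it afterwards.

    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class CopyCdcJars {
        public static void main(String[] args) throws Exception {
            Path downloads = Paths.get("/tmp/cdc-jars");     // assumed download dir
            Path nifiLib = Paths.get("/opt/nifi-1.9.2/lib"); // assumed $NIFI_HOME/lib
            try (DirectoryStream<Path> jars = Files.newDirectoryStream(downloads, "*.jar")) {
                for (Path jar : jars) {
                    Files.copy(jar, nifiLib.resolve(jar.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                    System.out.println("Copied " + jar.getFileName());
                }
            }
            // Restart NiFi so the newly added jars are picked up.
        }
    }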
I'm using NiFi 1.9.2 QueryDatabaseTable->PutBigQueryBatch to attempt to
replicate a MySQL table to BigQuery. In QueryDatabaseTable I've configured
'Use Avro Logical Types=true', so I have a MySQL DATETIME which is encoded
in Avro as a long with logical type timestamp-millis. The PutBigQueryBatch
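For reference, with 'Use Avro Logical Types=true' the DATETIME value
travels as a long counting milliseconds since the Unix epoch, annotated in
the Avro schema with logicalType timestamp-millis. A small Java
illustration of decoding such a value (the field name and sample value
below are made up):

    import java.time.Instant;

    public class TimestampMillisDemo {
        public static void main(String[] args) {
            // Avro schema fragment for such a column (hypothetical field name):
            //   {"name": "updated_at",
            //    "type": {"type": "long", "logicalType": "timestamp-millis"}}
            long timestampMillis = 1563181680000L; // sample encoded value
            Instant instant = Instant.ofEpochMilli(timestampMillis);
            System.out.println(instant); // prints 2019-07-15T09:08:00Z
        }
    }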
Noe,
Just activate compression on the site-to-site port and the client will
honor it if it is able. I don't believe the protocol has changed in quite
a while, so you should be fine with the versions noted.
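If you ever drive the transfer from code instead of a Remote Process
Group, the Site-to-Site client library exposes the same option. A minimal
sketch using nifi-site-to-site-client (the URL and port name are
placeholders):

    import java.nio.charset.StandardCharsets;
    import java.util.Collections;
    import org.apache.nifi.remote.Transaction;
    import org.apache.nifi.remote.TransferDirection;
    import org.apache.nifi.remote.client.SiteToSiteClient;

    public class CompressedSend {
        public static void main(String[] args) throws Exception {
            SiteToSiteClient client = new SiteToSiteClient.Builder()
                    .url("https://dc2-nifi.example.com:8443/nifi") // placeholder URL
                    .portName("from-dc1")                          // placeholder input port
                    .useCompression(true) // request compression for the transfer
                    .build();
            Transaction txn = client.createTransaction(TransferDirection.SEND);
            txn.send("hello".getBytes(StandardCharsets.UTF_8),
                    Collections.emptyMap());
            txn.confirm();
            txn.complete();
            client.close();
        }
    }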
Thanks
Joe
On Mon, Jul 15, 2019 at 9:08 AM Noe Detore wrote:
> Hello,
>
> What is the best way to configu
Hello,
What is the best way to configure compression using site to site when
sending data from one data center to another? I notice there is the ability
to configure compression in a queue. What considerations need to be taken
into account for different versions? DC1 runs NiFi 1.5 and DC2 runs NiFi 1.9.
Thanks
Dweep,
The data I am moving into S3 already consists of some fairly large sets of
files, since they are a bulk export from a SaaS application. Thus, the
number of files being PUT to S3 was not a huge consideration. However,
since the Parquet files are to be consumed by Redshift Spectrum