Hi Rickard,
Great suggestion, thanks a lot!
I'll try this out and compare it with the quick-and-dirty approach I have
written so far, which was to pull this information from the Sqoop command's
output log. Your suggestion is much more elegant.
Regards,
Douglas
On Thu, Dec 8, 2016 at 12:55 PM, Rickard Cardell wrote:
Hi
We are doing a similar thing, but a job id is required. We fetch all job
stats from the REST API of the JobHistory server and push them to an ELK
cluster. We can then graph all kinds of stuff :) But perhaps for the use
case you describe it might be enough to curl the JobHistory server.
The counte
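For example, a minimal sketch with curl against the JobHistory server's REST
API (host and job id below are placeholders; 19888 is the default web UI
port):

    # List finished MapReduce jobs known to the JobHistory server
    curl -s "http://historyserver.example.com:19888/ws/v1/history/mapreduce/jobs"

    # Counters for a single job (bytes read/written, record counts, etc.)
    curl -s "http://historyserver.example.com:19888/ws/v1/history/mapreduce/jobs/job_1480000000000_0042/counters"

Both calls return JSON, so the output is easy to parse or to forward to ELK.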
Hello all,
I am currently struggling to ingest data from Teradata into Hive on HDFS in
Parquet format (I've pasted a rough sketch of the command I am trying at the
end of this message).
1. I was expecting Sqoop to create the Hive tables automatically, but
instead I get the error "Import Hive table's column schema is missing".
2. Instead of import, I troubleshot and ju
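A rough sketch of the kind of import I am trying, with placeholder host,
credentials and table names (plain Teradata JDBC driver, Sqoop 1.4.6+ for
--as-parquetfile):

    # Import one Teradata table into a Parquet-backed Hive table.
    # A single mapper keeps the sketch simple (no --split-by needed).
    sqoop import \
      --connect "jdbc:teradata://td-host.example.com/DATABASE=SALES" \
      --driver com.teradata.jdbc.TeraDriver \
      --username sqoop_user \
      --password-file /user/etl/.teradata-password \
      --table ORDERS \
      --num-mappers 1 \
      --hive-import \
      --hive-table staging.orders \
      --as-parquetfile

(The dedicated Teradata connectors have their own options; the above is only
the generic JDBC path.)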