unsubscribe

2019-04-29 Thread Amrit Jangid

Re: HBaseContext with Spark

2017-01-25 Thread Amrit Jangid
Hi Chetan, If you just need HBase data in Hive, you can use a Hive EXTERNAL TABLE with STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'. Try this to see if it solves your problem: https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration Regards, Amrit. On Wed, Jan 25,
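The storage-handler approach above maps an existing HBase table into Hive without copying data. A minimal sketch, assuming a hypothetical HBase table `hbase_events` with a single column family `cf`; the table name, columns, and mapping are illustrative, not from the original thread:

```sql
-- Hedged sketch: expose an existing HBase table to Hive as an external table.
-- Assumes a hypothetical HBase table 'hbase_events' with column family 'cf'.
CREATE EXTERNAL TABLE hive_events (
  rowkey   STRING,
  user_id  STRING,
  event_ts STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  -- ':key' binds the Hive column 'rowkey' to the HBase row key;
  -- the rest bind Hive columns to 'cf:<qualifier>' cells.
  "hbase.columns.mapping" = ":key,cf:user_id,cf:event_ts"
)
TBLPROPERTIES ("hbase.table.name" = "hbase_events");
```

Because the table is EXTERNAL, dropping it in Hive leaves the underlying HBase table untouched.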

Re: Data frame writing

2017-01-12 Thread Amrit Jangid
Hi Rajendra, It says your directory is not empty: *s3n://buccketName/cip/daily_date*. Try to use a save mode, e.g. df.write.mode(SaveMode.Overwrite).partitionBy("date").format("com.databricks.spark.csv").option("delimiter", "#").option("codec", "
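The save-mode fix above can be sketched in full as a Spark (Scala) write. A minimal sketch assuming Spark 1.x with the spark-csv package; `df`, the S3 path, and the codec value are placeholders or assumptions, since the original message is truncated:

```scala
import org.apache.spark.sql.SaveMode

// Hedged sketch: overwrite a non-empty output directory instead of failing.
// `df` and the s3n:// path are placeholders carried over from the thread;
// the codec value is an illustrative choice, not from the original message.
df.write
  .mode(SaveMode.Overwrite)                 // replace any existing output
  .partitionBy("date")                      // one subdirectory per date value
  .format("com.databricks.spark.csv")       // spark-csv package (Spark 1.x era)
  .option("delimiter", "#")
  .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
  .save("s3n://buccketName/cip/daily_date")
```

SaveMode.Overwrite deletes the target directory before writing; SaveMode.Append would instead add new files alongside the existing ones.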

Re: [Spark Structured Streaming]: Is it possible to ingest data from a jdbc data source incrementally?

2017-01-03 Thread Amrit Jangid
You can try out *debezium*: https://github.com/debezium. It reads data from bin-logs, provides structure, and streams into Kafka. Now Kafka can be your new source for streaming. On Tue, Jan 3, 2017 at 4:36 PM, Yuanzhe Yang wrote: > Hi Hongdi, > > Thanks a lot for your
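The pipeline suggested above (database bin-log → Debezium → Kafka → Spark Structured Streaming) is typically wired up by posting a connector configuration to Kafka Connect. A minimal sketch assuming the Debezium MySQL connector of that era; hostnames, credentials, database, and table names are all placeholders:

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.example.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "<password>",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "table.whitelist": "inventory.orders",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

Once the connector is running, each change event lands in a Kafka topic named after the server and table (e.g. `dbserver1.inventory.orders`), which Spark can then consume as a streaming source.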