How to enable Hive support on an existing Spark session?

2020-05-26 Thread Kun Huang (COSMOS)
Hi Spark experts, I am looking for a way to enable Hive support manually on an existing Spark session. Currently, HiveContext seems the best fit for my scenario. However, this class has already been marked as deprecated, and it is recommended to use SparkSession.builder().enableHiveSupport() instead.
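
For reference, Hive support is normally requested when the session is first built rather than toggled afterwards. A minimal sketch of the non-deprecated route via SparkSession.builder() (the app name is chosen only for illustration):

    import org.apache.spark.sql.SparkSession

    // Hive support must be enabled when the SparkSession is constructed;
    // it cannot be switched on for a session that already exists.
    val spark = SparkSession.builder()
      .appName("hive-demo")   // hypothetical app name
      .enableHiveSupport()    // backs the session's catalog with the Hive metastore
      .getOrCreate()

Note that getOrCreate() returns the already-active session if one exists, and in that case the enableHiveSupport() hint is generally ignored, since spark.sql.catalogImplementation is a static configuration fixed at session creation.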

Re: RecordTooLargeException in Spark *Structured* Streaming

2020-05-26 Thread Something Something
Thanks. I missed that part of the documentation. Appreciate your help. Regards. On Mon, May 25, 2020 at 10:42 PM Jungtaek Lim wrote: > Hi, > > You need to add the prefix "kafka." for configurations that should be > propagated to Kafka. The others are used by the Spark data source > itself.
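
In practice, a RecordTooLargeException from the Kafka sink is usually addressed by raising the producer's max.request.size, which Spark only forwards to the Kafka client when the option carries the "kafka." prefix. A minimal sketch, where the bootstrap servers, topic, and checkpoint path are placeholders:

    // df is an existing streaming DataFrame with key/value columns
    df.writeStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")  // forwarded to the Kafka producer
      .option("kafka.max.request.size", "10485760")      // raises the ~1 MB default; "kafka." prefix required
      .option("topic", "events")                         // consumed by the Spark connector itself, no prefix
      .option("checkpointLocation", "/tmp/checkpoints/events")
      .start()

Without the prefix, the setting never reaches the Kafka client, which is exactly the pitfall the quoted reply describes.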

Re: Using Spark Accumulators with Structured Streaming

2020-05-26 Thread Something Something
Hmm... how would they reach Grafana if they are never computed in your code? I am talking about the application-specific accumulators. The other standard counters, such as 'event.progress.inputRowsPerSecond', are getting populated correctly! On Mon, May 25, 2020 at 8:39 PM Srinivas V wrote:
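
For context, a named accumulator registered on the SparkContext does show up in the Spark UI and can be scraped into a metrics backend, but it only advances when code actually calls add() on it. A minimal sketch of driving one from foreachBatch (the accumulator name and the streaming DataFrame df are illustrative):

    import org.apache.spark.sql.{DataFrame, SparkSession}

    val spark = SparkSession.builder().getOrCreate()
    // Naming the accumulator makes it visible in the Spark UI
    val recordCount = spark.sparkContext.longAccumulator("recordCount")

    // df is an existing streaming DataFrame
    val query = df.writeStream
      .foreachBatch { (batch: DataFrame, batchId: Long) =>
        // count() is an action, so add() runs on the driver with a
        // concrete value once per micro-batch
        recordCount.add(batch.count())
      }
      .start()

If the accumulator is instead updated inside executor-side code such as mapGroupsWithState, the closure must capture the driver-registered instance; constructing a fresh accumulator inside the closure would leave the driver's copy at zero, which matches the symptom described above.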