Hi Spark experts,
I am looking for a way to enable Hive support manually on an existing
Spark session.
Currently, HiveContext seems like the best fit for my scenario. However, that
class has already been marked as deprecated, and the recommendation is to use
SparkSession.builder().enableHiveSupport() instead.
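
For reference, a minimal sketch of the recommended replacement (the app name
is a placeholder; note that Hive support has to be requested when the session
is first built, since getOrCreate() will not retrofit it onto an existing
session that lacks it):

import org.apache.spark.sql.SparkSession

// Hive support must be enabled at build time, not toggled afterwards.
val spark = SparkSession.builder()
  .appName("hive-enabled-session")  // placeholder name
  .enableHiveSupport()
  .getOrCreate()

// Sanity check: the catalog implementation should now be "hive".
println(spark.conf.get("spark.sql.catalogImplementation"))
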
Thanks. I missed that part of the documentation. Appreciate your help. Regards.
On Mon, May 25, 2020 at 10:42 PM Jungtaek Lim wrote:
> Hi,
>
> You need to add the prefix "kafka." to the configurations that should be
> propagated to the underlying Kafka consumer. The other options are used by
> the Spark data source itself.
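
To illustrate the distinction (the broker, topic, and SASL_SSL setting below
are placeholders, not taken from the thread):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

val df = spark.readStream
  .format("kafka")
  // "kafka."-prefixed options are handed to the underlying Kafka consumer
  .option("kafka.bootstrap.servers", "broker1:9092")  // placeholder broker
  .option("kafka.security.protocol", "SASL_SSL")      // example client config
  // unprefixed options are interpreted by the Spark source itself
  .option("subscribe", "my-topic")                    // placeholder topic
  .option("startingOffsets", "latest")
  .load()
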
Hmm... how would they reach Grafana if they are not being computed in
your code? I am talking about the application-specific accumulators. The
other standard counters, such as 'event.progress.inputRowsPerSecond', are
being populated correctly!
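
For example, here is a sketch (the rate source and accumulator name are
placeholders, not the original poster's code) of why the two behave
differently: the built-in counter is filled in by Spark for every batch,
while the accumulator only reflects explicit add() calls in user code:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

val spark = SparkSession.builder().getOrCreate()
import spark.implicits._

// Application-specific accumulator: it only ever holds what we add to it.
val oddValues = spark.sparkContext.longAccumulator("oddValues")

spark.streams.addListener(new StreamingQueryListener {
  override def onQueryStarted(e: QueryStartedEvent): Unit = ()
  override def onQueryTerminated(e: QueryTerminatedEvent): Unit = ()
  override def onQueryProgress(e: QueryProgressEvent): Unit = {
    // Populated by Spark automatically for every batch:
    println(s"inputRowsPerSecond = ${e.progress.inputRowsPerSecond}")
    // Populated only because our map() below calls add():
    println(s"oddValues = ${oddValues.value}")
  }
})

val values = spark.readStream
  .format("rate").load()              // built-in test source: (timestamp, value)
  .select($"value").as[Long]
  .map { v =>
    if (v % 2 == 1) oddValues.add(1)  // the explicit update custom metrics need
    v
  }

values.writeStream.format("console").start().awaitTermination()
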
On Mon, May 25, 2020 at 8:39 PM Srinivas V wrote: