Hi,
I am on Spark 1.6. I am getting an error when I try to run a Hive query in Spark
that involves joining ORC and non-ORC tables in Hive.
The error is below; any help would be appreciated.
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
TungstenExchange
Hi Sjoerd,
We've added kafka.group.id config to Spark 3.0...
kafka.group.id (string, default: none; query types: streaming and batch) The Kafka group id to use in
the Kafka consumer while reading from Kafka. Use this with caution. By default,
each query generates a unique group id for reading data. This ensures that
each
This is exactly the issue I am fighting against. Within a good number of
organizations, this is against policy. Another solution is necessary.
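For what it's worth, on Spark 3.0+ the documented option can be set directly on the Kafka source. A minimal sketch; the bootstrap servers, topic name, and group id are placeholders, and `spark` is assumed to be an existing SparkSession on a 3.0+ runtime:

```scala
// Sketch: pinning the Kafka consumer group id under Spark 3.0+.
// Servers, topic, and group id below are illustrative placeholders.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")
  .option("subscribe", "events")
  // Honored from Spark 3.0; earlier versions reject any group.id override
  // (see the validateGeneralOptions check mentioned later in this thread).
  .option("kafka.group.id", "approved-group-id")
  .load()
```

Note that even with this set, Kafka-side ACLs still need to permit that group id; this is a configuration fragment that only runs against a live Spark/Kafka deployment.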
From: Spico Florin
Sent: Tuesday, March 24, 2020 11:23:29 AM
To: Sethupathi T
Cc: Sjoerd van Leent ;
Hello!
Maybe you can find more information on the same issue reported here:
https://jaceklaskowski.gitbooks.io/spark-structured-streaming/spark-sql-streaming-KafkaSourceProvider.html
validateGeneralOptions makes sure that group.id has not been specified, and
throws an IllegalArgumentException if it has.
I now need to integrate Spark into our own platform, built with Spring, to
support task submission and task monitoring. Spark tasks run
on YARN in cluster mode, and our service may submit tasks
to different YARN clusters.
According to the current method provided by
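One common way to submit and monitor applications programmatically from a long-running JVM service is Spark's launcher API. A sketch under the assumptions that SPARK_HOME points at a local Spark installation, HADOOP_CONF_DIR selects the target YARN cluster, and the jar path and main class below are placeholders:

```scala
import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

// Sketch: submitting a job to YARN in cluster mode and watching its state.
// All paths and class names are illustrative placeholders.
val handle: SparkAppHandle = new SparkLauncher()
  .setSparkHome("/opt/spark")                 // assumption: local Spark install
  .setAppResource("/path/to/job.jar")         // placeholder application jar
  .setMainClass("com.example.JobMain")        // placeholder main class
  .setMaster("yarn")
  .setDeployMode("cluster")
  .setConf("spark.executor.memory", "2g")
  .startApplication(new SparkAppHandle.Listener {
    // Called on transitions such as SUBMITTED, RUNNING, FINISHED, FAILED,
    // which a Spring service could surface for task monitoring.
    override def stateChanged(h: SparkAppHandle): Unit =
      println(s"state: ${h.getState}")
    override def infoChanged(h: SparkAppHandle): Unit = ()
  })
```

To target different clusters, one approach is to launch with a different HADOOP_CONF_DIR per submission (SparkLauncher's environment-map constructor accepts this); the exact mechanism depends on how your cluster configs are laid out, so treat this as a starting point rather than a complete integration.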