Hi All,
I am trying to read from Kafka using Spark Streaming from spark-shell but
am getting the error below. Any suggestions to fix this are much appreciated.
I am running from spark-shell, hence it is client mode, and the files are
available on the local filesystem.
I tried to access the files as sh
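For context, a minimal Structured Streaming read from Kafka in spark-shell looks roughly like the following sketch; the broker address and topic name are assumptions, and the spark-sql-kafka package must be on the classpath (e.g. via --packages when launching spark-shell):

```scala
// Hypothetical broker and topic names; requires starting spark-shell with
// --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<spark-version>
val kafkaDf = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker
  .option("subscribe", "my-topic")                     // assumed topic
  .load()

// Kafka keys and values arrive as binary; cast to string before use.
val messages = kafkaDf.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

// Print incoming batches to the console for debugging.
val query = messages.writeStream
  .format("console")
  .start()
```

A missing spark-sql-kafka dependency is a common cause of "Failed to find data source: kafka" errors in spark-shell, so that is worth checking first.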
>> the HiveContext in your Spark Code.
>>
>> Br,
>>
>> Dennis
>>
>> Sent from my iPhone
>>
>> On 23.11.2020 at 19:04, joyan sil wrote:
Hi,
We have Ranger policies defined on the Hive table, and authorization works
as expected when we use the Hive CLI and Beeline. But when we access those
Hive tables using spark-shell or spark-submit, it does not work.
Any suggestions to make Ranger work with Spark?
Regards
Joyan
Hi,
We are using the insertInto method of DataFrame to write into an
object-store-backed Hive table in Google Cloud. We have observed slowness
with this approach.
From the internet, we got to know that writes to Hive tables in Spark
happen in a two-phase manner:
- Step 1 – DistributedWrite: Data is written in a distributed way by the
executors to a temporary staging directory.
- Step 2 – FinalCopy: The files are moved from the staging directory to the
final table location, which on object stores is a copy-and-delete rather
than a cheap rename, and is therefore slow.
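For reference, a minimal sketch of the write path in question; the database, table, and column names are assumptions. Note that `insertInto` matches columns by position against the existing Hive table, not by name:

```scala
// Assumes a spark-shell session with Hive support enabled.
// Table name and schema here are hypothetical; the table must already exist.
import org.apache.spark.sql.SaveMode

val df = spark.range(0, 1000).selectExpr("id", "id % 10 AS bucket")

// insertInto appends into the existing Hive table, resolving columns
// by position; a mismatch in column order silently writes wrong data.
df.write
  .mode(SaveMode.Append)
  .insertInto("mydb.events")
```

On object stores, much of the cost is in the final commit/rename phase rather than the distributed write itself, so committer configuration matters more than the DataFrame API call used.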
Hi Jack and Spark experts,
Further to the question asked in this thread, what are some recommended
resources (blogs/videos) that have helped you deep-dive into the Spark
source code?
Thanks
Regards
Joyan
On Wed, Aug 19, 2020 at 11:06 AM Jack Kolokasis wrote:
> Hi,
>
> From my experience, I