truncate', False).start()
q.awaitTermination()
From: Amit Joshi
Sent: Monday, 18 January 2021 20:22
To: Boris Litvak
Cc: spark-user
Subject: Re: [Spark Structured Streaming] Processing the data path coming from kafka.
Hi Boris,
Thanks for your code block.
I understood what you are trying to achieve in nested-json-with-array-in-scala
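For readers following the thread: the constraint Boris raises below (no SparkSession access inside foreach, since that code runs on the executors) is commonly worked around with foreachBatch, whose callback runs on the driver. Here is a minimal sketch of such a driver-side handler; the Spark wiring shown in the docstring (`kafka_df`, the `value` column, the parquet sink path) is hypothetical and not taken from this thread, and plain file I/O stands in for `spark.read` so the idea is runnable without a cluster.

```python
import json

def handle_batch_paths(paths):
    """Read each JSON file named in one micro-batch of Kafka messages.

    In a real job this would be driven from a foreachBatch callback,
    which runs on the driver and can therefore use the SparkSession,
    e.g. (hypothetical wiring, names are illustrative):

        def process_batch(batch_df, batch_id):
            paths = [r.value for r in batch_df.select("value").collect()]
            if paths:
                df = spark.read.json(paths)             # driver-side: legal
                df.write.mode("append").parquet("/tmp/out")  # sample sink

    Here plain Python file reads stand in for spark.read.json.
    """
    records = []
    for p in paths:
        with open(p) as f:
            records.append(json.load(f))
    return records
```

The point is only that the SparkSession is reachable inside a foreachBatch callback because it executes on the driver, unlike the per-row foreach sink.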
> Note that unless I am missing something you cannot access spark session
> from foreach as code is not running on the driver.
>
> Please say if it makes sense or did I miss anything.
>
> Boris
>
> *From:* Amit Joshi
> *Sent:* Monday, 18 January 2021 17:10
> *To:* Boris Litvak
> *Cc:* spark-user
> *Subject:* Re: [Spark Structured Streaming] Processing the data path coming from kafka.
Hi Boris,
I need to do processing on the data present in the path.
That is the reason I am trying to make the dataframe.
Can you please provide the example of your solution?
Regards
Amit
On Mon, Jan 18, 2021 at 7:15 PM Boris Litvak wrote:
Hi Amit,
Why won’t you just map()/mapXXX() the kafkaDf with the mapping function that
reads the paths?
Also, do you really have to read the json into an additional dataframe?
Thanks, Boris
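Boris's map() suggestion amounts to applying a plain per-record function that reads each path with ordinary file I/O (or an S3/HDFS client), so no SparkSession is needed on the executors. A hedged sketch follows; the `path` column name and the UDF wiring in the comments are illustrative assumptions, not something stated in the thread.

```python
def read_path(path):
    """Executor-safe reader: plain file I/O, no SparkSession involved."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

# Hypothetical Spark wiring (names are illustrative, not from the thread):
#
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import StringType
#
#   read_udf = udf(read_path, StringType())
#   parsed = (kafka_df
#             .selectExpr("CAST(value AS STRING) AS path")
#             .withColumn("contents", read_udf("path")))
```

The design point is that everything inside the mapped function must be serializable plain code, since it is shipped to the executors.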
From: Amit Joshi
Sent: Monday, 18 January 2021 15:04
To: spark-user
Subject: [Spark Structured Streaming] Processing the data path coming from kafka.