"Stream". The Dataframe would need to be
>>>> updated every hour. Have you done something similar in the past? Do you
>>>> have an example to share?
>>>>
>>>> On Mon, May 3, 2021 at 9:52 AM Mich Talebzadeh <
>>>> mich.talebza...@gmail.com> wrote:
>>>> 2. The RDBMS table can be read through JDBC in Spark and a
>>>> dataframe can be created on it. Does that work for you? You do not
>>>> really need to stream the reference table.
>>>>
>>>>
>>>> HTH
>>>>
>>>>
>>>> view my Linkedin profile
>>>> <https://www.linkedin.com/in/mich-talebzadeh-ph-d-5205b2/>
>>>>
>>>>
>>>> Disclaimer: Use it at your own risk. Any and all responsibility for any
>>>> loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary
>>>> damages arising from such loss, damage or destruction.
>>>>
>>>>
>>>> On Mon, 3 May 2021 at 17:37, Eric Beabes wrote:
>>>>
>>>>> I would like to develop a Spark Structured Streaming job that reads
>>>>> messages in a Stream which needs to be “joined” with another Stream of
>>>>> “Reference” data.
>>>>>
>>>>> For example, let’s say I’m reading messages from Kafka coming in from
>>>>> (lots of) IOT devices. This message has a ‘device_id’. We have a DEVICE
>>>>> table on a relational database. What I need to do is “join” the ‘