The continuous one is our new low-latency continuous processing engine in
Structured Streaming (to be released in 2.3).
Here is the pre-release doc -
https://dist.apache.org/repos/dist/dev/spark/v2.3.0-rc5-docs/_site/structured-streaming-programming-guide.html#continuous-processing
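Per the pre-release guide linked above, switching a query onto the continuous engine is just a matter of passing Trigger.Continuous as the trigger. A minimal sketch, assuming a streaming DataFrame `df` already defined from a Kafka source; the server address, topic, and checkpoint path are placeholders:

```scala
import org.apache.spark.sql.streaming.Trigger

// Continuous processing: records are processed as they arrive.
// The "1 second" below is the checkpoint interval, not a batch interval.
val query = df.writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host:9092")
  .option("topic", "out")
  .option("checkpointLocation", "/tmp/checkpoint")
  .trigger(Trigger.Continuous("1 second"))
  .start()
```

Note that in 2.3 only a subset of sources, sinks, and operations (map-like ones) are supported in continuous mode.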
From: Brindha Sengottaiyan
Sent: 24 February 2018 03:06 AM
Appu,
I have run into the same problem. Were you able to solve this issue? Could
you please share a snippet of the code if you managed to solve it?
Thanks,
Naresh
On Wed, Feb 14, 2018 at 8:04 PM, Tathagata Das
wrote:
> 1. Just loop like this:
>
> def startQuery(): StreamingQuery = {
>   // Define the da
Hi,
I need to write a rule to customize the join function using the Spark Catalyst
optimizer. The objective is to duplicate the second dataset using this
process:
- Execute a udf on the column called x, this udf returns an array
- Execute an explode function on the new column
Using SQL terms, my objec
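For reference, the plain DataFrame equivalent of the two steps above (a UDF that returns an array, then an explode on the resulting column) might look like the sketch below; `ds2`, the UDF body, and column `x` are placeholders standing in for the description, not code from the original question:

```scala
import org.apache.spark.sql.functions.{col, explode, udf}

// Hypothetical UDF returning an array, as described in step 1
val toArray = udf((x: String) => Array(x, x))

val duplicated = ds2
  .withColumn("arr", toArray(col("x")))        // step 1: udf -> array column
  .withColumn("elem", explode(col("arr")))     // step 2: one output row per element
```

A custom Catalyst rule would inject the equivalent of this Project + Generate (explode) plan via SparkSessionExtensions, but the DataFrame form above is the behavior being targeted.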
Hello Spark Experts,
What is the difference between Trigger.Continuous(10.seconds) and
Trigger.ProcessingTime("10 seconds")?
Thank you,
Naresh