What's the advantage of using that specific version for this? Please shed
some light on it.
On Mon, Dec 25, 2017 at 6:51 AM, Felix Cheung
wrote:
> Or use it with Scala 2.11?
>
> --
> *From:* ayan guha
> *Sent:*
Hey Serkan, it depends on your Kafka version... Is it 0.8.2?
On Dec 25, 2017 at 06:17, "Serkan TAS" wrote:
> Hi,
>
>
>
> Working on a Spark 2.2.0 cluster with Kafka 1.0 brokers.
>
>
>
> I was using the library
>
> "org.apache.spark" % "spark-streaming-kafka-0-10_2.11"
Hi, community,
I have a subquery running slow on druid cluster.
The inner query yields these fields:

SELECT D1, D2, D3, MAX(M1) AS MAX_M1
FROM SOME_TABLE
GROUP BY D1, D2, D3

Then the outer query looks like:

SELECT D1, D2, SUM(MAX_M1)
FROM INNER_QUERY
GROUP BY D1, D2
The D3 is a high
A window function requires a timestamp column because you apply a
function to each window (like an aggregation). You can still use a UDF for
customized tasks.
On Dec 25, 2017 at 20:15, "M Singh" wrote:
> Hi:
> I would like to use window function on a DataSet
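As a minimal sketch of the point above (assuming Spark 2.2.x; the case class, the
event-time column name "ts", and the 10-minute window are illustrative, not from
the original thread), the same window() column expression used with DataFrames
also works on a typed Dataset, as long as it has a timestamp column:

```scala
// Sketch only: assumes a running Spark 2.2.x session; "Event", "ts", and the
// source are illustrative placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.window

case class Event(ts: java.sql.Timestamp, value: Long)

val spark = SparkSession.builder.appName("windowSketch").getOrCreate()
import spark.implicits._

// A streaming Dataset[Event]; the actual source is elided here.
val events: org.apache.spark.sql.Dataset[Event] = ???

// Group by a 10-minute tumbling window over the timestamp column.
val counts = events
  .groupBy(window($"ts", "10 minutes"))
  .count()
```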
Can you please post your code here?
On Dec 25, 2017 at 19:24, "M Singh" wrote:
> Hi:
>
> I am using spark structured streaming (v 2.2.0) to read data from files. I
> have configured checkpoint location. On stopping and restarting the
> application, it looks
Hi M Singh! Here I'm using query.stop()
On Dec 25, 2017 at 19:19, "M Singh" wrote:
> Hi:
> Are there any patterns/recommendations for gracefully stopping a
> structured streaming application ?
> Thanks
>
>
>
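For what it's worth, a hedged sketch of the query.stop() approach mentioned
above (Spark 2.2.x; the sink format, the paths, and the shutdown-hook wiring
are illustrative assumptions, not the original poster's code):

```scala
// Sketch only: df is some streaming DataFrame; paths are hypothetical.
val query = df.writeStream
  .format("parquet")
  .option("checkpointLocation", "/tmp/ckpt") // hypothetical path
  .start("/tmp/out")                         // hypothetical path

// Stop gracefully, e.g. from a JVM shutdown hook; with a checkpoint
// location configured, a restarted query resumes from committed offsets.
sys.addShutdownHook {
  query.stop()
}
query.awaitTermination()
```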
Hi:
I would like to use a window function on a DataSet stream (Spark 2.2.0). The
window function requires a Column as an argument and can be used with DataFrames
by passing the column. Is there an analogous window function, or pointers to how
a window function can be used with DataSets?
Thanks
Hi:
I am using Spark Structured Streaming (v 2.2.0) to read data from files. I have
configured a checkpoint location. On stopping and restarting the application, it
looks like it is reading the previously ingested files. Is that expected
behavior?
Is there any way to prevent reading files that
Hi:
Are there any patterns/recommendations for gracefully stopping a structured
streaming application?
Thanks
You can find several presentations on this on the Spark Summit web page.
Generally, you also have to decide whether to run one cluster for all
applications or one cluster per application in the container context.
I'm not sure, though, why you want to run on just one node. If you have only one
Folks,
Can you share your experience of running Spark under Docker on a single
local / standalone node?
Is anybody using it in production environments? We have an existing
Docker Swarm deployment, and I want to run Spark in a separate fat VM
hooked into / controlled by Docker Swarm.
I know there is
Hi,
Working on a Spark 2.2.0 cluster with Kafka 1.0 brokers.
I was using the library
"org.apache.spark" % "spark-streaming-kafka-0-10_2.11" % "2.2.0"
and had lots of problems during the streaming process, then downgraded to
"org.apache.spark" % "spark-streaming-kafka-0-8_2.11" % "2.2.0"