+1. The Streaming UI should give you more than enough information.
Thanks, Hari
On Mon, Sep 28, 2015 at 9:55 PM, Shixiong Zhu wrote:
> Which version are you using? Could you take a look at the new Streaming UI
> in 1.4.0?
> Best Regards,
> Shixiong Zhu
> 2015-09-29 7:52
As of now, you can feed Spark Streaming from both Kafka and Flume.
Currently, though, there is no API to write data back to either of the two
directly.
I sent a PR which should eventually add something like this:
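The PR itself is not linked above. In the meantime, a common workaround is to write each batch out with a plain Kafka producer inside foreachRDD. A minimal sketch (assuming the newer `org.apache.kafka.clients.producer` API; broker list and topic name are placeholders, and in real code you would pool producers rather than create one per partition per batch):

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.spark.streaming.dstream.DStream

// Hypothetical helper: push every record of every batch to a Kafka topic.
def writeToKafka(stream: DStream[String], brokers: String, topic: String): Unit = {
  stream.foreachRDD { rdd =>
    rdd.foreachPartition { records =>
      val props = new Properties()
      props.put("bootstrap.servers", brokers)
      props.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer")
      props.put("value.serializer",
        "org.apache.kafka.common.serialization.StringSerializer")
      // One producer per partition per batch -- pool these in production.
      val producer = new KafkaProducer[String, String](props)
      records.foreach(r => producer.send(new ProducerRecord(topic, r)))
      producer.close()
    }
  }
}
```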
Btw, if you want to write to Spark Streaming from Flume -- there is a sink
(it is a part of Spark, not Flume). See Approach 2 here:
http://spark.apache.org/docs/latest/streaming-flume-integration.html
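Approach 2 from that page (the pull-based custom sink) looks roughly like this on the Spark side; a sketch assuming the `spark-streaming-flume` artifact is on the classpath, with hostname and port as placeholders for wherever the Flume agent's Spark sink is configured:

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

// Flume pushes events into the Spark sink; Spark Streaming then
// polls that sink reliably from the receiver.
val ssc = new StreamingContext(sc, Seconds(10))
val flumeStream = FlumeUtils.createPollingStream(ssc, "sink-host", 9999)
flumeStream.map(e => new String(e.event.getBody.array())).print()
ssc.start()
ssc.awaitTermination()
```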
On Wed, Nov 19, 2014 at 12:41 PM, Hari Shreedharan
hshreedha...@cloudera.com wrote:
No, Scala primitives remain primitives. Unless you create an RDD using one
of the many methods, you would not be able to access any of the RDD
methods. There is no automatic porting. Spark is an application as far as
Scala is concerned; there is no special compilation (except, of course, the
Scala compiler and the JIT
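To make the point concrete: RDD methods appear only after you explicitly create an RDD, e.g. with `parallelize`. A minimal sketch (the app name and master are placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("example").setMaster("local[2]"))

val nums = Seq(1, 2, 3)        // a plain Scala collection: no RDD methods here

val rdd = sc.parallelize(nums) // explicitly create an RDD from it
val doubled = rdd.map(_ * 2)   // now the RDD API is available
doubled.collect()              // Array(2, 4, 6)
```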
Do you see anything suspicious in the logs? How did you run the application?
On Thu, Aug 7, 2014 at 10:02 PM, XiaoQinyu xiaoqinyu_sp...@outlook.com
wrote:
Hi~
I run a Spark Streaming app to receive data from Flume events. When I run on
standalone, Spark Streaming can receive the Flume events
Off the top of my head, you can use the ForEachDStream to which you pass
in the code that writes to Hadoop, and then register that as an output
stream, so the function you pass in is periodically executed and causes
the data to be written to HDFS. If you are ok with the data being in
text
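If text output is acceptable, the simplest way to register such an output operation is `saveAsTextFiles`; `foreachRDD` (which creates a ForEachDStream under the hood) gives you full control over the save logic. A sketch, with the HDFS paths as placeholders:

```scala
import org.apache.spark.streaming.dstream.DStream

// Simplest case: each batch is written as text files under the prefix.
def writeBatchesAsText(stream: DStream[String]): Unit = {
  stream.saveAsTextFiles("hdfs:///user/example/stream/batch")
}

// More control: run arbitrary per-batch save logic; this registers the
// function as an output operation, executed once per batch interval.
def writeBatches(stream: DStream[String]): Unit = {
  stream.foreachRDD { (rdd, time) =>
    rdd.saveAsTextFile(s"hdfs:///user/example/stream/${time.milliseconds}")
  }
}
```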