Thanks, Kevin, for the link. I have had issues trying to install Zeppelin, as
I believe it is not yet supported for CDH 5.3 and Spark 1.2. Please
correct me if I am mistaken.

On Thu, Feb 12, 2015 at 7:33 PM, Kevin (Sangwoo) Kim <kevin...@apache.org>
wrote:

> Apache Zeppelin also has a scheduler, so you can reload your chart
> periodically.
> Check it out:
> http://zeppelin.incubator.apache.org/docs/tutorial/tutorial.html
>
>
>
>
> On Fri Feb 13 2015 at 7:29:00 AM Silvio Fiorito <
> silvio.fior...@granturing.com> wrote:
>
>>   One method I’ve used is to publish each batch to a message bus or
>> queue with a custom UI listening on the other end, displaying the results
>> in d3.js or some other app. As far as I’m aware there isn’t a tool that
>> will directly take a DStream.
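>>
>>  A rough sketch of that approach, assuming a word-count
>> JavaPairDStream<String, Integer> named wordCounts; publish() and the
>> "wordcounts" topic are made-up stand-ins for whatever bus/queue client you
>> use, with the custom UI subscribing on the other end:
>>
>> import org.apache.spark.api.java.JavaPairRDD;
>> import org.apache.spark.api.java.function.Function;
>> import scala.Tuple2;
>>
>> // Runs on the driver once per batch; collect() is acceptable here only
>> // because each batch is tiny.
>> wordCounts.foreachRDD(new Function<JavaPairRDD<String, Integer>, Void>() {
>>   public Void call(JavaPairRDD<String, Integer> batch) {
>>     StringBuilder json = new StringBuilder("[");
>>     for (Tuple2<String, Integer> kv : batch.collect()) {
>>       json.append("{\"word\":\"").append(kv._1())
>>           .append("\",\"count\":").append(kv._2()).append("},");
>>     }
>>     if (json.charAt(json.length() - 1) == ',') json.setLength(json.length() - 1);
>>     json.append("]");
>>     // The UI redraws its d3.js chart whenever a new message arrives.
>>     publish("wordcounts", json.toString());  // hypothetical helper
>>     return null;
>>   }
>> });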
>>
>>  Spark Notebook seems to have some support for updating graphs
>> periodically. I haven’t used it myself yet, so I’m not sure how well it works.
>> See here: https://github.com/andypetrella/spark-notebook
>>
>>   From: Su She
>> Date: Thursday, February 12, 2015 at 1:55 AM
>> To: Felix C
>> Cc: Kelvin Chu, "user@spark.apache.org"
>>
>> Subject: Re: Can spark job server be used to visualize streaming data?
>>
>>   Hello Felix,
>>
>>  I am already streaming in very simple data using Kafka (a few messages
>> per second, each record only has 3 columns...really simple, but looking to
>> scale once I connect everything). I am processing it in Spark Streaming and
>> am currently writing word counts to HDFS. So the part where I am confused
>> is...
>>
>> Kafka Publishes Data -> Kafka Consumer/Spark Streaming Receives Data ->
>> Spark Word Count -> *How do I visualize?*
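>>
>>  For context, a condensed sketch of the first three steps above (essentially
>> the standard Spark 1.2 Java Kafka word count); the ZooKeeper quorum, group
>> id and topic name are placeholders for my actual settings:
>>
>> import java.util.Arrays;
>> import java.util.HashMap;
>> import java.util.Map;
>> import scala.Tuple2;
>> import org.apache.spark.SparkConf;
>> import org.apache.spark.api.java.function.*;
>> import org.apache.spark.streaming.Duration;
>> import org.apache.spark.streaming.api.java.*;
>> import org.apache.spark.streaming.kafka.KafkaUtils;
>>
>> SparkConf conf = new SparkConf().setAppName("KafkaWordCount");
>> JavaStreamingContext jssc = new JavaStreamingContext(conf, new Duration(10000));
>>
>> Map<String, Integer> topics = new HashMap<String, Integer>();
>> topics.put("mytopic", 1);  // topic name -> number of receiver threads
>> JavaPairReceiverInputDStream<String, String> messages =
>>     KafkaUtils.createStream(jssc, "zkhost:2181", "mygroup", topics);
>>
>> JavaPairDStream<String, Integer> wordCounts = messages
>>     .flatMap(new FlatMapFunction<Tuple2<String, String>, String>() {
>>       public Iterable<String> call(Tuple2<String, String> kv) {
>>         return Arrays.asList(kv._2().split(" "));  // message value -> words
>>       }
>>     })
>>     .mapToPair(new PairFunction<String, String, Integer>() {
>>       public Tuple2<String, Integer> call(String w) {
>>         return new Tuple2<String, Integer>(w, 1);
>>       }
>>     })
>>     .reduceByKey(new Function2<Integer, Integer, Integer>() {
>>       public Integer call(Integer a, Integer b) { return a + b; }
>>     });
>>
>> wordCounts.print();  // my actual job saves these to HDFS here instead
>> jssc.start();
>> jssc.awaitTermination();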
>>
>>  Is there a viz tool that I can set up to visualize JavaPairDStreams? Or
>> do I have to write to HBase/HDFS first?
>>
>>  Thanks!
>>
>> On Wed, Feb 11, 2015 at 10:39 PM, Felix C <felixcheun...@hotmail.com>
>> wrote:
>>
>>>  What kind of data do you have? Kafka is a popular source to use with
>>> Spark Streaming.
>>> But Spark Streaming also supports reading from a file; it's called a basic
>>> source:
>>>
>>> https://spark.apache.org/docs/latest/streaming-programming-guide.html#input-dstreams-and-receivers
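>>>
>>> For example, a minimal sketch of the file-based basic source; the
>>> directory path is a placeholder, and each batch picks up whatever new
>>> files have appeared in it:
>>>
>>> import org.apache.spark.SparkConf;
>>> import org.apache.spark.streaming.Duration;
>>> import org.apache.spark.streaming.api.java.JavaDStream;
>>> import org.apache.spark.streaming.api.java.JavaStreamingContext;
>>>
>>> SparkConf conf = new SparkConf().setAppName("FileStream");
>>> JavaStreamingContext jssc = new JavaStreamingContext(conf, new Duration(10000));
>>>
>>> // Watches the directory and streams the contents of newly created files.
>>> JavaDStream<String> lines = jssc.textFileStream("hdfs:///data/incoming");
>>> lines.print();
>>>
>>> jssc.start();
>>> jssc.awaitTermination();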
>>>
>>> --- Original Message ---
>>>
>>> From: "Su She" <suhsheka...@gmail.com>
>>> Sent: February 11, 2015 10:23 AM
>>> To: "Felix C" <felixcheun...@hotmail.com>
>>> Cc: "Kelvin Chu" <2dot7kel...@gmail.com>, user@spark.apache.org
>>> Subject: Re: Can spark job server be used to visualize streaming data?
>>>
>>>   Thank you, Felix and Kelvin. I think I'll definitely be using the k-means
>>> tools in MLlib.
>>>
>>>  It seems the best way to stream data to a visualization is by storing it
>>> in HBase and then using an API in my viz tool to extract the data? Does
>>> anyone have any thoughts on this?
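>>>
>>>  In case it helps, a rough sketch of what I'm imagining for the HBase
>>> side, assuming a small word-count batch collected on the driver and an
>>> existing "wordcounts" table with a "cf" column family (the table and
>>> column names are made up; HBase 0.98-era API):
>>>
>>> import org.apache.hadoop.conf.Configuration;
>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>> import org.apache.hadoop.hbase.client.HTable;
>>> import org.apache.hadoop.hbase.client.Put;
>>> import org.apache.hadoop.hbase.util.Bytes;
>>> import org.apache.spark.api.java.JavaPairRDD;
>>> import org.apache.spark.api.java.function.Function;
>>> import scala.Tuple2;
>>>
>>> wordCounts.foreachRDD(new Function<JavaPairRDD<String, Integer>, Void>() {
>>>   public Void call(JavaPairRDD<String, Integer> batch) throws Exception {
>>>     Configuration hConf = HBaseConfiguration.create();
>>>     HTable table = new HTable(hConf, "wordcounts");
>>>     for (Tuple2<String, Integer> kv : batch.collect()) {  // small batches only
>>>       Put put = new Put(Bytes.toBytes(kv._1()));
>>>       put.add(Bytes.toBytes("cf"), Bytes.toBytes("count"),
>>>               Bytes.toBytes(kv._2()));
>>>       table.put(put);
>>>     }
>>>     table.close();
>>>     return null;
>>>   }
>>> });
>>> // The viz tool would then pull from the table through its own API.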
>>>
>>>   Thanks!
>>>
>>>
>>> On Tue, Feb 10, 2015 at 11:45 PM, Felix C <felixcheun...@hotmail.com>
>>> wrote:
>>>
>>>  Check out
>>>
>>> https://databricks.com/blog/2015/01/28/introducing-streaming-k-means-in-spark-1-2.html
>>>
>>> There are links in there to how that is done.
>>>
>>>
>>> --- Original Message ---
>>>
>>> From: "Kelvin Chu" <2dot7kel...@gmail.com>
>>> Sent: February 10, 2015 12:48 PM
>>> To: "Su She" <suhsheka...@gmail.com>
>>> Cc: user@spark.apache.org
>>> Subject: Re: Can spark job server be used to visualize streaming data?
>>>
>>>   Hi Su,
>>>
>>>  Out of the box, no. But I know people integrate it with Spark
>>> Streaming to do real-time visualization. It will take some work, though.
>>>
>>>  Kelvin
>>>
>>> On Mon, Feb 9, 2015 at 5:04 PM, Su She <suhsheka...@gmail.com> wrote:
>>>
>>>  Hello Everyone,
>>>
>>>  I was reading this blog post:
>>> http://homes.esat.kuleuven.be/~bioiuser/blog/a-d3-visualisation-from-spark-as-a-service/
>>>
>>>  and was wondering if this approach can be taken to visualize streaming
>>> data...not just historical data?
>>>
>>>  Thank you!
>>>
>>>  -Suh
>>>
