If you are using Python, please try Bokeh and its related stack.
Most people in this forum, including the folks at Databricks, have not
tried that stack from Anaconda; it is worth a try when you are
visualizing data on a big data stack.
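
For example, here is a minimal sketch of that approach, assuming PySpark
and Bokeh are installed; the file name and the "x"/"y" column names are
placeholders for whatever your DataFrame actually holds:

from pyspark.sql import SparkSession
from bokeh.plotting import figure, output_file, show

spark = SparkSession.builder.appName("viz-demo").getOrCreate()
df = spark.read.csv("data.csv", header=True, inferSchema=True)

# Bokeh plots on the driver, so bring back only a small slice as pandas
pdf = df.select("x", "y").limit(10000).toPandas()

output_file("plot.html")
p = figure(title="Sample scatter plot")
p.scatter(pdf["x"], pdf["y"])
show(p)

The main point is that Bokeh renders on the driver, so restrict the data
with limit() or sample() before calling toPandas().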


Regards,
Gourav

On Sat, Jul 30, 2016 at 10:25 PM, Rerngvit Yanggratoke <
rerngvit.yanggrat...@gmail.com> wrote:

> Since you already have an existing application (not starting from
> scratch), the simplest way to visualize would be to export the data to a
> file (e.g., a CSV file) and visualize it using other tools, e.g., Excel,
> RStudio, MATLAB, Jupyter, Zeppelin, Tableau, or the Elastic Stack.
> The choice depends on your background and technology preferences.
> Note that if you are dealing with a large dataset, you should generally
> sample the data first. A good sampling mechanism depends on
> your application domain.
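>
> For example, a rough sketch of that sample-and-export step in PySpark
> (the table name, sampling fraction, and output path are only
> placeholders):
>
> from pyspark.sql import SparkSession
>
> spark = SparkSession.builder.appName("export-sample").getOrCreate()
> df = spark.table("my_results")  # stand-in for your application's DataFrame
>
> # Take a small random sample and write it out as a single CSV file
> sampled = df.sample(withReplacement=False, fraction=0.01, seed=42)
> sampled.coalesce(1).write.option("header", "true").csv("/tmp/sample_for_viz")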
>
> - Rerngvit
> > On 30 Jul 2016, at 21:45, Tony Lane <tonylane....@gmail.com> wrote:
> >
> > I am developing my analysis application using Spark (with Eclipse as
> > the IDE).
> >
> > What is a good way to visualize the data, taking into consideration
> > that my Spark application is made up of multiple files?
> >
> > I have seen some notebook demos, but I am not sure how to use my
> > application with such notebooks.
> >
> > thoughts/ suggestions/ experiences -- please share
> >
> > -Tony
>
>
