Hi,
You can use Tableau. Link: http://tableausoftware.com/
But you need Hive.
On Tue, Oct 15, 2013 at 10:30 AM, Jagat Singh jagatsi...@gmail.com wrote:
Hi,
You should look at Pentaho or Talend tools.
Connect via HiveServer and plot charts.
Thanks
On 15/10/2013 2:04 PM, Xuri Nagarin wrote:
Xuri-
I know a lot of folks that take the result of Hadoop jobs and visualize the
data within a web page using the D3 JavaScript library.
http://d3js.org/
Josh
From: Xuri Nagarin [mailto:secs...@gmail.com]
Sent: Monday, October 14, 2013 11:46 PM
To: user@hadoop.apache.org
Subject: Re: Hadoop
Have you looked at Splunk's new product called Hunk?
Splunk A Buffet of Big Data
Todd Johnston
Senior Sales Engineer – National Programs
Splunk Inc.
Mobile: 703 980-3397
Email: tjohns...@splunk.com
Washington D.C. | Cupertino | London | Hong Kong | San Francisco |
Thanks for all the responses. Currently, I am only looking at open source
solutions.
On Tue, Oct 15, 2013 at 6:59 AM, Todd Johnston tjohns...@splunk.com wrote:
Have you looked at Splunk's new product called Hunk?
In the Hadoop 2.1 docs, I see there is a read(ByteBuffer) call in
FSDataInputStream, but I don't see a write(ByteBuffer) call documented for
FSDataOutputStream. Is there a (fast) way to write a ByteBuffer to HDFS files?
Thanks
John
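For what it's worth, since FSDataOutputStream extends java.io.OutputStream, one common workaround is to drain the ByteBuffer into a byte[] and call the ordinary write(byte[], int, int) overload. A minimal sketch of that copy loop, with a ByteArrayOutputStream standing in for a real FSDataOutputStream and a hypothetical helper name writeBuffer:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;

public class ByteBufferWrite {
    // Copies the buffer's remaining bytes to the stream in bounded chunks,
    // so a large direct buffer never needs one huge intermediate array.
    static void writeBuffer(OutputStream out, ByteBuffer buf) throws IOException {
        byte[] chunk = new byte[8192];
        while (buf.hasRemaining()) {
            int n = Math.min(chunk.length, buf.remaining());
            buf.get(chunk, 0, n);   // advances the buffer's position by n
            out.write(chunk, 0, n);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap("hello hdfs".getBytes("UTF-8"));
        // Stand-in for FSDataOutputStream; any OutputStream works the same way.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        writeBuffer(sink, buf);
        System.out.println(sink.toString("UTF-8")); // prints "hello hdfs"
    }
}
```

The extra copy is the cost of this approach; whether a zero-copy path exists for your Hadoop version is exactly the open question above.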
Jing,
thanks for your answer.
if hbase with high availability is the desired goal, is it recommended to
remove sshfence? we do not plan to use hdfs for anything else.
i understood that the only downside of no fencing is that the old namenode
could still be serving read requests. could this
I think real fencing is not required in the case that you're using
QJM-based HA. If you are using ZKFC, graceful fencing will first be
triggered, in which ZKFC sends an RPC request to the original ANN to
make it standby. If the graceful fencing fails, the configured fencing
method will be used. In the
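In that setup, a common pattern is to satisfy the required fencing setting with a no-op shell fencer rather than sshfence. A hypothetical hdfs-site.xml fragment illustrating this (property name is the standard dfs.ha.fencing.methods; verify against your Hadoop version's HA docs):

```xml
<!-- With QJM-based HA, the journal quorum already prevents a split-brain
     writer, so a no-op fencer can satisfy the mandatory setting. -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>shell(/bin/true)</value>
</property>
```

Note this only addresses write fencing; as raised above, a deposed NameNode could still briefly answer stale reads until it transitions to standby.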
Hi,
I would like to know whether the JRE alone is sufficient to run Hadoop
services, or whether the JDK is required.
We are planning to install the latest stable version of Hadoop.
Thanks,
Oc.tsdb
http://blog.cloudera.com/blog/2012/10/quorum-based-journaling-in-cdh4-1/
Old version (4.1) but the principle is still the same.
*No requirement for custom fencing configuration* - fencing methods such as
STONITH (http://en.wikipedia.org/wiki/STONITH) require custom hardware;
instead, we should