Hi,
This was my mistake on one of the cluster machines. It had the
hadoop-gpl-compression jar from Google Code. After removing that jar,
the error no longer came up.
Sorry about the false alarm.
Thanks and regards,
- Ashish
On Fri, 4 Feb 2011 10:37:33 +0530
Ashish Shinde wrote:
> hi Todd,
>
Hi, guys,
I have these lines trying to connect to HBase, and they work perfectly
well when I am connecting on the machine that runs HBase, but not when I am
connecting from outside. What ports or other conditions should I check?
Configuration config = HBaseConfiguration.create();
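For remote clients, the usual suspects are the ZooKeeper quorum address and its client port. A minimal sketch of a client-side hbase-site.xml, assuming the server is reachable at a hostname like hbase-host.example.com and ZooKeeper listens on its default port 2181 (both values here are assumptions, not taken from the thread):

```xml
<!-- Client-side hbase-site.xml sketch; host and port are example assumptions -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hbase-host.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

Beyond ZooKeeper, also check that the master and regionserver ports are not firewalled and that the daemons are not bound only to localhost (check with `netstat` on the server).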
Adding to what Andrew says.
On Sat, Feb 5, 2011 at 11:46 AM, Andrew Purtell wrote:
>> From: Oleg Ruchovets
>
>> 1) We want to use multi column families bulk loading. The
>> question is what is the status of 0.92? Is it possible to
>> use in production?
>
You could apply https://issues.apache.or
> From: Oleg Ruchovets
> 1) We want to use multi column families bulk loading. The
> question is what is the status of 0.92? Is it possible to
> use in production?
The status of 0.92 is that it does not exist yet.
There has been recent talk of us putting out a developer preview release of 0.92.
Hi,
I have installed Cloudera's CDH3 successfully on a node. I have written
a small application attempting to connect to it. My hbase-site.xml is
very simple:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>12.34.56.78</value>
</property>
The ZooKeeper connection is successful, but I get the following error
message systematically
Hi,
Congratulations on the 0.90 release.
We are going to production and I have a couple of questions:
1) We want to use multi column families bulk loading. The question is what
is the status of 0.92? Is it possible to use in production?
2) When do you plan to release 0.92 ?
3) Does upgrade
For testing purposes, it is possible to run HBase without HDFS and the
benefits of durability. Benoit Sigoure has a good writeup here:
http://opentsdb.net/setup-hbase.html
But for larger deployments, HDFS is the way to go. Another approach you
might consider is the pseudo-distributed option, wh
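For reference, the pseudo-distributed option mentioned above is enabled with a couple of properties in hbase-site.xml. A sketch, assuming a local HDFS NameNode on port 8020 (the port and path are example assumptions):

```xml
<!-- hbase-site.xml sketch for pseudo-distributed mode; rootdir is an example assumption -->
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:8020/hbase</value>
  </property>
</configuration>
```

With this, all HBase daemons run on one machine as separate JVM processes, backed by HDFS, which is closer to a real deployment than standalone mode.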
Mike, you'll also need access to an installation of Hadoop, whether
this is on the same machines as your HBase install (common), or somewhere
else. Often, people install Hadoop first and then layer HBase over it.
HBase depends on core Hadoop functionality like HDFS, and uses the Hadoop
JAR in l
On a related note:
http://wiki.apache.org/hadoop/Hadoop%20Upgrade (referenced by
http://wiki.apache.org/hadoop/Hbase/HowToMigrate#90) needs to be filled out.
On Fri, Feb 4, 2011 at 11:47 PM, Mike Spreitzer wrote:
> Hi, I'm new to HBase and have a stupid question about its dependency on
> Hadoop.
On Fri, Feb 4, 2011 at 4:43 AM, Shuja Rehman wrote:
> Can you be more specific and provide an example equivalent to this?
>
> for (i = 0; i < n; i++) {
>   list.add(putitem[i]);
> }
> htable.put(list);
The equivalent would be:
Callback callback = new Callback {
public Object run(Object arg) {
// Do whatever you want on a
Just to add on what others said, at StumbleUpon we typically see a
99th percentile lower than 100ms for simple get/put operations.
OpenTSDB also uses HBase with an interactive GWT interface and relies
on sub-second response times for queries involving many thousands of
rows.
--
Benoit "tsuna" Si
Stack, thanks. I am able to understand it now; I am just developing the model.
Could you please be more exact about the error? I am very new to HBase.
Many thanks for your clear answer.
Thank you Stack
Regards
Prabakaran
On Fri, Feb 4, 2011 at 11:30 PM, Stack wrote:
> The below error is telling you th
Sounds like you have a problem with HBase being swapped out of memory. It
might help (paradoxically) to decrease the memory available to HBase, since
it will cache less and have fewer long-lived pages in its cache. Certainly
you should consider decreasing the memory used by the map-reduce process
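A heap-size change like the one suggested above is usually made in conf/hbase-env.sh. A sketch; the 4000 MB value is an example assumption to illustrate the knob, not a recommendation:

```sh
# conf/hbase-env.sh -- example heap size in MB; tune for your machine
export HBASE_HEAPSIZE=4000
```

Keeping the combined heap of HBase, the DataNode, and the map-reduce tasks below physical RAM is what actually prevents swapping.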