Moving conversation to user@hbase.apache.org, which is the right list
for HBase questions.
HBase (like many other distributed systems) relies on clocks being
synchronized for various operations to behave as expected. HBase
recommends running ntpd on all nodes to keep time in proper sync. This is
why it
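To make the skew requirement concrete: the HBase master rejects a region server whose clock differs from its own by more than hbase.master.maxclockskew (30000 ms by default). Below is a minimal, stand-alone sketch of the kind of check one could run across nodes; the class and method names are invented for this example, and the timestamps would in practice be gathered from each host (e.g. via something like `ssh <host> date +%s%3N`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: given one clock sample per node, report the worst
// pairwise skew and compare it to HBase's default limit.
public class ClockSkewCheck {
    static final long MAX_SKEW_MS = 30_000L; // hbase.master.maxclockskew default

    // Worst-case difference between any two node clocks, in milliseconds.
    static long maxSkewMillis(Map<String, Long> samplesMillis) {
        long min = Long.MAX_VALUE, max = Long.MIN_VALUE;
        for (long t : samplesMillis.values()) {
            min = Math.min(min, t);
            max = Math.max(max, t);
        }
        return max - min;
    }

    public static void main(String[] args) {
        Map<String, Long> samples = new LinkedHashMap<>();
        samples.put("master", 1_331_200_000_000L);
        samples.put("slave",  1_331_200_004_500L); // 4.5 s ahead of master
        long skew = maxSkewMillis(samples);
        System.out.println("max skew: " + skew + " ms ("
            + (skew > MAX_SKEW_MS ? "EXCEEDS" : "within") + " HBase's limit)");
    }
}
```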
On Wed, Mar 7, 2012 at 10:30 PM, Something Something
wrote:
> 2012-03-07 22:09:29,828 [Thread-4] FATAL
> org.apache.hadoop.conf.Configuration - error parsing conf file:
> com.sun.org.apache.xerces.internal.impl.io.MalformedByteSequenceException:
> Invalid byte 2 of 2-byte UTF-8 sequence.
>
>
> Sou
Thank you for the quick response.
At 2012-03-08 14:37:15,"N Keywal" wrote:
>Hi,
>
>It's replaced by HBaseTestingUtility.
>
>Cheers,
>
>N.
>
>2012/3/8 lulynn_2008
>
>> Hi All,
>> I am integrating flume-0.9.4 with hbase-0.92.0. And I find hbase-0.92.0
>> removed HBaseClusterTestCase which is used
Hi,
It's replaced by HBaseTestingUtility.
Cheers,
N.
2012/3/8 lulynn_2008
> Hi All,
> I am integrating flume-0.9.4 with hbase-0.92.0. And I find hbase-0.92.0
> removed HBaseClusterTestCase which is used in flume-0.9.4.
> My question is:
> Is there any replacement for HBaseClusterTestCase?
>
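For what it's worth, a rough sketch of what a mini-cluster test looks like with HBaseTestingUtility in 0.92 (this needs the HBase test jars on the classpath; treat the exact calls as an outline to check against your version, not a drop-in replacement for the old HBaseClusterTestCase):

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class MiniClusterSketch {
    public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster(); // spins up ZK, HDFS, and HBase in-process
        try {
            HTable table = util.createTable(Bytes.toBytes("t"), Bytes.toBytes("f"));
            Put put = new Put(Bytes.toBytes("row1"));
            put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
            table.put(put);
            byte[] v = table.get(new Get(Bytes.toBytes("row1")))
                            .getValue(Bytes.toBytes("f"), Bytes.toBytes("q"));
            System.out.println(Bytes.toString(v)); // round-trips the value
        } finally {
            util.shutdownMiniCluster();
        }
    }
}
```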
Hello,
I am using: hadoop-0.20.2-cdh3u2, hbase-0.90.4-cdh3u3, pig-0.8.1-cdh3u3
I have successfully loaded data into HBase tables (implying my Hadoop &
HBase setup is good). I can look at the data using HBase shell.
Now I am trying to read data from HBase via a Pig Script. My test script
looks
Hi All,
I am integrating flume-0.9.4 with hbase-0.92.0. And I find hbase-0.92.0 removed
HBaseClusterTestCase which is used in flume-0.9.4.
My question is:
Is there any replacement for HBaseClusterTestCase?
Thank you.
> Nice Sujee. Make a patch for the reference guide so folks can find it
> easily? Add a footnote here: http://hbase.apache.org/book.html#dns?
> (If you write text into an issue, I'll take care of getting it into
> the guide). Shouldn't we have this in hbase altogether?
>
> ./bin/hbase checkdns
>
>
On Wed, Mar 7, 2012 at 1:48 PM, Nicolas Spiegelberg wrote:
> Looking for +1s on a March 27th HBase Users Group. Just want to make sure
> there are no huge conflicts before we post the official meetup. StumbleUpon
> has shiny new office space, so it seems like a great spot to host this
> meetu
On Wed, Mar 7, 2012 at 6:12 PM, Sujee Maniyam wrote:
> HI all,
> I was once stung by issue of DNS not working correctly on a
> Hadoop/Hbase cluster, it wasn't easy to debug.
>
> So I wrote a simple utility to verify DNS on a cluster (all machines)
>
> https://github.com/sujee/hadoop-dns-checker
>
On Wed, Mar 7, 2012 at 7:24 PM, Gopal wrote:
> One question remains:
>
> If I start HBase with just the NameNode on the master and no DataNode, it
> does not seem to work.
>
> In other words: master -> NameNode (*no DataNode*)
> HBase does not want to work nicely.
>
> It just hangs.
>
In essenc
On 3/7/2012 9:58 PM, Gopal wrote:
On 3/7/2012 9:11 PM, Gopal wrote:
Linux: Debian Squeeze
Hadoop Configuration
Version: hadoop-0.20.205.0
IPs: 192.168.1.76 and 192.168.1.74
/etc/hosts:
master -> 192.168.1.76
slave -> 192.168.1.74
Configuration files on both master & slave servers:
On 3/7/2012 9:11 PM, Gopal wrote:
Linux: Debian Squeeze
Hadoop Configuration
Version: hadoop-0.20.205.0
IPs: 192.168.1.76 and 192.168.1.74
/etc/hosts:
master -> 192.168.1.76
slave -> 192.168.1.74
Configuration files on both master & slave servers:
cat master -> master
cat slave -> slave
Dear all friends,
As Peter said, this is a bug when there is no table in HBase.
I want to know whether this installation is enough for creating a simple
application based on Java. I want to use the REST API.
Best regards,
Mahdi
Hi all,
I was once stung by an issue of DNS not working correctly on a
Hadoop/HBase cluster, and it wasn't easy to debug.
So I wrote a simple utility to verify DNS on a cluster (all machines):
https://github.com/sujee/hadoop-dns-checker
- It is written in pure Java and doesn't use any third-party libraries
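Not speaking for Sujee's tool, but the heart of such a check can be sketched in a few lines of stdlib Java (class and method names here are invented for the example): forward-resolve the hostname, reverse-resolve the resulting address, and see whether the round trip is consistent. Forward/reverse mismatches are a classic source of the confusing cluster behaviour discussed in this thread, since daemons register themselves under whatever name reverse DNS hands back.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal DNS round-trip check: hostname -> address -> hostname.
public class DnsRoundTrip {
    static boolean roundTripConsistent(String host) throws UnknownHostException {
        InetAddress addr = InetAddress.getByName(host);   // forward lookup
        String reverse = addr.getCanonicalHostName();     // reverse lookup
        System.out.println(host + " -> " + addr.getHostAddress() + " -> " + reverse);
        // Consistent if reverse DNS gives the same name back (or, failing a
        // PTR record, just the address itself).
        return reverse.equalsIgnoreCase(host) || reverse.equals(addr.getHostAddress());
    }

    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";
        System.out.println("consistent: " + roundTripConsistent(host));
    }
}
```

Running it on every node (for every other node's name) flags the machine whose /etc/hosts or resolver is off.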
Linux: Debian Squeeze
Hadoop Configuration
Version: hadoop-0.20.205.0
IPs: 192.168.1.76 and 192.168.1.74
/etc/hosts:
master -> 192.168.1.76
slave -> 192.168.1.74
Configuration files on both master & slave servers:
cat master -> master
cat slave -> slave
Hadoop comes up
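For reference, a hosts file along the lines the message implies would look like the following (addresses taken from the thread; whether this matches the actual files on those machines is an assumption):

```
127.0.0.1    localhost
192.168.1.76 master
192.168.1.74 slave
```

One Debian-specific gotcha worth checking: the installer often adds a line like `127.0.1.1 <hostname>`. With it present, Hadoop and HBase daemons can end up binding to the loopback address instead of the LAN address, which produces exactly the "everything starts but nothing connects" hangs described here. Commenting that line out on both machines is a common fix.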
Sounds great to me.
On Wed, Mar 7, 2012 at 1:48 PM, Nicolas Spiegelberg wrote:
> Looking for +1s on a March 27th HBase Users Group. Just want to make sure
> there are no huge conflicts before we post the official meetup.
> StumbleUpon has shiny new office space, so it seems like a great spot to
Looking for +1s on a March 27th HBase Users Group. Just want to make sure
there are no huge conflicts before we post the official meetup. StumbleUpon
has shiny new office space, so it seems like a great spot to host this meetup.
Also, if you would like to present or announce, email me (or Sta
Hi Mahdi
On 05/03/12 09:14, Mahdi Negahi wrote:
> I'm new to Linux and HBase. The first time, I installed HBase on
> Windows with Cygwin successfully, but after installing Thrift everything
> changed. So I decided to change my OS and try to install HBase on
>
On Tue, Mar 6, 2012 at 9:30 PM, Mahdi Negahi wrote:
>
>
>
>
> Dear All friends
>
> thanks for your response. I'll just explain the HBase installation steps
> that I followed.
>
> 1- download Hbase-0.92.0 and untar it.
>
> 2- move it to /usr/lib by this command
>
> sudo mv hbase-0.92.0 /usr/lib
>
> 3- ope
Hi there-
You probably also want to see this section in the RefGuide on schema
design...
http://hbase.apache.org/book.html#rowkey.design
... as well as this for region-RS assignment (and failover)...
http://hbase.apache.org/book.html#regions.arch
re: "recommended minimum number of nodes?"
Comments inline.
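Since row-key layout for time-series data comes up so often, here is a small, self-contained sketch of two patterns from the rowkey.design chapter linked above: a reversed timestamp so a plain scan returns newest rows first, and a salt prefix to spread sequential writes across regions and avoid hotspotting one region server. All names are illustrative; HBase itself only ever sees the resulting key bytes.

```java
// Illustrative row-key helpers; not an HBase API.
public class RowKeys {
    // Reversed timestamp: Long.MAX_VALUE - ts, zero-padded so keys of the
    // same entity sort lexicographically with the newest event first.
    static String reversedTsKey(String sensorId, long tsMillis) {
        return String.format("%s#%019d", sensorId, Long.MAX_VALUE - tsMillis);
    }

    // Salt prefix: a deterministic bucket number in front of the key spreads
    // monotonically increasing keys across N regions.
    static String saltedKey(String key, int buckets) {
        int salt = Math.floorMod(key.hashCode(), buckets);
        return String.format("%02d#%s", salt, key);
    }

    public static void main(String[] args) {
        String older = reversedTsKey("sensor42", 1_331_200_000_000L);
        String newer = reversedTsKey("sensor42", 1_331_200_060_000L); // 1 min later
        // The newer event sorts lexicographically before the older one.
        System.out.println(newer.compareTo(older) < 0);
        System.out.println(saltedKey("sensor42", 8));
    }
}
```

The trade-off to keep in mind: salting kills simple range scans over the raw key (a scan now needs one pass per bucket), so it is usually reserved for purely write-heavy, point-read workloads.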
> -----Original Message-----
> From: philip.ev...@gmail.com [mailto:philip.ev...@gmail.com] On Behalf
> Of Phil Evans
> Sent: Wednesday, March 07, 2012 3:53 PM
> To: user@hbase.apache.org
> Subject: Designing Row Key
>
> Dear All,
>
> We're currently designing a Row Key for our
Dear All,
We're currently designing a row key for our schema, and this has raised a
number of queries which we've struggled to find definitive answers to. We
think we understand what goes on, and hoped someone on the list would be
able to help clarify!
Ultimately, the data we are storing is time s