The details of the PCs: 3 GB RAM, Pentium(R) Dual-Core CPU E5800 @ 3.2 GHz,
hadoop-0.20.2 (append), zookeeper-3.3.3, hbase-0.90.3. The two PCs are connected by a 1G
switch.
At 2011-10-20 13:30:44,"Ted Yu" wrote:
>Can you give us more details about your setup ?
>For example, hbase version, Hadoop version, etc.
Can you give us more details about your setup ?
For example, hbase version, Hadoop version, etc.
On Oct 19, 2011, at 9:58 PM, 郑浩 wrote:
> hello, everyone
> I have tested the performance of using bulk load to load data into HBase. I only use
> two PCs; the result is 1 MB per second per PC. Is it
Hi St.Ack, Ben
I also have a scenario where I have to take periodic backups of
HBase data. For that I will be using the export/import tool. I have decided to
take backups based on a time-range interval. I have read in some other posts
also that it is not a good idea for one to use t
On Thu, Oct 20, 2011 at 2:58 AM, Jignesh Patel wrote:
> I have Lars George's book in my hand, but he has not mentioned anything
> about a web-based GUI for creating tables. He has some detail about the usage
> at page 277.
>
There is no graphical table creator in UI.
> If I need to create 50 c
Harsh
Can you elaborate more about following:
"Otherwise, you need to make do with HBase's web UI, and the shell."
I have Lars George's book in my hand, but he has not mentioned anything
about a web-based GUI for creating tables. He has some detail about the usage
at page 277.
If I need to crea
于 2011/10/20 9:38, Eason Lee 写道:
> Any suggestion? thx~~
>
>
From the source code, the Export tool uses a scan to get the newly added
records and dumps them into a SequenceFile.
But if I have deleted a record, that record can't be fetched by a scan. How
can I delete the deleted rows?
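A minimal plain-Python sketch (no HBase involved) of the limitation being described: a scan only returns rows that currently exist, so a scan-driven export carries no record of a deletion that happened between two backup runs.

```python
# Stand-in "table" and "scan"; this models the behavior, not HBase internals.
table = {"row1": "v1", "row2": "v2"}

def scan(tbl):
    # Like an HBase scan: yields only live rows, in key order.
    return sorted(tbl.items())

export_1 = scan(table)   # first backup sees both rows
del table["row2"]        # a client deletes row2
export_2 = scan(table)   # second backup: row2 is silently absent

print(export_1)  # [('row1', 'v1'), ('row2', 'v2')]
print(export_2)  # [('row1', 'v1')]
```

Nothing in `export_2` marks row2 as deleted; capturing deletes would need a different source than a scan (for example the write-ahead log or a replication stream), which is exactly the gap the question is pointing at.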
1. Whenever my computer goes to sleep mode (Mac OS X Lion) it stops
HMaster. However, all the MapReduce-related processes remain active:
739 JobTracker
585 DataNode
675 SecondaryNameNode
496 NameNode
5991 Jps
826 TaskTracker
Is this expected behaviour?
2. Even HMaster is not running when I tried to check th
Any suggestion? thx~~
Sameer,
Despite that FATAL message, you should still have had the table created. That
message is more of a warning to users, that their setup won't work in fully
distributed mode if they use 'localhost'.
It sure looks like one would think the table creation errored out, though.
On 20-Oct-2011, at 3:40
On Wed, Oct 19, 2011 at 4:03 PM, Sameer Farooqui
wrote:
> I have decided to also name my first born in your honor, so that perhaps he
> will be blessed with your high intelligence. Will discuss with wife and get
> back to you.
>
If you need me to call the wife and explain why it'd be a blessing
h
Kind sir,
That worked. Here is proof:
hbase(main):003:0> create 'Michael Charlemange Mary Malachy Angel-from-Heaven Stack', 'ohgodpleasework'
hbase(main):004:0>
By the way, I didn't make the change in the hbase-site.xml file. I left that
with localhost. I just changed the zoo.cfg file to use the
I see this in your first set of errors:
2011-09-12 10:21:39,606 INFO [main] wal.HLog(396):
getNumCurrentReplicas--HDFS-826 not available;
That seems strange for a CDH -- IIRC, it should have the above.
What is the context you are running the merge in? My guess is that we
are finding other than
On Tue, Oct 18, 2011 at 7:34 AM, Jignesh Patel wrote:
> Harsh,
> for the launchctl changes even by using sudo command as an admin it says I
> don't have permission.
>
>
Would suggest you look on a macosx mailing list for answer to above Jignesh.
St.Ack
On Wed, Oct 19, 2011 at 3:10 PM, Sameer Farooqui
wrote:
>
> hbase.rootdir
> hdfs://localhost:8020/hbase
>
Can you change the above? Remove the 'localhost' and put the machine
name in there?
St.Ack
P.S. For naming table, my name is 'Michael Charlemange Mary Malachy
Angel-from-Hea
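The change Stack is suggesting would look roughly like this in hbase-site.xml; the hostname `myhost.example.com` below is a placeholder for the machine's real, resolvable name:

```xml
<!-- Hypothetical hbase-site.xml fragment: replace 'localhost' with the
     machine's actual hostname so it resolves outside the local box. -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://myhost.example.com:8020/hbase</value>
</property>
```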
Hello everyone,
I'm having a really hard time trying to use HBase on a single server in
pseudo-distributed mode. I have a large instance on Amazon EC2 running Red
Hat 6.1 64-bit.
We are running the Cloudera Hadoop release CDH3. When I get into the hbase
shell and try to create a table, I get this
On Wed, Oct 19, 2011 at 12:18 PM, Ben West wrote:
> We're storing timestamped data in HBase; from lurking on the mailing list it
> seems like the recommendation is usually to make the timestamp part of the
> row key. I'm curious why this is - is scanning over rows more efficient than
> scanning
On Wed, Oct 19, 2011 at 2:19 PM, Jignesh Patel wrote:
> St.Ack,
>
> changing hadoop-version in pom.xml doesn't help, as during runtime it still
> tries to load from the .m2/repository folder. Is there a way to say
> don't go to the .m2/repository folder but go to the /lib directory?
>
You can rig
St.Ack,
changing hadoop-version in pom.xml doesn't help, as during runtime it still
tries to load from the .m2/repository folder. Is there a way to say
don't go to the .m2/repository folder but go to the /lib directory?
-jignesh
On Wed, Oct 19, 2011 at 4:42 PM, Stack wrote:
> On Wed, Oct 19, 20
On Wed, Oct 19, 2011 at 1:10 PM, Jignesh Patel wrote:
> just figured out that while running start-hbase.sh it is taking files from
> ~/.m2/repository rather than the usual hadoop-hbase/lib.
>
> What do I need to change so that it will take the files only from
> hadoop-hbase/lib? Because the repository has hadoop-co
just figured out that while running start-hbase.sh it is taking files from
~/.m2/repository rather than the usual hadoop-hbase/lib.
What do I need to change so that it will take the files only from
hadoop-hbase/lib? Because the repository has a hadoop-core append file which does
not match my hadoop-core-2.205
Hi J-D,
Thanks for the detailed explanation.
So if I understand correctly, the lease we're talking about is a scanner
lease and the timeout is between two scanner calls, correct? I think that
makes sense because I now realize that jobs that fail (some jobs continued to
fail even after reducing the nu
Hi all,
We're storing timestamped data in HBase; from lurking on the mailing list it
seems like the recommendation is usually to make the timestamp part of the row
key. I'm curious why this is - is scanning over rows more efficient than
scanning over timestamps within a cell?
The book says: "
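One way to see the appeal of time-in-rowkey, as a plain-Python sketch (not HBase code): HBase keeps rows sorted lexicographically by key, so a fixed-width timestamp prefix turns a time-range query into a single contiguous scan between a start and a stop key. The `"<zero-padded seconds>-<event id>"` key format below is hypothetical.

```python
import bisect

# Hypothetical row keys: zero-padded epoch seconds + '-' + event id.
keys = sorted(f"{ts:010d}-{i}" for i, ts in enumerate([100, 205, 205, 310, 450]))

def scan_range(sorted_keys, start, stop):
    # Contiguous slice of the sorted keys, like an HBase scan
    # bounded by startRow and stopRow.
    lo = bisect.bisect_left(sorted_keys, f"{start:010d}")
    hi = bisect.bisect_left(sorted_keys, f"{stop:010d}")
    return sorted_keys[lo:hi]

print(scan_range(keys, 200, 400))  # ['0000000205-1', '0000000205-2', '0000000310-3']
```

By contrast, if the timestamp lives only inside the cell (as a version or a value), a time-range query has to touch every row and filter, which is one common argument for putting the time in the key.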
After compiling with Maven, I started getting the following error.
2011-10-19 14:02:38,089 FATAL org.apache.hadoop.hbase.master.HMaster:
Unhandled exception. Starting shutdown.
java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local
exception: java.io.EOFException
at org.ap
Mmm ok, how did you kill the master exactly? kill -9 or a normal shutdown? I
think I could see how it would happen in the case of a normal shutdown, but
even then it would *really really* help to see the logs of what's going on.
J-D
On Tue, Oct 18, 2011 at 6:37 PM, Mingjian Deng wrote:
> @J-D:
If you started ZK via HBase, use bin/hbase-daemons.sh stop zookeeper
As you can see the stop command you are using doesn't know about the process
it should be looking for...
J-D
On Wed, Oct 19, 2011 at 4:38 AM, Arsalan Bilal wrote:
> I am trying to stop ZooKeeper, but when I issue the command, it do
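A plain-Python sketch (hypothetical, not ZooKeeper's actual script) of the failure mode J-D describes: a pid-file-based stop command can only signal the pid its own launcher recorded, so a ZooKeeper started by a different launcher (here, HBase) is invisible to it, and a stale or wrong pid file produces exactly a "No such process" message.

```python
import os
import signal
import tempfile

def stop_from_pidfile(pidfile):
    # Hypothetical sketch of a zkServer.sh-style stop path: read the pid
    # the launcher recorded, then signal it. A process started elsewhere
    # never wrote this file, so it can never be found here.
    try:
        pid = int(open(pidfile).read().strip())
    except (FileNotFoundError, ValueError):
        return "no pid file"
    try:
        os.kill(pid, signal.SIGTERM)
        return f"signalled {pid}"
    except ProcessLookupError:
        return f"kill: {pid}: No such process"

# A pid file pointing at a long-dead (impossible) pid reproduces the symptom.
path = os.path.join(tempfile.gettempdir(), "zk_sketch.pid")
open(path, "w").write("999999999")
print(stop_from_pidfile(path))  # kill: 999999999: No such process
```

Hence J-D's advice: if HBase launched ZooKeeper, stop it through HBase's own scripts (`bin/hbase-daemons.sh stop zookeeper`), which do know which process they started.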
What is the value of fs.default.name in hbase-site.xml?
On Oct 19, 2011 5:02 PM, "Sooraj S" wrote:
> Hi All,
>
> When I try to start the HBase master, it throws the following error. Can
> anybody please help me with it?
>
> Error:
>
> INFO org.apache.hadoop.hbase.master.ActiveMasterManager:
> Ma
Hey Matthew,
The only way to increase the number of reducers is to have more regions -
each reducer produces an output per region, so the number of reducers ==
number of regions.
Thanks
Karthik
On 10/18/11 2:00 AM, "Matthew Tovbin" wrote:
>Hello, Guys,
>
>I'm willing to bulk load data from hd
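Karthik's point can be sketched in plain Python (the split points are hypothetical, and this is only the idea behind total-order partitioning in a bulk load, not the actual HBase code): each reducer must produce one sorted output per region, so every row is routed to the reducer that owns the region whose key range contains it.

```python
import bisect

# Hypothetical region split points: 3 regions covering
# [-inf,'g'), ['g','p'), ['p',+inf).
region_splits = ["g", "p"]

def region_for(row_key):
    # Index of the region (and therefore of the reducer) for a row key,
    # in the spirit of a total-order partitioner.
    return bisect.bisect_right(region_splits, row_key)

rows = ["apple", "grape", "zebra", "mango"]
print({r: region_for(r) for r in rows})  # {'apple': 0, 'grape': 1, 'zebra': 2, 'mango': 1}
```

Since the reducer count is pinned to the region count, the only lever for more parallelism on the reduce side is more regions, e.g. pre-splitting the table.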
Just thinking it may be easier to refer to the HBase lib jars when creating a
MapReduce project.
But I don't know which jar the following package exists in:
org.apache.hadoop.hbase.
-Jignesh
On Wed, Oct 19, 2011 at 10:14 AM, Jignesh Patel wrote:
> Doug and Jonathan,
> Yes I did follow the steps and
Doug and Jonathan,
Yes, I did follow the steps and, with the help of Maven, I have created the
target folder.
But that folder doesn't have anything like an Eclipse plugin, something readily
available in the hadoop/contrib folder.
What I am looking for at this time is that when I create a project through Hadoop it
shoul
In addition to what Jonathan just said, see
http://hbase.apache.org/book.html#ides
On 10/19/11 3:05 AM, "Jonathan Gray" wrote:
>Not sure what kind of integration you're talking about, but if you just want
>to create a project with the HBase source then just grab an SVN checkout
>of an HBase repo
sorry, hdfs://:8020/hbase
On 19 October 2011 17:13, Sooraj S wrote:
> hdfs://hdfs://:8020/hbase
>
>
> On 19 October 2011 17:08, Ramkrishna S Vasudevan
> wrote:
>
>>
>> What is the configuration for hbase.rootdir?
>>
>> Regards
>> Ram
>> -Original Message-
>> From: Sooraj S [mailto:soor
hdfs://hdfs://:8020/hbase
On 19 October 2011 17:08, Ramkrishna S Vasudevan wrote:
>
> What is the configuration for hbase.rootdir?
>
> Regards
> Ram
> -Original Message-
> From: Sooraj S [mailto:soor...@sparksupport.com]
> Sent: Wednesday, October 19, 2011 5:02 PM
> To: user@hbase.apache.
I am trying to stop ZooKeeper, but when I issue the command it does not stop
ZooKeeper; you can still see it in jps.
# ../zookeeper/bin/zkServer.sh stop
JMX enabled by default
Using config: /etc/zookeeper/zoo.cfg
Stopping zookeeper ...
kill: 132: No such process
STOPPED
#jps
9338 Jps
2031 QuorumPe
What is the configuration for hbase.rootdir?
Regards
Ram
-Original Message-
From: Sooraj S [mailto:soor...@sparksupport.com]
Sent: Wednesday, October 19, 2011 5:02 PM
To: user@hbase.apache.org
Subject: java.lang.NumberFormatException: For input string: ""
Hi All,
When i try to start the
Hi All,
When I try to start the HBase master, it throws the following error. Can
anybody please help me with it?
Error:
INFO org.apache.hadoop.hbase.master.ActiveMasterManager:
Master=my-desktop:6
FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting
shutdown.
java.la
Not sure what kind of integration you're talking about, but if you just want to
create a project with the HBase source then just grab an SVN checkout of an
HBase repo and just do:
mvn eclipse:eclipse
This creates all the necessary project files. Then just add new project from
existing source.
Th