Re: Hbase DeleteAll is not working

2012-05-14 Thread Jean-Daniel Cryans
Please don't cross-post; your question is about HBase, not MapReduce itself, so I put mapreduce-user@ in BCC. 0.20.3 is, relative to the age of the project, as old as my grandmother, so you should consider upgrading to 0.90 or 0.92, which are both pretty stable. I'm curious about the shell's behav

Re: mapreduce streaming with hbase as a source

2011-02-22 Thread Jean-Daniel Cryans
1:a/1298037767127/Put/vlen=3, >> row3/family1:b/1298037770111/Put/vlen=3, >> row3/family1:c/1298037774954/Put/vlen=3} >> >> I see there is everything but the value. What should I do to get the value >> on stdin too? >> >> Ondrej

Re: mapreduce streaming with hbase as a source

2011-02-18 Thread Jean-Daniel Cryans
You have a typo: it's hbase.mapred.tablecolumns, not hbase.mapred.tablecolumn. J-D On Fri, Feb 18, 2011 at 6:05 AM, Ondrej Holecek wrote: > Hello, > > I'm testing Hadoop and HBase. I can run MapReduce streaming or pipes jobs > against text files on > Hadoop, but I have a problem when I try to run
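The fix above comes down to spelling the property name in the plural. As a hedged sketch of what the corrected job configuration could look like (the property name is from the reply above, but the column list is an illustrative example, not taken from the thread):

```xml
<!-- Sketch only: the column list (family1:a family1:b) is a made-up
     example; substitute the columns your streaming job actually scans. -->
<property>
  <name>hbase.mapred.tablecolumns</name>
  <value>family1:a family1:b</value>
</property>
```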

Re: Single Job to put Data into Hbase+MySQL

2010-10-27 Thread Jean-Daniel Cryans
mapper output saved in files which are then > transferred to MySQL using Sqoop. So here I need to save output to files + > send data to HBase. Suppose I use an output format to save data into a file; then > how do I send data to HBase in MapReduce manually? > > On Thu, Oct 28, 2010 at 1:47 A

Re: Single Job to put Data into Hbase+MySQL

2010-10-27 Thread Jean-Daniel Cryans
Do both insertions in your reducer, either by not using the output formats at all or by using one of them and doing the other insert by hand. J-D On Wed, Oct 27, 2010 at 1:44 PM, Shuja Rehman wrote: > Hi Folks > > I am wondering if anyone has the answer to this question. I am processing > log files using

Client hanging 20 seconds after job's over (WAS: Re: Can I run HBase 0.20.6 on Hadoop 0.21?)

2010-09-27 Thread Jean-Daniel Cryans
bout 20 seconds ... > very slow > > I'm trying to understand why I have this 20 second overhead and what I can do > about it. > > My map and reduce classes are in my Hadoop classpath. > > On Sep 27, 2010, at 11:32 AM, Jean-Daniel Cryans wrote: > >> Using 0.

Re: Error opening job jar

2010-06-15 Thread Jean-Daniel Cryans
This isn't an HBase question; this is for mapreduce-user@hadoop.apache.org. J-D On Tue, Jun 15, 2010 at 8:21 AM, yshintre1982 wrote: > > i am running the wordcount example on linux vmware on hadoop. > i get the following exception > > Exception in thread "main" java.io.IOException: Error opening job j

Re: Reducers are stuck fetching map data.

2010-01-26 Thread Jean-Daniel Cryans
You mean that documentation? http://hadoop.apache.org/common/docs/r0.20.1/quickstart.html#Required+Software J-D On Tue, Jan 26, 2010 at 1:34 AM, Suhail Rehman wrote: > We finally figured it out! The problem was with the JDK installation on our > VMs, it was configured to use IBM JDK, and the mom

JVM reuse (was: HBase bulk load)

2010-01-21 Thread Jean-Daniel Cryans
This is a question about straight mapreduce so I'm cross-sending the answer. To get any parallelization, you have to start multiple JVMs in the current hadoop version. Let's say you have configured your servers with mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum t
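The parallelization knobs mentioned above, plus the JVM reuse setting the subject line refers to, lived in each TaskTracker's mapred-site.xml in Hadoop of that era. A minimal sketch with illustrative values (the slot counts shown are assumptions, not from the thread):

```xml
<!-- Sketch only: slot counts are illustrative; tune per machine. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value> <!-- concurrent map task JVMs per TaskTracker -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value> <!-- concurrent reduce task JVMs per TaskTracker -->
</property>
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>-1</value> <!-- -1 = reuse a task JVM for unlimited tasks of the same job -->
</property>
```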

Re: how to load big files into Hbase without crashing?

2010-01-12 Thread Jean-Daniel Cryans
Michael, This question should be addressed to the hbase-user mailing list as it is strictly about HBase's usage of MapReduce; the framework itself doesn't have any knowledge of how the region servers are configured. I CC'd it. Uploading into an empty table is always a problem as you saw since ther