Please don't cross-post; your question is about HBase, not MapReduce
itself, so I put mapreduce-user@ in BCC.
0.20.3 is, relative to the age of the project, as old as my
grandmother, so you should consider upgrading to 0.90 or 0.92, which
are both pretty stable.
I'm curious about the shell's behavior
>> row3/family1:a/1298037767127/Put/vlen=3,
>> row3/family1:b/1298037770111/Put/vlen=3,
>> row3/family1:c/1298037774954/Put/vlen=3}
>>
>> I see there is everything but the value. What should I do to get the value
>> on stdin too?
>>
>> Ondrej
>>
You have a typo: it's hbase.mapred.tablecolumns, not hbase.mapred.tablecolumn.
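For context, a minimal sketch of where that property would go on the job
configuration; the column list and class names below are illustrative
assumptions matching the scan output above, not details taken from the thread:

  // Hedged sketch: setting the (correctly spelled) column-list property on an
  // old-style JobConf before the HBase TableInputFormat picks it up.
  import org.apache.hadoop.mapred.JobConf;

  public class ColumnListConf {
    public static JobConf withColumns() {
      JobConf conf = new JobConf();
      // Note the trailing "s": hbase.mapred.tablecolumns, a space-separated
      // list of family:qualifier pairs the input format should emit values for.
      conf.set("hbase.mapred.tablecolumns", "family1:a family1:b family1:c");
      return conf;
    }
  }

With the misspelled key the setting is silently ignored, which is why only the
keys show up on stdin.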
J-D
On Fri, Feb 18, 2011 at 6:05 AM, Ondrej Holecek wrote:
> Hello,
>
> I'm testing Hadoop and HBase. I can run MapReduce streaming or pipes jobs
> against text files on
> Hadoop, but I have a problem when I try to run
> the mapper output is saved in files which are then
> transferred to MySQL using Sqoop. So here I need to save the output to files
> and also send data to HBase. Suppose I use an output format to save data into
> a file; then how do I send data to HBase in MapReduce manually?
>
> On Thu, Oct 28, 2010 at 1:47 A
Do both insertions in your reducer, either by not using the output
formats at all or by using one of them and doing the other insert by hand.
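As an illustration of the second option, here is a minimal reducer sketch that
writes through the job's normal output format and also does the HBase insert by
hand; the table name, column family, and the 0.90-era client API are
assumptions for illustration, not details from the thread:

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Reducer;

  // Sketch only: each record goes to the configured output format (files that
  // Sqoop can export later) and is also put into HBase by hand.
  public class DualWriteReducer extends Reducer<Text, Text, Text, Text> {
    private HTable table;

    @Override
    protected void setup(Context context) throws IOException {
      Configuration conf = HBaseConfiguration.create(context.getConfiguration());
      table = new HTable(conf, "mytable");          // hypothetical table name
    }

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      for (Text value : values) {
        context.write(key, value);                  // 1) normal output format
        Put put = new Put(Bytes.toBytes(key.toString()));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"),
                Bytes.toBytes(value.toString()));   // hypothetical family:qualifier
        table.put(put);                             // 2) manual HBase insert
      }
    }

    @Override
    protected void cleanup(Context context) throws IOException {
      table.close();                                // flushes buffered puts
    }
  }

Each reducer opens its own HBase connection here, so keep an eye on the number
of concurrent reduce tasks hitting the cluster.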
J-D
On Wed, Oct 27, 2010 at 1:44 PM, Shuja Rehman wrote:
> Hi Folks
>
> I am wondering if anyone has the answer to this question. I am processing
> log files using
> about 20 seconds ...
> very slow
>
> I'm trying to understand why I have this 20-second overhead and what I can do
> about it.
>
> My map and reduce classes are in my Hadoop classpath.
>
> On Sep 27, 2010, at 11:32 AM, Jean-Daniel Cryans wrote:
>
>> Using 0.
This isn't an HBase question; it's one for mapreduce-user@hadoop.apache.org.
J-D
On Tue, Jun 15, 2010 at 8:21 AM, yshintre1982 wrote:
>
> I am running the wordcount example on Hadoop, on Linux under VMware.
> I get the following exception:
>
> Exception in thread "main" java.io.IOException: Error opening job j
You mean that documentation?
http://hadoop.apache.org/common/docs/r0.20.1/quickstart.html#Required+Software
J-D
On Tue, Jan 26, 2010 at 1:34 AM, Suhail Rehman wrote:
> We finally figured it out! The problem was with the JDK installation on our
> VMs; it was configured to use the IBM JDK, and the mom
This is a question about straight MapReduce, so I'm cross-sending the answer.
To get any parallelization, you have to start multiple JVMs in the
current Hadoop version. Let's say you have configured your servers
with mapred.tasktracker.map.tasks.maximum and
mapred.tasktracker.reduce.tasks.maximum t
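To make the arithmetic behind those two properties concrete, here is a small
sketch; the node count and the fallback slot values are assumptions, and in
practice these properties live in each tasktracker's mapred-site.xml rather
than being set per job:

  import org.apache.hadoop.conf.Configuration;

  // Rough sketch: the per-tasktracker slot settings bound how many task JVMs
  // can run at once across the cluster (slots per node x number of nodes).
  public class SlotMath {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      int mapSlots = conf.getInt("mapred.tasktracker.map.tasks.maximum", 2);
      int reduceSlots = conf.getInt("mapred.tasktracker.reduce.tasks.maximum", 2);
      int nodes = 10; // hypothetical cluster size

      System.out.println("Max concurrent map tasks:    " + nodes * mapSlots);
      System.out.println("Max concurrent reduce tasks: " + nodes * reduceSlots);
    }
  }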
Michael,
This question should be addressed to the hbase-user mailing list, as it
is strictly about HBase's usage of MapReduce; the framework itself
doesn't have any knowledge of how the region servers are configured. I
CC'd it.
Uploading into an empty table is always a problem, as you saw, since
there
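One commonly used mitigation for loading into an empty table is to create it
pre-split into several regions, so the upload is spread over more than one
region server from the start. A hedged sketch, assuming an HBase 0.90-era
admin API with made-up table, family, and split points:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.client.HBaseAdmin;
  import org.apache.hadoop.hbase.util.Bytes;

  // Sketch only: create the table with four split points so it starts with
  // five regions instead of a single one.
  public class PreSplitTable {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      HBaseAdmin admin = new HBaseAdmin(conf);

      HTableDescriptor desc = new HTableDescriptor("uploads"); // hypothetical name
      desc.addFamily(new HColumnDescriptor("cf"));             // hypothetical family

      byte[][] splits = new byte[][] {
          Bytes.toBytes("2"), Bytes.toBytes("4"),
          Bytes.toBytes("6"), Bytes.toBytes("8")
      };
      admin.createTable(desc, splits);
    }
  }

The split points should of course match the key distribution of the data being
uploaded.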