That's basically what CopyTable does if I understand your need properly:
https://github.com/apache/hbase/blob/0.90/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java
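For reference, CopyTable is driven from the command line; a sketch of a typical 0.90-era invocation, where the table names and the destination cluster's ZooKeeper address are placeholders:

```shell
# Copy sourceTable into destTable on another cluster; --peer.adr points at
# the destination cluster's ZooKeeper quorum (all values are placeholders).
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=remote-zk-host:2181:/hbase \
  --new.name=destTable \
  sourceTable
```

Omit --peer.adr to copy within the same cluster.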
J-D
On Wed, Mar 30, 2011 at 8:34 AM, Stuart Scott wrote:
> Hi,
>
>
>
> I have a map/reduce job
Hi,
I have a map/reduce job that reads from one HBase table and writes to
another.
Does anyone know how to programmatically set the ZooKeeper address for
a Reducer?
I can create a job as below and set the IP address using .set... It works
fine for the Mapper, but the Reducer defaults to localhost. We
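One common cause, assuming the symptom described above: the quorum must be set on the Configuration *before* the Job is constructed, because Job takes a copy of the conf, so later set() calls never reach the reduce tasks. A sketch against the 0.90 API; the hostnames, table names, and MyMapper/MyReducer classes are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class TableCopyDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Set the quorum BEFORE constructing the Job: Job copies the conf,
        // so anything set afterwards is invisible to map and reduce tasks.
        conf.set("hbase.zookeeper.quorum", "10.0.0.5");            // placeholder
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        Job job = new Job(conf, "table-copy");
        // TableMapReduceUtil wires the same conf into both stages.
        TableMapReduceUtil.initTableMapperJob("source", new Scan(),
                MyMapper.class, ImmutableBytesWritable.class, Put.class, job);
        TableMapReduceUtil.initTableReducerJob("target", MyReducer.class, job);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```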
If you could grab a copy of dnode 5’s logs, I will forward to the user group.
They are usually really helpful.
From: Stuart Scott
Sent: 21 March 2011 22:45
To: 'Desmond Lee'
Cc: Desmond Lee
Subject: RE: hbase.client.write.buffer
Yep… more memory me thinks.
I can’t see the CPU’
e limited information you have provided.
Dave
-----Original Message-----
From: Ted Dunning [mailto:tdunn...@maprtech.com]
Sent: Monday, March 21, 2011 1:19 PM
To: user@hbase.apache.org
Cc: Stuart Scott
Subject: Re: HBase Stability
Is there a reason you are not using a recent version of 0.90?
On
1 March 2011 20:20
To: user@hbase.apache.org
Cc: Stuart Scott
Subject: Re: HBase Stability
No, map-reduce is not really necessary to add so few rows.
Our internal tests repeatedly load 10-100 million rows without much
fuss. And that is on clusters ranging from 3 to 11 nodes.
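For row counts in that range, a plain client-side loop with the write buffer enabled is usually enough; a sketch against the 0.90 client API, where the table, family, and qualifier names are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BulkPut {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");      // placeholder table
        // Buffer puts client-side instead of one RPC per row.
        table.setAutoFlush(false);
        table.setWriteBufferSize(2 * 1024 * 1024);       // 2 MB buffer

        for (int i = 0; i < 1000000; i++) {
            Put p = new Put(Bytes.toBytes(String.format("row-%09d", i)));
            p.add(Bytes.toBytes("cf"), Bytes.toBytes("q"),
                  Bytes.toBytes("value-" + i));
            table.put(p);                 // flushed when the buffer fills
        }
        table.flushCommits();             // push any remaining buffered puts
        table.close();
    }
}
```

Zero-padding the row key keeps the rows in numeric order under HBase's lexicographic sort.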
On Mon, Mar 2
We have been trying to build a working system into which we can insert
records reliably, but to no avail.
Any advice would be appreciated.
Regards
Stuart
Stuart Scott
System Architect
emis intellectual technology
Fulford Grange, Micklefield Lane
Rawdon Leeds LS19 6BA
E-mail: stua
Hi,
We are experiencing the same issue. We have experimented with the memory
settings too, but still hit the same problem. We are inserting over
1,000,000 records. We find that it freezes as below and also, after
running for some time, the entire connectivity dies.
Would be interested in any progress
Column information via Thrift/PHP/Scanner
when you scan using the shell, what do you see? Note that qualifier names
are just byte[] and thus case sensitive.
-ryan
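The point about qualifiers being raw bytes can be shown with plain Java: two qualifiers that differ only in case are entirely different byte arrays, hence different columns.

```java
import java.util.Arrays;

public class QualifierCase {
    public static void main(String[] args) {
        // HBase compares qualifiers as raw bytes, so "Value" and "value"
        // name two different columns.
        byte[] upper = "Value".getBytes();
        byte[] lower = "value".getBytes();
        System.out.println(Arrays.equals(upper, lower)); // prints "false"
    }
}
```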
On Tue, Feb 1, 2011 at 6:38 AM, Stuart Scott wrote:
> Hi,
>
>
>
> Wonder if anyone could offer any advice please? I
$row = $TRowResult->row;
echo $row;
// returns the row key correctly
$columns = $TRowResult->columns;
foreach ($columns as $family_column => $Tcell)
{
    $rec = $Tcell->value;
    // doesn't return anything for the column family:column
    echo $rec->v
Sent: 28 January 2011 16:02
To: user@hbase.apache.org
Subject: Re: HBase access from C#.NET
Can you use thrift?
St.Ack
On Fri, Jan 28, 2011 at 7:55 AM, Stuart Scott
wrote:
> Hi,
>
>
>
> Has anyone tried to get a Windows C#.NET application to connect to
> HBase?
>
>
Hi,
Has anyone tried to get a Windows C#.NET application to connect to
HBase?
If so, how did you manage it?
Regards
Stuart Scott
System Architect
emis intellectual technology
Fulford Grange, Micklefield Lane
Rawdon Leeds LS19 6BA
E-mail: stuart.sc...@e-mis.com
Did you pack this up into a job jar?
Lars
On Thu, Jan 27, 2011 at 8:10 AM, Stuart Scott wrote:
> Hi Lars,
>
> Thanks for your reply.
> Yes, I've got this in my code as below... (I'm new to MapReduce, so I'm
> probably doing something silly.)
>
> Stuart
>
Hi,
Has anyone come across the error below? Any ideas how to resolve this?
Regards
Stuart
Starting Job
11/01/27 06:37:42 WARN mapred.JobClient: No job jar file set. User
classes may not be found. See JobConf(Class) or JobConf#setJar(String).
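That warning means Hadoop does not know which jar holds the user classes, so the mapper/reducer later fail with ClassNotFoundException on the task nodes. The usual fix is to call setJarByClass() in the driver and submit with `hadoop jar myjob.jar`. A sketch; the class and job names are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "my-job");
        // Tell Hadoop which jar to ship to the task nodes: the one that
        // contains this class. Without it, "No job jar file set" appears.
        job.setJarByClass(MyDriver.class);
        // ... set mapper, reducer, input and output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```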
Hi,
Has anyone connected successfully to Hive from a Windows client PC using
JDBC?
If so, how did you manage it? (I currently get : No suitable driver
found for jdbc:hive://mainpc:1/default)
Any pointers would be really appreciated.
Regards
Stuart Scott
System Architect
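"No suitable driver found" usually means the Hive driver class was never loaded: the hive-jdbc jar and its Hadoop/Thrift dependencies must be on the Windows client's classpath, and the driver registered before connecting. A sketch against Hive's 0.x-era driver; the hostname and port are placeholders (HiveServer's default port is 10000):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcCheck {
    public static void main(String[] args) throws Exception {
        // Load and register the Hive JDBC driver before connecting.
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:hive://mainpc:10000/default", "", ""); // placeholders
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SHOW TABLES");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        con.close();
    }
}
```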
MalformedURLException: unknown protocol: hdfs
Would we need the full Hadoop on the Client (which isn't part of the
cluster)?
Will this actually work?
Any advice would be gratefully received.
Regards
Stuart Scott
System Architect
emis intellectual technology
Fulford Grange, Micklefiel
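On the "unknown protocol: hdfs" question: java.net.URL only learns the hdfs:// scheme once Hadoop's stream handler factory is registered, and the client only needs the Hadoop jars and config on its classpath, not a full cluster install. A sketch; the NameNode host, port, and file path are placeholders:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;

public class HdfsUrlRead {
    public static void main(String[] args) throws Exception {
        // Register Hadoop's handler once per JVM, before the first
        // hdfs:// URL is constructed; it can only be set once.
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
        URL url = new URL("hdfs://namenode:9000/tmp/sample.txt"); // placeholder
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}
```

Using org.apache.hadoop.fs.FileSystem directly avoids the URL handler altogether and is the more common route.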