Digging into this bug a bit, I think I have a feel for what's happening, but
I want to check.
It seems that since the MR job is writing to two HBase tables, it's using
two instances of TableOutputFormat in the same thread, which means two
instances of HTable in the same thread. From previous discus
Finally got ttransport.TBufferedTransport to work; however, the same error
persists.
On 21/05/2011, at 3:08 PM, Mark Jarecki wrote:
> I've tried, unsuccessfully, to get ttransport.TBufferedTransport to work.
>
>
> On 21/05/2011, at 2:30 PM, Stack wrote:
>
>> If you use a different transport
I've tried, unsuccessfully, to get ttransport.TBufferedTransport to work.
On 21/05/2011, at 2:30 PM, Stack wrote:
> If you use a different transport -- is that possible w/ node.js --
> does it work then?
> St.Ack
>
> On Fri, May 20, 2011 at 8:56 PM, Mark Jarecki wrote:
>> Just an addition,
>>
If you use a different transport -- is that possible w/ node.js --
does it work then?
St.Ack
On Fri, May 20, 2011 at 8:56 PM, Mark Jarecki wrote:
> Just an addition,
>
> I notice that when I restart the Thrift server, the exception no longer
> persists.
>
> On 21/05/2011, at 1:51 PM, Mark Jarecki wrote:
Just an addition,
I notice that when I restart the Thrift server, the exception no longer
persists.
On 21/05/2011, at 1:51 PM, Mark Jarecki wrote:
> Hi there,
> Just experimenting with getting HBase 0.90.3 working with Node.js using
> Thrift 0.6 (nonblocking & framed transport) and node-thrift.
Hi there,
Just experimenting with getting HBase 0.90.3 working with Node.js using Thrift
0.6 (nonblocking & framed transport) and node-thrift.
I was testing exceptions by mutating a column that didn't exist:
var mutations = [];
mutations.push(new ttypes.Mutation({column: "colum
On Fri, May 20, 2011 at 4:16 PM, Something Something
wrote:
> On a side note, Mozilla's link is broken:
> https://github.com/xstevens/akela/blob/master/src/java/com/mozilla/hadoop/Backup.java
>
It looks like it moved here:
https://github.com/mozilla-metrics/akela/blob/master/src/java/com/mozilla/h
Sweet.
org.apache.hadoop.hbase.mapreduce.Export
& org.apache.hadoop.hbase.mapreduce.Import are working well. Meets our
needs for now.
On a side note, Mozilla's link is broken:
https://github.com/xstevens/akela/blob/master/src/java/com/mozilla/hadoop/Backup.java
Anyway, thanks for the quick repl
Yes, that's what it seems. I've opened a Pig JIRA for it:
https://issues.apache.org/jira/browse/PIG-2085
On Thu, May 19, 2011 at 1:31 PM, Jean-Daniel Cryans wrote:
> Your attachment didn't make it, it rarely does on the mailing lists.
> I suggest you use a gist.github or a pastebin.
>
> Regardi
Here's an overview of what you can do
http://blog.sematext.com/2011/03/11/hbase-backup-options/
J-D
On Fri, May 20, 2011 at 2:18 PM, Something Something
wrote:
> Looking for a reliable Backup/Restore solution. Is Cluster Replication (
> http://hbase.apache.org/replication.html) the only recommended way?
Looking for a reliable Backup/Restore solution. Is Cluster Replication (
http://hbase.apache.org/replication.html) the only recommended way? We
don't have the extra infrastructure needed for replication at this client. Just
creating a demo/prototype application for them.
Is there a utility that wi
Are you running at INFO-level logging, Jack? Can you pastebin more log
context? I'd like to take a look.
Thanks,
St.Ack
On Thu, May 19, 2011 at 11:36 PM, Jack Levin wrote:
> Thanks, now with that value set to "2", we still see slow master recovery
> of logs after a DN death:
>
> 2011-05-19 23:34:55,
Ok.
This is why I asked you earlier about how you were generating your user ids.
You're not going to get a good distribution.
First, random numbers usually aren't that random.
How many users do you want to simulate?
Try this...
Create n number of type 5 uuids. These are uuids that have been generat
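As a rough sketch of the suggestion above: type 5 UUIDs are name-based (SHA-1), so the same input always maps to the same, well-distributed id. The JDK's java.util.UUID only ships the type 3 (MD5) variant via nameUUIDFromBytes, so the minimal implementation below builds the SHA-1 version by hand (the namespace and "user-N" names are just illustrative choices, not anything from the original thread):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.UUID;

public class Uuid5 {
    // Generate a type 5 (name-based, SHA-1) UUID from a namespace UUID and a name,
    // following the RFC 4122 construction.
    public static UUID uuid5(UUID namespace, String name) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            // Hash the namespace UUID's raw big-endian bytes followed by the name's bytes.
            byte[] ns = new byte[16];
            longToBytes(namespace.getMostSignificantBits(), ns, 0);
            longToBytes(namespace.getLeastSignificantBits(), ns, 8);
            sha1.update(ns);
            sha1.update(name.getBytes(StandardCharsets.UTF_8));
            byte[] h = sha1.digest();              // 20 bytes; keep the first 16
            h[6] = (byte) ((h[6] & 0x0f) | 0x50);  // set version to 5
            h[8] = (byte) ((h[8] & 0x3f) | 0x80);  // set RFC 4122 variant
            return new UUID(bytesToLong(h, 0), bytesToLong(h, 8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);    // SHA-1 is always present in the JDK
        }
    }

    private static void longToBytes(long v, byte[] out, int off) {
        for (int i = 0; i < 8; i++) out[off + i] = (byte) (v >>> (56 - 8 * i));
    }

    private static long bytesToLong(byte[] in, int off) {
        long v = 0;
        for (int i = 0; i < 8; i++) v = (v << 8) | (in[off + i] & 0xff);
        return v;
    }

    public static void main(String[] args) {
        // RFC 4122 DNS namespace, used here only for illustration.
        UUID dns = UUID.fromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8");
        UUID a = uuid5(dns, "user-1");
        UUID b = uuid5(dns, "user-1");
        UUID c = uuid5(dns, "user-2");
        System.out.println(a.version());  // 5
        System.out.println(a.equals(b));  // true: same name, same UUID
        System.out.println(a.equals(c));  // false: different names differ
    }
}
```

Because the ids are deterministic, a simulation can regenerate the same n user ids on every run while still spreading them evenly across the key space.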
I've created https://issues.apache.org/jira/browse/HBASE-3908 for
Lucian; we'll submit the patch on Monday.
On 05/20/2011 06:41 PM, Lars George wrote:
Yes please, plus a patch would be awesome :)
On Fri, May 20, 2011 at 5:24 PM, Lucian Iordache
wrote:
Hi guys,
I've just found a problem with the class TableSplit.
Hi Himanish,
This is a phenomenon I've seen before, but not in the context of HBase.
We had web-service calls with sub-second response times on desktops. When
moving to a blade environment we had spikes of up to 20 seconds.
In that case it turned out that we had verbose logging turned on. The dev
Not sure why this didn't make the list...
Sent from a remote device. Please excuse any typos...
Mike Segel
On May 20, 2011, at 1:15 AM, "Michel Segel" wrote:
> Ok.
> This is why I asked you earlier about how you were generating your user ids.
>
> You're not going to get a good distribution.
>
Yes please, plus a patch would be awesome :)
On Fri, May 20, 2011 at 5:24 PM, Lucian Iordache
wrote:
> Hi guys,
>
> I've just found a problem with the class TableSplit. It implements "equals",
> but it does not also implement hashCode, as it should.
> I've discovered it by trying to use a HashSet of TableSplit's.
Hi guys,
I've just found a problem with the class TableSplit. It implements "equals",
but it does not also implement hashCode, as it should.
I've discovered it by trying to use a HashSet of TableSplit's, and I've
noticed that some duplicate splits are added to the set.
The only option I have
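The duplicate-splits symptom described above is easy to reproduce with any class that overrides equals but not hashCode. The toy class below is a stand-in for illustration only, not HBase's actual TableSplit:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class SplitDemo {
    // Stand-in for the buggy class: equals is overridden, hashCode is not.
    static class BrokenSplit {
        final String startRow;
        BrokenSplit(String startRow) { this.startRow = startRow; }
        @Override public boolean equals(Object o) {
            return o instanceof BrokenSplit && ((BrokenSplit) o).startRow.equals(startRow);
        }
        // Missing hashCode(): equal objects land in different hash buckets,
        // so HashSet never even calls equals on them.
    }

    // Fixed version: hashCode is consistent with equals.
    static class FixedSplit extends BrokenSplit {
        FixedSplit(String startRow) { super(startRow); }
        @Override public int hashCode() { return Objects.hashCode(startRow); }
    }

    public static void main(String[] args) {
        Set<BrokenSplit> broken = new HashSet<>();
        broken.add(new BrokenSplit("rowA"));
        broken.add(new BrokenSplit("rowA")); // the "duplicate" slips in

        Set<FixedSplit> fixed = new HashSet<>();
        fixed.add(new FixedSplit("rowA"));
        fixed.add(new FixedSplit("rowA")); // rejected as a duplicate

        System.out.println(broken.size()); // 2
        System.out.println(fixed.size());  // 1
    }
}
```

This is exactly the Object.hashCode contract: two objects that are equal per equals must return the same hashCode, or every hash-based collection silently misbehaves.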
This could happen via the following steps, though with low probability:
(Suppose the cluster nodes are named RS1/RS2/HM, and there are more than 10,000
regions in the cluster.)
1. The root region was opened on RS1.
2. For some reason (maybe the HDFS process became abnormal), RS1 aborted.
3. ServerShutd