[truncated CPU statistics snippet: per-core usage columns near 0.00, ~99.96% idle]
I suppose it's caused by high load but I don't have any proof :( Is there a known bug about that? I had a similar issue with Cassandra that forced me to upgrade to a Linux kernel > 3.0.
thanks.
--
Cyril SCETBON
[...] is an error; it can't come back after that ...
thanks
On 5/29/12 5:17 PM, Cyril Scetbon wrote:
Hi,
I've installed HBase on the following configuration:
12 x (rest hbase + regionserver hbase + datanode hadoop)
2 x (zookeeper + hbase master)
1 x (zookeeper + hbase master + namenode hadoop)
[...] check again on one node. On which node do you think I should check for GC issues?
--
Cyril SCETBON
[...] (something like pastebin.com) and we could see more evidence of the issue.
J-D
On Thu, May 31, 2012 at 2:09 PM, Cyril Scetbon wrote:
On 5/31/12 11:00 PM, Jean-Daniel Cryans wrote:
What I'm seeing looks more like GC issues. Start reading this:
http://hbase.apache.org/book.html#gc
J-D
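For reference, a minimal sketch of the kind of settings that GC section discusses, as lines in conf/hbase-env.sh; the flags are standard CMS-era HotSpot options and the log path is an assumption, not this cluster's actual config:

# conf/hbase-env.sh -- sketch only, assuming a CMS-era JVM
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
# Enable GC logging first, to confirm long pauses before tuning further
export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/hbase/gc-hbase.log"

A pause in that log longer than the ZooKeeper session timeout is what gets a regionserver declared dead.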
Hi,
Really [...]
Added=hb-d2,60020,1338553126560 to dead servers, submitted shutdown
handler to be executed, root=false, meta=false
2012-06-01 13:32:20,048 INFO
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Splitting
logs for hb-d2,60020,1338553126560
On 6/1/12 3:25 PM, Cyril Scetbon wrote:
[...] cluster balanced), and it's not related to the process of looking after dead nodes.
The nodes are monitored by ZooKeeper, the timeout is by default 180
seconds (setting: zookeeper.session.timeout)
On Fri, Jun 1, 2012 at 4:40 PM, Cyril Scetbon wrote:
I've another regionserver (hb-d2) that [...]
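As a reference, a sketch of that setting in hbase-site.xml; the value is in milliseconds, so 180000 matches the 180-second default mentioned above:

<property>
  <name>zookeeper.session.timeout</name>
  <value>180000</value>
</property>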
I forgot to say that we're using Amazon EC2 instances. Maybe there's a known issue?
On 5/29/12 5:17 PM, Cyril Scetbon wrote: [...]
[...] Table already exists: ise!
If I try to create another table it works !!!
hbase(main):003:0> create 't1', 'f1', 'f2', 'f3'
0 row(s) in 1.0520 seconds
hbase(main):005:0> disable 't1'
0 row(s) in 1.1030 seconds
hbase(main):006:0> drop 't1'
0 row(s) in 1.2290 seconds
Any explanation?
thanks
--
Cyril SCETBON
On 6/18/12 12:03 PM, Laxman wrote:
Hi Cyril, Did you delete ZooKeeper data as well?
no
--
Regards,
Laxman
--
Cyril SCETBON
[...] still there. And I'm not able to delete it anymore. Like the issue you are facing.
JM
2012/6/18, Cyril Scetbon:
[...]
I've just done it and it works :)
So ZooKeeper does not check whether its metadata is consistent when it reboots :(
Thanks!
On 6/18/12 12:03 PM, Laxman wrote:
Hi Cyril, Did you delete ZooKeeper data as well?
--
Regards,
Laxman
--
Cyril SCETBON
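For anyone hitting the same thing, a sketch of clearing HBase's stale ZooKeeper state with zkCli.sh; the quorum host is hypothetical and /hbase is the default zookeeper.znode.parent. HBase should be stopped first, since this wipes all of its znodes:

$ ./zkCli.sh -server zk1:2181
[zk: zk1:2181(CONNECTED) 0] rmr /hbase
[zk: zk1:2181(CONNECTED) 1] quit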
It seems that reducing the number of versions kept per column family enables the freeing of space.
Regards
Cyril SCETBON
On Jun 27, 2012, at 8:03 PM, Amandeep Khurana wrote:
> Cyril,
>
> Did you notice the space on the hbase directory in HDFS change at all? It
> takes time to [...]
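A sketch of that change in the hbase shell; the table and family names are placeholders. The space only comes back once a major compaction has rewritten the store files, which is presumably the delay referred to above:

hbase(main):001:0> disable 'mytable'
hbase(main):002:0> alter 'mytable', NAME => 'cf', VERSIONS => 1
hbase(main):003:0> enable 'mytable'
hbase(main):004:0> major_compact 'mytable'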
[...] correctly :(
I suppose some data has not been flushed, and it's not really important for me.
Is there a way to fix these errors even if I lose data?
thanks
Cyril SCETBON
[...] The filesystem under path '/hbase' is HEALTHY
Cyril SCETBON
On Jul 5, 2012, at 7:59 PM, Jean-Daniel Cryans wrote:
> Does this file really exist in HDFS?
>
> hdfs://hb-zk1:54310/hbase/.logs/hb-d12,60020,1341429679981-splitting/hb-d12%2C60020%2C1341429679981.1341430649711
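A sketch of the fsck invocations relevant here; -delete discards files with missing blocks, so it matches the "even if I lose data" case, but it should be a last resort:

$ hadoop fsck /hbase -files -blocks -locations
$ hadoop fsck /hbase -delete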
[...]
Datanodes might not have reported blocks completely. Will retry for 1 times
cat: Could not obtain the last block locations.
I'm using Hadoop 2.0 from the Cloudera package (CDH4) with HBase 0.92.1.
Regards
Cyril SCETBON
On Jul 5, 2012, at 11:44 PM, Jean-Daniel Cryans wrote:
> Interesting [...]
dfs.datanode.max.xcievers is set to 4096 and the soft limit of nofile is set to 32768 (it is the default in the package).
However, when I log in as hdfs it's set to 1024, and I can't find whether a higher value is set anywhere...
Cyril SCETBON
On Jul 6, 2012, at 12:19 PM, N Keywal wrote: [...]
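For comparison, a sketch of where those two limits usually live; the values are the ones quoted above, and the historically misspelled property name is the real key in Hadoop of that era:

<!-- hdfs-site.xml -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>

# /etc/security/limits.conf (applied at login via pam_limits)
hdfs  -  nofile  32768

Running ulimit -n in a fresh login shell as hdfs shows what actually took effect; a plain su without a login shell can skip pam_limits, which may explain the 1024.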
Here are the log files you asked for :
http://pastebin.com/xRBuQdNS < hbase-master.log
http://pastebin.com/u6WYQT6R < hdfs-namenode.log
If you find the fix to this damn issue I'll be delighted!
Thanks
Cyril SCETBON
On Jul 5, 2012, at 11:44 PM, Jean-Daniel Cryans wrote:
> [...]
[...] in the Hadoop filesystem:
http://pastebin.com/RbcLdbcs
Regards
Cyril SCETBON
On Jul 6, 2012, at 8:17 PM, Cyril Scetbon wrote:
> [...]
thanks
Cyril SCETBON
On Jul 6, 2012, at 8:40 PM, Cyril Scetbon wrote:
> As you can see in the master log, region servers are in charge of splitting
> log files (not found I suppose) and it's retried several times (I didn't
> check if it's always redone) on different region servers [...]
A network issue?? It's weird, because reads/writes are working well and not raising errors (I'll double-check it).
Regards
Cyril SCETBON
On Jul 9, 2012, at 10:55 PM, Jean-Daniel Cryans wrote:
> We've been running with distributed splitting here for >6 months and
> never [...]
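To rule distributed splitting out, one could fall back to master-local splitting; a sketch for hbase-site.xml, assuming the 0.92-era key (default true):

<property>
  <name>hbase.master.distributed.log.splitting</name>
  <value>false</value>
</property>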
[...] column=core:value,
timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A
The only thing I can add is that my HBase server's version is 0.94.0 and that I use version 0.92.0 of the hbase jar.
Any idea why it doesn't work?
thanks
Cyril SCETBON
I've given the values returned by the scan 'table' command in the hbase shell in my first email.
Regards
Cyril SCETBON
On Jul 30, 2012, at 12:50 AM, Himanshu Vashishtha wrote:
> And also, what do your cell values look like?
>
> Himanshu
>
> On Sun, Jul 29, 2012 at 3: [...]
[...] at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
Regards
Cyril SCETBON
On Jul 29, 2012, at 11:54 PM, yuzhih...@gmail.com wrote:
> Can you use 0.94 for your client jar?
>
> Please show us the NullPointerException stack.
>
> Thanks
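A sketch of that fix for a Maven build, using the monolithic hbase artifact of the 0.94 era; adjust to however the client jar is actually pulled in:

<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.94.0</version>
</dependency>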
Thanks, it's really better!
I've read that by default it supports only Long values; that's why I was using a null ColumnInterpreter.
Regards.
Cyril SCETBON
On Jul 30, 2012, at 5:56 PM, Himanshu Vashishtha wrote:
> On Mon, Jul 30, 2012 at 6:55 AM, Cyril Scetbon wrote:
> [...]
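Putting the thread's conclusion together, a sketch in Java of the working call: a real LongColumnInterpreter instead of null, against the core:value column from the scan output above. The table name is a placeholder, and it assumes the AggregateImplementation coprocessor is loaded on the regionservers (hbase.coprocessor.region.classes):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class SumExample {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    AggregationClient aggregationClient = new AggregationClient(conf);
    Scan scan = new Scan();
    // Restrict the scan to one column; the family must not be null
    scan.addColumn(Bytes.toBytes("core"), Bytes.toBytes("value"));
    // Cell values are 8-byte longs (e.g. \x00...\x0A above), which
    // LongColumnInterpreter knows how to decode
    long sum = aggregationClient.sum(Bytes.toBytes("mytable"),
        new LongColumnInterpreter(), scan);
    System.out.println("sum = " + sum);
  }
}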
Unfortunately I can't remember/find it :( and I see in AggregationClient's javadoc that "Column family can't be null", so I suppose I should have read it first!
Thanks again
Cyril SCETBON
On Jul 30, 2012, at 7:30 PM, Himanshu Vashishtha wrote:
> [...]