I've given the values returned by the scan 'table' command in the hbase shell in
my first email.
Regards
Cyril SCETBON
On Jul 30, 2012, at 12:50 AM, Himanshu Vashishtha hvash...@cs.ualberta.ca
wrote:
Also, what do your cell values look like?
Himanshu
On Sun, Jul 29, 2012 at 3:54 PM
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
Regards
Cyril SCETBON
On Jul 29, 2012, at 11:54 PM, yuzhih...@gmail.com wrote:
Can you use 0.94 for your client jar?
Please show us the NullPointerException stack trace.
Thanks
On Jul 29, 2012, at 2:49
Thanks, it's much better now!
I'd read that by default it supports only Long values, which is why I was using
a null ColumnInterpreter.
Regards.
Cyril SCETBON
On Jul 30, 2012, at 5:56 PM, Himanshu Vashishtha hvash...@cs.ualberta.ca
wrote:
On Mon, Jul 30, 2012 at 6:55 AM, Cyril Scetbon
Unfortunately I can't remember/find it :( and I see in AggregationClient's
javadoc that the column family can't be null, so I suppose I should have read
that first!
Thanks again
Cyril SCETBON
On Jul 30, 2012, at 7:30 PM, Himanshu Vashishtha hvash...@cs.ualberta.ca
wrote:
We should fix
The only thing I can add is that my hbase server's version is 0.94.0 and that I
use version 0.92.0 of the hbase client jar.
Any idea why it doesn't work?
thanks
Cyril SCETBON
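A mismatched client jar is the first thing to rule out here. A minimal sketch of the check (the two version strings are the ones from this thread; in practice read each from `hbase version` on the client and on the cluster):

```shell
# Sketch: flag an HBase client/server version mismatch before chasing RPC errors.
client_ver="0.92.0"   # version of the hbase jar on the client classpath
server_ver="0.94.0"   # version the cluster reports
if [ "$client_ver" != "$server_ver" ]; then
  echo "client/server mismatch: $client_ver vs $server_ver -- use a matching client jar"
fi
```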
A network issue? It's weird, because reads/writes are working well and not
raising errors (I'll double-check it).
Regards
Cyril SCETBON
On Jul 9, 2012, at 10:55 PM, Jean-Daniel Cryans wrote:
We've been running with distributed splitting here for 6 months and
never had this issue. Also
SCETBON
On Jul 6, 2012, at 8:40 PM, Cyril Scetbon wrote:
As you can see in the master log, region servers are in charge of splitting
log files (not found I suppose) and it's retried several times (I didn't
check if it's always redone) on different region servers. You can for
example follow
reported blocks completely. Will retry for 1 times
cat: Could not obtain the last block locations.
I'm using hadoop 2.0 from Cloudera package (CDH4) with hbase 0.92.1
Regards
Cyril SCETBON
On Jul 5, 2012, at 11:44 PM, Jean-Daniel Cryans wrote:
Interesting... Can you read the file? Try a hadoop dfs
dfs.datanode.max.xcievers is set to 4096 and the soft limit of nofile is set to
32768 (it is the default in the package)
However, when I log in as hdfs it's set to 1024, and I can't find whether it's
set any higher somewhere...
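That 1024 is the usual culprit: limits.conf is only applied on PAM logins, so a datanode started at boot may never see the 32768. A sketch of where each knob lives and how to check the running process (paths assume a Debian/CDH-style layout, adjust to your install):

```shell
# Sketch: the three places to look for the open-files ceiling.
# 1) dfs.datanode.max.xcievers lives in hdfs-site.xml on each datanode.
# 2) The nofile limit for the hdfs user, e.g. in /etc/security/limits.conf:
#      hdfs  soft  nofile  32768
#      hdfs  hard  nofile  32768
ulimit -Sn                                   # soft limit of the current shell
# 3) What the *running* datanode actually got (pid lookup is illustrative):
# grep 'open files' /proc/$(pgrep -f DataNode)/limits
```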
Cyril SCETBON
On Jul 6, 2012, at 12:19 PM, N Keywal wrote:
Hi Cyril
Here are the log files you asked for :
http://pastebin.com/xRBuQdNS hbase-master.log
http://pastebin.com/u6WYQT6R hdfs-namenode.log
If you find the fix for this damn issue, I'll be delighted!
Thanks
Cyril SCETBON
On Jul 5, 2012, at 11:44 PM, Jean-Daniel Cryans wrote:
Interesting... Can
filesystem :
http://pastebin.com/RbcLdbcs
Regards
Cyril SCETBON
On Jul 6, 2012, at 8:17 PM, Cyril Scetbon wrote:
Here are the log files you asked for :
http://pastebin.com/xRBuQdNS hbase-master.log
http://pastebin.com/u6WYQT6R hdfs-namenode.log
If you find the fix
suppose some data has not been flushed, and it's not really important for me.
Is there a way to fix these errors even if I lose data?
thanks
Cyril SCETBON
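Since the data is expendable, one way out is to let fsck quarantine or drop the broken files. A sketch, assuming the errors come from files with missing blocks under /hbase (this does lose whatever those files held):

```shell
# Sketch: find, then discard, HDFS files with missing blocks.
hadoop fsck /hbase -files -blocks -locations   # identify the corrupt files
hadoop fsck /hbase -move     # move them to /lost+found first (the safer step)
hadoop fsck /hbase -delete   # or delete them outright once you're sure
```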
path '/hbase' is HEALTHY
Cyril SCETBON
On Jul 5, 2012, at 7:59 PM, Jean-Daniel Cryans wrote:
Does this file really exist in HDFS?
hdfs://hb-zk1:54310/hbase/.logs/hb-d12,60020,1341429679981-splitting/hb-d12%2C60020%2C1341429679981.1341430649711
If so, did you run fsck
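A quick way to answer both questions at once; a sketch (the full file name from the log is shortened to the .logs directory here):

```shell
# Sketch: check that the WAL under the -splitting directory exists, then fsck it.
hadoop dfs -ls /hbase/.logs/                            # do the -splitting dirs exist?
hadoop fsck /hbase/.logs -openforwrite -files -blocks   # any blocks missing?
```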
It seems that reducing the number of versions kept per column family enables
the freeing of space.
Regards
Cyril SCETBON
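For reference, lowering VERSIONS only takes effect on disk once a major compaction rewrites the store files. A sketch of forcing that and watching the result (table name 't1' is illustrative):

```shell
# Sketch: trigger a major compaction, then watch the table's HDFS footprint.
echo "major_compact 't1'" | hbase shell
hadoop dfs -dus /hbase/t1      # total size of the table's directory
```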
On Jun 27, 2012, at 8:03 PM, Amandeep Khurana wrote:
Cyril,
Did you notice the space on the hbase directory in HDFS change at all? It
takes time to complete the major
create 't1', 'f1', 'f2', 'f3'
0 row(s) in 1.0520 seconds
hbase(main):005:0> disable 't1'
0 row(s) in 1.1030 seconds
hbase(main):006:0> drop 't1'
0 row(s) in 1.2290 seconds
Any explanation?
thanks
--
Cyril SCETBON
On 6/18/12 12:03 PM, Laxman wrote:
Hi Cyril, Did you delete ZooKeeper data as well?
no
--
Regards,
Laxman
--
Cyril SCETBON
to delete it anymore. Like the issue you are facing.
JM
2012/6/18, Cyril Scetbon cyril.scet...@free.fr:
On 6/18/12 12:03 PM, Laxman wrote:
Hi Cyril, Did you delete ZooKeeper data as well?
no
--
Regards,
Laxman
--
Cyril SCETBON
--
Cyril SCETBON
I've just done it and it works :)
So ZooKeeper does not check whether its metadata is still valid when it
restarts :(
thanks !
On 6/18/12 12:03 PM, Laxman wrote:
Hi Cyril, Did you delete ZooKeeper data as well?
--
Regards,
Laxman
--
Cyril SCETBON
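For anyone hitting the same thing, a sketch of the cleanup that worked here (the quorum host is illustrative; only do this with HBase fully stopped, since HBase rebuilds its ZooKeeper state on the next master start):

```shell
# Sketch: remove HBase's znode so its ZooKeeper state is recreated from scratch.
hbase zkcli                          # opens a ZK shell on HBase's own quorum
#   rmr /hbase                       # run inside that session
# or with ZooKeeper's client directly:
zkCli.sh -server hb-zk1:2181 rmr /hbase
```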
, the timeout is by default 180
seconds (setting: zookeeper.session.timeout)
--
Cyril SCETBON
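The corresponding hbase-site.xml fragment, shown here via a heredoc; 30000 ms is purely an example value, the 180 s mentioned above is the default:

```shell
# Config sketch: override zookeeper.session.timeout (milliseconds) in
# hbase-site.xml on the regionservers.
cat <<'EOF'
<property>
  <name>zookeeper.session.timeout</name>
  <value>30000</value>
</property>
EOF
```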
-d2,60020,1338553126560 to dead servers, submitted shutdown
handler to be executed, root=false, meta=false
2012-06-01 13:32:20,048 INFO
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler: Splitting
logs for hb-d2,60020,1338553126560
On 6/1/12 3:25 PM, Cyril Scetbon wrote:
I've added
that ...
thanks
On 5/29/12 5:17 PM, Cyril Scetbon wrote:
Hi,
I've installed hbase on the following configuration:
12 x (rest hbase + regionserver hbase + datanode hadoop)
2 x (zookeeper + hbase master)
1 x (zookeeper + hbase master + namenode hadoop)
OS used is ubuntu lucid (10.04)
The issue
. On
which node do you think I should check GC issue ?
--
Cyril SCETBON
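On the GC question: the node to check first is the regionserver that lost its session. A quick sketch (jps/jstat ship with the JDK; the pid lookup pattern is illustrative):

```shell
# Sketch: sample GC activity on the regionserver JVM.
pid=$(jps | awk '/HRegionServer/ {print $1}')
jstat -gcutil "$pid" 1000 10   # GC utilization every second, 10 samples
# Rapid growth in the FGC/FGCT columns means full-GC pauses, which can
# outlast the ZooKeeper session timeout.
```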
0.00 0.00 0.01 99.96
I suppose it's caused by a high load, but I don't have any proof :( Is
there a known bug about that? I had a similar issue with Cassandra that
forced me to upgrade to Linux kernel 3.0.
thanks.
--
Cyril SCETBON
.
--
Cyril SCETBON
by a high load, but I don't have any proof :( Is
there a known bug about that? I had a similar issue with Cassandra that
forced me to upgrade to Linux kernel 3.0.
thanks.
--
Cyril SCETBON