On Sat, Jul 7, 2012 at 2:58 AM, Sever Fundatureanu
fundatureanu.se...@gmail.com wrote:
Also does anybody know what is the flow in the system when a coprocessor
from one RS makes a Put call with a row key which falls on another RS? I.e.
do the Region servers communicate directly between each
Hi,
When we try to add a value to a CF which does not exist on a table, we
are getting the error below. I think this is not really giving the
right information about the issue.
Would it not be better to provide an exception like
UnknownColumnFamilyException?
JM
Depending on your setup (not MapR), you can also raise your allowed failed
volumes; this will let you keep your nodes up until you are ready to replace
the single bad drive.
On Mon, Jul 9, 2012 at 1:04 AM, M. C. Srivas mcsri...@gmail.com wrote:
On Sun, Jun 24, 2012 at 8:14 PM, Michel Segel
I agree.
On Jul 9, 2012, at 5:25 AM, Jean-Marc Spaggiari jean-m...@spaggiari.org wrote:
Hi,
When we try to add a value to a CF which does not exist on a table, we
are getting the error below. I think this is not really giving the
right information about the issue.
Should it not be
Hi,
We've been tinkering with ideas for implementing a secondary index.
One of the ideas is based on concatenating three meaningful fields into a long:
int, short (2 bytes), short. This long will be used as the timestamp when issuing a
put to the secondary index table.
This means a put's timestamp
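The packing described above can be sketched in plain Java; the field and class names below are illustrative, not from the thread, and it assumes the int field is non-negative (a negative packed value would make a dubious HBase timestamp):

```java
// Sketch: pack an int and two shorts into one 8-byte long, suitable for
// use as an HBase cell timestamp. Names are illustrative assumptions.
public class TimestampPacker {
    static long pack(int a, short b, short c) {
        // int in the top 32 bits, shorts in the next two 16-bit slots
        return ((long) a << 32) | ((b & 0xFFFFL) << 16) | (c & 0xFFFFL);
    }

    static int unpackInt(long packed) { return (int) (packed >>> 32); }

    static short unpackB(long packed) { return (short) (packed >>> 16); }

    static short unpackC(long packed) { return (short) packed; }

    public static void main(String[] args) {
        long ts = pack(20120709, (short) 42, (short) 7);
        System.out.println(unpackInt(ts)); // 20120709
        System.out.println(unpackB(ts));   // 42
        System.out.println(unpackC(ts));   // 7
    }
}
```

The round-trip methods are there so a reader of the index table can recover the three fields from the cell timestamp.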
Hi,
I believe that I am hitting an identified issue, HBASE-5780. I'm using 0.92.1.
It says that it is fixed in 0.92.2, 0.94.0, and 0.96.0. The download page
does not have 0.92.2 or 0.96.0? I'm using the hbase-security distribution and
just wondering which version I should target if the
This may beg the question ...
Why do you not know the CF?
Your table schemas only consist of tables and CFs. So you should know them at
the start of your job or m/r Mapper.setup();
On Jul 9, 2012, at 7:25 AM, Jean-Marc Spaggiari wrote:
Hi,
When we try to add a value to a CF which does
In my case it was a coding issue: I used the wrong final byte array to
access the CF. So I agree, the CFs are well known since you create the
table based on them. But maybe you have added some other CFs later and
something went wrong?
It's just that based on the exception received, there is no
Jean-Marc,
I think you misunderstood.
At run time, you can query HBase to find out the table schema and its column
families.
While I agree that you are seeing poorly written exceptions, IMHO it's easier to
avoid the problem in the first place.
In a Map/Reduce, inside the mapper class, you
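Michel's point, checking the schema up front so a typo fails fast with a readable message, can be sketched like this. The FamilyCheck class is hypothetical; in a real job the known-families set would be filled once in Mapper.setup(), e.g. from the table descriptor via the admin API, but here it is a plain Set so the check itself is self-contained:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch: validate a column family name before issuing Puts, so a typo
// fails fast client-side instead of producing a cryptic server error.
// FamilyCheck is an illustrative helper, not an HBase API.
public class FamilyCheck {
    private final Set<String> knownFamilies;

    public FamilyCheck(Set<String> knownFamilies) {
        this.knownFamilies = knownFamilies;
    }

    public void require(String family) {
        if (!knownFamilies.contains(family)) {
            throw new IllegalArgumentException(
                "Unknown column family '" + family + "'; table has: " + knownFamilies);
        }
    }

    public static void main(String[] args) {
        FamilyCheck check = new FamilyCheck(new HashSet<>(Arrays.asList("cf1", "cf2")));
        check.require("cf1");     // ok
        try {
            check.require("cf3"); // typo: throws with a readable message
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```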
On 07/09/2012 04:39 AM, xkwang bruce wrote:
hi all:
when I restart my zookeeper, the following error comes up.
ERROR org.apache.zookeeper.server.persistence.FileTxnSnapLog: Parent /hbase
missing for /hbase/master
ERROR org.apache.zookeeper.server.quorum.QuorumPeer: Unable to load
database on
On 07/09/2012 09:34 AM, Tony Dean wrote:
Hi,
I believe that I am hitting an identified issue, HBASE-5780. I'm
using 0.92.1. It says that it is fixed in 0.92.2, 0.94.0, and 0.96.0.
That's correct.
The download page does not have 0.92.2 or 0.96.0?
It seems that it is working normally, at least
Sorry I did not understand your question. Are you planning to use the
concatenated long as the rowkey to your secondary table?
Best Regards,
Sonal
Crux: Reporting for HBase https://github.com/sonalgoyal/crux
Nube Technologies http://www.nubetech.co
http://in.linkedin.com/in/sonalgoyal
On
Hello:
I'd like to get advice on the below strategy of decreasing the
ipc.socket.timeout configuration on the HBase Client side ... has
anyone tried this? Has anyone had any issues with configuring this
lower than the default 20s?
Thanks,
--Suraj
On Mon, Jul 2, 2012 at 5:51 PM, Suraj Varma
Hi Marcos,
What I am saying is that the download site has the following links:
hbase-0.90.5/
hbase-0.90.6/
hbase-0.92.0/
hbase-0.92.1/
hbase-0.94.0/
stable/ which points to 0.92.1
I can get 0.92.2 from a Jenkins build, but should I? Why is it not listed on the
download page? And it seems as
Apache HBase 0.92.2 has not yet been released. You can consider using
0.94.0 (or 0.94.1, it is coming very soon), or perhaps just back-port
the fix onto your local copy of HBase and rebuild it in the meantime.
There was a vote called on 0.92.2's release earlier in June but it
hasn't been followed
Hi all,
When the map reduce program is executed, the regionserver gets closed
automatically. See the error below
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException:
Failed 15 actions: NotServingRegionException: 15 times, servers with
issues: alok:60020,
at
Hi,
What you're describing - the 35 minutes recovery time - seems to match
the code. And it's a bug (still there on trunk). Could you please
create a jira for it? If you have the logs, it's even better.
Lowering the ipc.socket.timeout seems to be an acceptable partial
workaround. Setting it to 10s
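A sketch of what that workaround might look like on the client side, assuming the 10s value discussed (the value is milliseconds; too low a setting can cause spurious timeouts and retries, so test in staging first):

```xml
<!-- hbase-site.xml on the client: lower the IPC socket timeout
     from the default 20s to 10s (value in milliseconds). -->
<property>
  <name>ipc.socket.timeout</name>
  <value>10000</value>
</property>
```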
Can you please attach the output of hbck -details and the output of hadoop
fs -lsr /hbase (you can send directly if you do not want to share that)?
There should not be any problem with using the uber-hbck to resolve this
from the 0.90.6 version.
On Mon, Jul 2, 2012 at 8:38 PM, Suraj Varma
No.
My index is composed of several fields. Some go to the RowKey, some to the
column name, and some - and hence the question - to the timestamp.
Those that go to the timestamp are of types int, short and short, which
together form 8 bytes - the size of the timestamp in HBase.
So my
Hello --
I just wrote this message to the list:
I have done some searching and some places say to use:
$HBASE_HOME/bin/hbase shell --master:hbase2:6000
and some say to use:
$HBASE_HOME/bin/hbase shell --conf path/to/hbase-site.xml
But both fail for me, saying:
LoadError: No such file to load
Bryan,
I forgot to add the link to the new code to grab the uber-hbck
http://archive.cloudera.com/cdh/3/hbase-0.90.6-cdh3u4.tar.gz
On Mon, Jul 9, 2012 at 12:53 PM, Kevin O'dell kevin.od...@cloudera.comwrote:
Can you please attach the output of hbck -details and the output of hadoop
fs -lsr
Hello Harsh,
On Mon, Jul 09, 2012 at 07:14:56AM +0530, Harsh J wrote:
Perhaps the pre-compiled set does not work against the version of libs
in your ArchLinux. We've noticed this to be the case between CentOS 5
and 6 versions too (5 doesn't pick up the Snappy codec for some
reason).
Try
The hbase-daemon.sh script does not ssh back into the host, so it preserves any
environment variables you haven't otherwise set in the hbase-env.sh
file. I guess that did the trick for you.
On Mon, Jul 9, 2012 at 11:14 PM, Arvid Warnecke ar...@nostalgix.org wrote:
Hello Harsh,
On Mon, Jul 09, 2012 at
Maybe you should look at the content of the JVM argument switch
-Djava.library.path (ps -ef | grep hbase, to see the command line). This will
give you a hint on the directories where the .so object is being looked for.
On Jul 9, 2012, at 21:00, Harsh J wrote:
The hbase-daemon.sh does not ssh
Hi,
My cluster started being incredibly slow in the past 2 days.
I've seen many Blocking updates messages in the region server logs, which led me to
believe HDFS creates are the bottleneck.
I ran a small test (hadoop fs -copyFromLocal big3_3Giga.tz.gz /tmp) which
copies a 3.3G file, and I was surprised
Hi Ben,
Your self-discovered solution was not clear to me:
to configure the config files (such as hbase-site) in hbase and then
re-run hbase shell.
Pls clarify what actions constitute "configure the config files in
hbase": Do you mean making certain changes to the local config files, or
He probably means configuring $HBASE_HOME/conf and/or pointing
$HBASE_CONF_DIR locally to the right configs dir.
On Tue, Jul 10, 2012 at 12:06 AM, Stephen Boesch java...@gmail.com wrote:
Hi Ben,
Your self-discovered solution was not clear to me:
to configure the config files (such as
Thanks for the response Kevin, Shrijeet, and Suraj.
I forgot to update the list (sorry!) but the addregion.rb did seem to work.
At first it didn't seem to, and I stepped away to work on something else
for the last week. Looking at the table now, it must have been an async
action because all the
No problem. Glad to hear all is well!
On Mon, Jul 9, 2012 at 3:10 PM, Bryan Beaudreault
bbeaudrea...@hubspot.comwrote:
Thanks for the response Kevin, Shrijeet, and Suraj.
I forgot to update the list (sorry!) but the addregion.rb did seem to work.
At first it didn't seem to, and I stepped
Now that I have a stable cluster, I would like to use YCSB to test
its performance; however, I am a bit confused after reading several
different website postings about YCSB.
1) By default, will YCSB read my hbase-site.xml file or do I have to
copy it into the YCSB conf directory?
On Mon, Jul 9, 2012 at 8:35 PM, Asaf Mesika asaf.mes...@gmail.com wrote:
Hi,
My cluster started being incredibly slow in the past 2 days.
I've seen many Blocking updates messages in the region server logs, which led me to
believe HDFS creates are the bottleneck.
I ran a small test (hadoop fs
Inline.
On Monday, July 9, 2012 at 12:17 PM, registrat...@circle-cross-jn.com wrote:
Now that I have a stable cluster, I would like to use YCSB to test
its performance; however, I am a bit confused after reading several
different website postings about YCSB.
1) By default, will YCSB
Hello,
I wonder, for purging old data, if I'm OK with removing all StoreFiles which
are older than ..., can I do that? To me it seems like this can be a
very effective way to remove old data, similar to the fast bulk import
functionality, but for deletion.
Thank you,
Alex Baranau
--
Sematext
I got JMX counters hooked up to JConsole (a couple of them opened).
Do you have any advice from your experience on what metrics I should focus on
to spot this issue?
On Jul 9, 2012, at 22:19, Stack wrote:
On Mon, Jul 9, 2012 at 8:35 PM, Asaf Mesika asaf.mes...@gmail.com wrote:
Hi,
My
Bryan,
Here's the current best documentation about how to use the new features in
hbck.
http://hbase.apache.org/book.html#hbck.in.depth
Jon.
On Mon, Jul 9, 2012 at 12:10 PM, Bryan Beaudreault bbeaudrea...@hubspot.com
wrote:
Thanks for the response Kevin, Shrijeet, and Suraj.
I forgot to
I _think_ you should be able to do it and be just fine but you'll need to shut
down the region servers before you remove and start them back up after you are
done. Someone else closer to the internals can confirm/deny this.
On Monday, July 9, 2012 at 12:36 PM, Alex Baranau wrote:
Hello,
Heh, this is what I want to avoid actually: restarting RSs.
Alex Baranau
--
Sematext :: http://blog.sematext.com/ :: Solr - Lucene - Hadoop - HBase
On Mon, Jul 9, 2012 at 3:38 PM, Amandeep Khurana ama...@gmail.com wrote:
I _think_ you should be able to do it and be just fine but you'll
You could set your TTLs and trigger a major compaction ...
Or, (this is pretty advanced) you can probably do it without taking down
RS's by:
1) closing the region in the hbase shell
2) deleting the file in the shell
3) reopening the region in the hbase shell
Jon.
On Mon, Jul 9, 2012 at 12:41
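Jon's first suggestion could be sketched as hbase shell commands; the table name, family name, and TTL value below are made up, and the disable/enable dance reflects the 0.92-era shell (check the alter syntax against your version):

```
hbase> disable 'mytable'
hbase> alter 'mytable', {NAME => 'cf', TTL => 604800}   # TTL in seconds: 7 days
hbase> enable 'mytable'
hbase> major_compact 'mytable'   # rewrites StoreFiles, dropping expired cells
```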
Hey, this is closer!
However, I think I'd want to avoid major compaction. In fact I was thinking
about avoiding any compactions/splitting.
E.g. say I process some amount of data every 1 hour (e.g. with an MR job); the
output is written as a set of HFiles and added to be served by HBase. At
the same
We've been running with distributed splitting here for 6 months and
never had this issue. Also the exceptions you are seeing come from
HDFS and not HBase; the fact that it worked from the master and not
the region servers seems to point to a network configuration issue,
because the actual splitting
Thank you Amandeep for your input.
I go into the hbase shell to create a table from my HMaster, which
isn't running a DN process, and I get the following. Could this be
caused by a number of my DNs being offline, by the fact that the node
isn't running a DN process, or something
This exception is generally caused when one of your server names returned does
not map to a valid IP address on that host. The services being up or not do not
matter, but the hostname should resolve to a valid IP.
Regards,
Dhaval
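Dhaval's diagnosis can be checked from plain Java without touching HBase at all; this sketch just resolves a hostname the same way the client would (the default hostname here is only a placeholder, substitute one from your server list):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: verify that a hostname (e.g. a region server name) resolves to
// a valid IP address, independent of whether services are up on it.
public class ResolveCheck {
    public static void main(String[] args) throws UnknownHostException {
        String host = args.length > 0 ? args[0] : "localhost";
        InetAddress addr = InetAddress.getByName(host); // throws if unresolvable
        System.out.println(host + " -> " + addr.getHostAddress());
    }
}
```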
From:
Is there a debug flag I can use with hbase shell that will tell
me the name it's trying to resolve?
Thank you
---
Jay Wilson
- Original Message -
From:
To:user@hbase.apache.org , registrat...@circle-cross-jn.com
Cc:
Sent:Tue, 10 Jul 2012 05:36:44
There is definitely a debug flag on hbase. You can find out details
on http://hbase.apache.org/shell.html. I am not sure how much detail it would
log, though. I have never used it personally.
Regards,
Dhaval
- Original Message -
From: registrat...@circle-cross-jn.com
Hello Asaf,
If the 'int' parts of your rowkeys are close to each other, you may
face hotspotting.
Best
-Tariq
On Monday, July 9, 2012, Asaf Mesika asaf.mes...@gmail.com wrote:
No.
My index is composed of several fields. Some goes to the RowKey, some to
the column name, and some - and
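One common way around the hotspotting Tariq describes is to prefix the rowkey with a salt byte derived from a hash, so nearby keys spread across regions; this is a generic sketch, not something proposed in the thread, and the bucket count is an arbitrary assumption (readers must then scan all buckets and merge):

```java
import java.util.Arrays;

// Sketch: prepend a one-byte salt (hash of the key modulo N buckets) so
// that sequential keys land in different regions. N_BUCKETS is illustrative.
public class SaltedKey {
    static final int N_BUCKETS = 16;

    static byte[] salt(byte[] key) {
        // mask off the sign bit so the bucket is always in [0, N_BUCKETS)
        byte bucket = (byte) ((Arrays.hashCode(key) & 0x7fffffff) % N_BUCKETS);
        byte[] salted = new byte[key.length + 1];
        salted[0] = bucket;
        System.arraycopy(key, 0, salted, 1, key.length);
        return salted;
    }

    public static void main(String[] args) {
        byte[] salted = salt("row-00001".getBytes());
        System.out.println(salted.length); // original length + 1
    }
}
```

The salt is deterministic, so point gets can recompute the bucket from the key; only range scans pay the fan-out cost.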
Increase the value of the timeout property in hbase-site.xml. Maybe your
application is keeping the connection open for too long.
On Monday, July 9, 2012, syed kather in.ab...@gmail.com wrote:
Hi all,
When the map reduce program is executed, the regionserver gets closed
automatically. See the error
Keith,
The HBASE-3584 feature is in 0.94, and we are strongly considering an 0.94
version for a future CDH4 update. There is very little chance this
will get into a CDH3 release.
Jon.
On Thu, Jul 5, 2012 at 4:50 PM, lars hofhansl lhofha...@yahoo.com wrote:
I'll let the Cloudera folks
The int, short, short part goes to the timestamp.
Thanks!
Sent from my iPad
On 10 Jul 2012, at 01:08, Mohammad Tariq donta...@gmail.com wrote:
Hello Asaf,
If the 'int' parts of your rowkeys are close to each other, you may
face hotspotting.
Best
-Tariq
On Monday, July 9, 2012,
Asaf - check the URL in my signature - it's a service for monitoring various
HBase metrics and may visually show you what is going on with various parts of
your 3 servers.
Otis
Performance Monitoring for Solr / ElasticSearch / HBase -
http://sematext.com/spm
- Original Message
I used it but I didn't like the fact that the graph was only updating every
minute, and sometimes only every 5 minutes.
Sent from my iPhone
On 10 Jul 2012, at 06:57, Otis Gospodnetic otis_gospodne...@yahoo.com
wrote:
Asaf - check the URL in my signature - it's a service for monitoring
various HBase
Hello,
On Mon, Jul 09, 2012 at 09:10:12PM +0300, Asaf Mesika wrote:
On Jul 9, 2012, at 21:00, Harsh J wrote:
The hbase-daemon.sh script does not ssh back into the host, so it preserves any
environment variables you haven't otherwise set in the hbase-env.sh
file. I guess that did the trick for