Yes, I believe this will cover most of the use-cases.
Lior
On Tue, May 14, 2013 at 9:25 PM, Mike Spreitzer mspre...@us.ibm.com wrote:
Why not go whole hog and create checkAndMultiMutate (for all varieties of
mutation) (all on the same row)?
Thanks,
Mike
On Saturday, April 27, 2013, Lior Schachter wrote:
Hi Ted,
Thanks for the prompt response.
I've already had a look at HRegionServer.checkAndPut and the
implementation
looks quite straight forward.
That's why I was wondering why the other 2 methods are not available...or
planned (couldn't find
Hi,
I want to increment a cell value only after checking a condition on another
cell. I could find checkAndPut/checkAndDelete on HTableInteface. It seems
that checkAndIncrement (and checkAndAppend) are missing.
Can you suggest a workaround for my use-case ? working with version 0.94.5.
Thanks,
public boolean checkAndPut(final byte[] row, final byte[] family,
final byte[] qualifier, final byte[] value, final Put put) throws IOException {
You can create checkAndIncrement() in a similar way.
Cheers
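Ted's suggestion can also be approximated purely on the client side with a get-then-compare-and-set retry loop. The sketch below is hypothetical: a toy in-memory "row" stands in for the table so the pattern is self-contained, but against a real 0.94 cluster the CAS step would be HTableInterface.checkAndPut(row, family, qualifier, expectedValue, put) and the read would be a Get.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Toy stand-in for one HBase row (qualifier -> value), used only so the
// retry loop can be shown without a cluster.
public class CheckAndIncrementSketch {

    private final Map<String, Long> row = new HashMap<String, Long>();

    // Mimics checkAndPut semantics: write newValue only if the cell still
    // holds expected (null means "cell absent").
    public synchronized boolean checkAndPut(String qualifier, Long expected, long newValue) {
        if (!Objects.equals(row.get(qualifier), expected)) {
            return false;
        }
        row.put(qualifier, newValue);
        return true;
    }

    public synchronized Long get(String qualifier) {
        return row.get(qualifier);
    }

    // Read the current value, then CAS the incremented value; retry if a
    // concurrent writer changed the cell between the read and the CAS.
    public long checkAndIncrement(String qualifier, long delta) {
        while (true) {
            Long current = get(qualifier);
            long next = (current == null ? 0L : current.longValue()) + delta;
            if (checkAndPut(qualifier, current, next)) {
                return next;
            }
        }
    }
}
```

Note this is optimistic: under heavy contention it can loop, whereas a server-side checkAndIncrement (done inside HRegionServer under the row lock, like checkAndPut) would not.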
the configurations you sent and see if it can
eliminate the problem.
Lior
On Mon, Mar 26, 2012 at 7:43 PM, Jean-Daniel Cryans jdcry...@apache.orgwrote:
On Sun, Mar 25, 2012 at 1:23 AM, Lior Schachter li...@infolinks.com
wrote:
Hi all,
We use hbase 0.9.2. We recently started to experience region servers
crashing under heavy load (2-3 different servers crash each load).
It seems like a missing block in HDFS causes a full GC, and regions are being
closed.
Following is a log sample from the region server (GC log, region server log)
-Original Message-
From: Lior Schachter [mailto:li...@infolinks.com]
Sent: Sunday, August 14, 2011 9:32 AM
To: user@hbase.apache.org; mapreduce-u...@hadoop.apache.org
Subject: M/R vs hbase problem in production
Hi,
cluster details:
hbase 0.90.2. 10 machines. 1GB switch.
use-case
M/R job that inserts about 10 million rows to hbase in the reducer, followed
by M/R that works with hdfs files.
When the maps of the first job finish, the maps of the second job start and
the region server crashes.
Please note that when running
(33548K)]
icms_dc=100 , 7.2816440 secs] [Times: user=9.19 sys=0.01, real=7.28 secs]
Hi all,
I'm running a scan using the M/R framework.
My table contains hundreds of millions of rows, and I'm scanning about 50
million rows using a start/stop key.
The problem is that some map tasks get stuck and the task tracker kills these
maps after 600 seconds. When retrying the task everything
in master / region server logs around the moment
of timeout ?
Cheers
On Mon, Jul 4, 2011 at 4:48 AM, Lior Schachter li...@infolinks.com
wrote:
TableInputFormatBase.getSplits():
* Calculates the splits that will serve as input for the map tasks. The
* number of splits matches the number of regions in a table.
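The one-split-per-region behavior quoted above can be pictured with a toy model. Everything here is hypothetical naming: the real TableInputFormatBase.getSplits compares byte[] keys with Bytes.compareTo and handles the empty start/stop keys of the first and last regions; this String version only shows the overlap logic.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy model: one split per region whose key range overlaps the scan's
// [scanStart, scanStop). Regions are given by sorted start keys; region i
// spans [startKeys[i], startKeys[i+1]), and the last region is unbounded.
public class SplitsSketch {

    public static List<String> splitsFor(String[] regionStartKeys,
                                         String scanStart, String scanStop) {
        List<String> splits = new ArrayList<String>();
        for (int i = 0; i < regionStartKeys.length; i++) {
            String regionStart = regionStartKeys[i];
            String regionEnd =
                (i + 1 < regionStartKeys.length) ? regionStartKeys[i + 1] : null;
            boolean startsBeforeStop = regionStart.compareTo(scanStop) < 0;
            boolean endsAfterStart =
                regionEnd == null || regionEnd.compareTo(scanStart) > 0;
            if (startsBeforeStop && endsAfterStart) {
                splits.add(regionStart);
            }
        }
        return splits;
    }
}
```

So a start/stop key that covers 50M rows out of hundreds of millions still yields exactly one map task per overlapped region, which is why a single slow region shows up as one stuck map.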
On Mon, Jul 4, 2011 at 7:37 AM, Lior Schachter li...@infolinks.com
wrote:
1. yes - I configure my job using this line
called already.
Can you try getting jstack of one of the map tasks before task tracker
kills
it ?
Thanks
On Mon, Jul 4, 2011 at 8:15 AM, Lior Schachter li...@infolinks.com
wrote:
1. Currently every map gets one region. So I don't understand what difference
it will make to use the splits
On Mon, Jul 4, 2011 at 9:26 AM, Lior Schachter li...@infolinks.com
wrote:
I used kill -3, following the thread dump:
...
On Mon, Jul 4, 2011 at 6:22 PM, Ted Yu yuzhih...@gmail.com wrote:
I wasn't clear in my previous email.
It was not an answer to why map tasks got stuck.
There may be more than one connection to zookeeper from one map task.
So it doesn't hurt if you increase hbase.zookeeper.property.maxClientCnxns
Cheers
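For reference, when HBase manages its own ZooKeeper quorum (as in these 0.90.x setups), that property goes in hbase-site.xml; the value 300 below is only an example, not a recommendation:

```xml
<!-- hbase-site.xml: raise the per-client-host ZooKeeper connection cap.
     Requires a restart to take effect. -->
<property>
  <name>hbase.zookeeper.property.maxClientCnxns</name>
  <value>300</value>
</property>
```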
On Mon, Jul 4, 2011 at 9:47 AM, Lior Schachter li...@infolinks.com
wrote:
1. HBaseURLsDaysAggregator.java:124
Hi Stack,
We already have version 0.90.1 installed on our production cluster and we
want to upgrade to 0.90.2.
Obviously we should upgrade the hbase-0.90.1.jar with the new jar.
Should we upgrade other libraries/configuration files ?
Is there a maven repository with the new version ?
Thanks,
.*;
/**
* User: Lior Schachter
* Email: li...@infolinks.com
*/
public class HBaseOperations {
private static final Logger logger =
Logger.getLogger(com.infolinks.hadoop.commons.hbase.HBaseOperations.class);
private Configuration conf;
private HTablePool pool = null;
public HBaseOperations
days, then you can
emit key_date from mappers instead of date_key and then reassemble them
correctly in reducers. This way you'll have an even distribution of inserts
on your pre-created regions.
Cosmin
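The difference between the two layouts can be shown with a minimal sketch (the method names and "user42"-style keys below are hypothetical, purely for illustration):

```java
// date_key vs key_date row-key layouts.
public class KeyDesignSketch {

    // date_key: every row written on the same day shares the leading date,
    // so a whole day's inserts all target the one region that owns that
    // date prefix -- the hotspot described above.
    public static String dateFirst(String date, String key) {
        return date + "_" + key;
    }

    // key_date: the varying key leads, so a day's inserts spread across
    // regions pre-split on the key space; reducers can still reassemble
    // rows per date because the date is kept as a suffix.
    public static String keyFirst(String date, String key) {
        return key + "_" + date;
    }
}
```

With keyFirst, two rows written on the same day start with different prefixes and therefore land in different pre-created regions, which is the even distribution Cosmin describes.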
On Mar 27, 2011, at 8:00 PM, Lior Schachter wrote:
Hi,
Last week I consulted the forum about hbase insertion optimization when the
key format is: date_key.
This key format is very good for efficient scans but creates a hotspot on a
single region when inserting millions of rows.
I would like to share and get a feedback on the solution we found:
1.
Hi, should I download the patch HBASE1861-incomplete.patch ?
Should I apply it as an Eclipse patch on 0.91 and produce a new jar ?
On Thu, Mar 17, 2011 at 8:58 PM, Nichole Treadway kntread...@gmail.comwrote:
Hi all,
I am attempting to bulk load data into HBase using the importtsv program. I
have
Hi,
We are trying to test the bulk loading and created a simple test for it.
The problem is that if we follow the instructions on
http://hbase.apache.org/bulk-loads.html we get an exception in
TotalOrderPartitioner.readPartitions.
After debugging we saw that the partition file path created in
Hi,
What is the API or configuration for changing the default hash function for
a specific htable.
thanks,
Lior
the hash function that distributes the rows between the regions.
On Sun, Mar 20, 2011 at 8:36 PM, Stack st...@duboce.net wrote:
Hash? Which hash are you referring to sir?
St.Ack
other things but does mean it is
easier to hotspot regions. Key design is very important.
-chris
On Mar 20, 2011, at 11:41 AM, Lior Schachter wrote:
your configurations and system
characteristics (maybe in a Wiki page).
It will also help to get more of the small tweaks that you found helpful.
Lior Schachter
On Mon, Nov 22, 2010 at 1:33 PM, Lars George lars.geo...@gmail.com wrote:
Oleg,
Do you have Ganglia or some other graphing tool