Re: HBase mapreduce job crawls on final 25% of maps

2016-04-13 Thread Colin Kincaid Williams
It appears that my issue was caused by the missing scan settings I
mentioned in my second post. I ran a job with these settings, and my
job finished in under 6 hours. Thanks for your suggestions; they give me
further ideas for investigating issues moving forward.

scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
scan.setCacheBlocks(false);  // don't set to true for MR jobs
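
For reference, a minimal sketch of how these settings might fit into the map-only job
setup described later in this thread (class, table, and job names here are placeholders,
not the actual job):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ExportJob {

  // Placeholder mapper: the real job would serialize each Result to Kafka here.
  static class ExportMapper extends TableMapper<ImmutableBytesWritable, NullWritable> {
    @Override
    protected void map(ImmutableBytesWritable rowKey, Result row, Context context) {
      // no-op in this sketch
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hbase-export");

    Scan scan = new Scan();
    scan.setCaching(500);        // fetch 500 rows per RPC instead of the default of 1
    scan.setCacheBlocks(false);  // a full-table scan would only churn the block cache

    // Map-only job: no output key/value classes, no reducers, no real output format.
    TableMapReduceUtil.initTableMapperJob("fromTable", scan, ExportMapper.class,
        null, null, job, true, TableInputFormat.class);
    job.setOutputFormatClass(NullOutputFormat.class);
    job.setNumReduceTasks(0);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}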



On Wed, Apr 13, 2016 at 7:32 AM, Colin Kincaid Williams  wrote:
> Hi Chien,
>
> 4. From 50-150k per *second* to 100-150k per *minute*, as stated
> above, so reads went *DOWN* significantly. I think you must have
> misread.
>
> I will take into account some of your other suggestions.
>
> Thanks,
>
> Colin
>
> On Tue, Apr 12, 2016 at 8:19 PM, Chien Le  wrote:
>> Some things I would look at:
>> 1. Node statistics, both the mapper and regionserver nodes. Make sure
>> they're on fully healthy nodes (no disk issues, no half duplex, etc) and
>> that they're not already saturated from other jobs.
>> 2. Is there a common regionserver behind the remaining mappers/regions? If
>> so, try moving some regions off to spread the load.
>> 3. Verify the locality of the region blocks to the regionserver. If you
>> don't automate major compacts or have moved regions recently, mapper
>> locality might not help. Major compact if needed or move regions if you can
>> determine source?
>> 4. You mentioned that the requests per sec have gone from 50-150k to
>> 100-150k. Was that a typo? Did the read rate really increase?
>> 5. You've listed the region sizes, but was that done with a cursory hadoop
>> fs du? Have you tried using the HFile analyzer to verify that the number of
>> rows and sizes are roughly the same?
>> 6. Profile the mappers. If you can share the task counters for a completed
>> and a still-running task to compare, it might help find the issue.
>> 7. I don't think you should underestimate the perf gains of node-local
>> tasks vs. just rack-local, especially if short-circuit reads are enabled.
>> This is unfortunately a big gamble given how far your tasks have been
>> running already, so I'd look at this as a last resort.
>>
>>
>> HTH,
>> Chien
>>
>> On Tue, Apr 12, 2016 at 3:59 PM, Colin Kincaid Williams 
>> wrote:
>>
>>> I've noticed that I've omitted
>>>
>>> scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
>>> scan.setCacheBlocks(false);  // don't set to true for MR jobs
>>>
>>> which appear to be the suggested settings from the examples. Still, I am not
>>> sure if this explains the significant request slowdown on the final 25% of
>>> the maps.
>>>
>>> On Tue, Apr 12, 2016 at 10:36 PM, Colin Kincaid Williams 
>>> wrote:
>>> > Excuse my double post. I thought I deleted my draft, and then
>>> > constructed a cleaner, more detailed, more readable mail.
>>> >
>>> > On Tue, Apr 12, 2016 at 10:26 PM, Colin Kincaid Williams 
>>> wrote:
>>> >> After trying to get help with distcp on the hadoop-user and cdh-user
>>> >> mailing lists, I've given up on trying to use distcp and exporttable
>>> >> to migrate my HBase from 0.92.1 (CDH 4.1.3) to 0.98 (CDH 5.3.0).
>>> >>
>>> >> I've been working on an HBase MapReduce job to serialize my entries
>>> >> and insert them into Kafka. Then I plan to re-import them into
>>> >> CDH 5.3.0.
>>> >>
>>> >> Currently I'm having trouble with my MapReduce job. I have 43 maps,
>>> >> 33 of which have finished successfully and 10 of which are still
>>> >> running. I had previously seen request rates of 50-150k per second. Now,
>>> >> for the final 10 maps, I'm seeing 100-150k per minute.
>>> >>
>>> >> I might also mention that there were 6 failures near the application
>>> >> start. Unfortunately, I cannot read the logs for these 6 failures:
>>> >> there is an exception related to the YARN logging for these maps,
>>> >> maybe because they failed to start.
>>> >>
>>> >> I had a look around HDFS. It appears that the regions are all between
>>> >> 5-10 GB. The longest completed map so far took 7 hours, with the
>>> >> majority appearing to take around 3.5 hours.
>>> >>
>>> >> The remaining 10 maps have each been running between 23-27 hours.
>>> >>
>>> >> Considering data locality issues: 6 of the remaining maps are running
>>> >> on the same rack, and the other 4 are split between my other two
>>> >> racks. There should currently be a replica on each rack, since it
>>> >> appears the replication factor is set to 3, so I'm not sure this is
>>> >> really the cause of the slowdown.
>>> >>
>>> >> So I'm looking for advice on what I can do to troubleshoot my job.
>>> >> I'm setting up my map job like this:
>>> >>
>>> >> main(String[] args){
>>> >> ...
>>> >> Scan fromScan = new Scan();
>>> >> System.out.println(fromScan);
>>> >> TableMapReduceUtil.initTableMapperJob(fromTableName, fromScan, Map.class,
>>> >> null, null, job, true, TableInputFormat.class);
>>> >>
>>> >> // My guess is this controls the output 

RE: Append Visibility Labels?

2016-04-13 Thread benedict.whittamsmith
Yes - it's a capability we would need to efficiently support permissioning.

Good to know that we haven't killed off the old products! But I'm not sure the 
archaeological approach would scale.

The generic facility you describe, caveats noted, certainly seems to fit our 
use case - especially if we are talking about combining label expressions.

I guess we'd always use an 'OR' operator to add them. But what if we wanted to 
remove a product/visibility label?

-Original Message-
From: Andrew Purtell [mailto:andrew.purt...@gmail.com] 
Sent: 13 April 2016 17:23
To: user@hbase.apache.org
Subject: Re: Append Visibility Labels?

I think Benedict was asking if it would be possible to add the capability. 

Actually the old product data doesn't have to die, Benedict. Set VERSIONS > 1 
in your schema. The old cell version(s) carrying the old label set will still 
be there, accessible with a Scan that asks for N versions instead of just the 
latest. You'll get back a Result with up to N cells to iterate over and figure 
out how to process and display the information. If you only want the latest, 
use a Get instead. 

I think it could be possible to introduce a generic facility for handling the 
case where an existing value on the server has tags attached, a new mutation op 
arrives with a tag attached, _and_ another op attribute set by the client asks 
for any tags on the earlier cell version to be brought forward. For each tag 
type there would be a registered "combiner" that does what makes sense for its 
particulars. We do this in core for Append and Increment already, but without 
the notion of combination. This is an off-the-cuff remark; caveat: I haven't 
spent time thinking through the implications. 

> On Apr 13, 2016, at 8:58 AM, Ted Yu  wrote:
> 
> There is currently no API for appending Visibility Labels.
> 
> checkAndPut() only allows you to compare value, not labels.
> 
> On Wed, Apr 13, 2016 at 8:12 AM, 
> 
> wrote:
> 
>> We sell data. A product can be defined as a permission to access data 
>> (at a cell level). Visibility Labels look like a very good candidate 
>> for implementing this model.
>> 
>> The implementation works well until we create a new product over old data.
>> We can set the visibility label for the new product but, whoops, by 
>> applying it to the relevant cells we've overwritten all the existing 
>> labels on those cells, destroying the permissioning of our older 
>> products. What to do?
>> 
>> One answer would be to append the new visibility label to the 
>> existing label expressions on the cells with an 'OR'. But I'm not 
>> sure that's possible .. yet?
>> 
>> Thanks,
>> 
>> Ben
>> 
>> 
>> 


Re: Append Visibility Labels?

2016-04-13 Thread Andrew Purtell
I think Benedict was asking if it would be possible to add the capability. 

Actually the old product data doesn't have to die, Benedict. Set VERSIONS > 1 
in your schema. The old cell version(s) carrying the old label set will still 
be there, accessible with a Scan that asks for N versions instead of just the 
latest. You'll get back a Result with up to N cells to iterate over and figure 
out how to process and display the information. If you only want the latest, 
use a Get instead. 
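
A rough sketch of that read pattern, assuming a hypothetical table with VERSIONS > 1,
made-up row and column names, and the pre-1.0 HTable client API:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionedRead {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "products");    // hypothetical table with VERSIONS > 1
    try {
      // Ask the Scan for up to N versions of each cell, not just the latest.
      Scan scan = new Scan();
      scan.setMaxVersions(5);
      ResultScanner scanner = table.getScanner(scan);
      try {
        for (Result result : scanner) {
          for (Cell cell : result.rawCells()) {      // one Cell per stored version
            System.out.println(Bytes.toString(CellUtil.cloneRow(cell)) + " @ "
                + cell.getTimestamp() + " = " + Bytes.toString(CellUtil.cloneValue(cell)));
          }
        }
      } finally {
        scanner.close();
      }
      // If only the latest version is wanted, a Get is enough.
      Result latest = table.get(new Get(Bytes.toBytes("row-1")));
      System.out.println(latest);
    } finally {
      table.close();
    }
  }
}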

I think it could be possible to introduce a generic facility for handling the 
case where an existing value on the server has tags attached, a new mutation op 
arrives with a tag attached, _and_ another op attribute set by the client asks 
for any tags on the earlier cell version to be brought forward. For each tag 
type there would be a registered "combiner" that does what makes sense for its 
particulars. We do this in core for Append and Increment already, but without 
the notion of combination. This is an off-the-cuff remark; caveat: I haven't 
spent time thinking through the implications. 

> On Apr 13, 2016, at 8:58 AM, Ted Yu  wrote:
> 
> There is currently no API for appending Visibility Labels.
> 
> checkAndPut() only allows you to compare value, not labels.
> 
> On Wed, Apr 13, 2016 at 8:12 AM, 
> wrote:
> 
>> We sell data. A product can be defined as a permission to access data (at
>> a cell level). Visibility Labels look like a very good candidate for
>> implementing this model.
>> 
>> The implementation works well until we create a new product over old data.
>> We can set the visibility label for the new product but, whoops, by
>> applying it to the relevant cells we've overwritten all the existing labels
>> on those cells, destroying the permissioning of our older products. What to
>> do?
>> 
>> One answer would be to append the new visibility label to the existing
>> label expressions on the cells with an 'OR'. But I'm not sure that's
>> possible .. yet?
>> 
>> Thanks,
>> 
>> Ben
>> 
>> 
>> 


Re: Append Visibility Labels?

2016-04-13 Thread Ted Yu
There is currently no API for appending Visibility Labels.

checkAndPut() only allows you to compare value, not labels.
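
So the only route today appears to be a client-side rewrite: if the application itself
knows which label expression (and value) is currently on the cell, it can re-Put the
value with the combined expression. A minimal sketch under that assumption, with
hypothetical table, column, and label names:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;

public class RelabelCell {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "products");   // hypothetical table
    try {
      // The existing expression and value must be tracked by the application;
      // the normal client read path does not return a cell's visibility expression.
      String existing = "PRODUCT_A";
      String combined = existing + "|PRODUCT_B";   // OR in the new product label

      Put put = new Put(Bytes.toBytes("row-1"));
      put.add(Bytes.toBytes("d"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      put.setCellVisibility(new CellVisibility(combined));  // sets, rather than appends
      table.put(put);
    } finally {
      table.close();
    }
  }
}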

On Wed, Apr 13, 2016 at 8:12 AM, 
wrote:

> We sell data. A product can be defined as a permission to access data (at
> a cell level). Visibility Labels look like a very good candidate for
> implementing this model.
>
> The implementation works well until we create a new product over old data.
> We can set the visibility label for the new product but, whoops, by
> applying it to the relevant cells we've overwritten all the existing labels
> on those cells, destroying the permissioning of our older products. What to
> do?
>
> One answer would be to append the new visibility label to the existing
> label expressions on the cells with an 'OR'. But I'm not sure that's
> possible .. yet?
>
> Thanks,
>
> Ben
>
> 
>


Append Visibility Labels?

2016-04-13 Thread benedict.whittamsmith
We sell data. A product can be defined as a permission to access data (at a 
cell level). Visibility Labels look like a very good candidate for implementing 
this model.

The implementation works well until we create a new product over old data. We 
can set the visibility label for the new product but, whoops, by applying it to 
the relevant cells we've overwritten all the existing labels on those cells, 
destroying the permissioning of our older products. What to do?

One answer would be to append the new visibility label to the existing label 
expressions on the cells with an 'OR'. But I'm not sure that's possible .. yet?

Thanks,

Ben





Re: HBase mapreduce job crawls on final 25% of maps

2016-04-13 Thread Colin Kincaid Williams
Hi Chien,

4. From 50-150k per *second* to 100-150k per *minute*, as stated
above, so reads went *DOWN* significantly. I think you must have
misread.

I will take into account some of your other suggestions.

Thanks,

Colin

On Tue, Apr 12, 2016 at 8:19 PM, Chien Le  wrote:
> Some things I would look at:
> 1. Node statistics, both the mapper and regionserver nodes. Make sure
> they're on fully healthy nodes (no disk issues, no half duplex, etc) and
> that they're not already saturated from other jobs.
> 2. Is there a common regionserver behind the remaining mappers/regions? If
> so, try moving some regions off to spread the load.
> 3. Verify the locality of the region blocks to the regionserver. If you
> don't automate major compacts or have moved regions recently, mapper
> locality might not help. Major compact if needed or move regions if you can
> determine source?
> 4. You mentioned that the requests per sec have gone from 50-150k to
> 100-150k. Was that a typo? Did the read rate really increase?
> 5. You've listed the region sizes, but was that done with a cursory hadoop
> fs du? Have you tried using the HFile analyzer to verify that the number of
> rows and sizes are roughly the same?
> 6. Profile the mappers. If you can share the task counters for a completed
> and a still-running task to compare, it might help find the issue.
> 7. I don't think you should underestimate the perf gains of node-local
> tasks vs. just rack-local, especially if short-circuit reads are enabled.
> This is unfortunately a big gamble given how far your tasks have been
> running already, so I'd look at this as a last resort.
>
>
> HTH,
> Chien
>
> On Tue, Apr 12, 2016 at 3:59 PM, Colin Kincaid Williams 
> wrote:
>
>> I've noticed that I've omitted
>>
>> scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
>> scan.setCacheBlocks(false);  // don't set to true for MR jobs
>>
>> which appear to be the suggested settings from the examples. Still, I am not
>> sure if this explains the significant request slowdown on the final 25% of
>> the maps.
>>
>> On Tue, Apr 12, 2016 at 10:36 PM, Colin Kincaid Williams 
>> wrote:
>> > Excuse my double post. I thought I deleted my draft, and then
>> > constructed a cleaner, more detailed, more readable mail.
>> >
>> > On Tue, Apr 12, 2016 at 10:26 PM, Colin Kincaid Williams 
>> wrote:
>> >> After trying to get help with distcp on the hadoop-user and cdh-user
>> >> mailing lists, I've given up on trying to use distcp and exporttable
>> >> to migrate my HBase from 0.92.1 (CDH 4.1.3) to 0.98 (CDH 5.3.0).
>> >>
>> >> I've been working on an HBase MapReduce job to serialize my entries
>> >> and insert them into Kafka. Then I plan to re-import them into
>> >> CDH 5.3.0.
>> >>
>> >> Currently I'm having trouble with my MapReduce job. I have 43 maps,
>> >> 33 of which have finished successfully and 10 of which are still
>> >> running. I had previously seen request rates of 50-150k per second. Now,
>> >> for the final 10 maps, I'm seeing 100-150k per minute.
>> >>
>> >> I might also mention that there were 6 failures near the application
>> >> start. Unfortunately, I cannot read the logs for these 6 failures:
>> >> there is an exception related to the YARN logging for these maps,
>> >> maybe because they failed to start.
>> >>
>> >> I had a look around HDFS. It appears that the regions are all between
>> >> 5-10 GB. The longest completed map so far took 7 hours, with the
>> >> majority appearing to take around 3.5 hours.
>> >>
>> >> The remaining 10 maps have each been running between 23-27 hours.
>> >>
>> >> Considering data locality issues: 6 of the remaining maps are running
>> >> on the same rack, and the other 4 are split between my other two
>> >> racks. There should currently be a replica on each rack, since it
>> >> appears the replication factor is set to 3, so I'm not sure this is
>> >> really the cause of the slowdown.
>> >>
>> >> So I'm looking for advice on what I can do to troubleshoot my job.
>> >> I'm setting up my map job like this:
>> >>
>> >> main(String[] args){
>> >> ...
>> >> Scan fromScan = new Scan();
>> >> System.out.println(fromScan);
>> >> TableMapReduceUtil.initTableMapperJob(fromTableName, fromScan, Map.class,
>> >> null, null, job, true, TableInputFormat.class);
>> >>
>> >> // My guess is this controls the output type for the reduce function,
>> >> // based on setOutputKeyClass and setOutputValueClass from p.27. Since
>> >> // there is no reduce step, this is currently null.
>> >> job.setOutputFormatClass(NullOutputFormat.class);
>> >> job.setNumReduceTasks(0);
>> >> job.submit();
>> >> ...
>> >> }
>> >>
>> >> I'm not performing a reduce step, and I'm traversing row keys like
>> >>
>> >> map(final ImmutableBytesWritable fromRowKey,
>> >> Result fromResult, Context context) throws IOException {
>> >> ...
>> >>   // should I assume that each keyvalue is a version of the stored
>> row?
>> >>   for