[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2022-06-12 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-12394:

Resolution: Abandoned
Status: Resolved  (was: Patch Available)

> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 2.0.0
>Reporter: Weichen Ye
>Priority: Major
> Attachments: HBASE-12394-v2.patch, HBASE-12394-v3.patch, 
> HBASE-12394-v4.patch, HBASE-12394-v5.patch, HBASE-12394-v6.patch, 
> HBASE-12394.patch, HBase-12394 Document.pdf
>
>
> Welcome to Review Board: https://reviews.apache.org/r/27519/
> The latest patch is "Diff Revision 2 (Latest)".
> On a Hadoop cluster, a job that takes a large HBase table as input always
> consumes a large amount of computing resources. For example, scanning a
> table with 1000 regions requires a job with 1000 mappers. This patch adds
> support for one mapper taking multiple regions as input.
> To support multiple regions per mapper, we need a new configuration
> property, "hbase.mapreduce.scan.regionspermapper", which controls how many
> regions are used as input for one mapper. For example, if we have an HBase
> table with 300 regions and set hbase.mapreduce.scan.regionspermapper = 3,
> then a job scanning the table will use only 300/3 = 100 mappers.
> In this way, we can control the number of mappers with the following formula:
> Number of mappers = (total number of regions) / hbase.mapreduce.scan.regionspermapper
> This is an example of the configuration:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class,
> Text.class, job);
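The grouping described above can be sketched in plain Java. This is a hypothetical illustration of the mapper-count arithmetic only, not code from the patch; the class and method names are invented:

```java
import java.util.ArrayList;
import java.util.List;

public class RegionGrouping {
    // Group region indices into chunks of regionsPerMapper; the number of
    // chunks is the number of mappers the job would run. The last chunk may
    // be smaller when the region count is not an exact multiple.
    static List<List<Integer>> groupRegions(int totalRegions, int regionsPerMapper) {
        List<List<Integer>> splits = new ArrayList<>();
        for (int start = 0; start < totalRegions; start += regionsPerMapper) {
            List<Integer> group = new ArrayList<>();
            int end = Math.min(start + regionsPerMapper, totalRegions);
            for (int r = start; r < end; r++) {
                group.add(r);
            }
            splits.add(group);
        }
        return splits;
    }

    public static void main(String[] args) {
        // The example from the description: 300 regions, regionspermapper = 3
        System.out.println(groupRegions(300, 3).size()); // prints 100
    }
}
```

Note that with this grouping the formula is really a ceiling division: 301 regions with regionspermapper = 3 would yield 101 mappers, the last one reading a single region.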



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394-v6.patch



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: (was: HBASE-12394-v6.patch)



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Affects Version/s: (was: 0.98.6.1)



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394-v6.patch



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394-v5.patch

In the new patch:
1. Added tests to demonstrate that the new code actually works.
2. Abstracted duplicated code into a method shared by the if and else branches.
3. Added new comments to the code.
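Item 2 can be illustrated with a minimal Java sketch. This is hypothetical: the helper and branch logic are invented to show the shape of the refactor, not the patch's actual code:

```java
public class SharedHelper {
    // The duplicated split-construction code, hoisted into one helper that
    // both branches below call instead of building the split inline twice.
    static int[] makeSplit(int startRegion, int endRegion) {
        return new int[] { startRegion, endRegion };
    }

    // Compute the [first, last] region range for one mapper group.
    static int[] splitForGroup(int groupIndex, int regionsPerMapper, int totalRegions) {
        int start = groupIndex * regionsPerMapper;
        int end;
        if (start + regionsPerMapper <= totalRegions) {
            end = start + regionsPerMapper - 1;   // full group of regions
        } else {
            end = totalRegions - 1;               // last, possibly smaller, group
        }
        return makeSplit(start, end);             // shared by both branches
    }

    public static void main(String[] args) {
        int[] s = splitForGroup(99, 3, 300);
        System.out.println(s[0] + ".." + s[1]); // prints 297..299
    }
}
```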



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-13 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBase-12394 Document.pdf

Attached an introduction document.



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-09 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394-v4.patch



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-06 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394-v3.patch



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-06 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
Welcome to Review Board: https://reviews.apache.org/r/27519/
The latest patch is "Diff Revision 2 (Latest)".

On a Hadoop cluster, a job that takes a large HBase table as input always
consumes a large amount of computing resources. For example, scanning a table
with 1000 regions requires a job with 1000 mappers. This patch adds support
for one mapper taking multiple regions as input.

To support multiple regions per mapper, we need a new configuration property,
"hbase.mapreduce.scan.regionspermapper", which controls how many regions are
used as input for one mapper. For example, if we have an HBase table with 300
regions and set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning
the table will use only 300/3 = 100 mappers.

In this way, we can control the number of mappers with the following formula:
Number of mappers = (total number of regions) / hbase.mapreduce.scan.regionspermapper

This is an example of the configuration:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, Text.class,
Text.class, job);

  was:
Welcome to Review Board: https://reviews.apache.org/r/27519/

On a Hadoop cluster, a job that takes a large HBase table as input always
consumes a large amount of computing resources. For example, scanning a table
with 1000 regions requires a job with 1000 mappers. This patch adds support
for one mapper taking multiple regions as input.

The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java

The files starting with * are tests.

To support multiple regions per mapper, we need a new configuration property,
"hbase.mapreduce.scan.regionspermapper", which controls how many regions are
used as input for one mapper. For example, if we have an HBase table with 300
regions and set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning
the table will use only 300/3 = 100 mappers.

In this way, we can control the number of mappers with the following formula:
Number of mappers = (total number of regions) / hbase.mapreduce.scan.regionspermapper

This is an example of the configuration:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class,
Text.class, Text.class, job);




[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-03 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
Welcome to Review Board: https://reviews.apache.org/r/27519/

On a Hadoop cluster, a job that takes a large HBase table as input always
consumes a large amount of computing resources. For example, scanning a table
with 1000 regions requires a job with 1000 mappers. This patch adds support
for one mapper taking multiple regions as input.

The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java

The files starting with * are tests.

To support multiple regions per mapper, we need a new configuration property,
"hbase.mapreduce.scan.regionspermapper", which controls how many regions are
used as input for one mapper. For example, if we have an HBase table with 300
regions and set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning
the table will use only 300/3 = 100 mappers.

In this way, we can control the number of mappers with the following formula:
Number of mappers = (total number of regions) / hbase.mapreduce.scan.regionspermapper

This is an example of the configuration:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class,
Text.class, Text.class, job);

  was:
On a Hadoop cluster, a job that takes a large HBase table as input always
consumes a large amount of computing resources. For example, scanning a table
with 1000 regions requires a job with 1000 mappers. This patch adds support
for one mapper taking multiple regions as input.

The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java

The files starting with * are tests.

To support multiple regions per mapper, we need a new configuration property,
"hbase.mapreduce.scan.regionspermapper", which controls how many regions are
used as input for one mapper. For example, if we have an HBase table with 300
regions and set hbase.mapreduce.scan.regionspermapper = 3, then a job scanning
the table will use only 300/3 = 100 mappers.

In this way, we can control the number of mappers with the following formula:
Number of mappers = (total number of regions) / hbase.mapreduce.scan.regionspermapper

This is an example of the configuration:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class,
Text.class, Text.class, job);



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-03 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: (was: HBASE-12394.patch.v2)



[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-03 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394-v2.patch

Line length issues fixed.

> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 2.0.0, 0.98.6.1
>Reporter: Weichen Ye
> Attachments: HBASE-12394-v2.patch, HBASE-12394.patch, 
> HBASE-12394.patch.v2
>
>
> For a Hadoop cluster, a job with a large HBase table as input always 
> consumes a large amount of computing resources. For example, we need to 
> create a job with 1000 mappers to scan a table with 1000 regions. This patch 
> supports one mapper using multiple regions as input.
>  
> The following new files are included in this patch:
> TableMultiRegionInputFormat.java
> TableMultiRegionInputFormatBase.java
> TableMultiRegionMapReduceUtil.java
> *TestTableMultiRegionInputFormatScan1.java
> *TestTableMultiRegionInputFormatScan2.java
> *TestTableMultiRegionInputFormatScanBase.java
> *TestTableMultiRegionMapReduceUtil.java
>  
> The files starting with * are tests.
> In order to support multiple regions for one mapper, we need a new 
> configuration property, "hbase.mapreduce.scan.regionspermapper".
> hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
> input for one mapper. For example, if we have an HBase table with 300 regions 
> and we set hbase.mapreduce.scan.regionspermapper = 3, a job scanning the 
> table will use only 300/3 = 100 mappers.
> In this way, we can control the number of mappers with the following formula:
> Number of Mappers = (total number of regions) / 
> hbase.mapreduce.scan.regionspermapper
> This is an example of the configuration:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
> Text.class, Text.class, job);
>  
>   
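The split-combining idea behind an input format like TableMultiRegionInputFormatBase can be illustrated with a minimal, hypothetical sketch: per-region inputs are grouped into batches of regionspermapper, and each batch becomes the input of one mapper. The class and method names below are illustrative stand-ins, not the patch's actual API, and plain strings stand in for real TableSplit objects.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: combine per-region splits into one split per mapper.
public class SplitGrouper {
    static List<List<String>> group(List<String> regions, int regionsPerMapper) {
        List<List<String>> combined = new ArrayList<>();
        for (int i = 0; i < regions.size(); i += regionsPerMapper) {
            // Each sublist is the work list of a single mapper; the last
            // batch may be smaller when the counts do not divide evenly.
            combined.add(new ArrayList<>(
                regions.subList(i, Math.min(i + regionsPerMapper, regions.size()))));
        }
        return combined;
    }

    public static void main(String[] args) {
        List<String> regions = Arrays.asList("r1", "r2", "r3", "r4", "r5");
        System.out.println(group(regions, 3)); // [[r1, r2, r3], [r4, r5]]
    }
}
```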





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-03 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394.patch.v2

Fix some line length issues from the last patch.

> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 2.0.0, 0.98.6.1
>Reporter: Weichen Ye
> Attachments: HBASE-12394.patch, HBASE-12394.patch.v2
>
>
> For a Hadoop cluster, a job with a large HBase table as input always 
> consumes a large amount of computing resources. For example, we need to 
> create a job with 1000 mappers to scan a table with 1000 regions. This patch 
> supports one mapper using multiple regions as input.
>  
> The following new files are included in this patch:
> TableMultiRegionInputFormat.java
> TableMultiRegionInputFormatBase.java
> TableMultiRegionMapReduceUtil.java
> *TestTableMultiRegionInputFormatScan1.java
> *TestTableMultiRegionInputFormatScan2.java
> *TestTableMultiRegionInputFormatScanBase.java
> *TestTableMultiRegionMapReduceUtil.java
>  
> The files starting with * are tests.
> In order to support multiple regions for one mapper, we need a new 
> configuration property, "hbase.mapreduce.scan.regionspermapper".
> hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
> input for one mapper. For example, if we have an HBase table with 300 regions 
> and we set hbase.mapreduce.scan.regionspermapper = 3, a job scanning the 
> table will use only 300/3 = 100 mappers.
> In this way, we can control the number of mappers with the following formula:
> Number of Mappers = (total number of regions) / 
> hbase.mapreduce.scan.regionspermapper
> This is an example of the configuration:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
> Text.class, Text.class, job);
>  
>   
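Reading the property itself can be sketched without any HBase/Hadoop dependency. The real patch would presumably use Hadoop's Configuration.getInt(key, defaultValue); a plain Map stands in here, and the fallback to one region per mapper (the pre-patch behavior) when the property is unset or invalid is an assumption.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of reading hbase.mapreduce.scan.regionspermapper with a
// safe default. The default-to-1 fallback is an assumption, not confirmed
// by the patch.
public class RegionsPerMapperConf {
    static final String KEY = "hbase.mapreduce.scan.regionspermapper";

    static int regionsPerMapper(Map<String, String> conf) {
        String raw = conf.get(KEY);
        if (raw == null) {
            return 1; // unset: keep the classic one-region-per-mapper behavior
        }
        try {
            int value = Integer.parseInt(raw.trim());
            return value > 0 ? value : 1; // guard against zero/negative values
        } catch (NumberFormatException e) {
            return 1; // unparsable value: fall back rather than fail the job
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(KEY, "3");
        System.out.println(regionsPerMapper(conf)); // 3
    }
}
```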





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-11-02 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
For a Hadoop cluster, a job with a large HBase table as input always consumes 
a large amount of computing resources. For example, we need to create a job 
with 1000 mappers to scan a table with 1000 regions. This patch supports one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new 
configuration property, "hbase.mapreduce.scan.regionspermapper".

hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
input for one mapper. For example, if we have an HBase table with 300 regions 
and we set hbase.mapreduce.scan.regionspermapper = 3, a job scanning the 
table will use only 300/3 = 100 mappers.

In this way, we can control the number of mappers with the following formula:
Number of Mappers = (total number of regions) / 
hbase.mapreduce.scan.regionspermapper

This is an example of the configuration:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  

  was:
For a Hadoop cluster, a job with a large HBase table as input always consumes 
a large amount of computing resources. For example, we need to create a job 
with 1000 mappers to scan a table with 1000 regions. This patch supports one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new 
configuration property, "hbase.mapreduce.scan.regionspermapper".

This is an example in which each mapper has 3 regions as input:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  


> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 2.0.0, 0.98.6.1
>Reporter: Weichen Ye
> Attachments: HBASE-12394.patch
>
>
> For a Hadoop cluster, a job with a large HBase table as input always 
> consumes a large amount of computing resources. For example, we need to 
> create a job with 1000 mappers to scan a table with 1000 regions. This patch 
> supports one mapper using multiple regions as input.
>  
> The following new files are included in this patch:
> TableMultiRegionInputFormat.java
> TableMultiRegionInputFormatBase.java
> TableMultiRegionMapReduceUtil.java
> *TestTableMultiRegionInputFormatScan1.java
> *TestTableMultiRegionInputFormatScan2.java
> *TestTableMultiRegionInputFormatScanBase.java
> *TestTableMultiRegionMapReduceUtil.java
>  
> The files starting with * are tests.
> In order to support multiple regions for one mapper, we need a new 
> configuration property, "hbase.mapreduce.scan.regionspermapper".
> hbase.mapreduce.scan.regionspermapper controls how many regions are used as 
> input for one mapper. For example, if we have an HBase table with 300 regions 
> and we set hbase.mapreduce.scan.regionspermapper = 3, a job scanning the 
> table will use only 300/3 = 100 mappers.
> In this way, we can control the number of mappers with the following formula:
> Number of Mappers = (total number of regions) / 
> hbase.mapreduce.scan.regionspermapper
> This is an example of the configuration:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
> Text.class, Text.class, job);
>  
>   





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Affects Version/s: 2.0.0
   Status: Patch Available  (was: Open)

> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 0.98.6.1, 2.0.0
>Reporter: Weichen Ye
> Attachments: HBASE-12394.patch
>
>
> For a Hadoop cluster, a job with a large HBase table as input always 
> consumes a large amount of computing resources. For example, we need to 
> create a job with 1000 mappers to scan a table with 1000 regions. This patch 
> supports one mapper using multiple regions as input.
>  
> The following new files are included in this patch:
> TableMultiRegionInputFormat.java
> TableMultiRegionInputFormatBase.java
> TableMultiRegionMapReduceUtil.java
> *TestTableMultiRegionInputFormatScan1.java
> *TestTableMultiRegionInputFormatScan2.java
> *TestTableMultiRegionInputFormatScanBase.java
> *TestTableMultiRegionMapReduceUtil.java
>  
> The files starting with * are tests.
> In order to support multiple regions for one mapper, we need a new 
> configuration property, "hbase.mapreduce.scan.regionspermapper".
> This is an example in which each mapper has 3 regions as input:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
> Text.class, Text.class, job);
>  
>   





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
For a Hadoop cluster, a job with a large HBase table as input always consumes 
a large amount of computing resources. For example, we need to create a job 
with 1000 mappers to scan a table with 1000 regions. This patch supports one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new 
configuration property, "hbase.mapreduce.scan.regionspermapper".

This is an example in which each mapper has 3 regions as input:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  

  was:
For a Hadoop cluster, a job with a large HBase table as input always consumes 
a large amount of computing resources. For example, we need to create a job 
with 1000 mappers to scan a table with 1000 regions. This patch supports one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new 
configuration property, "hbase.mapreduce.scan.regionspermapper".

This is an example in which each mapper has 3 regions as input:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  


> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 0.98.6.1
>Reporter: Weichen Ye
> Attachments: HBASE-12394.patch
>
>
> For a Hadoop cluster, a job with a large HBase table as input always 
> consumes a large amount of computing resources. For example, we need to 
> create a job with 1000 mappers to scan a table with 1000 regions. This patch 
> supports one mapper using multiple regions as input.
>  
> The following new files are included in this patch:
> TableMultiRegionInputFormat.java
> TableMultiRegionInputFormatBase.java
> TableMultiRegionMapReduceUtil.java
> *TestTableMultiRegionInputFormatScan1.java
> *TestTableMultiRegionInputFormatScan2.java
> *TestTableMultiRegionInputFormatScanBase.java
> *TestTableMultiRegionMapReduceUtil.java
>  
> The files starting with * are tests.
> In order to support multiple regions for one mapper, we need a new 
> configuration property, "hbase.mapreduce.scan.regionspermapper".
> This is an example in which each mapper has 3 regions as input:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
> Text.class, Text.class, job);
>  
>   





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
For a Hadoop cluster, a job with a large HBase table as input always consumes 
a large amount of computing resources. For example, we need to create a job 
with 1000 mappers to scan a table with 1000 regions. This patch supports one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new 
configuration property, "hbase.mapreduce.scan.regionspermapper".

This is an example in which each mapper has 3 regions as input:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  

  was:
For a Hadoop cluster, a job with a large HBase table as input always consumes 
a large amount of computing resources. For example, we need to create a job 
with 1000 mappers to scan a table with 1000 regions. This patch supports one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new 
configuration property, "hbase.mapreduce.scan.regionspermapper".

This is an example in which each mapper can have 3 regions as input:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  


> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 0.98.6.1
>Reporter: Weichen Ye
> Attachments: HBASE-12394.patch
>
>
> For a Hadoop cluster, a job with a large HBase table as input always 
> consumes a large amount of computing resources. For example, we need to 
> create a job with 1000 mappers to scan a table with 1000 regions. This patch 
> supports one mapper using multiple regions as input.
>  
> The following new files are included in this patch:
> TableMultiRegionInputFormat.java
> TableMultiRegionInputFormatBase.java
> TableMultiRegionMapReduceUtil.java
> *TestTableMultiRegionInputFormatScan1.java
> *TestTableMultiRegionInputFormatScan2.java
> *TestTableMultiRegionInputFormatScanBase.java
> *TestTableMultiRegionMapReduceUtil.java
>  
> The files starting with * are tests.
> In order to support multiple regions for one mapper, we need a new 
> configuration property, "hbase.mapreduce.scan.regionspermapper".
> This is an example in which each mapper has 3 regions as input:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
> Text.class, Text.class, job);
>  
>   





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
For a Hadoop cluster, a job with a large HBase table as input always consumes 
a large amount of computing resources. For example, we need to create a job 
with 1000 mappers to scan a table with 1000 regions. This patch supports one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, we need a new 
configuration property, "hbase.mapreduce.scan.regionspermapper".

This is an example in which each mapper can have 3 regions as input:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  

  was:
For a Hadoop cluster, a job with a large HBase table as input always consumes 
a large amount of computing resources. For example, we need to create a job 
with 1000 mappers to scan a table with 1000 regions. This patch supports one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, users can add a 
configuration property, "hbase.mapreduce.scan.regionspermapper".

This is an example in which each mapper can have 3 regions as input:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  


> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 0.98.6.1
>Reporter: Weichen Ye
> Attachments: HBASE-12394.patch
>
>
> For a Hadoop cluster, a job with a large HBase table as input always 
> consumes a large amount of computing resources. For example, we need to 
> create a job with 1000 mappers to scan a table with 1000 regions. This patch 
> supports one mapper using multiple regions as input.
>  
> The following new files are included in this patch:
> TableMultiRegionInputFormat.java
> TableMultiRegionInputFormatBase.java
> TableMultiRegionMapReduceUtil.java
> *TestTableMultiRegionInputFormatScan1.java
> *TestTableMultiRegionInputFormatScan2.java
> *TestTableMultiRegionInputFormatScanBase.java
> *TestTableMultiRegionMapReduceUtil.java
>  
> The files starting with * are tests.
> In order to support multiple regions for one mapper, we need a new 
> configuration property, "hbase.mapreduce.scan.regionspermapper".
> This is an example in which each mapper can have 3 regions as input:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
> Text.class, Text.class, job);
>  
>   





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Attachment: HBASE-12394.patch

> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 0.98.6.1
>Reporter: Weichen Ye
> Attachments: HBASE-12394.patch
>
>
> For a Hadoop cluster, a job with a large HBase table as input always 
> consumes a large amount of computing resources. For example, we need to 
> create a job with 1000 mappers to scan a table with 1000 regions. This patch 
> supports one mapper using multiple regions as input.
>  
> The following new files are included in this patch:
> TableMultiRegionInputFormat.java
> TableMultiRegionInputFormatBase.java
> TableMultiRegionMapReduceUtil.java
> *TestTableMultiRegionInputFormatScan1.java
> *TestTableMultiRegionInputFormatScan2.java
> *TestTableMultiRegionInputFormatScanBase.java
> *TestTableMultiRegionMapReduceUtil.java
>  
> The files starting with * are tests.
> In order to support multiple regions for one mapper, users can add a 
> configuration property, "hbase.mapreduce.scan.regionspermapper".
> This is an example in which each mapper can have 3 regions as input:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
> Text.class, Text.class, job);
>  
>   





[jira] [Updated] (HBASE-12394) Support multiple regions as input to each mapper in map/reduce jobs

2014-10-31 Thread Weichen Ye (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weichen Ye updated HBASE-12394:
---
Description: 
For a Hadoop cluster, a job with a large HBase table as input always consumes 
a large amount of computing resources. For example, we need to create a job 
with 1000 mappers to scan a table with 1000 regions. This patch supports one 
mapper using multiple regions as input.
 
The following new files are included in this patch:
TableMultiRegionInputFormat.java
TableMultiRegionInputFormatBase.java
TableMultiRegionMapReduceUtil.java
*TestTableMultiRegionInputFormatScan1.java
*TestTableMultiRegionInputFormatScan2.java
*TestTableMultiRegionInputFormatScanBase.java
*TestTableMultiRegionMapReduceUtil.java
 
The files starting with * are tests.

In order to support multiple regions for one mapper, users can add a 
configuration property, "hbase.mapreduce.scan.regionspermapper".

This is an example in which each mapper can have 3 regions as input:
<property>
  <name>hbase.mapreduce.scan.regionspermapper</name>
  <value>3</value>
</property>

This is an example of the Java code:
TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
Text.class, Text.class, job);



 
  

> Support multiple regions as input to each mapper in map/reduce jobs
> ---
>
> Key: HBASE-12394
> URL: https://issues.apache.org/jira/browse/HBASE-12394
> Project: HBase
>  Issue Type: Improvement
>  Components: mapreduce
>Affects Versions: 0.98.6.1
>Reporter: Weichen Ye
>
> For a Hadoop cluster, a job with a large HBase table as input always 
> consumes a large amount of computing resources. For example, we need to 
> create a job with 1000 mappers to scan a table with 1000 regions. This patch 
> supports one mapper using multiple regions as input.
>  
> The following new files are included in this patch:
> TableMultiRegionInputFormat.java
> TableMultiRegionInputFormatBase.java
> TableMultiRegionMapReduceUtil.java
> *TestTableMultiRegionInputFormatScan1.java
> *TestTableMultiRegionInputFormatScan2.java
> *TestTableMultiRegionInputFormatScanBase.java
> *TestTableMultiRegionMapReduceUtil.java
>  
> The files starting with * are tests.
> In order to support multiple regions for one mapper, users can add a 
> configuration property, "hbase.mapreduce.scan.regionspermapper".
> This is an example in which each mapper can have 3 regions as input:
> <property>
>   <name>hbase.mapreduce.scan.regionspermapper</name>
>   <value>3</value>
> </property>
> This is an example of the Java code:
> TableMultiRegionMapReduceUtil.initTableMapperJob(tablename, scan, Map.class, 
> Text.class, Text.class, job);
>  
>   


