[jira] [Created] (HBASE-10063) Consider lighter weight generation of IVs for WAL entries

2013-12-02 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10063:
--

 Summary: Consider lighter weight generation of IVs for WAL entries
 Key: HBASE-10063
 URL: https://issues.apache.org/jira/browse/HBASE-10063
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0


In SecureWALCellCodec#EncryptedKvEncoder#write we get the IV for the entry from 
the secure RNG. This can be a heavyweight operation if not using an accelerated 
RNG. Consider something lighter weight. One option could be to create a random 
IV only once, store it in the header, and then increment it per cell. Correct 
decryption will depend on being able to correctly count entries even if 
skipping or rewinding.
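One way to sketch the counter idea (illustrative only; the class and names are hypothetical, not the actual SecureWALCellCodec code): draw a single random IV from the secure RNG, then derive per-cell IVs by incrementing it in place.

```java
import java.security.SecureRandom;
import java.util.Arrays;

// Sketch: one heavyweight secure-RNG call per WAL, then cheap per-cell
// derivation by treating the IV as an unsigned big-endian counter.
class CounterIv {

    // Increment the IV in place as an unsigned big-endian counter.
    static void increment(byte[] iv) {
        for (int i = iv.length - 1; i >= 0; i--) {
            if (++iv[i] != 0) { // stop once a byte does not wrap around to zero
                break;
            }
        }
    }

    public static void main(String[] args) {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv); // once, stored in the header
        byte[] previous = iv.clone();
        increment(iv);                    // per-cell step
        System.out.println(!Arrays.equals(previous, iv)); // true
    }
}
```

Decryption would recompute the same sequence, which is why correctness depends on counting entries accurately even when skipping or rewinding.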



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9884) Add Thrift and REST support for Visibility Labels

2013-12-02 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13836340#comment-13836340
 ] 

ramkrishna.s.vasudevan commented on HBASE-9884:
---

Thanks Andy.  Will commit this later in the evening.

 Add Thrift and REST support for Visibility Labels
 -

 Key: HBASE-9884
 URL: https://issues.apache.org/jira/browse/HBASE-9884
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9884.patch, HBASE-9884_1.patch, HBASE-9884_2.patch


 In HBASE-7663 the REST and Thrift support has been separated out because the 
 patch was becoming bigger.  This JIRA is to add the Thrift and REST part as a 
 separate patch.





[jira] [Updated] (HBASE-9399) Up the memstore flush size

2013-12-02 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9399:
--

Fix Version/s: 0.98.0

Can try this for 0.98

 Up the memstore flush size
 --

 Key: HBASE-9399
 URL: https://issues.apache.org/jira/browse/HBASE-9399
 Project: HBase
  Issue Type: Task
  Components: regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.98.0


 As heap sizes get bigger we are still recommending that users keep their 
 number of regions to a minimum.  This leads to lots of un-used memstore 
 memory.
 For example, I have a region server machine with 48 gigs of RAM, 30 gigs of 
 which are given to the region server process.  With current defaults, the 
 global memstore size reserved is 8 gigs.
 The per-region memstore size is 128mb right now.  That means I need 80 
 regions actively taking writes to reach the global memstore size.  That 
 number is way out of line with what our split policies currently give users; 
 they are given far fewer regions by default.
 We should up hbase.hregion.memstore.flush.size.  Ideally we should auto-tune 
 everything, but until then I think something like 512mb would help a lot with 
 our write throughput on clusters that don't have several hundred regions per 
 RS.





[jira] [Commented] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Amit Sela (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13836347#comment-13836347
 ] 

Amit Sela commented on HBASE-10061:
---

Take a look at https://issues.apache.org/jira/browse/HBASE-8158
I think throwing Exception if jar was not found was introduced there.


 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela

 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.





[jira] [Updated] (HBASE-9280) Integration tests should use compression.

2013-12-02 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9280:
--

Fix Version/s: 0.98.0
 Assignee: Andrew Purtell

Maybe a new monkey action that changes the encryption attribute of a column?

 Integration tests should use compression.
 -

 Key: HBASE-9280
 URL: https://issues.apache.org/jira/browse/HBASE-9280
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.0
Reporter: Elliott Clark
Assignee: Andrew Purtell
 Fix For: 0.98.0








[jira] [Comment Edited] (HBASE-9280) Integration tests should use compression.

2013-12-02 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13836354#comment-13836354
 ] 

Andrew Purtell edited comment on HBASE-9280 at 12/2/13 8:23 AM:


Maybe a new monkey action that changes the compression attribute of a column?


was (Author: apurtell):
Maybe a new monkey action that changes the encryption attribute of a column?

 Integration tests should use compression.
 -

 Key: HBASE-9280
 URL: https://issues.apache.org/jira/browse/HBASE-9280
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.98.0
Reporter: Elliott Clark
Assignee: Andrew Purtell
 Fix For: 0.98.0








[jira] [Updated] (HBASE-9211) ERROR: undefined method `message' for nil:NilClass in the shell on error

2013-12-02 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9211:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Already committed to trunk, resolving.

 ERROR: undefined method `message' for nil:NilClass in the shell on error
 --

 Key: HBASE-9211
 URL: https://issues.apache.org/jira/browse/HBASE-9211
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.98.0
Reporter: Jean-Daniel Cryans
Assignee: Ted Yu
 Fix For: 0.98.0

 Attachments: 9211-v2.txt


 Not sure where this is coming from but since today if I try to create a table 
 that already exists in the shell I get:
 bq. ERROR: undefined method `message' for nil:NilClass
 instead of the normal exception.





[jira] [Updated] (HBASE-9502) HStore.seekToScanner should handle magic value

2013-12-02 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9502:
--

   Resolution: Fixed
Fix Version/s: 0.98.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

bq. So, I can run the test and it will fail w/o this patch?

Yes.

 HStore.seekToScanner should handle magic value
 --

 Key: HBASE-9502
 URL: https://issues.apache.org/jira/browse/HBASE-9502
 Project: HBase
  Issue Type: Bug
  Components: regionserver, Scanners
Affects Versions: 0.98.0, 0.96.1
Reporter: Liang Xie
Assignee: Liang Xie
 Fix For: 0.98.0

 Attachments: 9502-v2.patch, HBASE-9502-v2.txt, HBASE-9502.txt


 Due to a faked key, seekTo may return -2, and HStore.seekToScanner 
 should handle this corner case.





[jira] [Commented] (HBASE-9502) HStore.seekToScanner should handle magic value

2013-12-02 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13836360#comment-13836360
 ] 

Andrew Purtell commented on HBASE-9502:
---

Thanks for the patch [~xieliang007]

 HStore.seekToScanner should handle magic value
 --

 Key: HBASE-9502
 URL: https://issues.apache.org/jira/browse/HBASE-9502
 Project: HBase
  Issue Type: Bug
  Components: regionserver, Scanners
Affects Versions: 0.98.0, 0.96.1
Reporter: Liang Xie
Assignee: Liang Xie
 Fix For: 0.98.0

 Attachments: 9502-v2.patch, HBASE-9502-v2.txt, HBASE-9502.txt


 Due to a faked key, seekTo may return -2, and HStore.seekToScanner 
 should handle this corner case.





[jira] [Updated] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Amit Sela (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Sela updated HBASE-10061:
--

Attachment: HBASE-10061.patch

I'm adding a patch that checks whether the jar is null or empty in updateMap, 
and returns null instead of throwing an Exception when the jar is not found.
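A minimal sketch of that guard (the class and the map bookkeeping below are hypothetical stand-ins for the real TableMapReduceUtil internals, not the patch itself):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the fix: skip the bookkeeping when getJar found nothing,
// instead of letting updateMap dereference a null jar path.
class JarGuard {

    // Stand-in for updateMap: record the jar only when one was actually found.
    static void updateMapIfFound(String jar, Map<String, String> packagedClasses) {
        if (jar == null || jar.isEmpty()) {
            return; // no jar located; previously this case threw an NPE
        }
        packagedClasses.put("some.packaged.Class", jar); // placeholder entry
    }

    public static void main(String[] args) {
        Map<String, String> packagedClasses = new HashMap<>();
        updateMapIfFound(null, packagedClasses); // safe now, no NPE
        System.out.println(packagedClasses.size()); // 0
    }
}
```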

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.





[jira] [Created] (HBASE-10064) AggregateClient.validateParameters will throw NullPointerException when set startRow/stopRow of scan to null

2013-12-02 Thread cuijianwei (JIRA)
cuijianwei created HBASE-10064:
--

 Summary: AggregateClient.validateParameters will throw 
NullPointerException when set startRow/stopRow of scan to null
 Key: HBASE-10064
 URL: https://issues.apache.org/jira/browse/HBASE-10064
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.14
Reporter: cuijianwei


When using methods such as max(...) and min(...) in AggregationClient, we pass 
a Scan as a parameter. These methods will throw a NullPointerException if users 
invoke scan.setStartRow(null) or scan.setStopRow(null) before passing the scan. 
The NullPointerException is thrown by validateParameters(Scan scan), which is 
invoked before sending requests to the server. The implementation of 
validateParameters is:
{code}
  private void validateParameters(Scan scan) throws IOException {
    if (scan == null
        || (Bytes.equals(scan.getStartRow(), scan.getStopRow()) && !Bytes
            .equals(scan.getStartRow(), HConstants.EMPTY_START_ROW))
        || ((Bytes.compareTo(scan.getStartRow(), scan.getStopRow()) > 0) &&
            !Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW))) {
      throw new IOException(
          "Agg client Exception: Startrow should be smaller than Stoprow");
    } else if (scan.getFamilyMap().size() != 1) {
      throw new IOException("There must be only one family.");
    }
  }
{code}
"Bytes.equals(scan.getStartRow(), HConstants.EMPTY_START_ROW)" will throw a 
NullPointerException if the startRow of the scan is set to null.





[jira] [Commented] (HBASE-7091) support custom GC options in hbase-env.sh

2013-12-02 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13836375#comment-13836375
 ] 

Nicolas Liochon commented on HBASE-7091:


IIRC, I had the issue on both trunk & 0.96.
I've just tested it again on 0.96 head, by doing:
{code}
$export HBASE_OPTS=qsfijqspfjpqsf
{code}

without any change, the value is ignored, and I can do a ??bin/start-hbase.sh??:
{noformat}
$bin/start-hbase.sh 
starting master, logging to 
/home/liochon/dev/hbase/bin/../logs/hbase-liochon-master-balzac.out
{noformat}

If I change it to:
{noformat}
-export HBASE_OPTS=-XX:+UseConcMarkSweepGC
+#export HBASE_OPTS=-XX:+UseConcMarkSweepGC
{noformat}

then the value is not ignored anymore and I have:

{noformat}
$bin/start-hbase.sh 
Error: Could not find or load main class qsfijqspfjpqsf
Error: Could not find or load main class qsfijqspfjpqsf
starting master, logging to 
/home/liochon/dev/hbase/bin/../logs/hbase-liochon-master-balzac.out
Error: Could not find or load main class qsfijqspfjpqsf
localhost: starting regionserver, logging to 
/home/liochon/dev/hbase/bin/../logs/hbase-liochon-regionserver-balzac.out

$cat /home/liochon/dev/hbase/bin/../logs/hbase-liochon-master-balzac.out
Error: Could not find or load main class qsfijqspfjpqsf
{noformat}


My understanding is that a user should not modify HBASE_OPTS?

 support custom GC options in hbase-env.sh
 -

 Key: HBASE-7091
 URL: https://issues.apache.org/jira/browse/HBASE-7091
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.94.4
Reporter: Jesse Yates
Assignee: Jesse Yates
  Labels: newbie
 Fix For: 0.94.4, 0.95.0

 Attachments: hbase-7091-v1.patch


 When running things like bin/start-hbase and bin/hbase-daemon.sh start 
 [master|regionserver|etc] we end up setting HBASE_OPTS property a couple 
 times via calling hbase-env.sh. This is generally not a problem for most 
 cases, but when you want to set your own GC log properties, one would think 
 you should set HBASE_GC_OPTS, which get added to HBASE_OPTS. 
 NOPE! That would make too much sense.
 Running bin/hbase-daemons.sh will run bin/hbase-daemon.sh with the daemons it 
 needs to start. Each time through hbase-daemon.sh we also call bin/hbase. 
 This isn't a big deal except for each call to hbase-daemon.sh, we also source 
 hbase-env.sh twice (once in the script and once in bin/hbase). This is 
 important for my next point.
 Note that to turn on GC logging, you uncomment:
 {code}
 # export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps $HBASE_GC_OPTS"
 {code}
 and then to log to a gc file for each server, you then uncomment:
 {code}
 # export HBASE_USE_GC_LOGFILE=true
 {code}
 in hbase-env.sh
 On the first pass through hbase-daemon.sh, HBASE_GC_OPTS isn't set, so 
 HBASE_OPTS doesn't get anything funky, but we set HBASE_USE_GC_LOGFILE, which 
 then sets HBASE_GC_OPTS to the log file (-Xloggc:...). Then in bin/hbase we 
 again run hbase-env.sh, which now has HBASE_GC_OPTS set, adding the GC file. 
 This isn't a general problem because HBASE_OPTS is set by prefixing with the 
 existing HBASE_OPTS (e.g. HBASE_OPTS="$HBASE_OPTS ..."), allowing easy 
 updating. However, GC opts don't work the same and this is really odd 
 behavior when you want to set your own GC opts, which can include turning on 
 GC log rolling (yes, yes, they really are jvm opts, but they ought to support 
 their own param, to help minimize clutter).
 The simple version of this patch will just add an idempotent GC option to 
 hbase-env.sh and some comments that uncommenting 
 {code}
 # export HBASE_USE_GC_LOGFILE=true
 {code}
 will lead to a custom gc log file per server (along with an example name), so 
 you don't need to set -Xloggc.
 The more complex solution does the above and also solves the multiple calls 
 to hbase-env.sh so we can be sane about how all this works. Note that to fix 
 this, hbase-daemon.sh just needs to read in HBASE_USE_GC_LOGFILE after 
 sourcing hbase-env.sh and then update HBASE_OPTS. Oh and also not source 
 hbase-env.sh in bin/hbase. 
 Even further, we might want to consider adding options just for cases where 
 we don't need gc logging - i.e. the shell, the config reading tool, hbck, 
 etc. This is the hardest version to handle since the first couple will 
 willy-nilly apply the gc options.





[jira] [Updated] (HBASE-10064) AggregateClient.validateParameters will throw NullPointerException when set startRow/stopRow of scan to null

2013-12-02 Thread cuijianwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cuijianwei updated HBASE-10064:
---

Status: Patch Available  (was: Open)

 AggregateClient.validateParameters will throw NullPointerException when set 
 startRow/stopRow of scan to null
 

 Key: HBASE-10064
 URL: https://issues.apache.org/jira/browse/HBASE-10064
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.14
Reporter: cuijianwei

 When using methods such as max(...) and min(...) in AggregationClient, we 
 pass a Scan as a parameter. These methods will throw a NullPointerException 
 if users invoke scan.setStartRow(null) or scan.setStopRow(null) before 
 passing the scan. The NullPointerException is thrown by 
 validateParameters(Scan scan), which is invoked before sending requests to 
 the server. The implementation of validateParameters is:
 {code}
   private void validateParameters(Scan scan) throws IOException {
     if (scan == null
         || (Bytes.equals(scan.getStartRow(), scan.getStopRow()) && !Bytes
             .equals(scan.getStartRow(), HConstants.EMPTY_START_ROW))
         || ((Bytes.compareTo(scan.getStartRow(), scan.getStopRow()) > 0) &&
             !Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW))) {
       throw new IOException(
           "Agg client Exception: Startrow should be smaller than Stoprow");
     } else if (scan.getFamilyMap().size() != 1) {
       throw new IOException("There must be only one family.");
     }
   }
 {code}
 "Bytes.equals(scan.getStartRow(), HConstants.EMPTY_START_ROW)" will throw a 
 NullPointerException if the startRow of the scan is set to null.





[jira] [Updated] (HBASE-10064) AggregateClient.validateParameters will throw NullPointerException when set startRow/stopRow of scan to null

2013-12-02 Thread cuijianwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cuijianwei updated HBASE-10064:
---

Status: Open  (was: Patch Available)

 AggregateClient.validateParameters will throw NullPointerException when set 
 startRow/stopRow of scan to null
 

 Key: HBASE-10064
 URL: https://issues.apache.org/jira/browse/HBASE-10064
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.14
Reporter: cuijianwei

 When using methods such as max(...) and min(...) in AggregationClient, we 
 pass a Scan as a parameter. These methods will throw a NullPointerException 
 if users invoke scan.setStartRow(null) or scan.setStopRow(null) before 
 passing the scan. The NullPointerException is thrown by 
 validateParameters(Scan scan), which is invoked before sending requests to 
 the server. The implementation of validateParameters is:
 {code}
   private void validateParameters(Scan scan) throws IOException {
     if (scan == null
         || (Bytes.equals(scan.getStartRow(), scan.getStopRow()) && !Bytes
             .equals(scan.getStartRow(), HConstants.EMPTY_START_ROW))
         || ((Bytes.compareTo(scan.getStartRow(), scan.getStopRow()) > 0) &&
             !Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW))) {
       throw new IOException(
           "Agg client Exception: Startrow should be smaller than Stoprow");
     } else if (scan.getFamilyMap().size() != 1) {
       throw new IOException("There must be only one family.");
     }
   }
 {code}
 "Bytes.equals(scan.getStartRow(), HConstants.EMPTY_START_ROW)" will throw a 
 NullPointerException if the startRow of the scan is set to null.





[jira] [Updated] (HBASE-10064) AggregateClient.validateParameters will throw NullPointerException when set startRow/stopRow of scan to null

2013-12-02 Thread cuijianwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cuijianwei updated HBASE-10064:
---

Attachment: HBASE-10064-0.94-v1.patch

This patch resolves the problem by setting startRow/stopRow to 
EMPTY_START_ROW/EMPTY_END_ROW if the user invokes 
scan.setStartRow(null)/scan.setStopRow(null)
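The idea can be sketched stand-alone as follows (the sentinels below stand in for HConstants, and the helper is hypothetical, not the patch itself):

```java
// Sketch: normalize a null startRow/stopRow to the empty-row sentinels
// before validation, so Bytes.equals never receives a null argument.
class NullSafeRows {

    // Stand-ins for HConstants.EMPTY_START_ROW / HConstants.EMPTY_END_ROW.
    static final byte[] EMPTY_START_ROW = new byte[0];
    static final byte[] EMPTY_END_ROW = new byte[0];

    // Map null to the sentinel the validation code already understands.
    static byte[] orSentinel(byte[] row, byte[] sentinel) {
        return row == null ? sentinel : row;
    }

    public static void main(String[] args) {
        byte[] startRow = orSentinel(null, EMPTY_START_ROW); // previously an NPE path
        System.out.println(startRow.length); // 0
    }
}
```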

 AggregateClient.validateParameters will throw NullPointerException when set 
 startRow/stopRow of scan to null
 

 Key: HBASE-10064
 URL: https://issues.apache.org/jira/browse/HBASE-10064
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.14
Reporter: cuijianwei
 Attachments: HBASE-10064-0.94-v1.patch


 When using methods such as max(...) and min(...) in AggregationClient, we 
 pass a Scan as a parameter. These methods will throw a NullPointerException 
 if users invoke scan.setStartRow(null) or scan.setStopRow(null) before 
 passing the scan. The NullPointerException is thrown by 
 validateParameters(Scan scan), which is invoked before sending requests to 
 the server. The implementation of validateParameters is:
 {code}
   private void validateParameters(Scan scan) throws IOException {
     if (scan == null
         || (Bytes.equals(scan.getStartRow(), scan.getStopRow()) && !Bytes
             .equals(scan.getStartRow(), HConstants.EMPTY_START_ROW))
         || ((Bytes.compareTo(scan.getStartRow(), scan.getStopRow()) > 0) &&
             !Bytes.equals(scan.getStopRow(), HConstants.EMPTY_END_ROW))) {
       throw new IOException(
           "Agg client Exception: Startrow should be smaller than Stoprow");
     } else if (scan.getFamilyMap().size() != 1) {
       throw new IOException("There must be only one family.");
     }
   }
 {code}
 "Bytes.equals(scan.getStartRow(), HConstants.EMPTY_START_ROW)" will throw a 
 NullPointerException if the startRow of the scan is set to null.





[jira] [Created] (HBASE-10065) Stronger validation of key unwrapping

2013-12-02 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10065:
--

 Summary: Stronger validation of key unwrapping
 Key: HBASE-10065
 URL: https://issues.apache.org/jira/browse/HBASE-10065
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0


In EncryptionUtil#unwrapKey we use a CRC32 to validate the successful 
unwrapping of a data key. I chose a CRC32 to limit overhead. There is only a 1 
in 2^32 chance of a random collision, low enough to be extremely unlikely. 
However, I was talking with my colleague Jerry Chen today about this. A 
cryptographic hash would lower the probability to essentially zero, and since 
we only wrap data keys once per HColumnDescriptor and once per HFile, the CRC 
really only saves a few bytes here and there. Might as well use the SHA of the 
data key, and in addition consider running AES in GCM mode to cover that hash 
as additional authenticated data.
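As an illustration of the proposed check (a hypothetical helper, not the actual EncryptionUtil code), a SHA-256 digest of the unwrapped key can be compared against a digest stored at wrap time:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: validate key unwrapping with a cryptographic hash instead of CRC32.
class KeyCheck {

    // SHA-256 of the raw key bytes; SHA-256 is guaranteed on every JRE.
    static byte[] sha256(byte[] key) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(key);
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // cannot happen for SHA-256
        }
    }

    public static void main(String[] args) {
        byte[] key = "data key bytes".getBytes(StandardCharsets.UTF_8);
        byte[] stored = sha256(key); // written alongside the wrapped key
        // Constant-time comparison at unwrap time.
        System.out.println(MessageDigest.isEqual(stored, sha256(key))); // true
    }
}
```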





[jira] [Created] (HBASE-10066) Use ByteArrayOutputStream#writeTo where appropriate

2013-12-02 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10066:
--

 Summary: Use ByteArrayOutputStream#writeTo where appropriate
 Key: HBASE-10066
 URL: https://issues.apache.org/jira/browse/HBASE-10066
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0


We can avoid some unnecessary copies by using ByteArrayOutputStream#writeTo 
instead of #toByteArray followed by write(). Found this in a few places.
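The difference can be seen in a small stand-alone example (not HBase code): #toByteArray allocates a copy of the whole internal buffer before the write, while #writeTo streams it directly.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Sketch: avoid the intermediate array from toByteArray() by writing the
// buffer's contents straight into the destination stream.
class WriteToDemo {

    static int copySize() {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        buf.write(new byte[] {1, 2, 3}, 0, 3); // fill the source buffer

        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try {
            // Before: sink.write(buf.toByteArray()); // copies the buffer first
            buf.writeTo(sink);                        // no intermediate array
        } catch (IOException e) {
            throw new AssertionError(e); // cannot happen for an in-memory sink
        }
        return sink.size();
    }

    public static void main(String[] args) {
        System.out.println(copySize()); // 3
    }
}
```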





[jira] [Updated] (HBASE-10000) Initiate lease recovery for outstanding WAL files at the very beginning of recovery

2013-12-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10000:
---

Fix Version/s: 0.98.1

 Initiate lease recovery for outstanding WAL files at the very beginning of 
 recovery
 ---

 Key: HBASE-10000
 URL: https://issues.apache.org/jira/browse/HBASE-10000
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 0.98.1

 Attachments: 10000-recover-ts-with-pb-2.txt, 
 10000-recover-ts-with-pb-2.txt, 10000-v1.txt, 10000-v4.txt, 10000-v5.txt, 
 10000-v6.txt


 At the beginning of recovery, master can send lease recovery requests 
 concurrently for outstanding WAL files using a thread pool.
 Each split worker would first check whether the WAL file it processes is 
 closed.
 Thanks to Nicolas Liochon and Jeffery, discussion with whom gave rise to this 
 idea. 





[jira] [Updated] (HBASE-8039) Make HDFS replication number configurable for a column family

2013-12-02 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-8039:
--

Affects Version/s: (was: 0.95.1)
Fix Version/s: 0.98.0
 Assignee: Andrew Purtell

 Make HDFS replication number configurable for a column family
 -

 Key: HBASE-8039
 URL: https://issues.apache.org/jira/browse/HBASE-8039
 Project: HBase
  Issue Type: Improvement
  Components: HFile
Affects Versions: 0.98.0
Reporter: Maryann Xue
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0


 Allow users to decide which column family's data is more important and which 
 is less important by specifying a replication number per family instead of 
 using the default replication number.





[jira] [Updated] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs

2013-12-02 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9117:
--

Status: Open  (was: Patch Available)

 Remove HTablePool and all HConnection pooling related APIs
 --

 Key: HBASE-9117
 URL: https://issues.apache.org/jira/browse/HBASE-9117
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Nick Dimiduk
 Fix For: 0.98.0

 Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch


 The recommended way is now:
 # Create an HConnection: HConnectionManager.createConnection(...)
 # Create a light HTable: HConnection.getTable(...)
 # table.close()
 # connection.close()
 All other API and pooling will be removed.





[jira] [Updated] (HBASE-9117) Remove HTablePool and all HConnection pooling related APIs

2013-12-02 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9117:
--

Status: Patch Available  (was: Open)

 Remove HTablePool and all HConnection pooling related APIs
 --

 Key: HBASE-9117
 URL: https://issues.apache.org/jira/browse/HBASE-9117
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Nick Dimiduk
 Fix For: 0.98.0

 Attachments: HBASE-9117.00.patch, HBASE-9117.01.patch


 The recommended way is now:
 # Create an HConnection: HConnectionManager.createConnection(...)
 # Create a light HTable: HConnection.getTable(...)
 # table.close()
 # connection.close()
 All other API and pooling will be removed.





[jira] [Commented] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13836401#comment-13836401
 ] 

Ted Yu commented on HBASE-10061:


Can you attach a trunk patch?

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.





[jira] [Commented] (HBASE-9718) Add a test scope dependency on org.slf4j:slf4j-api to hbase-client

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13836404#comment-13836404
 ] 

Hudson commented on HBASE-9718:
---

SUCCESS: Integrated in hbase-0.96 #210 (See 
[https://builds.apache.org/job/hbase-0.96/210/])
HBASE-9718. Add a test scope dependency on org.slf4j:slf4j-api to hbase-client 
(apurtell: rev 1546893)
* /hbase/branches/0.96/hbase-client/pom.xml


 Add a test scope dependency on org.slf4j:slf4j-api to hbase-client
 --

 Key: HBASE-9718
 URL: https://issues.apache.org/jira/browse/HBASE-9718
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0, 0.96.1

 Attachments: 9718.patch


 hbase-client needs a test scope dependency on org.slf4j:slf4j-api in its POM. 
 Without this change at least Eclipse cannot resolve org.slf4j.Logger from 
 RecoverableZooKeeper - the ZooKeeper classes use it - and so the 
 'hbase-client' project will not build. 





[jira] [Created] (HBASE-10067) Filters are not applied to Coprocessor if columns are added to the scanner

2013-12-02 Thread Vikram Singh Chandel (JIRA)
Vikram Singh Chandel created HBASE-10067:


 Summary: Filters are not applied to Coprocessor if columns are 
added to the scanner
 Key: HBASE-10067
 URL: https://issues.apache.org/jira/browse/HBASE-10067
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, Filters, Scanners
Affects Versions: 0.94.6
 Environment: Linux 2.6.32-279.11.1.el6.x86_64
Reporter: Vikram Singh Chandel


When columns are added to the scanner in a coprocessor, the filtering does not 
happen and an entire scan of the table is done.

Total Rows in Table: 8.1 million
Expected Result: 8788
Actual Result: 8.1 million

Code Snippet:

Scan scan = new Scan();
scan.addColumn(family, qualifier); // Entire scan happens, filters are ignored

SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
    CompareOp.EQUAL, val);
filterOne.setFilterIfMissing(true);

FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
    Arrays.asList((Filter) filterOne));

scan.setFilter(filter); // Not working

If addFamily is used, it works:

scan.addFamily(family);
scan.setFilter(filter); // Works






[jira] [Updated] (HBASE-10067) Filters are not applied to Coprocessor if columns are added to the scanner

2013-12-02 Thread Vikram Singh Chandel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Singh Chandel updated HBASE-10067:
-

Description: 
When columns are added to the scanner in a coprocessor, the filtering does not 
happen and an entire scan of the table is done.

Expected behaviour: filters should be applied when particular columns are added 
to the scanner.
Actual behaviour: filters are not applied and the entire result set is returned.

Code Snippet:

Scan scan = new Scan();
scan.addColumn(family, qualifier); // Entire scan happens, filters are ignored

SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
    CompareOp.EQUAL, val);
filterOne.setFilterIfMissing(true);

FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
    Arrays.asList((Filter) filterOne));

scan.setFilter(filter); // Not working

If addFamily is used, it works:

scan.addFamily(family);
scan.setFilter(filter); // Works


  was:
While applying columns to the scanner in a coprocessor, filtering does not
happen and a full scan of the table is done.

Total Rows in Table: 8.1 million
Expected Result: 8788
Actual Result: 8.1 million

Code Snippet:

Scan scan = new Scan();
scan.addColumn(family, qualifier); // entire scan happens, filters are ignored

SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
    CompareOp.EQUAL, val);
filterOne.setFilterIfMissing(true);

FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
    Arrays.asList((Filter) filterOne));

scan.setFilter(filter); // not working

If addFamily is used, it works:

scan.addFamily(family);
scan.setFilter(filter); // works



 Filters are not applied to Coprocessor if columns are added to the scanner
 --

 Key: HBASE-10067
 URL: https://issues.apache.org/jira/browse/HBASE-10067
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, Filters, Scanners
Affects Versions: 0.94.6
 Environment: Linux 2.6.32-279.11.1.el6.x86_64
Reporter: Vikram Singh Chandel

 While applying columns to the scanner in a coprocessor, filtering does not 
 happen and a full scan of the table is done.
 Expected behaviour: Filters should be applied when particular columns are 
 added to the scanner.
 Actual behaviour: Filters are not applied; the entire result set is returned.
 Code Snippet:
   Scan scan = new Scan();
   scan.addColumn(family, qualifier); // entire scan happens, filters are ignored
   SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
     CompareOp.EQUAL, val);
   filterOne.setFilterIfMissing(true);
   FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
     Arrays.asList((Filter) filterOne));
   scan.setFilter(filter); // not working
 If addFamily is used, it works:
   scan.addFamily(family);
   scan.setFilter(filter); // works





[jira] [Updated] (HBASE-10067) Filters are not applied if columns are added to the scanner

2013-12-02 Thread Vikram Singh Chandel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Singh Chandel updated HBASE-10067:
-

Summary: Filters are not applied if columns are added to the scanner  (was: 
Filters are not applied to Coprocessor if columns are added to the scanner)

 Filters are not applied if columns are added to the scanner
 ---

 Key: HBASE-10067
 URL: https://issues.apache.org/jira/browse/HBASE-10067
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, Filters, Scanners
Affects Versions: 0.94.6
 Environment: Linux 2.6.32-279.11.1.el6.x86_64
Reporter: Vikram Singh Chandel

 While applying columns to the scanner in a coprocessor, filtering does not 
 happen and a full scan of the table is done.
 Expected behaviour: Filters should be applied when particular columns are 
 added to the scanner.
 Actual behaviour: Filters are not applied; the entire result set is returned.
 Code Snippet:
   Scan scan = new Scan();
   scan.addColumn(family, qualifier); // entire scan happens, filters are ignored
   SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
     CompareOp.EQUAL, val);
   filterOne.setFilterIfMissing(true);
   FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
     Arrays.asList((Filter) filterOne));
   scan.setFilter(filter); // not working
 If addFamily is used, it works:
   scan.addFamily(family);
   scan.setFilter(filter); // works





[jira] [Updated] (HBASE-10067) Filters are not applied if columns are added to the scanner

2013-12-02 Thread Vikram Singh Chandel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Singh Chandel updated HBASE-10067:
-

Description: 
While applying columns to the scanner, filtering does not happen and a full
scan of the table is done.

Expected behaviour: Filters should be applied when particular columns are
added to the scanner.
Actual behaviour: Filters are not applied; the entire result set is returned.

Code Snippet:

Scan scan = new Scan();
scan.addColumn(family, qualifier); // entire scan happens, filters are ignored

SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
    CompareOp.EQUAL, val);
filterOne.setFilterIfMissing(true);

FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
    Arrays.asList((Filter) filterOne));

scan.setFilter(filter); // not working

If addFamily is used, it works:

scan.addFamily(family);
scan.setFilter(filter); // works


  was:
While applying columns to the scanner in a coprocessor, filtering does not
happen and a full scan of the table is done.

Expected behaviour: Filters should be applied when particular columns are
added to the scanner.
Actual behaviour: Filters are not applied; the entire result set is returned.

Code Snippet:

Scan scan = new Scan();
scan.addColumn(family, qualifier); // entire scan happens, filters are ignored

SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
    CompareOp.EQUAL, val);
filterOne.setFilterIfMissing(true);

FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
    Arrays.asList((Filter) filterOne));

scan.setFilter(filter); // not working

If addFamily is used, it works:

scan.addFamily(family);
scan.setFilter(filter); // works



 Filters are not applied if columns are added to the scanner
 ---

 Key: HBASE-10067
 URL: https://issues.apache.org/jira/browse/HBASE-10067
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, Filters, Scanners
Affects Versions: 0.94.6
 Environment: Linux 2.6.32-279.11.1.el6.x86_64
Reporter: Vikram Singh Chandel

 While applying columns to the scanner, filtering does not happen and a full 
 scan of the table is done.
 Expected behaviour: Filters should be applied when particular columns are 
 added to the scanner.
 Actual behaviour: Filters are not applied; the entire result set is returned.
 Code Snippet:
   Scan scan = new Scan();
   scan.addColumn(family, qualifier); // entire scan happens, filters are ignored
   SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
     CompareOp.EQUAL, val);
   filterOne.setFilterIfMissing(true);
   FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
     Arrays.asList((Filter) filterOne));
   scan.setFilter(filter); // not working
 If addFamily is used, it works:
   scan.addFamily(family);
   scan.setFilter(filter); // works





[jira] [Updated] (HBASE-9978) The client retries even if the method is not present on the server

2013-12-02 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-9978:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to 96 and trunk, thanks!

 The client retries even if the method is not present on the server
 --

 Key: HBASE-9978
 URL: https://issues.apache.org/jira/browse/HBASE-9978
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9978-v0.patch


 If the RpcServer is not able to find the requested method, it throws an 
 UnsupportedOperationException, but since this is not wrapped in a DoNotRetry 
 exception, the client keeps retrying even though the operation doesn't exist.
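The fix relies on the client-side convention that exceptions marked "do not retry" abort the retry loop. Below is a minimal sketch of that convention with hypothetical class names, not the real HBase client internals:

```java
// Sketch (hypothetical names): a retry loop that retries generic failures
// but gives up immediately on exceptions marked as non-retryable, which is
// why the server should wrap "unknown method" errors in a DoNotRetry-style
// exception.
public class RetryDemo {
    static class DoNotRetryException extends RuntimeException {
        DoNotRetryException(String msg) { super(msg); }
    }

    interface Call { void run(); }

    // Returns the number of attempts made before giving up or succeeding.
    static int callWithRetries(Call call, int maxAttempts) {
        int attempts = 0;
        while (true) {
            attempts++;
            try {
                call.run();
                return attempts;
            } catch (DoNotRetryException e) {
                return attempts;                    // fail fast: no retry
            } catch (RuntimeException e) {
                if (attempts >= maxAttempts) return attempts;
            }
        }
    }

    public static void main(String[] args) {
        // Unwrapped UnsupportedOperationException: retried until exhausted.
        int a = callWithRetries(
            () -> { throw new UnsupportedOperationException("no such method"); }, 5);
        // Wrapped as DoNotRetry: a single attempt.
        int b = callWithRetries(
            () -> { throw new DoNotRetryException("no such method"); }, 5);
        System.out.println(a + " " + b); // 5 1
    }
}
```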





[jira] [Commented] (HBASE-9631) add murmur3 hash

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836431#comment-13836431
 ] 

Hudson commented on HBASE-9631:
---

SUCCESS: Integrated in HBase-TRUNK #4706 (See 
[https://builds.apache.org/job/HBase-TRUNK/4706/])
HBASE-9631. Add murmur3 hash (apurtell: rev 1546894)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java


 add murmur3 hash
 

 Key: HBASE-9631
 URL: https://issues.apache.org/jira/browse/HBASE-9631
 Project: HBase
  Issue Type: New Feature
  Components: util
Affects Versions: 0.98.0
Reporter: Liang Xie
Assignee: Liang Xie
 Fix For: 0.98.0

 Attachments: HBase-9631-v2.txt, HBase-9631.txt


 MurmurHash3 is the successor to MurmurHash2. It comes in 3 variants - a 
 32-bit version that targets low latency for hash table use and two 128-bit 
 versions for generating unique identifiers for large blocks of data, one each 
 for x86 and x64 platforms.
 Several open source projects have already added murmur3, such as Cassandra 
 and Mahout.
 I just ported the murmur3 implementation from MAHOUT-862. For compatibility, 
 let's keep the default hash algorithm (murmur2) unchanged.
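For reference, the 32-bit variant is small enough to sketch in full. This is an illustrative port of the public-domain MurmurHash3 x86_32 reference algorithm, not the MAHOUT-862 code being committed here:

```java
public class Murmur3Demo {
    // MurmurHash3 x86_32: body of 4-byte blocks, a tail of up to 3 bytes,
    // then a finalization mix. Illustrative port, not the HBase patch.
    public static int hash32(byte[] data, int seed) {
        final int c1 = 0xcc9e2d51, c2 = 0x1b873593;
        int h = seed;
        int nblocks = data.length / 4;

        // Body: process 4-byte little-endian blocks.
        for (int i = 0; i < nblocks; i++) {
            int j = i * 4;
            int k = (data[j] & 0xff)
                  | ((data[j + 1] & 0xff) << 8)
                  | ((data[j + 2] & 0xff) << 16)
                  | ((data[j + 3] & 0xff) << 24);
            k *= c1;
            k = Integer.rotateLeft(k, 15);
            k *= c2;
            h ^= k;
            h = Integer.rotateLeft(h, 13);
            h = h * 5 + 0xe6546b64;
        }

        // Tail: up to 3 remaining bytes.
        int k = 0;
        int tail = nblocks * 4;
        switch (data.length & 3) {
            case 3: k ^= (data[tail + 2] & 0xff) << 16; // fall through
            case 2: k ^= (data[tail + 1] & 0xff) << 8;  // fall through
            case 1: k ^= (data[tail] & 0xff);
                    k *= c1;
                    k = Integer.rotateLeft(k, 15);
                    k *= c2;
                    h ^= k;
        }

        // Finalization: mix in the length, then avalanche.
        h ^= data.length;
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(hash32(new byte[0], 0))); // 0
        System.out.println(Integer.toHexString(hash32("hello murmur".getBytes(), 42)));
    }
}
```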





[jira] [Commented] (HBASE-9718) Add a test scope dependency on org.slf4j:slf4j-api to hbase-client

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836433#comment-13836433
 ] 

Hudson commented on HBASE-9718:
---

SUCCESS: Integrated in HBase-TRUNK #4706 (See 
[https://builds.apache.org/job/HBase-TRUNK/4706/])
HBASE-9718. Add a test scope dependency on org.slf4j:slf4j-api to hbase-client 
(apurtell: rev 1546892)
* /hbase/trunk/hbase-client/pom.xml


 Add a test scope dependency on org.slf4j:slf4j-api to hbase-client
 --

 Key: HBASE-9718
 URL: https://issues.apache.org/jira/browse/HBASE-9718
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0, 0.96.1

 Attachments: 9718.patch


 hbase-client needs a test scope dependency on org.slf4j:slf4j-api in its POM. 
 Without this change at least Eclipse cannot resolve org.slf4j.Logger from 
 RecoverableZooKeeper - the ZooKeeper classes use it - and so the 
 'hbase-client' project will not build. 





[jira] [Commented] (HBASE-9856) Fix some findbugs Performance Warnings

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836432#comment-13836432
 ] 

Hudson commented on HBASE-9856:
---

SUCCESS: Integrated in HBase-TRUNK #4706 (See 
[https://builds.apache.org/job/HBase-TRUNK/4706/])
HBASE-9856 Fix some findbugs Performance Warnings (tedyu: rev 1546888)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java


 Fix some findbugs Performance Warnings
 --

 Key: HBASE-9856
 URL: https://issues.apache.org/jira/browse/HBASE-9856
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 0.98.0

 Attachments: 9856-v1.txt, 9856-v2.txt


 These are the warnings to be fixed:
 {code}
 SIC Should org.apache.hadoop.hbase.regionserver.HRegion$RowLock be a _static_ 
 inner class?
 UPM Private method 
 org.apache.hadoop.hbase.security.access.AccessController.requirePermission(String,
  String, Permission$Action[]) is never called
 WMI Method 
 org.apache.hadoop.hbase.regionserver.wal.WALEditsReplaySink.replayEntries(List)
  makes inefficient use of keySet iterator instead of entrySet iterator
 {code}
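The WMI warning is the classic keySet-versus-entrySet pattern. A generic illustration (not the actual WALEditsReplaySink code) of why the entrySet loop is preferred:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of the WMI findbugs fix: iterating entrySet() avoids the
// extra get() lookup per key that a keySet() loop performs.
public class EntrySetDemo {
    static long sumViaKeySet(Map<String, Long> m) {
        long sum = 0;
        for (String k : m.keySet()) {
            sum += m.get(k);          // second hash lookup for every key
        }
        return sum;
    }

    static long sumViaEntrySet(Map<String, Long> m) {
        long sum = 0;
        for (Map.Entry<String, Long> e : m.entrySet()) {
            sum += e.getValue();      // value comes along with the entry
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Long> m = new HashMap<>();
        m.put("a", 1L);
        m.put("b", 2L);
        m.put("c", 3L);
        // Both loops compute the same result; entrySet does half the lookups.
        System.out.println(sumViaKeySet(m) + " " + sumViaEntrySet(m)); // 6 6
    }
}
```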





[jira] [Updated] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Amit Sela (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amit Sela updated HBASE-10061:
--

Attachment: 10061-trunk.txt

Trunk patch.
Checks that the jar is not null or empty in updateMap.
If it can't find the jar, it returns null instead of throwing an exception
(and logs a WARN).

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: 10061-trunk.txt, HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw an NPE.
 Should check for a null jar before calling updateMap.
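A minimal sketch of the proposed guard, using simplified stand-in methods (the real TableMapReduceUtil signatures and map contents differ):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the proposed fix (not the actual TableMapReduceUtil
// code): guard against a null/empty jar path before touching the map, and
// let the lookup degrade to null with a warning instead of an NPE.
public class JarMapDemo {
    static void updateMap(String jar, Map<String, String> packagedClasses) {
        if (jar == null || jar.isEmpty()) {
            return;                          // nothing to record; avoids the NPE
        }
        packagedClasses.put(jar, "some.packaged.Class"); // placeholder entry
    }

    static String findOrCreateJar(String jar, Map<String, String> packagedClasses) {
        if (jar == null) {
            System.err.println("WARN: could not find jar"); // log and degrade
            return null;
        }
        updateMap(jar, packagedClasses);
        return jar;
    }

    public static void main(String[] args) {
        Map<String, String> packaged = new HashMap<>();
        System.out.println(findOrCreateJar(null, packaged));          // null, no NPE
        System.out.println(findOrCreateJar("/tmp/job.jar", packaged)); // /tmp/job.jar
    }
}
```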





[jira] [Commented] (HBASE-9203) Secondary index support through coprocessors

2013-12-02 Thread Jyothi Mandava (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836456#comment-13836456
 ] 

Jyothi Mandava commented on HBASE-9203:
---

bq. Yes this is supported. Can bulk load data to user table. The ImportTSV like 
tool will then create index data for the index table

Please use the org.apache.hadoop.hbase.index.mapreduce.IndexImportTSV tool to 
update the index along with the user table data.

 Secondary index support through coprocessors
 

 Key: HBASE-9203
 URL: https://issues.apache.org/jira/browse/HBASE-9203
 Project: HBase
  Issue Type: New Feature
Reporter: rajeshbabu
Assignee: rajeshbabu
 Attachments: SecondaryIndex Design.pdf


 We have been working on implementing secondary index support in HBase and 
 open sourced it on HBase 0.94.8.
 The project is available on GitHub:
 https://github.com/Huawei-Hadoop/hindex
 This JIRA is to support secondary indexes on trunk (0.98).
 The following features will be supported:
 -  multiple indexes on a table,
 -  multi-column indexes,
 -  indexes based on part of a column value,
 -  equals and range condition scans using an index, and
 -  bulk loading data to an indexed table (indexing done with bulk load)
 Most of the kernel changes needed for secondary index support are already 
 available in trunk; very minimal changes are needed.





[jira] [Updated] (HBASE-9884) Add Thrift and REST support for Visibility Labels

2013-12-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-9884:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk.  Thanks for the review Andrew and Anoop.

 Add Thrift and REST support for Visibility Labels
 -

 Key: HBASE-9884
 URL: https://issues.apache.org/jira/browse/HBASE-9884
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9884.patch, HBASE-9884_1.patch, HBASE-9884_2.patch


 In HBASE-7663 the REST and Thrift support was separated out because the patch 
 was becoming too big. This JIRA is to add the Thrift and REST part as a 
 separate patch.





[jira] [Commented] (HBASE-10067) Filters are not applied if columns are added to the scanner

2013-12-02 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836502#comment-13836502
 ] 

Anoop Sam John commented on HBASE-10067:


Creating a Scan in a coprocessor? Can you tell us more about your usage context, please?

 Filters are not applied if columns are added to the scanner
 ---

 Key: HBASE-10067
 URL: https://issues.apache.org/jira/browse/HBASE-10067
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, Filters, Scanners
Affects Versions: 0.94.6
 Environment: Linux 2.6.32-279.11.1.el6.x86_64
Reporter: Vikram Singh Chandel

 While applying columns to the scanner, filtering does not happen and a full 
 scan of the table is done.
 Expected behaviour: Filters should be applied when particular columns are 
 added to the scanner.
 Actual behaviour: Filters are not applied; the entire result set is returned.
 Code Snippet:
   Scan scan = new Scan();
   scan.addColumn(family, qualifier); // entire scan happens, filters are ignored
   SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
     CompareOp.EQUAL, val);
   filterOne.setFilterIfMissing(true);
   FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
     Arrays.asList((Filter) filterOne));
   scan.setFilter(filter); // not working
 If addFamily is used, it works:
   scan.addFamily(family);
   scan.setFilter(filter); // works





[jira] [Commented] (HBASE-7386) Investigate providing some supervisor support for znode deletion

2013-12-02 Thread Samir Ahmic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836520#comment-13836520
 ] 

Samir Ahmic commented on HBASE-7386:


Hi [~stack], 

Is someone working on this currently? This looks like a great idea. I would 
like to continue this work if we still think this is a good direction.

Cheers

 Investigate providing some supervisor support for znode deletion
 

 Key: HBASE-7386
 URL: https://issues.apache.org/jira/browse/HBASE-7386
 Project: HBase
  Issue Type: Task
  Components: master, regionserver, scripts
Reporter: Gregory Chanan
Assignee: stack
Priority: Blocker
 Attachments: HBASE-7386-v0.patch, supervisordconfigs-v0.patch


 There are a couple of JIRAs for deleting the znode on a process failure:
 HBASE-5844 (RS)
 HBASE-5926 (Master)
 which are pretty neat; on process failure, they delete the znode of the 
 underlying process so HBase can recover faster.
 These JIRAs were implemented via the startup scripts; i.e. the script hangs 
 around and waits for the process to exit, then deletes the znode.
 There are a few problems associated with this approach, as listed in the 
 below JIRAs:
 1) Hides startup output in script
 https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13463401page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13463401
 2) two hbase processes listed per launched daemon
 https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13463409page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13463409
 3) Not run by a real supervisor
 https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13463409page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13463409
 4) Weird output after kill -9 actual process in standalone mode
 https://issues.apache.org/jira/browse/HBASE-5926?focusedCommentId=13506801page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13506801
 5) Can kill existing RS if called again
 https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13463401page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13463401
 6) Hides stdout/stderr
 https://issues.apache.org/jira/browse/HBASE-5844?focusedCommentId=13506832page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13506832
 I suspect running via something like supervisord can solve these issues 
 if we provide the right support.
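As a concrete illustration of that direction, a minimal supervisord stanza for a region server might look like the following. This is a hypothetical sketch, not a tested configuration: the program name, paths, and user are assumptions, and the znode cleanup would still need a supervisord event listener or equivalent hook.

```ini
[program:hbase-regionserver]
; Run the daemon in the foreground so supervisord is the real parent process.
command=/usr/lib/hbase/bin/hbase regionserver start
user=hbase
autostart=true
autorestart=true
; Capture stdout/stderr instead of hiding them (problems 1 and 6 above).
stdout_logfile=/var/log/hbase/regionserver.out
stderr_logfile=/var/log/hbase/regionserver.err
; A supervisord event listener subscribed to PROCESS_STATE_EXITED could
; delete the region server's znode before the process is restarted.
```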





[jira] [Updated] (HBASE-9997) Add per KV security details to HBase book

2013-12-02 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-9997:
--

Attachment: HBASE-9997.patch

 Add per KV security details to HBase book
 -

 Key: HBASE-9997
 URL: https://issues.apache.org/jira/browse/HBASE-9997
 Project: HBase
  Issue Type: Sub-task
  Components: security
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0

 Attachments: HBASE-9997.patch


 Per KV visibility labels
 Per KV ACLs





[jira] [Commented] (HBASE-10067) Filters are not applied if columns are added to the scanner

2013-12-02 Thread Vikram Singh Chandel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836532#comment-13836532
 ] 

Vikram Singh Chandel commented on HBASE-10067:
--

Our normal scans take around 62 seconds for around 8.1 million records, and 
anything beyond 5 seconds is not acceptable.

 Filters are not applied if columns are added to the scanner
 ---

 Key: HBASE-10067
 URL: https://issues.apache.org/jira/browse/HBASE-10067
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors, Filters, Scanners
Affects Versions: 0.94.6
 Environment: Linux 2.6.32-279.11.1.el6.x86_64
Reporter: Vikram Singh Chandel

 While applying columns to the scanner, filtering does not happen and a full 
 scan of the table is done.
 Expected behaviour: Filters should be applied when particular columns are 
 added to the scanner.
 Actual behaviour: Filters are not applied; the entire result set is returned.
 Code Snippet:
   Scan scan = new Scan();
   scan.addColumn(family, qualifier); // entire scan happens, filters are ignored
   SingleColumnValueFilter filterOne = new SingleColumnValueFilter(colFal, col,
     CompareOp.EQUAL, val);
   filterOne.setFilterIfMissing(true);
   FilterList filter = new FilterList(Operator.MUST_PASS_ALL,
     Arrays.asList((Filter) filterOne));
   scan.setFilter(filter); // not working
 If addFamily is used, it works:
   scan.addFamily(family);
   scan.setFilter(filter); // works





[jira] [Commented] (HBASE-10062) Store the encrypted data length in the block encryption header instead of plaintext length

2013-12-02 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836547#comment-13836547
 ] 

Jean-Marc Spaggiari commented on HBASE-10062:
-

Hi Andrew,

Will that require a migration step to migrate existing data on the disk? Or 
will that apply only on the blocks in memory?

 Store the encrypted data length in the block encryption header instead of 
 plaintext length
 --

 Key: HBASE-10062
 URL: https://issues.apache.org/jira/browse/HBASE-10062
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0


 After HBASE-7544, if an HFile belongs to an encrypted family, it is encrypted 
 on a per block basis. The encrypted blocks include the following header:
 {noformat}
   // +--+
   // | vint plaintext length|
   // +--+
   // | vint iv length   |
   // +--+
   // | iv data ...  |
   // +--+
   // | encrypted block data ... |
   // +--+
 {noformat}
 The reason for storing the plaintext length is so we can create a decryption 
 stream over the encrypted block data and, no matter the internal details of 
 the crypto algorithm (whether it adds padding, etc.), know the reader is 
 finished after reading the expected plaintext bytes. However, my colleague 
 Jerry Chen pointed out today that this construction mandates the block be 
 processed exactly that way. Storing and using the encrypted data length 
 instead could provide more implementation flexibility down the road.
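A sketch of serializing the proposed layout with the encrypted length up front. This is illustrative only, not the HFile implementation: the vint here is a simple LEB128-style varint rather than Hadoop's WritableUtils encoding, and the IV/ciphertext are dummy byte arrays:

```java
import java.io.ByteArrayOutputStream;

// Sketch of the proposed block layout: [vint ciphertext length][vint iv
// length][iv][ciphertext]. Storing the *encrypted* length lets a reader
// slice out the ciphertext without decrypting it first.
public class EncryptedBlockHeaderDemo {
    // Simple LEB128-style unsigned varint (illustrative; not WritableUtils).
    static void writeVInt(ByteArrayOutputStream out, int v) {
        while ((v & ~0x7f) != 0) {
            out.write((v & 0x7f) | 0x80);
            v >>>= 7;
        }
        out.write(v);
    }

    static byte[] encode(byte[] iv, byte[] ciphertext) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVInt(out, ciphertext.length); // encrypted length, not plaintext
        writeVInt(out, iv.length);
        out.write(iv, 0, iv.length);
        out.write(ciphertext, 0, ciphertext.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] iv = new byte[16];
        byte[] ciphertext = new byte[300]; // padded ciphertext may exceed plaintext
        byte[] block = encode(iv, ciphertext);
        // 2-byte vint for 300, 1-byte vint for 16, then the iv and data.
        System.out.println(block.length); // 2 + 1 + 16 + 300 = 319
    }
}
```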





[jira] [Commented] (HBASE-9856) Fix some findbugs Performance Warnings

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836574#comment-13836574
 ] 

Hudson commented on HBASE-9856:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #860 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/860/])
HBASE-9856 Fix some findbugs Performance Warnings (tedyu: rev 1546888)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALEditsReplaySink.java


 Fix some findbugs Performance Warnings
 --

 Key: HBASE-9856
 URL: https://issues.apache.org/jira/browse/HBASE-9856
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 0.98.0

 Attachments: 9856-v1.txt, 9856-v2.txt


 These are the warnings to be fixed:
 {code}
 SIC Should org.apache.hadoop.hbase.regionserver.HRegion$RowLock be a _static_ 
 inner class?
 UPM Private method 
 org.apache.hadoop.hbase.security.access.AccessController.requirePermission(String,
  String, Permission$Action[]) is never called
 WMI Method 
 org.apache.hadoop.hbase.regionserver.wal.WALEditsReplaySink.replayEntries(List)
  makes inefficient use of keySet iterator instead of entrySet iterator
 {code}





[jira] [Commented] (HBASE-9631) add murmur3 hash

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836573#comment-13836573
 ] 

Hudson commented on HBASE-9631:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #860 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/860/])
HBASE-9631. Add murmur3 hash (apurtell: rev 1546894)
* /hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Hash.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/MurmurHash3.java


 add murmur3 hash
 

 Key: HBASE-9631
 URL: https://issues.apache.org/jira/browse/HBASE-9631
 Project: HBase
  Issue Type: New Feature
  Components: util
Affects Versions: 0.98.0
Reporter: Liang Xie
Assignee: Liang Xie
 Fix For: 0.98.0

 Attachments: HBase-9631-v2.txt, HBase-9631.txt


 MurmurHash3 is the successor to MurmurHash2. It comes in 3 variants - a 
 32-bit version that targets low latency for hash table use and two 128-bit 
 versions for generating unique identifiers for large blocks of data, one each 
 for x86 and x64 platforms.
 Several open source projects have already added murmur3, such as Cassandra 
 and Mahout.
 I just ported the murmur3 implementation from MAHOUT-862. For compatibility, 
 let's keep the default hash algorithm (murmur2) unchanged.





[jira] [Commented] (HBASE-9502) HStore.seekToScanner should handle magic value

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836571#comment-13836571
 ] 

Hudson commented on HBASE-9502:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #860 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/860/])
HBASE-9502. HStore.seekToScanner should handle magic value (Liang Xie) 
(apurtell: rev 1546925)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


 HStore.seekToScanner should handle magic value
 --

 Key: HBASE-9502
 URL: https://issues.apache.org/jira/browse/HBASE-9502
 Project: HBase
  Issue Type: Bug
  Components: regionserver, Scanners
Affects Versions: 0.98.0, 0.96.1
Reporter: Liang Xie
Assignee: Liang Xie
 Fix For: 0.98.0

 Attachments: 9502-v2.patch, HBASE-9502-v2.txt, HBASE-9502.txt


 Due to a faked key, seekTo may return -2, and HStore.seekToScanner should 
 handle this corner case.





[jira] [Commented] (HBASE-9978) The client retries even if the method is not present on the server

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836572#comment-13836572
 ] 

Hudson commented on HBASE-9978:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #860 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/860/])
HBASE-9978 The client retries even if the method is not present on the server 
(mbertozzi: rev 1546961)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java


 The client retries even if the method is not present on the server
 --

 Key: HBASE-9978
 URL: https://issues.apache.org/jira/browse/HBASE-9978
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9978-v0.patch


 If the RpcServer is not able to find the requested method, it throws an 
 UnsupportedOperationException, but since this is not wrapped in a DoNotRetry 
 exception, the client keeps retrying even though the operation doesn't exist.





[jira] [Commented] (HBASE-9718) Add a test scope dependency on org.slf4j:slf4j-api to hbase-client

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836576#comment-13836576
 ] 

Hudson commented on HBASE-9718:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #860 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/860/])
HBASE-9718. Add a test scope dependency on org.slf4j:slf4j-api to hbase-client 
(apurtell: rev 1546892)
* /hbase/trunk/hbase-client/pom.xml


 Add a test scope dependency on org.slf4j:slf4j-api to hbase-client
 --

 Key: HBASE-9718
 URL: https://issues.apache.org/jira/browse/HBASE-9718
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0, 0.96.1

 Attachments: 9718.patch


 hbase-client needs a test scope dependency on org.slf4j:slf4j-api in its POM. 
 Without this change at least Eclipse cannot resolve org.slf4j.Logger from 
 RecoverableZooKeeper - the ZooKeeper classes use it - and so the 
 'hbase-client' project will not build. 





[jira] [Commented] (HBASE-9884) Add Thrift and REST support for Visibility Labels

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836575#comment-13836575
 ] 

Hudson commented on HBASE-9884:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #860 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/860/])
HBASE-9884 - Add Thrift and REST support for Visibility Labels (Ram) 
(ramkrishna: rev 1546988)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/RowSpec.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/ScannerResource.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/ScannerResultGenerator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/model/ScannerModel.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
* 
/hbase/trunk/hbase-server/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/ScannerMessage.proto
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithLabels.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAppend.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAuthorization.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* 
/hbase/trunk/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
* 
/hbase/trunk/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandlerWithLabels.java


 Add Thrift and REST support for Visibility Labels
 -

 Key: HBASE-9884
 URL: https://issues.apache.org/jira/browse/HBASE-9884
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9884.patch, HBASE-9884_1.patch, HBASE-9884_2.patch


 In HBASE-7663 the REST and thrift support has been separated out because the 
 patch was becoming bigger.  This JIRA is to add the Thrift and REST part as a 
 separate patch.





[jira] [Commented] (HBASE-9978) The client retries even if the method is not present on the server

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836597#comment-13836597
 ] 

Hudson commented on HBASE-9978:
---

FAILURE: Integrated in hbase-0.96-hadoop2 #139 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/139/])
HBASE-9978 The client retries even if the method is not present on the server 
(mbertozzi: rev 1546962)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java


 The client retries even if the method is not present on the server
 --

 Key: HBASE-9978
 URL: https://issues.apache.org/jira/browse/HBASE-9978
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9978-v0.patch


 If the RpcServer is not able to find the method on the server, it throws an 
 UnsupportedOperationException, but since it is not wrapped in a DoNotRetry 
 exception the client keeps retrying even if the operation doesn't exist.
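A minimal self-contained sketch of the failure mode and fix described in this issue (class and method names here are assumptions for illustration, not the actual HBASE-9978 patch):

```java
import java.io.IOException;

// Stand-in for org.apache.hadoop.hbase.DoNotRetryIOException: the HBase
// client stops retrying when the server reports this exception type.
class DoNotRetryIOException extends IOException {
  DoNotRetryIOException(String msg) { super(msg); }
}

class RpcServerSketch {
  // Pretend method lookup: returns null for methods unknown to this server.
  static Object findMethod(String name) {
    return null;
  }

  static Object callMethod(String name) throws IOException {
    Object method = findMethod(name);
    if (method == null) {
      // Before the fix: a bare UnsupportedOperationException, which the
      // client treated as retriable. After: wrapped so the client fails fast.
      throw new DoNotRetryIOException("Unknown method: " + name);
    }
    return method;
  }
}
```

The point of the wrapping is purely the exception type: the client-side retry loop inspects it and gives up immediately instead of exhausting its retry budget.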





[jira] [Commented] (HBASE-9718) Add a test scope dependency on org.slf4j:slf4j-api to hbase-client

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836598#comment-13836598
 ] 

Hudson commented on HBASE-9718:
---

FAILURE: Integrated in hbase-0.96-hadoop2 #139 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/139/])
HBASE-9718. Add a test scope dependency on org.slf4j:slf4j-api to hbase-client 
(apurtell: rev 1546893)
* /hbase/branches/0.96/hbase-client/pom.xml


 Add a test scope dependency on org.slf4j:slf4j-api to hbase-client
 --

 Key: HBASE-9718
 URL: https://issues.apache.org/jira/browse/HBASE-9718
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.98.0, 0.96.1

 Attachments: 9718.patch


 hbase-client needs a test scope dependency on org.slf4j:slf4j-api in its POM. 
 Without this change at least Eclipse cannot resolve org.slf4j.Logger from 
 RecoverableZooKeeper - the ZooKeeper classes use it - and so the 
 'hbase-client' project will not build. 





[jira] [Commented] (HBASE-9502) HStore.seekToScanner should handle magic value

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836621#comment-13836621
 ] 

Hudson commented on HBASE-9502:
---

SUCCESS: Integrated in HBase-TRUNK #4707 (See 
[https://builds.apache.org/job/HBase-TRUNK/4707/])
HBASE-9502. HStore.seekToScanner should handle magic value (Liang Xie) 
(apurtell: rev 1546925)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


 HStore.seekToScanner should handle magic value
 --

 Key: HBASE-9502
 URL: https://issues.apache.org/jira/browse/HBASE-9502
 Project: HBase
  Issue Type: Bug
  Components: regionserver, Scanners
Affects Versions: 0.98.0, 0.96.1
Reporter: Liang Xie
Assignee: Liang Xie
 Fix For: 0.98.0

 Attachments: 9502-v2.patch, HBASE-9502-v2.txt, HBASE-9502.txt


 Due to a faked key, seekTo may return -2, and HStore.seekToScanner 
 should handle this corner case.





[jira] [Commented] (HBASE-9884) Add Thrift and REST support for Visibility Labels

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836623#comment-13836623
 ] 

Hudson commented on HBASE-9884:
---

SUCCESS: Integrated in HBase-TRUNK #4707 (See 
[https://builds.apache.org/job/HBase-TRUNK/4707/])
HBASE-9884 - Add Thrift and REST support for Visibility Labels (Ram) 
(ramkrishna: rev 1546988)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/RowSpec.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/ScannerResource.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/ScannerResultGenerator.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/model/ScannerModel.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/rest/protobuf/generated/ScannerMessage.java
* 
/hbase/trunk/hbase-server/src/main/resources/org/apache/hadoop/hbase/rest/protobuf/ScannerMessage.proto
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/TestScannersWithLabels.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/rest/model/TestScannerModel.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAppend.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TAuthorization.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TCellVisibility.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TColumnValue.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TGet.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TIncrement.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TPut.java
* 
/hbase/trunk/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
* 
/hbase/trunk/hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
* 
/hbase/trunk/hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandlerWithLabels.java


 Add Thrift and REST support for Visibility Labels
 -

 Key: HBASE-9884
 URL: https://issues.apache.org/jira/browse/HBASE-9884
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9884.patch, HBASE-9884_1.patch, HBASE-9884_2.patch


 In HBASE-7663 the REST and thrift support has been separated out because the 
 patch was becoming bigger.  This JIRA is to add the Thrift and REST part as a 
 separate patch.





[jira] [Commented] (HBASE-9978) The client retries even if the method is not present on the server

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836622#comment-13836622
 ] 

Hudson commented on HBASE-9978:
---

SUCCESS: Integrated in HBase-TRUNK #4707 (See 
[https://builds.apache.org/job/HBase-TRUNK/4707/])
HBASE-9978 The client retries even if the method is not present on the server 
(mbertozzi: rev 1546961)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java


 The client retries even if the method is not present on the server
 --

 Key: HBASE-9978
 URL: https://issues.apache.org/jira/browse/HBASE-9978
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9978-v0.patch


 If the RpcServer is not able to find the method on the server, it throws an 
 UnsupportedOperationException, but since it is not wrapped in a DoNotRetry 
 exception the client keeps retrying even if the operation doesn't exist.





[jira] [Commented] (HBASE-9832) Add MR support for Visibility labels

2013-12-02 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836673#comment-13836673
 ] 

Anoop Sam John commented on HBASE-9832:
---

ImportTSV#createTable()
{code}
for (String aColumn : columns) {
  if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)
  || TsvParser.TIMESTAMPKEY_COLUMN_SPEC.equals(aColumn)) continue;
{code}
Please add the check for CELL_VISIBILITY_COLUMN_SPEC also so that we won't create a 
cf named CELL_VISIBILITY_COLUMN_SPEC? 
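The requested change could look like the following self-contained sketch. The constant values and helper class are assumptions mirroring ImportTsv's column-spec handling, not copied from the patch:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for ImportTsv's column handling; the *_COLUMN_SPEC values
// below are assumed for illustration.
class TsvColumnSpecs {
  static final String ROWKEY_COLUMN_SPEC = "HBASE_ROW_KEY";
  static final String TIMESTAMPKEY_COLUMN_SPEC = "HBASE_TS_KEY";
  static final String CELL_VISIBILITY_COLUMN_SPEC = "HBASE_CELL_VISIBILITY";

  // Derive column families from "family:qualifier" specs, skipping every
  // special spec -- including the new visibility check -- so that no
  // column family named after a spec is ever created.
  static List<String> columnFamilies(String[] columns) {
    List<String> cfs = new ArrayList<>();
    for (String aColumn : columns) {
      if (ROWKEY_COLUMN_SPEC.equals(aColumn)
          || TIMESTAMPKEY_COLUMN_SPEC.equals(aColumn)
          || CELL_VISIBILITY_COLUMN_SPEC.equals(aColumn)) continue;
      cfs.add(aColumn.split(":", 2)[0]);
    }
    return cfs;
  }
}
```

Without the third condition, a column spec like HBASE_CELL_VISIBILITY would fall through and be treated as a family name during table creation.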

 Add MR support for Visibility labels
 

 Key: HBASE-9832
 URL: https://issues.apache.org/jira/browse/HBASE-9832
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9832.patch, HBASE-9832_1.patch, HBASE-9832_2.patch


 MR needs to support adding the visibility labels through TableOutputFormat 
 and HFileOutputFormat.





[jira] [Updated] (HBASE-9832) Add MR support for Visibility labels

2013-12-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-9832:
--

Status: Open  (was: Patch Available)

 Add MR support for Visibility labels
 

 Key: HBASE-9832
 URL: https://issues.apache.org/jira/browse/HBASE-9832
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9832.patch, HBASE-9832_1.patch, 
 HBASE-9832_2.patch, HBASE-9832_4.patch


 MR needs to support adding the visibility labels through TableOutputFormat 
 and HFileOutputFormat.





[jira] [Updated] (HBASE-9832) Add MR support for Visibility labels

2013-12-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-9832:
--

Status: Patch Available  (was: Open)

 Add MR support for Visibility labels
 

 Key: HBASE-9832
 URL: https://issues.apache.org/jira/browse/HBASE-9832
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9832.patch, HBASE-9832_1.patch, 
 HBASE-9832_2.patch, HBASE-9832_4.patch


 MR needs to support adding the visibility labels through TableOutputFormat 
 and HFileOutputFormat.





[jira] [Updated] (HBASE-9832) Add MR support for Visibility labels

2013-12-02 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-9832:
--

Attachment: HBASE-9832_4.patch

Latest patch.

 Add MR support for Visibility labels
 

 Key: HBASE-9832
 URL: https://issues.apache.org/jira/browse/HBASE-9832
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9832.patch, HBASE-9832_1.patch, 
 HBASE-9832_2.patch, HBASE-9832_4.patch


 MR needs to support adding the visibility labels through TableOutputFormat 
 and HFileOutputFormat.





[jira] [Comment Edited] (HBASE-9832) Add MR support for Visibility labels

2013-12-02 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836673#comment-13836673
 ] 

Anoop Sam John edited comment on HBASE-9832 at 12/2/13 4:44 PM:


ImportTSV#createTable()
{code}
for (String aColumn : columns) {
  if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)
  || TsvParser.TIMESTAMPKEY_COLUMN_SPEC.equals(aColumn)) continue;
{code}
Please add the check for CELL_VISIBILITY_COLUMN_SPEC also so that we won't create a 
cf named CELL_VISIBILITY_COLUMN_SPEC? 


was (Author: anoop.hbase):
ImportTSV#createTable()
{quote}
for (String aColumn : columns) {
  if (TsvParser.ROWKEY_COLUMN_SPEC.equals(aColumn)
  || TsvParser.TIMESTAMPKEY_COLUMN_SPEC.equals(aColumn)) continue;
{quote}
Please add the check for CELL_VISIBILITY_COLUMN_SPEC also so that we won't create a 
cf named CELL_VISIBILITY_COLUMN_SPEC? 

 Add MR support for Visibility labels
 

 Key: HBASE-9832
 URL: https://issues.apache.org/jira/browse/HBASE-9832
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9832.patch, HBASE-9832_1.patch, HBASE-9832_2.patch


 MR needs to support adding the visibility labels through TableOutputFormat 
 and HFileOutputFormat.





[jira] [Commented] (HBASE-9978) The client retries even if the method is not present on the server

2013-12-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836694#comment-13836694
 ] 

Hudson commented on HBASE-9978:
---

SUCCESS: Integrated in hbase-0.96 #211 (See 
[https://builds.apache.org/job/hbase-0.96/211/])
HBASE-9978 The client retries even if the method is not present on the server 
(mbertozzi: rev 1546962)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java


 The client retries even if the method is not present on the server
 --

 Key: HBASE-9978
 URL: https://issues.apache.org/jira/browse/HBASE-9978
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.98.0, 0.96.0
Reporter: Matteo Bertozzi
Assignee: Matteo Bertozzi
Priority: Trivial
 Fix For: 0.98.0, 0.96.1

 Attachments: HBASE-9978-v0.patch


 If the RpcServer is not able to find the method on the server, it throws an 
 UnsupportedOperationException, but since it is not wrapped in a DoNotRetry 
 exception the client keeps retrying even if the operation doesn't exist.





[jira] [Commented] (HBASE-9992) [hbck] Refactor so that arbitrary -D cmdline options are included

2013-12-02 Thread takeshi.miao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836703#comment-13836703
 ] 

takeshi.miao commented on HBASE-9992:
-

@Jonathan Thanks for your patch, I also learned from it!

 [hbck] Refactor so that arbitrary -D cmdline options are included 
 --

 Key: HBASE-9992
 URL: https://issues.apache.org/jira/browse/HBASE-9992
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.96.0, 0.94.13
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.96.1

 Attachments: hbase-9992.patch


 A review of HBASE-9831 pointed out the fact that -D options aren't being 
 passed into the configuration object used by hbck.  This means overriding -D 
 options will not work unless special hooks are added for specific options.  A 
 first attempt to fix this was in HBASE-9831 but it affected many other files.
 The right approach would be to create a new HbckTool class that has the 
 Configured interface, change the existing HBaseFsck main to instantiate 
 that to have it parse args, and then create the HBaseFsck object inside run().
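What the Tool-style refactor buys can be shown with a toy -D parser (a self-contained stand-in for what Hadoop's generic option handling does, not the actual HBASE-9992 patch or Hadoop's GenericOptionsParser):

```java
import java.util.Map;

// Toy illustration: fold "-D key=value" pairs from argv into the
// configuration map before the real worker object is constructed, so
// arbitrary overrides take effect without per-option hooks.
class DashDOptions {
  static Map<String, String> apply(String[] args, Map<String, String> conf) {
    for (int i = 0; i < args.length; i++) {
      if ("-D".equals(args[i]) && i + 1 < args.length) {
        String[] kv = args[++i].split("=", 2);
        if (kv.length == 2) {
          conf.put(kv[0], kv[1]); // the command-line override wins
        }
      }
    }
    return conf;
  }
}
```

In the real refactor this parsing happens in the Tool wrapper before HBaseFsck is instantiated, which is why the fix does not need to touch each option individually.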





[jira] [Commented] (HBASE-9931) Optional setBatch for CopyTable to copy large rows in batches

2013-12-02 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836717#comment-13836717
 ] 

Dave Latham commented on HBASE-9931:


The patch adds setBatch and removes setCaching.   I think setCaching should 
stay in there too - probably a copy/replace bug?

 Optional setBatch for CopyTable to copy large rows in batches
 -

 Key: HBASE-9931
 URL: https://issues.apache.org/jira/browse/HBASE-9931
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Dave Latham
 Attachments: HBASE-9931.00.patch


 We've had CopyTable jobs fail because a small number of rows are wide enough 
 to not fit into memory.  If we could specify the batch size for CopyTable 
 scans, that should be able to break those large rows up into multiple 
 iterations to save the heap.
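The arithmetic behind the proposal can be sketched without any HBase dependency (the helper below is an illustration of what Scan batching does, not CopyTable code):

```java
// With a scan batch of N cells, a row of totalCells cells comes back as
// ceil(totalCells / N) partial Results, bounding the heap needed per
// iteration instead of materializing the entire wide row at once.
class BatchMath {
  static int resultsPerRow(int totalCells, int batch) {
    if (batch <= 0) {
      return 1; // batch unset: the entire row arrives in one Result
    }
    return (totalCells + batch - 1) / batch; // integer ceiling division
  }
}
```

So a pathological row of a million cells with a batch of 500 is streamed as 2000 small Results, each of bounded size, which is exactly what keeps the mapper heap safe.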





[jira] [Commented] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836720#comment-13836720
 ] 

Hadoop QA commented on HBASE-10061:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616536/10061-trunk.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.wal.TestLogRolling

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8038//console

This message is automatically generated.

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: 10061-trunk.txt, HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.
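A self-contained sketch of the guard described above (method shapes are assumed from the issue text, not taken from the actual patch):

```java
import java.util.Map;

class FindJarSketch {
  // Stand-in for getJar: may legitimately return null, e.g. when the
  // class was loaded from a directory rather than from a jar.
  static String getJar(Class<?> c) {
    return null; // simulate "class not packaged in any jar"
  }

  static void updateMap(String jar, Map<String, String> packagedClasses) {
    // Dereferences jar, so passing null would throw an NPE.
    packagedClasses.put(jar.toLowerCase(), jar);
  }

  static String findOrCreateJar(Class<?> c, Map<String, String> packagedClasses) {
    String jar = getJar(c);
    if (jar != null) { // the fix: skip updateMap when no jar was found
      updateMap(jar, packagedClasses);
    }
    return jar;
  }
}
```

With the guard in place the caller simply gets a null jar back, instead of an NPE raised from inside updateMap.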





[jira] [Updated] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10061:
---

Status: Patch Available  (was: Open)

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: 10061-trunk.txt, HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.





[jira] [Commented] (HBASE-9832) Add MR support for Visibility labels

2013-12-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836754#comment-13836754
 ] 

Hadoop QA commented on HBASE-9832:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616570/HBASE-9832_4.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8039//console

This message is automatically generated.

 Add MR support for Visibility labels
 

 Key: HBASE-9832
 URL: https://issues.apache.org/jira/browse/HBASE-9832
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-9832.patch, HBASE-9832_1.patch, 
 HBASE-9832_2.patch, HBASE-9832_4.patch


 MR needs to support adding the visibility labels through TableOutputFormat 
 and HFileOutputFormat.





[jira] [Assigned] (HBASE-10059) TestSplitLogWorker#testMultipleTasks fails occasionally

2013-12-02 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong reassigned HBASE-10059:
-

Assignee: Jeffrey Zhong

 TestSplitLogWorker#testMultipleTasks fails occasionally
 ---

 Key: HBASE-10059
 URL: https://issues.apache.org/jira/browse/HBASE-10059
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Jeffrey Zhong

 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/857/testReport/junit/org.apache.hadoop.hbase.regionserver/TestSplitLogWorker/testMultipleTasks/
  :
 {code}
 2013-11-30 01:13:23,022 INFO  [pool-1-thread-1] hbase.ResourceChecker(147): 
 before: regionserver.TestSplitLogWorker#testMultipleTasks Thread=16, 
 OpenFileDescriptor=157, MaxFileDescriptor=4, SystemLoadAverage=338, 
 ProcessCount=144, AvailableMemoryMB=1474, ConnectionCount=0
 2013-11-30 01:13:23,026 INFO  [pool-1-thread-1] 
 zookeeper.MiniZooKeeperCluster(200): Started MiniZK Cluster and connect 1 ZK 
 server on client port: 53800
 2013-11-30 01:13:23,029 INFO  [pool-1-thread-1] 
 zookeeper.RecoverableZooKeeper(120): Process 
 identifier=split-log-worker-tests connecting to ZooKeeper 
 ensemble=localhost:53800
 2013-11-30 01:13:23,249 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(310): split-log-worker-tests, 
 quorum=localhost:53800, baseZNode=/hbase Received ZooKeeper Event, type=None, 
 state=SyncConnected, path=null
 2013-11-30 01:13:23,251 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(387): split-log-worker-tests-0x142a6913350 
 connected
 2013-11-30 01:13:23,261 DEBUG [pool-1-thread-1] 
 regionserver.TestSplitLogWorker(105): /hbase created
 2013-11-30 01:13:23,270 DEBUG [pool-1-thread-1] 
 regionserver.TestSplitLogWorker(108): /hbase/splitWAL created
 2013-11-30 01:13:23,278 DEBUG [pool-1-thread-1] executor.ExecutorService(99): 
 Starting executor service name=RS_LOG_REPLAY_OPS-TestSplitLogWorker, 
 corePoolSize=10, maxPoolSize=10
 2013-11-30 01:13:23,278 INFO  [pool-1-thread-1] 
 regionserver.TestSplitLogWorker(246): testMultipleTasks
 2013-11-30 01:13:23,280 INFO  [SplitLogWorker-tmt_svr,1,1] 
 regionserver.SplitLogWorker(175): SplitLogWorker tmt_svr,1,1 starting
 2013-11-30 01:13:23,380 INFO  [pool-1-thread-1] hbase.Waiter(174): Waiting up 
 to [1,500] milli-secs(wait.for.ratio=[1])
 2013-11-30 01:13:23,394 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(310): split-log-worker-tests-0x142a6913350, 
 quorum=localhost:53800, baseZNode=/hbase Received ZooKeeper Event, 
 type=NodeChildrenChanged, state=SyncConnected, path=/hbase/splitWAL
 2013-11-30 01:13:23,394 DEBUG [pool-1-thread-1-EventThread] 
 regionserver.SplitLogWorker(595): tasks arrived or departed
 2013-11-30 01:13:23,394 INFO  [pool-1-thread-1] hbase.Waiter(174): Waiting up 
 to [1,500] milli-secs(wait.for.ratio=[1])
 2013-11-30 01:13:23,402 INFO  [SplitLogWorker-tmt_svr,1,1] 
 regionserver.SplitLogWorker(363): worker tmt_svr,1,1 acquired task 
 /hbase/splitWAL/tmt_task
 2013-11-30 01:13:23,410 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(310): split-log-worker-tests-0x142a6913350, 
 quorum=localhost:53800, baseZNode=/hbase Received ZooKeeper Event, 
 type=NodeChildrenChanged, state=SyncConnected, path=/hbase/splitWAL
 2013-11-30 01:13:23,410 DEBUG [pool-1-thread-1-EventThread] 
 regionserver.SplitLogWorker(595): tasks arrived or departed
 2013-11-30 01:13:23,418 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(310): split-log-worker-tests-0x142a6913350, 
 quorum=localhost:53800, baseZNode=/hbase Received ZooKeeper Event, 
 type=NodeDataChanged, state=SyncConnected, path=/hbase/splitWAL/tmt_task
 2013-11-30 01:13:23,419 INFO  [pool-1-thread-1] hbase.Waiter(174): Waiting up 
 to [1,500] milli-secs(wait.for.ratio=[1])
 2013-11-30 01:13:23,420 INFO  [pool-1-thread-1-EventThread] 
 regionserver.SplitLogWorker(522): task /hbase/splitWAL/tmt_task preempted 
 from tmt_svr,1,1, current task state and owner=OWNED another-worker,1,1
 2013-11-30 01:13:23,420 INFO  [pool-1-thread-1-EventThread] 
 regionserver.SplitLogWorker(608): Sending interrupt to stop the worker thread
 2013-11-30 01:13:23,420 WARN  [SplitLogWorker-tmt_svr,1,1] 
 regionserver.SplitLogWorker(374): Interrupted while yielding for other region 
 servers
 java.lang.InterruptedException: sleep interrupted
   at java.lang.Thread.sleep(Native Method)
   at 
 org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:372)
   at 
 org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:251)
   at 
 org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:209)
   at java.lang.Thread.run(Thread.java:662)
 2013-11-30 01:13:23,427 INFO  

[jira] [Commented] (HBASE-10059) TestSplitLogWorker#testMultipleTasks fails occasionally

2013-12-02 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836780#comment-13836780
 ] 

Jeffrey Zhong commented on HBASE-10059:
---

I checked the error log; the test failed because it took around 1529 
milli-secs for the wait condition to become true, while the timeout is 1500 
milli-secs. 

{code}
2013-11-30 01:13:23,394 INFO  [pool-1-thread-1] hbase.Waiter(174): Waiting up 
to [1,500] milli-secs(wait.for.ratio=[1])
...
2013-11-30 01:13:24,923 WARN  [RS_LOG_REPLAY_OPS-TestSplitLogWorker-0] 
handler.HLogSplitterHandler(87): task execution prempted tmt_task

2013-11-30 01:13:24,923 INFO  [RS_LOG_REPLAY_OPS-TestSplitLogWorker-0] 
handler.HLogSplitterHandler(107): worker tmt_svr,1,1 done with task 
/hbase/splitWAL/tmt_task in 1520ms
{code}
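The race above is easier to see against a sketch of the waiter loop the test relies on. This is a hedged stand-in for hbase.Waiter, not its actual source; the names and the fixed sleep interval are illustrative:

```java
// A minimal sketch of the polling pattern behind hbase.Waiter.waitFor.
final class WaiterSketch {
  interface Predicate {
    boolean evaluate() throws Exception;
  }

  // Polls the predicate until it holds or the timeout budget is spent. The
  // failure above is exactly this race: the condition needed ~1529 ms while
  // the budget was 1500 ms.
  static boolean waitFor(long timeoutMs, long intervalMs, Predicate p)
      throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (p.evaluate()) {
        return true;
      }
      Thread.sleep(intervalMs);
    }
    return p.evaluate(); // one final check at the deadline
  }
}
```

A fix along these lines is simply a larger budget (or a bigger wait.for.ratio), so that a condition taking ~1.5 s does not race a 1.5 s timeout.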

 TestSplitLogWorker#testMultipleTasks fails occasionally
 ---

 Key: HBASE-10059
 URL: https://issues.apache.org/jira/browse/HBASE-10059
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Jeffrey Zhong

 From 
 https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/857/testReport/junit/org.apache.hadoop.hbase.regionserver/TestSplitLogWorker/testMultipleTasks/
  :
 {code}
 2013-11-30 01:13:23,022 INFO  [pool-1-thread-1] hbase.ResourceChecker(147): 
 before: regionserver.TestSplitLogWorker#testMultipleTasks Thread=16, 
 OpenFileDescriptor=157, MaxFileDescriptor=4, SystemLoadAverage=338, 
 ProcessCount=144, AvailableMemoryMB=1474, ConnectionCount=0
 2013-11-30 01:13:23,026 INFO  [pool-1-thread-1] 
 zookeeper.MiniZooKeeperCluster(200): Started MiniZK Cluster and connect 1 ZK 
 server on client port: 53800
 2013-11-30 01:13:23,029 INFO  [pool-1-thread-1] 
 zookeeper.RecoverableZooKeeper(120): Process 
 identifier=split-log-worker-tests connecting to ZooKeeper 
 ensemble=localhost:53800
 2013-11-30 01:13:23,249 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(310): split-log-worker-tests, 
 quorum=localhost:53800, baseZNode=/hbase Received ZooKeeper Event, type=None, 
 state=SyncConnected, path=null
 2013-11-30 01:13:23,251 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(387): split-log-worker-tests-0x142a6913350 
 connected
 2013-11-30 01:13:23,261 DEBUG [pool-1-thread-1] 
 regionserver.TestSplitLogWorker(105): /hbase created
 2013-11-30 01:13:23,270 DEBUG [pool-1-thread-1] 
 regionserver.TestSplitLogWorker(108): /hbase/splitWAL created
 2013-11-30 01:13:23,278 DEBUG [pool-1-thread-1] executor.ExecutorService(99): 
 Starting executor service name=RS_LOG_REPLAY_OPS-TestSplitLogWorker, 
 corePoolSize=10, maxPoolSize=10
 2013-11-30 01:13:23,278 INFO  [pool-1-thread-1] 
 regionserver.TestSplitLogWorker(246): testMultipleTasks
 2013-11-30 01:13:23,280 INFO  [SplitLogWorker-tmt_svr,1,1] 
 regionserver.SplitLogWorker(175): SplitLogWorker tmt_svr,1,1 starting
 2013-11-30 01:13:23,380 INFO  [pool-1-thread-1] hbase.Waiter(174): Waiting up 
 to [1,500] milli-secs(wait.for.ratio=[1])
 2013-11-30 01:13:23,394 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(310): split-log-worker-tests-0x142a6913350, 
 quorum=localhost:53800, baseZNode=/hbase Received ZooKeeper Event, 
 type=NodeChildrenChanged, state=SyncConnected, path=/hbase/splitWAL
 2013-11-30 01:13:23,394 DEBUG [pool-1-thread-1-EventThread] 
 regionserver.SplitLogWorker(595): tasks arrived or departed
 2013-11-30 01:13:23,394 INFO  [pool-1-thread-1] hbase.Waiter(174): Waiting up 
 to [1,500] milli-secs(wait.for.ratio=[1])
 2013-11-30 01:13:23,402 INFO  [SplitLogWorker-tmt_svr,1,1] 
 regionserver.SplitLogWorker(363): worker tmt_svr,1,1 acquired task 
 /hbase/splitWAL/tmt_task
 2013-11-30 01:13:23,410 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(310): split-log-worker-tests-0x142a6913350, 
 quorum=localhost:53800, baseZNode=/hbase Received ZooKeeper Event, 
 type=NodeChildrenChanged, state=SyncConnected, path=/hbase/splitWAL
 2013-11-30 01:13:23,410 DEBUG [pool-1-thread-1-EventThread] 
 regionserver.SplitLogWorker(595): tasks arrived or departed
 2013-11-30 01:13:23,418 DEBUG [pool-1-thread-1-EventThread] 
 zookeeper.ZooKeeperWatcher(310): split-log-worker-tests-0x142a6913350, 
 quorum=localhost:53800, baseZNode=/hbase Received ZooKeeper Event, 
 type=NodeDataChanged, state=SyncConnected, path=/hbase/splitWAL/tmt_task
 2013-11-30 01:13:23,419 INFO  [pool-1-thread-1] hbase.Waiter(174): Waiting up 
 to [1,500] milli-secs(wait.for.ratio=[1])
 2013-11-30 01:13:23,420 INFO  [pool-1-thread-1-EventThread] 
 regionserver.SplitLogWorker(522): task /hbase/splitWAL/tmt_task preempted 
 from tmt_svr,1,1, current task state and owner=OWNED another-worker,1,1
 2013-11-30 01:13:23,420 INFO  [pool-1-thread-1-EventThread] 
 regionserver.SplitLogWorker(608): Sending interrupt to stop the worker thread
 

[jira] [Commented] (HBASE-7091) support custom GC options in hbase-env.sh

2013-12-02 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836791#comment-13836791
 ] 

Jesse Yates commented on HBASE-7091:


Oh, you are modifying HBASE_OPTS on the command line! That wasn't a case I'd 
considered in the original version.

Yeah, either you set it on the command line or in the script - not both. You 
could modify the HBASE_OPTS variable in your hbase-env to support that behavior 
by doing the original 

{code}
HBASE_OPTS="$HBASE_OPTS ..."
{code}

but that ends up being problematic for some cases as hbase-env can get sourced 
multiple times. A better fix might be ensuring that we don't do that and allow 
the external setting to be prepended. Either way, it should be a follow-up jira - 
want to file one?
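The double-sourcing problem above can also be sidestepped by making the append idempotent. A hypothetical sketch of such a guard for hbase-env.sh follows (variable names and flags are illustrative, not the shipped script):

```shell
# Guard the GC flags so that re-sourcing hbase-env.sh (hbase-daemon.sh and
# bin/hbase both source it) appends them only once.
append_gc_opts() {
  GC="-verbose:gc -XX:+PrintGCDetails"
  case " $HBASE_OPTS " in
    *" $GC "*) ;;                         # flags already present: no-op
    *) HBASE_OPTS="$HBASE_OPTS $GC" ;;
  esac
}
HBASE_OPTS="-Xmx4g"
append_gc_opts
append_gc_opts                            # simulates the second sourcing
echo "$HBASE_OPTS"                        # -Xmx4g -verbose:gc -XX:+PrintGCDetails
```

With a guard like this, the order of sourcing stops mattering, which is the behavior the comment above is asking for.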

 support custom GC options in hbase-env.sh
 -

 Key: HBASE-7091
 URL: https://issues.apache.org/jira/browse/HBASE-7091
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.94.4
Reporter: Jesse Yates
Assignee: Jesse Yates
  Labels: newbie
 Fix For: 0.94.4, 0.95.0

 Attachments: hbase-7091-v1.patch


 When running things like bin/start-hbase and bin/hbase-daemon.sh start 
 [master|regionserver|etc] we end up setting HBASE_OPTS property a couple 
 times via calling hbase-env.sh. This is generally not a problem for most 
 cases, but when you want to set your own GC log properties, one would think 
 you should set HBASE_GC_OPTS, which get added to HBASE_OPTS. 
 NOPE! That would make too much sense.
 Running bin/hbase-daemons.sh will run bin/hbase-daemon.sh with the daemons it 
 needs to start. Each time through hbase-daemon.sh we also call bin/hbase. 
 This isn't a big deal except for each call to hbase-daemon.sh, we also source 
 hbase-env.sh twice (once in the script and once in bin/hbase). This is 
 important for my next point.
 Note that to turn on GC logging, you uncomment:
 {code}
 # export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails 
 -XX:+PrintGCDateStamps $HBASE_GC_OPTS" 
 {code}
 and then to log to a gc file for each server, you then uncomment:
 {code}
 # export HBASE_USE_GC_LOGFILE=true
 {code}
 in hbase-env.sh
 On the first pass through hbase-daemon.sh, HBASE_GC_OPTS isn't set, so 
 HBASE_OPTS doesn't get anything funky, but we set HBASE_USE_GC_LOGFILE, which 
 then sets HBASE_GC_OPTS to the log file (-Xloggc:...). Then in bin/hbase we 
 again run hbase-env.sh, which now has HBASE_GC_OPTS set, adding the GC file. 
 This isn't a general problem because HBASE_OPTS is set without prefixing the 
 existing HBASE_OPTS (e.g. HBASE_OPTS="$HBASE_OPTS ..."), allowing easy 
 updating. However, GC OPTS don't work the same and this is really odd 
 behavior when you want to set your own GC opts, which can include turning on 
 GC log rolling (yes, yes, they really are jvm opts, but they ought to support 
 their own param, to help minimize clutter).
 The simple version of this patch will just add an idempotent GC option to 
 hbase-env.sh and some comments that uncommenting 
 {code}
 # export HBASE_USE_GC_LOGFILE=true
 {code}
 will lead to a custom gc log file per server (along with an example name), so 
 you don't need to set -Xloggc.
 The more complex solution does the above and also solves the multiple calls 
 to hbase-env.sh so we can be sane about how all this works. Note that to fix 
 this, hbase-daemon.sh just needs to read in HBASE_USE_GC_LOGFILE after 
 sourcing hbase-env.sh and then update HBASE_OPTS. Oh and also not source 
 hbase-env.sh in bin/hbase. 
 Even further, we might want to consider adding options just for cases where 
 we don't need gc logging - i.e. the shell, the config reading tool, hcbk, 
 etc. This is the hardest version to handle since the first couple will 
 willy-nilly apply the gc options.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-12-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9485:
--

Status: Patch Available  (was: Open)

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9485-v2.txt


 HBase extends OutputCommitter, which turns recovery off, meaning all completed 
 maps are lost on RM restart and the job starts from scratch. FileOutputCommitter 
 implements recovery so we should look at that to see what is potentially 
 needed for recovery.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-12-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9485:
--

Attachment: 9485-v2.txt

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9485-v2.txt


 HBase extends OutputCommitter, which turns recovery off, meaning all completed 
 maps are lost on RM restart and the job starts from scratch. FileOutputCommitter 
 implements recovery so we should look at that to see what is potentially 
 needed for recovery.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-12-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836830#comment-13836830
 ] 

Ted Yu commented on HBASE-9485:
---

Patch v2 adds recovery related methods to TableOutputCommitter.
For HBase, there is no intermediate table - we write to the target table 
directly.
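A self-contained sketch of what "recovery related methods" could look like for a committer that writes directly to the target table. OutputCommitterShape is a local stand-in for Hadoop's org.apache.hadoop.mapreduce.OutputCommitter, and the bodies are assumptions based on the comment above, not the contents of 9485-v2.txt:

```java
// Minimal stand-in for the Hadoop OutputCommitter recovery contract.
abstract class OutputCommitterShape {
  // Hadoop's default answers false, which is what loses completed maps on
  // RM restart.
  boolean isRecoverySupported() { return false; }

  abstract void recoverTask(String taskAttemptId);
}

class RecoverableTableCommitter extends OutputCommitterShape {
  @Override
  boolean isRecoverySupported() { return true; }

  @Override
  void recoverTask(String taskAttemptId) {
    // Nothing to move: HBase mutations went straight to the target table,
    // so a recovered attempt has no intermediate output to promote.
  }
}
```

Because there is no intermediate output to rename, advertising recovery support is essentially free for the table case.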

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9485-v2.txt


 HBase extends OutputCommitter, which turns recovery off, meaning all completed 
 maps are lost on RM restart and the job starts from scratch. FileOutputCommitter 
 implements recovery so we should look at that to see what is potentially 
 needed for recovery.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836928#comment-13836928
 ] 

Nick Dimiduk commented on HBASE-10061:
--

Why is ignoring the missing jar a good idea? The current implementation will 
fail at job preparation time when a jar is missing. This patch will cause the 
failure to happen after the job is submitted, when a class from the missing jar 
is used. Isn't it better to fail earlier?

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: 10061-trunk.txt, HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.
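A minimal sketch of the guard the report asks for. getJar and updateMap here stand in for the TableMapReduceUtil helpers named above; their bodies are illustrative, not the actual HBase source:

```java
import java.security.CodeSource;
import java.util.Map;

class FindJarSketch {
  // May legitimately return null, e.g. when the class was loaded from a
  // directory of .class files rather than from a jar.
  static String getJar(Class<?> myClass) {
    CodeSource src = myClass.getProtectionDomain().getCodeSource();
    if (src == null) {
      return null;
    }
    String loc = src.getLocation().getPath();
    return loc.endsWith(".jar") ? loc : null;
  }

  static void updateMap(String jar, Map<String, String> packagedClasses) {
    packagedClasses.put(jar.intern(), jar); // NPEs when jar is null
  }

  static void findOrCreateJar(Class<?> myClass,
                              Map<String, String> packagedClasses) {
    String jar = getJar(myClass);
    if (jar != null) { // the null check the report asks for
      updateMap(jar, packagedClasses);
    }
  }
}
```

Whether a null jar should be skipped silently or fail fast is exactly the policy question debated in the comments below.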



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Amit Sela (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836937#comment-13836937
 ] 

Amit Sela commented on HBASE-10061:
---

At least in my use case, those jars are pre-deployed in the cluster, so why fail 
a job that will execute successfully? I would add a configuration like Ted 
suggested but I did not want to change method signatures (passing on 
configuration) because I usually don't develop against trunk. 

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: 10061-trunk.txt, HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-12-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836938#comment-13836938
 ] 

Hadoop QA commented on HBASE-9485:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12616602/9485-v2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8040//console

This message is automatically generated.

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9485-v2.txt


 HBase extends OutputCommitter, which turns recovery off, meaning all completed 
 maps are lost on RM restart and the job starts from scratch. FileOutputCommitter 
 implements recovery so we should look at that to see what is potentially 
 needed for recovery.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836948#comment-13836948
 ] 

Nick Dimiduk commented on HBASE-10061:
--

If the jars are deployed, can you not include them in the classpath when 
submitting the job?

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: 10061-trunk.txt, HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Amit Sela (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836957#comment-13836957
 ] 

Amit Sela commented on HBASE-10061:
---

Without considering my case, why fail a job that may execute successfully?

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: 10061-trunk.txt, HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-10068) Create Table Issue

2013-12-02 Thread Shashi Singh (JIRA)
Shashi Singh created HBASE-10068:


 Summary: Create Table Issue
 Key: HBASE-10068
 URL: https://issues.apache.org/jira/browse/HBASE-10068
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.94.14
Reporter: Shashi Singh
Priority: Minor


Ran the following:
create 'Person' 'Table', 'Demography'

I was expecting the shell to throw a syntax error; however, it created the 
table with the name PersonTable. There is a space between 'Person' and 
'Table'.

I've pasted the screen text:

hbase(main):002:0> create 'Person' 'Table', 'Demography'
0 row(s) in 1.2050 seconds

hbase(main):003:0> list
TABLE
PersonTable
1 row(s) in 0.0330 seconds





--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-10068) Create Table Issue

2013-12-02 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836981#comment-13836981
 ] 

Jean-Daniel Cryans commented on HBASE-10068:


That's how Ruby, which the shell is built on, works. It concatenates adjacent 
string literals laid out like this.
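A one-line demonstration of the Ruby behavior being described:

```ruby
# Ruby's parser concatenates adjacent string literals, so the two "arguments"
# collapse into a single table name before create ever runs.
name = 'Person' 'Table'
puts name   # => PersonTable
```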

 Create Table Issue
 --

 Key: HBASE-10068
 URL: https://issues.apache.org/jira/browse/HBASE-10068
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 0.94.14
Reporter: Shashi Singh
Priority: Minor

 Ran the following:
 create 'Person' 'Table', 'Demography'
 I was expecting the shell to throw a syntax error; however, it created the 
 table with the name PersonTable. There is a space between 'Person' and 
 'Table'.
 I've pasted the screen text:
 hbase(main):002:0> create 'Person' 'Table', 'Demography'
 0 row(s) in 1.2050 seconds
 hbase(main):003:0> list
 TABLE
 PersonTable
 1 row(s) in 0.0330 seconds



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-10061) TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in thrown NPE

2013-12-02 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836983#comment-13836983
 ] 

Nick Dimiduk commented on HBASE-10061:
--

The user application has requested that the jar for a class be packaged and 
sent to the cluster. That jar was not available at the time of submission, so 
the request fails. If the job doesn't need the jar shipped to the cluster, 
it shouldn't add a representative class to this method's invocation list.

Just curious, can you describe your scenario in a little more detail? I'm 
surprised you're able to instantiate a Class object for a class that isn't 
available on the classpath. Or is it there, just as a .class file entry instead 
of in a jar?

 TableMapReduceUtil.findOrCreateJar calls updateMap(null, ) resulting in 
 thrown NPE
 --

 Key: HBASE-10061
 URL: https://issues.apache.org/jira/browse/HBASE-10061
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.94.12
Reporter: Amit Sela
 Attachments: 10061-trunk.txt, HBASE-10061.patch


 TableMapReduceUtil.findOrCreateJar line 596:
 jar = getJar(my_class);
 updateMap(jar, packagedClasses);
 In case getJar returns null, updateMap will throw NPE.
 Should check null==jar before calling updateMap.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-9892) Add info port to ServerName to support multi instances in a node

2013-12-02 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836989#comment-13836989
 ] 

Enis Soztutar commented on HBASE-9892:
--

bq. info port is an attribute that does not vary while server is up and 
running. It does not belong in the cluster status set. Can you not get it from 
zk Steve Loughran? Or failing that, you want us to write you a script that will 
read it out of zk for you – this and stuff like current master?
We still need to expose the info ports, and maybe other static attributes of the 
cluster, to the clients as an API, I think. It belongs together with liveServers 
data, since for each live server, the client may need the info port. 

 Add info port to ServerName to support multi instances in a node
 

 Key: HBASE-9892
 URL: https://issues.apache.org/jira/browse/HBASE-9892
 Project: HBase
  Issue Type: Improvement
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Attachments: HBASE-9892-0.94-v1.diff, HBASE-9892-0.94-v2.diff, 
 HBASE-9892-0.94-v3.diff, HBASE-9892-0.94-v4.diff


 The full GC time of a regionserver with a big heap (>30G) usually cannot be 
 controlled within 30s. At the same time, the servers with 64G memory are normal. 
 So we try to deploy multiple rs instances (2-3) in a single node, and the heap of 
 each rs is about 20G ~ 24G.
 Most things work fine, except the hbase web ui. The master gets the RS 
 info port from conf, which is not suitable for this situation of multiple rs 
 instances in a node. So we add the info port to ServerName.
 a. At startup, the rs reports its info port to HMaster.
 b. For the root region, the rs writes the servername with info port to the zookeeper 
 root-region-server node.
 c. For meta regions, the rs writes the servername with info port to the root region.
 d. For user regions, the rs writes the servername with info port to meta regions.
 So HMaster and clients can get the info port from the servername.
 To test this feature, I changed the rs num from 1 to 3 in standalone mode, so 
 we can test it in standalone mode.
 I think Hoya (hbase on yarn) will encounter the same problem. Does anyone know 
 how Hoya handles this problem?
 PS: There are different formats for the servername in the zk node and meta table; I 
 think we need to unify them and refactor the code.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8553) improve unit-test coverage of package org.apache.hadoop.hbase.mapreduce.hadoopbackport

2013-12-02 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13836992#comment-13836992
 ] 

Lars Hofhansl commented on HBASE-8553:
--

This is a good test! It does break with Hadoop 2.x. Looks like this is 
expected; nevertheless it is annoying.

[~iveselovsky], do you have any insight here? Is that something that we could 
fix easily? There are more folks these days who run 0.94 against a Hadoop 2.x 
version. Not a big deal, just asking.


 improve unit-test coverage of package 
 org.apache.hadoop.hbase.mapreduce.hadoopbackport
 --

 Key: HBASE-8553
 URL: https://issues.apache.org/jira/browse/HBASE-8553
 Project: HBase
  Issue Type: Test
Affects Versions: 0.94.9
Reporter: Ivan A. Veselovsky
Assignee: Ivan A. Veselovsky
 Fix For: 0.94.13

 Attachments: HBASE-8553-0.94--N2.patch


 The patch is for branch 0.94 only.
 The class InputSampler is modified because a bug needs to be fixed there: in 
 method run(String[] args) it should be 
 TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), outf); 
 instead of TotalOrderPartitioner.setPartitionFile(getConf(), outf);. 
 Otherwise it is impossible to set the output file correctly. 
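The class of bug being fixed can be illustrated without Hadoop on the classpath. Conf and JobLike below are local stand-ins for Hadoop's Configuration and Job; the property key is made up:

```java
import java.util.HashMap;
import java.util.Map;

class Conf {
  private final Map<String, String> props = new HashMap<>();
  void set(String k, String v) { props.put(k, v); }
  String get(String k) { return props.get(k); }
}

class JobLike {
  // A Job holds its own configuration, distinct from the Tool's getConf().
  private final Conf conf = new Conf();
  Conf getConfiguration() { return conf; }
}

class PartitionFileSketch {
  static void demo() {
    Conf toolConf = new Conf(); // what getConf() returns in the Tool
    JobLike job = new JobLike();
    // Buggy: the partition file lands in a conf the job never consults.
    toolConf.set("partition.file", "/tmp/_partitions");
    // Fixed: write to the configuration the job actually reads.
    job.getConfiguration().set("partition.file", "/tmp/_partitions");
  }
}
```

Setting the property on getConf() is silently lost because the job reads only its own configuration object, which is why setPartitionFile must target job.getConfiguration().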



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Assigned] (HBASE-10044) test-patch.sh should filter out documents by known file extensions

2013-12-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-10044:
--

Assignee: Ted Yu

 test-patch.sh should filter out documents by known file extensions
 --

 Key: HBASE-10044
 URL: https://issues.apache.org/jira/browse/HBASE-10044
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10044-v1.txt


 Currently only htm[l] files are filtered out when test-patch.sh looks for 
 patch attachment.
 The following files should be excluded as well:
 .pdf
 .xlsx
 .jpg
 .png
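A sketch of the proposed extension filter (the real test-patch.sh logic is more involved; only the extension check is shown, and the function name is made up):

```shell
is_patch_candidate() {
  case "$1" in
    *.html|*.htm|*.pdf|*.xlsx|*.jpg|*.png) return 1 ;;  # document: skip it
    *) return 0 ;;
  esac
}
is_patch_candidate "10044-v1.txt" && echo "take 10044-v1.txt"
is_patch_candidate "design.pdf" || echo "skip design.pdf"
```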



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-10044) test-patch.sh should filter out documents by known file extensions

2013-12-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10044:
---

Attachment: 10044-v1.txt

 test-patch.sh should filter out documents by known file extensions
 --

 Key: HBASE-10044
 URL: https://issues.apache.org/jira/browse/HBASE-10044
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10044-v1.txt


 Currently only htm[l] files are filtered out when test-patch.sh looks for 
 patch attachment.
 The following files should be excluded as well:
 .pdf
 .xlsx
 .jpg
 .png



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HBASE-10044) test-patch.sh should filter out documents by known file extensions

2013-12-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10044:
---

Status: Patch Available  (was: Open)

 test-patch.sh should filter out documents by known file extensions
 --

 Key: HBASE-10044
 URL: https://issues.apache.org/jira/browse/HBASE-10044
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10044-v1.txt


 Currently only htm[l] files are filtered out when test-patch.sh looks for 
 patch attachment.
 The following files should be excluded as well:
 .pdf
 .xlsx
 .jpg
 .png



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HBASE-10069) Potential duplicate calls to log#appendNoSync() in HRegion#doMiniBatchMutation()

2013-12-02 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10069:
--

 Summary: Potential duplicate calls to log#appendNoSync() in 
HRegion#doMiniBatchMutation()
 Key: HBASE-10069
 URL: https://issues.apache.org/jira/browse/HBASE-10069
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Priority: Minor


In HRegion#doMiniBatchMutation():
{code}
if (nonceGroup != currentNonceGroup || nonce != currentNonce) {
  if (walEdit.size() > 0) {
assert isInReplay;
txid = this.log.appendNoSync(this.getRegionInfo(), 
htableDescriptor.getTableName(),
  walEdit, m.getClusterIds(), now, htableDescriptor, 
this.sequenceId, true,
  currentNonceGroup, currentNonce);
hasWalAppends = true;
  }
  currentNonceGroup = nonceGroup;
  currentNonce = nonce;
}

// Add WAL edits by CP
WALEdit fromCP = batchOp.walEditsFromCoprocessors[i];
if (fromCP != null) {
  for (KeyValue kv : fromCP.getKeyValues()) {
walEdit.add(kv);
  }
}
...
  Mutation mutation = batchOp.getMutation(firstIndex);
  if (walEdit.size() > 0) {
txid = this.log.appendNoSync(this.getRegionInfo(), 
this.htableDescriptor.getTableName(),
  walEdit, mutation.getClusterIds(), now, this.htableDescriptor, 
this.sequenceId,
  true, currentNonceGroup, currentNonce);
hasWalAppends = true;
  }
{code}
If fromCP is null, no new edits may have been added to walEdit, but 
log#appendNoSync() would still be called one more time at line 2368.
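One way to avoid the duplicate append is to remember the walEdit size at the nonce-boundary flush and only append again when the coprocessor actually added edits. The sketch below is illustrative only, not HRegion code; names and the counter are made up:

```java
import java.util.List;

class DoubleAppendSketch {
  // Returns how many times appendNoSync would run for one mini-batch whose
  // coprocessor edits may be absent (cpEdits == null).
  static int appendsForBatch(List<String> walEdit, List<String> cpEdits) {
    int appends = 0;
    if (!walEdit.isEmpty()) {
      appends++; // first appendNoSync at the nonce boundary
    }
    int flushedSize = walEdit.size();
    if (cpEdits != null) {
      walEdit.addAll(cpEdits); // CP edits arrive after the first append
    }
    if (walEdit.size() > flushedSize) {
      appends++; // guarded second appendNoSync: only when new edits exist
    }
    return appends;
  }
}
```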



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HBASE-8089) Add type support

2013-12-02 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837073#comment-13837073
 ] 

Nick Dimiduk commented on HBASE-8089:
-

Of these subtasks, probably performance improvements (HBASE-8694) and type 
comparisons (HBASE-8863) could be tackled by a willing individual. Client-side 
API enhancements will take some time for discussion. I think the ImportTSV 
stuff should be tackled after we've defined a language for type declaration 
(similar to what we have for Filters in {{ParseFilter}}).

 Add type support
 

 Key: HBASE-8089
 URL: https://issues.apache.org/jira/browse/HBASE-8089
 Project: HBase
  Issue Type: New Feature
  Components: Client
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 0.98.0

 Attachments: HBASE-8089-types.txt, HBASE-8089-types.txt, 
 HBASE-8089-types.txt, HBASE-8089-types.txt, hbase data types WIP.pdf


 This proposal outlines an improvement to HBase that provides for a set of 
 types, above and beyond the existing byte-bucket strategy. This is intended 
 to reduce user-level duplication of effort, provide better support for 
 3rd-party integration, and provide an overall improved experience for 
 developers using HBase.





[jira] [Updated] (HBASE-9931) Optional setBatch for CopyTable to copy large rows in batches

2013-12-02 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9931:


Attachment: HBASE-9931.01.patch

Yes, you're right. Take 2.

 Optional setBatch for CopyTable to copy large rows in batches
 -

 Key: HBASE-9931
 URL: https://issues.apache.org/jira/browse/HBASE-9931
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Dave Latham
 Attachments: HBASE-9931.00.patch, HBASE-9931.01.patch


 We've had CopyTable jobs fail because a small number of rows are wide enough
 to not fit into memory. If we could specify the batch size for CopyTable
 scans, that should be able to break those large rows up into multiple
 iterations to save the heap.
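The actual patch presumably just exposes Scan#setBatch on CopyTable's command line; the batching effect itself can be sketched generically (illustrative code, not the HBase patch): split one wide row's cells into fixed-size chunks so no single iteration has to hold the whole row.

```java
import java.util.ArrayList;
import java.util.List;

public class RowBatcher {
    /**
     * Split one wide row's cells into fixed-size batches so each batch
     * fits in memory, mirroring the effect of Scan#setBatch(n).
     */
    public static <T> List<List<T>> batch(List<T> cells, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < cells.size(); i += batchSize) {
            batches.add(cells.subList(i, Math.min(i + batchSize, cells.size())));
        }
        return batches;
    }
}
```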





[jira] [Commented] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-12-02 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837108#comment-13837108
 ] 

Nick Dimiduk commented on HBASE-9485:
-

This patch explicitly declares that recovery is not supported. How do you intend
to add support for recovery without a global ordering of records to write and
status tracking of successful puts?

Nit: please include the @Override annotations on the methods.

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9485-v2.txt


 HBase extends OutputCommitter, which turns recovery off, meaning all completed
 maps are lost on RM restart and the job starts from scratch. FileOutputCommitter
 implements recovery, so we should look at that to see what is potentially
 needed for recovery.
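The recovery contract being discussed can be modeled in a few lines. This is a toy model only; the real API is Hadoop's OutputCommitter (isRecoverySupported()/recoverTask()) and HBase's TableOutputCommitter, not these classes:

```java
public class CommitterModel {
    /** Recovery is off by default, as with HBase's current committer. */
    static class BaseCommitter {
        boolean isRecoverySupported() { return false; }
        void recoverTask(String taskId) {
            throw new UnsupportedOperationException("recovery not supported");
        }
    }

    /** A committer that opts in: completed tasks are simply kept as-is. */
    static class RecoverableCommitter extends BaseCommitter {
        @Override boolean isRecoverySupported() { return true; }
        @Override void recoverTask(String taskId) {
            // Puts already written to HBase are durable; nothing to redo.
        }
    }
}
```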





[jira] [Commented] (HBASE-10044) test-patch.sh should filter out documents by known file extensions

2013-12-02 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837111#comment-13837111
 ] 

Jesse Yates commented on HBASE-10044:
-

+1 lgtm.

Should we instead just have accepted endings for patches? For instance, just
supporting .patch and .txt would cover 99% (if not more) of the cases.

 test-patch.sh should filter out documents by known file extensions
 --

 Key: HBASE-10044
 URL: https://issues.apache.org/jira/browse/HBASE-10044
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10044-v1.txt


 Currently only htm[l] files are filtered out when test-patch.sh looks for 
 patch attachment.
 The following files should be excluded as well:
 .pdf
 .xlsx
 .jpg
 .png





[jira] [Commented] (HBASE-8763) [BRAINSTORM] Combine MVCC and SeqId

2013-12-02 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837109#comment-13837109
 ] 

Jeffrey Zhong commented on HBASE-8763:
--

Today I had some discussion with [~enis] and [~te...@apache.org] on this topic
and found that it might be possible to handle this JIRA in a simpler way. Below
are the steps:

1) Memstore insert using Long.MAX_VALUE as the initial write number
2) appendNoSync
3) sync
4) update the WriteEntry's write number to the sequence number returned from Step 2
5) CompleteMemstoreInsert. In this step, make the current read point >= the
sequence number from Step 2. The reasoning is that once we have synced up to
that sequence number, all changes with smaller sequence numbers are already
synced into the WAL. Therefore, we should be able to bump the read point up to
the last sequence number synced.

Currently, we maintain an internal queue which may defer the read-point bump
if the order in which transactions complete differs from the order of the MVCC
internal write queue.

By doing the above, it's possible to remove the logic maintaining the
writeQueue, which means we can remove two locks and one queue loop from the
write code path. Sounds too good to be true :-). Let me write a quick patch and
run it against the unit tests to see if the idea could fly.
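The five steps above can be sketched in self-contained Java. This models the idea only; the real classes are HBase's MultiVersionConsistencyControl and the WAL, not these:

```java
public class MvccSeqIdSketch {
    /** Step 1: a new write starts at Long.MAX_VALUE so no reader sees it. */
    static class WriteEntry {
        long writeNumber = Long.MAX_VALUE;
    }

    private long lastSeqId = 0;   // WAL sequence-id counter
    private long readPoint = 0;   // highest seqId visible to readers

    /** Steps 2 and 3: appendNoSync + sync, returning the WAL seqId. */
    long appendAndSync() {
        return ++lastSeqId;
    }

    /** Steps 4 and 5: adopt the seqId and advance the read point. */
    void completeMemstoreInsert(WriteEntry e, long seqId) {
        e.writeNumber = seqId;                   // step 4
        readPoint = Math.max(readPoint, seqId);  // step 5
    }

    boolean isVisible(WriteEntry e) {
        return e.writeNumber <= readPoint;
    }
}
```

The key property is that an entry is invisible until its provisional write number is replaced by a synced seqId, at which point the read point can jump straight to that seqId with no write queue to drain.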


 [BRAINSTORM] Combine MVCC and SeqId
 ---

 Key: HBASE-8763
 URL: https://issues.apache.org/jira/browse/HBASE-8763
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: Enis Soztutar
 Attachments: hbase-8763_wip1.patch


 HBASE-8701 and a lot of recent issues include good discussions about mvcc + 
 seqId semantics. It seems that having mvcc and the seqId complicates the 
 comparator semantics a lot in regards to flush + WAL replay + compactions + 
 delete markers and out of order puts. 
 Thinking more about it I don't think we need a MVCC write number which is 
 different than the seqId. We can keep the MVCC semantics, read point and 
 smallest read points intact, but combine mvcc write number and seqId. This 
 will allow cleaner semantics + implementation + smaller data files. 
 We can do some brainstorming for 0.98. We still have to verify that this 
 would be semantically correct, it should be so by my current understanding.





[jira] [Updated] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-12-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9485:
--

Attachment: (was: 9485-v2.txt)

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Ted Yu
Assignee: Ted Yu

 HBase extends OutputCommitter, which turns recovery off, meaning all completed
 maps are lost on RM restart and the job starts from scratch. FileOutputCommitter
 implements recovery, so we should look at that to see what is potentially
 needed for recovery.





[jira] [Updated] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-12-02 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9485:
--

Attachment: 9485-v2.txt

Correct patch.

When a task completes, it is done for good; there is nothing to recover if the
job restarts later.

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9485-v2.txt


 HBase extends OutputCommitter, which turns recovery off, meaning all completed
 maps are lost on RM restart and the job starts from scratch. FileOutputCommitter
 implements recovery, so we should look at that to see what is potentially
 needed for recovery.





[jira] [Commented] (HBASE-9931) Optional setBatch for CopyTable to copy large rows in batches

2013-12-02 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837113#comment-13837113
 ] 

Dave Latham commented on HBASE-9931:


Looks good, Nick.  +1

Would love to see it hit 0.96 and 0.94 as well.

 Optional setBatch for CopyTable to copy large rows in batches
 -

 Key: HBASE-9931
 URL: https://issues.apache.org/jira/browse/HBASE-9931
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Dave Latham
 Attachments: HBASE-9931.00.patch, HBASE-9931.01.patch


 We've had CopyTable jobs fail because a small number of rows are wide enough
 to not fit into memory. If we could specify the batch size for CopyTable
 scans, that should be able to break those large rows up into multiple
 iterations to save the heap.





[jira] [Commented] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-12-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837114#comment-13837114
 ] 

Ted Yu commented on HBASE-9485:
---

w.r.t. the @Override annotation: the recovery methods are not in OutputCommitter
in hadoop-1. That was why the annotation was left out.

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 9485-v2.txt


 HBase extends OutputCommitter, which turns recovery off, meaning all completed
 maps are lost on RM restart and the job starts from scratch. FileOutputCommitter
 implements recovery, so we should look at that to see what is potentially
 needed for recovery.





[jira] [Commented] (HBASE-10044) test-patch.sh should filter out documents by known file extensions

2013-12-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837119#comment-13837119
 ] 

Ted Yu commented on HBASE-10044:


bq.  just supporting .patch and .txt would cover 99%
I would consider the remaining 1% :-)
If some contributors submit patches with other suffixes, some committer(s) would
explain to them why their patches were rejected.

 test-patch.sh should filter out documents by known file extensions
 --

 Key: HBASE-10044
 URL: https://issues.apache.org/jira/browse/HBASE-10044
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10044-v1.txt


 Currently only htm[l] files are filtered out when test-patch.sh looks for 
 patch attachment.
 The following files should be excluded as well:
 .pdf
 .xlsx
 .jpg
 .png





[jira] [Commented] (HBASE-9931) Optional setBatch for CopyTable to copy large rows in batches

2013-12-02 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837122#comment-13837122
 ] 

Nick Dimiduk commented on HBASE-9931:
-

[~lhofhansl], [~stack], [~apurtell] Any objections on this one?

 Optional setBatch for CopyTable to copy large rows in batches
 -

 Key: HBASE-9931
 URL: https://issues.apache.org/jira/browse/HBASE-9931
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Dave Latham
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.15

 Attachments: HBASE-9931.00.patch, HBASE-9931.01.patch


 We've had CopyTable jobs fail because a small number of rows are wide enough
 to not fit into memory. If we could specify the batch size for CopyTable
 scans, that should be able to break those large rows up into multiple
 iterations to save the heap.





[jira] [Updated] (HBASE-9931) Optional setBatch for CopyTable to copy large rows in batches

2013-12-02 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9931:


Fix Version/s: 0.94.15
   0.96.1
   0.98.0

 Optional setBatch for CopyTable to copy large rows in batches
 -

 Key: HBASE-9931
 URL: https://issues.apache.org/jira/browse/HBASE-9931
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Dave Latham
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.15

 Attachments: HBASE-9931.00.patch, HBASE-9931.01.patch


 We've had CopyTable jobs fail because a small number of rows are wide enough
 to not fit into memory. If we could specify the batch size for CopyTable
 scans, that should be able to break those large rows up into multiple
 iterations to save the heap.





[jira] [Assigned] (HBASE-9931) Optional setBatch for CopyTable to copy large rows in batches

2013-12-02 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk reassigned HBASE-9931:
---

Assignee: Nick Dimiduk

 Optional setBatch for CopyTable to copy large rows in batches
 -

 Key: HBASE-9931
 URL: https://issues.apache.org/jira/browse/HBASE-9931
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Dave Latham
Assignee: Nick Dimiduk
 Fix For: 0.98.0, 0.96.1, 0.94.15

 Attachments: HBASE-9931.00.patch, HBASE-9931.01.patch


 We've had CopyTable jobs fail because a small number of rows are wide enough
 to not fit into memory. If we could specify the batch size for CopyTable
 scans, that should be able to break those large rows up into multiple
 iterations to save the heap.





[jira] [Commented] (HBASE-10044) test-patch.sh should filter out documents by known file extensions

2013-12-02 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837123#comment-13837123
 ] 

Jesse Yates commented on HBASE-10044:
-

Sure. My point being: is that preferable? Just asking before we start adding
more code (convention over configuration being the root argument). Don't
really care either way :)

 test-patch.sh should filter out documents by known file extensions
 --

 Key: HBASE-10044
 URL: https://issues.apache.org/jira/browse/HBASE-10044
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10044-v1.txt


 Currently only htm[l] files are filtered out when test-patch.sh looks for 
 patch attachment.
 The following files should be excluded as well:
 .pdf
 .xlsx
 .jpg
 .png





[jira] [Commented] (HBASE-10044) test-patch.sh should filter out documents by known file extensions

2013-12-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837147#comment-13837147
 ] 

Ted Yu commented on HBASE-10044:


Posted 'Extensions for patches accepted by QA bot' on the dev list to solicit
more opinions.

 test-patch.sh should filter out documents by known file extensions
 --

 Key: HBASE-10044
 URL: https://issues.apache.org/jira/browse/HBASE-10044
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10044-v1.txt


 Currently only htm[l] files are filtered out when test-patch.sh looks for 
 patch attachment.
 The following files should be excluded as well:
 .pdf
 .xlsx
 .jpg
 .png





[jira] [Created] (HBASE-10070) HBase read high-availability using eventually consistent region replicas

2013-12-02 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-10070:
-

 Summary: HBase read high-availability using eventually consistent 
region replicas
 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar


In the present HBase architecture, it is hard, probably impossible, to satisfy
constraints like "the 99th percentile of reads will be served under 10 ms". One
of the major factors that affects this is the MTTR for regions. There are three
phases in the MTTR process: detection, assignment, and recovery. Of these,
detection is usually the longest and is presently on the order of 20-30
seconds. During this time, clients are not able to read the region's data.

However, some clients would be better served if regions were available for
eventually consistent reads during recovery. This would help satisfy
low-latency guarantees for the class of applications that can work with stale
reads.

For improving read availability, we propose a replicated read-only region
serving design, also referred to as secondary regions, or region shadows.
Extending the current model of a region being opened for reads and writes in a
single region server, the region will also be opened for reading in other
region servers. The region server which hosts the region for reads and writes
(as in the current case) will be declared PRIMARY, while 0 or more region
servers might host the region as SECONDARY. There may be more than one
secondary (replica count > 2).

Will attach a design doc shortly which contains most of the details and some
thoughts about development approaches. Reviews are more than welcome.

We also have a proof-of-concept patch, which includes the master and region
server side of the changes. Client-side changes will be coming soon as well.
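As a toy illustration of the proposed roles (a model of the idea only, not the proof-of-concept patch; all names here are invented): writes go to the primary, any replica serves reads, and a secondary may be stale until replication catches it up.

```java
import java.util.HashMap;
import java.util.Map;

public class RegionReplicaModel {
    enum Role { PRIMARY, SECONDARY }

    static class RegionReplica {
        final Role role;
        final Map<String, String> store = new HashMap<>();

        RegionReplica(Role role) { this.role = role; }

        // Only the primary accepts writes.
        void put(String key, String value) {
            if (role != Role.PRIMARY) {
                throw new IllegalStateException("writes go to the primary only");
            }
            store.put(key, value);
        }

        // Reads are served by any replica; a secondary may return stale data.
        String get(String key) { return store.get(key); }

        // Stand-in for whatever mechanism ships edits to secondaries.
        void catchUpFrom(RegionReplica primary) { store.putAll(primary.store); }
    }
}
```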








[jira] [Updated] (HBASE-10070) HBase read high-availability using eventually consistent region replicas

2013-12-02 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-10070:
--

Attachment: HighAvailabilityDesignforreadsApachedoc.pdf

Attaching a design doc for the feature. Comments welcome. 

 HBase read high-availability using eventually consistent region replicas
 

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like 99th percentile of the reads will be served under 10 
 ms. One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions will be available for 
 reads during recovery for doing eventually consistent reads. This will help 
 with satisfying low latency guarantees for some class of applications which 
 can work with stale reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred as secondary regions, or region shadows. 
 Extending current model of a region being opened for reads and writes in a 
 single region server, the region will be also opened for reading in region 
 servers. The region server which hosts the region for reads and writes (as in 
 current case) will be declared as PRIMARY, while 0 or more region servers 
 might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof of concept patch, which includes the master and regions 
 server side of changes. Client side changes will be coming soon as well. 





[jira] [Commented] (HBASE-10070) HBase read high-availability using eventually consistent region replicas

2013-12-02 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837174#comment-13837174
 ] 

Vladimir Rodionov commented on HBASE-10070:
---

{quote}
 Of these, the detection is usually the longest and is presently in the order 
of 20-30 seconds. 
{quote}
Any particular reason why?

 HBase read high-availability using eventually consistent region replicas
 

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like 99th percentile of the reads will be served under 10 
 ms. One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions will be available for 
 reads during recovery for doing eventually consistent reads. This will help 
 with satisfying low latency guarantees for some class of applications which 
 can work with stale reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred as secondary regions, or region shadows. 
 Extending current model of a region being opened for reads and writes in a 
 single region server, the region will be also opened for reading in region 
 servers. The region server which hosts the region for reads and writes (as in 
 current case) will be declared as PRIMARY, while 0 or more region servers 
 might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof of concept patch, which includes the master and regions 
 server side of changes. Client side changes will be coming soon as well. 





[jira] [Commented] (HBASE-10070) HBase read high-availability using eventually consistent region replicas

2013-12-02 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13837187#comment-13837187
 ] 

Jonathan Hsieh commented on HBASE-10070:


This is great.  I've been giving this a lot of thought recently and doing some 
experiments to see how feasible this is.  

 HBase read high-availability using eventually consistent region replicas
 

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like 99th percentile of the reads will be served under 10 
 ms. One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions will be available for 
 reads during recovery for doing eventually consistent reads. This will help 
 with satisfying low latency guarantees for some class of applications which 
 can work with stale reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred as secondary regions, or region shadows. 
 Extending current model of a region being opened for reads and writes in a 
 single region server, the region will be also opened for reading in region 
 servers. The region server which hosts the region for reads and writes (as in 
 current case) will be declared as PRIMARY, while 0 or more region servers 
 might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof of concept patch, which includes the master and regions 
 server side of changes. Client side changes will be coming soon as well. 




