[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876241#comment-13876241
 ] 

Anoop Sam John commented on HBASE-10322:


Sorry for the confusion, Lars.
The final idea is that only the super user can view tags.  But the impl raised some 
issues and we decided not to handle that at this time.  As of now, for 
0.98.0, we will make tags a server-only thing.  No user, not even the super user, will 
be able to retrieve tags on the client side.  Am I making it clear now?

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in a scan when using the Java client (Codec based cell block encoding). But 
 during a Get operation, or when a pure PB based Scan comes in, we are not sending 
 back the tags.  So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says whether 
 to send back tags or not. But trusting something the scan specifies might not 
 be correct IMO. Then comes the way of checking the user who is doing the 
 scan: send back tags only when an HBase super user is doing the scan. So 
 when a case like the Export Tool's comes up, the execution should happen as a 
 super user.
 So IMO we should go with #3.
 Patch coming soon.
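 A minimal sketch of what the per-user check in option #3 amounts to (the names 
 here are hypothetical and the actual patch may differ; the real decision would 
 be wired into the read/codec path on the server):
 {code}
 import java.util.Set;

 // Hypothetical illustration of option #3: tags are sent back only when
 // the requesting user is a configured HBase super user.
 public class TagVisibilityPolicy {
   private final Set<String> superUsers;

   public TagVisibilityPolicy(Set<String> superUsers) {
     this.superUsers = superUsers;
   }

   /** Decide per request whether cells should keep their tags. */
   public boolean shouldSendTags(String requestingUser) {
     return superUsers.contains(requestingUser);
   }
 }
 {code}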



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876241#comment-13876241
 ] 

Anoop Sam John edited comment on HBASE-10322 at 1/20/14 8:20 AM:
-

Sorry for the confusion, Lars.
The final idea is that only the super user can view tags.  But the impl raised some 
issues and we decided not to handle that at this time.  As of now, for 
0.98.0, we will make tags a server-only thing.  No user, not even the super user, will 
be able to retrieve tags on the client side.  Am I making it clear now?

bq. Can tackle Export/Copytable/etc later
Yes, for use cases like Export/Copytable we thought tags should be accessible to 
clients also. Then we thought this might be controlled via the user, and we can 
ask for Export to be executed by a super user. Only then will tags get exported. 
Also, all the KVs can be scanned back in full only when the super user is 
executing it. For other users, KVs carrying labels they are not authorized for 
won't get read.


was (Author: anoop.hbase):
Sorry for the confusion, Lars.
The final idea is that only the super user can view tags.  But the impl raised some 
issues and we decided not to handle that at this time.  As of now, for 
0.98.0, we will make tags a server-only thing.  No user, not even the super user, will 
be able to retrieve tags on the client side.  Am I making it clear now?

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in a scan when using the Java client (Codec based cell block encoding). But 
 during a Get operation, or when a pure PB based Scan comes in, we are not sending 
 back the tags.  So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says whether 
 to send back tags or not. But trusting something the scan specifies might not 
 be correct IMO. Then comes the way of checking the user who is doing the 
 scan: send back tags only when an HBase super user is doing the scan. So 
 when a case like the Export Tool's comes up, the execution should happen as a 
 super user.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Ishan Chhabra (JIRA)
Ishan Chhabra created HBASE-10380:
-

 Summary: Add bytesBinary and filter options to CopyTable
 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Ishan Chhabra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chhabra updated HBASE-10380:
--

Status: Patch Available  (was: Open)

 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor

 Add options in CopyTable to:
 1. specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Ishan Chhabra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chhabra updated HBASE-10380:
--

Description: 
Add options in CopyTable to:
1. specify the start and stop row in bytesBinary format 
2. Use filters

 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor

 Add options in CopyTable to:
 1. specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Ishan Chhabra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chhabra updated HBASE-10380:
--

Description: 
Add options in CopyTable to:
1. Specify the start and stop row in bytesBinary format 
2. Use filters

  was:
Add options in CopyTable to:
1. specify the start and stop row in bytesBinary format 
2. Use filters


 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor
 Attachments: HBASE_10380_0.94-v1.patch


 Add options in CopyTable to:
 1. Specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Ishan Chhabra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chhabra updated HBASE-10380:
--

Attachment: HBASE_10380_0.94-v1.patch

 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor
 Attachments: HBASE_10380_0.94-v1.patch


 Add options in CopyTable to:
 1. specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Ishan Chhabra (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876257#comment-13876257
 ] 

Ishan Chhabra commented on HBASE-10380:
---

For filters, the patch allows one to specify a file containing the filter in a 
serialized form. This seemed to be the only generic way to specify filters, and 
it allows complex filters (including filter lists).
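A sketch of how such a file could be produced, assuming the 0.96+ protobuf-based 
Filter.toByteArray(); the 0.94 patch would rely on that branch's Writable 
serialization instead, and the file name here is arbitrary:
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Writes a filter to a file in serialized form, which a tool like
// CopyTable could then read back. "filter.bin" is an arbitrary name.
public class WriteFilterFile {
  public static void main(String[] args) throws IOException {
    Filter filter = new PrefixFilter(Bytes.toBytes("row-prefix"));
    Files.write(Paths.get("filter.bin"), filter.toByteArray());
  }
}
{code}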

 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor
 Attachments: HBASE_10380_0.94-v1.patch


 Add options in CopyTable to:
 1. Specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Ishan Chhabra (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876259#comment-13876259
 ] 

Ishan Chhabra commented on HBASE-10380:
---

If the approach looks good, then I can build a patch for trunk.

 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor
 Attachments: HBASE_10380_0.94-v1.patch


 Add options in CopyTable to:
 1. Specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876260#comment-13876260
 ] 

Hadoop QA commented on HBASE-10380:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12623922/HBASE_10380_0.94-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12623922

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8472//console

This message is automatically generated.

 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor
 Attachments: HBASE_10380_0.94-v1.patch


 Add options in CopyTable to:
 1. Specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10381) HBase shell scripts do not handle whitespaces in path names

2014-01-20 Thread G G (JIRA)
G G created HBASE-10381:
---

 Summary: HBase shell scripts do not handle whitespaces in path 
names
 Key: HBASE-10381
 URL: https://issues.apache.org/jira/browse/HBASE-10381
 Project: HBase
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.96.0
 Environment: Windows, Linux
Reporter: G G


When setting one of the HBASE_CONF_DIR, HBASE_LOG_DIR, or HBASE_CLASSPATH 
environment variables to a directory containing whitespace, the Linux shell 
scripts that start/stop hbase daemons (bin/start-hbase.sh) and the scripts they 
call stop working. I tried to create a patch for this but unfortunately my 
shell-script knowledge does not suffice.
In some lines, escaping the referenced environment variables seems to do the 
trick, but I was not able to fix the code that builds the command line in 
bin/hbase, which looks like this:
{noformat}
HBASE_OPTS="$HBASE_OPTS -Dhbase.log.dir=$HBASE_LOG_DIR"
...
{noformat}
If HBASE_LOG_DIR is e.g. "/tmp/foo bar" then HBASE_OPTS becomes "... 
-Dhbase.log.dir=/tmp/foo bar", and when java is started it interprets "bar" as 
the main-class argument.

On Windows, HBase would not start unless I escaped the HBASE_CONF_DIR 
environment variable using double quotes.

If anyone has an idea on how to fix this, I could try to build a patch.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10377) Add test for HBASE-10370 Compaction in out-of-date Store causes region split failure

2014-01-20 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-10377:


Attachment: HBASE-10377-v1.patch

Ran the test several times on a local machine and it passed every time.

Could someone help re-trigger the CI tests?

The reason the test failed in HBASE-10370 is that the RS had already run a major 
compaction, so when the test requested a compaction manually, it returned null, 
which made the test fail.
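A minimal sketch of the retry idea (requestCompaction() here is a hypothetical 
stand-in for whatever call the test uses; this is not the actual patch):
{code}
// Retry the manual compaction request instead of failing on the first
// null: while the RS is still busy with its own major compaction, the
// request comes back null.
CompactionRequest req = null;
for (int attempt = 0; attempt < 10 && req == null; attempt++) {
  req = requestCompaction();              // may return null while RS is busy
  if (req == null) Thread.sleep(1000);    // let the in-flight compaction finish
}
assertNotNull("compaction request was never accepted", req);
{code}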




 Add test for HBASE-10370 Compaction in out-of-date Store causes region split 
 failure
 

 Key: HBASE-10377
 URL: https://issues.apache.org/jira/browse/HBASE-10377
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
 Attachments: 10377-testSplitFailedCompactionAndSplit.html, 
 HBASE-10377-v1.patch


 HBASE-10370 fixes the issue where region split fails following compacting 
 out-of-date Store
 The new test failed in this build:
 https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/82/testReport/org.apache.hadoop.hbase.regionserver/TestSplitTransactionOnCluster/testSplitFailedCompactionAndSplit/
 This issue is to make the new test, testSplitFailedCompactionAndSplit, robust.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10382) HBase fails to start when hbase.rootdir contains a whitespace character

2014-01-20 Thread G G (JIRA)
G G created HBASE-10382:
---

 Summary: HBase fails to start when hbase.rootdir contains a 
whitespace character
 Key: HBASE-10382
 URL: https://issues.apache.org/jira/browse/HBASE-10382
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: G G


When hbase.rootdir contains a whitespace character, e.g. when it is set to C:\Program 
Files\..., then HBase fails to start with the following exception being logged:
{noformat}
java.lang.IllegalArgumentException: Illegal character in path at index 50: 
file:/C:/Users/cwat-ggsenger/.dynaTrace/easyTravel 
2.0.0/easyTravel/database/hbase-data/data
at java.net.URI.create(URI.java:859)
at org.apache.hadoop.fs.FileSystem.getDefaultUri(FileSystem.java:131)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
at org.apache.hadoop.hbase.fs.HFileSystem.&lt;init&gt;(HFileSystem.java:79)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1182)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:795)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.net.URISyntaxException: Illegal character in path at index 50: 
file:/C:/Users/cwat-ggsenger/.dynaTrace/easyTravel 
2.0.0/easyTravel/database/hbase-data/data
at java.net.URI$Parser.fail(URI.java:2829)
at java.net.URI$Parser.checkChars(URI.java:3002)
at java.net.URI$Parser.parseHierarchical(URI.java:3086)
at java.net.URI$Parser.parse(URI.java:3034)
at java.net.URI.&lt;init&gt;(URI.java:595)
at java.net.URI.create(URI.java:857)
... 6 more
{noformat}

This does *not* apply to the hbase.zookeeper.property.dataDir property, which 
may contain whitespace characters.

I tried to work around it by providing the complete URL instead of a path name 
(e.g. file:/C:/Program%20Files/...), but in this case %20 is not interpreted as 
a space but taken literally, so that HBase creates the path 
C:\Program%20Files\

Finally I was able to work around it by using DOS-style path names like 
C:\PROGRA~1\
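The underlying JDK behaviour can be reproduced outside HBase; a small 
self-contained example (the path is made up):
{code}
import java.io.File;
import java.net.URI;

// URI.create() rejects a raw space -- the same exception HBase logs
// above -- while File.toURI() percent-encodes the space instead.
public class UriWhitespace {
  public static void main(String[] args) {
    try {
      URI.create("file:/C:/Program Files/hbase-data/data");
    } catch (IllegalArgumentException e) {
      System.out.println("URI.create failed: " + e.getMessage());
    }
    // Encodes the space as %20 (the exact form depends on platform/cwd).
    System.out.println(new File("C:/Program Files/hbase-data/data").toURI());
  }
}
{code}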



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10377) Add test for HBASE-10370 Compaction in out-of-date Store causes region split failure

2014-01-20 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HBASE-10377:
--

Assignee: Liu Shaohui
  Status: Patch Available  (was: Open)

 Add test for HBASE-10370 Compaction in out-of-date Store causes region split 
 failure
 

 Key: HBASE-10377
 URL: https://issues.apache.org/jira/browse/HBASE-10377
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Liu Shaohui
 Attachments: 10377-testSplitFailedCompactionAndSplit.html, 
 HBASE-10377-v1.patch


 HBASE-10370 fixes the issue where region split fails following compacting 
 out-of-date Store
 The new test failed in this build:
 https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/82/testReport/org.apache.hadoop.hbase.regionserver/TestSplitTransactionOnCluster/testSplitFailedCompactionAndSplit/
 This issue is to make the new test, testSplitFailedCompactionAndSplit, robust.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8593) Type support in ImportTSV tool

2014-01-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876453#comment-13876453
 ] 

Anoop Sam John commented on HBASE-8593:
---

Where are we wrt this issue? [~rajesh23]

 Type support in ImportTSV tool
 --

 Key: HBASE-8593
 URL: https://issues.apache.org/jira/browse/HBASE-8593
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Reporter: Anoop Sam John
Assignee: rajeshbabu
 Fix For: 0.99.0

 Attachments: HBASE-8593.patch, HBASE-8593_v2.patch, 
 HBASE-8593_v4.patch, ReportMapper.java


 Now the ImportTSV tool treats all the table columns as being of type String. It 
 converts the input data into bytes, considering its type to be String. 
 Sometimes a user will need a type of, say, int/float to get added to the table 
 by using this tool.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10383) Secure Bulk Load fails for version 0.94.15

2014-01-20 Thread Kashif J S (JIRA)
Kashif J S created HBASE-10383:
--

 Summary: Secure Bulk Load fails for version 0.94.15
 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S


Secure Bulk Load with kerberos enabled fails for LoadIncrementalHfile with 
the following exception



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Kashif J S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kashif J S updated HBASE-10383:
---

Description: 
Secure Bulk Load with kerberos enabled fails for Complete Bulk 
Load/LoadIncrementalHfile with the following exception:
ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: 
org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
handler for protocol 
org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
 at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
 at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
 at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
 at java.lang.reflect.Method.invoke(Method.java)
 at 
org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
 at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



  was:Secure Bulk Load with kerberos enabled fails for LoadIncrementalHfile 
with the following exception

Summary: Secure Bulk Load for 'completebulkload' fails for version 
0.94.15  (was: Secure Bulk Load fails for version 0.94.15)

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S

 Secure Bulk Load with kerberos enabled fails for Complete Bulk 
 Load/LoadIncrementalHfile with the following exception:
 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Kashif J S (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876474#comment-13876474
 ] 

Kashif J S commented on HBASE-10383:


This happens because the SecureBulkLoadClient tries to invoke bulkLoadHFiles 
with arguments passed as (List, Token, String, Boolean). But the 
SecureBulkLoadProtocol does not define any method with such a signature. 
SecureBulkLoadProtocol only declares bulkLoadHFiles(List&lt;Pair&lt;byte[], 
String&gt;&gt; familyPaths, Token&lt;?&gt; userToken, String bulkToken).

Hence the method invocation fails.
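The failure mode itself is plain Java reflection; a stripped-down, 
self-contained reproduction (the interface here is a made-up stand-in, not the 
real protocol class):
{code}
import java.lang.reflect.Method;
import java.util.List;

// Made-up stand-in for the protocol: it declares three parameters, but the
// client looks the method up with four, so getMethod() throws
// NoSuchMethodException before any bulk load can happen.
public class SignatureMismatch {
  interface Protocol {
    void bulkLoadHFiles(List<?> familyPaths, Object userToken, String bulkToken);
  }

  public static void main(String[] args) throws Exception {
    Method m = Protocol.class.getMethod("bulkLoadHFiles",
        List.class, Object.class, String.class, Boolean.class);
    System.out.println(m); // never reached: NoSuchMethodException is thrown
  }
}
{code}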

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S

 Secure Bulk Load with kerberos enabled fails for Complete Bulk 
 Load/LoadIncrementalHfile with the following exception:
 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10377) Add test for HBASE-10370 Compaction in out-of-date Store causes region split failure

2014-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876504#comment-13876504
 ] 

Hadoop QA commented on HBASE-10377:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12623939/HBASE-10377-v1.patch
  against trunk revision .
  ATTACHMENT ID: 12623939

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8473//console

This message is automatically generated.

 Add test for HBASE-10370 Compaction in out-of-date Store causes region split 
 failure
 

 Key: HBASE-10377
 URL: https://issues.apache.org/jira/browse/HBASE-10377
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Liu Shaohui
 Attachments: 10377-testSplitFailedCompactionAndSplit.html, 
 HBASE-10377-v1.patch


 HBASE-10370 fixes the issue where region split fails following compacting 
 out-of-date Store
 The new test failed in this build:
 https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/82/testReport/org.apache.hadoop.hbase.regionserver/TestSplitTransactionOnCluster/testSplitFailedCompactionAndSplit/
 This issue is to make the new test, testSplitFailedCompactionAndSplit, robust.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-01-20 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876520#comment-13876520
 ] 

Nicolas Liochon commented on HBASE-10277:
-

It's a difficult patch to read:
1) Some changes are cosmetic: some protected members become private, some this. 
prefixes are removed. I'm not against these changes, but they make the real meat 
more difficult to find.
2) The javadoc has not been updated, so when the code differs from the javadoc, 
the reader has to sort out for himself whether the javadoc is just out of date 
or whether there is a regression.
I haven't yet reviewed it globally, but here is a set of questions/comments.

AsyncProcess#submit. Why does it take a tableName? Does it mean that an 
AsyncProcess can now be shared between Tables?

AsyncRequestSet#waitUntilDone
Same responsibility as AsyncProcess#waitUntilDone, but fewer features (no logs; 
these logs are useful).
bq. It would be nice to normalize AP usage patterns; in particular, separate 
the global part (load tracking) from per-submit-call part.
This part should go in HConnection, as we should manage the load tracking 
globally and not only for a single call. It would be a change in behavior 
compared to 0.94, but I think we should do it. Would it make your life 
easier here?

bq. I ran some perf test using YCSB and table with write-dropping coproc
Great, really. We should do that much more often...

bq. Probably this perf difference will not be noticeable on real requests 
(remains to be tested).
Let me be more pessimistic here :-).

bq. Also got rid of callback that was mostly used for tests, tests can check 
results without it.
I'm not a big fan of this part of the change. Callbacks can be reused in 
different contexts (for example, to have a different policy, such as ignoring 
errors as in HTableMultiplexer). As well, we now have a hardRetryLimit, but 
this attribute is used only in tests.


More globally, this patch allows reusing a single AsyncProcess between 
independent streams of writes. Would that be necessary if it were cheaper to 
create? The cost is reading the configuration, as when we do an HTable#get and 
create a RegionServerCallable. The problem is that with this patch, we still 
create an AsyncProcess in some cases, for example on the batchCallback path...

 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-10277.patch


 AsyncProcess currently has two patterns of usage, one from HTable flush w/o 
 callback and with reuse, and one from HCM/HTable batch call, with callback 
 and w/o reuse. In the former case (but not the latter), it also does some 
 throttling of actions on initial submit call, limiting the number of 
 outstanding actions per server.
 The latter case is relatively straightforward. The former appears to be error 
 prone due to reuse - if, as javadoc claims should be safe, multiple submit 
 calls are performed without waiting for the async part of the previous call 
 to finish, fields like hasError become ambiguous and can be used for the 
 wrong call; callback for success/failure is called based on original index 
 of an action in submitted list, but with only one callback supplied to AP in 
 ctor it's not clear to which submit call the index belongs, if several are 
 outstanding.
 I was going to add support for HBASE-10070 to AP, and found that it might be 
 difficult to do cleanly.
 It would be nice to normalize AP usage patterns; in particular, separate the 
 global part (load tracking) from per-submit-call part.
 Per-submit part can more conveniently track stuff like initialActions, 
 mapping of indexes and retry information, that is currently passed around the 
 method calls.
 -I am not sure yet, but maybe sending of the original index to server in 
 ClientProtos.MultiAction can also be avoided.- Cannot be avoided because 
 the API to server doesn't have one-to-one correspondence between requests and 
 responses in an individual call to multi (retries/rearrangement have nothing 
 to do with it)



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-20 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876534#comment-13876534
 ] 

Nicolas Liochon commented on HBASE-10375:
-

Yep, it's a typo. The code is right.
This can be changed: if someone listens on the wrong port, the MTTR benefit will 
be lost, but nothing else will be broken, and it won't impact the other nodes.

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh

 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = 
       "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-20 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon reassigned HBASE-10375:
---

Assignee: Nicolas Liochon

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon

 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = 
       "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-20 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10375:


Attachment: 10375.v1.98-96.patch

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon
 Attachments: 10375.v1.98-96.patch, 10375.v1.patch


 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = 
       "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-20 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10375:


Attachment: 10375.v1.patch

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon
 Attachments: 10375.v1.98-96.patch, 10375.v1.patch


 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = 
       "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-20 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10087:


Attachment: 10087.v2.patch

 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-20 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10087:


Status: Patch Available  (was: Open)

 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.14, 0.96.1, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-20 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-10087:


Status: Open  (was: Patch Available)

 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.94.14, 0.96.1, 0.98.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10383:
--

Fix Version/s: 0.94.17

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.94.17


 Secure Bulk Load with kerberos enabled fails for Complete Bulk 
 Load/LoadIncrementalHfile with the following exception:
 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-4955) Use the official versions of surefire & junit

2014-01-20 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876552#comment-13876552
 ] 

Nicolas Liochon commented on HBASE-4955:


I've tried again a couple of months ago, same result. I'm waiting for surefire 
2.17 to try again.

 Use the official versions of surefire & junit
 -

 Key: HBASE-4955
 URL: https://issues.apache.org/jira/browse/HBASE-4955
 Project: HBase
  Issue Type: Improvement
  Components: test
Affects Versions: 0.94.0, 0.98.0, 0.96.0
 Environment: all
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Critical
 Attachments: 4955.v1.patch, 4955.v2.patch, 4955.v2.patch, 
 4955.v2.patch, 4955.v2.patch, 4955.v3.patch, 4955.v3.patch, 4955.v3.patch, 
 4955.v4.patch, 4955.v4.patch, 4955.v4.patch, 4955.v4.patch, 4955.v4.patch, 
 4955.v4.patch, 4955.v5.patch, 4955.v6.patch, 4955.v7.patch, 4955.v7.patch, 
 4955.v8.patch, 8204.v4.patch


 We currently use private versions of Surefire & JUnit since HBASE-4763.
 This JIRA tracks what we need in order to move to the official versions.
 Surefire 2.11 is just out, but, after some tests, it does not contain all 
 that we need.
 JUnit: could be for JUnit 4.11. Issue to monitor:
 https://github.com/KentBeck/junit/issues/359: fixed in our version, no 
 feedback for an integration on trunk
 Surefire: could be for Surefire 2.12. Issues to monitor are:
 329 (category support): fixed, we use the official implementation from the 
 trunk
 786 (@Category with forkMode=always): fixed, we use the official 
 implementation from the trunk
 791 (incorrect elapsed time on test failure): fixed, we use the official 
 implementation from the trunk
 793 (incorrect time in the XML report): not fixed (reopened) on trunk, fixed 
 in our version.
 760 (does not take into account the test method): fixed in trunk, not fixed 
 in our version
 798 (print immediately the test class name): not fixed in trunk, not fixed in 
 our version
 799 (allow test parallelization when forkMode=always): not fixed in trunk, 
 not fixed in our version
 800 (redirectTestOutputToFile not taken into account): not yet fixed on trunk, 
 fixed in our version
 800 & 793 are the most important to monitor; they are the only ones that are 
 fixed in our version but not on trunk.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-8280) Add a dummy test to check build env

2014-01-20 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-8280:
---

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Was committed a long long time ago.

 Add a dummy test to check build env
 ---

 Key: HBASE-8280
 URL: https://issues.apache.org/jira/browse/HBASE-8280
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 0.95.2
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.96.0

 Attachments: 8280.v1.patch


 trunk seems to be built on machines named ubuntu*
 precommit is on machines named hadoop*
 Looking at java.home on these machines, we have:
 hadoop1: /home/jenkins/tools/java/jdk1.6.0_27/jre
 hadoop2: /home/jenkins/tools/java/jdk1.6.0_26/jre
 hadoop3: /home/jenkins/tools/java/jdk1.6.0_26/jre
 hadoop4: /home/jenkins/tools/java/jdk1.6.0_26/jre
 hadoop5: /home/jenkins/tools/java/jdk1.6.0_26/jre
 hadoop6: /home/jenkins/tools/java/jdk1.6.0_26/jre
 ubuntu1: /home/jenkins/jenkins-slave/jdk/jre   version: 1.6.0_16-b01
 ubuntu2: no reply
 ubuntu3: /usr/lib/jvm/java-6-openjdk-amd64/jre  version: 1.6.0_27-b27
 ubuntu4: /usr/lib/jvm/java-6-openjdk-amd64/jre  version: 1.6.0_27-b27
 ubuntu5: no reply
 ubuntu6: /home/jenkins/jenkins-slave/jdk/jre   version: 1.6.0_16-b01
 The build log says:
 [PreCommit-HBASE-Build] $ /bin/bash /tmp/hudson6507498537145712977.sh
 asf002.sp2.ygridcore.net
 Linux asf002.sp2.ygridcore.net 2.6.32-33-server #71-Ubuntu SMP Wed Jul 20 
 17:42:25 UTC 2011 x86_64 GNU/Linux
 /tmp/hudson6507498537145712977.sh: line 4: java: command not found
 => We try to get the java version and it fails, but the build continues anyway 
 and succeeds.
 On ubuntu, we have
 + java -version
 java version "1.6.0_32"
 Java(TM) SE Runtime Environment (build 1.6.0_32-b05)
 => Seems good. But it's this one that fails.
 Now, the fact that java -version fails on the precommit machines makes me 
 wonder whether we just use the version installed by default. Hence, we would 
 use a random JDK, often the openjdk one, when we build trunk.
 This patch will help us to understand our build env.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876562#comment-13876562
 ] 

stack commented on HBASE-10375:
---

+1 on all patches.

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon
 Attachments: 10375.v1.98-96.patch, 10375.v1.patch


 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = 
       "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876559#comment-13876559
 ] 

stack commented on HBASE-10087:
---

+1

 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat

2014-01-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876564#comment-13876564
 ] 

Ted Yu commented on HBASE-10323:


Integrated to trunk.

Thanks for the patch, Ishan.

Thanks for the review, Nick.

 Auto detect data block encoding in HFileOutputFormat
 

 Key: HBASE-10323
 URL: https://issues.apache.org/jira/browse/HBASE-10323
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
 Fix For: 0.99.0

 Attachments: HBASE_10323-0.94.15-v1.patch, 
 HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, 
 HBASE_10323-0.94.15-v4.patch, HBASE_10323-0.94.15-v5.patch, 
 HBASE_10323-trunk-v1.patch, HBASE_10323-trunk-v2.patch, 
 HBASE_10323-trunk-v3.patch, HBASE_10323-trunk-v4.patch


 Currently, one has to specify the data block encoding of the table explicitly 
 using the config parameter 
 hbase.mapreduce.hfileoutputformat.datablock.encoding when doing a bulk 
 load. This option is easily missed, not documented, and also works differently 
 from compression, block size and bloom filter type, which are auto detected. 
 The solution would be to add support to auto detect the data block encoding, 
 similar to the other parameters. 
 The current patch does the following:
 1. Automatically detects the data block encoding in HFileOutputFormat.
 2. Keeps the legacy option of manually specifying the data block encoding
 around as a way to override auto detection (see the sketch below).
 3. Moves string conf parsing to the start of the program so that it fails
 fast during startup instead of failing during record writes. It also
 makes the internals of the program type safe.
 4. Adds missing doc strings and unit tests for the code serializing and
 deserializing the config parameters for bloom filter type, block size and
 data block encoding.
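 A minimal sketch of the legacy override kept by point 2, using the config key 
 named in this description; "FAST_DIFF" is just one DataBlockEncoding value 
 picked for illustration:
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;

 // Forces a specific data block encoding instead of relying on the new
 // auto-detection. The key is the one named in the description above.
 public class ForceDataBlockEncoding {
   public static void main(String[] args) {
     Configuration conf = HBaseConfiguration.create();
     conf.set("hbase.mapreduce.hfileoutputformat.datablock.encoding",
         "FAST_DIFF");
   }
 }
 {code}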



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10377) Add test for HBASE-10370 Compaction in out-of-date Store causes region split failure

2014-01-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876566#comment-13876566
 ] 

stack commented on HBASE-10377:
---

+1 from me.  Nice looking test.

 Add test for HBASE-10370 Compaction in out-of-date Store causes region split 
 failure
 

 Key: HBASE-10377
 URL: https://issues.apache.org/jira/browse/HBASE-10377
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Liu Shaohui
 Attachments: 10377-testSplitFailedCompactionAndSplit.html, 
 HBASE-10377-v1.patch


 HBASE-10370 fixes the issue where region split fails following compacting 
 out-of-date Store
 The new test failed in this build:
 https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/82/testReport/org.apache.hadoop.hbase.regionserver/TestSplitTransactionOnCluster/testSplitFailedCompactionAndSplit/
 This issue is to make the new test, testSplitFailedCompactionAndSplit, robust.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9004) Fix Documentation around Minor compaction and ttl

2014-01-20 Thread Dan Feng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876598#comment-13876598
 ] 

Dan Feng commented on HBASE-9004:
-

Thanks for the prompt reply.

 Fix Documentation around Minor compaction and ttl
 -

 Key: HBASE-9004
 URL: https://issues.apache.org/jira/browse/HBASE-9004
 Project: HBase
  Issue Type: Task
Reporter: Elliott Clark

 Minor compactions should be able to delete KeyValues outside of ttl.  The 
 docs currently suggest otherwise.  We should bring them in line.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876621#comment-13876621
 ] 

Hadoop QA commented on HBASE-10087:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12623961/10087.v2.patch
  against trunk revision .
  ATTACHMENT ID: 12623961

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestHBaseFsck

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8474//console

This message is automatically generated.

 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-10384:
---

 Summary: Failed to increment several columns in one Increment
 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang


We have a problem incrementing several columns of a row in one increment 
request.

This one works; we get all columns incremented as expected:

{noformat}
  Increment inc1 = new Increment(row);
  inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
  inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
  inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
  inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
  testTable.increment(inc1);
{noformat}

However, this one increments only counter_A; the other columns are reset to 1 
instead of incremented:

{noformat}
  Increment inc1 = new Increment(row);
  inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
  inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
  inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
  inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
  testTable.increment(inc1);
{noformat}






--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10384:


Affects Version/s: (was: 0.96.0)

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang

 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-20 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876631#comment-13876631
 ] 

Nicolas Liochon commented on HBASE-10087:
-

It's just a comment change, TestHBaseFsck is likely flaky...

 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HBASE-10365) HBaseFsck should clean up connection properly when repair is completed

2014-01-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-10365:
--

Assignee: Ted Yu

 HBaseFsck should clean up connection properly when repair is completed
 --

 Key: HBASE-10365
 URL: https://issues.apache.org/jira/browse/HBASE-10365
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10365-v1.txt


 At the end of the exec() method, connections to the cluster are not properly 
 released.
 Connections should be released upon completion of repair.
 This was mentioned by Jean-Marc in the thread '[VOTE] The 1st hbase 0.94.16 
 release candidate is available for download'
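
A minimal sketch of the kind of cleanup being proposed, with hypothetical names 
(the actual change is in the attached 10365-v1.txt): whatever work exec() does, 
the connection should be released in a finally block so an error cannot leak it.

{code}
import java.io.Closeable;
import java.io.IOException;

// Hypothetical names: stands in for HBaseFsck and its cluster connection.
class RepairTool {
  private final Closeable connection; // stands in for the cluster connection

  RepairTool(Closeable connection) {
    this.connection = connection;
  }

  int exec() throws IOException {
    try {
      return repair();       // the existing repair logic
    } finally {
      connection.close();    // always release connections on completion
    }
  }

  private int repair() {
    return 0;                // placeholder for the real work
  }
}
{code}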



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10365) HBaseFsck should clean up connection properly when repair is completed

2014-01-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10365:
---

Attachment: 10365-v1.txt

 HBaseFsck should clean up connection properly when repair is completed
 --

 Key: HBASE-10365
 URL: https://issues.apache.org/jira/browse/HBASE-10365
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
 Attachments: 10365-v1.txt


 At the end of the exec() method, connections to the cluster are not properly 
 released.
 Connections should be released upon completion of repair.
 This was mentioned by Jean-Marc in the thread '[VOTE] The 1st hbase 0.94.16 
 release candidate is available for download'



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10365) HBaseFsck should clean up connection properly when repair is completed

2014-01-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10365:
---

Status: Patch Available  (was: Open)

 HBaseFsck should clean up connection properly when repair is completed
 --

 Key: HBASE-10365
 URL: https://issues.apache.org/jira/browse/HBASE-10365
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10365-v1.txt


 At the end of the exec() method, connections to the cluster are not properly 
 released.
 Connections should be released upon completion of repair.
 This was mentioned by Jean-Marc in the thread '[VOTE] The 1st hbase 0.94.16 
 release candidate is available for download'



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876636#comment-13876636
 ] 

Jimmy Xiang commented on HBASE-10384:
-

I see the problem on 0.96 tip too. So it should be in 0.98 as well.

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang

 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-8593) Type support in ImportTSV tool

2014-01-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876645#comment-13876645
 ] 

Nick Dimiduk commented on HBASE-8593:
-

I asked the patch's author to kindly consider how this could be built on top of 
HBASE-10091. However, I've not made progress there.

 Type support in ImportTSV tool
 --

 Key: HBASE-8593
 URL: https://issues.apache.org/jira/browse/HBASE-8593
 Project: HBase
  Issue Type: Sub-task
  Components: mapreduce
Reporter: Anoop Sam John
Assignee: rajeshbabu
 Fix For: 0.99.0

 Attachments: HBASE-8593.patch, HBASE-8593_v2.patch, 
 HBASE-8593_v4.patch, ReportMapper.java


 Currently the ImportTSV tool treats every table column as type String: it 
 converts the input data into bytes assuming its type is String. Sometimes a 
 user will need a typed column, say int or float, added to the table by this 
 tool (a sketch of such a conversion follows below).
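
A hedged sketch of such a typed conversion (class and enum names are 
hypothetical; this is not the attached patch):

{code}
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical sketch: convert a TSV field according to a declared column
// type instead of always treating it as a String.
public final class TypedFieldConverter {
  enum FieldType { STRING, INT, FLOAT }

  static byte[] toBytes(String field, FieldType type) {
    switch (type) {
      case INT:
        return Bytes.toBytes(Integer.parseInt(field)); // 4-byte int encoding
      case FLOAT:
        return Bytes.toBytes(Float.parseFloat(field)); // 4-byte float encoding
      default:
        return Bytes.toBytes(field);                   // current behavior
    }
  }
}
{code}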



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876680#comment-13876680
 ] 

Andrew Purtell commented on HBASE-10322:


Let's condense the dense walls of text above into a one-line answer to the 
question below, if possible:

Can we have a tag-aware codec that can be configured by *only* 
ReplicationSource and ReplicationSink for the RPC they do server-to-server, 
cross-site? 

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags.  So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says whether 
 to send back tags or not. But trusting something the scan specifies might not 
 be correct IMO. Then comes the way of checking the user who is doing the 
 scan: send back tags only when an HBase super user is doing the scan. So 
 when a case like the Export Tool's comes, the execution should happen as a super 
 user.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat

2014-01-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876684#comment-13876684
 ] 

Hudson commented on HBASE-10323:


SUCCESS: Integrated in HBase-TRUNK #4837 (See 
[https://builds.apache.org/job/HBase-TRUNK/4837/])
HBASE-10323 Auto detect data block encoding in HFileOutputFormat (Tedyu: rev 
1559771)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java


 Auto detect data block encoding in HFileOutputFormat
 

 Key: HBASE-10323
 URL: https://issues.apache.org/jira/browse/HBASE-10323
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
 Fix For: 0.99.0

 Attachments: HBASE_10323-0.94.15-v1.patch, 
 HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, 
 HBASE_10323-0.94.15-v4.patch, HBASE_10323-0.94.15-v5.patch, 
 HBASE_10323-trunk-v1.patch, HBASE_10323-trunk-v2.patch, 
 HBASE_10323-trunk-v3.patch, HBASE_10323-trunk-v4.patch


 Currently, one has to specify the data block encoding of the table explicitly 
 using the config parameter 
 hbase.mapreduce.hfileoutputformat.datablock.encoding when doing a bulk 
 load. This option is easily missed, is not documented, and also works differently 
 from compression, block size and bloom filter type, which are auto detected. 
 The solution would be to add support to auto detect the datablock encoding, 
 similar to the other parameters. 
 The current patch does the following:
 1. Automatically detects datablock encoding in HFileOutputFormat.
 2. Keeps the legacy option of manually specifying the datablock encoding
 around as a method to override auto detection (see the sketch after this
 description).
 3. Moves string conf parsing to the start of the program so that it fails
 fast during startup instead of failing during record writes. It also
 makes the internals of the program type safe.
 4. Adds missing doc strings and unit tests for code serializing and
 deserializing config parameters for bloom filter type, block size and
 datablock encoding.
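
A small sketch of the legacy override from item 2. The key is taken from the 
description above; "FAST_DIFF" is just one example encoding name, and the class 
name is hypothetical:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class EncodingOverrideExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // With the patch this is only an override; without it, it is required.
    conf.set("hbase.mapreduce.hfileoutputformat.datablock.encoding", "FAST_DIFF");
  }
}
{code}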



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876697#comment-13876697
 ] 

Andrew Purtell commented on HBASE-10322:


[~lhofhansl], [~stack], [~anoop.hbase], [~ram_krish]: Quick recap as I see it. 
Security tags can be more sensitive than the cell itself. Users will share 
cells with each other. However, we don't want that sharing to also leak the access 
rules for the cell. That would be at best a violation of need-to-know. Also, 
0.96 clients can't handle serializations that include tags. The easiest answer 
is: RPC does not handle cell tags. We can thus avoid: negotiation, per-cell 
access checks, per-cell rewrites (copies). However, that fails to address 
replication, which uses the RPC code but must be able to replicate tags from a 
0.98 source to another 0.98 sink. For replication, we need to hand RPC a codec 
that is tag aware. Because 0.98 may be talking to 0.96, we can't do that by 
default; we need a configuration setting for replication that tells it what RPC 
codec to select when talking to the peer. 

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags.  So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says whether 
 to send back tags or not. But trusting something the scan specifies might not 
 be correct IMO. Then comes the way of checking the user who is doing the 
 scan: send back tags only when an HBase super user is doing the scan. So 
 when a case like the Export Tool's comes, the execution should happen as a super 
 user.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876701#comment-13876701
 ] 

Andrew Purtell commented on HBASE-10322:


[~lhofhansl]: Instead of Export, make a snapshot and DistCp the HFiles. Instead 
of Import, use the bulk import facility.
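
A rough sketch of the suggested flow, hedged on API details which vary by 
version: snapshot the table, copy the HFiles out of band (e.g. ExportSnapshot 
or DistCp, outside this snippet), then bulk-import them on the other side. 
Snapshot and table names are placeholders:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class SnapshotInsteadOfExport {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      // A snapshot keeps the HFiles (and any tags in them) intact; the files
      // can then be copied out of band and bulk-imported elsewhere.
      admin.snapshot("t1_snapshot", "t1");
    } finally {
      admin.close();
    }
  }
}
{code}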

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags.  So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says whether 
 to send back tags or not. But trusting something the scan specifies might not 
 be correct IMO. Then comes the way of checking the user who is doing the 
 scan: send back tags only when an HBase super user is doing the scan. So 
 when a case like the Export Tool's comes, the execution should happen as a super 
 user.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10383:
--

Attachment: 10383.txt

Probably just this.
Is anybody in a position to test this?

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.94.17

 Attachments: 10383.txt


 Secure Bulk Load with Kerberos enabled fails for Complete Bulk 
 Load/LoadIncrementalHFiles with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876717#comment-13876717
 ] 

Nick Dimiduk commented on HBASE-10383:
--

We have IntegrationTestBulkLoad, so this should be easy enough to test for 
someone with a secure cluster. Let me see about giving this a spin this 
afternoon.

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.94.17

 Attachments: 10383.txt


 Secure Bulk Load with Kerberos enabled fails for Complete Bulk 
 Load/LoadIncrementalHFiles with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876722#comment-13876722
 ] 

Nick Dimiduk commented on HBASE-10380:
--

The HBase process for applying changes is to start with trunk and work 
backwards to the released branches. Please post a patch against trunk first so 
we can get the patch review and acceptance process started.

 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor
 Attachments: HBASE_10380_0.94-v1.patch


 Add options in CopyTable to:
 1. Specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876734#comment-13876734
 ] 

Lars Hofhansl commented on HBASE-10322:
---

Good summary. (And I'm really just a bystander here.)
It's simplest to keep tags server-only unless there is a compelling argument to 
do differently.
A compelling argument might eventually be that code outside of HBase needs to 
check/manipulate tags.

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags.  So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says whether 
 to send back tags or not. But trusting something the scan specifies might not 
 be correct IMO. Then comes the way of checking the user who is doing the 
 scan: send back tags only when an HBase super user is doing the scan. So 
 when a case like the Export Tool's comes, the execution should happen as a super 
 user.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10383) Secure Bulk Load for 'completebulkload' fails for version 0.94.15

2014-01-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876736#comment-13876736
 ] 

Lars Hofhansl commented on HBASE-10383:
---

Thanks [~ndimiduk].
I would have preferred if TestSecureLoadIncrementalHFiles had failed.
Actually, why *doesn't* it fail? Looking through the code, it should have 
called exactly the method that fails, yet it passes locally (without this 
change here).

I'll have a look at that as soon as I get a chance.

 Secure Bulk Load for 'completebulkload' fails for version 0.94.15
 -

 Key: HBASE-10383
 URL: https://issues.apache.org/jira/browse/HBASE-10383
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Affects Versions: 0.94.15
Reporter: Kashif J S
Assignee: Kashif J S
 Fix For: 0.94.17

 Attachments: 10383.txt


 Secure Bulk Load with Kerberos enabled fails for Complete Bulk 
 Load/LoadIncrementalHFiles with the following exception: ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer: 
 org.apache.hadoop.hbase.ipc.HBaseRPC$UnknownProtocolException: No matching 
 handler for protocol 
 org.apache.hadoop.hbase.security.access.SecureBulkLoadProtocol in region 
 t1,,1389699438035.28bb0284d971d0676cf562efea80199b.
  at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java)
  at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java)
  at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java)
  at java.lang.reflect.Method.invoke(Method.java)
  at 
 org.apache.hadoop.hbase.ipc.SecureRpcEngine$Server.call(SecureRpcEngine.java)
  at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java) 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-10384:
---

Priority: Blocker  (was: Major)

This seems like a blocker.  This is a semantics change from 0.94, right?

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker

 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-10384:
---

Affects Version/s: 0.99.0
   0.98.0
   0.96.1.1

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang

 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876745#comment-13876745
 ] 

stack commented on HBASE-10384:
---

[~jxiang] Is this anything to do w/ our moving Increment to be a Mutation?  Or 
pb'ing?

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker

 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10385) ImportTsv to parse date time from typical loader formats

2014-01-20 Thread Vijay Sarvepali (JIRA)
Vijay Sarvepali created HBASE-10385:
---

 Summary: ImportTsv to parse date time from typical loader formats
 Key: HBASE-10385
 URL: https://issues.apache.org/jira/browse/HBASE-10385
 Project: HBase
  Issue Type: New Feature
  Components: mapreduce
Affects Versions: 0.96.1.1
Reporter: Vijay Sarvepali
Priority: Minor


Simple patch to enable parsing of standard date-time fields from TSV files into 
HBase.

***************
*** 57,62 ****
--- 57,70 ----
  import com.google.common.base.Splitter;
  import com.google.common.collect.Lists;
  
+ //2013-08-19T04:39:07
+ import java.text.DateFormat;
+ import java.util.*;
+ import java.text.SimpleDateFormat;
+ import java.text.ParseException;
+ 
+ 
+ 
  /**
   * Tool to import data from a TSV file.
   *
***************
*** 220,229 ****
        getColumnOffset(timestampKeyColumnIndex),
        getColumnLength(timestampKeyColumnIndex));
      try {
!       return Long.parseLong(timeStampStr);
      } catch (NumberFormatException nfe) {
        // treat this record as bad record
!       throw new BadTsvLineException("Invalid timestamp " + timeStampStr);
      }
    }
--- 228,239 ----
        getColumnOffset(timestampKeyColumnIndex),
        getColumnLength(timestampKeyColumnIndex));
      try {
!       return Long.parseLong(timeStampStr);
      } catch (NumberFormatException nfe) {
+       // Try this record with string to date in mseconds long
+       return extractTimestampInput(timeStampStr);
        // treat this record as bad record
!       //throw new BadTsvLineException("Invalid timestamp " + timeStampStr);
      }
    }
***************
*** 243,248 ****
--- 253,274 ----
        return lineBytes;
      }
  }
+   public static long extractTimestampInput(String strDate) throws BadTsvLineException {
+     final List<String> dateFormats =
+         Arrays.asList("yyyy-MM-dd HH:mm:ss.SSS", "yyyy-MM-dd'T'HH:mm:ss");
+ 
+     for (String format : dateFormats) {
+       SimpleDateFormat sdf = new SimpleDateFormat(format);
+       try {
+         Date d = sdf.parse(strDate);
+         long msecs = d.getTime();
+         return msecs;
+       } catch (ParseException e) {
+         // intentionally empty
+       }
+     }
+     // If we come here we have a problem with converting timestamps for this row.
+     throw new BadTsvLineException("Invalid timestamp " + strDate);
+   }
  
  public static class BadTsvLineException extends Exception {
    public BadTsvLineException(String err) {
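
As a quick standalone sanity check of the parsing logic above -- note the 
"yyyy-..." patterns are reconstructed from the mangled mail text, based on the 
ISO example in the patch comment -- here is a small self-contained version 
(class name hypothetical):

{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Date;
import java.util.List;

// Try each accepted pattern in turn; return epoch millis on first success.
public class TimestampParseCheck {
  static final List<String> FORMATS =
      Arrays.asList("yyyy-MM-dd HH:mm:ss.SSS", "yyyy-MM-dd'T'HH:mm:ss");

  static long parse(String s) {
    for (String f : FORMATS) {
      try {
        Date d = new SimpleDateFormat(f).parse(s);
        return d.getTime();
      } catch (ParseException e) {
        // fall through to the next pattern
      }
    }
    throw new IllegalArgumentException("Invalid timestamp " + s);
  }

  public static void main(String[] args) {
    System.out.println(parse("2013-08-19T04:39:07")); // prints epoch millis
  }
}
{code}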



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-20 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876748#comment-13876748
 ] 

Jonathan Hsieh commented on HBASE-10375:


Hey, [~nkeywal], we need to change the property name in hbase-default.xml 
too -- 

hbase.status.multicast.address.port != hbase.status.multicast.port.

+1 when the change is done to the hbase-default.xml files.


 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon
 Attachments: 10375.v1.98-96.patch, 10375.v1.patch


 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = 
       "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10277) refactor AsyncProcess

2014-01-20 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876760#comment-13876760
 ] 

Sergey Shelukhin commented on HBASE-10277:
--


bq. Could AsyncRequests be done as a 'Future'? Seems to have a bunch in common.
I'll take a look at it... it's more like a multi-future. Maybe even FutureTask 
can be used.

bq. We have something called Set but it does not implement java.util.Set:
Will fix.

bq. We make an ap and a multiap each time?
Once per HTable. The difference is the mode of operation - the legacy mode or 
the normal one.

bq. Batch should return an array of objects?
It does? I don't quite understand the comment. Some of the existing interfaces 
accept an array that they fill with results. Backward compat.


bq. 1) Some changes are cosmetics: some protected becomes private, some this. 
are removed. I'm not against these changes, but it makes the real meat more 
difficult to find.
Removing "this." is not cosmetic (at least in the cases I am aware of) - 
methods moved to a non-static nested class, so there is no more "this.".
The changes to private can be reverted, although they are good changes to have.

bq. 2) The javadoc has not been updated, so when the code differs from the 
javadoc, the reader has to sort out himself if it's just that the javadoc is 
now outdated or if there is a regression. 
Will update.

bq. AsyncProcess#submit. Why does it take a tableName? Does it mean that an 
AsyncProcess can now be shared between Tables?
Yes.

bq. AsyncRequestSet#waitUntilDone
bq. Same responsibility as AsyncProcess#waitUntilDone, but fewer features (no 
logs. These logs are useful).
Some logs were preserved. 
The previous waitUntilDone had two wait conditions and loops in an effort not to 
loop often, whereas only one seems to be necessary, so I don't think features 
were lost... I can add more logs.

bq. This part should go in HConnection, as we should manage the load tracking 
globally and not only for a single call. It would be a change in behavior 
compared to 0.94, but I think we should do it. Would it make your life 
easier here?
It may, actually... but HTable uses AP directly. In fact batch calls from HCM 
currently don't use throttling at all (it calls submitAll), so only 
HTable-direct-usage uses this code.

{quote}
bq. Probably this perf difference will not be noticeable on real requests 
(remains to be tested).
Let me be more pessimistic here.
{quote}
Yeah, I'd probably need to test. I was not able to figure out what exactly 
causes the slowdown - YourKit gives very low % numbers for AP, by CPU or wall 
clock, and they are almost identical between old and equivalent new methods across runs.

{quote}
bq. Also got rid of the callback that was mostly used for tests; tests can check 
results without it.
I'm not a big fan of this part of the change. Callbacks can be reused in 
different contexts (for example to have a different policy, such as ignoring 
errors as in HTableMultiplexer). As well, we now have a hardRetryLimit, but 
this attribute is used only in tests.
{quote}
For practical purposes, the callback was already only used for tests (also for 
array filling, but that is no longer necessary).
I am not a big fan of having test-oriented code in production code; thus I 
replaced the only necessary test usage (stopping retries) with a field. I 
wanted to get rid of that too, but it seems that would be too convoluted.
When we have a scenario that needs some callback, we can add it, under the YAGNI 
principle :)

bq. More globally, this patch allows reusing a single AsyncProcess between 
independent streams of writes. Would that be necessary if it was cheaper to 
create? The cost is reading the configuration, as when we do an HTable#get and 
create a RegionServerCallable.
The main reason is to have a well-defined per-call context for the normal 
(as opposed to HTable cross-put stuff) usage pattern. So, for example, replica 
calls could group requests differently and synchronize/populate the same 
result, cancel other calls when they are no longer useful, etc.
It also separates the two patterns better (by ctor argument); see the potential 
problems outlined in the JIRA description.

bq. The problem is that with this patch, we still create an AsyncProcess in some 
cases, for example on the batchCallback path...
Yeah, that is due to the ability to pass a custom execution pool (could be changed 
to accommodate it by making the pool per request, but I am not sure it's 
necessary...) and limitations of Java generics + current result type handling 
(AP will have to be per result type). IIRC most of these paths are deprecated.
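
For illustration only, the "multi-future" shape mentioned above might look like 
the following. This is not the actual AsyncRequestSet code; the names and 
structure are assumptions:

{code}
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

// One handle that waits on a whole batch of per-server futures.
class MultiFuture<T> {
  private final List<Future<T>> parts;

  MultiFuture(List<Future<T>> parts) {
    this.parts = parts;
  }

  // Blocks until every underlying future has completed.
  void waitUntilDone() throws InterruptedException, ExecutionException {
    for (Future<T> f : parts) {
      f.get();
    }
  }
}
{code}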



 refactor AsyncProcess
 -

 Key: HBASE-10277
 URL: https://issues.apache.org/jira/browse/HBASE-10277
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: 

[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876758#comment-13876758
 ] 

stack commented on HBASE-10322:
---

Nice distillation.  So, to 'solve' this issue for 0.98.0RC, we just need to 
figure out a means of allowing a user to insert a particular codec when the client is 
replicating.  It does not even have to be 'on' in 0.98.0; in fact it is 
better if it is not 'on'.  It just needs to be possible.  Right?

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags.  So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says whether 
 to send back tags or not. But trusting something the scan specifies might not 
 be correct IMO. Then comes the way of checking the user who is doing the 
 scan: send back tags only when an HBase super user is doing the scan. So 
 when a case like the Export Tool's comes, the execution should happen as a super 
 user.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876781#comment-13876781
 ] 

Jimmy Xiang edited comment on HBASE-10384 at 1/20/14 7:45 PM:
--

First, I want to point out: 0.94 doesn't have this issue.

[~stack], it is very likely related to moving Increment to be a Mutation.  The 
root cause is that the cells in the familymap of Increment are not sorted any 
more (an ArrayList now). 


was (Author: jxiang):
First, I want to point out: 0.94 doesn't have this issue.

[~stack], it is very likely related to moving Increment to be a Mutation.  The 
root cause is that the cells are not sorted any more (an ArrayList now). 

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker

 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876781#comment-13876781
 ] 

Jimmy Xiang commented on HBASE-10384:
-

First, I want to point out: 0.94 doesn't have this issue.

[~stack], it is very likely related to moving Increment to be a Mutation.  The 
root cause is that the cells are not sorted any more (an ArrayList now). 
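
A sketch of the likely fix direction, mirroring what Append already does 
(quoted in a later comment): sort each family's cell list with the store 
comparator before applying. Illustrative only, not the actual patch:

{code}
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.regionserver.Store;

// Illustrative only: restore sorted order inside each family's cell list.
class IncrementSortSketch {
  static void sortFamilyMap(Map<byte[], List<KeyValue>> familyMap, Store store) {
    for (Map.Entry<byte[], List<KeyValue>> family : familyMap.entrySet()) {
      Collections.sort(family.getValue(), store.getComparator());
    }
  }
}
{code}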

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker

 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876797#comment-13876797
 ] 

Jimmy Xiang commented on HBASE-10384:
-

Append could have the same issue, since its cells are in an array list too. 
In 0.94, it used to be a set.

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker

 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10087) Store should be locked during a memstore snapshot

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876822#comment-13876822
 ] 

Andrew Purtell commented on HBASE-10087:


+1

 Store should be locked during a memstore snapshot
 -

 Key: HBASE-10087
 URL: https://issues.apache.org/jira/browse/HBASE-10087
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.96.1, 0.94.14
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
 Fix For: 0.98.1, 0.99.0

 Attachments: 10079.v1.patch, 10087.v2.patch


 regression from HBASE-9963, found while looking at HBASE-10079.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10375) hbase-default.xml hbase.status.multicast.address.port does not match code

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876823#comment-13876823
 ] 

Andrew Purtell commented on HBASE-10375:


+1 after what Jon says.

 hbase-default.xml hbase.status.multicast.address.port does not match code
 -

 Key: HBASE-10375
 URL: https://issues.apache.org/jira/browse/HBASE-10375
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jonathan Hsieh
Assignee: Nicolas Liochon
 Attachments: 10375.v1.98-96.patch, 10375.v1.patch


 In hbase-default.xml
 {code}
 +  <property>
 +    <name>hbase.status.multicast.address.port</name>
 +    <value>6100</value>
 +    <description>
 +      Multicast port to use for the status publication by multicast.
 +    </description>
 +  </property>
 {code}
 In HConstants it was 60100.
 {code}
   public static final String STATUS_MULTICAST_PORT = 
       "hbase.status.multicast.port";
   public static final int DEFAULT_STATUS_MULTICAST_PORT = 16100;
 {code}
 (it was 60100 in the code for 0.96 and 0.98.)
 I lean towards going with the code as opposed to the config file.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876828#comment-13876828
 ] 

Andrew Purtell commented on HBASE-10322:


bq. So, to 'solve' this issue for 0.98.0RC, we just need to figure out a means of 
allowing a user to insert a particular codec when the client is replicating. It 
does not even have to be 'on' in 0.98.0; in fact it is better if it is not 
'on'. It just needs to be possible. Right?

Yes. 

One more configuration variable (yeah...) named hbase.replication.rpc.codec 
or some such, so the tag-aware codec can be separately set up by the source and 
sink. It would default to the same codec we are using for RPC, to be compatible with 
0.96 clients.
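
As a sketch only: the key name comes from the comment above ("or some such"), 
and the codec class name is an assumption for illustration, not a settled 
decision on this issue:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ReplicationCodecConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // On a 0.98 source/sink pair, point replication at a tag-aware codec;
    // leave the default alone when a 0.96 peer is involved.
    conf.set("hbase.replication.rpc.codec",
        "org.apache.hadoop.hbase.codec.KeyValueCodecWithTags");
  }
}
{code}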

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scans when using the Java client (codec-based cell block encoding), but 
 during a Get operation, or when a pure PB-based Scan comes in, we are not 
 sending back the tags.  So we have to do one of the fixes below:
 1. Send back tags in the missing cases also. But sending back the visibility 
 expression/cell ACL is not correct.
 2. Don't send back tags in any case. This will be a problem when a tool like 
 ExportTool uses the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be on a per-scan basis. 
 The simplest way is to pass some kind of attribute in the Scan which says whether 
 to send back tags or not. But trusting something the scan specifies might not 
 be correct IMO. Then comes the way of checking the user who is doing the 
 scan: send back tags only when an HBase super user is doing the scan. So 
 when a case like the Export Tool's comes, the execution should happen as a super 
 user.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876835#comment-13876835
 ] 

Andrew Purtell commented on HBASE-10322:


bq. A compelling argument might eventually be that code outside of HBase needs 
to check/manipulate tags.

This will be possible after my proposed change on this issue. Such code can 
directly build HFiles (v3) containing tags, and submit them through the bulk 
import facility. Likewise, if you copy out HFiles (v3) from a snapshot, they 
will come over with tags included, which can be read by accessing the HFile 
directly using the low level scanners. The security story is acceptable. 
Accumulo has a similar hands-off approach to labels in bulk imported files, see 
http://accumulo.apache.org/1.5/accumulo_user_manual.html#_security: "This 
constraint is not applied to bulk imported data, if this is a concern then disable 
the bulk import permission." Also we can trivially prevent unauthorized direct 
access to HFiles by enabling encryption (HBASE-7544).
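
A hedged sketch of the bulk import path mentioned here, assuming the 
LoadIncrementalHFiles tool of this era; the staging path and table name are 
placeholders:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkImportWithTags {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "t1");
    try {
      // HFiles (v3) copied out with their tags can be loaded back as-is.
      new LoadIncrementalHFiles(conf).doBulkLoad(new Path("/staging/t1"), table);
    } finally {
      table.close();
    }
  }
}
{code}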

 Strip tags from KV while sending back to client on reads
 

 Key: HBASE-10322
 URL: https://issues.apache.org/jira/browse/HBASE-10322
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10322.patch, HBASE-10322_V2.patch, 
 HBASE-10322_codec.patch


 Right now we have some inconsistency wrt sending back tags on read. We do 
 this in scan when using Java client(Codec based cell block encoding). But 
 during a Get operation or when a pure PB based Scan comes we are not sending 
 back the tags.  So any of the below fix we have to do
 1. Send back tags in missing cases also. But sending back visibility 
 expression/ cell ACL is not correct.
 2. Don't send back tags in any case. This will a problem when a tool like 
 ExportTool use the scan to export the table data. We will miss exporting the 
 cell visibility/ACL.
 3. Send back tags based on some condition. It has to be per scan basis. 
 Simplest way is pass some kind of attribute in Scan which says whether to 
 send back tags or not. But believing some thing what scan specifies might not 
 be correct IMO. Then comes the way of checking the user who is doing the 
 scan. When a HBase super user doing the scan then only send back tags. So 
 when a case comes like Export Tool's the execution should happen from a super 
 user.
 So IMO we should go with #3.
 Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876836#comment-13876836
 ] 

Jimmy Xiang commented on HBASE-10384:
-

Append doesn't have the issue. It sorts the cells:
{noformat}
Collections.sort(family.getValue(), store.getComparator());
{noformat}

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker

 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9004) Fix Documentation around Minor compaction and ttl

2014-01-20 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876851#comment-13876851
 ] 

Masatake Iwasaki commented on HBASE-9004:
-

In my understanding, the whole HFile is deleted on minor compaction if all of the 
cells in it are outside of the TTL. Deletion of cells is a result of deletion of 
HFiles. Does this issue refer to that path?

 Fix Documentation around Minor compaction and ttl
 -

 Key: HBASE-9004
 URL: https://issues.apache.org/jira/browse/HBASE-9004
 Project: HBase
  Issue Type: Task
Reporter: Elliott Clark

 Minor compactions should be able to delete KeyValues outside of TTL.  The 
 docs currently suggest otherwise.  We should bring them in line.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10384:


Status: Patch Available  (was: Open)

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.1.1, 0.98.0, 0.99.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker
 Attachments: hbase-10384.patch


 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10384:


Attachment: hbase-10384.patch

Fixed Increment the same way as we do for Append, and added a unit test to cover 
it. The existing unit test for Append is good.

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker
 Attachments: hbase-10384.patch


 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10386) MultiThreadedWriter should utilize LOG in waitForFinish()

2014-01-20 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10386:
--

 Summary: MultiThreadedWriter should utilize LOG in waitForFinish()
 Key: HBASE-10386
 URL: https://issues.apache.org/jira/browse/HBASE-10386
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor


I was doing a load test on a 0.98 cluster and saw the following in the output:
{code}
2014-01-20 20:44:52,567 [Thread-2] INFO  client.HBaseAdmin 
(HBaseAdmin.java:enableTable(761)) - Enabled table test
Starting to write data...
Failed to write keys: 0
{code}
The above was from a call to System.out.println().

There is a LOG field in MultiThreadedWriter which is used in the other methods, 
but not in waitForFinish().
waitForFinish() should utilize LOG as well.
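
The requested change is mechanical; a sketch of the idea (the class and log line here are illustrative stand-ins, not the MultiThreadedWriter source):

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

final class WaitForFinishSketch {
  private static final Log LOG = LogFactory.getLog(WaitForFinishSketch.class);

  // Before: progress went to stdout via System.out.println() and bypassed
  // the log4j layout (no timestamp, thread, or level).
  // After: route it through the same LOG used by the other methods.
  void reportFinish(long failedKeyCount) {
    LOG.info("Failed to write keys: " + failedKeyCount);
  }
}
{code}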



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10387) Add config option to LoadTestTool which allows skipping call to System.exit()

2014-01-20 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10387:
--

 Summary: Add config option to LoadTestTool which allows skipping 
call to System.exit()
 Key: HBASE-10387
 URL: https://issues.apache.org/jira/browse/HBASE-10387
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu


I was running a Hoya load test which uses LoadTestTool.
The load was successful; however, the Hoya test failed with:
{code}
[ERROR] Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
/home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
[ERROR] - [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hoya-funtest: Execution default-test of goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test failed: The forked VM 
terminated without saying properly goodbye. VM crash or System.exit called ?
Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
/home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:224)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test 
failed: The forked VM terminated without saying properly goodbye. VM crash or 
System.exit called ?
Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
/home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:115)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 19 more
Caused by: java.lang.RuntimeException: The forked VM terminated without saying 

[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876868#comment-13876868
 ] 

stack commented on HBASE-10384:
---

+1

On commit, add a comment on why you bother sorting.

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker
 Attachments: hbase-10384.patch


 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10123) Change default ports; move them out of linux ephemeral port range

2014-01-20 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-10123:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Change default ports; move them out of linux ephemeral port range
 -

 Key: HBASE-10123
 URL: https://issues.apache.org/jira/browse/HBASE-10123
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.1.1
Reporter: stack
Assignee: Jonathan Hsieh
Priority: Critical
 Fix For: 0.99.0

 Attachments: hbase-10123.patch, hbase-10123.v2.patch, 
 hbase-10123.v3.patch, hbase-10123.v4.patch


 Our defaults clash w/ the range linux assigns itself for creating come-and-go 
 ephemeral ports; likely in our history we've clashed w/ a random, short-lived 
 process.  While easy to change the defaults, we should just ship w/ defaults 
 that make sense.  We could host ourselves up into the 7 or 8k range.
 See http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876873#comment-13876873
 ] 

Jimmy Xiang commented on HBASE-10384:
-

Sure. Thanks for the review.

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker
 Attachments: hbase-10384.patch


 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10387) Add config option to LoadTestTool which allows skipping call to System.exit()

2014-01-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10387:
---

Description: 
I was running a Hoya load test which uses LoadTestTool.
The load was successful; however, the Hoya test failed with:
{code}
[ERROR] Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
/home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
[ERROR] - [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hoya-funtest: Execution default-test of goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test failed: The forked VM 
terminated without saying properly goodbye. VM crash or System.exit called ?
Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
/home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:224)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test 
failed: The forked VM terminated without saying properly goodbye. VM crash or 
System.exit called ?
Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
-XX:+HeapDumpOnOutOfMemoryError -jar 
/home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
/home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:115)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 19 more
Caused by: java.lang.RuntimeException: The forked VM terminated without saying 
properly goodbye. VM crash or System.exit called ?
Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 

[jira] [Updated] (HBASE-10387) Add config option to LoadTestTool which allows skipping call to System.exit()

2014-01-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10387:
---

Attachment: 10387-v1.txt

Patch v1 adds a skip_sys_exit option.
At the end of a successful load test, System.exit() is skipped if this 
option is specified.
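
In spirit the change is a guarded exit; a minimal sketch, assuming the option lands in the tool's Configuration (the surrounding method is illustrative, not the patch itself):

{code}
import org.apache.hadoop.conf.Configuration;

final class SkipSysExitSketch {
  static final String SKIP_SYS_EXIT = "skip_sys_exit";

  // Call System.exit() only when the embedder has not asked us to skip it,
  // e.g. when the tool runs inside a test harness such as surefire.
  static void maybeExit(Configuration conf, int returnCode) {
    if (!conf.getBoolean(SKIP_SYS_EXIT, false)) {
      System.exit(returnCode);
    }
  }
}
{code}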

 Add config option to LoadTestTool which allows skipping call to System.exit()
 -

 Key: HBASE-10387
 URL: https://issues.apache.org/jira/browse/HBASE-10387
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
 Attachments: 10387-v1.txt


 I was running a Hoya load test which uses LoadTestTool.
 The load was successful; however, the Hoya test failed with:
 {code}
 [ERROR] Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
 [ERROR] - [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) 
 on project hoya-funtest: Execution default-test of goal 
 org.apache.maven.plugins:maven-surefire-plugin:2.16:test failed: The forked 
 VM terminated without saying properly goodbye. VM crash or System.exit called 
 ?
 Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:224)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
 at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
 at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
 at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
 at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
 at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
 at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
 at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
 at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
 at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
 at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
 Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
 default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test 
 failed: The forked VM terminated without saying properly goodbye. VM crash or 
 System.exit called ?
 Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  

[jira] [Updated] (HBASE-10373) Add more details info for ACL group in HBase book

2014-01-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10373:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk.  Will show the next time we push the doc.  Thank you for 
the contrib, Takeshi Miao.

 Add more details info for ACL group in HBase book
 -

 Key: HBASE-10373
 URL: https://issues.apache.org/jira/browse/HBASE-10373
 Project: HBase
  Issue Type: Improvement
  Components: documentation, security
Affects Versions: 0.99.0
Reporter: takeshi.miao
Assignee: takeshi.miao
Priority: Minor
 Fix For: 0.99.0

 Attachments: HBASE-10373-trunk-v01.patch


 The current ACL section '8.3. Access Control' in the HBase book does not 
 instruct users on how to grant ACLs to a group. I think it is good to make this 
 clear for users, since group grants are a great and important feature for 
 managing their ACLs more easily.
 Mailing list thread:
 http://mail-archives.apache.org/mod_mbox/hbase-user/201401.mbox/%3CCA+RK=_b+umfzwiaeud9fsqjk8rs8l-vuo6arvos8k5sutog...@mail.gmail.com%3E



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10363) [0.94] TestInputSampler and TestInputSamplerTool fail under hadoop 2.0/23 profiles.

2014-01-20 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10363:
--

Fix Version/s: (was: 0.94.16)
   0.94.17

 [0.94] TestInputSampler and TestInputSamplerTool fail under hadoop 2.0/23 
 profiles.
 ---

 Key: HBASE-10363
 URL: https://issues.apache.org/jira/browse/HBASE-10363
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.15
Reporter: Jonathan Hsieh
Priority: Critical
 Fix For: 0.94.17


 From tip of 0.94 and from 0.94.15.
 {code}
 jon@swoop:~/proj/hbase-0.94$ mvn clean test -Dhadoop.profile=2.0 
 -Dtest=TestInputSampler,TestInputSamplerTool -PlocalTests
 ...
 Running org.apache.hadoop.hbase.mapreduce.hadoopbackport.TestInputSamplerTool
 Tests run: 4, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 3.718 sec 
 <<< FAILURE!
 Running org.apache.hadoop.hbase.mapreduce.hadoopbackport.TestInputSampler
 Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.666 sec 
 <<< FAILURE!
 Results :
 Tests in error: 
   
 testSplitInterval(org.apache.hadoop.hbase.mapreduce.hadoopbackport.TestInputSamplerTool):
  Failed getting constructor
   
 testSplitRamdom(org.apache.hadoop.hbase.mapreduce.hadoopbackport.TestInputSamplerTool):
  Failed getting constructor
   
 testSplitSample(org.apache.hadoop.hbase.mapreduce.hadoopbackport.TestInputSamplerTool):
  Failed getting constructor
   
 testSplitSampler(org.apache.hadoop.hbase.mapreduce.hadoopbackport.TestInputSampler):
  Failed getting constructor
   
 testIntervalSampler(org.apache.hadoop.hbase.mapreduce.hadoopbackport.TestInputSampler):
  Failed getting constructor
 Tests run: 6, Failures: 0, Errors: 5, Skipped: 0
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10387) Add config option to LoadTestTool which allows skipping call to System.exit()

2014-01-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10387:
---

Resolution: Invalid
Status: Resolved  (was: Patch Available)

Integration tests and unit tests can successfully use LoadTestTool. Suggest 
finding out how they do it and doing that instead of introducing a hack.
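
For reference, a sketch of the in-process route a test can take, assuming the standard Hadoop Tool contract that LoadTestTool implements (and assuming its run() path returns rather than exits):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.util.LoadTestTool;
import org.apache.hadoop.util.ToolRunner;

final class EmbeddedLoadTestSketch {
  // ToolRunner.run() hands back the tool's exit code, so a test can
  // assert on it without the JVM being torn down underneath surefire.
  static int runLoadTest(Configuration conf, String[] args) throws Exception {
    return ToolRunner.run(conf, new LoadTestTool(), args);
  }
}
{code}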

 Add config option to LoadTestTool which allows skipping call to System.exit()
 -

 Key: HBASE-10387
 URL: https://issues.apache.org/jira/browse/HBASE-10387
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10387-v1.txt


 I was running a Hoya load test which uses LoadTestTool.
 The load was successful; however, the Hoya test failed with:
 {code}
 [ERROR] Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
 [ERROR] - [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) 
 on project hoya-funtest: Execution default-test of goal 
 org.apache.maven.plugins:maven-surefire-plugin:2.16:test failed: The forked 
 VM terminated without saying properly goodbye. VM crash or System.exit called 
 ?
 Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:224)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
 at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
 at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
 at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
 at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
 at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
 at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
 at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
 at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
 at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
 at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
 Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
 default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test 
 failed: The forked VM terminated without saying properly goodbye. VM crash or 
 System.exit called ?
 Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 

[jira] [Updated] (HBASE-10377) Add test for HBASE-10370 Compaction in out-of-date Store causes region split failure

2014-01-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10377:
--

   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to trunk.  Thank you for the nice test, [~liushaohui].

 Add test for HBASE-10370 Compaction in out-of-date Store causes region split 
 failure
 

 Key: HBASE-10377
 URL: https://issues.apache.org/jira/browse/HBASE-10377
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Liu Shaohui
 Fix For: 0.99.0

 Attachments: 10377-testSplitFailedCompactionAndSplit.html, 
 HBASE-10377-v1.patch


 HBASE-10370 fixes the issue where a region split fails after compacting an 
 out-of-date Store.
 The new test failed in this build:
 https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/82/testReport/org.apache.hadoop.hbase.regionserver/TestSplitTransactionOnCluster/testSplitFailedCompactionAndSplit/
 This issue is to make the new test, testSplitFailedCompactionAndSplit, robust.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Ishan Chhabra (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876880#comment-13876880
 ] 

Ishan Chhabra commented on HBASE-10380:
---

Ok. I'll submit a trunk patch then. I tend to create 0.94 patches first since we 
are running that internally.

 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor
 Attachments: HBASE_10380_0.94-v1.patch


 Add options in CopyTable to:
 1. Specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10387) Add config option to LoadTestTool which allows skipping call to System.exit()

2014-01-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10387:
---

Assignee: Ted Yu
  Status: Patch Available  (was: Open)

 Add config option to LoadTestTool which allows skipping call to System.exit()
 -

 Key: HBASE-10387
 URL: https://issues.apache.org/jira/browse/HBASE-10387
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10387-v1.txt


 I was running a Hoya load test which uses LoadTestTool.
 The load was successful; however, the Hoya test failed with:
 {code}
 [ERROR] Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
 [ERROR] - [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) 
 on project hoya-funtest: Execution default-test of goal 
 org.apache.maven.plugins:maven-surefire-plugin:2.16:test failed: The forked 
 VM terminated without saying properly goodbye. VM crash or System.exit called 
 ?
 Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:224)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
 at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
 at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
 at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
 at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
 at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
 at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
 at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
 at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
 at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
 at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
 Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
 default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test 
 failed: The forked VM terminated without saying properly goodbye. VM crash or 
 System.exit called ?
 Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
 

[jira] [Resolved] (HBASE-10386) MultiThreadedWriter should utilize LOG in waitForFinish()

2014-01-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-10386.


Resolution: Invalid

LoadTestTool reports other status updates via System.out and System.err. Better 
to change all or change none. Might as well change none, since nobody has asked 
for it.

 MultiThreadedWriter should utilize LOG in waitForFinish()
 -

 Key: HBASE-10386
 URL: https://issues.apache.org/jira/browse/HBASE-10386
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor

 I was doing a load test on a 0.98 cluster and saw the following in the output:
 {code}
 2014-01-20 20:44:52,567 [Thread-2] INFO  client.HBaseAdmin 
 (HBaseAdmin.java:enableTable(761)) - Enabled table test
 Starting to write data...
 Failed to write keys: 0
 {code}
 The above was from a call to System.out.println().
 There is a LOG field in MultiThreadedWriter which is used in the other methods, 
 but not in waitForFinish().
 waitForFinish() should utilize LOG as well.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10385) ImportTsv to parse date time from typical loader formats

2014-01-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876885#comment-13876885
 ] 

stack commented on HBASE-10385:
---

Thank you for the contrib, [~ericavijay].  Would you mind formatting it as a 
patch file attached to the issue, and having the code follow the conventions of 
the rest of the code base (see the reference guide on how to contribute if you 
need more)?  Also, this looks like behavior that should be optional?

 ImportTsv to parse date time from typical loader formats
 

 Key: HBASE-10385
 URL: https://issues.apache.org/jira/browse/HBASE-10385
 Project: HBase
  Issue Type: New Feature
  Components: mapreduce
Affects Versions: 0.96.1.1
Reporter: Vijay Sarvepali
Priority: Minor
  Labels: importtsv
   Original Estimate: 2h
  Remaining Estimate: 2h

 Simple patch to enable parsing of standard date time fields from TSV files 
 into HBase.
 ***************
 *** 57,62 ****
 --- 57,70 ----
   import com.google.common.base.Splitter;
   import com.google.common.collect.Lists;
 
 + //2013-08-19T04:39:07
 + import java.text.DateFormat;
 + import java.util.*;
 + import java.text.SimpleDateFormat;
 + import java.text.ParseException;
 + 
 + 
 + 
   /**
    * Tool to import data from a TSV file.
    *
 ***************
 *** 220,229 ****
         getColumnOffset(timestampKeyColumnIndex),
         getColumnLength(timestampKeyColumnIndex));
       try {
 !       return Long.parseLong(timeStampStr);
       } catch (NumberFormatException nfe) {
         // treat this record as bad record
 !       throw new BadTsvLineException("Invalid timestamp " + timeStampStr);
       }
     }
 --- 228,239 ----
         getColumnOffset(timestampKeyColumnIndex),
         getColumnLength(timestampKeyColumnIndex));
       try {
 !       return Long.parseLong(timeStampStr);
       } catch (NumberFormatException nfe) {
 +       // Try this record with string to date in mseconds long
 +       return extractTimestampInput(timeStampStr);
         // treat this record as bad record
 !       // throw new BadTsvLineException("Invalid timestamp " + timeStampStr);
       }
     }
 ***************
 *** 243,248 ****
 --- 253,274 ----
       return lineBytes;
     }
   }
 + public static long extractTimestampInput(String strDate) throws BadTsvLineException {
 +   // "yyyy" in the patterns below is reconstructed; the archive garbled the
 +   // quoted format strings (cf. the 2013-08-19T04:39:07 example above).
 +   final List<String> dateFormats =
 +       Arrays.asList("yyyy-MM-dd HH:mm:ss.SSS", "yyyy-MM-dd'T'HH:mm:ss");
 +   for (String format : dateFormats) {
 +     SimpleDateFormat sdf = new SimpleDateFormat(format);
 +     try {
 +       Date d = sdf.parse(strDate);
 +       long msecs = d.getTime();
 +       return msecs;
 +     } catch (ParseException e) {
 +       // intentionally empty; try the next format
 +     }
 +   }
 +   // If we get here we have a problem converting the timestamp for this row.
 +   throw new BadTsvLineException("Invalid timestamp " + strDate);
 + }
 
   public static class BadTsvLineException extends Exception {
     public BadTsvLineException(String err) {



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10380) Add bytesBinary and filter options to CopyTable

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876900#comment-13876900
 ] 

Andrew Purtell commented on HBASE-10380:


I concur with Nick about starting with a trunk patch, but I looked at the 
attached patch anyhow. 

I don't like the idea of serializing and deserializing filters from a file. We 
have a filter that accepts a textual input language, see ParseFilter. If 0.94 
doesn't support this then a backport could be in order, but ParseFilter for 
trunk definitely seems the better option to me. 

Check the imports on this patch, some unwanted/unneeded ones slipped through.
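
For illustration, ParseFilter turns a textual expression into a Filter object, so CopyTable could take the filter as a plain string argument instead of a serialized file; a sketch (the expression and the helper method are examples, not the attached patch):

{code}
import java.nio.charset.CharacterCodingException;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.ParseFilter;

final class CopyTableFilterSketch {
  // Parse a textual filter expression, e.g. "PrefixFilter ('row-')",
  // and attach the result to the CopyTable scan.
  static void applyFilter(Scan scan, String expression)
      throws CharacterCodingException {
    Filter filter = new ParseFilter().parseFilterString(expression);
    scan.setFilter(filter);
  }
}
{code}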

 Add bytesBinary and filter options to CopyTable
 ---

 Key: HBASE-10380
 URL: https://issues.apache.org/jira/browse/HBASE-10380
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
Priority: Minor
 Attachments: HBASE_10380_0.94-v1.patch


 Add options in CopyTable to:
 1. Specify the start and stop row in bytesBinary format 
 2. Use filters



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876903#comment-13876903
 ] 

Andrew Purtell commented on HBASE-10323:


+1 for 0.98

 Auto detect data block encoding in HFileOutputFormat
 

 Key: HBASE-10323
 URL: https://issues.apache.org/jira/browse/HBASE-10323
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
 Fix For: 0.99.0

 Attachments: HBASE_10323-0.94.15-v1.patch, 
 HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, 
 HBASE_10323-0.94.15-v4.patch, HBASE_10323-0.94.15-v5.patch, 
 HBASE_10323-trunk-v1.patch, HBASE_10323-trunk-v2.patch, 
 HBASE_10323-trunk-v3.patch, HBASE_10323-trunk-v4.patch


 Currently, one has to specify the data block encoding of the table explicitly 
 using the config parameter 
 hbase.mapreduce.hfileoutputformat.datablock.encoding when doing a bulk 
 load. This option is easily missed, is not documented, and also works 
 differently from compression, block size and bloom filter type, which are auto 
 detected. The solution is to add support for auto detecting the data block 
 encoding, similar to the other parameters (a sketch follows below). 
 The current patch does the following:
 1. Automatically detects the data block encoding in HFileOutputFormat.
 2. Keeps the legacy option of manually specifying the data block encoding
 around as a method to override auto detection.
 3. Moves string conf parsing to the start of the program so that it fails
 fast during startup instead of failing during record writes. It also
 makes the internals of the program type safe.
 4. Adds missing doc strings and unit tests for the code serializing and
 deserializing the config parameters for bloom filter type, block size and
 data block encoding.
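
A sketch of the detection step (the helper's shape is illustrative; the real change lives in HFileOutputFormat's configure path, alongside compression, block size and bloom type):

{code}
import java.util.Map;
import java.util.TreeMap;

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.util.Bytes;

final class EncodingAutoDetectSketch {
  // Read each family's data block encoding off the table descriptor so the
  // bulk-load writer emits HFiles that match the target table, with no
  // manual hbase.mapreduce.hfileoutputformat.datablock.encoding needed.
  static Map<byte[], DataBlockEncoding> detectEncodings(HTableDescriptor desc) {
    Map<byte[], DataBlockEncoding> encodings =
        new TreeMap<byte[], DataBlockEncoding>(Bytes.BYTES_COMPARATOR);
    for (HColumnDescriptor family : desc.getFamilies()) {
      encodings.put(family.getName(), family.getDataBlockEncoding());
    }
    return encodings;
  }
}
{code}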



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876905#comment-13876905
 ] 

Andrew Purtell commented on HBASE-10384:


+1 with the comment for 0.98. WTF. Good catch.

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker
 Attachments: hbase-10384.patch


 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; we get all columns incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 1 
 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10387) Add config option to LoadTestTool which allows skipping call to System.exit()

2014-01-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876917#comment-13876917
 ] 

Ted Yu commented on HBASE-10387:


Should have consulted the hbase-it module earlier :-)

 Add config option to LoadTestTool which allows skipping call to System.exit()
 -

 Key: HBASE-10387
 URL: https://issues.apache.org/jira/browse/HBASE-10387
 Project: HBase
  Issue Type: Improvement
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: 10387-v1.txt


 I was running a Hoya load test which uses LoadTestTool.
 The load was successful; however, the Hoya test failed with:
 {code}
 [ERROR] Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
 [ERROR] - [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) 
 on project hoya-funtest: Execution default-test of goal 
 org.apache.maven.plugins:maven-surefire-plugin:2.16:test failed: The forked 
 VM terminated without saying properly goodbye. VM crash or System.exit called 
 ?
 Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  /home/yarn/hoya/hoya-funtest/target/surefire/surefire6543037778494048137tmp 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefire_07364766695548839031tmp
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:224)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
 at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
 at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
 at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
 at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
 at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
 at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
 at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
 at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
 at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
 at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
 at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
 Caused by: org.apache.maven.plugin.PluginExecutionException: Execution 
 default-test of goal org.apache.maven.plugins:maven-surefire-plugin:2.16:test 
 failed: The forked VM terminated without saying properly goodbye. VM crash or 
 System.exit called ?
 Command was/bin/sh -c cd /home/yarn/hoya/hoya-funtest && 
 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.9.x86_64/jre/bin/java -Xmx1024m 
 -XX:+HeapDumpOnOutOfMemoryError -jar 
 /home/yarn/hoya/hoya-funtest/target/surefire/surefirebooter274017860635722411.jar
  

[jira] [Updated] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat

2014-01-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10323:
---

Fix Version/s: 0.98.0
 Hadoop Flags: Reviewed

Integrated to 0.98 as well.

 Auto detect data block encoding in HFileOutputFormat
 

 Key: HBASE-10323
 URL: https://issues.apache.org/jira/browse/HBASE-10323
 Project: HBase
  Issue Type: Improvement
  Components: mapreduce
Reporter: Ishan Chhabra
Assignee: Ishan Chhabra
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE_10323-0.94.15-v1.patch, 
 HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, 
 HBASE_10323-0.94.15-v4.patch, HBASE_10323-0.94.15-v5.patch, 
 HBASE_10323-trunk-v1.patch, HBASE_10323-trunk-v2.patch, 
 HBASE_10323-trunk-v3.patch, HBASE_10323-trunk-v4.patch


 Currently, one has to specify the data block encoding of the table explicitly 
 using the config parameter 
 hbase.mapreduce.hfileoutputformat.datablock.encoding when doing a bulk 
 load. This option is easily missed, is not documented, and also works 
 differently from compression, block size and bloom filter type, which are auto 
 detected. The solution is to add support for auto detecting the data block 
 encoding, similar to the other parameters. 
 The current patch does the following:
 1. Automatically detects the data block encoding in HFileOutputFormat.
 2. Keeps the legacy option of manually specifying the data block encoding
 around as a method to override auto detection.
 3. Moves string conf parsing to the start of the program so that it fails
 fast during startup instead of failing during record writes. It also
 makes the internals of the program type safe.
 4. Adds missing doc strings and unit tests for the code serializing and
 deserializing the config parameters for bloom filter type, block size and
 data block encoding.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10378) Divide HLog interface into User and Implementor specific interfaces

2014-01-20 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876942#comment-13876942
 ] 

Himanshu Vashishtha commented on HBASE-10378:
-

Thanks for taking a look at the patch. These are really good questions, and are 
applicable to the current WAL implementation as well.

bq. should we have an implementation for the WALService where, based on the 
number of HLogs, that many syncer and writer threads need to be started along 
with the replication services for them? Currently the HRS just instantiates one 
HLog and starts them. What do you say?
Yes, that is definitely one possible implementation.

Re: the getWAL() API wrt grouping WALs:
Yes, the grouping logic needs to be extracted and made available; it can live 
either in the HRS or in the WAL implementor. One way is to use an array of 
WALService instances in HRegionServer, and let the WALGroup return an array to 
the region server. That way, the region server could invoke its usual methods 
(append/sync, etc.) just like it does currently, and the grouping logic would 
live in the HRS in this case.
Or, have the grouping logic in the WAL impl and rewrite getWAL() to use this 
grouping knowledge: i.e., getWAL() passes an HRegionInfo to the underlying 
Group-WAL impl, and the underlying Group-WAL impl returns a WALService instance 
based on its grouping. In either case, I see WALGroup can extend the AbstractWAL 
interface. What do you think? Or, let's chat offline? A rough sketch of the 
proposed split follows below.
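
To make the proposed split concrete, a hypothetical sketch; the method sets below are illustrative, not the contents of 10378-1.patch:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// User-facing API: what a RegionServer calls.
interface WALService {
  long append(byte[] encodedRegionName, byte[] entry) throws IOException;
  void sync() throws IOException;
}

// Implementor-facing API: lifecycle and file-level details that ordinary
// users (and unit tests) should not have to typecast for.
interface WAL extends WALService {
  long getFilenum();
  byte[][] rollWriter() throws IOException;
}

// Skeleton base class carrying the fields every implementation needs.
abstract class AbstractWAL implements WAL {
  protected FileSystem fs;
  protected Configuration conf;
  protected Path logDir;
}
{code}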


 Divide HLog interface into User and Implementor specific interfaces
 ---

 Key: HBASE-10378
 URL: https://issues.apache.org/jira/browse/HBASE-10378
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: Himanshu Vashishtha
 Attachments: 10378-1.patch


 HBASE-5937 introduces the HLog interface as a first step to support multiple 
 WAL implementations. This interface is a good start, but has some 
 limitations/drawbacks in its current state, such as:
 1) There is no clear distinction between User and Implementor APIs: it 
 provides APIs both for WAL users (append, sync, etc.) and for WAL 
 implementors (Reader/Writer interfaces, etc.). Some APIs are very much 
 implementation specific (getFileNum, etc.), and a user such as a 
 RegionServer shouldn't need to know about them.
 2) There are about 14 methods in FSHLog which are not present in the HLog 
 interface but are used in several places in the unit test code. These tests 
 typecast HLog to FSHLog, which makes it very difficult to test multiple WAL 
 implementations without doing some ugly checks.
 I'd like to propose some changes to the HLog interface that would ease the 
 multi-WAL story:
 1) Have two interfaces, WAL and WALService. WAL provides APIs for 
 implementors. WALService provides APIs for users (such as the RegionServer).
 2) A skeleton implementation of the above two interfaces as the base class 
 for other WAL implementations (AbstractWAL). It provides the fields required 
 by all subclasses (fs, conf, log dir, etc.). Make a minimal set of test-only 
 methods and add this set to AbstractWAL.
 3) HLogFactory returns a WALService reference when creating a WAL instance; 
 if a user needs to access impl-specific APIs (there are unit tests which get 
 the WAL from an HRegionServer and then call impl-specific APIs), typecast to 
 AbstractWAL.
 4) Make TestHLog abstract and let all implementors provide their respective 
 test class which extends TestHLog (TestFSHLog, for example). A minimal 
 sketch of the proposed interface split follows.
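 Below is a minimal sketch of how the proposed split could look. The names 
 WAL, WALService and AbstractWAL follow this proposal, but the exact method 
 sets and fields are assumptions for illustration.
 {noformat}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 // User-facing API: what a RegionServer calls.
 interface WALService {
   long append(/* HRegionInfo, WALEdit, ... elided */);
   void sync();
   void close();
 }

 // Implementor-facing API: lifecycle and file-handling details that users
 // such as HRegionServer should never need to see (getFileNum, etc.).
 interface WAL extends WALService {
   void rollWriter();
 }

 // Skeleton base class holding the fields every implementation needs.
 abstract class AbstractWAL implements WAL {
   protected final FileSystem fs;
   protected final Configuration conf;
   protected final Path logDir;

   protected AbstractWAL(FileSystem fs, Configuration conf, Path logDir) {
     this.fs = fs;
     this.conf = conf;
     this.logDir = logDir;
   }
 }
 {noformat}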



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13876958#comment-13876958
 ] 

Hadoop QA commented on HBASE-10384:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12624012/hbase-10384.patch
  against trunk revision .
  ATTACHMENT ID: 12624012

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8477//console

This message is automatically generated.

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker
 Attachments: hbase-10384.patch


 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; all columns are incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 
 1 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
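 Until the fix lands, one possible client-side workaround, inferred from the 
 two examples above (the working one adds its qualifiers in lexicographic 
 order), is to sort the qualifiers before adding them. This is an inference 
 from the report, not a documented contract:
 {noformat}
 import java.util.Arrays;

 import org.apache.hadoop.hbase.client.Increment;
 import org.apache.hadoop.hbase.util.Bytes;

 public class SortedIncrementExample {
   // Build an Increment whose qualifiers are added in sorted order, e.g.
   // sortedIncrement(row, cf, "counter_B", "counter_C", "counter_A", "counter_D").
   static Increment sortedIncrement(byte[] row, byte[] cf, String... qualifiers) {
     String[] sorted = qualifiers.clone();
     Arrays.sort(sorted); // lexicographic order matches byte order for ASCII names
     Increment inc = new Increment(row);
     for (String q : sorted) {
       inc.addColumn(cf, Bytes.toBytes(q), 1L);
     }
     return inc;
   }
 }
 {noformat}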



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10388) Add export control notice in README

2014-01-20 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10388:
--

 Summary: Add export control notice in README
 Key: HBASE-10388
 URL: https://issues.apache.org/jira/browse/HBASE-10388
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.0, 0.99.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.0, 0.99.0


A discussion on general@incubator for Twill mentioned that the (out-of-date?) 
document at http://www.apache.org/dev/crypto.html suggests an export notice in 
the project README. I know Apache Accumulo added a transparent encryption 
feature to their trunk recently and found an export notice in their readme. 
Adding one to ours out of an abundance of caution.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10388) Add export control notice in README

2014-01-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10388:
---

Description: A discussion on general@incubator for Twill mentioned that the 
(out-of-date?) document at http://www.apache.org/dev/crypto.html suggests an 
export notice in the project README. I know Apache Accumulo added a transparent 
encryption feature recently and found an export notice in their README on their 
trunk. Adding one to ours out of an abundance of caution.  (was: A discussion 
on general@incubator for Twill mentioned that the (out-of-date?) document at 
http://www.apache.org/dev/crypto.html suggests an export notice in the project 
README. I know Apache Accumulo added a transparent encryption feature to their 
trunk recently and found an export notice in their readme. Adding one to ours 
out of an abundance of caution.)

 Add export control notice in README
 ---

 Key: HBASE-10388
 URL: https://issues.apache.org/jira/browse/HBASE-10388
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.0, 0.99.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.0, 0.99.0


 A discussion on general@incubator for Twill mentioned that the (out-of-date?) 
 document at http://www.apache.org/dev/crypto.html suggests an export notice 
 in the project README. I know Apache Accumulo added a transparent encryption 
 feature recently and found an export notice in their README on their trunk. 
 Adding one to ours out of an abundance of caution.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10388) Add export control notice in README

2014-01-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10388:
---

Attachment: 10388.patch

I'm not able to get the site target to build (trust anchors are empty), but 
Emacs did validate the XML I put together.

 Add export control notice in README
 ---

 Key: HBASE-10388
 URL: https://issues.apache.org/jira/browse/HBASE-10388
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.0, 0.99.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.0, 0.99.0

 Attachments: 10388.patch


 A discussion on general@incubator for Twill mentioned that the (out-of-date?) 
 document at http://www.apache.org/dev/crypto.html suggests an export notice 
 in the project README. I know Apache Accumulo added a transparent encryption 
 feature recently and found an export notice in their README on their trunk. 
 Adding one to ours out of an abundance of caution.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10388) Add export control notice in README

2014-01-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10388:
---

Priority: Blocker  (was: Major)

 Add export control notice in README
 ---

 Key: HBASE-10388
 URL: https://issues.apache.org/jira/browse/HBASE-10388
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.0, 0.99.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: 10388.patch


 A discussion on general@incubator for Twill mentioned that the (out-of-date?) 
 document at http://www.apache.org/dev/crypto.html suggests an export notice 
 in the project README. I know Apache Accumulo added a transparent encryption 
 feature recently and found an export notice in their README on their trunk. 
 Adding one to ours out of an abundance of caution.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10388) Add export control notice in README

2014-01-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877009#comment-13877009
 ] 

stack commented on HBASE-10388:
---

Go for it Andrew.

 Add export control notice in README
 ---

 Key: HBASE-10388
 URL: https://issues.apache.org/jira/browse/HBASE-10388
 Project: HBase
  Issue Type: Task
Affects Versions: 0.98.0, 0.99.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: 10388.patch


 A discussion on general@incubator for Twill mentioned that the (out-of-date?) 
 document at http://www.apache.org/dev/crypto.html suggests an export notice 
 in the project README. I know Apache Accumulo added a transparent encryption 
 feature recently and found an export notice in their README on their trunk. 
 Adding one to ours out of an abundance of caution.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10384) Failed to increment several columns in one Increment

2014-01-20 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-10384:


   Resolution: Fixed
Fix Version/s: 0.99.0
   0.96.2
   0.98.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Integrated into 0.96, 0.98, and trunk. Thanks.

 Failed to increment several columns in one Increment
 -

 Key: HBASE-10384
 URL: https://issues.apache.org/jira/browse/HBASE-10384
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
Priority: Blocker
 Fix For: 0.98.0, 0.96.2, 0.99.0

 Attachments: hbase-10384.patch


 We have a problem incrementing several columns of a row in one increment 
 request.
 This one works; all columns are incremented as expected:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}
 However, this one increments only counter_A; the other columns are reset to 
 1 instead of incremented:
 {noformat}
   Increment inc1 = new Increment(row);
   inc1.addColumn(cf, Bytes.toBytes("counter_B"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_C"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_A"), 1L);
   inc1.addColumn(cf, Bytes.toBytes("counter_D"), 1L);
   testTable.increment(inc1);
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10377) Add test for HBASE-10370 Compaction in out-of-date Store causes region split failure

2014-01-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877014#comment-13877014
 ] 

Hudson commented on HBASE-10377:


SUCCESS: Integrated in HBase-TRUNK #4838 (See 
[https://builds.apache.org/job/HBase-TRUNK/4838/])
HBASE-10377 Add test for HBASE-10370 Compaction in out-of-date Store causes 
region split failure (stack: rev 1559838)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransactionOnCluster.java


 Add test for HBASE-10370 Compaction in out-of-date Store causes region split 
 failure
 

 Key: HBASE-10377
 URL: https://issues.apache.org/jira/browse/HBASE-10377
 Project: HBase
  Issue Type: Test
Reporter: Ted Yu
Assignee: Liu Shaohui
 Fix For: 0.99.0

 Attachments: 10377-testSplitFailedCompactionAndSplit.html, 
 HBASE-10377-v1.patch


 HBASE-10370 fixes the issue where region split fails following compacting 
 out-of-date Store
 The new test failed in this build:
 https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/82/testReport/org.apache.hadoop.hbase.regionserver/TestSplitTransactionOnCluster/testSplitFailedCompactionAndSplit/
 This issue is to make the new test, testSplitFailedCompactionAndSplit, robust.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)

