[jira] [Commented] (HBASE-9272) A simple parallel, unordered scanner

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747277#comment-13747277
 ] 

Lars Hofhansl commented on HBASE-9272:
--

I was thinking about this too (i.e. keeping all RSs busy). On the other hand, I 
was trying to keep this simple, assuming that in most cases the region-to-server 
assignment would be more or less random.
With some number of threads and a reasonably sized cluster (without which a 
parallel scanner does not help much anyway), one would expect a fairly nice 
load distribution.

So a test with more regions should see the same speedup; there is nothing 
inherently costly per region (the ClientScanners will need to locate each region 
again, but the locations should be cached).

There are other considerations too. For example, instead of having a task per 
region, one could split the requested rowkey space into N slices (using the 
region boundaries as a poor man's histogram, by assuming that all regions are of 
roughly the same size in bytes). In that case one would leave the number of 
threads unlimited but instead limit the number of tasks (i.e. slices).

(Also, above, the penalty was 2.2% rather than 1.5% -- but that was just a single 
run anyway.)

Will do a test with more regions.
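
As a rough illustration only (this is not the attached ParallelClientScanner.java; 
names such as scanInSlices, the thread count, and the queue hand-off are made up), 
a slice-per-task version along these lines could look as follows:

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class SliceScanSketch {
  /**
   * Splits the table's key space into slices using region start keys as a poor
   * man's histogram and scans each slice in its own task, handing Results to
   * the caller over the supplied queue. Error handling is omitted on purpose.
   */
  public static void scanInSlices(final HTable table, int numSlices, int numThreads,
      final BlockingQueue<Result> results) throws Exception {
    byte[][] starts = table.getStartKeys();           // one entry per region
    int regionsPerSlice = Math.max(1, starts.length / numSlices);
    ExecutorService pool = Executors.newFixedThreadPool(numThreads);
    for (int i = 0; i < starts.length; i += regionsPerSlice) {
      final byte[] sliceStart = starts[i];
      int next = i + regionsPerSlice;
      // an empty stop row means "scan to the end of the table"
      final byte[] sliceStop = next < starts.length ? starts[next] : new byte[0];
      pool.submit(new Callable<Void>() {
        @Override
        public Void call() throws Exception {
          // in real code each task would likely use its own HTable instance
          ResultScanner scanner = table.getScanner(new Scan(sliceStart, sliceStop));
          try {
            for (Result r : scanner) {
              results.put(r);                         // unordered hand-off to the consumer
            }
          } finally {
            scanner.close();
          }
          return null;
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS);
  }
}
{code}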


> A simple parallel, unordered scanner
> 
>
> Key: HBASE-9272
> URL: https://issues.apache.org/jira/browse/HBASE-9272
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Attachments: ParallelClientScanner.java, ParallelClientScanner.java
>
>
> The contract of ClientScanner is to return rows in sort order. That limits 
> the order in which regions can be scanned.
> I propose a simple ParallelScanner that does not have this requirement and 
> queries regions in parallel, returning whatever gets returned first.
> This is generally useful for scans that filter a lot of data on the server, 
> or in cases where the client can react very quickly to the returned data.
> I have a simple prototype (it doesn't do error handling right, and might be a 
> bit heavy on the synchronization side - it uses a BlockingQueue to hand data 
> between the client using the scanner and the threads doing the scanning; it 
> also could potentially starve some scanners long enough for them to time out 
> at the server).
> On the plus side, it's only about 130 lines of code. :)



[jira] [Created] (HBASE-9297) Fix HFileV1Detector tool post HBASE-9126

2013-08-21 Thread Himanshu Vashishtha (JIRA)
Himanshu Vashishtha created HBASE-9297:
--

 Summary: Fix HFileV1Detector tool post HBASE-9126
 Key: HBASE-9297
 URL: https://issues.apache.org/jira/browse/HBASE-9297
 Project: HBase
  Issue Type: Bug
  Components: migration
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha


We need to detect any HFileV1 before upgrading to 0.96. The code to read the 
HFileV1 version was removed in HBASE-9126, which breaks the HFileV1Detector tool.



[jira] [Commented] (HBASE-8348) Polish the migration to 0.96

2013-08-21 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747276#comment-13747276
 ] 

Himanshu Vashishtha commented on HBASE-8348:


I have attached a patch and a log file. The patch takes care of the review 
comments by Jeffrey and Stack. The log file contains a sample run of the tool 
(command and log statements).

I will file a jira to fix the HFileV1 tool now. Thanks.

> Polish the migration to 0.96
> 
>
> Key: HBASE-8348
> URL: https://issues.apache.org/jira/browse/HBASE-8348
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.0
>Reporter: Jean-Daniel Cryans
>Assignee: rajeshbabu
>Priority: Blocker
> Fix For: 0.96.0
>
> Attachments: HBASE-8348-approach-2.patch, 
> HBASE-8348-approach-2-v2.1.patch, HBASE-8348-approach-2-v2.2.patch, 
> HBASE-8348-approach-2-v2.3.patch, HBASE-8348-approach-3.patch, 
> HBASE-8348_trunk.patch, HBASE-8348_trunk_v2.patch, HBASE-8348_trunk_v3.patch, 
> log
>
>
> Currently, migration works but there are still a couple of rough edges:
>  - HBASE-8045 finished the .META. migration but didn't remove ROOT, so it's 
> still on the filesystem.
>  - Data in ZK needs to be removed manually. Either we fix up the data in ZK 
> or we delete it ourselves.
>  - TestMetaMigrationRemovingHTD has a testMetaUpdatedFlagInROOT method, but 
> ROOT is gone now.
> Elliott was also mentioning that we could have "hbase migrate" do the HFileV1 
> checks, clear ZK, remove ROOT, etc.



[jira] [Updated] (HBASE-8348) Polish the migration to 0.96

2013-08-21 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-8348:
---

Attachment: log

Attached is a log file that contains sample runs of the tool.

> Polish the migration to 0.96
> 
>
> Key: HBASE-8348
> URL: https://issues.apache.org/jira/browse/HBASE-8348
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.0
>Reporter: Jean-Daniel Cryans
>Assignee: rajeshbabu
>Priority: Blocker
> Fix For: 0.96.0
>
> Attachments: HBASE-8348-approach-2.patch, 
> HBASE-8348-approach-2-v2.1.patch, HBASE-8348-approach-2-v2.2.patch, 
> HBASE-8348-approach-2-v2.3.patch, HBASE-8348-approach-3.patch, 
> HBASE-8348_trunk.patch, HBASE-8348_trunk_v2.patch, HBASE-8348_trunk_v3.patch, 
> log
>
>
> Currently, migration works but there are still a couple of rough edges:
>  - HBASE-8045 finished the .META. migration but didn't remove ROOT, so it's 
> still on the filesystem.
>  - Data in ZK needs to be removed manually. Either we fix up the data in ZK 
> or we delete it ourselves.
>  - TestMetaMigrationRemovingHTD has a testMetaUpdatedFlagInROOT method, but 
> ROOT is gone now.
> Elliott was also mentioning that we could have "hbase migrate" do the HFileV1 
> checks, clear ZK, remove ROOT, etc.



[jira] [Updated] (HBASE-8348) Polish the migration to 0.96

2013-08-21 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-8348:
---

Attachment: HBASE-8348-approach-2-v2.3.patch

> Polish the migration to 0.96
> 
>
> Key: HBASE-8348
> URL: https://issues.apache.org/jira/browse/HBASE-8348
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.0
>Reporter: Jean-Daniel Cryans
>Assignee: rajeshbabu
>Priority: Blocker
> Fix For: 0.96.0
>
> Attachments: HBASE-8348-approach-2.patch, 
> HBASE-8348-approach-2-v2.1.patch, HBASE-8348-approach-2-v2.2.patch, 
> HBASE-8348-approach-2-v2.3.patch, HBASE-8348-approach-3.patch, 
> HBASE-8348_trunk.patch, HBASE-8348_trunk_v2.patch, HBASE-8348_trunk_v3.patch, 
> log
>
>
> Currently, migration works but there are still a couple of rough edges:
>  - HBASE-8045 finished the .META. migration but didn't remove ROOT, so it's 
> still on the filesystem.
>  - Data in ZK needs to be removed manually. Either we fix up the data in ZK 
> or we delete it ourselves.
>  - TestMetaMigrationRemovingHTD has a testMetaUpdatedFlagInROOT method, but 
> ROOT is gone now.
> Elliott was also mentioning that we could have "hbase migrate" do the HFileV1 
> checks, clear ZK, remove ROOT, etc.



[jira] [Commented] (HBASE-9268) Client doesn't recover from a stalled region server

2013-08-21 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747272#comment-13747272
 ] 

Nicolas Liochon commented on HBASE-9268:


bq. How do you define "new" here? I haven't tested it but I'm pretty sure this 
wasn't an issue in 0.94.
Yeah, 0.94 is what I was thinking about. The code has changed a lot in this 
area, but in HBaseClient.java it seems that in 0.95 there are no timeouts on 
writes. So I wonder what extra logic makes it work in 0.94 (it's not purely 
theoretical: we could have this issue somewhere else in 0.95). I'm going to 
try 0.94 to be sure.

bq. It worked fine with HBASE-7590 once I fixed the class name in the release 
note 
Thanks for the test and the fix, JD.

> Client doesn't recover from a stalled region server
> ---
>
> Key: HBASE-9268
> URL: https://issues.apache.org/jira/browse/HBASE-9268
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Jean-Daniel Cryans
>Assignee: Nicolas Liochon
> Fix For: 0.98.0, 0.95.3
>
> Attachments: 9268-hack.patch
>
>
> Got this while testing the 0.95.2 RC.
> I sent kill -STOP to a region server and left it like that while running PE. 
> The clients didn't find the new region locations and, in the jstack, were stuck 
> doing RPC. Eventually I sent kill -CONT and the client printed these:
> bq. Exception in thread "TestClient-6" java.lang.RuntimeException: 
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 
> 128 actions: IOException: 90 times, SocketTimeoutException: 38 times,



[jira] [Updated] (HBASE-8595) Add rename operation in hbase shell

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-8595:
-

Fix Version/s: (was: 0.96.0)

> Add rename operation in hbase shell
> ---
>
> Key: HBASE-8595
> URL: https://issues.apache.org/jira/browse/HBASE-8595
> Project: HBase
>  Issue Type: New Feature
>  Components: shell
>Affects Versions: 0.94.8, 0.95.1
>Reporter: Aleksandr Shulman
>Assignee: Aleksandr Shulman
>Priority: Minor
> Attachments: HBASE-8595-v0.patch
>
>
> We can use a set of snapshot commands to elegantly rename a table. It would 
> be nice to wrap all those commands in a single call.
> http://hbase.apache.org/book.html#table.rename
> Also -- the documentation is missing the last step where the original table 
> needs to be deleted. I can add that to the docbook.
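
For reference, a hedged sketch of what such a wrapper could do through the Java 
client, using the HBaseAdmin snapshot APIs (the method and snapshot naming scheme 
here are invented; the shell command would presumably do something equivalent):

{code}
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class RenameTableSketch {
  /** Rename via snapshot/clone, then delete the snapshot and the original table. */
  public static void rename(HBaseAdmin admin, String oldName, String newName)
      throws Exception {
    String snapshotName = oldName + "_rename_snapshot";   // invented naming scheme
    admin.disableTable(oldName);
    admin.snapshot(snapshotName, oldName);
    admin.cloneSnapshot(snapshotName, newName);
    admin.deleteSnapshot(snapshotName);
    admin.deleteTable(oldName);                            // the step missing from the docs
  }
}
{code}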



[jira] [Commented] (HBASE-8565) stop-hbase.sh clean up: backup master

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747268#comment-13747268
 ] 

stack commented on HBASE-8565:
--

Committed to 0.95 and trunk.  Makes sense that this should be the one way to 
shut down the backup master unless you explicitly ask to do so.

You want this in 0.94 [~lhofhansl]?

On the second issue, test for presence of the process before waiting on it?

> stop-hbase.sh clean up: backup master
> -
>
> Key: HBASE-8565
> URL: https://issues.apache.org/jira/browse/HBASE-8565
> Project: HBase
>  Issue Type: Bug
>  Components: master, scripts
>Affects Versions: 0.94.7, 0.95.0
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Minor
> Fix For: 0.98.0, 0.94.12, 0.96.0
>
> Attachments: HBASE-8565-v1-0.94.patch, HBASE-8565-v1-trunk.patch
>
>
> In stop-hbase.sh:
> {code}
>   # TODO: store backup masters in ZooKeeper and have the primary send them a 
> shutdown message
>   # stop any backup masters
>   "$bin"/hbase-daemons.sh --config "${HBASE_CONF_DIR}" \
> --hosts "${HBASE_BACKUP_MASTERS}" stop master-backup
> {code}
> After HBASE-5213, stop-hbase.sh -> hbase master stop will bring down the 
> backup master too via the cluster status znode.
> We should not need the above code anymore.
> Another issue happens when the current master died and the backup master 
> became the active master.
> {code}
> nohup nice -n ${HBASE_NICENESS:-0} "$HBASE_HOME"/bin/hbase \
>--config "${HBASE_CONF_DIR}" \
>master stop "$@" > "$logout" 2>&1 < /dev/null &
> waitForProcessEnd `cat $pid` 'stop-master-command'
> {code}
> We can still issue 'stop-hbase.sh' from the old master.
> stop-hbase.sh -> hbase master stop -> look for active master -> request 
> shutdown
> This process still works.
> But the waitForProcessEnd statement will not work since the local master pid 
> is not relevant anymore.
> What is the best way to handle this case?



[jira] [Updated] (HBASE-7525) A canary monitoring program specifically for regionserver

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7525:
-

Priority: Critical  (was: Minor)

Marking critical again so I test this for 0.96 -- it looks nice.

> A canary monitoring program specifically for regionserver
> -
>
> Key: HBASE-7525
> URL: https://issues.apache.org/jira/browse/HBASE-7525
> Project: HBase
>  Issue Type: New Feature
>  Components: monitoring
>Affects Versions: 0.94.0
>Reporter: takeshi.miao
>Priority: Critical
> Fix For: 0.98.0, 0.96.0
>
> Attachments: HBASE-7525-0.95-v0.patch, HBASE-7525-0.95-v1.patch, 
> HBASE-7525-0.95-v3.patch, HBASE-7525-0.95-v4.patch, HBASE-7525-v0.patch, 
> RegionServerCanary.java
>
>
> *Motivation*
> This ticket is to provide a canary monitoring tool specifically for 
> HRegionserver, details as follows
> 1. This tool is requested by the operations team because they felt that a 
> canary check for every region of an HBase cluster is too many checks for them, 
> so I implemented this coarse-grained one based on the original 
> o.a.h.h.tool.Canary for them.
> 2. This tool is implemented with multi-threading, which means each Get 
> request is sent by its own thread. The reason I chose this approach is that we 
> have suffered a region server hang issue whose root cause is still not clear, 
> so this tool can help the operations team detect hung region servers, if any.
> *example*
> 1. the tool docs
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary -help
> Usage: [opts] [regionServerName 1 [regionServerName 2...]]
>  regionServerName - FQDN serverName; you can use the linux command "hostname -f" 
> to check your serverName
>  where [-opts] are:
>   -help      Show this help and exit.
>   -e         Use regionServerName as a regular expression,
>              which means the regionServerName is a regular expression pattern
>   -f         Stop the whole program if the first error occurs; default is true
>   -t         Timeout for a check; default is 60 (millisecs)
>   -daemon    Continuous check at defined intervals.
>   -interval  Interval between checks (sec)
> 2. Will send a request to each regionserver in a HBase cluster
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary
> 3. Will send a request to a regionserver by given name
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary rs1.domainname
> 4. Will send a request to regionserver(s) by given regular-expression
> /opt/trend/circus-opstool/bin/hbase-canary-monitor-each-regionserver.sh -e 
> rs1.domainname.pattern
> // another example
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary -e 
> tw-poc-tm-puppet-hdn[0-9]\{1,2\}.client.tw.trendnet.org
> 5. Will send a request to a regionserver and also set a timeout limit for 
> this test
> // query regionserver:rs1.domainname with timeout limit 10sec
> // -f false, means that will not exit this program even test failed
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary -f false -t 1 
> rs1.domainname
> // echo "1" if timeout
> echo "$?"
> 6. Will run in daemon mode, which means it will send a request to each 
> regionserver periodically
> ./bin/hbase org.apache.hadoop.hbase.tool.RegionServerCanary -daemon



[jira] [Updated] (HBASE-7767) Get rid of ZKTable, and table enable/disable state in ZK

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7767:
-

Fix Version/s: (was: 0.96.0)

Moving out an issue that is not being worked on, though I would love to have it.

> Get rid of ZKTable, and table enable/disable state in ZK 
> -
>
> Key: HBASE-7767
> URL: https://issues.apache.org/jira/browse/HBASE-7767
> Project: HBase
>  Issue Type: Sub-task
>  Components: Zookeeper
>Affects Versions: 0.95.2
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
>
> As discussed, keeping table enable/disable state in ZooKeeper breaks our 
> ZooKeeper contract. It is also very intrusive, being used from the client side, 
> the master, and region servers. We should get rid of it.



[jira] [Updated] (HBASE-8441) [replication] Refactor KeeperExceptions thrown from replication state interfaces into replication specific exceptions

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-8441:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.95 and to trunk.  Thanks, Chris and JD.

> [replication] Refactor KeeperExceptions thrown from replication state 
> interfaces into replication specific exceptions
> -
>
> Key: HBASE-8441
> URL: https://issues.apache.org/jira/browse/HBASE-8441
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: HBASE-8441-v1.patch, HBASE-8441-v2.patch
>
>
> Currently, the replication state interfaces (state, peers and queues) throw 
> KeeperExceptions from some of their methods. Refactor these into 
> replication-specific exceptions to prevent the implementation details of ZooKeeper from 
> leaking through.
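
A hedged sketch of the wrapping pattern being described (the exception class name 
and the helper method below are invented; the actual classes introduced by the 
patch may differ):

{code}
import org.apache.zookeeper.KeeperException;

/** Illustrative replication-specific exception; the real class name may differ. */
class ReplicationStateException extends Exception {
  ReplicationStateException(String msg, Throwable cause) {
    super(msg, cause);
  }
}

class ReplicationQueuesSketch {
  /** Callers see only the replication-specific exception, never KeeperException. */
  public java.util.List<String> getLogsInQueue(String queueId) throws ReplicationStateException {
    try {
      return readQueueFromZooKeeper(queueId);     // hypothetical helper using the ZK client
    } catch (KeeperException e) {
      throw new ReplicationStateException("Failed reading replication queue " + queueId, e);
    }
  }

  private java.util.List<String> readQueueFromZooKeeper(String queueId) throws KeeperException {
    // stand-in for the real ZooKeeper read; details omitted
    return java.util.Collections.emptyList();
  }
}
{code}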



[jira] [Commented] (HBASE-7564) [replication] Create interfaces for manipulation of replication state

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747261#comment-13747261
 ] 

stack commented on HBASE-7564:
--

[~ctrezzo] You finished then?  All the sub-issues are committed.  Does that 
mean this is done?  If so, congrats (and resolve this)!  Otherwise, what's in 
the way?

> [replication] Create interfaces for manipulation of replication state
> -
>
> Key: HBASE-7564
> URL: https://issues.apache.org/jira/browse/HBASE-7564
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Fix For: 0.96.0
>
> Attachments: ReplicationRefactor.pdf, ReplicationRefactor-v2.pdf
>
>
> Currently ReplicationZookeeper maintains all the zookeeper state for 
> replication. This makes the class relatively large and slightly confusing. 
> There are three major pieces of zookeeper state maintained for replication:
> 1. The state of replication (i.e. whether replication is ENABLED or DISABLED 
> on the cluster). This is held in the state znode.
> 2. The set of slave (or peer) clusters that replication needs to ship edits 
> to. This is held in the peer znode.
> 3. The replication queues that keep track of which hlog files still need to 
> be replicated. There is one queue for each replication source/peer cluster 
> pair.
> Splitting each of these three pieces into their own interfaces will separate 
> the implementation from the operations needed to manipulate replication 
> state. This will allow easier unit testing of the replication logic and more 
> flexibility for future implementations of how replication state is stored.
> The plan is to implement these changes as a series of patches (at least one 
> for each of the three interfaces). The state interface will be first, since 
> it is the most easily separable from the other two pieces.
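
A hedged sketch of how the three pieces might look as separate Java interfaces 
(all names and method signatures here are illustrative, not the committed HBase 
API):

{code}
import java.io.IOException;
import java.util.List;
import java.util.Map;

/** State of replication for the whole cluster (the state znode). */
interface ReplicationStateInterface {
  boolean isReplicationEnabled() throws IOException;
  void setReplicationEnabled(boolean enabled) throws IOException;
}

/** The set of slave/peer clusters that edits are shipped to (the peers znode). */
interface ReplicationPeersInterface {
  Map<String, String> listPeers() throws IOException;
  void addPeer(String peerId, String clusterKey) throws IOException;
  void removePeer(String peerId) throws IOException;
}

/** Per source/peer-cluster queues of hlog files that still need replicating. */
interface ReplicationQueuesInterface {
  List<String> getLogsInQueue(String queueId) throws IOException;
  void addLog(String queueId, String hlogName) throws IOException;
  void removeLog(String queueId, String hlogName) throws IOException;
}
{code}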



[jira] [Updated] (HBASE-7043) Region Server Group CLI commands

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7043:
-

Fix Version/s: (was: 0.96.0)

Moving out

> Region Server Group CLI commands
> 
>
> Key: HBASE-7043
> URL: https://issues.apache.org/jira/browse/HBASE-7043
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-6721_94_2.patch, HBASE-7043_94_2.patch, 
> HBASE-7043_94_3.patch, HBASE-7043_94_4.patch, HBASE-7043_94.patch
>
>




[jira] [Updated] (HBASE-6721) RegionServer Group based Assignment

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6721:
-

Fix Version/s: (was: 0.96.0)

Moving out this new feature.  Chatting with Francis Liu last night, it could be a 
good one for 0.98.

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Vandana Ayyalasomayajula
> Attachments: 6721-master-webUI.patch, HBASE-6721_10.patch, 
> HBASE-6721_8.patch, HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_3.patch, HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, 
> HBASE-6721_94_6.patch, HBASE-6721_94_7.patch, HBASE-6721_94.patch, 
> HBASE-6721_94.patch, HBASE-6721_9.patch, HBASE-6721_9.patch, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to it provides a client application with a level of 
> isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.
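
A hedged sketch of the grouping bookkeeping the proposal implies (class and field 
names are invented; see the attached design doc for the actual approach):

{code}
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.ServerName;

/** Illustrative only: one named group owning a set of servers and a set of tables. */
class RegionServerGroupSketch {
  final String name;
  final Set<ServerName> servers = new HashSet<ServerName>();
  final Set<String> tables = new HashSet<String>();

  RegionServerGroupSketch(String name) {
    this.name = name;
  }
}

// A group-aware AssignmentManager/balancer would then consider only the servers
// of the group that owns a table when assigning or balancing that table's regions.
{code}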



[jira] [Updated] (HBASE-7257) Region server group based configuration

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7257:
-

Fix Version/s: (was: 0.96.0)

> Region server group based configuration
> ---
>
> Key: HBASE-7257
> URL: https://issues.apache.org/jira/browse/HBASE-7257
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>
> The read/write load pattern would be different across region server groups.
> It is desirable for each group to have unique configuration parameters.



[jira] [Updated] (HBASE-7825) Retire non distributed log splitting related code

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-7825:
-

  Description: 
I think we only use distributed log splitting now and the legacy code before 
distributed log splitting should be retired. Any objections?

Thanks,
-Jeffrey

  was:

I think we only use distributed log splitting now and the legacy code before 
distributed log splitting should be retired. Any objections?

Thanks,
-Jeffrey

Fix Version/s: (was: 0.96.0)

Moving out a nice-to-have that is not being worked on

> Retire non distributed log splitting related code
> -
>
> Key: HBASE-7825
> URL: https://issues.apache.org/jira/browse/HBASE-7825
> Project: HBase
>  Issue Type: Wish
>  Components: master
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
>
> I think we only use distributed log splitting now and the legacy code before 
> distributed log splitting should be retired. Any objections?
> Thanks,
> -Jeffrey



[jira] [Commented] (HBASE-9208) ReplicationLogCleaner slow at large scale

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747252#comment-13747252
 ] 

stack commented on HBASE-9208:
--

Trunk patch looks good.  Does what you have deployed look like this patch 
[~davelatham]?

It doesn't have unit tests.  You are relying on existing tests to ensure that 
base functionality remains undisturbed?

Nits:

Rather than indent whole method, just reverse this test and return early: if 
(entries != null) {

Is this right?  Delete all files if not configured?

+   // all members of this class are null if replication is disabled, 
+   // so we cannot filter the files
 if (this.getConf() == null) {
-  return true;
+  LOG.warn("Not configured - allowing all hlogs to be deleted");
+  return files;
 }

Otherwise, patch looks good to me.

> ReplicationLogCleaner slow at large scale
> -
>
> Key: HBASE-9208
> URL: https://issues.apache.org/jira/browse/HBASE-9208
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: Dave Latham
>Assignee: Dave Latham
> Fix For: 0.94.12, 0.96.0
>
> Attachments: HBASE-9208-0.94.patch, HBASE-9208.patch, 
> HBASE-9208-v2.patch
>
>
> At a large scale the ReplicationLogCleaner fails to clean up .oldlogs as fast 
> as the cluster is producing them.  For each old HLog file that has been 
> replicated and should be deleted the ReplicationLogCleaner checks every 
> replication queue in ZooKeeper before removing it.  This means that as a 
> cluster scales up, both the number of files to delete and the time to delete 
> each file grow, so the cleanup chore scales quadratically.  In our case it 
> reached the point where the oldlogs were growing faster than they were being 
> cleaned up.
> We're now running with a patch that allows the ReplicationLogCleaner to 
> refresh its list of files in the replication queues from ZooKeeper just once 
> for each batch of files the CleanerChore wants to evaluate.
> I'd propose updating FileCleanerDelegate to take a List of files rather 
> than a single one at a time.  This would allow file cleaners that check an 
> external resource for references such as ZooKeeper (for 
> ReplicationLogCleaner) or HDFS (for SnapshotLogCleaner which looks like it 
> may also have similar trouble at scale) to load those references once per 
> batch rather than for every log.
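
A hedged sketch of the batched-delegate shape being proposed (the interface and 
method names here are illustrative; the actual FileCleanerDelegate signature may 
differ):

{code}
import org.apache.hadoop.fs.FileStatus;

/** Illustrative batched form: load external references once per batch of candidates. */
interface BatchedFileCleanerDelegate {
  /**
   * Returns the subset of candidate files that are safe to delete. A
   * replication-aware implementation would read the replication queues from
   * ZooKeeper once here, rather than once per file.
   */
  Iterable<FileStatus> getDeletableFiles(Iterable<FileStatus> candidates);
}
{code}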



[jira] [Updated] (HBASE-9141) Replication Znodes Backup Tool

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9141:
-

Fix Version/s: (was: 0.96.0)

Moving out.  We are not going to use this for migrating, right 
[~himan...@cloudera.com]? But it might be of general use down the road.

> Replication Znodes Backup Tool
> --
>
> Key: HBASE-9141
> URL: https://issues.apache.org/jira/browse/HBASE-9141
> Project: HBase
>  Issue Type: Improvement
>  Components: migration, Replication
>Affects Versions: 0.94.10
>Reporter: Himanshu Vashishtha
>Assignee: Himanshu Vashishtha
> Attachments: HBase-9141.patch, HBase-9141-v1.patch
>
>
> While migrating to 0.96, we recommend deleting old znodes so users do not face 
> issues like HBASE-7766, and letting HBase create them out of the box.
> Though HBase tends to store only ephemeral data in ZooKeeper, replication takes 
> a different approach: almost all of its data (state, peer info, logs, etc.) is 
> kept in ZooKeeper. We would like to preserve that data in order to avoid 
> re-adding peers and to ensure complete replication after we have migrated 
> to 0.96. 
> This jira adds a tool to serialize/de-serialize replication znodes to the 
> underlying filesystem. This could be used while migrating to 0.96.0.
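
A hedged sketch of the serialize direction only, using the plain ZooKeeper client 
API (the on-disk format and the class/method names are invented; the actual tool 
may differ):

{code}
import java.io.DataOutputStream;

import org.apache.zookeeper.ZooKeeper;

class ReplicationZnodeDumpSketch {
  /** Recursively writes znode paths and data beneath 'znode' to 'out'. */
  static void dump(ZooKeeper zk, String znode, DataOutputStream out) throws Exception {
    byte[] data = zk.getData(znode, false, null);
    out.writeUTF(znode);
    out.writeInt(data == null ? 0 : data.length);
    if (data != null) {
      out.write(data);
    }
    for (String child : zk.getChildren(znode, false)) {
      String childPath = znode.endsWith("/") ? znode + child : znode + "/" + child;
      dump(zk, childPath, out);
    }
  }
}
{code}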



[jira] [Updated] (HBASE-9213) create a unified shim for hadoop 1 and 2 so that there's one build of HBase

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9213:
-

Fix Version/s: (was: 0.96.0)

> create a unified shim for hadoop 1 and 2 so that there's one build of HBase
> ---
>
> Key: HBASE-9213
> URL: https://issues.apache.org/jira/browse/HBASE-9213
> Project: HBase
>  Issue Type: Brainstorming
>  Components: build
>Reporter: Sergey Shelukhin
>
> This is a brainstorming JIRA. Working with the HBase dependency at this point 
> seems to be rather painful, from what I hear from other folks. We could follow 
> the Hive model with a unified shim, built in such a manner that it can work with 
> either version: at build time, dependencies for all 2-3 versions are 
> pulled and the appropriate one is used for tests, and when running HBase you 
> have to point at a Hadoop directory to get the dependencies. I am not very 
> proficient with Maven, so I am not quite certain of the best solution yet.
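
A hedged sketch of the kind of shim the comment describes (the interface, the 
implementation class names, and the version check are all invented for 
illustration):

{code}
import org.apache.hadoop.util.VersionInfo;

/** Illustrative only: a version-neutral interface with per-Hadoop implementations. */
interface HadoopShim {
  String describe();
}

class ShimLoaderSketch {
  /** Picks an implementation class by the Hadoop version found on the classpath. */
  static HadoopShim load() {
    String impl = VersionInfo.getVersion().startsWith("2.")
        ? "org.example.shim.Hadoop2Shim"       // hypothetical class names
        : "org.example.shim.Hadoop1Shim";
    try {
      return (HadoopShim) Class.forName(impl).newInstance();
    } catch (Exception e) {
      throw new RuntimeException("Could not load Hadoop shim " + impl, e);
    }
  }
}
{code}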



[jira] [Updated] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9285:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Resolve.  Was committed.

> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here were the related entries in the hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}



[jira] [Commented] (HBASE-9292) Syncer fails but we won't go down

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747244#comment-13747244
 ] 

stack commented on HBASE-9292:
--

I figured that what brings on this condition in HDFS is full disks.  We 
should go down anyway rather than be stuck here -- the regionserver is 
up, hosting data that is unreachable.

> Syncer fails but we won't go down
> -
>
> Key: HBASE-9292
> URL: https://issues.apache.org/jira/browse/HBASE-9292
> Project: HBase
>  Issue Type: Bug
>  Components: wal
>Affects Versions: 0.95.3
> Environment: hadoop-2.1.0-beta and tip of 0.95 branch
>Reporter: stack
>Priority: Critical
> Fix For: 0.96.0
>
>
> Running some simple loading tests i ran into the following running on 
> hadoop-2.1.0-beta.
> {code}
> 2013-08-20 16:51:56,310 DEBUG [regionserver60020.logRoller] 
> regionserver.LogRoller: HLog roll requested
> 2013-08-20 16:51:56,314 DEBUG [regionserver60020.logRoller] wal.FSHLog: 
> cleanupCurrentWriter  waiting for transactions to get synced  total 655761 
> synced till here 655750
> 2013-08-20 16:51:56,360 INFO  [regionserver60020.logRoller] wal.FSHLog: 
> Rolled WAL 
> /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042714402
>  with entries=985, filesize=122.5 M; new WAL 
> /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311
> 2013-08-20 16:51:56,378 WARN  [Thread-4788] hdfs.DFSClient: DataStreamer 
> Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
> /hbase/WALs/a2434.halxg.cloudera.com,60020,1377031955847/a2434.halxg.cloudera.com%2C60020%2C1377031955847.1377042716311
>  could only be replicated to 0 nodes instead of minReplication (=1).  There 
> are 5 datanode(s) running and no node(s) are excluded in this operation.
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
> at org.apache.hadoop.ipc.Client.call(Client.java:1347)
> at org.apache.hadoop.ipc.Client.call(Client.java:1300)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> at $Proxy13.addBlock(Unknown Source)
> at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at $Proxy13.addBlock(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
> at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
> at $Proxy14.addBlock(Unknown Source)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
> ...
> {code}
> Thereafter the server is up but useless and can't go down be

[jira] [Updated] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Vasu Mariyala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasu Mariyala updated HBASE-8930:
-

Attachment: HBASE-8930-rev1.patch

Attaching the patch HBASE-8930-rev1.patch, which has the fixes for the javadoc 
and the release audit. The test passed locally. Letting Hadoop QA 
retry with this patch.

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch, HBASE-8930-rev1.patch
>
>
> 1- Fill a row with some columns
> 2- Get the row requesting fewer columns than were persisted - use a filter to 
> print KVs
> 3- The filter prints columns that were not requested
> The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints each KV's qualifier (a sketch of such a filter follows the table below)
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
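
A hedged reconstruction of what the AllwaysNextColFilter referenced above might 
look like, assuming the 0.94 filter API (the reporter's actual class is not 
attached, so this is illustrative only):

{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

public class AllwaysNextColFilter extends FilterBase {
  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    // print every qualifier the filter is asked to evaluate
    System.out.println("evaluated: " + Bytes.toStringBinary(kv.getQualifier()));
    return ReturnCode.INCLUDE_AND_NEXT_COL;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    // no state to serialize in this illustrative filter
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    // no state to deserialize
  }
}
{code}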
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits();
> //==READING=//
> Filter allwaysNextColFilter = new AllwaysNextColFilter();
> Get get = new Get(row);
> get.addColumn(cf, col1); //5581
> get.addColumn(cf, col1v); //5584
> get.addColumn(cf, col1g

[jira] [Commented] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747207#comment-13747207
 ] 

Hudson commented on HBASE-9285:
---

SUCCESS: Integrated in HBase-TRUNK #4423 (See 
[https://builds.apache.org/job/HBase-TRUNK/4423/])
HBASE-9285 User who created table cannot scan the same table due to 
Insufficient permissions (Ted Yu) (tedyu: rev 1516346)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java


> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here were the related entries in the hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}



[jira] [Commented] (HBASE-9247) Cleanup Key/KV/Meta/MetaKey Comparators

2013-08-21 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747208#comment-13747208
 ] 

Jonathan Hsieh commented on HBASE-9247:
---

There are three comparators being used -- MetaKey, Key, and RawComparator (for 
the bloom filter's FileTrailer section).  I have a version that isolates the 
KeyComparators from the KVComparators; I am still cleaning it up.

> Cleanup Key/KV/Meta/MetaKey Comparators
> ---
>
> Key: HBASE-9247
> URL: https://issues.apache.org/jira/browse/HBASE-9247
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
>
> HBASE-9164 converted KVComparator's KeyCompare#compare guts from one that 
> assumed a contiguous array backing a KV to one that used the Cell interface 
> which doesn't have this assumption.
> There is now duplicate code that should be cleaned up.



[jira] [Updated] (HBASE-9274) After HBASE-8408 applied, temporary test files are being left in /tmp/hbase-

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9274:
--

Status: Open  (was: Patch Available)

> After HBASE-8408 applied, temporary test files are being left in 
> /tmp/hbase-
> --
>
> Key: HBASE-9274
> URL: https://issues.apache.org/jira/browse/HBASE-9274
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.98.0, 0.95.3
>
> Attachments: hbase-9274.patch, hbase-9274.v2.patch
>
>
> Some of our jenkins CI machines have been failing out with /tmp/hbase-
> This can be shown by executing the following command before and after the 
> namespaces patch.
> {code}
> # several tests are dropping stuff in the archive dir, just pick one
> mvn clean test -Dtest=TestEncodedSeekers
> find /tmp/hbase-jon/hbase/
> {code}
> /tmp/hbase-jon after test run before patch applied
> {code}
> $ find /tmp/hbase-jon/
> /tmp/hbase-jon/
> /tmp/hbase-jon/local
> /tmp/hbase-jon/local/jars
> {code}
> after namespaces patch applied
> {code}
> /tmp/hbase-jon/
> /tmp/hbase-jon/local
> /tmp/hbase-jon/local/jars
> /tmp/hbase-jon/hbase
> /tmp/hbase-jon/hbase/.archive
> /tmp/hbase-jon/hbase/.archive/.data
> /tmp/hbase-jon/hbase/.archive/.data/default
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/8e76a87806b94483851158366f7d5c17
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/494c07dbf08940749696bb0f9278401e
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/.8e76a87806b94483851158366f7d5c1
> 7.crc 
> 
> {code}



[jira] [Updated] (HBASE-9274) After HBASE-8408 applied, temporary test files are being left in /tmp/hbase-

2013-08-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9274:
--

Status: Patch Available  (was: Open)

> After HBASE-8408 applied, temporary test files are being left in 
> /tmp/hbase-
> --
>
> Key: HBASE-9274
> URL: https://issues.apache.org/jira/browse/HBASE-9274
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.95.2
>Reporter: Jonathan Hsieh
>Assignee: Jonathan Hsieh
> Fix For: 0.98.0, 0.95.3
>
> Attachments: hbase-9274.patch, hbase-9274.v2.patch
>
>
> Some of our jenkins CI machines have been failing out with /tmp/hbase-
> This can be shown by executing the following command before and after the 
> namespaces patch.
> {code}
> # several tests are dropping stuff in the archive dir, just pick one
> mvn clean test -Dtest=TestEncodedSeekers
> find /tmp/hbase-jon/hbase/
> {code}
> /tmp/hbase-jon after test run before patch applied
> {code}
> $ find /tmp/hbase-jon/
> /tmp/hbase-jon/
> /tmp/hbase-jon/local
> /tmp/hbase-jon/local/jars
> {code}
> after namespaces patch applied
> {code}
> /tmp/hbase-jon/
> /tmp/hbase-jon/local
> /tmp/hbase-jon/local/jars
> /tmp/hbase-jon/hbase
> /tmp/hbase-jon/hbase/.archive
> /tmp/hbase-jon/hbase/.archive/.data
> /tmp/hbase-jon/hbase/.archive/.data/default
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/8e76a87806b94483851158366f7d5c17
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/494c07dbf08940749696bb0f9278401e
> /tmp/hbase-jon/hbase/.archive/.data/default/encodedSeekersTable/c6ec51dca2a9fe4c2279006345d62b35/encodedSeekersCF/.8e76a87806b94483851158366f7d5c1
> 7.crc 
> 
> {code}



[jira] [Commented] (HBASE-9272) A simple parallel, unordered scanner

2013-08-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747201#comment-13747201
 ] 

Anoop Sam John commented on HBASE-9272:
---

Encouraging numbers, Lars.
Can we have a test with many more regions?  The number of threads should be well 
below the number of regions.  Can the client be a little more intelligent so as 
to distribute the load to all the RSs at a given point in time?  Say there are 10 
threads on the client side, 10 RSs, and 1000 regions. At a given point in time, 
there is a chance that all 10 client threads are contacting regions on the same 
RS, so all the other RSs will be idle for that time. Maybe for a beginning a 
simple patch would be enough; these are improvements we can also try later.  
Good one, Lars.
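
A hedged sketch of one way the client could spread the work (the interleaving 
logic and names are invented; HTable.getRegionLocations() is assumed to be 
available in the client API being used):

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.HTable;

class RegionInterleaveSketch {
  /**
   * Orders regions so that consecutive tasks hit different region servers,
   * which keeps all RSs busy even with a small client-side thread pool.
   */
  static List<HRegionInfo> interleaveByServer(HTable table) throws Exception {
    NavigableMap<HRegionInfo, ServerName> locations = table.getRegionLocations();
    Map<ServerName, List<HRegionInfo>> byServer = new HashMap<ServerName, List<HRegionInfo>>();
    for (Map.Entry<HRegionInfo, ServerName> e : locations.entrySet()) {
      List<HRegionInfo> list = byServer.get(e.getValue());
      if (list == null) {
        list = new ArrayList<HRegionInfo>();
        byServer.put(e.getValue(), list);
      }
      list.add(e.getKey());
    }
    // round-robin across servers: take one region from each server in turn
    List<HRegionInfo> ordered = new ArrayList<HRegionInfo>(locations.size());
    boolean added = true;
    for (int i = 0; added; i++) {
      added = false;
      for (List<HRegionInfo> list : byServer.values()) {
        if (i < list.size()) {
          ordered.add(list.get(i));
          added = true;
        }
      }
    }
    return ordered;
  }
}
{code}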

> A simple parallel, unordered scanner
> 
>
> Key: HBASE-9272
> URL: https://issues.apache.org/jira/browse/HBASE-9272
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Attachments: ParallelClientScanner.java, ParallelClientScanner.java
>
>
> The contract of ClientScanner is to return rows in sort order. That limits 
> the order in which regions can be scanned.
> I propose a simple ParallelScanner that does not have this requirement and 
> queries regions in parallel, returning whatever gets returned first.
> This is generally useful for scans that filter a lot of data on the server, 
> or in cases where the client can react very quickly to the returned data.
> I have a simple prototype (it doesn't do error handling right, and might be a 
> bit heavy on the synchronization side - it uses a BlockingQueue to hand data 
> between the client using the scanner and the threads doing the scanning; it 
> also could potentially starve some scanners long enough for them to time out 
> at the server).
> On the plus side, it's only about 130 lines of code. :)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9126) Make HFile MIN VERSION as 2

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747198#comment-13747198
 ] 

stack commented on HBASE-9126:
--

The tool should be able to deal with it, I'd say.

> Make HFile MIN VERSION as 2
> ---
>
> Key: HBASE-9126
> URL: https://issues.apache.org/jira/browse/HBASE-9126
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.98.0, 0.95.2
>
> Attachments: HBASE-9126.patch, HBASE-9126_V2.patch
>
>
> Removed the HFile V1 support from version>95. Can we make the min supported 
> version 2? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9248) Place holders for tags in 0.96 to accommodate tags in 0.98

2013-08-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747196#comment-13747196
 ] 

ramkrishna.s.vasudevan commented on HBASE-9248:
---

bq. Doesn't seem you agree =)
No Matt, I agree.  If you look at the bigger patches that I have in RB, I have 
updated the name to be plural everywhere.
Here it was not done (I missed it because a 'max' comes before it).  I will 
surely change this.  Thanks a lot for the reviews.

> Place holders for tags in 0.96 to accommodate tags in 0.98
> --
>
> Key: HBASE-9248
> URL: https://issues.apache.org/jira/browse/HBASE-9248
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.2
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 0.98.0, 0.95.3
>
> Attachments: HBASE-9248_2.patch, HBASE-9248_2.patch, 
> HBASE-9248_3.patch, HBASE-9248.patch
>
>
> This JIRA is focused on adding placeholders for tags in 0.96 so that 0.98 
> would be easier to accommodate tags.
> Changes would be in WAL Encoders/decoders with compression.
> PrefixTree codec.  (Had an offline discussion with Matt Corgan on this).
> If anything else would be needed I can add it in this JIRA. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9126) Make HFile MIN VERSION as 2

2013-08-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747197#comment-13747197
 ] 

Anoop Sam John commented on HBASE-9126:
---

Can we fix it in the tool?  This was done as part of cleanup in FFT. Can you 
show us the change needed in the tool to fix it?  Thanks for reporting, Himanshu.

> Make HFile MIN VERSION as 2
> ---
>
> Key: HBASE-9126
> URL: https://issues.apache.org/jira/browse/HBASE-9126
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.98.0, 0.95.2
>
> Attachments: HBASE-9126.patch, HBASE-9126_V2.patch
>
>
> Removed the HFile V1 support from version>95. Can we make the min supported 
> version 2? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9126) Make HFile MIN VERSION as 2

2013-08-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747194#comment-13747194
 ] 

ramkrishna.s.vasudevan commented on HBASE-9126:
---

How are you planning to fix the tool? The V1 detector will not be able to read 
the trailer version I suppose.

> Make HFile MIN VERSION as 2
> ---
>
> Key: HBASE-9126
> URL: https://issues.apache.org/jira/browse/HBASE-9126
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.98.0, 0.95.2
>
> Attachments: HBASE-9126.patch, HBASE-9126_V2.patch
>
>
> Removed the HFile V1 support from version>95. Can we make the min supported 
> version 2? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747193#comment-13747193
 ] 

Hudson commented on HBASE-9285:
---

SUCCESS: Integrated in hbase-0.95-on-hadoop2 #263 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/263/])
HBASE-9285 User who created table cannot scan the same table due to 
Insufficient permissions (Ted Yu) (tedyu: rev 1516345)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java


> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here were the related entries in the hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747185#comment-13747185
 ] 

Hudson commented on HBASE-9285:
---

SUCCESS: Integrated in hbase-0.95 #483 (See 
[https://builds.apache.org/job/hbase-0.95/483/])
HBASE-9285 User who created table cannot scan the same table due to 
Insufficient permissions (Ted Yu) (tedyu: rev 1516345)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlLists.java


> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here were the related entries in the hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9267) StochasticLoadBalancer goes over its processing time limit

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747186#comment-13747186
 ] 

Hudson commented on HBASE-9267:
---

SUCCESS: Integrated in hbase-0.95 #483 (See 
[https://builds.apache.org/job/hbase-0.95/483/])
HBASE-9267 Change region load in load balancer to use circular array. (eclark: 
rev 1516334)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java


> StochasticLoadBalancer goes over its processing time limit
> --
>
> Key: HBASE-9267
> URL: https://issues.apache.org/jira/browse/HBASE-9267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.3
>
> Attachments: HBASE-9267-0.patch, HBASE-9267-1.patch, 
> HBASE-9267-2.patch, HBASE-9267-3.patch, HBASE-9267-4.patch
>
>
> In trying out 0.95.2, I left it running over the weekend (8 RS, average load 
> between 12 and 3 regions) and right now the balancer runs for 12 mins:
> bq. 2013-08-19 21:54:45,534 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 777309ms, and did not find anything with a computed cost less than 
> 36.32576937689094
> It seems it slowly crept up there, yesterday it was doing:
> bq. 2013-08-18 20:53:17,232 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 257374ms, and did not find anything with a computed cost less than 
> 36.3251082542424
> And originally it was doing 1 minute.
> In the jstack I see a 1000 of these and jstack doesn't want to show me the 
> whole thing:
> bq.  at java.util.SubList$1.nextIndex(AbstractList.java:713)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8348) Polish the migration to 0.96

2013-08-21 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747179#comment-13747179
 ] 

Himanshu Vashishtha commented on HBASE-8348:


Thanks for the reviews, Stack.

bq. ... what would a corrupt file be? Are you going to freak folks out if they 
see this message? Does the v1 checker print out the instances of v1 hfiles?
A corrupt file is a store file which has an unknown major version (neither 1 
nor 2). Yes, the tool prints out all such files, and it also prints V1 files. I 
will add to the description what corrupt means. I think it is important to let 
the user know if there is any such file.
{code}
There are some HFileV1, or corrupt files (files with incorrect major version). 
Please look at the log for a list of such files.
{code}
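
To make the "corrupt" wording concrete, here is a minimal, hypothetical sketch 
of the classification the tool reports (the trailer read that produces the 
major version is elided; the names below are made up for illustration and are 
not the HFileV1Detector's actual code):
{code}
// Hypothetical illustration only, not the HFileV1Detector implementation.
public class HFileVersionClassification {
  enum FileClass { V1, V2, CORRUPT }

  // majorVersion would come from the store file's trailer.
  static FileClass classify(int majorVersion) {
    if (majorVersion == 1) {
      return FileClass.V1;       // must be rewritten before upgrading to 0.96
    } else if (majorVersion == 2) {
      return FileClass.V2;       // fine as-is
    } else {
      return FileClass.CORRUPT;  // unknown major version; listed in the log
    }
  }
}
{code}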

Sure, I have a patch to replace all System.out.xxx calls (in the UpgradeTo96 
and HFileV1Detector tools) with LOG.xxx. 

bq. Should you proceed if the first has a non-zero result? 
The namespace upgrade tool returns 0 unless there is an error (which it throws 
to the caller). So this would work too, but let me add the check.
bq. And why not just do return upgradeZNodes instead of catching the result in 
a res and then doing the return res in a new statement.
Will do.

bq. Can't you get options to print this out for you anyways?
It is used, after the description of the tool. Okay, I will make the help 
consistent with the other tools by just printing the options. More description 
can be added in the ref guide.

I think we should get this working patch in, and then do refinements based on 
user feedback (before 0.96 comes out). I will upload a patch tonight, with a 
log of its run.
BTW, I figured out that HBASE-9126 removed the V1 support altogether, and now 
we get an exception if the major version is 1. Could that cleanup be done in 
0.98 (or another dot release), in case we tell users to go through 0.96.0 
before doing any upgrade from 0.94.x? It is just a suggestion, though. I am 
okay otherwise, and will fix the HFileV1 tool. Please let me know. Thanks.

> Polish the migration to 0.96
> 
>
> Key: HBASE-8348
> URL: https://issues.apache.org/jira/browse/HBASE-8348
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.0
>Reporter: Jean-Daniel Cryans
>Assignee: rajeshbabu
>Priority: Blocker
> Fix For: 0.96.0
>
> Attachments: HBASE-8348-approach-2.patch, 
> HBASE-8348-approach-2-v2.1.patch, HBASE-8348-approach-2-v2.2.patch, 
> HBASE-8348-approach-3.patch, HBASE-8348_trunk.patch, 
> HBASE-8348_trunk_v2.patch, HBASE-8348_trunk_v3.patch
>
>
> Currently, migration works but there are still a couple of rough edges:
>  - HBASE-8045 finished the .META. migration but didn't remove ROOT, so it's 
> still on the filesystem.
>  - Data in ZK needs to be removed manually. Either we fix up the data in ZK 
> or we delete it ourselves.
>  - TestMetaMigrationRemovingHTD has a testMetaUpdatedFlagInROOT method, but 
> ROOT is gone now.
> Elliott was also mentioning that we could have "hbase migrate" do the HFileV1 
> checks, clear ZK, remove ROOT, etc.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9267) StochasticLoadBalancer goes over its processing time limit

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747160#comment-13747160
 ] 

Hudson commented on HBASE-9267:
---

FAILURE: Integrated in HBase-TRUNK #4422 (See 
[https://builds.apache.org/job/HBase-TRUNK/4422/])
HBASE-9267 Change region load in load balancer to use circular array. (eclark: 
rev 1516335)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java


> StochasticLoadBalancer goes over its processing time limit
> --
>
> Key: HBASE-9267
> URL: https://issues.apache.org/jira/browse/HBASE-9267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.3
>
> Attachments: HBASE-9267-0.patch, HBASE-9267-1.patch, 
> HBASE-9267-2.patch, HBASE-9267-3.patch, HBASE-9267-4.patch
>
>
> In trying out 0.95.2, I left it running over the weekend (8 RS, average load 
> between 12 and 3 regions) and right now the balancer runs for 12 mins:
> bq. 2013-08-19 21:54:45,534 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 777309ms, and did not find anything with a computed cost less than 
> 36.32576937689094
> It seems it slowly crept up there, yesterday it was doing:
> bq. 2013-08-18 20:53:17,232 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 257374ms, and did not find anything with a computed cost less than 
> 36.3251082542424
> And originally it was doing 1 minute.
> In the jstack I see a 1000 of these and jstack doesn't want to show me the 
> whole thing:
> bq.  at java.util.SubList$1.nextIndex(AbstractList.java:713)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9263) Add initialize method to load balancer interface

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747161#comment-13747161
 ] 

Hudson commented on HBASE-9263:
---

FAILURE: Integrated in HBase-TRUNK #4422 (See 
[https://builds.apache.org/job/HBase-TRUNK/4422/])
HBASE-9263 Add initialize method to load balancer interface (stack: rev 1516310)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java


> Add initialize method to load balancer interface
> 
>
> Key: HBASE-9263
> URL: https://issues.apache.org/jira/browse/HBASE-9263
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Francis Liu
>Assignee: Francis Liu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: HBASE-9263.patch
>
>
> The load balancer has two methods setMasterServices and setConf that need to 
> be called prior to it being functional. Some balancers will need to go 
> through an initialization procedure once these methods have been called. An 
> initialize() method would be helpful in this regard.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747162#comment-13747162
 ] 

Hudson commented on HBASE-8960:
---

FAILURE: Integrated in HBase-TRUNK #4422 (See 
[https://builds.apache.org/job/HBase-TRUNK/4422/])
HBASE-8960: TestDistributedLogSplitting fails sometime - stablize 
testDisallowWritesInRecovering - addendum (jeffreyz: rev 1516328)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
> --
>
> Key: HBASE-8960
> URL: https://issues.apache.org/jira/browse/HBASE-8960
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
> hbase-8960-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering.patch, hbase-8960.patch
>
>
> http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
> {noformat}
> java.lang.AssertionError: expected:<1000> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9267) StochasticLoadBalancer goes over its processing time limit

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747156#comment-13747156
 ] 

Hudson commented on HBASE-9267:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #691 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/691/])
HBASE-9267 Change region load in load balancer to use circular array. (eclark: 
rev 1516335)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java


> StochasticLoadBalancer goes over its processing time limit
> --
>
> Key: HBASE-9267
> URL: https://issues.apache.org/jira/browse/HBASE-9267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.3
>
> Attachments: HBASE-9267-0.patch, HBASE-9267-1.patch, 
> HBASE-9267-2.patch, HBASE-9267-3.patch, HBASE-9267-4.patch
>
>
> In trying out 0.95.2, I left it running over the weekend (8 RS, average load 
> between 12 and 3 regions) and right now the balancer runs for 12 mins:
> bq. 2013-08-19 21:54:45,534 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 777309ms, and did not find anything with a computed cost less than 
> 36.32576937689094
> It seems it slowly crept up there, yesterday it was doing:
> bq. 2013-08-18 20:53:17,232 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 257374ms, and did not find anything with a computed cost less than 
> 36.3251082542424
> And originally it was doing 1 minute.
> In the jstack I see a 1000 of these and jstack doesn't want to show me the 
> whole thing:
> bq.  at java.util.SubList$1.nextIndex(AbstractList.java:713)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747157#comment-13747157
 ] 

Hudson commented on HBASE-8960:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #691 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/691/])
HBASE-8960: TestDistributedLogSplitting fails sometime - stablize 
testDisallowWritesInRecovering - addendum (jeffreyz: rev 1516328)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
> --
>
> Key: HBASE-8960
> URL: https://issues.apache.org/jira/browse/HBASE-8960
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
> hbase-8960-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering.patch, hbase-8960.patch
>
>
> http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
> {noformat}
> java.lang.AssertionError: expected:<1000> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9267) StochasticLoadBalancer goes over its processing time limit

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747123#comment-13747123
 ] 

Hudson commented on HBASE-9267:
---

FAILURE: Integrated in hbase-0.95-on-hadoop2 #262 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/262/])
HBASE-9267 Change region load in load balancer to use circular array. (eclark: 
rev 1516334)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestStochasticLoadBalancer.java


> StochasticLoadBalancer goes over its processing time limit
> --
>
> Key: HBASE-9267
> URL: https://issues.apache.org/jira/browse/HBASE-9267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.3
>
> Attachments: HBASE-9267-0.patch, HBASE-9267-1.patch, 
> HBASE-9267-2.patch, HBASE-9267-3.patch, HBASE-9267-4.patch
>
>
> In trying out 0.95.2, I left it running over the weekend (8 RS, average load 
> between 12 and 3 regions) and right now the balancer runs for 12 mins:
> bq. 2013-08-19 21:54:45,534 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 777309ms, and did not find anything with a computed cost less than 
> 36.32576937689094
> It seems it slowly crept up there, yesterday it was doing:
> bq. 2013-08-18 20:53:17,232 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 257374ms, and did not find anything with a computed cost less than 
> 36.3251082542424
> And originally it was doing 1 minute.
> In the jstack I see a 1000 of these and jstack doesn't want to show me the 
> whole thing:
> bq.  at java.util.SubList$1.nextIndex(AbstractList.java:713)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747125#comment-13747125
 ] 

Hudson commented on HBASE-8960:
---

FAILURE: Integrated in hbase-0.95-on-hadoop2 #262 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/262/])
HBASE-8960: TestDistributedLogSplitting fails sometime - stablize 
testDisallowWritesInRecovering - addendum (jeffreyz: rev 1516329)
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
> --
>
> Key: HBASE-8960
> URL: https://issues.apache.org/jira/browse/HBASE-8960
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
> hbase-8960-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering.patch, hbase-8960.patch
>
>
> http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
> {noformat}
> java.lang.AssertionError: expected:<1000> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9263) Add initialize method to load balancer interface

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747124#comment-13747124
 ] 

Hudson commented on HBASE-9263:
---

FAILURE: Integrated in hbase-0.95-on-hadoop2 #262 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/262/])
HBASE-9263 Add initialize method to load balancer interface (stack: rev 1516309)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java


> Add initialize method to load balancer interface
> 
>
> Key: HBASE-9263
> URL: https://issues.apache.org/jira/browse/HBASE-9263
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Francis Liu
>Assignee: Francis Liu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: HBASE-9263.patch
>
>
> The load balancer has two methods setMasterServices and setConf that need to 
> be called prior to it being functional. Some balancers will need to go 
> through an initialization procedure once these methods have been called. An 
> initialize() method would be helpful in this regard.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9263) Add initialize method to load balancer interface

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747116#comment-13747116
 ] 

Hudson commented on HBASE-9263:
---

SUCCESS: Integrated in hbase-0.95 #482 (See 
[https://builds.apache.org/job/hbase-0.95/482/])
HBASE-9263 Add initialize method to load balancer interface (stack: rev 1516309)
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java


> Add initialize method to load balancer interface
> 
>
> Key: HBASE-9263
> URL: https://issues.apache.org/jira/browse/HBASE-9263
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Francis Liu
>Assignee: Francis Liu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: HBASE-9263.patch
>
>
> The load balancer has two methods setMasterServices and setConf that need to 
> be called prior to it being functional. Some balancers will need to go 
> through an initialization procedure once these methods have been called. An 
> initialize() method would be helpful in this regard.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9210) "hbase shell -d" doesn't print out exception stack trace

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747115#comment-13747115
 ] 

Hudson commented on HBASE-9210:
---

SUCCESS: Integrated in hbase-0.95 #482 (See 
[https://builds.apache.org/job/hbase-0.95/482/])
HBASE-9210: hbase shell -d doesn't print out exception stack trace (jeffreyz: 
rev 1516298)
* /hbase/branches/0.95/bin/hirb.rb
* /hbase/branches/0.95/bin/region_mover.rb
* /hbase/branches/0.95/bin/region_status.rb
* /hbase/branches/0.95/hbase-server/src/main/ruby/shell/commands.rb


> "hbase shell  -d" doesn't print out exception stack trace
> -
>
> Key: HBASE-9210
> URL: https://issues.apache.org/jira/browse/HBASE-9210
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.95.2
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: hbase-9210.patch, hbase-9210-v1.patch
>
>
> When starting the shell with "-d" specified, the following line doesn't print 
> anything because debug isn't set when the shell is constructed.
> {code}
> "Backtrace: #{e.backtrace.join("\n   ")}" if debug
> {code}
> In addition, the existing code prints the outermost exception, while we 
> normally need the root-cause exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747117#comment-13747117
 ] 

Hudson commented on HBASE-8960:
---

SUCCESS: Integrated in hbase-0.95 #482 (See 
[https://builds.apache.org/job/hbase-0.95/482/])
HBASE-8960: TestDistributedLogSplitting fails sometime - stablize 
testDisallowWritesInRecovering - addendum (jeffreyz: rev 1516329)
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
> --
>
> Key: HBASE-8960
> URL: https://issues.apache.org/jira/browse/HBASE-8960
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
> hbase-8960-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering.patch, hbase-8960.patch
>
>
> http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
> {noformat}
> java.lang.AssertionError: expected:<1000> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9296) Update to bootstrap 3.0

2013-08-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9296:
-

Component/s: UI

> Update to bootstrap 3.0
> ---
>
> Key: HBASE-9296
> URL: https://issues.apache.org/jira/browse/HBASE-9296
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 0.98.0, 0.95.2
>Reporter: Elliott Clark
>Priority: Minor
>
> There was a major revision of Bootstrap CSS; we should take the upgrade, as 
> it makes responsive layouts much easier in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9296) Update to bootstrap 3.0

2013-08-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9296:
-

Affects Version/s: 0.98.0
   0.95.2

> Update to bootstrap 3.0
> ---
>
> Key: HBASE-9296
> URL: https://issues.apache.org/jira/browse/HBASE-9296
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.95.2
>Reporter: Elliott Clark
>Priority: Minor
>
> There was a major revision of Bootstrap CSS; we should take the upgrade, as 
> it makes responsive layouts much easier in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9296) Update to bootstrap 3.0

2013-08-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9296:
-

Priority: Minor  (was: Major)

> Update to bootstrap 3.0
> ---
>
> Key: HBASE-9296
> URL: https://issues.apache.org/jira/browse/HBASE-9296
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Priority: Minor
>
> There was a major revision of Bootstrap CSS; we should take the upgrade, as 
> it makes responsive layouts much easier in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9296) Update to bootstrap 3.0

2013-08-21 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-9296:


 Summary: Update to bootstrap 3.0
 Key: HBASE-9296
 URL: https://issues.apache.org/jira/browse/HBASE-9296
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark


There was a major revision of Bootstrap CSS; we should take the upgrade, as it 
makes responsive layouts much easier in the future.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9283) Struct and StructIterator should properly handle trailing nulls

2013-08-21 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9283:


Fix Version/s: 0.95.3
   0.96.0

> Struct and StructIterator should properly handle trailing nulls
> ---
>
> Key: HBASE-9283
> URL: https://issues.apache.org/jira/browse/HBASE-9283
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Nick Dimiduk
> Fix For: 0.96.0, 0.95.3
>
>
> For a composite row key, Phoenix strips off trailing null column values in 
> the row key. The reason this is important is that then new nullable row key 
> columns can be added to a schema without requiring any data upgrade to 
> existing rows. Otherwise, adding new row key columns to the end of a schema 
> becomes extremely cumbersome, as you'd need to delete all existing rows and 
> add them back with a row key that includes a null value.
> Rather than Phoenix needing to modify the iteration code everywhere (as 
> [~ndimiduk] outlined here: 
> https://issues.apache.org/jira/browse/HBASE-8693?focusedCommentId=13744499&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13744499),
>  it'd be better if StructIterator handled this out-of-the-box. Otherwise, if 
> Phoenix has to specialize this, we'd lose the interop piece which is the 
> justification for switching our type system to this new one in the first 
> place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9283) Struct and StructIterator should properly handle trailing nulls

2013-08-21 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9283:


Affects Version/s: 0.95.2

> Struct and StructIterator should properly handle trailing nulls
> ---
>
> Key: HBASE-9283
> URL: https://issues.apache.org/jira/browse/HBASE-9283
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Nick Dimiduk
>
> For a composite row key, Phoenix strips off trailing null column values in 
> the row key. The reason this is important is that then new nullable row key 
> columns can be added to a schema without requiring any data upgrade to 
> existing rows. Otherwise, adding new row key columns to the end of a schema 
> becomes extremely cumbersome, as you'd need to delete all existing rows and 
> add them back with a row key that includes a null value.
> Rather than Phoenix needing to modify the iteration code everywhere (as 
> [~ndimiduk] outlined here: 
> https://issues.apache.org/jira/browse/HBASE-8693?focusedCommentId=13744499&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13744499),
>  it'd be better if StructIterator handled this out-of-the-box. Otherwise, if 
> Phoenix has to specialize this, we'd lose the interop piece which is the 
> justification for switching our type system to this new one in the first 
> place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HBASE-9283) Struct and StructIterator should properly handle trailing nulls

2013-08-21 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk reassigned HBASE-9283:
---

Assignee: Nick Dimiduk

> Struct and StructIterator should properly handle trailing nulls
> ---
>
> Key: HBASE-9283
> URL: https://issues.apache.org/jira/browse/HBASE-9283
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
> Fix For: 0.96.0, 0.95.3
>
>
> For a composite row key, Phoenix strips off trailing null column values in 
> the row key. The reason this is important is that then new nullable row key 
> columns can be added to a schema without requiring any data upgrade to 
> existing rows. Otherwise, adding new row key columns to the end of a schema 
> becomes extremely cumbersome, as you'd need to delete all existing rows and 
> add them back with a row key that includes a null value.
> Rather than Phoenix needing to modify the iteration code everywhere (as 
> [~ndimiduk] outlined here: 
> https://issues.apache.org/jira/browse/HBASE-8693?focusedCommentId=13744499&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13744499),
>  it'd be better if StructIterator handled this out-of-the-box. Otherwise, if 
> Phoenix has to specialize this, we'd lose the interop piece which is the 
> justification for switching our type system to this new one in the first 
> place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9281) user_permission command encounters NullPointerException

2013-08-21 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747104#comment-13747104
 ] 

Andrew Purtell commented on HBASE-9281:
---

If you step back, the help is wrong: how can you check user permissions for a 
table without the table name? Sure, the patch fixes the NPE, but it doesn't 
then error out. It should error out, and the help should be updated.
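
For illustration, a minimal sketch (hypothetical names, not the actual patch or 
the shell code) of the kind of up-front check being suggested, so a missing 
table name fails fast with a usage message instead of surfacing as an NPE deep 
inside AccessControlLists:
{code}
// Hypothetical illustration only; not HBase's actual shell or server code.
public class UserPermissionArgCheck {
  static void checkTableNameArgument(String tableName) {
    if (tableName == null || tableName.isEmpty()) {
      throw new IllegalArgumentException(
          "user_permission requires a table name, e.g. user_permission 'te'");
    }
  }
}
{code}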

> user_permission command encounters NullPointerException
> ---
>
> Key: HBASE-9281
> URL: https://issues.apache.org/jira/browse/HBASE-9281
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 9281-v1.txt
>
>
> As user hbase, user_permission command gave:
> {code}
> java.io.IOException: java.io.IOException
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.getUserTablePermissions(AccessControlLists.java:484)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1341)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.getUserPermissions(AccessControlProtos.java:9949)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10107)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5121)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3211)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26851)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2147)
>   ... 1 more
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:235)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1304)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:87)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:84)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:98)
>   at 
> org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:90)
>   at 
> org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:67)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$BlockingStub.getUserPermissions(AccessControlProtos.java:10304)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getUserPermissions(ProtobufUtil.java:1974)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
> java.io.IOException
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2185)
>   at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1854)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.security.access.AccessControlLists.getUserTablePermissions(AccessControlLists.java:484)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:1341)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.getUserPermissions(AccessControlProtos.java:9949)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10107)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5121)
>   at 
>

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747101#comment-13747101
 ] 

Hadoop QA commented on HBASE-8930:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12599208/HBASE-8930.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6836//console

This message is automatically generated.

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().obje

[jira] [Commented] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747095#comment-13747095
 ] 

Ted Yu commented on HBASE-9285:
---

Integrated to 0.95 and trunk.

Thanks for the review.

> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here was related entries in hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747093#comment-13747093
 ] 

Ted Yu commented on HBASE-9285:
---

The javadoc warnings are not related to my patch:
{code}
[WARNING] Javadoc Warnings
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterAdminProtos.java:33377:
 warning - @return tag cannot be used in method with void return type.
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/MasterAdminProtos.java:32381:
 warning - @return tag cannot be used in method with void return type.
{code}

> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here was related entries in hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747088#comment-13747088
 ] 

Hadoop QA commented on HBASE-9285:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12599291/9285.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/6835//console

This message is automatically generated.

> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN 

[jira] [Updated] (HBASE-9135) Upgrade hadoop 1 version to 1.2.1 which is stable

2013-08-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9135:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Upgrade hadoop 1 version to 1.2.1 which is stable
> -
>
> Key: HBASE-9135
> URL: https://issues.apache.org/jira/browse/HBASE-9135
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 9135-v1.txt
>
>
> Here is related discussion:
> http://search-hadoop.com/m/nA71y1kKHDm1/Hadoop+version+1.2.1+%2528stable%2529+released&subj=Re+ANNOUNCE+Hadoop+version+1+2+1+stable+released
> Older hadoop 1 artifacts would be phased out.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9126) Make HFile MIN VERSION as 2

2013-08-21 Thread Himanshu Vashishtha (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747082#comment-13747082
 ] 

Himanshu Vashishtha commented on HBASE-9126:


With this patch, HFileV1Detector gets an IAE for an HFileV1: if the version is 1, 
HFile.checkFormatVersion() now throws an IAE. Could this be reverted? If not, I am 
okay with fixing the tool.
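
For what it's worth, a minimal sketch of the second option (fixing the tool rather 
than reverting): let the detector treat the IAE from the strict version check as 
"older than V2" instead of failing. This is only an illustration of the idea; 
checkFormatVersion below is a local stand-in for the real HFile helper, not the 
actual method.

{code}
// Illustration only; checkFormatVersion here is a stand-in for the real HFile helper.
public final class HFileV1DetectorSketch {

  private static final int MIN_SUPPORTED_FORMAT_VERSION = 2; // post HBASE-9126

  /** Treats the IAE thrown by the strict check as "this is a pre-V2 (V1) file". */
  public static boolean looksLikeV1(int majorVersion) {
    try {
      checkFormatVersion(majorVersion);
      return false;                      // V2 or newer: nothing to report
    } catch (IllegalArgumentException iae) {
      return true;                       // pre-V2: flag it in the upgrade report
    }
  }

  private static void checkFormatVersion(int majorVersion) {
    if (majorVersion < MIN_SUPPORTED_FORMAT_VERSION) {
      throw new IllegalArgumentException("Invalid HFile version: " + majorVersion);
    }
  }

  public static void main(String[] args) {
    System.out.println(looksLikeV1(1)); // true
    System.out.println(looksLikeV1(2)); // false
  }
}
{code}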

> Make HFile MIN VERSION as 2
> ---
>
> Key: HBASE-9126
> URL: https://issues.apache.org/jira/browse/HBASE-9126
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.95.1
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 0.98.0, 0.95.2
>
> Attachments: HBASE-9126.patch, HBASE-9126_V2.patch
>
>
> Removed the HFile V1 support from version>95. We can make the min version 
> supported as 2? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747083#comment-13747083
 ] 

Lars Hofhansl commented on HBASE-8930:
--

At the least it would mean that you cannot filter on columns that are not in 
the Scan object, but that is already partially broken anyway (see HBASE-4364).
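
For readers following along, a small sketch of the pattern in question (standard 
0.94-era client API; the family and qualifier names are made up): the Scan only 
requests cf:a, while the filter keys off cf:b, a column that was never added to the 
Scan. Whether cf:b cells are ever presented to the filter is exactly what this 
change touches.

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterOutsideRequestedColumns {
  public static Scan buildScan() {
    Scan scan = new Scan();
    // Only cf:a is requested from the server...
    scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"));
    // ...but the filter makes its decision based on cf:b.
    scan.setFilter(new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("b"),
        CompareFilter.CompareOp.EQUAL, Bytes.toBytes("expected")));
    return scan;
  }
}
{code}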


> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits();
> //==READING=//
> Filter allwaysNextColFilter = new AllwaysNextColFilter();
> Get get = new Get(row);
> get.addColumn(cf, col1); //5581
> get.addColumn(cf, col1v); //5584
> get.addColumn(cf, col1g); //5586
> get.add

[jira] [Created] (HBASE-9295) Allow test-patch.sh to detect TreeMap keyed by byte[] which doesn't use proper comparator

2013-08-21 Thread Ted Yu (JIRA)
Ted Yu created HBASE-9295:
-

 Summary: Allow test-patch.sh to detect TreeMap keyed by byte[] 
which doesn't use proper comparator
 Key: HBASE-9295
 URL: https://issues.apache.org/jira/browse/HBASE-9295
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu


There were two recent bug fixes (HBASE-9285 and HBASE-9238) for the case where a 
TreeMap keyed by byte[] doesn't use the proper comparator:
{code}
new TreeMap()
{code}
test-patch.sh should be able to detect this situation and report accordingly.
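
For reference, the shape of the bug and of the fix (Bytes.BYTES_COMPARATOR is the 
lexicographic comparator HBase ships for byte[] keys; the map contents here are just 
an example):

{code}
import java.util.TreeMap;
import org.apache.hadoop.hbase.util.Bytes;

public class ByteArrayKeyedMap {
  public static void main(String[] args) {
    // Broken: byte[] does not implement Comparable, so the first put() throws
    // ClassCastException -- and even with a loose cast, equal-but-distinct arrays
    // would never match on lookup.
    // TreeMap<byte[], String> broken = new TreeMap<byte[], String>();

    // Correct: supply the lexicographic byte[] comparator.
    TreeMap<byte[], String> perms =
        new TreeMap<byte[], String>(Bytes.BYTES_COMPARATOR);
    perms.put(Bytes.toBytes("te"), "RWXCA");
    System.out.println(perms.get(Bytes.toBytes("te"))); // RWXCA, found by content
  }
}
{code}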

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747080#comment-13747080
 ] 

Lars Hofhansl commented on HBASE-8930:
--

Vasu and I had discussed this offline.

+1 on the approach. Would need to think through the implications for 0.94, but 
the current behavior is "surprising", so this would be a good fix.

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits();
> //==READING=//
> Filter allwaysNextColFilter = new AllwaysNextColFilter();
> Get get = new Get(row);
> get.addColumn(cf, col1); //5581
> get.addColumn(cf, col1v); //5584
> get.addColumn(cf, col

[jira] [Updated] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-9285:
---

Due Date: 21/Aug/13

> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here was related entries in hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747068#comment-13747068
 ] 

Ted Yu commented on HBASE-8930:
---

I like the matrix following STEP 4 explaining combinations of FilterResponse 
and ColumnChecker values.

For ScanWildcardColumnTracker.java, the license header formatting is off.

For TestInvocationRecordFilter, the license header is missing.
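
For reference, the standard ASF header that TestInvocationRecordFilter would need 
(and that ScanWildcardColumnTracker.java should carry with the usual formatting):

{code}
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
{code}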

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits();
> //==READING=//
> Filter allwaysNextColFilter = new AllwaysNextColFilter();
> Get get = new Get(row);
> get.addColumn(cf, col1); //5581
> get.addColumn(cf, col1v); 

[jira] [Created] (HBASE-9294) NPE in /rs-status during RS shutdown

2013-08-21 Thread Steve Loughran (JIRA)
Steve Loughran created HBASE-9294:
-

 Summary: NPE in /rs-status during RS shutdown
 Key: HBASE-9294
 URL: https://issues.apache.org/jira/browse/HBASE-9294
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.95.2
Reporter: Steve Loughran
Priority: Minor


While hitting reload to see when a kill-initiated RS shutdown would make the 
Web UI go away, I got a stack trace from an NPE

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9294) NPE in /rs-status during RS shutdown

2013-08-21 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747055#comment-13747055
 ] 

Steve Loughran commented on HBASE-9294:
---

{code}
java.lang.NullPointerException
at 
org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmplImpl.renderNoFlush(RSStatusTmplImpl.java:163)
at 
org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmpl.renderNoFlush(RSStatusTmpl.java:172)
at 
org.apache.hadoop.hbase.tmpl.regionserver.RSStatusTmpl.render(RSStatusTmpl.java:163)
at 
org.apache.hadoop.hbase.regionserver.RSStatusServlet.doGet(RSStatusServlet.java:49)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1077)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}
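
A guess at the shape of a fix, sketched without reference to the actual RSStatusTmpl 
internals (the state class below is hypothetical): the page needs to render a 
placeholder when the server state it dereferences has already been torn down, rather 
than NPE-ing.

{code}
// Hypothetical illustration only -- not the generated template code.
public final class StatusPageGuard {

  /** Stand-in for whatever per-regionserver state the template dereferences. */
  static final class RsState {
    final String serverName;
    RsState(String serverName) { this.serverName = serverName; }
  }

  public static String render(RsState state) {
    if (state == null || state.serverName == null) {
      return "Region server is shutting down; status unavailable.";
    }
    return "Status for " + state.serverName;
  }

  public static void main(String[] args) {
    System.out.println(render(new RsState("rs1,60020,1377000000000")));
    System.out.println(render(null)); // during shutdown: placeholder instead of an NPE
  }
}
{code}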

> NPE in /rs-status during RS shutdown
> 
>
> Key: HBASE-9294
> URL: https://issues.apache.org/jira/browse/HBASE-9294
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.95.2
>Reporter: Steve Loughran
>Priority: Minor
>
> While hitting reload to see when a kill-initiated RS shutdown would make the 
> Web UI go away, I got a stack trace from an NPE

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Federico Gaule (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747047#comment-13747047
 ] 

Federico Gaule commented on HBASE-8930:
---

Got my answers. Thanks

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits();
> //==READING=//
> Filter allwaysNextColFilter = new AllwaysNextColFilter();
> Get get = new Get(row);
> get.addColumn(cf, col1); //5581
> get.addColumn(cf, col1v); //5584
> get.addColumn(cf, col1g); //5586
> get.addColumn(cf, col2); //5591
> get.addColumn(cf, col2v); //5594
> get.addColumn(cf, col2g); //5596
> 
>  

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Federico Gaule (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747042#comment-13747042
 ] 

Federico Gaule commented on HBASE-8930:
---

[~vasu.pulip...@yellowshirt.com] You made my day. Hope to have the patch 
applied for the next release.
 


> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits();
> //==READING=//
> Filter allwaysNextColFilter = new AllwaysNextColFilter();
> Get get = new Get(row);
> get.addColumn(cf, col1); //5581
> get.addColumn(cf, col1v); //5584
> get.addColumn(cf, col1g); //5586
> get.addColumn(cf, col2); //5591
> get.addColumn(cf, 

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747041#comment-13747041
 ] 

Vasu Mariyala commented on HBASE-8930:
--

[~fgaule] Sorry, I didn't understand the question. Let me know if you found the 
answer already.

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints the KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits();
> //==READING=//
> Filter allwaysNextColFilter = new AllwaysNextColFilter();
> Get get = new Get(row);
> get.addColumn(cf, col1); //5581
> get.addColumn(cf, col1v); //5584
> get.addColumn(cf, col1g); //5586
> get.addColumn(cf, col2); //5591
> get.addColumn(cf, col2v); 

[jira] [Commented] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747039#comment-13747039
 ] 

Hudson commented on HBASE-9287:
---

SUCCESS: Integrated in HBase-0.94 #1123 (See 
[https://builds.apache.org/job/HBase-0.94/1123/])
HBASE-9287 TestCatalogTracker depends on the execution order (mbertozzi: rev 
1516290)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java


> TestCatalogTracker depends on the execution order
> -
>
> Key: HBASE-9287
> URL: https://issues.apache.org/jira/browse/HBASE-9287
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.98.0, 0.95.2, 0.94.11
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 0.98.0, 0.94.12, 0.96.0
>
> Attachments: HBASE-9287-trunk-v0.patch, HBASE-9287-v0.patch
>
>
> Some CatalogTracker tests don't delete the ROOT location.
> For example if testNoTimeoutWaitForRoot() runs before 
> testInterruptWaitOnMetaAndRoot() you get
> {code}
> junit.framework.AssertionFailedError: Expected: <null> but was: 
> example.org,1234,1377038834244
>   at junit.framework.Assert.fail(Assert.java:50)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertNull(Assert.java:237)
>   at junit.framework.Assert.assertNull(Assert.java:230)
>   at 
> org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747037#comment-13747037
 ] 

Hudson commented on HBASE-9287:
---

SUCCESS: Integrated in HBase-0.94-security #269 (See 
[https://builds.apache.org/job/HBase-0.94-security/269/])
HBASE-9287 TestCatalogTracker depends on the execution order (mbertozzi: rev 
1516290)
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java


> TestCatalogTracker depends on the execution order
> -
>
> Key: HBASE-9287
> URL: https://issues.apache.org/jira/browse/HBASE-9287
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.98.0, 0.95.2, 0.94.11
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 0.98.0, 0.94.12, 0.96.0
>
> Attachments: HBASE-9287-trunk-v0.patch, HBASE-9287-v0.patch
>
>
> Some CatalogTracker tests don't delete the ROOT location.
> For example if testNoTimeoutWaitForRoot() runs before 
> testInterruptWaitOnMetaAndRoot() you get
> {code}
> junit.framework.AssertionFailedError: Expected: <null> but was: 
> example.org,1234,1377038834244
>   at junit.framework.Assert.fail(Assert.java:50)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertNull(Assert.java:237)
>   at junit.framework.Assert.assertNull(Assert.java:230)
>   at 
> org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747033#comment-13747033
 ] 

Vasu Mariyala commented on HBASE-8930:
--

To simplify:

Without the fix, whenever a row doesn't contain a value for the next column of 
interest (say column1), the filter gets invoked for a column that does have a 
value and sorts after that next column of interest (column1).

Consider the scenario where a row contains values for columns a, c, d, e and f 
and the scan requests columns a, b, e and f. The filter is invoked for a, c 
(as no value for b is found and c is the column after b), e and f. Note that 
the filter is not invoked for d, because the invocation of the column matcher 
on the value of c results in the scanner being asked to seek to the next 
column of interest, which is e.
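
To make those extra invocations visible, here is a minimal sketch of a diagnostic 
filter along the lines of the AllwaysNextColFilter used in the description 
(assuming the 0.94 filter API; the class name and output are illustrative):

{code:title=LoggingNextColFilter.java (sketch)|borderStyle=solid}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;
import org.apache.hadoop.hbase.util.Bytes;

/**
 * Diagnostic filter: always asks the scanner to move on to the next column
 * and prints every qualifier it is evaluated against. Any qualifier printed
 * here that was not in the Get/Scan column set is an "extra" evaluation.
 */
public class LoggingNextColFilter extends FilterBase {
  @Override
  public ReturnCode filterKeyValue(KeyValue kv) {
    System.out.println("evaluated qualifier: "
        + Bytes.toStringBinary(kv.getQualifier()));
    return ReturnCode.INCLUDE_AND_NEXT_COL;
  }

  // Writable plumbing required for custom filters in 0.94; no state to serialize.
  public void write(DataOutput out) throws IOException { }
  public void readFields(DataInput in) throws IOException { }
}
{code}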

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes

[jira] [Updated] (HBASE-9267) StochasticLoadBalancer goes over its processing time limit

2013-08-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-9267:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed.  I'll look at the cost functions taking too long in another issue if 
it can be reproduced.

> StochasticLoadBalancer goes over its processing time limit
> --
>
> Key: HBASE-9267
> URL: https://issues.apache.org/jira/browse/HBASE-9267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Jean-Daniel Cryans
>Assignee: Elliott Clark
> Fix For: 0.98.0, 0.95.3
>
> Attachments: HBASE-9267-0.patch, HBASE-9267-1.patch, 
> HBASE-9267-2.patch, HBASE-9267-3.patch, HBASE-9267-4.patch
>
>
> While trying out 0.95.2, I left it running over the weekend (8 RS, average load 
> between 12 and 3 regions) and right now the balancer runs for 12 mins:
> bq. 2013-08-19 21:54:45,534 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 777309ms, and did not find anything with a computed cost less than 
> 36.32576937689094
> It seems it slowly crept up there, yesterday it was doing:
> bq. 2013-08-18 20:53:17,232 DEBUG 
> [jdec2hbase0403-1.vpc.cloudera.com,6,1376689696384-BalancerChore] 
> org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer: Could not 
> find a better load balance plan.  Tried 0 different configurations in 
> 257374ms, and did not find anything with a computed cost less than 
> 36.3251082542424
> And originally it was doing 1 minute.
> In the jstack I see a 1000 of these and jstack doesn't want to show me the 
> whole thing:
> bq.  at java.util.SubList$1.nextIndex(AbstractList.java:713)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Deleted] (HBASE-9293) add tests to partition filter JDO pushdown for like and make sure it works, or remove it

2013-08-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin deleted HBASE-9293:



> add tests to partition filter JDO pushdown for like and make sure it works, 
> or remove it
> 
>
> Key: HBASE-9293
> URL: https://issues.apache.org/jira/browse/HBASE-9293
> Project: HBase
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> There's a mailing list thread. Partition filtering w/JDO pushdown using LIKE 
> is not used by Hive due to client check (in PartitionPruner); after enabling 
> it seems to be broken. We need to fix and enable it, or remove it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-9293) add tests to partition filter JDO pushdown for like and make sure it works, or remove it

2013-08-21 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HBASE-9293:
---

 Summary: add tests to partition filter JDO pushdown for like and 
make sure it works, or remove it
 Key: HBASE-9293
 URL: https://issues.apache.org/jira/browse/HBASE-9293
 Project: HBase
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin


There's a mailing list thread. Partition filtering w/JDO pushdown using LIKE is 
not used by Hive due to client check (in PartitionPruner); after enabling it 
seems to be broken. We need to fix and enable it, or remove it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-8960:
-

Attachment: hbase-8960-fix-disallowWritesInRecovering-addendum.patch

The reworked one failed in the Jenkins build once. I applied the addendum. 
Sorry for the spam. 

> TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
> --
>
> Key: HBASE-8960
> URL: https://issues.apache.org/jira/browse/HBASE-8960
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
> hbase-8960-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering-addendum.patch, 
> hbase-8960-fix-disallowWritesInRecovering.patch, hbase-8960.patch
>
>
> http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
> {noformat}
> java.lang.AssertionError: expected:<1000> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9285:
--

Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here were the related entries in the hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Federico Gaule (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747002#comment-13747002
 ] 

Federico Gaule commented on HBASE-8930:
---

@Vasu, this is what I have been looking for for a long time. Summarizing: without 
the fix applied, if you don't narrow the columns using a filter (e.g. 
ColumnRangeFilter), the filter can end up being evaluated all the way to the last 
column of the row if the cells match it. Am I right?


> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf, col6, Bytes.toBytes((short) 3));
> hTable.put(put);
> put = new Put(row);
> put.add(cf, col1v, Bytes.toBytes((short) 10));
> put.add(cf, col2v, Bytes.toBytes((short) 10));
> put.add(cf, col3v, Bytes.toBytes((short) 10));
> put.add(cf, col4v, Bytes.toBytes((short) 10));
> put.add(cf, col5v, Bytes.toBytes((short) 10));
> put.add(cf, col6v, Bytes.toBytes((short) 10));
> hTable.put(put);
> hTable.flushCommits();
> //==READING=//
> Filter allwaysNextColFilter = new AllwaysNextColFilter();
> Get get = new Get(row);
> get.addColumn(cf, col1); //5581
>

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13747004#comment-13747004
 ] 

Vasu Mariyala commented on HBASE-8930:
--

The attached patch changes the operations in the ScanQueryMatcher to the 
following:

a) Check whether the current key value is present in the requested columns. This 
returns 'include' if the column is of interest, 'seek to next column' if the 
column is not present and there are more columns of interest after the current 
column, and 'seek to next row' if the column is not present and there are no more 
columns of interest. It does not do any version checks.

b) If the return value of (a) is 'include', it does the following:

  b.a) Calls the filterKeyValue method of the filter. If the return code asks 
not to include the key value, that return code is simply returned.

  b.b) If the return value of (b.a) is 'include' or 'include and seek next 
column', it calls checkVersions to check against the number of versions. 
The return value of (b.a) and the result of the version check are both taken 
into account when computing the final return code.

c) Otherwise (when (a) did not return 'include'), return the return value of (a). 
A rough sketch of this ordering follows below.
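
For readability, here is a toy, self-contained sketch of that ordering. It is not 
the patch itself; every type and method name below is made up purely to illustrate 
the control flow described in steps (a) through (c):

{code:title=MatchOrderSketch.java (illustrative only)|borderStyle=solid}
public class MatchOrderSketch {

  // Simplified stand-ins for the matcher and filter return codes.
  enum ColumnCheck { INCLUDE, SEEK_NEXT_COL, SEEK_NEXT_ROW }
  enum FilterCode  { INCLUDE, INCLUDE_AND_NEXT_COL, SKIP }

  interface ColumnTracker {
    ColumnCheck checkColumn(byte[] qualifier);   // (a) membership only, no version check
    ColumnCheck checkVersions(byte[] qualifier); // (b.b) min/max version accounting
  }

  interface KvFilter {
    FilterCode filterKeyValue(byte[] qualifier); // (b.a) the user filter
  }

  /** The order of operations described above, in one place. */
  static String match(byte[] qualifier, ColumnTracker columns, KvFilter filter) {
    ColumnCheck membership = columns.checkColumn(qualifier);   // step (a)
    if (membership != ColumnCheck.INCLUDE) {
      return membership.name();           // step (c): seek; the filter never sees this cell
    }
    FilterCode filterCode = filter.filterKeyValue(qualifier);  // step (b.a)
    if (filterCode == FilterCode.SKIP) {
      return filterCode.name();           // filter said no: return its code as-is
    }
    ColumnCheck versions = columns.checkVersions(qualifier);   // step (b.b)
    return filterCode.name() + " / " + versions.name();        // both results feed the final code
  }
}
{code}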

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_1));
> byte[] col1g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_6));
> byte[] col2g = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_6));
> byte[] col1v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_4));
> byte[] col2v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_4));
> byte[] col3v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_4));
> byte[] col4v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_4));
> byte[] col5v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_4));
> byte[] col6v = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 563, (byte) SUFFIX_4));
> // === INSERTION =//
> Put put = new Put(row);
> put.add(cf, col1, Bytes.toBytes((short) 1));
> put.add(cf, col2, Bytes.toBytes((short) 1));
> put.add(cf, col3, Bytes.toBytes((short) 3));
> put.add(cf, col4, Bytes.toBytes((short) 3));
> put.add(cf, col5, Bytes.toBytes((short) 3));
> put.add(cf

[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-08-21 Thread Vasu Mariyala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746983#comment-13746983
 ] 

Vasu Mariyala commented on HBASE-8930:
--

Scanning for the results in a region is done in the following way (note: I may 
have missed some of the conditions):

a) RegionScannerImpl has a store heap which contains the key value scanner objects 
for a region, such as the StoreScanner (which opens scanners across the memstore, 
the snapshot and all store files).

b) Based on a few configurations, the scanner is seeked to the start key of the 
request.

c) For each row, until the end is reached or the batch size is reached, it does 
the following:

   c.a) Calls filterRowKey on the filter object. If it returns true, it moves on 
to the next row.

   c.b) If the above call returns false, it does the following:

   c.b.a) Sets the row read in (c) as the current row.

   c.b.b) While the next key value is not null:

  c.b.b.a) Reads the next key value and ensures that it is greater than 
the previous key value.

  c.b.b.b) Calls the ScanQueryMatcher match method. It checks whether the 
current key value belongs to the same row as the one read in step (c). It calls 
the filter logic and then checks whether the key value corresponds to a column of 
interest and whether the number of versions returned matches the ones specified 
using min and max versions.

  c.b.b.c) Based on the return type, it takes the appropriate action. 
For 'include', the cell is included and it moves to the next key value. If it is 
'include and seek next column', the result is included and the scanner is asked 
to seek to the next column of interest. If it is 'include and seek next row', the 
result is included and the scanner is asked to seek to the next row. If it is 
'done', it returns. If it is 'seek next row', the scanner is asked to seek to the 
next row. If it is 'seek next column', the scanner is asked to seek to the next 
column of interest.


To illustrate the failing scenario mentioned in the example:

a) Once the key value in (c.b.b.b) is for column 5594 and the appropriate number 
of versions for that column have been read, the matcher returns 'include and seek 
next column'.

b) In (c.b.b.c), the scanner is asked to seek to the next column of interest, 
which is 5596. But since there is no value for that column, the scanner seeks to 
column 5601 (the one after 5596).

c) The loop in (c.b.b) repeats itself, and since the filtering is done before the 
actual check of which columns were requested, the filterKeyValue method is 
invoked for 5601. A small simulation of this seek follows below.
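
A tiny self-contained simulation of that seek (illustrative only, not HBase code; 
the column numbers are taken from the table in the description) shows how the 
scanner ends up handing 5601 to the filter:

{code:title=SeekLandsPastRequestedColumn.java (illustrative only)|borderStyle=solid}
import java.util.Arrays;
import java.util.TreeSet;

public class SeekLandsPastRequestedColumn {
  public static void main(String[] args) {
    // Columns persisted in the row vs. columns requested by the Get, per the example above.
    TreeSet<Integer> persisted =
        new TreeSet<Integer>(Arrays.asList(5581, 5584, 5591, 5594, 5601, 5604));
    TreeSet<Integer> requested =
        new TreeSet<Integer>(Arrays.asList(5581, 5584, 5586, 5591, 5594, 5596));

    int current = 5594;                                   // (a) 5594 fully read: include and seek next column
    Integer nextOfInterest = requested.higher(current);   // next requested column: 5596
    Integer landedOn = persisted.ceiling(nextOfInterest); // (b) no cell for 5596, so the seek lands on 5601
    System.out.println("asked to seek to " + nextOfInterest + ", landed on " + landedOn);
    // (c) without the fix, filterKeyValue() is now invoked for 5601 although it was never requested
  }
}
{code}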

> Filter evaluates KVs outside requested columns
> --
>
> Key: HBASE-8930
> URL: https://issues.apache.org/jira/browse/HBASE-8930
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.94.7
>Reporter: Federico Gaule
>Assignee: Vasu Mariyala
>Priority: Critical
>  Labels: filters, hbase, keyvalue
> Attachments: HBASE-8930.patch
>
>
> 1- Fill row with some columns
> 2- Get row with some columns less than universe - Use filter to print kvs
> 3- Filter prints not requested columns
> Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
> and prints KV's qualifier
> SUFFIX_0 = 0
> SUFFIX_1 = 1
> SUFFIX_4 = 4
> SUFFIX_6 = 6
> P= Persisted
> R= Requested
> E= Evaluated
> X= Returned
> | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 |...
> |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      |...
> |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      |...
> |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      |...
> |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
> {code:title=ExtraColumnTest.java|borderStyle=solid}
> @Test
> public void testFilter() throws Exception {
> Configuration config = HBaseConfiguration.create();
> config.set("hbase.zookeeper.quorum", "myZK");
> HTable hTable = new HTable(config, "testTable");
> byte[] cf = Bytes.toBytes("cf");
> byte[] row = Bytes.toBytes("row");
> byte[] col1 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 558, (byte) SUFFIX_1));
> byte[] col2 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 559, (byte) SUFFIX_1));
> byte[] col3 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 560, (byte) SUFFIX_1));
> byte[] col4 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 561, (byte) SUFFIX_1));
> byte[] col5 = new QualifierConverter().objectToByteArray(new 
> Qualifier((short) 562, (byte) SUFFIX_1));
> byte[] col6 = new QualifierConvert

[jira] [Commented] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746971#comment-13746971
 ] 

Hudson commented on HBASE-9287:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #690 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/690/])
HBASE-9287 TestCatalogTracker depends on the execution order (mbertozzi: rev 
1516292)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java


> TestCatalogTracker depends on the execution order
> -
>
> Key: HBASE-9287
> URL: https://issues.apache.org/jira/browse/HBASE-9287
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.98.0, 0.95.2, 0.94.11
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 0.98.0, 0.94.12, 0.96.0
>
> Attachments: HBASE-9287-trunk-v0.patch, HBASE-9287-v0.patch
>
>
> Some CatalogTracker tests don't delete the ROOT location.
> For example if testNoTimeoutWaitForRoot() runs before 
> testInterruptWaitOnMetaAndRoot() you get
> {code}
> junit.framework.AssertionFailedError: Expected: <null> but was: 
> example.org,1234,1377038834244
>   at junit.framework.Assert.fail(Assert.java:50)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertNull(Assert.java:237)
>   at junit.framework.Assert.assertNull(Assert.java:230)
>   at 
> org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9210) "hbase shell -d" doesn't print out exception stack trace

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746972#comment-13746972
 ] 

Hudson commented on HBASE-9210:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #690 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/690/])
HBASE-9210: hbase shell -d doesn't print out exception stack trace (jeffreyz: 
rev 1516293)
* /hbase/trunk/bin/hirb.rb
* /hbase/trunk/bin/region_mover.rb
* /hbase/trunk/bin/region_status.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands.rb


> "hbase shell  -d" doesn't print out exception stack trace
> -
>
> Key: HBASE-9210
> URL: https://issues.apache.org/jira/browse/HBASE-9210
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.95.2
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: hbase-9210.patch, hbase-9210-v1.patch
>
>
> When starting the shell with "-d" specified, the following line doesn't print 
> anything because debug isn't set when the shell is constructed.
> {code}
> "Backtrace: #{e.backtrace.join("\n   ")}" if debug
> {code}
> In addition, the existing code prints the outermost exception while we 
> normally need the root cause exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746974#comment-13746974
 ] 

Hudson commented on HBASE-8960:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #690 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/690/])
HBASE-8960: TestDistributedLogSplitting fails sometime - stablize 
testDisallowWritesInRecovering (jeffreyz: rev 1516248)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
> --
>
> Key: HBASE-8960
> URL: https://issues.apache.org/jira/browse/HBASE-8960
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
> hbase-8960-addendum.patch, hbase-8960-fix-disallowWritesInRecovering.patch, 
> hbase-8960.patch
>
>
> http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
> {noformat}
> java.lang.AssertionError: expected:<1000> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9263) Add initialize method to load balancer interface

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746975#comment-13746975
 ] 

Hudson commented on HBASE-9263:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #690 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/690/])
HBASE-9263 Add initialize method to load balancer interface (stack: rev 1516310)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/LoadBalancer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java


> Add initialize method to load balancer interface
> 
>
> Key: HBASE-9263
> URL: https://issues.apache.org/jira/browse/HBASE-9263
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Francis Liu
>Assignee: Francis Liu
> Fix For: 0.98.0, 0.96.0
>
> Attachments: HBASE-9263.patch
>
>
> The load balancer has two methods setMasterServices and setConf that needs to 
> be called prior to it being functional. Some balancers will need to go 
> through an initialization procedure once these methods have been called. An 
> initialize() method would be helpful in this regard.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9282) Minor logging cleanup; shorten logs, remove redundant info

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746973#comment-13746973
 ] 

Hudson commented on HBASE-9282:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #690 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/690/])
HBASE-9282 Minor logging cleanup; shorten logs, remove redundant info (stack: 
rev 1516287)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java
HBASE-9282 Minor logging cleanup; shorten logs, remove redundant info (stack: 
rev 1516267)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Threads.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java


> Minor logging cleanup; shorten logs, remove redundant info
> --
>
> Key: HBASE-9282
> URL: https://issues.apache.org/jira/browse/HBASE-9282
> Project: HBase
>  Issue Type: Task
>  Components: Usability
>Reporter: stack
>Assignee: stack
> Fix For: 0.96.0
>
> Attachments: 9282.addendum.txt, 9282.txt
>
>
> Minor log cleanup; trying to get it so hbase logs can be read on a laptop 
> screen w/o having to scroll right.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9289) hbase-assembly pom should use project.parent.basedir

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746970#comment-13746970
 ] 

Hudson commented on HBASE-9289:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #690 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/690/])
HBASE-9289 hbase-assembly pom should use project.parent.basedir (jxiang: rev 
1516260)
* /hbase/trunk/hbase-assembly/pom.xml


> hbase-assembly pom should use project.parent.basedir
> 
>
> Key: HBASE-9289
> URL: https://issues.apache.org/jira/browse/HBASE-9289
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.95.2
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.98.0, 0.96.0
>
> Attachments: trunk-9289.patch
>
>
> Currently, we have
> {noformat}
> ${project.build.directory}/../../target/cached_classpath.txt
> {noformat}
> It is more robust to use ${project.parent.basedir} instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746957#comment-13746957
 ] 

Hudson commented on HBASE-9287:
---

FAILURE: Integrated in hbase-0.95-on-hadoop2 #261 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/261/])
HBASE-9287 TestCatalogTracker depends on the execution order (mbertozzi: rev 
1516291)
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java


> TestCatalogTracker depends on the execution order
> -
>
> Key: HBASE-9287
> URL: https://issues.apache.org/jira/browse/HBASE-9287
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.98.0, 0.95.2, 0.94.11
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 0.98.0, 0.94.12, 0.96.0
>
> Attachments: HBASE-9287-trunk-v0.patch, HBASE-9287-v0.patch
>
>
> Some CatalogTracker tests don't delete the ROOT location.
> For example if testNoTimeoutWaitForRoot() runs before 
> testInterruptWaitOnMetaAndRoot() you get
> {code}
> junit.framework.AssertionFailedError: Expected: <null> but was: 
> example.org,1234,1377038834244
>   at junit.framework.Assert.fail(Assert.java:50)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertNull(Assert.java:237)
>   at junit.framework.Assert.assertNull(Assert.java:230)
>   at 
> org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746963#comment-13746963
 ] 

Hudson commented on HBASE-8960:
---

FAILURE: Integrated in HBase-TRUNK #4421 (See 
[https://builds.apache.org/job/HBase-TRUNK/4421/])
HBASE-8960: TestDistributedLogSplitting fails sometime - stablize 
testDisallowWritesInRecovering (jeffreyz: rev 1516248)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
> --
>
> Key: HBASE-8960
> URL: https://issues.apache.org/jira/browse/HBASE-8960
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
> hbase-8960-addendum.patch, hbase-8960-fix-disallowWritesInRecovering.patch, 
> hbase-8960.patch
>
>
> http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
> {noformat}
> java.lang.AssertionError: expected:<1000> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9210) "hbase shell -d" doesn't print out exception stack trace

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746958#comment-13746958
 ] 

Hudson commented on HBASE-9210:
---

FAILURE: Integrated in HBase-TRUNK #4421 (See 
[https://builds.apache.org/job/HBase-TRUNK/4421/])
HBASE-9210: hbase shell -d doesn't print out exception stack trace (jeffreyz: 
rev 1516293)
* /hbase/trunk/bin/hirb.rb
* /hbase/trunk/bin/region_mover.rb
* /hbase/trunk/bin/region_status.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands.rb


> "hbase shell  -d" doesn't print out exception stack trace
> -
>
> Key: HBASE-9210
> URL: https://issues.apache.org/jira/browse/HBASE-9210
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.95.2
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: hbase-9210.patch, hbase-9210-v1.patch
>
>
> When starting the shell with "-d" specified, the following line doesn't print 
> anything because debug isn't set when the shell is constructed.
> {code}
> "Backtrace: #{e.backtrace.join("\n   ")}" if debug
> {code}
> In addition, the existing code prints the outermost exception while we 
> normally need the root cause exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9289) hbase-assembly pom should use project.parent.basedir

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746954#comment-13746954
 ] 

Hudson commented on HBASE-9289:
---

FAILURE: Integrated in HBase-TRUNK #4421 (See 
[https://builds.apache.org/job/HBase-TRUNK/4421/])
HBASE-9289 hbase-assembly pom should use project.parent.basedir (jxiang: rev 
1516260)
* /hbase/trunk/hbase-assembly/pom.xml


> hbase-assembly pom should use project.parent.basedir
> 
>
> Key: HBASE-9289
> URL: https://issues.apache.org/jira/browse/HBASE-9289
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.95.2
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.98.0, 0.96.0
>
> Attachments: trunk-9289.patch
>
>
> Currently, we have
> {noformat}
> ${project.build.directory}/../../target/cached_classpath.txt
> {noformat}
> It is more robust to use ${project.parent.basedir} instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9282) Minor logging cleanup; shorten logs, remove redundant info

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746961#comment-13746961
 ] 

Hudson commented on HBASE-9282:
---

FAILURE: Integrated in HBase-TRUNK #4421 (See 
[https://builds.apache.org/job/HBase-TRUNK/4421/])
HBASE-9282 Minor logging cleanup; shorten logs, remove redundant info (stack: 
rev 1516287)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java
HBASE-9282 Minor logging cleanup; shorten logs, remove redundant info (stack: 
rev 1516267)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java
* 
/hbase/trunk/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Threads.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java


> Minor logging cleanup; shorten logs, remove redundant info
> --
>
> Key: HBASE-9282
> URL: https://issues.apache.org/jira/browse/HBASE-9282
> Project: HBase
>  Issue Type: Task
>  Components: Usability
>Reporter: stack
>Assignee: stack
> Fix For: 0.96.0
>
> Attachments: 9282.addendum.txt, 9282.txt
>
>
> Minor log cleanup; trying to get it so hbase logs can be read on a laptop 
> screen w/o having to scroll right.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9289) hbase-assembly pom should use project.parent.basedir

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746955#comment-13746955
 ] 

Hudson commented on HBASE-9289:
---

FAILURE: Integrated in hbase-0.95-on-hadoop2 #261 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/261/])
HBASE-9289 hbase-assembly pom should use project.parent.basedir (jxiang: rev 
1516262)
* /hbase/branches/0.95/hbase-assembly/pom.xml


> hbase-assembly pom should use project.parent.basedir
> 
>
> Key: HBASE-9289
> URL: https://issues.apache.org/jira/browse/HBASE-9289
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.95.2
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
> Fix For: 0.98.0, 0.96.0
>
> Attachments: trunk-9289.patch
>
>
> Currently, we have
> {noformat}
> ${project.build.directory}/../../target/cached_classpath.txt
> {noformat}
> It is more robust to use ${project.parent.basedir} instead.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9282) Minor logging cleanup; shorten logs, remove redundant info

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746960#comment-13746960
 ] 

Hudson commented on HBASE-9282:
---

FAILURE: Integrated in hbase-0.95-on-hadoop2 #261 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/261/])
HBASE-9282 Minor logging cleanup; shorten logs, remove redundant info (stack: 
rev 1516288)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java
HBASE-9282 Minor logging cleanup; shorten logs, remove redundant info (stack: 
rev 1516266)
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
/hbase/branches/0.95/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java
* 
/hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Threads.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
* 
/hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/OpenRegionHandler.java


> Minor logging cleanup; shorten logs, remove redundant info
> --
>
> Key: HBASE-9282
> URL: https://issues.apache.org/jira/browse/HBASE-9282
> Project: HBase
>  Issue Type: Task
>  Components: Usability
>Reporter: stack
>Assignee: stack
> Fix For: 0.96.0
>
> Attachments: 9282.addendum.txt, 9282.txt
>
>
> Minor log cleanup; trying to get it so hbase logs can be read on a laptop 
> screen w/o having to scroll right.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8960) TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746962#comment-13746962
 ] 

Hudson commented on HBASE-8960:
---

FAILURE: Integrated in hbase-0.95-on-hadoop2 #261 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/261/])
HBASE-8960: TestDistributedLogSplitting fails sometime - stablize 
testDisallowWritesInRecovering (jeffreyz: rev 1516252)
* 
/hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestDistributedLogSplitting.java


> TestDistributedLogSplitting.testLogReplayForDisablingTable fails sometimes
> --
>
> Key: HBASE-8960
> URL: https://issues.apache.org/jira/browse/HBASE-8960
> Project: HBase
>  Issue Type: Task
>  Components: test
>Reporter: Jimmy Xiang
>Assignee: Jeffrey Zhong
>Priority: Minor
> Fix For: 0.96.0
>
> Attachments: hbase-8690-v4.patch, hbase-8960-addendum-2.patch, 
> hbase-8960-addendum.patch, hbase-8960-fix-disallowWritesInRecovering.patch, 
> hbase-8960.patch
>
>
> http://54.241.6.143/job/HBase-0.95-Hadoop-2/org.apache.hbase$hbase-server/634/testReport/junit/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testLogReplayForDisablingTable/
> {noformat}
> java.lang.AssertionError: expected:<1000> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hbase.master.TestDistributedLogSplitting.testLogReplayForDisablingTable(TestDistributedLogSplitting.java:797)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9287) TestCatalogTracker depends on the execution order

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746956#comment-13746956
 ] 

Hudson commented on HBASE-9287:
---

FAILURE: Integrated in HBase-TRUNK #4421 (See 
[https://builds.apache.org/job/HBase-TRUNK/4421/])
HBASE-9287 TestCatalogTracker depends on the execution order (mbertozzi: rev 
1516292)
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestCatalogTracker.java


> TestCatalogTracker depends on the execution order
> -
>
> Key: HBASE-9287
> URL: https://issues.apache.org/jira/browse/HBASE-9287
> Project: HBase
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.98.0, 0.95.2, 0.94.11
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Minor
> Fix For: 0.98.0, 0.94.12, 0.96.0
>
> Attachments: HBASE-9287-trunk-v0.patch, HBASE-9287-v0.patch
>
>
> Some CatalogTracker tests don't delete the ROOT location.
> For example, if testNoTimeoutWaitForRoot() runs before 
> testInterruptWaitOnMetaAndRoot(), you get
> {code}
> junit.framework.AssertionFailedError: Expected: <null> but was: 
> example.org,1234,1377038834244
>   at junit.framework.Assert.fail(Assert.java:50)
>   at junit.framework.Assert.assertTrue(Assert.java:20)
>   at junit.framework.Assert.assertNull(Assert.java:237)
>   at junit.framework.Assert.assertNull(Assert.java:230)
>   at 
> org.apache.hadoop.hbase.catalog.TestCatalogTracker.testInterruptWaitOnMetaAndRoot(TestCatalogTracker.java:144)
> {code}
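The standard remedy for this kind of order dependence is to reset the shared state in an @After method so every test starts from a clean slate. A minimal JUnit sketch of that pattern follows; the field and values are hypothetical stand-ins for the ZooKeeper-backed location the real tests share, not the actual TestCatalogTracker code:

{code}
import static org.junit.Assert.assertNull;

import org.junit.After;
import org.junit.Test;

public class OrderIndependentTrackerTest {
  // Hypothetical stand-in for the ROOT location the real tests share via ZooKeeper.
  private static String rootLocation;

  @After
  public void cleanUp() {
    // Clear the shared location so a later test never sees this test's leftovers.
    rootLocation = null;
  }

  @Test
  public void testNoTimeoutWaitForRoot() {
    rootLocation = "example.org,1234,1377038834244"; // simulates publishing a location
  }

  @Test
  public void testInterruptWaitOnMetaAndRoot() {
    // Passes in either execution order once cleanUp() runs between tests.
    assertNull(rootLocation);
  }
}
{code}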

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9210) "hbase shell -d" doesn't print out exception stack trace

2013-08-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746959#comment-13746959
 ] 

Hudson commented on HBASE-9210:
---

FAILURE: Integrated in hbase-0.95-on-hadoop2 #261 (See 
[https://builds.apache.org/job/hbase-0.95-on-hadoop2/261/])
HBASE-9210: hbase shell -d doesn't print out exception stack trace (jeffreyz: 
rev 1516298)
* /hbase/branches/0.95/bin/hirb.rb
* /hbase/branches/0.95/bin/region_mover.rb
* /hbase/branches/0.95/bin/region_status.rb
* /hbase/branches/0.95/hbase-server/src/main/ruby/shell/commands.rb


> "hbase shell  -d" doesn't print out exception stack trace
> -
>
> Key: HBASE-9210
> URL: https://issues.apache.org/jira/browse/HBASE-9210
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.95.2
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: hbase-9210.patch, hbase-9210-v1.patch
>
>
> When starting the shell with "-d" specified, the following line doesn't print 
> anything because debug isn't set when the shell is constructed.
> {code}
> "Backtrace: #{e.backtrace.join("\n   ")}" if debug
> {code}
> In addition, the existing code prints the outermost exception, while we 
> normally need the root-cause exception.
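The shell fix itself is Ruby, but the "root cause" part is just a walk down the cause chain; a tiny, self-contained Java sketch of the idea, purely for illustration:

{code}
public final class RootCause {
  /** Walks the cause chain and returns the innermost (root) exception. */
  static Throwable of(Throwable t) {
    Throwable root = t;
    while (root.getCause() != null && root.getCause() != root) {
      root = root.getCause();
    }
    return root;
  }

  public static void main(String[] args) {
    Exception wrapped = new RuntimeException("wrapper",
        new IllegalStateException("real problem"));
    // Prints "real problem" -- the innermost message, not the wrapper's.
    System.out.println(of(wrapped).getMessage());
  }
}
{code}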

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746930#comment-13746930
 ] 

stack commented on HBASE-9285:
--

+1

Good find.

> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 0.98.0
>
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here were the related entries in the hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9292) Syncer fails but we won't go down

2013-08-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13746928#comment-13746928
 ] 

stack commented on HBASE-9292:
--

This seems easy enough to reproduce on this hadoop-2.1.0-beta.  Here we go 
again:

{code}
2013-08-21 15:17:48,645 DEBUG [regionserver60020.logRoller] 
regionserver.LogRoller: HLog roll requested
2013-08-21 15:17:48,652 INFO  [regionserver60020.logRoller] wal.FSHLog: Rolled 
WAL 
/hbase/WALs/a2430.halxg.cloudera.com,60020,1377123425666/a2430.halxg.cloudera.com%2C60020%2C1377123425666.1377123468621
 with entries=1, filesize=697.5 K; new WAL 
/hbase/WALs/a2430.halxg.cloudera.com,60020,1377123425666/a2430.halxg.cloudera.com%2C60020%2C1377123425666.1377123468645
2013-08-21 15:19:41,429 WARN  [Thread-204] hdfs.DFSClient: DataStreamer 
Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/hbase/WALs/a2430.halxg.cloudera.com,60020,1377123425666/a2430.halxg.cloudera.com%2C60020%2C1377123425666.1377123468645
 could only be replicated to 0 nodes instead of minReplication (=1).  There are 
5 datanode(s) running and no node(s) are excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)

at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at $Proxy13.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:188)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at $Proxy13.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:266)
at $Proxy14.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1220)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1073)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:509)
2013-08-21 15:19:41,431 WARN  [RpcServer.handler=0,port=60020] hdfs.DFSClient: 
Error while syncing
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/hbase/WALs/a2430.halxg.cloudera.com,60020,1377123425666/a2430.halxg.cloudera.com%2C60020%2C1377123425666.1377123468645
 could only be replicated to 0 nodes instead of minReplication (=1).  There are 
5 datanode(s) running and no node(s) are excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2458)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:525)

...

2013-08-21 15:19:41,432 FATAL [RpcServer.handler=0,port=60020] wal.FSHLog: 
Could not sync. Requesting roll of hlog
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/hbase/WALs/a

[jira] [Updated] (HBASE-9285) User who created table cannot scan the same table due to Insufficient permissions

2013-08-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9285:
--

Attachment: 9285.patch

Patch that fixes the problem.

I have verified the fix on a 4-node cluster.

> User who created table cannot scan the same table due to Insufficient 
> permissions
> -
>
> Key: HBASE-9285
> URL: https://issues.apache.org/jira/browse/HBASE-9285
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.95.2
>Reporter: Ted Yu
> Attachments: 9285.patch
>
>
> User hrt_qa has been given 'C' permission.
> {code}
> create 'te', {NAME => 'f1', VERSIONS => 5}
> ...
> hbase(main):003:0> list
> TABLE
> hbase:acl
> hbase:namespace
> te
> 6 row(s) in 0.0570 seconds
> hbase(main):004:0> scan 'te'
> ROW  COLUMN+CELL
> 2013-08-21 02:21:00,921 DEBUG [main] token.AuthenticationTokenSelector: No 
> matching token found
> 2013-08-21 02:21:00,921 DEBUG [main] security.HBaseSaslRpcClient: Creating 
> SASL GSSAPI client. Server's Kerberos principal name is 
> hbase/hor16n13.gq1.ygridcore@horton.ygridcore.net
> 2013-08-21 02:21:00,923 DEBUG [main] security.HBaseSaslRpcClient: Have sent 
> token of size 582 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 0 for processing by initSASLContext
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 0 from initSASLContext.
> 2013-08-21 02:21:00,926 DEBUG [main] security.HBaseSaslRpcClient: Will read 
> input token of size 53 for processing by initSASLContext
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: Will send 
> token of size 53 from initSASLContext.
> 2013-08-21 02:21:00,927 DEBUG [main] security.HBaseSaslRpcClient: SASL client 
> context established. Negotiated QoP: auth
> 2013-08-21 02:21:00,935 WARN  [main] client.RpcRetryingCaller: Call 
> exception, tries=0, retries=7, retryTime=-14ms
> org.apache.hadoop.hbase.security.AccessDeniedException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:26847)
> ...
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.security.AccessDeniedException):
>  org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions for user 'hrt_qa' for scanner open on table te
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preScannerOpen(AccessController.java:1116)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1294)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3007)
> {code}
> Here were the related entries in the hbase:acl table:
> {code}
> hbase(main):001:0> scan 'hbase:acl'
> ROW  COLUMN+CELL
>  hbase:acl   column=l:hrt_qa, 
> timestamp=1377045996685, value=C
>  te  column=l:hrt_qa, 
> timestamp=1377051648649, value=RWXCA
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

