[jira] [Updated] (HBASE-4105) Stargate does not support Content-Type: application/json and Content-Encoding: gzip in parallel

2011-07-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-4105:
--

Attachment: HBASE-4105.patch

Attached patch for this issue. Also fixes a problem reported on the user list 
where response codes other than 200 are not being set if the gzip filter is 
active.
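For illustration, a minimal sketch of the wrapper-based approach such a fix
typically takes (this is hypothetical, not the attached HBASE-4105.patch; the
class names are made up and it assumes the Servlet 2.5 API that Jetty 6
provides):

{code}
// Hypothetical sketch (not the actual HBASE-4105 patch): a gzip servlet filter
// that wraps the response instead of casting the container's output stream.
import java.io.IOException;
import java.util.zip.GZIPOutputStream;
import javax.servlet.*;
import javax.servlet.http.*;

public class SketchGzipFilter implements Filter {
  public void init(FilterConfig config) { }
  public void destroy() { }

  public void doFilter(ServletRequest req, ServletResponse rsp, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) req;
    HttpServletResponse response = (HttpServletResponse) rsp;
    String acceptEncoding = request.getHeader("Accept-Encoding");
    if (acceptEncoding != null && acceptEncoding.contains("gzip")) {
      // Wrap the response so getOutputStream() returns a gzip stream,
      // while status codes and headers still reach the real response.
      GzipResponseWrapper wrapped = new GzipResponseWrapper(response);
      chain.doFilter(request, wrapped);
      wrapped.finish();   // flush the gzip trailer
    } else {
      chain.doFilter(request, response);
    }
  }

  // Minimal wrapper: delegates everything except the output stream.
  static class GzipResponseWrapper extends HttpServletResponseWrapper {
    private GZIPOutputStream gzip;
    private ServletOutputStream stream;

    GzipResponseWrapper(HttpServletResponse response) { super(response); }

    @Override
    public ServletOutputStream getOutputStream() throws IOException {
      if (stream == null) {
        setHeader("Content-Encoding", "gzip");
        gzip = new GZIPOutputStream(super.getOutputStream());
        stream = new ServletOutputStream() {
          @Override public void write(int b) throws IOException { gzip.write(b); }
        };
      }
      return stream;
    }

    void finish() throws IOException {
      if (gzip != null) gzip.finish();
    }
  }
}
{code}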

 Stargate does not support Content-Type: application/json and 
 Content-Encoding: gzip in parallel
 ---

 Key: HBASE-4105
 URL: https://issues.apache.org/jira/browse/HBASE-4105
 Project: HBase
  Issue Type: Bug
  Components: rest
Affects Versions: 0.90.1
 Environment: Server: jetty/6.1.26
 REST: 0.0.2 
 OS: Linux 2.6.32-bpo.5-amd64 amd64
 Jersey: 1.4
 JVM: Sun Microsystems Inc. 1.6.0_22-17.1-b03
Reporter: Jean-Pierre Koenig
Assignee: Andrew Purtell
  Labels: gzip, json, rest
 Fix For: 0.94.0

 Attachments: HBASE-4105.patch


 When:
 curl -H "Accept: application/json" http://localhost:3000/version -v
 Response is:
 About to connect() to localhost port 3000 (#0)
 Trying 127.0.0.1... connected
 Connected to localhost (127.0.0.1) port 3000 (#0)
  GET /version HTTP/1.1
  User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 
  OpenSSL/0.9.8r zlib/1.2.3
  Host: localhost:3000
  Accept: application/json
  
  HTTP/1.1 200 OK
  Cache-Control: no-cache
  Content-Type: application/json
  Transfer-Encoding: chunked
 
 Connection #0 to host localhost left intact
 Closing connection #0 {Server:jetty/6.1.26,REST:0.0.2,OS:Linux 
 2.6.32-bpo.5-amd64 amd64,Jersey:1.4,JVM:Sun Microsystems Inc. 
 1.6.0_22-17.1-b03}
 but with compression:
 curl -H "Accept: application/json" http://localhost:3000/version -v --compressed
 Response is:
 About to connect() to localhost port 3000 (#0)
 Trying 127.0.0.1 ... connected
 Connected to localhost (127.0.0.1) port 3000 (#0)
  GET /version HTTP/1.1
  User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 
  OpenSSL/0.9.8r zlib/1.2.3
  Host: localhost:3000
  Accept-Encoding: deflate, gzip
  Accept: application/json
  
  HTTP/1.1 200 OK
  Cache-Control: no-cache
  Content-Type: application/json
  Content-Encoding: gzip
  Transfer-Encoding: chunked
 
 Connection #0 to host localhost left intact
 Closing connection #0
 and the stargate server throws the following exception:
 11/07/14 11:21:44 ERROR mortbay.log: /version
 java.lang.ClassCastException: org.mortbay.jetty.HttpConnection$Output cannot 
 be cast to org.apache.hadoop.hbase.rest.filter.GZIPResponseStream
 at org.apache.hadoop.hbase.rest.filter.GzipFilter.doFilter(GzipFilter.java:54)
 at 
 org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at 
 org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
 at 
 org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
 at 
 org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 This is not reproducible with content type text/plain and gzip.
 This is somehow related to https://issues.apache.org/jira/browse/HBASE-3275

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4105) Stargate does not support Content-Type: application/json and Content-Encoding: gzip in parallel

2011-07-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-4105:
--

Status: Patch Available  (was: Open)

 Stargate does not support Content-Type: application/json and 
 Content-Encoding: gzip in parallel
 ---

 Key: HBASE-4105
 URL: https://issues.apache.org/jira/browse/HBASE-4105
 Project: HBase
  Issue Type: Bug
  Components: rest
Affects Versions: 0.90.1
 Environment: Server: jetty/6.1.26
 REST: 0.0.2 
 OS: Linux 2.6.32-bpo.5-amd64 amd64
 Jersey: 1.4
 JVM: Sun Microsystems Inc. 1.6.0_22-17.1-b03
Reporter: Jean-Pierre Koenig
Assignee: Andrew Purtell
  Labels: gzip, json, rest
 Fix For: 0.90.4, 0.94.0

 Attachments: HBASE-4105.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4105) Stargate does not support Content-Type: application/json and Content-Encoding: gzip in parallel

2011-07-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-4105:
--

Fix Version/s: 0.90.4

 Stargate does not support Content-Type: application/json and 
 Content-Encoding: gzip in parallel
 ---

 Key: HBASE-4105
 URL: https://issues.apache.org/jira/browse/HBASE-4105
 Project: HBase
  Issue Type: Bug
  Components: rest
Affects Versions: 0.90.1
 Environment: Server: jetty/6.1.26
 REST: 0.0.2 
 OS: Linux 2.6.32-bpo.5-amd64 amd64
 Jersey: 1.4
 JVM: Sun Microsystems Inc. 1.6.0_22-17.1-b03
Reporter: Jean-Pierre Koenig
Assignee: Andrew Purtell
  Labels: gzip, json, rest
 Fix For: 0.90.4, 0.94.0

 Attachments: HBASE-4105.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4105) Stargate does not support Content-Type: application/json and Content-Encoding: gzip in parallel

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13069912#comment-13069912
 ] 

stack commented on HBASE-4105:
--

+1

Commit it, I'd say.  I've tagged RC0.  I think it's going to go down soon, going 
by my testing.

 Stargate does not support Content-Type: application/json and 
 Content-Encoding: gzip in parallel
 ---

 Key: HBASE-4105
 URL: https://issues.apache.org/jira/browse/HBASE-4105
 Project: HBase
  Issue Type: Bug
  Components: rest
Affects Versions: 0.90.1
 Environment: Server: jetty/6.1.26
 REST: 0.0.2 
 OS: Linux 2.6.32-bpo.5-amd64 amd64
 Jersey: 1.4
 JVM: Sun Microsystems Inc. 1.6.0_22-17.1-b03
Reporter: Jean-Pierre Koenig
Assignee: Andrew Purtell
  Labels: gzip, json, rest
 Fix For: 0.90.4, 0.94.0

 Attachments: HBASE-4105.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-4130) HTable.flushCommits throws IndexOutOfBoundsException

2011-07-23 Thread Zizon (JIRA)
HTable.flushCommits throws IndexOutOfBoundsException


 Key: HBASE-4130
 URL: https://issues.apache.org/jira/browse/HBASE-4130
 Project: HBase
  Issue Type: Bug
  Components: client
Affects Versions: 0.90.3
Reporter: Zizon


Using an HTable instance with auto-commit disabled from multiple threads may 
raise an IndexOutOfBoundsException, because processBatchOfPuts removes the 
committed results by their index, and those indexes can become stale when 
another thread shortens the list.

The following is the stack trace:

java.lang.IndexOutOfBoundsException: Index: 3781, Size: 
at java.util.ArrayList.RangeCheck(ArrayList.java:547)
at java.util.ArrayList.remove(ArrayList.java:387)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1252)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:826)
at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:682)
at org.apache.hadoop.hbase.client.HTable.put(HTable.java:667)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4130) HTable.flushCommits throws IndexOutOfBoundsException

2011-07-23 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13069962#comment-13069962
 ] 

Ted Yu commented on HBASE-4130:
---

HTable has always been thread unsafe.
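As a usage note, a minimal sketch of the documented workaround, giving each
thread its own HTable instead of sharing one instance (the table name, column,
and setup here are illustrative assumptions, not taken from this issue):

{code}
// Hypothetical sketch: give each thread its own HTable so the write buffer
// used by flushCommits() is never shared across threads.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PerThreadTableExample {
  private static final Configuration CONF = HBaseConfiguration.create();

  // One HTable per thread; the instances still share the underlying connection.
  private static final ThreadLocal<HTable> TABLE = new ThreadLocal<HTable>() {
    @Override protected HTable initialValue() {
      try {
        HTable t = new HTable(CONF, "mytable");   // "mytable" is illustrative
        t.setAutoFlush(false);                    // buffered writes, as in the report
        return t;
      } catch (IOException e) {
        throw new RuntimeException(e);
      }
    }
  };

  public static void write(byte[] row, byte[] value) throws IOException {
    Put put = new Put(row);
    put.add(Bytes.toBytes("a"), Bytes.toBytes("1"), value);
    TABLE.get().put(put);        // buffered in this thread's private HTable
  }

  public static void flush() throws IOException {
    TABLE.get().flushCommits();  // safe: no other thread touches this buffer
  }
}
{code}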

 HTable.flushCommits throws IndexOutOfBoundsException
 

 Key: HBASE-4130
 URL: https://issues.apache.org/jira/browse/HBASE-4130
 Project: HBase
  Issue Type: Bug
  Components: client
Affects Versions: 0.90.3
Reporter: Zizon

 Using an HTable instance with auto-commit disabled from multiple threads may 
 raise an IndexOutOfBoundsException, because processBatchOfPuts removes the 
 committed results by their index, and those indexes can become stale when 
 another thread shortens the list.
 The following is the stack trace:
 java.lang.IndexOutOfBoundsException: Index: 3781, Size: 
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.remove(ArrayList.java:387)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1252)
   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:826)
   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:682)
   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:667)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4128) Detect whether there was zookeeper ensemble hanging from previous build

2011-07-23 Thread Eric Charles (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13069976#comment-13069976
 ] 

Eric Charles commented on HBASE-4128:
-

Should we also detect hung master and region servers?

 Detect whether there was zookeeper ensemble hanging from previous build
 ---

 Key: HBASE-4128
 URL: https://issues.apache.org/jira/browse/HBASE-4128
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu

 Quite often, we see unit test(s) time out after 15 minutes.
 One example was TestShell: 
 https://builds.apache.org/view/G-L/view/HBase/job/hbase-0.90/239/console
 This may be caused by zookeeper ensemble hanging from previous build.
 We should detect (and terminate, if possible) the hanging zk ensemble from 
 previous build as the first step in current build.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4128) Detect whether there was zookeeper ensemble, master or region server hanging from previous build

2011-07-23 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-4128:
--

Description: 
Quite often, we see unit test(s) time out after 15 minutes.
One example was TestShell: 
https://builds.apache.org/view/G-L/view/HBase/job/hbase-0.90/239/console

This may be caused by zookeeper ensemble, master or region server hanging from 
previous build.
We should detect (and terminate, if possible) the hanging zk ensemble, master 
or region server from previous build as the first step in current build.

  was:
Quite often, we see unit test(s) time out after 15 minutes.
One example was TestShell: 
https://builds.apache.org/view/G-L/view/HBase/job/hbase-0.90/239/console

This may be caused by zookeeper ensemble hanging from previous build.
We should detect (and terminate, if possible) the hanging zk ensemble from 
previous build as the first step in current build.

Summary: Detect whether there was zookeeper ensemble, master or region 
server hanging from previous build  (was: Detect whether there was zookeeper 
ensemble hanging from previous build)

 Detect whether there was zookeeper ensemble, master or region server hanging 
 from previous build
 

 Key: HBASE-4128
 URL: https://issues.apache.org/jira/browse/HBASE-4128
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu

 Quite often, we see unit test(s) time out after 15 minutes.
 One example was TestShell: 
 https://builds.apache.org/view/G-L/view/HBase/job/hbase-0.90/239/console
 This may be caused by zookeeper ensemble, master or region server hanging 
 from previous build.
 We should detect (and terminate, if possible) the hanging zk ensemble, master 
 or region server from previous build as the first step in current build.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4105) Stargate does not support Content-Type: application/json and Content-Encoding: gzip in parallel

2011-07-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-4105:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.90 branch. All tests pass locally.

 Stargate does not support Content-Type: application/json and 
 Content-Encoding: gzip in parallel
 ---

 Key: HBASE-4105
 URL: https://issues.apache.org/jira/browse/HBASE-4105
 Project: HBase
  Issue Type: Bug
  Components: rest
Affects Versions: 0.90.1
 Environment: Server: jetty/6.1.26
 REST: 0.0.2 
 OS: Linux 2.6.32-bpo.5-amd64 amd64
 Jersey: 1.4
 JVM: Sun Microsystems Inc. 1.6.0_22-17.1-b03
Reporter: Jean-Pierre Koenig
Assignee: Andrew Purtell
  Labels: gzip, json, rest
 Fix For: 0.90.4, 0.94.0

 Attachments: HBASE-4105.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4105) Stargate does not support Content-Type: application/json and Content-Encoding: gzip in parallel

2011-07-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070033#comment-13070033
 ] 

Andrew Purtell commented on HBASE-4105:
---

Added a test case for checking that the gzip filter does not interfere with 
expected return codes.
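Roughly, such a check can be as simple as the following sketch, which uses a
plain HttpURLConnection rather than the project's REST test client (the port
and row path are illustrative, and this is not the committed TestGzipFilter
change):

{code}
// Hypothetical sketch: request a non-existent row with gzip accepted and
// verify the 404 status still comes through when the gzip filter is active.
import java.net.HttpURLConnection;
import java.net.URL;

public class GzipStatusCodeCheck {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8080/TestTable/doesnotexist/a:1"); // illustrative
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    conn.setRequestProperty("Accept-Encoding", "gzip");
    int code = conn.getResponseCode();
    // A missing row should still surface as 404, not be masked by the filter.
    if (code != 404) {
      throw new AssertionError("expected 404 but got " + code);
    }
    conn.disconnect();
  }
}
{code}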

 Stargate does not support Content-Type: application/json and 
 Content-Encoding: gzip in parallel
 ---

 Key: HBASE-4105
 URL: https://issues.apache.org/jira/browse/HBASE-4105
 Project: HBase
  Issue Type: Bug
  Components: rest
Affects Versions: 0.90.1
 Environment: Server: jetty/6.1.26
 REST: 0.0.2 
 OS: Linux 2.6.32-bpo.5-amd64 amd64
 Jersey: 1.4
 JVM: Sun Microsystems Inc. 1.6.0_22-17.1-b03
Reporter: Jean-Pierre Koenig
Assignee: Andrew Purtell
  Labels: gzip, json, rest
 Fix For: 0.90.4, 0.94.0

 Attachments: HBASE-4105.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4116) [stargate] StringIndexOutOfBoundsException in row spec parse

2011-07-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-4116:
--

Attachment: HBASE-4116.patch

Converted Allan's description into a patch.

 [stargate] StringIndexOutOfBoundsException in row spec parse
 

 Key: HBASE-4116
 URL: https://issues.apache.org/jira/browse/HBASE-4116
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.94.0

 Attachments: HBASE-4116.patch


 From user@hbase, Allan Yan writes:
 There might be a bug in the REST web service when getting rows with a given 
 startRow and endRow.
 For example, to get a list of rows with startRow=testrow1, endRow=testrow2, I 
 send the GET request:
 curl http://localhost:8123/TestRowResource/testrow1,testrow2/a:1
 and get a StringIndexOutOfBoundsException.
 This is because in RowSpec.java's parseRowKeys method, the startRow value is 
 overwritten before endRow is extracted from it:
 {code}
startRow = sb.toString();
int idx = startRow.indexOf(',');
if (idx != -1) {
  startRow = URLDecoder.decode(startRow.substring(0, idx),
HConstants.UTF8_ENCODING);
  endRow = URLDecoder.decode(startRow.substring(idx + 1),
HConstants.UTF8_ENCODING);
} else {
  startRow = URLDecoder.decode(startRow, HConstants.UTF8_ENCODING);
}
 {code}
  After changing it to this, it works:
 {code}
String row = sb.toString();
int idx = row.indexOf(',');
if (idx != -1) {
  startRow = URLDecoder.decode(row.substring(0, idx),
HConstants.UTF8_ENCODING);
  endRow = URLDecoder.decode(row.substring(idx + 1),
HConstants.UTF8_ENCODING);
} else {
  startRow = URLDecoder.decode(row, HConstants.UTF8_ENCODING);
}
 {code}
 I've also created a unit test method in TestRowResource.java,
 {code}
@Test
public void testStartEndRowGetPutXML() throws IOException, JAXBException {
  String[] rows = {ROW_1,ROW_2,ROW_3};
  String[] values = {VALUE_1,VALUE_2,VALUE_3}; 
  Response response = null;
  for(int i=0; i<rows.length; i++){
  response = putValueXML(TABLE, rows[i], COLUMN_1, values[i]);
  assertEquals(200, response.getCode());
  checkValueXML(TABLE, rows[i], COLUMN_1, values[i]);
  }
  response = getValueXML(TABLE, rows[0], rows[2], COLUMN_1);
  assertEquals(200, response.getCode());
  CellSetModel cellSet = (CellSetModel)
unmarshaller.unmarshal(new ByteArrayInputStream(response.getBody()));
  assertEquals(2, cellSet.getRows().size());
  for(int i=0; i<cellSet.getRows().size()-1; i++){
  RowModel rowModel = cellSet.getRows().get(i);
  for(CellModel cell : rowModel.getCells()){
  assertEquals(COLUMN_1, Bytes.toString(cell.getColumn()));
  assertEquals(values[i], Bytes.toString(cell.getValue()));
  }   
  }
 
  for(String row : rows){
  response = deleteRow(TABLE, row);
  assertEquals(200, response.getCode());
  }
}
private static Response getValueXML(String table, String startRow, String
  endRow, String column)
throws IOException {
  StringBuilder path = new StringBuilder();
  path.append('/');
  path.append(table);
  path.append('/');
  path.append(startRow);
  path.append(",");
  path.append(endRow);
  path.append('/');
  path.append(column);
  return getValueXML(path.toString());
}
 {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HBASE-4116) [stargate] StringIndexOutOfBoundsException in row spec parse

2011-07-23 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-4116.
---

   Resolution: Fixed
Fix Version/s: 0.90.4
 Assignee: (was: Andrew Purtell)

Committed to 0.90 branch and trunk. All tests pass locally, including the new test.

 [stargate] StringIndexOutOfBoundsException in row spec parse
 

 Key: HBASE-4116
 URL: https://issues.apache.org/jira/browse/HBASE-4116
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
 Fix For: 0.90.4, 0.94.0

 Attachments: HBASE-4116.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4128) Detect whether there was zookeeper ensemble, master or region server hanging from previous build

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070062#comment-13070062
 ] 

stack commented on HBASE-4128:
--

@Eric Yes. We'll now run jps as the first thing we do before a build.  Let's see 
what that turns up next time we have a TestShell hang.  If it's a hung 
master/regionserver, it should show... or I suppose it won't really.  We'll see 
the Maven surefire process running, but that should be clue enough.
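As a rough illustration of that kind of pre-build check (hypothetical, not the
actual Jenkins change; it just shells out to jps and reports leftover
HBase/ZooKeeper JVMs):

{code}
// Hypothetical sketch: run jps and flag HBase/ZooKeeper processes left over
// from a previous build so the job can fail fast (or kill them) before tests run.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class LeftoverProcessCheck {
  public static void main(String[] args) throws Exception {
    String[] suspects = { "HMaster", "HRegionServer", "HQuorumPeer" };
    Process jps = new ProcessBuilder("jps").start();
    BufferedReader out = new BufferedReader(new InputStreamReader(jps.getInputStream()));
    boolean leftover = false;
    String line;
    while ((line = out.readLine()) != null) {      // lines look like "12345 HMaster"
      for (String name : suspects) {
        if (line.endsWith(" " + name)) {
          System.err.println("Leftover process from previous build: " + line);
          leftover = true;
        }
      }
    }
    jps.waitFor();
    if (leftover) {
      System.exit(1);   // fail the build (or kill the PIDs) before starting tests
    }
  }
}
{code}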

 Detect whether there was zookeeper ensemble, master or region server hanging 
 from previous build
 

 Key: HBASE-4128
 URL: https://issues.apache.org/jira/browse/HBASE-4128
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu

 Quite often, we see unit test(s) time out after 15 minutes.
 One example was TestShell: 
 https://builds.apache.org/view/G-L/view/HBase/job/hbase-0.90/239/console
 This may be caused by zookeeper ensemble, master or region server hanging 
 from previous build.
 We should detect (and terminate, if possible) the hanging zk ensemble, master 
 or region server from previous build as the first step in current build.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4129) hbase-3872 added a warn message 'CatalogJanitor: Daughter regiondir does not exist' that is triggered though its often legit that daughter is not present

2011-07-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4129:
-

Attachment: 3872.txt

So, I think the issue is that the HRegionInfo comparator is sorting daughters 
before parents, so we process the daughter first, then the parent.  Rather than 
change the HRI comparator, a radical move, I just made a version of it over in 
CatalogJanitor that will sort parents first.  This should make it so I do not 
have to relax the requirement that a daughter region exist before I can remove 
the parent.  Testing on cluster now.
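A hedged sketch of that idea follows; it is not the attached 3872.txt, just an
illustration of a comparator that orders a parent ahead of its daughters,
assuming a daughter shares the parent's start key while the parent spans the
wider key range:

{code}
// Hypothetical sketch: order regions so that when a parent and a daughter share
// a start key, the region covering the wider range (the parent) sorts first.
import java.util.Comparator;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

public class ParentFirstComparator implements Comparator<HRegionInfo> {
  @Override
  public int compare(HRegionInfo left, HRegionInfo right) {
    int cmp = Bytes.compareTo(left.getStartKey(), right.getStartKey());
    if (cmp != 0) {
      return cmp;                          // different start keys: plain key order
    }
    // Same start key: the wider region (larger end key) is the parent, put it first.
    byte[] leftEnd = left.getEndKey();
    byte[] rightEnd = right.getEndKey();
    if (Bytes.equals(leftEnd, rightEnd)) {
      return 0;
    }
    if (leftEnd.length == 0) return -1;    // empty end key means end of table, widest
    if (rightEnd.length == 0) return 1;
    return -Bytes.compareTo(leftEnd, rightEnd);  // larger end key sorts first
  }
}
{code}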

 hbase-3872 added a warn message 'CatalogJanitor: Daughter regiondir does not 
 exist' that is triggered though its often legit that daughter is not present
 -

 Key: HBASE-4129
 URL: https://issues.apache.org/jira/browse/HBASE-4129
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.4
Reporter: stack
Assignee: stack
 Fix For: 0.90.4

 Attachments: 3872.txt


 If a daughter region is split before the catalog janitor runs, we'll see:
 {code}
 2011-07-22 16:10:26,398 WARN org.apache.hadoop.hbase.master.CatalogJanitor: 
 Daughter regiondir does not exist: 
 hdfs://sv4borg227:1/hbase/TestTable/a1023b2b00fe44c86bd8ae3633f531fa
 {code}
 Its legit that the daughter region does not exist in this case (it was just 
 cleaned up by the catalogjanitor).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4130) HTable.flushCommits throws IndexOutOfBoundsException

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070065#comment-13070065
 ] 

stack commented on HBASE-4130:
--

Yes, what Ted says.  At the head of the class it says: "This class is not thread 
safe for updates; the underlying write buffer can be corrupted if multiple 
threads contend over a single HTable instance."

Can we close this issue?

 HTable.flushCommits throws IndexOutOfBoundsException
 

 Key: HBASE-4130
 URL: https://issues.apache.org/jira/browse/HBASE-4130
 Project: HBase
  Issue Type: Bug
  Components: client
Affects Versions: 0.90.3
Reporter: Zizon

 Using an HTable instance with auto-commit disabled from multiple threads may 
 raise an IndexOutOfBoundsException, because processBatchOfPuts removes the 
 committed results by their index, and those indexes can become stale when 
 another thread shortens the list.
 The following is the stack trace:
 java.lang.IndexOutOfBoundsException: Index: 3781, Size: 
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.remove(ArrayList.java:387)
   at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchOfPuts(HConnectionManager.java:1252)
   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:826)
   at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:682)
   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:667)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-1938) Make in-memory table scanning faster

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070067#comment-13070067
 ] 

stack commented on HBASE-1938:
--

bq. I modified the unit test to make it work with the trunk as it is today (new 
file attached).

Thanks.

Reviewing it, one thing you might want to do is study classes in hbase to get 
the gist of the hadoop/hbase style.  Notice how they have two spaces for tabs, 
~80 chars a line.  But that's for the future.  Not important here.

You just need to make sure your KVs have a readPoint that is less than the 
current readPoint.  It looks like you are making KVs w/o setting memstoreTS.  
The default is then used, and it's zero.  The default read point is zero.  The 
compare is <=, so it looks like you don't need to set the read point at all.  
What you have should be no harm.
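For what it's worth, a minimal sketch of how such test KVs are typically built;
a KeyValue constructed this way carries the default memstoreTS of 0, which
passes the <= read-point check described above (the row/family/qualifier names
are illustrative):

{code}
// Hypothetical test fragment: KVs built this way carry the default memstoreTS
// of 0, which is <= the default read point of 0, so a scanner will see them
// without any explicit read-point setup.
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class MemstoreTsExample {
  public static KeyValue makeKv(int i) {
    return new KeyValue(
        Bytes.toBytes("row" + i),       // row
        Bytes.toBytes("f"),             // family
        Bytes.toBytes("q"),             // qualifier
        System.currentTimeMillis(),     // timestamp
        Bytes.toBytes("value" + i));    // value
  }
}
{code}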

Your new test class seems fine.  It would be nice to add more tests.  As the 
memstore data structure grows, everything slows.

Another issue is about hacking on the ConcurrentSkipListSet that is the memstore 
to make it more suited to our accesses and perhaps to make it go faster (it's 
public domain when you dig down into the Java source).

bq. On the scan next() part, HBase currently compares the values of two 
internal iterators. In this test, the second list is always empty, hence the 
cost of the comparator is lower than in real life.

What is this that you are referring to?  Is it this? KeyValue kv = 
scanner.next();

bq. But I don't think it's worth a patch just for this (it should be included in 
a bigger patch, however).

Up to you but yes, the above is probably the way to go.

Thanks N.

 Make in-memory table scanning faster
 

 Key: HBASE-1938
 URL: https://issues.apache.org/jira/browse/HBASE-1938
 Project: HBase
  Issue Type: Improvement
  Components: performance
Reporter: stack
Assignee: stack
Priority: Blocker
 Attachments: MemStoreScanPerformance.java, 
 MemStoreScanPerformance.java, caching-keylength-in-kv.patch, test.patch


 This issue is about profiling hbase to see if I can make hbase scans run 
 faster when all is up in memory.  Talking to some users, they are seeing 
 about 1/4 million rows a second.  It should be able to go faster than this 
 (Scanning an array of objects, they can do about 4-5x this).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4105) Stargate does not support Content-Type: application/json and Content-Encoding: gzip in parallel

2011-07-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070070#comment-13070070
 ] 

Hudson commented on HBASE-4105:
---

Integrated in HBase-TRUNK #2047 (See 
[https://builds.apache.org/job/HBase-TRUNK/2047/])
HBASE-4105 Stargate does not support json and gzip in parallel

apurtell : 
Files : 
* 
/hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/filter/GZIPResponseWrapper.java
* /hbase/trunk/CHANGES.txt
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/rest/TestGzipFilter.java


 Stargate does not support Content-Type: application/json and 
 Content-Encoding: gzip in parallel
 ---

 Key: HBASE-4105
 URL: https://issues.apache.org/jira/browse/HBASE-4105
 Project: HBase
  Issue Type: Bug
  Components: rest
Affects Versions: 0.90.1
 Environment: Server: jetty/6.1.26
 REST: 0.0.2 
 OS: Linux 2.6.32-bpo.5-amd64 amd64
 Jersey: 1.4
 JVM: Sun Microsystems Inc. 1.6.0_22-17.1-b03
Reporter: Jean-Pierre Koenig
Assignee: Andrew Purtell
  Labels: gzip, json, rest
 Fix For: 0.90.4, 0.94.0

 Attachments: HBASE-4105.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HBASE-4131) Make the Replication Service pluggable via a standard interface definition

2011-07-23 Thread dhruba borthakur (JIRA)
Make the Replication Service pluggable via a standard interface definition
--

 Key: HBASE-4131
 URL: https://issues.apache.org/jira/browse/HBASE-4131
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur


The current HBase code supports a replication service that can be used to sync 
data from one HBase cluster to another. It would be nice to make it a pluggable 
interface so that other cross-data-center replication services can be used in 
conjunction with HBase.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4116) [stargate] StringIndexOutOfBoundsException in row spec parse

2011-07-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070082#comment-13070082
 ] 

Hudson commented on HBASE-4116:
---

Integrated in HBase-TRUNK #2048 (See 
[https://builds.apache.org/job/HBase-TRUNK/2048/])
HBASE-4116 [stargate] StringIndexOutOfBoundsException in row spec parse

apurtell : 
Files : 
* /hbase/trunk/src/test/java/org/apache/hadoop/hbase/rest/TestRowResource.java
* /hbase/trunk/CHANGES.txt
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/rest/RowSpec.java


 [stargate] StringIndexOutOfBoundsException in row spec parse
 

 Key: HBASE-4116
 URL: https://issues.apache.org/jira/browse/HBASE-4116
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
 Fix For: 0.90.4, 0.94.0

 Attachments: HBASE-4116.patch



--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4131) Make the Replication Service pluggable via a standard interface definition

2011-07-23 Thread dhruba borthakur (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070083#comment-13070083
 ] 

dhruba borthakur commented on HBASE-4131:
-

To keep compatibility with the currently existing Replication Service, my 
initial proposal is to do something like this:

{code}

public interface ReplicationService {

  /**
   * Start replication services.
   * @throws IOException
   */
  public void startReplicationService() throws IOException;

  /**
   * Stops replication service.
   */
  public void stopReplicationService();

  /**
   * Returns a WALObserver for the service. This is needed to 
   * observe log rolls and log archival events.
   */
  public WALObserver getWALObserver();

  /**
   * Carry on the list of log entries down to the sink
   * @param entries list of entries to replicate
   * @throws IOException
   */
  public void replicateLogEntries(HLog.Entry[] entries) throws IOException;
}
{code}
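To make the proposal concrete, here is a hypothetical plug-in written only
against the four methods above (the class name, the no-op bodies, and the
import paths are illustrative assumptions, not part of the proposal):

{code}
// Hypothetical sketch of a custom ReplicationService implementation that a
// region server could load by class name; it only uses the interface proposed above.
import java.io.IOException;
import org.apache.hadoop.hbase.regionserver.wal.HLog;
import org.apache.hadoop.hbase.regionserver.wal.WALObserver;

public class LoggingReplicationService implements ReplicationService {

  private volatile boolean running;

  @Override
  public void startReplicationService() throws IOException {
    running = true;                       // e.g. open connections to the remote sink
  }

  @Override
  public void stopReplicationService() {
    running = false;                      // e.g. flush and close remote connections
  }

  @Override
  public WALObserver getWALObserver() {
    return null;                          // a real plug-in would return its log-roll listener
  }

  @Override
  public void replicateLogEntries(HLog.Entry[] entries) throws IOException {
    if (!running) {
      throw new IOException("replication service not started");
    }
    for (HLog.Entry entry : entries) {
      // Ship the entry to whatever sink this plug-in targets; here we just log it.
      System.out.println("would replicate: " + entry);
    }
  }
}
{code}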

 Make the Replication Service pluggable via a standard interface definition
 --

 Key: HBASE-4131
 URL: https://issues.apache.org/jira/browse/HBASE-4131
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 The current HBase code supports a replication service that can be used to 
 sync data from one HBase cluster to another. It would be nice to make it 
 a pluggable interface so that other cross-data-center replication services 
 can be used in conjunction with HBase.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4132) Extend the WALObserver API to accomodate log archival

2011-07-23 Thread dhruba borthakur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dhruba borthakur updated HBASE-4132:


Attachment: walArchive.txt

Added two new methods to the WALObserver interface:

{code}

  /**
   * The WAL needs to be archived. 
   * It is going to be moved from oldPath to newPath.
   * @param oldPath the path to the old hlog
   * @param newPath the path to the new hlog
   */
  public void logArchiveStart(Path oldPath, Path newPath) throws IOException;

  /**
   * The WAL has been archived.
   * It is moved from oldPath to newPath.
   * @param oldPath the path to the old hlog
   * @param newPath the path to the new hlog
   * @param archivalWasSuccessful true, if the archival was successful
   */
  public void logArchiveComplete(Path oldPath, Path newPath,
boolean archivalWasSuccessful) throws IOException;

{code}

Any backward compatibility issues I need to think about? Especially since this 
has been a public API. 
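For example, a hypothetical observer built against the two proposed hooks, just
to show how a consumer might track archived logs (the class name and the
bookkeeping are illustrative; only the two method signatures come from the
patch description above):

{code}
// Hypothetical sketch: an observer using the proposed archival hooks to keep
// track of where each WAL ends up after it is archived.
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.fs.Path;

public class ArchiveTrackingObserver /* would implement the extended WALObserver */ {

  private final Map<Path, Path> archivedLogs = new ConcurrentHashMap<Path, Path>();

  public void logArchiveStart(Path oldPath, Path newPath) throws IOException {
    // Called before the move; a replication service could stop tailing oldPath here.
    System.out.println("archiving " + oldPath + " -> " + newPath);
  }

  public void logArchiveComplete(Path oldPath, Path newPath,
      boolean archivalWasSuccessful) throws IOException {
    if (archivalWasSuccessful) {
      archivedLogs.put(oldPath, newPath);   // remember the new location of the WAL
    }
  }

  public Path whereIs(Path originalWal) {
    Path archived = archivedLogs.get(originalWal);
    return archived != null ? archived : originalWal;
  }
}
{code}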

 Extend the WALObserver API to accomodate log archival
 -

 Key: HBASE-4132
 URL: https://issues.apache.org/jira/browse/HBASE-4132
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Attachments: walArchive.txt


 The WALObserver interface exposes the log roll events. It would be nice to 
 extend it to accomodate log archival events as well.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-4132) Extend the WALObserver API to accomodate log archival

2011-07-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-4132:
-

Fix Version/s: 0.92.0
 Release Note: Add pre and post archiving methods to WALObserver
   Status: Patch Available  (was: Open)

 Extend the WALObserver API to accomodate log archival
 -

 Key: HBASE-4132
 URL: https://issues.apache.org/jira/browse/HBASE-4132
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Fix For: 0.92.0

 Attachments: walArchive.txt


 The WALObserver interface exposes the log roll events. It would be nice to 
 extend it to accomodate log archival events as well.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4132) Extend the WALObserver API to accommodate log archival

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070095#comment-13070095
 ] 

stack commented on HBASE-4132:
--

+1 on patch.  We could put it into a 0.90.5 if wanted since this is all 
internal APIs.  Otherwise it'll go into 0.92.

I'll let it hang out a little while before committing.  J-D or Gary might have 
opinions.

 Extend the WALObserver API to accommodate log archival
 -

 Key: HBASE-4132
 URL: https://issues.apache.org/jira/browse/HBASE-4132
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Fix For: 0.92.0

 Attachments: walArchive.txt


 The WALObserver interface exposes the log roll events. It would be nice to 
 extend it to accommodate log archival events as well.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4131) Make the Replication Service pluggable via a standard interface definition

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070096#comment-13070096
 ] 

stack commented on HBASE-4131:
--

This looks fine.  How will replicateLogEntries work?  Would the current 
replication, reading the edits source, invoke it?  Would the current 
replication then need to change so that, on invocation of replicateLogEntries, 
if it were the configured sink, it would pass the edits to the remote cluster 
as it does now?
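
For the sake of discussion, a rough sketch of what a plugged-in sink could look 
like under that reading: the WAL-reading side hands batches of entries to the 
configured implementation via replicateLogEntries. The interface name 
ReplicationService, the class and logger names, and the import paths are 
assumptions for the sketch, not code from the proposal.

{code}
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.hbase.regionserver.wal.HLog;
import org.apache.hadoop.hbase.regionserver.wal.WALObserver;

// Toy plugin: a real cross-data-center implementation would ship the edits to
// the remote cluster inside replicateLogEntries.
public class LoggingReplicationService implements ReplicationService {

  private static final Log LOG = LogFactory.getLog(LoggingReplicationService.class);

  @Override
  public WALObserver getWALObserver() {
    return null; // this toy plugin does not need WAL roll/archive callbacks
  }

  @Override
  public void replicateLogEntries(HLog.Entry[] entries) throws IOException {
    // Placeholder for shipping edits to the remote cluster; here we only
    // record how many entries the reader handed us.
    LOG.info("Asked to replicate " + entries.length + " WAL entries");
  }
}
{code}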

 Make the Replication Service pluggable via a standard interface definition
 --

 Key: HBASE-4131
 URL: https://issues.apache.org/jira/browse/HBASE-4131
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur

 The current HBase code supports a replication service that can be used to 
 sync data from one HBase cluster to another. It would be nice to make it 
 a pluggable interface so that other cross-data-center replication services 
 can be used in conjunction with HBase.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4132) Extend the WALObserver API to accommodate log archival

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070097#comment-13070097
 ] 

stack commented on HBASE-4132:
--

There is also master/LogCleaner.java.  Do you know about that?  It takes plugins 
that say whether or not a log should be deleted.  Are you interested in this at 
all?
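
For reference, a rough sketch (from memory) of the kind of plugin LogCleaner 
takes; the LogCleanerDelegate interface name, its isLogDeletable signature, and 
the hbase.master.logcleaner.plugins configuration key should be verified 
against the source before relying on them.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.master.LogCleanerDelegate; // assumed location

// Example policy plugin: never let the master delete an archived log, so an
// external archival/replication service can drain it first.
public class HoldAllLogsCleaner implements LogCleanerDelegate {

  private Configuration conf;

  @Override
  public void setConf(Configuration conf) {
    this.conf = conf;
  }

  @Override
  public Configuration getConf() {
    return conf;
  }

  @Override
  public boolean isLogDeletable(Path filePath) {
    return false; // veto deletion of every archived WAL
  }
}
{code}

If memory serves, such plugins are listed, comma-separated, under the 
hbase.master.logcleaner.plugins key in hbase-site.xml.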

 Extend the WALObserver API to accommodate log archival
 -

 Key: HBASE-4132
 URL: https://issues.apache.org/jira/browse/HBASE-4132
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Fix For: 0.92.0

 Attachments: walArchive.txt


 The WALObserver interface exposes the log roll events. It would be nice to 
 extend it to accommodate log archival events as well.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4120) isolation and allocation

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070098#comment-13070098
 ] 

stack commented on HBASE-4120:
--

@Liu Thank you for posting the documents.  That helps a lot.  I'm impressed.  
You should post a note to the users' list describing what you have done. 
Others may be interested in using it.

What would you like to see added to hbase to make your life easier writing this 
version of hbase?  If coprocessors had been available when you went to write 
your customizations, could you have done them all up in coprocessors?  (It 
doesn't look like it, given you have your own AssignmentManager and your own 
HBaseServer.)

What from your version of hbase could we put back into hbase core?

Thank you for letting us know about this interesting application. 

 isolation and allocation
 

 Key: HBASE-4120
 URL: https://issues.apache.org/jira/browse/HBASE-4120
 Project: HBase
  Issue Type: New Feature
  Components: master, regionserver
Affects Versions: 0.90.2
Reporter: Liu Jia
 Attachments: Design_document_for_HBase_isolation_and_allocation.pdf, 
 HBase_isolation_and_allocation_user_guide.pdf, 
 Performance_of_Table_priority.pdf


 The HBase isolation and allocation tool is designed to help users manage 
 cluster resources among different applications and tables.
 When we have a large-scale HBase cluster with many applications running on 
 it, there will be lots of problems. In Taobao there is a cluster where many 
 departments test their applications' performance; these applications are 
 based on HBase. With one cluster of 12 servers, only one application can run 
 exclusively on the cluster at a time, and many other applications must wait 
 until the previous test finishes.
 After we add the allocation management function to the cluster, applications 
 can share the cluster and run concurrently. Also, if the Test Engineer wants 
 to make sure there is no interference, he/she can move other tables out of 
 this group.
 Within groups we use table priority to allocate resources: when the system is 
 busy, we can make sure high-priority tables are not affected by lower-priority 
 tables.
 Different groups can have different region server configurations; some groups 
 optimized for reading can have a large block cache size, and others optimized 
 for writing can have a large memstore size. 
 Tables and region servers can be moved easily between groups; after changing 
 the configuration, a group can be restarted alone instead of restarting the 
 whole cluster.
 git entry : https://github.com/ICT-Ope/HBase_allocation .
 We hope our work is helpful.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4120) isolation and allocation

2011-07-23 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070102#comment-13070102
 ] 

Andrew Purtell commented on HBASE-4120:
---

It would be interesting to consider what could be added to the Master 
coprocessor API to support this kind of extension.
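
Purely as a thought experiment, the kind of hook meant here might look like the 
sketch below. None of these methods exist in the current coprocessor API; the 
interface and method names are made up to illustrate a possible extension point 
for group-based isolation and allocation.

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.HRegionInfo;

// Hypothetical master-side hooks a group/isolation extension might want.
public interface GroupAwareMasterObserver {

  /**
   * Called before the master assigns a region, so an extension can veto or
   * redirect the assignment to a server inside the table's group.
   */
  void preAssignToGroup(HRegionInfo region, String targetGroup) throws IOException;

  /**
   * Called after a region server has been moved between groups, e.g. to
   * trigger a rebalance limited to the affected groups.
   */
  void postServerGroupChange(String serverName, String oldGroup, String newGroup)
      throws IOException;
}
{code}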



 isolation and allocation
 

 Key: HBASE-4120
 URL: https://issues.apache.org/jira/browse/HBASE-4120
 Project: HBase
  Issue Type: New Feature
  Components: master, regionserver
Affects Versions: 0.90.2
Reporter: Liu Jia
 Attachments: Design_document_for_HBase_isolation_and_allocation.pdf, 
 HBase_isolation_and_allocation_user_guide.pdf, 
 Performance_of_Table_priority.pdf


 The HBase isolation and allocation tool is designed to help users manage 
 cluster resources among different applications and tables.
 When we have a large-scale HBase cluster with many applications running on 
 it, there will be lots of problems. In Taobao there is a cluster where many 
 departments test their applications' performance; these applications are 
 based on HBase. With one cluster of 12 servers, only one application can run 
 exclusively on the cluster at a time, and many other applications must wait 
 until the previous test finishes.
 After we add the allocation management function to the cluster, applications 
 can share the cluster and run concurrently. Also, if the Test Engineer wants 
 to make sure there is no interference, he/she can move other tables out of 
 this group.
 Within groups we use table priority to allocate resources: when the system is 
 busy, we can make sure high-priority tables are not affected by lower-priority 
 tables.
 Different groups can have different region server configurations; some groups 
 optimized for reading can have a large block cache size, and others optimized 
 for writing can have a large memstore size. 
 Tables and region servers can be moved easily between groups; after changing 
 the configuration, a group can be restarted alone instead of restarting the 
 whole cluster.
 git entry : https://github.com/ICT-Ope/HBase_allocation .
 We hope our work is helpful.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3741) Make HRegionServer aware of the regions it's opening/closing

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070101#comment-13070101
 ] 

stack commented on HBASE-3741:
--

Question: Looking at this patch again, if we throw a 
RegionAlreadyInTransitionException, won't we just assign the region elsewhere, 
even though RegionAlreadyInTransitionException in at least one case here means 
that the region is already open on this regionserver?
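
To make the question concrete, here is a self-contained, paraphrased sketch of 
the check being discussed; the class, field, and exception names are simplified 
stand-ins, not the actual HRegionServer code from the patch.

{code}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OpenRegionCheckSketch {

  /** Stand-in for RegionAlreadyInTransitionException. */
  public static class RegionAlreadyInTransitionException extends IOException {
    public RegionAlreadyInTransitionException(String msg) {
      super(msg);
    }
  }

  private final Map<String, Object> onlineRegions = new ConcurrentHashMap<String, Object>();
  private final Map<String, Object> regionsInTransition = new ConcurrentHashMap<String, Object>();

  /** If this server already hosts the region (or is opening/closing it),
      refuse the open request instead of racing a second open against it. */
  public void checkOpenAllowed(String encodedRegionName)
      throws RegionAlreadyInTransitionException {
    if (onlineRegions.containsKey(encodedRegionName)
        || regionsInTransition.containsKey(encodedRegionName)) {
      throw new RegionAlreadyInTransitionException(encodedRegionName
          + " is already open or in transition on this server");
    }
  }
}
{code}

The open question above is what the master should do when it gets that 
exception back: reassign elsewhere, or recognize that the region is already 
where it wanted it.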

 Make HRegionServer aware of the regions it's opening/closing
 

 Key: HBASE-3741
 URL: https://issues.apache.org/jira/browse/HBASE-3741
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.1
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
Priority: Blocker
 Fix For: 0.90.3

 Attachments: HBASE-3741-rsfix-v2.patch, HBASE-3741-rsfix-v3.patch, 
 HBASE-3741-rsfix.patch, HBASE-3741-trunk.patch


 This is a serious issue about a race between regions being opened and closed 
 in region servers. We had this situation where the master tried to unassign a 
 region for balancing, failed, force unassigned it, force assigned it 
 somewhere else, failed to open it on another region server (took too long), 
 and then reassigned it back to the original region server. A few seconds 
 later, the region server processed the first closed and the region was left 
 unassigned.
 This is from the master log:
 {quote}
 11-04-05 15:11:17,758 DEBUG org.apache.hadoop.hbase.master.AssignmentManager: 
 Sent CLOSE to serverName=sv4borg42,60020,1300920459477, load=(requests=187, 
 regions=574, usedHeap=3918, maxHeap=6973) for region 
 stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961
 2011-04-05 15:12:10,021 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
 out:  
 stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961
  state=PENDING_CLOSE, ts=1302041477758
 2011-04-05 15:12:10,021 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
 PENDING_CLOSE for too long, running forced unassign again on 
 region=stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961
 ...
 2011-04-05 15:14:45,783 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; 
 was=stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961
  state=CLOSED, ts=1302041685733
 2011-04-05 15:14:45,783 DEBUG org.apache.hadoop.hbase.zookeeper.ZKAssign: 
 master:6-0x42ec2cece810b68 Creating (or updating) unassigned node for 
 1470298961 with OFFLINE state
 ...
 2011-04-05 15:14:45,885 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Using pre-existing plan for 
 region 
 stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961;
  
 plan=hri=stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961,
  src=sv4borg42,60020,1300920459477, dest=sv4borg40,60020,1302041218196
 2011-04-05 15:14:45,885 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Assigning region 
 stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961
  to sv4borg40,60020,1302041218196
 2011-04-05 15:15:39,410 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
 out:  
 stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961
  state=PENDING_OPEN, ts=1302041700944
 2011-04-05 15:15:39,410 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
 PENDING_OPEN for too long, reassigning 
 region=stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961
 2011-04-05 15:15:39,410 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Forcing OFFLINE; 
 was=stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961
  state=PENDING_OPEN, ts=1302041700944
 ...
 2011-04-05 15:15:39,410 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: No previous transition plan 
 was found (or we are ignoring an existing plan) for 
 stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961
  so generated a random one; 
 hri=stumbles_by_userid2,\x00'\x8E\xE8\x7F\xFF\xFE\xE7\xA9\x97\xFC\xDF\x01\x10\xCC6,1266566087256.1470298961,
  src=, dest=sv4borg42,60020,1300920459477; 19 (online=19, exclude=null) 
 available servers
 2011-04-05 15:15:39,410 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Assigning region 
 

[jira] [Commented] (HBASE-4124) ZK restarted while assigning a region, new active HM re-assign it but the RS warned 'already online on this server'.

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070103#comment-13070103
 ] 

stack commented on HBASE-4124:
--

hbase-3741 changes the behavior here in that now we notice if we are asked to 
open a region that is already open and we'll throw an exception back to the 
master.  I think the master will now reassign it elsewhere, which is not what we 
want if it's a RegionAlreadyInTransitionException.  This will make it so we'll 
not keep retrying but I think there is more to do.

 ZK restarted while assigning a region, new active HM re-assign it but the RS 
 warned 'already online on this server'.
 

 Key: HBASE-4124
 URL: https://issues.apache.org/jira/browse/HBASE-4124
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: fulin wang
 Attachments: log.txt

   Original Estimate: 0.4h
  Remaining Estimate: 0.4h

 ZK restarted while assigning a region, new active HM re-assign it but the RS 
 warned 'already online on this server'.
 Issue:
 The RS failed because of 'already online on this server' and returned; the HM 
 cannot receive the message and reports 'Regions in transition timed out'.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3872) Hole in split transaction rollback; edits to .META. need to be rolled back even if it seems like they didn't make it

2011-07-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070104#comment-13070104
 ] 

Hudson commented on HBASE-3872:
---

Integrated in HBase-TRUNK #2049 (See 
[https://builds.apache.org/job/HBase-TRUNK/2049/])
HBASE-4129 hbase-3872 added a warn message 'CatalogJanitor: Daughter 
regiondir does not exist' that is triggered though its often legit that 
daughter is not present


 Hole in split transaction rollback; edits to .META. need to be rolled back 
 even if it seems like they didn't make it
 

 Key: HBASE-3872
 URL: https://issues.apache.org/jira/browse/HBASE-3872
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.90.3
Reporter: stack
Assignee: stack
Priority: Blocker
 Fix For: 0.90.4

 Attachments: 3872-v2.txt, 3872.txt


 Saw this interesting one on a cluster of ours.  The cluster was configured 
 with too few handlers, so we saw lots of the phenomenon where actions were 
 queued but, by the time they got into the server and it tried to respond to 
 the client, the client had disconnected because of the 60 second timeout.  
 Well, the meta edits for a split were queued at the regionserver carrying 
 .META. and by the time it went to write back, the client had gone (the first 
 insert of parent offline with daughter regions added as info:splitA and 
 info:splitB).  The client presumed the edits failed and 'successfully' rolled 
 back the transaction (failing to undo .META. edits thinking they didn't go 
 through).
 A few minutes later the .META. scanner on master runs.  It sees 'no 
 references' in daughters -- the daughters had been cleaned up as part of the 
 split transaction rollback -- so it thinks it's safe to delete the parent.
 Two things:
 + Tighten up the check in master... we need to check that the daughter region 
 at least exists, and possibly that the daughter region has an entry in .META.
 + Depending on the edit that fails, schedule rollback edits even though it will 
 seem like they didn't go through.
 This is a pretty critical one.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4129) hbase-3872 added a warn message 'CatalogJanitor: Daughter regiondir does not exist' that is triggered though its often legit that daughter is not present

2011-07-23 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070105#comment-13070105
 ] 

Hudson commented on HBASE-4129:
---

Integrated in HBase-TRUNK #2049 (See 
[https://builds.apache.org/job/HBase-TRUNK/2049/])
HBASE-4129 hbase-3872 added a warn message 'CatalogJanitor: Daughter 
regiondir does not exist' that is triggered though its often legit that 
daughter is not present

stack : 
Files : 
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/master/CatalogJanitor.java
* /hbase/trunk/CHANGES.txt


 hbase-3872 added a warn message 'CatalogJanitor: Daughter regiondir does not 
 exist' that is triggered though its often legit that daughter is not present
 -

 Key: HBASE-4129
 URL: https://issues.apache.org/jira/browse/HBASE-4129
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.4
Reporter: stack
Assignee: stack
 Fix For: 0.90.4

 Attachments: 3872.txt


 If a daughter region is split before the catalog janitor runs, we'll see:
 {code}
 2011-07-22 16:10:26,398 WARN org.apache.hadoop.hbase.master.CatalogJanitor: 
 Daughter regiondir does not exist: 
 hdfs://sv4borg227:1/hbase/TestTable/a1023b2b00fe44c86bd8ae3633f531fa
 {code}
 It's legit that the daughter region does not exist in this case (it was just 
 cleaned up by the CatalogJanitor).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4064) Two concurrent unassigning of the same region caused the endless loop of Region has been PENDING_CLOSE for too long...

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070106#comment-13070106
 ] 

stack commented on HBASE-4064:
--

bq.  4.When master receives the watcher event, It removes the region from RIT 
and then remove from regions collection. There is a short window when diable 
table can't finish in a period(). The region may be unssigned again. My patch 
try to fix above case. remove regions collection firstly and disable thread 
can't get a processing region.

What is 'period()' in the above, Gao?  Where is it?  Where is the code that does 
the 'unassign again'?  I'm trying to understand what in particular your patch 
addresses.  Do you think your patch solves the problem in spite of the race 
windows described above by J-D?
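
To illustrate the shape of the race being debated, here is a generic, 
self-contained check-then-act example with made-up names; it is not the actual 
AssignmentManager code, only a sketch of why removing the region from the 
shared collection first narrows the window.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CheckThenActRaceSketch {

  private final Map<String, String> regions = new ConcurrentHashMap<String, String>();

  /** Thread B (e.g. a disable-table unassign): the containsKey() check and the
      later action are not atomic, so the region can vanish in between. */
  public void unassignIfPresent(String region) {
    if (regions.containsKey(region)) {
      // <-- thread A may remove the region right here
      startUnassign(region); // acts on stale information => region stuck in RIT
    }
  }

  /** Thread A (e.g. the ClosedRegionHandler): removing from the shared
      collection before clearing the in-transition state is the ordering the
      patch argues for. */
  public void onRegionClosed(String region) {
    regions.remove(region);
    // ... then clear the region's in-transition state ...
  }

  private void startUnassign(String region) {
    // placeholder for sending a close RPC / re-adding regions-in-transition state
  }
}
{code}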

 Two concurrent unassigning of the same region caused the endless loop of 
 Region has been PENDING_CLOSE for too long...
 

 Key: HBASE-4064
 URL: https://issues.apache.org/jira/browse/HBASE-4064
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.90.3
Reporter: Jieshan Bean
 Fix For: 0.90.5

 Attachments: HBASE-4064-v1.patch, HBASE-4064_branch90V2.patch


 1. If there is a rubbish RegionState object with PENDING_CLOSE in 
 regionsInTransition (the RegionState was left behind by some exception and 
 should have been removed; that's why I call it a rubbish object), but the 
 region is not currently assigned anywhere, TimeoutMonitor will fall into an 
 endless loop:
 2011-06-27 10:32:21,326 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
 out:  test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 state=PENDING_CLOSE, ts=1309141555301
 2011-06-27 10:32:21,326 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
 PENDING_CLOSE for too long, running forced unassign again on 
 region=test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f.
 2011-06-27 10:32:21,438 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 (offlining)
 2011-06-27 10:32:21,441 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Attempted to unassign 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. but it is 
 not currently assigned anywhere
 2011-06-27 10:32:31,207 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
 out:  test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 state=PENDING_CLOSE, ts=1309141555301
 2011-06-27 10:32:31,207 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
 PENDING_CLOSE for too long, running forced unassign again on 
 region=test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f.
 2011-06-27 10:32:31,215 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 (offlining)
 2011-06-27 10:32:31,215 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Attempted to unassign 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. but it is 
 not currently assigned anywhere
 2011-06-27 10:32:41,164 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Regions in transition timed 
 out:  test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 state=PENDING_CLOSE, ts=1309141555301
 2011-06-27 10:32:41,164 INFO 
 org.apache.hadoop.hbase.master.AssignmentManager: Region has been 
 PENDING_CLOSE for too long, running forced unassign again on 
 region=test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f.
 2011-06-27 10:32:41,172 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Starting unassignment of 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. 
 (offlining)
 2011-06-27 10:32:41,172 DEBUG 
 org.apache.hadoop.hbase.master.AssignmentManager: Attempted to unassign 
 region test2,070712,1308971310309.9a6e26d40293663a79523c58315b930f. but it is 
 not currently assigned anywhere
 .
 2. In the following scenario, two concurrent unassign calls for the same 
 region may lead to the above problem:
 The first unassign call sends its RPC successfully; the master watches the 
 RS_ZK_REGION_CLOSED event and, in processing it, creates a 
 ClosedRegionHandler to remove the state of the region in the master. E.g.
 while the ClosedRegionHandler is running in an 
 hbase.master.executor.closeregion.threads thread (A), another unassign call 
 for the same region runs in another thread (B).
 While thread B runs if (!regions.containsKey(region)), this.regions still has 
 the region info; now the CPU switches to thread A.
 Thread A will remove the region from 

[jira] [Commented] (HBASE-4132) Extend the WALObserver API to accommodate log archival

2011-07-23 Thread dhruba borthakur (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070107#comment-13070107
 ] 

dhruba borthakur commented on HBASE-4132:
-

stack: I did not even know about the existence of master/LogCleaner. Thanks a 
bunch for pointing it out. I will definitely look into that.

 Extend the WALObserver API to accommodate log archival
 -

 Key: HBASE-4132
 URL: https://issues.apache.org/jira/browse/HBASE-4132
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Fix For: 0.92.0

 Attachments: walArchive.txt


 The WALObserver interface exposes the log roll events. It would be nice to 
 extend it to accommodate log archival events as well.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3899) enhance HBase RPC to support free-ing up server handler threads even if response is not ready

2011-07-23 Thread jirapos...@reviews.apache.org (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070108#comment-13070108
 ] 

jirapos...@reviews.apache.org commented on HBASE-3899:
--


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1174/#review1175
---

Ship it!


Looks good to me.  Small items below.   Have you run this code on a cluster 
under load?


src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
https://reviews.apache.org/r/1174/#comment2464

Why remove the static?  What in the outer class do we need in here in Call?



src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java
https://reviews.apache.org/r/1174/#comment2465

Any chance that this synchronized slows down rpc'ing?

It doesn't look too bad though.  We're just allocating and then copying 
the response into the allocation.


- Michael


On 2011-07-22 00:17:13, Vlad Dogaru wrote:
bq.  
bq.  ---
bq.  This is an automatically generated e-mail. To reply, visit:
bq.  https://reviews.apache.org/r/1174/
bq.  ---
bq.  
bq.  (Updated 2011-07-22 00:17:13)
bq.  
bq.  
bq.  Review request for hbase.
bq.  
bq.  
bq.  Summary
bq.  ---
bq.  
bq.  Free up RPC server Handler thread if the called routine specifies the call 
should be delayed. The RPC client sees no difference, changes are server-side 
only. This is based on the previous submitted patch from Dhruba.
bq.  
bq.  
bq.  This addresses bug HBASE-3899.
bq.  https://issues.apache.org/jira/browse/HBASE-3899
bq.  
bq.  
bq.  Diffs
bq.  -
bq.  
bq.src/test/java/org/apache/hadoop/hbase/ipc/TestDelayedRpc.java 
PRE-CREATION 
bq.src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java 0da7f9e 
bq.src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java 61d3915 
bq.src/main/java/org/apache/hadoop/hbase/ipc/Delayable.java PRE-CREATION 
bq.  
bq.  Diff: https://reviews.apache.org/r/1174/diff
bq.  
bq.  
bq.  Testing
bq.  ---
bq.  
bq.  Unit tests run. Also, the patch includes a new unit test.
bq.  
bq.  
bq.  Thanks,
bq.  
bq.  Vlad
bq.  
bq.



 enhance HBase RPC to support free-ing up server handler threads even if 
 response is not ready
 -

 Key: HBASE-3899
 URL: https://issues.apache.org/jira/browse/HBASE-3899
 Project: HBase
  Issue Type: Improvement
  Components: ipc
Reporter: dhruba borthakur
Assignee: dhruba borthakur
 Fix For: 0.94.0

 Attachments: asyncRpc.txt, asyncRpc.txt


 In the current implementation, the server handler thread picks up an item 
 from the incoming callqueue, processes it and then wraps the response as a 
 Writable and sends it back to the IPC server module. This wastes 
 thread-resources when the thread is blocked for disk IO (transaction logging, 
 read into block cache, etc).
 It would be nice if we can make the RPC Server Handler threads pick up a call 
 from the IPC queue, hand it over to the application (e.g. HRegion), the 
 application can queue it to be processed asynchronously and send a response 
 back to the IPC server module saying that the response is not ready. The RPC 
 Server Handler thread is now ready to pick up another request from the 
 incoming callqueue. When the queued call is processed by the application, it 
 indicates to the IPC module that the response is now ready to be sent back to 
 the client.
 The RPC client continues to experience the same behaviour as before. An RPC 
 client is synchronous and blocks until the response arrives.
 This RPC enhancement allows us to do very powerful things with the 
 RegionServer. In the future, we can enhance the RegionServer's threading 
 model into a message-passing model for better performance. We will not be 
 limited by the number of threads in the RegionServer.
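
 Purely as an illustration of the call flow described above (and not the 
 Delayable API in the attached patch), a minimal sketch with made-up names:

 {code}
 import java.util.concurrent.Executor;
 import java.util.concurrent.Executors;

 public class DelayedCallSketch {

   /** Made-up handle the IPC layer would give the application. */
   public interface PendingCall {
     void sendResponse(Object result); // called once the real work is done
   }

   private final Executor appExecutor = Executors.newSingleThreadExecutor();

   /** What a handler thread would do: enqueue the work and return immediately,
       freeing itself to pick the next call off the incoming queue. */
   public void handle(final PendingCall call) {
     appExecutor.execute(new Runnable() {
       @Override
       public void run() {
         Object result = doSlowRegionWork(); // e.g. blocked on WAL sync or HDFS read
         call.sendResponse(result);          // IPC layer ships it to the client now
       }
     });
   }

   private Object doSlowRegionWork() {
     return "done";
   }
 }
 {code}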

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HBASE-3465) Hbase should use a HADOOP_HOME environment variable if available.

2011-07-23 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-3465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-3465:
-

  Resolution: Fixed
Release Note: If HADOOP_HOME is defined, we'll use this hadoop over what's 
in HBASE_HOME/lib
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

Committed to TRUNK.  Thank you for the patch Alejandro.

 Hbase should use a HADOOP_HOME environment variable if available.
 -

 Key: HBASE-3465
 URL: https://issues.apache.org/jira/browse/HBASE-3465
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.0
Reporter: Ted Dunning
Assignee: Alejandro Abdelnur
 Attachments: a1-HBASE-3465.patch


 I have been burned a few times lately while developing code by having to 
 make sure that the hadoop jar in hbase/lib is exactly correct.  In my own 
 deployment, there are actually 3 jars and a native library to keep in sync 
 that hbase shouldn't have to know about explicitly.  A similar problem arises 
 when using stock hbase with CDH3 because of the security patches changing the 
 wire protocol.
 All of these problems could be avoided by not assuming that the hadoop 
 library is in the local directory.  Moreover, I think it might be possible to 
 assemble the distribution such that the compile time hadoop dependency is in 
 a cognate directory to lib and is referenced using a default value for 
 HADOOP_HOME.
 Does anybody have any violent antipathies to such a change?

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-3918) When assigning regions to an address, check the regionserver is actually online first

2011-07-23 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070110#comment-13070110
 ] 

stack commented on HBASE-3918:
--

bq. So my query is, is this defect applicable only between 0.89.x and 0.90.x 
versions?

Yes, the servername format has changed between 0.91 and 0.90 and they will not 
be able to communicate.  I'm not sure this issue is specific to 0.89 and 0.90.  
Regardless, I'd think that we should check that a server actually belongs to 
our current cluster before we go to use it, don't you think Ram?

 When assigning regions to an address, check the regionserver is actually 
 online first
 -

 Key: HBASE-3918
 URL: https://issues.apache.org/jira/browse/HBASE-3918
 Project: HBase
  Issue Type: Bug
Reporter: stack

 This one came up in the case where the data was copied from one cluster to 
 another.  The first cluster was running 0.89.x.  The second 0.90.x.  On 
 startup of 0.90.x, it wanted to verify .META. was in the location -ROOT- said 
 it was at, so it tried to connect to the FIRST cluster.  The attempt failed 
 because of mismatched RPCs.  The master then actually aborted.
 {code}
 org.apache.hadoop.hbase.ipc.HBaseRPC$VersionMismatch: Protocol 
 org.apache.hadoop.hbase.ipc.HRegionInterface version mismatch. (client = 27, 
 server = 24)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:424)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:393)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:444)
 at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:349)
 at 
 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:965)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.getCachedConnection(CatalogTracker.java:386)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:285)
 at 
 org.apache.hadoop.hbase.catalog.CatalogTracker.verifyMetaRegionLocation(CatalogTracker.java:486)
 at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:442)
 at 
 org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:389)
 at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:283)
 2011-05-23 22:38:07,720 INFO org.apache.hadoop.hbase.master.HMaster: Aborting
 {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira