Re: Marking fix version

2013-04-06 Thread Stack
On Thu, Apr 4, 2013 at 10:15 AM, Jonathan Hsieh j...@cloudera.com wrote:

 The argument for excluding the 0.96 tag makes sense.  Can we agree to do
 this:

 Commit only to trunk: Mark with 0.98
 Commit to 0.95 and trunk : Mark with 0.98, and 0.95.x
 Commit to 0.94.x and 0.95, and trunk: Mark with 0.98, 0.95.x, and 0.94.x
 Commit to 89-fb: Mark with 89-fb.
 Commit site fixes: no version


I added the above agreement to the refguide:
http://hbase.apache.org/book.html#decisions

I fixed issues that were resolved w/ 0.95.0 adding 0.98.0 as per above.


 Should we remove 0.96 tag for now until the branch appears again?


I removed it.

Good on you lads,
St.Ack


Re: trunk vs. precommit dev env

2013-04-06 Thread Nicolas Liochon
Jenkins has been down for 15 hours now, so I can't change the
configuration...
If nobody objects, I will:
 - set all the builds to hadoop* and jdk 1.6
 - in parallel, do a few tries on jdk 1.7. In particular, I would like to
ensure that we have a recent 64-bit version of jdk 1.7 on all machines
 - see what's flaky with 1.6 (I think Jeffrey's tool will allow us to get
this list after a few days of commit)
 - then set all the trunk builds to 1.7
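
(Side note on the flaky-test point above: one common way to park known-flaky tests
while keeping builds green is a JUnit category marker that surefire can exclude.
A minimal sketch, assuming a hypothetical FlakyTests marker interface -- the
project's actual category name and build wiring may differ:)

{code}
import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical marker interface; a real flaky-test category (if/when added)
// may live elsewhere and use a different name.
interface FlakyTests {}

public class ExampleFlakyTest {

  // Tagging a known-flaky test lets surefire exclude the category from the
  // main build (e.g. via excludedGroups) while a separate job still runs it.
  @Category(FlakyTests.class)
  @Test
  public void testSometimesTimesOut() throws Exception {
    // flaky assertions would go here
  }
}
{code}

A separate Jenkins job could still run the excluded category so flaky tests keep
getting exercised while the main builds stay green.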




On Sat, Apr 6, 2013 at 2:03 AM, Andrew Purtell apurt...@apache.org wrote:

 Also, I would be remiss not to mention that the JDK version used on EC2 Jenkins is
 currently JDK6. I've been meaning to bring up discussion on moving to JDK7
 there. It's already installed on the AMI; the only change necessary is to
 the node configuration in Jenkins.


 On Fri, Apr 5, 2013 at 12:22 PM, Andrew Purtell apurt...@apache.org
 wrote:

  It seems better to me to change precommit builds to jdk7 too. Otherwise
  wouldn't we be relying on an EOL product for build stability? With the
  flaky test category under discussion we can get green builds that way
  without keeping one foot (or both) in the past. Thoughts?
 
  As for moving builds to the hadoop* machines, +1 they seem cleaner.
 
 
  On Friday, April 5, 2013, Stack wrote:
 
  On Fri, Apr 5, 2013 at 11:36 AM, Nicolas Liochon nkey...@gmail.com
  wrote:
 
   -Dsurefire.secondPartThreadCount=2 will make it so only two // tests
  
   Yes. I think we should use jdk 1.6 as well.
  
  
  
  Go for it.  We had a jdk7 discussion a while back and switched trunk build
  to jdk7 as a result.  Stability is more important so feel free to change
  if that will help improve it.
 
 
   You don't want to run the trunk on hadoop* machines? This would be safer
   imho.
  
  
  Go for it.
 
  St.Ack
 
 
 
  --
  Best regards,
 
 - Andy
 
  Problems worthy of attack prove their worth by hitting back. - Piet Hein
  (via Tom White)
 
 


 --
 Best regards,

- Andy

 Problems worthy of attack prove their worth by hitting back. - Piet Hein
 (via Tom White)



Re: trunk vs. precommit dev env

2013-04-06 Thread Ted
Sounds like a nice plan. 

On Apr 6, 2013, at 4:38 AM, Nicolas Liochon nkey...@gmail.com wrote:

 Jenkins has been down for 15 hours now, so I can't change the
 configuration...
 If nobody objects, I will:
 - set all the builds to hadoop* and jdk 1.6
 - in parallel, do a few tries on jdk 1.7. In particular, I would like to
 ensure that we have a recent 64-bit version of jdk 1.7 on all machines
 - see what's flaky with 1.6 (I think Jeffrey's tool will allow us to get
 this list after a few days of commit)
 - then set all the trunk builds to 1.7
 
 
 
 
 On Sat, Apr 6, 2013 at 2:03 AM, Andrew Purtell apurt...@apache.org wrote:
 
 Also, I would be remiss not to mention that the JDK version used on EC2 Jenkins is
 currently JDK6. I've been meaning to bring up discussion on moving to JDK7
 there. It's already installed on the AMI; the only change necessary is to
 the node configuration in Jenkins.
 
 
 On Fri, Apr 5, 2013 at 12:22 PM, Andrew Purtell apurt...@apache.org
 wrote:
 
 It seems better to me to change precommit builds to jdk7 too. Otherwise
 wouldn't we be relying on an EOL product for build stability? With the
 flaky test category under discussion we can get green builds that way
 without keeping one foot (or both) in the past. Thoughts?
 
 As for moving builds to the hadoop* machines, +1 they seem cleaner.
 
 
 On Friday, April 5, 2013, Stack wrote:
 
 On Fri, Apr 5, 2013 at 11:36 AM, Nicolas Liochon nkey...@gmail.com
 wrote:
 
 -Dsurefire.secondPartThreadCount=2 will make it so only two // tests
 
 Yes. I think we should use jdk 1.6 as well.
 Go for it.  We had a jdk7 discussion a while back and switched trunk build
 to jdk7 as a result.  Stability is more important so feel free to change
 if that will help improve it.
 
 
 You don't want to run the trunk on hadoop* machines? This would be safer
 imho.
 Go for it.
 
 St.Ack
 
 
 --
 Best regards,
 
   - Andy
 
 Problems worthy of attack prove their worth by hitting back. - Piet Hein
 (via Tom White)
 
 
 --
 Best regards,
 
   - Andy
 
 Problems worthy of attack prove their worth by hitting back. - Piet Hein
 (via Tom White)
 


Re: Marking fix version

2013-04-06 Thread Jonathan Hsieh
Thanks, Stack!

On Fri, Apr 5, 2013 at 11:18 PM, Stack st...@duboce.net wrote:

 On Thu, Apr 4, 2013 at 10:15 AM, Jonathan Hsieh j...@cloudera.com wrote:

  The argument for excluding the 0.96 tag makes sense.  Can we agree to do
  this:
 
  Commit only to trunk: Mark with 0.98
  Commit to 0.95 and trunk : Mark with 0.98, and 0.95.x
  Commit to 0.94.x and 0.95, and trunk: Mark with 0.98, 0.95.x, and 0.94.x
  Commit to 89-fb: Mark with 89-fb.
  Commit site fixes: no version
 
 
 I added the above agreement to the refguide:
 http://hbase.apache.org/book.html#decisions

 I fixed issues that were resolved w/ 0.95.0 adding 0.98.0 as per above.


  Should we remove 0.96 tag for now until the branch appears again?
 

 I removed it.

 Good on you lads,
 St.Ack




-- 
// Jonathan Hsieh (shay)
// Software Engineer, Cloudera
// j...@cloudera.com


Re: Does compatibility between versions also mean binary compatibility?

2013-04-06 Thread James Taylor
Making an exception for coprocessors on binary compatibility seems reasonable. 
Can this be explicitly documented, if it isn't already, so folks are sure to 
know about it?


James
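
(To make James's first case below concrete: adding a method to an interface that
coprocessor code implements stays binary compatible for callers but not for
implementors -- an implementation compiled against the old interface hits
AbstractMethodError once the new method is invoked. A minimal, self-contained
illustration follows; the interface and class names are hypothetical stand-ins,
not the real HBase RegionScanner API.)

{code}
// Hypothetical stand-ins for illustration only -- not the real HBase interfaces.

// Version A of the interface, which a coprocessor implements and ships as a jar:
interface ScannerLike {
  boolean next(java.util.List<String> results);
}

// Version B adds a method. Existing *callers* keep working (binary compatible
// for callers), but any implementation compiled against version A no longer
// provides the new method. If the server calls it on the old class file at
// runtime, the JVM throws AbstractMethodError -- the break coprocessor authors hit.
interface ScannerLikeV2 {
  boolean next(java.util.List<String> results);
  boolean nextRaw(java.util.List<String> results);  // new method in the later release
}

public class BinaryCompatDemo {
  // A coprocessor-style implementation written against version A:
  static class MyScanner implements ScannerLike {
    @Override
    public boolean next(java.util.List<String> results) {
      results.add("row");
      return false;
    }
  }

  public static void main(String[] args) {
    ScannerLike s = new MyScanner();
    System.out.println(s.next(new java.util.ArrayList<String>()));
    // Recompiling MyScanner against ScannerLikeV2 fails until nextRaw() is
    // implemented; running the old class file against V2 fails at call time.
  }
}
{code}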

On 04/04/2013 05:53 PM, Andrew Purtell wrote:

Thanks for taking the time for the thoughtful feedback James.


- between 0.94.3 and 0.94.4, new methods were introduced on
RegionScanner. Often coprocessors will have their own implementation of
these so that they can aggregate in the postScannerOpen. Though this broke
binary compat, it improved scan performance as well. Where does the binary
compatible line stop?

- between 0.94.3 and 0.94.4, the class loading changed for coprocessors.
If a coprocessor was on the classpath, it didn't matter what the jar path
was, it would load. In 0.94.4, that was no longer the case - if the jar
path was invalid, the coprocessor would no longer load. Though this broke
compatibility, it was a good cleanup for the class loading logic. Call it a
bug fix or a change in behavior, but either way it was an incompatible
change. Does a change in behavior that causes incompatibilities still meet
the binary compatibility criteria?

As owner of this space, I will claim that no binary compatibility nor
even interface compatibility should be expected at this time even among
point releases. This is for two reasons, the first being most important:

 1) The Coprocessor API caters to only a few users currently, so feels
the pull of the gravity of these major (new) users, e.g. security, Francis
Liu's work, Phoenix. We can expect this to lessen naturally over time as --
exactly here -- users like Phoenix become satisfied with the status quo and
begin to push back. Your input going forward will be valuable.

 2) Coprocessors are an extension surface for internals



On Thu, Apr 4, 2013 at 4:29 PM, James Taylor jtay...@salesforce.com wrote:


The binary compat is a slippery slope. It'd be a bummer if we couldn't
take advantage of all the innovation you guys are doing. At the same time,
it's tough to require the Phoenix user community, for example, to upgrade
their HBase servers to be able to move to the latest version of Phoenix. I
don't know what the right answer is, but here are a couple of concrete
cases:
- between 0.94.3 and 0.94.4, new methods were introduced on RegionScanner.
Often coprocessors will have their own implementation of these so that they
can aggregate in the postScannerOpen. Though this broke binary compat, it
improved scan performance as well. Where does the binary compatible line
stop?
- between 0.94.3 and 0.94.4, the class loading changed for coprocessors.
If a coprocessor was on the classpath, it didn't matter what the jar path
was, it would load. In 0.94.4, that was no longer the case - if the jar
path was invalid, the coprocessor would no longer load. Though this broke
compatibility, it was a good cleanup for the class loading logic. Call it a
bug fix or a change in behavior, but either way it was an incompatible
change. Does a change in behavior that causes incompatibilities still meet
the binary compatibility criteria?
- between 0.94.4 and 0.94.5, the essential column family feature was
introduced. This is an example of one that is binary compatible. We're able
to take advantage of the feature and maintain binary compatibility with
0.94.4 (in which case the feature simply wouldn't be available).

Maybe if we just explicitly identified compatibility issues, that would be
a good start? We'd likely need a way to find them, though.

 James

On 04/04/2013 03:59 PM, lars hofhansl wrote:


I agree we need both, but I'm afraid that ship has sailed.
It's not something we paid a lot of attention to especially being
forward-binary-compatible. I would guess that there will be many more of
these issues.

Also, we have to qualify this statement somewhere. If you extend
HRegionServer you cannot expect compatibility between releases. Of course
that is silly, but it serves the point I am making.

For client-visible classes (such as in this case) we should make it work;
we identified issues with Filters and Coprocessors in the past and kept
them binary compatible on a best-effort basis.


TL;DR: Let's fix this issue, and be wary of more such issues.


-- Lars



From: Andrew Purtell apurt...@apache.org
To: dev@hbase.apache.org
Sent: Thursday, April 4, 2013 3:21 PM
Subject: Re: Does compatibility between versions also mean binary compatibility?

Compatible implies both to my understanding of the term, unless qualified.

I don't think we should qualify it. This looks like a regression to me.


On Thu, Apr 4, 2013 at 1:20 PM, Jean-Daniel Cryans jdcry...@apache.org

wrote:

  tl;dr should two compatible versions be considered both wire and
binary compatible or just the former?

Hey devs,

0.92 is compatible with 0.94, meaning that you can run a client for
either against the other and you can roll restart from 0.92 to 0.94.

What about binary 

Recovery failure during single Get()

2013-04-06 Thread Varun Sharma
Hi,

We have been observing this bug for a while when we use the HTable.get() operation to
do a single Get call using the Result get(Get get) API, and I thought it
best to bring it up.

Steps to reproduce this bug:
1) Gracefully restart a region server, causing regions to get redistributed.
2) Client call to this region keeps failing since Meta Cache is never
purged on the client for the region that moved.

Reason behind the bug:
1) Client continues to hit the old region server.
2) The old region server throws NotServingRegionException which is not
handled correctly and the META cache entries are never purged for that
server causing the client to keep hitting the old server.

The reason lies in ServerCallable code since we only purge META cache
entries when there is a RetriesExhaustedException, SocketTimeoutException
or ConnectException. However, there is no case check for
NotServingRegionException(s).

Why is this not a problem for Scan(s) and Put(s) ?

a) If a region server is not hosting a region/scanner, then an
UnknownScannerException is thrown which causes a relocateRegion() call
causing a refresh of the META cache for that particular region.
b) For put(s), the processBatchCallback() interface in HConnectionManager
is used which clears out META cache entries for all kinds of exceptions
except DoNotRetryException.

Created HBASE-8285 for this.
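
(A minimal, self-contained sketch of the idea behind the fix -- treat
NotServingRegionException like the other "relocate" exceptions and drop the
cached region location before retrying. The types below are stand-ins for
illustration; the real ServerCallable/HConnectionManager code in 0.94 has a
different shape.)

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in types to keep the sketch self-contained; the real HBase client
// classes (ServerCallable, HConnectionManager, NotServingRegionException)
// look different -- this only illustrates the retry/purge idea.
public class GetRetrySketch {

  static class NotServingRegionException extends Exception {}

  // Models the client-side META cache: region key -> server location.
  static final Map<String, String> regionLocationCache = new ConcurrentHashMap<>();

  interface RegionCall<T> {
    T call(String serverLocation) throws Exception;
  }

  static <T> T callWithRetries(String regionKey, RegionCall<T> callable, int maxRetries)
      throws Exception {
    for (int attempt = 0; ; attempt++) {
      // Re-locate the region if the cache entry was purged.
      String location = regionLocationCache.computeIfAbsent(regionKey, k -> lookupInMeta(k));
      try {
        return callable.call(location);
      } catch (NotServingRegionException e) {
        // The missing piece in the reported bug: purge the stale cache entry
        // so the next attempt re-reads META instead of re-hitting the old server.
        regionLocationCache.remove(regionKey);
        if (attempt >= maxRetries) {
          throw e;
        }
      }
    }
  }

  static String lookupInMeta(String regionKey) {
    // Placeholder for a real META lookup.
    return "regionserver-b:60020";
  }

  public static void main(String[] args) throws Exception {
    regionLocationCache.put("table,row1", "regionserver-a:60020");  // stale entry
    String result = callWithRetries("table,row1", location -> {
      if (location.startsWith("regionserver-a")) {
        throw new NotServingRegionException();  // old server no longer hosts the region
      }
      return "value-from-" + location;
    }, 3);
    System.out.println(result);  // succeeds after the stale entry is purged
  }
}
{code}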


[jira] [Created] (HBASE-8285) HBaseClient never recovers for single HTable.get() calls when regions move

2013-04-06 Thread Varun Sharma (JIRA)
Varun Sharma created HBASE-8285:
---

 Summary: HBaseClient never recovers for single HTable.get() calls 
when regions move
 Key: HBASE-8285
 URL: https://issues.apache.org/jira/browse/HBASE-8285
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.94.6.1
Reporter: Varun Sharma
Assignee: Varun Sharma
Priority: Critical
 Fix For: 0.94.7


Steps to reproduce this bug:
1) Gracefully restart a region server, causing regions to get redistributed.
2) Client call to this region keeps failing since Meta Cache is never purged on 
the client for the region that moved.

Reason behind the bug:
1) Client continues to hit the old region server.
2) The old region server throws NotServingRegionException which is not handled 
correctly and the META cache entries are never purged for that server causing 
the client to keep hitting the old server.

The reason lies in ServerCallable code since we only purge META cache entries 
when there is a RetriesExhaustedException, SocketTimeoutException or 
ConnectException. However, there is no case check for 
NotServingRegionException(s).

Why is this not a problem for Scan(s) and Put(s) ?

a) If a region server is not hosting a region/scanner, then an 
UnknownScannerException is thrown which causes a relocateRegion() call causing 
a refresh of the META cache for that particular region.
b) For put(s), the processBatchCallback() interface in HConnectionManager is 
used which clears out META cache entries for all kinds of exceptions except 
DoNotRetryException.


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: Recovery failure during single Get()

2013-04-06 Thread Ted Yu
Thanks for the analysis.

I left some comments on HBASE-8285.


On Sat, Apr 6, 2013 at 1:36 PM, Varun Sharma va...@pinterest.com wrote:

 Hi,

 We have been observing this bug for a while when we use the HTable.get() operation to
 do a single Get call using the Result get(Get get) API, and I thought it
 best to bring it up.

 Steps to reproduce this bug:
 1) Gracefully restart a region server, causing regions to get redistributed.
 2) Client call to this region keeps failing since Meta Cache is never
 purged on the client for the region that moved.

 Reason behind the bug:
 1) Client continues to hit the old region server.
 2) The old region server throws NotServingRegionException which is not
 handled correctly and the META cache entries are never purged for that
 server causing the client to keep hitting the old server.

 The reason lies in ServerCallable code since we only purge META cache
 entries when there is a RetriesExhaustedException, SocketTimeoutException
 or ConnectException. However, there is no case check for
 NotServingRegionException(s).

 Why is this not a problem for Scan(s) and Put(s) ?

 a) If a region server is not hosting a region/scanner, then an
 UnknownScannerException is thrown which causes a relocateRegion() call
 causing a refresh of the META cache for that particular region.
 b) For put(s), the processBatchCallback() interface in HConnectionManager
 is used which clears out META cache entries for all kinds of exceptions
 except DoNotRetryException.

 Created HBASE-8285 for this.



Build failed in Jenkins: HBase-0.94 #949

2013-04-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/HBase-0.94/949/changes

Changes:

[larsh] HBASE-7961 truncate on disabled table should throw 
TableNotEnabledException. (rajeshbabu)

[larsh] HBASE-8208 In some situations data is not replicated to slaves when 
deferredLogSync is enabled (Jeffrey Zhong)

[tedyu] HBASE-8276 Backport hbase-6738 to 0.94 Too aggressive task 
resubmission from the distributed log manager (Jeffrey)

--
[...truncated 5969 lines...]
Running org.apache.hadoop.hbase.regionserver.wal.TestLogRolling
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 248.195 sec
Running org.apache.hadoop.hbase.coprocessor.TestCoprocessorEndpoint
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 36.546 sec
Running org.apache.hadoop.hbase.coprocessor.TestWALObserver
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.178 sec
Running org.apache.hadoop.hbase.coprocessor.TestMasterObserver
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.237 sec
Running org.apache.hadoop.hbase.regionserver.wal.TestHLogSplitCompressed
Tests run: 30, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 294.011 sec
Running 
org.apache.hadoop.hbase.coprocessor.example.TestZooKeeperScanPolicyObserver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.328 sec
Running 
org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithAbort
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.303 sec
Running 
org.apache.hadoop.hbase.coprocessor.TestMasterCoprocessorExceptionWithRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.387 sec
Running org.apache.hadoop.hbase.coprocessor.TestBigDecimalColumnInterpreter
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.774 sec
Running org.apache.hadoop.hbase.coprocessor.example.TestBulkDeleteProtocol
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 78.805 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionObserverBypass
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.22 sec
Running org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 89.475 sec
Running org.apache.hadoop.hbase.procedure.TestZKProcedureControllers
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.957 sec
Running org.apache.hadoop.hbase.procedure.TestZKProcedure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.779 sec
Running org.apache.hadoop.hbase.mapred.TestTableInputFormat
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.34 sec
Running org.apache.hadoop.hbase.TestGlobalMemStoreSize
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.309 sec
Running org.apache.hadoop.hbase.mapreduce.TestHLogRecordReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.752 sec
Running org.apache.hadoop.hbase.mapred.TestTableMapReduce
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 149.389 sec
Running org.apache.hadoop.hbase.mapreduce.TestWALPlayer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.283 sec
Running org.apache.hadoop.hbase.mapreduce.TestTableMapReduce
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 182.554 sec
Running org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 185.303 sec
Running org.apache.hadoop.hbase.mapreduce.TestImportTsv
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 319.481 sec
Running org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 546.72 sec
Running org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 173.228 sec
Running org.apache.hadoop.hbase.mapreduce.TestTimeRangeMapRed
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 83.323 sec
Running org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.369 sec
Running org.apache.hadoop.hbase.mapreduce.TestMultiTableInputFormat
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 537.15 sec
Running org.apache.hadoop.hbase.mapreduce.TestImportExport
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 95.494 sec
Running org.apache.hadoop.hbase.snapshot.TestSnapshotDescriptionUtils
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.565 sec
Running org.apache.hadoop.hbase.snapshot.TestExportSnapshot
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.673 sec
Running org.apache.hadoop.hbase.TestAcidGuarantees
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 130.463 sec
Running org.apache.hadoop.hbase.snapshot.TestFlushSnapshotFromClient
Tests run: 5, Failures: 0, Errors: 0, 

Build failed in Jenkins: hbase-0.95-on-hadoop2 #58

2013-04-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/hbase-0.95-on-hadoop2/58/changes

Changes:

[larsh] HBASE-8208 In some situations data is not replicated to slaves when 
deferredLogSync is enabled (Jeffrey Zhong)

--
[...truncated 24607 lines...]
Running org.apache.hadoop.hbase.rest.TestGzipFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.87 sec
Forking command line: /bin/sh -c cd 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server
  /x1/jenkins/tools/java/jdk1.6.0_32-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefirebooter7866959265816345364.jar
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire67875129693572645tmp
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire_1868979617782310544347tmp
Running org.apache.hadoop.hbase.rest.TestRowResource
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.321 sec
Forking command line: /bin/sh -c cd 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server
  /x1/jenkins/tools/java/jdk1.6.0_32-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefirebooter1374223297700856324.jar
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire8083076525375087404tmp
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire_1875008473647838768726tmp
Running org.apache.hadoop.hbase.rest.TestVersionResource
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.569 sec
Forking command line: /bin/sh -c cd 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server
  /x1/jenkins/tools/java/jdk1.6.0_32-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefirebooter226132973184595.jar
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire3442481146896969102tmp
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire_1885009970012873498923tmp
Running org.apache.hadoop.hbase.rest.TestScannerResource
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.181 sec
Forking command line: /bin/sh -c cd 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server
  /x1/jenkins/tools/java/jdk1.6.0_32-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefirebooter2840930448828142524.jar
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire3309863390981821147tmp
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire_1897540040611508167303tmp
Running org.apache.hadoop.hbase.rest.TestStatusResource
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.352 sec
Forking command line: /bin/sh -c cd 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server
  /x1/jenkins/tools/java/jdk1.6.0_32-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefirebooter6791055463280177718.jar
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire994992704614973002tmp
 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefire_1905893167283232817186tmp
Running org.apache.hadoop.hbase.rest.TestScannersWithFilters
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.412 sec
Forking command line: /bin/sh -c cd 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server
  /x1/jenkins/tools/java/jdk1.6.0_32-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
/x1/jenkins/jenkins-slave/workspace/hbase-0.95-on-hadoop2/0.95-on-hadoop2/hbase-server/target/surefire/surefirebooter2257109758895489859.jar
 

Build failed in Jenkins: HBase-TRUNK-on-Hadoop-2.0.0 #480

2013-04-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/480/changes

Changes:

[stack] Add what we decided about how to set versions in JIRA when resolving an 
issue

[larsh] HBASE-8208 In some situations data is not replicated to slaves when 
deferredLogSync is enabled (Jeffrey Zhong)

--
[...truncated 33010 lines...]
Running org.apache.hadoop.hbase.regionserver.TestStoreScanner
Forking command line: /bin/sh -c cd 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server
  /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter6494636941603525547.jar
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire4704488171917193588tmp
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_199411891237726640773tmp
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.777 sec
Forking command line: /bin/sh -c cd 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server
  /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter5540107373024491519.jar
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire3758990334260250757tmp
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_2007827814663449539287tmp
Running org.apache.hadoop.hbase.regionserver.TestPriorityRpc
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.698 sec
Forking command line: /bin/sh -c cd 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server
  /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter5083715830977522275.jar
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire4708968678576836691tmp
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_2019121905330341276996tmp
Running org.apache.hadoop.hbase.regionserver.TestParallelPut
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.129 sec
Forking command line: /bin/sh -c cd 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server
  /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter6140205829632159217.jar
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire3382088154756719731tmp
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_2026339034516406978267tmp
Running org.apache.hadoop.hbase.regionserver.TestMemStore
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.81 sec
Forking command line: /bin/sh -c cd 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server
  /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter2246513797664575780.jar
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire4629233969602267164tmp
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire_2031722903572311291180tmp
Running org.apache.hadoop.hbase.regionserver.TestClusterId
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.683 sec
Forking command line: /bin/sh -c cd 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server
  /home/jenkins/tools/java/jdk1.6.0_27-32/jre/bin/java -enableassertions 
-Xmx1900m -Djava.security.egd=file:/dev/./urandom 
-Djava.net.preferIPv4Stack=true -jar 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefirebooter8810954524821114023.jar
 
https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/ws/trunk/hbase-server/target/surefire/surefire7631577562241655875tmp
 

[jira] [Created] (HBASE-8287) TestRegionMergeTransactionOnCluster failed in trunk build #4010

2013-04-06 Thread chunhui shen (JIRA)
chunhui shen created HBASE-8287:
---

 Summary: TestRegionMergeTransactionOnCluster failed in trunk build 
#4010
 Key: HBASE-8287
 URL: https://issues.apache.org/jira/browse/HBASE-8287
 Project: HBase
  Issue Type: Bug
  Components: master
Reporter: chunhui shen
Assignee: chunhui shen
 Fix For: 0.95.2, 0.98.0


From the log of trunk build #4010:
{code}
2013-04-04 05:45:59,396 INFO  [MASTER_TABLE_OPERATIONS-quirinus.apache.org,53514,1365054344859-0] handler.DispatchMergingRegionHandler(157):
Cancel merging regions testCleanMergeReference,,1365054353296.bf3d60360122d6c83a246f5f96c2cdd1., testCleanMergeReference,testRow0020,1365054353302.72fbc04566e78aa6732531296256a5aa., because can't move them together after 842ms

2013-04-04 05:45:59,396 INFO  [hbase-am-zkevent-worker-pool-2-thread-1] master.AssignmentManager$4(1164):
The master has opened the region testCleanMergeReference,testRow0020,1365054353302.72fbc04566e78aa6732531296256a5aa. that was online on quirinus.apache.org,45718,1365054345790
{code}

There's a small probability that we fail to move the merging regions together to the same 
regionserver.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira