I downloaded the source for RC0.
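
For anyone else checking the RC, a minimal sketch of verifying the downloaded source tarball (the tarball name is an assumption; use whatever is actually listed in the dist directory):

```shell
# Download Enis's code signing key (URL from the announcement) and import it.
curl -O https://people.apache.org/keys/committer/enis.asc
gpg --import enis.asc

# Verify the detached signature against the source tarball.
# hbase-1.0.2-src.tar.gz is an assumed name; check the dist directory listing.
gpg --verify hbase-1.0.2-src.tar.gz.asc hbase-1.0.2-src.tar.gz
```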

TestShell passed.
Running it again just to confirm.
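
In case it helps anyone reproduce, TestShell can also be run on its own instead of through the full suite; a sketch, assuming the standard source layout where the shell tests live in the hbase-shell module:

```shell
# From the unpacked source root: build the modules hbase-shell depends on,
# then run only TestShell in that module.
mvn clean install -DskipTests -pl hbase-shell -am
mvn test -Dtest=TestShell -pl hbase-shell
```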

FYI

On Wed, Jul 8, 2015 at 11:57 AM, Enis Söztutar <enis....@gmail.com> wrote:

> Thanks JM.
>
> In the jenkins run for the RC, there is only 1 failure, and it is
> TestShell:
>
>
> https://builds.apache.org/view/All/job/HBase-1.0.2RC0/lastCompletedBuild/testReport/
>
>
> test_The_get/put_methods_should_work_for_data_written_with_Visibility(Hbase::VisibilityLabelsAdminMethodsTest):
> NativeException: junit.framework.AssertionFailedError: Waiting timed
> out after [10,000] msec
>
>
> HBASE-13084 or a simple timeout issue?
>
>
> Enis
>
>
> On Wed, Jul 8, 2015 at 11:08 AM, Jean-Marc Spaggiari <
> jean-m...@spaggiari.org> wrote:
>
> > Tests are still in progress, but again, I'm not able to complete a test
> > suite. Ran on 5 different servers with different hardware.
> >
> > Running with export JAVA_HOME=/usr/local/jdk1.7.0_45/; export
> > MAVEN_OPTS="-Xmx6100M -XX:-UsePerfData"; mvn clean; mvn -PrunAllTests
> > -DreuseForks=false install -Dmaven.test.redirectTestOutputToFile=true
> > -Dsurefire.rerunFailingTestsCount=4 -Dit.test=noItTest
> >
> > One server failed on all of the following; it is the only one running on SSD:
> >
> > Failed tests:
> >
> >
> >
> TestDistributedLogSplitting.testSameVersionUpdatesRecoveryWithCompaction:1374
> > expected:<2000> but was:<1862>
> >
> > Flaked tests:
> >
> >
> org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent(org.apache.hadoop.hbase.client.TestSnapshotCloneIndependence)
> >   Run 1:
> >
> >
> TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent:182->runTestSnapshotDeleteIndependent:424
> > » RetriesExhausted
> >   Run 2: PASS
> >
> >
> >
> org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface.testLegacyRecovery(org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface)
> >   Run 1:
> >
> TestRegionObserverInterface.testLegacyRecovery:678->verifyMethodResult:744
> > Result of
> >
> >
> org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver$Legacy.getCtPreWALRestore
> > is expected to be 1, while we get 0
> >   Run 2: PASS
> >
> >
> >
> org.apache.hadoop.hbase.master.handler.TestEnableTableHandler.testEnableTableWithNoRegionServers(org.apache.hadoop.hbase.master.handler.TestEnableTableHandler)
> >   Run 1: TestEnableTableHandler.testEnableTableWithNoRegionServers:101
> > Waiting timed out after [60,000] msec
> >   Run 2: PASS
> >
> >
> >
> org.apache.hadoop.hbase.replication.TestReplicationEndpoint.testReplicationEndpointReturnsFalseOnReplicate(org.apache.hadoop.hbase.replication.TestReplicationEndpoint)
> >   Run 1:
> >
> TestReplicationEndpoint.testReplicationEndpointReturnsFalseOnReplicate:145
> > Waiting timed out after [60,000] msec
> >   Run 2: PASS
> >
> >
> >
> >
> >
> > Another server fails on that:
> > Failed tests:
> >
> >
> org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface.testLegacyRecovery(org.apache.hadoop.hbase.coprocessor.TestRegionObserverInterface)
> >   Run 1:
> >
> TestRegionObserverInterface.testLegacyRecovery:678->verifyMethodResult:744
> > Result of
> >
> >
> org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver$Legacy.getCtPreWALRestore
> > is expected to be 1, while we get 0
> >   Run 2:
> >
> TestRegionObserverInterface.testLegacyRecovery:678->verifyMethodResult:744
> > Result of
> >
> >
> org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver$Legacy.getCtPreWALRestore
> > is expected to be 1, while we get 0
> >   Run 3:
> >
> TestRegionObserverInterface.testLegacyRecovery:678->verifyMethodResult:744
> > Result of
> >
> >
> org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver$Legacy.getCtPreWALRestore
> > is expected to be 1, while we get 0
> >   Run 4:
> >
> TestRegionObserverInterface.testLegacyRecovery:678->verifyMethodResult:744
> > Result of
> >
> >
> org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver$Legacy.getCtPreWALRestore
> > is expected to be 1, while we get 0
> >   Run 5:
> >
> TestRegionObserverInterface.testLegacyRecovery:678->verifyMethodResult:744
> > Result of
> >
> >
> org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver$Legacy.getCtPreWALRestore
> > is expected to be 1, while we get 0
> >
> >
> >
> >
> >
> > And 3 servers fail on that:
> > Tests in error:
> >   TestShell.testRunShellTests:81 » EvalFailed (RuntimeError) Shell unit
> > tests fa...
> >
> >
> >
> > I'm re-running them, but since 3 failed with the same error, someone might
> > want to look at it. I will look at the logs after the next run.
> >
> > JM
> >
> >
> >
> >
> >
> >
> > 2015-07-07 16:30 GMT-04:00 Enis Söztutar <e...@apache.org>:
> >
> > > I am pleased to announce that the first release candidate for the 1.0.2
> > > release (HBase-1.0.2RC0) is available for download at
> > > https://dist.apache.org/repos/dist/dev/hbase/hbase-1.0.2RC0/
> > >
> > > Maven artifacts are also available in the temporary repository
> > > https://repository.apache.org/content/repositories/orgapachehbase-1088
> > >
> > > Signed with my code signing key E964B5FF, which can be found here:
> > > https://people.apache.org/keys/committer/enis.asc
> > >
> > > Signed tag in the repository can be found here:
> > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=tag;h=11de648322d50c509e15373b3e35db1020a7d2c1
> > >
> > >
> > > HBase 1.0.2 is the next “patch” release in the 1.0.x release line and
> > > supersedes 1.0.0 and 1.0.1.
> > > According to the HBase’s semantic version guide (See [1]), the release
> > > candidate is
> > > source and binary compatible with 1.0.x for client applications and
> > server
> > > side libraries
> > > (coprocessors, filters, etc).
> > >
> > > Binary / source compatibility report of 1.0.2RC0 compared to 1.0.1 can
> be
> > > reached here:
> > > https://people.apache.org/~enis/1.0.1_1.0.2RC0_compat_report.html
> > >
> > >
> > > The 1.0.2 release contains 123 fixes on top of the 1.0.1 release. Most
> > > of the changes are bug fixes or test fixes, except for the following:
> > >
> > > ** Improvement
> > >     * [HBASE-12415] - Add add(byte[][] arrays) to Bytes.
> > >     * [HBASE-12957] - region_mover#isSuccessfulScan may be extremely
> slow
> > > on region with lots of expired data
> > >     * [HBASE-13247] - Change BufferedMutatorExample to use addColumn()
> > > since add() is deprecated
> > >     * [HBASE-13344] - Add enforcer rule that matches our JDK support
> > > statement
> > >     * [HBASE-13366] - Throw DoNotRetryIOException instead of read only
> > > IOException
> > >     * [HBASE-13420] - RegionEnvironment.offerExecutionLatency Blocks
> > > Threads under Heavy Load
> > >     * [HBASE-13431] - Allow to skip store file range check based on
> > column
> > > family while creating reference files in
> HRegionFileSystem#splitStoreFile
> > >     * [HBASE-13550] - [Shell] Support unset of a list of table
> attributes
> > >     * [HBASE-13761] - Optimize FuzzyRowFilter
> > >     * [HBASE-13780] - Default to 700 for HDFS root dir permissions for
> > > secure deployments
> > >     * [HBASE-13828] - Add group permissions testing coverage to AC.
> > >     * [HBASE-13925] - Use zookeeper multi to clear znodes in
> > > ZKProcedureUtil
> > >
> > > ** New Feature
> > >     * [HBASE-13057] - Provide client utility to easily enable and
> disable
> > > table replication
> > >
> > > ** Task
> > >     * [HBASE-13764] - Backport HBASE-7782
> > > (HBaseTestingUtility.truncateTable() not acting like CLI) to branch-1.x
> > >     * [HBASE-13799] - javadoc how Scan gets polluted when used; if you
> > set
> > > attributes or ask for scan metrics
> > >
> > > ** Sub-task
> > >     * [HBASE-7847] - Use zookeeper multi to clear znodes
> > >     * [HBASE-13035] - [0.98] Backport HBASE-12867 - Shell does not
> > support
> > > custom replication endpoint specification
> > >     * [HBASE-13201] - Remove HTablePool from thrift-server
> > >     * [HBASE-13496] - Make
> > > Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
> inlineable
> > >     * [HBASE-13497] - Remove MVCC stamps from HFile when that is safe
> > >     * [HBASE-13563] - Add missing table owner to AC tests.
> > >     * [HBASE-13579] - Avoid isCellTTLExpired() for NO-TAG cases
> > >     * [HBASE-13937] - Partially revert HBASE-13172
> > >     * [HBASE-13983] - Doc how the oddball HTable methods getStartKey,
> > > getEndKey, etc. will be removed in 2.0.0
> > >     * [HBASE-14003] - work around jdk8 spec bug in WALPerfEval
> > >
> > >
> > > Full list of the issues can be found at
> > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12329865&styleName=Html&projectId=12310753&Create=Create
> > >
> > >
> > > Compatibility
> > > -------------
> > > This release (1.0.2) is source, wire and binary compatible with all
> > > previous 1.0.x releases. Client
> > > applications do not have to be recompiled with the new version (unless
> > new
> > > API is used)
> > > if upgrading from a previous 1.0.x. It is a drop-in replacement.
> > >
> > > See release notes for 1.0.0 [2] for compatibility with earlier
> > > versions (0.94, 0.96, 0.98).
> > > Compatibility of 1.0.2 with earlier versions is the same as in 1.0.0.
> > >
> > > Source Compatibility:
> > > Client side code in HBase-1.0.x is (mostly) source compatible with
> 0.98.x
> > > versions. Some minor API changes might be needed from the client side.
> > >
> > > Wire Compatibility:
> > > HBase-1.0.x release is wire compatible with 0.98.x releases. Running
> > > clients and servers on different versions should be possible as long as
> > > new features are not used.
> > > A rolling upgrade from 0.98.x clusters to 1.0.x is supported as well.
> > > Rolling upgrade from 0.96 directly to 1.0.x is not supported.
> > > 1.0.x is NOT wire compatible with earlier releases (0.94, etc).
> > >
> > > Binary Compatibility:
> > > Binary compatibility at the Java API layer with earlier versions
> (0.98.x,
> > > 0.96.x and 0.94.x) is not supported. You may have to recompile your
> > client
> > > code and any server side code (coprocessors, filters etc) referring to
> > > hbase jars.
> > >
> > >
> > > Upgrading
> > > ---------
> > > This release is rolling upgradable from earlier 1.0.x releases.
> > >
> > > See [2] and [3] for upgrade instructions from earlier versions.
> Upgrading
> > > to 1.0.2 is similar
> > > to upgrading to 1.0.0 as documented in [3].
> > >
> > > From 0.98.x : Upgrade from 0.98.x in regular upgrade or rolling upgrade
> > > fashion
> > > is supported.
> > >
> > > From 0.96.x : Upgrade from 0.96.x is supported with a shutdown and
> > restart
> > > of
> > > the cluster.
> > >
> > > From 0.94.x : Upgrade from 0.94.x is supported similar to upgrade from
> > > 0.94 -> 0.96. The upgrade script should be run to rewrite cluster level
> > > metadata.
> > > See [3] for details.
> > >
> > >
> > > Supported Hadoop versions
> > > -------------------------
> > > 1.0.x releases support only Hadoop-2.x. Hadoop-2.4.x, Hadoop-2.5.x
> > > and Hadoop-2.6.x
> > > releases are the most tested hadoop releases and we recommend running
> > with
> > > those
> > > versions (or later versions). Earlier Hadoop-2 based releases
> > > (hadoop-2.2.x and 2.3.x) are not tested to the full extent. More
> > > information can be found at [4].
> > >
> > >
> > > Supported Java versions
> > > -------------------------
> > > 1.0.x releases only support JDK7. JDK8 support is experimental. More
> > > information can be
> > > found at [5].
> > >
> > >
> > > Voting
> > > ------
> > > Please try to test and vote on this release by July 14 2015 11:59PM
> PDT.
> > >
> > > [] +1 Release the artifacts as 1.0.2
> > > [] -1 DO NOT release the artifacts as 1.0.2, because...
> > >
> > >
> > > References
> > > ----------
> > > [1] https://hbase.apache.org/book/upgrading.html#hbase.versioning
> > > [2] http://s.apache.org/hbase-1.0.0-release-notes
> > > [3] https://hbase.apache.org/book/upgrade1.0.html#upgrade1.0.changes
> > > [4] https://hbase.apache.org/book/configuration.html#hadoop
> > > [5] https://hbase.apache.org/book/configuration.html#java
> > >
> > >
> > > Thanks all who worked on this release!
> > >
> > > Enis
> > >
> >
>
