+1 (non-binding)

mvn clean install -DskipTests -Dhbase.profile=1.3 (successful)
mvn clean install -DskipTests -Dhbase.profile=1.6 (successful)

Tested various upserts and queries on various types of tables. (successful)
Ran pherf (successful):
./bin/pherf-standalone.py -l -q -z localhost -schemaFile <schema-file-name>.sql -scenarioFile <scenario-file-name>.xml

Nit: ./bin/phoenix_utils.py fails with the following error:
testjar:
Traceback (most recent call last):
  File "./bin/phoenix_utils.py", line 215, in <module>
    print("phoenix_queryserver_jar:", phoenix_queryserver_jar)
NameError: name 'phoenix_queryserver_jar' is not defined
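
From the traceback, the print at line 215 is reached without
phoenix_queryserver_jar ever being assigned on that code path. A minimal
guard, just a sketch assuming the variable is simply never set here (the
real fix may belong wherever the other jar paths get resolved):

    # in bin/phoenix_utils.py, just before the failing print
    if 'phoenix_queryserver_jar' not in globals():
        phoenix_queryserver_jar = ""  # fall back to empty when not found
    print("phoenix_queryserver_jar:", phoenix_queryserver_jar)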


Ran the following MR jobs:
/hbase/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool -op /tmp/indexing.log -v AFTER -dt <table-name> -it <index-table-name> (successful)
/hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter <table-name> (successful)

I had also executed perf workloads against the RC1 build.
The results and analysis can be found here:
https://docs.google.com/document/d/19QHG6vvdxwCNkT3nqu8N-ift_1OIn161pqtJx1UcXiY/edit#

Thanks
Jacob

On Mon, Feb 8, 2021 at 6:48 PM Ankit Singhal <an...@apache.org> wrote:

> +1 (binding)
>
>  * Download source and build - OK
>  * Ran some DDLs and DMLs on fresh cluster - OK
>  * Signatures and checksums for src and bin (1.3) - OK (see the sketch
>    below)
>  * apache-rat:check - SUCCESS
>  * CHANGES and RELEASENOTES - OK
>  * Unit tests (mvn clean install -Dit.test=noITs; the code-coverage
> check failed for me) - OK
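>
> For anyone re-checking the sha512 digests, here is a minimal sketch
> (assumptions: a sha512sum-style sidecar "<artifact>.sha512" whose first
> token is the hex digest, and the script name check_sha512.py is made up):
>
>     import hashlib, sys
>
>     # usage: python check_sha512.py <artifact> <artifact>.sha512
>     digest = hashlib.sha512()
>     with open(sys.argv[1], "rb") as f:
>         # stream in 1 MiB chunks so large tarballs don't sit in memory
>         for chunk in iter(lambda: f.read(1 << 20), b""):
>             digest.update(chunk)
>     expected = open(sys.argv[2]).read().split()[0].lower()
>     print("OK" if digest.hexdigest() == expected else "MISMATCH")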
>
> Regards,
> Ankit Singhal
>
> On Mon, Feb 8, 2021 at 1:40 PM Chinmay Kulkarni
> <chinmayskulka...@gmail.com> wrote:
>
> > +1 (Binding)
> >
> > Tested against hbase-1.3 and hbase-1.6
> >
> > * Build from source (mvn clean install -DskipTests
> > -Dhbase.profile=1.3/1.6): OK
> > * Green build: OK (thanks for triggering this Viraj)
> > * Did some basic DDL, queries, upserts, deletes and everything looked
> > fine: OK (a minimal smoke-test sketch follows this list)
> > * Did some upgrade testing: created tables, views, and indices from an
> > old client, then queried and upserted. Then upgraded to 4.16 metadata,
> > queried and upserted from an old client, then upgraded the client and
> > queried and upserted from the new client: OK
> > * Verified checksums: OK
> > * Verified signatures: OK
> > * mvn clean apache-rat:check: OK
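> >
> > For the DDL/DML smoke tests above, a minimal sketch of the kind of
> > round-trip used; this assumes a Phoenix Query Server on localhost:8765
> > and the phoenixdb driver, which is just one way to drive it from
> > Python (the table name is made up):
> >
> >     import phoenixdb
> >
> >     # connect through the query server; autocommit so each UPSERT lands
> >     conn = phoenixdb.connect("http://localhost:8765/", autocommit=True)
> >     cur = conn.cursor()
> >     cur.execute("CREATE TABLE IF NOT EXISTS smoke "
> >                 "(id BIGINT PRIMARY KEY, v VARCHAR)")
> >     cur.execute("UPSERT INTO smoke VALUES (1, 'a')")
> >     cur.execute("SELECT id, v FROM smoke")
> >     print(cur.fetchall())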
> >
> > On Sun, Feb 7, 2021 at 10:03 PM Viraj Jasani <vjas...@apache.org> wrote:
> >
> > > +1 (non-binding)
> > >
> > > Clean build:
> > > https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/29/
> > >
> > > Tested against HBase-1.6 profile:
> > >
> > > * Checksum : ok
> > > * Rat check (1.8.0_171): ok
> > >  - mvn clean apache-rat:check
> > > * Built from source (1.8.0_171): ok
> > >  - mvn clean install -DskipTests
> > > * Basic testing with mini cluster: ok
> > > * Unit tests (1.8.0_171): failed on the first run (passing eventually)
> > >  - mvn clean package && mvn verify -Dskip.embedded
> > >
> > >
> > > [ERROR] Tests run: 23, Failures: 0, Errors: 1, Skipped: 0, Time
> > > elapsed: 197.428 s <<< FAILURE! - in org.apache.phoenix.end2end.AggregateIT
> > > [ERROR] testOrderByOptimizeForClientAggregatePlanBug4820(org.apache.phoenix.end2end.AggregateIT)
> > > Time elapsed: 9.055 s  <<< ERROR!
> > > java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to
> > > create new native thread
> > >         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:239)
> > >         at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:273)
> > >         at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:434)
> > >         at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:308)
> > >
> > >
> > > [ERROR] Tests run: 37, Failures: 0, Errors: 1, Skipped: 0, Time
> > > elapsed: 204.243 s <<< FAILURE! - in org.apache.phoenix.end2end.ArrayAppendFunctionIT
> > > [ERROR] testUpsertArrayAppendFunctionVarchar(org.apache.phoenix.end2end.ArrayAppendFunctionIT)
> > > Time elapsed: 4.286 s  <<< ERROR!
> > > org.apache.phoenix.exception.PhoenixIOException:
> > > org.apache.hadoop.hbase.DoNotRetryIOException: N000065:
> > > java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to
> > > create new native thread
> > >         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:122)
> > >         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2151)
> > >         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17317)
> > >
> > >
> > > [ERROR] Tests run: 28, Failures: 0, Errors: 1, Skipped: 0, Time
> > > elapsed: 147.854 s <<< FAILURE! - in org.apache.phoenix.end2end.ArrayRemoveFunctionIT
> > > [ERROR] testArrayRemoveFunctionWithNull(org.apache.phoenix.end2end.ArrayRemoveFunctionIT)
> > > Time elapsed: 2.519 s  <<< ERROR!
> > > org.apache.phoenix.exception.PhoenixIOException:
> > > java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
> > > unable to create new native thread
> > >         at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:146)
> > >         at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1511)
> > >         at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1901)
> > >         at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:3065)
> > >
> > >
> > > [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, Time
> > > elapsed: 5,069.234 s <<< FAILURE! - in
> > > org.apache.phoenix.end2end.PermissionNSDisabledWithCustomAccessControllerIT
> > > [ERROR] testAutomaticGrantWithIndexAndView(org.apache.phoenix.end2end.PermissionNSDisabledWithCustomAccessControllerIT)
> > > Time elapsed: 2,572.586 s  <<< ERROR!
> > > java.lang.reflect.UndeclaredThrowableException
> > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1862)
> > >         at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:340)
> > >         at org.apache.phoenix.end2end.BasePermissionsIT.verifyAllowed(BasePermissionsIT.java:776)
> > >         at org.apache.phoenix.end2end.BasePermissionsIT.verifyAllowed(BasePermissionsIT.java:769)
> > >
> > > Tests are passing in subsequent runs. The "unable to create new
> > > native thread" errors point to native-thread exhaustion on the test
> > > host rather than a product issue; one quick thing to check is the
> > > per-user thread limit, sketched below (assuming a Linux host).
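> > >
> > >     import resource
> > >
> > >     # (soft, hard) cap on processes/threads this user may create;
> > >     # test JVMs hitting this cap raise "unable to create new native
> > >     # thread" even with plenty of heap left
> > >     print(resource.getrlimit(resource.RLIMIT_NPROC))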
> > >
> > >
> > > On 2021/02/06 04:53:48, Xinyi Yan <yanxi...@apache.org> wrote:
> > > > Hello Everyone,
> > > >
> > > > This is a call for a vote on Apache Phoenix 4.16.0 RC2. This is the
> > > > next minor release of Phoenix 4, compatible with Apache HBase 1.3,
> > > > 1.4, 1.5, and 1.6.
> > > >
> > > > The VOTE will remain open for at least 72 hours.
> > > >
> > > > [ ] +1 Release this package as Apache Phoenix 4.16.0
> > > > [ ] -1 Do not release this package because ...
> > > >
> > > > The tag to be voted on is 4.16.0RC2
> > > > https://github.com/apache/phoenix/tree/4.16.0RC2
> > > >
> > > > The release files, including signatures, digests, as well as
> > > > CHANGES.md and RELEASENOTES.md included in this RC, can be found at:
> > > > https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.16.0RC2/
> > > >
> > > > For a complete list of changes, see:
> > > > https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.16.0RC2/CHANGES.md
> > > >
> > > > Artifacts are signed with my "CODE SIGNING KEY":
> > > > E4882DD3AB711587
> > > >
> > > > KEYS file available here:
> > > > https://dist.apache.org/repos/dist/dev/phoenix/KEYS
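> > > >
> > > > A minimal verification sketch (assuming gpg is installed, and the
> > > > <artifact> name below is a placeholder for the actual files in the
> > > > RC directory):
> > > >
> > > >     import subprocess
> > > >
> > > >     # import the release KEYS, then verify a detached signature;
> > > >     # gpg infers the data file by stripping the .asc suffix
> > > >     subprocess.run(["gpg", "--import", "KEYS"], check=True)
> > > >     subprocess.run(["gpg", "--verify", "<artifact>.tar.gz.asc"],
> > > >                    check=True)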
> > > >
> > > >
> > > > Thanks,
> > > > Xinyi
> > > >
> > > >
> > > >
> > >
> >
> >
> > --
> > Chinmay Kulkarni
> >
>
