Re: [VOTE] Release of Apache Phoenix 5.1.0 RC4

2021-02-08 Thread Ankit Singhal
+1 (binding)

 * Download source and build - OK
 * Signatures and checksums for src and bin (2.4) - OK
 * apache-rat:check - SUCCESS
 * CHANGES and RELEASENOTES - OK
 * Unit tests (have not tested IT) - Ok

Regards,
Ankit Singhal


On Mon, Feb 8, 2021 at 2:30 PM Chinmay Kulkarni 
wrote:

> +1 (Binding)
> Tested against HBase 2.4
>
> * Build from source (mvn clean install -DskipTests): OK
> * Did some basic DDL, queries, upserts, deletes and everything looked fine:
> OK
> * Verified checksums: OK
> * Verified signatures: OK
> * mvn clean apache-rat:check: OK
>
> On Sun, Feb 7, 2021 at 10:11 PM Viraj Jasani  wrote:
>
> > +1 (non-binding)
> >
> > Tested against HBase-2.4 profile:
> >
> > * Signature: ok
> > * Checksum : ok
> > * Rat check (1.8.0_171): ok
> >  - mvn clean apache-rat:check
> > * Built from source (1.8.0_171): ok
> >  - mvn clean install  -DskipTests
> > * Basic testing with mini cluster: ok
> > * Unit tests pass (1.8.0_171): failed (passed eventually)
> >  - mvn clean package  && mvn verify  -Dskip.embedded
> >
> >
> > [ERROR] Tests run: 14, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
> > 768.631 s <<< FAILURE! - in org.apache.phoenix.end2end.OrphanViewToolIT
> > [ERROR] testDeleteChildViewRows[OrphanViewToolIT_multiTenant=true](org.apache.phoenix.end2end.OrphanViewToolIT)
> > Time elapsed: 306.591 s  <<< ERROR!
> > java.sql.SQLTimeoutException: . Query couldn't be completed in the allotted time: 30 ms
> > at org.apache.phoenix.exception.SQLExceptionCode$16.newException(SQLExceptionCode.java:469)
> > at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:217)
> > at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1390)
> >
> >
> > [ERROR] Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time elapsed:
> > 274.449 s <<< FAILURE! - in org.apache.phoenix.end2end.PermissionsCacheIT
> > [ERROR] testAutomaticGrantWithIndexAndView(org.apache.phoenix.end2end.PermissionsCacheIT)
> > Time elapsed: 56.42 s  <<< FAILURE!
> > java.lang.AssertionError: Expected exception was not thrown for user 'unprivilegedUser_N27'
> > at org.junit.Assert.fail(Assert.java:89)
> > at org.apache.phoenix.end2end.BasePermissionsIT.verifyDenied(BasePermissionsIT.java:842)
> > at org.apache.phoenix.end2end.BasePermissionsIT.verifyDenied(BasePermissionsIT.java:805)
> >
> >
> > [ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
> > 2,472.819 s <<< FAILURE! - in
> > org.apache.phoenix.end2end.ParameterizedIndexUpgradeToolIT
> > [ERROR] testDryRunAndFailures[IndexUpgradeToolIT_mutable=false,upgrade=true,isNamespaceEnabled=false,rebuild=false](org.apache.phoenix.end2end.ParameterizedIndexUpgradeToolIT)
> > Time elapsed: 633.978 s  <<< ERROR!
> > org.apache.phoenix.exception.PhoenixIOException:
> > java.util.concurrent.TimeoutException: The procedure 288 is still running
> > at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:138)
> > at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1519)
> > at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1913)
> >
> >
> >
> > On 2021/02/05 10:28:25, Istvan Toth  wrote:
> > > Please vote on this Apache phoenix release candidate,
> > > phoenix-5.1.0RC4
> > >
> > > The VOTE will remain open for at least 72 hours.
> > >
> > > [ ] +1 Release this package as Apache phoenix 5.1.0
> > > [ ] -1 Do not release this package because ...
> > >
> > > The tag to be voted on is 5.1.0RC4:
> > >
> > >   https://github.com/apache/phoenix/tree/5.1.0RC4
> > >
> > > The release files, including signatures, digests, as well as CHANGES.md
> > > and RELEASENOTES.md included in this RC can be found at:
> > >
> > >   https://dist.apache.org/repos/dist/dev/phoenix/phoenix-5.1.0RC4/
> > >
> > > Maven artifacts are available in a staging repository at:
> > >
> > >   https://repository.apache.org/#stagingRepositories
> > (orgapachephoenix-1213)
> > >
> > > Artifacts were signed with  0x794433C7 key which can be found in:
> > >
> > >   https://dist.apache.org/repos/dist/release/phoenix/KEYS
> > >
> > > To learn more about Apache phoenix, please see
> > >
> > >   http://phoenix.apache.org/
> > >
> > > Thanks,
> > > Istvan
> > >
> >
>
>
> --
> Chinmay Kulkarni
>


Re: [Discuss] Phoenix Tech Talks

2021-02-08 Thread Ankit Singhal
This is excellent. Thanks, Kadir, for initiating it; I am keen to listen to
your presentation on "strongly consistent global indexes".
The logistics look great; if required, they can be adjusted based on the
feedback after the first meetup.

Some topics that come to my mind (in case someone wants to share
their experiences with them):
* Improved Python driver support (could be purely technical, about the
implementation, or use-case oriented, showing how easy it is for any
Python developer to get started with Phoenix/HBase)
* Key features in the 4.16 and 5.1 releases and the upgrade path
* Key differences between the adopted projects Apache Omid and Apache Tephra.


+user  to expand the scope of talks if someone
wants to present their use-cases.

On Mon, Feb 8, 2021 at 9:06 AM Viraj Jasani  wrote:

> +1 to the idea of monthly meet-ups, and I will also do my best to join in.
> Logistics part (when, durations etc) looks good to me.
>
>
> On Fri, 5 Feb 2021 at 1:36 AM, Josh Elser  wrote:
>
> > Love it! I'll do my best to join in and listen (and participate later
> > on, too ;))
> >
> > I joined one from Calcite a week or two ago. They did a signup via
> > Meetup.com and hosted it through Zoom. It felt very professional.
> >
> > On 2/4/21 12:10 PM, Kadir Ozdemir wrote:
> > > We are very excited to propose an idea that brings the Phoenix
> community
> > > together to have technical discussions on a recurring basis. The goal
> is
> > to
> > > have a forum where we share technical knowledge we have acquired by
> > working
> > > on various aspects of Phoenix and to continue to bring innovation and
> > > improvements as a community into Phoenix. We’d love to get feedback on
> > this
> > > idea and determine the logistics for these meetings.
> > >
> > > Here is what we were thinking:
> > >
> > > - Come together as a community by hosting *Phoenix tech talks*
> once a
> > > month
> > > - The topics for these meetings can be any technical subject
> related
> > to
> > > Phoenix, including the architecture, internals, features and
> > interfaces of
> > > Phoenix, its operational aspects in the first party data centers
> and
> > cloud,
> > > the technologies that it leverages (e.g., HBase and Zookeeper), and
> > > technologies it can possibly leverage, adapt or follow
> > >
> > > *Logistics*:
> > >
> > > - *When*: First Thursday of each month at 9AM PST
> > > - *Duration*: 90 minutes (to allow the audience to participate and
> > ask
> > > questions)
> > > - We will conduct these meetings over a video conference and make
> the
> > > recordings available (we are sorting out the specifics)
> > > - The meeting agenda and past recordings will be available on the
> > Apache
> > > Phoenix site
> > >
> > > We need a coordinator for these meetings to set the agenda and manage
> its
> > > logistics. I will volunteer to organize these meetings and curate the
> > > topics for the tech talks, at least initially. To get the ball
> rolling, I
> > > will present the strongly consistent global indexes in the first
> meeting.
> > > What do you think about this proposal?
> > >
> > > Thanks,
> > > Kadir
> > >
> >
>


Re: [Discuss] Dropping support for older HBase version

2021-01-31 Thread Ankit Singhal
+u...@phoenix.apache.org

I wouldn't suggest making any strict policies for ourselves or
making any promises to users about supporting EOL HBase versions,
as that may become a burden down the line for us and sometimes require
an exemption
if we can't make a feature work with a certain release.

IMHO, it can be decided by consensus on the mailing list and the willingness
to support
the development and release of the respective versions users are
interested in. Though
I agree that it is good to remain proactive about reaching these consensuses,
to avoid last-minute
surprises for users who have been waiting a long time for a release.

Regards,
Ankit Singhal



On Fri, Jan 29, 2021 at 10:59 PM Istvan Toth  wrote:

> I'm not sure I understand, let me rephrase
>
> So we drop support right after we release a Phoenix minor version,
> if the Phoenix release date is more than a year after the HBase EOL date ?
>
> That sounds fine to me.
>
> How about patch releases ?
> I feel that we should not drop HBase release support in a patch release,
> i.e. if we release 5.1.1, 5.1.2, etc., those should keep support for all HBase
> versions that 5.1.0 supported.
>
> regards
> Istvan
>
> On Sat, Jan 30, 2021 at 3:23 AM Xinyi Yan  wrote:
>
> > IMO, we should consider a one-year grace period plus one minor release. For
> > example, if we have a new 4.17.0 release in September 2021, we should not
> > support HBase 1.3 (EOL in Aug 2020), since it passes the one-year grace
> > period plus one more supported release. This means we will include HBase 1.4
> > and 2.2 support for the next releases (4.17.0 and 5.2.0). As Istvan
> > mentioned above, dropping HBase 1.3 support would be a simplification; at
> > least I feel we should drop the support for HBase 1.3 for the next minor
> > release.
> >
> > What do people think about this? One minor release plus one year grace
> > period?
> >
> >
> > On Fri, Jan 29, 2021 at 7:26 AM Josh Elser  wrote:
> >
> > > I'd request that we keep hbase-2.2 support around for a while longer.
> If
> > > we drop that, it's going to cause us some major headache whereas I'd
> > > rather see us able to keep pushing our dayjob efforts directly into
> > > upstream.
> > >
> > > On 1/28/21 11:56 PM, Viraj Jasani wrote:
> > > > +1 (non-binding) to EOLing the support for HBase 1.3 and 2.1 at least,
> > > > since both were EOLed last year (1.4 and 2.2 can also be dropped).
> > > >
> > > > Moreover, between 2.4.0 and 2.4.1 we have some compat issues in an
> > > > IA.Private class (we need some utility from HStore which was refactored
> > > > in 2.4.1), hence we will need a new compat module to support 2.4.1+
> > > > releases in Phoenix 5.2.0+ releases, most likely.
> > > >
> > > >
> > > > On Fri, 29 Jan 2021 at 6:54 AM, Geoffrey Jacoby 
> > > wrote:
> > > >
> > > >> +1. Following 4.16 and 5.1's releases I'd suggest EOLing support for
> > > HBase
> > > >> 1.3, 1.4, 2.1 and 2.2, I believe all of which have been EOLed by the
> > > HBase
> > > >> community. All of those versions also require special compatibility
> > lib
> > > >> support currently.
> > > >>
> > > >> Geoffrey
> > > >>
> > > >> On Thu, Jan 28, 2021 at 6:35 PM Xinyi Yan 
> > wrote:
> > > >>
> > > >>> Hi,
> > > >>>
> > > >>> I'm thinking of reducing the number of supported HBase versions for
> > > >>> future releases. For example, HBase 1.3 was EOM'd in August 2020; do
> > > >>> we still consider supporting it for 4.17.0? Similarly, our current
> > > >>> master branch also supports EOM'd HBase versions. If Phoenix users
> > > >>> have already upgraded their HBase, we should not spend time
> > > >>> supporting these old versions IMO.
> > > >>>
> > > >>> I think we should do it after 4.16.0 and 5.1.0, thoughts?
> > > >>>
> > > >>>
> > > >>> Thanks,
> > > >>> Xinyi
> > > >>>
> > > >>
> > > >
> > >
> >
>


Re: [VOTE] Release of Apache Phoenix Thirdparty 1.1.0 RC0

2021-01-31 Thread Ankit Singhal
+1

 * Build phoenix-thirdparty - OK
 * Build Phoenix master with phoenix-thirdparty - OK
 * apache-rat:check - FAILED for the following files (it's better we remove
them):

phoenix-thirdparty/phoenix-shaded-commons-cli/src/main/patches/CLI-254-1.4.patch

phoenix-thirdparty/phoenix-shaded-commons-cli/src/main/java/META-INF/MANIFEST.MF
 * Distribution link in the vote has a typo; the correct link seems to be
 https://dist.apache.org/repos/dist/dev/phoenix/phoenix-thirdparty-1.1.0RC0/
 * RELEASENOTES.md and CHANGES.md seem not to have been updated since the last
release.
 * Signature and checksum - OK
 * License and NOTICE (nit: copyright 2021) - OK

Regards,
Ankit Singhal

On Fri, Jan 29, 2021 at 11:05 PM Viraj Jasani  wrote:

> +1 (non-binding)
>
> Build: ok
> Compilation against master and 4.x branch: ok
> ChangeLog/Release notes: ok
>
> Staging repository looks good:
> https://repository.apache.org/content/repositories/staging/org/apache/phoenix/thirdparty/phoenix-thirdparty/1.1.0/
>
> IT results look good on PRs #1122 and #1123.
>
>
> On 2021/01/29 21:32:39, Istvan Toth  wrote:
> > Please vote on this Apache phoenix thirdparty release candidate,
> > phoenix-thirdparty-1.1.0RC0
> >
> > The VOTE will remain open for at least 72 hours.
> >
> > [ ] +1 Release this package as Apache phoenix thirdparty 1.1.0
> > [ ] -1 Do not release this package because ...
> >
> > The tag to be voted on is 1.1.0RC0:
> >
> >   https://github.com/apache/phoenix-thirdparty/tree/1.1.0RC0
> >
> > The release files, including signatures, digests, as well as CHANGES.md
> > and RELEASENOTES.md included in this RC can be found at:
> >
> >   https://dist.apache.org/repos/dist/dev/phoenix/1.1.0RC0/
> >
> > Maven artifacts are available in a staging repository at:
> >
> >   https://repository.apache.org/content/repositories//
> >
> > Artifacts were signed with the 0x794433C7 key which can be found in:
> >
> >   https://dist.apache.org/repos/dist/release/phoenix/KEYS
> >
> > To learn more about Apache phoenix thirdparty, please see
> >
> >   http://phoenix.apache.org/
> >
> > Thanks,
> > Istvan
> >
>


Re: [ANNOUNCE] New Committer Daniel Wong

2021-01-26 Thread Ankit Singhal
Congratulations and welcome, Daniel

On Tue, Jan 26, 2021 at 11:38 AM Xinyi Yan  wrote:

> Congratulations and welcome, Daniel!
>
> On Tue, Jan 26, 2021 at 11:09 AM Geoffrey Jacoby 
> wrote:
>
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Daniel
> > Wong
> > has accepted the PMC's invitation to become a committer on Apache
> Phoenix.
> >
> > We appreciate all of the great contributions Daniel has made to the
> > community thus far and we look forward to his continued involvement.
> >
> > Welcome Daniel!
> >
> > Geoffrey Jacoby
> >
>


[jira] [Created] (PHOENIX-6331) Increase index retry from 1 to 2 in case of NotServingRegionException

2021-01-20 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6331:
--

 Summary: Increase index retry from 1 to 2 in case of NotServingRegionException
 Key: PHOENIX-6331
 URL: https://issues.apache.org/jira/browse/PHOENIX-6331
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal


Currently, we move the index to PENDING_DISABLE whenever a single write to the
index fails, and carry out a retry at the client. This can be optimized for
NotServingRegionException: index regions can move very frequently depending on
the balancer, and one more retry at the server could avoid unnecessary
handling of index states and retries at the client.
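
A minimal sketch of the idea (the helper and call below are hypothetical, not
the actual Phoenix write path, which lives in TrackingParallelWriterIndexCommitter
as the log below shows):

{code:java}
// Hypothetical sketch: allow one extra server-side attempt when the cause is
// NotServingRegionException, since the index region has most likely just moved.
void writeWithOneRetry(List<Mutation> mutations) throws IOException {
    final int maxAttempts = 2; // effectively 1 today (see "attempt=1/1" below)
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            writeToIndexTable(mutations); // hypothetical write call
            return;
        } catch (NotServingRegionException e) {
            if (attempt == maxAttempts) {
                // give up: fall back to PENDING_DISABLE and client-side retries
                throw e;
            }
            // region is re-opening elsewhere; retry once more at the server
        }
    }
}
{code}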

 
{code:java}
2021-01-20 06:54:58,682 WARN org.apache.hadoop.hbase.client.AsyncProcess: #277, table=, attempt=1/1 failed=1ops, last exception: org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region  is not online on 
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2997)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1069)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2100)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
 on , tracking started Wed Jan 20 06:54:58 CET 2021; not retrying 1 - final failure
2021-01-20 06:54:58,690 INFO org.apache.phoenix.index.PhoenixIndexFailurePolicy: Successfully update INDEX_DISABLE_TIMESTAMP for  due to an exception while writing updates. indexState=PENDING_DISABLE
org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  disableIndexOnFailure=true, Failed to write to multiple index tables: []
    at org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
    at org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
    at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
    at org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
    at org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:617)
    at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:577)
    at org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:560)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
    at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3421)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2944)
    at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:765)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:716)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2146)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
2021-01-20 06:54:58,691 INFO org.apache.phoenix.hbase.index.util.IndexManagementUtil: Rethrowing org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to the index failed.  disableIndexOnFailure=true, Failed to write to multiple index tables: [] ,serverTimestamp=1611122098649,
2021-01-20 06:55:01,296 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for hbase (auth:SIMPLE) {code}
 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

[jira] [Created] (PHOENIX-6298) Use timestamp of PENDING_DISABLE_COUNT to calculate elapsed time for PENDING_DISABLE state

2021-01-04 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6298:
--

 Summary: Use timestamp of PENDING_DISABLE_COUNT to calculate elapsed time for PENDING_DISABLE state
 Key: PHOENIX-6298
 URL: https://issues.apache.org/jira/browse/PHOENIX-6298
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal


Instead of taking indexDisableTimestamp to calculate the elapsed time, we
should be considering the last time we incremented/decremented the counter for
PENDING_DISABLE_COUNT. Otherwise, if the application's write failures span more
than the default threshold of 30 seconds, the index will unnecessarily get
disabled even though the client could have retried and made it active.

{code}
long elapsedSinceDisable =
        EnvironmentEdgeManager.currentTimeMillis() - Math.abs(indexDisableTimestamp);

// on an index write failure, the server side transitions to PENDING_DISABLE, then the client
// retries, and after retries are exhausted, disables the index
if (indexState == PIndexState.PENDING_DISABLE) {
    if (elapsedSinceDisable > pendingDisableThreshold) {
        // too long in PENDING_DISABLE - client didn't disable the index, so we do it here
        IndexUtil.updateIndexState(conn, indexTableFullName,
                PIndexState.DISABLE, indexDisableTimestamp);
    }
    continue;
}
{code}
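
A minimal sketch of the proposed change (the timestamp field below is
hypothetical; the actual patch may differ): measure the elapsed time from the
last PENDING_DISABLE_COUNT update instead of from indexDisableTimestamp.

{code:java}
// Hypothetical: pendingDisableCountLastUpdatedTs is the server timestamp of the
// most recent increment/decrement of PENDING_DISABLE_COUNT for this index.
long elapsedSincePendingDisable =
        EnvironmentEdgeManager.currentTimeMillis() - pendingDisableCountLastUpdatedTs;

if (indexState == PIndexState.PENDING_DISABLE) {
    if (elapsedSincePendingDisable > pendingDisableThreshold) {
        // No counter activity for longer than the threshold, so the client is
        // assumed gone and the server disables the index.
        IndexUtil.updateIndexState(conn, indexTableFullName,
                PIndexState.DISABLE, indexDisableTimestamp);
    }
    continue;
}
{code}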



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[ANNOUNCE] New Phoenix committer Richárd Antal

2021-01-04 Thread Ankit Singhal
On behalf of the Apache Phoenix PMC, I'm pleased to announce that Richárd
Antal
has accepted the PMC's invitation to become a committer on Apache Phoenix.

We appreciate all of the great contributions Richárd has made to the
community thus far and we look forward to his continued involvement.

Congratulations and welcome, Richárd Antal!


Re: [Discuss] Releasing Phoenix 4.16

2020-12-13 Thread Ankit Singhal
I see that both the blockers listed here, PHOENIX-5712 and PHOENIX-6241,
have been resolved (thanks to Xinyi), and also, as per the JIRA query, there
is no JIRA marked as a blocker for 4.16 except the one related to
documentation for the "splittable catalog table".

Xinyi, so are we good to start the release process now?

On Wed, Dec 2, 2020 at 9:32 PM Xinyi Yan  wrote:

> Thanks for replying and providing suggestions. I looked at the wrong-result
> Jira list that Daniel provided and did some local testing, and here is the
> result:
> [resolved] PHOENIX-4116, PHOENIX-4419, and PHOENIX-4642: cannot reproduce
> them.
> [more information required] PHOENIX-4504: cannot reproduce it, but someone
> claimed to have had a similar issue.
> [unusual query] PHOENIX-4540 and PHOENIX-6217
>
> Based on my findings, I think it's better to have more frequent housekeeping
> and to resolve unreproducible bugs, especially since many of them are
> considered out of date (phoenix-4.11 or even phoenix-4.6). Since I still
> need time to work on the blocker JIRAs (PHOENIX-5712, PHOENIX-6241) and fix
> test flappers, if you want to fix the "unusual query" bugs, feel free to do so.
>
>
> Sincerely,
> Xinyi
>
> On Wed, Dec 2, 2020 at 12:41 AM Ankit Singhal  wrote:
>
> > Thanks Daniel, and I appreciate the effort you put into getting the list
> > ready for bugs producing wrong results,
> > but none of them seems to be a blocker to me for 4.16, as they are not
> > regressions and don't break general functionality
> > except for specific features, RVC/desc, as Chenglei also pointed out
> > (though I'll defer the assessment to RM "Xinyi").
> > Probably these can be a part of 4.16.1, or we can do 4.17.0 soon, maybe
> > after a few weeks/months?
> >
> > Considering that we have already fixed 137 bugs and done 85+
> > improvements/features in 4.16,
> > it will not be a good idea to deprive users of such fixes.
> > It's been a year since our last 4.15 release; having no release raises more
> > questions about the project
> > than the bugs, which affect a certain percentage of features/users. Would
> > the release notes
> > explaining the stability of certain features set the right expectation
> > for those users who rely on these features to wait for a future release?
> >
> > Regards,
> > Ankit Singhal
> >
> > On Tue, Dec 1, 2020 at 8:21 PM cheng...@apache.org 
> > wrote:
> >
> > >
> > >
> > >
> > > In my opinion, we should keep releases light and frequent, and for some
> > > unusual query bugs like RVC and DESC
> > > we could delay the fix to the next release. I think we should release
> > > 4.16.0 and 5.1.0 as quickly as possible. In China, many users
> > > in the HBase User Group thought that Phoenix was dead because of our
> > > too-long release interval and stopped using
> > > Phoenix.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > At 2020-12-02 08:45:46, "Chinmay Kulkarni"  >
> > > wrote:
> > > >I agree. These are all major bugs and we should aim at solving them
> > after
> > > >checking that they are still issues. I am +1 on 5833 and I think 5484
> > > would
> > > >be a great addition to 4.16 as well. We should aim at resolving high
> > > >priority bugs like this in every release.
> > > >
> > > >Sometimes we let these bugs slip without a resolution before a
> release,
> > > >citing that these are "known issues" or "not regressions from the last
> > > >release". In some cases this may be fine since we want to keep
> releases
> > > >light and frequent, but perhaps we can track such issues and aim at
> > > >reducing the number of bugs by x% in each release? This will also keep
> > old
> > > >Jiras alive since we will potentially periodically review them.
> > > >
> > > >
> > > >On Tue, Dec 1, 2020 at 4:01 PM Geoffrey Jacoby 
> > > wrote:
> > > >
> > > >> I've got PHOENIX-5435 in review right now, and would like to get it
> in
> > > 4.16
> > > >> / 5.1.
> > > >>
> > > >> It's allowing the annotation of Phoenix metadata into HBase WALs as
> a
> > > >> pre-req for the Phoenix Change Detection Capture framework
> > > (PHOENIX-5442).
> > > >> Since it has both client/se

Re: IndexExtendedIT always failed on 4.x and master

2020-12-10 Thread Ankit Singhal
Thanks Geoffrey (and Chenglei once again).

On Thu, Dec 10, 2020 at 12:46 PM Geoffrey Jacoby  wrote:

> I already reverted the patch from both branches earlier this afternoon.
> Sorry for the noise.
>
> Geoffrey
>
> On Thu, Dec 10, 2020 at 2:44 PM Ankit Singhal 
> wrote:
>
> > Please feel free to revert from 4.x (and master) and re-open the
> > corresponding JIRA.
> >
> > On Thu, Dec 10, 2020 at 5:50 AM 程磊  wrote:
> >
> > >
> > >
> > >
> > > Yes, when I revert 4.x to before PHOENIX-5140, IndexExtendedIT and
> > > IndexScrutinyToolIT succeed,
> > > but after applying PHOENIX-5140, both IndexExtendedIT and
> > > IndexScrutinyToolIT always fail.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > At 2020-12-10 16:51:27, "Ankit Singhal" 
> > wrote:
> > > >Thanks Chenglei for bringing the failures to our notice. Have you
> > > >confirmed that reverting PHOENIX-5140 fixes the problem? If yes, then I
> > > >would suggest reverting the fix and re-opening the JIRA to be worked on
> > > >separately for a new fix.
> > > >
> > > >On Wed, Dec 9, 2020 at 7:14 PM cheng...@apache.org <
> cheng...@apache.org
> > >
> > > >wrote:
> > > >
> > > >> I noticed IndexExtendedIT always fails on 4.x and master now. It may
> > > >> be caused by PHOENIX-5140; for 4.x, the stack is:
> > > >> java.lang.AssertionError: expected:<0> but was:<-1>
> > > >> at org.junit.Assert.fail(Assert.java:89)
> > > >> at org.junit.Assert.failNotEquals(Assert.java:835)
> > > >> at org.junit.Assert.assertEquals(Assert.java:647)
> > > >> at org.junit.Assert.assertEquals(Assert.java:633)
> > > >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:806)
> > > >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:785)
> > > >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:776)
> > > >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:770)
> > > >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:744)
> > > >> at org.apache.phoenix.end2end.IndexExtendedIT.testMutableIndexWithUpdates(IndexExtendedIT.java:166)
> > > >>
> > > >>
> > > >> and a similar stack appears on master. I suggest we roll back or stop
> > > >> committing to the repo until the problem is fixed.
> > >
> >
>


Re: IndexExtendedIT always failed on 4.x and master

2020-12-10 Thread Ankit Singhal
Please feel free to revert from 4.x (and master) and re-open the corresponding
JIRA.

On Thu, Dec 10, 2020 at 5:50 AM 程磊  wrote:

>
>
>
> Yes, when I revert 4.x to before PHOENIX-5140, IndexExtendedIT and
> IndexScrutinyToolIT succeed,
> but after applying PHOENIX-5140, both IndexExtendedIT and
> IndexScrutinyToolIT always fail.
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> At 2020-12-10 16:51:27, "Ankit Singhal"  wrote:
> >Thanks Chenglei for bringing the failures to our notice. Have you confirmed
> >that reverting PHOENIX-5140 fixes the problem? If yes, then I would suggest
> >reverting the fix and re-opening the JIRA to be worked on separately for a
> >new fix.
> >
> >On Wed, Dec 9, 2020 at 7:14 PM cheng...@apache.org 
> >wrote:
> >
> >> I noticed IndexExtendedIT always fails on 4.x and master now. It may be
> >> caused by PHOENIX-5140; for 4.x, the stack is:
> >> java.lang.AssertionError: expected:<0> but was:<-1>
> >> at org.junit.Assert.fail(Assert.java:89)
> >> at org.junit.Assert.failNotEquals(Assert.java:835)
> >> at org.junit.Assert.assertEquals(Assert.java:647)
> >> at org.junit.Assert.assertEquals(Assert.java:633)
> >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:806)
> >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:785)
> >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:776)
> >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:770)
> >> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:744)
> >> at org.apache.phoenix.end2end.IndexExtendedIT.testMutableIndexWithUpdates(IndexExtendedIT.java:166)
> >>
> >>
> >> and a similar stack appears on master. I suggest we roll back or stop
> >> committing to the repo until the problem is fixed.
>


Re: IndexExtendedIT always failed on 4.x and master

2020-12-10 Thread Ankit Singhal
Thanks Chenglei for bringing the failures to our notice. Have you confirmed
that reverting PHOENIX-5140 fixes the problem? If yes, then I would suggest
reverting the fix and re-opening the JIRA to be worked on separately for a
new fix.

On Wed, Dec 9, 2020 at 7:14 PM cheng...@apache.org 
wrote:

> I noticed IndexExtendedIT always fails on 4.x and master now. It may be
> caused by PHOENIX-5140; for 4.x, the stack is:
> java.lang.AssertionError: expected:<0> but was:<-1>
> at org.junit.Assert.fail(Assert.java:89)
> at org.junit.Assert.failNotEquals(Assert.java:835)
> at org.junit.Assert.assertEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:633)
> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:806)
> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:785)
> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:776)
> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:770)
> at org.apache.phoenix.end2end.IndexToolIT.runIndexTool(IndexToolIT.java:744)
> at org.apache.phoenix.end2end.IndexExtendedIT.testMutableIndexWithUpdates(IndexExtendedIT.java:166)
>
>
> and a similar stack appears on master. I suggest we roll back or stop
> committing to the repo until the problem is fixed.


Re: [DISCUSS] phoenix-connectors release criteria

2020-12-08 Thread Ankit Singhal
Thanks, Istvan (and Richard), for compiling the details. This
definitely helps in understanding the state of each connector.

I'm +1 on marking the Pig, Flume, and Kafka connectors as unmaintained (with no
real-world testing) in
the release notes (and also in the README) and moving forward with the release
of the connectors repo as long as they build successfully against the latest
Phoenix releases.

And in the future, if maintaining the build process for them (especially Pig
and Flume)
becomes difficult, then I think we should start a discussion thread
to consider dropping them from the next release instead
of dragging them along with unknown stability.

Regards,
Ankit Singhal


On Tue, Dec 8, 2020 at 7:00 AM Istvan Toth  wrote:

> Hi!
>
> We are getting to the point where releasing the next Phoenix versions is
> within sight.
>
> The next steps will be releasing queryserver and connectors projects.
> Queryserver is mostly in good shape, but the situation with Connectors is
> not so clear-cut.
>
> We (mostly Richard Antal) have made a huge step in collecting all of the
> connectors patches for 5.x, and consolidating the Phoenix 4 and 5
> connectors in the connectors repo.
>
> However, not all connectors received equal attention. While all of them
> compile, and run their test suite successfully, this doesn't necessarily
> mean that they are useful in the real world.
>
> Hive:
> * Supports latest Hive versions (with caveats)
> * Custom shaded connector JAR
> * local e2e test successful for Hive 3 / Phoenix 5
>
> Spark:
> * Supports Spark 2.4 (not tested with 3.x)
> * Custom shaded connector JAR (waiting for review)
> * local e2e test successful for Phoenix 5
>
> Pig:
> * The last pig release (0.17) was in 2017 - abandoned ?
> * connectors on version 0.13 from 2014
> * (but builds and runs test suite with 0.17 with trivial changes)
> * has a basic shaded jar
> * I have made no effort to do e2e testing
>
> Flume:
> * Project seems to be alive, if low activity, last version is 1.9.0 from
> 2019
> * connectors on version 1.4.0 from 2014
> * (but builds and runs test suite with 1.9.0)
> * has no shaded JAR (was in phoenix-client pre-split)
> * I have made no effort to do e2e testing
>
> Kafka:
> * Last version is 2.6.0
> * connectors on version 0.9.0.0 from 2015
> * (couldn't build connector with the latest release)
> * depends on and reuses flume connector code
> * has a shaded jar
> * I have made no effort to do e2e testing
>
> So basically we have the Spark and Hive connectors, that me and my
> colleagues actively maintain as part of $dayjob, and expect to continue to
> do so in the future.
>
> The other three, Flume, Pig, and Kafka haven't seen any development (apart
> from build system updates and such) in 5-6 years, evidenced by the ancient
> releases that we build with.
>
> At $dayjob we are considering taking up the kafka connector at some point
> in the future, but it will definitely not happen in the 4.16/5.1
> timeframe, if ever.
>
> As having a connectors release with some known useful and up-to-date
> connectors is better than not having any connectors release at all, I
> suggest that we go ahead and release phoenix-connectors 6.0.0 after 5.1 /
> 4.16 in the present state, but mark the Flume, Pig and Kafka connectors as
> unmaintained in the README / Release notes.
>
> This leaves the door open for maintainers to step up and update/maintain
> those connectors, while it lets us make regular releases for the maintained
> connectors.
>
> Please share your opinion on this plan.
>
> regards
> Istvan
>


Re: [VOTE] Release of Apache Tephra 0.16.0RC2 as Apache Tephra 0.16.0

2020-12-02 Thread Ankit Singhal
+1 (binding)

* Build from the source - Ok
* Unit tests (checked for a few of the compatibility modules) - Passed
* RAT check - Ok
* Build Phoenix using this RC - OK
* Signature/checksums - Ok
* Changes and ReleaseNotes - Ok
* LICENSE and NOTICE (source and maven artifacts) - ok (though year needs
to be updated in NOTICE file)


Regards,
Ankit Singhal

On Tue, Dec 1, 2020 at 2:04 AM Viraj Jasani  wrote:

> +1 (non-binding)
>
> * Build from source: ok
> * Signature: ok
> * Rat-check: ok
> * All unit tests in tephra passed
> * Build Phoenix master with 0.16.0RC2: ok
>
>
> On 2020/12/01 08:01:26, "rajeshb...@apache.org" 
> wrote:
> > +1 (binding)
> >
> > - xsums/sigs OK
> > - Rat check is passed
> > - build successful.
> > - Ran tests and all are fine.
> >
> > Thanks,
> > Rajeshbabu.
> >
> >
> >
> > On Tue, Dec 1, 2020 at 2:22 AM Josh Elser  wrote:
> >
> > > +1 (binding)
> > >
> > > * xsums/sigs OK
> > > * CHANGES has reasonable content
> > > * mvn apache-rat:check passes and `find . -type f` doesn't show
> anything
> > > unreasonable
> > > * Can build the code
> > > * Ran many unit tests, but cancelled at ~1hr mark.
> > > * Can build Phoenix master against 0.16.0rc2
> > >
> > > On 11/30/20 8:15 AM, Istvan Toth wrote:
> > > > Please vote on this Apache Phoenix Tephra release candidate,
> > > > phoenix-tephra-0.16.0RC2
> > > >
> > > > The VOTE will remain open for at least 72 hours.
> > > >
> > > > [ ] +1 Release this package as Apache phoenix tephra 0.16.0
> > > > [ ] -1 Do not release this package because ...
> > > >
> > > > The tag to be voted on is 0.16.0RC2
> > > >
> > > >https://github.com/apache/phoenix-tephra/tree/0.16.0RC2
> > > >
> > > > The release files, including signatures, digests, as well as
> CHANGES.md
> > > > and RELEASENOTES.md included in this RC can be found at:
> > > >
> > > >
> > >
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-tephra-0.16.0RC2/
> > > >
> > > > Maven artifacts are available in a staging repository at:
> > > >
> > > >
> > >
> https://repository.apache.org/content/repositories/orgapachephoenix-1206/
> > > >
> > > > Artifacts were signed with the 0x794433C7 key which can be found in:
> > > >
> > > >https://dist.apache.org/repos/dist/release/phoenix/KEYS
> > > >
> > > > To learn more about Apache Phoenix Tephra, please see
> > > >
> > > >https://tephra.incubator.apache.org/
> > > >
> > > > Thanks,
> > > > Istvan
> > > >
> > >
> >
>


Re: [Discuss] Releasing Phoenix 4.16

2020-12-02 Thread Ankit Singhal
Thanks Daniel, and I appreciate the effort you put into getting the list ready
for bugs producing wrong results,
but none of them seems to be a blocker to me for 4.16, as they are not
regressions and don't break general functionality
except for specific features, RVC/desc, as Chenglei also pointed out (though
I'll defer the assessment to RM "Xinyi").
Probably these can be a part of 4.16.1, or we can do 4.17.0 soon, maybe after
a few weeks/months?

Considering that we have already fixed 137 bugs and done 85+
improvements/features in 4.16,
it will not be a good idea to deprive users of such fixes.
It's been a year since our last 4.15 release; having no release raises more
questions about the project
than the bugs, which affect a certain percentage of features/users. Would the
release notes
explaining the stability of certain features set the right expectation for
those users who rely on these features to wait for a future release?

Regards,
Ankit Singhal

On Tue, Dec 1, 2020 at 8:21 PM cheng...@apache.org 
wrote:

>
>
>
> In my opinion, we should keep releases light and frequent, and for some
> unusual query bugs like RVC and DESC
> we could delay the fix to the next release. I think we should release 4.16.0
> and 5.1.0 as quickly as possible. In China, many users
> in the HBase User Group thought that Phoenix was dead because of our
> too-long release interval and stopped using
> Phoenix.
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> At 2020-12-02 08:45:46, "Chinmay Kulkarni" 
> wrote:
> >I agree. These are all major bugs and we should aim at solving them after
> >checking that they are still issues. I am +1 on 5833 and I think 5484
> would
> >be a great addition to 4.16 as well. We should aim at resolving high
> >priority bugs like this in every release.
> >
> >Sometimes we let these bugs slip without a resolution before a release,
> >citing that these are "known issues" or "not regressions from the last
> >release". In some cases this may be fine since we want to keep releases
> >light and frequent, but perhaps we can track such issues and aim at
> >reducing the number of bugs by x% in each release? This will also keep old
> >Jiras alive since we will potentially periodically review them.
> >
> >
> >On Tue, Dec 1, 2020 at 4:01 PM Geoffrey Jacoby 
> wrote:
> >
> >> I've got PHOENIX-5435 in review right now, and would like to get it in
> 4.16
> >> / 5.1.
> >>
> >> It's allowing the annotation of Phoenix metadata into HBase WALs as a
> >> pre-req for the Phoenix Change Detection Capture framework
> (PHOENIX-5442).
> >> Since it has both client/server logic, and adds a field to
> System.Catalog,
> >> it can't go in a patch release.
> >>
> >> Depending on timing, I'd _like_ to get PHOENIX-6227, which is the last
> part
> >> of CDC that will go into core Phoenix, into 4.16, but since that _can_
> go
> >> in a patch release and I haven't started it yet, if the release gets cut
> >> before it's ready, no big deal. (The rest of CDC will go into
> >> phoenix-connectors for a future release of that project.)
> >>
> >> As for the correctness problems that Daniel points out, I think we
> should
> >> fix the ones that were detected with a recent version (4.14 or 4.15?),
> and
> >> test to see which of the older ones can still be reproduced. Once we
> know
> >> which bugs are real and which are just historical, we can better judge
> >> scope. And hopefully close a bunch of obsolete bugs. (Thanks, Daniel,
> for
> >> collecting that list!)
> >>
> >> Geoffrey
> >>
> >>
> >>
> >> On Tue, Dec 1, 2020 at 1:33 PM Daniel Wong
> >>  wrote:
> >>
> >> > Hi, I wanted to bring up wrong results in Phoenix and some JIRAs
> around
> >> > them that I think we should fix as the wrong result lessens the end
> >> user's
> >> > trust in Phoenix.  Releasing a new version without addressing these
> in a
> >> > minor release hurts our visibility in that these critical issues are
> not
> >> > addressed.
> >> >
> >> > Jira's that I'm involved with for example: I've already given a patch
> >> > several months ago for 5833 and there is a chance it may fix 5484.
> >> >   https://issues.apache.org/jira/browse/PHOENIX-5833
> >> >   https://issues.apache.org/jira/browse/PHOENIX-5484
> >> >
> >> > In addition, inspecting apache JIRA i see several other wrong result
> >> JIRAs
> >> > from the community.  Some of 

[jira] [Resolved] (PHOENIX-4378) Unable to set KEEP_DELETED_CELLS to true on RS scanner

2020-11-10 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4378.

Resolution: Duplicate

> Unable to set KEEP_DELETED_CELLS to true on RS scanner
> --
>
> Key: PHOENIX-4378
> URL: https://issues.apache.org/jira/browse/PHOENIX-4378
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Blocker
>  Labels: HBase-2.0
> Fix For: 5.1.0
>
>
> [~jamestaylor], 
> It seems we may need to fix PHOENIX-4277 differently for HBase 2.0 as we can 
> only update TTL and maxVersions now in preStoreScannerOpen and cannot return 
> a new StoreScanner with updated scanInfo.
> for reference:
> [1]https://issues.apache.org/jira/browse/PHOENIX-4318?focusedCommentId=16249943=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16249943
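> A hedged sketch of the HBase 2.0 API limitation being described (illustrative 
> only, not the eventual fix):
> {code:java}
> // In HBase 2.0, preStoreScannerOpen only exposes ScanOptions, so a coprocessor
> // can widen the TTL and max versions, but (at the time of this issue) there was
> // no setter for KEEP_DELETED_CELLS, unlike 1.x where a new StoreScanner with a
> // modified ScanInfo could be returned.
> @Override
> public void preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> ctx,
>         Store store, ScanOptions options) throws IOException {
>     options.setTTL(Long.MAX_VALUE);            // allowed
>     options.setMaxVersions(Integer.MAX_VALUE); // allowed
>     // no ScanOptions setter for KEEP_DELETED_CELLS here
> }
> {code}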



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (PHOENIX-4378) Unable to set KEEP_DELETED_CELLS to true on RS scanner

2020-11-10 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reopened PHOENIX-4378:


> Unable to set KEEP_DELETED_CELLS to true on RS scanner
> --
>
> Key: PHOENIX-4378
> URL: https://issues.apache.org/jira/browse/PHOENIX-4378
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Blocker
>  Labels: HBase-2.0
> Fix For: 5.1.0
>
>
> [~jamestaylor], 
> It seems we may need to fix PHOENIX-4277 differently for HBase 2.0 as we can 
> only update TTL and maxVersions now in preStoreScannerOpen and cannot return 
> a new StoreScanner with updated scanInfo.
> for reference:
> [1]https://issues.apache.org/jira/browse/PHOENIX-4318?focusedCommentId=16249943=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16249943



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-4378) Unable to set KEEP_DELETED_CELLS to true on RS scanner

2020-11-10 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4378.

Resolution: Fixed

> Unable to set KEEP_DELETED_CELLS to true on RS scanner
> --
>
> Key: PHOENIX-4378
> URL: https://issues.apache.org/jira/browse/PHOENIX-4378
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Blocker
>  Labels: HBase-2.0
> Fix For: 5.1.0
>
>
> [~jamestaylor], 
> It seems we may need to fix PHOENIX-4277 differently for HBase 2.0 as we can 
> only update TTL and maxVersions now in preStoreScannerOpen and cannot return 
> a new StoreScanner with updated scanInfo.
> for reference:
> [1]https://issues.apache.org/jira/browse/PHOENIX-4318?focusedCommentId=16249943=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16249943



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release of phoenix-thirdparty 1.0.0

2020-10-20 Thread Ankit Singhal
+1 (binding)

* License and Notice - OK
* Build from the source - OK
* Used the above-created artifact 1.0.0 and build the Phoenix master - OK
* Signature/checksums - Ok
* License header in files - Ok
* Changes and ReleaseNotes - Ok

Note: Signatures were verified using
your key 825203A70405BC83AECF5F7D97351C1B794433C7.

Regards,
Ankit Singhal

On Wed, Oct 14, 2020 at 12:05 AM Istvan Toth  wrote:

> Please vote on this Apache phoenix-thirdparty release candidate,
> phoenix-thirdparty-1.0.0RC0
>
> The VOTE will remain open for at least 72 hours.
>
> [ ] +1 Release this package as Apache phoenix-thirdparty 1.0.0
> [ ] -1 Do not release this package because ...
>
> The tag to be voted on is 1.0.0RC0
>
>   https://github.com/apache/phoenix-thirdparty/tree/1.0.0RC0
>
> The release files, including signatures, digests, as well as CHANGES.md
> and RELEASENOTES.md included in this RC can be found at:
>
>
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-thirdparty-1.0.0RC0/
>
> Artifacts were signed with the ${GPG_KEY} key which can be found in:
>
>   https://dist.apache.org/repos/dist/release/phoenix/KEYS
>
> To learn more about Apache Phoenix, please see
>
>   http://phoenix.apache.org/
>
> Thanks,
> Istvan
>


[jira] [Resolved] (PHOENIX-6196) Update phoenix.mutate.maxSizeBytes to accept long values

2020-10-20 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6196.

Fix Version/s: 4.16.0
   5.1.0
   Resolution: Fixed

> Update phoenix.mutate.maxSizeBytes to accept long values
> 
>
> Key: PHOENIX-6196
> URL: https://issues.apache.org/jira/browse/PHOENIX-6196
> Project: Phoenix
>  Issue Type: Task
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> Currently, the config "phoenix.mutate.maxSizeBytes" accepts its value as an 
> int, so a user can only provide up to 2GB, but there are some scenarios, like 
> UPSERT SELECT from a temp table with large rows, where the user may want to 
> set a larger value when auto-commit is off. 
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6196) Update phoenix.mutate.maxSizeBytes to accept long values

2020-10-19 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6196:
--

 Summary: Update phoenix.mutate.maxSizeBytes to accept long values
 Key: PHOENIX-6196
 URL: https://issues.apache.org/jira/browse/PHOENIX-6196
 Project: Phoenix
  Issue Type: Task
Reporter: Ankit Singhal
Assignee: Ankit Singhal


Currently, the config "phoenix.mutate.maxSizeBytes" accepts its value as an 
int, so a user can only provide up to 2GB, but there are some scenarios, like 
UPSERT SELECT from a temp table with large rows, where the user may want to 
set a larger value when auto-commit is off. 
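
A minimal sketch of the change (illustrative only; the actual Phoenix code
reads this property through its query-services configuration, and the default
shown here is not authoritative):

{code:java}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// before: read as an int, capped at Integer.MAX_VALUE (~2GB)
//   int maxSizeBytes = conf.getInt("phoenix.mutate.maxSizeBytes", 104857600);
// after: read as a long, so values above 2GB are accepted
long maxSizeBytes = conf.getLong("phoenix.mutate.maxSizeBytes", 104857600L);
{code}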

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Releasing Phoenix-thirdparty 1.0

2020-10-01 Thread Ankit Singhal
Sounds good to me.

On Wed, Sep 30, 2020 at 9:48 AM Istvan Toth  wrote:

> Hi!
>
> We have been using phoenix-thirdparty snapshots in omid for a few weeks,
> and in phoenix (master) for about a week, and so far we haven't found any
> issues stemming from using Phoenix-thirdparty (effectively thirdparty
> Guava, the only current component).
>
> I think that we are ready to release Phoenix-thirdparty 1.0 now.
>
> My current plan is to cut an rc0 sometime next week, unless problems arise,
> or someone voices an objection.
>
> Looking forward to hearing your opinion and feedback.
>
> regards
> Istvan
>


[jira] [Resolved] (PHOENIX-6034) Optimize InListIT

2020-08-25 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6034.

Fix Version/s: 4.16.0
   5.1.0
   Resolution: Fixed

> Optimize InListIT
> -
>
> Key: PHOENIX-6034
> URL: https://issues.apache.org/jira/browse/PHOENIX-6034
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is just an attempt to take one test from Lars' 
> [list|https://www.mail-archive.com/dev@phoenix.apache.org/msg57310.html] 
> and improve its performance.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] EOL support for HBase 1.3

2020-08-24 Thread Ankit Singhal
+1 for planning EOL support for HBase 1.3, but can we do it after 4.16
release? I feel it because of the following reasons:-

(including user  )

* It will be an advance notice to the users who have been waiting for fixes
in 4.16
   release for a long time but are on HBase-1.3.

* As work is already done to support 1.3 through shim, so why not release
   it if it is possible with minimal effort.

* We will have code for 1.3 shim available in release tag, then in future,
  if someone volunteers to support 1.3(for a minor release or 4.16 point
release) doesn't
  need to revert the set of commits which brings back this shim layer,
  instead can use the tag and go with it.

Regards,
Ankit Singhal


On Mon, Aug 24, 2020 at 10:04 AM Geoffrey Jacoby  wrote:

> The HBase community has just unanimously EOLed HBase 1.3.
>
> As I recall, 1.3 has some significant differences with 1.4+ that we have to
> workaround via compatibility shim, particularly with metrics and changes to
> the HBase Scan API. (In particular, the scan API changes are difficult to
> handle even with the shim.)
>
> Should we simplify our 4.x branch by removing 1.3 support for the upcoming
> 4.16 release?
>
> Geoffrey
>


[jira] [Commented] (TEPHRA-304) Remove Support for Java 7

2020-08-08 Thread Ankit Singhal (Jira)


[ 
https://issues.apache.org/jira/browse/TEPHRA-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17173715#comment-17173715
 ] 

Ankit Singhal commented on TEPHRA-304:
--

{quote}This change means that the Phoenix 4.x branch cannot move to 0.16 ever, 
as it is stuck on Java 7.

Maybe reconsider this, and keep the test workarounds ?
{quote}
Agreed with [~stoty]: due to the dependency on the HBase runtime, we can't 
upgrade the JDK for the 4.x branches of Phoenix. Considering that Tephra is 
heavily used in Phoenix, which has benefitted from each subsequent Tephra 
release, I also think we should reconsider this. It also seems that the 
problem can be worked around by just adding an option to use the TLS 1.2 
protocol.
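
A minimal sketch of that workaround, assuming the failures come from 
HttpsURLConnection handshakes (Java 7 supports TLS 1.2 but does not enable it 
by default on the client side):

{code:java}
// Can be passed in the surefire argLine as -Dhttps.protocols=TLSv1.2, or set
// programmatically before any HTTPS connection is opened:
System.setProperty("https.protocols", "TLSv1.2");
{code}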

> Remove Support for Java 7
> -
>
> Key: TEPHRA-304
> URL: https://issues.apache.org/jira/browse/TEPHRA-304
> Project: Phoenix Tephra
>  Issue Type: Improvement
>Reporter: Andreas Neumann
>Assignee: Andreas Neumann
>Priority: Major
> Fix For: 0.16.0-incubating
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Java 7 has long reached the end of its life support, yet Tephra still
> supports and tests with Java 7. After the recent change to use HTTPS, Java
> 7 causes repeated test failures, caused by its lack of support for TLS 1.2 (
> [https://central.sonatype.org/articles/2018/May/04/discontinued-support-for-tlsv11-and-below/])
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6023) Wrong result when issuing query for an immutable table with multiple column families

2020-07-21 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6023.

Fix Version/s: 4.16.0
   5.1.0
   Resolution: Fixed

> Wrong result when issuing query for an immutable table with multiple column 
> families
> 
>
> Key: PHOENIX-6023
> URL: https://issues.apache.org/jira/browse/PHOENIX-6023
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Steps to reproduce are as follows:
> 1. Create an immutable table with multiple column families:
> {code}
> 0: jdbc:phoenix:> CREATE TABLE TEST (
> . . . . . . . . >   ID VARCHAR PRIMARY KEY,
> . . . . . . . . >   A.COL1 VARCHAR,
> . . . . . . . . >   B.COL2 VARCHAR
> . . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
> No rows affected (1.182 seconds)
> {code}
> 2. Upsert some rows:
> {code}
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
> 1 row affected (0.138 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id2', '2', 'b');
> 1 row affected (0.011 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id3', '3', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id4', '4', 'c');
> 1 row affected (0.006 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id5', '5', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id6', '6', 'd');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id7', '7', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id8', '8', 'e');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id9', '9', NULL);
> 1 row affected (0.009 seconds)
> {code}
> 3. Count query is okay:
> {code}
> 0: jdbc:phoenix:> SELECT COUNT(COL1) FROM TEST WHERE COL2 IS NOT NULL;
> ++
> | COUNT(A.COL1)  |
> ++
> | 5  |
> ++
> 1 row selected (0.1 seconds)
> {code}
> 4. However, the following select query returns wrong result (it should return 
> 5 records):
> {code}
> 0: jdbc:phoenix:> SELECT COL1 FROM TEST WHERE COL2 IS NOT NULL;
> +---+
> | COL1  |
> +---+
> | 0 |
> | 1 |
> | 2 |
> | 3 |
> | 4 |
> | 5 |
> | 6 |
> | 7 |
> | 8 |
> | 9 |
> +---+
> 10 rows selected (0.058 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6034) Optimize InListIT

2020-07-21 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6034:
--

 Summary: Optimize InListIT
 Key: PHOENIX-6034
 URL: https://issues.apache.org/jira/browse/PHOENIX-6034
 Project: Phoenix
  Issue Type: Improvement
  Components: core
Reporter: Ankit Singhal
Assignee: Ankit Singhal


This is just an attempt to take one test from Lars' 
[list|https://www.mail-archive.com/dev@phoenix.apache.org/msg57310.html] and 
improve its performance.

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [Discuss] Twill removal and Guava update plan

2020-06-09 Thread Ankit Singhal
Thanks, Istvan for putting up a detailed plan

+1 for removing twill(as I can see it is recently moved to Attic) and
adding whatever convenience it's bringing in Omid/tephra project itself.

Instead of creating a shaded artifact of Guava and then refer it in Omid
and Tephra, can't we just short-circuit your step 2 with 3 and use
relocation of the maven-shaded plugin to produce the shaded artifacts for
Tephra/Omid as we are anyways shading everything including public API?



On Tue, Jun 2, 2020 at 8:25 AM Josh Elser  wrote:

> Sounds like a well-thought-out plan to me.
>
> If we're going through and changing Guava, it may also be worthwhile to
> try to eliminate the use of Guava in our "public API". While the shaded
> guava eliminates classpath compatibility issues, Guava could (at any
> point) drop a class that we're using in our API and still break us. That
> could be a "later" thing.
>
> The only thing I think differently is that 4.x could (at some point)
> pick up the shaded guava artifact you describe and make the change.
> However, that's just for the future -- the call can be made if/when
> someone wants to do that :)
>
> On 6/2/20 10:01 AM, Istvan Toth wrote:
> > Hi!
> >
> > There are two related dependency issues that I believe should be solved in
> > Phoenix to keep it healthy and supportable.
> >
> > The Twill project has been officially terminated. Both Tephra and Omid
> > depend on it, and so transitively Phoenix does as well.
> >
> > Hadoop 3.3 has updated its Guava to 29, while Phoenix (master) is still on 13.
> > None of Twill, Omid, Tephra, or Phoenix will run or compile against recent
> > Guava versions, which are pulled in by Hadoop 3.3.
> >
> > If we want to work with Hadoop 3.3, we either have to update all
> > dependencies to a recent Guava version, or we have to build our artifacts
> > with shaded Guava.
> > Since Guava 13 has known vulnerabilities, including it in the classpath
> > creates a barrier to adoption. Some potential Phoenix users consider
> > dependencies with known vulnerabilities a show-stopper; they do not care
> > whether the vulnerability affects Phoenix or not.
> >
> > I propose that we take the following steps to ensure compatibility with
> > upcoming Hadoop versions:
> >
> > *1. Remove the Twill dependency from Omid and Tephra*
> > It is generally not healthy to depend on abandoned projects, and the fact
> > that Twill also depends (heavily) on Guava 13 makes removal the best solution.
> > As far as I can see, Omid and Tephra mostly use the ZK client from Twill,
> > as well as the (transitively included) Guava service model.
> > Refactoring to use the native ZK client, and to use the Guava service
> > classes directly, shouldn't be too difficult.
> >
> > *2. Create a shaded Guava artifact for Omid and Tephra*
> > Since Omid and Tephra need to work with Hadoop 2 and Hadoop 3 (including
> > the upcoming Hadoop 3.3), which already pull in Guava, we need to use a
> > different Guava internally
> > (similar to the hbase-thirdparty solution, but we need a separate one).
> > This artifact could live under the Phoenix groupId, but we'll need to be
> > careful with circular dependencies.
> >
> > *3. Update Omid and Tephra to use the shaded Guava artifact*
> > Apart from handling the mostly trivial, "let's break API compatibility for
> > the heck of it" Guava changes, the Guava Service API that both Omid and
> > Tephra build on has changed significantly.
> > This will mean changes in the public (Phoenix-facing) APIs. All Guava
> > references will have to be replaced with the shaded Guava classes from
> > step 2.
> >
> > *4. Define self-contained public APIs for Omid and Tephra*
> > To break the public APIs' dependency on Guava, redefine the public APIs in
> > such a way that they do not have Guava classes as ancestors.
> > This doesn't mean that we decouple the internal implementation from Guava;
> > simply defining a set of Java interfaces that match the existing (updated
> > to the recent Guava Service API) interfaces' signatures, but are
> > self-contained under the Tephra/Omid namespace, should do the trick.
> >
> > *5. Update Phoenix to use the new Omid/Tephra API*
> > i.e. use the new interfaces that we defined in step 4.
> >
> > *6. Update Phoenix to work with Guava 13-29*
> > We need to somehow get Phoenix to work with both old and new Guava.
> > Probably the least disruptive way to do this is to reduce Guava use to the
> > common subset of 13.0 and 29.0, and replace/reimplement the parts that
> > cannot be resolved.
> > Alternatively, we could rebase to the same shaded guava-thirdparty library
> > that we use for Omid and Tephra.
> >
> > For *4.x*, since we can never get rid of Guava 13, *step 6* is not
> > necessary.
> >
> > I am very interested in your opinion on the above plan.
> > Does anyone have any objections?
> > Does anyone have a better solution?
> > Is there some hidden pitfall that I hadn't considered (there certainly
> 

[jira] [Updated] (PHOENIX-5884) Join query return empty result when filters for both the tables are present

2020-05-20 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5884:
---
Attachment: PHOENIX-5884.master.v3.patch

> Join query return empty result when filters for both the tables are present
> ---
>
> Key: PHOENIX-5884
> URL: https://issues.apache.org/jira/browse/PHOENIX-5884
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5884.master.v2.patch, 
> PHOENIX-5884.master.v3.patch, PHOENIX-5884_v1.patch
>
>
> Let's assume the DDL is the same for both tables involved in the join:
> {code}
> CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT 
> NULL,id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) 
> NOT NULL, id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts 
> TIMESTAMP ,CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
> KEY(id1,id2,id3,id4,id5,id6,id7))
> {code}
> The following query returns the right results:
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and 
> m.id2 = r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and 
> m.ts = r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and 
> r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}
> But when, to optimize the query, filters for the left table are also added, 
> the query returns an empty result, even though the added filters are based on 
> the join condition, so the above query and the query below should be 
> semantically the same.
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
> m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and 
> m.ts = r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN 
> ('201904','201905') and r.id2 = 'ID2_VAL' and m.id3 IN 
> ('ID3_VAL','ID3_VAL2')  and r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5884) Join query return empty result when filters for both the tables are present

2020-05-05 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5884:
---
Attachment: PHOENIX-5884_v1.patch

> Join query return empty result when filters for both the tables are present
> ---
>
> Key: PHOENIX-5884
> URL: https://issues.apache.org/jira/browse/PHOENIX-5884
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5884_v1.patch
>
>
> Let's assume the DDL is the same for both tables involved in the join:
> {code}
> CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT 
> NULL,id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) 
> NOT NULL, id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts 
> TIMESTAMP ,CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
> KEY(id1,id2,id3,id4,id5,id6,id7))
> {code}
> The following query returns the right results:
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and 
> m.id2 = r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and 
> m.ts = r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and 
> r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}
> But when, to optimize the query, filters for the left table are also added, 
> the query returns an empty result, even though the added filters are based on 
> the join condition, so the above query and the query below should be 
> semantically the same.
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
> m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and 
> m.ts = r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN 
> ('201904','201905') and r.id2 = 'ID2_VAL' and m.id3 IN 
> ('ID3_VAL','ID3_VAL2')  and r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5884) Join query return empty result when filters for both the tables are present

2020-05-05 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5884:
---
Description: 
Let's assume the DDL is the same for both tables involved in the join:
{code}
CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT NULL,  
  id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) NOT NULL,   
  id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts TIMESTAMP ,
CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
KEY(id1,id2,id3,id4,id5,id6,id7))
{code}
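
For completeness, a sketch of the matching right-side DDL (the issue only states that both DDLs are identical; the table and constraint names here are assumed):
{code}
CREATE TABLE RightTable (id1 CHAR(6) NOT NULL, id2 VARCHAR(22) NOT NULL,
  id3 VARCHAR(12) NOT NULL, id4 CHAR(2) NOT NULL, id5 CHAR(6) NOT NULL,
  id6 VARCHAR(200) NOT NULL, id7 VARCHAR(50) NOT NULL, ts TIMESTAMP,
CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE_R PRIMARY
KEY(id1,id2,id3,id4,id5,id6,id7))
{code}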

The following query returns the right results:
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and m.id2 
= r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and m.ts = 
r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and r.id3 IN 
('ID3_VAL','ID3_VAL2') 
{code}

But when, to optimize the query, filters for the left table are also added, the 
query returns an empty result, even though the added filters are based on the 
join condition, so the above query and the query below should be semantically 
the same.
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and m.ts 
= r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN ('201904','201905') 
and r.id2 = 'ID2_VAL' and m.id3 IN ('ID3_VAL','ID3_VAL2')  and 
r.id3 IN ('ID3_VAL','ID3_VAL2') 
{code}

  was:
Let's assume the DDL is the same for both tables involved in the join:
{code}
CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT NULL,  
  id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) NOT NULL,   
  id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts TIMESTAMP ,
CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
KEY(id1,id2,id3,id4,id5,id6,id7))
{code}

The following query returns the right results:
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and m.id2 
= r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and m.ts = 
r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and r.id3 IN 
('ID3_VAL','ID3_VAL2') 
{code}

But when, to optimize the query, filters for the left table are also added, the 
query returns an empty result, even though the added filters are based on the 
join condition, so the above query and the query below should be semantically 
the same.
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and m.ts 
= r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN ('201904','201905') 
and r.id2 = 'ID2_VAL' and m.id3 IN ('ID3_VAL','ID3_VAL2')  and 
r.id3 IN ('ID3_VAL','ID3_VAL2') 
{code}


> Join query return empty result when filters for both the tables are present
> ---
>
> Key: PHOENIX-5884
> URL: https://issues.apache.org/jira/browse/PHOENIX-5884
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
>
> Let's assume the DDL is the same for both tables involved in the join:
> {code}
> CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT 
> NULL,id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) 
> NOT NULL, id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts 
> TIMESTAMP ,CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
> KEY(id1,id2,id3,id4,id5,id6,id7))
> {code}
> The following query returns the right results:
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and 
> m.id2 = r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and 
> m.ts = r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and 
> r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}
> But when, to optimize the query, filters for the left table are also added, 
> the query returns an empty result, even though the added filters are based on 
> the join condition, so the above query and the query below should be 
> semantically the same.
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
> m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and 
> m.ts = r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN 
> ('201904','201905') and r.id2 = 'ID2_VAL' and m.id3 IN 
> ('ID3_VAL','ID3_VAL2')  and r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5884) Join query return empty result when filters for both the tables are present

2020-05-05 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-5884:
--

 Summary: Join query return empty result when filters for both the 
tables are present
 Key: PHOENIX-5884
 URL: https://issues.apache.org/jira/browse/PHOENIX-5884
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal


Let's assume the DDL is the same for both tables involved in the join:
{code}
CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT NULL,  
  id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) NOT NULL,   
  id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts TIMESTAMP ,
CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
KEY(id1,id2,id3,id4,id5,id6,id7))
{code}

The following query returns the right results:
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and m.id2 
= r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and m.ts = 
r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and r.id3 IN 
('ID3_VAL','ID3_VAL2') 
{code}

But when, to optimize the query, filters for the left table are also added, the 
query returns an empty result, even though the added filters are based on the 
join condition, so the above query and the query below should be semantically 
the same.
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and m.ts 
= r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN ('201904','201905') 
and r.id2 = 'ID2_VAL' and m.id3 IN ('ID3_VAL','ID3_VAL2')  and 
r.id3 IN ('ID3_VAL','ID3_VAL2') 
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Are we embracing Java 8?

2020-04-22 Thread Ankit Singhal
Just linking a discussion we had on the same topic one and a half years back [1].
Considering that nothing has changed since then (Java 7 was EOLed in July 2015
and Phoenix 5.0 was already out), it mostly comes down to the inconvenience of
working with the old code style and the extra effort required to create patches
for each branch.

Let's say we decide to upgrade to Java 8 (Option 3); don't we then require the
following changes?

* Major version upgrade from 4.x-HBase-1.x to 5.x-HBase-1.x: as upgrading the
4.x branches to Java 8 breaks compatibility with the HBase and Java runtimes,
we need to ensure that we adhere to dependency compatibility between minor
releases, since a Phoenix minor upgrade is expected to be just a server jar
drop and a restart (even though we don't explicitly cover the Java runtime in
our backward-compatibility guarantees [2] as HBase does [3], people are used
to it by now).

* Release notes and convenience libraries: though we would say a release is
compatible with HBase 1.x but requires Java 8, we need to be explicit with our
convenience libraries as well, e.g. append "jdk8" or something similar to the
name (phoenix-server-HBase-1.x-jdk8.jar), and also provide clarity on the
version upgrade.

* Avoiding issues at runtime: though Java 7 and Java 8 are said to be binary
compatible, and Java 7 binaries are expected to be fine on a Java 8 runtime,
it has been called out that there are rare instances which can cause issues
due to some incompatibilities between the JRE and JDK [4] (as Andrew also
called out and might have observed).


Though I agree with Istvan that creating multiple patches, and dealing with a
change in code style every time we switch branches, puts an additional load on
contributors, IMHO we should wait for HBase 1.x to upgrade its Java version
before we do, to avoid some of the issues listed above for Option 3.

Option 2 would be preferable at the time we decide whether we want to diverge
from feature parity in our major releases and do only limited fixes for the
4.x branches. So basically I'm also in favor of Option 1 (continuing with Java
7 for HBase-1.x, and keeping the code style as consistent as possible with 5.x).

[1]
https://lists.apache.org/thread.html/970db0b5cb0560c49654e450aafb8fb759596384cefe4f02808e80cc%40%3Cdev.phoenix.apache.org%3E
[2]http://phoenix.apache.org/upgrading.html
[3] https://hbase.apache.org/book.html#hbase.versioning
[4]
https://www.oracle.com/technetwork/java/javase/8-compatibility-guide-2156366.html#A999387

Regards,
Ankit Singhal


Re: [DISCUSS] Add components for sub-projects to maven

2020-03-25 Thread Ankit Singhal
+1 Agreed, I think adding these components would be really helpful.

How about adding some more, one level deeper, for the connectors, like:

   - flume
   - spark
   - hive
   - kafka
   - pig
   - presto

so that people who are more versed in, or interested in, certain components
can keep a watch on the section of JIRAs where they want to actively
contribute with reviews and ideas.

Regards,
Ankit Singhal



On Wed, Mar 25, 2020 at 2:13 PM Istvan Toth  wrote:

> Hi!
>
> Phoenix has been split into sub projects, and adopted some projects, so I
> think that it is time to reflect this in Jira as well.
>
> I propose adding the following components to the project (one per repo)
>
>- core
>- queryserver
>- connectors
>- tephra
>- omid
>
>  What do you think ?
>
> This is tracked in https://issues.apache.org/jira/browse/PHOENIX-5781 , I
> just wanted to get some opinions on this, hence this thread.
>
> regards
>
> Istvan
>


Re: [ANNOUNCE] New Phoenix committer Gokcen Iskender

2020-02-13 Thread Ankit Singhal
Congratulations Gokcen!


On Wed, Feb 12, 2020 at 4:08 PM Josh Elser  wrote:

> Congratulations and welcome, Gokcen!
>
> On 2/10/20 1:55 PM, Geoffrey Jacoby wrote:
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Gokcen
> > Iskender has accepted our invitation to become a committer on the Phoenix
> > project. Gokcen has contributed many features and bug fixes as part of
> our
> > rewrite of secondary global indexes, and presented on these changes at
> the
> > NoSQL Day of last year's DataWorks Summit. She's also been an active
> > reviewer and tester on others' patches.
> >
> > We appreciate Gokcen's many contributions and look forward to her
> continued
> > involvement. Welcome!
> >
> > Geoffrey Jacoby
> >
>


[jira] [Updated] (PHOENIX-5691) create index is failing when phoenix acls enabled and ranger is enabled

2020-01-23 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5691:
---
Summary: create index is failing when phoenix acls enabled and ranger is 
enabled  (was: create index is failing when phoenix acls enabled when ranger is 
enabled)

> create index is failing when phoenix acls enabled and ranger is enabled
> ---
>
> Key: PHOENIX-5691
> URL: https://issues.apache.org/jira/browse/PHOENIX-5691
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-5691.patch
>
>
> CREATE INDEX fails with the following exception when Phoenix ACLs are enabled:
> {noformat}
> <property>
>   <name>phoenix.acls.enabled</name>
>   <value>true</value>
> </property>
> {noformat}
> {noformat}
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.ClassCastException: 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl cannot be cast to 
> com.google.protobuf.RpcController
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:103)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:603)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16537)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8305)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2497)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2479)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl cannot be cast to 
> com.google.protobuf.RpcController
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.getUserPermsFromUserDefinedAccessController(PhoenixAccessController.java:448)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:431)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:418)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:515)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:496)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:192)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.getUserPermissions(PhoenixAccessController.java:418)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:498)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preGetTable(PhoenixAccessController.java:116)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:157)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:154)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:87)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:107)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMet

[jira] [Updated] (PHOENIX-5691) create index is failing when phoenix acls enabled when ranger is enabled

2020-01-23 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5691:
---
Summary: create index is failing when phoenix acls enabled when ranger is 
enabled  (was: create index is failing when phoenix acls enabled.)

> create index is failing when phoenix acls enabled when ranger is enabled
> 
>
> Key: PHOENIX-5691
> URL: https://issues.apache.org/jira/browse/PHOENIX-5691
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-5691.patch
>
>
> CREATE INDEX fails with the following exception when Phoenix ACLs are enabled:
> {noformat}
> <property>
>   <name>phoenix.acls.enabled</name>
>   <value>true</value>
> </property>
> {noformat}
> {noformat}
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.ClassCastException: 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl cannot be cast to 
> com.google.protobuf.RpcController
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:103)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:603)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16537)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8305)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2497)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2479)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl cannot be cast to 
> com.google.protobuf.RpcController
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.getUserPermsFromUserDefinedAccessController(PhoenixAccessController.java:448)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:431)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:418)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:515)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:496)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:192)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.getUserPermissions(PhoenixAccessController.java:418)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:498)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preGetTable(PhoenixAccessController.java:116)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:157)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:154)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:87)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:107)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocess

[jira] [Resolved] (PHOENIX-5594) Different permission of phoenix-*-queryserver.log from umask

2019-11-29 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-5594.

Resolution: Fixed

> Different permission of phoenix-*-queryserver.log from umask
> 
>
> Key: PHOENIX-5594
> URL: https://issues.apache.org/jira/browse/PHOENIX-5594
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.15.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The permission of phoenix-*-queryserver.log is different from the umask we set.
> For example, when we set umask to 077, the permission of 
> phoenix-*-queryserver.log should be 600, but it's 666:
> {code}
> $ umask 077
> $ /bin/queryserver.py start
> starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
> $ ll /var/log/hbase/phoenix*
> -rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log
> -rw--- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out
> {code}
> It looks like the permission of phoenix-*-queryserver.out is correct (600).
> queryserver.py launches the QueryServer process as a subprocess, but it looks 
> like the umask is not inherited. I think we need to propagate the umask to the 
> subprocess.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5594) Different permission of phoenix-*-queryserver.log from umask

2019-11-29 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5594:
---
Fix Version/s: 4.15.0

> Different permission of phoenix-*-queryserver.log from umask
> 
>
> Key: PHOENIX-5594
> URL: https://issues.apache.org/jira/browse/PHOENIX-5594
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.15.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The permission of phoenix-*-queryserver.log is different from the umask we set.
> For example, when we set umask to 077, the permission of 
> phoenix-*-queryserver.log should be 600, but it's 666:
> {code}
> $ umask 077
> $ /bin/queryserver.py start
> starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
> $ ll /var/log/hbase/phoenix*
> -rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log
> -rw--- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out
> {code}
> It looks like the permission of phoenix-*-queryserver.out is correct (600).
> queryserver.py launches the QueryServer process as a subprocess, but it looks 
> like the umask is not inherited. I think we need to propagate the umask to the 
> subprocess.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-5552) Hive against Phoenix gets 'Expecting "RPAREN", got "L"' in Tez mode

2019-11-13 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-5552.

Resolution: Fixed

> Hive against Phoenix gets 'Expecting "RPAREN", got "L"' in Tez mode
> ---
>
> Key: PHOENIX-5552
> URL: https://issues.apache.org/jira/browse/PHOENIX-5552
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: connectors-1.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Steps to reproduce are as follows:
> 1. Create a table that has a BIGINT column in Phoenix:
> {code:java}
> CREATE TABLE TBL (
>  COL1 VARCHAR PRIMARY KEY,
>  COL2 BIGINT
> );
> {code}
> 2. Create an external table in Hive against the table created in step 1:
> {code:java}
> create external table tbl (
>  col1 string,
>  col2 bigint
> )
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> TBLPROPERTIES (
>  "phoenix.table.name" = "TBL",
>  "phoenix.zookeeper.quorum" = ...,
>  "phoenix.zookeeper.znode.parent" = ...,
>  "phoenix.zookeeper.client.port" = "2181",
>  "phoenix.rowkeys" = "COL1",
>  "phoenix.column.mapping" = "col1:COL1,col2:COL2"
> );
> {code}
> 3. Issue a query against the Hive table, in Tez mode, with a condition on the 
> BIGINT column; the following error happens:
> {code:java}
> > select * from tbl where col2 = 100;
> Error: java.io.IOException: java.lang.RuntimeException: 
> org.apache.phoenix.exception.PhoenixParserException: ERROR 603 (42P00): 
> Syntax error. Unexpected input. Expecting "RPAREN", got "L" at line 1, column 
> 67. (state=,code=0)
> {code}
> In this case, the problem is that Hive passes the whereClause "col2=100L" (a 
> bigint value with an 'L' suffix) to Phoenix, but Phoenix can't accept bigint 
> values with the 'L' suffix, so the syntax error happens.
> We need to remove the 'L' suffix from bigint values when building Phoenix queries.
> It looks like this issue happens only in Tez mode.
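
To make the failure concrete, a minimal sketch of the two literal forms (the 'L'-suffixed form and the error code are taken from the report; the unsuffixed form is assumed to parse normally):
{code}
-- what Phoenix receives after Hive's Tez-mode rewrite:
SELECT * FROM TBL WHERE COL2 = 100L;  -- ERROR 603 (42P00): Syntax error
-- what Phoenix actually accepts:
SELECT * FROM TBL WHERE COL2 = 100;
{code}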



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5552) Hive against Phoenix gets 'Expecting "RPAREN", got "L"' in Tez mode

2019-11-13 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5552:
---
Fix Version/s: connectors-1.0.0

> Hive against Phoenix gets 'Expecting "RPAREN", got "L"' in Tez mode
> ---
>
> Key: PHOENIX-5552
> URL: https://issues.apache.org/jira/browse/PHOENIX-5552
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: connectors-1.0.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Steps to reproduce are as follows:
> 1. Create a table that has a BIGINT column in Phoenix:
> {code:java}
> CREATE TABLE TBL (
>  COL1 VARCHAR PRIMARY KEY,
>  COL2 BIGINT
> );
> {code}
> 2. Create an external table in Hive against the table created in step 1:
> {code:java}
> create external table tbl (
>  col1 string,
>  col2 bigint
> )
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> TBLPROPERTIES (
>  "phoenix.table.name" = "TBL",
>  "phoenix.zookeeper.quorum" = ...,
>  "phoenix.zookeeper.znode.parent" = ...,
>  "phoenix.zookeeper.client.port" = "2181",
>  "phoenix.rowkeys" = "COL1",
>  "phoenix.column.mapping" = "col1:COL1,col2:COL2"
> );
> {code}
> 3. Issue a query against the Hive table, in Tez mode, with a condition on the 
> BIGINT column; the following error happens:
> {code:java}
> > select * from tbl where col2 = 100;
> Error: java.io.IOException: java.lang.RuntimeException: 
> org.apache.phoenix.exception.PhoenixParserException: ERROR 603 (42P00): 
> Syntax error. Unexpected input. Expecting "RPAREN", got "L" at line 1, column 
> 67. (state=,code=0)
> {code}
> In this case, the problem is that Hive passes the whereClause "col2=100L" (a 
> bigint value with an 'L' suffix) to Phoenix, but Phoenix can't accept bigint 
> values with the 'L' suffix, so the syntax error happens.
> We need to remove the 'L' suffix from bigint values when building Phoenix queries.
> It looks like this issue happens only in Tez mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Accept Tephra and Omid podlings as Phoenix sub-projects

2019-10-30 Thread Ankit Singhal
+1

On Wed, Oct 30, 2019 at 10:44 AM Andrew Purtell  wrote:

> +0 (binding)
>
> I support this but it isn't my time I'd be committing to maintaining the
> new repos with a +1...
>
> On Wed, Oct 30, 2019 at 8:27 AM Josh Elser  wrote:
>
> > Hi,
> >
> > As was previously discussed[1][2], there is a motion to "adopt" the
> > Tephra and Omid podlings as sub-projects to Apache Phoenix. A
> > sub-project is a distinct software project from some top-level project
> > (TLP) but operates under the guidance of a TLP (e.g. the TLP's PMC)
> >
> > Per the Incubator guidelines[3], we need to have a formal vote to adopt
> > them. While we still need details from these two podlings, I believe we
> > can have a VOTE now to keep things moving.
> >
> > Actions:
> > * Phoenix will incorporate Omid and Tephra as sub-projects and they will
> > continue to function under the Apache Phoenix PMC guidance.
> > * Any current podling member may request to be added as a Phoenix
> > committer. Podling members would be expected to demonstrate the normal
> > level of commitment to grow from a committer to a PMC member.
> >
> > Stating what I see as an implicit decision (but to alleviate any
> > possible confusion): all new community members will be expected to
> > function in the way that Phoenix currently does today (e.g
> > review-then-commit). Future Omid and Tephra development would happen in
> > the same way that Phoenix development happens today.
> >
> > Please vote:
> >
> > +1: to accept Omid and Tephra as Phoenix sub-projects and to allow any
> > PPMC to become Phoenix committers.
> >
> > -1/-0/+0: No and why..
> >
> > Here is my +1 (binding)
> >
> > This vote will be open for at least 72hrs (2019/11/02 1600 UTC).
> >
> >
> > [1]
> >
> >
> https://lists.apache.org/thread.html/ec00615cbbb4225885e3776f1e8fd071600a9c50f35769f453b8a779@%3Cdev.phoenix.apache.org%3E
> > [2]
> >
> >
> https://lists.apache.org/thread.html/692a030a27067c20b9228602af502199cd4d80eb0aa8ed6461ebe1ee@%3Cgeneral.incubator.apache.org%3E
> > [3]
> >
> >
> https://incubator.apache.org/guides/graduation.html#graduating_to_a_subproject
> >
>
>
> --
> Best regards,
> Andrew
>
> Words like orphans lost among the crosstalk, meaning torn from truth's
> decrepit hands
>- A23, Crosstalk
>


[jira] [Updated] (PHOENIX-5506) Psql load fails with lower table name

2019-10-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5506:
---
Fix Version/s: 4.15.0

> Psql load fails with lower table name
> -
>
> Key: PHOENIX-5506
> URL: https://issues.apache.org/jira/browse/PHOENIX-5506
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Smarak
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5506.patch, PHOENIX-5506_v1.patch
>
>
> sqlline> CREATE TABLE "table_zyx" (EMPID INTEGER PRIMARY KEY, ENAME 
> VARCHAR(30));
> psql.py -t table_zyx  localhost:/hbase-unsecure Temp.csv
> {code}
> 19/10/03 20:13:18 INFO util.UpsertExecutor: Upserting SQL data with UPSERT  
> INTO table_zyx ("EMPID", "0"."ENAME") VALUES (?, ?)
> 19/10/03 20:13:18 DEBUG jdbc.PhoenixStatement: Reloading table TABLE_ZYX data 
> from server
> 19/10/03 20:13:18 DEBUG csv.CsvUpsertExecutor: Error on CSVRecord [1, a]
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=TABLE_ZYX
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:599)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:383)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:300)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:352)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:784)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:770)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:95)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:55)
>   at 
> org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
>   at 
> org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:217)
>   at 
> org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:182)
>   at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:307)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5506) Psql load fails with lower table name

2019-10-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5506:
---
Attachment: PHOENIX-5506_v1.patch

> Psql load fails with lower table name
> -
>
> Key: PHOENIX-5506
> URL: https://issues.apache.org/jira/browse/PHOENIX-5506
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Smarak
>    Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5506.patch, PHOENIX-5506_v1.patch
>
>
> sqlline> CREATE TABLE "table_zyx" (EMPID INTEGER PRIMARY KEY, ENAME 
> VARCHAR(30));
> psql.py -t table_zyx  localhost:/hbase-unsecure Temp.csv
> {code}
> 19/10/03 20:13:18 INFO util.UpsertExecutor: Upserting SQL data with UPSERT  
> INTO table_zyx ("EMPID", "0"."ENAME") VALUES (?, ?)
> 19/10/03 20:13:18 DEBUG jdbc.PhoenixStatement: Reloading table TABLE_ZYX data 
> from server
> 19/10/03 20:13:18 DEBUG csv.CsvUpsertExecutor: Error on CSVRecord [1, a]
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=TABLE_ZYX
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:599)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:383)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:300)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:352)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:784)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:770)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:95)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:55)
>   at 
> org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
>   at 
> org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:217)
>   at 
> org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:182)
>   at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:307)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5506) Psql load fails with lower table name

2019-10-03 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5506:
---
Attachment: PHOENIX-5506.patch

> Psql load fails with lower table name
> -
>
> Key: PHOENIX-5506
> URL: https://issues.apache.org/jira/browse/PHOENIX-5506
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Smarak
>    Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5506.patch
>
>
> sqlline> CREATE TABLE "table_zyx" (EMPID INTEGER PRIMARY KEY, ENAME 
> VARCHAR(30));
> psql.py -t table_zyx  localhost:/hbase-unsecure Temp.csv
> {code}
> 19/10/03 20:13:18 INFO util.UpsertExecutor: Upserting SQL data with UPSERT  
> INTO table_zyx ("EMPID", "0"."ENAME") VALUES (?, ?)
> 19/10/03 20:13:18 DEBUG jdbc.PhoenixStatement: Reloading table TABLE_ZYX data 
> from server
> 19/10/03 20:13:18 DEBUG csv.CsvUpsertExecutor: Error on CSVRecord [1, a]
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=TABLE_ZYX
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:599)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:383)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:300)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:352)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:784)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:770)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:95)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:55)
>   at 
> org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
>   at 
> org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:217)
>   at 
> org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:182)
>   at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:307)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5506) Psql load fails with lower table name

2019-10-03 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5506:
---
Reporter: Smarak  (was: Ankit Singhal)

> Psql load fails with lower table name
> -
>
> Key: PHOENIX-5506
> URL: https://issues.apache.org/jira/browse/PHOENIX-5506
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Smarak
>    Assignee: Ankit Singhal
>Priority: Major
>
> sqlline> CREATE TABLE "table_zyx" (EMPID INTEGER PRIMARY KEY, ENAME 
> VARCHAR(30));
> psql.py -t table_zyx  localhost:/hbase-unsecure Temp.csv
> {code}
> 19/10/03 20:13:18 INFO util.UpsertExecutor: Upserting SQL data with UPSERT  
> INTO table_zyx ("EMPID", "0"."ENAME") VALUES (?, ?)
> 19/10/03 20:13:18 DEBUG jdbc.PhoenixStatement: Reloading table TABLE_ZYX data 
> from server
> 19/10/03 20:13:18 DEBUG csv.CsvUpsertExecutor: Error on CSVRecord [1, a]
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=TABLE_ZYX
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:599)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:383)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:300)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:352)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:784)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:770)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:95)
>   at 
> org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:55)
>   at 
> org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
>   at 
> org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:217)
>   at 
> org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:182)
>   at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:307)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5506) Psql load fails with lower table name

2019-10-03 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-5506:
--

 Summary: Psql load fails with lower table name
 Key: PHOENIX-5506
 URL: https://issues.apache.org/jira/browse/PHOENIX-5506
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal


sqlline> CREATE TABLE "table_zyx" (EMPID INTEGER PRIMARY KEY, ENAME 
VARCHAR(30));

psql.py -t table_zyx  localhost:/hbase-unsecure Temp.csv

{code}
19/10/03 20:13:18 INFO util.UpsertExecutor: Upserting SQL data with UPSERT  
INTO table_zyx ("EMPID", "0"."ENAME") VALUES (?, ?)
19/10/03 20:13:18 DEBUG jdbc.PhoenixStatement: Reloading table TABLE_ZYX data 
from server
19/10/03 20:13:18 DEBUG csv.CsvUpsertExecutor: Error on CSVRecord [1, a]
org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
undefined. tableName=TABLE_ZYX
at 
org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:599)
at 
org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:391)
at 
org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:383)
at 
org.apache.phoenix.compile.FromCompiler.getResolverForMutation(FromCompiler.java:300)
at 
org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:352)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:784)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:770)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
at 
org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:95)
at 
org.apache.phoenix.util.csv.CsvUpsertExecutor.execute(CsvUpsertExecutor.java:55)
at 
org.apache.phoenix.util.UpsertExecutor.execute(UpsertExecutor.java:133)
at 
org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:217)
at 
org.apache.phoenix.util.CSVCommonsLoader.upsert(CSVCommonsLoader.java:182)
at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:307)
{code}
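
The underlying identifier semantics, as a short sketch (standard Phoenix case folding; not part of the report itself):
{code}
CREATE TABLE "table_zyx" (EMPID INTEGER PRIMARY KEY, ENAME VARCHAR(30)); -- double quotes preserve case
SELECT * FROM table_zyx;     -- unquoted identifiers fold to TABLE_ZYX -> TableNotFoundException
SELECT * FROM "table_zyx";   -- matches the lowercase table
{code}
psql.py resolving the unquoted -t argument the same way would be consistent with the TABLE_ZYX lookup in the log above.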





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New Phoenix PMC Member: Chinmay Kulkarni

2019-08-26 Thread Ankit Singhal
Congratulations Chinmay !!

On Mon, Aug 26, 2019 at 1:33 PM Thomas D'Silva  wrote:

> Congrats Chinmay, well deserved!
>
> On Mon, Aug 26, 2019 at 11:00 AM Vincent Poon 
> wrote:
>
> > Congrats, Chinmay!
> >
> > On Mon, Aug 26, 2019 at 10:34 AM Geoffrey Jacoby 
> > wrote:
> >
> > > Each Apache project is governed by a Project Management Committee, or
> > PMC.
> > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> Chinmay
> > > Kulkarni has accepted our invitation to join. Chinmay has been a
> > dedicated
> > > contributor and committer, and has recently volunteered to be RM for
> our
> > > upcoming major 4.15 and 5.1 releases.
> > >
> > > Please join me in welcoming Chinmay!
> > >
> > > Geoffrey Jacoby
> > >
> >
>


[jira] [Resolved] (PHOENIX-4741) Shade disruptor dependency

2019-07-28 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4741.

Resolution: Not A Problem

> Shade disruptor dependency 
> ---
>
> Key: PHOENIX-4741
> URL: https://issues.apache.org/jira/browse/PHOENIX-4741
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Jungtaek Lim
>    Assignee: Ankit Singhal
>Priority: Major
>
> We should shade the disruptor dependency to avoid conflicts with the versions 
> used by other frameworks like Storm, Hive, etc.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (PHOENIX-3541) Bulk Data Loading - Can't use table name by small letter

2019-07-10 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3541.

   Resolution: Fixed
 Assignee: Karthik Palanisamy  (was: Kalyan)
Fix Version/s: 5.1.0
   4.15.0

> Bulk Data Loading - Can't use table name by small letter 
> -
>
> Key: PHOENIX-3541
> URL: https://issues.apache.org/jira/browse/PHOENIX-3541
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Beomjun Kim
>Assignee: Karthik Palanisamy
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-3541.master.v1.patch, 
> PHOENIX-3541.master.v2.patch, PHOENIX-3541.patch
>
>
> I have an existing Phoenix table abc and want to bulk-load data into it via 
> MapReduce.
> I used the following command to load the csv file:
> hadoop jar 
> /root/Phoenix/apache-phoenix-4.8.0-HBase-0.98-bin/phoenix-4.8.0-HBase-0.98-client.jar
>  org.apache.phoenix.mapreduce.CsvBulkLoadTool   --t  abc --input /example.csv
> but it does not seem to find the table abc:
> Exception in thread "main" java.lang.IllegalArgumentException: Table ABC not 
> found
> I tried changing the table name to --t 'abc' and --t "abc",
> but it doesn't work.
> How can I use a lowercase table name?
> I also found the same case at 
> http://apache-phoenix-user-list.1124778.n5.nabble.com/Load-into-Phoenix-table-via-CsvBulkLoadTool-cannot-find-table-and-fails-td2792.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5388) Incorrect current_date()/now() when query involves subquery

2019-07-10 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5388:
---
Attachment: PHOENIX-5388.patch

> Incorrect current_date()/now() when query involves subquery
> ---
>
> Key: PHOENIX-5388
> URL: https://issues.apache.org/jira/browse/PHOENIX-5388
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5388.patch
>
>
> The following query fails in the month of December:
> {code}
> select  NOW(),  MONTH(NOW()) m, 
> CASE 
>  WHEN MONTH(NOW()) = 12 THEN TO_TIME(YEAR(NOW()) || '-12-31 23:59:59.999') 
>  ELSE TO_TIME(YEAR(NOW()) || '-' || ( MONTH(NOW()) + 1  ) || '-01 
> 23:59:59.999') - 1 
> END  AS this_month_end
> {code}
> It is due to an optimization we have during compilation where we evaluate the 
> expression if they result in to constant so that we don't need to do it for 
> every row. 
> Currently parsing stack evaluates every expression if possible without 
> considering any condition, resulting in evaluation of all three expression 
> for CASE node, MONTH(NOW()) = 12 , TO_TIME(YEAR(NOW()) || '-12-31 
> 23:59:59.999') ,TO_TIME(YEAR(NOW()) || '-' || ( MONTH(NOW()) + 1  ) || '-01 
> 23:59:59.999') - 1) but evaluation of 3rd one will fail because of invalid 
> month. 
> Workaround: For the particular use-case , Following query though help in 
> preventing the expressions of WHEN CASE to be evaluated to a constant at 
> compile time. 
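To make the failure mode concrete, here is a small illustration of the month
arithmetic the ELSE branch performs; plain Java, with the year and class name
chosen purely for the example:

{code:java}
// Illustration: in December, MONTH(NOW()) + 1 produces 13, so the timestamp
// string handed to TO_TIME names a month that does not exist.
public class DecemberFailure {
    public static void main(String[] args) {
        int month = 12; // MONTH(NOW()) in December
        String elseArg = 2019 + "-" + (month + 1) + "-01 23:59:59.999";
        // Prints "2019-13-01 23:59:59.999": constant-folding this branch at
        // compile time fails even though at runtime it would be unreachable.
        System.out.println(elseArg);
    }
}
{code}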



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5388) Incorrect current_date()/now() when query involves subquery

2019-07-10 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-5388:
--

 Summary: Incorrect current_date()/now() when query involves 
subquery
 Key: PHOENIX-5388
 URL: https://issues.apache.org/jira/browse/PHOENIX-5388
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal


The following query fails in the month of December:
{code}
select  NOW(),  MONTH(NOW()) m, 
CASE 
 WHEN MONTH(NOW()) = 12 THEN TO_TIME(YEAR(NOW()) || '-12-31 23:59:59.999') 
 ELSE TO_TIME(YEAR(NOW()) || '-' || ( MONTH(NOW()) + 1  ) || '-01 
23:59:59.999') - 1 
END  AS this_month_end
{code}

This is due to a compile-time optimization: expressions that evaluate to a 
constant are computed once during compilation so that they don't have to be 
evaluated for every row. 
Currently the parsing stack evaluates every expression it can without 
considering any condition, so all three expressions of the CASE node are 
evaluated: MONTH(NOW()) = 12, TO_TIME(YEAR(NOW()) || '-12-31 23:59:59.999'), 
and TO_TIME(YEAR(NOW()) || '-' || ( MONTH(NOW()) + 1 ) || '-01 
23:59:59.999') - 1. Evaluation of the third fails in December because 
MONTH(NOW()) + 1 yields an invalid month. 


Workaround: for this particular use case, the following query prevents the 
CASE WHEN expressions from being evaluated to a constant at compile time. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4850) Like predicate without wildcard doesn't pass the exact string if varchar columns has maxlength

2019-05-28 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-4850:
--

Assignee: (was: Ankit Singhal)

> Like predicate without wildcard doesn't pass the exact string if varchar 
> columns has maxlength 
> ---
>
> Key: PHOENIX-4850
> URL: https://issues.apache.org/jira/browse/PHOENIX-4850
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0
>
>
> [William 
> |https://community.hortonworks.com/users/11882/williamprendergast.html] reported
>  on 
> [https://community.hortonworks.com/questions/210582/like-query-in-phoenix.html]
>  that the query skips all rows when the length of the literal doesn't match 
> the max length of the varchar column.
> Copied from the above link:
> When using a LIKE in a where clause, the rows are not found unless a 
> wildcard (%) is added:
> create table t ( ID VARCHAR(290) NOT NULL PRIMARY KEY, NAME VARCHAR(256));
> No rows affected (1.386 seconds)
> 0: jdbc:phoenix:> upsert into t values ('1','test');
> 1 row affected (0.046 seconds)
> 0: jdbc:phoenix:> select * from t;
> +-----+-------+
> | ID  | NAME  |
> +-----+-------+
> | 1   | test  |
> +-----+-------+
> 1 row selected (0.05 seconds)
> 0: jdbc:phoenix:> select * from t where name like 'test';
> +-----+-------+
> | ID  | NAME  |
> +-----+-------+
> +-----+-------+
> No rows selected (0.016 seconds)
> 0: jdbc:phoenix:> select * from t where name like 'test%';
> +-----+-------+
> | ID  | NAME  |
> +-----+-------+
> | 1   | test  |
> +-----+-------+
> 1 row selected (0.032 seconds)
>  
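A hedged JDBC reproduction of the transcript above, assuming a local Phoenix
quorum at jdbc:phoenix:localhost; the connection URL and table name are
assumptions for the example. On affected versions the LIKE query returns
zero rows:

{code:java}
// Illustrative reproduction only.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LikeNoWildcard {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS T "
                + "(ID VARCHAR(290) NOT NULL PRIMARY KEY, NAME VARCHAR(256))");
            stmt.execute("UPSERT INTO T VALUES ('1', 'test')");
            conn.commit();
            // Expected 1 row; affected versions return 0 because the literal's
            // length does not match the column's declared max length.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT COUNT(*) FROM T WHERE NAME LIKE 'test'")) {
                rs.next();
                System.out.println("LIKE 'test' matched " + rs.getLong(1) + " row(s)");
            }
        }
    }
}
{code}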



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4850) Like predicate without wildcard doesn't pass the exact string if varchar columns has maxlength

2019-05-28 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4850:
---
Fix Version/s: (was: 4.15.0)

> Like predicate without wildcard doesn't pass the exact string if varchar 
> columns has maxlength 
> ---
>
> Key: PHOENIX-4850
> URL: https://issues.apache.org/jira/browse/PHOENIX-4850
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Ankit Singhal
>Priority: Major
>
> [William 
> |https://community.hortonworks.com/users/11882/williamprendergast.html] reported
>  on 
> [https://community.hortonworks.com/questions/210582/like-query-in-phoenix.html]
>  that the query skips all rows when the length of the literal doesn't match 
> the max length of the varchar column.
> Copied from the above link:
> When using a LIKE in a where clause, the rows are not found unless a 
> wildcard (%) is added:
> create table t ( ID VARCHAR(290) NOT NULL PRIMARY KEY, NAME VARCHAR(256));
> No rows affected (1.386 seconds)
> 0: jdbc:phoenix:> upsert into t values ('1','test');
> 1 row affected (0.046 seconds)
> 0: jdbc:phoenix:> select * from t;
> +-----+-------+
> | ID  | NAME  |
> +-----+-------+
> | 1   | test  |
> +-----+-------+
> 1 row selected (0.05 seconds)
> 0: jdbc:phoenix:> select * from t where name like 'test';
> +-----+-------+
> | ID  | NAME  |
> +-----+-------+
> +-----+-------+
> No rows selected (0.016 seconds)
> 0: jdbc:phoenix:> select * from t where name like 'test%';
> +-----+-------+
> | ID  | NAME  |
> +-----+-------+
> | 1   | test  |
> +-----+-------+
> 1 row selected (0.032 seconds)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Drop support for HBase-1.2

2019-05-10 Thread Ankit Singhal
+1

On Fri, May 10, 2019 at 4:44 PM Josh Elser  wrote:

> +1
>
> On 5/10/19 4:28 PM, Thomas D'Silva wrote:
> > Since HBase 1.2 is now end of life and we are creating a new branch to
> > support HBase-1.5(PHOENIX-5277), I think we should drop the HBase-1.2
> > branches. What do folks think?
> >
> > Thanks,
> > Thomas
> >
>


[jira] [Assigned] (PHOENIX-5258) Add support to parse header from the input CSV file as input columns for CsvBulkLoadTool

2019-05-07 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-5258:
--

Assignee: Prashant Vithani

> Add support to parse header from the input CSV file as input columns for 
> CsvBulkLoadTool
> 
>
> Key: PHOENIX-5258
> URL: https://issues.apache.org/jira/browse/PHOENIX-5258
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Prashant Vithani
>Assignee: Prashant Vithani
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5258-4.x-HBase-1.4.patch, 
> PHOENIX-5258-master.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, CsvBulkLoadTool does not support reading a header from the input CSV 
> and expects the content of the CSV to match the table schema. Support for a 
> header can be added to dynamically map the schema to the header.
> The proposed solution is to introduce another option for the tool, 
> `--parse-header`. If this option is passed, the input column list is 
> constructed by reading the first line of the input CSV file (a rough sketch 
> follows the list below):
>  * If there is only one file, read the header from the first line and 
> generate the `ColumnInfo` list.
>  * If there are multiple files, read the header from all the files, and throw 
> an error if the headers across files do not match.
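A rough, hypothetical sketch of the single-file case; the class name, helper
signature, and delimiter handling are illustrative, not the actual patch:

{code:java}
// Hypothetical sketch of --parse-header: derive the input column list from
// the first line of the CSV. Not the actual PHOENIX-5258 implementation.
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.Arrays;
import java.util.List;

public class CsvHeaderParser {
    static List<String> parseHeader(String csvPath, char delimiter) throws Exception {
        try (BufferedReader reader = new BufferedReader(new FileReader(csvPath))) {
            String first = reader.readLine();
            if (first == null) {
                throw new IllegalArgumentException("Empty CSV file: " + csvPath);
            }
            // Each header token becomes a candidate column name; for multiple
            // input files this list would be compared across files and any
            // mismatch reported as an error.
            return Arrays.asList(first.split(String.valueOf(delimiter)));
        }
    }
}
{code}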



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Phoenix committer: Mihir Monani

2019-04-30 Thread Ankit Singhal
Congratulations and Welcome Mihir !!

On Tue, Apr 30, 2019 at 5:49 PM Andrew Purtell  wrote:

> Congratulations Mihir.
>
> On Sat, Apr 27, 2019 at 5:42 PM Thomas D'Silva  wrote:
>
> > On behalf of the Apache Phoenix PMC, I am pleased to announce that
> > Mihir Monani has accepted our invitation to become a committer.
> > Mihir has done some nice work fixing several bugs related to indexing[1].
> >
> > Please welcome him to the Apache Phoenix team.
> >
> > Thanks,
> > Thomas
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/browse/PHOENIX-5199?jql=project%20%3D%20PHOENIX%20AND%20assignee%3D%22mihir6692%22%20AND%20status%3DResolved
> >
>
>
> --
> Best regards,
> Andrew
>
> Words like orphans lost among the crosstalk, meaning torn from truth's
> decrepit hands
>- A23, Crosstalk
>


Re: [ANNOUNCE] New Phoenix PMC member: Cheng Lei

2019-04-09 Thread Ankit Singhal
Congratulations, Cheng Lei!!

On Tue, Apr 9, 2019 at 6:35 PM Jaanai Zhang  wrote:

> Congratulations Cheng!
>
> 
>Jaanai Zhang
>Best regards!
>
>
>
> Geoffrey Jacoby  于2019年4月10日周三 上午5:03写道:
>
> > Congratulations, Cheng!
> >
> > On Tue, Apr 9, 2019 at 1:50 PM Abhishek Singh Chouhan <
> > abhishekchouhan...@gmail.com> wrote:
> >
> > > Congrats Cheng!
> > >
> > > On Tue, Apr 9, 2019 at 12:00 PM Chinmay Kulkarni <
> > > chinmayskulka...@gmail.com>
> > > wrote:
> > >
> > > > Congratulations Cheng!
> > > >
> > > > On Tue, Apr 9, 2019 at 11:51 AM Vincent Poon  >
> > > > wrote:
> > > >
> > > > > Congrats Cheng!
> > > > >
> > > > > On Tue, Apr 9, 2019 at 11:42 AM Thomas D'Silva  >
> > > > wrote:
> > > > >
> > > > > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> > > Cheng
> > > > > > Lei has accepted our invitation to join the PMC.
> > > > > >
> > > > > > Please join me in congratulating Cheng.
> > > > > >
> > > > > > Thanks,
> > > > > > Thomas
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Chinmay Kulkarni
> > > > M.S. Computer Science,
> > > > University of Illinois at Urbana-Champaign.
> > > > B. Tech Computer Engineering,
> > > > College of Engineering, Pune.
> > > >
> > >
> >
>


Re: [ANNOUNCE] New Phoenix PMC member: Geoffrey Jacoby

2019-04-09 Thread Ankit Singhal
Congratulations and Welcome, Geoffrey!!

On Tue, Apr 9, 2019 at 6:36 PM Jaanai Zhang  wrote:

> Congratulations Geoffrey!
>
> 
>Jaanai Zhang
>Best regards!
>
>
>
> Abhishek Singh Chouhan  于2019年4月10日周三
> 上午4:50写道:
>
> > Congrats Geoffrey!!
> >
> > On Tue, Apr 9, 2019 at 12:00 PM Chinmay Kulkarni <
> > chinmayskulka...@gmail.com>
> > wrote:
> >
> > > Congratulations Geoffrey!
> > >
> > > On Tue, Apr 9, 2019 at 11:51 AM Vincent Poon 
> > > wrote:
> > >
> > > > Congrats Geoffrey!
> > > >
> > > > On Tue, Apr 9, 2019 at 11:42 AM Thomas D'Silva 
> > > wrote:
> > > >
> > > > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> > > > Geoffrey
> > > > > Jacoby has accepted our invitation to join the PMC.
> > > > >
> > > > > Please join me in congratulating Geoffrey.
> > > > >
> > > > > Thanks,
> > > > > Thomas
> > > > >
> > > >
> > >
> > >
> > > --
> > > Chinmay Kulkarni
> > > M.S. Computer Science,
> > > University of Illinois at Urbana-Champaign.
> > > B. Tech Computer Engineering,
> > > College of Engineering, Pune.
> > >
> >
>


[jira] [Updated] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-11 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5178:
---
Attachment: PHOENIX-5178_v1-addendum.patch

> SYSTEM schema is not getting cached at MetaData server
> --
>
> Key: PHOENIX-5178
> URL: https://issues.apache.org/jira/browse/PHOENIX-5178
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5178.patch, PHOENIX-5178_v1-addendum.patch, 
> PHOENIX-5178_v1.patch
>
>
> During initialization, the meta connection will not be able to see the SYSTEM 
> schema because the scanner at the meta server runs with a max_timestamp of 
> MIN_SYSTEM_TABLE_TIMESTAMP (exclusive), which results in every new connection 
> re-creating the SYSTEM schema metadata.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-11 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-5178.

   Resolution: Fixed
Fix Version/s: 4.15.0

Committed to master and the 4.x branches. Thanks [~elserj] for the review.

> SYSTEM schema is not getting cached at MetaData server
> --
>
> Key: PHOENIX-5178
> URL: https://issues.apache.org/jira/browse/PHOENIX-5178
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5178.patch, PHOENIX-5178_v1.patch
>
>
> During initialization, the meta connection will not be able to see the SYSTEM 
> schema because the scanner at the meta server runs with a max_timestamp of 
> MIN_SYSTEM_TABLE_TIMESTAMP (exclusive), which results in every new connection 
> re-creating the SYSTEM schema metadata.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-11 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5178:
---
Attachment: PHOENIX-5178_v1.patch

> SYSTEM schema is not getting cached at MetaData server
> --
>
> Key: PHOENIX-5178
> URL: https://issues.apache.org/jira/browse/PHOENIX-5178
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5178.patch, PHOENIX-5178_v1.patch
>
>
> During initialization, the meta connection will not be able to see the SYSTEM 
> schema because the scanner at the meta server runs with a max_timestamp of 
> MIN_SYSTEM_TABLE_TIMESTAMP (exclusive), which results in every new connection 
> re-creating the SYSTEM schema metadata.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-06 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5178:
---
Attachment: PHOENIX-5178.patch

> SYSTEM schema is not getting cached at MetaData server
> --
>
> Key: PHOENIX-5178
> URL: https://issues.apache.org/jira/browse/PHOENIX-5178
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5178.patch
>
>
> During initialization, the meta connection will not be able to see the SYSTEM 
> schema because the scanner at the meta server runs with a max_timestamp of 
> MIN_SYSTEM_TABLE_TIMESTAMP (exclusive), which results in every new connection 
> re-creating the SYSTEM schema metadata.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5178) SYSTEM schema is not getting cached at MetaData server

2019-03-06 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-5178:
--

 Summary: SYSTEM schema is not getting cached at MetaData server
 Key: PHOENIX-5178
 URL: https://issues.apache.org/jira/browse/PHOENIX-5178
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0
Reporter: Ankit Singhal
Assignee: Ankit Singhal


During initialization, the meta connection will not be able to see the SYSTEM 
schema because the scanner at the meta server runs with a max_timestamp of 
MIN_SYSTEM_TABLE_TIMESTAMP (exclusive), which results in every new connection 
re-creating the SYSTEM schema metadata.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3710) Cannot use lowername data table name with indextool

2019-02-21 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3710:
---
Attachment: PHOENIX-3710.002.rebased.patch

> Cannot use lowername data table name with indextool
> ---
>
> Key: PHOENIX-3710
> URL: https://issues.apache.org/jira/browse/PHOENIX-3710
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Matthew Shipton
>Assignee: Josh Elser
>Priority: Minor
> Attachments: PHOENIX-3710.002.patch, PHOENIX-3710.002.rebased.patch, 
> PHOENIX-3710.patch, test.sh, test.sql
>
>
> {code}
> hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> \"my_lowcase_table\" --index-table INDEX_TABLE --output-path /tmp/some_path
> {code}
> results in:
> {code}
> java.lang.IllegalArgumentException:  INDEX_TABLE is not an index table for 
> MY_LOWCASE_TABLE
> {code}
> This is despite the data table name being explicitly lowercased.
> The tool appears to be referring to the lowercase table, not the uppercase version.
> A workaround exists by changing the table name, but this is not always feasible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Integration Testing Requirements

2019-02-19 Thread Ankit Singhal
You can check whether HADOOP_HOME and JAVA_HOME are properly set in the
environment.

On Tue, Feb 19, 2019 at 11:23 AM William Shen 
wrote:

> Hi everyone,
>
> I'm trying to set up the Jenkins job at work to build Phoenix and run the
> integration tests. However, repeatedly I encounter issues with the hive
> module when I run mvn verify. Does the hive integration tests require any
> special set up for them to pass? The other modules passed integration
> testing successfully.
>
> Attaching below is a sample failure trace.
>
> Thanks!
>
> - Will
>
> [ERROR] Tests run: 6, Failures: 6, Errors: 0, Skipped: 0, Time
> elapsed: 48.302 s <<< FAILURE! - in
> org.apache.phoenix.hive.HiveTezIT[ERROR]
> simpleColumnMapTest(org.apache.phoenix.hive.HiveTezIT)  Time elapsed:
> 6.727 s  <<< FAILURE!junit.framework.AssertionFailedError:
> Unexpected exception java.lang.RuntimeException:
> org.apache.tez.dag.api.SessionNotRunning: TezSession has already
> shutdown. Application application_1550371508120_0001 failed 2 times
> due to AM Container for appattempt_1550371508120_0001_02 exited
> with  exitCode: 127
> For more detailed output, check application tracking
> page:
> http://40a4dd0e8959:38843/cluster/app/application_1550371508120_0001
> Then, click on links to logs of each attempt.
> Diagnostics: Exception from container-launch.
> Container id: container_1550371508120_0001_02_01
> Exit code: 127
> Stack trace: ExitCodeException exitCode=127:
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
> at org.apache.hadoop.util.Shell.run(Shell.java:456)
> at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
> at
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> at
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
>
>
> Container exited with a non-zero exit code 127
> Failing this attempt. Failing the application.
> at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:535)
> at
> org.apache.phoenix.hive.HiveTestUtil.cliInit(HiveTestUtil.java:637)
> at
> org.apache.phoenix.hive.HiveTestUtil.cliInit(HiveTestUtil.java:590)
> at
> org.apache.phoenix.hive.BaseHivePhoenixStoreIT.runTest(BaseHivePhoenixStoreIT.java:117)
> at
> org.apache.phoenix.hive.HivePhoenixStoreIT.simpleColumnMapTest(HivePhoenixStoreIT.java:103)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at 

[jira] [Updated] (PHOENIX-5060) DeleteFamily cells are getting skipped while building Index data after HBASE-21158

2018-12-04 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5060:
---
Attachment: PHOENIX-5060.patch

> DeleteFamily cells are getting skipped while building Index data after 
> HBASE-21158
> --
>
> Key: PHOENIX-5060
> URL: https://issues.apache.org/jira/browse/PHOENIX-5060
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5060.patch
>
>
> Attaching a patch which will be needed when we upgrade our HBase version 
> to 2.0.3 or 2.1+.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5060) DeleteFamily cells are getting skipped while building Index data after HBASE-21158

2018-12-04 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-5060:
--

 Summary: DeleteFamily cells are getting skipped while building 
Index data after HBASE-21158
 Key: PHOENIX-5060
 URL: https://issues.apache.org/jira/browse/PHOENIX-5060
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal


Attaching a patch which will be needed when we upgrade our HBase version to 
2.0.3 or 2.1+.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5010) Don't build client guidepost cache when phoenix.stats.collection.enabled is disabled

2018-11-13 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5010:
---
Fix Version/s: 5.1.0

> Don't build client guidepost cache when phoenix.stats.collection.enabled is 
> disabled
> 
>
> Key: PHOENIX-5010
> URL: https://issues.apache.org/jira/browse/PHOENIX-5010
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5010.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5010) Don't build client guidepost cache when phoenix.stats.collection.enabled is disabled

2018-11-13 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-5010.

Resolution: Fixed

Thanks [~elserj], committed to 4.x and master.

> Don't build client guidepost cache when phoenix.stats.collection.enabled is 
> disabled
> 
>
> Key: PHOENIX-5010
> URL: https://issues.apache.org/jira/browse/PHOENIX-5010
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5010.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5010) Don't build client guidepost cache when phoenix.stats.collection.enabled is disabled

2018-11-09 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5010:
---
Attachment: PHOENIX-5010.patch

> Don't build client guidepost cache when phoenix.stats.collection.enabled is 
> disabled
> 
>
> Key: PHOENIX-5010
> URL: https://issues.apache.org/jira/browse/PHOENIX-5010
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5010.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5010) Don't build client guidepost cache when phoenix.stats.collection.enabled is disabled

2018-11-09 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-5010:
--

 Summary: Don't build client guidepost cache when 
phoenix.stats.collection.enabled is disabled
 Key: PHOENIX-5010
 URL: https://issues.apache.org/jira/browse/PHOENIX-5010
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 4.15.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4825) Replace usage of HBase Base64 implementation with java.util.Base64

2018-10-05 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4825.

Resolution: Fixed

> Replace usage of HBase Base64 implementation with java.util.Base64
> --
>
> Key: PHOENIX-4825
> URL: https://issues.apache.org/jira/browse/PHOENIX-4825
> Project: Phoenix
>  Issue Type: Task
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4825.patch
>
>
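For context, a minimal sketch of the replacement this task names:
java.util.Base64 (in the JDK since Java 8) covers the encode/decode cases the
removed HBase helper provided. The payload here is illustrative:

{code:java}
// Minimal java.util.Base64 usage, round-tripping a byte array.
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Migration {
    public static void main(String[] args) {
        byte[] raw = "row-key".getBytes(StandardCharsets.UTF_8);
        String encoded = Base64.getEncoder().encodeToString(raw);
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(encoded + " -> " + new String(decoded, StandardCharsets.UTF_8));
    }
}
{code}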




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4825) Replace usage of HBase Base64 implementation with java.util.Base64

2018-10-05 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4825:
---
Fix Version/s: (was: 4.15.0)

> Replace usage of HBase Base64 implementation with java.util.Base64
> --
>
> Key: PHOENIX-4825
> URL: https://issues.apache.org/jira/browse/PHOENIX-4825
> Project: Phoenix
>  Issue Type: Task
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 5.1.0
>
> Attachments: PHOENIX-4825.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4941) Handle TableExistsException when wrapped under RemoteException for SYSTEM.MUTEX table

2018-10-02 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4941.

Resolution: Fixed

> Handle TableExistsException when wrapped under RemoteException for 
> SYSTEM.MUTEX table
> -
>
> Key: PHOENIX-4941
> URL: https://issues.apache.org/jira/browse/PHOENIX-4941
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4941.patch, PHOENIX-4941_v1.patch, 
> PHOENIX-4941_v2.patch, PHOENIX-4941_v3.patch
>
>
> {code}
> Caused by: java.sql.SQLException: 
> org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2644)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
>   at 
> com.hortonworks.smartsense.activity.sink.PhoenixSink.getConnection(PhoenixSink.java:461)
>   ... 4 more
> Caused by: org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(R
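A hedged sketch of the handling the issue title describes: unwrap a
RemoteException before testing for TableExistsException, so a SYSTEM.MUTEX
creation race is tolerated. The class and method names are illustrative, not
the committed patch:

{code:java}
// Illustrative only: detect a TableExistsException that arrives wrapped
// in a RemoteException when two clients race to create SYSTEM.MUTEX.
import java.io.IOException;
import org.apache.hadoop.hbase.TableExistsException;
import org.apache.hadoop.ipc.RemoteException;

public class MutexCreationHelper {
    static boolean tableAlreadyExists(IOException e) {
        IOException cause = e;
        if (e instanceof RemoteException) {
            // Re-materializes the server-side exception type on the client.
            cause = ((RemoteException) e).unwrapRemoteException();
        }
        return cause instanceof TableExistsException;
    }
}
{code}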

[jira] [Updated] (PHOENIX-4941) Handle TableExistsException when wrapped under RemoteException for SYSTEM.MUTEX table

2018-10-02 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4941:
---
Attachment: PHOENIX-4941_v3.patch

> Handle TableExistsException when wrapped under RemoteException for 
> SYSTEM.MUTEX table
> -
>
> Key: PHOENIX-4941
> URL: https://issues.apache.org/jira/browse/PHOENIX-4941
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4941.patch, PHOENIX-4941_v1.patch, 
> PHOENIX-4941_v2.patch, PHOENIX-4941_v3.patch
>
>
> {code}
> Caused by: java.sql.SQLException: 
> org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2644)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
>   at 
> com.hortonworks.smartsense.activity.sink.PhoenixSink.getConnection(PhoenixSink.java:461)
>   ... 4 more
> Caused by: org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(R

[jira] [Updated] (PHOENIX-4941) Handle TableExistsException when wrapped under RemoteException for SYSTEM.MUTEX table

2018-10-02 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4941:
---
Attachment: PHOENIX-4941_v2.patch

> Handle TableExistsException when wrapped under RemoteException for 
> SYSTEM.MUTEX table
> -
>
> Key: PHOENIX-4941
> URL: https://issues.apache.org/jira/browse/PHOENIX-4941
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4941.patch, PHOENIX-4941_v1.patch, 
> PHOENIX-4941_v2.patch
>
>
> {code}
> Caused by: java.sql.SQLException: 
> org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2644)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
>   at 
> com.hortonworks.smartsense.activity.sink.PhoenixSink.getConnection(PhoenixSink.java:461)
>   ... 4 more
> Caused by: org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(R

[jira] [Updated] (PHOENIX-4941) Handle TableExistsException when wrapped under RemoteException for SYSTEM.MUTEX table

2018-10-02 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4941:
---
Attachment: PHOENIX-4941_v1.patch

> Handle TableExistsException when wrapped under RemoteException for 
> SYSTEM.MUTEX table
> -
>
> Key: PHOENIX-4941
> URL: https://issues.apache.org/jira/browse/PHOENIX-4941
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4941.patch, PHOENIX-4941_v1.patch
>
>
> {code}
> Caused by: java.sql.SQLException: 
> org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2644)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
>   at 
> com.hortonworks.smartsense.activity.sink.PhoenixSink.getConnection(PhoenixSink.java:461)
>   ... 4 more
> Caused by: org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(Remo

[jira] [Updated] (PHOENIX-4941) Handle TableExistsException when wrapped under RemoteException for SYSTEM.MUTEX table

2018-10-01 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4941:
---
Attachment: PHOENIX-4941.patch

> Handle TableExistsException when wrapped under RemoteException for 
> SYSTEM.MUTEX table
> -
>
> Key: PHOENIX-4941
> URL: https://issues.apache.org/jira/browse/PHOENIX-4941
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4941.patch
>
>
> {code}
> Caused by: java.sql.SQLException: 
> org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2644)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
>   at 
> com.hortonworks.smartsense.activity.sink.PhoenixSink.getConnection(PhoenixSink.java:461)
>   ... 4 more
> Caused by: org.apache.hadoop.hbase.TableExistsException: 
> org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
>   at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
>   at 
> org.apache.hado

[jira] [Created] (PHOENIX-4941) Handle TableExistsException when wrapped under RemoteException for SYSTEM.MUTEX table

2018-10-01 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4941:
--

 Summary: Handle TableExistsException when wrapped under 
RemoteException for SYSTEM.MUTEX table
 Key: PHOENIX-4941
 URL: https://issues.apache.org/jira/browse/PHOENIX-4941
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 4.15.0, 5.1.0


{code}
Caused by: java.sql.SQLException: org.apache.hadoop.hbase.TableExistsException: 
org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2644)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2532)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2532)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at 
com.hortonworks.smartsense.activity.sink.PhoenixSink.getConnection(PhoenixSink.java:461)
... 4 more
Caused by: org.apache.hadoop.hbase.TableExistsException: 
org.apache.hadoop.hbase.TableExistsException: SYSTEM.MUTEX
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:236)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:88)
at 
org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:52)
at 
org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:184)
at 
org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:850)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1475)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1250)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$900(ProcedureExecutor.java:76)
at 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1764)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:100)
at 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:90)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:359)
at 
org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:347)
at 
org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3079)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:3071)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.createTableAsync

[jira] [Assigned] (PHOENIX-4908) [Apache Spark Plugin Doc] update save api when using spark dataframe

2018-09-20 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-4908:
--

Assignee: Sandeep Nemuri

> [Apache Spark Plugin Doc] update save api when using spark dataframe
> 
>
> Key: PHOENIX-4908
> URL: https://issues.apache.org/jira/browse/PHOENIX-4908
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sandeep Nemuri
>Assignee: Sandeep Nemuri
>Priority: Major
> Attachments: PHOENIX-4908.001.patch
>
>
>  
> An error occurs when saving the dataframe to a Phoenix table as described in 
> [https://phoenix.apache.org/phoenix_spark.html] (in the Saving 
> DataFrames section):
> {code:java}
> scala> val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> 
> "INPUT_TABLE", | "zkUrl" -> "c221-node4.com:2181")) 
> warning: there was one deprecation warning; re-run with -deprecation for details
> df: org.apache.spark.sql.DataFrame = [ID: bigint, COL1: string ... 1 more field] 
> scala> df.show() 
> +---+----------+----+
> | ID|      COL1|COL2|
> +---+----------+----+
> |  1|test_row_1|   1|
> +---+----------+----+
>  
> scala> df.save("org.apache.phoenix.spark", SaveMode.Overwrite, Map("table" -> 
> "OUTPUT_TABLE","zkUrl" -> "c221-node4.com:2181"))
> <console>:32: error: value save is not a member of org.apache.spark.sql.DataFrame
> df.save("org.apache.phoenix.spark", SaveMode.Overwrite, Map("table" -> 
> "OUTPUT_TABLE","zkUrl" -> "c221-node4.com:2181")) ^
>  
> {code}
>  
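A hedged sketch of what the documentation update would look like: Spark 2.x
removed DataFrame.save(...), so writes go through the DataFrameWriter API
instead. Shown here with Spark's Java API; the table name and zkUrl are taken
from the report, the class and method names are illustrative:

{code:java}
// Illustrative DataFrameWriter usage; not the exact wording of the doc fix.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class PhoenixSparkSave {
    static void saveToPhoenix(Dataset<Row> df) {
        df.write()
          .format("org.apache.phoenix.spark")
          .mode(SaveMode.Overwrite)
          .option("table", "OUTPUT_TABLE")
          .option("zkUrl", "c221-node4.com:2181")
          .save();
    }
}
{code}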



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4895) NoClassDefFound when use IndexTool create async index

2018-09-11 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4895.

Resolution: Duplicate

> NoClassDefFound when use IndexTool create async index
> -
>
> Key: PHOENIX-4895
> URL: https://issues.apache.org/jira/browse/PHOENIX-4895
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
> Environment: HDP :3.0.0
> HBase :2.0.0
> Phoenix : 5.0.0
> Hadoop : 3.1.0
>Reporter: 张延召
>Priority: Major
>  Labels: 5.0.0, ASYNC, INDEX
> Fix For: 5.1.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> *First I created a table:*
> ^CREATE TABLE TMP_TEST(^
>  ^ID VARCHAR NOT NULL PRIMARY KEY,^
>  ^NAME VARCHAR,^
>  ^ADDR VARCHAR,^
>  ^AGE BIGINT DEFAULT 10^
>  ^);^
> *Then I created the asynchronous index table:*
> ^CREATE INDEX ASYNC_IDX ON TMP_TEST (NAME, ADDR) INCLUDE (AGE) ASYNC;^
> *Finally, perform the MapReduce Task:*
> ^./hbase org.apache.phoenix.mapreduce.index.IndexTool --schema default 
> --data-table TMP_TEST --index-table ASYNC_IDX --output-path ASYNC_IDX_HFILES^
> *But I received an error message:*
> ^Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/commons/cli/DefaultParser^
>  ^at 
> org.apache.phoenix.mapreduce.index.IndexTool.parseOptions(IndexTool.java:183)^
>  ^at org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:522)^
>  ^at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)^
>  ^at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)^
>  ^at org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:769)^
>  ^Caused by: java.lang.ClassNotFoundException: 
> org.apache.commons.cli.DefaultParser^
>  ^at java.net.URLClassLoader.findClass(URLClassLoader.java:381)^
>  ^at java.lang.ClassLoader.loadClass(ClassLoader.java:424)^
>  ^at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)^
>  ^at java.lang.ClassLoader.loadClass(ClassLoader.java:357)^
>  ^... 5 more^
> *Please give me some advice, thanks!*
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4860) CursorUtil Needs to Use a ConcurrentHashMap rather than HashMap

2018-08-21 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-4860:
--

Assignee: Jack Steenkamp

> CursorUtil Needs to Use a ConcurrentHashMap rather than HashMap
> ---
>
> Key: PHOENIX-4860
> URL: https://issues.apache.org/jira/browse/PHOENIX-4860
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 5.0.0
>Reporter: Jack Steenkamp
>Assignee: Jack Steenkamp
>Priority: Major
> Attachments: CursorUtil.patch
>
>
> In very rare cases when dealing with Apache Phoenix cursors, the following
> NullPointerException is encountered:
> java.lang.NullPointerException: null
> at org.apache.phoenix.util.CursorUtil.updateCursor(CursorUtil.java:179)
> at
> org.apache.phoenix.iterate.CursorResultIterator.next(CursorResultIterator.java:46)
> at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:779)
> (This is for 4.13.1, but it seems that
> org.apache.phoenix.util.CursorUtil has not changed, at the time of writing,
> since it was first introduced as part of PHOENIX-3572.)
> Upon closer inspection it would seem that on line 124 of CursorUtil, a
> HashMap is used to keep state that is then exposed via a number of
> static methods, which, one has to assume, can be accessed by many
> different threads. Using a plain old HashMap in cases like these can cause
> issues.
> The mapCursorIDQuery member should be a ConcurrentHashMap instead; that
> would tighten up the class and prevent any potential inconsistencies.
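For illustration, a standalone sketch of the suggested change; the field name is from the report, but the value type is a placeholder (the real map holds CursorUtil's internal cursor state):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Model of the report: state kept in a static map and reached through
// static methods on many threads must use a thread-safe map.
public class CursorRegistrySketch {
    // Previously a plain HashMap; ConcurrentHashMap makes concurrent
    // put/get/remove safe without external synchronization.
    private static final Map<String, Object> mapCursorIDQuery = new ConcurrentHashMap<>();

    public static void addCursor(String cursorName, Object state) {
        mapCursorIDQuery.put(cursorName, state);
    }

    public static Object getCursor(String cursorName) {
        return mapCursorIDQuery.get(cursorName);
    }

    public static void removeCursor(String cursorName) {
        mapCursorIDQuery.remove(cursorName);
    }
}
{code}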



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-08-16 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4839:
---
Attachment: PHOENIX-4839.patch

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Attachments: PHOENIX-4839.patch
>
>
> {noformat}
> 2018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
>  java.lang.NullPointerException
>  at java.util.ArrayList.addAll(ArrayList.java:577)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:214)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.<init>(IndexHalfStoreFileReaderGenerator.java:327)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
>  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.<init>(HRegion.java:5890)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
>  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
>  {noformat}
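The top frame, java.util.ArrayList.addAll, throws a NullPointerException only when handed a null collection, so the defensive shape of the fix is roughly the following. Names here are hypothetical and this is not the attached patch:

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class AddAllGuardSketch {
    // ArrayList.addAll(null) throws NPE; guard before appending.
    static <T> void addAllIfPresent(List<T> target, Collection<? extends T> source) {
        if (source != null) {
            target.addAll(source);
        }
    }

    public static void main(String[] args) {
        List<String> scanners = new ArrayList<>();
        addAllIfPresent(scanners, null); // no exception, unlike plain addAll
        System.out.println("scanners: " + scanners.size()); // prints 0
    }
}
{code}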



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4850) Like predicate without wildcard doesn't pass the exact string if varchar columns has maxlength

2018-08-15 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4850:
--

 Summary: Like predicate without wildcard doesn't pass the exact 
string if varchar columns has maxlength 
 Key: PHOENIX-4850
 URL: https://issues.apache.org/jira/browse/PHOENIX-4850
 Project: Phoenix
  Issue Type: Improvement
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 4.15.0


[William|https://community.hortonworks.com/users/11882/williamprendergast.html]
reported on
[https://community.hortonworks.com/questions/210582/like-query-in-phoenix.html]
that the query skips all rows when the length of the literal doesn't match the
max length of the VARCHAR column.

Copied from the above link:

When using a LIKE in a WHERE clause, the rows are not found unless a
wildcard (%) is added.

0: jdbc:phoenix:> create table t ( ID VARCHAR(290) NOT NULL PRIMARY KEY, NAME VARCHAR(256));
No rows affected (1.386 seconds)
0: jdbc:phoenix:> upsert into t values ('1','test');
1 row affected (0.046 seconds)
0: jdbc:phoenix:> select * from t;
+-----+-------+
| ID  | NAME  |
+-----+-------+
| 1   | test  |
+-----+-------+
1 row selected (0.05 seconds)
0: jdbc:phoenix:> select * from t where name like 'test';
+-----+-------+
| ID  | NAME  |
+-----+-------+
+-----+-------+
No rows selected (0.016 seconds)
0: jdbc:phoenix:> select * from t where name like 'test%';
+-----+-------+
| ID  | NAME  |
+-----+-------+
| 1   | test  |
+-----+-------+
1 row selected (0.032 seconds)

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4765) Add server side config property to disallow metadata changes that require being propagated to children

2018-07-31 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4765:
---
Priority: Blocker  (was: Major)

> Add server side config property to disallow metadata changes that require 
> being propagated to children
> --
>
> Key: PHOENIX-4765
> URL: https://issues.apache.org/jira/browse/PHOENIX-4765
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Priority: Blocker
>
> After the server has been upgraded, we will have a server-side config property 
> that allows us to roll back the upgrade if required. This config will:
> 1. Disallow metadata changes to a base table that require being propagated to 
> child views.
> If the client is older than the server, we also disallow metadata changes to a 
> base table with child views, since we no longer lock the parent on the server 
> side. This is handled on the client side in the new jar.
> 2. Prevent SYSTEM.CATALOG from splitting.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4831) Move calls to SYSTEM.MUTEX on server

2018-07-31 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4831:
--

 Summary: Move calls to SYSTEM.MUTEX on server
 Key: PHOENIX-4831
 URL: https://issues.apache.org/jira/browse/PHOENIX-4831
 Project: Phoenix
  Issue Type: Improvement
Reporter: Ankit Singhal


Right now the mutex is managed on the client and requires WRITE access, which 
can lead to inconsistency and issues in a multi-tenant environment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4828) Remove dependency on HBase interfaces of type InterfaceAudience.Private

2018-07-30 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4828:
--

 Summary: Remove dependency on HBase interfaces of type 
InterfaceAudience.Private
 Key: PHOENIX-4828
 URL: https://issues.apache.org/jira/browse/PHOENIX-4828
 Project: Phoenix
  Issue Type: Task
Reporter: Ankit Singhal


Currently, patch upgrades of HBase can break compatibility with the Phoenix 
release built for the corresponding minor version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4826) Changes to support HBase 2.0.1

2018-07-30 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4826:
---
Summary: Changes to support HBase 2.0.1  (was: Changes to support 2.0.1)

> Changes to support HBase 2.0.1
> --
>
> Key: PHOENIX-4826
> URL: https://issues.apache.org/jira/browse/PHOENIX-4826
> Project: Phoenix
>  Issue Type: Task
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-4826.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4826) Changes to support 2.0.1

2018-07-30 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4826:
---
Attachment: PHOENIX-4826.patch

> Changes to support 2.0.1
> 
>
> Key: PHOENIX-4826
> URL: https://issues.apache.org/jira/browse/PHOENIX-4826
> Project: Phoenix
>  Issue Type: Task
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-4826.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4826) Changes to support 2.0.1

2018-07-30 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4826:
--

 Summary: Changes to support 2.0.1
 Key: PHOENIX-4826
 URL: https://issues.apache.org/jira/browse/PHOENIX-4826
 Project: Phoenix
  Issue Type: Task
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 5.1.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4825) Replace usage of HBase Base64 implementation with java.util.Base64

2018-07-30 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4825:
---
Attachment: PHOENIX-4825.patch

> Replace usage of HBase Base64 implementation with java.util.Base64
> --
>
> Key: PHOENIX-4825
> URL: https://issues.apache.org/jira/browse/PHOENIX-4825
> Project: Phoenix
>  Issue Type: Task
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4825.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4825) Replace usage of HBase Base64 implementation with java.util.Base64

2018-07-30 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4825:
---
Summary: Replace usage of HBase Base64 implementation with java.util.Base64 
 (was: Replace usage of our Base64 implementation with java.util.Base64)

> Replace usage of HBase Base64 implementation with java.util.Base64
> --
>
> Key: PHOENIX-4825
> URL: https://issues.apache.org/jira/browse/PHOENIX-4825
> Project: Phoenix
>  Issue Type: Task
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4825) Replace usage of our Base64 implementation with java.util.Base64

2018-07-30 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4825:
--

 Summary: Replace usage of our Base64 implementation with 
java.util.Base64
 Key: PHOENIX-4825
 URL: https://issues.apache.org/jira/browse/PHOENIX-4825
 Project: Phoenix
  Issue Type: Task
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 4.15.0, 5.1.0
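For reference, a minimal sketch of the JDK replacement: java.util.Base64 (available since Java 8) covers the encode/decode round trip that the old Base64 helper provided, without the extra dependency. Illustration only:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Base64Sketch {
    public static void main(String[] args) {
        byte[] raw = "row-key-1".getBytes(StandardCharsets.UTF_8);

        // Encode bytes to a Base64 string and decode back.
        String encoded = Base64.getEncoder().encodeToString(raw);
        byte[] decoded = Base64.getDecoder().decode(encoded);

        System.out.println(encoded);
        System.out.println(new String(decoded, StandardCharsets.UTF_8));
    }
}
{code}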






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4809) connectionQueue never cleared in ConnectionQueryServicesImpl when lease renewal is disabled/unsupported

2018-07-11 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16540666#comment-16540666
 ] 

Ankit Singhal commented on PHOENIX-4809:


+1 [~elserj], Thanks for the test as well :)

> connectionQueue never cleared in ConnectionQueryServicesImpl when lease 
> renewal is disabled/unsupported
> ---
>
> Key: PHOENIX-4809
> URL: https://issues.apache.org/jira/browse/PHOENIX-4809
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4809.001.patch
>
>
> When we create a new {{PhoenixConnection}}, we update {{connectionQueues}} in 
> CQSI:
> {code:java}
>     @Override
>     public void addConnection(PhoenixConnection connection) throws SQLException {
>         connectionQueues.get(getQueueIndex(connection))
>                 .add(new WeakReference<PhoenixConnection>(connection));
>         if (returnSequenceValues) {
>             synchronized (connectionCountLock) {
>                 connectionCount++;
>             }
>         }
>     }{code}
> We use connectionQueues to determine which connections need lease renewal.
> However, when the user closes a connection, this data structure is never
> cleaned up.
> {code:java}
>     @Override
>     public void removeConnection(PhoenixConnection connection) throws SQLException {
>         if (returnSequenceValues) {
>             ConcurrentMap<SequenceKey, Sequence> formerSequenceMap = null;
>             synchronized (connectionCountLock) {
>                 if (--connectionCount <= 0) {
>                     if (!this.sequenceMap.isEmpty()) {
>                         formerSequenceMap = this.sequenceMap;
>                         this.sequenceMap = Maps.newConcurrentMap();
>                     }
>                 }
>                 if (connectionCount < 0) {
>                     connectionCount = 0;
>                 }
>             }
>             // Since we're using the former sequenceMap, we can do this outside
>             // the lock.
>             if (formerSequenceMap != null) {
>                 // When there are no more connections, attempt to return any sequences
>                 returnAllSequences(formerSequenceMap);
>             }
>         } else if (shouldThrottleNumConnections) { // still need to decrement connection count
>             synchronized (connectionCountLock) {
>                 if (connectionCount > 0) {
>                     --connectionCount;
>                 }
>             }
>         }
>     }{code}
> Running a test now, but it seems to be the case on master.
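A standalone sketch of the kind of cleanup being discussed, with hypothetical names (not the attached patch): removeConnection drops the WeakReference that addConnection queued, along with any references the garbage collector has already cleared:

{code:java}
import java.lang.ref.WeakReference;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ConnectionQueueSketch {
    static class PhoenixConnection {} // stand-in for the real class

    private final Queue<WeakReference<PhoenixConnection>> connectionQueue =
            new ConcurrentLinkedQueue<>();

    public void addConnection(PhoenixConnection connection) {
        connectionQueue.add(new WeakReference<>(connection));
    }

    public void removeConnection(PhoenixConnection connection) {
        // Drop this connection's entry plus any GC-cleared references,
        // so the queue cannot grow without bound.
        connectionQueue.removeIf(ref -> {
            PhoenixConnection c = ref.get();
            return c == null || c == connection;
        });
    }

    public static void main(String[] args) {
        ConnectionQueueSketch sketch = new ConnectionQueueSketch();
        PhoenixConnection conn = new PhoenixConnection();
        sketch.addConnection(conn);
        sketch.removeConnection(conn);
        System.out.println("queued: " + sketch.connectionQueue.size()); // prints 0
    }
}
{code}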



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release of Apache Phoenix 5.0.0 RC1

2018-07-02 Thread Ankit Singhal
+1 (binding)

* Build from src - OK
* Enabled namespace mapping and check migration:- OK (not regressed)
* Created some views and local/global indexes - OK
* Unit tests - ALL are PASSING
* Query logging - OK



On Wed, Jun 27, 2018 at 9:03 PM Josh Elser  wrote:

> +1 (binding)
>
> Noticed that the pom.xml.versionsBackup files are still present, but that's
> not the end of the world.
>
> On 6/26/18 1:59 PM, rajeshb...@apache.org wrote:
> > Hello Everyone,
> >
> > This is a call for a vote on Apache Phoenix 5.0.0 RC1. This is the next
> > major release of Phoenix, compatible with the 2.0 branch of Apache HBase
> > (2.0.0+). The release includes both a source-only release and a convenience
> > binary release. The prior RC was sunk due to PHOENIX-4785, which is now fixed.
> >
> > This release has feature parity with HBase 2.0.0.
> >
> > Here are a few highlights of Phoenix 5.0.0 over the recently released
> > Phoenix 4.14.0:
> > 1) Refactored coprocessor implementations for the new coprocessor design
> > changes in HBase 2.0 [1].
> > 2) Cleaned up deprecated APIs and leveraged new, more performant APIs [2].
> > 3) Hive and Spark integration works with the latest versions of Hive (3.0.0)
> > and Spark respectively [3][4].
> >
> > The source tarball, including signatures, digests, etc. can be found at:
> > https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-5.0.0-HBase-2.0-rc1/src
> > The binary artifacts can be found at:
> > https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-5.0.0-HBase-2.0-rc1/bin
> >
> > Release artifacts are signed with the following key:
> > https://pgp.mit.edu/pks/lookup?op=get&search=0x318FD86BAAEDBD7B
> > https://dist.apache.org/repos/dist/dev/phoenix/KEYS
> >
> > The hash and tag to be voted upon:
> > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v5.0.0-HBase-2.0-rc1
> > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/tags/v5.0.0-HBase-2.0-rc1
> >
> > Vote will be open for at least 72 hours. Please vote:
> >
> > [ ] +1 approve
> > [ ] +0 no opinion 
> > [ ] -1 disapprove (and reason why)
> >
> > [1] https://issues.apache.org/jira/browse/PHOENIX-4338
> > [2] https://issues.apache.org/jira/browse/PHOENIX-4297
> > [3] https://issues.apache.org/jira/browse/PHOENIX-4423
> > [4] https://issues.apache.org/jira/browse/PHOENIX-4787
> >
> > Thanks,
> > The Apache Phoenix Team
> >
>


[jira] [Commented] (PHOENIX-4666) Add a subquery cache that persists beyond the life of a query

2018-06-26 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16524192#comment-16524192
 ] 

Ankit Singhal commented on PHOENIX-4666:


[~ortutay], it seems the re-sending of the HashJoinCache is regressed by your
change; if you fix that, then your fix will probably work as well.

You can run all the HashJoin tests, which send the cache again the first time,
by making the following change:
{code:java}
--public class HashJoinCacheIT extends BaseJoinIT {
++public class HashJoinCacheIT extends HashJoinGlobalIndexIT {

++ public HashJoinCacheIT(String[] indexDDL, String[] plans) {
++ super(indexDDL, plans);
++ }
{code}
 

 

> Add a subquery cache that persists beyond the life of a query
> -
>
> Key: PHOENIX-4666
> URL: https://issues.apache.org/jira/browse/PHOENIX-4666
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Marcell Ortutay
>Assignee: Marcell Ortutay
>Priority: Major
>
> The user list thread for additional context is here: 
> [https://lists.apache.org/thread.html/e62a6f5d79bdf7cd238ea79aed8886816d21224d12b0f1fe9b6bb075@%3Cuser.phoenix.apache.org%3E]
> 
> A Phoenix query may contain expensive subqueries, and moreover those 
> expensive subqueries may be used across multiple different queries. While 
> whole result caching is possible at the application level, it is not possible 
> to cache subresults in the application. This can cause bad performance for 
> queries in which the subquery is the most expensive part of the query, and 
> the application is powerless to do anything at the query level. It would be 
> good if Phoenix provided a way to cache subquery results, as it would provide 
> a significant performance gain.
> An illustrative example:
>     SELECT * FROM table1 JOIN (SELECT id_1 FROM large_table WHERE x = 10) 
> expensive_result ON table1.id_1 = expensive_result.id_2 AND table1.id_1 = 
> \{id}
> In this case, the subquery "expensive_result" is expensive to compute, but it 
> doesn't change between queries. The rest of the query does because of the 
> \{id} parameter. This means the application can't cache it, but it would be 
> good if there was a way to cache expensive_result.
> Note that there is currently a coprocessor based "server cache", but the data 
> in this "cache" is not persisted across queries. It is deleted after a TTL 
> expires (30sec by default), or when the query completes.
> This issue is fairly high priority for us at 23andMe and we'd be happy to 
> provide a patch with some guidance from Phoenix maintainers. We are currently 
> putting together a design document for a solution, and we'll post it to this 
> Jira ticket for review in a few days.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4494) Fix PhoenixTracingEndToEndIT

2018-06-22 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16520889#comment-16520889
 ] 

Ankit Singhal commented on PHOENIX-4494:


[~rajeshbabu], not all tests are passing yet.

PhoenixTracingEndToEndIT#testScanTracingOnServer and
testClientServerIndexingTracing are still failing. Let me take a look, but this
is not a blocker for the 5.0 release :).

> Fix PhoenixTracingEndToEndIT
> 
>
> Key: PHOENIX-4494
> URL: https://issues.apache.org/jira/browse/PHOENIX-4494
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHEONXI-4494.001.patch
>
>
> {code}
> [ERROR] Tests run: 8, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 
> 148.175 s <<< FAILURE! - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
> [ERROR] 
> testScanTracingOnServer(org.apache.phoenix.trace.PhoenixTracingEndToEndIT)  
> Time elapsed: 64.484 s  <<< FAILURE!
> java.lang.AssertionError: Didn't get expected updates to trace table
> at 
> org.apache.phoenix.trace.PhoenixTracingEndToEndIT.testScanTracingOnServer(PhoenixTracingEndToEndIT.java:304)
> [ERROR] 
> testClientServerIndexingTracing(org.apache.phoenix.trace.PhoenixTracingEndToEndIT)
>   Time elapsed: 22.346 s  <<< FAILURE!
> java.lang.AssertionError: Never found indexing updates
> at 
> org.apache.phoenix.trace.PhoenixTracingEndToEndIT.testClientServerIndexingTracing(PhoenixTracingEndToEndIT.java:193)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4338) Move to HBase-2.0

2018-06-22 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-4338:
--

Assignee: Ankit Singhal

> Move to HBase-2.0
> -
>
> Key: PHOENIX-4338
> URL: https://issues.apache.org/jira/browse/PHOENIX-4338
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>Priority: Major
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4561) Temporarily disable transactional tests

2018-06-22 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4561.

Resolution: Fixed

Tests were re-enabled in PHOENIX-4567.

> Temporarily disable transactional tests
> ---
>
> Key: PHOENIX-4561
> URL: https://issues.apache.org/jira/browse/PHOENIX-4561
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4561.001.patch, PHOENIX-4561.addendum.patch
>
>
> All 5.x transactional table tests are failing because a necessary Tephra 
> release is still pending.
> Let's disable these tests so we have a better idea of the state of the build.
> FYI [~an...@apache.org]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

