[jira] [Commented] (PHOENIX-3867) nth_value returns valid values for non-existing rows
[ https://issues.apache.org/jira/browse/PHOENIX-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16237467#comment-16237467 ]

Loknath Priyatham Teja Singamsetty commented on PHOENIX-3867:
--------------------------------------------------------------

[~giacomotaylor] This is a low-priority issue. However, in this corner-case scenario the output returned for a non-existing row is incorrect data, so it would be nice to fix. Apologies, I have been occupied with other work these days.

> nth_value returns valid values for non-existing rows
> ----------------------------------------------------
>
>              Key: PHOENIX-3867
>              URL: https://issues.apache.org/jira/browse/PHOENIX-3867
>          Project: Phoenix
>       Issue Type: Bug
> Affects Versions: 4.10.0
>         Reporter: Loknath Priyatham Teja Singamsetty
>         Priority: Major
>          Fix For: 4.13.0
>
> Assume a table with two rows as follows:
>
>   id, page_id, date, value
>    2,       8,    1,     7
>    3,       8,    2,     9
>
> Fetching the 3rd most recent value for page_id 8 should not return any
> values. However, rs.next() succeeds and rs.getInt(1) returns 0, so the
> assertion fails. Below is a test case depicting the same.
>
> Issues:
>
> a) From sqlline, the 3rd nth_value is returned as null.
> b) When accessed programmatically, it comes back as 0.
>
> Test Case:
> ----------
> {code}
> public void nonExistingNthRowTestWithGroupBy() throws Exception {
>     Connection conn = DriverManager.getConnection(getUrl());
>     String nthValue = generateUniqueName();
>     String ddl = "CREATE TABLE IF NOT EXISTS " + nthValue + " "
>             + "(id INTEGER NOT NULL PRIMARY KEY, page_id UNSIGNED_LONG,"
>             + " dates INTEGER, val INTEGER)";
>     conn.createStatement().execute(ddl);
>     conn.createStatement().execute(
>             "UPSERT INTO " + nthValue + " (id, page_id, dates, val) VALUES (2, 8, 1, 7)");
>     conn.createStatement().execute(
>             "UPSERT INTO " + nthValue + " (id, page_id, dates, val) VALUES (3, 8, 2, 9)");
>     conn.commit();
>     ResultSet rs = conn.createStatement().executeQuery(
>             "SELECT NTH_VALUE(val, 3) WITHIN GROUP (ORDER BY dates DESC) FROM " + nthValue
>                     + " GROUP BY page_id");
>     assertTrue(rs.next());
>     assertEquals(rs.getInt(1), 4);
>     assertFalse(rs.next());
> }
> {code}
>
> Root Cause:
> -----------
> The underlying issue appears to be the way NTH_VALUE aggregation is done by
> the aggregator. The client aggregator is first populated with the top 'n'
> rows (if present), but iterator.next() is never evaluated in
> BaseGroupedAggregatingResultIterator to check whether the nth row is
> actually present. Once iterator.next() succeeds, retrieving the value from
> the result set using the row projector triggers the client aggregator's
> evaluate() method as part of schema.toBytes(..), which defaults to 0 for an
> empty row when the column is an int and is accessed programmatically.

-- This message was sent by Atlassian JIRA (v6.4.14#64029)
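The programmatic 0 in item (b) follows from how an empty aggregate result is decoded. Below is a minimal, self-contained sketch of that defaulting behavior (the method name is hypothetical; this is not Phoenix's actual decoding path): when the value for the missing nth row comes back as an empty byte array, decoding it as an int falls back to 0, which the caller cannot tell apart from a genuine stored 0.

```java
import java.nio.ByteBuffer;

public class EmptyRowDecode {
    // Sketch of the defaulting described in the root cause: an empty
    // byte array (no nth row found) decodes to 0 for an int column,
    // indistinguishable from an actual stored value of 0.
    static int decodeIntOrDefault(byte[] value) {
        if (value == null || value.length == 0) {
            return 0; // empty row -> int default
        }
        return ByteBuffer.wrap(value).getInt();
    }

    public static void main(String[] args) {
        byte[] present = ByteBuffer.allocate(4).putInt(9).array();
        System.out.println(decodeIntOrDefault(present));     // 9
        System.out.println(decodeIntOrDefault(new byte[0])); // 0, same as a real 0
    }
}
```

A fix along the lines suggested in the root cause would have the iterator report "no row" before the projector ever asks for the value, rather than relying on the decoded default.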
[jira] [Assigned] (PHOENIX-3211) Support running UPSERT SELECT asynchronously
[ https://issues.apache.org/jira/browse/PHOENIX-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty reassigned PHOENIX-3211: Assignee: (was: Loknath Priyatham Teja Singamsetty ) > Support running UPSERT SELECT asynchronously > > > Key: PHOENIX-3211 > URL: https://issues.apache.org/jira/browse/PHOENIX-3211 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor > Fix For: 4.13.0 > > > We have support for creating indexes asynchronously. We should add the > ability to run an UPSERT SELECT asynchronously too for very large tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (PHOENIX-3929) Update website with recently added features FIRST_VALUES and LAST_VALUES
[ https://issues.apache.org/jira/browse/PHOENIX-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16046351#comment-16046351 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3929: -- CC: [~mujtabachohan] [~tdsilva] Could one of you take care of this? > Update website with recently added features FIRST_VALUES and LAST_VALUES > > > Key: PHOENIX-3929 > URL: https://issues.apache.org/jira/browse/PHOENIX-3929 > Project: Phoenix > Issue Type: Improvement >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Trivial > Fix For: 4.11.0, 4.12.0 > > Attachments: website_update.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (PHOENIX-3929) Update website with recently added features FIRST_VALUES and LAST_VALUES
[ https://issues.apache.org/jira/browse/PHOENIX-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3929: - Fix Version/s: 4.12.0 4.11.0 > Update website with recently added features FIRST_VALUES and LAST_VALUES > > > Key: PHOENIX-3929 > URL: https://issues.apache.org/jira/browse/PHOENIX-3929 > Project: Phoenix > Issue Type: Improvement >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Trivial > Fix For: 4.11.0, 4.12.0 > > Attachments: website_update.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (PHOENIX-3929) Update website with recently added features FIRST_VALUES and LAST_VALUES
[ https://issues.apache.org/jira/browse/PHOENIX-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044266#comment-16044266 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3929: -- [~jamestaylor] Could you please commit the patch to the SVN repo to publish the website changes? > Update website with recently added features FIRST_VALUES and LAST_VALUES > > > Key: PHOENIX-3929 > URL: https://issues.apache.org/jira/browse/PHOENIX-3929 > Project: Phoenix > Issue Type: Improvement >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Trivial > Attachments: website_update.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3929) Update website with recently added features FIRST_VALUES and LAST_VALUES
[ https://issues.apache.org/jira/browse/PHOENIX-3929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3929: - Attachment: website_update.patch > Update website with recently added features FIRST_VALUES and LAST_VALUES > > > Key: PHOENIX-3929 > URL: https://issues.apache.org/jira/browse/PHOENIX-3929 > Project: Phoenix > Issue Type: Improvement >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Trivial > Attachments: website_update.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (PHOENIX-3929) Update website with recently added features FIRST_VALUES and LAST_VALUES
Loknath Priyatham Teja Singamsetty created PHOENIX-3929: Summary: Update website with recently added features FIRST_VALUES and LAST_VALUES Key: PHOENIX-3929 URL: https://issues.apache.org/jira/browse/PHOENIX-3929 Project: Phoenix Issue Type: Improvement Reporter: Loknath Priyatham Teja Singamsetty Assignee: Loknath Priyatham Teja Singamsetty Priority: Trivial -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (PHOENIX-3802) NPE with PRowImpl.toRowMutations(PTableImpl.java)
[ https://issues.apache.org/jira/browse/PHOENIX-3802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Loknath Priyatham Teja Singamsetty reassigned PHOENIX-3802:
------------------------------------------------------------

    Assignee: Loknath Priyatham Teja Singamsetty

> NPE with PRowImpl.toRowMutations(PTableImpl.java)
> -------------------------------------------------
>
>              Key: PHOENIX-3802
>              URL: https://issues.apache.org/jira/browse/PHOENIX-3802
>          Project: Phoenix
>       Issue Type: Bug
> Affects Versions: 4.10.0, 4.10.1
>         Reporter: Loknath Priyatham Teja Singamsetty
>         Assignee: Loknath Priyatham Teja Singamsetty
>          Fix For: 4.10.0, 4.11.0, 4.10.1
>
> Caused by: org.apache.phoenix.execute.CommitException:
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 2 actions:
> org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON DUPLICATE IGNORE for
> COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x00D5B00SMgzx): null
>     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
>     at org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236)
>     at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>     at org.apache.phoenix.schema.PTableImpl$PRowImpl.toRowMutations(PTableImpl.java:910)
>     at org.apache.phoenix.index.PhoenixIndexBuilder.executeAtomicOp(PhoenixIndexBuilder.java:246)
>     at org.apache.phoenix.hbase.index.builder.IndexBuildManager.executeAtomicOp(IndexBuildManager.java:187)
>     at org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:213)
>     ... 15 more
> : 1 time, org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON DUPLICATE IGNORE for
> COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x000TOB00010ic0D5B00SMgzx): null
>     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89)
>     at org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670)
>     at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236)
>     at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
[jira] [Comment Edited] (PHOENIX-3913) Support PArrayDataType.appendItemToArray to append item to array when null or empty
[ https://issues.apache.org/jira/browse/PHOENIX-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16040207#comment-16040207 ] Loknath Priyatham Teja Singamsetty edited comment on PHOENIX-3913 at 6/7/17 8:55 AM: -- Thanks [~jamestaylor] for implementing this without having to encode/decode object for the first item added to array. was (Author: singamteja): Thanks [~jamestaylor] for implementing this without having to decode object for the first item added to array. > Support PArrayDataType.appendItemToArray to append item to array when null or > empty > > > Key: PHOENIX-3913 > URL: https://issues.apache.org/jira/browse/PHOENIX-3913 > Project: Phoenix > Issue Type: Task >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: James Taylor > Attachments: PHOENIX-3913_4.x-HBase-0.98.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16040286#comment-16040286 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~jamestaylor] Attached patches created on top of PHOENIX-3913 for all the branches. > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98.patch, > PHOENIX-3773_4.x-HBase-0.98_v2.patch, PHOENIX-3773_4.x-HBase-1.1_v2.patch, > PHOENIX-3773_4.x-HBase-1.2_v2.patch, PHOENIX-3773_master.patch, > PHOENIX-3773_master_v2.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
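The single-pass, bounded-queue approach the ticket description suggests can be sketched in plain Java. This is an illustrative stand-in, not Phoenix's implementation (all names here are hypothetical; the ticket proposes Guava's MinMaxPriorityQueue, while this sketch gets the same effect from a size-capped java.util.PriorityQueue): keep only the n entries with the largest ORDER BY keys, then emit their values in descending key order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class TopNValues {
    // FIRST_VALUES(value, n) WITHIN GROUP (ORDER BY key DESC), sketched:
    // retain the values paired with the n largest keys (most recent dates),
    // returned in descending key order.
    static List<Integer> firstValues(long[] keys, int[] values, int n) {
        // Min-heap on key: the smallest retained key is evicted first,
        // so the heap always holds the n largest keys seen so far.
        PriorityQueue<long[]> heap =
                new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));
        for (int i = 0; i < keys.length; i++) {
            heap.offer(new long[] { keys[i], values[i] });
            if (heap.size() > n) {
                heap.poll(); // drop the entry with the oldest key
            }
        }
        // Drain ascending by key, prepending to end up in DESC order.
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            out.add(0, (int) heap.poll()[1]);
        }
        return out;
    }

    public static void main(String[] args) {
        long[] dates = { 5, 1, 9, 3 };
        int[] userIds = { 50, 10, 90, 30 };
        System.out.println(firstValues(dates, userIds, 3)); // [90, 50, 30]
    }
}
```

One pass over the rows costs O(rows log n) time and O(n) memory per group, versus evaluating n separate NTH_VALUE aggregates as in the query above.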
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_4.x-HBase-1.2_v2.patch PHOENIX-3773_4.x-HBase-1.1_v2.patch > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98.patch, > PHOENIX-3773_4.x-HBase-0.98_v2.patch, PHOENIX-3773_4.x-HBase-1.1_v2.patch, > PHOENIX-3773_4.x-HBase-1.2_v2.patch, PHOENIX-3773_master.patch, > PHOENIX-3773_master_v2.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_master_v2.patch > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98.patch, > PHOENIX-3773_4.x-HBase-0.98_v2.patch, PHOENIX-3773_master.patch, > PHOENIX-3773_master_v2.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_master_final.patch) > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98.patch, > PHOENIX-3773_4.x-HBase-0.98_v2.patch, PHOENIX-3773_master.patch, > PHOENIX-3773_master_v2.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_4.x-HBase-0.98_final.patch) > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98.patch, > PHOENIX-3773_4.x-HBase-0.98_v2.patch, PHOENIX-3773_master.patch, > PHOENIX-3773_master_v2.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3211) Support running UPSERT SELECT asynchronously
[ https://issues.apache.org/jira/browse/PHOENIX-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16040273#comment-16040273 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3211: -- [~jamestaylor] Do we still need this? This was de-prioritised earlier. > Support running UPSERT SELECT asynchronously > > > Key: PHOENIX-3211 > URL: https://issues.apache.org/jira/browse/PHOENIX-3211 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Fix For: 4.11.0 > > > We have support for creating indexes asynchronously. We should add the > ability to run an UPSERT SELECT asynchronously too for very large tables. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3913) Support PArrayDataType.appendItemToArray to append item to array when null or empty
[ https://issues.apache.org/jira/browse/PHOENIX-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16040207#comment-16040207 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3913: -- Thanks [~jamestaylor] for implementing this without having to decode object for the first item added to array. > Support PArrayDataType.appendItemToArray to append item to array when null or > empty > > > Key: PHOENIX-3913 > URL: https://issues.apache.org/jira/browse/PHOENIX-3913 > Project: Phoenix > Issue Type: Task >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: James Taylor > Attachments: PHOENIX-3913_4.x-HBase-0.98.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16039178#comment-16039178 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~tdsilva] If you have access, can you paste the code from smart-apply-patch.sh and test-patch.sh? CC: [~samarthjain] > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98_final.patch, > PHOENIX-3773_4.x-HBase-0.98.patch, PHOENIX-3773_master_final.patch, > PHOENIX-3773_master.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16039169#comment-16039169 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~tdsilva] In one of the earlier comments, Samarth suggested creating _.patch, which will only apply to a specific branch. Is this not the case? > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98_final.patch, > PHOENIX-3773_4.x-HBase-0.98.patch, PHOENIX-3773_master_final.patch, > PHOENIX-3773_master.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038677#comment-16038677 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- Not able to debug further as I don't have access to smart-apply-patch.sh. FYI: [~jamestaylor] > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98_final.patch, > PHOENIX-3773_4.x-HBase-0.98.patch, PHOENIX-3773_master_final.patch, > PHOENIX-3773_master.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16038674#comment-16038674 ]

Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773:
--------------------------------------------------------------

[~tdsilva] [~samarthjain] [~jamestaylor] I have incorporated the review comments:

a) Added the ability for PArrayDataType.appendItemToArray to build an array from the provided element when the array is empty.
b) Renamed the variable to isArrayReturnType.
c) Removed the outer loop.
d) For nth value, return an empty byte array with a return value of true when not found.

Issue with the pre-commit build: the master patch applies properly, but the pre-commit build fails while applying the 4.x-HBase-0.98 patch, although it applies perfectly fine with "git apply " on checked-out Phoenix code. No insight into why this patch failed. Here are the log lines from the build output:

{quote}
+ /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/dev/smart-apply-patch.sh /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/patchprocess/patch
/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/dev/test-patch.sh: line 488: /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/dev/smart-apply-patch.sh: No such file or directory
+ [[ 127 != 0 ]]
{quote}

Can you guys help here?

> Implement FIRST_VALUES aggregate function
> -----------------------------------------
>
>          Key: PHOENIX-3773
>          URL: https://issues.apache.org/jira/browse/PHOENIX-3773
>      Project: Phoenix
>   Issue Type: New Feature
>     Reporter: James Taylor
>     Assignee: Loknath Priyatham Teja Singamsetty
>       Labels: SFDC
>      Fix For: 4.11.0
>
>  Attachments: PHOENIX-3773_4.x-HBase-0.98_final.patch,
> PHOENIX-3773_4.x-HBase-0.98.patch, PHOENIX-3773_master_final.patch,
> PHOENIX-3773_master.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch,
> PHOENIX-3773.v3.patch
>
> Similar to FIRST_VALUE, but would allow the user to specify how many values
> to keep. This could use a MinMaxPriorityQueue under the covers and be much
> more efficient than using multiple NTH_VALUE calls to do the same like this:
> {code}
> SELECT entity_id,
>        NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as nth1_user_id,
>        NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as nth2_user_id,
>        NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as nth3_user_id,
>        count(*)
> FROM MY_TABLE
> WHERE tenant_id='00Dx000'
> AND entity_id in ('0D5x00ABCD','0D5x00ABCE')
> GROUP BY entity_id;
> {code}

-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_4.x-HBase-0.98_final.patch > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98_final.patch, > PHOENIX-3773_4.x-HBase-0.98.patch, PHOENIX-3773_master_final.patch, > PHOENIX-3773_master.patch, PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_4.x-HBase-0.98_final.patch) > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773_4.x-HBase-0.98.patch, > PHOENIX-3773_master_final.patch, PHOENIX-3773_master.patch, > PHOENIX-3773.patch, PHOENIX-3773.v2.patch, PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_4.x-HBase-0.98_final.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_4.x-HBase-0.98_final.patch
[jira] [Resolved] (PHOENIX-3913) Support PArrayDataType.appendItemToArray to append item to array when null or empty
[ https://issues.apache.org/jira/browse/PHOENIX-3913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty resolved PHOENIX-3913. -- Resolution: Duplicate > Support PArrayDataType.appendItemToArray to append item to array when null or > empty > > > Key: PHOENIX-3913 > URL: https://issues.apache.org/jira/browse/PHOENIX-3913 > Project: Phoenix > Issue Type: Task >Reporter: Loknath Priyatham Teja Singamsetty
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_master_final.patch
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_4.x-HBase-0.98_final.patch
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_master_003.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_4.x-HBase-0.98_003.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_master_002.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_4.x-HBase-0.98_002.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_4.x-HBase-0.98_003.patch
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_master_003.patch
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_master_001.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_master_002.patch
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_4.x-HBase-0.98_001.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_4.x-HBase-0.98_002.patch
[jira] [Updated] (PHOENIX-3915) Implement LAST_VALUES to retrieve last n values
[ https://issues.apache.org/jira/browse/PHOENIX-3915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3915: - Affects Version/s: 4.11.0 > Implement LAST_VALUES to retrieve last n values > > > Key: PHOENIX-3915 > URL: https://issues.apache.org/jira/browse/PHOENIX-3915 > Project: Phoenix > Issue Type: New Feature >Affects Versions: 4.11.0 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty
[jira] [Created] (PHOENIX-3915) Implement LAST_VALUES to retrieve last n values
Loknath Priyatham Teja Singamsetty created PHOENIX-3915: Summary: Implement LAST_VALUES to retrieve last n values Key: PHOENIX-3915 URL: https://issues.apache.org/jira/browse/PHOENIX-3915 Project: Phoenix Issue Type: New Feature Reporter: Loknath Priyatham Teja Singamsetty Assignee: Loknath Priyatham Teja Singamsetty
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_master_001.patch
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_4.x-HBase-0.98_001.patch
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_4.x-HBase-0.98.patch
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_4.x-HBase-0.98.patch)
[jira] [Created] (PHOENIX-3914) Support PArrayDataType.appendItemToArray to append item to array when null or empty
Loknath Priyatham Teja Singamsetty created PHOENIX-3914: Summary: Support PArrayDataType.appendItemToArray to append item to array when null or empty Key: PHOENIX-3914 URL: https://issues.apache.org/jira/browse/PHOENIX-3914 Project: Phoenix Issue Type: Sub-task Reporter: Loknath Priyatham Teja Singamsetty Assignee: Loknath Priyatham Teja Singamsetty
[jira] [Created] (PHOENIX-3913) Support PArrayDataType.appendItemToArray to append item to array when null or empty
Loknath Priyatham Teja Singamsetty created PHOENIX-3913: Summary: Support PArrayDataType.appendItemToArray to append item to array when null or empty Key: PHOENIX-3913 URL: https://issues.apache.org/jira/browse/PHOENIX-3913 Project: Phoenix Issue Type: Task Reporter: Loknath Priyatham Teja Singamsetty
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035333#comment-16035333 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- Sure [~jamestaylor]. Will do what's best here.
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Sure James. Will do what's best here. -- Thanks, Teja.
[jira] [Comment Edited] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035333#comment-16035333 ] Loknath Priyatham Teja Singamsetty edited comment on PHOENIX-3773 at 6/2/17 8:06 PM: -- Sure, [~jamestaylor]. Will do whatever is best suited here. was (Author: singamteja): Sure [~jamestaylor].. Will do whats best here.
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16035310#comment-16035310 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- bq. You might want to check out ArrayAppendFunctionIT which exercises ARRAY_APPEND Thanks for the pointer, James. I was looking into the same and understanding how things work. [~jamestaylor] Looks like I found the reason. PArrayDataType.appendItemToArray can only be used when you already have an Array serialized to bytes with at least one element in it; we cannot leverage it without a pre-constructed array. In our case, the requirement is to convert multiple PDataType values into a single PArrayDataType, and there is no util method that constructs the Array from scratch given elements one by one. We have to perform serialization/deserialization for one element in order to construct the Array, after which we can make use of PArrayDataType.appendItemToArray. This saves the serialization/deserialization cost on the rest of the items in the FIRST_VALUES array result set. Let me know if this approach is fine with you.
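The approach described in the comment above, serialize one element to seed the array's byte form and then append each remaining element's bytes without re-serializing what is already there, can be sketched as follows. The names (IntArrayBuilder, serializeInt) are illustrative and stand in for Phoenix's PArrayDataType machinery, not its actual API:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;

// Hypothetical sketch of incremental array-bytes construction for a
// fixed-width element type (4-byte big-endian int). Not Phoenix code.
class IntArrayBuilder {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();
    private int count = 0;

    // Serialize one fixed-width element.
    static byte[] serializeInt(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }

    // Append one serialized element to the array bytes built so far,
    // mirroring the role appendItemToArray plays for fixed-length types.
    void append(int v) {
        byte[] element = serializeInt(v);
        buf.write(element, 0, element.length);
        count++;
    }

    byte[] toBytes() { return buf.toByteArray(); }
    int size() { return count; }
}
```

The point of the design is visible in append(): each element is serialized exactly once and the previously written bytes are never touched again.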
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16034333#comment-16034333 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~jamestaylor] bq. Then, once you have all the values, combine them together using the PArrayDataType.appendItemToArray() method I tried this last week as well, but for some reason the output is not as expected. Yesterday I made changes to use PArrayDataType.appendItemToArray() and, upon debugging, found two things: a) For fixed-length data types, appendItemToArray is actually prepending to arrayBytes, reversing the array construction. For the time being I used the prependItemToArray() method instead, which fixed this issue. The following lines of code in appendItemToArray seem to be the reason; they copy the new bytes to the front of the array and the older bytes to the end: {quote} newArray = new byte[length + elementLength]; System.arraycopy(arrayBytes, offset, newArray, 0, length); System.arraycopy(elementBytes, elementOffset, newArray, length, elementLength); {quote} b) For variable-length data types, the array construction results in an ArrayIndexOutOfBoundsException. Here is the stack trace: java.lang.ArrayIndexOutOfBoundsException: 32767 at org.apache.phoenix.schema.types.PArrayDataType.prependItemToArray(PArrayDataType.java:545) at org.apache.phoenix.expression.aggregator.FirstLastValueBaseClientAggregator.evaluate(FirstLastValueBaseClientAggregator.java:117) at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:112) at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:93) at org.apache.phoenix.expression.aggregator.Aggregators.toBytes(Aggregators.java:112) at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:82) at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778) at org.apache.phoenix.end2end.FirstValuesFunctionIT.varcharDatatypeSimpleTest(FirstValuesFunctionIT.java:100) I'm debugging this further. bq. Probably a good idea to have a test that asks for the top 3 values when there are only 2 values to make sure that case works too (if you don't have that already). A test case is included already.
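To make the two System.arraycopy calls quoted in the comment above concrete, here is a minimal, self-contained sketch (appendBytes is a hypothetical helper, not Phoenix's method). Note that, taken in isolation, these two calls place the existing array bytes at the front of the new array and the element bytes at the end:

```java
// Minimal sketch of the two System.arraycopy calls quoted above.
// appendBytes is a hypothetical stand-in, not Phoenix's appendItemToArray.
class ArrayCopySketch {
    static byte[] appendBytes(byte[] arrayBytes, int offset, int length,
                              byte[] elementBytes, int elementOffset, int elementLength) {
        byte[] newArray = new byte[length + elementLength];
        // first call: existing array bytes go to the front of the new array
        System.arraycopy(arrayBytes, offset, newArray, 0, length);
        // second call: the new element's bytes follow at the end
        System.arraycopy(elementBytes, elementOffset, newArray, length, elementLength);
        return newArray;
    }
}
```

Under this ordering the element bytes land last, so if a prepend is observed in practice, the reversal presumably comes from elsewhere in the fixed-length serialization path; that is worth confirming against the actual Phoenix source.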
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773_4.x-HBase-0.98.patch)
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030777#comment-16030777 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~samarthjain] Made necessary changes as suggested.
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773_master.patch PHOENIX-3773_4.x-HBase-0.98.patch
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3733_master.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3733-v6.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3733-v5.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3733.v4.patch)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3733-v6.patch
[jira] [Comment Edited] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030608#comment-16030608 ] Loknath Priyatham Teja Singamsetty edited comment on PHOENIX-3773 at 5/31/17 4:24 AM: --- [~jamestaylor] [~samarthjain] Here is what I'm doing to create the patch, which works for master and the 4.x branch locally. This used to work earlier, but the precommit build is now failing while applying the patch to master: a) check out 4.x-HBase-0.98 b) git commit c) git format-patch HEAD -1 I have also tried the instructions specified in http://phoenix.apache.org/contributing.html. Running "git format-patch --stdout origin > PHOENIX-{NUMBER}.patch" results in a patch file with a lot of commits backdated to 2014. Meanwhile, debugging further from the build console output: {quote} + /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/dev/smart-apply-patch.sh /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/patchprocess/patch The patch does not appear to apply with p0 to p2 + [[ 1 != 0 ]] + echo 'PATCH APPLICATION FAILED' PATCH APPLICATION FAILED + JIRA_COMMENT='Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12870496/PHOENIX-3733.v4.patch against master branch at commit e3fc929e93715a359b4267db9f4d12706247a6a6. {quote} I would like to know the commands in smart-apply-patch.sh for further debugging.
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030608#comment-16030608 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~jamestaylor] [~samarthjain] Here is what I'm doing to create the patch, which works for master and the 4.x branch locally. This used to work earlier, but the precommit build is now failing while applying the patch to master: a) check out 4.x-HBase-0.98 b) git commit c) git format-patch HEAD -1 I have also tried the instructions specified in http://phoenix.apache.org/contributing.html. Running "git format-patch --stdout origin > PHOENIX-{NUMBER}.patch" results in a patch file with a lot of commits backdated to 2014. Meanwhile, debugging further from the build console output: {quote} + /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/dev/smart-apply-patch.sh /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/patchprocess/patch The patch does not appear to apply with p0 to p2 + [[ 1 != 0 ]] + echo 'PATCH APPLICATION FAILED' PATCH APPLICATION FAILED + JIRA_COMMENT='Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12870496/PHOENIX-3733.v4.patch against master branch at commit e3fc929e93715a359b4267db9f4d12706247a6a6. {quote} I would like to know the commands in smart-apply-patch.sh for further debugging.
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16030554#comment-16030554 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~samarthjain] [~tdsilva] Can you please help resolve this "patch command could not apply the patch" failure? I was able to apply this patch to both master and 4.x-HBase-0.98 on my dev box.
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3733.v4.patch
[jira] [Assigned] (PHOENIX-3215) Add oracle regexp_like function in phoenix
[ https://issues.apache.org/jira/browse/PHOENIX-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty reassigned PHOENIX-3215: Assignee: (was: Loknath Priyatham Teja Singamsetty ) > Add oracle regexp_like function in phoenix > -- > > Key: PHOENIX-3215 > URL: https://issues.apache.org/jira/browse/PHOENIX-3215 > Project: Phoenix > Issue Type: Improvement >Reporter: Loknath Priyatham Teja Singamsetty >Priority: Minor > > We have regexp_substr today, which returns a substring of a string by applying > a regular expression starting from the offset of a one-based position. > However, when using query-builder frameworks like the JOOQ code generators, which > generate Java code from the database and build type-safe SQL queries out of the box, the > lack of regexp_like syntax forces developers to use workarounds to build > the equivalent queries. > Hard-coding the query for regexp_substr, as JOOQ does not support it: > {quote} > regex = regex + " AND regexp_substr("+ > TestResultEntity.Column.BASELINE_MESSAGE.getName() +", ?)" + matching; > {quote} > Here is the Oracle documentation for regexp_like: > https://docs.oracle.com/cd/B12037_01/server.101/b10759/conditions018.htm
[jira] [Assigned] (PHOENIX-3867) nth_value returns valid values for non-existing rows
[ https://issues.apache.org/jira/browse/PHOENIX-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty reassigned PHOENIX-3867: Assignee: (was: Loknath Priyatham Teja Singamsetty ) > nth_value returns valid values for non-existing rows > - > > Key: PHOENIX-3867 > URL: https://issues.apache.org/jira/browse/PHOENIX-3867 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.10.0 >Reporter: Loknath Priyatham Teja Singamsetty > Fix For: 4.11.0 > > > Assume a table with two rows as follows: > id, page_id, date, value > 2, 8, 1, 7 > 3, 8, 2, 9 > Fetching the 3rd most recent value for page_id 8 should not return any values. > However, rs.next() succeeds, rs.getInt(1) returns 0, and the assertion > fails. Below is a test case depicting the same. > Issues: > > a) From sqlline, the 3rd nth_value is returned as null > b) When accessed programmatically, it comes back as 0 > Test Case: > - > public void nonExistingNthRowTestWithGroupBy() throws Exception { > Connection conn = DriverManager.getConnection(getUrl()); > String nthValue = generateUniqueName(); > String ddl = "CREATE TABLE IF NOT EXISTS " + nthValue + " " > + "(id INTEGER NOT NULL PRIMARY KEY, page_id UNSIGNED_LONG," > + " dates INTEGER, val INTEGER)"; > conn.createStatement().execute(ddl); > conn.createStatement().execute( > "UPSERT INTO " + nthValue + " (id, page_id, dates, val) VALUES > (2, 8, 1, 7)"); > conn.createStatement().execute( > "UPSERT INTO " + nthValue + " (id, page_id, dates, val) VALUES > (3, 8, 2, 9)"); > conn.commit(); > ResultSet rs = conn.createStatement().executeQuery( > "SELECT NTH_VALUE(val, 3) WITHIN GROUP (ORDER BY dates DESC) FROM > " + nthValue > + " GROUP BY page_id"); > assertTrue(rs.next()); > assertEquals(rs.getInt(1), 4); > assertFalse(rs.next()); > } > Root Cause: > --- > The underlying issue appears to be the way NTH_VALUE aggregation is done > by the aggregator. The client aggregator is first populated with the top 'n' > rows (if present), and iterator.next() in BaseGroupedAggregatingResultIterator > never evaluates whether the nth row is actually present. Once iterator.next() succeeds, > retrieving the value from the result set via the row projector triggers the client aggregator's > evaluate() method as part of schema.toBytes(..), which defaults to 0 for an > empty row when the type is int and the value is accessed programmatically. >
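The fix direction implied by the root cause above can be sketched as a client-side nth-value aggregator that tracks the top n rows and reports no value when fewer than n rows exist, instead of letting an empty row default to 0. The class and method names here are illustrative, not Phoenix's FirstLastValueBaseClientAggregator API:

```java
import java.util.PriorityQueue;

// Illustrative sketch: keep the top n rows ordered by a sort key
// (playing the role of "dates" in NTH_VALUE ... ORDER BY dates DESC)
// and expose the nth value only when it actually exists.
class NthValueSketch {
    private final int n;
    // min-heap on the sort key: the root is the nth most recent of the kept rows
    private final PriorityQueue<long[]> topN =
        new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));

    NthValueSketch(int n) { this.n = n; }

    // aggregate one row: sortKey plays the role of "dates", value of "val"
    void aggregate(long sortKey, long value) {
        topN.offer(new long[] { sortKey, value });
        if (topN.size() > n) {
            topN.poll(); // drop the row with the smallest sort key
        }
    }

    // the nth most recent value, or null when fewer than n rows were seen
    Long evaluate() {
        if (topN.size() < n) {
            return null; // do not default to 0 for a non-existing row
        }
        return topN.peek()[1]; // heap root = nth largest sort key
    }
}
```

With the issue's two-row data (dates 1 and 2) and n = 3, evaluate() returns null rather than 0, which is the behavior the test case expects from rs.next()/rs.getInt(1).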
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16029256#comment-16029256 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~jamestaylor] Please find the v3 patch with the required changes to support returning ARRAY for FIRST_VALUES > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
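The bounded-heap idea behind FIRST_VALUES can be sketched as follows; this is a hypothetical illustration using a plain min-heap capped at n (the issue suggests Guava's MinMaxPriorityQueue; any size-bounded heap gives the same O(log n) per-row cost), not the actual Phoenix implementation.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Keep only the n entries with the largest order key, instead of evaluating n
// separate NTH_VALUE aggregates over the same group. Names are illustrative.
public class FirstValuesSketch {
    // rows are (orderKey, value) pairs; returns the values paired with the n
    // largest keys, ordered key-descending (like WITHIN GROUP ... ORDER BY DESC).
    static List<Integer> firstValues(List<int[]> rows, int n) {
        PriorityQueue<int[]> heap =
                new PriorityQueue<>(Comparator.comparingInt((int[] r) -> r[0]));
        for (int[] row : rows) {
            if (heap.size() < n) {
                heap.offer(row);
            } else if (row[0] > heap.peek()[0]) {
                heap.poll();       // evict the smallest of the kept keys
                heap.offer(row);
            }
        }
        List<int[]> kept = new ArrayList<>(heap);
        kept.sort(Comparator.comparingInt((int[] r) -> r[0]).reversed());
        List<Integer> out = new ArrayList<>();
        for (int[] r : kept) out.add(r[1]);
        return out;
    }

    public static void main(String[] args) {
        // (last_read_date, user_id) rows for one entity_id group
        List<int[]> rows = List.of(new int[]{1, 7}, new int[]{2, 9},
                                   new int[]{3, 4}, new int[]{4, 2});
        System.out.println(firstValues(rows, 2)); // [2, 4]
    }
}
```

The heap never grows past n entries, so memory stays O(n) per group regardless of group size, which is the efficiency argument made in the issue description.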
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773.v3.patch > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773.patch, PHOENIX-3773.v2.patch, > PHOENIX-3773.v3.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Closed] (PHOENIX-3802) NPE with PRowImpl.toRowMutations(PTableImpl.java)
[ https://issues.apache.org/jira/browse/PHOENIX-3802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty closed PHOENIX-3802. > NPE with PRowImpl.toRowMutations(PTableImpl.java) > - > > Key: PHOENIX-3802 > URL: https://issues.apache.org/jira/browse/PHOENIX-3802 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.10.0, 4.10.1 >Reporter: Loknath Priyatham Teja Singamsetty > Fix For: 4.10.0, 4.11.0, 4.10.1 > > > Caused by: org.apache.phoenix.execute.CommitException: > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 2 > actions: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON > DUPLICATE IGNORE for > COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x00D5B00SMgzx): > null > at > org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89) > at > org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236) > at > org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693) > at > 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.NullPointerException > at > org.apache.phoenix.schema.PTableImpl$PRowImpl.toRowMutations(PTableImpl.java:910) > at > org.apache.phoenix.index.PhoenixIndexBuilder.executeAtomicOp(PhoenixIndexBuilder.java:246) > at > org.apache.phoenix.hbase.index.builder.IndexBuildManager.executeAtomicOp(IndexBuildManager.java:187) > at > org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:213) > ... 15 more > : 1 time, org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON > DUPLICATE IGNORE for > COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x000TOB00010ic0D5B00SMgzx): > null > at > org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89) > at > org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236) > at > org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818) > at > 
org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) > at
[jira] [Resolved] (PHOENIX-3802) NPE with PRowImpl.toRowMutations(PTableImpl.java)
[ https://issues.apache.org/jira/browse/PHOENIX-3802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty resolved PHOENIX-3802. -- Resolution: Resolved > NPE with PRowImpl.toRowMutations(PTableImpl.java) > - > > Key: PHOENIX-3802 > URL: https://issues.apache.org/jira/browse/PHOENIX-3802 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.10.0, 4.10.1 >Reporter: Loknath Priyatham Teja Singamsetty > Fix For: 4.10.0, 4.11.0, 4.10.1 > > > Caused by: org.apache.phoenix.execute.CommitException: > org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 2 > actions: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON > DUPLICATE IGNORE for > COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x00D5B00SMgzx): > null > at > org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89) > at > org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236) > at > org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693) > at > 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) > at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.NullPointerException > at > org.apache.phoenix.schema.PTableImpl$PRowImpl.toRowMutations(PTableImpl.java:910) > at > org.apache.phoenix.index.PhoenixIndexBuilder.executeAtomicOp(PhoenixIndexBuilder.java:246) > at > org.apache.phoenix.hbase.index.builder.IndexBuildManager.executeAtomicOp(IndexBuildManager.java:187) > at > org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:213) > ... 15 more > : 1 time, org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON > DUPLICATE IGNORE for > COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x000TOB00010ic0D5B00SMgzx): > null > at > org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89) > at > org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670) > at > org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236) > at > org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818) > at > 
org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802) > at > org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693) > at > org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500) > at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210) > at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) > at > org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16022503#comment-16022503 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~jamestaylor] bq. Have FIRST_VALUES return an ARRAY type so that you can return all values in a single row. It's not going to work to change the semantics of SQL (it's pretty well established). A function can't return multiple rows like that. Gone through Oracle/SQL rank over and grouped_concat behaviours. Probably what you are suggesting here is to implement Group_concat https://stackoverflow.com/questions/2129693/using-limit-within-group-by-to-get-n-results-per-group. Kindly help me with the expectation here. TEST.TEST ||id||page_id||date||val|| |2|8|1|7| |3|8|2|9| |4|8|3|4| |5|8|4|2| |6|9|5|10| |7|9|6|13| For the above table with below queries, a) FIRST_VALUES With Group By Clause SELECT page_id, FIRST_VALUES(val, 2) WITHIN GROUP (ORDER BY dates DESC) as first_values FROM TEST.TEST GROUP BY page_id Expected Output? --- ||page_id||first_values|| |8|2,4| |9|13,10| b) FIRST_VALUES without group by SELECT FIRST_VALUES(val, 2) as first_values WITHIN GROUP (ORDER BY dates DESC) as first_values FROM TEST.TEST ||first_values|| |13,10| [~jamestaylor] Let me know if the above looks as expected behaviour. > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773.patch, PHOENIX-3773.v2.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. 
This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
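The grouped expected output tabled in the comment above (page_id 8 -> 2,4 and page_id 9 -> 13,10) can be checked with a small, hypothetical sketch; method and class names are illustrative, not Phoenix code.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of FIRST_VALUES(val, 2) WITHIN GROUP (ORDER BY dates DESC)
// GROUP BY page_id over the sample TEST.TEST rows from the comment.
public class GroupedFirstValues {
    // rows are (id, page_id, dates, val); returns page_id -> top-n vals, dates-descending
    static Map<Integer, List<Integer>> firstValuesByGroup(int[][] rows, int n) {
        Map<Integer, List<int[]>> groups = new TreeMap<>();
        for (int[] r : rows)
            groups.computeIfAbsent(r[1], k -> new ArrayList<>()).add(new int[]{r[2], r[3]});
        Map<Integer, List<Integer>> out = new TreeMap<>();
        for (Map.Entry<Integer, List<int[]>> e : groups.entrySet()) {
            e.getValue().sort(Comparator.comparingInt((int[] p) -> p[0]).reversed());
            List<Integer> vals = new ArrayList<>();
            for (int[] p : e.getValue().subList(0, Math.min(n, e.getValue().size())))
                vals.add(p[1]);     // project val, keeping dates-descending order
            out.put(e.getKey(), vals);
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] rows = {{2, 8, 1, 7}, {3, 8, 2, 9}, {4, 8, 3, 4},
                        {5, 8, 4, 2}, {6, 9, 5, 10}, {7, 9, 6, 13}};
        System.out.println(firstValuesByGroup(rows, 2)); // {8=[2, 4], 9=[13, 10]}
    }
}
```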
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773.v2.patch > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773.patch, PHOENIX-3773.v2.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021384#comment-16021384 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~jamestaylor] [~tdsilva] [~samarthjain] Can you please review. Also can you please see why the patch is not getting applied resulting in build failure. > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773.patch > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: (was: PHOENIX-3773.patch) > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021011#comment-16021011 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3773: -- [~jamestaylor] Please review > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3826) Exception stack trace is being logged in info mode when new phoenix connection is created
[ https://issues.apache.org/jira/browse/PHOENIX-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16021000#comment-16021000 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3826: -- [~samarthjain] Can we commit this and close this JIRA. > Exception stack trace is being logged in info mode when new phoenix > connection is created > -- > > Key: PHOENIX-3826 > URL: https://issues.apache.org/jira/browse/PHOENIX-3826 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.11.0, 4.10.1 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Minor > Attachments: PHOENIX-3826.patch > > > Exception is being raised when new phoenix connection is created > 2017-05-03 05:51:39,898 INFO [main] query.ConnectionQueryServicesImpl - An > instance of ConnectionQueryServices was created: java.lang.Exception > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2401) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255) > at > org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149) > at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) > at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157) > at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203) > at sqlline.Commands.connect(Commands.java:1064) > at sqlline.Commands.connect(Commands.java:996) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36) > at sqlline.SqlLine.dispatch(SqlLine.java:803) > at sqlline.SqlLine.initArgs(SqlLine.java:588) > at sqlline.SqlLine.begin(SqlLine.java:656) > at sqlline.SqlLine.start(SqlLine.java:398) > at sqlline.SqlLine.main(SqlLine.java:292) > We can simply log a message without printing the stack trace. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
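The suggested fix, logging a plain message instead of attaching a throwable, can be illustrated with java.util.logging; Phoenix itself uses a different logging API, so this only demonstrates the message-without-stack-trace pattern.

```java
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

// A record with no throwable formats to just the message; attaching one (as the
// current code effectively does by logging a new Exception) pulls in the trace.
public class QuietInfoLog {
    static String format(boolean attachThrowable) {
        LogRecord rec = new LogRecord(Level.INFO,
                "An instance of ConnectionQueryServices was created");
        if (attachThrowable)
            rec.setThrown(new Exception());  // the noisy variant seen in the log above
        return new SimpleFormatter().format(rec);
    }

    public static void main(String[] args) {
        System.out.print(format(false));  // message only, no java.lang.Exception trace
    }
}
```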
[jira] [Updated] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3773: - Attachment: PHOENIX-3773.patch > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: New Feature >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > Fix For: 4.11.0 > > Attachments: PHOENIX-3773.patch > > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (PHOENIX-3867) nth_value returns valid values for non-existing rows
Loknath Priyatham Teja Singamsetty created PHOENIX-3867: Summary: nth_value returns valid values for non-existing rows Key: PHOENIX-3867 URL: https://issues.apache.org/jira/browse/PHOENIX-3867 Project: Phoenix Issue Type: Bug Affects Versions: 4.10.0 Reporter: Loknath Priyatham Teja Singamsetty Assignee: Loknath Priyatham Teja Singamsetty Fix For: 4.11.0 Assume a table with two rows as follows: id, page_id, date, value 2, 8, 1, 7 3, 8, 2, 9 Fetching the 3rd most recent value for page_id 8 should not return any value. However, rs.next() succeeds, rs.getInt(1) returns 0, and the assertion fails. Below is a test case depicting the same. Issues: a) From sqlline, the 3rd nth_value is returned as null b) When accessed programmatically, it comes back as 0 Test Case: - public void nonExistingNthRowTestWithGroupBy() throws Exception { Connection conn = DriverManager.getConnection(getUrl()); String nthValue = generateUniqueName(); String ddl = "CREATE TABLE IF NOT EXISTS " + nthValue + " " + "(id INTEGER NOT NULL PRIMARY KEY, page_id UNSIGNED_LONG," + " dates INTEGER, val INTEGER)"; conn.createStatement().execute(ddl); conn.createStatement().execute( "UPSERT INTO " + nthValue + " (id, page_id, dates, val) VALUES (2, 8, 1, 7)"); conn.createStatement().execute( "UPSERT INTO " + nthValue + " (id, page_id, dates, val) VALUES (3, 8, 2, 9)"); conn.commit(); ResultSet rs = conn.createStatement().executeQuery( "SELECT NTH_VALUE(val, 3) WITHIN GROUP (ORDER BY dates DESC) FROM " + nthValue + " GROUP BY page_id"); assertTrue(rs.next()); assertEquals(rs.getInt(1), 4); assertFalse(rs.next()); } Root Cause: --- The underlying issue seems to be with the way NTH_VALUE aggregation is done by the aggregator. The client aggregator is first populated with the top 'n' rows (if present), and iterator.next() in BaseGroupedAggregatingResultIterator never checks whether the nth row is actually present. 
Once iterator.next() succeeds, retrieving the value from the result set via the row projector triggers the client aggregator's evaluate() method as part of schema.toBytes(..), which defaults to 0 for an empty row when the column is an int and accessed programmatically. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3841) Phoenix View creation failure with Primary table not found error when we use update_cache_frequency for primary table
[ https://issues.apache.org/jira/browse/PHOENIX-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16004196#comment-16004196 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3841: -- Minor typo [~sukuna...@gmail.com] in the patch, in couple of places correct alwaysHitSerer -> alwaysHisServer > Phoenix View creation failure with Primary table not found error when we use > update_cache_frequency for primary table > - > > Key: PHOENIX-3841 > URL: https://issues.apache.org/jira/browse/PHOENIX-3841 > Project: Phoenix > Issue Type: Sub-task >Affects Versions: 4.10.0 >Reporter: Maddineni Sukumar >Assignee: Maddineni Sukumar >Priority: Minor > Fix For: 4.11 > > Attachments: PHOENIX-3841.patch, PHOENIX-3841.v2.patch, > PHOENIX-3841.v3.patch > > > Create VIEW command failing with actual table not found error and next retry > failed with VIEW already exists error..And its continuing like that(first > tabelnotfound and then view already exists).. > If I create table without UPDATE_CACHE_FREQUENCY then its working fine. > Create table command: > create table UpdateCacheViewTestB (k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 > VARCHAR) UPDATE_CACHE_FREQUENCY=10; > Create View command: > CREATE VIEW my_view (v43 VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE > v1 = 'value1’; > sqlline Console output: > 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> select * from > UPDATECACHEVIEWTESTB; > -- > K V1 V2 > -- > 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 > VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1'; > Error: ERROR 1012 (42M03): Table undefined. tableName=UPDATECACHEVIEWTESTB > (state=42M03,code=1012) > 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 > VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1'; > Error: ERROR 1013 (42M04): Table already exists. 
tableName=MY_VIEW > (state=42M04,code=1013) > 0: jdbc:phoenix:shared-mnds1-1-sfm.ops.sfdc.n> CREATE VIEW my_view (v43 > VARCHAR) AS SELECT * FROM UpdateCacheViewTestB WHERE v1 = 'value1'; > Error: ERROR 1012 (42M03): Table undefined. tableName=UPDATECACHEVIEWTESTB > (state=42M03,code=1012) -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3840) Functions extending FirstLastValueBaseFunction returning NAME as null instead of actual function name
[ https://issues.apache.org/jira/browse/PHOENIX-3840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16002371#comment-16002371 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3840: -- Fixed the bug and attached the patch here. FYI : [~jamestaylor] [~samarthjain] > Functions extending FirstLastValueBaseFunction returning NAME as null instead > of actual function name > - > > Key: PHOENIX-3840 > URL: https://issues.apache.org/jira/browse/PHOENIX-3840 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.9.0, 4.10.0 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Minor > Fix For: 4.11.0, 4.10.1 > > Attachments: PHOENIX-3840.patch > > > The NTH_VALUE, FIRST_VALUE and other functions extending > FirstLastValueBaseFunction are printing NAME as null because in Java > member variables are bound to the reference type: a superclass reference > to a subclass object reads the superclass member variable, which is > initialised to null. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
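The field-hiding behaviour behind the null NAME can be reproduced in isolation: in Java a field access is resolved against the compile-time type of the reference, not the runtime object, so a field declared in a subclass hides rather than overrides the superclass field. Class names below are illustrative stand-ins, not the actual Phoenix classes.

```java
// Minimal demonstration of Java field hiding; illustrative names only.
public class FieldHidingDemo {
    static class BaseFunction {
        public final String NAME = null;         // stand-in for the base class field
    }
    static class NthValueFunction extends BaseFunction {
        public final String NAME = "NTH_VALUE";  // hides BaseFunction.NAME, no overriding
    }

    public static void main(String[] args) {
        BaseFunction f = new NthValueFunction();
        System.out.println(f.NAME);                       // null: resolved via BaseFunction
        System.out.println(((NthValueFunction) f).NAME);  // NTH_VALUE: cast changes resolution
    }
}
```

This is why code that handles the functions through a superclass reference prints null, which matches the fix direction described in the issue.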
[jira] [Updated] (PHOENIX-3840) Functions extending FirstLastValueBaseFunction returning NAME as null instead of actual function name
[ https://issues.apache.org/jira/browse/PHOENIX-3840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3840: - Attachment: PHOENIX-3840.patch > Functions extending FirstLastValueBaseFunction returning NAME as null instead > of actual function name > - > > Key: PHOENIX-3840 > URL: https://issues.apache.org/jira/browse/PHOENIX-3840 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.9.0, 4.10.0 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Minor > Fix For: 4.11.0, 4.10.1 > > Attachments: PHOENIX-3840.patch > > > The NTH_VALUE, FIRST_VALUE and other functions extending > FirstLastValueBaseFunction are printing NAME as null because in Java the > member variables are always binded to the reference type, the super class > type referencing to sub class object is using the super class member variable > which is initialised to null. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (PHOENIX-3840) Functions extending FirstLastValueBaseFunction returning NAME as null instead of actual function name
[ https://issues.apache.org/jira/browse/PHOENIX-3840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty reassigned PHOENIX-3840: Assignee: Loknath Priyatham Teja Singamsetty Affects Version/s: 4.10.0 4.9.0 Fix Version/s: 4.10.1 4.11.0 > Functions extending FirstLastValueBaseFunction returning NAME as null instead > of actual function name > - > > Key: PHOENIX-3840 > URL: https://issues.apache.org/jira/browse/PHOENIX-3840 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.9.0, 4.10.0 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Minor > Fix For: 4.11.0, 4.10.1 > > > The NTH_VALUE, FIRST_VALUE and other functions extending > FirstLastValueBaseFunction print NAME as null because in Java member > variables are bound to the reference's static type rather than overridden: a > superclass-typed reference to a subclass object reads the superclass member > variable, which is initialised to null. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (PHOENIX-3840) Functions extending FirstLastValueBaseFunction returning NAME as null instead of actual function name
Loknath Priyatham Teja Singamsetty created PHOENIX-3840: Summary: Functions extending FirstLastValueBaseFunction returning NAME as null instead of actual function name Key: PHOENIX-3840 URL: https://issues.apache.org/jira/browse/PHOENIX-3840 Project: Phoenix Issue Type: Bug Reporter: Loknath Priyatham Teja Singamsetty Priority: Minor The NTH_VALUE, FIRST_VALUE and other functions extending FirstLastValueBaseFunction print NAME as null because in Java member variables are bound to the reference's static type rather than overridden: a superclass-typed reference to a subclass object reads the superclass member variable, which is initialised to null. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
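The field-hiding behaviour described in PHOENIX-3840 can be reproduced in isolation. The class names below are illustrative placeholders, not the actual Phoenix classes:

```java
class BaseFunction {
    // In Java, fields are resolved at compile time against the reference's
    // static type; subclasses hide them, they do not override them.
    public String name = null;
}

class NthValueFunction extends BaseFunction {
    // This field hides BaseFunction.name instead of replacing it.
    public String name = "NTH_VALUE";
}

public class FieldHidingDemo {
    public static void main(String[] args) {
        BaseFunction f = new NthValueFunction();
        // Prints "null": the BaseFunction-typed reference reads the
        // superclass field, mirroring the NAME-is-null symptom.
        System.out.println(f.name);
        // Prints "NTH_VALUE": a subclass-typed reference reads its own field.
        System.out.println(((NthValueFunction) f).name);
    }
}
```

The usual cure for this class of bug is to stop reading the field through a supertype reference, e.g. by exposing the name via an overridable getter that each subclass implements.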
[jira] [Commented] (PHOENIX-3826) Exception stack trace is being logged in info mode when new phoenix connection is created
[ https://issues.apache.org/jira/browse/PHOENIX-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15994431#comment-15994431 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3826: -- [~samarthjain] Incorporated the changes and attached the patch. > Exception stack trace is being logged in info mode when new phoenix > connection is created > -- > > Key: PHOENIX-3826 > URL: https://issues.apache.org/jira/browse/PHOENIX-3826 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.11.0, 4.10.1 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Minor > Attachments: PHOENIX-3826.patch > > > Exception is being raised when new phoenix connection is created > 2017-05-03 05:51:39,898 INFO [main] query.ConnectionQueryServicesImpl - An > instance of ConnectionQueryServices was created: java.lang.Exception > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2401) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255) > at > org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149) > at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) > at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157) > at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203) > at sqlline.Commands.connect(Commands.java:1064) > at sqlline.Commands.connect(Commands.java:996) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36) > at sqlline.SqlLine.dispatch(SqlLine.java:803) > at sqlline.SqlLine.initArgs(SqlLine.java:588) > at sqlline.SqlLine.begin(SqlLine.java:656) > at sqlline.SqlLine.start(SqlLine.java:398) > at sqlline.SqlLine.main(SqlLine.java:292) > We can simply log a message without printing the stack trace. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3826) Exception stack trace is being logged in info mode when new phoenix connection is created
[ https://issues.apache.org/jira/browse/PHOENIX-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3826: - Affects Version/s: 4.10.1 > Exception stack trace is being logged in info mode when new phoenix > connection is created > -- > > Key: PHOENIX-3826 > URL: https://issues.apache.org/jira/browse/PHOENIX-3826 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.11.0, 4.10.1 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Minor > Attachments: PHOENIX-3826.patch > > > Exception is being raised when new phoenix connection is created > 2017-05-03 05:51:39,898 INFO [main] query.ConnectionQueryServicesImpl - An > instance of ConnectionQueryServices was created: java.lang.Exception > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2401) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255) > at > org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149) > at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) > at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157) > at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203) > at sqlline.Commands.connect(Commands.java:1064) > at sqlline.Commands.connect(Commands.java:996) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at 
java.lang.reflect.Method.invoke(Method.java:498) > at > sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36) > at sqlline.SqlLine.dispatch(SqlLine.java:803) > at sqlline.SqlLine.initArgs(SqlLine.java:588) > at sqlline.SqlLine.begin(SqlLine.java:656) > at sqlline.SqlLine.start(SqlLine.java:398) > at sqlline.SqlLine.main(SqlLine.java:292) > We can simply log a message without printing the stack trace. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3826) Exception stack trace is being logged in info mode when new phoenix connection is created
[ https://issues.apache.org/jira/browse/PHOENIX-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3826: - Attachment: PHOENIX-3826.patch > Exception stack trace is being logged in info mode when new phoenix > connection is created > -- > > Key: PHOENIX-3826 > URL: https://issues.apache.org/jira/browse/PHOENIX-3826 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.11.0 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Minor > Attachments: PHOENIX-3826.patch > > > Exception is being raised when new phoenix connection is created > 2017-05-03 05:51:39,898 INFO [main] query.ConnectionQueryServicesImpl - An > instance of ConnectionQueryServices was created: java.lang.Exception > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2401) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255) > at > org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149) > at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) > at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157) > at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203) > at sqlline.Commands.connect(Commands.java:1064) > at sqlline.Commands.connect(Commands.java:996) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at 
java.lang.reflect.Method.invoke(Method.java:498) > at > sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36) > at sqlline.SqlLine.dispatch(SqlLine.java:803) > at sqlline.SqlLine.initArgs(SqlLine.java:588) > at sqlline.SqlLine.begin(SqlLine.java:656) > at sqlline.SqlLine.start(SqlLine.java:398) > at sqlline.SqlLine.main(SqlLine.java:292) > We can simply log a message without printing the stack trace. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3826) Exception stack trace is being logged in info mode when new phoenix connection is created
[ https://issues.apache.org/jira/browse/PHOENIX-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3826: - Priority: Minor (was: Major) > Exception stack trace is being logged in info mode when new phoenix > connection is created > -- > > Key: PHOENIX-3826 > URL: https://issues.apache.org/jira/browse/PHOENIX-3826 > Project: Phoenix > Issue Type: Bug >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty >Priority: Minor > > Exception is being raised when new phoenix connection is created > 2017-05-03 05:51:39,898 INFO [main] query.ConnectionQueryServicesImpl - An > instance of ConnectionQueryServices was created: java.lang.Exception > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2401) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255) > at > org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149) > at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) > at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157) > at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203) > at sqlline.Commands.connect(Commands.java:1064) > at sqlline.Commands.connect(Commands.java:996) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36) > at sqlline.SqlLine.dispatch(SqlLine.java:803) > at sqlline.SqlLine.initArgs(SqlLine.java:588) > at sqlline.SqlLine.begin(SqlLine.java:656) > at sqlline.SqlLine.start(SqlLine.java:398) > at sqlline.SqlLine.main(SqlLine.java:292) > We can simply log a message without printing the stack trace. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (PHOENIX-3826) Exception stack trace is being logged in info mode when new phoenix connection is created
[ https://issues.apache.org/jira/browse/PHOENIX-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty reassigned PHOENIX-3826: Assignee: Loknath Priyatham Teja Singamsetty > Exception stack trace is being logged in info mode when new phoenix > connection is created > -- > > Key: PHOENIX-3826 > URL: https://issues.apache.org/jira/browse/PHOENIX-3826 > Project: Phoenix > Issue Type: Bug >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty > > Exception is being raised when new phoenix connection is created > 2017-05-03 05:51:39,898 INFO [main] query.ConnectionQueryServicesImpl - An > instance of ConnectionQueryServices was created: java.lang.Exception > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2401) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76) > at > org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2378) > at > org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255) > at > org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149) > at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) > at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157) > at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203) > at sqlline.Commands.connect(Commands.java:1064) > at sqlline.Commands.connect(Commands.java:996) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36) > at sqlline.SqlLine.dispatch(SqlLine.java:803) > at sqlline.SqlLine.initArgs(SqlLine.java:588) > at sqlline.SqlLine.begin(SqlLine.java:656) > at sqlline.SqlLine.start(SqlLine.java:398) > at sqlline.SqlLine.main(SqlLine.java:292) > We can simply log a message without printing the stack trace. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (PHOENIX-3826) Exception stack trace is being logged in info mode when new phoenix connection is created
Loknath Priyatham Teja Singamsetty created PHOENIX-3826: Summary: Exception stack trace is being logged in info mode when new phoenix connection is created Key: PHOENIX-3826 URL: https://issues.apache.org/jira/browse/PHOENIX-3826 Project: Phoenix Issue Type: Bug Reporter: Loknath Priyatham Teja Singamsetty Exception is being raised when new phoenix connection is created 2017-05-03 05:51:39,898 INFO [main] query.ConnectionQueryServicesImpl - An instance of ConnectionQueryServices was created: java.lang.Exception at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2401) at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2378) at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76) at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2378) at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255) at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149) at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157) at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203) at sqlline.Commands.connect(Commands.java:1064) at sqlline.Commands.connect(Commands.java:996) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36) at sqlline.SqlLine.dispatch(SqlLine.java:803) at sqlline.SqlLine.initArgs(SqlLine.java:588) at sqlline.SqlLine.begin(SqlLine.java:656) at sqlline.SqlLine.start(SqlLine.java:398) at sqlline.SqlLine.main(SqlLine.java:292) We can 
simply log a message without printing the stack trace. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
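The proposed change in PHOENIX-3826 amounts to logging the creation event without attaching a Throwable. A minimal sketch with java.util.logging follows; Phoenix itself uses a different logging facade, so treat the API choice as illustrative only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class ConnectionLogDemo {
    static final Logger LOG = Logger.getLogger("ConnectionQueryServices");
    static final List<LogRecord> RECORDS = new ArrayList<>();

    public static void main(String[] args) {
        // Capture log records instead of printing, so the difference is visible.
        LOG.setUseParentHandlers(false);
        LOG.addHandler(new Handler() {
            @Override public void publish(LogRecord r) { RECORDS.add(r); }
            @Override public void flush() {}
            @Override public void close() {}
        });

        // Before the fix: passing a Throwable makes handlers render the
        // whole stack trace at INFO level.
        LOG.log(Level.INFO, "An instance of ConnectionQueryServices was created", new Exception());

        // After the fix: logging just the message records the same event
        // as a single line, with no Throwable attached.
        LOG.info("An instance of ConnectionQueryServices was created");
    }
}
```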
[jira] [Created] (PHOENIX-3802) NPE with PRowImpl.toRowMutations(PTableImpl.java)
Loknath Priyatham Teja Singamsetty created PHOENIX-3802: Summary: NPE with PRowImpl.toRowMutations(PTableImpl.java) Key: PHOENIX-3802 URL: https://issues.apache.org/jira/browse/PHOENIX-3802 Project: Phoenix Issue Type: Bug Affects Versions: 4.10.0, 4.10.1 Reporter: Loknath Priyatham Teja Singamsetty Fix For: 4.10.0, 4.11.0, 4.10.1 Caused by: org.apache.phoenix.execute.CommitException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 2 actions: org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON DUPLICATE IGNORE for COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x00D5B00SMgzx): null at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89) at org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818) at org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605) at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802) at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NullPointerException at org.apache.phoenix.schema.PTableImpl$PRowImpl.toRowMutations(PTableImpl.java:910) at org.apache.phoenix.index.PhoenixIndexBuilder.executeAtomicOp(PhoenixIndexBuilder.java:246) at org.apache.phoenix.hbase.index.builder.IndexBuildManager.executeAtomicOp(IndexBuildManager.java:187) at org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:213) ... 15 more : 1 time, org.apache.hadoop.hbase.DoNotRetryIOException: Unable to process ON DUPLICATE IGNORE for COMMUNITIES.TOP_ENTITY(00DT000Dpvc000RF\x000TOB00010ic0D5B00SMgzx): null at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:89) at org.apache.phoenix.hbase.index.Indexer.preIncrementAfterRowLock(Indexer.java:234) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$47.call(RegionCoprocessorHost.java:1241) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1670) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preIncrementAfterRowLock(RegionCoprocessorHost.java:1236) at org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:5818) at org.apache.hadoop.hbase.regionserver.HRegionServer.increment(HRegionServer.java:4605) at org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3802) at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3693) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32500) at 
org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2210) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104) at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133) at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NullPointerException at org.apache.phoenix.schema.PTableImpl$PRowImpl.toRowMutations(PTableImpl.java:910) at
[jira] [Assigned] (PHOENIX-3773) Implement FIRST_VALUES aggregate function
[ https://issues.apache.org/jira/browse/PHOENIX-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty reassigned PHOENIX-3773: Assignee: Loknath Priyatham Teja Singamsetty > Implement FIRST_VALUES aggregate function > - > > Key: PHOENIX-3773 > URL: https://issues.apache.org/jira/browse/PHOENIX-3773 > Project: Phoenix > Issue Type: Bug >Reporter: James Taylor >Assignee: Loknath Priyatham Teja Singamsetty > Labels: SFDC > > Similar to FIRST_VALUE, but would allow the user to specify how many values > to keep. This could use a MinMaxPriorityQueue under the covers and be much > more efficient than using multiple NTH_VALUE calls to do the same like this: > {code} > SELECT entity_id, >NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth1_user_id, >NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth2_user_id, >NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as > nth3_user_id, >count(*) > FROM MY_TABLE > WHERE tenant_id='00Dx000' > AND entity_id in ('0D5x00ABCD','0D5x00ABCE') > GROUP BY entity_id; > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
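The single-pass, bounded-queue approach suggested for FIRST_VALUES can be sketched with a plain JDK min-heap. TopN below is a hypothetical helper illustrating the idea, not the Phoenix aggregator:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Keeps only the n greatest elements seen so far -- the same bookkeeping a
// MinMaxPriorityQueue-backed FIRST_VALUES aggregator would do per group.
class TopN<T> {
    private final int n;
    private final Comparator<T> cmp;
    private final PriorityQueue<T> heap; // min-heap: root is the weakest kept value

    TopN(int n, Comparator<T> cmp) {
        this.n = n;
        this.cmp = cmp;
        this.heap = new PriorityQueue<>(cmp);
    }

    void offer(T value) {
        heap.offer(value);
        if (heap.size() > n) {
            heap.poll(); // evict the smallest so only the top n survive
        }
    }

    List<T> values() {
        List<T> out = new ArrayList<>(heap);
        out.sort(Collections.reverseOrder(cmp)); // strongest value first
        return out;
    }
}

public class FirstValuesSketch {
    public static void main(String[] args) {
        // e.g. keep the values with the 3 latest last_read_date ordinals
        TopN<Integer> top3 = new TopN<>(3, Comparator.naturalOrder());
        for (int v : new int[] {7, 1, 9, 4, 8, 2}) {
            top3.offer(v);
        }
        System.out.println(top3.values()); // [9, 8, 7]
    }
}
```

Each group's state stays at O(n) values and each row costs O(log n), instead of evaluating several independent NTH_VALUE aggregates over the same ordering.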
[jira] [Updated] (PHOENIX-3795) SELECT query returns inconsistent results when the output result set fields are given
[ https://issues.apache.org/jira/browse/PHOENIX-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3795: - Description: CREATE TABLE IF NOT EXISTS TEST.TEST ( O_ID CHAR(15) NOT NULL, F_E_ID CHAR(15) NOT NULL, U_ID CHAR(15) NOT NULL, L_R_D TIMESTAMP NULL, F_E_R_ID CHAR(15), N_ID CHAR(15) CONSTRAINT PKVIEW PRIMARY KEY ( O_ID, F_E_ID, U_ID ) ) VERSIONS=1,MULTI_TENANT=TRUE,IMMUTABLE_ROWS=TRUE,REPLICATION_SCOPE=1 This table maintains information of the date when the user last read the feed element. Note this is an immutable table and the app does upserts. Queries: Select F_E_ID, U_ID, L_R_D from TEST.TEST where O_ID = 'X' and F_E_ID = 'Y'; Results in 6 records (as shown in the pic above). This is the expected result. 2.Select F_E_ID, U_ID, L_R_D from TEST.TEST where O_ID = 'X' and F_E_ID = 'Y' order by L_R_D DESC; was: CREATE TABLE IF NOT EXISTS FEEDS.FEED_ENTITY_READ ( ORGANIZATION_ID CHAR(15) NOT NULL, FEED_ENTITY_ID CHAR(15) NOT NULL, USER_ID CHAR(15) NOT NULL, LAST_READ_DATE TIMESTAMP NULL, FEED_ENTITY_READ_ID CHAR(15), NETWORK_ID CHAR(15) CONSTRAINT PKVIEW PRIMARY KEY ( ORGANIZATION_ID, FEED_ENTITY_ID, USER_ID ) ) VERSIONS=1,MULTI_TENANT=TRUE,IMMUTABLE_ROWS=TRUE,REPLICATION_SCOPE=1 This table maintains information of the date when the user last read the feed element. Note this is an immutable table and the app does upserts. Queries: Select FEED_ENTITY_ID, USER_ID, LAST_READ_DATE from FEEDS.FEED_ENTITY_READ where ORGANIZATION_ID = 'X' and FEED_ENTITY_ID = 'Y'; Results in 6 records (as shown in the pic above). This is the expected result. 
2.Select FEED_ENTITY_ID, USER_ID, LAST_READ_DATE from FEEDS.FEED_ENTITY_READ where ORGANIZATION_ID = 'X' and FEED_ENTITY_ID = 'Y' order by LAST_READ_DATE DESC; > SELECT query returns inconsistent results when the output result set fields > are given > -- > > Key: PHOENIX-3795 > URL: https://issues.apache.org/jira/browse/PHOENIX-3795 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.10.0 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty > > CREATE TABLE IF NOT EXISTS TEST.TEST > ( > O_ID CHAR(15) NOT NULL, > F_E_ID CHAR(15) NOT NULL, > U_ID CHAR(15) NOT NULL, > L_R_D TIMESTAMP NULL, > F_E_R_ID CHAR(15), > N_ID CHAR(15) > CONSTRAINT PKVIEW PRIMARY KEY > ( > O_ID, > F_E_ID, > U_ID > ) > ) VERSIONS=1,MULTI_TENANT=TRUE,IMMUTABLE_ROWS=TRUE,REPLICATION_SCOPE=1 > This table maintains information of the date when the user last read the feed > element. Note this is an immutable table and the app does upserts. > Queries: > Select F_E_ID, U_ID, L_R_D from TEST.TEST where O_ID = 'X' and F_E_ID = 'Y'; > Results in 6 records (as shown in the pic above). This is the expected result. > 2.Select F_E_ID, U_ID, L_R_D from TEST.TEST where O_ID = 'X' and F_E_ID = 'Y' > order by L_R_D DESC; -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (PHOENIX-3795) SELECT query returns inconsistent results when the output result set fields are given
[ https://issues.apache.org/jira/browse/PHOENIX-3795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty reassigned PHOENIX-3795: Assignee: Loknath Priyatham Teja Singamsetty > SELECT query returns inconsistent results when the output result set fields > are given > -- > > Key: PHOENIX-3795 > URL: https://issues.apache.org/jira/browse/PHOENIX-3795 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.10.0 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty > > CREATE TABLE IF NOT EXISTS FEEDS.FEED_ENTITY_READ > ( > ORGANIZATION_ID CHAR(15) NOT NULL, > FEED_ENTITY_ID CHAR(15) NOT NULL, > USER_ID CHAR(15) NOT NULL, > LAST_READ_DATE TIMESTAMP NULL, > FEED_ENTITY_READ_ID CHAR(15), > NETWORK_ID CHAR(15) > CONSTRAINT PKVIEW PRIMARY KEY > ( > ORGANIZATION_ID, > FEED_ENTITY_ID, > USER_ID > ) > ) VERSIONS=1,MULTI_TENANT=TRUE,IMMUTABLE_ROWS=TRUE,REPLICATION_SCOPE=1 > This table maintains information of the date when the user last read the feed > element. Note this is an immutable table and the app does upserts. > Queries: > Select FEED_ENTITY_ID, USER_ID, LAST_READ_DATE from FEEDS.FEED_ENTITY_READ > where ORGANIZATION_ID = 'X' and FEED_ENTITY_ID = 'Y'; > Results in 6 records (as shown in the pic above). This is the expected result. > 2.Select FEED_ENTITY_ID, USER_ID, LAST_READ_DATE from FEEDS.FEED_ENTITY_READ > where ORGANIZATION_ID = 'X' and FEED_ENTITY_ID = 'Y' order by LAST_READ_DATE > DESC; -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (PHOENIX-3795) SELECT query returns inconsistent results when the output result set fields are given
Loknath Priyatham Teja Singamsetty created PHOENIX-3795: Summary: SELECT query returns inconsistent results when the output result set fields are given Key: PHOENIX-3795 URL: https://issues.apache.org/jira/browse/PHOENIX-3795 Project: Phoenix Issue Type: Bug Affects Versions: 4.10.0 Reporter: Loknath Priyatham Teja Singamsetty CREATE TABLE IF NOT EXISTS FEEDS.FEED_ENTITY_READ ( ORGANIZATION_ID CHAR(15) NOT NULL, FEED_ENTITY_ID CHAR(15) NOT NULL, USER_ID CHAR(15) NOT NULL, LAST_READ_DATE TIMESTAMP NULL, FEED_ENTITY_READ_ID CHAR(15), NETWORK_ID CHAR(15) CONSTRAINT PKVIEW PRIMARY KEY ( ORGANIZATION_ID, FEED_ENTITY_ID, USER_ID ) ) VERSIONS=1,MULTI_TENANT=TRUE,IMMUTABLE_ROWS=TRUE,REPLICATION_SCOPE=1 This table maintains information of the date when the user last read the feed element. Note this is an immutable table and the app does upserts. Queries: Select FEED_ENTITY_ID, USER_ID, LAST_READ_DATE from FEEDS.FEED_ENTITY_READ where ORGANIZATION_ID = 'X' and FEED_ENTITY_ID = 'Y'; Results in 6 records (as shown in the pic above). This is the expected result. 2.Select FEED_ENTITY_ID, USER_ID, LAST_READ_DATE from FEEDS.FEED_ENTITY_READ where ORGANIZATION_ID = 'X' and FEED_ENTITY_ID = 'Y' order by LAST_READ_DATE DESC; -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3511) Async Secondary index MR job fails for large data > 200 M records
[ https://issues.apache.org/jira/browse/PHOENIX-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15972065#comment-15972065 ] Loknath Priyatham Teja Singamsetty commented on PHOENIX-3511: -- [~jamestaylor] I am not able to reproduce this scenario. It looks like the lease renewal logic is working. We have applied these settings in SFDC. Let's see if these changes are good to be checked into open source. > Async Secondary index MR job fails for large data > 200 M records > - > > Key: PHOENIX-3511 > URL: https://issues.apache.org/jira/browse/PHOENIX-3511 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.9.0 >Reporter: Loknath Priyatham Teja Singamsetty >Assignee: Loknath Priyatham Teja Singamsetty > Fix For: 4.11.0 > > Attachments: phoenix-3511.patch, phoenix-3511-v2.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (PHOENIX-3777) NTH_VALUE() function with multiple where clause filters on primary key components with GROUP BY is returning results for first grouped set and not for all grouped sets
[ https://issues.apache.org/jira/browse/PHOENIX-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Loknath Priyatham Teja Singamsetty updated PHOENIX-3777: - Fix Version/s: 4.10.1 4.11.0 Description: Here is the reproducible case. The following query is failing: SELECT entity_id, NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as nth1_user_id, NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as nth2_user_id, NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as nth3_user_id, count(*) FROM TEST.TEST WHERE id='00Dx00091CU' AND entity_id in ('0D5x006ARCN','0D5x006AQrO') GROUP BY entity_id; Current Output: +-+-+-+-+---+ | ENTITY_ID | NTH1_USER_ID| NTH2_USER_ID | NTH3_USER_ID | COUNT | +-+-+-+-+---+ | 0D5x006AQrO | 005x000ZSX0 | 005x000ZSWz| 005x000ZSWy | 50| | 0D5x006ARCN | 005x000ZSX0 || | 50| +-+-+-+-+---+ Expected Output: == +-+-+-+-+---+ | ENTITY_ID | NTH1_USER_ID| NTH2_USER_ID | NTH3_USER_ID | COUNT | +-+-+-+-+---+ | 0D5x006AQrO | 005x000ZSX0 | 005x000ZSWz| 005x000ZSWy | 50| | 0D5x006ARCN | 005x000ZSX0 | 005x000ZSWy| 005x000ZSWy | 50| +-+-+-+-+---+ QUERY PLAN: CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY SKIP SCAN ON 2 KEYS OVER FEEDS.FEED_ENTITY_READ ['00Dx00091CU','0D5x006AQrO'] - ['00Dx00091CU','0D5x006ARCN’] SERVER AGGREGATE INTO ORDERED DISTINCT ROWS BY [FEED_ENTITY_ID] Schema: CREATE TABLE IF NOT EXISTS TEST.TEST ( ID CHAR(15) NOT NULL, ENTITY_ID CHAR(15) NOT NULL, USER_ID CHAR(15) NOT NULL, LAST_READ_DATE TIMESTAMP NULL, ENTITY_READ_ID CHAR(15) CONSTRAINT PKVIEW PRIMARY KEY ( ID, ENTITY_ID, USER_ID ) ) VERSIONS=1,MULTI_TENANT=TRUE,REPLICATION_SCOPE=1 was: Here is the reproducible case. 
The following query is failing: SELECT entity_id, NTH_VALUE(user_id,1) WITHIN GROUP (ORDER BY last_read_date DESC) as nth1_user_id, NTH_VALUE(user_id,2) WITHIN GROUP (ORDER BY last_read_date DESC) as nth2_user_id, NTH_VALUE(user_id,3) WITHIN GROUP (ORDER BY last_read_date DESC) as nth3_user_id, count(*) FROM TEST.TEST WHERE id='00Dx00091CU' AND entity_id in ('0D5x006ARCN','0D5x006AQrO') GROUP BY entity_id; Current Output: +-+-+-+-+---+ | ENTITY_ID | NTH1_USER_ID| NTH2_USER_ID | NTH3_USER_ID | COUNT | +-+-+-+-+---+ | 0D5x006AQrO | 005x000ZSX0 | 005x000ZSWz| 005x000ZSWy | 50| | 0D5x006ARCN | 005x000ZSX0 || | 50| +-+-+-+-+---+ Expected Output: == +-+-+-+-+---+ | ENTITY_ID | NTH1_USER_ID| NTH2_USER_ID | NTH3_USER_ID | COUNT | +-+-+-+-+---+ | 0D5x006AQrO | 005x000ZSX0 | 005x000ZSWz| 005x000ZSWy | 50| | 0D5x006ARCN | 005x000ZSX0 | 005x000ZSWy| 005x000ZSWy | 50| +-+-+-+-+---+ QUERY PLAN: CLIENT 1-CHUNK 0 ROWS 0 BYTES PARALLEL 1-WAY SKIP SCAN ON 2 KEYS OVER FEEDS.FEED_ENTITY_READ ['00Dx00091CU','0D5x006AQrO'] - ['00Dx00091CU','0D5x006ARCN’] SERVER AGGREGATE INTO ORDERED DISTINCT ROWS BY [FEED_ENTITY_ID] Schema: CREATE TABLE IF NOT EXISTS TEST.TEST ( ID CHAR(15) NOT NULL, ENTITY_ID CHAR(15) NOT NULL, USER_ID CHAR(15) NOT NULL, LAST_READ_DATE TIMESTAMP NULL, ENTITY_READ_ID CHAR(15) CONSTRAINT PKVIEW PRIMARY KEY ( ID, ENTITY_ID, USER_ID ) ) VERSIONS=1,MULTI_TENANT=TRUE,REPLICATION_SCOPE=1 > NTH_VALUE() function with multiple where clause filters on primary key > components with GROUP BY is returning results for first grouped set and not > for all grouped sets >
[jira] [Updated] (PHOENIX-3777) NTH_VALUE() function with multiple where clause filters on primary key components with GROUP BY is returning results for first grouped set and not for all grouped sets
[ https://issues.apache.org/jira/browse/PHOENIX-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Loknath Priyatham Teja Singamsetty updated PHOENIX-3777:
--------------------------------------------------------
    Attachment: PHOENIX-3777.patch

> NTH_VALUE() function with multiple where clause filters on primary key components with GROUP BY is returning results for first grouped set and not for all grouped sets
>
> Key: PHOENIX-3777
> URL: https://issues.apache.org/jira/browse/PHOENIX-3777
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.10.0
> Reporter: Loknath Priyatham Teja Singamsetty
> Assignee: Loknath Priyatham Teja Singamsetty
> Attachments: PHOENIX-3777.patch

-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (PHOENIX-3777) NTH_VALUE() function with multiple where clause filters on primary key components with GROUP BY is returning results for first grouped set and not for all grouped sets
[ https://issues.apache.org/jira/browse/PHOENIX-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15970514#comment-15970514 ]

Loknath Priyatham Teja Singamsetty commented on PHOENIX-3777:
-------------------------------------------------------------
Yes [~jamestaylor]. I have written a test case, which passes. Attached the patch here.

> NTH_VALUE() function with multiple where clause filters on primary key components with GROUP BY is returning results for first grouped set and not for all grouped sets
>
> Key: PHOENIX-3777
> URL: https://issues.apache.org/jira/browse/PHOENIX-3777
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.10.0
> Reporter: Loknath Priyatham Teja Singamsetty
> Assignee: Loknath Priyatham Teja Singamsetty

-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (PHOENIX-3777) NTH_VALUE() function with multiple where clause filters on primary key components with GROUP BY is returning results for first grouped set and not for all grouped sets
[ https://issues.apache.org/jira/browse/PHOENIX-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Loknath Priyatham Teja Singamsetty reassigned PHOENIX-3777:
-----------------------------------------------------------
    Assignee: Loknath Priyatham Teja Singamsetty

> NTH_VALUE() function with multiple where clause filters on primary key components with GROUP BY is returning results for first grouped set and not for all grouped sets
>
> Key: PHOENIX-3777
> URL: https://issues.apache.org/jira/browse/PHOENIX-3777
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.10.0
> Reporter: Loknath Priyatham Teja Singamsetty
> Assignee: Loknath Priyatham Teja Singamsetty

-- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (PHOENIX-3777) NTH_VALUE() function with multiple where clause filters on primary key components with GROUP BY is returning results for first grouped set and not for all grouped sets
Loknath Priyatham Teja Singamsetty created PHOENIX-3777:
--------------------------------------------------------
    Summary: NTH_VALUE() function with multiple where clause filters on primary key components with GROUP BY is returning results for first grouped set and not for all grouped sets
    Key: PHOENIX-3777
    URL: https://issues.apache.org/jira/browse/PHOENIX-3777
    Project: Phoenix
    Issue Type: Bug
    Affects Versions: 4.10.0
    Reporter: Loknath Priyatham Teja Singamsetty

-- This message was sent by Atlassian JIRA (v6.3.15#6346)
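The reported symptom (only the first grouped set gets its nth values populated) is the kind of behavior one would see if the nth-value aggregator's ordered buffer were carried across group boundaries instead of being reset per group. The following is a hypothetical toy illustration of that failure mode, not the actual Phoenix client aggregator; all class and function names are made up:

```python
# Toy streaming grouped nth-value aggregation. The `reset_per_group` flag shows
# how stale aggregator state corrupts every group after the first one.
class NthValueAggregator:
    """Toy nth-value aggregator; reset() must run at every group boundary."""
    def __init__(self, n):
        self.n = n
        self.buffer = []  # (order_key, value) pairs seen so far

    def aggregate(self, order_key, value):
        self.buffer.append((order_key, value))

    def evaluate(self):
        ranked = sorted(self.buffer, key=lambda p: p[0], reverse=True)
        return ranked[self.n - 1][1] if len(ranked) >= self.n else None

    def reset(self):
        self.buffer.clear()

def grouped_nth(rows, n, reset_per_group=True):
    """rows: (group_key, order_key, value) tuples, pre-sorted by group_key."""
    agg, out, current = NthValueAggregator(n), {}, None
    for key, order_key, value in rows:
        if key != current:
            if current is not None:
                out[current] = agg.evaluate()
            if reset_per_group:
                agg.reset()  # without this, later groups see earlier groups' rows
            current = key
        agg.aggregate(order_key, value)
    if current is not None:
        out[current] = agg.evaluate()
    return out
```

With the reset in place each group evaluates against only its own rows; without it, group "B" below is ranked against group "A"'s leftover buffer and returns a value from the wrong group, mirroring the one-group-correct, rest-wrong shape of the bug report.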
[jira] [Commented] (PHOENIX-3511) Async Secondary index MR job fails for large data > 200 M records
[ https://issues.apache.org/jira/browse/PHOENIX-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724276#comment-15724276 ]

Loknath Priyatham Teja Singamsetty commented on PHOENIX-3511:
-------------------------------------------------------------
Sure James. It will take a while to reproduce this scenario with default settings in our internal cluster.

> Async Secondary index MR job fails for large data > 200 M records
> -----------------------------------------------------------------
>
> Key: PHOENIX-3511
> URL: https://issues.apache.org/jira/browse/PHOENIX-3511
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.9.0
> Reporter: Loknath Priyatham Teja Singamsetty
> Assignee: Loknath Priyatham Teja Singamsetty
> Fix For: 4.9.0, 4.9.1
> Attachments: phoenix-3511-v2.patch, phoenix-3511.patch

-- This message was sent by Atlassian JIRA (v6.3.4#6332)