[jira] [Updated] (PHOENIX-5125) Some tests fail after PHOENIX-4009

2019-02-04 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5125:
---
Issue Type: Test  (was: Bug)

> Some tests fail after PHOENIX-4009
> --
>
> Key: PHOENIX-5125
> URL: https://issues.apache.org/jira/browse/PHOENIX-5125
> Project: Phoenix
>  Issue Type: Test
>Reporter: Lars Hofhansl
>Priority: Major
>
> * NoOpStatsCollectorIT.testStatsCollectionDuringMajorCompaction
> * SpillableGroupByIT.testStatisticsAreNotWritten
> [~karanmehta93], FYI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5092) Client dies when initial regions of Data table move between Region Servers

2019-02-04 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5092:

Labels: SFDC  (was: )

> Client dies when initial regions of Data table move between 
> Region Servers
> -
>
> Key: PHOENIX-5092
> URL: https://issues.apache.org/jira/browse/PHOENIX-5092
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Monani Mihir
>Priority: Major
>  Labels: SFDC
>
> After starting a Pherf load against an unsalted table (with a few indexes), a 
> Data table region will split and move to another region server. When the 
> region moves, the client (all of its threads) dies with the following exception:
> {code:java}
> 2018-12-19 10:45:22,830 WARN [pool-8-thread-39] cache.ServerCacheClient - 
> Unable to remove hash cache for 
> [region=table1,1545216068270.0310dab896249506cb1de9b6badd7fa4., 
> hostname=phoenix-test1,60020,1545213555354, seqNum=40685]
> java.io.InterruptedIOException: Interrupted calling coprocessor service 
> org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService
>  for row 
> tenantABCDId2loginUserId0030F9X418G\x00messageTextId007419receipientId07420
> at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1787)
> at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1736)
> at 
> org.apache.phoenix.cache.ServerCacheClient.removeServerCache(ServerCacheClient.java:357)
> at 
> org.apache.phoenix.cache.ServerCacheClient.access$000(ServerCacheClient.java:85)
> at 
> org.apache.phoenix.cache.ServerCacheClient$ServerCache.close(ServerCacheClient.java:207)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1072)
> at org.apache.phoenix.execute.MutationState.send(MutationState.java:1350)
> at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1173)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:670)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:666)
> at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:297)
> at 
> org.apache.phoenix.pherf.workload.WriteWorkload$2.call(WriteWorkload.java:256)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.InterruptedException
> at java.util.concurrent.FutureTask.awaitDone(FutureTask.java:404)
> at java.util.concurrent.FutureTask.get(FutureTask.java:191)
> at org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:1780)
> ... 17 more
> [pool-8-thread-39] INFO org.apache.phoenix.execute.MutationState - Abort 
> successful
> {code}
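As background on the failure mode above, here is a minimal, self-contained Java sketch (hypothetical demo code, not Phoenix code; the class and method names are invented for illustration) of the JDK interruption contract: a blocking call cut short by Thread#interrupt throws InterruptedException, and the worker restores the interrupt flag rather than dying silently.

```java
public class InterruptDemo {
    // Stands in for the blocking coprocessor RPC in removeServerCache().
    static void removeCacheBlocking() throws InterruptedException {
        Thread.sleep(10_000);
    }

    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> {
            try {
                removeCacheBlocking();
            } catch (InterruptedException e) {
                // Restore the flag so callers further up can observe it.
                Thread.currentThread().interrupt();
                System.out.println("interrupted, flag="
                        + Thread.currentThread().isInterrupted());
            }
        });
        worker.start();
        worker.interrupt();   // cut the blocking call short
        worker.join();
    }
}
```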





[jira] [Updated] (PHOENIX-5124) PropertyPolicyProvider should not evaluate default hbase config properties

2019-02-04 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5124:

Attachment: PHOENIX-5124-4.x-HBase-1.3.patch

> PropertyPolicyProvider should not evaluate default hbase config properties
> --
>
> Key: PHOENIX-5124
> URL: https://issues.apache.org/jira/browse/PHOENIX-5124
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5124-4.x-HBase-1.3.patch
>
>






[jira] [Created] (PHOENIX-5125) Some tests fail after PHOENIX-4009

2019-02-04 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-5125:
--

 Summary: Some tests fail after PHOENIX-4009
 Key: PHOENIX-5125
 URL: https://issues.apache.org/jira/browse/PHOENIX-5125
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl


* NoOpStatsCollectorIT.testStatsCollectionDuringMajorCompaction
* SpillableGroupByIT.testStatisticsAreNotWritten

[~karanmehta93], FYI.





[jira] [Created] (PHOENIX-5124) PropertyPolicyProvider should not evaluate default hbase config properties

2019-02-04 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-5124:
---

 Summary: PropertyPolicyProvider should not evaluate default hbase 
config properties
 Key: PHOENIX-5124
 URL: https://issues.apache.org/jira/browse/PHOENIX-5124
 Project: Phoenix
  Issue Type: Bug
Reporter: Thomas D'Silva
Assignee: Thomas D'Silva
 Fix For: 4.15.0








[jira] [Updated] (PHOENIX-5108) Normalize column names while generating SELECT statement in the spark connector

2019-02-04 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5108:

Attachment: PHOENIX-5108-v2-HBase-1.3.patch

> Normalize column names while generating SELECT statement in the spark 
> connector
> ---
>
> Key: PHOENIX-5108
> URL: https://issues.apache.org/jira/browse/PHOENIX-5108
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: connectors-1.0.0
>
> Attachments: PHOENIX-5108-HBase-1.3.patch, 
> PHOENIX-5108-v2-HBase-1.3.patch
>
>






[jira] [Updated] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-02-04 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5122:

Fix Version/s: 4.14.2
   4.15.0

> PHOENIX-4322 breaks client backward compatibility
> -
>
> Key: PHOENIX-5122
> URL: https://issues.apache.org/jira/browse/PHOENIX-5122
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Jacob Isaac
>Priority: Blocker
> Fix For: 4.15.0, 4.14.2
>
>
> Scenario :
> *4.13 client -> 4.14.1 server*
> Connected to: Phoenix (version 4.13)
> Driver: PhoenixEmbeddedDriver (version 4.13)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 135/135 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.31 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.033 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +--+--+
> | OID | CODE |
> +--+--+
> +--+--+
> +*No rows selected (0.033 seconds)*+
> 0: jdbc:phoenix:localhost> select * from P_T02 ;
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.016 seconds)
> 0: jdbc:phoenix:localhost>
>  
> *4.14.1 client -> 4.14.1 server* 
> Connected to: Phoenix (version 4.14)
> Driver: PhoenixEmbeddedDriver (version 4.14)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 133/133 (100%) Done
> Done
> sqlline version 1.1.9
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
> No rows affected (1.273 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
> 'v0001');
> 1 row affected (0.056 seconds)
> 0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
> 'v0002');
> 1 row affected (0.004 seconds)
> 0: jdbc:phoenix:localhost> 
> 0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
> (('0001', 'v0001'), ('0002', 'v0002'));
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.051 seconds)
> 0: jdbc:phoenix:localhost> select * from P_T01 ;
> +--+--+
> | OID | CODE |
> +--+--+
> | 0002 | v0002 |
> | 0001 | v0001 |
> +--+--+
> 2 rows selected (0.017 seconds)
> 0: jdbc:phoenix:localhost>





[jira] [Updated] (PHOENIX-5120) Avoid using MappedByteBuffers for server side sorting.

2019-02-04 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5120:
---
Affects Version/s: 4.14.1

> Avoid using MappedByteBuffers for server side sorting.
> --
>
> Key: PHOENIX-5120
> URL: https://issues.apache.org/jira/browse/PHOENIX-5120
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.14.1
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Attachments: 5120-1.4-v2.txt, 5120-1.4-v3.txt, 5120-1.4-wip.txt, 
> 5120-1.4.txt, 5120-master-v3.txt, 5120-master-v5.txt, 5120-master-v6.txt, 
> 5120-master.txt
>
>
> We had a production outage due to this.
> MappedByteBuffers may leave files around; on top of that, they use direct 
> memory, which is not cleared until the JVM executes a full GC.
> See last comment on PHOENIX-2405.
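As a minimal illustration of the lifetime issue (hypothetical demo code, not the Phoenix patch): a MappedByteBuffer remains valid, and its direct memory remains allocated, after the FileChannel that created it is closed; the memory is only reclaimed when the buffer object itself is garbage-collected, typically during a full GC.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedBufferDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("spill", ".bin");
        MappedByteBuffer buf;
        try (FileChannel ch = FileChannel.open(tmp,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
        }
        // The mapping outlives the channel: reads and writes still work here,
        // and the 4 KB of direct memory stays allocated until GC collects buf.
        buf.putInt(0, 42);
        System.out.println(buf.getInt(0));
        Files.deleteIfExists(tmp); // on some platforms this fails while mapped
    }
}
```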





[jira] [Updated] (PHOENIX-5120) Avoid using MappedByteBuffers for server side sorting.

2019-02-04 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5120:
---
Attachment: 5120-master-v6.txt

> Avoid using MappedByteBuffers for server side sorting.
> --
>
> Key: PHOENIX-5120
> URL: https://issues.apache.org/jira/browse/PHOENIX-5120
> Project: Phoenix
>  Issue Type: Task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Attachments: 5120-1.4-v2.txt, 5120-1.4-v3.txt, 5120-1.4-wip.txt, 
> 5120-1.4.txt, 5120-master-v3.txt, 5120-master-v5.txt, 5120-master-v6.txt, 
> 5120-master.txt
>
>
> We had a production outage due to this.
> MappedByteBuffers may leave files around; on top of that, they use direct 
> memory, which is not cleared until the JVM executes a full GC.
> See last comment on PHOENIX-2405.





[jira] [Updated] (PHOENIX-374) Enable access to dynamic columns in * or cf.* selection

2019-02-04 Thread Chinmay Kulkarni (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-374:
-
Attachment: PHOENIX-374.patch

> Enable access to dynamic columns in * or cf.* selection
> ---
>
> Key: PHOENIX-374
> URL: https://issues.apache.org/jira/browse/PHOENIX-374
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: nicolas maillard
>Assignee: Chinmay Kulkarni
>Priority: Critical
> Attachments: PHOENIX-374.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> As of recent work we can now read and write columns that are not in the 
> schema, a.k.a. dynamic columns: SELECT and UPSERT allow dynamic columns to 
> be specified.
> I think two additions are still needed.
> - Alter dynamically: in an UPSERT and/or SELECT statement, the ability to add 
> the specified dynamic column to the schema. Say UPSERT INTO Table (key, 
> cf.dynColumn VARCHAR SCHEMAADD) VALUES (..),
> and for SELECT: 
>  - select key, cf.dynColumn varchar from T would only read
>  - select key from T(cf.dynColumn varchar) would read and also write to the 
> schema
> - Select a complete column family: more complex, accessing a whole column 
> family with all columns, whether known in the schema or not:
>  select cf.* from T
> Today this works for known columns; it would be nice to have it for all 
> columns of a family, in the schema or not. I'm trying right now to extend 
> this to the schema for unknown columns. However, every new row can contain a 
> lot of very different unknown columns. The defined ones come first, and the 
> unknown ones are appended at the end.
> This means the metadata might need to be updated at every row to account for 
> all new columns discovered.





[jira] [Updated] (PHOENIX-374) Enable access to dynamic columns in * or cf.* selection

2019-02-04 Thread Chinmay Kulkarni (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-374:
-
Attachment: (was: PHOENIX-374.patch)

> Enable access to dynamic columns in * or cf.* selection
> ---
>
> Key: PHOENIX-374
> URL: https://issues.apache.org/jira/browse/PHOENIX-374
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: nicolas maillard
>Assignee: Chinmay Kulkarni
>Priority: Critical
> Attachments: PHOENIX-374.patch
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> As of recent work we can now read and write columns that are not in the 
> schema, a.k.a. dynamic columns: SELECT and UPSERT allow dynamic columns to 
> be specified.
> I think two additions are still needed.
> - Alter dynamically: in an UPSERT and/or SELECT statement, the ability to add 
> the specified dynamic column to the schema. Say UPSERT INTO Table (key, 
> cf.dynColumn VARCHAR SCHEMAADD) VALUES (..),
> and for SELECT: 
>  - select key, cf.dynColumn varchar from T would only read
>  - select key from T(cf.dynColumn varchar) would read and also write to the 
> schema
> - Select a complete column family: more complex, accessing a whole column 
> family with all columns, whether known in the schema or not:
>  select cf.* from T
> Today this works for known columns; it would be nice to have it for all 
> columns of a family, in the schema or not. I'm trying right now to extend 
> this to the schema for unknown columns. However, every new row can contain a 
> lot of very different unknown columns. The defined ones come first, and the 
> unknown ones are appended at the end.
> This means the metadata might need to be updated at every row to account for 
> all new columns discovered.





[jira] [Updated] (PHOENIX-5105) Push Filter through Sort for SortMergeJoin

2019-02-04 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5105:
--
Attachment: PHOENIX-5015_v3-4.x-HBase-1.4.patch

> Push Filter through Sort for SortMergeJoin
> --
>
> Key: PHOENIX-5105
> URL: https://issues.apache.org/jira/browse/PHOENIX-5105
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5015-4.x-HBase-1.4.patch, 
> PHOENIX-5015_v2-4.x-HBase-1.4.patch, PHOENIX-5015_v3-4.x-HBase-1.4.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Given two tables:
> {code:java}
>   CREATE TABLE merge1 ( 
> aid INTEGER PRIMARY KEY,
> age INTEGER)
>   
>   CREATE TABLE merge2  ( 
> bid INTEGER PRIMARY KEY,
> code INTEGER)
> {code}
> for following sql :
> {code:java}
> select /*+ USE_SORT_MERGE_JOIN */ a.aid,b.code from 
> (select aid,age from merge1 where age >=11 and age<=33 order by age limit 3) 
> a inner join 
> (select bid,code from merge2 order by code limit 1) b on a.aid=b.bid where 
> b.code > 50
> {code}
> For the RHS of the SortMergeJoin, the WHERE condition {{b.code > 50}} is 
> first pushed down to the RHS as its {{JoinCompiler.Table.postFilters}}; then 
> {{order by b.bid}} is appended to the RHS, which is rewritten as 
>  {{select bid,code from (select bid,code from merge2 order by code limit 1) 
> order by bid}}
> by line 211 below in {{QueryCompiler.compileJoinQuery}}.
> Next, the rewritten SQL is compiled to a ClientScanPlan by line 221 below, 
> and the previously pushed-down {{b.code > 50}} is compiled by the 
> {{table.compilePostFilterExpression}} method in line 224 below to filter 
> the result of the preceding ClientScanPlan. The problem is that we execute 
> the {{order by bid}} first and then the postFilter {{b.code > 50}}, which is 
> obviously inefficient. In fact, we can directly rewrite the RHS as 
>  {{select bid,code from (select bid,code from merge2 order by code limit 1) 
> where code > 50 order by bid}} 
>  to first filter on {{b.code > 50}} and then execute the {{order by bid}}.
> {code:java}
> 208protected QueryPlan compileJoinQuery(StatementContext context, 
> List binds, JoinTable joinTable, boolean asSubquery, boolean 
> projectPKColumns, List orderBy) throws SQLException {
> 209 if (joinTable.getJoinSpecs().isEmpty()) {
> 210  Table table = joinTable.getTable();
> 211   SelectStatement subquery = table.getAsSubquery(orderBy);
> 212  if (!table.isSubselect()) {
> 213  context.setCurrentTable(table.getTableRef());
> 214  PTable projectedTable = 
> table.createProjectedTable(!projectPKColumns, context);
> 215  TupleProjector projector = new 
> TupleProjector(projectedTable);
> 216  
> TupleProjector.serializeProjectorIntoScan(context.getScan(), projector);
> 217  
> context.setResolver(FromCompiler.getResolverForProjectedTable(projectedTable, 
> context.getConnection(), subquery.getUdfParseNodes()));
> 218  table.projectColumns(context.getScan());
> 219  return compileSingleFlatQuery(context, subquery, binds, 
> asSubquery, !asSubquery, null, projectPKColumns ? projector : null, true);
> 220}
> 221QueryPlan plan = compileSubquery(subquery, false);
> 222PTable projectedTable = 
> table.createProjectedTable(plan.getProjector());
> 223
> context.setResolver(FromCompiler.getResolverForProjectedTable(projectedTable, 
> context.getConnection(), subquery.getUdfParseNodes()));
> 224return new TupleProjectionPlan(plan, new 
> TupleProjector(plan.getProjector()), 
> table.compilePostFilterExpression(context));
> 225}
> {code}
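The rewrite's benefit can be sketched outside Phoenix with plain Java collections (hypothetical demo code, not the actual plan code): applying the postFilter before the sort means the sort only touches the rows that survive the filter.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class FilterThroughSort {
    public static void main(String[] args) {
        // Each row is {bid, code}, mimicking the RHS of the example join.
        List<int[]> rows = Arrays.asList(
                new int[]{3, 60}, new int[]{1, 40}, new int[]{2, 70});
        // Inefficient order: sort all rows by bid, then apply code > 50.
        // Efficient order (the proposed rewrite): filter first, then sort.
        List<int[]> result = rows.stream()
                .filter(r -> r[1] > 50)                      // postFilter pushed below the sort
                .sorted(Comparator.comparingInt((int[] r) -> r[0]))
                .collect(Collectors.toList());
        for (int[] r : result) {
            System.out.println(r[0] + "," + r[1]);
        }
    }
}
```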





[jira] [Created] (PHOENIX-5123) Avoid using MappedByteBuffers for server side GROUP BY

2019-02-04 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-5123:
--

 Summary: Avoid using MappedByteBuffers for server side GROUP BY
 Key: PHOENIX-5123
 URL: https://issues.apache.org/jira/browse/PHOENIX-5123
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl


Like PHOENIX-5120 but for GROUP BY.
The solution is a bit trickier, since unlike the sorting case, the access here 
is truly random.
[~apurtell] suggests perhaps just using a RandomAccessFile for this.
(I'm not sure what that uses under the hood, though.)
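A minimal sketch of the RandomAccessFile idea (hypothetical demo code, not a proposed patch): seek() supports reads at arbitrary offsets through ordinary I/O, with no mapped direct memory left behind after close().

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class RandomAccessDemo {
    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("groupby", ".bin");
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "rw")) {
            // Write ten 4-byte records: 0, 1, 4, 9, ..., 81.
            for (int i = 0; i < 10; i++) {
                raf.writeInt(i * i);
            }
            raf.seek(7 * 4);                   // jump to an arbitrary record
            System.out.println(raf.readInt()); // record 7 -> 49
            raf.seek(2 * 4);                   // jump backwards just as easily
            System.out.println(raf.readInt()); // record 2 -> 4
        }
        Files.deleteIfExists(tmp); // nothing lingers once the file is closed
    }
}
```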





[jira] [Updated] (PHOENIX-5121) Move unnecessary sorting and fetching out of loop

2019-02-04 Thread Aman Poonia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aman Poonia updated PHOENIX-5121:
-
Attachment: PHOENIX-5121.patch

> Move unnecessary sorting and fetching out of loop
> -
>
> Key: PHOENIX-5121
> URL: https://issues.apache.org/jira/browse/PHOENIX-5121
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Minor
> Attachments: PHOENIX-5121.patch
>
>
> Don't fetch and sort PK columns of a table inside loop in 
> PhoenixDatabaseMetaData#getPrimaryKeys
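The change can be illustrated with a generic loop-invariant hoisting sketch (hypothetical code; fetchPkColumns here merely stands in for the expensive metadata call): the fetch-and-sort runs once, outside the loop, instead of once per row.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class HoistDemo {
    // Stands in for the per-table PK-column lookup.
    static List<String> fetchPkColumns() {
        return new ArrayList<>(Arrays.asList("B", "A"));
    }

    public static void main(String[] args) {
        // Before the fix, fetchPkColumns() + sort ran inside the loop,
        // once per row. After: compute once, reuse for every row.
        List<String> pk = fetchPkColumns();
        Collections.sort(pk);
        for (int row = 0; row < 3; row++) {
            System.out.println(row + ":" + pk);
        }
    }
}
```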





[jira] [Created] (PHOENIX-5121) Move unnecessary sorting and fetching out of loop

2019-02-04 Thread Aman Poonia (JIRA)
Aman Poonia created PHOENIX-5121:


 Summary: Move unnecessary sorting and fetching out of loop
 Key: PHOENIX-5121
 URL: https://issues.apache.org/jira/browse/PHOENIX-5121
 Project: Phoenix
  Issue Type: Improvement
Reporter: Aman Poonia
Assignee: Aman Poonia


Don't fetch and sort PK columns of a table inside loop in 
PhoenixDatabaseMetaData#getPrimaryKeys





[jira] [Updated] (PHOENIX-5120) Avoid using MappedByteBuffers for server side sorting.

2019-02-04 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5120:
---
Attachment: 5120-master-v5.txt

> Avoid using MappedByteBuffers for server side sorting.
> --
>
> Key: PHOENIX-5120
> URL: https://issues.apache.org/jira/browse/PHOENIX-5120
> Project: Phoenix
>  Issue Type: Task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Attachments: 5120-1.4-v2.txt, 5120-1.4-v3.txt, 5120-1.4-wip.txt, 
> 5120-1.4.txt, 5120-master-v3.txt, 5120-master-v5.txt, 5120-master.txt
>
>
> We had a production outage due to this.
> MappedByteBuffers may leave files around; on top of that, they use direct 
> memory, which is not cleared until the JVM executes a full GC.
> See last comment on PHOENIX-2405.





[jira] [Updated] (PHOENIX-5120) Avoid using MappedByteBuffers for server side sorting.

2019-02-04 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5120:
---
Attachment: 5120-master-v3.txt

> Avoid using MappedByteBuffers for server side sorting.
> --
>
> Key: PHOENIX-5120
> URL: https://issues.apache.org/jira/browse/PHOENIX-5120
> Project: Phoenix
>  Issue Type: Task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Attachments: 5120-1.4-v2.txt, 5120-1.4-v3.txt, 5120-1.4-wip.txt, 
> 5120-1.4.txt, 5120-master-v3.txt, 5120-master.txt
>
>
> We had a production outage due to this.
> MappedByteBuffers may leave files around; on top of that, they use direct 
> memory, which is not cleared until the JVM executes a full GC.
> See last comment on PHOENIX-2405.





[jira] [Updated] (PHOENIX-5120) Avoid using MappedByteBuffers for server side sorting.

2019-02-04 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5120:
---
Attachment: 5120-1.4-v3.txt

> Avoid using MappedByteBuffers for server side sorting.
> --
>
> Key: PHOENIX-5120
> URL: https://issues.apache.org/jira/browse/PHOENIX-5120
> Project: Phoenix
>  Issue Type: Task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Attachments: 5120-1.4-v2.txt, 5120-1.4-v3.txt, 5120-1.4-wip.txt, 
> 5120-1.4.txt, 5120-master.txt
>
>
> We had a production outage due to this.
> MappedByteBuffers may leave files around; on top of that, they use direct 
> memory, which is not cleared until the JVM executes a full GC.
> See last comment on PHOENIX-2405.


