[jira] [Commented] (HAWQ-1458) Shared Input Scan QE hung in shareinput_reader_waitready().

2017-05-09 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003203#comment-16003203
 ] 

Kavinder Dhaliwal commented on HAWQ-1458:
-

[~abai] Do you have steps to reproduce this issue?

> Shared Input Scan QE hung in shareinput_reader_waitready().
> ---
>
> Key: HAWQ-1458
> URL: https://issues.apache.org/jira/browse/HAWQ-1458
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Amy
>Assignee: Amy
> Fix For: backlog
>
>
> The stack is as below:
> ```
> 4/13/17 6:12:32 AM PDT: stack of postgres process (pid 108464) on test4:
> 4/13/17 6:12:32 AM PDT: Thread 2 (Thread 0x7f7ca0c7b700 (LWP 108465)):
> 4/13/17 6:12:32 AM PDT: #0  0x0032214df283 in poll () from /lib64/libc.so.6
> 4/13/17 6:12:32 AM PDT: #1  0x0097e110 in rxThreadFunc ()
> 4/13/17 6:12:32 AM PDT: #2  0x003221807aa1 in start_thread () from /lib64/libpthread.so.0
> 4/13/17 6:12:32 AM PDT: #3  0x0032214e8aad in clone () from /lib64/libc.so.6
> 4/13/17 6:12:32 AM PDT: Thread 1 (Thread 0x7f7cc5d48920 (LWP 108464)):
> 4/13/17 6:12:32 AM PDT: #0  0x0032214e1523 in select () from /lib64/libc.so.6
> 4/13/17 6:12:32 AM PDT: #1  0x0069baaf in shareinput_reader_waitready ()
> 4/13/17 6:12:32 AM PDT: #2  0x0069be0d in ExecSliceDependencyShareInputScan ()
> 4/13/17 6:12:32 AM PDT: #3  0x0066eb40 in ExecSliceDependencyNode ()
> 4/13/17 6:12:32 AM PDT: #4  0x0066eaa5 in ExecSliceDependencyNode ()
> 4/13/17 6:12:32 AM PDT: #5  0x0066eaa5 in ExecSliceDependencyNode ()
> 4/13/17 6:12:32 AM PDT: #6  0x0066af41 in ExecutePlan ()
> 4/13/17 6:12:32 AM PDT: #7  0x0066bafa in ExecutorRun ()
> 4/13/17 6:12:32 AM PDT: #8  0x007f52aa in PortalRun ()
> 4/13/17 6:12:32 AM PDT: #9  0x007eb044 in exec_mpp_query ()
> 4/13/17 6:12:32 AM PDT: #10 0x007effb4 in PostgresMain ()
> 4/13/17 6:12:32 AM PDT: #11 0x007a04f0 in ServerLoop ()
> 4/13/17 6:12:32 AM PDT: #12 0x007a32b9 in PostmasterMain ()
> 4/13/17 6:12:32 AM PDT: #13 0x004a52b9 in main ()
> ```



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1440) Support Analyze for Hive External Tables

2017-05-05 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal closed HAWQ-1440.
---
Resolution: Fixed

> Support Analyze for Hive External Tables
> 
>
> Key: HAWQ-1440
> URL: https://issues.apache.org/jira/browse/HAWQ-1440
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.3.0.0-incubating
>
>
> Currently the ANALYZE command only functions against PXF HDFS external 
> tables. Implementing this command for PXF Hive external tables will improve 
> the statistics that HAWQ maintains about these tables and will improve the 
> types of plans generated by the query optimizer





[jira] [Assigned] (HAWQ-1440) Support Analyze for Hive External Tables

2017-04-25 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-1440:
---

Assignee: Kavinder Dhaliwal  (was: Ed Espino)

> Support Analyze for Hive External Tables
> 
>
> Key: HAWQ-1440
> URL: https://issues.apache.org/jira/browse/HAWQ-1440
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.3.0.0-incubating
>
>
> Currently the ANALYZE command only functions against PXF HDFS external 
> tables. Implementing this command for PXF Hive external tables will improve 
> the statistics that HAWQ maintains about these tables and will improve the 
> types of plans generated by the query optimizer





[jira] [Created] (HAWQ-1440) Support Analyze for Hive External Tables

2017-04-25 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-1440:
---

 Summary: Support Analyze for Hive External Tables
 Key: HAWQ-1440
 URL: https://issues.apache.org/jira/browse/HAWQ-1440
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


Currently the ANALYZE command only functions against PXF HDFS external tables. 
Implementing this command for PXF Hive external tables will improve the 
statistics that HAWQ maintains about these tables and will improve the types of 
plans generated by the query optimizer





[jira] [Commented] (HAWQ-1409) HAWQ send additional header to PXF to indicate aggregate function type

2017-03-24 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15940520#comment-15940520
 ] 

Kavinder Dhaliwal commented on HAWQ-1409:
-

Currently the design of this implementation only supports count, so a new 
header, AGG-TYPE, will be sent from HAWQ to PXF with one of the following values:

"count"
"unknown"

This simplifies the initial implementation.
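A hedged sketch of the header mapping described above (the function and its name are illustrative, not HAWQ's actual code):

```python
# Minimal sketch of choosing the AGG-TYPE header value: only count is
# recognized in the initial design; every other aggregate maps to "unknown".
def agg_type_header(aggregate_function):
    return "count" if aggregate_function.lower() == "count" else "unknown"

print(agg_type_header("COUNT"))  # count
print(agg_type_header("min"))    # unknown
```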

> HAWQ send additional header to PXF to indicate aggregate function type
> --
>
> Key: HAWQ-1409
> URL: https://issues.apache.org/jira/browse/HAWQ-1409
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Ed Espino
> Fix For: 2.3.0.0-incubating
>
>
> PXF can take advantage of some file formats, such as ORC, by leveraging the stats 
> in their metadata. This means that for simple aggregate functions like 
> count, min, and max, without any complex joins or filters, PXF can simply read 
> the metadata and avoid reading tuples. For PXF to know that a query can 
> be completed via ORC metadata, HAWQ must indicate to PXF that the query is 
> an aggregate query and which aggregate function it uses.





[jira] [Assigned] (HAWQ-1409) HAWQ send additional header to PXF to indicate aggregate function type

2017-03-24 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-1409:
---

Assignee: Kavinder Dhaliwal  (was: Ed Espino)

> HAWQ send additional header to PXF to indicate aggregate function type
> --
>
> Key: HAWQ-1409
> URL: https://issues.apache.org/jira/browse/HAWQ-1409
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.3.0.0-incubating
>
>
> PXF can take advantage of some file formats, such as ORC, by leveraging the stats 
> in their metadata. This means that for simple aggregate functions like 
> count, min, and max, without any complex joins or filters, PXF can simply read 
> the metadata and avoid reading tuples. For PXF to know that a query can 
> be completed via ORC metadata, HAWQ must indicate to PXF that the query is 
> an aggregate query and which aggregate function it uses.





[jira] [Created] (HAWQ-1409) HAWQ send additional header to PXF to indicate aggregate function type

2017-03-24 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-1409:
---

 Summary: HAWQ send additional header to PXF to indicate aggregate 
function type
 Key: HAWQ-1409
 URL: https://issues.apache.org/jira/browse/HAWQ-1409
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


PXF can take advantage of some file formats, such as ORC, by leveraging the stats 
in their metadata. This means that for simple aggregate functions like 
count, min, and max, without any complex joins or filters, PXF can simply read the 
metadata and avoid reading tuples. For PXF to know that a query can be 
completed via ORC metadata, HAWQ must indicate to PXF that the query is an 
aggregate query and which aggregate function it uses.
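The metadata shortcut described above can be sketched as follows; the fragment and metadata layout here is invented for illustration and is not PXF's actual API:

```python
# Hedged sketch of answering COUNT(*) from file-level metadata (as ORC
# stripe footers record row counts) instead of scanning tuples.
def count_rows(fragments):
    # Sum per-fragment row counts taken from metadata; no tuples are read.
    return sum(f["metadata"]["num_rows"] for f in fragments)

orc_fragments = [
    {"path": "part-0", "metadata": {"num_rows": 1000}},
    {"path": "part-1", "metadata": {"num_rows": 2500}},
]
print(count_rows(orc_fragments))  # 3500
```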






[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-03-07 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900494#comment-15900494
 ] 

Kavinder Dhaliwal commented on HAWQ-1108:
-

Hi Devin, your PR is ready to be merged, but can you please do the following:

- Rebase your branch against master
- Squash your commits into one commit with a commit message like "HAWQ-1108. 
[DESCRIPTION OF COMMIT]"
- Run git push --force to your branch

This will ensure that your changes are all contained in one commit that 
references this JIRA, and you will retain your authorship when a committer 
merges your PR.

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in:
> https://issues.apache.org/jira/browse/HAWQ-779
> we would like to add a JDBC implementation to the HAWQ plugins.
> There are currently two noted implementations openly available on 
> GitHub:
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1), and is also what the 
> HAWQ-779 changes were intended to support.
> [~jiadx] would you be happy to contribute the source as Apache 2.0-licensed open 
> source?





[jira] [Resolved] (HAWQ-944) Numutils.c: pg_ltoa and pg_itoa functions allocate unnecessary amount of bytes

2017-02-24 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal resolved HAWQ-944.

Resolution: Fixed

> Numutils.c: pg_ltoa and pg_itoa functions allocate unnecessary amount of bytes
> --
>
> Key: HAWQ-944
> URL: https://issues.apache.org/jira/browse/HAWQ-944
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>Priority: Minor
> Fix For: 2.2.0.0-incubating
>
>
> The current implementations of {{pg_ltoa}} and {{pg_itoa}} allocate a 33-byte 
> char array and set the input pointer to that array. This is far more 
> than needed to translate an int16 or int32 to a string:
> int32 -> 10 digits maximum + 1 sign character + '\0' = 12 bytes
> int16 -> 5 digits maximum + 1 sign character + '\0' = 7 bytes
> When HAWQ/Greenplum forked from Postgres, the two functions simply delegated 
> to {{sprintf}}, so an optimization was introduced that used the 33-byte 
> buffer. Postgres itself implemented these functions in commit 
> https://github.com/postgres/postgres/commit/4fc115b2e981f8c63165ca86a23215380a3fda66,
> which requires at most a 12-byte buffer.
> This is a minor improvement that can be made to the HAWQ codebase, and it 
> takes relatively little effort to do so.
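
The buffer arithmetic above is easy to check; a quick sketch (in Python, purely for illustration of the byte counts):

```python
# Verify the buffer sizes claimed above: decimal digits of the widest value,
# plus one byte for a leading '-' and one for the terminating '\0'.
INT32_MIN = -2**31   # -2147483648
INT16_MIN = -2**15   # -32768

def c_buffer_size(value):
    digits = len(str(abs(value)))   # decimal digits
    sign = 1 if value < 0 else 0    # leading '-'
    return digits + sign + 1        # +1 for the terminating '\0'

print(c_buffer_size(INT32_MIN))  # 12
print(c_buffer_size(INT16_MIN))  # 7
```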





[jira] [Resolved] (HAWQ-762) Hive aggregation queries through PXF sometimes hang

2017-02-13 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal resolved HAWQ-762.

   Resolution: Fixed
Fix Version/s: (was: backlog)
   2.1.0.0-incubating

> Hive aggregation queries through PXF sometimes hang
> ---
>
> Key: HAWQ-762
> URL: https://issues.apache.org/jira/browse/HAWQ-762
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Hcatalog, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Oleksandr Diachenko
>Assignee: Kavinder Dhaliwal
>  Labels: performance
> Fix For: 2.1.0.0-incubating
>
>
> Reproduce Steps:
> {code}
> select count(*) from hcatalog.default.hivetable;
> {code}
> Sometimes this query hangs, and the PXF logs show that the Hive thrift 
> server cannot be reached from the PXF agent, 
> while users can still access the Hive metastore (through HUE) and execute the same 
> query.
> After a restart of the PXF agent, the query goes through without issues.
> *Troubleshooting Guide*
> - check catalina.out (tomcat) and pxf-service.log to see if the query request 
> gets to tomcat/pxf webapp, any exceptions happened during the time window
> - enable {code}log_min_messages=DEBUG2{code} to see at which step the query 
> is stuck
> - try:
> {code}
> curl http:///pxf/ProtocolVersion
> {code}
> where the URI is the hostname or IP of the machine on which you installed PXF; the 
> port is usually 51200 if you didn’t change it.
> The response you’ll get if the PXF service is running OK:
> {code}
> {version: v14}
> {code}





[jira] [Assigned] (HAWQ-762) Hive aggregation queries through PXF sometimes hang

2017-02-13 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-762:
--

Assignee: Kavinder Dhaliwal  (was: Goden Yao)

> Hive aggregation queries through PXF sometimes hang
> ---
>
> Key: HAWQ-762
> URL: https://issues.apache.org/jira/browse/HAWQ-762
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Hcatalog, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Oleksandr Diachenko
>Assignee: Kavinder Dhaliwal
>  Labels: performance
> Fix For: 2.1.0.0-incubating
>
>
> Reproduce Steps:
> {code}
> select count(*) from hcatalog.default.hivetable;
> {code}
> Sometimes this query hangs, and the PXF logs show that the Hive thrift 
> server cannot be reached from the PXF agent, 
> while users can still access the Hive metastore (through HUE) and execute the same 
> query.
> After a restart of the PXF agent, the query goes through without issues.
> *Troubleshooting Guide*
> - check catalina.out (tomcat) and pxf-service.log to see if the query request 
> gets to tomcat/pxf webapp, any exceptions happened during the time window
> - enable {code}log_min_messages=DEBUG2{code} to see at which step the query 
> is stuck
> - try:
> {code}
> curl http:///pxf/ProtocolVersion
> {code}
> where the URI is the hostname or IP of the machine on which you installed PXF; the 
> port is usually 51200 if you didn’t change it.
> The response you’ll get if the PXF service is running OK:
> {code}
> {version: v14}
> {code}





[jira] (HAWQ-1302) PXF RPM install does not copy correct classpath

2017-01-31 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-1302:
---

 Summary: PXF RPM install does not copy correct classpath
 Key: HAWQ-1302
 URL: https://issues.apache.org/jira/browse/HAWQ-1302
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Ed Espino
 Fix For: 2.1.0.0-incubating


Since the changes in 
[HAWQ-1297|https://issues.apache.org/jira/browse/HAWQ-1297], the new 
pxf-private.classpath causes the distribution-specific classpath file 
pxf-private[distro].classpath to not be successfully renamed by gradle to 
pxf-private.classpath.





[jira] (HAWQ-1302) PXF RPM install does not copy correct classpath

2017-01-31 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-1302:
---

Assignee: Shivram Mani  (was: Ed Espino)

> PXF RPM install does not copy correct classpath
> ---
>
> Key: HAWQ-1302
> URL: https://issues.apache.org/jira/browse/HAWQ-1302
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> Since the changes in 
> [HAWQ-1297|https://issues.apache.org/jira/browse/HAWQ-1297], the new 
> pxf-private.classpath causes the distribution-specific classpath file 
> pxf-private[distro].classpath to not be successfully renamed by gradle to 
> pxf-private.classpath.





[jira] [Resolved] (HAWQ-1215) PXF HiveORC profile doesn't handle complex types correctly

2017-01-20 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal resolved HAWQ-1215.
-
   Resolution: Fixed
Fix Version/s: 2.1.0.0-incubating

> PXF HiveORC profile doesn't handle complex types correctly
> --
>
> Key: HAWQ-1215
> URL: https://issues.apache.org/jira/browse/HAWQ-1215
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Shivram Mani
>Assignee: Shivram Mani
> Fix For: 2.1.0.0-incubating
>
>
> The new HiveORC profile has an issue handling complex Hive types 
> (array, map, struct, union, etc.). The object inspector being used marks all these 
> complex types as string, so at resolution time PXF treats them as 
> primitive data types and fails.
> We get the following exception
> {code}
> 2016-12-12 10:13:37.0579 DEBUG tomcat-http--13 org.apache.hawq.pxf.service.rest.BridgeResource - Starting streaming fragment 0 of resource /hive/warehouse/hive_collections_table_orc/00_0
> 2016-12-12 10:13:37.0580 ERROR tomcat-http--13 org.apache.hawq.pxf.service.rest.BridgeResource - Exception thrown when streaming
> java.lang.ClassCastException: java.util.ArrayList cannot be cast to org.apache.hadoop.io.Text
> at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveJavaObject(WritableStringObjectInspector.java:46)
> at org.apache.hawq.pxf.plugins.hive.HiveResolver.resolvePrimitive(HiveResolver.java:563)
> at org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseTuple(HiveResolver.java:368)
> at org.apache.hawq.pxf.plugins.hive.HiveResolver.traverseStruct(HiveResolver.java:470)
> at org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.getFields(HiveORCSerdeResolver.java:81)
> at org.apache.hawq.pxf.service.ReadBridge.getNext(ReadBridge.java:104)
> at org.apache.hawq.pxf.service.rest.BridgeResource$1.write(BridgeResource.java:140)
> {code}
> The HiveORC profile uses the column types from the schema definition in HAWQ. 
> Complex fields are defined as text in HAWQ, and so are treated as strings, which 
> results in this error. It should be modified to use the schema definition 
> from the Fragmenter metadata instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-1165) HiveORC External Table Query Memory Leak

2016-11-18 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal closed HAWQ-1165.
---
Resolution: Fixed

Not applicable

> HiveORC External Table Query Memory Leak
> 
>
> Key: HAWQ-1165
> URL: https://issues.apache.org/jira/browse/HAWQ-1165
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Lei Chang
> Fix For: backlog
>
>
> When running external table queries using the HiveORC profile, the memory 
> footprint will often keep growing even after a query completes. This 
> memory is only released when GC is subsequently run.





[jira] [Created] (HAWQ-1165) HiveORC External Table Query Memory Leak

2016-11-18 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-1165:
---

 Summary: HiveORC External Table Query Memory Leak
 Key: HAWQ-1165
 URL: https://issues.apache.org/jira/browse/HAWQ-1165
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang
 Fix For: backlog


When running external table queries using the HiveORC profile, the memory 
footprint will often keep growing even after a query completes. This memory 
is only released when GC is subsequently run.





[jira] [Closed] (HAWQ-583) Extend PXF to allow plugins to support returning partial content of SELECT(column projection)

2016-10-26 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal closed HAWQ-583.
--
   Resolution: Fixed
Fix Version/s: (was: backlog)
   2.0.1.0-incubating

> Extend PXF to allow plugins to support returning partial content of 
> SELECT(column projection)
> -
>
> Key: HAWQ-583
> URL: https://issues.apache.org/jira/browse/HAWQ-583
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Kavinder Dhaliwal
> Fix For: 2.0.1.0-incubating
>
>
> Currently PXF supports pushing down predicate (WHERE) logic to 
> the external system to reduce the amount of data that needs to be retrieved.
> SELECT a, b FROM external_pxf_source WHERE z < 3 AND x > 6
> As such we can filter the rows returned, but we currently still have to 
> return all the fields of each row.
> This proposal is to return only the columns in the SELECT list.
> For data sources that are columnar or selectable, such as a remote 
> database that PXF can read or connect to, this reduces the data that 
> needs to be accessed or even transferred.
> As with filter pushdown, it should be optional, so that plugins that 
> provide support can use it while others that do not continue to work as they 
> do.
> The proposal is to:
> 1) create an interface for plugins to optionally implement, through which the 
> columns that need to be returned are given to the plugin.
> 2) update the PXF API for HAWQ to send the columns named in the SELECT, and for PXF to 
> invoke the plugin interface and pass this information on if provided.
> 3) update the PXF integration within HAWQ itself so that HAWQ passes this 
> additional information to PXF.
> This ticket is off the back of the discussion on HAWQ-492.





[jira] [Closed] (HAWQ-1100) Support Decimal Values in PXF Filter

2016-10-26 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal closed HAWQ-1100.
---
   Resolution: Fixed
Fix Version/s: 2.0.1.0-incubating

> Support Decimal Values in PXF Filter
> 
>
> Key: HAWQ-1100
> URL: https://issues.apache.org/jira/browse/HAWQ-1100
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.0.1.0-incubating
>
>
> Currently PXF's filter parser assumes that the constant values in a 
> filter are either a String or a Long. With changes in the PXF Bridge 
> (HAWQ-1048), new numeric types are being passed in the filter string, so PXF 
> must handle these appropriately.





[jira] [Closed] (HAWQ-1049) Enhance PXF Service to support AND,OR,NOT logical operators in Predicate Pushdown

2016-10-26 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal closed HAWQ-1049.
---
   Resolution: Fixed
Fix Version/s: 2.0.1.0-incubating

> Enhance PXF Service to support AND,OR,NOT logical operators in Predicate 
> Pushdown
> -
>
> Key: HAWQ-1049
> URL: https://issues.apache.org/jira/browse/HAWQ-1049
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Kavinder Dhaliwal
> Fix For: 2.0.1.0-incubating
>
>
> Support additional logical operators OR, NOT along with currently supported 
> AND.
> Update the PXF ORC Accessor to support these operators as well.
> Currently PXF only receives filters as a list of AND expressions. In 
> anticipation of HAWQ-1048, PXF needs to support parsing a filter string that 
> includes AND, OR, and NOT operators. The proposal for doing so is to 
> introduce a special character 'l' for logical operators. With the following 
> operations
> AND='0'
> OR='1'
> NOT='2'
> Thus the filter string
> 'a1c2o0a2c5o2l1' would translate to Column 1 < 1 OR Column 2 > 5
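
A minimal sketch of tokenizing the proposed filter-string encoding; the regular-expression grammar and token names here are assumptions for illustration, not PXF's actual (Java) parser:

```python
import re

# Hedged sketch of splitting the serialized filter string described above
# into tokens: 'a' = attribute index, 'c' = constant, 'o' = comparison
# operator code, and the proposed 'l' = logical operator (AND/OR/NOT).
LOGICAL = {"0": "AND", "1": "OR", "2": "NOT"}

def tokenize(filter_string):
    tokens = []
    for kind, value in re.findall(r"([acol])(\d+)", filter_string):
        if kind == "l":
            tokens.append(("logical", LOGICAL[value]))
        else:
            tokens.append((kind, int(value)))
    return tokens

print(tokenize("a1c2o0a2c5o2l1"))
```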





[jira] [Resolved] (HAWQ-1105) Missing License Info from PXF Java files

2016-10-14 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal resolved HAWQ-1105.
-
Resolution: Fixed

> Missing License Info from PXF Java files
> 
>
> Key: HAWQ-1105
> URL: https://issues.apache.org/jira/browse/HAWQ-1105
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.0.1.0-incubating
>
>
>   pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/LogicalFilter.java
>   
> pxf/pxf-hbase/src/test/java/org/apache/hawq/pxf/plugins/hbase/HBaseFilterBuilderTest.java





[jira] [Assigned] (HAWQ-1105) Missing License Info from PXF Java files

2016-10-14 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-1105:
---

Assignee: Kavinder Dhaliwal  (was: Lei Chang)

> Missing License Info from PXF Java files
> 
>
> Key: HAWQ-1105
> URL: https://issues.apache.org/jira/browse/HAWQ-1105
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.0.1.0-incubating
>
>
>   pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/LogicalFilter.java
>   
> pxf/pxf-hbase/src/test/java/org/apache/hawq/pxf/plugins/hbase/HBaseFilterBuilderTest.java





[jira] [Updated] (HAWQ-1105) Missing License Info from PXF Java files

2016-10-14 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-1105:

Fix Version/s: 2.0.1.0-incubating

> Missing License Info from PXF Java files
> 
>
> Key: HAWQ-1105
> URL: https://issues.apache.org/jira/browse/HAWQ-1105
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.0.1.0-incubating
>
>
>   pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/LogicalFilter.java
>   
> pxf/pxf-hbase/src/test/java/org/apache/hawq/pxf/plugins/hbase/HBaseFilterBuilderTest.java





[jira] [Created] (HAWQ-1105) Missing License Info from PXF Java files

2016-10-14 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-1105:
---

 Summary: Missing License Info from PXF Java files
 Key: HAWQ-1105
 URL: https://issues.apache.org/jira/browse/HAWQ-1105
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


  pxf/pxf-api/src/main/java/org/apache/hawq/pxf/api/LogicalFilter.java
  
pxf/pxf-hbase/src/test/java/org/apache/hawq/pxf/plugins/hbase/HBaseFilterBuilderTest.java





[jira] [Updated] (HAWQ-1049) Enhance PXF Service to support AND,OR,NOT logical operators in Predicate Pushdown

2016-10-13 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-1049:

Fix Version/s: (was: backlog)

> Enhance PXF Service to support AND,OR,NOT logical operators in Predicate 
> Pushdown
> -
>
> Key: HAWQ-1049
> URL: https://issues.apache.org/jira/browse/HAWQ-1049
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Kavinder Dhaliwal
>
> Support additional logical operators OR, NOT along with currently supported 
> AND.
> Update the PXF ORC Accessor to support these operators as well.
> Currently PXF only receives filters as a list of AND expressions. In 
> anticipation of HAWQ-1048, PXF needs to support parsing a filter string that 
> includes AND, OR, and NOT operators. The proposal for doing so is to 
> introduce a special character 'l' for logical operators. With the following 
> operations
> AND='0'
> OR='1'
> NOT='2'
> Thus the filter string
> 'a1c2o0a2c5o2l1' would translate to Column 1 < 1 OR Column 2 > 5





[jira] [Created] (HAWQ-1102) Expand PXF HBase Filter Data Type support

2016-10-13 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-1102:
---

 Summary: Expand PXF HBase Filter Data Type support
 Key: HAWQ-1102
 URL: https://issues.apache.org/jira/browse/HAWQ-1102
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


PXF's HBase profile only supports a limited set of data types for filter 
pushdown. With changes in the PXF Bridge (HAWQ-1048) filters now include a 
wider range of data types. PXF should support these data types when setting 
comparator objects for HBase filtering





[jira] [Created] (HAWQ-1101) PXF Hive Partition Filter for filters with OR and NOT

2016-10-13 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-1101:
---

 Summary: PXF Hive Partition Filter for filters with OR and NOT
 Key: HAWQ-1101
 URL: https://issues.apache.org/jira/browse/HAWQ-1101
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


PXF currently only supports partition filtering based on the user query when 
clauses are joined only by AND. With support for pushing down OR and NOT 
(HAWQ-964), PXF should correctly filter partitions even when the query has an OR 
or NOT condition.





[jira] [Assigned] (HAWQ-1100) Support Decimal Values in PXF Filter

2016-10-13 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-1100:
---

Assignee: Kavinder Dhaliwal  (was: Lei Chang)

> Support Decimal Values in PXF Filter
> 
>
> Key: HAWQ-1100
> URL: https://issues.apache.org/jira/browse/HAWQ-1100
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> Currently PXF's filter parser assumes that the constant values in a 
> filter are either a String or a Long. With changes in the PXF Bridge 
> (HAWQ-1048), new numeric types are being passed in the filter string, so PXF 
> must handle these appropriately.





[jira] [Created] (HAWQ-1100) Support Decimal Values in PXF Filter

2016-10-13 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-1100:
---

 Summary: Support Decimal Values in PXF Filter
 Key: HAWQ-1100
 URL: https://issues.apache.org/jira/browse/HAWQ-1100
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


Currently PXF's filter parser assumes that the constant values in a 
filter are either a String or a Long. With changes in the PXF Bridge 
(HAWQ-1048), new numeric types are being passed in the filter string, so PXF 
must handle these appropriately.





[jira] [Updated] (HAWQ-1049) Enhance PXF Service to support AND,OR,NOT logical operators in Predicate Pushdown

2016-09-14 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-1049:

Description: 
Support additional logical operators OR, NOT along with currently supported AND.
Update the PXF ORC Accessor to support these operators as well.

Currently PXF only receives filters as a list of AND expressions. In 
anticipation of HAWQ-1048, PXF needs to support parsing a filter string that 
includes AND, OR, and NOT operators. The proposal for doing so is to introduce 
a special character 'l' for logical operators. With the following operations

AND='0'
OR='1'
NOT='2'

Thus the filter string

'a1c2o0a2c5o2l1' would translate to Column 1 < 1 OR Column 2 > 5
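For illustration, the codes above (AND='0', OR='1', NOT='2') can combine already-evaluated comparison results on a stack, postfix-style. This C sketch is an assumption about how the 'l' tokens compose, not PXF's actual (Java) FilterParser:

```c
#include <assert.h>

/* Combine boolean comparison results with logical-operator codes:
 * AND='0' and OR='1' pop two operands, NOT='2' pops one.
 * Callers must pass at most 32 operands for this fixed stack. */
static int eval_logical(const int *operands, int n, const char *lcodes)
{
    int stack[32];
    int top = 0;

    for (int i = 0; i < n; i++)
        stack[top++] = operands[i];          /* push comparison results */

    for (; *lcodes != '\0'; lcodes++) {
        if (*lcodes == '2') {                /* NOT: unary */
            stack[top - 1] = !stack[top - 1];
        } else {                             /* AND='0' / OR='1': binary */
            int b = stack[--top];
            int a = stack[--top];
            stack[top++] = (*lcodes == '0') ? (a && b) : (a || b);
        }
    }
    return stack[top - 1];
}
```

With two comparison results and the code "1", this yields their OR, matching the example filter's shape.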



  was:
Support additional logical operators OR, NOT along with currently supported AND.
Update the PXF ORC Accessor to support these operators as well.


> Enhance PXF Service to support AND,OR,NOT logical operators in Predicate 
> Pushdown
> -
>
> Key: HAWQ-1049
> URL: https://issues.apache.org/jira/browse/HAWQ-1049
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Shivram Mani
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> Support additional logical operators OR, NOT along with currently supported 
> AND.
> Update the PXF ORC Accessor to support these operators as well.
> Currently PXF only receives filters as a list of AND expressions. In 
> anticipation of HAWQ-1048, PXF needs to support parsing a filter string that 
> includes AND, OR, and NOT operators. The proposal for doing so is to 
> introduce a special character 'l' for logical operators. With the following 
> operations
> AND='0'
> OR='1'
> NOT='2'
> Thus the filter string
> 'a1c2o0a2c5o2l1' would translate to Column 1 < 1 OR Column 2 > 5





[jira] [Assigned] (HAWQ-986) Redundant Fragmenter API Call from Reading External Table

2016-08-29 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-986:
--

Assignee: Kavinder Dhaliwal  (was: Goden Yao)

> Redundant Fragmenter API Call from Reading External Table
> -
>
> Key: HAWQ-986
> URL: https://issues.apache.org/jira/browse/HAWQ-986
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> Right now, when HAWQ does a select from an external PXF table, it makes two 
> fragmenter calls, which seems redundant.





[jira] [Resolved] (HAWQ-986) Redundant Fragmenter API Call from Reading External Table

2016-08-29 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal resolved HAWQ-986.

Resolution: Not A Problem

> Redundant Fragmenter API Call from Reading External Table
> -
>
> Key: HAWQ-986
> URL: https://issues.apache.org/jira/browse/HAWQ-986
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> Right now, when HAWQ does a select from an external PXF table, it makes two 
> fragmenter calls, which seems redundant.





[jira] [Commented] (HAWQ-986) Redundant Fragmenter API Call from Reading External Table

2016-08-29 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15446532#comment-15446532
 ] 

Kavinder Dhaliwal commented on HAWQ-986:


This is a known design decision. HAWQ calls the planner twice in order to do 
dynamic resource negotiation with every query.

> Redundant Fragmenter API Call from Reading External Table
> -
>
> Key: HAWQ-986
> URL: https://issues.apache.org/jira/browse/HAWQ-986
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Right now, when HAWQ does a select from an external PXF table, it makes two 
> fragmenter calls, which seems redundant.





[jira] [Updated] (HAWQ-973) Check pi_varList in pxfheaders before setting headers

2016-08-01 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-973:
---
Description: "add_projection_desc_httpheader()" extracts the indices of 
columns to project from pi_varNumbers, but calculates the length based on 
pi_targetList. This is similar to how ExecVariableList handles projection. 
However, there are cases where length(pi_targetList) > 0 and 
length(pi_varNumbers) == 0, so a condition on pi_varList needs to be added 
before invoking add_projection_desc_httpheader, as is done in ExecProject.  
(was: {code}
add_projection_desc_httpheader
{code}

extracts the indices of columns to project from pi_varNumbers, but calculates 
the length based on pi_targetList. This is similar to how ExecVariableList 
handles projection. However there are cases where length(pi_targetList) >0 and 
length(pi_varNumbers) == 0, so a condition on pi_varList needs to be added 
before invoking add_projection_desc_httpheader as is done in ExecProject.)

> Check pi_varList in pxfheaders before setting headers
> -
>
> Key: HAWQ-973
> URL: https://issues.apache.org/jira/browse/HAWQ-973
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> "add_projection_desc_httpheader()" extracts the indices of columns to project 
> from pi_varNumbers, but calculates the length based on pi_targetList. This is 
> similar to how ExecVariableList handles projection. However, there are cases 
> where length(pi_targetList) > 0 and length(pi_varNumbers) == 0, so a condition 
> on pi_varList needs to be added before invoking 
> add_projection_desc_httpheader, as is done in ExecProject.





[jira] [Assigned] (HAWQ-973) Check pi_varList in pxfheaders before setting headers

2016-08-01 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-973:
--

Assignee: Kavinder Dhaliwal  (was: Goden Yao)

> Check pi_varList in pxfheaders before setting headers
> -
>
> Key: HAWQ-973
> URL: https://issues.apache.org/jira/browse/HAWQ-973
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> {code}
> add_projection_desc_httpheader
> {code}
> extracts the indices of columns to project from pi_varNumbers, but calculates 
> the length based on pi_targetList. This is similar to how ExecVariableList 
> handles projection. However, there are cases where length(pi_targetList) > 0 
> and length(pi_varNumbers) == 0, so a condition on pi_varList needs to be 
> added before invoking add_projection_desc_httpheader, as is done in 
> ExecProject.





[jira] [Created] (HAWQ-973) Check pi_varList in pxfheaders before setting headers

2016-08-01 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-973:
--

 Summary: Check pi_varList in pxfheaders before setting headers
 Key: HAWQ-973
 URL: https://issues.apache.org/jira/browse/HAWQ-973
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Kavinder Dhaliwal
Assignee: Goden Yao


{code}
add_projection_desc_httpheader
{code}

extracts the indices of columns to project from pi_varNumbers, but calculates 
the length based on pi_targetList. This is similar to how ExecVariableList 
handles projection. However, there are cases where length(pi_targetList) > 0 
and length(pi_varNumbers) == 0, so a condition on pi_varList needs to be added 
before invoking add_projection_desc_httpheader, as is done in ExecProject.
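A minimal sketch of the proposed guard; ProjInfoSketch and should_send_projection are hypothetical stand-ins for HAWQ's ProjectionInfo and the call site in pxfheaders, mirroring the pi_varList check that ExecProject performs:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the ProjectionInfo fields discussed above. */
typedef struct {
    void *pi_targetList;  /* may be non-empty even when varNumbers is not */
    int  *pi_varNumbers;  /* column indices to project, or NULL */
    void *pi_varList;     /* set only in the simple-Var projection case */
} ProjInfoSketch;

/* Only emit the projection header when pi_varList is set, so that
 * the length derived from pi_varNumbers can be trusted. */
static int should_send_projection(const ProjInfoSketch *pi)
{
    return pi != NULL && pi->pi_varList != NULL && pi->pi_varNumbers != NULL;
}
```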





[jira] [Assigned] (HAWQ-972) Curl upload field should be set before sending request

2016-08-01 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-972:
--

Assignee: Kavinder Dhaliwal  (was: Goden Yao)

> Curl upload field should be set before sending request
> --
>
> Key: HAWQ-972
> URL: https://issues.apache.org/jira/browse/HAWQ-972
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> HAWQ-932 refactored common code out of curl_init_upload and 
> curl_init_download into curl_init; however, curl_init now invokes 
> setup_multi_handle, which performs the async curl request before 
> curl_init_upload sets the upload parameter to false. This leads to a 
> non-deterministic error where GET requests are made to PXF endpoints that 
> only accept POST requests, leading to a 405 status return code from PXF.





[jira] [Created] (HAWQ-972) Curl upload field should be set before sending request

2016-08-01 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-972:
--

 Summary: Curl upload field should be set before sending request
 Key: HAWQ-972
 URL: https://issues.apache.org/jira/browse/HAWQ-972
 Project: Apache HAWQ
  Issue Type: Bug
  Components: External Tables, PXF
Reporter: Kavinder Dhaliwal
Assignee: Goden Yao


HAWQ-932 refactored common code out of curl_init_upload and 
curl_init_download into curl_init; however, curl_init now invokes 
setup_multi_handle, which performs the async curl request before 
curl_init_upload sets the upload parameter to false. This leads to a 
non-deterministic error where GET requests are made to PXF endpoints that only 
accept POST requests, leading to a 405 status return code from PXF.
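The failure mode can be modeled with a toy handle: the HTTP verb is chosen when the request is kicked off, so setting the upload flag afterwards changes nothing. This is an illustration of the ordering bug, not HAWQ's libchurl code:

```c
#include <assert.h>
#include <string.h>

/* Toy model of the HAWQ-972 ordering bug. */
typedef struct {
    int         upload;   /* 1 => upload (POST-style), 0 => GET */
    int         started;  /* set once the async request is kicked off */
    const char *verb;     /* verb chosen at start time */
} ToyHandle;

static void toy_start(ToyHandle *h)
{
    h->started = 1;
    h->verb = h->upload ? "POST" : "GET";  /* verb is fixed here */
}

static void toy_set_upload(ToyHandle *h, int upload)
{
    h->upload = upload;  /* no effect on h->verb once started */
}
```

Setting the flag before toy_start() yields POST; setting it after means the GET has already gone out, which matches the 405 symptom.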





[jira] [Commented] (HAWQ-583) Extend PXF to allow plugins to support returning partial content of SELECT(column projection)

2016-07-28 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15397931#comment-15397931
 ] 

Kavinder Dhaliwal commented on HAWQ-583:


This is being implemented for ORC files via HAWQ-886

> Extend PXF to allow plugins to support returning partial content of 
> SELECT(column projection)
> -
>
> Key: HAWQ-583
> URL: https://issues.apache.org/jira/browse/HAWQ-583
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> Currently PXF supports being able to push down the predicate WHERE logic to 
> the external system to reduce the amount of data that needs to be retrieved.
> SELECT a, b FROM external_pxf_source WHERE z < 3 AND x > 6
> As such we can filter the rows returned, but currently we would still have to 
> return all the fields / the complete row.
> This proposal is so that we can return only the columns in the SELECT part.
> For data sources that use columnar storage, or selectable sources such as a 
> remote database that PXF can read or connect to, this has advantages in the 
> amount of data that needs to be accessed or even transferred.
> As with the push-down filter it should be optional, so that plugins that 
> provide support can use it but others that do not continue to work as they 
> do.
> The proposal would be to:
> 1) create an interface for plugins to optionally implement, where the 
> columns needed to be returned are given to the plugin.
> 2) update the PXF API for HAWQ to send the columns defined in SELECT, and 
> for PXF to invoke the plugin interface and pass this information on if 
> provided.
> 3) update the PXF integration within HAWQ itself so that HAWQ passes this 
> additional information to PXF.
> This Ticket is off the back of discussion on HAWQ-492.





[jira] [Assigned] (HAWQ-954) Seg Fault when writing to External Table where ProjInfo is NULL

2016-07-26 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-954:
--

Assignee: Kavinder Dhaliwal  (was: Goden Yao)

> Seg Fault when writing to External Table where ProjInfo is NULL
> ---
>
> Key: HAWQ-954
> URL: https://issues.apache.org/jira/browse/HAWQ-954
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> HAWQ-927 introduced a regression where, when adding headers to the http 
> request to PXF, the macro 
> {code}
> #define EXTPROTOCOL_GET_PROJINFO(fcinfo) (((ExtProtocolData*) 
> fcinfo->context)->desc->projInfo)
> {code}
> evaluates to a NULL value and causes a segfault. This bug has been observed 
> mainly in writable external tables.





[jira] [Created] (HAWQ-954) Seg Fault when writing to External Table where ProjInfo is NULL

2016-07-26 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-954:
--

 Summary: Seg Fault when writing to External Table where ProjInfo 
is NULL
 Key: HAWQ-954
 URL: https://issues.apache.org/jira/browse/HAWQ-954
 Project: Apache HAWQ
  Issue Type: Bug
  Components: External Tables, PXF
Reporter: Kavinder Dhaliwal
Assignee: Goden Yao


HAWQ-927 introduced a regression where, when adding headers to the http request 
to PXF, the macro 

{code}
#define EXTPROTOCOL_GET_PROJINFO(fcinfo) (((ExtProtocolData*) fcinfo->context)->desc->projInfo)
{code}

evaluates to a NULL value and causes a segfault. This bug has been observed 
mainly in writable external tables.
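A NULL-safe accessor along these lines would avoid the crash; ToyDesc and ToyExtProtocolData are minimal stand-ins for the real structs the macro dereferences:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the structs behind EXTPROTOCOL_GET_PROJINFO. */
typedef struct { void *projInfo; } ToyDesc;
typedef struct { ToyDesc *desc; } ToyExtProtocolData;

/* Writable-table paths can reach here with no projection info,
 * so guard every dereference instead of expanding the raw macro. */
static void *get_projinfo_safe(const ToyExtProtocolData *ctx)
{
    if (ctx == NULL || ctx->desc == NULL)
        return NULL;
    return ctx->desc->projInfo;  /* may itself be NULL: callers must check */
}
```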





[jira] [Created] (HAWQ-950) PXF support for Float filters encoded in header data

2016-07-26 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-950:
--

 Summary: PXF support for Float filters encoded in header data
 Key: HAWQ-950
 URL: https://issues.apache.org/jira/browse/HAWQ-950
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: External Tables, PXF
Reporter: Kavinder Dhaliwal
Assignee: Goden Yao


HAWQ-779, contributed by [~jiadx], introduced the ability for HAWQ to serialize 
filters on float columns and send the data to PXF. However, PXF is not 
currently capable of parsing float values in the filter string.





[jira] [Assigned] (HAWQ-949) Hawq sending unsupported serialized float filter data to PXF

2016-07-25 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-949:
--

Assignee: Kavinder Dhaliwal  (was: Goden Yao)

> Hawq sending unsupported serialized float filter data to PXF
> 
>
> Key: HAWQ-949
> URL: https://issues.apache.org/jira/browse/HAWQ-949
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> HAWQ-779 introduced support on the C side for floats to be serialized into 
> the filter header sent to PXF. However, changes were not made to the 
> FilterParser class in PXF to support parsing non-Int numeric types.





[jira] [Created] (HAWQ-949) Hawq sending unsupported serialized float filter data to PXF

2016-07-25 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-949:
--

 Summary: Hawq sending unsupported serialized float filter data to 
PXF
 Key: HAWQ-949
 URL: https://issues.apache.org/jira/browse/HAWQ-949
 Project: Apache HAWQ
  Issue Type: Bug
  Components: External Tables, PXF
Reporter: Kavinder Dhaliwal
Assignee: Goden Yao


HAWQ-779 introduced support on the C side for floats to be serialized into the 
filter header sent to PXF. However, changes were not made to the FilterParser 
class in PXF to support parsing non-Int numeric types.





[jira] [Updated] (HAWQ-944) Numutils.c: pg_ltoa and pg_itoa functions allocate unnecessary amount of bytes

2016-07-21 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-944:
---
Priority: Minor  (was: Major)

> Numutils.c: pg_ltoa and pg_itoa functions allocate unnecessary amount of bytes
> --
>
> Key: HAWQ-944
> URL: https://issues.apache.org/jira/browse/HAWQ-944
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>Priority: Minor
>
> The current implementations of {{pg_ltoa}} and {{pg_itoa}} allocate a 33 byte 
> char array and set the input pointer to that array. This is far more bytes 
> than needed to translate an int16 or int32 to a string:
> int32 -> 10 digits maximum + 1 sign character + '\0' = 12 bytes
> int16 -> 5 digits maximum + 1 sign character + '\0' = 7 bytes
> When HAWQ/Greenplum forked from Postgres the two functions simply delegated 
> to {{sprintf}}, so an optimization was introduced that involved the 33 byte 
> solution. Postgres itself implemented these functions in commit 
> https://github.com/postgres/postgres/commit/4fc115b2e981f8c63165ca86a23215380a3fda66
>  which requires at most a 12 byte char buffer.
> This is a minor improvement that can be made to the HAWQ codebase, and it 
> takes relatively little effort to do so.





[jira] [Assigned] (HAWQ-944) Numutils.c: pg_ltoa and pg_itoa functions allocate unnecessary amount of bytes

2016-07-21 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-944:
--

Assignee: Kavinder Dhaliwal  (was: Lei Chang)

> Numutils.c: pg_ltoa and pg_itoa functions allocate unnecessary amount of bytes
> --
>
> Key: HAWQ-944
> URL: https://issues.apache.org/jira/browse/HAWQ-944
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> The current implementations of {{pg_ltoa}} and {{pg_itoa}} allocate a 33 byte 
> char array and set the input pointer to that array. This is far more bytes 
> than needed to translate an int16 or int32 to a string:
> int32 -> 10 digits maximum + 1 sign character + '\0' = 12 bytes
> int16 -> 5 digits maximum + 1 sign character + '\0' = 7 bytes
> When HAWQ/Greenplum forked from Postgres the two functions simply delegated 
> to {{sprintf}}, so an optimization was introduced that involved the 33 byte 
> solution. Postgres itself implemented these functions in commit 
> https://github.com/postgres/postgres/commit/4fc115b2e981f8c63165ca86a23215380a3fda66
>  which requires at most a 12 byte char buffer.
> This is a minor improvement that can be made to the HAWQ codebase, and it 
> takes relatively little effort to do so.





[jira] [Created] (HAWQ-944) Numutils.c: pg_ltoa and pg_itoa functions allocate unnecessary amount of bytes

2016-07-21 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-944:
--

 Summary: Numutils.c: pg_ltoa and pg_itoa functions allocate 
unnecessary amount of bytes
 Key: HAWQ-944
 URL: https://issues.apache.org/jira/browse/HAWQ-944
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Core
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


The current implementations of {{pg_ltoa}} and {{pg_itoa}} allocate a 33 byte 
char array and set the input pointer to that array. This is far more bytes than 
needed to translate an int16 or int32 to a string:

int32 -> 10 digits maximum + 1 sign character + '\0' = 12 bytes
int16 -> 5 digits maximum + 1 sign character + '\0' = 7 bytes

When HAWQ/Greenplum forked from Postgres the two functions simply delegated to 
{{sprintf}}, so an optimization was introduced that involved the 33 byte 
solution. Postgres itself implemented these functions in commit 
https://github.com/postgres/postgres/commit/4fc115b2e981f8c63165ca86a23215380a3fda66
which requires at most a 12 byte char buffer.

This is a minor improvement that can be made to the HAWQ codebase, and it 
takes relatively little effort to do so.
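A conversion into the tighter 12-byte buffer might look like the sketch below (modeled on the Postgres approach, not the exact pg_ltoa code):

```c
#include <assert.h>
#include <string.h>

/* int32 needs at most 12 bytes: 10 digits + optional '-' + '\0'. */
static void ltoa12(int value, char buf[12])
{
    char tmp[12];
    int  i = 0, j = 0;
    /* negate via unsigned math so INT_MIN does not overflow */
    unsigned int u = (value < 0) ? 0u - (unsigned int)value
                                 : (unsigned int)value;
    do {
        tmp[i++] = (char)('0' + u % 10);  /* emit digits in reverse */
        u /= 10;
    } while (u != 0);

    if (value < 0)
        buf[j++] = '-';
    while (i > 0)
        buf[j++] = tmp[--i];              /* reverse into place */
    buf[j] = '\0';
}
```

Even the worst case, INT_MIN, fits: a sign, ten digits, and the terminator.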





[jira] [Commented] (HAWQ-932) HAWQ fails to query external table defined with "localhost" in URL

2016-07-20 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15386100#comment-15386100
 ] 

Kavinder Dhaliwal commented on HAWQ-932:


[~odiachenko] Good work on finding the fix for this. Do you know why we didn't 
notice this issue earlier?

> HAWQ fails to query external table defined with "localhost" in URL
> --
>
> Key: HAWQ-932
> URL: https://issues.apache.org/jira/browse/HAWQ-932
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, PXF
>Reporter: Goden Yao
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> Originally reported by [~jpatel] when he's making a docker image based on 
> HAWQ 2.0.0.0-incubating dev build. Investigated by [~odiachenko]
> There is a workaround to define it with 127.0.0.1, but there is not a 
> workaround for querying tables using HCatalog integration.
> It used to work before.
> {code}
> template1=# CREATE EXTERNAL TABLE ext_table1 (t1 text, t2 text,
> num1 integer, dub1 double precision) LOCATION
> (E'pxf://localhost:51200/hive_small_data?PROFILE=Hive') FORMAT 'CUSTOM'
> (formatter='pxfwritable_import');
> CREATE EXTERNAL TABLE
> template1=# select * from ext_table1;
> ERROR:  remote component error (0): (libchurl.c:898)
> {code}
> When I turned on debug mode in curl, I found this error in the logs: 
> "Closing connection 0".
> I found a workaround: set the CURLOPT_RESOLVE option in curl:
> {code}
> struct curl_slist *host = NULL;
> host = curl_slist_append(NULL, "localhost:51200:127.0.0.1");
> set_curl_option(context, CURLOPT_RESOLVE, host);
> {code}
> It seems like an issue with the DNS cache.





[jira] [Commented] (HAWQ-927) Send Projection Info Data from HAWQ to PXF

2016-07-18 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382958#comment-15382958
 ] 

Kavinder Dhaliwal commented on HAWQ-927:


My initial implementation of this feature is to introduce a struct in the HAWQ 
codebase, {{ExternalSelectDescData}}, that currently holds a pointer to the 
current query's {{ProjectionInfo}} struct. I chose to pass the pointer through 
the encapsulating struct because I believe this will make it more convenient to 
pack more data into the struct to give PXF more insight into aspects of a query 
(aggregate functions, filters, limits). 

To send the data to PXF via the REST protocol I added "X-GP-PROJECT" and 
"X-GP-PROJECT-COLS" headers to pass a flag for whether there are any columns to 
project, and a list of those columns, in the header data passed via the API 
request. 

I look forward to the community's suggestions and feedback about this feature.
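Assuming the flag is the literal text true/false and the column list is comma-separated (the wire format is not spelled out in this comment), the headers could be built like this sketch:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format the two proposed headers into out. The header names come from
 * the comment above; the true/false flag and CSV column layout are
 * assumptions for illustration. Returns the number of bytes written. */
static int format_project_headers(const int *cols, int ncols,
                                  char *out, size_t outlen)
{
    int n = snprintf(out, outlen, "X-GP-PROJECT: %s\r\nX-GP-PROJECT-COLS:",
                     ncols > 0 ? "true" : "false");
    for (int i = 0; i < ncols && (size_t)n < outlen; i++)
        n += snprintf(out + n, outlen - n, "%s%d", i ? "," : " ", cols[i]);
    return n;
}
```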

> Send Projection Info Data from HAWQ to PXF
> --
>
> Key: HAWQ-927
> URL: https://issues.apache.org/jira/browse/HAWQ-927
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> To achieve column projection at the level of PXF or the underlying readers we 
> need to first send this data as a Header/Param to PXF. Currently, PXF has no 
> knowledge whether a query requires all columns or a subset of columns.





[jira] [Updated] (HAWQ-927) Send Projection Info Data from HAWQ to PXF

2016-07-14 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-927:
---
Component/s: PXF

> Send Projection Info Data from HAWQ to PXF
> --
>
> Key: HAWQ-927
> URL: https://issues.apache.org/jira/browse/HAWQ-927
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> To achieve column projection at the level of PXF or the underlying readers we 
> need to first send this data as a Header/Param to PXF. Currently, PXF has no 
> knowledge whether a query requires all columns or a subset of columns.





[jira] [Created] (HAWQ-927) Send Projection Info Data from HAWQ to PXF

2016-07-14 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-927:
--

 Summary: Send Projection Info Data from HAWQ to PXF
 Key: HAWQ-927
 URL: https://issues.apache.org/jira/browse/HAWQ-927
 Project: Apache HAWQ
  Issue Type: Sub-task
  Components: External Tables
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


To achieve column projection at the level of PXF or the underlying readers we 
need to first send this data as a Header/Param to PXF. Currently, PXF has no 
knowledge whether a query requires all columns or a subset of columns.





[jira] [Assigned] (HAWQ-927) Send Projection Info Data from HAWQ to PXF

2016-07-14 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-927:
--

Assignee: Kavinder Dhaliwal  (was: Lei Chang)

> Send Projection Info Data from HAWQ to PXF
> --
>
> Key: HAWQ-927
> URL: https://issues.apache.org/jira/browse/HAWQ-927
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> To achieve column projection at the level of PXF or the underlying readers we 
> need to first send this data as a Header/Param to PXF. Currently, PXF has no 
> knowledge whether a query requires all columns or a subset of columns.





[jira] [Assigned] (HAWQ-583) Extend PXF to allow plugins to support returning partial content of SELECT(column projection)

2016-07-14 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-583:
--

Assignee: Kavinder Dhaliwal  (was: Shivram Mani)

> Extend PXF to allow plugins to support returning partial content of 
> SELECT(column projection)
> -
>
> Key: HAWQ-583
> URL: https://issues.apache.org/jira/browse/HAWQ-583
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Kavinder Dhaliwal
> Fix For: backlog
>
>
> Currently PXF supports being able to push down the predicate WHERE logic to 
> the external system to reduce the amount of data that needs to be retrieved.
> SELECT a, b FROM external_pxf_source WHERE z < 3 AND x > 6
> As such we can filter the rows returned, but currently we would still have to 
> return all the fields / the complete row.
> This proposal is so that we can return only the columns in the SELECT part.
> For data sources that use columnar storage, or selectable sources such as a 
> remote database that PXF can read or connect to, this has advantages in the 
> amount of data that needs to be accessed or even transferred.
> As with the push-down filter it should be optional, so that plugins that 
> provide support can use it but others that do not continue to work as they 
> do.
> The proposal would be to:
> 1) create an interface for plugins to optionally implement, where the 
> columns needed to be returned are given to the plugin.
> 2) update the PXF API for HAWQ to send the columns defined in SELECT, and 
> for PXF to invoke the plugin interface and pass this information on if 
> provided.
> 3) update the PXF integration within HAWQ itself so that HAWQ passes this 
> additional information to PXF.
> This Ticket is off the back of discussion on HAWQ-492.





[jira] [Closed] (HAWQ-893) Incorrect array indexing in load_expected_statuses() in pg_regress.c

2016-07-06 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal closed HAWQ-893.
--
Resolution: Fixed

> Incorrect array indexing in load_expected_statuses() in pg_regress.c
> 
>
> Key: HAWQ-893
> URL: https://issues.apache.org/jira/browse/HAWQ-893
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> Sometimes when running installcheck-good, parsing a line from the 
> src/test/regress/expected_statuses file can leave the string that contains 
> the name of the test with its zero byte set to the null character.





[jira] [Assigned] (HAWQ-893) Incorrect array indexing in load_expected_statuses() in pg_regress.c

2016-07-05 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-893:
--

Assignee: Kavinder Dhaliwal  (was: Lei Chang)

> Incorrect array indexing in load_expected_statuses() in pg_regress.c
> 
>
> Key: HAWQ-893
> URL: https://issues.apache.org/jira/browse/HAWQ-893
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> Sometimes when running installcheck-good, parsing a line from the 
> src/test/regress/expected_statuses file can leave the string that contains 
> the name of the test with its zero byte set to the null character.





[jira] [Created] (HAWQ-893) Incorrect array indexing in load_expected_statuses() in pg_regress.c

2016-07-05 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-893:
--

 Summary: Incorrect array indexing in load_expected_statuses() in 
pg_regress.c
 Key: HAWQ-893
 URL: https://issues.apache.org/jira/browse/HAWQ-893
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


Sometimes when running installcheck-good, parsing a line from the 
src/test/regress/expected_statuses file can leave the string that contains the 
name of the test with its zero byte set to the null character.





[jira] [Commented] (HAWQ-786) HAWQ supports ORC as a native storage format

2016-06-17 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15336415#comment-15336415
 ] 

Kavinder Dhaliwal commented on HAWQ-786:


Thanks for the design doc [~liming01]. From a high level, having a framework 
similar to FDW looks like a good change to make to HAWQ. I have a couple of 
points:

1. This document seems to join ORC support and the Foreign Data Wrapper 
framework into the same issue. Am I correct in thinking that the FDW 
abstraction will be used to support ORC as a native format? Why not work on it 
in two stages, first the FDW then ORC support?

2. Since this is such a large change, can you update the document with 
information about the architectural changes that will be made to HAWQ to 
support FDW? 

3. Can you also add information about the impact this change will have on the 
existing PXF framework? How will the two frameworks differ, why is FDW, in 
your opinion, the preferred approach over PXF, or will the two co-exist?

Overall, I appreciate that the design doc explains the new user experience 
HAWQ will have through the FDW framework, but I'd like more explanation of 
the internal changes to HAWQ that you are proposing.

> HAWQ supports ORC as a native storage format
> 
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Storage
>Reporter: Lei Chang
>Assignee: hongwu
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework to support native C/C++ pluggable formats 
> is needed to support ORC more easily, and it can also potentially be used 
> for fast external data access.





[jira] [Closed] (HAWQ-785) Failure running `make -j8 all`

2016-06-08 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal closed HAWQ-785.
--
Resolution: Fixed
  Assignee: Kavinder Dhaliwal  (was: Lei Chang)

> Failure running `make -j8 all`
> --
>
> Key: HAWQ-785
> URL: https://issues.apache.org/jira/browse/HAWQ-785
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> I am trying to build hawq on a local OS X 10.10 environment with gcc version 
> 5.3.0
> I can successfully run 
> {code}
> ./configure CFLAGS="-O3 -g" CXXFLAGS="-O3 -g" LDFLAGS= --with-pgport=5432 
> --with-libedit-preferred --enable-email --enable-snmp --with-perl 
> --with-python --with-java --with-openssl --with-pam --without-krb5 
> --with-gssapi --with-ldap --with-r --with-pgcrypto --enable-orca 
> --prefix=~/hawq_install/
> {code}
> However when I run `make -j8 all` I get many errors related to building 
> libhdfs3 such as
> {code}
> Undefined symbols for architecture x86_64:
>   "google::protobuf::MessageFactory::InternalRegisterGeneratedFile(char 
> const*, void (*)(std::__cxx11::basic_string<char, std::char_traits<char>, 
> std::allocator<char> > const&))", referenced from:
>   Hdfs::Internal::protobuf_AddDesc_ClientDatanodeProtocol_2eproto() 
> in ClientDatanodeProtocol.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_ClientNamenodeProtocol_2eproto() 
> in ClientNamenodeProtocol.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_datatransfer_2eproto() in 
> datatransfer.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_hdfs_2eproto() in hdfs.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_IpcConnectionContext_2eproto() in 
> IpcConnectionContext.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_ProtobufRpcEngine_2eproto() in 
> ProtobufRpcEngine.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_RpcHeader_2eproto() in 
> RpcHeader.pb.cc.o
> ...
> ld: symbol(s) not found for architecture x86_64
> collect2: error: ld returned 1 exit status
> make[4]: *** [src/libhdfs3.2.2.31.dylib] Error 1
> make[3]: *** [src/CMakeFiles/libhdfs3-shared.dir/all] Error 2
> {code}





[jira] [Commented] (HAWQ-785) Failure running `make -j8 all`

2016-06-08 Thread Kavinder Dhaliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320959#comment-15320959
 ] 

Kavinder Dhaliwal commented on HAWQ-785:


I was able to overcome this issue by using clang instead of gcc, with the 
following environment variables:

{code}
export CXXFLAGS="-std=c++11 -stdlib=libc++"
export CC=$(which clang)
export CXX=$(which clang++)
{code}

> Failure running `make -j8 all`
> --
>
> Key: HAWQ-785
> URL: https://issues.apache.org/jira/browse/HAWQ-785
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: Kavinder Dhaliwal
>Assignee: Lei Chang
>
> I am trying to build hawq on a local OS X 10.10 environment with gcc version 
> 5.3.0
> I can successfully run 
> {code}
> ./configure CFLAGS="-O3 -g" CXXFLAGS="-O3 -g" LDFLAGS= --with-pgport=5432 
> --with-libedit-preferred --enable-email --enable-snmp --with-perl 
> --with-python --with-java --with-openssl --with-pam --without-krb5 
> --with-gssapi --with-ldap --with-r --with-pgcrypto --enable-orca 
> --prefix=~/hawq_install/
> {code}
> However when I run `make -j8 all` I get many errors related to building 
> libhdfs3 such as
> {code}
> Undefined symbols for architecture x86_64:
>   "google::protobuf::MessageFactory::InternalRegisterGeneratedFile(char 
> const*, void (*)(std::__cxx11::basic_string<char, std::char_traits<char>, 
> std::allocator<char> > const&))", referenced from:
>   Hdfs::Internal::protobuf_AddDesc_ClientDatanodeProtocol_2eproto() 
> in ClientDatanodeProtocol.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_ClientNamenodeProtocol_2eproto() 
> in ClientNamenodeProtocol.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_datatransfer_2eproto() in 
> datatransfer.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_hdfs_2eproto() in hdfs.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_IpcConnectionContext_2eproto() in 
> IpcConnectionContext.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_ProtobufRpcEngine_2eproto() in 
> ProtobufRpcEngine.pb.cc.o
>   Hdfs::Internal::protobuf_AddDesc_RpcHeader_2eproto() in 
> RpcHeader.pb.cc.o
> ...
> ld: symbol(s) not found for architecture x86_64
> collect2: error: ld returned 1 exit status
> make[4]: *** [src/libhdfs3.2.2.31.dylib] Error 1
> make[3]: *** [src/CMakeFiles/libhdfs3-shared.dir/all] Error 2
> {code}





[jira] [Created] (HAWQ-785) Failure running `make -j8 all`

2016-06-07 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-785:
--

 Summary: Failure running `make -j8 all`
 Key: HAWQ-785
 URL: https://issues.apache.org/jira/browse/HAWQ-785
 Project: Apache HAWQ
  Issue Type: Bug
  Components: libhdfs
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


I am trying to build hawq on a local OS X 10.10 environment with gcc version 
5.3.0

I can successfully run 

{code}
./configure CFLAGS="-O3 -g" CXXFLAGS="-O3 -g" LDFLAGS= --with-pgport=5432 
--with-libedit-preferred --enable-email --enable-snmp --with-perl --with-python 
--with-java --with-openssl --with-pam --without-krb5 --with-gssapi --with-ldap 
--with-r --with-pgcrypto --enable-orca --prefix=~/hawq_install/
{code}

However when I run `make -j8 all` I get many errors related to building 
libhdfs3 such as

{code}
Undefined symbols for architecture x86_64:
  "google::protobuf::MessageFactory::InternalRegisterGeneratedFile(char const*, 
void (*)(std::__cxx11::basic_string<char, std::char_traits<char>, 
std::allocator<char> > const&))", referenced from:
  Hdfs::Internal::protobuf_AddDesc_ClientDatanodeProtocol_2eproto() in 
ClientDatanodeProtocol.pb.cc.o
  Hdfs::Internal::protobuf_AddDesc_ClientNamenodeProtocol_2eproto() in 
ClientNamenodeProtocol.pb.cc.o
  Hdfs::Internal::protobuf_AddDesc_datatransfer_2eproto() in 
datatransfer.pb.cc.o
  Hdfs::Internal::protobuf_AddDesc_hdfs_2eproto() in hdfs.pb.cc.o
  Hdfs::Internal::protobuf_AddDesc_IpcConnectionContext_2eproto() in 
IpcConnectionContext.pb.cc.o
  Hdfs::Internal::protobuf_AddDesc_ProtobufRpcEngine_2eproto() in 
ProtobufRpcEngine.pb.cc.o
  Hdfs::Internal::protobuf_AddDesc_RpcHeader_2eproto() in 
RpcHeader.pb.cc.o
...
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
make[4]: *** [src/libhdfs3.2.2.31.dylib] Error 1
make[3]: *** [src/CMakeFiles/libhdfs3-shared.dir/all] Error 2
{code}





[jira] [Assigned] (HAWQ-514) Update and add to PXF Rpm 'Summary' and 'Description' metadata

2016-03-10 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-514:
--

Assignee: Kavinder Dhaliwal  (was: Lei Chang)

> Update and add to PXF Rpm 'Summary' and 'Description' metadata
> --
>
> Key: HAWQ-514
> URL: https://issues.apache.org/jira/browse/HAWQ-514
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> Currently the build.gradle script in "pxf" outputs the following information 
> for all pxf rpms:
> {code}
> Summary : The PXF extensions library for HAWQ
> Description :
> {code}
> The summary should read "HAWQ extension framework" and the description should 
> include relevant information about the rpm.





[jira] [Created] (HAWQ-514) Update and add to PXF Rpm 'Summary' and 'Description' metadata

2016-03-10 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-514:
--

 Summary: Update and add to PXF Rpm 'Summary' and 'Description' 
metadata
 Key: HAWQ-514
 URL: https://issues.apache.org/jira/browse/HAWQ-514
 Project: Apache HAWQ
  Issue Type: Improvement
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


Currently the build.gradle script in "pxf" outputs the following information 
for all pxf rpms:

{code}
Summary : The PXF extensions library for HAWQ
Description :
{code}

The summary should read "HAWQ extension framework" and the description should 
include relevant information about the rpm.





[jira] [Updated] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-02-25 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-462:
---
Description: 
On an HA Secure Cluster querying a hive external table works:


{code}
create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) 
location 
('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
format 'custom' (formatter='pxfwritable_import');
select * from pxf_hive;
{code}

but querying the same table via hcatalog does not

{code}
SELECT * FROM hcatalog.default.hive_table;
ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
(hd_work_mgr.c:930)
{code}

This should be fixed by the PR for 
https://issues.apache.org/jira/browse/HAWQ-317

  was:
On an HA Secure Cluster querying a hive external table works:


```
create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) 
location 
('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
format 'custom' (formatter='pxfwritable_import');
select * from pxf_hive;
```

but querying the same table via hcatalog does not

SELECT * FROM hcatalog.default.hive_table;
ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
(hd_work_mgr.c:930)


This should be fixed by the PR for 
https://issues.apache.org/jira/browse/HAWQ-317


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Lei Chang
>
> On an HA Secure Cluster querying a hive external table works:
> {code}
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> {code}
> but querying the same table via hcatalog does not
> {code}
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> {code}
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317





[jira] [Updated] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-02-25 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-462:
---
Description: 
On an HA Secure Cluster querying a hive external table works:

```
create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) 
location 
('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
format 'custom' (formatter='pxfwritable_import');
select * from pxf_hive;
```

but querying the same table via hcatalog does not

SELECT * FROM hcatalog.default.hive_table;
ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
(hd_work_mgr.c:930)


This should be fixed by the PR for 
https://issues.apache.org/jira/browse/HAWQ-317

  was:
On an HA Secure Cluster querying a hive external table works:


create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) 
location 
('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
format 'custom' (formatter='pxfwritable_import');
select * from pxf_hive;


but querying the same table via hcatalog does not

SELECT * FROM hcatalog.default.hive_table;
ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
(hd_work_mgr.c:930)


This should be fixed by the PR for 
https://issues.apache.org/jira/browse/HAWQ-317


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Lei Chang
>
> On an HA Secure Cluster querying a hive external table works:
> ```
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> ```
> but querying the same table via hcatalog does not
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317





[jira] [Updated] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-02-25 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-462:
---
Description: 
On an HA Secure Cluster querying a hive external table works:


```
create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) 
location 
('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
format 'custom' (formatter='pxfwritable_import');
select * from pxf_hive;
```

but querying the same table via hcatalog does not

SELECT * FROM hcatalog.default.hive_table;
ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
(hd_work_mgr.c:930)


This should be fixed by the PR for 
https://issues.apache.org/jira/browse/HAWQ-317

  was:
On an HA Secure Cluster querying a hive external table works:

```
create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) 
location 
('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
format 'custom' (formatter='pxfwritable_import');
select * from pxf_hive;
```

but querying the same table via hcatalog does not

SELECT * FROM hcatalog.default.hive_table;
ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
(hd_work_mgr.c:930)


This should be fixed by the PR for 
https://issues.apache.org/jira/browse/HAWQ-317


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Lei Chang
>
> On an HA Secure Cluster querying a hive external table works:
> ```
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> ```
> but querying the same table via hcatalog does not
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317





[jira] [Created] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-02-25 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-462:
--

 Summary: Querying Hcatalog in HA Secure Environment Fails
 Key: HAWQ-462
 URL: https://issues.apache.org/jira/browse/HAWQ-462
 Project: Apache HAWQ
  Issue Type: Bug
  Components: External Tables, Hcatalog
Reporter: Kavinder Dhaliwal
Assignee: Lei Chang


On an HA Secure Cluster querying a hive external table works:

```
create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) 
location 
('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
format 'custom' (formatter='pxfwritable_import');
select * from pxf_hive;
```

but querying the same table via hcatalog does not
```
SELECT * FROM hcatalog.default.hive_table;
ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
(hd_work_mgr.c:930)
```

This should be fixed by the PR for 
https://issues.apache.org/jira/browse/HAWQ-317





[jira] [Updated] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails

2016-02-25 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal updated HAWQ-462:
---
Description: 
On an HA Secure Cluster querying a hive external table works:


create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) 
location 
('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
format 'custom' (formatter='pxfwritable_import');
select * from pxf_hive;


but querying the same table via hcatalog does not

SELECT * FROM hcatalog.default.hive_table;
ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
(hd_work_mgr.c:930)


This should be fixed by the PR for 
https://issues.apache.org/jira/browse/HAWQ-317

  was:
On an HA Secure Cluster querying a hive external table works:

```
create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) 
location 
('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
format 'custom' (formatter='pxfwritable_import');
select * from pxf_hive;
```

but querying the same table via hcatalog does not
```
SELECT * FROM hcatalog.default.hive_table;
ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
(hd_work_mgr.c:930)
```

This should be fixed by the PR for 
https://issues.apache.org/jira/browse/HAWQ-317


> Querying Hcatalog in HA Secure Environment Fails
> 
>
> Key: HAWQ-462
> URL: https://issues.apache.org/jira/browse/HAWQ-462
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, Hcatalog
>Reporter: Kavinder Dhaliwal
>Assignee: Lei Chang
>
> On an HA Secure Cluster querying a hive external table works:
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b 
> boolean) location 
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') 
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> but querying the same table via hcatalog does not
> SELECT * FROM hcatalog.default.hive_table;
> ERROR:  Failed to acquire a delegation token for uri hdfs://localhost:8020/ 
> (hd_work_mgr.c:930)
> This should be fixed by the PR for 
> https://issues.apache.org/jira/browse/HAWQ-317


