[GitHub] incubator-hawq issue #812: HAWQ-949. Revert serializing floats in pxf string...

2016-07-26 Thread sansanichfb
Github user sansanichfb commented on the issue:

https://github.com/apache/incubator-hawq/pull/812
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (HAWQ-950) PXF support for Float filters encoded in header data

2016-07-26 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-950:
--

 Summary: PXF support for Float filters encoded in header data
 Key: HAWQ-950
 URL: https://issues.apache.org/jira/browse/HAWQ-950
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: External Tables, PXF
Reporter: Kavinder Dhaliwal
Assignee: Goden Yao


HAWQ-779, contributed by [~jiadx], introduced the ability for HAWQ to serialize 
filters on float columns and send the data to PXF. However, PXF is not 
currently capable of parsing float values in the string filter.
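The missing piece — accepting float constants embedded in the serialized filter string on the Java side — could look roughly like the sketch below. This is a hedged illustration only: the class and method names are hypothetical, not the actual org.apache.hawq.pxf.api.FilterParser API.

```java
// Hypothetical sketch: parsing a filter constant that may be either an
// integer or a float. The real FilterParser logic differs; this only
// illustrates falling back from integral to floating-point parsing.
public class FloatFilterSketch {

    // Returns a Long for integral tokens and a Double for float tokens.
    static Number parseConstant(String token) {
        try {
            return Long.parseLong(token);          // e.g. "23456789"
        } catch (NumberFormatException e) {
            return Double.parseDouble(token);      // e.g. "1.5", "3e-4"
        }
    }

    public static void main(String[] args) {
        System.out.println(parseConstant("42"));   // integral constant
        System.out.println(parseConstant("1.5"));  // float constant
    }
}
```

The point of the fallback is that existing integer filters keep their current representation, while float tokens that previously failed to parse become Doubles.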



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Pratheesh Nair (JIRA)
Pratheesh Nair created HAWQ-951:
---

 Summary: PXF not locating Hadoop native libraries needed for Snappy
 Key: HAWQ-951
 URL: https://issues.apache.org/jira/browse/HAWQ-951
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Pratheesh Nair
Assignee: Goden Yao


HAWQ queries fail when we try to read a Snappy-compressed table from 
HCatalog via external tables.

After the following was performed on every PXF host and PXF was restarted, the 
issue was resolved:
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
/usr/hdp/current/hadoop-client/lib/native native
Also, the default pxf-public-classpath should probably contain something like 
the following line:
/usr/hdp/current/hadoop-client/lib/snappy*.jar







[jira] [Updated] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-951:
---
Affects Version/s: 2.0.0.0-incubating

> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> /usr/hdp/current/hadoop-client/lib/snappy*.jar





[jira] [Updated] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-951:
---
Fix Version/s: backlog

> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> /usr/hdp/current/hadoop-client/lib/snappy*.jar





[jira] [Updated] (HAWQ-950) PXF support for Float filters encoded in header data

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-950:
---
Affects Version/s: 2.0.0.0-incubating

> PXF support for Float filters encoded in header data
> 
>
> Key: HAWQ-950
> URL: https://issues.apache.org/jira/browse/HAWQ-950
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: External Tables, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ-779 contributed by [~jiadx] introduced the ability for hawq to serialize 
> filters on float columns and send the data to PXF. However, PXF is not 
> currently capable of parsing float values in the string filter.





[jira] [Updated] (HAWQ-950) PXF support for Float filters encoded in header data

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-950:
---
Fix Version/s: 2.0.1.0-incubating

> PXF support for Float filters encoded in header data
> 
>
> Key: HAWQ-950
> URL: https://issues.apache.org/jira/browse/HAWQ-950
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: External Tables, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ-779 contributed by [~jiadx] introduced the ability for hawq to serialize 
> filters on float columns and send the data to PXF. However, PXF is not 
> currently capable of parsing float values in the string filter.





[jira] [Updated] (HAWQ-950) PXF support for Float filters encoded in header data

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-950:
---
Description: 
HAWQ-779, contributed by [~jiadx], introduced the ability for HAWQ to serialize 
filters on float columns and send the data to PXF. However, PXF is not 
currently capable of parsing float values in the string filter.

We need to:
1. Add support for the float type on the Java side.
2. Add unit tests for this change.

  was:HAWQ-779 contributed by [~jiadx] introduced the ability for hawq to 
serialize filters on float columns and send the data to PXF. However, PXF is 
not currently capable of parsing float values in the string filter.


> PXF support for Float filters encoded in header data
> 
>
> Key: HAWQ-950
> URL: https://issues.apache.org/jira/browse/HAWQ-950
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: External Tables, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ-779 contributed by [~jiadx] introduced the ability for hawq to serialize 
> filters on float columns and send the data to PXF. However, PXF is not 
> currently capable of parsing float values in the string filter.
> We need to 
> 1. add support for float type on JAVA side.
> 2. add unit test for this change.





[jira] [Commented] (HAWQ-950) PXF support for Float filters encoded in header data

2016-07-26 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394554#comment-15394554
 ] 

Goden Yao commented on HAWQ-950:


[~jiadx] - do you think you could continue your work and complete the scenario by 
fixing this issue?

> PXF support for Float filters encoded in header data
> 
>
> Key: HAWQ-950
> URL: https://issues.apache.org/jira/browse/HAWQ-950
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: External Tables, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ-779 contributed by [~jiadx] introduced the ability for hawq to serialize 
> filters on float columns and send the data to PXF. However, PXF is not 
> currently capable of parsing float values in the string filter.
> We need to 
> 1. add support for float type on JAVA side.
> 2. add unit test for this change.





[jira] [Updated] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-951:
---
Description: 
HAWQ queries fail when we try to read a Snappy-compressed table from 
HCatalog via external tables.

After the following was performed on every PXF host and PXF was restarted, the 
issue was resolved:
{code}
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
/usr/hdp/current/hadoop-client/lib/native native
{code}

Also, the default pxf-public-classpath should probably contain something like 
the following line:
{code}
/usr/hdp/current/hadoop-client/lib/snappy*.jar
{code}

  was:
Hawq queries are failing when we try to read Snappy-compressed table from 
hcatalog via external tables.

After the following was performed on every PXF host and restarting, the issue 
was resolved:
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
/usr/hdp/current/hadoop-client/lib/native native
Also, the default pxf-public-classpath should probably contain something like 
the following line:
/usr/hdp/current/hadoop-client/lib/snappy*.jar




> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Commented] (HAWQ-29) Refactor HAWQ InputFormat to support Spark/Scala

2016-07-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-29?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394576#comment-15394576
 ] 

Kyle R Dunn commented on HAWQ-29:
-

[~ronert], [~ronert_obst] - Can you see if this example Scala code works for 
you?

https://github.com/kdunn926/sparkHawq

> Refactor HAWQ InputFormat to support Spark/Scala
> 
>
> Key: HAWQ-29
> URL: https://issues.apache.org/jira/browse/HAWQ-29
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
>Priority: Minor
>  Labels: features
> Fix For: 2.0.1.0-incubating
>
>
> Currently the implementation of HAWQ InputFormat doesn't support Spark/Scala 
> very well. We need to refactor the code to support that feature. More 
> specifically, we need to implement the Serializable interface for some classes.





[jira] [Updated] (HAWQ-952) Merge COPYRIGHT file to NOTICE File

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-952:
---
Fix Version/s: 2.0.0.0-incubating

> Merge COPYRIGHT file to NOTICE File
> ---
>
> Key: HAWQ-952
> URL: https://issues.apache.org/jira/browse/HAWQ-952
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> Per our mentor's suggestion, we should merge the COPYRIGHT file into the NOTICE 
> file rather than keep it as a separate file.





[jira] [Created] (HAWQ-952) Merge COPYRIGHT file to NOTICE File

2016-07-26 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-952:
--

 Summary: Merge COPYRIGHT file to NOTICE File
 Key: HAWQ-952
 URL: https://issues.apache.org/jira/browse/HAWQ-952
 Project: Apache HAWQ
  Issue Type: Task
  Components: Documentation
Reporter: Goden Yao
Assignee: Lei Chang


Per our mentor's suggestion, we should merge the COPYRIGHT file into the NOTICE 
file rather than keep it as a separate file.





[jira] [Created] (HAWQ-953) Pushed down filter fails when querying partitioned Hive tables

2016-07-26 Thread Oleksandr Diachenko (JIRA)
Oleksandr Diachenko created HAWQ-953:


 Summary: Pushed down filter fails when querying partitioned Hive 
tables
 Key: HAWQ-953
 URL: https://issues.apache.org/jira/browse/HAWQ-953
 Project: Apache HAWQ
  Issue Type: Bug
  Components: PXF
Reporter: Oleksandr Diachenko
Assignee: Goden Yao








[jira] [Assigned] (HAWQ-953) Pushed down filter fails when querying partitioned Hive tables

2016-07-26 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko reassigned HAWQ-953:


Assignee: Oleksandr Diachenko  (was: Goden Yao)

> Pushed down filter fails when querying partitioned Hive tables
> -
>
> Key: HAWQ-953
> URL: https://issues.apache.org/jira/browse/HAWQ-953
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>






[jira] [Updated] (HAWQ-953) Pushed down filter fails when querying partitioned Hive tables

2016-07-26 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-953:
-
Description: After the code changes in HAWQ-779, HAWQ started sending filters to 
Hive, which fails on partitioned tables.

> Pushed down filter fails when querying partitioned Hive tables
> -
>
> Key: HAWQ-953
> URL: https://issues.apache.org/jira/browse/HAWQ-953
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>
> After code changes in HAWQ-779 Hawq started sending filters to Hive, which 
> fails on partitioned tables.





[jira] [Created] (HAWQ-954) Seg Fault when writing to External Table where ProjInfo is NULL

2016-07-26 Thread Kavinder Dhaliwal (JIRA)
Kavinder Dhaliwal created HAWQ-954:
--

 Summary: Seg Fault when writing to External Table where ProjInfo 
is NULL
 Key: HAWQ-954
 URL: https://issues.apache.org/jira/browse/HAWQ-954
 Project: Apache HAWQ
  Issue Type: Bug
  Components: External Tables, PXF
Reporter: Kavinder Dhaliwal
Assignee: Goden Yao


HAWQ-927 introduced a regression: when adding headers to the HTTP request 
to PXF, the macro 

{code}
#define EXTPROTOCOL_GET_PROJINFO(fcinfo) (((ExtProtocolData*) 
fcinfo->context)->desc->projInfo)
{code}

evaluates to NULL and causes a segfault. This bug has been observed mainly with 
writable external tables.





[jira] [Assigned] (HAWQ-954) Seg Fault when writing to External Table where ProjInfo is NULL

2016-07-26 Thread Kavinder Dhaliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kavinder Dhaliwal reassigned HAWQ-954:
--

Assignee: Kavinder Dhaliwal  (was: Goden Yao)

> Seg Fault when writing to External Table where ProjInfo is NULL
> ---
>
> Key: HAWQ-954
> URL: https://issues.apache.org/jira/browse/HAWQ-954
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
>
> HAWQ-927 introduced a regression where when adding headers to the http 
> request to PXF the macro 
> {code}
> #define EXTPROTOCOL_GET_PROJINFO(fcinfo) (((ExtProtocolData*) 
> fcinfo->context)->desc->projInfo
> {code}
> Is a NULL value and causes a SegFault. This bug has been observed mainly in 
> external writable tables





[jira] [Updated] (HAWQ-953) Pushed down filter fails when querying partitioned Hive tables

2016-07-26 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-953:
-
Description: 
After the code changes in HAWQ-779, HAWQ started sending filters to Hive, which 
fails on partitioned tables.

```
# \d hive_partitions_all_types
External table "public.hive_partitions_all_types"
 Column |Type | Modifiers 
+-+---
 t1 | text| 
 t2 | text| 
 num1   | integer | 
 dub1   | double precision| 
 dec1   | numeric | 
 tm | timestamp without time zone | 
 r  | real| 
 bg | bigint  | 
 b  | boolean | 
 tn | smallint| 
 sml| smallint| 
 dt | date| 
 vc1| character varying(5)| 
 c1 | character(3)| 
 bin| bytea   | 
Type: readable
Encoding: UTF8
Format type: custom
Format options: formatter 'pxfwritable_import' 
External location: 
pxf://127.0.0.1:51200/hive_many_partitioned_table?PROFILE=Hive

pxfautomation=# SELECT t1, t2, bg FROM hive_partitions_all_types where bg = 
23456789;
ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   MetaException(message:Filtering is supported only on 
partition keys of type string)description   The server encountered an 
internal error that prevented it from fulfilling this request.exception   
javax.servlet.ServletException: MetaException(message:Filtering is supported 
only on partition keys of type string) (libchurl.c:878)

```

  was:After code changes in HAWQ-779 Hawq started sending filters to Hive, 
which fails on partitioned tables.


> Pushed down filter fails when querying partitioned Hive tables
> -
>
> Key: HAWQ-953
> URL: https://issues.apache.org/jira/browse/HAWQ-953
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>
> After code changes in HAWQ-779 Hawq started sending filters to Hive, which 
> fails on partitioned tables.
> ```
> # \d hive_partitions_all_types
> External table "public.hive_partitions_all_types"
>  Column |Type | Modifiers 
> +-+---
>  t1 | text| 
>  t2 | text| 
>  num1   | integer | 
>  dub1   | double precision| 
>  dec1   | numeric | 
>  tm | timestamp without time zone | 
>  r  | real| 
>  bg | bigint  | 
>  b  | boolean | 
>  tn | smallint| 
>  sml| smallint| 
>  dt | date| 
>  vc1| character varying(5)| 
>  c1 | character(3)| 
>  bin| bytea   | 
> Type: readable
> Encoding: UTF8
> Format type: custom
> Format options: formatter 'pxfwritable_import' 
> External location: 
> pxf://127.0.0.1:51200/hive_many_partitioned_table?PROFILE=Hive
> pxfautomation=# SELECT t1, t2, bg FROM hive_partitions_all_types where bg = 
> 23456789;
> ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
> report   message   MetaException(message:Filtering is supported only on 
> partition keys of type string)description   The server encountered an 
> internal error that prevented it from fulfilling this request.exception   
> javax.servlet.ServletException: MetaException(message:Filtering is supported 
> only on partition keys of type string) (libchurl.c:878)
> ```





[jira] [Updated] (HAWQ-953) Pushed down filter fails when querying partitioned Hive tables

2016-07-26 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-953:
-
Description: 
After the code changes in HAWQ-779, HAWQ started sending filters to Hive, which 
fails on partitioned tables.

{code}
# \d hive_partitions_all_types
External table "public.hive_partitions_all_types"
 Column |Type | Modifiers 
+-+---
 t1 | text| 
 t2 | text| 
 num1   | integer | 
 dub1   | double precision| 
 dec1   | numeric | 
 tm | timestamp without time zone | 
 r  | real| 
 bg | bigint  | 
 b  | boolean | 
 tn | smallint| 
 sml| smallint| 
 dt | date| 
 vc1| character varying(5)| 
 c1 | character(3)| 
 bin| bytea   | 
Type: readable
Encoding: UTF8
Format type: custom
Format options: formatter 'pxfwritable_import' 
External location: 
pxf://127.0.0.1:51200/hive_many_partitioned_table?PROFILE=Hive

pxfautomation=# SELECT t1, t2, bg FROM hive_partitions_all_types where bg = 
23456789;
ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   MetaException(message:Filtering is supported only on 
partition keys of type string)description   The server encountered an 
internal error that prevented it from fulfilling this request.exception   
javax.servlet.ServletException: MetaException(message:Filtering is supported 
only on partition keys of type string) (libchurl.c:878)

{code}

  was:
After code changes in HAWQ-779 Hawq started sending filters to Hive, which 
fails on partitioned tables.

```
# \d hive_partitions_all_types
External table "public.hive_partitions_all_types"
 Column |Type | Modifiers 
+-+---
 t1 | text| 
 t2 | text| 
 num1   | integer | 
 dub1   | double precision| 
 dec1   | numeric | 
 tm | timestamp without time zone | 
 r  | real| 
 bg | bigint  | 
 b  | boolean | 
 tn | smallint| 
 sml| smallint| 
 dt | date| 
 vc1| character varying(5)| 
 c1 | character(3)| 
 bin| bytea   | 
Type: readable
Encoding: UTF8
Format type: custom
Format options: formatter 'pxfwritable_import' 
External location: 
pxf://127.0.0.1:51200/hive_many_partitioned_table?PROFILE=Hive

pxfautomation=# SELECT t1, t2, bg FROM hive_partitions_all_types where bg = 
23456789;
ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   MetaException(message:Filtering is supported only on 
partition keys of type string)description   The server encountered an 
internal error that prevented it from fulfilling this request.exception   
javax.servlet.ServletException: MetaException(message:Filtering is supported 
only on partition keys of type string) (libchurl.c:878)

```


> Pushed down filter fails when querying partitioned Hive tables
> -
>
> Key: HAWQ-953
> URL: https://issues.apache.org/jira/browse/HAWQ-953
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
>
> After code changes in HAWQ-779 Hawq started sending filters to Hive, which 
> fails on partitioned tables.
> {code}
> # \d hive_partitions_all_types
> External table "public.hive_partitions_all_types"
>  Column |Type | Modifiers 
> +-+---
>  t1 | text| 
>  t2 | text| 
>  num1   | integer | 
>  dub1   | double precision| 
>  dec1   | numeric | 
>  tm | timestamp without time zone | 
>  r  | real| 
>  bg | bigint  | 
>  b  | boolean | 
>  tn | smallint| 
>  sml| smallint| 
>  dt | date| 
>  vc1| character varying(5)| 
>  c1 | character(3)| 
>  bin| bytea   | 
> Type: readable
> Encoding: UTF8
> Format type: custom
> Format options: formatter 'pxfwritable_import' 
> External location: 
> pxf://127.0.0.1:51200/hive

[GitHub] incubator-hawq pull request #816: HAWQ-953. Reverted sending qualifiers when...

2016-07-26 Thread sansanichfb
GitHub user sansanichfb opened a pull request:

https://github.com/apache/incubator-hawq/pull/816

HAWQ-953. Reverted sending qualifiers when creating PXF plan.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-hawq HAWQ-953

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/816.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #816


commit 4275ae363577348c82b3e4c411bad3f4c872c337
Author: Oleksandr Diachenko 
Date:   2016-07-26T22:25:53Z

HAWQ-953. Reverted sending qualifiers when creating PXF plan.






[jira] [Updated] (HAWQ-779) support more pxf filter pushdown

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-779:
---
Fix Version/s: (was: 2.0.0.0-incubating)
   2.0.1.0-incubating

>  support more pxf filter pushdown
> -
>
> Key: HAWQ-779
> URL: https://issues.apache.org/jira/browse/HAWQ-779
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Devin Jia
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> When I use PXF with HAWQ, I need to read traditional relational database 
> systems and Solr by way of external tables. The project 
> https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext
>  only provides a "WriteAccessor", so I developed two plug-ins, in the project 
> https://github.com/inspur-insight/pxf-plugin . But these two plug-ins needed 
> HAWQ to be modified:
> 1. When getting the list of fragments from the PXF service, push down the 
> 'filterString': modify the create_pxf_plan method in 
> backend/optimizer/plan/createplan.c:
> segdb_work_map = map_hddata_2gp_segments(uri_str,
> total_segs, segs_participating,
> relation, ctx->root->parse->jointree->quals);
> 2. Modify pxffilters.h and pxffilters.c to support the LIKE operation on TEXT 
> types, operators on the Date type, and operators on the Float type.
> 3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE 
> operator.
> I already created a feature branch locally and tested it.
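Item 3 in the quoted list — LIKE support in a filter parser — is commonly implemented by translating the SQL LIKE pattern into a regular expression. The sketch below is a hedged illustration under that assumption; the class and helper names are hypothetical and not the actual FilterParser code.

```java
import java.util.regex.Pattern;

// Hypothetical sketch: evaluating a SQL LIKE pattern by translating the
// wildcards '%' (any sequence) and '_' (any single character) to a regex.
public class LikeFilterSketch {

    // Translate a LIKE pattern into an equivalent Java regex, quoting
    // every non-wildcard character so regex metacharacters stay literal.
    static String likeToRegex(String pattern) {
        StringBuilder sb = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '%')      sb.append(".*");
            else if (c == '_') sb.append('.');
            else               sb.append(Pattern.quote(String.valueOf(c)));
        }
        return sb.toString();
    }

    // String.matches() requires a full match, mirroring LIKE semantics.
    static boolean like(String value, String pattern) {
        return value.matches(likeToRegex(pattern));
    }

    public static void main(String[] args) {
        System.out.println(like("hawq-950", "hawq%"));  // true
        System.out.println(like("pxf", "h_wq"));        // false
    }
}
```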





[jira] [Reopened] (HAWQ-779) support more pxf filter pushdown

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao reopened HAWQ-779:


Reopening, as testing the feature surfaced too many issues.

>  support more pxf filter pushdown
> -
>
> Key: HAWQ-779
> URL: https://issues.apache.org/jira/browse/HAWQ-779
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Devin Jia
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> When I use PXF with HAWQ, I need to read traditional relational database 
> systems and Solr by way of external tables. The project 
> https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext
>  only provides a "WriteAccessor", so I developed two plug-ins, in the project 
> https://github.com/inspur-insight/pxf-plugin . But these two plug-ins needed 
> HAWQ to be modified:
> 1. When getting the list of fragments from the PXF service, push down the 
> 'filterString': modify the create_pxf_plan method in 
> backend/optimizer/plan/createplan.c:
> segdb_work_map = map_hddata_2gp_segments(uri_str,
> total_segs, segs_participating,
> relation, ctx->root->parse->jointree->quals);
> 2. Modify pxffilters.h and pxffilters.c to support the LIKE operation on TEXT 
> types, operators on the Date type, and operators on the Float type.
> 3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE 
> operator.
> I already created a feature branch locally and tested it.





[jira] [Comment Edited] (HAWQ-779) support more pxf filter pushdown

2016-07-26 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394685#comment-15394685
 ] 

Goden Yao edited comment on HAWQ-779 at 7/26/16 10:34 PM:
--

Reopening, as testing the feature surfaced too many issues. [~jiadx], please check 
HAWQ-953 and HAWQ-950 and provide an updated patch for us to merge in.


was (Author: godenyao):
reopen as too many issues after testing the feature.

>  support more pxf filter pushdown
> -
>
> Key: HAWQ-779
> URL: https://issues.apache.org/jira/browse/HAWQ-779
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Devin Jia
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> When I use PXF with HAWQ, I need to read traditional relational database 
> systems and Solr by way of external tables. The project 
> https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext
>  only provides a "WriteAccessor", so I developed two plug-ins, in the project 
> https://github.com/inspur-insight/pxf-plugin . But these two plug-ins needed 
> HAWQ to be modified:
> 1. When getting the list of fragments from the PXF service, push down the 
> 'filterString': modify the create_pxf_plan method in 
> backend/optimizer/plan/createplan.c:
> segdb_work_map = map_hddata_2gp_segments(uri_str,
> total_segs, segs_participating,
> relation, ctx->root->parse->jointree->quals);
> 2. Modify pxffilters.h and pxffilters.c to support the LIKE operation on TEXT 
> types, operators on the Date type, and operators on the Float type.
> 3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE 
> operator.
> I already created a feature branch locally and tested it.





[jira] [Comment Edited] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394720#comment-15394720
 ] 

Kyle R Dunn edited comment on HAWQ-951 at 7/26/16 10:58 PM:


The corrected symlink process is below; I had it incorrect in the Zendesk 
ticket.


mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native



was (Author: kdunn926):
The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

```
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native
```


> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Commented] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394720#comment-15394720
 ] 

Kyle R Dunn commented on HAWQ-951:
--

The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

```
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native
```


> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Comment Edited] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394720#comment-15394720
 ] 

Kyle R Dunn edited comment on HAWQ-951 at 7/26/16 10:58 PM:


The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

{{{ mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native }}}


was (Author: kdunn926):
The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

{{{
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native
}}}

> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Comment Edited] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394720#comment-15394720
 ] 

Kyle R Dunn edited comment on HAWQ-951 at 7/26/16 10:59 PM:


The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

{{ 
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native 
}}


was (Author: kdunn926):
The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

{{{ mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native }}}

> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Comment Edited] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394720#comment-15394720
 ] 

Kyle R Dunn edited comment on HAWQ-951 at 7/26/16 10:58 PM:


The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

{{{
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native
}}}


was (Author: kdunn926):
The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.


mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native


> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Comment Edited] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394720#comment-15394720
 ] 

Kyle R Dunn edited comment on HAWQ-951 at 7/26/16 10:59 PM:


The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

{code:shell}
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native 
{code}


was (Author: kdunn926):
The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

{{ 
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native 
}}

> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Comment Edited] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394720#comment-15394720
 ] 

Kyle R Dunn edited comment on HAWQ-951 at 7/26/16 11:00 PM:


The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

{code:none}
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native 
{code}


was (Author: kdunn926):
The corrected symlink process is below, I had it incorrect in the Zendesk 
ticket.

{code:shell}
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native 
{code}

> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Commented] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394747#comment-15394747
 ] 

Goden Yao commented on HAWQ-951:


I don't see a difference.

> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Commented] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394749#comment-15394749
 ] 

Kyle R Dunn commented on HAWQ-951:
--

cd to {{/usr/lib/hadoop/lib}}, rather than {{/usr/lib/hadoop}}

> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Commented] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394757#comment-15394757
 ] 

Goden Yao commented on HAWQ-951:


OK, corrected it in the description.

> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[jira] [Updated] (HAWQ-951) PXF not locating Hadoop native libraries needed for Snappy

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-951:
---
Description: 
HAWQ queries fail when we try to read a Snappy-compressed table from 
HCatalog via external tables.

After the following was performed on every PXF host and restarting, the issue 
was resolved:
{code}
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
/usr/hdp/current/hadoop-client/lib/native native
{code}

Also, the default pxf-public-classpath should probably contain something like 
the following line:
{code}
/usr/hdp/current/hadoop-client/lib/snappy*.jar
{code}

  was:
Hawq queries are failing when we try to read Snappy-compressed table from 
hcatalog via external tables.

After the following was performed on every PXF host and restarting, the issue 
was resolved:
{code}
mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop && ln -s 
/usr/hdp/current/hadoop-client/lib/native native
{code}

Also, the default pxf-public-classpath should probably contain something like 
the following line:
{code}
/usr/hdp/current/hadoop-client/lib/snappy*.jar
{code}


> PXF not locating Hadoop native libraries needed for Snappy
> --
>
> Key: HAWQ-951
> URL: https://issues.apache.org/jira/browse/HAWQ-951
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Pratheesh Nair
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Hawq queries are failing when we try to read Snappy-compressed table from 
> hcatalog via external tables.
> After the following was performed on every PXF host and restarting, the issue 
> was resolved:
> {code}
> mkdir -p /usr/lib/hadoop/lib && cd /usr/lib/hadoop/lib && ln -s 
> /usr/hdp/current/hadoop-client/lib/native native
> {code}
> Also, the default pxf-public-classpath should probably contain something like 
> the following line:
> {code}
> /usr/hdp/current/hadoop-client/lib/snappy*.jar
> {code}





[GitHub] incubator-hawq pull request #817: HAWQ-954. Check that ExternalSelectDesc re...

2016-07-26 Thread kavinderd
GitHub user kavinderd opened a pull request:

https://github.com/apache/incubator-hawq/pull/817

HAWQ-954. Check that ExternalSelectDesc reference exists



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kavinderd/incubator-hawq HAWQ-954

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/817.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #817


commit 633f398d330fe5581951e0fa3773bdfa02a39fea
Author: Kavinder Dhaliwal 
Date:   2016-07-26T23:09:38Z

HAWQ-954. Check that ExternalSelectDesc reference exists




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #816: HAWQ-953. Reverted sending qualifiers when...

2016-07-26 Thread sansanichfb
Github user sansanichfb closed the pull request at:

https://github.com/apache/incubator-hawq/pull/816




[GitHub] incubator-hawq issue #817: HAWQ-954. Check that ExternalSelectDesc reference...

2016-07-26 Thread sansanichfb
Github user sansanichfb commented on the issue:

https://github.com/apache/incubator-hawq/pull/817
  
+1




[GitHub] incubator-hawq issue #814: HAWQ-922. Add basic verification for various pl a...

2016-07-26 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/814
  
+1




[GitHub] incubator-hawq issue #812: HAWQ-949. Revert serializing floats in pxf string...

2016-07-26 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/812
  
+1




[jira] [Updated] (HAWQ-949) Hawq sending unsupported serialized float filter data to PXF

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-949:
---
Fix Version/s: 2.0.1.0-incubating

> Hawq sending unsupported serialized float filter data to PXF
> 
>
> Key: HAWQ-949
> URL: https://issues.apache.org/jira/browse/HAWQ-949
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ-779 introduced support on the C side for float values to be serialized 
> into the filter header sent to PXF. However, changes were not made to the 
> FilterParser class in PXF to support parsing non-integer numeric types.





[jira] [Updated] (HAWQ-953) Pushed down filter fails when querying partitioned Hive tables

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-953:
---
Fix Version/s: 2.0.1.0-incubating

> Pushed down filter fails when querying partitioned Hive tables
> -
>
> Key: HAWQ-953
> URL: https://issues.apache.org/jira/browse/HAWQ-953
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> After code changes in HAWQ-779 Hawq started sending filters to Hive, which 
> fails on partitioned tables.
> {code}
> # \d hive_partitions_all_types
> External table "public.hive_partitions_all_types"
>  Column |Type | Modifiers 
> +-+---
>  t1 | text| 
>  t2 | text| 
>  num1   | integer | 
>  dub1   | double precision| 
>  dec1   | numeric | 
>  tm | timestamp without time zone | 
>  r  | real| 
>  bg | bigint  | 
>  b  | boolean | 
>  tn | smallint| 
>  sml| smallint| 
>  dt | date| 
>  vc1| character varying(5)| 
>  c1 | character(3)| 
>  bin| bytea   | 
> Type: readable
> Encoding: UTF8
> Format type: custom
> Format options: formatter 'pxfwritable_import' 
> External location: 
> pxf://127.0.0.1:51200/hive_many_partitioned_table?PROFILE=Hive
> pxfautomation=# SELECT t1, t2, bg FROM hive_partitions_all_types where bg = 
> 23456789;
> ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
> report   message   MetaException(message:Filtering is supported only on 
> partition keys of type string)description   The server encountered an 
> internal error that prevented it from fulfilling this request.exception   
> javax.servlet.ServletException: MetaException(message:Filtering is supported 
> only on partition keys of type string) (libchurl.c:878)
> {code}





[jira] [Updated] (HAWQ-954) Seg Fault when writing to External Table where ProjInfo is NULL

2016-07-26 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-954:
---
Fix Version/s: 2.0.1.0-incubating

> Seg Fault when writing to External Table where ProjInfo is NULL
> ---
>
> Key: HAWQ-954
> URL: https://issues.apache.org/jira/browse/HAWQ-954
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables, PXF
>Reporter: Kavinder Dhaliwal
>Assignee: Kavinder Dhaliwal
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ-927 introduced a regression: when adding headers to the HTTP 
> request to PXF, the macro 
> {code}
> #define EXTPROTOCOL_GET_PROJINFO(fcinfo) (((ExtProtocolData*) 
> fcinfo->context)->desc->projInfo)
> {code}
> dereferences a NULL pointer and causes a segfault. This bug has been 
> observed mainly with writable external tables.





[jira] [Created] (HAWQ-955) Add script for feature test running in parallel

2016-07-26 Thread hongwu (JIRA)
hongwu created HAWQ-955:
---

 Summary: Add script for feature test running in parallel
 Key: HAWQ-955
 URL: https://issues.apache.org/jira/browse/HAWQ-955
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Tests
Reporter: hongwu
Assignee: Jiali Yao








[jira] [Updated] (HAWQ-955) Add script for feature test running in parallel

2016-07-26 Thread hongwu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongwu updated HAWQ-955:

Fix Version/s: 2.0.1.0-incubating

> Add script for feature test running in parallel
> ---
>
> Key: HAWQ-955
> URL: https://issues.apache.org/jira/browse/HAWQ-955
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Tests
>Reporter: hongwu
>Assignee: Jiali Yao
> Fix For: 2.0.1.0-incubating
>
>






[jira] [Assigned] (HAWQ-955) Add script for feature test running in parallel

2016-07-26 Thread hongwu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongwu reassigned HAWQ-955:
---

Assignee: hongwu  (was: Jiali Yao)

> Add script for feature test running in parallel
> ---
>
> Key: HAWQ-955
> URL: https://issues.apache.org/jira/browse/HAWQ-955
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Tests
>Reporter: hongwu
>Assignee: hongwu
> Fix For: 2.0.1.0-incubating
>
>






[GitHub] incubator-hawq issue #815: HAWQ-898. Add feature test for COPY with new test...

2016-07-26 Thread jiny2
Github user jiny2 commented on the issue:

https://github.com/apache/incubator-hawq/pull/815
  
Reverted the COPY checkinstall-good case, because the transactions and 
row_types tests need it to prepare data




[jira] [Updated] (HAWQ-955) Add script for feature test running in parallel

2016-07-26 Thread hongwu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hongwu updated HAWQ-955:

Issue Type: Sub-task  (was: Improvement)
Parent: HAWQ-832

> Add script for feature test running in parallel
> ---
>
> Key: HAWQ-955
> URL: https://issues.apache.org/jira/browse/HAWQ-955
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: hongwu
>Assignee: hongwu
> Fix For: 2.0.1.0-incubating
>
>






[jira] [Commented] (HAWQ-950) PXF support for Float filters encoded in header data

2016-07-26 Thread Devin Jia (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395075#comment-15395075
 ] 

Devin Jia commented on HAWQ-950:


ok

> PXF support for Float filters encoded in header data
> 
>
> Key: HAWQ-950
> URL: https://issues.apache.org/jira/browse/HAWQ-950
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: External Tables, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ-779, contributed by [~jiadx], introduced the ability for HAWQ to 
> serialize filters on float columns and send the data to PXF. However, PXF is 
> not currently capable of parsing float values in the string filter.
> We need to:
> 1. add support for the float type on the Java side;
> 2. add a unit test for this change.
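
The parsing gap described above can be sketched as follows. This is a minimal, hypothetical example of float-aware constant scanning: the `a<attr>c<const>o<op>` token layout mirrors the style of the filter strings that `org.apache.hawq.pxf.api.FilterParser` consumes, but the class and method below are illustrative assumptions, not the actual PXF code:

```java
// Hypothetical sketch of float-aware constant parsing for a serialized
// filter string of the form a<attr>c<const>o<op>. Only the constant
// scanning after a 'c' token is shown; FilterParser integration is assumed.
public class FloatFilterSketch {
    /** Scan the constant that follows a 'c' token, accepting ints and floats. */
    static Number parseConstant(String filter, int start) {
        int end = start;
        while (end < filter.length()) {
            char ch = filter.charAt(end);
            if (!Character.isDigit(ch) && ch != '.' && ch != '-') {
                break;                  // stop at the next token ('o', 'a', ...)
            }
            end++;
        }
        String token = filter.substring(start, end);
        // An integer-only parser (Long.parseLong) throws on "1.5"; accepting
        // '.' and falling back to Double covers the float constants that
        // HAWQ-779 now serializes on the C side.
        if (token.indexOf('.') >= 0) {
            return Double.parseDouble(token);   // boxed as Double
        }
        return Long.parseLong(token);           // boxed as Long
    }

    public static void main(String[] args) {
        System.out.println(parseConstant("a0c1.5o2", 3));       // 1.5
        System.out.println(parseConstant("a0c23456789o2", 3));  // 23456789
    }
}
```

Keeping integer constants as `Long` while returning `Double` only when a decimal point is present preserves the existing integer behavior, which is why the two cases return separately rather than through a single numeric expression.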





[jira] [Commented] (HAWQ-779) support more pxf filter pushdown

2016-07-26 Thread Devin Jia (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395076#comment-15395076
 ] 

Devin Jia commented on HAWQ-779:


1. HAWQ-953: Cause of the problem: Hive's JDO filter pushdown only supports 
String columns as partition keys. You can modify hive-site.xml and add the 
following:

<property>
  <name>hive.metastore.integral.jdo.pushdown</name>
  <value>true</value>
</property>

Or modify the HAWQ PXF class HiveDataFragmenter.java to allow only string 
filters.

2. HAWQ-950: ok
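
The second workaround above (allowing only string filters in HiveDataFragmenter) could look roughly like the sketch below. This is an assumption-laden illustration: the real fragmenter assembles its metastore filter from PXF's parsed filter objects, whereas here predicates are plain (column, value) pairs and the class name is invented:

```java
// Hypothetical sketch: drop any pushed-down predicate whose partition
// column is not string-typed, so the Hive metastore never receives a
// filter it cannot evaluate (MetaException: "Filtering is supported only
// on partition keys of type string").
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class StringOnlyPartitionFilter {
    /** Build a metastore filter from (column, value) pairs, keeping
     *  only columns whose partition key type is "string". */
    static String buildFilter(Map<String, String> partitionTypes,
                              List<String[]> predicates) {
        List<String> parts = new ArrayList<>();
        for (String[] p : predicates) {            // p = {column, value}
            if ("string".equals(partitionTypes.get(p[0]))) {
                parts.add(p[0] + " = \"" + p[1] + "\"");
            }
        }
        return String.join(" and ", parts);        // "" when nothing survives
    }

    public static void main(String[] args) {
        Map<String, String> types = Map.of("dt", "string", "bg", "bigint");
        List<String[]> preds = List.of(new String[]{"dt", "2016-07-26"},
                                       new String[]{"bg", "23456789"});
        System.out.println(buildFilter(types, preds));  // dt = "2016-07-26"
    }
}
```

Predicates on non-string partition keys are simply not pushed down; assuming HAWQ re-applies its qualifiers after the scan, as it does for any non-pushed filter, results remain correct.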

>  support more pxf filter pushdown
> -
>
> Key: HAWQ-779
> URL: https://issues.apache.org/jira/browse/HAWQ-779
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Devin Jia
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> When I use PXF with HAWQ, I need to read traditional relational database 
> systems and Solr by way of external tables. The project 
> https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext
> only provides a "WriteAccessor", so I developed two plug-ins: 
> https://github.com/inspur-insight/pxf-plugin . But these two plug-ins need 
> HAWQ to be modified:
> 1. When getting the list of fragments from the PXF service, push down the 
> 'filterString'. Modify backend/optimizer/plan/createplan.c's 
> create_pxf_plan method:
> segdb_work_map = map_hddata_2gp_segments(uri_str,
> total_segs, segs_participating,
> relation, ctx->root->parse->jointree->quals);
> 2. Modify pxffilters.h and pxffilters.c to support the TEXT LIKE operation, 
> Date operators, and Float operators.
> 3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE 
> operator.
> I already created a feature branch locally and tested it.





[jira] [Comment Edited] (HAWQ-779) support more pxf filter pushdown

2016-07-26 Thread Devin Jia (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395076#comment-15395076
 ] 

Devin Jia edited comment on HAWQ-779 at 7/27/16 5:35 AM:
-

1. HAWQ-953: Cause of the problem: Hive's JDO filter pushdown only supports 
String partition keys. You can modify hive-site.xml and add the following:

<property>
  <name>hive.metastore.integral.jdo.pushdown</name>
  <value>true</value>
</property>

Or modify the HAWQ PXF class HiveDataFragmenter.java to allow only string 
filters.

2. HAWQ-950: ok


was (Author: jiadx):
1. HAWQ-953 : Cause of the problem: Hive of JDO filter pushdown only support 
String column on PartitionKey. Can modify the file, add the following:
 
hive.metastore.integral.jdo.pushdown
true
 
Or , modify hawq pxf class-HiveDataFragmenter.java, allowing only string filter 
.

2.HAWQ-950 :  ok

>  support more pxf filter pushdown
> -
>
> Key: HAWQ-779
> URL: https://issues.apache.org/jira/browse/HAWQ-779
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Devin Jia
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> When I use PXF with HAWQ, I need to read traditional relational database 
> systems and Solr by way of external tables. The project 
> https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext
> only provides a "WriteAccessor", so I developed two plug-ins: 
> https://github.com/inspur-insight/pxf-plugin . But these two plug-ins need 
> HAWQ to be modified:
> 1. When getting the list of fragments from the PXF service, push down the 
> 'filterString'. Modify backend/optimizer/plan/createplan.c's 
> create_pxf_plan method:
> segdb_work_map = map_hddata_2gp_segments(uri_str,
> total_segs, segs_participating,
> relation, ctx->root->parse->jointree->quals);
> 2. Modify pxffilters.h and pxffilters.c to support the TEXT LIKE operation, 
> Date operators, and Float operators.
> 3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE 
> operator.
> I already created a feature branch locally and tested it.





[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scripts for feature test run...

2016-07-26 Thread xunzhang
GitHub user xunzhang opened a pull request:

https://github.com/apache/incubator-hawq/pull/818

HAWQ-955. Add scripts for feature test running in parallel.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xunzhang/incubator-hawq HAWQ-955

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/818.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #818


commit f5ffb50304207ade51beefd353a44ee8b34f9bc9
Author: xunzhang 
Date:   2016-07-27T06:12:30Z

HAWQ-955. Add scriptS for feature test running in parallel.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #818: HAWQ-955. Add scriptS for feature test running in...

2016-07-26 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/818
  
cc @yaoj2 @paul-guo- 




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-26 Thread yaoj2
Github user yaoj2 commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72385745
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,74 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
+
+  PSQL=${GPHOME}/bin/psql
+  TEST_DB_NAME="hawq_feature_test"
+
+  HAWQ_DB=${PGDATABASE:-"postgres"}
+  HAWQ_HOST=${PGHOST:-"localhost"}
+  HAWQ_PORT=${PGPORT:-"5432"}
+
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "create database $TEST_DB_NAME;" > /dev/null 2>&1
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set lc_messages to 'C';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set lc_messages failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set lc_monetary to 'C';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set lc_monetary failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set lc_numeric to 'C';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set lc_numeric failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set lc_time to 'C';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set lc_time failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set timezone_abbreviations to 'Default';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set timezone_abbreviations failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set timezone to 'PST8PDT';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set timezone failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set datestyle to 'postgres,MDY';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set datestyle failed."
+    exit 1
+  fi
--- End diff --

It would be better to wrap the run-sql call and error check in a function.
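
A minimal sketch of the reviewer's suggestion, not the PR's actual code: the helper name (run_sql) and messages are illustrative, and PSQL defaults to the no-op `true` here so the sketch runs standalone, whereas the real script would use ${GPHOME}/bin/psql.

```shell
#!/bin/bash
# Sketch only: collapse the eight copy-pasted "psql -c ... ; if failed, exit"
# blocks from the diff into a single helper.

PSQL=${PSQL:-true}               # illustrative default; real script: ${GPHOME}/bin/psql
HAWQ_DB=${PGDATABASE:-"postgres"}
HAWQ_HOST=${PGHOST:-"localhost"}
HAWQ_PORT=${PGPORT:-"5432"}
TEST_DB_NAME="hawq_feature_test"

# run_sql SQL DESC -- executes SQL against the test cluster; on failure it
# prints "DESC failed." and aborts.
run_sql() {
  local sql="$1" desc="$2"
  if ! $PSQL -d "$HAWQ_DB" -h "$HAWQ_HOST" -p "$HAWQ_PORT" \
       -c "$sql" > /dev/null 2>&1; then
    echo "$desc failed."
    exit 1
  fi
}

# Each repeated block in the diff becomes a single call:
run_sql "alter database $TEST_DB_NAME set lc_messages to 'C';" \
        "Alter database $TEST_DB_NAME set lc_messages"
run_sql "alter database $TEST_DB_NAME set timezone to 'PST8PDT';" \
        "Alter database $TEST_DB_NAME set timezone"
```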





[GitHub] incubator-hawq issue #818: HAWQ-955. Add scriptS for feature test running in...

2016-07-26 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/818
  
Usage: `./src/test/feature/run-feature-test.sh 8 ./src/test/feature/feature-test --gtest_filter=xx`
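
The wrapper's internals are not shown in this thread beyond init_hawq_test; as a hedged illustration of how a Google Test binary can be run by N parallel workers, the sketch below (function name run_sharded is made up) uses gtest's built-in sharding environment variables GTEST_TOTAL_SHARDS and GTEST_SHARD_INDEX, which make each invocation execute a disjoint 1/N slice of the test cases.

```shell
#!/bin/bash
# run_sharded N BINARY [ARGS...] -- illustrative sketch, not run-feature-test.sh.
# Launches N copies of BINARY in the background; gtest's sharding env vars tell
# each copy which 1/N slice of the tests to run.
run_sharded() {
  local workers=$1 binary=$2; shift 2
  local pids=() i pid status=0
  for ((i = 0; i < workers; i++)); do
    GTEST_TOTAL_SHARDS=$workers GTEST_SHARD_INDEX=$i "$binary" "$@" &
    pids+=($!)
  done
  for pid in "${pids[@]}"; do
    wait "$pid" || status=1    # any failing shard fails the whole run
  done
  return $status
}

# e.g.: run_sharded 8 ./src/test/feature/feature-test --gtest_filter=xx
```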




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-26 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72386072
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,74 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
+
+  PSQL=${GPHOME}/bin/psql
+  TEST_DB_NAME="hawq_feature_test"
+
+  HAWQ_DB=${PGDATABASE:-"postgres"}
+  HAWQ_HOST=${PGHOST:-"localhost"}
+  HAWQ_PORT=${PGPORT:-"5432"}
+
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "create database $TEST_DB_NAME;" > /dev/null 2>&1
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set lc_messages to 'C';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set lc_messages failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set lc_monetary to 'C';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set lc_monetary failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set lc_numeric to 'C';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set lc_numeric failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set lc_time to 'C';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set lc_time failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set timezone_abbreviations to 'Default';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set timezone_abbreviations failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set timezone to 'PST8PDT';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set timezone failed."
+    exit 1
+  fi
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "alter database $TEST_DB_NAME set datestyle to 'postgres,MDY';" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "Alter database hawq_feature_test set datestyle failed."
+    exit 1
+  fi
--- End diff --

Good suggestion! Let me do this.




[GitHub] incubator-hawq issue #818: HAWQ-955. Add scriptS for feature test running in...

2016-07-26 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/818
  
also @radarwave 

