[jira] [Deleted] (HAWQ-1648) HAWQ-1628

2018-08-09 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo deleted HAWQ-1648:
--


> HAWQ-1628
> -
>
> Key: HAWQ-1648
> URL: https://issues.apache.org/jira/browse/HAWQ-1648
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Oushu_WangZiming
>Assignee: Radar Lei
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-786) Framework to support pluggable formats and file systems

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-786.


> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ Pluggable Storage Framework.pdf, 
> ORCdesign-v0.1-2016-06-17.pdf
>
>
> In the current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework to support native C/C++ pluggable formats is 
> needed to support ORC more easily, and it can also potentially be used for 
> fast external data access.
> There are also many requests for supporting S3, Ceph, and other file 
> systems; since this is closely related to pluggable formats, this JIRA 
> proposes a framework to support both.





[jira] [Resolved] (HAWQ-786) Framework to support pluggable formats and file systems

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-786.
--
Resolution: Fixed

Closing this feature since the pluggable storage framework is available in the 
HAWQ code base. The subsequent features, including the HDFS protocol, ORC, 
TEXT/CSV, and the Hive protocol, will be tracked in separate issues.

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ Pluggable Storage Framework.pdf, 
> ORCdesign-v0.1-2016-06-17.pdf
>
>
> In the current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework to support native C/C++ pluggable formats is 
> needed to support ORC more easily, and it can also potentially be used for 
> fast external data access.
> There are also many requests for supporting S3, Ceph, and other file 
> systems; since this is closely related to pluggable formats, this JIRA 
> proposes a framework to support both.





[jira] [Assigned] (HAWQ-1630) Support TEXT/CSV format using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1630:
-

Assignee: oushu1longziyang1

> Support TEXT/CSV format using pluggable storage framework
> -
>
> Key: HAWQ-1630
> URL: https://issues.apache.org/jira/browse/HAWQ-1630
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: oushu1longziyang1
>Priority: Major
> Fix For: backlog
>
>
> Add the TEXT/CSV format using the pluggable storage framework so that users can 
> store data in tables with the TEXT/CSV format. The TEXT/CSV format in the 
> pluggable storage framework will have better performance and extensibility 
> compared with that in the external table framework.





[jira] [Commented] (HAWQ-1629) Support ORC format using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558018#comment-16558018
 ] 

Ruilong Huo commented on HAWQ-1629:
---

Assigning to Long Ziyang to add the feature.

> Support ORC format using pluggable storage framework
> 
>
> Key: HAWQ-1629
> URL: https://issues.apache.org/jira/browse/HAWQ-1629
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: oushu1longziyang1
>Priority: Major
> Fix For: backlog
>
>
> Add the ORC format using the pluggable storage framework so that users can store 
> data in tables with the ORC format, which is a widely adopted format and offers 
> potential performance gains through statistics.





[jira] [Assigned] (HAWQ-1629) Support ORC format using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1629:
-

Assignee: oushu1longziyang1

> Support ORC format using pluggable storage framework
> 
>
> Key: HAWQ-1629
> URL: https://issues.apache.org/jira/browse/HAWQ-1629
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: oushu1longziyang1
>Priority: Major
> Fix For: backlog
>
>
> Add the ORC format using the pluggable storage framework so that users can store 
> data in tables with the ORC format, which is a widely adopted format and offers 
> potential performance gains through statistics.





[jira] [Assigned] (HAWQ-1628) Support HDFS protocol using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1628:
-

Assignee: Oushu_WangZiming

> Support HDFS protocol using pluggable storage framework
> ---
>
> Key: HAWQ-1628
> URL: https://issues.apache.org/jira/browse/HAWQ-1628
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Oushu_WangZiming
>Priority: Major
> Fix For: backlog
>
>
> The purpose of supporting the HDFS protocol using the pluggable storage 
> framework is twofold:
> 1. demonstrate how to add a protocol in the pluggable storage 
> framework
> 2. allow data formats including ORC, TEXT, and CSV to be added to HAWQ using 
> the pluggable storage framework





[jira] [Commented] (HAWQ-1628) Support HDFS protocol using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558011#comment-16558011
 ] 

Ruilong Huo commented on HAWQ-1628:
---

Assigning to Wang Ziming to add the feature.

> Support HDFS protocol using pluggable storage framework
> ---
>
> Key: HAWQ-1628
> URL: https://issues.apache.org/jira/browse/HAWQ-1628
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Oushu_WangZiming
>Priority: Major
> Fix For: backlog
>
>
> The purpose of supporting the HDFS protocol using the pluggable storage 
> framework is twofold:
> 1. demonstrate how to add a protocol in the pluggable storage 
> framework
> 2. allow data formats including ORC, TEXT, and CSV to be added to HAWQ using 
> the pluggable storage framework





[jira] [Updated] (HAWQ-1637) Compile apache hawq failure due to Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate-jar on osx 10.11

2018-07-04 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1637:
--
Description: 
Following the instructions ([https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install]) 
to build Apache HAWQ on OS X 10.11 with /usr/local/bin/mvn package -DskipTests, 
the build fails due to: Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.9.1:aggregate-jar

 
{code:java}
[INFO] --- maven-jar-plugin:2.4:test-jar (default) @ hawq-hadoop ---
[WARNING] JAR will be empty - no content was marked for inclusion!
[WARNING] The following dependencies could not be resolved at this point of the build but seem to be part of the reactor:
[WARNING] o com.pivotal.hawq:hawq-mapreduce-common:jar:1.1.0 (compile)
[WARNING] Try running the build up to the lifecycle phase "package"
[WARNING] The following dependencies could not be resolved at this point of the build but seem to be part of the reactor:
[WARNING] o com.pivotal.hawq:hawq-mapreduce-common:jar:1.1.0 (compile)
[WARNING] Try running the build up to the lifecycle phase "package"
[WARNING] The following dependencies could not be resolved at this point of the build but seem to be part of the reactor:
[WARNING] o com.pivotal.hawq:hawq-mapreduce-ao:jar:1.1.0 (compile)
[WARNING] o com.pivotal.hawq:hawq-mapreduce-common:jar:1.1.0 (compile)
[WARNING] o com.pivotal.hawq:hawq-mapreduce-parquet:jar:1.1.0 (compile)
[WARNING] Try running the build up to the lifecycle phase "package"
[INFO]
[INFO] --- maven-javadoc-plugin:2.9.1:aggregate-jar (default) @ hawq-hadoop ---
[WARNING] The dependency: [com.pivotal.hawq:hawq-mapreduce-common:jar:1.1.0] can't be resolved but has been found in the reactor (probably snapshots). This dependency has been excluded from the Javadoc classpath. You should rerun javadoc after executing mvn install.
[WARNING] IGNORED to add some artifacts in the classpath. See above.
[WARNING] The dependency: [com.pivotal.hawq:hawq-mapreduce-common:jar:1.1.0] can't be resolved but has been found in the reactor (probably snapshots). This dependency has been excluded from the Javadoc classpath. You should rerun javadoc after executing mvn install.
[WARNING] IGNORED to add some artifacts in the classpath. See above.
[WARNING] The dependency: [com.pivotal.hawq:hawq-mapreduce-common:jar:1.1.0] can't be resolved but has been found in the reactor (probably snapshots). This dependency has been excluded from the Javadoc classpath. You should rerun javadoc after executing mvn install.
[WARNING] The dependency: [com.pivotal.hawq:hawq-mapreduce-ao:jar:1.1.0] can't be resolved but has been found in the reactor (probably snapshots). This dependency has been excluded from the Javadoc classpath. You should rerun javadoc after executing mvn install.
[WARNING] The dependency: [com.pivotal.hawq:hawq-mapreduce-parquet:jar:1.1.0] can't be resolved but has been found in the reactor (probably snapshots). This dependency has been excluded from the Javadoc classpath. You should rerun javadoc after executing mvn install.
[WARNING] IGNORED to add some artifacts in the classpath. See above.
[ERROR] import parquet.hadoop.ParquetInputFormat;
[ERROR] ^
[ERROR] /Users/wangziming/workplace/incubator-hawq/contrib/hawq-hadoop/hawq-mapreduce-parquet/src/main/java/com/pivotal/hawq/mapreduce/parquet/HAWQParquetInputFormat.java:43: error: cannot find symbol
[ERROR] public class HAWQParquetInputFormat extends ParquetInputFormat {
[ERROR] ^
[ERROR] symbol: class ParquetInputFormat
[ERROR] /Users/wangziming/workplace/incubator-hawq/contrib/hawq-hadoop/hawq-mapreduce-parquet/src/main/java/com/pivotal/hawq/mapreduce/parquet/convert/HAWQBoxConverter.java:25: error: package parquet.io.api does not exist
[ERROR] import parquet.io.api.Converter;
[ERROR] ^
[ERROR] /Users/wangziming/workplace/incubator-hawq/contrib/hawq-hadoop/hawq-mapreduce-parquet/src/main/java/com/pivotal/hawq/mapreduce/parquet/convert/HAWQBoxConverter.java:26: error: package parquet.io.api does not exist
[ERROR] import parquet.io.api.GroupConverter;
[ERROR] ^
[ERROR] /Users/wangziming/workplace/incubator-hawq/contrib/hawq-hadoop/hawq-mapreduce-parquet/src/main/java/com/pivotal/hawq/mapreduce/parquet/convert/HAWQBoxConverter.java:27: error: package parquet.io.api does not exist
[ERROR] import parquet.io.api.PrimitiveConverter;
[ERROR] ^
[ERROR] /Users/wangziming/workplace/incubator-hawq/contrib/hawq-hadoop/hawq-mapreduce-parquet/src/main/java/com/pivotal/hawq/mapreduce/parquet/convert/HAWQBoxConverter.java:38: error: cannot find symbol
[ERROR] public class HAWQBoxConverter extends GroupConverter {
[ERROR] ^
[ERROR] symbol: class GroupConverter
[ERROR] /Users/wangziming/workplace/incubator-hawq/contrib/hawq-hadoop/hawq-mapreduce-parquet/src/main/java/com/pivotal/hawq/mapreduce/parquet/convert/HAWQBoxConverter.java:41: error: cannot find symbol
[ERROR] private Converter[] converters;
[ERROR] ^
[ERROR] symbol: class Converter
[ERROR] location: class 
{code}

[jira] [Commented] (HAWQ-1636) Compile apache hawq failure due to unsupported syntax in libyarn on osx 10.11

2018-07-04 Thread Ruilong Huo (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16533236#comment-16533236
 ] 

Ruilong Huo commented on HAWQ-1636:
---

Assigning to oushu1wangziming1 for fix.

> Compile apache hawq failure due to unsupported syntax in libyarn on osx 10.11
> -
>
> Key: HAWQ-1636
> URL: https://issues.apache.org/jira/browse/HAWQ-1636
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.3.0.0-incubating
>Reporter: WangZiming
>Assignee: WangZiming
>Priority: Major
>
> Following the instructions 
> ([https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install]) to 
> build Apache HAWQ on OS X 10.11, the build fails due to unsupported syntax in libyarn:
> {code:java}
> 1. ./configure
> 2. make
> [ 9%] Building CXX object 
> src/CMakeFiles/libyarn-shared.dir/libyarnclient/ApplicationClient.cpp.o
> cd /Users/wangziming/workplace/incubator-hawq/depends/libyarn/build/src && 
> /usr/bin/g++ -DTEST_HDFS_PREFIX=\"./\" -D_GNU_SOURCE -D__STDC_FORMAT_MACROS 
> -Dlibyarn_shared_EXPORTS 
> -I/Users/wangziming/workplace/incubator-hawq/depends/thirdparty/googletest/googletest/include
>  
> -I/Users/wangziming/workplace/incubator-hawq/depends/thirdparty/googletest/googlemock/include
>  -I/Users/wangziming/workplace/incubator-hawq/depends/libyarn/src 
> -I/Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/common 
> -I/Users/wangziming/workplace/incubator-hawq/depends/libyarn/build/src 
> -I/usr/local/include -I/usr/include/libxml2 
> -I/Users/wangziming/workplace/incubator-hawq/depends/libyarn/mock 
> -fno-omit-frame-pointer -msse4.2 -std=c++0x -O2 -g -DNDEBUG -fPIC -o 
> CMakeFiles/libyarn-shared.dir/libyarnclient/ApplicationClient.cpp.o -c 
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/libyarnclient/ApplicationClient.cpp
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/libyarnclient/ApplicationClient.cpp:76:10:
>  error: no template named 'vector'; did you mean 'std::vector'?
> for (vector::iterator it = rmConfInfos.begin();
> ^~
> std::vector
> /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/vector:457:29: 
> note: 'std::vector' declared here
> class _LIBCPP_TYPE_VIS_ONLY vector
> ^
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/libyarnclient/ApplicationClient.cpp:79:14:
>  error: no template named 'vector'; did you mean 'std::vector'?
> for (vector::iterator it2 = rmInfos.begin();
> ^~
> std::vector
> /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/iterator:1244:75:
>  note: 'std::vector' declared here
> template  friend class _LIBCPP_TYPE_VIS_ONLY vector;
> ^
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/libyarnclient/ApplicationClient.cpp:98:17:
>  warning: format specifies type 'int' but the argument has type 'size_type' 
> (aka 'unsigned long') [-Wformat]
> rmInfos.size());
> ^~
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/common/Logger.h:59:47:
>  note: expanded from macro 'LOG'
> Yarn::Internal::RootLogger.printf(s, fmt, ##__VA_ARGS__)
> ^~~
> 1 warning and 2 errors generated.
> make[4]: *** 
> [src/CMakeFiles/libyarn-shared.dir/libyarnclient/ApplicationClient.cpp.o] 
> Error 1
> make[3]: *** [src/CMakeFiles/libyarn-shared.dir/all] Error 2
> make[2]: *** [all] Error 2
> make[1]: *** [build] Error 2
> make: *** [all] Error 2{code}





[jira] [Assigned] (HAWQ-1636) Compile apache hawq failure due to unsupported syntax in libyarn on osx 10.11

2018-07-04 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1636:
-

Assignee: WangZiming  (was: Radar Lei)

> Compile apache hawq failure due to unsupported syntax in libyarn on osx 10.11
> -
>
> Key: HAWQ-1636
> URL: https://issues.apache.org/jira/browse/HAWQ-1636
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.3.0.0-incubating
>Reporter: WangZiming
>Assignee: WangZiming
>Priority: Major
>
> Following the instructions 
> ([https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install]) to 
> build Apache HAWQ on OS X 10.11, the build fails due to unsupported syntax in libyarn:
> {code:java}
> 1. ./configure
> 2. make
> [ 9%] Building CXX object 
> src/CMakeFiles/libyarn-shared.dir/libyarnclient/ApplicationClient.cpp.o
> cd /Users/wangziming/workplace/incubator-hawq/depends/libyarn/build/src && 
> /usr/bin/g++ -DTEST_HDFS_PREFIX=\"./\" -D_GNU_SOURCE -D__STDC_FORMAT_MACROS 
> -Dlibyarn_shared_EXPORTS 
> -I/Users/wangziming/workplace/incubator-hawq/depends/thirdparty/googletest/googletest/include
>  
> -I/Users/wangziming/workplace/incubator-hawq/depends/thirdparty/googletest/googlemock/include
>  -I/Users/wangziming/workplace/incubator-hawq/depends/libyarn/src 
> -I/Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/common 
> -I/Users/wangziming/workplace/incubator-hawq/depends/libyarn/build/src 
> -I/usr/local/include -I/usr/include/libxml2 
> -I/Users/wangziming/workplace/incubator-hawq/depends/libyarn/mock 
> -fno-omit-frame-pointer -msse4.2 -std=c++0x -O2 -g -DNDEBUG -fPIC -o 
> CMakeFiles/libyarn-shared.dir/libyarnclient/ApplicationClient.cpp.o -c 
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/libyarnclient/ApplicationClient.cpp
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/libyarnclient/ApplicationClient.cpp:76:10:
>  error: no template named 'vector'; did you mean 'std::vector'?
> for (vector::iterator it = rmConfInfos.begin();
> ^~
> std::vector
> /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/vector:457:29: 
> note: 'std::vector' declared here
> class _LIBCPP_TYPE_VIS_ONLY vector
> ^
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/libyarnclient/ApplicationClient.cpp:79:14:
>  error: no template named 'vector'; did you mean 'std::vector'?
> for (vector::iterator it2 = rmInfos.begin();
> ^~
> std::vector
> /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/iterator:1244:75:
>  note: 'std::vector' declared here
> template  friend class _LIBCPP_TYPE_VIS_ONLY vector;
> ^
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/libyarnclient/ApplicationClient.cpp:98:17:
>  warning: format specifies type 'int' but the argument has type 'size_type' 
> (aka 'unsigned long') [-Wformat]
> rmInfos.size());
> ^~
> /Users/wangziming/workplace/incubator-hawq/depends/libyarn/src/common/Logger.h:59:47:
>  note: expanded from macro 'LOG'
> Yarn::Internal::RootLogger.printf(s, fmt, ##__VA_ARGS__)
> ^~~
> 1 warning and 2 errors generated.
> make[4]: *** 
> [src/CMakeFiles/libyarn-shared.dir/libyarnclient/ApplicationClient.cpp.o] 
> Error 1
> make[3]: *** [src/CMakeFiles/libyarn-shared.dir/all] Error 2
> make[2]: *** [all] Error 2
> make[1]: *** [build] Error 2
> make: *** [all] Error 2{code}





[jira] [Assigned] (HAWQ-1631) Support Hive protocol using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1631:
-

Assignee: (was: Radar Lei)

> Support Hive protocol using pluggable storage framework
> ---
>
> Key: HAWQ-1631
> URL: https://issues.apache.org/jira/browse/HAWQ-1631
> Project: Apache HAWQ
>  Issue Type: Sub-task
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Priority: Major
> Fix For: backlog
>
>
> Add the Hive protocol using the pluggable storage framework so that users can 
> access data in Hive from HAWQ efficiently.





[jira] [Assigned] (HAWQ-1632) Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1632:
-

Assignee: (was: Radar Lei)

> Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support
> --
>
> Key: HAWQ-1632
> URL: https://issues.apache.org/jira/browse/HAWQ-1632
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Documentation, Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Priority: Major
> Fix For: backlog
>
>
> Wrap up the pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support, and 
> add a user guide.





[jira] [Assigned] (HAWQ-1628) Support HDFS protocol using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1628:
-

Assignee: (was: Radar Lei)

> Support HDFS protocol using pluggable storage framework
> ---
>
> Key: HAWQ-1628
> URL: https://issues.apache.org/jira/browse/HAWQ-1628
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Priority: Major
> Fix For: backlog
>
>
> The purpose of supporting the HDFS protocol using the pluggable storage 
> framework is twofold:
> 1. demonstrate how to add a protocol in the pluggable storage 
> framework
> 2. allow data formats including ORC, TEXT, and CSV to be added to HAWQ using 
> the pluggable storage framework





[jira] [Assigned] (HAWQ-1629) Support ORC format using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1629:
-

Assignee: (was: Radar Lei)

> Support ORC format using pluggable storage framework
> 
>
> Key: HAWQ-1629
> URL: https://issues.apache.org/jira/browse/HAWQ-1629
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Priority: Major
> Fix For: backlog
>
>
> Add the ORC format using the pluggable storage framework so that users can store 
> data in tables with the ORC format, which is a widely adopted format and offers 
> potential performance gains through statistics.





[jira] [Assigned] (HAWQ-1630) Support TEXT/CSV format using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1630:
-

Assignee: (was: Radar Lei)

> Support TEXT/CSV format using pluggable storage framework
> -
>
> Key: HAWQ-1630
> URL: https://issues.apache.org/jira/browse/HAWQ-1630
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Priority: Major
> Fix For: backlog
>
>
> Add the TEXT/CSV format using the pluggable storage framework so that users can 
> store data in tables with the TEXT/CSV format. The TEXT/CSV format in the 
> pluggable storage framework will have better performance and extensibility 
> compared with that in the external table framework.





[jira] [Resolved] (HAWQ-1577) Add demo for pluggable format insert

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1577.
---
Resolution: Duplicate

> Add demo for pluggable format insert
> 
>
> Key: HAWQ-1577
> URL: https://issues.apache.org/jira/browse/HAWQ-1577
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Once the new pluggable storage framework feature is ready, it is necessary to 
> add a demo on how to implement external insert for a new format using the 
> pluggable framework.





[jira] [Commented] (HAWQ-1577) Add demo for pluggable format insert

2018-06-25 Thread Ruilong Huo (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16521947#comment-16521947
 ] 

Ruilong Huo commented on HAWQ-1577:
---

This issue is superseded by 
[HAWQ-1628|https://issues.apache.org/jira/browse/HAWQ-1628] through 
[HAWQ-1632|https://issues.apache.org/jira/browse/HAWQ-1632], which add the HDFS/Hive 
protocols and the ORC/TEXT/CSV formats using the pluggable storage framework.

> Add demo for pluggable format insert
> 
>
> Key: HAWQ-1577
> URL: https://issues.apache.org/jira/browse/HAWQ-1577
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Once the new pluggable storage framework feature is ready, it is necessary to 
> add a demo on how to implement external insert for a new format using the 
> pluggable framework.





[jira] [Updated] (HAWQ-1576) Add demo for pluggable format scan

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1576:
--
Affects Version/s: 2.3.0.0-incubating

> Add demo for pluggable format scan
> --
>
> Key: HAWQ-1576
> URL: https://issues.apache.org/jira/browse/HAWQ-1576
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Once the new pluggable storage framework feature is ready, it is necessary to 
> add a demo on how to implement external scan for a new format using the 
> pluggable framework.





[jira] [Resolved] (HAWQ-1566) Include Pluggable Storage Format Framework in External Table Insert

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1566.
---
Resolution: Fixed

Resolved as the commit is merged to master.

> Include Pluggable Storage Format Framework in External Table Insert
> ---
>
> Key: HAWQ-1566
> URL: https://issues.apache.org/jira/browse/HAWQ-1566
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> There are two types of operations related to external tables: scan and insert. 
> Including the pluggable storage framework in these operations is necessary. 
> We add the external table insert and COPY FROM (write into external table) 
> related features here.
> In the following steps, we still need to specify some of the critical info 
> that comes from the planner, as well as the file splits info in the pluggable 
> filesystem.





[jira] [Updated] (HAWQ-1566) Include Pluggable Storage Format Framework in External Table Insert

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1566:
--
Affects Version/s: 2.3.0.0-incubating

> Include Pluggable Storage Format Framework in External Table Insert
> ---
>
> Key: HAWQ-1566
> URL: https://issues.apache.org/jira/browse/HAWQ-1566
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> There are two types of operations related to external tables: scan and insert. 
> Including the pluggable storage framework in these operations is necessary. 
> We add the external table insert and COPY FROM (write into external table) 
> related features here.
> In the following steps, we still need to specify some of the critical info 
> that comes from the planner, as well as the file splits info in the pluggable 
> filesystem.





[jira] [Updated] (HAWQ-1632) Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1632:
--
Component/s: Storage

> Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support
> --
>
> Key: HAWQ-1632
> URL: https://issues.apache.org/jira/browse/HAWQ-1632
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Documentation, Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> Wrap up the pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support, and 
> add a user guide.





[jira] [Updated] (HAWQ-1632) Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1632:
--
Component/s: Documentation

> Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support
> --
>
> Key: HAWQ-1632
> URL: https://issues.apache.org/jira/browse/HAWQ-1632
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Documentation, Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> Wrap up the pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support, and 
> add a user guide.





[jira] [Created] (HAWQ-1632) Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support

2018-06-25 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1632:
-

 Summary: Wrap up pluggable storage framework and HDFS/Hive 
ORC/TEXT/CSV support
 Key: HAWQ-1632
 URL: https://issues.apache.org/jira/browse/HAWQ-1632
 Project: Apache HAWQ
  Issue Type: Sub-task
Reporter: Ruilong Huo
Assignee: Radar Lei


Wrap up the pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support, 
and add a user guide.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1632) Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1632:
--
Affects Version/s: 2.3.0.0-incubating

> Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support
> --
>
> Key: HAWQ-1632
> URL: https://issues.apache.org/jira/browse/HAWQ-1632
> Project: Apache HAWQ
>  Issue Type: Sub-task
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> Wrap up the pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support, 
> and add a user guide.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1632) Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1632:
--
Fix Version/s: backlog

> Wrap up pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support
> --
>
> Key: HAWQ-1632
> URL: https://issues.apache.org/jira/browse/HAWQ-1632
> Project: Apache HAWQ
>  Issue Type: Sub-task
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> Wrap up the pluggable storage framework and HDFS/Hive ORC/TEXT/CSV support, 
> and add a user guide.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1631) Support Hive protocol using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1631:
--
Affects Version/s: 2.3.0.0-incubating

> Support Hive protocol using pluggable storage framework
> ---
>
> Key: HAWQ-1631
> URL: https://issues.apache.org/jira/browse/HAWQ-1631
> Project: Apache HAWQ
>  Issue Type: Sub-task
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> Add a Hive protocol using the pluggable storage framework so that users can 
> access data in Hive from HAWQ efficiently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1630) Support TEXT/CSV format using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1630:
--
Affects Version/s: 2.3.0.0-incubating

> Support TEXT/CSV format using pluggable storage framework
> -
>
> Key: HAWQ-1630
> URL: https://issues.apache.org/jira/browse/HAWQ-1630
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> Add TEXT/CSV formats using the pluggable storage framework so that users can 
> store table data in TEXT/CSV format. TEXT/CSV support in the pluggable 
> storage framework will have better performance and extensibility compared 
> with the external table framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1628) Support HDFS protocol using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1628:
--
Fix Version/s: backlog

> Support HDFS protocol using pluggable storage framework
> ---
>
> Key: HAWQ-1628
> URL: https://issues.apache.org/jira/browse/HAWQ-1628
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> The purpose of supporting the HDFS protocol using the pluggable storage 
> framework is twofold:
> 1. demonstrate how to add a protocol to the pluggable storage framework
> 2. allow data formats such as ORC, TEXT, and CSV to be added to HAWQ using 
> the pluggable storage framework



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1630) Support TEXT/CSV format using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1630:
--
Fix Version/s: backlog

> Support TEXT/CSV format using pluggable storage framework
> -
>
> Key: HAWQ-1630
> URL: https://issues.apache.org/jira/browse/HAWQ-1630
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> Add TEXT/CSV formats using the pluggable storage framework so that users can 
> store table data in TEXT/CSV format. TEXT/CSV support in the pluggable 
> storage framework will have better performance and extensibility compared 
> with the external table framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1628) Support HDFS protocol using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1628:
--
Affects Version/s: 2.3.0.0-incubating

> Support HDFS protocol using pluggable storage framework
> ---
>
> Key: HAWQ-1628
> URL: https://issues.apache.org/jira/browse/HAWQ-1628
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> The purpose of supporting the HDFS protocol using the pluggable storage 
> framework is twofold:
> 1. demonstrate how to add a protocol to the pluggable storage framework
> 2. allow data formats such as ORC, TEXT, and CSV to be added to HAWQ using 
> the pluggable storage framework



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1629) Support ORC format using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1629:
--
Fix Version/s: backlog

> Support ORC format using pluggable storage framework
> 
>
> Key: HAWQ-1629
> URL: https://issues.apache.org/jira/browse/HAWQ-1629
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> Add ORC format using the pluggable storage framework so that users can store 
> table data in ORC format, a widely adopted format with potential performance 
> gains from its built-in statistics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1629) Support ORC format using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1629:
--
Affects Version/s: 2.3.0.0-incubating

> Support ORC format using pluggable storage framework
> 
>
> Key: HAWQ-1629
> URL: https://issues.apache.org/jira/browse/HAWQ-1629
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Radar Lei
>Priority: Major
> Fix For: backlog
>
>
> Add ORC format using the pluggable storage framework so that users can store 
> table data in ORC format, a widely adopted format with potential performance 
> gains from its built-in statistics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HAWQ-1631) Support Hive protocol using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1631:
-

 Summary: Support Hive protocol using pluggable storage framework
 Key: HAWQ-1631
 URL: https://issues.apache.org/jira/browse/HAWQ-1631
 Project: Apache HAWQ
  Issue Type: Sub-task
Reporter: Ruilong Huo
Assignee: Radar Lei


Add a Hive protocol using the pluggable storage framework so that users can 
access data in Hive from HAWQ efficiently.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HAWQ-1630) Support TEXT/CSV format using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1630:
-

 Summary: Support TEXT/CSV format using pluggable storage framework
 Key: HAWQ-1630
 URL: https://issues.apache.org/jira/browse/HAWQ-1630
 Project: Apache HAWQ
  Issue Type: Sub-task
  Components: Storage
Reporter: Ruilong Huo
Assignee: Radar Lei


Add TEXT/CSV formats using the pluggable storage framework so that users can 
store table data in TEXT/CSV format. TEXT/CSV support in the pluggable storage 
framework will have better performance and extensibility compared with the 
external table framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HAWQ-1629) Support ORC format using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1629:
-

 Summary: Support ORC format using pluggable storage framework
 Key: HAWQ-1629
 URL: https://issues.apache.org/jira/browse/HAWQ-1629
 Project: Apache HAWQ
  Issue Type: Sub-task
  Components: Storage
Reporter: Ruilong Huo
Assignee: Radar Lei


Add ORC format using the pluggable storage framework so that users can store 
table data in ORC format, a widely adopted format with potential performance 
gains from its built-in statistics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HAWQ-1628) Support HDFS protocol using pluggable storage framework

2018-06-25 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1628:
-

 Summary: Support HDFS protocol using pluggable storage framework
 Key: HAWQ-1628
 URL: https://issues.apache.org/jira/browse/HAWQ-1628
 Project: Apache HAWQ
  Issue Type: Sub-task
  Components: Storage
Reporter: Ruilong Huo
Assignee: Radar Lei


The purpose of supporting the HDFS protocol using the pluggable storage 
framework is twofold:

1. demonstrate how to add a protocol to the pluggable storage framework

2. allow data formats such as ORC, TEXT, and CSV to be added to HAWQ using the 
pluggable storage framework
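As a purely hypothetical illustration of the idea behind a pluggable format 
framework (the names below are invented for this sketch and do not come from 
HAWQ's actual C/C++ code), formats such as TEXT and CSV can be registered 
behind a single read interface and looked up by name:

```python
# Hypothetical sketch of a pluggable-format registry. Each format registers a
# reader callable under its name; callers look formats up instead of
# hard-coding them. All names here are invented for illustration.
from typing import Callable, Dict, List

FORMATS: Dict[str, Callable[[str], List[List[str]]]] = {}

def register_format(name: str):
    """Decorator that records a reader function in the registry."""
    def wrap(reader):
        FORMATS[name] = reader
        return reader
    return wrap

@register_format("csv")
def read_csv(data: str) -> List[List[str]]:
    # Split each line on commas into a row of fields.
    return [line.split(",") for line in data.splitlines()]

@register_format("text")
def read_text(data: str) -> List[List[str]]:
    # Treat each line as a single-column row.
    return [[line] for line in data.splitlines()]

# Dispatch through the registry rather than calling a reader directly.
rows = FORMATS["csv"]("a,b\nc,d")
print(rows)  # [['a', 'b'], ['c', 'd']]
```

Adding a new format (say, ORC) would then mean registering one more reader, 
without touching the callers that dispatch through the registry.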




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1590) hawq version in build.properties for hawq-ambari-plugin need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1590.
-

> hawq version in build.properties for hawq-ambari-plugin need to be bumped to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release
> ---
>
> Key: HAWQ-1590
> URL: https://issues.apache.org/jira/browse/HAWQ-1590
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> Need to bump hawq version in contrib/hawq-ambari-plugin/build.properties to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release.
> {code}
> $ cat contrib/hawq-ambari-plugin/build.properties
> hawq.release.version=2.2.0   -->   2.3.0
> hawq.common.services.version=2.0.0
> pxf.release.version=3.2.1
> pxf.common.services.version=3.0.0
> hawq.repo.prefix=hawq
> hawq.addons.repo.prefix=hawq-add-ons
> repository.version=2.3.0.0
> default.stack=HDP-2.5
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1590) hawq version in build.properties for hawq-ambari-plugin need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1590.
---
Resolution: Fixed

Fixed with change merged to master branch. Need to cherry-pick the fix to 
2.3.0.0-incubating for the release.

> hawq version in build.properties for hawq-ambari-plugin need to be bumped to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release
> ---
>
> Key: HAWQ-1590
> URL: https://issues.apache.org/jira/browse/HAWQ-1590
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> Need to bump hawq version in contrib/hawq-ambari-plugin/build.properties to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release.
> {code}
> $ cat contrib/hawq-ambari-plugin/build.properties
> hawq.release.version=2.2.0   -->   2.3.0
> hawq.common.services.version=2.0.0
> pxf.release.version=3.2.1
> pxf.common.services.version=3.0.0
> hawq.repo.prefix=hawq
> hawq.addons.repo.prefix=hawq-add-ons
> repository.version=2.3.0.0
> default.stack=HDP-2.5
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1589) hawq version in pom.xml need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1589.
-

> hawq version in pom.xml need to be bumped to 2.3 for Apache HAWQ 
> 2.3.0.0-incubating Release
> ---
>
> Key: HAWQ-1589
> URL: https://issues.apache.org/jira/browse/HAWQ-1589
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> Need to bump hawq version in pom.xml to 2.3 as RAT check for Apache HAWQ 
> 2.3.0.0-incubating Release shows wrong version.
> {code}
> Ruilongs-MBP:apache-hawq-src-2.3.0.0-incubating huor$ mvn verify
>  [INFO] Scanning for projects...
>  [INFO]
>  [INFO] 
> 
>  [INFO] Building hawq 2.2
>  [INFO] 
> 
>  ...
>  [INFO] 3334 resources included (use -debug for more details)
>  [INFO] Rat check: Summary over all files. Unapproved: 0, unknown: 0, 
> generated: 0, approved: 3324 licenses.
>  [INFO] 
> 
>  [INFO] BUILD SUCCESS
>  [INFO] 
> 
>  [INFO] Total time: 40.695 s
>  [INFO] Finished at: 2018-02-20T20:28:29+08:00
>  [INFO] Final Memory: 14M/201M
>  [INFO] 
> 
> {code}
> The fix is as below:
> {code}
> <groupId>org.apache.hawq</groupId>
> <artifactId>hawq</artifactId>
> <version>2.2</version>   --> 2.3
> <packaging>pom</packaging>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1589) hawq version in pom.xml need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1589.
---
Resolution: Fixed

Fixed with change merged to master branch. Need to cherry-pick the fix to 
2.3.0.0-incubating for the release.

> hawq version in pom.xml need to be bumped to 2.3 for Apache HAWQ 
> 2.3.0.0-incubating Release
> ---
>
> Key: HAWQ-1589
> URL: https://issues.apache.org/jira/browse/HAWQ-1589
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> Need to bump hawq version in pom.xml to 2.3 as RAT check for Apache HAWQ 
> 2.3.0.0-incubating Release shows wrong version.
> {code}
> Ruilongs-MBP:apache-hawq-src-2.3.0.0-incubating huor$ mvn verify
>  [INFO] Scanning for projects...
>  [INFO]
>  [INFO] 
> 
>  [INFO] Building hawq 2.2
>  [INFO] 
> 
>  ...
>  [INFO] 3334 resources included (use -debug for more details)
>  [INFO] Rat check: Summary over all files. Unapproved: 0, unknown: 0, 
> generated: 0, approved: 3324 licenses.
>  [INFO] 
> 
>  [INFO] BUILD SUCCESS
>  [INFO] 
> 
>  [INFO] Total time: 40.695 s
>  [INFO] Finished at: 2018-02-20T20:28:29+08:00
>  [INFO] Final Memory: 14M/201M
>  [INFO] 
> 
> {code}
> The fix is as below:
> {code}
> <groupId>org.apache.hawq</groupId>
> <artifactId>hawq</artifactId>
> <version>2.2</version>   --> 2.3
> <packaging>pom</packaging>
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1590) hawq version in build.properties for hawq-ambari-plugin need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1590:
--
Affects Version/s: (was: 2.2.0.0-incubating)
   2.3.0.0-incubating

> hawq version in build.properties for hawq-ambari-plugin need to be bumped to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release
> ---
>
> Key: HAWQ-1590
> URL: https://issues.apache.org/jira/browse/HAWQ-1590
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> Need to bump hawq version in contrib/hawq-ambari-plugin/build.properties to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release.
> {code}
> $ cat contrib/hawq-ambari-plugin/build.properties
> hawq.release.version=2.2.0   -->   2.3.0
> hawq.common.services.version=2.0.0
> pxf.release.version=3.2.1
> pxf.common.services.version=3.0.0
> hawq.repo.prefix=hawq
> hawq.addons.repo.prefix=hawq-add-ons
> repository.version=2.3.0.0
> default.stack=HDP-2.5
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1590) hawq version in build.properties for hawq-ambari-plugin need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1590:
--
Affects Version/s: 2.2.0.0-incubating

> hawq version in build.properties for hawq-ambari-plugin need to be bumped to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release
> ---
>
> Key: HAWQ-1590
> URL: https://issues.apache.org/jira/browse/HAWQ-1590
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> Need to bump hawq version in contrib/hawq-ambari-plugin/build.properties to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1590) hawq version in build.properties for hawq-ambari-plugin need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1590:
--
Priority: Minor  (was: Major)

> hawq version in build.properties for hawq-ambari-plugin need to be bumped to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release
> ---
>
> Key: HAWQ-1590
> URL: https://issues.apache.org/jira/browse/HAWQ-1590
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
>Priority: Minor
> Fix For: 2.3.0.0-incubating
>
>
> Need to bump hawq version in contrib/hawq-ambari-plugin/build.properties to 
> 2.3 for Apache HAWQ 2.3.0.0-incubating Release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HAWQ-1590) hawq version in build.properties for hawq-ambari-plugin need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1590:
-

 Summary: hawq version in build.properties for hawq-ambari-plugin 
need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release
 Key: HAWQ-1590
 URL: https://issues.apache.org/jira/browse/HAWQ-1590
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Build
Reporter: Ruilong Huo
Assignee: Radar Lei
 Fix For: 2.3.0.0-incubating


Need to bump hawq version in contrib/hawq-ambari-plugin/build.properties to 2.3 
for Apache HAWQ 2.3.0.0-incubating Release.
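The one-line property change can be sketched as below; the properties shown 
are a trimmed stand-in based on the values quoted elsewhere in this issue, 
not the full contents of contrib/hawq-ambari-plugin/build.properties:

```python
# Sketch: bump hawq.release.version from 2.2.0 to 2.3.0 in a
# build.properties-style string (trimmed stand-in for the real file).
import re

props = """\
hawq.release.version=2.2.0
hawq.common.services.version=2.0.0
repository.version=2.3.0.0
"""

# Replace only the exact hawq.release.version line (re.M anchors ^/$ per line).
bumped = re.sub(r"^hawq\.release\.version=2\.2\.0$",
                "hawq.release.version=2.3.0",
                props, flags=re.M)
print(bumped.splitlines()[0])  # hawq.release.version=2.3.0
```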



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1589) hawq version in pom.xml need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1589:
--
Affects Version/s: 2.3.0.0-incubating

> hawq version in pom.xml need to be bumped to 2.3 for Apache HAWQ 
> 2.3.0.0-incubating Release
> ---
>
> Key: HAWQ-1589
> URL: https://issues.apache.org/jira/browse/HAWQ-1589
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Need to bump hawq version in pom.xml to 2.3 as RAT check for Apache HAWQ 
> 2.3.0.0-incubating Release shows wrong version.
> ```
> Ruilongs-MBP:apache-hawq-src-2.3.0.0-incubating huor$ mvn verify
> [INFO] Scanning for projects...
> [INFO]
> [INFO] 
> 
> [INFO] Building hawq 2.2
> [INFO] 
> 
> ...
> [INFO] 3334 resources included (use -debug for more details)
> [INFO] Rat check: Summary over all files. Unapproved: 0, unknown: 0, 
> generated: 0, approved: 3324 licenses.
> [INFO] 
> 
> [INFO] BUILD SUCCESS
> [INFO] 
> 
> [INFO] Total time: 40.695 s
> [INFO] Finished at: 2018-02-20T20:28:29+08:00
> [INFO] Final Memory: 14M/201M
> [INFO] 
> 
> ```
> The fix is as below:
> ```
> <groupId>org.apache.hawq</groupId>
> <artifactId>hawq</artifactId>
> <version>2.2</version>   --> 2.3
> <packaging>pom</packaging>
> ```



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HAWQ-1589) hawq version in pom.xml need to be bumped to 2.3 for Apache HAWQ 2.3.0.0-incubating Release

2018-02-20 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1589:
-

 Summary: hawq version in pom.xml need to be bumped to 2.3 for 
Apache HAWQ 2.3.0.0-incubating Release
 Key: HAWQ-1589
 URL: https://issues.apache.org/jira/browse/HAWQ-1589
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Build
Reporter: Ruilong Huo
Assignee: Radar Lei
 Fix For: 2.3.0.0-incubating


Need to bump hawq version in pom.xml to 2.3 as RAT check for Apache HAWQ 
2.3.0.0-incubating Release shows wrong version.

```

Ruilongs-MBP:apache-hawq-src-2.3.0.0-incubating huor$ mvn verify
[INFO] Scanning for projects...
[INFO]
[INFO] 
[INFO] Building hawq 2.2
[INFO] 
...
[INFO] 3334 resources included (use -debug for more details)
[INFO] Rat check: Summary over all files. Unapproved: 0, unknown: 0, generated: 
0, approved: 3324 licenses.
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 40.695 s
[INFO] Finished at: 2018-02-20T20:28:29+08:00
[INFO] Final Memory: 14M/201M
[INFO] 

```

The fix is as below:

```
<groupId>org.apache.hawq</groupId>
<artifactId>hawq</artifactId>
<version>2.2</version>   --> 2.3
<packaging>pom</packaging>

```
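The version bump described above can be sketched programmatically; the XML 
fragment below is a minimal stand-in, not HAWQ's actual pom.xml:

```python
# Sketch: bump the <version> element of a minimal pom.xml fragment from 2.2
# to 2.3 (toy fragment, no XML namespaces, standing in for the real file).
import xml.etree.ElementTree as ET

pom = """<project>
  <groupId>org.apache.hawq</groupId>
  <artifactId>hawq</artifactId>
  <version>2.2</version>
  <packaging>pom</packaging>
</project>"""

root = ET.fromstring(pom)
root.find("version").text = "2.3"  # locate the direct child and rewrite it
print(ET.tostring(root, encoding="unicode"))
```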



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HAWQ-1512) Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

2018-02-04 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351737#comment-16351737
 ] 

Ruilong Huo edited comment on HAWQ-1512 at 2/4/18 10:53 AM:


[~yjin] and [~rlei], please find my comment below:
 # For dependencies of hawq core part, the [third party 
page|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75963741] 
is good enough. One more thing to check is that we probably need to list 
googlemock as it was a separate project, though it is merged into googletest 
now.
 # For ranger pluggin of hawq, it is good if it is not a mandatory component in 
hawq for now. Otherwise, we need to add corresponding dependencies as mentioned 
in [^HAWQ Ranger Pluggin Service Dependencies.xlsx]
 # For libhdfs3 and libyarn, I think they are part of hawq itself as they are 
not open sourced as standalone project yet.


was (Author: huor):
[~yjin] and [~rlei], please find my comment below:
 # For dependencies of hawq core part, the [third party 
page|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75963741] 
is good enough. One more thing to check is that we probably need to list 
googlemock as it was a separate project, though it is merged into googletest 
now.
 # For ranger pluggin of hawq, it is good if it is not a mandatory component in 
hawq for now. Otherwise, we need to add corresponding dependencies as mentioned 
in [^HAWQ Ranger Pluggin Service Dependencies.xlsx]
 # For libhdfs3 and libyarn, I think they are part of hawq itself for now as 
they are not open sourced as standalone project yet.

> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> --
>
> Key: HAWQ-1512
> URL: https://issues.apache.org/jira/browse/HAWQ-1512
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ Ranger Pluggin Service Dependencies.xlsx
>
>
> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> Check the following page for the criteria
> https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HAWQ-1512) Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

2018-02-04 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351737#comment-16351737
 ] 

Ruilong Huo edited comment on HAWQ-1512 at 2/4/18 10:52 AM:


[~yjin] and [~rlei], please find my comment below:
 # For dependencies of hawq core part, the [third party 
page|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75963741] 
is good enough. One more thing to check is that we probably need to list 
googlemock as it was a separate project, though it is merged into googletest 
now.
 # For ranger pluggin of hawq, it is good if it is not a mandatory component in 
hawq for now. Otherwise, we need to add corresponding dependencies as mentioned 
in [^HAWQ Ranger Pluggin Service Dependencies.xlsx]^Unable to embed resource: 
HAWQ Ranger Pluggin Service Dependencies.xlsx of type 
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet^
 # For libhdfs3 and libyarn, I think they are part of hawq itself for now as 
they are not open sourced as standalone project yet.


was (Author: huor):
[~yjin] and [~rlei], please find my comment below:
 # For dependencies of hawq core part, the [third party page | 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75963741] is 
good enough. One more thing to check is that we probably need to list 
googlemock as it was a separate project, though it is merged into googletest 
now.
 # For ranger pluggin of hawq, it is good if it is not a mandatory component in 
hawq for now. Otherwise, we need to add corresponding dependencies as mentioned 
in [^HAWQ Ranger Pluggin Service Dependencies.xlsx]^!HAWQ Ranger Pluggin 
Service Dependencies.xlsx|width=7,height=7!^
 # For libhdfs3 and libyarn, I think they are part of hawq itself for now as 
they are not open sourced as standalone project yet.

> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> --
>
> Key: HAWQ-1512
> URL: https://issues.apache.org/jira/browse/HAWQ-1512
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ Ranger Pluggin Service Dependencies.xlsx
>
>
> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> Check the following page for the criteria
> https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HAWQ-1512) Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

2018-02-04 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351737#comment-16351737
 ] 

Ruilong Huo edited comment on HAWQ-1512 at 2/4/18 10:52 AM:


[~yjin] and [~rlei], please find my comment below:
 # For dependencies of hawq core part, the [third party 
page|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75963741] 
is good enough. One more thing to check is that we probably need to list 
googlemock as it was a separate project, though it is merged into googletest 
now.
 # For ranger pluggin of hawq, it is good if it is not a mandatory component in 
hawq for now. Otherwise, we need to add corresponding dependencies as mentioned 
in [^HAWQ Ranger Pluggin Service Dependencies.xlsx]
 # For libhdfs3 and libyarn, I think they are part of hawq itself for now as 
they are not open sourced as standalone project yet.


was (Author: huor):
[~yjin] and [~rlei], please find my comment below:
 # For dependencies of hawq core part, the [third party 
page|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75963741] 
is good enough. One more thing to check is that we probably need to list 
googlemock as it was a separate project, though it is merged into googletest 
now.
 # For ranger pluggin of hawq, it is good if it is not a mandatory component in 
hawq for now. Otherwise, we need to add corresponding dependencies as mentioned 
in [^HAWQ Ranger Pluggin Service Dependencies.xlsx]^Unable to embed resource: 
HAWQ Ranger Pluggin Service Dependencies.xlsx of type 
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet^
 # For libhdfs3 and libyarn, I think they are part of hawq itself for now as 
they are not open sourced as standalone project yet.

> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> --
>
> Key: HAWQ-1512
> URL: https://issues.apache.org/jira/browse/HAWQ-1512
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ Ranger Pluggin Service Dependencies.xlsx
>
>
> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> Check the following page for the criteria
> https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1512) Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

2018-02-04 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16351737#comment-16351737
 ] 

Ruilong Huo commented on HAWQ-1512:
---

[~yjin] and [~rlei], please find my comment below:
 # For dependencies of hawq core part, the [third party page | 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75963741] is 
good enough. One more thing to check is that we probably need to list 
googlemock as it was a separate project, though it is merged into googletest 
now.
 # For the ranger plugin of hawq, it is good if it is not a mandatory component 
in hawq for now. Otherwise, we need to add the corresponding dependencies as 
mentioned in [^HAWQ Ranger Pluggin Service Dependencies.xlsx]
 # For libhdfs3 and libyarn, I think they are part of hawq itself for now, as 
they are not open sourced as standalone projects yet.

> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> --
>
> Key: HAWQ-1512
> URL: https://issues.apache.org/jira/browse/HAWQ-1512
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
> Attachments: HAWQ Ranger Pluggin Service Dependencies.xlsx
>
>
> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> Check the following page for the criteria
> https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-786) Framework to support pluggable formats and file systems

2017-11-12 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249054#comment-16249054
 ] 

Ruilong Huo commented on HAWQ-786:
--

Assigning to Chiyang so that he can work on this feature.

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: chiyang wan
> Fix For: 2.3.0.0-incubating
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and parquet. Now we 
> want to support ORC. A framework to support native c/c++ pluggable formats is 
> needed to support ORC more easily. And it can also be potentially used for 
> fast external data access.
> And there are a lot of requests for supporting S3, Ceph and other file 
> systems, and this is closely related to pluggable formats, so this JIRA is 
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-786) Framework to support pluggable formats and file systems

2017-11-12 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-786:


Assignee: chiyang wan  (was: Ruilong Huo)

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: chiyang wan
> Fix For: 2.3.0.0-incubating
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and parquet. Now we 
> want to support ORC. A framework to support native c/c++ pluggable formats is 
> needed to support ORC more easily. And it can also be potentially used for 
> fast external data access.
> And there are a lot of requests for supporting S3, Ceph and other file 
> systems, and this is closely related to pluggable formats, so this JIRA is 
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-786) Framework to support pluggable formats and file systems

2017-11-09 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246939#comment-16246939
 ] 

Ruilong Huo commented on HAWQ-786:
--

[~chiyang1], sounds like a good idea!

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and parquet. Now we 
> want to support ORC. A framework to support native c/c++ pluggable formats is 
> needed to support ORC more easily. And it can also be potentially used for 
> fast external data access.
> And there are a lot of requests for supporting S3, Ceph and other file 
> systems, and this is closely related to pluggable formats, so this JIRA is 
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1270) Plugged storage back-ends for HAWQ

2017-11-09 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1270:
-

Assignee: Yi Jin  (was: Ruilong Huo)

> Plugged storage back-ends for HAWQ
> --
>
> Key: HAWQ-1270
> URL: https://issues.apache.org/jira/browse/HAWQ-1270
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Dmitry Buzolin
>Assignee: Yi Jin
>
> Since HAWQ only depends on Hadoop and Parquet for columnar format support, I 
> would like to propose a pluggable storage backend design for HAWQ. Hadoop is 
> already supported, but there is also Ceph, a distributed storage system which 
> offers a standard POSIX-compliant file system, object storage, and block 
> storage. Ceph is also data-location aware, is written in C++, and is a more 
> sophisticated storage backend than Hadoop at this time. It provides 
> replicated and erasure-coded storage pools. Other great features of Ceph are 
> snapshots and an algorithmic approach to mapping data to nodes rather than 
> having centrally managed namenodes. I don't think HDFS offers any of these 
> features. In terms of performance, Ceph should be faster than HDFS since it 
> is written in C++ and because it doesn't have scalability limitations when 
> mapping data to storage pools, compared to Hadoop, where the name node is 
> such a point of contention.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1270) Plugged storage back-ends for HAWQ

2017-11-09 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1270:
-

Assignee: Ruilong Huo  (was: Yi Jin)

> Plugged storage back-ends for HAWQ
> --
>
> Key: HAWQ-1270
> URL: https://issues.apache.org/jira/browse/HAWQ-1270
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Dmitry Buzolin
>Assignee: Ruilong Huo
>
> Since HAWQ only depends on Hadoop and Parquet for columnar format support, I 
> would like to propose a pluggable storage backend design for HAWQ. Hadoop is 
> already supported, but there is also Ceph, a distributed storage system which 
> offers a standard POSIX-compliant file system, object storage, and block 
> storage. Ceph is also data-location aware, is written in C++, and is a more 
> sophisticated storage backend than Hadoop at this time. It provides 
> replicated and erasure-coded storage pools. Other great features of Ceph are 
> snapshots and an algorithmic approach to mapping data to nodes rather than 
> having centrally managed namenodes. I don't think HDFS offers any of these 
> features. In terms of performance, Ceph should be faster than HDFS since it 
> is written in C++ and because it doesn't have scalability limitations when 
> mapping data to storage pools, compared to Hadoop, where the name node is 
> such a point of contention.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1528) Support read on writable external table

2017-09-18 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1528:
-

 Summary: Support read on writable external table
 Key: HAWQ-1528
 URL: https://issues.apache.org/jira/browse/HAWQ-1528
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: External Tables
Reporter: Ruilong Huo
Assignee: Radar Lei
 Fix For: backlog


It would be better to also support read on writable external tables for 
convenience. Currently, it only supports read on readable external tables and 
write on writable external tables.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-07-12 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1487.
---
Resolution: Fixed

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
>
> There is a hung process when it tries to process an interrupt during error 
> handling. To be specific, some QE encounters a division-by-zero error and 
> then errors out. During the error processing, it tries to handle the query 
> cancelling interrupt, and thus a deadlock occurs.
> The hang process is:
> {noformat}
> $ hawq ssh -f hostfile -e "ps -ef | grep postgres | grep -v grep"
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger p
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> co
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer p
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoi
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsupe
> $ ps -ef | grep postgres | grep -v grep
> gpadmin   51245  1  0 06:15 ?00:01:01 
> /usr/local/hawq_2_2_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-Multinode-parallel/product/segmentdd
>  -i -M segment -p 20100 --silent-mode=true
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger process
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> collector process
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer process
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoint process
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment resource manager
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsuperuser olap_winow... 10.32.34.225(45462) con4405 seg0 cmd2 slice7 
> MPPEXEC SELECT
> gpadmin  194424 194402  0 23:50 pts/000:00:00 grep postgres
> {noformat}
> The call stack is:
> {noformat}
> $ sudo gdb -p 182983
> (gdb) bt
> #0  0x003ff060e2e4 in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x003ff0609588 in _L_lock_854 () from /lib64/libpthread.so.0
> #2  0x003ff0609457 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #4  0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #5  0x003ff220ff49 in ?? () from /lib64/libgcc_s.so.1
> #6  0x003ff22100e7 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #7  0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #8  0x009cda3f in errstart (elevel=20, filename=0xd309e0 
> "postgres.c", lineno=3618,
> funcname=0xd32fc0 "ProcessInterrupts", domain=0x0) at elog.c:492
> #9  0x008e8fcb in ProcessInterrupts () at postgres.c:3616
> #10 0x008e8c9e in StatementCancelHandler (postgres_signal_arg=2) at 
> postgres.c:3463
> #11 
> #12 0x003ff0609451 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #13 0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #14 0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #15 0x003ff2210119 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #16 0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #17 0x009cda3f in errstart (elevel=20, filename=0xd3ba00 "float.c", 
> lineno=839, funcname=0xd3bf3a "float8div",
> domain=0x0) at elog.c:492
> #18 0x00921a84 in float8div (fcinfo=0x7ffd04d2b8b0) at float.c:836
> #19 0x00722fe5 in ExecMakeFunctionResult (fcache=0x324a088, 
> econtext=0x32495d8, isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:1762
> #20 0x00723d87 in ExecEvalOper (fcache=0x324a088, econtext=0x32495d8, 
> isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:2250
> #21 0x00722451 in ExecEvalFuncArgs (fcinfo=0x7ffd04d2bda0, 
> argList=0x324b378, econtext=0x32495d8) at execQual.c:1317
> #22 0x00722a68 in ExecMakeFunctionResult (fcache=0x3249850, 
> econtext=0x32495d8,
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177", isDone=0x0) at 
> execQual.c:1532
> #23 0x00723d1e in ExecEvalFunc (fcache=0x3249850, econtext=0x32495d8, 
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177",
> isDone=0x0) at execQual.c:2228
> #24 

[jira] [Closed] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-07-12 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1487.
-

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
>
> There is a hung process when it tries to process an interrupt during error 
> handling. To be specific, some QE encounters a division-by-zero error and 
> then errors out. During the error processing, it tries to handle the query 
> cancelling interrupt, and thus a deadlock occurs.
> The hang process is:
> {noformat}
> $ hawq ssh -f hostfile -e "ps -ef | grep postgres | grep -v grep"
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger p
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> co
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer p
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoi
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsupe
> $ ps -ef | grep postgres | grep -v grep
> gpadmin   51245  1  0 06:15 ?00:01:01 
> /usr/local/hawq_2_2_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-Multinode-parallel/product/segmentdd
>  -i -M segment -p 20100 --silent-mode=true
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger process
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> collector process
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer process
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoint process
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment resource manager
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsuperuser olap_winow... 10.32.34.225(45462) con4405 seg0 cmd2 slice7 
> MPPEXEC SELECT
> gpadmin  194424 194402  0 23:50 pts/000:00:00 grep postgres
> {noformat}
> The call stack is:
> {noformat}
> $ sudo gdb -p 182983
> (gdb) bt
> #0  0x003ff060e2e4 in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x003ff0609588 in _L_lock_854 () from /lib64/libpthread.so.0
> #2  0x003ff0609457 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #4  0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #5  0x003ff220ff49 in ?? () from /lib64/libgcc_s.so.1
> #6  0x003ff22100e7 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #7  0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #8  0x009cda3f in errstart (elevel=20, filename=0xd309e0 
> "postgres.c", lineno=3618,
> funcname=0xd32fc0 "ProcessInterrupts", domain=0x0) at elog.c:492
> #9  0x008e8fcb in ProcessInterrupts () at postgres.c:3616
> #10 0x008e8c9e in StatementCancelHandler (postgres_signal_arg=2) at 
> postgres.c:3463
> #11 
> #12 0x003ff0609451 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #13 0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #14 0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #15 0x003ff2210119 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #16 0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #17 0x009cda3f in errstart (elevel=20, filename=0xd3ba00 "float.c", 
> lineno=839, funcname=0xd3bf3a "float8div",
> domain=0x0) at elog.c:492
> #18 0x00921a84 in float8div (fcinfo=0x7ffd04d2b8b0) at float.c:836
> #19 0x00722fe5 in ExecMakeFunctionResult (fcache=0x324a088, 
> econtext=0x32495d8, isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:1762
> #20 0x00723d87 in ExecEvalOper (fcache=0x324a088, econtext=0x32495d8, 
> isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:2250
> #21 0x00722451 in ExecEvalFuncArgs (fcinfo=0x7ffd04d2bda0, 
> argList=0x324b378, econtext=0x32495d8) at execQual.c:1317
> #22 0x00722a68 in ExecMakeFunctionResult (fcache=0x3249850, 
> econtext=0x32495d8,
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177", isDone=0x0) at 
> execQual.c:1532
> #23 0x00723d1e in ExecEvalFunc (fcache=0x3249850, econtext=0x32495d8, 
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177",
> isDone=0x0) at execQual.c:2228
> #24 0x0076eed2 in initFcinfo 

[jira] [Commented] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-07-12 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084988#comment-16084988
 ] 

Ruilong Huo commented on HAWQ-1487:
---

Thanks [~vVineet], closing this jira as the fix is available.

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
>
> There is a hung process when it tries to process an interrupt during error 
> handling. To be specific, some QE encounters a division-by-zero error and 
> then errors out. During the error processing, it tries to handle the query 
> cancelling interrupt, and thus a deadlock occurs.
> The hang process is:
> {noformat}
> $ hawq ssh -f hostfile -e "ps -ef | grep postgres | grep -v grep"
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger p
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> co
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer p
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoi
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsupe
> $ ps -ef | grep postgres | grep -v grep
> gpadmin   51245  1  0 06:15 ?00:01:01 
> /usr/local/hawq_2_2_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-Multinode-parallel/product/segmentdd
>  -i -M segment -p 20100 --silent-mode=true
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger process
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> collector process
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer process
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoint process
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment resource manager
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsuperuser olap_winow... 10.32.34.225(45462) con4405 seg0 cmd2 slice7 
> MPPEXEC SELECT
> gpadmin  194424 194402  0 23:50 pts/000:00:00 grep postgres
> {noformat}
> The call stack is:
> {noformat}
> $ sudo gdb -p 182983
> (gdb) bt
> #0  0x003ff060e2e4 in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x003ff0609588 in _L_lock_854 () from /lib64/libpthread.so.0
> #2  0x003ff0609457 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #4  0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #5  0x003ff220ff49 in ?? () from /lib64/libgcc_s.so.1
> #6  0x003ff22100e7 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #7  0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #8  0x009cda3f in errstart (elevel=20, filename=0xd309e0 
> "postgres.c", lineno=3618,
> funcname=0xd32fc0 "ProcessInterrupts", domain=0x0) at elog.c:492
> #9  0x008e8fcb in ProcessInterrupts () at postgres.c:3616
> #10 0x008e8c9e in StatementCancelHandler (postgres_signal_arg=2) at 
> postgres.c:3463
> #11 
> #12 0x003ff0609451 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #13 0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #14 0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #15 0x003ff2210119 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #16 0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #17 0x009cda3f in errstart (elevel=20, filename=0xd3ba00 "float.c", 
> lineno=839, funcname=0xd3bf3a "float8div",
> domain=0x0) at elog.c:492
> #18 0x00921a84 in float8div (fcinfo=0x7ffd04d2b8b0) at float.c:836
> #19 0x00722fe5 in ExecMakeFunctionResult (fcache=0x324a088, 
> econtext=0x32495d8, isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:1762
> #20 0x00723d87 in ExecEvalOper (fcache=0x324a088, econtext=0x32495d8, 
> isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:2250
> #21 0x00722451 in ExecEvalFuncArgs (fcinfo=0x7ffd04d2bda0, 
> argList=0x324b378, econtext=0x32495d8) at execQual.c:1317
> #22 0x00722a68 in ExecMakeFunctionResult (fcache=0x3249850, 
> econtext=0x32495d8,
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177", isDone=0x0) at 
> execQual.c:1532
> #23 0x00723d1e in ExecEvalFunc (fcache=0x3249850, econtext=0x32495d8, 
> 

[jira] [Resolved] (HAWQ-1475) Add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for Apache HAWQ binary release

2017-06-27 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1475.
---
Resolution: Fixed

> Add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for Apache 
> HAWQ binary release
> -
>
> Key: HAWQ-1475
> URL: https://issues.apache.org/jira/browse/HAWQ-1475
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, 
> we need to add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for 
> the binary release.
> Refer to 
> https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
>  for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1475) Add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for Apache HAWQ binary release

2017-06-27 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1475:
--
Affects Version/s: 2.2.0.0-incubating

> Add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for Apache 
> HAWQ binary release
> -
>
> Key: HAWQ-1475
> URL: https://issues.apache.org/jira/browse/HAWQ-1475
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, 
> we need to add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for 
> the binary release.
> Refer to 
> https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
>  for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1475) Add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for Apache HAWQ binary release

2017-06-27 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1475:
--
Summary: Add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for 
Apache HAWQ binary release  (was: Add LICENSE, NOTICE, and DISCLAIMER files for 
Apache HAWQ binary release)

> Add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for Apache 
> HAWQ binary release
> -
>
> Key: HAWQ-1475
> URL: https://issues.apache.org/jira/browse/HAWQ-1475
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, 
> we need to add LICENSE, NOTICE, and DISCLAIMER files for the binary release.
> Refer to 
> https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
>  for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1475) Add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for Apache HAWQ binary release

2017-06-27 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1475:
--
Description: 
Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, we 
need to add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for the 
binary release.

Refer to 
https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
 for detail.

  was:
Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, we 
need to add LICENSE, NOTICE, and DISCLAIMER files for the binary release.

Refer to 
https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
 for detail.


> Add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for Apache 
> HAWQ binary release
> -
>
> Key: HAWQ-1475
> URL: https://issues.apache.org/jira/browse/HAWQ-1475
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, 
> we need to add LICENSE, NOTICE, and DISCLAIMER files for c/c++ components for 
> the binary release.
> Refer to 
> https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
>  for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1489) For PXF jar files, add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ binary release

2017-06-27 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1489:
--
Affects Version/s: 2.2.0.0-incubating

> For PXF jar files, add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ 
> binary release
> ---
>
> Key: HAWQ-1489
> URL: https://issues.apache.org/jira/browse/HAWQ-1489
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build, PXF
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ed Espino
>Assignee: Ed Espino
> Fix For: 2.2.0.0-incubating
>
>
> For PXF jar files, add LICENSE, NOTICE, and DISCLAIMER.
> Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, 
> we need to add LICENSE, NOTICE, and DISCLAIMER files for the binary release.
> Refer to 
> https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
>  for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1489) Fix PXF jar files, add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ binary release

2017-06-27 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1489:
--
Summary: Fix PXF jar files, add LICENSE, NOTICE, and DISCLAIMER files for 
Apache HAWQ binary release  (was: For PXF jar files, add LICENSE, NOTICE, and 
DISCLAIMER files for Apache HAWQ binary release)

> Fix PXF jar files, add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ 
> binary release
> ---
>
> Key: HAWQ-1489
> URL: https://issues.apache.org/jira/browse/HAWQ-1489
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build, PXF
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ed Espino
>Assignee: Ed Espino
> Fix For: 2.2.0.0-incubating
>
>
> For PXF jar files, add LICENSE, NOTICE, and DISCLAIMER.
> Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, 
> we need to add LICENSE, NOTICE, and DISCLAIMER files for the binary release.
> Refer to 
> https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
>  for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-738) Allocate query resource twice in function call through jdbc

2017-06-19 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055079#comment-16055079
 ] 

Ruilong Huo commented on HAWQ-738:
--

For apache hawq, it is in 2.0.0.0-incubating release: 
https://github.com/apache/incubator-hawq/releases/tag/rel%2Fv2.0.0.0-incubating.
 You can find details at: 
https://github.com/apache/incubator-hawq/commit/3d3611ef80d246b446a9e403f91438395b4d856d

For pivotal hdb, it is in 2.0.1.0 release. Refer to 
https://hdb.docs.pivotal.io/201/hdb/releasenotes/HAWQ201ReleaseNotes.html for 
more information.

> Allocate query resource twice in function call through jdbc
> ---
>
> Key: HAWQ-738
> URL: https://issues.apache.org/jira/browse/HAWQ-738
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.0.0.0-incubating
>
>
> It allocates query resource twice in a function call through JDBC: once in 
> parse and again in bind. The same query works fine with psql.
> Use runme.sh in the attached bug.zip to reproduce the issue. It may raise the 
> error below on a host with limited resources (i.e., low memory, etc).
> {noformat}
> [gpadmin@localhost debug]$ ./runme.sh 
> java -classpath /home/gpadmin/debug/Bug.jar:/home/gpadmin/debug/gpdb.jar Bug 
> localhost 5432 gpadmin gpadmin changeme
> gpServer: hdp23
> gpPort: 5432
> gpDatabase: gpadmin
> gpUserName: gpadmin
> gpPassword: changeme
> DriverManager.getConnection("jdbc:postgresql://hdp23:5432/gpadmin")
> trying sun.jdbc.odbc.JdbcOdbcDriver
> *Driver.connect (jdbc:postgresql://hdp23:5432/gpadmin)
> trying org.postgresql.Driver
> getConnection returning org.postgresql.Driver
> strSQL: DROP TABLE IF EXISTS public.debug;
> CREATE TABLE public.debug
> (id int, foo_bar text)
> DISTRIBUTED RANDOMLY;
> strSQL: INSERT INTO public.debug
> SELECT i, 'foo_' || i from generate_series(1,100) AS i;
> strSQL: CREATE OR REPLACE FUNCTION public.fn_debug() RETURNS text AS
> $$
> DECLARE
>   v_return text;
> BEGIN
>   SELECT foo_bar
>   INTO v_return
>   FROM public.debug
>   WHERE id = 1;
>   RETURN v_return;
> END;
> $$
> LANGUAGE plpgsql;
> strSQL: SELECT public.fn_debug()
> org.postgresql.util.PSQLException: ERROR: failed to acquire resource from 
> resource manager, session 32 deadlock is detected (pquery.c:804)
>   at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2102)
>   at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1835)
>   at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
>   at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500)
>   at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:374)
>   at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:254)
>   at Bug.getFunctionResults(Bug.java:144)
>   at Bug.main(Bug.java:41)
> SQLException: SQLState(XX000)
> ERROR: failed to acquire resource from resource manager, session 32 deadlock 
> is detected (pquery.c:804)
> Exception in thread "main" java.sql.SQLException: ERROR: failed to acquire 
> resource from resource manager, session 32 deadlock is detected (pquery.c:804)
>   at Bug.main(Bug.java:49)
> {noformat}
> while the expected result is as below:
> {noformat}
> [gpadmin@localhost hawq_bug]$ ./runme.sh
> java -classpath 
> /home/gpadmin/huor/hawq_bug/Bug.jar:/home/gpadmin/huor/hawq_bug/gpdb.jar Bug 
> localhost 5432 gptest gpadmin changeme
> gpServer: localhost
> gpPort: 5432
> gpDatabase: gptest
> gpUserName: gpadmin
> gpPassword: changeme
> DriverManager.getConnection("jdbc:postgresql://localhost:5432/gptest")
> trying sun.jdbc.odbc.JdbcOdbcDriver
> *Driver.connect (jdbc:postgresql://localhost:5432/gptest)
> trying org.postgresql.Driver
> getConnection returning org.postgresql.Driver
> strSQL: DROP TABLE IF EXISTS public.debug;
> CREATE TABLE public.debug
> (id int, foo_bar text)
> DISTRIBUTED RANDOMLY;
> SQLWarning:
> strSQL: INSERT INTO public.debug
> SELECT i, 'foo_' || i from generate_series(1,100) AS i;
> strSQL: CREATE OR REPLACE FUNCTION public.fn_debug() RETURNS text AS
> $$
> DECLARE
> v_return text;
> BEGIN
> SELECT foo_bar
> INTO v_return
> FROM public.debug
> WHERE id = 1;
> RETURN v_return;
> END;
> $$
> LANGUAGE plpgsql;
> strSQL: SELECT public.fn_debug()
> output: foo_1
> {noformat}
> If you look into the pg_log on master, you can see it allocate query resource 
> twice for the function call:
> {noformat}
> rhuo-mbp:jdbc rhuo$ cat hawq-2016-05-14_00.csv
> 2016-05-16 13:50:50.255504 
> 

[jira] [Updated] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-06-15 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1487:
--
Affects Version/s: 2.2.0.0-incubating

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
>
> A process hangs due to a deadlock while processing an interrupt during error 
> handling. Specifically, a QE encounters a division-by-zero error and starts to 
> error out; while reporting the error, it tries to handle a query-cancel 
> interrupt, and a deadlock occurs.
> The hang process is:
> {noformat}
> $ hawq ssh -f hostfile -e "ps -ef | grep postgres | grep -v grep"
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger p
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> co
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer p
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoi
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsupe
> $ ps -ef | grep postgres | grep -v grep
> gpadmin   51245  1  0 06:15 ?00:01:01 
> /usr/local/hawq_2_2_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-Multinode-parallel/product/segmentdd
>  -i -M segment -p 20100 --silent-mode=true
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger process
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> collector process
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer process
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoint process
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment resource manager
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsuperuser olap_winow... 10.32.34.225(45462) con4405 seg0 cmd2 slice7 
> MPPEXEC SELECT
> gpadmin  194424 194402  0 23:50 pts/000:00:00 grep postgres
> {noformat}
> The call stack is:
> {noformat}
> $ sudo gdb -p 182983
> (gdb) bt
> #0  0x003ff060e2e4 in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x003ff0609588 in _L_lock_854 () from /lib64/libpthread.so.0
> #2  0x003ff0609457 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #4  0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #5  0x003ff220ff49 in ?? () from /lib64/libgcc_s.so.1
> #6  0x003ff22100e7 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #7  0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #8  0x009cda3f in errstart (elevel=20, filename=0xd309e0 
> "postgres.c", lineno=3618,
> funcname=0xd32fc0 "ProcessInterrupts", domain=0x0) at elog.c:492
> #9  0x008e8fcb in ProcessInterrupts () at postgres.c:3616
> #10 0x008e8c9e in StatementCancelHandler (postgres_signal_arg=2) at 
> postgres.c:3463
> #11 
> #12 0x003ff0609451 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #13 0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #14 0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #15 0x003ff2210119 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #16 0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #17 0x009cda3f in errstart (elevel=20, filename=0xd3ba00 "float.c", 
> lineno=839, funcname=0xd3bf3a "float8div",
> domain=0x0) at elog.c:492
> #18 0x00921a84 in float8div (fcinfo=0x7ffd04d2b8b0) at float.c:836
> #19 0x00722fe5 in ExecMakeFunctionResult (fcache=0x324a088, 
> econtext=0x32495d8, isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:1762
> #20 0x00723d87 in ExecEvalOper (fcache=0x324a088, econtext=0x32495d8, 
> isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:2250
> #21 0x00722451 in ExecEvalFuncArgs (fcinfo=0x7ffd04d2bda0, 
> argList=0x324b378, econtext=0x32495d8) at execQual.c:1317
> #22 0x00722a68 in ExecMakeFunctionResult (fcache=0x3249850, 
> econtext=0x32495d8,
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177", isDone=0x0) at 
> execQual.c:1532
> #23 0x00723d1e in ExecEvalFunc (fcache=0x3249850, econtext=0x32495d8, 
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177",
> isDone=0x0) at 

[jira] [Updated] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-06-15 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1487:
--
Fix Version/s: 2.3.0.0-incubating

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
>

[jira] [Assigned] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-06-15 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1487:
-

Assignee: Ruilong Huo  (was: Lei Chang)

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
>

[jira] [Updated] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-06-15 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1487:
--
Description: 
A process hangs due to a deadlock while processing an interrupt during error 
handling. Specifically, a QE encounters a division-by-zero error and starts to 
error out; while reporting the error, it tries to handle a query-cancel 
interrupt, and a deadlock occurs.

The hang process is:
{noformat}
$ hawq ssh -f hostfile -e "ps -ef | grep postgres | grep -v grep"
gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, logger p
gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats co
gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, writer p
gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, checkpoi
gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, segment
gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, hawqsupe

$ ps -ef | grep postgres | grep -v grep
gpadmin   51245  1  0 06:15 ?00:01:01 
/usr/local/hawq_2_2_0_0/bin/postgres -D 
/data/pulse-agent-data/HAWQ-main-FeatureTest-opt-Multinode-parallel/product/segmentdd
 -i -M segment -p 20100 --silent-mode=true
gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, logger 
process
gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
collector process
gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, writer 
process
gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
checkpoint process
gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, segment 
resource manager
gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
hawqsuperuser olap_winow... 10.32.34.225(45462) con4405 seg0 cmd2 slice7 
MPPEXEC SELECT
gpadmin  194424 194402  0 23:50 pts/000:00:00 grep postgres
{noformat}

The call stack is:
{noformat}
$ sudo gdb -p 182983
(gdb) bt
#0  0x003ff060e2e4 in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x003ff0609588 in _L_lock_854 () from /lib64/libpthread.so.0
#2  0x003ff0609457 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
#4  0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
#5  0x003ff220ff49 in ?? () from /lib64/libgcc_s.so.1
#6  0x003ff22100e7 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
#7  0x003ff02fe966 in backtrace () from /lib64/libc.so.6
#8  0x009cda3f in errstart (elevel=20, filename=0xd309e0 "postgres.c", 
lineno=3618,
funcname=0xd32fc0 "ProcessInterrupts", domain=0x0) at elog.c:492
#9  0x008e8fcb in ProcessInterrupts () at postgres.c:3616
#10 0x008e8c9e in StatementCancelHandler (postgres_signal_arg=2) at 
postgres.c:3463
#11 
#12 0x003ff0609451 in pthread_mutex_lock () from /lib64/libpthread.so.0
#13 0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
#14 0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
#15 0x003ff2210119 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
#16 0x003ff02fe966 in backtrace () from /lib64/libc.so.6
#17 0x009cda3f in errstart (elevel=20, filename=0xd3ba00 "float.c", 
lineno=839, funcname=0xd3bf3a "float8div",
domain=0x0) at elog.c:492
#18 0x00921a84 in float8div (fcinfo=0x7ffd04d2b8b0) at float.c:836
#19 0x00722fe5 in ExecMakeFunctionResult (fcache=0x324a088, 
econtext=0x32495d8, isNull=0x7ffd04d2c0e0 "\030",
isDone=0x7ffd04d2bd04) at execQual.c:1762
#20 0x00723d87 in ExecEvalOper (fcache=0x324a088, econtext=0x32495d8, 
isNull=0x7ffd04d2c0e0 "\030",
isDone=0x7ffd04d2bd04) at execQual.c:2250
#21 0x00722451 in ExecEvalFuncArgs (fcinfo=0x7ffd04d2bda0, 
argList=0x324b378, econtext=0x32495d8) at execQual.c:1317
#22 0x00722a68 in ExecMakeFunctionResult (fcache=0x3249850, 
econtext=0x32495d8,
isNull=0x7ffd04d2c5c1 "\306\322\004\375\177", isDone=0x0) at execQual.c:1532
#23 0x00723d1e in ExecEvalFunc (fcache=0x3249850, econtext=0x32495d8, 
isNull=0x7ffd04d2c5c1 "\306\322\004\375\177",
isDone=0x0) at execQual.c:2228
#24 0x0076eed2 in initFcinfo (wrxstate=0x31b8fe0, 
fcinfo=0x7ffd04d2c280, funcstate=0x7f83c7412318, econtext=0x32495d8,
check_nulls=1 '\001') at nodeWindow.c:3201
#25 0x0076efa4 in add_tuple_to_trans (funcstate=0x7f83c7412318, 
wstate=0x3248ab8, econtext=0x32495d8,
check_nulls=1 '\001') at nodeWindow.c:3223
#26 0x00772f72 in processTupleSlot (wstate=0x3248ab8, slot=0x31ac150, 
last_peer=0 '\000') at nodeWindow.c:5105
#27 0x00772760 in ExecWindow (wstate=0x3248ab8) at nodeWindow.c:4821
---Type  to continue, or q  to quit---
#28 0x0071eda7 in ExecProcNode (node=0x3248ab8) at execProcnode.c:1007
#29 0x0075aded in NextInputSlot (node=0x31af928) at nodeResult.c:95

[jira] [Created] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-06-15 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1487:
-

 Summary: hang process due to deadlock when it try to process 
interrupt in error handling
 Key: HAWQ-1487
 URL: https://issues.apache.org/jira/browse/HAWQ-1487
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Query Execution
Reporter: Ruilong Huo
Assignee: Lei Chang


A process hangs due to a deadlock while processing an interrupt during error 
handling. Specifically, a QE encounters a division-by-zero error and starts to 
error out; while reporting the error, it tries to handle a query-cancel 
interrupt, and a deadlock occurs.


[jira] [Assigned] (HAWQ-1475) Add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ binary release

2017-05-27 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1475:
-

Assignee: Ruilong Huo  (was: Ed Espino)

> Add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ binary release
> 
>
> Key: HAWQ-1475
> URL: https://issues.apache.org/jira/browse/HAWQ-1475
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, 
> we need to add LICENSE, NOTICE, and DISCLAIMER files for the binary release.
> Refer to 
> https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
>  for detail.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1475) Add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ binary release

2017-05-27 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1475:
--
Description: 
Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, we 
need to add LICENSE, NOTICE, and DISCLAIMER files for the binary release.

Refer to 
https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
 for detail.

  was:Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary 
release, we need to add LICENSE, NOTICE, and DISCLAIMER files for the binary 
release. Refer to 
https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
 for detail.


> Add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ binary release
> 
>
> Key: HAWQ-1475
> URL: https://issues.apache.org/jira/browse/HAWQ-1475
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ruilong Huo
>Assignee: Ed Espino
> Fix For: 2.2.0.0-incubating
>
>
> Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, 
> we need to add LICENSE, NOTICE, and DISCLAIMER files for the binary release.
> Refer to 
> https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
>  for detail.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1475) Add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ binary release

2017-05-27 Thread Ruilong Huo (JIRA)
Ruilong Huo created HAWQ-1475:
-

 Summary: Add LICENSE, NOTICE, and DISCLAIMER files for Apache HAWQ 
binary release
 Key: HAWQ-1475
 URL: https://issues.apache.org/jira/browse/HAWQ-1475
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Build
Reporter: Ruilong Huo
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


Per review from IPMC vote for Apache HAWQ 2.2.0.0-incubating binary release, we 
need to add LICENSE, NOTICE, and DISCLAIMER files for the binary release. Refer 
to 
https://lists.apache.org/thread.html/13a66f0f5762166f37354ec2cdc7e82f531b74650be16239f5f738ce@%3Cdev.hawq.apache.org%3E
 for detail.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1412) Inconsistent json file for catalog of hawq 2.0

2017-05-07 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1412.
-

> Inconsistent json file for catalog of hawq 2.0
> --
>
> Key: HAWQ-1412
> URL: https://issues.apache.org/jira/browse/HAWQ-1412
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.0.0.0-incubating
>
>
> To generate catalog information for hawq, we need to make sure the right 
> version of metadata information is used. For hawq 2.2, the 
> tools/bin/gppylib/data/2.2.json is created based on 2.2 code base in 
> [HAWQ-1406|https://issues.apache.org/jira/browse/HAWQ-1406]. However, we need 
> to correct that for hawq 2.0.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1411) Inconsistent json file for catalog of hawq 2.1

2017-05-07 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1411.
-

> Inconsistent json file for catalog of hawq 2.1
> --
>
> Key: HAWQ-1411
> URL: https://issues.apache.org/jira/browse/HAWQ-1411
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0.0-incubating
>
>
> To generate catalog information for hawq, we need to make sure the right 
> version of metadata information is used. For hawq 2.2, the 
> tools/bin/gppylib/data/2.2.json is created based on 2.2 code base in 
> [HAWQ-1406|https://issues.apache.org/jira/browse/HAWQ-1406]. However, we need 
> to correct that for hawq 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HAWQ-1412) Inconsistent json file for catalog of hawq 2.0

2017-05-07 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1412.
---
Resolution: Fixed

> Inconsistent json file for catalog of hawq 2.0
> --
>
> Key: HAWQ-1412
> URL: https://issues.apache.org/jira/browse/HAWQ-1412
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: 2.0.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.0.0.0-incubating
>
>
> To generate catalog information for hawq, we need to make sure the right 
> version of metadata information is used. For hawq 2.2, the 
> tools/bin/gppylib/data/2.2.json is created based on 2.2 code base in 
> [HAWQ-1406|https://issues.apache.org/jira/browse/HAWQ-1406]. However, we need 
> to correct that for hawq 2.0.





[jira] [Resolved] (HAWQ-1411) Inconsistent json file for catalog of hawq 2.1

2017-05-07 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1411.
---
Resolution: Fixed

> Inconsistent json file for catalog of hawq 2.1
> --
>
> Key: HAWQ-1411
> URL: https://issues.apache.org/jira/browse/HAWQ-1411
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0.0-incubating
>
>
> To generate catalog information for hawq, we need to make sure the right 
> version of metadata information is used. For hawq 2.2, the 
> tools/bin/gppylib/data/2.2.json is created based on 2.2 code base in 
> [HAWQ-1406|https://issues.apache.org/jira/browse/HAWQ-1406]. However, we need 
> to correct that for hawq 2.1.





[jira] [Closed] (HAWQ-1379) Do not send options multiple times in build_startup_packet()

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1379.
-

> Do not send options multiple times in build_startup_packet()
> 
>
> Key: HAWQ-1379
> URL: https://issues.apache.org/jira/browse/HAWQ-1379
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> build_startup_packet() builds a libpq packet; however, it includes 
> conn->pgoptions more than once, which is unnecessary and wastes network 
> bandwidth.





[jira] [Updated] (HAWQ-1379) Do not send options multiple times in build_startup_packet()

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1379:
--
Component/s: Core

> Do not send options multiple times in build_startup_packet()
> 
>
> Key: HAWQ-1379
> URL: https://issues.apache.org/jira/browse/HAWQ-1379
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> build_startup_packet() builds a libpq packet; however, it includes 
> conn->pgoptions more than once, which is unnecessary and wastes network 
> bandwidth.





[jira] [Updated] (HAWQ-1379) Do not send options multiple times in build_startup_packet()

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1379:
--
Affects Version/s: 2.1.0.0-incubating

> Do not send options multiple times in build_startup_packet()
> 
>
> Key: HAWQ-1379
> URL: https://issues.apache.org/jira/browse/HAWQ-1379
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> build_startup_packet() builds a libpq packet; however, it includes 
> conn->pgoptions more than once, which is unnecessary and wastes network 
> bandwidth.





[jira] [Resolved] (HAWQ-1379) Do not send options multiple times in build_startup_packet()

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1379.
---
Resolution: Fixed

> Do not send options multiple times in build_startup_packet()
> 
>
> Key: HAWQ-1379
> URL: https://issues.apache.org/jira/browse/HAWQ-1379
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> build_startup_packet() builds a libpq packet; however, it includes 
> conn->pgoptions more than once, which is unnecessary and wastes network 
> bandwidth.





[jira] [Updated] (HAWQ-1408) PANICs during COPY ... FROM STDIN

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1408:
--
Affects Version/s: (was: backlog)
   2.1.0.0-incubating

> PANICs during COPY ... FROM STDIN
> -
>
> Key: HAWQ-1408
> URL: https://issues.apache.org/jira/browse/HAWQ-1408
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.2.0.0-incubating
>
>
> We found a PANIC (and the respective core dumps). From initial analysis of 
> the logs and core dump, the query causing this PANIC is a "COPY ... FROM 
> STDIN". This query does not always panic.
> These kinds of queries are executed from Java/Scala code (by one of IG's Spark 
> jobs). Connection to the DB is managed by a connection pool (commons-dbcp2) and 
> validated on borrow by a "select 1" validation query. IG is using 
> postgresql-9.4-1206-jdbc41 as the Java driver to create those connections. I 
> believe they should be using the driver from DataDirect, available on PivNet; 
> however, I haven't found hard evidence pointing to the driver as the root cause.
> Below is my initial analysis of the packcore for the master PANIC; not sure if 
> this helps or makes sense.
> This is the backtrace of the packcore for process 466858:
> {code}
> (gdb) bt
> #0  0x7fd875f906ab in raise () from 
> /data/logs/52280/packcore-core.postgres.466858/lib64/libpthread.so.0
> #1  0x008c0b19 in SafeHandlerForSegvBusIll (postgres_signal_arg=11, 
> processName=) at elog.c:4519
> #2  
> #3  0x0053b9c3 in SetSegnoForWrite (existing_segnos=0x4c46ff0, 
> existing_segnos@entry=0x0, relid=relid@entry=1195061, 
> segment_num=segment_num@entry=6, forNewRel=forNewRel@entry=0 '\000', 
> keepHash=keepHash@entry=1 '\001') at appendonlywriter.c:1166
> #4  0x0053c08f in assignPerRelSegno 
> (all_relids=all_relids@entry=0x2b96d68, segment_num=6) at 
> appendonlywriter.c:1212
> #5  0x005f79e8 in DoCopy (stmt=stmt@entry=0x2b2a3d8, 
> queryString=) at copy.c:1591
> #6  0x007ef737 in ProcessUtility 
> (parsetree=parsetree@entry=0x2b2a3d8, queryString=0x2c2f550 "COPY 
> mis_data_ig_client_derived_attributes.client_derived_attributes_src (id, 
> tracking_id, name, value_string, value_timestamp, value_number, 
> value_boolean, environment, account, channel, device, feat"...,
> params=0x0, isTopLevel=isTopLevel@entry=1 '\001', 
> dest=dest@entry=0x2b2a7c8, completionTag=completionTag@entry=0x7ffcb5e318e0 
> "") at utility.c:1076
> #7  0x007ea95e in PortalRunUtility (portal=portal@entry=0x2b8eab0, 
> utilityStmt=utilityStmt@entry=0x2b2a3d8, isTopLevel=isTopLevel@entry=1 
> '\001', dest=dest@entry=0x2b2a7c8, 
> completionTag=completionTag@entry=0x7ffcb5e318e0 "") at pquery.c:1969
> #8  0x007ec13e in PortalRunMulti (portal=portal@entry=0x2b8eab0, 
> isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x2b2a7c8, 
> altdest=altdest@entry=0x2b2a7c8, 
> completionTag=completionTag@entry=0x7ffcb5e318e0 "") at pquery.c:2079
> #9  0x007ede95 in PortalRun (portal=portal@entry=0x2b8eab0, 
> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', 
> dest=dest@entry=0x2b2a7c8, altdest=altdest@entry=0x2b2a7c8, 
> completionTag=completionTag@entry=0x7ffcb5e318e0 "") at pquery.c:1596
> #10 0x007e5ad9 in exec_simple_query 
> (query_string=query_string@entry=0x2b29100 "COPY 
> mis_data_ig_client_derived_attributes.client_derived_attributes_src (id, 
> tracking_id, name, value_string, value_timestamp, value_number, 
> value_boolean, environment, account, channel, device, feat"...,
> seqServerHost=seqServerHost@entry=0x0, 
> seqServerPort=seqServerPort@entry=-1) at postgres.c:1816
> #11 0x007e6cb2 in PostgresMain (argc=, argv= out>, argv@entry=0x29d7820, username=0x29d75d0 "mis_ig") at postgres.c:4840
> #12 0x00799540 in BackendRun (port=0x29afc50) at postmaster.c:5915
> #13 BackendStartup (port=0x29afc50) at postmaster.c:5484
> #14 ServerLoop () at postmaster.c:2163
> #15 0x0079c309 in PostmasterMain (argc=, 
> argv=) at postmaster.c:1454
> #16 0x004a4209 in main (argc=9, argv=0x29af010) at main.c:226
> {code}
> Jumping into frame 3 and running info locals, we found something odd about 
> the "status" variable:
> {code}
> (gdb) f 3
> #3  0x0053b9c3 in SetSegnoForWrite (existing_segnos=0x4c46ff0, 
> existing_segnos@entry=0x0, relid=relid@entry=1195061, 
> segment_num=segment_num@entry=6, forNewRel=forNewRel@entry=0 '\000', 
> keepHash=keepHash@entry=1 '\001') at appendonlywriter.c:1166
> 1166  appendonlywriter.c: No such file or directory.
> (gdb) info locals
> status = 0x0
> [...]
> {code}
> This panic comes from this piece of code in "appendonlywriter.c":
> 

[jira] [Updated] (HAWQ-1378) Elaborate the "invalid command-line arguments for server process" error.

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1378:
--
Component/s: Query Execution

> Elaborate the "invalid command-line arguments for server process" error.
> 
>
> Key: HAWQ-1378
> URL: https://issues.apache.org/jira/browse/HAWQ-1378
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.1.0.0-incubating
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> I saw the following errors when running several times,
> "Error dispatching to ***: connection pointer is NULL."
> FATAL:  invalid command-line arguments for server process
> While this usually means there is a bug in the related code, the code should 
> report a more detailed log so that we could find which argument is wrong with 
> less pain, even if there is a log-level switch for argument dumping.





[jira] [Closed] (HAWQ-1378) Elaborate the "invalid command-line arguments for server process" error.

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1378.
-

> Elaborate the "invalid command-line arguments for server process" error.
> 
>
> Key: HAWQ-1378
> URL: https://issues.apache.org/jira/browse/HAWQ-1378
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.1.0.0-incubating
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> I saw the following errors when running several times,
> "Error dispatching to ***: connection pointer is NULL."
> FATAL:  invalid command-line arguments for server process
> While this usually means there is a bug in the related code, the code should 
> report a more detailed log so that we could find which argument is wrong with 
> less pain, even if there is a log-level switch for argument dumping.





[jira] [Reopened] (HAWQ-1379) Do not send options multiple times in build_startup_packet()

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reopened HAWQ-1379:
---

Reopen to mark the right affected version.

> Do not send options multiple times in build_startup_packet()
> 
>
> Key: HAWQ-1379
> URL: https://issues.apache.org/jira/browse/HAWQ-1379
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> build_startup_packet() builds a libpq packet; however, it includes 
> conn->pgoptions more than once, which is unnecessary and wastes network 
> bandwidth.





[jira] [Resolved] (HAWQ-1378) Elaborate the "invalid command-line arguments for server process" error.

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1378.
---
Resolution: Fixed

> Elaborate the "invalid command-line arguments for server process" error.
> 
>
> Key: HAWQ-1378
> URL: https://issues.apache.org/jira/browse/HAWQ-1378
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.1.0.0-incubating
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> I saw the following errors when running several times,
> "Error dispatching to ***: connection pointer is NULL."
> FATAL:  invalid command-line arguments for server process
> While this usually means there is a bug in the related code, the code should 
> report a more detailed log so that we could find which argument is wrong with 
> less pain, even if there is a log-level switch for argument dumping.





[jira] [Reopened] (HAWQ-1378) Elaborate the "invalid command-line arguments for server process" error.

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reopened HAWQ-1378:
---

Reopen to mark the right affected version.

> Elaborate the "invalid command-line arguments for server process" error.
> 
>
> Key: HAWQ-1378
> URL: https://issues.apache.org/jira/browse/HAWQ-1378
> Project: Apache HAWQ
>  Issue Type: Bug
>Affects Versions: 2.1.0.0-incubating
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> I saw the following errors when running several times,
> "Error dispatching to ***: connection pointer is NULL."
> FATAL:  invalid command-line arguments for server process
> While this usually means there is a bug in the related code, the code should 
> report a more detailed log so that we could find which argument is wrong with 
> less pain, even if there is a log-level switch for argument dumping.





[jira] [Updated] (HAWQ-1378) Elaborate the "invalid command-line arguments for server process" error.

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1378:
--
Affects Version/s: 2.1.0.0-incubating

> Elaborate the "invalid command-line arguments for server process" error.
> 
>
> Key: HAWQ-1378
> URL: https://issues.apache.org/jira/browse/HAWQ-1378
> Project: Apache HAWQ
>  Issue Type: Bug
>Affects Versions: 2.1.0.0-incubating
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> I saw the following errors when running several times,
> "Error dispatching to ***: connection pointer is NULL."
> FATAL:  invalid command-line arguments for server process
> While this usually means there is a bug in the related code, the code should 
> report a more detailed log so that we could find which argument is wrong with 
> less pain, even if there is a log-level switch for argument dumping.





[jira] [Closed] (HAWQ-1371) QE process hang in shared input scan

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1371.
-

> QE process hang in shared input scan
> 
>
> Key: HAWQ-1371
> URL: https://issues.apache.org/jira/browse/HAWQ-1371
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.1.0.0-incubating
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.2.0.0-incubating
>
>
> QE processes hang on some segment nodes while the QD and QEs on other 
> segment nodes have terminated.
> {code}
> on segment test2:
> [gpadmin@test2 ~]$ pp
> gpadmin   21614  0.0  1.2 788636 407428 ?   Ss   Feb26   1:19 
> /usr/local/hawq_2_1_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-YARN/product/segmentdd -p 
> 31100 --silent-mode=true -M segment -i
> gpadmin   21615  0.0  0.0 279896  6952 ?Ss   Feb26   0:08 postgres: 
> port 31100, logger process
> gpadmin   21618  0.0  0.0 282128  6980 ?Ss   Feb26   0:00 postgres: 
> port 31100, stats collector process
> gpadmin   21619  0.0  0.0 788636  7280 ?Ss   Feb26   0:11 postgres: 
> port 31100, writer process
> gpadmin   21620  0.0  0.0 788636  7064 ?Ss   Feb26   0:01 postgres: 
> port 31100, checkpoint process
> gpadmin   21621  0.0  0.0 793048 11752 ?SFeb26   0:19 postgres: 
> port 31100, segment resource manager
> gpadmin   91760  0.0  0.0 861000 16840 ?TNsl Feb26   0:07 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15250) con558 seg4 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin   91762  0.0  0.0 861064 17116 ?SNsl Feb26   0:08 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15253) con558 seg5 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin  216648  0.0  0.0 103244   788 pts/0S+   19:54   0:00 grep 
> postgres
> {code}
> QE stack trace is:
> {code}
> (gdb) bt
> #0  0x0032214e1523 in select () from /lib64/libc.so.6
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> #2  0x00695798 in ExecEndMaterial (node=0x1d2eb50) at 
> nodeMaterial.c:512
> #3  0x0067048d in ExecEndNode (node=0x1d2eb50) at execProcnode.c:1681
> #4  0x0069c6b5 in ExecEndShareInputScan (node=0x1d2e6f0) at 
> nodeShareInputScan.c:382
> #5  0x0067042a in ExecEndNode (node=0x1d2e6f0) at execProcnode.c:1674
> #6  0x006ac9be in ExecEndSequence (node=0x1d23890) at 
> nodeSequence.c:165
> #7  0x006705f0 in ExecEndNode (node=0x1d23890) at execProcnode.c:1583
> #8  0x0069a0ab in ExecEndResult (node=0x1d214a0) at nodeResult.c:481
> #9  0x0067060d in ExecEndNode (node=0x1d214a0) at execProcnode.c:1575
> #10 0x0069a0ab in ExecEndResult (node=0x1d20860) at nodeResult.c:481
> #11 0x0067060d in ExecEndNode (node=0x1d20860) at execProcnode.c:1575
> #12 0x00698fd2 in ExecEndMotion (node=0x1d20320) at nodeMotion.c:1230
> #13 0x00670434 in ExecEndNode (node=0x1d20320) at execProcnode.c:1713
> #14 0x00669da7 in ExecEndPlan (planstate=0x1d20320, estate=0x1cb6b40) 
> at execMain.c:2896
> #15 0x0066a311 in ExecutorEnd (queryDesc=0x1cabf20) at execMain.c:1407
> #16 0x006195f2 in PortalCleanupHelper (portal=0x1cbcc40) at 
> portalcmds.c:365
> #17 PortalCleanup (portal=0x1cbcc40) at portalcmds.c:317
> #18 0x00900544 in AtAbort_Portals () at portalmem.c:693
> #19 0x004e697f in AbortTransaction () at xact.c:2800
> #20 0x004e7565 in AbortCurrentTransaction () at xact.c:3377
> #21 0x007ed0fa in PostgresMain (argc=, 
> argv=, username=0x1b47f10 "gpadmin") at postgres.c:4630
> #22 0x007a05d0 in BackendRun () at postmaster.c:5915
> #23 BackendStartup () at postmaster.c:5484
> #24 ServerLoop () at postmaster.c:2163
> #25 0x007a3399 in PostmasterMain (argc=Unhandled dwarf expression 
> opcode 0xf3
> ) at postmaster.c:1454
> #26 0x004a52e9 in main (argc=9, argv=0x1b0cd10) at main.c:226
> (gdb) p CurrentTransactionState->state
> $1 = TRANS_ABORT
> (gdb) p pctxt->donefd
> No symbol "pctxt" in current context.
> (gdb) f 1
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> 989   nodeShareInputScan.c: No such file or directory.
>   in nodeShareInputScan.c
> (gdb) p pctxt->donefd
> $2 = 15
> {code}





[jira] [Resolved] (HAWQ-1371) QE process hang in shared input scan

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1371.
---
Resolution: Fixed

> QE process hang in shared input scan
> 
>
> Key: HAWQ-1371
> URL: https://issues.apache.org/jira/browse/HAWQ-1371
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.1.0.0-incubating
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.2.0.0-incubating
>
>
> QE processes hang on some segment nodes while the QD and QEs on other 
> segment nodes have terminated.
> {code}
> on segment test2:
> [gpadmin@test2 ~]$ pp
> gpadmin   21614  0.0  1.2 788636 407428 ?   Ss   Feb26   1:19 
> /usr/local/hawq_2_1_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-YARN/product/segmentdd -p 
> 31100 --silent-mode=true -M segment -i
> gpadmin   21615  0.0  0.0 279896  6952 ?Ss   Feb26   0:08 postgres: 
> port 31100, logger process
> gpadmin   21618  0.0  0.0 282128  6980 ?Ss   Feb26   0:00 postgres: 
> port 31100, stats collector process
> gpadmin   21619  0.0  0.0 788636  7280 ?Ss   Feb26   0:11 postgres: 
> port 31100, writer process
> gpadmin   21620  0.0  0.0 788636  7064 ?Ss   Feb26   0:01 postgres: 
> port 31100, checkpoint process
> gpadmin   21621  0.0  0.0 793048 11752 ?SFeb26   0:19 postgres: 
> port 31100, segment resource manager
> gpadmin   91760  0.0  0.0 861000 16840 ?TNsl Feb26   0:07 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15250) con558 seg4 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin   91762  0.0  0.0 861064 17116 ?SNsl Feb26   0:08 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15253) con558 seg5 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin  216648  0.0  0.0 103244   788 pts/0S+   19:54   0:00 grep 
> postgres
> {code}
> QE stack trace is:
> {code}
> (gdb) bt
> #0  0x0032214e1523 in select () from /lib64/libc.so.6
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> #2  0x00695798 in ExecEndMaterial (node=0x1d2eb50) at 
> nodeMaterial.c:512
> #3  0x0067048d in ExecEndNode (node=0x1d2eb50) at execProcnode.c:1681
> #4  0x0069c6b5 in ExecEndShareInputScan (node=0x1d2e6f0) at 
> nodeShareInputScan.c:382
> #5  0x0067042a in ExecEndNode (node=0x1d2e6f0) at execProcnode.c:1674
> #6  0x006ac9be in ExecEndSequence (node=0x1d23890) at 
> nodeSequence.c:165
> #7  0x006705f0 in ExecEndNode (node=0x1d23890) at execProcnode.c:1583
> #8  0x0069a0ab in ExecEndResult (node=0x1d214a0) at nodeResult.c:481
> #9  0x0067060d in ExecEndNode (node=0x1d214a0) at execProcnode.c:1575
> #10 0x0069a0ab in ExecEndResult (node=0x1d20860) at nodeResult.c:481
> #11 0x0067060d in ExecEndNode (node=0x1d20860) at execProcnode.c:1575
> #12 0x00698fd2 in ExecEndMotion (node=0x1d20320) at nodeMotion.c:1230
> #13 0x00670434 in ExecEndNode (node=0x1d20320) at execProcnode.c:1713
> #14 0x00669da7 in ExecEndPlan (planstate=0x1d20320, estate=0x1cb6b40) 
> at execMain.c:2896
> #15 0x0066a311 in ExecutorEnd (queryDesc=0x1cabf20) at execMain.c:1407
> #16 0x006195f2 in PortalCleanupHelper (portal=0x1cbcc40) at 
> portalcmds.c:365
> #17 PortalCleanup (portal=0x1cbcc40) at portalcmds.c:317
> #18 0x00900544 in AtAbort_Portals () at portalmem.c:693
> #19 0x004e697f in AbortTransaction () at xact.c:2800
> #20 0x004e7565 in AbortCurrentTransaction () at xact.c:3377
> #21 0x007ed0fa in PostgresMain (argc=, 
> argv=, username=0x1b47f10 "gpadmin") at postgres.c:4630
> #22 0x007a05d0 in BackendRun () at postmaster.c:5915
> #23 BackendStartup () at postmaster.c:5484
> #24 ServerLoop () at postmaster.c:2163
> #25 0x007a3399 in PostmasterMain (argc=Unhandled dwarf expression 
> opcode 0xf3
> ) at postmaster.c:1454
> #26 0x004a52e9 in main (argc=9, argv=0x1b0cd10) at main.c:226
> (gdb) p CurrentTransactionState->state
> $1 = TRANS_ABORT
> (gdb) p pctxt->donefd
> No symbol "pctxt" in current context.
> (gdb) f 1
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> 989   nodeShareInputScan.c: No such file or directory.
>   in nodeShareInputScan.c
> (gdb) p pctxt->donefd
> $2 = 15
> {code}





[jira] [Updated] (HAWQ-1371) QE process hang in shared input scan

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1371:
--
Fix Version/s: (was: backlog)
   2.2.0.0-incubating

> QE process hang in shared input scan
> 
>
> Key: HAWQ-1371
> URL: https://issues.apache.org/jira/browse/HAWQ-1371
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.1.0.0-incubating
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.2.0.0-incubating
>
>
> QE processes hang on some segment nodes while the QD and QEs on other 
> segment nodes have terminated.
> {code}
> on segment test2:
> [gpadmin@test2 ~]$ pp
> gpadmin   21614  0.0  1.2 788636 407428 ?   Ss   Feb26   1:19 
> /usr/local/hawq_2_1_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-YARN/product/segmentdd -p 
> 31100 --silent-mode=true -M segment -i
> gpadmin   21615  0.0  0.0 279896  6952 ?Ss   Feb26   0:08 postgres: 
> port 31100, logger process
> gpadmin   21618  0.0  0.0 282128  6980 ?Ss   Feb26   0:00 postgres: 
> port 31100, stats collector process
> gpadmin   21619  0.0  0.0 788636  7280 ?Ss   Feb26   0:11 postgres: 
> port 31100, writer process
> gpadmin   21620  0.0  0.0 788636  7064 ?Ss   Feb26   0:01 postgres: 
> port 31100, checkpoint process
> gpadmin   21621  0.0  0.0 793048 11752 ?SFeb26   0:19 postgres: 
> port 31100, segment resource manager
> gpadmin   91760  0.0  0.0 861000 16840 ?TNsl Feb26   0:07 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15250) con558 seg4 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin   91762  0.0  0.0 861064 17116 ?SNsl Feb26   0:08 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15253) con558 seg5 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin  216648  0.0  0.0 103244   788 pts/0S+   19:54   0:00 grep 
> postgres
> {code}
> QE stack trace is:
> {code}
> (gdb) bt
> #0  0x0032214e1523 in select () from /lib64/libc.so.6
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> #2  0x00695798 in ExecEndMaterial (node=0x1d2eb50) at 
> nodeMaterial.c:512
> #3  0x0067048d in ExecEndNode (node=0x1d2eb50) at execProcnode.c:1681
> #4  0x0069c6b5 in ExecEndShareInputScan (node=0x1d2e6f0) at 
> nodeShareInputScan.c:382
> #5  0x0067042a in ExecEndNode (node=0x1d2e6f0) at execProcnode.c:1674
> #6  0x006ac9be in ExecEndSequence (node=0x1d23890) at 
> nodeSequence.c:165
> #7  0x006705f0 in ExecEndNode (node=0x1d23890) at execProcnode.c:1583
> #8  0x0069a0ab in ExecEndResult (node=0x1d214a0) at nodeResult.c:481
> #9  0x0067060d in ExecEndNode (node=0x1d214a0) at execProcnode.c:1575
> #10 0x0069a0ab in ExecEndResult (node=0x1d20860) at nodeResult.c:481
> #11 0x0067060d in ExecEndNode (node=0x1d20860) at execProcnode.c:1575
> #12 0x00698fd2 in ExecEndMotion (node=0x1d20320) at nodeMotion.c:1230
> #13 0x00670434 in ExecEndNode (node=0x1d20320) at execProcnode.c:1713
> #14 0x00669da7 in ExecEndPlan (planstate=0x1d20320, estate=0x1cb6b40) 
> at execMain.c:2896
> #15 0x0066a311 in ExecutorEnd (queryDesc=0x1cabf20) at execMain.c:1407
> #16 0x006195f2 in PortalCleanupHelper (portal=0x1cbcc40) at 
> portalcmds.c:365
> #17 PortalCleanup (portal=0x1cbcc40) at portalcmds.c:317
> #18 0x00900544 in AtAbort_Portals () at portalmem.c:693
> #19 0x004e697f in AbortTransaction () at xact.c:2800
> #20 0x004e7565 in AbortCurrentTransaction () at xact.c:3377
> #21 0x007ed0fa in PostgresMain (argc=, 
> argv=, username=0x1b47f10 "gpadmin") at postgres.c:4630
> #22 0x007a05d0 in BackendRun () at postmaster.c:5915
> #23 BackendStartup () at postmaster.c:5484
> #24 ServerLoop () at postmaster.c:2163
> #25 0x007a3399 in PostmasterMain (argc=Unhandled dwarf expression 
> opcode 0xf3
> ) at postmaster.c:1454
> #26 0x004a52e9 in main (argc=9, argv=0x1b0cd10) at main.c:226
> (gdb) p CurrentTransactionState->state
> $1 = TRANS_ABORT
> (gdb) p pctxt->donefd
> No symbol "pctxt" in current context.
> (gdb) f 1
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> 989   nodeShareInputScan.c: No such file or directory.
>   in nodeShareInputScan.c
> (gdb) p pctxt->donefd
> $2 = 15
> {code}





[jira] [Commented] (HAWQ-1371) QE process hang in shared input scan

2017-04-04 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956180#comment-15956180
 ] 

Ruilong Huo commented on HAWQ-1371:
---

Reopen to correct the fix version.

> QE process hang in shared input scan
> 
>
> Key: HAWQ-1371
> URL: https://issues.apache.org/jira/browse/HAWQ-1371
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.1.0.0-incubating
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.2.0.0-incubating
>
>
> QE processes hang on some segment nodes while the QD and QEs on other 
> segment nodes have terminated.
> {code}
> on segment test2:
> [gpadmin@test2 ~]$ pp
> gpadmin   21614  0.0  1.2 788636 407428 ?   Ss   Feb26   1:19 
> /usr/local/hawq_2_1_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-YARN/product/segmentdd -p 
> 31100 --silent-mode=true -M segment -i
> gpadmin   21615  0.0  0.0 279896  6952 ?Ss   Feb26   0:08 postgres: 
> port 31100, logger process
> gpadmin   21618  0.0  0.0 282128  6980 ?Ss   Feb26   0:00 postgres: 
> port 31100, stats collector process
> gpadmin   21619  0.0  0.0 788636  7280 ?Ss   Feb26   0:11 postgres: 
> port 31100, writer process
> gpadmin   21620  0.0  0.0 788636  7064 ?Ss   Feb26   0:01 postgres: 
> port 31100, checkpoint process
> gpadmin   21621  0.0  0.0 793048 11752 ?SFeb26   0:19 postgres: 
> port 31100, segment resource manager
> gpadmin   91760  0.0  0.0 861000 16840 ?TNsl Feb26   0:07 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15250) con558 seg4 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin   91762  0.0  0.0 861064 17116 ?SNsl Feb26   0:08 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15253) con558 seg5 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin  216648  0.0  0.0 103244   788 pts/0S+   19:54   0:00 grep 
> postgres
> {code}
> QE stack trace is:
> {code}
> (gdb) bt
> #0  0x0032214e1523 in select () from /lib64/libc.so.6
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> #2  0x00695798 in ExecEndMaterial (node=0x1d2eb50) at 
> nodeMaterial.c:512
> #3  0x0067048d in ExecEndNode (node=0x1d2eb50) at execProcnode.c:1681
> #4  0x0069c6b5 in ExecEndShareInputScan (node=0x1d2e6f0) at 
> nodeShareInputScan.c:382
> #5  0x0067042a in ExecEndNode (node=0x1d2e6f0) at execProcnode.c:1674
> #6  0x006ac9be in ExecEndSequence (node=0x1d23890) at 
> nodeSequence.c:165
> #7  0x006705f0 in ExecEndNode (node=0x1d23890) at execProcnode.c:1583
> #8  0x0069a0ab in ExecEndResult (node=0x1d214a0) at nodeResult.c:481
> #9  0x0067060d in ExecEndNode (node=0x1d214a0) at execProcnode.c:1575
> #10 0x0069a0ab in ExecEndResult (node=0x1d20860) at nodeResult.c:481
> #11 0x0067060d in ExecEndNode (node=0x1d20860) at execProcnode.c:1575
> #12 0x00698fd2 in ExecEndMotion (node=0x1d20320) at nodeMotion.c:1230
> #13 0x00670434 in ExecEndNode (node=0x1d20320) at execProcnode.c:1713
> #14 0x00669da7 in ExecEndPlan (planstate=0x1d20320, estate=0x1cb6b40) 
> at execMain.c:2896
> #15 0x0066a311 in ExecutorEnd (queryDesc=0x1cabf20) at execMain.c:1407
> #16 0x006195f2 in PortalCleanupHelper (portal=0x1cbcc40) at 
> portalcmds.c:365
> #17 PortalCleanup (portal=0x1cbcc40) at portalcmds.c:317
> #18 0x00900544 in AtAbort_Portals () at portalmem.c:693
> #19 0x004e697f in AbortTransaction () at xact.c:2800
> #20 0x004e7565 in AbortCurrentTransaction () at xact.c:3377
> #21 0x007ed0fa in PostgresMain (argc=<optimized out>, 
> argv=<optimized out>, username=0x1b47f10 "gpadmin") at postgres.c:4630
> #22 0x007a05d0 in BackendRun () at postmaster.c:5915
> #23 BackendStartup () at postmaster.c:5484
> #24 ServerLoop () at postmaster.c:2163
> #25 0x007a3399 in PostmasterMain (argc=Unhandled dwarf expression 
> opcode 0xf3
> ) at postmaster.c:1454
> #26 0x004a52e9 in main (argc=9, argv=0x1b0cd10) at main.c:226
> (gdb) p CurrentTransactionState->state
> $1 = TRANS_ABORT
> (gdb) p pctxt->donefd
> No symbol "pctxt" in current context.
> (gdb) f 1
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> 989   nodeShareInputScan.c: No such file or directory.
>   in nodeShareInputScan.c
> (gdb) p pctxt->donefd
> $2 = 15
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Reopened] (HAWQ-1371) QE process hang in shared input scan

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reopened HAWQ-1371:
---

> QE process hang in shared input scan
> 
>
> Key: HAWQ-1371
> URL: https://issues.apache.org/jira/browse/HAWQ-1371
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.1.0.0-incubating
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.2.0.0-incubating
>
>
> QE processes hang on some segment nodes while the QD and QEs on other 
> segment nodes have terminated.
> {code}
> on segment test2:
> [gpadmin@test2 ~]$ pp
> gpadmin   21614  0.0  1.2 788636 407428 ?   Ss   Feb26   1:19 
> /usr/local/hawq_2_1_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-YARN/product/segmentdd -p 
> 31100 --silent-mode=true -M segment -i
> gpadmin   21615  0.0  0.0 279896  6952 ?Ss   Feb26   0:08 postgres: 
> port 31100, logger process
> gpadmin   21618  0.0  0.0 282128  6980 ?Ss   Feb26   0:00 postgres: 
> port 31100, stats collector process
> gpadmin   21619  0.0  0.0 788636  7280 ?Ss   Feb26   0:11 postgres: 
> port 31100, writer process
> gpadmin   21620  0.0  0.0 788636  7064 ?Ss   Feb26   0:01 postgres: 
> port 31100, checkpoint process
> gpadmin   21621  0.0  0.0 793048 11752 ?SFeb26   0:19 postgres: 
> port 31100, segment resource manager
> gpadmin   91760  0.0  0.0 861000 16840 ?TNsl Feb26   0:07 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15250) con558 seg4 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin   91762  0.0  0.0 861064 17116 ?SNsl Feb26   0:08 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15253) con558 seg5 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin  216648  0.0  0.0 103244   788 pts/0S+   19:54   0:00 grep 
> postgres
> {code}
> QE stack trace is:
> {code}
> (gdb) bt
> #0  0x0032214e1523 in select () from /lib64/libc.so.6
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> #2  0x00695798 in ExecEndMaterial (node=0x1d2eb50) at 
> nodeMaterial.c:512
> #3  0x0067048d in ExecEndNode (node=0x1d2eb50) at execProcnode.c:1681
> #4  0x0069c6b5 in ExecEndShareInputScan (node=0x1d2e6f0) at 
> nodeShareInputScan.c:382
> #5  0x0067042a in ExecEndNode (node=0x1d2e6f0) at execProcnode.c:1674
> #6  0x006ac9be in ExecEndSequence (node=0x1d23890) at 
> nodeSequence.c:165
> #7  0x006705f0 in ExecEndNode (node=0x1d23890) at execProcnode.c:1583
> #8  0x0069a0ab in ExecEndResult (node=0x1d214a0) at nodeResult.c:481
> #9  0x0067060d in ExecEndNode (node=0x1d214a0) at execProcnode.c:1575
> #10 0x0069a0ab in ExecEndResult (node=0x1d20860) at nodeResult.c:481
> #11 0x0067060d in ExecEndNode (node=0x1d20860) at execProcnode.c:1575
> #12 0x00698fd2 in ExecEndMotion (node=0x1d20320) at nodeMotion.c:1230
> #13 0x00670434 in ExecEndNode (node=0x1d20320) at execProcnode.c:1713
> #14 0x00669da7 in ExecEndPlan (planstate=0x1d20320, estate=0x1cb6b40) 
> at execMain.c:2896
> #15 0x0066a311 in ExecutorEnd (queryDesc=0x1cabf20) at execMain.c:1407
> #16 0x006195f2 in PortalCleanupHelper (portal=0x1cbcc40) at 
> portalcmds.c:365
> #17 PortalCleanup (portal=0x1cbcc40) at portalcmds.c:317
> #18 0x00900544 in AtAbort_Portals () at portalmem.c:693
> #19 0x004e697f in AbortTransaction () at xact.c:2800
> #20 0x004e7565 in AbortCurrentTransaction () at xact.c:3377
> #21 0x007ed0fa in PostgresMain (argc=<optimized out>, 
> argv=<optimized out>, username=0x1b47f10 "gpadmin") at postgres.c:4630
> #22 0x007a05d0 in BackendRun () at postmaster.c:5915
> #23 BackendStartup () at postmaster.c:5484
> #24 ServerLoop () at postmaster.c:2163
> #25 0x007a3399 in PostmasterMain (argc=Unhandled dwarf expression 
> opcode 0xf3
> ) at postmaster.c:1454
> #26 0x004a52e9 in main (argc=9, argv=0x1b0cd10) at main.c:226
> (gdb) p CurrentTransactionState->state
> $1 = TRANS_ABORT
> (gdb) p pctxt->donefd
> No symbol "pctxt" in current context.
> (gdb) f 1
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> 989   nodeShareInputScan.c: No such file or directory.
>   in nodeShareInputScan.c
> (gdb) p pctxt->donefd
> $2 = 15
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1371) QE process hang in shared input scan

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1371:
--
Affects Version/s: 2.1.0.0-incubating

> QE process hang in shared input scan
> 
>
> Key: HAWQ-1371
> URL: https://issues.apache.org/jira/browse/HAWQ-1371
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.1.0.0-incubating
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.2.0.0-incubating
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1408) PANICs during COPY ... FROM STDIN

2017-04-04 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1408:
--
Fix Version/s: (was: 2.1.0.0-incubating)
   2.2.0.0-incubating

> PANICs during COPY ... FROM STDIN
> -
>
> Key: HAWQ-1408
> URL: https://issues.apache.org/jira/browse/HAWQ-1408
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: backlog
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.2.0.0-incubating
>
>
> We found PANICs (and the corresponding core dumps). From initial analysis 
> of the logs and core dump, the query causing this PANIC is a "COPY ... FROM 
> STDIN". The query does not always panic.
> Queries of this kind are executed from Java/Scala code (by one of the IG 
> Spark jobs). Connections to the DB are managed by a connection pool 
> (commons-dbcp2) and validated on borrow with a "select 1" validation query. 
> IG is using postgresql-9.4-1206-jdbc41 as the Java driver to create those 
> connections. I believe they should be using the driver from DataDirect, 
> available on PivNet; however, I haven't found hard evidence pointing to the 
> driver as the root cause.
> Below is my initial analysis of the packcore for the master PANIC.
> This is the backtrace of the packcore for process 466858:
> {code}
> (gdb) bt
> #0  0x7fd875f906ab in raise () from 
> /data/logs/52280/packcore-core.postgres.466858/lib64/libpthread.so.0
> #1  0x008c0b19 in SafeHandlerForSegvBusIll (postgres_signal_arg=11, 
> processName=<optimized out>) at elog.c:4519
> #2  <signal handler called>
> #3  0x0053b9c3 in SetSegnoForWrite (existing_segnos=0x4c46ff0, 
> existing_segnos@entry=0x0, relid=relid@entry=1195061, 
> segment_num=segment_num@entry=6, forNewRel=forNewRel@entry=0 '\000', 
> keepHash=keepHash@entry=1 '\001') at appendonlywriter.c:1166
> #4  0x0053c08f in assignPerRelSegno 
> (all_relids=all_relids@entry=0x2b96d68, segment_num=6) at 
> appendonlywriter.c:1212
> #5  0x005f79e8 in DoCopy (stmt=stmt@entry=0x2b2a3d8, 
> queryString=<optimized out>) at copy.c:1591
> #6  0x007ef737 in ProcessUtility 
> (parsetree=parsetree@entry=0x2b2a3d8, queryString=0x2c2f550 "COPY 
> mis_data_ig_client_derived_attributes.client_derived_attributes_src (id, 
> tracking_id, name, value_string, value_timestamp, value_number, 
> value_boolean, environment, account, channel, device, feat"...,
> params=0x0, isTopLevel=isTopLevel@entry=1 '\001', 
> dest=dest@entry=0x2b2a7c8, completionTag=completionTag@entry=0x7ffcb5e318e0 
> "") at utility.c:1076
> #7  0x007ea95e in PortalRunUtility (portal=portal@entry=0x2b8eab0, 
> utilityStmt=utilityStmt@entry=0x2b2a3d8, isTopLevel=isTopLevel@entry=1 
> '\001', dest=dest@entry=0x2b2a7c8, 
> completionTag=completionTag@entry=0x7ffcb5e318e0 "") at pquery.c:1969
> #8  0x007ec13e in PortalRunMulti (portal=portal@entry=0x2b8eab0, 
> isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x2b2a7c8, 
> altdest=altdest@entry=0x2b2a7c8, 
> completionTag=completionTag@entry=0x7ffcb5e318e0 "") at pquery.c:2079
> #9  0x007ede95 in PortalRun (portal=portal@entry=0x2b8eab0, 
> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', 
> dest=dest@entry=0x2b2a7c8, altdest=altdest@entry=0x2b2a7c8, 
> completionTag=completionTag@entry=0x7ffcb5e318e0 "") at pquery.c:1596
> #10 0x007e5ad9 in exec_simple_query 
> (query_string=query_string@entry=0x2b29100 "COPY 
> mis_data_ig_client_derived_attributes.client_derived_attributes_src (id, 
> tracking_id, name, value_string, value_timestamp, value_number, 
> value_boolean, environment, account, channel, device, feat"...,
> seqServerHost=seqServerHost@entry=0x0, 
> seqServerPort=seqServerPort@entry=-1) at postgres.c:1816
> #11 0x007e6cb2 in PostgresMain (argc=<optimized out>, argv=<optimized out>, argv@entry=0x29d7820, username=0x29d75d0 "mis_ig") at postgres.c:4840
> #12 0x00799540 in BackendRun (port=0x29afc50) at postmaster.c:5915
> #13 BackendStartup (port=0x29afc50) at postmaster.c:5484
> #14 ServerLoop () at postmaster.c:2163
> #15 0x0079c309 in PostmasterMain (argc=<optimized out>, 
> argv=<optimized out>) at postmaster.c:1454
> #16 0x004a4209 in main (argc=9, argv=0x29af010) at main.c:226
> {code}
> Jumping into frame 3 and running info locals, we found something odd with 
> the "status" variable:
> {code}
> (gdb) f 3
> #3  0x0053b9c3 in SetSegnoForWrite (existing_segnos=0x4c46ff0, 
> existing_segnos@entry=0x0, relid=relid@entry=1195061, 
> segment_num=segment_num@entry=6, forNewRel=forNewRel@entry=0 '\000', 
> keepHash=keepHash@entry=1 '\001') at appendonlywriter.c:1166
> 1166  appendonlywriter.c: No such file or directory.
> (gdb) info locals
> status = 0x0
> [...]
> {code}
> This panic comes from this piece of code in "appendonlywriter.c":
> {code}
> for 

[jira] [Closed] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-04-01 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-1421.
-

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Affects Versions: 2.1.0.0-incubating
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>
> If we build the pxf rpm packages with 'make rpm', we get the packages below:
> {quote}
>   apache-tomcat-7.0.62-el6.noarch.rpm
>   pxf-3.2.1.0-root.el6.noarch.rpm
>   pxf-hbase_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-hive_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-jdbc_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-json_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
>   pxf-service_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> {quote}
> These rpm packages have dependencies on Apache Hadoop components only, 
> which some other Hadoop distributions can't satisfy. E.g.:
> {quote}
> rpm -ivh pxf-hdfs_3_2_1_0-3.2.1.0-root.el6.noarch.rpm
> error: Failed dependencies:
>   pxf-service_3_2_1_0 >= 3.2.1.0 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop >= 2.7.1 is needed by pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
>   hadoop-mapreduce >= 2.7.1 is needed by 
> pxf-hdfs_3_2_1_0-0:3.2.1.0-root.el6.noarch
> {quote}
> We'd better improve the rpm package name format and dependencies:
>   1. Remove the version string like '3_2_1_0'.
>   2. Remove the user name from the build environment.
>   3. Consider whether we need to include the apache-tomcat rpm package in 
> the HAWQ rpm release tarball.
>   4. Improve the hard-coded 'el6' string. (This might be optional.)
>   5. Improve the dependencies, including the dependencies among these pxf 
> rpm packages.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-04-01 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-1421.
---
Resolution: Fixed

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Affects Versions: 2.1.0.0-incubating
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1421) Improve PXF rpm package name format and dependencies

2017-04-01 Thread Ruilong Huo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo updated HAWQ-1421:
--
Affects Version/s: 2.1.0.0-incubating

> Improve PXF rpm package name format and dependencies
> 
>
> Key: HAWQ-1421
> URL: https://issues.apache.org/jira/browse/HAWQ-1421
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build, PXF
>Affects Versions: 2.1.0.0-incubating
>Reporter: Radar Lei
>Assignee: Shivram Mani
> Fix For: 2.2.0.0-incubating
>
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

