[jira] [Commented] (HAWQ-1561) build failed on CentOS 6.8 bzip2

2017-12-05 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279544#comment-16279544
 ] 

Paul Guo commented on HAWQ-1561:


Please do not mix different versions of bzip2: the library you link against and 
the library loaded at runtime could differ, which can lead to mismatched 
function symbols.

You could uninstall all bzip2 packages and then reinstall them via "yum 
install".

> build failed on CentOS 6.8 bzip2
> ---
>
> Key: HAWQ-1561
> URL: https://issues.apache.org/jira/browse/HAWQ-1561
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: xinzhang
>Assignee: Radar Lei
>
> Hi. My env is CentOS release 6.8.
> env:
>  # bzip2 --version
> bzip2, a block-sorting file compressor.  Version 1.0.6, 6-Sept-2010.
> fail log:
>...
>  checking for library containing BZ2_bzDecompress... no
> configure: error: library 'bzip2' is required.
> 'bzip2' is used for table compression.  Check config.log for details.
> It is possible the compiler isn't looking in the proper directory.
> q:
>   CentOS 6.X uses bzip2 1.0.5 by default. The dependency libs are the biggest 
> problem.
>   What should I do? bzip2 1.0.6 is already installed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1561) build failed on CentOS 6.8 bzip2

2017-12-04 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276568#comment-16276568
 ] 

Paul Guo commented on HAWQ-1561:


I just tried. I could run configure with this version. My bzip2 and bzip2-devel 
were installed via "yum install". If your 1.0.5 does not work, please provide 
the error information.



[jira] [Commented] (HAWQ-1561) build failed on CentOS 6.8 bzip2

2017-12-04 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16276543#comment-16276543
 ] 

Paul Guo commented on HAWQ-1561:


You need to install bzip2-devel.

Admittedly, the configure error message should be more explicit.



[jira] [Resolved] (HAWQ-1478) Enable hawq build on suse11

2017-06-01 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo resolved HAWQ-1478.

Resolution: Fixed

> Enable hawq build on suse11
> ---
>
> Key: HAWQ-1478
> URL: https://issues.apache.org/jira/browse/HAWQ-1478
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.2.0.0-incubating
>Reporter: Paul Guo
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Some users want hawq to run on suse (typically suse11). As a first step, we 
> should make it build fine on that platform.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1478) Enable hawq build on suse11

2017-05-31 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1478:
--

 Summary: Enable hawq build on suse11
 Key: HAWQ-1478
 URL: https://issues.apache.org/jira/browse/HAWQ-1478
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Radar Lei


Some users want hawq to run on suse (typically suse11). As a first step, we 
should make it build fine on that platform.





[jira] [Resolved] (HAWQ-1471) "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 4.7.2 is too low to install hawq

2017-05-24 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo resolved HAWQ-1471.

   Resolution: Fixed
 Assignee: Paul Guo  (was: Ed Espino)
Fix Version/s: 2.3.0.0-incubating

> "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 
> 4.7.2 is too low to install hawq
> -
>
> Key: HAWQ-1471
> URL: https://issues.apache.org/jira/browse/HAWQ-1471
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build
>Reporter: fangpei
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> My OS environment is Red Hat 6.5. I referred to the website 
> "https://cwiki.apache.org//confluence/display/HAWQ/Build+and+Install" to 
> build and install hawq. I installed the GCC version specified on the website 
> (gcc 4.7.2 / gcc-c++ 4.7.2), but this error happened:
> "error: 'Hdfs::Internal::Once' cannot be thread-local because it has 
> non-trivial type 'std::once_flag'"
> or
> "error: 'Yarn::Internal::Once' cannot be thread-local because it has 
> non-trivial type 'std::once_flag'"
> I found that gcc 4.7.2's support for C++11 is incomplete, which leads to 
> this error.
> So I installed gcc 4.8.5 / gcc-c++ 4.8.5, and the problem was resolved. 
> gcc / gcc-c++ version 4.7.2 is too low to install hawq, so I suggest 
> updating the website's gcc / gcc-c++ version requirement.





[jira] [Comment Edited] (HAWQ-1471) "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 4.7.2 is too low to install hawq

2017-05-24 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022567#comment-16022567
 ] 

Paul Guo edited comment on HAWQ-1471 at 5/24/17 8:59 AM:
-

Modified the wiki page. So we could close this issue.


was (Author: paul guo):
Modified. So we could close this issue.



[jira] [Commented] (HAWQ-1471) "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 4.7.2 is too low to install hawq

2017-05-24 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022567#comment-16022567
 ] 

Paul Guo commented on HAWQ-1471:


Modified. So we could close this issue.



[jira] [Commented] (HAWQ-1471) "Build and Install" under Red Hat 6.X environment, gcc / gcc-c++ version 4.7.2 is too low to install hawq

2017-05-23 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022263#comment-16022263
 ] 

Paul Guo commented on HAWQ-1471:


There is another gcc requirement on that page (see below). In my centos6.7 dev 
vm, the gcc version is "gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-15)". If I 
remember correctly, I followed those instructions and there was no issue 
compiling on my box.

The default version of gcc in Red Hat/CentOS 6.X is 4.4.7 or lower; you can 
quickly upgrade gcc following the instructions below:

```
cd /etc/yum.repos.d
# make sure you have root permission
wget -O /etc/yum.repos.d/slc6-devtoolset.repo http://linuxsoft.cern.ch/cern/devtoolset/slc6-devtoolset.repo
# install a higher version using devtoolset-2
yum install devtoolset-2-gcc devtoolset-2-binutils devtoolset-2-gcc-c++
# start using software collections
scl enable devtoolset-2 bash
```

I think we should remove the gcc version items on that page (for rhel6.x). By 
the way, next time you might want to send this to the dev/user mailing list, 
since there is a broader audience there and anyone who has permission on that 
page could modify it quickly.



[jira] [Closed] (HAWQ-1455) Wrong results on CTAS query over catalog

2017-05-16 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1455.
--
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> Wrong results on CTAS query over catalog
> 
>
> Key: HAWQ-1455
> URL: https://issues.apache.org/jira/browse/HAWQ-1455
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> The last CTAS SQL returns 0 tuples. This is wrong.
> $ cat catalog.sql
> create temp table t1 (tta varchar, ttb varchar);
> create temp table t2 (tta varchar, ttb varchar);
> insert into t1 values('a', '1');
> insert into t1 values('a', '2');
> insert into t1 values('tta', '3');
> insert into t1 values('ttb', '4');
> insert into t2 select pg_attribute.attname,t1.ttb from pg_attribute join t1 
> on pg_attribute.attname = t1.tta;
> $ psql -f catalog.sql -d postgres
> CREATE TABLE
> CREATE TABLE
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 0
> The join result should be as below for a new database.
> INSERT 0 4
>  tta | ttb
> -+-
>  tta | 3
>  ttb | 4
>  tta | 3
>  ttb | 4
> (4 rows)





[jira] [Commented] (HAWQ-1455) Wrong results on CTAS query over catalog

2017-05-16 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16013628#comment-16013628
 ] 

Paul Guo commented on HAWQ-1455:


Closing this one after checking in the patch below.

commit 3461e64801eb2299d46c86f47734b7f000152a10
Author: Paul Guo 
Date:   Fri May 5 17:47:41 2017 +0800

HAWQ-1455. Wrong results on CTAS query over catalog

This reverts the previous fix for HAWQ-512; however, for HAWQ-512 itself, 
it looks like we could modify the lock-related code following gpdb to do a 
real fix. That code was probably deleted during early hawq development.




[jira] [Commented] (HAWQ-1460) WAL Send Server process should exit if postmaster on master is killed

2017-05-10 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16004401#comment-16004401
 ] 

Paul Guo commented on HAWQ-1460:


I saw this on a single-node deployment without a slave.

> WAL Send Server process should exit if postmaster on master is killed
> -
>
> Key: HAWQ-1460
> URL: https://issues.apache.org/jira/browse/HAWQ-1460
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> If we kill the postmaster on master, we will see two processes keep running.
> pguo  44007  1  0 16:35 ?00:00:00 postgres: port  5432, 
> master logger process
> pguo  44014  1  0 16:35 ?00:00:00 postgres: port  5432, WAL 
> Send Server process
> Maybe the "WAL Send Server process" should exit too, by checking 
> PostmasterIsAlive() in its loop code, so that the processes on the master 
> are all gone.
> Note that in a distributed system any process can be killed at any time 
> without any callback, handler, etc.





[jira] [Closed] (HAWQ-1457) Shared memory for SegmentStatus and MetadataCache should not be allocated on segments.

2017-05-10 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1457.
--
Resolution: Fixed

> Shared memory for SegmentStatus and MetadataCache should not be allocated on 
> segments.
> --
>
> Key: HAWQ-1457
> URL: https://issues.apache.org/jira/browse/HAWQ-1457
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> At the code level, MetadataCache_ShmemInit() and
> SegmentStatusShmemInit() should not be called on segments.





[jira] [Closed] (HAWQ-1460) WAL Send Server process should exit if postmaster on master is killed

2017-05-10 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1460.
--
Resolution: Fixed



[jira] [Assigned] (HAWQ-1460) WAL Send Server process should exit if postmaster on master is killed

2017-05-09 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1460:
--

Assignee: Paul Guo  (was: Ed Espino)



[jira] [Created] (HAWQ-1460) WAL Send Server process should exit if postmaster on master is killed

2017-05-09 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1460:
--

 Summary: WAL Send Server process should exit if postmaster on 
master is killed
 Key: HAWQ-1460
 URL: https://issues.apache.org/jira/browse/HAWQ-1460
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


If we kill the postmaster on master, we will see two processes keep running.

pguo  44007  1  0 16:35 ?00:00:00 postgres: port  5432, master 
logger process
pguo  44014  1  0 16:35 ?00:00:00 postgres: port  5432, WAL 
Send Server process

Maybe the "WAL Send Server process" should exit too, by checking 
PostmasterIsAlive() in its loop code, so that the processes on the master are 
all gone.

Note that in a distributed system any process can be killed at any time 
without any callback, handler, etc.





[jira] [Commented] (HAWQ-1458) Shared Input Scan QE hung in shareinput_reader_waitready().

2017-05-09 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16002291#comment-16002291
 ] 

Paul Guo commented on HAWQ-1458:


I'd suggest providing more details. A stack trace alone is too little for the 
many people who do not have the background of this issue.

> Shared Input Scan QE hung in shareinput_reader_waitready().
> ---
>
> Key: HAWQ-1458
> URL: https://issues.apache.org/jira/browse/HAWQ-1458
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Amy
>Assignee: Amy
> Fix For: backlog
>
>
> The stack is as below:
> ```
> 4/13/17 6:12:32 AM PDT: stack of postgres process (pid 108464) on test4:
> 4/13/17 6:12:32 AM PDT: Thread 2 (Thread 0x7f7ca0c7b700 (LWP 108465)):
> 4/13/17 6:12:32 AM PDT: #0  0x0032214df283 in poll () from 
> /lib64/libc.so.6
> 4/13/17 6:12:32 AM PDT: #1  0x0097e110 in rxThreadFunc ()
> 4/13/17 6:12:32 AM PDT: #2  0x003221807aa1 in start_thread () from 
> /lib64/libpthread.so.0
> 4/13/17 6:12:32 AM PDT: #3  0x0032214e8aad in clone () from 
> /lib64/libc.so.6
> 4/13/17 6:12:32 AM PDT: Thread 1 (Thread 0x7f7cc5d48920 (LWP 108464)):
> 4/13/17 6:12:32 AM PDT: #0  0x0032214e1523 in select () from 
> /lib64/libc.so.6
> 4/13/17 6:12:32 AM PDT: #1  0x0069baaf in shareinput_reader_waitready 
> ()
> 4/13/17 6:12:32 AM PDT: #2  0x0069be0d in 
> ExecSliceDependencyShareInputScan ()
> 4/13/17 6:12:32 AM PDT: #3  0x0066eb40 in ExecSliceDependencyNode ()
> 4/13/17 6:12:32 AM PDT: #4  0x0066eaa5 in ExecSliceDependencyNode ()
> 4/13/17 6:12:32 AM PDT: #5  0x0066eaa5 in ExecSliceDependencyNode ()
> 4/13/17 6:12:32 AM PDT: #6  0x0066af41 in ExecutePlan ()
> 4/13/17 6:12:32 AM PDT: #7  0x0066bafa in ExecutorRun ()
> 4/13/17 6:12:32 AM PDT: #8  0x007f52aa in PortalRun ()
> 4/13/17 6:12:32 AM PDT: #9  0x007eb044 in exec_mpp_query ()
> 4/13/17 6:12:32 AM PDT: #10 0x007effb4 in PostgresMain ()
> 4/13/17 6:12:32 AM PDT: #11 0x007a04f0 in ServerLoop ()
> 4/13/17 6:12:32 AM PDT: #12 0x007a32b9 in PostmasterMain ()
> 4/13/17 6:12:32 AM PDT: #13 0x004a52b9 in main ()
> ```





[jira] [Assigned] (HAWQ-1459) Tweak the feature test related entries in makefiles.

2017-05-09 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1459:
--

Assignee: Paul Guo  (was: Ed Espino)

> Tweak the feature test related entries in makefiles.
> 
>
> Key: HAWQ-1459
> URL: https://issues.apache.org/jira/browse/HAWQ-1459
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> We really do not need separate entries for feature test in the makefiles, 
> i.e.
> feature-test
> feature-test-clean
> This looks a bit ugly.
> Besides, in src/test/Makefile, there is a typo, i.e.
> feature_test





[jira] [Created] (HAWQ-1459) Tweak the feature test related entries in makefiles.

2017-05-09 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1459:
--

 Summary: Tweak the feature test related entries in makefiles.
 Key: HAWQ-1459
 URL: https://issues.apache.org/jira/browse/HAWQ-1459
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Build
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


We really do not need separate entries for feature test in the makefiles, 
i.e.
feature-test
feature-test-clean

This looks a bit ugly.

Besides, in src/test/Makefile, there is a typo, i.e.
feature_test





[jira] [Assigned] (HAWQ-1457) Shared memory for SegmentStatus and MetadataCache should not be allocated on segments.

2017-05-09 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1457:
--

Assignee: Paul Guo  (was: Ed Espino)



[jira] [Created] (HAWQ-1457) Shared memory for SegmentStatus and MetadataCache should not be allocated on segments.

2017-05-09 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1457:
--

 Summary: Shared memory for SegmentStatus and MetadataCache should 
not be allocated on segments.
 Key: HAWQ-1457
 URL: https://issues.apache.org/jira/browse/HAWQ-1457
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


At the code level, MetadataCache_ShmemInit() and
SegmentStatusShmemInit() should not be called on segments.





[jira] [Closed] (HAWQ-1444) Need to replace gettimeofday() with clock_gettime() for related timeout checking code

2017-05-08 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1444.
--
   Resolution: Not A Problem
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Need to replace gettimeofday() with clock_gettime() for related timeout 
> checking code
> -
>
> Key: HAWQ-1444
> URL: https://issues.apache.org/jira/browse/HAWQ-1444
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> gettimeofday() can be affected by NTP and similar adjustments. If it is used 
> for timeout logic, things can go wrong, e.g. time goes backwards. We could 
> use clock_gettime() with CLOCK_MONOTONIC as an alternative.
> For platforms/OSes that do not support clock_gettime(), 
> we can fall back to gettimeofday().
> Note that getCurrentTime() in the code is a good example.





[jira] [Commented] (HAWQ-1444) Need to replace gettimeofday() with clock_gettime() for related timeout checking code

2017-05-08 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16000498#comment-16000498
 ] 

Paul Guo commented on HAWQ-1444:


I went through the callers of gettimeofday() in the code and did not find any 
obviously bad use of it since the following code change, so I'm closing this JIRA.

HAWQ-1439. tolerate system time being changed to earlier point when 
checking resource context timeout



[jira] [Assigned] (HAWQ-1444) Need to replace gettimeofday() with clock_gettime() for related timeout checking code

2017-05-08 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1444:
--

Assignee: Paul Guo  (was: Ed Espino)



[jira] [Commented] (HAWQ-1455) Wrong results on CTAS query over catalog

2017-05-05 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15998012#comment-15998012
 ] 

Paul Guo commented on HAWQ-1455:


This is a regression which was introduced by
https://issues.apache.org/jira/browse/HAWQ-512

The entrydb QE is not accessing the catalog tables of database postgres (in my 
example). We need to re-fix HAWQ-512; it seems it could be fixed in the lock 
manager instead.



[jira] [Assigned] (HAWQ-1455) Wrong results on CTAS query over catalog

2017-05-05 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1455:
--

Assignee: Paul Guo  (was: Ed Espino)

> Wrong results on CTAS query over catalog
> 
>
> Key: HAWQ-1455
> URL: https://issues.apache.org/jira/browse/HAWQ-1455
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> The last CTAS SQL statement returns 0 tuples. This is wrong.
> $ cat catalog.sql
> create temp table t1 (tta varchar, ttb varchar);
> create temp table t2 (tta varchar, ttb varchar);
> insert into t1 values('a', '1');
> insert into t1 values('a', '2');
> insert into t1 values('tta', '3');
> insert into t1 values('ttb', '4');
> insert into t2 select pg_attribute.attname,t1.ttb from pg_attribute join t1 
> on pg_attribute.attname = t1.tta;
> $ psql -f catalog.sql -d postgres
> CREATE TABLE
> CREATE TABLE
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 1
> INSERT 0 0
> The join result should be as below for a new database.
> INSERT 0 4
>  tta | ttb
> -+-
>  tta | 3
>  ttb | 4
>  tta | 3
>  ttb | 4
> (4 rows)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1455) Wrong results on CTAS query over catalog

2017-05-05 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1455:
--

 Summary: Wrong results on CTAS query over catalog
 Key: HAWQ-1455
 URL: https://issues.apache.org/jira/browse/HAWQ-1455
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Paul Guo
Assignee: Ed Espino


The last CTAS SQL statement returns 0 tuples. This is wrong.

$ cat catalog.sql
create temp table t1 (tta varchar, ttb varchar);
create temp table t2 (tta varchar, ttb varchar);
insert into t1 values('a', '1');
insert into t1 values('a', '2');
insert into t1 values('tta', '3');
insert into t1 values('ttb', '4');

insert into t2 select pg_attribute.attname,t1.ttb from pg_attribute join t1 on 
pg_attribute.attname = t1.tta;

$ psql -f catalog.sql -d postgres
CREATE TABLE
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 0

The join result should be as below for a new database.

INSERT 0 4
 tta | ttb
-+-
 tta | 3
 ttb | 4
 tta | 3
 ttb | 4
(4 rows)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15985959#comment-15985959
 ] 

Paul Guo commented on HAWQ-1436:


If you use the GUC with a list, you could design it as either a load balancer or 
master+slaves.

> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single-point RPS may compromise the robustness of HAWQ. 
> Thus we need to investigate and design a way to implement RPS high 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1444) Need to replace gettimeofday() with clock_gettime() for related timeout checking code

2017-04-26 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1444:
--

 Summary: Need to replace gettimeofday() with clock_gettime() for 
related timeout checking code
 Key: HAWQ-1444
 URL: https://issues.apache.org/jira/browse/HAWQ-1444
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: backlog


gettimeofday() can be affected by NTP adjustments and similar clock changes. If we use 
it for timeout logic, things can go wrong, e.g. time may go backwards. We could use 
clock_gettime() with CLOCK_MONOTONIC as an alternative.

For platforms/OSes that do not support clock_gettime(), we can fall back to 
gettimeofday().

Note that getCurrentTime() in the code is a good example.
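A minimal sketch of such a helper, assuming a POSIX environment (the name timeout_clock_usec is illustrative, not actual HAWQ code): it prefers CLOCK_MONOTONIC and falls back to gettimeofday() only where the monotonic clock is unavailable.

```cpp
#include <cstdint>
#include <ctime>
#include <sys/time.h>

// Illustrative helper: return a timestamp in microseconds suitable for
// timeout arithmetic.  CLOCK_MONOTONIC is immune to NTP steps; where it
// is unavailable we fall back to gettimeofday(), accepting that the
// wall clock may jump backwards.
static int64_t timeout_clock_usec()
{
#if defined(CLOCK_MONOTONIC)
    struct timespec ts;
    if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0)
        return static_cast<int64_t>(ts.tv_sec) * 1000000 + ts.tv_nsec / 1000;
#endif
    // Fallback path for platforms without clock_gettime() support.
    struct timeval tv;
    gettimeofday(&tv, nullptr);
    return static_cast<int64_t>(tv.tv_sec) * 1000000 + tv.tv_usec;
}
```

Timeout code would then compute elapsed time as timeout_clock_usec() minus a saved start value; on the monotonic path the difference can never be negative.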




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984421#comment-15984421
 ] 

Paul Guo edited comment on HAWQ-1436 at 4/26/17 9:21 AM:
-

[~xsheng]

Typo in my comment. I removed the "do not" word.

[~lilima]

A load balancer is just an option. It is up to you to decide whether it is necessary, 
but I really would offload work to otherwise idle systems. Anyway, since the load is 
small, it is not a big issue. Frankly speaking, if we use round-robin, the code-logic 
change seems to be really small.

As for the proxy, the connections per second (cps) should normally be high enough. 
Even if the cps is low, why not implement load balancing (e.g. round-robin) in the 
HAWQ code directly? Adding an additional proxy (I assume you mean a reverse-proxy 
kind of thing) will introduce unnecessary latency; besides, the "proxy" will need HA 
as well.


was (Author: paul guo):
[~xsheng]

Typo in my comment. I removed the "do not" word.

> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single-point RPS may compromise the robustness of HAWQ. 
> Thus we need to investigate and design a way to implement RPS high 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984421#comment-15984421
 ] 

Paul Guo commented on HAWQ-1436:


[~xsheng]

Typo in my comment. I removed the "do not" word.

> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single-point RPS may compromise the robustness of HAWQ. 
> Thus we need to investigate and design a way to implement RPS high 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984225#comment-15984225
 ] 

Paul Guo edited comment on HAWQ-1436 at 4/26/17 8:43 AM:
-

Some suggestions:

1) Since RPS is, to my knowledge, a stateless proxy, we really do not need to add an 
additional proxy service (i.e. an HTTP proxy?); i.e. I think 3.1 is enough. If we 
really do need to add some "proxy" logic, we could add that code logic in HAWQ.

2) I assume we currently need RPS on the master and standby. I think in the long run 
we should decouple the RPS nodes from the master/standby nodes, although running RPS 
on the master/standby remains allowed.

3) Since RPS is stateless, maybe we should use, or at least allow, a load-balancer 
policy instead of leaving the "standby" RPS idle.

4) I would really like to combine all of the RPS-related GUCs into one (i.e. no more 
hawq_rps_address_host, hawq_rps_address_port, hawq_rps_address_suffix), e.g.
hawq_rps_url_list (e.g. with a value like 
"http://192.168.1.66:1357/suffix,http://192.168.1.88:1357/suffix", or with more 
nodes).
Frankly speaking, when I first saw hawq_rps_address_suffix, I was really 
confused.



was (Author: paul guo):
Some suggestions:

1) Since RPS is, to my knowledge, a stateless proxy, we really do not need to add an 
additional proxy service (i.e. an HTTP proxy?); i.e. I do not think 3.1 is enough. If 
we really do need to add some "proxy" logic, we could add that code logic in HAWQ.

2) I assume we currently need RPS on the master and standby. I think in the long run 
we should decouple the RPS nodes from the master/standby nodes, although running RPS 
on the master/standby remains allowed.

3) Since RPS is stateless, maybe we should use, or at least allow, a load-balancer 
policy instead of leaving the "standby" RPS idle.

4) I would really like to combine all of the RPS-related GUCs into one (i.e. no more 
hawq_rps_address_host, hawq_rps_address_port, hawq_rps_address_suffix), e.g.
hawq_rps_url_list (e.g. with a value like 
"http://192.168.1.66:1357/suffix,http://192.168.1.88:1357/suffix", or with more 
nodes).
Frankly speaking, when I first saw hawq_rps_address_suffix, I was really 
confused.


> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single-point RPS may compromise the robustness of HAWQ. 
> Thus we need to investigate and design a way to implement RPS high 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1436) Implement RPS High availability on HAWQ

2017-04-25 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15984225#comment-15984225
 ] 

Paul Guo commented on HAWQ-1436:


Some suggestions:

1) Since RPS is, to my knowledge, a stateless proxy, we really do not need to add an 
additional proxy service (i.e. an HTTP proxy?); i.e. I do not think 3.1 is enough. If 
we really do need to add some "proxy" logic, we could add that code logic in HAWQ.

2) I assume we currently need RPS on the master and standby. I think in the long run 
we should decouple the RPS nodes from the master/standby nodes, although running RPS 
on the master/standby remains allowed.

3) Since RPS is stateless, maybe we should use, or at least allow, a load-balancer 
policy instead of leaving the "standby" RPS idle.

4) I would really like to combine all of the RPS-related GUCs into one (i.e. no more 
hawq_rps_address_host, hawq_rps_address_port, hawq_rps_address_suffix), e.g.
hawq_rps_url_list (e.g. with a value like 
"http://192.168.1.66:1357/suffix,http://192.168.1.88:1357/suffix", or with more 
nodes).
Frankly speaking, when I first saw hawq_rps_address_suffix, I was really 
confused.
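For illustration, a combined GUC such as the proposed hawq_rps_url_list could be consumed with a simple split-and-round-robin picker. This is a hedged sketch with made-up helper names (split_rps_urls, next_rps_url), not actual HAWQ code:

```cpp
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// Split a comma-separated GUC value such as
// "http://192.168.1.66:1357/suffix,http://192.168.1.88:1357/suffix"
// into individual RPS URLs.
static std::vector<std::string> split_rps_urls(const std::string &guc_value)
{
    std::vector<std::string> urls;
    std::istringstream ss(guc_value);
    std::string item;
    while (std::getline(ss, item, ','))
        if (!item.empty())
            urls.push_back(item);
    return urls;
}

// Round-robin picker: each call returns the next URL in the list, so the
// load is spread across all RPS instances instead of idling the standby's.
static const std::string &next_rps_url(const std::vector<std::string> &urls,
                                       std::size_t &cursor)
{
    const std::string &url = urls[cursor % urls.size()];
    ++cursor;
    return url;
}
```

On a connection failure the caller would simply call next_rps_url() again to try the next instance, which doubles as a crude failover policy.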


> Implement RPS High availability on HAWQ
> ---
>
> Key: HAWQ-1436
> URL: https://issues.apache.org/jira/browse/HAWQ-1436
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
> Attachments: RPSHADesign_v0.1.pdf
>
>
> Once Ranger is configured, HAWQ will rely on RPS to connect to Ranger. A 
> single-point RPS may compromise the robustness of HAWQ. 
> Thus we need to investigate and design a way to implement RPS high 
> availability. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HAWQ-1423) cmock framework does not recognize __MAYBE_UNUSED.

2017-04-05 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo resolved HAWQ-1423.

   Resolution: Fixed
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> cmock framework does not recognize __MAYBE_UNUSED.
> --
>
> Key: HAWQ-1423
> URL: https://issues.apache.org/jira/browse/HAWQ-1423
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ming LI
>Assignee: Paul Guo
> Fix For: 2.3.0.0-incubating
>
>
> This bug only exists on MacOS.
> Reproduce Steps: 
> {code}
> 1. ./configure 
> 2. make -j8 
> 3. cd src/backend
> 4. make unittest-check
> {code}
> Build log:
> {code}
> ../../../../../src/test/unit/mock/backend/libpq/be-secure_mock.c:174:2: 
> error: void function 'report_commerror'
>   should not return a value [-Wreturn-type]
> return (__MAYBE_UNUSED) mock();
> ^  ~~~
> 1 error generated.
> make[4]: *** 
> [../../../../../src/test/unit/mock/backend/libpq/be-secure_mock.o] Error 1
> make[3]: *** [mockup-phony] Error 2
> make[2]: *** [unittest-check] Error 2
> make[1]: *** [unittest-check] Error 2
> make: *** [unittest-check] Error 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1423) cmock framework does not recognize __MAYBE_UNUSED.

2017-04-05 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1423:
--

Assignee: Paul Guo  (was: Ed Espino)

> cmock framework does not recognize __MAYBE_UNUSED.
> --
>
> Key: HAWQ-1423
> URL: https://issues.apache.org/jira/browse/HAWQ-1423
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ming LI
>Assignee: Paul Guo
> Fix For: backlog
>
>
> This bug only exists on MacOS.
> Reproduce Steps: 
> {code}
> 1. ./configure 
> 2. make -j8 
> 3. cd src/backend
> 4. make unittest-check
> {code}
> Build log:
> {code}
> ../../../../../src/test/unit/mock/backend/libpq/be-secure_mock.c:174:2: 
> error: void function 'report_commerror'
>   should not return a value [-Wreturn-type]
> return (__MAYBE_UNUSED) mock();
> ^  ~~~
> 1 error generated.
> make[4]: *** 
> [../../../../../src/test/unit/mock/backend/libpq/be-secure_mock.o] Error 1
> make[3]: *** [mockup-phony] Error 2
> make[2]: *** [unittest-check] Error 2
> make[1]: *** [unittest-check] Error 2
> make: *** [unittest-check] Error 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1423) cmock framework does not recognize __MAYBE_UNUSED.

2017-04-05 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1423:
---
Summary: cmock framework does not recognize __MAYBE_UNUSED.  (was: Build 
error when make unittest-check on MacOS)

> cmock framework does not recognize __MAYBE_UNUSED.
> --
>
> Key: HAWQ-1423
> URL: https://issues.apache.org/jira/browse/HAWQ-1423
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Ming LI
>Assignee: Ed Espino
> Fix For: backlog
>
>
> This bug only exists on MacOS.
> Reproduce Steps: 
> {code}
> 1. ./configure 
> 2. make -j8 
> 3. cd src/backend
> 4. make unittest-check
> {code}
> Build log:
> {code}
> ../../../../../src/test/unit/mock/backend/libpq/be-secure_mock.c:174:2: 
> error: void function 'report_commerror'
>   should not return a value [-Wreturn-type]
> return (__MAYBE_UNUSED) mock();
> ^  ~~~
> 1 error generated.
> make[4]: *** 
> [../../../../../src/test/unit/mock/backend/libpq/be-secure_mock.o] Error 1
> make[3]: *** [mockup-phony] Error 2
> make[2]: *** [unittest-check] Error 2
> make[1]: *** [unittest-check] Error 2
> make: *** [unittest-check] Error 2
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1413) Generate tools/bin/gppylib/data/*.json automatically

2017-03-27 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1413:
--

 Summary: Generate tools/bin/gppylib/data/*.json automatically
 Key: HAWQ-1413
 URL: https://issues.apache.org/jira/browse/HAWQ-1413
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Build
Reporter: Paul Guo
Assignee: Ed Espino


The JSON files below are used for various releases; maybe we should generate them 
automatically in the Makefile instead of keeping them in the codebase.

1.1.json  1.2.json  1.3.json  2.0.json  2.1.json



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1411) Inconsistent json file for catalog of hawq 2.1

2017-03-26 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942715#comment-15942715
 ] 

Paul Guo commented on HAWQ-1411:


Can we automate this (e.g. generate it automatically based on the version)?


> Inconsistent json file for catalog of hawq 2.1
> --
>
> Key: HAWQ-1411
> URL: https://issues.apache.org/jira/browse/HAWQ-1411
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Affects Versions: 2.1.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0.0-incubating
>
>
> To generate catalog information for hawq, we need to make sure the right 
> version of metadata information is used. For hawq 2.2, the 
> tools/bin/gppylib/data/2.2.json is created based on 2.2 code base in 
> [HAWQ-1406|https://issues.apache.org/jira/browse/HAWQ-1406]. However, we need 
> to correct that for hawq 2.0 and 2.1.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1391) s390x support for HWCRC32c

2017-03-17 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15931087#comment-15931087
 ] 

Paul Guo commented on HAWQ-1391:


I quickly looked at the file. Yes, that file does not include s390x support. As far 
as I know, HAWQ was mainly developed on x86, although the core HAWQ code should 
compile on other archs+Linux since it originally comes from PostgreSQL. However, 
libhdfs3 is NOT from PostgreSQL, so it is not surprising that some code (especially 
low-level interfaces, though I suspect there are just a few) does not work on other 
archs.

As to this function, I suspect you could simply hack HWCrc32c::available() to 
return false for s390x on Linux if you do not want to add the HW CRC code support, 
since it looks like libhdfs3 will fall back to the software solution as usual.

e.g. src/client/LocalBlockReader.cpp
if (HWCrc32c::available()) {
    checksum = shared_ptr<Checksum>(new HWCrc32c());
} else {
    checksum = shared_ptr<Checksum>(new SWCrc32c());
}
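The suggested hack could look like the following stand-alone sketch; hw_crc32c_available() is a hypothetical stand-in for Hdfs::Internal::HWCrc32c::available(), not the real libhdfs3 code:

```cpp
// Hypothetical stand-in for HWCrc32c::available(): report hardware CRC32C
// support only where the SSE4.2-based implementation actually exists; on
// other architectures (e.g. s390x) return false so that callers pick the
// software checksum (SWCrc32c) instead.
static bool hw_crc32c_available()
{
#if defined(__SSE4_2__)
    return true;   // x86 with SSE4.2: hardware CRC32C instructions exist
#else
    return false;  // e.g. s390x: fall back to the software implementation
#endif
}
```

Callers would then select the implementation the same way LocalBlockReader does: hardware checksum when available, software checksum otherwise.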

You could discuss this more on the dev mailing list before filing a JIRA, since many 
people do not receive JIRA email notifications. Thanks.

> s390x support for HWCRC32c
> --
>
> Key: HAWQ-1391
> URL: https://issues.apache.org/jira/browse/HAWQ-1391
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libhdfs
>Reporter: ketan
>Assignee: Ed Espino
>
> Hi,
> I am in the process of building Apache HAWQ on s390x,
> following the instructions at
> https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install
> In the build stage I noticed that during the build I encounter:
> undefined reference to vtable for Hdfs::Internal::HWCrc32c
> On further debugging I observed that libhdfs3/src/common/HWCRC32c.cpp has 
> no support for s390x.
> My questions are as follows.
> 1) I want to confirm whether this check happens as part of the unit testing 
> of libhdfs3?
> 2) If yes to 1, is this test specific to SSE-based platforms?
> 3) Can we get some exact information on what this check does?
> 4) Is the HAWQ source supported on SSE-based platforms only?
> Help would be appreciated.
> Adding Log for reference.
> **
> make[3]: Leaving directory `//incubator-hawq/src/backend/cdb'
> g++ -O3 -std=gnu99  -Wall -Wmissing-prototypes -Wpointer-arith  
> -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv 
> -fno-aggressive-loop-optimizations  -I/usr/include/libxml2 -L../../src/port 
> -L../../src/port -Wl,--as-needed 
> -L/scratch/ecos0013/ketan/incubator-hawq/depends/libhdfs3/build/install/usr/local/hawq/lib
>  
> -L/scratch/ecos0013/ketan/incubator-hawq/depends/libyarn/build/install/usr/local/hawq/lib
>  -Wl,-rpath,'/usr/local/hawq/lib',--enable-new-dtags -Wl,-E access/SUBSYS.o 
> bootstrap/SUBSYS.o catalog/SUBSYS.o parser/SUBSYS.o commands/SUBSYS.o 
> executor/SUBSYS.o foreign/SUBSYS.o lib/SUBSYS.o libpq/SUBSYS.o 
> gp_libpq_fe/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o 
> port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o rewrite/SUBSYS.o 
> storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o resourcemanager/SUBSYS.o 
> ../../src/timezone/SUBSYS.o cdb/SUBSYS.o ../../src/port/libpgport_srv.a 
> -lprotobuf -lboost_system -lboost_date_time -lstdc++ -lhdfs3 -lgsasl -lxml2 
> -ljson-c -levent -lyaml -lsnappy -lbz2 -lrt -lz -lcrypt -ldl -lm -lcurl 
> -lyarn -lkrb5 -lpthread -lthrift -lsnappy -o postgres
> /scratch/ecos0013/ketan/incubator-hawq/depends/libhdfs3/build/install/usr/local/hawq/lib/libhdfs3.so:
>  undefined reference to `Hdfs::Internal::HWCrc32c::available()'
> /scratch/ecos0013/ketan/incubator-hawq/depends/libhdfs3/build/install/usr/local/hawq/lib/libhdfs3.so:
>  undefined reference to `vtable for Hdfs::Internal::HWCrc32c'
> collect2: error: ld returned 1 exit status
> make[2]: *** [postgres] Error 1
> make[2]: Leaving directory `incubator-hawq/src/backend'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/incubator-hawq/src'
> make: *** [all] Error 2
> **
> Regards
> Ketan



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1386) Mask some generated files for pljava feature testing for git.

2017-03-14 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1386.
--
Resolution: Fixed
  Assignee: Paul Guo  (was: Jiali Yao)

> Mask some generated files for pljava feature testing for git.
> -
>
> Key: HAWQ-1386
> URL: https://issues.apache.org/jira/browse/HAWQ-1386
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> While running feature tests, I found there are four files generated for 
> testing. We should mask them in .gitignore.
> UDF/sql/PLJavaAdd.class
> UDF/sql/PLJavaAdd.jar
> UDF/sql/PLJavauAdd.class
> UDF/sql/PLJavauAdd.jar



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1386) Mask some generated files for pljava feature testing for git.

2017-03-14 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1386:
--

 Summary: Mask some generated files for pljava feature testing for 
git.
 Key: HAWQ-1386
 URL: https://issues.apache.org/jira/browse/HAWQ-1386
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Tests
Reporter: Paul Guo
Assignee: Jiali Yao
 Fix For: 2.2.0.0-incubating


While running feature tests, I found there are four files generated for 
testing. We should mask them in .gitignore.

UDF/sql/PLJavaAdd.class
UDF/sql/PLJavaAdd.jar
UDF/sql/PLJavauAdd.class
UDF/sql/PLJavauAdd.jar



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1379) Do not send options multiple times in build_startup_packet()

2017-03-14 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1379.
--
   Resolution: Fixed
Fix Version/s: 2.2.0.0-incubating

> Do not send options multiple times in build_startup_packet()
> 
>
> Key: HAWQ-1379
> URL: https://issues.apache.org/jira/browse/HAWQ-1379
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> build_startup_packet() builds a libpq packet; however, it includes 
> conn->pgoptions more than once - this is unnecessary and really wastes 
> network bandwidth.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1378) Elaborate the "invalid command-line arguments for server process" error.

2017-03-14 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1378.
--
   Resolution: Fixed
Fix Version/s: 2.2.0.0-incubating

> Elaborate the "invalid command-line arguments for server process" error.
> 
>
> Key: HAWQ-1378
> URL: https://issues.apache.org/jira/browse/HAWQ-1378
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> I saw the following errors when running several times:
> "Error dispatching to ***: connection pointer is NULL."
> FATAL:  invalid command-line arguments for server process
> While this usually means there is a bug in the related code, the code should 
> have reported a more detailed log so that we could catch which argument is 
> wrong with less pain, even if there is a log-level switch for argument dumping.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1379) Do not send options multiple times in build_startup_packet()

2017-03-06 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1379:
--

Assignee: Paul Guo  (was: Ed Espino)

> Do not send options multiple times in build_startup_packet()
> 
>
> Key: HAWQ-1379
> URL: https://issues.apache.org/jira/browse/HAWQ-1379
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> build_startup_packet() builds a libpq packet; however, it includes 
> conn->pgoptions more than once - this is unnecessary and really wastes 
> network bandwidth.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1379) Do not send options multiple times in build_startup_packet()

2017-03-06 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1379:
--

 Summary: Do not send options multiple times in 
build_startup_packet()
 Key: HAWQ-1379
 URL: https://issues.apache.org/jira/browse/HAWQ-1379
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Ed Espino


build_startup_packet() builds a libpq packet; however, it includes 
conn->pgoptions more than once - this is unnecessary and really wastes 
network bandwidth.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1378) Elaborate the "invalid command-line arguments for server process" error.

2017-03-06 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1378:
--

Assignee: Paul Guo  (was: Ed Espino)

> Elaborate the "invalid command-line arguments for server process" error.
> 
>
> Key: HAWQ-1378
> URL: https://issues.apache.org/jira/browse/HAWQ-1378
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> I saw the following errors when running several times:
> "Error dispatching to ***: connection pointer is NULL."
> FATAL:  invalid command-line arguments for server process
> While this usually means there is a bug in the related code, the code should 
> have reported a more detailed log so that we could catch which argument is 
> wrong with less pain, even if there is a log-level switch for argument dumping.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1378) Elaborate the "invalid command-line arguments for server process" error.

2017-03-06 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897170#comment-15897170
 ] 

Paul Guo commented on HAWQ-1378:


There is a related PG upstream patch. It looks good; we could basically 
follow it.

   commit 86947e666d39229558311d7b0be45608fd071ed8
Author: Peter Eisentraut 
Date:   Sun Mar 11 01:52:05 2012 +0200

Add more detail to error message for invalid arguments for server 
process

It now prints the argument that was at fault.

Also fix a small misbehavior where the error message issued by
getopt() would complain about a program named "--single", because
that's what argv[0] is in the server process.

> Elaborate the "invalid command-line arguments for server process" error.
> 
>
> Key: HAWQ-1378
> URL: https://issues.apache.org/jira/browse/HAWQ-1378
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Ed Espino
>
> I saw the following errors when running several times:
> "Error dispatching to ***: connection pointer is NULL."
> FATAL:  invalid command-line arguments for server process
> While this usually means there is a bug in the related code, the code should 
> have reported a more detailed log so that we could catch which argument is 
> wrong with less pain, even if there is a log-level switch for argument dumping.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1378) Elaborate the "invalid command-line arguments for server process" error.

2017-03-06 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1378:
--

 Summary: Elaborate the "invalid command-line arguments for server 
process" error.
 Key: HAWQ-1378
 URL: https://issues.apache.org/jira/browse/HAWQ-1378
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Ed Espino


I saw the following errors when running several times:

"Error dispatching to ***: connection pointer is NULL."
FATAL:  invalid command-line arguments for server process

While this usually means there is a bug in the related code, the code should have 
reported a more detailed log so that we could catch which argument is wrong with 
less pain, even if there is a log-level switch for argument dumping.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1361) Remove some installcheck-good cases since they are in the feature test suite now.

2017-03-03 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1361.
--
   Resolution: Fixed
Fix Version/s: 2.2.0.0-incubating

> Remove some installcheck-good cases since they are in the feature test suite 
> now.
> -
>
> Key: HAWQ-1361
> URL: https://issues.apache.org/jira/browse/HAWQ-1361
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1361) Remove some installcheck-good cases since they are in the feature test suite now.

2017-03-01 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1361:
---
Summary: Remove some installcheck-good cases since they are in the feature 
test suite now.  (was: Remove ErrorTable in installcheck-good since it is in 
feature test suite now.)

> Remove some installcheck-good cases since they are in the feature test suite 
> now.
> -
>
> Key: HAWQ-1361
> URL: https://issues.apache.org/jira/browse/HAWQ-1361
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1337) Log stack info before forward signals sigsegv, sigill or sigbus in CdbProgramErrorHandler()

2017-02-28 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1337:
--

Assignee: (was: Paul Guo)

> Log stack info before forward signals sigsegv, sigill or sigbus in 
> CdbProgramErrorHandler()
> ---
>
> Key: HAWQ-1337
> URL: https://issues.apache.org/jira/browse/HAWQ-1337
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>
> CdbProgramErrorHandler() is a signal handler, but it seems that it just 
> forwards the signal to its main thread. This is not friendly for development 
> when encountering signals like SIGSEGV, SIGILL and SIGBUS, etc. We should 
> save the thread stack info in the log before forwarding.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1361) Remove ErrorTable in installcheck-good since it is in feature test suite now.

2017-02-28 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1361:
--

Assignee: Paul Guo  (was: Ed Espino)

> Remove ErrorTable in installcheck-good since it is in feature test suite now.
> -
>
> Key: HAWQ-1361
> URL: https://issues.apache.org/jira/browse/HAWQ-1361
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>






[jira] [Created] (HAWQ-1361) Remove ErrorTable in installcheck-good since it is in feature test suite now.

2017-02-23 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1361:
--

 Summary: Remove ErrorTable in installcheck-good since it is in 
feature test suite now.
 Key: HAWQ-1361
 URL: https://issues.apache.org/jira/browse/HAWQ-1361
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Ed Espino








[jira] [Commented] (HAWQ-1350) Add --enable-rps option to build ranger-plugin when build hawq.

2017-02-23 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880221#comment-15880221
 ] 

Paul Guo commented on HAWQ-1350:


Maybe I misunderstood. So there is no separate option to enable HAWQ Ranger 
support (e.g. code changes under /src), just --enable-rps for the plugin, 
right? If yes, I agree.

> Add --enable-rps option to build ranger-plugin when build hawq.
> ---
>
> Key: HAWQ-1350
> URL: https://issues.apache.org/jira/browse/HAWQ-1350
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Xiang Sheng
>Assignee: Xiang Sheng
> Fix For: 2.2.0.0-incubating
>
>
> When a user wants to build both hawq and the ranger-plugin, there is no 
> option for building hawq with the plugin, so the option should be added to 
> the configure files and related makefiles. 
> Users can then build hawq by configuring with the option "--enable-rps", and 
> "make install" will install the plugin as well. 





[jira] [Commented] (HAWQ-1350) Add --enable-rps option to build ranger-plugin when build hawq.

2017-02-23 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880118#comment-15880118
 ] 

Paul Guo commented on HAWQ-1350:


So this part of the code could be isolated from the Ranger support, i.e. 
without RPS, Ranger support could still work, right? If yes, I agree we should 
have this option.

> Add --enable-rps option to build ranger-plugin when build hawq.
> ---
>
> Key: HAWQ-1350
> URL: https://issues.apache.org/jira/browse/HAWQ-1350
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Xiang Sheng
>Assignee: Xiang Sheng
> Fix For: 2.2.0.0-incubating
>
>
> When a user wants to build both hawq and the ranger-plugin, there is no 
> option for building hawq with the plugin, so the option should be added to 
> the configure files and related makefiles. 
> Users can then build hawq by configuring with the option "--enable-rps", and 
> "make install" will install the plugin as well. 





[jira] [Commented] (HAWQ-1244) unrecognized configuration parameter "ONETARY"

2017-02-22 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15879711#comment-15879711
 ] 

Paul Guo commented on HAWQ-1244:


This issue has been resolved, right? 

> unrecognized configuration parameter "ONETARY"
> --
>
> Key: HAWQ-1244
> URL: https://issues.apache.org/jira/browse/HAWQ-1244
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Jon Roberts
>Assignee: Yi Jin
>
> Fatal error occurs when reading /usr/local/hawq/etc/_mgmt_config when the 
> file is large due to many temp directories, a highly parallelized query, 
> random distributed tables, and hawq_rm_nvseg_perquery_perseg_limit set higher 
> (12 for example).
> Query 88 in TPC-DS fails with random distribution, 10 nodes in AWS, 7TB of 
> data,  hawq_rm_memory_limit_perseg = 200gb, hawq_rm_stmt_vseg_memory = 16gb, 
> hawq_rm_nvseg_perquery_perseg_limit = 12.
> Error raised:
> psql:188.tpcds.88.sql:95: ERROR:  Error dispatching to seg25 
> ip-172-21-13-189.ec2.internal:4: connection pointer is NULL
> DETAIL:  Master unable to connect to seg25 
> ip-172-21-13-189.ec2.internal:4: FATAL:  unrecognized configuration 
> parameter "ONETARY"
> Error from segment:
> 2016-12-14 13:47:34.760839 
> UTC,"gpadmin","gpadmin",p737499,th542214432,"172.21.13.196","40327",2016-12-14
>  13:47:34 UTC,0,con23798,,seg-1,"FATAL","42704","unrecognized 
> configuration parameter ""ONETARY""",,,0,,"guc.c",10006,
> Workaround is to use fewer vsegs or reduce the number of temp directories.  
> This is probably not related to the number of temp directories but to the 
> size of the _mgmt_config file.  The parameter it is apparently failing to 
> parse is:
> hawq_lc_monetary=en_US.utf8
> The problem is likely in guc.c based on the segment error.





[jira] [Commented] (HAWQ-1350) Add --enable-rps option to build ranger-plugin when build hawq.

2017-02-22 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877958#comment-15877958
 ] 

Paul Guo commented on HAWQ-1350:


Wouldn't enable-ranger (if we have one) be better, or sufficient?

> Add --enable-rps option to build ranger-plugin when build hawq.
> ---
>
> Key: HAWQ-1350
> URL: https://issues.apache.org/jira/browse/HAWQ-1350
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Build
>Reporter: Xiang Sheng
>Assignee: Xiang Sheng
> Fix For: 2.2.0.0-incubating
>
>
> When a user wants to build both hawq and the ranger-plugin, there is no 
> option for building hawq with the plugin, so the option should be added to 
> the configure files and related makefiles. 
> Users can then build hawq by configuring with the option "--enable-rps", and 
> "make install" will install the plugin as well. 





[jira] [Updated] (HAWQ-1316) Feature test compile failed on CentOS 7.3

2017-02-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1316:
---
Fix Version/s: (was: backlog)
   2.1.0.0-incubating

> Feature test compile failed on CentOS 7.3
> ---
>
> Key: HAWQ-1316
> URL: https://issues.apache.org/jira/browse/HAWQ-1316
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.1.0.0-incubating
>
>
> While compiling the HAWQ feature test on RHEL 7, I see the error below.
> make -C lib all
> make[1]: Entering directory 
> `/tmp/build/78017950/hdb_apache/src/test/feature/lib'
> ar -r libtest.a 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/xml_parser.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/command.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/hawq_config.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/hawq_scp.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/string_util.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/hdfs_config.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/data_gen.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/psql.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/yarn_config.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/sql_util.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/file_replace.o
> ar: creating libtest.a
> make[1]: Leaving directory 
> `/tmp/build/78017950/hdb_apache/src/test/feature/lib'
> g++ -I/usr/include -I/usr/local/include -I/usr/include/libxml2 
> -I/tmp/build/78017950/hdb_apache/src/test/feature/ 
> -I/tmp/build/78017950/hdb_apache/src/test/feature/ManagementTool/ 
> -I/tmp/build/78017950/hdb_apache/src/test/feature/lib/ 
> -I/tmp/build/78017950/hdb_apache/src/interfaces/libpq 
> -I/tmp/build/78017950/hdb_apache/src/interfaces 
> -I/tmp/build/78017950/hdb_apache/src/include  
> -I/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/googletest/include
>  
> -I/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/googlemock/include
>  -Wall -O0 -g -std=c++11 test_main.o ao/TestAoSnappy.o utility/test_copy.o 
> utility/test_cmd.o testlib/test_lib.o ExternalSource/test_errortbl.o 
> ExternalSource/test_external_oid.o ExternalSource/test_exttab.o 
> query/test_create_type_composite.o query/test_prepare.o query/test_portal.o 
> query/test_temp.o query/test_polymorphism.o query/test_nested_case_null.o 
> query/test_sequence.o query/test_parser.o query/test_information_schema.o 
> query/test_gp_dist_random.o query/test_rowtypes.o query/test_insert.o 
> query/test_aggregate.o parquet/test_parquet.o 
> transactions/test_transactions.o ddl/test_database.o 
> partition/test_partition.o catalog/test_alter_table.o catalog/test_type.o 
> catalog/test_alter_owner.o catalog/test_guc.o catalog/test_create_table.o 
> toast/TestToast.o planner/test_subplan.o UDF/TestUDF.o 
> PreparedStatement/TestPreparedStatement.o lib/xml_parser.o lib/command.o 
> lib/hawq_config.o lib/hawq_scp.o lib/string_util.o lib/hdfs_config.o 
> lib/data_gen.o lib/psql.o lib/yarn_config.o lib/sql_util.o lib/file_replace.o 
> ManagementTool/test_hawq_register_usage2_case1.o 
> ManagementTool/test_hawq_register_usage2_case2.o 
> ManagementTool/test_hawq_register_usage1.o 
> ManagementTool/test_hawq_register_rollback.o 
> ManagementTool/test_hawq_register_partition.o -L../../../src/port 
> -L../../../src/port -Wl,--as-needed 
> -L/tmp/build/78017950/hdb_apache/depends/libhdfs3/build/install/usr/local/hawq/lib
>  
> -L/tmp/build/78017950/hdb_apache/depends/libyarn/build/install/usr/local/hawq/lib
>  -Wl,-rpath,'/usr/local/hawq/lib',--enable-new-dtags -L/usr/local/lib 
> -L/usr/lib -L/tmp/build/78017950/hdb_apache/src/test/feature/ 
> -L/tmp/build/78017950/hdb_apache/src/test/feature/lib/ 
> -L/tmp/build/78017950/hdb_apache/src/interfaces/libpq 
> -L/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/build/googlemock
>  
> -L/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/build/googlemock/gtest
>  -lpgport -ljson-c -levent -lyaml -lsnappy -lbz2 -lrt -lz -lreadline -lcrypt 
> -ldl -lm  -lcurl   -lyarn -lkrb5 -lgtest -lpq -lxml2 -ltest -o feature-test
> /usr/bin/ld: /usr/lib/libgtest.a(gtest-all.cc.o): undefined reference to 
> symbol 'pthread_key_delete@@GLIBC_2.2.5'
> /usr/lib64/libpthread.so.0: error adding symbols: DSO missing from command 
> line
> collect2: error: ld returned 1 exit status
> make: *** [all] Error 1
> After appending '-lpthread', the compile passed. This happens on CentOS 7.3; 
> CentOS 7.2 works fine.





[jira] [Closed] (HAWQ-1348) Some hawq utility helps should say default timeout is 600 seconds

2017-02-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1348.
--
   Resolution: Fixed
 Assignee: Paul Guo  (was: Ed Espino)
Fix Version/s: 2.2.0.0-incubating

> Some hawq utility helps should say default timeout is 600 seconds 
> --
>
> Key: HAWQ-1348
> URL: https://issues.apache.org/jira/browse/HAWQ-1348
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> To match the code. Commands include: hawq start/stop/restart





[jira] [Updated] (HAWQ-1348) Some hawq utility helps should say default timeout is 600 seconds

2017-02-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1348:
---
Description: To match the code. Commands include: hawq start/stop/restart  
(was: To match the code. Commands include: hawq restart/stop/restart)

> Some hawq utility helps should say default timeout is 600 seconds 
> --
>
> Key: HAWQ-1348
> URL: https://issues.apache.org/jira/browse/HAWQ-1348
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Paul Guo
>Assignee: Ed Espino
>
> To match the code. Commands include: hawq start/stop/restart





[jira] [Updated] (HAWQ-1348) Some hawq utility helps should say default timeout is 600 seconds

2017-02-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1348:
---
Description: To match the code. Commands include: hawq restart/stop/restart 
 (was: To match the code. Commands include: hawq restart/stop/restart/activate)

> Some hawq utility helps should say default timeout is 600 seconds 
> --
>
> Key: HAWQ-1348
> URL: https://issues.apache.org/jira/browse/HAWQ-1348
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Paul Guo
>Assignee: Ed Espino
>
> To match the code. Commands include: hawq restart/stop/restart





[jira] [Created] (HAWQ-1348) Some hawq utility helps should say default timeout is 600 seconds

2017-02-21 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1348:
--

 Summary: Some hawq utility helps should say default timeout is 600 
seconds 
 Key: HAWQ-1348
 URL: https://issues.apache.org/jira/browse/HAWQ-1348
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Command Line Tools
Reporter: Paul Guo
Assignee: Ed Espino


To match the code. Commands include: hawq restart/stop/restart/activate





[jira] [Closed] (HAWQ-1347) QD should check segment health only

2017-02-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1347.
--
   Resolution: Fixed
Fix Version/s: 2.2.0.0-incubating

> QD should check segment health only
> ---
>
> Key: HAWQ-1347
> URL: https://issues.apache.org/jira/browse/HAWQ-1347
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> Previously the QD thread dispmgt_thread_func_run() checked segment node 
> health before proceeding, but the code is buggy: a QE could also hold a 
> connection to the master and be poll()-ed, yet the previous code handled 
> only the segment case.





[jira] [Assigned] (HAWQ-1347) QD should check segment health only

2017-02-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1347:
--

Assignee: Paul Guo  (was: Ed Espino)

> QD should check segment health only
> ---
>
> Key: HAWQ-1347
> URL: https://issues.apache.org/jira/browse/HAWQ-1347
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Previously the QD thread dispmgt_thread_func_run() checked segment node 
> health before proceeding, but the code is buggy: a QE could also hold a 
> connection to the master and be poll()-ed, yet the previous code handled 
> only the segment case.





[jira] [Created] (HAWQ-1347) QD should check segment health only

2017-02-21 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1347:
--

 Summary: QD should check segment health only
 Key: HAWQ-1347
 URL: https://issues.apache.org/jira/browse/HAWQ-1347
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Dispatcher
Reporter: Paul Guo
Assignee: Ed Espino


Previously the QD thread dispmgt_thread_func_run() checked segment node health 
before proceeding, but the code is buggy: a QE could also hold a connection to 
the master and be poll()-ed, yet the previous code handled only the segment 
case.







[jira] [Updated] (HAWQ-1337) Log stack info before forward signals sigsegv, sigill or sigbus in CdbProgramErrorHandler()

2017-02-20 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1337:
---
Summary: Log stack info before forward signals sigsegv, sigill or sigbus in 
CdbProgramErrorHandler()  (was: Log stack info before forward signal sigsegv, 
sigill or sigbus in CdbProgramErrorHandler())

> Log stack info before forward signals sigsegv, sigill or sigbus in 
> CdbProgramErrorHandler()
> ---
>
> Key: HAWQ-1337
> URL: https://issues.apache.org/jira/browse/HAWQ-1337
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> CdbProgramErrorHandler() is a signal handler, but it seems to just forward 
> the signal to the main thread. This is not friendly for development when 
> encountering signals like sigsegv, sigill and sigbus. We should save the 
> thread stack info in the log before forwarding.





[jira] [Commented] (HAWQ-1316) Feature test compile failed on CentOS 7.3

2017-02-16 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15869592#comment-15869592
 ] 

Paul Guo commented on HAWQ-1316:


Version should not be "backlog".

> Feature test compile failed on CentOS 7.3
> ---
>
> Key: HAWQ-1316
> URL: https://issues.apache.org/jira/browse/HAWQ-1316
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: backlog
>
>
> While compiling the HAWQ feature test on RHEL 7, I see the error below.
> make -C lib all
> make[1]: Entering directory 
> `/tmp/build/78017950/hdb_apache/src/test/feature/lib'
> ar -r libtest.a 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/xml_parser.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/command.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/hawq_config.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/hawq_scp.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/string_util.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/hdfs_config.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/data_gen.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/psql.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/yarn_config.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/sql_util.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/file_replace.o
> ar: creating libtest.a
> make[1]: Leaving directory 
> `/tmp/build/78017950/hdb_apache/src/test/feature/lib'
> g++ -I/usr/include -I/usr/local/include -I/usr/include/libxml2 
> -I/tmp/build/78017950/hdb_apache/src/test/feature/ 
> -I/tmp/build/78017950/hdb_apache/src/test/feature/ManagementTool/ 
> -I/tmp/build/78017950/hdb_apache/src/test/feature/lib/ 
> -I/tmp/build/78017950/hdb_apache/src/interfaces/libpq 
> -I/tmp/build/78017950/hdb_apache/src/interfaces 
> -I/tmp/build/78017950/hdb_apache/src/include  
> -I/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/googletest/include
>  
> -I/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/googlemock/include
>  -Wall -O0 -g -std=c++11 test_main.o ao/TestAoSnappy.o utility/test_copy.o 
> utility/test_cmd.o testlib/test_lib.o ExternalSource/test_errortbl.o 
> ExternalSource/test_external_oid.o ExternalSource/test_exttab.o 
> query/test_create_type_composite.o query/test_prepare.o query/test_portal.o 
> query/test_temp.o query/test_polymorphism.o query/test_nested_case_null.o 
> query/test_sequence.o query/test_parser.o query/test_information_schema.o 
> query/test_gp_dist_random.o query/test_rowtypes.o query/test_insert.o 
> query/test_aggregate.o parquet/test_parquet.o 
> transactions/test_transactions.o ddl/test_database.o 
> partition/test_partition.o catalog/test_alter_table.o catalog/test_type.o 
> catalog/test_alter_owner.o catalog/test_guc.o catalog/test_create_table.o 
> toast/TestToast.o planner/test_subplan.o UDF/TestUDF.o 
> PreparedStatement/TestPreparedStatement.o lib/xml_parser.o lib/command.o 
> lib/hawq_config.o lib/hawq_scp.o lib/string_util.o lib/hdfs_config.o 
> lib/data_gen.o lib/psql.o lib/yarn_config.o lib/sql_util.o lib/file_replace.o 
> ManagementTool/test_hawq_register_usage2_case1.o 
> ManagementTool/test_hawq_register_usage2_case2.o 
> ManagementTool/test_hawq_register_usage1.o 
> ManagementTool/test_hawq_register_rollback.o 
> ManagementTool/test_hawq_register_partition.o -L../../../src/port 
> -L../../../src/port -Wl,--as-needed 
> -L/tmp/build/78017950/hdb_apache/depends/libhdfs3/build/install/usr/local/hawq/lib
>  
> -L/tmp/build/78017950/hdb_apache/depends/libyarn/build/install/usr/local/hawq/lib
>  -Wl,-rpath,'/usr/local/hawq/lib',--enable-new-dtags -L/usr/local/lib 
> -L/usr/lib -L/tmp/build/78017950/hdb_apache/src/test/feature/ 
> -L/tmp/build/78017950/hdb_apache/src/test/feature/lib/ 
> -L/tmp/build/78017950/hdb_apache/src/interfaces/libpq 
> -L/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/build/googlemock
>  
> -L/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/build/googlemock/gtest
>  -lpgport -ljson-c -levent -lyaml -lsnappy -lbz2 -lrt -lz -lreadline -lcrypt 
> -ldl -lm  -lcurl   -lyarn -lkrb5 -lgtest -lpq -lxml2 -ltest -o feature-test
> /usr/bin/ld: /usr/lib/libgtest.a(gtest-all.cc.o): undefined reference to 
> symbol 'pthread_key_delete@@GLIBC_2.2.5'
> /usr/lib64/libpthread.so.0: error adding symbols: DSO missing from command 
> line
> collect2: error: ld returned 1 exit status
> make: *** [all] Error 1
> After appending '-lpthread', the compile passed. This happens on CentOS 7.3; 
> CentOS 7.2 works fine.





[jira] [Assigned] (HAWQ-1337) Log stack info before forward signal sigsegv, sigill or sigbus in CdbProgramErrorHandler()

2017-02-15 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1337:
--

Assignee: Paul Guo  (was: Ed Espino)

> Log stack info before forward signal sigsegv, sigill or sigbus in 
> CdbProgramErrorHandler()
> --
>
> Key: HAWQ-1337
> URL: https://issues.apache.org/jira/browse/HAWQ-1337
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> CdbProgramErrorHandler() is a signal handler, but it seems to just forward 
> the signal to the main thread. This is not friendly for development when 
> encountering signals like sigsegv, sigill and sigbus. We should save the 
> thread stack info in the log before forwarding.





[jira] [Created] (HAWQ-1337) Log stack info before forward signal sigsegv, sigill or sigbus in CdbProgramErrorHandler()

2017-02-15 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1337:
--

 Summary: Log stack info before forward signal sigsegv, sigill or 
sigbus in CdbProgramErrorHandler()
 Key: HAWQ-1337
 URL: https://issues.apache.org/jira/browse/HAWQ-1337
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Ed Espino


CdbProgramErrorHandler() is a signal handler, but it seems to just forward the 
signal to the main thread. This is not friendly for development when 
encountering signals like sigsegv, sigill and sigbus. We should save the 
thread stack info in the log before forwarding.





[jira] [Closed] (HAWQ-1334) QD thread should set error code if failing so that the main process for the query could exit soon

2017-02-15 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1334.
--
   Resolution: Fixed
Fix Version/s: 2.2.0.0-incubating

> QD thread should set error code if failing so that the main process for the 
> query could exit soon
> -
>
> Key: HAWQ-1334
> URL: https://issues.apache.org/jira/browse/HAWQ-1334
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> In the QD thread function dispmgt_thread_func_run(), if there are failures, 
> either due to a QE or the QD itself, it will cancel the query and then clean 
> up. The main process for the query needs the error code of meleeResults to 
> be set so that it can proceed to cancel the query promptly; otherwise we 
> have to wait for a timeout. Typically dispmgt_thread_func_run() should set 
> the error code, however I found some cases that do not handle this, e.g. if 
> poll() fails with ENOMEM. One symptom of this issue is that we could 
> sometimes see a hang when a query is canceled.
> The potential solution:
> 1) Expect each branch jump ("goto error_cleanup") to set a proper error code 
> itself. This is not an easy job.
> 2) Add a "guard" function in the error_cleanup code that sets an error code 
> if one is not already set, i.e. in case 1) is not done well.
> This JIRA cares about 2).
> In general, the cleanup code in the QD seems really obscure and inelegant. 
> Maybe we should file another JIRA to refactor its error handling logic. 





[jira] [Created] (HAWQ-1335) Need to refactor some QD error handling code.

2017-02-15 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1335:
--

 Summary: Need to refactor some QD error handling code.
 Key: HAWQ-1335
 URL: https://issues.apache.org/jira/browse/HAWQ-1335
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Dispatcher
Reporter: Paul Guo
Assignee: Ed Espino


While recently working on QD-related issues, I found the QD error handling 
code really confusing. At least:

1) dispmgt_thread_func_run()
/*
 * Cleanup rules:
 * 1. query cancel, result error, and poll error: mark the executor stop.
 * 2. connection error: mark the gang error. Set by
 *    workermgr_mark_executor_error().
 */

I do not think the code handles things like this; maybe I did not understand 
the related details. Besides, workermgr_mark_executor_error() does not exist 
either.

2) In executormgr_cancel()

#if 0
    if (success)
    {
        executor->state = QES_STOP;
        executor->health = QEH_CANCEL;
    }
    else
    {
        /* TODO: log error? how to deal with connection error. */
        executormgr_catch_error(executor);
    }
#endif

    {
        write_log("function executormgr_cancel calling executormgr_catch_error");
        executormgr_catch_error(executor);
    }

Why is executormgr_catch_error() called in all cases? Isn't "success" enough 
to judge whether executormgr_catch_error() should be called?

3) cdbdisp_seterrcode()

    if (!dispatchResult->errcode)
    {
        dispatchResult->errcode =
            (errcode == 0) ? ERRCODE_INTERNAL_ERROR : errcode;
        if (resultIndex >= 0)
            dispatchResult->errindex = resultIndex;
    }

Why set ERRCODE_INTERNAL_ERROR while leaving meleeResults->errcode alone? 
This piece of code seems totally redundant.

4) It splits general errors from normal cancel-style interrupts. However I 
still see the error codes below related to cancellation:
    if (errcode == ERRCODE_GP_OPERATION_CANCELED ||
        errcode == ERRCODE_QUERY_CANCELED)
Is it possible to combine them into just the error code?

5) dispmgt_thread_func_run() should really set an error code for each 
error_cleanup case.

6) dispmgt_thread_func_run() could quit early not only because of a QE: e.g. 
the QD may run out of memory and fail a system call with ENOMEM, while 
cdbdisp_seterrcode() seems to need an associated executor.

7) Some of the logs and comments probably need rewriting.





[jira] [Assigned] (HAWQ-1334) QD thread should set error code if failing so that the main process for the query could exit soon

2017-02-15 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1334:
--

Assignee: Paul Guo  (was: Ed Espino)

> QD thread should set error code if failing so that the main process for the 
> query could exit soon
> -
>
> Key: HAWQ-1334
> URL: https://issues.apache.org/jira/browse/HAWQ-1334
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> In the QD thread function dispmgt_thread_func_run(), if there are failures, 
> either due to a QE or the QD itself, it will cancel the query and then clean 
> up. The main process for the query needs the error code of meleeResults to 
> be set so that it can proceed to cancel the query promptly; otherwise we 
> have to wait for a timeout. Typically dispmgt_thread_func_run() should set 
> the error code, however I found some cases that do not handle this, e.g. if 
> poll() fails with ENOMEM. One symptom of this issue is that we could 
> sometimes see a hang when a query is canceled.
> The potential solution:
> 1) Expect each branch jump ("goto error_cleanup") to set a proper error code 
> itself. This is not an easy job.
> 2) Add a "guard" function in the error_cleanup code that sets an error code 
> if one is not already set, i.e. in case 1) is not done well.
> This JIRA cares about 2).
> In general, the cleanup code in the QD seems really obscure and inelegant. 
> Maybe we should file another JIRA to refactor its error handling logic. 





[jira] [Updated] (HAWQ-1334) QD thread should set error code if failing so that the main process for the query could exit soon

2017-02-15 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1334:
---
Description: 
In the QD thread function dispmgt_thread_func_run(), if there are failures, 
either due to a QE or the QD itself, it will cancel the query and then clean 
up. The main process for the query needs the error code of meleeResults to be 
set so that it can proceed to cancel the query promptly; otherwise we have to 
wait for a timeout. Typically dispmgt_thread_func_run() should set the error 
code, however I found some cases that do not handle this, e.g. if poll() fails 
with ENOMEM. One symptom of this issue is that we could sometimes see a hang 
when a query is canceled.

The potential solution:

1) Expect each branch jump ("goto error_cleanup") to set a proper error code 
itself. This is not an easy job.
2) Add a "guard" function in the error_cleanup code that sets an error code if 
one is not already set, i.e. in case 1) is not done well.

This JIRA cares about 2).

In general, the cleanup code in the QD seems really obscure and inelegant. 
Maybe we should file another JIRA to refactor its error handling logic. 

  was:
In QD thread dispmgt_thread_func_run(), if there are failures either due to QE 
or QD itself, it will cancel the query and then clean up. The main process for 
the query need to have the error code of meleeResults be set so that it soon 
proceed to cancel the query, else we have to wait for timeout. Typically 
dispmgt_thread_func_run() should set the error code, however I found there are 
some cases who do not handle this, e.g. if poll() fails with ENOMEM. One 
symptom of this issue is that we could sometimes see hang if a query is 
canceled for some reasons.

The potential solution is that:

1) We expect each branch jump ("goto error_cleanup") should set proper error 
code it self.
2) We add a "guard" function in the error_cleanup code to set an error code if 
it is not set.

In general, the cleanup code in QD seems to be really obscure and not elegant. 
Maybe we should file another JIRA to refactor the error handling logic in it. 


> QD thread should set error code if failing so that the main process for the 
> query could exit soon
> -
>
> Key: HAWQ-1334
> URL: https://issues.apache.org/jira/browse/HAWQ-1334
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Paul Guo
>Assignee: Ed Espino
>
> In the QD thread function dispmgt_thread_func_run(), if there are failures 
> due to either the QE or the QD itself, it cancels the query and then cleans 
> up. The main process for the query needs the error code of meleeResults to 
> be set so that it can proceed to cancel the query promptly; otherwise we 
> have to wait for a timeout. Typically dispmgt_thread_func_run() should set 
> the error code; however, I found some cases that do not handle this, e.g. 
> when poll() fails with ENOMEM. One symptom of this issue is that we can 
> sometimes see a hang when a query is canceled.
> The potential solutions are:
> 1) Expect each branch jump ("goto error_cleanup") to set a proper error 
> code itself. This is not an easy job.
> 2) Add a "guard" function in the error_cleanup code that sets an error code 
> if one has not been set, i.e. in case 1) is not done thoroughly.
> This JIRA covers 2).
> In general, the cleanup code in QD seems to be really obscure and not 
> elegant. Maybe we should file another JIRA to refactor the error handling 
> logic in it. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1334) QD thread should set error code if failing so that the main process for the query could exit soon

2017-02-15 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1334:
--

 Summary: QD thread should set error code if failing so that the 
main process for the query could exit soon
 Key: HAWQ-1334
 URL: https://issues.apache.org/jira/browse/HAWQ-1334
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Dispatcher
Reporter: Paul Guo
Assignee: Ed Espino


In the QD thread dispmgt_thread_func_run(), if there are failures due to either 
the QE or the QD itself, it cancels the query and then cleans up. The main 
process for the query needs the error code of meleeResults to be set so that it 
can proceed to cancel the query promptly; otherwise we have to wait for a 
timeout. Typically dispmgt_thread_func_run() should set the error code; 
however, I found some cases that do not handle this, e.g. when poll() fails 
with ENOMEM. One symptom of this issue is that we can sometimes see a hang when 
a query is canceled.

The potential solutions are:

1) Expect each branch jump ("goto error_cleanup") to set a proper error code 
itself.
2) Add a "guard" function in the error_cleanup code that sets an error code if 
one has not been set.

In general, the cleanup code in QD seems to be really obscure and not elegant. 
Maybe we should file another JIRA to refactor the error handling logic in it. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1278) Investigate installcheck-good issue on Mac OSX

2017-02-14 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15867171#comment-15867171
 ] 

Paul Guo commented on HAWQ-1278:


To my knowledge, this issue has been resolved. Maybe we could close this one 
now?

> Investigate installcheck-good issue on Mac OSX
> --
>
> Key: HAWQ-1278
> URL: https://issues.apache.org/jira/browse/HAWQ-1278
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Tests
>Reporter: Ed Espino
>Assignee: Ruilong Huo
> Fix For: 2.2.0.0-incubating
>
>
> I am filing this as a placeholder for the Mac OSX installcheck-good 
> investigation work.  Ming Li originally reported installcheck-good testing 
> issues with errtable and hcatalog_lookup test suites.
> This issue is not seen on CentOS 6 & 7 environments.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1326) Cancel the query earlier if one of the segments for the query crashes

2017-02-14 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1326.
--
Resolution: Fixed

> Cancel the query earlier if one of the segments for the query crashes
> -
>
> Key: HAWQ-1326
> URL: https://issues.apache.org/jira/browse/HAWQ-1326
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> The QD thread can hang in the poll() loop because: 1) the alive segments may 
> wait at the interconnect for the dead segment until the interconnect timeout 
> (by default 1 hour); 2) in the QD thread, poll() will not sense that the 
> system is down until kernel TCP keepalive messaging is triggered, but the 
> keepalive timeout is rather long (2 hours by default on RHEL 6.x) and can 
> be configured only via procfs.
> A proper solution would be to use the RM heartbeat mechanism:
> RM maintains a global ID list (stable across node additions and removals) 
> for all nodes and keeps updating their health state via a userspace 
> heartbeat mechanism. We could thus maintain a bitmap in shared memory 
> holding the latest node health info and use it in QD code, i.e. cancel the 
> query on finding that a segment node handling part of the query is down.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1326) Cancel the query earlier if one of the segments for the query crashes

2017-02-13 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1326:
---
Summary: Cancel the query earlier if one of the segments for the query 
crashes  (was: Cancel the query if one of the segments for the query crashes)

> Cancel the query earlier if one of the segments for the query crashes
> -
>
> Key: HAWQ-1326
> URL: https://issues.apache.org/jira/browse/HAWQ-1326
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> The QD thread can hang in the poll() loop because: 1) the alive segments may 
> wait at the interconnect for the dead segment until the interconnect timeout 
> (by default 1 hour); 2) in the QD thread, poll() will not sense that the 
> system is down until kernel TCP keepalive messaging is triggered, but the 
> keepalive timeout is rather long (2 hours by default on RHEL 6.x) and can 
> be configured only via procfs.
> A proper solution would be to use the RM heartbeat mechanism:
> RM maintains a global ID list (stable across node additions and removals) 
> for all nodes and keeps updating their health state via a userspace 
> heartbeat mechanism. We could thus maintain a bitmap in shared memory 
> holding the latest node health info and use it in QD code, i.e. cancel the 
> query on finding that a segment node handling part of the query is down.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1326) Cancel the query if one of the segments for the query crashes

2017-02-13 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1326:
--

Assignee: Paul Guo  (was: Ed Espino)

> Cancel the query if one of the segments for the query crashes
> -
>
> Key: HAWQ-1326
> URL: https://issues.apache.org/jira/browse/HAWQ-1326
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> The QD thread can hang in the poll() loop because: 1) the alive segments may 
> wait at the interconnect for the dead segment until the interconnect timeout 
> (by default 1 hour); 2) in the QD thread, poll() will not sense that the 
> system is down until kernel TCP keepalive messaging is triggered, but the 
> keepalive timeout is rather long (2 hours by default on RHEL 6.x) and can 
> be configured only via procfs.
> A proper solution would be to use the RM heartbeat mechanism:
> RM maintains a global ID list (stable across node additions and removals) 
> for all nodes and keeps updating their health state via a userspace 
> heartbeat mechanism. We could thus maintain a bitmap in shared memory 
> holding the latest node health info and use it in QD code, i.e. cancel the 
> query on finding that a segment node handling part of the query is down.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1327) Move ID from struct SegStatData to struct SegInfoData so that ID could be used in QD.

2017-02-13 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1327.
--
Resolution: Fixed

>  Move ID from struct SegStatData to struct SegInfoData so that ID could be 
> used in QD.
> --
>
> Key: HAWQ-1327
> URL: https://issues.apache.org/jira/browse/HAWQ-1327
> Project: Apache HAWQ
>  Issue Type: Sub-task
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> This is the sub-JIRA for
> HAWQ-1326. Cancel the query if one of the segments for the query crashes
> The summary is quite clear.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1327) Move ID from struct SegStatData to struct SegInfoData so that ID could be used in QD.

2017-02-13 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1327:
--

Assignee: Paul Guo  (was: Ed Espino)

>  Move ID from struct SegStatData to struct SegInfoData so that ID could be 
> used in QD.
> --
>
> Key: HAWQ-1327
> URL: https://issues.apache.org/jira/browse/HAWQ-1327
> Project: Apache HAWQ
>  Issue Type: Sub-task
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> This is the sub-JIRA for
> HAWQ-1326. Cancel the query if one of the segments for the query crashes
> The summary is quite clear.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1327) Move ID from struct SegStatData to struct SegInfoData so that ID could be used in QD.

2017-02-13 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1327:
--

 Summary:  Move ID from struct SegStatData to struct SegInfoData so 
that ID could be used in QD.
 Key: HAWQ-1327
 URL: https://issues.apache.org/jira/browse/HAWQ-1327
 Project: Apache HAWQ
  Issue Type: Sub-task
Reporter: Paul Guo
Assignee: Ed Espino


This is the sub-JIRA for
HAWQ-1326. Cancel the query if one of the segments for the query crashes

The summary is quite clear.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1326) Cancel the query if one of the segments for the query crashes

2017-02-13 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1326:
--

 Summary: Cancel the query if one of the segments for the query 
crashes
 Key: HAWQ-1326
 URL: https://issues.apache.org/jira/browse/HAWQ-1326
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


The QD thread can hang in the poll() loop because: 1) the alive segments may 
wait at the interconnect for the dead segment until the interconnect timeout 
(by default 1 hour); 2) in the QD thread, poll() will not sense that the system 
is down until kernel TCP keepalive messaging is triggered, but the keepalive 
timeout is rather long (2 hours by default on RHEL 6.x) and can be configured 
only via procfs.

A proper solution would be to use the RM heartbeat mechanism:

RM maintains a global ID list (stable across node additions and removals) for 
all nodes and keeps updating their health state via a userspace heartbeat 
mechanism. We could thus maintain a bitmap in shared memory holding the latest 
node health info and use it in QD code, i.e. cancel the query on finding that a 
segment node handling part of the query is down.
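The bitmap idea can be sketched as follows. This is an illustration under 
assumptions: MAX_SEGMENTS, the function names, and the plain global array are 
stand-ins; in HAWQ the bitmap would live in shared memory and be updated by the 
RM heartbeat, not by a local call.

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_SEGMENTS 1024   /* illustrative capacity, one bit per node ID */

static uint8_t node_down_bitmap[MAX_SEGMENTS / 8];

/* The RM heartbeat handler would call this when a node misses heartbeats. */
static void mark_node_down(int id)
{
    node_down_bitmap[id / 8] |= (uint8_t)(1u << (id % 8));
}

static bool node_is_down(int id)
{
    return (node_down_bitmap[id / 8] >> (id % 8)) & 1u;
}

/* Checked on each iteration of the QD poll() loop: cancel the query as
 * soon as any segment handling part of it is down, instead of waiting
 * for the interconnect or TCP keepalive timeout. */
static bool should_cancel_query(const int *seg_ids, int nseg)
{
    for (int i = 0; i < nseg; i++)
        if (node_is_down(seg_ids[i]))
            return true;
    return false;
}
```

The stable global ID is what makes the bitmap indexable across node additions 
and removals.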



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1316) Feature test compile failed on CentOS 7.3

2017-02-06 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15855481#comment-15855481
 ] 

Paul Guo commented on HAWQ-1316:


This is possibly due to a subtle behavior difference in binutils.

> Feature test compile failed on CentOS 7.3
> ---
>
> Key: HAWQ-1316
> URL: https://issues.apache.org/jira/browse/HAWQ-1316
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: backlog
>
>
> While compiling the HAWQ feature test on RHEL 7, I see the error below.
> make -C lib all
> make[1]: Entering directory 
> `/tmp/build/78017950/hdb_apache/src/test/feature/lib'
> ar -r libtest.a 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/xml_parser.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/command.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/hawq_config.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/hawq_scp.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/string_util.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/hdfs_config.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/data_gen.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/psql.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/yarn_config.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/sql_util.o 
> /tmp/build/78017950/hdb_apache/src/test/feature/lib/file_replace.o
> ar: creating libtest.a
> make[1]: Leaving directory 
> `/tmp/build/78017950/hdb_apache/src/test/feature/lib'
> g++ -I/usr/include -I/usr/local/include -I/usr/include/libxml2 
> -I/tmp/build/78017950/hdb_apache/src/test/feature/ 
> -I/tmp/build/78017950/hdb_apache/src/test/feature/ManagementTool/ 
> -I/tmp/build/78017950/hdb_apache/src/test/feature/lib/ 
> -I/tmp/build/78017950/hdb_apache/src/interfaces/libpq 
> -I/tmp/build/78017950/hdb_apache/src/interfaces 
> -I/tmp/build/78017950/hdb_apache/src/include  
> -I/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/googletest/include
>  
> -I/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/googlemock/include
>  -Wall -O0 -g -std=c++11 test_main.o ao/TestAoSnappy.o utility/test_copy.o 
> utility/test_cmd.o testlib/test_lib.o ExternalSource/test_errortbl.o 
> ExternalSource/test_external_oid.o ExternalSource/test_exttab.o 
> query/test_create_type_composite.o query/test_prepare.o query/test_portal.o 
> query/test_temp.o query/test_polymorphism.o query/test_nested_case_null.o 
> query/test_sequence.o query/test_parser.o query/test_information_schema.o 
> query/test_gp_dist_random.o query/test_rowtypes.o query/test_insert.o 
> query/test_aggregate.o parquet/test_parquet.o 
> transactions/test_transactions.o ddl/test_database.o 
> partition/test_partition.o catalog/test_alter_table.o catalog/test_type.o 
> catalog/test_alter_owner.o catalog/test_guc.o catalog/test_create_table.o 
> toast/TestToast.o planner/test_subplan.o UDF/TestUDF.o 
> PreparedStatement/TestPreparedStatement.o lib/xml_parser.o lib/command.o 
> lib/hawq_config.o lib/hawq_scp.o lib/string_util.o lib/hdfs_config.o 
> lib/data_gen.o lib/psql.o lib/yarn_config.o lib/sql_util.o lib/file_replace.o 
> ManagementTool/test_hawq_register_usage2_case1.o 
> ManagementTool/test_hawq_register_usage2_case2.o 
> ManagementTool/test_hawq_register_usage1.o 
> ManagementTool/test_hawq_register_rollback.o 
> ManagementTool/test_hawq_register_partition.o -L../../../src/port 
> -L../../../src/port -Wl,--as-needed 
> -L/tmp/build/78017950/hdb_apache/depends/libhdfs3/build/install/usr/local/hawq/lib
>  
> -L/tmp/build/78017950/hdb_apache/depends/libyarn/build/install/usr/local/hawq/lib
>  -Wl,-rpath,'/usr/local/hawq/lib',--enable-new-dtags -L/usr/local/lib 
> -L/usr/lib -L/tmp/build/78017950/hdb_apache/src/test/feature/ 
> -L/tmp/build/78017950/hdb_apache/src/test/feature/lib/ 
> -L/tmp/build/78017950/hdb_apache/src/interfaces/libpq 
> -L/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/build/googlemock
>  
> -L/tmp/build/78017950/hdb_apache/depends/thirdparty/googletest/build/googlemock/gtest
>  -lpgport -ljson-c -levent -lyaml -lsnappy -lbz2 -lrt -lz -lreadline -lcrypt 
> -ldl -lm  -lcurl   -lyarn -lkrb5 -lgtest -lpq -lxml2 -ltest -o feature-test
> /usr/bin/ld: /usr/lib/libgtest.a(gtest-all.cc.o): undefined reference to 
> symbol 'pthread_key_delete@@GLIBC_2.2.5'
> /usr/lib64/libpthread.so.0: error adding symbols: DSO missing from command 
> line
> collect2: error: ld returned 1 exit status
> make: *** [all] Error 1
> After appending '-lpthread', the compile passed. This happens on CentOS 7.3; 
> CentOS 7.2 works fine.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1260) Remove temp tables after hawq restart

2017-01-05 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1260:
--

 Summary: Remove temp tables after hawq restart 
 Key: HAWQ-1260
 URL: https://issues.apache.org/jira/browse/HAWQ-1260
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Command Line Tools
Reporter: Paul Guo
Assignee: Ed Espino


Sometimes HAWQ encounters errors and has to restart (e.g. after an OOM kill, or 
during debugging), and useless temp tables are left on HDFS and in the catalog. 
One solution is to remove the pg_temp_* schemas automatically after HAWQ 
restarts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1245) can HAWQ support alternate python module deployment directory?

2017-01-03 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1245:
---
Component/s: (was: Core)
 Command Line Tools

> can HAWQ support alternate python module deployment directory?
> --
>
> Key: HAWQ-1245
> URL: https://issues.apache.org/jira/browse/HAWQ-1245
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Lisa Owen
>Assignee: Ed Espino
>Priority: Minor
>
> HAWQ no longer embeds python and now uses the system python installation. 
> With this change, installing a new python module requires root/sudo access 
> to the system python directories. Is there any reason why HAWQ could not 
> support deploying python modules to an alternate directory that is owned by 
> gpadmin, or using a python virtual environment?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1200) Incremental make install.

2017-01-03 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15794615#comment-15794615
 ] 

Paul Guo commented on HAWQ-1200:


Just saw it. It is an interesting idea. I have two concerns:
1) I assume the Makefiles already have proper dependencies so that unmodified 
files are not copied; otherwise that is a Makefile bug.
2) Our install script install-sh already supports a similar idea via the 
option "-C". The difference is that it uses cmp.
rsync is a bit heavy for comparison since it computes hash digests and allows 
shifted matches.

I quickly hacked src/backend/Makefile.global and tested install performance on 
my virtual machine:

#INSTALL = $(SHELL) $(top_srcdir)/config/install-sh -c
INSTALL = $(SHELL) $(top_srcdir)/config/install-sh -C

No obvious improvement on my system. The difference depends on CPU and I/O 
performance, since the comparison approach introduces extra CPU cost for the 
comparison and I/O cost for reading the original file. If writes are slow 
(e.g. a slow hard disk, or over a network), the approach helps; otherwise it 
does not.

Anyone can easily customize this by setting the environment variable 
CUSTOM_INSTALL.
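What install-sh's "-C" option does amounts to the following sketch; this is an 
illustration of the cmp trade-off, not the actual install-sh code. It 
byte-compares source and target and skips the write when they are identical, 
trading extra CPU and read I/O for an avoided write.

```c
#include <stdbool.h>
#include <stdio.h>

/* Return true when src and dst differ (or dst is unreadable), i.e. when
 * a copy is actually needed. Only pays off when writes are slow. */
static bool needs_install(const char *src, const char *dst)
{
    FILE *a = fopen(src, "rb");
    FILE *b = fopen(dst, "rb");
    bool differ = true;

    if (a && b) {
        int ca, cb;
        do {
            ca = fgetc(a);
            cb = fgetc(b);
        } while (ca == cb && ca != EOF);
        differ = (ca != cb);   /* identical files reach EOF together */
    }
    if (a) fclose(a);
    if (b) fclose(b);
    return differ;              /* a missing target also forces a copy */
}
```

This is exactly why the win depends on the machine: the comparison always costs 
a full read of both files, and only saves anything when the avoided write was 
the expensive part.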

> Incremental make install.
> -
>
> Key: HAWQ-1200
> URL: https://issues.apache.org/jira/browse/HAWQ-1200
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build
>Reporter: hongwu
>Assignee: Lei Chang
>
> The current make install process copies all files from the source directory 
> to the install prefix, which is time-consuming. We can optimize it with 
> rsync instead of cp, which could improve development efficiency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1194) XXXX Todo

2017-01-02 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15794435#comment-15794435
 ] 

Paul Guo commented on HAWQ-1194:


I guess this was created by mistake. If it has no use, please delete it. 
Thanks.

>  Todo
> -
>
> Key: HAWQ-1194
> URL: https://issues.apache.org/jira/browse/HAWQ-1194
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Hongxu Ma
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-1240) Fix bug of plan refinement for cursor operation

2016-12-28 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1240.
--
   Resolution: Fixed
Fix Version/s: (was: backlog)
   2.2.0.0-incubating

> Fix bug of plan refinement for cursor operation
> ---
>
> Key: HAWQ-1240
> URL: https://issues.apache.org/jira/browse/HAWQ-1240
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> Currently we call refineCachedPlan() in the cursor open code 
> SPI_cursor_open(); however, the code is buggy since the new plan is not used 
> for later operations. There is also a debate about whether we should replan 
> from the query tree when needed, but that is another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-1241) No need of ext/python in *PATH in file greenplum_path.sh

2016-12-27 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1241.
--
   Resolution: Fixed
 Assignee: Paul Guo  (was: Ed Espino)
Fix Version/s: 2.2.0.0-incubating

> No need of ext/python in *PATH in file greenplum_path.sh
> 
>
> Key: HAWQ-1241
> URL: https://issues.apache.org/jira/browse/HAWQ-1241
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>
> We stopped shipping python in the HAWQ package a long time ago, so there is 
> no need to set python paths in greenplum_path.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1241) No need of ext/python in *PATH in file greenplum_path.sh

2016-12-27 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1241:
--

 Summary: No need of ext/python in *PATH in file greenplum_path.sh
 Key: HAWQ-1241
 URL: https://issues.apache.org/jira/browse/HAWQ-1241
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Ed Espino


We stopped shipping python in the HAWQ package a long time ago, so there is no 
need to set python paths in greenplum_path.sh.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1240) Fix bug of plan refinement for cursor operation

2016-12-27 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1240:
---
Description: Currently we call refineCachedPlan() in the cursor open code 
SPI_cursor_open(); however, the code is buggy since the new plan is not used 
for later operations. There is also a debate about whether we should replan 
from the query tree when needed, but that is another issue.  (was: Currently we 
call refineCachedPlan() in the cursor open code SPI_cursor_open(), however the 
code is buggy since the new plan is not used for later operation. Also there is 
an internal debate that whether need to replan from query tree for these, but 
this is another issue.)

> Fix bug of plan refinement for cursor operation
> ---
>
> Key: HAWQ-1240
> URL: https://issues.apache.org/jira/browse/HAWQ-1240
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: backlog
>
>
> Currently we call refineCachedPlan() in the cursor open code 
> SPI_cursor_open(); however, the code is buggy since the new plan is not used 
> for later operations. There is also a debate about whether we should replan 
> from the query tree when needed, but that is another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-1240) Fix bug of plan refinement for cursor operation

2016-12-27 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1240:
--

Assignee: Paul Guo  (was: Ed Espino)

> Fix bug of plan refinement for cursor operation
> ---
>
> Key: HAWQ-1240
> URL: https://issues.apache.org/jira/browse/HAWQ-1240
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: backlog
>
>
> Currently we call refineCachedPlan() in the cursor open code 
> SPI_cursor_open(); however, the code is buggy since the new plan is not used 
> for later operations. There is also an internal debate about whether we 
> should replan from the query tree, but that is another issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1240) Fix bug of plan refinement for cursor operation

2016-12-27 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1240:
--

 Summary: Fix bug of plan refinement for cursor operation
 Key: HAWQ-1240
 URL: https://issues.apache.org/jira/browse/HAWQ-1240
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Ed Espino
 Fix For: backlog


Currently we call refineCachedPlan() in the cursor open code 
SPI_cursor_open(); however, the code is buggy since the new plan is not used 
for later operations. There is also an internal debate about whether we should 
replan from the query tree, but that is another issue.
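The bug pattern can be illustrated with hypothetical stand-ins (the real 
refineCachedPlan() signature and the SPI plan types differ): the refined plan 
is computed, but the stale one keeps being used.

```c
/* Hypothetical plan object; the real cached-plan types differ. */
typedef struct { int version; } Plan;

/* Stand-in for refineCachedPlan(): returns a refreshed plan instead of
 * mutating its argument. */
static Plan refine_cached_plan(Plan p)
{
    p.version += 1;
    return p;
}

/* Buggy pattern from the description: the result is dropped, so later
 * operations still see the stale plan. */
static int open_cursor_buggy(Plan p)
{
    (void) refine_cached_plan(p);   /* refined plan discarded */
    return p.version;
}

/* Fix: actually adopt the refreshed plan for later operations. */
static int open_cursor_fixed(Plan p)
{
    p = refine_cached_plan(p);
    return p.version;
}
```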



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-1230) Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.

2016-12-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1230.
--
   Resolution: Fixed
Fix Version/s: 2.1.0.0-incubating

> Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.
> -
>
> Key: HAWQ-1230
> URL: https://issues.apache.org/jira/browse/HAWQ-1230
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.1.0.0-incubating
>
>
> Saw a warning similar to:
> be-secure.c:323:1: warning: unused function 'report_commerror' 
> [-Wunused-function]
> One of the reasons is that the callers are not compiled in under certain 
> configure options.
> Add the macro below, for the gcc attribute, to suppress this kind of warning.
> #define __MAYBE_UNUSED __attribute__((used))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1230) Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.

2016-12-21 Thread Paul Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15766791#comment-15766791
 ] 

Paul Guo commented on HAWQ-1230:


We could also remove the macros below:

-#ifndef POSSIBLE_UNUSED_VAR
-#define POSSIBLE_UNUSED_VAR(x) ((void)x)
-#endif
-
-#ifndef POSSIBLE_UNUSED_ARG
-#define POSSIBLE_UNUSED_ARG(x) ((void)x)

> Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.
> -
>
> Key: HAWQ-1230
> URL: https://issues.apache.org/jira/browse/HAWQ-1230
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Saw a warning similar to:
> be-secure.c:323:1: warning: unused function 'report_commerror' 
> [-Wunused-function]
> One of the reasons is that the callers are not compiled in under certain 
> configure options.
> Add the macro below, for the gcc attribute, to suppress this kind of warning.
> #define __MAYBE_UNUSED __attribute__((used))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1230) Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.

2016-12-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1230:
---
Description: 
Saw a warning similar to:
be-secure.c:323:1: warning: unused function 'report_commerror' 
[-Wunused-function]

One of the reasons is that the callers are not compiled in under certain 
configure options.

Add the macro below, for the gcc attribute, to suppress this kind of warning.

#define __MAYBE_UNUSED __attribute__((used))

  was:
Saw similar warning:
be-secure.c:323:1: warning: unused function 'report_commerror' 
[-Wunused-function]

One of the reason is that the callers are not compiled with some configuration 
options.

Add the macro for gcc attribute below to surpress these kind of warnings.

#define __MAYBE_UNUSED_FUNC __attribute__((used))


> Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.
> -
>
> Key: HAWQ-1230
> URL: https://issues.apache.org/jira/browse/HAWQ-1230
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Saw a warning similar to:
> be-secure.c:323:1: warning: unused function 'report_commerror' 
> [-Wunused-function]
> One of the reasons is that the callers are not compiled in under certain 
> configure options.
> Add the macro below, for the gcc attribute, to suppress this kind of warning.
> #define __MAYBE_UNUSED __attribute__((used))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-1230) Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.

2016-12-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo reassigned HAWQ-1230:
--

Assignee: Paul Guo  (was: Ed Espino)

> Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.
> -
>
> Key: HAWQ-1230
> URL: https://issues.apache.org/jira/browse/HAWQ-1230
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>
> Saw a warning similar to:
> be-secure.c:323:1: warning: unused function 'report_commerror' 
> [-Wunused-function]
> One of the reasons is that the callers are not compiled in under certain 
> configure options.
> Add the macro below, for the gcc attribute, to suppress this kind of warning.
> #define __MAYBE_UNUSED_FUNC __attribute__((used))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1230) Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.

2016-12-21 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1230:
---
Summary: Introduce macro __MAYBE_UNUSED to suppress "unused function" 
warnings.  (was: Introduce macro __MAYBE_UNUSED_FUNC to suppress "unused 
function" warnings.)

> Introduce macro __MAYBE_UNUSED to suppress "unused function" warnings.
> -
>
> Key: HAWQ-1230
> URL: https://issues.apache.org/jira/browse/HAWQ-1230
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Ed Espino
>
> Saw a warning similar to:
> be-secure.c:323:1: warning: unused function 'report_commerror' 
> [-Wunused-function]
> One of the reasons is that the callers are not compiled under some 
> configuration options.
> Add the macro below, based on a gcc attribute, to suppress this kind of warning.
> #define __MAYBE_UNUSED_FUNC __attribute__((used))





[jira] [Created] (HAWQ-1230) Introduce macro __MAYBE_UNUSED_FUNC to suppress "unused function" warnings.

2016-12-21 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-1230:
--

 Summary: Introduce macro __MAYBE_UNUSED_FUNC to suppress "unused 
function" warnings.
 Key: HAWQ-1230
 URL: https://issues.apache.org/jira/browse/HAWQ-1230
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Ed Espino


Saw a warning similar to:
be-secure.c:323:1: warning: unused function 'report_commerror' 
[-Wunused-function]

One of the reasons is that the callers are not compiled under some configuration 
options.

Add the macro below, based on a gcc attribute, to suppress this kind of warning.

#define __MAYBE_UNUSED_FUNC __attribute__((used))
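As a sketch of how the proposed macro could be applied (the attribute and macro name come from this ticket; the non-GCC fallback branch and the `doubled` helper are illustrative assumptions, not part of the ticket):

```c
#include <stdio.h>

/* Sketch of the macro proposed above. __attribute__((used)) tells
 * GCC/Clang to keep the symbol and to stop warning that it is unused,
 * even when every caller has been compiled out by configure options.
 * The empty fallback for other compilers is an assumption. */
#if defined(__GNUC__) || defined(__clang__)
#define __MAYBE_UNUSED_FUNC __attribute__((used))
#else
#define __MAYBE_UNUSED_FUNC
#endif

/* With the attribute, compiling this file with
 *   gcc -c -Wall -Wunused-function
 * no longer emits "unused function 'report_commerror'". */
static void __MAYBE_UNUSED_FUNC report_commerror(const char *msg)
{
    fprintf(stderr, "COMMERROR: %s\n", msg);
}

/* Illustrative helper (not from the ticket) showing the macro leaves
 * the function fully callable. */
static int __MAYBE_UNUSED_FUNC doubled(int x)
{
    return 2 * x;
}
```

Note that GCC also provides `__attribute__((unused))`, which only silences the warning, whereas `used` additionally forces the compiler to emit the symbol even if it is never referenced.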





[jira] [Closed] (HAWQ-1186) Remove with-thrift in src/Makefile.global.in

2016-12-19 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo closed HAWQ-1186.
--

> Remove with-thrift in src/Makefile.global.in
> 
>
> Key: HAWQ-1186
> URL: https://issues.apache.org/jira/browse/HAWQ-1186
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: 2.0.1.0-incubating
>
>
> We removed the thrift code but forgot to remove the with-thrift related settings 
> in src/Makefile.global.in.




