[GitHub] incubator-hawq pull request #1157: HAWQ-1371. Fix QE process hang in shared ...

2017-03-01 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1157#discussion_r103857662
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -885,6 +906,12 @@ writer_wait_for_acks(ShareInput_Lk_Context *pctxt, int share_id, int xslice)
while(ack_needed > 0)
{
CHECK_FOR_INTERRUPTS();
+   
+   if (IsAbortInProgress())
+   {
+   break;
+   }
+
--- End diff --

Is a comment needed here?
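
For illustration only, a sketch of what such a comment might say, based on the HAWQ-1371 description (the wording is an assumption, not part of the patch):

    if (IsAbortInProgress())
    {
        /*
         * The transaction is already aborting, so the readers may never
         * send their acks; stop waiting to avoid hanging the writer QE
         * during abort processing (HAWQ-1371).
         */
        break;
    }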




[GitHub] incubator-hawq pull request #1157: HAWQ-1371. Fix QE process hang in shared ...

2017-03-01 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1157#discussion_r103856229
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -627,38 +627,50 @@ static void create_tmp_fifo(const char *fifoname)
 /* 
  * As all other read/write in postgres, we may be interrupted so retry is needed.
  */
-static int retry_read(int fd, char *buf, int rsize)
+static int retry_read(int *fd, char *buf, int rsize)
 {
int sz;
Assert(rsize > 0);
 
 read_retry:
-   sz = read(fd, buf, rsize);
+   sz = read(*fd, buf, rsize);
--- End diff --

Frankly speaking, I'd keep the retry_read() logic simple, like this:

do
{
    err = read(fd, buf, rsize);
} while (err == -1 && errno == EINTR);

and leave the close() and error-handling code to its callers.

If you insist on this, you could at least modify the function name to reflect 
the additional close() call and early exit.

I do not see why an fd pointer is needed here, since elog(ERROR, ...) will 
quit the process.

The comment applies to the write change below also.
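
For reference, a minimal sketch of the simpler retry loop proposed above, assuming plain POSIX read()/errno semantics; close() and error reporting are deliberately left to the callers, as suggested (this is not the actual patch):

    static int retry_read(int fd, char *buf, int rsize)
    {
        int sz;

        Assert(rsize > 0);

        do
        {
            /* retry only when the call was interrupted by a signal */
            sz = read(fd, buf, rsize);
        } while (sz == -1 && errno == EINTR);

        return sz;   /* > 0: bytes read, 0: EOF, -1: error with errno set */
    }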




[GitHub] incubator-hawq pull request #1157: HAWQ-1371. Fix QE process hang in shared ...

2017-03-01 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1157#discussion_r103856770
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -627,38 +627,50 @@ static void create_tmp_fifo(const char *fifoname)
 /* 
  * As all other read/write in postgres, we may be interrupted so retry is needed.
  */
-static int retry_read(int fd, char *buf, int rsize)
+static int retry_read(int *fd, char *buf, int rsize)
 {
int sz;
Assert(rsize > 0);
 
 read_retry:
-   sz = read(fd, buf, rsize);
+   sz = read(*fd, buf, rsize);
if (sz > 0)
return sz;
-   else if(sz == 0 || errno == EINTR)
+   else if(sz == 0) // read EOF 
+   return 0;
--- End diff --

Why not if (sz >= 0)?




[GitHub] incubator-hawq pull request #1157: HAWQ-1371. Fix QE process hang in shared ...

2017-03-01 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1157#discussion_r103857185
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -627,38 +627,50 @@ static void create_tmp_fifo(const char *fifoname)
 /* 
  * As all other read/write in postgres, we may be interrupted so retry is needed.
  */
-static int retry_read(int fd, char *buf, int rsize)
+static int retry_read(int *fd, char *buf, int rsize)
 {
int sz;
Assert(rsize > 0);
 
 read_retry:
-   sz = read(fd, buf, rsize);
+   sz = read(*fd, buf, rsize);
if (sz > 0)
return sz;
-   else if(sz == 0 || errno == EINTR)
+   else if(sz == 0) // read EOF 
+   return 0;
+   else if(errno == EINTR)
goto read_retry;
else
{
+   if(*fd >= 0)
+   {
+   gp_retry_close(fd);
+   *fd = -1;
+   }
elog(ERROR, "could not read from fifo: %m");
}
Assert(!"Never be here");
return 0;
--- End diff --

Although this will never be reached, I'd suggest -1 as the return value.
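
Putting the suggestions together, a sketch of how the patched function might look with the if (sz >= 0) merge from the previous comment and the -1 return value suggested here (gp_retry_close() and elog() usage is copied from the patch hunk above; this is illustrative, not the actual change):

    static int retry_read(int *fd, char *buf, int rsize)
    {
        int sz;

        Assert(rsize > 0);

    read_retry:
        sz = read(*fd, buf, rsize);
        if (sz >= 0)                  /* > 0: data read, 0: EOF */
            return sz;
        else if (errno == EINTR)
            goto read_retry;

        /* a real error: close the fifo before erroring out, as in the patch */
        if (*fd >= 0)
        {
            gp_retry_close(fd);
            *fd = -1;
        }
        elog(ERROR, "could not read from fifo: %m");

        return -1;                    /* never reached */
    }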




[GitHub] incubator-hawq pull request #1157: HAWQ-1371. Fix QE process hang in shared ...

2017-03-01 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1157#discussion_r103855671
  
--- Diff: src/backend/executor/nodeShareInputScan.c ---
@@ -1009,10 +1059,10 @@ shareinput_writer_waitdone(void *ctxt, int share_id, int nsharer_xslice)
{
int save_errno = errno;
elog(LOG, "SISC WRITER (shareid=%d, slice=%d): wait 
done time out once, errno %d",
-   share_id, currentSliceId, save_errno);
-   if(save_errno == EBADF)
+   share_id, currentSliceId, save_errno);  
+   if(save_errno == EBADF || save_errno == EINVAL)
{
-   /* The file description is invalid, maybe this 
FD has been already closed by writer in some cases
+   /* The file description is invalid, maybe this 
FD has been already closed by others in some cases
--- End diff --

The comment does not match the updated check logic.




[jira] [Commented] (HAWQ-760) Hawq register

2017-03-01 Thread Lili Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891592#comment-15891592
 ] 

Lili Ma commented on HAWQ-760:
--

[~kdunn926] 

HAWQ register doesn't check the HAWQ version number.  Although HAWQ 2.X optimized 
the storage for AO-format tables, it can still read AO files generated by 
HAWQ 1.X. The Parquet file format has not changed, so there won't be a problem 
there. So I don't think you will encounter problems registering tables from 
HAWQ 1.X into HAWQ 2.X.

If you want to register Parquet files generated by other products such as Hive 
or Impala, which may use a later Parquet version, hawq register won't throw an 
error at registration time, but you may see errors when selecting from the 
registered table.  For example, if some data page is encoded with dictionary 
encoding, HAWQ will throw an error indicating that it cannot process it.

> Hawq register
> -
>
> Key: HAWQ-760
> URL: https://issues.apache.org/jira/browse/HAWQ-760
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Reporter: Yangcheng Luo
>Assignee: Lili Ma
> Fix For: backlog
>
>
> Scenario: 
> 1. Register a parquet file generated by other systems, such as Hive, Spark, 
> etc.
> 2. For cluster Disaster Recovery. Two clusters co-exist, periodically import 
> data from Cluster A to Cluster B. Need Register data to Cluster B.
> 3. For the rollback of table. Do checkpoints somewhere, and need to rollback 
> to previous checkpoint. 
> Usage1
> Description
> Register a file/folder to an existing table. Can register a file or a folder. 
> If we register a file, can specify eof of this file. If eof not specified, 
> directly use actual file size. If we register a folder, directly use actual 
> file size.
> hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-f 
> filepath] [-e eof]
> Usage 2
> Description
> Register according to .yml configuration file. 
> hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-c 
> config] [--force][--repair]  
> Behavior:
> 1. If the table doesn't exist, automatically create the table and register 
> the files listed in the .yml configuration file. The file size specified in 
> the .yml is used to update the catalog table. 
> 2. If the table already exists and neither --force nor --repair is configured, 
> do not create any table; directly register the files specified in the .yml 
> file to the table. Note that if a file is already under the table directory 
> in HDFS, an error is thrown saying that to-be-registered files should not be 
> under the table path.
> 3. If the table already exists and --force is specified, clear all the 
> catalog contents in pg_aoseg.pg_paqseg_$relid while keeping the files on HDFS, 
> and then re-register all the files to the table.  This is for scenario 2.
> 4. If the table already exists and --repair is specified, change both the file 
> folder and the catalog table pg_aoseg.pg_paqseg_$relid to the state the .yml 
> file configures. Note that files newly generated since the checkpoint may be 
> deleted here. Also note that all the files in the .yml file should be under 
> the table folder on HDFS. Limitation: this does not support hash table 
> redistribution, table truncate, or table drop. This is for scenario 3.
> Requirements for both cases:
> 1. The to-be-registered file path has to be colocated with HAWQ in the same 
> HDFS cluster.
> 2. If the table to be registered is a hash table, the number of registered 
> files should be a multiple of the hash table bucket number.





[GitHub] incubator-hawq pull request #1154: HAWQ-1367. HAWQ can access to user tables...

2017-03-01 Thread wcl14
Github user wcl14 closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1154




[GitHub] incubator-hawq issue #1151: HAWQ-1359. Add policy test for HAWQ with Ranger ...

2017-03-01 Thread linwen
Github user linwen commented on the issue:

https://github.com/apache/incubator-hawq/pull/1151
  
+1 




[jira] [Updated] (HAWQ-1372) doc ambari hawq config change procedure that does not require cluster restart

2017-03-01 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen updated HAWQ-1372:

Priority: Minor  (was: Major)

> doc ambari hawq config change procedure that does not require cluster restart
> -
>
> Key: HAWQ-1372
> URL: https://issues.apache.org/jira/browse/HAWQ-1372
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> document the workaround for updating hawq configuration via ambari (for  
> ambari-managed clusters) in cases where a complete cluster restart cannot be 
> tolerated:
> update config via ambari, do not restart
> update config via "hawq config -c xxx -v xxx"
> hawq stop cluster --reload





[jira] [Updated] (HAWQ-1372) doc ambari hawq config change procedure that does not require cluster restart

2017-03-01 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen updated HAWQ-1372:

Fix Version/s: (was: 2.2.0.0-incubating)

> doc ambari hawq config change procedure that does not require cluster restart
> -
>
> Key: HAWQ-1372
> URL: https://issues.apache.org/jira/browse/HAWQ-1372
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> document the workaround for updating hawq configuration via ambari (for  
> ambari-managed clusters) in cases where a complete cluster restart cannot be 
> tolerated:
> update config via ambari, do not restart
> update config via "hawq config -c xxx -v xxx"
> hawq stop cluster --reload





[jira] [Updated] (HAWQ-1372) doc ambari hawq config change procedure that does not require cluster restart

2017-03-01 Thread Lisa Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisa Owen updated HAWQ-1372:

Fix Version/s: 2.2.0.0-incubating

> doc ambari hawq config change procedure that does not require cluster restart
> -
>
> Key: HAWQ-1372
> URL: https://issues.apache.org/jira/browse/HAWQ-1372
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>Priority: Minor
>
> document the workaround for updating hawq configuration via ambari (for  
> ambari-managed clusters) in cases where a complete cluster restart cannot be 
> tolerated:
> update config via ambari, do not restart
> update config via "hawq config -c xxx -v xxx"
> hawq stop cluster --reload





[jira] [Created] (HAWQ-1372) doc ambari hawq config change procedure that does not require cluster restart

2017-03-01 Thread Lisa Owen (JIRA)
Lisa Owen created HAWQ-1372:
---

 Summary: doc ambari hawq config change procedure that does not 
require cluster restart
 Key: HAWQ-1372
 URL: https://issues.apache.org/jira/browse/HAWQ-1372
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Documentation
Reporter: Lisa Owen
Assignee: David Yozie


document the workaround for updating hawq configuration via ambari (for  
ambari-managed clusters) in cases where a complete cluster restart cannot be 
tolerated:

update config via ambari, do not restart
update config via "hawq config -c xxx -v xxx"
hawq stop cluster --reload






[GitHub] incubator-hawq pull request #1151: HAWQ-1359. Add policy test for HAWQ with ...

2017-03-01 Thread linwen
Github user linwen commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/1151#discussion_r103832796
  
--- Diff: src/test/feature/Ranger/rangeruser.py ---
@@ -0,0 +1,117 @@
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+"""
+
+import sys
+import urllib2, base64
+import json
+
+from optparse import OptionParser
+from rangerrest import RangerRestHelper
+
+
+def foo_callback(option, opt, value, parser):
+  setattr(parser.values, option.dest, value.split(','))
+
+def option_parser():
+'''option parser'''
+parser = OptionParser()
+parser.remove_option('-h')
+parser.add_option('-?', '--help', action='help')
+parser.add_option('-h', '--host', dest="host", help='host of the 
target DB', \
+  default='localhost')
+parser.add_option('-p', '--port', dest="port", \
--- End diff --

should be the port of Ranger Policy Manager




[jira] [Resolved] (HAWQ-1370) Misuse of regular expressions in init_file of feature test.

2017-03-01 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang resolved HAWQ-1370.

   Resolution: Fixed
Fix Version/s: 2.2.0.0-incubating

fixed

> Misuse of regular expressions in init_file of feature test.
> ---
>
> Key: HAWQ-1370
> URL: https://issues.apache.org/jira/browse/HAWQ-1370
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
> Fix For: 2.2.0.0-incubating
>
>
> In global_init_file of the feature test, we want to skip expressions which 
> include a file name and line number, e.g. (aclchk.c:123) or (aclchk.cpp:134).
> But currently the regular expression is {code}(.*c[p]+:\d+){code}, which 
> requires at least one 'p' and so misses the plain .c case; it needs to be 
> replaced by {code}(.*c[p]*:\d+){code}





[jira] [Assigned] (HAWQ-1370) Misuse of regular expressions in init_file of feature test.

2017-03-01 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang reassigned HAWQ-1370:
--

Assignee: Hubert Zhang  (was: Ed Espino)

> Misuse of regular expressions in init_file of feature test.
> ---
>
> Key: HAWQ-1370
> URL: https://issues.apache.org/jira/browse/HAWQ-1370
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> In global_init_file of the feature test, we want to skip expressions which 
> include a file name and line number, e.g. (aclchk.c:123) or (aclchk.cpp:134).
> But currently the regular expression is {code}(.*c[p]+:\d+){code}, which 
> requires at least one 'p' and so misses the plain .c case; it needs to be 
> replaced by {code}(.*c[p]*:\d+){code}





[GitHub] incubator-hawq pull request #1156: HAWQ-1370. Misuse of regular expressions ...

2017-03-01 Thread zhangh43
Github user zhangh43 closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1156




[jira] [Commented] (HAWQ-1352) doc updates for HAWQ-1348

2017-03-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891260#comment-15891260
 ] 

ASF GitHub Bot commented on HAWQ-1352:
--

Github user janebeckman closed the pull request at:

https://github.com/apache/incubator-hawq-docs/pull/95


> doc updates for HAWQ-1348
> -
>
> Key: HAWQ-1352
> URL: https://issues.apache.org/jira/browse/HAWQ-1352
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Lisa Owen
>Assignee: David Yozie
>
> timeout for hawq start/stop/restart is 600secs not 60secs.  update the 
> relevant reference pages and determine if other parts of the doc identify the 
> timeout as 60 seconds.





[jira] [Commented] (HAWQ-760) Hawq register

2017-03-01 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890537#comment-15890537
 ] 

Kyle R Dunn commented on HAWQ-760:
--

[~lilima] - How does hawq register change or handle trying to register files 
from different versions where the catalog could have changed? i.e. what would 
happen if I try to register tables from hawq 1.x into hawq 2.x?

> Hawq register
> -
>
> Key: HAWQ-760
> URL: https://issues.apache.org/jira/browse/HAWQ-760
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Reporter: Yangcheng Luo
>Assignee: Lili Ma
> Fix For: backlog
>
>
> Scenario: 
> 1. Register a parquet file generated by other systems, such as Hive, Spark, 
> etc.
> 2. For cluster Disaster Recovery. Two clusters co-exist, periodically import 
> data from Cluster A to Cluster B. Need Register data to Cluster B.
> 3. For the rollback of table. Do checkpoints somewhere, and need to rollback 
> to previous checkpoint. 
> Usage1
> Description
> Register a file/folder to an existing table. Can register a file or a folder. 
> If we register a file, can specify eof of this file. If eof not specified, 
> directly use actual file size. If we register a folder, directly use actual 
> file size.
> hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-f 
> filepath] [-e eof]
> Usage 2
> Description
> Register according to .yml configuration file. 
> hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-c 
> config] [--force][--repair]  
> Behavior:
> 1. If the table doesn't exist, automatically create the table and register 
> the files listed in the .yml configuration file. The file size specified in 
> the .yml is used to update the catalog table. 
> 2. If the table already exists and neither --force nor --repair is configured, 
> do not create any table; directly register the files specified in the .yml 
> file to the table. Note that if a file is already under the table directory 
> in HDFS, an error is thrown saying that to-be-registered files should not be 
> under the table path.
> 3. If the table already exists and --force is specified, clear all the 
> catalog contents in pg_aoseg.pg_paqseg_$relid while keeping the files on HDFS, 
> and then re-register all the files to the table.  This is for scenario 2.
> 4. If the table already exists and --repair is specified, change both the file 
> folder and the catalog table pg_aoseg.pg_paqseg_$relid to the state the .yml 
> file configures. Note that files newly generated since the checkpoint may be 
> deleted here. Also note that all the files in the .yml file should be under 
> the table folder on HDFS. Limitation: this does not support hash table 
> redistribution, table truncate, or table drop. This is for scenario 3.
> Requirements for both cases:
> 1. The to-be-registered file path has to be colocated with HAWQ in the same 
> HDFS cluster.
> 2. If the table to be registered is a hash table, the number of registered 
> files should be a multiple of the hash table bucket number.





[jira] [Comment Edited] (HAWQ-8) Installing the HAWQ Software thru the Apache Ambari

2017-03-01 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890288#comment-15890288
 ] 

Kyle R Dunn edited comment on HAWQ-8 at 3/1/17 2:38 PM:


I think we should aim to have the build system be OS agnostic. 

I was already able to successfully compile for SLES 11.4. The plan is to 
capture it all in a Dockerfile, then try to replicate for other SLES versions, 
and ultimately Ubuntu.

We also need to think about how to handle runtime library dependencies - one 
solution is to bundle them in {{/usr/local/hawq/lib}} with everything else - 
but that would imply we have a particular prefix at compile-time for things 
like Boost, YAML, Thrift, etc.


was (Author: kdunn926):
I think we should aim to have the build system be OS agnostic. 

I was already able to successfully compile for SLES 11.4. The plan is to 
capture it all in a Dockerfile, then try to replicate for other SLES versions, 
and ultimately Ubuntu.

We also need to think about how to handle run library dependencies - one 
solution is to bundle them in `/usr/local/hawq/lib` with everything else - but 
that would imply we have a particular prefix at compile-time for things like 
Boost, YAML, Thrift, etc.

> Installing the HAWQ Software thru the Apache Ambari 
> 
>
> Key: HAWQ-8
> URL: https://issues.apache.org/jira/browse/HAWQ-8
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Ambari
> Environment: CentOS
>Reporter: Vijayakumar Ramdoss
>Assignee: Alexander Denissov
> Fix For: backlog
>
> Attachments: 1Le8tdm[1]
>
>
> In order to integrate with the Hadoop system, We would have to install the 
> HAWQ software thru Ambari.





[jira] [Commented] (HAWQ-8) Installing the HAWQ Software thru the Apache Ambari

2017-03-01 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890288#comment-15890288
 ] 

Kyle R Dunn commented on HAWQ-8:


I think we should aim to have the build system be OS agnostic. 

I was already able to successfully compile for SLES 11.4. The plan is to 
capture it all in a Dockerfile, then try to replicate for other SLES versions, 
and ultimately Ubuntu.

We also need to think about how to handle runtime library dependencies - one 
solution is to bundle them in `/usr/local/hawq/lib` with everything else - but 
that would imply we have a particular prefix at compile-time for things like 
Boost, YAML, Thrift, etc.

> Installing the HAWQ Software thru the Apache Ambari 
> 
>
> Key: HAWQ-8
> URL: https://issues.apache.org/jira/browse/HAWQ-8
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Ambari
> Environment: CentOS
>Reporter: Vijayakumar Ramdoss
>Assignee: Alexander Denissov
> Fix For: backlog
>
> Attachments: 1Le8tdm[1]
>
>
> In order to integrate with the Hadoop system, We would have to install the 
> HAWQ software thru Ambari.





[jira] [Commented] (HAWQ-326) Support RPM build for HAWQ

2017-03-01 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890278#comment-15890278
 ] 

Kyle R Dunn commented on HAWQ-326:
--

PXF can be built by performing the following:

{code}
cd incubator-hawq-source-dir/pxf
make tomcat
make rpm
{code}

The resulting PXF RPMs will be in 
{{incubator-hawq-source-dir/pxf/build/distributions}} and for tomcat: 
{{incubator-hawq-source-dir/pxf/distributions}}

> Support RPM build for HAWQ
> --
>
> Key: HAWQ-326
> URL: https://issues.apache.org/jira/browse/HAWQ-326
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Lei Chang
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>






[jira] [Comment Edited] (HAWQ-326) Support RPM build for HAWQ

2017-03-01 Thread Kyle R Dunn (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889199#comment-15889199
 ] 

Kyle R Dunn edited comment on HAWQ-326 at 3/1/17 2:31 PM:
--

I've done some initial work on this.

After compiling HAWQ from source and running {{make install}}, with the 
{{rpmbuild}} utility installed, perform the following steps:
{code}
$ mkdir -p ~/RPMBUILD/{hawq,SPECS}
$ cd /usr/local
$ tar cjf ~/RPMBUILD/hawq/hawq-2.1.0.0-rc4.tar.bz2 hawq

$ cd ~/RPMBUILD
$ rpmbuild -bb SPECS/hawq-2.1.0.0-rc4.spec
{code}

where the above RPM SPEC file contains the following:
{code}
# Don't try fancy stuff like debuginfo, which is useless on binary-only
# packages. Don't strip binary too
# Be sure buildpolicy set to do nothing
%define __spec_install_post %{nil}
%define debug_package %{nil}
%define __os_install_post %{_dbpath}/brp-compress
%define _unpackaged_files_terminate_build 0

Summary: Apache HAWQ
Name: hawq
Version: 2.1.0.0
Release: rc4
License: Apache 2.0
Group: Development/Tools
SOURCE0 : %{name}-%{version}-%{release}.tar.bz2
URL: https://hawq.incubator.apache.org

%define installdir hawq

BuildRoot: %{_tmppath}/%{name}

%description
%{summary}

%prep
%setup -n %{installdir}

rm -rf /usr/local/%{installdir}
mkdir /usr/local/%{installdir}

# in buildroot
cp -ra * /usr/local/%{installdir}/

useradd -m gpadmin

chown -R gpadmin:gpadmin /usr/local/%{installdir}

%clean
rm -rf %{buildroot}


%files
%defattr(-,root,root,-)
/greenplum_path.sh
/bin
/sbin
/docs
/etc
/include
/lib
/share
{code}

Note: we need to add steps to create the {{gpadmin}} user and to ensure the 
installation directory has the correct owner and mode.


was (Author: kdunn926):
I've done some initial work on this.

After compiling HAWQ from source and running {{make install}}, with the 
{{rpmbuild}} utility installed, perform the following steps:
{code}
$ mkdir -p ~/RPMBUILD/{hawq,SPECS}
$ cd /usr/local
$ tar cjf ~/RPMBUILD/hawq/hawq-2.1.0.0-rc4.tar.bz2 hawq

$ cd ~/RPMBUILD
$ rpmbuild -bb SPECS/hawq-2.1.0.0-rc4.spec
{code}

where the above RPM SPEC file contains the following:
{code}
# Don't try fancy stuff like debuginfo, which is useless on binary-only
# packages. Don't strip binary too
# Be sure buildpolicy set to do nothing
%define __spec_install_post %{nil}
%define debug_package %{nil}
%define __os_install_post %{_dbpath}/brp-compress
%define _unpackaged_files_terminate_build 0

Summary: Apache HAWQ
Name: hawq
Version: 2.1.0.0
Release: rc4
License: Apache 2.0
Group: Development/Tools
SOURCE0 : %{name}-%{version}-%{release}.tar.bz2
URL: https://hawq.incubator.apache.org

%define installdir hawq

BuildRoot: %{_tmppath}/%{name}

%description
%{summary}

%prep
%setup -n %{installdir}

#%build
# Empty section.

%install
rm -rf /usr/local/%{installdir}
mkdir /usr/local/%{installdir}

# in buildroot
cp -ra * /usr/local/%{installdir}/


%clean
rm -rf %{buildroot}


%files
%defattr(-,root,root,-)
/greenplum_path.sh
/bin
/sbin
/docs
/etc
/include
/lib
/share
{code}

Note, we need to add steps to create the {{gpadmin}} user and ensure 
installation directory permissions are the correct owner and mode.

> Support RPM build for HAWQ
> --
>
> Key: HAWQ-326
> URL: https://issues.apache.org/jira/browse/HAWQ-326
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Lei Chang
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>






[jira] [Comment Edited] (HAWQ-326) Support RPM build for HAWQ

2017-03-01 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890207#comment-15890207
 ] 

Ruilong Huo edited comment on HAWQ-326 at 3/1/17 1:59 PM:
--

[~kdunn926], thanks for the great effort!

[~shivram], please let us know if the pxf build process 
([https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install|https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install])
 is included as part of hawq build or we need extra step to do that. Thanks.


was (Author: huor):
[~kdunn926], thanks for the great effort! [~shivram], please let us if the pxf 
build process 
([https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install|https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install])
 is included as part of hawq build or we need extra step to do that.

> Support RPM build for HAWQ
> --
>
> Key: HAWQ-326
> URL: https://issues.apache.org/jira/browse/HAWQ-326
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Lei Chang
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>






[jira] [Commented] (HAWQ-326) Support RPM build for HAWQ

2017-03-01 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890207#comment-15890207
 ] 

Ruilong Huo commented on HAWQ-326:
--

[~kdunn926], thanks for the great effort! [~shivram], please let us if the pxf 
build process 
([https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install|https://cwiki.apache.org/confluence/display/HAWQ/PXF+Build+and+Install])
 is included as part of hawq build or we need extra step to do that.

> Support RPM build for HAWQ
> --
>
> Key: HAWQ-326
> URL: https://issues.apache.org/jira/browse/HAWQ-326
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Build
>Reporter: Lei Chang
>Assignee: Paul Guo
> Fix For: 2.2.0.0-incubating
>
>






[jira] [Commented] (HAWQ-8) Installing the HAWQ Software thru the Apache Ambari

2017-03-01 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890168#comment-15890168
 ] 

Ruilong Huo commented on HAWQ-8:


[~kdunn926], so far hawq only supports RHEL, and there is a plan to add SLES 
support.

> Installing the HAWQ Software thru the Apache Ambari 
> 
>
> Key: HAWQ-8
> URL: https://issues.apache.org/jira/browse/HAWQ-8
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Ambari
> Environment: CentOS
>Reporter: Vijayakumar Ramdoss
>Assignee: Alexander Denissov
> Fix For: backlog
>
> Attachments: 1Le8tdm[1]
>
>
> In order to integrate with the Hadoop system, We would have to install the 
> HAWQ software thru Ambari.





[jira] [Commented] (HAWQ-8) Installing the HAWQ Software thru the Apache Ambari

2017-03-01 Thread Ruilong Huo (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890165#comment-15890165
 ] 

Ruilong Huo commented on HAWQ-8:


[~adenissov], does that mean that once hawq and pxf RPMs are available, Ambari 
can be leveraged by open source users to install and monitor hawq?

> Installing the HAWQ Software thru the Apache Ambari 
> 
>
> Key: HAWQ-8
> URL: https://issues.apache.org/jira/browse/HAWQ-8
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Ambari
> Environment: CentOS
>Reporter: Vijayakumar Ramdoss
>Assignee: Alexander Denissov
> Fix For: backlog
>
> Attachments: 1Le8tdm[1]
>
>
> In order to integrate with the Hadoop system, We would have to install the 
> HAWQ software thru Ambari.





[jira] [Updated] (HAWQ-1361) Remove some installcheck-good cases since they are in the feature test suite now.

2017-03-01 Thread Paul Guo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Guo updated HAWQ-1361:
---
Summary: Remove some installcheck-good cases since they are in the feature 
test suite now.  (was: Remove ErrorTable in installcheck-good since it is in 
feature test suite now.)

> Remove some installcheck-good cases since they are in the feature test suite 
> now.
> -
>
> Key: HAWQ-1361
> URL: https://issues.apache.org/jira/browse/HAWQ-1361
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Paul Guo
>






[GitHub] incubator-hawq pull request #1157: HAWQ-1371. Fix QE process hang in shared ...

2017-03-01 Thread amyrazz44
GitHub user amyrazz44 opened a pull request:

https://github.com/apache/incubator-hawq/pull/1157

HAWQ-1371. Fix QE process hang in shared input scan node



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/amyrazz44/incubator-hawq ShareinputScan

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/1157.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1157


commit aa11788b0899bcc7a94dcf4380751e40e546a92e
Author: amyrazz44 
Date:   2017-03-01T08:10:59Z

HAWQ-1371. Fix QE process hang in shared input scan node






[jira] [Assigned] (HAWQ-1371) QE process hang in shared input scan

2017-03-01 Thread Amy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amy reassigned HAWQ-1371:
-

Assignee: Amy  (was: Lei Chang)

> QE process hang in shared input scan
> 
>
> Key: HAWQ-1371
> URL: https://issues.apache.org/jira/browse/HAWQ-1371
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Reporter: Amy
>Assignee: Amy
> Fix For: backlog
>
>
> A QE process hangs on some segment node while the QD and the QEs on other 
> segment nodes have terminated.
> {code}
> on segment test2:
> [gpadmin@test2 ~]$ pp
> gpadmin   21614  0.0  1.2 788636 407428 ?   Ss   Feb26   1:19 
> /usr/local/hawq_2_1_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-YARN/product/segmentdd -p 
> 31100 --silent-mode=true -M segment -i
> gpadmin   21615  0.0  0.0 279896  6952 ?Ss   Feb26   0:08 postgres: 
> port 31100, logger process
> gpadmin   21618  0.0  0.0 282128  6980 ?Ss   Feb26   0:00 postgres: 
> port 31100, stats collector process
> gpadmin   21619  0.0  0.0 788636  7280 ?Ss   Feb26   0:11 postgres: 
> port 31100, writer process
> gpadmin   21620  0.0  0.0 788636  7064 ?Ss   Feb26   0:01 postgres: 
> port 31100, checkpoint process
> gpadmin   21621  0.0  0.0 793048 11752 ?SFeb26   0:19 postgres: 
> port 31100, segment resource manager
> gpadmin   91760  0.0  0.0 861000 16840 ?TNsl Feb26   0:07 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15250) con558 seg4 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin   91762  0.0  0.0 861064 17116 ?SNsl Feb26   0:08 postgres: 
> port 31100, gpadmin parquetola... 10.32.35.141(15253) con558 seg5 cmd2 
> slice11 MPPEXEC SELECT
> gpadmin  216648  0.0  0.0 103244   788 pts/0S+   19:54   0:00 grep 
> postgres
> {code}
> QE stack trace is:
> {code}
> (gdb) bt
> #0  0x0032214e1523 in select () from /lib64/libc.so.6
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> #2  0x00695798 in ExecEndMaterial (node=0x1d2eb50) at 
> nodeMaterial.c:512
> #3  0x0067048d in ExecEndNode (node=0x1d2eb50) at execProcnode.c:1681
> #4  0x0069c6b5 in ExecEndShareInputScan (node=0x1d2e6f0) at 
> nodeShareInputScan.c:382
> #5  0x0067042a in ExecEndNode (node=0x1d2e6f0) at execProcnode.c:1674
> #6  0x006ac9be in ExecEndSequence (node=0x1d23890) at 
> nodeSequence.c:165
> #7  0x006705f0 in ExecEndNode (node=0x1d23890) at execProcnode.c:1583
> #8  0x0069a0ab in ExecEndResult (node=0x1d214a0) at nodeResult.c:481
> #9  0x0067060d in ExecEndNode (node=0x1d214a0) at execProcnode.c:1575
> #10 0x0069a0ab in ExecEndResult (node=0x1d20860) at nodeResult.c:481
> #11 0x0067060d in ExecEndNode (node=0x1d20860) at execProcnode.c:1575
> #12 0x00698fd2 in ExecEndMotion (node=0x1d20320) at nodeMotion.c:1230
> #13 0x00670434 in ExecEndNode (node=0x1d20320) at execProcnode.c:1713
> #14 0x00669da7 in ExecEndPlan (planstate=0x1d20320, estate=0x1cb6b40) 
> at execMain.c:2896
> #15 0x0066a311 in ExecutorEnd (queryDesc=0x1cabf20) at execMain.c:1407
> #16 0x006195f2 in PortalCleanupHelper (portal=0x1cbcc40) at 
> portalcmds.c:365
> #17 PortalCleanup (portal=0x1cbcc40) at portalcmds.c:317
> #18 0x00900544 in AtAbort_Portals () at portalmem.c:693
> #19 0x004e697f in AbortTransaction () at xact.c:2800
> #20 0x004e7565 in AbortCurrentTransaction () at xact.c:3377
> #21 0x007ed0fa in PostgresMain (argc=, 
> argv=, username=0x1b47f10 "gpadmin") at postgres.c:4630
> #22 0x007a05d0 in BackendRun () at postmaster.c:5915
> #23 BackendStartup () at postmaster.c:5484
> #24 ServerLoop () at postmaster.c:2163
> #25 0x007a3399 in PostmasterMain (argc=Unhandled dwarf expression 
> opcode 0xf3
> ) at postmaster.c:1454
> #26 0x004a52e9 in main (argc=9, argv=0x1b0cd10) at main.c:226
> (gdb) p CurrentTransactionState->state
> $1 = TRANS_ABORT
> (gdb) p pctxt->donefd
> No symbol "pctxt" in current context.
> (gdb) f 1
> #1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
> share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
> 989   nodeShareInputScan.c: No such file or directory.
>   in nodeShareInputScan.c
> (gdb) p pctxt->donefd
> $2 = 15
> {code}





[jira] [Created] (HAWQ-1371) QE process hang in shared input scan

2017-03-01 Thread Amy (JIRA)
Amy created HAWQ-1371:
-

 Summary: QE process hang in shared input scan
 Key: HAWQ-1371
 URL: https://issues.apache.org/jira/browse/HAWQ-1371
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Query Execution
Reporter: Amy
Assignee: Lei Chang
 Fix For: backlog


A QE process hangs on some segment node while the QD and the QEs on other 
segment nodes have terminated.

{code}
on segment test2:
[gpadmin@test2 ~]$ pp
gpadmin   21614  0.0  1.2 788636 407428 ?   Ss   Feb26   1:19 
/usr/local/hawq_2_1_0_0/bin/postgres -D 
/data/pulse-agent-data/HAWQ-main-FeatureTest-opt-YARN/product/segmentdd -p 
31100 --silent-mode=true -M segment -i
gpadmin   21615  0.0  0.0 279896  6952 ?Ss   Feb26   0:08 postgres: 
port 31100, logger process
gpadmin   21618  0.0  0.0 282128  6980 ?Ss   Feb26   0:00 postgres: 
port 31100, stats collector process
gpadmin   21619  0.0  0.0 788636  7280 ?Ss   Feb26   0:11 postgres: 
port 31100, writer process
gpadmin   21620  0.0  0.0 788636  7064 ?Ss   Feb26   0:01 postgres: 
port 31100, checkpoint process
gpadmin   21621  0.0  0.0 793048 11752 ?SFeb26   0:19 postgres: 
port 31100, segment resource manager
gpadmin   91760  0.0  0.0 861000 16840 ?TNsl Feb26   0:07 postgres: 
port 31100, gpadmin parquetola... 10.32.35.141(15250) con558 seg4 cmd2 slice11 
MPPEXEC SELECT
gpadmin   91762  0.0  0.0 861064 17116 ?SNsl Feb26   0:08 postgres: 
port 31100, gpadmin parquetola... 10.32.35.141(15253) con558 seg5 cmd2 slice11 
MPPEXEC SELECT
gpadmin  216648  0.0  0.0 103244   788 pts/0S+   19:54   0:00 grep postgres
{code}

QE stack trace is:
{code}
(gdb) bt
#0  0x0032214e1523 in select () from /lib64/libc.so.6
#1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
#2  0x00695798 in ExecEndMaterial (node=0x1d2eb50) at nodeMaterial.c:512
#3  0x0067048d in ExecEndNode (node=0x1d2eb50) at execProcnode.c:1681
#4  0x0069c6b5 in ExecEndShareInputScan (node=0x1d2e6f0) at 
nodeShareInputScan.c:382
#5  0x0067042a in ExecEndNode (node=0x1d2e6f0) at execProcnode.c:1674
#6  0x006ac9be in ExecEndSequence (node=0x1d23890) at nodeSequence.c:165
#7  0x006705f0 in ExecEndNode (node=0x1d23890) at execProcnode.c:1583
#8  0x0069a0ab in ExecEndResult (node=0x1d214a0) at nodeResult.c:481
#9  0x0067060d in ExecEndNode (node=0x1d214a0) at execProcnode.c:1575
#10 0x0069a0ab in ExecEndResult (node=0x1d20860) at nodeResult.c:481
#11 0x0067060d in ExecEndNode (node=0x1d20860) at execProcnode.c:1575
#12 0x00698fd2 in ExecEndMotion (node=0x1d20320) at nodeMotion.c:1230
#13 0x00670434 in ExecEndNode (node=0x1d20320) at execProcnode.c:1713
#14 0x00669da7 in ExecEndPlan (planstate=0x1d20320, estate=0x1cb6b40) 
at execMain.c:2896
#15 0x0066a311 in ExecutorEnd (queryDesc=0x1cabf20) at execMain.c:1407
#16 0x006195f2 in PortalCleanupHelper (portal=0x1cbcc40) at 
portalcmds.c:365
#17 PortalCleanup (portal=0x1cbcc40) at portalcmds.c:317
#18 0x00900544 in AtAbort_Portals () at portalmem.c:693
#19 0x004e697f in AbortTransaction () at xact.c:2800
#20 0x004e7565 in AbortCurrentTransaction () at xact.c:3377
#21 0x007ed0fa in PostgresMain (argc=, argv=, username=0x1b47f10 "gpadmin") at postgres.c:4630
#22 0x007a05d0 in BackendRun () at postmaster.c:5915
#23 BackendStartup () at postmaster.c:5484
#24 ServerLoop () at postmaster.c:2163
#25 0x007a3399 in PostmasterMain (argc=Unhandled dwarf expression 
opcode 0xf3
) at postmaster.c:1454
#26 0x004a52e9 in main (argc=9, argv=0x1b0cd10) at main.c:226
(gdb) p CurrentTransactionState->state
$1 = TRANS_ABORT
(gdb) p pctxt->donefd
No symbol "pctxt" in current context.
(gdb) f 1
#1  0x0069c2fa in shareinput_writer_waitdone (ctxt=0x1dae520, 
share_id=0, nsharer_xslice=7) at nodeShareInputScan.c:989
989 nodeShareInputScan.c: No such file or directory.
in nodeShareInputScan.c
(gdb) p pctxt->donefd
$2 = 15
{code}



