[jira] [Closed] (HAWQ-1622) Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be done on each request

2018-07-26 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani closed HAWQ-1622.
--

> Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be 
> done on each request
> ---
>
> Key: HAWQ-1622
> URL: https://issues.apache.org/jira/browse/HAWQ-1622
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Alexander Denissov
>Assignee: Lav Jain
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Closing PXF proxy UGIs on each request (implemented in HAWQ-1621) slows down 
> PXF request response time significantly when several threads work 
> concurrently as it locks FileSystem cache and holds the lock while the 
> cleanup of DFSClients is completed.
> This can be avoided by caching the proxy UGI for a given proxy user between 
> requests. Care must be taken to remove the cached entry after some 
> pre-defined TTL if and only if there are no current threads using any 
> FileSystem entries held by the cache. A combination of TTL-based cache with 
> ref-counting might be utilized to achieve this.
>  
> For some example of this, see: 
> https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/UserGroupInformationService.java
> Caching UGIs might be tricky when Kerberos support is implemented later, see: 
> https://issues.apache.org/jira/browse/HIVE-3098?focusedCommentId=13398979&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13398979
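
A minimal Java sketch of the TTL-plus-ref-counting idea described above. This is
illustrative only; the class, field, and method names (UGICacheEntry, acquire,
release, isExpired) are assumptions, not the actual PXF implementation.

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

// Hypothetical cache entry: the proxy UGI is destroyed only when no request
// references it AND the TTL since its last access has elapsed.
class UGICacheEntry {
    private final UserGroupInformation proxyUgi;
    private int refCount = 0;
    private long lastAccessMs = System.currentTimeMillis();

    UGICacheEntry(UserGroupInformation proxyUgi) {
        this.proxyUgi = proxyUgi;
    }

    synchronized UserGroupInformation acquire() {
        refCount++;
        lastAccessMs = System.currentTimeMillis();
        return proxyUgi;
    }

    synchronized void release() {
        refCount--;
    }

    // Safe to clean up only when unreferenced and idle longer than the TTL.
    synchronized boolean isExpired(long nowMs, long ttlMs) {
        return refCount <= 0 && (nowMs - lastAccessMs) > ttlMs;
    }

    synchronized void destroy() throws java.io.IOException {
        // Closes the FileSystem instances cached for this UGI without having to
        // lock and sweep the global FileSystem cache on every request.
        FileSystem.closeAllForUGI(proxyUgi);
    }
}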



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] incubator-hawq pull request #1379: HAWQ-1622. Cache PXF proxy UGI so that cl...

2018-07-26 Thread lavjain
Github user lavjain closed the pull request at:

https://github.com/apache/incubator-hawq/pull/1379


---


[GitHub] incubator-hawq issue #1379: HAWQ-1622. Cache PXF proxy UGI so that cleanup o...

2018-07-26 Thread benchristel
Github user benchristel commented on the issue:

https://github.com/apache/incubator-hawq/pull/1379
  
Merged to master. Closing PR...


---


[jira] [Resolved] (HAWQ-1622) Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be done on each request

2018-07-26 Thread Shivram Mani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shivram Mani resolved HAWQ-1622.

   Resolution: Fixed
Fix Version/s: 2.4.0.0-incubating

The PR has been merged to master.

Based on performance tests, we see a 17x speedup and a 50x reduction in threads.

Closing JIRA...

> Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be 
> done on each request
> ---
>
> Key: HAWQ-1622
> URL: https://issues.apache.org/jira/browse/HAWQ-1622
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Alexander Denissov
>Assignee: Lav Jain
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Closing PXF proxy UGIs on each request (implemented in HAWQ-1621) slows down 
> PXF request response time significantly when several threads work 
> concurrently as it locks FileSystem cache and holds the lock while the 
> cleanup of DFSClients is completed.
> This can be avoided by caching the proxy UGI for a given proxy user between 
> requests. Care must be taken to remove the cached entry after some 
> pre-defined TTL if and only if there are no current threads using any 
> FileSystem entries held by the cache. A combination of TTL-based cache with 
> ref-counting might be utilized to achieve this.
>  
> For some example of this, see: 
> https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/UserGroupInformationService.java
> Caching UGIs might be tricky when Kerberos support is implemented later, see: 
> https://issues.apache.org/jira/browse/HIVE-3098?focusedCommentId=13398979&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13398979



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HAWQ-1622) Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be done on each request

2018-07-26 Thread Lav Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558775#comment-16558775
 ] 

Lav Jain edited comment on HAWQ-1622 at 7/26/18 7:02 PM:
-

Evaluation notes on caching options:

1. LRU/timed expiration isn't appropriate because someone might be holding a 
reference to the UGI object when it's evicted from the cache. Existing Java 
caching libraries (Guava, JCS, EHCache) don't allow us to implement a policy 
that takes both reference counts and expiry into account.

2. A WeakReference-based cache wouldn't work because we need fine-grained 
control over when UGIs are removed from the cache.
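
To illustrate point 1, a sketch of what a purely expiry-based cache would look like,
using Guava only as an example: the removal listener fires on the time policy alone,
so it can close FileSystem instances that another request thread is still using. The
cache shape and key type here are assumptions for illustration.

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

public class ExpiryOnlyUgiCache {
    // Expiry-only policy: nothing here knows whether a request still holds the UGI.
    private final Cache<String, UserGroupInformation> cache = CacheBuilder.newBuilder()
            .expireAfterAccess(15, TimeUnit.MINUTES)
            .removalListener((RemovalListener<String, UserGroupInformation>) notification -> {
                try {
                    // May run while another thread is still using this UGI's FileSystems.
                    FileSystem.closeAllForUGI(notification.getValue());
                } catch (IOException e) {
                    // log and continue
                }
            })
            .build();
}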


was (Author: lavjain):
We can limit the number of UGIs we create by caching them for the duration of 
the session.

1. LRU/timed expiration isn't appropriate because someone might be holding a 
reference to the UGI object when it's evicted from the cache. Existing Java 
caching libraries (Guava, JCS, EHCache) don't allow us to implement a policy 
that takes both reference counts and expiry into account.

2. A WeakReference-based cache wouldn't work because we need fine-grained 
control over when UGIs are removed from the cache.

> Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be 
> done on each request
> ---
>
> Key: HAWQ-1622
> URL: https://issues.apache.org/jira/browse/HAWQ-1622
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Alexander Denissov
>Assignee: Lav Jain
>Priority: Major
>
> Closing PXF proxy UGIs on each request (implemented in HAWQ-1621) slows down 
> PXF request response time significantly when several threads work 
> concurrently as it locks FileSystem cache and holds the lock while the 
> cleanup of DFSClients is completed.
> This can be avoided by caching the proxy UGI for a given proxy user between 
> requests. Care must be taken to remove the cached entry after some 
> pre-defined TTL if and only if there are no current threads using any 
> FileSystem entries held by the cache. A combination of TTL-based cache with 
> ref-counting might be utilized to achieve this.
>  
> For some example of this, see: 
> https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/UserGroupInformationService.java
> Caching UGIs might be tricky when Kerberos support is implemented later, see: 
> https://issues.apache.org/jira/browse/HIVE-3098?focusedCommentId=13398979&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13398979



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1622) Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be done on each request

2018-07-26 Thread Lav Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558776#comment-16558776
 ] 

Lav Jain commented on HAWQ-1622:


The cache key is a SessionId (= {user, seg_id, xid}):

user, because UGIs identify a user;

seg_id, so we know when the UGI is done being used (the segment passes a 
"last call" header to tell PXF that the UGI's resources can be cleaned up);

xid, so the UGI is only in use for a limited amount of time. We want to be 
sure it can be destroyed before the credentials it stores become invalid.
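
A rough sketch of such a composite cache key; the class and field names (SessionId,
segmentId, transactionId) are assumptions for illustration, not necessarily the
names used in the PXF code.

import java.util.Objects;

// Hypothetical composite key: a proxy UGI is shared only within one
// (user, segment, transaction) combination.
public final class SessionId {
    private final String user;          // UGIs identify a user
    private final int segmentId;        // lets PXF know when the UGI is done being used
    private final String transactionId; // bounds how long the UGI stays in use

    public SessionId(String user, int segmentId, String transactionId) {
        this.user = user;
        this.segmentId = segmentId;
        this.transactionId = transactionId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof SessionId)) return false;
        SessionId other = (SessionId) o;
        return segmentId == other.segmentId
                && Objects.equals(user, other.user)
                && Objects.equals(transactionId, other.transactionId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(user, segmentId, transactionId);
    }
}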

> Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be 
> done on each request
> ---
>
> Key: HAWQ-1622
> URL: https://issues.apache.org/jira/browse/HAWQ-1622
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Alexander Denissov
>Assignee: Lav Jain
>Priority: Major
>
> Closing PXF proxy UGIs on each request (implemented in HAWQ-1621) slows down 
> PXF request response time significantly when several threads work 
> concurrently as it locks FileSystem cache and holds the lock while the 
> cleanup of DFSClients is completed.
> This can be avoided by caching the proxy UGI for a given proxy user between 
> requests. Care must be taken to remove the cached entry after some 
> pre-defined TTL if and only if there are no current threads using any 
> FileSystem entries held by the cache. A combination of TTL-based cache with 
> ref-counting might be utilized to achieve this.
>  
> For some example of this, see: 
> https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/UserGroupInformationService.java
> Caching UGIs might be tricky when Kerberos support is implemented later, see: 
> https://issues.apache.org/jira/browse/HIVE-3098?focusedCommentId=13398979&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13398979



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1622) Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be done on each request

2018-07-26 Thread Lav Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558775#comment-16558775
 ] 

Lav Jain commented on HAWQ-1622:


We can limit the number of UGIs we create by caching them for the duration of 
the session.

1. LRU/timed expiration isn't appropriate because someone might be holding a 
reference to the UGI object when it's evicted from the cache. Existing Java 
caching libraries (Guava, JCS, EHCache) don't allow us to implement a policy 
that takes both reference counts and expiry into account.

2. A WeakReference-based cache wouldn't work because we need fine-grained 
control over when UGIs are removed from the cache.

> Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be 
> done on each request
> ---
>
> Key: HAWQ-1622
> URL: https://issues.apache.org/jira/browse/HAWQ-1622
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Alexander Denissov
>Assignee: Lav Jain
>Priority: Major
>
> Closing PXF proxy UGIs on each request (implemented in HAWQ-1621) slows down 
> PXF request response time significantly when several threads work 
> concurrently as it locks FileSystem cache and holds the lock while the 
> cleanup of DFSClients is completed.
> This can be avoided by caching the proxy UGI for a given proxy user between 
> requests. Care must be taken to remove the cached entry after some 
> pre-defined TTL if and only if there are no current threads using any 
> FileSystem entries held by the cache. A combination of TTL-based cache with 
> ref-counting might be utilized to achieve this.
>  
> For some example of this, see: 
> https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/UserGroupInformationService.java
> Caching UGIs might be tricky when Kerberos support is implemented later, see: 
> https://issues.apache.org/jira/browse/HIVE-3098?focusedCommentId=13398979&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13398979



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1622) Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be done on each request

2018-07-26 Thread Lav Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558778#comment-16558778
 ] 

Lav Jain commented on HAWQ-1622:


Maintain a reference count for each cache entry (UGI).

A UGI stays in the cache while it has an active reference.

UGIs expire 15 minutes after last access.

Expired UGIs are cleaned up when the cache is accessed.

UGI resources are cleaned up immediately after processing the last block of data 
for a segment within an xid.
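
A sketch of how these rules might fit together, reusing the hypothetical SessionId
and UGICacheEntry classes from the earlier sketches. The 15-minute TTL, the
cleanup-on-access behavior, and the "last call" cleanup follow the description
above; everything else (names, signatures) is an assumption.

import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.function.Supplier;
import org.apache.hadoop.security.UserGroupInformation;

public class UGICache {
    private static final long TTL_MS = 15 * 60 * 1000L; // expire 15 minutes after last access

    private final Map<SessionId, UGICacheEntry> cache = new HashMap<>();

    // Called at the start of a request: creates or reuses the entry, bumps its
    // reference count, and refreshes its last-access time.
    public synchronized UserGroupInformation getProxyUGI(SessionId session,
            Supplier<UserGroupInformation> proxyUgiFactory) {
        cleanupExpired(); // expired entries are cleaned up whenever the cache is accessed
        return cache.computeIfAbsent(session, s -> new UGICacheEntry(proxyUgiFactory.get()))
                    .acquire();
    }

    // Called at the end of a request; lastCall is true once the segment has
    // processed its last block of data within the transaction (xid).
    public synchronized void release(SessionId session, boolean lastCall) throws IOException {
        UGICacheEntry entry = cache.get(session);
        if (entry == null) {
            return;
        }
        entry.release();
        if (lastCall) {
            cache.remove(session);
            entry.destroy(); // clean up FileSystem resources immediately
        }
    }

    // Remove and destroy entries that are unreferenced and idle past the TTL.
    private void cleanupExpired() {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<SessionId, UGICacheEntry>> it = cache.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<SessionId, UGICacheEntry> e = it.next();
            if (e.getValue().isExpired(now, TTL_MS)) {
                it.remove();
                try {
                    e.getValue().destroy();
                } catch (IOException ex) {
                    // log and continue with remaining entries
                }
            }
        }
    }
}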

> Cache PXF proxy UGI so that cleanup of FileSystem cache doesn't have to be 
> done on each request
> ---
>
> Key: HAWQ-1622
> URL: https://issues.apache.org/jira/browse/HAWQ-1622
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Alexander Denissov
>Assignee: Lav Jain
>Priority: Major
>
> Closing PXF proxy UGIs on each request (implemented in HAWQ-1621) slows down 
> PXF request response time significantly when several threads work 
> concurrently as it locks FileSystem cache and holds the lock while the 
> cleanup of DFSClients is completed.
> This can be avoided by caching the proxy UGI for a given proxy user between 
> requests. Care must be taken to remove the cached entry after some 
> pre-defined TTL if and only if there are no current threads using any 
> FileSystem entries held by the cache. A combination of TTL-based cache with 
> ref-counting might be utilized to achieve this.
>  
> For some example of this, see: 
> https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/UserGroupInformationService.java
> Caching UGIs might be tricky when Kerberos support is implemented later, see: 
> https://issues.apache.org/jira/browse/HIVE-3098?focusedCommentId=13398979&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13398979



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-127) Create CI projects for HAWQ releases

2018-07-26 Thread Radar Lei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radar Lei updated HAWQ-127:
---
Fix Version/s: (was: 2.4.0.0-incubating)
   backlog

> Create CI projects for HAWQ releases
> 
>
> Key: HAWQ-127
> URL: https://issues.apache.org/jira/browse/HAWQ-127
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Lei Chang
>Assignee: Jiali Yao
>Priority: Major
> Fix For: backlog
>
>
> Create Jenkins projects that build HAWQ binary, source tarballs and docker 
> images, and run sanity tests including at least installcheck-good tests for 
> each commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-786) Framework to support pluggable formats and file systems

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo closed HAWQ-786.


> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ Pluggable Storage Framework.pdf, 
> ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework for native C/C++ pluggable formats is 
> needed to support ORC more easily, and it can also potentially be used for 
> fast external data access.
> There are also many requests for supporting S3, Ceph, and other file 
> systems; this is closely related to pluggable formats, so this JIRA 
> proposes a framework to support both.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-786) Framework to support pluggable formats and file systems

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo resolved HAWQ-786.
--
Resolution: Fixed

Closing this feature, given that the pluggable storage framework is available in the 
HAWQ code base. The subsequent features, including the HDFS protocol, ORC, TEXT/CSV, 
and the Hive protocol, will be tracked in separate issues.

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ Pluggable Storage Framework.pdf, 
> ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and Parquet. Now we 
> want to support ORC. A framework for native C/C++ pluggable formats is 
> needed to support ORC more easily, and it can also potentially be used for 
> fast external data access.
> There are also many requests for supporting S3, Ceph, and other file 
> systems; this is closely related to pluggable formats, so this JIRA 
> proposes a framework to support both.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1630) Support TEXT/CSV format using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1630:
-

Assignee: oushu1longziyang1

> Support TEXT/CSV format using pluggable storage framework
> -
>
> Key: HAWQ-1630
> URL: https://issues.apache.org/jira/browse/HAWQ-1630
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: oushu1longziyang1
>Priority: Major
> Fix For: backlog
>
>
> Add TEXT/CSV format support using the pluggable storage framework so that users 
> can store table data in TEXT/CSV format. The TEXT/CSV format in the pluggable 
> storage framework will have better performance and extensibility compared with 
> that in the external table framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1629) Support ORC format using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558018#comment-16558018
 ] 

Ruilong Huo commented on HAWQ-1629:
---

Assigning to Long Ziyang to add the feature.

> Support ORC format using pluggable storage framework
> 
>
> Key: HAWQ-1629
> URL: https://issues.apache.org/jira/browse/HAWQ-1629
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: oushu1longziyang1
>Priority: Major
> Fix For: backlog
>
>
> Add ORC format support using the pluggable storage framework so that users can 
> store table data in ORC format, which is a commonly adopted format and offers 
> potential performance gains from its statistics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1629) Support ORC format using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1629:
-

Assignee: oushu1longziyang1

> Support ORC format using pluggable storage framework
> 
>
> Key: HAWQ-1629
> URL: https://issues.apache.org/jira/browse/HAWQ-1629
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: oushu1longziyang1
>Priority: Major
> Fix For: backlog
>
>
> Add ORC format support using the pluggable storage framework so that users can 
> store table data in ORC format, which is a commonly adopted format and offers 
> potential performance gains from its statistics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1628) Support HDFS protocol using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


 [ 
https://issues.apache.org/jira/browse/HAWQ-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruilong Huo reassigned HAWQ-1628:
-

Assignee: Oushu_WangZiming

> Support HDFS protocol using pluggable storage framework
> ---
>
> Key: HAWQ-1628
> URL: https://issues.apache.org/jira/browse/HAWQ-1628
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Oushu_WangZiming
>Priority: Major
> Fix For: backlog
>
>
> The purpose of supporting the HDFS protocol using the pluggable storage framework 
> is twofold:
> 1. demonstrate an example of how to add a protocol to the pluggable storage 
> framework
> 2. allow data formats including ORC, TEXT, and CSV to be added to HAWQ using the 
> pluggable storage framework



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1628) Support HDFS protocol using pluggable storage framework

2018-07-26 Thread Ruilong Huo (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558011#comment-16558011
 ] 

Ruilong Huo commented on HAWQ-1628:
---

Assigning to Wang Ziming to add the feature.

> Support HDFS protocol using pluggable storage framework
> ---
>
> Key: HAWQ-1628
> URL: https://issues.apache.org/jira/browse/HAWQ-1628
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Storage
>Affects Versions: 2.3.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Oushu_WangZiming
>Priority: Major
> Fix For: backlog
>
>
> The purpose of supporting the HDFS protocol using the pluggable storage framework 
> is twofold:
> 1. demonstrate an example of how to add a protocol to the pluggable storage 
> framework
> 2. allow data formats including ORC, TEXT, and CSV to be added to HAWQ using the 
> pluggable storage framework



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1494) The bug can appear every time when I execute a specific sql: Unexpect internal error (setref.c:298), server closed the connection unexpectedly

2018-07-26 Thread Yi Jin (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16557998#comment-16557998
 ] 

Yi Jin commented on HAWQ-1494:
--

This looks like an optimizer- or plan-processing-related bug. Can anyone 
familiar with these components have a look at it?

> The bug can appear every time when I execute a specific sql:  Unexpect 
> internal error (setref.c:298), server closed the connection unexpectedly
> ---
>
> Key: HAWQ-1494
> URL: https://issues.apache.org/jira/browse/HAWQ-1494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: fangpei
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> When I execute a specific sql, a serious bug can happen every time. (Hawq 
> version is 2.2.0.0)
> BUG information:
> FATAL: Unexpect internal error (setref.c:298)
> DETAIL: AssertImply failed("!(!var->varattno >= 0) || (var->varattno <= 
> list_length(colNames) + list_length(rte->pseudocols)))", File: "setrefs.c", 
> Line: 298)
> HINT:  Process 239600 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released  until then.
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
>  before or while processing the request.
> The connection to the server was lost. Attemping reset: Succeeded.
> I use GDB to debug, the GDB information is the same every time. The 
> information is: 
> Loaded symbols for /lib64/libnss_files.so.2
> 0x0032dd40eb5c in recv () from /lib64/libpthread.so.0
> (gdb) b setrefs.c:298
> Breakpoint 1 at 0x846063: file setrefs.c, line 298.
> (gdb) c 
> Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe8e930adb8) at setrefs.c:298
> 298 setrefs.c: No such file or directory.
> (gdb) c 1923
> Will ignore next 1922 crossings of breakpoint 1. Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe869c70340) at setrefs.c:298
> 298 in setrefs.c
> (gdb) p list_length(allVars) 
> $1 = 1422
> (gdb) p var->varno 
> $2 = 65001
> (gdb) p list_length(glob->finalrtable) 
> $3 = 66515
> (gdb) p var->varattno 
> $4 = 31
> (gdb) p list_length(colNames) 
> $5 = 30
> (gdb) p list_length(rte->pseudocols) 
> $6 = 0
> the SQL sentence is just like :
> SELECT *
> FROM (select t.*,1001 as ttt from AAA t where  ( aaa = '3201066235'  
> or aaa = '3201066236'  or aaa = '3201026292'  or aaa = 
> '3201066293'  or aaa = '3201060006393' ) and (  bbb between 
> '20170601065900' and '20170601175100'  and (ccc = '2017-06-01' ))  union all  
> select t.*,1002 as ttt from AAA t where  ( aaa = '3201066007'  or aaa 
> = '3201066006' ) and (  bbb between '20170601072900' and 
> '20170601210100'  and ( ccc = '2017-06-01' ))  union all  select t.*,1003 as 
> ttt from AAA t where  ( aaa = '3201062772' ) and (  bbb between 
> '20170601072900' and '20170601170100'  and ( ccc = '2017-06-01' ))  union all 
>  select t.*,1004 as ttt from AAA t where  (aaa = '3201066115'  or aaa 
> = '3201066116'  or aaa = '3201066318'  or aaa = 
> '3201066319' ) and (  bbb between '20170601085900' and 
> '20170601163100' and ( ccc = '2017-06-01' ))  union all  select t.*,1005 as 
> ttt from AAA t where  ( aaa = '3201066180' or aaa = 
> '3201046385' ) and (  bbb between '20170601205900' and 
> '20170601230100'  and ( ccc = '2017-06-01' )) union all  select t.*,1006 as 
> ttt from AAA t where  ( aaa = '3201026423'  or aaa = 
> '3201026255'  or aaa = '3201066258'  or aaa = 
> '3201066259' ) and (  bbb between '20170601215900' and 
> '20170602004900'  and ( ccc = '2017-06-01'  or ccc = '2017-06-02' ))  union 
> all select t.*,1007 as ttt from AAA t where  ( aaa = '3201066175' or 
> aaa = '3201066004' ) and (  bbb between '20170602074900' and 
> '20170602182100'  and ( ccc = '2017-06-02'  )) union all select t.*,1008 as 
> ttt from AAA t where  ( aaa = '3201026648' ) and (  bbb between 
> '20170602132900' and '20170602134600'  and ( ccc = '2017-06-02' ))  union all 
>  select t.*,1009 as ttt from AAA t where  ( aaa = '3201062765'  or 
> aaa = '3201006282' ) and (  bbb between '20170602142900' and 
> '20170603175100'  and ( ccc = '2017-06-02'  or ccc = '2017-06-03' ))  union 
> all  select t.*,1010 as ttt from AAA t where  (aaa = '3201066060' ) 
> and (  bbb between '20170602165900' and '20170603034100'  and ( ccc = 
> '2017-06-02'  or ccc = '2017-06-03' ))  union all select t.*,1011 as ttt from 
> AAA 

[jira] [Commented] (HAWQ-1643) How to solve this problem in HAWQ install on centos 7 with version 2.1.0 ?

2018-07-26 Thread ercengsha (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16557990#comment-16557990
 ] 

ercengsha commented on HAWQ-1643:
-

OK, Thanks for your answer.

> How to solve this problem in HAWQ install on centos 7 with version 2.1.0 ?
> --
>
> Key: HAWQ-1643
> URL: https://issues.apache.org/jira/browse/HAWQ-1643
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: ercengsha
>Assignee: Radar Lei
>Priority: Major
>
> Hello managers, here is the problem description below:
> This problem arises during the make process:
>  
> gcc -O3 -std=gnu99  -Wall -Wmissing-prototypes -Wpointer-arith  
> -Wendif-labels -Wformat-security -fno-strict-aliasing -fwrapv 
> -fno-aggressive-loop-optimizations  -I/usr/include/libxml2 -fpic -I. 
> -I../../src/include -D_GNU_SOURCE  -I***- 
> incubating/depends/libhdfs3/build/install/usr/local/hawq/include -I ***-  -c 
> -o sqlparse.o sqlparse.c
>  sqlparse.y: In function ‘orafce_sql_yyparse’:
>  sqlparse.y:88:17: error: ‘result’ undeclared (first use in this function)
>    elements { ((void*)result) = $1; }
>                         ^
>  sqlparse.y:88:17: note: each undeclared identifier is reported only once 
> for each function it appears in
>  make[2]: *** [sqlparse.o] Error 1
>  make[2]: *** Waiting for unfinished jobs
>  make[2]: Leaving directory `***`
>  make[1]: *** [all] Error 2
>  make[1]: Leaving directory `***`
>  make: *** [all] Error 2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)