[jira] [Commented] (HAWQ-1494) The bug can appear every time when I execute a specific sql: Unexpect internal error (setref.c:298), server closed the connection unexpectedly

2018-07-26 Thread Yi Jin (JIRA)


[ 
https://issues.apache.org/jira/browse/HAWQ-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16557998#comment-16557998
 ] 

Yi Jin commented on HAWQ-1494:
--

This looks like an optimizer or plan-related processing bug. Can anyone 
familiar with these components have a look at it?
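
For reference, the check that fails at setrefs.c:298 is an implication assert. Below is a paraphrased sketch, not the literal HAWQ source, assuming the Greenplum-style AssertImply(a, b) macro that asserts !(a) || (b); the identifiers and values come from the DETAIL message and GDB session quoted below:

{code}
/* Sketch of the failing check. With the values from the GDB session:
 *   var->varattno                = 31
 *   list_length(colNames)        = 30
 *   list_length(rte->pseudocols) = 0
 * the implication 31 <= 30 + 0 is false, so the assert fires: the Var
 * references one attribute past the end of its range table entry.
 */
AssertImply(var->varattno >= 0,
            var->varattno <= list_length(colNames) +
                             list_length(rte->pseudocols));
{code}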

> The bug can appear every time when I execute a specific sql:  Unexpect 
> internal error (setref.c:298), server closed the connection unexpectedly
> ---
>
> Key: HAWQ-1494
> URL: https://issues.apache.org/jira/browse/HAWQ-1494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: fangpei
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> When I execute a specific SQL statement, a serious bug happens every time. (HAWQ 
> version is 2.2.0.0)
> BUG information:
> FATAL: Unexpect internal error (setref.c:298)
> DETAIL: AssertImply failed("!(!var->varattno >= 0) || (var->varattno <= 
> list_length(colNames) + list_length(rte->pseudocols)))", File: "setrefs.c", 
> Line: 298)
> HINT:  Process 239600 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released  until then.
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
>  before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> I used GDB to debug; the GDB information is the same every time. The 
> information is: 
> Loaded symbols for /lib64/libnss_files.so.2
> 0x0032dd40eb5c in recv () from /lib64/libpthread.so.0
> (gdb) b setrefs.c:298
> Breakpoint 1 at 0x846063: file setrefs.c, line 298.
> (gdb) c 
> Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe8e930adb8) at setrefs.c:298
> 298 setrefs.c: No such file or directory.
> (gdb) c 1923
> Will ignore next 1922 crossings of breakpoint 1. Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe869c70340) at setrefs.c:298
> 298 in setrefs.c
> (gdb) p list_length(allVars) 
> $1 = 1422
> (gdb) p var->varno 
> $2 = 65001
> (gdb) p list_length(glob->finalrtable) 
> $3 = 66515
> (gdb) p var->varattno 
> $4 = 31
> (gdb) p list_length(colNames) 
> $5 = 30
> (gdb) p list_length(rte->pseudocols) 
> $6 = 0
> The SQL statement is like:
> SELECT *
> FROM (select t.*,1001 as ttt from AAA t where  ( aaa = '3201066235'  
> or aaa = '3201066236'  or aaa = '3201026292'  or aaa = 
> '3201066293'  or aaa = '3201060006393' ) and (  bbb between 
> '20170601065900' and '20170601175100'  and (ccc = '2017-06-01' ))  union all  
> select t.*,1002 as ttt from AAA t where  ( aaa = '3201066007'  or aaa 
> = '3201066006' ) and (  bbb between '20170601072900' and 
> '20170601210100'  and ( ccc = '2017-06-01' ))  union all  select t.*,1003 as 
> ttt from AAA t where  ( aaa = '3201062772' ) and (  bbb between 
> '20170601072900' and '20170601170100'  and ( ccc = '2017-06-01' ))  union all 
>  select t.*,1004 as ttt from AAA t where  (aaa = '3201066115'  or aaa 
> = '3201066116'  or aaa = '3201066318'  or aaa = 
> '3201066319' ) and (  bbb between '20170601085900' and 
> '20170601163100' and ( ccc = '2017-06-01' ))  union all  select t.*,1005 as 
> ttt from AAA t where  ( aaa = '3201066180' or aaa = 
> '3201046385' ) and (  bbb between '20170601205900' and 
> '20170601230100'  and ( ccc = '2017-06-01' )) union all  select t.*,1006 as 
> ttt from AAA t where  ( aaa = '3201026423'  or aaa = 
> '3201026255'  or aaa = '3201066258'  or aaa = 
> '3201066259' ) and (  bbb between '20170601215900' and 
> '20170602004900'  and ( ccc = '2017-06-01'  or ccc = '2017-06-02' ))  union 
> all select t.*,1007 as ttt from AAA t where  ( aaa = '3201066175' or 
> aaa = '3201066004' ) and (  bbb between '20170602074900' and 
> '20170602182100'  and ( ccc = '2017-06-02'  )) union all select t.*,1008 as 
> ttt from AAA t where  ( aaa = '3201026648' ) and (  bbb between 
> '20170602132900' and '20170602134600'  and ( ccc = '2017-06-02' ))  union all 
>  select t.*,1009 as ttt from AAA t where  ( aaa = '3201062765'  or 
> aaa = '3201006282' ) and (  bbb between '20170602142900' and 
> '20170603175100'  and ( ccc = '2017-06-02'  or ccc = '2017-06-03' ))  union 
> all  select t.*,1010 as ttt from AAA t where  (aaa = '3201066060' ) 
> and (  bbb between '20170602165900' and '20170603034100'  and ( ccc = 
> '2017-06-02'  or ccc = '2017-06-03' ))  union all select t.*,1011 as ttt from 
> AAA 

[jira] [Commented] (HAWQ-1596) Please delete old releases from mirroring system

2018-03-18 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404351#comment-16404351
 ] 

Yi Jin commented on HAWQ-1596:
--

Removed the files of old releases 2.0.0.0 through 2.2.0.0:

 

yjinmac:hawq yijin$ svn delete 2.0.0.0-incubating
D 2.0.0.0-incubating
D 2.0.0.0-incubating/apache-hawq-src-2.0.0.0-incubating.tar.gz
D 2.0.0.0-incubating/apache-hawq-src-2.0.0.0-incubating.tar.gz.asc
D 2.0.0.0-incubating/apache-hawq-src-2.0.0.0-incubating.tar.gz.md5
D 2.0.0.0-incubating/apache-hawq-src-2.0.0.0-incubating.tar.gz.sha1
yjinmac:hawq yijin$ pwd
/Users/yijin/apache/releasework/hawq
yjinmac:hawq yijin$ svn delete 2.1.0.0-incubating
D 2.1.0.0-incubating
D 2.1.0.0-incubating/apache-hawq-src-2.1.0.0-incubating.tar.gz
D 2.1.0.0-incubating/apache-hawq-src-2.1.0.0-incubating.tar.gz.asc
D 2.1.0.0-incubating/apache-hawq-src-2.1.0.0-incubating.tar.gz.md5
D 2.1.0.0-incubating/apache-hawq-src-2.1.0.0-incubating.tar.gz.sha256
yjinmac:hawq yijin$ svn delete 2.2.0.0-incubating
D 2.2.0.0-incubating
D 2.2.0.0-incubating/apache-hawq-rpm-2.2.0.0-incubating.tar.gz
D 2.2.0.0-incubating/apache-hawq-rpm-2.2.0.0-incubating.tar.gz.asc
D 2.2.0.0-incubating/apache-hawq-rpm-2.2.0.0-incubating.tar.gz.md5
D 2.2.0.0-incubating/apache-hawq-rpm-2.2.0.0-incubating.tar.gz.sha256
D 2.2.0.0-incubating/apache-hawq-src-2.2.0.0-incubating.tar.gz
D 2.2.0.0-incubating/apache-hawq-src-2.2.0.0-incubating.tar.gz.asc
D 2.2.0.0-incubating/apache-hawq-src-2.2.0.0-incubating.tar.gz.md5
D 2.2.0.0-incubating/apache-hawq-src-2.2.0.0-incubating.tar.gz.sha256
yjinmac:hawq yijin$ svn commit -m "HAWQ-1596. delete old releases from 
mirroring system"
Deleting 2.0.0.0-incubating
Deleting 2.1.0.0-incubating
Deleting 2.2.0.0-incubating
Committing transaction...
Committed revision 25809.
yjinmac:hawq yijin$

> Please delete old releases from mirroring system
> 
>
> Key: HAWQ-1596
> URL: https://issues.apache.org/jira/browse/HAWQ-1596
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Sebb
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> It's unfair to expect the 3rd party mirrors to carry old releases.
> Note that older releases can still be linked from the download page, but such 
> links should use the archive server at:
> https://archive.apache.org/dist/incubator/hawq/
> A suggested process is:
> + Change the download page to use archive.a.o for old releases (artifacts, 
> sigs, hashes)
> + Delete the corresponding directories from 
> {{https://dist.apache.org/repos/dist/release/incubator/hawq/}}
> e.g. {{svn delete 
> https://dist.apache.org/repos/dist/release/incubator/hawq/2.0.0.0-incubating}}
> Thanks!
> [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1596) Please delete old releases from mirroring system

2018-03-18 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1596.
--
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> Please delete old releases from mirroring system
> 
>
> Key: HAWQ-1596
> URL: https://issues.apache.org/jira/browse/HAWQ-1596
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Sebb
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> It's unfair to expect the 3rd party mirrors to carry old releases.
> Note that older releases can still be linked from the download page, but such 
> links should use the archive server at:
> https://archive.apache.org/dist/incubator/hawq/
> A suggested process is:
> + Change the download page to use archive.a.o for old releases (artifacts, 
> sigs, hashes)
> + Delete the corresponding directories from 
> {{https://dist.apache.org/repos/dist/release/incubator/hawq/}}
> e.g. {{svn delete 
> https://dist.apache.org/repos/dist/release/incubator/hawq/2.0.0.0-incubating}}
> Thanks!
> [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1596) Please delete old releases from mirroring system

2018-03-18 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16404325#comment-16404325
 ] 

Yi Jin commented on HAWQ-1596:
--

I updated the site to use the archive site for downloading old versions.

> Please delete old releases from mirroring system
> 
>
> Key: HAWQ-1596
> URL: https://issues.apache.org/jira/browse/HAWQ-1596
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Sebb
>Assignee: Yi Jin
>Priority: Major
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> It's unfair to expect the 3rd party mirrors to carry old releases.
> Note that older releases can still be linked from the download page, but such 
> links should use the archive server at:
> https://archive.apache.org/dist/incubator/hawq/
> A suggested process is:
> + Change the download page to use archive.a.o for old releases (artifacts, 
> sigs, hashes)
> + Delete the corresponding directories from 
> {{https://dist.apache.org/repos/dist/release/incubator/hawq/}}
> e.g. {{svn delete 
> https://dist.apache.org/repos/dist/release/incubator/hawq/2.0.0.0-incubating}}
> Thanks!
> [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1595) Please use HTTPS for KEYS, sigs and hashes

2018-03-18 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1595.
--
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

> Please use HTTPS for KEYS, sigs and hashes
> --
>
> Key: HAWQ-1595
> URL: https://issues.apache.org/jira/browse/HAWQ-1595
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Sebb
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> As the subject says.
> Also the links should be to www.apache.org, not www-eu.apache.org



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1596) Please delete old releases from mirroring system

2018-03-14 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1596:


Assignee: Yi Jin  (was: Radar Lei)

> Please delete old releases from mirroring system
> 
>
> Key: HAWQ-1596
> URL: https://issues.apache.org/jira/browse/HAWQ-1596
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Sebb
>Assignee: Yi Jin
>Priority: Major
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> It's unfair to expect the 3rd party mirrors to carry old releases.
> Note that older releases can still be linked from the download page, but such 
> links should use the archive server at:
> https://archive.apache.org/dist/incubator/hawq/
> A suggested process is:
> + Change the download page to use archive.a.o for old releases (artifacts, 
> sigs, hashes)
> + Delete the corresponding directories from 
> {{https://dist.apache.org/repos/dist/release/incubator/hawq/}}
> e.g. {{svn delete 
> https://dist.apache.org/repos/dist/release/incubator/hawq/2.0.0.0-incubating}}
> Thanks!
> [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1595) Please use HTTPS for KEYS, sigs and hashes

2018-03-14 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16399824#comment-16399824
 ] 

Yi Jin commented on HAWQ-1595:
--

Thank you Sebb for creating this issue to track; I will do this in this release.

> Please use HTTPS for KEYS, sigs and hashes
> --
>
> Key: HAWQ-1595
> URL: https://issues.apache.org/jira/browse/HAWQ-1595
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> As the subject says.
> Also the links should be to www.apache.org, not www-eu.apache.org



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1595) Please use HTTPS for KEYS, sigs and hashes

2018-03-14 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1595:


Assignee: Yi Jin  (was: Radar Lei)

> Please use HTTPS for KEYS, sigs and hashes
> --
>
> Key: HAWQ-1595
> URL: https://issues.apache.org/jira/browse/HAWQ-1595
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Sebb
>Assignee: Yi Jin
>Priority: Major
>
> As the subject says.
> Also the links should be to www.apache.org, not www-eu.apache.org



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1596) Please delete old releases from mirroring system

2018-03-14 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16399823#comment-16399823
 ] 

Yi Jin commented on HAWQ-1596:
--

Thank you Sebb for creating this issue to track; I will do this in this release.

> Please delete old releases from mirroring system
> 
>
> Key: HAWQ-1596
> URL: https://issues.apache.org/jira/browse/HAWQ-1596
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Sebb
>Assignee: Radar Lei
>Priority: Major
>
> To reduce the load on the ASF mirrors, projects are required to delete old 
> releases [1]
> Please can you remove all non-current releases?
> It's unfair to expect the 3rd party mirrors to carry old releases.
> Note that older releases can still be linked from the download page, but such 
> links should use the archive server at:
> https://archive.apache.org/dist/incubator/hawq/
> A suggested process is:
> + Change the download page to use archive.a.o for old releases (artifacts, 
> sigs, hashes)
> + Delete the corresponding directories from 
> {{https://dist.apache.org/repos/dist/release/incubator/hawq/}}
> e.g. {{svn delete 
> https://dist.apache.org/repos/dist/release/incubator/hawq/2.0.0.0-incubating}}
> Thanks!
> [1] http://www.apache.org/dev/release.html#when-to-archive



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1586) Update version from 2.2.0.0 to 2.3.0.0

2018-02-26 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1586.


> Update version from 2.2.0.0 to 2.3.0.0
> --
>
> Key: HAWQ-1586
> URL: https://issues.apache.org/jira/browse/HAWQ-1586
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Update version number from 2.2.0.0 to 2.3.0.0 for release of 2.3.0.0 
> incubating



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HAWQ-1586) Update version from 2.2.0.0 to 2.3.0.0

2018-02-26 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1586.
--
Resolution: Fixed

> Update version from 2.2.0.0 to 2.3.0.0
> --
>
> Key: HAWQ-1586
> URL: https://issues.apache.org/jira/browse/HAWQ-1586
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Update version number from 2.2.0.0 to 2.3.0.0 for release of 2.3.0.0 
> incubating



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HAWQ-1586) Update version from 2.2.0.0 to 2.3.0.0

2018-02-07 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1586:


Assignee: Yi Jin  (was: Radar Lei)

> Update version from 2.2.0.0 to 2.3.0.0
> --
>
> Key: HAWQ-1586
> URL: https://issues.apache.org/jira/browse/HAWQ-1586
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Update version number from 2.2.0.0 to 2.3.0.0 for release of 2.3.0.0 
> incubating



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HAWQ-1586) Update version from 2.2.0.0 to 2.3.0.0

2018-02-07 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1586:


 Summary: Update version from 2.2.0.0 to 2.3.0.0
 Key: HAWQ-1586
 URL: https://issues.apache.org/jira/browse/HAWQ-1586
 Project: Apache HAWQ
  Issue Type: Task
  Components: Build
Reporter: Yi Jin
Assignee: Radar Lei
 Fix For: 2.3.0.0-incubating


Update version number from 2.2.0.0 to 2.3.0.0 for release of 2.3.0.0 incubating



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1496) Deliver LICENSE, NOTICE and DISCLAIMER in main PXF RPM

2018-02-01 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349471#comment-16349471
 ] 

Yi Jin commented on HAWQ-1496:
--

Thank you Ed!

> Deliver LICENSE, NOTICE and DISCLAIMER in main PXF RPM
> --
>
> Key: HAWQ-1496
> URL: https://issues.apache.org/jira/browse/HAWQ-1496
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build, PXF
>Reporter: Ed Espino
>Assignee: Vineet Goel
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Deliver LICENSE, NOTICE and DISCLAIMER in the main PXF RPM, on which all 
> others depend. Currently, they are delivered in all supplied PXF jar files. 
> This was identified during the IPMC voting process for the 2.2.0.0 convenience 
> binary release artifacts 
> (https://lists.apache.org/thread.html/35f0f37fb96ad88b1b7cee286640a94894a5b6956b9ae41a7d554ed6@%3Cgeneral.incubator.apache.org%3E)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1512) Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

2018-02-01 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349291#comment-16349291
 ] 

Yi Jin commented on HAWQ-1512:
--

[https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75963741]

 

[~rlei] did one initial version, and 

[~huor] [~espino] please help to have a review.

 

I am wondering:

1) whether we should mention libhdfs3 and libyarn as mandatory libraries, since 
they are maintained together with HAWQ; and

2) whether we should consider mandatory libraries to be those we only need when 
we run ./configure without any build options explicitly specified.

> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> --
>
> Key: HAWQ-1512
> URL: https://issues.apache.org/jira/browse/HAWQ-1512
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Yi Jin
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria
> Check the following page for the criteria
> https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1514) TDE feature makes libhdfs3 require openssl1.1

2018-02-01 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16348350#comment-16348350
 ] 

Yi Jin commented on HAWQ-1514:
--

[~hongxu ma] [~rlei] I think we also need some documentation changes mentioning 
this kind of requirement, especially on the 'Build and Install' wiki page:

 

[https://cwiki.apache.org/confluence/display/HAWQ/Build+and+Install]

 

> TDE feature makes libhdfs3 require openssl1.1
> -
>
> Key: HAWQ-1514
> URL: https://issues.apache.org/jira/browse/HAWQ-1514
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: libhdfs
>Reporter: Yi Jin
>Assignee: WANG Weinan
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> The new TDE feature delivered in libhdfs3 requires a specific version of 
> openssl: at least per my test, 1.0.21 does not work, while a library built 
> from 1.1 source code passed.
> So maybe we need some build and installation instruction improvements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1582) hawq ssh cmd bug when pipe in cmd

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1582.

Resolution: Fixed

> hawq ssh cmd bug when pipe in cmd
> -
>
> Key: HAWQ-1582
> URL: https://issues.apache.org/jira/browse/HAWQ-1582
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Yang Sen
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> h1. bug description
> {code:bash}
> hawq ssh -h sdw2 -h localhost -e 'ls -1 | wc -l'
> {code}
> When running this command, the expected behavior is that `ls -1 | wc -l` is 
> executed on each host. The expected output is (the numbers may differ):
> {code:bash}
> [sdw2] ls -1 | wc -l
> [sdw2] 23
> [localhost] ls -1 | wc -l
> [localhost] 20
> {code}
> But the actual output is:
> {code:bash}
> 45
> {code}
> The result looks as if `ls -1` was executed on each host and the output of 
> `hawq ssh -h sdw2 -h localhost -e 'ls -1'` was redirected through a pipe to `wc -l`.
> h2. Another related issue
> {code:bash}
> hawq ssh -h sdw2 -h localhost -e 'kill -9 $(pgrep lava)'
> {code}
> This command is expected to kill the process named lava on each host. Instead, 
> `$(pgrep lava)` is executed on localhost, where it resolves to a process id, 
> for example 5, and then `kill -9 5` is executed on each host, which is 
> definitely not what we expect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1582) hawq ssh cmd bug when pipe in cmd

2018-01-29 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344402#comment-16344402
 ] 

Yi Jin commented on HAWQ-1582:
--

Closing this issue, as the fix has been delivered and verified.
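
The quoted report below boils down to a shell quoting pitfall: without quoting, the pipe and the command substitution are interpreted by the local shell before the command is ever shipped to the remote hosts. A minimal sketch of the same trap with plain ssh (host name sdw2 taken from the report):

{code:bash}
# Pipe parsed locally: only 'ls -1' runs on sdw2; 'wc -l' runs locally
# on the merged output of all hosts, producing a single count.
ssh sdw2 ls -1 | wc -l

# Pipe quoted: the whole pipeline is handed to the remote shell, so
# each host runs 'ls -1 | wc -l' and reports its own count.
ssh sdw2 'ls -1 | wc -l'

# Same trap with command substitution: unquoted or double-quoted,
# $(pgrep lava) expands locally before ssh runs; single quotes defer
# the expansion to the remote side.
ssh sdw2 'kill -9 $(pgrep lava)'
{code}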

> hawq ssh cmd bug when pipe in cmd
> -
>
> Key: HAWQ-1582
> URL: https://issues.apache.org/jira/browse/HAWQ-1582
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Yang Sen
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> h1. bug description
> {code:bash}
> hawq ssh -h sdw2 -h localhost -e 'ls -1 | wc -l'
> {code}
> When running this command, the expected behavior is that `ls -1 | wc -l` is 
> executed on each host. The expected output is (the numbers may differ):
> {code:bash}
> [sdw2] ls -1 | wc -l
> [sdw2] 23
> [localhost] ls -1 | wc -l
> [localhost] 20
> {code}
> But the actual output is:
> {code:bash}
> 45
> {code}
> The result looks as if `ls -1` was executed on each host and the output of 
> `hawq ssh -h sdw2 -h localhost -e 'ls -1'` was redirected through a pipe to `wc -l`.
> h2. Another related issue
> {code:bash}
> hawq ssh -h sdw2 -h localhost -e 'kill -9 $(pgrep lava)'
> {code}
> This command is expected to kill the process named lava on each host. Instead, 
> `$(pgrep lava)` is executed on localhost, where it resolves to a process id, 
> for example 5, and then `kill -9 5` is executed on each host, which is 
> definitely not what we expect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1575) Implement readable Parquet profile

2018-01-29 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344400#comment-16344400
 ] 

Yi Jin commented on HAWQ-1575:
--

Shall we put this feature in 2.3.0.0? If yes, can whoever owns it deliver it 
ASAP? Thanks

> Implement readable Parquet profile
> --
>
> Key: HAWQ-1575
> URL: https://issues.apache.org/jira/browse/HAWQ-1575
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Ed Espino
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> PXF should be able to read data from Parquet files stored in HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1416) hawq_toolkit administrative schema missing in HAWQ installation

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1416:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> hawq_toolkit administrative schema missing in HAWQ installation
> ---
>
> Key: HAWQ-1416
> URL: https://issues.apache.org/jira/browse/HAWQ-1416
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools, DDL
>Reporter: Vineet Goel
>Assignee: Chunling Wang
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> The hawq_toolkit administrative schema is not pre-installed with HAWQ, but it 
> should be available once HAWQ is installed and initialized.
> The current workaround seems to be a manual command to install it:
> psql -f /usr/local/hawq/share/postgresql/gp_toolkit.sql



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1483) cache lookup failure

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1483:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> cache lookup failure
> 
>
> Key: HAWQ-1483
> URL: https://issues.apache.org/jira/browse/HAWQ-1483
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Rahul Iyer
>Assignee: Radar Lei
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> I'm getting a failure when performing a distinct count with another immutable 
> aggregate. We found this issue when running MADlib on HAWQ 2.0.0. Please find 
> below a simple repro. 
> Setup: 
> {code}
> CREATE TABLE example_data(
> id SERIAL,
> outlook text,
> temperature float8,
> humidity float8,
> windy text,
> class text) ;
> COPY example_data (outlook, temperature, humidity, windy, class) FROM stdin 
> DELIMITER ',' NULL '?' ;
> sunny, 85, 85, false, Don't Play
> sunny, 80, 90, true, Don't Play
> overcast, 83, 78, false, Play
> rain, 70, 96, false, Play
> rain, 68, 80, false, Play
> rain, 65, 70, true, Don't Play
> overcast, 64, 65, true, Play
> sunny, 72, 95, false, Don't Play
> sunny, 69, 70, false, Play
> rain, 75, 80, false, Play
> sunny, 75, 70, true, Play
> overcast, 72, 90, true, Play
> overcast, 81, 75, false, Play
> rain, 71, 80, true, Don't Play
> \.
> create function grt_sfunc(agg_state point, el float8)
> returns point
> immutable
> language plpgsql
> as $$
> declare
>   greatest_sum float8;
>   current_sum float8;
> begin
>   current_sum := agg_state[0] + el;
>   if agg_state[1] < current_sum then
> greatest_sum := current_sum;
>   else
> greatest_sum := agg_state[1];
>   end if;
>   return point(current_sum, greatest_sum);
> end;
> $$;
> create function grt_finalfunc(agg_state point)
> returns float8
> immutable
> strict
> language plpgsql
> as $$
> begin
>   return agg_state[1];
> end;
> $$;
> create aggregate greatest_running_total (float8)
> (
> sfunc = grt_sfunc,
> stype = point,
> finalfunc = grt_finalfunc
> );
> {code}
> Error: 
> {code}
> select count(distinct outlook), greatest_running_total(humidity::integer) 
> from example_data;
> {code} 
> {code}
> ERROR:  cache lookup failed for function 0 (fmgr.c:223)
> {code}
> Execution goes through if I remove the {{distinct}} or if I add another 
> column for the {{count(distinct)}}. 
> {code:sql}
> select count(distinct outlook) as c1, count(distinct windy) as c2, 
> greatest_running_total(humidity) from example_data;
> {code}
> {code}
>  c1 | c2 | greatest_running_total
> ----+----+------------------------
>   3 |  2 |
> (1 row)
> {code}
> {code:sql}
> select count(outlook) as c1, greatest_running_total(humidity) from 
> example_data;
> {code}
> {code}
>  count | greatest_running_total
> -------+------------------------
>     14 |
> (1 row)
> {code}
> It's an older build - I don't have the resources at present to test this on 
> the latest HAWQ. 
> {code}
> select version();
>  version
> --------------------------------------------------------------------------
>  PostgreSQL 8.2.15 (Greenplum Database 4.2.0 build 1) (HAWQ 2.0.0.0 build 
> 22126) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled 
> on Apr 25 2016 09:52:54
> (1 row)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1494) The bug can appear every time when I execute a specific sql: Unexpect internal error (setref.c:298), server closed the connection unexpectedly

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1494:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> The bug can appear every time when I execute a specific sql:  Unexpect 
> internal error (setref.c:298), server closed the connection unexpectedly
> ---
>
> Key: HAWQ-1494
> URL: https://issues.apache.org/jira/browse/HAWQ-1494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: fangpei
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> When I execute a specific SQL statement, a serious bug happens every time. (HAWQ 
> version is 2.2.0.0)
> BUG information:
> FATAL: Unexpect internal error (setref.c:298)
> DETAIL: AssertImply failed("!(!var->varattno >= 0) || (var->varattno <= 
> list_length(colNames) + list_length(rte->pseudocols)))", File: "setrefs.c", 
> Line: 298)
> HINT:  Process 239600 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released  until then.
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
>  before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> I used GDB to debug; the GDB information is the same every time. The 
> information is: 
> Loaded symbols for /lib64/libnss_files.so.2
> 0x0032dd40eb5c in recv () from /lib64/libpthread.so.0
> (gdb) b setrefs.c:298
> Breakpoint 1 at 0x846063: file setrefs.c, line 298.
> (gdb) c 
> Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe8e930adb8) at setrefs.c:298
> 298 setrefs.c: No such file or directory.
> (gdb) c 1923
> Will ignore next 1922 crossings of breakpoint 1. Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe869c70340) at setrefs.c:298
> 298 in setrefs.c
> (gdb) p list_length(allVars) 
> $1 = 1422
> (gdb) p var->varno 
> $2 = 65001
> (gdb) p list_length(glob->finalrtable) 
> $3 = 66515
> (gdb) p var->varattno 
> $4 = 31
> (gdb) p list_length(colNames) 
> $5 = 30
> (gdb) p list_length(rte->pseudocols) 
> $6 = 0
> The SQL statement is like:
> SELECT *
> FROM (select t.*,1001 as ttt from AAA t where  ( aaa = '3201066235'  
> or aaa = '3201066236'  or aaa = '3201026292'  or aaa = 
> '3201066293'  or aaa = '3201060006393' ) and (  bbb between 
> '20170601065900' and '20170601175100'  and (ccc = '2017-06-01' ))  union all  
> select t.*,1002 as ttt from AAA t where  ( aaa = '3201066007'  or aaa 
> = '3201066006' ) and (  bbb between '20170601072900' and 
> '20170601210100'  and ( ccc = '2017-06-01' ))  union all  select t.*,1003 as 
> ttt from AAA t where  ( aaa = '3201062772' ) and (  bbb between 
> '20170601072900' and '20170601170100'  and ( ccc = '2017-06-01' ))  union all 
>  select t.*,1004 as ttt from AAA t where  (aaa = '3201066115'  or aaa 
> = '3201066116'  or aaa = '3201066318'  or aaa = 
> '3201066319' ) and (  bbb between '20170601085900' and 
> '20170601163100' and ( ccc = '2017-06-01' ))  union all  select t.*,1005 as 
> ttt from AAA t where  ( aaa = '3201066180' or aaa = 
> '3201046385' ) and (  bbb between '20170601205900' and 
> '20170601230100'  and ( ccc = '2017-06-01' )) union all  select t.*,1006 as 
> ttt from AAA t where  ( aaa = '3201026423'  or aaa = 
> '3201026255'  or aaa = '3201066258'  or aaa = 
> '3201066259' ) and (  bbb between '20170601215900' and 
> '20170602004900'  and ( ccc = '2017-06-01'  or ccc = '2017-06-02' ))  union 
> all select t.*,1007 as ttt from AAA t where  ( aaa = '3201066175' or 
> aaa = '3201066004' ) and (  bbb between '20170602074900' and 
> '20170602182100'  and ( ccc = '2017-06-02'  )) union all select t.*,1008 as 
> ttt from AAA t where  ( aaa = '3201026648' ) and (  bbb between 
> '20170602132900' and '20170602134600'  and ( ccc = '2017-06-02' ))  union all 
>  select t.*,1009 as ttt from AAA t where  ( aaa = '3201062765'  or 
> aaa = '3201006282' ) and (  bbb between '20170602142900' and 
> '20170603175100'  and ( ccc = '2017-06-02'  or ccc = '2017-06-03' ))  union 
> all  select t.*,1010 as ttt from AAA t where  (aaa = '3201066060' ) 
> and (  bbb between '20170602165900' and '20170603034100'  and ( ccc = 
> '2017-06-02'  or ccc = '2017-06-03' ))  union all select t.*,1011 as ttt from 
> AAA t where  ( aaa = '32010662229'  or aaa = '3201066230'  or 
> aaa = '3201020

[jira] [Updated] (HAWQ-1566) Include Pluggable Storage Format Framework in External Table Insert

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1566:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Include Pluggable Storage Format Framework in External Table Insert
> ---
>
> Key: HAWQ-1566
> URL: https://issues.apache.org/jira/browse/HAWQ-1566
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> There are two types of operation related to external tables, i.e. scan and 
> insert. Including the pluggable storage framework in these operations is 
> necessary. We add the external table insert and COPY FROM (write into 
> external table) related features here.
> In the following steps, we still need to specify some of the critical info 
> that comes from the planner and the file splits info in the pluggable 
> filesystem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-786) Framework to support pluggable formats and file systems

2018-01-29 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344191#comment-16344191
 ] 

Yi Jin commented on HAWQ-786:
-

As its demo and feature test will not be delivered in version 2.3.0.0, this 
issue is moved to the next version to be completed.

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ Pluggable Storage Framework.pdf, 
> ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and parquet. Now we 
> want to support ORC. A framework to support native C/C++ pluggable formats is 
> needed to support ORC more easily. It can also potentially be used for 
> fast external data access.
> And there are a lot of requests for supporting S3, Ceph and other file 
> systems, and this is closely related to pluggable formats, so this JIRA is 
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-786) Framework to support pluggable formats and file systems

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-786:

Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ Pluggable Storage Framework.pdf, 
> ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and parquet. Now we 
> want to support ORC. A framework to support native C/C++ pluggable formats is 
> needed to support ORC more easily. It can also potentially be used for 
> fast external data access.
> And there are a lot of requests for supporting S3, Ceph and other file 
> systems, and this is closely related to pluggable formats, so this JIRA is 
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-127) Create CI projects for HAWQ releases

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-127:

Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Create CI projects for HAWQ releases
> 
>
> Key: HAWQ-127
> URL: https://issues.apache.org/jira/browse/HAWQ-127
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Lei Chang
>Assignee: Jiali Yao
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Create Jenkins projects that build HAWQ binary, source tarballs and docker 
> images, and run sanity tests including at least installcheck-good tests for 
> each commit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1576) Add demo for pluggable format scan

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1576:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Add demo for pluggable format scan
> --
>
> Key: HAWQ-1576
> URL: https://issues.apache.org/jira/browse/HAWQ-1576
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Once the new pluggable storage framework feature is ready, it is necessary to 
> add a demo showing how to implement external scan on a new format using the 
> pluggable framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HAWQ-1577) Add demo for pluggable format insert

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1577:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Add demo for pluggable format insert
> 
>
> Key: HAWQ-1577
> URL: https://issues.apache.org/jira/browse/HAWQ-1577
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, Storage
>Reporter: Chiyang Wan
>Assignee: Chiyang Wan
>Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Once the new pluggable storage framework feature is ready, it is necessary to 
> add a demo showing how to implement external insert on a new format using the 
> pluggable framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2018-01-29 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344186#comment-16344186
 ] 

Yi Jin commented on HAWQ-1530:
--

Closing this issue, as the fix probably resolved this problem. 

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Hi,
> When you perform a long-running select statement on 2 HAWQ tables (join) from 
> JDBC and illegally kill the JDBC client (CTRL ALT DEL) before completion of 
> the query, the 2 tables remain locked even when the query completes on the 
> server. 
> The lock is visible via PG_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is to kill -9 
> from linux or restart hawq but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2018-01-29 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1530.

Resolution: Fixed

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Yi Jin
>Priority: Major
> Fix For: 2.3.0.0-incubating
>
>
> Hi,
> When you perform a long-running select statement on 2 HAWQ tables (join) from 
> JDBC and illegally kill the JDBC client (CTRL ALT DEL) before completion of 
> the query, the 2 tables remain locked even when the query completes on the 
> server. 
> The lock is visible via PG_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is to kill -9 
> from linux or restart hawq but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HAWQ-1567) Unknown process holds the lock causes DROP TABLE hangs forever

2017-12-05 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279713#comment-16279713
 ] 

Yi Jin commented on HAWQ-1567:
--

Please check all lock records to see whether a lock is held by a process with 
no live pid. If yes, this is probably a duplicate of HAWQ-1530:

select * from pg_locks;
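
For example, a query along these lines should surface such a lock (a sketch joining against pg_stat_activity, whose procpid column is the backend pid in the PostgreSQL 8.2 catalogs HAWQ derives from):

{code:sql}
-- Granted locks whose holding backend no longer appears in
-- pg_stat_activity, i.e. locks held by a process that is gone.
SELECT l.*
  FROM pg_locks l
  LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE l.granted
   AND a.procpid IS NULL;
{code}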

> Unknown process holds the lock causes DROP TABLE hangs forever
> --
>
> Key: HAWQ-1567
> URL: https://issues.apache.org/jira/browse/HAWQ-1567
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Kuien Liu
>Assignee: Radar Lei
>
> On HAWQ 2.2.0.0-incubating (Jun 2017), we have several times seen a query 
> hang for a long time:
> # 1. DROP TABLE hangs for tens of minutes, because it waits for an 
> AccessExclusiveLock.
> # 2. BUT the lock is held by a ghost process (not alive, and little 
> information is available in the log file to know what happened).
> A detailed context is pasted:
> postgres=# select procpid, sess_id, usesysid, xact_start, waiting, 
> current_query from pg_stat_activity where current_query <> '';
>  procpid | sess_id | usesysid |  xact_start   | waiting | 
>   
>   current_query
> -+-+--+---+-+---
>91321 |  120242 |   328199 | 2017-11-28 14:45:52.631739+08 | t   |  
> drop table if exists ads_is_svc_rcv_approval_detail_df
> postgres=# select * from pg_locks where pid = 91321;
>locktype| database | relation | page | tuple | transactionid | classid 
> | objid | objsubid | transaction |  pid  |mode | granted | 
> mppsessionid | mppiswriter | gp_segment_id
> ---+--+--+--+---+---+-+---+--+-+---+-+-+--+-+---
>  transactionid |  |  |  |   |  21867785 | 
> |   |  |21867785 | 91321 | ExclusiveLock   | t   |
>120242 | f   |-1
>  relation  |16510 | 2608 |  |   |   | 
> |   |  |21867785 | 91321 | RowExclusiveLock| t   |
>120242 | f   |-1
>  relation  |16510 | 1259 |  |   |   | 
> |   |  |21867785 | 91321 | RowExclusiveLock| t   |
>120242 | f   |-1
>  relation  |16510 |  3212612 |  |   |   | 
> |   |  |21867785 | 91321 | AccessExclusiveLock | f   |
>120242 | f   |-1
>  relation  |16510 | 1247 |  |   |   | 
> |   |  |21867785 | 91321 | RowExclusiveLock| t   |
>120242 | f   |-1
> (5 rows)
> postgres=# select * from pg_locks where relation = 3212612;
>  locktype | database | relation | page | tuple | transactionid | classid | 
> objid | objsubid | transaction |  pid   |mode | granted | 
> mppsessionid | mppiswriter | gp_segment_id
> --+--+--+--+---+---+-+---+--+-++-+-+--+-+---
>  relation |16510 |  3212612 |  |   |   | |
>|  |21867785 |  91321 | AccessExclusiveLock | f   |   
> 120242 | f   |-1
>  relation |16510 |  3212612 |  |   |   | |
>|  |   0 | 107940 | AccessShareLock | t   |   
> 120553 | f   |-1
> (2 rows)
> postgres=# select * from pg_stat_activity where procpid = 107940;
>  datid | datname | procpid | sess_id | usesysid | usename | current_query | 
> waiting | query_start | backend_start | client_addr | client_port | 
> application_name | xact_start | waiting_resource
> ---+-+-+-+--+-+---+-+-+---+-+-+--++--
> (0 rows)
> postgres=# select * from pg_locks  where pid = 107940 or mppsessionid = 
> 120553;
>  locktype | database | relation | page |

[jira] [Commented] (HAWQ-1548) Ambiguous message while logging hawq utilization

2017-11-30 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273862#comment-16273862
 ] 

Yi Jin commented on HAWQ-1548:
--

[~outofmemory] [~wlin] Thanks Shubham and Wen. 

> Ambiguous message while logging hawq utilization
> 
>
> Key: HAWQ-1548
> URL: https://issues.apache.org/jira/browse/HAWQ-1548
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: libyarn
>Reporter: Shubham Sharma
>Assignee: Lin Wen
>
> While YARN mode is enabled, resource broker logs two things - 
> - YARN cluster total resource 
> - HAWQ's total resource per node.
> Following messages are logged 
> {code}
> 2017-11-11 23:21:40.944904 
> UTC,,,p549330,th9000778560,con4,,seg-1,"LOG","0","Resource 
> manager YARN resource broker counted YARN cluster having total resource 
> (1376256 MB, 168.00 CORE).",,,0,,"resourcebroker_LIBYARN.c",776,
> 2017-11-11 23:21:40.944921 
> UTC,,,p549330,th9000778560,con4,,seg-1,"LOG","0","Resource 
> manager YARN resource broker counted HAWQ cluster now having (98304 MB, 
> 12.00 CORE) in a YARN cluster of total resource (1376256 MB, 168.00 
> CORE).",,,0,,"resourcebroker_LIBYARN.c",785,
> {code}
> The second message shown above is ambiguous. After reading it, it looks as if 
> the complete HAWQ cluster as a whole has only 98304 MB and 12 cores. However, 
> according to the configuration it should be 98304 MB and 12 cores per segment 
> server.
> {code}
> Resource manager YARN resource broker counted HAWQ cluster now having (98304 
> MB, 12.00 CORE) in a YARN cluster of total resource (1376256 MB, 
> 168.00 CORE).
> {code}
> Either the wrong variables are printed, or we should correct the message to 
> indicate that the resources logged are per node, as this can confuse the user 
> into thinking that the HAWQ cluster does not have enough resources.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1548) Ambiguous message while logging hawq utilization

2017-11-13 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250961#comment-16250961
 ] 

Yi Jin commented on HAWQ-1548:
--

Per my design, the first log outputs the dynamic total YARN cluster capacity, 
not a per-node value; that log is emitted only when the total YARN capacity 
changes.

The second log is emitted only when the YARN capacity available to HAWQ 
changes; it is not per node either.

So basically, we cannot say these values are per node. 

> Ambiguous message while logging hawq utilization
> 
>
> Key: HAWQ-1548
> URL: https://issues.apache.org/jira/browse/HAWQ-1548
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: libyarn
>Reporter: Shubham Sharma
>Assignee: Lin Wen
>
> While YARN mode is enabled, resource broker logs two things - 
> - YARN cluster total resource 
> - HAWQ's total resource per node.
> Following messages are logged 
> {code}
> 2017-11-11 23:21:40.944904 
> UTC,,,p549330,th9000778560,con4,,seg-1,"LOG","0","Resource 
> manager YARN resource broker counted YARN cluster having total resource 
> (1376256 MB, 168.00 CORE).",,,0,,"resourcebroker_LIBYARN.c",776,
> 2017-11-11 23:21:40.944921 
> UTC,,,p549330,th9000778560,con4,,seg-1,"LOG","0","Resource 
> manager YARN resource broker counted HAWQ cluster now having (98304 MB, 
> 12.00 CORE) in a YARN cluster of total resource (1376256 MB, 168.00 
> CORE).",,,0,,"resourcebroker_LIBYARN.c",785,
> {code}
> The second message shown above is ambiguous. After reading it, it looks as if 
> the complete HAWQ cluster as a whole has only 98304 MB and 12 cores. However, 
> according to the configuration it should be 98304 MB and 12 cores per segment 
> server.
> {code}
> Resource manager YARN resource broker counted HAWQ cluster now having (98304 
> MB, 12.00 CORE) in a YARN cluster of total resource (1376256 MB, 
> 168.00 CORE).
> {code}
> Either the wrong variables are printed, or we should correct the message to 
> indicate that the resources logged are per node, as this can confuse the user 
> into thinking that the HAWQ cluster does not have enough resources.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2017-11-08 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16244963#comment-16244963
 ] 

Yi Jin commented on HAWQ-1530:
--

I pushed the fix, Grant. Thank you for your help. 

I think you can try to randomly cancel or terminate a running query/insert at 
any time, and then check that a subsequent DROP TABLE works without hanging.  

This is not so easy to test, so let me explain what happens before the fix:

A process A accesses table T with a shared lock, and a FATAL error is raised 
when it is cancelled or terminated. Unfortunately, when another ERROR occurs 
and is promoted to a FATAL error, this breaks the normal transaction cleanup 
logic, and the lock counter structure is not cleaned up correctly.

Then when a process B accesses table T2, B reuses the lock counter structure 
that A used. After all of B's logic and computation, B has to clean up the 
lock counter structure, but because it was polluted by A, B cannot bring the 
counter back to 0 again, which prevents B from releasing the lock on T2. B 
then happily exits without reporting any error. This is why we can observe a 
lock occupied by an unknown process.

When an unlucky process C comes along and tries to DROP TABLE, it cannot get 
the exclusive lock, so C hangs.

The fix is to drop a potential ERROR when HAWQ is already in exit progress 
due to a previous FATAL error.
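
A minimal sketch of that last idea (a hypothetical helper, not the actual patch; see the pull request linked in the comment further below, and proc_exit_inprogress is the standard PostgreSQL flag set once backend exit has started):

{code}
/* Sketch only: once the backend is already exiting because of a FATAL
 * error, demote a newly raised ERROR to a WARNING so that transaction
 * cleanup is not re-entered and the shared lock counters are released
 * exactly once.
 */
static int
cleanup_elevel(int elevel)
{
    if (proc_exit_inprogress && elevel == ERROR)
        return WARNING;     /* log it, but do not longjmp out of cleanup */
    return elevel;
}
{code}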

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> Hi,
> When you perform a long-running select statement on 2 HAWQ tables (join) from 
> JDBC and illegally kill the JDBC client (CTRL ALT DEL) before completion of 
> the query, the 2 tables remain locked even when the query completes on the 
> server. 
> The lock is visible via PG_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is to kill -9 
> from linux or restart hawq but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2017-11-07 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16242854#comment-16242854
 ] 

Yi Jin commented on HAWQ-1530:
--

I created a PR for this bug: 
https://github.com/apache/incubator-hawq/pull/1308

Please have a look.

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> Hi,
> When you perform a long-running select statement on 2 HAWQ tables (join) from 
> JDBC and illegally kill the JDBC client (CTRL ALT DEL) before completion of 
> the query, the 2 tables remain locked even when the query completes on the 
> server. 
> The lock is visible via PG_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is to kill -9 
> from linux or restart hawq but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2017-11-07 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1530:
-
Fix Version/s: 2.3.0.0-incubating

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> Hi,
> When you perform a long-running select statement on 2 HAWQ tables (join) from 
> JDBC and illegally kill the JDBC client (CTRL ALT DEL) before completion of 
> the query, the 2 tables remain locked even when the query completes on the 
> server. 
> The lock is visible via PG_locks. One cannot kill the query via SELECT 
> pg_terminate_backend(393937). The only way to get rid of it is to kill -9 
> from linux or restart hawq but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required
> Thank you
> Grant 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1530) Illegally killing a JDBC select query causes locking problems

2017-11-07 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16241780#comment-16241780
 ] 

Yi Jin commented on HAWQ-1530:
--

I will provide a fix very soon that should address this issue.

> Illegally killing a JDBC select query causes locking problems
> -
>
> Key: HAWQ-1530
> URL: https://issues.apache.org/jira/browse/HAWQ-1530
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Grant Krieger
>Assignee: Radar Lei
>
> Hi,
> When you perform a long-running select statement on 2 HAWQ tables (a join)
> from JDBC and illegally kill the JDBC client (Ctrl-Alt-Del) before the query
> completes, the 2 tables remain locked even after the query completes on the
> server.
> The lock is visible via pg_locks. One cannot kill the query via SELECT
> pg_terminate_backend(393937). The only way to get rid of it is kill -9 from
> Linux or a restart of HAWQ, but this can kill other things as well.
> The JDBC client I am using is Aqua Data Studio.
> I can provide exact steps to reproduce if required.
> Thank you
> Grant



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-127) Create CI projects for HAWQ releases

2017-09-28 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-127:
---

Assignee: Jiali Yao  (was: Radar Lei)

> Create CI projects for HAWQ releases
> 
>
> Key: HAWQ-127
> URL: https://issues.apache.org/jira/browse/HAWQ-127
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Lei Chang
>Assignee: Jiali Yao
> Fix For: 2.3.0.0-incubating
>
>
> Create Jenkins projects that build HAWQ binary, source tarballs and docker 
> images, and run sanity tests including at least installcheck-good tests for 
> each commit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-127) Create CI projects for HAWQ releases

2017-09-28 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-127:

Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Create CI projects for HAWQ releases
> 
>
> Key: HAWQ-127
> URL: https://issues.apache.org/jira/browse/HAWQ-127
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.1.0.0-incubating
>Reporter: Lei Chang
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> Create Jenkins projects that build HAWQ binary, source tarballs and docker 
> images, and run sanity tests including at least installcheck-good tests for 
> each commit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1494) The bug can appear every time when I execute a specific sql: Unexpect internal error (setref.c:298), server closed the connection unexpectedly

2017-09-28 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1494:


Assignee: Yi Jin  (was: Radar Lei)

> The bug can appear every time when I execute a specific sql:  Unexpect 
> internal error (setref.c:298), server closed the connection unexpectedly
> ---
>
> Key: HAWQ-1494
> URL: https://issues.apache.org/jira/browse/HAWQ-1494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: fangpei
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> When I execute a specific SQL query, a serious bug happens every time. (HAWQ
> version is 2.2.0.0.)
> BUG information:
> FATAL: Unexpect internal error (setref.c:298)
> DETAIL: AssertImply failed("!(!var->varattno >= 0) || (var->varattno <= 
> list_length(colNames) + list_length(rte- >pseudocols)))", File: "setrefs.c", 
> Line: 298)
> HINT:  Process 239600 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released  until then.
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
>  before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> I use GDB to debug; the GDB information is the same every time. The
> information is:
> Loaded symbols for /lib64/libnss_files.so.2
> 0x0032dd40eb5c in recv 0 from /lib64/libpthread.so.0
> (gdb) b setrefs.c:298
> Breakpoint 1 at 0x846063: file setrefs.c, line 298.
> (gdb) c 
> Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe8e930adb8) at setrefs.c:298
> 298 setrefs.c: No such file or directory.
> (gdb) c 1923
> Will ignore next 1922 crossings of breakpoint 1. Continuing.
> Breakpoint 1, set_plan_references_output_asserts (glob=0x7fe96fbccab0, 
> plan=0x7fe869c70340) at setrefs.c:298
> 298 in setrefs.c
> (gdb) p list_length(allVars) 
> $1 = 1422
> (gdb) p var->varno 
> $2 = 65001
> (gdb) p list_length(glob->finalrtable) 
> $3 = 66515
> (gdb) p var->varattno 
> $4 = 31
> (gdb) p list_length(colNames) 
> $5 = 30
> (gdb) p list_length(rte->pseudocols) 
> $6 = 0
> the SQL statement is like:
> SELECT *
> FROM (select t.*,1001 as ttt from AAA t where  ( aaa = '3201066235'  
> or aaa = '3201066236'  or aaa = '3201026292'  or aaa = 
> '3201066293'  or aaa = '3201060006393' ) and (  bbb between 
> '20170601065900' and '20170601175100'  and (ccc = '2017-06-01' ))  union all  
> select t.*,1002 as ttt from AAA t where  ( aaa = '3201066007'  or aaa 
> = '3201066006' ) and (  bbb between '20170601072900' and 
> '20170601210100'  and ( ccc = '2017-06-01' ))  union all  select t.*,1003 as 
> ttt from AAA t where  ( aaa = '3201062772' ) and (  bbb between 
> '20170601072900' and '20170601170100'  and ( ccc = '2017-06-01' ))  union all 
>  select t.*,1004 as ttt from AAA t where  (aaa = '3201066115'  or aaa 
> = '3201066116'  or aaa = '3201066318'  or aaa = 
> '3201066319' ) and (  bbb between '20170601085900' and 
> '20170601163100' and ( ccc = '2017-06-01' ))  union all  select t.*,1005 as 
> ttt from AAA t where  ( aaa = '3201066180' or aaa = 
> '3201046385' ) and (  bbb between '20170601205900' and 
> '20170601230100'  and ( ccc = '2017-06-01' )) union all  select t.*,1006 as 
> ttt from AAA t where  ( aaa = '3201026423'  or aaa = 
> '3201026255'  or aaa = '3201066258'  or aaa = 
> '3201066259' ) and (  bbb between '20170601215900' and 
> '20170602004900'  and ( ccc = '2017-06-01'  or ccc = '2017-06-02' ))  union 
> all select t.*,1007 as ttt from AAA t where  ( aaa = '3201066175' or 
> aaa = '3201066004' ) and (  bbb between '20170602074900' and 
> '20170602182100'  and ( ccc = '2017-06-02'  )) union all select t.*,1008 as 
> ttt from AAA t where  ( aaa = '3201026648' ) and (  bbb between 
> '20170602132900' and '20170602134600'  and ( ccc = '2017-06-02' ))  union all 
>  select t.*,1009 as ttt from AAA t where  ( aaa = '3201062765'  or 
> aaa = '3201006282' ) and (  bbb between '20170602142900' and 
> '20170603175100'  and ( ccc = '2017-06-02'  or ccc = '2017-06-03' ))  union 
> all  select t.*,1010 as ttt from AAA t where  (aaa = '3201066060' ) 
> and (  bbb between '20170602165900' and '20170603034100'  and ( ccc = 
> '2017-06-02'  or ccc = '2017-06-03' ))  union all select t.*,1011 as ttt from 
> AAA t where  ( aaa = '32010662229'  or aaa = '3201066230'  or 
> aaa = '3201022783'  or aaa = '3201026304' ) and (  bbb 
> between '20170

[jira] [Commented] (HAWQ-786) Framework to support pluggable formats and file systems

2017-09-04 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16152365#comment-16152365
 ] 

Yi Jin commented on HAWQ-786:
-

Now let's assign Ruilong to complete this feature. Thanks.

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and parquet. Now we
> want to support ORC. A framework to support native C/C++ pluggable formats is
> needed to support ORC more easily, and it can also potentially be used for
> fast external data access.
> There are also a lot of requests for supporting S3, Ceph and other file
> systems; this is closely related to pluggable formats, so this JIRA is
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-786) Framework to support pluggable formats and file systems

2017-09-04 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-786:
---

Assignee: Ruilong Huo  (was: Lei Chang)

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and parquet. Now we
> want to support ORC. A framework to support native C/C++ pluggable formats is
> needed to support ORC more easily, and it can also potentially be used for
> fast external data access.
> There are also a lot of requests for supporting S3, Ceph and other file
> systems; this is closely related to pluggable formats, so this JIRA is
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1310) Reformat resource_negotiator()

2017-08-11 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1310.
--
Resolution: Fixed

> Reformat resource_negotiator()
> --
>
> Key: HAWQ-1310
> URL: https://issues.apache.org/jira/browse/HAWQ-1310
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Yi Jin
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> The indentation in function resource_negotiator() is not aligned.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1498) Segments keep open file descriptors for deleted files

2017-08-11 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1498.
--
Resolution: Fixed

> Segments keep open file descriptors for deleted files
> -
>
> Key: HAWQ-1498
> URL: https://issues.apache.org/jira/browse/HAWQ-1498
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Harald Bögeholz
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> I have been running some large computations in HAWQ using psql on the master.
> These computations created temporary tables and dropped them again.
> Nevertheless, free disk space in HDFS decreased by much more than it should.
> While the psql session on the master was still open, I investigated on one of
> the slave machines.
> HDFS is stored on /mds:
> {noformat}
> [root@mds-hdp-04 ~]# ls -l /mds
> total 36
> drwxr-xr-x. 3 root  root4096 Jun 14 04:23 falcon
> drwxr-xr-x. 3 root  root4096 Jun 14 04:42 hdfs
> drwx--. 2 root  root   16384 Jun  8 02:48 lost+found
> drwxr-xr-x. 5 storm hadoop  4096 Jun 14 04:45 storm
> drwxr-xr-x. 4 root  root4096 Jun 14 04:43 yarn
> drwxr-xr-x. 2 zookeeper hadoop  4096 Jun 14 04:39 zookeeper
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks  Used Available Use% Mounted on
> /dev/vdc   515928320 314560220 175137316  65% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> {noformat}
> Note that there is a more than 200 GB difference between the disk space used 
> according to df and the sum of all files on that file system according to du.
> I have found the culprit to be several postgres processes running as gpadmin 
> and holding open file descriptors to deleted files. Here are the first few:
> {noformat}
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> postgres 665334 gpadmin   18r   REG 253,32 134217728 0  9438234 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482
>  (deleted)
> postgres 665334 gpadmin   34r   REG 253,32 24488 0  9438114 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398
>  (deleted)
> postgres 665334 gpadmin   35r   REG 253,32   199 0  9438115 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398_187044.meta
>  (deleted)
> postgres 665334 gpadmin   37r   REG 253,32 134217728 0  9438208 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446
>  (deleted)
> postgres 665334 gpadmin   38r   REG 253,32   1048583 0  9438209 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446_187092.meta
>  (deleted)
> postgres 665334 gpadmin   39r   REG 253,32   1048583 0  9438235 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482_187128.meta
>  (deleted)
> postgres 665334 gpadmin   40r   REG 253,32 134217728 0  9438262 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555
>  (deleted)
> postgres 665334 gpadmin   41r   REG 253,32   1048583 0  9438263 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555_187201.meta
>  (deleted)
> postgres 665334 gpadmin   42r   REG 253,32 134217728 0  9438285 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602
>  (deleted)
> postgres 665334 gpadmin   43r   REG 253,32   1048583 0  9438286 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602_187248.meta
>  (deleted)
> {noformat}
> As soon as I close the psql session on the master, the disk space is freed
> on the slaves:
> {noformat}
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/vdc   515928320 89992720 399704816  19% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> {noformat}
> I believe this to be a bug. At least to me it looks like very undesirable
> behavior.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1272) Modify code sample to use standard SQL syntax

2017-08-10 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1272:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> Modify code sample to use standard SQL syntax
> -
>
> Key: HAWQ-1272
> URL: https://issues.apache.org/jira/browse/HAWQ-1272
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Jane Beckman
>Assignee: Radar Lei
> Fix For: backlog
>
>
> Parts of the code sample for checking table size should be uppercase.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1459) Tweak the feature test related entries in makefiles.

2017-08-10 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1459:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> Tweak the feature test related entries in makefiles.
> 
>
> Key: HAWQ-1459
> URL: https://issues.apache.org/jira/browse/HAWQ-1459
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Paul Guo
>Assignee: Paul Guo
> Fix For: backlog
>
>
> We really do not need to set separate entries for the feature tests in
> makefiles, i.e.
> feature-test
> feature-test-clean
> This looks a bit ugly.
> Besides, in src/test/Makefile, there is a typo, i.e.
> feature_test



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1147) Analyze will ERROR after register multiple parquet data files to a parquet table while enable debug and cassert in hawq configure

2017-08-10 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1147:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> Analyze will ERROR after register multiple parquet data files to a parquet 
> table while enable debug and cassert in hawq configure
> -
>
> Key: HAWQ-1147
> URL: https://issues.apache.org/jira/browse/HAWQ-1147
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Xiang Sheng
>Assignee: Xiang Sheng
> Fix For: backlog
>
>
> With debug and cassert enabled in the HAWQ configure, analyze will report an
> Unexpected internal error after registering multiple parquet data files to
> the table.
> Reproduce steps:
> 1.  ./configure --enable-debug  --enable-cassert
> 2.  make -j8
> 3.  make install
> 4.  hawq init cluster -a
> 5.  hadoop fs -mkdir hdfs://localhost:8020/hawq_register_test
> 6.  hadoop fs -put 
> $hawq_home/src/test/feature/ManagementTool/test_hawq_register_hawq.paq 
> hdfs://localhost:8020/hawq_register_test/hawq1.paq
> 7.  create table t (i int) with (appendonly=true, orientation=parquet);
> 8.  hawq register -d postgres -f 
> hdfs://localhost:8020/hawq_register_test/hawq1.paq  t
> 9.  
> postgres=# select oid from pg_class where relname = 't';
>   oid
> -------
>  24586
> (1 row)
> postgres=# select * from pg_aoseg.pg_paqseg_24586;
>  segno | eof | tupcount | eofuncompressed
> -------+-----+----------+-----------------
>      1 | 657 |       -1 |              -1
> (1 row)
> postgres=# analyze t;
> FATAL:  Unexpected internal error (analyze.c:1718)
> DETAIL:  FailedAssertion("!(relTuples > -1.0)", File: "analyze.c", Line: 1718)
> HINT:  Process 43356 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released until then.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-991) "HAWQ register" could register tables according to .yml configuration file

2017-08-10 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-991:

Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> "HAWQ register" could register tables according to .yml configuration file
> --
>
> Key: HAWQ-991
> URL: https://issues.apache.org/jira/browse/HAWQ-991
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Affects Versions: 2.1.0.0-incubating
>Reporter: hongwu
>Assignee: hongwu
> Fix For: backlog
>
>
> Scenario:
> 1. Cluster disaster recovery: two clusters co-exist, and data is periodically
> imported from Cluster A to Cluster B; the data needs to be registered to
> Cluster B.
> 2. Rollback of a table: checkpoints are taken somewhere, and the table needs
> to be rolled back to a previous checkpoint.
> Description:
> Register according to a .yml configuration file:
> hawq register [-h hostname] [-p port] [-U username] [-d databasename] [-c
> config] [--force][--repair]
> Behaviors:
> 1. If the table doesn't exist, automatically create the table and register
> the files listed in the .yml configuration file. The filesize specified in
> the .yml is used to update the catalog table.
> 2. If the table already exists and neither --force nor --repair is given, do
> not create any table; directly register the files specified in the .yml file
> to the table. Note that if a file is under the table directory in HDFS, an
> error is thrown, i.e. to-be-registered files should not be under the table
> path.
> 3. If the table already exists and --force is specified, clear all the
> catalog contents in pg_aoseg.pg_paqseg_$relid while keeping the files on
> HDFS, and then re-register all the files to the table. This is for scenario 2.
> 4. If the table already exists and --repair is specified, change both the
> file folder and the catalog table pg_aoseg.pg_paqseg_$relid to the state the
> .yml file describes. Note that files newly generated since the checkpoint may
> be deleted here. Also note that all the files in the .yml file should be
> under the table folder on HDFS. Limitation: does not support hash table
> redistribution, table truncate, or table drop. This is for scenario 3.
> Requirements:
> 1. The to-be-registered file path has to be colocated with HAWQ in the same
> HDFS cluster.
> 2. If the to-be-registered table is a hash table, the registered file number
> should be one or a multiple of the hash table bucket number.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-916) Replace com.pivotal.hawq package name to org.apache.hawq

2017-08-10 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-916:

Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> Replace com.pivotal.hawq package name to org.apache.hawq
> 
>
> Key: HAWQ-916
> URL: https://issues.apache.org/jira/browse/HAWQ-916
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: backlog
>
> Attachments: pivotal.txt
>
>
> com.pivotal.hawq.mapreduce types are referenced in at least the following 
> apache hawq (incubating) directories, master branch:
> contrib/hawq-hadoop
> contrib/hawq-hadoop/hawq-mapreduce-tool
> contrib/hawq-hadoop/hawq-mapreduce-parquet
> contrib/hawq-hadoop/hawq-mapreduce-common
> contrib/hawq-hadoop/hawq-mapreduce-ao
> contrib/hawq-hadoop/target/apidocs



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1514) TDE feature makes libhdfs3 require openssl1.1

2017-08-08 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1514:
-
Fix Version/s: 2.3.0.0-incubating

> TDE feature makes libhdfs3 require openssl1.1
> -
>
> Key: HAWQ-1514
> URL: https://issues.apache.org/jira/browse/HAWQ-1514
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: libhdfs
>Reporter: Yi Jin
>Assignee: Radar Lei
> Fix For: 2.3.0.0-incubating
>
>
> The new TDE feature delivered in libhdfs3 requires a specific version of
> OpenSSL; at least per my test, 1.0.21 does not work, while a library built
> from 1.1 sources passed.
> So maybe we need some build and installation instruction improvements.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1514) TDE feature makes libhdfs3 require openssl1.1

2017-08-08 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1514:


 Summary: TDE feature makes libhdfs3 require openssl1.1
 Key: HAWQ-1514
 URL: https://issues.apache.org/jira/browse/HAWQ-1514
 Project: Apache HAWQ
  Issue Type: Task
  Components: libhdfs
Reporter: Yi Jin
Assignee: Radar Lei


The new TDE feature delivered in libhdfs3 requires a specific version of
OpenSSL; at least per my test, 1.0.21 does not work, while a library built from
1.1 sources passed.

So maybe we need some build and installation instruction improvements.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1498) Segments keep open file descriptors for deleted files

2017-08-07 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117907#comment-16117907
 ] 

Yi Jin commented on HAWQ-1498:
--

The idea behind the fix is to explicitly release the connections that were
used to drop objects in HDFS. This logic is triggered when the current
transaction ends, no matter whether it commits or rolls back. Cached
connections used only for reading or updating are not flushed. I roughly have
this fixed in my environment, and I will propose a pull request for review.
Thanks.

> Segments keep open file descriptors for deleted files
> -
>
> Key: HAWQ-1498
> URL: https://issues.apache.org/jira/browse/HAWQ-1498
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Harald Bögeholz
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> I have been running some large computations in HAWQ using psql on the master.
> These computations created temporary tables and dropped them again.
> Nevertheless, free disk space in HDFS decreased by much more than it should.
> While the psql session on the master was still open, I investigated on one of
> the slave machines.
> HDFS is stored on /mds:
> {noformat}
> [root@mds-hdp-04 ~]# ls -l /mds
> total 36
> drwxr-xr-x. 3 root  root4096 Jun 14 04:23 falcon
> drwxr-xr-x. 3 root  root4096 Jun 14 04:42 hdfs
> drwx--. 2 root  root   16384 Jun  8 02:48 lost+found
> drwxr-xr-x. 5 storm hadoop  4096 Jun 14 04:45 storm
> drwxr-xr-x. 4 root  root4096 Jun 14 04:43 yarn
> drwxr-xr-x. 2 zookeeper hadoop  4096 Jun 14 04:39 zookeeper
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks  Used Available Use% Mounted on
> /dev/vdc   515928320 314560220 175137316  65% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> {noformat}
> Note that there is a more than 200 GB difference between the disk space used 
> according to df and the sum of all files on that file system according to du.
> I have found the culprit to be several postgres processes running as gpadmin 
> and holding open file descriptors to deleted files. Here are the first few:
> {noformat}
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> postgres 665334 gpadmin   18r   REG 253,32 134217728 0  9438234 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482
>  (deleted)
> postgres 665334 gpadmin   34r   REG 253,32 24488 0  9438114 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398
>  (deleted)
> postgres 665334 gpadmin   35r   REG 253,32   199 0  9438115 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398_187044.meta
>  (deleted)
> postgres 665334 gpadmin   37r   REG 253,32 134217728 0  9438208 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446
>  (deleted)
> postgres 665334 gpadmin   38r   REG 253,32   1048583 0  9438209 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446_187092.meta
>  (deleted)
> postgres 665334 gpadmin   39r   REG 253,32   1048583 0  9438235 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482_187128.meta
>  (deleted)
> postgres 665334 gpadmin   40r   REG 253,32 134217728 0  9438262 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555
>  (deleted)
> postgres 665334 gpadmin   41r   REG 253,32   1048583 0  9438263 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555_187201.meta
>  (deleted)
> postgres 665334 gpadmin   42r   REG 253,32 134217728 0  9438285 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602
>  (deleted)
> postgres 665334 gpadmin   43r   REG 253,32   1048583 0  9438286 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602_187248.meta
>  (deleted)
> {noformat}
> As soon as I close the psql session on the master, the disk space is freed
> on the slaves:
> {noformat}
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/vdc   515928320 89992720 399704816  19% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> {noformat}
> I believe this to be a bug. At least to me it looks like very undesirable
> behavior.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Commented] (HAWQ-1310) Reformat resource_negotiator()

2017-08-07 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117827#comment-16117827
 ] 

Yi Jin commented on HAWQ-1310:
--

Recently assigned to Amy, per her own request, to fix. Thanks, Amy.

> Reformat resource_negotiator()
> --
>
> Key: HAWQ-1310
> URL: https://issues.apache.org/jira/browse/HAWQ-1310
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Yi Jin
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> The indentation in function resource_negotiator() is not aligned.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HAWQ-1310) Reformat resource_negotiator()

2017-08-07 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1310:


Assignee: Amy  (was: Yi Jin)

> Reformat resource_negotiator()
> --
>
> Key: HAWQ-1310
> URL: https://issues.apache.org/jira/browse/HAWQ-1310
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Yi Jin
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> The indentation in function resource_negotiator() is not aligned.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1400) Add a small sleeping period in feature test utility before dropping test database

2017-08-07 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1400.


> Add a small sleeping period in feature test utility before dropping test 
> database
> -
>
> Key: HAWQ-1400
> URL: https://issues.apache.org/jira/browse/HAWQ-1400
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> This improvement is to raise the stability of the feature tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1400) Add a small sleeping period in feature test utility before dropping test database

2017-08-07 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1400.
--
   Resolution: Fixed
Fix Version/s: 2.3.0.0-incubating

This fix has been delivered.

> Add a small sleeping period in feature test utility before dropping test 
> database
> -
>
> Key: HAWQ-1400
> URL: https://issues.apache.org/jira/browse/HAWQ-1400
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> This improvement is to raise the stability of the feature tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1333) Change access mode of source files for HAWQ

2017-08-07 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117561#comment-16117561
 ] 

Yi Jin commented on HAWQ-1333:
--

Thanks Amy, it would be nice to have it delivered in August. I hope this
expectation is not pushing you too hard.

> Change access mode of source files for HAWQ  
> -
>
> Key: HAWQ-1333
> URL: https://issues.apache.org/jira/browse/HAWQ-1333
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> Several source files in HAWQ have access mode 755, e.g. *.c, *.cpp, and *.h
> files. To improve security, we will change these source files' access mode
> to 644.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HAWQ-1512) Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

2017-08-06 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1512:


 Summary: Check Apache HAWQ mandatory libraries to match LC20, LC30 
license criteria
 Key: HAWQ-1512
 URL: https://issues.apache.org/jira/browse/HAWQ-1512
 Project: Apache HAWQ
  Issue Type: Task
  Components: Build
Reporter: Yi Jin
Assignee: Radar Lei
 Fix For: 2.3.0.0-incubating


Check Apache HAWQ mandatory libraries to match LC20, LC30 license criteria

Check the following page for the criteria

https://cwiki.apache.org/confluence/display/HAWQ/ASF+Maturity+Evaluation



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-786) Framework to support pluggable formats and file systems

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-786:

Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Framework to support pluggable formats and file systems
> ---
>
> Key: HAWQ-786
> URL: https://issues.apache.org/jira/browse/HAWQ-786
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.3.0.0-incubating
>
> Attachments: ORCdesign-v0.1-2016-06-17.pdf
>
>
> In current HAWQ, two native formats are supported: AO and parquet. Now we
> want to support ORC. A framework to support native C/C++ pluggable formats is
> needed to support ORC more easily, and it can also potentially be used for
> fast external data access.
> There are also a lot of requests for supporting S3, Ceph and other file
> systems; this is closely related to pluggable formats, so this JIRA is
> proposing a framework to support both.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-947) set work_mem cannot work

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-947.
---
Resolution: Fixed

This is a deprecated feature; no further fix is needed.

> set work_mem cannot work
> 
>
> Key: HAWQ-947
> URL: https://issues.apache.org/jira/browse/HAWQ-947
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Biao Wu
>Assignee: Lei Chang
> Fix For: 2.3.0.0-incubating
>
>
> HAWQ version is 2.0.1.0 build dev.
> EXPLAIN ANALYZE shows:
> Work_mem: 9554K bytes max, 63834K bytes wanted.
> Then work_mem was set to '512MB', but it did not work:
> {code:sql}
> test=# EXPLAIN ANALYZE SELECT count(DISTINCT item_sku_id)
> test-# FROM gdm_m03_item_sku_da
> test-# WHERE item_origin ='中国大陆';
>   
>   
>QUERY PLAN
> 
> 
>  Aggregate  (cost=54177150.69..54177150.70 rows=1 width=8)
>Rows out:  Avg 1.0 rows x 1 workers.  
> Max/Last(seg-1:BJHC-HEBE-9014.hadoop.jd.local/seg-1:BJHC-HEBE-9014.hadoop.jd.local)
>  1/1 rows with 532498/532498 ms to end, start offset by 201/201 ms.
>->  Gather Motion 306:1  (slice2; segments: 306)  
> (cost=54177147.60..54177150.68 rows=1 width=8)
>  Rows out:  Avg 306.0 rows x 1 workers at destination.  
> Max/Last(seg-1:BJHC-HEBE-9014.hadoop.jd.local/seg-1:BJHC-HEBE-9014.hadoop.jd.local)
>  306/306 rows with 529394/529394 ms to first row, 532498/532498 ms to end, 
> start offset b
> y 201/201 ms.
>  ->  Aggregate  (cost=54177147.60..54177147.61 rows=1 width=8)
>Rows out:  Avg 1.0 rows x 306 workers.  
> Max/Last(seg305:BJHC-HEBE-9031.hadoop.jd.local/seg258:BJHC-HEBE-9029.hadoop.jd.local)
>  1/1 rows with 530367/532274 ms to end, start offset by 396/246 ms.
>Executor memory:  9554K bytes avg, 9554K bytes max 
> (seg305:BJHC-HEBE-9031.hadoop.jd.local).
>Work_mem used:  9554K bytes avg, 9554K bytes max 
> (seg305:BJHC-HEBE-9031.hadoop.jd.local).
>Work_mem wanted: 63695K bytes avg, 63834K bytes max 
> (seg296:BJHC-HEBE-9031.hadoop.jd.local) to lessen workfile I/O affecting 306 
> workers.
>->  Redistribute Motion 306:306  (slice1; segments: 306)  
> (cost=0.00..53550018.97 rows=819776 width=11)
>  Hash Key: gdm_m03_item_sku_da.item_sku_id
>  Rows out:  Avg 820083.0 rows x 306 workers at 
> destination.  
> Max/Last(seg296:BJHC-HEBE-9031.hadoop.jd.local/seg20:BJHC-HEBE-9016.hadoop.jd.local)
>  821880/818660 rows with 769/771 ms to first row, 524681/525063 ms to e
> nd, start offset by 352/307 ms.
>  ->  Append-only Scan on gdm_m03_item_sku_da  
> (cost=0.00..48532990.00 rows=819776 width=11)
>Filter: item_origin::text = '中国大陆'::text
>Rows out:  Avg 820083.0 rows x 306 workers.  
> Max/Last(seg46:BJHC-HEBE-9017.hadoop.jd.local/seg5:BJHC-HEBE-9015.hadoop.jd.local)
>  893390/810582 rows with 28/127 ms to first row, 73062/526318 ms to end, 
> start off
> set by 354/458 ms.
>  Slice statistics:
>(slice0)Executor memory: 1670K bytes.
>(slice1)Executor memory: 3578K bytes avg x 306 workers, 4711K bytes 
> max (seg172:BJHC-HEBE-9024.hadoop.jd.local).
>(slice2)  * Executor memory: 10056K bytes avg x 306 workers, 10056K bytes 
> max (seg305:BJHC-HEBE-9031.hadoop.jd.local).  Work_mem: 9554K bytes max, 
> 63834K bytes wanted.
>  Statement statistics:
>Memory used: 262144K bytes
>Memory wanted: 64233K bytes
>  Settings:  default_hash_table_bucket_number=6
>  Dispatcher statistics:
>executors used(total/cached/new connection): (612/0/612); dispatcher 
> time(total/connection/dispatch data): (489.036 ms/192.741 ms/293.357 ms).
>dispatch data time(max/min/avg): (37.798 ms/0.011 ms/3.504 ms); consume 
> executor data time(max/min/avg): (0.016 ms/0.002 ms/0.005 ms); free executor 
> time(max/min/avg): (0.000 ms/0.000 ms/0.000 ms).
>  Data locality statistics:
>data locality ratio: 0.864; virtual segment number: 306; different host 
> number: 17; virtual segment number per host(avg/min/max): (18/18/18); segment 
> size(avg/min/max): (3435087582.693 B/3391891296 B/

[jira] [Assigned] (HAWQ-1498) Segments keep open file descriptors for deleted files

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1498:


Assignee: Yi Jin  (was: Lin Wen)

> Segments keep open file descriptors for deleted files
> -
>
> Key: HAWQ-1498
> URL: https://issues.apache.org/jira/browse/HAWQ-1498
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Harald Bögeholz
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> I have been running some large computations in HAWQ using psql on the master.
> These computations created temporary tables and dropped them again.
> Nevertheless, free disk space in HDFS decreased by much more than it should.
> While the psql session on the master was still open, I investigated on one of
> the slave machines.
> HDFS is stored on /mds:
> {noformat}
> [root@mds-hdp-04 ~]# ls -l /mds
> total 36
> drwxr-xr-x. 3 root  root4096 Jun 14 04:23 falcon
> drwxr-xr-x. 3 root  root4096 Jun 14 04:42 hdfs
> drwx--. 2 root  root   16384 Jun  8 02:48 lost+found
> drwxr-xr-x. 5 storm hadoop  4096 Jun 14 04:45 storm
> drwxr-xr-x. 4 root  root4096 Jun 14 04:43 yarn
> drwxr-xr-x. 2 zookeeper hadoop  4096 Jun 14 04:39 zookeeper
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks  Used Available Use% Mounted on
> /dev/vdc   515928320 314560220 175137316  65% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> {noformat}
> Note that there is a more than 200 GB difference between the disk space used 
> according to df and the sum of all files on that file system according to du.
> I have found the culprit to be several postgres processes running as gpadmin 
> and holding open file descriptors to deleted files. Here are the first few:
> {noformat}
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> postgres 665334 gpadmin   18r   REG 253,32 134217728 0  9438234 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482
>  (deleted)
> postgres 665334 gpadmin   34r   REG 253,32 24488 0  9438114 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398
>  (deleted)
> postgres 665334 gpadmin   35r   REG 253,32   199 0  9438115 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398_187044.meta
>  (deleted)
> postgres 665334 gpadmin   37r   REG 253,32 134217728 0  9438208 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446
>  (deleted)
> postgres 665334 gpadmin   38r   REG 253,32   1048583 0  9438209 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446_187092.meta
>  (deleted)
> postgres 665334 gpadmin   39r   REG 253,32   1048583 0  9438235 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482_187128.meta
>  (deleted)
> postgres 665334 gpadmin   40r   REG 253,32 134217728 0  9438262 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555
>  (deleted)
> postgres 665334 gpadmin   41r   REG 253,32   1048583 0  9438263 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555_187201.meta
>  (deleted)
> postgres 665334 gpadmin   42r   REG 253,32 134217728 0  9438285 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602
>  (deleted)
> postgres 665334 gpadmin   43r   REG 253,32   1048583 0  9438286 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602_187248.meta
>  (deleted)
> {noformat}
> As soon as I close the psql session on the master, the disk space is freed
> on the slaves:
> {noformat}
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/vdc   515928320 89992720 399704816  19% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> {noformat}
> I believe this to be a bug. At least to me it looks like very undesirable
> behavior.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1498) Segments keep open file descriptors for deleted files

2017-08-06 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115801#comment-16115801
 ] 

Yi Jin commented on HAWQ-1498:
--

I will fix this issue soon, in version 2.3.0.0-incubating.

> Segments keep open file descriptors for deleted files
> -
>
> Key: HAWQ-1498
> URL: https://issues.apache.org/jira/browse/HAWQ-1498
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Harald Bögeholz
>Assignee: Lin Wen
> Fix For: 2.3.0.0-incubating
>
>
> I have been running some large computations in HAWQ using psql on the master.
> These computations created temporary tables and dropped them again.
> Nevertheless, free disk space in HDFS decreased by much more than it should.
> While the psql session on the master was still open, I investigated on one of
> the slave machines.
> HDFS is stored on /mds:
> {noformat}
> [root@mds-hdp-04 ~]# ls -l /mds
> total 36
> drwxr-xr-x. 3 root  root4096 Jun 14 04:23 falcon
> drwxr-xr-x. 3 root  root4096 Jun 14 04:42 hdfs
> drwx--. 2 root  root   16384 Jun  8 02:48 lost+found
> drwxr-xr-x. 5 storm hadoop  4096 Jun 14 04:45 storm
> drwxr-xr-x. 4 root  root4096 Jun 14 04:43 yarn
> drwxr-xr-x. 2 zookeeper hadoop  4096 Jun 14 04:39 zookeeper
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks  Used Available Use% Mounted on
> /dev/vdc   515928320 314560220 175137316  65% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> {noformat}
> Note that there is a more than 200 GB difference between the disk space used 
> according to df and the sum of all files on that file system according to du.
> I have found the culprit to be several postgres processes running as gpadmin 
> and holding open file descriptors to deleted files. Here are the first few:
> {noformat}
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> postgres 665334 gpadmin   18r   REG 253,32 134217728 0  9438234 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482
>  (deleted)
> postgres 665334 gpadmin   34r   REG 253,32 24488 0  9438114 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398
>  (deleted)
> postgres 665334 gpadmin   35r   REG 253,32   199 0  9438115 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922398_187044.meta
>  (deleted)
> postgres 665334 gpadmin   37r   REG 253,32 134217728 0  9438208 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446
>  (deleted)
> postgres 665334 gpadmin   38r   REG 253,32   1048583 0  9438209 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922446_187092.meta
>  (deleted)
> postgres 665334 gpadmin   39r   REG 253,32   1048583 0  9438235 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922482_187128.meta
>  (deleted)
> postgres 665334 gpadmin   40r   REG 253,32 134217728 0  9438262 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555
>  (deleted)
> postgres 665334 gpadmin   41r   REG 253,32   1048583 0  9438263 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir193/blk_1073922555_187201.meta
>  (deleted)
> postgres 665334 gpadmin   42r   REG 253,32 134217728 0  9438285 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602
>  (deleted)
> postgres 665334 gpadmin   43r   REG 253,32   1048583 0  9438286 
> /mds/hdfs/data/current/BP-23056860-118.138.237.114-1497415333069/current/finalized/subdir2/subdir194/blk_1073922602_187248.meta
>  (deleted)
> {noformat}
> As soon as I close the psql session on the master, the disk space is freed
> on the slaves:
> {noformat}
> [root@mds-hdp-04 ~]# df /mds
> Filesystem 1K-blocks Used Available Use% Mounted on
> /dev/vdc   515928320 89992720 399704816  19% /mds
> [root@mds-hdp-04 ~]# du -s /mds
> 89918952  /mds
> [root@mds-hdp-04 ~]# lsof +L1 | grep /mds/hdfs | head -10
> {noformat}
> I believe this to be a bug. At least to me it looks like very undesirable
> behavior.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-783) Remove quicklz in metadata

2017-08-06 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115799#comment-16115799
 ] 

Yi Jin commented on HAWQ-783:
-

The fix mentioned in this JIRA causes a catalog change, which affects the
upgrade routine; we should check and plan how and when to do it.

> Remove quicklz in metadata
> --
>
> Key: HAWQ-783
> URL: https://issues.apache.org/jira/browse/HAWQ-783
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: backlog
>
>
> This is the remaining work of the complete quicklz removal, besides HAWQ-780
> (Remove quicklz compression related code but keep related metadata in the
> short term).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-783) Remove quicklz in metadata

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-783:

Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> Remove quicklz in metadata
> --
>
> Key: HAWQ-783
> URL: https://issues.apache.org/jira/browse/HAWQ-783
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: backlog
>
>
> This is the remaining work of the complete quicklz removal, besides HAWQ-780
> (Remove quicklz compression related code but keep related metadata in the
> short term).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-950) PXF support for Float filters encoded in header data

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-950:

Fix Version/s: (was: 2.3.0.0-incubating)
   backlog

> PXF support for Float filters encoded in header data
> 
>
> Key: HAWQ-950
> URL: https://issues.apache.org/jira/browse/HAWQ-950
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Goden Yao
> Fix For: backlog
>
>
> HAWQ-779, contributed by [~jiadx], introduced the ability for HAWQ to
> serialize filters on float columns and send the data to PXF. However, PXF is
> not currently capable of parsing float values in the string filter.
> We need to
> 1. add support for the float type on the Java side;
> 2. add a unit test for this change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1224) It would be useful to have a gradle task that runs a PXF instance

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1224:
-
Fix Version/s: (was: 2.4.0.0-incubating)
   backlog

> It would be useful to have a gradle task that runs a PXF instance
> -
>
> Key: HAWQ-1224
> URL: https://issues.apache.org/jira/browse/HAWQ-1224
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: backlog
>
> Attachments: HAWQ-1224.patch.txt
>
>
> For testing and tinkering it is very useful to be able to just say
>   $ gradle appRun
> and have a working instance of PXF running.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1224) It would be useful to have a gradle task that runs a PXF instance

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1224:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> It would be useful to have a gradle task that runs a PXF instance
> -
>
> Key: HAWQ-1224
> URL: https://issues.apache.org/jira/browse/HAWQ-1224
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.4.0.0-incubating
>
> Attachments: HAWQ-1224.patch.txt
>
>
> For testing and tinkering it is very useful to be able to just say
>   $ gradle appRun
> and have a working instance of PXF running.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-311) Data Transfer tool

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-311:

Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Data Transfer tool
> --
>
> Key: HAWQ-311
> URL: https://issues.apache.org/jira/browse/HAWQ-311
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: NILESH MANKAR
> Fix For: backlog
>
>
> Some users have asked for a tool to transfer data between HAWQ clusters. It
> is quite useful for data migration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-311) Data Transfer tool

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-311:

Fix Version/s: (was: 2.4.0.0-incubating)
   backlog

> Data Transfer tool
> --
>
> Key: HAWQ-311
> URL: https://issues.apache.org/jira/browse/HAWQ-311
> Project: Apache HAWQ
>  Issue Type: Wish
>  Components: Storage
>Reporter: Lei Chang
>Assignee: NILESH MANKAR
> Fix For: backlog
>
>
> Some users have asked for a tool to transfer data between HAWQ clusters. It
> is quite useful for data migration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1058) Create a separated tarball for libhdfs3

2017-08-06 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1058:
-
Fix Version/s: (was: 2.3.0.0-incubating)
   2.4.0.0-incubating

> Create a separated tarball for libhdfs3
> ---
>
> Key: HAWQ-1058
> URL: https://issues.apache.org/jira/browse/HAWQ-1058
> Project: Apache HAWQ
>  Issue Type: Test
>  Components: libhdfs
>Affects Versions: 2.0.0.0-incubating
>Reporter: Zhanwei Wang
>Assignee: Lei Chang
> Fix For: 2.4.0.0-incubating
>
>
> As discussed on the dev mailing list, Ramon proposed creating a separate
> tarball for libhdfs3 at HAWQ release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1333) Change access mode of source files for HAWQ

2017-08-06 Thread Yi Jin (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16115769#comment-16115769
 ] 

Yi Jin commented on HAWQ-1333:
--

Is there a plan for when to fix this? Thanks.

> Change access mode of source files for HAWQ  
> -
>
> Key: HAWQ-1333
> URL: https://issues.apache.org/jira/browse/HAWQ-1333
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Amy
>Assignee: Amy
> Fix For: 2.3.0.0-incubating
>
>
> There are several source files's access mode is 755 in HAWQ, e.g.  *.c *.cpp 
> *.h files. In order to guarantee the security, will change the source files' 
> access mode to 644. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (HAWQ-1439) tolerate system time being changed to earlier point when checking resource context timeout

2017-06-24 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1439.


> tolerate system time being changed to earlier point when checking resource 
> context timeout
> --
>
> Key: HAWQ-1439
> URL: https://issues.apache.org/jira/browse/HAWQ-1439
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> When the system time is changed to an earlier point, the resource context
> may be timed out because the context's latest action time is later than the
> checking time.
> This fix adjusts the latest action time to the new system time when the
> clock is moved to an earlier point.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HAWQ-1439) tolerate system time being changed to earlier point when checking resource context timeout

2017-06-24 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1439.
--
Resolution: Fixed

> tolerate system time being changed to earlier point when checking resource 
> context timeout
> --
>
> Key: HAWQ-1439
> URL: https://issues.apache.org/jira/browse/HAWQ-1439
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> When the system time is changed to an earlier point, the resource context
> may be timed out because the context's latest action time is later than the
> checking time.
> This fix adjusts the latest action time to the new system time when the
> clock is moved to an earlier point.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1439) tolerate system time being changed to earlier point when checking resource context timeout

2017-04-24 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin updated HAWQ-1439:
-
Summary: tolerate system time being changed to earlier point when checking 
resource context timeout  (was: tolerate system time changed to earlier point 
when checking resource context timeout)

> tolerate system time being changed to earlier point when checking resource 
> context timeout
> --
>
> Key: HAWQ-1439
> URL: https://issues.apache.org/jira/browse/HAWQ-1439
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> When the system time is changed to an earlier point, the resource context 
> may be wrongly timed out because the context's latest action time is later 
> than the checking time.
> This fix adjusts the latest action time to the new system time when the 
> clock is moved to an earlier point.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1439) tolerate system time changed to earlier point when checking resource context timeout

2017-04-24 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1439:


Assignee: Yi Jin  (was: Ed Espino)

> tolerate system time changed to earlier point when checking resource context 
> timeout
> 
>
> Key: HAWQ-1439
> URL: https://issues.apache.org/jira/browse/HAWQ-1439
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> When the system time is changed to an earlier point, the resource context 
> may be wrongly timed out because the context's latest action time is later 
> than the checking time.
> This fix adjusts the latest action time to the new system time when the 
> clock is moved to an earlier point.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1439) tolerate system time changed to earlier point when checking resource context timeout

2017-04-24 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1439:


 Summary: tolerate system time changed to earlier point when 
checking resource context timeout
 Key: HAWQ-1439
 URL: https://issues.apache.org/jira/browse/HAWQ-1439
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Resource Manager
Reporter: Yi Jin
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


When the system time is changed to an earlier point, the resource context may 
be wrongly timed out because the context's latest action time is later than 
the checking time.

This fix adjusts the latest action time to the new system time when the clock 
is moved to an earlier point.
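A minimal C sketch of the tolerance logic described above, assuming
second-granularity timestamps; the type and field names are illustrative, not
the actual HAWQ resource manager code:

#include <stdbool.h>
#include <time.h>

typedef struct ResourceContext
{
    time_t latest_action_time;   /* last time this context saw activity */
} ResourceContext;

/* Check whether a context has been idle longer than timeout_secs.
 * If the wall clock was moved backwards (now < latest_action_time), pull
 * the stored timestamp back to 'now' first; otherwise the idle-time
 * subtraction would be computed against a timestamp in the "future",
 * which is the spurious-timeout condition this issue describes. */
static bool context_timed_out(ResourceContext *ctx, int timeout_secs)
{
    time_t now = time(NULL);

    if (now < ctx->latest_action_time)
        ctx->latest_action_time = now;   /* clock went backwards: resync */

    return (now - ctx->latest_action_time) > timeout_secs;
}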



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1433) ALTER RESOURCE QUEUE DDL does not check the format of attribute MEMORY_CLUSTER_LIMIT and CORE_CLUSTER_LIMIT

2017-04-17 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1433:


Assignee: Xiang Sheng  (was: Yi Jin)

> ALTER RESOURCE QUEUE DDL does not check the format of attribute 
> MEMORY_CLUSTER_LIMIT and CORE_CLUSTER_LIMIT
> ---
>
> Key: HAWQ-1433
> URL: https://issues.apache.org/jira/browse/HAWQ-1433
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Xiang Sheng
> Fix For: 2.3.0.0-incubating
>
>
> Shubham Sharma 
> 2:11 PM (2 hours ago)
> to user, sebastiao.gone. 
> Hello Sebastiao, I think you have encountered the following issue -
> 1 - Problem -  alter resource queue pg_default with 
> (CORE_LIMIT_CLUSTER/MEMORY_LIMIT_CLUSTER=90);
> gpadmin=# select * from pg_resqueue;
>  column                | pg_root | pg_default
> -----------------------+---------+-------------------------------
>  parentoid             | 0       | 9800
>  activestats           | -1      | 20
>  memorylimit           | 100%    | 50%
>  corelimit             | 100%    | 50%
>  resovercommit         | 2       | 2
>  allocpolicy           | even    | even
>  vsegresourcequota     |         | mem:256mb
>  nvsegupperlimit       | 0       | 0
>  nvseglowerlimit       | 0       | 0
>  nvsegupperlimitperseg | 0       | 0
>  nvseglowerlimitperseg | 0       | 0
>  creationtime          |         |
>  updatetime            |         | 2017-04-12 22:45:55.056102+01
>  status                | branch  |
> (2 rows)
> gpadmin=# alter resource queue pg_default with (CORE_LIMIT_CLUSTER=90);
> ALTER QUEUE
> gpadmin=# select * from test;
>  a 
> ---
> (0 rows)
> gpadmin=# \q
> 2 - restart hawq cluster
> 3 - ERROR
> [gpadmin@hdp3 ~]$ psql
> psql (8.2.15)
> Type "help" for help.
> gpadmin=# select * from test;
> WARNING:  FD 31 having errors raised. errno 104
> ERROR:  failed to register in resource manager, failed to receive content 
> (pquery.c:787)
> 4 - alter resource queue pg_default with 
> (CORE_LIMIT_CLUSTER/MEMORY_LIMIT_CLUSTER=50%); -- let's switch back
> ! Not allowed !
> alter resource queue pg_default with (CORE_LIMIT_CLUSTER=50%);
> WARNING:  FD 33 having errors raised. errno 104
> ERROR:  failed to register in resource manager, failed to receive content 
> (resqueuecommand.c:364)
> 5 - How to fix - please be extra careful when using this workaround.
> gpadmin=# begin;
> BEGIN
> gpadmin=# set allow_system_table_mods='dml';
> SET
> gpadmin=# select * from pg_resqueue where corelimit=90;
>  column                | pg_default
> -----------------------+-------------------------------
>  parentoid             | 9800
>  activestats           | 20
>  memorylimit           | 50%
>  corelimit             | 90
>  resovercommit         | 2
>  allocpolicy           | even
>  vsegresourcequota     | mem:256mb
>  nvsegupperlimit       | 0
>  nvseglowerlimit       | 0
>  nvsegupperlimitperseg | 0
>  nvseglowerlimitperseg | 0
>  creationtime          |
>  updatetime            | 2017-04-12 22:59:30.092823+01
>  status                |
> (1 row)
> gpadmin=# update pg_resqueue set corelimit='50%' where corelimit=90;
> UPDATE 1
> gpadmin=# commit;
> COMMIT
> 6 - The system should be back to normal
> gpadmin=# select * from test;
>  a 
> ---
> (0 rows)
> Regards,
> Shubh



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1433) ALTER RESOURCE QUEUE DDL does not check the format of attribute MEMORY_CLUSTER_LIMIT

2017-04-12 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1433:


Assignee: Yi Jin  (was: Ed Espino)

> ALTER RESOURCE QUEUE DDL does not check the format of attribute 
> MEMORY_CLUSTER_LIMIT
> 
>
> Key: HAWQ-1433
> URL: https://issues.apache.org/jira/browse/HAWQ-1433
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.3.0.0-incubating
>
>
> Shubham Sharma 
> 2:11 PM (2 hours ago)
> to user, sebastiao.gone. 
> Hello Sebastiao, I think you have encountered the following issue -
> 1 - Problem -  alter resource queue pg_default with 
> (CORE_LIMIT_CLUSTER/MEMORY_LIMIT_CLUSTER=90);
> gpadmin=# select * from pg_resqueue;
>  column                | pg_root | pg_default
> -----------------------+---------+-------------------------------
>  parentoid             | 0       | 9800
>  activestats           | -1      | 20
>  memorylimit           | 100%    | 50%
>  corelimit             | 100%    | 50%
>  resovercommit         | 2       | 2
>  allocpolicy           | even    | even
>  vsegresourcequota     |         | mem:256mb
>  nvsegupperlimit       | 0       | 0
>  nvseglowerlimit       | 0       | 0
>  nvsegupperlimitperseg | 0       | 0
>  nvseglowerlimitperseg | 0       | 0
>  creationtime          |         |
>  updatetime            |         | 2017-04-12 22:45:55.056102+01
>  status                | branch  |
> (2 rows)
> gpadmin=# alter resource queue pg_default with (CORE_LIMIT_CLUSTER=90);
> ALTER QUEUE
> gpadmin=# select * from test;
>  a 
> ---
> (0 rows)
> gpadmin=# \q
> 2 - restart hawq cluster
> 3 - ERROR
> [gpadmin@hdp3 ~]$ psql
> psql (8.2.15)
> Type "help" for help.
> gpadmin=# select * from test;
> WARNING:  FD 31 having errors raised. errno 104
> ERROR:  failed to register in resource manager, failed to receive content 
> (pquery.c:787)
> 4 - alter resource queue pg_default with 
> (CORE_LIMIT_CLUSTER/MEMORY_LIMIT_CLUSTER=50%); -- let's switch back
> ! Not allowed !
> alter resource queue pg_default with (CORE_LIMIT_CLUSTER=50%);
> WARNING:  FD 33 having errors raised. errno 104
> ERROR:  failed to register in resource manager, failed to receive content 
> (resqueuecommand.c:364)
> 5 - How to fix - please be extra careful when using this workaround.
> gpadmin=# begin;
> BEGIN
> gpadmin=# set allow_system_table_mods='dml';
> SET
> gpadmin=# select * from pg_resqueue where corelimit=90;
>  column                | pg_default
> -----------------------+-------------------------------
>  parentoid             | 9800
>  activestats           | 20
>  memorylimit           | 50%
>  corelimit             | 90
>  resovercommit         | 2
>  allocpolicy           | even
>  vsegresourcequota     | mem:256mb
>  nvsegupperlimit       | 0
>  nvseglowerlimit       | 0
>  nvsegupperlimitperseg | 0
>  nvseglowerlimitperseg | 0
>  creationtime          |
>  updatetime            | 2017-04-12 22:59:30.092823+01
>  status                |
> (1 row)
> gpadmin=# update pg_resqueue set corelimit='50%' where corelimit=90;
> UPDATE 1
> gpadmin=# commit;
> COMMIT
> 6 - The system should be back to normal
> gpadmin=# select * from test;
>  a 
> ---
> (0 rows)
> Regards,
> Shubh



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1433) ALTER RESOURCE QUEUE DDL does not check the format of attribute MEMORY_CLUSTER_LIMIT

2017-04-12 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1433:


 Summary: ALTER RESOURCE QUEUE DDL does not check the format of 
attribute MEMORY_CLUSTER_LIMIT
 Key: HAWQ-1433
 URL: https://issues.apache.org/jira/browse/HAWQ-1433
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Resource Manager
Reporter: Yi Jin
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


Shubham Sharma 
2:11 PM (2 hours ago)

to user, sebastiao.gone. 
Hello Sebastiao, I think you have encountered the following issue -

1 - Problem -  alter resource queue pg_default with 
(CORE_LIMIT_CLUSTER/MEMORY_LIMIT_CLUSTER=90);

gpadmin=# select * from pg_resqueue;
 column                | pg_root | pg_default
-----------------------+---------+-------------------------------
 parentoid             | 0       | 9800
 activestats           | -1      | 20
 memorylimit           | 100%    | 50%
 corelimit             | 100%    | 50%
 resovercommit         | 2       | 2
 allocpolicy           | even    | even
 vsegresourcequota     |         | mem:256mb
 nvsegupperlimit       | 0       | 0
 nvseglowerlimit       | 0       | 0
 nvsegupperlimitperseg | 0       | 0
 nvseglowerlimitperseg | 0       | 0
 creationtime          |         |
 updatetime            |         | 2017-04-12 22:45:55.056102+01
 status                | branch  |
(2 rows)

gpadmin=# alter resource queue pg_default with (CORE_LIMIT_CLUSTER=90);
ALTER QUEUE

gpadmin=# select * from test;
 a 
---
(0 rows)
gpadmin=# \q

2 - restart hawq cluster

3 - ERROR

[gpadmin@hdp3 ~]$ psql
psql (8.2.15)
Type "help" for help.
gpadmin=# select * from test;
WARNING:  FD 31 having errors raised. errno 104
ERROR:  failed to register in resource manager, failed to receive content 
(pquery.c:787)

4 - alter resource queue pg_default with 
(CORE_LIMIT_CLUSTER/MEMORY_LIMIT_CLUSTER=50%); -- let's switch back
! Not allowed !
alter resource queue pg_default with (CORE_LIMIT_CLUSTER=50%);
WARNING:  FD 33 having errors raised. errno 104
ERROR:  failed to register in resource manager, failed to receive content 
(resqueuecommand.c:364)

5 - How to fix - please be extra careful when using this workaround.
gpadmin=# begin;
BEGIN
gpadmin=# set allow_system_table_mods='dml';
SET
gpadmin=# select * from pg_resqueue where corelimit=90;
 column                | pg_default
-----------------------+-------------------------------
 parentoid             | 9800
 activestats           | 20
 memorylimit           | 50%
 corelimit             | 90
 resovercommit         | 2
 allocpolicy           | even
 vsegresourcequota     | mem:256mb
 nvsegupperlimit       | 0
 nvseglowerlimit       | 0
 nvsegupperlimitperseg | 0
 nvseglowerlimitperseg | 0
 creationtime          |
 updatetime            | 2017-04-12 22:59:30.092823+01
 status                |
(1 row)
gpadmin=# update pg_resqueue set corelimit='50%' where corelimit=90;
UPDATE 1
gpadmin=# commit;
COMMIT

6 - The system should be back to normal

gpadmin=# select * from test;
 a 
---
(0 rows)


Regards,
Shubh
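A sketch of the missing format check (function name hypothetical, and assuming
the cluster-limit attributes are meant to be percentage strings like '50%'):
rejecting a bare number such as 90 up front would have prevented the broken
catalog value shown above:

#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Accept only "N%" with 0 < N <= 100; reject bare numbers like "90",
 * which this issue shows slipping into pg_resqueue and then breaking
 * resource manager registration after a restart. */
static bool valid_cluster_limit(const char *value)
{
    size_t len = strlen(value);
    int pct = 0;

    if (len < 2 || len > 4 || value[len - 1] != '%')
        return false;                       /* must end with '%' */

    for (size_t i = 0; i < len - 1; i++)
    {
        if (!isdigit((unsigned char) value[i]))
            return false;                   /* digits only before '%' */
        pct = pct * 10 + (value[i] - '0');
    }
    return pct > 0 && pct <= 100;
}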




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1400) Add a small sleeping period in feature test utility before dropping test database

2017-03-21 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1400:


 Summary: Add a small sleeping period in feature test utility 
before dropping test database
 Key: HAWQ-1400
 URL: https://issues.apache.org/jira/browse/HAWQ-1400
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Tests
Reporter: Yi Jin
Assignee: Jiali Yao


This improvement raises the stability of the feature tests.
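As a rough sketch of the change (libpq-based and entirely hypothetical; this
is not the actual feature-test utility code), the utility would pause briefly
before issuing DROP DATABASE so lingering backends from the previous test can
disconnect:

#include <stdio.h>
#include <unistd.h>
#include <libpq-fe.h>

/* Drop the per-test database, sleeping briefly first: dropping a
 * database that still has live connections fails, which is a typical
 * source of feature-test flakiness. */
static void drop_test_database(PGconn *admin_conn, const char *dbname)
{
    char sql[256];

    sleep(1);   /* the "small sleeping period" this issue proposes */
    snprintf(sql, sizeof(sql), "DROP DATABASE IF EXISTS %s", dbname);
    PQclear(PQexec(admin_conn, sql));
}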



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1400) Add a small sleeping period in feature test utility before dropping test database

2017-03-21 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1400:


Assignee: Yi Jin  (was: Jiali Yao)

> Add a small sleeping period in feature test utility before dropping test 
> database
> -
>
> Key: HAWQ-1400
> URL: https://issues.apache.org/jira/browse/HAWQ-1400
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Yi Jin
>Assignee: Yi Jin
>
> This improvement raises the stability of the feature tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HAWQ-1321) failNames wrongly uses memory context to build message when ANALYZE failed

2017-02-15 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1321.
--
Resolution: Fixed

> failNames wrongly uses memory context to build message when ANALYZE failed
> --
>
> Key: HAWQ-1321
> URL: https://issues.apache.org/jira/browse/HAWQ-1321
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> I found a bug in generating the error message for ANALYZE when the message 
> size is large. 
> In analyzeStmt(), there is a variable called failNames. It is initialized in 
> the caller's memory context, but it repallocs memory in the relation 
> context, and it is freed in the statement context. This is a bug of wrongly 
> mixing memory contexts: when the relation and statement contexts are dropped 
> at the end of analyzeStmt(), part of the buffer's content is overwritten 
> with zeros. This explains why another block's header was randomly wiped out 
> in the reported bug.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (HAWQ-1321) failNames wrongly uses memory context to build message when ANALYZE failed

2017-02-15 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1321.


> failNames wrongly uses memory context to build message when ANALYZE failed
> --
>
> Key: HAWQ-1321
> URL: https://issues.apache.org/jira/browse/HAWQ-1321
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.1.0.0-incubating
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> I found a bug in generating the error message for ANALYZE when the message 
> size is large. 
> In analyzeStmt(), there is a variable called failNames. It is initialized in 
> the caller's memory context, but it repallocs memory in the relation 
> context, and it is freed in the statement context. This is a bug of wrongly 
> mixing memory contexts: when the relation and statement contexts are dropped 
> at the end of analyzeStmt(), part of the buffer's content is overwritten 
> with zeros. This explains why another block's header was randomly wiped out 
> in the reported bug.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1321) failNames wrongly uses memory context to build message when ANALYZE failed

2017-02-09 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1321:


Assignee: Yi Jin  (was: Ed Espino)

> failNames wrongly uses memory context to build message when ANALYZE failed
> --
>
> Key: HAWQ-1321
> URL: https://issues.apache.org/jira/browse/HAWQ-1321
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> I found a bug in generating the error message for ANALYZE when the message 
> size is large. 
> In analyzeStmt(), there is a variable called failNames. It is initialized in 
> the caller's memory context, but it repallocs memory in the relation 
> context, and it is freed in the statement context. This is a bug of wrongly 
> mixing memory contexts: when the relation and statement contexts are dropped 
> at the end of analyzeStmt(), part of the buffer's content is overwritten 
> with zeros. This explains why another block's header was randomly wiped out 
> in the reported bug.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1321) failNames wrongly uses memory context to build message when ANALYZE failed

2017-02-09 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1321:


 Summary: failNames wrongly uses memory context to build message 
when ANALYZE failed
 Key: HAWQ-1321
 URL: https://issues.apache.org/jira/browse/HAWQ-1321
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Yi Jin
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


I found a bug in generating the error message for ANALYZE when the message 
size is large. 

In analyzeStmt(), there is a variable called failNames. It is initialized in 
the caller's memory context, but it repallocs memory in the relation context, 
and it is freed in the statement context. This is a bug of wrongly mixing 
memory contexts: when the relation and statement contexts are dropped at the 
end of analyzeStmt(), part of the buffer's content is overwritten with zeros. 
This explains why another block's header was randomly wiped out in the 
reported bug.
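A sketch of the consistent-context pattern the fix implies, using PostgreSQL's
memory-context API; the helper and its arguments are illustrative, not the
actual analyzeStmt() code:

#include "postgres.h"
#include "utils/memutils.h"

/* Grow the failed-relation name list entirely within the caller's
 * context, so that dropping the per-relation and per-statement contexts
 * later in analyzeStmt() can neither free the buffer early nor scribble
 * zeros over part of it. */
static char *
append_fail_name(char *failNames, const char *relname, MemoryContext callerCtx)
{
    MemoryContext old = MemoryContextSwitchTo(callerCtx);

    if (failNames == NULL)
        failNames = pstrdup(relname);
    else
    {
        failNames = repalloc(failNames,
                             strlen(failNames) + strlen(relname) + 3);
        strcat(failNames, ", ");
        strcat(failNames, relname);
    }

    MemoryContextSwitchTo(old);
    return failNames;
}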



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1310) Reformat resource_negotiator()

2017-02-02 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1310:


 Summary: Reformat resource_negotiator()
 Key: HAWQ-1310
 URL: https://issues.apache.org/jira/browse/HAWQ-1310
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Core
Reporter: Yi Jin
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


The indentation in function resource_negotiator() is not aligned. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HAWQ-1310) Reformat resource_negotiator()

2017-02-02 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1310:


Assignee: Yi Jin  (was: Ed Espino)

> Reformat resource_negotiator()
> --
>
> Key: HAWQ-1310
> URL: https://issues.apache.org/jira/browse/HAWQ-1310
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> The indentation in function resource_negotiator() is not aligned. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] (HAWQ-1299) Extend memory limit for one virtual segment

2017-01-30 Thread Yi Jin (JIRA)
Yi Jin commented on HAWQ-1299:
------------------------------

Re: Extend memory limit for one virtual segment

hawq_rm_stmt_vseg_memory and hawq_rm_nvseg_perquery_perseg_limit

Jon Roberts via hawq.incubator.apache.org  Jan 21 (10 days ago)
to dev
Why is there a limit of 16GB for hawq_rm_stmt_vseg_memory? A cluster with 
256GB per node and dedicated to HAWQ may well want to use more memory per 
segment. Is there something I'm missing regarding statement memory?
Secondly, is the number of vsegs for a query influenced by the statement 
memory, or does it just look at the plan and 
hawq_rm_nvseg_perquery_perseg_limit?

Lei Chang  Jan 21 (10 days ago)
to dev
hawq_rm_stmt_vseg_memory and hawq_rm_stmt_nvseg need to be used together to 
set the specific number of vsegs and the per-vseg memory, and 
hawq_rm_stmt_nvseg should be less than hawq_rm_nvseg_perquery_perseg_limit.
set hawq_rm_stmt_vseg_memory = '2GB';
set hawq_rm_stmt_nvseg = 6;
16GB looks somewhat small for big dedicated machines: if 16GB is the 
per-virtual-segment memory and 8 vsegs are used, that only consumes 128GB.
Cheers

Yi Jin  Jan 23 (8 days ago)
to dev
Hi Jon,
That GUC limit means the maximum consumable memory for one virtual segment is 
16GB. One segment (node) may run multiple vsegs concurrently, so if a node has 
256GB expected to be consumed by HAWQ, it will have at most 16 vsegs running 
concurrently.
hawq_rm_stmt_vseg_memory sets statement-level vseg memory consumption and 
requires hawq_rm_stmt_nvseg to be specified as well; only when 
hawq_rm_stmt_nvseg is greater than 0 is hawq_rm_stmt_vseg_memory activated, 
regardless of the target resource queue's vseg resource quota definition. For 
example, you can set hawq_rm_stmt_vseg_memory to 16gb and hawq_rm_stmt_nvseg 
to 256: if you have a cluster of 16 nodes with 256GB each and your target 
resource queue can use 100% of the cluster resource, you will have 16 vsegs 
running per node, consuming all memory resource for this query.
Best, Yi

Jon Roberts via hawq.incubator.apache.org  Jan 23 (8 days ago)
to dev
I've been thinking about these scenarios:
1. Hash-distributed tables with a fixed number of buckets. If the tables were 
built using the defaults, buckets = 6 * number of nodes, so you basically have 
6 vsegs per host. Multiply that by 16GB and you can only use 96GB of the 256GB 
of RAM per node.
2. A user has randomly distributed tables but doesn't understand that the 
number of vsegs can be increased. This will be common for users coming from 
Greenplum. Again, they can only set statement memory to 16GB, so they are 
stuck with a maximum of 96GB of RAM usage.
3. The user increases vsegs and statement memory, and may run out of memory if 
the settings are too aggressive.
I think we should be able to specify statement memory higher than 16GB. Maybe 
the limit should be something much higher, such as 1TB.
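Jon's arithmetic, restated as a tiny self-contained C program (the numbers
come from the thread above; nothing here is HAWQ code):

#include <stdio.h>

int main(void)
{
    const int node_ram_gb   = 256;  /* RAM per node in the example        */
    const int vseg_cap_gb   = 16;   /* hawq_rm_stmt_vseg_memory hard cap  */
    const int default_vsegs = 6;    /* default hash buckets per host      */

    /* Six vsegs per host capped at 16GB each leaves 96GB of 256GB usable. */
    printf("usable with defaults: %d GB of %d GB\n",
           default_vsegs * vseg_cap_gb, node_ram_gb);

    /* Consuming the whole node at the cap needs 256/16 = 16 vsegs. */
    printf("vsegs needed to use all RAM at the cap: %d\n",
           node_ram_gb / vseg_cap_gb);
    return 0;
}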

[jira] (HAWQ-1299) Extend memory limit for one virtual segment

2017-01-30 Thread Yi Jin (JIRA)
Yi Jin reassigned HAWQ-1299:
----------------------------

Assignee: Yi Jin  (was: Ed Espino)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346-sha1:dbc023d)


[jira] (HAWQ-1299) Extend memory limit for one virtual segment

2017-01-30 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1299:
-------------------------

             Summary: Extend memory limit for one virtual segment
                 Key: HAWQ-1299
             Project: Apache HAWQ
          Issue Type: Improvement
          Components: Resource Manager
             Created: 30/Jan/17 23:20
            Reporter: Yi Jin
            Assignee: Ed Espino
            Priority: Major
             Fix For: backlog


Someone proposed increasing the maximum memory quota of one virtual segment 
beyond 16GB. This JIRA tracks the related discussion.

[jira] [Resolved] (HAWQ-1258) segment resource manager does not switch back when it cannot resolve standby host name

2017-01-22 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1258.
--
Resolution: Fixed

> segment resource manager does not switch back when it cannot resolve standby 
> host name
> --
>
> Key: HAWQ-1258
> URL: https://issues.apache.org/jira/browse/HAWQ-1258
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> When the segment resource manager finds the master resource manager is not 
> available, it should switch to the standby server; however, when the standby 
> host name cannot be resolved, it does not switch back to the original master 
> server. It should also avoid switching to the standby at all when there is 
> no standby, i.e. when the setting is 'none'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-1285) resource manager outputs uninitialized string as host name

2017-01-22 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1285.
--
Resolution: Fixed

> resource manager outputs uninitialized string as host name
> --
>
> Key: HAWQ-1285
> URL: https://issues.apache.org/jira/browse/HAWQ-1285
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> In YARN mode, when the host is not yet registered, the RM generates the 
> following log message, in which the host name is garbled, making it 
> impossible to identify which host is being processed.
> 2017-01-11 10:57:37.265071 
> GMT,,,p48567,th5266578240,con4,,seg-1,"LOG","0","Resource 
> manager adjusts segment ~@ original global resource manager resource capacity 
> from (188416 MB, 50 CORE) to (188416 MB, 46 
> CORE)",,,0,,"resourcepool.c",4700,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1285) resource manager outputs uninitialized string as host name

2017-01-19 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1285:


 Summary: resource manager outputs uninitialized string as host name
 Key: HAWQ-1285
 URL: https://issues.apache.org/jira/browse/HAWQ-1285
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Resource Manager
Reporter: Yi Jin
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


In YARN mode, when the host is not yet registered, the RM generates the 
following log message, in which the host name is garbled, making it impossible 
to identify which host is being processed.

2017-01-11 10:57:37.265071 
GMT,,,p48567,th5266578240,con4,,seg-1,"LOG","0","Resource 
manager adjusts segment ~@ original global resource manager resource capacity 
from (188416 MB, 50 CORE) to (188416 MB, 46 
CORE)",,,0,,"resourcepool.c",4700,
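A hedged sketch of the defensive pattern the fix suggests (helper name
hypothetical): substitute a placeholder when the host-name buffer has not been
filled in yet, so the log line above stays readable:

#include <stddef.h>

/* Return a printable host name: a segment that has not registered yet
 * may have an empty or uninitialized name buffer, so fall back to a
 * fixed placeholder instead of emitting garbage such as "~@". */
static const char *loggable_hostname(const char *hostname)
{
    if (hostname == NULL || hostname[0] == '\0')
        return "(unregistered)";
    return hostname;
}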



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-1285) resource manager outputs uninitialized string as host name

2017-01-19 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1285:


Assignee: Yi Jin  (was: Ed Espino)

> resource manager outputs uninitialized string as host name
> --
>
> Key: HAWQ-1285
> URL: https://issues.apache.org/jira/browse/HAWQ-1285
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> In YARN mode, when the host is not yet registered, the RM generates the 
> following log message, in which the host name is garbled, making it 
> impossible to identify which host is being processed.
> 2017-01-11 10:57:37.265071 
> GMT,,,p48567,th5266578240,con4,,seg-1,"LOG","0","Resource 
> manager adjusts segment ~@ original global resource manager resource capacity 
> from (188416 MB, 50 CORE) to (188416 MB, 46 
> CORE)",,,0,,"resourcepool.c",4700,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-1266) Open discuss on how to set HAWQ having no standby server

2017-01-11 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1266:


Assignee: Yi Jin  (was: Ed Espino)

> Open discuss on how to set HAWQ having no standby server
> 
>
> Key: HAWQ-1266
> URL: https://issues.apache.org/jira/browse/HAWQ-1266
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: backlog
>
>
> Currently, a user can set hawq_standby_server_host to 'none' in 
> hawq-site.xml to tell HAWQ that there is no standby server. One tricky 
> situation is that if a user's standby server hostname is literally 'none', 
> the configuration will not work as expected. 
> One potential behavior change is to use '' (an empty string) instead, but 
> this may also cause confusion. This JIRA is created to track the discussion 
> about improving this behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1266) Open discuss on how to set HAWQ having no standby server

2017-01-11 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1266:


 Summary: Open discuss on how to set HAWQ having no standby server
 Key: HAWQ-1266
 URL: https://issues.apache.org/jira/browse/HAWQ-1266
 Project: Apache HAWQ
  Issue Type: Improvement
  Components: Core
Reporter: Yi Jin
Assignee: Ed Espino
 Fix For: backlog


Currently, a user can set hawq_standby_server_host to 'none' in hawq-site.xml 
to tell HAWQ that there is no standby server. One tricky situation is that if 
a user's standby server hostname is literally 'none', the configuration will 
not work as expected. 

One potential behavior change is to use '' (an empty string) instead, but this 
may also cause confusion. This JIRA is created to track the discussion about 
improving this behavior.
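The ambiguity is easiest to see in code; a minimal C sketch of the current
convention (identifier names hypothetical):

#include <stdbool.h>
#include <string.h>

/* Current convention: the literal string "none" means "no standby".
 * A site whose standby host really is named "none" is indistinguishable
 * from a site with no standby at all, which is the problem tracked here. */
static bool has_standby(const char *standby_host)
{
    return standby_host != NULL
        && standby_host[0] != '\0'
        && strcmp(standby_host, "none") != 0;
}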



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-1242) hawq-site.xml default content has wrong guc variable names

2017-01-05 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin closed HAWQ-1242.


> hawq-site.xml default content has wrong guc variable names
> --
>
> Key: HAWQ-1242
> URL: https://issues.apache.org/jira/browse/HAWQ-1242
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> Reported by Paul Guo
> I happened to check the log and found some confusing entries.
> I have not looked into this yet. Has anyone seen this before?
> 2016-12-02 09:28:05.942583
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","0","NOTE:
> Recognized configuration:
> hawq_dfs_url=localhost:8020/hawq_default""processXMLNode","kvproperties.c",134,
> ..
> 2016-12-02 09:28:05.948051
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","42704","unrecognized
> configuration parameter
> ""hawq_standby_address_host""""set_config_option","guc.c",10041,
> 2016-12-02 09:28:05.948203
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","42704","unrecognized
> configuration parameter
> ""hawq_dfs_url""""set_config_option","guc.c",10041,
> 2016-12-02 09:28:05.948349
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","42704","unrecognized
> configuration parameter
> ""hawq_master_directory""""set_config_option","guc.c",10041,
> 2016-12-02 09:28:05.948493
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","42704","unrecognized
> configuration parameter
> ""hawq_segment_directory""""set_config_option","guc.c",10041,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-1242) hawq-site.xml default content has wrong guc variable names

2017-01-05 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin resolved HAWQ-1242.
--
   Resolution: Fixed
Fix Version/s: 2.2.0.0-incubating

> hawq-site.xml default content has wrong guc variable names
> --
>
> Key: HAWQ-1242
> URL: https://issues.apache.org/jira/browse/HAWQ-1242
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> Reported by Paul Guo
> I happened to check the log and found some confusing entries.
> I have not looked into this yet. Has anyone seen this before?
> 2016-12-02 09:28:05.942583
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","0","NOTE:
> Recognized configuration:
> hawq_dfs_url=localhost:8020/hawq_default""processXMLNode","kvproperties.c",134,
> ..
> 2016-12-02 09:28:05.948051
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","42704","unrecognized
> configuration parameter
> ""hawq_standby_address_host""""set_config_option","guc.c",10041,
> 2016-12-02 09:28:05.948203
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","42704","unrecognized
> configuration parameter
> ""hawq_dfs_url""""set_config_option","guc.c",10041,
> 2016-12-02 09:28:05.948349
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","42704","unrecognized
> configuration parameter
> ""hawq_master_directory""""set_config_option","guc.c",10041,
> 2016-12-02 09:28:05.948493
> GMT,,,p366111,th4549860160,,,seg-1,"LOG","42704","unrecognized
> configuration parameter
> ""hawq_segment_directory""""set_config_option","guc.c",10041,



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-1258) segment resource manager does not switch back when it cannot resolve standby host name

2017-01-05 Thread Yi Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Jin reassigned HAWQ-1258:


Assignee: Yi Jin  (was: Ed Espino)

> segment resource manager does not switch back when it cannot resolve standby 
> host name
> --
>
> Key: HAWQ-1258
> URL: https://issues.apache.org/jira/browse/HAWQ-1258
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.2.0.0-incubating
>
>
> When the segment resource manager finds the master resource manager is not 
> available, it should switch to the standby server; however, when the standby 
> host name cannot be resolved, it does not switch back to the original master 
> server. It should also avoid switching to the standby at all when there is 
> no standby, i.e. when the setting is 'none'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1258) segment resource manager does not switch back when it cannot resolve standby host name

2017-01-05 Thread Yi Jin (JIRA)
Yi Jin created HAWQ-1258:


 Summary: segment resource manager does not switch back when it 
cannot resolve standby host name
 Key: HAWQ-1258
 URL: https://issues.apache.org/jira/browse/HAWQ-1258
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Resource Manager
Reporter: Yi Jin
Assignee: Ed Espino
 Fix For: 2.2.0.0-incubating


When the segment resource manager finds the master resource manager is not 
available, it should switch to the standby server; however, when the standby 
host name cannot be resolved, it does not switch back to the original master 
server. It should also avoid switching to the standby at all when there is no 
standby, i.e. when the setting is 'none'.
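A compact C sketch of the switch-back behavior the fix describes; all names
are hypothetical, and the real logic lives in the segment resource manager:

#include <stdbool.h>
#include <string.h>

typedef struct RMTarget
{
    const char *master_host;
    const char *standby_host;   /* "none" when no standby is configured */
    const char *current_host;   /* endpoint heartbeats are sent to now  */
} RMTarget;

/* Pick the next RM endpoint after the current one fails. */
static const char *next_rm_target(const RMTarget *t, bool standby_resolvable)
{
    bool standby_configured = strcmp(t->standby_host, "none") != 0;

    /* No standby configured: always stay on (or return to) the master. */
    if (!standby_configured)
        return t->master_host;

    /* On the standby but its name cannot be resolved: switch back to the
     * master instead of staying stuck on a name that never resolves. */
    if (t->current_host == t->standby_host && !standby_resolvable)
        return t->master_host;

    /* Master is the failing endpoint: try the standby. */
    if (t->current_host == t->master_host)
        return t->standby_host;

    return t->current_host;
}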



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

