[jira] [Commented] (HAWQ-473) Implement adding an entry into gp_configuration_history when segment's status is changed
[ https://issues.apache.org/jira/browse/HAWQ-473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194751#comment-15194751 ]

ASF GitHub Bot commented on HAWQ-473:
-------------------------------------

Github user huor commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/452#issuecomment-196675702

    +1

> Implement adding an entry into gp_configuration_history when segment's status
> is changed
> -----------------------------------------------------------------------------
>
>                 Key: HAWQ-473
>                 URL: https://issues.apache.org/jira/browse/HAWQ-473
>             Project: Apache HAWQ
>          Issue Type: Improvement
>          Components: Fault Tolerance, Resource Manager
>            Reporter: Lin Wen
>            Assignee: Lin Wen
>
> 1. Implement adding an entry into gp_configuration_history when a segment's
> status is changed, so that users can learn the reason for the status change by
> querying this catalog table.
> A cleanup function is needed for when the user decides to clear this table.
> 2. Remove the failed_tmp_dir and failed_tmp_dir_num columns from
> gp_segment_configuration. Add a "description" column to show the reason
> this segment is down.
> 3. Merge YARN mode and NONE mode status.
> postgres=# select * from gp_segment_configuration;
>  registration_order | role | status | port | hostname | address       | description
> --------------------+------+--------+------+----------+---------------+------------------------------------------------------------------
>                   0 | m    | u      | 5432 | master   | master        |
>                   2 | p    | d      |    4 | node4    | 192.168.2.205 | no YARN node report;
>                   3 | p    | d      |    4 | node3    | 192.168.2.204 | heartbeat timeout;
>                   1 | p    | d      |    4 | node2    | 192.168.2.203 | failed temporary directory:/home/gpadmin/greenplum-db-data/tmp1;
> (4 rows)

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-473) Implement adding an entry into gp_configuration_history when segment's status is changed
[ https://issues.apache.org/jira/browse/HAWQ-473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194732#comment-15194732 ]

ASF GitHub Bot commented on HAWQ-473:
-------------------------------------

GitHub user linwen opened a pull request:

    https://github.com/apache/incubator-hawq/pull/452

    HAWQ-473. fix coverity errors

    Please review.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/linwen/incubator-hawq hawq-473-3

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/452.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #452

----
commit 335267257f4d3232a5ad6b3f71bb40639d958ae7
Author: Wen Lin
Date:   2016-03-15T04:44:34Z

    HAWQ-473. fix coverity errors

> Implement adding an entry into gp_configuration_history when segment's status
> is changed
> -----------------------------------------------------------------------------
>
>                 Key: HAWQ-473
>                 URL: https://issues.apache.org/jira/browse/HAWQ-473
>             Project: Apache HAWQ
>          Issue Type: Improvement
>          Components: Fault Tolerance, Resource Manager
>            Reporter: Lin Wen
>            Assignee: Lin Wen
>
> 1. Implement adding an entry into gp_configuration_history when a segment's
> status is changed, so that users can learn the reason for the status change by
> querying this catalog table.
> A cleanup function is needed for when the user decides to clear this table.
> 2. Remove the failed_tmp_dir and failed_tmp_dir_num columns from
> gp_segment_configuration. Add a "description" column to show the reason
> this segment is down.
> 3. Merge YARN mode and NONE mode status.
> postgres=# select * from gp_segment_configuration;
>  registration_order | role | status | port | hostname | address       | description
> --------------------+------+--------+------+----------+---------------+------------------------------------------------------------------
>                   0 | m    | u      | 5432 | master   | master        |
>                   2 | p    | d      |    4 | node4    | 192.168.2.205 | no YARN node report;
>                   3 | p    | d      |    4 | node3    | 192.168.2.204 | heartbeat timeout;
>                   1 | p    | d      |    4 | node2    | 192.168.2.203 | failed temporary directory:/home/gpadmin/greenplum-db-data/tmp1;
> (4 rows)

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
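[Editor's note] The history catalog described above is meant to be queried (and periodically cleared) by users. A minimal sketch of the kind of SQL a user might issue, expressed as plain query builders; the table name comes from the issue, but the `changetime` column name and the retention window are assumptions for illustration, not the actual schema:

```python
# Hypothetical helpers for working with the proposed gp_configuration_history
# catalog. Column name `changetime` is an assumption, not the real schema.

def history_since_sql(days: int) -> str:
    """Build a query for status changes newer than `days` days."""
    return (
        "SELECT * FROM gp_configuration_history "
        f"WHERE changetime > now() - interval '{int(days)} days' "
        "ORDER BY changetime DESC;"
    )

def cleanup_sql(days: int) -> str:
    """The cleanup the issue calls for: drop entries older than `days` days."""
    return (
        "DELETE FROM gp_configuration_history "
        f"WHERE changetime < now() - interval '{int(days)} days';"
    )
```

A real cleanup function would live server-side (the issue proposes one); these strings only illustrate the intended usage pattern.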
[jira] [Commented] (HAWQ-541) When YARN containers disappear from container report, the segment available resource may be negative
[ https://issues.apache.org/jira/browse/HAWQ-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194689#comment-15194689 ] ASF GitHub Bot commented on HAWQ-541: - Github user yaoj2 commented on the pull request: https://github.com/apache/incubator-hawq/pull/451#issuecomment-196636736 +1 > When YARN containers disappear from container report, the segment available > resource maybe negative > --- > > Key: HAWQ-541 > URL: https://issues.apache.org/jira/browse/HAWQ-541 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-541) When YARN containers disappear from container report, the segment available resource may be negative
[ https://issues.apache.org/jira/browse/HAWQ-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194686#comment-15194686 ]

ASF GitHub Bot commented on HAWQ-541:
-------------------------------------

GitHub user jiny2 opened a pull request:

    https://github.com/apache/incubator-hawq/pull/451

    HAWQ-541. When YARN containers disappear from container report, the segment available resource may be negative

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/jiny2/incubator-hawq HAWQ-541

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/451.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #451

----
commit 333e06c05dbc0732446c21580b0234bca16cae5c
Author: YI JIN
Date:   2016-03-15T03:25:54Z

    HAWQ-541. When YARN containers disappear from container report, the segment available resource may be negative

> When YARN containers disappear from the container report, the segment
> available resource may be negative
> ---------------------------------------------------------------------
>
>                 Key: HAWQ-541
>                 URL: https://issues.apache.org/jira/browse/HAWQ-541
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Resource Manager
>            Reporter: Yi Jin
>            Assignee: Yi Jin

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-541) When YARN containers disappear from container report, the segment available resource may be negative
[ https://issues.apache.org/jira/browse/HAWQ-541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Jin reassigned HAWQ-541:
---------------------------

    Assignee: Yi Jin  (was: Lei Chang)

> When YARN containers disappear from the container report, the segment
> available resource may be negative
> ---------------------------------------------------------------------
>
>                 Key: HAWQ-541
>                 URL: https://issues.apache.org/jira/browse/HAWQ-541
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Resource Manager
>            Reporter: Yi Jin
>            Assignee: Yi Jin

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-541) When YARN containers disappear from container report, the segment available resource may be negative
Yi Jin created HAWQ-541:
---------------------------

             Summary: When YARN containers disappear from container report, the segment available resource may be negative
                 Key: HAWQ-541
                 URL: https://issues.apache.org/jira/browse/HAWQ-541
             Project: Apache HAWQ
          Issue Type: Bug
          Components: Resource Manager
            Reporter: Yi Jin
            Assignee: Lei Chang

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-535) hawqextract column context does not exist error
[ https://issues.apache.org/jira/browse/HAWQ-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194642#comment-15194642 ]

ASF GitHub Bot commented on HAWQ-535:
-------------------------------------

Github user radarwave commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/305#issuecomment-196626130

    Now the fix is in. Thanks.

> hawqextract column context does not exist error
> -----------------------------------------------
>
>                 Key: HAWQ-535
>                 URL: https://issues.apache.org/jira/browse/HAWQ-535
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Command Line Tools
>            Reporter: Daniel Lynch
>            Assignee: Lei Chang
>
> When running hawq extract, a Python stack trace is returned because pg_aoseg no
> longer has a column called content
> ```
> [gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect database localhost:5432 gpadmin
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo'
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect FileFormat: AO
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- extract AO_FileLocations
> Traceback (most recent call last):
>   File "/usr/local/hawq-master/bin/hawqextract", line 551, in <module>
>     sys.exit(main())
>   File "/usr/local/hawq-master/bin/hawqextract", line 528, in main
>     metadata = extract_metadata(conn, args[0])
>   File "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata
>     cases[file_format]()
>   File "/usr/local/hawq-master/bin/hawqextract", line 363, in extract_AO_metadata
>     'Files': get_ao_table_files(rel_pgclass['oid'], rel_pgclass['relfilenode'])
>   File "/usr/local/hawq-master/bin/hawqextract", line 322, in get_ao_table_files
>     for f in accessor.get_aoseg_files(oid):
>   File "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files
>     return self.exec_query(qry)
>   File "/usr/local/hawq-master/bin/hawqextract", line 129, in exec_query
>     return self.conn.query(sql).dictresult()
> pg.ProgrammingError: ERROR:  column "content" does not exist
> LINE 2: SELECT content, segno as fileno, eof as filesize
> ```

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
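[Editor's note] The actual fix landed in PR 305; as an illustration of the failure mode only, here is a hedged sketch of how a tool like hawqextract could defend against a catalog column that no longer exists, by building the aoseg query from the columns actually present. The function name and the choice to fail fast are assumptions for illustration, not the HAWQ implementation:

```python
def build_aoseg_query(aoseg_table: str, available_columns: set) -> str:
    """Build the segment-file query from columns that actually exist, so a
    dropped column (like `content` here) produces a clear client-side error
    instead of a server-side pg.ProgrammingError mid-extraction."""
    wanted = [("segno", "fileno"), ("eof", "filesize")]
    missing = [col for col, _ in wanted if col not in available_columns]
    if missing:
        raise RuntimeError(f"{aoseg_table} is missing columns: {missing}")
    select_list = ", ".join(f"{col} AS {alias}" for col, alias in wanted)
    return f"SELECT {select_list} FROM {aoseg_table}"
```

In practice `available_columns` would come from a catalog probe (e.g. pg_attribute) over the live connection; that lookup is omitted here to keep the sketch self-contained.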
[jira] [Commented] (HAWQ-535) hawqextract column context does not exist error
[ https://issues.apache.org/jira/browse/HAWQ-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194635#comment-15194635 ]

ASF GitHub Bot commented on HAWQ-535:
-------------------------------------

Github user yaoj2 commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/305#issuecomment-196625535

    LGTM

> hawqextract column context does not exist error
> -----------------------------------------------
>
>                 Key: HAWQ-535
>                 URL: https://issues.apache.org/jira/browse/HAWQ-535
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Command Line Tools
>            Reporter: Daniel Lynch
>            Assignee: Lei Chang
>
> When running hawq extract, a Python stack trace is returned because pg_aoseg no
> longer has a column called content
> ```
> [gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect database localhost:5432 gpadmin
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo'
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect FileFormat: AO
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- extract AO_FileLocations
> Traceback (most recent call last):
>   File "/usr/local/hawq-master/bin/hawqextract", line 551, in <module>
>     sys.exit(main())
>   File "/usr/local/hawq-master/bin/hawqextract", line 528, in main
>     metadata = extract_metadata(conn, args[0])
>   File "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata
>     cases[file_format]()
>   File "/usr/local/hawq-master/bin/hawqextract", line 363, in extract_AO_metadata
>     'Files': get_ao_table_files(rel_pgclass['oid'], rel_pgclass['relfilenode'])
>   File "/usr/local/hawq-master/bin/hawqextract", line 322, in get_ao_table_files
>     for f in accessor.get_aoseg_files(oid):
>   File "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files
>     return self.exec_query(qry)
>   File "/usr/local/hawq-master/bin/hawqextract", line 129, in exec_query
>     return self.conn.query(sql).dictresult()
> pg.ProgrammingError: ERROR:  column "content" does not exist
> LINE 2: SELECT content, segno as fileno, eof as filesize
> ```

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-522) Add socket connection pool feature to improve resource negotiation performance
[ https://issues.apache.org/jira/browse/HAWQ-522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194624#comment-15194624 ] ASF GitHub Bot commented on HAWQ-522: - Github user jiny2 closed the pull request at: https://github.com/apache/incubator-hawq/pull/439 > Add socket connection pool feature to improve resource negotiation performance > -- > > Key: HAWQ-522 > URL: https://issues.apache.org/jira/browse/HAWQ-522 > Project: Apache HAWQ > Issue Type: Improvement > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-527) Remove dead code of generateResourceRefreshHeartBeat()
[ https://issues.apache.org/jira/browse/HAWQ-527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194622#comment-15194622 ] ASF GitHub Bot commented on HAWQ-527: - Github user jiny2 closed the pull request at: https://github.com/apache/incubator-hawq/pull/443 > Remove dead code of generateResourceRefreshHeartBeat() > -- > > Key: HAWQ-527 > URL: https://issues.apache.org/jira/browse/HAWQ-527 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-525) query dispatcher heart-beat threads should not close FD if FD is not open ever
[ https://issues.apache.org/jira/browse/HAWQ-525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194623#comment-15194623 ] ASF GitHub Bot commented on HAWQ-525: - Github user jiny2 closed the pull request at: https://github.com/apache/incubator-hawq/pull/440 > query dispatcher heart-beat threads should not close FD if FD is not open ever > -- > > Key: HAWQ-525 > URL: https://issues.apache.org/jira/browse/HAWQ-525 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-540) Change output.replace-datanode-on-failure to true by default
[ https://issues.apache.org/jira/browse/HAWQ-540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194617#comment-15194617 ] ASF GitHub Bot commented on HAWQ-540: - Github user asfgit closed the pull request at: https://github.com/apache/incubator-hawq/pull/450 > Change output.replace-datanode-on-failure to true by default > > > Key: HAWQ-540 > URL: https://issues.apache.org/jira/browse/HAWQ-540 > Project: Apache HAWQ > Issue Type: Bug > Components: libhdfs >Reporter: zhenglin tao >Assignee: Lei Chang > > By default, output.replace-datanode-on-failure should be true for large > cluster -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-540) Change output.replace-datanode-on-failure to true by default
[ https://issues.apache.org/jira/browse/HAWQ-540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194598#comment-15194598 ] ASF GitHub Bot commented on HAWQ-540: - Github user zhangh43 commented on the pull request: https://github.com/apache/incubator-hawq/pull/450#issuecomment-196619713 +1 > Change output.replace-datanode-on-failure to true by default > > > Key: HAWQ-540 > URL: https://issues.apache.org/jira/browse/HAWQ-540 > Project: Apache HAWQ > Issue Type: Bug > Components: libhdfs >Reporter: zhenglin tao >Assignee: Lei Chang > > By default, output.replace-datanode-on-failure should be true for large > cluster -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-540) Change output.replace-datanode-on-failure to true by default
[ https://issues.apache.org/jira/browse/HAWQ-540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194592#comment-15194592 ]

ASF GitHub Bot commented on HAWQ-540:
-------------------------------------

GitHub user ztao1987 opened a pull request:

    https://github.com/apache/incubator-hawq/pull/450

    HAWQ-540. Change output.replace-datanode-on-failure to true by default.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ztao1987/incubator-hawq HAWQ-540

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/450.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #450

----
commit a7ad3753c064d6a7f8c4c28f691fc346d7d68c96
Author: zhenglin tao
Date:   2016-03-15T02:12:36Z

    HAWQ-540. Change output.replace-datanode-on-failure to true by default.

> Change output.replace-datanode-on-failure to true by default
> ------------------------------------------------------------
>
>                 Key: HAWQ-540
>                 URL: https://issues.apache.org/jira/browse/HAWQ-540
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: libhdfs
>            Reporter: zhenglin tao
>            Assignee: Lei Chang
>
> By default, output.replace-datanode-on-failure should be true for large
> clusters

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-540) Change output.replace-datanode-on-failure to true by default
zhenglin tao created HAWQ-540:
---------------------------------

             Summary: Change output.replace-datanode-on-failure to true by default
                 Key: HAWQ-540
                 URL: https://issues.apache.org/jira/browse/HAWQ-540
             Project: Apache HAWQ
          Issue Type: Bug
          Components: libhdfs
            Reporter: zhenglin tao
            Assignee: Lei Chang

By default, output.replace-datanode-on-failure should be true for large clusters

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
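[Editor's note] The property under discussion is a libhdfs3-style HDFS client setting that controls whether the write pipeline replaces a failed datanode. As a hedged sketch only, flipping the default might look like this in a client configuration file such as hdfs-client.xml (the file name and surrounding layout follow libhdfs3 conventions and should be checked against the actual client config shipped with HAWQ):

```xml
<configuration>
  <!-- Replace a failed datanode in the output (write) pipeline.
       HAWQ-540 argues this should default to true for large clusters,
       where a replacement datanode is almost always available. -->
  <property>
    <name>output.replace-datanode-on-failure</name>
    <value>true</value>
  </property>
</configuration>
```

On very small clusters (fewer datanodes than the replication factor) this setting can stall writes, which is presumably why it was previously false; the PR changes only the default, and deployments can still override it.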
[jira] [Commented] (HAWQ-404) Add sort during INSERT of append only row oriented partition tables
[ https://issues.apache.org/jira/browse/HAWQ-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194553#comment-15194553 ] ASF GitHub Bot commented on HAWQ-404: - Github user asfgit closed the pull request at: https://github.com/apache/incubator-hawq/pull/392 > Add sort during INSERT of append only row oriented partition tables > --- > > Key: HAWQ-404 > URL: https://issues.apache.org/jira/browse/HAWQ-404 > Project: Apache HAWQ > Issue Type: Improvement >Reporter: Haisheng Yuan >Assignee: Lei Chang > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-538) Add fault injection for dispatcher
Chunling Wang created HAWQ-538:
----------------------------------

             Summary: Add fault injection for dispatcher
                 Key: HAWQ-538
                 URL: https://issues.apache.org/jira/browse/HAWQ-538
             Project: Apache HAWQ
          Issue Type: New Feature
          Components: Dispatcher
            Reporter: Chunling Wang
            Assignee: Lei Chang

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-534) A query resource quota can be calculated to from 0 to 0
[ https://issues.apache.org/jira/browse/HAWQ-534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194536#comment-15194536 ] ASF GitHub Bot commented on HAWQ-534: - Github user ictmalili commented on the pull request: https://github.com/apache/incubator-hawq/pull/446#issuecomment-196601495 +1 > A query resource quota can be calculated to from 0 to 0 > --- > > Key: HAWQ-534 > URL: https://issues.apache.org/jira/browse/HAWQ-534 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HAWQ-455) Disable creating partition tables with non uniform bucket schema
[ https://issues.apache.org/jira/browse/HAWQ-455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haisheng Yuan resolved HAWQ-455.
--------------------------------
    Resolution: Fixed

Resolved in commit:
https://github.com/apache/incubator-hawq/commit/d72d0fa041953c66e1663ce8952f0191b12bf8a1

> Disable creating partition tables with non uniform bucket schema
> ----------------------------------------------------------------
>
>                 Key: HAWQ-455
>                 URL: https://issues.apache.org/jira/browse/HAWQ-455
>             Project: Apache HAWQ
>          Issue Type: Improvement
>          Components: DDL
>            Reporter: Haisheng Yuan
>            Assignee: Lei Chang
>
> HAWQ users should not be able to create partition tables with a non-uniform
> bucket schema, so that orca does not trip up when it queries across a single
> partition. Otherwise the user should see an error message.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-499) IPv6 address not read properly on OSX version < 10.11
[ https://issues.apache.org/jira/browse/HAWQ-499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194523#comment-15194523 ]

ASF GitHub Bot commented on HAWQ-499:
-------------------------------------

Github user xinzweb commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/419#issuecomment-196598092

    @changleicn and @radarwave please test this PR on your side, and if all good, please push them in. Thanks.

> IPv6 address not read properly on OSX version < 10.11
> -----------------------------------------------------
>
>                 Key: HAWQ-499
>                 URL: https://issues.apache.org/jira/browse/HAWQ-499
>             Project: Apache HAWQ
>          Issue Type: Bug
>            Reporter: Jacob Max Frank
>            Assignee: Lei Chang
>
> The commit 91c50f70 for HAWQ-235 fixed IP lookup for IPv4, but appears to
> have missed the corresponding case for IPv6.
> On my laptop (currently running OSX 10.10.15), I hit the else case in this
> block from *hawqinit.sh*:
> {code}
> get_master_ipv6_addresses() {
>     if [ "${distro_based_on}" = "Mac" ] && [ "${distro_version:0:5}" = "10.11" ]; then
>         MASTER_IPV6_LOCAL_ADDRESS_ALL=(`${IFCONFIG} | ${GREP} inet6 | ${AWK} '{print $2}' | cut -d'%' -f1`)
>     else
>         MASTER_IPV6_LOCAL_ADDRESS_ALL=(`ip -6 address show |${GREP} inet6|${AWK} '{print $2}' |cut -d'/' -f1`)
>     fi
> }
> {code}
> That resulted in this error, since the "ip" command isn't on my computer:
> {code}
> /usr/local/hawq/bin/lib/hawqinit.sh: line 196: ip: command not found
> {code}
> We verified that the positive case in the block above works fine on my
> version of OSX. I can't say for sure how it will work on older OSX versions,
> but IPv6 has been supported since OSX 10.1.
> We're open to suggestions for the bug's most relevant "component"; "build"
> was the closest but didn't really seem to fit. Perhaps we could add a
> component for support/infrastructure scripts.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
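[Editor's note] The bug above comes from branching on the OS version string rather than on which tool is actually available. A hedged sketch (not the PR's actual fix, which is in hawqinit.sh) of the more robust approach: probe for `ip` first and fall back to `ifconfig` when it is absent, as on older OSX. The parsing mirrors the `awk`/`cut` pipeline in the quoted shell block:

```python
import shutil
import subprocess

def local_ipv6_addresses():
    """Collect local IPv6 addresses, preferring `ip -6 address show` but
    falling back to `ifconfig` when `ip` is not installed (older OSX).
    Returns an empty list when neither tool is available."""
    if shutil.which("ip"):
        out = subprocess.run(["ip", "-6", "address", "show"],
                             capture_output=True, text=True).stdout
        # lines look like: "    inet6 fe80::1/64 scope link"
        return [line.split()[1].split("/")[0]
                for line in out.splitlines() if "inet6" in line]
    if shutil.which("ifconfig"):
        out = subprocess.run(["ifconfig"],
                             capture_output=True, text=True).stdout
        # lines look like: "    inet6 fe80::1%lo0 prefixlen 64"
        return [line.split()[1].split("%")[0]
                for line in out.splitlines() if "inet6" in line]
    return []
```

Feature-detecting the tool (rather than matching `distro_version` against "10.11") would also behave correctly on OSX releases newer than 10.11, which the version-prefix check cannot anticipate.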
[jira] [Commented] (HAWQ-404) Add sort during INSERT of append only row oriented partition tables
[ https://issues.apache.org/jira/browse/HAWQ-404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194519#comment-15194519 ] ASF GitHub Bot commented on HAWQ-404: - Github user xinzweb commented on the pull request: https://github.com/apache/incubator-hawq/pull/392#issuecomment-196597824 @changleicn , please go ahead and merge it. Thanks. (Please don't squash the commits) > Add sort during INSERT of append only row oriented partition tables > --- > > Key: HAWQ-404 > URL: https://issues.apache.org/jira/browse/HAWQ-404 > Project: Apache HAWQ > Issue Type: Improvement >Reporter: Haisheng Yuan >Assignee: Lei Chang > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-534) A query resource quota can be calculated to from 0 to 0
[ https://issues.apache.org/jira/browse/HAWQ-534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194513#comment-15194513 ] ASF GitHub Bot commented on HAWQ-534: - Github user yaoj2 commented on the pull request: https://github.com/apache/incubator-hawq/pull/446#issuecomment-196594716 +1 > A query resource quota can be calculated to from 0 to 0 > --- > > Key: HAWQ-534 > URL: https://issues.apache.org/jira/browse/HAWQ-534 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-455) Disable creating partition tables with non uniform bucket schema
[ https://issues.apache.org/jira/browse/HAWQ-455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194507#comment-15194507 ] ASF GitHub Bot commented on HAWQ-455: - Github user hsyuan closed the pull request at: https://github.com/apache/incubator-hawq/pull/387 > Disable creating partition tables with non uniform bucket schema > - > > Key: HAWQ-455 > URL: https://issues.apache.org/jira/browse/HAWQ-455 > Project: Apache HAWQ > Issue Type: Improvement > Components: DDL >Reporter: Haisheng Yuan >Assignee: Lei Chang > > HAWQ user should not be able to create partition tables with non uniform > bucket schema so that don't make orca trip up when it queries across a single > partition. Or else the user should see an error message. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-455) Disable creating partition tables with non uniform bucket schema
[ https://issues.apache.org/jira/browse/HAWQ-455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194506#comment-15194506 ] ASF GitHub Bot commented on HAWQ-455: - Github user hsyuan commented on the pull request: https://github.com/apache/incubator-hawq/pull/387#issuecomment-196594521 Pushed to master: https://github.com/apache/incubator-hawq/commit/d72d0fa041953c66e1663ce8952f0191b12bf8a1 > Disable creating partition tables with non uniform bucket schema > - > > Key: HAWQ-455 > URL: https://issues.apache.org/jira/browse/HAWQ-455 > Project: Apache HAWQ > Issue Type: Improvement > Components: DDL >Reporter: Haisheng Yuan >Assignee: Lei Chang > > HAWQ user should not be able to create partition tables with non uniform > bucket schema so that don't make orca trip up when it queries across a single > partition. Or else the user should see an error message. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HAWQ-502) Support --enable-orca in autoconf
[ https://issues.apache.org/jira/browse/HAWQ-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

xin zhang resolved HAWQ-502.
----------------------------
       Resolution: Fixed
    Fix Version/s: 2.0.0-beta-incubating

This is resolved; the fix is in commit a48600a125eed2a3da00133d6f5f8dad22d4a6a4.

> Support --enable-orca in autoconf
> ---------------------------------
>
>                 Key: HAWQ-502
>                 URL: https://issues.apache.org/jira/browse/HAWQ-502
>             Project: Apache HAWQ
>          Issue Type: Improvement
>          Components: Optimizer
>            Reporter: Jacob Max Frank
>            Assignee: xin zhang
>             Fix For: 2.0.0-beta-incubating
>
> Autoconf and Makefile changes are needed to build HAWQ with GPORCA. We
> spent some time hacking on this and will file a PR soon.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-502) Support --enable-orca in autoconf
[ https://issues.apache.org/jira/browse/HAWQ-502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15194121#comment-15194121 ] ASF GitHub Bot commented on HAWQ-502: - Github user xinzweb commented on the pull request: https://github.com/apache/incubator-hawq/pull/420#issuecomment-196520992 Commit: a48600a125eed2a3da00133d6f5f8dad22d4a6a4 > Support --enable-orca in autoconf > - > > Key: HAWQ-502 > URL: https://issues.apache.org/jira/browse/HAWQ-502 > Project: Apache HAWQ > Issue Type: Improvement > Components: Optimizer >Reporter: Jacob Max Frank >Assignee: xin zhang > > Autoconf and Makefile changes are needed to build with HAWQ with GPORCA. We > spent some time hacking on this and will file a PR soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-502) Support --enable-orca in autoconf
[ https://issues.apache.org/jira/browse/HAWQ-502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194062#comment-15194062 ]

ASF GitHub Bot commented on HAWQ-502:
-------------------------------------

Github user xinzweb commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/420#issuecomment-196507939

    +1

    Please make sure when syncing GPORCA and GPOS to use the following history:
    - sync/clone GPOS to version 1.133 (78dac34, tag: v.1.133)
    - sync/clone GP-XERCES
    - sync/clone GPORCA to version 1.617 (8937084, tag: v1.617)

> Support --enable-orca in autoconf
> ---------------------------------
>
>                 Key: HAWQ-502
>                 URL: https://issues.apache.org/jira/browse/HAWQ-502
>             Project: Apache HAWQ
>          Issue Type: Improvement
>          Components: Optimizer
>            Reporter: Jacob Max Frank
>            Assignee: xin zhang
>
> Autoconf and Makefile changes are needed to build HAWQ with GPORCA. We
> spent some time hacking on this and will file a PR soon.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-307) Ubuntu Support
[ https://issues.apache.org/jira/browse/HAWQ-307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15194034#comment-15194034 ]

Konstantin Boudnik commented on HAWQ-307:
-----------------------------------------

bq. For Konstantin Boudnik, can anything easily be done for BIGTOP-2321 to direct the packages to depend on the right system packages from the HAWQ side?

Actually, the way we build stacks for the Apache Bigdata ecosystem is by standardizing the build environment. So far everything has fallen well into this bucket. However, HAWQ needs some extras, and for those I have altered the standard Bigtop toolchain somewhat (BIGTOP-2323). I was also able to get around most of the passwordless-ssh issues by completely avoiding the Python scripts; if you look at my latest comments on HAWQ-469 you'll immediately see why I had to.

As of this moment, I can automatically (with Puppet) stand up an HDFS cluster with HAWQ and run some trivial create table, insert tests on it. This is done for CentOS 7 only. Now it is Ubuntu's turn ;)

[~clayb], any chance you can publish the changes needed for this JIRA as patches or a GH branch? I want to put them on a separate HAWQ-307 branch and start doing the integration between that branch and the BIGTOP-2320 one. Thanks for your help!

> Ubuntu Support
> --------------
>
>                 Key: HAWQ-307
>                 URL: https://issues.apache.org/jira/browse/HAWQ-307
>             Project: Apache HAWQ
>          Issue Type: New Feature
>          Components: Build
>            Reporter: Lei Chang
>            Assignee: Clay B.
>             Fix For: 2.1.0
>
> To support HAWQ running on Ubuntu OS 14.04.3

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-482) Failed to cancel HDFS delegation token
[ https://issues.apache.org/jira/browse/HAWQ-482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193943#comment-15193943 ]

Jemish Patel commented on HAWQ-482:
-----------------------------------

What I see happening is that the user postgres@{REALM} requests a delegation token from the NN (Isilon in this case) and gets a token for hdfs@{REALM}. Then, when postgres tries to cancel it, it is not the owner of that token (which belongs to hdfs@{REALM}), hence the error: the owner is hdfs, not postgres. Correct?

If that is correct, then this warning would be thrown even in a non-Isilon environment. Let me know.

Jemish

> Failed to cancel HDFS delegation token
> --------------------------------------
>
>                 Key: HAWQ-482
>                 URL: https://issues.apache.org/jira/browse/HAWQ-482
>             Project: Apache HAWQ
>          Issue Type: Bug
>          Components: Security
>    Affects Versions: 2.0.0-beta-incubating
>            Reporter: Jemish Patel
>            Assignee: Lei Chang
>         Attachments: hawq-2016-03-03_00.csv, hdfs.log
>
> Hi, I am using HDB 2.0.0.0_beta-19716 in a kerberized environment.
> Every time I select/insert rows or create a table, I see the warning below:
> WARNING: failed to cancel hdfs delegation token.
> DETAIL: User postg...@vlan172.fe.gopivotal.com is not authorized to cancel the token
> The operation does succeed, but I am wondering why it is trying to delete a
> delegation token and whether I have something misconfigured.
> Can you please let me know why this is happening and how to resolve it?
> Jemish

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-455) Disable creating partition tables with non uniform bucket schema
[ https://issues.apache.org/jira/browse/HAWQ-455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193898#comment-15193898 ] ASF GitHub Bot commented on HAWQ-455: - Github user hsyuan commented on the pull request: https://github.com/apache/incubator-hawq/pull/387#issuecomment-196471228 I didn't see the merge history on master. @changleicn Can you post the link? > Disable creating partition tables with non uniform bucket schema > - > > Key: HAWQ-455 > URL: https://issues.apache.org/jira/browse/HAWQ-455 > Project: Apache HAWQ > Issue Type: Improvement > Components: DDL >Reporter: Haisheng Yuan >Assignee: Lei Chang > > HAWQ user should not be able to create partition tables with non uniform > bucket schema so that don't make orca trip up when it queries across a single > partition. Or else the user should see an error message. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HAWQ-434) Add Cassandra Plugin for PXF
[ https://issues.apache.org/jira/browse/HAWQ-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193841#comment-15193841 ] Goden Yao edited comment on HAWQ-434 at 3/14/16 6:34 PM: - It is not testing work. The existing plugin developed is just for your reference. You can choose to start with the existing code or develop your own completely based on the framework. I don't think we have an open slack room for HAWQ yet. If you find one (or create one) please let us know. was (Author: godenyao): Not testing work. The existing plugin developed is just for your reference. I don't think we have an open slack room for HAWQ yet. If you find one (or create one) please let us know. > Add Cassandra Plugin for PXF > > > Key: HAWQ-434 > URL: https://issues.apache.org/jira/browse/HAWQ-434 > Project: Apache HAWQ > Issue Type: New Feature > Components: PXF >Reporter: Goden Yao >Assignee: Goden Yao > Labels: gsoc2016 > > (Cassandra | http://cassandra.apache.org/) has proved to be a popular > key/value storage for the open source community. We had some contributions from > early PXF users; it'd be good to integrate the code back into the HAWQ code base > and follow the build and test policy. > Original Contribution: > http://pivotal-field-engineering.github.io/pxf-field/cassandra.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-434) Add Cassandra Plugin for PXF
[ https://issues.apache.org/jira/browse/HAWQ-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193841#comment-15193841 ] Goden Yao commented on HAWQ-434: It is not testing work. The existing plugin developed is just for your reference. I don't think we have an open slack room for HAWQ yet. If you find one (or create one) please let us know. > Add Cassandra Plugin for PXF > > > Key: HAWQ-434 > URL: https://issues.apache.org/jira/browse/HAWQ-434 > Project: Apache HAWQ > Issue Type: New Feature > Components: PXF >Reporter: Goden Yao >Assignee: Goden Yao > Labels: gsoc2016 > > (Cassandra | http://cassandra.apache.org/) has proved to be a popular > key/value storage for the open source community. We had some contributions from > early PXF users; it'd be good to integrate the code back into the HAWQ code base > and follow the build and test policy. > Original Contribution: > http://pivotal-field-engineering.github.io/pxf-field/cassandra.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-434) Add Cassandra Plugin for PXF
[ https://issues.apache.org/jira/browse/HAWQ-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193826#comment-15193826 ] Jessica Paige commented on HAWQ-434: Thanks for the feedback, Goden. I was looking forward to it. If I get you right, there already exists a plugin for Cassandra that we wish to integrate into HAWQ? So the work will be mostly testing? Also, is there an IRC/slack room for HAWQ developers? I really look forward to working on something involving Cassandra. > Add Cassandra Plugin for PXF > > > Key: HAWQ-434 > URL: https://issues.apache.org/jira/browse/HAWQ-434 > Project: Apache HAWQ > Issue Type: New Feature > Components: PXF >Reporter: Goden Yao >Assignee: Goden Yao > Labels: gsoc2016 > > (Cassandra | http://cassandra.apache.org/) has proved to be a popular > key/value storage for the open source community. We had some contributions from > early PXF users; it'd be good to integrate the code back into the HAWQ code base > and follow the build and test policy. > Original Contribution: > http://pivotal-field-engineering.github.io/pxf-field/cassandra.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-434) Add Cassandra Plugin for PXF
[ https://issues.apache.org/jira/browse/HAWQ-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193796#comment-15193796 ] Goden Yao commented on HAWQ-434: [~jessypaige] / [~venkat1996] thanks for your comments and passion for contributing to Open Source! The goal is to have the Cassandra plugin merged into the HAWQ codebase, so the community can compile and use it if they want to. You need to understand the PXF framework and the 3 classes you develop for cassandra (resolver, accessor, fragmenter). Check the PXF wiki page for more info, and please let me know if you have further questions. https://cwiki.apache.org/confluence/display/HAWQ/PXF > Add Cassandra Plugin for PXF > > > Key: HAWQ-434 > URL: https://issues.apache.org/jira/browse/HAWQ-434 > Project: Apache HAWQ > Issue Type: New Feature > Components: PXF >Reporter: Goden Yao >Assignee: Goden Yao > Labels: gsoc2016 > > (Cassandra | http://cassandra.apache.org/) has proved to be a popular > key/value storage for the open source community. We had some contributions from > early PXF users; it'd be good to integrate the code back into the HAWQ code base > and follow the build and test policy. > Original Contribution: > http://pivotal-field-engineering.github.io/pxf-field/cassandra.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-502) Support --enable-orca in autoconf
[ https://issues.apache.org/jira/browse/HAWQ-502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xin zhang reassigned HAWQ-502: -- Assignee: xin zhang (was: Amr El-Helw) > Support --enable-orca in autoconf > - > > Key: HAWQ-502 > URL: https://issues.apache.org/jira/browse/HAWQ-502 > Project: Apache HAWQ > Issue Type: Improvement > Components: Optimizer >Reporter: Jacob Max Frank >Assignee: xin zhang > > Autoconf and Makefile changes are needed to build with HAWQ with GPORCA. We > spent some time hacking on this and will file a PR soon. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-537) 'make distdir' can't handle filenames with spaces in them
Tom Meyer created HAWQ-537: -- Summary: 'make distdir' can't handle filenames with spaces in them Key: HAWQ-537 URL: https://issues.apache.org/jira/browse/HAWQ-537 Project: Apache HAWQ Issue Type: Bug Components: Build Reporter: Tom Meyer Assignee: Lei Chang -- This message was sent by Atlassian JIRA (v6.3.4#6332)
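Bugs in this class usually come down to unquoted filename interpolation: when a build recipe splices `my file.c` into a shell command without quoting, the shell splits it into two words. The sketch below illustrates that general failure mode and the quoting fix in Python; it is not HAWQ's actual `make distdir` recipe, just an illustration of the mechanism.

```python
# Illustrates the failure mode behind filenames-with-spaces bugs:
# unquoted interpolation splits "my file.c" into two shell words.
import shlex

def copy_command_naive(src: str, dest: str) -> str:
    # Unquoted interpolation: the shell will word-split src on whitespace.
    return "cp %s %s" % (src, dest)

def copy_command_safe(src: str, dest: str) -> str:
    # shlex.quote() wraps each argument so the shell sees one word.
    return "cp %s %s" % (shlex.quote(src), shlex.quote(dest))

src = "my file.c"
# The naive command parses as four words, the safe one as three.
print(shlex.split(copy_command_naive(src, "dist/")))
print(shlex.split(copy_command_safe(src, "dist/")))
```

The same reasoning applies to Makefiles: recipes that expand `$(FILES)` unquoted break on any path containing whitespace.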
Re: [jira] [Commented] (HAWQ-521) external table test failure in installcheck-good with orca disabled
Hi Tom and Venky, Good catch. It looks like there was a change to the error message text. We should keep looking for more ways to demonstrate tests. +1 Thanks, C.J. On Mon, Mar 14, 2016 at 10:12 AM, ASF GitHub Bot (JIRA)wrote: > > [ > https://issues.apache.org/jira/browse/HAWQ-521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193659#comment-15193659 > ] > > ASF GitHub Bot commented on HAWQ-521: > - > > GitHub user tom-meyer opened a pull request: > > https://github.com/apache/incubator-hawq/pull/449 > > HAWQ-521. Fixing error message for exttab1 source file when optimizer… > > … is turned off > > You can merge this pull request into a Git repository by running: > > $ git pull https://github.com/tom-meyer/incubator-hawq HAWQ-521 > > Alternatively you can review and apply these changes as the patch at: > > https://github.com/apache/incubator-hawq/pull/449.patch > > To close this pull request, make a commit to your master/trunk branch > with (at least) the following in the commit message: > > This closes #449 > > > commit 44ad0dfc45ba23ad4b744bd62595f5e426d4a892 > Author: Venkatesh Raghavan > Date: 2016-03-11T02:47:35Z > > HAWQ-521. Fixing error message for exttab1 source file when optimizer > is turned off > > > > > > external table test failure in installcheck-good with orca disabled > > --- > > > > Key: HAWQ-521 > > URL: https://issues.apache.org/jira/browse/HAWQ-521 > > Project: Apache HAWQ > > Issue Type: Bug > >Reporter: Tom Meyer > >Assignee: Lei Chang > > > > We are running installcheck-good against a hawq built without > --enable-orca > > {noformat} > > *** ./expected/exttab1.out2016-02-26 12:54:17.786833482 + > > --- ./results/exttab1.out 2016-02-26 12:54:17.866833482 + > > *** > > *** 673,681 > > -- positive > > --- > > -- > > - ERROR: ON clause may not be used with a writable external table > > ERROR: it is not possible to read from a WRITABLE external table. 
> > ERROR: location uri "gpfdist://localhost:7070/wet.out" appears more > than once > > ERROR: the file protocol for external tables is deprecated > > HINT: Create the table as READABLE instead > > HINT: use the gpfdist protocol or COPY FROM instead > > --- 670,678 > > -- positive > > --- > > -- > > ERROR: it is not possible to read from a WRITABLE external table. > > ERROR: location uri "gpfdist://localhost:7070/wet.out" appears more > than once > > + ERROR: the ON segment syntax for writable external tables is > deprecated > > ERROR: the file protocol for external tables is deprecated > > HINT: Create the table as READABLE instead > > HINT: use the gpfdist protocol or COPY FROM instead > > {noformat} > > > > -- > This message was sent by Atlassian JIRA > (v6.3.4#6332) >
[jira] [Commented] (HAWQ-521) external table test failure in installcheck-good with orca disabled
[ https://issues.apache.org/jira/browse/HAWQ-521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193659#comment-15193659 ] ASF GitHub Bot commented on HAWQ-521: - GitHub user tom-meyer opened a pull request: https://github.com/apache/incubator-hawq/pull/449 HAWQ-521. Fixing error message for exttab1 source file when optimizer… … is turned off You can merge this pull request into a Git repository by running: $ git pull https://github.com/tom-meyer/incubator-hawq HAWQ-521 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/449.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #449 commit 44ad0dfc45ba23ad4b744bd62595f5e426d4a892 Author: Venkatesh RaghavanDate: 2016-03-11T02:47:35Z HAWQ-521. Fixing error message for exttab1 source file when optimizer is turned off > external table test failure in installcheck-good with orca disabled > --- > > Key: HAWQ-521 > URL: https://issues.apache.org/jira/browse/HAWQ-521 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Tom Meyer >Assignee: Lei Chang > > We are running installcheck-good against a hawq built without --enable-orca > {noformat} > *** ./expected/exttab1.out2016-02-26 12:54:17.786833482 + > --- ./results/exttab1.out 2016-02-26 12:54:17.866833482 + > *** > *** 673,681 > -- positive > --- > -- > - ERROR: ON clause may not be used with a writable external table > ERROR: it is not possible to read from a WRITABLE external table. > ERROR: location uri "gpfdist://localhost:7070/wet.out" appears more than > once > ERROR: the file protocol for external tables is deprecated > HINT: Create the table as READABLE instead > HINT: use the gpfdist protocol or COPY FROM instead > --- 670,678 > -- positive > --- > -- > ERROR: it is not possible to read from a WRITABLE external table. 
> ERROR: location uri "gpfdist://localhost:7070/wet.out" appears more than > once > + ERROR: the ON segment syntax for writable external tables is deprecated > ERROR: the file protocol for external tables is deprecated > HINT: Create the table as READABLE instead > HINT: use the gpfdist protocol or COPY FROM instead > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-536) Regress test 'function' fails when orca is not enabled
[ https://issues.apache.org/jira/browse/HAWQ-536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193600#comment-15193600 ] ASF GitHub Bot commented on HAWQ-536: - GitHub user tom-meyer opened a pull request: https://github.com/apache/incubator-hawq/pull/448 HAWQ-536. Fix expected function.out when orca is not enabled You can merge this pull request into a Git repository by running: $ git pull https://github.com/tom-meyer/incubator-hawq HAWQ-536 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/448.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #448 commit 8992554400ccbffb857bc0fcde770ba827f711fb Author: Devadass Santhosh Sampath and Tom MeyerDate: 2016-03-12T02:03:07Z HAWQ-536. Fix expected function.out when orca is not enabled > Regress test 'function' fails when orca is not enabled > -- > > Key: HAWQ-536 > URL: https://issues.apache.org/jira/browse/HAWQ-536 > Project: Apache HAWQ > Issue Type: Bug > Components: Tests >Reporter: Tom Meyer >Assignee: Jiali Yao > > {noformat} > The differences that caused some tests to fail can be viewed in the > file "./regression.diffs". A copy of the test summary that you see > above is saved in the file "./regression.out". 
> make[2]: *** [installcheck-good] Error 1 > make[2]: Leaving directory > `/tmp/build/486517ee/hawq_src/apache-hawq/src/test/regress' > make[1]: *** [installcheck-good] Error 2 > make[1]: Leaving directory `/tmp/build/486517ee/hawq_src/apache-hawq/src/test' > make: *** [installcheck-good] Error 2 > *** ./expected/function.out 2016-03-11 03:53:32.533109484 + > --- ./results/function.out2016-03-11 03:53:32.609109484 + > *** > *** 881,887 > DROP FUNCTION inner(int); > -- TEARDOWN > DROP TABLE foo; > - > -- HAWQ-510 > drop table if exists testEntryDB; > create table testEntryDB(key int, value int) distributed randomly; > --- 881,886 > *** > *** 894,896 > --- 893,898 > -+--- > 1 | 0 > 2 | 0 > + (2 rows) > + > + drop table testEntryDB; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
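For context on how a failure like the {noformat} block above is reported: pg_regress-style harnesses run the SQL, capture the output, and diff it against a checked-in expected file; any difference fails the test. A minimal sketch of that comparison with Python's difflib follows (illustrative only, not the actual pg_regress implementation; the file names are taken from the diff above).

```python
# Sketch of how a regression diff arises: compare expected output lines
# against the lines the server actually produced.
import difflib

expected = ["DROP FUNCTION inner(int);", "-- TEARDOWN", "DROP TABLE foo;", ""]
actual   = ["DROP FUNCTION inner(int);", "-- TEARDOWN", "DROP TABLE foo;"]

diff = list(difflib.unified_diff(expected, actual,
                                 fromfile="expected/function.out",
                                 tofile="results/function.out",
                                 lineterm=""))
print("\n".join(diff))
# Lines present only in the expected file are prefixed with "-";
# the test passes only when the diff is empty.
```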
Re: [jira] [Commented] (HAWQ-536) Regress test 'function' fails when orca is not enabled
Hi Tom, Thank you for the pull request, and welcome to the contributor community! Others can take a closer look, but looks good to me. +1 C.J. On Mon, Mar 14, 2016 at 9:40 AM, ASF GitHub Bot (JIRA)wrote: > > [ > https://issues.apache.org/jira/browse/HAWQ-536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193600#comment-15193600 > ] > > ASF GitHub Bot commented on HAWQ-536: > - > > GitHub user tom-meyer opened a pull request: > > https://github.com/apache/incubator-hawq/pull/448 > > HAWQ-536. Fix expected function.out when orca is not enabled > > > > You can merge this pull request into a Git repository by running: > > $ git pull https://github.com/tom-meyer/incubator-hawq HAWQ-536 > > Alternatively you can review and apply these changes as the patch at: > > https://github.com/apache/incubator-hawq/pull/448.patch > > To close this pull request, make a commit to your master/trunk branch > with (at least) the following in the commit message: > > This closes #448 > > > commit 8992554400ccbffb857bc0fcde770ba827f711fb > Author: Devadass Santhosh Sampath and Tom Meyer > Date: 2016-03-12T02:03:07Z > > HAWQ-536. Fix expected function.out when orca is not enabled > > > > > > Regress test 'function' fails when orca is not enabled > > -- > > > > Key: HAWQ-536 > > URL: https://issues.apache.org/jira/browse/HAWQ-536 > > Project: Apache HAWQ > > Issue Type: Bug > > Components: Tests > >Reporter: Tom Meyer > >Assignee: Jiali Yao > > > > {noformat} > > The differences that caused some tests to fail can be viewed in the > > file "./regression.diffs". A copy of the test summary that you see > > above is saved in the file "./regression.out". 
> > make[2]: *** [installcheck-good] Error 1 > > make[2]: Leaving directory > `/tmp/build/486517ee/hawq_src/apache-hawq/src/test/regress' > > make[1]: *** [installcheck-good] Error 2 > > make[1]: Leaving directory > `/tmp/build/486517ee/hawq_src/apache-hawq/src/test' > > make: *** [installcheck-good] Error 2 > > *** ./expected/function.out 2016-03-11 03:53:32.533109484 + > > --- ./results/function.out2016-03-11 03:53:32.609109484 + > > *** > > *** 881,887 > > DROP FUNCTION inner(int); > > -- TEARDOWN > > DROP TABLE foo; > > - > > -- HAWQ-510 > > drop table if exists testEntryDB; > > create table testEntryDB(key int, value int) distributed randomly; > > --- 881,886 > > *** > > *** 894,896 > > --- 893,898 > > -+--- > > 1 | 0 > > 2 | 0 > > + (2 rows) > > + > > + drop table testEntryDB; > > {noformat} > > > > -- > This message was sent by Atlassian JIRA > (v6.3.4#6332) >
[jira] [Created] (HAWQ-536) Regress test 'function' fails when orca is not enabled
Tom Meyer created HAWQ-536: -- Summary: Regress test 'function' fails when orca is not enabled Key: HAWQ-536 URL: https://issues.apache.org/jira/browse/HAWQ-536 Project: Apache HAWQ Issue Type: Bug Components: Tests Reporter: Tom Meyer Assignee: Jiali Yao {noformat} The differences that caused some tests to fail can be viewed in the file "./regression.diffs". A copy of the test summary that you see above is saved in the file "./regression.out". make[2]: *** [installcheck-good] Error 1 make[2]: Leaving directory `/tmp/build/486517ee/hawq_src/apache-hawq/src/test/regress' make[1]: *** [installcheck-good] Error 2 make[1]: Leaving directory `/tmp/build/486517ee/hawq_src/apache-hawq/src/test' make: *** [installcheck-good] Error 2 *** ./expected/function.out 2016-03-11 03:53:32.533109484 + --- ./results/function.out 2016-03-11 03:53:32.609109484 + *** *** 881,887 DROP FUNCTION inner(int); -- TEARDOWN DROP TABLE foo; - -- HAWQ-510 drop table if exists testEntryDB; create table testEntryDB(key int, value int) distributed randomly; --- 881,886 *** *** 894,896 --- 893,898 -+--- 1 | 0 2 | 0 + (2 rows) + + drop table testEntryDB; {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-535) hawqextract column context does not exist error
[ https://issues.apache.org/jira/browse/HAWQ-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193380#comment-15193380 ] ASF GitHub Bot commented on HAWQ-535: - Github user radarwave commented on the pull request: https://github.com/apache/incubator-hawq/pull/305#issuecomment-196343453 You might need to update the git commit message with the jira number, so when we finish the merge, it can keep the original author information. > hawqextract column context does not exist error > --- > > Key: HAWQ-535 > URL: https://issues.apache.org/jira/browse/HAWQ-535 > Project: Apache HAWQ > Issue Type: Bug > Components: Command Line Tools >Reporter: Daniel Lynch >Assignee: Lei Chang > > When running hawq extract, a python stack trace is returned because pg_aoseg no > longer has a column called content > ``` > [gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo > 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect > database localhost:5432 gpadmin 20160129:23:21:41:004538 > hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo' > 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect > FileFormat: AO 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- > extract AO_FileLocations Traceback (most recent call last): File > "/usr/local/hawq-master/bin/hawqextract", line 551, in > sys.exit(main()) File "/usr/local/hawq-master/bin/hawqextract", line 528, in > main metadata = extract_metadata(conn, args[0]) File > "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata > cases[file_format]() File "/usr/local/hawq-master/bin/hawqextract", line 363, > in extract_AO_metadata 'Files': get_ao_table_files(rel_pgclass['oid'], > rel_pgclass['relfilenode']) File "/usr/local/hawq-master/bin/hawqextract", > line 322, in get_ao_table_files for f in accessor.get_aoseg_files(oid): File > "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files return > self.exec_query(qry) File "/usr/local/hawq-master/bin/hawqextract", line 129, > in exec_query return self.conn.query(sql).dictresult() pg.ProgrammingError: > ERROR: column "content" does not exist LINE 2: SELECT content, segno as > fileno, eof as filesize > ``` -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-535) hawqextract column context does not exist error
[ https://issues.apache.org/jira/browse/HAWQ-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193377#comment-15193377 ] ASF GitHub Bot commented on HAWQ-535: - Github user randomtask1155 commented on the pull request: https://github.com/apache/incubator-hawq/pull/305#issuecomment-196341498 jira created https://issues.apache.org/jira/browse/HAWQ-535 > hawqextract column context does not exist error > --- > > Key: HAWQ-535 > URL: https://issues.apache.org/jira/browse/HAWQ-535 > Project: Apache HAWQ > Issue Type: Bug > Components: Command Line Tools >Reporter: Daniel Lynch >Assignee: Lei Chang > > When running hawq extract python stack trace is returned because pg_aoseg no > longer has a column called content > ``` > [gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo > 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect > database localhost:5432 gpadmin 20160129:23:21:41:004538 > hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo' > 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect > FileFormat: AO 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- > extract AO_FileLocations Traceback (most recent call last): File > "/usr/local/hawq-master/bin/hawqextract", line 551, in > sys.exit(main()) File "/usr/local/hawq-master/bin/hawqextract", line 528, in > main metadata = extract_metadata(conn, args[0]) File > "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata > cases[file_format]() File "/usr/local/hawq-master/bin/hawqextract", line 363, > in extract_AO_metadata 'Files': get_ao_table_files(rel_pgclass['oid'], > rel_pgclass['relfilenode']) File "/usr/local/hawq-master/bin/hawqextract", > line 322, in get_ao_table_files for f in accessor.get_aoseg_files(oid): File > "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files return > self.exec_query(qry) File "/usr/local/hawq-master/bin/hawqextract", line 129, > in exec_query return 
self.conn.query(sql).dictresult() pg.ProgrammingError: > ERROR: column "content" does not exist LINE 2: SELECT content, segno as > fileno, eof as filesize > ``` -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-535) hawqextract column context does not exist error
Daniel Lynch created HAWQ-535: - Summary: hawqextract column context does not exist error Key: HAWQ-535 URL: https://issues.apache.org/jira/browse/HAWQ-535 Project: Apache HAWQ Issue Type: Bug Components: Command Line Tools Reporter: Daniel Lynch Assignee: Lei Chang When running hawq extract python stack trace is returned because pg_aoseg no longer has a column called content ``` [gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect database localhost:5432 gpadmin 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo' 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect FileFormat: AO 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- extract AO_FileLocations Traceback (most recent call last): File "/usr/local/hawq-master/bin/hawqextract", line 551, in sys.exit(main()) File "/usr/local/hawq-master/bin/hawqextract", line 528, in main metadata = extract_metadata(conn, args[0]) File "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata cases[file_format]() File "/usr/local/hawq-master/bin/hawqextract", line 363, in extract_AO_metadata 'Files': get_ao_table_files(rel_pgclass['oid'], rel_pgclass['relfilenode']) File "/usr/local/hawq-master/bin/hawqextract", line 322, in get_ao_table_files for f in accessor.get_aoseg_files(oid): File "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files return self.exec_query(qry) File "/usr/local/hawq-master/bin/hawqextract", line 129, in exec_query return self.conn.query(sql).dictresult() pg.ProgrammingError: ERROR: column "content" does not exist LINE 2: SELECT content, segno as fileno, eof as filesize ``` -- This message was sent by Atlassian JIRA (v6.3.4#6332)
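One defensive way to fix a failure like this is to build the aoseg query from the columns that actually exist instead of hard-coding `content`. The helper below is hypothetical, a sketch of that idea rather than the actual HAWQ-535 patch; `build_aoseg_query` and its parameters are names invented for illustration.

```python
# Hypothetical sketch: construct the aoseg query from the columns the
# catalog actually has, so dropping "content" no longer breaks extract.
def build_aoseg_query(aoseg_table, available_columns):
    cols = ["segno as fileno", "eof as filesize"]
    if "content" in available_columns:
        # Older catalogs carried a content column; keep it when present.
        cols.insert(0, "content")
    return "SELECT %s FROM %s" % (", ".join(cols), aoseg_table)

# New-style catalog without a content column: the query omits it.
print(build_aoseg_query("pg_aoseg.pg_aoseg_16385", {"segno", "eof", "tupcount"}))
```

In the real tool the available columns would come from a catalog lookup (e.g. pg_attribute) rather than a hard-coded set.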
[jira] [Commented] (HAWQ-434) Add Cassandra Plugin for PXF
[ https://issues.apache.org/jira/browse/HAWQ-434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193340#comment-15193340 ] Jessica Paige commented on HAWQ-434: Hello, I am Jessica Paige, an MS student at York University. I am proficient in C/C++ and Java. I also have good working knowledge of Cassandra and would love to work on this project. I have looked at the Extension API and how to use the 3 classes in adding new plugins. In this project, we wish to add support for Cassandra, and afaik the right way to talk to Cassandra is by using the Datastax driver since the Thrift API is considered deprecated. Is that what we are looking to achieve in this project? I will be writing a proposal for this project and hope it is given attention. > Add Cassandra Plugin for PXF > > > Key: HAWQ-434 > URL: https://issues.apache.org/jira/browse/HAWQ-434 > Project: Apache HAWQ > Issue Type: New Feature > Components: PXF >Reporter: Goden Yao >Assignee: Goden Yao > Labels: gsoc2016 > > (Cassandra | http://cassandra.apache.org/) has proved to be a popular > key/value storage for the open source community. We had some contributions from > early PXF users; it'd be good to integrate the code back into the HAWQ code base > and follow the build and test policy. > Original Contribution: > http://pivotal-field-engineering.github.io/pxf-field/cassandra.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-529) Allocate resource for udf in resource negotiator.
[ https://issues.apache.org/jira/browse/HAWQ-529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193041#comment-15193041 ] ASF GitHub Bot commented on HAWQ-529: - GitHub user zhangh43 opened a pull request: https://github.com/apache/incubator-hawq/pull/447 HAWQ-529. Allocate resource for udf in resource negotiator. You can merge this pull request into a Git repository by running: $ git pull https://github.com/zhangh43/incubator-hawq hawq529 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/447.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #447 commit 174f8177e813e2dee0613d25b850eb3bdd1c5e5f Author: hubertzhang Date: 2016-03-01T06:12:01Z Revert "Revert "HAWQ-453. Do not allocate query resource in prepare stage for prepared statement"" This reverts commit b8b9a0e843dad2ac1e413d184979b463f5d5b4b2. commit 7c611f151c3e27d5a86c58b85cd4fcd8e1d8107d Author: hubertzhang Date: 2016-03-01T06:24:15Z Revert "Revert "HAWQ-198. Build-in functions will be executed in resource_negotiator stage which causes it will be executed twice"" This reverts commit 598bb436650809011732ff96b667df8ac9712203. commit 119b6d43104062b4a27cfa4778ed9f81978a8120 Author: hubertzhang Date: 2016-03-14T10:14:22Z HAWQ-529. Allocate resource for udf in resource negotiator. > Allocate resource for udf in resource negotiator. > - > > Key: HAWQ-529 > URL: https://issues.apache.org/jira/browse/HAWQ-529 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > > For a udf, we should allocate resources at the top level, and inside the udf, we just > inherit the resource allocated from the top level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-534) A query resource quota can be calculated to from 0 to 0
[ https://issues.apache.org/jira/browse/HAWQ-534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193040#comment-15193040 ] ASF GitHub Bot commented on HAWQ-534: - GitHub user jiny2 opened a pull request: https://github.com/apache/incubator-hawq/pull/446 HAWQ-534. A query resource quota can be calculated to from 0 to 0 You can merge this pull request into a Git repository by running: $ git pull https://github.com/jiny2/incubator-hawq HAWQ-534 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/446.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #446 commit 2ea40deb07e5caae7c7fc63c40d6a326ff145216 Author: YI JINDate: 2016-03-14T10:14:00Z HAWQ-534. A query resource quota can be calculated to from 0 to 0 > A query resource quota can be calculated to from 0 to 0 > --- > > Key: HAWQ-534 > URL: https://issues.apache.org/jira/browse/HAWQ-534 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-532) Optimise vseg number for copy to statement.
[ https://issues.apache.org/jira/browse/HAWQ-532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hubert Zhang reassigned HAWQ-532: - Assignee: Hubert Zhang (was: Lei Chang) > Optimise vseg number for copy to statement. > --- > > Key: HAWQ-532 > URL: https://issues.apache.org/jira/browse/HAWQ-532 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > > Now the copy to statement uses the table bucket number as the vseg number. For random > tables, this is not appropriate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-534) A query resource quota can be calculated to from 0 to 0
Yi Jin created HAWQ-534: --- Summary: A query resource quota can be calculated to from 0 to 0 Key: HAWQ-534 URL: https://issues.apache.org/jira/browse/HAWQ-534 Project: Apache HAWQ Issue Type: Bug Components: Resource Manager Reporter: Yi Jin Assignee: Lei Chang -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-531) Change GUC default value
[ https://issues.apache.org/jira/browse/HAWQ-531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hubert Zhang reassigned HAWQ-531: - Assignee: Hubert Zhang (was: Lei Chang) > Change GUC default value > > > Key: HAWQ-531 > URL: https://issues.apache.org/jira/browse/HAWQ-531 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > > hawq_rm_vseg_perquery_limit should change to 512 min 1, max 10 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-534) A query resource quota can be calculated to from 0 to 0
[ https://issues.apache.org/jira/browse/HAWQ-534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Jin reassigned HAWQ-534: --- Assignee: Yi Jin (was: Lei Chang) > A query resource quota can be calculated to from 0 to 0 > --- > > Key: HAWQ-534 > URL: https://issues.apache.org/jira/browse/HAWQ-534 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (HAWQ-525) query dispatcher heart-beat threads should not close FD if FD is not open ever
[ https://issues.apache.org/jira/browse/HAWQ-525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Jin closed HAWQ-525. --- > query dispatcher heart-beat threads should not close FD if FD is not open ever > -- > > Key: HAWQ-525 > URL: https://issues.apache.org/jira/browse/HAWQ-525 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (HAWQ-522) Add socket connection pool feature to improve resource negotiation performance
[ https://issues.apache.org/jira/browse/HAWQ-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Jin closed HAWQ-522. --- > Add socket connection pool feature to improve resource negotiation performance > -- > > Key: HAWQ-522 > URL: https://issues.apache.org/jira/browse/HAWQ-522 > Project: Apache HAWQ > Issue Type: Improvement > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HAWQ-525) query dispatcher heart-beat threads should not close FD if FD is not open ever
[ https://issues.apache.org/jira/browse/HAWQ-525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Jin resolved HAWQ-525. - Resolution: Fixed Fix Version/s: 2.0.0 > query dispatcher heart-beat threads should not close FD if FD is not open ever > -- > > Key: HAWQ-525 > URL: https://issues.apache.org/jira/browse/HAWQ-525 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HAWQ-522) Add socket connection pool feature to improve resource negotiation performance
[ https://issues.apache.org/jira/browse/HAWQ-522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Jin resolved HAWQ-522. - Resolution: Fixed Fix Version/s: 2.0.0 > Add socket connection pool feature to improve resource negotiation performance > -- > > Key: HAWQ-522 > URL: https://issues.apache.org/jira/browse/HAWQ-522 > Project: Apache HAWQ > Issue Type: Improvement > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (HAWQ-527) Remove dead code of generateResourceRefreshHeartBeat()
[ https://issues.apache.org/jira/browse/HAWQ-527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Jin closed HAWQ-527. --- > Remove dead code of generateResourceRefreshHeartBeat() > -- > > Key: HAWQ-527 > URL: https://issues.apache.org/jira/browse/HAWQ-527 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > Fix For: 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-533) Cursor failed, if don't allocate resource in prepare.
[ https://issues.apache.org/jira/browse/HAWQ-533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hubert Zhang reassigned HAWQ-533: - Assignee: Hubert Zhang (was: Lei Chang) > Cursor failed, if don't allocate resource in prepare. > - > > Key: HAWQ-533 > URL: https://issues.apache.org/jira/browse/HAWQ-533 > Project: Apache HAWQ > Issue Type: Bug > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > > We need to allocate resources for the cursor operation if they were not > allocated during prepare. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-532) Optimise vseg number for copy to statement.
Hubert Zhang created HAWQ-532: - Summary: Optimise vseg number for copy to statement. Key: HAWQ-532 URL: https://issues.apache.org/jira/browse/HAWQ-532 Project: Apache HAWQ Issue Type: Improvement Components: Core Reporter: Hubert Zhang Assignee: Lei Chang Currently the COPY TO statement uses the table bucket number as the vseg number. For randomly distributed tables this is not appropriate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
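The distinction HAWQ-532 draws can be sketched minimally. The names below (`DistPolicy`, `choose_copy_vseg_num`) and the fallback logic are assumptions for illustration only, not the actual HAWQ implementation: a hash-distributed table is pinned to its bucket number, while a randomly distributed table could use a configurable default instead.

```c
#include <assert.h>

/* Illustrative sketch only -- names and logic are assumptions, not the
 * actual HAWQ code. */
typedef enum { DIST_HASH, DIST_RANDOM } DistPolicy;

static int choose_copy_vseg_num(DistPolicy policy, int bucket_num, int default_vseg)
{
    if (policy == DIST_HASH)
        return bucket_num;      /* must match the bucket layout */
    return default_vseg;        /* random tables are free to pick */
}
```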
[jira] [Commented] (HAWQ-524) do not resolve the condition of 'executor->refResult = NULL' in executormgr_bind_executor_task()
[ https://issues.apache.org/jira/browse/HAWQ-524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193001#comment-15193001 ] ASF GitHub Bot commented on HAWQ-524: - Github user zhangh43 commented on the pull request: https://github.com/apache/incubator-hawq/pull/444#issuecomment-196231052 +1 > do not resolve the condition of 'executor->refResult = NULL' in > executormgr_bind_executor_task() > - > > Key: HAWQ-524 > URL: https://issues.apache.org/jira/browse/HAWQ-524 > Project: Apache HAWQ > Issue Type: Bug > Components: Dispatcher >Affects Versions: 2.0.0 >Reporter: Chunling Wang >Assignee: Lei Chang > > In executormgr.c, the check below should not be an Assert(). The condition > 'executor->refResult == NULL' should be caught at runtime: > bool > executormgr_bind_executor_task(struct DispatchData *data, > QueryExecutor *executor, > SegmentDatabaseDescriptor *desc, > struct DispatchTask *task, > struct DispatchSlice *slice) > { > ... > Assert(executor->refResult != NULL); > ... > } -- This message was sent by Atlassian JIRA (v6.3.4#6332)
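The change HAWQ-524 asks for can be sketched as follows. The `QueryExecutor` struct and `bind_executor_task` here are simplified stand-ins, not the real executormgr.c types: the point is that the NULL case is detected at runtime and reported to the caller, instead of being left to an `Assert()` that is compiled out of release builds.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for the real executormgr.c types. */
typedef struct QueryExecutor { void *refResult; } QueryExecutor;

static bool bind_executor_task(QueryExecutor *executor)
{
    if (executor == NULL || executor->refResult == NULL)
        return false;           /* caught: caller raises a proper error */
    /* ... perform the actual binding here ... */
    return true;
}
```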
[jira] [Commented] (HAWQ-528) Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 marks as invalid.
[ https://issues.apache.org/jira/browse/HAWQ-528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15193000#comment-15193000 ] ASF GitHub Bot commented on HAWQ-528: - GitHub user ictmalili opened a pull request: https://github.com/apache/incubator-hawq/pull/445 HAWQ-528. Reset gp_connections_per_thread for dispatcher guc range Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 is invalid. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ictmalili/incubator-hawq HAWQ-528 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/445.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #445 commit 69b95e01ffbff350cbcc1d0629dcd49062b64a30 Author: Lili Ma Date: 2016-03-14T09:39:31Z HAWQ-528. Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 marks as invalid. > Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 > marks as invalid. > --- > > Key: HAWQ-528 > URL: https://issues.apache.org/jira/browse/HAWQ-528 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Lili Ma >Assignee: Lili Ma > > Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 > marks as invalid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-524) do not resolve the condition of 'executor->refResult = NULL' in executormgr_bind_executor_task()
[ https://issues.apache.org/jira/browse/HAWQ-524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192996#comment-15192996 ] ASF GitHub Bot commented on HAWQ-524: - GitHub user ictmalili opened a pull request: https://github.com/apache/incubator-hawq/pull/444 HAWQ-524. Modify processing for cdbDispatchResult->resultBuf Modify processing for cdbDispatchResult->resultBuf on creation and free. We should check for NULL after creation and when destroying. You can merge this pull request into a Git repository by running: $ git pull https://github.com/ictmalili/incubator-hawq HAWQ-524 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/444.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #444 commit 16c8d7a4cdc9f6c1f7806e280248ee4e32b1c36f Author: Lili Ma Date: 2016-03-14T09:47:42Z HAWQ-524. Modify processing for cdbDispatchResult->resultBuf when creation and free > do not resolve the condition of 'executor->refResult = NULL' in > executormgr_bind_executor_task() > - > > Key: HAWQ-524 > URL: https://issues.apache.org/jira/browse/HAWQ-524 > Project: Apache HAWQ > Issue Type: Bug > Components: Dispatcher >Affects Versions: 2.0.0 >Reporter: Chunling Wang >Assignee: Lei Chang > > In executormgr.c, the check below should not be an Assert(). The condition > 'executor->refResult == NULL' should be caught at runtime: > bool > executormgr_bind_executor_task(struct DispatchData *data, > QueryExecutor *executor, > SegmentDatabaseDescriptor *desc, > struct DispatchTask *task, > struct DispatchSlice *slice) > { > ... > Assert(executor->refResult != NULL); > ... > } -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-531) Change GUC default value
Hubert Zhang created HAWQ-531: - Summary: Change GUC default value Key: HAWQ-531 URL: https://issues.apache.org/jira/browse/HAWQ-531 Project: Apache HAWQ Issue Type: Improvement Components: Core Reporter: Hubert Zhang Assignee: Lei Chang hawq_rm_vseg_perquery_limit should change to 512 min 1, max 10 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-530) Explain analyze should include allocate resource/ data locality time.
[ https://issues.apache.org/jira/browse/HAWQ-530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hubert Zhang reassigned HAWQ-530: - Assignee: Hubert Zhang (was: Lei Chang) > Explain analyze should include allocate resource/ data locality time. > - > > Key: HAWQ-530 > URL: https://issues.apache.org/jira/browse/HAWQ-530 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > > Currently the timing in EXPLAIN ANALYZE does not include resource allocation > information. > We should include this in EXPLAIN ANALYZE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-530) Explain analyze should include allocate resource/ data locality time.
Hubert Zhang created HAWQ-530: - Summary: Explain analyze should include allocate resource/ data locality time. Key: HAWQ-530 URL: https://issues.apache.org/jira/browse/HAWQ-530 Project: Apache HAWQ Issue Type: Improvement Components: Core Reporter: Hubert Zhang Assignee: Lei Chang Currently the timing in EXPLAIN ANALYZE does not include resource allocation information. We should include this in EXPLAIN ANALYZE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HAWQ-529) Allocate resource for udf in resource negotiator
[ https://issues.apache.org/jira/browse/HAWQ-529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hubert Zhang updated HAWQ-529: -- Assignee: Hubert Zhang (was: Lei Chang) Issue Type: Improvement (was: Bug) > Allocate resource for udf in resource negotiator > > > Key: HAWQ-529 > URL: https://issues.apache.org/jira/browse/HAWQ-529 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > > For a UDF, we should allocate resources at the top level; inside the UDF, we > just inherit the resources allocated at the top level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HAWQ-529) Allocate resource for udf in resource negotiator.
[ https://issues.apache.org/jira/browse/HAWQ-529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hubert Zhang updated HAWQ-529: -- Summary: Allocate resource for udf in resource negotiator. (was: Allocate resource for udf in resource negotiator) > Allocate resource for udf in resource negotiator. > - > > Key: HAWQ-529 > URL: https://issues.apache.org/jira/browse/HAWQ-529 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > > For a UDF, we should allocate resources at the top level; inside the UDF, we > just inherit the resources allocated at the top level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-529) Allocate resource for udf in resource negotiator
Hubert Zhang created HAWQ-529: - Summary: Allocate resource for udf in resource negotiator Key: HAWQ-529 URL: https://issues.apache.org/jira/browse/HAWQ-529 Project: Apache HAWQ Issue Type: Bug Components: Core Reporter: Hubert Zhang Assignee: Lei Chang For a UDF, we should allocate resources at the top level; inside the UDF, we just inherit the resources allocated at the top level. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-528) Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 marks as invalid.
[ https://issues.apache.org/jira/browse/HAWQ-528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lili Ma reassigned HAWQ-528: Assignee: Lili Ma (was: Lei Chang) > Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 > marks as invalid. > --- > > Key: HAWQ-528 > URL: https://issues.apache.org/jira/browse/HAWQ-528 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Lili Ma >Assignee: Lili Ma > > Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 > marks as invalid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HAWQ-528) Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 marks as invalid.
Lili Ma created HAWQ-528: Summary: Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 marks as invalid. Key: HAWQ-528 URL: https://issues.apache.org/jira/browse/HAWQ-528 Project: Apache HAWQ Issue Type: Bug Reporter: Lili Ma Assignee: Lei Chang Reset gp_connections_per_thread for dispatcher guc range from 1 to 512, 0 marks as invalid. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
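The range rule proposed in HAWQ-528 can be sketched as a small validity check. `gp_connections_per_thread_valid` is an illustrative name, not the actual GUC check hook: the sketch only shows that values in [1, 512] are accepted and 0 is rejected.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch of the proposed range rule, not the real GUC hook. */
enum { GP_CONN_PER_THREAD_MIN = 1, GP_CONN_PER_THREAD_MAX = 512 };

static bool gp_connections_per_thread_valid(int value)
{
    return value >= GP_CONN_PER_THREAD_MIN && value <= GP_CONN_PER_THREAD_MAX;
}
```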
[jira] [Commented] (HAWQ-524) do not resolve the condition of 'executor->refResult = NULL' in executormgr_bind_executor_task()
[ https://issues.apache.org/jira/browse/HAWQ-524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192821#comment-15192821 ] Chunling Wang commented on HAWQ-524: In cdbdispatchresult.c, when dispatchResult->resultbuf == NULL, there is no need to free the PGresult objects in function cdbdisp_resetResult(). Change the code like below:
```
void
cdbdisp_resetResult(CdbDispatchResult *dispatchResult)
{
    if (dispatchResult->resultbuf != NULL)
    {
        PQExpBuffer buf = dispatchResult->resultbuf;
        PGresult  **begp = (PGresult **) buf->data;
        PGresult  **endp = (PGresult **) (buf->data + buf->len);
        PGresult  **p;

        /* Free the PGresult objects. */
        for (p = begp; p < endp; ++p)
        {
            Assert(*p != NULL);
            PQclear(*p);
        }
    }
    ...
}
```
> do not resolve the condition of 'executor->refResult = NULL' in > executormgr_bind_executor_task() > - > > Key: HAWQ-524 > URL: https://issues.apache.org/jira/browse/HAWQ-524 > Project: Apache HAWQ > Issue Type: Bug > Components: Dispatcher >Affects Versions: 2.0.0 >Reporter: Chunling Wang >Assignee: Lei Chang > > In executormgr.c, the check below should not be an Assert(). The condition > 'executor->refResult == NULL' should be caught at runtime: > bool > executormgr_bind_executor_task(struct DispatchData *data, > QueryExecutor *executor, > SegmentDatabaseDescriptor *desc, > struct DispatchTask *task, > struct DispatchSlice *slice) > { > ... > Assert(executor->refResult != NULL); > ... > } -- This message was sent by Atlassian JIRA (v6.3.4#6332)
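The NULL guard discussed in the comment above can be shown in a generic, self-contained form. `reset_results` and its plain-pointer types are stand-ins for the libpq `PGresult`/`PQExpBuffer` types, so this compiles without libpq: only walk and free the result array when the buffer actually exists.

```c
#include <assert.h>
#include <stdlib.h>

/* Generic sketch of the guard: skip the free loop when nothing was
 * allocated, otherwise free each element (analogous to PQclear(*p)). */
static void reset_results(void ***items, size_t *count)
{
    if (items == NULL || *items == NULL)
        return;                     /* nothing was ever allocated */
    for (size_t i = 0; i < *count; ++i)
        free((*items)[i]);          /* analogous to PQclear(*p) */
    free(*items);
    *items = NULL;
    *count = 0;
}
```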
[jira] [Assigned] (HAWQ-519) hawqextract column context does not exist error
[ https://issues.apache.org/jira/browse/HAWQ-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Radar Lei reassigned HAWQ-519: -- Assignee: Radar Lei (was: Lei Chang) > hawqextract column context does not exist error > --- > > Key: HAWQ-519 > URL: https://issues.apache.org/jira/browse/HAWQ-519 > Project: Apache HAWQ > Issue Type: Bug > Components: Command Line Tools >Reporter: Daniel Lynch >Assignee: Radar Lei > > When running hawq extract, a Python stack trace is returned because pg_aoseg > no longer has a column called content > https://github.com/apache/incubator-hawq/pull/305
> ```
> [gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect database localhost:5432 gpadmin
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo'
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect FileFormat: AO
> 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- extract AO_FileLocations
> Traceback (most recent call last):
>   File "/usr/local/hawq-master/bin/hawqextract", line 551, in
>     sys.exit(main())
>   File "/usr/local/hawq-master/bin/hawqextract", line 528, in main
>     metadata = extract_metadata(conn, args[0])
>   File "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata
>     cases[file_format]()
>   File "/usr/local/hawq-master/bin/hawqextract", line 363, in extract_AO_metadata
>     'Files': get_ao_table_files(rel_pgclass['oid'], rel_pgclass['relfilenode'])
>   File "/usr/local/hawq-master/bin/hawqextract", line 322, in get_ao_table_files
>     for f in accessor.get_aoseg_files(oid):
>   File "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files
>     return self.exec_query(qry)
>   File "/usr/local/hawq-master/bin/hawqextract", line 129, in exec_query
>     return self.conn.query(sql).dictresult()
> pg.ProgrammingError: ERROR: column "content" does not exist
> LINE 2: SELECT content, segno as fileno, eof as filesize
> ``` -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HAWQ-519) hawqextract column context does not exist error
[ https://issues.apache.org/jira/browse/HAWQ-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Radar Lei updated HAWQ-519: --- Assignee: Lei Chang (was: Radar Lei) > hawqextract column context does not exist error > --- > > Key: HAWQ-519 > URL: https://issues.apache.org/jira/browse/HAWQ-519 > Project: Apache HAWQ > Issue Type: Bug > Components: Command Line Tools >Reporter: Daniel Lynch >Assignee: Lei Chang > > When running hawq extract python stack trace is returned because pg_aoseg no > longer has a column called content > https://github.com/apache/incubator-hawq/pull/305 > ``` > [gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo > 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect > database localhost:5432 gpadmin 20160129:23:21:41:004538 > hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo' > 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect > FileFormat: AO 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- > extract AO_FileLocations Traceback (most recent call last): File > "/usr/local/hawq-master/bin/hawqextract", line 551, in > sys.exit(main()) File "/usr/local/hawq-master/bin/hawqextract", line 528, in > main metadata = extract_metadata(conn, args[0]) File > "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata > cases[file_format]() File "/usr/local/hawq-master/bin/hawqextract", line 363, > in extract_AO_metadata 'Files': get_ao_table_files(rel_pgclass['oid'], > rel_pgclass['relfilenode']) File "/usr/local/hawq-master/bin/hawqextract", > line 322, in get_ao_table_files for f in accessor.get_aoseg_files(oid): File > "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files return > self.exec_query(qry) File "/usr/local/hawq-master/bin/hawqextract", line 129, > in exec_query return self.conn.query(sql).dictresult() pg.ProgrammingError: > ERROR: column "content" does not exist LINE 2: SELECT content, segno as > fileno, eof as 
filesize > ``` -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-455) Disable creating partition tables with non uniform bucket schema
[ https://issues.apache.org/jira/browse/HAWQ-455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15192808#comment-15192808 ] ASF GitHub Bot commented on HAWQ-455: - Github user changleicn commented on the pull request: https://github.com/apache/incubator-hawq/pull/387#issuecomment-196163072 @addisonhuddy this pull request was merged in last week. > Disable creating partition tables with non uniform bucket schema > - > > Key: HAWQ-455 > URL: https://issues.apache.org/jira/browse/HAWQ-455 > Project: Apache HAWQ > Issue Type: Improvement > Components: DDL >Reporter: Haisheng Yuan >Assignee: Lei Chang > > HAWQ user should not be able to create partition tables with non uniform > bucket schema so that don't make orca trip up when it queries across a single > partition. Or else the user should see an error message. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HAWQ-519) hawqextract column context does not exist error
[ https://issues.apache.org/jira/browse/HAWQ-519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei Chang updated HAWQ-519: --- Assignee: Radar Lei (was: Lei Chang) > hawqextract column context does not exist error > --- > > Key: HAWQ-519 > URL: https://issues.apache.org/jira/browse/HAWQ-519 > Project: Apache HAWQ > Issue Type: Bug > Components: Command Line Tools >Reporter: Daniel Lynch >Assignee: Radar Lei > > When running hawq extract python stack trace is returned because pg_aoseg no > longer has a column called content > https://github.com/apache/incubator-hawq/pull/305 > ``` > [gpadmin@node2 ~]$ hawq extract -o rank_table.yaml foo > 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:-try to connect > database localhost:5432 gpadmin 20160129:23:21:41:004538 > hawqextract:node2:gpadmin-[INFO]:-try to extract metadata of table 'foo' > 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- detect > FileFormat: AO 20160129:23:21:41:004538 hawqextract:node2:gpadmin-[INFO]:--- > extract AO_FileLocations Traceback (most recent call last): File > "/usr/local/hawq-master/bin/hawqextract", line 551, in > sys.exit(main()) File "/usr/local/hawq-master/bin/hawqextract", line 528, in > main metadata = extract_metadata(conn, args[0]) File > "/usr/local/hawq-master/bin/hawqextract", line 444, in extract_metadata > cases[file_format]() File "/usr/local/hawq-master/bin/hawqextract", line 363, > in extract_AO_metadata 'Files': get_ao_table_files(rel_pgclass['oid'], > rel_pgclass['relfilenode']) File "/usr/local/hawq-master/bin/hawqextract", > line 322, in get_ao_table_files for f in accessor.get_aoseg_files(oid): File > "/usr/local/hawq-master/bin/hawqextract", line 164, in get_aoseg_files return > self.exec_query(qry) File "/usr/local/hawq-master/bin/hawqextract", line 129, > in exec_query return self.conn.query(sql).dictresult() pg.ProgrammingError: > ERROR: column "content" does not exist LINE 2: SELECT content, segno as > fileno, eof as 
filesize > ``` -- This message was sent by Atlassian JIRA (v6.3.4#6332)