[jira] [Closed] (HAWQ-914) Improve user experience of HAWQ's build infrastructure
[ https://issues.apache.org/jira/browse/HAWQ-914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Guo closed HAWQ-914. - Resolution: Fixed > Improve user experience of HAWQ's build infrastructure > -- > > Key: HAWQ-914 > URL: https://issues.apache.org/jira/browse/HAWQ-914 > Project: Apache HAWQ > Issue Type: Improvement > Components: Build >Affects Versions: 2.0.0.0-incubating >Reporter: Roman Shaposhnik >Assignee: Paul Guo > Fix For: 2.0.1.0-incubating > > > This is likely to end up being an umbrella JIRA so feel free to fork off > sub-tasks whenever it makes sense. > As an end-user of HAWQ's build system, I'd like to see the default of the > build system (running configure/etc. with no arguments) to be: > # treating optional missing dependencies with a WARNING similar to what > PostgreSQL configure does in the following example: > {noformat} > checking for bison... no > configure: WARNING: > *** Without Bison you will not be able to build PostgreSQL from CVS nor > *** change any of the parser definition files. You can obtain Bison from > *** a GNU mirror site. (If you are using the official distribution of > *** PostgreSQL then you do not need to worry about this, because the Bison > *** output is pre-generated.) To use a different yacc program (possible, > *** but not recommended), set the environment variable YACC before running > *** 'configure'. > {noformat} > # treating all the missing suggested dependencies by failing the build and > suggesting how to point at binary copies of these missing dependencies > similar to what PostgreSQL configure does in the following example: > {noformat} > checking for -ledit... no > configure: error: readline library not found > If you have readline already installed, see config.log for details on the > failure. It is possible the compiler isn't looking in the proper directory. > Use --without-readline to disable readline support. 
> {noformat} > # treating the core dependencies the same as suggested dependencies, but > obviously without the option of continuing the build without them. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
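The three-tier dependency policy requested above (optional -> warning, suggested and core -> hard configure error with a hint) can be sketched as follows. This is an illustration only; `check_dependencies` and the tuple layout are hypothetical, and HAWQ's real build system is autoconf-based, not Python.

```python
def check_dependencies(found, deps):
    """Classify missing build dependencies as the JIRA requests:
    optional -> WARNING (build continues), suggested/core -> hard error."""
    warnings, errors = [], []
    for name, severity, hint in deps:
        if name in found:
            continue  # dependency present, nothing to report
        if severity == "optional":
            warnings.append("WARNING: %s not found; %s" % (name, hint))
        else:  # "suggested" or "core": fail configure, point at a remedy
            errors.append("error: %s not found; %s" % (name, hint))
    return warnings, errors

warnings, errors = check_dependencies(
    found={"gcc"},
    deps=[
        ("bison", "optional", "parser files cannot be regenerated"),
        ("readline", "suggested", "use --without-readline to disable"),
        ("gcc", "core", "a C compiler is required"),
    ],
)
```

With these inputs, the missing optional dependency (bison) produces one warning while the missing suggested dependency (readline) fails the run, matching the PostgreSQL configure behavior quoted above.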
[jira] [Resolved] (HAWQ-1000) Set dummy workfile pointer to NULL after calling ExecWorkFile_Close()
[ https://issues.apache.org/jira/browse/HAWQ-1000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming LI resolved HAWQ-1000. --- Resolution: Fixed > Set dummy workfile pointer to NULL after calling ExecWorkFile_Close() > - > > Key: HAWQ-1000 > URL: https://issues.apache.org/jira/browse/HAWQ-1000 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Ming LI >Assignee: Ming LI > Fix For: 2.0.1.0-incubating > > > The workfile parameter is freed inside ExecWorkFile_Close(), but in the > calling function the pointer variable still exists; we need to set it to NULL > immediately, otherwise freed memory may be accessed afterward.
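The fix this ticket describes is C pointer hygiene: null out the caller's pointer immediately after the close routine frees the object. A language-neutral sketch of the same discipline (the names `WorkFile`, `workfile_close`, and `caller` are illustrative, not HAWQ's actual API):

```python
class WorkFile:
    """Stand-in for the C ExecWorkFile struct; illustrative only."""
    def __init__(self):
        self.closed = False

def workfile_close(wf):
    # In the C code, ExecWorkFile_Close() frees the struct itself,
    # leaving the caller holding a dangling pointer.
    wf.closed = True

def caller():
    wf = WorkFile()
    workfile_close(wf)
    wf = None  # analogue of `workfile = NULL;` right after the close call
    # any later access now fails fast instead of touching freed memory
    return wf

result = caller()
```

The point of the pattern is that a later accidental use hits an obvious NULL check (or a clean crash) rather than silently reading freed memory.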
[jira] [Commented] (HAWQ-999) Treat hash table as random when file count is not in proportion to bucket number of table.
[ https://issues.apache.org/jira/browse/HAWQ-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15418251#comment-15418251 ] Hubert Zhang commented on HAWQ-999: --- [~jianlirong] We need to investigate why bucket number and file count mismatch happens. This JIRA is just to ensure that even when a mismatch happens, the query will not fail (of course, catalog and physical file information must be consistent) > Treat hash table as random when file count is not in proportion to bucket > number of table. > -- > > Key: HAWQ-999 > URL: https://issues.apache.org/jira/browse/HAWQ-999 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > Fix For: 2.0.1.0-incubating > > > By definition, file count of a hash table should be equal to or a multiple of > the bucket number of the table. So if mismatch happens, we should not treat > it as hash table in data locality algorithm.
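The consistency rule the issue states can be expressed directly. A sketch with a hypothetical helper name (the real check lives in HAWQ's C data-locality code):

```python
def treat_as_hash(file_count, bucket_num):
    """A hash table's file count must equal, or be a multiple of,
    its bucket number; on any mismatch, fall back to treating the
    table as randomly distributed in the data-locality algorithm."""
    if bucket_num <= 0 or file_count <= 0:
        return False  # degenerate inputs: never claim hash distribution
    return file_count % bucket_num == 0
```

So a table with 6 buckets and 6 or 12 files keeps its hash treatment, while 7 files with 6 buckets falls back to random, which is exactly the fallback this JIRA introduces.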
[jira] [Commented] (HAWQ-999) Treat hash table as random when file count is not in proportion to bucket number of table.
[ https://issues.apache.org/jira/browse/HAWQ-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15418252#comment-15418252 ] Hubert Zhang commented on HAWQ-999: --- Lirong Jian We need to investigate why bucket number and file count mismatch happens. This JIRA is just to ensure that even when a mismatch happens, the query will not fail (of course, catalog and physical file information must be consistent) > Treat hash table as random when file count is not in proportion to bucket > number of table. > -- > > Key: HAWQ-999 > URL: https://issues.apache.org/jira/browse/HAWQ-999 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > Fix For: 2.0.1.0-incubating > > > By definition, file count of a hash table should be equal to or a multiple of > the bucket number of the table. So if mismatch happens, we should not treat > it as hash table in data locality algorithm.
[jira] [Updated] (HAWQ-1000) Set dummy workfile pointer to NULL after calling ExecWorkFile_Close()
[ https://issues.apache.org/jira/browse/HAWQ-1000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-1000: Fix Version/s: 2.0.1.0-incubating > Set dummy workfile pointer to NULL after calling ExecWorkFile_Close() > - > > Key: HAWQ-1000 > URL: https://issues.apache.org/jira/browse/HAWQ-1000 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Ming LI >Assignee: Ming LI > Fix For: 2.0.1.0-incubating > > > The workfile parameter is freed inside ExecWorkFile_Close(), but in the > calling function the pointer variable still exists; we need to set it to NULL > immediately, otherwise freed memory may be accessed afterward.
[jira] [Updated] (HAWQ-999) Treat hash table as random when file count is not in proportion to bucket number of table.
[ https://issues.apache.org/jira/browse/HAWQ-999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-999: --- Fix Version/s: 2.0.1.0-incubating > Treat hash table as random when file count is not in proportion to bucket > number of table. > -- > > Key: HAWQ-999 > URL: https://issues.apache.org/jira/browse/HAWQ-999 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > Fix For: 2.0.1.0-incubating > > > By definition, file count of a hash table should be equal to or a multiple of > the bucket number of the table. So if mismatch happens, we should not treat > it as hash table in data locality algorithm.
[jira] [Updated] (HAWQ-997) HAWQ doesn't send PXF data type with precision
[ https://issues.apache.org/jira/browse/HAWQ-997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-997: --- Fix Version/s: backlog > HAWQ doesn't send PXF data type with precision > --- > > Key: HAWQ-997 > URL: https://issues.apache.org/jira/browse/HAWQ-997 > Project: Apache HAWQ > Issue Type: Bug > Components: PXF >Reporter: Shivram Mani >Assignee: Goden Yao > Fix For: backlog > > > HAWQ sends attribute and type information to PXF via the REST API using > x-gp-attr-typename. Attributes such as varchar(3) and char(3) are sent as varchar > and char. This causes HAWQ-992
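The reported gap can be shown schematically: the type name sent to PXF drops the precision (typmod), so `varchar(3)` arrives as plain `varchar`. `typename_header` below is a hypothetical helper for illustration, not PXF's actual code:

```python
def typename_header(base_type, typmod=None):
    """Value sent in the x-gp-attr-typename field. Today only the base
    type is sent; this JIRA asks for the precision (typmod) as well."""
    if typmod is None:
        return base_type                    # current: varchar(3) -> "varchar"
    return "%s(%d)" % (base_type, typmod)   # desired: precision preserved
```

Keeping the typmod lets the PXF side reconstruct the exact column definition instead of a widened one, which is what HAWQ-992 runs into.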
[jira] [Commented] (HAWQ-999) Treat hash table as random when file count is not in proportion to bucket number of table.
[ https://issues.apache.org/jira/browse/HAWQ-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15417271#comment-15417271 ] Lirong Jian commented on HAWQ-999: -- Would you please explain a little bit under what circumstances this mismatch would occur? Lirong > Treat hash table as random when file count is not in proportion to bucket > number of table. > -- > > Key: HAWQ-999 > URL: https://issues.apache.org/jira/browse/HAWQ-999 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > > By definition, file count of a hash table should be equal to or a multiple of > the bucket number of the table. So if mismatch happens, we should not treat > it as hash table in data locality algorithm.
[jira] [Assigned] (HAWQ-1000) Set dummy workfile pointer to NULL after calling ExecWorkFile_Close()
[ https://issues.apache.org/jira/browse/HAWQ-1000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming LI reassigned HAWQ-1000: - Assignee: Ming LI (was: Lei Chang) > Set dummy workfile pointer to NULL after calling ExecWorkFile_Close() > - > > Key: HAWQ-1000 > URL: https://issues.apache.org/jira/browse/HAWQ-1000 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Ming LI >Assignee: Ming LI > > The workfile parameter is freed inside ExecWorkFile_Close(), but in the > calling function the pointer variable still exists; we need to set it to NULL > immediately, otherwise freed memory may be accessed afterward.
[GitHub] incubator-hawq issue #811: HAWQ-1000: Set dummy workfile pointer to NULL aft...
Github user xunzhang commented on the issue: https://github.com/apache/incubator-hawq/pull/811 The defects above are covered by the modification, LGTM. HAWQ-1000 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Created] (HAWQ-1000) Set dummy workfile pointer to NULL after calling ExecWorkFile_Close()
Ming LI created HAWQ-1000: - Summary: Set dummy workfile pointer to NULL after calling ExecWorkFile_Close() Key: HAWQ-1000 URL: https://issues.apache.org/jira/browse/HAWQ-1000 Project: Apache HAWQ Issue Type: Bug Reporter: Ming LI Assignee: Lei Chang The workfile parameter is freed inside ExecWorkFile_Close(), but in the calling function the pointer variable still exists; we need to set it to NULL immediately, otherwise freed memory may be accessed afterward.
[GitHub] incubator-hawq pull request #845: HAWQ-999. Treat hash table as random when ...
GitHub user zhangh43 opened a pull request: https://github.com/apache/incubator-hawq/pull/845 HAWQ-999. Treat hash table as random when file count is not in proportion to bucket number of table. By definition, file count of a hash table should be equal to or a multiple of the bucket number of the table. So if mismatch happens, we should not treat it as hash table in data locality algorithm. You can merge this pull request into a Git repository by running: $ git pull https://github.com/zhangh43/incubator-hawq hawq999 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/845.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #845 commit ecc68ab75c9513e80ed1d5c1b5ca7f313a95ef70 Author: hzhang2 Date: 2016-08-11T09:44:47Z HAWQ-999. Treat hash table as random when file count is not in proportion to bucket number of table.
[GitHub] incubator-hawq issue #843: HAWQ-914. Improve user experience of HAWQ's build...
Github user radarwave commented on the issue: https://github.com/apache/incubator-hawq/pull/843 Cool +1
[jira] [Created] (HAWQ-999) Treat hash table as random when file count is not in proportion to bucket number of table.
Hubert Zhang created HAWQ-999: - Summary: Treat hash table as random when file count is not in proportion to bucket number of table. Key: HAWQ-999 URL: https://issues.apache.org/jira/browse/HAWQ-999 Project: Apache HAWQ Issue Type: Improvement Components: Core Reporter: Hubert Zhang Assignee: Lei Chang By definition, file count of a hash table should be equal to or a multiple of the bucket number of the table. So if mismatch happens, we should not treat it as hash table in data locality algorithm.
[jira] [Assigned] (HAWQ-999) Treat hash table as random when file count is not in proportion to bucket number of table.
[ https://issues.apache.org/jira/browse/HAWQ-999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hubert Zhang reassigned HAWQ-999: - Assignee: Hubert Zhang (was: Lei Chang) > Treat hash table as random when file count is not in proportion to bucket > number of table. > -- > > Key: HAWQ-999 > URL: https://issues.apache.org/jira/browse/HAWQ-999 > Project: Apache HAWQ > Issue Type: Improvement > Components: Core >Reporter: Hubert Zhang >Assignee: Hubert Zhang > > By definition, file count of a hash table should be equal to or a multiple of > the bucket number of the table. So if mismatch happens, we should not treat > it as hash table in data locality algorithm.
[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdwon at th...
Github user jiadexin commented on a diff in the pull request: https://github.com/apache/incubator-hawq/pull/837#discussion_r74380940 --- Diff: src/backend/access/external/test/pxffilters_test.c --- @@ -61,7 +62,7 @@ test__supported_filter_type(void **state) /* go over pxf_supported_types array */ int nargs = sizeof(pxf_supported_types) / sizeof(Oid); - assert_int_equal(nargs, 12); + assert_int_equal(nargs, 13); --- End diff -- This test checks the number of pxf_supported_types; its old value was hard-coded too (12), and after adding DATEOID it becomes 13.
[jira] [Updated] (HAWQ-991) Add support for "HAWQ register" that could register tables by using "hawq extract" output
[ https://issues.apache.org/jira/browse/HAWQ-991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hongwu updated HAWQ-991: Description: User should be able to use HAWQ Register utility to register HAWQ table files/directories into a new HAWQ cluster so that the data can be copied from one cluster to another, and the HAWQ catalog metadata is synchronized with these HDFS HAWQ files. The ask for this feature is basically to pass `hawq register` an input file (or set of files) containing the last-known-good metadata that it can use to update the portion of the catalog managing HDFS blocks. Prior to every new data load, the user can leverage the `hawq extract` command to snapshot the metadata for every table to protect against corruption / divergence. > Add support for "HAWQ register" that could register tables by using "hawq > extract" output > - > > Key: HAWQ-991 > URL: https://issues.apache.org/jira/browse/HAWQ-991 > Project: Apache HAWQ > Issue Type: Improvement > Components: Command Line Tools, External Tables >Affects Versions: 2.0.1.0-incubating >Reporter: hongwu >Assignee: hongwu > Fix For: 2.0.1.0-incubating > > > User should be able to use HAWQ Register utility to register HAWQ table > files/directories into a new HAWQ cluster so that the data can be copied from > one cluster to another, and the HAWQ catalog metadata is synchronized with > these HDFS HAWQ files. > The ask for this feature is basically to pass `hawq register` an input file > (or set of files) containing the last-known-good metadata that it can use to > update the portion of the catalog managing HDFS blocks. Prior to every new > data load, the user can leverage the `hawq extract` command to snapshot the > metadata for every table to protect against corruption / divergence.
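The extract-then-register round trip described above can be sketched as a toy. The dict layout and `register_table` are hypothetical; the real `hawq extract` emits YAML metadata and `hawq register` updates the HAWQ catalog's HDFS block bookkeeping.

```python
def register_table(catalog, metadata):
    """Merge a last-known-good metadata snapshot (as produced by a
    `hawq extract`-style tool) into a catalog mapping table -> files."""
    table = metadata["table"]
    catalog[table] = list(metadata["files"])  # sync the HDFS file list
    return catalog

# Snapshot taken before a data load; paths are made-up examples.
snapshot = {"table": "sales",
            "files": ["hdfs://nn/hawq/sales/1", "hdfs://nn/hawq/sales/2"]}
catalog = register_table({}, snapshot)
```

The value of the workflow is that the snapshot is taken while the catalog is known-good, so after corruption or a cluster move the register step restores a consistent catalog-to-HDFS mapping.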
[GitHub] incubator-hawq pull request #844: HAWQ-998. Fix test for aggregate-with-null...
Github user asfgit closed the pull request at: https://github.com/apache/incubator-hawq/pull/844
[GitHub] incubator-hawq issue #844: HAWQ-998. Fix test for aggregate-with-null.
Github user linwen commented on the issue: https://github.com/apache/incubator-hawq/pull/844 +1
[GitHub] incubator-hawq issue #844: HAWQ-998. Fix test for aggregate-with-null.
Github user huor commented on the issue: https://github.com/apache/incubator-hawq/pull/844 +1
[GitHub] incubator-hawq pull request #844: HAWQ-998. Fix test for aggregate-with-null...
GitHub user stanlyxiang opened a pull request: https://github.com/apache/incubator-hawq/pull/844 HAWQ-998. Fix test for aggregate-with-null. You can merge this pull request into a Git repository by running: $ git pull https://github.com/stanlyxiang/incubator-hawq fix_agg_featuretest Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/844.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #844 commit 217c17c9c6291bc5b9c746bb528cedf9dc43220d Author: stanlyxiang Date: 2016-08-11T06:39:21Z HAWQ-998. Fix test for aggregate-with-null.
[jira] [Updated] (HAWQ-998) Fix test for aggregate-with-null test.
[ https://issues.apache.org/jira/browse/HAWQ-998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Sheng updated HAWQ-998: - Fix Version/s: 2.0.1.0-incubating > Fix test for aggregate-with-null test. > -- > > Key: HAWQ-998 > URL: https://issues.apache.org/jira/browse/HAWQ-998 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Xiang Sheng >Assignee: Lei Chang >Priority: Minor > Fix For: 2.0.1.0-incubating > > > For TestAggregateWithNull in src/test/feature/query/test_aggregate, the > data order of expectStr does not match that of resultStr. We should add an ORDER BY > clause to make the result unique and deterministic for more reliable CI tests.
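The flakiness described here is easy to reproduce in miniature: without ORDER BY the row order a query returns is undefined, so comparing the expected and actual results as raw strings fails intermittently, while an order-insensitive comparison (or an ORDER BY in the query) is stable. The rows below are made-up examples:

```python
# Same rows, delivered in a different (but equally valid) order.
expect_rows = ["1|a", "2|b", "3|c"]
result_rows = ["2|b", "1|a", "3|c"]

naive_match = expect_rows == result_rows             # flaky: order-sensitive
stable_match = sorted(expect_rows) == sorted(result_rows)  # deterministic
```

Adding ORDER BY to the test query is the equivalent of the `sorted(...)` comparison, pushed into SQL so the expected output file can stay a plain string.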
[jira] [Updated] (HAWQ-998) Fix test for aggregate-with-null test.
[ https://issues.apache.org/jira/browse/HAWQ-998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Sheng updated HAWQ-998: - Issue Type: Bug (was: Test) > Fix test for aggregate-with-null test. > -- > > Key: HAWQ-998 > URL: https://issues.apache.org/jira/browse/HAWQ-998 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Xiang Sheng >Assignee: Lei Chang > > For TestAggregateWithNull in src/test/feature/query/test_aggregate, the > data order of expectStr does not match that of resultStr. We should add an ORDER BY > clause to make the result unique and deterministic for more reliable CI tests.
[jira] [Updated] (HAWQ-998) Fix test for aggregate-with-null test.
[ https://issues.apache.org/jira/browse/HAWQ-998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Sheng updated HAWQ-998: - Priority: Minor (was: Major) > Fix test for aggregate-with-null test. > -- > > Key: HAWQ-998 > URL: https://issues.apache.org/jira/browse/HAWQ-998 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Xiang Sheng >Assignee: Lei Chang >Priority: Minor > > For TestAggregateWithNull in src/test/feature/query/test_aggregate, the > data order of expectStr does not match that of resultStr. We should add an ORDER BY > clause to make the result unique and deterministic for more reliable CI tests.
[jira] [Created] (HAWQ-998) Fix test for aggregate-with-null test.
Xiang Sheng created HAWQ-998: Summary: Fix test for aggregate-with-null test. Key: HAWQ-998 URL: https://issues.apache.org/jira/browse/HAWQ-998 Project: Apache HAWQ Issue Type: Test Reporter: Xiang Sheng Assignee: Lei Chang For TestAggregateWithNull in src/test/feature/query/test_aggregate, the data order of expectStr does not match that of resultStr. We should add an ORDER BY clause to make the result unique and deterministic for more reliable CI tests.