[GitHub] incubator-hawq pull request #844: HAWQ-998. Fix test for aggregate-with-null...
Github user asfgit closed the pull request at: https://github.com/apache/incubator-hawq/pull/844 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] incubator-hawq issue #844: HAWQ-998. Fix test for aggregate-with-null.
Github user linwen commented on the issue: https://github.com/apache/incubator-hawq/pull/844 +1
[GitHub] incubator-hawq issue #844: HAWQ-998. Fix test for aggregate-with-null.
Github user huor commented on the issue: https://github.com/apache/incubator-hawq/pull/844 +1
[GitHub] incubator-hawq pull request #844: HAWQ-998. Fix test for aggregate-with-null...
GitHub user stanlyxiang opened a pull request: https://github.com/apache/incubator-hawq/pull/844 HAWQ-998. Fix test for aggregate-with-null. You can merge this pull request into a Git repository by running: $ git pull https://github.com/stanlyxiang/incubator-hawq fix_agg_featuretest Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/844.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #844 commit 217c17c9c6291bc5b9c746bb528cedf9dc43220d Author: stanlyxiang Date: 2016-08-11T06:39:21Z HAWQ-998. Fix test for aggregate-with-null.
[jira] [Updated] (HAWQ-998) Fix test for aggregate-with-null test.
[ https://issues.apache.org/jira/browse/HAWQ-998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Sheng updated HAWQ-998: - Fix Version/s: 2.0.1.0-incubating > Fix test for aggregate-with-null test. > -- > > Key: HAWQ-998 > URL: https://issues.apache.org/jira/browse/HAWQ-998 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Xiang Sheng >Assignee: Lei Chang >Priority: Minor > Fix For: 2.0.1.0-incubating > > > For test TestAggregateWithNull in src/test/feature/query/test_aggregate, the > data order of expectStr is not equal to that of resultStr. We should add an order by > clause to make sure the result is unique and always the same, for a more reliable CI test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HAWQ-998) Fix test for aggregate-with-null test.
[ https://issues.apache.org/jira/browse/HAWQ-998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Sheng updated HAWQ-998: - Issue Type: Bug (was: Test) > Fix test for aggregate-with-null test. > -- > > Key: HAWQ-998 > URL: https://issues.apache.org/jira/browse/HAWQ-998 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Xiang Sheng >Assignee: Lei Chang > > For test TestAggregateWithNull in src/test/feature/query/test_aggregate, the > data order of expectStr is not equal to that of resultStr. We should add an order by > clause to make sure the result is unique and always the same, for a more reliable CI test.
[jira] [Updated] (HAWQ-998) Fix test for aggregate-with-null test.
[ https://issues.apache.org/jira/browse/HAWQ-998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Sheng updated HAWQ-998: - Priority: Minor (was: Major) > Fix test for aggregate-with-null test. > -- > > Key: HAWQ-998 > URL: https://issues.apache.org/jira/browse/HAWQ-998 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Xiang Sheng >Assignee: Lei Chang >Priority: Minor > > For test TestAggregateWithNull in src/test/feature/query/test_aggregate, the > data order of expectStr is not equal to that of resultStr. We should add an order by > clause to make sure the result is unique and always the same, for a more reliable CI test.
[jira] [Created] (HAWQ-998) Fix test for aggregate-with-null test.
Xiang Sheng created HAWQ-998: Summary: Fix test for aggregate-with-null test. Key: HAWQ-998 URL: https://issues.apache.org/jira/browse/HAWQ-998 Project: Apache HAWQ Issue Type: Test Reporter: Xiang Sheng Assignee: Lei Chang For test TestAggregateWithNull in src/test/feature/query/test_aggregate, the data order of expectStr is not equal to that of resultStr. We should add an order by clause to make sure the result is unique and always the same, for a more reliable CI test.
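The flakiness described in HAWQ-998 can be illustrated with a minimal sketch (the rows below are hypothetical, not the actual test data in src/test/feature/query/test_aggregate): without an ORDER BY, aggregate result rows may arrive in any order, so a verbatim comparison of expectStr and resultStr fails intermittently, while an order-insensitive comparison (the effect of adding ORDER BY on both sides) is stable.

```python
# Hypothetical example rows standing in for expectStr / resultStr.
expect_rows = ["a|1", "b|2", "c|3"]   # expected output, in file order
result_rows = ["b|2", "c|3", "a|1"]   # actual output; order depends on the plan

# A verbatim comparison is flaky: it depends on row arrival order.
verbatim_match = (expect_rows == result_rows)

# Sorting both sides (what an ORDER BY clause guarantees) is deterministic.
ordered_match = (sorted(expect_rows) == sorted(result_rows))

print(verbatim_match, ordered_match)
```

This is why the fix adds ORDER BY to the test queries rather than relaxing the comparison itself: the result file then has exactly one valid form.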
[GitHub] incubator-hawq issue #843: HAWQ-914. Improve user experience of HAWQ's build...
Github user huor commented on the issue: https://github.com/apache/incubator-hawq/pull/843 +1
[jira] [Commented] (HAWQ-992) PXF Hive data type check in Fragmenter too restrictive
[ https://issues.apache.org/jira/browse/HAWQ-992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416122#comment-15416122 ] Shivram Mani commented on HAWQ-992: --- HAWQ/PXF doesn't send the varchar/char length/precision information to the PXF webapp (HAWQ-997). This results in the above error during data deserialization while reading the data (Resolver). > PXF Hive data type check in Fragmenter too restrictive > -- > > Key: HAWQ-992 > URL: https://issues.apache.org/jira/browse/HAWQ-992 > Project: Apache HAWQ > Issue Type: Bug > Components: PXF >Reporter: Shivram Mani >Assignee: Shivram Mani > Fix For: backlog > > > HiveDataFragmenter, used by both the HiveText and HiveRC profiles, has a very > strict type check. > Hawq type numeric(10,10) is compatible with hive's decimal(10,10), > but Hawq type numeric is not compatible with hive's decimal(10,10). > A similar issue exists with other data types which have variable optional > arguments. The type check should be modified to allow a hawq type that is a > compatible type but without optional precision/length arguments to work with > the corresponding hive type. > Support the following additional hive data types: date, varchar, char
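The relaxed check that the HAWQ-992 description asks for can be sketched as follows. This is a hypothetical helper, not the actual HiveDataFragmenter code: it strips the optional precision/length arguments before comparing base types, so numeric matches decimal(10,10) just as numeric(10,10) does.

```python
import re

# Hypothetical compatibility table; the real mapping lives in
# HiveDataFragmenter and covers many more types.
HAWQ_TO_HIVE = {"numeric": "decimal", "varchar": "varchar", "char": "char"}

def base_type(type_name):
    """Strip optional (precision/length) args: 'decimal(10,10)' -> 'decimal'."""
    return re.sub(r"\(.*\)$", "", type_name.strip())

def compatible(hawq_type, hive_type):
    """Compare base type names, ignoring precision/length arguments."""
    return HAWQ_TO_HIVE.get(base_type(hawq_type)) == base_type(hive_type)

print(compatible("numeric(10,10)", "decimal(10,10)"))  # already accepted
print(compatible("numeric", "decimal(10,10)"))         # now also accepted
```

Note that, per the later comments on this issue, the check could not actually be relaxed this way end to end, because Hive's own type parser still requires explicit length arguments downstream.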
[jira] [Created] (HAWQ-997) HAWQ doesn't send PXF data type with precision
Shivram Mani created HAWQ-997: - Summary: HAWQ doesn't send PXF data type with precision Key: HAWQ-997 URL: https://issues.apache.org/jira/browse/HAWQ-997 Project: Apache HAWQ Issue Type: Bug Components: PXF Reporter: Shivram Mani Assignee: Goden Yao HAWQ/PXF sends information about attributes and their types via the REST API using x-gp-attr-typename. Attributes such as varchar(3) and char(3) are sent as varchar and char. This causes HAWQ-992.
[jira] [Comment Edited] (HAWQ-992) PXF Hive data type check in Fragmenter too restrictive
[ https://issues.apache.org/jira/browse/HAWQ-992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415589#comment-15415589 ] Shivram Mani edited comment on HAWQ-992 at 8/10/16 9:57 PM: HiveText, HiveRc and HiveOrc use a fragmenter that doesn't send back the partition properties (column types, serialization info, etc.) and uses the hawq DDL column types. Even though the fragmenter can permit hawq types such as varchar and char without length, the Hive type checker TypeInfoUtils doesn't permit using types without length. For example, using varchar in a hawq table results in the following exception: "java.lang.Exception: java.lang.IllegalArgumentException: varchar type is specified without length" {code} Caused by: java.lang.IllegalArgumentException: varchar type is specified without length: string,string,int,double,decimal,timestamp,float,bigint,boolean,small at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseType(TypeInfoUtils.java:403) at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseTypeInfos(TypeInfoUtils.java:305) at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.getTypeInfosFromTypeString(TypeInfoUtils.java:765) at org.apache.hadoop.hive.ql.io.orc.OrcSerde.initialize(OrcSerde.java:104) at org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.initSerde(HiveORCSerdeResolver.java:147) at org.apache.hawq.pxf.plugins.hive.HiveResolver.(HiveResolver.java:106) at org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.(HiveORCSerdeResolver.java:70) {code} For this reason we will have to enforce a strict type check including precision/length. The only other option would be to send back the partition properties as part of the userData. This is done in makeUserData in the fragmenter. was (Author: shivram): Even though the fragmenter can permit hawq types such as varchar and char without length, the Hive type checker TypeInfoUtils doesn't permit using types without length.
For example, using varchar in a hawq table results in the following exception: "java.lang.Exception: java.lang.IllegalArgumentException: varchar type is specified without length" {code} Caused by: java.lang.IllegalArgumentException: varchar type is specified without length: string,string,int,double,decimal,timestamp,float,bigint,boolean,small at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseType(TypeInfoUtils.java:403) at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseTypeInfos(TypeInfoUtils.java:305) at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.getTypeInfosFromTypeString(TypeInfoUtils.java:765) at org.apache.hadoop.hive.ql.io.orc.OrcSerde.initialize(OrcSerde.java:104) at org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.initSerde(HiveORCSerdeResolver.java:147) at org.apache.hawq.pxf.plugins.hive.HiveResolver.(HiveResolver.java:106) at org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.(HiveORCSerdeResolver.java:70) {code} > PXF Hive data type check in Fragmenter too restrictive > -- > > Key: HAWQ-992 > URL: https://issues.apache.org/jira/browse/HAWQ-992 > Project: Apache HAWQ > Issue Type: Bug > Components: PXF >Reporter: Shivram Mani >Assignee: Shivram Mani > Fix For: backlog > > > HiveDataFragmenter, used by both the HiveText and HiveRC profiles, has a very > strict type check. > Hawq type numeric(10,10) is compatible with hive's decimal(10,10), > but Hawq type numeric is not compatible with hive's decimal(10,10). > A similar issue exists with other data types which have variable optional > arguments. The type check should be modified to allow a hawq type that is a > compatible type but without optional precision/length arguments to work with > the corresponding hive type. > Support the following additional hive data types: date, varchar, char
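The constraint the stack trace above points at can be mimicked with a small sketch. This is a stand-in, not Hive's actual TypeInfoUtils: a parser over a comma-separated type string that, like Hive's, rejects parameterized types written without their length argument.

```python
def parse_type(tok):
    """Mimic of the rule in the stack trace: varchar/char need explicit length."""
    base = tok.split("(")[0]
    if base in ("varchar", "char") and "(" not in tok:
        raise ValueError("%s type is specified without length: %s" % (base, tok))
    return tok

def parse_type_string(type_string):
    """Parse a comma-separated type string, e.g. 'string,int,varchar(3)'."""
    return [parse_type(t) for t in type_string.split(",")]

print(parse_type_string("string,int,varchar(3)"))  # accepted
try:
    parse_type_string("string,int,varchar")        # bare varchar is rejected
except ValueError as e:
    print("rejected:", e)
```

This is why the comment concludes that the fragmenter must keep a strict check (or ship the partition properties via userData): by the time the ORC SerDe initializes, a bare varchar can no longer be parsed.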
[jira] [Comment Edited] (HAWQ-992) PXF Hive data type check in Fragmenter too restrictive
[ https://issues.apache.org/jira/browse/HAWQ-992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415589#comment-15415589 ] Shivram Mani edited comment on HAWQ-992 at 8/10/16 9:37 PM: Even though the fragmenter can permit hawq types such as varchar and char without length, the Hive type checker TypeInfoUtils doesn't permit using types without length. For example, using varchar in a hawq table results in the following exception: "java.lang.Exception: java.lang.IllegalArgumentException: varchar type is specified without length" {code} Caused by: java.lang.IllegalArgumentException: varchar type is specified without length: string,string,int,double,decimal,timestamp,float,bigint,boolean,small at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseType(TypeInfoUtils.java:403) at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils$TypeInfoParser.parseTypeInfos(TypeInfoUtils.java:305) at org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils.getTypeInfosFromTypeString(TypeInfoUtils.java:765) at org.apache.hadoop.hive.ql.io.orc.OrcSerde.initialize(OrcSerde.java:104) at org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.initSerde(HiveORCSerdeResolver.java:147) at org.apache.hawq.pxf.plugins.hive.HiveResolver.(HiveResolver.java:106) at org.apache.hawq.pxf.plugins.hive.HiveORCSerdeResolver.(HiveORCSerdeResolver.java:70) {code} was (Author: shivram): Even though the fragmenter can permit hawq types such as varchar and char without length, the Hive type checker TypeInfoUtils doesn't permit using types without length.
For example, using varchar in a hawq table results in the following exception: "java.lang.Exception: java.lang.IllegalArgumentException: varchar type is specified without length" > PXF Hive data type check in Fragmenter too restrictive > -- > > Key: HAWQ-992 > URL: https://issues.apache.org/jira/browse/HAWQ-992 > Project: Apache HAWQ > Issue Type: Bug > Components: PXF >Reporter: Shivram Mani >Assignee: Shivram Mani > Fix For: backlog > > > HiveDataFragmenter, used by both the HiveText and HiveRC profiles, has a very > strict type check. > Hawq type numeric(10,10) is compatible with hive's decimal(10,10), > but Hawq type numeric is not compatible with hive's decimal(10,10). > A similar issue exists with other data types which have variable optional > arguments. The type check should be modified to allow a hawq type that is a > compatible type but without optional precision/length arguments to work with > the corresponding hive type. > Support the following additional hive data types: date, varchar, char
[jira] [Updated] (HAWQ-996) gpfdist online help instructs user to download HAWQ Loader package from incorrect site
[ https://issues.apache.org/jira/browse/HAWQ-996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-996: --- Priority: Minor (was: Major) > gpfdist online help instructs user to download HAWQ Loader package from > incorrect site > -- > > Key: HAWQ-996 > URL: https://issues.apache.org/jira/browse/HAWQ-996 > Project: Apache HAWQ > Issue Type: Bug > Components: Command Line Tools >Reporter: Lisa Owen >Assignee: Lei Chang >Priority: Minor > Fix For: 2.0.1.0-incubating > > > running "gpfdist --help" displays the following incorrect output: > * > RUNNING GPFDIST AS A WINDOWS SERVICE > * > HAWQ Loaders allow gpfdist to run as a Windows Service. > Follow the instructions below to download, register and > activate gpfdist as a service: > 1. Update your HAWQ Loader package to the latest >version. This package is available from the >EMC Download Center (https://emc.subscribenet.com)
[jira] [Updated] (HAWQ-996) gpfdist online help instructs user to download HAWQ Loader package from incorrect site
[ https://issues.apache.org/jira/browse/HAWQ-996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-996: --- Fix Version/s: 2.0.1.0-incubating > gpfdist online help instructs user to download HAWQ Loader package from > incorrect site > -- > > Key: HAWQ-996 > URL: https://issues.apache.org/jira/browse/HAWQ-996 > Project: Apache HAWQ > Issue Type: Bug > Components: Command Line Tools >Reporter: Lisa Owen >Assignee: Lei Chang > Fix For: 2.0.1.0-incubating > > > running "gpfdist --help" displays the following incorrect output: > * > RUNNING GPFDIST AS A WINDOWS SERVICE > * > HAWQ Loaders allow gpfdist to run as a Windows Service. > Follow the instructions below to download, register and > activate gpfdist as a service: > 1. Update your HAWQ Loader package to the latest >version. This package is available from the >EMC Download Center (https://emc.subscribenet.com)
[jira] [Created] (HAWQ-996) gpfdist online help instructs user to download HAWQ Loader package from incorrect site
Lisa Owen created HAWQ-996: -- Summary: gpfdist online help instructs user to download HAWQ Loader package from incorrect site Key: HAWQ-996 URL: https://issues.apache.org/jira/browse/HAWQ-996 Project: Apache HAWQ Issue Type: Bug Components: Command Line Tools Reporter: Lisa Owen Assignee: Lei Chang running "gpfdist --help" displays the following incorrect output: * RUNNING GPFDIST AS A WINDOWS SERVICE * HAWQ Loaders allow gpfdist to run as a Windows Service. Follow the instructions below to download, register and activate gpfdist as a service: 1. Update your HAWQ Loader package to the latest version. This package is available from the EMC Download Center (https://emc.subscribenet.com)
[jira] [Resolved] (HAWQ-995) Bump PXF version to 3.0.1
[ https://issues.apache.org/jira/browse/HAWQ-995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao resolved HAWQ-995. Resolution: Implemented > Bump PXF version to 3.0.1 > - > > Key: HAWQ-995 > URL: https://issues.apache.org/jira/browse/HAWQ-995 > Project: Apache HAWQ > Issue Type: Task > Components: PXF >Reporter: Goden Yao >Assignee: Goden Yao > Fix For: 2.0.1.0-incubating > > > this is to match HAWQ 2.0.1.0 release
[jira] [Closed] (HAWQ-995) Bump PXF version to 3.0.1
[ https://issues.apache.org/jira/browse/HAWQ-995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao closed HAWQ-995. -- > Bump PXF version to 3.0.1 > - > > Key: HAWQ-995 > URL: https://issues.apache.org/jira/browse/HAWQ-995 > Project: Apache HAWQ > Issue Type: Task > Components: PXF >Reporter: Goden Yao >Assignee: Goden Yao > Fix For: 2.0.1.0-incubating > > > this is to match HAWQ 2.0.1.0 release
[jira] [Created] (HAWQ-995) Bump PXF version to 3.0.1.0
Goden Yao created HAWQ-995: -- Summary: Bump PXF version to 3.0.1.0 Key: HAWQ-995 URL: https://issues.apache.org/jira/browse/HAWQ-995 Project: Apache HAWQ Issue Type: Task Components: PXF Reporter: Goden Yao Assignee: Goden Yao this is to match HAWQ 2.0.1.0 release
[jira] [Updated] (HAWQ-995) Bump PXF version to 3.0.1.0
[ https://issues.apache.org/jira/browse/HAWQ-995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-995: --- Fix Version/s: 2.0.1.0-incubating > Bump PXF version to 3.0.1.0 > --- > > Key: HAWQ-995 > URL: https://issues.apache.org/jira/browse/HAWQ-995 > Project: Apache HAWQ > Issue Type: Task > Components: PXF >Reporter: Goden Yao >Assignee: Goden Yao > Fix For: 2.0.1.0-incubating > > > this is to match HAWQ 2.0.1.0 release
[jira] [Updated] (HAWQ-995) Bump PXF version to 3.0.1
[ https://issues.apache.org/jira/browse/HAWQ-995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-995: --- Summary: Bump PXF version to 3.0.1 (was: Bump PXF version to 3.0.1.0) > Bump PXF version to 3.0.1 > - > > Key: HAWQ-995 > URL: https://issues.apache.org/jira/browse/HAWQ-995 > Project: Apache HAWQ > Issue Type: Task > Components: PXF >Reporter: Goden Yao >Assignee: Goden Yao > Fix For: 2.0.1.0-incubating > > > this is to match HAWQ 2.0.1.0 release
[jira] [Updated] (HAWQ-994) PL/R UDF need to be separated from postgres process for robustness
[ https://issues.apache.org/jira/browse/HAWQ-994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-994: --- Fix Version/s: backlog > PL/R UDF need to be separated from postgres process for robustness > -- > > Key: HAWQ-994 > URL: https://issues.apache.org/jira/browse/HAWQ-994 > Project: Apache HAWQ > Issue Type: New Feature >Reporter: Ming LI >Assignee: Lei Chang > Fix For: backlog > > > Background: > With a previous single-node DB, users would always deploy testing code on a separate > testing DB. Now the data maintained in HAWQ has grown enormously, so it is hard > to deploy a testing hawq with the same test data. > So users need to run testing UDFs, or deploy UDFs that have not been tested against the whole > data, directly on hawq in the production env, and these may crash > in PL/R or R code. Sometimes a poorly written query leads to a postmaster reset, > causing all running jobs to be cancelled and rolled back. Customers often see > this as a HAWQ issue even if it is a user code issue. So we need to separate the UDF > from the postgres process, and change inter-process communication from shared > memory to other mechanisms (e.g. pipe, socket and so on).
[jira] [Commented] (HAWQ-992) PXF Hive data type check in Fragmenter too restrictive
[ https://issues.apache.org/jira/browse/HAWQ-992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415589#comment-15415589 ] Shivram Mani commented on HAWQ-992: --- Even though the fragmenter can permit hawq types such as varchar and char without length, the Hive type checker TypeInfoUtils doesn't permit using types without length. For example, using varchar in a hawq table results in the following exception: "java.lang.Exception: java.lang.IllegalArgumentException: varchar type is specified without length" > PXF Hive data type check in Fragmenter too restrictive > -- > > Key: HAWQ-992 > URL: https://issues.apache.org/jira/browse/HAWQ-992 > Project: Apache HAWQ > Issue Type: Bug > Components: PXF >Reporter: Shivram Mani >Assignee: Shivram Mani > Fix For: backlog > > > HiveDataFragmenter, used by both the HiveText and HiveRC profiles, has a very > strict type check. > Hawq type numeric(10,10) is compatible with hive's decimal(10,10), > but Hawq type numeric is not compatible with hive's decimal(10,10). > A similar issue exists with other data types which have variable optional > arguments. The type check should be modified to allow a hawq type that is a > compatible type but without optional precision/length arguments to work with > the corresponding hive type. > Support the following additional hive data types: date, varchar, char
[GitHub] incubator-hawq pull request #837: HAWQ-779 support pxf filter pushdwon at th...
Github user kavinderd commented on a diff in the pull request: https://github.com/apache/incubator-hawq/pull/837#discussion_r74286229 --- Diff: src/backend/access/external/test/pxffilters_test.c --- @@ -61,7 +62,7 @@ test__supported_filter_type(void **state) /* go over pxf_supported_types array */ int nargs = sizeof(pxf_supported_types) / sizeof(Oid); - assert_int_equal(nargs, 12); + assert_int_equal(nargs, 13); --- End diff -- Maybe have this value derived instead of hard-coded number
[jira] [Created] (HAWQ-994) PL/R UDF need to be separated from postgres process for robustness
Ming LI created HAWQ-994: Summary: PL/R UDF need to be separated from postgres process for robustness Key: HAWQ-994 URL: https://issues.apache.org/jira/browse/HAWQ-994 Project: Apache HAWQ Issue Type: New Feature Reporter: Ming LI Assignee: Lei Chang Background: With a previous single-node DB, users would always deploy testing code on a separate testing DB. Now the data maintained in HAWQ has grown enormously, so it is hard to deploy a testing hawq with the same test data. So users need to run testing UDFs, or deploy UDFs that have not been tested against the whole data, directly on hawq in the production env, and these may crash in PL/R or R code. Sometimes a poorly written query leads to a postmaster reset, causing all running jobs to be cancelled and rolled back. Customers often see this as a HAWQ issue even if it is a user code issue. So we need to separate the UDF from the postgres process, and change inter-process communication from shared memory to other mechanisms (e.g. pipe, socket and so on).
[GitHub] incubator-hawq issue #835: HAWQ-980. hawq does not handle guc value with spa...
Github user ictmalili commented on the issue: https://github.com/apache/incubator-hawq/pull/835 +1