[jira] [Resolved] (HAWQ-809) Change libhdfs3 function test port to hadoop default port
[ https://issues.apache.org/jira/browse/HAWQ-809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao resolved HAWQ-809. Resolution: Fixed Fix Version/s: 2.0.0.0-incubating > Change libhdfs3 function test port to hadoop default port > - > > Key: HAWQ-809 > URL: https://issues.apache.org/jira/browse/HAWQ-809 > Project: Apache HAWQ > Issue Type: Test > Components: libhdfs >Reporter: Jiali Yao >Assignee: Jiali Yao > Fix For: 2.0.0.0-incubating > > > The libhdfs3 function test currently uses HDFS port 9000; we need to change it to the > default HDFS port. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HAWQ-832) Integrate ICG to Gtest
[ https://issues.apache.org/jira/browse/HAWQ-832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao updated HAWQ-832: --- Description: In HAWQ testing, Google Test is used for the libyarn and libhdfs tests. The install-check test framework is used for smoke tests and has many limitations. To make testing easy to learn and to consolidate the tests, we want to unify the two frameworks. Considering the factors below, we want to use Google Test: It supports more functionality, so a developer can write more complex tests that are not limited to SQL. Google Test supports running tests in parallel. Google Mock is an extension of Google Test and can also be used for unit testing. Please see the attached file for details. was: In HAWQ testing, Google Test is used for the libyarn and libhdfs tests. The install-check test framework is used for smoke tests and has many limitations. To make testing easy to learn and to consolidate the tests, we want to unify the two frameworks. Considering the factors below, we want to use Google Test: It supports more functionality, so a developer can write more complex tests that are not limited to SQL. Google Test supports running tests in parallel. Google Mock is an extension of Google Test and can also be used for unit testing. > Integrate ICG to Gtest > -- > > Key: HAWQ-832 > URL: https://issues.apache.org/jira/browse/HAWQ-832 > Project: Apache HAWQ > Issue Type: Test > Components: Tests >Reporter: Jiali Yao >Assignee: Jiali Yao > Attachments: GoogleTest.pdf > > > In HAWQ testing, Google Test is used for the libyarn and libhdfs tests. The install-check > test framework is used for smoke tests and has many limitations. To make > testing easy to learn and to consolidate the tests, we want to unify the two frameworks. > Considering the factors below, we want to use Google Test: > It supports more functionality, so a developer can write more complex tests > that are not limited to SQL.
> Google Test supports running tests in parallel. > Google Mock is an extension of Google Test and can also be used for unit > testing. > Please see the attached file for details.
[jira] [Created] (HAWQ-832) Integrate ICG to Gtest
Jiali Yao created HAWQ-832: -- Summary: Integrate ICG to Gtest Key: HAWQ-832 URL: https://issues.apache.org/jira/browse/HAWQ-832 Project: Apache HAWQ Issue Type: Test Components: Tests Reporter: Jiali Yao Assignee: Jiali Yao Attachments: GoogleTest.pdf In HAWQ testing, Google Test is used for the libyarn and libhdfs tests. The install-check test framework is used for smoke tests and has many limitations. To make testing easy to learn and to consolidate the tests, we want to unify the two frameworks. Considering the factors below, we want to use Google Test: It supports more functionality, so a developer can write more complex tests that are not limited to SQL. Google Test supports running tests in parallel. Google Mock is an extension of Google Test and can also be used for unit testing.
[jira] [Updated] (HAWQ-832) Integrate ICG to Gtest
[ https://issues.apache.org/jira/browse/HAWQ-832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao updated HAWQ-832: --- Attachment: GoogleTest.pdf > Integrate ICG to Gtest > -- > > Key: HAWQ-832 > URL: https://issues.apache.org/jira/browse/HAWQ-832 > Project: Apache HAWQ > Issue Type: Test > Components: Tests >Reporter: Jiali Yao >Assignee: Jiali Yao > Attachments: GoogleTest.pdf > > > In HAWQ testing, Google Test is used for the libyarn and libhdfs tests. The install-check > test framework is used for smoke tests and has many limitations. To make > testing easy to learn and to consolidate the tests, we want to unify the two frameworks. > Considering the factors below, we want to use Google Test: > It supports more functionality, so a developer can write more complex tests > that are not limited to SQL. > Google Test supports running tests in parallel. > Google Mock is an extension of Google Test and can also be used for unit > testing.
[jira] [Updated] (HAWQ-809) Change libhdfs3 function test port to hadoop default port
[ https://issues.apache.org/jira/browse/HAWQ-809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao updated HAWQ-809: --- Assignee: Jiali Yao (was: Lei Chang) > Change libhdfs3 function test port to hadoop default port > - > > Key: HAWQ-809 > URL: https://issues.apache.org/jira/browse/HAWQ-809 > Project: Apache HAWQ > Issue Type: Test > Components: libhdfs >Reporter: Jiali Yao >Assignee: Jiali Yao > > The libhdfs3 function test currently uses HDFS port 9000; we need to change it to the > default HDFS port.
[jira] [Created] (HAWQ-809) Change libhdfs3 function test port to hadoop default port
Jiali Yao created HAWQ-809: -- Summary: Change libhdfs3 function test port to hadoop default port Key: HAWQ-809 URL: https://issues.apache.org/jira/browse/HAWQ-809 Project: Apache HAWQ Issue Type: Test Components: libhdfs Reporter: Jiali Yao Assignee: Lei Chang The libhdfs3 function test currently uses HDFS port 9000; we need to change it to the default HDFS port.
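For reference (not stated in the issue itself): in Hadoop 2.x the NameNode RPC endpoint conventionally defaults to port 8020 when fs.defaultFS gives no explicit port, whereas 9000 is a value many tutorials hard-code. A test cluster's core-site.xml would typically pin the default like this (hostname is a placeholder):

```xml
<!-- core-site.xml: NameNode RPC endpoint, Hadoop 2.x convention -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- 8020 is the conventional default HDFS port; 9000 was the
         non-default value the libhdfs3 tests previously hard-coded -->
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
```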
[jira] [Resolved] (HAWQ-731) Implement Data Generator
[ https://issues.apache.org/jira/browse/HAWQ-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao resolved HAWQ-731. Resolution: Fixed > Implement Data Generator > - > > Key: HAWQ-731 > URL: https://issues.apache.org/jira/browse/HAWQ-731 > Project: Apache HAWQ > Issue Type: New Feature > Components: Tests >Reporter: Lili Ma >Assignee: Lili Ma >
[jira] [Updated] (HAWQ-731) Implement Data Generator
[ https://issues.apache.org/jira/browse/HAWQ-731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao updated HAWQ-731: --- Assignee: Lili Ma (was: Jiali Yao) > Implement Data Generator > - > > Key: HAWQ-731 > URL: https://issues.apache.org/jira/browse/HAWQ-731 > Project: Apache HAWQ > Issue Type: New Feature > Components: Tests >Reporter: Lili Ma >Assignee: Lili Ma >
[jira] [Resolved] (HAWQ-619) Change 'gpextract' to 'hawqextract' for InputFormat unit test
[ https://issues.apache.org/jira/browse/HAWQ-619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao resolved HAWQ-619. Resolution: Fixed Fix Version/s: 2.0.0 > Change 'gpextract' to 'hawqextract' for InputFormat unit test > - > > Key: HAWQ-619 > URL: https://issues.apache.org/jira/browse/HAWQ-619 > Project: Apache HAWQ > Issue Type: Task > Components: Tests >Reporter: Chunling Wang >Assignee: Jiali Yao > Fix For: 2.0.0 > > > Change 'gpextract' to 'hawqextract' in SimpleTableLocalTester.java for > InputFormat unit test.
[jira] [Updated] (HAWQ-709) Include googletest for unittest/featuretest for HAWQ
[ https://issues.apache.org/jira/browse/HAWQ-709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao updated HAWQ-709: --- Assignee: hongwu (was: Jiali Yao) > Include googletest for unittest/featuretest for HAWQ > > > Key: HAWQ-709 > URL: https://issues.apache.org/jira/browse/HAWQ-709 > Project: Apache HAWQ > Issue Type: New Feature > Components: Tests >Reporter: hongwu >Assignee: hongwu >
[jira] [Created] (HAWQ-606) Change seg_max_connections default value and remove gp_enable_column_oriented_table
Jiali Yao created HAWQ-606: -- Summary: Change seg_max_connections default value and remove gp_enable_column_oriented_table Key: HAWQ-606 URL: https://issues.apache.org/jira/browse/HAWQ-606 Project: Apache HAWQ Issue Type: Improvement Components: Catalog Reporter: Jiali Yao Assignee: Lei Chang According to the test evaluation results, we need to change the default value of seg_max_connections to 3000. gp_enable_column_oriented_table is no longer needed; remove this GUC.
[jira] [Created] (HAWQ-64) Forward NULL issue for ExtractPrincipalFromTicketCache
Jiali Yao created HAWQ-64: - Summary: Forward NULL issue for ExtractPrincipalFromTicketCache Key: HAWQ-64 URL: https://issues.apache.org/jira/browse/HAWQ-64 Project: Apache HAWQ Issue Type: Bug Components: libyarn Reporter: Jiali Yao Assignee: Lin Wen According to the Coverity report, the code below has a FORWARD_NULL issue. In apache-hawq/src/backend/resourcemanager/resourcebroker: if (!cache) { if (0 != setenv("KRB5CCNAME", cache, 1)) { elog(WARNING, "Cannot set env parameter \"KRB5CCNAME\" when extract principal from cache:%s", cache); return NULL; } } If cache is NULL, it is passed to setenv inside the !cache branch, which is an error.
[jira] [Updated] (HAWQ-41) cannot open more than 262144 append-only table segment files concurrently
[ https://issues.apache.org/jira/browse/HAWQ-41?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao updated HAWQ-41: -- Assignee: Lirong Jian (was: Roman Shaposhnik) > cannot open more than 262144 append-only table segment files concurrently > > > Key: HAWQ-41 > URL: https://issues.apache.org/jira/browse/HAWQ-41 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Xiang Sheng >Assignee: Lirong Jian >Priority: Critical > > The append-only table segment files were not closed correctly when data loading > failed using TPC-H. The number of open files reached the maximum limit, producing the > error "cannot open more than 262144 append-only table segment files concurrently".
[jira] [Updated] (HAWQ-12) "Cannot allocate memory" in parquet_compression test in installcheck-good with hawq dbg build
[ https://issues.apache.org/jira/browse/HAWQ-12?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiali Yao updated HAWQ-12: -- Assignee: Jiali Yao (was: Ruilong Huo) > "Cannot allocate memory" in parquet_compression test in installcheck-good > with hawq dbg build > - > > Key: HAWQ-12 > URL: https://issues.apache.org/jira/browse/HAWQ-12 > Project: Apache HAWQ > Issue Type: Bug > Components: core > Environment: Red Hat Enterprise Linux Server release 5.5 (Tikanga) > Linux pbld3 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 > x86_64 GNU/Linux >Reporter: Ruilong Huo >Assignee: Jiali Yao > > When running installcheck-good with a HAWQ debug build on a Linux box (RHEL 5.5, > 12G memory, Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz with 4 processors), the > parquet_compression test fails with "Cannot allocate memory" from time to > time. > Initial investigation shows that strcoll fails to allocate the memory needed to > complete a locale-aware string comparison during an outer join of two partitioned > Parquet tables with gzip compression. > We need to understand: 1) the amount of memory used by the outer join query, and > conclude whether it is expected; 2) fix the OOM if there are issues either with a > memory leak or with memory protection/enforcement. > {noformat} > 2015-09-25 00:31:22.852771 > PDT,"gpadmin","regression",p9703,th-1437302464,"127.0.0.1","39230",2015-09-25 > 00:31:16 PDT,4502,con368,cmd50,seg-1,,,x4502,sx1,"ERROR","XX000","Unable to > compare strings. Error: Cannot allocate memory. First string has length > 1145620 and value (limited to 100 characters): 'large data value for text > data typelarge data value for text data typelarge data value for text data'. 
> Second string has length 1145620 and value (limited to 100 characters): > 'large data value for text data typelarge data value for text data typelarge > data value for text data' (string_wrapper.h:58) (seg0 pbld3:23011 pid=9715) > (dispatcher.c:1681)",,"select count(*) from parquet_gzip_part c1 full > outer join parquet_gzip_part_unc c2 on c1.p1=c2.p1 and > c1.document=c2.document and c1.vch1=c2.vch1 and c1.bta1=c2.bta1 and > c1.bitv1=c2.bitv1;",0,,"dispatcher.c",1681,"Stack trace: > 1 0x9de185 postgres errstart (elog.c:473) > 2 0xb856f2 postgres (dispatcher.c:1679) > 3 0xb84c45 postgres dispatch_catch_error (dispatcher.c:1342) > 4 0x7384e0 postgres mppExecutorCleanup (execUtils.c:2267) > 5 0x718b21 postgres ExecutorRun (execMain.c:1230) > 6 0x900648 postgres (pquery.c:1642) > 7 0x900225 postgres PortalRun (pquery.c:1466) > 8 0x8f6276 postgres (postgres.c:1728) > 9 0x8faec8 postgres PostgresMain (postgres.c:4693) > 10 0x89db5a postgres (postmaster.c:5846) > 11 0x89cfe4 postgres (postmaster.c:5438) > 12 0x897702 postgres (postmaster.c:2146) > 13 0x8967d8 postgres PostmasterMain (postmaster.c:1432) > 14 0x7b095e postgres main (main.c:226) > 15 0x336e21d994 libc.so.6 __libc_start_main (??:0) > 16 0x4b9109 postgres (??:0) > {noformat}