[jira] [Updated] (HAWQ-30) HAWQ InputFormat failed when accessing a table which has been altered

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-30?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-30:
--
Assignee: zhenglin tao  (was: Lirong Jian)

> HAWQ InputFormat failed when accessing a table which has been altered
> -
>
> Key: HAWQ-30
> URL: https://issues.apache.org/jira/browse/HAWQ-30
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: zhenglin tao
>Priority: Minor
> Fix For: 2.0.0
>
>
> When accessing a table that has been altered, HAWQ InputFormat fails because 
> the paths of the table's data files have changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-359) Update bug reporting location for "./configure --help"

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-359:
---
Assignee: Radar Lei  (was: Lei Chang)

> Update bug reporting location for "./configure --help"
> --
>
> Key: HAWQ-359
> URL: https://issues.apache.org/jira/browse/HAWQ-359
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Caleb Welton
>Assignee: Radar Lei
>Priority: Trivial
> Fix For: 2.0.0
>
>
> When running {{./configure --help}} the following is printed:
> {noformat}
> ...
> Report bugs to .
> {noformat}
> This is not correct for the Apache HAWQ project; it should be updated to:
> {noformat}
> ...
> Report bugs at: https://issues.apache.org/jira/browse/HAWQ
> {noformat}
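> For context, with GNU Autoconf the address printed after "Report bugs to" 
> comes from the bug-report argument of {{AC_INIT}} in configure.ac, so a 
> minimal sketch of the fix (the package name and version below are 
> illustrative, not taken from the HAWQ source) would be:
> {noformat}
> AC_INIT([HAWQ], [2.0.0], [https://issues.apache.org/jira/browse/HAWQ])
> {noformat}
> {{./configure --help}} then prints that third argument in its closing 
> "Report bugs" line.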



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-359) Update bug reporting location for "./configure --help"

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-359:
---
Fix Version/s: 2.0.0

> Update bug reporting location for "./configure --help"
> --
>
> Key: HAWQ-359
> URL: https://issues.apache.org/jira/browse/HAWQ-359
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Caleb Welton
>Assignee: Radar Lei
>Priority: Trivial
> Fix For: 2.0.0
>
>
> When running {{./configure --help}} the following is printed:
> {noformat}
> ...
> Report bugs to .
> {noformat}
> This is not correct for the Apache HAWQ project; it should be updated to:
> {noformat}
> ...
> Report bugs at: https://issues.apache.org/jira/browse/HAWQ
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-208) Stopping PXF shows SEVERE message, which is misleading

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-208:
---
Fix Version/s: backlog

> Stopping PXF shows SEVERE message, which is misleading 
> ---
>
> Key: HAWQ-208
> URL: https://issues.apache.org/jira/browse/HAWQ-208
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: jun aoki
>Assignee: Goden Yao
>Priority: Minor
> Fix For: backlog
>
>
> When stopping PXF, although it stops OK, it shows a SEVERE error message 
> regarding the shutdown port.
> {code}
> SEVERE: No shutdown port configured. Shut down server through OS signal. 
> Server not shut down.
> The stop command failed. Attempting to signal the process to stop through OS 
> signal.
> Tomcat stopped.
> {code}
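> For context, this SEVERE line is printed by Tomcat's own shutdown code 
> whenever the shutdown port on the {{<Server>}} element in conf/server.xml is 
> disabled, e.g. (values illustrative, not taken from the PXF configuration):
> {code}
> <Server port="-1" shutdown="SHUTDOWN">
> </Server>
> {code}
> With the port disabled, {{catalina stop}} falls back to an OS signal and the 
> server does stop, so the message is harmless; one option is for the PXF stop 
> script to filter or downgrade it instead of surfacing it to users.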



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-101) Remove postgres version number from HAWQ

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-101:
---
Fix Version/s: backlog

> Remove postgres version number from HAWQ 
> -
>
> Key: HAWQ-101
> URL: https://issues.apache.org/jira/browse/HAWQ-101
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Command Line Tools
>Reporter: Goden Yao
>Assignee: Lei Chang
>Priority: Minor
>  Labels: OSS
> Fix For: backlog
>
>
> Some version numbers shown on the command line are there for the historical 
> reason that HAWQ is derived from Greenplum and Postgres.
> It doesn't make sense to keep these version numbers, which would confuse 
> users after open-sourcing.
> {code:actionscript}
> [gpadmin@rhel65-1 ~]$ psql --version
> psql (PostgreSQL) 8.2.15
> contains support for command-line editing
> {code}
> {code:actionscript}
> [gpadmin@rhel65-1 ~]$ postgres --version
> postgres (HAWQ) 8.2.15
> {code}
> {code:actionscript}
> [gpadmin@rhel65-1 ~]$ postgres --hawq-version
> postgres (HAWQ) 2.0.0.0 build dev
> {code}
> {code:actionscript}
> [gpadmin@rhel65-1 ~]$ postgres --gp-version
> postgres (HAWQ) 4.2.0 build 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-29) Refactor HAWQ InputFormat to support Spark/Scala

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-29?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-29:
--
Fix Version/s: backlog

> Refactor HAWQ InputFormat to support Spark/Scala
> 
>
> Key: HAWQ-29
> URL: https://issues.apache.org/jira/browse/HAWQ-29
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
>Priority: Minor
>  Labels: features
> Fix For: backlog
>
>
> Currently the implementation of HAWQ InputFormat doesn't support Spark/Scala 
> very well. We need to refactor the code to support that use case. More 
> specifically, we need to implement the Serializable interface for some classes.
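> As a minimal sketch of the kind of change meant here (the class and field 
> names are illustrative, not the actual HAWQ classes):
> {code}
> import java.io.Serializable;
>
> // Spark ships task closures to executors via Java serialization, so any
> // metadata object captured in a closure must implement Serializable.
> public class TableMetadata implements Serializable {
>     private static final long serialVersionUID = 1L;
>     private final String tableName;
>
>     public TableMetadata(String tableName) { this.tableName = tableName; }
>     public String getTableName() { return tableName; }
> }
> {code}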



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-198) Built-in functions are executed in the resource_negotiator stage, which causes them to be executed twice

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-198:
---
Fix Version/s: 2.0.0

> Built-in functions are executed in the resource_negotiator stage, which 
> causes them to be executed twice
> -
>
> Key: HAWQ-198
> URL: https://issues.apache.org/jira/browse/HAWQ-198
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Dong Li
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> Built-in functions are executed twice because they are also executed in the 
> resource_negotiator stage.
> I use the function gp_elog() as a demo. gp_elog() permits a superuser to 
> insert elog information into pg_log.
> {code}
> select gp_elog('!!again test !!');
> {code}
> When I open the pg_log file, I find that the log entry appears twice:
> 2015-11-27 16:35:15.912111 
> CST,"intern","ff",p40030,th-1593601580,"[local]",,2015-11-25 18:17:21 
> CST,1080,con25,cmd42,seg-1,,,x1080,sx1,"LOG","XX100","!!again 
> test !!",,,0
> 2015-11-27 16:35:45.985348 
> CST,"intern","ff",p40030,th-1593601580,"[local]",,2015-11-25 18:17:21 
> CST,1080,con25,cmd42,seg-1,,,x1080,sx1,"LOG","XX100","!!again 
> test !!",,,0
> When I use lldb to set a breakpoint in gp_elog, I find that gp_elog is 
> executed twice. The backtraces of the two calls are below; they show that it 
> was executed in the resource_negotiator stage, where it should not actually 
> be executed.
> The first call to gp_elog:
> {code}
> * thread #1: tid = 0xe078b, 0x0056ca0d postgres`gp_elog(fcinfo=0xbfffd6bc) + 
> 17 at elog.c:4130, queue = 'com.apple.main-thread', stop reason = breakpoint 
> 1.1
>   * frame #0: 0x0056ca0d postgres`gp_elog(fcinfo=0xbfffd6bc) + 17 at 
> elog.c:4130
> frame #1: 0x00294ab8 postgres`ExecMakeFunctionResult(fcache=0x049237c0, 
> econtext=0x04923c58, isNull=0xbfffdb0b, isDone=0x) + 1730 at 
> execQual.c:1749
> frame #2: 0x002957e1 postgres`ExecEvalFunc(fcache=0x049237c0, 
> econtext=0x04923c58, isNull=0xbfffdb0b, isDone=0x) + 104 at 
> execQual.c:2197
> frame #3: 0x0029a3dd 
> postgres`ExecEvalExprSwitchContext(expression=0x049237c0, 
> econtext=0x04923c58, isNull=0xbfffdb0b, isDone=0x) + 58 at 
> execQual.c:4426
> frame #4: 0x003db70a postgres`evaluate_expr(expr=0x04921f14, 
> result_type=2278) + 111 at clauses.c:3354
> frame #5: 0x003dac9a postgres`evaluate_function(funcid=5044, 
> result_type=2278, args=0x04921e8c, func_tuple=0x040e84d0, context=0xbfffe598) 
> + 388 at clauses.c:2960
> frame #6: 0x003daa1e postgres`simplify_function(funcid=5044, 
> result_type=2278, args=0x04921e8c, allow_inline='\x01', context=0xbfffe598) + 
> 208 at clauses.c:2816
> frame #7: 0x003d8a88 
> postgres`eval_const_expressions_mutator(node=0x04921b18, context=0xbfffe598) 
> + 757 at clauses.c:1757
> frame #8: 0x003dd606 postgres`expression_tree_mutator(node=0x04921adc, 
> mutator=0x003d8793, context=0xbfffe598) + 7705 at clauses.c:3922
> frame #9: 0x003da4b8 
> postgres`eval_const_expressions_mutator(node=0x04921adc, context=0xbfffe598) 
> + 7461 at clauses.c:2519
> frame #10: 0x003dd6d4 postgres`expression_tree_mutator(node=0x04921c40, 
> mutator=0x003d8793, context=0xbfffe598) + 7911 at clauses.c:3958
> frame #11: 0x003da4b8 
> postgres`eval_const_expressions_mutator(node=0x04921c40, context=0xbfffe598) 
> + 7461 at clauses.c:2519
> frame #12: 0x003d8742 postgres`eval_const_expressions(root=0x04921c98, 
> node=0x04921c40) + 92 at clauses.c:1643
> frame #13: 0x003b6ee5 postgres`preprocess_expression(root=0x04921c98, 
> expr=0x04921c40, kind=1) + 143 at planner.c:1117
> frame #14: 0x003b690d postgres`subquery_planner(glob=0x050310e8, 
> parse=0x04921a40, parent_root=0x, tuple_fraction=0, 
> subroot=0xbfffe698, config=0x05030aa0) + 896 at planner.c:857
> frame #15: 0x003b60d1 postgres`standard_planner(parse=0x04921a40, 
> cursorOptions=0, boundParams=0x) + 494 at planner.c:587
> frame #16: 0x003b5df4 postgres`resource_negotiator(parse=0x05030ee8, 
> cursorOptions=0, boundParams=0x, resourceLife=QRL_ONCE, 
> result=0xbfffe7b4) + 79 at planner.c:476
> frame #17: 0x003b566b postgres`planner(parse=0x05030ee8, cursorOptions=0, 
> boundParams=0x, resourceLife=QRL_ONCE) + 466 at planner.c:303
> frame #18: 0x0046c773 postgres`pg_plan_query(querytree=0x05030ee8, 
> boundParams=0x, resource_life=QRL_ONCE) + 86 at postgres.c:816
> frame #19: 0x0046c89e postgres`pg_plan_queries(querytrees=0x05031654, 
> boundParams=0x, needSnapshot='\0', resource_life=QRL_ONCE) + 131 at 
> postgres.c:889
> frame #20: 

[jira] [Updated] (HAWQ-30) HAWQ InputFormat failed when accessing a table which has been altered

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-30?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-30:
--
Fix Version/s: (was: backlog)
   2.0.0

> HAWQ InputFormat failed when accessing a table which has been altered
> -
>
> Key: HAWQ-30
> URL: https://issues.apache.org/jira/browse/HAWQ-30
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
>Priority: Minor
> Fix For: 2.0.0
>
>
> When accessing a table that has been altered, HAWQ InputFormat fails because 
> the paths of the table's data files have changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-230) fix build failure on centos 7

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-230.
--
Resolution: Fixed

> fix build failure on centos 7
> -
>
> Key: HAWQ-230
> URL: https://issues.apache.org/jira/browse/HAWQ-230
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Zhanwei Wang
>Assignee: Zhanwei Wang
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> The HAWQ build fails on CentOS 7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-31) Refactor HAWQ InputFormat to support accessing two or more tables in one MR job

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-31:
--
Fix Version/s: backlog

> Refactor HAWQ InputFormat to support accessing two or more tables in one MR 
> job
> ---
>
> Key: HAWQ-31
> URL: https://issues.apache.org/jira/browse/HAWQ-31
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
>Priority: Minor
> Fix For: backlog
>
>
> In the current implementation of HAWQ InputFormat, inside one MR job, only 
> one table can be accessed. It is impossible to join two tables with HAWQ 
> InputFormat in one MR job.
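> For comparison, plain Hadoop handles the analogous requirement with 
> {{MultipleInputs}}, which binds an InputFormat and a Mapper per input; a 
> refactored HAWQ InputFormat would need a similar per-table binding. A sketch 
> using stock Hadoop classes (the paths and mapper bodies are hypothetical):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.LongWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Job;
> import org.apache.hadoop.mapreduce.Mapper;
> import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
> import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
>
> public class TwoTableJob {
>     // Trivial per-table mappers; real ones would parse each table's tuples.
>     public static class TableAMapper extends Mapper<LongWritable, Text, Text, Text> {}
>     public static class TableBMapper extends Mapper<LongWritable, Text, Text, Text> {}
>
>     public static void main(String[] args) throws Exception {
>         Job job = Job.getInstance(new Configuration(), "two-table join");
>         job.setJarByClass(TwoTableJob.class);
>         // Each table gets its own InputFormat/Mapper pair within one job.
>         MultipleInputs.addInputPath(job, new Path("/hawq/table_a"),
>                 TextInputFormat.class, TableAMapper.class);
>         MultipleInputs.addInputPath(job, new Path("/hawq/table_b"),
>                 TextInputFormat.class, TableBMapper.class);
>         // A reducer joining records by key would complete the job setup.
>     }
> }
> {code}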



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-230) fix build failure on centos 7

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-230:
---
Fix Version/s: (was: 2.1.0)
   2.0.0

> fix build failure on centos 7
> -
>
> Key: HAWQ-230
> URL: https://issues.apache.org/jira/browse/HAWQ-230
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Zhanwei Wang
>Assignee: Zhanwei Wang
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> The HAWQ build fails on CentOS 7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-274) Add disk check for JBOD temporary directory in segment FTS

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-274:
---
Assignee: Lin Wen  (was: Lei Chang)

> Add disk check for JBOD temporary directory in segment FTS
> --
>
> Key: HAWQ-274
> URL: https://issues.apache.org/jira/browse/HAWQ-274
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Fault Tolerance, Resource Manager
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> Add a disk check for the JBOD temporary directories in segment FTS.
> Add a column to the catalog table gp_segment_configuration that indicates 
> which directory has failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-30) HAWQ InputFormat failed when accessing a table which has been altered

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-30?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-30:
--
Fix Version/s: backlog

> HAWQ InputFormat failed when accessing a table which has been altered
> -
>
> Key: HAWQ-30
> URL: https://issues.apache.org/jira/browse/HAWQ-30
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
>Priority: Minor
> Fix For: backlog
>
>
> When accessing a table that has been altered, HAWQ InputFormat fails because 
> the paths of the table's data files have changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-219) Built-in function gp_persistent_build_all doesn't reshape gp_persistent_relation_node table

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-219:
---
Labels:   (was: closed-pr)

> Built-in function gp_persistent_build_all doesn't reshape 
> gp_persistent_relation_node table
> ---
>
> Key: HAWQ-219
> URL: https://issues.apache.org/jira/browse/HAWQ-219
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> {code}
> select gp_persistent_reset_all();
> select gp_persistent_build_all(false);
> e=# select * from gp_persistent_relation_node ;
>  tablespace_oid | database_oid | relfilenode_oid | persistent_state | 
> reserved | parent_xid | persistent_serial_num | previous_free_tid
> +--+-+--+--++---+---
> (0 rows)
> {code}
> Function gp_persistent_build_all calls function PersistentBuild_BuildDb,
> but PersistentBuild_BuildDb only reshapes gp_persistent_database_node and 
> gp_persistent_relfile_node; it doesn't reshape the 
> gp_persistent_relation_node table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-219) Built-in function gp_persistent_build_all doesn't reshape gp_persistent_relation_node table

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-219.
--
Resolution: Fixed

> Built-in function gp_persistent_build_all doesn't reshape 
> gp_persistent_relation_node table
> ---
>
> Key: HAWQ-219
> URL: https://issues.apache.org/jira/browse/HAWQ-219
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> {code}
> select gp_persistent_reset_all();
> select gp_persistent_build_all(false);
> e=# select * from gp_persistent_relation_node ;
>  tablespace_oid | database_oid | relfilenode_oid | persistent_state | 
> reserved | parent_xid | persistent_serial_num | previous_free_tid
> +--+-+--+--++---+---
> (0 rows)
> {code}
> Function gp_persistent_build_all calls function PersistentBuild_BuildDb,
> but PersistentBuild_BuildDb only reshapes gp_persistent_database_node and 
> gp_persistent_relfile_node; it doesn't reshape the 
> gp_persistent_relation_node table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-230) fix build failure on centos 7

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-230:
---
Fix Version/s: 2.1.0

> fix build failure on centos 7
> -
>
> Key: HAWQ-230
> URL: https://issues.apache.org/jira/browse/HAWQ-230
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Zhanwei Wang
>Assignee: Zhanwei Wang
>  Labels: closed-pr
> Fix For: 2.1.0
>
>
> The HAWQ build fails on CentOS 7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-219) Built-in function gp_persistent_build_all doesn't reshape gp_persistent_relation_node table

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-219:
---
Assignee: Ming LI  (was: Lei Chang)

> Built-in function gp_persistent_build_all doesn't reshape 
> gp_persistent_relation_node table
> ---
>
> Key: HAWQ-219
> URL: https://issues.apache.org/jira/browse/HAWQ-219
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> {code}
> select gp_persistent_reset_all();
> select gp_persistent_build_all(false);
> e=# select * from gp_persistent_relation_node ;
>  tablespace_oid | database_oid | relfilenode_oid | persistent_state | 
> reserved | parent_xid | persistent_serial_num | previous_free_tid
> +--+-+--+--++---+---
> (0 rows)
> {code}
> Function gp_persistent_build_all calls function PersistentBuild_BuildDb,
> but PersistentBuild_BuildDb only reshapes gp_persistent_database_node and 
> gp_persistent_relfile_node; it doesn't reshape the 
> gp_persistent_relation_node table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-219) Built-in function gp_persistent_build_all doesn't reshape gp_persistent_relation_node table

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-219:
---
Fix Version/s: 2.0.0

> Built-in function gp_persistent_build_all doesn't reshape 
> gp_persistent_relation_node table
> ---
>
> Key: HAWQ-219
> URL: https://issues.apache.org/jira/browse/HAWQ-219
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> {code}
> select gp_persistent_reset_all();
> select gp_persistent_build_all(false);
> e=# select * from gp_persistent_relation_node ;
>  tablespace_oid | database_oid | relfilenode_oid | persistent_state | 
> reserved | parent_xid | persistent_serial_num | previous_free_tid
> +--+-+--+--++---+---
> (0 rows)
> {code}
> Function gp_persistent_build_all calls function PersistentBuild_BuildDb,
> but PersistentBuild_BuildDb only reshapes gp_persistent_database_node and 
> gp_persistent_relfile_node; it doesn't reshape the 
> gp_persistent_relation_node table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-283) Remove the content column from the catalog pg_aoseg.pg_aoseg_xxxxx

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-283.
--
Resolution: Fixed

> Remove the content column from the catalog pg_aoseg.pg_aoseg_x
> --
>
> Key: HAWQ-283
> URL: https://issues.apache.org/jira/browse/HAWQ-283
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> In HAWQ 2.0, the content column inside the table pg_aoseg.pg_aoseg_x is 
> meaningless now. We need to remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-283) Remove the content column from the catalog pg_aoseg.pg_aoseg_xxxxx

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-283:
---
Labels:   (was: closed-pr)

> Remove the content column from the catalog pg_aoseg.pg_aoseg_x
> --
>
> Key: HAWQ-283
> URL: https://issues.apache.org/jira/browse/HAWQ-283
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> In HAWQ 2.0, the content column inside the table pg_aoseg.pg_aoseg_x is 
> meaningless now. We need to remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-266) Check if krb_server_keyfile exist in hawq-site.xml

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-266:
---
Assignee: Radar Lei  (was: Lei Chang)

> Check if krb_server_keyfile exist in hawq-site.xml
> --
>
> Key: HAWQ-266
> URL: https://issues.apache.org/jira/browse/HAWQ-266
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.0.0
>
>
> In secure mode, currently if 'enable_secure_filesystem' exists, we assume 
> 'krb_server_keyfile' exists too.
> This errors out if we do not have 'krb_server_keyfile' in hawq-site.xml 
> while 'enable_secure_filesystem' exists.
> We should check the 'krb_server_keyfile' value if 'enable_secure_filesystem' 
> is set to 'on'.
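> A minimal sketch of such a check (the helper and the config path are 
> illustrative; the real hawq tools have their own XML parsing):
> {code}
> import xml.etree.ElementTree as ET
>
> def get_guc(site_xml, name):
>     # hawq-site.xml is a Hadoop-style <configuration> of <property> entries.
>     for prop in ET.parse(site_xml).getroot().findall('property'):
>         if prop.findtext('name') == name:
>             return prop.findtext('value')
>     return None
>
> conf = '/usr/local/hawq/etc/hawq-site.xml'
> if get_guc(conf, 'enable_secure_filesystem') == 'on' \
>         and not get_guc(conf, 'krb_server_keyfile'):
>     raise SystemExit("krb_server_keyfile must be set in hawq-site.xml "
>                      "when enable_secure_filesystem is on")
> {code}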



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-266) Check if krb_server_keyfile exist in hawq-site.xml

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-266:
---
Fix Version/s: 2.0.0

> Check if krb_server_keyfile exist in hawq-site.xml
> --
>
> Key: HAWQ-266
> URL: https://issues.apache.org/jira/browse/HAWQ-266
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> In secure mode, currently if 'enable_secure_filesystem' exists, we assume 
> 'krb_server_keyfile' exists too.
> This errors out if we do not have 'krb_server_keyfile' in hawq-site.xml 
> while 'enable_secure_filesystem' exists.
> We should check the 'krb_server_keyfile' value if 'enable_secure_filesystem' 
> is set to 'on'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-266) Check if krb_server_keyfile exist in hawq-site.xml

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-266:
---
Labels:   (was: closed-pr)

> Check if krb_server_keyfile exist in hawq-site.xml
> --
>
> Key: HAWQ-266
> URL: https://issues.apache.org/jira/browse/HAWQ-266
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> In secure mode, currently if 'enable_secure_filesystem' exists, we assume 
> 'krb_server_keyfile' exists too.
> This errors out if we do not have 'krb_server_keyfile' in hawq-site.xml 
> while 'enable_secure_filesystem' exists.
> We should check the 'krb_server_keyfile' value if 'enable_secure_filesystem' 
> is set to 'on'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-283) Remove the content column from the catalog pg_aoseg.pg_aoseg_xxxxx

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-283:
---
Fix Version/s: 2.0.0

> Remove the content column from the catalog pg_aoseg.pg_aoseg_x
> --
>
> Key: HAWQ-283
> URL: https://issues.apache.org/jira/browse/HAWQ-283
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Catalog
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> In HAWQ 2.0, the content column inside the table pg_aoseg.pg_aoseg_x is 
> meaningless now. We need to remove it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-266) Check if krb_server_keyfile exist in hawq-site.xml

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-266.
--
Resolution: Fixed

> Check if krb_server_keyfile exist in hawq-site.xml
> --
>
> Key: HAWQ-266
> URL: https://issues.apache.org/jira/browse/HAWQ-266
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.0.0
>
>
> In secure mode, currently if 'enable_secure_filesystem' exists, we assume 
> 'krb_server_keyfile' exists too.
> This errors out if we do not have 'krb_server_keyfile' in hawq-site.xml 
> while 'enable_secure_filesystem' exists.
> We should check the 'krb_server_keyfile' value if 'enable_secure_filesystem' 
> is set to 'on'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-273) Concurrent read committed SELECT return 0 rows for AO table which is ALTERed with REORGANIZE by other transaction

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-273.
--

> Concurrent read committed SELECT return 0 rows for AO table which is ALTERed 
> with REORGANIZE by other transaction
> -
>
> Key: HAWQ-273
> URL: https://issues.apache.org/jira/browse/HAWQ-273
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> testdb=# DROP TABLE tbl_isolation;
> DROP TABLE
> testdb=# CREATE TABLE tbl_isolation (a INT, b int, c int) WITH 
> (appendonly=true);
> CREATE TABLE
> testdb=# INSERT INTO tbl_isolation SELECT generate_series(1, 10), 
> generate_series(1, 10), generate_series(1, 10);
> INSERT 0 10
> •
> Thread A:
> testdb=# BEGIN transaction isolation level SERIALIZABLE;
> BEGIN
> testdb=# ALTER TABLE tbl_isolation set with ( reorganize='true') distributed 
> randomly;
> ALTER TABLE
> •
> Thread B:
> testdb=# BEGIN transaction isolation level read committed;
> BEGIN
> testdb=# select count(*) from tbl_isolation;
> •
> Thread A:
> testdb=# commit;
> COMMIT
> •
> •
> Thread B:
> testdb=# select count(*) from tbl_isolation;
> count
> ---
> 0
> (1 row)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-273) Concurrent read committed SELECT return 0 rows for AO table which is ALTERed with REORGANIZE by other transaction

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang resolved HAWQ-273.

Resolution: Fixed

> Concurrent read committed SELECT return 0 rows for AO table which is ALTERed 
> with REORGANIZE by other transaction
> -
>
> Key: HAWQ-273
> URL: https://issues.apache.org/jira/browse/HAWQ-273
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> testdb=# DROP TABLE tbl_isolation;
> DROP TABLE
> testdb=# CREATE TABLE tbl_isolation (a INT, b int, c int) WITH 
> (appendonly=true);
> CREATE TABLE
> testdb=# INSERT INTO tbl_isolation SELECT generate_series(1, 10), 
> generate_series(1, 10), generate_series(1, 10);
> INSERT 0 10
> •
> Thread A:
> testdb=# BEGIN transaction isolation level SERIALIZABLE;
> BEGIN
> testdb=# ALTER TABLE tbl_isolation set with ( reorganize='true') distributed 
> randomly;
> ALTER TABLE
> •
> Thread B:
> testdb=# BEGIN transaction isolation level read committed;
> BEGIN
> testdb=# select count(*) from tbl_isolation;
> •
> Thread A:
> testdb=# commit;
> COMMIT
> •
> •
> Thread B:
> testdb=# select count(*) from tbl_isolation;
> count
> ---
> 0
> (1 row)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-273) Concurrent read committed SELECT return 0 rows for AO table which is ALTERed with REORGANIZE by other transaction

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-273:
---
Fix Version/s: 2.0.0

> Concurrent read committed SELECT return 0 rows for AO table which is ALTERed 
> with REORGANIZE by other transaction
> -
>
> Key: HAWQ-273
> URL: https://issues.apache.org/jira/browse/HAWQ-273
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> testdb=# DROP TABLE tbl_isolation;
> DROP TABLE
> testdb=# CREATE TABLE tbl_isolation (a INT, b int, c int) WITH 
> (appendonly=true);
> CREATE TABLE
> testdb=# INSERT INTO tbl_isolation SELECT generate_series(1, 10), 
> generate_series(1, 10), generate_series(1, 10);
> INSERT 0 10
> •
> Thread A:
> testdb=# BEGIN transaction isolation level SERIALIZABLE;
> BEGIN
> testdb=# ALTER TABLE tbl_isolation set with ( reorganize='true') distributed 
> randomly;
> ALTER TABLE
> •
> Thread B:
> testdb=# BEGIN transaction isolation level read committed;
> BEGIN
> testdb=# select count(*) from tbl_isolation;
> •
> Thread A:
> testdb=# commit;
> COMMIT
> •
> •
> Thread B:
> testdb=# select count(*) from tbl_isolation;
> count
> ---
> 0
> (1 row)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-273) Concurrent read committed SELECT return 0 rows for AO table which is ALTERed with REORGANIZE by other transaction

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-273:
---
Labels:   (was: closed-pr)

> Concurrent read committed SELECT return 0 rows for AO table which is ALTERed 
> with REORGANIZE by other transaction
> -
>
> Key: HAWQ-273
> URL: https://issues.apache.org/jira/browse/HAWQ-273
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Transaction
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> testdb=# DROP TABLE tbl_isolation;
> DROP TABLE
> testdb=# CREATE TABLE tbl_isolation (a INT, b int, c int) WITH 
> (appendonly=true);
> CREATE TABLE
> testdb=# INSERT INTO tbl_isolation SELECT generate_series(1, 10), 
> generate_series(1, 10), generate_series(1, 10);
> INSERT 0 10
> •
> Thread A:
> testdb=# BEGIN transaction isolation level SERIALIZABLE;
> BEGIN
> testdb=# ALTER TABLE tbl_isolation set with ( reorganize='true') distributed 
> randomly;
> ALTER TABLE
> •
> Thread B:
> testdb=# BEGIN transaction isolation level read committed;
> BEGIN
> testdb=# select count(*) from tbl_isolation;
> •
> Thread A:
> testdb=# commit;
> COMMIT
> •
> •
> Thread B:
> testdb=# select count(*) from tbl_isolation;
> count
> ---
> 0
> (1 row)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-255) Checkpoint is blocked by TRANSACTION ABORT for INSERTING INTO a big partition table

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-255.
--
Resolution: Fixed

> Checkpoint is blocked by TRANSACTION ABORT for INSERTING INTO a big partition 
> table
> ---
>
> Key: HAWQ-255
> URL: https://issues.apache.org/jira/browse/HAWQ-255
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Ming LI
>Assignee: Ming LI
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> If at the same time there are other INSERT commands running in parallel, it 
> will generate a lot of pg_xlog files. If the system/master nodes crash at 
> this point, recovery will take a very long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-255) Checkpoint is blocked by TRANSACTION ABORT for INSERTING INTO a big partition table

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-255:
---
Assignee: Ming LI  (was: Lei Chang)

> Checkpoint is blocked by TRANSACTION ABORT for INSERTING INTO a big partition 
> table
> ---
>
> Key: HAWQ-255
> URL: https://issues.apache.org/jira/browse/HAWQ-255
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Ming LI
>Assignee: Ming LI
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> If at the same time there are other INSERT commands running in parallel, it 
> will generate a lot of pg_xlog files. If the system/master nodes crash at 
> this point, recovery will take a very long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-339) Validate standby host name while read hawq-site.xml

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-339:
---
Assignee: Radar Lei  (was: Lei Chang)

> Validate standby host name while read hawq-site.xml
> ---
>
> Key: HAWQ-339
> URL: https://issues.apache.org/jira/browse/HAWQ-339
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> Standby host name should not be '', 'none', 'localhost', etc. We will add an 
> initial check while reading it.
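> A minimal sketch of that check (illustrative only; the rejected-name list 
> follows the values mentioned above):
> {code}
> INVALID_STANDBY_HOSTS = ('', 'none', 'localhost')
>
> def validate_standby_host(name):
>     if name is None or name.strip().lower() in INVALID_STANDBY_HOSTS:
>         raise ValueError("invalid standby host name: %r" % name)
>     return name.strip()
> {code}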



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-339) Validate standby host name while read hawq-site.xml

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-339:
---
Fix Version/s: 2.0.0

> Validate standby host name while read hawq-site.xml
> ---
>
> Key: HAWQ-339
> URL: https://issues.apache.org/jira/browse/HAWQ-339
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> Standby host name should not be '', 'none', 'localhost', etc. We will add an 
> initial check while reading it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-255) Checkpoint is blocked by TRANSACTION ABORT for INSERTING INTO a big partition table

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-255:
---
Fix Version/s: 2.0.0

> Checkpoint is blocked by TRANSACTION ABORT for INSERTING INTO a big partition 
> table
> ---
>
> Key: HAWQ-255
> URL: https://issues.apache.org/jira/browse/HAWQ-255
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Ming LI
>Assignee: Lei Chang
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> If at the same time there are other INSERT commands running in parallel, it 
> will generate a lot of pg_xlog files. If the system/master nodes crash at 
> this point, recovery will take a very long time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-339) Validate standby host name while read hawq-site.xml

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-339.
--
Resolution: Fixed

> Validate standby host name while read hawq-site.xml
> ---
>
> Key: HAWQ-339
> URL: https://issues.apache.org/jira/browse/HAWQ-339
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> Standby host name should not be '', 'none', 'localhost', etc. We will add an 
> initial check while reading it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-127) Create CI projects for HAWQ releases

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-127:
---
Fix Version/s: 2.1.0

> Create CI projects for HAWQ releases
> 
>
> Key: HAWQ-127
> URL: https://issues.apache.org/jira/browse/HAWQ-127
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.1.0
>
>
> Create Jenkins projects that build HAWQ binary, source tarballs and docker 
> images, and run sanity tests including at least installcheck-good tests for 
> each commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-268) HAWQ activate standby fails

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-268.
--
Resolution: Fixed

> HAWQ activate standby fails
> ---
>
> Key: HAWQ-268
> URL: https://issues.apache.org/jira/browse/HAWQ-268
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> Since we changed the error-out logic of 'hawq stop/start', 'hawq activate 
> standby' now fails due to a stop/start cluster error.
> We should change this to:
> 1. While activating the standby, hawq tries to stop the master/standby node 
> but does not fail out if they can't be stopped. We need to make sure no 
> standby syncmaster is running.
> 2. We should consider whether it's necessary to stop/start segments during 
> the activation process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-268) HAWQ activate standby fails

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-268:
---
Assignee: Radar Lei  (was: Lei Chang)

> HAWQ activate standby fails
> ---
>
> Key: HAWQ-268
> URL: https://issues.apache.org/jira/browse/HAWQ-268
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> Since we changed the error-out logic of 'hawq stop/start', 'hawq activate 
> standby' now fails due to a stop/start cluster error.
> We should change this to:
> 1. While activating the standby, hawq tries to stop the master/standby node 
> but does not fail out if they can't be stopped. We need to make sure no 
> standby syncmaster is running.
> 2. We should consider whether it's necessary to stop/start segments during 
> the activation process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-268) HAWQ activate standby fails

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-268:
---
Fix Version/s: 2.0.0

> HAWQ activate standby fails
> ---
>
> Key: HAWQ-268
> URL: https://issues.apache.org/jira/browse/HAWQ-268
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
>  Labels: closed-pr
> Fix For: 2.0.0
>
>
> Since we changed the error-out logic of 'hawq stop/start', 'hawq activate 
> standby' now fails due to a stop/start cluster error.
> We should change this to:
> 1. While activating the standby, hawq tries to stop the master/standby node 
> but does not fail out if they can't be stopped. We need to make sure no 
> standby syncmaster is running.
> 2. We should consider whether it's necessary to stop/start segments during 
> the activation process.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-127) Create CI projects for HAWQ releases

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-127:
---
Assignee: Radar Lei  (was: Lei Chang)

> Create CI projects for HAWQ releases
> 
>
> Key: HAWQ-127
> URL: https://issues.apache.org/jira/browse/HAWQ-127
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Build
>Reporter: Lei Chang
>Assignee: Radar Lei
> Fix For: 2.1.0
>
>
> Create Jenkins projects that build HAWQ binary, source tarballs and docker 
> images, and run sanity tests including at least installcheck-good tests for 
> each commit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-8) Installing the HAWQ Software thru the Apache Ambari

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-8?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-8:
-
Fix Version/s: backlog

> Installing the HAWQ Software thru the Apache Ambari 
> 
>
> Key: HAWQ-8
> URL: https://issues.apache.org/jira/browse/HAWQ-8
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Ambari
> Environment: CentOS
>Reporter: Vijayakumar Ramdoss
>Assignee: Alexander Denissov
> Fix For: backlog
>
> Attachments: 1Le8tdm[1]
>
>
> In order to integrate with the Hadoop system, we would have to install the 
> HAWQ software through Ambari.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-124) Create Project Maturity Model summary file

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-124:
---
Fix Version/s: (was: backlog)
   2.1.0

> Create Project Maturity Model summary file
> --
>
> Key: HAWQ-124
> URL: https://issues.apache.org/jira/browse/HAWQ-124
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Core
>Reporter: Caleb Welton
>Assignee: Lei Chang
> Fix For: 2.1.0
>
>
> Graduating from an Apache Incubator project requires showing the Apache 
> Incubator IPMC that we have reached a level of maturity as an incubator 
> project.  One tool that can be used to assess our maturity is the [Apache 
> Project Maturity Model 
> Document|https://community.apache.org/apache-way/apache-project-maturity-model.html].
>   
> I propose we do something similar to what Groovy did and include a Project 
> Maturity Self assessment in our source code and evaluate ourselves with 
> respect to project maturity with each of our reports.  
> To do:
> 1. Create a MATURITY.adoc file in our root project directory containing our 
> self assessment.
> See 
> https://github.com/apache/groovy/blob/67b87a3592f13a6281f5b20081c37a66c80079b9/MATURITY.adoc
>  as an example document in the Groovy project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-98) Moving HAWQ docker file into code base

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-98?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-98:
--
Fix Version/s: 2.1.0

> Moving HAWQ docker file into code base
> --
>
> Key: HAWQ-98
> URL: https://issues.apache.org/jira/browse/HAWQ-98
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Goden Yao
>Assignee: Roman Shaposhnik
> Fix For: 2.1.0
>
>
> We have a pre-built docker image (check [HAWQ build & 
> install|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61320026])
> sitting outside the codebase.
> It should be incorporated into the Apache git repository and maintained by 
> the community.
> The proposed location is to create a folder under the project root.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-351) Add movefilespace option to 'hawq filespace'

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-351:
---
Fix Version/s: 2.0.0

> Add movefilespace option to 'hawq filespace'
> 
>
> Key: HAWQ-351
> URL: https://issues.apache.org/jira/browse/HAWQ-351
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.0.0
>
>
> Currently 'hawq filespace' can only create new filespaces. We will add a 
> '--movefilespace' option and a '--location' option to support changing 
> existing filespace locations.
> This is important for changing a filespace's HDFS location from non-HA to HA.
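> Based on the options named above, the intended invocation would look roughly 
> like this (the filespace name and URL are placeholders):
> {noformat}
> hawq filespace --movefilespace my_filespace --location hdfs://nameservice1/hawq/my_filespace
> {noformat}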



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-355) order by problem: sorting varchar column with space is not correct.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-355:
---
Fix Version/s: backlog

> order by problem: sorting varchar column with space is not correct.
> ---
>
> Key: HAWQ-355
> URL: https://issues.apache.org/jira/browse/HAWQ-355
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: SuperJDC
>Assignee: Lei Chang
> Fix For: backlog
>
>
> First, my HAWQ was downloaded from the official website: 
> https://network.pivotal.io/products/pivotal-hdb, a released stable version.
> My steps:
> DROP TABLE IF EXISTS testorder;
> CREATE TABLE testorder(
>   ss VARCHAR(10)
> ) distributed randomly;
> INSERT INTO testorder 
> VALUES ('cc'), ('c c'), ('cc'), 
> ('aa'), ('a a'), ('ac'), 
> ('b c'), ('bc'), ('bb');
> SELECT ss FROM testorder 
> ORDER BY ss;
> The result:
> aa
> a a
> ac
> bb
> bc
> b c
> cc
> cc
> c c
> It seems that when a column value contains a space character, the sort 
> order is not correct.
> I followed the documented steps and successfully integrated with Ambari. All 
> HAWQ configurations are the defaults.
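> The output shown is consistent with sorting under a non-C locale such as 
> en_US.UTF-8, where spaces are largely ignored at the primary comparison 
> level; under lc_collate = 'C', 'a a' would sort before 'aa'. A quick check 
> (a sketch, assuming psql access to the cluster):
> {code}
> SHOW lc_collate;                        -- text ORDER BY follows this locale
> SELECT ss FROM testorder ORDER BY ss;   -- compare against the locale above
> {code}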
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-274) Add disk check for JBOD temporary directory in segment FTS

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-274:
---
Fix Version/s: 2.0.0

> Add disk check for JBOD temporary directory in segment FTS
> --
>
> Key: HAWQ-274
> URL: https://issues.apache.org/jira/browse/HAWQ-274
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Fault Tolerance, Resource Manager
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> Add a disk check for the JBOD temporary directories in segment FTS.
> Add a column to the catalog table gp_segment_configuration that indicates 
> which directory has failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-124) Create Project Maturity Model summary file

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-124:
---
Fix Version/s: backlog

> Create Project Maturity Model summary file
> --
>
> Key: HAWQ-124
> URL: https://issues.apache.org/jira/browse/HAWQ-124
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Core
>Reporter: Caleb Welton
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Graduating from an Apache Incubator project requires showing the Apache 
> Incubator IPMC that we have reached a level of maturity as an incubator 
> project.  One tool that can be used to assess our maturity is the [Apache 
> Project Maturity Model 
> Document|https://community.apache.org/apache-way/apache-project-maturity-model.html].
>   
> I propose we do something similar to what Groovy did and include a Project 
> Maturity Self assessment in our source code and evaluate ourselves with 
> respect to project maturity with each of our reports.  
> To do:
> 1. Create a MATURITY.adoc file in our root project directory containing our 
> self assessment.
> See 
> https://github.com/apache/groovy/blob/67b87a3592f13a6281f5b20081c37a66c80079b9/MATURITY.adoc
>  as an example document in the Groovy project.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-274) Add disk check for JBOD temporary directory in segment FTS

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-274.
--
Resolution: Fixed

> Add disk check for JBOD temporary directory in segment FTS
> --
>
> Key: HAWQ-274
> URL: https://issues.apache.org/jira/browse/HAWQ-274
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Fault Tolerance, Resource Manager
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> Add a disk check for the JBOD temporary directories in segment FTS.
> Add a column to the catalog table gp_segment_configuration that indicates 
> which directory has failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-360) Data loss when altering a partition table by adding two columns at one time.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-360:
---
Assignee: Ming LI  (was: Lei Chang)

> Data loss when altering a partition table by adding two columns at one time.
> ---
>
> Key: HAWQ-360
> URL: https://issues.apache.org/jira/browse/HAWQ-360
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: DDL
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> {code}
> CREATE TABLE part_1 (a int, b int, c int)
> WITH (appendonly=true, compresslevel=5)
> partition by range (a)
> (
>  partition b start (1) end (50) every (1)
> );
> insert into part_1 values(1,1,1);
> select * from part_1;
>  a | b | c
> ---+---+---
>  1 | 1 | 1
> (1 row)
> alter table part_1 add column p int default 3,add column q int default 4;
> select * from part_1;
>  a | b | c | p | q
> ---+---+---+---+---
> (0 rows)
> {code}
> When I check the HDFS files, I find that the size of the new HDFS files is 
> 0, which means the data was lost when the table was altered and new files 
> were created for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-360) Data loss when altering a partition table by adding two columns at one time.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-360:
---
Fix Version/s: 2.0.0

> Data loss when altering a partition table by adding two columns at one time.
> ---
>
> Key: HAWQ-360
> URL: https://issues.apache.org/jira/browse/HAWQ-360
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: DDL
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> {code}
> CREATE TABLE part_1 (a int, b int, c int)
> WITH (appendonly=true, compresslevel=5)
> partition by range (a)
> (
>  partition b start (1) end (50) every (1)
> );
> insert into part_1 values(1,1,1);
> select * from part_1;
>  a | b | c
> ---+---+---
>  1 | 1 | 1
> (1 row)
> alter table part_1 add column p int default 3,add column q int default 4;
> select * from part_1;
>  a | b | c | p | q
> ---+---+---+---+---
> (0 rows)
> {code}
> When I check the HDFS files, I find that the size of the new HDFS files is 
> 0, which means the data was lost when the table was altered and new files 
> were created for it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-216) Built-in function gp_update_global_sequence_entry has a bug

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-216:
---
Fix Version/s: 2.0.0

> Built-in function gp_update_global_sequence_entry has a bug
> 
>
> Key: HAWQ-216
> URL: https://issues.apache.org/jira/browse/HAWQ-216
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> The code at persistentutil.c:200 is as follows.
> {code}
> line 200: int8    sequenceVal;
> line 212: sequenceVal = PG_GETARG_INT64(1);
> {code}
> It puts an int64 into an int8, which causes bugs like the following.
> {code}
> ff=# select * from gp_global_sequence ;
>  sequence_num
> --
>  1200
>   100
>   100
>   100
>   100
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> (15 rows)
> ff=# select gp_update_global_sequence_entry('(0,2)'::tid,128);
> ERROR:  sequence number too low (persistentutil.c:232)
> {code}
> It compares 128 with 100 and judges that 128 < 100, because casting 128 to 
> the int8 type makes 0x80 (128) be interpreted as -128.
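> In PostgreSQL's C headers, {{int8}} is a one-byte signed integer, so the 
> likely one-line fix (a sketch; surrounding code omitted) is to declare the 
> variable as {{int64}} to match {{PG_GETARG_INT64}}:
> {code}
> int64   sequenceVal;                  /* was: int8 sequenceVal; 8 bits only */
> ...
> sequenceVal = PG_GETARG_INT64(1);     /* value now kept without truncation */
> {code}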



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-229) External tables can be altered, which causes errors.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-229:
---
Fix Version/s: backlog

> External tables can be altered, which causes errors.
> -
>
> Key: HAWQ-229
> URL: https://issues.apache.org/jira/browse/HAWQ-229
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: backlog
>
>
> We can't use "alter external table" to alter an external table, but we can 
> use "alter table" to alter an external table.
> {code}
> mytest=# create external web table e4 (c1 int, c2 int) execute 'echo 1, 1' ON 
> 2 format 'CSV';
> CREATE EXTERNAL TABLE
> mytest=# select * from e4;
>  c1 | c2
> +
>   1 |  1
>   1 |  1
> (2 rows)
> mytest=# alter table e4 drop column c2;
> WARNING:  "e4" is an external table. ALTER TABLE for external tables is 
> deprecated.
> HINT:  Use ALTER EXTERNAL TABLE instead
> ALTER TABLE
> mytest=# select * from e4;
> ERROR:  extra data after last expected column  (seg0 localhost:4 
> pid=57645)
> DETAIL:  External table e4, line 1 of execute:echo 1, 1: "1, 1"
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-133) core when use plpython udf

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-133:
---
Fix Version/s: 2.0.0

> core when use plpython udf
> --
>
> Key: HAWQ-133
> URL: https://issues.apache.org/jira/browse/HAWQ-133
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Dong Li
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> Running the SQL below can reproduce the core dump.
> {code}
> CREATE PROCEDURAL LANGUAGE plpythonu;
> CREATE TABLE users (
>   fname text not null,
>   lname text not null,
>   username text,
>   userid serial
>   -- , PRIMARY KEY(lname, fname) 
>   ) DISTRIBUTED BY (userid);
> INSERT INTO users (fname, lname, username) VALUES ('jane', 'doe', 'j_doe');
> INSERT INTO users (fname, lname, username) VALUES ('john', 'doe', 'johnd');
> INSERT INTO users (fname, lname, username) VALUES ('willem', 'doe', 'w_doe');
> INSERT INTO users (fname, lname, username) VALUES ('rick', 'smith', 'slash');
> CREATE FUNCTION spi_prepared_plan_test_one(a text) RETURNS text
>   AS
> 'if not SD.has_key("myplan"):
>   q = "SELECT count(*) FROM users WHERE lname = $1"
>   SD["myplan"] = plpy.prepare(q, [ "text" ])
> try:
>   rv = plpy.execute(SD["myplan"], [a])
>   return "there are " + str(rv[0]["count"]) + " " + str(a) + "s"
> except Exception, ex:
>   plpy.error(str(ex))
> return None
> '
>   LANGUAGE plpythonu;
> select spi_prepared_plan_test_one('doe');
> select spi_prepared_plan_test_one('smith');
> {code}
> when execute "select spi_prepared_plan_test_one('smith');"
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.
> !>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-216) Built-in function gp_update_global_sequence_entry has a bug

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-216:
---
Assignee: Ming LI  (was: Lei Chang)

> Built-in function gp_update_global_sequence_entry has a bug
> 
>
> Key: HAWQ-216
> URL: https://issues.apache.org/jira/browse/HAWQ-216
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Dong Li
>Assignee: Ming LI
>
> The code in persistentutil.c:200 is as follows.
> {code}
> line 200: int8    sequenceVal;
> line 212: sequenceVal = PG_GETARG_INT64(1);
> {code}
> It puts an int64 value into an int8 variable, which causes bugs like the following.
> {code}
> ff=# select * from gp_global_sequence ;
>  sequence_num
> --
>  1200
>   100
>   100
>   100
>   100
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> 0
> (15 rows)
> ff=# select gp_update_global_sequence_entry('(0,2)'::tid,128);
> ERROR:  sequence number too low (persistentutil.c:232)
> {code}
> It compares 128 with 100 and judges that 128 < 100, because storing 128 in 
> the int8 variable makes 0x80 (128) wrap around to -128.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-358) Installcheck good failures in hawq-dev environment

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-358:
---
Assignee: Ruilong Huo  (was: Jiali Yao)

> Installcheck good failures in hawq-dev environment
> --
>
> Key: HAWQ-358
> URL: https://issues.apache.org/jira/browse/HAWQ-358
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Caleb Welton
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> Building and testing within a hawq-dev environment, set up via the 
> instructions outlined in the hawq-devel docker environment 
> (https://hub.docker.com/r/mayjojo/hawq-devel/), 
> results in the following errors:
> {noformat}
> ...
> test errortbl ... FAILED (6.83 sec)
> ...
> test subplan  ... FAILED (8.15 sec)
> ...
> test create_table_distribution ... FAILED (3.47 sec)
> test copy ... FAILED (34.76 sec)
> ...
> test set_functions... FAILED (4.90 sec)
> ...
> test exttab1  ... FAILED (17.66 sec)
> ...
> {noformat}
> Summary of issues:
> * *errortbl* - every connection to gpfdist results in "connection with 
> gpfdist failed for gpfdist://localhost:7070/nation.tbl"
> * *subplan* - trying to create plpython resulted in "could not access file 
> "$libdir/plpython": No such file or directory"; the lack of plpython causes 
> many other statements to fail
> * *create_table_distribution* - the test likely needs some refactoring to 
> calculate the correct bucketnum based on the current system configuration
> * *copy* - seems to fail because rows aren't coming out in the expected 
> order; the test needs fixing to handle this
> * *set_functions* - same plpythonu issue described above
> * *exttab1* - same issue reading from gpfdist described above



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-357) Track how many times a segment can not get expected containers from global resource manager

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-357:
---
Fix Version/s: 2.0.0

> Track how many times a segment can not get expected containers from global 
> resource manager
> ---
>
> Key: HAWQ-357
> URL: https://issues.apache.org/jira/browse/HAWQ-357
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Resource Manager
>Reporter: Yi Jin
>Assignee: Yi Jin
> Fix For: 2.0.0
>
>
> This improvement makes HAWQ RM able to track how many times a segment cannot 
> get the expected containers from the global resource manager, YARN for example. 
> In some cases another YARN application may hold containers without returning 
> them in time, so HAWQ RM may repeatedly find some segments with no resources. 
> This improvement makes HAWQ RM log the situation as a warning, as sketched below.
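> A rough C sketch of the bookkeeping (editor's illustration; the struct and 
> function names are hypothetical, only elog/WARNING come from the codebase):
> {code}
> #include "postgres.h"   /* for elog(WARNING, ...) */
> 
> typedef struct SegmentAllocStat
> {
>     char hostname[256];
>     int  expected_containers;  /* containers HAWQ RM requested from YARN */
>     int  granted_containers;   /* containers actually granted */
>     int  shortfall_count;      /* consecutive unfulfilled requests */
> } SegmentAllocStat;
> 
> static void
> trackContainerShortfall(SegmentAllocStat *seg)
> {
>     if (seg->granted_containers < seg->expected_containers)
>     {
>         seg->shortfall_count++;
>         elog(WARNING,
>              "segment %s got %d of %d expected containers from the global "
>              "resource manager (%d consecutive shortfalls)",
>              seg->hostname, seg->granted_containers,
>              seg->expected_containers, seg->shortfall_count);
>     }
>     else
>         seg->shortfall_count = 0;  /* reset once the request is fulfilled */
> }
> {code}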



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-358) Installcheck good failures in hawq-dev environment

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-358:
---
Fix Version/s: 2.0.0

> Installcheck good failures in hawq-dev environment
> --
>
> Key: HAWQ-358
> URL: https://issues.apache.org/jira/browse/HAWQ-358
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Reporter: Caleb Welton
>Assignee: Jiali Yao
> Fix For: 2.0.0
>
>
> Building and testing within a hawq-dev environment, set up via the 
> instructions outlined in the hawq-devel docker environment 
> (https://hub.docker.com/r/mayjojo/hawq-devel/), 
> results in the following errors:
> {noformat}
> ...
> test errortbl ... FAILED (6.83 sec)
> ...
> test subplan  ... FAILED (8.15 sec)
> ...
> test create_table_distribution ... FAILED (3.47 sec)
> test copy ... FAILED (34.76 sec)
> ...
> test set_functions... FAILED (4.90 sec)
> ...
> test exttab1  ... FAILED (17.66 sec)
> ...
> {noformat}
> Summary of issues:
> * *errortbl* - every connection to gpfdist results in "connection with 
> gpfdist failed for gpfdist://localhost:7070/nation.tbl"
> * *subplan* - trying to create plpython resulted in "could not access file 
> "$libdir/plpython": No such file or directory"; the lack of plpython causes 
> many other statements to fail
> * *create_table_distribution* - the test likely needs some refactoring to 
> calculate the correct bucketnum based on the current system configuration
> * *copy* - seems to fail because rows aren't coming out in the expected 
> order; the test needs fixing to handle this
> * *set_functions* - same plpythonu issue described above
> * *exttab1* - same issue reading from gpfdist described above



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-348) Optimizer (ORCA/Planner) should not preprocess (table) functions at planning phase

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-348:
---
Fix Version/s: backlog

> Optimizer (ORCA/Planner) should not preprocess (table) functions at planning 
> phase
> --
>
> Key: HAWQ-348
> URL: https://issues.apache.org/jira/browse/HAWQ-348
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Optimizer
>Reporter: Ruilong Huo
>Assignee: Amr El-Helw
> Fix For: backlog
>
>
> Optimizer (ORCA/Planner) currently preprocesses (table) functions (either in 
> the target list or the from clause) at planning phase. This introduces:
> 1. Much lower performance, since the result of the (table) function is 
> motioned to the QD from the QEs after the preprocessing and is further 
> processed at the QD, especially when the result is large. In this case the QD 
> does the heavy work and becomes the bottleneck. It shows about a 20x 
> performance difference in the example below.
> 2. Much more memory overhead at the QD, since it needs to hold the result of 
> the (table) function. This is risky since the result might be unpredictably large.
> Here are the steps to reproduce this issue, as well as some initial analysis:
> Step 1: Prepare schema and data
> {noformat}
> CREATE TABLE t (id INT);
> CREATE TABLE
> INSERT INTO t SELECT generate_series(1, 10000);
> INSERT 0 10000
> CREATE OR REPLACE FUNCTION get_t()
> RETURNS SETOF t
> LANGUAGE SQL AS
> 'SELECT * FROM t'
> STABLE;
> CREATE FUNCTION
> {noformat}
> Step 2: With optimizer = OFF (Planner)
> {noformat}
> SET optimizer='OFF';
> SET
> select sum(id) from t;
>sum
> --
>  50005000
> (1 row)
> Time: 8801.577 ms
> select sum(id) from get_t();
>sum
> --
>  50005000
> (1 row)
> Time: 189992.273 ms
> EXPLAIN SELECT sum(id) FROM get_t();
>  QUERY PLAN
> 
>  Aggregate  (cost=32.50..32.51 rows=1 width=8)
>->  Function Scan on get_t  (cost=0.00..12.50 rows=8000 width=4)
>  Settings:  default_segment_num=8; optimizer=off
>  Optimizer status: legacy query optimizer
> (4 rows)
> {noformat}
> Step 3: With optimizer = ON (ORCA)
> {noformat}
> SET optimizer='ON';
> SET
> select sum(id) from t;
>sum
> --
>  50005000
> (1 row)
> Time: 10103.436 ms
> select sum(id) from get_t();
>sum
> --
>  50005000
> (1 row)
> Time: 195551.740 ms
> EXPLAIN SELECT sum(id) FROM get_t();
>  QUERY PLAN
> 
>  Aggregate  (cost=32.50..32.51 rows=1 width=8)
>->  Function Scan on get_t  (cost=0.00..12.50 rows=8000 width=4)
>  Settings:  default_segment_num=8
>  Optimizer status: legacy query optimizer
> (4 rows)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-350) Disable some installcheck tests because plpython is not installed by default

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-350:
---
Fix Version/s: (was: 2.1.0)
   2.0.0

> Disable some installcheck tests because plpython is not installed by default
> --
>
> Key: HAWQ-350
> URL: https://issues.apache.org/jira/browse/HAWQ-350
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> We have three suites failing in the installcheck test:
> 1. subplan and set_functions fail because plpython is not installed by default
> 2. exttab1 fails due to gpfdist changes
> {noformat}
> [gpadmin@localhost incubator-hawq]$ make installcheck-good
> == dropping database "regression" ==
> NOTICE:  database "regression" does not exist, skipping
> DROP DATABASE
> == creating database "regression" ==
> CREATE DATABASE
> ALTER DATABASE
> == checking optimizer status  ==
> Optimizer disabled. Using planner answer files
> == running regression test queries==
> test type_sanity  ... ok (0.05 sec)
> test querycontext ... ok (8.19 sec)
> test errortbl ... ok (4.80 sec)
> test goh_create_type_composite ... ok (3.10 sec)
> test goh_partition... ok (37.64 sec)
> test goh_toast... ok (1.25 sec)
> test goh_database ... ok (2.84 sec)
> test goh_gp_dist_random   ... ok (0.24 sec)
> test gpsql_alter_table... ok (12.11 sec)
> test goh_portals  ... ok (8.25 sec)
> test goh_prepare  ... ok (7.08 sec)
> test goh_alter_owner  ... ok (0.25 sec)
> test boolean  ... ok (3.18 sec)
> test char ... ok (2.58 sec)
> test name ... ok (2.29 sec)
> test varchar  ... ok (2.58 sec)
> test text ... ok (0.58 sec)
> test int2 ... ok (4.35 sec)
> test int4 ... ok (5.63 sec)
> test int8 ... ok (4.17 sec)
> test oid  ... ok (2.01 sec)
> test float4   ... ok (2.89 sec)
> test date ... ok (2.45 sec)
> test time ... ok (1.98 sec)
> test insert   ... ok (4.44 sec)
> test create_function_1... ok (0.01 sec)
> test function ... ok (8.10 sec)
> test function_extensions  ... ok (0.03 sec)
> test subplan  ... FAILED (9.59 sec)
> test create_table_test... ok (0.25 sec)
> test create_table_distribution ... ok (3.20 sec)
> test copy ... ok (35.09 sec)
> test create_aggregate ... ok (9.77 sec)
> test aggregate_with_groupingsets ... ok (0.81 sec)
> test information_schema   ... ok (0.09 sec)
> test transactions ... ok (6.32 sec)
> test temp ... ok (4.09 sec)
> test set_functions... FAILED (5.86 sec)
> test sequence ... ok (1.19 sec)
> test polymorphism ... ok (3.99 sec)
> test rowtypes ... ok (2.67 sec)
> test exttab1  ... FAILED (13.85 sec)
> test gpcopy   ... ok (29.14 sec)
> test madlib_svec_test ... ok (1.57 sec)
> test agg_derived_win  ... ok (3.18 sec)
> test parquet_ddl  ... ok (8.29 sec)
> test parquet_multipletype ... ok (2.81 sec)
> test parquet_pagerowgroup_size ... ok (14.40 sec)
> test parquet_compression  ... ok (13.93 sec)
> test parquet_subpartition ... ok (7.24 sec)
> test caqlinmem... ok (0.14 sec)
> test hcatalog_lookup  ... ok (2.02 sec)
> test json_load... ok (0.42 sec)
> test external_oid ... ok (0.79 sec)
> test validator_function   ... ok (0.03 sec)
> ===
>  3 of 55 tests failed.
> ===
> The differences that caused some tests to fail can be viewed in the
> file "./regression.diffs".  A copy of the test summary that you see
> above is saved in the file "./regression.out".
> {noformat}
> {noformat}
> [gpadmin@localhost incubator-hawq]$ cat src/test/regress/regression.diffs
> *** ./expected/subplan.out2016-01-18 05:36:05.000680391 -0800
> --- ./results/subplan.out 2016-01-18 05:36:05.048608087 -0800
> ***
> *** 20,25 
> --- 20,26 
>   insert into i4 select i, i-10 from generate_series(-5,0)i;
>   DROP LANGUAGE IF EXISTS plpythonu CASCADE;
>   CREATE LANGUAGE plpythonu;
> + ERROR:  could not access file "$libdir/plpython": No such file or directory
>   create or replace function twice(int) returns int as $$
>  select 2 * $1;
>   $$ language sql;
> ***
> *** 34,56 
>   else:
>   return x * 3
>   $$ language plpythonu;
> 

[jira] [Updated] (HAWQ-350) Disable some installcheck tests because plpython is not installed by default

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-350:
---
Fix Version/s: 2.1.0

> Disable some installcheck tests because plpython is not installed by default
> --
>
> Key: HAWQ-350
> URL: https://issues.apache.org/jira/browse/HAWQ-350
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.1.0
>
>
> We have three suites failing in the installcheck test:
> 1. subplan and set_functions fail because plpython is not installed by default
> 2. exttab1 fails due to gpfdist changes
> {noformat}
> [gpadmin@localhost incubator-hawq]$ make installcheck-good
> == dropping database "regression" ==
> NOTICE:  database "regression" does not exist, skipping
> DROP DATABASE
> == creating database "regression" ==
> CREATE DATABASE
> ALTER DATABASE
> == checking optimizer status  ==
> Optimizer disabled. Using planner answer files
> == running regression test queries==
> test type_sanity  ... ok (0.05 sec)
> test querycontext ... ok (8.19 sec)
> test errortbl ... ok (4.80 sec)
> test goh_create_type_composite ... ok (3.10 sec)
> test goh_partition... ok (37.64 sec)
> test goh_toast... ok (1.25 sec)
> test goh_database ... ok (2.84 sec)
> test goh_gp_dist_random   ... ok (0.24 sec)
> test gpsql_alter_table... ok (12.11 sec)
> test goh_portals  ... ok (8.25 sec)
> test goh_prepare  ... ok (7.08 sec)
> test goh_alter_owner  ... ok (0.25 sec)
> test boolean  ... ok (3.18 sec)
> test char ... ok (2.58 sec)
> test name ... ok (2.29 sec)
> test varchar  ... ok (2.58 sec)
> test text ... ok (0.58 sec)
> test int2 ... ok (4.35 sec)
> test int4 ... ok (5.63 sec)
> test int8 ... ok (4.17 sec)
> test oid  ... ok (2.01 sec)
> test float4   ... ok (2.89 sec)
> test date ... ok (2.45 sec)
> test time ... ok (1.98 sec)
> test insert   ... ok (4.44 sec)
> test create_function_1... ok (0.01 sec)
> test function ... ok (8.10 sec)
> test function_extensions  ... ok (0.03 sec)
> test subplan  ... FAILED (9.59 sec)
> test create_table_test... ok (0.25 sec)
> test create_table_distribution ... ok (3.20 sec)
> test copy ... ok (35.09 sec)
> test create_aggregate ... ok (9.77 sec)
> test aggregate_with_groupingsets ... ok (0.81 sec)
> test information_schema   ... ok (0.09 sec)
> test transactions ... ok (6.32 sec)
> test temp ... ok (4.09 sec)
> test set_functions... FAILED (5.86 sec)
> test sequence ... ok (1.19 sec)
> test polymorphism ... ok (3.99 sec)
> test rowtypes ... ok (2.67 sec)
> test exttab1  ... FAILED (13.85 sec)
> test gpcopy   ... ok (29.14 sec)
> test madlib_svec_test ... ok (1.57 sec)
> test agg_derived_win  ... ok (3.18 sec)
> test parquet_ddl  ... ok (8.29 sec)
> test parquet_multipletype ... ok (2.81 sec)
> test parquet_pagerowgroup_size ... ok (14.40 sec)
> test parquet_compression  ... ok (13.93 sec)
> test parquet_subpartition ... ok (7.24 sec)
> test caqlinmem... ok (0.14 sec)
> test hcatalog_lookup  ... ok (2.02 sec)
> test json_load... ok (0.42 sec)
> test external_oid ... ok (0.79 sec)
> test validator_function   ... ok (0.03 sec)
> ===
>  3 of 55 tests failed.
> ===
> The differences that caused some tests to fail can be viewed in the
> file "./regression.diffs".  A copy of the test summary that you see
> above is saved in the file "./regression.out".
> {noformat}
> {noformat}
> [gpadmin@localhost incubator-hawq]$ cat src/test/regress/regression.diffs
> *** ./expected/subplan.out2016-01-18 05:36:05.000680391 -0800
> --- ./results/subplan.out 2016-01-18 05:36:05.048608087 -0800
> ***
> *** 20,25 
> --- 20,26 
>   insert into i4 select i, i-10 from generate_series(-5,0)i;
>   DROP LANGUAGE IF EXISTS plpythonu CASCADE;
>   CREATE LANGUAGE plpythonu;
> + ERROR:  could not access file "$libdir/plpython": No such file or directory
>   create or replace function twice(int) returns int as $$
>  select 2 * $1;
>   $$ language sql;
> ***
> *** 34,56 
>   else:
>   return x * 3
>   $$ language plpythonu;
>   select t1.* from t1 where (t1.a, t

[jira] [Updated] (HAWQ-351) Add movefilespace option to 'hawq filespace'

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-351:
---
Assignee: Radar Lei  (was: Lei Chang)

> Add movefilespace option to 'hawq filespace'
> 
>
> Key: HAWQ-351
> URL: https://issues.apache.org/jira/browse/HAWQ-351
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Radar Lei
> Fix For: 2.0.0
>
>
> Currently hawq filespace can only create new filespaces; we will add a 
> '--movefilespace' option and a '--location' option to support changing 
> existing filespace locations.
> This is important for changing a filespace's hdfs location from non-HA to HA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-343) Core when setting enable_secure_filesystem to true

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-343:
---
Fix Version/s: 2.0.0

> Core when setting enable_secure_filesystem to true
> --
>
> Key: HAWQ-343
> URL: https://issues.apache.org/jira/browse/HAWQ-343
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Noa Horn
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> This happens only with a debug build; an optimized build seems to work OK.
> Repro:
> {noformat}
> # set enable_secure_filesystem to true;
> FATAL:  Unexpected internal error (cdbfilesystemcredential.c:357)
> DETAIL:  FailedAssertion("!(((void *)0) != credentials)", File: 
> "cdbfilesystemcredential.c", Line: 357)
> HINT:  Process 21815 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released until then.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> {noformat}
> Backtrace:
> {noformat}
> (gdb) bt
> #0  0x0031e06e14f3 in select () from /lib64/libc.so.6
> #1  0x00b8f108 in pg_usleep (microsec=3000) at pgsleep.c:43
> #2  0x009e4418 in elog_debug_linger (edata=0x1186600) at elog.c:4125
> #3  0x009dca95 in errfinish (dummy=0) at elog.c:595
> #4  0x009db1e8 in ExceptionalCondition (conditionName=0xe045d5 
> "!(((void *)0) != credentials)", errorType=0xe04466 "FailedAssertion", 
> fileName=0xe0432d "cdbfilesystemcredential.c", lineNumber=357) at assert.c:66
> #5  0x00b7169f in cancel_filesystem_credentials (credentials=0x0, 
> mcxt=0x0) at cdbfilesystemcredential.c:357
> #6  0x00b7176f in cleanup_filesystem_credentials (portal=0x20b8388) 
> at cdbfilesystemcredential.c:388
> #7  0x00a19ade in PortalDrop (portal=0x20b8388, isTopCommit=0 '\000') 
> at portalmem.c:419
> #8  0x008f57c5 in exec_simple_query (query_string=0x206d1f8 "set 
> enable_secure_filesystem to true;", seqServerHost=0x0, seqServerPort=-1) at 
> postgres.c:1758
> #9  0x008fa3cf in PostgresMain (argc=4, argv=0x1fb8c88, 
> username=0x1fb8a80 "hornn") at postgres.c:4711
> #10 0x008a093e in BackendRun (port=0x1f68c80) at postmaster.c:5875
> #11 0x0089fdc8 in BackendStartup (port=0x1f68c80) at postmaster.c:5468
> #12 0x00899df5 in ServerLoop () at postmaster.c:2147
> #13 0x00898eb8 in PostmasterMain (argc=9, argv=0x1f7f940) at 
> postmaster.c:1439
> #14 0x007b2812 in main (argc=9, argv=0x1f7f940) at main.c:226
> (gdb) f 5
> #5  0x00b7169f in cancel_filesystem_credentials (credentials=0x0, 
> mcxt=0x0) at cdbfilesystemcredential.c:357
> 357   Assert(NULL != credentials);
> (gdb) p mcxt
> $1 = (MemoryContext) 0x0
> (gdb) 
> {noformat}
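> The backtrace shows cleanup_filesystem_credentials() handing NULL credentials 
> to cancel_filesystem_credentials(), which then hits the assertion. A minimal 
> sketch of a defensive guard (editor's illustration; the Portal field names 
> here are hypothetical, not taken from the actual source):
> {code}
> void
> cleanup_filesystem_credentials(Portal portal)
> {
>     /* Hypothetical fields: if nothing was ever allocated for this portal,
>      * treat cleanup as a no-op instead of asserting. */
>     if (portal->filesystem_credentials == NULL ||
>         portal->filesystem_credentials_memory == NULL)
>         return;
> 
>     cancel_filesystem_credentials(portal->filesystem_credentials,
>                                   portal->filesystem_credentials_memory);
> }
> {code}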



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-343) Core when setting enable_secure_filesystem to true

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-343:
---
Assignee: Zhanwei Wang  (was: Lei Chang)

> Core when setting enable_secure_filesystem to true
> --
>
> Key: HAWQ-343
> URL: https://issues.apache.org/jira/browse/HAWQ-343
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Noa Horn
>Assignee: Zhanwei Wang
> Fix For: 2.0.0
>
>
> This happens only with a debug build; an optimized build seems to work OK.
> Repro:
> {noformat}
> # set enable_secure_filesystem to true;
> FATAL:  Unexpected internal error (cdbfilesystemcredential.c:357)
> DETAIL:  FailedAssertion("!(((void *)0) != credentials)", File: 
> "cdbfilesystemcredential.c", Line: 357)
> HINT:  Process 21815 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released until then.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> {noformat}
> Backtrace:
> {noformat}
> (gdb) bt
> #0  0x0031e06e14f3 in select () from /lib64/libc.so.6
> #1  0x00b8f108 in pg_usleep (microsec=3000) at pgsleep.c:43
> #2  0x009e4418 in elog_debug_linger (edata=0x1186600) at elog.c:4125
> #3  0x009dca95 in errfinish (dummy=0) at elog.c:595
> #4  0x009db1e8 in ExceptionalCondition (conditionName=0xe045d5 
> "!(((void *)0) != credentials)", errorType=0xe04466 "FailedAssertion", 
> fileName=0xe0432d "cdbfilesystemcredential.c", lineNumber=357) at assert.c:66
> #5  0x00b7169f in cancel_filesystem_credentials (credentials=0x0, 
> mcxt=0x0) at cdbfilesystemcredential.c:357
> #6  0x00b7176f in cleanup_filesystem_credentials (portal=0x20b8388) 
> at cdbfilesystemcredential.c:388
> #7  0x00a19ade in PortalDrop (portal=0x20b8388, isTopCommit=0 '\000') 
> at portalmem.c:419
> #8  0x008f57c5 in exec_simple_query (query_string=0x206d1f8 "set 
> enable_secure_filesystem to true;", seqServerHost=0x0, seqServerPort=-1) at 
> postgres.c:1758
> #9  0x008fa3cf in PostgresMain (argc=4, argv=0x1fb8c88, 
> username=0x1fb8a80 "hornn") at postgres.c:4711
> #10 0x008a093e in BackendRun (port=0x1f68c80) at postmaster.c:5875
> #11 0x0089fdc8 in BackendStartup (port=0x1f68c80) at postmaster.c:5468
> #12 0x00899df5 in ServerLoop () at postmaster.c:2147
> #13 0x00898eb8 in PostmasterMain (argc=9, argv=0x1f7f940) at 
> postmaster.c:1439
> #14 0x007b2812 in main (argc=9, argv=0x1f7f940) at main.c:226
> (gdb) f 5
> #5  0x00b7169f in cancel_filesystem_credentials (credentials=0x0, 
> mcxt=0x0) at cdbfilesystemcredential.c:357
> 357   Assert(NULL != credentials);
> (gdb) p mcxt
> $1 = (MemoryContext) 0x0
> (gdb) 
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-258) Investigate whether gp_fastsequence is still needed

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang closed HAWQ-258.
--
Resolution: Fixed

> Investigate whether gp_fastsequence is still needed
> ---
>
> Key: HAWQ-258
> URL: https://issues.apache.org/jira/browse/HAWQ-258
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
> Fix For: 2.0.0
>
>
> Since the block directory for AO relations is not supported anymore, we 
> suspect that gp_fastsequence is not needed anymore. However, further 
> investigation is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-331) Fix HAWQ Jenkins pullrequest build reporting SUCCESS when it was a failure

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-331:
---
Fix Version/s: 2.1.0

> Fix HAWQ Jenkins pullrequest build reporting SUCCESS when it was a failure
> --
>
> Key: HAWQ-331
> URL: https://issues.apache.org/jira/browse/HAWQ-331
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Reporter: Goden Yao
>Assignee: Radar Lei
> Fix For: 2.1.0
>
>
> https://builds.apache.org/job/HAWQ-build-pullrequest/83/console
> It has recently been discovered that Jenkins reports SUCCESS even when a 
> build actually failed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-258) Investigate whether gp_fastsequence is still needed

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-258:
---
Fix Version/s: 2.0.0

> Investigate whether gp_fastsequence is still needed
> ---
>
> Key: HAWQ-258
> URL: https://issues.apache.org/jira/browse/HAWQ-258
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Storage
>Reporter: Lirong Jian
>Assignee: Lirong Jian
> Fix For: 2.0.0
>
>
> Since the block directory for AO relations is not supported anymore, we 
> suspect that gp_fastsequence is not needed anymore. However, further 
> investigation is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-323) Cannot query when cluster includes more than 1 segment

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-323:
---
Fix Version/s: 2.0.0

> Cannot query when cluster includes more than 1 segment
> -
>
> Key: HAWQ-323
> URL: https://issues.apache.org/jira/browse/HAWQ-323
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core, Resource Manager
>Affects Versions: 2.0.0-beta-incubating
>Reporter: zharui
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> The version I use is 2.0.0-beta-RC2. I can query data normally when the 
> cluster has just 1 segment. Once the cluster has more than 1 segment online, I 
> cannot finish any query and am informed that "ERROR:  failed to acquire 
> resource from resource manager, 7 of 8 segments are unavailable 
> (pquery.c:788)".
> I have read the segment logs and the resource manager source code. I guess 
> this issue is caused by a communication failure between the segment instances 
> and the resource manager server. I can find logs showing one segment 
> connecting to the resource manager successfully, such as "AsyncComm framework 
> receives message 518 from FD5" and "Resource enforcer increases memory quota 
> to: total memory quota=65536 MB, delta memory quota = 65536 MB", but the 
> other online segments have none of these logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-321) Support plpython3u

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-321:
---
Assignee: (was: Lei Chang)

> Support plpython3u
> --
>
> Key: HAWQ-321
> URL: https://issues.apache.org/jira/browse/HAWQ-321
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Lei Chang
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-319) REST API for HAWQ

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-319:
---
Fix Version/s: backlog

> REST API for HAWQ
> -
>
> Key: HAWQ-319
> URL: https://issues.apache.org/jira/browse/HAWQ-319
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Lei Chang
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-321) Support plpython3u

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-321:
---
Fix Version/s: backlog

> Support plpython3u
> --
>
> Key: HAWQ-321
> URL: https://issues.apache.org/jira/browse/HAWQ-321
> Project: Apache HAWQ
>  Issue Type: New Feature
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-326) Support RPM build for HAWQ

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-326:
---
Fix Version/s: 2.1.0

> Support RPM build for HAWQ
> --
>
> Key: HAWQ-326
> URL: https://issues.apache.org/jira/browse/HAWQ-326
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Build
>Reporter: Lei Chang
> Fix For: 2.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-315) Invalid Byte Sequence Error when loading a large(100MB+) csv file

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-315:
---
Assignee: Ruilong Huo  (was: Lei Chang)

> Invalid Byte Sequence Error when loading a large(100MB+) csv file
> -
>
> Key: HAWQ-315
> URL: https://issues.apache.org/jira/browse/HAWQ-315
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Goden Yao
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> This bug occurs when copying or reading a large csv file. The reproducible 
> file we tried is 100MB+ so it cannot be uploaded to JIRA.
> *Repro steps*
> The CSV file needs to be over 100MB at least.
> The csv file should contain the following pattern:
> {code:actionscript}
> ..., "dummy data text1
> dummy data text2,
> dummy data text3,
> dummy data text4"
> {code}
> basically, a long text value broken across multiple lines but within quotes.
> This doesn't cause an issue in a smaller file, though.
> {code:SQL}
> DROP TABLE IF EXISTS <_test table name_>;
> CREATE TABLE <_test table name_>
> (
> <_define test table schema_>
>  ...
> );
> COPY <_test table name_> FROM '<_csv file path_>'
> DELIMITER ','
> NULL ''
> ESCAPE '"'
> CSV QUOTE '"'
> LOG ERRORS INTO <_error reject table name_>
> SEGMENT REJECT LIMIT 10 rows
> ;
> {code}
> *Errors*
> Error in first line with quoted data:
> {code}
> DEBUG5:  invalid byte sequence for encoding "UTF8": 0x00
> HINT:  This error can also happen if the byte sequence does not match the 
> encoding expected by the server, which is controlled by "client_encoding".
> CONTEXT:  COPY addresses_heap, line 604932
> {code}
> Error in the second line with quoted data: this is due to wrong formatting, as 
> the first half of the line within the quotes was mishandled.
> {code}
> DEBUG5:  missing data for column "sourceid"
> CONTEXT:  COPY addresses_heap, line 604933: "...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-315) Invalid Byte Sequence Error when loading a large(100MB+) csv file

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-315:
---
Fix Version/s: 2.0.0

> Invalid Byte Sequence Error when loading a large(100MB+) csv file
> -
>
> Key: HAWQ-315
> URL: https://issues.apache.org/jira/browse/HAWQ-315
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Goden Yao
>Assignee: Ruilong Huo
> Fix For: 2.0.0
>
>
> This bug occurs when copying or reading a large csv file. The reproducible 
> file we tried is 100MB+ so it cannot be uploaded to JIRA.
> *Repro steps*
> The CSV file needs to be over 100MB at least.
> The csv file should contain the following pattern:
> {code:actionscript}
> ..., "dummy data text1
> dummy data text2,
> dummy data text3,
> dummy data text4"
> {code}
> basically, a long text value broken across multiple lines but within quotes.
> This doesn't cause an issue in a smaller file, though.
> {code:SQL}
> DROP TABLE IF EXISTS <_test table name_>;
> CREATE TABLE <_test table name_>
> (
> <_define test table schema_>
>  ...
> );
> COPY <_test table name_> FROM '<_csv file path_>'
> DELIMITER ','
> NULL ''
> ESCAPE '"'
> CSV QUOTE '"'
> LOG ERRORS INTO <_error reject table name_>
> SEGMENT REJECT LIMIT 10 rows
> ;
> {code}
> *Errors*
> Error in first line with quoted data:
> {code}
> DEBUG5:  invalid byte sequence for encoding "UTF8": 0x00
> HINT:  This error can also happen if the byte sequence does not match the 
> encoding expected by the server, which is controlled by "client_encoding".
> CONTEXT:  COPY addresses_heap, line 604932
> {code}
> Error in the second line with quoted data: this is due to wrong formatting, as 
> the first half of the line within the quotes was mishandled.
> {code}
> DEBUG5:  missing data for column "sourceid"
> CONTEXT:  COPY addresses_heap, line 604933: "...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-178) Add JSON plugin support in code base

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-178:
---
Fix Version/s: backlog

> Add JSON plugin support in code base
> 
>
> Key: HAWQ-178
> URL: https://issues.apache.org/jira/browse/HAWQ-178
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: backlog
>
>
> JSON has been a popular format used in HDFS as well as in the community, 
> there has been a few JSON PXF plugins developed by the community and we'd 
> like to see it being incorporated into the code base as an optional package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-91) “Out of memory” error when using gpload to load data

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-91?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-91:
--
Fix Version/s: backlog

> “Out of memory” error when using gpload to load data
> --
>
> Key: HAWQ-91
> URL: https://issues.apache.org/jira/browse/HAWQ-91
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: dingyuanpu
>Assignee: Lei Chang
> Fix For: backlog
>
>
> I have some problems with HAWQ: my HAWQ version is 1.3 on HDP 2.2.6, which is 
> on 4 x86 servers (256G memory and 1T hard disk each).
> The details follow:
> I used the gpload tool to upload the store_sales.dat file (188G of data) from 
> TPC-DS, and the errors are:
> 2015-10-27 01:24:51|INFO|gpload session started 2015-10-27 01:24:51
> 2015-10-27 01:24:51|INFO|setting schema 'public' for table 'store_sales'
> 2015-10-27 01:24:52|INFO|started gpfdist -p 8081 -P 8082 -f 
> "tpc500g-data/store_sales_aa_aa_aa" -t 30
> 2015-10-27 01:30:25|ERROR|ERROR:  Out of memory  (seg0 node1.fd.h3c.com:4 
> pid=74456)
> DETAIL:  
> VM Protect failed to allocate 8388608 bytes, 7 MB available
> External table ext_gpload20151027_012451_543181, line N/A of 
> gpfdist://node2:8081/tpc500g-data/store_sales_aa_aa_aa: ""
> encountered while running INSERT INTO public."store_sales" 
> ("ss_sold_date_sk","ss_sold_time_sk","ss_item_sk","ss_customer_sk","ss_cdemo_sk","ss_hdemo_sk","ss_addr_sk","ss_store_sk","ss_promo_sk","ss_ticket_number","ss_quantity","ss_wholesale_cost","ss_list_price","ss_sales_price","ss_ext_discount_amt","ss_ext_sales_price","ss_ext_wholesale_cost","ss_ext_list_price","ss_ext_tax","ss_coupon_amt","ss_net_paid","ss_net_paid_inc_tax","ss_net_profit")
>  SELECT 
> "ss_sold_date_sk","ss_sold_time_sk","ss_item_sk","ss_customer_sk","ss_cdemo_sk","ss_hdemo_sk","ss_addr_sk","ss_store_sk","ss_promo_sk","ss_ticket_number","ss_quantity","ss_wholesale_cost","ss_list_price","ss_sales_price","ss_ext_discount_amt","ss_ext_sales_price","ss_ext_wholesale_cost","ss_ext_list_price","ss_ext_tax","ss_coupon_amt","ss_net_paid","ss_net_paid_inc_tax","ss_net_profit"
>  FROM ext_gpload20151027_012451_543181
> 2015-10-27 01:30:25|INFO|rows Inserted  = 0
> 2015-10-27 01:30:25|INFO|rows Updated   = 0
> 2015-10-27 01:30:25|INFO|data formatting errors = 0
> 2015-10-27 01:30:25|INFO|gpload failed
> I have used the following command to modify the parameters, but the errors 
> still exist:
> gpconfig -c gp_vmem_protect_limit -v 8192MB (I have also tried 
> 4096, 8192, 16384, 32768, 81920, 245760, 262144)
> gpstop -r
> Please help me solve the problem, thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-299) Extra ";" in udpSignalTimeoutWait

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-299:
---
Fix Version/s: 2.0.0

> Extra ";" in udpSignalTimeoutWait
> -
>
> Key: HAWQ-299
> URL: https://issues.apache.org/jira/browse/HAWQ-299
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Interconnect
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> In function udpSignalTimeoutWait, there is an extra ";" which should be 
> removed.
> {code}
>   if (udpSignalGet(sig));
>   ret = 0;
> {code}
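> A self-contained C illustration of why the stray semicolon matters (editor's 
> sketch; udpSignalGet is stubbed here, assuming it reports whether a signal 
> arrived):
> {code}
> #include <stdio.h>
> #include <stdbool.h>
> 
> /* Hypothetical stub: pretend no signal arrived. */
> static bool udpSignalGet(int sig) { (void) sig; return false; }
> 
> int main(void)
> {
>     int sig = 0;
>     int ret = -1;
> 
>     /* Buggy form: the ";" closes the if, so the assignment always runs. */
>     if (udpSignalGet(sig));
>         ret = 0;
>     printf("buggy: ret = %d\n", ret);   /* prints 0 despite no signal */
> 
>     /* Fixed form: drop the stray ";" so the assignment is conditional. */
>     ret = -1;
>     if (udpSignalGet(sig))
>         ret = 0;
>     printf("fixed: ret = %d\n", ret);   /* stays -1 */
>     return 0;
> }
> {code}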



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-256) Integrate Security with Apache Ranger

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-256:
---
Fix Version/s: backlog

> Integrate Security with Apache Ranger
> -
>
> Key: HAWQ-256
> URL: https://issues.apache.org/jira/browse/HAWQ-256
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Michael Andre Pearce (IG)
>Assignee: Lei Chang
> Fix For: backlog
>
>
> Integrate security with Apache Ranger for a unified Hadoop security solution. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-275) After killing QE of segment, the QE pool is not updated when dispatch

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-275:
---
Fix Version/s: 2.0.0

> After killing QE of segment, the QE pool is not updated when dispatch
> -
>
> Key: HAWQ-275
> URL: https://issues.apache.org/jira/browse/HAWQ-275
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Dispatcher
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> When we kill a QE on a segment, the segment restarts all of its child 
> processes and no longer has any QEs. But the master still believes the QEs are 
> cached, and it dispatches these non-existent QEs to handle queries. 
> The biggest problem: if we had 6 QEs before killing them and want to execute a 
> simple sql statement that needs only 2 QEs, the master checks and errors three 
> times, and only after that does it order the segment to start new QEs.
> {code}
> intern=# insert into b values (2 );
> ERROR:  Query Executor Error in seg4 localhost:4 pid=19024: server closed 
> the connection unexpectedly
> DETAIL:
>   This probably means the server terminated abnormally
>   before or while processing the request.
> intern=# insert into b values (2 );
> ERROR:  Query Executor Error in seg0 localhost:4 pid=19020: server closed 
> the connection unexpectedly
> DETAIL:
>   This probably means the server terminated abnormally
>   before or while processing the request.
> intern=# insert into b values (2 );
> ERROR:  Query Executor Error in seg2 localhost:4 pid=19022: server closed 
> the connection unexpectedly
> DETAIL:
>   This probably means the server terminated abnormally
>   before or while processing the request.
> intern=# insert into b values (2 );
> INSERT 0 1
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-264) Fix Coverity issues

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang resolved HAWQ-264.

   Resolution: Fixed
Fix Version/s: 2.0.0

> Fix Coverity issues
> ---
>
> Key: HAWQ-264
> URL: https://issues.apache.org/jira/browse/HAWQ-264
> Project: Apache HAWQ
>  Issue Type: Task
>Reporter: Entong Shen
>Assignee: Entong Shen
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-253) Separate pxf-hdfs and pxf-hive packages from pxf-service

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-253:
---
Fix Version/s: backlog

> Separate pxf-hdfs and pxf-hive packages from pxf-service
> 
>
> Key: HAWQ-253
> URL: https://issues.apache.org/jira/browse/HAWQ-253
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Noa Horn
>Assignee: Goden Yao
> Fix For: backlog
>
>
> The PXF plugins should depend only on the pxf-api package.
> pxf-service is supposed to be an internal package, not exposed to the plugins.
> Currently both pxf-hdfs and pxf-hive depend on pxf-service, which should be 
> fixed.
> {noformat}
> $ grep -rI "pxf.service" pxf-hdfs/src/main/.
> pxf-hdfs/src/main/./java/org/apache/hawq/pxf/plugins/hdfs/HdfsAnalyzer.java:import
>  org.apache.hawq.pxf.service.ReadBridge;
> pxf-hdfs/src/main/./java/org/apache/hawq/pxf/plugins/hdfs/utilities/HdfsUtilities.java:import
>  org.apache.hawq.pxf.service.utilities.Utilities;
> pxf-hdfs/src/main/./java/org/apache/hawq/pxf/plugins/hdfs/WritableResolver.java:import
>  org.apache.hawq.pxf.service.utilities.Utilities;
> $ grep -rI "pxf.service" pxf-hive/src/main/.
> pxf-hive/src/main/./java/org/apache/hawq/pxf/plugins/hive/HiveColumnarSerdeResolver.java:import
>  org.apache.hawq.pxf.service.utilities.Utilities;
> pxf-hive/src/main/./java/org/apache/hawq/pxf/plugins/hive/HiveResolver.java:import
>  org.apache.hawq.pxf.service.utilities.Utilities;
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-335) Cannot query parquet hive table through PXF

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-335:
---
Fix Version/s: backlog

> Cannot query parquet hive table through PXF
> ---
>
> Key: HAWQ-335
> URL: https://issues.apache.org/jira/browse/HAWQ-335
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Affects Versions: 2.0.0-beta-incubating
>Reporter: zharui
>Assignee: Goden Yao
> Fix For: backlog
>
>
> I created an external table in hawq for a table that exists in hive in 
> parquet format, but I cannot query this table in hawq. The segment processes 
> are idle and nothing happens.
> The clause creating the external hive parquet table is below:
> {code}
> create external table zc_parquet800_partitioned 
> (
> start_time bigint,
> cdr_id int,
> "offset" int,
> calling varchar(255),
> imsi varchar(255),
> user_ip int,
> tmsi int,
> p_tmsi int,
> imei varchar(255),
> mcc int,
> mnc int,
> lac int,
> rac int,
> cell_id int,
> bsc_ip int,
> opc int,
> dpc int,
> sgsn_sg_ip int,
> ggsn_sg_ip int,
> sgsn_data_ip int,
> ggsn_data_ip int,
> apn varchar(255),
> rat int,
> service_type smallint,
> service_group smallint,
> up_packets int,
> down_packets int,
> up_bytes int,
> down_bytes int,
> up_speed real,
> down_speed real,
> trans_time int,
> first_time timestamp,
> end_time timestamp,
> is_end int,
> user_port int,
> proto_type int,
> dest_ip int,
> dest_port int,
> paging_count smallint,
> assignment_count smallint,
> joiner_id varchar(255),
> operation smallint,
> country smallint,
> loc_prov smallint,
> loc_city smallint,
> roam_prov smallint,
> roam_city smallint,
> sgsn varchar(255),
> bsc_rnc varchar(255),
> terminal_fac smallint,
> terminal_type int,
> terminal_class smallint,
> roaming_type smallint,
> host_operator smallint,
> net_type smallint, 
> time int, 
> calling_hash int) 
> LOCATION ('pxf://ws01.mzhen.cn:51200/zc_parquet800_partitioned?PROFILE=Hive') 
> FORMAT 'custom' (formatter='pxfwritable_import');
> {code}
> The catalina logs are below:
> {code}
> Jan 13, 2016 11:26:29 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:26:29 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordReader initialized will read a total of 1332450 records.
> Jan 13, 2016 11:26:29 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
> row 0. reading next block
> Jan 13, 2016 11:26:30 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> block read in memory in 398 ms. row count = 1332450
> Jan 13, 2016 11:26:58 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:26:58 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordReader initialized will read a total of 1460760 records.
> Jan 13, 2016 11:26:58 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
> row 0. reading next block
> Jan 13, 2016 11:26:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> block read in memory in 441 ms. row count = 1460760
> Jan 13, 2016 11:27:34 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:27:34 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordReader initialized will read a total of 1396605 records.
> Jan 13, 2016 11:27:34 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
> row 0. reading next block
> Jan 13, 2016 11:27:34 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> block read in memory in 367 ms. row count = 1396605
> Jan 13, 2016 11:28:06 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:28:06 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordReader initialized will read a total of 1337385 records.
> Jan 13, 2016 11:28:06 AM INFO: parquet.hadoop.InternalParquetRecordReader: at 
> row 0. reading next block
> Jan 13, 2016 11:28:06 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> block read in memory in 348 ms. row count = 1337385
> Jan 13, 2016 11:28:32 AM WARNING: parquet.hadoop.ParquetRecordReader: Can not 
> initialize counter due to context is not a instance of 
> TaskInputOutputContext, but is 
> org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
> Jan 13, 2016 11:28:32 AM INFO: parquet.hadoop.InternalParquetRecordReader: 
> RecordRea

[jira] [Updated] (HAWQ-206) Use the DELIMITER in FORMAT for External Table DDL Creation

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-206:
---
Fix Version/s: backlog

> Use the DELIMITER in FORMAT for External Table DDL Creation
> ---
>
> Key: HAWQ-206
> URL: https://issues.apache.org/jira/browse/HAWQ-206
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Goden Yao
>Assignee: Goden Yao
> Fix For: backlog
>
>
> As a HAWQ user, I should be able to:
> avoid typing the delimiter twice in DDL when using the HiveRC/Text profiles 
> via PXF
> *Background*
> Currently user has to specify the same delimiter twice in DDL(in the case of 
> HiveRC/Text profiles):
> {code}
> ...
> location(E'pxf://...&delimiter=\x01') FORMAT TEXT (delimiter = E'\x01');
> {code}
> It would be really helpful if we could reuse the delimiter provided in the 
> FORMAT TEXT clause; this would reduce error-prone DDL grammar and duplication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-337) Support Label Based Scheduling in Libyarn and RM

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-337:
---
Fix Version/s: backlog

> Support Label Based Scheduling in Libyarn and RM
> 
>
> Key: HAWQ-337
> URL: https://issues.apache.org/jira/browse/HAWQ-337
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: libyarn, Resource Manager
>Reporter: Lin Wen
>Assignee: Lei Chang
> Fix For: backlog
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-214) Built-in functions for gp_partition will cause cores.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-214:
---
Fix Version/s: 2.0.0

> Built-in functions for gp_partition will cause cores.
> -
>
> Key: HAWQ-214
> URL: https://issues.apache.org/jira/browse/HAWQ-214
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Unknown
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> There are four built-in functions for gp_partition, and all of them will 
> cause a core dump.
> gp_partition_expansion
> gp_partition_inverse
> gp_partition_propagation
> gp_partition_selection
> {code}
> create table pt_table(a int, b int) distributed by (a) partition by range(b) 
> (default partition others,start(1) end(100) every(10));
> {code}
> {code}
> e=# select pg_catalog.gp_partition_selection(16550,1);
> FATAL:  Unexpected internal error (gp_partition_functions.c:197)
> DETAIL:  FailedAssertion("!(dynamicTableScanInfo != ((void *)0))", File: 
> "gp_partition_functions.c", Line: 197)
> HINT:  Process 22247 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released until then.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-215) View gp_distributed_log and gp_distributed_xacts need to be removed if we don't want to support it anymore.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-215:
---
Assignee: Ming LI  (was: Lei Chang)

> View gp_distributed_log and gp_distributed_xacts need to be removed if we 
> don't want to support it anymore.
> ---
>
> Key: HAWQ-215
> URL: https://issues.apache.org/jira/browse/HAWQ-215
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> The view gp_distributed_log depends on the built-in function 
> gp_distributed_log(), and gp_distributed_log() just returns null, so the view 
> can't work at all.
> The same goes for the view gp_distributed_xacts.
> {code}
> e=# select * from gp_distributed_log;
> ERROR:  function returning set of rows cannot return null value
> e=# select * from gp_distributed_xacts;
> ERROR:  function returning set of rows cannot return null value
> {code}
> The function gp_distributed_log is defined at gp_distributed_log.c:27.
> The function gp_distributed_xacts is defined at cdbdistributedxacts.c:44.
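> A minimal sketch of why this errors, and of an empty-result stub, using the 
> standard PostgreSQL set-returning-function macros (editor's illustration; it 
> assumes the current stub simply returns NULL):
> {code}
> #include "postgres.h"
> #include "funcapi.h"
> 
> /* Shape of the current stub: declared RETURNS SETOF but returning NULL,
>  * which yields "function returning set of rows cannot return null value". */
> Datum
> gp_distributed_log(PG_FUNCTION_ARGS)
> {
>     PG_RETURN_NULL();
> }
> 
> /* A replacement stub that would at least let the view return zero rows
>  * (hypothetical name): */
> Datum
> gp_distributed_log_empty(PG_FUNCTION_ARGS)
> {
>     FuncCallContext *funcctx;
> 
>     if (SRF_IS_FIRSTCALL())
>         funcctx = SRF_FIRSTCALL_INIT();
>     funcctx = SRF_PERCALL_SETUP();
>     SRF_RETURN_DONE(funcctx);   /* report an empty result set */
> }
> {code}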



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-215) View gp_distributed_log and gp_distributed_xacts need to be removed if we don't want to support it anymore.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-215:
---
Fix Version/s: 2.0.0

> View gp_distributed_log and gp_distributed_xacts need to be removed if we 
> don't want to support it anymore.
> ---
>
> Key: HAWQ-215
> URL: https://issues.apache.org/jira/browse/HAWQ-215
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> The view gp_distributed_log depends on the built-in function 
> gp_distributed_log(), and gp_distributed_log() just returns null, so the view 
> can't work at all.
> The same goes for the view gp_distributed_xacts.
> {code}
> e=# select * from gp_distributed_log;
> ERROR:  function returning set of rows cannot return null value
> e=# select * from gp_distributed_xacts;
> ERROR:  function returning set of rows cannot return null value
> {code}
> The function gp_distributed_log is defined at gp_distributed_log.c:27.
> The function gp_distributed_xacts is defined at cdbdistributedxacts.c:44.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-214) Built-in functions for gp_partition will cause cores.

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-214:
---
Assignee: Ming LI  (was: Lei Chang)

> Built-in functions for gp_partition will cause cores.
> -
>
> Key: HAWQ-214
> URL: https://issues.apache.org/jira/browse/HAWQ-214
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Unknown
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> There are four built-in functions for gp_partition, and all of them will 
> cause a core dump.
> gp_partition_expansion
> gp_partition_inverse
> gp_partition_propagation
> gp_partition_selection
> {code}
> create table pt_table(a int, b int) distributed by (a) partition by range(b) 
> (default partition others,start(1) end(100) every(10));
> {code}
> {code}
> e=# select pg_catalog.gp_partition_selection(16550,1);
> FATAL:  Unexpected internal error (gp_partition_functions.c:197)
> DETAIL:  FailedAssertion("!(dynamicTableScanInfo != ((void *)0))", File: 
> "gp_partition_functions.c", Line: 197)
> HINT:  Process 22247 will wait for gp_debug_linger=120 seconds before 
> termination.
> Note that its locks and other resources will not be released until then.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.
> The connection to the server was lost. Attempting reset: Succeeded.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-99) OpenSSL 0.9.x to 1.x upgrade

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-99?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-99:
--
Fix Version/s: backlog

> OpenSSL 0.9.x to 1.x upgrade
> 
>
> Key: HAWQ-99
> URL: https://issues.apache.org/jira/browse/HAWQ-99
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: backlog
>
>
> The 0.9.x product line will be deprecated by the end of 2015.
> We need to move to the new 1.x product line.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-282) Refine reject limit checking and error table handling for external tables and COPY

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-282:
---
Fix Version/s: backlog

> Refine reject limit checking and error table handling for external tables
> and COPY
> ---
>
> Key: HAWQ-282
> URL: https://issues.apache.org/jira/browse/HAWQ-282
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: External Tables, Storage
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: backlog
>
>
> Reject limit checking and error-table handling for external tables and COPY
> are implemented with macros. It would be better to refine them into inline
> functions, or plain functions, to improve readability; a sketch follows the
> macro lists below.
> The related macros include:
> 1. src/backend/access/external/fileam.c
> {noformat}
> EXT_RESET_LINEBUF
> FILEAM_HANDLE_ERROR
> CSV_IS_UNPARSABLE
> FILEAM_IF_REJECT_LIMIT_REACHED_ABORT
> {noformat}
> 2. src/backend/commands/copy.c
> {noformat}
> RESET_LINEBUF
> COPY_HANDLE_ERROR
> QD_GOTO_NEXT_ROW
> QE_GOTO_NEXT_ROW
> CSV_IS_UNPARSABLE
> IF_REJECT_LIMIT_REACHED_ABORT
> {noformat}
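> As an illustration of the proposed refactor, here is a hedged C sketch; the
> simplified bodies are hypothetical, not the real macro definitions. A static
> inline function gains argument type checking, single evaluation of its
> arguments, and a debuggable stack frame, while the generated code stays the
> same:
> {code}
> /* Backend context assumed: postgres.h provides ereport/errcode/int64. */
>
> /* Hypothetical simplified macro, for illustration only */
> #define IF_REJECT_LIMIT_REACHED_ABORT(rejected, limit) \
>     do { \
>         if ((rejected) >= (limit)) \
>             ereport(ERROR, \
>                     (errcode(ERRCODE_DATA_EXCEPTION), \
>                      errmsg("segment reject limit reached"))); \
>     } while (0)
>
> /* The same check as a static inline function */
> static inline void
> reject_limit_reached_abort(int64 rejected, int64 limit)
> {
>     if (rejected >= limit)
>         ereport(ERROR,
>                 (errcode(ERRCODE_DATA_EXCEPTION),
>                  errmsg("segment reject limit reached")));
> }
> {code}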



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-165) PXF loggers should all be private static final

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-165:
---
Fix Version/s: 2.0.0

> PXF loggers should all be private static final
> --
>
> Key: HAWQ-165
> URL: https://issues.apache.org/jira/browse/HAWQ-165
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Noa Horn
>Assignee: Noa Horn
> Fix For: 2.0.0
>
>
> PXF uses org.apache.commons.logging.Log as its logging mechanism.
> In some classes the logger is initialized as a private variable, in others as
> a static one. We should consolidate all of the loggers to be private static final.
> e.g. 
> {noformat}
> private static final Log Log = LogFactory.getLog(ReadBridge.class);
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-182) Collect advanced statistics for HBase plugin

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-182:
---
Fix Version/s: backlog

> Collect advanced statistics for HBase plugin
> 
>
> Key: HAWQ-182
> URL: https://issues.apache.org/jira/browse/HAWQ-182
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Noa Horn
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Implement getFragmentsStats in HBase's fragmenter (HBaseDataFragmenter).
> As a result, when running ANALYZE on a PXF table with the HBase profile,
> advanced statistics will be collected for that table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-240) Set Progress to 50% When Returning Resources

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-240:
---
Assignee: Lin Wen  (was: Lei Chang)

> Set Progress to 50% When Returning Resources
> 
>
> Key: HAWQ-240
> URL: https://issues.apache.org/jira/browse/HAWQ-240
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libyarn
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> In the current implementation, when resources are returned to Hadoop YARN,
> the progress of HAWQ becomes 100%.
> HAWQ on YARN is an unmanaged AM and a long-running application, so its
> progress should stay at 50% by design.
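> A minimal sketch of the intended behavior, with hypothetical names (the real
> change lives in the libyarn resource broker): the progress value sent in each
> allocate heartbeat should stay constant for a long-running unmanaged AM
> rather than jumping to 1.0 when containers are returned:
> {code}
> /*
>  * Hypothetical helper, for illustration: a long-running unmanaged AM
>  * reports constant mid-range progress; it must not report 1.0 just
>  * because containers were returned to YARN.
>  */
> static float
> hawq_yarn_progress(void)
> {
>     return 0.5f;    /* 50% by design for a long-running service */
> }
> {code}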



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-240) Set Progress to 50% When Returning Resources

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-240:
---
Fix Version/s: (was: backlog)
   2.0.0

> Set Progress to 50% When Returning Resources
> 
>
> Key: HAWQ-240
> URL: https://issues.apache.org/jira/browse/HAWQ-240
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libyarn
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.0.0
>
>
> In the current implementation, when resources are returned to Hadoop YARN,
> the progress of HAWQ becomes 100%.
> HAWQ on YARN is an unmanaged AM and a long-running application, so its
> progress should stay at 50% by design.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-240) Set Progress to 50% When Returning Resources

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-240:
---
Fix Version/s: backlog

> Set Progress to 50% When Returning Resources
> 
>
> Key: HAWQ-240
> URL: https://issues.apache.org/jira/browse/HAWQ-240
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: libyarn
>Reporter: Lin Wen
>Assignee: Lei Chang
> Fix For: 2.0.0
>
>
> In the current implementation, when resources are returned to Hadoop YARN,
> the progress of HAWQ becomes 100%.
> HAWQ on YARN is an unmanaged AM and a long-running application, so its
> progress should stay at 50% by design.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-231) Altering a table by dropping all of its columns leads to some interesting problems

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-231:
---
Fix Version/s: backlog

> Altering a table by dropping all of its columns leads to some interesting problems
> 
>
> Key: HAWQ-231
> URL: https://issues.apache.org/jira/browse/HAWQ-231
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Storage
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: backlog
>
>
> This is a design-behavior question: when we drop all of a table's columns,
> should the table be truncated?
> Otherwise the count of invisible rows is ambiguous:
> you cannot see any data, yet the table reports 1000 rows.
> From the storage and design point of view this is fine,
> but from the user's point of view it may be puzzling.
> {code}
> intern=# create table alterall (i int, j int);
> CREATE TABLE
> intern=# insert into alterall VALUES 
> (generate_series(1,1000),generate_series(1,2));
> INSERT 0 1000
> intern=# alter table alterall drop COLUMN i;
> ALTER TABLE
> intern=# alter TABLE alterall drop COLUMN j;
> ALTER TABLE
> intern=# select * from alterall ;
> --
> (1000 rows)
> intern=# alter TABLE alterall add column k int default 3;
> ALTER TABLE
> intern=# select * from alterall;
>  k
> ---
>  3
>  3
>  3
>  3
>  3
>  3
>  3
> ...
> (1000 rows)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-235) HAWQ init reports error messages on centos7

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-235:
---
Fix Version/s: 2.1.0

> HAWQ init reports error messages on centos7
> -
>
> Key: HAWQ-235
> URL: https://issues.apache.org/jira/browse/HAWQ-235
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Zhanwei Wang
>Assignee: Radar Lei
> Fix For: 2.1.0
>
>
> {code}
> [gpadmin@centos7-namenode hawq-devel]$ hawq init cluster
> 20151209:03:02:07:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Prepare 
> to do 'hawq init'
> 20151209:03:02:07:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-You can 
> check log in /home/gpadmin/hawqAdminLogs/hawq_init_20151209.log
> 20151209:03:02:07:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Init hawq 
> with args: ['init', 'cluster']
> Continue with HAWQ init Yy|Nn (default=N):
> > y
> 20151209:03:02:08:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Check if 
> hdfs path is available
> 20151209:03:02:08:000292 
> hawq_init:centos7-namenode:gpadmin-[WARNING]:-WARNING:'hdfs://centos7-namenode:8020/hawq_default'
>  does not exist, create it ...
> 20151209:03:02:08:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-3 segment 
> hosts defined
> 20151209:03:02:08:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Set 
> default_segment_num as: 24
> The authenticity of host 'centos7-datanode1 (172.17.0.85)' can't be 
> established.
> ECDSA key fingerprint is 51:84:fc:86:2c:d3:30:0b:06:ac:49:f4:a8:5d:e1:bd.
> Are you sure you want to continue connecting (yes/no)? yes
> The authenticity of host 'centos7-datanode2 (172.17.0.86)' can't be 
> established.
> ECDSA key fingerprint is 51:84:fc:86:2c:d3:30:0b:06:ac:49:f4:a8:5d:e1:bd.
> Are you sure you want to continue connecting (yes/no)? yes
> The authenticity of host 'centos7-datanode3 (172.17.0.87)' can't be 
> established.
> ECDSA key fingerprint is 51:84:fc:86:2c:d3:30:0b:06:ac:49:f4:a8:5d:e1:bd.
> Are you sure you want to continue connecting (yes/no)? yes
> The authenticity of host 'centos7-namenode (172.17.0.84)' can't be 
> established.
> ECDSA key fingerprint is 51:84:fc:86:2c:d3:30:0b:06:ac:49:f4:a8:5d:e1:bd.
> Are you sure you want to continue connecting (yes/no)? yes
> 20151209:03:02:15:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Start to 
> init master node: 'centos7-namenode'
> 20151209:03:02:23:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Master 
> init successfully
> 20151209:03:02:23:000292 hawq_init:centos7-namenode:gpadmin-[INFO]:-Init 
> segments in list: ['centos7-datanode1', 'centos7-datanode2', 
> 'centos7-datanode3']
> .20151209:03:02:32:000292 
> hawq_init:centos7-namenode:gpadmin-[INFO]:-/data/hawq-devel/bin/lib/hawq_bash_functions.sh:
>  line 59: return: Problem in hawq_bash_functions, command 'ifconfig' not 
> found in COMMAND path. You will need to edit the script named 
> hawq_bash_functions.sh to properly locate the needed commands 
> for your platform.: numeric argument required
> /data/hawq-devel/bin/lib/hawq_bash_functions.sh: line 59: return: Problem in 
> hawq_bash_functions, command 'netstat' not found in COMMAND path. 
> You will need to edit the script named hawq_bash_functions.sh to properly 
> locate the needed commands for your platform.: numeric 
> argument required
> Host key verification failed.
> /data/hawq-devel/bin/lib/hawqinit.sh: line 72: ifconfig: command not found
> 20151209:03:02:32:000292 
> hawq_init:centos7-namenode:gpadmin-[INFO]:-/data/hawq-devel/bin/lib/hawq_bash_functions.sh:
>  line 59: return: Problem in hawq_bash_functions, command 'ifconfig' not 
> found in COMMAND path. You will need to edit the script named 
> hawq_bash_functions.sh to properly locate the needed commands 
> for your platform.: numeric argument required
> /data/hawq-devel/bin/lib/hawq_bash_functions.sh: line 59: return: Problem in 
> hawq_bash_functions, command 'netstat' not found in COMMAND path. 
> You will need to edit the script named hawq_bash_functions.sh to properly 
> locate the needed commands for your platform.: numeric 
> argument required
> Host key verification failed.
> /data/hawq-devel/bin/lib/hawqinit.sh: line 72: ifconfig: command not found
> 20151209:03:02:32:000292 
> hawq_init:centos7-namenode:gpadmin-[INFO]:-/data/hawq-devel/bin/lib/hawq_bash_functions.sh:
>  line 59: return: Problem in hawq_bash_functions, command 'ifconfig' not 
> found in COMMAND path. You will need to edit the script named 
> hawq_bash_functions.sh to properly locate the needed commands 
> for your platform.: numeric argument required
> /data/hawq-devel/bin/lib/hawq_bash_functions.sh: line

[jira] [Updated] (HAWQ-181) Collect advanced statistics for Hive plugins

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-181:
---
Fix Version/s: backlog

> Collect advanced statistics for Hive plugins
> 
>
> Key: HAWQ-181
> URL: https://issues.apache.org/jira/browse/HAWQ-181
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Noa Horn
>Assignee: Goden Yao
> Fix For: backlog
>
>
> Implement getFragmentsStats in Hive's fragmenters (HiveDataFragmenter and
> HiveInputFormatFragmenter).
> As a result, when running ANALYZE on a PXF table with the Hive profile,
> advanced statistics will be collected for that table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-176) REORGANIZE parameter is useless when changing distribution policy from hash to random

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-176:
---
Fix Version/s: backlog

> REORGANIZE parameter is useless when changing distribution policy from hash
> to random
> ---
>
> Key: HAWQ-176
> URL: https://issues.apache.org/jira/browse/HAWQ-176
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Storage
>Reporter: Dong Li
>Assignee: Lei Chang
> Fix For: backlog
>
>
> When changing the distribution policy from hash to random with REORGANIZE=true,
> the data distribution is not reorganized.
> Run the following commands:
> {code}
> set default_segment_num=2;
> create table testreorg( i int , j int ,q int) distributed by (q);
> insert into testreorg VALUES (1,1,1);
> insert into testreorg VALUES (1,2,1);
> insert into testreorg VALUES (2,3,1);
> insert into testreorg VALUES (2,4,1);
> insert into testreorg VALUES (2,5,1);
> {code}
> {code}
> gpadmin=# select relfilenode from pg_class where relname='testreorg';
>  relfilenode
> -------------
>        16840
> (1 row)
>
> gpadmin=# select * from pg_aoseg.pg_aoseg_16840;
>  segno | eof | tupcount | varblockcount | eofuncompressed | content
> -------+-----+----------+---------------+-----------------+---------
>      2 |   0 |        0 |             0 |               0 |      -1
>      1 | 160 |        5 |             5 |             160 |      -1
> (2 rows)
> {code}
> {code}
> alter TABLE testreorg set with (REORGANIZE=true) DISTRIBUTED randomly;
> {code}
> {code}
> gpadmin=# select relfilenode from pg_class where relname='testreorg';
>  relfilenode
> -------------
>        16845
> (1 row)
>
> gpadmin=# select * from pg_aoseg.pg_aoseg_16845;
>  segno | eof | tupcount | varblockcount | eofuncompressed | content
> -------+-----+----------+---------------+-----------------+---------
>      2 |   0 |        0 |             0 |               0 |      -1
>      1 | 120 |        5 |             1 |             120 |      -1
> (2 rows)
> {code}
> The aoseg file changed, but the data distribution has not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-150) External tables can be designated for both READ and WRITE

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-150:
---
Assignee: Lei Chang  (was: Goden Yao)

> External tables can be designated for both READ and WRITE
> -
>
> Key: HAWQ-150
> URL: https://issues.apache.org/jira/browse/HAWQ-150
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables
>Reporter: C.J. Jameson
>Assignee: Lei Chang
> Fix For: 3.0.0
>
>
> Currently, external tables are either read-only or write-only when they are 
> created. We could support an external table with the capability for both 
> reads and writes.
> As pointed out by hawqst...@163.com



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-144) Build HAWQ on MacOS

2016-01-23 Thread Lei Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Chang updated HAWQ-144:
---
Fix Version/s: 2.1.0

> Build HAWQ on MacOS
> ---
>
> Key: HAWQ-144
> URL: https://issues.apache.org/jira/browse/HAWQ-144
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Build
>Reporter: Lei Chang
>Assignee: Lei Chang
> Fix For: 2.1.0
>
>
> Currently, the only tested build platform for HAWQ is RedHat 6.x. It would be
> very nice if it could also build on Mac with clang; that would make new
> contributions much easier.
> Instructions for building HAWQ on Linux are at: 
> https://github.com/apache/incubator-hawq/blob/master/BUILD_INSTRUCTIONS.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

