[jira] [Commented] (HAWQ-498) Update property value in gpcheck.cnf

2016-03-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184529#comment-15184529
 ] 

ASF GitHub Bot commented on HAWQ-498:
-

GitHub user stanlyxiang opened a pull request:

https://github.com/apache/incubator-hawq/pull/418

HAWQ-498.Update property value in gpcheck.cnf



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/stanlyxiang/incubator-hawq hawq-498

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/418.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #418


commit e34a043a4cbcc82f02071f5eb4e4ebdb1bdf164d
Author: xsheng 
Date:   2016-03-08T06:30:42Z

HAWQ-498.Update property value in gpcheck.cnf




> Update property value in gpcheck.cnf
> 
>
> Key: HAWQ-498
> URL: https://issues.apache.org/jira/browse/HAWQ-498
> Project: Apache HAWQ
>  Issue Type: Improvement
>Reporter: Xiang Sheng
>Assignee: Lei Chang
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-498) Update property value in gpcheck.cnf

2016-03-07 Thread Xiang Sheng (JIRA)
Xiang Sheng created HAWQ-498:


 Summary: Update property value in gpcheck.cnf
 Key: HAWQ-498
 URL: https://issues.apache.org/jira/browse/HAWQ-498
 Project: Apache HAWQ
  Issue Type: Improvement
Reporter: Xiang Sheng
Assignee: Lei Chang






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-486) gpcheck can't find namenode with Ambari install PHD

2016-03-07 Thread Xiang Sheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Sheng closed HAWQ-486.


> gpcheck can't find namenode with Ambari install PHD
> ---
>
> Key: HAWQ-486
> URL: https://issues.apache.org/jira/browse/HAWQ-486
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Xiang Sheng
>Assignee: Xiang Sheng
>
> gpcheck can't find the namenode with an Ambari-installed PHD.
> This seems to be because with an Ambari install there is no 'fs.default.name'
> property in core-site.xml; it uses the 'fs.defaultFS' property instead. We
> should recognize both. Note that 'fs.default.name' is deprecated in favor of
> 'fs.defaultFS':
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://test5:8020</value>
> </property>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://test5:8020</value>
>   <final>true</final>
> </property>
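For illustration, a minimal sketch of the fallback lookup gpcheck needs, assuming core-site.xml is parsed with Python's xml.etree (the function name is hypothetical, not the actual gpcheck code):

```python
import xml.etree.ElementTree as ET

def get_namenode_uri(core_site_path):
    # Return the namenode URI from core-site.xml, preferring fs.defaultFS and
    # falling back to the deprecated fs.default.name.
    props = {}
    for prop in ET.parse(core_site_path).getroot().iter('property'):
        name = prop.findtext('name')
        if name:
            props[name] = prop.findtext('value')
    return props.get('fs.defaultFS') or props.get('fs.default.name')
```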



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-486) gpcheck can't find namenode with Ambari install PHD

2016-03-07 Thread Xiang Sheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Sheng resolved HAWQ-486.
--
Resolution: Fixed

> gpcheck can't find namenode with Ambari install PHD
> ---
>
> Key: HAWQ-486
> URL: https://issues.apache.org/jira/browse/HAWQ-486
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Xiang Sheng
>Assignee: Xiang Sheng
>
> gpcheck can't find the namenode with an Ambari-installed PHD.
> This seems to be because with an Ambari install there is no 'fs.default.name'
> property in core-site.xml; it uses the 'fs.defaultFS' property instead. We
> should recognize both. Note that 'fs.default.name' is deprecated in favor of
> 'fs.defaultFS':
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://test5:8020</value>
> </property>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://test5:8020</value>
>   <final>true</final>
> </property>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-486) gpcheck can't find namenode with Ambari install PHD

2016-03-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184520#comment-15184520
 ] 

ASF GitHub Bot commented on HAWQ-486:
-

Github user stanlyxiang closed the pull request at:

https://github.com/apache/incubator-hawq/pull/407


> gpcheck can't find namenode with Ambari install PHD
> ---
>
> Key: HAWQ-486
> URL: https://issues.apache.org/jira/browse/HAWQ-486
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Xiang Sheng
>Assignee: Xiang Sheng
>
> gpcheck can't find the namenode with an Ambari-installed PHD.
> This seems to be because with an Ambari install there is no 'fs.default.name'
> property in core-site.xml; it uses the 'fs.defaultFS' property instead. We
> should recognize both. Note that 'fs.default.name' is deprecated in favor of
> 'fs.defaultFS':
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://test5:8020</value>
> </property>
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://test5:8020</value>
>   <final>true</final>
> </property>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HAWQ-495) Core in cdbexplain_depositStatsToNode

2016-03-07 Thread Ivan Weng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Weng resolved HAWQ-495.

   Resolution: Fixed
Fix Version/s: 2.0.0-beta-incubating

>  Core in cdbexplain_depositStatsToNode 
> ---
>
> Key: HAWQ-495
> URL: https://issues.apache.org/jira/browse/HAWQ-495
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Ivan Weng
>Assignee: Lei Chang
> Fix For: 2.0.0-beta-incubating
>
>
> ```
> (gdb) bt
> #0  0x003c1ec0f5db in raise () from /lib64/libpthread.so.0
> #1  0x0086cf22 in SafeHandlerForSegvBusIll (processName=<value optimized out>, postgres_signal_arg=11) at elog.c:4515
> #2  <signal handler called>
> #3  0x0092d5d8 in cdbexplain_depositStatsToNode (planstate=<value optimized out>, ctx=<value optimized out>) at cdbexplain.c:1150
> #4  0x0092dd22 in cdbexplain_recvStatWalker (planstate=0x3376020, context=0x7fffc8ca61a8) at cdbexplain.c:666
> #5  0x0065de3f in planstate_walk_node_extended (planstate=0x3316080, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=53967800) at execProcnode.c:2057
> #6  planstate_walk_kids (planstate=0x3316080, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=53967800) at execProcnode.c:2157
> #7  0x0065e269 in planstate_walk_node_extended (planstate=0x33103a0, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2059
> #8  planstate_walk_kids (planstate=0x33103a0, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2157
> #9  0x0065e202 in planstate_walk_node_extended (planstate=0x332c4b8, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2059
> #10 planstate_walk_array (planstate=0x332c4b8, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2081
> #11 planstate_walk_kids (planstate=0x332c4b8, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2128
> #12 0x00929985 in cdbexplain_recvExecStats (planstate=<value optimized out>, dispatchResults=0x7f30540008c8, sliceIndex=<value optimized out>, showstatctx=0x32b5b00, segmentNum=<value optimized out>) at cdbexplain.c:630
> #13 0x0092dd6a in cdbexplain_recvStatWalker (planstate=0x332ae98, context=<value optimized out>) at cdbexplain.c:675
> #14 0x0065e356 in planstate_walk_node_extended (planstate=0x1, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7d10) at execProcnode.c:2057
> #15 planstate_walk_node (planstate=0x1, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7d10) at execProcnode.c:2039
> #16 0x00929985 in cdbexplain_recvExecStats (planstate=<value optimized out>, dispatchResults=0x3044f68, sliceIndex=<value optimized out>, showstatctx=0x32b5b00, segmentNum=<value optimized out>) at cdbexplain.c:630
> #17 0x005f92e8 in ExplainOnePlan_internal (plannedstmt=<value optimized out>, stmt=<value optimized out>, queryString=<value optimized out>, params=<value optimized out>, tstate=<value optimized out>, es=<value optimized out>, isSequential=0 '\000') at explain.c:588
> #18 0x005f9b7d in ExplainOnePlan (plannedstmt=<value optimized out>, stmt=<value optimized out>, queryString=<value optimized out>, params=<value optimized out>, tstate=<value optimized out>) at explain.c:442
> #19 0x005f9c9a in ExplainOneQuery (query=<value optimized out>, stmt=0x2ffe588, queryString=0x2ffcb80 "explain analyze select * from(with cte as (select * from pt1) select * from cte r1, cte r2 where r1.c1=5 and r1.c1=r2.c1 )i ;", params=<value optimized out>, tstate=<value optimized out>) at explain.c:357
> #20 0x005f9dc5 in ExplainQuery (stmt=0x2ffe588, queryString=0x2ffcb80 "explain analyze select * from(with cte as (select * from pt1) select * from cte r1, cte r2 where r1.c1=5 and r1.c1=r2.c1 )i ;", params=0x0, dest=0x308d8e0) at explain.c:200
> #21 0x007b7cda in ProcessUtility (parsetree=0x2ffe588, queryString=<value optimized out>, params=0x0, isTopLevel=1 '\001', dest=0x308d8e0, completionTag=0x7fffc8ca8310 "") at utility.c:1475
> #22 0x007b303a in PortalRunUtility (portal=0x3040fa0, utilityStmt=0x2ffe588, isTopLevel=-72 '\270', dest=0x308d8e0, completionTag=0x7fffc8ca8310 "") at pquery.c:1887
> #23 0x007b56a4 in FillPortalStore (portal=0x3040fa0, isTopLevel=0 '\000') at pquery.c:1759
> #24 0x007b5aa5 in PortalRun (portal=<value optimized out>, count=<value optimized out>, isTopLevel=-88 '\250', dest=<value optimized out>, altdest=<value optimized out>, completionTag=<value optimized out>) at pquery.c:1493
> #25 0x007aeb5a in exec_simple_query (query_string=<value optimized out>, seqServerHost=<value optimized out>, seqServerPort=<value optimized out>) at postgres.c:1741
> #26 0x007b0052 in PostgresMain (argc=<value optimized out>, argv=0x2f4b160, username=<value optimized out>) at postgres.c:4711
> #27 0x00763933 in BackendRun (port=0x2f01750) at postmaster.c:5875
> #28 BackendStartup (port=0x2f01750) at postmaster.c:5468
> #29 0x0076409d in ServerLoop () at postmaster.c:2147
> #30 0x00765eae in PostmasterMain (argc=9, argv=0x2f175b0) at postmaster.c:1439
> #31 0x006c070a in main (argc=9, argv=0x2f17570) at main.c:226
> ```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-495) Core in cdbexplain_depositStatsToNode

2016-03-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184335#comment-15184335
 ] 

ASF GitHub Bot commented on HAWQ-495:
-

Github user wengyanqing closed the pull request at:

https://github.com/apache/incubator-hawq/pull/415


>  Core in cdbexplain_depositStatsToNode 
> ---
>
> Key: HAWQ-495
> URL: https://issues.apache.org/jira/browse/HAWQ-495
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Ivan Weng
>Assignee: Lei Chang
>
> ```
> (gdb) bt
> #0  0x003c1ec0f5db in raise () from /lib64/libpthread.so.0
> #1  0x0086cf22 in SafeHandlerForSegvBusIll (processName=<value optimized out>, postgres_signal_arg=11) at elog.c:4515
> #2  <signal handler called>
> #3  0x0092d5d8 in cdbexplain_depositStatsToNode (planstate=<value optimized out>, ctx=<value optimized out>) at cdbexplain.c:1150
> #4  0x0092dd22 in cdbexplain_recvStatWalker (planstate=0x3376020, context=0x7fffc8ca61a8) at cdbexplain.c:666
> #5  0x0065de3f in planstate_walk_node_extended (planstate=0x3316080, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=53967800) at execProcnode.c:2057
> #6  planstate_walk_kids (planstate=0x3316080, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=53967800) at execProcnode.c:2157
> #7  0x0065e269 in planstate_walk_node_extended (planstate=0x33103a0, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2059
> #8  planstate_walk_kids (planstate=0x33103a0, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2157
> #9  0x0065e202 in planstate_walk_node_extended (planstate=0x332c4b8, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2059
> #10 planstate_walk_array (planstate=0x332c4b8, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2081
> #11 planstate_walk_kids (planstate=0x332c4b8, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2128
> #12 0x00929985 in cdbexplain_recvExecStats (planstate=<value optimized out>, dispatchResults=0x7f30540008c8, sliceIndex=<value optimized out>, showstatctx=0x32b5b00, segmentNum=<value optimized out>) at cdbexplain.c:630
> #13 0x0092dd6a in cdbexplain_recvStatWalker (planstate=0x332ae98, context=<value optimized out>) at cdbexplain.c:675
> #14 0x0065e356 in planstate_walk_node_extended (planstate=0x1, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7d10) at execProcnode.c:2057
> #15 planstate_walk_node (planstate=0x1, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7d10) at execProcnode.c:2039
> #16 0x00929985 in cdbexplain_recvExecStats (planstate=<value optimized out>, dispatchResults=0x3044f68, sliceIndex=<value optimized out>, showstatctx=0x32b5b00, segmentNum=<value optimized out>) at cdbexplain.c:630
> #17 0x005f92e8 in ExplainOnePlan_internal (plannedstmt=<value optimized out>, stmt=<value optimized out>, queryString=<value optimized out>, params=<value optimized out>, tstate=<value optimized out>, es=<value optimized out>, isSequential=0 '\000') at explain.c:588
> #18 0x005f9b7d in ExplainOnePlan (plannedstmt=<value optimized out>, stmt=<value optimized out>, queryString=<value optimized out>, params=<value optimized out>, tstate=<value optimized out>) at explain.c:442
> #19 0x005f9c9a in ExplainOneQuery (query=<value optimized out>, stmt=0x2ffe588, queryString=0x2ffcb80 "explain analyze select * from(with cte as (select * from pt1) select * from cte r1, cte r2 where r1.c1=5 and r1.c1=r2.c1 )i ;", params=<value optimized out>, tstate=<value optimized out>) at explain.c:357
> #20 0x005f9dc5 in ExplainQuery (stmt=0x2ffe588, queryString=0x2ffcb80 "explain analyze select * from(with cte as (select * from pt1) select * from cte r1, cte r2 where r1.c1=5 and r1.c1=r2.c1 )i ;", params=0x0, dest=0x308d8e0) at explain.c:200
> #21 0x007b7cda in ProcessUtility (parsetree=0x2ffe588, queryString=<value optimized out>, params=0x0, isTopLevel=1 '\001', dest=0x308d8e0, completionTag=0x7fffc8ca8310 "") at utility.c:1475
> #22 0x007b303a in PortalRunUtility (portal=0x3040fa0, utilityStmt=0x2ffe588, isTopLevel=-72 '\270', dest=0x308d8e0, completionTag=0x7fffc8ca8310 "") at pquery.c:1887
> #23 0x007b56a4 in FillPortalStore (portal=0x3040fa0, isTopLevel=0 '\000') at pquery.c:1759
> #24 0x007b5aa5 in PortalRun (portal=<value optimized out>, count=<value optimized out>, isTopLevel=-88 '\250', dest=<value optimized out>, altdest=<value optimized out>, completionTag=<value optimized out>) at pquery.c:1493
> #25 0x007aeb5a in exec_simple_query (query_string=<value optimized out>, seqServerHost=<value optimized out>, seqServerPort=<value optimized out>) at postgres.c:1741
> #26 0x007b0052 in PostgresMain (argc=<value optimized out>, argv=0x2f4b160, username=<value optimized out>) at postgres.c:4711
> #27 0x00763933 in BackendRun (port=0x2f01750) at postmaster.c:5875
> #28 BackendStartup (port=0x2f01750) at postmaster.c:5468
> #29 0x0076409d in ServerLoop () at postmaster.c:2147
> #30 0x00765eae in PostmasterMain (argc=9, argv=0x2f175b0) at postmaster.c:1439
> #31 0x006c070a in main (argc=9, argv=0x2f17570) at main.c:226
> ```



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HAWQ-495) Core in cdbexplain_depositStatsToNode

2016-03-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184325#comment-15184325
 ] 

ASF GitHub Bot commented on HAWQ-495:
-

Github user zhangh43 commented on the pull request:

https://github.com/apache/incubator-hawq/pull/415#issuecomment-193585155
  
+1


>  Core in cdbexplain_depositStatsToNode 
> ---
>
> Key: HAWQ-495
> URL: https://issues.apache.org/jira/browse/HAWQ-495
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Ivan Weng
>Assignee: Lei Chang
>
> ```
> (gdb) bt
> #0  0x003c1ec0f5db in raise () from /lib64/libpthread.so.0
> #1  0x0086cf22 in SafeHandlerForSegvBusIll (processName=<value optimized out>, postgres_signal_arg=11) at elog.c:4515
> #2  <signal handler called>
> #3  0x0092d5d8 in cdbexplain_depositStatsToNode (planstate=<value optimized out>, ctx=<value optimized out>) at cdbexplain.c:1150
> #4  0x0092dd22 in cdbexplain_recvStatWalker (planstate=0x3376020, context=0x7fffc8ca61a8) at cdbexplain.c:666
> #5  0x0065de3f in planstate_walk_node_extended (planstate=0x3316080, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=53967800) at execProcnode.c:2057
> #6  planstate_walk_kids (planstate=0x3316080, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=53967800) at execProcnode.c:2157
> #7  0x0065e269 in planstate_walk_node_extended (planstate=0x33103a0, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2059
> #8  planstate_walk_kids (planstate=0x33103a0, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2157
> #9  0x0065e202 in planstate_walk_node_extended (planstate=0x332c4b8, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2059
> #10 planstate_walk_array (planstate=0x332c4b8, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2081
> #11 planstate_walk_kids (planstate=0x332c4b8, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7be0, flags=<value optimized out>) at execProcnode.c:2128
> #12 0x00929985 in cdbexplain_recvExecStats (planstate=<value optimized out>, dispatchResults=0x7f30540008c8, sliceIndex=<value optimized out>, showstatctx=0x32b5b00, segmentNum=<value optimized out>) at cdbexplain.c:630
> #13 0x0092dd6a in cdbexplain_recvStatWalker (planstate=0x332ae98, context=<value optimized out>) at cdbexplain.c:675
> #14 0x0065e356 in planstate_walk_node_extended (planstate=0x1, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7d10) at execProcnode.c:2057
> #15 planstate_walk_node (planstate=0x1, walker=0x92dd00 <cdbexplain_recvStatWalker>, context=0x7fffc8ca7d10) at execProcnode.c:2039
> #16 0x00929985 in cdbexplain_recvExecStats (planstate=<value optimized out>, dispatchResults=0x3044f68, sliceIndex=<value optimized out>, showstatctx=0x32b5b00, segmentNum=<value optimized out>) at cdbexplain.c:630
> #17 0x005f92e8 in ExplainOnePlan_internal (plannedstmt=<value optimized out>, stmt=<value optimized out>, queryString=<value optimized out>, params=<value optimized out>, tstate=<value optimized out>, es=<value optimized out>, isSequential=0 '\000') at explain.c:588
> #18 0x005f9b7d in ExplainOnePlan (plannedstmt=<value optimized out>, stmt=<value optimized out>, queryString=<value optimized out>, params=<value optimized out>, tstate=<value optimized out>) at explain.c:442
> #19 0x005f9c9a in ExplainOneQuery (query=<value optimized out>, stmt=0x2ffe588, queryString=0x2ffcb80 "explain analyze select * from(with cte as (select * from pt1) select * from cte r1, cte r2 where r1.c1=5 and r1.c1=r2.c1 )i ;", params=<value optimized out>, tstate=<value optimized out>) at explain.c:357
> #20 0x005f9dc5 in ExplainQuery (stmt=0x2ffe588, queryString=0x2ffcb80 "explain analyze select * from(with cte as (select * from pt1) select * from cte r1, cte r2 where r1.c1=5 and r1.c1=r2.c1 )i ;", params=0x0, dest=0x308d8e0) at explain.c:200
> #21 0x007b7cda in ProcessUtility (parsetree=0x2ffe588, queryString=<value optimized out>, params=0x0, isTopLevel=1 '\001', dest=0x308d8e0, completionTag=0x7fffc8ca8310 "") at utility.c:1475
> #22 0x007b303a in PortalRunUtility (portal=0x3040fa0, utilityStmt=0x2ffe588, isTopLevel=-72 '\270', dest=0x308d8e0, completionTag=0x7fffc8ca8310 "") at pquery.c:1887
> #23 0x007b56a4 in FillPortalStore (portal=0x3040fa0, isTopLevel=0 '\000') at pquery.c:1759
> #24 0x007b5aa5 in PortalRun (portal=<value optimized out>, count=<value optimized out>, isTopLevel=-88 '\250', dest=<value optimized out>, altdest=<value optimized out>, completionTag=<value optimized out>) at pquery.c:1493
> #25 0x007aeb5a in exec_simple_query (query_string=<value optimized out>, seqServerHost=<value optimized out>, seqServerPort=<value optimized out>) at postgres.c:1741
> #26 0x007b0052 in PostgresMain (argc=<value optimized out>, argv=0x2f4b160, username=<value optimized out>) at postgres.c:4711
> #27 0x00763933 in BackendRun (port=0x2f01750) at postmaster.c:5875
> #28 BackendStartup (port=0x2f01750) at postmaster.c:5468
> #29 0x0076409d in ServerLoop () at postmaster.c:2147
> #30 0x00765eae in PostmasterMain (argc=9, argv=0x2f175b0) at postmaster.c:1439
> #31 0x006c070a in main (argc=9, argv=0x2f17570) at main.c:226
> ```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HAWQ-494) Add checks to standby start/init

2016-03-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184293#comment-15184293
 ] 

ASF GitHub Bot commented on HAWQ-494:
-

Github user asfgit closed the pull request at:

https://github.com/apache/incubator-hawq/pull/417


> Add checks to standby start/init
> 
>
> Key: HAWQ-494
> URL: https://issues.apache.org/jira/browse/HAWQ-494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Lei Chang
>
> We should check whether the standby is in sync after starting the standby and
> the master; if it is not, we should error out.
> We also need to check whether the master is doing recovery before initializing
> the standby, and abort if it is.
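For illustration, the two checks could look roughly like the sketch below. This is a minimal sketch assuming a psql-based helper; the gp_master_mirroring/summary_state query is an assumption borrowed from HAWQ's Greenplum lineage, and pg_is_in_recovery() may not exist in HAWQ's PostgreSQL base, so treat both queries as placeholders.

```python
import subprocess

def psql_scalar(sql):
    # Hypothetical helper: run a query through psql and return the single value.
    out = subprocess.check_output(['psql', '-t', '-A', '-c', sql])
    return out.decode().strip()

def check_standby_synced():
    # Error out if the standby did not reach sync after starting standby/master.
    # gp_master_mirroring/summary_state is an assumed catalog view.
    state = psql_scalar("SELECT summary_state FROM gp_master_mirroring;")
    if state != 'Synchronized':
        raise SystemExit("standby is not in sync with master: %s" % state)

def check_master_not_in_recovery():
    # Abort standby init while the master is still doing recovery.
    # pg_is_in_recovery() is assumed to be available here.
    if psql_scalar("SELECT pg_is_in_recovery();") == 't':
        raise SystemExit("master is in recovery; cannot init standby")
```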



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-494) Add checks to standby start/init

2016-03-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184287#comment-15184287
 ] 

ASF GitHub Bot commented on HAWQ-494:
-

Github user yaoj2 commented on the pull request:

https://github.com/apache/incubator-hawq/pull/417#issuecomment-193575875
  
+1 


> Add checks to standby start/init
> 
>
> Key: HAWQ-494
> URL: https://issues.apache.org/jira/browse/HAWQ-494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Lei Chang
>
> We should check whether the standby is in sync after starting the standby and
> the master; if it is not, we should error out.
> We also need to check whether the master is doing recovery before initializing
> the standby, and abort if it is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-494) Add checks to standby start/init

2016-03-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184281#comment-15184281
 ] 

ASF GitHub Bot commented on HAWQ-494:
-

GitHub user radarwave opened a pull request:

https://github.com/apache/incubator-hawq/pull/417

HAWQ-494. Add checks to standby start/init

Tested multinodeparallel, tincmisc, sanity.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/radarwave/incubator-hawq HAWQ-494

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/417.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #417


commit 5b2302961d79275776b0dbe37d0d1829c5fbb565
Author: rlei 
Date:   2016-03-06T12:59:21Z

HAWQ-494. Add checks to standby start/init




> Add checks to standby start/init
> 
>
> Key: HAWQ-494
> URL: https://issues.apache.org/jira/browse/HAWQ-494
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Command Line Tools
>Reporter: Radar Lei
>Assignee: Lei Chang
>
> We should check whether the standby is in sync after starting the standby and
> the master; if it is not, we should error out.
> We also need to check whether the master is doing recovery before initializing
> the standby, and abort if it is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-490) Add version and compatibility detection for HAWQ + GPORCA

2016-03-07 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184233#comment-15184233
 ] 

Roman Shaposhnik commented on HAWQ-490:
---

[~xzhang.pivotal] the standard way is what I linked to. There's no other 
standard way for C and C++ components. 

> Add version and compatibility detection for HAWQ + GPORCA
> -
>
> Key: HAWQ-490
> URL: https://issues.apache.org/jira/browse/HAWQ-490
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build
>Reporter: Jacob Max Frank
>Assignee: Lei Chang
>Priority: Minor
>
> Autoconf should be able to detect GPORCA version information and check for 
> compatibility with the version of HAWQ being built.  Additionally, running 
> {{select gp_opt_version();}} on HAWQ compiled with GPORCA should output 
> correct version information for its components (GPOPT, GPOS, and GPXerces).
> [~rvs] suggested using {{pkgutil}}.  Alternate potential strategies include 
> using a {{git submodule}} or pulling values from a {{version.h}} file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-482) Failed to cancel HDFS delegation token

2016-03-07 Thread zhenglin tao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184229#comment-15184229
 ] 

zhenglin tao commented on HAWQ-482:
---

{code}
public synchronized TokenIdent cancelToken(Token<TokenIdent> token,
    String canceller) throws IOException {
  ByteArrayInputStream buf = new ByteArrayInputStream(token.getIdentifier());
  DataInputStream in = new DataInputStream(buf);
  TokenIdent id = createIdentifier();
  id.readFields(in);
  LOG.info("Token cancelation requested for identifier: " + id);

  if (id.getUser() == null) {
    throw new InvalidToken("Token with no owner");
  }
  String owner = id.getUser().getUserName();
  Text renewer = id.getRenewer();
  HadoopKerberosName cancelerKrbName = new HadoopKerberosName(canceller);
  String cancelerShortName = cancelerKrbName.getShortName();
  if (!canceller.equals(owner)
      && (renewer == null || renewer.toString().isEmpty()
          || !cancelerShortName.equals(renewer.toString()))) {
    throw new AccessControlException(canceller
        + " is not authorized to cancel the token");
  }
  DelegationTokenInformation info = currentTokens.remove(id);
  if (info == null) {
    throw new InvalidToken("Token not found");
  }
  removeStoredToken(id);
  return id;
}
{code}
From the HDFS code, we see that this error is only reported when the canceller
is not the owner of the token. In your case, it seems that getDelegationToken
in the namenode succeeds but cancelDelegationToken fails. It is strange that
the canceller changes to be someone other than the owner; on the HAWQ side,
this role should not change. Do you know anything about Isilon?
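Restated, the authorization rule in the quoted Java reduces to a small predicate (a paraphrase for clarity, not HDFS code):

```python
def may_cancel_token(canceller, owner, renewer):
    # A token may be cancelled only by its owner or its designated renewer
    # (paraphrase of the HDFS check quoted above).
    return canceller == owner or (renewer is not None and canceller == renewer)
```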

> Failed to cancel HDFS delegation token
> --
>
> Key: HAWQ-482
> URL: https://issues.apache.org/jira/browse/HAWQ-482
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Jemish Patel
>Assignee: Lei Chang
> Attachments: hawq-2016-03-03_00.csv, hdfs.log
>
>
> Hi, I am using HDB 2.0.0.0_beta-19716 in a kerberized environment.
> Every time I select/insert rows or create a table, I see the warning below:
> WARNING:  failed to cancel hdfs delegation token.
> DETAIL:  User postg...@vlan172.fe.gopivotal.com is not authorized to cancel
> the token
> The operation does succeed, but I am wondering why it is trying to delete a
> delegation token and whether I have something misconfigured.
> Can you please let me know why this is happening and how to resolve it?
> Jemish



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-497) Refactor resource related GUC.

2016-03-07 Thread Hubert Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184163#comment-15184163
 ] 

Hubert Zhang commented on HAWQ-497:
---

Resource related GUCs:
1. [Exposed] default_hash_table_bucket_number: the default hash table bucket
number. Default value: 6 * SegmentCount.
2. [Exposed] hawq_rm_nvseg_perquery_perseg_limit: the limit on the number of
virtual segments per segment for one query. Default value: 6.
3. [Exposed] hawq_rm_nvseg_perquery_limit: the limit on the number of virtual
segments for one query. Default value: 512.
4. [NotExposed] hawq_rm_nvseg_for_copy_from_perquery: the default number of
virtual segments for a COPY FROM statement. Default value: 6.
5. [NotExposed] hawq_rm_nvseg_for_analyze_perquery_perseg_limit: the limit on
the number of virtual segments per segment for one ANALYZE query. Default
value: 4.
6. [NotExposed] hawq_rm_nvseg_for_analyze_perquery_limit: the limit on the
number of virtual segments for one ANALYZE query. Default value: 256.

Here are the best practices for users:
The elastic execution runtime introduced in HAWQ 2.0 is based on virtual
segments. The number of virtual segments is allocated on demand based on query
cost: for big queries we start a large number of virtual segments, while for
small queries fewer are started.
You can control the limit on the number of virtual segments for a query by
tuning the following three GUCs (a rough sketch of how they interact follows
this comment).
1. default_hash_table_bucket_number. This GUC defines the default bucket
number when you create a hash table. When you query a hash table, no matter
whether the table is large or small, the query resource is fixed: the bucket
number of the table. So for a small hash table you should set
default_hash_table_bucket_number to a small value before creating it, while
for a large hash table 6 * SlaveCount may be a good choice.
2. hawq_rm_nvseg_perquery_perseg_limit. This GUC defines the limit on the
number of virtual segments per segment for one query. It applies to random
tables, external tables, and user-defined functions, but not to hash tables.
When you query a random table, the query resource is proportional to the data
size of the table, in general one virtual segment per HDFS block, so a large
table would otherwise consume too many resources. This GUC limits the maximum
resource usage on each segment; reducing it may help if you want to improve
concurrency. When a query contains external tables or user-defined functions,
the resources allocated are exactly: GUC value * segment count.
3. hawq_rm_nvseg_perquery_limit. This GUC defines the limit on the number of
virtual segments for one query. It is a query-level GUC: no matter how many
segments are configured, the total resource for a query must not exceed this
value. Reducing it can improve concurrency, and it is best to set this GUC to
a multiple of the number of segments.
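As a rough illustration of how the two exposed per-query limits interact for random and external tables (the interplay is inferred from the descriptions above, not taken from the resource manager source; the cluster size is hypothetical):

```python
def effective_vseg_cap(segment_count,
                       perquery_perseg_limit=6,   # hawq_rm_nvseg_perquery_perseg_limit
                       perquery_limit=512):       # hawq_rm_nvseg_perquery_limit
    # Assumed upper bound on the number of virtual segments one query on a
    # random or external table can be allocated, per the two exposed limits.
    return min(perquery_perseg_limit * segment_count, perquery_limit)

# e.g. on a hypothetical 16-segment cluster: min(6 * 16, 512) = 96 vsegs.
print(effective_vseg_cap(16))
```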

> Refactor resource related GUC.
> --
>
> Key: HAWQ-497
> URL: https://issues.apache.org/jira/browse/HAWQ-497
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> We have confusing GUCs to control how many vsegs a query will use. It's
> better to remove the default_segment number and replace it with other
> meaningful GUCs.
> We also need to expose as few GUCs as possible to users, which makes
> performance tuning easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-497) Refactor resource related GUC.

2016-03-07 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang reassigned HAWQ-497:
-

Assignee: Hubert Zhang  (was: Lei Chang)

> Refactor resource related GUC.
> --
>
> Key: HAWQ-497
> URL: https://issues.apache.org/jira/browse/HAWQ-497
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> We have confusing GUCs to control how many vsegs a query will use. It's
> better to remove the default_segment number and replace it with other
> meaningful GUCs.
> We also need to expose as few GUCs as possible to users, which makes
> performance tuning easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-497) Refactor resource related GUC.

2016-03-07 Thread Hubert Zhang (JIRA)
Hubert Zhang created HAWQ-497:
-

 Summary: Refactor resource related GUC.
 Key: HAWQ-497
 URL: https://issues.apache.org/jira/browse/HAWQ-497
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Hubert Zhang
Assignee: Lei Chang


We have confusing GUCs to control how many vsegs a query will use. It's better
to remove the default_segment number and replace it with other meaningful GUCs.
We also need to expose as few GUCs as possible to users, which makes
performance tuning easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-497) Refactor resource related GUC.

2016-03-07 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang updated HAWQ-497:
--
Issue Type: Improvement  (was: Bug)

> Refactor resource related GUC.
> --
>
> Key: HAWQ-497
> URL: https://issues.apache.org/jira/browse/HAWQ-497
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> We have confusing GUCs to control how many vsegs a query will use. It's
> better to remove the default_segment number and replace it with other
> meaningful GUCs.
> We also need to expose as few GUCs as possible to users, which makes
> performance tuning easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-496) Add unit to output of datalocality info of analyze.

2016-03-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184145#comment-15184145
 ] 

ASF GitHub Bot commented on HAWQ-496:
-

GitHub user zhangh43 opened a pull request:

https://github.com/apache/incubator-hawq/pull/416

HAWQ-496. Add unit to output of datalocality info of analyze.

Also adds the unit Byte to the ANALYZE output.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zhangh43/incubator-hawq hawq496

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/416.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #416


commit 2cbce2d77ec4faabbea4d25138c505d7af7517cb
Author: hubertzhang 
Date:   2016-03-08T01:09:50Z

HAWQ-496. Add unit to output of datalocality info of analyze.




> Add unit to output of datalocality info of analyze.
> ---
>
> Key: HAWQ-496
> URL: https://issues.apache.org/jira/browse/HAWQ-496
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> When we run an EXPLAIN ANALYZE query, the data locality output only contains
> a number; we need to add a unit to it, such as B or KB.
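A minimal sketch of the kind of unit formatting the fix calls for (illustrative only, not the actual patch):

```python
def format_size(num_bytes):
    # Render a byte count with a unit suffix, e.g. 2048 -> '2.00 KB'.
    for unit in ('B', 'KB', 'MB', 'GB', 'TB'):
        if num_bytes < 1024 or unit == 'TB':
            return '%.2f %s' % (num_bytes, unit)
        num_bytes /= 1024.0

print(format_size(123))      # 123.00 B
print(format_size(5242880))  # 5.00 MB
```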



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-496) Add unit to output of datalocality info of analyze.

2016-03-07 Thread Hubert Zhang (JIRA)
Hubert Zhang created HAWQ-496:
-

 Summary: Add unit to output of datalocality info of analyze.
 Key: HAWQ-496
 URL: https://issues.apache.org/jira/browse/HAWQ-496
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Core
Reporter: Hubert Zhang
Assignee: Lei Chang


When we run an EXPLAIN ANALYZE query, the data locality output only contains a
number; we need to add a unit to it, such as B or KB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HAWQ-496) Add unit to output of datalocality info of analyze.

2016-03-07 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang reassigned HAWQ-496:
-

Assignee: Hubert Zhang  (was: Lei Chang)

> Add unit to output of datalocality info of analyze.
> ---
>
> Key: HAWQ-496
> URL: https://issues.apache.org/jira/browse/HAWQ-496
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> When we run an EXPLAIN ANALYZE query, the data locality output only contains
> a number; we need to add a unit to it, such as B or KB.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-420) Memory leaks in DataLocality during ANALYZE lasting 1 ~ 2 days on 100 nodes cluster.

2016-03-07 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang closed HAWQ-420.
-
Resolution: Fixed

> Memory leaks in DataLocality during ANALYZE lasting 1 ~ 2 days on 100 nodes 
> cluster.
> 
>
> Key: HAWQ-420
> URL: https://issues.apache.org/jira/browse/HAWQ-420
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> On an approximately 100-node cluster (SFO), we load data into lineitem and
> then run ANALYZE on it. It takes 1 ~ 2 days before we cancel the ANALYZE, and
> the QD uses about 48G of memory. The major memory consumers are:
> ```
> TopMemoryContext: 48G
> --
> other: 44G
> split_to_segment_mapping_context: 4G
> ```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-438) EagerlyReleased hash table involved in hash join in explain statement introduce core on hawq dbg build.

2016-03-07 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang closed HAWQ-438.
-
Resolution: Fixed

> EagerlyReleased hash table involved in hash join in explain statement 
> introduce core on hawq dbg build.
> ---
>
> Key: HAWQ-438
> URL: https://issues.apache.org/jira/browse/HAWQ-438
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> (gdb) bt
> #0  0x003c1e832925 in raise () from /lib64/libc.so.6
> #1  0x003c1e834105 in abort () from /lib64/libc.so.6
> #2  0x009d83bc in ExceptionalCondition (conditionName=0xcf9300 
> "!(!hashtable->eagerlyReleased)", errorType=0xcf90f5 "FailedAssertion", 
> fileName=0xcf915c "nodeHashjoin.c", lineNumber=330) at assert.c:60
> #3  0x0074feb4 in ExecHashJoin (node=0x2ae1eb8) at nodeHashjoin.c:330
> #4  0x0071ec94 in ExecProcNode (node=0x2ae1eb8) at execProcnode.c:967



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-419) Failed to MemoryAccounting_SaveToLog in resource negotiator.

2016-03-07 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang closed HAWQ-419.
-
Resolution: Fixed

> Failed to MemoryAccounting_SaveToLog in resource negotiator.
> 
>
> Key: HAWQ-419
> URL: https://issues.apache.org/jira/browse/HAWQ-419
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> When trying to call MemoryAccounting_SaveToLog in lldb, there is an error:
> "Unexpected internal error (memaccounting.c:425)",   
> "FailedAssertion(""!(((bool) 0))"", File: ""memaccounting.c"", Line: 425)"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-397) Remove useless guc: debug_fix_vseg_num.

2016-03-07 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang closed HAWQ-397.
-
Resolution: Fixed

> Remove useless guc: debug_fix_vseg_num.
> ---
>
> Key: HAWQ-397
> URL: https://issues.apache.org/jira/browse/HAWQ-397
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 2.0.0
>Reporter: Lei Chang
>Assignee: Hubert Zhang
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-476) Unexpected internal error in execHHashagg.

2016-03-07 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang closed HAWQ-476.
-
Resolution: Fixed

> Unexpected internal error in execHHashagg.
> --
>
> Key: HAWQ-476
> URL: https://issues.apache.org/jira/browse/HAWQ-476
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> When we run queries on a big cluster, we encounter an unexpected internal
> error in execHHashagg, with the error message "too many passes or hash
> entries" in the pg_logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (HAWQ-477) Copy table to file do not real execute for lineitem.

2016-03-07 Thread Hubert Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hubert Zhang closed HAWQ-477.
-
Resolution: Fixed

> Copy table to file do not real execute for lineitem.
> 
>
> Key: HAWQ-477
> URL: https://issues.apache.org/jira/browse/HAWQ-477
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
>
> COPY table TO file does not actually execute for lineitem when it is a partitioned table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-465) Implement stored procedure to return fields metainfo from PXF

2016-03-07 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-465:
-
Summary: Implement stored procedure to return fields metainfo from PXF  
(was: Implement stored procedure to return PXF metadata)

> Implement stored procedure to return fields metainfo from PXF
> -
>
> Key: HAWQ-465
> URL: https://issues.apache.org/jira/browse/HAWQ-465
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Hcatalog, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.0
>
>
> User should be able to call built-in function:
> {code}
> select pxf_get_object_metadata('source_name', 'container_name', 
> 'object_name');
>  pxf_get_object_metadata 
> {code}
> to retrieve all metadata for given source, container, object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-465) Implement stored procedure to return PXF metadata

2016-03-07 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-465:
-
Description: 
User should be able to call built-in function:

{code}
select pxf_get_object_metadata('source_name', 'container_name', 'object_name');
 pxf_get_object_metadata 
{code}
to retrieve all metadata for given source, container, object.

  was:
User should be able to call built-in function:

{code}
select pxf_get_object_metadata('hcatalog', 'default', 'table1');
 pxf_get_object_metadata 
{code}
to retrieve all metadata for given source, container, object.


> Implement stored procedure to return PXF metadata
> -
>
> Key: HAWQ-465
> URL: https://issues.apache.org/jira/browse/HAWQ-465
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Hcatalog, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.0
>
>
> User should be able to call built-in function:
> {code}
> select pxf_get_object_metadata('source_name', 'container_name', 
> 'object_name');
>  pxf_get_object_metadata 
> {code}
> to retrieve all metadata for given source, container, object.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-465) Implement stored procedure to return PXF metadata

2016-03-07 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-465:
-
Summary: Implement stored procedure to return PXF metadata  (was: Call PXF 
metadata api from psql to display Hcatalog table details)

> Implement stored procedure to return PXF metadata
> -
>
> Key: HAWQ-465
> URL: https://issues.apache.org/jira/browse/HAWQ-465
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Hcatalog, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.0
>
>
> User should be able to:
> 1. Describe a single table:
>```\d hcatalog.databaseName.t1```,  ```\d+ hcatalog.databaseName.t1```
> 2. Describe a whole database:
>```\d hcatalog.databaseName.*```
> 3. Describe everything in hcatalog:
>```\d hcatalog.*.*```



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-490) Add version and compatibility detection for HAWQ + GPORCA

2016-03-07 Thread xin zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15183793#comment-15183793
 ] 

xin zhang commented on HAWQ-490:


+1. There has to be a standard way to figure out dependencies, and some best
practices in the OSS community.

At this moment, I see the following building blocks (a rough sketch follows
the list):
- Add release tags to the dependent repos, e.g. GPORCA, GPOS, and GPXERCES in
our case.
- Figure out the OSS best practice for version dependencies in a package
manager (https://en.wikipedia.org/wiki/Package_manager). At a minimum we need
a popular Linux flavor (apt-get) and an OS X one (brew) for deployment and
development.
- Automate the testing (e.g. Travis) with dependencies installed through those
package managers.
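As one concrete shape for such a check, a minimal sketch that gates the build on the installed GPORCA version; it assumes GPORCA ships a pkg-config file named 'gporca' (an assumption, not a documented install artifact), and the minimum version constant is hypothetical:

```python
import subprocess

MIN_GPORCA = (1, 627)  # hypothetical minimum supported version

def installed_gporca_version():
    # Assumes a 'gporca' pkg-config module exists; this is an assumption.
    out = subprocess.check_output(['pkg-config', '--modversion', 'gporca'])
    return tuple(int(x) for x in out.decode().strip().split('.')[:2])

if installed_gporca_version() < MIN_GPORCA:
    raise SystemExit('installed GPORCA is too old for this HAWQ build')
```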

> Add version and compatibility detection for HAWQ + GPORCA
> -
>
> Key: HAWQ-490
> URL: https://issues.apache.org/jira/browse/HAWQ-490
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Build
>Reporter: Jacob Max Frank
>Assignee: Lei Chang
>Priority: Minor
>
> Autoconf should be able to detect GPORCA version information and check for 
> compatibility with the version of HAWQ being built.  Additionally, running 
> {{select gp_opt_version();}} on HAWQ compiled with GPORCA should output 
> correct version information for its components (GPOPT, GPOS, and GPXerces).
> [~rvs] suggested using {{pkgutil}}.  Alternate potential strategies include 
> using a {{git submodule}} or pulling values from a {{version.h}} file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-482) Failed to cancel HDFS delegation token

2016-03-07 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15183379#comment-15183379
 ] 

Goden Yao commented on HAWQ-482:


But why does it issue a warning at the user-visible level?

> Failed to cancel HDFS delegation token
> --
>
> Key: HAWQ-482
> URL: https://issues.apache.org/jira/browse/HAWQ-482
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 2.0.0-beta-incubating
>Reporter: Jemish Patel
>Assignee: Lei Chang
> Attachments: hawq-2016-03-03_00.csv, hdfs.log
>
>
> Hi, I am using HDB 2.0.0.0_beta-19716 in a kerberized environment.
> Every time I select/insert rows or create a table, I see the warning below:
> WARNING:  failed to cancel hdfs delegation token.
> DETAIL:  User postg...@vlan172.fe.gopivotal.com is not authorized to cancel
> the token
> The operation does succeed, but I am wondering why it is trying to delete a
> delegation token and whether I have something misconfigured.
> Can you please let me know why this is happening and how to resolve it?
> Jemish



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)