[jira] [Commented] (HAWQ-1453) relation_close() report error at analyzeStmt(): is not owned by resource owner TopTransaction (resowner.c:814)

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084933#comment-16084933
 ] 

Vineet Goel commented on HAWQ-1453:
---

[~liming01] - I think this JIRA can be resolved now, since you merged the PR. 
If that's correct, could you please Resolve this? 
Thanks

> relation_close() report error at analyzeStmt(): is not owned by resource 
> owner TopTransaction (resowner.c:814)
> --
>
> Key: HAWQ-1453
> URL: https://issues.apache.org/jira/browse/HAWQ-1453
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: backlog
>
>
> I created a simple MapReduce map-only program (to simulate a Spark executor, 
> as in the customer's environment) that uses JDBC through the PostgreSQL driver 
> (as the customer does), and I executed the queries the customer is trying to 
> run. I can reproduce all the errors the customer reported (see the JDBC sketch 
> after the quoted trace).
> 2017-04-28 03:50:38.299276 
> IST,"gpadmin","gpadmin",p91745,th-609535712,"10.193.102.144","3228",2017-04-28
>  03:50:35 
> IST,156637,con4578,cmd36,seg-1,,,x156637,sx1,"ERROR","XX000","relcache 
> reference e_event_1_0_102_1_prt_2 is not owned by resource owner 
> TopTransaction (resowner.c:814)",,"ANALYZE 
> mis_data_ig_account_details.e_event_1_0_102",0,,"resowner.c",814,"Stack trace:
> 10x8ce4a8 postgres errstart + 0x288
> 20x8d022b postgres elog_finish + 0xab
> 30x4ca654 postgres relation_close + 0x14
> 40x5e7508 postgres analyzeStmt + 0xd58
> 50x5e8b07 postgres analyzeStatement + 0x97
> 60x65c3bc postgres vacuum + 0x6c
> 70x7f61e2 postgres ProcessUtility + 0x542
> 80x7f1cae postgres  + 0x7f1cae
> 90x7f348e postgres  + 0x7f348e
> 10   0x7f51f5 postgres PortalRun + 0x465
> 11   0x7ee268 postgres PostgresMain + 0x1908
> 12   0x7a0560 postgres  + 0x7a0560
> 13   0x7a3329 postgres PostmasterMain + 0x759
> 14   0x4a5319 postgres main + 0x519
> 15   0x3a1661ed1d libc.so.6 __libc_start_main + 0xfd
> 16   0x4a5399 postgres  + 0x4a5399
> "



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1443) Implement Ranger lookup for HAWQ with Kerberos enabled.

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084930#comment-16084930
 ] 

Vineet Goel commented on HAWQ-1443:
---

Is this JIRA ready to resolve now?

> Implement Ranger lookup for HAWQ with Kerberos enabled.
> ---
>
> Key: HAWQ-1443
> URL: https://issues.apache.org/jira/browse/HAWQ-1443
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Hubert Zhang
>Assignee: Hubert Zhang
> Fix For: backlog
>
> Attachments: Kerberos Support for Ranger Lookup HAWQ.pdf
>
>
> When adding a HAWQ service in Ranger, we also need to configure the Ranger 
> lookup service for HAWQ. The lookup can be done through JDBC with a username 
> and password, but it does not currently support Kerberos authentication.
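
For context, the username/password lookup path mentioned above is essentially a 
plain JDBC connection from the Ranger lookup service into HAWQ; a rough sketch 
(all connection details below are placeholders) is:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Rough sketch of what a username/password lookup amounts to: connect to HAWQ
// over JDBC and enumerate objects for Ranger's resource lookup.
// URL, user and password are placeholders.
public class HawqLookupSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://hawq-master:5432/postgres";
        try (Connection conn = DriverManager.getConnection(url, "rangerlookup", "secret");
             ResultSet rs = conn.getMetaData()
                                .getTables(null, "public", "%", new String[] {"TABLE"})) {
            while (rs.next()) {
                System.out.println(rs.getString("TABLE_NAME"));
            }
        }
    }
}
{code}

With Kerberos enabled this simple path no longer works, since the connection has 
to authenticate with a ticket rather than a password; that gap is what the 
attached design document addresses.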



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1438) Analyze report error: relcache reference xxx is not owned by resource owner TopTransaction

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084928#comment-16084928
 ] 

Vineet Goel commented on HAWQ-1438:
---

[~liming01] - I think this JIRA can be resolved now, since you merged the PR. 
If that's correct, could you please Resolve this? 
Thanks

> Analyze report error: relcache reference xxx is not owned by resource owner 
> TopTransaction
> --
>
> Key: HAWQ-1438
> URL: https://issues.apache.org/jira/browse/HAWQ-1438
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: backlog, 2.3.0.0-incubating
>
>
> 2017-04-12 14:23:13.866064 
> BST,"mis_ig","ig",p124811,th-224249568,"10.33.188.8","5172",2017-04-12 
> 14:20:42 
> BST,76687174,con61,cmd16,seg-1,,,x76687174,sx1,"ERROR","XX000","relcache 
> reference e_event_1_0_102_1_prt_2 is not owned by resource owner 
> TopTransaction (resowner.c:766)",,"ANALYZE 
> mis_data_ig_account_details.e_event_1_0_102",0,,"resowner.c",766,"Stack trace:
> 1 0x8ce438 postgres errstart (elog.c:492)
> 2 0x8d01bb postgres elog_finish (elog.c:1443)
> 3 0x4ca5f4 postgres relation_close (heapam.c:1267)
> 4 0x5e7498 postgres analyzeStmt (analyze.c:728)
> 5 0x5e8a97 postgres analyzeStatement (analyze.c:274)
> 6 0x65c34c postgres vacuum (vacuum.c:319)
> 7 0x7f6172 postgres ProcessUtility (utility.c:1472)
> 8 0x7f1c3e postgres  (pquery.c:1974)
> 9 0x7f341e postgres  (pquery.c:2078)
> 10 0x7f5185 postgres PortalRun (pquery.c:1599)
> 11 0x7ee1f8 postgres PostgresMain (postgres.c:2782)
> 12 0x7a04f0 postgres  (postmaster.c:5486)
> 13 0x7a32b9 postgres PostmasterMain (postmaster.c:1459)
> 14 0x4a52b9 postgres main (main.c:226)
> 15 0x7fcaee7ded5d libc.so.6 __libc_start_main (??:0)
> 16 0x4a5339 postgres  (??:0)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1487) hang process due to deadlock when it try to process interrupt in error handling

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084935#comment-16084935
 ] 

Vineet Goel commented on HAWQ-1487:
---

Hi [~huor], can this JIRA be resolved now?

> hang process due to deadlock when it try to process interrupt in error 
> handling
> ---
>
> Key: HAWQ-1487
> URL: https://issues.apache.org/jira/browse/HAWQ-1487
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Query Execution
>Affects Versions: 2.2.0.0-incubating
>Reporter: Ruilong Huo
>Assignee: Ruilong Huo
> Fix For: 2.3.0.0-incubating
>
>
> A process hangs when it tries to handle an interrupt during error handling. To 
> be specific, some QEs hit a division-by-zero error and start to error out. 
> While processing that error, they try to handle a query-cancel interrupt, and 
> a deadlock occurs.
> The hanging process is:
> {noformat}
> $ hawq ssh -f hostfile -e "ps -ef | grep postgres | grep -v grep"
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger p
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> co
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer p
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoi
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsupe
> $ ps -ef | grep postgres | grep -v grep
> gpadmin   51245  1  0 06:15 ?00:01:01 
> /usr/local/hawq_2_2_0_0/bin/postgres -D 
> /data/pulse-agent-data/HAWQ-main-FeatureTest-opt-Multinode-parallel/product/segmentdd
>  -i -M segment -p 20100 --silent-mode=true
> gpadmin   51246  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> logger process
> gpadmin   51249  51245  0 06:15 ?00:00:00 postgres: port 20100, stats 
> collector process
> gpadmin   51250  51245  0 06:15 ?00:00:07 postgres: port 20100, 
> writer process
> gpadmin   51251  51245  0 06:15 ?00:00:01 postgres: port 20100, 
> checkpoint process
> gpadmin   51252  51245  0 06:15 ?00:00:11 postgres: port 20100, 
> segment resource manager
> gpadmin  182983  51245  0 07:00 ?00:00:03 postgres: port 20100, 
> hawqsuperuser olap_winow... 10.32.34.225(45462) con4405 seg0 cmd2 slice7 
> MPPEXEC SELECT
> gpadmin  194424 194402  0 23:50 pts/000:00:00 grep postgres
> {noformat}
> The call stack is:
> {noformat}
> $ sudo gdb -p 182983
> (gdb) bt
> #0  0x003ff060e2e4 in __lll_lock_wait () from /lib64/libpthread.so.0
> #1  0x003ff0609588 in _L_lock_854 () from /lib64/libpthread.so.0
> #2  0x003ff0609457 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #3  0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #4  0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #5  0x003ff220ff49 in ?? () from /lib64/libgcc_s.so.1
> #6  0x003ff22100e7 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #7  0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #8  0x009cda3f in errstart (elevel=20, filename=0xd309e0 
> "postgres.c", lineno=3618,
> funcname=0xd32fc0 "ProcessInterrupts", domain=0x0) at elog.c:492
> #9  0x008e8fcb in ProcessInterrupts () at postgres.c:3616
> #10 0x008e8c9e in StatementCancelHandler (postgres_signal_arg=2) at 
> postgres.c:3463
> #11 
> #12 0x003ff0609451 in pthread_mutex_lock () from /lib64/libpthread.so.0
> #13 0x003ff221206a in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
> #14 0x003ff220f603 in ?? () from /lib64/libgcc_s.so.1
> #15 0x003ff2210119 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
> #16 0x003ff02fe966 in backtrace () from /lib64/libc.so.6
> #17 0x009cda3f in errstart (elevel=20, filename=0xd3ba00 "float.c", 
> lineno=839, funcname=0xd3bf3a "float8div",
> domain=0x0) at elog.c:492
> #18 0x00921a84 in float8div (fcinfo=0x7ffd04d2b8b0) at float.c:836
> #19 0x00722fe5 in ExecMakeFunctionResult (fcache=0x324a088, 
> econtext=0x32495d8, isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:1762
> #20 0x00723d87 in ExecEvalOper (fcache=0x324a088, econtext=0x32495d8, 
> isNull=0x7ffd04d2c0e0 "\030",
> isDone=0x7ffd04d2bd04) at execQual.c:2250
> #21 0x00722451 in ExecEvalFuncArgs (fcinfo=0x7ffd04d2bda0, 
> argList=0x324b378, econtext=0x32495d8) at execQual.c:1317
> #22 0x00722a68 in ExecMakeFunctionResult (fcache=0x3249850, 
> econtext=0x32495d8,
> isNull=0x7ffd04d2c5c1 "\306\322\004\375\177", isDone=0x0) at 
> execQual.c:1532
> #23 0x00723d1e in ExecEvalFunc (fcache=0x3249850, econtext=0x32495d8, 
> isNull=0x7ffd04d2c5c1 

[jira] [Updated] (HAWQ-1438) Analyze report error: relcache reference xxx is not owned by resource owner TopTransaction

2017-07-12 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1438:
--
Fix Version/s: 2.3.0.0-incubating

> Analyze report error: relcache reference xxx is not owned by resource owner 
> TopTransaction
> --
>
> Key: HAWQ-1438
> URL: https://issues.apache.org/jira/browse/HAWQ-1438
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: backlog, 2.3.0.0-incubating
>
>
> 2017-04-12 14:23:13.866064 
> BST,"mis_ig","ig",p124811,th-224249568,"10.33.188.8","5172",2017-04-12 
> 14:20:42 
> BST,76687174,con61,cmd16,seg-1,,,x76687174,sx1,"ERROR","XX000","relcache 
> reference e_event_1_0_102_1_prt_2 is not owned by resource owner 
> TopTransaction (resowner.c:766)",,"ANALYZE 
> mis_data_ig_account_details.e_event_1_0_102",0,,"resowner.c",766,"Stack trace:
> 1 0x8ce438 postgres errstart (elog.c:492)
> 2 0x8d01bb postgres elog_finish (elog.c:1443)
> 3 0x4ca5f4 postgres relation_close (heapam.c:1267)
> 4 0x5e7498 postgres analyzeStmt (analyze.c:728)
> 5 0x5e8a97 postgres analyzeStatement (analyze.c:274)
> 6 0x65c34c postgres vacuum (vacuum.c:319)
> 7 0x7f6172 postgres ProcessUtility (utility.c:1472)
> 8 0x7f1c3e postgres  (pquery.c:1974)
> 9 0x7f341e postgres  (pquery.c:2078)
> 10 0x7f5185 postgres PortalRun (pquery.c:1599)
> 11 0x7ee1f8 postgres PostgresMain (postgres.c:2782)
> 12 0x7a04f0 postgres  (postmaster.c:5486)
> 13 0x7a32b9 postgres PostmasterMain (postmaster.c:1459)
> 14 0x4a52b9 postgres main (main.c:226)
> 15 0x7fcaee7ded5d libc.so.6 __libc_start_main (??:0)
> 16 0x4a5339 postgres  (??:0)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1454) Exclude certain jars from Ranger Plugin Service packaging

2017-07-12 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1454:
--
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Exclude certain jars from Ranger Plugin Service packaging
> -
>
> Key: HAWQ-1454
> URL: https://issues.apache.org/jira/browse/HAWQ-1454
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Security
>Reporter: Lav Jain
>Assignee: Ed Espino
> Fix For: 2.3.0.0-incubating
>
>
> The following jars may cause conflicts in certain environments depending on 
> how the classes are being loaded.
> ```
> WEB-INF/lib/jersey-json-1.9.jar
> WEB-INF/lib/jersey-core-1.9.jar
> WEB-INF/lib/jersey-server-1.9.jar
> ```
> We need to exclude them while building the RPM.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1469) Don't expose RPS warning messages to command line

2017-07-12 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1469:
--
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Don't expose RPS warning messages to command line
> -
>
> Key: HAWQ-1469
> URL: https://issues.apache.org/jira/browse/HAWQ-1469
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Security
>Reporter: Lin Wen
>Assignee: Lin Wen
> Fix For: 2.3.0.0-incubating
>
>
> Exposing the RPS service address to end users is not secure, and we should not 
> expose it.
> **Case 1: When master RPS is down, changing to standby RPS**
> Current behavior
> ```
> postgres=# select * from a;
> WARNING:  ranger plugin service from http://test1:8432/rps is unavailable : 
> Couldn't connect to server, try another http://test5:8432/rps
> ERROR:  permission denied for relation(s): public.a
> ``` 
> The warning should be removed.
> Expected
> ```
> postgres=# select * from a;
> ERROR:  permission denied for relation(s): public.a
> ```
> **Case 2: When both RPS instances are down, only print that RPS is unavailable.**
> Current Behavior:
> ```
> postgres=# select * from a;
> WARNING:  ranger plugin service from http://test5:8432/rps is unavailable : 
> Couldn't connect to server, try another http://test1:8432/rps
> ERROR:  ranger plugin service from http://test1:8432/rps is unavailable : 
> Couldn't connect to server. (rangerrest.c:463)
> ```
> Expected
> ```
> postgres=# select * from a;
> ERROR:  ranger plugin service is unavailable : Couldn't connect to server. 
> (rangerrest.c:463)
> ```
> The warning message should be printed in the csv log file instead.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HAWQ-1417) Crashed at ANALYZE after COPY

2017-07-12 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1417:
--
Fix Version/s: (was: backlog)
   2.3.0.0-incubating

> Crashed at ANALYZE after COPY
> -
>
> Key: HAWQ-1417
> URL: https://issues.apache.org/jira/browse/HAWQ-1417
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Core
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: 2.3.0.0-incubating
>
>
> This is the line in the master log where the PANIC is reported:
> {code}
> (gdb) bt
> #0  0x7f6d35b0e6ab in raise () from 
> /data/logs/52280/new_panic/packcore-core.postgres.457052/lib64/libpthread.so.0
> #1  0x008c7d79 in SafeHandlerForSegvBusIll (postgres_signal_arg=11, 
> processName=) at elog.c:4519
> #2  
> #3  ResourceOwnerEnlargeRelationRefs (owner=0x0) at resowner.c:708
> #4  0x008b5659 in RelationIncrementReferenceCount (rel=0x1baf500) at 
> relcache.c:1941
> #5  RelationIdGetRelation (relationId=relationId@entry=1259) at 
> relcache.c:1895
> #6  0x004ca664 in relation_open (lockmode=lockmode@entry=1, 
> relationId=relationId@entry=1259) at heapam.c:882
> #7  heap_open (relationId=relationId@entry=1259, lockmode=lockmode@entry=1) 
> at heapam.c:1285
> #8  0x008b0945 in ScanPgRelation (targetRelId=targetRelId@entry=5010, 
> indexOK=indexOK@entry=1 '\001', 
> pg_class_relation=pg_class_relation@entry=0x7ffdf2aed390) at relcache.c:279
> #9  0x008b4302 in RelationBuildDesc (targetRelId=5010, 
> insertIt=) at relcache.c:1209
> #10 0x008b56c7 in RelationIdGetRelation 
> (relationId=relationId@entry=5010) at relcache.c:1918
> #11 0x004ca664 in relation_open (lockmode=, 
> relationId=5010) at heapam.c:882
> #12 heap_open (relationId=5010, lockmode=) at heapam.c:1285
> #13 0x0055d1e6 in caql_basic_fn_all (pcql=0x1d70a58, 
> bLockEntireTable=0 '\000', pCtx=0x7ffdf2aed480, pchn=0xf4b328 
> ) at caqlanalyze.c:343
> #14 caql_switch (pchn=pchn@entry=0xf4b328 , 
> pCtx=pCtx@entry=0x7ffdf2aed480, pcql=pcql@entry=0x1d70a58) at 
> caqlanalyze.c:229
> #15 0x005636db in caql_getcount (pCtx0=pCtx0@entry=0x0, 
> pcql=0x1d70a58) at caqlaccess.c:367
> #16 0x009ddc47 in rel_is_partitioned (relid=1882211) at 
> cdbpartition.c:232
> #17 rel_part_status (relid=relid@entry=1882211) at cdbpartition.c:484
> #18 0x005e7d43 in calculate_virtual_segment_number 
> (candidateOids=) at analyze.c:833
> #19 analyzeStmt (stmt=stmt@entry=0x2045dd0, relids=relids@entry=0x0, 
> preferred_seg_num=preferred_seg_num@entry=-1) at analyze.c:486
> #20 0x005e89a7 in analyzeStatement (stmt=stmt@entry=0x2045dd0, 
> relids=relids@entry=0x0, preferred_seg_num=preferred_seg_num@entry=-1) at 
> analyze.c:271
> #21 0x0065c25c in vacuum (vacstmt=vacstmt@entry=0x2045bf0, 
> relids=relids@entry=0x0, preferred_seg_num=preferred_seg_num@entry=-1) at 
> vacuum.c:316
> #22 0x007f6012 in ProcessUtility 
> (parsetree=parsetree@entry=0x2045bf0, queryString=0x2045d30 "ANALYZE 
> mis_data_ig_account_details.e_event_1_0_102", params=0x0, 
> isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0xf04ba0 ,
> completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at utility.c:1471
> #23 0x007f1ade in PortalRunUtility (portal=portal@entry=0x1bfb490, 
> utilityStmt=utilityStmt@entry=0x2045bf0, isTopLevel=isTopLevel@entry=1 
> '\001', dest=dest@entry=0xf04ba0 , 
> completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at pquery.c:1968
> #24 0x007f32be in PortalRunMulti (portal=portal@entry=0x1bfb490, 
> isTopLevel=isTopLevel@entry=1 '\001', dest=0xf04ba0 , 
> dest@entry=0x1b93f60, altdest=0xf04ba0 , 
> altdest@entry=0x1b93f60, completionTag=completionTag@entry=0x7ffdf2aee3f0 "")
> at pquery.c:2078
> #25 0x007f5025 in PortalRun (portal=portal@entry=0x1bfb490, 
> count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', 
> dest=dest@entry=0x1b93f60, altdest=altdest@entry=0x1b93f60, 
> completionTag=completionTag@entry=0x7ffdf2aee3f0 "") at pquery.c:1595
> #26 0x007ee098 in exec_execute_message (max_rows=9223372036854775807, 
> portal_name=0x1b93ad0 "") at postgres.c:2782
> #27 PostgresMain (argc=, argv=, 
> argv@entry=0x1a49b40, username=0x1a498f0 "mis_ig") at postgres.c:5170
> #28 0x007a0390 in BackendRun (port=0x1a185f0) at postmaster.c:5915
> #29 BackendStartup (port=0x1a185f0) at postmaster.c:5484
> #30 ServerLoop () at postmaster.c:2163
> #31 0x007a3159 in PostmasterMain (argc=, 
> argv=) at postmaster.c:1454
> #32 0x004a52b9 in main (argc=9, argv=0x1a20d10) at main.c:226
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1194) Add EncryptionZones related RPC

2017-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084492#comment-16084492
 ] 

Vineet Goel commented on HAWQ-1194:
---

Could you please update the "Fix Version/s:" field with the appropriate version 
number? I think it should be 2.3.0.0.

> Add EncryptionZones related RPC
> ---
>
> Key: HAWQ-1194
> URL: https://issues.apache.org/jira/browse/HAWQ-1194
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: libhdfs
>Reporter: Hongxu Ma
>Assignee: Amy
> Fix For: backlog
>
>
> Add createEncryption, getEZForPath, listEncryptionZones RPC



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HAWQ-1480) Packing a core file in hawq

2017-06-05 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16037818#comment-16037818
 ] 

Vineet Goel commented on HAWQ-1480:
---

Shubham, I suggest you submit a PR. Thanks!

> Packing a core file in hawq
> ---
>
> Key: HAWQ-1480
> URL: https://issues.apache.org/jira/browse/HAWQ-1480
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Command Line Tools
>Reporter: Shubham Sharma
>Assignee: Radar Lei
>
> Currently there is no way in HAWQ to pack a core file together with its 
> context – the executable and the application and system shared libraries. 
> This package can later be unpacked on another system, which helps in 
> debugging. It is a useful feature for quickly gathering all the data needed 
> from a crash/core generated on a system so it can be analyzed later.
> Another open source project, Greenplum, uses a script 
> [https://github.com/greenplum-db/gpdb/blob/master/gpMgmt/sbin/packcore] to 
> collect this information. I tested this script against a HAWQ installation 
> and it collects the information needed for debugging.
> Can this be merged into HAWQ? If yes, I can submit a pull request and test it.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HAWQ-1472) Support 'alter role set' statements in HAWQ

2017-05-25 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1472:
--
Description: 
Support for 'alter role set' statements should be added in HAWQ.

```
gpadmin=# alter role gpadmin SET search_path TO myschema;
ERROR:  Cannot support alter role set statement yet
```

https://github.com/apache/incubator-hawq/blob/master/src/backend/tcop/utility.c#L1654


  was:
Support for 'alter role set' statements should be added in HAWQ.

```
gpadmin=# alter role gpadmin SET search_path TO myschema;
ERROR:  Cannot support alter role set statement yet
```

https://github.com/apache/incubator-hawq/blob/master/
src/backend/tcop/utility.c#L1654



> Support 'alter role set' statements in HAWQ
> ---
>
> Key: HAWQ-1472
> URL: https://issues.apache.org/jira/browse/HAWQ-1472
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Core, DDL
>Reporter: Vineet Goel
>Assignee: Ed Espino
> Fix For: backlog
>
>
> Support for 'alter role set' statements should be added in HAWQ.
> ```
> gpadmin=# alter role gpadmin SET search_path TO myschema;
> ERROR:  Cannot support alter role set statement yet
> ```
> https://github.com/apache/incubator-hawq/blob/master/src/backend/tcop/utility.c#L1654



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1472) Support 'alter role set' statements in HAWQ

2017-05-25 Thread Vineet Goel (JIRA)
Vineet Goel created HAWQ-1472:
-

 Summary: Support 'alter role set' statements in HAWQ
 Key: HAWQ-1472
 URL: https://issues.apache.org/jira/browse/HAWQ-1472
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: Core, DDL
Reporter: Vineet Goel
Assignee: Ed Espino
 Fix For: backlog


Support for 'alter role set' statements should be added in HAWQ.

```
gpadmin=# alter role gpadmin SET search_path TO myschema;
ERROR:  Cannot support alter role set statement yet
```

https://github.com/apache/incubator-hawq/blob/master/
src/backend/tcop/utility.c#L1654




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1467) PXF HiveVectorizedORC profile should support Predicate Pushdown

2017-05-12 Thread Vineet Goel (JIRA)
Vineet Goel created HAWQ-1467:
-

 Summary: PXF HiveVectorizedORC profile should support Predicate 
Pushdown
 Key: HAWQ-1467
 URL: https://issues.apache.org/jira/browse/HAWQ-1467
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: PXF
Reporter: Vineet Goel
Assignee: Vineet Goel
 Fix For: backlog


The PXF HiveVectorizedORC profile should support predicate pushdown, using the 
same operator set and data type set as the HiveORC profile, so that queries can 
benefit from pushdown performance.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1466) PXF HiveVectorizedORC profile should handle complex types

2017-05-12 Thread Vineet Goel (JIRA)
Vineet Goel created HAWQ-1466:
-

 Summary: PXF HiveVectorizedORC profile should handle complex types
 Key: HAWQ-1466
 URL: https://issues.apache.org/jira/browse/HAWQ-1466
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: PXF
Reporter: Vineet Goel
Assignee: Vineet Goel
 Fix For: backlog


The PXF HiveVectorizedORC profile should handle complex Hive types (array, map, 
struct, union, etc.) and treat them as text in HAWQ, similar to the HiveORC 
profile.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1082) Do not send filter information on fragmenter call

2017-05-08 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16001812#comment-16001812
 ] 

Vineet Goel commented on HAWQ-1082:
---

Looks like there are two "vineet goel" users in JIRA. Sorry about assigning you 
the issue by mistake.

> Do not send filter information on fragmenter call
> -
>
> Key: HAWQ-1082
> URL: https://issues.apache.org/jira/browse/HAWQ-1082
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: backlog
>
>
> As of now, the HAWQ master evaluates and sends the X-GP-FILTER value. Only one 
> fragmenter in PXF uses this value - HiveDataFragmenter - which loads metadata 
> only for partitions that satisfy the filter condition.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1442) Upgrade Parquet format version in HAWQ

2017-04-26 Thread Vineet Goel (JIRA)
Vineet Goel created HAWQ-1442:
-

 Summary: Upgrade Parquet format version in HAWQ
 Key: HAWQ-1442
 URL: https://issues.apache.org/jira/browse/HAWQ-1442
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: Storage
Reporter: Vineet Goel
Assignee: Ed Espino
 Fix For: backlog


The Parquet format version in HAWQ should be upgraded to the latest version so 
that it offers better compatibility with other Hadoop tools/engines that use 
Parquet, and so that HAWQ can take advantage of the new version's performance 
optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1416) hawq_toolkit administrative schema missing in HAWQ installation

2017-03-27 Thread Vineet Goel (JIRA)
Vineet Goel created HAWQ-1416:
-

 Summary: hawq_toolkit administrative schema missing in HAWQ 
installation
 Key: HAWQ-1416
 URL: https://issues.apache.org/jira/browse/HAWQ-1416
 Project: Apache HAWQ
  Issue Type: Bug
  Components: Command Line Tools, DDL
Reporter: Vineet Goel
Assignee: Ed Espino
 Fix For: 2.3.0.0-incubating


The hawq_toolkit administrative schema is not pre-installed with HAWQ, but it 
should be available once HAWQ is installed and initialized.

The current workaround seems to be a manual command to install it:
psql -f /usr/local/hawq/share/postgresql/gp_toolkit.sql



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1108) Add JDBC PXF Plugin

2017-02-21 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15877533#comment-15877533
 ] 

Vineet Goel commented on HAWQ-1108:
---

Hi [~jiadx] any updates on this? 
Thanks

> Add JDBC PXF Plugin
> ---
>
> Key: HAWQ-1108
> URL: https://issues.apache.org/jira/browse/HAWQ-1108
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Michael Andre Pearce (IG)
>Assignee: Devin Jia
>
> On the back of the work in:
> https://issues.apache.org/jira/browse/HAWQ-779
> we would like to add a JDBC implementation to the HAWQ plugins.
> There are currently two noted implementations openly available on GitHub:
> 1) https://github.com/kojec/pxf-field/tree/master/jdbc-pxf-ext
> 2) https://github.com/inspur-insight/pxf-plugin/tree/master/pxf-jdbc
> The latter (2) is an improved version of the former (1) and is also what the 
> HAWQ-779 changes were meant to support.
> [~jiadx] would you be happy to contribute the source as Apache 2 licensed open 
> source?
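
Whichever implementation ends up being contributed, the core of a JDBC plugin is 
reading rows from the external RDBMS over plain JDBC and handing them back to 
PXF. A conceptual sketch of that core (the class and method below are 
illustrative only and are not the actual PXF accessor/resolver API):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Conceptual core of a JDBC-backed reader: run a query against the external
// database and return the rows as simple value lists. Class and method names
// are illustrative and do not reflect the real PXF plugin interfaces.
public class JdbcReadSketch {
    public static List<List<Object>> readAll(String jdbcUrl, String user,
                                             String password, String query) throws Exception {
        List<List<Object>> rows = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(query)) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                List<Object> row = new ArrayList<>(cols);
                for (int i = 1; i <= cols; i++) {
                    row.add(rs.getObject(i));   // JDBC columns are 1-based
                }
                rows.add(row);
            }
        }
        return rows;
    }
}
{code}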



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HAWQ-1320) Remove PXF version references from the code repo

2017-02-09 Thread Vineet Goel (JIRA)
Vineet Goel created HAWQ-1320:
-

 Summary: Remove PXF version references from the code repo
 Key: HAWQ-1320
 URL: https://issues.apache.org/jira/browse/HAWQ-1320
 Project: Apache HAWQ
  Issue Type: Task
  Components: PXF
Reporter: Vineet Goel
Assignee: Ed Espino
 Fix For: 2.1.0.0-incubating


Currently, the PXF version is statically defined in the codebase in the 
gradle.properties file. It would be better to pass these version strings in 
dynamically when running make.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HAWQ-1197) Add note about default_hash_table_bucket_number to PXF section

2016-12-20 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15765741#comment-15765741
 ] 

Vineet Goel commented on HAWQ-1197:
---

Currently, queries on PXF external tables use default_hash_table_bucket_number 
as the static number of virtual segments to process the query. We don't want to 
recommend changing this parameter for tuning purposes, as changing the 
parameter would affect HASH distribution on HAWQ internal tables, requiring a 
re-distribution of data. The future enhancement goal is to have PXF use 
something more dynamic instead.

> Add note about default_hash_table_bucket_number to PXF section
> --
>
> Key: HAWQ-1197
> URL: https://issues.apache.org/jira/browse/HAWQ-1197
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Jane Beckman
>Assignee: David Yozie
>
> The discussion of default_hash_table_bucket_number controlling the number of 
> segments in PXF external table queries is in Best Practices, but not in the 
> PXF sections.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1195) Synchrony:Union not working on external tables ERROR:"Two or more external tables use the same error table ""xxxxxxx"" in a statement (execMain.c:274)"

2016-12-20 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15765712#comment-15765712
 ] 

Vineet Goel commented on HAWQ-1195:
---

Could you please change the "Fix Version/s:" field to the correct version where 
this commit was made? Thanks

> Synchrony:Union not working on external tables ERROR:"Two or more external 
> tables use the same error table ""xxx"" in a statement (execMain.c:274)"
> ---
>
> Key: HAWQ-1195
> URL: https://issues.apache.org/jira/browse/HAWQ-1195
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: External Tables
>Reporter: Ming LI
>Assignee: Ming LI
> Fix For: backlog
>
>
> Hello,
> The user creates an external table and defines its error table. He then runs 
> a union over the same external table with different WHERE conditions, which 
> returns the error: ERROR:  Two or more external tables use the same error table 
> "err_ext_pdr_cdci_pivotal_request_43448" in a statement (execMain.c:274)
> Below is the master log from my reproduction (the whole log file is attached):
> {code}
> 2016-11-29 22:49:51.976864 
> PST,"gpadmin","postgres",p769199,th-2123704032,"[local]",,2016-11-29 22:46:14 
> PST,1260,con72,cmd10,seg-1,,,x1260,sx1,"ERROR","XX000","Two or more external 
> tables use the same error table ""err_ext_pdr_cdci_pivotal_request_43448"" in 
> a statement (execMain.c:274)",,"select current_account_nbr,yearmonthint, 
> bank_name, first_date_open, max_cr_limit, care_credit_flag, cc1_flag, 
> partition_value, 'US' as loc from pdr_cdci_pivotal_request_43448 where 
> care_credit_flag<1
> union
> select current_account_nbr,yearmonthint, bank_name, first_date_open, 
> max_cr_limit, care_credit_flag, cc1_flag, partition_value, 'Non-US' as loc 
> from pdr_cdci_pivotal_request_43448 where 
> care_credit_flag=1;",0,,"execMain.c",274,"Stack trace:
> 10x8c5858 postgres errstart (??:0)
> 20x8c75db postgres elog_finish (??:0)
> 30x65f669 postgres  (??:0)
> 40x77d06a postgres walk_plan_node_fields (??:0)
> 50x77e3ee postgres plan_tree_walker (??:0)
> 60x77c70a postgres expression_tree_walker (??:0)
> 70x77e35d postgres plan_tree_walker (??:0)
> 80x77d06a postgres walk_plan_node_fields (??:0)
> 90x77dfe6 postgres plan_tree_walker (??:0)
> 10   0x77d06a postgres walk_plan_node_fields (??:0)
> 11   0x77e1e5 postgres plan_tree_walker (??:0)
> 12   0x77d06a postgres walk_plan_node_fields (??:0)
> 13   0x77dfe6 postgres plan_tree_walker (??:0)
> 14   0x77d06a postgres walk_plan_node_fields (??:0)
> 15   0x77e1e5 postgres plan_tree_walker (??:0)
> 16   0x66079b postgres ExecutorStart (??:0)
> 17   0x7ebf1d postgres PortalStart (??:0)
> 18   0x7e4288 postgres  (??:0)
> 19   0x7e54c2 postgres PostgresMain (??:0)
> 20   0x797d50 postgres  (??:0)
> 21   0x79ab19 postgres PostmasterMain (??:0)
> 22   0x4a4069 postgres main (??:0)
> 23   0x7fd97d486d5d libc.so.6 __libc_start_main (??:0)
> 24   0x4a40e9 postgres  (??:0)
> "
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1111) Support for IN() operator in PXF

2016-10-18 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-:
--
Summary: Support for IN() operator in PXF  (was: Support for IN() operation 
in PXF)

> Support for IN() operator in PXF
> 
>
> Key: HAWQ-
> URL: https://issues.apache.org/jira/browse/HAWQ-
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Vineet Goel
>Assignee: Lei Chang
>
> HAWQ PXF external tables should be optimized for the IN() operator so that 
> users get the benefit of predicate pushdown. To achieve this, the HAWQ bridge 
> must send the serialized expression for the IN() operator to PXF.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-1111) Support for IN() operation in PXF

2016-10-17 Thread Vineet Goel (JIRA)
Vineet Goel created HAWQ-:
-

 Summary: Support for IN() operation in PXF
 Key: HAWQ-
 URL: https://issues.apache.org/jira/browse/HAWQ-
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: PXF
Reporter: Vineet Goel
Assignee: Lei Chang


HAWQ PXF external tables should be optimized for the IN() operator so that 
users get the benefit of predicate pushdown. To achieve this, the HAWQ bridge 
must send the serialized expression for the IN() operator to PXF.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1082) Do not send filter information on fragmenter call

2016-09-30 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15537447#comment-15537447
 ] 

Vineet Goel commented on HAWQ-1082:
---

I think this was assigned to me by mistake. I am not a contributor on the 
Apache HAWQ project.

> Do not send filter information on fragmenter call
> -
>
> Key: HAWQ-1082
> URL: https://issues.apache.org/jira/browse/HAWQ-1082
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
> Fix For: backlog
>
>
> As of now, the HAWQ master evaluates and sends the X-GP-FILTER value. Only one 
> fragmenter in PXF uses this value - HiveDataFragmenter - which loads metadata 
> only for partitions that satisfy the filter condition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-1082) Do not send filter information on fragmenter call

2016-09-30 Thread Vineet Goel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-1082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Goel updated HAWQ-1082:
--
Assignee: (was: Vineet Goel)

> Do not send filter information on fragmenter call
> -
>
> Key: HAWQ-1082
> URL: https://issues.apache.org/jira/browse/HAWQ-1082
> Project: Apache HAWQ
>  Issue Type: Improvement
>  Components: PXF
>Reporter: Oleksandr Diachenko
> Fix For: backlog
>
>
> As of now, the HAWQ master evaluates and sends the X-GP-FILTER value. Only one 
> fragmenter in PXF uses this value - HiveDataFragmenter - which loads metadata 
> only for partitions that satisfy the filter condition.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1036) Support user impersonation in PXF for external tables

2016-09-02 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15457649#comment-15457649
 ] 

Vineet Goel commented on HAWQ-1036:
---

This is a useful feature needed in PXF for external tables access, for 
security-sensitive data access in HDFS. My thoughts:

1) As the description/title suggests, we should keep the scope of this JIRA to 
PXF External table access, not HAWQ Internal tables. 
2) Changing HAWQ's HDFS storage permission/ACL policy should be out of scope. 
That is a complex and separate body of work, and I am not sure it solves a 
problem for HAWQ users. If HAWQ is your SQL access point, then the assumption 
is that HAWQ database authorization provides comprehensive access control on 
internal tables. Ranger integration (HAWQ-256) plays a role here as well. All 
HAWQ data files share the same HDFS permission model.
3) Lili, given the scope is limited to PXF external tables, some of the 
questions you asked above may not be an issue in the PXF case. Is that right?
4) Alastair, good point on the SET SESSION scenario.

Things I’m wondering about, that may need research/discussion:

a) Impersonation might apply the same way to readable as well as writable PXF 
tables. True?
b) External authentication such as LDAP is very important in this case, as is 
Ranger, if there are issues with trusting superusers, who could create roles 
and database logins to impersonate users for PXF external table access.
c) Does anything change if Hive is using “SQL Standard-Based Authorization” 
instead of Storage-Based Authorization? Do HDFS files have the same ACLs & 
permissions in both cases, for external table reads?


> Support user impersonation in PXF for external tables
> -
>
> Key: HAWQ-1036
> URL: https://issues.apache.org/jira/browse/HAWQ-1036
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Alastair "Bell" Turner
>Assignee: Goden Yao
>Priority: Critical
> Fix For: backlog
>
> Attachments: HAWQ_Impersonation_rationale.txt
>
>
> Currently HAWQ executes all queries as the user running the HAWQ process or 
> the user running the PXF process, not as the user who issued the query via 
> ODBC/JDBC/... This restricts the options available for integrating with 
> existing security defined in HDFS, Hive, etc.
> Impersonation provides an alternative to Ranger integration (as discussed in 
> HAWQ-256) for consistent security across HAWQ, HDFS, Hive...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1013) Move HAWQ Ambari plugin to Apache HAWQ

2016-08-24 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435483#comment-15435483
 ] 

Vineet Goel commented on HAWQ-1013:
---

RPM packaging is a build-time decision, separate from source code management. 
There are pros and cons to both approaches. A separate RPM allows the 
flexibility of a quick install on the Ambari server node only, where the HAWQ 
binaries may not be necessary (depending on cluster size and topology).

> Move HAWQ Ambari plugin to Apache HAWQ
> --
>
> Key: HAWQ-1013
> URL: https://issues.apache.org/jira/browse/HAWQ-1013
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Ambari
>Reporter: Matt
>Assignee: Alexander Denissov
>  Labels: UX
> Fix For: backlog
>
>
> To add HAWQ and PXF to Ambari, users have to follow certain manual steps. 
> These manual steps include:
> - Adding HAWQ and PXF metainfo.xml files (containing metadata about the 
> service) under the stack to be installed.
> - Adding the repositories where the HAWQ and PXF RPMs reside, so that Ambari 
> can use them during installation. This requires updating repoinfo.xml under 
> the stack HAWQ and PXF are being added to.
> - Adding repositories to an existing stack managed by Ambari requires adding 
> the repositories using the Ambari REST endpoint.
> The HAWQ Ambari plugin automates the above steps using a script.
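
For context on the last step, registering a repository against an existing stack 
is a single authenticated REST call to the Ambari server. A rough sketch is 
below; the stack name/version, OS family, repository id, credentials and base 
URL are assumptions for illustration, not the values used by the actual plugin 
script:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative only: PUT a repository base URL for a stack/OS into Ambari.
// Endpoint path, stack version, OS family, repo id and credentials are
// assumptions for this sketch.
public class RegisterRepoSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://ambari-server:8080/api/v1/stacks/HDP/versions/2.5"
                + "/operating_systems/redhat7/repositories/HAWQ-2.0");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("X-Requested-By", "ambari");
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        String body = "{\"Repositories\": {\"base_url\": "
                + "\"http://repo.example.com/hawq\", \"verify_base_url\": true}}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
{code}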



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-1013) Move HAWQ Ambari plugin to Apache HAWQ

2016-08-24 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435462#comment-15435462
 ] 

Vineet Goel commented on HAWQ-1013:
---

This JIRA is actually not about moving the Ambari plugin to the HAWQ repo; 
perhaps the description should be changed.
The Ambari plugin requires Ambari to be aware of a service, and that plugin 
code already resides in the Apache Ambari repo, here:
https://github.com/apache/ambari/tree/trunk/ambari-server/src/main/resources/common-services/HAWQ/2.0.0

To enable the Ambari plugin, a script is required at HAWQ install time to make 
an Ambari "Stack" aware of the service being managed. That script and its 
implementation are tightly coupled with the service (HAWQ) itself and should be 
managed in the service (HAWQ) repo. Hope this helps.

> Move HAWQ Ambari plugin to Apache HAWQ
> --
>
> Key: HAWQ-1013
> URL: https://issues.apache.org/jira/browse/HAWQ-1013
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: Ambari
>Reporter: Matt
>Assignee: Alexander Denissov
>  Labels: UX
> Fix For: backlog
>
>
> To add HAWQ and PXF to Ambari, users have to follow certain manual steps. 
> These manual steps include:
> - Adding HAWQ and PXF metainfo.xml files (containing metadata about the 
> service) under the stack to be installed.
> - Adding the repositories where the HAWQ and PXF RPMs reside, so that Ambari 
> can use them during installation. This requires updating repoinfo.xml under 
> the stack HAWQ and PXF are being added to.
> - Adding repositories to an existing stack managed by Ambari requires adding 
> the repositories using the Ambari REST endpoint.
> The HAWQ Ambari plugin automates the above steps using a script.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-256) Integrate Security with Apache Ranger

2016-08-24 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434467#comment-15434467
 ] 

Vineet Goel commented on HAWQ-256:
--

I found this in the Hive documentation:

"The ADMIN permission in Ranger is the equivalent to the WITH GRANT OPTION in 
SQL standard-based authorization. However, the ADMIN permission gives the 
grantee the ability to grant all permissions rather than just the permissions 
possessed by the grantor. With SQL standard-based authorization, the WITH GRANT 
OPTION applies only to permissions possessed by the grantor."

This seems to suggest that "WITH GRANT OPTION" doesn't translate into the same 
behavior at the Ranger level. This is understandable and acceptable, I think. 
Ranger users and component (Hive or HAWQ) users are likely two separate groups, 
and they don't need to cross in their functions. This likely means that WITH 
GRANT OPTION on the CLI probably doesn't propagate into any Ranger policy 
updates and is ignored?

Secondly, I'm late to this discussion, but it seems like [~bosco] was 
suggesting we design this in such a way that "native component CLI commands" 
are not encouraged; rather, only the Ranger UI/APIs should be used to set those 
policies (if Ranger authorization is switched ON in the component). If that's 
the case, I like that idea, as it reduces design complexity. Hence, 
authorization changes made with GRANT and REVOKE statements on the component 
CLI must be disabled if Ranger authorization is switched ON. If Ranger is not 
in use, native component behavior remains unchanged. Users are expected not to 
flip back and forth between using Ranger and not using Ranger.

> Integrate Security with Apache Ranger
> -
>
> Key: HAWQ-256
> URL: https://issues.apache.org/jira/browse/HAWQ-256
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Michael Andre Pearce (IG)
>Assignee: Lili Ma
> Fix For: backlog
>
> Attachments: HAWQRangerSupportDesign.pdf
>
>
> Integrate security with Apache Ranger for a unified Hadoop security solution. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-19) Money type overflow

2016-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-19?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374128#comment-15374128
 ] 

Vineet Goel commented on HAWQ-19:
-

[~liming01] and [~ftian] - does this JIRA need to be targeted for 
"2.0.0.0-incubating" release? If not, we should change the Fix Version to 
"backlog" or another release. Please update the JIRA Fix Version field soon so 
"2.0.0.0-incubating" is not blocked.

> Money type overflow
> ---
>
> Key: HAWQ-19
> URL: https://issues.apache.org/jira/browse/HAWQ-19
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Feng Tian
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> Use the TPC-H schema, but change l_extendedprice to use the MONEY type and run 
> Q1; you should see negative amounts.
> I believe this is due to overflow.
> Side note: the Postgres 9 money type uses 8 bytes and returns the correct 
> result.
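
The overflow theory is easy to sanity-check with plain integer arithmetic: a 
4-byte money value holds signed 32-bit cents, which caps out around $21.4 
million, so a Q1-style sum over l_extendedprice wraps negative after only a few 
hundred large line items, while a 64-bit accumulator (what an 8-byte money type 
implies) stays correct. A small stand-alone illustration (pure arithmetic, not 
HAWQ code):

{code}
// Summing price values in signed 32-bit cents wraps negative long before a
// TPC-H Q1 group is exhausted; a 64-bit accumulator does not.
public class MoneyOverflowDemo {
    public static void main(String[] args) {
        int cents32 = 0;
        long cents64 = 0;
        int price = 5_000_000;                 // $50,000.00 per line item, in cents
        for (int i = 0; i < 500; i++) {        // only 500 rows
            cents32 += price;
            cents64 += price;
        }
        System.out.println("32-bit sum: " + cents32);  // -1794967296 (wrapped)
        System.out.println("64-bit sum: " + cents64);  // 2500000000
    }
}
{code}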



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-215) View gp_distributed_log and gp_distributed_xacts need to be removed if we don't want to support it anymore.

2016-07-12 Thread Vineet Goel (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15374110#comment-15374110
 ] 

Vineet Goel commented on HAWQ-215:
--

[~liming01] and [~doli] does this JIRA need to be targeted for 
"2.0.0.0-incubating" release? If not, we should change the Fix Version to 
"backlog" or another release. Please update the JIRA Fix Version field soon so 
"2.0.0.0-incubating" is not blocked.

> View gp_distributed_log and gp_distributed_xacts need to be removed if we 
> don't want to support it anymore.
> ---
>
> Key: HAWQ-215
> URL: https://issues.apache.org/jira/browse/HAWQ-215
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Dong Li
>Assignee: Ming LI
> Fix For: 2.0.0
>
>
> View gp_distributed_log depends on the built-in function gp_distributed_log(), 
> and gp_distributed_log() just returns null, so the view can't work at all.
> The same applies to view gp_distributed_xacts.
> {code}
> e=# select * from gp_distributed_log;
> ERROR:  function returning set of rows cannot return null value
> e=# select * from gp_distributed_xacts;
> ERROR:  function returning set of rows cannot return null value
> {code}
> Function gp_distributed_log is defined in gp_distributed_log.c:27.
> Function gp_distributed_xacts is defined in cdbdistributedxacts.c:44.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-895) Investigate migration to 3-digit Semantic versioning

2016-07-05 Thread Vineet Goel (JIRA)
Vineet Goel created HAWQ-895:


 Summary: Investigate migration to 3-digit Semantic versioning
 Key: HAWQ-895
 URL: https://issues.apache.org/jira/browse/HAWQ-895
 Project: Apache HAWQ
  Issue Type: Task
  Components: Core
Reporter: Vineet Goel
Assignee: Lei Chang


The current HAWQ code is tied to 4-digit versioning, which is related to 
library compatibility and was inherited from old Postgres. We should 
investigate the impact of switching to 3-digit semantic versioning 
(http://semver.org).
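
As a trivial illustration of what the switch would mean for version strings (the 
truncation rule below is only an assumption for discussion, not a decided 
migration policy):

{code}
// Illustrative only: assumed mapping from the current 4-digit scheme to a
// MAJOR.MINOR.PATCH triple by dropping the 4th, library-compatibility digit.
public class SemverSketch {
    static String toSemver(String fourDigit) {
        String[] parts = fourDigit.split("\\.");
        return parts[0] + "." + parts[1] + "." + parts[2];
    }

    public static void main(String[] args) {
        System.out.println(toSemver("2.0.1.0"));   // prints 2.0.1
    }
}
{code}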



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)