[jira] [Resolved] (HAWQ-627) When we run 128 partition in 128 nodes, it core in metadata cache
[ https://issues.apache.org/jira/browse/HAWQ-627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan Weng resolved HAWQ-627.
----------------------------
    Resolution: Fixed
    Fix Version/s: 2.0.0-beta-incubating

> When we run 128 partition in 128 nodes, it core in metadata cache
> -----------------------------------------------------------------
>
>            Key: HAWQ-627
>            URL: https://issues.apache.org/jira/browse/HAWQ-627
>        Project: Apache HAWQ
>     Issue Type: Bug
>       Reporter: Ivan Weng
>       Assignee: Lei Chang
>        Fix For: 2.0.0-beta-incubating
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HAWQ-627) When we run 128 partition in 128 nodes, it core in metadata cache
[ https://issues.apache.org/jira/browse/HAWQ-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225748#comment-15225748 ]

ASF GitHub Bot commented on HAWQ-627:
-------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/incubator-hawq/pull/556
[jira] [Commented] (HAWQ-625) Fix build failure on MAC for the fix of HAWQ-462.
[ https://issues.apache.org/jira/browse/HAWQ-625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225746#comment-15225746 ]

ASF GitHub Bot commented on HAWQ-625:
-------------------------------------

Github user ictmalili closed the pull request at:

    https://github.com/apache/incubator-hawq/pull/554

> Fix build failure on MAC for the fix of HAWQ-462.
> -------------------------------------------------
>
>            Key: HAWQ-625
>            URL: https://issues.apache.org/jira/browse/HAWQ-625
>        Project: Apache HAWQ
>     Issue Type: Sub-task
>     Components: External Tables, Hcatalog, PXF
>       Reporter: Lili Ma
>       Assignee: Lei Chang
>        Fix For: 2.0.0
>
> The function is declared at the end of the file but referenced before the
> declaration. Should declare the function first.
> {code}
> cdbquerycontextdispatching.c:3026:1: error: conflicting types for 'prepareDfsAddressForDispatch'
> prepareDfsAddressForDispatch(QueryContextInfo* cxt)
> ^
> cdbquerycontextdispatching.c:1777:3: note: previous implicit declaration is here
> prepareDfsAddressForDispatch(cxt);
> ^
> 2 warnings and 1 error generated.
> make[3]: *** [cdbquerycontextdispatching.o] Error 1
> make[2]: *** [cdb-recursive] Error 2
> make[1]: *** [all] Error 2
> make: *** [all] Error 2
> {code}
[jira] [Commented] (HAWQ-627) When we run 128 partition in 128 nodes, it core in metadata cache
[ https://issues.apache.org/jira/browse/HAWQ-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225744#comment-15225744 ]

ASF GitHub Bot commented on HAWQ-627:
-------------------------------------

Github user zhangh43 commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/556#issuecomment-205669831

    +1
[jira] [Commented] (HAWQ-627) When we run 128 partition in 128 nodes, it core in metadata cache
[ https://issues.apache.org/jira/browse/HAWQ-627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225742#comment-15225742 ]

ASF GitHub Bot commented on HAWQ-627:
-------------------------------------

GitHub user wengyanqing opened a pull request:

    https://github.com/apache/incubator-hawq/pull/556

    HAWQ-627. Fix core dump in metadata cache

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wengyanqing/incubator-hawq HAWQ-627

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/556.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #556

commit fa2600cfbf24d193a63f25e3286e238278348c6d
Author: ivan
Date:   2016-04-05T06:15:14Z

    HAWQ-627. Fix core dump in metadata cache
[jira] [Created] (HAWQ-627) When we run 128 partition in 128 nodes, it core in metadata cache
Ivan Weng created HAWQ-627:
------------------------------

    Summary: When we run 128 partition in 128 nodes, it core in metadata cache
        Key: HAWQ-627
        URL: https://issues.apache.org/jira/browse/HAWQ-627
    Project: Apache HAWQ
 Issue Type: Bug
   Reporter: Ivan Weng
   Assignee: Lei Chang
[jira] [Commented] (HAWQ-616) Mininum resource needed by copy random table from a file should be one.
[ https://issues.apache.org/jira/browse/HAWQ-616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225708#comment-15225708 ]

ASF GitHub Bot commented on HAWQ-616:
-------------------------------------

Github user zhangh43 closed the pull request at:

    https://github.com/apache/incubator-hawq/pull/544

> Mininum resource needed by copy random table from a file should be one.
> -----------------------------------------------------------------------
>
>            Key: HAWQ-616
>            URL: https://issues.apache.org/jira/browse/HAWQ-616
>        Project: Apache HAWQ
>     Issue Type: Bug
>     Components: Core
>       Reporter: Hubert Zhang
>       Assignee: Hubert Zhang
>
> Now we use hawq_rm_nvseg_for_copy_from_perquery as both the min and the max
> vseg number when copying a randomly distributed table from a file. Using it
> as the max vseg number is fine, but the min value should be one in case
> there is not enough resource.
[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails
[ https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225689#comment-15225689 ]

ASF GitHub Bot commented on HAWQ-462:
-------------------------------------

Github user kavinderd commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/503#issuecomment-205644754

    @ictmalili Oh ok cool. Thanks for taking care of it

> Querying Hcatalog in HA Secure Environment Fails
> ------------------------------------------------
>
>              Key: HAWQ-462
>              URL: https://issues.apache.org/jira/browse/HAWQ-462
>          Project: Apache HAWQ
>       Issue Type: Bug
>       Components: External Tables, Hcatalog, PXF
> Affects Versions: 2.0.0-beta-incubating
>         Reporter: Kavinder Dhaliwal
>         Assignee: Shivram Mani
>          Fix For: 2.0.0
>
> On an HA Secure Cluster querying a hive external table works:
> {code}
> create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b boolean) location
> ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive')
> format 'custom' (formatter='pxfwritable_import');
> select * from pxf_hive;
> {code}
> but querying the same table via hcatalog does not
> {code}
> SELECT * FROM hcatalog.default.hive_table;
> ERROR: Failed to acquire a delegation token for uri hdfs://localhost:8020/ (hd_work_mgr.c:930)
> {code}
> This should be fixed by the PR for https://issues.apache.org/jira/browse/HAWQ-317
[jira] [Created] (HAWQ-626) HAWQ stop segments should check if node alive first
Radar Lei created HAWQ-626:
------------------------------

    Summary: HAWQ stop segments should check if node alive first
        Key: HAWQ-626
        URL: https://issues.apache.org/jira/browse/HAWQ-626
    Project: Apache HAWQ
 Issue Type: Bug
   Reporter: Radar Lei
   Assignee: Lei Chang

Currently 'hawq stop allsegments' checks whether a HAWQ process is running on each node and skips the node if not. As a result it treats a node that is not alive the same as a node with no HAWQ process running. If a node is not alive, it should error out instead, unless '--ignore-bad-hosts' is given.
[jira] [Commented] (HAWQ-617) Add a flag to hawq config to allow skipping hosts on which ssh fails and continue with syncing configurations files
[ https://issues.apache.org/jira/browse/HAWQ-617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225614#comment-15225614 ]

ASF GitHub Bot commented on HAWQ-617:
-------------------------------------

Github user radarwave commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/546#issuecomment-205627238

    BTW, it's better to squash the commits, since one commit contains merged content and an unformatted name. See the commit 'c5c7d8fc0757822d21e18958e03d1e536fc27a56': "Merge branch 'master' into HAWQ-617"

> Add a flag to hawq config to allow skipping hosts on which ssh fails and
> continue with syncing configurations files
> ------------------------------------------------------------------------
>
>            Key: HAWQ-617
>            URL: https://issues.apache.org/jira/browse/HAWQ-617
>        Project: Apache HAWQ
>     Issue Type: Bug
>       Reporter: bhuvnesh chaudhary
>       Assignee: Lei Chang
>
> Add a flag to hawq config to allow skipping hosts on which ssh fails and
> continue with syncing configurations files.
> Currently, if there is any one bad host, sync operations fail. Hawq activate
> standby uses hawq config to update the hawq-site.xml configuration; however,
> if the active master host is down, it fails.
[jira] [Commented] (HAWQ-617) Add a flag to hawq config to allow skipping hosts on which ssh fails and continue with syncing configurations files
[ https://issues.apache.org/jira/browse/HAWQ-617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225610#comment-15225610 ]

ASF GitHub Bot commented on HAWQ-617:
-------------------------------------

Github user radarwave commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/546#issuecomment-205625040

    @bhuvnesh2703 Currently 'hawq stop' works as if '--ignore-bad-hosts' were always enabled. But 'hawq stop' should only check whether a HAWQ process is running on the node, and it should fail if the node is not available. We will fix that in another jira. It's OK to apply the option to 'hawq stop' later when we fix the above check issues, but if it is convenient, please also apply it to 'hawq stop' now so the option works for both start and stop. Thanks.
[jira] [Commented] (HAWQ-625) Fix build failure on MAC for the fix of HAWQ-462.
[ https://issues.apache.org/jira/browse/HAWQ-625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225608#comment-15225608 ]

ASF GitHub Bot commented on HAWQ-625:
-------------------------------------

Github user jiny2 commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/554#issuecomment-205624141

    +1
[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails
[ https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225604#comment-15225604 ]

ASF GitHub Bot commented on HAWQ-462:
-------------------------------------

Github user ictmalili commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/503#issuecomment-205623491

    @kavinderd I have fixed this. Could you review the pull request? https://github.com/apache/incubator-hawq/pull/554
[jira] [Commented] (HAWQ-625) Fix build failure on MAC for the fix of HAWQ-462.
[ https://issues.apache.org/jira/browse/HAWQ-625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225605#comment-15225605 ]

ASF GitHub Bot commented on HAWQ-625:
-------------------------------------

Github user zhangh43 commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/554#issuecomment-205623494

    +1
[jira] [Commented] (HAWQ-625) Fix build failure on MAC for the fix of HAWQ-462.
[ https://issues.apache.org/jira/browse/HAWQ-625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225597#comment-15225597 ]

ASF GitHub Bot commented on HAWQ-625:
-------------------------------------

GitHub user ictmalili opened a pull request:

    https://github.com/apache/incubator-hawq/pull/554

    HAWQ-625. Fix the build failure on MAC: function referenced before declaration

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ictmalili/incubator-hawq HAWQ-625

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-hawq/pull/554.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #554

commit df5337c26f052f6b0dbc1a02b389ef30a9e9bfa4
Author: Lili Ma
Date:   2016-04-05T03:17:41Z

    HAWQ-625. Fix the build failure on MAC: function referenced before declaration
[jira] [Commented] (HAWQ-617) Add a flag to hawq config to allow skipping hosts on which ssh fails and continue with syncing configurations files
[ https://issues.apache.org/jira/browse/HAWQ-617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225586#comment-15225586 ]

ASF GitHub Bot commented on HAWQ-617:
-------------------------------------

Github user radarwave commented on a diff in the pull request:

    https://github.com/apache/incubator-hawq/pull/546#discussion_r58481787

    --- Diff: .gitignore ---
    @@ -1,54 +1 @@
    -# Object files
    --- End diff --

    I see you removed all the lines in .gitignore; I think we should not modify it here.
[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails
[ https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225582#comment-15225582 ]

ASF GitHub Bot commented on HAWQ-462:
-------------------------------------

Github user kavinderd commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/503#issuecomment-205621695

    @ictmalili will fix asap
[jira] [Updated] (HAWQ-625) Fix build failure on MAC for the fix of HAWQ-462.
[ https://issues.apache.org/jira/browse/HAWQ-625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lili Ma updated HAWQ-625:
-------------------------
    Description:
The function is declared at end of file, but referenced before the declaration. Should declare the function first.
{code}
cdbquerycontextdispatching.c:3026:1: error: conflicting types for 'prepareDfsAddressForDispatch'
prepareDfsAddressForDispatch(QueryContextInfo* cxt)
^
cdbquerycontextdispatching.c:1777:3: note: previous implicit declaration is here
prepareDfsAddressForDispatch(cxt);
^
2 warnings and 1 error generated.
make[3]: *** [cdbquerycontextdispatching.o] Error 1
make[2]: *** [cdb-recursive] Error 2
make[1]: *** [all] Error 2
make: *** [all] Error 2
{code}
    (was: The function is declared at end of file, but referenced before the declaration. Should declare the function first.)

    Summary: Fix build failure on MAC for the fix of HAWQ-462. (was: Fix build failure on MAC for the fix)
[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails
[ https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225578#comment-15225578 ]

ASF GitHub Bot commented on HAWQ-462:
-------------------------------------

Github user ictmalili commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/503#issuecomment-205621290

    @shivzone @kavinderd, actually this commit blocks the build on MAC, since the function is referenced before the declaration.
    {code}
    cdbquerycontextdispatching.c:3026:1: error: conflicting types for 'prepareDfsAddressForDispatch'
    prepareDfsAddressForDispatch(QueryContextInfo* cxt)
    ^
    cdbquerycontextdispatching.c:1777:3: note: previous implicit declaration is here
    prepareDfsAddressForDispatch(cxt);
    ^
    2 warnings and 1 error generated.
    make[3]: *** [cdbquerycontextdispatching.o] Error 1
    make[2]: *** [cdb-recursive] Error 2
    make[1]: *** [all] Error 2
    make: *** [all] Error 2
    {code}
[jira] [Created] (HAWQ-625) Fix build failure on MAC for the fix
Lili Ma created HAWQ-625:
-------------------------

    Summary: Fix build failure on MAC for the fix
        Key: HAWQ-625
        URL: https://issues.apache.org/jira/browse/HAWQ-625
    Project: Apache HAWQ
 Issue Type: Sub-task
   Reporter: Lili Ma
   Assignee: Lei Chang

The function is declared at the end of the file but referenced before the declaration. Should declare the function first.
[jira] [Created] (HAWQ-624) Copy a large table to output file, multiple QEs are started, but only one QE is assigned actual task.
Lili Ma created HAWQ-624:
-------------------------

    Summary: Copy a large table to output file, multiple QEs are started, but only one QE is assigned actual task.
        Key: HAWQ-624
        URL: https://issues.apache.org/jira/browse/HAWQ-624
    Project: Apache HAWQ
 Issue Type: Bug
 Components: Core
   Reporter: Lili Ma
   Assignee: Lei Chang

Create a large table a, for example with 1,000,000,000 records. Then run the copy command "copy a to '/tmp/a'". Six QEs are started, but only one QE is assigned a valid split to scan; the other QEs' scan split ranges are NULL. Attach to one running QE to see the split information for each QE.
{code}
(lldb) bt
* thread #1: tid = 0x6a7b98, 0x0001016bcdd6 postgres`appendonly_getnext(scan=0x7fe5c4046430, direction=ForwardScanDirection, slot=0x7fe5c5000c40) + 54 at appendonlyam.c:1614, queue = 'com.apple.main-thread', stop reason = step over
    frame #0: 0x0001016bcdd6 postgres`appendonly_getnext(scan=0x7fe5c4046430, direction=ForwardScanDirection, slot=0x7fe5c5000c40) + 54 at appendonlyam.c:1614
  * frame #1: 0x0001017f60ec postgres`CopyTo(cstate=0x7fe5c6802030) + 2060 at copy.c:2360
    frame #2: 0x0001017ee4dc postgres`DoCopyTo(cstate=0x7fe5c6802030) + 1452 at copy.c:1905
    frame #3: 0x0001017e4014 postgres`DoCopy(stmt=0x7fe5c5824758, queryString=0x7fe5c58230f2) + 10676 at copy.c:1686
    frame #4: 0x000101a992eb postgres`ProcessUtility(parsetree=0x7fe5c5824758, queryString=0x7fe5c7001c30, params=0x, isTopLevel='\x01', dest=0x7fe5c4037da0, completionTag=0x7fff5e619bb0) + 3467 at utility.c:1076
    frame #5: 0x000101a97f53 postgres`PortalRunUtility(portal=0x7fe5c5840230, utilityStmt=0x7fe5c5824758, isTopLevel='\x01', dest=0x7fe5c4037da0, completionTag=0x7fff5e619bb0) + 467 at pquery.c:1969
    frame #6: 0x000101a964a0 postgres`PortalRunMulti(portal=0x7fe5c5840230, isTopLevel='\x01', dest=0x7fe5c4037da0, altdest=0x7fe5c4037da0, completionTag=0x7fff5e619bb0) + 544 at pquery.c:2079
    frame #7: 0x000101a959fb postgres`PortalRun(portal=0x7fe5c5840230, count=9223372036854775807, isTopLevel='\x01', dest=0x7fe5c4037da0, altdest=0x7fe5c4037da0, completionTag=0x7fff5e619bb0) + 1291 at pquery.c:1596
    frame #8: 0x000101a8cfc9 postgres`exec_mpp_query(query_string=0x7fe5c58230f2, serializedQuerytree=0x7fe5c5823133, serializedQuerytreelen=980, serializedPlantree=0x, serializedPlantreelen=0, serializedParams=0x, serializedParamslen=0, serializedSliceInfo=0x, serializedSliceInfolen=0, serializedResource=0x7fe5c5823554, serializedResourceLen=41, seqServerHost=0x7fe5c582357d, seqServerPort=54307, localSlice=0) + 5049 at postgres.c:1414
    frame #9: 0x000101a8a4e6 postgres`PostgresMain(argc=250, argv=0x7fe5c4826e30, username=0x7fe5c4801670) + 9686 at postgres.c:4945
    frame #10: 0x000101a2d1cb postgres`BackendRun(port=0x7fe5c3c19360) + 1019 at postmaster.c:5889
    frame #11: 0x000101a2c2a2 postgres`BackendStartup(port=0x7fe5c3c19360) + 402 at postmaster.c:5484
    frame #12: 0x000101a28d94 postgres`ServerLoop + 1348 at postmaster.c:2163
    frame #13: 0x000101a27350 postgres`PostmasterMain(argc=9, argv=0x7fe5c3c1d300) + 5072 at postmaster.c:1454
    frame #14: 0x00010192c211 postgres`main(argc=9, argv=0x7fe5c3c1d300) + 993 at main.c:226
    frame #15: 0x7fff8642e5c9 libdyld.dylib`start + 1
(lldb) p *cstate->splits
(List) $43 = {
  type = T_List
  length = 1
  head = 0x7fe5c402fc18
  tail = 0x7fe5c402fc18
}
(lldb) p *(ListCell*)0x7fe5c402fc18
(ListCell) $44 = {
  data = (ptr_value = void * = 0x7fe5c402fa88, int_value = -1006437752, oid_value = 3288529544)
  next = 0x
}
(lldb) p *(SegFileSplitMapNode *)0x7fe5c402fa88
(SegFileSplitMapNode) $45 = {
  type = T_SegFileSplitMapNode
  relid = 16508
  splits = 0x7fe5c402fb48
}
(lldb) p *(List*)0x7fe5c402fb48
(List) $46 = {
  type = T_List
  length = 6
  head = 0x7fe5c402fb28
  tail = 0x7fe5c402fbf8
}
(lldb) p *(ListCell*)0x7fe5c402fb28
(ListCell) $47 = {
  data = (ptr_value = void * = 0x7fe5c402faf8, int_value = -1006437640, oid_value = 3288529656)
  next = 0x7fe5c402fb78
}
(lldb) p *(ListCell*)0x7fe5c402fb78
(ListCell) $48 = {
  data = (ptr_value = void * = 0x, int_value = 0, oid_value = 0)
  next = 0x7fe5c402fb98
}
(lldb) p *(ListCell*)0x7fe5c402fb98
(ListCell) $49 = {
  data = (ptr_value = void * = 0x, int_value = 0, oid_value = 0)
  next = 0x7fe5c402fbb8
}
(lldb) p *(ListCell*)0x7fe5c402fbb8
(ListCell) $50 = {
  data = (ptr_value = void * = 0x, int_value = 0, oid_value = 0)
  next = 0x7fe5c402fbd8
}
(lldb) p *(ListCell*)0x0
{code}
[jira] [Commented] (HAWQ-623) Resource quota request does not follow latest resource quota calculating logic
[ https://issues.apache.org/jira/browse/HAWQ-623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225496#comment-15225496 ]

ASF GitHub Bot commented on HAWQ-623:
-------------------------------------

Github user linwen commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/552#issuecomment-205586771

    +1

> Resource quota request does not follow latest resource quota calculating logic
> ------------------------------------------------------------------------------
>
>            Key: HAWQ-623
>            URL: https://issues.apache.org/jira/browse/HAWQ-623
>        Project: Apache HAWQ
>     Issue Type: Bug
>     Components: Resource Manager
>       Reporter: Yi Jin
>       Assignee: Yi Jin
>
[jira] [Commented] (HAWQ-623) Resource quota request does not follow latest resource quota calculating logic
[ https://issues.apache.org/jira/browse/HAWQ-623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225462#comment-15225462 ]

ASF GitHub Bot commented on HAWQ-623:
-------------------------------------

Github user yaoj2 commented on the pull request:

    https://github.com/apache/incubator-hawq/pull/552#issuecomment-205573835

    LGTM
[jira] [Commented] (HAWQ-564) QD hangs when connecting to resource manager
[ https://issues.apache.org/jira/browse/HAWQ-564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225460#comment-15225460 ] ASF GitHub Bot commented on HAWQ-564: - Github user yaoj2 commented on the pull request: https://github.com/apache/incubator-hawq/pull/550#issuecomment-205573503 +1
> QD hangs when connecting to resource manager
>
> Key: HAWQ-564
> URL: https://issues.apache.org/jira/browse/HAWQ-564
> Project: Apache HAWQ
> Issue Type: Bug
> Components: Resource Manager
> Affects Versions: 2.0.0
> Reporter: Chunling Wang
> Assignee: Yi Jin
>
> When we first inject a panic into a QE process, we run a query and the segment goes down. After the segment comes back up, we run another query and get the correct answer. Then we inject the same panic a second time. After the segment goes down and comes back up again, we run a query and find that the QD process hangs while connecting to the resource manager. Here is the backtrace when the QD hangs:
> {code}
> * thread #1: tid = 0x21d8be, 0x7fff890355be libsystem_kernel.dylib`poll + 10, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
>   * frame #0: 0x7fff890355be libsystem_kernel.dylib`poll + 10
>     frame #1: 0x000101daeafe postgres`processAllCommFileDescs + 158 at rmcomm_AsyncComm.c:156
>     frame #2: 0x000101db85f5 postgres`callSyncRPCRemote(hostname=0x7f9c19e00cd0, port=5437, sendbuff=0x7f9c1b918f50, sendbuffsize=80, sendmsgid=259, exprecvmsgid=2307, recvsmb=, errorbuf=0x00010230c1a0, errorbufsize=) + 645 at rmcomm_SyncComm.c:122
>     frame #3: 0x000101db2d85 postgres`acquireResourceFromRM [inlined] callSyncRPCToRM(sendbuff=0x7f9c1b918f50, sendbuffsize=, sendmsgid=259, exprecvmsgid=2307, recvsmb=0x7f9c1b918e70, errorbuf=, errorbufsize=1024) + 73 at rmcomm_QD2RM.c:2780
>     frame #4: 0x000101db2d3c postgres`acquireResourceFromRM(index=, sessionid=12, slice_size=462524016, iobytes=134217728, preferred_nodes=0x7f9c1a02d398, preferred_nodes_size=, max_seg_count_fix=, min_seg_count_fix=, errorbuf=, errorbufsize=) + 572 at rmcomm_QD2RM.c:742
>     frame #5: 0x000101c979e7 postgres`AllocateResource(life=QRL_ONCE, slice_size=5, iobytes=134217728, max_target_segment_num=1, min_target_segment_num=1, vol_info=0x7f9c1a02d398, vol_info_size=1) + 631 at pquery.c:796
>     frame #6: 0x000101e8c60f postgres`calculate_planner_segment_num(query=, resourceLife=QRL_ONCE, fullRangeTable=, intoPolicy=, sliceNum=5) + 14287 at cdbdatalocality.c:4207
>     frame #7: 0x000101c0f671 postgres`planner + 106 at planner.c:496
>     frame #8: 0x000101c0f607 postgres`planner(parse=0x7f9c1a02a140, cursorOptions=, boundParams=0x, resourceLife=QRL_ONCE) + 311 at planner.c:310
>     frame #9: 0x000101c8eb33 postgres`pg_plan_query(querytree=0x7f9c1a02a140, boundParams=0x, resource_life=QRL_ONCE) + 99 at postgres.c:837
>     frame #10: 0x000101c956ae postgres`exec_simple_query + 21 at postgres.c:911
>     frame #11: 0x000101c95699 postgres`exec_simple_query(query_string=0x7f9c1a028a30, seqServerHost=0x, seqServerPort=-1) + 1577 at postgres.c:1671
>     frame #12: 0x000101c91a4c postgres`PostgresMain(argc=, argv=, username=0x7f9c1b808cf0) + 9404 at postgres.c:4754
>     frame #13: 0x000101c4ae02 postgres`ServerLoop [inlined] BackendRun + 105 at postmaster.c:5889
>     frame #14: 0x000101c4ad99 postgres`ServerLoop at postmaster.c:5484
>     frame #15: 0x000101c4ad99 postgres`ServerLoop + 9593 at postmaster.c:2163
>     frame #16: 0x000101c47d3b postgres`PostmasterMain(argc=, argv=) + 5019 at postmaster.c:1454
>     frame #17: 0x000101bb1aa9 postgres`main(argc=9, argv=0x7f9c19c1eef0) + 1433 at main.c:209
>     frame #18: 0x7fff95e8c5c9 libdyld.dylib`start + 1
>   thread #2: tid = 0x21d8bf, 0x7fff890355be libsystem_kernel.dylib`poll + 10
>     frame #0: 0x7fff890355be libsystem_kernel.dylib`poll + 10
>     frame #1: 0x000101dfe723 postgres`rxThreadFunc(arg=) + 2163 at ic_udp.c:6251
>     frame #2: 0x7fff95e822fc libsystem_pthread.dylib`_pthread_body + 131
>     frame #3: 0x7fff95e82279 libsystem_pthread.dylib`_pthread_start + 176
>     frame #4: 0x7fff95e804b1 libsystem_pthread.dylib`thread_start + 13
>   thread #3: tid = 0x21d9c2, 0x7fff890343f6 libsystem_kernel.dylib`__select + 10
>     frame #0: 0x7fff890343f6 libsystem_kernel.dylib`__select + 10
>     frame #1: 0x000101e9d42e postgres`pg_usleep(microsec=) + 78 at pgsleep.c:43
>     frame #2: 0x000101db1a66 postgres`generateResourceRefreshHeartBeat(arg=0x7f9c19f02480) + 166 at rmcomm_QD2RM.c:1519
>     frame #3: 0
[jira] [Commented] (HAWQ-623) Resource quota request does not follow latest resource quota calculating logic
[ https://issues.apache.org/jira/browse/HAWQ-623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225454#comment-15225454 ] ASF GitHub Bot commented on HAWQ-623: - GitHub user jiny2 opened a pull request: https://github.com/apache/incubator-hawq/pull/552 HAWQ-623. Resource quota request does not follow latest resource quota calculating logic You can merge this pull request into a Git repository by running: $ git pull https://github.com/jiny2/incubator-hawq HAWQ-623 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/552.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #552 commit 852190f018baed481b09ebc8782ad81b9df20077 Author: YI JIN Date: 2016-04-05T01:01:16Z HAWQ-623. Resource quota request does not follow latest resource quota calculating logic > Resource quota request does not follow latest resource quota calculating logic > -- > > Key: HAWQ-623 > URL: https://issues.apache.org/jira/browse/HAWQ-623 > Project: Apache HAWQ > Issue Type: Bug > Components: Resource Manager >Reporter: Yi Jin >Assignee: Yi Jin > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HAWQ-623) Resource quota request does not follow latest resource quota calculating logic
[ https://issues.apache.org/jira/browse/HAWQ-623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Jin reassigned HAWQ-623: --- Assignee: Yi Jin (was: Lei Chang)
[jira] [Created] (HAWQ-623) Resource quota request does not follow latest resource quota calculating logic
Yi Jin created HAWQ-623: --- Summary: Resource quota request does not follow latest resource quota calculating logic Key: HAWQ-623 URL: https://issues.apache.org/jira/browse/HAWQ-623 Project: Apache HAWQ Issue Type: Bug Components: Resource Manager Reporter: Yi Jin Assignee: Lei Chang -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225439#comment-15225439 ] ASF GitHub Bot commented on HAWQ-615: - Github user hornn commented on a diff in the pull request: https://github.com/apache/incubator-hawq/pull/551#discussion_r58473430 --- Diff: pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcher.java --- @@ -62,11 +74,24 @@ public HiveMetadataFetcher(InputData md) { List metadataList = new ArrayList(); +if(tblsDesc.size() > 1) { --- End diff -- What if there are 2 tables but both are unsupported? In that case the function will return 0 tables. Is that an acceptable scenario? If so, perhaps add a test to cover it. Did you consider changing the condition to require at least one 'good' table: check at the end of the loop whether metadataList is empty, and throw an exception if it is. > PXF getMetadata endpoint fails if set of items described by passed pattern > have unsupported by Hawq Hive objects > > > Key: HAWQ-615 > URL: https://issues.apache.org/jira/browse/HAWQ-615 > Project: Apache HAWQ > Issue Type: Sub-task > Components: Hcatalog, PXF >Reporter: Oleksandr Diachenko >Assignee: Shivram Mani > Fix For: 2.0.0 > > > STR: > 1) Hive instance having at least one view object > 2) Read metadata from PXF using wildcard as a pattern: > http://localhost:51200/pxf/v14/Metadata/getMetadata?profile=hive&pattern=* > AR: > {code} > java.lang.UnsupportedOperationException: Hive views are not supported by HAWQ > > org.apache.hawq.pxf.plugins.hive.utilities.HiveUtilities.getHiveTable(HiveUtilities.java:79) > > org.apache.hawq.pxf.plugins.hive.HiveMetadataFetcher.getMetadata(HiveMetadataFetcher.java:67) > > org.apache.hawq.pxf.service.rest.MetadataResource.read(MetadataResource.java:107) > sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source) > > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > 
java.lang.reflect.Method.invoke(Method.java:606) > > com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) > > com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) > > com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) > > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) > > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > > com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) > > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > > com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) > > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) > > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) > > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) > > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) > > com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) > > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537) > > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699) > javax.servlet.http.HttpServlet.service(HttpServlet.java:731) > org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) > {code} > ER: > Response should contain set of items which are supported. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
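[Editorial note] The review suggestion above (skip unsupported tables when a pattern matches several, but still fail when nothing usable remains) can be sketched as follows. This is a hypothetical illustration, not the actual PXF `HiveMetadataFetcher` code: the class, the `filterSupported` method, and the toy `isSupported` predicate are all stand-ins.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the reviewed behavior: collect 'good' tables,
// silently skip unsupported ones, and throw only when the pattern
// matched no supported table at all.
public class MetadataFilterSketch {
    public static List<String> filterSupported(List<String> tables) {
        List<String> metadataList = new ArrayList<>();
        for (String table : tables) {
            if (isSupported(table)) {
                metadataList.add(table);   // keep supported tables
            }                              // skip unsupported ones
        }
        // Check at the end of the loop, as the review suggests:
        // if every matched table was unsupported, surface an error.
        if (metadataList.isEmpty()) {
            throw new UnsupportedOperationException(
                "None of the items matching the pattern are supported by HAWQ");
        }
        return metadataList;
    }

    // Toy predicate standing in for the real view/table check.
    static boolean isSupported(String table) {
        return !table.startsWith("view_");
    }
}
```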
[jira] [Commented] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225422#comment-15225422 ] ASF GitHub Bot commented on HAWQ-615: - Github user hornn commented on a diff in the pull request: https://github.com/apache/incubator-hawq/pull/551#discussion_r58473077 --- Diff: pxf/pxf-hive/src/test/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcherTest.java --- @@ -154,6 +155,115 @@ public void getTableMetadata() throws Exception { assertEquals("int4", field.getType()); } +@Test +public void getTableMetadataWithMultipleTables() throws Exception { +prepareConstruction(); + +fetcher = new HiveMetadataFetcher(inputData); + +String tablepattern = "*"; +String dbpattern = "*"; +String dbname = "default"; +String tablenamebase = "regulartable"; +String pattern = dbpattern + "." + tablepattern; + +List dbNames = new ArrayList(Arrays.asList(dbname)); +List tableNames = new ArrayList(); + +// Prepare for tables +List fields = new ArrayList(); +fields.add(new FieldSchema("field1", "string", null)); +fields.add(new FieldSchema("field2", "int", null)); +StorageDescriptor sd = new StorageDescriptor(); +sd.setCols(fields); + +// Mock hive tables returned from hive client +for(int index=1;index<=2;index++) { +String tableName = tablenamebase + index; +tableNames.add(tableName);; --- End diff -- two ;
[jira] [Resolved] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shivram Mani resolved HAWQ-615. --- Resolution: Fixed
[jira] [Closed] (HAWQ-577) Stream PXF metadata response
[ https://issues.apache.org/jira/browse/HAWQ-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shivram Mani closed HAWQ-577. - > Stream PXF metadata response > - > > Key: HAWQ-577 > URL: https://issues.apache.org/jira/browse/HAWQ-577 > Project: Apache HAWQ > Issue Type: Bug > Components: PXF >Reporter: Shivram Mani >Assignee: Shivram Mani > > The getMetadata API returns the metadata corresponding to the user-specified > pattern. There is no limit to the number of tables the pattern can correspond > to, and the current approach of building the JSON object in memory might not > scale. > We need to serialize the items inside a streaming object, similar to the > approach used for streaming the FragmentsResponse. > The same also applies to the debug function that prints the metadata of all the > items: if there are too many of them, the StringBuilder will run out of > memory. The solution in the fragments case was to log one fragment > at a time.
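[Editorial note] The streaming approach described in HAWQ-577 (write one item at a time instead of accumulating the whole JSON document in memory) can be sketched as follows. This is a hypothetical illustration, not the actual PXF `MetadataResponse` implementation: the class name, method, and the `PXFMetadata` wrapper key are assumptions, and the items are taken as pre-serialized JSON strings.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Hypothetical sketch: stream a JSON array of metadata items to an
// OutputStream one element at a time, so memory use does not grow
// with the number of tables matched by the pattern.
public class StreamingMetadataSketch {
    public static void writeItems(List<String> items, OutputStream out)
            throws IOException {
        out.write("{\"PXFMetadata\":[".getBytes(StandardCharsets.UTF_8));
        for (int i = 0; i < items.size(); i++) {
            if (i > 0) {
                out.write(',');               // separator between items
            }
            // each item is written (and can be logged) individually,
            // mirroring the FragmentsResponse approach
            out.write(items.get(i).getBytes(StandardCharsets.UTF_8));
        }
        out.write("]}".getBytes(StandardCharsets.UTF_8));
    }
}
```

In a servlet setting, `out` would be the response's output stream, so the serialized document is never held whole in a StringBuilder.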
[jira] [Resolved] (HAWQ-577) Stream PXF metadata response
[ https://issues.apache.org/jira/browse/HAWQ-577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shivram Mani resolved HAWQ-577. --- Resolution: Fixed
[jira] [Closed] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shivram Mani closed HAWQ-615. -
[jira] [Commented] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225271#comment-15225271 ] ASF GitHub Bot commented on HAWQ-615: - Github user shivzone commented on a diff in the pull request: https://github.com/apache/incubator-hawq/pull/551#discussion_r58466610 --- Diff: pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcher.java --- @@ -50,9 +50,21 @@ public HiveMetadataFetcher(InputData md) { client = HiveUtilities.initHiveClient(); } +/** + * Fetches metadata of hive tables corresponding to the given pattern + * For patterns matching more than one table, the tables are skipped. --- End diff -- will update
[jira] [Commented] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225225#comment-15225225 ] ASF GitHub Bot commented on HAWQ-615: - Github user sansanichfb commented on the pull request: https://github.com/apache/incubator-hawq/pull/551#issuecomment-205532919 LGTM other than small cosmetic comments.
[jira] [Commented] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225222#comment-15225222 ] ASF GitHub Bot commented on HAWQ-615: - Github user sansanichfb commented on a diff in the pull request: https://github.com/apache/incubator-hawq/pull/551#discussion_r58462813 --- Diff: pxf/pxf-hive/src/test/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcherTest.java --- @@ -154,6 +155,115 @@ public void getTableMetadata() throws Exception { assertEquals("int4", field.getType()); } +@Test +public void getTableMetadataWithMultipleTables() throws Exception { +prepareConstruction(); + +fetcher = new HiveMetadataFetcher(inputData); + +String tablepattern = "*"; +String dbpattern = "*"; +String dbname = "default"; +String tablenamebase = "regulartable"; +String pattern = dbpattern + "." + tablepattern; + +List dbNames = new ArrayList(Arrays.asList(dbname)); +List tableNames = new ArrayList(); + +// Prepare for tables +List fields = new ArrayList(); +fields.add(new FieldSchema("field1", "string", null)); +fields.add(new FieldSchema("field2", "int", null)); +StorageDescriptor sd = new StorageDescriptor(); +sd.setCols(fields); + +// Mock hive tables returned from hive client +for(int index=1;index<=2;index++) { --- End diff -- Add comment why starting from 1 not from 0? 
> PXF getMetadata endpoint fails if set of items described by passed pattern > have unsupported by Hawq Hive objects > > > Key: HAWQ-615 > URL: https://issues.apache.org/jira/browse/HAWQ-615 > Project: Apache HAWQ > Issue Type: Sub-task > Components: Hcatalog, PXF >Reporter: Oleksandr Diachenko >Assignee: Shivram Mani > Fix For: 2.0.0 > > > STR: > 1) Hive instance having at least one view object > 2) Read metadata from PXF using wildcard as a pattern: > http://localhost:51200/pxf/v14/Metadata/getMetadata?profile=hive&pattern=* > AR: > {code} > java.lang.UnsupportedOperationException: Hive views are not supported by HAWQ > > org.apache.hawq.pxf.plugins.hive.utilities.HiveUtilities.getHiveTable(HiveUtilities.java:79) > > org.apache.hawq.pxf.plugins.hive.HiveMetadataFetcher.getMetadata(HiveMetadataFetcher.java:67) > > org.apache.hawq.pxf.service.rest.MetadataResource.read(MetadataResource.java:107) > sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source) > > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > java.lang.reflect.Method.invoke(Method.java:606) > > com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) > > com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) > > com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) > > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) > > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > > com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) > > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > > com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) > > 
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) > > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) > > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) > > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) > > com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) > > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537) > > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699) > javax.servlet.http.HttpServlet.service(HttpServlet.java:731) > org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) > {code} > ER: > Response should contain set of items which are supported. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225220#comment-15225220 ] ASF GitHub Bot commented on HAWQ-615: - Github user sansanichfb commented on a diff in the pull request: https://github.com/apache/incubator-hawq/pull/551#discussion_r58462587 --- Diff: pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcher.java --- @@ -50,9 +50,21 @@ public HiveMetadataFetcher(InputData md) { client = HiveUtilities.initHiveClient(); } +/** + * Fetches metadata of hive tables corresponding to the given pattern + * For patterns matching more than one table, the tables are skipped. --- End diff -- Maybe "For patterns matching more than one table, the unsupported tables are skipped."?
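The javadoc under discussion describes the fix's behavior: when a pattern matches several Hive objects, unsupported ones (e.g. views) are skipped instead of failing the whole request. A hedged sketch of that behavior only — this is not the actual HiveMetadataFetcher code; the class name, fetchOne, and the view-detection rule are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class SkipUnsupportedSketch {
    // Stand-in for per-object metadata retrieval: views are unsupported
    // and throw, ordinary tables succeed.
    static String fetchOne(String name) {
        if (name.startsWith("view")) {
            throw new UnsupportedOperationException(
                "Hive views are not supported by HAWQ");
        }
        return name + ":metadata";
    }

    static List<String> getMetadata(List<String> matches) {
        List<String> result = new ArrayList<>();
        boolean multiple = matches.size() > 1;
        for (String name : matches) {
            try {
                result.add(fetchOne(name));
            } catch (UnsupportedOperationException e) {
                // Pattern matched more than one object: skip the
                // unsupported one. A single explicit match still fails
                // loudly, so the user sees the error.
                if (!multiple) {
                    throw e;
                }
            }
        }
        return result;
    }
}
```

With this shape, the wildcard query from the issue (`pattern=*`) would return the supported tables rather than a stack trace, matching the expected result "Response should contain set of items which are supported."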
[jira] [Assigned] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shivram Mani reassigned HAWQ-615: - Assignee: Shivram Mani (was: Oleksandr Diachenko)
[jira] [Commented] (HAWQ-615) PXF getMetadata endpoint fails if set of items described by passed pattern have unsupported by Hawq Hive objects
[ https://issues.apache.org/jira/browse/HAWQ-615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225207#comment-15225207 ] ASF GitHub Bot commented on HAWQ-615: - GitHub user shivzone opened a pull request: https://github.com/apache/incubator-hawq/pull/551 HAWQ-615. Handle incomptible tables with getMetadata PXF API Incompatible tables will be ignored when getMetadata is invoked with a pattern that matches > 1 table You can merge this pull request into a Git repository by running: $ git pull https://github.com/apache/incubator-hawq HAWQ-615 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/551.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #551 commit 676396fa0c40bb600aadda6f1dc7e8763a0bdde8 Author: Shivram Mani Date: 2016-04-04T22:41:08Z HAWQ-615. Handle incomptible tables with getMetadata PXF API
[jira] [Updated] (HAWQ-466) Call PXF metadata api from psql to suggest tab autocompletion
[ https://issues.apache.org/jira/browse/HAWQ-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-466: --- Description: As a user of hcatalog integration feature, I'd expect to have auto-complete (tab), to list potential matches for the prefix pattern. > Call PXF metadata api from psql to suggest tab autocompletion > - > > Key: HAWQ-466 > URL: https://issues.apache.org/jira/browse/HAWQ-466 > Project: Apache HAWQ > Issue Type: Improvement > Components: Hcatalog, PXF >Reporter: Oleksandr Diachenko >Assignee: Oleksandr Diachenko > Fix For: backlog > > > As a user of hcatalog integration feature, I'd expect to have auto-complete > (tab), to list potential matches for the prefix pattern. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HAWQ-466) Call PXF metadata api from psql to suggest tab autocompletion
[ https://issues.apache.org/jira/browse/HAWQ-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-466: --- Fix Version/s: (was: 2.0.0) backlog
[jira] [Updated] (HAWQ-466) Call PXF metadata api from psql to suggest tab autocompletion
[ https://issues.apache.org/jira/browse/HAWQ-466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao updated HAWQ-466: --- Issue Type: Improvement (was: Sub-task) Parent: (was: HAWQ-393)
[jira] [Resolved] (HAWQ-317) PXF supports Secure Isilon Cluster
[ https://issues.apache.org/jira/browse/HAWQ-317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao resolved HAWQ-317. Resolution: Fixed Final fix is to use querycontext to pass on the security token as well as the key in the hash table (the key is in the format "protocol://"). So pxf (as an external table) uses the same code path as native tables to handle secured-environment queries with tokens. Fix code in HAWQ-462 > PXF supports Secure Isilon Cluster > -- > > Key: HAWQ-317 > URL: https://issues.apache.org/jira/browse/HAWQ-317 > Project: Apache HAWQ > Issue Type: New Feature > Components: PXF >Reporter: Goden Yao >Assignee: Noa Horn > Fix For: 2.0.0 > > > HAWQ users who use Isilon (EMC's storage solution) expect PXF to work > fine with Secure Cluster settings. > **Background** > PXF has a separate code path to handle the secure cluster scenario: > [hd_work_mgr.c | > https://github.com/apache/incubator-hawq/blob/81385f09fbbc59b2912b2f8ff9a3edbee6427e9b/src/backend/access/external/hd_work_mgr.c] > generate_delegation_token() > Port 8020 is hardcoded for pxf to get the security token at the moment. > Suppose you defined the table as location("pxf://uri:port/"). > From the code, you can see that, for a non-HA but secure cluster, pxf will > try to get the token from "uri:8020", where "uri" comes from the user's input. > For Isilon, there is no specific name node per se: data nodes are for > storage and cannot have PXF installed, while worker nodes have pxf installed but > don't handle security token requests. > So if the user puts a worker node uri there, it works with a non-secure cluster but > fails when the cluster is secure. If the user puts an Isilon uri there, they can get > the security token, but the query fails later because Isilon data nodes > don't have pxf installed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
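The resolution describes caching the delegation token in the query context, keyed by a "protocol://" string, so external and native tables share one lookup path. The actual fix lives in HAWQ's C backend (hd_work_mgr.c / query context dispatching); the following is only an illustrative Java sketch of the keying scheme, with every name an assumption:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch, not HAWQ code: model a per-filesystem delegation
// token cache keyed by "protocol://host:port", the key format the
// resolution describes. Any consumer (pxf external table or native
// table) resolves its token through the same map.
public class TokenCacheSketch {
    private final Map<String, String> tokens = new HashMap<>();

    static String keyFor(String protocol, String host, int port) {
        return protocol + "://" + host + ":" + port;
    }

    void put(String protocol, String host, int port, String token) {
        tokens.put(keyFor(protocol, host, port), token);
    }

    String get(String protocol, String host, int port) {
        return tokens.get(keyFor(protocol, host, port));
    }
}
```

Keying by the full protocol/host/port triple (rather than hardcoding port 8020, as the old generate_delegation_token() path did) is what lets the Isilon and HA-secure cases resolve the right token source.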
[jira] [Resolved] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails
[ https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Goden Yao resolved HAWQ-462. Resolution: Fixed Final fix is to use querycontext to pass on the security token as well as the key in the hash table (the key is in the format "protocol://"). So pxf (as an external table) uses the same code path as native tables to handle secured-environment queries with tokens. > Querying Hcatalog in HA Secure Environment Fails > > > Key: HAWQ-462 > URL: https://issues.apache.org/jira/browse/HAWQ-462 > Project: Apache HAWQ > Issue Type: Bug > Components: External Tables, Hcatalog, PXF >Affects Versions: 2.0.0-beta-incubating >Reporter: Kavinder Dhaliwal >Assignee: Shivram Mani > Fix For: 2.0.0 > > > On an HA Secure Cluster querying a hive external table works: > {code} > create external table pxf_hive(s1 text, n1 int, d1 float, bg bigint, b > boolean) location > ('pxf://ip-10-32-38-119.ore1.vpc.pivotal.io:51200/hive_table?profile=Hive') > format 'custom' (formatter='pxfwritable_import'); > select * from pxf_hive; > {code} > but querying the same table via hcatalog does not > {code} > SELECT * FROM hcatalog.default.hive_table; > ERROR: Failed to acquire a delegation token for uri hdfs://localhost:8020/ > (hd_work_mgr.c:930) > {code} > This should be fixed by the PR for > https://issues.apache.org/jira/browse/HAWQ-317 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails
[ https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15224881#comment-15224881 ] ASF GitHub Bot commented on HAWQ-462: - Github user kavinderd commented on the pull request: https://github.com/apache/incubator-hawq/pull/503#issuecomment-205460332 Merged into master by @shivzone [here](https://github.com/apache/incubator-hawq/commit/59ebfa7072621117827ae3d9464c971a61919672)
[jira] [Commented] (HAWQ-462) Querying Hcatalog in HA Secure Environment Fails
[ https://issues.apache.org/jira/browse/HAWQ-462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15224882#comment-15224882 ] ASF GitHub Bot commented on HAWQ-462: - Github user kavinderd closed the pull request at: https://github.com/apache/incubator-hawq/pull/503