[jira] [Assigned] (HAWQ-1337) Log stack info before forward signals sigsegv, sigill or sigbus in CdbProgramErrorHandler()
[ https://issues.apache.org/jira/browse/HAWQ-1337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Guo reassigned HAWQ-1337: -- Assignee: (was: Paul Guo) > Log stack info before forward signals sigsegv, sigill or sigbus in > CdbProgramErrorHandler() > --- > > Key: HAWQ-1337 > URL: https://issues.apache.org/jira/browse/HAWQ-1337 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Paul Guo > > CdbProgramErrorHandler() is a signal handler, but it seems that it just > forwards the signal to its main thread. This is not friendly for development > when encountering signals like sigsegv, sigill and sigbus. We should > save the thread stack info in the log before forwarding. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Assigned] (HAWQ-1361) Remove ErrorTable in installcheck-good since it is in feature test suite now.
[ https://issues.apache.org/jira/browse/HAWQ-1361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Guo reassigned HAWQ-1361: -- Assignee: Paul Guo (was: Ed Espino) > Remove ErrorTable in installcheck-good since it is in feature test suite now. > - > > Key: HAWQ-1361 > URL: https://issues.apache.org/jira/browse/HAWQ-1361 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Paul Guo >Assignee: Paul Guo >
[GitHub] incubator-hawq issue #1156: HAWQ-1370. Misuse of regular expressions in init...
Github user paul-guo- commented on the issue: https://github.com/apache/incubator-hawq/pull/1156 +1 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[GitHub] incubator-hawq issue #1156: HAWQ-1370. Misuse of regular expressions in init...
Github user linwen commented on the issue: https://github.com/apache/incubator-hawq/pull/1156 +1
[jira] [Closed] (HAWQ-1365) Print out detailed schema information for tables which the user doesn't have privileges
[ https://issues.apache.org/jira/browse/HAWQ-1365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hongxu Ma closed HAWQ-1365. --- Resolution: Fixed > Print out detailed schema information for tables which the user doesn't have > privileges > --- > > Key: HAWQ-1365 > URL: https://issues.apache.org/jira/browse/HAWQ-1365 > Project: Apache HAWQ > Issue Type: Improvement >Reporter: Hongxu Ma >Assignee: Hongxu Ma > Fix For: 2.2.0.0-incubating > > > Business Value: > The current error output for a table the user doesn't have > privileges on doesn't include the schema name. We should print the schema > information out, otherwise it's difficult for users to identify which > table is meant. > {code} > postgres=# select * from public.a, s1.a; > ERROR: permission denied for relation(s): a, a > {code} > Should print out the schema information for the table. > {code} > postgres=# select * from public.a, s1.a; > ERROR: permission denied for relation(s): public.a, s1.a > {code}
[GitHub] incubator-hawq pull request #1152: Hawq 1365. Print out detailed schema info...
Github user interma closed the pull request at: https://github.com/apache/incubator-hawq/pull/1152
[GitHub] incubator-hawq issue #1152: Hawq 1365. Print out detailed schema information...
Github user interma commented on the issue: https://github.com/apache/incubator-hawq/pull/1152 committed.
[GitHub] incubator-hawq issue #1145: HAWQ-1353. Added SOLR properties to RPS audit co...
Github user linwen commented on the issue: https://github.com/apache/incubator-hawq/pull/1145 +1
[GitHub] incubator-hawq issue #1154: HAWQ-1367. HAWQ can access to user tables that h...
Github user linwen commented on the issue: https://github.com/apache/incubator-hawq/pull/1154 +1
[jira] [Updated] (HAWQ-1370) Misuse of regular expressions in init_file of feature test.
[ https://issues.apache.org/jira/browse/HAWQ-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hubert Zhang updated HAWQ-1370: --- Description: in global_init_file of feature test, we want to skip expressions which include a file and line number, e.g. (aclchk.c:123) or (aclchk.cpp:134). But currently, the regular expression is {code}(.*c[p]+:\d+) {code} which needs to be replaced by {code}(.*c[p]*:\d+) {code} was: in global_init_file of feature test, we want to skip expressions which include file and line number, e.g.(aclchk.c:123), or (aclchk.cpp:134). But currently, the regular expressions is`(.*c[p]+:\d+) ` which need to be replaced by `(.*c[p]*:\d+) ` > Misuse of regular expressions in init_file of feature test. > --- > > Key: HAWQ-1370 > URL: https://issues.apache.org/jira/browse/HAWQ-1370 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Hubert Zhang >Assignee: Ed Espino > > in global_init_file of feature test, we want to skip expressions which > include a file and line number, e.g. (aclchk.c:123) or (aclchk.cpp:134). > But currently, the regular expression is {code}(.*c[p]+:\d+) {code} which > needs to be replaced by {code}(.*c[p]*:\d+) {code}
[jira] [Updated] (HAWQ-1370) Misuse of regular expressions in init_file of feature test.
[ https://issues.apache.org/jira/browse/HAWQ-1370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hubert Zhang updated HAWQ-1370: --- Description: in global_init_file of feature test, we want to skip expressions which include a file and line number, e.g. (aclchk.c:123) or (aclchk.cpp:134). But currently, the regular expression is `(.*c[p]+:\d+)` which needs to be replaced by `(.*c[p]*:\d+)` was: in global_init_file of feature test, we want to skip expressions which include file and line number, e.g.(aclchk.c:123), or (aclchk.cpp:134). But currently, the regular expressions is \(.*c[p]+:\d+\) which need to be replaced by (.*c[p]*:\d+\) > Misuse of regular expressions in init_file of feature test. > --- > > Key: HAWQ-1370 > URL: https://issues.apache.org/jira/browse/HAWQ-1370 > Project: Apache HAWQ > Issue Type: Bug >Reporter: Hubert Zhang >Assignee: Ed Espino > > in global_init_file of feature test, we want to skip expressions which > include a file and line number, e.g. (aclchk.c:123) or (aclchk.cpp:134). > But currently, the regular expression is `(.*c[p]+:\d+)` which needs to be > replaced by `(.*c[p]*:\d+)`
[GitHub] incubator-hawq pull request #1156: HAWQ-1370. Misuse of regular expressions ...
GitHub user zhangh43 opened a pull request: https://github.com/apache/incubator-hawq/pull/1156 HAWQ-1370. Misuse of regular expressions in init_file of feature test. in global_init_file of feature test, we want to skip expressions which include a file and line number, e.g. (aclchk.c:123) or (aclchk.cpp:134). But currently, the regular expression is `(.*c[p]+:\d+)` which needs to be replaced by `(.*c[p]*:\d+)` You can merge this pull request into a Git repository by running: $ git pull https://github.com/zhangh43/incubator-hawq hawq1370 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/1156.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1156 commit 584881b6866311b4df571f13293a088f97abbc3f Author: hubertzhang Date: 2017-03-01T03:34:13Z HAWQ-1370. Misuse of regular expressions in init_file of feature test.
[jira] [Resolved] (HAWQ-1366) HAWQ should throw error if finding dictionary encoding type for Parquet
[ https://issues.apache.org/jira/browse/HAWQ-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lili Ma resolved HAWQ-1366. --- Resolution: Fixed Assignee: Lili Ma (was: Ed Espino) > HAWQ should throw error if finding dictionary encoding type for Parquet > --- > > Key: HAWQ-1366 > URL: https://issues.apache.org/jira/browse/HAWQ-1366 > Project: Apache HAWQ > Issue Type: Bug > Components: Storage >Reporter: Lili Ma >Assignee: Lili Ma > Fix For: 2.2.0.0-incubating > > > Since HAWQ is based on Parquet format version 1.0, which does not support > dictionary page, and hawq register may register Parquet format version 2.0 > data into HAWQ, we should throw error if finding unsupported page for column. > Reproduce Steps: > 1. In Hive, create a table and insert into 8 records: > {code} > (hive> create table tt (i int, > > fname varchar(100), > > title varchar(100), > > salary double > > ) > > STORED AS PARQUET; > OK > Time taken: 0.029 seconds > hive> insert into tt values (5,'OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW', > 'Sales',80282.54), > > (7,'UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE','Engineer',10206.65), > > (4,'PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ','Director',63691.23), > > (9,'CTDCDYRURBZMBLNWHQNOQCYFFVULOP','Engineer',63867.44), > > (10,'WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK','Sales',97720.08); > WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the > future versions. Consider using a different execution engine (i.e. spark, > tez) or using Hive 1.X releases. > Query ID = malili_20170228173956_f370414c-ddc8-4e6d-99e9-7c1fa1f678d1 > Total jobs = 3 > Launching Job 1 out of 3 > Number of reduce tasks is set to 0 since there's no reduce operator > Job running in-process (local Hadoop) > 2017-02-28 17:39:58,713 Stage-1 map = 100%, reduce = 0% > Ended Job = job_local2046305831_0004 > Stage-4 is selected by condition resolver. > Stage-3 is filtered out by condition resolver. > Stage-5 is filtered out by condition resolver. 
> Moving data to directory > hdfs://127.0.0.1:8020/user/hive/warehouse/tt/.hive-staging_hive_2017-02-28_17-39-56_806_3518057455919651199-1/-ext-1 > Loading data to table default.tt > MapReduce Jobs Launched: > Stage-Stage-1: HDFS Read: 3945 HDFS Write: 4226 SUCCESS > Total MapReduce CPU Time Spent: 0 msec > OK > Time taken: 1.975 seconds > hive> select * from tt; > OK > 5 OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW Sales 80282.54 > 7 UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE Engineer10206.65 > 4 PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ Director63691.23 > 9 CTDCDYRURBZMBLNWHQNOQCYFFVULOP Engineer63867.44 > 10WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK Sales 97720.08 > Time taken: 0.056 seconds, Fetched: 5 row(s) > {code} > 2. Create table in HAWQ > {code} > CREATE TABLE public.tt > (i int, > fname varchar(100), > title varchar(100), > salary float8) > WITH (appendonly=true,orientation=parquet); > {code} > 3. run hawq register > {code} > malilis-MacBook-Pro:Hawq_register malili$ hawq register -d postgres -f > hdfs://localhost:8020/user/hive/warehouse/tt tt > 20170228:17:40:25:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-try > to connect database localhost:5432 postgres > 20170228:17:40:33:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New > file(s) to be registered: > ['hdfs://localhost:8020/user/hive/warehouse/tt/00_0'] > hdfscmd: "hadoop fs -mv hdfs://localhost:8020/user/hive/warehouse/tt/00_0 > hdfs://localhost:8020/hawq_default/16385/16387/49281/1" > 20170228:17:40:41:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Hawq > Register Succeed. > {code} > 4. select from hawq > {code} > postgres=# select * from tt; > i | fname | title | salary > ++---+-- > 5 | OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW | | 80282.54 > 7 | UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE | | 10206.65 > 4 | PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ | | 63691.23 > 9 | CTDCDYRURBZMBLNWHQNOQCYFFVULOP | | 63867.44 > 10 | WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK | | 97720.08 > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HAWQ-1370) Misuse of regular expressions in init_file of feature test.
Hubert Zhang created HAWQ-1370: -- Summary: Misuse of regular expressions in init_file of feature test. Key: HAWQ-1370 URL: https://issues.apache.org/jira/browse/HAWQ-1370 Project: Apache HAWQ Issue Type: Bug Reporter: Hubert Zhang Assignee: Ed Espino in global_init_file of feature test, we want to skip expressions which include a file and line number, e.g. (aclchk.c:123) or (aclchk.cpp:134). But currently, the regular expression is (.*c[p]+:\d+) which needs to be replaced by (.*c[p]*:\d+)
[GitHub] incubator-hawq pull request #1153: HAWQ-1366. Throw unsupported error out fo...
Github user ictmalili closed the pull request at: https://github.com/apache/incubator-hawq/pull/1153
[GitHub] incubator-hawq issue #1150: HAWQ-1361. Remove some installcheck-good cases s...
Github user jiny2 commented on the issue: https://github.com/apache/incubator-hawq/pull/1150 +1 LGTM
[GitHub] incubator-hawq pull request #1152: Hawq 1365. Print out detailed schema info...
Github user interma commented on a diff in the pull request: https://github.com/apache/incubator-hawq/pull/1152#discussion_r103599880 --- Diff: src/backend/catalog/namespace.c --- @@ -1982,6 +1982,13 @@ recomputeNamespacePath(void) elog(DEBUG3, "recompute search_path[%s] when acl_type is ranger", namespace_search_path); } } + else + { + if (aclType == HAWQ_ACL_RANGER && debug_query_string != NULL) + { + last_query_sign = string_hash(debug_query_string, strlen(debug_query_string)); + } + } --- End diff -- Thx! If other code should be changed as well, I will try to clean up the logic.
[jira] [Commented] (HAWQ-401) json type support
[ https://issues.apache.org/jira/browse/HAWQ-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889312#comment-15889312 ] Lili Ma commented on HAWQ-401: -- [~kdunn926] I reviewed the pull request for JSON support in Greenplum. It seems the modified part can be directly applied to HAWQ. Since it involves catalog changes including pg_proc and pg_type, we may need to consider this in HAWQ upgrade. Thanks > json type support > - > > Key: HAWQ-401 > URL: https://issues.apache.org/jira/browse/HAWQ-401 > Project: Apache HAWQ > Issue Type: Wish > Components: Core >Reporter: Lei Chang >Assignee: Lei Chang > Fix For: backlog > >
[jira] [Commented] (HAWQ-8) Installing the HAWQ Software thru the Apache Ambari
[ https://issues.apache.org/jira/browse/HAWQ-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889223#comment-15889223 ] Alexander Denissov commented on HAWQ-8: --- Yes, Ambari installation requires an RPM. Also, to update from my previous comment, the scripts to help customers register HAWQ/PXF repos with Ambari have been checked into the contrib/hawq-ambari-plugin folder of the HAWQ project. > Installing the HAWQ Software thru the Apache Ambari > > > Key: HAWQ-8 > URL: https://issues.apache.org/jira/browse/HAWQ-8 > Project: Apache HAWQ > Issue Type: Wish > Components: Ambari > Environment: CentOS >Reporter: Vijayakumar Ramdoss >Assignee: Alexander Denissov > Fix For: backlog > > Attachments: 1Le8tdm[1] > > > In order to integrate with the Hadoop system, we would have to install the > HAWQ software thru Ambari.
[jira] [Commented] (HAWQ-303) Index support for non-heap tables
[ https://issues.apache.org/jira/browse/HAWQ-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889214#comment-15889214 ] Kyle R Dunn commented on HAWQ-303: -- Once we have support for indexes, high-performance PostGIS on HAWQ becomes a very compelling differentiating feature. > Index support for non-heap tables > - > > Key: HAWQ-303 > URL: https://issues.apache.org/jira/browse/HAWQ-303 > Project: Apache HAWQ > Issue Type: Wish > Components: Storage >Reporter: Lei Chang >Assignee: Lili Ma > Fix For: 3.0.0.0 > >
[jira] [Commented] (HAWQ-303) Index support for non-heap tables
[ https://issues.apache.org/jira/browse/HAWQ-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889211#comment-15889211 ] Kyle R Dunn commented on HAWQ-303: -- I'm wondering if/how we can prioritize this? Is it accurate that the feature is targeted for 3.0.0.0? > Index support for non-heap tables > - > > Key: HAWQ-303 > URL: https://issues.apache.org/jira/browse/HAWQ-303 > Project: Apache HAWQ > Issue Type: Wish > Components: Storage >Reporter: Lei Chang >Assignee: Lili Ma > Fix For: 3.0.0.0 > >
[jira] [Commented] (HAWQ-98) Moving HAWQ docker file into code base
[ https://issues.apache.org/jira/browse/HAWQ-98?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889207#comment-15889207 ] Kyle R Dunn commented on HAWQ-98: - Looks like we can close this as the Docker bits are in [master|https://github.com/apache/incubator-hawq/tree/master/contrib/hawq-docker]. > Moving HAWQ docker file into code base > -- > > Key: HAWQ-98 > URL: https://issues.apache.org/jira/browse/HAWQ-98 > Project: Apache HAWQ > Issue Type: Wish >Reporter: Goden Yao >Assignee: Roman Shaposhnik > Fix For: 2.2.0.0-incubating > > > We have a pre-built docker image (check [HAWQ build & > install|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61320026]) > sitting outside the codebase. > It should be incorporated in the Apache git and maintained by the community. > Proposed location is to create a folder under root
[jira] [Comment Edited] (HAWQ-326) Support RPM build for HAWQ
[ https://issues.apache.org/jira/browse/HAWQ-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889199#comment-15889199 ] Kyle R Dunn edited comment on HAWQ-326 at 3/1/17 12:29 AM: --- I've done some initial work on this. After compiling HAWQ from source and running {{make install}}, with the {{rpmbuild}} utility installed, perform the following steps: {code} $ mkdir -p ~/RPMBUILD/hawq $ cd /usr/local $ tar cjf ~/RPMBUILD/hawq/hawq-2.1.0.0-rc4.tar.bz2 hawq $ rpmbuild -bb SPECS/hawq-2.1.0.0-rc4.spec {code} where the above RPM SPEC file contains the following: {code} # Don't try fancy stuff like debuginfo, which is useless on binary-only # packages. Don't strip binary too # Be sure buildpolicy set to do nothing %define__spec_install_post %{nil} %define debug_package %{nil} %define__os_install_post %{_dbpath}/brp-compress %define _unpackaged_files_terminate_build 0 Summary: Apache HAWQ Name: hawq Version: 2.1.0.0 Release: rc4 License: Apache 2.0 Group: Development/Tools SOURCE0 : %{name}-%{version}-%{release}.tar.bz2 URL: https://hawq.incubator.apache.org %define installdir hawq BuildRoot: %{_tmppath}/%{name} %description %{summary} %prep %setup -n %{installdir} #%build # Empty section. %install rm -rf /usr/local/%{installdir} mkdir /usr/local/%{installdir} # in buildroot cp -ra * /usr/local/%{installdir}/ %clean rm -rf %{buildroot} %files %defattr(-,root,root,-) /greenplum_path.sh /bin /sbin /docs /etc /include /lib /share {code} Note, we need to add steps to create the {{gpadmin}} user and ensure installation directory permissions are the correct owner and mode. was (Author: kdunn926): I've done some initial work on this. 
After compiling HAWQ from source and running {{make install}}, with the {{rpmbuild}} utility installed, perform the following steps: {code} $ mkdir -p ~/RPMBUILD/hawq $ cd /usr/local $ tar cjf ~/RPMBUILD/hawq/hawq-2.1.0.0-rc4.tar.bz2 hawq ) $ rpmbuild -bb SPECS/hawq-2.1.0.0-rc4.spec {code} where the above RPM SPEC file contains the following: {code} # Don't try fancy stuff like debuginfo, which is useless on binary-only # packages. Don't strip binary too # Be sure buildpolicy set to do nothing %define__spec_install_post %{nil} %define debug_package %{nil} %define__os_install_post %{_dbpath}/brp-compress %define _unpackaged_files_terminate_build 0 Summary: Apache HAWQ Name: hawq Version: 2.1.0.0 Release: rc4 License: Apache 2.0 Group: Development/Tools SOURCE0 : %{name}-%{version}-%{release}.tar.bz2 URL: https://hawq.incubator.apache.org %define installdir hawq BuildRoot: %{_tmppath}/%{name} %description %{summary} %prep %setup -n %{installdir} #%build # Empty section. %install rm -rf /usr/local/%{installdir} mkdir /usr/local/%{installdir} # in buildroot cp -ra * /usr/local/%{installdir}/ %clean rm -rf %{buildroot} %files %defattr(-,root,root,-) /greenplum_path.sh /bin /sbin /docs /etc /include /lib /share {code} Note, we need to add steps to create the {{gpadmin}} user and ensure installation directory permissions are the correct owner and mode. > Support RPM build for HAWQ > -- > > Key: HAWQ-326 > URL: https://issues.apache.org/jira/browse/HAWQ-326 > Project: Apache HAWQ > Issue Type: Wish > Components: Build >Reporter: Lei Chang >Assignee: Paul Guo > Fix For: 2.2.0.0-incubating > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (HAWQ-326) Support RPM build for HAWQ
[ https://issues.apache.org/jira/browse/HAWQ-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889199#comment-15889199 ] Kyle R Dunn edited comment on HAWQ-326 at 3/1/17 12:30 AM: --- I've done some initial work on this. After compiling HAWQ from source and running {{make install}}, with the {{rpmbuild}} utility installed, perform the following steps: {code} $ mkdir -p ~/RPMBUILD/{hawq,SPECS} $ cd /usr/local $ tar cjf ~/RPMBUILD/hawq/hawq-2.1.0.0-rc4.tar.bz2 hawq $ cd ~/RPMBUILD $ rpmbuild -bb SPECS/hawq-2.1.0.0-rc4.spec {code} where the above RPM SPEC file contains the following: {code} # Don't try fancy stuff like debuginfo, which is useless on binary-only # packages. Don't strip binary too # Be sure buildpolicy set to do nothing %define__spec_install_post %{nil} %define debug_package %{nil} %define__os_install_post %{_dbpath}/brp-compress %define _unpackaged_files_terminate_build 0 Summary: Apache HAWQ Name: hawq Version: 2.1.0.0 Release: rc4 License: Apache 2.0 Group: Development/Tools SOURCE0 : %{name}-%{version}-%{release}.tar.bz2 URL: https://hawq.incubator.apache.org %define installdir hawq BuildRoot: %{_tmppath}/%{name} %description %{summary} %prep %setup -n %{installdir} #%build # Empty section. %install rm -rf /usr/local/%{installdir} mkdir /usr/local/%{installdir} # in buildroot cp -ra * /usr/local/%{installdir}/ %clean rm -rf %{buildroot} %files %defattr(-,root,root,-) /greenplum_path.sh /bin /sbin /docs /etc /include /lib /share {code} Note, we need to add steps to create the {{gpadmin}} user and ensure installation directory permissions are the correct owner and mode. was (Author: kdunn926): I've done some initial work on this. 
After compiling HAWQ from source and running {{make install}}, with the {{rpmbuild}} utility installed, perform the following steps: {code} $ mkdir -p ~/RPMBUILD/hawq $ cd /usr/local $ tar cjf ~/RPMBUILD/hawq/hawq-2.1.0.0-rc4.tar.bz2 hawq $ rpmbuild -bb SPECS/hawq-2.1.0.0-rc4.spec {code} where the above RPM SPEC file contains the following: {code} # Don't try fancy stuff like debuginfo, which is useless on binary-only # packages. Don't strip binary too # Be sure buildpolicy set to do nothing %define__spec_install_post %{nil} %define debug_package %{nil} %define__os_install_post %{_dbpath}/brp-compress %define _unpackaged_files_terminate_build 0 Summary: Apache HAWQ Name: hawq Version: 2.1.0.0 Release: rc4 License: Apache 2.0 Group: Development/Tools SOURCE0 : %{name}-%{version}-%{release}.tar.bz2 URL: https://hawq.incubator.apache.org %define installdir hawq BuildRoot: %{_tmppath}/%{name} %description %{summary} %prep %setup -n %{installdir} #%build # Empty section. %install rm -rf /usr/local/%{installdir} mkdir /usr/local/%{installdir} # in buildroot cp -ra * /usr/local/%{installdir}/ %clean rm -rf %{buildroot} %files %defattr(-,root,root,-) /greenplum_path.sh /bin /sbin /docs /etc /include /lib /share {code} Note, we need to add steps to create the {{gpadmin}} user and ensure installation directory permissions are the correct owner and mode. > Support RPM build for HAWQ > -- > > Key: HAWQ-326 > URL: https://issues.apache.org/jira/browse/HAWQ-326 > Project: Apache HAWQ > Issue Type: Wish > Components: Build >Reporter: Lei Chang >Assignee: Paul Guo > Fix For: 2.2.0.0-incubating > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
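The missing packaging steps noted above (creating the {{gpadmin}} user and fixing installation-directory ownership) could be sketched as scriptlets appended to the same SPEC file. This is an untested sketch: the {{gpadmin}} group name, the system-account flags, and the 0755 mode are all assumptions, not verified HAWQ packaging conventions.

```spec
%pre
# Create the gpadmin user/group if they do not already exist (assumed names)
getent group gpadmin >/dev/null || groupadd -r gpadmin
getent passwd gpadmin >/dev/null || useradd -r -g gpadmin -m gpadmin

%post
# Ensure the install tree is owned by gpadmin with sane permissions
chown -R gpadmin:gpadmin /usr/local/hawq
chmod -R 0755 /usr/local/hawq
```

rpmbuild runs %pre before the payload is unpacked and %post afterwards, so the chown sees the final install tree.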
[jira] [Commented] (HAWQ-326) Support RPM build for HAWQ
[ https://issues.apache.org/jira/browse/HAWQ-326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889199#comment-15889199 ] Kyle R Dunn commented on HAWQ-326: -- I've done some initial work on this. After compiling HAWQ from source and running {{make install}}, with the {{rpmbuild}} utility installed, perform the following steps: {code} $ mkdir -p ~/RPMBUILD/hawq $ cd /usr/local $ tar cjf ~/RPMBUILD/hawq/hawq-2.1.0.0-rc4.tar.bz2 hawq ) $ rpmbuild -bb SPECS/hawq-2.1.0.0-rc4.spec {code} where the above RPM SPEC file contains the following: {code} # Don't try fancy stuff like debuginfo, which is useless on binary-only # packages. Don't strip binary too # Be sure buildpolicy set to do nothing %define__spec_install_post %{nil} %define debug_package %{nil} %define__os_install_post %{_dbpath}/brp-compress %define _unpackaged_files_terminate_build 0 Summary: Apache HAWQ Name: hawq Version: 2.1.0.0 Release: rc4 License: Apache 2.0 Group: Development/Tools SOURCE0 : %{name}-%{version}-%{release}.tar.bz2 URL: https://hawq.incubator.apache.org %define installdir hawq BuildRoot: %{_tmppath}/%{name} %description %{summary} %prep %setup -n %{installdir} #%build # Empty section. %install rm -rf /usr/local/%{installdir} mkdir /usr/local/%{installdir} # in buildroot cp -ra * /usr/local/%{installdir}/ %clean rm -rf %{buildroot} %files %defattr(-,root,root,-) /greenplum_path.sh /bin /sbin /docs /etc /include /lib /share {code} Note, we need to add steps to create the {{gpadmin}} user and ensure installation directory permissions are the correct owner and mode. > Support RPM build for HAWQ > -- > > Key: HAWQ-326 > URL: https://issues.apache.org/jira/browse/HAWQ-326 > Project: Apache HAWQ > Issue Type: Wish > Components: Build >Reporter: Lei Chang >Assignee: Paul Guo > Fix For: 2.2.0.0-incubating > > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HAWQ-8) Installing the HAWQ Software thru the Apache Ambari
[ https://issues.apache.org/jira/browse/HAWQ-8?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889191#comment-15889191 ] Kyle R Dunn commented on HAWQ-8: It seems like a clear dependency that Ambari-only installation will require an RPM of HAWQ for both RHEL and SLES. > Installing the HAWQ Software thru the Apache Ambari > > > Key: HAWQ-8 > URL: https://issues.apache.org/jira/browse/HAWQ-8 > Project: Apache HAWQ > Issue Type: Wish > Components: Ambari > Environment: CentOS >Reporter: Vijayakumar Ramdoss >Assignee: Alexander Denissov > Fix For: backlog > > Attachments: 1Le8tdm[1] > > > In order to integrate with the Hadoop system, we would have to install the > HAWQ software thru Ambari.
[jira] [Comment Edited] (HAWQ-401) json type support
[ https://issues.apache.org/jira/browse/HAWQ-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889182#comment-15889182 ] Kyle R Dunn edited comment on HAWQ-401 at 3/1/17 12:20 AM: --- [~lilima] - I'm wondering if we'll be able to incorporate the work done [here|https://github.com/greenplum-db/gpdb/pull/530] for JSON type support? was (Author: kdunn926): [~lilima] - I'm wondering if we'll be able to incorporate the work done [here](https://github.com/greenplum-db/gpdb/pull/530) for JSON type support? > json type support > - > > Key: HAWQ-401 > URL: https://issues.apache.org/jira/browse/HAWQ-401 > Project: Apache HAWQ > Issue Type: Wish > Components: Core >Reporter: Lei Chang >Assignee: Lei Chang > Fix For: backlog > >
[jira] [Commented] (HAWQ-401) json type support
[ https://issues.apache.org/jira/browse/HAWQ-401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889182#comment-15889182 ] Kyle R Dunn commented on HAWQ-401: -- [~lilima] - I'm wondering if we'll be able to incorporate the work done [here](https://github.com/greenplum-db/gpdb/pull/530) for JSON type support? > json type support > - > > Key: HAWQ-401 > URL: https://issues.apache.org/jira/browse/HAWQ-401 > Project: Apache HAWQ > Issue Type: Wish > Components: Core >Reporter: Lei Chang >Assignee: Lei Chang > Fix For: backlog > >
[jira] [Comment Edited] (HAWQ-1332) Can not grant database and schema privileges without table privileges in ranger or ranger plugin service
[ https://issues.apache.org/jira/browse/HAWQ-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888629#comment-15888629 ] Alexander Denissov edited comment on HAWQ-1332 at 2/28/17 6:44 PM: --- [~xsheng] -- I think we are confusing 2 issues here. 1. Privilege to connect to database -- this is CONNECT privilege that must be granted to a database resource. Since due to Ranger bug, it is not possible to define just database resource without defining schema and table, our design convention is that to represent a given database resource, we need to define it with database name, but schema and table must be set to *. Then grant CONNECT privilege to users that should be able to connect to such a resource. All values here must be included. Do not grant any schema / table specific privileges to this resource, if not desired. 2. Excluding specific tables from policies. Not sure whether this works or not, but this should have nothing to do with connecting to database. Define a separate policy with excluded table with table-level privileges and test it out. This policy should not have any CONNECT privileges and database connect access should be managed by policy defined in #1 above. So, I still maintain that this is not an issue. In summary, any db-level privilege requires schema and table set to * and any schema level privilege requires table set to *. was (Author: adenissov): [~xsheng] -- I think we are confusing 2 issues here. 1. Privilege to connect to database -- this is CONNECT privilege that must be granted to a database resource. Since due to Ranger bug, it is not possible to define just database resource without defining schema and table, our design convention is that to represent a given database resource, we need to define it with database name, but shcema and table must be set to *. Then grant CONNECT privilege to users that should be able to connect to such a resource. All values here must be included. 
Do not grant any schema / table specific privileges to this resource, if not desired. 2. Excluding specific tables from policies. Not sure whether this works or not, but this should have nothing to do with connecting to database. Define a separate policy with excluded table with table-level privileges and test it out. This policy should not have any CONNECT privileges and database connect access should be managed by policy defined in #1 above. So, I still maintain that this is not an issues. In summary, any db-level privilege requires schema and table set to * and any schema level privilege requires table set to *. > Can not grant database and schema privileges without table privileges in > ranger or ranger plugin service > > > Key: HAWQ-1332 > URL: https://issues.apache.org/jira/browse/HAWQ-1332 > Project: Apache HAWQ > Issue Type: Bug > Components: Security >Reporter: Chunling Wang >Assignee: Alexander Denissov > Fix For: 2.2.0.0-incubating > > Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png > > > We try to grant database connect and schema usage privileges to a non-super > user to connect database. We find that if we set policy with database and > schema included, but with table excluded, we can not connect database. But if > we include table, we can connect to database. We think there may be bug in > Ranger Plugin Service or Ranger. Here are steps to reproduce it. > 1. create a new user "usertest1" in database: > {code} > $ psql postgres > psql (8.2.15) > Type "help" for help. > postgres=# CREATE USER usertest1; > NOTICE: resource queue required -- using default resource queue "pg_default" > CREATE ROLE > postgres=# > {code} > 2. add user "usertest1" in pg_hba.conf > {code} > local all usertest1 trust > {code} > 3. set policy with database and schema included, with table excluded > !screenshot-1.png|width=800,height=400! > 4. 
connect database with user "usertest1" but failed with permission denied > {code} > $ psql postgres -U usertest1 > psql: FATAL: permission denied for database "postgres" > DETAIL: User does not have CONNECT privilege. > {code} > 5. set policy with database, schema and table included > !screenshot-2.png|width=800,height=400! > 6. connect database with user "usertest1" and succeed > {code} > $ psql postgres -U usertest1 > psql (8.2.15) > Type "help" for help. > postgres=# > {code} > But if we do not set table as "*", and specify table like "a", we can not > access database either. > !screenshot-3.png|width=800,height=400! -- This message was sent by Atlassian JIRA (v6.3.15#6346)
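The convention summarized in the comment above (any db-level privilege such as CONNECT requires schema and table set to `*`) can be sketched as a simple policy check. This is an illustrative model only: the field names and the dict shape below are assumptions for the sketch, not Ranger's actual REST policy schema.

```python
# Illustrative check of the workaround convention from HAWQ-1332:
# a policy granting a db-level privilege (e.g. CONNECT) must define
# schema and table as "*" rather than excluding them.

def is_valid_db_policy(policy):
    """Return True if db-level privileges follow the schema=*/table=* convention."""
    db_level_privileges = {"connect", "create", "temp"}  # assumed set, for illustration
    grants_db_access = any(a in db_level_privileges for a in policy["accesses"])
    if not grants_db_access:
        return True  # convention only constrains db-level grants
    return policy["schema"] == "*" and policy["table"] == "*"

# Step 3 of the report: schema included, table excluded -> connect fails.
broken = {"database": "postgres", "schema": "public", "table": "", "accesses": ["connect"]}
# Step 5 of the report: database, schema and table all included -> connect works.
working = {"database": "postgres", "schema": "*", "table": "*", "accesses": ["connect"]}

print(is_valid_db_policy(broken), is_valid_db_policy(working))
```

This mirrors why the reporter's step-3 policy denies CONNECT while the step-5 policy allows it.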
[jira] [Commented] (HAWQ-1332) Can not grant database and schema privileges without table privileges in ranger or ranger plugin service
[ https://issues.apache.org/jira/browse/HAWQ-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888629#comment-15888629 ] Alexander Denissov commented on HAWQ-1332: -- [~xsheng] -- I think we are confusing 2 issues here. 1. Privilege to connect to database -- this is CONNECT privilege that must be granted to a database resource. Since, due to a Ranger bug, it is not possible to define just a database resource without defining schema and table, our design convention is that to represent a given database resource, we need to define it with database name, but schema and table must be set to *. Then grant CONNECT privilege to users that should be able to connect to such a resource. All values here must be included. Do not grant any schema / table specific privileges to this resource, if not desired. 2. Excluding specific tables from policies. Not sure whether this works or not, but this should have nothing to do with connecting to database. Define a separate policy with excluded table with table-level privileges and test it out. This policy should not have any CONNECT privileges and database connect access should be managed by policy defined in #1 above. So, I still maintain that this is not an issue. In summary, any db-level privilege requires schema and table set to * and any schema level privilege requires table set to *. > Can not grant database and schema privileges without table privileges in > ranger or ranger plugin service > > > Key: HAWQ-1332 > URL: https://issues.apache.org/jira/browse/HAWQ-1332 > Project: Apache HAWQ > Issue Type: Bug > Components: Security >Reporter: Chunling Wang >Assignee: Alexander Denissov > Fix For: 2.2.0.0-incubating > > Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png > > > We try to grant database connect and schema usage privileges to a non-super > user to connect database. 
We find that if we set policy with database and > schema included, but with table excluded, we can not connect database. But if > we include table, we can connect to database. We think there may be bug in > Ranger Plugin Service or Ranger. Here are steps to reproduce it. > 1. create a new user "usertest1" in database: > {code} > $ psql postgres > psql (8.2.15) > Type "help" for help. > postgres=# CREATE USER usertest1; > NOTICE: resource queue required -- using default resource queue "pg_default" > CREATE ROLE > postgres=# > {code} > 2. add user "usertest1" in pg_hba.conf > {code} > local all usertest1 trust > {code} > 3. set policy with database and schema included, with table excluded > !screenshot-1.png|width=800,height=400! > 4. connect database with user "usertest1" but failed with permission denied > {code} > $ psql postgres -U usertest1 > psql: FATAL: permission denied for database "postgres" > DETAIL: User does not have CONNECT privilege. > {code} > 5. set policy with database, schema and table included > !screenshot-2.png|width=800,height=400! > 6. connect database with user "usertest1" and succeed > {code} > $ psql postgres -U usertest1 > psql (8.2.15) > Type "help" for help. > postgres=# > {code} > But if we do not set table as "*", and specify table like "a", we can not > access database either. > !screenshot-3.png|width=800,height=400! -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[GitHub] incubator-hawq pull request #1155: HAWQ-1369. Update for RAT status and exte...
GitHub user edespino opened a pull request: https://github.com/apache/incubator-hawq/pull/1155 HAWQ-1369. Update for RAT status and external doc link in README.md Updating embedded external CI status images to include the HAWQ-rat project. Additionally, update documentation link to point at open source doc set. reviewers: @paul-guo- @radarwave You can merge this pull request into a Git repository by running: $ git pull https://github.com/edespino/incubator-hawq HAWQ-1369 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/1155.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1155 commit d656e6f7e7a4094ade0848b39c1cf2c39760b073 Author: Ed Espino Date: 2017-01-12T08:20:24Z HAWQ-1369. Update for RAT status and external doc link in README.md --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Created] (HAWQ-1369) README.md update - Update to include RAT status and external links
Ed Espino created HAWQ-1369: --- Summary: README.md update - Update to include RAT status and external links Key: HAWQ-1369 URL: https://issues.apache.org/jira/browse/HAWQ-1369 Project: Apache HAWQ Issue Type: Task Components: Build Reporter: Ed Espino Assignee: Ed Espino Fix For: 2.2.0.0-incubating Updating embedded external CI status images to include the HAWQ-rat project. Additionally, update documentation link to point at open source doc set.
[jira] [Assigned] (HAWQ-1367) hawq can access to user tables that have no permission with fallback check table.
[ https://issues.apache.org/jira/browse/HAWQ-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chunling Wang reassigned HAWQ-1367: --- Assignee: Chunling Wang (was: Ed Espino) > hawq can access to user tables that have no permission with fallback check > table. > -- > > Key: HAWQ-1367 > URL: https://issues.apache.org/jira/browse/HAWQ-1367 > Project: Apache HAWQ > Issue Type: Bug > Components: Security >Reporter: Xiang Sheng >Assignee: Chunling Wang > Fix For: 2.2.0.0-incubating > > > If a user has access to a catalog table but no access to user table b, > he can access table b using "select * from catalog_table, b;"
[jira] [Created] (HAWQ-1368) normal user who doesn't have home directory may have problem when running hawq register
Lili Ma created HAWQ-1368: - Summary: normal user who doesn't have home directory may have problem when running hawq register Key: HAWQ-1368 URL: https://issues.apache.org/jira/browse/HAWQ-1368 Project: Apache HAWQ Issue Type: Bug Components: Command Line Tools Reporter: Lili Ma Assignee: Ed Espino HAWQ register stores information in hawqregister_MMDD.log under directory ~/hawqAdminLogs, and a normal user who doesn't have his own home directory may encounter failure when running hawq register. We can add a -l option to set the target log directory and file name for hawq register.
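The proposed -l behavior can be sketched as follows: an explicit path wins, otherwise the tool falls back to ~/hawqAdminLogs, which is exactly where users without a home directory fail. The log file naming used here is illustrative; it is not necessarily hawq register's exact format.

```python
import os

def resolve_register_log_path(cli_log_path=None, date_str="20170228"):
    # Sketch of the proposed -l option: an explicit path overrides the default
    # and needs no home directory.
    if cli_log_path:
        return cli_log_path
    # Default behavior: ~/hawqAdminLogs, which fails for users without a home
    # directory (the problem HAWQ-1368 describes).
    return os.path.join(os.path.expanduser("~"), "hawqAdminLogs",
                        "hawqregister_%s.log" % date_str)

print(resolve_register_log_path("/tmp/register.log"))
```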
[GitHub] incubator-hawq issue #1154: HAWQ-1367. HAWQ can access to user tables that h...
Github user wcl14 commented on the issue: https://github.com/apache/incubator-hawq/pull/1154 @stanlyxiang @zhangh43 @ictmalili Please review, thanks.
[GitHub] incubator-hawq pull request #1154: HAWQ-1367. HAWQ can access to user tables...
GitHub user wcl14 reopened a pull request: https://github.com/apache/incubator-hawq/pull/1154 HAWQ-1367. HAWQ can access to user tables that have no permission wit… …h fallback check table. You can merge this pull request into a Git repository by running: $ git pull https://github.com/wcl14/incubator-hawq HAWQ-1367 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/1154.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1154 commit 2536993c42c5c3673c7fb21ef72660cb51f2b8bf Author: Chunling Wang Date: 2017-02-28T10:08:59Z HAWQ-1367. HAWQ can access to user tables that have no permission with fallback check table.
[GitHub] incubator-hawq pull request #1154: HAWQ-1367. HAWQ can access to user tables...
Github user wcl14 closed the pull request at: https://github.com/apache/incubator-hawq/pull/1154
[GitHub] incubator-hawq pull request #1154: HAWQ-1367. HAWQ can access to user tables...
GitHub user wcl14 opened a pull request: https://github.com/apache/incubator-hawq/pull/1154 HAWQ-1367. HAWQ can access to user tables that have no permission wit… …h fallback check table. You can merge this pull request into a Git repository by running: $ git pull https://github.com/wcl14/incubator-hawq HAWQ-1367 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/1154.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1154 commit 2536993c42c5c3673c7fb21ef72660cb51f2b8bf Author: Chunling Wang Date: 2017-02-28T10:08:59Z HAWQ-1367. HAWQ can access to user tables that have no permission with fallback check table.
[jira] [Created] (HAWQ-1367) hawq can access to user tables that have no permission with fallback check table.
Xiang Sheng created HAWQ-1367: - Summary: hawq can access to user tables that have no permission with fallback check table. Key: HAWQ-1367 URL: https://issues.apache.org/jira/browse/HAWQ-1367 Project: Apache HAWQ Issue Type: Bug Components: Security Reporter: Xiang Sheng Assignee: Ed Espino Fix For: 2.2.0.0-incubating If a user has access to a catalog table but no access to user table b, he can access table b using "select * from catalog_table, b;"
[GitHub] incubator-hawq issue #1153: HAWQ-1366. Throw unsupported error out for dicti...
Github user linwen commented on the issue: https://github.com/apache/incubator-hawq/pull/1153 +1
[GitHub] incubator-hawq pull request #1153: HAWQ-1366. Throw unsupported error out fo...
GitHub user ictmalili opened a pull request: https://github.com/apache/incubator-hawq/pull/1153 HAWQ-1366. Throw unsupported error out for dictionary page in Parquet… … storage You can merge this pull request into a Git repository by running: $ git pull https://github.com/ictmalili/incubator-hawq HAWQ-1366 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/incubator-hawq/pull/1153.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1153 commit f42dc9dcc0685d614ce5968b10001097971c4abc Author: Lili Ma Date: 2017-02-28T09:57:21Z HAWQ-1366. Throw unsupported error out for dictionary page in Parquet storage
[jira] [Reopened] (HAWQ-1332) Can not grant database and schema privileges without table privileges in ranger or ranger plugin service
[ https://issues.apache.org/jira/browse/HAWQ-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiang Sheng reopened HAWQ-1332: --- [~adenisso] I have checked how Hive processes include/exclude in Ranger. We found that Hive provides some include/exclude features that we don't have. As the jira description said, when we include database and schema but exclude table, we cannot connect to the database; it seems that including/excluding a table impacts the database and schema privileges. But in Hive, if we exclude a database, we don't have the selected privileges; if we include the database, we have them. If we exclude one table, we still have the selected privileges for other tables; if we include one table, we have some of the selected privileges on that table. So include/exclude works fine (maybe not fine-grained) in Hive, and we should re-examine how we process include/exclude and fix the problems. > Can not grant database and schema privileges without table privileges in > ranger or ranger plugin service > > > Key: HAWQ-1332 > URL: https://issues.apache.org/jira/browse/HAWQ-1332 > Project: Apache HAWQ > Issue Type: Bug > Components: Security >Reporter: Chunling Wang >Assignee: Alexander Denissov > Fix For: 2.2.0.0-incubating > > Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png > > > We try to grant database connect and schema usage privileges to a non-super > user to connect database. We find that if we set policy with database and > schema included, but with table excluded, we can not connect database. But if > we include table, we can connect to database. We think there may be bug in > Ranger Plugin Service or Ranger. Here are steps to reproduce it. > 1. create a new user "usertest1" in database: > {code} > $ psql postgres > psql (8.2.15) > Type "help" for help. > postgres=# CREATE USER usertest1; > NOTICE: resource queue required -- using default resource queue "pg_default" > CREATE ROLE > postgres=# > {code} > 2. 
add user "usertest1" in pg_hba.conf > {code} > local all usertest1 trust > {code} > 3. set policy with database and schema included, with table excluded > !screenshot-1.png|width=800,height=400! > 4. connect database with user "usertest1" but failed with permission denied > {code} > $ psql postgres -U usertest1 > psql: FATAL: permission denied for database "postgres" > DETAIL: User does not have CONNECT privilege. > {code} > 5. set policy with database, schema and table included > !screenshot-2.png|width=800,height=400! > 6. connect database with user "usertest1" and succeed > {code} > $ psql postgres -U usertest1 > psql (8.2.15) > Type "help" for help. > postgres=# > {code} > But if we do not set table as "*", and specify table like "a", we can not > access database either. > !screenshot-3.png|width=800,height=400! -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HAWQ-1366) HAWQ should throw error if finding dictionary encoding type for Parquet
[ https://issues.apache.org/jira/browse/HAWQ-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15887674#comment-15887674 ] Lili Ma commented on HAWQ-1366: --- With the modified code, HAWQ throws error out. {code} postgres=# select * from tt; ERROR: HAWQ does not support dictionary page type resolver for Parquet format in column 'title' (cdbparquetcolumn.c:152) (seg0 localhost:4 pid=90708) {code} > HAWQ should throw error if finding dictionary encoding type for Parquet > --- > > Key: HAWQ-1366 > URL: https://issues.apache.org/jira/browse/HAWQ-1366 > Project: Apache HAWQ > Issue Type: Bug > Components: Storage >Reporter: Lili Ma >Assignee: Ed Espino > Fix For: 2.2.0.0-incubating > > > Since HAWQ is based on Parquet format version 1.0, which does not support > dictionary page, and hawq register may register Parquet format version 2.0 > data into HAWQ, we should throw error if finding unsupported page for column. > Reproduce Steps: > 1. In Hive, create a table and insert into 8 records: > {code} > (hive> create table tt (i int, > > fname varchar(100), > > title varchar(100), > > salary double > > ) > > STORED AS PARQUET; > OK > Time taken: 0.029 seconds > hive> insert into tt values (5,'OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW', > 'Sales',80282.54), > > (7,'UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE','Engineer',10206.65), > > (4,'PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ','Director',63691.23), > > (9,'CTDCDYRURBZMBLNWHQNOQCYFFVULOP','Engineer',63867.44), > > (10,'WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK','Sales',97720.08); > WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the > future versions. Consider using a different execution engine (i.e. spark, > tez) or using Hive 1.X releases. 
> Query ID = malili_20170228173956_f370414c-ddc8-4e6d-99e9-7c1fa1f678d1 > Total jobs = 3 > Launching Job 1 out of 3 > Number of reduce tasks is set to 0 since there's no reduce operator > Job running in-process (local Hadoop) > 2017-02-28 17:39:58,713 Stage-1 map = 100%, reduce = 0% > Ended Job = job_local2046305831_0004 > Stage-4 is selected by condition resolver. > Stage-3 is filtered out by condition resolver. > Stage-5 is filtered out by condition resolver. > Moving data to directory > hdfs://127.0.0.1:8020/user/hive/warehouse/tt/.hive-staging_hive_2017-02-28_17-39-56_806_3518057455919651199-1/-ext-1 > Loading data to table default.tt > MapReduce Jobs Launched: > Stage-Stage-1: HDFS Read: 3945 HDFS Write: 4226 SUCCESS > Total MapReduce CPU Time Spent: 0 msec > OK > Time taken: 1.975 seconds > hive> select * from tt; > OK > 5 OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW Sales 80282.54 > 7 UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE Engineer10206.65 > 4 PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ Director63691.23 > 9 CTDCDYRURBZMBLNWHQNOQCYFFVULOP Engineer63867.44 > 10WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK Sales 97720.08 > Time taken: 0.056 seconds, Fetched: 5 row(s) > {code} > 2. Create table in HAWQ > {code} > CREATE TABLE public.tt > (i int, > fname varchar(100), > title varchar(100), > salary float8) > WITH (appendonly=true,orientation=parquet); > {code} > 3. 
run hawq register > {code} > malilis-MacBook-Pro:Hawq_register malili$ hawq register -d postgres -f > hdfs://localhost:8020/user/hive/warehouse/tt tt > 20170228:17:40:25:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-try > to connect database localhost:5432 postgres > 20170228:17:40:33:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New > file(s) to be registered: > ['hdfs://localhost:8020/user/hive/warehouse/tt/00_0'] > hdfscmd: "hadoop fs -mv hdfs://localhost:8020/user/hive/warehouse/tt/00_0 > hdfs://localhost:8020/hawq_default/16385/16387/49281/1" > 20170228:17:40:41:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Hawq > Register Succeed. > {code} > 4. select from hawq > {code} > postgres=# select * from tt; > i | fname | title | salary > ++---+-- > 5 | OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW | | 80282.54 > 7 | UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE | | 10206.65 > 4 | PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ | | 63691.23 > 9 | CTDCDYRURBZMBLNWHQNOQCYFFVULOP | | 63867.44 > 10 | WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK | | 97720.08 > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (HAWQ-1366) HAWQ should throw error if finding dictionary encoding type for Parquet
[ https://issues.apache.org/jira/browse/HAWQ-1366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15887672#comment-15887672 ] Lili Ma commented on HAWQ-1366: --- The title column is optimized in Hive to dictionary storage. Since HAWQ doesn't support this, the output information is a little weird. In the short term, HAWQ should throw an error out for this case. In the long term, HAWQ should support Parquet 2.0 data read/write. > HAWQ should throw error if finding dictionary encoding type for Parquet > --- > > Key: HAWQ-1366 > URL: https://issues.apache.org/jira/browse/HAWQ-1366 > Project: Apache HAWQ > Issue Type: Bug > Components: Storage >Reporter: Lili Ma >Assignee: Ed Espino > Fix For: 2.2.0.0-incubating > > > Since HAWQ is based on Parquet format version 1.0, which does not support > dictionary page, and hawq register may register Parquet format version 2.0 > data into HAWQ, we should throw error if finding unsupported page for column. > Reproduce Steps: > 1. In Hive, create a table and insert into 8 records: > {code} > (hive> create table tt (i int, > > fname varchar(100), > > title varchar(100), > > salary double > > ) > > STORED AS PARQUET; > OK > Time taken: 0.029 seconds > hive> insert into tt values (5,'OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW', > 'Sales',80282.54), > > (7,'UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE','Engineer',10206.65), > > (4,'PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ','Director',63691.23), > > (9,'CTDCDYRURBZMBLNWHQNOQCYFFVULOP','Engineer',63867.44), > > (10,'WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK','Sales',97720.08); > WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the > future versions. Consider using a different execution engine (i.e. spark, > tez) or using Hive 1.X releases. 
> Query ID = malili_20170228173956_f370414c-ddc8-4e6d-99e9-7c1fa1f678d1 > Total jobs = 3 > Launching Job 1 out of 3 > Number of reduce tasks is set to 0 since there's no reduce operator > Job running in-process (local Hadoop) > 2017-02-28 17:39:58,713 Stage-1 map = 100%, reduce = 0% > Ended Job = job_local2046305831_0004 > Stage-4 is selected by condition resolver. > Stage-3 is filtered out by condition resolver. > Stage-5 is filtered out by condition resolver. > Moving data to directory > hdfs://127.0.0.1:8020/user/hive/warehouse/tt/.hive-staging_hive_2017-02-28_17-39-56_806_3518057455919651199-1/-ext-1 > Loading data to table default.tt > MapReduce Jobs Launched: > Stage-Stage-1: HDFS Read: 3945 HDFS Write: 4226 SUCCESS > Total MapReduce CPU Time Spent: 0 msec > OK > Time taken: 1.975 seconds > hive> select * from tt; > OK > 5 OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW Sales 80282.54 > 7 UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE Engineer10206.65 > 4 PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ Director63691.23 > 9 CTDCDYRURBZMBLNWHQNOQCYFFVULOP Engineer63867.44 > 10WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK Sales 97720.08 > Time taken: 0.056 seconds, Fetched: 5 row(s) > {code} > 2. Create table in HAWQ > {code} > CREATE TABLE public.tt > (i int, > fname varchar(100), > title varchar(100), > salary float8) > WITH (appendonly=true,orientation=parquet); > {code} > 3. 
run hawq register > {code} > malilis-MacBook-Pro:Hawq_register malili$ hawq register -d postgres -f > hdfs://localhost:8020/user/hive/warehouse/tt tt > 20170228:17:40:25:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-try > to connect database localhost:5432 postgres > 20170228:17:40:33:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New > file(s) to be registered: > ['hdfs://localhost:8020/user/hive/warehouse/tt/00_0'] > hdfscmd: "hadoop fs -mv hdfs://localhost:8020/user/hive/warehouse/tt/00_0 > hdfs://localhost:8020/hawq_default/16385/16387/49281/1" > 20170228:17:40:41:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Hawq > Register Succeed. > {code} > 4. select from hawq > {code} > postgres=# select * from tt; > i | fname | title | salary > ++---+-- > 5 | OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW | | 80282.54 > 7 | UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE | | 10206.65 > 4 | PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ | | 63691.23 > 9 | CTDCDYRURBZMBLNWHQNOQCYFFVULOP | | 63867.44 > 10 | WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK | | 97720.08 > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (HAWQ-1366) HAWQ should throw error if finding dictionary encoding type for Parquet
Lili Ma created HAWQ-1366: - Summary: HAWQ should throw error if finding dictionary encoding type for Parquet Key: HAWQ-1366 URL: https://issues.apache.org/jira/browse/HAWQ-1366 Project: Apache HAWQ Issue Type: Bug Components: Storage Reporter: Lili Ma Assignee: Ed Espino Fix For: 2.2.0.0-incubating Since HAWQ is based on Parquet format version 1.0, which does not support dictionary page, and hawq register may register Parquet format version 2.0 data into HAWQ, we should throw error if finding unsupported page for column. Reproduce Steps: 1. In Hive, create a table and insert into 8 records: {code} (hive> create table tt (i int, > fname varchar(100), > title varchar(100), > salary double > ) > STORED AS PARQUET; OK Time taken: 0.029 seconds hive> insert into tt values (5,'OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW', 'Sales',80282.54), > (7,'UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE','Engineer',10206.65), > (4,'PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ','Director',63691.23), > (9,'CTDCDYRURBZMBLNWHQNOQCYFFVULOP','Engineer',63867.44), > (10,'WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK','Sales',97720.08); WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. Query ID = malili_20170228173956_f370414c-ddc8-4e6d-99e9-7c1fa1f678d1 Total jobs = 3 Launching Job 1 out of 3 Number of reduce tasks is set to 0 since there's no reduce operator Job running in-process (local Hadoop) 2017-02-28 17:39:58,713 Stage-1 map = 100%, reduce = 0% Ended Job = job_local2046305831_0004 Stage-4 is selected by condition resolver. Stage-3 is filtered out by condition resolver. Stage-5 is filtered out by condition resolver. 
Moving data to directory hdfs://127.0.0.1:8020/user/hive/warehouse/tt/.hive-staging_hive_2017-02-28_17-39-56_806_3518057455919651199-1/-ext-1 Loading data to table default.tt MapReduce Jobs Launched: Stage-Stage-1: HDFS Read: 3945 HDFS Write: 4226 SUCCESS Total MapReduce CPU Time Spent: 0 msec OK Time taken: 1.975 seconds hive> select * from tt; OK 5 OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW Sales 80282.54 7 UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE Engineer10206.65 4 PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ Director63691.23 9 CTDCDYRURBZMBLNWHQNOQCYFFVULOP Engineer63867.44 10 WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK Sales 97720.08 Time taken: 0.056 seconds, Fetched: 5 row(s) {code} 2. Create table in HAWQ {code} CREATE TABLE public.tt (i int, fname varchar(100), title varchar(100), salary float8) WITH (appendonly=true,orientation=parquet); {code} 3. run hawq register {code} malilis-MacBook-Pro:Hawq_register malili$ hawq register -d postgres -f hdfs://localhost:8020/user/hive/warehouse/tt tt 20170228:17:40:25:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-try to connect database localhost:5432 postgres 20170228:17:40:33:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-New file(s) to be registered: ['hdfs://localhost:8020/user/hive/warehouse/tt/00_0'] hdfscmd: "hadoop fs -mv hdfs://localhost:8020/user/hive/warehouse/tt/00_0 hdfs://localhost:8020/hawq_default/16385/16387/49281/1" 20170228:17:40:41:090499 hawqregister:malilis-MacBook-Pro:malili-[INFO]:-Hawq Register Succeed. {code} 4. select from hawq {code} postgres=# select * from tt; i | fname | title | salary ++---+-- 5 | OYLNUQSQIGWDWBKMDQNYUGYXOBDFGW | | 80282.54 7 | UKIPCBGKHDNEEXQHOFGKKFIZGLFNHE | | 10206.65 4 | PTPIRDISZNTWNFRNBPCUKWXYFGSRBQ | | 63691.23 9 | CTDCDYRURBZMBLNWHQNOQCYFFVULOP | | 63867.44 10 | WZQGZJEEVDKOKTPRFKLVCBSBIYTEDK | | 97720.08 {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
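The fix this issue asks for amounts to a page-type guard: when the reader meets a page type it cannot decode (HAWQ's Parquet 1.0 reader has no dictionary pages), fail loudly instead of silently returning empty values for the column, as happens in step 4 above. Python sketch; the names below are illustrative, not HAWQ's actual C symbols in cdbparquetcolumn.c.

```python
# Page types the illustrative reader supports; Parquet 1.0 in this sketch
# means data pages only, no dictionary pages.
SUPPORTED_PAGE_TYPES = {"DATA_PAGE"}

def read_column_page(page_type, column_name):
    # Guard first: an unsupported page type must be an error,
    # not a silently-NULL column.
    if page_type not in SUPPORTED_PAGE_TYPES:
        raise ValueError("unsupported %s page type for Parquet column '%s'"
                         % (page_type, column_name))
    return "decoded:%s" % column_name
```

With this guard the query in step 4 would abort with an error for the dictionary-encoded 'title' column rather than returning blank values.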
[GitHub] incubator-hawq pull request #1152: Hawq 1365. Print out detailed schema info...
Github user wcl14 commented on a diff in the pull request: https://github.com/apache/incubator-hawq/pull/1152#discussion_r103395505 --- Diff: src/backend/catalog/namespace.c --- @@ -1982,6 +1982,13 @@ recomputeNamespacePath(void) elog(DEBUG3, "recompute search_path[%s] when acl_type is ranger", namespace_search_path); } } + else + { + if (aclType == HAWQ_ACL_RANGER && debug_query_string != NULL) + { + last_query_sign = string_hash(debug_query_string, strlen(debug_query_string)); + } + } --- End diff -- How about setting current_query_sign before the if statement 'if (namespaceSearchPathValid && namespaceUser == roleid)' and setting last_query_sign after the if statement? Would this make the return condition clearer?
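The restructuring the reviewer suggests can be modeled roughly as follows, in Python rather than the C of namespace.c: compute the current query's sign before the early-return check, use it in that check, and refresh the cached sign after recomputation. The function shape and hash choice here are assumptions for illustration, not HAWQ's implementation.

```python
import zlib

# Cached sign of the query for which the search path was last recomputed.
last_query_sign = None

def recompute_namespace_path(query_text, path_valid, recompute):
    global last_query_sign
    # Compute current_query_sign up front, before the validity check...
    current_query_sign = zlib.crc32(query_text.encode())
    if path_valid and current_query_sign == last_query_sign:
        return False  # same query, cached search_path still usable
    recompute()
    # ...and update last_query_sign only after the path was recomputed.
    last_query_sign = current_query_sign
    return True
```

Hoisting the sign computation and the cache update out of the nested branches is what makes the skip/recompute condition easy to read at the top of the function.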