[jira] [Updated] (HAWQ-256) Integrate Security with Apache Ranger

2016-07-27 Thread Lili Ma (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lili Ma updated HAWQ-256:
-
Attachment: HAWQRangerSupportDesign.pdf

Hi all,
We have drafted a design for HAWQ Ranger Support. Any comments are welcome. 

Thanks

> Integrate Security with Apache Ranger
> -
>
> Key: HAWQ-256
> URL: https://issues.apache.org/jira/browse/HAWQ-256
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF, Security
>Reporter: Michael Andre Pearce (IG)
>Assignee: Lili Ma
> Fix For: backlog
>
> Attachments: HAWQRangerSupportDesign.pdf
>
>
> Integrate security with Apache Ranger for a unified Hadoop security solution. 





[GitHub] incubator-hawq issue #778: HAWQ-900. Add dependency in PL/R rpm build spec f...

2016-07-27 Thread huor
Github user huor commented on the issue:

https://github.com/apache/incubator-hawq/pull/778
  
+1




[GitHub] incubator-hawq pull request #824: HAWQ-962. Make catalog:type_sanity be able...

2016-07-27 Thread paul-guo-
GitHub user paul-guo- opened a pull request:

https://github.com/apache/incubator-hawq/pull/824

HAWQ-962. Make catalog:type_sanity be able to run with other cases in parallel

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/paul-guo-/incubator-hawq test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/824.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #824


commit 90a47020a804efaac5263dffbdc47527ce4b8fb7
Author: Paul Guo 
Date:   2016-07-28T05:33:38Z

HAWQ-962. Make catalog:type_sanity be able to run with other cases in parallel






[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72567830
  
--- Diff: src/test/feature/parallel-run-feature-test.sh ---
@@ -0,0 +1,49 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please source greenplum_path.sh before running feature tests."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
--- End diff --

Others look good.




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72567767
  
--- Diff: src/test/feature/parallel-run-feature-test.sh ---
@@ -0,0 +1,49 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please source greenplum_path.sh before running feature tests."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
--- End diff --

HAWQ_PASSWORD=${PGPASSWORD:-""}
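A minimal sketch of how that could be wired into the psql calls, assuming
the PG*-style defaulting used above (psql reads PGPASSWORD from the
environment; the guard itself is illustrative, not part of the patch):

    HAWQ_PASSWORD=${PGPASSWORD:-""}
    # psql takes the password from the PGPASSWORD environment variable,
    # so export it only when a password was actually provided.
    if [ -n "$HAWQ_PASSWORD" ]; then
      export PGPASSWORD="$HAWQ_PASSWORD"
    fi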




[GitHub] incubator-hawq issue #823: HAWQ-905. add init file for temp table test case

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/823
  
+1




[GitHub] incubator-hawq issue #823: HAWQ-905. add init file for temp table test case

2016-07-27 Thread ztao1987
Github user ztao1987 commented on the issue:

https://github.com/apache/incubator-hawq/pull/823
  
+1




[GitHub] incubator-hawq pull request #823: HAWQ-905. add init file for temp table tes...

2016-07-27 Thread wengyanqing
GitHub user wengyanqing opened a pull request:

https://github.com/apache/incubator-hawq/pull/823

HAWQ-905. add init file for temp table test case



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wengyanqing/incubator-hawq HAWQ-905-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/823.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #823


commit 261aec299acf54abb94ffa665998a50f7bb7a936
Author: ivan 
Date:   2016-07-28T05:34:52Z

HAWQ-905. add init file for temp table test case






[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72565256
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
--- End diff --

GOT




[jira] [Created] (HAWQ-962) Make catalog:type_sanity be able to run in parallel

2016-07-27 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-962:
-

 Summary: Make catalog:type_sanity be able to run in parallel
 Key: HAWQ-962
 URL: https://issues.apache.org/jira/browse/HAWQ-962
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Lei Chang


The test case queries some database-level system tables. When parallel google
test running is enabled (see HAWQ-955: Add scripts for feature test running in
parallel), the test could fail. We need to create a new database in the test
case to avoid this.
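A minimal sketch of that isolation idea, assuming the test can shell out to
psql (the database name and SQL file below are illustrative, not the names
the actual fix will use):

{code}
#!/bin/bash
# Run the type_sanity catalog queries against a private database, so objects
# created concurrently by other feature tests cannot show up in the results.
TEST_DB="type_sanity_test_db"
psql -d postgres -c "create database $TEST_DB;" || exit 1
psql -d "$TEST_DB" -f type_sanity.sql   # the catalog checks run here
psql -d postgres -c "drop database $TEST_DB;"
{code}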





[jira] [Updated] (HAWQ-962) Make catalog:type_sanity be able to run with other cases in parallel

2016-07-27 Thread Paul Guo (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Guo updated HAWQ-962:
--
Summary: Make catalog:type_sanity be able to run with other cases in parallel  (was: Make catalog:type_sanity be able to run in parallel)

> Make catalog:type_sanity be able to run with other cases in parallel
> 
>
> Key: HAWQ-962
> URL: https://issues.apache.org/jira/browse/HAWQ-962
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
>
> The test case queries some database-level system tables. When parallel
> google test running is enabled (see HAWQ-955: Add scripts for feature test
> running in parallel), the test could fail. We need to create a new database
> in the test case to avoid this.





[jira] [Updated] (HAWQ-961) Dispatch session user id (not current BOOTSTRAP_SUPERUSERID) on master to segments

2016-07-27 Thread Paul Guo (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Paul Guo updated HAWQ-961:
--
Fix Version/s: 2.0.1.0-incubating

> Dispatch session user id (not current BOOTSTRAP_SUPERUSERID) on master to 
> segments
> --
>
> Key: HAWQ-961
> URL: https://issues.apache.org/jira/browse/HAWQ-961
> Project: Apache HAWQ
>  Issue Type: Bug
>Reporter: Paul Guo
>Assignee: Lei Chang
> Fix For: 2.0.1.0-incubating
>
>
> This does not affect the functionality or security of HAWQ, but some
> users want the session user id info available on segments for their own
> purposes.





[jira] [Created] (HAWQ-961) Dispatch session user id (not current BOOTSTRAP_SUPERUSERID) on master to segments

2016-07-27 Thread Paul Guo (JIRA)
Paul Guo created HAWQ-961:
-

 Summary: Dispatch session user id (not current BOOTSTRAP_SUPERUSERID) on master to segments
 Key: HAWQ-961
 URL: https://issues.apache.org/jira/browse/HAWQ-961
 Project: Apache HAWQ
  Issue Type: Bug
Reporter: Paul Guo
Assignee: Lei Chang


This does not affect the functionality or security of HAWQ, but some users
want the session user id info available on segments for their own purposes.





[jira] [Resolved] (HAWQ-905) Add feature test for temp table with new test framework

2016-07-27 Thread Ivan Weng (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan Weng resolved HAWQ-905.

Resolution: Fixed

> Add feature test for temp table with new test framework
> ---
>
> Key: HAWQ-905
> URL: https://issues.apache.org/jira/browse/HAWQ-905
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Ivan Weng
>Assignee: Ivan Weng
> Fix For: 2.0.1.0-incubating
>
>






[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72559287
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
+
+run_sql() {
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT -c "$1" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "$1 failed."
+    exit 1
+  fi
+}
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
+  
+  TEST_DB_NAME="hawq_feature_test"
+  
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "create database $TEST_DB_NAME;" > /dev/null 2>&1
+  run_sql "alter database $TEST_DB_NAME set lc_messages to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_monetary to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_numeric to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_time to 'C';"
+  run_sql "alter database $TEST_DB_NAME set timezone_abbreviations to 'Default';"
+  run_sql "alter database $TEST_DB_NAME set timezone to 'PST8PDT';"
+  run_sql "alter database $TEST_DB_NAME set datestyle to 'postgres,MDY';"
+  PGDATABASE=$TEST_DB_NAME
+}
+
+run_feature_test() {
+  if [ $# -lt 2 ] || [ $1 == "--help" ]; then
--- End diff --

Oh, I think I will change the first condition; otherwise it is a little
tricky.




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72559152
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
+
+run_sql() {
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT -c "$1" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "$1 failed."
+    exit 1
+  fi
+}
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
+  
+  TEST_DB_NAME="hawq_feature_test"
+  
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "create database $TEST_DB_NAME;" > /dev/null 2>&1
+  run_sql "alter database $TEST_DB_NAME set lc_messages to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_monetary to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_numeric to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_time to 'C';"
+  run_sql "alter database $TEST_DB_NAME set timezone_abbreviations to 'Default';"
+  run_sql "alter database $TEST_DB_NAME set timezone to 'PST8PDT';"
+  run_sql "alter database $TEST_DB_NAME set datestyle to 'postgres,MDY';"
+  PGDATABASE=$TEST_DB_NAME
+}
+
+run_feature_test() {
+  if [ $# -lt 2 ] || [ $1 == "--help" ]; then
--- End diff --

I think "program --help" could be caught by "[ $# -lt 2]"




[GitHub] incubator-hawq issue #822: HAWQ-905. Add feature test for temp table with ne...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/822
  
+1




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72558667
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
--- End diff --

Actually for a direct run, you need to source greenplum_path.sh first, so
"source greenplum_path.sh" could be the same prerequisite for both the direct
run and the parallel run.
We could check GPHOME just to simply (but not strictly) judge whether users
have sourced that file, rather than sourcing that file ourselves in the
code.
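For instance, a minimal sketch of that kind of guard (the -z test and the
message wording are only illustrative):

    # Detect a missing "source greenplum_path.sh" rather than performing it
    # here on behalf of the user.
    if [ -z "${GPHOME:-}" ]; then
      echo "GPHOME is not set; please source greenplum_path.sh first." >&2
      exit 1
    fi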

Besides, another benefit is that you could keep the doc in README.MD neat
:-) See the existing related doc below.

"Make sure HAWQ is running correctly. If not, init or start HAWQ at first.
Load environment configuration by running source 
$INSTALL_PREFIX/greenplum_path.sh.
Load hdfs configuration. For example, export 
HADOOP_HOME=/Users/wuhong/hadoop-2.7.2 && export 
PATH=${PATH}:${HADOOP_HOME}/bin. Since some test cases need hdfs and hadoop 
command, just ensure these commands work before running. Otherwise you will get 
failure."




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72557899
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
--- End diff --

In this logic, users do not need to know anything about sourcing when
running feature tests. To run this script, the only thing users need to do
is set GPHOME. So I think the info is enough. Thoughts?




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72557615
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
--- End diff --

Since




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72557599
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
--- End diff --

So we can not run a command like psql -c "xx" -U.




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72557537
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
--- End diff --

Ok




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72557444
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
+
+run_sql() {
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT -c "$1" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "$1 failed."
+    exit 1
+  fi
+}
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
+  
+  TEST_DB_NAME="hawq_feature_test"
+  
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "create database $TEST_DB_NAME;" > /dev/null 2>&1
+  run_sql "alter database $TEST_DB_NAME set lc_messages to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_monetary to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_numeric to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_time to 'C';"
+  run_sql "alter database $TEST_DB_NAME set timezone_abbreviations to 'Default';"
+  run_sql "alter database $TEST_DB_NAME set timezone to 'PST8PDT';"
+  run_sql "alter database $TEST_DB_NAME set datestyle to 'postgres,MDY';"
+  PGDATABASE=$TEST_DB_NAME
+}
+
+run_feature_test() {
+  if [ $# -lt 2 ] || [ $1 == "--help" ]; then
--- End diff --

Why should it be removed?




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72557467
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
+
+run_sql() {
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT -c "$1" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "$1 failed."
+    exit 1
+  fi
+}
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
+  
+  TEST_DB_NAME="hawq_feature_test"
--- End diff --

Ok




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread xunzhang
Github user xunzhang commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72557500
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
+
+run_sql() {
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT -c "$1" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "$1 failed."
+    exit 1
+  fi
+}
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
--- End diff --

It is!




[GitHub] incubator-hawq issue #818: HAWQ-955. Add scriptS for feature test running in...

2016-07-27 Thread xunzhang
Github user xunzhang commented on the issue:

https://github.com/apache/incubator-hawq/pull/818
  
Actually I don't want to update README.md this time. We are not sure yet
that the script works well, so users can still run these tests in the old
way until we are sure it works.

I like the name, I will update soon.




[GitHub] incubator-hawq pull request #822: HAWQ-905. Add feature test for temp table ...

2016-07-27 Thread wengyanqing
GitHub user wengyanqing opened a pull request:

https://github.com/apache/incubator-hawq/pull/822

HAWQ-905.  Add feature test for temp table with new test framework

1. sqlutility to support schema/database mode
2. temp table test case with new feature framework

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/wengyanqing/incubator-hawq HAWQ-905

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/822.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #822


commit ac7b7bc558b9be3ea61ec43dbbcd0696d8d6e471
Author: ivan 
Date:   2016-07-28T02:26:25Z

HAWQ-905  Add feature test for temp table with new test framework

commit 5e54b4d806c4b3d252d17ec14d75dbf804df25bd
Author: ivan 
Date:   2016-07-28T02:29:08Z

HAWQ-905  Add feature test for temp table with new test framework






[GitHub] incubator-hawq issue #818: HAWQ-955. Add scriptS for feature test running in...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/818
  
I'm wondering whether we need to rename the shell script file to reflect
its parallel-run feature.




[GitHub] incubator-hawq issue #818: HAWQ-955. Add scriptS for feature test running in...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on the issue:

https://github.com/apache/incubator-hawq/pull/818
  
By the way, you might need to update README.MD along with the change.




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72555783
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
--- End diff --

Why ignore HAWQ_PASSWORD? It is easier to obtain in shell.




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r7280
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
+
+run_sql() {
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT -c "$1" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "$1 failed."
+    exit 1
+  fi
+}
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
+  
+  TEST_DB_NAME="hawq_feature_test"
+  
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT \
+    -c "create database $TEST_DB_NAME;" > /dev/null 2>&1
+  run_sql "alter database $TEST_DB_NAME set lc_messages to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_monetary to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_numeric to 'C';"
+  run_sql "alter database $TEST_DB_NAME set lc_time to 'C';"
+  run_sql "alter database $TEST_DB_NAME set timezone_abbreviations to 'Default';"
+  run_sql "alter database $TEST_DB_NAME set timezone to 'PST8PDT';"
+  run_sql "alter database $TEST_DB_NAME set datestyle to 'postgres,MDY';"
+  PGDATABASE=$TEST_DB_NAME
+}
+
+run_feature_test() {
+  if [ $# -lt 2 ] || [ $1 == "--help" ]; then
--- End diff --

Add a line in the help output to explain what the script is for (running
the tests in parallel)?

I suspect '$1 == "--help"' could be removed.
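A minimal sketch of both suggestions together (the usage() helper and its
wording are illustrative, not part of the patch):

    # A bare "program --help" invocation has only one argument, so the
    # argument-count check already catches it and the explicit --help test
    # becomes redundant.
    usage() {
      echo "Usage: $(basename "$0") <test-binary> <gtest-filter>"
      echo "Runs the feature tests in parallel against a dedicated test database."
      exit 1
    }

    if [ $# -lt 2 ]; then
      usage
    fi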




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72555003
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
+
+run_sql() {
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT -c "$1" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "$1 failed."
+    exit 1
+  fi
+}
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
+  
+  TEST_DB_NAME="hawq_feature_test"
--- End diff --

I'm wondering whether to change the name from hawq_feature_test to
hawq_feature_test_db.




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72554974
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
+
+PSQL=${GPHOME}/bin/psql
+HAWQ_DB=${PGDATABASE:-"postgres"}
+HAWQ_HOST=${PGHOST:-"localhost"}
+HAWQ_PORT=${PGPORT:-"5432"}
+
+run_sql() {
+  $PSQL -d $HAWQ_DB -h $HAWQ_HOST -p $HAWQ_PORT -c "$1" > /dev/null 2>&1
+  if [ $? -ne 0 ]; then
+    echo "$1 failed."
+    exit 1
+  fi
+}
+
+init_hawq_test() {
+  source "${GPHOME}/greenplum_path.sh"
--- End diff --

I think "source greenplum_path.sh" should be run before running the case? 
What is you opinion?




[GitHub] incubator-hawq pull request #818: HAWQ-955. Add scriptS for feature test run...

2016-07-27 Thread paul-guo-
Github user paul-guo- commented on a diff in the pull request:

https://github.com/apache/incubator-hawq/pull/818#discussion_r72554617
  
--- Diff: src/test/feature/run-feature-test.sh ---
@@ -0,0 +1,47 @@
+#! /bin/bash
+
+if [ x$GPHOME == 'x' ]; then
+  echo "Please export GPHOME variable."
+  exit 0
+fi
--- End diff --

I think we need more words. How about this?

Please export the environment variable GPHOME before running.
Typically you can run "source greenplum_path.sh" under
the hawq installation path.
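A minimal sketch of the guard with that wording (the non-zero exit code is
my own suggestion here, not part of the patch):

    if [ -z "${GPHOME:-}" ]; then
      echo "Please export the environment variable GPHOME before running."
      echo "Typically you can run \"source greenplum_path.sh\" under the hawq installation path."
      exit 1
    fi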





[GitHub] incubator-hawq pull request #821: HAWQ-931. ORC optimized profile for PPD/CP

2016-07-27 Thread shivzone
GitHub user shivzone opened a pull request:

https://github.com/apache/incubator-hawq/pull/821

HAWQ-931. ORC optimized profile for PPD/CP

First version of the ORC optimized profile, with support for filter pushdown
and column projection.
We are still using the Hive 1.2.x ORC APIs.
Known issues: operators on the String data type don't work.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shivzone/incubator-hawq HAWQ-931

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/821.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #821


commit 3c23716c9560f3584a2c2c24523494e3be713ab6
Author: Shivram Mani 
Date:   2016-07-28T00:48:59Z

HAWQ-931. ORC optimized profile for PPD/CP






[GitHub] incubator-hawq issue #731: HAWQ-830. Fix wrong result in CTE query due to CT...

2016-07-27 Thread huor
Github user huor commented on the issue:

https://github.com/apache/incubator-hawq/pull/731
  
@vraghavan78, yes, you are correct.

gp_cte_sharing was introduced to work around a deadlock in shared scan for
some CTE queries. For example, https://issues.apache.org/jira/browse/HAWQ-852
is a hang issue when gp_cte_sharing is on.

However, if we disable gp_cte_sharing (especially by default), we get wrong
results when running some other CTE queries. One common root cause is that,
with gp_cte_sharing off, some of the CTE expressions are evaluated multiple
times in one single query; if the CTE expression itself is somehow
"volatile", it is obvious that wrong (or at least inconsistent) results will
be generated.
For example, https://issues.apache.org/jira/browse/HAWQ-830 and
https://issues.apache.org/jira/browse/HAWQ-852 are two issues that generate
wrong results with CTE queries.

My recommendation is: we need to enable gp_cte_sharing by default (actually
this GUC should ideally be removed), and then fix the hang issue in shared
scan. Your thoughts?
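A minimal illustration of the multiple-evaluation hazard, using a volatile
function via psql (the query is made up for this example):

    # The CTE r calls random(), which is volatile. If each of the two
    # references to r triggers a separate evaluation, as can happen with
    # gp_cte_sharing off, the two values can differ and "consistent" can
    # come back false.
    psql -d postgres -c "
      with r as (select random() as v)
      select (select v from r) = (select v from r) as consistent;
    "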




[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it
is not for external tables.
This command involves communication with the underlying HDFS layer, which
isn't under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:

1. Create two external tables:
{code}
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}

2. Copy a big file (~1 GB) from the local fs to store_t:
{code}
COPY store_table from '/tmp/data/1Gb.txt' DELIMITER ',';
{code}

3. Restart HDFS while COPY is in progress.

4. Run an HDFS report; some files are still open for write:
{code}
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016

/data/15137_0 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_1 0 bytes, 
1 block(s), OPENFORWRITE: /data/15137_2 0 bytes, 1 block(s), OPENFORWRITE: 
/data/15137_3 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_4 0 bytes, 1 
block(s), OPENFORWRITE: /data/15137_5 0 bytes, 1 block(s), OPENFORWRITE: 
./hbase/MasterProcWALs/state-0010.log 0 bytes, 0 block(s), 
OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507..meta.1469656315513.meta
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507.default.1469656310882
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60021,1469656309708/192.168.97.183%2C60021%2C1469656309708.default.1469656312207
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60022,1469656311015/192.168.97.183%2C60022%2C1469656311015.default.1469656313131
 83 bytes, 1 block(s), OPENFORWRITE: 


...
Status: HEALTHY
 Total size:                  185557141 B
 Total dirs:                  350
 Total files:                 30176
 Total symlinks:              0
 Total blocks (validated):    173 (avg. block size 1072584 B)
 Minimally replicated blocks: 163 (94.21965 %)
 Over-replicated blocks:      0 (0.0 %)
 Under-replicated blocks:     0 (0.0 %)
 Mis-replicated blocks:       0 (0.0 %)
 Default replication factor:  3
 Average block replication:   2.8265896
 Corrupt blocks:              0
 Missing replicas:            0 (0.0 %)
 Number of data-nodes:        3
 Number of racks:             1
FSCK ended at Wed Jul 27 15:06:25 PDT 2016 in 1163 milliseconds


The filesystem under path '/' is HEALTHY
{code}

5. Try to read data from the table:

{code}
# select * from read_t ;
ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Cannot obtain block length for 
LocatedBlock{BP-529696253-10.64.5.216-1469480876771:blk_1073742438_1639; 
getBlockSize()=7483392; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:50010,DS-ccd53310-9584-46d3-910d-0178f2a5e9fd,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:50011,DS-d8dccde3-106e-4ca6-8a65-9a371d570c25,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:50012,DS-148d1a1d-3b91-4a5f-af40-0a1a5c6c88fc,DISK]]}
description   The server encountered an internal error that prevented it 
from fulfilling this request.exception   java.io.IOException: Cannot obtain 
block length for 
LocatedBlock{BP-529696253-10.64.5.216-1469480876771:blk_1073742438_1639; 
getBlockSize()=7483392; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:50010,DS-ccd53310-9584-46d3-910d-0178f2a5e9fd,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:50011,DS-d8dccde3-106e-4ca6-8a65-9a371d570c25,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:50012,DS-148d1a1d-3b91-4a5f-af40-0a1a5c6c88fc,DISK]]}
 (libchurl.c:878)  (seg1 localhost:4 pid=31828)
DETAIL:  External table read_t, file 
pxf://127.0.0.1:51200/data?Profile=HdfsTextSimple
{code}



Behavior of COPY command for native Hawq tables is transactional, if somethi

[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it
is not for external tables.
This command involves communication with the underlying HDFS layer, which
isn't under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:

1. Create two external tables:
{code}
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}

2. Copy a big file (~1 GB) from the local fs to store_t:
{code}
COPY store_table from '/tmp/data/1Gb.txt' DELIMITER ',';
{code}

3. Restart HDFS while COPY is in progress.

4. Run an HDFS report; some files are still open for write:
{code}
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016

/data/15137_0 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_1 0 bytes, 
1 block(s), OPENFORWRITE: /data/15137_2 0 bytes, 1 block(s), OPENFORWRITE: 
/data/15137_3 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_4 0 bytes, 1 
block(s), OPENFORWRITE: /data/15137_5 0 bytes, 1 block(s), OPENFORWRITE: 
./hbase/MasterProcWALs/state-0010.log 0 bytes, 0 block(s), 
OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507..meta.1469656315513.meta
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507.default.1469656310882
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60021,1469656309708/192.168.97.183%2C60021%2C1469656309708.default.1469656312207
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60022,1469656311015/192.168.97.183%2C60022%2C1469656311015.default.1469656313131
 83 bytes, 1 block(s), OPENFORWRITE: 


...
Status: HEALTHY
 Total size:                  185557141 B
 Total dirs:                  350
 Total files:                 30176
 Total symlinks:              0
 Total blocks (validated):    173 (avg. block size 1072584 B)
 Minimally replicated blocks: 163 (94.21965 %)
 Over-replicated blocks:      0 (0.0 %)
 Under-replicated blocks:     0 (0.0 %)
 Mis-replicated blocks:       0 (0.0 %)
 Default replication factor:  3
 Average block replication:   2.8265896
 Corrupt blocks:              0
 Missing replicas:            0 (0.0 %)
 Number of data-nodes:        3
 Number of racks:             1
FSCK ended at Wed Jul 27 15:06:25 PDT 2016 in 1163 milliseconds


The filesystem under path '/' is HEALTHY
{code}

5. Try to read data from the table:

{code}
# select * from read_t ;
ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
report   message   Cannot obtain block length for 
LocatedBlock{BP-529696253-10.64.5.216-1469480876771:blk_1073742438_1639; 
getBlockSize()=7483392; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:50010,DS-ccd53310-9584-46d3-910d-0178f2a5e9fd,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:50011,DS-d8dccde3-106e-4ca6-8a65-9a371d570c25,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:50012,DS-148d1a1d-3b91-4a5f-af40-0a1a5c6c88fc,DISK]]}
description   The server encountered an internal error that prevented it 
from fulfilling this request.exception   java.io.IOException: Cannot obtain 
block length for 
LocatedBlock{BP-529696253-10.64.5.216-1469480876771:blk_1073742438_1639; 
getBlockSize()=7483392; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:50010,DS-ccd53310-9584-46d3-910d-0178f2a5e9fd,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:50011,DS-d8dccde3-106e-4ca6-8a65-9a371d570c25,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:50012,DS-148d1a1d-3b91-4a5f-af40-0a1a5c6c88fc,DISK]]}
 (libchurl.c:878)  (seg1 localhost:4 pid=31828)
DETAIL:  External table read_t, file 
pxf://127.0.0.1:51200/data?Profile=HdfsTextSimple
{code}

  was:
As for now COPY command is transactional for native HAWQ tables, but it

[jira] [Updated] (HAWQ-952) Clean up COPYRIGHT file and review NOTICE File

2016-07-27 Thread Goden Yao (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Goden Yao updated HAWQ-952:
---
Summary: Clean up COPYRIGHT file and review NOTICE File  (was: Merge COPYRIGHT file to NOTICE File)

> Clean up COPYRIGHT file and review NOTICE File
> --
>
> Key: HAWQ-952
> URL: https://issues.apache.org/jira/browse/HAWQ-952
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> Per mentor's suggestion, we should merge the COPYRIGHT file into the NOTICE
> file rather than keep a separate file.





[jira] [Updated] (HAWQ-952) Clean up COPYRIGHT file and review NOTICE File

2016-07-27 Thread Goden Yao (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Goden Yao updated HAWQ-952:
---
Description: 
Per mentor's suggestion, we should not have a separate COPYRIGHT file, and we
need to review which copyrights should go into the NOTICE file and in what
form (this needs more clarification and discussion with mentors).

  was:Per mentor's suggestion, we should merge the COPYRIGHT file into the
NOTICE file rather than keep a separate file.


> Clean up COPYRIGHT file and review NOTICE File
> --
>
> Key: HAWQ-952
> URL: https://issues.apache.org/jira/browse/HAWQ-952
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> Per mentor's suggestion, we should not have a separate COPYRIGHT file, and
> we need to review which copyrights should go into the NOTICE file and in
> what form (this needs more clarification and discussion with mentors).





[jira] [Updated] (HAWQ-960) BUILD_INSTRUCTIONS.md should contain more than just a link

2016-07-27 Thread Goden Yao (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Goden Yao updated HAWQ-960:
---
Fix Version/s: 2.0.0.0-incubating  (was: 2.0.1.0-incubating)

> BUILD_INSTRUCTIONS.md should contain more than just a link
> --
>
> Key: HAWQ-960
> URL: https://issues.apache.org/jira/browse/HAWQ-960
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, Documentation
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> From [~jmclean] IPMC Release Candidate Review Feedback:
> {quote}
> I expected it in BUILD_INSTRUCTIONS.md but it just contains a link to the 
> wiki page, but anywhere in the release is fine.
> {quote}





[jira] [Created] (HAWQ-960) BUILD_INSTRUCTIONS.md should contain more than just a link

2016-07-27 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-960:
--

 Summary: BUILD_INSTRUCTIONS.md should contain more than just a link
 Key: HAWQ-960
 URL: https://issues.apache.org/jira/browse/HAWQ-960
 Project: Apache HAWQ
  Issue Type: Task
  Components: Build, Documentation
Reporter: Goden Yao
Assignee: Lei Chang


From [~jmclean] IPMC Release Candidate Review Feedback:
{quote}
I expected it in BUILD_INSTRUCTIONS.md but it just contains a link to the wiki 
page, but anywhere in the release is fine.
{quote}





[jira] [Updated] (HAWQ-960) BUILD_INSTRUCTIONS.md should contain more than just a link

2016-07-27 Thread Goden Yao (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Goden Yao updated HAWQ-960:
---
Fix Version/s: 2.0.1.0-incubating

> BUILD_INSTRUCTIONS.md should contain more than just a link
> --
>
> Key: HAWQ-960
> URL: https://issues.apache.org/jira/browse/HAWQ-960
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, Documentation
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> From [~jmclean] IPMC Release Candidate Review Feedback:
> {quote}
> I expected it in BUILD_INSTRUCTIONS.md but it just contains a link to the 
> wiki page, but anywhere in the release is fine.
> {quote}





[jira] [Updated] (HAWQ-960) BUILD_INSTRUCTIONS.md should contain more than just a link

2016-07-27 Thread Goden Yao (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Goden Yao updated HAWQ-960:
---
Priority: Minor  (was: Major)

> BUILD_INSTRUCTIONS.md should contain more than just a link
> --
>
> Key: HAWQ-960
> URL: https://issues.apache.org/jira/browse/HAWQ-960
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build, Documentation
>Reporter: Goden Yao
>Assignee: Lei Chang
>Priority: Minor
> Fix For: 2.0.0.0-incubating
>
>
> From [~jmclean] IPMC Release Candidate Review Feedback:
> {quote}
> I expected it in BUILD_INSTRUCTIONS.md but it just contains a link to the 
> wiki page, but anywhere in the release is fine.
> {quote}





[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As for now COPY command is transactional for native HAWQ tables, but it's not 
for external tables.
This command involves communication with underlying HDFS layer which isn't 
under HAWQ's control.
If something happens to HDFS during COPY data in table ending up being 
corrupted.

STR:

1. Create two external tables:
{code}
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}

2. Copy a big file (~1 GB) from the local fs to store_t:
{code}
COPY store_table from '/tmp/data/1Gb.txt' DELIMITER ',';
{code}

3. Restart HDFS while COPY is in progress.

4. Run an HDFS report; some files are still open for write:
{code}
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016

/data/15137_0 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_1 0 bytes, 
1 block(s), OPENFORWRITE: /data/15137_2 0 bytes, 1 block(s), OPENFORWRITE: 
/data/15137_3 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_4 0 bytes, 1 
block(s), OPENFORWRITE: /data/15137_5 0 bytes, 1 block(s), OPENFORWRITE: 
./hbase/MasterProcWALs/state-0010.log 0 bytes, 0 block(s), 
OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507..meta.1469656315513.meta
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507.default.1469656310882
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60021,1469656309708/192.168.97.183%2C60021%2C1469656309708.default.1469656312207
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60022,1469656311015/192.168.97.183%2C60022%2C1469656311015.default.1469656313131
 83 bytes, 1 block(s), OPENFORWRITE: 


...
Status: HEALTHY
 Total size:                  185557141 B
 Total dirs:                  350
 Total files:                 30176
 Total symlinks:              0
 Total blocks (validated):    173 (avg. block size 1072584 B)
 Minimally replicated blocks: 163 (94.21965 %)
 Over-replicated blocks:      0 (0.0 %)
 Under-replicated blocks:     0 (0.0 %)
 Mis-replicated blocks:       0 (0.0 %)
 Default replication factor:  3
 Average block replication:   2.8265896
 Corrupt blocks:              0
 Missing replicas:            0 (0.0 %)
 Number of data-nodes:        3
 Number of racks:             1
FSCK ended at Wed Jul 27 15:06:25 PDT 2016 in 1163 milliseconds


The filesystem under path '/' is HEALTHY
{code}


  was:
As of now, the COPY command is transactional for native HAWQ tables, but it
is not for external tables.
This command involves communication with the underlying HDFS layer, which
isn't under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1. Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
# Copy a big file (~1 GB) from the local fs to store_t:
COPY store_table from '/tmp/data/1Gb.txt' DELIMITER ',';

# Restart HDFS while COPY is in progress.

# Run an HDFS report; some files are still open for write:
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016
.

[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

[ https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it
is not for external tables.
This command involves communication with the underlying HDFS layer, which
isn't under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1. Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
# Copy a big file (~1 GB) from the local FS to store_t:
COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';

# Restart HDFS while COPY is in progress.

# Run an HDFS report; some files are still open for write:
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016

/data/15137_0 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_1 0 bytes, 
1 block(s), OPENFORWRITE: /data/15137_2 0 bytes, 1 block(s), OPENFORWRITE: 
/data/15137_3 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_4 0 bytes, 1 
block(s), OPENFORWRITE: /data/15137_5 0 bytes, 1 block(s), OPENFORWRITE: 
./hbase/MasterProcWALs/state-0010.log 0 bytes, 0 block(s), 
OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507..meta.1469656315513.meta
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507.default.1469656310882
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60021,1469656309708/192.168.97.183%2C60021%2C1469656309708.default.1469656312207
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60022,1469656311015/192.168.97.183%2C60022%2C1469656311015.default.1469656313131
 83 bytes, 1 block(s), OPENFORWRITE: 


...
Status:
 HEALTHY
 Total size:185557141 B
 Total dirs:350
 Total files:   30176
 Total symlinks:0
 Total blocks (validated):  173 (avg. block size 1072584 B)
 Minimally replicated blocks:   163 (94.21965 %)
 Over-replicated blocks:0 (0.0 %)
 Under-replicated blocks:   0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor:3
 Average block replication: 2.8265896
 Corrupt blocks:0
 Missing replicas:  0 (0.0 %)
 Number of data-nodes:  3
 Number of racks:   1
FSCK ended at Wed Jul 27 15:06:25 PDT 2016 in 1163 milliseconds


The filesystem under path '/' is HEALTHY


  was:
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
# Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
# Copy a big file (~1 GB) from the local FS to store_t:
COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';

# Restart HDFS while COPY is in progress.

# Run an HDFS report; some files are still open for write:
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016
...

[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
# Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
# Copy a big file (~1 GB) from the local FS to store_t:
COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';

# Restart HDFS while COPY is in progress.

# Run an HDFS report; some files are still open for write:
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016

/data/15137_0 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_1 0 bytes, 
1 block(s), OPENFORWRITE: /data/15137_2 0 bytes, 1 block(s), OPENFORWRITE: 
/data/15137_3 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_4 0 bytes, 1 
block(s), OPENFORWRITE: /data/15137_5 0 bytes, 1 block(s), OPENFORWRITE: 
./hbase/MasterProcWALs/state-0010.log 0 bytes, 0 block(s), 
OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507..meta.1469656315513.meta
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507.default.1469656310882
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60021,1469656309708/192.168.97.183%2C60021%2C1469656309708.default.1469656312207
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60022,1469656311015/192.168.97.183%2C60022%2C1469656311015.default.1469656313131
 83 bytes, 1 block(s), OPENFORWRITE: 


...
Status:
 HEALTHY
 Total size:185557141 B
 Total dirs:350
 Total files:   30176
 Total symlinks:0
 Total blocks (validated):  173 (avg. block size 1072584 B)
 Minimally replicated blocks:   163 (94.21965 %)
 Over-replicated blocks:0 (0.0 %)
 Under-replicated blocks:   0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor:3
 Average block replication: 2.8265896
 Corrupt blocks:0
 Missing replicas:  0 (0.0 %)
 Number of data-nodes:  3
 Number of racks:   1
FSCK ended at Wed Jul 27 15:06:25 PDT 2016 in 1163 milliseconds


The filesystem under path '/' is HEALTHY


  was:
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
# Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
# Copy a big file (~1 GB) from the local FS to store_t:
COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';
# Restart HDFS while COPY is in progress.
# Run an HDFS report; some files are still open for write:
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016
..

[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
# Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
# Copy a big file (~1 GB) from the local FS to store_t:
COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';
# Restart HDFS while COPY is in progress.
# Run an HDFS report; some files are still open for write:
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016

/data/15137_0 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_1 0 bytes, 
1 block(s), OPENFORWRITE: /data/15137_2 0 bytes, 1 block(s), OPENFORWRITE: 
/data/15137_3 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_4 0 bytes, 1 
block(s), OPENFORWRITE: /data/15137_5 0 bytes, 1 block(s), OPENFORWRITE: 
./hbase/MasterProcWALs/state-0010.log 0 bytes, 0 block(s), 
OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507..meta.1469656315513.meta
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507.default.1469656310882
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60021,1469656309708/192.168.97.183%2C60021%2C1469656309708.default.1469656312207
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60022,1469656311015/192.168.97.183%2C60022%2C1469656311015.default.1469656313131
 83 bytes, 1 block(s), OPENFORWRITE: 


...
Status:
 HEALTHY
 Total size:185557141 B
 Total dirs:350
 Total files:   30176
 Total symlinks:0
 Total blocks (validated):  173 (avg. block size 1072584 B)
 Minimally replicated blocks:   163 (94.21965 %)
 Over-replicated blocks:0 (0.0 %)
 Under-replicated blocks:   0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor:3
 Average block replication: 2.8265896
 Corrupt blocks:0
 Missing replicas:  0 (0.0 %)
 Number of data-nodes:  3
 Number of racks:   1
FSCK ended at Wed Jul 27 15:06:25 PDT 2016 in 1163 milliseconds


The filesystem under path '/' is HEALTHY


  was:
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1) Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
2) Copy a big file (~1 GB) from the local FS to store_t:
COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';
3) Restart HDFS while COPY is in progress.
4) Run an HDFS report; some files are still open for write:
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016

[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1) Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
2) Copy a big file (~1 GB) from the local FS to store_t:
COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';
3) Restart HDFS while COPY is in progress.
4) Run an HDFS report; some files are still open for write:
hdfs fsck / -openforwrite
Picked up _JAVA_OPTIONS: -Xmx2048m -XX:MaxPermSize=512m -Djava.awt.headless=true
16/07/27 15:06:23 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Connecting to namenode via 
http://0.0.0.0:50070/fsck?ugi=adiachenko&openforwrite=1&path=%2F
FSCK started by adiachenko (auth:SIMPLE) from /127.0.0.1 for path / at Wed Jul 
27 15:06:24 PDT 2016

/data/15137_0 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_1 0 bytes, 
1 block(s), OPENFORWRITE: /data/15137_2 0 bytes, 1 block(s), OPENFORWRITE: 
/data/15137_3 0 bytes, 1 block(s), OPENFORWRITE: /data/15137_4 0 bytes, 1 
block(s), OPENFORWRITE: /data/15137_5 0 bytes, 1 block(s), OPENFORWRITE: 
./hbase/MasterProcWALs/state-0010.log 0 bytes, 0 block(s), 
OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507..meta.1469656315513.meta
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60020,1469656307507/192.168.97.183%2C60020%2C1469656307507.default.1469656310882
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60021,1469656309708/192.168.97.183%2C60021%2C1469656309708.default.1469656312207
 83 bytes, 1 block(s), OPENFORWRITE: 
/hbase/WALs/192.168.97.183,60022,1469656311015/192.168.97.183%2C60022%2C1469656311015.default.1469656313131
 83 bytes, 1 block(s), OPENFORWRITE: 


...
Status:
 HEALTHY
 Total size:185557141 B
 Total dirs:350
 Total files:   30176
 Total symlinks:0
 Total blocks (validated):  173 (avg. block size 1072584 B)
 Minimally replicated blocks:   163 (94.21965 %)
 Over-replicated blocks:0 (0.0 %)
 Under-replicated blocks:   0 (0.0 %)
 Mis-replicated blocks: 0 (0.0 %)
 Default replication factor:3
 Average block replication: 2.8265896
 Corrupt blocks:0
 Missing replicas:  0 (0.0 %)
 Number of data-nodes:  3
 Number of racks:   1
FSCK ended at Wed Jul 27 15:06:25 PDT 2016 in 1163 milliseconds


The filesystem under path '/' is HEALTHY


  was:
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1) Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
2) Copy a big file (~1 GB) from the local FS to store_t:
COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';
3) Restart HDFS while COPY is in progress.



> Make COPY command transactional for external tables
> ---
>
> Key: HAWQ-956
> URL: https://issues.apache.org/jira/browse/HAWQ-956
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
> Fix For: backlog
>
>
> As of now, the COPY command is transactional for native HAWQ tables, but it's not
> for external tables.
> This command involves communication with the underlying HDFS

[jira] [Updated] (HAWQ-959) Clean up Binary files based on RAT reports

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-959:
---
Fix Version/s: 2.0.0.0-incubating

> Clean up Binary files based on RAT reports
> --
>
> Key: HAWQ-959
> URL: https://issues.apache.org/jira/browse/HAWQ-959
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.0.0.0-incubating
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> From [~jmclean] and [~alangates] IPMC release candidate review
> An unexpected binary in the source code base:
> depends/thirdparty/thrift/lib/erl/rebar
> There might be others, given that RAT reports 770+ binary files.
> We need to clean them up or make exceptions for them with solid reasons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-959) Clean up Binary files based on RAT reports

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-959:
---
Affects Version/s: 2.0.0.0-incubating

> Clean up Binary files based on RAT reports
> --
>
> Key: HAWQ-959
> URL: https://issues.apache.org/jira/browse/HAWQ-959
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Build
>Affects Versions: 2.0.0.0-incubating
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> From [~jmclean] and [~alangates] IPMC release candidate review
> An unexpected binary in the source code base:
> depends/thirdparty/thrift/lib/erl/rebar
> There might be others, given that RAT reports 770+ binary files.
> We need to clean them up or make exceptions for them with solid reasons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-959) Clean up Binary files based on RAT reports

2016-07-27 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-959:
--

 Summary: Clean up Binary files based on RAT reports
 Key: HAWQ-959
 URL: https://issues.apache.org/jira/browse/HAWQ-959
 Project: Apache HAWQ
  Issue Type: Task
  Components: Build
Reporter: Goden Yao
Assignee: Lei Chang


From [~jmclean] and [~alangates] IPMC release candidate review

An unexpected binary in the source code base:
depends/thirdparty/thrift/lib/erl/rebar

There might be others, given that RAT reports 770+ binary files.
We need to clean them up or make exceptions for them with solid reasons.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
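
As a sketch of how the cleanup could start, assuming the standalone Apache RAT
CLI (the jar version, report path, and .rat-excludes filename are illustrative,
not confirmed project conventions):

{code}
# List candidate binary files in the tree (heuristic, based on file(1) output).
find . -type f -not -path './.git/*' -exec file {} + | grep -viE ': .*(text|empty)'

# Re-run RAT with an excludes file listing the binaries we justify keeping.
java -jar apache-rat-0.12.jar -E .rat-excludes . > rat-report.txt
{code}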


[jira] [Updated] (HAWQ-958) LICENSE file missing checklist

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-958:
---
Description: 
From [~jmclean] IPMC release VOTE feedback

{quote}
- Please use the short form of the license, linking to license files in LICENSE
- BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD license code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
- license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
- license for this file [10] Are we OK that this was taken from GNU C? 10. 
./src/port/inet_aton.c
- MIT license PSI [11] 11. ./tools/bin/pythonSrc/PSI-0.3b2_gp/
- BSD licensed code [12] 12. ./src/port/snprintf.c
- BSD licensed code [13] 13 ./src/port/crypt.c  Is this regarded as cryptography 
code? [14] 14. http://www.apache.org/dev/crypto.html
- BSD licensed code [15][16] 15. ./src/port/memcmp.c , 16. 
./src/backend/utils/mb/wstrcmp.c
- license for this file [17] 17. ./src/port/rand.c
- license of these files [18][19] 18. ./src/backend/utils/adt/inet_net_ntop.c
19. ./src/backend/utils/adt/inet_net_pton.c
- license of this file [20] 20 ./src/port/strlcpy.c
- regex license [21] 21. ./src/backend/regex/COPYRIGHT
- How are these files licensed? [22] + others copyright AEG Automation GmbH 22. 
./src/backend/port/qnx4/shm.c
- How is this file licensed? [23] 23. ./src/backend/port/beos/shm.c
- BSD licensed libpq [24]. 24. ./src/backend/libpq/sha2.?
Is this considered crypto code and may need an export license?
- pgdump [25] 25. ./src/bin/pg_dump/
- license for this file [26] 26. ./src/port/gettimeofday.c
- license for this file [27] Looks like an ASF header may have been incorrectly 
added to this. 27. 
./depends/thirdparty/thrift/lib/cpp/src/thrift/windows/SocketPair.cpp
- This BSD licensed file [36] 36. ./src/bin/pg_controldata/pg_controldata.c
- license for these files [37][38] and others in [39]
37. ./depends/thirdparty/thrift/aclocal/ax_cxx_compile_stdcxx_11.m4
38. ./depends/thirdparty/thrift/aclocal/ax_boost_base.m4
39. ./depends/thirdparty/thrift/aclocal
- This BSD licensed file [40]
40. ./depends/thirdparty/thrift/build/cmake/FindGLIB.cmake
- This BSD licensed file [41]
41. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD licensed pychecker [42]
42. ./tools/bin/pythonSrc/pychecker-0.8.18/
- licenses for all of these files [43]
43. ./src/interfaces/libpq/po/*.po
- BSD license pg800 [44]
44. ./tools/bin/ext/pg8000/*
- how is this file licensed? [45]
45. ./src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
- license for this file [47] 47 
./tools/bin/pythonSrc/lockfile-0.9.1/lockfile/pidlockfile.py
- Python license for this file [48]. Is this an Apache-compatible license? 48 
./tools/bin/pythonSrc/pychecker-0.8.18/pychecker2/symbols.py
- How are these files licensed? [49] Note multiple copyright owners and missing 
headers.
49.  ./src/backend/utils/mb/Unicode/*
- BSD licensed figleaf. [50] Note that files incorrectly have had ASF headers 
applied.
50. ./tools/bin/ext/figleaf/*
- This BSD licensed file [51]
51. ./depends/thirdparty/thrift/lib/py/compat/win32/stdint.h
- This public domain style sheet [52]
52. ./tools/bin/pythonSrc/PyGreSQL-4.0/docs/default.css
- This file [53]
53. ./src/test/locale/test-ctype.c
- License for unit test2 [54]
54 ./tools/bin/pythonSrc/unittest2-0.5.1/unittest2/
- MIT licensed lock file [55]
55. ./tools/bin/pythonSrc/lockfile-0.9.1/LICENSE
- JSON code here [56]
56. ./src/include/catalog/JSON
- License for this file [57]
57. ./src/pl/plperl/ppport.h

Looks like GPL/LGPL licensed code may be included [4][5][6] in the release.
4. ./depends/thirdparty/thrift/debian/copyright (end of file)
5. ./depends/thirdparty/thrift/doc/licenses/lgpl-2.1.txt
6. ./tools/bin/gppylib/operations/test/test_package.py

This file [8] and others(?) may incorrectly have ASF headers on them. Also why 
does this file have an ASF header with copyright line? [46]
8. ./tools/sbin/hawqstandbywatch.py
46. 
./contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/resources/log4j.properties

The release includes code licensed under the 4-clause BSD license, which is not
compatible with the Apache 2.0 license. [28][29][30][31][32][33] It may be that
this clause has been rescinded [35] and it is OK to include, but that needs to
be checked.
28. ./src/backend/port/dynloader/freebsd.c
29. ./src/backend/port/dynloader/netbsd.c
30. ./src/backend/port/dynloader/openbsd.c
31. ./src/bin/gpfdist/src/gpfdist/glob.c
32. ./src/bin/gpfdist/src/gpfdist/include/glob.h
33. ./src/include/port/win32_msvc/glob.h
34. ./src/port/glob.c -- [Goden] this was not in Justin's original feedback, but
given the context, I think it belongs with the same comment as [28]-[33] and [35]
35. ftp://ftp.cs.berkeley.edu/pub/4bsd/README.Impt.License.Change
{quote}
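
For the first item, a sketch of the short-form convention being asked for, using
unittest2 [3] as the example (the wording and the licenses/ path are
illustrative, not the project's settled layout):

{code}
# Hypothetical short-form LICENSE entry pointing at a bundled license text.
cat >> LICENSE <<'EOF'

This product bundles unittest2 0.5.1, which is available under a
"3-clause BSD" license.  For details, see licenses/LICENSE-unittest2.txt.
EOF
{code}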

  was:
From [~jmclean] IPMC release VOTE feedback

{quote}
- BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD license code [7] 7. ./depends/thirdpart

[jira] [Updated] (HAWQ-958) LICENSE file missing checklist

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-958:
---
Description: 
From [~jmclean] IPMC release VOTE feedback

{quote}
- BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD license code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
- license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
- license for this file [10] Are we OK that this was taken from GNU C? 10. 
./src/port/inet_aton.c
- MIT license PSI [11] 11. ./tools/bin/pythonSrc/PSI-0.3b2_gp/
- BSD licensed code [12] 12. ./src/port/snprintf.c
- BSD licensed code [13] 13 ./src/port/crypt.c  Is this regarded as cryptography 
code? [14] 14. http://www.apache.org/dev/crypto.html
- BSD licensed code [15][16] 15. ./src/port/memcmp.c , 16. 
./src/backend/utils/mb/wstrcmp.c
- license for this file [17] 17. ./src/port/rand.c
- license of these files [18][19] 18. ./src/backend/utils/adt/inet_net_ntop.c
19. ./src/backend/utils/adt/inet_net_pton.c
- license of this file [20] 20 ./src/port/strlcpy.c
- regex license [21] 21. ./src/backend/regex/COPYRIGHT
- How are these files licensed? [22] + others copyright AEG Automation GmbH 22. 
./src/backend/port/qnx4/shm.c
- How is this file licensed? [23] 23. ./src/backend/port/beos/shm.c
- BSD licensed libpq [24]. 24. ./src/backend/libpq/sha2.?
Is this considered crypto code and may need an export license?
- pgdump [25] 25. ./src/bin/pg_dump/
- license for this file [26] 26. ./src/port/gettimeofday.c
- license for this file [27] Looks like an ASF header may have been incorrectly 
added to this. 27. 
./depends/thirdparty/thrift/lib/cpp/src/thrift/windows/SocketPair.cpp
- This BSD licensed file [36] 36. ./src/bin/pg_controldata/pg_controldata.c
- license for these files [37][38] and others in [39]
37. ./depends/thirdparty/thrift/aclocal/ax_cxx_compile_stdcxx_11.m4
38. ./depends/thirdparty/thrift/aclocal/ax_boost_base.m4
39. ./depends/thirdparty/thrift/aclocal
- This BSD licensed file [40]
40. ./depends/thirdparty/thrift/build/cmake/FindGLIB.cmake
- This BSD licensed file [41]
41. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD licensed pychecker [42]
42. ./tools/bin/pythonSrc/pychecker-0.8.18/
- licenses for all of these files [43]
43. ./src/interfaces/libpq/po/*.po
- BSD license pg800 [44]
44. ./tools/bin/ext/pg8000/*
- how is this file licensed? [45]
45. ./src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
- license for this file [47] 47 
./tools/bin/pythonSrc/lockfile-0.9.1/lockfile/pidlockfile.py
- Python license for this file [48]. Is this an Apache-compatible license? 48 
./tools/bin/pythonSrc/pychecker-0.8.18/pychecker2/symbols.py
- How are these files licensed? [49] Note multiple copyright owners and missing 
headers.
49.  ./src/backend/utils/mb/Unicode/*
- BSD licensed figleaf. [50] Note that files incorrectly have had ASF headers 
applied.
50. ./tools/bin/ext/figleaf/*
- This BSD licensed file [51]
51. ./depends/thirdparty/thrift/lib/py/compat/win32/stdint.h
- This public domain style sheet [52]
52. ./tools/bin/pythonSrc/PyGreSQL-4.0/docs/default.css
- This file [53]
53. ./src/test/locale/test-ctype.c
- License for unit test2 [54]
54 ./tools/bin/pythonSrc/unittest2-0.5.1/unittest2/
- MIT licensed lock file [55]
55. ./tools/bin/pythonSrc/lockfile-0.9.1/LICENSE
- JSON code here [56]
56. ./src/include/catalog/JSON
- License for this file [57]
57. ./src/pl/plperl/ppport.h

Looks like GPL/LGPL licensed code may be included [4][5][6] in the release.
4. ./depends/thirdparty/thrift/debian/copyright (end of file)
5. ./depends/thirdparty/thrift/doc/licenses/lgpl-2.1.txt
6. ./tools/bin/gppylib/operations/test/test_package.py

This file [8] and others(?) may incorrectly have ASF headers on them. Also why 
does this file have an ASF header with copyright line? [46]
8. ./tools/sbin/hawqstandbywatch.py
46. 
./contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/resources/log4j.properties

The release includes code licensed under the 4-clause BSD license, which is not
compatible with the Apache 2.0 license. [28][29][30][31][32][33] It may be that
this clause has been rescinded [35] and it is OK to include, but that needs to
be checked.
28. ./src/backend/port/dynloader/freebsd.c
29. ./src/backend/port/dynloader/netbsd.c
30. ./src/backend/port/dynloader/openbsd.c
31. ./src/bin/gpfdist/src/gpfdist/glob.c
32. ./src/bin/gpfdist/src/gpfdist/include/glob.h
33. ./src/include/port/win32_msvc/glob.h
34. ./src/port/glob.c -- [Goden] this was not in Justin's original feedback, but
given the context, I think it belongs with the same comment as [28]-[33] and [35]
35. ftp://ftp.cs.berkeley.edu/pub/4bsd/README.Impt.License.Change
{quote}

  was:
From [~jmclean] IPMC release VOTE feedback

{quote}
- BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD license code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
- license for this file [9] 9. ./src/backend/port

[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-956:
---
Fix Version/s: backlog

> Make COPY command transactional for external tables
> ---
>
> Key: HAWQ-956
> URL: https://issues.apache.org/jira/browse/HAWQ-956
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
> Fix For: backlog
>
>
> As of now, the COPY command is transactional for native HAWQ tables, but it's not
> for external tables.
> This command involves communication with the underlying HDFS layer, which isn't
> under HAWQ's control.
> If something happens to HDFS during COPY, the data in the table ends up
> corrupted.
> STR:
> {code}
> 1) Create two external tables:
> create writable external table store_t ( a text, b text, c text, d text ) 
> LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
> (DELIMITER ',');
> create external table read_t ( a text, b text, c text, d text ) LOCATION 
> ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
> (DELIMITER ',');
> {code}
> 2) Copy a big file (~1 GB) from the local FS to store_t:
> COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';
> 3) Restart HDFS while COPY is in progress.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
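
To make the requested behavior concrete: today a failure mid-COPY leaves partial
files behind the writable external table, whereas transactional semantics would
discard them on abort. A minimal sketch of the desired outcome, reusing the
tables from the STR above (hypothetical behavior, not what HAWQ currently does
for external tables):

{code}
psql -d postgres <<'EOF'
BEGIN;
COPY store_t FROM '/tmp/data/1Gb.txt' DELIMITER ',';
-- If HDFS fails here, aborting should also remove the partially
-- written HDFS files behind store_t, mirroring native tables.
ROLLBACK;
EOF
{code}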


[jira] [Updated] (HAWQ-958) LICENSE file missing checklist

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-958:
---
Description: 
From [~jmclean] IPMC release VOTE feedback

{quote}
- BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD license code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
- license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
- license for this file [10] Are we OK that this was taken from GNU C? 10. 
./src/port/inet_aton.c
- MIT license PSI [11] 11. ./tools/bin/pythonSrc/PSI-0.3b2_gp/
- BSD licensed code [12] 12. ./src/port/snprintf.c
- BSD licensed code [13] 13 ./src/port/crypt.c  Is this regarded as cryptography 
code? [14] 14. http://www.apache.org/dev/crypto.html
- BSD licensed code [15][16] 15. ./src/port/memcmp.c , 16. 
./src/backend/utils/mb/wstrcmp.c
- license for this file [17] 17. ./src/port/rand.c
- license of these files [18][19] 18. ./src/backend/utils/adt/inet_net_ntop.c
19. ./src/backend/utils/adt/inet_net_pton.c
- license of this file [20] 20 ./src/port/strlcpy.c
- regex license [21] 21. ./src/backend/regex/COPYRIGHT
- How are these files licensed? [22] + others copyright AEG Automation GmbH 22. 
./src/backend/port/qnx4/shm.c
- How is this file licensed? [23] 23. ./src/backend/port/beos/shm.c
- BSD licensed libpq [24]. 24. ./src/backend/libpq/sha2.?
Is this considered crypto code and may need an export license?
- pgdump [25] 25. ./src/bin/pg_dump/
- license for this file [26] 26. ./src/port/gettimeofday.c
- license for this file [27] Looks like an ASF header may have been incorrectly 
added to this. 27. 
./depends/thirdparty/thrift/lib/cpp/src/thrift/windows/SocketPair.cpp
- This BSD licensed file [36] 36. ./src/bin/pg_controldata/pg_controldata.c
- license for these files [37][38] and others in [39]
37. ./depends/thirdparty/thrift/aclocal/ax_cxx_compile_stdcxx_11.m4
38. ./depends/thirdparty/thrift/aclocal/ax_boost_base.m4
39. ./depends/thirdparty/thrift/aclocal
- This BSD licensed file [40]
40. ./depends/thirdparty/thrift/build/cmake/FindGLIB.cmake
- This BSD licensed file [41]
41. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD licensed pychecker [42]
42. ./tools/bin/pythonSrc/pychecker-0.8.18/
- licenses for all of these files [43]
43. ./src/interfaces/libpq/po/*.po
- BSD license pg800 [44]
44. ./tools/bin/ext/pg8000/*
- how is this file licensed? [45]
45. ./src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
- license for this file [47] 47 
./tools/bin/pythonSrc/lockfile-0.9.1/lockfile/pidlockfile.py
- Python license for this file [48]. Is this an Apache-compatible license? 48 
./tools/bin/pythonSrc/pychecker-0.8.18/pychecker2/symbols.py
- How are these files licensed? [49] Note multiple copyright owners and missing 
headers.
49.  ./src/backend/utils/mb/Unicode/*
- BSD licensed figleaf. [50] Note that files incorrectly have had ASF headers 
applied.
50. ./tools/bin/ext/figleaf/*
- This BSD licensed file [51]
51. ./depends/thirdparty/thrift/lib/py/compat/win32/stdint.h
- This public domain style sheet [52]
52. ./tools/bin/pythonSrc/PyGreSQL-4.0/docs/default.css
- This file [53]
53. ./src/test/locale/test-ctype.c
- License for unit test2 [54]
54 ./tools/bin/pythonSrc/unittest2-0.5.1/unittest2/
- MIT licensed lock file [55]
55. ./tools/bin/pythonSrc/lockfile-0.9.1/LICENSE
- JSON code here [56]
56. ./src/include/catalog/JSON
- License for this file [57]
57. ./src/pl/plperl/ppport.h

Looks like GPL/LGPL licensed code may be included [4][5][6] in the release.
4. ./depends/thirdparty/thrift/debian/copyright (end of file)
5. ./depends/thirdparty/thrift/doc/licenses/lgpl-2.1.txt
6. ./tools/bin/gppylib/operations/test/test_package.py

This file [8] and others(?) may incorrectly have ASF headers on them. Also why 
does this file have an ASF header with copyright line? [46]
8. ./tools/sbin/hawqstandbywatch.py
46. 
./contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/resources/log4j.properties

The release includes code licensed under the 4-clause BSD license, which is not
compatible with the Apache 2.0 license. [28][29][30][31][32][33] It may be that
this clause has been rescinded [35] and it is OK to include, but that needs to
be checked.
28. ./src/backend/port/dynloader/freebsd.c
29. ./src/backend/port/dynloader/netbsd.c
30. ./src/backend/port/dynloader/openbsd.c
31. ./src/bin/gpfdist/src/gpfdist/glob.c
32. ./src/bin/gpfdist/src/gpfdist/include/glob.h
33. ./src/include/port/win32_msvc/glob.h
35. ftp://ftp.cs.berkeley.edu/pub/4bsd/README.Impt.License.Change
{quote}

  was:
From [~jmclean] IPMC release VOTE feedback

{quote}
- BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD license code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
- license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
- license for this file [10] Are we OK this was taken form GNU C? 10. 
./src/port/inet_aton.c
- MIT license PSI [11] 11. ./tools/bi

[jira] [Updated] (HAWQ-958) LICENSE file missing checklist

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-958:
---
Description: 
From [~jmclean] IPMC release VOTE feedback

{quote}
- BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD license code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
- license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
- license for this file [10] Are we OK that this was taken from GNU C? 10. 
./src/port/inet_aton.c
- MIT license PSI [11] 11. ./tools/bin/pythonSrc/PSI-0.3b2_gp/
- BSD licensed code [12] 12. ./src/port/snprintf.c
- BSD licensed code [13] 13 ./src/port/crypt.c  Is this regard as cryptography 
code? [14] 14. http://www.apache.org/dev/crypto.html
- BSD licensed code [15][16] 15. ./src/port/memcmp.c , 16. 
./src/backend/utils/mb/wstrcmp.c
- license for this file [17] 17. ./src/port/rand.c
- license of these files [18][19] 18. ./src/backend/utils/adt/inet_net_ntop.c
19. ./src/backend/utils/adt/inet_net_pton.c
- license of this file [20] 20 ./src/port/strlcpy.c
- regex license [21] 21. ./src/backend/regex/COPYRIGHT
- How are these files licensed? [22] + others copyright AEG Automation GmbH 22. 
./src/backend/port/qnx4/shm.c
- How is this file licensed? [23] 23. ./src/backend/port/beos/shm.c
- BSD licensed libpq [24]. 24. ./src/backend/libpq/sha2.?
Is this considered crypto code and may need an export license?
- pgdump [25] 25. ./src/bin/pg_dump/
- license for this file [26] 26. ./src/port/gettimeofday.c
- license for this file [27] Looks like an ASF header may have been incorrectly 
added to this. 27. 
./depends/thirdparty/thrift/lib/cpp/src/thrift/windows/SocketPair.cpp
- This BSD licensed file [36] 36. ./src/bin/pg_controldata/pg_controldata.c
- license for these files [37][38] and others in [39]
37. ./depends/thirdparty/thrift/aclocal/ax_cxx_compile_stdcxx_11.m4
38. ./depends/thirdparty/thrift/aclocal/ax_boost_base.m4
39. ./depends/thirdparty/thrift/aclocal
- This BSD licensed file [40]
40. ./depends/thirdparty/thrift/build/cmake/FindGLIB.cmake
- This BSD licensed file [41]
41. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD licensed pychecker [42]
42. ./tools/bin/pythonSrc/pychecker-0.8.18/
- licenses for all of these files [43]
43. ./src/interfaces/libpq/po/*.po
- BSD license pg800 [44]
44. ./tools/bin/ext/pg8000/*
- how is this file licensed? [45]
45. ./src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
- license for this file [47] 47 
./tools/bin/pythonSrc/lockfile-0.9.1/lockfile/pidlockfile.py
- Python license for this file [48]. Is this an Apache-compatible license? 48 
./tools/bin/pythonSrc/pychecker-0.8.18/pychecker2/symbols.py
- How are these files licensed? [49] Note multiple copyright owners and missing 
headers.
49.  ./src/backend/utils/mb/Unicode/*
- BSD licensed figleaf. [50] Note that files incorrectly have had ASF headers 
applied.
- This BSD licensed file [51]
51. ./depends/thirdparty/thrift/lib/py/compat/win32/stdint.h
- This public domain style sheet [52]
52. ./tools/bin/pythonSrc/PyGreSQL-4.0/docs/default.css
- This file [53]
53. ./src/test/locale/test-ctype.c
- License for unit test2 [54]
54 ./tools/bin/pythonSrc/unittest2-0.5.1/unittest2/
- MIT licensed lock file [55]
55. ./tools/bin/pythonSrc/lockfile-0.9.1/LICENSE
- JSON code here [56]
56. ./src/include/catalog/JSON
- License for this file [57]
57. ./src/pl/plperl/ppport.h

Looks like GPL/LGPL licensed code may be included [4][5][6] in the release.
4. ./depends/thirdparty/thrift/debian/copyright (end of file)
5. ./depends/thirdparty/thrift/doc/licenses/lgpl-2.1.txt
6. ./tools/bin/gppylib/operations/test/test_package.py

This file [8] and others(?) may incorrectly have ASF headers on them. Also why 
does this file have an ASF header with copyright line? [46]
8. ./tools/sbin/hawqstandbywatch.py
46. 
./contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/resources/log4j.properties

The release includes code licensed under the 4-clause BSD license, which is not
compatible with the Apache 2.0 license. [28][29][30][31][32][33] It may be that
this clause has been rescinded [35] and it is OK to include, but that needs to
be checked.
28. ./src/backend/port/dynloader/freebsd.c
29. ./src/backend/port/dynloader/netbsd.c
30. ./src/backend/port/dynloader/openbsd.c
31. ./src/bin/gpfdist/src/gpfdist/glob.c
32. ./src/bin/gpfdist/src/gpfdist/include/glob.h
33. ./src/include/port/win32_msvc/glob.h
35. ftp://ftp.cs.berkeley.edu/pub/4bsd/README.Impt.License.Change
{quote}

  was:
From [~jmclean] IPMC release VOTE feedback

{quote}
- BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD license code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
- license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
- license for this file [10] Are we OK this was taken form GNU C? 10. 
./src/port/inet_aton.c
- MIT license PSI [11] 11. ./tools/bin/pythonSrc/PSI-0.3b2_gp/
- BS

[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1) Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}
2) Copy a big file (~1 GB) from the local FS to store_t:
COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';
3) Restart HDFS while COPY is in progress.


  was:
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1) Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}


> Make COPY command transactional for external tables
> ---
>
> Key: HAWQ-956
> URL: https://issues.apache.org/jira/browse/HAWQ-956
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>
> As of now, the COPY command is transactional for native HAWQ tables, but it's not
> for external tables.
> This command involves communication with the underlying HDFS layer, which isn't
> under HAWQ's control.
> If something happens to HDFS during COPY, the data in the table ends up
> corrupted.
> STR:
> {code}
> 1) Create two external tables:
> create writable external table store_t ( a text, b text, c text, d text ) 
> LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
> (DELIMITER ',');
> create external table read_t ( a text, b text, c text, d text ) LOCATION 
> ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
> (DELIMITER ',');
> {code}
> 2) Copy a big file (~1 GB) from the local FS to store_t:
> COPY store_t from '/tmp/data/1Gb.txt' DELIMITER ',';
> 3) Restart HDFS while COPY is in progress.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
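
A quick way to observe the corruption described above is to read the data back
through the readable external table read_t defined in the STR; a sketch,
assuming the COPY was interrupted (any partial, non-zero count indicates
half-written files were left behind):

{code}
# Count the rows visible through read_t after the interrupted COPY.
psql -d postgres -c "SELECT count(*) FROM read_t;"
{code}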


[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1) Create two external tables:
create writable external table store_t ( a text, b text, c text, d text ) 
LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
(DELIMITER ',');
create external table read_t ( a text, b text, c text, d text ) LOCATION 
('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' (DELIMITER 
',');
{code}

  was:
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1)
{code}


> Make COPY command transactional for external tables
> ---
>
> Key: HAWQ-956
> URL: https://issues.apache.org/jira/browse/HAWQ-956
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>
> As of now, the COPY command is transactional for native HAWQ tables, but it's not
> for external tables.
> This command involves communication with the underlying HDFS layer, which isn't
> under HAWQ's control.
> If something happens to HDFS during COPY, the data in the table ends up
> corrupted.
> STR:
> {code}
> 1) Create two external tables:
> create writable external table store_t ( a text, b text, c text, d text ) 
> LOCATION ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
> (DELIMITER ',');
> create external table read_t ( a text, b text, c text, d text ) LOCATION 
> ('pxf://localhost:51200/data?Profile=HdfsTextSimple') FORMAT 'TEXT' 
> (DELIMITER ',');
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.

STR:
{code}
1)
{code}

  was:
As of now, the COPY command is transactional for native HAWQ tables, but it's not
for external tables.
This command involves communication with the underlying HDFS layer, which isn't
under HAWQ's control.
If something happens to HDFS during COPY, the data in the table ends up
corrupted.


> Make COPY command transactional for external tables
> ---
>
> Key: HAWQ-956
> URL: https://issues.apache.org/jira/browse/HAWQ-956
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>
> As of now, the COPY command is transactional for native HAWQ tables, but it's not
> for external tables.
> This command involves communication with the underlying HDFS layer, which isn't
> under HAWQ's control.
> If something happens to HDFS during COPY, the data in the table ends up
> corrupted.
> STR:
> {code}
> 1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-958) LICENSE file missing checklist

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-958:
---
Affects Version/s: 2.0.0.0-incubating

> LICENSE file missing checklist
> --
>
> Key: HAWQ-958
> URL: https://issues.apache.org/jira/browse/HAWQ-958
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Affects Versions: 2.0.0.0-incubating
>Reporter: Goden Yao
>Assignee: Lei Chang
>Priority: Blocker
> Fix For: 2.0.0.0-incubating
>
>
> From [~jmclean] IPMC release VOTE feedback
> {quote}
> - BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
> - BSD license code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
> - license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
> - license for this file [10] Are we OK that this was taken from GNU C? 10. 
> ./src/port/inet_aton.c
> - MIT license PSI [11] 11. ./tools/bin/pythonSrc/PSI-0.3b2_gp/
> - BSD licensed code [12] 12. ./src/port/snprintf.c
> - BSD licensed code [13] 13 ./src/port/crypt.c  Is this regarded as 
> cryptography code? [14] 14. http://www.apache.org/dev/crypto.html
> - BSD licensed code [15][16] 15. ./src/port/memcmp.c , 16. 
> ./src/backend/utils/mb/wstrcmp.c
> - license for this file [17] 17. ./src/port/rand.c
> - license of these files [18][19] 18. ./src/backend/utils/adt/inet_net_ntop.c
> 19. ./src/backend/utils/adt/inet_net_pton.c
> - license of this file [20] 20 ./src/port/strlcpy.c
> - regex license [21] 21. ./src/backend/regex/COPYRIGHT
> - How are these files licensed? [22] + others copyright AEG Automation GmbH 
> 22. ./src/backend/port/qnx4/shm.c
> - How is this file licensed? [23] 23. ./src/backend/port/beos/shm.c
> - BSD licensed libpq [24]. 24. ./src/backend/libpq/sha2.?
> Is this considered crypto code and may need an export license?
> - pgdump [25] 25. ./src/bin/pg_dump/
> - license for this file [26] 26. ./src/port/gettimeofday.c
> - license for this file [27] Looks like an ASF header may have been incorrectly 
> added to this. 27. 
> ./depends/thirdparty/thrift/lib/cpp/src/thrift/windows/SocketPair.cpp
> - This BSD licensed file [36] 36. ./src/bin/pg_controldata/pg_controldata.c
> - license for these files [37][38] and others in [39]
> 37. ./depends/thirdparty/thrift/aclocal/ax_cxx_compile_stdcxx_11.m4
> 38. ./depends/thirdparty/thrift/aclocal/ax_boost_base.m4
> 39. ./depends/thirdparty/thrift/aclocal
> - This BSD licensed file [40]
> 40. ./depends/thirdparty/thrift/build/cmake/FindGLIB.cmake
> - This BSD licensed file [41]
> 41. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
> - BSD licensed pychecker [42]
> 42. ./tools/bin/pythonSrc/pychecker-0.8.18/
> - licenses for all of these files [43]
> 43. ./src/interfaces/libpq/po/*.po
> - BSD license pg800 [44]
> 44. ./tools/bin/ext/pg8000/*
> - how is this file licensed? [45]
> 45. ./src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
> - license for this file [47] 47 
> ./tools/bin/pythonSrc/lockfile-0.9.1/lockfile/pidlockfile.py
> - Python license for this file [48]. Is this an Apache-compatible license? 48 
> ./tools/bin/pythonSrc/pychecker-0.8.18/pychecker2/symbols.py
> - How are these files licensed? [49] Note multiple copyright owners and 
> missing headers.
> 49.  ./src/backend/utils/mb/Unicode/*
> - BSD licensed figleaf. [50] Note that files incorrectly have had ASF headers 
> applied.
> - This BSD licensed file [51]
> 51. ./depends/thirdparty/thrift/lib/py/compat/win32/stdint.h
> - This public domain style sheet [52]
> 52. ./tools/bin/pythonSrc/PyGreSQL-4.0/docs/default.css
> - This file [53]
> 53. ./src/test/locale/test-ctype.c
> - License for unit test2 [54]
> 54 ./tools/bin/pythonSrc/unittest2-0.5.1/unittest2/
> - MIT licensed lock file [55]
> 55. ./tools/bin/pythonSrc/lockfile-0.9.1/LICENSE
> - JSON code here [56]
> 56. ./src/include/catalog/JSON
> - License for this file [57]
> 57. ./src/pl/plperl/ppport.h
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-958) LICENSE file missing checklist

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-958:
---
Priority: Blocker  (was: Major)

> LICENSE file missing checklist
> --
>
> Key: HAWQ-958
> URL: https://issues.apache.org/jira/browse/HAWQ-958
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Goden Yao
>Assignee: Lei Chang
>Priority: Blocker
> Fix For: 2.0.0.0-incubating
>
>
> From [~jmclean] IPMC release VOTE feedback
> {quote}
> - BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
> - BSD license code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
> - license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
> - license for this file [10] Are we OK that this was taken from GNU C? 10. 
> ./src/port/inet_aton.c
> - MIT license PSI [11] 11. ./tools/bin/pythonSrc/PSI-0.3b2_gp/
> - BSD licensed code [12] 12. ./src/port/snprintf.c
> - BSD licensed code [13] 13 ./src/port/crypt.c  Is this regarded as 
> cryptography code? [14] 14. http://www.apache.org/dev/crypto.html
> - BSD licensed code [15][16] 15. ./src/port/memcmp.c , 16. 
> ./src/backend/utils/mb/wstrcmp.c
> - license for this file [17] 17. ./src/port/rand.c
> - license of these files [18][19] 18. ./src/backend/utils/adt/inet_net_ntop.c
> 19. ./src/backend/utils/adt/inet_net_pton.c
> - license of this file [20] 20 ./src/port/strlcpy.c
> - regex license [21] 21. ./src/backend/regex/COPYRIGHT
> - How are these files licensed? [22] + others copyright AEG Automation GmbH 
> 22. ./src/backend/port/qnx4/shm.c
> - How is this file licensed? [23] 23. ./src/backend/port/beos/shm.c
> - BSD licensed libpq [24]. 24. ./src/backend/libpq/sha2.?
> Is this considered crypto code and may need an export license?
> - pgdump [25] 25. ./src/bin/pg_dump/
> - license for this file [26] 26. ./src/port/gettimeofday.c
> - license for this file [27] Looks like an ASF header may have been incorrectly 
> added to this. 27. 
> ./depends/thirdparty/thrift/lib/cpp/src/thrift/windows/SocketPair.cpp
> - This BSD licensed file [36] 36. ./src/bin/pg_controldata/pg_controldata.c
> - license for these files [37][38] and others in [39]
> 37. ./depends/thirdparty/thrift/aclocal/ax_cxx_compile_stdcxx_11.m4
> 38. ./depends/thirdparty/thrift/aclocal/ax_boost_base.m4
> 39. ./depends/thirdparty/thrift/aclocal
> - This BSD licensed file [40]
> 40. ./depends/thirdparty/thrift/build/cmake/FindGLIB.cmake
> - This BSD licensed file [41]
> 41. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
> - BSD licensed pychecker [42]
> 42. ./tools/bin/pythonSrc/pychecker-0.8.18/
> - licenses for all of these files [43]
> 43. ./src/interfaces/libpq/po/*.po
> - BSD license pg800 [44]
> 44. ./tools/bin/ext/pg8000/*
> - how is this file licensed? [45]
> 45. ./src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
> - license for this file [47] 47 
> ./tools/bin/pythonSrc/lockfile-0.9.1/lockfile/pidlockfile.py
> - Python license for this file [48]. Is this an Apache-compatible license? 48 
> ./tools/bin/pythonSrc/pychecker-0.8.18/pychecker2/symbols.py
> - How are these files licensed? [49] Note multiple copyright owners and 
> missing headers.
> 49.  ./src/backend/utils/mb/Unicode/*
> - BSD licensed figleaf. [50] Note that files incorrectly have had ASF headers 
> applied.
> - This BSD licensed file [51]
> 51. ./depends/thirdparty/thrift/lib/py/compat/win32/stdint.h
> - This public domain style sheet [52]
> 52. ./tools/bin/pythonSrc/PyGreSQL-4.0/docs/default.css
> - This file [53]
> 53. ./src/test/locale/test-ctype.c
> - License for unit test2 [54]
> 54 ./tools/bin/pythonSrc/unittest2-0.5.1/unittest2/
> - MIT licensed lock file [55]
> 55. ./tools/bin/pythonSrc/lockfile-0.9.1/LICENSE
> - JSON code here [56]
> 56. ./src/include/catalog/JSON
> - License for this file [57]
> 57. ./src/pl/plperl/ppport.h
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-958) LICENSE file missing checklist

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-958:
---
Fix Version/s: 2.0.0.0-incubating

> LICENSE file missing checklist
> --
>
> Key: HAWQ-958
> URL: https://issues.apache.org/jira/browse/HAWQ-958
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> From [~jmclean] IPMC release VOTE feedback
> {quote}
> - BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
> - BSD licensed code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
> - license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
> - license for this file [10] Are we OK this was taken from GNU C? 10. 
> ./src/port/inet_aton.c
> - MIT license PSI [11] 11. ./tools/bin/pythonSrc/PSI-0.3b2_gp/
> - BSD licensed code [12] 12. ./src/port/snprintf.c
> - BSD licensed code [13] 13. ./src/port/crypt.c  Is this regarded as 
> cryptography code? [14] 14. http://www.apache.org/dev/crypto.html
> - BSD licensed code [15][16] 15. ./src/port/memcmp.c , 16. 
> ./src/backend/utils/mb/wstrcmp.c
> - license for this file [17] 17. ./src/port/rand.c
> - license of these files [18][19] 18. ./src/backend/utils/adt/inet_net_ntop.c
> 19. ./src/backend/utils/adt/inet_net_pton.c
> - license of this file [20] 20 ./src/port/strlcpy.c
> - regex license [21] 21. ./src/backend/regex/COPYRIGHT
> - How are these files licensed? [22] + others copyright AEG Automation GmbH 
> 22. ./src/backend/port/qnx4/shm.c
> - How is this file licensed? [23] 23. ./src/backend/port/beos/shm.c
> - BSD licensed libpq [24]. 24. ./src/backend/libpq/sha2.?
> Is this considered crypto code and may need an export license?
> - pgdump [25] 25. ./src/bin/pg_dump/
> - license for this file [26] 26. ./src/port/gettimeofday.c
> - license for this file [27] Looks like an ASF header may have been incorrectly 
> added to this. 27. 
> ./depends/thirdparty/thrift/lib/cpp/src/thrift/windows/SocketPair.cpp
> - This BSD licensed file [36] 36. ./src/bin/pg_controldata/pg_controldata.c
> - license for these files [37][38] and others in [39]
> 37. ./depends/thirdparty/thrift/aclocal/ax_cxx_compile_stdcxx_11.m4
> 38. ./depends/thirdparty/thrift/aclocal/ax_boost_base.m4
> 39. ./depends/thirdparty/thrift/aclocal
> - This BSD licensed file [40]
> 40. ./depends/thirdparty/thrift/build/cmake/FindGLIB.cmake
> - This BSD licensed file [41]
> 41. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
> - BSD licensed pychecker [42]
> 42. ./tools/bin/pythonSrc/pychecker-0.8.18/
> - licenses for all of these files [43]
> 43. ./src/interfaces/libpq/po/*.po
> - BSD licensed pg8000 [44]
> 44. ./tools/bin/ext/pg8000/*
> - how is this file licensed? [45]
> 45. ./src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
> - license for this file [47] 47 
> ./tools/bin/pythonSrc/lockfile-0.9.1/lockfile/pidlockfile.py
> - Python license for this file [48]. Is this an Apache-compatible license? 48 
> ./tools/bin/pythonSrc/pychecker-0.8.18/pychecker2/symbols.py
> - How are these files licensed? [49] Note multiple copyright owners and 
> missing headers.
> 49.  ./src/backend/utils/mb/Unicode/*
> - BSD licensed fig leaf. [50] Note that files have incorrectly had ASF headers 
> applied.
> - This BSD licensed file [51]
> 51. ./depends/thirdparty/thrift/lib/py/compat/win32/stdint.h
> - This public domain style sheet [52]
> 52. ./tools/bin/pythonSrc/PyGreSQL-4.0/docs/default.css
> - This file [53]
> 53. ./src/test/locale/test-ctype.c
> - License for unittest2 [54]
> 54 ./tools/bin/pythonSrc/unittest2-0.5.1/unittest2/
> - MIT licensed lock file [55]
> 55. ./tools/bin/pythonSrc/lockfile-0.9.1/LICENSE
> - JSON code here [56]
> 56. ./src/include/catalog/JSON
> - License for this file [57]
> 57. ./src/pl/plperl/ppport.h
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-958) LICENSE file missing checklist

2016-07-27 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-958:
--

 Summary: LICENSE file missing checklist
 Key: HAWQ-958
 URL: https://issues.apache.org/jira/browse/HAWQ-958
 Project: Apache HAWQ
  Issue Type: Task
  Components: Documentation
Reporter: Goden Yao
Assignee: Lei Chang


From [~jmclean] IPMC release VOTE feedback

{quote}
- BSD licensed code [3] 3. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD licensed code [7] 7. ./depends/thirdparty/thrift/compiler/cpp/src/md5.?
- license for this file [9] 9. ./src/backend/port/dynloader/ultrix4.h
- license for this file [10] Are we OK this was taken from GNU C? 10. 
./src/port/inet_aton.c
- MIT license PSI [11] 11. ./tools/bin/pythonSrc/PSI-0.3b2_gp/
- BSD licensed code [12] 12. ./src/port/snprintf.c
- BSD licensed code [13] 13. ./src/port/crypt.c  Is this regarded as cryptography 
code? [14] 14. http://www.apache.org/dev/crypto.html
- BSD licensed code [15][16] 15. ./src/port/memcmp.c , 16. 
./src/backend/utils/mb/wstrcmp.c
- license for this file [17] 17. ./src/port/rand.c
- license of these files [18][19] 18. ./src/backend/utils/adt/inet_net_ntop.c
19. ./src/backend/utils/adt/inet_net_pton.c
- license of this file [20] 20 ./src/port/strlcpy.c
- regex license [21] 21. ./src/backend/regex/COPYRIGHT
- How are these files licensed? [22] + others copyright AEG Automation GmbH 22. 
./src/backend/port/qnx4/shm.c
- How is this file licensed? [23] 23. ./src/backend/port/beos/shm.c
- BSD licensed libpq [24]. 24. ./src/backend/libpq/sha2.?
Is this considered crypto code and may need an export license?
- pgdump [25] 25. ./src/bin/pg_dump/
- license for this file [26] 26. ./src/port/gettimeofday.c
- license for this file [27] Looks like an ASF header may have been incorrectly 
added to this. 27. 
./depends/thirdparty/thrift/lib/cpp/src/thrift/windows/SocketPair.cpp
- This BSD licensed file [36] 36. ./src/bin/pg_controldata/pg_controldata.c
- license for these files [37][38] and others in [39]
37. ./depends/thirdparty/thrift/aclocal/ax_cxx_compile_stdcxx_11.m4
38. ./depends/thirdparty/thrift/aclocal/ax_boost_base.m4
39. ./depends/thirdparty/thrift/aclocal
- This BSD licensed file [40]
40. ./depends/thirdparty/thrift/build/cmake/FindGLIB.cmake
- This BSD licensed file [41]
41. ./tools/bin/pythonSrc/unittest2-0.5.1/setup.py
- BSD licensed pychecker [42]
42. ./tools/bin/pythonSrc/pychecker-0.8.18/
- licenses for all of these files [43]
43. ./src/interfaces/libpq/po/*.po
- BSD licensed pg8000 [44]
44. ./tools/bin/ext/pg8000/*
- how is this file licensed? [45]
45. ./src/backend/utils/mb/Unicode/UCS_to_GB18030.pl
- license for this file [47] 47 
./tools/bin/pythonSrc/lockfile-0.9.1/lockfile/pidlockfile.py
- Python license for this file [48]. Is this an Apache-compatible license? 48 
./tools/bin/pythonSrc/pychecker-0.8.18/pychecker2/symbols.py
- How are these files licensed? [49] Note multiple copyright owners and missing 
headers.
49.  ./src/backend/utils/mb/Unicode/*
- BSD licensed fig leaf. [50] Note that files have incorrectly had ASF headers 
applied.
- This BSD licensed file [51]
51. ./depends/thirdparty/thrift/lib/py/compat/win32/stdint.h
- This public domain style sheet [52]
52. ./tools/bin/pythonSrc/PyGreSQL-4.0/docs/default.css
- This file [53]
53. ./src/test/locale/test-ctype.c
- License for unittest2 [54]
54 ./tools/bin/pythonSrc/unittest2-0.5.1/unittest2/
- MIT licensed lock file [55]
55. ./tools/bin/pythonSrc/lockfile-0.9.1/LICENSE
- JSON code here [56]
56. ./src/include/catalog/JSON
- License for this file [57]
57. ./src/pl/plperl/ppport.h

{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: 
As of now, the COPY command is transactional for native HAWQ tables, but it is 
not for external tables.
This command involves communication with the underlying HDFS layer, which isn't 
under HAWQ's control.
If something happens to HDFS during a COPY, the data in the table ends up being 
corrupted.
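
A minimal sketch of the failure mode, assuming COPY loads into a PXF-backed 
writable external table (the table, location, and profile names below are 
illustrative, not from this issue):

{code}
-- Hypothetical writable external table whose data lives on HDFS via PXF:
CREATE WRITABLE EXTERNAL TABLE sales_ext (id int, amount float8)
LOCATION ('pxf://namenode:51200/data/sales?PROFILE=HdfsTextSimple')
FORMAT 'TEXT' (DELIMITER ',');

-- Rows stream to HDFS while COPY runs; if HDFS fails midway, the rows
-- already written are not rolled back, unlike COPY into a native table.
COPY sales_ext FROM '/tmp/sales.csv' CSV;
{code}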

  was:As for now COPY command is transactional for native HAWQ tables, but it's 
not for external tables.


> Make COPY command transactional for external tables
> ---
>
> Key: HAWQ-956
> URL: https://issues.apache.org/jira/browse/HAWQ-956
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>
> As of now, the COPY command is transactional for native HAWQ tables, but it is 
> not for external tables.
> This command involves communication with the underlying HDFS layer, which isn't 
> under HAWQ's control.
> If something happens to HDFS during a COPY, the data in the table ends up being 
> corrupted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleksandr Diachenko updated HAWQ-956:
-
Description: As of now, the COPY command is transactional for native HAWQ 
tables, but it's not for external tables.

> Make COPY command transactional for external tables
> ---
>
> Key: HAWQ-956
> URL: https://issues.apache.org/jira/browse/HAWQ-956
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: External Tables, PXF
>Reporter: Oleksandr Diachenko
>Assignee: Goden Yao
>
> As of now, the COPY command is transactional for native HAWQ tables, but it's 
> not for external tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-957) NOTICE file clean up

2016-07-27 Thread Goden Yao (JIRA)
Goden Yao created HAWQ-957:
--

 Summary: NOTICE file clean up
 Key: HAWQ-957
 URL: https://issues.apache.org/jira/browse/HAWQ-957
 Project: Apache HAWQ
  Issue Type: Task
  Components: Documentation
Reporter: Goden Yao
Assignee: Lei Chang


From [~jmclean] IPMC review feedback:
{quote}
NOTICE incorrectly contains a long list of copyright statements. I would expect 
to see one or perhaps two here, i.e. the original authors who donated the 
software and whose copyright statements were removed from the original files.
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-957) NOTICE file clean up

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-957:
---
Fix Version/s: 2.0.0.0-incubating

> NOTICE file clean up
> 
>
> Key: HAWQ-957
> URL: https://issues.apache.org/jira/browse/HAWQ-957
> Project: Apache HAWQ
>  Issue Type: Task
>  Components: Documentation
>Reporter: Goden Yao
>Assignee: Lei Chang
> Fix For: 2.0.0.0-incubating
>
>
> From [~jmclean] IPMC review feedback:
> {quote}
> NOTICE incorrectly contains a long list of copyright statements. I would 
> expect to see one or perhaps two here, i.e. the original authors who donated 
> the software and whose copyright statements were removed from the original 
> files.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HAWQ-956) Make COPY command transactional for external tables

2016-07-27 Thread Oleksandr Diachenko (JIRA)
Oleksandr Diachenko created HAWQ-956:


 Summary: Make COPY command transactional for external tables
 Key: HAWQ-956
 URL: https://issues.apache.org/jira/browse/HAWQ-956
 Project: Apache HAWQ
  Issue Type: New Feature
  Components: External Tables, PXF
Reporter: Oleksandr Diachenko
Assignee: Goden Yao






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-950) PXF support for Float filters encoded in header data

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-950:
---
Issue Type: Sub-task  (was: Improvement)
Parent: HAWQ-779

> PXF support for Float filters encoded in header data
> 
>
> Key: HAWQ-950
> URL: https://issues.apache.org/jira/browse/HAWQ-950
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: External Tables, PXF
>Affects Versions: 2.0.0.0-incubating
>Reporter: Kavinder Dhaliwal
>Assignee: Goden Yao
> Fix For: 2.0.1.0-incubating
>
>
> HAWQ-779, contributed by [~jiadx], introduced the ability for HAWQ to serialize 
> filters on float columns and send the data to PXF. However, PXF is not 
> currently capable of parsing float values in the string filter.
> We need to:
> 1. add support for the float type on the Java side;
> 2. add a unit test for this change.
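
As a sketch, using the hive_partitions_all_types table from HAWQ-953 (quoted 
later in this digest), which has a double precision column dub1:

{code}
-- With HAWQ-779, the float predicate below is serialized into the filter
-- string sent to PXF; this issue is about teaching the Java FilterParser
-- to parse the constant 1.5 back out of that string.
SELECT t1, dub1 FROM hive_partitions_all_types WHERE dub1 > 1.5;
{code}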



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HAWQ-953) Pushed down filter fails when querying partitioned Hive tables

2016-07-27 Thread Goden Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HAWQ-953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Goden Yao updated HAWQ-953:
---
Issue Type: Sub-task  (was: Bug)
Parent: HAWQ-779

> Pushed down filter fails when querying partitioned Hive tables
> -
>
> Key: HAWQ-953
> URL: https://issues.apache.org/jira/browse/HAWQ-953
> Project: Apache HAWQ
>  Issue Type: Sub-task
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> After the code changes in HAWQ-779, HAWQ started sending filters to Hive, which 
> fails on partitioned tables.
> {code}
> # \d hive_partitions_all_types
> External table "public.hive_partitions_all_types"
>  Column |Type | Modifiers 
> +-+---
>  t1 | text| 
>  t2 | text| 
>  num1   | integer | 
>  dub1   | double precision| 
>  dec1   | numeric | 
>  tm | timestamp without time zone | 
>  r  | real| 
>  bg | bigint  | 
>  b  | boolean | 
>  tn | smallint| 
>  sml| smallint| 
>  dt | date| 
>  vc1| character varying(5)| 
>  c1 | character(3)| 
>  bin| bytea   | 
> Type: readable
> Encoding: UTF8
> Format type: custom
> Format options: formatter 'pxfwritable_import' 
> External location: 
> pxf://127.0.0.1:51200/hive_many_partitioned_table?PROFILE=Hive
> pxfautomation=# SELECT t1, t2, bg FROM hive_partitions_all_types where bg = 
> 23456789;
> ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
> report   message   MetaException(message:Filtering is supported only on 
> partition keys of type string)description   The server encountered an 
> internal error that prevented it from fulfilling this request.exception   
> javax.servlet.ServletException: MetaException(message:Filtering is supported 
> only on partition keys of type string) (libchurl.c:878)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HAWQ-779) support more pxf filter pushdown

2016-07-27 Thread Goden Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396001#comment-15396001
 ] 

Goden Yao commented on HAWQ-779:


You can leave comments there; we'll try to merge your patch.

>  support more pxf filter pushdown
> -
>
> Key: HAWQ-779
> URL: https://issues.apache.org/jira/browse/HAWQ-779
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Devin Jia
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> When I use PXF with HAWQ, I need to read traditional relational database 
> systems and Solr by way of external tables. The project 
> https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext
> provides only a "WriteAccessor", so I developed 2 plug-ins, in the project 
> https://github.com/inspur-insight/pxf-plugin . But these two plug-ins need 
> HAWQ to be modified:
> 1. When getting the list of fragments from the PXF service, push down the 
> 'filterString': modify the create_pxf_plan methods in 
> src/backend/optimizer/plan/createplan.c:
> segdb_work_map = map_hddata_2gp_segments(uri_str,
> total_segs, segs_participating,
> relation, ctx->root->parse->jointree->quals);
> 2. Modify pxffilters.h and pxffilters.c to support the TEXT LIKE operation, 
> date type operators, and float type operators.
> 3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE 
> operator.
> I have already created a feature branch locally and tested it.
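
For illustration, these are the kinds of qualifiers that would now travel to 
PXF in the filter string instead of being evaluated only on the HAWQ side (the 
external tables and columns below are hypothetical):

{code}
-- TEXT LIKE operation:
SELECT * FROM ext_docs WHERE title LIKE 'hawq%';
-- date type operator:
SELECT * FROM ext_orders WHERE ship_date >= '2016-06-01'::date;
-- float type operator:
SELECT * FROM ext_orders WHERE price > 9.99;
{code}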



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] incubator-hawq issue #731: HAWQ-830. Fix wrong result in CTE query due to CT...

2016-07-27 Thread vraghavan78
Github user vraghavan78 commented on the issue:

https://github.com/apache/incubator-hawq/pull/731
  
When gp_cte_sharing is set to true, the planner sometimes gets into a deadlock.
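
For context, gp_cte_sharing lets the planner share a single scan of a CTE that 
is referenced more than once; a hypothetical sketch of a query of that shape 
(not a confirmed reproducer):

{code}
SET gp_cte_sharing = on;
-- The CTE w is referenced twice, so its subplan is a candidate for
-- sharing; per this report, such shared-scan plans can deadlock.
WITH w AS (SELECT i FROM generate_series(1, 100) AS i)
SELECT * FROM w a JOIN w b ON a.i = b.i;
{code}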


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq issue #818: HAWQ-955. Add scripts for feature test running in...

2016-07-27 Thread radarwave
Github user radarwave commented on the issue:

https://github.com/apache/incubator-hawq/pull/818
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-779) support more pxf filter pushdown

2016-07-27 Thread Devin Jia (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395186#comment-15395186
 ] 

Devin Jia commented on HAWQ-779:


[~GodenYao] I created PR https://github.com/apache/incubator-hawq/pull/820,
but there is already a branch HAWQ-953: 
https://github.com/apache/incubator-hawq/tree/HAWQ-953

>  support more pxf filter pushdown
> -
>
> Key: HAWQ-779
> URL: https://issues.apache.org/jira/browse/HAWQ-779
> Project: Apache HAWQ
>  Issue Type: New Feature
>  Components: PXF
>Reporter: Devin Jia
>Assignee: Shivram Mani
> Fix For: 2.0.1.0-incubating
>
>
> When I use PXF with HAWQ, I need to read traditional relational database 
> systems and Solr by way of external tables. The project 
> https://github.com/Pivotal-Field-Engineering/pxf-field/tree/master/jdbc-pxf-ext
> provides only a "WriteAccessor", so I developed 2 plug-ins, in the project 
> https://github.com/inspur-insight/pxf-plugin . But these two plug-ins need 
> HAWQ to be modified:
> 1. When getting the list of fragments from the PXF service, push down the 
> 'filterString': modify the create_pxf_plan methods in 
> src/backend/optimizer/plan/createplan.c:
> segdb_work_map = map_hddata_2gp_segments(uri_str,
> total_segs, segs_participating,
> relation, ctx->root->parse->jointree->quals);
> 2. Modify pxffilters.h and pxffilters.c to support the TEXT LIKE operation, 
> date type operators, and float type operators.
> 3. Modify org.apache.hawq.pxf.api.FilterParser.java to support the LIKE 
> operator.
> I have already created a feature branch locally and tested it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] incubator-hawq pull request #820: HAWQ-953 hawq pxf-hive support partition c...

2016-07-27 Thread jiadexin
GitHub user jiadexin opened a pull request:

https://github.com/apache/incubator-hawq/pull/820

HAWQ-953 hawq pxf-hive: support partition column filter pushdown where the 
column type is string

This is not "HAWQ-953 Reverted sending qualifiers when creating PXF plan."

It only modifies HiveDataFragmenter.java to support partition column filter 
pushdown where the column type is string.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/inspur-insight/incubator-hawq HAWQ-953_2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/820.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #820


commit 7625eac224221615403fe6e30da49c8ff0b7f65d
Author: Devin Jia 
Date:   2016-07-27T07:39:07Z

HAWQ-953 hawq pxf-hive  support partition column filter pushdown whose type 
is string




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-hawq pull request #819: HAWQ-953 pxf-hive only support partition c...

2016-07-27 Thread jiadexin
Github user jiadexin closed the pull request at:

https://github.com/apache/incubator-hawq/pull/819


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (HAWQ-953) Pushed down filter fails when querying partitioned Hive tables

2016-07-27 Thread Devin Jia (JIRA)

[ 
https://issues.apache.org/jira/browse/HAWQ-953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395162#comment-15395162
 ] 

Devin Jia commented on HAWQ-953:


Cause of the problem: Hive's JDO filter pushdown only supports string 
partition keys. You can modify hive-site.xml and add the following:

<property>
  <name>hive.metastore.integral.jdo.pushdown</name>
  <value>true</value>
</property>

Or, modify the HAWQ PXF class HiveDataFragmenter.java to push down only 
string filters.
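
To illustrate, the failing query from this issue (quoted below) should succeed 
under either workaround:

{code}
-- With hive.metastore.integral.jdo.pushdown=true, the Hive metastore can
-- evaluate the pushed-down filter on the integral partition key; with the
-- HiveDataFragmenter change, the filter is simply not forwarded to Hive
-- and HAWQ evaluates the predicate itself.
SELECT t1, t2, bg FROM hive_partitions_all_types WHERE bg = 23456789;
{code}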

> Pushed down filter fails when querying partitioned Hive tables
> -
>
> Key: HAWQ-953
> URL: https://issues.apache.org/jira/browse/HAWQ-953
> Project: Apache HAWQ
>  Issue Type: Bug
>  Components: PXF
>Reporter: Oleksandr Diachenko
>Assignee: Oleksandr Diachenko
> Fix For: 2.0.1.0-incubating
>
>
> After the code changes in HAWQ-779, HAWQ started sending filters to Hive, which 
> fails on partitioned tables.
> {code}
> # \d hive_partitions_all_types
> External table "public.hive_partitions_all_types"
>  Column |Type | Modifiers 
> +-+---
>  t1 | text| 
>  t2 | text| 
>  num1   | integer | 
>  dub1   | double precision| 
>  dec1   | numeric | 
>  tm | timestamp without time zone | 
>  r  | real| 
>  bg | bigint  | 
>  b  | boolean | 
>  tn | smallint| 
>  sml| smallint| 
>  dt | date| 
>  vc1| character varying(5)| 
>  c1 | character(3)| 
>  bin| bytea   | 
> Type: readable
> Encoding: UTF8
> Format type: custom
> Format options: formatter 'pxfwritable_import' 
> External location: 
> pxf://127.0.0.1:51200/hive_many_partitioned_table?PROFILE=Hive
> pxfautomation=# SELECT t1, t2, bg FROM hive_partitions_all_types where bg = 
> 23456789;
> ERROR:  remote component error (500) from '127.0.0.1:51200':  type  Exception 
> report   message   MetaException(message:Filtering is supported only on 
> partition keys of type string)description   The server encountered an 
> internal error that prevented it from fulfilling this request.exception   
> javax.servlet.ServletException: MetaException(message:Filtering is supported 
> only on partition keys of type string) (libchurl.c:878)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] incubator-hawq pull request #819: HAWQ-953 pxf-hive only support partition c...

2016-07-27 Thread jiadexin
GitHub user jiadexin opened a pull request:

https://github.com/apache/incubator-hawq/pull/819

HAWQ-953 pxf-hive: support partition column filter pushdown only for columns 
whose type is string

HAWQ-953 pxf-hive: support partition column filter pushdown only for columns 
whose type is string

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/inspur-insight/incubator-hawq HAWQ-953

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-hawq/pull/819.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #819


commit fb00bd27021fbabc98c1c940ebb890e974496500
Author: root 
Date:   2016-06-07T01:16:06Z

HAWQ-779 Support more pxf filter pushdown

commit caa20039e73589112c48c20a3f78c4a8f7b1f2d6
Author: Devin Jia 
Date:   2016-06-08T01:04:08Z

HAWQ-779 support more pxf filter pushdown

commit cca9f8854a82783ca6afaa6530fd4004a6447c7d
Author: Devin Jia 
Date:   2016-06-08T01:04:08Z

HAWQ-779. support more pxf filter pushdown

commit 2df879d53dc29149077346190e4c38549ba6e72b
Author: Devin Jia 
Date:   2016-06-12T01:10:46Z

HAWQ-779. support more pxf filter pushdown (update FilterParserTest.java and 
HBaseFilterBuilder.java, to include HDOP_LIKE.)

commit 66717dccb05104c713ac30d4820c4d7005190f3a
Author: Devin Jia 
Date:   2016-06-12T01:23:58Z

Merge branch 'feature-pxf' of 
https://github.com/inspur-insight/incubator-hawq into feature-pxf

commit 6ed0e2b720057bd211dee0db2d13723171143738
Author: Devin Jia 
Date:   2016-06-12T01:32:57Z

HAWQ-779. support more pxf filter pushdown - update FilterParserTest.java 
and HBaseFilterBuilder.java, to include HDOP_LIKE.

commit 45eb5b8fcbda72aa6fc1b5e4dbce929c7f4f7501
Author: Devin Jia 
Date:   2016-06-12T01:39:47Z

HAWQ-779. support more pxf filter pushdown - update FilterParserTest.java 
and HBaseFilterBuilder.java, to include HDOP_LIKE.

commit 5fc6457408a239ff241f6a02d6739d232f210247
Author: Devin Jia 
Date:   2016-06-12T05:04:52Z

Merge remote branch 'upstream/master' into feature-pxf

commit 84cf8d268d6110c7120a82109f83febc0710b4fa
Author: Devin Jia 
Date:   2016-06-12T05:09:15Z

merge from origin/master.

commit a3cc461e5e1c71076f80c40195707f58dcb00377
Author: Devin Jia 
Date:   2016-06-12T05:13:13Z

Update hawq-site.xml

commit 0f0fd5ea92ffc6fcfec9023ff8c19d46c27b26d7
Author: Devin Jia 
Date:   2016-06-12T05:23:46Z

Merge pull request #1 from apache/master

merge from origin/master

commit 9e0f53ef3e208eee8f4ea2b6117f20a3b36e4f54
Author: Devin Jia 
Date:   2016-06-12T08:37:03Z

Merge pull request #2 from inspur-insight/master

Merge pull request #1 from apache/master

commit 1ffd32085265be2da912d551a0a37b871a4cdec3
Author: Devin Jia 
Date:   2016-06-12T08:38:43Z

Merge pull request #3 from inspur-insight/feature-pxf

Feature pxf

commit 134dc5752d150b688d6bf91372682f0fc15258a0
Author: Devin Jia 
Date:   2016-07-27T06:58:24Z

HAWQ-953 pxf-hive only support partition column filter pushdown whose types 
are string




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---