[incubator-hawq] Git Push Summary

2016-06-22 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-617 [deleted] 19604066b


[incubator-hawq] Git Push Summary

2016-06-22 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-677 [deleted] a16fd9221


[incubator-hawq] Git Push Summary

2016-06-22 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-679 [deleted] 65e270337


[incubator-hawq] Git Push Summary

2016-06-22 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-682 [deleted] d0b8379d3


[incubator-hawq] Git Push Summary

2016-06-22 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-717 [deleted] 49928d739


incubator-hawq git commit: HAWQ-717 Update variable name sub_args_list

2016-05-10 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/master 156e98353 -> e60c805f4


HAWQ-717 Update variable name sub_args_list


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/e60c805f
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/e60c805f
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/e60c805f

Branch: refs/heads/master
Commit: e60c805f4c18305b49037d76ef0c312724f57497
Parents: 156e983
Author: Bhuvnesh Chaudhary 
Authored: Tue May 10 11:22:33 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Tue May 10 11:22:33 2016 -0700

--
 tools/bin/hawq | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/e60c805f/tools/bin/hawq
--
diff --git a/tools/bin/hawq b/tools/bin/hawq
index 637553b..08c30e4 100755
--- a/tools/bin/hawq
+++ b/tools/bin/hawq
@@ -88,7 +88,7 @@ def main():
 if hawq_command == 'ssh-exkeys' and '-p' in sub_args_list:
 password_index = sub_args_list.index('-p') + 1
 if len(sub_args_list) > password_index:
-  sub_arg_list[password_index] = json.dumps(sub_args_list[password_index])
+  sub_args_list[password_index] = json.dumps(sub_args_list[password_index])
 sub_args = " ".join(sub_args_list)
 elif len(sys.argv) > 1:
 hawq_command = sys.argv[1]



incubator-hawq git commit: HAWQ-717 Update variable name sub_args_list

2016-05-10 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-717 [created] 49928d739


HAWQ-717 Update variable name sub_args_list


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/49928d73
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/49928d73
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/49928d73

Branch: refs/heads/HAWQ-717
Commit: 49928d7390cafaf09a33dd29aad72d2e9232373b
Parents: 156e983
Author: Bhuvnesh Chaudhary 
Authored: Tue May 10 11:21:43 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Tue May 10 11:21:43 2016 -0700

--
 tools/bin/hawq | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/49928d73/tools/bin/hawq
--
diff --git a/tools/bin/hawq b/tools/bin/hawq
index 637553b..08c30e4 100755
--- a/tools/bin/hawq
+++ b/tools/bin/hawq
@@ -88,7 +88,7 @@ def main():
 if hawq_command == 'ssh-exkeys' and '-p' in sub_args_list:
 password_index = sub_args_list.index('-p') + 1
 if len(sub_args_list) > password_index:
-  sub_arg_list[password_index] = json.dumps(sub_args_list[password_index])
+  sub_args_list[password_index] = json.dumps(sub_args_list[password_index])
 sub_args = " ".join(sub_args_list)
 elif len(sys.argv) > 1:
 hawq_command = sys.argv[1]



[incubator-hawq] Git Push Summary

2016-05-10 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-717 [deleted] 8e1384cc6


incubator-hawq git commit: HAWQ-717: Use password as fed by the user

2016-05-09 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/master 03a3e2069 -> 57b33b49c


HAWQ-717: Use password as fed by the user


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/57b33b49
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/57b33b49
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/57b33b49

Branch: refs/heads/master
Commit: 57b33b49c777c77a7e31381cb06965a4d09ec11a
Parents: 03a3e20
Author: Bhuvnesh Chaudhary 
Authored: Mon May 9 13:01:59 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Mon May 9 13:01:59 2016 -0700

--
 tools/bin/hawq | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/57b33b49/tools/bin/hawq
--
diff --git a/tools/bin/hawq b/tools/bin/hawq
index be628be..637553b 100755
--- a/tools/bin/hawq
+++ b/tools/bin/hawq
@@ -18,6 +18,7 @@
 
 try:
 import os
+import json
 import sys
 import subprocess
 from hawqpylib.HAWQ_HELP import *
@@ -82,7 +83,13 @@ def main():
 if len(sys.argv) > 2:
 hawq_command = sys.argv[1]
 second_arg = sys.argv[2]
-sub_args = " ".join(sys.argv[2:])
+sub_args_list = sys.argv[2:]
+# Password can have special characters like semicolon (;), quotes(", ') etc, convert input password to a string
+if hawq_command == 'ssh-exkeys' and '-p' in sub_args_list:
+password_index = sub_args_list.index('-p') + 1
+if len(sub_args_list) > password_index:
+  sub_arg_list[password_index] = json.dumps(sub_args_list[password_index])
+sub_args = " ".join(sub_args_list)
 elif len(sys.argv) > 1:
 hawq_command = sys.argv[1]
 second_arg = ''

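The comment in the hunk above explains the motivation: a password containing a semicolon or quote characters would be corrupted when the argument list is re-joined with `" ".join(...)` and later interpreted by a shell. `json.dumps` wraps the value in double quotes and escapes any embedded double quotes. A minimal sketch with a hypothetical password value (note the hunk assigns to `sub_arg_list`, the typo that the follow-up HAWQ-717 commit renames to `sub_args_list`):

```python
import json

# Hypothetical argument list for `hawq ssh-exkeys`; the password
# contains a semicolon and a double quote that would otherwise be
# mangled by " ".join(...) and shell interpretation.
sub_args_list = ['-f', 'hostfile', '-p', 'pa;ss"wd']

password_index = sub_args_list.index('-p') + 1
# json.dumps quotes the value and escapes the embedded double quote.
sub_args_list[password_index] = json.dumps(sub_args_list[password_index])

sub_args = " ".join(sub_args_list)
print(sub_args)  # -f hostfile -p "pa;ss\"wd"
```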


incubator-hawq git commit: HAWQ-717: Use password as fed by the user

2016-05-04 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-717 [created] 8e1384cc6


HAWQ-717: Use password as fed by the user


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/8e1384cc
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/8e1384cc
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/8e1384cc

Branch: refs/heads/HAWQ-717
Commit: 8e1384cc6d0016356776e87de1f4e48d06c7c8c4
Parents: 192cff1
Author: Bhuvnesh Chaudhary 
Authored: Wed May 4 17:10:39 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Wed May 4 17:10:39 2016 -0700

--
 tools/bin/hawq | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/8e1384cc/tools/bin/hawq
--
diff --git a/tools/bin/hawq b/tools/bin/hawq
index be628be..637553b 100755
--- a/tools/bin/hawq
+++ b/tools/bin/hawq
@@ -18,6 +18,7 @@
 
 try:
 import os
+import json
 import sys
 import subprocess
 from hawqpylib.HAWQ_HELP import *
@@ -82,7 +83,13 @@ def main():
 if len(sys.argv) > 2:
 hawq_command = sys.argv[1]
 second_arg = sys.argv[2]
-sub_args = " ".join(sys.argv[2:])
+sub_args_list = sys.argv[2:]
+# Password can have special characters like semicolon (;), quotes(", ') etc, convert input password to a string
+if hawq_command == 'ssh-exkeys' and '-p' in sub_args_list:
+password_index = sub_args_list.index('-p') + 1
+if len(sub_args_list) > password_index:
+  sub_arg_list[password_index] = json.dumps(sub_args_list[password_index])
+sub_args = " ".join(sub_args_list)
 elif len(sys.argv) > 1:
 hawq_command = sys.argv[1]
 second_arg = ''



incubator-hawq git commit: HAWQ-682: hawq init master fails to syncup hawq-site xml if there is a segment host down (bhuvnesh2703)

2016-04-15 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/master 775597818 -> e45f405c5


HAWQ-682: hawq init master fails to syncup hawq-site xml if there is a segment 
host down (bhuvnesh2703)


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/e45f405c
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/e45f405c
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/e45f405c

Branch: refs/heads/master
Commit: e45f405c527eb23e5eb108edd7474617289645e7
Parents: 7755978
Author: Bhuvnesh Chaudhary 
Authored: Fri Apr 15 20:55:53 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Fri Apr 15 20:55:53 2016 -0700

--
 tools/bin/hawq_ctl | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/e45f405c/tools/bin/hawq_ctl
--
diff --git a/tools/bin/hawq_ctl b/tools/bin/hawq_ctl
index 91ac814..d592d25 100755
--- a/tools/bin/hawq_ctl
+++ b/tools/bin/hawq_ctl
@@ -256,8 +256,9 @@ class HawqInit:
 self.default_hash_table_bucket_number = buckets
 
 logger.info("Set default_hash_table_bucket_number as: %s" % self.default_hash_table_bucket_number)
-cmd = "hawq config -c default_hash_table_bucket_number -v %s --skipvalidation -q > /dev/null" % \
-   self.default_hash_table_bucket_number
+ignore_bad_hosts = '--ignore-bad-hosts' if opts.ignore_bad_hosts else ''
+cmd = "hawq config -c default_hash_table_bucket_number -v %s --skipvalidation -q %s > /dev/null" % \
+   (self.default_hash_table_bucket_number, ignore_bad_hosts)
 result = local_ssh(cmd, logger)
 if result != 0:
 logger.error("Set default_hash_table_bucket_number failed")

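The pattern used in both HAWQ-682 hunks, splicing an optional flag into the command string only when the option was given, can be sketched in isolation as follows (the `Opts` stand-in and the bucket value are hypothetical, not part of the commit):

```python
class Opts:
    """Hypothetical stand-in for the parsed command-line options object."""
    ignore_bad_hosts = True

opts = Opts()
default_hash_table_bucket_number = 6  # hypothetical value

# Empty string when the flag is off, so the command is unchanged in that case.
ignore_bad_hosts = '--ignore-bad-hosts' if opts.ignore_bad_hosts else ''
cmd = "hawq config -c default_hash_table_bucket_number -v %s --skipvalidation -q %s > /dev/null" % \
      (default_hash_table_bucket_number, ignore_bad_hosts)
print(cmd)
```

With `ignore_bad_hosts = False` the `%s` placeholder collapses to an empty string, leaving only a harmless extra space in the command.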


incubator-hawq git commit: HAWQ-682 Include ignore-bad-hosts option in hawq init master command to avoid syncup failures during setting default bucket number

2016-04-15 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-682 [created] d0b8379d3


HAWQ-682 Include ignore-bad-hosts option in hawq init master command to avoid 
syncup failures during setting default bucket number


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/d0b8379d
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/d0b8379d
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/d0b8379d

Branch: refs/heads/HAWQ-682
Commit: d0b8379d34574f22e94536562f403a515215ffbd
Parents: 7755978
Author: Bhuvnesh Chaudhary 
Authored: Fri Apr 15 14:55:25 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Fri Apr 15 14:55:25 2016 -0700

--
 tools/bin/hawq_ctl | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/d0b8379d/tools/bin/hawq_ctl
--
diff --git a/tools/bin/hawq_ctl b/tools/bin/hawq_ctl
index 91ac814..d592d25 100755
--- a/tools/bin/hawq_ctl
+++ b/tools/bin/hawq_ctl
@@ -256,8 +256,9 @@ class HawqInit:
 self.default_hash_table_bucket_number = buckets
 
 logger.info("Set default_hash_table_bucket_number as: %s" % self.default_hash_table_bucket_number)
-cmd = "hawq config -c default_hash_table_bucket_number -v %s --skipvalidation -q > /dev/null" % \
-   self.default_hash_table_bucket_number
+ignore_bad_hosts = '--ignore-bad-hosts' if opts.ignore_bad_hosts else ''
+cmd = "hawq config -c default_hash_table_bucket_number -v %s --skipvalidation -q %s > /dev/null" % \
+   (self.default_hash_table_bucket_number, ignore_bad_hosts)
 result = local_ssh(cmd, logger)
 if result != 0:
 logger.error("Set default_hash_table_bucket_number failed")



incubator-hawq git commit: HAWQ-679 Included ignore_bad_hosts command to remove standby

2016-04-14 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-679 [created] 65e270337


HAWQ-679 Included ignore_bad_hosts command to remove standby


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/65e27033
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/65e27033
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/65e27033

Branch: refs/heads/HAWQ-679
Commit: 65e2703378d20bd824a48ba5c5d30014822f0533
Parents: 1435927
Author: Bhuvnesh Chaudhary 
Authored: Thu Apr 14 15:35:49 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Thu Apr 14 15:35:49 2016 -0700

--
 tools/bin/hawq_ctl | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/65e27033/tools/bin/hawq_ctl
--
diff --git a/tools/bin/hawq_ctl b/tools/bin/hawq_ctl
index 32752b6..6b99cae 100755
--- a/tools/bin/hawq_ctl
+++ b/tools/bin/hawq_ctl
@@ -69,6 +69,7 @@ class HawqInit:
 self.shared_buffers = opts.shared_buffers
 self.default_hash_table_bucket_number = opts.default_hash_table_bucket_number
 self.lock = threading.Lock()
+self.ignore_bad_hosts = opts.ignore_bad_hosts
 self._get_config()
 self._write_config()
 self._get_ips()
@@ -296,7 +297,8 @@ class HawqInit:
 logger.info("Stop HAWQ cluster")
 cmd = "%s; hawq stop master -a -M fast -q" % source_hawq_env
 check_return_code(local_ssh(cmd, logger), logger, "Stop HAWQ master failed, exit")
-cmd = "%s; hawq stop allsegments -a -q" % source_hawq_env
+ignore_bad_hosts = '--ignore-bad-hosts' if self.ignore_bad_hosts else ''
+cmd = "%s; hawq stop allsegments -a -q %s" % (source_hawq_env, ignore_bad_hosts)
 check_return_code(local_ssh(cmd, logger), logger, "Stop HAWQ segments failed, exit")
 logger.info("Start HAWQ master")
 cmd = "%s; hawq start master -m -q" % source_hawq_env



incubator-hawq git commit: HAWQ-677 Propagated ignore-bad-hosts flag to stop all segments

2016-04-14 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-677 [created] a16fd9221


HAWQ-677 Propagated ignore-bad-hosts flag to stop all segments


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/a16fd922
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/a16fd922
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/a16fd922

Branch: refs/heads/HAWQ-677
Commit: a16fd9221fbe491c8c2f549257d02bbd07211467
Parents: 1435927
Author: Bhuvnesh Chaudhary 
Authored: Thu Apr 14 14:39:11 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Thu Apr 14 14:39:11 2016 -0700

--
 tools/bin/hawq_ctl | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/a16fd922/tools/bin/hawq_ctl
--
diff --git a/tools/bin/hawq_ctl b/tools/bin/hawq_ctl
index 32752b6..aaab010 100755
--- a/tools/bin/hawq_ctl
+++ b/tools/bin/hawq_ctl
@@ -1199,8 +1199,9 @@ def hawq_activate_standby(opts, hawq_dict):
 else:
 logger.info("HAWQ master is not running, skip")
 
+ignore_bad_hosts = '--ignore-bad-hosts' if opts.ignore_bad_hosts else ''
 logger.info("Stopping all the running segments")
-cmd = "%s; hawq stop allsegments -a -M fast -q;" % source_hawq_env
+cmd = "%s; hawq stop allsegments -a -M fast -q %s;" % (source_hawq_env, ignore_bad_hosts)
 result = remote_ssh(cmd, old_standby_host_name, '')
 if result != 0:
 logger.error("Stop segments failed, abort")
@@ -1220,7 +1221,6 @@ def hawq_activate_standby(opts, hawq_dict):
 
 # Set current standby host name as the new master host name in configuration.
 logger.info("Update master host name in hawq-site.xml")
-ignore_bad_hosts = '--ignore-bad-hosts' if opts.ignore_bad_hosts else ''
 cmd = "%s; hawq config -c hawq_master_address_host -v %s --skipvalidation -q %s" % (source_hawq_env, hawq_dict['hawq_standby_address_host'], ignore_bad_hosts)
 check_return_code(remote_ssh(cmd, old_standby_host_name, ''), logger, "Set hawq_master_address_host failed")
 



incubator-hawq git commit: HAWQ-617. Add ignore-bad-hosts option.

2016-04-06 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/master 8bd10631e -> 19604066b


HAWQ-617. Add ignore-bad-hosts option.


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/19604066
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/19604066
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/19604066

Branch: refs/heads/master
Commit: 19604066b134bac8b7226a3abdb55411c181a4c4
Parents: 8bd1063
Author: Bhuvnesh Chaudhary 
Authored: Wed Apr 6 14:35:30 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Wed Apr 6 14:35:30 2016 -0700

--
 tools/bin/gppylib/util/ssh_utils.py | 23 +++
 tools/bin/gpscp | 36 ++---
 tools/bin/hawq_ctl  | 36 +
 tools/bin/hawqconfig| 12 ++
 tools/bin/hawqpylib/HAWQ_HELP.py|  1 +
 tools/bin/hawqpylib/hawqlib.py  | 39 
 tools/doc/gpscp_help|  7 ++
 7 files changed, 126 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/19604066/tools/bin/gppylib/util/ssh_utils.py
--
diff --git a/tools/bin/gppylib/util/ssh_utils.py b/tools/bin/gppylib/util/ssh_utils.py
index 3194e11..853c0f5 100644
--- a/tools/bin/gppylib/util/ssh_utils.py
+++ b/tools/bin/gppylib/util/ssh_utils.py
@@ -160,6 +160,29 @@ class HostList():
 
 return self.list
 
+def removeBadHosts(self):
+''' Update list of host to include only the host on which SSH was successful'''
+
+pool = WorkerPool()
+
+for h in self.list:
+cmd = Echo('ssh test', '', ctxt=REMOTE, remoteHost=h)
+pool.addCommand(cmd)
+
+pool.join()
+pool.haltWork()
+
+bad_hosts = []
+working_hosts = []
+for cmd in pool.getCompletedItems():
+if not cmd.get_results().wasSuccessful():
+bad_hosts.append(cmd.remoteHost)
+else:
+working_hosts.append(cmd.remoteHost)
+
+self.list = working_hosts[:]
+return bad_hosts
+
 # Session is a command session, derived from a base class cmd.Cmd
 class Session(cmd.Cmd):
 '''Implements a list of open ssh sessions ready to execute commands'''

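`removeBadHosts()` above probes every host with a trivial remote `Echo` command through a `WorkerPool`, keeps the reachable ones in `self.list`, and returns the unreachable ones. A sequential sketch of the same partitioning idea (the `ssh_ok` probe and the injectable `reachable` parameter are assumptions standing in for the pooled SSH check, not HAWQ's API):

```python
import subprocess

def ssh_ok(host):
    # BatchMode=yes makes ssh fail fast instead of prompting for a password.
    return subprocess.call(
        ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5", host, "true"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def remove_bad_hosts(hosts, reachable=ssh_ok):
    """Return (working_hosts, bad_hosts) by probing each host once."""
    working, bad = [], []
    for h in hosts:
        (working if reachable(h) else bad).append(h)
    return working, bad

# Injecting a fake probe shows the partitioning without real SSH.
working, bad = remove_bad_hosts(["seg1", "seg2", "seg3"],
                                reachable=lambda h: h != "seg2")
print(working, bad)  # ['seg1', 'seg3'] ['seg2']
```

The caller in gpscp (visible further down) then aborts only if *every* host turned out bad, mirroring the all-hosts-unreachable check.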
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/19604066/tools/bin/gpscp
--
diff --git a/tools/bin/gpscp b/tools/bin/gpscp
index d00f15d..c02d677 100755
--- a/tools/bin/gpscp
+++ b/tools/bin/gpscp
@@ -64,6 +64,7 @@ class Global:
 opt['-f'] = None
 opt['-J'] = '=:'
 opt['-r'] = False
+opt['--ignore-bad-hosts'] = False
 filePath = []
 
 GV = Global()
@@ -86,18 +87,19 @@ def print_version():
 #
 def parseCommandLine():
 try:
-(options, args) = getopt.getopt(sys.argv[1:], '?vrJ:p:u:h:f:', ['version'])
+(options, args) = getopt.getopt(sys.argv[1:], '?vrJ:p:u:h:f:', ['version', 'ignore-bad-hosts'])
 except Exception, e:
 usage('[ERROR] ' + str(e))
 
 for (switch, val) in options:
-   if (switch == '-?'):  usage(0)
-   elif (switch == '-v'):GV.opt[switch] = True
-   elif (switch == '-f'):GV.opt[switch] = val
-   elif (switch == '-h'):GV.opt[switch].append(val)
-elif (switch == '-J'):GV.opt[switch] = val + ':'
-elif (switch == '-r'):GV.opt[switch] = True
-elif (switch == '--version'): print_version()
+if (switch == '-?'): usage(0)
+elif (switch == '-v'):  GV.opt[switch] = True
+elif (switch == '-f'):  GV.opt[switch] = val
+elif (switch == '-h'):  GV.opt[switch].append(val)
+elif (switch == '-J'):  GV.opt[switch] = val + ':'
+elif (switch == '-r'):  GV.opt[switch] = True
+elif (switch == '--version'):   print_version()
+elif (switch == '--ignore-bad-hosts'):  GV.opt[switch] = True
 
 hf = (len(GV.opt['-h']) and 1 or 0) + (GV.opt['-f'] and 1 or 0)
 if hf != 1:
@@ -131,15 +133,23 @@ try:
 if GV.opt['-f']:
 hostlist.parseFile(GV.opt['-f'])
 
-try:
-hostlist.checkSSH()
-except ssh_utils.SSHError, e:
-sys.exit('[ERROR] ' + str(e))
+if GV.opt['--ignore-bad-hosts']:
+original_hostlist = hostlist.list
+bad_hosts = hostlist.removeBadHosts()
+if len(bad_hosts) == len(original_hostlist):
+sys.exit('[ERROR]: Unable

incubator-hawq git commit: HAWQ-617. Add ignore-bad-hosts option.

2016-04-06 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-617 [created] 19604066b


HAWQ-617. Add ignore-bad-hosts option.


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/19604066
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/19604066
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/19604066

Branch: refs/heads/HAWQ-617
Commit: 19604066b134bac8b7226a3abdb55411c181a4c4
Parents: 8bd1063
Author: Bhuvnesh Chaudhary 
Authored: Wed Apr 6 14:35:30 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Wed Apr 6 14:35:30 2016 -0700

--
 tools/bin/gppylib/util/ssh_utils.py | 23 +++
 tools/bin/gpscp | 36 ++---
 tools/bin/hawq_ctl  | 36 +
 tools/bin/hawqconfig| 12 ++
 tools/bin/hawqpylib/HAWQ_HELP.py|  1 +
 tools/bin/hawqpylib/hawqlib.py  | 39 
 tools/doc/gpscp_help|  7 ++
 7 files changed, 126 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/19604066/tools/bin/gppylib/util/ssh_utils.py
--
diff --git a/tools/bin/gppylib/util/ssh_utils.py b/tools/bin/gppylib/util/ssh_utils.py
index 3194e11..853c0f5 100644
--- a/tools/bin/gppylib/util/ssh_utils.py
+++ b/tools/bin/gppylib/util/ssh_utils.py
@@ -160,6 +160,29 @@ class HostList():
 
 return self.list
 
+def removeBadHosts(self):
+''' Update list of host to include only the host on which SSH was successful'''
+
+pool = WorkerPool()
+
+for h in self.list:
+cmd = Echo('ssh test', '', ctxt=REMOTE, remoteHost=h)
+pool.addCommand(cmd)
+
+pool.join()
+pool.haltWork()
+
+bad_hosts = []
+working_hosts = []
+for cmd in pool.getCompletedItems():
+if not cmd.get_results().wasSuccessful():
+bad_hosts.append(cmd.remoteHost)
+else:
+working_hosts.append(cmd.remoteHost)
+
+self.list = working_hosts[:]
+return bad_hosts
+
 # Session is a command session, derived from a base class cmd.Cmd
 class Session(cmd.Cmd):
 '''Implements a list of open ssh sessions ready to execute commands'''

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/19604066/tools/bin/gpscp
--
diff --git a/tools/bin/gpscp b/tools/bin/gpscp
index d00f15d..c02d677 100755
--- a/tools/bin/gpscp
+++ b/tools/bin/gpscp
@@ -64,6 +64,7 @@ class Global:
 opt['-f'] = None
 opt['-J'] = '=:'
 opt['-r'] = False
+opt['--ignore-bad-hosts'] = False
 filePath = []
 
 GV = Global()
@@ -86,18 +87,19 @@ def print_version():
 #
 def parseCommandLine():
 try:
-(options, args) = getopt.getopt(sys.argv[1:], '?vrJ:p:u:h:f:', ['version'])
+(options, args) = getopt.getopt(sys.argv[1:], '?vrJ:p:u:h:f:', ['version', 'ignore-bad-hosts'])
 except Exception, e:
 usage('[ERROR] ' + str(e))
 
 for (switch, val) in options:
-   if (switch == '-?'):  usage(0)
-   elif (switch == '-v'):GV.opt[switch] = True
-   elif (switch == '-f'):GV.opt[switch] = val
-   elif (switch == '-h'):GV.opt[switch].append(val)
-elif (switch == '-J'):GV.opt[switch] = val + ':'
-elif (switch == '-r'):GV.opt[switch] = True
-elif (switch == '--version'): print_version()
+if (switch == '-?'): usage(0)
+elif (switch == '-v'):  GV.opt[switch] = True
+elif (switch == '-f'):  GV.opt[switch] = val
+elif (switch == '-h'):  GV.opt[switch].append(val)
+elif (switch == '-J'):  GV.opt[switch] = val + ':'
+elif (switch == '-r'):  GV.opt[switch] = True
+elif (switch == '--version'):   print_version()
+elif (switch == '--ignore-bad-hosts'):  GV.opt[switch] = True
 
 hf = (len(GV.opt['-h']) and 1 or 0) + (GV.opt['-f'] and 1 or 0)
 if hf != 1:
@@ -131,15 +133,23 @@ try:
 if GV.opt['-f']:
 hostlist.parseFile(GV.opt['-f'])
 
-try:
-hostlist.checkSSH()
-except ssh_utils.SSHError, e:
-sys.exit('[ERROR] ' + str(e))
+if GV.opt['--ignore-bad-hosts']:
+original_hostlist = hostlist.list
+bad_hosts = hostlist.removeBadHosts()
+if len(bad_hosts) == len(original_hostlist):
+sys.exit('[ERROR]: Unabl

[incubator-hawq] Git Push Summary

2016-04-06 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-617 [deleted] 487929d54


[03/12] incubator-hawq git commit: HAWQ-564. HAWQ-564. Resume resource dispatching when reset a RUAlive pending segment

2016-04-05 Thread bhuvnesh2703
HAWQ-564. HAWQ-564. Resume resource dispatching when reset a RUAlive pending 
segment


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/3828d914
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/3828d914
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/3828d914

Branch: refs/heads/HAWQ-617
Commit: 3828d9147f6175848fb239f610c6b2cd2e6b8c8c
Parents: b7a8528
Author: YI JIN 
Authored: Tue Apr 5 11:22:44 2016 +1000
Committer: YI JIN 
Committed: Tue Apr 5 11:22:44 2016 +1000

--
 src/backend/resourcemanager/communication/rmcomm_RM2RMSEG.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/3828d914/src/backend/resourcemanager/communication/rmcomm_RM2RMSEG.c
--
diff --git a/src/backend/resourcemanager/communication/rmcomm_RM2RMSEG.c 
b/src/backend/resourcemanager/communication/rmcomm_RM2RMSEG.c
index c784f65..50038b1 100644
--- a/src/backend/resourcemanager/communication/rmcomm_RM2RMSEG.c
+++ b/src/backend/resourcemanager/communication/rmcomm_RM2RMSEG.c
@@ -210,12 +210,18 @@ void receivedRUAliveResponse(AsyncCommMessageHandlerContext  context,
refreshActualMinGRMContainerPerSeg();
}
else {
-   elog(DEBUG3, "Resource manager find host %s is down already.",
+   elog(DEBUG3, "Resource manager finds host %s is down already.",
 GET_SEGRESOURCE_HOSTNAME(segres));
}
}
+   else
+   {
+   elog(DEBUG3, "Resource manager finds host %s still up.");
+   }
 
setSegResRUAlivePending(segres, false);
+   PQUEMGR->toRunQueryDispatch = true;
+
closeFileDesc(context->AsyncBuffer);
 }
 



[01/12] incubator-hawq git commit: HAWQ-462. Dispatch dfs_address from master to segment in secure mode

2016-04-05 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-617 c5c7d8fc0 -> 487929d54


HAWQ-462. Dispatch dfs_address from master to segment in secure mode


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/59ebfa70
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/59ebfa70
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/59ebfa70

Branch: refs/heads/HAWQ-617
Commit: 59ebfa7072621117827ae3d9464c971a61919672
Parents: e8fcfb0
Author: Shivram Mani 
Authored: Mon Apr 4 11:52:51 2016 -0700
Committer: Shivram Mani 
Committed: Mon Apr 4 11:52:51 2016 -0700

--
 src/backend/cdb/cdbquerycontextdispatching.c | 63 +--
 src/backend/storage/file/fd.c|  2 +
 src/bin/gpfusion/gpbridgeapi.c   | 17 +++---
 src/include/storage/fd.h |  2 +
 4 files changed, 74 insertions(+), 10 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/59ebfa70/src/backend/cdb/cdbquerycontextdispatching.c
--
diff --git a/src/backend/cdb/cdbquerycontextdispatching.c b/src/backend/cdb/cdbquerycontextdispatching.c
index 52f85f1..b3cf083 100644
--- a/src/backend/cdb/cdbquerycontextdispatching.c
+++ b/src/backend/cdb/cdbquerycontextdispatching.c
@@ -102,7 +102,7 @@ int QueryContextDispatchingSizeMemoryLimit = 100 * 1024; /* KB */
 
 enum QueryContextDispatchingItemType
 {
-MasterXid, TablespaceLocation, TupleType, EmptyTable, FileSystemCredential
+MasterXid, TablespaceLocation, TupleType, EmptyTable, FileSystemCredential, Namespace
 };
 typedef enum QueryContextDispatchingItemType QueryContextDispatchingItemType;
 
@@ -183,7 +183,7 @@ static char*
 GetExtTableFirstLocation(Datum *array);
 
 static void 
-AddFileSystemCredentialForPxfTable(char *uri);
+AddFileSystemCredentialForPxfTable();
 
 /**
  * construct the file location for query context dispatching.
@@ -770,6 +770,30 @@ RebuildTupleForRelation(QueryContextInfo *cxt)
 }
 
 /*
+ * Deserialize the Namespace data
+ */
+static void
+RebuildNamespace(QueryContextInfo *cxt)
+{
+
+   int len;
+   char buffer[4], *binary;
+   ReadData(cxt, buffer, sizeof(buffer), TRUE);
+
+   len = (int) ntohl(*(uint32 *) buffer);
+   binary = palloc(len);
+   if(ReadData(cxt, binary, len, TRUE))
+   {
+   StringInfoData buffer;
+   initStringInfoOfString(&buffer, binary, len);
+   dfs_address = strdup(buffer.data);
+   } else {
+   elog(ERROR, "Couldn't rebuild Namespace");
+   }
+   pfree(binary);
+}
+
+/*
  * rebuild execute context
  */
 void
@@ -801,6 +825,9 @@ RebuildQueryContext(QueryContextInfo *cxt, HTAB **currentFilesystemCredentials,
 RebuildFilesystemCredentials(cxt, currentFilesystemCredentials, currentFilesystemCredentialsMemoryContext);
break;
+case Namespace:
+RebuildNamespace(cxt);
+break;
 default:
 ereport(ERROR,
 (errcode(ERRCODE_GP_INTERNAL_ERROR), errmsg( "unrecognized 
"
@@ -1746,7 +1773,8 @@ prepareDispatchedCatalogExternalTable(QueryContextInfo *cxt,
if (IS_PXF_URI(location))
{
Insist(array_size == 1);
-   AddFileSystemCredentialForPxfTable(location);
+   AddFileSystemCredentialForPxfTable();
+   prepareDfsAddressForDispatch(cxt);
}
 
 AddTupleWithToastsToContextInfo(cxt, ExtTableRelationId, "pg_exttable", tuple,
@@ -2880,7 +2908,7 @@ static char* GetExtTableFirstLocation(Datum *array)
  * prepareDispatchedCatalogFileSystemCredential will store the token
  * using port == 0 in HA case (otherwise the supplied port).
  */
-static void AddFileSystemCredentialForPxfTable(char *uri)
+static void AddFileSystemCredentialForPxfTable()
 {
char* dfs_address = NULL;
 
@@ -2984,3 +3012,30 @@ GetResultRelSegFileInfos(Oid relid, List *segnomaps, List *existing_seginfomaps)
 
return existing_seginfomaps;
 }
+
+/*
+ * prepareDfsAddressForDispatch
+ *
+ * Given the cxt use the currently set value of cxt->sharedPath
+ * and add it to cxt->buffer so that it is dispatched from
+ * master to segment. This is only required in the case of
+ * a secure filesystem.
+ */
+
+void
+prepareDfsAddressForDispatch(QueryContextInfo* cxt)
+{
+   if (!enable_secure_filesystem)
+   return;
+   const char *namespace = cxt->sharedPath;
+   int size = strlen(namespace);
+   StringInfoData buffer;
+   initStringInfo(&buffer);
+
+   pq_sendint(&buffer, (int) Namespace, sizeof(char));
+   pq_sendint(&buffer, size, sizeof(int));
+
+   WriteData(cxt, buffer.dat

[07/12] incubator-hawq git commit: HAWQ-619. Add comments for inputformat test cases.

2016-04-05 Thread bhuvnesh2703
HAWQ-619. Add comments for inputformat test cases.


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/ec8e7918
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/ec8e7918
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/ec8e7918

Branch: refs/heads/HAWQ-617
Commit: ec8e791849e6d549d452f5211a7e9a7774b1a783
Parents: 53a9f76
Author: Chunling Wang 
Authored: Tue Apr 5 12:03:19 2016 +0800
Committer: Lili Ma 
Committed: Tue Apr 5 13:46:19 2016 +0800

--
 .../java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java  | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/ec8e7918/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java
--
diff --git a/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java b/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java
index bcec9f1..9c064a2 100644
--- a/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java
+++ b/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java
@@ -51,6 +51,8 @@ public class SimpleTableLocalTester extends SimpleTableTester {
 final File answerFile   = new File(caseFolder, tableName + ".ans");
 final File metadataFile = new File(caseFolder, tableName + ".yaml");
 final File outputFile   = new File(caseFolder, "output/part-r-0");
+   final String caseName = tableName.replaceAll("\\.", "_");
+   System.out.println("Executing test case: " + caseName);
 
List answers;
 
@@ -106,5 +108,7 @@ public class SimpleTableLocalTester extends SimpleTableTester {
 // compare result
 List outputs = Files.readLines(outputFile, Charsets.UTF_8);
 checkOutput(answers, outputs, table);
+
+System.out.println("Successfully finish test case: " + caseName);
}
 }



[02/12] incubator-hawq git commit: HAWQ-615. Handle incompatible tables with getMetadata PXF API

2016-04-05 Thread bhuvnesh2703
HAWQ-615. Handle incompatible tables with getMetadata PXF API


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/b7a8528c
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/b7a8528c
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/b7a8528c

Branch: refs/heads/HAWQ-617
Commit: b7a8528ce306905920cfa82dcaee97a6b051556d
Parents: 59ebfa7
Author: Shivram Mani 
Authored: Mon Apr 4 17:31:33 2016 -0700
Committer: Shivram Mani 
Committed: Mon Apr 4 17:31:33 2016 -0700

--
 .../pxf/plugins/hive/HiveMetadataFetcher.java   |  33 +-
 .../plugins/hive/HiveMetadataFetcherTest.java   | 112 ++-
 2 files changed, 140 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/b7a8528c/pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcher.java
--
diff --git 
a/pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcher.java
 
b/pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcher.java
index d228ec5..91f91e7 100644
--- 
a/pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcher.java
+++ 
b/pxf/pxf-hive/src/main/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcher.java
@@ -50,9 +50,21 @@ public class HiveMetadataFetcher extends MetadataFetcher {
 client = HiveUtilities.initHiveClient();
 }
 
+/**
+ * Fetches metadata of hive tables corresponding to the given pattern.
+ * For patterns matching more than one table, the unsupported tables are 
skipped.
+ * If the pattern corresponds to exactly one table, throws an exception if
+ * the table type is not supported or contains unsupported field types.
+ * Supported HCatalog types: TINYINT,
+ * SMALLINT, INT, BIGINT, BOOLEAN, FLOAT, DOUBLE, STRING, BINARY, 
TIMESTAMP,
+ * DATE, DECIMAL, VARCHAR, CHAR.
+ *
+ * @param pattern pattern table/file name or pattern in the given source
+ */
 @Override
 public List getMetadata(String pattern) throws Exception {
 
+boolean ignoreErrors = false;
 List tblsDesc = 
HiveUtilities.extractTablesFromPattern(client, pattern);
 
 if(tblsDesc == null || tblsDesc.isEmpty()) {
@@ -62,11 +74,24 @@ public class HiveMetadataFetcher extends MetadataFetcher {
 
 List metadataList = new ArrayList();
 
+if(tblsDesc.size() > 1) {
+ignoreErrors = true;
+}
+
 for(Metadata.Item tblDesc: tblsDesc) {
-Metadata metadata = new Metadata(tblDesc);
-Table tbl = HiveUtilities.getHiveTable(client, tblDesc);
-getSchema(tbl, metadata);
-metadataList.add(metadata);
+try {
+Metadata metadata = new Metadata(tblDesc);
+Table tbl = HiveUtilities.getHiveTable(client, tblDesc);
+getSchema(tbl, metadata);
+metadataList.add(metadata);
+} catch (UnsupportedTypeException | UnsupportedOperationException 
e) {
+if(ignoreErrors) {
+LOG.warn("Metadata fetch for " + tblDesc.toString() + " 
failed. " + e.getMessage());
+continue;
+} else {
+throw e;
+}
+}
 }
 
 return metadataList;

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/b7a8528c/pxf/pxf-hive/src/test/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcherTest.java
--
diff --git 
a/pxf/pxf-hive/src/test/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcherTest.java
 
b/pxf/pxf-hive/src/test/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcherTest.java
index 4ddb486..1323eea 100644
--- 
a/pxf/pxf-hive/src/test/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcherTest.java
+++ 
b/pxf/pxf-hive/src/test/java/org/apache/hawq/pxf/plugins/hive/HiveMetadataFetcherTest.java
@@ -24,6 +24,7 @@ import static org.junit.Assert.*;
 import static org.mockito.Mockito.*;
 
 import java.util.ArrayList;
+import java.util.Arrays;
 import java.util.List;
 
 import org.apache.commons.logging.Log;
@@ -137,7 +138,7 @@ public class HiveMetadataFetcherTest {
 hiveTable.setPartitionKeys(new ArrayList());
 when(hiveClient.getTable("default", tableName)).thenReturn(hiveTable);
 
-// get metadata
+// Get metadata
 metadataList = fetcher.getMetadata(tableName);
 Metadata metadata = metadataList.get(0);
 
@@ -154,6 +155,115 @@ public class HiveMetadataFetcherTest {
 assertEquals("int4", field.getType());
 }
 
+@Test
+public void getTableMetadataWit

[12/12] incubator-hawq git commit: Updated gitignore

2016-04-05 Thread bhuvnesh2703
Updated gitignore


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/487929d5
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/487929d5
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/487929d5

Branch: refs/heads/HAWQ-617
Commit: 487929d54214919c201e26d5193f0116f4e51191
Parents: 4395a03
Author: Bhuvnesh Chaudhary 
Authored: Tue Apr 5 20:43:44 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Tue Apr 5 20:43:44 2016 -0700

--
 .gitignore | 54 ++
 1 file changed, 54 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/487929d5/.gitignore
--
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000..8fd9023
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,54 @@
+# Object files
+*.o
+*.ko
+*.obj
+*.elf
+.deps
+
+# Precompiled Headers
+*.gch
+*.pch
+
+# Libraries
+*.lib
+*.a
+*.la
+*.lo
+
+# Shared objects (inc. Windows DLLs)
+*.dll
+*.so
+*.so.*
+*.dylib
+
+# Executables
+*.exe
+*.app
+*.i*86
+*.x86_64
+*.hex
+
+# Debug files
+*.dSYM/
+*.o
+*.ko
+*.obj
+*.elf
+objfiles.txt
+
+# Eclipse Project
+.project
+.pydevproject
+.cproject
+.settings
+
+# Generated files
+BUILD_NUMBER
+GNUmakefile
+config.log
+config.status
+VERSION
+env.sh
+ext/
+plr.tgz
+autom4te.cache/



[10/12] incubator-hawq git commit: HAWQ-629. Insert into table select generate_series free resource too early.

2016-04-05 Thread bhuvnesh2703
HAWQ-629. Insert into table select generate_series free resource too early.


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/0aa69529
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/0aa69529
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/0aa69529

Branch: refs/heads/HAWQ-617
Commit: 0aa6952930e6f7d155624947dbb612e1516b0a00
Parents: 1fbdf8b
Author: hzhang2 
Authored: Wed Apr 6 10:59:15 2016 +0800
Committer: hzhang2 
Committed: Wed Apr 6 11:00:07 2016 +0800

--
 src/backend/executor/execMain.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/0aa69529/src/backend/executor/execMain.c
--
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 6fa5cd2..cc030be 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -1306,12 +1306,6 @@ ExecutorEnd(QueryDesc *queryDesc)
/* sanity checks */
Assert(queryDesc != NULL);
 
-   /* Cleanup the global resource reference for spi/function resource 
inheritance. */
-   if ( Gp_role == GP_ROLE_DISPATCH ) {
-   AutoFreeResource(queryDesc->resource);
-   queryDesc->resource = NULL;
-   }
-
estate = queryDesc->estate;
 
Assert(estate != NULL);
@@ -1386,6 +1380,12 @@ ExecutorEnd(QueryDesc *queryDesc)
}
PG_END_TRY();
 
+   /* Cleanup the global resource reference for spi/function resource 
inheritance. */
+   if ( Gp_role == GP_ROLE_DISPATCH ) {
+   AutoFreeResource(queryDesc->resource);
+   queryDesc->resource = NULL;
+   }
+
/*
 * If normal termination, let each operator clean itself up.
 * Otherwise don't risk it... an error might have left some



[06/12] incubator-hawq git commit: HAWQ-619. Change gpextract to hawqextract for InputFormat unit test.

2016-04-05 Thread bhuvnesh2703
HAWQ-619. Change gpextract to hawqextract for InputFormat unit test.


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/53a9f76f
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/53a9f76f
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/53a9f76f

Branch: refs/heads/HAWQ-617
Commit: 53a9f76f04d3f56684f3c0e3cb3dd17ba1ae1997
Parents: 92d5acd
Author: Chunling Wang 
Authored: Fri Apr 1 17:00:07 2016 +0800
Committer: Lili Ma 
Committed: Tue Apr 5 13:46:19 2016 +0800

--
 .../java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/53a9f76f/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java
--
diff --git 
a/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java
 
b/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java
index ca226c8..bcec9f1 100644
--- 
a/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java
+++ 
b/contrib/hawq-hadoop/hawq-mapreduce-tool/src/test/java/com/pivotal/hawq/mapreduce/SimpleTableLocalTester.java
@@ -78,7 +78,7 @@ public class SimpleTableLocalTester extends SimpleTableTester 
{
 
// extract metadata
MRFormatTestUtils.runShellCommand(
-   String.format("gpextract -d %s 
-o %s %s",
+   String.format("hawqextract -d 
%s -o %s %s",
  
TEST_DB_NAME, metadataFile.getPath(), tableName));
 
// copy data files to local in order to run 
local mapreduce job



[04/12] incubator-hawq git commit: HAWQ-623. Resource quota request does not follow latest resource quota calculating logic

2016-04-05 Thread bhuvnesh2703
HAWQ-623. Resource quota request does not follow latest resource quota 
calculating logic


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/cf2744dc
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/cf2744dc
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/cf2744dc

Branch: refs/heads/HAWQ-617
Commit: cf2744dc5fdefb47a4fbcd6f8fa807861822ebbd
Parents: 3828d91
Author: YI JIN 
Authored: Tue Apr 5 12:03:19 2016 +1000
Committer: YI JIN 
Committed: Tue Apr 5 12:03:19 2016 +1000

--
 src/backend/resourcemanager/requesthandler.c  | 73 +++---
 src/backend/resourcemanager/resqueuemanager.c | 40 +---
 2 files changed, 69 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/cf2744dc/src/backend/resourcemanager/requesthandler.c
--
diff --git a/src/backend/resourcemanager/requesthandler.c 
b/src/backend/resourcemanager/requesthandler.c
index 4e24848..b89d23a 100644
--- a/src/backend/resourcemanager/requesthandler.c
+++ b/src/backend/resourcemanager/requesthandler.c
@@ -793,18 +793,52 @@ bool handleRMRequestAcquireResourceQuota(void **arg)
int  res= FUNC_RETURN_OK;
ConnectionTrack  conntrack  = (ConnectionTrack)(*arg);
bool exist  = false;
+   uint64_t reqtime= gettime_microsec();
+   /* If we run in YARN mode, we expect that we should try to get at least 
one
+* available segment, and this requires at least once global resource 
manager
+* cluster report returned.
+*/
+   if ( reqtime - DRMGlobalInstance->ResourceManagerStartTime <=
+rm_nocluster_timeout * 100LL &&
+PRESPOOL->RBClusterReportCounter == 0 )
+   {
+   elog(DEBUG3, "Resource manager defers the resource request.");
+   return false;
+   }
+
+   /*
+* If resource queue has no concrete capacity set yet, no need to handle
+* the request.
+*/
+   if ( PQUEMGR->RootTrack->QueueInfo->ClusterMemoryMB <= 0 )
+   {
+   elog(DEBUG3, "Resource manager defers the resource request 
because the "
+"resource queues have no valid 
resource capacities yet.");
+   return false;
+   }
+
+   Assert(PRESPOOL->SlavesHostCount > 0);
+   int rejectlimit = ceil(PRESPOOL->SlavesHostCount * 
rm_rejectrequest_nseg_limit);
+   int unavailcount = PRESPOOL->SlavesHostCount - PRESPOOL->AvailNodeCount;
+   if ( unavailcount > rejectlimit )
+   {
+   snprintf(errorbuf, sizeof(errorbuf),
+"%d of %d segments %s unavailable, exceeds 
%.1f%% defined in "
+"GUC hawq_rm_rejectrequest_nseg_limit. The 
resource quota "
+"request is rejected.",
+unavailcount,
+PRESPOOL->SlavesHostCount,
+unavailcount == 1 ? "is" : "are",
+rm_rejectrequest_nseg_limit*100.0);
+   elog(WARNING, "ConnID %d. %s", conntrack->ConnID, errorbuf);
+   res = RESOURCEPOOL_TOO_MANY_UAVAILABLE_HOST;
+   goto errorexit;
+   }
 
RPCRequestHeadAcquireResourceQuotaFromRMByOID request =
SMBUFF_HEAD(RPCRequestHeadAcquireResourceQuotaFromRMByOID,
&(conntrack->MessageBuff));
 
-   elog(LOG, "ConnID %d. User "INT64_FORMAT" acquires query resource quota 
"
- "with expected %d vseg (MIN %d).",
- conntrack->ConnID,
- request->UseridOid,
- request->MaxSegCountFix,
- request->MinSegCountFix);
-
/* Get user name from oid. */
UserInfo reguser = getUserByUserOID(request->UseridOid, &exist);
if ( !exist )
@@ -826,6 +860,31 @@ bool handleRMRequestAcquireResourceQuota(void **arg)
conntrack->MinSegCountFixed = request->MinSegCountFix;
conntrack->VSegLimitPerSeg  = request->VSegLimitPerSeg;
conntrack->VSegLimit= request->VSegLimit;
+   conntrack->StatVSegMemoryMB = request->StatVSegMemoryMB;
+   conntrack->StatNVSeg= request->StatNVSeg;
+
+   elog(RMLOG, "ConnID %d. User "INT64_FORMAT" acquires query resource 
quota. "
+   "Expect %d vseg (MIN %d). "
+   "Each segment has maximum %d vseg. "
+   "Query has maximum

[09/12] incubator-hawq git commit: HAWQ-605. Some segment capacity changes are not logged out and when segment goes to up status, the capacity is not adjusted

2016-04-05 Thread bhuvnesh2703
HAWQ-605. Some segment capacity changes are not logged out and when segment 
goes to up status, the capacity is not adjusted


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/1fbdf8b9
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/1fbdf8b9
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/1fbdf8b9

Branch: refs/heads/HAWQ-617
Commit: 1fbdf8b9a17d9fec449f8ed920498e7952cdd4fd
Parents: fa2600c
Author: YI JIN 
Authored: Wed Apr 6 12:41:12 2016 +1000
Committer: YI JIN 
Committed: Wed Apr 6 12:41:12 2016 +1000

--
 src/backend/resourcemanager/resourcepool.c | 47 ++---
 1 file changed, 26 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/1fbdf8b9/src/backend/resourcemanager/resourcepool.c
--
diff --git a/src/backend/resourcemanager/resourcepool.c 
b/src/backend/resourcemanager/resourcepool.c
index 0f06f22..1be9079 100644
--- a/src/backend/resourcemanager/resourcepool.c
+++ b/src/backend/resourcemanager/resourcepool.c
@@ -1366,8 +1366,8 @@ int addHAWQSegWithSegStat(SegStat segstat, bool 
*capstatchanged)
  
segresource->Stat->FTSTotalCore);
}
 
-   elog(LOG, "Resource manager sets physical host '%s' 
capacity change "
- "from FTS (%d MB,%d CORE) to FTS (%d 
MB,%d CORE)",
+   elog(LOG, "Resource manager finds host %s segment 
resource capacity "
+ "changed from (%d MB,%d CORE) to (%d 
MB,%d CORE)",
  GET_SEGRESOURCE_HOSTNAME(segresource),
  oldftsmem,
  oldftscore,
@@ -1575,13 +1575,14 @@ int updateHAWQSegWithGRMSegStat( SegStat segstat)
  
segres->Stat->GRMTotalCore);
}
 
-   elog(LOG, "Resource manager finds host %s capacity changed from 
"
-   "GRM (%d MB, %d CORE) to GRM (%d MB, %d 
CORE)",
-   GET_SEGRESOURCE_HOSTNAME(segres),
-   oldgrmmem,
-   oldgrmcore,
-   segres->Stat->GRMTotalMemoryMB,
-   segres->Stat->GRMTotalCore);
+   elog(LOG, "Resource manager finds host %s global resource 
manager "
+ "node resource capacity changed from (%d MB, 
%d CORE) to "
+ "GRM (%d MB, %d CORE)",
+ GET_SEGRESOURCE_HOSTNAME(segres),
+ oldgrmmem,
+ oldgrmcore,
+ segres->Stat->GRMTotalMemoryMB,
+ segres->Stat->GRMTotalCore);
}
 
segres->Stat->GRMHandled = true;
@@ -1789,6 +1790,7 @@ int setSegResHAWQAvailability( SegResource segres, 
uint8_t newstatus)
}
else if (newstatus == RESOURCE_SEG_STATUS_AVAILABLE)
{
+   adjustSegmentCapacity(segres);
addResourceBundleData(&(PRESPOOL->FTSTotal),
  
segres->Stat->FTSTotalMemoryMB,
  
segres->Stat->FTSTotalCore);
@@ -4722,12 +4724,13 @@ void adjustSegmentStatFTSCapacity(SegStat segstat)
if ( oldmemorymb != segstat->FTSTotalMemoryMB ||
 oldcore != segstat->FTSTotalCore )
{
-   elog(RMLOG, "Resource manager adjusts segment FTS capacity from 
"
-   "(%d MB, %d CORE) to (%d MB, %d CORE)",
-   oldmemorymb,
-   oldcore,
-   segstat->FTSTotalMemoryMB,
-   segstat->FTSTotalCore);
+   elog(LOG, "Resource manager adjusts segment %s original 
resource "
+ "capacity from (%d MB, %d CORE) to (%d MB, %d 
CORE)",
+ GET_SEGINFO_HOSTNAME(&(segstat->Info)),
+ oldmemorymb,
+ oldcore,
+ segstat->FTSTotalMemoryMB,
+ segstat->FTSTotalCore);
}
 }
 
@@ -4746,12 +4749,14 @@ void adjustSegmentStatGRMCapacity(SegStat segstat)
if ( oldmemorymb != segsta

[11/12] incubator-hawq git commit: Merge branch 'master' into HAWQ-617

2016-04-05 Thread bhuvnesh2703
Merge branch 'master' into HAWQ-617


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/4395a03f
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/4395a03f
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/4395a03f

Branch: refs/heads/HAWQ-617
Commit: 4395a03fd3b6467d8188eb546373db788f842b9d
Parents: c5c7d8f 0aa6952
Author: Bhuvnesh Chaudhary 
Authored: Tue Apr 5 20:41:58 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Tue Apr 5 20:41:58 2016 -0700

--
 .../hawq/mapreduce/SimpleTableLocalTester.java  |   6 +-
 .../pxf/plugins/hive/HiveMetadataFetcher.java   |  33 +-
 .../plugins/hive/HiveMetadataFetcherTest.java   | 112 ++-
 src/backend/cdb/cdbmetadatacache_process.c  |  10 +-
 src/backend/cdb/cdbquerycontextdispatching.c|  66 ++-
 src/backend/executor/execMain.c |  12 +-
 .../communication/rmcomm_RM2RMSEG.c |   8 +-
 src/backend/resourcemanager/requesthandler.c|  73 ++--
 src/backend/resourcemanager/resourcepool.c  |  47 
 src/backend/resourcemanager/resqueuemanager.c   |  40 +--
 src/backend/storage/file/fd.c   |   2 +
 src/bin/gpfusion/gpbridgeapi.c  |  18 ++-
 src/include/storage/fd.h|   2 +
 13 files changed, 336 insertions(+), 93 deletions(-)
--




[05/12] incubator-hawq git commit: HAWQ-625. Fix the build failure on MAC: function referenced before declaration

2016-04-05 Thread bhuvnesh2703
HAWQ-625. Fix the build failure on MAC: function referenced before declaration


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/92d5acd3
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/92d5acd3
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/92d5acd3

Branch: refs/heads/HAWQ-617
Commit: 92d5acd341ee67c1d926882ee091ec7bde8e9ad2
Parents: cf2744d
Author: Lili Ma 
Authored: Tue Apr 5 11:17:41 2016 +0800
Committer: Lili Ma 
Committed: Tue Apr 5 11:21:19 2016 +0800

--
 src/backend/cdb/cdbquerycontextdispatching.c | 5 -
 src/bin/gpfusion/gpbridgeapi.c   | 1 +
 2 files changed, 5 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/92d5acd3/src/backend/cdb/cdbquerycontextdispatching.c
--
diff --git a/src/backend/cdb/cdbquerycontextdispatching.c 
b/src/backend/cdb/cdbquerycontextdispatching.c
index b3cf083..c6647cd 100644
--- a/src/backend/cdb/cdbquerycontextdispatching.c
+++ b/src/backend/cdb/cdbquerycontextdispatching.c
@@ -172,6 +172,9 @@ static void
 prepareDispatchedCatalogExternalTable(QueryContextInfo *cxt, Oid relid);
 
 static void
+prepareDfsAddressForDispatch(QueryContextInfo* cxt);
+
+static void
 addOperatorToDispatchContext(QueryContextInfo *ctx, Oid opOid);
 
 static void
@@ -3022,7 +3025,7 @@ GetResultRelSegFileInfos(Oid relid, List *segnomaps, List 
*existing_seginfomaps)
  * a secure filesystem.
  */
 
-void
+static void
 prepareDfsAddressForDispatch(QueryContextInfo* cxt)
 {
if (!enable_secure_filesystem)

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/92d5acd3/src/bin/gpfusion/gpbridgeapi.c
--
diff --git a/src/bin/gpfusion/gpbridgeapi.c b/src/bin/gpfusion/gpbridgeapi.c
index 63fc565..bd304ac 100644
--- a/src/bin/gpfusion/gpbridgeapi.c
+++ b/src/bin/gpfusion/gpbridgeapi.c
@@ -62,6 +62,7 @@ void  build_uri_for_write(gphadoop_context* context, 
PxfServer* rest_server);
 size_t fill_buffer(gphadoop_context* context, char* start, size_t size);
 void   add_delegation_token(PxfInputData *inputData);
 void   free_token_resources(PxfInputData *inputData);
+void free_dfs_address();
 
 /* Custom protocol entry point for read
  */



[08/12] incubator-hawq git commit: HAWQ-627. Fix core dump in metadata cache

2016-04-05 Thread bhuvnesh2703
HAWQ-627. Fix core dump in metadata cache


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/fa2600cf
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/fa2600cf
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/fa2600cf

Branch: refs/heads/HAWQ-617
Commit: fa2600cfbf24d193a63f25e3286e238278348c6d
Parents: ec8e791
Author: ivan 
Authored: Tue Apr 5 14:15:14 2016 +0800
Committer: ivan 
Committed: Tue Apr 5 14:15:14 2016 +0800

--
 src/backend/cdb/cdbmetadatacache_process.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/fa2600cf/src/backend/cdb/cdbmetadatacache_process.c
--
diff --git a/src/backend/cdb/cdbmetadatacache_process.c 
b/src/backend/cdb/cdbmetadatacache_process.c
index 6d1292c..7553d09 100644
--- a/src/backend/cdb/cdbmetadatacache_process.c
+++ b/src/backend/cdb/cdbmetadatacache_process.c
@@ -326,14 +326,14 @@ GenerateMetadataCacheLRUList()
 {
 HASH_SEQ_STATUS hstat;
 MetadataCacheEntry *entry;
-long cache_entry_num = hash_get_num_entries(MetadataCache);
+long cache_entry_num = 0;
 
 LWLockAcquire(MetadataCacheLock, LW_EXCLUSIVE);
+cache_entry_num = hash_get_num_entries(MetadataCache);
 
-if (MetadataCacheLRUList)
-{
-list_free_deep(MetadataCacheLRUList);
-MetadataCacheLRUList = NULL;
+if (cache_entry_num == 0) {
+LWLockRelease(MetadataCacheLock);
+return;
 }
 
 MetadataCacheEntry** entry_vector = 
(MetadataCacheEntry**)palloc(sizeof(MetadataCacheEntry*) * cache_entry_num);   



[46/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/bc0904ab
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/bc0904ab
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/bc0904ab

Branch: refs/heads/HAWQ-617
Commit: bc0904ab02bb3e8c3e3596ce139b3ea6b52e2685
Parents: b0ddaea
Author: xunzhang 
Authored: Fri Apr 1 16:59:27 2016 +0800
Committer: xunzhang 
Committed: Fri Apr 1 16:59:27 2016 +0800

--
 LICENSE |   18 +-
 .../CMake/CMakeTestCompileNestedException.cpp   |   10 +
 .../CMake/CMakeTestCompileSteadyClock.cpp   |7 +
 .../libhdfs3/CMake/CMakeTestCompileStrerror.cpp |   10 +
 depends/libhdfs3/CMake/CodeCoverage.cmake   |   48 +
 depends/libhdfs3/CMake/FindBoost.cmake  | 1162 
 depends/libhdfs3/CMake/FindGSasl.cmake  |   26 +
 depends/libhdfs3/CMake/FindKERBEROS.cmake   |   23 +
 depends/libhdfs3/CMake/FindLibUUID.cmake|   23 +
 depends/libhdfs3/CMake/Functions.cmake  |   46 +
 depends/libhdfs3/CMake/Options.cmake|  169 +
 depends/libhdfs3/CMake/Platform.cmake   |   33 +
 depends/libhdfs3/CMakeLists.txt |   62 +
 depends/libhdfs3/README.md  |   86 +
 depends/libhdfs3/bootstrap  |  122 +
 depends/libhdfs3/debian/.gitignore  |   15 +
 depends/libhdfs3/debian/build.sh|  100 +
 depends/libhdfs3/debian/changelog.in|5 +
 depends/libhdfs3/debian/compat  |1 +
 depends/libhdfs3/debian/control |   31 +
 depends/libhdfs3/debian/copyright   |   23 +
 depends/libhdfs3/debian/libhdfs3-dev.dirs   |2 +
 depends/libhdfs3/debian/libhdfs3-dev.install|4 +
 .../debian/libhdfs3-dev.lintian-overrides   |1 +
 depends/libhdfs3/debian/libhdfs3.dirs   |1 +
 depends/libhdfs3/debian/libhdfs3.install|1 +
 .../libhdfs3/debian/libhdfs3.lintian-overrides  |1 +
 depends/libhdfs3/debian/rules   |   24 +
 depends/libhdfs3/debian/source/format   |1 +
 depends/libhdfs3/gmock/CMakeLists.txt   |   31 +
 depends/libhdfs3/gmock/COPYING  |   28 +
 .../gmock/include/gmock/gmock-actions.h | 1078 
 .../gmock/include/gmock/gmock-cardinalities.h   |  147 +
 .../include/gmock/gmock-generated-actions.h | 2415 
 .../gmock/gmock-generated-function-mockers.h|  991 
 .../include/gmock/gmock-generated-matchers.h| 2190 
 .../include/gmock/gmock-generated-nice-strict.h |  397 ++
 .../gmock/include/gmock/gmock-matchers.h| 3986 ++
 .../gmock/include/gmock/gmock-more-actions.h|  233 +
 .../gmock/include/gmock/gmock-more-matchers.h   |   58 +
 .../gmock/include/gmock/gmock-spec-builders.h   | 1791 ++
 depends/libhdfs3/gmock/include/gmock/gmock.h|   94 +
 .../internal/gmock-generated-internal-utils.h   |  279 +
 .../gmock/internal/gmock-internal-utils.h   |  498 ++
 .../gmock/include/gmock/internal/gmock-port.h   |   78 +
 .../libhdfs3/gmock/src/gmock-cardinalities.cc   |  156 +
 .../libhdfs3/gmock/src/gmock-internal-utils.cc  |  174 +
 depends/libhdfs3/gmock/src/gmock-matchers.cc|  498 ++
 .../libhdfs3/gmock/src/gmock-spec-builders.cc   |  813 +++
 depends/libhdfs3/gmock/src/gmock.cc |  182 +
 depends/libhdfs3/gtest/CMakeLists.txt   |   28 +
 .../gtest/include/gtest/gtest-death-test.h  |  294 +
 .../gtest/include/gtest/gtest-message.h |  250 +
 .../gtest/include/gtest/gtest-param-test.h  | 1421 +
 .../gtest/include/gtest/gtest-printers.h|  855 +++
 .../libhdfs3/gtest/include/gtest/gtest-spi.h|  232 +
 .../gtest/include/gtest/gtest-test-part.h   |  179 +
 .../gtest/include/gtest/gtest-typed-test.h  |  259 +
 depends/libhdfs3/gtest/include/gtest/gtest.h| 2291 
 .../gtest/include/gtest/gtest_pred_impl.h   |  358 ++
 .../libhdfs3/gtest/include/gtest/gtest_prod.h   |   58 +
 .../gtest/internal/gtest-death-test-internal.h  |  319 ++
 .../include/gtest/internal/gtest-filepath.h |  206 +
 .../include/gtest/internal/gtest-internal.h | 1158 
 .../include/gtest/internal/gtest-linked_ptr.h   |  233 +
 .../gtest/internal/gtest-param-util-generated.h | 5143 ++
 .../include/gtest/internal/gtest-param-util.h   |  619 +++
 .../gtest/include/gtest/internal/gtest-port.h   | 1947 +++
 .../gtest/include/gtest/internal/gtest-string.h |  167 +
 .../gtest/include/gtest/internal/gtest-tuple.h  | 1012 
 .../include/gtest/internal/gtest-type-util.h| 3331 
 depends/libhdfs3/gtest/src/gtest-death-test.cc  | 1344 +
 depends/libhdfs3/gtest/src/gtest-filepath.cc|  3

[40/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gmock/include/gmock/gmock-matchers.h
--
diff --git a/depends/libhdfs3/gmock/include/gmock/gmock-matchers.h 
b/depends/libhdfs3/gmock/include/gmock/gmock-matchers.h
new file mode 100644
index 000..44055c9
--- /dev/null
+++ b/depends/libhdfs3/gmock/include/gmock/gmock-matchers.h
@@ -0,0 +1,3986 @@
+// Copyright 2007, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Google Mock - a framework for writing C++ mock classes.
+//
+// This file implements some commonly used argument matchers.  More
+// matchers can be defined by the user implementing the
+// MatcherInterface interface if necessary.
+
+#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_MATCHERS_H_
+#define GMOCK_INCLUDE_GMOCK_GMOCK_MATCHERS_H_
+
+#include 
+#include 
+#include 
+#include 
+#include   // NOLINT
+#include 
+#include 
+#include 
+#include 
+
+#include "gmock/internal/gmock-internal-utils.h"
+#include "gmock/internal/gmock-port.h"
+#include "gtest/gtest.h"
+
+#if GTEST_LANG_CXX11
+#include   // NOLINT -- must be after gtest.h
+#endif
+
+namespace testing {
+
+// To implement a matcher Foo for type T, define:
+//   1. a class FooMatcherImpl that implements the
+//  MatcherInterface interface, and
+//   2. a factory function that creates a Matcher object from a
+//  FooMatcherImpl*.
+//
+// The two-level delegation design makes it possible to allow a user
+// to write "v" instead of "Eq(v)" where a Matcher is expected, which
+// is impossible if we pass matchers by pointers.  It also eases
+// ownership management as Matcher objects can now be copied like
+// plain values.
+
+// MatchResultListener is an abstract class.  Its << operator can be
+// used by a matcher to explain why a value matches or doesn't match.
+//
+// TODO(w...@google.com): add method
+//   bool InterestedInWhy(bool result) const;
+// to indicate whether the listener is interested in why the match
+// result is 'result'.
+class MatchResultListener {
+ public:
+  // Creates a listener object with the given underlying ostream.  The
+  // listener does not own the ostream, and does not dereference it
+  // in the constructor or destructor.
+  explicit MatchResultListener(::std::ostream* os) : stream_(os) {}
+  virtual ~MatchResultListener() = 0;  // Makes this class abstract.
+
+  // Streams x to the underlying ostream; does nothing if the ostream
+  // is NULL.
+  template 
+  MatchResultListener& operator<<(const T& x) {
+if (stream_ != NULL)
+  *stream_ << x;
+return *this;
+  }
+
+  // Returns the underlying ostream.
+  ::std::ostream* stream() { return stream_; }
+
+  // Returns true iff the listener is interested in an explanation of
+  // the match result.  A matcher's MatchAndExplain() method can use
+  // this information to avoid generating the explanation when no one
+  // intends to hear it.
+  bool IsInterested() const { return stream_ != NULL; }
+
+ private:
+  ::std::ostream* const stream_;
+
+  GTEST_DISALLOW_COPY_AND_ASSIGN_(MatchResultListener);
+};
+
+inline MatchResultListener::~MatchResultListener() {
+}
+
+// An instance of a subclass of this knows how to describe itself as a
+// matcher.
+class MatcherDescriberInterface {
+ public:
+  virtual ~MatcherDescriberInterface() {}
+
+  // Describes this matcher to an ostream.  The function should print
+  // a v

[38/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gmock/include/gmock/internal/gmock-generated-internal-utils.h
--
diff --git a/depends/libhdfs3/gmock/include/gmock/internal/gmock-generated-internal-utils.h b/depends/libhdfs3/gmock/include/gmock/internal/gmock-generated-internal-utils.h
new file mode 100644
index 0000000..0225845
--- /dev/null
+++ b/depends/libhdfs3/gmock/include/gmock/internal/gmock-generated-internal-utils.h
@@ -0,0 +1,279 @@
+// This file was GENERATED by command:
+// pump.py gmock-generated-internal-utils.h.pump
+// DO NOT EDIT BY HAND!!!
+
+// Copyright 2007, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Google Mock - a framework for writing C++ mock classes.
+//
+// This file contains template meta-programming utility classes needed
+// for implementing Google Mock.
+
+#ifndef GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_GENERATED_INTERNAL_UTILS_H_
+#define GMOCK_INCLUDE_GMOCK_INTERNAL_GMOCK_GENERATED_INTERNAL_UTILS_H_
+
+#include "gmock/internal/gmock-port.h"
+
+namespace testing {
+
+template <typename T>
+class Matcher;
+
+namespace internal {
+
+// An IgnoredValue object can be implicitly constructed from ANY value.
+// This is used in implementing the IgnoreResult(a) action.
+class IgnoredValue {
+ public:
+  // This constructor template allows any value to be implicitly
+  // converted to IgnoredValue.  The object has no data member and
+  // doesn't try to remember anything about the argument.  We
+  // deliberately omit the 'explicit' keyword in order to allow the
+  // conversion to be implicit.
+  template <typename T>
+  IgnoredValue(const T& /* ignored */) {}  // NOLINT(runtime/explicit)
+};
+
+// MatcherTuple<T>::type is a tuple type where each field is a Matcher
+// for the corresponding field in tuple type T.
+template <typename Tuple>
+struct MatcherTuple;
+
+template <>
+struct MatcherTuple< ::std::tr1::tuple<> > {
+  typedef ::std::tr1::tuple< > type;
+};
+
+template <typename A1>
+struct MatcherTuple< ::std::tr1::tuple<A1> > {
+  typedef ::std::tr1::tuple<Matcher<A1> > type;
+};
+
+template <typename A1, typename A2>
+struct MatcherTuple< ::std::tr1::tuple<A1, A2> > {
+  typedef ::std::tr1::tuple<Matcher<A1>, Matcher<A2> > type;
+};
+
+template <typename A1, typename A2, typename A3>
+struct MatcherTuple< ::std::tr1::tuple<A1, A2, A3> > {
+  typedef ::std::tr1::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3> > type;
+};
+
+template <typename A1, typename A2, typename A3, typename A4>
+struct MatcherTuple< ::std::tr1::tuple<A1, A2, A3, A4> > {
+  typedef ::std::tr1::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>,
+      Matcher<A4> > type;
+};
+
+template <typename A1, typename A2, typename A3, typename A4, typename A5>
+struct MatcherTuple< ::std::tr1::tuple<A1, A2, A3, A4, A5> > {
+  typedef ::std::tr1::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
+      Matcher<A5> > type;
+};
+
+template <typename A1, typename A2, typename A3, typename A4, typename A5,
+    typename A6>
+struct MatcherTuple< ::std::tr1::tuple<A1, A2, A3, A4, A5, A6> > {
+  typedef ::std::tr1::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
+      Matcher<A5>, Matcher<A6> > type;
+};
+
+template <typename A1, typename A2, typename A3, typename A4, typename A5,
+    typename A6, typename A7>
+struct MatcherTuple< ::std::tr1::tuple<A1, A2, A3, A4, A5, A6, A7> > {
+  typedef ::std::tr1::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
+      Matcher<A5>, Matcher<A6>, Matcher<A7> > type;
+};
+
+template <typename A1, typename A2, typename A3, typename A4, typename A5,
+    typename A6, typename A7, typename A8>
+struct MatcherTuple< ::std::tr1::tuple<A1, A2, A3, A4, A5, A6, A7, A8> > {
+  typedef ::std::tr1::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
+      Matcher<A5>, Matcher<A6>, Matcher<A7>, Matcher<A8> > type;
+};
+
+template <typename A1, typename A2, typename A3, typename A4, typename A5,
+    typename A6, typename A7, typename A8, typename A9>
+struct MatcherTuple< ::std::tr1::tuple<A1, A2, A3, A4, A5, A6, A7, A8, A9> > {
+  typedef ::std::tr1::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
+      Matcher<A5>, Matcher<A6>, Matcher<A7>, Matcher<A8>, Matcher<A9> > type;
+};
+
+template <typename A1, typename A2, typename A3, typename A4, typename A5,
+    typename A6, typename A7, typename A8, typename A9, typename A10>
+struct MatcherTuple< ::std::tr1::tuple<A1, A2, A3, A4, A5, A6, A7, A8, A9,
+    A10> > {
+  typedef ::std::tr1::tuple<Matcher<A1>, Matcher<A2>, Matcher<A3>, Matcher<A4>,
+      Matcher<A5>, Matcher<A6>, Matcher<A7>, Matcher<A8>, Matcher<A9>,
+      Matcher<A10> > type;
+};

[09/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/server/Namenode.h
--
diff --git a/depends/libhdfs3/src/server/Namenode.h b/depends/libhdfs3/src/server/Namenode.h
new file mode 100644
index 0000000..e0beef4
--- /dev/null
+++ b/depends/libhdfs3/src/server/Namenode.h
@@ -0,0 +1,823 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_SERVER_NAMENODE_H_
+#define _HDFS_LIBHDFS3_SERVER_NAMENODE_H_
+
+#include "client/FileStatus.h"
+#include "client/Permission.h"
+#include "DatanodeInfo.h"
+#include "Exception.h"
+#include "ExtendedBlock.h"
+#include "LocatedBlock.h"
+#include "LocatedBlocks.h"
+#include "rpc/RpcAuth.h"
+#include "rpc/RpcCall.h"
+#include "rpc/RpcClient.h"
+#include "rpc/RpcConfig.h"
+#include "rpc/RpcProtocolInfo.h"
+#include "rpc/RpcServerInfo.h"
+#include "SessionConfig.h"
+
+#include <vector>
+
+namespace Hdfs {
+namespace Internal {
+
+class Namenode {
+public:
+/**
+ * Destroy the namenode.
+ */
+virtual ~Namenode() {
+}
+
+/**
+ * Get locations of the blocks of the specified file within the specified range.
+ * DataNode locations for each block are sorted by
+ * the proximity to the client.
+ * 
+ * Return {@link LocatedBlocks} which contains
+ * file length, blocks and their locations.
+ * DataNode locations for each block are sorted by
+ * the distance to the client's address.
+ * 
+ * The client will then have to contact
+ * one of the indicated DataNodes to obtain the actual data.
+ *
+ * @param src file name
+ * @param offset range start offset
+ * @param length range length
+ * @param file length and array of blocks with their locations
+ * @param lbs output the returned blocks
+ *
+ * @throw AccessControlException If access is denied
+ * @throw FileNotFoundException If file src does not exist
+ * @throw UnresolvedLinkException If src contains a symlink
+ * @throw HdfsIOException If an I/O error occurred
+ */
+//Idempotent
+virtual void getBlockLocations(const std::string & src, int64_t offset,
+   int64_t length, LocatedBlocks & lbs) /* throw (AccessControlException,
+ FileNotFoundException, UnresolvedLinkException,
+ HdfsIOException) */ = 0;
+
+/**
+ * Create a new file entry in the namespace.
+ * 
+ * This will create an empty file specified by the source path.
+ * The path should reflect a full path originated at the root.
+ * The name-node does not have a notion of "current" directory for a client.
+ * 
+ * Once created, the file is visible and available for read to other clients.
+ * Although, other clients cannot {@link #delete(const std::string &, bool)}, re-create or
+ * {@link #rename(const std::string &, const std::string &)} it until the file is completed
+ * or explicitly as a result of lease expiration.
+ * 
+ * Blocks have a maximum size.  Clients that intend to create
+ * multi-block files must also use
+ * {@link #addBlock(const std::string &, const std::string &, ExtendedBlock, DatanodeInfo[])}
+ *
+ * @param src path of the file being created.
+ * @param masked masked permission.
+ * @param clientName name of the current client.
+ * @param flag indicates whether the file should be
+ * overwritten if it already exists or create if it does not exist or append.
+ * @param createParent create missing parent directory if true
+ * @param replication block replication factor.
+ * @param blockSize maximum block size.
+ *
+ * @throw AccessControlException If access 

[33/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/gtest_pred_impl.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/gtest_pred_impl.h b/depends/libhdfs3/gtest/include/gtest/gtest_pred_impl.h
new file mode 100644
index 0000000..30ae712
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/gtest_pred_impl.h
@@ -0,0 +1,358 @@
+// Copyright 2006, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// This file is AUTOMATICALLY GENERATED on 10/31/2011 by command
+// 'gen_gtest_pred_impl.py 5'.  DO NOT EDIT BY HAND!
+//
+// Implements a family of generic predicate assertion macros.
+
+#ifndef GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
+#define GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_
+
+// Makes sure this header is not included before gtest.h.
+#ifndef GTEST_INCLUDE_GTEST_GTEST_H_
+# error Do not include gtest_pred_impl.h directly.  Include gtest.h instead.
+#endif  // GTEST_INCLUDE_GTEST_GTEST_H_
+
+// This header implements a family of generic predicate assertion
+// macros:
+//
+//   ASSERT_PRED_FORMAT1(pred_format, v1)
+//   ASSERT_PRED_FORMAT2(pred_format, v1, v2)
+//   ...
+//
+// where pred_format is a function or functor that takes n (in the
+// case of ASSERT_PRED_FORMATn) values and their source expression
+// text, and returns a testing::AssertionResult.  See the definition
+// of ASSERT_EQ in gtest.h for an example.
+//
+// If you don't care about formatting, you can use the more
+// restrictive version:
+//
+//   ASSERT_PRED1(pred, v1)
+//   ASSERT_PRED2(pred, v1, v2)
+//   ...
+//
+// where pred is an n-ary function or functor that returns bool,
+// and the values v1, v2, ..., must support the << operator for
+// streaming to std::ostream.
+//
+// We also define the EXPECT_* variations.
+//
+// For now we only support predicates whose arity is at most 5.
+// Please email googletestframew...@googlegroups.com if you need
+// support for higher arities.
+
+// GTEST_ASSERT_ is the basic statement to which all of the assertions
+// in this file reduce.  Don't use this in your code.
+
+#define GTEST_ASSERT_(expression, on_failure) \
+  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
+  if (const ::testing::AssertionResult gtest_ar = (expression)) \
+; \
+  else \
+on_failure(gtest_ar.failure_message())
+
+
+// Helper function for implementing {EXPECT|ASSERT}_PRED1.  Don't use
+// this in your code.
+template <typename Pred, typename T1>
+AssertionResult AssertPred1Helper(const char* pred_text,
+  const char* e1,
+  Pred pred,
+  const T1& v1) {
+  if (pred(v1)) return AssertionSuccess();
+
+  return AssertionFailure() << pred_text << "("
+<< e1 << ") evaluates to false, where"
+<< "\n" << e1 << " evaluates to " << v1;
+}
+
+// Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT1.
+// Don't use this in your code.
+#define GTEST_PRED_FORMAT1_(pred_format, v1, on_failure)\
+  GTEST_ASSERT_(pred_format(#v1, v1), \
+on_failure)
+
+// Internal macro for implementing {EXPECT|ASSERT}_PRED1.  Don't use
+// this in your code.
+#define GTEST_PRED1_(pred, v1, on_failure)\
+  GTEST_ASSERT_(::testing::AssertPred1Helper(#pred, \
+ #v1, \
+ pred, \
+  

[23/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/src/gtest.cc
--
diff --git a/depends/libhdfs3/gtest/src/gtest.cc b/depends/libhdfs3/gtest/src/gtest.cc
new file mode 100644
index 0000000..6de53dd
--- /dev/null
+++ b/depends/libhdfs3/gtest/src/gtest.cc
@@ -0,0 +1,5015 @@
+// Copyright 2005, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+//
+// The Google C++ Testing Framework (Google Test)
+
+#include "gtest/gtest.h"
+#include "gtest/gtest-spi.h"
+
+#include <ctype.h>
+#include <math.h>
+#include <stdarg.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <time.h>
+#include <wchar.h>
+#include <wctype.h>
+
+#include <algorithm>
+#include <iomanip>
+#include <limits>
+#include <ostream>  // NOLINT
+#include <sstream>
+#include <vector>
+
+#if GTEST_OS_LINUX
+
+// TODO(ken...@google.com): Use autoconf to detect availability of
+// gettimeofday().
+# define GTEST_HAS_GETTIMEOFDAY_ 1
+
+# include <fcntl.h>  // NOLINT
+# include <limits.h>  // NOLINT
+# include <sched.h>  // NOLINT
+// Declares vsnprintf().  This header is not available on Windows.
+# include <strings.h>  // NOLINT
+# include <sys/mman.h>  // NOLINT
+# include <sys/time.h>  // NOLINT
+# include <unistd.h>  // NOLINT
+# include <string>
+
+#elif GTEST_OS_SYMBIAN
+# define GTEST_HAS_GETTIMEOFDAY_ 1
+# include <sys/time.h>  // NOLINT
+
+#elif GTEST_OS_ZOS
+# define GTEST_HAS_GETTIMEOFDAY_ 1
+# include <sys/time.h>  // NOLINT
+
+// On z/OS we additionally need strings.h for strcasecmp.
+# include <strings.h>  // NOLINT
+
+#elif GTEST_OS_WINDOWS_MOBILE  // We are on Windows CE.
+
+# include <windows.h>  // NOLINT
+
+#elif GTEST_OS_WINDOWS  // We are on Windows proper.
+
+# include <io.h>  // NOLINT
+# include <sys/timeb.h>  // NOLINT
+# include <sys/types.h>  // NOLINT
+# include <sys/stat.h>  // NOLINT
+
+# if GTEST_OS_WINDOWS_MINGW
+// MinGW has gettimeofday() but not _ftime64().
+// TODO(ken...@google.com): Use autoconf to detect availability of
+//   gettimeofday().
+// TODO(ken...@google.com): There are other ways to get the time on
+//   Windows, like GetTickCount() or GetSystemTimeAsFileTime().  MinGW
+//   supports these.  consider using them instead.
+#  define GTEST_HAS_GETTIMEOFDAY_ 1
+#  include <sys/time.h>  // NOLINT
+# endif  // GTEST_OS_WINDOWS_MINGW
+
+// cpplint thinks that the header is already included, so we want to
+// silence it.
+# include <windows.h>  // NOLINT
+
+#else
+
+// Assume other platforms have gettimeofday().
+// TODO(ken...@google.com): Use autoconf to detect availability of
+//   gettimeofday().
+# define GTEST_HAS_GETTIMEOFDAY_ 1
+
+// cpplint thinks that the header is already included, so we want to
+// silence it.
+# include <sys/time.h>  // NOLINT
+# include <unistd.h>  // NOLINT
+
+#endif  // GTEST_OS_LINUX
+
+#if GTEST_HAS_EXCEPTIONS
+# include <stdexcept>
+#endif
+
+#if GTEST_CAN_STREAM_RESULTS_
+# include <arpa/inet.h>  // NOLINT
+# include <netdb.h>  // NOLINT
+#endif
+
+// Indicates that this translation unit is part of Google Test's
+// implementation.  It must come before gtest-internal-inl.h is
+// included, or there will be a compiler error.  This trick is to
+// prevent a user from accidentally including gtest-internal-inl.h in
+// his code.
+#define GTEST_IMPLEMENTATION_ 1
+#include "src/gtest-internal-inl.h"
+#undef GTEST_IMPLEMENTATION_
+
+#if GTEST_OS_WINDOWS
+# define vsnprintf _vsnprintf
+#endif  // GTEST_OS_WINDOWS
+
+namespace testing {
+
+using internal::CountIf;
+using internal::ForEach;
+using internal::GetElementOr;
+using internal::Shuffle;
+
+// Constants.
+
+// A test whose test case name or test name matches this filter is
+// disabled and not run.
+static const char kDisableTestFilter[] = "DISABLED_*:*/DIS

[17/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/client/Pipeline.cpp
--
diff --git a/depends/libhdfs3/src/client/Pipeline.cpp b/depends/libhdfs3/src/client/Pipeline.cpp
new file mode 100644
index 0000000..3396e31
--- /dev/null
+++ b/depends/libhdfs3/src/client/Pipeline.cpp
@@ -0,0 +1,792 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "DateTime.h"
+#include "Pipeline.h"
+#include "Logger.h"
+#include "Exception.h"
+#include "ExceptionInternal.h"
+#include "OutputStreamInter.h"
+#include "FileSystemInter.h"
+#include "DataTransferProtocolSender.h"
+#include "datatransfer.pb.h"
+
+#include 
+
+namespace Hdfs {
+namespace Internal {
+
+PipelineImpl::PipelineImpl(bool append, const char * path, const SessionConfig & conf,
+   shared_ptr<FileSystemInter> filesystem, int checksumType, int chunkSize,
+   int replication, int64_t bytesSent, PacketPool & packetPool, shared_ptr<LocatedBlock> lastBlock) :
+checksumType(checksumType), chunkSize(chunkSize), errorIndex(-1), replication(replication), bytesAcked(
+bytesSent), bytesSent(bytesSent), packetPool(packetPool), filesystem(filesystem), lastBlock(lastBlock), path(
+path) {
+canAddDatanode = conf.canAddDatanode();
+blockWriteRetry = conf.getBlockWriteRetry();
+connectTimeout = conf.getOutputConnTimeout();
+readTimeout = conf.getOutputReadTimeout();
+writeTimeout = conf.getOutputWriteTimeout();
+clientName = filesystem->getClientName();
+
+if (append) {
+LOG(DEBUG2, "create pipeline for file %s to append to %s at position %" PRId64,
+path, lastBlock->toString().c_str(), lastBlock->getNumBytes());
+stage = PIPELINE_SETUP_APPEND;
+assert(lastBlock);
+nodes = lastBlock->getLocations();
+storageIDs = lastBlock->getStorageIDs();
+buildForAppendOrRecovery(false);
+stage = DATA_STREAMING;
+} else {
+LOG(DEBUG2, "create pipeline for file %s to write to a new block", path);
+stage = PIPELINE_SETUP_CREATE;
+buildForNewBlock();
+stage = DATA_STREAMING;
+}
+}
+
+int PipelineImpl::findNewDatanode(const std::vector<DatanodeInfo> & original) {
+if (nodes.size() != original.size() + 1) {
+THROW(HdfsIOException, "Failed to acquire a datanode for block %s from namenode.",
+  lastBlock->toString().c_str());
+}
+
+for (size_t i = 0; i < nodes.size(); i++) {
+size_t j = 0;
+
+for (; j < original.size() && !(nodes[i] == original[j]); j++)
+;
+
+if (j == original.size()) {
+return i;
+}
+}
+
+THROW(HdfsIOException, "Cannot add new datanode for block %s.", lastBlock->toString().c_str());
+}
+
+void PipelineImpl::transfer(const ExtendedBlock & blk, const DatanodeInfo & src,
+const std::vector<DatanodeInfo> & targets, const Token & token) {
+shared_ptr<Socket> so(new TcpSocketImpl);
+shared_ptr<BufferedSocketReader> in(new BufferedSocketReaderImpl(*so));
+so->connect(src.getIpAddr().c_str(), src.getXferPort(), connectTimeout);
+DataTransferProtocolSender sender(*so, writeTimeout, src.formatAddress());
+sender.transferBlock(blk, token, clientName.c_str(), targets);
+int size;
+size = in->readVarint32(readTimeout);
+std::vector buf(size);
+in->readFully(&buf[0], size, readTimeout);
+BlockOpResponseProto resp;
+
+if (!resp.ParseFromArray(&buf[0], size)) {
+THROW(HdfsIOException, "cannot parse datanode response from %s for block %s.",
+  src.formatAddress().c_str(), lastBlock->toString().c_str());
+}
+
+if (Status::DT_PROTO_SUCCESS != resp.status()) {
+  

[29/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/internal/gtest-port.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/internal/gtest-port.h b/depends/libhdfs3/gtest/include/gtest/internal/gtest-port.h
new file mode 100644
index 0000000..dc4fe0c
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/internal/gtest-port.h
@@ -0,0 +1,1947 @@
+// Copyright 2005, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: w...@google.com (Zhanyong Wan)
+//
+// Low-level types and utilities for porting Google Test to various
+// platforms.  They are subject to change without notice.  DO NOT USE
+// THEM IN USER CODE.
+//
+// This file is fundamental to Google Test.  All other Google Test source
+// files are expected to #include this.  Therefore, it cannot #include
+// any other Google Test header.
+
+#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
+#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_
+
+// The user can define the following macros in the build script to
+// control Google Test's behavior.  If the user doesn't define a macro
+// in this list, Google Test will define it.
+//
+//   GTEST_HAS_CLONE  - Define it to 1/0 to indicate that clone(2)
+//  is/isn't available.
+//   GTEST_HAS_EXCEPTIONS - Define it to 1/0 to indicate that exceptions
+//  are enabled.
+//   GTEST_HAS_GLOBAL_STRING  - Define it to 1/0 to indicate that ::string
+//  is/isn't available (some systems define
+//  ::string, which is different to std::string).
+//   GTEST_HAS_GLOBAL_WSTRING - Define it to 1/0 to indicate that ::wstring
+//  is/isn't available (some systems define
+//  ::wstring, which is different to std::wstring).
+//   GTEST_HAS_POSIX_RE   - Define it to 1/0 to indicate that POSIX regular
+//  expressions are/aren't available.
+//   GTEST_HAS_PTHREAD- Define it to 1/0 to indicate that <pthread.h>
+//  is/isn't available.
+//   GTEST_HAS_RTTI   - Define it to 1/0 to indicate that RTTI is/isn't
+//  enabled.
+//   GTEST_HAS_STD_WSTRING- Define it to 1/0 to indicate that
+//  std::wstring does/doesn't work (Google Test can
+//  be used where std::wstring is unavailable).
+//   GTEST_HAS_TR1_TUPLE  - Define it to 1/0 to indicate tr1::tuple
+//  is/isn't available.
+//   GTEST_HAS_SEH- Define it to 1/0 to indicate whether the
+//  compiler supports Microsoft's "Structured
+//  Exception Handling".
+//   GTEST_HAS_STREAM_REDIRECTION
+//- Define it to 1/0 to indicate whether the
+//  platform supports I/O stream redirection using
+//  dup() and dup2().
+//   GTEST_USE_OWN_TR1_TUPLE  - Define it to 1/0 to indicate whether Google
+//  Test's own tr1 tuple implementation should be
+//  used.  Unused when the user sets
+//  GTEST_HAS_TR1_TUPLE to 0.
+//   GTEST_LANG_CXX11 - Define it to 1/0 to indicate tha

[19/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/client/InputStream.cpp
--
diff --git a/depends/libhdfs3/src/client/InputStream.cpp b/depends/libhdfs3/src/client/InputStream.cpp
new file mode 100644
index 0000000..6cbf46d
--- /dev/null
+++ b/depends/libhdfs3/src/client/InputStream.cpp
@@ -0,0 +1,107 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "FileSystemImpl.h"
+#include "FileSystemInter.h"
+#include "InputStream.h"
+#include "InputStreamImpl.h"
+#include "InputStreamInter.h"
+
+using namespace Hdfs::Internal;
+
+namespace Hdfs {
+
+InputStream::InputStream() {
+impl = new Internal::InputStreamImpl;
+}
+
+InputStream::~InputStream() {
+delete impl;
+}
+
+/**
+ * Open a file to read
+ * @param fs hdfs file system.
+ * @param path the file to be read.
+ * @param verifyChecksum verify the checksum.
+ */
+void InputStream::open(FileSystem & fs, const char * path,
+   bool verifyChecksum) {
+if (!fs.impl) {
+THROW(HdfsIOException, "FileSystem: not connected.");
+}
+
+impl->open(fs.impl->filesystem, path, verifyChecksum);
+}
+
+/**
+ * To read data from hdfs.
+ * @param buf the buffer to be filled.
+ * @param size buffer size.
+ * @return the number of bytes filled in the buffer; it may be less than size.
+ */
+int32_t InputStream::read(char * buf, int32_t size) {
+return impl->read(buf, size);
+}
+
+/**
+ * To read data from hdfs, blocking until the given number of bytes has been read.
+ * @param buf the buffer to be filled.
+ * @param size the number of bytes to be read.
+ */
+void InputStream::readFully(char * buf, int64_t size) {
+impl->readFully(buf, size);
+}
+
+int64_t InputStream::available() {
+return impl->available();
+}
+
+/**
+ * To move the file pointer to the given position.
+ * @param pos the given position.
+ */
+void InputStream::seek(int64_t pos) {
+impl->seek(pos);
+}
+
+/**
+ * To get the current file pointer position.
+ * @return the position of the current file pointer.
+ */
+int64_t InputStream::tell() {
+return impl->tell();
+}
+
+/**
+ * Close the stream.
+ */
+void InputStream::close() {
+impl->close();
+}
+
+}

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/client/InputStream.h
--
diff --git a/depends/libhdfs3/src/client/InputStream.h b/depends/libhdfs3/src/client/InputStream.h
new file mode 100644
index 0000000..73f45ca
--- /dev/null
+++ b/depends/libhdfs3/src/client/InputStream.h
@@ -0,0 +1,99 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY 

[20/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/client/FileSystemImpl.h
--
diff --git a/depends/libhdfs3/src/client/FileSystemImpl.h 
b/depends/libhdfs3/src/client/FileSystemImpl.h
new file mode 100644
index 000..1459c5c
--- /dev/null
+++ b/depends/libhdfs3/src/client/FileSystemImpl.h
@@ -0,0 +1,507 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_CLIENT_FILESYSTEMIMPL_H_
+#define _HDFS_LIBHDFS3_CLIENT_FILESYSTEMIMPL_H_
+
+#include "BlockLocation.h"
+#include "DirectoryIterator.h"
+#include "FileStatus.h"
+#include "FileSystemInter.h"
+#include "FileSystemKey.h"
+#include "FileSystemStats.h"
+#include "Permission.h"
+#include "server/Namenode.h"
+#include "SessionConfig.h"
+#include "Unordered.h"
+#include "UserInfo.h"
+#include "XmlConfig.h"
+#ifdef MOCK
+#include "NamenodeStub.h"
+#endif
+
+#include <string>
+#include <vector>
+
+namespace Hdfs {
+namespace Internal {
+
+class InputStreamInter;
+class OutputStreamInter;
+
+class FileSystemImpl: public FileSystemInter {
+public:
+/**
+ * Construct a FileSystemImpl instance.
+ * @param key a key which can uniquely identify a FileSystemImpl instance.
+ * @param c a configuration object used to initialize the instance.
+ */
+FileSystemImpl(const FileSystemKey & key, const Config & c);
+
+/**
+ * Destroy a FileSystemImpl instance.
+ */
+~FileSystemImpl();
+
+/**
+ * Format the path to an absolute canonicalized path.
+ * @param path target path to be handled.
+ * @return an absolute canonicalized path.
+ */
+const std::string getStandardPath(const char * path);
+
+/**
+ * To get the client unique ID.
+ * @return return the client unique ID.
+ */
+const char * getClientName();
+
+/**
+ * Connect to hdfs
+ */
+void connect();
+
+/**
+ * disconnect from hdfs
+ */
+void disconnect();
+
+/**
+ * To get default number of replication.
+ * @return the default number of replication.
+ */
+int getDefaultReplication() const;
+
+/**
+ * To get the default block size.
+ * @return the default block size.
+ */
+int64_t getDefaultBlockSize() const;
+
+/**
+ * To get the home directory.
+ * @return home directory.
+ */
+std::string getHomeDirectory() const;
+
+/**
+ * To delete a file or directory.
+ * @param path the path to be deleted.
+ * @param recursive if path is a directory, delete the contents recursively.
+ * @return return true if success.
+ */
+bool deletePath(const char * path, bool recursive);
+
+/**
+ * To create a directory with given permission.
+ * @param path the directory path which is to be created.
+ * @param permission directory permission.
+ * @return return true if success.
+ */
+bool mkdir(const char * path, const Permission & permission);
+
+/**
+ * To create a directory with given permission.
+ * If the parent path does not exist, create it.
+ * @param path the directory path which is to be created.
+ * @param permission directory permission.
+ * @return return true if success.
+ */
+bool mkdirs(const char * path, const Permission & permission);
+
+/**
+ * To get path information.
+ * @param path the path which information is to be returned.
+ * @return the path information.
+ */
+FileStatus getFileStatus(const char * path);
+
+/**
+ * Return an array containing hostnames, offset and size of
+ * portions of the given file.
+ *
+ * This call is most helpful with DFS, where it returns
+ * hostnames of machines that contain

[04/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/test/function/TestCInterface.cpp
--
diff --git a/depends/libhdfs3/test/function/TestCInterface.cpp 
b/depends/libhdfs3/test/function/TestCInterface.cpp
new file mode 100644
index 000..62df32a
--- /dev/null
+++ b/depends/libhdfs3/test/function/TestCInterface.cpp
@@ -0,0 +1,1593 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "gtest/gtest.h"
+#include "client/hdfs.h"
+#include "Logger.h"
+#include "SessionConfig.h"
+#include "TestUtil.h"
+#include "XmlConfig.h"
+
+#include 
+#include 
+#include 
+#include 
+
+using namespace Hdfs::Internal;
+
+#ifndef TEST_HDFS_PREFIX
+#define TEST_HDFS_PREFIX "./"
+#endif
+
+#define BASE_DIR TEST_HDFS_PREFIX"/testCInterface/"
+
+using Hdfs::CheckBuffer;
+
+static bool ReadFully(hdfsFS fs, hdfsFile file, char * buffer, size_t length) {
+int todo = length, rc;
+
+while (todo > 0) {
+rc = hdfsRead(fs, file, buffer + (length - todo), todo);
+
+if (rc <= 0) {
+return false;
+}
+
+todo = todo - rc;
+}
+
+return true;
+}
+
+static bool CreateFile(hdfsFS fs, const char * path, int64_t blockSize,
+   int64_t fileSize) {
+hdfsFile out;
+size_t offset = 0;
+int64_t todo = fileSize, batch;
+std::vector<char> buffer(32 * 1024);
+int rc = -1;
+
+do {
+if (NULL == (out = hdfsOpenFile(fs, path, O_WRONLY, 0, 0, blockSize))) {
+break;
+}
+
+while (todo > 0) {
+batch = todo < static_cast<int64_t>(buffer.size()) ?
+todo : buffer.size();
+Hdfs::FillBuffer(&buffer[0], batch, offset);
+
+if (0 > (rc = hdfsWrite(fs, out, &buffer[0], batch))) {
+break;
+}
+
+todo -= rc;
+offset += rc;
+}
+
+rc = hdfsCloseFile(fs, out);
+} while (0);
+
+return rc >= 0;
+}
+
+bool CheckFileContent(hdfsFS fs, const char * path, int64_t len, size_t offset) {
+hdfsFile in = hdfsOpenFile(fs, path, O_RDONLY, 0, 0, 0);
+
+if (in == NULL) {
+return false;
+}
+
+std::vector<char> buff(1 * 1024 * 1024);
+int rc, todo = len, batch;
+
+while (todo > 0) {
+batch = todo < static_cast<int>(buff.size()) ? todo : buff.size();
+batch = hdfsRead(fs, in, &buff[0], batch);
+
+if (batch <= 0) {
+hdfsCloseFile(fs, in);
+return false;
+}
+
+todo = todo - batch;
+rc = Hdfs::CheckBuffer(&buff[0], batch, offset);
+offset += batch;
+
+if (!rc) {
+hdfsCloseFile(fs, in);
+return false;
+}
+}
+
+hdfsCloseFile(fs, in);
+return true;
+}
+
+int64_t GetFileLength(hdfsFS fs, const char * path) {
+int retval;
+hdfsFileInfo * info = hdfsGetPathInfo(fs, path);
+
+if (!info) {
+return -1;
+}
+
+retval = info->mSize;
+hdfsFreeFileInfo(info, 1);
+return retval;
+}
+
+TEST(TestCInterfaceConnect, TestConnect_InvalidInput) {
+hdfsFS fs = NULL;
+//test invalid input
+fs = hdfsConnect(NULL, 50070);
+EXPECT_TRUE(fs == NULL && EINVAL == errno);
+fs = hdfsConnect("hadoop.apache.org", 80);
+EXPECT_TRUE(fs == NULL && EIO == errno);
+fs = hdfsConnect("localhost", 22);
+EXPECT_TRUE(fs == NULL && EIO == errno);
+}
+
+static void ParseHdfsUri(const std::string & uri, std::string & host, int & port) {
+std::string str = uri;
+char * p = &str[0], *q;
+
+if (0 == strncasecmp(p, "hdfs://", strlen("hdfs://"))) {
+p += strlen("hdfs://");
+}
+
+q = strchr(p, ':');
+
+if (NULL == q) {
+port = 0;
+} el

[06/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/test/data/checksum2.in
--
diff --git a/depends/libhdfs3/test/data/checksum2.in 
b/depends/libhdfs3/test/data/checksum2.in
new file mode 100644
index 000..f93c73d
--- /dev/null
+++ b/depends/libhdfs3/test/data/checksum2.in
@@ -0,0 +1,512 @@
+1963114415
+a
+ab
+abc
+abcd
+abcde
+abcdef
+abcdefg
+abcdefgh
+abcdefghi
+abcdefghij
+abcdefghijk
+abcdefghijkl
+abcdefghijklm
+abcdefghijklmn
+abcdefghijklmno
+abcdefghijklmnop
+abcdefghijklmnopq
+abcdefghijklmnopqr
+abcdefghijklmnopqrs
+abcdefghijklmnopqrst
+abcdefghijklmnopqrstu
+abcdefghijklmnopqrstuv
+abcdefghijklmnopqrstuvw
+abcdefghijklmnopqrstuvwx
+abcdefghijklmnopqrstuvwxy
+abcdefghijklmnopqrstuvwxyz
+abcdefghijklmnopqrstuvwxyza
+abcdefghijklmnopqrstuvwxyzab
+abcdefghijklmnopqrstuvwxyzabc
+abcdefghijklmnopqrstuvwxyzabcd
+abcdefghijklmnopqrstuvwxyzabcde
+abcdefghijklmnopqrstuvwxyzabcdef
+abcdefghijklmnopqrstuvwxyzabcdefg
+abcdefghijklmnopqrstuvwxyzabcdefgh
+abcdefghijklmnopqrstuvwxyzabcdefghi
+abcdefghijklmnopqrstuvwxyzabcdefghij
+abcdefghijklmnopqrstuvwxyzabcdefghijk
+abcdefghijklmnopqrstuvwxyzabcdefghijkl
+abcdefghijklmnopqrstuvwxyzabcdefghijklm
+abcdefghijklmnopqrstuvwxyzabcdefghijklmn
+abcdefghijklmnopqrstuvwxyzabcdefghijklmno
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnop
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopq
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqr
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrs
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrst
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstu
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuv
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvw
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxy
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyza
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcd
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcde
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefg
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefgh
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghi
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghij
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijkl
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklm
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmn
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmno
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnop
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopq
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqr
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrs
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrst
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstu
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuv
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvw
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxy
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyza
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcd
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcde
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefg
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefgh
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghi
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghij
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijkl
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklm
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmn
+abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmno
+abcdefghijklmnopqrstuvwxyzabcdef

[02/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/test/unit/TestRpcChannel.cpp
--
diff --git a/depends/libhdfs3/test/unit/TestRpcChannel.cpp 
b/depends/libhdfs3/test/unit/TestRpcChannel.cpp
new file mode 100644
index 000..1d6f8cc
--- /dev/null
+++ b/depends/libhdfs3/test/unit/TestRpcChannel.cpp
@@ -0,0 +1,443 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "gtest/gtest.h"
+#include "gmock/gmock.h"
+
+#include "Atomic.h"
+#include "ClientNamenodeProtocol.pb.h"
+#include "Exception.h"
+#include "ExceptionInternal.h"
+#include "IpcConnectionContext.pb.h"
+#include "Logger.h"
+#include "MockBufferedSocketReader.h"
+#include "MockRpcClient.h"
+#include "MockRpcRemoteCall.h"
+#include "MockSocket.h"
+#include "rpc/RpcChannel.h"
+#include "RpcHeader.pb.h"
+#include "TestUtil.h"
+#include "Thread.h"
+#include "UnitTestUtils.h"
+
+#include 
+
+using namespace Hdfs;
+using namespace Hdfs::Mock;
+using namespace Hdfs::Internal;
+using namespace testing;
+using namespace ::google::protobuf;
+using namespace ::google::protobuf::io;
+
+RpcChannelKey BuildKey() {
+RpcAuth auth;
+RpcProtocolInfo protocol(0, "test", "kind");
+RpcServerInfo server("unknown token server", "unknown server", "unknown port");
+Config conf;
+SessionConfig session(conf);
+RpcConfig rpcConf(session);
+return RpcChannelKey(auth, protocol, server, rpcConf);
+}
+
+static RpcConfig & GetConfig(RpcChannelKey & key) {
+return const_cast<RpcConfig &>(key.getConf());
+}
+
+static uint32_t GetCallId(uint32_t start) {
+static Hdfs::Internal::atomic<uint32_t> id(start);
+return id++;
+}
+
+static void BuildResponse(uint32_t id,
+  RpcResponseHeaderProto::RpcStatusProto status, const char * errClass,
+  const char * errMsg, ::google::protobuf::Message * resp,
+  std::vector<char> & output) {
+WriteBuffer buffer;
+RpcResponseHeaderProto respHeader;
+respHeader.set_callid(id);
+respHeader.set_status(status);
+
+if (errClass != NULL) {
+respHeader.set_exceptionclassname(errClass);
+}
+
+if (errMsg != NULL) {
+respHeader.set_errormsg(errMsg);
+}
+
+int size = respHeader.ByteSize();
+int totalSize = size + CodedOutputStream::VarintSize32(size);
+
+if (resp != NULL) {
+size = resp->ByteSize();
+totalSize += size + CodedOutputStream::VarintSize32(size);
+}
+
+buffer.writeBigEndian(totalSize);
+size = respHeader.ByteSize();
+buffer.writeVarint32(size);
+respHeader.SerializeToArray(buffer.alloc(size), size);
+
+if (resp != NULL) {
+size = resp->ByteSize();
+buffer.writeVarint32(size);
+resp->SerializeToArray(buffer.alloc(size), size);
+}
+
+output.resize(buffer.getDataSize(0));
+memcpy(&output[0], buffer.getBuffer(0), buffer.getDataSize(0));
+}
+
+TEST(TestRpcChannel, TestConnectFailure) {
+RpcChannelKey key = BuildKey();
+MockRpcClient client;
+EXPECT_CALL(client, getClientId()).Times(AnyNumber()).WillRepeatedly(Return(""));
+MockSocket * sock = new MockSocket();
+MockBufferedSocketReader * in = new MockBufferedSocketReader();
+GetConfig(key).setMaxRetryOnConnect(2);
+EXPECT_CALL(*sock, connect(An(), An(), 
_)).Times(2).WillOnce(
+InvokeWithoutArgs(bind(&InvokeThrow,
+   "test expected connect failed",
+   static_cast(NULL.WillOnce(
+   InvokeWithoutArgs(
+   
bind(&InvokeThrow,
+"test expected connect timeout",
+

[45/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/CMakeLists.txt
--
diff --git a/depends/libhdfs3/CMakeLists.txt b/depends/libhdfs3/CMakeLists.txt
new file mode 100644
index 000..baa036a
--- /dev/null
+++ b/depends/libhdfs3/CMakeLists.txt
@@ -0,0 +1,62 @@
+CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
+
+PROJECT(libhdfs3)
+
+SET(CMAKE_VERBOSE_MAKEFILE ON CACHE STRING "Verbose build." FORCE)
+
+IF(${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_BINARY_DIR})
+MESSAGE(FATAL_ERROR "cannot build the project in the source directory! Out-of-source build is enforced!")
+ENDIF()
+
+SET(CMAKE_MODULE_PATH "${CMAKE_CURRENT_SOURCE_DIR}/CMake" ${CMAKE_MODULE_PATH})
+SET(DOXYFILE_PATH ${CMAKE_SOURCE_DIR}/docs)
+
+INCLUDE(Platform)
+INCLUDE(Functions)
+INCLUDE(Options)
+
+FIND_PACKAGE(LibXml2 REQUIRED)
+FIND_PACKAGE(Protobuf REQUIRED)
+FIND_PACKAGE(KERBEROS REQUIRED)
+FIND_PACKAGE(GSasl REQUIRED)
+
+IF(OS_LINUX)
+FIND_PACKAGE(LibUUID REQUIRED)
+ENDIF(OS_LINUX)
+
+ADD_SUBDIRECTORY(mock)
+ADD_SUBDIRECTORY(src)
+ADD_SUBDIRECTORY(gtest)
+ADD_SUBDIRECTORY(gmock)
+ADD_SUBDIRECTORY(test)
+
+CONFIGURE_FILE(src/libhdfs3.pc.in ${CMAKE_SOURCE_DIR}/src/libhdfs3.pc @ONLY)
+CONFIGURE_FILE(debian/changelog.in ${CMAKE_SOURCE_DIR}/debian/changelog @ONLY)
+
+ADD_CUSTOM_TARGET(debian-package
+   COMMAND dpkg-buildpackage -us -uc -b
+   WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+   COMMENT "Create debian package..."
+)
+
+ADD_CUSTOM_TARGET(rpm-package
+   COMMAND rpmbuild -bb --define "_topdir ${CMAKE_SOURCE_DIR}/rpms" --define "version ${libhdfs3_VERSION_STRING}" ${CMAKE_SOURCE_DIR}/rpms/libhdfs3.spec
+   WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+   COMMENT "Create rpm package..."
+)
+
+ADD_CUSTOM_TARGET(doc
+   COMMAND doxygen ${CMAKE_BINARY_DIR}/src/doxyfile
+   WORKING_DIRECTORY ${DOXYFILE_PATH}
+   COMMENT "Generate documents..."
+)
+
+ADD_CUSTOM_TARGET(style
+   COMMAND astyle --style=attach --indent=spaces=4 --indent-preprocessor --break-blocks --pad-oper --pad-header --unpad-paren --delete-empty-lines --suffix=none --align-pointer=middle --lineend=linux --indent-col1-comments ${libhdfs3_SOURCES}
+   COMMAND astyle --style=attach --indent=spaces=4 --indent-preprocessor --break-blocks --pad-oper --pad-header --unpad-paren --delete-empty-lines --suffix=none --align-pointer=middle --lineend=linux --indent-col1-comments ${unit_SOURCES}
+   COMMAND astyle --style=attach --indent=spaces=4 --indent-preprocessor --break-blocks --pad-oper --pad-header --unpad-paren --delete-empty-lines --suffix=none --align-pointer=middle --lineend=linux --indent-col1-comments ${function_SOURCES}
+   COMMAND astyle --style=attach --indent=spaces=4 --indent-preprocessor --break-blocks --pad-oper --pad-header --unpad-paren --delete-empty-lines --suffix=none --align-pointer=middle --lineend=linux --indent-col1-comments ${secure_SOURCES}
+   WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}
+   COMMENT "format code style..."
+)
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/README.md
--
diff --git a/depends/libhdfs3/README.md b/depends/libhdfs3/README.md
new file mode 100644
index 000..7f9bbc3
--- /dev/null
+++ b/depends/libhdfs3/README.md
@@ -0,0 +1,86 @@
+libhdfs3
+
+
+**A Native C/C++ HDFS Client**
+
+## Description
+
+The Hadoop Distributed File System (HDFS) is a distributed file system 
designed to run on commodity hardware. HDFS is highly fault-tolerant and is 
designed to be deployed on low-cost hardware. HDFS provides high throughput 
access to application data and is suitable for applications that have large 
data sets.
+
+HDFS is implemented in the Java language and additionally provides a JNI-based C language library, *libhdfs*. To use libhdfs, users must deploy the HDFS jars on every machine, which adds operational complexity for non-Java clients that just want to integrate with HDFS.
+
+**Libhdfs3**, designed as an alternative implementation of libhdfs, is built directly on the native Hadoop RPC protocol and the HDFS data transfer protocol. It avoids the drawbacks of JNI and has a lightweight code base with a small memory footprint. In addition, it is easy to use and deploy.
+
+Libhdfs3 is developed by [Pivotal](http://www.pivotal.io/) and used in HAWQ, 
which is a massive parallel database engine in [Pivotal Hadoop 
Distribution](http://www.pivotal.io/big-data/pivotal-hd).
+
+
+## Installation
+
+### Requirement
+
+To build libhdfs3, the following libraries are needed.
+
+cmake (2.8+)http://www.cmake.org/
+boost (tested on 1.53+) http://www.boost.org/
+google protobuf http://code.google.com/p/protobuf/
+libxml2 http://www.xmlsoft.org/
+kerberos

[21/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/client/DataTransferProtocolSender.cpp
--
diff --git a/depends/libhdfs3/src/client/DataTransferProtocolSender.cpp 
b/depends/libhdfs3/src/client/DataTransferProtocolSender.cpp
new file mode 100644
index 000..c0d0e10
--- /dev/null
+++ b/depends/libhdfs3/src/client/DataTransferProtocolSender.cpp
@@ -0,0 +1,203 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "client/Token.h"
+#include "datatransfer.pb.h"
+#include "DataTransferProtocolSender.h"
+#include "Exception.h"
+#include "ExceptionInternal.h"
+#include "hdfs.pb.h"
+#include "Security.pb.h"
+#include "WriteBuffer.h"
+
+using namespace google::protobuf;
+
+namespace Hdfs {
+namespace Internal {
+
+static inline void Send(Socket & sock, DataTransferOp op, Message * msg,
+int writeTimeout) {
+WriteBuffer buffer;
+buffer.writeBigEndian(static_cast<int16_t>(DATA_TRANSFER_VERSION));
+buffer.write(static_cast<char>(op));
+int msgSize = msg->ByteSize();
+buffer.writeVarint32(msgSize);
+char * b = buffer.alloc(msgSize);
+
+if (!msg->SerializeToArray(b, msgSize)) {
+THROW(HdfsIOException,
+  "DataTransferProtocolSender cannot serialize header to send buffer.");
+}
+
+sock.writeFully(buffer.getBuffer(0), buffer.getDataSize(0), writeTimeout);
+}
+
+static inline void BuildBaseHeader(const ExtendedBlock & block,
+   const Token & accessToken, BaseHeaderProto * header) {
+ExtendedBlockProto * eb = header->mutable_block();
+TokenProto * token = header->mutable_token();
+eb->set_blockid(block.getBlockId());
+eb->set_generationstamp(block.getGenerationStamp());
+eb->set_numbytes(block.getNumBytes());
+eb->set_poolid(block.getPoolId());
+token->set_identifier(accessToken.getIdentifier());
+token->set_password(accessToken.getPassword());
+token->set_kind(accessToken.getKind());
+token->set_service(accessToken.getService());
+}
+
+static inline void BuildClientHeader(const ExtendedBlock & block,
+ const Token & accessToken, const char * clientName,
+ ClientOperationHeaderProto * header) {
+header->set_clientname(clientName);
+BuildBaseHeader(block, accessToken, header->mutable_baseheader());
+}
+
+static inline void BuildNodeInfo(const DatanodeInfo & node,
+ DatanodeInfoProto * info) {
+DatanodeIDProto * id = info->mutable_id();
+id->set_hostname(node.getHostName());
+id->set_infoport(node.getInfoPort());
+id->set_ipaddr(node.getIpAddr());
+id->set_ipcport(node.getIpcPort());
+id->set_datanodeuuid(node.getDatanodeId());
+id->set_xferport(node.getXferPort());
+info->set_location(node.getLocation());
+}
+
+static inline void BuildNodesInfo(const std::vector<DatanodeInfo> & nodes,
+  RepeatedPtrField<DatanodeInfoProto> * infos) {
+for (std::size_t i = 0; i < nodes.size(); ++i) {
+BuildNodeInfo(nodes[i], infos->Add());
+}
+}
+
+DataTransferProtocolSender::DataTransferProtocolSender(Socket & sock,
+int writeTimeout, const std::string & datanodeAddr) :
+sock(sock), writeTimeout(writeTimeout), datanode(datanodeAddr) {
+}
+
+DataTransferProtocolSender::~DataTransferProtocolSender() {
+}
+
+void DataTransferProtocolSender::readBlock(const ExtendedBlock & blk,
+const Token & blockToken, const char * clientName,
+int64_t blockOffset, int64_t length) {
+try {
+OpReadBlockProto op;
+op.set_len(length);
+op.set_offset(blockOffset);
+BuildClientHeader(blk, blockTok

[18/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/client/OutputStreamImpl.cpp
--
diff --git a/depends/libhdfs3/src/client/OutputStreamImpl.cpp 
b/depends/libhdfs3/src/client/OutputStreamImpl.cpp
new file mode 100644
index 000..0c9f813
--- /dev/null
+++ b/depends/libhdfs3/src/client/OutputStreamImpl.cpp
@@ -0,0 +1,642 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "Atomic.h"
+#include "DateTime.h"
+#include "Exception.h"
+#include "ExceptionInternal.h"
+#include "FileSystemInter.h"
+#include "HWCrc32c.h"
+#include "LeaseRenewer.h"
+#include "Logger.h"
+#include "OutputStream.h"
+#include "OutputStreamImpl.h"
+#include "Packet.h"
+#include "PacketHeader.h"
+#include "SWCrc32c.h"
+
+#include 
+#include 
+
+namespace Hdfs {
+namespace Internal {
+
+OutputStreamImpl::OutputStreamImpl() :
+/*heartBeatStop(true),*/ closed(true), isAppend(false), syncBlock(false), 
checksumSize(0), chunkSize(
+0), chunksPerPacket(0), closeTimeout(0), heartBeatInterval(0), 
packetSize(0), position(
+0), replication(0), blockSize(0), bytesWritten(0), cursor(0), 
lastFlushed(
+0), nextSeqNo(0), packets(0) {
+if (HWCrc32c::available()) {
+checksum = shared_ptr < Checksum > (new HWCrc32c());
+} else {
+checksum = shared_ptr < Checksum > (new SWCrc32c());
+}
+
+checksumSize = sizeof(int32_t);
+lastSend = steady_clock::now();
+#ifdef MOCK
+stub = NULL;
+#endif
+}
+
+OutputStreamImpl::~OutputStreamImpl() {
+if (!closed) {
+try {
+close();
+} catch (...) {
+}
+}
+}
+
+void OutputStreamImpl::checkStatus() {
+if (closed) {
+THROW(HdfsIOException, "OutputStreamImpl: stream is not opened.");
+}
+
+lock_guard < mutex > lock(mut);
+
+if (lastError != exception_ptr()) {
+rethrow_exception(lastError);
+}
+}
+
+void OutputStreamImpl::setError(const exception_ptr & error) {
+try {
+lock_guard < mutex > lock(mut);
+lastError = error;
+} catch (...) {
+}
+}
+
+/**
+ * To create or append a file.
+ * @param fs hdfs file system.
+ * @param path the file path.
+ * @param flag creation flag, can be Create, Append or Create|Overwrite.
+ * @param permission create a new file with given permission.
+ * @param createParent if the parent does not exist, create it.
+ * @param replication create a file with given number of replication.
+ * @param blockSize  create a file with given block size.
+ */
+void OutputStreamImpl::open(shared_ptr<FileSystemInter> fs, const char * path, int flag,
+const Permission & permission, bool createParent, 
int replication,
+int64_t blockSize) {
+if (NULL == path || 0 == strlen(path) || replication < 0 || blockSize < 0) {
+THROW(InvalidParameter, "Invalid parameter.");
+}
+
+if (!(flag == Create || flag == (Create | SyncBlock) || flag == Overwrite
+|| flag == (Overwrite | SyncBlock) || flag == Append
+|| flag == (Append | SyncBlock) || flag == (Create | Overwrite)
+|| flag == (Create | Overwrite | SyncBlock)
+|| flag == (Create | Append)
+|| flag == (Create | Append | SyncBlock))) {
+THROW(InvalidParameter, "Invalid flag.");
+}
+
+try {
+openInternal(fs, path, flag, permission, createParent, replication,
+ blockSize);
+} catch (...) {
+reset();
+throw;
+}
+}
+
+void OutputStreamImpl::computePacketChunkSize() {
+int chunkSizeWithChecksum = chunkSize + checksumSize;
+static const int packetHeaderSize = PacketHeader::GetPkgHeaderSize();
+

[44/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gmock/include/gmock/gmock-generated-actions.h
--
diff --git a/depends/libhdfs3/gmock/include/gmock/gmock-generated-actions.h b/depends/libhdfs3/gmock/include/gmock/gmock-generated-actions.h
new file mode 100644
index 000..2327393
--- /dev/null
+++ b/depends/libhdfs3/gmock/include/gmock/gmock-generated-actions.h
@@ -0,0 +1,2415 @@
+// This file was GENERATED by a script.  DO NOT EDIT BY HAND!!!
+
+// Copyright 2007, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Google Mock - a framework for writing C++ mock classes.
+//
+// This file implements some commonly used variadic actions.
+
+#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_ACTIONS_H_
+#define GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_ACTIONS_H_
+
+#include "gmock/gmock-actions.h"
+#include "gmock/internal/gmock-port.h"
+
+namespace testing {
+namespace internal {
+
+// InvokeHelper<F> knows how to unpack an N-tuple and invoke an N-ary
+// function or method with the unpacked values, where F is a function
+// type that takes N arguments.
+template <typename Result, typename ArgumentTuple>
+class InvokeHelper;
+
+template <typename R>
+class InvokeHelper<R, ::std::tr1::tuple<> > {
+ public:
+  template <typename Function>
+  static R Invoke(Function function, const ::std::tr1::tuple<>&) {
+    return function();
+  }
+
+  template <class Class, typename MethodPtr>
+  static R InvokeMethod(Class* obj_ptr,
+                        MethodPtr method_ptr,
+                        const ::std::tr1::tuple<>&) {
+    return (obj_ptr->*method_ptr)();
+  }
+};
+
+template <typename R, typename A1>
+class InvokeHelper<R, ::std::tr1::tuple<A1> > {
+ public:
+  template <typename Function>
+  static R Invoke(Function function, const ::std::tr1::tuple<A1>& args) {
+    using ::std::tr1::get;
+    return function(get<0>(args));
+  }
+
+  template <class Class, typename MethodPtr>
+  static R InvokeMethod(Class* obj_ptr,
+                        MethodPtr method_ptr,
+                        const ::std::tr1::tuple<A1>& args) {
+    using ::std::tr1::get;
+    return (obj_ptr->*method_ptr)(get<0>(args));
+  }
+};
+
+template <typename R, typename A1, typename A2>
+class InvokeHelper<R, ::std::tr1::tuple<A1, A2> > {
+ public:
+  template <typename Function>
+  static R Invoke(Function function, const ::std::tr1::tuple<A1, A2>& args) {
+    using ::std::tr1::get;
+    return function(get<0>(args), get<1>(args));
+  }
+
+  template <class Class, typename MethodPtr>
+  static R InvokeMethod(Class* obj_ptr,
+                        MethodPtr method_ptr,
+                        const ::std::tr1::tuple<A1, A2>& args) {
+    using ::std::tr1::get;
+    return (obj_ptr->*method_ptr)(get<0>(args), get<1>(args));
+  }
+};
+
+template <typename R, typename A1, typename A2, typename A3>
+class InvokeHelper<R, ::std::tr1::tuple<A1, A2, A3> > {
+ public:
+  template <typename Function>
+  static R Invoke(Function function, const ::std::tr1::tuple<A1, A2, A3>& args) {
+    using ::std::tr1::get;
+    return function(get<0>(args), get<1>(args), get<2>(args));
+  }
+
+  template <class Class, typename MethodPtr>
+  static R InvokeMethod(Class* obj_ptr,
+                        MethodPtr method_ptr,
+                        const ::std::tr1::tuple<A1, A2, A3>& args) {
+    using ::std::tr1::get;
+    return (obj_ptr->*method_ptr)(get<0>(args), get<1>(args), get<2>(args));
+  }
+};
+
+template <typename R, typename A1, typename A2, typename A3, typename A4>
+class InvokeHelper<R, ::std::tr1::tuple<A1, A2, A3, A4> > {
+ public:
+  template <typename Function>
+  static R Invoke(Function function,
+                  const ::std::tr1::tuple<A1, A2, A3, A4>& args) {
+    using ::std::tr1::get;
+    return function(get<0>(args), get<1>(args), get<2>(args), get<3>(args));
+  }
+
+  template <class Class, typename MethodPtr>
+  static R InvokeMethod(Class* obj_ptr,
+                        MethodPtr method_ptr,
+                        const ::std::tr1::tuple<A1, A2, A3, A4>& args) {
+    using ::std::tr1::get;
+    return (obj_ptr->*method_ptr)(get<0>(args), get<1>(args), get<2>(args),
+        get<3>(args));
+  }
+};

[42/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gmock/include/gmock/gmock-generated-matchers.h
--
diff --git a/depends/libhdfs3/gmock/include/gmock/gmock-generated-matchers.h b/depends/libhdfs3/gmock/include/gmock/gmock-generated-matchers.h
new file mode 100644
index 000..b4c8571
--- /dev/null
+++ b/depends/libhdfs3/gmock/include/gmock/gmock-generated-matchers.h
@@ -0,0 +1,2190 @@
+// This file was GENERATED by command:
+// pump.py gmock-generated-matchers.h.pump
+// DO NOT EDIT BY HAND!!!
+
+// Copyright 2008, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Google Mock - a framework for writing C++ mock classes.
+//
+// This file implements some commonly used variadic matchers.
+
+#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_MATCHERS_H_
+#define GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_MATCHERS_H_
+
+#include 
+#include 
+#include 
+#include 
+#include "gmock/gmock-matchers.h"
+
+namespace testing {
+namespace internal {
+
+// The type of the i-th (0-based) field of Tuple.
+#define GMOCK_FIELD_TYPE_(Tuple, i) \
+    typename ::std::tr1::tuple_element<i, Tuple>::type
+
+// TupleFields<Tuple, k0, ..., kn> is for selecting fields from a
+// tuple of type Tuple.  It has two members:
+//
+//   type: a tuple type whose i-th field is the ki-th field of Tuple.
+//   GetSelectedFields(t): returns fields k0, ..., and kn of t as a tuple.
+//
+// For example, in class TupleFields<tuple<bool, char, int>, 2, 0>, we have:
+//
+//   type is tuple<int, bool>, and
+//   GetSelectedFields(make_tuple(true, 'a', 42)) is (42, true).
+
+template <class Tuple, int k0 = -1, int k1 = -1, int k2 = -1, int k3 = -1,
+    int k4 = -1, int k5 = -1, int k6 = -1, int k7 = -1, int k8 = -1,
+    int k9 = -1>
+class TupleFields;
+
+// This generic version is used when there are 10 selectors.
+template <class Tuple, int k0, int k1, int k2, int k3, int k4, int k5, int k6,
+    int k7, int k8, int k9>
+class TupleFields {
+ public:
+  typedef ::std::tr1::tuple<GMOCK_FIELD_TYPE_(Tuple, k0),
+      GMOCK_FIELD_TYPE_(Tuple, k1), GMOCK_FIELD_TYPE_(Tuple, k2),
+      GMOCK_FIELD_TYPE_(Tuple, k3), GMOCK_FIELD_TYPE_(Tuple, k4),
+      GMOCK_FIELD_TYPE_(Tuple, k5), GMOCK_FIELD_TYPE_(Tuple, k6),
+      GMOCK_FIELD_TYPE_(Tuple, k7), GMOCK_FIELD_TYPE_(Tuple, k8),
+      GMOCK_FIELD_TYPE_(Tuple, k9)> type;
+  static type GetSelectedFields(const Tuple& t) {
+    using ::std::tr1::get;
+    return type(get<k0>(t), get<k1>(t), get<k2>(t), get<k3>(t), get<k4>(t),
+        get<k5>(t), get<k6>(t), get<k7>(t), get<k8>(t), get<k9>(t));
+  }
+};
+
+// The following specialization is used for 0 ~ 9 selectors.
+
+template <class Tuple>
+class TupleFields<Tuple, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1> {
+ public:
+  typedef ::std::tr1::tuple<> type;
+  static type GetSelectedFields(const Tuple& /* t */) {
+    using ::std::tr1::get;
+    return type();
+  }
+};
+
+template <class Tuple, int k0>
+class TupleFields<Tuple, k0, -1, -1, -1, -1, -1, -1, -1, -1, -1> {
+ public:
+  typedef ::std::tr1::tuple<GMOCK_FIELD_TYPE_(Tuple, k0)> type;
+  static type GetSelectedFields(const Tuple& t) {
+    using ::std::tr1::get;
+    return type(get<k0>(t));
+  }
+};
+
+template <class Tuple, int k0, int k1>
+class TupleFields<Tuple, k0, k1, -1, -1, -1, -1, -1, -1, -1, -1> {
+ public:
+  typedef ::std::tr1::tuple<GMOCK_FIELD_TYPE_(Tuple, k0),
+      GMOCK_FIELD_TYPE_(Tuple, k1)> type;
+  static type GetSelectedFields(const Tuple& t) {
+    using ::std::tr1::get;
+    return type(get<k0>(t), get<k1>(t));
+  }
+};
+
+template <class Tuple, int k0, int k1, int k2>
+class TupleFields<Tuple, k0, k1, k2, -1, -1, -1, -1, -1, -1, -1> {
+ public:
+  typedef ::std::tr1::tuple<GMOCK_FIELD_TYPE_(Tuple, k0),
+      GMOCK_FIELD_TYPE_(Tuple, k1), GMOCK_FIELD_TYPE_(Tuple, k2)> type;
+  static type GetSelectedFields(const Tuple& t) {
+    using ::std::tr1::get;
+    return type(get<k0>(t), get<k1>(t), get<k2>(t));
+  }
+};
+
+template <class Tuple, int k0, int k1, int k2, int k3>
+class TupleFields<Tuple, k0, k1, k2, k3, -1, -1, -1, -1, -1, -1> {
+ public:
+  typedef ::std::tr1::tuple<GMOCK_FIELD_TYPE_(Tuple, k0),
+      GMOCK_FIELD_TYPE_(Tuple, k1), GMOCK_FIELD_TYPE_(Tuple, k2),
+      GMOCK_FIELD_TYPE_(Tuple, k3)> type;
+  static type GetSelectedFields(const Tuple& t) {
+    using ::std::tr1::get;
+    return type(get<k0>(t), get<k1>(t), get<k2>(t), get<k3>(t));
+  }
+};
+
+template <class Tuple, int k0, int k1, int k2, int k3, int k4>
+class TupleFields<Tuple, k0, k1, k2, k3, k4, -1, -1, -1, -1, -1> {
+ public:
+  typedef ::std::tr1::tuple<GMOCK_FIELD_TYPE_(Tuple, k0),
+      GMOCK_FIELD_TYPE_(Tuple, k1), GMOCK_FIELD_TYPE_(Tuple, k2),
+      GMOCK_FIELD_TYPE_(Tuple, k3), GMOCK_FIELD_TYPE_(Tuple, k4)> type;
+  static type GetSelectedFields(const Tuple& t) {
+    using ::std::tr1::get;
+    return type(get<k0>(t), get<k1>(t), get<k2>(t), get<k3>(t), get<k4>(t));
+  }
+};
+
+template <class Tuple, int k0, int k1, int k2, int k3, int k4, int k5>
+class TupleFields<Tuple, k0, k1, k2, k3, k4, k5, -1, -1, -1, -1> {
+ public:
+  typedef ::std::tr1::tuple<GMOCK_FIELD_TYPE_(Tuple, k0),
+      GMOCK_FIELD_TYPE_(Tuple, k1), GMOCK_FIELD_TYPE_(Tuple, k2),
+      GMOCK_FIELD_TYPE_(Tuple, k3), GMOCK_FIELD_TYPE_(Tuple, k4),
+      GMOCK_FIELD_TYPE_(Tuple, k5)> type;
+  static type GetSelectedFields(const Tuple& t) {
+    using ::std::tr1::get;
+    return type(get<k0>(t), get<k1>(t), get<k2>(t), get<k3>(t), get<k4>(t),
+        get<k5>(t));
+  }
+};

[22/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/mock/CMakeLists.txt
--
diff --git a/depends/libhdfs3/mock/CMakeLists.txt b/depends/libhdfs3/mock/CMakeLists.txt
new file mode 100644
index 000..2d01f84
--- /dev/null
+++ b/depends/libhdfs3/mock/CMakeLists.txt
@@ -0,0 +1,7 @@
+CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
+
+AUTO_SOURCES(files "*.cpp" "RECURSE" "${CMAKE_CURRENT_SOURCE_DIR}")
+LIST(APPEND libhdfs3_MOCK_SOURCES ${files})
+
+SET(libhdfs3_MOCK_SOURCES ${libhdfs3_MOCK_SOURCES} PARENT_SCOPE)
+

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/mock/MockBufferedSocketReader.h
--
diff --git a/depends/libhdfs3/mock/MockBufferedSocketReader.h b/depends/libhdfs3/mock/MockBufferedSocketReader.h
new file mode 100644
index 000..eca29d9
--- /dev/null
+++ b/depends/libhdfs3/mock/MockBufferedSocketReader.h
@@ -0,0 +1,49 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 - 
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_MOCK_MOCKBUFFEREDSOCKETREADER_H_
+#define _HDFS_LIBHDFS3_MOCK_MOCKBUFFEREDSOCKETREADER_H_
+
+#include "gmock/gmock.h"
+#include "network/BufferedSocketReader.h"
+
+namespace Hdfs {
+namespace Mock {
+
+class MockBufferedSocketReader: public Hdfs::Internal::BufferedSocketReader {
+public:
+   MOCK_METHOD2(read, int32_t(char * b, int32_t s));
+   MOCK_METHOD3(readFully, void(char * b, int32_t s, int timeout));
+   MOCK_METHOD1(readBigEndianInt32, int32_t(int timeout));
+   MOCK_METHOD1(readVarint32, int32_t(int timeout));
+   MOCK_METHOD1(poll, bool(int timeout));
+};
+
+}
+}
+
+#endif /* _HDFS_LIBHDFS3_MOCK_MOCKBUFFEREDSOCKETREADER_H_ */

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/mock/MockDatanode.h
--
diff --git a/depends/libhdfs3/mock/MockDatanode.h b/depends/libhdfs3/mock/MockDatanode.h
new file mode 100644
index 000..4acbefb
--- /dev/null
+++ b/depends/libhdfs3/mock/MockDatanode.h
@@ -0,0 +1,50 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 - 
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_MOCK_MOCKDATANODE_H_
+#define _HDFS_LIBHDFS3_MOCK_MOCKDATANODE_H_
+
+#include "gmock/gmock.h"
+#include "server/Datanode.h"
+
+using namespace Hdfs::Internal;
+namespace Hdfs {
+
+namespace Mock {
+
+class MockDatanode: public Datanode {
+public:
+   MOCK_METHOD1(getReplicaVisibleLength, int64_t (const 
Hdfs::Internal:

[39/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gmock/include/gmock/gmock-more-actions.h
--
diff --git a/depends/libhdfs3/gmock/include/gmock/gmock-more-actions.h b/depends/libhdfs3/gmock/include/gmock/gmock-more-actions.h
new file mode 100644
index 000..fc5e5ca
--- /dev/null
+++ b/depends/libhdfs3/gmock/include/gmock/gmock-more-actions.h
@@ -0,0 +1,233 @@
+// Copyright 2007, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Google Mock - a framework for writing C++ mock classes.
+//
+// This file implements some actions that depend on gmock-generated-actions.h.
+
+#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
+#define GMOCK_INCLUDE_GMOCK_GMOCK_MORE_ACTIONS_H_
+
+#include 
+
+#include "gmock/gmock-generated-actions.h"
+
+namespace testing {
+namespace internal {
+
+// Implements the Invoke(f) action.  The template argument
+// FunctionImpl is the implementation type of f, which can be either a
+// function pointer or a functor.  Invoke(f) can be used as an
+// Action as long as f's type is compatible with F (i.e. f can be
+// assigned to a tr1::function).
+template <typename FunctionImpl>
+class InvokeAction {
+ public:
+  // The c'tor makes a copy of function_impl (either a function
+  // pointer or a functor).
+  explicit InvokeAction(FunctionImpl function_impl)
+  : function_impl_(function_impl) {}
+
+  template <typename Result, typename ArgumentTuple>
+  Result Perform(const ArgumentTuple& args) {
+    return InvokeHelper<Result, ArgumentTuple>::Invoke(function_impl_, args);
+  }
+
+ private:
+  FunctionImpl function_impl_;
+
+  GTEST_DISALLOW_ASSIGN_(InvokeAction);
+};
+
+// Implements the Invoke(object_ptr, &Class::Method) action.
+template <class Class, typename MethodPtr>
+class InvokeMethodAction {
+ public:
+  InvokeMethodAction(Class* obj_ptr, MethodPtr method_ptr)
+  : obj_ptr_(obj_ptr), method_ptr_(method_ptr) {}
+
+  template <typename Result, typename ArgumentTuple>
+  Result Perform(const ArgumentTuple& args) const {
+    return InvokeHelper<Result, ArgumentTuple>::InvokeMethod(
+        obj_ptr_, method_ptr_, args);
+  }
+
+ private:
+  Class* const obj_ptr_;
+  const MethodPtr method_ptr_;
+
+  GTEST_DISALLOW_ASSIGN_(InvokeMethodAction);
+};
+
+}  // namespace internal
+
+// Various overloads for Invoke().
+
+// Creates an action that invokes 'function_impl' with the mock
+// function's arguments.
+template <typename FunctionImpl>
+PolymorphicAction<internal::InvokeAction<FunctionImpl> > Invoke(
+    FunctionImpl function_impl) {
+  return MakePolymorphicAction(
+      internal::InvokeAction<FunctionImpl>(function_impl));
+}
+
+// Creates an action that invokes the given method on the given object
+// with the mock function's arguments.
+template <class Class, typename MethodPtr>
+PolymorphicAction<internal::InvokeMethodAction<Class, MethodPtr> > Invoke(
+    Class* obj_ptr, MethodPtr method_ptr) {
+  return MakePolymorphicAction(
+      internal::InvokeMethodAction<Class, MethodPtr>(obj_ptr, method_ptr));
+}
+
+// WithoutArgs(inner_action) can be used in a mock function with a
+// non-empty argument list to perform inner_action, which takes no
+// argument.  In other words, it adapts an action accepting no
+// argument to one that accepts (and ignores) arguments.
+template <typename InnerAction>
+inline internal::WithArgsAction<InnerAction>
+WithoutArgs(const InnerAction& action) {
+  return internal::WithArgsAction<InnerAction>(action);
+}
+
+// WithArg<k>(an_action) creates an action that passes the k-th
+// (0-based) argument of the mock function to an_action and performs
+// it.  It adapts an action accepting one argument to one that accepts
+// multiple arguments.  For convenience, we also provide
+// WithArgs(an_action) (de

[08/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/server/NamenodeProxy.cpp
--
diff --git a/depends/libhdfs3/src/server/NamenodeProxy.cpp b/depends/libhdfs3/src/server/NamenodeProxy.cpp
new file mode 100644
index 000..d66bbd9
--- /dev/null
+++ b/depends/libhdfs3/src/server/NamenodeProxy.cpp
@@ -0,0 +1,534 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "Exception.h"
+#include "ExceptionInternal.h"
+#include "Logger.h"
+#include "NamenodeImpl.h"
+#include "NamenodeProxy.h"
+#include "StringUtil.h"
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+namespace Hdfs {
+namespace Internal {
+
+static uint32_t GetInitNamenodeIndex(const std::string id) {
+std::string path = "/tmp/";
+path += id;
+int fd;
+uint32_t index = 0;
+/*
+ * try create the file
+ */
+fd = open(path.c_str(), O_WRONLY | O_CREAT | O_EXCL, 0666);
+
+if (fd < 0) {
+if (errno == EEXIST) {
+/*
+ * the file already exist, try to open it
+ */
+fd = open(path.c_str(), O_RDONLY);
+} else {
+/*
+ * failed to create, do not care why
+ */
+return 0;
+}
+} else {
+if (0 != flock(fd, LOCK_EX)) {
+/*
+ * failed to lock
+ */
+close(fd);
+return index;
+}
+
+/*
+ * created file, initialize it with 0
+ */
+if (write(fd, &index, sizeof(index)) < 0) {
+  LOG(WARNING,
+  "NamenodeProxy: Failed to write current Namenode index into "
+  "cache file.");
+/*
+ * ignore the failure.
+ */
+}
+
+flock(fd, LOCK_UN);
+close(fd);
+return index;
+}
+
+/*
+ * the file exist, read it.
+ */
+if (fd >= 0) {
+if (0 != flock(fd, LOCK_SH)) {
+/*
+ * failed to lock
+ */
+close(fd);
+return index;
+}
+
+if (sizeof(index) != read(fd, &index, sizeof(index))) {
+/*
+ * failed to read, do not care why
+ */
+index = 0;
+}
+
+flock(fd, LOCK_UN);
+close(fd);
+}
+
+return index;
+}
+
+static void SetInitNamenodeIndex(const std::string & id, uint32_t index) {
+std::string path = "/tmp/";
+path += id;
+int fd;
+/*
+ * try open the file for write
+ */
+fd = open(path.c_str(), O_WRONLY);
+
+if (fd > 0) {
+if (0 != flock(fd, LOCK_EX)) {
+/*
+ * failed to lock
+ */
+close(fd);
+return;
+}
+
+if (write(fd, &index, sizeof(index)) < 0) {
+LOG(WARNING,
+"NamenodeProxy: Failed to write current Namenode index into "
+"cache file.");
+/*
+ * ignore the failure.
+ */
+}
+flock(fd, LOCK_UN);
+close(fd);
+}
+}
+
+NamenodeProxy::NamenodeProxy(const std::vector<NamenodeInfo> & namenodeInfos, const std::string & tokenService,
+ const SessionConfig & c, const RpcAuth & a) :
+clusterid(tokenService), currentNamenode(0) {
+if (namenodeInfos.size() == 1) {
+enableNamenodeHA = false;
+maxNamenodeHARetry = 0;
+} else {
+enableNamenodeHA = true;
+maxNamenodeHARetry = c.getRpcMaxHaRetry();
+}
+
+for (size_t i = 0; i < namenodeInfos.size(); ++i) {
+std::vector<std::string> nninfo = StringSplit(namenodeInfos[i].getRpcAddr(), ":");
+
+if (nninfo.size() != 2) 

[01/48] incubator-hawq git commit: HAWQ-608. Hot refactor libyarn to fix bug under higher version of gcc, upgrade libyarn version number.

2016-04-03 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-617 53d264e98 -> c5c7d8fc0


HAWQ-608. Hot refactor libyarn to fix bug under higher version of gcc,
upgrade libyarn version number.


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/b0ddaea7
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/b0ddaea7
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/b0ddaea7

Branch: refs/heads/HAWQ-617
Commit: b0ddaea73b10e5ea809285d8c98b83428260de25
Parents: 071f276
Author: xunzhang 
Authored: Wed Mar 30 21:35:47 2016 +0800
Committer: Wen Lin 
Committed: Fri Apr 1 14:12:40 2016 +0800

--
 depends/libyarn/mock/MockApplicationClient.h| 2 +-
 depends/libyarn/mock/MockApplicationClientProtocol.h| 2 +-
 depends/libyarn/mock/MockApplicationMaster.h| 2 +-
 depends/libyarn/mock/MockApplicationMasterProtocol.h| 2 +-
 depends/libyarn/mock/MockContainerManagement.h  | 2 +-
 depends/libyarn/mock/MockContainerManagementProtocol.h  | 2 +-
 depends/libyarn/mock/MockLibYarnClient.h| 2 +-
 depends/libyarn/sample/client_main.cpp  | 5 -
 depends/libyarn/src/CMakeLists.txt  | 2 +-
 depends/libyarn/src/libyarnclient/ApplicationClient.cpp | 2 +-
 depends/libyarn/src/libyarnclient/ApplicationClient.h   | 2 +-
 depends/libyarn/src/libyarnclient/ApplicationMaster.cpp | 2 +-
 depends/libyarn/src/libyarnclient/ApplicationMaster.h   | 2 +-
 depends/libyarn/src/libyarnclient/ContainerManagement.h | 2 +-
 depends/libyarn/src/libyarnclient/LibYarnClient.h   | 5 +
 depends/libyarn/src/libyarnclient/LibYarnClientC.cpp| 2 +-
 depends/libyarn/src/libyarnserver/ApplicationClientProtocol.h   | 2 +-
 depends/libyarn/src/libyarnserver/ApplicationMasterProtocol.h   | 2 +-
 depends/libyarn/src/libyarnserver/ContainerManagementProtocol.h | 2 +-
 depends/libyarn/src/protocolrecords/AllocateRequest.h   | 2 +-
 depends/libyarn/src/protocolrecords/AllocateResponse.h  | 2 +-
 .../src/protocolrecords/FinishApplicationMasterRequest.h| 2 +-
 depends/libyarn/src/protocolrecords/GetApplicationsRequest.h| 2 +-
 depends/libyarn/src/protocolrecords/GetApplicationsResponse.h   | 2 +-
 depends/libyarn/src/protocolrecords/GetClusterNodesRequest.h| 2 +-
 depends/libyarn/src/protocolrecords/GetClusterNodesResponse.h   | 2 +-
 .../libyarn/src/protocolrecords/GetContainerStatusesRequest.h   | 2 +-
 .../src/protocolrecords/GetContainerStatusesResponse.cpp| 1 +
 .../libyarn/src/protocolrecords/GetContainerStatusesResponse.h  | 2 +-
 depends/libyarn/src/protocolrecords/GetContainersRequest.h  | 2 +-
 depends/libyarn/src/protocolrecords/GetContainersResponse.h | 2 +-
 depends/libyarn/src/protocolrecords/GetQueueInfoRequest.h   | 2 +-
 .../libyarn/src/protocolrecords/GetQueueUserAclsInfoResponse.h  | 2 +-
 .../src/protocolrecords/RegisterApplicationMasterRequest.h  | 2 +-
 .../src/protocolrecords/RegisterApplicationMasterResponse.h | 2 +-
 depends/libyarn/src/protocolrecords/StartContainerRequest.h | 2 +-
 depends/libyarn/src/protocolrecords/StartContainerResponse.h| 2 +-
 depends/libyarn/src/protocolrecords/StartContainersRequest.h| 2 +-
 depends/libyarn/src/protocolrecords/StartContainersResponse.h   | 2 +-
 depends/libyarn/src/protocolrecords/StopContainersRequest.h | 2 +-
 depends/libyarn/src/protocolrecords/StopContainersResponse.h| 2 +-
 depends/libyarn/src/records/ApplicationACLMap.h | 2 +-
 depends/libyarn/src/records/ApplicationReport.h | 2 +-
 depends/libyarn/src/records/ApplicationSubmissionContext.h  | 2 +-
 depends/libyarn/src/records/Container.h | 2 +-
 depends/libyarn/src/records/ContainerLaunchContext.h| 2 +-
 depends/libyarn/src/records/ContainerReport.h   | 2 +-
 depends/libyarn/src/records/ContainerStatus.h   | 2 +-
 depends/libyarn/src/records/LocalResource.h | 2 +-
 depends/libyarn/src/records/NodeId.h| 2 +-
 depends/libyarn/src/records/NodeReport.h| 2 +-
 depends/libyarn/src/records/PreemptionContract.cpp  | 3 +++
 depends/libyarn/src/records/PreemptionContract.h| 3 ++-
 depends/libyarn/src/records/QueueInfo.h | 2 +-
 depends/libyarn/src/records/QueueUserACLInfo.h  | 2 +-
 depends/libyarn/src/records/ResourceBlacklistRequest.h  | 2 +-
 depends/libyarn/src/records/ResourceRequest.h   | 2 +-
 depends/libyarn/src/records/SerializedException.h   | 2 +-
 depends/libyarn/src/records/StrictPre

[11/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/proto/hdfs.proto
--
diff --git a/depends/libhdfs3/src/proto/hdfs.proto b/depends/libhdfs3/src/proto/hdfs.proto
new file mode 100644
index 000..19e3f79
--- /dev/null
+++ b/depends/libhdfs3/src/proto/hdfs.proto
@@ -0,0 +1,461 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/**
+ * These .proto interfaces are private and stable.
+ * Please see http://wiki.apache.org/hadoop/Compatibility
+ * for what changes are allowed for a *stable* .proto interface.
+ */
+
+// This file contains protocol buffers that are used throughout HDFS -- i.e.
+// by the client, server, and data transfer protocols.
+
+
+option java_package = "org.apache.hadoop.hdfs.protocol.proto";
+option java_outer_classname = "HdfsProtos";
+option java_generate_equals_and_hash = true;
+package Hdfs.Internal;
+
+import "Security.proto";
+
+/**
+ * Extended block identifies a block
+ */
+message ExtendedBlockProto {
+  required string poolId = 1;   // Block pool id - globally unique across clusters
+  required uint64 blockId = 2;  // the local id within a pool
+  required uint64 generationStamp = 3;
+  optional uint64 numBytes = 4 [default = 0];  // len does not belong in ebid 
+   // here for historical reasons
+}
+
+/**
+ * Identifies a Datanode
+ */
+message DatanodeIDProto {
+  required string ipAddr = 1;// IP address
+  required string hostName = 2;  // hostname
+  required string datanodeUuid = 3; // UUID assigned to the Datanode. For
+// upgraded clusters this is the same
+// as the original StorageID of the
+// Datanode.
+  required uint32 xferPort = 4;  // data streaming port
+  required uint32 infoPort = 5;  // datanode http port
+  required uint32 ipcPort = 6;   // ipc server port
+  optional uint32 infoSecurePort = 7 [default = 0]; // datanode https port
+}
+
+/**
+ * DatanodeInfo array
+ */
+message DatanodeInfosProto {
+  repeated DatanodeInfoProto datanodes = 1;
+}
+
+/**
+ * The status of a Datanode
+ */
+message DatanodeInfoProto {
+  required DatanodeIDProto id = 1;
+  optional uint64 capacity = 2 [default = 0];
+  optional uint64 dfsUsed = 3 [default = 0];
+  optional uint64 remaining = 4 [default = 0];
+  optional uint64 blockPoolUsed = 5 [default = 0];
+  optional uint64 lastUpdate = 6 [default = 0];
+  optional uint32 xceiverCount = 7 [default = 0];
+  optional string location = 8;
+  enum AdminState {
+NORMAL = 0;
+DECOMMISSION_INPROGRESS = 1;
+DECOMMISSIONED = 2;
+  }
+
+  optional AdminState adminState = 10 [default = NORMAL];
+  optional uint64 cacheCapacity = 11 [default = 0];
+  optional uint64 cacheUsed = 12 [default = 0];
+}
+
+/**
+ * Summary of a file or directory
+ */
+message ContentSummaryProto {
+  required uint64 length = 1;
+  required uint64 fileCount = 2;
+  required uint64 directoryCount = 3;
+  required uint64 quota = 4;
+  required uint64 spaceConsumed = 5;
+  required uint64 spaceQuota = 6;
+}
+
+/**
+ * Contains a list of paths corresponding to corrupt files and a cookie
+ * used for iterative calls to NameNode.listCorruptFileBlocks.
+ *
+ */
+message CorruptFileBlocksProto {
+ repeated string files = 1;
+ required string   cookie = 2;
+}
+
+/**
+ * File or Directory permission - same spec as posix
+ */
+message FsPermissionProto {
+  required uint32 perm = 1;   // Actually a short - only 16bits used
+}
+
+/**
+ * Types of recognized storage media.
+ */
+enum StorageTypeProto {
+  DISK = 1;
+  SSD = 2;
+}
+
+/**
+ * A list of storage IDs. 
+ */
+message StorageUuidsProto {
+  repeated string storageUuids = 1;
+}
+
+/**
+ * A LocatedBlock gives information about a block and its location.
+ */ 
+message LocatedBlockProto {
+  required ExtendedBlockProto b  = 1;
+  required uint64 offset = 2;           // offset of first byte of block in the file
+  repeated DatanodeInfoProto locs = 3;  // Locations ordered by proximity to client ip
+  required bool corrupt = 4;            // true if all replicas of

[25/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/src/gtest-internal-inl.h
--
diff --git a/depends/libhdfs3/gtest/src/gtest-internal-inl.h b/depends/libhdfs3/gtest/src/gtest-internal-inl.h
new file mode 100644
index 000..35df303
--- /dev/null
+++ b/depends/libhdfs3/gtest/src/gtest-internal-inl.h
@@ -0,0 +1,1218 @@
+// Copyright 2005, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Utility functions and classes used by the Google C++ testing framework.
+//
+// Author: w...@google.com (Zhanyong Wan)
+//
+// This file contains purely Google Test's internal implementation.  Please
+// DO NOT #INCLUDE IT IN A USER PROGRAM.
+
+#ifndef GTEST_SRC_GTEST_INTERNAL_INL_H_
+#define GTEST_SRC_GTEST_INTERNAL_INL_H_
+
+// GTEST_IMPLEMENTATION_ is defined to 1 iff the current translation unit is
+// part of Google Test's implementation; otherwise it's undefined.
+#if !GTEST_IMPLEMENTATION_
+// A user is trying to include this from his code - just say no.
+# error "gtest-internal-inl.h is part of Google Test's internal implementation."
+# error "It must not be included except by Google Test itself."
+#endif  // GTEST_IMPLEMENTATION_
+
+#ifndef _WIN32_WCE
+# include <errno.h>
+#endif  // !_WIN32_WCE
+#include <stddef.h>
+#include <stdlib.h>  // For strtoll/_strtoul64/malloc/free.
+#include <string.h>  // For memmove.
+
+#include <algorithm>
+#include <string>
+#include <vector>
+
+#include "gtest/internal/gtest-port.h"
+
+#if GTEST_CAN_STREAM_RESULTS_
+# include <arpa/inet.h>  // NOLINT
+# include <netdb.h>  // NOLINT
+#endif
+
+#if GTEST_OS_WINDOWS
+# include <windows.h>  // NOLINT
+#endif  // GTEST_OS_WINDOWS
+
+#include "gtest/gtest.h"  // NOLINT
+#include "gtest/gtest-spi.h"
+
+namespace testing {
+
+// Declares the flags.
+//
+// We don't want the users to modify this flag in the code, but want
+// Google Test's own unit tests to be able to access it. Therefore we
+// declare it here as opposed to in gtest.h.
+GTEST_DECLARE_bool_(death_test_use_fork);
+
+namespace internal {
+
+// The value of GetTestTypeId() as seen from within the Google Test
+// library.  This is solely for testing GetTestTypeId().
+GTEST_API_ extern const TypeId kTestTypeIdInGoogleTest;
+
+// Names of the flags (needed for parsing Google Test flags).
+const char kAlsoRunDisabledTestsFlag[] = "also_run_disabled_tests";
+const char kBreakOnFailureFlag[] = "break_on_failure";
+const char kCatchExceptionsFlag[] = "catch_exceptions";
+const char kColorFlag[] = "color";
+const char kFilterFlag[] = "filter";
+const char kListTestsFlag[] = "list_tests";
+const char kOutputFlag[] = "output";
+const char kPrintTimeFlag[] = "print_time";
+const char kRandomSeedFlag[] = "random_seed";
+const char kRepeatFlag[] = "repeat";
+const char kShuffleFlag[] = "shuffle";
+const char kStackTraceDepthFlag[] = "stack_trace_depth";
+const char kStreamResultToFlag[] = "stream_result_to";
+const char kThrowOnFailureFlag[] = "throw_on_failure";
+
+// A valid random seed must be in [1, kMaxRandomSeed].
+const int kMaxRandomSeed = 9;
+
+// g_help_flag is true iff the --help flag or an equivalent form is
+// specified on the command line.
+GTEST_API_ extern bool g_help_flag;
+
+// Returns the current time in milliseconds.
+GTEST_API_ TimeInMillis GetTimeInMillis();
+
+// Returns true iff Google Test should use colors in the output.
+GTEST_API_ bool ShouldUseColor(bool stdout_is_tty);
+
+// Formats the given time in milliseconds as seconds.
+GTEST_API_ std::string Forma

[26/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/src/gtest-death-test.cc
--
diff --git a/depends/libhdfs3/gtest/src/gtest-death-test.cc b/depends/libhdfs3/gtest/src/gtest-death-test.cc
new file mode 100644
index 000..a6023fc
--- /dev/null
+++ b/depends/libhdfs3/gtest/src/gtest-death-test.cc
@@ -0,0 +1,1344 @@
+// Copyright 2005, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan), vl...@google.com (Vlad Losev)
+//
+// This file implements death tests.
+
+#include "gtest/gtest-death-test.h"
+#include "gtest/internal/gtest-port.h"
+
+#if GTEST_HAS_DEATH_TEST
+
+# if GTEST_OS_MAC
+#  include <crt_externs.h>
+# endif  // GTEST_OS_MAC
+
+# include <errno.h>
+# include <fcntl.h>
+# include <limits.h>
+
+# if GTEST_OS_LINUX
+#  include <signal.h>
+# endif  // GTEST_OS_LINUX
+
+# include <stdarg.h>
+
+# if GTEST_OS_WINDOWS
+#  include <windows.h>
+# else
+#  include <sys/mman.h>
+#  include <sys/wait.h>
+# endif  // GTEST_OS_WINDOWS
+
+# if GTEST_OS_QNX
+#  include <spawn.h>
+# endif  // GTEST_OS_QNX
+
+#endif  // GTEST_HAS_DEATH_TEST
+
+#include "gtest/gtest-message.h"
+#include "gtest/internal/gtest-string.h"
+
+// Indicates that this translation unit is part of Google Test's
+// implementation.  It must come before gtest-internal-inl.h is
+// included, or there will be a compiler error.  This trick is to
+// prevent a user from accidentally including gtest-internal-inl.h in
+// his code.
+#define GTEST_IMPLEMENTATION_ 1
+#include "src/gtest-internal-inl.h"
+#undef GTEST_IMPLEMENTATION_
+
+namespace testing {
+
+// Constants.
+
+// The default death test style.
+static const char kDefaultDeathTestStyle[] = "fast";
+
+GTEST_DEFINE_string_(
+death_test_style,
+internal::StringFromGTestEnv("death_test_style", kDefaultDeathTestStyle),
+"Indicates how to run a death test in a forked child process: "
+"\"threadsafe\" (child process re-executes the test binary "
+"from the beginning, running only the specific death test) or "
+"\"fast\" (child process runs the death test immediately "
+"after forking).");
+
+GTEST_DEFINE_bool_(
+death_test_use_fork,
+internal::BoolFromGTestEnv("death_test_use_fork", false),
+"Instructs to use fork()/_exit() instead of clone() in death tests. "
+"Ignored and always uses fork() on POSIX systems where clone() is not "
+"implemented. Useful when running under valgrind or similar tools if "
+"those do not support clone(). Valgrind 3.3.1 will just fail if "
+"it sees an unsupported combination of clone() flags. "
+"It is not recommended to use this flag w/o valgrind though it will "
+"work in 99% of the cases. Once valgrind is fixed, this flag will "
+"most likely be removed.");
+
+namespace internal {
+GTEST_DEFINE_string_(
+internal_run_death_test, "",
+"Indicates the file, line number, temporal index of "
+"the single death test to run, and a file descriptor to "
+"which a success code may be sent, all separated by "
+"the '|' characters.  This flag is specified if and only if the current "
+"process is a sub-process launched for running a thread-safe "
+"death test.  FOR INTERNAL USE ONLY.");
+}  // namespace internal
+
+#if GTEST_HAS_DEATH_TEST
+
+namespace internal {
+
+// Valid only for fast death tests. Indicates the code is running in the
+// child process of a fast style death test.
+static bool g_in_fast_death_test_child = false;
+
+// Returns a Bool

[47/48] incubator-hawq git commit: HAWQ-620. remove useless codes in resourcebroker_NONE.c

2016-04-03 Thread bhuvnesh2703
HAWQ-620. remove useless codes in resourcebroker_NONE.c


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/e8fcfb0c
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/e8fcfb0c
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/e8fcfb0c

Branch: refs/heads/HAWQ-617
Commit: e8fcfb0c0b350ec83d9cb30a2510993b68e00fc7
Parents: bc0904a
Author: Wen Lin 
Authored: Fri Apr 1 20:36:21 2016 +0800
Committer: Wen Lin 
Committed: Fri Apr 1 20:36:21 2016 +0800

--
 .../resourcebroker/resourcebroker_NONE.c| 70 
 1 file changed, 70 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/e8fcfb0c/src/backend/resourcemanager/resourcebroker/resourcebroker_NONE.c
--
diff --git a/src/backend/resourcemanager/resourcebroker/resourcebroker_NONE.c b/src/backend/resourcemanager/resourcebroker/resourcebroker_NONE.c
index 97e8d37..627356e 100644
--- a/src/backend/resourcemanager/resourcebroker/resourcebroker_NONE.c
+++ b/src/backend/resourcemanager/resourcebroker/resourcebroker_NONE.c
@@ -137,76 +137,6 @@ int RB_NONE_acquireResource(uint32_t memorymb, uint32_t core, List *preferred)
}
}
 
-	/*
-	 * Then if we need more containers, round-robin strategy is implemented.
-	 */
-	while( contactcount < contcount )
-	{
-
-		hasallocated = false;
-		for (int i = 0 ; i < hostcount ; ++i)
-		{
-
-			RoundRobinIndex = RoundRobinIndex >= hostcount-1 ?
-							  0 :
-							  RoundRobinIndex + 1;
-
-			/* Get the host currently indexed. */
-			SegResource segres = getSegResource(RoundRobinIndex);
-			if ( segres == NULL )
-			{
-				continue;
-			}
-
-			char *hostname = GET_SEGRESOURCE_HOSTNAME(segres);
-
-			elog(DEBUG5, "NONE mode resource broker tries host %s.", hostname);
-
-			/* The host must be HAWQ available and not RUAlive pending. */
-			if ( !IS_SEGRESOURCE_USABLE(segres))
-			{
-				continue;
-			}
-
-			/* Check the pending resource quota to avoid allocating too much. */
-			if ( segres->Stat->FTSTotalMemoryMB -
-				 segres->Allocated.MemoryMB -
-				 segres->IncPending.MemoryMB >= contmemorymb &&
-				 segres->Stat->FTSTotalCore -
-				 segres->Allocated.Core -
-				 segres->IncPending.Core >= 1 )
-			{
-				hasallocated = true;
-
-				elog(DEBUG3, "NONE mode resource broker chooses host %s to allocate "
-							 "resource (%d MB, 1 CORE).",
-							 hostname,
-							 contmemorymb);
-
-				newcontainer = createGRMContainer(ContainerIDCounter,
-												  contmemorymb,
-												  1,
-												  hostname,
-												  segres);
-				contactcount++;
-				ContainerIDCounter++;
-				break;
-			}
-		}
-
-		if ( !hasallocated ) {
-			break; /* May be not fully satisfied. */
-		}
-
-		/*
-		 * Add the new container into resource pool. In NONE mode, we expect
-		 * HAWQ RM always can successfully add the container into its resource
-		 * pool.
-		 */
-		Assert(newcontainer != NULL);
-		addGRMContainerToToBeAccepted(newcontainer);
-	}
-
elog(LOG, "NONE mode resource broker allocated containers "
  "(%d MB, %d CORE) x %d. Expected %d containers.",
contmemorymb,



[10/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/rpc/RpcContentWrapper.h
--
diff --git a/depends/libhdfs3/src/rpc/RpcContentWrapper.h b/depends/libhdfs3/src/rpc/RpcContentWrapper.h
new file mode 100644
index 000..b8f0a20
--- /dev/null
+++ b/depends/libhdfs3/src/rpc/RpcContentWrapper.h
@@ -0,0 +1,54 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_RPC_RPCCONTENTWRAPPER_H_
+#define _HDFS_LIBHDFS3_RPC_RPCCONTENTWRAPPER_H_
+
+#include <google/protobuf/message.h>
+
+#include "WriteBuffer.h"
+
+namespace Hdfs {
+namespace Internal {
+
+class RpcContentWrapper {
+public:
+RpcContentWrapper(::google::protobuf::Message * header,
+  ::google::protobuf::Message * msg);
+
+int getLength();
+void writeTo(WriteBuffer & buffer);
+
+public:
+::google::protobuf::Message * header;
+::google::protobuf::Message * msg;
+};
+
+}
+}
+
+#endif /* _HDFS_LIBHDFS3_RPC_RPCCONTENTWRAPPER_H_ */

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/rpc/RpcProtocolInfo.cpp
--
diff --git a/depends/libhdfs3/src/rpc/RpcProtocolInfo.cpp b/depends/libhdfs3/src/rpc/RpcProtocolInfo.cpp
new file mode 100644
index 000..faf57a4
--- /dev/null
+++ b/depends/libhdfs3/src/rpc/RpcProtocolInfo.cpp
@@ -0,0 +1,39 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "RpcProtocolInfo.h"
+
+namespace Hdfs {
+namespace Internal {
+
+size_t RpcProtocolInfo::hash_value() const {
+    size_t values[] = { Int32Hasher(version), StringHasher(protocol), StringHasher(tokenKind) };
+    return CombineHasher(values, sizeof(values) / sizeof(values[0]));
+}
+
+}
+}

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/rpc/RpcProtocolInfo.h
--
diff --git a/depends/libhdfs3/src/rpc/RpcProtocolInfo.h b/depends/libhdfs3/src/rpc/RpcProtocolInfo.h
new file mode 100644
index 000..5930ea3
--- /dev/null
+++ b/depends/libhdfs3/src/rpc/RpcProtocolInfo.h
@@ -0,0 +1,86 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache Li

[16/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/client/UserInfo.cpp
--
diff --git a/depends/libhdfs3/src/client/UserInfo.cpp b/depends/libhdfs3/src/client/UserInfo.cpp
new file mode 100644
index 000..6f6a8f3
--- /dev/null
+++ b/depends/libhdfs3/src/client/UserInfo.cpp
@@ -0,0 +1,81 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "UserInfo.h"
+
+#include <pwd.h>
+#include <sys/types.h>
+#include <unistd.h>
+
+#include <vector>
+
+#include "Exception.h"
+#include "ExceptionInternal.h"
+
+namespace Hdfs {
+namespace Internal {
+
+UserInfo UserInfo::LocalUser() {
+UserInfo retval;
+uid_t uid, euid;
+int bufsize;
+struct passwd pwd, epwd, *result = NULL;
+euid = geteuid();
+uid = getuid();
+
+    if ((bufsize = sysconf(_SC_GETPW_R_SIZE_MAX)) == -1) {
+        THROW(InvalidParameter,
+              "Invalid input: \"sysconf\" function failed to get the configure with key \"_SC_GETPW_R_SIZE_MAX\".");
+    }
+
+    std::vector<char> buffer(bufsize);
+
+    if (getpwuid_r(euid, &epwd, &buffer[0], bufsize, &result) != 0 || !result) {
+        THROW(InvalidParameter,
+              "Invalid input: effective user name cannot be found with UID %u.",
+              euid);
+    }
+
+retval.setEffectiveUser(epwd.pw_name);
+
+if (getpwuid_r(uid, &pwd, &buffer[0], bufsize, &result) != 0 || !result) {
+THROW(InvalidParameter,
+  "Invalid input: real user name cannot be found with UID %u.",
+  uid);
+}
+
+retval.setRealUser(pwd.pw_name);
+return retval;
+}
+
+size_t UserInfo::hash_value() const {
+size_t values[] = { StringHasher(realUser), effectiveUser.hash_value() };
+return CombineHasher(values, sizeof(values) / sizeof(values[0]));
+}
+
+}
+}

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/client/UserInfo.h
--
diff --git a/depends/libhdfs3/src/client/UserInfo.h b/depends/libhdfs3/src/client/UserInfo.h
new file mode 100644
index 000..2778da9
--- /dev/null
+++ b/depends/libhdfs3/src/client/UserInfo.h
@@ -0,0 +1,108 @@
+/
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ /
+/
+ * 2014 -
+ * open source under Apache License Version 2.0
+ /
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_CLIENT_USERINFO_H_
+#define _HDFS_LIBHDFS3_CLIENT_USERINFO_H_
+
+#include <map>
+#include <string>
+
+#include "Hash.h"
+#include "KerberosName.h"
+#include "Token.h"
+
+#include "Logger.h"
+
+namespace Hdfs {
+namespace Internal {
+
+class UserInfo {
+public:
+UserInfo() {
+ 

[05/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/test/data/function-secure.xml
--
diff --git a/depends/libhdfs3/test/data/function-secure.xml b/depends/libhdfs3/test/data/function-secure.xml
new file mode 100644
index 000..ed8921d
--- /dev/null
+++ b/depends/libhdfs3/test/data/function-secure.xml
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+    <property>
+        <name>dfs.default.uri</name>
+        <value>hdfs://localhost:9000</value>
+    </property>
+
+    <property>
+        <name>hadoop.security.authentication</name>
+        <value>kerberos</value>
+    </property>
+
+    <property>
+        <name>dfs.nameservices</name>
+        <value>gphd-cluster</value>
+    </property>
+
+    <property>
+        <name>dfs.ha.namenodes.gphd-cluster</name>
+        <value>nn1,nn2</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.rpc-address.gphd-cluster.nn1</name>
+        <value>smdw:9000</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.rpc-address.gphd-cluster.nn2</name>
+        <value>mdw:9000</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.http-address.gphd-cluster.nn1</name>
+        <value>smdw:50070</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.http-address.gphd-cluster.nn2</name>
+        <value>mdw:50070</value>
+    </property>
+</configuration>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/test/data/function-secure.xml.sample
--
diff --git a/depends/libhdfs3/test/data/function-secure.xml.sample b/depends/libhdfs3/test/data/function-secure.xml.sample
new file mode 100644
index 000..ed8921d
--- /dev/null
+++ b/depends/libhdfs3/test/data/function-secure.xml.sample
@@ -0,0 +1,43 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+    <property>
+        <name>dfs.default.uri</name>
+        <value>hdfs://localhost:9000</value>
+    </property>
+
+    <property>
+        <name>hadoop.security.authentication</name>
+        <value>kerberos</value>
+    </property>
+
+    <property>
+        <name>dfs.nameservices</name>
+        <value>gphd-cluster</value>
+    </property>
+
+    <property>
+        <name>dfs.ha.namenodes.gphd-cluster</name>
+        <value>nn1,nn2</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.rpc-address.gphd-cluster.nn1</name>
+        <value>smdw:9000</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.rpc-address.gphd-cluster.nn2</name>
+        <value>mdw:9000</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.http-address.gphd-cluster.nn1</name>
+        <value>smdw:50070</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.http-address.gphd-cluster.nn2</name>
+        <value>mdw:50070</value>
+    </property>
+</configuration>

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/test/data/function-test.xml
--
diff --git a/depends/libhdfs3/test/data/function-test.xml b/depends/libhdfs3/test/data/function-test.xml
new file mode 100644
index 000..d6800b9
--- /dev/null
+++ b/depends/libhdfs3/test/data/function-test.xml
@@ -0,0 +1,117 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+    <property>
+        <name>dfs.default.uri</name>
+        <value>hdfs://localhost:9000</value>
+    </property>
+
+    <property>
+        <name>hadoop.security.authentication</name>
+        <value>simple</value>
+    </property>
+
+    <property>
+        <name>dfs.nameservices</name>
+        <value>phdcluster</value>
+    </property>
+
+    <property>
+        <name>dfs.default.replica</name>
+        <value>3</value>
+    </property>
+
+    <property>
+        <name>dfs.client.log.severity</name>
+        <value>INFO</value>
+    </property>
+
+    <property>
+        <name>dfs.client.read.shortcircuit</name>
+        <value>true</value>
+    </property>
+
+    <property>
+        <name>input.localread.blockinfo.cachesize</name>
+        <value>10</value>
+    </property>
+
+    <property>
+        <name>dfs.client.read.shortcircuit.streams.cache.size</name>
+        <value>10</value>
+    </property>
+
+    <property>
+        <name>dfs.client.use.legacy.blockreader.local</name>
+        <value>true</value>
+    </property>
+
+    <property>
+        <name>output.replace-datanode-on-failure</name>
+        <value>false</value>
+    </property>
+
+    <property>
+        <name>input.localread.mappedfile</name>
+        <value>true</value>
+    </property>
+
+    <property>
+        <name>dfs.domain.socket.path</name>
+        <value>/var/lib/hadoop-hdfs/hdfs_domain__PORT</value>
+    </property>
+
+    <property>
+        <name>dfs.ha.namenodes.phdcluster</name>
+        <value>nn1,nn2</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.rpc-address.phdcluster.nn1</name>
+        <value>mdw:9000</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.rpc-address.phdcluster.nn2</name>
+        <value>smdw:9000</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.http-address.phdcluster.nn1</name>
+        <value>mdw:50070</value>
+    </property>
+
+    <property>
+        <name>dfs.namenode.http-address.phdcluster.nn2</name>
+        <value>smdw:50070</value>
+    </property>
+
+    <property>
+        <name>rpc.socekt.linger.timeout</name>
+        <value>20</value>
+    </property>
+
+    <property>
+        <name>rpc.max.idle</name>
+        <value>100</value>
+    </property>
+
+    <property>
+        <name>test.get.conf</name>
+        <value>success</value>
+    </property>
+
+    <property>
+        <name>test.get.confint32</name>
+        <value>10</value>
+    </property>
+
+    <property>
+        <name>dfs.client.socketcache.expiryMsec</name>
+        <value>3000</value>
+    </property>
+
+    <property>
+        <name>dfs.client.socketcache.capaci

[31/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/internal/gtest-param-util-generated.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/internal/gtest-param-util-generated.h b/depends/libhdfs3/gtest/include/gtest/internal/gtest-param-util-generated.h
new file mode 100644
index 000..e805485
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/internal/gtest-param-util-generated.h
@@ -0,0 +1,5143 @@
+// This file was GENERATED by command:
+// pump.py gtest-param-util-generated.h.pump
+// DO NOT EDIT BY HAND!!!
+
+// Copyright 2008 Google Inc.
+// All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: vl...@google.com (Vlad Losev)
+
+// Type and function utilities for implementing parameterized tests.
+// This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
+//
+// Currently Google Test supports at most 50 arguments in Values,
+// and at most 10 arguments in Combine. Please contact
+// googletestframew...@googlegroups.com if you need more.
+// Please note that the number of arguments to Combine is limited
+// by the maximum arity of the implementation of tr1::tuple which is
+// currently set at 10.
+
+#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_
+#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_
+
+// scripts/fuse_gtest.py depends on gtest's own header being #included
+// *unconditionally*.  Therefore these #includes cannot be moved
+// inside #if GTEST_HAS_PARAM_TEST.
+#include "gtest/internal/gtest-param-util.h"
+#include "gtest/internal/gtest-port.h"
+
+#if GTEST_HAS_PARAM_TEST
+
+namespace testing {
+
+// Forward declarations of ValuesIn(), which is implemented in
+// include/gtest/gtest-param-test.h.
+template <typename ForwardIterator>
+internal::ParamGenerator<
+  typename ::testing::internal::IteratorTraits<ForwardIterator>::value_type>
+ValuesIn(ForwardIterator begin, ForwardIterator end);
+
+template <typename T, size_t N>
+internal::ParamGenerator<T> ValuesIn(const T (&array)[N]);
+
+template <class Container>
+internal::ParamGenerator<typename Container::value_type> ValuesIn(
+    const Container& container);
+
+namespace internal {
+
+// Used in the Values() function to provide polymorphic capabilities.
+template <typename T1>
+class ValueArray1 {
+ public:
+  explicit ValueArray1(T1 v1) : v1_(v1) {}
+
+  template <typename T>
+  operator ParamGenerator<T>() const { return ValuesIn(&v1_, &v1_ + 1); }
+
+ private:
+  // No implementation - assignment is unsupported.
+  void operator=(const ValueArray1& other);
+
+  const T1 v1_;
+};
+
+template <typename T1, typename T2>
+class ValueArray2 {
+ public:
+  ValueArray2(T1 v1, T2 v2) : v1_(v1), v2_(v2) {}
+
+  template <typename T>
+  operator ParamGenerator<T>() const {
+    const T array[] = {static_cast<T>(v1_), static_cast<T>(v2_)};
+    return ValuesIn(array);
+  }
+
+ private:
+  // No implementation - assignment is unsupported.
+  void operator=(const ValueArray2& other);
+
+  const T1 v1_;
+  const T2 v2_;
+};
+
+template <typename T1, typename T2, typename T3>
+class ValueArray3 {
+ public:
+  ValueArray3(T1 v1, T2 v2, T3 v3) : v1_(v1), v2_(v2), v3_(v3) {}
+
+  template <typename T>
+  operator ParamGenerator<T>() const {
+    const T array[] = {static_cast<T>(v1_), static_cast<T>(v2_),
+        static_cast<T>(v3_)};
+    return ValuesIn(array);
+  }
+
+ private:
+  // No implementation - assignment is unsupported.
+  void operator=(const ValueArray3& other);
+
+  const T1 v1_;
+  const T2 v2_;
+  const T3 v3_;
+};
+
+template <typename T1, typename T2, typename T3, typename T4>
+class ValueArray4 {
+ public:
+  ValueArray4(T1 v1, T2 v2, T3 v3, T4 v4) : v1_(v1), v2_(v2), v3_(v3),
+

[32/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/internal/gtest-linked_ptr.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/internal/gtest-linked_ptr.h b/depends/libhdfs3/gtest/include/gtest/internal/gtest-linked_ptr.h
new file mode 100644
index 000..b1362cd
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/internal/gtest-linked_ptr.h
@@ -0,0 +1,233 @@
+// Copyright 2003 Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: Dan Egnor (eg...@google.com)
+//
+// A "smart" pointer type with reference tracking.  Every pointer to a
+// particular object is kept on a circular linked list.  When the last pointer
+// to an object is destroyed or reassigned, the object is deleted.
+//
+// Used properly, this deletes the object when the last reference goes away.
+// There are several caveats:
+// - Like all reference counting schemes, cycles lead to leaks.
+// - Each smart pointer is actually two pointers (8 bytes instead of 4).
+// - Every time a pointer is assigned, the entire list of pointers to that
+//   object is traversed.  This class is therefore NOT SUITABLE when there
+//   will often be more than two or three pointers to a particular object.
+// - References are only tracked as long as linked_ptr<> objects are copied.
+//   If a linked_ptr<> is converted to a raw pointer and back, BAD THINGS
+//   will happen (double deletion).
+//
+// A good use of this class is storing object references in STL containers.
+// You can safely put linked_ptr<> in a vector<>.
+// Other uses may not be as good.
+//
+// Note: If you use an incomplete type with linked_ptr<>, the class
+// *containing* linked_ptr<> must have a constructor and destructor (even
+// if they do nothing!).
+//
+// Bill Gibbons suggested we use something like this.
+//
+// Thread Safety:
+//   Unlike other linked_ptr implementations, in this implementation
+//   a linked_ptr object is thread-safe in the sense that:
+// - it's safe to copy linked_ptr objects concurrently,
+// - it's safe to copy *from* a linked_ptr and read its underlying
+//   raw pointer (e.g. via get()) concurrently, and
+// - it's safe to write to two linked_ptrs that point to the same
+//   shared object concurrently.
+// TODO(w...@google.com): rename this to safe_linked_ptr to avoid
+// confusion with normal linked_ptr.
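The ownership rule described above ("the object is deleted when the last pointer to it is destroyed or reassigned") is the same contract `std::shared_ptr` provides, just implemented with a circular list of holders instead of a counter. A minimal sketch of that last-reference behavior using `shared_ptr` (an analogy for illustration, not the linked_ptr implementation):

```cpp
#include <cassert>
#include <memory>

// Instrumented type so we can observe when deletion happens.
struct Tracked {
  static int alive;
  Tracked() { ++alive; }
  ~Tracked() { --alive; }
};
int Tracked::alive = 0;

void demo() {
  std::shared_ptr<Tracked> a(new Tracked);
  {
    std::shared_ptr<Tracked> b = a;  // a second holder joins the group
    assert(Tracked::alive == 1);     // still one object, two holders
  }                                  // b leaves; a still owns the object
  assert(Tracked::alive == 1);
  a.reset();                         // last holder gone: object deleted
  assert(Tracked::alive == 0);
}
```

The double-deletion caveat above applies equally here: wrapping the same raw pointer twice creates two independent groups, each of which will delete the object.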
+
+#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_
+#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_
+
+#include <stdlib.h>
+#include <assert.h>
+
+#include "gtest/internal/gtest-port.h"
+
+namespace testing {
+namespace internal {
+
+// Protects copying of all linked_ptr objects.
+GTEST_API_ GTEST_DECLARE_STATIC_MUTEX_(g_linked_ptr_mutex);
+
+// This is used internally by all instances of linked_ptr<>.  It needs to be
+// a non-template class because different types of linked_ptr<> can refer to
+// the same object (linked_ptr<Superclass>(obj) vs linked_ptr<Subclass>(obj)).
+// So, it needs to be possible for different types of linked_ptr to participate
+// in the same circular linked list, so we need a single class type here.
+//
+// DO NOT USE THIS CLASS DIRECTLY YOURSELF.  Use linked_ptr.
+class linked_ptr_internal {
+ public:
+  // Create a new circle that includes only this instance.
+  void join_new() {
+    next_ = this;
+  }
+
+  // Many linked_ptr operations may change p.link_ for some linked_ptr
+  // variable p in the sa

[28/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/internal/gtest-tuple.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/internal/gtest-tuple.h b/depends/libhdfs3/gtest/include/gtest/internal/gtest-tuple.h
new file mode 100644
index 000..7b3dfc3
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/internal/gtest-tuple.h
@@ -0,0 +1,1012 @@
+// This file was GENERATED by command:
+// pump.py gtest-tuple.h.pump
+// DO NOT EDIT BY HAND!!!
+
+// Copyright 2009 Google Inc.
+// All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Implements a subset of TR1 tuple needed by Google Test and Google Mock.
+
+#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_
+#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_
+
+#include <utility>  // For ::std::pair.
+
+// The compiler used in Symbian has a bug that prevents us from declaring the
+// tuple template as a friend (it complains that tuple is redefined).  This
+// hack bypasses the bug by declaring the members that should otherwise be
+// private as public.
+// Sun Studio versions < 12 also have the above bug.
+#if defined(__SYMBIAN32__) || (defined(__SUNPRO_CC) && __SUNPRO_CC < 0x590)
+# define GTEST_DECLARE_TUPLE_AS_FRIEND_ public:
+#else
+# define GTEST_DECLARE_TUPLE_AS_FRIEND_ \
+    template <GTEST_10_TYPENAMES_(U)> friend class tuple; \
+   private:
+#endif
+
+// GTEST_n_TUPLE_(T) is the type of an n-tuple.
+#define GTEST_0_TUPLE_(T) tuple<>
+#define GTEST_1_TUPLE_(T) tuple<T##0, void, void, void, void, void, void, \
+    void, void, void>
+#define GTEST_2_TUPLE_(T) tuple<T##0, T##1, void, void, void, void, void, \
+    void, void, void>
+#define GTEST_3_TUPLE_(T) tuple<T##0, T##1, T##2, void, void, void, void, \
+    void, void, void>
+#define GTEST_4_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, void, void, void, \
+    void, void, void>
+#define GTEST_5_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, void, void, \
+    void, void, void>
+#define GTEST_6_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, void, \
+    void, void, void>
+#define GTEST_7_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, T##6, \
+    void, void, void>
+#define GTEST_8_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, T##6, \
+    T##7, void, void>
+#define GTEST_9_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, T##6, \
+    T##7, T##8, void>
+#define GTEST_10_TUPLE_(T) tuple<T##0, T##1, T##2, T##3, T##4, T##5, T##6, \
+    T##7, T##8, T##9>
+
+// GTEST_n_TYPENAMES_(T) declares a list of n typenames.
+#define GTEST_0_TYPENAMES_(T)
+#define GTEST_1_TYPENAMES_(T) typename T##0
+#define GTEST_2_TYPENAMES_(T) typename T##0, typename T##1
+#define GTEST_3_TYPENAMES_(T) typename T##0, typename T##1, typename T##2
+#define GTEST_4_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \
+typename T##3
+#define GTEST_5_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \
+typename T##3, typename T##4
+#define GTEST_6_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \
+typename T##3, typename T##4, typename T##5
+#define GTEST_7_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \
+typename T##3, typename T##4, typename T##5, typename T##6
+#define GTEST_8_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \
+typename T##3, typename T##4, typename T##5, typename T##6, typename T##7
+#define GTEST_9_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \
+typename T##3, typename T##4, typename T##5, typename T##6, \
+typename T##7, typename T##8
+#define GTEST_10_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \
+typename T##3, typename T##4, typename T##5, typename T##6, \
+typename T##7, typename T##8, typename T##9
+
+// In theory, defining stuff in the ::std namespace is undefined
+// behavior.  We can do this as we are playing the role of a standard
+// library vendor.
+namespace std {
+namespace tr1 {
+
+template <typename T0 = void, typename T1 = void, typename T2 = void,
+    typename T3 = void, typename T4 = void, typename T5 = void,
+    typename T6 = void, typename T7 = void, typename T8 = void,
+    typename T9 = void>
+class tuple;
+
+// Anything in namespace gtest_internal is Google Test's INTERNAL
+// IMPLEMENTATION DETAIL and MUST N

[36/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/gtest-param-test.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/gtest-param-test.h b/depends/libhdfs3/gtest/include/gtest/gtest-param-test.h
new file mode 100644
index 000..d6702c8
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/gtest-param-test.h
@@ -0,0 +1,1421 @@
+// This file was GENERATED by command:
+// pump.py gtest-param-test.h.pump
+// DO NOT EDIT BY HAND!!!
+
+// Copyright 2008, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Authors: vl...@google.com (Vlad Losev)
+//
+// Macros and functions for implementing parameterized tests
+// in Google C++ Testing Framework (Google Test)
+//
+// This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
+//
+#ifndef GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
+#define GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
+
+
+// Value-parameterized tests allow you to test your code with different
+// parameters without writing multiple copies of the same test.
+//
+// Here is how you use value-parameterized tests:
+
+#if 0
+
+// To write value-parameterized tests, first you should define a fixture
+// class. It is usually derived from testing::TestWithParam<T> (see below for
+// another inheritance scheme that's sometimes useful in more complicated
+// class hierarchies), where T is the type of your parameter values.
+// TestWithParam<T> is itself derived from testing::Test. T can be any
+// copyable type. If it's a raw pointer, you are responsible for managing the
+// lifespan of the pointed values.
+
+class FooTest : public ::testing::TestWithParam<const char*> {
+  // You can implement all the usual class fixture members here.
+};
+
+// Then, use the TEST_P macro to define as many parameterized tests
+// for this fixture as you want. The _P suffix is for "parameterized"
+// or "pattern", whichever you prefer to think.
+
+TEST_P(FooTest, DoesBlah) {
+  // Inside a test, access the test parameter with the GetParam() method
+  // of the TestWithParam<T> class:
+  EXPECT_TRUE(foo.Blah(GetParam()));
+  ...
+}
+
+TEST_P(FooTest, HasBlahBlah) {
+  ...
+}
+
+// Finally, you can use INSTANTIATE_TEST_CASE_P to instantiate the test
+// case with any set of parameters you want. Google Test defines a number
+// of functions for generating test parameters. They return what we call
+// (surprise!) parameter generators. Here is a summary of them, which
+// are all in the testing namespace:
+//
+//
+//  Range(begin, end [, step]) - Yields values {begin, begin+step,
+//   begin+step+step, ...}. The values do not
+//   include end. step defaults to 1.
+//  Values(v1, v2, ..., vN)- Yields values {v1, v2, ..., vN}.
+//  ValuesIn(container)- Yields values from a C-style array, an STL
+//  ValuesIn(begin,end)  container, or an iterator range [begin, end).
+//  Bool() - Yields sequence {false, true}.
+//  Combine(g1, g2, ..., gN)   - Yields all combinations (the Cartesian product
+//   for the math savvy) of the values generated
+//   by the N generators.
+//
+// For more details, see comments at the definitions of these functions below
+// in this file.
+//
+// The following statement will instantiate tests from the FooTest test case
+// each with parameter values "meeny", "miny", 
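The generator semantics summarized above (`Range`, `Values`, `ValuesIn`) can be sketched eagerly in plain C++. The real gtest generators are lazy objects; this `Range` function is only an illustration of the documented contract, in particular that `end` is never included:

```cpp
#include <cassert>
#include <vector>

// Eager analogue of gtest's Range(begin, end [, step]) generator:
// yields {begin, begin+step, begin+step+step, ...}, stopping before end.
std::vector<int> Range(int begin, int end, int step = 1) {
  std::vector<int> values;
  for (int v = begin; v < end; v += step) {
    values.push_back(v);  // end itself is never included
  }
  return values;
}
```

`Values(v1, ..., vN)` and `ValuesIn(container)` are the same idea with an explicit element list or an existing container as the source.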

[15/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/common/HWCrc32c.cpp
--
diff --git a/depends/libhdfs3/src/common/HWCrc32c.cpp b/depends/libhdfs3/src/common/HWCrc32c.cpp
new file mode 100644
index 000..4e3f270
--- /dev/null
+++ b/depends/libhdfs3/src/common/HWCrc32c.cpp
@@ -0,0 +1,165 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include <cassert>
+#include <cstdint>
+
+#include "HWCrc32c.h"
+
+#if ((defined(__X86__) || defined(__i386__) || defined(i386) || defined(_M_IX86) || defined(__386__) || defined(__x86_64__) || defined(_M_X64)))
+#include <cpuid.h>
+#endif
+
+#if ((defined(__X86__) || defined(__i386__) || defined(i386) || defined(_M_IX86) || defined(__386__) || defined(__x86_64__) || defined(_M_X64)))
+#if !defined(__SSE4_2__)
+
+namespace Hdfs {
+namespace Internal {
+
+#if defined(__LP64__)
+static inline uint64_t _mm_crc32_u64(uint64_t crc, uint64_t value) {
+    asm("crc32q %[value], %[crc]\n" : [crc] "+r"(crc) : [value] "rm"(value));
+    return crc;
+}
+#endif
+
+static inline uint32_t _mm_crc32_u16(uint32_t crc, uint16_t value) {
+    asm("crc32w %[value], %[crc]\n" : [crc] "+r"(crc) : [value] "rm"(value));
+    return crc;
+}
+
+static inline uint32_t _mm_crc32_u32(uint32_t crc, uint64_t value) {
+    asm("crc32l %[value], %[crc]\n" : [crc] "+r"(crc) : [value] "rm"(value));
+    return crc;
+}
+
+static inline uint32_t _mm_crc32_u8(uint32_t crc, uint8_t value) {
+    asm("crc32b %[value], %[crc]\n" : [crc] "+r"(crc) : [value] "rm"(value));
+    return crc;
+}
+
+}
+}
+
+#else
+
+#include <nmmintrin.h>
+
+#endif
+
+namespace Hdfs {
+namespace Internal {
+
+bool HWCrc32c::available() {
+#if ((defined(__X86__) || defined(__i386__) || defined(i386) || defined(_M_IX86) || defined(__386__) || defined(__x86_64__) || defined(_M_X64)))
+    uint32_t eax, ebx, ecx = 0, edx;
+    /*
+     * get the CPU features (level 1). ecx will have the SSE4.2 bit.
+     * This gcc routine automatically handles saving ebx in the case where we are -fpic or -fPIC
+     */
+    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
+    return (ecx & (1 << 20)) != 0;
+#else
+    return false;
+#endif
+}
+
+void HWCrc32c::update(const void * b, int len) {
+    const char * p = static_cast<const char *>(b);
+#if defined(__LP64__)
+    const size_t bytes = sizeof(uint64_t);
+#else
+    const size_t bytes = sizeof(uint32_t);
+#endif
+    int align = bytes - reinterpret_cast<uint64_t>(p) % bytes;
+    align = bytes == static_cast<size_t>(align) ? 0 : align;
+
+    if (len < align) {
+        align = len;
+    }
+
+    updateInt64(p, align);
+    p = p + align;
+    len -= align;
+
+    if (len > 0) {
+        assert(0 == reinterpret_cast<uint64_t>(p) % bytes);
+
+        for (int i = len / bytes; i > 0; --i) {
+#if defined(__LP64__)
+            crc = _mm_crc32_u64(crc, *reinterpret_cast<const uint64_t *>(p));
+#else
+            crc = _mm_crc32_u32(crc, *reinterpret_cast<const uint32_t *>(p));
+#endif
+            p = p + bytes;
+        }
+
+        len &= bytes - 1;
+        updateInt64(p, len);
+    }
+}
+
+void HWCrc32c::updateInt64(const char * b, int len) {
+    assert(len < 8);
+
+    switch (len) {
+    case 7:
+        crc = _mm_crc32_u8(crc, *reinterpret_cast<const uint8_t *>(b++));
+
+    case 6:
+        crc = _mm_crc32_u16(crc, *reinterpret_cast<const uint16_t *>(b));
+        b += 2;
+
+    /* case 5 is below: 4 + 1 */
+    case 4:
+        crc = _mm_crc32_u32(crc, *reinterpret_cast<const uint32_t *>(b));
+        break;
+
+    case 3:
+        crc = _mm_crc32_u8(crc, *reinterpret_cast<const uint8_t *>(b++));
+
+    case 2:
+        crc = _mm_crc32_u16(crc, *reinterpret_cast<const uint16_t *>(b));
+        break;
+
+    case 5:
+        crc = _mm_crc32_u32(crc, *reinterpret_cast<const uint32_t *>(b));
+        b += 4;
+
+    case 1:
+        crc = _mm_crc32_u8(crc, *reinterpret_cast<const uint8_t *>(b));
+        break;
+
+    case 0:
+  
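The hardware path above relies on the SSE4.2 `crc32` instruction. On platforms where `available()` returns false, the same CRC32C (Castagnoli) checksum can be computed in software; a bit-at-a-time sketch using the reflected polynomial 0x82F63B78 (an illustrative fallback, not part of this patch):

```cpp
#include <cstddef>
#include <cstdint>

// Table-less CRC32C (Castagnoli): slow but dependency-free reference
// implementation, processing one input bit per inner-loop iteration.
uint32_t SoftCrc32c(const void* data, size_t len) {
    const uint8_t* p = static_cast<const uint8_t*>(data);
    uint32_t crc = 0xFFFFFFFFu;  // standard initial value
    for (size_t i = 0; i < len; ++i) {
        crc ^= p[i];
        for (int k = 0; k < 8; ++k) {
            // Conditionally XOR the reflected polynomial on the low bit.
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;    // standard final XOR
}
```

Its output matches the first rows of the accompanying test/data/checksum1.in fixture, e.g. 3251651376 for "a" and 910901175 for "abc".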

[07/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/test/data/checksum1.in
--
diff --git a/depends/libhdfs3/test/data/checksum1.in b/depends/libhdfs3/test/data/checksum1.in
new file mode 100644
index 000..36002ed
--- /dev/null
+++ b/depends/libhdfs3/test/data/checksum1.in
@@ -0,0 +1,512 @@
+3251651376 a
+3802278198 ab
+910901175 abc
+2462583345 abcd
+3293632151 abcde
+1404891121 abcdef
+3861378113 abcdefg
+177480119 abcdefgh
+769432060 abcdefghi
+3864630327 abcdefghij
+1325211590 abcdefghijk
+2610574288 abcdefghijkl
+1608251256 abcdefghijklm
+1692248097 abcdefghijklmn
+3206163554 abcdefghijklmno
+2745695973 abcdefghijklmnop
+132947879 abcdefghijklmnopq
+3025685113 abcdefghijklmnopqr
+959128355 abcdefghijklmnopqrs
+3618358858 abcdefghijklmnopqrst
+1012567298 abcdefghijklmnopqrstu
+3829648078 abcdefghijklmnopqrstuv
+2554247582 abcdefghijklmnopqrstuvw
+2540970522 abcdefghijklmnopqrstuvwx
+545015781 abcdefghijklmnopqrstuvwxy
+2665934629 abcdefghijklmnopqrstuvwxyz
+3556917021 abcdefghijklmnopqrstuvwxyza
+2099394887 abcdefghijklmnopqrstuvwxyzab
+3039258793 abcdefghijklmnopqrstuvwxyzabc
+778903086 abcdefghijklmnopqrstuvwxyzabcd
+2325842120 abcdefghijklmnopqrstuvwxyzabcde
+1556412504 abcdefghijklmnopqrstuvwxyzabcdef
+1020387900 abcdefghijklmnopqrstuvwxyzabcdefg
+3305033599 abcdefghijklmnopqrstuvwxyzabcdefgh
+1682345241 abcdefghijklmnopqrstuvwxyzabcdefghi
+819048997 abcdefghijklmnopqrstuvwxyzabcdefghij
+3209568376 abcdefghijklmnopqrstuvwxyzabcdefghijk
+2231782657 abcdefghijklmnopqrstuvwxyzabcdefghijkl
+2122123694 abcdefghijklmnopqrstuvwxyzabcdefghijklm
+2442004636 abcdefghijklmnopqrstuvwxyzabcdefghijklmn
+3000123864 abcdefghijklmnopqrstuvwxyzabcdefghijklmno
+2052942969 abcdefghijklmnopqrstuvwxyzabcdefghijklmnop
+3638449906 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopq
+3495147135 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqr
+535744311 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrs
+4805590 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrst
+3815136336 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstu
+1417829435 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuv
+1578173281 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvw
+986668785 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx
+1516830648 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxy
+1888578973 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
+3961545369 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyza
+947304317 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
+3999489454 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc
+4210835184 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcd
+4124985188 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcde
+3014917601 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdef
+4137237564 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefg
+3291776929 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefgh
+468523485 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghi
+884582506 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghij
+2686609914 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk
+3872680647 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijkl
+2603752101 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklm
+149089460 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmn
+405797677 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmno
+3160992426 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnop
+412840235 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopq
+2079508799 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqr
+1581151220 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrs
+3503642202 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrst
+738407677 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstu
+1227800614 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuv
+4051313729 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvw
+450738952 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwx
+3516160484 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxy
+1820099640 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz
+2072391370 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyza
+2056534038 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzab
+376342392 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabc
+259405161 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcd
+523365044 abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcde
+1619719143 abcdefghijklmnopqrstuvwxyzabcdefgh

[41/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gmock/include/gmock/gmock-generated-nice-strict.h
--
diff --git a/depends/libhdfs3/gmock/include/gmock/gmock-generated-nice-strict.h b/depends/libhdfs3/gmock/include/gmock/gmock-generated-nice-strict.h
new file mode 100644
index 000..4095f4d
--- /dev/null
+++ b/depends/libhdfs3/gmock/include/gmock/gmock-generated-nice-strict.h
@@ -0,0 +1,397 @@
+// This file was GENERATED by command:
+// pump.py gmock-generated-nice-strict.h.pump
+// DO NOT EDIT BY HAND!!!
+
+// Copyright 2008, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Implements class templates NiceMock, NaggyMock, and StrictMock.
+//
+// Given a mock class MockFoo that is created using Google Mock,
+// NiceMock<MockFoo> is a subclass of MockFoo that allows
+// uninteresting calls (i.e. calls to mock methods that have no
+// EXPECT_CALL specs), NaggyMock<MockFoo> is a subclass of MockFoo
+// that prints a warning when an uninteresting call occurs, and
+// StrictMock<MockFoo> is a subclass of MockFoo that treats all
+// uninteresting calls as errors.
+//
+// Currently a mock is naggy by default, so MockFoo and
+// NaggyMock<MockFoo> behave like the same.  However, we will soon
+// switch the default behavior of mocks to be nice, as that in general
+// leads to more maintainable tests.  When that happens, MockFoo will
+// stop behaving like NaggyMock<MockFoo> and start behaving like
+// NiceMock<MockFoo>.
+//
+// NiceMock, NaggyMock, and StrictMock "inherit" the constructors of
+// their respective base class, with up-to 10 arguments.  Therefore
+// you can write NiceMock<MockFoo>(5, "a") to construct a nice mock
+// where MockFoo has a constructor that accepts (int, const char*),
+// for example.
+//
+// A known limitation is that NiceMock<MockFoo>, NaggyMock<MockFoo>,
+// and StrictMock<MockFoo> only works for mock methods defined using
+// the MOCK_METHOD* family of macros DIRECTLY in the MockFoo class.
+// If a mock method is defined in a base class of MockFoo, the "nice"
+// or "strict" modifier may not affect it, depending on the compiler.
+// In particular, nesting NiceMock, NaggyMock, and StrictMock is NOT
+// supported.
+//
+// Another known limitation is that the constructors of the base mock
+// cannot have arguments passed by non-const reference, which are
+// banned by the Google C++ style guide anyway.
+
+#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_NICE_STRICT_H_
+#define GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_NICE_STRICT_H_
+
+#include "gmock/gmock-spec-builders.h"
+#include "gmock/internal/gmock-port.h"
+
+namespace testing {
+
+template <class MockClass>
+class NiceMock : public MockClass {
+ public:
+  // We don't factor out the constructor body to a common method, as
+  // we have to avoid a possible clash with members of MockClass.
+  NiceMock() {
+    ::testing::Mock::AllowUninterestingCalls(
+        internal::ImplicitCast_<MockClass*>(this));
+  }
+
+  // C++ doesn't (yet) allow inheritance of constructors, so we have
+  // to define it for each arity.
+  template <typename A1>
+  explicit NiceMock(const A1& a1) : MockClass(a1) {
+    ::testing::Mock::AllowUninterestingCalls(
+        internal::ImplicitCast_<MockClass*>(this));
+  }
+  template <typename A1, typename A2>
+  NiceMock(const A1& a1, const A2& a2) : MockClass(a1, a2) {
+    ::testing::Mock::AllowUninterestingCalls(
+        internal::ImplicitCast_<MockClass*>(this));
+  }
+
+  template <typename A1, typename A2, typename A3>
+  NiceMock(const A1& a1, const A2& a2, const A3& a3) : MockClas

[14/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/common/Thread.cpp
--
diff --git a/depends/libhdfs3/src/common/Thread.cpp b/depends/libhdfs3/src/common/Thread.cpp
new file mode 100644
index 000..14cf217
--- /dev/null
+++ b/depends/libhdfs3/src/common/Thread.cpp
@@ -0,0 +1,54 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <signal.h>
+
+#include "Thread.h"
+
+namespace Hdfs {
+namespace Internal {
+
+sigset_t ThreadBlockSignal() {
+    sigset_t sigs;
+    sigset_t oldMask;
+    sigemptyset(&sigs);
+    sigaddset(&sigs, SIGHUP);
+    sigaddset(&sigs, SIGINT);
+    sigaddset(&sigs, SIGTERM);
+    sigaddset(&sigs, SIGUSR1);
+    sigaddset(&sigs, SIGUSR2);
+    pthread_sigmask(SIG_BLOCK, &sigs, &oldMask);
+    return oldMask;
+}
+
+void ThreadUnBlockSignal(sigset_t sigs) {
+    pthread_sigmask(SIG_SETMASK, &sigs, 0);
+}
+
+}
+}
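The block/restore pattern implemented above (build a signal set, mask it with `pthread_sigmask(SIG_BLOCK, ...)` so a background thread will not receive those signals, then restore the saved mask) can be exercised in miniature on any POSIX system:

```cpp
#include <cassert>
#include <csignal>
#include <pthread.h>

// Mirrors ThreadBlockSignal/ThreadUnBlockSignal above, using a single
// signal for brevity: block SIGTERM for the calling thread, then restore
// the previous mask.  Asserts only check the sigset bookkeeping.
void demo_block_restore() {
    sigset_t sigs, oldMask;
    sigemptyset(&sigs);
    sigaddset(&sigs, SIGTERM);
    assert(sigismember(&sigs, SIGTERM) == 1);
    assert(sigismember(&sigs, SIGINT) == 0);  // only SIGTERM was added

    pthread_sigmask(SIG_BLOCK, &sigs, &oldMask);   // like ThreadBlockSignal()
    pthread_sigmask(SIG_SETMASK, &oldMask, 0);     // like ThreadUnBlockSignal()
}
```

Threads created while the mask is in force inherit it, which is why the CREATE_THREAD macro in Thread.h wraps thread creation between the two calls.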

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/common/Thread.h
--
diff --git a/depends/libhdfs3/src/common/Thread.h b/depends/libhdfs3/src/common/Thread.h
new file mode 100644
index 000..4c2401e
--- /dev/null
+++ b/depends/libhdfs3/src/common/Thread.h
@@ -0,0 +1,107 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#ifndef _HDFS_LIBHDFS3_COMMON_THREAD_H_
+#define _HDFS_LIBHDFS3_COMMON_THREAD_H_
+
+#include "platform.h"
+
+#include <signal.h>
+
+#ifdef NEED_BOOST
+
+#include <boost/thread.hpp>
+
+namespace Hdfs {
+namespace Internal {
+
+using boost::thread;
+using boost::mutex;
+using boost::lock_guard;
+using boost::unique_lock;
+using boost::condition_variable;
+using boost::defer_lock_t;
+using boost::once_flag;
+using boost::call_once;
+using namespace boost::this_thread;
+
+}
+}
+
+#else
+
+#include <condition_variable>
+#include <mutex>
+#include <thread>
+
+namespace Hdfs {
+namespace Internal {
+
+using std::thread;
+using std::mutex;
+using std::lock_guard;
+using std::unique_lock;
+using std::condition_variable;
+using std::defer_lock_t;
+using std::once_flag;
+using std::call_once;
+using namespace std::this_thread;
+
+}
+}
+#endif
+
+namespace Hdfs {
+namespace Internal {
+
+/*
+ * make the background thread ignore these signals (which should allow that
+ * they be delivered to the main thread)
+ */
+sigset_t ThreadBlockSignal();
+
+/*
+ * Restore previous signals.
+ */
+void ThreadUnBlockSignal(sigset_t sigs);
+
+}
+}
+
+#define CREATE_THREAD(retval, fun) \
+do { \
+sigset_t sigs = Hdfs::Internal::Thr

[27/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/internal/gtest-type-util.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/internal/gtest-type-util.h b/depends/libhdfs3/gtest/include/gtest/internal/gtest-type-util.h
new file mode 100644
index 000..e46f7cf
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/internal/gtest-type-util.h
@@ -0,0 +1,3331 @@
+// This file was GENERATED by command:
+// pump.py gtest-type-util.h.pump
+// DO NOT EDIT BY HAND!!!
+
+// Copyright 2008 Google Inc.
+// All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Type utilities needed for implementing typed and type-parameterized
+// tests.  This file is generated by a SCRIPT.  DO NOT EDIT BY HAND!
+//
+// Currently we support at most 50 types in a list, and at most 50
+// type-parameterized tests in one type-parameterized test case.
+// Please contact googletestframew...@googlegroups.com if you need
+// more.
+
+#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
+#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_
+
+#include "gtest/internal/gtest-port.h"
+
+// #ifdef __GNUC__ is too general here.  It is possible to use gcc without
+// using libstdc++ (which is where cxxabi.h comes from).
+# if GTEST_HAS_CXXABI_H_
+#  include <cxxabi.h>
+# elif defined(__HP_aCC)
+#  include <acxx_demangle.h>
+# endif  // GTEST_HAS_CXXABI_H_
+
+namespace testing {
+namespace internal {
+
+// GetTypeName() returns a human-readable name of type T.
+// NB: This function is also used in Google Mock, so don't move it inside of
+// the typed-test-only section below.
+template <typename T>
+std::string GetTypeName() {
+# if GTEST_HAS_RTTI
+
+  const char* const name = typeid(T).name();
+#  if GTEST_HAS_CXXABI_H_ || defined(__HP_aCC)
+  int status = 0;
+  // gcc's implementation of typeid(T).name() mangles the type name,
+  // so we have to demangle it.
+#   if GTEST_HAS_CXXABI_H_
+  using abi::__cxa_demangle;
+#   endif  // GTEST_HAS_CXXABI_H_
+  char* const readable_name = __cxa_demangle(name, 0, 0, &status);
+  const std::string name_str(status == 0 ? readable_name : name);
+  free(readable_name);
+  return name_str;
+#  else
+  return name;
+#  endif  // GTEST_HAS_CXXABI_H_ || __HP_aCC
+
+# else
+
+  return "";
+
+# endif  // GTEST_HAS_RTTI
+}
+
+#if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P
+
+// AssertTypeEq<T1, T2>::type is defined iff T1 and T2 are the same
+// type.  This can be used as a compile-time assertion to ensure that
+// two types are equal.
+
+template <typename T1, typename T2>
+struct AssertTypeEq;
+
+template <typename T>
+struct AssertTypeEq<T, T> {
+  typedef bool type;
+};
+
+// A unique type used as the default value for the arguments of class
+// template Types.  This allows us to simulate variadic templates
+// (e.g. Types<int>, Types<int, double>, and etc), which C++ doesn't
+// support directly.
+struct None {};
+
+// The following family of struct and struct templates are used to
+// represent type lists.  In particular, TypesN<T1, T2, ..., TN>
+// represents a type list with N types (T1, T2, ..., and TN) in it.
+// Except for Types0, every struct in the family has two member types:
+// Head for the first type in the list, and Tail for the rest of the
+// list.
+
+// The empty type list.
+struct Types0 {};
+
+// Type lists of length 1, 2, 3, and so on.
+
+template <typename T1>
+struct Types1 {
+  typedef T1 Head;
+  typedef Types0 Tail;
+};
+template <typename T1, typename T2>
+struct Types2 {
+  typedef T1 He

[12/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/network/DomainSocket.cpp
--
diff --git a/depends/libhdfs3/src/network/DomainSocket.cpp b/depends/libhdfs3/src/network/DomainSocket.cpp
new file mode 100644
index 000..2675433
--- /dev/null
+++ b/depends/libhdfs3/src/network/DomainSocket.cpp
@@ -0,0 +1,159 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "platform.h"
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "DateTime.h"
+#include "Exception.h"
+#include "ExceptionInternal.h"
+#include "DomainSocket.h"
+#include "Syscall.h"
+
+namespace Hdfs {
+namespace Internal {
+
+DomainSocketImpl::DomainSocketImpl() {}
+
+DomainSocketImpl::~DomainSocketImpl() {}
+
+void DomainSocketImpl::connect(const char *host, int port, int timeout) {
+  connect(host, "", timeout);
+}
+
+void DomainSocketImpl::connect(const char *host, const char *port,
+   int timeout) {
+  remoteAddr = host;
+  assert(-1 == sock);
+  sock = HdfsSystem::socket(AF_UNIX, SOCK_STREAM, 0);
+
+  if (-1 == sock) {
+THROW(HdfsNetworkException, "Create socket failed when connect to %s: %s",
+  remoteAddr.c_str(), GetSystemErrorInfo(errno));
+  }
+
+  try {
+int len, rc;
+disableSigPipe();
+struct sockaddr_un addr;
+memset(&addr, 0, sizeof(addr));
+addr.sun_family = AF_UNIX;
+rc = snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", host);
+
+if (rc < 0 || rc >= static_cast<int>(sizeof(addr.sun_path))) {
+  THROW(HdfsNetworkException, "error computing UNIX domain socket path: %s",
+remoteAddr.c_str());
+}
+
+len = offsetof(struct sockaddr_un, sun_path) + strlen(addr.sun_path);
+
+do {
+  rc = HdfsSystem::connect(sock, (struct sockaddr *)&addr, len);
+} while (rc < 0 && EINTR == errno && !CheckOperationCanceled());
+
+if (rc < 0) {
+  THROW(HdfsNetworkConnectException, "Connect to \"%s:\" failed: %s", host,
+GetSystemErrorInfo(errno));
+}
+  } catch (...) {
+close();
+throw;
+  }
+}
+
+void DomainSocketImpl::connect(struct addrinfo *paddr, const char *host,
+   const char *port, int timeout) {
+  assert(false && "not implemented");
+  abort();
+}
+
+int32_t DomainSocketImpl::receiveFileDescriptors(int fds[], size_t nfds,
+ char *buffer, int32_t size) {
+  assert(-1 != sock);
+  ssize_t rc;
+  struct iovec iov[1];
+  struct msghdr msg;
+
+  iov[0].iov_base = buffer;
+  iov[0].iov_len = size;
+
+  struct cmsghdr *cmsg;
+  size_t auxSize = CMSG_SPACE(sizeof(int) * nfds);
+  std::vector<char> aux(auxSize, 0); /* ancillary data buffer */
+
+  memset(&msg, 0, sizeof(msg));
+  msg.msg_iov = &iov[0];
+  msg.msg_iovlen = 1;
+  msg.msg_control = &aux[0];
+  msg.msg_controllen = aux.size();
+  cmsg = CMSG_FIRSTHDR(&msg);
+  cmsg->cmsg_level = SOL_SOCKET;
+  cmsg->cmsg_type = SCM_RIGHTS;
+  cmsg->cmsg_len = CMSG_LEN(sizeof(int) * nfds);
+  msg.msg_controllen = cmsg->cmsg_len;
+
+  do {
+rc = HdfsSystem::recvmsg(sock, &msg, 0);
+  } while (-1 == rc && EINTR == errno && !CheckOperationCanceled());
+
+  if (-1 == rc) {
+THROW(HdfsNetworkException, "Read file descriptors failed from %s: %s",
+  remoteAddr.c_str(), GetSystemErrorInfo(errno));
+  }
+
+  if (0 == rc) {
+THROW(HdfsEndOfStream,
+  "Read file descriptors failed from %s: End of the stream",
+  remoteAddr.c_str());
+  }
+
+  if (msg.msg_controllen != cmsg->cmsg_len) {
+THROW(HdfsEndOfStream, "Read file descriptors failed from %s.",
+  remoteAddr.c_str());
+  }
+

[34/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/gtest.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/gtest.h b/depends/libhdfs3/gtest/include/gtest/gtest.h
new file mode 100644
index 000..6fa0a39
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/gtest.h
@@ -0,0 +1,2291 @@
+// Copyright 2005, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+//
+// The Google C++ Testing Framework (Google Test)
+//
+// This header file defines the public API for Google Test.  It should be
+// included by any test program that uses Google Test.
+//
+// IMPORTANT NOTE: Due to limitation of the C++ language, we have to
+// leave some internal implementation details in this header file.
+// They are clearly marked by comments like this:
+//
+//   // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.
+//
+// Such code is NOT meant to be used by a user directly, and is subject
+// to CHANGE WITHOUT NOTICE.  Therefore DO NOT DEPEND ON IT in a user
+// program!
+//
+// Acknowledgment: Google Test borrowed the idea of automatic test
+// registration from Barthelemy Dagenais' (barthel...@prologique.com)
+// easyUnit framework.
+
+#ifndef GTEST_INCLUDE_GTEST_GTEST_H_
+#define GTEST_INCLUDE_GTEST_GTEST_H_
+
+#include <limits>
+#include <ostream>
+#include <vector>
+
+#include "gtest/internal/gtest-internal.h"
+#include "gtest/internal/gtest-string.h"
+#include "gtest/gtest-death-test.h"
+#include "gtest/gtest-message.h"
+#include "gtest/gtest-param-test.h"
+#include "gtest/gtest-printers.h"
+#include "gtest/gtest_prod.h"
+#include "gtest/gtest-test-part.h"
+#include "gtest/gtest-typed-test.h"
+
+// Depending on the platform, different string classes are available.
+// On Linux, in addition to ::std::string, Google also makes use of
+// class ::string, which has the same interface as ::std::string, but
+// has a different implementation.
+//
+// The user can define GTEST_HAS_GLOBAL_STRING to 1 to indicate that
+// ::string is available AND is a distinct type to ::std::string, or
+// define it to 0 to indicate otherwise.
+//
+// If the user's ::std::string and ::string are the same class due to
+// aliasing, he should define GTEST_HAS_GLOBAL_STRING to 0.
+//
+// If the user doesn't define GTEST_HAS_GLOBAL_STRING, it is defined
+// heuristically.
+
+namespace testing {
+
+// Declares the flags.
+
+// This flag temporarily enables the disabled tests.
+GTEST_DECLARE_bool_(also_run_disabled_tests);
+
+// This flag brings the debugger on an assertion failure.
+GTEST_DECLARE_bool_(break_on_failure);
+
+// This flag controls whether Google Test catches all test-thrown exceptions
+// and logs them as failures.
+GTEST_DECLARE_bool_(catch_exceptions);
+
+// This flag enables using colors in terminal output. Available values are
+// "yes" to enable colors, "no" (disable colors), or "auto" (the default)
+// to let Google Test decide.
+GTEST_DECLARE_string_(color);
+
+// This flag sets up the filter to select by name using a glob pattern
+// the tests to run. If the filter is not given all tests are executed.
+GTEST_DECLARE_string_(filter);
+
+// This flag causes the Google Test to list tests. None of the tests listed
+// are actually run if the flag is provided.
+GTEST_DECLARE_bool_(list_tests);
+
+// This flag controls whether Google Test emits a detailed XML report to a file
+// in addition to i

[24/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/src/gtest-test-part.cc
--
diff --git a/depends/libhdfs3/gtest/src/gtest-test-part.cc b/depends/libhdfs3/gtest/src/gtest-test-part.cc
new file mode 100644
index 000..c60eef3
--- /dev/null
+++ b/depends/libhdfs3/gtest/src/gtest-test-part.cc
@@ -0,0 +1,110 @@
+// Copyright 2008, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: mhe...@google.com (Markus Heule)
+//
+// The Google C++ Testing Framework (Google Test)
+
+#include "gtest/gtest-test-part.h"
+
+// Indicates that this translation unit is part of Google Test's
+// implementation.  It must come before gtest-internal-inl.h is
+// included, or there will be a compiler error.  This trick is to
+// prevent a user from accidentally including gtest-internal-inl.h in
+// his code.
+#define GTEST_IMPLEMENTATION_ 1
+#include "src/gtest-internal-inl.h"
+#undef GTEST_IMPLEMENTATION_
+
+namespace testing {
+
+using internal::GetUnitTestImpl;
+
+// Gets the summary of the failure message by omitting the stack trace
+// in it.
+std::string TestPartResult::ExtractSummary(const char* message) {
+  const char* const stack_trace = strstr(message, internal::kStackTraceMarker);
+  return stack_trace == NULL ? message :
+  std::string(message, stack_trace);
+}
+
+// Prints a TestPartResult object.
+std::ostream& operator<<(std::ostream& os, const TestPartResult& result) {
+  return os
+  << result.file_name() << ":" << result.line_number() << ": "
+  << (result.type() == TestPartResult::kSuccess ? "Success" :
+  result.type() == TestPartResult::kFatalFailure ? "Fatal failure" :
+  "Non-fatal failure") << ":\n"
+  << result.message() << std::endl;
+}
+
+// Appends a TestPartResult to the array.
+void TestPartResultArray::Append(const TestPartResult& result) {
+  array_.push_back(result);
+}
+
+// Returns the TestPartResult at the given index (0-based).
+const TestPartResult& TestPartResultArray::GetTestPartResult(int index) const {
+  if (index < 0 || index >= size()) {
+printf("\nInvalid index (%d) into TestPartResultArray.\n", index);
+internal::posix::Abort();
+  }
+
+  return array_[index];
+}
+
+// Returns the number of TestPartResult objects in the array.
+int TestPartResultArray::size() const {
+  return static_cast<int>(array_.size());
+}
+
+namespace internal {
+
+HasNewFatalFailureHelper::HasNewFatalFailureHelper()
+: has_new_fatal_failure_(false),
+  original_reporter_(GetUnitTestImpl()->
+ GetTestPartResultReporterForCurrentThread()) {
+  GetUnitTestImpl()->SetTestPartResultReporterForCurrentThread(this);
+}
+
+HasNewFatalFailureHelper::~HasNewFatalFailureHelper() {
+  GetUnitTestImpl()->SetTestPartResultReporterForCurrentThread(
+  original_reporter_);
+}
+
+void HasNewFatalFailureHelper::ReportTestPartResult(
+const TestPartResult& result) {
+  if (result.fatally_failed())
+has_new_fatal_failure_ = true;
+  original_reporter_->ReportTestPartResult(result);
+}
+
+}  // namespace internal
+
+}  // namespace testing

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/src/gtest-typed-test.cc
--
diff --git a/depends/libhdfs3/gtest/src/gtest-typed-test.cc b/depends/libhdfs3/gtest/src/gtest-typed-test.cc
new file mode 100

[03/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/test/function/TestOutputStream.cpp
--
diff --git a/depends/libhdfs3/test/function/TestOutputStream.cpp b/depends/libhdfs3/test/function/TestOutputStream.cpp
new file mode 100644
index 000..da0162f
--- /dev/null
+++ b/depends/libhdfs3/test/function/TestOutputStream.cpp
@@ -0,0 +1,774 @@
+/********************************************************************
+ * Copyright (c) 2013 - 2014, Pivotal Inc.
+ * All rights reserved.
+ *
+ * Author: Zhanwei Wang
+ ********************************************************************/
+/********************************************************************
+ * 2014 -
+ * open source under Apache License Version 2.0
+ ********************************************************************/
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+#include "client/FileSystem.h"
+#include "client/InputStream.h"
+#include "client/OutputStream.h"
+#include "DateTime.h"
+#include "Exception.h"
+#include "ExceptionInternal.h"
+#include "gtest/gtest.h"
+#include "Memory.h"
+#include "TestUtil.h"
+#include "Thread.h"
+#include "XmlConfig.h"
+
+#include 
+#include 
+#include 
+#include 
+
+#ifndef TEST_HDFS_PREFIX
+#define TEST_HDFS_PREFIX "./"
+#endif
+
+#define BASE_DIR TEST_HDFS_PREFIX"/testOutputStream/"
+
+using namespace Hdfs;
+using namespace Hdfs::Internal;
+
+class TestOutputStream: public ::testing::Test {
+public:
+TestOutputStream() :
+conf("function-test.xml") {
+conf.set("output.default.packetsize", 1024);
+fs = new FileSystem(conf);
+fs->connect();
+superfs = new FileSystem(conf);
+superfs->connect(conf.getString("dfs.default.uri"), HDFS_SUPERUSER, NULL);
+superfs->setWorkingDirectory(fs->getWorkingDirectory().c_str());
+
+try {
+superfs->deletePath(BASE_DIR, true);
+} catch (...) {
+}
+
+superfs->mkdirs(BASE_DIR, 0755);
+superfs->setOwner(TEST_HDFS_PREFIX, USER, NULL);
+superfs->setOwner(BASE_DIR, USER, NULL);
+}
+
+~TestOutputStream() {
+try {
+superfs->deletePath(BASE_DIR, true);
+} catch (...) {
+}
+
+fs->disconnect();
+delete fs;
+superfs->disconnect();
+delete superfs;
+}
+
+/**
+ * size the size will be the size of a chunk or the size of a packet or the size of a block, the default blocksize is 2048
+ */
+void CheckWrite(size_t size, int flag) {
+char buffer[size], buffer2[size - 100], buffer3[size + 100], readBuffer[2560];
+//check write a chunk|packet|block
+ASSERT_NO_THROW(ous.open(*fs, BASE_DIR"testWrite", flag, 0644, false, 0, 2048));
+FillBuffer(buffer, sizeof(buffer), 0);
+ASSERT_NO_THROW(ous.append(buffer, sizeof(buffer)));
+ASSERT_NO_THROW(ous.close());
+ASSERT_NO_THROW(ins.open(*fs, BASE_DIR"testWrite", true));
+ASSERT_NO_THROW(ins.readFully(readBuffer, sizeof(buffer)));
+ASSERT_EQ(CheckBuffer(readBuffer, sizeof(buffer), 0), true);
+ASSERT_NO_THROW(ins.close());
+ASSERT_EQ(fs->deletePath(BASE_DIR"testWrite", false), true);
+//check write less than a chunk|packet|block
+ASSERT_NO_THROW(ous.open(*fs, BASE_DIR"testWrite", flag, 0644, false, 0, 2048));
+FillBuffer(buffer2, sizeof(buffer2), 0);
+ASSERT_NO_THROW(ous.append(buffer2, sizeof(buffer2)));
+ASSERT_NO_THROW(ous.close());
+ASSERT_NO_THROW(ins.open(*fs, BASE_DIR"testWrite", false));
+ASSERT_NO_THROW(ins.readFully(readBuffer, sizeof(buffer2)));
+ASSERT_EQ(CheckBuffer(readBuffer, sizeof(buffer2), 0), true);
+ASSERT_NO_THROW(ins.close());
+ASSERT_EQ(fs->deletePath(BASE_DIR"testWrite", false), true);
+//check write greater than a chunk|packet|block
+ASSERT_NO_THROW(ous.open(*fs, BASE_DIR"testWrite", flag, 0644, false, 0, 2048));
+FillBuffer(buffer3, sizeof(buffer3), 0);
+ASSERT_NO_THROW(ous.append(buffer3, sizeof(buffer3)));
+ASSERT_NO_THROW(ous.close());
+ASSERT_NO_T

[37/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gmock/src/gmock-spec-builders.cc
--
diff --git a/depends/libhdfs3/gmock/src/gmock-spec-builders.cc b/depends/libhdfs3/gmock/src/gmock-spec-builders.cc
new file mode 100644
index 000..abaae3a
--- /dev/null
+++ b/depends/libhdfs3/gmock/src/gmock-spec-builders.cc
@@ -0,0 +1,813 @@
+// Copyright 2007, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Google Mock - a framework for writing C++ mock classes.
+//
+// This file implements the spec builder syntax (ON_CALL and
+// EXPECT_CALL).
+
+#include "gmock/gmock-spec-builders.h"
+
+#include <stdlib.h>
+#include <iostream>  // NOLINT
+#include <map>
+#include <set>
+#include <string>
+#include "gmock/gmock.h"
+#include "gtest/gtest.h"
+
+#if GTEST_OS_CYGWIN || GTEST_OS_LINUX || GTEST_OS_MAC
+# include <unistd.h>  // NOLINT
+#endif
+
+namespace testing {
+namespace internal {
+
+// Protects the mock object registry (in class Mock), all function
+// mockers, and all expectations.
+GTEST_API_ GTEST_DEFINE_STATIC_MUTEX_(g_gmock_mutex);
+
+// Logs a message including file and line number information.
+GTEST_API_ void LogWithLocation(testing::internal::LogSeverity severity,
+const char* file, int line,
+const string& message) {
+  ::std::ostringstream s;
+  s << file << ":" << line << ": " << message << ::std::endl;
+  Log(severity, s.str(), 0);
+}
+
+// Constructs an ExpectationBase object.
+ExpectationBase::ExpectationBase(const char* a_file,
+ int a_line,
+ const string& a_source_text)
+: file_(a_file),
+  line_(a_line),
+  source_text_(a_source_text),
+  cardinality_specified_(false),
+  cardinality_(Exactly(1)),
+  call_count_(0),
+  retired_(false),
+  extra_matcher_specified_(false),
+  repeated_action_specified_(false),
+  retires_on_saturation_(false),
+  last_clause_(kNone),
+  action_count_checked_(false) {}
+
+// Destructs an ExpectationBase object.
+ExpectationBase::~ExpectationBase() {}
+
+// Explicitly specifies the cardinality of this expectation.  Used by
+// the subclasses to implement the .Times() clause.
+void ExpectationBase::SpecifyCardinality(const Cardinality& a_cardinality) {
+  cardinality_specified_ = true;
+  cardinality_ = a_cardinality;
+}
+
+// Retires all pre-requisites of this expectation.
+void ExpectationBase::RetireAllPreRequisites()
+GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex) {
+  if (is_retired()) {
+// We can take this short-cut as we never retire an expectation
+// until we have retired all its pre-requisites.
+return;
+  }
+
+  for (ExpectationSet::const_iterator it = immediate_prerequisites_.begin();
+   it != immediate_prerequisites_.end(); ++it) {
+ExpectationBase* const prerequisite = it->expectation_base().get();
+if (!prerequisite->is_retired()) {
+  prerequisite->RetireAllPreRequisites();
+  prerequisite->Retire();
+}
+  }
+}
+
+// Returns true iff all pre-requisites of this expectation have been
+// satisfied.
+bool ExpectationBase::AllPrerequisitesAreSatisfied() const
+GTEST_EXCLUSIVE_LOCK_REQUIRED_(g_gmock_mutex) {
+  g_gmock_mutex.AssertHeld();
+  for (ExpectationSet::const_iterator it = immediate_prerequisites_.begin();
+   it != immediate_prerequi

[48/48] incubator-hawq git commit: Merge branch 'master' into HAWQ-617

2016-04-03 Thread bhuvnesh2703
Merge branch 'master' into HAWQ-617


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/c5c7d8fc
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/c5c7d8fc
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/c5c7d8fc

Branch: refs/heads/HAWQ-617
Commit: c5c7d8fc0757822d21e18958e03d1e536fc27a56
Parents: 53d264e e8fcfb0
Author: Bhuvnesh Chaudhary 
Authored: Sun Apr 3 22:10:31 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Sun Apr 3 22:10:31 2016 -0700

--
 LICENSE |   18 +-
 .../CMake/CMakeTestCompileNestedException.cpp   |   10 +
 .../CMake/CMakeTestCompileSteadyClock.cpp   |7 +
 .../libhdfs3/CMake/CMakeTestCompileStrerror.cpp |   10 +
 depends/libhdfs3/CMake/CodeCoverage.cmake   |   48 +
 depends/libhdfs3/CMake/FindBoost.cmake  | 1162 
 depends/libhdfs3/CMake/FindGSasl.cmake  |   26 +
 depends/libhdfs3/CMake/FindKERBEROS.cmake   |   23 +
 depends/libhdfs3/CMake/FindLibUUID.cmake|   23 +
 depends/libhdfs3/CMake/Functions.cmake  |   46 +
 depends/libhdfs3/CMake/Options.cmake|  169 +
 depends/libhdfs3/CMake/Platform.cmake   |   33 +
 depends/libhdfs3/CMakeLists.txt |   62 +
 depends/libhdfs3/README.md  |   86 +
 depends/libhdfs3/bootstrap  |  122 +
 depends/libhdfs3/debian/.gitignore  |   15 +
 depends/libhdfs3/debian/build.sh|  100 +
 depends/libhdfs3/debian/changelog.in|5 +
 depends/libhdfs3/debian/compat  |1 +
 depends/libhdfs3/debian/control |   31 +
 depends/libhdfs3/debian/copyright   |   23 +
 depends/libhdfs3/debian/libhdfs3-dev.dirs   |2 +
 depends/libhdfs3/debian/libhdfs3-dev.install|4 +
 .../debian/libhdfs3-dev.lintian-overrides   |1 +
 depends/libhdfs3/debian/libhdfs3.dirs   |1 +
 depends/libhdfs3/debian/libhdfs3.install|1 +
 .../libhdfs3/debian/libhdfs3.lintian-overrides  |1 +
 depends/libhdfs3/debian/rules   |   24 +
 depends/libhdfs3/debian/source/format   |1 +
 depends/libhdfs3/gmock/CMakeLists.txt   |   31 +
 depends/libhdfs3/gmock/COPYING  |   28 +
 .../gmock/include/gmock/gmock-actions.h | 1078 
 .../gmock/include/gmock/gmock-cardinalities.h   |  147 +
 .../include/gmock/gmock-generated-actions.h | 2415 
 .../gmock/gmock-generated-function-mockers.h|  991 
 .../include/gmock/gmock-generated-matchers.h| 2190 
 .../include/gmock/gmock-generated-nice-strict.h |  397 ++
 .../gmock/include/gmock/gmock-matchers.h| 3986 ++
 .../gmock/include/gmock/gmock-more-actions.h|  233 +
 .../gmock/include/gmock/gmock-more-matchers.h   |   58 +
 .../gmock/include/gmock/gmock-spec-builders.h   | 1791 ++
 depends/libhdfs3/gmock/include/gmock/gmock.h|   94 +
 .../internal/gmock-generated-internal-utils.h   |  279 +
 .../gmock/internal/gmock-internal-utils.h   |  498 ++
 .../gmock/include/gmock/internal/gmock-port.h   |   78 +
 .../libhdfs3/gmock/src/gmock-cardinalities.cc   |  156 +
 .../libhdfs3/gmock/src/gmock-internal-utils.cc  |  174 +
 depends/libhdfs3/gmock/src/gmock-matchers.cc|  498 ++
 .../libhdfs3/gmock/src/gmock-spec-builders.cc   |  813 +++
 depends/libhdfs3/gmock/src/gmock.cc |  182 +
 depends/libhdfs3/gtest/CMakeLists.txt   |   28 +
 .../gtest/include/gtest/gtest-death-test.h  |  294 +
 .../gtest/include/gtest/gtest-message.h |  250 +
 .../gtest/include/gtest/gtest-param-test.h  | 1421 +
 .../gtest/include/gtest/gtest-printers.h|  855 +++
 .../libhdfs3/gtest/include/gtest/gtest-spi.h|  232 +
 .../gtest/include/gtest/gtest-test-part.h   |  179 +
 .../gtest/include/gtest/gtest-typed-test.h  |  259 +
 depends/libhdfs3/gtest/include/gtest/gtest.h| 2291 
 .../gtest/include/gtest/gtest_pred_impl.h   |  358 ++
 .../libhdfs3/gtest/include/gtest/gtest_prod.h   |   58 +
 .../gtest/internal/gtest-death-test-internal.h  |  319 ++
 .../include/gtest/internal/gtest-filepath.h |  206 +
 .../include/gtest/internal/gtest-internal.h | 1158 
 .../include/gtest/internal/gtest-linked_ptr.h   |  233 +
 .../gtest/internal/gtest-param-util-generated.h | 5143 ++
 .../include/gtest/internal/gtest-param-util.h   |  619 +++
 .../gtest/include/gtest/internal/gtest-port.h   | 1947 +++
 .../gtest/include/gtest/internal/gtest-string.h |  167 +
 .../gtest/include/gtest/internal/gtest-tuple.h  | 1012 
 .../include/gtest/internal/gtest-type-util.h| 3331 
 depends/libhdfs3/gtest/src/gtest-death-test.cc  | 1344 +
 depends/libhdfs3/gtest/src/gtest-filepath.cc|  382 ++
 depends/

[30/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/internal/gtest-param-util.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/internal/gtest-param-util.h b/depends/libhdfs3/gtest/include/gtest/internal/gtest-param-util.h
new file mode 100644
index 000..d5e1028
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/internal/gtest-param-util.h
@@ -0,0 +1,619 @@
+// Copyright 2008 Google Inc.
+// All Rights Reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: vl...@google.com (Vlad Losev)
+
+// Type and function utilities for implementing parameterized tests.
+
+#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
+#define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_
+
+#include <iterator>
+#include <utility>
+#include <vector>
+
+// scripts/fuse_gtest.py depends on gtest's own header being #included
+// *unconditionally*.  Therefore these #includes cannot be moved
+// inside #if GTEST_HAS_PARAM_TEST.
+#include "gtest/internal/gtest-internal.h"
+#include "gtest/internal/gtest-linked_ptr.h"
+#include "gtest/internal/gtest-port.h"
+#include "gtest/gtest-printers.h"
+
+#if GTEST_HAS_PARAM_TEST
+
+namespace testing {
+namespace internal {
+
+// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.
+//
+// Outputs a message explaining invalid registration of different
+// fixture class for the same test case. This may happen when
+// TEST_P macro is used to define two tests with the same name
+// but in different namespaces.
+GTEST_API_ void ReportInvalidTestCaseType(const char* test_case_name,
+  const char* file, int line);
+
+template <typename T> class ParamGeneratorInterface;
+template <typename T> class ParamGenerator;
+
+// Interface for iterating over elements provided by an implementation
+// of ParamGeneratorInterface.
+template <typename T>
+class ParamIteratorInterface {
+ public:
+  virtual ~ParamIteratorInterface() {}
+  // A pointer to the base generator instance.
+  // Used only for the purposes of iterator comparison
+  // to make sure that two iterators belong to the same generator.
+  virtual const ParamGeneratorInterface<T>* BaseGenerator() const = 0;
+  // Advances iterator to point to the next element
+  // provided by the generator. The caller is responsible
+  // for not calling Advance() on an iterator equal to
+  // BaseGenerator()->End().
+  virtual void Advance() = 0;
+  // Clones the iterator object. Used for implementing copy semantics
+  // of ParamIterator.
+  virtual ParamIteratorInterface<T>* Clone() const = 0;
+  // Dereferences the current iterator and provides (read-only) access
+  // to the pointed value. It is the caller's responsibility not to call
+  // Current() on an iterator equal to BaseGenerator()->End().
+  // Used for implementing ParamGenerator::operator*().
+  virtual const T* Current() const = 0;
+  // Determines whether the given iterator and other point to the same
+  // element in the sequence generated by the generator.
+  // Used for implementing ParamGenerator::operator==().
+  virtual bool Equals(const ParamIteratorInterface<T>& other) const = 0;
+};
+
+// Class iterating over elements provided by an implementation of
+// ParamGeneratorInterface<T>. It wraps ParamIteratorInterface<T>
+// and implements the const forward iterator concept.
+template <typename T>
+class ParamIterator {
+ public:
+  typedef T value_type;
+  typedef const T& reference;
+  typedef ptrdiff_t difference_type;
+
+  // Param
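The pure-virtual contract above (Advance/Clone/Current/Equals, plus the BaseGenerator identity check used to ensure two iterators come from the same generator) can be sketched loosely outside C++. The Python names below are illustrative only, not part of gtest:

```python
from abc import ABC, abstractmethod

class ParamIteratorInterface(ABC):
    """Rough analogue of gtest's ParamIteratorInterface<T> contract."""
    @abstractmethod
    def advance(self): ...      # move to the next generated element
    @abstractmethod
    def clone(self): ...        # copy semantics for the wrapping iterator
    @abstractmethod
    def current(self): ...      # read-only access to the pointed value
    @abstractmethod
    def equals(self, other): ...  # same position in the same sequence?

class RangeIterator(ParamIteratorInterface):
    """Trivial generator-backed iterator over 0..end-1."""
    def __init__(self, value, end):
        self.value, self.end = value, end
    def advance(self):
        self.value += 1
    def clone(self):
        return RangeIterator(self.value, self.end)
    def current(self):
        return self.value
    def equals(self, other):
        return self.value == other.value

it = RangeIterator(0, 3)
it.advance()
print(it.current())  # -> 1
```

Cloning matters here for the same reason it does in gtest: the public `ParamIterator` wrapper copies freely while each copy advances independently.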

[35/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gtest/include/gtest/gtest-printers.h
--
diff --git a/depends/libhdfs3/gtest/include/gtest/gtest-printers.h b/depends/libhdfs3/gtest/include/gtest/gtest-printers.h
new file mode 100644
index 000..0639d9f
--- /dev/null
+++ b/depends/libhdfs3/gtest/include/gtest/gtest-printers.h
@@ -0,0 +1,855 @@
+// Copyright 2007, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Google Test - The Google C++ Testing Framework
+//
+// This file implements a universal value printer that can print a
+// value of any type T:
+//
+//   void ::testing::internal::UniversalPrinter<T>::Print(value, ostream_ptr);
+//
+// A user can teach this function how to print a class type T by
+// defining either operator<<() or PrintTo() in the namespace that
+// defines T.  More specifically, the FIRST defined function in the
+// following list will be used (assuming T is defined in namespace
+// foo):
+//
+//   1. foo::PrintTo(const T&, ostream*)
+//   2. operator<<(ostream&, const T&) defined in either foo or the
+//  global namespace.
+//
+// If none of the above is defined, it will print the debug string of
+// the value if it is a protocol buffer, or print the raw bytes in the
+// value otherwise.
+//
+// To aid debugging: when T is a reference type, the address of the
+// value is also printed; when T is a (const) char pointer, both the
+// pointer value and the NUL-terminated string it points to are
+// printed.
+//
+// We also provide some convenient wrappers:
+//
+//   // Prints a value to a string.  For a (const or not) char
+//   // pointer, the NUL-terminated string (but not the pointer) is
+//   // printed.
+//   std::string ::testing::PrintToString(const T& value);
+//
+//   // Prints a value tersely: for a reference type, the referenced
+//   // value (but not the address) is printed; for a (const or not) char
+//   // pointer, the NUL-terminated string (but not the pointer) is
+//   // printed.
+//   void ::testing::internal::UniversalTersePrint(const T& value, ostream*);
+//
+//   // Prints value using the type inferred by the compiler.  The difference
+//   // from UniversalTersePrint() is that this function prints both the
+//   // pointer and the NUL-terminated string for a (const or not) char pointer.
+//   void ::testing::internal::UniversalPrint(const T& value, ostream*);
+//
+//   // Prints the fields of a tuple tersely to a string vector, one
+//   // element for each field. Tuple support must be enabled in
+//   // gtest-port.h.
+//   std::vector<string> UniversalTersePrintTupleFieldsToStrings(
+//   const Tuple& value);
+//
+// Known limitation:
+//
+// The print primitives print the elements of an STL-style container
+// using the compiler-inferred type of *iter where iter is a
+// const_iterator of the container.  When const_iterator is an input
+// iterator but not a forward iterator, this inferred type may not
+// match value_type, and the print output may be incorrect.  In
+// practice, this is rarely a problem as for most containers
+// const_iterator is a forward iterator.  We'll fix this if there's an
+// actual need for it.  Note that this fix cannot rely on value_type
+// being defined as many user-defined container types don't have
+// value_type.
+
+#ifndef GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
+#define GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_
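The resolution order described in the header comment (a type-specific `PrintTo`/`operator<<` first, a generic byte-dump fallback last) is loosely analogous to single-dispatch customization. A hedged Python sketch of the same idea, with illustrative names only:

```python
from functools import singledispatch

@singledispatch
def print_to(value):
    # Fallback for types with no registered printer, loosely analogous
    # to gtest printing raw bytes when no PrintTo/operator<< exists.
    return "<unprintable: %r>" % (value,)

@print_to.register(int)
def _(value):
    # Type-specific printer wins over the fallback, like a user-defined
    # PrintTo(const T&, ostream*) in T's namespace.
    return "int:%d" % value

print(print_to(7))    # -> int:7
print(print_to(3.5))  # falls through to the generic representation
```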

[43/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/gmock/include/gmock/gmock-generated-function-mockers.h
--
diff --git a/depends/libhdfs3/gmock/include/gmock/gmock-generated-function-mockers.h b/depends/libhdfs3/gmock/include/gmock/gmock-generated-function-mockers.h
new file mode 100644
index 000..577fd9e
--- /dev/null
+++ b/depends/libhdfs3/gmock/include/gmock/gmock-generated-function-mockers.h
@@ -0,0 +1,991 @@
+// This file was GENERATED by command:
+// pump.py gmock-generated-function-mockers.h.pump
+// DO NOT EDIT BY HAND!!!
+
+// Copyright 2007, Google Inc.
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+//
+// Author: w...@google.com (Zhanyong Wan)
+
+// Google Mock - a framework for writing C++ mock classes.
+//
+// This file implements function mockers of various arities.
+
+#ifndef GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_FUNCTION_MOCKERS_H_
+#define GMOCK_INCLUDE_GMOCK_GMOCK_GENERATED_FUNCTION_MOCKERS_H_
+
+#include "gmock/gmock-spec-builders.h"
+#include "gmock/internal/gmock-internal-utils.h"
+
+namespace testing {
+namespace internal {
+
+template <typename F>
+class FunctionMockerBase;
+
+// Note: class FunctionMocker really belongs to the ::testing
+// namespace.  However if we define it in ::testing, MSVC will
+// complain when classes in ::testing::internal declare it as a
+// friend class template.  To workaround this compiler bug, we define
+// FunctionMocker in ::testing::internal and import it into ::testing.
+template <typename F>
+class FunctionMocker;
+
+template <typename R>
+class FunctionMocker<R()> : public
+internal::FunctionMockerBase<R()> {
+ public:
+  typedef R F();
+  typedef typename internal::Function<F>::ArgumentTuple ArgumentTuple;
+
+  MockSpec<F>& With() {
+return this->current_spec();
+  }
+
+  R Invoke() {
+// Even though gcc and MSVC don't enforce it, 'this->' is required
+// by the C++ standard [14.6.4] here, as the base class type is
+// dependent on the template argument (and thus shouldn't be
+// looked into when resolving InvokeWith).
+return this->InvokeWith(ArgumentTuple());
+  }
+};
+
+template <typename R, typename A1>
+class FunctionMocker<R(A1)> : public
+internal::FunctionMockerBase<R(A1)> {
+ public:
+  typedef R F(A1);
+  typedef typename internal::Function<F>::ArgumentTuple ArgumentTuple;
+
+  MockSpec<F>& With(const Matcher<A1>& m1) {
+this->current_spec().SetMatchers(::std::tr1::make_tuple(m1));
+return this->current_spec();
+  }
+
+  R Invoke(A1 a1) {
+// Even though gcc and MSVC don't enforce it, 'this->' is required
+// by the C++ standard [14.6.4] here, as the base class type is
+// dependent on the template argument (and thus shouldn't be
+// looked into when resolving InvokeWith).
+return this->InvokeWith(ArgumentTuple(a1));
+  }
+};
+
+template <typename R, typename A1, typename A2>
+class FunctionMocker<R(A1, A2)> : public
+internal::FunctionMockerBase<R(A1, A2)> {
+ public:
+  typedef R F(A1, A2);
+  typedef typename internal::Function<F>::ArgumentTuple ArgumentTuple;
+
+  MockSpec<F>& With(const Matcher<A1>& m1, const Matcher<A2>& m2) {
+this->current_spec().SetMatchers(::std::tr1::make_tuple(m1, m2));
+return this->current_spec();
+  }
+
+  R Invoke(A1 a1, A2 a2) {
+// Even though gcc and MSVC don't enforce it, 'this->' is required
+// by the C++ standard [14.6.4] here, as the base class type is
+// dependent on the template argument (and thus shouldn't be
+// looked into when resolving InvokeWith).
+return this->InvokeWith(ArgumentTuple(a1, a2));
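Each generated arity of `FunctionMocker` pairs a `With()` matcher-setter with an `Invoke()` that forwards an argument tuple to the base class. The same record-then-verify idea can be sketched loosely with Python's `unittest.mock` (illustrative only, not gmock's API):

```python
from unittest import mock

# A mock collaborator: calling it records the arguments, roughly like
# Invoke(a1, a2) forwarding an ArgumentTuple into InvokeWith().
collaborator = mock.Mock()
collaborator.add.return_value = 3

result = collaborator.add(1, 2)   # analogous to Invoke(a1, a2)
print(result)                     # -> 3

# Verification plays the role of the With(matcher...) expectation check.
collaborator.add.assert_called_once_with(1, 2)
```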

[13/48] incubator-hawq git commit: HAWQ-618. Import libhdfs3 library for internal management and LICENSE modified

2016-04-03 Thread bhuvnesh2703
http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/bc0904ab/depends/libhdfs3/src/doxyfile.in
--
diff --git a/depends/libhdfs3/src/doxyfile.in b/depends/libhdfs3/src/doxyfile.in
new file mode 100644
index 000..0b74cb6
--- /dev/null
+++ b/depends/libhdfs3/src/doxyfile.in
@@ -0,0 +1,1808 @@
+# Doxyfile 1.8.2
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project.
+#
+# All text after a hash (#) is considered a comment and will be ignored.
+# The format is:
+#   TAG = value [value, ...]
+# For lists items can also be appended using:
+#   TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (" ").
+
+#---
+# Project related configuration options
+#---
+
+# This tag specifies the encoding used for all characters in the config file
+# that follow. The default is UTF-8 which is also the encoding used for all
+# text before the first occurrence of this tag. Doxygen uses libiconv (or the
+# iconv built into libc) for the transcoding. See
+# http://www.gnu.org/software/libiconv for the list of possible encodings.
+
+DOXYFILE_ENCODING = UTF-8
+
+# The PROJECT_NAME tag is a single word (or sequence of words) that should
+# identify the project. Note that if you do not use Doxywizard you need
+# to put quotes around the project name if it contains spaces.
+
+PROJECT_NAME = libhdfs3
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number.
+# This could be handy for archiving the generated documentation or
+# if some version control system is used.
+
+PROJECT_NUMBER = @libhdfs3_VERSION_MAJOR@.@libhdfs3_VERSION_MINOR@.@libhdfs3_VERSION_PATCH@
+
+# Using the PROJECT_BRIEF tag one can provide an optional one line description
+# for a project that appears at the top of each page and should give viewer
+# a quick idea about the purpose of the project. Keep the description short.
+
+PROJECT_BRIEF = 
+
+# With the PROJECT_LOGO tag one can specify an logo or icon that is
+# included in the documentation. The maximum height of the logo should not
+# exceed 55 pixels and the maximum width should not exceed 200 pixels.
+# Doxygen will copy the logo to the output directory.
+
+PROJECT_LOGO = 
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute)
+# base path where the generated documentation will be put.
+# If a relative path is entered, it will be relative to the location
+# where doxygen was started. If left blank the current directory will be used.
+
+OUTPUT_DIRECTORY = @DOXYFILE_PATH@
+
+# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create
+# 4096 sub-directories (in 2 levels) under the output directory of each output
+# format and will distribute the generated files over these directories.
+# Enabling this option can be useful when feeding doxygen a huge amount of
+# source files, where putting all generated files in the same directory would
+# otherwise cause performance problems for the file system.
+
+CREATE_SUBDIRS = NO
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all constant output in the proper language.
+# The default language is English, other supported languages are:
+# Afrikaans, Arabic, Brazilian, Catalan, Chinese, Chinese-Traditional,
+# Croatian, Czech, Danish, Dutch, Esperanto, Farsi, Finnish, French, German,
+# Greek, Hungarian, Italian, Japanese, Japanese-en (Japanese with English
+# messages), Korean, Korean-en, Lithuanian, Norwegian, Macedonian, Persian,
+# Polish, Portuguese, Romanian, Russian, Serbian, Serbian-Cyrillic, Slovak,
+# Slovene, Spanish, Swedish, Ukrainian, and Vietnamese.
+
+OUTPUT_LANGUAGE = English
+
+# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will
+# include brief member descriptions after the members that are listed in
+# the file and class documentation (similar to JavaDoc).
+# Set to NO to disable this.
+
+BRIEF_MEMBER_DESC = YES
+
+# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend
+# the brief description of a member or function before the detailed description.
+# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
+# brief descriptions will be completely suppressed.
+
+REPEAT_BRIEF = YES
+
+# This tag implements a quasi-intelligent brief description abbreviator
+# that is used to form the text in various listings. Each string
+# in this list, if found as the leading text of the brief description, will be
+# stripped from the text and the result after processing the whole list, is
+# used as the annotated text. Otherwise, the brief description is used as-is.
+# If left blank, the following values are used ("$

incubator-hawq git commit: HAWQ-617 Applied feedback

2016-04-03 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-617 7e8331a7e -> 53d264e98


HAWQ-617 Applied feedback


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/53d264e9
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/53d264e9
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/53d264e9

Branch: refs/heads/HAWQ-617
Commit: 53d264e9869e408efc67ab1b9710b594d18bc1b0
Parents: 7e8331a
Author: Bhuvnesh Chaudhary 
Authored: Sun Apr 3 22:09:26 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Sun Apr 3 22:09:26 2016 -0700

--
 .gitignore |  1 -
 tools/bin/hawq_ctl | 23 --
 tools/bin/hawqpylib/hawqlib.py | 39 +
 tools/doc/gpscp_help   |  7 +++
 4 files changed, 63 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/53d264e9/.gitignore
--
diff --git a/.gitignore b/.gitignore
deleted file mode 100644
index 485dee6..000
--- a/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-.idea

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/53d264e9/tools/bin/hawq_ctl
--
diff --git a/tools/bin/hawq_ctl b/tools/bin/hawq_ctl
index c43dfd4..a4e0c3c 100755
--- a/tools/bin/hawq_ctl
+++ b/tools/bin/hawq_ctl
@@ -493,6 +493,7 @@ class HawqStart:
 self.masteronly = opts.masteronly 
 self.special_mode = opts.special_mode
 self.restrict =  opts.restrict
+self.ignore_bad_hosts = opts.ignore_bad_hosts
 
 self._get_config()
 
@@ -682,13 +683,24 @@ class HawqStart:
 
 def _start_all_segments(self):
 logger.info("Start all the segments in hawq cluster")
-segment_cmd_str = self._start_segment_cmd()
 logger.info("Start segments in list: %s" % self.host_list)
-work_list = []
+bad_hosts = []
+working_hosts = self.host_list
+if self.ignore_bad_hosts:
+working_hosts, bad_hosts = exclude_bad_hosts(self.host_list)
+if len(bad_hosts) == len(self.host_list):
+logger.error("Unable to SSH on any of the hosts, skipping segment start operation")
+return
+if len(bad_hosts) > 0:
+logger.warning("Skipping starting segments in the list {0}, SSH test failed".format(bad_hosts))
+self.hosts_count_number -= len(bad_hosts)
+
+segment_cmd_str = self._start_segment_cmd()
 q = Queue.Queue()
-for host in self.host_list:
+work_list = []
+for host in working_hosts:
 work_list.append({"func":remote_ssh,"args":(segment_cmd_str, host, self.user, q)})
-work_list.append({"func":check_progress,"args":(q, self.hosts_count_number, 'start', 0, self.quiet)})
+work_list.append({"func":check_progress,"args":(q, self.hosts_count_number, 'start', len(bad_hosts), self.quiet)})
 node_init = HawqCommands(name = 'HAWQ', action_name = 'start', logger = logger)
 node_init.get_function_list(work_list)
 node_init.start()
@@ -699,7 +711,6 @@ class HawqStart:
 logger.info("Segments started successfully")
 return node_init.return_flag
 
-
 def run(self):
 if self.node_type == "master":
 check_return_code(self.start_master(), logger, \
@@ -1205,7 +1216,7 @@ def hawq_activate_standby(opts, hawq_dict):
 logger.info("Start hawq cluster")
 cmd = "%s; hawq start master" % source_hawq_env
 check_return_code(remote_ssh(cmd, new_master_host_name, ''), logger, "Start master failed")
-cmd = "%s; hawq start allsegments" % source_hawq_env
+cmd = "%s; hawq start allsegments %s" % (source_hawq_env, ignore_bad_hosts)
 check_return_code(remote_ssh(cmd, new_master_host_name, ''), logger, "Start all the segments failed")
 cmd = '''sed -i "/gp_persistent_repair_global_sequence/d" %s/%s''' % (hawq_dict['hawq_master_directory'], 'postgresql.conf')
 check_return_code(remote_ssh(cmd, new_master_host_name, ''))
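The `_start_all_segments` change above partitions the host list before fanning out the start command and shrinks the expected-progress count by the number of skipped hosts. A minimal sketch of that flow (the `exclude_bad_hosts` helper is injected here, since its real implementation lives in the tooling library):

```python
def start_segments(host_list, ignore_bad_hosts, exclude_bad_hosts):
    """Sketch of the skip-bad-hosts flow from hawq_ctl.

    exclude_bad_hosts is assumed to return (reachable_hosts, bad_hosts).
    Returns None when no host is reachable, mirroring the early abort.
    """
    bad_hosts = []
    working_hosts = host_list
    if ignore_bad_hosts:
        working_hosts, bad_hosts = exclude_bad_hosts(host_list)
        if len(bad_hosts) == len(host_list):
            return None  # SSH failed everywhere: skip the start operation
    # The progress checker must expect only the hosts actually contacted.
    expected = len(host_list) - len(bad_hosts)
    return working_hosts, expected

# Stand-in for the real SSH test, for illustration:
fake_exclude = lambda hosts: ([h for h in hosts if h != "bad"],
                              [h for h in hosts if h == "bad"])
print(start_segments(["a", "bad", "c"], True, fake_exclude))
# -> (['a', 'c'], 2)
```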

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/53d264e9/tools/bin/hawqpylib/hawqlib.py
--
diff --git a/tools/bin/hawqpylib/hawqlib.py b/tools/bin/hawqpylib/hawqlib.py
index 85354b4..79bcdae 100755
--- a/tools/bin/hawqpylib/hawqlib.py
+++ b/tools/bin/hawqpylib/hawqlib.py
@@ -24,6 +24,8 @@ from xml.dom import minidom
 from xml.etree.ElementTree import ElementTree
 import shutil
 from gppylib.db import dbconn
+from gppylib.commands.base import WorkerPool, REMOTE
+from gppylib.commands.unix import Echo
 import re
 
 
@@ -484,3 +486,4

incubator-hawq git commit: [#116576425] - Add option to ignore hosts on which SSH test fails to skip syncing configuration files

2016-03-31 Thread bhuvnesh2703
Repository: incubator-hawq
Updated Branches:
  refs/heads/HAWQ-617 [created] 7e8331a7e


[#116576425] - Add option to ignore hosts on which SSH test fails to skip syncing configuration files


Project: http://git-wip-us.apache.org/repos/asf/incubator-hawq/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-hawq/commit/7e8331a7
Tree: http://git-wip-us.apache.org/repos/asf/incubator-hawq/tree/7e8331a7
Diff: http://git-wip-us.apache.org/repos/asf/incubator-hawq/diff/7e8331a7

Branch: refs/heads/HAWQ-617
Commit: 7e8331a7e314c9d5b7f2ec65078fa069271a284d
Parents: 071f276
Author: Bhuvnesh Chaudhary 
Authored: Thu Mar 31 23:14:54 2016 -0700
Committer: Bhuvnesh Chaudhary 
Committed: Thu Mar 31 23:14:54 2016 -0700

--
 .gitignore  | 55 +---
 tools/bin/gppylib/util/ssh_utils.py | 23 +
 tools/bin/gpscp | 36 +
 tools/bin/hawq_ctl  | 13 +---
 tools/bin/hawqconfig| 12 ---
 tools/bin/hawqpylib/HAWQ_HELP.py|  1 +
 6 files changed, 64 insertions(+), 76 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/7e8331a7/.gitignore
--
diff --git a/.gitignore b/.gitignore
index 8fd9023..485dee6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,54 +1 @@
-# Object files
-*.o
-*.ko
-*.obj
-*.elf
-.deps
-
-# Precompiled Headers
-*.gch
-*.pch
-
-# Libraries
-*.lib
-*.a
-*.la
-*.lo
-
-# Shared objects (inc. Windows DLLs)
-*.dll
-*.so
-*.so.*
-*.dylib
-
-# Executables
-*.exe
-*.app
-*.i*86
-*.x86_64
-*.hex
-
-# Debug files
-*.dSYM/
-*.o
-*.ko
-*.obj
-*.elf
-objfiles.txt
-
-# Eclipse Project
-.project
-.pydevproject
-.cproject
-.settings
-
-# Generated files
-BUILD_NUMBER
-GNUmakefile
-config.log
-config.status
-VERSION
-env.sh
-ext/
-plr.tgz
-autom4te.cache/
+.idea

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/7e8331a7/tools/bin/gppylib/util/ssh_utils.py
--
diff --git a/tools/bin/gppylib/util/ssh_utils.py b/tools/bin/gppylib/util/ssh_utils.py
index 3194e11..853c0f5 100644
--- a/tools/bin/gppylib/util/ssh_utils.py
+++ b/tools/bin/gppylib/util/ssh_utils.py
@@ -160,6 +160,29 @@ class HostList():
 
 return self.list
 
+def removeBadHosts(self):
+'''Update the host list to include only the hosts on which SSH was successful'''
+
+pool = WorkerPool()
+
+for h in self.list:
+cmd = Echo('ssh test', '', ctxt=REMOTE, remoteHost=h)
+pool.addCommand(cmd)
+
+pool.join()
+pool.haltWork()
+
+bad_hosts = []
+working_hosts = []
+for cmd in pool.getCompletedItems():
+if not cmd.get_results().wasSuccessful():
+bad_hosts.append(cmd.remoteHost)
+else:
+working_hosts.append(cmd.remoteHost)
+
+self.list = working_hosts[:]
+return bad_hosts
+
 # Session is a command session, derived from a base class cmd.Cmd
 class Session(cmd.Cmd):
 '''Implements a list of open ssh sessions ready to execute commands'''
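`removeBadHosts` above runs a trivial `Echo` command on every host through gppylib's `WorkerPool` and keeps only the reachable ones. A rough standalone sketch of the same idea using only the standard library (`ssh_probe` and `partition_hosts` are illustrative names, not part of the real tool):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def ssh_probe(host):
    # Mirrors the Echo('ssh test', ...) command: success means a trivial
    # remote command completed over SSH without prompting for input.
    return subprocess.call(
        ["ssh", "-o", "BatchMode=yes", host, "true"]) == 0

def partition_hosts(hosts, probe=ssh_probe):
    """Return (working, bad) by probing each host in parallel."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(probe, hosts))
    working = [h for h, ok in zip(hosts, results) if ok]
    bad = [h for h, ok in zip(hosts, results) if not ok]
    return working, bad

# With an injected probe there is no real SSH dependency:
print(partition_hosts(["a", "b"], probe=lambda h: h == "a"))
# -> (['a'], ['b'])
```

Injecting the probe keeps the partitioning logic testable, much as the real code separates the SSH command from the pool that runs it.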

http://git-wip-us.apache.org/repos/asf/incubator-hawq/blob/7e8331a7/tools/bin/gpscp
--
diff --git a/tools/bin/gpscp b/tools/bin/gpscp
index d00f15d..c02d677 100755
--- a/tools/bin/gpscp
+++ b/tools/bin/gpscp
@@ -64,6 +64,7 @@ class Global:
 opt['-f'] = None
 opt['-J'] = '=:'
 opt['-r'] = False
+opt['--ignore-bad-hosts'] = False
 filePath = []
 
 GV = Global()
@@ -86,18 +87,19 @@ def print_version():
 #
 def parseCommandLine():
 try:
-(options, args) = getopt.getopt(sys.argv[1:], '?vrJ:p:u:h:f:', ['version'])
+(options, args) = getopt.getopt(sys.argv[1:], '?vrJ:p:u:h:f:', ['version', 'ignore-bad-hosts'])
 except Exception, e:
 usage('[ERROR] ' + str(e))
 
 for (switch, val) in options:
-   if (switch == '-?'):  usage(0)
-   elif (switch == '-v'):GV.opt[switch] = True
-   elif (switch == '-f'):GV.opt[switch] = val
-   elif (switch == '-h'):GV.opt[switch].append(val)
-elif (switch == '-J'):GV.opt[switch] = val + ':'
-elif (switch == '-r'):GV.opt[switch] = True
-elif (switch == '--version'): print_version()
+if (switch == '-?'): usage(0)
+elif (switch == '-v'):  GV.opt[switch] = True
+elif (switch == '-f'):  GV.opt[switch] = val
+elif (switch == '-h'):  GV.opt[switch].append(val)
+elif (switch == '-J'):  GV.opt[switch] = val + ':'
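The reworked loop above mixes short options (some taking values) with the new `--ignore-bad-hosts` long flag. A minimal, self-contained sketch of `getopt`-based parsing in that style (names are illustrative, not gpscp's actual structure):

```python
import getopt

def parse(argv):
    """Sketch of gpscp-style option parsing with the new long flag."""
    opts = {'-v': False, '--ignore-bad-hosts': False, '-h': []}
    # 'h:' means -h takes a value; bare long names take none.
    options, args = getopt.getopt(argv, '?vrJ:p:u:h:f:',
                                  ['version', 'ignore-bad-hosts'])
    for switch, val in options:
        if switch == '-v':
            opts['-v'] = True
        elif switch == '-h':
            opts['-h'].append(val)       # repeatable: one entry per host
        elif switch == '--ignore-bad-hosts':
            opts['--ignore-bad-hosts'] = True
    return opts, args

print(parse(['-v', '-h', 'sdw1', '--ignore-bad-hosts', 'file'])[0])
```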