That link actually points to hadoop2.6.tgz.  I tried changing the URL to
https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.7.tgz
but got a NoSuchKey error.

Should I just go with it even though it says hadoop2.6?
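
In case it helps, a quick way to check which keys actually exist under
that bucket is a HEAD request per candidate name. A minimal Python 3
sketch (standard library only; the candidate key names are just guesses):

-------
# Probe candidate S3 keys with HTTP HEAD requests. A 200 means the key
# exists; S3 answers 403/404 (NoSuchKey/AccessDenied) for keys it
# doesn't have.
import urllib.error
import urllib.request

BASE = "https://s3.amazonaws.com/spark-related-packages/"
candidates = [
    "spark-1.6.1-bin-hadoop2.6.tgz",
    "spark-1.6.1-bin-hadoop2.7.tgz",
]

for key in candidates:
    req = urllib.request.Request(BASE + key, method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            print(key, resp.status)
    except urllib.error.HTTPError as e:
        print(key, e.code)
-------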

On Sat, Apr 16, 2016 at 5:37 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> BTW this was the original thread:
>
> http://search-hadoop.com/m/q3RTt0Oxul0W6Ak
>
> The link for spark-1.6.1-bin-hadoop2.7 is
> https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.7.tgz
> <https://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop2.6.tgz>
>
> On Sat, Apr 16, 2016 at 2:14 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> From the output you posted:
>> -------
>> Unpacking Spark
>>
>> gzip: stdin: not in gzip format
>> tar: Child returned status 1
>> tar: Error is not recoverable: exiting now
>> -------
>>
>> The artifact for spark-1.6.1-bin-hadoop2.6 is corrupt.
>>
>> This problem has been reported in other threads.
>>
>> Try spark-1.6.1-bin-hadoop2.7 - the artifact should be good.
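>>
>> If you want to sanity-check whatever was downloaded before unpacking,
>> here is a rough sketch (assuming Python 3 on the box; use whichever
>> filename you actually fetched):
>>
>> -------
>> # "gzip: stdin: not in gzip format" means the file does not start
>> # with the gzip magic bytes (0x1f 0x8b) - e.g. it is an S3 error
>> # body rather than a real tarball.
>> with open("spark-1.6.1-bin-hadoop2.7.tgz", "rb") as f:
>>     magic = f.read(2)
>> print("looks like gzip" if magic == b"\x1f\x8b" else "not gzip / corrupt")
>> -------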
>>
>> On Sat, Apr 16, 2016 at 2:09 PM, YaoPau <jonrgr...@gmail.com> wrote:
>>
>>> I launched a cluster with: "./spark-ec2 --key-pair my_pem
>>> --identity-file ../../ssh/my_pem.pem launch jg_spark2" and got the
>>> "Spark standalone cluster started at
>>> http://ec2-54-88-249-255.compute-1.amazonaws.com:8080" and "Done!"
>>> success confirmations at the end.  I confirmed on EC2 that one master
>>> and one slave were launched and passed their status checks.
>>>
>>> But none of the Spark commands work (spark-shell, pyspark, etc.), and
>>> nothing is listening on port 8080.  The full output from launching the
>>> cluster is below.  Any ideas what the issue is?
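>>>
>>> (For reference, a minimal way to confirm that nothing is listening on
>>> 8080 - a Python 3 sketch; the hostname is the master's public DNS from
>>> the output below:)
>>>
>>> -------
>>> # Attempt a TCP connection to the Spark master UI port.
>>> import socket
>>>
>>> host = "ec2-54-88-249-255.compute-1.amazonaws.com"
>>> try:
>>>     socket.create_connection((host, 8080), timeout=5).close()
>>>     print("something is listening on 8080")
>>> except OSError as e:
>>>     print("no listener on 8080:", e)
>>> -------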
>>>
>>> >>>>>>>>>>>>>>>>>>>>>>
>>> launch jg_spark2
>>>
>>> /Users/jg/dev/spark-1.6.1-bin-hadoop2.6/ec2/lib/boto-2.34.0/boto/plugin.py:40:
>>> PendingDeprecationWarning: the imp module is deprecated in favour of
>>> importlib; see the module's documentation for alternative uses
>>>   import imp
>>>
>>> /Users/jg/dev/spark-1.6.1-bin-hadoop2.6/ec2/lib/boto-2.34.0/boto/provider.py:197:
>>> ResourceWarning: unclosed file <_io.TextIOWrapper
>>> name='/Users/jg/.aws/credentials' mode='r' encoding='UTF-8'>
>>>   self.shared_credentials.load_from_path(shared_path)
>>> Setting up security groups...
>>> Creating security group jg_spark2-master
>>> Creating security group jg_spark2-slaves
>>> Searching for existing cluster jg_spark2 in region us-east-1...
>>> Spark AMI: ami-5bb18832
>>> Launching instances...
>>> Launched 1 slave in us-east-1a, regid = r-e7d97944
>>> Launched master in us-east-1a, regid = r-d3d87870
>>> Waiting for AWS to propagate instance metadata...
>>> Waiting for cluster to enter 'ssh-ready' state............
>>>
>>> Warning: SSH connection error. (This could be temporary.)
>>> Host: ec2-54-88-249-255.compute-1.amazonaws.com
>>> SSH return code: 255
>>> SSH output: b'ssh: connect to host
>>> ec2-54-88-249-255.compute-1.amazonaws.com
>>> port 22: Connection refused'
>>>
>>>
>>> /Users/jg/dev/spark-1.6.1-bin-hadoop2.6/ec2/lib/boto-2.34.0/boto/connection.py:190:
>>> ResourceWarning: unclosed <ssl.SSLSocket fd=4,
>>> family=AddressFamily.AF_INET,
>>> type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.1.66', 55580),
>>> raddr=('54.239.20.1', 443)>
>>>   self.queue.pop(0)
>>>
>>>
>>> Warning: SSH connection error. (This could be temporary.)
>>> Host: ec2-54-88-249-255.compute-1.amazonaws.com
>>> SSH return code: 255
>>> SSH output: b'ssh: connect to host
>>> ec2-54-88-249-255.compute-1.amazonaws.com
>>> port 22: Connection refused'
>>>
>>>
>>> /Users/jg/dev/spark-1.6.1-bin-hadoop2.6/ec2/lib/boto-2.34.0/boto/connection.py:190:
>>> ResourceWarning: unclosed <ssl.SSLSocket fd=4,
>>> family=AddressFamily.AF_INET,
>>> type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.1.66', 55760),
>>> raddr=('54.239.26.182', 443)>
>>>   self.queue.pop(0)
>>>
>>>
>>> Warning: SSH connection error. (This could be temporary.)
>>> Host: ec2-54-88-249-255.compute-1.amazonaws.com
>>> SSH return code: 255
>>> SSH output: b'ssh: connect to host
>>> ec2-54-88-249-255.compute-1.amazonaws.com
>>> port 22: Connection refused'
>>>
>>>
>>> /Users/jg/dev/spark-1.6.1-bin-hadoop2.6/ec2/lib/boto-2.34.0/boto/connection.py:190:
>>> ResourceWarning: unclosed <ssl.SSLSocket fd=4,
>>> family=AddressFamily.AF_INET,
>>> type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.1.66', 55827),
>>> raddr=('54.239.20.1', 443)>
>>>   self.queue.pop(0)
>>>
>>>
>>> Warning: SSH connection error. (This could be temporary.)
>>> Host: ec2-54-88-249-255.compute-1.amazonaws.com
>>> SSH return code: 255
>>> SSH output: b'ssh: connect to host
>>> ec2-54-88-249-255.compute-1.amazonaws.com
>>> port 22: Connection refused'
>>>
>>>
>>> /Users/jg/dev/spark-1.6.1-bin-hadoop2.6/ec2/lib/boto-2.34.0/boto/connection.py:190:
>>> ResourceWarning: unclosed <ssl.SSLSocket fd=4,
>>> family=AddressFamily.AF_INET,
>>> type=SocketKind.SOCK_STREAM, proto=6, laddr=('192.168.1.66', 55925),
>>> raddr=('207.171.162.181', 443)>
>>>   self.queue.pop(0)
>>>
>>> Cluster is now in 'ssh-ready' state. Waited 612 seconds.
>>> Generating cluster's SSH key on master...
>>> Warning: Permanently added
>>> 'ec2-54-88-249-255.compute-1.amazonaws.com,54.88.249.255' (ECDSA) to the
>>> list of known hosts.
>>> Connection to ec2-54-88-249-255.compute-1.amazonaws.com closed.
>>> Warning: Permanently added
>>> 'ec2-54-88-249-255.compute-1.amazonaws.com,54.88.249.255' (ECDSA) to the
>>> list of known hosts.
>>> Transferring cluster's SSH key to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> Warning: Permanently added
>>> 'ec2-54-209-124-74.compute-1.amazonaws.com,54.209.124.74' (ECDSA) to the
>>> list of known hosts.
>>> Cloning spark-ec2 scripts from
>>> https://github.com/amplab/spark-ec2/tree/branch-1.5 on master...
>>> Warning: Permanently added
>>> 'ec2-54-88-249-255.compute-1.amazonaws.com,54.88.249.255' (ECDSA) to the
>>> list of known hosts.
>>> Cloning into 'spark-ec2'...
>>> remote: Counting objects: 2072, done.
>>> remote: Total 2072 (delta 0), reused 0 (delta 0), pack-reused 2072
>>> Receiving objects: 100% (2072/2072), 355.67 KiB | 0 bytes/s, done.
>>> Resolving deltas: 100% (792/792), done.
>>> Checking connectivity... done.
>>> Connection to ec2-54-88-249-255.compute-1.amazonaws.com closed.
>>> Deploying files to master...
>>> Warning: Permanently added
>>> 'ec2-54-88-249-255.compute-1.amazonaws.com,54.88.249.255' (ECDSA) to the
>>> list of known hosts.
>>> building file list ... done
>>> root/spark-ec2/ec2-variables.sh
>>>
>>> sent 1658 bytes  received 42 bytes  1133.33 bytes/sec
>>> total size is 1517  speedup is 0.89
>>> Running setup on master...
>>> Warning: Permanently added
>>> 'ec2-54-88-249-255.compute-1.amazonaws.com,54.88.249.255' (ECDSA) to the
>>> list of known hosts.
>>> Connection to ec2-54-88-249-255.compute-1.amazonaws.com closed.
>>> Warning: Permanently added
>>> 'ec2-54-88-249-255.compute-1.amazonaws.com,54.88.249.255' (ECDSA) to the
>>> list of known hosts.
>>> Setting up Spark on ip-172-31-55-237.ec2.internal...
>>> Setting executable permissions on scripts...
>>> RSYNC'ing /root/spark-ec2 to other cluster nodes...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> Warning: Permanently added
>>> 'ec2-54-209-124-74.compute-1.amazonaws.com,172.31.59.121' (ECDSA) to the
>>> list of known hosts.
>>> id_rsa
>>> 100% 1679     1.6KB/s   00:00
>>> [timing] rsync /root/spark-ec2:  00h 00m 00s
>>> Running setup-slave on all cluster nodes to mount filesystems, etc...
>>> [1] 20:31:41 [SUCCESS] ec2-54-88-249-255.compute-1.amazonaws.com
>>> checking/fixing resolution of hostname
>>> Setting up slave on ip-172-31-55-237.ec2.internal... of type m1.large
>>> 1024+0 records in
>>> 1024+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 1.86735 s, 575 MB/s
>>> mkswap: /mnt/swap: warning: don't erase bootbits sectors
>>>         on whole disk. Use -f to force.
>>> Setting up swapspace version 1, size = 1048572 KiB
>>> no label, UUID=87b6ce4e-5dad-4ecb-8098-7bed955b1392
>>> Added 1024 MB swap file /mnt/swap
>>> Stderr: Warning: Permanently added
>>> 'ec2-54-88-249-255.compute-1.amazonaws.com,172.31.55.237' (ECDSA) to the
>>> list of known hosts.
>>> Connection to ec2-54-88-249-255.compute-1.amazonaws.com closed.
>>> [2] 20:31:42 [SUCCESS] ec2-54-209-124-74.compute-1.amazonaws.com
>>> checking/fixing resolution of hostname
>>> Setting up slave on ip-172-31-59-121.ec2.internal... of type m1.large
>>> 1024+0 records in
>>> 1024+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 1.91229 s, 561 MB/s
>>> mkswap: /mnt/swap: warning: don't erase bootbits sectors
>>>         on whole disk. Use -f to force.
>>> Setting up swapspace version 1, size = 1048572 KiB
>>> no label, UUID=33ce4520-38e3-4918-8c8f-be98034e904d
>>> Added 1024 MB swap file /mnt/swap
>>> Stderr: Connection to ec2-54-209-124-74.compute-1.amazonaws.com closed.
>>> [timing] setup-slave:  00h 00m 16s
>>> Initializing scala
>>> Unpacking Scala
>>> --2016-04-16 20:31:42--
>>> http://s3.amazonaws.com/spark-related-packages/scala-2.10.3.tgz
>>> Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.114.178
>>> Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.114.178|:80...
>>> connected.
>>> HTTP request sent, awaiting response... 200 OK
>>> Length: 30531249 (29M) [application/x-compressed]
>>> Saving to: ‘scala-2.10.3.tgz’
>>>
>>>
>>> 100%[===============================================================================================>]
>>> 30,531,249  2.32MB/s   in 15s
>>>
>>> 2016-04-16 20:31:57 (1.93 MB/s) - ‘scala-2.10.3.tgz’ saved
>>> [30531249/30531249]
>>>
>>> [timing] scala init:  00h 00m 16s
>>> Initializing spark
>>> --2016-04-16 20:31:58--
>>>
>>> http://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop1.tgz
>>> Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.32.114
>>> Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.32.114|:80...
>>> connected.
>>> HTTP request sent, awaiting response... 200 OK
>>> Length: 277258240 (264M) [application/x-compressed]
>>> Saving to: ‘spark-1.6.1-bin-hadoop1.tgz’
>>>
>>>
>>> 100%[===============================================================================================>]
>>> 277,258,240 44.5MB/s   in 6.3s
>>>
>>> 2016-04-16 20:32:04 (41.8 MB/s) - ‘spark-1.6.1-bin-hadoop1.tgz’ saved
>>> [277258240/277258240]
>>>
>>> Unpacking Spark
>>>
>>> gzip: stdin: not in gzip format
>>> tar: Child returned status 1
>>> tar: Error is not recoverable: exiting now
>>> mv: missing destination file operand after `spark'
>>> Try `mv --help' for more information.
>>> [timing] spark init:  00h 00m 06s
>>> Initializing ephemeral-hdfs
>>> --2016-04-16 20:32:04--
>>> http://s3.amazonaws.com/spark-related-packages/hadoop-1.0.4.tar.gz
>>> Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.13.48
>>> Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.13.48|:80...
>>> connected.
>>> HTTP request sent, awaiting response... 200 OK
>>> Length: 62793050 (60M) [application/x-gzip]
>>> Saving to: ‘hadoop-1.0.4.tar.gz’
>>>
>>>
>>> 100%[===============================================================================================>]
>>> 62,793,050  45.3MB/s   in 1.3s
>>>
>>> 2016-04-16 20:32:06 (45.3 MB/s) - ‘hadoop-1.0.4.tar.gz’ saved
>>> [62793050/62793050]
>>>
>>> Unpacking Hadoop
>>> RSYNC'ing /root/ephemeral-hdfs to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> Warning: Permanently added
>>> 'ec2-54-209-124-74.compute-1.amazonaws.com,172.31.59.121' (ECDSA) to the
>>> list of known hosts.
>>> [timing] ephemeral-hdfs init:  00h 00m 15s
>>> Initializing persistent-hdfs
>>> --2016-04-16 20:32:19--
>>> http://s3.amazonaws.com/spark-related-packages/hadoop-1.0.4.tar.gz
>>> Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.115.26
>>> Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.115.26|:80...
>>> connected.
>>> HTTP request sent, awaiting response... 200 OK
>>> Length: 62793050 (60M) [application/x-gzip]
>>> Saving to: ‘hadoop-1.0.4.tar.gz’
>>>
>>>
>>> 100%[===============================================================================================>]
>>> 62,793,050  58.1MB/s   in 1.0s
>>>
>>> 2016-04-16 20:32:20 (58.1 MB/s) - ‘hadoop-1.0.4.tar.gz’ saved
>>> [62793050/62793050]
>>>
>>> Unpacking Hadoop
>>> RSYNC'ing /root/persistent-hdfs to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> [timing] persistent-hdfs init:  00h 00m 14s
>>> Initializing spark-standalone
>>> [timing] spark-standalone init:  00h 00m 00s
>>> Initializing tachyon
>>> --2016-04-16 20:32:33--
>>> https://s3.amazonaws.com/Tachyon/tachyon-0.8.2-bin.tar.gz
>>> Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.80.67
>>> Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.80.67|:443...
>>> connected.
>>> HTTP request sent, awaiting response... 200 OK
>>> Length: 72834292 (69M) [application/x-tar]
>>> Saving to: ‘tachyon-0.8.2-bin.tar.gz’
>>>
>>>
>>> 100%[===============================================================================================>]
>>> 72,834,292  35.9MB/s   in 1.9s
>>>
>>> 2016-04-16 20:32:35 (35.9 MB/s) - ‘tachyon-0.8.2-bin.tar.gz’ saved
>>> [72834292/72834292]
>>>
>>> Unpacking Tachyon
>>> [timing] tachyon init:  00h 00m 04s
>>> Initializing rstudio
>>> --2016-04-16 20:32:37--
>>> http://download2.rstudio.org/rstudio-server-rhel-0.99.446-x86_64.rpm
>>> Resolving download2.rstudio.org (download2.rstudio.org)...
>>> 52.85.140.113,
>>> 52.85.140.193, 52.85.140.208, ...
>>> Connecting to download2.rstudio.org
>>> (download2.rstudio.org)|52.85.140.113|:80... connected.
>>> HTTP request sent, awaiting response... 200 OK
>>> Length: 35035164 (33M) [application/x-redhat-package-manager]
>>> Saving to: ‘rstudio-server-rhel-0.99.446-x86_64.rpm’
>>>
>>>
>>> 100%[===============================================================================================>]
>>> 35,035,164  66.8MB/s   in 0.5s
>>>
>>> 2016-04-16 20:32:38 (66.8 MB/s) -
>>> ‘rstudio-server-rhel-0.99.446-x86_64.rpm’
>>> saved [35035164/35035164]
>>>
>>> Loaded plugins: priorities, update-motd, upgrade-helper
>>> Examining rstudio-server-rhel-0.99.446-x86_64.rpm:
>>> rstudio-server-0.99.446-1.x86_64
>>> Marking rstudio-server-rhel-0.99.446-x86_64.rpm to be installed
>>> Resolving Dependencies
>>> --> Running transaction check
>>> ---> Package rstudio-server.x86_64 0:0.99.446-1 will be installed
>>> --> Finished Dependency Resolution
>>>
>>> Dependencies Resolved
>>>
>>>
>>> =========================================================================================================================================
>>>  Package                       Arch                  Version
>>> Repository                                           Size
>>>
>>> =========================================================================================================================================
>>> Installing:
>>>  rstudio-server                x86_64                0.99.446-1
>>> /rstudio-server-rhel-0.99.446-x86_64                252 M
>>>
>>> Transaction Summary
>>>
>>> =========================================================================================================================================
>>> Install  1 Package
>>>
>>> Total size: 252 M
>>> Installed size: 252 M
>>> Downloading packages:
>>> Running transaction check
>>> Running transaction test
>>> Transaction test succeeded
>>> Running transaction
>>>   Installing : rstudio-server-0.99.446-1.x86_64
>>> 1/1
>>> groupadd: group 'rstudio-server' already exists
>>> rsession: no process killed
>>> rstudio-server start/running, process 4817
>>>   Verifying  : rstudio-server-0.99.446-1.x86_64
>>> 1/1
>>>
>>> Installed:
>>>   rstudio-server.x86_64 0:0.99.446-1
>>>
>>> Complete!
>>> rstudio-server start/running, process 4854
>>> [timing] rstudio init:  00h 01m 25s
>>> Initializing ganglia
>>> Connection to ec2-54-209-124-74.compute-1.amazonaws.com closed.
>>> [timing] ganglia init:  00h 00m 01s
>>> Creating local config files...
>>> Connection to ec2-54-209-124-74.compute-1.amazonaws.com closed.
>>> Connection to ec2-54-209-124-74.compute-1.amazonaws.com closed.
>>> Configuring /etc/ganglia/gmond.conf
>>> Configuring /etc/ganglia/gmetad.conf
>>> Configuring /etc/httpd/conf.d/ganglia.conf
>>> Configuring /etc/httpd/conf/httpd.conf
>>> Configuring /root/mapreduce/hadoop.version
>>> Configuring /root/mapreduce/conf/core-site.xml
>>> Configuring /root/mapreduce/conf/slaves
>>> Configuring /root/mapreduce/conf/mapred-site.xml
>>> Configuring /root/mapreduce/conf/hdfs-site.xml
>>> Configuring /root/mapreduce/conf/hadoop-env.sh
>>> Configuring /root/mapreduce/conf/masters
>>> Configuring /root/persistent-hdfs/conf/core-site.xml
>>> Configuring /root/persistent-hdfs/conf/slaves
>>> Configuring /root/persistent-hdfs/conf/mapred-site.xml
>>> Configuring /root/persistent-hdfs/conf/hdfs-site.xml
>>> Configuring /root/persistent-hdfs/conf/hadoop-env.sh
>>> Configuring /root/persistent-hdfs/conf/masters
>>> Configuring /root/ephemeral-hdfs/conf/core-site.xml
>>> Configuring /root/ephemeral-hdfs/conf/yarn-site.xml
>>> Configuring /root/ephemeral-hdfs/conf/slaves
>>> Configuring /root/ephemeral-hdfs/conf/mapred-site.xml
>>> Configuring /root/ephemeral-hdfs/conf/hadoop-metrics2.properties
>>> Configuring /root/ephemeral-hdfs/conf/capacity-scheduler.xml
>>> Configuring /root/ephemeral-hdfs/conf/yarn-env.sh
>>> Configuring /root/ephemeral-hdfs/conf/hdfs-site.xml
>>> Configuring /root/ephemeral-hdfs/conf/hadoop-env.sh
>>> Configuring /root/ephemeral-hdfs/conf/masters
>>> Configuring /root/spark/conf/core-site.xml
>>> Configuring /root/spark/conf/spark-defaults.conf
>>> Configuring /root/spark/conf/spark-env.sh
>>> Configuring /root/tachyon/conf/slaves
>>> Configuring /root/tachyon/conf/workers
>>> Configuring /root/tachyon/conf/tachyon-env.sh
>>> Deploying Spark config files...
>>> RSYNC'ing /root/spark/conf to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> Setting up scala
>>> RSYNC'ing /root/scala to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> [timing] scala setup:  00h 00m 02s
>>> Setting up spark
>>> RSYNC'ing /root/spark to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> [timing] spark setup:  00h 00m 01s
>>> Setting up ephemeral-hdfs
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> Connection to ec2-54-209-124-74.compute-1.amazonaws.com closed.
>>> RSYNC'ing /root/ephemeral-hdfs/conf to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> Formatting ephemeral HDFS namenode...
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 16/04/16 20:34:09 INFO namenode.NameNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting NameNode
>>> STARTUP_MSG:   host = ip-172-31-55-237.ec2.internal/172.31.55.237
>>> STARTUP_MSG:   args = [-format]
>>> STARTUP_MSG:   version = 1.0.4
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>> ************************************************************/
>>> 16/04/16 20:34:09 INFO util.GSet: VM type       = 64-bit
>>> 16/04/16 20:34:09 INFO util.GSet: 2% max memory = 17.78 MB
>>> 16/04/16 20:34:09 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>>> 16/04/16 20:34:09 INFO util.GSet: recommended=2097152, actual=2097152
>>> 16/04/16 20:34:09 INFO namenode.FSNamesystem: fsOwner=root
>>> 16/04/16 20:34:09 INFO namenode.FSNamesystem: supergroup=supergroup
>>> 16/04/16 20:34:09 INFO namenode.FSNamesystem: isPermissionEnabled=false
>>> 16/04/16 20:34:09 INFO namenode.FSNamesystem:
>>> dfs.block.invalidate.limit=100
>>> 16/04/16 20:34:09 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>>> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> 16/04/16 20:34:09 INFO namenode.NameNode: Caching file names occuring
>>> more than 10 times
>>> 16/04/16 20:34:09 INFO common.Storage: Image file of size 110 saved in 0
>>> seconds.
>>> 16/04/16 20:34:09 INFO common.Storage: Storage directory
>>> /mnt/ephemeral-hdfs/dfs/name has been successfully formatted.
>>> 16/04/16 20:34:09 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at
>>> ip-172-31-55-237.ec2.internal/172.31.55.237
>>> ************************************************************/
>>> Starting ephemeral HDFS...
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> starting namenode, logging to
>>>
>>> /mnt/ephemeral-hdfs/logs/hadoop-root-namenode-ip-172-31-55-237.ec2.internal.out
>>> ec2-54-209-124-74.compute-1.amazonaws.com: Warning: $HADOOP_HOME is
>>> deprecated.
>>> ec2-54-209-124-74.compute-1.amazonaws.com:
>>> ec2-54-209-124-74.compute-1.amazonaws.com: starting datanode, logging to
>>>
>>> /mnt/ephemeral-hdfs/logs/hadoop-root-datanode-ip-172-31-59-121.ec2.internal.out
>>> ec2-54-88-249-255.compute-1.amazonaws.com: Warning: Permanently added
>>> 'ec2-54-88-249-255.compute-1.amazonaws.com,172.31.55.237' (ECDSA) to the
>>> list of known hosts.
>>> ec2-54-88-249-255.compute-1.amazonaws.com: Warning: $HADOOP_HOME is
>>> deprecated.
>>> ec2-54-88-249-255.compute-1.amazonaws.com:
>>> ec2-54-88-249-255.compute-1.amazonaws.com: starting secondarynamenode,
>>> logging to
>>>
>>> /mnt/ephemeral-hdfs/logs/hadoop-root-secondarynamenode-ip-172-31-55-237.ec2.internal.out
>>> [timing] ephemeral-hdfs setup:  00h 00m 07s
>>> Setting up persistent-hdfs
>>> Pseudo-terminal will not be allocated because stdin is not a terminal.
>>> RSYNC'ing /root/persistent-hdfs/conf to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> Formatting persistent HDFS namenode...
>>> Warning: $HADOOP_HOME is deprecated.
>>>
>>> 16/04/16 20:34:16 INFO namenode.NameNode: STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting NameNode
>>> STARTUP_MSG:   host = ip-172-31-55-237.ec2.internal/172.31.55.237
>>> STARTUP_MSG:   args = [-format]
>>> STARTUP_MSG:   version = 1.0.4
>>> STARTUP_MSG:   build =
>>> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
>>> 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
>>> ************************************************************/
>>> 16/04/16 20:34:16 INFO util.GSet: VM type       = 64-bit
>>> 16/04/16 20:34:16 INFO util.GSet: 2% max memory = 17.78 MB
>>> 16/04/16 20:34:16 INFO util.GSet: capacity      = 2^21 = 2097152 entries
>>> 16/04/16 20:34:16 INFO util.GSet: recommended=2097152, actual=2097152
>>> 16/04/16 20:34:16 INFO namenode.FSNamesystem: fsOwner=root
>>> 16/04/16 20:34:16 INFO namenode.FSNamesystem: supergroup=supergroup
>>> 16/04/16 20:34:16 INFO namenode.FSNamesystem: isPermissionEnabled=false
>>> 16/04/16 20:34:17 INFO namenode.FSNamesystem:
>>> dfs.block.invalidate.limit=100
>>> 16/04/16 20:34:17 INFO namenode.FSNamesystem: isAccessTokenEnabled=false
>>> accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
>>> 16/04/16 20:34:17 INFO namenode.NameNode: Caching file names occuring
>>> more than 10 times
>>> 16/04/16 20:34:17 INFO common.Storage: Image file of size 110 saved in 0
>>> seconds.
>>> 16/04/16 20:34:17 INFO common.Storage: Storage directory
>>> /vol/persistent-hdfs/dfs/name has been successfully formatted.
>>> 16/04/16 20:34:17 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at
>>> ip-172-31-55-237.ec2.internal/172.31.55.237
>>> ************************************************************/
>>> Persistent HDFS installed, won't start by default...
>>> [timing] persistent-hdfs setup:  00h 00m 03s
>>> Setting up spark-standalone
>>> RSYNC'ing /root/spark/conf to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> RSYNC'ing /root/spark-ec2 to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> ./spark-standalone/setup.sh: line 22: /root/spark/sbin/stop-all.sh:
>>> No such file or directory
>>> ./spark-standalone/setup.sh: line 27: /root/spark/sbin/start-master.sh:
>>> No such file or directory
>>> ./spark-standalone/setup.sh: line 33: /root/spark/sbin/start-slaves.sh:
>>> No such file or directory
>>> [timing] spark-standalone setup:  00h 00m 25s
>>> Setting up tachyon
>>> RSYNC'ing /root/tachyon to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> Connecting to ec2-54-209-124-74.compute-1.amazonaws.com as root...
>>> Pseudo-terminal will not be allocated because stdin is not a terminal.
>>> Formatting Tachyon Worker @ ip-172-31-59-121.ec2.internal
>>> Formatting Tachyon Master @ ec2-54-88-249-255.compute-1.amazonaws.com
>>> Killed 0 processes on ip-172-31-55-237.ec2.internal
>>> Killed 0 processes on ip-172-31-55-237.ec2.internal
>>> Connecting to ec2-54-209-124-74.compute-1.amazonaws.com as root...
>>> Pseudo-terminal will not be allocated because stdin is not a terminal.
>>> Killed 0 processes on ip-172-31-59-121.ec2.internal
>>> Starting master @ ec2-54-88-249-255.compute-1.amazonaws.com
>>> Connecting to ec2-54-209-124-74.compute-1.amazonaws.com as root...
>>> Pseudo-terminal will not be allocated because stdin is not a terminal.
>>> Formatting RamFS: /mnt/ramdisk (6154mb)
>>> Starting worker @ ip-172-31-59-121.ec2.internal
>>> [timing] tachyon setup:  00h 00m 17s
>>> Setting up rstudio
>>> spark-ec2/setup.sh: line 110: ./rstudio/setup.sh: No such file or directory
>>> [timing] rstudio setup:  00h 00m 00s
>>> Setting up ganglia
>>> RSYNC'ing /etc/ganglia to slaves...
>>> ec2-54-209-124-74.compute-1.amazonaws.com
>>> Shutting down GANGLIA gmond:                               [FAILED]
>>> Starting GANGLIA gmond:                                    [  OK  ]
>>> Shutting down GANGLIA gmond:                               [FAILED]
>>> Starting GANGLIA gmond:                                    [  OK  ]
>>> Connection to ec2-54-209-124-74.compute-1.amazonaws.com closed.
>>> Shutting down GANGLIA gmetad:                              [FAILED]
>>> Starting GANGLIA gmetad:                                   [  OK  ]
>>> Stopping httpd:                                            [FAILED]
>>> Starting httpd: httpd: Syntax error on line 154 of
>>> /etc/httpd/conf/httpd.conf: Cannot load
>>> /etc/httpd/modules/mod_authz_core.so
>>> into server: /etc/httpd/modules/mod_authz_core.so: cannot open shared
>>> object
>>> file: No such file or directory
>>>                                                            [FAILED]
>>> [timing] ganglia setup:  00h 00m 02s
>>> Connection to ec2-54-88-249-255.compute-1.amazonaws.com closed.
>>> Spark standalone cluster started at
>>> http://ec2-54-88-249-255.compute-1.amazonaws.com:8080
>>> Ganglia started at
>>> http://ec2-54-88-249-255.compute-1.amazonaws.com:5080/ganglia
>>> Done!
>>>
>>>
>>>
>>>
>>>
>>
>
