Hi Oleg,

The Tachyon-related issue should be fixed.
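For reference, since the instances in the log below were already launched successfully, it should not be necessary to start over from scratch: spark-ec2 has a `--resume` option that re-runs setup on an existing cluster. A minimal sketch, reusing the arguments from the original launch command (the identity-file path is taken from the log and may differ on your machine):

```shell
# Re-run setup on the already-launched cluster; spark-ec2 re-clones the
# setup scripts from the amplab/spark-ec2 branch on the master, so an
# upstream fix is picked up on the next run.
./spark-ec2 --key-pair=CC-ES-Demo \
  --identity-file=/home/oleg/work/entity_extraction_framework/ec2_pem_key/CC-ES-Demo.pem \
  --region=us-east-1 --zone=us-east-1a \
  --spark-version=1.6.0 \
  launch entity-extraction-spark-cluster --resume
```

This requires AWS credentials and the original cluster to still be running, so treat it as a sketch rather than a verified recipe.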
Hope this helps,
Calvin

On Mon, Jan 18, 2016 at 2:51 AM, Oleg Ruchovets <oruchov...@gmail.com> wrote:
> Hi,
> I tried to follow the Spark 1.6.0 instructions to install Spark on EC2.
>
> It doesn't work properly - I got exceptions, and in the end a standalone
> Spark cluster was installed. Here is the log output:
>
> Any suggestions?
>
> Thanks,
> Oleg.
>
> oleg@robinhood:~/install/spark-1.6.0-bin-hadoop2.6/ec2$ ./spark-ec2 > --key-pair=CC-ES-Demo > > --identity-file=/home/oleg/work/entity_extraction_framework/ec2_pem_key/CC-ES-Demo.pem > --region=us-east-1 --zone=us-east-1a --spot-price=0.05 -s 5 > --spark-version=1.6.0 launch entity-extraction-spark-cluster > Setting up security groups... > Searching for existing cluster entity-extraction-spark-cluster in region > us-east-1... > Spark AMI: ami-5bb18832 > Launching instances... > Requesting 5 slaves as spot instances with price $0.050 > Waiting for spot instances to be granted... > 0 of 5 slaves granted, waiting longer > 0 of 5 slaves granted, waiting longer > 0 of 5 slaves granted, waiting longer > 0 of 5 slaves granted, waiting longer > 0 of 5 slaves granted, waiting longer > 0 of 5 slaves granted, waiting longer > 0 of 5 slaves granted, waiting longer > 0 of 5 slaves granted, waiting longer > 0 of 5 slaves granted, waiting longer > All 5 slaves granted > Launched master in us-east-1a, regid = r-9384033f > Waiting for AWS to propagate instance metadata... > Waiting for cluster to enter 'ssh-ready' state.......... > > Warning: SSH connection error. (This could be temporary.) > Host: ec2-52-90-186-83.compute-1.amazonaws.com > SSH return code: 255 > SSH output: ssh: connect to host ec2-52-90-186-83.compute-1.amazonaws.com > port 22: Connection refused > > . > > Warning: SSH connection error. (This could be temporary.) > Host: ec2-52-90-186-83.compute-1.amazonaws.com > SSH return code: 255 > SSH output: ssh: connect to host ec2-52-90-186-83.compute-1.amazonaws.com > port 22: Connection refused > > . > > Warning: SSH connection error. 
(This could be temporary.) > Host: ec2-52-90-186-83.compute-1.amazonaws.com > SSH return code: 255 > SSH output: ssh: connect to host ec2-52-90-186-83.compute-1.amazonaws.com > port 22: Connection refused > > . > Cluster is now in 'ssh-ready' state. Waited 442 seconds. > Generating cluster's SSH key on master... > Warning: Permanently added > 'ec2-52-90-186-83.compute-1.amazonaws.com,52.90.186.83' > (ECDSA) to the list of known hosts. > Connection to ec2-52-90-186-83.compute-1.amazonaws.com closed. > Warning: Permanently added > 'ec2-52-90-186-83.compute-1.amazonaws.com,52.90.186.83' > (ECDSA) to the list of known hosts. > Transferring cluster's SSH key to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-165-243-74.compute-1.amazonaws.com,54.165.243.74' > (ECDSA) to the list of known hosts. > ec2-54-88-245-107.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-88-245-107.compute-1.amazonaws.com,54.88.245.107' > (ECDSA) to the list of known hosts. > ec2-54-172-29-47.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-172-29-47.compute-1.amazonaws.com,54.172.29.47' > (ECDSA) to the list of known hosts. > ec2-54-165-131-210.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-165-131-210.compute-1.amazonaws.com,54.165.131.210' > (ECDSA) to the list of known hosts. > ec2-54-172-46-184.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-172-46-184.compute-1.amazonaws.com,54.172.46.184' > (ECDSA) to the list of known hosts. > Cloning spark-ec2 scripts from > https://github.com/amplab/spark-ec2/tree/branch-1.5 on master... > Warning: Permanently added > 'ec2-52-90-186-83.compute-1.amazonaws.com,52.90.186.83' > (ECDSA) to the list of known hosts. > Cloning into 'spark-ec2'... > remote: Counting objects: 2068, done. > remote: Total 2068 (delta 0), reused 0 (delta 0), pack-reused 2068 > Receiving objects: 100% (2068/2068), 349.76 KiB, done. > Resolving deltas: 100% (796/796), done. 
> Connection to ec2-52-90-186-83.compute-1.amazonaws.com closed. > Deploying files to master... > Warning: Permanently added > 'ec2-52-90-186-83.compute-1.amazonaws.com,52.90.186.83' > (ECDSA) to the list of known hosts. > sending incremental file list > root/spark-ec2/ec2-variables.sh > > sent 1,835 bytes received 40 bytes 416.67 bytes/sec > total size is 1,684 speedup is 0.90 > Running setup on master... > Warning: Permanently added > 'ec2-52-90-186-83.compute-1.amazonaws.com,52.90.186.83' > (ECDSA) to the list of known hosts. > Connection to ec2-52-90-186-83.compute-1.amazonaws.com closed. > Warning: Permanently added > 'ec2-52-90-186-83.compute-1.amazonaws.com,52.90.186.83' > (ECDSA) to the list of known hosts. > Setting up Spark on ip-172-31-24-124.ec2.internal... > Setting executable permissions on scripts... > RSYNC'ing /root/spark-ec2 to other cluster nodes... > ec2-54-165-243-74.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-165-243-74.compute-1.amazonaws.com,172.31.19.61' > (ECDSA) to the list of known hosts. > ec2-54-88-245-107.compute-1.amazonaws.com > id_rsa > > 100% 1679 1.6KB/s 00:00 > Warning: Permanently added > 'ec2-54-88-245-107.compute-1.amazonaws.com,172.31.30.81' > (ECDSA) to the list of known hosts. > ec2-54-172-29-47.compute-1.amazonaws.com > id_rsa > > 100% 1679 1.6KB/s 00:00 > Warning: Permanently added > 'ec2-54-172-29-47.compute-1.amazonaws.com,172.31.29.54' > (ECDSA) to the list of known hosts. > ec2-54-165-131-210.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-165-131-210.compute-1.amazonaws.com,172.31.23.10' > (ECDSA) to the list of known hosts. > id_rsa > > 100% 1679 1.6KB/s 00:00 > ec2-54-172-46-184.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-172-46-184.compute-1.amazonaws.com,172.31.30.167' > (ECDSA) to the list of known hosts. 
> id_rsa > > 100% 1679 1.6KB/s 00:00 > id_rsa > > 100% 1679 1.6KB/s 00:00 > [timing] rsync /root/spark-ec2: 00h 00m 01s > Running setup-slave on all cluster nodes to mount filesystems, etc... > [1] 08:08:10 [SUCCESS] ec2-52-90-186-83.compute-1.amazonaws.com > checking/fixing resolution of hostname > Setting up slave on ip-172-31-24-124.ec2.internal... of type m1.large > 1024+0 records in > 1024+0 records out > 1073741824 bytes (1.1 GB) copied, 2.01407 s, 533 MB/s > mkswap: /mnt/swap: warning: don't erase bootbits sectors > on whole disk. Use -f to force. > Setting up swapspace version 1, size = 1048572 KiB > no label, UUID=b4d25f54-4732-40bb-8086-c78117cb58b2 > Added 1024 MB swap file /mnt/swap > Stderr: Warning: Permanently added ' > ec2-52-90-186-83.compute-1.amazonaws.com,172.31.24.124' (ECDSA) to the > list of known hosts. > Connection to ec2-52-90-186-83.compute-1.amazonaws.com closed. > [2] 08:08:24 [SUCCESS] ec2-54-165-243-74.compute-1.amazonaws.com > checking/fixing resolution of hostname > Setting up slave on ip-172-31-19-61.ec2.internal... of type m1.large > 1024+0 records in > 1024+0 records out > 1073741824 bytes (1.1 GB) copied, 2.11705 s, 507 MB/s > mkswap: /mnt/swap: warning: don't erase bootbits sectors > on whole disk. Use -f to force. > Setting up swapspace version 1, size = 1048572 KiB > no label, UUID=928041a8-4d48-4c65-94e2-d9f84e14cad9 > Added 1024 MB swap file /mnt/swap > Stderr: Connection to ec2-54-165-243-74.compute-1.amazonaws.com closed. > [3] 08:08:27 [SUCCESS] ec2-54-88-245-107.compute-1.amazonaws.com > checking/fixing resolution of hostname > Setting up slave on ip-172-31-30-81.ec2.internal... of type m1.large > 1024+0 records in > 1024+0 records out > 1073741824 bytes (1.1 GB) copied, 2.21007 s, 486 MB/s > mkswap: /mnt/swap: warning: don't erase bootbits sectors > on whole disk. Use -f to force. 
> Setting up swapspace version 1, size = 1048572 KiB > no label, UUID=1e8c3d4c-7e27-4c35-acae-d83ec2ea9edb > Added 1024 MB swap file /mnt/swap > Stderr: Connection to ec2-54-88-245-107.compute-1.amazonaws.com closed. > [4] 08:08:32 [SUCCESS] ec2-54-172-29-47.compute-1.amazonaws.com > checking/fixing resolution of hostname > Setting up slave on ip-172-31-29-54.ec2.internal... of type m1.large > 1024+0 records in > 1024+0 records out > 1073741824 bytes (1.1 GB) copied, 2.15544 s, 498 MB/s > mkswap: /mnt/swap: warning: don't erase bootbits sectors > on whole disk. Use -f to force. > Setting up swapspace version 1, size = 1048572 KiB > no label, UUID=7bd81d33-ae22-4973-810e-855535ecb743 > Added 1024 MB swap file /mnt/swap > Stderr: Connection to ec2-54-172-29-47.compute-1.amazonaws.com closed. > [5] 08:08:34 [SUCCESS] ec2-54-165-131-210.compute-1.amazonaws.com > checking/fixing resolution of hostname > Setting up slave on ip-172-31-23-10.ec2.internal... of type m1.large > 1024+0 records in > 1024+0 records out > 1073741824 bytes (1.1 GB) copied, 2.39186 s, 449 MB/s > mkswap: /mnt/swap: warning: don't erase bootbits sectors > on whole disk. Use -f to force. > Setting up swapspace version 1, size = 1048572 KiB > no label, UUID=abbdbe4d-f8e8-469b-90d2-c9d0a244b261 > Added 1024 MB swap file /mnt/swap > Stderr: Connection to ec2-54-165-131-210.compute-1.amazonaws.com closed. > [6] 08:08:37 [SUCCESS] ec2-54-172-46-184.compute-1.amazonaws.com > checking/fixing resolution of hostname > Setting up slave on ip-172-31-30-167.ec2.internal... of type m1.large > 1024+0 records in > 1024+0 records out > 1073741824 bytes (1.1 GB) copied, 2.1603 s, 497 MB/s > mkswap: /mnt/swap: warning: don't erase bootbits sectors > on whole disk. Use -f to force. > Setting up swapspace version 1, size = 1048572 KiB > no label, UUID=115ac0e9-c28c-4404-a648-826ece20815d > Added 1024 MB swap file /mnt/swap > Stderr: Connection to ec2-54-172-46-184.compute-1.amazonaws.com closed. 
> [timing] setup-slave: 00h 00m 45s > Initializing scala > Unpacking Scala > --2016-01-18 08:08:37-- > http://s3.amazonaws.com/spark-related-packages/scala-2.10.3.tgz > Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.13.224 > Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.13.224|:80... > connected. > HTTP request sent, awaiting response... 200 OK > Length: 30531249 (29M) [application/x-compressed] > Saving to: ‘scala-2.10.3.tgz’ > > 100%[============================================================================================================================================>] > 30,531,249 3.46MB/s in 10s > > 2016-01-18 08:08:47 (2.86 MB/s) - ‘scala-2.10.3.tgz’ saved > [30531249/30531249] > > [timing] scala init: 00h 00m 11s > Initializing spark > --2016-01-18 08:08:48-- > http://s3.amazonaws.com/spark-related-packages/spark-1.6.0-bin-hadoop1.tgz > Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.81.220 > Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.81.220|:80... > connected. > HTTP request sent, awaiting response... 200 OK > Length: 243448482 (232M) [application/x-compressed] > Saving to: ‘spark-1.6.0-bin-hadoop1.tgz’ > > 100%[============================================================================================================================================>] > 243,448,482 65.6MB/s in 3.5s > > 2016-01-18 08:08:52 (65.6 MB/s) - ‘spark-1.6.0-bin-hadoop1.tgz’ saved > [243448482/243448482] > > Unpacking Spark > [timing] spark init: 00h 00m 08s > Initializing ephemeral-hdfs > --2016-01-18 08:08:56-- > http://s3.amazonaws.com/spark-related-packages/hadoop-1.0.4.tar.gz > Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.17.48 > Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.17.48|:80... > connected. > HTTP request sent, awaiting response... 
200 OK > Length: 62793050 (60M) [application/x-gzip] > Saving to: ‘hadoop-1.0.4.tar.gz’ > > 100%[============================================================================================================================================>] > 62,793,050 69.2MB/s in 0.9s > > 2016-01-18 08:08:57 (69.2 MB/s) - ‘hadoop-1.0.4.tar.gz’ saved > [62793050/62793050] > > Unpacking Hadoop > RSYNC'ing /root/ephemeral-hdfs to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-165-243-74.compute-1.amazonaws.com,172.31.19.61' > (ECDSA) to the list of known hosts. > ec2-54-88-245-107.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-88-245-107.compute-1.amazonaws.com,172.31.30.81' > (ECDSA) to the list of known hosts. > ec2-54-172-29-47.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-172-29-47.compute-1.amazonaws.com,172.31.29.54' > (ECDSA) to the list of known hosts. > ec2-54-165-131-210.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-165-131-210.compute-1.amazonaws.com,172.31.23.10' > (ECDSA) to the list of known hosts. > ec2-54-172-46-184.compute-1.amazonaws.com > Warning: Permanently added > 'ec2-54-172-46-184.compute-1.amazonaws.com,172.31.30.167' > (ECDSA) to the list of known hosts. > [timing] ephemeral-hdfs init: 00h 00m 54s > Initializing persistent-hdfs > --2016-01-18 08:09:50-- > http://s3.amazonaws.com/spark-related-packages/hadoop-1.0.4.tar.gz > Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.49.236 > Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.49.236|:80... > connected. > HTTP request sent, awaiting response... 
200 OK > Length: 62793050 (60M) [application/x-gzip] > Saving to: ‘hadoop-1.0.4.tar.gz’ > > 100%[============================================================================================================================================>] > 62,793,050 67.4MB/s in 0.9s > > 2016-01-18 08:09:51 (67.4 MB/s) - ‘hadoop-1.0.4.tar.gz’ saved > [62793050/62793050] > > Unpacking Hadoop > RSYNC'ing /root/persistent-hdfs to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > [timing] persistent-hdfs init: 00h 00m 39s > Initializing spark-standalone > [timing] spark-standalone init: 00h 00m 00s > Initializing tachyon > --2016-01-18 08:10:29-- > https://s3.amazonaws.com/Tachyon/tachyon-0.8.2-bin.tar.gz > Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.81.67 > Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.81.67|:443... > connected. > HTTP request sent, awaiting response... 403 Forbidden > 2016-01-18 08:10:29 ERROR 403: Forbidden. > > ERROR: Unknown Tachyon version > tachyon/init.sh: line 60: return: -1: invalid option > return: usage: return [n] > Unpacking Tachyon > tar (child): tachyon-*.tar.gz: Cannot open: No such file or directory > tar (child): Error is not recoverable: exiting now > tar: Child returned status 2 > tar: Error is not recoverable: exiting now > rm: cannot remove `tachyon-*.tar.gz': No such file or directory > ls: cannot access tachyon-*: No such file or directory > mv: missing destination file operand after `tachyon' > Try `mv --help' for more information. > [timing] tachyon init: 00h 00m 00s > Initializing rstudio > --2016-01-18 08:10:29-- > http://download2.rstudio.org/rstudio-server-rhel-0.99.446-x86_64.rpm > Resolving download2.rstudio.org (download2.rstudio.org)... 54.192.18.169, > 54.192.18.246, 54.192.18.133, ... 
> Connecting to download2.rstudio.org > (download2.rstudio.org)|54.192.18.169|:80... > connected. > HTTP request sent, awaiting response... 200 OK > Length: 35035164 (33M) [application/x-redhat-package-manager] > Saving to: ‘rstudio-server-rhel-0.99.446-x86_64.rpm’ > > 100%[============================================================================================================================================>] > 35,035,164 84.0MB/s in 0.4s > > 2016-01-18 08:10:29 (84.0 MB/s) - > ‘rstudio-server-rhel-0.99.446-x86_64.rpm’ saved [35035164/35035164] > > Loaded plugins: priorities, update-motd, upgrade-helper > Examining rstudio-server-rhel-0.99.446-x86_64.rpm: > rstudio-server-0.99.446-1.x86_64 > Marking rstudio-server-rhel-0.99.446-x86_64.rpm to be installed > Resolving Dependencies > --> Running transaction check > ---> Package rstudio-server.x86_64 0:0.99.446-1 will be installed > --> Finished Dependency Resolution > > Dependencies Resolved > > > ====================================================================================================================================================================================== > Package Arch > Version Repository > Size > > ====================================================================================================================================================================================== > Installing: > rstudio-server x86_64 > 0.99.446-1 /rstudio-server-rhel-0.99.446-x86_64 > 252 M > > Transaction Summary > > ====================================================================================================================================================================================== > Install 1 Package > > Total size: 252 M > Installed size: 252 M > Downloading packages: > Running transaction check > Running transaction test > Transaction test succeeded > Running transaction > Installing : rstudio-server-0.99.446-1.x86_64 > > 1/1 > groupadd: group 'rstudio-server' already exists > rsession: 
no process killed > rstudio-server start/running, process 2535 > Verifying : rstudio-server-0.99.446-1.x86_64 > > 1/1 > > Installed: > rstudio-server.x86_64 0:0.99.446-1 > > > > Complete! > rstudio-server start/running, process 2570 > [timing] rstudio init: 00h 00m 39s > Initializing ganglia > Connection to ec2-54-165-243-74.compute-1.amazonaws.com closed. > Connection to ec2-54-88-245-107.compute-1.amazonaws.com closed. > Connection to ec2-54-172-29-47.compute-1.amazonaws.com closed. > Connection to ec2-54-165-131-210.compute-1.amazonaws.com closed. > Connection to ec2-54-172-46-184.compute-1.amazonaws.com closed. > [timing] ganglia init: 00h 00m 02s > Creating local config files... > Connection to ec2-54-165-243-74.compute-1.amazonaws.com closed. > Connection to ec2-54-165-243-74.compute-1.amazonaws.com closed. > Configuring /etc/ganglia/gmond.conf > Configuring /etc/ganglia/gmetad.conf > Configuring /etc/httpd/conf.d/ganglia.conf > Configuring /etc/httpd/conf/httpd.conf > Configuring /root/mapreduce/hadoop.version > Configuring /root/mapreduce/conf/core-site.xml > Configuring /root/mapreduce/conf/slaves > Configuring /root/mapreduce/conf/mapred-site.xml > Configuring /root/mapreduce/conf/hdfs-site.xml > Configuring /root/mapreduce/conf/hadoop-env.sh > Configuring /root/mapreduce/conf/masters > Configuring /root/persistent-hdfs/conf/core-site.xml > Configuring /root/persistent-hdfs/conf/slaves > Configuring /root/persistent-hdfs/conf/mapred-site.xml > Configuring /root/persistent-hdfs/conf/hdfs-site.xml > Configuring /root/persistent-hdfs/conf/hadoop-env.sh > Configuring /root/persistent-hdfs/conf/masters > Configuring /root/ephemeral-hdfs/conf/core-site.xml > Configuring /root/ephemeral-hdfs/conf/yarn-site.xml > Configuring /root/ephemeral-hdfs/conf/slaves > Configuring /root/ephemeral-hdfs/conf/mapred-site.xml > Configuring /root/ephemeral-hdfs/conf/hadoop-metrics2.properties > Configuring /root/ephemeral-hdfs/conf/capacity-scheduler.xml > Configuring 
/root/ephemeral-hdfs/conf/yarn-env.sh > Configuring /root/ephemeral-hdfs/conf/hdfs-site.xml > Configuring /root/ephemeral-hdfs/conf/hadoop-env.sh > Configuring /root/ephemeral-hdfs/conf/masters > Configuring /root/spark/conf/core-site.xml > Configuring /root/spark/conf/spark-defaults.conf > Configuring /root/spark/conf/spark-env.sh > Configuring /root/tachyon/conf/slaves > Configuring /root/tachyon/conf/workers > Configuring /root/tachyon/conf/tachyon-env.sh > Deploying Spark config files... > RSYNC'ing /root/spark/conf to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > Setting up scala > RSYNC'ing /root/scala to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > [timing] scala setup: 00h 00m 09s > Setting up spark > RSYNC'ing /root/spark to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > [timing] spark setup: 00h 01m 07s > Setting up ephemeral-hdfs > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > Connection to ec2-54-165-243-74.compute-1.amazonaws.com closed. > ec2-54-172-29-47.compute-1.amazonaws.com > Connection to ec2-54-88-245-107.compute-1.amazonaws.com closed. > Connection to ec2-54-172-29-47.compute-1.amazonaws.com closed. > ec2-54-165-131-210.compute-1.amazonaws.com > Connection to ec2-54-165-131-210.compute-1.amazonaws.com closed. > ec2-54-172-46-184.compute-1.amazonaws.com > Connection to ec2-54-172-46-184.compute-1.amazonaws.com closed. 
> RSYNC'ing /root/ephemeral-hdfs/conf to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > Formatting ephemeral HDFS namenode... > Warning: $HADOOP_HOME is deprecated. > > 16/01/18 08:12:39 INFO namenode.NameNode: STARTUP_MSG: > /************************************************************ > STARTUP_MSG: Starting NameNode > STARTUP_MSG: host = ip-172-31-24-124.ec2.internal/172.31.24.124 > STARTUP_MSG: args = [-format] > STARTUP_MSG: version = 1.0.4 > STARTUP_MSG: build = > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r > 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012 > ************************************************************/ > 16/01/18 08:12:39 INFO util.GSet: VM type = 64-bit > 16/01/18 08:12:39 INFO util.GSet: 2% max memory = 17.78 MB > 16/01/18 08:12:39 INFO util.GSet: capacity = 2^21 = 2097152 entries > 16/01/18 08:12:39 INFO util.GSet: recommended=2097152, actual=2097152 > 16/01/18 08:12:39 INFO namenode.FSNamesystem: fsOwner=root > 16/01/18 08:12:39 INFO namenode.FSNamesystem: supergroup=supergroup > 16/01/18 08:12:39 INFO namenode.FSNamesystem: isPermissionEnabled=false > 16/01/18 08:12:39 INFO namenode.FSNamesystem: > dfs.block.invalidate.limit=100 > 16/01/18 08:12:39 INFO namenode.FSNamesystem: isAccessTokenEnabled=false > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) > 16/01/18 08:12:39 INFO namenode.NameNode: Caching file names occuring more > than 10 times > 16/01/18 08:12:39 INFO common.Storage: Image file of size 110 saved in 0 > seconds. > 16/01/18 08:12:39 INFO common.Storage: Storage directory > /mnt/ephemeral-hdfs/dfs/name has been successfully formatted. 
> 16/01/18 08:12:39 INFO namenode.NameNode: SHUTDOWN_MSG: > /************************************************************ > SHUTDOWN_MSG: Shutting down NameNode at ip-172-31-24-124.ec2.internal/ > 172.31.24.124 > ************************************************************/ > Starting ephemeral HDFS... > Warning: $HADOOP_HOME is deprecated. > > starting namenode, logging to > /mnt/ephemeral-hdfs/logs/hadoop-root-namenode-ip-172-31-24-124.ec2.internal > Error: Could not find or load main class crayondata.com.log > ec2-54-172-29-47.compute-1.amazonaws.com: Warning: $HADOOP_HOME is > deprecated. > ec2-54-172-29-47.compute-1.amazonaws.com: > ec2-54-172-29-47.compute-1.amazonaws.com: starting datanode, logging to > /mnt/ephemeral-hdfs/logs/hadoop-root-datanode-ip-172-31-29-54.ec2.internal.out > ec2-54-172-46-184.compute-1.amazonaws.com: Warning: $HADOOP_HOME is > deprecated. > ec2-54-172-46-184.compute-1.amazonaws.com: > ec2-54-172-46-184.compute-1.amazonaws.com: starting datanode, logging to > /mnt/ephemeral-hdfs/logs/hadoop-root-datanode-ip-172-31-30-167.ec2.internal.out > ec2-54-165-131-210.compute-1.amazonaws.com: Warning: $HADOOP_HOME is > deprecated. > ec2-54-165-131-210.compute-1.amazonaws.com: > ec2-54-165-131-210.compute-1.amazonaws.com: starting datanode, logging to > /mnt/ephemeral-hdfs/logs/hadoop-root-datanode-ip-172-31-23-10.ec2.internal.out > ec2-54-88-245-107.compute-1.amazonaws.com: Warning: $HADOOP_HOME is > deprecated. > ec2-54-88-245-107.compute-1.amazonaws.com: > ec2-54-88-245-107.compute-1.amazonaws.com: starting datanode, logging to > /mnt/ephemeral-hdfs/logs/hadoop-root-datanode-ip-172-31-30-81.ec2.internal.out > ec2-54-165-243-74.compute-1.amazonaws.com: Warning: $HADOOP_HOME is > deprecated. 
> ec2-54-165-243-74.compute-1.amazonaws.com: > ec2-54-165-243-74.compute-1.amazonaws.com: starting datanode, logging to > /mnt/ephemeral-hdfs/logs/hadoop-root-datanode-ip-172-31-19-61.ec2.internal.out > ec2-52-90-186-83.compute-1.amazonaws.com: Warning: Permanently added ' > ec2-52-90-186-83.compute-1.amazonaws.com,172.31.24.124' (ECDSA) to the > list of known hosts. > ec2-52-90-186-83.compute-1.amazonaws.com: Warning: $HADOOP_HOME is > deprecated. > ec2-52-90-186-83.compute-1.amazonaws.com: > ec2-52-90-186-83.compute-1.amazonaws.com: starting secondarynamenode, > logging to > /mnt/ephemeral-hdfs/logs/hadoop-root-secondarynamenode-ip-172-31-24-124.ec2.internal.out > [timing] ephemeral-hdfs setup: 00h 00m 12s > Setting up persistent-hdfs > Pseudo-terminal will not be allocated because stdin is not a terminal. > Pseudo-terminal will not be allocated because stdin is not a terminal. > Pseudo-terminal will not be allocated because stdin is not a terminal. > Pseudo-terminal will not be allocated because stdin is not a terminal. > Pseudo-terminal will not be allocated because stdin is not a terminal. > RSYNC'ing /root/persistent-hdfs/conf to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > Formatting persistent HDFS namenode... > Warning: $HADOOP_HOME is deprecated. 
> > 16/01/18 08:12:50 INFO namenode.NameNode: STARTUP_MSG: > /************************************************************ > STARTUP_MSG: Starting NameNode > STARTUP_MSG: host = ip-172-31-24-124.ec2.internal/172.31.24.124 > STARTUP_MSG: args = [-format] > STARTUP_MSG: version = 1.0.4 > STARTUP_MSG: build = > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r > 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012 > ************************************************************/ > 16/01/18 08:12:50 INFO util.GSet: VM type = 64-bit > 16/01/18 08:12:50 INFO util.GSet: 2% max memory = 17.78 MB > 16/01/18 08:12:50 INFO util.GSet: capacity = 2^21 = 2097152 entries > 16/01/18 08:12:50 INFO util.GSet: recommended=2097152, actual=2097152 > 16/01/18 08:12:50 INFO namenode.FSNamesystem: fsOwner=root > 16/01/18 08:12:50 INFO namenode.FSNamesystem: supergroup=supergroup > 16/01/18 08:12:50 INFO namenode.FSNamesystem: isPermissionEnabled=false > 16/01/18 08:12:50 INFO namenode.FSNamesystem: > dfs.block.invalidate.limit=100 > 16/01/18 08:12:50 INFO namenode.FSNamesystem: isAccessTokenEnabled=false > accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) > 16/01/18 08:12:50 INFO namenode.NameNode: Caching file names occuring more > than 10 times > 16/01/18 08:12:50 INFO common.Storage: Image file of size 110 saved in 0 > seconds. > 16/01/18 08:12:50 INFO common.Storage: Storage directory > /vol/persistent-hdfs/dfs/name has been successfully formatted. > 16/01/18 08:12:50 INFO namenode.NameNode: SHUTDOWN_MSG: > /************************************************************ > SHUTDOWN_MSG: Shutting down NameNode at ip-172-31-24-124.ec2.internal/ > 172.31.24.124 > ************************************************************/ > Persistent HDFS installed, won't start by default... > [timing] persistent-hdfs setup: 00h 00m 06s > Setting up spark-standalone > RSYNC'ing /root/spark/conf to slaves... 
> ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > RSYNC'ing /root/spark-ec2 to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > ec2-54-165-243-74.compute-1.amazonaws.com: no > org.apache.spark.deploy.worker.Worker to stop > ec2-54-88-245-107.compute-1.amazonaws.com: no > org.apache.spark.deploy.worker.Worker to stop > ec2-54-172-29-47.compute-1.amazonaws.com: no > org.apache.spark.deploy.worker.Worker to stop > ec2-54-165-131-210.compute-1.amazonaws.com: no > org.apache.spark.deploy.worker.Worker to stop > ec2-54-172-46-184.compute-1.amazonaws.com: no > org.apache.spark.deploy.worker.Worker to stop > no org.apache.spark.deploy.master.Master to stop > starting org.apache.spark.deploy.master.Master, logging to > /root/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-ip-172-31-24-124.ec2.internal > crayondata.com.out > ec2-54-88-245-107.compute-1.amazonaws.com: starting > org.apache.spark.deploy.worker.Worker, logging to > /root/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-ip-172-31-30-81.ec2.internal.out > ec2-54-165-243-74.compute-1.amazonaws.com: starting > org.apache.spark.deploy.worker.Worker, logging to > /root/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-ip-172-31-19-61.ec2.internal.out > ec2-54-172-46-184.compute-1.amazonaws.com: starting > org.apache.spark.deploy.worker.Worker, logging to > /root/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-ip-172-31-30-167.ec2.internal.out > ec2-54-165-131-210.compute-1.amazonaws.com: starting > org.apache.spark.deploy.worker.Worker, logging to > 
/root/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-ip-172-31-23-10.ec2.internal.out > ec2-54-172-29-47.compute-1.amazonaws.com: starting > org.apache.spark.deploy.worker.Worker, logging to > /root/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-ip-172-31-29-54.ec2.internal.out > [timing] spark-standalone setup: 00h 00m 39s > Setting up tachyon > RSYNC'ing /root/tachyon to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > ./tachyon/setup.sh: line 5: /root/tachyon/bin/tachyon: No such file or > directory > ./tachyon/setup.sh: line 9: /root/tachyon/bin/tachyon-start.sh: No such > file or directory > [timing] tachyon setup: 00h 00m 04s > Setting up rstudio > spark-ec2/setup.sh: line 110: ./rstudio/setup.sh: No such file or directory > [timing] rstudio setup: 00h 00m 00s > Setting up ganglia > RSYNC'ing /etc/ganglia to slaves... > ec2-54-165-243-74.compute-1.amazonaws.com > ec2-54-88-245-107.compute-1.amazonaws.com > ec2-54-172-29-47.compute-1.amazonaws.com > ec2-54-165-131-210.compute-1.amazonaws.com > ec2-54-172-46-184.compute-1.amazonaws.com > Shutting down GANGLIA gmond: [FAILED] > Starting GANGLIA gmond: [ OK ] > Shutting down GANGLIA gmond: [FAILED] > Starting GANGLIA gmond: [ OK ] > Connection to ec2-54-165-243-74.compute-1.amazonaws.com closed. > Shutting down GANGLIA gmond: [FAILED] > Starting GANGLIA gmond: [ OK ] > Connection to ec2-54-88-245-107.compute-1.amazonaws.com closed. > Shutting down GANGLIA gmond: [FAILED] > Starting GANGLIA gmond: [ OK ] > Connection to ec2-54-172-29-47.compute-1.amazonaws.com closed. > Shutting down GANGLIA gmond: [FAILED] > Starting GANGLIA gmond: [ OK ] > Connection to ec2-54-165-131-210.compute-1.amazonaws.com closed. 
> Shutting down GANGLIA gmond: [FAILED] > Starting GANGLIA gmond: [ OK ] > Connection to ec2-54-172-46-184.compute-1.amazonaws.com closed. > Shutting down GANGLIA gmetad: [FAILED] > Starting GANGLIA gmetad: [ OK ] > Stopping httpd: [FAILED] > Starting httpd: httpd: Syntax error on line 154 of > /etc/httpd/conf/httpd.conf: Cannot load > /etc/httpd/modules/mod_authz_core.so into server: > /etc/httpd/modules/mod_authz_core.so: cannot open shared object file: No > such file or directory > [FAILED] > [timing] ganglia setup: 00h 00m 04s > Connection to ec2-52-90-186-83.compute-1.amazonaws.com closed. > Spark standalone cluster started at > http://ec2-52-90-186-83.compute-1.amazonaws.com:8080 > Ganglia started at > http://ec2-52-90-186-83.compute-1.amazonaws.com:5080/ganglia > Done! > >