For political reasons, I've been asked to use Capistrano to deploy Chef 
Solo to a list of servers and interact with an API.
I'm new to Cap and new(ish) to Ruby, and I'm looking for some help on the best 
way to use server definitions in Cap 3.

From the Cap 3 docs 
<http://capistranorb.com/documentation/getting-started/preparing-your-application/>:
    # Extended Server Syntax
    # ======================
    # This can be used to drop a more detailed server
    # definition into the server list. The second argument
    # is something that quacks like a hash and can be used
    # to set extended properties on the server.
    server 'example.com', roles: %w{web app}, my_property: :my_value

vs
    # Simple Role Syntax
    # ==================
    # Supports bulk-adding hosts to roles, the primary
    # server in each group is considered to be the first
    # unless any hosts have the primary property set.
    role :app, %w{example.com}
    role :web, %w{example.com}
    role :db,  %w{example.com}

The latter was the de facto standard in Cap 2.

My server list looks like this:
server '10.0.1.1', roles: [:hdp_ambari, :hdp_namenode, :hdp_hbase, :hdp_monitor_server, :hdp_zookeeper, :hdp_ganglia], internal: 'ip-10-0-1-1.ec2.internal'
server '10.0.1.2', roles: [:hdp_namenode_secondary, :hdp_datanode, :hdp_zookeeper, :hdp_ganglia], internal: 'ip-10-0-1-2.ec2.internal'

I want to set :options to a hash mapping each key to the internal hostname(s) 
of the server(s) in a given role. What's the best way to do this?

So the desired output would be something like:
set :options, {
  default_filesystem: "hdfs://ip-10-0-1-1.ec2.internal:8020",
  hbase_master: 'ip-10-0-1-2.ec2.internal',
  job_history_server: 'ip-10-0-1-1.ec2.internal',
  namenode: 'ip-10-0-1-1.ec2.internal',
  namenode_secondary: 'ip-10-0-1-2.ec2.internal',
  yarn_node: 'ip-10-0-1-3.ec2.internal',
  zookeepers: ['ip-10-0-1-1.ec2.internal', 'ip-10-0-1-2.ec2.internal']
}

For which I was thinking...
task :set_configuration_options do
  # Fetching custom server.properties via http://tinyurl.com/ldkm7ov
  on roles(:hdp_namenode), in: :parallel do |server|
    namenode_internal = p server.internal
    filesystem_internal = "hdfs://#{namenode_internal}:8020"
  end

  options = {
    default_filesystem: filesystem_internal,
    namenode: namenode_internal
  }
end


set :options, options


But there's got to be a cleaner, better way, right? I'm having a tough time 
finding good examples or docs on this.
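One option is to skip the `on` blocks entirely: the `internal:` values are static metadata defined in the stage file, so the hash can be built from the server/role definitions without opening any SSH connections. Below is a plain-Ruby sketch of the idea — `HdpServer`, `SERVERS`, `servers_for`, `internal_for`, and `internals_for` are illustrative stand-ins for Capistrano's `Server` objects and `roles()` helper, not real Cap 3 API:

```ruby
# Stand-ins for Capistrano's Server objects: each host carries its roles
# and the custom :internal property from the server definition.
HdpServer = Struct.new(:host, :roles, :internal)

SERVERS = [
  HdpServer.new('10.0.1.1', [:hdp_namenode, :hdp_hbase, :hdp_zookeeper],
                'ip-10-0-1-1.ec2.internal'),
  HdpServer.new('10.0.1.2', [:hdp_namenode_secondary, :hdp_zookeeper],
                'ip-10-0-1-2.ec2.internal')
]

# Mimics roles(:x): every server tagged with the role.
def servers_for(role)
  SERVERS.select { |s| s.roles.include?(role) }
end

# Single-host roles map to one internal hostname...
def internal_for(role)
  servers_for(role).first.internal
end

# ...multi-host roles (like zookeeper) map to all of them.
def internals_for(role)
  servers_for(role).map(&:internal)
end

options = {
  default_filesystem: "hdfs://#{internal_for(:hdp_namenode)}:8020",
  namenode:           internal_for(:hdp_namenode),
  namenode_secondary: internal_for(:hdp_namenode_secondary),
  hbase_master:       internal_for(:hdp_hbase),
  zookeepers:         internals_for(:hdp_zookeeper)
}
```

In Capistrano 3 itself the equivalent lookup should be something along the lines of `roles(:hdp_namenode).first.properties.internal`, evaluated at load time in the stage file rather than inside a task — worth checking against the `Server#properties` API in your exact version (3.0.1).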



*Relevant details:*

Versions:

   - Ruby - 2.0.0p481
   - Capistrano - 3.0.1
   - Rake / Rails / etc - 10.1.0, 0.9.6

Platform:

   - Working on.... Mac OSX 10.9.2
   - Deploying to... CentOS 6.4

Logs:

   - Not relevant.  Asking for general advice, best practices re: extended 
   server syntax.

Files:

   - Capfile, deploy.rb, and stage attached.
   

set :blueprint_name, fetch(:stage).split(':').last
set :cluster_name, "HDP_#{fetch(:stage).split(':').last}"

# internal: is a custom server property we need to set configuration options in Ambari.
server '10.0.1.1', roles: [:hdp_ambari, :hdp_namenode, :hdp_hbase, :hdp_monitor_server, :hdp_zookeeper, :hdp_ganglia], internal: 'ip-10-0-1-47.ec2.internal'
server '10.0.1.2', roles: [:hdp_namenode_secondary, :hdp_datanode, :hdp_zookeeper, :hdp_ganglia], internal: 'ip-10-0-1-48.ec2.internal'
server '10.0.1.3', roles: [:hdp_datanode, :hdp_yarn, :hdp_job_history_server, :hdp_zookeeper, :hdp_ganglia], internal: 'ip-10-0-1-49.ec2.internal'
server '10.0.0.4', roles: [:hdp_client, :hdp_zookeeper_client, :hdp_ganglia], internal: 'ip-10-0-0-30.ec2.internal'

# Ambari API calls ONLY use the internal IP.  Everything else can use Amazon's internal DNS.
set :base_uri, '10.0.1.1:8080'   
set :username, 'admin'
set :password, 'admin'

task :set_configuration_options do
  # Build array (see staging and prod) that matches up roles with the internal IP.  See ambari_api.rb.
  # Fetching custom server.properties via http://tinyurl.com/ldkm7ov
  on roles(:hdp_namenode), in: :parallel do |server|
    namenode_internal = p server.internal
    filesystem_internal = "hdfs://#{namenode_internal}:8020"
  end
  on roles(:hdp_namenode_secondary), in: :parallel do |server|
    namenode_secondary_internal = p server.internal
  end
  on roles(:hdp_hbase), in: :parallel do |server|
    hbase_internal = p server.internal
  end
  on roles(:hdp_job_history_server), in: :parallel do |server|
    job_history_internal = p server.internal
  end
  on roles(:hdp_yarn), in: :parallel do |server|
    yarn_internal = p server.internal
  end
  on roles(:hdp_zookeeper), in: :parallel do |server|
    zookeeper_internal = p server.internal # needs to return an array for multiple internals
  end

  configuration_options = {
    default_filesystem: filesystem_internal,
    hbase_master: hbase_internal,
    job_history_server: job_history_internal,
    namenode: namenode_internal,
    namenode_secondary: namenode_secondary_internal,
    yarn_node: yarn_internal,
    zookeepers: zookeeper_internal
  }
  # Note: Could also throw base_uri in here, then set it below.
end

set :configuration_options, configuration_options

components = {
  hdp_namenode: %w[
    NAMENODE
    HDFS_CLIENT
    YARN_CLIENT
    TEZ_CLIENT
  ],
  hdp_namenode_secondary: %w[
    SECONDARY_NAMENODE
    HDFS_CLIENT
    YARN_CLIENT
    TEZ_CLIENT
  ],
  hdp_yarn: %w[
    APP_TIMELINE_SERVER
  ], 
  hdp_datanode: %w[
    HBASE_REGIONSERVER
    NODEMANAGER
    DATANODE
    HDFS_CLIENT
    YARN_CLIENT
    TEZ_CLIENT
  ],
  hdp_hbase: %w[
    HBASE_MASTER
  ],
  hdp_zookeeper: %w[
    ZOOKEEPER_SERVER
    ZOOKEEPER_CLIENT
  ],
  hdp_zookeeper_client: %w[
    ZOOKEEPER_CLIENT
  ],
  hdp_ambari: %w[
    AMBARI_SERVER
  ],
  hdp_client: %w[
    HBASE_CLIENT
    TEZ_CLIENT
    SQOOP
    PIG
    HDFS_CLIENT
    YARN_CLIENT
    MAPREDUCE2_CLIENT
    TEZ_CLIENT
  ],
  hdp_job_history_server: %w[
    HISTORYSERVER
  ],
  hdp_resource_manager: %w[
    RESOURCEMANAGER
  ],
  hdp_monitor_server: %w[
    GANGLIA_SERVER
    NAGIOS_SERVER
  ],  
  hdp_ganglia: %w[
    GANGLIA_MONITOR
  ],
}

host_groups = servers.collect do |server|
  {
    name: server[:roles].first.to_s,
    components: server[:roles].collect { |role| components[role] }.flatten.uniq,
    hosts: server[:internal]
  }
end

set :host_groups, host_groups
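For what it's worth, the role-to-component flattening above can be checked in isolation. This is a trimmed, self-contained version of the same `collect`/`flatten`/`uniq` logic, with plain hashes standing in for Capistrano's server objects (and only a subset of the real component map):

```ruby
# Trimmed component map (subset of the full one above).
components = {
  hdp_namenode:  %w[NAMENODE HDFS_CLIENT],
  hdp_zookeeper: %w[ZOOKEEPER_SERVER ZOOKEEPER_CLIENT],
  hdp_hbase:     %w[HBASE_MASTER]
}

# Plain-hash stand-ins for the server definitions.
servers = [
  { roles: [:hdp_namenode, :hdp_hbase, :hdp_zookeeper], internal: 'ip-10-0-1-1.ec2.internal' },
  { roles: [:hdp_zookeeper], internal: 'ip-10-0-1-2.ec2.internal' }
]

host_groups = servers.collect do |server|
  {
    # Named after the first role; components are deduplicated across roles.
    name: server[:roles].first.to_s,
    components: server[:roles].collect { |role| components[role] }.flatten.uniq,
    hosts: server[:internal]
  }
end
```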
#
# Put configuration shared among all children here
#
# Read more about configurations:
# https://github.com/railsware/capistrano-multiconfig/blob/master/README.md

require 'aws-sdk'
require 'json' # lowercase: 'JSON' only resolves on case-insensitive filesystems
require 'open-uri'
require 'active_support/all'
require 'retryable'
require 'heroku-api'
require 'chef_solo'

set :heroku_api_key, '[redacted]'

set :default_stage, 'development'

ask :branch, ENV['BRANCH'] || 'master'

set :application, proc { fetch(:stage).split(':').reverse[1] }

set :env, proc { fetch(:stage).split(':').last }

set :repo_url, proc { "g...@github.com:[redacted]/#{fetch(:application)}.git" }

set :deploy_to, proc { "/var/www/#{fetch(:application)}" }

set :scm, :git

set :format, :pretty

set :log_level, :debug

set :pty, true

set :linked_files, %w{config/database.yml}

set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}

set :storm_dir, proc { File.expand_path(File.join(File.dirname(__FILE__), '..', 'bin', 'storm-0.9.0.1')) }

set :default_env, { path: "/opt/ruby/bin:#{fetch(:storm_dir)}/bin:$PATH" }

set :keep_releases, 5

set :topology_bundler_group, 'default'

set :cluster_name, fetch(:topology_name)

set :nimbus_security_group, lambda { "[redacted]" }

set :username, ENV['USER'] || ENV['username']

set :chef_project_location, "/root/chef-repo"

def use_rvm(command, gemfile_path = nil)
  command_prefix = gemfile_path ? "cd #{gemfile_path}; BUNDLE_GEMFILE=./Gemfile" : ''
  "bash --login --rcfile /usr/local/rvm/scripts/rvm -c '#{command_prefix} #{command}'"
end
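As a sanity check on the quoting, here is the helper in isolation with the command string it produces (a self-contained copy; the rcfile path is taken from the original). Note the harmless leading space inside the quotes when `gemfile_path` is nil:

```ruby
# Wraps a command in an RVM-aware login shell; the optional gemfile_path
# prepends a cd and BUNDLE_GEMFILE so bundler runs against that project.
def use_rvm(command, gemfile_path = nil)
  command_prefix = gemfile_path ? "cd #{gemfile_path}; BUNDLE_GEMFILE=./Gemfile" : ''
  "bash --login --rcfile /usr/local/rvm/scripts/rvm -c '#{command_prefix} #{command}'"
end

use_rvm('bundle install', '/var/www/app')
# => "bash --login --rcfile /usr/local/rvm/scripts/rvm -c 'cd /var/www/app; BUNDLE_GEMFILE=./Gemfile bundle install'"
```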

def get_newest_code_for(project_location, git_branch = nil)
  git_branch ||= fetch(:branch)
  try_sudo "sh -c 'cd #{project_location} && git fetch origin && git reset --hard HEAD && git checkout #{git_branch.gsub('origin/', '')} && git reset --hard #{git_branch}'", as: nil
end

def build_runlist(roles)
  # One role[...] entry per role; return the StringIO itself so upload!
  # can stream it to the remote path.
  StringIO.new({ 'run_list' => roles.map { |role| "role[#{role}]" } }.to_json)
end
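Chef's run_list wants one `role[...]` string per role. For reference, a self-contained sketch of the JSON Chef Solo reads from /tmp/node.json:

```ruby
require 'json'

# One role[name] entry per role in the node's run_list.
roles = [:hdp_namenode, :hdp_zookeeper]
node_json = { 'run_list' => roles.map { |role| "role[#{role}]" } }.to_json
# => {"run_list":["role[hdp_namenode]","role[hdp_zookeeper]"]}
```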


namespace :chef_solo do
  desc 'Get newest code'
  task :update_git do
    # Cap 3 enumerates hosts with roles(:all); server(:all) is not a query method.
    on roles(:all) do |server|
      get_newest_code_for(fetch(:chef_project_location), fetch(:branch))
    end
  end

  desc 'Upload roles and execute Chef Solo'
  task :upload_and_run do
    on roles(:all) do |server|
      role_list = build_runlist(server.roles.to_a)
      upload! role_list, '/tmp/node.json'
      # Cap 3 / SSHKit uses execute in place of Cap 2's run/sudo helpers.
      execute "sudo sh -c 'cd #{fetch(:chef_project_location)} && chef-solo -c /root/chef-repo/client-config/solo.rb -j /tmp/node.json'"
    end
  end
end
require 'bundler'
Bundler.require :default

# Load DSL and Setup multiple configurations
# https://github.com/railsware/capistrano-multiconfig
require 'capistrano/multiconfig'

# Includes default deployment tasks
require 'capistrano/framework'

# Includes tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
#   https://github.com/capistrano/rvm
#   https://github.com/capistrano/rbenv
#   https://github.com/capistrano/chruby
#   https://github.com/capistrano/bundler
#   https://github.com/capistrano/rails/tree/master/assets
#   https://github.com/capistrano/rails/tree/master/migrations
#   https://github.com/railsware/capistrano-uptodate
#   https://github.com/railsware/capistrano-patch
#   https://github.com/railsware/capistrano-calendar
#
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
# require 'capistrano/bundler'
# require 'capistrano/rails/assets'
# require 'capistrano/rails/migrations'
# require 'capistrano/uptodate'
# require 'capistrano/patch'
# require 'capistrano/calendar'

$LOAD_PATH.unshift File.join(File.dirname(__FILE__), 'lib')

# Loads custom tasks
Dir.glob('tasks/*.cap').each { |r| import r }

# vim syntax=ruby
