[ https://issues.apache.org/jira/browse/KAFKA-1173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14215639#comment-14215639 ]

Ewen Cheslack-Postava commented on KAFKA-1173:
----------------------------------------------

You need ec2_associate_public_ip if you want to be able to SSH in externally. I 
think for your setup you'd want to set override.hostmanager.ignore_private_ip = 
false, whereas I specifically set it to true to get the public address. You'd 
only want to turn off ec2_associate_public_ip if everything, including Vagrant 
itself, was running inside your VPC, e.g. internal automated tests that 
leverage the Vagrant script to set up the cluster.
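Roughly, the two toggles interact like this in a Vagrantfile -- a sketch only, 
using the vagrant-aws and vagrant-hostmanager plugin option names; the exact 
variable names in the Kafka Vagrantfile may differ:

```ruby
# Sketch of the relevant settings (assumes the vagrant-aws and
# vagrant-hostmanager plugins; variable names here are illustrative).
Vagrant.configure("2") do |config|
  config.vm.provider :aws do |aws, override|
    # Disable only if Vagrant itself runs inside the VPC, e.g. internal
    # automated tests driving the cluster -- otherwise you can't SSH in.
    aws.associate_public_ip = true

    # true  => hostmanager records the public address (access from a laptop)
    # false => hostmanager records the private address (VPC-internal access)
    override.hostmanager.ignore_private_ip = true
  end
end
```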

To explain why I wrote things the way I did -- my primary use case is 
development of and on top of Kafka. I want to make it easy to set up the 
cluster, run admin commands to inspect its state, set up producer/consumer 
processes, and, when necessary, SSH in and debug things. A lot of that can be 
done from my laptop, so supporting access from outside the VPC is handy. My 
setup is definitely not secure, but that's kind of by design -- if I'm just 
dumping test data into the cluster, I'm not particularly concerned about the 
security of the data. (But it would suck if someone else randomly started 
publishing data to my cluster...).

I'm hesitant to add even more toggles -- eventually it gets so complex that 
it's easier for each person to write their own custom Vagrantfile. And the 
amount of effort to get up and running on EC2 is already pretty high. Thoughts 
on a good compromise? The primary use cases I was thinking about were Kafka 
development (i.e. a patch needs testing against a real cluster, or system tests 
are breaking and I need to reproduce the issue), demo/tutorial (i.e. help users 
get a real cluster they can test against up and running), and a testbed for 
application-level code and benchmarks. It sounds like you either have a 
slightly different use case or just a different workflow for using EC2 during 
development.

> Using Vagrant to get up and running with Apache Kafka
> -----------------------------------------------------
>
>                 Key: KAFKA-1173
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1173
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Joe Stein
>            Assignee: Ewen Cheslack-Postava
>             Fix For: 0.8.3
>
>         Attachments: KAFKA-1173.patch, KAFKA-1173_2013-12-07_12:07:55.patch, 
> KAFKA-1173_2014-11-11_13:50:55.patch, KAFKA-1173_2014-11-12_11:32:09.patch
>
>
> Vagrant has been getting a lot of pickup in the tech communities.  I have 
> found it very useful for development and testing and working with a few 
> clients now using it to help virtualize their environments in repeatable ways.
> Using Vagrant to get up and running.
> For 0.8.0 I have a patch on github https://github.com/stealthly/kafka
> 1) Install Vagrant http://www.vagrantup.com/
> 2) Install Virtual Box https://www.virtualbox.org/
> In the main kafka folder
> 1) ./sbt update
> 2) ./sbt package
> 3) ./sbt assembly-package-dependency
> 4) vagrant up
> once this is done 
> * Zookeeper will be running 192.168.50.5
> * Broker 1 on 192.168.50.10
> * Broker 2 on 192.168.50.20
> * Broker 3 on 192.168.50.30
> When you are all up and running you will be back at a command prompt.
> If you want you can log in to the machines using vagrant ssh <machineName>, 
> but you don't need to.
> You can access the brokers and zookeeper by their IP
> e.g.
> bin/kafka-console-producer.sh --broker-list 
> 192.168.50.10:9092,192.168.50.20:9092,192.168.50.30:9092 --topic sandbox
> bin/kafka-console-consumer.sh --zookeeper 192.168.50.5:2181 --topic sandbox 
> --from-beginning



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
