It doesn't have to be the FQDN.

Here's how I run Kafka in a container:
docker run --name st-kafka -p 2181:2181 -p 9092:9092 \
  -e ADVERTISED_HOST=`docker-machine ip dev-st` \
  -e ADVERTISED_PORT=9092 \
  -d spotify/kafka

And then you have access to Kafka on the docker host VM from any other
machine.
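
For a quick sanity check from another machine, you can point the stock
console clients at the VM's IP. This is just a sketch; it assumes the
Kafka CLI tools are installed on that machine (substitute the VM's
address if `docker-machine ip dev-st` isn't available there):

echo "hello" | kafka-console-producer.sh \
  --broker-list $(docker-machine ip dev-st):9092 --topic test
kafka-console-consumer.sh \
  --zookeeper $(docker-machine ip dev-st):2181 --topic test --from-beginning
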
BTW, I use Spotify's image since it contains both ZK and Kafka, but I
think the latest version they built is 0.8.2.1, so you might have to
build a new image yourself if you need 0.9; that's trivial to do,
though.
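
Roughly (untested, and assuming the layout of their GitHub repo hasn't
changed), it's just bumping the version and rebuilding:

git clone https://github.com/spotify/docker-kafka.git
cd docker-kafka
# change the Kafka version the Dockerfile downloads from 0.8.2.1 to
# 0.9.0.0 (adjust the path if the Dockerfile lives in a subdirectory)
docker build -t yourname/kafka .
# then run it exactly like above, with yourname/kafka instead of
# spotify/kafka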

Marko Bonaći
Monitoring | Alerting | Anomaly Detection | Centralized Log Management
Solr & Elasticsearch Support
Sematext <http://sematext.com/> | Contact
<http://sematext.com/about/contact.html>

On Thu, Dec 17, 2015 at 11:33 AM, Ben Davison <ben.davi...@7digital.com>
wrote:

> Hi David,
>
> Are you running in Docker? Are you trying to connect from a remote box?
> We found we could connect locally, but couldn't connect from another
> remote host.
>
> (I've just started using kafka also)
>
> We had the same issue and found that host.name=<%=@ipaddress%> needed
> to be the FQDN of the box.
>
> Thanks,
>
> Ben
>
> On Thu, Dec 17, 2015 at 5:40 AM, David Montgomery
> <davidmontgom...@gmail.com> wrote:
>
> > Hi,
> >
> > I am very concerned about using Kafka in production given the errors
> > below.
> >
> > Now I'm having issues with my ZooKeeper. Other services use ZK; only
> > Kafka fails. I have 2 Kafka servers running 0.8.x. How do I resolve
> > this? I tried restarting the Kafka services. Below is my Kafka
> > server.properties file.
> >
> > Traceback (most recent call last):
> >   File "/usr/local/lib/python2.7/dist-packages/gevent-1.1b6-py2.7-linux-x86_64.egg/gevent/greenlet.py", line 523, in run
> >     result = self._run(*self.args, **self.kwargs)
> >   File "/var/feed-server/ad-server/pixel-server.py", line 145, in send_kafka_message
> >     res = producer.send_messages(topic, message)
> >   File "build/bdist.linux-x86_64/egg/kafka/producer/simple.py", line 52, in send_messages
> >     partition = self._next_partition(topic)
> >   File "build/bdist.linux-x86_64/egg/kafka/producer/simple.py", line 36, in _next_partition
> >     self.client.load_metadata_for_topics(topic)
> >   File "build/bdist.linux-x86_64/egg/kafka/client.py", line 383, in load_metadata_for_topics
> >     kafka.common.check_error(topic_metadata)
> >   File "build/bdist.linux-x86_64/egg/kafka/common.py", line 233, in check_error
> >     raise error_class(response)
> > LeaderNotAvailableError: TopicMetadata(topic='topic-test-production', error=5, partitions=[])
> > <Greenlet at 0x7f7acd1654b0: send_kafka_message('topic-test-production', '{"adfadfadf)> failed with LeaderNotAvailableError
> >
> > # limitations under the License.
> > # see kafka.server.KafkaConfig for additional details and defaults
> >
> > ############################# Server Basics #############################
> >
> > # The id of the broker. This must be set to a unique integer for each
> > # broker.
> > broker.id=<%=@broker_id%>
> > advertised.host.name=<%=@ipaddress%>
> > advertised.port=9092
> >
> > ############################# Socket Server Settings #############################
> >
> > # The port the socket server listens on
> > port=9092
> >
> > # Hostname the broker will bind to and advertise to producers and
> > # consumers. If not set, the server will bind to all interfaces and
> > # advertise the value returned from
> > # java.net.InetAddress.getCanonicalHostName().
> > host.name=<%=@ipaddress%>
> >
> > # The number of threads handling network requests
> > num.network.threads=2
> >
> > # The number of threads doing disk I/O
> > num.io.threads=2
> >
> > # The send buffer (SO_SNDBUF) used by the socket server
> > socket.send.buffer.bytes=1048576
> >
> > # The receive buffer (SO_RCVBUF) used by the socket server
> > socket.receive.buffer.bytes=1048576
> >
> > # The maximum size of a request that the socket server will accept
> > # (protection against OOM)
> > socket.request.max.bytes=104857600
> >
> >
> > ############################# Log Basics #############################
> >
> > # A comma separated list of directories under which to store log files
> > log.dirs=/tmp/kafka-logs
> >
> > # The number of logical partitions per topic per server. More partitions
> > # allow greater parallelism for consumption, but also mean more files.
> > num.partitions=2
> >
> > ############################# Log Flush Policy #############################
> >
> > # The following configurations control the flush of data to disk. This is
> > # among the most important performance knobs in kafka.
> > # There are a few important trade-offs here:
> > #    1. Durability: Unflushed data may be lost if you are not using
> > #       replication.
> > #    2. Latency: Very large flush intervals may lead to latency spikes when
> > #       the flush does occur, as there will be a lot of data to flush.
> > #    3. Throughput: The flush is generally the most expensive operation,
> > #       and a small flush interval may lead to excessive seeks.
> > # The settings below allow one to configure the flush policy to flush data
> > # after a period of time or every N messages (or both). This can be done
> > # globally and overridden on a per-topic basis.
> >
> > # The number of messages to accept before forcing a flush of data to disk
> > log.flush.interval.messages=10000
> >
> > # The maximum amount of time a message can sit in a log before we force
> > # a flush
> > log.flush.interval.ms=1000
> >
> > # Per-topic overrides for log.flush.interval.ms
> > #log.flush.intervals.ms.per.topic=topic1:1000, topic2:3000
> >
> > ############################# Log Retention Policy #############################
> >
> > # The following configurations control the disposal of log segments. The
> > # policy can be set to delete segments after a period of time, or after a
> > # given size has accumulated. A segment will be deleted whenever *either*
> > # of these criteria are met. Deletion always happens from the end of the
> > # log.
> >
> > # The minimum age of a log file to be eligible for deletion
> > log.retention.hours=168
> >
> > # A size-based retention policy for logs. Segments are pruned from the
> > # log as long as the remaining segments don't drop below
> > # log.retention.bytes.
> > #log.retention.bytes=1073741824
> >
> > # The maximum size of a log segment file. When this size is reached a new
> > # log segment will be created.
> > log.segment.bytes=536870912
> >
> > # The interval at which log segments are checked to see if they can be
> > # deleted according to the retention policies
> > log.cleanup.interval.mins=1
> >
> > ############################# Zookeeper #############################
> >
> > # Zookeeper connection string (see zookeeper docs for details).
> > # This is a comma separated list of host:port pairs, each corresponding
> > # to a zk server, e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
> > # You can also append an optional chroot string to the urls to specify
> > # the root directory for all kafka znodes.
> > #zookeeper.connect=localhost:2181
> > zookeeper.connect=<%=@zookeeper%>
> >
> >
> > # Timeout in ms for connecting to zookeeper
> > zookeeper.connection.timeout.ms=1000000
> >
> > num.replica.fetchers=4
> > default.replication.factor=2
> > delete.topic.enable=true
> > unclean.leader.election.enable=true
> >
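
A couple of quick checks on that config: you can verify that both
brokers are actually registered in ZooKeeper with the zookeeper-shell
script that ships with Kafka (assuming <zk-host> is one of your ZK
nodes):

zookeeper-shell.sh <zk-host>:2181 ls /brokers/ids
# expect one znode per live broker, e.g. [0, 1]; a broker missing
# here can't become leader for any partition

Also note that log.dirs=/tmp/kafka-logs is risky outside of testing,
since /tmp is typically cleared on reboot, and
unclean.leader.election.enable=true trades durability for availability.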
