Re: broker logs unusable after KAFKA-6150: Make Repartition Topics Transient

2018-07-15 Thread Guozhang Wang
Hello Henry, I saw your detailed explanations on the JIRA ticket.

Thinking about this a bit more, I think moving the start offset should be
as important an event as truncating the log tail or rolling a new segment
(their log4j levels are all INFO). But I also agree that, with Streams'
periodic delete-records requests to brokers, it could become too noisy as a
regularly frequent operation.

Note that besides handling the delete records request, there are a couple
of other call traces that can lead to this entry:

1. A replica truncating its start offset due to a fetch response received
from the leader.
2. A partition truncating its start offset due to a time- or size-based
retention policy.

For these two other cases, the deletion event should be logged at INFO, as
they are important and happen less frequently. Given that, I'd suggest the
following: if you do not care about cases 1) / 2) either, you can simply
override these two classes in the log4j properties file to WARN --- I do
not think that is a hack, since overriding per class is a very common and
useful way for users to control logging granularity more finely. If you do
care about the other cases that may trigger log head truncation, but want
to reduce the logging overhead only for events triggered by handling a
delete records request, we can file a JIRA for that and maybe consider
augmenting the call trace to let the Log class decide whether or not to
log the event.
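
For reference, the per-class override is just two lines in the broker-side
log4j properties file (a sketch, using the two class names mentioned in
this thread):

# raise only these two loggers to WARN; the broker-wide default stays at INFO
log4j.logger.kafka.server.epoch.LeaderEpochFileCache=WARN
log4j.logger.kafka.log.Log=WARN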


Guozhang


On Fri, Jul 6, 2018 at 5:58 PM, Henry Cai 
wrote:

> On the server side, we use INFO for everything.
>
> The log4j setting can be a temporary hack, but we would like to keep INFO
> logging as the default.
>
> I think those two logging lines can simply be downgraded to DEBUG;
> moving the start offset is not eventful enough to be logged at INFO.
>
> On Fri, Jul 6, 2018 at 4:08 PM, Guozhang Wang  wrote:
>
> > Hello Henry,
> >
> > What are your server-side log4j settings? Could you use WARN on these two
> > classes: kafka.server.epoch.LeaderEpochFileCache and kafka.log.Log?
> >
> >
> >
> > Guozhang
> >
> >
> > On Fri, Jul 6, 2018 at 3:08 PM, Henry Cai 
> > wrote:
> >
> > > @guozhang
> > >
> > > After we moved to kafka-1.1.0 for our Kafka streams application, our
> > broker
> > > logs are polluted with loggings such as:
> > >
> > > [2018-07-06 21:59:26,170] INFO Cleared earliest 0 entries from epoch cache
> > > based on passed offset 301483601 leaving 1 in EpochFile for partition
> > > inflight_spend_unified_staging-single_spend_agg_window_AD_GROUP-repartition-26
> > > (kafka.server.epoch.LeaderEpochFileCache)
> > >
> > > [2018-07-06 21:59:26,170] INFO [Log
> > > partition=inflight_spend_unified_staging-single_spend_agg_window_AD_GROUP-repartition-1,
> > > dir=/mnt/kafka] Incrementing log start offset to 240548684 (kafka.log.Log)
> > >
> > >
> > > Thousands of them keep rolling into the broker logs, which makes the
> > > server-side logs unusable.
> > >
> > >
> > > Looks like this is triggered by DELETE_RECORDS requests from the
> > > StreamsThread, introduced by 'KAFKA-6150: Make Repartition Topics
> > > Transient'.
> > >
> > >
> > > Can you suppress these two INFO log lines on the server side if they are
> > > triggered by AdminClient.deleteRecords()?
> > >
> > >
> > > We have thousands of partitions per broker; those deletes were happening
> > > too frequently.
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang


[ANNOUNCE] Apache Kafka 1.0.2 Released

2018-07-15 Thread Matthias J. Sax
The Apache Kafka community is pleased to announce the release of Apache
Kafka 1.0.2.

This is a bug fix release that includes fixes and improvements from 27
JIRAs, including fixes for a few critical bugs.

All of the changes in this release can be found in the release notes:

https://www.apache.org/dist/kafka/1.0.2/RELEASE_NOTES.html

You can download the source and binary release (Scala 2.11 and Scala
2.12) from:

https://kafka.apache.org/downloads#1.0.2

---

Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming the
input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might capture
every change to a table.


With these APIs, Kafka can be used for two broad classes of applications:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react to
the streams of data.


Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.


A big thank you to the following 32 contributors to this release!

Matthias J. Sax, Rajini Sivaram, Anna Povzner, Jason Gustafson, Ewen
Cheslack-Postava, Guozhang Wang, Dong Lin, huxi, John Roesler, Ismael
Juma, Jun Rao, Manikumar Reddy O, Max Zheng, Mickael Maison, Radai
Rosenblatt, Randall Hauch, Robert Yokota, Vahid Hashemian, fredfp, hmcl,
ro7m, tedyu, wushujames, Attila Sasvari, Bill Bejeck, Colin Patrick
McCabe, Damian Guy, Dhruvil Shah, Gitomain, Gunnar Morling, Jagadesh
Adireddi, Jarek Rudzinski

We welcome your help and feedback. For more information on how to report
problems, and to get involved, visit the project website at
https://kafka.apache.org/


Thank you!


Regards,
 -Matthias





Re: Apache Kafka QuickStart

2018-07-15 Thread Matthias J. Sax
>  After executing the first command to start ZooKeeper, do I have to open a
> Terminal to run the Kafka Server?

Yes. What exactly is the problem there? You only say: "but run into problem
in Step 2".

If you follow the quickstart, you download the binaries and start
multiple processes (ZooKeeper, a Kafka broker, a Kafka producer, a Kafka
consumer, etc.). It's easiest to start a terminal for each process, so you
can interact with each of them.

You can also, of course, start ZooKeeper and the broker in the background
and use a single terminal for both.
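
For example, following the quickstart's extracted-tarball layout (scripts
and config files as shipped in the Kafka binary distribution), both server
processes can be sent to the background from one terminal:

# start ZooKeeper and a broker in the background from a single terminal
bin/zookeeper-server-start.sh config/zookeeper.properties > zk.log 2>&1 &
bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &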


-Matthias

On 7/11/18 5:33 AM, Nicholas Chang wrote:
> Hi,
> I am new to Apache Kafka and I am trying to work through the QuickStart, but
> ran into a problem in Step 2. After executing the first command to start
> ZooKeeper, do I have to open a new Terminal to run the Kafka Server? I even
> tried "How To Install Apache Kafka on Ubuntu 14.04 | DigitalOcean" but cannot
> get past step 6. I am using Ubuntu 16.04 LTS. I look forward to receiving
> your reply soon.
> 
> 
> 
> Regards,
> Nicholas Chang
> 
> 





Re: Questions about state stores and KSQL

2018-07-15 Thread Matthias J. Sax
To understand joins better, you might want to check out:
https://www.confluent.io/blog/crossing-streams-joins-apache-kafka/

KSQL uses the same join semantics as Kafka Streams.
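
As a rough illustration, the KSQL join discussed below corresponds to a
Kafka Streams stream-table join along these lines (a sketch only: topic
names follow the example below, value types are simplified to plain
strings, and the order value is assumed to carry just the userId):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class OrdersEnrichedSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");
        KTable<String, String> users = builder.table("users");

        orders
            // re-key the stream by userId so it matches the table's key
            .selectKey((orderId, userId) -> userId)
            // LEFT JOIN, as in the KSQL statement: unmatched orders get a null user
            .leftJoin(users, (order, user) -> order + "," + user)
            .to("orders_enriched");
    }
}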


-Matthias


On 7/11/18 8:01 AM, Guozhang Wang wrote:
> Hello Jonathan,
> 
> At a very high level, a KSQL statement is compiled into a Kafka Streams
> topology for execution. The concept of "state stores" belongs to Kafka
> Streams, not to KSQL: inside the topology, the processor nodes that need
> stateful processing, like joins, have one or more state stores associated
> with them.
> 
> Back to your example: this KSQL statement will be compiled into a Kafka
> Streams topology that roughly looks like this:
> 
> --
> 
> Kafka topic that defines stream "orders" --> source node --> join node
> (queries the "users-state" store, as generated below) --> sink node -->
> Kafka topic that defines stream "orders_enriched"
> 
> Kafka topic that defines table "users" --> source node --> materialization
> node (associated with a state store, let's name it "users-state")
> 
> --
> 
> That is, one state store will be used to materialize the table changelog
> stream for "users", which the other stream's records will query against.
> 
> In Kafka Streams, you can query a state store via the interactive
> query mechanism:
> 
> https://kafka.apache.org/documentation/streams/developer-guide/interactive-queries.html
> 
> It is not supported in KSQL yet.
> 
> 
> 
> Guozhang
> 
> 
> 
> 
> On Wed, Jul 11, 2018 at 1:41 AM, Jonathan Roy <
> jonathan@caldera.com.invalid> wrote:
> 
>> Hi Kafka users,
>>
>> I am very new to Kafka and more globally to stream processing, and am
>> trying to understand some of the concepts used by Kafka. From what I
>> understand, a key-value state store is created on each processor node that
>> performs stateful operations such as aggregations or joins. Let’s take an
>> example. I have an ‘orders’ stream and a ‘users’ table, and I want to
>> enrich the orders events with the corresponding users information, using
>> the KSQL CLI:
>>
>> CREATE STREAM orders_enriched AS SELECT o.id , o.article,
>> o.quantity, o.userId, u.name, u.address, u.email FROM orders o LEFT JOIN
>> users u ON o.userId = u.id ;
>>
>> Where is located the state store in this case? What will it contain
>> exactly? Is it possible to query it from another node?
>>
>> Thanks beforehand for your help!
>>
>> Jonathan
> 
> 
> 
> 
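
As a side note on the interactive-query mechanism above, here is a rough
Java sketch of such a lookup (assumptions: a running KafkaStreams instance,
String key/value types, and the illustrative store name "users-state" from
the example; KSQL itself does not expose this yet):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class UsersStateLookup {
    // 'streams' must be the running KafkaStreams instance that owns the store
    static String lookupUser(KafkaStreams streams, String userId) {
        ReadOnlyKeyValueStore<String, String> store =
                streams.store("users-state", QueryableStoreTypes.keyValueStore());
        return store.get(userId); // local lookup; remote instances need an RPC layer
    }
}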





Re: [VOTE] 2.0.0 RC2

2018-07-15 Thread Satish Duggana
+1 (non-binding)

- Ran testAll/releaseTarGzAll on the 2.0.0-rc2 tag
- Ran through the core/streams quickstarts on builds generated from the tag
- Ran a few internal apps targeting topics on a 3-node cluster.

Thanks,
Satish.

On Sun 15 Jul, 2018, 9:55 PM Rajini Sivaram, 
wrote:

> Hi Ismael,
>
> Thank you for pointing that out. I have re-uploaded the RC2 artifacts to
> maven including streams-scala_2.12. Also submitted a PR to update build &
> release scripts to include this.
>
> Thank you,
>
> Rajini
>
>
>
> On Fri, Jul 13, 2018 at 7:19 AM, Ismael Juma  wrote:
>
> > Hi Rajini,
> >
> > Thanks for generating the RC. It seems like the kafka-streams-scala 2.12
> > artifact is missing from the Maven repository:
> >
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > Since this is the first time we are publishing this artifact, it is
> > possible that this never worked properly.
> >
> > Ismael
> >
> > On Tue, Jul 10, 2018 at 10:17 AM Rajini Sivaram  >
> > wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > >
> > > This is the third candidate for release of Apache Kafka 2.0.0.
> > >
> > >
> > > This is a major version release of Apache Kafka. It includes 40 new
> > > KIPs and several critical bug fixes. Please see the 2.0.0 release plan
> > > for more details:
> > >
> > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
> > >
> > >
> > > A few notable highlights:
> > >
> > >    - Prefixed wildcard ACLs (KIP-290), fine-grained ACLs for CreateTopics
> > >      (KIP-277)
> > >    - SASL/OAUTHBEARER implementation (KIP-255)
> > >    - Improved quota communication and customization of quotas (KIP-219,
> > >      KIP-257)
> > >    - Efficient memory usage for down conversion (KIP-283)
> > >    - Fix log divergence between leader and follower during fast leader
> > >      failover (KIP-279)
> > >    - Drop support for Java 7 and remove deprecated code including old
> > >      Scala clients
> > >    - Connect REST extension plugin, support for externalizing secrets and
> > >      improved error handling (KIP-285, KIP-297, KIP-298 etc.)
> > >    - Scala API for Kafka Streams and other Streams API improvements
> > >      (KIP-270, KIP-150, KIP-245, KIP-251 etc.)
> > >
> > >
> > > Release notes for the 2.0.0 release:
> > >
> > > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/RELEASE_NOTES.html
> > >
> > >
> > > *** Please download, test and vote by Friday, July 13, 4pm PT
> > >
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > >
> > > http://kafka.apache.org/KEYS
> > >
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > >
> > > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/
> > >
> > >
> > > * Maven artifacts to be voted upon:
> > >
> > > https://repository.apache.org/content/groups/staging/
> > >
> > >
> > > * Javadoc:
> > >
> > > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/javadoc/
> > >
> > >
> > > * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
> > >
> > > https://github.com/apache/kafka/tree/2.0.0-rc2
> > >
> > >
> > >
> > > * Documentation:
> > >
> > > http://kafka.apache.org/20/documentation.html
> > >
> > >
> > > * Protocol:
> > >
> > > http://kafka.apache.org/20/protocol.html
> > >
> > >
> > > * Successful Jenkins builds for the 2.0 branch:
> > >
> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/72/
> > >
> > > System tests:
> > > https://jenkins.confluent.io/job/system-test-kafka/job/2.0/27/
> > >
> > >
> > > /**
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Rajini
> > >
> >
>


Re: Error while creating ephemeral at /brokers/ids/BROKER_ID

2018-07-15 Thread Jonathan Santilli
Thanks a lot for your reply, Ismael. I have filed the JIRA:
https://issues.apache.org/jira/browse/KAFKA-7165

Cheers!
--
Jonathan




On Sat, Jul 14, 2018 at 5:25 PM Ismael Juma  wrote:

> Hi Jonathan,
>
> Can you please file a JIRA?
>
> Ismael
>
> On Sat, Jul 14, 2018 at 3:44 AM Jonathan Santilli <
> jonathansanti...@gmail.com> wrote:
>
> > Hello, hope you all are Ok,
> >
> > I would like to know if this is the expected behavior:
> >
> > ERROR Error while creating ephemeral at /brokers/ids/BROKER_ID, node
> > already exists and owner '*216186131422332301*' does not match current
> > session '*288330817911521280*' (kafka.zk.KafkaZkClient$CheckedEphemeral)
> >
> > INFO Result of znode creation at /brokers/ids/BROKER_ID is: NODEEXISTS
> > (kafka.zk.KafkaZkClient)
> >
> > ERROR Uncaught exception in scheduled task 'isr-expiration'
> > (kafka.utils.KafkaScheduler)
> >
> >
> > After that, the logs constantly keep showing the following:
> >
> >
> > INFO [Partition TOPIC_NAME-PARTITION-ID broker=BROKER_ID] Shrinking ISR
> > from 1,3,2 to 1 (kafka.cluster.Partition)
> > INFO [Partition TOPIC_NAME-PARTITION-ID broker=BROKER_ID] Cached zkVersion
> > [0] not equal to that in zookeeper, skip updating ISR
> > (kafka.cluster.Partition)
> > INFO [Partition __consumer_offsets-PARTITION-ID broker=BROKER_ID] Shrinking
> > ISR from 1,2,3 to 1 (kafka.cluster.Partition)
> > INFO [Partition __consumer_offsets-PARTITION-ID broker=BROKER_ID] Cached
> > zkVersion [139] not equal to that in zookeeper, skip updating ISR
> > (kafka.cluster.Partition)
> >
> >
> > The only way to recover was restarting the Broker.
> >
> > I will really appreciate any clue about this error,
> >
> >
> > Cheers!
> > --
> > Santilli Jonathan
> >
>


-- 
Santilli Jonathan


Re: [VOTE] 2.0.0 RC2

2018-07-15 Thread Rajini Sivaram
Hi Ismael,

Thank you for pointing that out. I have re-uploaded the RC2 artifacts to
maven including streams-scala_2.12. Also submitted a PR to update build &
release scripts to include this.

Thank you,

Rajini



On Fri, Jul 13, 2018 at 7:19 AM, Ismael Juma  wrote:

> Hi Rajini,
>
> Thanks for generating the RC. It seems like the kafka-streams-scala 2.12
> artifact is missing from the Maven repository:
>
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> Since this is the first time we are publishing this artifact, it is
> possible that this never worked properly.
>
> Ismael
>
> On Tue, Jul 10, 2018 at 10:17 AM Rajini Sivaram 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> >
> > This is the third candidate for release of Apache Kafka 2.0.0.
> >
> >
> > This is a major version release of Apache Kafka. It includes 40 new KIPs
> > and several critical bug fixes. Please see the 2.0.0 release plan for more
> > details:
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
> >
> >
> > A few notable highlights:
> >
> >    - Prefixed wildcard ACLs (KIP-290), fine-grained ACLs for CreateTopics
> >      (KIP-277)
> >    - SASL/OAUTHBEARER implementation (KIP-255)
> >    - Improved quota communication and customization of quotas (KIP-219,
> >      KIP-257)
> >    - Efficient memory usage for down conversion (KIP-283)
> >    - Fix log divergence between leader and follower during fast leader
> >      failover (KIP-279)
> >    - Drop support for Java 7 and remove deprecated code including old
> >      Scala clients
> >    - Connect REST extension plugin, support for externalizing secrets and
> >      improved error handling (KIP-285, KIP-297, KIP-298 etc.)
> >    - Scala API for Kafka Streams and other Streams API improvements
> >      (KIP-270, KIP-150, KIP-245, KIP-251 etc.)
> >
> >
> > Release notes for the 2.0.0 release:
> >
> > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/RELEASE_NOTES.html
> >
> >
> > *** Please download, test and vote by Friday, July 13, 4pm PT
> >
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >
> > http://kafka.apache.org/KEYS
> >
> >
> > * Release artifacts to be voted upon (source and binary):
> >
> > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/
> >
> >
> > * Maven artifacts to be voted upon:
> >
> > https://repository.apache.org/content/groups/staging/
> >
> >
> > * Javadoc:
> >
> > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/javadoc/
> >
> >
> > * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
> >
> > https://github.com/apache/kafka/tree/2.0.0-rc2
> >
> >
> >
> > * Documentation:
> >
> > http://kafka.apache.org/20/documentation.html
> >
> >
> > * Protocol:
> >
> > http://kafka.apache.org/20/protocol.html
> >
> >
> > * Successful Jenkins builds for the 2.0 branch:
> >
> > Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/72/
> >
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/2.0/27/
> >
> >
> > /**
> >
> >
> > Thanks,
> >
> >
> > Rajini
> >
>


Re: Accessing Zookeeper and Kafka through designated ports in a container

2018-07-15 Thread M. Manna
So was it your SASL setup or something else?

Regards,

On Sun, 15 Jul 2018 at 08:40, Mich Talebzadeh 
wrote:

> resolved this now
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
>
>
>
>
> On Sat, 14 Jul 2018 at 22:40, Mich Talebzadeh 
> wrote:
>
> > Some additional info from the client side
> >
> > [zk: rhes75:4300(CONNECTED) 3] ls /brokers/topics    # gives the list of topics
> > [test, md]
> > [zk: rhes75:4300(CONNECTED) 5] ls /brokers/ids
> > [1]
> > [zk: rhes75:4300(CONNECTED) 4] get /brokers/ids/1    # detailed info about broker ID '1'
> >
> > {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://localhost:12092"],"jmx_port":,"host":"localhost","timestamp":"1531512017557","port":12092,"version":4}
> > cZxid = 0x16f
> > ctime = Fri Jul 13 21:00:17 BST 2018
> > mZxid = 0x16f
> > mtime = Fri Jul 13 21:00:17 BST 2018
> > pZxid = 0x16f
> > cversion = 0
> > dataVersion = 0
> > aclVersion = 0
> > ephemeralOwner = 0x164953a1010
> > dataLength = 192
> > numChildren = 0
> >
> > [zk: rhes75:4300(CONNECTED) 6] get /controller
> > {"version":1,"brokerid":1,"timestamp":"1531512017622"}
> > cZxid = 0x170
> > ctime = Fri Jul 13 21:00:17 BST 2018
> > mZxid = 0x170
> > mtime = Fri Jul 13 21:00:17 BST 2018
> > pZxid = 0x170
> > cversion = 0
> > dataVersion = 0
> > aclVersion = 0
> > ephemeralOwner = 0x164953a1010
> > dataLength = 54
> > numChildren = 0
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> >
> >
> >
> >
> > On Sat, 14 Jul 2018 at 21:20, Mich Talebzadeh  >
> > wrote:
> >
> >> Apologies, a correction: I got the port number wrong. It should be 4300.
> >>
> >> on the docker itself confirming Zookeeper port
> >>
> >> netstat -plten | grep 4300
> >> (Not all processes could be identified, non-owned process info
> >>  will not be shown, you would have to be root to see it all.)
> >> tcp   0   0 0.0.0.0:4300   0.0.0.0:*   LISTEN   1000   4638251   7026/java
> >>
> >> jps | grep 7026
> >> 7026 QuorumPeerMain
> >>
> >> On the remote host
> >>
> >> hduser@rhes564: /home/hduser/zookeeper-3.4.6/bin> ./zkCli.sh -server
> >> rhes75:4300
> >> Connecting to rhes75:4300
> >> 2018-07-14 21:27:35,560 [myid:] - INFO  [main:Environment@100] - Client
> >> environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09
> GMT
> >> 2018-07-14 21:27:35,563 [myid:] - INFO  [main:Environment@100] - Client
> >> environment:host.name=rhes564
> >> 2018-07-14 21:27:35,563 [myid:] - INFO  [main:Environment@100] - Client
> >> environment:java.version=1.8.0_77
> >> 2018-07-14 21:27:35,565 [myid:] - INFO  [main:Environment@100] - Client
> >> environment:java.vendor=Oracle Corporation
> >> 2018-07-14 21:27:35,565 [myid:] - INFO  [main:Environment@100] - Client
> >> environment:java.home=/usr/java/jdk1.8.0_77/jre
> >> 2018-07-14 21:27:35,565 [myid:] - INFO  [main:Environment@100] - Client
> >>
> environment:java.class.path=/home/hduser/zookeeper-3.4.6/bin/../build/classes:/home/hduser/zookeeper-3.4.6/bin/../build/lib/*.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/home/hduser/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/home/hduser/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/home/hduser/zookeeper-3.4.6/bin/../conf:
> >> 2018-07-14 21:27:35,565 [myid:] - INFO  [main:Environment@100] - Client
> >>
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> >> 2018-07-14 21:27:35,566 [myid:] - INFO  [main:Environment@100] - Client
> >> environment:java.io.tmpdir=/tmp
> >> 2018-07-14 21:27:35,566 [myid:] - INFO  [main:Environment@100] - Client

Re: Real time streaming as a microservice

2018-07-15 Thread Mich Talebzadeh
Hi Deepak,

I will put it there once all the bits and pieces come together. At the
moment I am drawing the diagrams. I will let you know.

Definitely everyone's contribution is welcome.

Regards,

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sun, 15 Jul 2018 at 09:16, Deepak Sharma  wrote:

> Is it on GitHub, Mich?
> I would love to use the Flink and Spark editions and add some use cases
> from my side.
>
> Thanks
> Deepak
>
> On Sun, Jul 15, 2018, 13:38 Mich Talebzadeh 
> wrote:
>
>> Hi all,
>>
>> I have now managed to deploy both ZooKeeper and Kafka as microservices
>> using docker images.
>>
>> The idea came to me as I wanted to create lightweight processes for both
>> ZooKeeper and Kafka to be used as services for Flink and Spark
>> simultaneously.
>>
>> In this design both Flink and Spark rely on streaming market data
>> messages published through Kafka. My current design is simple: one docker
>> for ZooKeeper and another for Kafka.
>>
>> [root@rhes75 ~]# docker ps -a
>> CONTAINER ID   IMAGE              COMMAND                  CREATED        STATUS                  PORTS                                            NAMES
>> 05cf097ac139   ches/kafka         "/start.sh"              9 hours ago    Up 9 hours              0.0.0.0:7203->7203/tcp, 0.0.0.0:9092->9092/tcp   kafka
>> b173e455cc80   jplock/zookeeper   "/opt/zookeeper/bin/…"   10 hours ago   Up 10 hours (healthy)   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp       zookeeper
>>
>> Note that the docker ports are exposed to the physical host that is
>> running the containers.
>>
>> A test topic is simply created as follows:
>> ${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper rhes75:2181
>> --replication-factor 1 --partitions 1 --topic test
>>
>> Note that rhes75 is the host that houses the dockers, and port 2181 is
>> the ZooKeeper port used by the ZooKeeper docker and mapped to the host.
>>
>> The Spark streaming job uses the speed layer of the Lambda architecture
>> to write selected market data to an HBase table (HBase requires
>> connectivity to a ZooKeeper ensemble). For HBase I specified a ZooKeeper
>> instance running on another host, and HBase works fine.
>>
>> Anyway I will provide further info and diagrams.
>>
>> Cheers,
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>>
>>
>>
>>
>> On Sun, 15 Jul 2018 at 08:40, Mich Talebzadeh 
>> wrote:
>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>>
>>>
>>> Thanks got it sorted.
>>>
>>> Regards,
>>>
>>>
>>> On Tue, 10 Jul 2018 at 09:24, Mich Talebzadeh 
>>> wrote:
>>>
 Thanks Rahul.

 This is the outcome of

 [root@rhes75 ~]# iptables -t nat -L -n
 Chain PREROUTING (policy ACCEPT)
 target      prot opt source            destination
 DOCKER      all  --  0.0.0.0/0         0.0.0.0/0          ADDRTYPE match dst-type LOCAL
 Chain INPUT (policy ACCEPT)
 target      prot opt source            destination
 Chain OUTPUT (policy ACCEPT)
 target      prot opt source            destination
 DOCKER      all  --  0.0.0.0/0        !127.0.0.0/8        ADDRTYPE match dst-type LOCAL
 Chain POSTROUTING (policy ACCEPT)
 target      prot opt source            destination
 MASQUERADE  all  --  172.17.0.0/16     0.0.0.0/0
 MASQUERADE  all  --  172.18.0.0/16

Re: Real time streaming as a microservice

2018-07-15 Thread Deepak Sharma
Is it on GitHub, Mich?
I would love to use the Flink and Spark editions and add some use cases from
my side.

Thanks
Deepak

On Sun, Jul 15, 2018, 13:38 Mich Talebzadeh 
wrote:

> Hi all,
>
> I have now managed to deploy both ZooKeeper and Kafka as microservices
> using docker images.
>
> The idea came to me as I wanted to create lightweight processes for both
> ZooKeeper and Kafka to be used as services for Flink and Spark
> simultaneously.
>
> In this design both Flink and Spark rely on streaming market data messages
> published through Kafka. My current design is simple: one docker for
> ZooKeeper and another for Kafka.
>
> [root@rhes75 ~]# docker ps -a
> CONTAINER ID   IMAGE              COMMAND                  CREATED        STATUS                  PORTS                                            NAMES
> 05cf097ac139   ches/kafka         "/start.sh"              9 hours ago    Up 9 hours              0.0.0.0:7203->7203/tcp, 0.0.0.0:9092->9092/tcp   kafka
> b173e455cc80   jplock/zookeeper   "/opt/zookeeper/bin/…"   10 hours ago   Up 10 hours (healthy)   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp       zookeeper
>
> Note that the docker ports are exposed to the physical host that is running
> the containers.
>
> A test topic is simply created as follows:
> ${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper rhes75:2181
> --replication-factor 1 --partitions 1 --topic test
>
> Note that rhes75 is the host that houses the dockers, and port 2181 is the
> ZooKeeper port used by the ZooKeeper docker and mapped to the host.
>
> The Spark streaming job uses the speed layer of the Lambda architecture to
> write selected market data to an HBase table (HBase requires connectivity
> to a ZooKeeper ensemble). For HBase I specified a ZooKeeper instance
> running on another host, and HBase works fine.
>
> Anyway I will provide further info and diagrams.
>
> Cheers,
>
>
> Dr Mich Talebzadeh
>
>
>
>
>
>
>
> On Sun, 15 Jul 2018 at 08:40, Mich Talebzadeh 
> wrote:
>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>>
>>
>> Thanks got it sorted.
>>
>> Regards,
>>
>>
>> On Tue, 10 Jul 2018 at 09:24, Mich Talebzadeh 
>> wrote:
>>
>>> Thanks Rahul.
>>>
>>> This is the outcome of
>>>
>>> [root@rhes75 ~]# iptables -t nat -L -n
>>> Chain PREROUTING (policy ACCEPT)
>>> target      prot opt source            destination
>>> DOCKER      all  --  0.0.0.0/0         0.0.0.0/0          ADDRTYPE match dst-type LOCAL
>>> Chain INPUT (policy ACCEPT)
>>> target      prot opt source            destination
>>> Chain OUTPUT (policy ACCEPT)
>>> target      prot opt source            destination
>>> DOCKER      all  --  0.0.0.0/0        !127.0.0.0/8        ADDRTYPE match dst-type LOCAL
>>> Chain POSTROUTING (policy ACCEPT)
>>> target      prot opt source            destination
>>> MASQUERADE  all  --  172.17.0.0/16     0.0.0.0/0
>>> MASQUERADE  all  --  172.18.0.0/16     0.0.0.0/0
>>> RETURN      all  --  192.168.122.0/24  224.0.0.0/24
>>> RETURN      all  --  192.168.122.0/24  255.255.255.255
>>> MASQUERADE  tcp  --  192.168.122.0/24 !192.168.122.0/24   masq ports: 1024-65535
>>> MASQUERADE  udp  --  192.168.122.0/24 !192.168.122.0/24   masq ports: 1024-65535
>>> MASQUERADE  all  --  192.168.122.0/24 !192.168.122.0/24
>>> Chain DOCKER (2 references)
>>> target      prot opt source            destination
>>> RETURN      all  --  0.0.0.0/0         0.0.0.0/0
>>> RETURN      all  --  0.0.0.0/0         0.0.0.0/0
>>>
>>> So basically I need to connect to container from another host as the
>>> link points it out.
>>>
>>> My docker is already running.
>>>
> >> [root@rhes75 ~]# docker ps -a
> >> CONTAINER ID   IMAGE    COMMAND   CREATED        STATUS   PORTS   NAMES
> >> 8dd84a174834   ubuntu   "bash"    19 hours ago   
Re: Real time streaming as a microservice

2018-07-15 Thread Mich Talebzadeh
Hi all,

I have now managed to deploy both ZooKeeper and Kafka as microservices
using docker images.

The idea came to me as I wanted to create lightweight processes for both
ZooKeeper and Kafka to be used as services for Flink and Spark
simultaneously.

In this design both Flink and Spark rely on streaming market data messages
published through Kafka. My current design is simple: one docker for
ZooKeeper and another for Kafka.

[root@rhes75 ~]# docker ps -a
CONTAINER ID   IMAGE              COMMAND                  CREATED        STATUS                  PORTS                                            NAMES
05cf097ac139   ches/kafka         "/start.sh"              9 hours ago    Up 9 hours              0.0.0.0:7203->7203/tcp, 0.0.0.0:9092->9092/tcp   kafka
b173e455cc80   jplock/zookeeper   "/opt/zookeeper/bin/…"   10 hours ago   Up 10 hours (healthy)   2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp       zookeeper

Note that the docker ports are exposed to the physical host that is running
the containers.
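
(For reference, mappings like these are created with -p at container start;
a sketch using the image names shown above. Note that the ches/kafka image
also expects broker/ZooKeeper settings via environment variables, which are
omitted here.)

docker run -d --name zookeeper -p 2181:2181 jplock/zookeeper
docker run -d --name kafka -p 9092:9092 -p 7203:7203 ches/kafka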

A test topic is simply created as follows:
${KAFKA_HOME}/bin/kafka-topics.sh --create --zookeeper rhes75:2181
--replication-factor 1 --partitions 1 --topic test

Note that rhes75 is the host that houses the dockers, and port 2181 is the
ZooKeeper port used by the ZooKeeper docker and mapped to the host.
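
A quick end-to-end check of the publish path (a sketch, assuming the broker
advertises itself on the mapped rhes75:9092):

# produce one line, then read it back
echo "hello" | ${KAFKA_HOME}/bin/kafka-console-producer.sh --broker-list rhes75:9092 --topic test
${KAFKA_HOME}/bin/kafka-console-consumer.sh --bootstrap-server rhes75:9092 --topic test --from-beginning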

The Spark streaming job uses the speed layer of the Lambda architecture to
write selected market data to an HBase table (HBase requires connectivity to
a ZooKeeper ensemble). For HBase I specified a ZooKeeper instance running on
another host, and HBase works fine.

Anyway I will provide further info and diagrams.

Cheers,


Dr Mich Talebzadeh







On Sun, 15 Jul 2018 at 08:40, Mich Talebzadeh 
wrote:

>
> Dr Mich Talebzadeh
>
>
>
>
>
> Thanks got it sorted.
>
> Regards,
>
>
> On Tue, 10 Jul 2018 at 09:24, Mich Talebzadeh 
> wrote:
>
>> Thanks Rahul.
>>
>> This is the outcome of
>>
>> [root@rhes75 ~]# iptables -t nat -L -n
>> Chain PREROUTING (policy ACCEPT)
>> target      prot opt source            destination
>> DOCKER      all  --  0.0.0.0/0         0.0.0.0/0          ADDRTYPE match dst-type LOCAL
>> Chain INPUT (policy ACCEPT)
>> target      prot opt source            destination
>> Chain OUTPUT (policy ACCEPT)
>> target      prot opt source            destination
>> DOCKER      all  --  0.0.0.0/0        !127.0.0.0/8        ADDRTYPE match dst-type LOCAL
>> Chain POSTROUTING (policy ACCEPT)
>> target      prot opt source            destination
>> MASQUERADE  all  --  172.17.0.0/16     0.0.0.0/0
>> MASQUERADE  all  --  172.18.0.0/16     0.0.0.0/0
>> RETURN      all  --  192.168.122.0/24  224.0.0.0/24
>> RETURN      all  --  192.168.122.0/24  255.255.255.255
>> MASQUERADE  tcp  --  192.168.122.0/24 !192.168.122.0/24   masq ports: 1024-65535
>> MASQUERADE  udp  --  192.168.122.0/24 !192.168.122.0/24   masq ports: 1024-65535
>> MASQUERADE  all  --  192.168.122.0/24 !192.168.122.0/24
>> Chain DOCKER (2 references)
>> target      prot opt source            destination
>> RETURN      all  --  0.0.0.0/0         0.0.0.0/0
>> RETURN      all  --  0.0.0.0/0         0.0.0.0/0
>>
>> So basically I need to connect to container from another host as the link
>> points it out.
>>
>> My docker is already running.
>>
>> [root@rhes75 ~]# docker ps -a
>> CONTAINER ID   IMAGE    COMMAND   CREATED        STATUS        PORTS   NAMES
>> 8dd84a174834   ubuntu   "bash"    19 hours ago   Up 11 hours           dockerZooKeeperKafka
>>
>> What would be an option for adding a fixed port to the running container?
>>
>> Regards,
>>
>> Dr Mich Talebzadeh
>>
>>
>>

Re: Accessing Zookeeper and Kafka through designated ports in a container

2018-07-15 Thread Mich Talebzadeh
resolved this now

Thanks

Dr Mich Talebzadeh







On Sat, 14 Jul 2018 at 22:40, Mich Talebzadeh 
wrote:

> Some additional info from the client side
>
> [zk: rhes75:4300(CONNECTED) 3] ls /brokers/topics    # gives the list of topics
> [test, md]
> [zk: rhes75:4300(CONNECTED) 5] ls /brokers/ids
> [1]
> [zk: rhes75:4300(CONNECTED) 4] get /brokers/ids/1    # detailed info about broker ID '1'
>
> {"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://localhost:12092"],"jmx_port":,"host":"localhost","timestamp":"1531512017557","port":12092,"version":4}
> cZxid = 0x16f
> ctime = Fri Jul 13 21:00:17 BST 2018
> mZxid = 0x16f
> mtime = Fri Jul 13 21:00:17 BST 2018
> pZxid = 0x16f
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x164953a1010
> dataLength = 192
> numChildren = 0
>
> [zk: rhes75:4300(CONNECTED) 6] get /controller
> {"version":1,"brokerid":1,"timestamp":"1531512017622"}
> cZxid = 0x170
> ctime = Fri Jul 13 21:00:17 BST 2018
> mZxid = 0x170
> mtime = Fri Jul 13 21:00:17 BST 2018
> pZxid = 0x170
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x164953a1010
> dataLength = 54
> numChildren = 0
>
> Dr Mich Talebzadeh
>
>
>
>
>
>
>
> On Sat, 14 Jul 2018 at 21:20, Mich Talebzadeh 
> wrote:
>
>> Apologies, a correction: I got the port number wrong. It should be 4300.
>>
>> on the docker itself confirming Zookeeper port
>>
>> netstat -plten | grep 4300
>> (Not all processes could be identified, non-owned process info
>>  will not be shown, you would have to be root to see it all.)
>> tcp   0   0 0.0.0.0:4300   0.0.0.0:*   LISTEN   1000   4638251   7026/java
>>
>> jps | grep 7026
>> 7026 QuorumPeerMain
>>
>> On the remote host
>>
>> hduser@rhes564: /home/hduser/zookeeper-3.4.6/bin> ./zkCli.sh -server
>> rhes75:4300
>> Connecting to rhes75:4300
>> 2018-07-14 21:27:35,560 [myid:] - INFO  [main:Environment@100] - Client
>> environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
>> 2018-07-14 21:27:35,563 [myid:] - INFO  [main:Environment@100] - Client
>> environment:host.name=rhes564
>> 2018-07-14 21:27:35,563 [myid:] - INFO  [main:Environment@100] - Client
>> environment:java.version=1.8.0_77
>> 2018-07-14 21:27:35,565 [myid:] - INFO  [main:Environment@100] - Client
>> environment:java.vendor=Oracle Corporation
>> 2018-07-14 21:27:35,565 [myid:] - INFO  [main:Environment@100] - Client
>> environment:java.home=/usr/java/jdk1.8.0_77/jre
>> 2018-07-14 21:27:35,565 [myid:] - INFO  [main:Environment@100] - Client
>> environment:java.class.path=/home/hduser/zookeeper-3.4.6/bin/../build/classes:/home/hduser/zookeeper-3.4.6/bin/../build/lib/*.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/home/hduser/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/home/hduser/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/home/hduser/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/home/hduser/zookeeper-3.4.6/bin/../conf:
>> 2018-07-14 21:27:35,565 [myid:] - INFO  [main:Environment@100] - Client
>> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
>> 2018-07-14 21:27:35,566 [myid:] - INFO  [main:Environment@100] - Client
>> environment:java.io.tmpdir=/tmp
>> 2018-07-14 21:27:35,566 [myid:] - INFO  [main:Environment@100] - Client
>> environment:java.compiler=
>> 2018-07-14 21:27:35,566 [myid:] - INFO  [main:Environment@100] - Client
>> environment:os.name=Linux
>> 2018-07-14 21:27:35,566 [myid:] - INFO  [main:Environment@100] - Client
>> environment:os.arch=amd64
>> 2018-07-14 21:27:35,566 [myid:] - INFO  [main:Environment@100] - Client
>> environment:os.version=2.6.18-92.el5
>> 

Re: Real time streaming as a microservice

2018-07-15 Thread Mich Talebzadeh
Dr Mich Talebzadeh





Thanks, got it sorted.

Regards,


On Tue, 10 Jul 2018 at 09:24, Mich Talebzadeh 
wrote:

> Thanks Rahul.
>
> This is the outcome of
>
> [root@rhes75 ~]# iptables -t nat -L -n
> Chain PREROUTING (policy ACCEPT)
> target      prot opt source            destination
> DOCKER      all  --  0.0.0.0/0         0.0.0.0/0          ADDRTYPE match dst-type LOCAL
> Chain INPUT (policy ACCEPT)
> target      prot opt source            destination
> Chain OUTPUT (policy ACCEPT)
> target      prot opt source            destination
> DOCKER      all  --  0.0.0.0/0        !127.0.0.0/8        ADDRTYPE match dst-type LOCAL
> Chain POSTROUTING (policy ACCEPT)
> target      prot opt source            destination
> MASQUERADE  all  --  172.17.0.0/16     0.0.0.0/0
> MASQUERADE  all  --  172.18.0.0/16     0.0.0.0/0
> RETURN      all  --  192.168.122.0/24  224.0.0.0/24
> RETURN      all  --  192.168.122.0/24  255.255.255.255
> MASQUERADE  tcp  --  192.168.122.0/24 !192.168.122.0/24   masq ports: 1024-65535
> MASQUERADE  udp  --  192.168.122.0/24 !192.168.122.0/24   masq ports: 1024-65535
> MASQUERADE  all  --  192.168.122.0/24 !192.168.122.0/24
> Chain DOCKER (2 references)
> target      prot opt source            destination
> RETURN      all  --  0.0.0.0/0         0.0.0.0/0
> RETURN      all  --  0.0.0.0/0         0.0.0.0/0
>
> So basically I need to connect to container from another host as the link
> points it out.
>
> My docker is already running.
>
> [root@rhes75 ~]# docker ps -a
> CONTAINER ID   IMAGE    COMMAND   CREATED        STATUS        PORTS   NAMES
> 8dd84a174834   ubuntu   "bash"    19 hours ago   Up 11 hours           dockerZooKeeperKafka
>
> What would be an option for adding a fixed port to the running container?
>
> Regards,
>
> Dr Mich Talebzadeh
>
>
>
>
>
>
>
> On Tue, 10 Jul 2018 at 08:35, Rahul Singh 
> wrote:
>
>> Seems like you need to expose your port via docker run or docker-compose.
>>
>>
>> https://docs.docker.com/v17.09/engine/userguide/networking/default_network/binding/
>>
>>
>>
>> --
>> Rahul Singh
>> rahul.si...@anant.us
>>
>> Anant Corporation
>> On Jul 9, 2018, 2:21 PM -0500, Mich Talebzadeh ,
>> wrote:
>> > Hi,
>> >
>> > I have now successfully created a docker for RHEL75 as follows:
>> >
>> > [root@rhes75 ~]# docker ps -a
>> > CONTAINER ID   IMAGE       COMMAND                  CREATED       STATUS       PORTS                          NAMES
>> > 816f07de15b1   zookeeper   "/docker-entrypoint.…"   2 hours ago   Up 2 hours   2181/tcp, 2888/tcp, 3888/tcp   dockerZooKeeper
>> > 8dd84a174834   ubuntu      "bash"                   6 hours ago   Up 6 hours                                  dockerZooKeeperKafka
>> >
>> > The first container is ready made for ZooKeeper that exposes the
>> zookeeper
>> > client port etc.
>> >
>> > The second container is an ubuntu shell which I installed both zookeeper
>> > and Kafka on it. They are both running in container dockerZooKeeperKafka
>> >
>> >
>> > hduser@8dd84a174834: /home/hduser/dba/bin> jps
>> > 5715 Kafka
>> > 5647 QuorumPeerMain
>> >
>> > hduser@8dd84a174834: /home/hduser/dba/bin> netstat -plten
>> > (Not all processes could be identified, non-owned process info
>> > will not be shown, you would have to be root to see it all.)
>> > Active Internet connections (only servers)
>> > Proto Recv-Q Send-Q Local Address Foreign Address
>> > State User Inode PID/Program name
>> > tcp 0 0 0.0.0.0: 0.0.0.0:*
>> > LISTEN 1005 2865148 5715/java
>> > tcp 0 0 0.0.0.0:35312 0.0.0.0:*
>> > LISTEN 1005 2865147 5715/java
>> > tcp 0 0 0.0.0.0:34193 0.0.0.0:*
>> > LISTEN 1005 2865151 5715/java
>> > tcp 0 0 0.0.0.0:22 0.0.0.0:*
>> > LISTEN 0 2757032 -
>> > tcp 0 0 0.0.0.0:40803 0.0.0.0:*
>> > LISTEN 1005 2852821 5647/java
>> >
>> >
>> > tcp 0 0 0.0.0.0:9092 0.0.0.0:* LISTEN 1005 2873507
>> > 

zookeeper as systemctl

2018-07-15 Thread Adrien Ruffie
Hello Kafka users,
without any response from the Zookeeper users, I am relying on you...


I have 2 questions for you.


What is the real difference between the 2 following commands? (I can't find
any documentation.)


zkServer.sh start-foreground

and

zkServer.sh start



My second question is: how can I correctly start my Zookeeper as a systemd
service (managed via systemctl)?

What is the common best-practice template to write into
/etc/systemd/system/zookeeper.service?

Do you use Restart=always? RestartSec=0s?

What does "After=network.target" do?

If my Zookeeper does not actually start within 300 sec, will the process be
shut down?


Do you have any example of zookeeper service file ?


Because our zookeeper.service is currently:


[Unit]
Description=ZooKeeper

[Service]
Type=simple
User=zookeeper
Group=zookeeper
ExecStart=/usr/local/zookeeper-3.4.9/bin/zkServer.sh start-foreground

TimeoutSec=300

[Install]
WantedBy=multi-user.target

--- But I found the following on a blog:


[Unit]
Description=Apache Zookeeper
After=network.target

[Service]
Type=forking
User=zookeeper
Group=zookeeper
SyslogIdentifier=zookeeper
Restart=always
RestartSec=0s
ExecStart=/usr/bin/zookeeper-server start
ExecStop=/usr/bin/zookeeper-server stop
ExecReload=/usr/bin/zookeeper-server restart

[Install]
WantedBy=multi-user.target
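
For reference, once a unit file like the one above is installed, it is
typically activated along these lines (standard systemd commands; the unit
name is assumed to match the file name above):

sudo systemctl daemon-reload
sudo systemctl enable --now zookeeper
systemctl status zookeeper
journalctl -u zookeeper -f   # follow the service logs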


Thank you very much and best regards

Adrien