Re: Zookeeper not starting

2023-05-18 Thread Mich Talebzadeh
Have you looked at the ZooKeeper log to see why it is failing to start?

On the ZooKeeper host, run:

ps -ef | grep zookeeper | grep zookeeper.log.dir --color

The output should look something like this:

hduser   26214     1  0 23:18 pts/2    00:00:01 /opt/jdk1.8.0_201/bin/java
-Dzookeeper.log.dir=/home/hduser/hadoop-3.1.1/logs
-Dzookeeper.log.file=zookeeper-hduser-server-rhes75.log

Go to that directory and look at the log file (in this example,
/home/hduser/hadoop-3.1.1/logs/zookeeper-hduser-server-rhes75.log)
to find the cause of the error.
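
For example, you could scan the tail of that log for errors (a minimal
sketch; the path is the one from the example above, so adjust it to your
own install, and the port check assumes the default client port 2181):

tail -n 100 /home/hduser/hadoop-3.1.1/logs/zookeeper-hduser-server-rhes75.log

grep -iE 'error|exception|address already in use' \
  /home/hduser/hadoop-3.1.1/logs/zookeeper-hduser-server-rhes75.log | tail -n 20

# "STARTED" followed by the process dying is often a sign that the client
# port is already in use by another process:
ss -ltnp | grep 2181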

HTH

Mich Talebzadeh

On Wed, 17 May 2023 at 14:08, Lemi Odidi  wrote:

> I need help in starting up ZooKeeper.
> See below the console output when I try to start it:
>
>
> 
> bitnami@ip-172-21-2-143:/var/log$ sudo /opt/bitnami/ctlscript.sh status
> zookeeper not running
> kafka already running
>
> bitnami@ip-172-21-2-143:/var/log$ sudo /opt/bitnami/ctlscript.sh stop
>
> /opt/bitnami/kafka/scripts/ctl.sh : kafka stopped
> /opt/bitnami/zookeeper/scripts/ctl.sh : zookeeper not running
>
> bitnami@ip-172-21-2-143:/var/log$ sudo /opt/bitnami/ctlscript.sh status
>
> zookeeper not running
> kafka not running
>
> bitnami@ip-172-21-2-143:/var/log$ sudo /opt/bitnami/ctlscript.sh start
>
> ZooKeeper JMX enabled by default
> Using config: /opt/bitnami/zookeeper/bin/../conf/zoo.cfg
> Starting zookeeper ... STARTED
> /opt/bitnami/zookeeper/scripts/ctl.sh : zookeeper could not be started
> /opt/bitnami/kafka/scripts/ctl.sh : kafka started
>
> bitnami@ip-172-21-2-143:/var/log$ sudo /opt/bitnami/ctlscript.sh status
> zookeeper not running
> kafka already running
> bitnami@ip-172-21-2-143:/var/log$
>


Re: Zookeeper not starting

2023-05-18 Thread Lemi Odidi
Does anyone have an idea how to get my zookeeper running?


On Wed, May 17, 2023 at 10:34 AM Lemi Odidi wrote:

> Hello Mich,
> Thank you for your response. I have 3 instances running Kafka in this
> environment. All of these instances had ZooKeeper running on them
> independently until the one in question stopped running.
> The Kafka version is kafka_2.12-1.1.0 (Kafka 1.1.0 built for Scala 2.12).
>
> Thanks for the help.
> Regards, Lemi
>
> On Wed, May 17, 2023 at 9:12 AM Mich Talebzadeh wrote:
>
>>
>> Hi Lemi,
>>
>> How many ZooKeepers do you have in your topology?
>>
>> Also, which ZooKeeper and Kafka versions are you running?
>>
>> If it says Kafka is running, there must be a ZooKeeper already started
>> somewhere.
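>>
>> One quick way to check is to query each candidate host directly (a
>> minimal sketch; <host> stands for each of your nodes, 2181 assumes the
>> default client port, and the zkServer.sh path is inferred from the
>> "Using config" line in your output):
>>
>> echo srvr | nc <host> 2181                     # prints version and mode (standalone/leader/follower)
>> /opt/bitnami/zookeeper/bin/zkServer.sh status  # run on the host itself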
>>
>> HTH
>>
>> Mich Talebzadeh


Re: Some questions on Kafka on order of messages with multiple partitions

2023-05-18 Thread Mich Talebzadeh
Thanks Peter.

This is my modified JSON file:

{
  "version":1,
  "partitions":[
    {"topic":"md","partition":0,"replicas":[1,3,7]},
    {"topic":"md","partition":1,"replicas":[2,8,9]},
    {"topic":"md","partition":2,"replicas":[7,10,12]},
    {"topic":"md","partition":3,"replicas":[1,12,9]},
    {"topic":"md","partition":4,"replicas":[7,9,11]},
    {"topic":"md","partition":5,"replicas":[3,11,1]},
    {"topic":"md","partition":6,"replicas":[10,1,7]},
    {"topic":"md","partition":7,"replicas":[8,7,3]},
    {"topic":"md","partition":8,"replicas":[2,3,10]}
  ]
}

Running the following:

kafka-reassign-partitions.sh --bootstrap-server rhes75:9092 \
  --reassignment-json-file ./reduce_replication_factor2.json --execute

and

kafka-topics.sh --describe --bootstrap-server rhes75:9092 --topic md

I get

Topic: md   TopicId: UfQly87bQPCbVKoH-PQheg PartitionCount: 9   ReplicationFactor: 3
Configs: segment.bytes=1073741824,retention.ms=1000,retention.bytes=1073741824
    Topic: md   Partition: 0   Leader: 1    Replicas: 1,3,7     Isr: 1,3,7
    Topic: md   Partition: 1   Leader: 2    Replicas: 2,8,9     Isr: 2,8,9
    Topic: md   Partition: 2   Leader: 7    Replicas: 7,10,12   Isr: 10,7,12
    Topic: md   Partition: 3   Leader: 1    Replicas: 1,12,9    Isr: 1,9,12
    Topic: md   Partition: 4   Leader: 7    Replicas: 7,9,11    Isr: 9,7,11
    Topic: md   Partition: 5   Leader: 3    Replicas: 3,11,1    Isr: 1,3,11
    Topic: md   Partition: 6   Leader: 10   Replicas: 10,1,7    Isr: 10,1,7
    Topic: md   Partition: 7   Leader: 8    Replicas: 8,7,3     Isr: 7,3,8
    Topic: md   Partition: 8   Leader: 2    Replicas: 2,3,10    Isr: 10,2,3


I trust this is correct?
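
As a quick confirmation (a sketch using the same JSON file and bootstrap
server as above), the tool's --verify option reports whether the
reassignment has completed:

kafka-reassign-partitions.sh --bootstrap-server rhes75:9092 \
  --reassignment-json-file ./reduce_replication_factor2.json --verify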

Thanks




Re: Some questions on Kafka on order of messages with multiple partitions

2023-05-18 Thread Peter Bukowinski
It looks like you successfully removed replicas from partitions 0, 1, and 2,
but partitions 3-8 still show 9 replicas. You probably intended to remove
them from all 9 partitions? You'll need to create another JSON file covering
partitions 3-8 to complete the task, along the lines of the sketch below.
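
For illustration, such a follow-up file could look like this (a sketch only;
the replica placements are illustrative and should be adjusted to balance
load across your brokers):

{
  "version":1,
  "partitions":[
    {"topic":"md","partition":3,"replicas":[1,12,9]},
    {"topic":"md","partition":4,"replicas":[7,9,11]},
    {"topic":"md","partition":5,"replicas":[3,11,1]},
    {"topic":"md","partition":6,"replicas":[10,1,7]},
    {"topic":"md","partition":7,"replicas":[8,7,3]},
    {"topic":"md","partition":8,"replicas":[2,3,10]}
  ]
}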

—
Peter


> On May 17, 2023, at 12:41 AM, Mich Talebzadeh wrote:
> 
> Thanks Miguel, I did that.
> 
> Based on the following JSON file:
> 
> {
>   "version":1,
>   "partitions":[
>     {"topic":"md","partition":0,"replicas":[1,3,7]},
>     {"topic":"md","partition":1,"replicas":[2,8,9]},
>     {"topic":"md","partition":2,"replicas":[7,10,12]}
>   ]
> }
> 
> I ran this command:
>
> kafka-reassign-partitions.sh --bootstrap-server rhes75:9092 \
>   --reassignment-json-file ./reduce_replication_factor2.json --execute
> 
> Current partition replica assignment
> {"version":1,"partitions":[{"topic":"md","partition":0,"replicas":[1,3,7],"log_dirs":["any","any","any"]},{"topic":"md","partition":1,"replicas":[2,8,9],"log_dirs":["any","any","any"]},{"topic":"md","partition":2,"replicas":[7,10,12],"log_dirs":["any","any","any"]}]}
> Save this to use as the --reassignment-json-file option during rollback
> Successfully started partition reassignments for md-0,md-1,md-2
> 
> kafka-topics.sh --describe --bootstrap-server rhes75:9092 --topic md
> 
> Topic: md   TopicId: UfQly87bQPCbVKoH-PQheg PartitionCount: 9   ReplicationFactor: 3
> Configs: segment.bytes=1073741824,retention.ms=1000,retention.bytes=1073741824
>     Topic: md   Partition: 0   Leader: 1    Replicas: 1,3,7                 Isr: 1,3,7
>     Topic: md   Partition: 1   Leader: 2    Replicas: 2,8,9                 Isr: 2,8,9
>     Topic: md   Partition: 2   Leader: 7    Replicas: 7,10,12               Isr: 10,7,12
>     Topic: md   Partition: 3   Leader: 1    Replicas: 1,12,9,11,7,3,10,8,2  Isr: 10,1,9,2,12,7,3,11,8
>     Topic: md   Partition: 4   Leader: 7    Replicas: 7,9,11,1,3,10,8,2,12  Isr: 10,1,9,2,12,7,3,11,8
>     Topic: md   Partition: 5   Leader: 3    Replicas: 3,11,1,7,10,8,2,12,9  Isr: 10,1,9,2,12,7,3,11,8
>     Topic: md   Partition: 6   Leader: 10   Replicas: 10,1,7,3,8,2,12,9,11  Isr: 10,1,9,2,12,7,3,11,8
>     Topic: md   Partition: 7   Leader: 8    Replicas: 8,7,3,10,2,12,9,11,1  Isr: 10,1,9,2,12,7,3,11,8
>     Topic: md   Partition: 8   Leader: 2    Replicas: 2,3,10,8,12,9,11,1,7  Isr: 10,1,9,2,12,7,3,11,8
> 
> 
> Mich
> 
> 
> 
> 
> On Wed, 17 May 2023 at 00:21, Miguel A. Sotomayor wrote:
> 
>> Hi Mich,
>> 
>> You can use the script `kafka-reassign-partitions.sh` to relocate replicas
>> or change the number of replicas; see the sketch below.
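>>
>> For example, the tool can also propose an assignment for you (a sketch;
>> topics.json is a hypothetical input file listing the topics to move, and
>> the broker list should name your target brokers):
>>
>> cat topics.json
>> {"version":1,"topics":[{"topic":"md"}]}
>>
>> kafka-reassign-partitions.sh --bootstrap-server rhes75:9092 \
>>   --topics-to-move-json-file topics.json --broker-list "1,2,3" --generate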
>> 
>> Regards
>> Miguel
>> 
>> On Tue, 16 May 2023 at 18:44, Mich Talebzadeh (mich.talebza...@gmail.com)
>> wrote:
>> 
>>> Thanks Peter. I meant reduce the replication factor from 9 to 3, not the
>>> number of partitions. Apologies for any confusion.
>>> 
>>> 
>>> Cheers
>>> 
>>> 
>>> On Tue, 16 May 2023 at 17:38, Peter Bukowinski  wrote:
>>> 
 Mich,
 
 It is not possible to reduce the number of partitions for a Kafka topic
 without deleting and recreating the topic. What previous responders to your
 inquiry noted is that your topic replication of 9 is high. What you want to
 do is reduce your replication, not the partitions. You can do this using
 the same JSON file you had the first time, with all 9 partitions. Just
 remove 6 of the 9 broker ids from the replicas array, e.g.
 
 cat reduce_replication_factor.json
 {
   "version":1,
   "partitions":[
     {"topic":"md","partition":0,"replicas":[12,10,8]},
     {"topic":"md","partition":1,"replicas":[9,8,2]},
     {"topic":"md","partition":2,"replicas":[11,2,12]},
     {"topic":"md","partition":3,"replicas":[1,12,9]},
     {"topic":"md","partition":4,"replicas":[7,9,11]},
     {"topic":"md","partition":5,"replicas":[3,11,1]}
   ]
 }
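
 To apply it, the same command as in the earlier messages would be used (a
 sketch; bootstrap server and file name as above):

 kafka-reassign-partitions.sh --bootstrap-server rhes75:9092 \
   --reassignment-json-file ./reduce_replication_factor.json --execute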
 
 You may want to adjust where the replicas sit to achieve a better balance
 across the cluster, but this arrangement only truncates the last 6 replicas
 from the list, so should complete quickly