Re: advertised.listeners

2017-05-31 Thread Darshan
Thanks, Rajini and Raghav. Let me try this. This is helpful.

On Wed, May 31, 2017 at 11:31 AM, Rajini Sivaram 
wrote:

> If you want to use different interfaces with the same security protocol,
> you can specify listener names. You can then also configure different
> security properties for internal/external if you need.
>
> listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093
>
> advertised.listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093
>
> listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
>
> inter.broker.listener.name=INTERNAL
>
> On Wed, May 31, 2017 at 6:22 PM, Raghav  wrote:
>
> > Hello Darshan
> >
> > Have you tried SSL://0.0.0.0:9093 ?
> >
> > Rajini had suggested something similar to me a week back while I was
> > trying to get an ACL-based setup.
> >
> > Thanks.
> >
> > On Wed, May 31, 2017 at 8:58 AM, Darshan 
> > wrote:
> >
> >> Hi
> >>
> >> Our Kafka broker has two IPs on two different interfaces.
> >>
> >> eth0 has 172.x.x.x for external leg
> >> eth1 has 1.x.x.x for internal leg
> >>
> >>
> >> Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on 1.x.x.x
> >> subnet.
> >>
> >> If we use advertised.listeners=SSL://172.x.x.x:9093, then the Producer can
> >> produce the message, but the Consumer cannot receive it.
> >>
> >> What value should we use for advertised.listeners so that the Producer can
> >> write and the Consumers can read?
> >>
> >> Thanks.
> >>
> >
> >
> >
> > --
> > Raghav
> >
>


Re: advertised.listeners

2017-05-31 Thread Rajini Sivaram
If you want to use different interfaces with the same security protocol,
you can specify listener names. You can then also configure different
security properties for internal/external if you need.

listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093

advertised.listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093

listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL

inter.broker.listener.name=INTERNAL
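
Clients on each network then bootstrap against the listener they can reach,
and the broker hands back the address of that same listener in its metadata.
A minimal sketch of the client side (addresses taken from above; the SSL
keystore/truststore settings are omitted and assumed to be configured as
usual):

# producer on the external 172.x.x.x network (producer.properties)
bootstrap.servers=172.x.x.x:9093
security.protocol=SSL

# consumer on the internal 1.x.x.x network (consumer.properties)
bootstrap.servers=1.x.x.x:9092
security.protocol=SSL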

On Wed, May 31, 2017 at 6:22 PM, Raghav  wrote:

> Hello Darshan
>
> Have you tried SSL://0.0.0.0:9093 ?
>
> Rajini had suggested something similar to me a week back while I was
> trying to get an ACL-based setup.
>
> Thanks.
>
> On Wed, May 31, 2017 at 8:58 AM, Darshan 
> wrote:
>
>> Hi
>>
>> Our Kafka broker has two IPs on two different interfaces.
>>
>> eth0 has 172.x.x.x for external leg
>> eth1 has 1.x.x.x for internal leg
>>
>>
>> Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on 1.x.x.x
>> subnet.
>>
>> If we use advertised.listeners=SSL://172.x.x.x:9093, then the Producer can
>> produce the message, but the Consumer cannot receive it.
>>
>> What value should we use for advertised.listeners so that the Producer can
>> write and the Consumers can read?
>>
>> Thanks.
>>
>
>
>
> --
> Raghav
>


Re: advertised.listeners

2017-05-31 Thread Raghav
Hello Darshan

Have you tried SSL://0.0.0.0:9093 ?

Rajini had suggested something similar to me a week back while I was trying
to get an ACL-based setup.

Thanks.

On Wed, May 31, 2017 at 8:58 AM, Darshan 
wrote:

> Hi
>
> Our Kafka broker has two IPs on two different interfaces.
>
> eth0 has 172.x.x.x for external leg
> eth1 has 1.x.x.x for internal leg
>
>
> Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on 1.x.x.x
> subnet.
>
> If we use advertised.listeners=SSL://172.x.x.x:9093, then the Producer can
> produce the message, but the Consumer cannot receive it.
>
> What value should we use for advertised.listeners so that the Producer can
> write and the Consumers can read?
>
> Thanks.
>



-- 
Raghav


advertised.listeners

2017-05-31 Thread Darshan
Hi

Our Kafka broker has two IPs on two different interfaces.

eth0 has 172.x.x.x for external leg
eth1 has 1.x.x.x for internal leg


Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on 1.x.x.x
subnet.

If we use advertised.listeners=SSL://172.x.x.x:9093, then the Producer can
produce the message, but the Consumer cannot receive it.

What value should we use for advertised.listeners so that the Producer can
write and the Consumers can read?

Thanks.


Re: Kafka write throughput tuning

2017-05-31 Thread jan
I'm no Kafka expert and I've forgotten what little I learnt; however,
there must be a bottleneck somewhere.

In your first instance of 3 partitions on 3 disks:
- Are all partitions growing?
- Are they growing about equally?

- What else might be the limiting aspect? (see the sketch below)
-- what's the cpu activity like? perhaps it's cpu bound (unlikely, but
please check)
-- are the disks directly attached and not sharing any write paths, or
are they virtual disks over a network? (I've actually seen virtuals
over a network - not pretty)
-- any other limiting factors you can see or imagine?
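
A quick way to check both while your perf test runs (sysstat tools; the
device names will differ on your box):

iostat -x 1        # per-disk utilisation and write throughput
mpstat -P ALL 1    # per-core cpu, to spot a single saturated core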

Also, please in future give a fuller picture of your setup, e.g. OS, OS
version, memory, number of CPUs, actual hardware (PCs are very
different from servers), etc.

cheers

jan

On 17/05/2017, 陈 建平Chen Jianping  wrote:
> Hi Group,
>
> Recently I am trying to tune Kafka write performance to improve throughput.
> On my Kafka broker, there are 3 disks (7200 RPM).
> For one disk, the Kafka write throughput can reach 150MB/s. In my opinion,
> if I send messages to topic test_p3 (which has 3 partitions located on
> different disks in the same server), the whole write throughput should reach
> 450 MB/s because the disks are written in parallel. However, the test result
> is still 150MB/s. Is there any reason that multiple disks don't multiply the
> write throughput? And is there any bottleneck for the Kafka write throughput,
> or do I need to update some configuration?
>
> I also tried sending messages to two different topics (whose partitions are
> on different disks of that server), and the total throughput only reaches
> 200 MB/s instead of the 300 MB/s I expected. Below are my Kafka configuration
> and settings. Thanks for any ideas or advice on it :)
>
> ##Kafka producer setting
> ./kafka-run-class org.apache.kafka.tools.ProducerPerformance --topic test_p3
> --num-records 5000 --record-size 100 --throughput -1 --producer-props
> acks=0 bootstrap.servers=localhost:9092 buffer.memory=33554432
> batch.size=16384
>
> ##OS tuning setting
> net.core.rmem_default = 124928
> net.core.rmem_max = 2048000
> net.core.wmem_default = 124928
> net.core.wmem_max = 2048000
> net.ipv4.tcp_rmem = 4096 87380 4194304
> net.ipv4.tcp_wmem = 4096 87380 4194304
> net.ipv4.tcp_max_tw_buckets = 262144
> net.ipv4.tcp_max_syn_backlog = 1024
> vm.oom_kill_allocating_task = 1
> vm.max_map_count = 20
> vm.swappiness = 1
> vm.dirty_writeback_centisecs = 500
> vm.dirty_expire_centisecs = 500
> vm.dirty_ratio = 60
> vm.dirty_background_ratio = 5
>
>
> Thanks,
> Eric
>
>


Re: KIP-162: Enable topic deletion by default

2017-05-31 Thread Jim Jagielski
+1
> On May 27, 2017, at 9:27 PM, Vahid S Hashemian  
> wrote:
> 
> Sure, that sounds good.
> 
> I suggested that to keep command line behavior consistent.
> Plus, removal of ACL access is something that can be easily undone, but 
> topic deletion is not reversible.
> So, perhaps a new follow-up JIRA to this KIP to add the confirmation for 
> topic deletion.
> 
> Thanks.
> --Vahid
> 
> 
> 
> From:   Gwen Shapira 
> To: d...@kafka.apache.org, users@kafka.apache.org
> Date:   05/27/2017 11:04 AM
> Subject:Re: KIP-162: Enable topic deletion by default
> 
> 
> 
> Thanks Vahid,
> 
> Do you mind if we leave the command-line out of scope for this?
> 
> I can see why adding confirmations, options to bypass confirmations, etc
> would be an improvement. However, I've seen no complaints about the 
> current
> behavior of the command-line and the KIP doesn't change it at all. So I'd
> rather address things separately.
> 
> Gwen
> 
> On Fri, May 26, 2017 at 8:10 PM Vahid S Hashemian 
> 
> wrote:
> 
>> Gwen, thanks for the KIP.
>> It looks good to me.
>> 
>> Just a minor suggestion: It would be great if the command asks for a
>> confirmation (y/n) before deleting the topic (similar to how removing 
> ACLs
>> works).
>> 
>> Thanks.
>> --Vahid
>> 
>> 
>> 
>> From:   Gwen Shapira 
>> To: "d...@kafka.apache.org" , Users
>> 
>> Date:   05/26/2017 07:04 AM
>> Subject:KIP-162: Enable topic deletion by default
>> 
>> 
>> 
>> Hi Kafka developers, users and friends,
>> 
>> I've added a KIP to improve our out-of-the-box usability a bit:
>> KIP-162: Enable topic deletion by default:
>> 
>> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-162+-+Enable+topic+deletion+by+default
> 
>> 
>> 
>> Pretty simple :) Discussion and feedback are welcome.
>> 
>> Gwen
>> 
>> 
>> 
>> 
>> 
> 
> 
> 
> 
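
(For reference, the behaviour the KIP changes: today deletion only takes
effect once a broker opts in via server.properties,

delete.topic.enable=true

after which

kafka-topics.sh --zookeeper localhost:2181 --delete --topic my-topic

actually removes the topic. The KIP simply flips that default to true; the
topic name above is illustrative.)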



Kafka not starting up after Kerberos, Zookeeper conn error

2017-05-31 Thread Gerd König
Hello,

I am currently stuck enabling Kerberos and then starting up the Kafka broker
(Confluent OSS 3.2.1).
The ZooKeepers are running, but starting Kafka
(/opt/confluent/bin/kafka-server-start /etc/kafka/server.properties) gives
me this error:


[2017-05-31 12:21:10,523] ERROR SASL authentication failed using login
context 'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-05-31 12:21:10,523] INFO zookeeper state changed (AuthFailed)
(org.I0Itec.zkclient.ZkClient)
[2017-05-31 12:21:10,523] INFO Terminate ZkClient event thread.
(org.I0Itec.zkclient.ZkEventThread)
[2017-05-31 12:21:10,524] FATAL Fatal error during KafkaServer startup.
Prepare to shutdown (kafka.server.KafkaServer)


The corresponding log entry in ZooKeeper is:

ERROR cnxn.saslServer is null: cnxn object did not initialize its
saslServer properly


Feeling like I "can't see the forest for the trees"...


CONFIGS

kafka_server_jaas.conf:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/keytabs/kafka.user.keytab"
    principal="kafka/confluent-1.test.d...@test.demo";
};

// ZooKeeper client authentication
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/keytabs/zookeeper.user.keytab"
    principal="zookeeper/confluent-1.test.d...@test.demo";
};

zookeeper_jaas.conf:

Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/keytabs/zookeeper.user.keytab"
    principal="zookeeper/confluent-1.test.d...@test.demo";
};

kafka server.properties:

zookeeper.connect=confluent-1.test.demo:2181,confluent-2.test.demo:2181,confluent-3.test.demo:2181
zookeeper.connection.timeout.ms=6000
broker.id=1
delete.topic.enable=true
listeners=PLAINTEXT://0.0.0.0:9092,SASL_PLAINTEXT://0.0.0.0:9099
sasl.enabled.mechanisms=GSSAPI,PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
zookeeper.set.acl=false
allow.everyone.if.no.acl.found=true
auto.create.topics.enable=false
super.users=User:kafka
sasl.kerberos.service.name=kafka

zookeeper.properties:

dataDir=/var/lib/zookeeper
clientPort=2181
maxClientCnxns=100
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=360
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
initLimit=5
syncLimit=2
server.1=confluent-1.test.demo:2888:3888
server.2=confluent-2.test.demo:2888:3888
server.3=confluent-3.test.demo:2888:3888
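
For completeness, both JVMs are pointed at their JAAS file via
java.security.auth.login.config; a minimal sketch of how the two services
are started (the start scripts pick the property up from KAFKA_OPTS; the
/etc/kafka paths are assumptions matching the file names above):

# ZooKeeper
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/zookeeper_jaas.conf"
/opt/confluent/bin/zookeeper-server-start /etc/kafka/zookeeper.properties

# Kafka broker
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
/opt/confluent/bin/kafka-server-start /etc/kafka/server.properties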

Any help to solve this issue and start up Kafka is highly appreciated.

Thanks in advance...


Re: WindowStore fetch() ordering

2017-05-31 Thread Damian Guy
Hi Eric,

For any given key the data will always be returned in increasing temporal
order. So, yes, the most recent value for a key will be the last value.
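
So a loop over the iterator can simply keep the last entry it sees. A
minimal sketch against the 0.10.2 API (the store name, key and window size
here are made up for illustration):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyWindowStore;
import org.apache.kafka.streams.state.WindowStoreIterator;

// 'streams' is an already-started KafkaStreams instance
ReadOnlyWindowStore<String, Long> store =
    streams.store("my-window-store", QueryableStoreTypes.<String, Long>windowStore());

long now = System.currentTimeMillis();
KeyValue<Long, Long> latest = null;                  // key = window start timestamp
try (WindowStoreIterator<Long> iter = store.fetch("some-key", now - 60_000L, now)) {
    while (iter.hasNext()) {
        latest = iter.next();                        // entries arrive oldest-first
    }
}
// if non-null, 'latest' holds the most recent value for "some-key" in the range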

Thanks,
Damian

On Wed, 31 May 2017 at 09:51 Eric Lalonde  wrote:

> Hello, I was reading:
>
>
> https://kafka.apache.org/0102/javadoc/org/apache/kafka/streams/state/ReadOnlyWindowStore.html#fetch(K,%20long,%20long)
>
> From both my experiments and the way I read the fetch() documentation, it
> appears the values in the iterator are returned in increasing temporal
> order. Is this ordering guaranteed? E.g., if I want the most recent
> value in the window range passed to fetch(), will it *always* be the last
> value, or are there scenarios where this is relaxed?
>
> I've linked above to ReadOnlyWindowStore, but the question goes for any
> window store iterator.
>
>


WindowStore fetch() ordering

2017-05-31 Thread Eric Lalonde
Hello, I was reading:

https://kafka.apache.org/0102/javadoc/org/apache/kafka/streams/state/ReadOnlyWindowStore.html#fetch(K,%20long,%20long)


From both my experiments and the way I read the fetch() documentation, it 
appears the values in the iterator are returned in increasing temporal order. 
Is this ordering guaranteed? E.g., if I want the most recent value in the
window range passed to fetch(), will it *always* be the last value, or are 
there scenarios where this is relaxed? 

I've linked above to ReadOnlyWindowStore, but the question goes for any window 
store iterator.



Re: Kafka write throughput tuning

2017-05-31 Thread Alexander Binzberger
One partition lives on one disk (its log is a directory of segment files
that is appended sequentially). This is why your throughput per partition
will not go over 150MB/s.


I cannot explain why you do not get 300MB/s for two partitions on
different disks. It could be related to your hardware. How fast are your
writes using 2 parallel dd processes?
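
Something like this, writing to both disks at once (the mount points are
assumptions; oflag=direct bypasses the page cache so you measure the disks,
not RAM):

dd if=/dev/zero of=/data1/ddtest bs=1M count=10240 oflag=direct &
dd if=/dev/zero of=/data2/ddtest bs=1M count=10240 oflag=direct &
wait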


To get more throughput you would normally use more brokers on more 
machines with a lot of partitions.
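
The client can also be the ceiling: the Java producer pushes all batches
through a single background sender thread. To rule that out for your
two-topic test, drive each topic from its own process (the topic names are
assumptions; the other flags are copied from your run):

./kafka-run-class org.apache.kafka.tools.ProducerPerformance --topic test_d1 \
  --num-records 5000 --record-size 100 --throughput -1 \
  --producer-props acks=0 bootstrap.servers=localhost:9092 &
./kafka-run-class org.apache.kafka.tools.ProducerPerformance --topic test_d2 \
  --num-records 5000 --record-size 100 --throughput -1 \
  --producer-props acks=0 bootstrap.servers=localhost:9092 &
wait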



On 17.05.2017 at 11:13, 陈 建平Chen Jianping wrote:

Hi Group,

Recently I am trying to tune Kafka write performance to improve throughput. On
my Kafka broker, there are 3 disks (7200 RPM).
For one disk, the Kafka write throughput can reach 150MB/s. In my opinion, if I
send messages to topic test_p3 (which has 3 partitions located on different
disks in the same server), the whole write throughput should reach 450 MB/s
because the disks are written in parallel. However, the test result is still
150MB/s. Is there any reason that multiple disks don't multiply the write
throughput? And is there any bottleneck for the Kafka write throughput, or do I
need to update some configuration?

I also tried sending messages to two different topics (whose partitions are on
different disks of that server), and the total throughput only reaches 200 MB/s
instead of the 300 MB/s I expected. Below are my Kafka configuration and
settings. Thanks for any ideas or advice on it :)

##Kafka producer setting
./kafka-run-class org.apache.kafka.tools.ProducerPerformance --topic test_p3 
--num-records 5000 --record-size 100 --throughput -1 --producer-props 
acks=0 bootstrap.servers=localhost:9092 buffer.memory=33554432 batch.size=16384

##OS tuning setting
net.core.rmem_default = 124928
net.core.rmem_max = 2048000
net.core.wmem_default = 124928
net.core.wmem_max = 2048000
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 87380 4194304
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.tcp_max_syn_backlog = 1024
vm.oom_kill_allocating_task = 1
vm.max_map_count = 20
vm.swappiness = 1
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 500
vm.dirty_ratio = 60
vm.dirty_background_ratio = 5


Thanks,
Eric



--
Alexander Binzberger
System Designer - WINGcon AG