Problematic Message Batches in a Produce Request

2018-03-11 Thread Murat Balkan
Hi all,

A Produce Request is composed of one or more Message Batches, each
belonging to 1 topic-partition.

My question is:
When a produce request is sent to Kafka and only one of the message
batches is problematic, what happens to the other message batches? Will
they be lost?

Looking at the documentation, I see that certain error types are tied to
scenarios where an individual message batch fails.

Error code 18, for example:
RecordListTooLargeCode: "If a message batch in a produce request exceeds the
maximum configured segment size."
(Error 18 is also not retriable.)

Does that mean we have to go all the way back to the client and put logic
there saying: if the callback reports a failure (for a reason that has
nothing to do with the client's original message), re-produce the same
message?
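To make the idea concrete, here is a rough, self-contained sketch (plain Java, hypothetical names; a real client would receive the error through the producer callback, and "retriable" stands in for whether the broker error code allows a retry) of the kind of client-side logic I mean:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of client-side "re-produce on batch failure" logic.
// In a real application this would be driven from the producer callback;
// error 18 (RecordListTooLarge) is an example of a NON-retriable code.
class ResendDecider {
    private final Queue<String> resendQueue = new ArrayDeque<>();

    // called when the callback reports a failed send
    void onFailure(String record, boolean retriable) {
        if (retriable) {
            return; // the producer's own retries config covers this case
        }
        // non-retriable failure: the application must re-produce the record itself
        resendQueue.add(record);
    }

    // records the application still needs to re-send
    Queue<String> pending() {
        return resendQueue;
    }

    public static void main(String[] args) {
        ResendDecider d = new ResendDecider();
        d.onFailure("event-1", false); // e.g. failed with error code 18
        d.onFailure("event-2", true);  // e.g. a transient, retriable error
        System.out.println(d.pending()); // prints [event-1]
    }
}
```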

Is my understanding correct? I would really appreciate some help.

Thanks a lot,

Murat


Re: Running SSL and PLAINTEXT mode together (Kafka 10.2.1)

2018-03-11 Thread Martin Gainty





From: svsuj...@gmail.com 
Sent: Sunday, March 11, 2018 4:22 PM
To: users@kafka.apache.org
Cc: Ismael Juma; rajinisiva...@gmail.com
Subject: Re: Running SSL and PLAINTEXT mode together (Kafka 10.2.1)


Sent from my iPhone
> On Dec 19, 2017, at 5:54 PM, Darshan  wrote:
> Anyone ?
> On Mon, Dec 18, 2017 at 7:25 AM, Darshan 
> wrote:
>
>> Hi
>>
>> I am wondering if there is a way to run the SSL and PLAINTEXT modes
>> together? I am running Kafka 10.2.1. We want our internal clients to use
>> the PLAINTEXT mode to write to certain topics, but any external clients
>> should use SSL to read messages on those topics. We also want to enforce
>> ACLs.
>>
>> To try this out, I modified my server.properties as follows, but without
>> any luck. Can someone please let me know if it needs any change ?
>>
>> listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://172.1.1.157:9093
MG>where is your SSL listener declaration? here is an example:
MG>listeners=SSL://:9093

>> advertised.listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://
>> 172.1.1.157:9093
>> listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
>> inter.broker.listener.name=INTERNAL
>>
>> ssl.keystore.location=/opt/keystores/keystotr.jks
MG>are you certain the jks file name is keystotr.jks?

>> ssl.keystore.password=ABCDEFGH
>> ssl.key.password=ABCDEFGH
>> ssl.truststore.location=/opt/keystores/truststore.jks
>> ssl.truststore.password=ABCDEFGH
>> ssl.keystore.type=JKS
>> ssl.truststore.type=JKS
>> security.protocol=SSL
>> ssl.client.auth=required
#you are missing the following ssl entries (the value to the right of the = sign is a placeholder)

ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null

ssl.keymanager.algorithm = SunX509

ssl.protocol = TLS

#match ssl.provider listed in $JAVA_HOME/jre/lib/java.security
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX

>> # allow.everyone.if.no.acl.found=false
>> allow.everyone.if.no.acl.found=true
>> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
>> super.users=User:CN=KafkaBroker01
MG>your DN is incomplete.. here is a complete DN example:
super.users=User:CN=KafkaBroker01.example.com,OU=Users,O=ConfluentOffice,L=London,ST=London,C=GB
>>
>> Thanks.
>>
>> --Darshan
MG>ismael please confirm
>>
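MG> putting the pieces together, a minimal sketch of the intended dual-listener broker config might look like this (the poster's IPs, ports, passwords and paths are kept as placeholders; the keystore file name is assumed to be a typo for keystore.jks; security.protocol is dropped because it is a client-side setting, the broker takes its protocols from the listener map):

```properties
# sketch: PLAINTEXT for internal clients, SSL for external clients
listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://172.1.1.157:9093
advertised.listeners=INTERNAL://10.10.10.64:9092,EXTERNAL://172.1.1.157:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL

# SSL material for the EXTERNAL listener
ssl.keystore.location=/opt/keystores/keystore.jks
ssl.keystore.password=ABCDEFGH
ssl.key.password=ABCDEFGH
ssl.truststore.location=/opt/keystores/truststore.jks
ssl.truststore.password=ABCDEFGH
ssl.client.auth=required

# ACL enforcement (false = deny when no matching ACL is found)
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:CN=KafkaBroker01
```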



Re: Re: kafka streams with TimeWindows, incorrect result

2018-03-11 Thread Guozhang Wang
If you want to strictly "only have one output per window", then for now
you'd probably implement that logic using a lower-level "transform"
function in which you can schedule a punctuate function to send all the
results at the end of a window.

If you just want to reduce the amount of data to your sink, but your sink
can still handle overwritten records of the same key, you can enlarge the
cache size via the cache.max.bytes.buffering config.

https://kafka.apache.org/documentation/#streamsconfigs
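To make the first option concrete, here is a self-contained plain-Java sketch (hypothetical names; the actual Streams Transformer and punctuate wiring is omitted) of buffering updates and forwarding a single result per key when the window closes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch of "one output per window": accumulate updates per key, then emit
// each key exactly once when the window closes. In Kafka Streams, process()
// would be the transform step and flush() would run inside a scheduled
// punctuate, forwarding results downstream.
class WindowBuffer {
    private final Map<String, Long> counts = new HashMap<>();

    // called for every incoming record in the current window
    void process(String key, long value) {
        counts.merge(key, value, Long::sum);
    }

    // called once at the end of the window
    void flush(BiConsumer<String, Long> forward) {
        counts.forEach(forward);
        counts.clear();
    }

    public static void main(String[] args) {
        WindowBuffer buffer = new WindowBuffer();
        buffer.process("9_9_2018-03-09", 12179);
        buffer.process("9_9_2018-03-09", 3); // a later update to the same key
        // only one record per key reaches the sink
        buffer.flush((k, v) -> System.out.println(k + " -> " + v));
        // prints: 9_9_2018-03-09 -> 12182
    }
}
```

If strict once-per-window emission is not required, the second option (raising cache.max.bytes.buffering) reduces the update stream with no custom code.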

On Fri, Mar 9, 2018 at 9:45 PM, 杰 杨  wrote:

> thx for your reply!
> I see that it is designed to operate on an infinite, unbounded stream of
> data.
> now I want to process an unbounded stream, but divided by time interval.
> so what can I do to achieve this?
>
> 
> funk...@live.com
>
> From: Guozhang Wang
> Date: 2018-03-10 02:50
> To: users
> Subject: Re: kafka streams with TimeWindows, incorrect result
> Hi Jie,
>
> This is by design of Kafka Streams, please read this doc for more details
> (search for "outputs of the Wordcount application is actually a continuous
> stream of updates"):
>
> https://kafka.apache.org/0110/documentation/streams/quickstart
>
> Note this semantics applies for both windowed and un-windowed tables.
>
>
> Guozhang
>
> On Fri, Mar 9, 2018 at 5:36 AM, 杰 杨  wrote:
>
> > Hi:
> > I used TimeWindows to aggregate data in kafka.
> >
> > this is the code snippet:
> >
> >   view.flatMap(new MultipleKeyValueMapper(client))
> >       .groupByKey(Serialized.with(Serdes.String(),
> >           Serdes.serdeFrom(new CountInfoSerializer(), new CountInfoDeserializer())))
> >       .windowedBy(TimeWindows.of(6))
> >       .reduce(new Reducer<CountInfo>() {
> >           @Override
> >           public CountInfo apply(CountInfo value1, CountInfo value2) {
> >               return new CountInfo(value1.start + value2.start,
> >                   value1.active + value2.active, value1.fresh + value2.fresh);
> >           }
> >       })
> >       .toStream(new KeyValueMapper<Windowed<String>, CountInfo, String>() {
> >           @Override
> >           public String apply(Windowed<String> key, CountInfo value) {
> >               return key.key();
> >           }
> >       })
> >       .print(Printed.toSysOut());
> >
> >   KafkaStreams streams = new KafkaStreams(builder.build(), KStreamReducer.getConf());
> >   streams.start();
> >
> > and I tested with 3 records in kafka,
> > and I printed the key and value:
> >
> >
> > [KTABLE-TOSTREAM-07]: [9_9_2018-03-09_hour_21@152060130/152060136], CountInfo{start=12179, active=12179, fresh=12179}
> > [KTABLE-TOSTREAM-07]: [9_9_2018-03-09@152060130/152060136], CountInfo{start=12179, active=12179, fresh=12179}
> > [KTABLE-TOSTREAM-07]: [9_9_2018-03-09_hour_21@152060130/152060136], CountInfo{start=3, active=3, fresh=3}
> > [KTABLE-TOSTREAM-07]: [9_9_2018-03-09@152060130/152060136], CountInfo{start=3, active=3, fresh=3}
> > why, within one window duration, are two results printed instead of one?
> >
> > 
> > funk...@live.com
> >
>
>
>
> --
> -- Guozhang
>



-- 
-- Guozhang