Re: One partition's segment files in different log-dirs
Hi,

Thank you for the answer. What I was looking for was how partition segment files are distributed over the kafka-logs directories. For example, I have one broker with two log directories, kafka-logs1 and kafka-logs2, each 100MB, and the partition segment file size is 90MB. When one segment is full, can Kafka start the second segment file in the other kafka-logs directory?

Br,
Margus

On 2019/01/05 17:20:06, Jonathan Santilli wrote:
> Hello Margus,
>
> I am not sure if I got your question correctly, but, assuming you have a
> topic called "*kafka-log*" with two partitions, each of them (kafka-log-1
> and kafka-log-2) will contain its own segments.
> Kafka Brokers will distribute/replicate (according to the Brokers' config)
> the topic's partitions among the available Brokers (once again, it depends
> on the configuration you have in place).
>
> The segments within a topic partition belong to that particular partition
> and are not shared between partitions; that is, one particular segment
> sticks to the partition it belongs to and is not shared/split with other
> partitions.
>
> Hope this helps, or maybe you can provide more details about your doubt.
>
> Cheers!
> --
> Jonathan
>
> On Fri, Jan 4, 2019 at 4:29 PM wrote:
>
> > Hi
> >
> > For example, if I have /kafka-log1 and /kafka-log2,
> >
> > can Kafka distribute one partition's segment files between different log
> > directories?
> >
> > Br,
> > Margus Roo
>
> --
> Santilli Jonathan
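For what it's worth, as far as I know the answer is no: a partition directory (and all of its segment files) lives under exactly one entry of `log.dirs`, and a full segment rolls into a new segment in the *same* directory rather than moving to the other log dir. A toy Python sketch of the idea — my own simplification, not Kafka source code — where a broker assigns each new partition to the log dir with the fewest partitions, after which every segment of that partition is created in that one place:

```python
import os

def assign_log_dir(log_dirs, partition_counts, topic, partition):
    """Pick the log dir with the fewest partitions (a simplified version of
    the broker's heuristic). All segments of this partition then live here."""
    chosen = min(log_dirs, key=lambda d: partition_counts.get(d, 0))
    partition_counts[chosen] = partition_counts.get(chosen, 0) + 1
    return os.path.join(chosen, f"{topic}-{partition}")

counts = {}
dirs = ["/kafka-logs1", "/kafka-logs2"]
print(assign_log_dir(dirs, counts, "demo", 0))  # /kafka-logs1/demo-0
print(assign_log_dir(dirs, counts, "demo", 1))  # /kafka-logs2/demo-1
# demo-0's segments (90MB each, in Margus's example) all roll within
# /kafka-logs1/demo-0; none are created under /kafka-logs2.
```

So in the 100MB-per-dir example above, a second 90MB segment for the same partition would not fit, because it cannot spill into the other directory.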
Re: Kafka producer in CSharp
Ok, I got some info myself:
- I can have fault tolerance: I can start kafka-http-endpoint using a broker list.
- I can have acks: start it using --sync.

But what is the best practice in case kafka-http-endpoint itself goes down? Start multiple kafka-http-endpoints and, on the client side, just check that the kafka-http-endpoint is up, and if it is not, use another one?

Best regards,
Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"

On 13/05/14 10:49, Margusja wrote:

Thank you for the response. I think HTTP is ok. I have two more questions in the case of HTTP:
1. Can I have fault tolerance in case I have two or more brokers?
2. Can I get an ack that the message is in the queue?

On 12/05/14 23:28, Joe Stein wrote:

The wire protocol has changed drastically since then. I don't know of any C# clients (there are none on the client library page, nor have I heard of any being used in production, but maybe there are some). For clients that use .NET I often suggest that they use some HTTP producer/consumer:
https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-HTTPREST

/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
/

On Mon, May 12, 2014 at 12:34 PM, Margusja wrote:

Hi

I have a kafka broker running (kafka_2.9.1-0.8.1.1). All is working. One project requires that the producer be written in C#. I am not a .NET programmer, but I managed to write simple producer code using https://github.com/kafka-dev/kafka/blob/master/clients/csharp/README.md

the code ...
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Kafka.Client;

namespace DemoProducer
{
    class Program
    {
        static void Main(string[] args)
        {
            string payload1 = "kafka 1.";
            byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);
            Message msg1 = new Message(payloadData1);

            string payload2 = "kafka 2.";
            byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);
            Message msg2 = new Message(payloadData2);

            Producer producer = new Producer("broker", 9092);
            producer.Send("kafkademo3", 0, msg1);
        }
    }
}

...

On the broker side I get this error when I execute the code above:

[2014-05-12 19:15:58,984] ERROR Closing socket for /84.50.21.39 because of error (kafka.network.Processor)
java.nio.BufferUnderflowException
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
        at java.nio.ByteBuffer.get(ByteBuffer.java:694)
        at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:38)
        at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:33)
        at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
        at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
        at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:53)
        at kafka.network.Processor.read(SocketServer.scala:353)
        at kafka.network.Processor.run(SocketServer.scala:245)
        at java.lang.Thread.run(Thread.java:744)
[2014-05-12 19:16:11,836] ERROR Closing socket for /90.190.106.56 because of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
        at kafka.utils.Utils$.read(Utils.scala:375)
        at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
        at kafka.network.Processor.read(SocketServer.scala:347)
        at kafka.network.Processor.run(SocketServer.scala:245)
        at java.lang.Thread.run(Thread.java:744)

I suspected that the problem is in the broker version (kafka_2.9.1-0.8.1.1), so I downloaded kafka-0.7.1-incubating. With it I was able to send messages using the C# code. So, is there a workaround that lets me use the latest Kafka version with C#? Or what is the latest Kafka version supporting a C# producer?

--
Best regards,
Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
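The failover scheme Margus describes (run several kafka-http-endpoint instances and have the client fall back to the next one when a request fails) can be sketched as follows. This is a hypothetical illustration, not part of kafka-http-endpoint; the endpoint names and the `send` callable are placeholders for whatever HTTP call the client actually makes:

```python
def produce(message, endpoints, send):
    """Send `message` via the first endpoint that accepts it.

    `send(url, message)` is assumed to return the endpoint's ack string on
    success and raise OSError when the endpoint is unreachable.
    """
    last_error = None
    for url in endpoints:
        try:
            return send(url, message)  # got an ack; done
        except OSError as e:
            last_error = e  # endpoint down: fall through to the next one
    raise RuntimeError(f"all endpoints failed: {last_error}")

# Demo with a fake transport in which the first endpoint is down:
def fake_send(url, message):
    if "endpoint1" in url:
        raise OSError("connection refused")
    return "ack"

print(produce(b"kafka 1.", ["http://endpoint1", "http://endpoint2"], fake_send))
# prints: ack
```

The same loop works whether the client probes endpoints up front or only fails over on error; failing over lazily, as here, avoids a separate health-check round trip.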
Re: Kafka producer in CSharp
Thank you for the response. I think HTTP is ok. I have two more questions in the case of HTTP:
1. Can I have fault tolerance in case I have two or more brokers?
2. Can I get an ack that the message is in the queue?

Best regards,
Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"

On 12/05/14 23:28, Joe Stein wrote:

The wire protocol has changed drastically since then. I don't know of any C# clients (there are none on the client library page, nor have I heard of any being used in production, but maybe there are some). For clients that use .NET I often suggest that they use some HTTP producer/consumer:
https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-HTTPREST

/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
/

On Mon, May 12, 2014 at 12:34 PM, Margusja wrote:

Hi

I have a kafka broker running (kafka_2.9.1-0.8.1.1). All is working. One project requires that the producer be written in C#. I am not a .NET programmer, but I managed to write simple producer code using https://github.com/kafka-dev/kafka/blob/master/clients/csharp/README.md

the code ...

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Kafka.Client;

namespace DemoProducer
{
    class Program
    {
        static void Main(string[] args)
        {
            string payload1 = "kafka 1.";
            byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);
            Message msg1 = new Message(payloadData1);

            string payload2 = "kafka 2.";
            byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);
            Message msg2 = new Message(payloadData2);

            Producer producer = new Producer("broker", 9092);
            producer.Send("kafkademo3", 0, msg1);
        }
    }
}

...
On the broker side I get this error when I execute the code above:

[2014-05-12 19:15:58,984] ERROR Closing socket for /84.50.21.39 because of error (kafka.network.Processor)
java.nio.BufferUnderflowException
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
        at java.nio.ByteBuffer.get(ByteBuffer.java:694)
        at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:38)
        at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:33)
        at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
        at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
        at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:53)
        at kafka.network.Processor.read(SocketServer.scala:353)
        at kafka.network.Processor.run(SocketServer.scala:245)
        at java.lang.Thread.run(Thread.java:744)
[2014-05-12 19:16:11,836] ERROR Closing socket for /90.190.106.56 because of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
        at kafka.utils.Utils$.read(Utils.scala:375)
        at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
        at kafka.network.Processor.read(SocketServer.scala:347)
        at kafka.network.Processor.run(SocketServer.scala:245)
        at java.lang.Thread.run(Thread.java:744)

I suspected that the problem is in the broker version (kafka_2.9.1-0.8.1.1), so I downloaded kafka-0.7.1-incubating. With it I was able to send messages using the C# code. So, is there a workaround that lets me use the latest Kafka version with C#? Or what is the latest Kafka version supporting a C# producer?

--
Best regards,
Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
CSharp library and Producer: "Closing socket for ... because of error (kafka.network.Processor)", java.nio.BufferUnderflowException
Hi

I have a kafka broker running (kafka_2.9.1-0.8.1.1). All is working. One project requires that the producer be written in C#. I am not a .NET programmer, but I managed to write simple producer code using https://github.com/kafka-dev/kafka/blob/master/clients/csharp/README.md

the code ...

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Kafka.Client;

namespace DemoProducer
{
    class Program
    {
        static void Main(string[] args)
        {
            string payload1 = "kafka 1.";
            byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);
            Message msg1 = new Message(payloadData1);

            string payload2 = "kafka 2.";
            byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);
            Message msg2 = new Message(payloadData2);

            Producer producer = new Producer("broker", 9092);
            producer.Send("kafkademo3", 0, msg1);
        }
    }
}

...

On the broker side I get this error when I execute the code above:

[2014-05-12 19:15:58,984] ERROR Closing socket for /84.50.21.39 because of error (kafka.network.Processor)
java.nio.BufferUnderflowException
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
        at java.nio.ByteBuffer.get(ByteBuffer.java:694)
        at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:38)
        at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:33)
        at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
        at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
        at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:53)
        at kafka.network.Processor.read(SocketServer.scala:353)
        at kafka.network.Processor.run(SocketServer.scala:245)
        at java.lang.Thread.run(Thread.java:744)
[2014-05-12 19:16:11,836] ERROR Closing socket for /90.190.106.56 because of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
        at kafka.utils.Utils$.read(Utils.scala:375)
        at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
        at kafka.network.Processor.read(SocketServer.scala:347)
        at kafka.network.Processor.run(SocketServer.scala:245)
        at java.lang.Thread.run(Thread.java:744)

I suspected that the problem is in the broker version (kafka_2.9.1-0.8.1.1), so I downloaded kafka-0.7.1-incubating. With it I was able to send messages using the C# code. So, is there a workaround that lets me use the latest Kafka version with C#? Or what is the latest Kafka version supporting a C# producer?

And one more question: in the C# lib, how can I give the producer a broker list to get fault tolerance in case one broker is down?

--
Best regards,
Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
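The stack trace hints at what is going wrong: the BufferUnderflowException is thrown from kafka.api.ApiUtils$.readShortString while the 0.8 broker parses a ProducerRequest. The old kafka-dev C# client speaks the 0.7 wire format, so the broker tries to read length-prefixed fields that are simply not in the request buffer. A minimal Python illustration of that parse failure (my own mock of a short-string read, not Kafka code):

```python
import struct

def read_short_string(buf, offset):
    """Read a 2-byte big-endian length prefix, then that many bytes
    (the shape of the 0.8 "short string" field the broker expects)."""
    (n,) = struct.unpack_from(">h", buf, offset)
    end = offset + 2 + n
    if end > len(buf):
        # This is the condition that surfaces as BufferUnderflowException
        # on the broker when an old-format request comes in.
        raise BufferError("buffer underflow: field extends past the request")
    return buf[offset + 2:end].decode(), end

# A well-formed field parses cleanly:
print(read_short_string(struct.pack(">h", 2) + b"ok", 0))
# A request whose bytes don't match the expected layout does not:
try:
    read_short_string(struct.pack(">h", 100), 0)  # claims 100 bytes; has none
except BufferError as e:
    print(e)
```

So the fix is not a broker setting but a client that speaks the 0.8 protocol (or, as suggested elsewhere in this thread, putting an HTTP producer in front of the broker).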
Kafka producer in CSharp
Hi

I have a kafka broker running (kafka_2.9.1-0.8.1.1). All is working. One project requires that the producer be written in C#. I am not a .NET programmer, but I managed to write simple producer code using https://github.com/kafka-dev/kafka/blob/master/clients/csharp/README.md

the code ...

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Kafka.Client;

namespace DemoProducer
{
    class Program
    {
        static void Main(string[] args)
        {
            string payload1 = "kafka 1.";
            byte[] payloadData1 = Encoding.UTF8.GetBytes(payload1);
            Message msg1 = new Message(payloadData1);

            string payload2 = "kafka 2.";
            byte[] payloadData2 = Encoding.UTF8.GetBytes(payload2);
            Message msg2 = new Message(payloadData2);

            Producer producer = new Producer("broker", 9092);
            producer.Send("kafkademo3", 0, msg1);
        }
    }
}

...

On the broker side I get this error when I execute the code above:

[2014-05-12 19:15:58,984] ERROR Closing socket for /84.50.21.39 because of error (kafka.network.Processor)
java.nio.BufferUnderflowException
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:145)
        at java.nio.ByteBuffer.get(ByteBuffer.java:694)
        at kafka.api.ApiUtils$.readShortString(ApiUtils.scala:38)
        at kafka.api.ProducerRequest$.readFrom(ProducerRequest.scala:33)
        at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
        at kafka.api.RequestKeys$$anonfun$1.apply(RequestKeys.scala:36)
        at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:53)
        at kafka.network.Processor.read(SocketServer.scala:353)
        at kafka.network.Processor.run(SocketServer.scala:245)
        at java.lang.Thread.run(Thread.java:744)
[2014-05-12 19:16:11,836] ERROR Closing socket for /90.190.106.56 because of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
        at kafka.utils.Utils$.read(Utils.scala:375)
        at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
        at kafka.network.Processor.read(SocketServer.scala:347)
        at kafka.network.Processor.run(SocketServer.scala:245)
        at java.lang.Thread.run(Thread.java:744)

I suspected that the problem is in the broker version (kafka_2.9.1-0.8.1.1), so I downloaded kafka-0.7.1-incubating. With it I was able to send messages using the C# code. So, is there a workaround that lets me use the latest Kafka version with C#? Or what is the latest Kafka version supporting a C# producer?

--
Best regards,
Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"