Apart from the fact that the file rename is failing (the API notes that
there are chances of the rename failing), it looks like the
implementation in FileMessageSet's rename can cause a couple of issues,
one of them being a leak.
The implementation looks like this
https://github.com/apache/ka
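For illustration only (the link above is truncated, so this is not the actual
FileMessageSet code): a minimal Java sketch of the leak pattern being
described. If the rename throws before the underlying channel is closed, the
open file descriptor is never released.

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileChannel;

    // Hypothetical sketch, not Kafka's code: shows how a failed rename that
    // throws can leave the FileChannel (and its descriptor) open.
    public class RenameLeakSketch {
        private final File file;
        private final FileChannel channel;

        public RenameLeakSketch(File file) throws IOException {
            this.file = file;
            this.channel = new RandomAccessFile(file, "rw").getChannel();
        }

        // Leaky: if renameTo fails we throw and never reach channel.close().
        public void renameAndCloseLeaky(File target) throws IOException {
            if (!file.renameTo(target)) {
                throw new IOException("Failed to rename " + file + " to " + target);
            }
            channel.close();
        }

        // Safer: release the descriptor whatever the rename does.
        public void renameAndCloseSafe(File target) throws IOException {
            try {
                if (!file.renameTo(target)) {
                    throw new IOException("Failed to rename " + file + " to " + target);
                }
            } finally {
                channel.close();
            }
        }
    }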
There are different ways to find the connection count and each one
depends on the operating system that's being used. "lsof -i" is one
option, for example, on *nix systems.
-Jaikiran
On Thursday 08 January 2015 11:40 AM, Sa Li wrote:
Yes, it is a weird hostname ;), that is what our system guys named it. How do
I measure the number of connections open to 10.100.98.102?
Thanks
AL
On Jan 7, 2015 9:42 PM, "Jaikiran Pai" wrote:
> On Thursday 08 January 2015 01:51 AM, Sa Li wrote:
>
>> see this type of error again, back to norma
Hi Sa,
Are you really sure "w2" is a real hostname, something that is
resolvable from the system where you are running this? The JSON output
you posted seems very close to the example from the jmxtrans project
page https://code.google.com/p/jmxtrans/wiki/GraphiteWriter, so I
suspect you aren'
CentOS release 6.3 (Final)
2015-01-07 22:18 GMT+08:00 Harsha :
> Yonghui,
> Which OS are you running?
> -Harsha
>
> On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
> > Yes, and I found the reason: the rename during deletion is failing.
> > During the rename the files are deleted(?), and then
On Thursday 08 January 2015 01:51 AM, Sa Li wrote:
see this type of error again, back to normal in a few secs
[2015-01-07 20:19:49,744] WARN Error in I/O with harmful-jar.master/
10.100.98.102
That's a really weird hostname, the "harmful-jar.master". Is that really
your hostname? You mention th
Hi, All
I installed jmxtrans and graphite and wish to be able to graph stuff from
kafka, but when I first start jmxtrans I get such errors (I use the
example graphite json).
./jmxtrans.sh start graphite.json
[07 Jan 2015 17:55:58] [ServerScheduler_Worker-4] 180214 DEBUG
(com.googlecode.jmxt
One thing bothers me: sometimes the errors don't pop up and sometimes they
do. Why?
On Wed, Jan 7, 2015 at 1:49 PM, Sa Li wrote:
>
> Hi, Experts
>
> Our cluster is a 3-node cluster, I simply tested the producer locally, see
>
> bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
> t
Hi, Experts
Our cluster is a 3-node cluster, I simply tested the producer locally, see
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test-rep-three 100 3000 -1 acks=1 bootstrap.servers=10.100.98.100:9092
buffer.memory=67108864 batch.size=8196
But I got such error, I do
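(For context, a minimal sketch of a standalone producer using the same
settings as the ProducerPerformance command above. It assumes the new Java
producer API in org.apache.kafka.clients.producer, interprets the 100 and 3000
arguments as record count and record size, and picks byte-array serializers;
those are assumptions of the sketch, not anything from this thread.)

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Hedged sketch: same configs as the ProducerPerformance run above.
    public class ProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "10.100.98.100:9092");
            props.put("acks", "1");
            props.put("buffer.memory", "67108864");
            props.put("batch.size", "8196");
            // Serializer choice is an assumption for this sketch.
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");

            KafkaProducer<byte[], byte[]> producer =
                new KafkaProducer<byte[], byte[]>(props);
            byte[] payload = new byte[3000]; // assuming 3000 = record size
            for (int i = 0; i < 100; i++) { // assuming 100 = record count
                producer.send(new ProducerRecord<byte[], byte[]>("test-rep-three", payload));
            }
            producer.close();
        }
    }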
I checked the topic config; the ISR changes dynamically.
root@voluminous-mass:/srv/kafka# bin/kafka-topics.sh --describe --zookeeper
10.100.98.101:2181 --topic test-rep-three
Topic: test-rep-three  PartitionCount: 8  ReplicationFactor: 3  Configs:
Topic: test-rep-three  Partition: 0  Leader
Thanks for your answers.
@Mark
Well, basically we agree. My question was more about figuring out the limits
of kafka, which is why I picked unicity (uniqueness) to probe this. Unicity
doesn't imply ACID, yet it's already way more than a stream. I was wondering
if some clever trick could make it achievable.
Act
see this type of error again, back to normal in a few secs
[2015-01-07 20:19:49,744] WARN Error in I/O with harmful-jar.master/
10.100.98.102 (org.apache.kafka.common.network.Selector)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
Hi, All
I am doing a performance test by running
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance
test-rep-three 5 100 -1 acks=1 bootstrap.servers=10.100.98.100:9092,
10.100.98.101:9092,10.100.98.102:9092 buffer.memory=67108864 batch.size=8196
where the topic test-rep-thre
Yes, I think we are saying the same thing. Basically I am saying version 0
should be considered frozen, with the format and behavior it had in the 0.8.1
release, and we can do whatever we want in version 1+.
-Jay
On Wed, Jan 7, 2015 at 10:10 AM, Joe Stein wrote:
> << We need to take the versioning of
<< We need to take the versioning of the protocol seriously
amen
<< People are definitely using the offset commit functionality in 0.8.1
agreed
<< I really think we should treat this as a bug and revert the change to
version 0.
What do you mean exactly by revert? Why can't we use version as a
Hey guys,
We need to take the versioning of the protocol seriously. People are
definitely using the offset commit functionality in 0.8.1 and I really
think we should treat this as a bug and revert the change to version 0.
-Jay
On Wed, Jan 7, 2015 at 9:24 AM, Jun Rao wrote:
> Yes, we did make a
Yes, we did make an incompatible change in OffsetCommitRequest in 0.8.2,
which was a mistake. The incompatible change was introduced in KAFKA-1012 in
Mar 2014, when we added the Kafka-based offset management support. However,
we didn't realize that this broke the wire protocol until much later. Now
Alec,
Which C# producer are you running?
Did you grab one of the GitHub projects or convert the jars with IKVM, etc.?
-Matti
-Original Message-
From: Sa Li [mailto:sal...@gmail.com]
Sent: Tuesday, January 06, 2015 7:18 PM
To: users@kafka.apache.org
Subject: java.io.IOException: Connecti
AFAIK, there are two levels of parallelism related to the Spark Kafka
consumer:
At the JVM level: for each receiver, one can specify the number of threads
for a given topic, provided as a map [topic -> nthreads]. This will
effectively start n JVM threads consuming partitions of that kafka topic.
At Cl
- You are launching up to 10 threads/topic per receiver. Are you sure your
receivers can support 10 threads each? (i.e., in the default configuration, do
they have 10 cores?) If they have 2 cores, that would explain why this works
with 20 partitions or less.
- If you have 90 partitions, why
Hi,
Could you add the code where you create the Kafka consumer?
-kr, Gerard.
On Wed, Jan 7, 2015 at 3:43 PM, wrote:
> Hi Mukesh,
>
> If my understanding is correct, each Stream only has a single Receiver.
> So, if you have each receiver consuming 9 partitions, you need 10 input
> DStreams to c
Hi Mukesh,
If my understanding is correct, each Stream only has a single Receiver. So, if
you have each receiver consuming 9 partitions, you need 10 input DStreams to
create 10 concurrent receivers:
https://spark.apache.org/docs/latest/streaming-programming-guide.html#level-of-parallelism
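A minimal Java sketch of that setup (assuming the Spark 1.2 receiver-based
KafkaUtils.createStream API; the ZooKeeper address, group id and topic name
are placeholders, not values from this thread):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.apache.spark.streaming.Duration;
    import org.apache.spark.streaming.api.java.JavaPairDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    // Hedged sketch: 10 receivers, each backed by its own input DStream,
    // unioned into one stream for downstream processing.
    public class MultiReceiverSketch {
        public static void main(String[] args) {
            JavaStreamingContext jssc =
                new JavaStreamingContext("local[16]", "kafka-sketch", new Duration(2000));

            // topic -> number of consumer threads inside each receiver
            Map<String, Integer> topicThreads = new HashMap<String, Integer>();
            topicThreads.put("my-topic", 9); // placeholder topic name

            int numReceivers = 10;
            List<JavaPairDStream<String, String>> streams =
                new ArrayList<JavaPairDStream<String, String>>();
            for (int i = 0; i < numReceivers; i++) {
                streams.add(KafkaUtils.createStream(
                    jssc, "zkhost:2181", "my-group", topicThreads));
            }

            JavaPairDStream<String, String> unioned =
                jssc.union(streams.get(0), streams.subList(1, streams.size()));
            unioned.print();

            jssc.start();
            jssc.awaitTermination();
        }
    }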
I understand that I have to create 10 parallel streams. My code runs
fine when the number of partitions is ~20, but when I increase the number of
partitions I keep running into this issue.
Below is my code to create kafka streams, along with the configs used.
Map kafkaConf = new HashMap();
kafk
Yonghui,
Which OS are you running?
-Harsha
On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
> Yes, and I found the reason: the rename during deletion is failing.
> During the rename the files are deleted(?), and then the exception blocks
> the file from being closed in kafka.
> But I don't know how a rename fai
Hi Guys,
I have a kafka topic with 90 partitions and I am running
Spark Streaming (1.2.0) to read from kafka via KafkaUtils, creating 10
kafka-receivers.
My streaming is running fine and there is no delay in processing, it's just
that some partitions' data is never getting picked up. From the kafka consol
Yes, and I found the reason: the rename during deletion is failing.
During the rename the files are deleted(?), and then the exception blocks
the file from being closed in kafka.
But I don't know how a rename failure can happen,
[2015-01-07 00:10:48,685] ERROR Uncaught exception in scheduled task
'kafka-log-retention' (kafka.utils
No, I missed that.
thanks,
svante
2015-01-07 6:44 GMT+01:00 Jun Rao :
> Did you set offsets.storage to kafka in the consumer of mirror maker?
>
> Thanks,
>
> Jun
>
> On Mon, Jan 5, 2015 at 3:49 PM, svante karlsson wrote:
>
> > I'm using 0.8.2-beta and I'm trying to push data with the mir
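(For reference, the question above refers to the consumer config passed to
mirror maker via --consumer.config. A hedged sketch of such a file, with
placeholder hosts and group id, and assuming the 0.8.2 consumer properties
offsets.storage and dual.commit.enabled:)

    # mirror maker consumer.properties sketch; hosts and group id are placeholders
    zookeeper.connect=source-zk:2181
    group.id=mirror-maker-group
    # store consumer offsets in Kafka rather than ZooKeeper
    offsets.storage=kafka
    # optionally also commit to ZooKeeper during migration (assumption for this sketch)
    dual.commit.enabled=true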