You are reaching 10gb * 1000 / 64 = 156 MB/s, which probably saturates your
hard drive bandwidth. You could take a look at your iostat output.
--
Sent from my iPhone
On Aug 22, 2018, at 8:20 PM, Nan Xu wrote:
I set up a local single-node test. The producer and broker are sitting on the
same VM. broker
1. Any detailed logs?
2. How do you process the records after you poll them?
3. How much time does each round of poll() take? (See the sketch below.)
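For (3), here is a minimal sketch of timing each poll round, using the newer Java consumer API; the bootstrap address, group id, and topic name are made-up placeholders:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollTimer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "poll-timing-test");        // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("test-topic")); // placeholder
                while (true) {
                    long start = System.nanoTime();
                    ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(1000));
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    System.out.printf("poll returned %d records in %d ms%n",
                        records.count(), elapsedMs);
                }
            }
        }
    }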
Thanks!
--
Sent from my iPhone
On May 28, 2018, at 10:44 PM, Shantanu Deshmukh wrote:
Can anyone here help me please? I am at my wit's end. I now have
Thanks, Ewen. Will take a look at the config, and if there are any findings, will
come back here later.
--
Sent from my iPhone
On Mar 22, 2018, at 7:39 PM, Ewen Cheslack-Postava wrote:
The log is showing that the Connect worker is trying to make sure it has
read the entire log and has reached offset
+1
On Tue, Oct 25, 2016 at 3:17 PM, Harsha Chintalapani
wrote:
> Jeff,
> Thanks for participating. We already have the discussion thread going
> on. Please add your thoughts there. I'll keep this as an interest-check vote
> thread.
> Thanks,
> Harsha
>
> On Tue, Oct 25, 2016 at 3:12 PM Stev
If I set enable.auto.commit=true in my test case,
when the consumer rebalanced I checked the offset position, and it did not
overlap as before.
I guess that with enable.auto.commit=true, an offset commit is triggered
when a rebalance happens and wakeup() is called.
These are just my thoughts; is that right?
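For comparison, here is a sketch of making that commit explicit with a rebalance listener instead of relying on auto-commit. This is just an illustration, not from this thread; the topic name is a placeholder.

    import java.util.Collection;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class CommitOnRevoke {
        // Assumes an already-configured consumer with enable.auto.commit=false.
        public static void subscribe(final KafkaConsumer<String, String> consumer) {
            consumer.subscribe(Collections.singletonList("my-topic"), // placeholder
                new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                        // Commit current positions before the partitions move to
                        // another member, so the new owner resumes without overlap.
                        consumer.commitSync();
                    }
                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                        // Nothing to do; positions come from the committed offsets.
                    }
                });
        }
    }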
-Ken
consumer, this approach is more
straightforward.
The second approach is only for specific requirements, but it has to control
more detailed information.
It is suitable for a job with a clear target, or a web service that fetches a
given range of offsets (see the sketch below).
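For the second approach, a minimal sketch of reading a fixed range of offsets with assign()/seek(), written against the current consumer API; the topic, partition, and offsets are placeholders:

    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class RangeReader {
        // Reads [startOffset, endOffset) from one partition, then stops.
        public static void readRange(KafkaConsumer<byte[], byte[]> consumer,
                                     long startOffset, long endOffset) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, startOffset);
            while (consumer.position(tp) < endOffset) {
                for (ConsumerRecord<byte[], byte[]> record :
                         consumer.poll(Duration.ofMillis(500))) {
                    if (record.offset() >= endOffset) {
                        return; // past the requested range
                    }
                    // process the record here
                }
            }
        }
    }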
What do you think?
2016-03-08 1:54 GMT+08:00 Jason Gustafson :
> Hi Ken,
>
MBean from any broker, or none had a value
of 4, assume the cluster is NOT available
Thanks.
Ken Hohl
Cars.com
-----Original Message-----
From: hsy...@gmail.com [mailto:hsy...@gmail.com]
Sent: Thursday, December 17, 2015 1:02 PM
To: users@kafka.apache.org
Subject: Re: how to programmatically mon
red requesting a JMX MBean periodically and concluding the
cluster is not accessible if we can't get the MBean from at least 1 broker.
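For reference, a sketch of that kind of periodic JMX probe; the host, port, and MBean name are assumptions on my part, and the right MBean and expected value depend on the broker version:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class BrokerProbe {
        // Returns the broker state gauge, or throws if the broker is unreachable.
        public static Object readBrokerState(String host, int jmxPort) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + jmxPort + "/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                return conn.getAttribute(
                    new ObjectName("kafka.server:type=KafkaServer,name=BrokerState"),
                    "Value");
            }
        }
    }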
What is the recommended way of accomplishing what we're trying to do?
Thanks.
Ken Hohl
Cars.com
Hi Jun,
I was wondering if there was something out there already. GPFS appears to the
OS as a local filesystem, so if there were a consumer that dumped to the local
filesystem, we'd be golden.
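For what it's worth, a minimal sketch of such a consumer, using the newer Java consumer API for brevity; the topic and output path are placeholders:

    import java.io.BufferedWriter;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class FileDumpConsumer {
        // Appends each record's value as a line under a GPFS mount point.
        // Assumes an already-configured consumer.
        public static void run(KafkaConsumer<String, String> consumer) throws Exception {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            try (BufferedWriter out = Files.newBufferedWriter(
                     Paths.get("/gpfs/archive/stream.log"), // placeholder path
                     StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
                while (true) {
                    for (ConsumerRecord<String, String> record :
                             consumer.poll(Duration.ofMillis(1000))) {
                        out.write(record.value());
                        out.newLine();
                    }
                    out.flush(); // GPFS looks local, so plain file I/O suffices
                }
            }
        }
    }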
Thanks,
--Ken
On May 16, 2014, at 7:04 PM, Jun Rao wrote:
> You probably would have to write a
Correction: the HTTP POST may or may not be faster than writing directly to
SMB, but hopefully we can improve that speed in a more scalable manner than
SMB.
--Ken
On May 16, 2014, at 11:17 AM, Carlile, Ken wrote:
> Hi all,
>
> Sorry for the possible repost--hadn't seen th
etween acquisition instruments (usually running Windows) and several kinds of
storage, so that we could write virtually simultaneously to archive storage for
the raw data and to HPC scratch for data analysis, thereby limiting the penalty
incurred from data movement between storage tiers.
Thanks for any input you have,
--Ken
here?
Thanks,
Ken
On Mar 15, 2014, at 11:09 AM, Ray Rodriguez wrote:
> Imagine a situation where one of your nodes running a kafka broker and
> zookeeper node goes down. You now have to contend with two distributed
> systems that need to do leader election and consensus in the
it's Java under all of this?
--Ken
On Mar 15, 2014, at 12:07 AM, Jun Rao wrote:
> The spec looks reasonable. If you have other machines, it may be better to
> put ZK on its own machines.
>
> Thanks,
>
> Jun
>
>
> On Fri, Mar 14, 2014 at 10:52 AM, Carlile, Ken
>
large as possible,
particularly when I'm dealing with a small number of brokers.
Thanks,
Ken Carlile
Senior Unix Engineer, Scientific Computing Systems
Janelia Farm Research Campus, HHMI
ently they use the lower-level APIs. This should make
them easier to maintain, and user-friendly enough to avoid the need for
extensive documentation.
Ken
On Fri, Aug 9, 2013 at 8:52 AM, Andrew Psaltis wrote:
> Dibyendu,
> According to the pull request: https://github.com/linkedin/camus/
make sure each email gets answered.
But it can take me a day or two.
-Ken
On Aug 7, 2013, at 9:33 AM, ao...@wikimedia.org wrote:
> Hi all,
>
> Over at the Wikimedia Foundation, we're trying to figure out the best way to
> do our ETL from Kafka into Hadoop. We don't cur
object to HDFS. The ability to plug in a custom writer was
added recently, and I haven't had a chance to review how that works. I am
looking into it now, and will send an update to this mailing list shortly.
Ken
On Wed, Jul 31, 2013 at 4:31 PM, Vadim Keylis wrote:
> Good afternoon. I a
r now?
Ken
On Wed, Jul 3, 2013 at 10:57 AM, Felix GV wrote:
> IMHO, I think Camus should probably be decoupled from Avro before the
> simpler contribs are deleted.
>
> We don't actually use the contribs, so I'm not saying this for our sake,
> but it seems like the right
Hi Jason,
On May 22, 2013, at 3:35pm, Jason Weiss wrote:
> Ken,
>
> Great question! I should have indicated I was using EBS, 500GB with 2000
> provisioned IOPs.
OK, thanks. Sounds like you were pegged on CPU usage.
But that does surprise me a bit. Did you check that you were usi
Hi Jason,
Thanks for the notes.
I'm curious whether you went with using local drives (ephemeral storage) or
EBS, and if with EBS then what IOPS.
Thanks,
-- Ken
On May 22, 2013, at 1:42pm, Jason Weiss wrote:
> All,
>
> I asked a number of questions of the group over the last
ame cluster group.
E.g. test with three cr1.8xlarge instances.
-- Ken
> On 5/20/13 12:56 PM, "Scott Clasen" wrote:
>
>> My guess, EBS is likely your bottleneck. Try running on instance local
>> disks, and compare your results. Is this 0.8? What replication factor are
>
ould have to back-port these to the older Hadoop APIs in order to
work with Cascading. Also Cascading sends all data around as the key (value is
always NullWritable) whereas the Kafka input/output formats do the opposite.
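As an illustration of that mismatch, here is a hypothetical old-API (org.apache.hadoop.mapred) mapper that swaps the two, so a Cascading flow's key-carried data can feed an output format expecting data in the value; the class name and Writable types are my own placeholders:

    import java.io.IOException;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    // Swaps key and value: Cascading carries the data in the key with a
    // NullWritable value, while the Kafka-side format expects the opposite.
    public class KeyValueSwapMapper extends MapReduceBase
            implements Mapper<BytesWritable, NullWritable, NullWritable, BytesWritable> {
        @Override
        public void map(BytesWritable key, NullWritable value,
                        OutputCollector<NullWritable, BytesWritable> output,
                        Reporter reporter) throws IOException {
            output.collect(NullWritable.get(), key);
        }
    }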
-- Ken
> On Jan 7, 2013 1:51 PM, "Ken Krugler" wrote:
>
king about a Cascading Tap, right?
-- Ken
> On Mon, Jan 7, 2013 at 9:57 AM, Ken Krugler
> wrote:
>
>> Hi Guy,
>>
>> On Jan 6, 2013, at 11:11pm, Guy Doulberg wrote:
>>
>>> Hi,
>>> Thanks David,
>>>
>>> I am looking for a produ
ugh.
One issue is that Hadoop is batch oriented, so there's a bit of an impedance
mismatch when you've got a streaming data source, but from experience it's
possible to get that to work.
-- Ken
> The product should be complete and support many connections to many data
>