Hi Chris, setting the ack default to 1 would mean folks would have to have
a replica set up and configured; otherwise, starting a server from scratch
from a fresh download would mean an error message for the user. I hear your
point about the risk of not replicating, though perhaps that use case would
be better solved through auto discovery or some other feature/contribution
in 0.9.

I would be -1 on changing the default right now. New folks coming in on a
build, whether as new users or migrations, might simply leave because they
got an error, even when just running git clone, ./sbt package, and starting
up (fewer steps in 0.8). There are already expectations set around 0.8, and
we should be trying to let things settle too.

Lastly, when folks run in production and go live they will often have a
Chef, CFEngine, Puppet, etc. script handling configuration.

Perhaps through some more operations documentation, comments, and general
communication to the community we can reduce the risk.

/*
Joe Stein
http://www.linkedin.com/in/charmalloc
Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
*/

On Tue, Mar 5, 2013 at 8:30 AM, Chris Curtin <curtin.ch...@gmail.com> wrote:

> Hi Jun,
>
> I wasn't explicitly setting the ack anywhere.
>
> Am I reading the code correctly that DefaultRequiredAcks in
> SyncProducerConfig.scala is 0, and thus the producer is not waiting on the
> leader?
>
> Setting props.put("request.required.acks", "1"); brings the writes back
> to the performance I was seeing before yesterday.
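>
> For completeness, the safer setup is only a few lines end to end. A rough
> sketch of what that looks like (broker address, topic name, and message
> contents here are just placeholders):
>
>     import java.util.Properties;
>
>     import kafka.javaapi.producer.Producer;
>     import kafka.producer.KeyedMessage;
>     import kafka.producer.ProducerConfig;
>
>     Properties props = new Properties();
>     props.put("metadata.broker.list", "localhost:9092"); // placeholder broker
>     props.put("serializer.class", "kafka.serializer.StringEncoder");
>     props.put("request.required.acks", "1"); // wait for the leader's ack
>
>     Producer<String, String> producer =
>         new Producer<String, String>(new ProducerConfig(props));
>     producer.send(new KeyedMessage<String, String>("test-topic", "key", "hello"));
>     producer.close();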
>
> Are you guys open to changing the default to 1? The MongoDB Java driver
> folks made a similar default change at the end of last year, because many
> people didn't understand the risk the default of no-ack put them in until
> they had a node failure. So now they default to 'safe' and let you decide
> what your risk level is, rather than assuming you can afford to lose data.
>
> Thanks,
>
> Chris
>
>
>
> On Tue, Mar 5, 2013 at 1:00 AM, Jun Rao <jun...@gmail.com> wrote:
>
> > Chris,
> >
> > On the producer side, are you using ack=0? Earlier, ack=0 was the same as
> > ack=1, which meant that the producer had to wait for the message to be
> > received by the leader. More recently, we did the actual implementation of
> > ack=0, meaning the producer doesn't wait for the message to reach the
> > leader, which is why it is much faster.
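> >
> > In config terms, those semantics map to request.required.acks; a quick
> > summary of the values (0 and 1 as described above, -1 per the 0.8
> > replication design):
> >
> >     props.put("request.required.acks", "0");  // don't wait for any ack (current default)
> >     props.put("request.required.acks", "1");  // wait for the leader to log the message
> >     props.put("request.required.acks", "-1"); // wait for all in-sync replicas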
> >
> > Thanks,
> >
> > Jun
> >
> > On Mon, Mar 4, 2013 at 12:01 PM, Chris Curtin <curtin.ch...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I'm definitely not complaining, but after upgrading to HEAD today my
> > > producers are running much, much faster.
> > >
> > > I don't have any measurements, but on the last release I was able to
> > > tab over and stop a Broker before I could generate 500 partitioned
> > > messages. Now the run completes before I can even get the Broker shut
> > > down!
> > >
> > > Anything in particular you guys fixed?
> > >
> > > (I did remove all the files on disk per the email thread last week and
> > > reset the ZooKeeper metadata, but that shouldn't matter, right?)
> > >
> > > Very impressive!
> > >
> > > Thanks,
> > >
> > > Chris
> > >
> >
>
