On 14 Nov 2013, at 18:23, Kohsuke Kawaguchi <kkawagu...@cloudbees.com> wrote:

> On 11/13/2013 11:58 PM, Luca Milanesio wrote:
>> We need to run some scalability tests on the events API because:
>> 1) we need to monitor over 1000 repos (one call per repo? one call for all?)
>> 2) when monitoring the entire jenkinsci org, 300 events might not be enough 
>> in case of catastrophic events
> 
> The good news is that a push that removes/alters refs also takes time. I 
> have the notification e-mails from your push to 186 repos, and they span over 
> an hour.

True: the notifications may have taken an hour, but the push itself was fairly 
fast, still only around 25 per minute. A 300-event window polled every minute 
should then be quite enough :-)
The only way to exceed that limit would be parallel pushes by multiple 
accounts ... but that, I would say, is very unlikely.
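To put the numbers above together (a back-of-the-envelope check, assuming the feed retains the 300 most recent events and the observed ~25 pushes/min rate holds):

```python
# Headroom check for the polling scheme discussed above.
# Assumptions: the events feed keeps the most recent 300 events,
# and pushes arrive at roughly 25 per minute, as observed.
BUFFER_SIZE = 300    # events retained by the feed
PUSH_RATE = 25       # observed pushes per minute
POLL_INTERVAL = 1    # minutes between polls

# Minutes of activity the buffer can hold before events fall off the end.
minutes_of_headroom = BUFFER_SIZE / PUSH_RATE

# How many poll intervals fit inside that headroom.
safety_factor = minutes_of_headroom / POLL_INTERVAL

print(minutes_of_headroom, safety_factor)  # 12.0 12.0
```

So even at the observed peak rate, a one-minute poll has roughly a 12x margin before any event could be missed.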

> 
> So I'm hoping that polling 300 events every minute would cover us pretty 
> well. And like you say, a webhook can help us reduce this window down even 
> further.

Yep.
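On the webhook side, one simple sketch is to classify incoming push payloads as destructive (ref deleted or history rewritten) and trigger an immediate backup for those, instead of waiting for the next poll. The `deleted` and `forced` field names follow the GitHub push webhook payload; the handler and sample data here are hypothetical:

```python
# Hedged sketch: flag destructive pushes from a GitHub push webhook payload.
# "deleted" is true when the ref was removed; "forced" is true when the
# push rewrote history. Sample values below are made up for illustration.
def is_destructive_push(payload):
    """True when the push removed a ref or force-updated it."""
    return bool(payload.get("deleted") or payload.get("forced"))

# Hypothetical payload for a force-push to master:
sample = {"ref": "refs/heads/master", "deleted": False, "forced": True}
print(is_destructive_push(sample))  # True
```

A webhook receiver calling something like this on each delivery would close the polling window almost entirely for the cases that matter most.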

> 
> 
> There's another reason I'm optimistic about this scheme.
> 
> Suppose you are maliciously trying to cause data loss. If we are regularly 
> recording refs, you have to mount an attack immediately after some commits go 
> in so as to overwhelm the 300 event buffer, then keep that saturation going 
> so that your ref updates/removals will also be dropped from the event buffer. 
> And even with this much effort you can only cause the data loss of the 
> commits that went in right before yours.
> 
> So I think it makes the attack so ineffective that we can tolerate that risk, 
> and I find it unlikely that any accident would look like this.

Agreed.

Luca.
