On 08/13/2010 05:10 PM, Kevin Kofler wrote:
> Ralf Corsepius wrote:
>>>> I think, for packages that are modified during the testing period,
>>>> this N should be calculated from the day the last push was made to
>>>> testing.
>>
>> This would be very unhelpful.
>>
>>> Yes, this was my initial intention.  However, looking at the code a bit
>>> closer, your scenario would currently be allowed, as it calculates the
>>> time-in-testing based only on the first push to testing.
>> This behavior is helpful, because otherwise updates would "starve".
>
> +1
>
> Once again, we're in violent agreement!
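
To make the difference concrete, here is a rough sketch of the two 
policies (a hypothetical illustration, not Bodhi's actual code; the 
function name and dates are made up):

    from datetime import date

    # Hypothetical sketch of the two policies, not Bodhi's actual code.
    # push_dates: dates on which builds of an update were pushed to testing.
    def days_in_testing(push_dates, today, reset_on_push=False):
        # Current behavior: the clock starts at the *first* push to testing.
        # Proposed behavior: the clock restarts at the *last* push.
        anchor = max(push_dates) if reset_on_push else min(push_dates)
        return (today - anchor).days

    # An update pushed three weeks ago, respun with a one-line fix yesterday:
    pushes = [date(2010, 7, 23), date(2010, 8, 12)]
    print(days_in_testing(pushes, date(2010, 8, 13)))                      # 21 -> may go stable
    print(days_in_testing(pushes, date(2010, 8, 13), reset_on_push=True))  #  1 -> wait starts over

With the reset variant, every respin restarts the N-day wait, which is 
exactly what leads to the two bad choices described below.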

A real-world case I've occasionally encountered:

I submitted a package to testing. It did not receive any feedback; 
however, I started using it myself.

Several weeks later, I notice another (often minor) bug and fix it with 
a few-line patch, or upstream releases a new minor bug-fix release.

With the new procedure, I have two choices:
1. Push the known-to-be-buggy package, to avoid having the timer reset, 
knowingly exposing the users to this bug and to any other bugs the 
follow-up fix was supposed to address.

2. Push a new update, which resets the timer.
=> Users will have to wait another timeout period for the bugs they are 
waiting to see fixed.

Neither of these choices is helpful.

With the timer not being reset, I could push the package with the new 
bug fix applied, knowing that the package would not immediately 
malfunction and that it would carry the new fix.



Another, similar scenario is upstream releasing packages at a higher 
frequency than Fedora can handle.

Though these cases are fairly rare, I have seen them happen, too 
(classical case: upstream releases an update, the package makes it into 
Fedora, and several weeks/months later upstream notices major problems 
and releases "hot fix" releases at high frequency).

Ralf
