On 29/07/2014 6:22 a.m., Alex Rousskov wrote:
> On 07/24/2014 10:37 PM, Amos Jeffries wrote:
> 
>> If any of you have patches lined up for commit or
>> about to be, please reply to this mail with details so we can triage
>> what gets in and what can hold off in audit.
> 
> 
> Here are some of the bigger items on the Factory plate:
> 
> * Native FTP support (works great; still working on class renaming)

Can you get that to audit this week?


> * Peek and Splice (now with SNI; probably ready but may need polishing)
> * StoreID via eCAP/ICAP (waiting for audit or Store-ID naming comments)
> * on-disk header updates or Bug #7 (probably more than two weeks out)

Bug #7 is a squid-2.7 parity fix, so it is a candidate for backporting.

> * Store API polishing (no ETA; depends on bug #7)
> * post-cache REQMOD support (may be available late August 2014)
> 
> 
> Judging by the trunk items listed on the RoadMap wiki page, the only
> major v3.5 improvement already available is Large Rock, so getting at
> least the first two items in would be necessary to justify the
> maintenance/porting overheads of a new Squid version IMO.

Thank you for that list.


FYI there are several goal metrics informing the decision to branch:

* major user visible features/changes
 - Going by user visibility these are: Large Rock, Collapsed Forwarding,
and helper *_extras. Possibly also eCAP 1.0.
 - This has been the main criterion previously; however, after board
discussions I am convinced to give it lower priority, and I am no longer
waiting on a particular feature-count threshold. What we have now seems
good enough when combined with the criteria below.

* annual stable cycle (aka. ensuring sponsors wait no more than 1 year
for their features to get into a stable production release)
 - the initial 3.5 features are almost 10 months old now, even if they
are minor.

* significant LOC change between stable and beta
  - just over 33% of Squid right now (91K/271K LOC).

* bugs and general stability
  - we have a record-low number of bugs in both 3.HEAD and 3.4 to work
on right now.

Amos
