Re: Trying to start using bzr
On Mon, 2008-03-31 at 20:43 +0200, Henrik Nordstrom wrote:
> But I have to ask Robert how we best continue the merge process on that
> branch.. I suspect it would actually be best to recreate the branch as
> a new bzr branch from trunk, if it's kept at all...

We can manually sync it up quite cleanly if we can determine the last
revision of trunk merged to it; after that merges would be smooth.

-Rob
--
GPG key available at: http://www.robertcollins.net/keys.txt.

signature.asc
Description: This is a digitally signed message part
Re: bzr issues
On Tue, 2008-04-01 at 15:18 +1300, Amos Jeffries wrote:
> I'm getting annoyed by bzr's method of handling bzr send emails. As you
> may have noticed it adds the subject of the last commit to the email,
> regardless of the message that's asked for manually during the send
> process. It would be good if bzr pulled its email subject from that new
> message. We are going to be sick of feature submissions going up as
> "Merged from trunk" because the last thing the developer did was
> properly check the branch was up-to-date.

I think this is a good suggestion. It's a little complex because most
email clients want a subject given when starting the new email, and you
can edit the subject *and* the message at that point. Are you perhaps
supplying a fully prepared message to bzr send? If so then we should
special case that.

 affects bzr

-Rob
Re: bzr issues
On Tue, 2008-04-01 at 15:41 +1300, Amos Jeffries wrote:
> No. I'm just using:
>   bzr send --mailto=squid-dev@squid-cache.org
> and entering the message when it asks for one.

Is it popping up a gui client, or a text editor? Does it have a field
for setting a subject?

-Rob
Re: bzr stable branch maintenance and backporting
On Thu, 2008-03-27 at 23:23 +0100, Henrik Nordstrom wrote:
> Robert, do you have any advice on how to best keep track of what to
> backport to earlier (i.e. STABLE) releases?
> ...
> For a sensible workflow what one really wants is something like this:
> - A changeset starts out as uncategorized
> - Multiple related changesets may be grouped (i.e. a fix or later
>   continuation of an earlier commit)
> - Voting on a group of changes, categorizing the group as either
>   * to be backported
>   * not to be backported
>   * maybe backported when more complete
>   (no default until some opinions have been voiced)
> - Voting on a group MAY be reopened later on request
> - A backport gets reverted if its status gets changed

So, for changeset id, I suggest we use the revision id of the commit of
a change to trunk. bzr doesn't yet, but will soon, be able to report on
'has been backported' by the use of cherry pick merges. Beyond that bzr
really has nothing built around this, but it looks like an interesting
thing to build and have.

> just trying to figure out how much bzr can support this, and how much
> needs to be built externally in separate trackers. The simple changeset
> framework we used for CVS is far from perfect and does not 100% reflect
> the above requirements, but still worked out reasonably well making
> sure that changes flow nicely and orderly from HEAD branches down to
> the active stable branches. We now need to get a similar workflow
> running for the bzr setup..

I'd probably start with the cvs one but use bzr's superior facilities
for obtaining changesets.

-Rob
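[A conceptual sketch of the grouping-and-voting workflow Henrik lists
above. All names here (`ChangeGroup`, the state strings) are hypothetical
illustrations for the external-tracker idea, not part of bzr or any
existing squid tool.]

```python
# Sketch of the backport-tracking workflow described above: grouped
# trunk changesets, votes, and a default "uncategorized" state until
# opinions have been voiced.  Hypothetical names throughout.

class ChangeGroup:
    """A group of related trunk changesets, keyed by revision ids."""
    STATES = {"uncategorized", "backport", "no-backport", "maybe-later"}

    def __init__(self, revision_ids):
        self.revision_ids = list(revision_ids)  # trunk revision ids
        self.state = "uncategorized"            # no default categorization
        self.votes = {}                         # voter -> proposed state

    def vote(self, voter, state):
        if state not in self.STATES - {"uncategorized"}:
            raise ValueError("invalid vote: %r" % state)
        self.votes[voter] = state
        tally = {}
        for s in self.votes.values():
            tally[s] = tally.get(s, 0) + 1
        top = max(tally.values())
        leaders = [s for s, n in tally.items() if n == top]
        # a unique leading opinion decides; a tie leaves it undecided
        self.state = leaders[0] if len(leaders) == 1 else "uncategorized"

    def reopen(self):
        """Voting MAY be reopened later on request."""
        self.votes.clear()
        self.state = "uncategorized"

g = ChangeGroup(["rev-1234", "rev-1235"])
g.vote("henrik", "backport")
g.vote("amos", "backport")
print(g.state)  # backport
```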
Re: bzr stable branch maintenance and backporting
On Thu, 2008-03-27 at 22:03 -0400, Tres Seaver wrote:
> I thought I understood that 'bzr' encouraged the fix on the old rev and
> forward port model over backporting / cherry picking? In the style
> described at: http://www.venge.net/mtn-wiki/DaggyFixes

I think that daggyfixes is the tail shaking the dog. Folk usually don't
know how far back a problem goes when they realise a problem exists;
where the fix exists in the global revision dag is a technical issue for
vcs authors, it should not be something code authors need to think
about.

-Rob
Re: bzr stable branch maintenance and backporting
On Thu, 2008-03-27 at 22:26 -0400, Tres Seaver wrote:
> OK. I had the impression that bzr's model was branch happy (compared to
> CVS / SVN), which would seem to me to make forward porting more
> attractive.

bzr encourages many, lightweight branches.

> For instance, in supporting Zope2, we often need to do a fix across
> multiple supported releases: e.g., if somebody reported a security
> issue today, we might end up releasing fixes for Zope 2.8 and 2.9, as
> well as 2.10 (the currently released branch) and 2.11 (the
> almost-ready-for-prime-time branch). I've even done one fix in this
> configuration for 2.7 (because there are a large number of production
> systems on 2.7, including a couple of my clients).

Yup.

> My experience with such fixes indicates that it is much easier to fix
> the oldest stuff, and then forward port, compared to fixing the trunk,
> and then backporting. That made the daggy fixes model seem quite
> natural to me.

My point is that you can do it in any direction you find most
convenient. Once any two branches are diverged, there is no difference
for a vcs: there's no 'forward' or 'backward' to the merge, it's
symmetrical.

-Rob
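[A toy illustration of the symmetry point above: with a three-way merge
and non-overlapping edits, merging A into B and B into A yield the same
tree. This is a conceptual sketch of the merge idea, not how bzr is
implemented.]

```python
# Toy three-way merge over {path: content} trees.  With non-overlapping
# edits, "forward port" and "backport" produce the same result, which is
# why the direction of the merge doesn't matter once branches diverge.

def three_way_merge(base, ours, theirs):
    """Merge per-file, three-way; raises on conflicting edits."""
    merged = {}
    for path in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(path), ours.get(path), theirs.get(path)
        if o == t:                 # both sides agree (edit or deletion)
            if o is not None:
                merged[path] = o
        elif o == b:               # only 'theirs' changed it
            if t is not None:
                merged[path] = t
        elif t == b:               # only 'ours' changed it
            if o is not None:
                merged[path] = o
        else:
            raise ValueError("conflict in %s" % path)
    return merged

base  = {"main.c": "v1", "util.c": "v1"}
old   = {"main.c": "v1", "util.c": "fixed"}   # fix made on the old branch
trunk = {"main.c": "v2", "util.c": "v1"}      # unrelated trunk work

# Merging the old branch into trunk, or trunk into the old branch,
# gives the identical merged tree:
assert three_way_merge(base, trunk, old) == three_way_merge(base, old, trunk)
```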
Re: bzr revert
On Thu, 2008-03-27 at 00:02 +0100, Henrik Nordstrom wrote:
> I was more thinking of the resulting changeset and what you meant by
> your statement above:
> > you are committing a changeset that happens to alter previously done
> > work, but bzr does not consider this a cherrypick or merge - the undo
> > will propagate.
> I.e. how does a commit of the results of "bzr revert -r X" differ from
> a commit of the results of "bzr merge -r -1..X"?

In current bzr, not at all. In future, when the planned cherrypicking
changes are made, bzr will know that the changesets from -1..X have been
backed out, rather than just the textual changes being reversed.

-Rob
Re: Valgrind results
On Sun, 2008-03-23 at 20:37 +0900, Adrian Chadd wrote:
> On Mon, Mar 24, 2008, Amos Jeffries wrote:
> > Adrian Chadd wrote:
> > > You can ignore that. If you care that much about the epoll error
> > > then just bzero() the data being passed into epoll_ctl() before
> > > it's called.
> > I care about every potential cause of trouble in squid. Particularly
> > avoidable ones that pop up all the time. 'Zero errors' policy and
> > all that.
> That's cool. There's just bigger fish to fry. :)
> > It may be that this is fixable by setting the first byte of the
> > buffer to '\0'. I'm just busy working on the other more serious ones
> > myself tonight.
> Nah, it'll actually check the whole region.

Is it valgrind being wrong, or does epoll actually read from that
uninitialised region? If the former, that's what excludes are for; if
the latter then it is a bug :)

-Rob
Re: bzr revert
On Tue, 2008-03-25 at 10:34 +1300, Amos Jeffries wrote:
> Robert, Henrik,
> Some of us have found a knowledge-hole in the bzr revert process. We
> can easily reverse a patch using for example:
>   bzr revert -r 8902
> But then when it's fixed we don't know how to undo the undo. The local
> branch is left with code apparently up-to-date but is actually missing
> a changeset. How do we undo the local change back to actual?

'bzr revert' changes the working tree to be the same as a given revision
[with optional file list]. If you then do a commit - e.g.:

  bzr revert -r X
  bzr commit

you are committing a changeset that happens to alter previously done
work, but bzr does not consider this a cherrypick or merge - the undo
will propagate.

To 'backout' something, or to reinstate something backed out, use
cherrypick merges. e.g. to backout revision X:

  $ bzr merge -r X..X-1 /bzr/squid3/trunk
  $ bzr commit -m "Backout linux memory fix."
  Committed revision Z

To reinstate it, just backout the backout:

  $ bzr merge -r Z..Z-1 /bzr/squid3/trunk
  $ bzr commit -m "Reinstate linux memory fix."

-Rob
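[The reverse cherrypick above works because applying a revision's delta
and then its inverse returns the tree to its prior state. A minimal
model of that idea, as a conceptual sketch rather than bzr internals:]

```python
# Why 'bzr merge -r X..X-1' backs out revision X: it applies the
# inverse of X's delta, and applying it again reinstates the fix.
# Trees are modeled as {path: content}; this is illustration only.

def delta(old, new):
    """Changeset as {path: (before, after)}; None means absent."""
    changes = {}
    for path in set(old) | set(new):
        if old.get(path) != new.get(path):
            changes[path] = (old.get(path), new.get(path))
    return changes

def invert(changes):
    return {p: (after, before) for p, (before, after) in changes.items()}

def apply_delta(tree, changes):
    tree = dict(tree)
    for path, (before, after) in changes.items():
        if after is None:
            tree.pop(path, None)
        else:
            tree[path] = after
    return tree

rev_w = {"mem.c": "leaky"}
rev_x = {"mem.c": "fixed"}               # revision X: the memory fix
fix = delta(rev_w, rev_x)

backed_out = apply_delta(rev_x, invert(fix))   # like merge -r X..X-1
assert backed_out == rev_w
reinstated = apply_delta(backed_out, fix)      # backout the backout
assert reinstated == rev_x
```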
Re: Including build tag in daily snapshots
On Thu, 2008-03-20 at 08:57 -0600, Alex Rousskov wrote:
> Hello,
> From http://www.squid-cache.org/bugs/show_bug.cgi?id=2275:
>   User: squid-3.0.STABLE2 compile error.
>   Troubleshooter: Where did you get this 3.0.STABLE2 tree?
>   User: squid-3.0.STABLE2-20080319.tar.gz
>   Troubleshooter: 3.0.STABLE2 does not have this bug, only some
>   snapshots do.
> Can the snapshot making scripts be altered to include the snapshot date
> or revision number in the Squid version tag? This might create an
> occasional conflict if a user tries to patch autotools-related files,
> but it would save us from trying to troubleshoot the bug in the wrong
> version.

bzr help version-info - bzr can output the tree details to a compilable
file.

-Rob
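[A sketch of what baking tree details into a compilable file enables: a
snapshot that identifies itself precisely. The generator below is an
illustration with hypothetical macro names; the real `bzr version-info`
tool has its own templates and options.]

```python
# Illustration of embedding snapshot identity in a generated header, so
# a bug report can name the exact tree instead of "3.0.STABLE2".
# SQUID_* macro names and the revision id below are made up.

def version_header(version, revno, revision_id, date):
    return (
        '#define SQUID_VERSION "%s"\n'
        '#define SQUID_REVNO %d\n'
        '#define SQUID_REVISION_ID "%s"\n'
        '#define SQUID_SNAPSHOT_DATE "%s"\n'
        % (version, revno, revision_id, date)
    )

hdr = version_header("3.0.STABLE2", 8902,
                     "someone@example.org-20080319-abcdef", "20080319")
print(hdr)
```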
Re: Including build tag in daily snapshots
On Fri, 2008-03-21 at 14:33 +1300, Amos Jeffries wrote:
> I've been thinking it would be a good idea to add the patch-cleaning
> script to the source for people to use for submission. There are a few
> minor issues to work out still but if you all agree I'll drop it in.

FWIW I'd really rather we just delete the files that you have to clean.
They should not be versioned in the developers' branches, only in
branches intended for non-developers. So we should have our branches
related thusly:

  daily - trunk - featureX/personX
            |
            v
  releaseX.Y.Z - branchX.Y

The daily and releaseX.Y.Z branches should have the autofluff output,
built documentation and so on. The rest should not.

-Rob
Re: bzr revert
On Tue, 2008-03-25 at 13:04 +0100, Henrik Nordstrom wrote:
> On Tue, 2008-03-25 at 12:04 +1100, Robert Collins wrote:
> > 'bzr revert' changes the working tree to be the same as a given
> > revision [with optional file list]. If you then do a commit - e.g.:
> >   bzr revert -r X
> >   bzr commit
> > you are committing a changeset that happens to alter previously done
> > work, but bzr does not consider this a cherrypick or merge - the
> > undo will propagate.
> How does revert differ from a backout merge down to that same revision?
>   bzr revert -r X
>   bzr merge -r -1..X (or last:..X)

The conflict resolver for revert uses a big hammer to make the tree
identical; in the case of local edits the merge above will conflict and
keep your edit, the revert above will discard it.

'revert' is 'become'
'merge' is 'apply change'

-Rob
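[The 'become' vs 'apply change' distinction can be modeled on a single
file's text: revert discards whatever you had locally, while a merge of
the same change conflicts with a local edit. A conceptual sketch, not
bzr's actual conflict resolver:]

```python
# 'revert' is 'become'; 'merge' is 'apply change'.  Toy model of why
# they treat a local edit differently.

def revert_to(target_text, working_text):
    """Become the target: the local working text is discarded outright."""
    return target_text

def merge_change(working_text, old_text, new_text):
    """Apply one change: clean only if working text matches a side."""
    if working_text == old_text:
        return new_text              # change applies cleanly
    if working_text == new_text:
        return working_text          # change already applied
    raise ValueError("conflict: local edit differs from both sides")

# revert silently discards a local edit; merge surfaces it as a conflict
assert revert_to("v1", "my local edit") == "v1"
try:
    merge_change("my local edit", "v2", "v1")
except ValueError:
    print("merge conflicts, keeping your edit for resolution")
```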
Re: Bundle Buggy: Voting error
On Tue, 2008-03-25 at 13:09 +0100, Henrik Nordstrom wrote:
> Robert? Should I go ahead and set up the needed accounts?

turbogears seems broken on squid-cache.org at the moment; until I figure
out what has changed, no one can add accounts.

-Rob
Re: [squid-users] Squid Future (was Re: [squid-users] Squid-2, Squid-3, roadmap)
On Mon, 2008-03-24 at 11:25 +0100, Henrik Nordstrom wrote:
> On Sun, 2008-03-23 at 18:29 +0900, Adrian Chadd wrote:
> > The real solution is a tree for offset lookups, and a linear walk of
> > order O(1) for subsequent sequential accesses.
> Walking a tree is usually a cheap operation, unless the tree is wrongly
> designed. You just need to remember the current tree node to enable
> linear walk from there. But it's true splay trees are not the
> appropriate lookup structure for this. Too much runtime churn with the
> tree rebalancing itself..

I believe this to be the case; however, as the code only uses the tree
interface, it should be easy to substitute in a different tree ADT here.

-Rob
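[The pattern Henrik describes — an ordered index for random offset
lookups plus a remembered position for O(1) sequential continuation —
can be sketched with a sorted list standing in for the tree. This is an
illustration of the access pattern, not squid code:]

```python
import bisect

class ExtentIndex:
    """Ordered offset index: O(log n) random lookup, O(1) sequential walk
    by remembering the last node visited (here, the last list index)."""

    def __init__(self, offsets):
        self.offsets = sorted(offsets)   # start offset of each extent/node
        self._last = 0                   # remembered position

    def lookup(self, offset):
        """Random access: binary search, remembering where we stopped."""
        i = bisect.bisect_right(self.offsets, offset) - 1
        self._last = i
        return self.offsets[i]

    def next_sequential(self):
        """Sequential access: one step from the remembered node, no search."""
        self._last += 1
        return self.offsets[self._last]

idx = ExtentIndex([0, 4096, 8192, 12288])
assert idx.lookup(5000) == 4096       # random offset lookup
assert idx.next_sequential() == 8192  # cheap sequential continuation
```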
Re: loggerhead problems Could not fetch changes
On Thu, 2008-03-20 at 06:09 +0100, Henrik Nordstrom wrote:
> loggerhead (/bzrview/) seems to have some problems. Any attempt to view
> changes of an individual file gives a server error. Viewing the commit
> logs, file annotations, revision changesets and also download of
> individual source files seem to work however..
> http://www.squid-cache.org/bzrview//squid3/trunk/changes?start_revid=henrik%40henriknordstrom.net-20080318122230-sia0s0f3cf18u1ju&file_id=access_log.cc-19970621083647-inx67188wqbz-1
>   500 Internal error
>   Could not fetch changes
>   Powered by CherryPy 2.2.1

Indeed, there will be something in the logs. Probably due to the bzr
upgrade I did.

-Rob
Re: loggerhead problems Could not fetch changes
On Thu, 2008-03-20 at 16:25 +1100, Robert Collins wrote:
> On Thu, 2008-03-20 at 06:09 +0100, Henrik Nordstrom wrote:
> > loggerhead (/bzrview/) seems to have some problems. Any attempt to
> > view changes of an individual file gives a server error. Viewing the
> > commit logs, file annotations, revision changesets and also download
> > of individual source files seem to work however..
> Indeed, there will be something in the logs. Probably due to the bzr
> upgrade I did.

Restarted loggerhead and it is working.

-Rob
Re: Some progress on large response headers
On Sun, 2008-03-16 at 21:54 -0600, Alex Rousskov wrote:
> On Sun, 2008-03-16 at 02:55 +0100, Henrik Nordstrom wrote:
> > Cons:
> > - Not entirely sure how ICAP and ESI fit into the reply processing.
> >   But I hope there is no problem..
> > The bzr branch can be found at
> > http://www.henriknordstrom.net/bzr/squid3/hno/largeresp/
> > (bzr only, no online viewer installed, sorry)
> What is the recipe for viewing that with bzr?

In addition to the ones from Henrik and Amos, if you have trunk on your
machine, you can also do:

  cd $trunk
  bzr diff -r ancestor:http://www.henriknordstrom.net/bzr/squid3/hno/largeresp/..branch:http://www.henriknordstrom.net/bzr/squid3/hno/largeresp/

to get a diff without making a local clone of the branch; it will cache
the history in your branch so future diffs will be fast.

-Rob
Re: Bundlebuggy bug links
On Sun, 2008-03-16 at 23:30 +0100, Henrik Nordstrom wrote:
> Robert, can you fix the bundlebuggy bug links to point to our bugzilla
> instead of launchpad? /bugs/show_bug.cgi?id=id
> From what it looks, this is set in bundlebuggy/templates/master.kid
> where it's currently hardcoded to use the launchpad bug tracker.

Sure. I'll poke at it today, and file a bug to the author/fix it.

-Rob
Re: [PATCH] Fix stripping NT domain in squid_ldap_group
bb:approve
Re: tcp proxy hackery
On Sun, 2008-03-16 at 23:21 +0900, Adrian Chadd wrote:
> On Wed, Mar 12, 2008, Adrian Chadd wrote:
> > I'm able to push this to about 5000 req/sec, 8000 concurrent client
> > connections (so 16,000 concurrent TCP connections on the proxy) @ ~
> > 335mbit full-duplex on my test setup. I'm not maxing out anything
> > yet as my thttpd opteron server is running at full steam.
> ..
> I've bought another box to run thttpd on and I've maxed out the tcp
> proxy box when hitting it with small connections. It's a FreeBSD
> problem - their locking stuff doesn't scale well under very high
> connection creation/destruction rates.
> http://www.creative.net.au/diffs/test1-340.tar.gz
> It maxes out on my kit at the above speed; but at 32k replies it hits
> 3100 req/sec and close to a gigabit. I'll whack a recent linux
> distro+kernel on the test boxes in a few days and see how it compares.

Sounds like it's going well. I'd love to see a similar benchmark for
memory allocations - something that exercises the slab and buffer
allocator in squid, so we can tune that in -3.

-Rob
bzr version upgrade tomorrow
I would like to upgrade the bzr version on squid-cache.org tomorrow.
This does not require client upgrades. Those of you with current clients
will be seeing:

  Server is too old for fast get_parent_map, reconnecting. (Upgrade the
  server to Bazaar 1.2 to avoid this)

when you use bzr+ssh; the upgrade will fix this (and make bzr update and
first checkout massively faster).

-Rob
Re: bzr commit stuff?
On Sat, 2008-03-15 at 13:49 +0100, Henrik Nordstrom wrote:
> On Fri, 2008-03-14 at 12:34 +0900, Adrian Chadd wrote:
> > Has anyone actually committed to the bzr tree? I haven't seen any
> > commit messages.
> Commit works fine, but commit messages are not yet operational. Waiting
> for Robert to set up a mailer cron job..

I have done this in my crontab - it's 'bzr-hookless-email'. Waiting for
a commit to see it work, or not.

-Rob
Re: tcp proxy hackery
This reminds me, one of the things I was thinking heavily about a few
years ago was locality of reference in N-CPU situations. That is, making
sure we don't cause thrashing unnecessarily.

For instance - given chunking we can't really avoid seeing all the bytes
for a MISS, so does it matter if we process all of the request on one
CPU, or part on one and part on another? Given NUMA it clearly does
matter, but how many folk run squid/want to run squid on NUMA machines?

Or, should we make acl lookups come back to the same cpu, but do all the
acl lookups on one cpu, trading potential locking (a non-read-blocking
cache can allow result lookups cheaply) for running the same acl code
over extended sets of acls. (Not quite SIMD, but think about the problem
from that angle for a bit).

-Rob
Re: [squid-users] Caching files from Amazon S3
On Mon, 2008-03-17 at 12:05 +0900, Adrian Chadd wrote:
> On Mon, Mar 17, 2008, Robert Collins wrote:
> > On Sun, 2008-03-16 at 14:04 +0900, Adrian Chadd wrote:
> > > Annoyingly, why the hell is the request from the client a range
> > > request? Squid can't easily cache those unless it somehow fetches
> > > the entire object first.
> > FWIW -3 has about 60% of the work needed to cache fragments done.
> > What's missing is a store that can handle them.
> I've looked at the -3 stuff, and it's missing about as much as the -2
> stuff is missing. The memory store is only a small part of the overall
> problem handling sparse objects. (Unless there's some code I've missed
> that handles other range-request relevant stuff.)

In -3 the client side was overhauled to talk in object offsets, so a
range request would ask the store for the relevant bytes, and rather
than getting an opaque stream and range parsing it, it gets the bytes
back; likewise the store insertion by the server side writes offset,
length data into the store, not opaque data.

-Rob
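[A sketch of the missing piece discussed above: a store that deals in
(offset, length) extents and can report a miss when a requested range
spans a hole in a sparse object. Hypothetical illustration only, not
squid's actual StoreEntry API:]

```python
# Sparse object modeled as a set of cached extents.  The server side
# inserts by offset; a range request is served only if every byte of
# the range is present, otherwise it is a miss.

class SparseObject:
    def __init__(self):
        self.extents = {}                 # start offset -> bytes

    def write(self, offset, data):
        self.extents[offset] = data       # insertion carries its offset

    def read(self, offset, length):
        """Return bytes if the range is fully cached, else None (miss)."""
        out = bytearray()
        pos = offset
        while pos < offset + length:
            for start, data in self.extents.items():
                if start <= pos < start + len(data):
                    take = data[pos - start:pos - start + (offset + length - pos)]
                    out += take
                    pos += len(take)
                    break
            else:
                return None               # hole: can't serve this range
        return bytes(out)

obj = SparseObject()
obj.write(0, b"0123")
obj.write(100, b"abcd")
assert obj.read(100, 4) == b"abcd"   # cached fragment hit
assert obj.read(0, 8) is None        # spans a hole -> miss
```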
Re: [squid-users] Caching files from Amazon S3
On Mon, 2008-03-17 at 13:24 +0900, Adrian Chadd wrote:
> Yes, but I think more thought needed to go into teaching the store
> about sparse objects (which the backend store hooks don't cover);

They don't yet, that's definitely true.

> teaching the store about re-populating the memory cache (which is
> needed for range request processing -and- for general better
> performance);

Needed for more than fragments. Not needed any more for range requests
than any other.

> is there any code to properly process range replies and spit stuff
> into the store?

Yes. http.cc had that last I looked.

> As I said, a lot of it hasn't been done (I don't think 60% is a
> credible answer) and I think handling range requests is something that
> needs to be thought of as part of a squid internal planning/discussion
> rather than something separate.

Well, sure, you can assess the amount of work done/to do differently. I
think that if I sat down to do it, it would be less work to go than I've
already put in on the outside of the store.

-Rob
Re: [MERGE] Bug 2252: Build failure on Mac OSX 10.5
On Wed, 2008-03-12 at 14:15 +1300, Amos Jeffries wrote:
> === modified file 'src/ACLBrowser.cc'
> --- src/ACLBrowser.cc 2004-12-20 23:30:12 +0000
> +++ src/ACLBrowser.cc 2008-03-12 01:10:46 +0000
> @@ -41,11 +41,7 @@
>  /* explicit template instantiation required for some systems */
> -template class ACLStrategised<char const *>;
> -template class ACLRequestHeaderStrategy<HDR_USER_AGENT>;

forte used to fail to build without the explicit instantiation. It might
be an idea to #ifdef that; or alternatively, you should remove the
comment as well as the explicit instantiation.

bb:tweak

-Rob
bzr/vcs stuff
Sorry I've been silent a few days - been ill :(. Tomorrow I travel home,
then on Tuesday I will be around again.

Re: review process stuff - there is a tool, running already at
http://squid-cache.org/bundlebuggy/ which reduces much of the overhead
of patch submission. 'bzr send' to squid-dev will cause bundlebuggy to
catch mails and track them. I'll set up accounts etc on Tuesday.

I'll be turning on commit mails Tuesday as well.

Sorry for the delay,
Rob
Re: squid3 CVS down for the migration
On Wed, 2008-03-05 at 15:18 +1300, Amos Jeffries wrote:
> > I'm doing the CVS-bzr migration now, any future squid3 commits will
> > be ignored.
> > -Rob
> Oh darn, I got my date conversion wrong. Oh well, this afternoon's
> commits weren't much.

I think they got picked up anyhow.
http://www.squid-cache.org/bzrview//squid3/trunk/changes

The conversion is done, and the new repository is in place. Please
delete any bzr copies you had and you can start working with the
converted repository as per the squid3vcs wiki page.

-Rob
test -ignore
Re: bzr cutover timetable
On Sun, 2008-03-02 at 20:31 +0100, Guido Serassio wrote:
> Hi,
> At 13:47 02/03/2008, Robert Collins wrote:
> > Henrik and I have been fine tuning the bzr configuration
> > instructions and checking everything is sufficiently good to migrate
> > across to. We propose to cut across on Tuesday; if there are no
> > objections, then at 1200 UTC Tuesday, squid3 CVS will be frozen, and
> > the import recreated.
> For me it is fine. But just one request: please publish somewhere,
> beforehand, the binary package availability and all the instructions
> and tips needed to use bzr on all the platforms used for development:
> Linux, Windows, *BSD, Solaris, Irix, Tru64, AIX, etc.

That's really bzr standard docs - the already-linked-to Downloads page
should cover installation.

-Rob
bzr cutover timetable
Henrik and I have been fine tuning the bzr configuration instructions and checking everything is sufficiently good to migrate across to. We propose to cut across on Tuesday; if there are no objections, then at 1200 UTC Tuesday, squid3 CVS will be frozen, and the import recreated. The import takes a few hours but should be completed late tuesday/wednesday at the latest.

-Rob
squid meetup dinner/drinks
Hi,

The squid meetup has been going well :). We're going to head to
Wagamama http://www.wagamama.com/locations_map.php?locationid=127
around 5pm. We'll head off to a local pub after that around 6:30 or so.

-Rob
PONG(Re: Test!)
On Fri, 2008-02-29 at 10:54 +0000, Tony Dodd wrote:
> This is a test e-mail, as I'm fairly sure that something is broken with
> the exchange mail server that I'm now being made to use. Thus far, I
> haven't received any squid-dev e-mails since the change-over, so this
> is just a test to make sure it's working. Please ignore!
> Thanks
Re: squid3 future directory structure
On Tue, 2008-02-26 at 20:37 -0700, Alex Rousskov wrote:
> On Wed, 2008-02-27 at 11:33 +1100, Robert Collins wrote:
> > On Tue, 2008-02-26 at 16:50 -0700, Alex Rousskov wrote:
> > > That #include line would be illegal in cleaned up sources, of
> > > course. You are supposed to say include1/Foo.h or equivalent.
> > > That's why duplicating group in file names may become unnecessary:
> > > Group/GroupFoo.h becomes Group/Foo.h
> > I think for graceful failures it's better to eliminate failure modes
> > we can predict when it is cheap to do so. Using consistently
> > lowercase file names on disk will do that.
> So you would recommend something like http/request_method.cc and
> acl/request_header_strategy.h, right?

Yes.

-Rob
Re: [squid-users] Squid meetup in london
On Wed, 2008-02-27 at 15:20 +1100, Mark Nottingham wrote:
> Is this going to be a semi-regular event? I'd be interested in
> participating in the future, but need more lead time to arrange
> travel...

As already noted this is really an ad-hoc meetup. I'm in London for a
different event, Kinkie suggested a meetup in Milan, then couldn't, and
- well, it just came together.

I think a more targeted 'lets have a sprint' style event would be good
to arrange in the future - by which I mean something that is 4-5 days
long and has an agenda of stuff we'd like to get done: both actual
coding, and discussions, and so forth.

-Rob
Squid meetup in london
I'm very happy to announce that Canonical are hosting a squid meetup in
London this coming Saturday and Sunday, the 1st and 2nd of March. Any
*developers* (in the broad sense - folk doing
coding/testing/documenting/community support/) are very welcome to
attend.

As it is a weekend and a security-controlled office building, you need
to contact me to arrange to come - just rocking up won't work :). We'll
be there all Saturday and Sunday through to mid-afternoon. The Canonical
London office is in Millbank Tower
http://en.wikipedia.org/wiki/Millbank_Tower. So if you want to come by,
please drop me a mail.

For folk wanting a purely social meetup, I'm going to pick a reasonable
place to meet for food and (optionally) alcohol on Saturday evening -
I'll post details here mid-Friday.

-Rob
Re: squid3 future directory structure
On Fri, 2008-02-22 at 20:11 +0100, Guido Serassio wrote:
> Hi Alex,
> At 19:29 22/02/2008, Alex Rousskov wrote:
> > On Fri, 2008-02-22 at 19:23 +0100, Guido Serassio wrote:
> > > Changing the case of files/dirs will not be a problem if we avoid
> > > upper/lower case collisions.
> > This only applies to files in the same directory, right?
> Sure.
> > AFAICT, filenames from different directories may still collide and
> > even have identical case.
> Yes, absolutely no problems here.

Actually there is a problem:

  touch include1/Foo.h
  touch include2/foo.h
  g++ -I include1 -I include2 foo.cc

foo.cc:
  #include <foo.h>

... *boom*

-Rob
Re: eCAP: expose Squid or link with eCAP lib?
On Sat, 2008-02-23 at 15:03 +0100, Henrik Nordström wrote:
> On Fri, 2008-02-15 at 09:07 +1100, Robert Collins wrote:
> > It's more work both at code and at runtime. The only thing it really
> > allows that 1) doesn't is non-GPL eCAP modules.
> I don't see how 2) can allow non-GPL eCAP modules. We can't add a
> linking exemption to the license even if there is a well defined API.
> The only way to link non-GPL code to Squid is by not linking it into
> the runtime binary, statically or dynamically. non-GPL code needs to
> run in its own process image, and for that we have ICAP already.

Split out runtime linking and compiling. Runtime linking is done on the
user's machine; no distribution is taking place and thus the GPL is not
relevant. Compiling uses header information (and on some platforms
information about the library, e.g. dll symbol name mappings) to cause
the generated binary to be one that will link at runtime correctly. This
can (but doesn't always) cause a copy of information to be placed into
the binary that was compiled. Distributing that binary requires a
copying licence for the copied information - which is where the GPL
kicks in.

Now, when you have a well defined interface (e.g. readline), it's
entirely possible to have *two* implementations that are binary
compatible. Implementation one: GPL. Feature complete. Implementation
two: BSD. Feature complete from an interface perspective (you can
compile anything you can compile with the GPL implementation), but its
runtime support is shockingly bad.

And the loophole should now be obvious:
- vendor A writes against the GPL version in private, to test.
- they then create a binary module for platform X by compiling against
  the BSD library.
- the binary module runs against the GPL version.

-Rob
Re: RESEND - URGENT !!! - Cannot cvsmerge on sourceforge
On Tue, 2008-02-19 at 16:22 -0700, Alex Rousskov wrote:
> The problem is not with the script, but with a cvs command that finds a
> lock somewhere in the repository. You can just tell them what *cvs*
> command fails and not mention any scripts. Somebody with enough access
> privileges should be able to go in and manually delete the lock file,
> I guess.

You can actually change the CVS precommit script for a different module
to clear the locks in the module you have a problem with; then revert
that script back to what it should be.

-Rob
Re: RESEND - URGENT !!! - Cannot cvsmerge on sourceforge
On Mon, 2008-02-18 at 15:58 -0700, Alex Rousskov wrote: Will the Squid2 tree be migrated to bzr along with Squid3? I am happy to do so. I'm not sure that we'll be able to get great joining of history - enough surgery has happened on both trees to make them quite separate these days. That's not a reason not to migrate, though. -Rob signature.asc Description: This is a digitally signed message part
Re: HEAD squid3/errors/Armenian ERR_ESI,1.1,1.2 ERR_ICAP_FAILURE,1.1,1.2
On Sat, 2008-02-16 at 18:14 -0700, Alex Rousskov wrote: On Fri, 2008-02-15 at 14:05 +, Arthur Tumanyan wrote: Update of cvs.devel.squid-cache.org:/cvsroot/squid/squid3/errors/Armenian Modified Files: ERR_ESI ERR_ICAP_FAILURE Log Message: Another SourceForge HEAD commit? Such commits create problems for others working with SourceForge CVS tree, right? Can we do something simple/automatic to block these? We could switch to a VCS that doesn't have these problems :) /semi-troll -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: squid3 future directory structure
On Fri, 2008-02-15 at 12:24 +1300, Amos Jeffries wrote: 6) Should directory names use just_small, CamelCase, or CAPS letters? I think CamelCase, like we do for files and classes, with acronyms being upper case, is working. When we can move the converted and re-used legacy files from lowercase-only to CamelCase it will sit better than now. Any chance to convince you and others that HTTPReply is not as readable as HttpReply? Some, but not a lot :-) I'm constantly encountering syntax and typo errors because HTTP is not always spelt upper-case in class names. I'd prefer lowercase always for directories. Filenames and CaSe are always headaches on windows :) -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: meetup occurring in London
Canonical has confirmed - they are happy to host 4-5 people at the office on the 1st and 2nd. The only caveat is that the air conditioning is off on the weekend. On Wed, 2008-02-13 at 09:27 -0700, Alex Rousskov wrote: On Wed, 2008-02-13 at 15:52 +1100, Robert Collins wrote: If this was a 'lets make a 1 year plan to make squid fast' big strategy meeting, then it would be silly to do it without you, and even more silly to do it without you in person. It's not, and I don't think anyone has represented it as being that. It would also be silly to just sit in a room, working on bugs. Attendees can do that without meeting each other face-to-face. If folks get together, I think they should talk about the Big Picture. They just should not make any final decisions. There are many sorts of productive activities. Large scale triage is often easier in person; pair programming can get a lot done that would take many round trips when not face to face; design discussions can be had - totally true. I agree that Adrian's couple of years of fiddling with stuff should be included via conferencing and/or some kind of summary document. The latter would be very useful at any rate, but I doubt Adrian has time to prepare a nice summary. Adrian - this is a good idea; can you do some sort of summary document - basically what you'd say the biggest issues are? :) -Rob signature.asc Description: This is a digitally signed message part
Re: meetup occurring in London
On Wed, 2008-02-13 at 12:53 +0900, Adrian Chadd wrote: On Wed, Feb 13, 2008, Robert Collins wrote: Definitely, especially with the launch of yahoo live!; afaik, there's a private online ability with this, which should allow easy interaction with sound and video, especially with the hookup of a projector at the meeting end. We've tried this sort of thing numerous times at ubuntu development sessions; my experience to date is that they are time consuming to set up and significantly reduce the bandwidth available between the folk physically there (it's ~300ms RTT to talk to someone in Perth from London). I'd like to try. I think it's a bit silly to hold a design and brainstorming session without the squid core members able to have some sort of presence. Speaking purely for myself, I've spent the last couple of years fiddling with this stuff, researching how other applications are written and getting my head around all the various issues preventing Squid from being fast; it'd be a bit silly for us not to include that. I presume you mean 'without all the' :). This isn't intended as some sort of major design jag; it started with my saying 'another trip in March' and kinkie saying, 'lets have a meetup'. It's pretty common in other projects to have little meet-ups of the folk that *can* get together as time permits. If this was a 'lets make a 1 year plan to make squid fast' big strategy meeting, then it would be silly to do it without you, and even more silly to do it without you in person. It's not, and I don't think anyone has represented it as being that. -Rob signature.asc Description: This is a digitally signed message part
Re: meetup occurring in London
On Wed, 2008-02-13 at 01:57 +, Tony Dodd wrote: Adrian Chadd wrote: (Only replying to squid-dev) Since some of us aren't really able to travel half way around the world for the weekend, could some sort of video and/or voice hookup be organised for the devel session? Definitely, especially with the launch of yahoo live!; afaik, there's a private online ability with this, which should allow easy interaction with sound and video, especially with the hookup of a projector at the meeting end. We've tried this sort of thing numerous times at ubuntu development sessions; my experience to date is that they are time consuming to set up and significantly reduce the bandwidth available between the folk physically there (it's ~300ms RTT to talk to someone in Perth from London). -Rob signature.asc Description: This is a digitally signed message part
meetup occurring in London
Some of the squid developers are going to get together in London during the weekend of the 1st and 2nd of March. We're still putting together final details and so on. We'll be talking about many of the things currently under development. If you are hacking on squid, or interested in doing so, please let me know - I'm sure we'd love to have you drop by. If you're not hacking on squid, but use it or just love the project and want to get together to say hi or chat, I'm positive we'll have at least one evening spent in a pleasant pub/dinner. For now please drop me an email - but when we have some more details figured out I'll throw them up on the wiki. -Rob signature.asc Description: This is a digitally signed message part
Re: a simple formatter
On Fri, 2008-02-08 at 23:26 +0200, Tsantilas Christos wrote: Alex Rousskov wrote: Changes in the current code are only bad for outstanding patches and forgotten branches, right? If everybody applies the same formatting to all their branches and HEAD, we should not have many conflicts, right? I don't know; possibly it is OK. Or just format HEAD, merge HEAD to the branches (this will take some hours but...) and then apply formatting to the branch. After the first time all will be OK. With CVS you will be much better off formatting all branches simultaneously, then merging. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: X-Vary-Options patch
On Fri, 2008-02-08 at 16:26 +1100, Tim Starling wrote: The added features of the patch are conditional, and are enabled by the configure option --enable-vary-options. Unless there is non-trivial processing required for regular Vary headers with this enabled, I don't think it needs to be optional. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: two xasserts in squid3
On Tue, 2008-02-12 at 11:10 +1300, Amos Jeffries wrote: On Mon, 2008-02-11 at 09:33 +1100, Robert Collins wrote: One of the things I'd most love to see is the modularisation completed - complete deletion of protos.h, structs.h, typedefs.h. Yes, of course. That would be the focus of the cleanup that Amos is volunteering to do :-) How's this for an action plan, Robert?

* Obsolete typedefs.h (underway)
  - remove all unneeded typedefs
  - move all needed typedefs to their appropriate headers
  - fix compile errors
* Add automatic testing for header dependency
  - script to perform a universal include unit-test for .h files
  - link to automatic unit-testing in each directory
  - fix the compile errors!
* Obsolete protos.h
  - move all protos to their appropriate header files
  - add includes for headers where needed
* Obsolete structs.h
  - move all structs to their appropriate header files
  - move modular configuration into *Config.h files (discussion on exactly what the modules are)

I like the automatic include unit test. I found while slowly working through this that circular structure dependencies made it quite tricky; I think you will find you need a '* Break dependencies' step to make it really clean, but that can be longer term if needed. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: squid-3.HEAD IPAddress leak
On Fri, 2008-02-08 at 12:14 +0900, Adrian Chadd wrote: Together they make a pretty tree. But every used piece is essentially another new, memset, free. Ah, and here you will have problems. The members of that struct should probably be malloc, free, and not new/delete. You're using new/delete which -should- map to a default new operator and head off to the malloc libraries, but -squid's- idea of the malloc interface could differ from the -library's- idea of the malloc interface. You can tell C++ to use malloc/free if needed. Am I reading right that the OS calls free itself? Usually /either/ clients free and allocate, or libraries free and allocate - never a mix. (Because even in C the malloc libraries can be partially overridden.) If squid is doing the allocation and free, the following: "So you should probably drop the new/delete'ing of the addrinfo stuff and replace it with malloc/free." is irrelevant; but if it's shared then we should definitely not use C++, because operator delete can't be called. (And don't use talloc, or anything else, either.) You're also memset()'ing the addrinfo struct whether you allocated it or not, which may be double-memset'ing the thing, and if someone passed in an addrinfo it may have structure members which have now been leaked. Tighter boundaries may help this. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: two xasserts in squid3
On Mon, 2008-02-11 at 10:15 +1300, Amos Jeffries wrote: What I'm seeing with the auto-docs work is that a lot of header files include squid.h or protos.h for one or two simple things. That file brings with it a host of type-dependencies, directly or indirectly that clutter up the whole header include process for compiling. One of the things I'd most love to see is the modularisation completed - complete deletion of protos.h, structs.h, typedefs.h. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: two xasserts in squid3
On Mon, 2008-02-11 at 11:49 +0900, Adrian Chadd wrote: On Mon, Feb 11, 2008, Amos Jeffries wrote: I end up with -one- exit location in the function and I don't have to bum around using extra functions or massively nested conditional constructs to achieve it. In fact, I've used goto in a few places to tidy up the code.. If you need to use it to clean up a single function, it's an obvious sign that the function is too complicated. In 10 years of writing complicated code I have yet to see a single place where it is actually required. Required? No. Useful? Yes. Too often I've seen people push the cleanup stuff into another function, thinking that bit of refactoring was a good idea. Thing is, if they didn't document that the function is private, just for another function, some other coder will come along later, see the function, and think it's fine for them to use (solving their immediate problem). This leads to badness in the future. It took me quite a while to get over the 'goto is evil, never ever use it' koolaid. But then, in C++, you should be using exceptions, not weird flow control tricks. :) I quite like the very structured way of using goto that the linux kernel encourages. It's certainly cleaner the way they do it than reused helper functions - you need a bazjillion little functions to clean up nested resources properly, and you end up with two sets: a tonne of little 'call X cleanup, then chain to Y for the remaining things to clean up' functions, and secondly all the actual X cleanup functions. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: two xasserts in squid3
On Mon, 2008-02-11 at 16:03 +1300, Amos Jeffries wrote: typedef has its place in C, particularly for portability issues, but that is vastly reduced in C++. I've only seen the event/callback function-pointers as a required use for it nowadays. That's only because no one has shown me a better way to do function pointers than the way squid currently does them. I've seen a lot of tasteful typedefs in C++ to provide handles to common template usage - in the stdlib IIRC :). -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: squid-3.HEAD IPAddress leak
On Thu, 2008-02-07 at 11:56 +0900, Adrian Chadd wrote: Argh, this temporary malloc/free pair is peppered throughout the codebase! Grr. I've removed that hack, and things work fine for me. Ubuntu 7.01 here, x86_64. There's a useful C++ thing called 'placement new' - it's used when you have classes that you want to use on the stack, but still have tight abstraction boundaries. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: squid-3.HEAD IPAddress leak
On Thu, 2008-02-07 at 12:21 +0900, Adrian Chadd wrote: Well, I haven't removed the temporary malloc/free pair, whatever it's called; I've just removed Amos' workaround in src/comm.cc so it doesn't leak on my system whilst I profile. Still, this is one of those 'death of a thousand cuts' methods of killing performance.. Right, I haven't seen the commit; care to mail the diff? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: diskd and unlink
On Wed, 2008-01-30 at 15:54 +0900, Adrian Chadd wrote: Hi everyone, I'd like to make diskd use the unlink daemon instead of handling unlinks itself, in an attempt to lessen the amount of work being sent to the queue and thus (hopefully) mitigate the number of times the disk callback is being recursively called. If no one objects I'll make this change to 2.HEAD, 2.7 and 2.6; I'll leave 3 to someone who knows more about the code post-code-shuffling. Doesn't that cause a performance hit on file systems where unlink() blocks? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: C++ errors building 3.HEAD on Windows
On Wed, 2008-01-23 at 10:41 +1300, Amos Jeffries wrote: Hi Alex, At 18:28 21/01/2008, Alex Rousskov wrote: Please let me know if both of you get stuck with any specific error and I will try to help. FWIW, I doubt many Squid functions should be declared with C binding so we may just need to modify some old declarations to remove extern C. Just this one left: depbase=`echo ACLARP.o | sed 's|[^/]*$|.deps/|;s|\.o$||'`; \ if g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"c:/mgw-3.0/etc/squid.conf\" -I. -I. -I../include -I. -I. -I../include -I../include -I../lib/libTrie/include -I/usr/include/libxml2 -Werror -Wall -Wpointer-arith -Wwrite-strings -Wcomments -D_FILE_OFFSET_BITS=64 -g -O2 -mthreads -MT ACLARP.o -MD -MP -MF $depbase.Tpo -c -o ACLARP.o ACLARP.cc; \ then mv -f $depbase.Tpo $depbase.Po; else rm -f $depbase.Tpo; exit 1; fi ACLARP.cc: In function `int aclMatchArp(SplayNode<acl_arp_data *> **, IPAddress)': ACLARP.cc:571: error: no matching function for call to `in_addr::in_addr(DWORD)' d:/mingw/bin/../lib/gcc/mingw32/3.4.2/../../../../include/winsock2.h:223: note: candidates are: in_addr::in_addr() d:/mingw/bin/../lib/gcc/mingw32/3.4.2/../../../../include/winsock2.h:223: note: in_addr::in_addr(const in_addr&) make[3]: *** [ACLARP.o] Error 1 I'm not able to do the correct type cast; can you please help me? oh b*%%*r. This might be why the ACLARP originally used raw bytes instead of proper types. Are you able to add a cast via DWORD->BYTE[4]->in_addr or DWORD->BYTE[4]->int->in_addr, or even void* if those fail? I think we need to check the byte-order is not touched. It should be arriving as a network-order DWORD. That's likely a reason; I migrated the code to C++ without trying to improve its specific style, just to fit it into the dynamic registered acl type stuff. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: RESEND - URGENT !!! - Static copy of www.squid-cache.org is growing
On Mon, 2008-01-21 at 08:42 -0700, Alex Rousskov wrote: On Sun, 2008-01-20 at 18:58 +0100, Guido Serassio wrote: I'm resending this message again, because nobody answered my first post. I cannot fix this myself because I don't have the needed write permission and I don't know the location of the script for the daily build of the web site static copy. But if this is not fixed soon, I will shut down the www.it.squid-cache.org mirror because my disks are nearly full. Unfortunately, I do not know anything about the scripts generating static copies, but can you simply delete the old/stale directories on your box, or make them non-writable, as a short-term fix? or -x them from rsync? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: Become a developer.. (sf.net ui: valexey_eykon)
On Fri, 2008-01-18 at 16:51 +0300, Alexey Veselovsky wrote: valexey_eykon I've added you to the squid project with CVS access. Cheers, Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: so, 3.x VCS
On Wed, 2008-01-16 at 09:46 -0700, Alex Rousskov wrote: On Tue, 2008-01-15 at 08:24 +1100, Robert Collins wrote: So far I've heard of 3 devs trying the bzr tree, main concern has been 'high' memory use (80MB) during initial pull, and that 1.0 is not all that wide spread in stable distribution releases. I'd like to get feedback (or an explicit 'its fine') from hno, alex and duane specifically before I suggest that we have consensus and are ready to set a date for making CVS readonly and doing a final conversion. IMHO, we should wait until bzr 1.x is widely spread and, hence, better tested and documented. With folks pushing for Squid 3.1 release soon and a few large branches not integrated yet, I would rather not spend time on learning a new VCS and struggling with relatively immature software. Thank you. Like all products, bzr moves and advances; I would not classify it as relatively immature because of its recently reaching '1.0' - it got labelled 1.0 to reflect its maturity rather than immaturity. I can appreciate the time argument about learning a new VCS, but there will always be large branches pending integration; there have been large branches outstanding the entire time I've been involved with squid, so I don't think that reason is a good one to delay; in fact, the sooner one converts to a VCS that does good merging, the easier integration of branches becomes. If there are specific things about the migration proposal, or capabilities that are a problem - I will happily let this VCS discussion rest until they are resolved. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: so, 3.x VCS
On Wed, 2008-01-16 at 16:50 -0700, Alex Rousskov wrote: It is not about the version number, it is about being widely available and used for a while. The first mature version of a VCS tool that has not been a part of major distributions for at _least_ a few months does not qualify, IMHO. We do not need to be on the cutting edge when it comes to version control, at least not right now. 1.0 is in: debian fedora suse ubuntu netbsd freebsd but that's not really the point; 1.0 was a relabelling of an already mature tool to reflect that maturity. And 0.92 (the format of the repositories I created) has been in the major distributions for 'a few months'. I can think of two reasons where being out there and available matters to us. One is ease of access for users of our VCS (and the above list should help ease concerns there, though bzr will run from source with no compilation trivially). The other is that you expect bzr to change - in which case you will be asking that we wait months again for that version, etc etc. 1.0 of bzr is really not bleeding edge in my assessment. I guess the question is whether it is in the eyes of the other devs. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
so, 3.x VCS
So far I've heard of 3 devs trying the bzr tree; the main concerns have been 'high' memory use (80MB) during the initial pull, and that 1.0 is not all that widespread in stable distribution releases. I'd like to get feedback (or an explicit 'its fine') from hno, alex and duane specifically before I suggest that we have consensus and are ready to set a date for making CVS readonly and doing a final conversion. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: async-calls squid3/src comm.cc,1.81.4.16,1.81.4.17
On Fri, 2008-01-11 at 16:06 +0100, Henrik Nordström wrote: On Wed, 2008-01-09 at 21:13 +1300, Amos Jeffries wrote: That this code is currently depending on that implicit guarantee when it should not be. This change is the perfect time to drop that implicit dependency and make it clean (bug free!). Having one precursor call schedule its successor is the cleanest and fastest way to do that. Run-time speed optimizations can wait, but this is a stability issue. Even the day when we are fully SMP, with support for 80 CPU cores, the call order of calls scheduled from the same job will be preserved, unless they are explicitly made as callouts for parallel processing, which I don't see will be needed. To scale on SMP it's very important you keep the processing of the same job on the same CPU. Each time data needs to cross from one CPU to the other is very expensive, and each time you need to synchronize access for all CPUs is really bad.. Ideally there will be one worker thread per CPU core, each with their own file descriptors, async queues, comm loop etc. Possibly even memory management. It's still too early to say if this will be done using threads or processes. But what we certainly will not see is everything using a global async-call queue managed by all CPUs. If you design that way then you could just as well not go the SMP path, and you'll probably get better performance.. That is one of the key points of the current code in head; because the dispatch and queue mechanisms are not based on globals, you can just instantiate a new loop where that is appropriate. As long as the new stuff can be used by passing code that needs to do async calls a handle to the loop (or a dispatcher, or whatever), then this is still directly feasible. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: bzr VCS feedback
On Tue, 2008-01-08 at 18:18 +1300, Amos Jeffries wrote: Working my way through the wiki instructions with bzr 1.0, I find a problem with the bind and update commands: # the environ setup differing from wiki... export TRUNKURL=/home/opensrc/bzr/ ## seems to work, or at least produces no messages. bzr bind $TRUNKURL ### fails with the following error bzr update bzr: ERROR: No WorkingTree exists for file:///home/opensrc/bzr/trunk/.bzr/checkout/. https://launchpad.net/bugs/181367 A manual look-see finds no 'checkout' subdir created by the bind call (I assume it was meant to be created by the bind call), which ran without any output. It's not meant to be created by the bind call; it's a bug in update. To work around it, you can do 'bzr checkout .' in the trunk directory. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: bzr VCS feedback
On Tue, 2008-01-08 at 12:20 -0200, Gonzalo Arana wrote: On Jan 7, 2008 7:15 PM, Robert Collins [EMAIL PROTECTED] wrote: On Mon, 2008-01-07 at 16:50 -0200, Gonzalo Arana wrote: Tsantilas, ... debian (and most linux/BSD distros I guess) come with a usable CVS version. Newer bzr is in backports.org, bsd ports, fedora and so on. Using backports always ended in some dpkg brokenness. If a debian user really wants to use bzr, installing from source I guess is the only long-term option. bzr by default installs in /usr, while /usr/local would be a better choice I suppose. I guess this is a non-stopper anyway. Backports should work just fine; they are maintained by DDs. If they don't, file bugs :). You can run bzr out of the source tree - just add the source tree to your path. They are doing completely different things. CVS is streaming the final texts from the remote server and is not processing any of the history data. bzr is cloning the deep history of the repository. 80M is more than I would have expected; I'd call that a bug. It won't ever be '4K' though because python takes more memory than that. I understand the cause of this memory usage, but that does not make the memory usage less inconvenient. I agree. I believe a reasonable memory usage would be accepted, but 80M to fetch the history data makes me think that bzr may not be as tested as people may think. Again, that's just my opinion. If squid-3.1 is shifted to bzr, I'll manage. Well, 80MB to grab a 60MB compressed archive representing tens of thousands of file versions; IIRC the peak memory usage of 1.0 with http is a minimum of the size of the remote http file, due to how the http layer buffers data; this is either fixed in 1.1, or in progress. 
$ ps axuw | grep bzr garana 27912 7.1 2.9 17872 15096 pts/16 S+ 16:03 0:02 /usr/bin/python /usr/bin/bzr bind http://www.squid-cache.org/bzr/squid3/trunk Since I am not a core squid developer, I know that I don't have a vote on this, but I have to say that I am quite disappointed with bzr resource usage. Do you mind me asking how much RAM your workstation has? Not at all! My office workstation has 512M. I wouldn't like to have to shut down firefox, apache, mysql, gkrellm, gaim and so on to do a 'bzr branch'. I don't need to do that when I do 'cvs checkout'. I don't think you should need to do that for bzr either; I'm pretty confident that local operations - making new branches, performing a checkout --lightweight, etc. - will be lower in memory use. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: bzr VCS feedback
On Mon, 2008-01-07 at 16:50 -0200, Gonzalo Arana wrote: Tsantilas, Thank you very much for the pointer. I was able to 'branch' squid3-bzr. About bzr, here are two things that I've noticed: * debian etch (stable) has an old version of bzr (0.11-1.1), which does not recognize squid-bzr repository: $ bzr branch http://www.squid-cache.org/bzr/squid3/trunk bzr: ERROR: Unknown branch format: 'Bazaar Branch Format 6 (bzr 0.15)\n' debian (and most linux/BSD distros I guess) come with a usable CVS version. Newer bzr is in backports.org, bsd ports, fedora and so on. * bzr 1.0 seems to use a lot of memory for checkout at least: $ ps axuw | grep bzr garana 25130 8.9 15.7 87256 80276 pts/16 S+ 10:46 1:06 /usr/bin/python /usr/bin/bzr branch http://www.squid-cache.org/bzr/squid3/trunk /home/garana/src/squid--bzr/trunk $ ps axuw | grep cvs garana 28048 16.4 0.3 4228 1860 pts/18 S+ 16:11 0:02 cvs co squid CVS used only 4K, while bzr used more than 80M (it devastated my workstation). They are doing completely different things. CVS is streaming the final texts from the remote server and is not processing any of the history data. bzr is cloning the deep history of the repository. 80M is more than I would have expected; I'd call that a bug. It won't ever be '4K' though because python takes more memory than that. $ ps axuw | grep bzr garana 27912 7.1 2.9 17872 15096 pts/16 S+ 16:03 0:02 /usr/bin/python /usr/bin/bzr bind http://www.squid-cache.org/bzr/squid3/trunk Since I am not a core squid developer, I know that I don't have a vote on this, but I have to say that I am quite disappointed with bzr resource usage. Do you mind me asking how much RAM your workstation has? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: 'include' directive in squid-2.HEAD
On Tue, 2008-01-08 at 16:33 +1300, Amos Jeffries wrote: Adrian Chadd wrote: On Tue, Jan 08, 2008, Mark Nottingham wrote: +1, and in 2.7 ASAP; I've already implemented this in my own hacky way (external to squid); would be nice to have 'real' includes. Well, test it out, make sure broken things like ACL lines give the right source file and line number, and let me know if it works out. 2.7 is meant to be a stable release, and although I'm hesitant to suggest including this in 2.7, I think it's a good idea and, if it does the right thing, will only benefit users. I looked at this for 3.1, but the config parser there confused me with its multiple meanings of 'file'. I'd definitely like to see how you are doing it in 2 for a side-port. I implemented one about 5 years back. Now where did I put it... it's not in my local arch repo; it may be on devel. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: bzr VCS feedback
On Fri, 2008-01-04 at 18:44 +0100, ecasbas wrote: Robert Collins wrote: has anyone tried the bzr copy of the squid3 repository ? Any feedback/questions/concerns? #bzr branch $TRUNKURL squid-repo/trunk it's transferring, but extremely slowly. Kinkie reported much the same thing for http - low network use and low cpu. My current working theory is the presence of a broken proxy in-line with the download, combined with a complex range request. I don't know that this needs to be a showstopper, as it only affects first-time downloads and is not an architectural problem. Kinkie - can you file a bug report for this? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: VCS for squid3 development?
On Sat, 2007-12-29 at 02:06 +0100, Henrik Nordström wrote: On Fri, 2007-12-28 at 10:42 +1100, Robert Collins wrote: On Fri, 2007-12-28 at 08:25 +0900, Adrian Chadd wrote: I've been following the VCS debate a little. Guys, I'm not an enormous fan of CVS, but what we have works *badly*. 150 emails when autoconf changes are made, for instance. That's not really the fault of CVS though.. more a broken CVS commit mailer script.. but yes, it's partly due to how CVS works. Indeed :). -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
bzr VCS feedback
has anyone tried the bzr copy of the squid3 repository ? Any feedback/questions/concerns? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: VCS for squid3 development?
On Sat, 2007-12-29 at 02:12 +0100, Henrik Nordström wrote: lör 2007-12-29 klockan 05:24 +1100 skrev Robert Collins: Other projects using distributed VCS tools often do not have a dedicated development area, preferring to let individuals publish their own branches; I had been assuming something like that - but it's a good question to raise and discuss. We need both. Our turnover on contributions is rather long, often with multiple authors over some years, and not having them collected in a central repository means a very high risk of the contribution getting lost. Also many of the contributors do not really have a suitable place where they can host public access to their repository. But it's a quite separate problem from the migration of the main repository. Some options: Existing hosting site Setup own hosting site Use a patch tracker rather than branch hosting Existing hosting sites for bzr: - launchpad + no charge for use, provides web viewer etc etc etc - no 'repository' support - so each branch needs a full upload of the history when creating it. (This will be getting addressed during 2008) - no custom website facility like sourceforge (but we could query lp and do a dynamic page on squid-cache.org easily.
And the directory facility we use sourceforge's custom website stuff for is built in to launchpad anyhow) Setup own hosting site - more time and effort from our volunteer pool - need to do account management stuff which will raise the bar for adding contributors + complete control + could do a shared repository to make initial uploads of branches fast Use a patch tracker See for example http://bundlebuggy.aaronbentley.com/ + patches are very small - just the size of the aggregate work, no VCS metadata is lost - they can be merged from and pulled from just like branches + self-manages the list of 'active patches' + good for review - may be more tricky to explain than just using branches I'd suggest using a patch tracker, with launchpad as a hosting site for folk that want or need external branch hosting. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
[MERGE] Quick cut at VCS script changeover.
This is most of a changeover of scripts for squid 3 trunk to use bzr; the missing bit appears to need bzr 1.1 (to do 'rdiff', basically) or thereabouts; I'll look into that in a bit. I'm not sure whether the unconverted cvs calls will actually trigger with our current setup or not. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt.

# Bazaar merge directive format 2 (Bazaar 0.90)
# revision_id: [EMAIL PROTECTED]
#   e5kae9czem63ux4z
# target_branch: file:///home/robertc/archives/robertc%40squid-\
#   cache.org--squid/squid/3.0/cvsps-HEAD/
# testament_sha1: ccb5a5b25492531abe14f982c2dd791f98b573bf
# timestamp: 2007-12-30 11:23:27 +1100
# base_revision_id: cvs-1:hno-20071220081046-d59dmpbuvvyioekf
#
# Begin patch
=== modified file 'configure.in'
--- configure.in 2007-12-19 09:36:26 +
+++ configure.in 2007-12-30 00:22:22 +
@@ -5,7 +5,7 @@
 dnl
 dnl
 dnl
-AC_INIT(Squid Web Proxy, 3.HEAD-CVS, http://www.squid-cache.org/bugs/, squid)
+AC_INIT(Squid Web Proxy, 3.HEAD-BZR, http://www.squid-cache.org/bugs/, squid)
 AC_PREREQ(2.52)
 AM_CONFIG_HEADER(include/autoconf.h)
 AC_CONFIG_AUX_DIR(cfgaux)

=== modified file 'mkrelease.sh'
--- mkrelease.sh 2007-08-31 02:31:41 +
+++ mkrelease.sh 2007-12-30 00:22:22 +
@@ -3,8 +3,12 @@
 echo Usage: $0 revision [destination]
 exit 1
 fi
+# VCS details
+module=squid3
+BZRROOT=${BZRROOT:-/bzr}
+
+# infer tags from command line details
 package=squid
-module=squid3
 rev=`echo $1 | sed -e s/^${package}-//`
 name=${package}-${rev}
 tag=`echo ${name} | tr a-z.- A-Z__`
@@ -24,32 +28,29 @@
 tmpdir=${TMPDIR:-${PWD}}/${name}-mkrelease
-CVSROOT=${CVSROOT:-/server/cvs-server/squid}
-export CVSROOT
-
 rm -rf $name.tar.gz $tmpdir
 trap rm -rf $tmpdir 0
-cvs -Q export -d $tmpdir -r $tag $module
+bzr export $tmpdir $BZRROOT/$module/tags/$tag || exit 1
 if [ ! -f $tmpdir/configure ]; then
 echo ERROR! Tag $tag not found in $module
 fi
 cd $tmpdir
-eval `grep ^ *VERSION= configure | sed -e 's/-CVS//'`
+eval `grep ^ *VERSION= configure | sed -e 's/-BZR//'`
 eval `grep ^ *PACKAGE= configure`
 if [ ${name} != ${PACKAGE}-${VERSION} ]; then
-	echo ERROR! The version numbers does not match!
+	echo ERROR! The tag and configure version numbers do not match!
 	echo ${name} != ${PACKAGE}-${VERSION}
 	exit 1
 fi
 RELEASE=`echo $VERSION | cut -d. -f1,2 | cut -d- -f1`
 ed -s configure.in <<EOS
-g/${VERSION}-CVS/ s//${VERSION}/
+g/${VERSION}-BZR/ s//${VERSION}/
 w
 EOS
 ed -s configure <<EOS
-g/${VERSION}-CVS/ s//${VERSION}/
+g/${VERSION}-BZR/ s//${VERSION}/
 w
 EOS
 ed -s include/version.h <<EOS

=== modified file 'mksnapshot.sh'
--- mksnapshot.sh 2007-09-20 03:29:13 +
+++ mksnapshot.sh 2007-12-30 00:22:22 +
@@ -1,36 +1,46 @@
 #!/bin/sh -e
+
 if [ $# -gt 1 ]; then
 echo Usage: $0 [branch]
+echo Where [branch] is the path under /bzr/ to the branch to snapshot.
 exit 1
 fi
+# VCS details
 module=squid3
-tag=${1:-HEAD}
+BZRROOT=${BZRROOT:-/bzr}
+
+# generate a tarball name from the branch ($1) note that trunk is at
+# /bzr/trunk, but we call it HEAD for consistency with CVS (squid 2.x), and
+# branches are in /bzr/branches/ but we don't want 'branches/' in the tarball
+# name so we strip that.
+tag=HEAD
+branchpath=${1:-trunk}
+if [ trunk != $branchpath ]; then
+tag=`echo $branchpath | sed -e s/^branches\///`
+fi
 startdir=$PWD
 date=`env TZ=GMT date +%Y%m%d`
 tmpdir=${TMPDIR:-${PWD}}/${module}-${tag}-mksnapshot
-CVSROOT=${CVSROOT:-/server/cvs-server/squid}
-export CVSROOT
-
 rm -rf $tmpdir
 trap rm -rf $tmpdir 0
 rm -f ${tag}.out
-cvs -Q export -d $tmpdir -r $tag $module
+bzr export $tmpdir $BZRROOT/$branchpath || exit 1
 if [ ! -f $tmpdir/configure ]; then
 echo ERROR! Tag $tag not found in $module
 fi
 cd $tmpdir
-eval `grep ^ *VERSION= configure | sed -e 's/-CVS//'`
+eval `grep ^ *VERSION= configure | sed -e 's/-BZR//'`
 eval `grep ^ *PACKAGE= configure`
 ed -s configure.in <<EOS
-g/${VERSION}-CVS/ s//${VERSION}-${date}/
+g/${VERSION}-BZR/ s//${VERSION}-${date}/
 w
 EOS
 ed -s configure <<EOS
-g/${VERSION}-CVS/ s//${VERSION}-${date}/
+g/${VERSION}-BZR/ s//${VERSION}-${date}/
 w
 EOS
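The name mangling the two updated scripts share is compact enough to sanity-check in isolation. A minimal sketch of the two transforms (the release name and branch path below are hypothetical examples, not values from the patch): deriving an uppercase tag from a release name, and stripping the 'branches/' prefix from a branch path.

```shell
# Derive the tag name from a release tarball name, as mkrelease.sh does:
# lowercase -> uppercase, and both '.' and '-' -> '_'.
name=squid-3.0.STABLE1          # hypothetical release name
tag=$(echo "$name" | tr 'a-z.-' 'A-Z__')
echo "$tag"                     # SQUID_3_0_STABLE1

# Strip the leading 'branches/' from a branch path, as mksnapshot.sh does:
branchpath=branches/SQUID_3_0   # hypothetical branch path
stripped=$(echo "$branchpath" | sed -e 's/^branches\///')
echo "$stripped"                # SQUID_3_0
```

Note that in `tr 'a-z.-' 'A-Z__'` the trailing `-` is literal, so both sets are 28 characters long and the mapping lines up exactly.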
Re: VCS for squid3 development?
On Fri, 2007-12-28 at 18:09 +0100, Guido Serassio wrote: Just a stupid question: there is a native client (not based on Cygwin) for Windows ? Yes. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: VCS for squid3 development?
On Fri, 2007-12-28 at 16:16 +0200, Tsantilas Christos wrote: Hi Robert, Which are the required steps from developers? Will we open our computers one morning and find all our development branches in the new system :-) ? Probably not. What about the sourceforge developers repository? Because bzr is natively distributed we don't need to have a dedicated developers repository like we have with CVS - anyone using bzr will get their own copy of the repository automatically; they can commit there as often as they like, rollback, merge etc. Will it be hosted on the main squid server? Currently I know of one sourceforge-like service that supports bzr - launchpad.net (which is hosted by my employer). We can upload project branches like the async-calls branch there, or to any other location we want. The requirements to publish a branch are minimal: - upload access via one of sftp/ftp/mounting/ssh - http access for others to read it Other projects using distributed VCS tools often do not have a dedicated development area, preferring to let individuals publish their own branches; I had been assuming something like that - but it's a good question to raise and discuss. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: VCS for squid3 development?
On Sat, 2007-12-22 at 04:18 +0100, Henrik Nordstrom wrote: On tor, 2007-12-20 at 22:48 +1100, Robert Collins wrote: What I am interested in is: - If someone puts the effort in to perform a migration of data and scripts (I'm offering to do this during my Christmas break), what's the feeling on moving? Very welcome. Cool. - What does each of you individually need to consider moving to bzr for squid 3 trunk development? [what infrastructure do I need to port or replace, etc etc]. There isn't very much infrastructure that needs porting, beyond getting the new VCS server components and configuration up and running. Do I have sufficient access to do this? I'm not a BSD aficionado anyhow, so perhaps it's best if I say 'install the bzr port', as long as it's at version 1.0 now. (If it's not we can just install by hand). Same for loggerhead, which a trivial apache redirect can feed requests to. I'll get details for that together in a bit. - the snapshot scripts need a little update to use the right tools for checking out the source tree. The ones in the source tree itself? Otherwise point me at them, I'll update them. - the release scripts as well Ditto. - the rest is mainly a handful of recipes on how to do common tasks needed for Squid development. Right. Let's list them: - generate a patch for a commit - get a mirror of the development source to hack on - make a new branch to hack on - commit something which has been developed back to trunk - cherry pick something back to an older release using CVS - cherry pick something back to an older release using bzr. - others ? Most of the other VCS related infrastructure we have is just to work around the shortcomings of CVS. Like the list of patches? We'll want to run a web gui up - I suggest loggerhead. But some script to mirror HEAD and STABLE branches into CVS while keeping the CVS structure of things would be nice in order to continue serving reasonable anoncvs read-only access. Not a requirement however.
I'd *prefer* to set an expectation about a switchover time and then disable the CVS mirrors; the higher fidelity of a VCS that does renames etc makes correct mirroring into CVS really annoying. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
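To make the recipe list above concrete, here is a hedged cheat-sheet of the bzr equivalents as I understand the bzr 1.0-era command set; all revision numbers, URLs, paths and branch names are hypothetical placeholders, and the script only prints the candidate commands rather than running them.

```shell
# Print (not run) candidate bzr equivalents for the CVS recipes above.
# Revision numbers, URLs and directory names are hypothetical placeholders.
recipes=$(cat <<'EOF'
# generate a patch for a commit (here, revision 123):
bzr diff -r 122..123
# get a mirror of the development source to hack on:
bzr branch http://example.org/squid3/trunk squid3-trunk
# make a new branch to hack on:
bzr branch squid3-trunk my-feature
# commit completed work back to trunk:
cd squid3-trunk && bzr merge ../my-feature && bzr commit -m 'Merged my-feature'
# cherry pick revision 123 back to an older release branch:
cd squid3-stable && bzr merge -r 122..123 ../squid3-trunk
EOF
)
echo "$recipes"
```

The `-r X..Y` form on `bzr merge` is what makes the last recipe a cherry pick rather than a full merge: only the changes between the two named revisions are brought across.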
Re: VCS for squid3 development?
On Fri, 2007-12-28 at 08:25 +0900, Adrian Chadd wrote: I've been following the VCS debate a little. Guys, I'm not an enormous fan of CVS, but what we have works *badly*. 150 emails when autoconf changes are made, for instance. ...and I think we have bigger things to work on right now than a VCS migration. We all scratch our own itches. I'm not proposing to change the squid 2.x series VCS, and AIUI you're largely ignoring squid 3.1, so you'll be able to pretty much ignore this. Unless someone can show how we'll improve productivity Offline work, clearer commit emails, easier branch management, etc - the DVCS benefits are well documented now and I'm not about to regurgitate the full argument here. or bring on board more developers A better toolchain - fewer headaches when developing, easier to contribute to work on Squid. then I don't think changing VCSes before we start on a new codebase is a great idea. We've tried starting a new codebase before: abysmal failure. And we have no more resources now than then. Incrementally fixing things is a solid strategy - squid 3.x has fixed one thing (the language limits of C are now gone), and other things are now being fixed (e.g. Alex's async call stuff). I've set up a wiki page http://wiki.squid-cache.org/Squid3VCS to track the stuff I do on this. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: VCS for squid3 development?
On Fri, 2007-12-28 at 10:34 +0900, Adrian Chadd wrote: On Fri, Dec 28, 2007, Henrik Nordström wrote: fre 2007-12-28 klockan 08:25 +0900 skrev Adrian Chadd: I've been following the VCS debate a little. Guys, I'm not an enormous fan of CVS, but what we have works, and I think we have bigger things to work on right now than a VCS migration. Unless someone can show how we'll improve productivity or bring on board more developers to work on Squid then I don't think changing VCSes before we start on a new codebase is a great idea. Moving to bzr will solve some of the problems we have today, mainly with the devel repository, and if Robert wants to push the migration forward I am not going to object. Hey, I'm happy to object, I just won't stand in his way. (I've updated the ports collection and installed the bzr port on squid-cache.org.) Thanks Adrian. It's mostly done now - see the wiki page for details, outstanding TODOs etc. Main things now are for: * current devs to give it a go and say 'yay' or 'nay - needs X'. * a date to be set. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
VCS for squid3 development?
This has come up a few times. In past years we did a trial with Arch, which did not work at all well for various reasons. However, there are now very serious and usable alternatives to CVS. Let's choose one, and move the 3.1 HEAD over to it. We can move 3.0's repo over or not - I don't particularly care. I think we need the following: - Anonymous access [e.g. to 'track HEAD'] - Mirrorable repositories, to separate out trunk on squid-cache.org from devel.squid-cache.org as we currently do (as people seem happy with this setup). - commits to trunk over ssh or a similar secure mechanism - works well with branches, to remove the current cruft we have to deal with on sourceforge with the mirror from trunk. - works well on windows and unix - friendly to automation for build tests etc in the future. - anonymous code browsing facility (viewvc etc) There's a whole lot more that modern VCSs can do; the above list isn't meant to be an upper limit on our choice, rather a lower limit: something that can't fulfil those needs is not acceptable. I think this rules out svn, as the mirror in svn is fugly compared to a distributed VCS. Other VCSs: - darcs - hg - monotone - bzr - git All of these meet the needs listed above. My strong preference is bzr; it's recently reached 1.0 and I'm extremely familiar with it due to having spent some years on it. In a broad sense hg/monotone/bzr/git are very similar, and darcs is radically different. So I'm not particularly fussed about getting into a deep compare-every-little-detail discussion of the systems. Any of them is a vast improvement over CVS. What I am interested in is: - If someone puts the effort in to perform a migration of data and scripts (I'm offering to do this during my Christmas break), what's the feeling on moving? - What does each of you individually need to consider moving to bzr for squid 3 trunk development? [what infrastructure do I need to port or replace, etc etc].
Based on this I'll put forward an actual configuration etc - or identify things that bzr is lacking that stop us moving to it. For the interested: a complete conversion of the CVS history for squid is currently 66MB in bzr 1.0. (bzr is looking at halving the size of storage again in the next few releases.) So the initial clone is a little hefty to get the full history. It's entirely possible to operate exactly as CVS does, with no local copy, but I don't think we particularly need that - 66MB is a small price to pay for a complete local mirror with local log, annotate and so on. The adhoc conversion I ran to see the repository's shape is here: http://www.squid-cache.org/~robertc/bzr/cvsps/squid3/bzr/squid3/branches/HEAD/ you can make a local branch by doing 'bzr branch', e.g. 'bzr branch $URL squid3'. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
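For anyone wanting to try the adhoc conversion mentioned above, a sketch of a first session; it is printed rather than executed here, since it needs network access and bzr installed, and the annotate target is a hypothetical example path.

```shell
# Print (not run) a sketch of a first session with the trial conversion.
firstclone=$(cat <<'EOF'
# one-time ~66MB download of the full history:
bzr branch http://www.squid-cache.org/~robertc/bzr/cvsps/squid3/bzr/squid3/branches/HEAD/ squid3
cd squid3
# history operations are now local - no server round trips:
bzr log
bzr annotate configure.in
EOF
)
echo "$firstclone"
```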
Re: async-calls and Dispatchers
On Thu, 2007-12-20 at 08:58 -0700, Alex Rousskov wrote: Can I ask why? The point of the current design is to allow integrating some quite different sources of events *and* workers. CompletionDispatcher was documented as code to handle events that have completed. It was implemented as an abstract call me once per main loop iteration interface. We already have AsyncEngine class for that (the purpose of AsyncEngine class is not really documented so I am judging by the code). Have you read the ACE patterns for event handling? I'm really just grabbing a glass of water before going back to sleep (it's 3am :)), so this isn't a full reply, but essentially reading the two key patterns they published - reactor and proactor - will give you the design, which was an application of those patterns to our current code base at the time I wanted to clean up the event loop. The difference between engine and dispatcher was intended to decouple 'perform X concurrently' (an engine) from 'handle the result from a given engine' (a dispatcher). Engines were called once per loop to handle pseudo-async engines that are not really asynchronous and thus need a time-slice to operate. Dispatchers are called once per loop to allow for synchronisation where that is appropriate - such as with OverlappedIO. Expected dispatchers that are fundamentally different to each other were: - Standard select/epoll style - Completion ports (win32) style - threaded worker operation gathering - deferred operations within a single thread (which is the single special case I understand async-calls to be optimised for) So I'm curious how you will handle these. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: async-calls and Dispatchers
On Thu, 2007-12-20 at 11:00 -0700, Alex Rousskov wrote: On Fri, 2007-12-21 at 04:25 +1100, Robert Collins wrote: The dispatcher design has a great deal to do with handling results from async operations, because the OS can and will call directly back into user code. I am sorry, but a class with a single pure virtual method cannot have a great deal to do with anything, especially when there is another class that provides the same interface. I presume then that the bit of my email you didn't reply to, which suggested making one of the two interfaces a subclass of the other, did fit your concerns? It is possible that you planned it to be something else, but those plans did not materialize. In reality, we had two nearly identical classes that were used nearly identically. The only difference was the timeout setting. In practice, all those dispatchers were just a complex way to say call me at least once per main loop, and we already have an interface for that. By which you mean the AsyncEngine interface? Yes, the OS will call user code, but comm will receive such calls and propagate them using standard async calls. There is no need to expose the rest of Squid to these low-level details. The rest of squid wasn't exposed - the most it should ever get is a dispatcher instance to hand in when making an async call. And without some sort of object to point to we're back to being unable to build little tools out of the squid code base, because of globals cross-referencing just about everything under the sun. Re-reading the native async calls wiki page, it seems to me that 'all' you need to do is implement your 'TheCalls' as a dispatcher into the current EventLoop. If you are not handling the OS-level concurrency interface, then a Dispatcher should be all you need. Yes, I could have done that. However, since async calls remove the need for all existing classes that were dispatching some events (or pretending to do so), there is no need to build upon that unused interface.
I simply do not see any practical reason for that interface to exist. If you can give me specific examples where the code would benefit from the abstract CompletionDispatcher class (and should not use the remaining AsyncEngine API), I am all ears. Well. If everything is becoming an AsyncEngine then *an* interface which is modular is being preserved; we can always split it back out again when someone wants the flexibility it offers. Please do read the 'proactor' ACE pattern if you haven't. http://www.cs.wustl.edu/~schmidt/PDF/proactor.pdf As for concrete reasons to have dispatcher and engine separate: - code reuse (all engines of a similar type/behaviour can be given the same dispatcher) - scaling (as we start working towards multiple cores in the future, a single engine will correctly dispatch back to the originating thread or process, because the dispatcher instance given is for that thread) - loose coupling - we can experiment with different async call mechanisms without having to do an all-at-once conversion, by just adding a different dispatcher instance for the code that needs it. But looking at the patch I can see that all you've done is move the abstract interface one level further out. So the entire discussion is really moot - I don't care if it's a level further out or not, though I think the extra indirection you are adding via AsyncCallQueue::Instance().fire() is more complexity, not less. I presume you've made the signal handler dispatcher into an engine, or will do so. More importantly, I can't see any unit tests. If we want to end up with a rigorously tested code base - which is both well worth doing and something I thought we as a group were interested in - replacing tested code with untested code is a step backwards. Note also that the current design was in the process of removing global state to make testing and modularisation easier, so please do take care not to reintroduce dependencies on global variables etc.
From that point of view, we are essentially removing things here, not adding new ones, so modularization can only improve. As for unit testing, if it makes the actual code a lot more complex, it is doing more harm than good. I'm not sure if you are saying that you are adding global variables because it's 'simpler' (it's not, and I'm *really* against more globals)... or just raising a hypothetical point about tradeoffs; to which my matching response would be: trying to convert untestable code into testable code will tend to push existing complexities and bad practices into much greater visibility, while at the same time the regions that are well tested become substantially easier to reuse (because tests reuse them). If converted code is more complex to use then indeed there is something wrong and we should look more closely, but that may speak more to how the code was refactored to allow testing than to the fact it's being tested. -Rob -- GPG key available
Re: async-calls and Dispatchers
On Thu, 2007-12-20 at 22:30 -0700, Alex Rousskov wrote: On Fri, 2007-12-21 at 09:00 +1100, Robert Collins wrote: Please do read the 'proactor' ACE pattern if you haven't. http://www.cs.wustl.edu/~schmidt/PDF/proactor.pdf I only had time to skim that paper, but I think what they call Completion Dispatcher is what we call comm. Their Completion Dispatcher class is I/O-aware and converts I/O notifications (possibly supporting various models) into callbacks to the core. It's the callback side of comm + aufs + diskd + aio + disk - all the things that actually gather results from concurrent operations. I see no connection between CompletionDispatcher in Squid and the Completion Dispatcher in that article, except for the name. For two reasons: first, I only got the core in place before I ran out of time to hack at that point; secondly, with the single exception of signals and aufs we don't have anything that is directly OS-proactive - all the interfaces we use are reactive, and as such they are thunked onto the proactor design (as the other website you referenced discusses) by polling the OS-level reactive tasks and generating callbacks to issue for them. Aufs should be converted to hand off directly to a dispatcher, so that calling its engine in each loop becomes simply a check for 'has outstanding requests'. The above does not explain to me why you need both classes. You can do all of the above with the current AsyncEngine interface alone. Perhaps you assign some unimplemented semantics to Dispatchers and we end up discussing two different things: you talk about the intended/future API, while I talk about the deleted code. I think that is likely the case. But looking at the patch I can see that all you've done is move the abstract interface one level further out. So the entire discussion is really moot - I don't care if it's a level further out or not, though I think the extra indirection you are adding via AsyncCallQueue::Instance().fire() is more complexity, not less.
I do not know what interface levels are here, but I am glad you do not have a problem with deleting CompletionDispatcher classes after all. I'm not happy about it, but you are working on a larger cleanup which I'm very keen to see happen. I don't think that the bit we're talking about substantially influences the cleanup you are doing either way, but I would be deeply unhappy if I caused that cleanup to be delayed or not happen because of this discussion. So I'd rather see the CompletionDispatcher stuff deleted and the other cleanups come in, and revisit it later, than block up-front on a design discussion which is essentially about future work (such as better native win32 support). More importantly I can't see any unit tests. If we want to end up with a rigourously tested code base - which is both well worth doing and something I thought we as a group were interested in - replacing tested code with untested code is a step backwards. We are not moving in just one dimension here. I agree that rigorously tested code base is a goal. Yes, we will probably lose some old unit tests while simplifying the code. Better code and/or new tests will probably compensate for that. I'd like to note that what you are removing and replacing was *new code* and *new unit tests* not so long ago. I think its reasonable to ask that when replacing tested code, it be replaced with tested code. (I do take your point about moving in 1 dimension). I'm not sure if you are saying that you are adding global variables because its 'simpler' (its not, and I'm *really* against more globals) I do not recall adding global variables. The old code just hid them behind register(Instance()) calls. We can quickly hide them again if needed. The old code had the following structure IIRC: in main.cc an instance of EventLoop. various globals inherited from the squid2 event loop registered with the EventLoop instance (using Instance() calls). So the code in EventLoop was free of globals. 
You can run multiple EventLoops (e.g. one in each thread), or do other interesting things. Other parts of the code base not migrated yet still used globals, which simply reflects that the job of removing globals is not complete. Making EventLoop use another class's Instance() is a step back in removing globals IMO. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: async-calls squid3/src AsyncCallQueue.cc,NONE,1.1.2.1 AsyncCallQueue.h,NONE,1.1.2.1 AsyncCall.cc,1.3.22.10,1.3.22.11 AsyncCall.h,1.3.22.17,1.3.22.18 EventLoop.cc,1.5.4.2,1.5.4.3 Makefile.am,1.131.
On Wed, 2007-12-19 at 23:09 +, Alex Rousskov wrote: Update of cvs.devel.squid-cache.org:/cvsroot/squid/squid3/src Modified Files: Tag: async-calls AsyncCall.cc AsyncCall.h EventLoop.cc Makefile.am Added Files: Tag: async-calls AsyncCallQueue.cc AsyncCallQueue.h Log Message: Async calls are no longer using time-based events with zero delay. They have a dedicated AsyncCallQueue. Good so far. ... I did not make the queue a child of the Dispatcher because I hope to get rid of dispatchers eventually. ... Can I ask why? The point of the current design is to allow integrating some quite different sources of events *and* workers. E.g. the dispatch of disk IO to different threads fits in cleanly in the current design IMO. -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: building async-calls
On Sun, 2007-12-16 at 21:57 +0900, Adrian Chadd wrote: On Sun, Dec 16, 2007, Tsantilas Christos wrote: OK Adrian, I fixed this too. You can build the async-calls branch without enabling the ICAP client. Next question - if I read this code right, a class is instanced for every async callback being scheduled, is this true? Yes, this is true. An AsyncCall class is instantiated for every async callback. And the comm code is going to register one of these per comm event? Have you benchmarked what that'll do to performance? :) Robert's comm code changes in -3 did exactly this, and it trashed performance at high throughput; which is why I started unwinding it.. I find it hard to buy that root cause analysis has been done here. There are *lots* of other changes in -3 that will cause memory allocation; and if allocation is the problem for these async call result classes, then it's likely fixable (for instance, in this specific case, a ring buffer allocator may be ideal). There are *python* programs that can saturate gigabit links with a similar high allocation model. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: http header deletion?
On Mon, 2007-12-10 at 17:26 +0900, Adrian Chadd wrote: Why/where is the code assuming that offset 0 will be HDR_DATE and offset 1 will be HDR_EXPIRES? The replacement code doesn't check to see whether anything is at the position in the array that it's inserting into.. Would it make more sense to modify httpHeaderDelById() to return the first offset it found a header of that type at, and then make httpHeaderInsertTime() use that? Or perhaps replace that with a httpHeaderReplaceEntry() to replace an entry at that position? httpHeaderInsertEntry() is only used by httpHeaderInsertTime(); httpHeaderInsertTime() is only ever called in the above snippet of code. I haven't checked the code yet, but 'insert' suggests to me that it inserts and shifts everything down; as Amos says, inserting at 0 makes sense when adding both an expires and date header. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: caching dynamic content
On Thu, 2007-11-15 at 09:37 +0900, Adrian Chadd wrote: G'day, I'd like to propose a Squid modification - to cache dynamic content that's playing good. I long ago turned off the ? cache deny rule on my caches. That said, making reply header checks able to influence cache rules sounds like a good idea. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
cvs commit mails - unidiff?
Any chance of getting unidiff rather than context diffs ? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
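For anyone unsure of the distinction being requested here: context diffs come from diff -c, unidiffs from diff -u. A quick self-contained illustration (the file contents are arbitrary examples):

```shell
# Show the unified-diff format being requested, using two throwaway files.
old=$(mktemp); new=$(mktemp)
printf 'a\nb\nc\n' > "$old"
printf 'a\nB\nc\n' > "$new"
# diff exits non-zero when files differ, so guard it for set -e shells.
udiff=$(diff -u "$old" "$new" || true)
echo "$udiff"
rm -f "$old" "$new"
```

The unified output marks changes with single-column `-`/`+` prefixes inside one `@@` hunk, which is both more compact and easier to apply with patch than the two-block context format.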
Re: preserve cvs history of moved files.
On Sun, 2007-10-14 at 21:45 +0300, Tsantilas Christos wrote: Hi, I read again my previous mail and it looks a little stupid. I will try to help Alex with the AsyncCalls project: http://wiki.squid-cache.org/Features/NativeAsyncCalls As a first step at the async-calls branch of sourceforge cvs, the files src/ICAP/AsyncJob.c, src/ICAP/AsyncJob.h, src/ICAP/TextException.cc and src/ICAP/TextException.h should be moved under the src/ directory. But it is also good to keep their cvs history. Is there any way? Yes, copying the ,v file rather than moving; then use cvs admin to mark the old versions at the new path deleted. And just cvs delete the current versions at the old path. It's important to do it this way rather than mv, because mv breaks the old versions. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: Modules, installing Squid headers, and src layout
On Thu, 2007-10-11 at 20:17 -0600, Alex Rousskov wrote: Hello, To support loadable modules, we may need to alter Squid sources layout or installation procedure. This email discusses the problem and four possible solutions. Please review and suggest the best path forward. I'd really prefer that we don't do loadable modules for now. But that said... To be portable, modules cannot use anything defined in 'squid' - we must have a libsquid, which contains all symbols modules need. This shapes the problem space quite definitively. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: Migrating to Subversion
On Sun, 2007-10-07 at 03:51 +0200, Henrik Nordstrom wrote: On sön, 2007-10-07 at 03:47 +0200, Henrik Nordstrom wrote: bzr and git are the only two left standing on my list of choices. Neither is perfect, but both are significantly better than most others.. with bzr being the main candidate. On a related note the Samba project is currently migrating to bzr. Err.. Samba is migrating to git, not bzr. Sorry for the mixup. Yup. There are members of the samba team that really like bzr, but some key things were not sufficiently good for them at the time; they have since been fixed/are rapidly being fixed, but I don't see any of them being a problem for squid. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: Migrating to Subversion
On Fri, 2007-10-05 at 09:43 -0600, Alex Rousskov wrote: Hello, With the Squid 3.0 branching event approaching, perhaps now is a good time to decide whether we want to switch from CVS to Subversion? Pros: + Many consider svn to be overall better than CVS. Its branch model is terrible. + Svn supports renaming and moving files (we may want that for 3.1). Actually, it supports copies, not renames. Its rename command is built on a copy primitive, the same as it uses for branching. Elegant in some regards. + Svn working copy diffs are very fast (no network delays). + Svn handles binary files and keyword substitution better. + Branching and tagging is a much simpler concept in svn. + SourceForge svn services may be faster (I do not know that). + Subversion offers more remote access methods (e.g., WebDAV). Cons: - Some consider svn to be overall worse than CVS. I am one of those, especially when the cvsnt group's work is considered. - Lossless migration is possible, but takes time/work. - Henrik's CVS scripts will need to be changed to support svn. - Some CVS veterans will hate svn branching and tagging. - Some svn newbies may modify tagged snapshots. - Some web pages and scripts accessing CVS will need to be changed. Did I miss anything important? svn is a poor choice today; projects that moved to svn are already moving a step further, to the generation of distributed version control systems that have now matured. Do the pros outweigh the cons? I'd really encourage a migration to bzr, http://bazaar-vcs.org/, if we are to migrate at all. Pros * Distributed [no requirement to sign up or join a group in order to be able to use the full vcs power, and a bunch of corollaries like very fast local logging and history bisection etc]. * Has a centralised 'svn-like' capability built in for projects that want/need it. * Renames and moves supported * Binary files - check * Branching and tagging is simpler than CVS and more manageable than svn. 
* Like svn, can operate over http/ssh/custom wire protocol but also adds ftp support and other dumb protocols [with commensurately lower performance due to latency multiplication]. * Written in Python - trivially extensible * I am a core contributor to it, so support is readily available :). * Disk storage overhead is lower than SVN's Cons * Sourceforge don't offer bzr hosting. (The distributed nature lowers the value of a separate 'devel' site though, and launchpad.net does offer hosting if we want it) * As above, lossless conversion does take work. I don't think VCS migration is at all coupled to the 3.1 branching. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
[Fwd: [SLUG] {Commercial} SafeSquid SPEED-BOOSTER 4.2.0 Released]
Garh! This turned up on my local LUG list. It's *not* squid, it's not related to squid, and it's not open source. We really should see if we can do something about their abuse of the squid brand. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. ---BeginMessage--- Hello All, SafeSquid SPEED-BOOSTER 4.2.0 for Linux has been released. Increasing throughput is the key goal of 4.2.0. The changes made in the new version are as follows - - Improved TCP tuning - Improved Thread tuning - Password Caching To know more details about the changes, please visit the link - http://www.safesquid.com/html/viewtopic.php?t=2321 The free SafeSquid Composite Edition 20 can be downloaded from the Products Download page at http://www.safesquid.com/html/portal.php?page=126 -- SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/ Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html ---End Message--- signature.asc Description: This is a digitally signed message part
Re: cvs commit: squid3 configure.in
On Fri, 2007-09-07 at 09:43 +0800, Adrian Chadd wrote: On Thu, Sep 06, 2007, Henrik Nordstrom wrote: Maybe. I'll leave that to you to judge. Don't have a clear opinion either way. But why not? HTCP and SNMP are very non-intrusive, just enabling code which is never used unless configured in squid.conf, with no significant change in other code. Maybe one of the goals for squid-3.1 should be runtime loadable dynamic modules for stuff. (Although I'm not sure how well that'd work with C++ and its symbol munging..) It works fine and easily. In fact a number of the current modules are runtime enabled, though statically linked. That said, doing dynamic loading has GPL implications that I'm not currently comfortable with. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: wccp2d in sourceforge
On Tue, 2007-09-04 at 09:18 +0800, Adrian Chadd wrote: I've put my wccp2 code breakout into a new module in the sourceforge squid repository (under wccp2d.) For now it's just the squid wccp2.c with some glue to make it live outside of Squid. It uses libevent for its network IO. There's a TODO list there. The first two things to implement are running a shell script on router assignment/deletion (to bring up and tear down the GRE tunnels, for example, and to install the redirection rules to Squid) and negotiating per-router assignment and redirection. I'd love more help on this; god knows I've got enough on my plate already.. What IO load does this get? Perhaps writing it in Python or something would be better overall? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: squid-3 ?
On Thu, 2007-08-16 at 11:59 +0300, Tsantilas Christos wrote: Hi all, Alex Rousskov wrote: The primary idea behind RC1 is to bring in users who are ignoring PRE releases because there were so many PREs. We need more testers than a handful of folks running PREs on busy sites. An RC1 release to attract testers is a good idea. The only comment I have is that it is still summer and some of the squid code submitters are still on vacation, so 1 or 2 developers get more load. Maybe September is a better month... What's this summer you speak of? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: cvs commit: squid3/src ftp.cc
On Tue, 2007-08-14 at 17:58 +0200, Henrik Nordstrom wrote: On tis, 2007-08-14 at 23:01 +1200, Amos Jeffries wrote: Alex Rousskov wrote: On Mon, 2007-08-13 at 12:01 +0200, Guido Serassio wrote: Now fixed. If you have time, it may be better to replace the virtual FtpStateData::haveControlChannel(char*) with a static FtpStateData::HaveControlChannel(FtpStateData*, char*) and let it check the ftpState pointer itself. This will avoid more code repetition and may even work faster. What's with the uppercasing? The method is not a type. Is this a new convention not mentioned in the wiki/devel-faq? Hmm.. it should be in the style guide somewhere. http://www.squid-cache.org/Devel/squid-3-style.txt Object-local methods and variables start with a lowercase letter; global variables and static class members with an uppercase one. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
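[Editorial sketch] The convention from squid-3-style.txt, applied to Alex's suggestion, would look something like the stub below. The class body here is invented purely to show the naming pattern; the real FtpStateData is far larger and takes a char* argument as well.

```cpp
// Illustration of the squid-3 naming convention: instance methods and
// variables start lowercase; static class members start uppercase.
// This is a stub for illustration, not the real Squid class.
class FtpStateData {
public:
    // instance method: lowercase first letter
    bool haveControlChannel() const { return ctrlOpen; }

    // static member: uppercase first letter; it checks the
    // ftpState pointer itself, as Alex proposed, so callers need not
    bool static HaveControlChannel(const FtpStateData* ftpState) {
        return ftpState != nullptr && ftpState->haveControlChannel();
    }

private:
    bool ctrlOpen = true; // instance variable: lowercase
};
```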
Re: Default debug stream formatting
On Wed, 2007-08-08 at 08:28 -0600, Alex Rousskov wrote: Hi, Any objections to setting the default formatting flags for the Squid3 debugs() stream to fixed with a 2-digit precision? It would help to convert cache.log messages like Took 6.6 seconds (3.5e+03 objects/sec). into more readable/usable Took 7.12 seconds (3147.13 objects/sec). without modifying the debugs() statements themselves (which is actually impossible for flags like ios::fixed that do not have a stream manipulator). The patch is quoted below. Any other flags we should set by default? If there are no objections, I will commit the patch. +1 -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
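[Editorial sketch] The effect of the proposed defaults can be shown on a plain ostringstream. This is only an illustration of the flags in question, not the debugs() patch itself, and `formatRate` is a made-up helper name:

```cpp
#include <sstream>
#include <string>

// With std::ios::fixed set and precision(2) applied up front, a rate
// such as 3.5e+03 prints as 3500.00 rather than scientific notation,
// without touching the individual stream-insertion statements.
std::string formatRate(double objectsPerSec) {
    std::ostringstream os;
    os.setf(std::ios::fixed, std::ios::floatfield); // default to fixed notation
    os.precision(2);                                // 2-digit precision
    os << objectsPerSec << " objects/sec";
    return os.str();
}
```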
Re: bug 2000 - patch
On Sun, 2007-08-05 at 19:39 +1200, Amos Jeffries wrote: Here is the patch to obsolete StoreEntryStream from squid. If any of you want to check it before I push it to HEAD next weekend. It blocks out the StoreEntryStream class code and test cases, and replaces all use of the stream with calls to storeAppendPrintf(). I'm not sure if it needs a Changelog entry since this is code new to 3.0 anyway. So why is this being removed? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: bug 2000 - patch
On Sun, 2007-08-05 at 22:03 +1200, Amos Jeffries wrote: Building on recent testing versions of Debian with the upcoming g++ 4.1.3 and 4.2.1 it fails the unit tests. std::setw() on a string/char requires something that is not implemented in the StoreEntryStream* classes. My limited-knowledge tests haven't located exactly what needs to be implemented now for it to work. I can try to have a look at it next weekend. There's some family stuff on, so no guarantees. The alternative to restore the builds is to pull the MemObject stats display (the only place the stream is ever actually used) back inline with the rest of squid to use storeAppendPrintf(). And drop the stream until someone who knows what needs to be fixed can do so. Well, I'd really rather we don't do that, because I was trying to get rid of the inability to do stream operations by introducing that. I presume bug 2000 has the compile error? Have you looked at the g++ changelog to see if anything stands out? -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: bug 2000 - patch
On Mon, 2007-08-06 at 01:52 +1200, Amos Jeffries wrote: Have you looked at the g++ changelog to see if anything stands out? Yes, there is no mention in either the Debian changelog or the gcc one of anything stream-related back to before the versions that are known to have no problems. The first thing I'd do then, if you have a debian box, is file a bug - it may in fact be a gcc regression. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part
Re: Event order
On Fri, 2007-07-27 at 09:08 +1000, Robert Collins wrote: On Thu, 2007-07-26 at 09:31 -0600, Alex Rousskov wrote: Folks, There are at least two known difficult-to-reproduce bugs that may be explained by asynchronous calls being called out of order. I do not know whether out-of-order execution is indeed their cause, but they prompted me to investigate event scheduling further. Currently, asynchronous calls are implemented using addEvent with 'when' parameter set to zero. This means that the event time is set to current_dtime in EventScheduler::schedule. However, current_dtime may _decrease_ when the system clock is adjusted. If such a decrease happens between the two asynchronous call submissions, the later call will be fired first. I see two ways of fixing this: 1) Stop using addEvent for asynchronous calls. Add a special queue for them and drain the queue every select loop. Pros: straightforward design that is probably a little faster than addEvent because we will always append the new call instead of searching for the right place in the queue. This design will help treating asynchronous calls specially in the future (e.g., debugging and exception trapping). Cons: lots of work and current code changes. Oh, I should note - its dead easy to add a new dispatcher that just implements a FIFO of queued calls per event loop, using the current event infrastructure. It may be overengineered, but it makes such additions trivial. -Rob -- GPG key available at: http://www.robertcollins.net/keys.txt. signature.asc Description: This is a digitally signed message part