On Aug 12, 2009, at 9:04 AM, Mark Martin wrote:

Elaine Ashton wrote:


What information do you suspect we have? :) Getting to the URLs programmatically from inside the db is not as simple as it might seem. I'll also have to revisit the redirect algorithms for Apache and Tomcat to see how and when they scan the remapping table, but when you're talking about 10-20,000 pages to be individually remapped, there will be a cost to the performance of the site as a whole. And if you maintain the broken links, you commit to maintaining them in perpetuity, whereas if they break, people fix them and move on. I believe most links will still get you to the top-level project page, which is a reasonable compromise.

1) You don't have to maintain in perpetuity. You can set reasonable expectations.

What makes you believe that breaking them today or in two years would be met with a discernibly different response? I fully expect it would be the same flurry of unreasonable expectations as there is now.

2) People don't "fix them and move on". Look at the fiasco that happened with the ARC move. No notice. Not enough decency or respect to even respond to questions or suggestions. And in the end, a large body of work (case artifacts, emails, etc.) within which it is either difficult or impossible to correct links. At least in this case we've given something resembling notice, which I suppose is an improvement.

I think the ARC stuff is a special case and is mostly a problem from within the ARC group itself.


Links change constantly, and we can't control who points to what in various media; this has been an endemic problem in hypertext since there were more than two pages on the internet that linked to each other.

And thus our forefathers had the foresight to grant us the 30x response.
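(A minimal illustration of that grant, assuming Apache's mod_alias; the paths and hostname are hypothetical. A 301 tells clients and crawlers the move is permanent:)

```apache
# Permanent (301) redirect for a single moved page; paths are hypothetical.
Redirect permanent /old/projects/foo http://example.org/projects/foo
```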

I suspect this is just another case of the website "community" group marching to the beat of a different drummer (i.e. serving other interests), and essentially ignoring the needs of the users.

You know, it does get pretty discouraging, even for someone in the business of being unappreciated in the boiler room, when people who are technical enough to appreciate and understand the scale of what migrating all this content entails do little but gripe and sling insults.

Our forefathers did not give us sentient webservers with self-healing URIs.

I AM thinking of the end users: if we did a URI rewrite for 10-20,000 pages, the server would have to scan /every/ rule for a match before finally serving up a page. In the battle of a glacial server vs. a broken link on a zippy server, the zippy server wins.
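(For readers curious about the mechanics: the linear-scan cost above applies to per-URL RewriteRule entries. Apache's mod_rewrite also offers RewriteMap, which lets a single rule consult a keyed lookup table; with a dbm-type map the lookup is hashed rather than scanned rule by rule, though the map file must then be generated and maintained. A minimal sketch, with hypothetical file paths:)

```apache
# One rule consults an external keyed map instead of 10-20,000 individual
# RewriteRule entries. A dbm-type map gives hashed lookups; paths hypothetical.
RewriteEngine On
RewriteMap oldurls dbm:/etc/apache2/remap.dbm

# Redirect only when the old path actually appears in the map.
RewriteCond ${oldurls:$1} !=""
RewriteRule ^/old/(.*)$ ${oldurls:$1} [R=301,L]
```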

That being said, if there is someone out there who does happen to be familiar with large-scale production-level real-world kind of site migration issues, I'd be interested in hearing about your first-hand experiences.

e.


_______________________________________________
website-discuss mailing list
[email protected]
