> Fair enough, but it seems like del is so simple really that you could
> replicate it fairly readily.
You can't replicate the user base, and, while the data model is fairly simple, the data traversal is not. Undirected graphs get complex much faster than they get large, for all the usual Metcalfe/Reed's Law reasons. So the issue is not "Can we write something that links users, links, and tags?" but "Can we write something that handles all those entities in the millions, and performs real-time queries that do complex things like calculate network neighborhoods?"

Del used to list the last 2 hours of queries on the homepage. Now it lists less than 3 minutes or so -- that's roughly a fortyfold increase in the amount of incoming data, and the graph-traversal problems grow faster than linearly, so the server infrastructure and the query optimization are both non-trivial.

Anyone putting up servers for downloading software has a brain-dead interaction model, and isn't going to be forced to think hard about their server or db setup. Things like del (and flickr and wikis and the other things Flock says they are going after) are a different kind of business than Flock is in.

-c

_______________________________________________
discuss mailing list
[email protected]
http://lists.del.icio.us/cgi-bin/mailman/listinfo/discuss
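[Editor's note: to make the superlinear-growth point concrete, here is a minimal Python sketch -- an illustration only, with invented helper names, not anything from del.icio.us's actual codebase. It shows why the number of potential links grows quadratically with users, and what a "network neighborhood" query looks like as a bounded breadth-first traversal.]

```python
# Illustrative only: why graph queries grow faster than the user count.
# In an undirected graph, the number of possible links among n nodes is
# n*(n-1)/2, and even a 2-hop "network neighborhood" query can touch a
# large fraction of the graph starting from a single node.
from collections import defaultdict

def possible_edges(n):
    """Metcalfe-style count of possible links among n nodes."""
    return n * (n - 1) // 2

def neighborhood(graph, start, depth=2):
    """Nodes within `depth` hops of `start`, via breadth-first search."""
    seen = {start}
    frontier = {start}
    for _ in range(depth):
        frontier = {nbr for node in frontier for nbr in graph[node]} - seen
        seen |= frontier
    return seen - {start}

# Doubling the users roughly quadruples the potential link structure:
print(possible_edges(1000))   # 499500
print(possible_edges(2000))   # 1999000

# Tiny example graph: node -> set of connected nodes (undirected).
g = defaultdict(set)
for a, b in [(1, 2), (2, 3), (3, 4), (1, 5), (5, 6)]:
    g[a].add(b)
    g[b].add(a)
print(sorted(neighborhood(g, 1)))  # [2, 3, 5, 6]
```

The point of the sketch: `possible_edges` grows as O(n^2) while the user count grows as O(n), and `neighborhood` fans out multiplicatively with each hop, which is why doing these queries in real time at millions of entities is a hard infrastructure problem rather than a simple data-model problem.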

