> >  4.  How do you break up files in the datastore and the datastore
> >      indices to get around file size limits imposed by filesystems?
> what?

I think he's referring to the 2GB file size limit on Linux and similar
limitations in other OSs.
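
Just to make the question concrete: one workaround would be to present the
store as a single logical byte range backed by several physical segment
files, each capped well below the filesystem limit, and route reads and
writes by offset. Rough sketch below -- the class and file names are made
up for illustration, this is not how the actual datastore is laid out:

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;

    /**
     * One logical byte range backed by several physical files, each
     * capped below the filesystem's maximum file size.
     */
    public class SegmentedStore {
        // Keep each segment well under the 2GB limit (1GB here).
        private static final long SEGMENT_SIZE = 1L << 30;

        private final File dir;

        public SegmentedStore(File dir) {
            this.dir = dir;
        }

        /** Write data at a logical offset, spanning segments if needed. */
        public void write(long offset, byte[] data) throws IOException {
            int written = 0;
            while (written < data.length) {
                long segment = (offset + written) / SEGMENT_SIZE;
                long segOffset = (offset + written) % SEGMENT_SIZE;
                int chunk = (int) Math.min(data.length - written,
                                           SEGMENT_SIZE - segOffset);
                RandomAccessFile raf = new RandomAccessFile(
                        new File(dir, "store_" + segment + ".dat"), "rw");
                try {
                    raf.seek(segOffset);
                    raf.write(data, written, chunk);
                } finally {
                    raf.close();
                }
                written += chunk;
            }
        }

        /** Read 'length' bytes starting at a logical offset. */
        public byte[] read(long offset, int length) throws IOException {
            byte[] out = new byte[length];
            int done = 0;
            while (done < length) {
                long segment = (offset + done) / SEGMENT_SIZE;
                long segOffset = (offset + done) % SEGMENT_SIZE;
                int chunk = (int) Math.min(length - done,
                                           SEGMENT_SIZE - segOffset);
                RandomAccessFile raf = new RandomAccessFile(
                        new File(dir, "store_" + segment + ".dat"), "r");
                try {
                    raf.seek(segOffset);
                    raf.readFully(out, done, chunk);
                } finally {
                    raf.close();
                }
                done += chunk;
            }
            return out;
        }
    }

The same trick works for the indices; the only cost is an extra division
per access to pick the segment file.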

> In fact, I think a lot of your criticisms really don't make much
> sense.  You're not designing a webserver.  It doesn't *have* to scale to
> enterprise levels.

I was thinking about this. Whenever the datastore starts having
scalability problems, that probably means we're not balancing load
properly throughout the network. Then again, we may end up with so much
traffic that even with proper load balancing there is still a need for
scalable datastores. That would be the case if several large sites were
running a trusted subnetwork and swapping huge amounts of data with each
other. Of course any large node could be replaced with several smaller
nodes, but there are times when running a large node is the sensible
thing to do, such as when you just happen to have a big, fast computer
with lots of HD space and bandwidth. You might as well put a big, fat
node on it. In general, though, I see neither much need for highly
scalable datastores, nor any particular reason *not* to make the
datastore highly scalable. It's more of an issue for a C port, since
that's what people trying to run high-traffic nodes will probably run
anyway.


