* Mark Mielke (m...@mark.mielke.cc) wrote:
> There is no technical requirement for PostgreSQL to separate data in  
> databases or tables on subdirectory or file boundaries. Nothing wrong  
> with having one or more large files that contain everything.

Uhh, except where you run into system limitations on file size (e.g., a 2GB
max file size...).  You'll note PG creates files up to 1GB and then splits
them into separate files.  It's not done just because it's fun.
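You can see the split right on disk: a relation lives in
base/<db oid>/<relfilenode>, with overflow segments named .1, .2, and so
on, each capped at 1GB.  A quick Python sketch to tally them (the data
directory and relfilenode below are made up; look yours up with
pg_relation_filepath()):

    import glob
    import os

    # Hypothetical paths -- substitute your own data directory and the
    # relfilenode reported by pg_relation_filepath('your_table').
    datadir = "/var/lib/postgresql/data"
    relpath = os.path.join(datadir, "base", "16384", "16385")

    # The first segment has no suffix; overflow segments are .1, .2, ...
    segments = [relpath] + sorted(glob.glob(relpath + ".*"))
    for seg in segments:
        if os.path.exists(seg):
            print(seg, os.path.getsize(seg))  # each at most 1GB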

> I guess I'm not seeing how using 32k tables is a sensible model.

For one thing, there's partitioning.  For another, there's a large user
base.  32k tables is, to be honest, not all that many, especially for
some of these databases that reach into the multi-TB range...
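The arithmetic gets you there quickly.  Partition a fairly modest schema
by day and you blow past 32k without trying (the numbers here are just an
illustration, not anyone's actual deployment):

    # Back-of-the-envelope: how partitioning multiplies table counts.
    logical_tables = 50          # hypothetical schema
    partitions_per_year = 365    # one partition per day
    years_retained = 2

    total = logical_tables * partitions_per_year * years_retained
    print(total)  # 36500 -- already past 32k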

> So yes,  
> things can be done to reduce the cost - but it seems like something is  
> wrong if this is truly a requirement.

I have no idea what you've been working with, but I hardly think it
makes sense for PG to treat more than 32k tables as not worth supporting.

> There are alternative models of  
> storage that would not require 32k tables, that likely perform better.  

Eh?  You would advocate combining tables for no reason other than you
think it's bad to have a lot of them?

> Do you agree with me that having 32k open file descriptors (or worse,  
> open on demand file descriptors that need to be re-opened many times) is  
> a problem?

Nope.
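PG doesn't sit on 32k kernel fds anyway: the fd layer
(src/backend/storage/file/fd.c) hands out virtual file descriptors and
keeps only up to max_files_per_process real ones open, closing the
least-recently-used when it needs a slot, and re-opening is cheap.  A toy
version of the idea in Python (nothing like the real code; the class and
its interface are invented for illustration):

    from collections import OrderedDict

    class FdPool:
        # Keep at most max_open files open; evict least-recently-used.
        def __init__(self, max_open=1000):
            self.max_open = max_open
            self.open_files = OrderedDict()  # path -> file, in LRU order

        def get(self, path):
            f = self.open_files.pop(path, None)
            if f is None:
                if len(self.open_files) >= self.max_open:
                    _, victim = self.open_files.popitem(last=False)
                    victim.close()        # evict the coldest entry
                f = open(path, "rb")      # (re)open on demand
            self.open_files[path] = f     # mark most-recently-used
            return f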

> Looking at PostgreSQL today - I don't think it's designed to scale to  
> this. Looking at SQL today, I think I would find it difficult to justify  
> creating a solution that requires this capability.

Actually, I find that PG handles it pretty well.  And we used to be an
Oracle shop.

        Thanks,

                Stephen
