Re: [GENERAL] Create/Erase 5000 Tables in PostGRE SQL in execution
Christopher Browne wrote:
>> Orlando Giovanny Solarte Delgado wrote:
>>> It is a web system and each user can perform about 50 queries per
>>> session. I can have around 100 simultaneous users, so I can have
>>> 5000 simultaneous queries. Each query is joined to a spatial
>>> component in PostGIS, so I need to store each query in PostgreSQL
>>> to use the full capacity of PostGIS. The question is whether, for
>>> each query, I should build a table in PostgreSQL at execution time,
>>> use it, and then drop it. Is a system efficient this way? Is it
>>> possible to have 5000 tables in PostgreSQL? How is the performance?
>> Use TEMP tables.
> Hmm. To what degree do temp tables leave dead tuples lying around in
> pg_class, pg_attribute, and such?
>
> I expect that each one of these connections will leave a bunch of dead
> tuples lying around in the system tables. The system tables will need
> more vacuuming than if the data was placed in some set of
> more-persistent tables...
>
> None of this seems forcibly bad; you just need to be sure that you
> vacuum the right things :-).

Since there is pg_autovacuum, you don't need to think about it.

--
Wbr, Sergey Moiseev

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
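[If pg_autovacuum were not running, the catalog churn described above could also be handled by hand. A minimal sketch (requires appropriate privileges on the catalogs):

```sql
-- Temp-table creation and destruction leaves dead rows in the system
-- catalogs; vacuuming them reclaims that space without a reindex.
VACUUM ANALYZE pg_catalog.pg_class;
VACUUM ANALYZE pg_catalog.pg_attribute;
VACUUM ANALYZE pg_catalog.pg_type;
```
]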
Re: [GENERAL] Create/Erase 5000 Tables in PostGRE SQL in execution
> Orlando Giovanny Solarte Delgado wrote:
>> I am designing a system that takes information from several
>> distributed Interbase (RDBMS) databases. It is a web system and each
>> user can perform about 50 queries per session. I can have around 100
>> simultaneous users, so I can have 5000 simultaneous queries. Each
>> query is joined to a spatial component in PostGIS, so I need to
>> store each query in PostgreSQL to use the full capacity of PostGIS.
>> The question is whether, for each query, I should build a table in
>> PostgreSQL at execution time, use it, and then drop it. Is a system
>> efficient this way? Is it possible to have 5000 tables in
>> PostgreSQL? How is the performance?
>
> Use TEMP tables.

Hmm. To what degree do temp tables leave dead tuples lying around in
pg_class, pg_attribute, and such?

I expect that each one of these connections will leave a bunch of dead
tuples lying around in the system tables. The system tables will need
more vacuuming than if the data was placed in some set of
more-persistent tables...

None of this seems forcibly bad; you just need to be sure that you
vacuum the right things :-). It is a big drag if system tables get
filled with vast quantities of dead tuples; you can't do things like
reindexing them without shutting down the postmaster.

--
(reverse (concatenate 'string "moc.liamg" "@" "enworbbc"))
http://linuxdatabases.info/info/x.html
"Listen, strange women, lyin' in ponds, distributin' swords, is no
basis for a system of government. Supreme executive power derives
itself from a mandate from the masses, not from some farcical aquatic
ceremony." -- Monty Python and the Holy Grail

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
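[The catalog churn in question can be observed directly: each backend's temp tables show up in pg_class under a per-backend temp schema, and dropping them leaves dead catalog rows behind until the catalogs are vacuumed. A sketch:

```sql
-- List the temp tables currently registered in the catalogs;
-- each backend gets its own pg_temp_N schema.
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'pg_temp%';
```
]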
Re: [GENERAL] Create/Erase 5000 Tables in PostGRE SQL in execution
Orlando Giovanny Solarte Delgado wrote:
> I am designing a system that takes information from several
> distributed Interbase (RDBMS) databases. It is a web system and each
> user can perform about 50 queries per session. I can have around 100
> simultaneous users, so I can have 5000 simultaneous queries. Each
> query is joined to a spatial component in PostGIS, so I need to store
> each query in PostgreSQL to use the full capacity of PostGIS. The
> question is whether, for each query, I should build a table in
> PostgreSQL at execution time, use it, and then drop it. Is a system
> efficient this way? Is it possible to have 5000 tables in PostgreSQL?
> How is the performance?

Use TEMP tables.

--
wbr, Sergey Moiseev

---(end of broadcast)---
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org
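[A minimal sketch of the temp-table suggestion; table and column names are hypothetical, and the geometry column assumes PostGIS is installed:

```sql
-- A per-connection scratch table; ON COMMIT DROP makes it vanish
-- automatically at the end of the transaction, so no explicit
-- DROP TABLE is needed per query.
BEGIN;
CREATE TEMP TABLE session_query_result (
    id    serial PRIMARY KEY,
    geom  geometry,            -- PostGIS spatial component
    attrs text
) ON COMMIT DROP;

-- ...populate it from the Interbase data, run PostGIS functions on it,
-- e.g. SELECT ST_Area(geom) FROM session_query_result; ...
COMMIT;  -- the temp table is dropped here
```

Temp tables are also invisible to other sessions, so 100 concurrent users can each use the same table name without collisions.]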
Re: [GENERAL] Create/Erase 5000 Tables in PostGRE SQL in execution Time
I don't really know what you're trying to accomplish here, but dropping
and creating thousands of tables is never a good idea with any database
system. You can certainly do that, just don't expect your queries to
run at their best performance. You'd need to at least do a vacuum
before starting to query those tables.

Can't you just leave the tables alone and populate them with records?
It looks like a bad design to me when you have to drop/create tables as
part of regular operations.

On Monday 16 January 2006 09:10, Orlando Giovanny Solarte Delgado wrote:
> I am designing a system that takes information from several
> distributed Interbase (RDBMS) databases. It is a web system and each
> user can perform about 50 queries per session. I can have around 100
> simultaneous users, so I can have 5000 simultaneous queries. Each
> query is joined to a spatial component in PostGIS, so I need to store
> each query in PostgreSQL to use the full capacity of PostGIS. The
> question is whether, for each query, I should build a table in
> PostgreSQL at execution time, use it, and then drop it. Is a system
> efficient this way? Is it possible to have 5000 tables in PostgreSQL?
> How is the performance?
>
> Thanks for your help!
>
> Orlando Giovanny Solarte Delgado
> Ingeniero en Electrónica y Telecomunicaciones
> Universidad del Cauca, Popayán, Colombia.
> E-mail Aux: [EMAIL PROTECTED]

--
UC

--
Open Source Solutions 4U, LLC
1618 Kelly St                  Phone:  +1 707 568 3056
Santa Rosa, CA 95401           Cell:   +1 650 302 2405
United States                  Fax:    +1 707 568 6416

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match
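[The "leave the tables alone and populate them" alternative can be sketched as one shared persistent table keyed by a session id, with DELETE instead of DROP; all names here are hypothetical:

```sql
-- One persistent table for all sessions avoids the catalog churn of
-- constant CREATE/DROP TABLE.
CREATE TABLE query_result (
    session_id text    NOT NULL,
    query_no   integer NOT NULL,
    geom       geometry,        -- PostGIS spatial component
    attrs      text
);
CREATE INDEX query_result_session_idx ON query_result (session_id);

-- Per query: insert rows, use them, then clean up the session's rows
-- and let (auto)vacuum reclaim the space.
DELETE FROM query_result WHERE session_id = 'abc123';
```
]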