ow <[EMAIL PROTECTED]> writes:
> My concern though ... wouldn't pgSql server collapse when faced with
> transaction spanning across 100M+ records?
The number of records involved really doesn't faze Postgres at all. However
the amount of time spent in the transaction could be an issue if there is
ow <[EMAIL PROTECTED]> writes:
> My concern though ... wouldn't pgSql server collapse when faced with
>> transaction spanning across 100M+ records?
No. You're extrapolating from Oracle-specific assumptions again.
regards, tom lane
--- Jan Wieck <[EMAIL PROTECTED]> wrote:
> #!/bin/sh
>
> (
> echo "start transaction;"
> cat $2
> echo "commit transaction;"
> ) | psql $1
>
>
>
> then call it as
>
> reload_in_transaction my_db my_namespace.dump
>
Since the whole dump will be restored inside of one transaction, it either
succeeds as a whole or rolls back as a whole.
--- ow <[EMAIL PROTECTED]> wrote:
> How? The doc only mentions db: pg_dump [option...] [dbname]
>
> Then, how would I lock users out from the schema while it's being loaded?
Never mind how, I see there's "-n namespace" option in 7.4. But still, how
would I lock users out from the schema while it's being loaded?
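One possible answer (an assumption on my part, not something settled in the thread): revoke USAGE on the schema from PUBLIC before the reload and grant it back afterwards. This only locks out users whose access comes via PUBLIC grants. A minimal sketch in the style of Jan's script; run_psql is a stub standing in for `psql "$db"` so the sketch can be read and run without a server:

```shell
#!/bin/sh
# Sketch: lock ordinary users out of a schema while it is reloaded.
# Assumes users reach the schema only via PUBLIC privileges.
run_psql() { cat; }   # replace with: psql "$db"

reload_locked() {
  db=$1; ns=$2; dump=$3
  # committed immediately, so it takes effect before the load starts
  echo "revoke usage on schema $ns from public;" | run_psql
  (
    echo "start transaction;"
    cat "$dump"
    echo "commit transaction;"
  ) | run_psql
  # let users back in once the load has committed
  echo "grant usage on schema $ns to public;" | run_psql
}
```

Hypothetical usage: reload_locked my_db my_namespace my_namespace.dump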
--- Peter Eisentraut <[EMAIL PROTECTED]> wrote:
> You could just dump individual schemas.
How? The doc only mentions db: pg_dump [option...] [dbname]
Then, how would I lock users out from the schema while it's being loaded?
Thanks
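For reference, the per-schema dump mentioned later in the thread looks like this. The pg_dump function below is a local stub so the sketch runs without a server; on a real system, delete the stub and the call goes to the actual 7.4 pg_dump binary:

```shell
#!/bin/sh
# Sketch of 7.4's per-schema dump (real syntax: pg_dump -n <schema> <db>).
pg_dump() {
  # stub standing in for the real binary
  echo "-- pg_dump: schema $2 of database $3"
}

dump_schema() {
  db=$1; ns=$2
  pg_dump -n "$ns" "$db" > "$ns.dump"
}
```

dump_schema my_db my_namespace writes my_namespace.dump, which a wrapper like Jan's reload_in_transaction script can then restore atomically.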
ow writes:
> There's too much data to put it in one db.
There is never too much data to be put in one database.
> If anything happens to it, I'll never be able to restore (or dump) it in
> time.
You could just dump individual schemas.
> BTW, mySql has cross-db queries.
PostgreSQL has schemas, which give you cross-schema queries inside one
database.
--- Peter Eisentraut <[EMAIL PROTECTED]> wrote:
> I'm afraid that what you want to do is not possible. Perhaps you want to
> organize your data into schemas, not databases.
There's too much data to put it in one db. If anything happens to it, I'll
never be able to restore (or dump) it in time. BTW, mySql has cross-db queries.
ow writes:
> That's the whole point: I'm trying to avoid maintaining *separate* connection
> pools for each db. In other words, instead of having, say, 5 connection pools
> to 5 dbs with total of 1000 connections, I could've used just one (1) pool with
> 200 connections, if there was a way to "switch" a connection between dbs.
--- Peter Eisentraut wrote:
> Nothing prevents you from keeping the connection to db1 open when you open
> a connection to db2. By the way, psql's "\c" command does exactly
> disconnect-from-db1-connect-to-db2.
That's the whole point: I'm trying to avoid maintaining *separate* connection
pools for each db. In other words, instead of having, say, 5 connection pools
to 5 dbs with total of 1000 connections, I could've used just one (1) pool with
200 connections, if there was a way to "switch" a connection between dbs.
ow writes:
> Is there a way to programmatically switch conn1 to use db2 without doing
> disconnect-from-db1-connect-to-db2? Something like what "\c" does but to
> be used independently from psql? I need this to be able to reuse a pool
> of connections to db1 for actions on db1, db2 ... dbn.
Nothing prevents you from keeping the connection to db1 open when you open
a connection to db2. By the way, psql's "\c" command does exactly
disconnect-from-db1-connect-to-db2.
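Since a PostgreSQL connection is tied to one database for its lifetime, a pool "shared" across databases really means a set of per-db sub-pools selected by name (my framing, not from the thread). A toy sketch; connect_db is a stub standing in for a real client connect:

```shell
#!/bin/sh
# Sketch: there is no server-side "switch database", so route each
# request to a connection for the right db instead.
connect_db() { echo "conn-to-$1"; }

# Hand out a connection for the requested db; a real pool would cache
# and reuse connections instead of connecting on every call.
checkout() {
  case $1 in
    db1|db2|db3) connect_db "$1" ;;
    *) echo "unknown db: $1" >&2; return 1 ;;
  esac
}
```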