Hi,
I have been using postgresql for a while and am at the stage of moving a few
databases to a new server (7.1.x to 7.3.x - linux/debian).
During testing and planning I have found no tool that can help me
move the databases. pgAdminII fails, pg_dump has problems with
functions and unicode, EMS Post
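For what it's worth, the usual path for a cross-version move is to drive the
newer pg_dump against the old server and restore into the new one. A rough
sketch only, assuming the 7.3.x box can reach the old server over the network
(host and database names below are placeholders):

    # run on the new (7.3.x) server, using its own client binaries
    pg_dumpall -h oldhost -p 5432 > all.sql    # roles plus every database
    psql -f all.sql template1

    # or one database at a time
    pg_dump -h oldhost mydb > mydb.sql
    createdb mydb
    psql -d mydb -f mydb.sql

Whether this copes with the function and unicode problems mentioned above is
another question; it is just the baseline to compare the other tools against.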
Dear all,
All this while I have been using pg_dump
as the backup method for my pgsql db. Today I came across
"File system level backup" in the PostgreSQL
documentation.
After reading it, I am quite unsure whether a file-system-level backup is
better than pg_dump.
Furthermore, do we really need to
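For comparison, the two methods look roughly like this (paths below are
placeholders). The practical difference is that pg_dump works against a
running server and produces a dump that can be reloaded into other versions,
while a plain file-system copy is only consistent if the postmaster is stopped
first, and can only be restored under the same PostgreSQL version and
architecture:

    # logical backup, server running
    pg_dumpall > /backup/pgsql-$(date +%Y%m%d).sql

    # file-system level backup, server stopped
    pg_ctl -D /var/lib/postgres/data stop
    tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz /var/lib/postgres/data
    pg_ctl -D /var/lib/postgres/data start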
On Thu, 16 Oct 2003 23:35:48 +0100
"Donald Fraser" <[EMAIL PROTECTED]> wrote:
>
> Since this seems to work for you,
> would you be kind enough to post the shell script for doing the
> snapshot with LVM.
>
Ahh, I posted it to -perform. Guess it didn't make it here.
I have a 2 disk striped LVM as
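The script itself isn't shown here, but the general shape of an LVM snapshot
backup is something like the following sketch (volume group, logical volume,
and mount point names are placeholders):

    # snapshot the volume holding $PGDATA
    lvcreate -L 1G -s -n pgsnap /dev/vg0/pgdata
    mount /dev/vg0/pgsnap /mnt/pgsnap

    # copy the frozen image away, then drop the snapshot
    tar czf /backup/pgdata-snap.tar.gz -C /mnt/pgsnap .
    umount /mnt/pgsnap
    lvremove -f /dev/vg0/pgsnap

Restoring such an image looks to the postmaster like recovering from a power
failure: it replays the WAL on startup, as discussed further down in this
thread.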
There's been a lot of discussion on the ADMIN list about postgresql backups
from LVM snapshots:
http://marc.theaimsgroup.com/?l=postgresql-admin&w=2&r=1&s=LVM+snapshot&q=b
Note that the existence of the snapshot slows the original filesystem down,
so you want to minimize the duration for which the
Title: How to extract table DDL from PGSQL database?
How do I extract table or view DDL from a postgresql database? I thought the PGAdmin tool would do this, but I haven't found the functionality there yet. Looked at the system catalog tables, but didn't find everything I needed (version 7.1
David Wagoner <[EMAIL PROTECTED]> writes:
> How do I extract table or view DDL from a postgresql database?
"pg_dump -s" is the usual way.
regards, tom lane
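A couple of variations on that, for reference (database and table names are
placeholders):

    pg_dump -s mydb > mydb-schema.sql     # schema only, whole database
    pg_dump -s -t mytable mydb            # DDL for a single table

The output is plain SQL, so it covers views, indexes, and the rest of the
schema objects as well.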
Hi There,
I am new to postgresql and we are thinking of migrating from MS-SQL to
postgresql, but I am having trouble getting postgresql to keep up with the
load.
I am using it to serve about 1 images/hour (16/sec), so I am making that
many connections to determine which image to serve each ti
Stephan Szabo wrote:
On Thu, 16 Oct 2003, Oli Sennhauser wrote:
I would like to start a second postmaster on my server.
The first problem was the lock file /tmp/.s.PGSQL.5432.lock
and its socket. But you can work around that with the -k
parameter. So I was able to start at least 3 clusters...
If you w
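For the archives, a second cluster generally comes down to its own data
directory, port, and socket directory; roughly (paths and port below are
placeholders, and the socket directory must already exist):

    initdb -D /var/lib/postgres/data2
    postmaster -D /var/lib/postgres/data2 -p 5433 -k /var/run/postgresql2 &
    psql -p 5433 -h /var/run/postgresql2 template1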
Hi. I want to start helping with the nuts and bolts of postgresql.
That is, I want to begin helping to update it with programming, etc.
What group should I read (which postgresql one, that is...). Also, is
this going to be possible on a 486? That is all I am operating with
right now
Robert W.
Dear Pgsql-admin:
We are now using pgsql (ver 7.0); the disk of our server went bad
yesterday. I find that all the database files are OK, but I have not made any
backup. Can you help me to restore?
Thanks, and awaiting your reply urgently,
Lei Xiong
Robert W. Kernell wrote:
Hi. I want to start helping with the nuts and bolts of postgresql.
That is, I want to begin helping to update it with programming, etc.
What group should I read (which postgresql one, that is...). Also, is
this going to be possible on a 486? That is all I am operati
Hi,
What's the best way to upgrade PostgreSQL on, say, a 6-machine cluster?
As I have compiled Pg from source, must I compile Pg on each machine
again by hand,
or can I compile once and send a package (.deb/.rpm) to each machine?
Let's say all 6 machines have identical hardware and are all on a
192
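With truly identical hardware, library versions, and install paths, compiling
once and shipping the result works; one low-tech way to do it, without
building a real .deb or .rpm, is a plain binary tarball (version number and
paths below are placeholders):

    # build once, on one machine
    ./configure --prefix=/usr/local/pgsql-7.3.x
    make && make install
    tar czf pgsql-7.3.x-bin.tar.gz -C /usr/local pgsql-7.3.x

    # on each of the other machines
    tar xzf pgsql-7.3.x-bin.tar.gz -C /usr/local

Keep in mind that a major-version upgrade (e.g. 7.2 to 7.3) still needs a dump
and reload of each cluster's data on top of swapping the binaries.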
Murthy Kambhampaty <[EMAIL PROTECTED]> writes:
> ... The script handles situations
> where (i) the XFS filesystem containing $PGDATA has an external log and (ii)
> the postmaster log ($PGDATA/pg_xlog) is written to a filesystem different
> than the one containing the $PGDATA folder.
It does? How
Friday, October 17, 2003 12:05, Tom Lane [mailto:[EMAIL PROTECTED]] wrote:
>Murthy Kambhampaty <[EMAIL PROTECTED]> writes:
>> ... The script handles situations
>> where (i) the XFS filesystem containing $PGDATA has an
>external log and (ii)
>> the postmaster log ($PGDATA/pg_xlog) is written to a
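The awkward part is that two separate filesystems cannot be snapshotted as one
atomic unit, so the usual workaround is to quiesce both before either snapshot
is taken, for example with xfs_freeze. A sketch only, not the script under
discussion (mount points and volume names are placeholders):

    xfs_freeze -f /pgdata
    xfs_freeze -f /pgxlog
    lvcreate -L 1G -s -n data_snap /dev/vg0/pgdata
    lvcreate -L 1G -s -n xlog_snap /dev/vg0/pgxlog
    xfs_freeze -u /pgxlog
    xfs_freeze -u /pgdata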
Jeff,
> The downside is
> this method will only work on that specific version of PG and it isn't the
> "cleanest" thing in the world since you are essentially simulating a power
> failure to PG. Luckily the WAL works like a champ. Also, these backups can
> be much larger since it has to include the
Jeff,
> I left the DB up while doing this.
>
> Even had a program sitting around committing data to try and corrupt
> things. (Which is how I discovered I was doing the snapshot wrong)
Really? I'm unclear on the method you're using to take the snapshot, then; I
seem to have missed a couple pos