On Thu, 19 Oct 2006 06:14:46 -0600, Rick Gigger wrote:
I think we've got it figured out, though. We were able to patch up the db enough to extract the data, with some help from Google and old postings from Tom.
It would be really great if you put down the specifics of what you googled
Ron Johnson wrote:
On 10/18/06 19:57, Rick Gigger wrote:
[snip]
Not much that is useful. I think
To make a long story short, let's just say that I had a bit of a hardware failure recently.
If I got an error like this when trying to dump a db from the mangled data directory, is it safe to say it's totally hosed, or is there some chance of recovery?
pg_dump: ERROR: could not open
jef peeraer wrote:
beer wrote:
Hello All
So I have an old database that is SQL_ASCII encoded. For a variety of reasons I need to convert the database to UNICODE. I did some googling on this but have yet to find anything that looked like a viable option, so I thought I'd post to the
only when I specify. But if it kills vacuum I
will have to take a different approach.
On Mar 3, 2006, at 2:59 AM, Ragnar wrote:
On fim, 2006-03-02 at 11:03 -0700, Rick Gigger wrote:
Never mind that. I'm assuming statement_timeout is what I need?
Yes, but take care if you change
not be affected, but a vacuum run through psql would be.
You can also set it for a user (see alter user ... set ...), and use
separate users for application access and maintenance work.
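As a sketch of that approach (role names here are made up; note that in releases of this era statement_timeout is an integer number of milliseconds):

```sql
-- abort any statement in this session that runs longer than 5 minutes
SET statement_timeout = 300000;  -- milliseconds; 0 disables the timeout

-- per-user defaults, so the app is capped but maintenance (vacuum) is not
ALTER USER app_user SET statement_timeout = 300000;
ALTER USER maint_user SET statement_timeout = 0;
```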
Cheers,
Csaba.
On Fri, 2006-03-03 at 11:03, Rick Gigger wrote:
Oh that will abort vacuum after that time as well? Can anyone
Is there a way to put a timeout on a query so that if it runs longer
than 5 minutes or something it is just automatically terminated?
---(end of broadcast)---
TIP 6: explain analyze is your friend
Never mind that. I'm assuming statement_timeout is what I need?
On Mar 2, 2006, at 11:01 AM, Rick Gigger wrote:
Is there a way to put a timeout on a query so that if it runs
longer than 5 minutes or something it is just automatically
terminated?
I have this exact problem. I have dumped and reloaded other
databases and set the client encoding to convert them to UTF-8 but I
have one database with values that still cause it to fail, even if I
specify that the client encoding is SQL_ASCII. How do I fix that?
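For what it's worth, the usual workaround is to run the dump through iconv before reloading. This is only a sketch, and it assumes the stray bytes really are Latin-1, which they may not be:

```shell
pg_dump olddb > olddb.sql
iconv -f LATIN1 -t UTF-8 olddb.sql > olddb.utf8.sql  # fails loudly on bytes it can't map
createdb -E UNICODE newdb
psql -d newdb -f olddb.utf8.sql
```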
On Feb 17, 2006, at 4:08
Yeah, that's how I remember mysql doing it. I'm sure postgres
doesn't want anything to do with how they do it. If I recall it was
kind of convenient sometimes as long as you only select fields that
are unambiguous.
For instance take the query where table first_table has primary key
a:
of the stop wal file was in the backup history file.
Rick
On Jan 30, 2006, at 10:20 PM, Bruce Momjian wrote:
Yes, I think copying it while it is being written is safe.
Rick Gigger wrote:
Yes! Thank you
Why doesn't mysql just forget the whole dual licensing of the server
thing and just tell everyone to use the GPL versions of everything.
Then dual license the client libraries which I would think they
already own outright. I think this is what forces most people to
need a commercial
There is a little trick you can do though, it goes something like this:
insert into table (field1, field2, field3) select v1, v2, v3 union select b1, b2, b3 union select c1, c2, c3
I originally did this because it was significantly faster on SQL
Server 2000 than doing the inserts individually.
Is the ordering guaranteed to be the same on both boxes if you do this?
Rick
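Spelled out, the trick looks like the sketch below (table and values are made up). On ordering: a plain UNION dedup-sorts its rows, so the order the rows arrive in is not guaranteed on either box; UNION ALL at least skips that step, and an explicit ORDER BY is the only real guarantee:

```sql
-- multi-row insert via UNION ALL (pre-8.2 syntax; 8.2 adds multi-row VALUES)
INSERT INTO items (a, b, c)
SELECT 1, 'x', 10
UNION ALL SELECT 2, 'y', 20
UNION ALL SELECT 3, 'z', 30;
```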
On Feb 9, 2006, at 1:03 PM, Philippe Ferreira wrote:
Are there any tools that can compare a database schema, and
produce sql of the changes from one version to the next.
We have a development server, and it
Hi,
Our IT budget is not very big, but even so I'm trying to set up a PostgreSQL high availability solution for our business.
My managers gave me the following requirements that I must meet:
. the system may be out of service for no more than 2 hours
. at most the last 5 minutes of work may be lost
The
Wonderful. That is good news. Thanks.
Rick
On Jan 31, 2006, at 7:14 AM, Tom Lane wrote:
Rick Gigger [EMAIL PROTECTED] writes:
That's what I mean by invalid. Let's say I do something stupid and
do a physical backup and I don't grab the current WAL file. All I
have is the last one
Nagios and snmp
http://www.nagios.org/
On Jan 31, 2006, at 9:06 AM, Henrique Engelmann wrote:
Hi,
We've many postgresql servers running in linux Redhat/Fedora boxes in our enterprise and we're looking for some tool to help us to administer and monitor those systems. This tool should
I haven't tried those specific versions but I'm guessing a build from
source will work great.
If you are ok with just using 8.0 then you could use fink (http://
fink.sourceforge.net). Fink is apt-get for Mac. It looks like 8.1
is still in their unstable branch. Hopefully it will be moved
On Jan 30, 2006, at 12:28 AM, Ron Marom wrote:
First of all, thanks for your quick and efficient response.
Indeed I forgot to mention that I AM vacuuming the database using a daemon every few hours; however this seems not to be the issue this time, as when the CPU consumption went up I tried
them done for 8.2.
Rick Gigger wrote:
I guess my email wasn't all that clear. I will try to rephrase. I
am moving from using the old style pg_dump for backups to using
incrementals and want to make sure I understand
And here is the real million dollar question. Let's say for some
reason I don't have the last WAL file I need for my backup to be
valid. Will it die and tell me it's bad or will it just start up
with a screwed up data directory?
On Jan 30, 2006, at 4:29 PM, Rick Gigger wrote:
Yes
On Jan 30, 2006, at 6:58 PM, Tom Lane wrote:
Rick Gigger [EMAIL PROTECTED] writes:
And here is the real million dollar question. Let's say for some
reason I don't have the last WAL file I need for my backup to be
valid. Will it die and tell me it's bad or will it just start up
with a screwed
fit your needs or might not...
Cheers,
Csaba.
On Thu, 2006-01-26 at 18:48, Rick Gigger wrote:
Um, no you didn't read my email at all. I am aware of all of that
and it is clearly outlined in the docs. My email was about a
specific detail in the process. Please read it if you want to know
what
dump?
I hope that makes more sense.
Thanks,
Rick
On Jan 27, 2006, at 3:33 AM, Richard Huxton wrote:
Rick Gigger wrote:
Um, no you didn't read my email at all. I am aware of all of that
and it is clearly outlined in the docs. My email was about a
specific detail in the process. Please read
I am looking into using WAL archiving for incremental backups. It
all seems fairly straightforward except for one thing.
So you set up the archiving of the WAL files. Then you set up cron
or something to regularly do a physical backup of the data
directory. But when you do the physical
to any point in time you need. You can then supply all the WAL files which are needed by the last file system backup to recover after a crash, or you can supply all the WAL files up to the time point just before your student DBA deleted all your data.
HTH,
Csaba.
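A minimal sketch of the two sides of that setup, in the 8.x-era file layout (the paths and the target time are made up):

```
# postgresql.conf on the live server: copy each completed WAL segment away
archive_command = 'cp %p /mnt/backup/wal/%f'

# recovery.conf in the restored data directory
restore_command = 'cp /mnt/backup/wal/%f %p'
recovery_target_time = '2006-01-26 12:00:00'
```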
On Thu, 2006-01-26 at 18:33, Rick
Every once in a while I've noticed the number of processes I've got
running jumps up a little higher than normal. So I check it out and
realize that I'm building up a bunch of processes that just aren't
completing. About half of the one's not completing say SELECT
waiting. I'm not doing
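A couple of starting points for diagnosing stuck backends like those (a sketch; column names are the pre-8.2 ones, and they vary between releases):

```sql
-- ungranted lock requests: who is stuck waiting, and on what
SELECT pid, relation, mode FROM pg_locks WHERE NOT granted;

-- what each backend is currently running
SELECT procpid, current_query FROM pg_stat_activity;
```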
I figure this would be a good place to ask. I want to build / buy a
new linux postgres box. I was wondering if anyone on this list had
some experience with this they'd like to share. I'm thinking
somewhere in the $7k - 15k range. The most important things are write speed to the disk and
Sent: Monday, January 23, 2006 2:13 PM
To: pgsql general
Subject: [GENERAL] Linux - postgres RAID
I figure this would be a good place to ask. I want to build / buy a
new linux postgres box. I was wondering if anyone on this list
, at 11:13 AM, Rick Gigger wrote:
I figure this would be a good place to ask. I want to build / buy
a new linux postgres box. I was wondering if anyone on this list
had some experience with this they'd like to share. I'm thinking
somewhere in the $7k - 15k range. The most important things
I got this message:
2006-01-20 11:50:51 PANIC: creation of file /var/lib/pgsql/data/
pg_clog/0292 failed: File exists
In 7.3. It caused the server to restart.
Can anyone tell me what it means?
I have a table that I populate with a stored procedure. When the
stored procedure runs it deletes the table and rebuilds the whole
thing from scratch. Initially it performs terribly but when I play
around with it for a while (I will describe this in a moment) it runs
very, very fast.
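If the rebuild starts with a DELETE, the dead rows it leaves behind are a likely culprit for the initial slowness. A sketch of the usual fix (the table name is made up):

```sql
TRUNCATE report_cache;   -- unlike DELETE, leaves no dead tuples behind
-- ... repopulate the table ...
ANALYZE report_cache;    -- refresh planner statistics for the new contents
```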
Don't forget the auto-vacuum daemon!
Martijn van Oosterhout wrote:
Crikey! Tablespaces, Win32, nested transactions and PITR. Almost worth
a version 8 :)
On Tue, Jun 22, 2004 at 12:37:39AM -0400, Robert Treat wrote:
== PostgreSQL Weekly News - June 22nd 2004 ==
[snip]
One more question: on one server the VACUUM ANALYZE before the insert takes approx. 2 min; after that, the same command takes 15 min.
You might try a VACUUM FULL sometime when you can deal with 15min of downtime
or so. Actually it would probably be longer. Perhaps the table that's taking
15min has a
The link you have down there is not the one on the site. All of the
links to that file work just fine for me on the live site.
Jan Wieck wrote:
On 6/4/2004 4:47 AM, Karel Zak wrote:
On Fri, Jun 04, 2004 at 01:01:19AM -0400, Jan Wieck wrote:
Yes, Slonik's, it's true. After nearly a year the
This is a huge improvement over GBorg! I feel much more comfortable
with this than I ever did with GBorg. GBorg always seemed very
unfriendly and the crusty look and feel at first made me wonder how
legitimate it was as a source for serious projects. I realize now that
it housed some great
So can I quietly beg the Win32 group to expedite this port. I believe
you will be utterly astonished at the demand. Please.
I'm sure quietly begging certain developers with your pocketbook probably wouldn't hurt your cause either. :)
Actually though from what I read here on this list
I want to know how much memory I've got free on my system.
The free command gives me something like this:
             total       used       free     shared    buffers     cached
Mem:       2064832    2046196      18636          0     146892    1736968
-/+ buffers/cache:     162336    1902496
I am running a few web based applications with postgres on the backend.
We have a few app servers load balanced all connecting to a dedicated
postgres server. As usage on the applications increases I want to
monitor my resources so that I can anticipate when I will hit
bottlenecks on the db
Randolf Richardson wrote:
In dealing with web applications and frontends to database or
even just a dynamic web site PHP has every bit the power and ability
that
Java does and the development time is way down.
Uh, how about threads. I know that you don't need them much but it sure
would
Rick Gigger [EMAIL PROTECTED] writes:
All of this explains why an embedded PostgreSQL isn't a great idea. It
being a true multi-user database means that even if you went though
all the work needed to turn it into an embedded database you wouldn't
get most of the advantages.
Is it true
Jeff Bowden [EMAIL PROTECTED] writes:
That makes sense to me. I wonder if sqlite suffers for this problem
(e.g. app crashing and corrupting the database).
Likely. I can tell you that Ann Harrison once told me she made a decent
amount of money as a consultant fixing broken
On Wed, 14 Jan 2004, Joshua D. Drake wrote:
Not to mention that PostgreSQL.Org has some of the most complete
documentation
of any software out there.
Yes, I don't understand why people seem to keep complaining about
Postgres' documentation - it is by far the best reference documentation
I used to have that complaint until I got more acquainted with the docs. When I used to use mysql I found that if I used the search feature on their docs I could find exactly what I was looking for almost immediately. When I use the postgres doc search feature I don't get the same experience. It
Does anyone have any experience with postgres full text search?
It works well but it is my understanding that our docs search doesn't
use PostgreSQL
and TSearch. It uses PostgreSQL mnogosearch or something like that.
That's good to hear. What is mnogosearch and is it the problem here? Why
I've never actually used them but I'm guessing that this is what you're looking for. Can anyone verify this?
http://us2.php.net/ibase
rg
- Original Message -
From: Robert Treat [EMAIL PROTECTED]
To: Paul Ganainm [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, December 18, 2003
On a more serious note though I stopped doing comparisons between postgres
and mysql a long time ago. As soon as I realized that it didn't have unions
(yes I know that it does now) I never took it seriously again. But I have
noticed some very good things about firebird. Namely:
tested, solid,
Ok, I see what you're trying to do. In looking at this it occurs to me that one of the ways to aid in this effort is through more tech documents. For instance, I have asked before what is the recommended procedure or strategy for recovering a database that has crashed. Something like that is
I was glad to see this topic come up on the list as I was about to start asking about some of these issues myself. I would like to discuss each of the methods I have researched so far for doing trees in sql and see if anyone has some experience or insight into the topic that could help me. That
Is there a convenient way to tell in postgres if a transaction has been
started or not?
[snip]
In summary, you could be charging them for some very expensive courier
services, if for which they don't pay then you won't deliver. =)
Of course a competitor could purchase a copy or get it from a customer
and set up shop right away selling it too.
Ah, so even the GPL has
This is only a problem for ext2. Ext3, Reiser, XFS, JFS are all fine,
though you get better performance from them by mounting them
'writeback'.
What does 'writeback' do exactly?
Thanks! This is exactly what I wanted to know when I first asked the question. And it is the only response that seems to make sense. Does anyone else have experience with this?
rg
Here's a quick list of my experiences with BLOB's and such.
Performance is just fine, I get about 1M hits a
I used it first because
1) someone suggested it and I didn't know any better
2) install, setup, maintenance and using it is easier than breathing. You'd be surprised how much of a difference it makes to a newbie to not have to do things like vacuum regularly and the ability to change a column
Note: I am a php developer and I love it, but...
In dealing with web applications and frontends to database or
even just a dynamic web site PHP has every bit the power and ability that
Java does and the development time is way down.
Uh, how about threads. I know that you don't need them much
Here is a link to the sql for smarties book:
http://www.amazon.com/exec/obidos/tg/detail/-/1558603239/102-3995931-726?v=glance
by Joe Celko
Has some cool ways of handling trees in sql
- Original Message -
From: Chris Travers [EMAIL PROTECTED]
To: Tony [EMAIL PROTECTED]
Cc: [EMAIL
What is the best method for storing files in postgres? Is it better to use
the large object functions or to just encode the data and store it in a
regular text or data field?
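For comparison, the two approaches look roughly like this (a sketch; table and column names are made up). bytea keeps the data in the row and is dumped and restored with the table itself, while large objects store a reference and are streamed through the lo_* functions:

```sql
-- bytea: simplest; large values are TOASTed automatically
CREATE TABLE attachments (name text PRIMARY KEY, data bytea);

-- large object: the row keeps only an OID; the bytes live in pg_largeobject
CREATE TABLE attachments_lo (name text PRIMARY KEY, data_oid oid);
-- server-side import (requires superuser): SELECT lo_import('/tmp/report.pdf');
```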
I will search the archives but does anyone know off the top of their head
which performs better?
- Original Message -
From: Keith C. Perry [EMAIL PROTECTED]
To: Rick Gigger [EMAIL PROTECTED]
Cc: PgSQL General ML [EMAIL PROTECTED]
Sent: Tuesday, November 18, 2003 12:25 PM
Subject: Re
Does anyone have any experience with postgres on fedora?
Is this correct?
vacuum by itself just cleans out the old extraneous tuples so that they
aren't in the way anymore
Actually it puts the free space in each page on a list (the free space
map) so it can be reused for new tuples without having to allocate
fresh pages. It finds free space
Are there any guidelines on how often one should do
a reindex?
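To untangle those commands, here is a quick sketch of what each one actually does (hedged, since details shift between releases; mytable is a placeholder):

```sql
VACUUM mytable;           -- records dead-tuple space in the free space map for reuse
VACUUM ANALYZE mytable;   -- same, plus refreshes planner statistics (no index rebuild)
VACUUM FULL mytable;      -- compacts the table on disk; takes an exclusive lock
REINDEX TABLE mytable;    -- rebuilds the indexes; useful when they have bloated
```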
- Original Message -
From: Reece Hart
To: scott.marlowe
Cc: pgsql-general
Sent: Thursday, November 13, 2003 12:50 PM
Subject: Re: [GENERAL] More Praise for 7.4RC2
On Thu, 2003-11-13 at 10:09,
Is this correct?
vacuum by itself just cleans out the old extraneous tuples so that they
aren't in the way anymore
vacuum analyze rebuilds indexes. If you add an index to a table it won't be
used until you vacuum analyze it
vacuum full actually compresses the table on disk by reclaiming the
In the following situation:
You do a large transaction where lots of rows are updated.
All of your tables/indexes are cached in memory.
When are the updated rows written out to disk? When they are updated inside the transaction, or when the transaction is completed?
The data is written
I have heard that postgres will not use an index
unless the field has a not null constraint on it. Is that
true?
guessing that it couldn't make too big of a performance difference or it would probably be implemented already.
Question 2:
Do serial ATA drives suffer from the same issue?
- Original Message -
From: Tom Lane [EMAIL PROTECTED]
To: Rick Gigger [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent
It seems to me file system journaling should fix the whole problem by giving you a record of what was actually committed to disk and what was not. I must not understand journaling correctly. Can anyone explain to me how journaling works?
- Original Message -
From: Bruce Momjian [EMAIL
My guess is this will happen naturally after using postgres for a short time.
(That's what happened to me.)
- Original Message -
From: Alvaro Herrera [EMAIL PROTECTED]
To: scott.marlowe [EMAIL PROTECTED]
Cc: Errol Neal [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Wednesday, October 15, 2003
I would guess most likely not. There are a few mysql features that postgres doesn't have (for example mysql_insert_id) but there are still ways to do them in postgres. I doubt it will be very hard.
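A sketch of the usual postgres equivalent of mysql_insert_id (table and sequence names are made up; currval is per-session, so it is safe under concurrent inserts):

```sql
CREATE TABLE accounts (id serial PRIMARY KEY, name text);
INSERT INTO accounts (name) VALUES ('alice');
SELECT currval('accounts_id_seq');  -- the id this session just generated
```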
- Original Message -
From: Errol Neal [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent:
My experience with mysql and postgres was this. I had some apps that were
running on SQL Server and I wanted to get rid of it because it was
expensive. Didn't really do much for us that the others couldn't, and I
wanted to get rid of windows. Plus administratively SQL Server just seemed
to have
changed
very often and I will have to undergo a testing cycle for each of them just
to maintain compatibility with postgres 7.2.4. This is not something I
really want to do. I would much prefer to just upgrade and have my legacy
apps work without modification or testing.
Thanks,
Rick Gigger
Two questions:
1) how would I go about doing that
2) is there any chance that doing that could break other things?
thanks,
Rick Gigger
- Original Message -
From: Tom Lane [EMAIL PROTECTED]
To: Rick Gigger [EMAIL PROTECTED]
Sent: Monday, October 06, 2003 2:57 PM
Subject: Re: [GENERAL