On Fri, Apr 16, 2010 at 2:57 PM, Alexandre Leclerc wrote:
> Thank you guys. I wanted to rush and vacuum the other tables and try, but I
> decided to make a copy first. This is actually running. (I have made enough
> mistakes in one day; better to take the time to do it right.)
>
> After that we try to launch the DB and hopef
On 2010-04-16 16:14, Tom Lane wrote:
Alexandre Leclerc writes:
> The vacuum raised a warning that "max_fsm_pages" (142000) is not enough, and stopped.
That's just a warning that gets put out at the end of the run. Go on
with vacuuming your other databases. Right now is no time to be
worrying about FSM too small --- you need to get back to a runn
Alexandre Leclerc wrote:
> The vacuum raised a warning that "max_fsm_pages" (142000) is not
> enough, and stopped.
That's probably just a warning that it wasn't able to track all the
dead space -- I would expect that. You're going to want to clean up
the bloat anyway. I would try a pg_dumpall at this point
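For later readers: the dump/restore route Kevin mentions can be sketched roughly like this once the copied cluster starts. File names and paths here are invented for illustration, not taken from the thread:

```
REM Hypothetical sketch (Windows, PostgreSQL 8.1); paths are illustrative.
pg_dumpall -U postgres > D:\backup\full_dump.sql
REM ...initdb a fresh cluster (or clean out the bloated one), then...
psql -U postgres -f D:\backup\full_dump.sql postgres
```

The dump/restore rewrites every table compactly, which is what actually reclaims the bloat.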
You could temporarily increase the fsm size in the postgres configuration
so as to be able to properly map all the free space. I think you're going
to do a dump/restore in due course in order to return the database to
something like its normal size, at which point (if you're RAM constrained)
you
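A hedged sketch of the postgresql.conf change being suggested; the exact number is illustrative and should be set comfortably above whatever the VACUUM warning reported as needed. On 8.1 this setting requires a server restart:

```
# postgresql.conf (PostgreSQL 8.x); restart required.
# Value is illustrative: set it above what the VACUUM warning reported.
max_fsm_pages = 200000
```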
Alexandre Leclerc writes:
> I'm always getting:
> WARNING: db "template1" must be vacuumed within 999593 transactions
> HINT: To avoid... execute a full-database VACUUM in "template1"
> ... (repeated many times until 999568)
Yeah, I think you will get that bleat once per table processed, until
yo
On 2010-04-16 15:44, Tom Lane wrote:
"Kevin Grittner" writes:
"Joshua D. Drake" wrote:
if you actually managed to start two services against the
same data directory, I hope you have a backup you can restore
from.
This is 8.1 under Windows, and he connected to a di
Alexandre Leclerc wrote:
> our customer is supposed to have a full file backup from the
> evening.
That's very good news, but given that they've not been going "by the
book" in all respects, it pays to be cautious here. Did they make
the copy while the database service was shut down? If not,
Kevin Grittner wrote:
Also, the "full-database vacuum" terminology seems too likely to be
interpreted as VACUUM FULL for best results. Perhaps it's worth
changing that to just "database vacuum" or "vacuum of the entire
database"
http://archives.postgresql.org/pgsql-committers/2008-12/msg00096.
"Kevin Grittner" writes:
> "Joshua D. Drake" wrote:
>> if you actually managed to start two services against the
>> same data directory, I hope you have a backup you can restore
>> from.
> This is 8.1 under Windows, and he connected to a different database
> with each backend. He got errors w
On 2010-04-16 15:20, Scott Marlowe wrote:
On Fri, Apr 16, 2010 at 12:47 PM, Alexandre Leclerc wrote:
On 2010-04-16 14:18, Kevin Grittner wrote:
Alexandre Leclerc wrote:
At some point I got:
ERROR: xlog flush request AC/FBEEF148 is not satisfied --- flushed
only to
On Fri, Apr 16, 2010 at 12:47 PM, Alexandre Leclerc wrote:
> On 2010-04-16 14:18, Kevin Grittner wrote:
>> Alexandre Leclerc wrote:
>>> At some point I got:
>>> ERROR: xlog flush request AC/FBEEF148 is not satisfied --- flushed
>>> only to AC/FB9224A8
>>> CONTEXT: writing block 0
"Joshua D. Drake" wrote:
> if you actually managed to start two services against the
> same data directory, I hope you have a backup you can restore
> from.
This is 8.1 under Windows, and he connected to a different database
with each backend. He got errors writing the WAL files, and it
appa
On Fri, 2010-04-16 at 14:47 -0400, Alexandre Leclerc wrote:
> I did. :( Shame on me. I just realised while reading the postgres docs
> that it is not made for that, but only for a single instance at a time.
> I hope I did not break anything.
How in the world did you pull that off? PostgreSQL ch
On 2010-04-16 14:18, Kevin Grittner wrote:
Alexandre Leclerc wrote:
At some point I got:
ERROR: xlog flush request AC/FBEEF148 is not satisfied --- flushed
only to AC/FB9224A8
CONTEXT: writing block 0 of relation 1664/0/1214
WARNING: could not write block 0 of 1664/0/1214
DETAIL: Mult
Alexandre Leclerc wrote:
> I also want to mention that maybe I'm not doing it properly.
>
> I started "postgres.exe" and it is inside that "session",
> "backend>" prompt, that I did run the VACUUM command. Is it that
> way
Yes, that's the single-user mode. Just don't run more than one with
t
Alexandre Leclerc wrote:
> At some point I got:
> ERROR: xlog flush request AC/FBEEF148 is not satisfied --- flushed
> only to AC/FB9224A8
> CONTEXT: writing block 0 of relation 1664/0/1214
> WARNING: could not write block 0 of 1664/0/1214
> DETAIL: Multiple failures --- write error may be per
Hi again,
I also want to mention that maybe I'm not doing it properly.
I started "postgres.exe", and it is inside that "session", at the
"backend>" prompt, that I ran the VACUUM command. Is that the way, or should I
use psql to connect to anything "postgres.exe" would have "done" (like
listening to
Hi all,
I might have a bigger problem, but I can't see how to get an
answer. (Indeed the message didn't say anything about VACUUM FULL... I
misinterpreted the message.)
The message says to VACUUM the database postgres.
When I execute:
postgres -D "D:\my\path" postgres
VACUUM;
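For readers finding this thread later, the whole single-user recovery pass can be sketched roughly like this. The path is the thread's own example; on 8.1 the standalone backend is the `postgres` binary, run against one database at a time, with the service stopped first:

```
REM Stop the PostgreSQL service, then run ONE standalone backend at a time.
postgres -D "D:\my\path" postgres
backend> VACUUM;
backend> (end the session with EOF: Ctrl-Z then Enter on Windows)

REM Repeat for every database in the cluster, including template1:
postgres -D "D:\my\path" template1
backend> VACUUM;
```

Never start a second backend (or the service) against the same data directory while one of these sessions is running; that is exactly the mistake discussed earlier in the thread.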
Renato Oliveira wrote:
> I thought the base backup was only necessary once. For example
> once you have done your first base backup, that is it, all you
> need is to replay the logs and backup the logs. What would be
> the reason(s) for you to do weekly base backups?
There are a few reasons, m
Hi Renato,
I had the same question. As far as I understood, the point is that with
a few base backups, not only would log replay be faster for a recovery,
but you also don't need to keep WAL segments archived from before the
most recent base backup.
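The mechanics being discussed can be sketched as follows for the 8.x line; the label is invented for illustration:

```
-- Hypothetical sketch of one base-backup cycle (PostgreSQL 8.x):
SELECT pg_start_backup('weekly-base');
-- ...take a file-level copy of the data directory to backup storage...
SELECT pg_stop_backup();
-- WAL segments archived before this base backup are only needed for
-- restores that start from an older base backup.
```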
I also have a question regarding the frequency
Samuel Stearns wrote:
> I am running version 8.3.3 and encountered a problem
> Does anyone have any ideas how I can keep from getting into this
> duplicate database scenario? Any advice would be greatly
> appreciated.
> it was stated by Tom Lane that the large xmax number may indicate
> a p
Hello,
On Thu, Apr 15, 2010 at 6:30 PM, Tom Lane wrote:
> Jose Berardo writes:
>>>> - Is it possible to store the server.key in a ciphered file with
>>> No.
>> I believe that it may be a good idea, it may bring another security level,
> Not really.
>> Just saving the private key file
Alexandre Leclerc writes:
> *Background:*
> - PostgreSQL 8.1 on Windows Server
> - The customer has disabled regular VACUUM jobs for backup to be taken,
> a year or two ago.
> - They didn't tell us (as far as I can remember).
> - Wednesday morning at 10:55:50: database is shut down to avoid
> wr
Khangelani Gama wrote:
> there is a table that has a broken row, but now I don't know which
> one is broken. The table has about 20974 pages.
If there are any indexes on the table which haven't been corrupted,
you might try selecting ranges of rows using one of them, capturing
the undamaged da
Khangelani Gama writes:
> Please help me, I am using PostgreSQL 7.3.4 running on Redhat5
> there is a table that has a broken row, but now I don't know which one
> is broken. The table has about 20974 pages. Is there a command to find
> this?
You have to use divide-and-conquer. Try
select
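Tom's divide-and-conquer approach can be illustrated with a small, hypothetical sketch: a range reader stands in for `SELECT ... ORDER BY pk LIMIT n OFFSET m` and fails whenever its range touches the corrupt row, and we bisect until a single row remains. The function names and the simulated reader are inventions for illustration, not part of any PostgreSQL API:

```python
def find_broken_row(read_range, lo, hi):
    """Binary-search for the single broken row in [lo, hi).

    read_range(a, b) stands in for reading rows a..b-1 (e.g. with
    SELECT ... ORDER BY pk LIMIT (b - a) OFFSET a) and raises IOError
    when the range covers the corrupt row.
    """
    while hi - lo > 1:
        mid = (lo + hi) // 2
        try:
            read_range(lo, mid)  # lower half reads cleanly...
            lo = mid             # ...so the bad row is in the upper half
        except IOError:
            hi = mid             # failure reproduced: narrow to lower half
    return lo                    # index of the unreadable row


def simulate_table(broken_index):
    """Fake reader: raises iff the requested range covers broken_index."""
    def read_range(a, b):
        if a <= broken_index < b:
            raise IOError("could not read row %d" % broken_index)
    return read_range
```

Each probe halves the search space, so even a table the size mentioned here (about 20974 pages) needs only on the order of log2(N) SELECTs to corner the bad row.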
Alexandre Leclerc wrote:
> - PostgreSQL 8.1 on Windows Server
That's not a supported environment.
http://www.postgresql.org/about/news.865
They should really be looking at upgrading.
> - The customer has disabled regular VACUUM jobs for backup to be
> taken, a year or two ago.
Ouch. O
Alexandre Leclerc wrote:
- 2. Could we stop VACUUM FULL, simply restart the postmaster, and
start a normal VACUUM even if it's slow?
This is what you want to do. VACUUM FULL is the slow way--much, much
slower--and it is not needed to clean up from wraparound issues. Here's
two more opinion
Hi all,
I'm sorry for the urgency of the question. (We have a customer whose DB
has been "down" for 36 hours and business operations are compromised. Thank
you for your help.)
*Background:*
- PostgreSQL 8.1 on Windows Server
- The customer has disabled regular VACUUM jobs for backup to be taken,
Please help me, I am using PostgreSQL 7.3.4 running on Redhat5
there is a table that has a broken row, but now I don't know which one is
broken. The table has about 20974 pages. Is there a command to find this?
because I used select commands like: select * from table order by column desc
limit
Hello.
> > > I'm trying to use the java keytool in place of openssl.
> > > - I believe that it is not possible to start the PostgreSQL server
> > > without openssl (and the ssl-dev package in Debian), is it correct?
> >
> > Yes, I don't think the java keytool works.
>
> Oh, the documentation defeated
I was looking into SkyTools, it sounds quite good. I am going to revisit this
PITR solution once it is implemented for sure.
I will keep an eye on it, see how it goes in production, and see what we
need to adjust over time.
Thank you very much for your help; I really appreciated it.
Renato
Renato Oliv
I am sorry Kevin, I really appreciate your experience and your knowledge, and
that's why I am asking; I thought the base backup was only necessary once. For
example once you have done your first base backup, that is it, all you need is
to replay the logs and backup the logs.
What would be the r
33 matches