Chris Browne wrote:
> In support of PG 8.2, we need to have the log trigger function do the
> following:
> - Save the value of standard_conforming_strings
> - Set standard_conforming_strings to FALSE
> - Proceed with saving data to sl_log_?
> - Restore the value of standard_conforming_strings
In support of PG 8.2, we need to have the log trigger function do the
following:
- Save the value of standard_conforming_strings
- Set standard_conforming_strings to FALSE
- Proceed with saving data to sl_log_?
- Restore the value of standard_conforming_strings
The variable, standard_co
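The save/set/restore steps above could be sketched in PL/pgSQL (a hedged illustration only; the real Slony-I log trigger is written in C, and the function name here is a placeholder):

```sql
-- Hedged sketch of the save/set/restore pattern described above.
-- Not the actual Slony-I code; names are placeholders.
CREATE OR REPLACE FUNCTION log_with_scs_off() RETURNS trigger AS $$
DECLARE
    saved text;
BEGIN
    saved := current_setting('standard_conforming_strings');
    -- is_local = true limits the change to the current transaction
    PERFORM set_config('standard_conforming_strings', 'off', true);
    -- ... proceed with saving data to sl_log_? here ...
    PERFORM set_config('standard_conforming_strings', saved, true);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

Using set_config() with is_local = true keeps the change scoped to the transaction, so an aborted transaction cannot leave the GUC stuck at the wrong value.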
On Thursday 06 July 2006 21:55, Martijn van Oosterhout wrote:
> On Thu, Jul 06, 2006 at 07:43:20PM +0300, Tzahi Fadida wrote:
> > The downside is that I noticed that the CTID is removed from the tuple
> > if a cast occurs. Is there a way to tell postgresql to not remove the
> > CTID?
>
> Err, the f
On Thu, Jul 06, 2006 at 07:43:20PM +0300, Tzahi Fadida wrote:
> The downside is that I noticed that the CTID is removed from the tuple
> if a cast occurs. Is there a way to tell postgresql to not remove the
> CTID?
Err, the fact the ctid is removed is really just a side-effect. With no
adjusting o
I looked around in the code and the whole thing looks complex
and prone to breaking my code often, i.e., whenever someone decides to
change the casting/operators. I thought about just
issuing the proper casting in the SPI_prepare query, like:
SELECT a0::text, a1::text ...
Casting to equal types (whe
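A hedged illustration of that cast-to-text approach (column and table names are invented):

```sql
-- Hedged illustration: comparing values of differing types by casting
-- both sides to text in the prepared query (names are placeholders).
SELECT a0::text = a1::text AS eq FROM my_table;
-- Caveat: text comparison loses type semantics; e.g. 20.0::text is '20.0',
-- which does not equal '20', even though 20.0 = 20 numerically.
```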
Martijn van Oosterhout writes:
> On Wed, Jun 28, 2006 at 03:25:57PM +0300, Tzahi Fadida wrote:
>> I need help finding out how to determine if two types are equality compatible
>> and compare them.
> Fortunatly the backend contains functions that do all this already.
> Check out parser/parse_oper.
On Wed, Jun 28, 2006 at 03:25:57PM +0300, Tzahi Fadida wrote:
> Hi,
>
> I need help finding out how to determine if two types are equality compatible
> and compare them.
> Currently I only allow two values of the same type, but I wish to allow
> comparing values like "20.2"=?20.2 or 20=?20
Hi,
I need help finding out how to determine if two types are equality compatible
and compare them.
I am using the following call to check for equality between two values:
DatumGetBool(FunctionCall2(&(fctx->tupleSetAttEQFunctions[attID]->eq_opr_finfo),
                           lvalue, rvalue))
The
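The operator lookup that the replies below point at (parser/parse_oper) can be sketched as server-side C. This is a hedged illustration against 8.x-era headers: the helper name is mine, the includes are my assumptions, and exact signatures vary across PostgreSQL versions.

```c
/* Hedged sketch: look up the "=" operator for a pair of types via
 * parser/parse_oper instead of hand-rolling casts. Illustrative only;
 * check the headers of your PostgreSQL version for exact signatures. */
#include "postgres.h"
#include "nodes/pg_list.h"
#include "nodes/value.h"
#include "parser/parse_oper.h"
#include "utils/lsyscache.h"

static Oid
equality_proc_for(Oid ltype, Oid rtype)
{
    /* compatible_oper_opid() returns the pg_operator OID, coercing the
     * argument types if possible; InvalidOid if no operator is found. */
    Oid opno = compatible_oper_opid(list_make1(makeString("=")),
                                    ltype, rtype, true);

    if (!OidIsValid(opno))
        return InvalidOid;
    return get_opcode(opno);    /* underlying proc for fmgr_info() */
}
```

The returned proc OID can be handed to fmgr_info() and then used with the FunctionCall2 pattern quoted above.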
On 5/16/06, winlinchu <[EMAIL PROTECTED]> wrote:
I must write a report on the physical structures of DBMSs.
I chose PostgreSQL; I looked at the sources, and I have
understood the block structure. But what about relations and databases?
How are they structured?
Is there a question in there?
If you're looking f
Hi to all! I am a student in Computer Science, and in the Databases Technology course I must write a report on the physical structures of DBMSs. I chose PostgreSQL; I looked at the sources, and I have understood the block structure. But what about relations and databases? How are they structured? Thanks!!!
"yy h" <[EMAIL PROTECTED]> writes:
> I was trying to modify the physical page layout in PostgreSQL.
Uh, why?
> I understand the source code for physical page organization is located
> at bufpage.c but I wonder what is the external interface for this
> physical page organization. Interface like I
Hi,
I was trying to modify the physical page layout in PostgreSQL. I understand the source code for physical page organization is located at bufpage.c but I wonder what is the external interface for this physical page organization. Interface like InsertRecToPage, DeleteRecFromPage, GetAttr, GetRe
On Nov 28, 2005, at 4:13 PM, Tom Lane wrote:
Yeah, could be. Anyway it doesn't seem like we can learn much more
today. You might as well just zing the vacuumdb process and let
things get back to normal. If it happens again, we'd have reason
to dig deeper.
Final report [ and apologies to ha
On Nov 28, 2005, at 1:46 PM, Tom Lane wrote:
James Robinson <[EMAIL PROTECTED]> writes:
backtrace of the sshd doesn't look good:
Stripped executable :-( ... you won't get much info there. What of
the client at the far end of the ssh connection? You should probably
assume that the blockage
James Robinson <[EMAIL PROTECTED]> writes:
> Given the other culprits in play are bash running a straightforward
> shellscript line with redirected output to a simple file on a non-
> full filesystem, I'm leaning more towards the odds that something
> related to the sshd + tcp/ip + ssh client
James Robinson <[EMAIL PROTECTED]> writes:
> backtrace of the sshd doesn't look good:
Stripped executable :-( ... you won't get much info there. What of
the client at the far end of the ssh connection? You should probably
assume that the blockage is there, rather than in a commonly used bit
of s
On Nov 28, 2005, at 12:00 PM, Tom Lane wrote:
Your next move is to look at the state of sshd
and whatever is running at the client end of the ssh tunnel.
backtrace of the sshd doesn't look good:
(gdb) bt
#0 0xe410 in ?? ()
#1 0xbfffdb48 in ?? ()
#2 0x080a1e28 in ?? ()
#3 0x080a1e78
James Robinson <[EMAIL PROTECTED]> writes:
> On Nov 28, 2005, at 11:38 AM, Tom Lane wrote:
>> Can you get a similar backtrace from the vacuumdb process?
> OK:
> (gdb) bt
> #0 0xe410 in ?? ()
> #1 0xbfffe4f8 in ?? ()
> #2 0x0030 in ?? ()
> #3 0x08057b68 in ?? ()
> #4 0xb7e98533 in
On Nov 28, 2005, at 11:38 AM, Tom Lane wrote:
Can you get a similar backtrace from the vacuumdb process?
(Obviously,
give gdb the vacuumdb executable not the postgres one.)
OK:
(gdb) bt
#0 0xe410 in ?? ()
#1 0xbfffe4f8 in ?? ()
#2 0x0030 in ?? ()
#3 0x08057b68 in ?? ()
#4 0xb7e
James Robinson <[EMAIL PROTECTED]> writes:
> (gdb) bt
> #0 0xe410 in ?? ()
> #1 0xbfffd508 in ?? ()
> #2 0x082aef97 in PqSendBuffer ()
> #3 0xbfffd4f0 in ?? ()
> #4 0xb7ec03e1 in send () from /lib/tls/libc.so.6
> #5 0x08137d27 in secure_write ()
> #6 0x0813c2a7 in internal_flush ()
> #7
Here ya go -- BTW -- you guys' support is the _best_. But you know
that already:
[EMAIL PROTECTED]:/home/sscadmin> gdb /usr/local/pgsql/bin/postgres 19244
GNU gdb 6.2.1
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and
you are
w
James Robinson <[EMAIL PROTECTED]> writes:
> As fate would have it, the vacuumdb frontend and backend which were
> initially afflicted are still in existence:
OK, so pid 19244 isn't blocked on any lmgr lock; else we'd see an entry
with granted = f for it in pg_locks. It could be blocked on a lo
As fate would have it, the vacuumdb frontend and backend which were
initially afflicted are still in existence:
sscadmin 19236 19235 0 Nov25 ?        00:00:00 /usr/local/pgsql/bin/
vacuumdb -U postgres --all --analyze --verbose
postgres 19244  3596 0 Nov25 ?        00:00:02 postgres: postgre
James Robinson <[EMAIL PROTECTED]> writes:
> Comparing the logs further with when it did complete, it seems that
> one table in particular (at least) seems afflicted:
> social=# vacuum verbose analyze agency.swlog_client;
> hangs up forever -- have to control-c the client. Likewise for w/o
>
G'day folks.
We have a production database running 8.0.3 which gets fully
pg_dump'd and vacuum analyze'd hourly by cron. Something strange
happened to us on the 5AM Friday Nov. 25'th cron run -- the:
/usr/local/pgsql/bin/vacuumdb -U postgres --all --analyze --verbose
>& $DATE/vacuum.log
Subject: Re: [HACKERS] Help with Array Function in C language...
"Cristian Prieto" <[EMAIL PROTECTED]> writes:
> Datum
> test_array(PG_FUNCTION_ARGS)
> {
> ArrayType *v = PG_GETARG_ARRAYTYPE_P(1);
>
"Cristian Prieto" <[EMAIL PROTECTED]> writes:
> Datum
> test_array(PG_FUNCTION_ARGS)
> {
> ArrayType *v = PG_GETARG_ARRAYTYPE_P(1);
> Datum element;
> Oid array_type = get_array_type(v);
I think you want get_element_type, instead. And you definitely ought to
be check
Hello, I'm doing a very simple C language function in PostgreSQL but I can't
figure out why this is not working, the documentation about the PostgreSQL
internals is not so good about arrays and I couldn't find a suitable example
of the use of some kind of array functions inside the pgsql source tre
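For reference, a hedged sketch of the corrected shape Tom describes in his reply: use the element type carried by the array itself rather than get_array_type(). This assumes PostgreSQL 8.2-era server APIs (the nulls argument to deconstruct_array is newer than 8.0), and the function body is illustrative, not Cristian's actual code.

```c
/* Hedged sketch (8.2-era server APIs; illustrative, not the poster's code):
 * fetch an array argument and walk its elements. */
#include "postgres.h"
#include "fmgr.h"
#include "utils/array.h"
#include "utils/lsyscache.h"

PG_FUNCTION_INFO_V1(test_array);

Datum
test_array(PG_FUNCTION_ARGS)
{
    ArrayType  *v = PG_GETARG_ARRAYTYPE_P(0);
    Oid         elem_type = ARR_ELEMTYPE(v);  /* element type OID from the array */
    int16       typlen;
    bool        typbyval;
    char        typalign;
    Datum      *elems;
    bool       *nulls;
    int         nelems;

    get_typlenbyvalalign(elem_type, &typlen, &typbyval, &typalign);
    deconstruct_array(v, elem_type, typlen, typbyval, typalign,
                      &elems, &nulls, &nelems);

    /* ... operate on elems[0 .. nelems-1] here ... */

    PG_RETURN_INT32(nelems);
}
```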
Did you not see the posts by Richard and Dennis?
ElayaRaja S wrote:
Hi,
I am unable to restart the PostgreSQL. I am using redhat Linux 9
with postgresql 7.4.5. Unexpectedly due to ups problem my server was
shutdown once. After that i am unable to restart the server. How to
stop and start.
Present s
Hi,
I am unable to restart PostgreSQL. I am using Red Hat Linux 9
with PostgreSQL 7.4.5. Unexpectedly, due to a UPS problem, my server was
shut down once. After that I am unable to restart the server. How do I
stop and start it?
The present status is "running". If I try to start it I am getting:
1) bash-2.
ElayaRaja S wrote:
Hi,
I am unable to restart the PostgreSQL. I am using redhat Linux 9
with postgresql 7.4.5. Unexpectedly due to ups problem my server was
shutdown once. After that i am unable to restart the server. How to
stop and start.
Present status is running. If i tried to start i am ge
On Fri, 15 Apr 2005, ElayaRaja S wrote:
> Hi,
> I am unable to restart the PostgreSQL. I am using redhat Linux 9
> with postgresql 7.4.5. Unexpectedly due to ups problem my server was
> shutdown once. After that i am unable to restart the server.
> DETAIL: The data directory was initialized by
Hi,
I am unable to restart PostgreSQL. I am using Red Hat Linux 9
with PostgreSQL 7.4.5. Unexpectedly, due to a UPS problem, my server was
shut down once. After that I am unable to restart the server. How do I
stop and start it?
The present status is "running". If I try to start it I am getting:
1) bash-
strk <[EMAIL PROTECTED]> writes:
> On Wed, Mar 23, 2005 at 01:48:11PM +, Richard Huxton wrote:
>> *What* is giving this error? Something seems to be holding onto a
>> reference to (at a guess) your temporary table. Can you identify what?
> Whatever is called from create temp table ..
"\set V
On Wed, Mar 23, 2005 at 02:49:53PM +, Richard Huxton wrote:
> strk wrote:
> >On Wed, Mar 23, 2005 at 01:48:11PM +, Richard Huxton wrote:
> >
> >>strk wrote:
> >>
> >>>Hello.
> >>>A memory fault in a trigger left my database
> >>>in a corrupted state:
> >>>
> >>
> >>> - I can't create temp
strk wrote:
On Wed, Mar 23, 2005 at 01:48:11PM +, Richard Huxton wrote:
strk wrote:
Hello.
A memory fault in a trigger left my database
in a corrupted state:
- I can't create temporary tables anymore
(restart/vacuum full don't help)
ERROR: cache lookup failed for r
On Wed, Mar 23, 2005 at 01:48:11PM +, Richard Huxton wrote:
> strk wrote:
> >Hello.
> >A memory fault in a trigger left my database
> >in a corrupted state:
> >
>
> > - I can't create temporary tables anymore
> > (restart/vacuum full don't help)
> > ERROR: cache lookup failed
strk wrote:
Hello.
A memory fault in a trigger left my database
in a corrupted state:
- I can't create temporary tables anymore
(restart/vacuum full don't help)
ERROR: cache lookup failed for relation 1250714
*What* is giving this error? Something seems to be holding o
Hello.
A memory fault in a trigger left my database
in a corrupted state:
- A temporary table listed in pg_class
was not accessible with a select
- I could not DROP it
- I deleted the record from pg_class
- I can't create temporary tables anymore
Thomas F. O'Connell wrote:
Does auto_vacuum vacuum the system tables?
Yes
Does auto_vacuum vacuum the system tables?
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Feb 16, 2005, at 5:42 PM, Matthew T. O'Connor wrote:
Tom Lane wrote:
[EMAIL PROTECT
Matthew T. O'Connor wrote:
> Tom Lane wrote:
>
> >[EMAIL PROTECTED] writes:
> >
> >
> >>Maybe I'm missing something, but shouldn't the prospect of data loss (even
> >>in the presence of admin ignorance) be something that should be
> >>unacceptable? Certainly within the realm "normal PostgreSQL"
Russell Smith wrote:
On Fri, 18 Feb 2005 04:38 pm, Kevin Brown wrote:
Tom Lane wrote:
No, the entire point of this discussion is to whup the DBA upside the
head with a big enough cluestick to get him to install autovacuum.
Once autovacuum is default, it won't matter anymore.
I have a
On Thursday 17 February 2005 07:47, [EMAIL PROTECTED] wrote:
> > Gaetano Mendola <[EMAIL PROTECTED]> writes:
> >> We do ~4000 txn/minute, so in 6 months you are screwed...
> >
> > Sure, but if you ran without vacuuming for 6 months, wouldn't you notice
> > the
> > huge slowdowns from all those dea
On Fri, 18 Feb 2005 08:53 pm, Jürgen Cappel wrote:
> Just wondering after this discussion:
>
> Is transaction wraparound limited to a database or to an installation ?
> i.e. can heavy traffic in one db affect another db in the same installation ?
>
XID's are global to the pg cluster, or installat
On Fri, 18 Feb 2005 04:38 pm, Kevin Brown wrote:
> Tom Lane wrote:
> > Gaetano Mendola <[EMAIL PROTECTED]> writes:
> > > BTW, why not do an automatic vacuum instead of a shutdown? At least the
> > > DB does not stop working until someone studies what the problem is and
> > > how to solve it.
> >
> > No,
Just wondering after this discussion:
Is transaction wraparound limited to a database or to an installation ?
i.e. can heavy traffic in one db affect another db in the same installation ?
Tom Lane wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
> > BTW, why not do an automatic vacuum instead of a shutdown? At least the
> > DB does not stop working until someone studies what the problem is and
> > how to solve it.
>
> No, the entire point of this discussion is to whup the DBA upside t
Greg Stark wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>
>>We do ~4000 txn/minute, so in 6 months you are screwed...
>
>
> Sure, but if you ran without vacuuming for 6 months, wouldn't you notice the
> huge slowdowns from all those dead tuples before that?
>
In my applications yes
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
>> We do ~4000 txn/minute, so in 6 months you are screwed...
>
> Sure, but if you ran without vacuuming for 6 months, wouldn't you notice
> the
> huge slowdowns from all those dead tuples before that?
>
>
I would think that only applies to databases
And most databases get a mix of updates and selects. I would expect it would
be pretty hard to go that long with any significant level of update activity
and no vacuums and not notice the performance problems from the dead tuples.
I think the people who've managed to shoot themselves in the foot t
On 17 Feb 2005, Greg Stark wrote:
> Gaetano Mendola <[EMAIL PROTECTED]> writes:
>
> > We do ~4000 txn/minute, so in 6 months you are screwed...
>
> Sure, but if you ran without vacuuming for 6 months, wouldn't you notice the
> huge slowdowns from all those dead tuples before that?
Most people
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> We do ~4000 txn/minute, so in 6 months you are screwed...
Sure, but if you ran without vacuuming for 6 months, wouldn't you notice the
huge slowdowns from all those dead tuples before that?
--
greg
Gaetano Mendola <[EMAIL PROTECTED]> writes:
> BTW, why not do an automatic vacuum instead of a shutdown? At least the
> DB does not stop working until someone studies what the problem is and
> how to solve it.
No, the entire point of this discussion is to whup the DBA upside the
head with a big enough cl
Tom Lane wrote:
> Bruno Wolff III <[EMAIL PROTECTED]> writes:
>
>>I don't think there is much point in making it configurable. If they knew
>>to do that they would most likely know to vacuum as well.
>
>
> Agreed.
>
>
>>However, 100K out of 1G seems too small. Just to get wrap around there
>>m
Stephan Szabo wrote:
> On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>
>
>>>Once autovacuum gets to the point where it's used by default, this
>>>particular failure mode should be a thing of the past, but in the
>>>meantime I'm not going to panic about it.
>>
>>I don't know how to say this without
Greg Stark wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>
>
>>Christopher Kings-Lynne wrote:
>>
>>
>>>I wonder if I should point out that we just had 3 people suffering XID
>>>wraparound failure in 2 days in the IRC channel...
>>
>>I have had half a dozen new customers in the last six m
Tom Lane wrote:
[EMAIL PROTECTED] writes:
Maybe I'm missing something, but shouldn't the prospect of data loss (even
in the presence of admin ignorance) be something that should be
unacceptable? Certainly within the realm "normal PostgreSQL" operation.
Once autovacuum gets to the point wher
I think the people who've managed to shoot themselves in the foot this
way are those who decided to "optimize" their cron jobs to only vacuum
their user tables, and forgot about the system catalogs. So it's
probably more of a case of "a little knowledge is a dangerous thing"
than never having hea
Greg Stark <[EMAIL PROTECTED]> writes:
> How are so many people doing so many transactions so soon after installing?
> To hit wraparound you have to do a billion transactions? ("With a `B'") That
> takes real work. If you did 1,000 txn/minute for every minute of every day it
> would still take a c
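The arithmetic behind that estimate is easy to check (assuming 32-bit transaction IDs, i.e. roughly 2^31 XIDs until wraparound territory):

```python
# Rough check of the claim above: how long to burn through the XID space
# at a sustained 1,000 transactions per minute (assumes 32-bit XIDs, so
# roughly 2**31 transactions until wraparound territory).
XIDS_TO_WRAP = 2**31
RATE_PER_MIN = 1_000

minutes = XIDS_TO_WRAP / RATE_PER_MIN
years = minutes / (60 * 24 * 365)
print(f"{years:.1f} years")  # about 4.1 years at 1,000 txn/min
```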
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> Christopher Kings-Lynne wrote:
>
> > I wonder if I should point out that we just had 3 people suffering XID
> > wraparound failure in 2 days in the IRC channel...
>
> I have had half a dozen new customers in the last six months that have
> had the
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> I don't think there is much point in making it configurable. If they knew
> to do that they would most likely know to vacuum as well.
Agreed.
> However, 100K out of 1G seems too small. Just to get wrap around there
> must be a pretty high transaction
Stephan Szabo <[EMAIL PROTECTED]> writes:
> All in all, I figure that odds are very high that if someone isn't
> vacuuming in the rest of the transaction id space, either the transaction
> rate is high enough that 100,000 warning may not be enough or they aren't
> going to pay attention anyway and
Tom Lane wrote:
Maybe
(a) within 200,000 transactions of wrap, every transaction start
delivers a WARNING message;
(b) within 100,000 transactions, forced shutdown as above.
This seems sound enough, but if the DBA and/or SA can't be bothered
reading the docs where this topic features quite pro
Stephan Szabo wrote:
On Wed, 16 Feb 2005, Tom Lane wrote:
Stephan Szabo <[EMAIL PROTECTED]> writes:
(a) within 200,000 transactions of wrap, every transaction start
delivers a WARNING message;
(b) within 100,000 transactions, forced shutdown as above.
This seems reasonable, although perhaps the fo
On Wed, Feb 16, 2005 at 09:38:31 -0800,
Stephan Szabo <[EMAIL PROTECTED]> wrote:
> On Wed, 16 Feb 2005, Tom Lane wrote:
>
> > (a) within 200,000 transactions of wrap, every transaction start
> > delivers a WARNING message;
> >
> > (b) within 100,000 transactions, forced shutdown as above.
>
> T
On Wed, 16 Feb 2005, Tom Lane wrote:
> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > Right, but since the how to resolve it currently involves executing a
> > query, simply stopping dead won't allow you to resolve it. Also, if we
> > stop at the exact wraparound point, can we run into problems act
Stephan Szabo <[EMAIL PROTECTED]> writes:
> Right, but since the how to resolve it currently involves executing a
> query, simply stopping dead won't allow you to resolve it. Also, if we
> stop at the exact wraparound point, can we run into problems actually
> trying to do the vacuum if that's stil
>
> On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>
>> > On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>> >
>> >> >
>> >> > Once autovacuum gets to the point where it's used by default, this
>> >> > particular failure mode should be a thing of the past, but in the
>> >> > meantime I'm not going to pa
> Stephan Szabo <[EMAIL PROTECTED]> writes:
>> Right, but since the how to resolve it currently involves executing a
>> query, simply stopping dead won't allow you to resolve it. Also, if we
>> stop at the exact wraparound point, can we run into problems actually
>> trying to do the vacuum if that'
>
> On Wed, 16 Feb 2005, Joshua D. Drake wrote:
>
>>
>> >Do you have a useful suggestion about how to fix it? "Stop working" is
>> >handwaving and merely basically saying, "one of you people should do
>> >something about this" is not a solution to the problem, it's not even
>> an
>> >approach towa
On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
> > On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
> >
> >> >
> >> > Once autovacuum gets to the point where it's used by default, this
> >> > particular failure mode should be a thing of the past, but in the
> >> > meantime I'm not going to panic about
> On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
>
>> >
>> > Once autovacuum gets to the point where it's used by default, this
>> > particular failure mode should be a thing of the past, but in the
>> > meantime I'm not going to panic about it.
>>
>> I don't know how to say this without sounding lik
On Wed, 16 Feb 2005, Joshua D. Drake wrote:
>
> >Do you have a useful suggestion about how to fix it? "Stop working" is
> >handwaving and merely basically saying, "one of you people should do
> >something about this" is not a solution to the problem, it's not even an
> >approach towards a soluti
Christopher Kings-Lynne wrote:
At this point we have a known critical bug. Usually the PostgreSQL
community
is all over critical bugs. Why is this any different?
It sounds to me that people are just annoyed that users don't RTFM.
Get over it. Most won't. If users RTFM more often, it would put mo
Do you have a useful suggestion about how to fix it? "Stop working" is
handwaving and merely basically saying, "one of you people should do
something about this" is not a solution to the problem, it's not even an
approach towards a solution to the problem.
I believe that the ability for Postgr
At this point we have a known critical bug. Usually the PostgreSQL
community
is all over critical bugs. Why is this any different?
It sounds to me that people are just annoyed that users don't RTFM. Get
over it. Most won't. If users RTFM more often, it would put most support
companies out of bu
in the foot. We've seen several instances of people blowing away
pg_xlog and pg_clog, for example, because they "don't need log files".
Or how about failing to keep adequate backups? That's a sure way for an
ignorant admin to lose data too.
There is a difference between actively doing somet
On Wed, 16 Feb 2005 [EMAIL PROTECTED] wrote:
> >
> > Once autovacuum gets to the point where it's used by default, this
> > particular failure mode should be a thing of the past, but in the
> > meantime I'm not going to panic about it.
>
> I don't know how to say this without sounding like a jerk,
> [EMAIL PROTECTED] writes:
>> Maybe I'm missing something, but shouldn't the prospect of data loss
>> (even
>> in the presence of admin ignorance) be something that should be
>> unacceptable? Certainly within the realm "normal PostgreSQL" operation.
>
> [ shrug... ] The DBA will always be able to
[EMAIL PROTECTED] writes:
> Maybe I'm missing something, but shouldn't the prospect of data loss (even
> in the presence of admin ignorance) be something that should be
> unacceptable? Certainly within the realm "normal PostgreSQL" operation.
[ shrug... ] The DBA will always be able to find a way
>> The checkpointer is entirely incapable of either detecting the problem
>> (it doesn't have enough infrastructure to examine pg_database in a
>> reasonable way) or preventing backends from doing anything if it did
>> know there was a problem.
>
> Well, I guess I meant 'some regularly running proc
The checkpointer is entirely incapable of either detecting the problem
(it doesn't have enough infrastructure to examine pg_database in a
reasonable way) or preventing backends from doing anything if it did
know there was a problem.
Well, I guess I meant 'some regularly running process'...
I think
> Not being able to issue new transactions *is* data loss --- how are you
> going to get the system out of that state?
Yes, but I would also prefer the server to say something like "The database is
full, please vacuum." - the same as when the hard disk is full and you try
to record something on it -
Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
> This might seem like a stupid question, but since this is a massive data
> loss potential in PostgreSQL, what's so hard about having the
> checkpointer or something check the transaction counter when it runs and
> either issue a db-wide vac
Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
> This might seem like a stupid question, but since this is a massive
> data loss potential in PostgreSQL, what's so hard about having the
> checkpointer or something check the transaction counter when it runs
> and either issue a db-wide vacuum
>> I think you're pretty well screwed as far as getting it *all* back goes,
>> but you could use pg_resetxlog to back up the NextXID counter enough to
>> make your tables and databases reappear (and thereby lose the effects of
>> however many recent transactions you back up over).
>>
>> Once you've
It must be possible to create a tool based on the PostgreSQL sources that
can read all the tuples in a database and dump them to a file stream. All
the data remains in the file until it is overwritten after a vacuum.
It *should* be doable.
If the data in the table is worth anything, then it
I think you're pretty well screwed as far as getting it *all* back goes,
but you could use pg_resetxlog to back up the NextXID counter enough to
make your tables and databases reappear (and thereby lose the effects of
however many recent transactions you back up over).
Once you've found a NextXID s
> Once you've found a NextXID setting you like, I'd suggest an immediate
> pg_dumpall/initdb/reload to make sure you have a consistent set of data.
> Don't VACUUM, or indeed modify the DB at all, until you have gotten a
> satisfactory dump.
>
> Then put in a cron job to do periodic vacuuming ;-)
T
"Kouber Saparev" <[EMAIL PROTECTED]> writes:
> After asking the guys in the [EMAIL PROTECTED] channel they told
> me that the reason is the "Transaction ID wraparound", because I have never
> ran VACUUM on the whole database.
> So they proposed to ask here for help. I have stopped the server, but
Hi folks,
I ran into big trouble - it seems that my DB is lost.
"select * from pg_database" gives me 0 rows, but I still can connect to
databases with \c and even select from tables there, although they're also
not visible with \dt.
After asking the guys in the [EMAIL PROTECTED] channel they tol
On Mon, 27 Dec 2004 07:55:27 +0530 (IST), Ameya S. Sakhalkar
<[EMAIL PROTECTED]> wrote:
>
> For my project (main memory DBMS), I have written a main memory filesystem.
> Idea is, the primary copy of data will reside in main memory. (Workable
> only for small size data, at most 2GB).
>
> Now, I wa
For my project (main memory DBMS), I have written a main memory filesystem.
The idea is that the primary copy of the data will reside in main memory (workable
only for small data, at most 2GB).
Now, I want to plug this filesystem into Postgres, so that, instead of the
Unix filesystem, this main memory files
overbored <[EMAIL PROTECTED]> writes:
> Hi all, I added a new variable-length field to the pg_class catalog, but
> I did something wrong, and I can't tell what else I'd need to change.
> ...
> The REVOKE command invokes ExecuteGrantStmt_Relation() to modify the
> relacl attribute of pg_class, whi
On Sun, Dec 19, 2004 at 01:56:02AM -0800, overbored wrote:
> Hi all, I added a new variable-length field to the pg_class catalog, but
> I did something wrong, and I can't tell what else I'd need to change. (I
> know about how extending pg_class is bad and all, but it seems to be the
> simplest s
Hi all, I added a new variable-length field to the pg_class catalog, but
I did something wrong, and I can't tell what else I'd need to change. (I
know about how extending pg_class is bad and all, but it seems to be the
simplest solution to my problem right now, and I'd just like to get it
worki
ElayaRaja S wrote:
Hi,
While configuring OpenCRX using PostgreSQL, I am facing a problem
while creating a db with the command (createdb -h
localhost -E utf8 -U system crx-CRX).
Error:
createdb: could not connect to database template1: could not connect
to server:
Connection refu
Hi,
While configuring OpenCRX using PostgreSQL, I am facing a problem
while creating a db with the command (createdb -h
localhost -E utf8 -U system crx-CRX).
Error:
createdb: could not connect to database template1: could not connect
to server:
Connection refused
Is the se
Hi,
On Thu, 9 Sep 2004, [koi8-r] ??? ?. wrote:
Hello, Hackers! I use Nagios, a monitoring system. Can you help, please?
I want to compile a plugin for Nagios named 'check_pqsql'.
Which libraries do I need to compile it successfully? Thank you.
If