Excerpts from Tom Lane's message of Thu Jun 09 17:10:10 -0400 2011:
> Alvaro Herrera writes:
> > Excerpts from Alvaro Herrera's message of Thu Jun 09 16:34:13 -0400 2011:
> >> I have pushed it now.
>
> > ... and it caused a failure on the buildfarm, so I panicked and reverted
> > it. I think the
Alvaro Herrera writes:
> Excerpts from Alvaro Herrera's message of Thu Jun 09 16:34:13 -0400 2011:
>> I have pushed it now.
> ... and it caused a failure on the buildfarm, so I panicked and reverted
> it. I think the patch below fixes it.
Actually, I think what you want is what closeAllVfds doe
Excerpts from Alvaro Herrera's message of Thu Jun 09 16:34:13 -0400 2011:
> I have pushed it now.
... and it caused a failure on the buildfarm, so I panicked and reverted
it. I think the patch below fixes it. Let me know if you think I
should push the whole thing again.
*** a/src/backend/stora
Alvaro Herrera writes:
> That was pretty weird. I had rm'd the build tree and rebuilt, failure
> still there. I then did a git reset FETCH_HEAD, recompiled, and the
> problem was gone. git reset to my revision, failed. Then git clean
> -dfx, nuked the build tree, redid the whole thing from scr
Excerpts from Alvaro Herrera's message of Thu Jun 09 16:02:14 -0400 2011:
> Excerpts from Tom Lane's message of Thu Jun 09 14:45:31 -0400 2011:
> > My thought is that it needs some beta testing. Perhaps it'd be sane to
> > push it into beta2 now, and then back-patch sometime after 9.1 final,
> >
Excerpts from Tom Lane's message of Thu Jun 09 14:45:31 -0400 2011:
> Alvaro Herrera writes:
> > Excerpts from Tom Lane's message of Wed Jun 08 14:28:02 -0400 2011:
> >> Alvaro Herrera writes:
> >>> This customer is running on 8.4 so I started from there; should I
> >>> backpatch this to 8.2, or
On Thu, Jun 9, 2011 at 3:02 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Jun 9, 2011 at 2:45 PM, Tom Lane wrote:
>>> My thought is that it needs some beta testing. Perhaps it'd be sane to
>>> push it into beta2 now, and then back-patch sometime after 9.1 final,
>>> if no problems pop up
Robert Haas writes:
> On Thu, Jun 9, 2011 at 2:45 PM, Tom Lane wrote:
>> My thought is that it needs some beta testing. Perhaps it'd be sane to
>> push it into beta2 now, and then back-patch sometime after 9.1 final,
>> if no problems pop up.
> I think it'd be sensible to back-patch it. I'm no
On Thu, Jun 9, 2011 at 2:45 PM, Tom Lane wrote:
> Alvaro Herrera writes:
>> Excerpts from Tom Lane's message of Wed Jun 08 14:28:02 -0400 2011:
>>> Alvaro Herrera writes:
>>>> This customer is running on 8.4 so I started from there; should I
>>>> backpatch this to 8.2, or not at all?
>
>>> I'm n
Alvaro Herrera writes:
> Excerpts from Tom Lane's message of Wed Jun 08 14:28:02 -0400 2011:
>> Alvaro Herrera writes:
>>> This customer is running on 8.4 so I started from there; should I
>>> backpatch this to 8.2, or not at all?
>> I'm not excited about back-patching it...
> Bummer.
Well, o
Excerpts from Tom Lane's message of Wed Jun 08 14:28:02 -0400 2011:
> Alvaro Herrera writes:
> > Okay, here's a patch implementing this idea. It seems to work quite
> > well, and it solves the problem in a limited testing scenario -- I
> > haven't yet tested on the customer machines.
>
> This se
Alvaro Herrera writes:
> Okay, here's a patch implementing this idea. It seems to work quite
> well, and it solves the problem in a limited testing scenario -- I
> haven't yet tested on the customer machines.
This seems mostly sane, except I think you have not considered the
issue of when to cle
Excerpts from Tom Lane's message of Tue Jun 07 12:26:34 -0400 2011:
> It's not *that* many levels: in fact, I think md.c is the only level
> that would just have to pass it through without doing anything useful.
> I think that working from there is a saner and more efficient approach
> than what y
Alvaro Herrera writes:
> Excerpts from Tom Lane's message of Mon Jun 06 12:49:46 -0400 2011:
>> Hmm, there's already a mechanism for closing "temp" FDs at the end of a
>> query ... maybe blind writes could use temp-like FDs?
> I don't think it can be made to work exactly like that. If I understa
Excerpts from Tom Lane's message of Mon Jun 06 12:49:46 -0400 2011:
> Robert Haas writes:
> > Instead of closing them immediately, how about flagging the FD and
> > closing all the flagged FDs at the end of each query, or something
> > like that?
>
> Hmm, there's already a mechanism for closing
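A minimal standalone sketch of the flag-now, close-at-end-of-query idea being
discussed here; the names FlagBlindWriteFd and CloseFlaggedFds are illustrative
assumptions, not PostgreSQL's actual fd.c API:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

#define MAX_TRACKED_FDS 1024

static int flagged_fds[MAX_TRACKED_FDS];
static int nflagged = 0;

/* Instead of closing a blind-write descriptor immediately, remember it. */
static void
FlagBlindWriteFd(int fd)
{
    if (nflagged < MAX_TRACKED_FDS)
        flagged_fds[nflagged++] = fd;
}

/* At end of query, close everything that was flagged, so the backend does
 * not pin unlinked files indefinitely but also does not pay an open/close
 * cycle for every single write. */
static void
CloseFlaggedFds(void)
{
    for (int i = 0; i < nflagged; i++)
        close(flagged_fds[i]);
    nflagged = 0;
}

int main(void)
{
    int fd = open("/tmp/blind_write_demo", O_CREAT | O_WRONLY, 0600);

    if (fd >= 0)
        FlagBlindWriteFd(fd);       /* keep it open for the rest of the query */

    /* ... more work within the same query ... */

    CloseFlaggedFds();              /* well-defined release point */
    printf("flagged descriptors released\n");
    return 0;
}

The design goal is simply to batch the closes at a predictable point instead of
closing on every write or never closing at all.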
Excerpts from Tom Lane's message of Mon Jun 06 12:49:46 -0400 2011:
> Robert Haas writes:
> > On Mon, Jun 6, 2011 at 12:30 PM, Tom Lane wrote:
> >> On reflection I think this behavior is probably limited to the case
> >> where we've done what we used to call a "blind write" of a block that
> >> i
Robert Haas writes:
> On Mon, Jun 6, 2011 at 12:30 PM, Tom Lane wrote:
>> On reflection I think this behavior is probably limited to the case
>> where we've done what we used to call a "blind write" of a block that
>> is unrelated to our database or tables. For normal SQL-driven accesses,
>> the
On Mon, Jun 6, 2011 at 12:30 PM, Tom Lane wrote:
> Alvaro Herrera writes:
>> Excerpts from Tom Lane's message of Mon Jun 06 12:10:24 -0400 2011:
>>> Yeah, I wasn't that thrilled with the suggestion either. But we can't
>>> just have backends constantly closing every open FD they hold, or
>>> per
Alvaro Herrera writes:
> Excerpts from Tom Lane's message of Mon Jun 06 12:10:24 -0400 2011:
>> Yeah, I wasn't that thrilled with the suggestion either. But we can't
>> just have backends constantly closing every open FD they hold, or
>> performance will suffer. I don't see any very good place t
Alvaro Herrera wrote:
> That doesn't solve the WAL problem Kevin found, of course ...
I wouldn't worry about that too much -- the WAL issue is
self-limiting and not likely to ever cause a failure. The biggest
risk is that every now and then some new individual will notice it
and waste a bit o
Excerpts from Tom Lane's message of Mon Jun 06 12:10:24 -0400 2011:
> Alvaro Herrera writes:
> > Hmm interesting. I don't think the placement suggested by Tom would be
> > useful, because the Zabbix backends are particularly busy all the time,
> > so they wouldn't run ProcessCatchupEvent at all.
Alvaro Herrera writes:
> Excerpts from Kevin Grittner's message of Mon Jun 06 11:58:51 -0400 2011:
>> Alvaro Herrera wrote:
>>> What we found out after more careful investigation is that the
>>> file is kept open by a backend connected to a different database.
>>> I have a suspicion that what ha
Excerpts from Kevin Grittner's message of Mon Jun 06 11:58:51 -0400 2011:
> Alvaro Herrera wrote:
>
> > What we found out after more careful investigation is that the
> > file is kept open by a backend connected to a different database.
> > I have a suspicion that what happened here is that thi
Alvaro Herrera wrote:
> What we found out after more careful investigation is that the
> file is kept open by a backend connected to a different database.
> I have a suspicion that what happened here is that this backend
> was forced to flush out a page from shared buffers to read some
> other
Excerpts from Tom Lane's message of Sat Jun 04 12:49:05 -0400 2011:
> Alvaro Herrera writes:
> > What surprises me is that the open references remain after a database
> > drop. Surely this means that no backends keep open file descriptors to
> > any table in that database, because there are no co
Alvaro Herrera writes:
> What surprises me is that the open references remain after a database
> drop. Surely this means that no backends keep open file descriptors to
> any table in that database, because there are no connections.
bgwriter ...
regards, tom lane
Excerpts from Alexander Shulgin's message of Fri Jun 03 17:45:28 -0400 2011:
> There were about 450 such (or similar) files, all of them having /2613 in the
> filename. Since 2613 is the regclass of pg_largeobject and we are indeed
> working with quite a few large objects in that DB, this is wh
Hello Hackers,
There is some strange behavior we're experiencing with one of the customer's
DBs (8.4).
We've noticed that free disk space went down heavily on a system, and after a
short analysis determined that the reason was that postmaster was holding lots
of unlinked files open. A sample o
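For diagnosing this kind of situation, here is a small Linux-only sketch (not
part of PostgreSQL; lsof reports the same thing) that lists which of a
process's open descriptors still point at deleted files, by reading
/proc/<pid>/fd:

#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <unistd.h>
#include <limits.h>

int main(int argc, char **argv)
{
    if (argc != 2)
    {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char fd_dir[PATH_MAX];
    snprintf(fd_dir, sizeof(fd_dir), "/proc/%s/fd", argv[1]);

    DIR *dir = opendir(fd_dir);
    if (dir == NULL)
    {
        perror("opendir");
        return 1;
    }

    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL)
    {
        if (ent->d_name[0] == '.')
            continue;

        char link_path[PATH_MAX * 2];
        char target[PATH_MAX];
        snprintf(link_path, sizeof(link_path), "%s/%s", fd_dir, ent->d_name);

        ssize_t len = readlink(link_path, target, sizeof(target) - 1);
        if (len < 0)
            continue;
        target[len] = '\0';

        /* the kernel appends " (deleted)" for unlinked-but-still-open files */
        if (strstr(target, " (deleted)") != NULL)
            printf("fd %s -> %s\n", ent->d_name, target);
    }
    closedir(dir);
    return 0;
}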
Hi,
On Wed, Jun 17, 2009 at 12:22 AM, Czichy, Thoralf (NSN -
FI/Helsinki) wrote:
> [STONITH is not always best strategy if failures can be declared as
> user-space software problem only, limit STONITH to HW/OS failures]
>
> The isolation of the failing Postgres instance does not require a
> STONIT
"Czichy, Thoralf (NSN - FI/Helsinki)" writes:
> I am working together with Harald on this issue. Below are some thoughts on
> why we think it should be possible to disable the postmaster-internal
> recovery attempt and instead have faults in the processes started
> by postmaster escalated to postma
hi,
I am working together with Harald on this issue. Below are some thoughts on
why we think it should be possible to disable the postmaster-internal
recovery attempt and instead have faults in the processes started
by postmaster escalated to postmaster-exit.
[Our typical "embedded" situation]
Kolb, Harald (NSN - DE/Munich) wrote:
> The recovery and restart feature is an excellent solution if the db is
> running in a standalone environment and I understand that this should
> not be weakened. But in a configuration where the db is only one
> resource among others and where you have a
elsinki)
> Subject: Re: [HACKERS] postmaster recovery and automatic
> restart suppression
>
> "Kolb, Harald (NSN - DE/Munich)" writes:
> > If you don't want to see this option as a GUC parameter, would it be
> > acceptable to have it as a new postmaster cm
Hi,
On Wed, Jun 10, 2009 at 4:21 AM, Simon Riggs wrote:
>
> On Tue, 2009-06-09 at 20:59 +0200, Kolb, Harald (NSN - DE/Munich) wrote:
>
>> There are some good reasons why a switchover could be an appropriate
>> means in case the DB is facing troubles. It may be that the root cause
>> is not the DB
On Tue, 2009-06-09 at 15:48 -0500, Kevin Grittner wrote:
> My first reaction on hearing the request was that it might have *some*
> use; but in trying to recall any restart where it is what I would have
> wanted, I come up dry. I haven't even really come up with a good
> hypothetical use case.
Tom Lane wrote:
> "Kevin Grittner" writes:
>> "Kolb, Harald (NSN - DE/Munich)" wrote:
>>> There are some good reasons why a switchover could be an
>>> appropriate means in case the DB is facing troubles. It may be
>>> that the root cause is not the DB itself, but used resources or
>>> other thi
"Kevin Grittner" writes:
> "Kolb, Harald (NSN - DE/Munich)" wrote:
>> There are some good reasons why a switchover could be an appropriate
>> means in case the DB is facing troubles. It may be that the root
>> cause is not the DB itself, but used resources or other things
>> which are going craz
Not really, since once you fail over you may as well stop the rebuild,
as you'll have to restore the whole database. Moreover, wouldn't
that have to be a manual decision?
The closest I can come to a use case would be if you run a very
large cluster with hundreds of read-only replicas.
"Kolb, Harald (NSN - DE/Munich)" wrote:
>> From: ext Tom Lane [mailto:t...@sss.pgh.pa.us]
>> Mechanism should exist to support useful policy. I don't believe
>> that the proposed switch has any real-world usefulness.
> There are some good reasons why a switchover could be an appropriate
> me
"Kolb, Harald (NSN - DE/Munich)" writes:
> If you don't want to see this option as a GUC parameter, would it be
> acceptable to have it as a new postmaster cmd line option ?
That would make two kluges, not one (we don't do options that are
settable in only one way). And it does nothing whatever
On Tue, 2009-06-09 at 20:59 +0200, Kolb, Harald (NSN - DE/Munich) wrote:
> There are some good reasons why a switchover could be an appropriate
> means in case the DB is facing troubles. It may be that the root cause
> is not the DB itself, but used resources or other things which are
> going cr
t; (NSN - FI/Helsinki)
> Subject: Re: [HACKERS] postmaster recovery and automatic
> restart suppression
>
> Robert Haas writes:
> > I see that you've carefully not quoted Greg's remark about
> "mechanism
> > not policy" with which I completely agree.
On Mon, Jun 8, 2009 at 7:34 PM, Tom Lane wrote:
> Robert Haas writes:
>> I see that you've carefully not quoted Greg's remark about "mechanism
>> not policy" with which I completely agree.
>
> Mechanism should exist to support useful policy. I don't believe that
> the proposed switch has any real
Robert Haas writes:
> I see that you've carefully not quoted Greg's remark about "mechanism
> not policy" with which I completely agree.
Mechanism should exist to support useful policy. I don't believe that
the proposed switch has any real-world usefulness.
regards, tom
On Mon, Jun 8, 2009 at 4:30 PM, Tom Lane wrote:
> Greg Stark writes:
>>> On Mon, 2009-06-08 at 09:47 -0400, Tom Lane wrote:
>>>> I think the proposed don't-restart flag is exceedingly ugly and will not
>>>> solve any real-world problem.
>
>> Hm. I'm not sure I see a solid use case for it -- in my
Greg Stark writes:
>> On Mon, 2009-06-08 at 09:47 -0400, Tom Lane wrote:
>>> I think the proposed don't-restart flag is exceedingly ugly and will not
>>> solve any real-world problem.
> Hm. I'm not sure I see a solid use case for it -- in my experience you
> want to be pretty sure you have a pers
On Mon, Jun 8, 2009 at 6:58 PM, Simon Riggs wrote:
>
> On Mon, 2009-06-08 at 09:47 -0400, Tom Lane wrote:
>
>> I think the proposed don't-restart flag is exceedingly ugly and will not
>> solve any real-world problem.
>
> Agreed.
Hm. I'm not sure I see a solid use case for it -- in my experience yo
On Mon, 2009-06-08 at 09:47 -0400, Tom Lane wrote:
> I think the proposed don't-restart flag is exceedingly ugly and will not
> solve any real-world problem.
Agreed.
--
Simon Riggs www.2ndQuadrant.com
PostgreSQL Training, Services and Support
Gregory Stark writes:
> I think the accepted way to handle this kind of situation is called STONITH --
> "Shoot The Other Node In The Head".
Yeah, and the reason people go to the trouble of having special hardware
for that is that pure-software solutions are unreliable.
I think the proposed don'
Hi,
On Mon, Jun 8, 2009 at 6:45 PM, Gregory Stark wrote:
> Fujii Masao writes:
>
>> On the other hand, the primary postgres might *not* restart automatically.
>> So, it's difficult for clusterware to choose whether to do failover when it
>> detects the death of the primary postgres, I think.
>
>
Fujii Masao writes:
> On the other hand, the primary postgres might *not* restart automatically.
> So, it's difficult for clusterware to choose whether to do failover when it
> detects the death of the primary postgres, I think.
I think the accepted way to handle this kind of situation is calle
Hi,
On Fri, Jun 5, 2009 at 9:24 PM, Kolb, Harald (NSN -
DE/Munich) wrote:
>> Good point. I also think that this makes a handling of failover
>> more complicated. In other words, clusterware cannot determine
>> whether to do failover when it detects the death of the primary
>> postgres. A wrong dec
Hi,
> -Original Message-
> From: ext Fujii Masao [mailto:masao.fu...@gmail.com]
> Sent: Friday, June 05, 2009 8:14 AM
> To: Kolb, Harald (NSN - DE/Munich)
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] postmaster recovery and automatic
> restar
Hi,
On Fri, Jun 5, 2009 at 1:02 AM, Kolb, Harald (NSN - DE/Munich)
wrote:
> Hi,
>
> in case of a serious failure of a backend or an auxiliary process the
> postmaster performs a crash recovery and restarts the db automatically.
>
> Is there a possibility to deactivate the restart and to force the
Hi,
in case of a serious failure of a backend or an auxiliary process the
postmaster performs a crash recovery and restarts the db automatically.
Is there a possibility to deactivate the restart and to force the postmaster to
simply exit at the end?
The background is that we will have a watchd
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> I find it a bit worrying that the postmaster is calling that syscall at
> all.
Yeah. Misguided thread-aware library perhaps?
Next time please try to get a stack trace.
regards, tom lane
Dan Langille wrote:
> Looking at ktrace output, I saw a lot of this:
>
> 1172 postgres CALL kse_release(0xbfbfd500)
> 1172 postgres RET kse_release -1 errno 22 Invalid argument
Humm, kse_release seems related to multithreading. Or so says
http://nixdoc.net/man-pages/FreeBSD/kse_release.2.html
Folks,
I encountered a situation on Sunday night where the postmaster was in a tight
loop. That's the conclusion we reached, but have no real proof. I also have no
idea how to reproduce this situation. This post is just an FYI in case it helps.
The laptop was running hot so I looked ar
Michael Paesold wrote:
> In case of recovery, I think one should still get the full
> output, no?
Recovery happens just after these messages are printed, so the window
when they are actually relevant would be very small.
--
Peter Eisentraut
http://developer.postgresql.org/~petere/
On Jun 1, 2007, at 1:58 AM, Michael Paesold wrote:
> In case of recovery, I think one should still get the full output, no?
+1
--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
On Fri, 2007-06-01 at 10:33 +0200, Peter Eisentraut wrote:
> On Friday, 1 June 2007 10:06, Simon Riggs wrote:
> > Recovery considerations mean there can be more than one copy of a
> > database and it is important to be able to tell which one was just
> > started. The time a database was shut down
On Friday, 1 June 2007 10:06, Simon Riggs wrote:
> Recovery considerations mean there can be more than one copy of a
> database and it is important to be able to tell which one was just
started. The time a database was shut down defines which copy we are
> looking at.
No, the database identifi
On Wed, 2007-05-30 at 17:57 +0200, Peter Eisentraut wrote:
> Does anyone actually read these?
>
> LOG: database system was shut down at 2007-05-30 17:54:39 CEST
> LOG: checkpoint record is at 0/42C4FC
> LOG: redo record is at 0/42C4FC; undo record is at 0/0; shutdown TRUE
> LOG: next transacti
Tom Lane wrote:
> Peter Eisentraut <[EMAIL PROTECTED]> writes:
>> Does anyone actually read these?
>> LOG: database system was shut down at 2007-05-30 17:54:39 CEST
>> LOG: checkpoint record is at 0/42C4FC
>> LOG: redo record is at 0/42C4FC; undo record is at 0/0; shutdown TRUE
>> LOG: next transaction ID: 0
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Does anyone actually read these?
> LOG: database system was shut down at 2007-05-30 17:54:39 CEST
> LOG: checkpoint record is at 0/42C4FC
> LOG: redo record is at 0/42C4FC; undo record is at 0/0; shutdown TRUE
> LOG: next transaction ID: 0/593; nex
Does anyone actually read these?
LOG: database system was shut down at 2007-05-30 17:54:39 CEST
LOG: checkpoint record is at 0/42C4FC
LOG: redo record is at 0/42C4FC; undo record is at 0/0; shutdown TRUE
LOG: next transaction ID: 0/593; next OID: 10820
LOG: next MultiXactId: 1; next MultiXact
FYI, with the options merged, we still have this TODO item:
* %Remove behavior of postmaster -o
---
Peter Eisentraut wrote:
> Here's the plan for assimilating the command-line options of the postmaster
> and postgr
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Here's the plan for assimilating the command-line options of the postmaster
> and postgres.
> ...
> * postmaster options added to postgres: -h -i -k -l -n
> These options will not have any useful effects, but their behavior is
> consistent i
Here's the plan for assimilating the command-line options of the postmaster
and postgres. I reported earlier on a couple of conflict areas; here
is the full plan:
* Remove: postmaster -a -b -m -M
These options have done nothing forever.
* postmaster options added to postgres: -h -i -k
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> -S
> postmaster: silent mode
> postgres: work_mem
> Renaming the postgres side of -N, -o, -p, and -s might not really do
> any harm, but the -S option used to be very popular on the postgres
> command-line via -o from the postmaster, so I'm af
I've looked at the issue of assimilating the options of postmaster and
postgres, which has been mentioned now and then over the years. Basically,
we have five conflict cases that need to be resolved by breaking one or the
other, namely:
-N
postmaster: max_connections
postgres: do not e
On Mon, Sep 19, 2005 at 03:59:35PM -0400, Tom Lane wrote:
> Patrick Welche <[EMAIL PROTECTED]> writes:
> > I seem to have an unhappy postgresql:
>
> Let's see a test case, not a stack trace.
I haven't set up the minimalist test case yet, but the 2 tables involved
are incredibly simple. stats.id i
Patrick Welche <[EMAIL PROTECTED]> writes:
> I seem to have an unhappy postgresql:
Let's see a test case, not a stack trace.
regards, tom lane
On Mon, Sep 19, 2005 at 06:12:54PM +0100, Patrick Welche wrote:
> #15 0x081a4c2f in exec_simple_query (
> query_string=0x834501c "select timesliced, count(stats_id) from trans
> left j
I just truncated one line too early... the query was:
# explain select timesliced, count(stats_id) from trans le
I seem to have an unhappy postgresql:
(gdb) bt
#0 0xbd99871b in kill () from /usr/lib/libc.so.12
#1 0xbda217e7 in abort () from /usr/lib/libc.so.12
#2 0x0820c1fa in ExceptionalCondition (
conditionName=0x8298920 "!(batchno > hashtable->curbatch)",
errorType=0x823919f "FailedAssertion",
On Tue, 2004-11-16 at 16:25 +1100, Neil Conway wrote:
> Attached is a revised patch
Applied to HEAD, and backpatched to REL7_4_STABLE.
-Neil
Neil Conway <[EMAIL PROTECTED]> writes:
> Attached is a revised patch -- I just did the check at the end of
> transformStmt(),
Looks OK offhand.
> BTW I figure this should be backpatched to REL7_4_STABLE. Barring any
> objections I will do that (and apply to HEAD) this evening.
No objection here
On Mon, 2004-11-15 at 20:53 -0500, Tom Lane wrote:
> I think the SELECT limit should be MaxTupleAttributeNumber not
> MaxHeapAttributeNumber.
Ah, true -- I forgot about the distinction...
> What I think needs to happen is to check p_next_resno at some point
> after the complete tlist has been bui
On Mon, 2004-11-15 at 21:08 -0500, Tom Lane wrote:
> Are we going to try to test whether the behavior is appropriate when
> running out of memory to store the tlist?
We absolutely should: segfaulting on OOM is not acceptable behavior.
Testing that we recover safely when palloc() elogs (or _any_ ro
Neil Conway <[EMAIL PROTECTED]> writes:
> On Sun, 2004-11-14 at 11:24 +, Simon Riggs wrote:
>> Does this mean that we do not have
>> regression tests for each maximum setting ... i.e. are we missing a
>> whole class of tests in the regression tests?
> That said, there are some minor logistical
Neil Conway <[EMAIL PROTECTED]> writes:
> Attached is a patch. Not entirely sure that the checks I added are in
> the right places, but at any rate this fixes the three identified
> problems for me.
I think the SELECT limit should be MaxTupleAttributeNumber not
MaxHeapAttributeNumber. The point o
On Sun, 2004-11-14 at 11:24 +, Simon Riggs wrote:
> This seems too obvious a problem to have caused a bug
Well, I'd imagine that we've checked CREATE TABLE et al. with
somewhat-too-large values (like 2000 columns), which wouldn't be
sufficiently large to trigger the problem.
> presumably this
On Sun, 2004-11-14 at 18:29 -0500, Tom Lane wrote:
> Good analysis. We can't check earlier than DefineRelation AFAICS,
> because earlier stages don't know about inherited columns.
>
> On reflection I suspect there are similar issues with SELECTs that have
> more than 64K output columns. This pro
Neil Conway <[EMAIL PROTECTED]> writes:
> This specific assertion is triggered because we represent attribute
> numbers throughout the code base as a (signed) int16 -- the assertion
> failure has occurred because an int16 has wrapped around due to
> overflow. A fix would be to add a check to Def
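A standalone illustration of the kind of guard being proposed here;
MaxHeapAttributeNumber (1600) and MaxTupleAttributeNumber (1664) are the real
PostgreSQL limits, but the check below is only a sketch, not the actual
DefineRelation code:

#include <stdio.h>
#include <stdint.h>

/* Stored tables are limited to 1600 columns; intermediate target lists may
 * go slightly higher (1664).  Either way the count must be validated before
 * it is narrowed to a signed 16-bit attribute number, otherwise a value
 * like 66000 silently wraps and only trips an assertion much later. */
#define MAX_HEAP_ATTRIBUTE_NUMBER 1600

static int
check_column_count(int ncolumns)
{
    if (ncolumns > MAX_HEAP_ATTRIBUTE_NUMBER)
    {
        fprintf(stderr, "tables can have at most %d columns\n",
                MAX_HEAP_ATTRIBUTE_NUMBER);
        return -1;
    }
    /* now safe to narrow to the on-disk attribute number width */
    int16_t last_attno = (int16_t) ncolumns;
    printf("ok, last attribute number = %d\n", last_attno);
    return 0;
}

int main(void)
{
    check_column_count(1600);       /* accepted */
    check_column_count(66001);      /* rejected before the int16 wraps */
    return 0;
}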
On Sun, 2004-11-14 at 10:05, Neil Conway wrote:
> Joachim Wieland wrote:
> > this query makes postmaster (beta4) die with signal 11:
> >
> > (echo "CREATE TABLE footest(";
> > for i in `seq 0 66000`; do
> > echo "col$i int NOT NULL,";
> > done;
> > echo "PRIMARY KEY(col0));") |
Joachim Wieland wrote:
> this query makes postmaster (beta4) die with signal 11:
> (echo "CREATE TABLE footest(";
> for i in `seq 0 66000`; do
> echo "col$i int NOT NULL,";
> done;
> echo "PRIMARY KEY(col0));") | psql test
> ERROR: tables can have at most 1600 columns
> LOG: serve
Hi,
this query makes postmaster (beta4) die with signal 11:
(echo "CREATE TABLE footest(";
for i in `seq 0 66000`; do
echo "col$i int NOT NULL,";
done;
echo "PRIMARY KEY(col0));") | psql test
ERROR: tables can have at most 1600 columns
LOG: server process (PID
We use postgresql 7.4 running on a modified redhat
linux system as our database to store network related
data. The tables have millions of rows and several
joins on these tables are typically done in response
to user queries. The database itself takes about 40Gb
of disk space. Our application uses
I ran into this too. Patched the code with Tom's change and it works fine.
Thanks again Tom!
Richard Schilling
On 2003.07.17 11:04 Hannu Krosing wrote:
> Tom Lane wrote on Thu, 17.07.2003 at 19:49:
> > Ugh. The reason we hadn't seen this happen in the field was that it is
> > a bug I introduce
After a long battle with technology,
"Merlin Moncure" <[EMAIL PROTECTED]>, an earthling, wrote:
>INAL, but I would read carefully over the Oracle license agreement
>and redistribution allowances before doing this, especially if the
>database is installed in a commercial environment. With all the
Hi all,
last week (27/7/2003) I made a post with the subject
"postmaster core ( finally I have it )",
at that time I was suspecting that the core was caused by
a select on a view (the view is always the same one that causes the core)
that was running together with a vacuum; Tom Lane told me that
is reall
"Tom Lane" <[EMAIL PROTECTED]> writes:
> "Mendola Gaetano" <[EMAIL PROTECTED]> writes:
> > It happens once in a while, but always at 00 minutes. This select is performed
> > every 20 minutes, and
> > the core happens always at 00, never at 20 and never at 40!
>
> Now that is very interesting ... why would that
"Mendola Gaetano" <[EMAIL PROTECTED]> writes:
> It happens once in a while, but always at 00 minutes. This select is performed every
> 20 minutes, and
> the core happens always at 00, never at 20 and never at 40!
Now that is very interesting ... why would that be?
Could we see the definition of this view?
>
From: "Tom Lane" <[EMAIL PROTECTED]>
> "Mendola Gaetano" <[EMAIL PROTECTED]> writes:
> > From: "Tom Lane" <[EMAIL PROTECTED]>
> >> I suspect some form of
> >> data corruption in the pg_rewrite row(s) for this table. Do you
> >> see any misbehavior when you do
> >>
> >> select * from pg_rewrite whe
"Mendola Gaetano" <[EMAIL PROTECTED]> writes:
> From: "Tom Lane" <[EMAIL PROTECTED]>
>> I suspect some form of
>> data corruption in the pg_rewrite row(s) for this table. Do you
>> see any misbehavior when you do
>>
>> select * from pg_rewrite where ev_class = 'v_psl_package_info'::regclass
> Al
From: "Tom Lane" <[EMAIL PROTECTED]>
> "Mendola Gaetano" <[EMAIL PROTECTED]> writes:
> > The process that got killed always made the same select (with a different
> > id_package):
>
> > SELECT id_publisher, publisher_name, id_package, package_name
> > FROM v_psl_package_info
> > WHERE id_package = 177;
>
"Mendola Gaetano" <[EMAIL PROTECTED]> writes:
> The process that got killed always made the same select (with a different
> id_package):
> SELECT id_publisher, publisher_name, id_package, package_name
> FROM v_psl_package_info
> WHERE id_package = 177;
> (gdb) where
> #0 0x08171fdd in RelationBuildRuleLo
Hi all,
for a long time now (in the meantime I have upgraded Postgres four times and
am now using 7.3.3) I have been getting, at least once a week, a signal 11 on
a backend, and as you can imagine, with the subsequent drop of all
connections; finally I now have the core.
The process that got killed always made the
Tom Lane wrote on Thu, 17.07.2003 at 19:49:
> Ugh. The reason we hadn't seen this happen in the field was that it is
> a bug I introduced in a patch two months ago :-(
>
> 7.3.3 will in fact fail to start up, with the above error, any time the
> last record of the WAL file ends exactly at a page
Hannu Krosing <[EMAIL PROTECTED]> writes:
> When running PostgreSQL 7.3.3-1 (from rpm's) on Redhat 9.0 I got the
> following in logs and the postmaster will not start up.
> PANIC: XLogWrite: write request 0/30504000 is past end of log
> 0/30504000
Ugh. The reason we hadn't seen this happen in th
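For reference, the offset in that PANIC message is an exact multiple of the
WAL page size, which matches the ends-exactly-at-a-page-boundary condition
described just above; a quick standalone check (assuming the default 8 kB WAL
page size):

#include <stdio.h>
#include <stdint.h>

#define XLOG_BLCKSZ 8192            /* default WAL page size */

int main(void)
{
    uint32_t xrecoff = 0x30504000;  /* from "write request 0/30504000" */

    /* An offset that is an exact multiple of the page size means the
     * previous record ended exactly at a page boundary. */
    printf("page-aligned: %s\n",
           (xrecoff % XLOG_BLCKSZ) == 0 ? "yes" : "no");
    return 0;
}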