[EMAIL PROTECTED] writes:
> The capability to move objects to other schemas would be quite
> useful.
I agree. It's not utterly trivial to implement (for one thing, you
need to move any dependent objects like indexes to the new schema),
but some form of this functionality would be a useful thing to
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> ... But how about
>> 42$foo$
>> This is a syntax error in 7.4, and we propose to redefine it as an
>> integer literal '42' followed by a dollar-quote start symbol.
> The test should not succeed anywhere in the string '42$foo$'.
No, i
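Tom's reply is cut off, but the proposed tokenization can be sketched. Below is a hedged illustration (not the backend's actual flex rules; the function names are invented) of treating `42$foo$` as an integer literal followed by a dollar-quote start symbol, where the tag may be empty or identifier-like but must not start with a digit:

```python
import re

# Dollar-quote start: '$', an optional identifier-like tag, '$'.
# Mirrors (approximately) the backend lexer's dolq_start/dolq_cont
# character classes; this is a sketch, not the real scanner.
DOLLAR_QUOTE_START = re.compile(r'\$([A-Za-z_][A-Za-z_0-9]*)?\$')

def split_leading_integer(text):
    """Return (integer_literal, dollar_quote_start) if text begins with
    an integer immediately followed by a dollar-quote start symbol,
    else None."""
    m = re.match(r'(\d+)', text)
    if not m:
        return None
    d = DOLLAR_QUOTE_START.match(text, m.end())
    if not d:
        return None
    return m.group(1), d.group(0)
```

Under this rule `42$foo$` splits into `42` and `$foo$`, while `42$1x$` does not (the tag cannot start with a digit).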
> Interesting, when I went to copy my data directory out of the way, I
> received this from cp:
>
> cp: data/base/16976/17840: Result too large
>
> might be a clue
I don't think it's PostgreSQL. I would suggest unmounting the volume and
running fsck (or the equivalent for your environment).
I
Starting in single-user mode and reindexing the database didn't fix the
error, although the reindex itself seemed to run just fine.
Vacuum verbose ran until it hit the tfxtrade_details table and then it
died with that same error. It didn't whine about any other problems
prior to dying.
INFO: --Relation publi
Hello,
I personally ran into the exact same thing with another customer.
They are running Red Hat 8.0 (kernel 2.4.20 at the time). We had to upgrade
them to 2.4.23 and reboot. Worked like a charm. This was about two months
ago. I swear it was almost the exact same error.
Sincerely,
Joshua D. Drake
Hello,
There are a couple of things it could be. I would suggest that you take
down the database, start it up with -P (I think it is -o '-P'; it might
be -p '-O', I don't recall) and try to reindex the database itself.
You can also do a vacuum verbose and see if you get some more errors you
m
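The flags above are half-remembered; here is a hedged sketch of the 7.3-era standalone-backend procedure (the data directory path and database name "mydb" are placeholders; check the REINDEX documentation for your exact version):

```shell
# Stop the postmaster first.
pg_ctl stop -D /usr/local/pgsql/data
# Start a standalone backend. -P ignores system indexes (so damaged
# ones cannot get in the way); -O allows system table changes.
postgres -D /usr/local/pgsql/data -O -P mydb
# At the "backend>" prompt, issue:
#   REINDEX DATABASE mydb FORCE;
# then exit (ctrl-D) and restart the postmaster normally.
```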
Kernel 2.4.23 on Red Hat 8.0. Please cc any response from the
linux kernel list. TIA.
On or about 7:50am Friday 13 2004 my postgresql server
breaks down. I can ssh and use "top" and "ps", but
postgresql stops accepting connections. A small perl
script that logs system load average also hangs. I
cannot
Jo Voordeckers wrote:
> What exactly are these PG_CLOG files for... they seem pretty
> important, yet don't correspond to a single DB. What I don't
> understand is that some queries error on this while others run
> fine... And again, how can you identify the PG_CLOG files'
> corresponding DBs?
Th
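The answer is truncated, but the arithmetic behind the pg_clog files can be sketched. pg_clog is cluster-wide commit-status bookkeeping, two bits per transaction ID, which is why the files don't correspond to any single database. A hedged illustration (constants assume the 7.x/8.x defaults of 2 status bits per transaction and 256 kB segment files; the function name is invented):

```python
BITS_PER_XACT = 2
SEGMENT_BYTES = 256 * 1024  # 32 pages of 8 kB each, by default
XACTS_PER_SEGMENT = SEGMENT_BYTES * 8 // BITS_PER_XACT  # 1,048,576

def clog_segment(xid):
    """Return the pg_clog file name (4 hex digits) that holds the
    commit status of the given transaction ID."""
    return format(xid // XACTS_PER_SEGMENT, '04X')
```

So transaction 3,000,000, for example, lands in segment file `0002`; any table in any database touched by that transaction consults the same file.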
Both vacuum [full] and reindex fail with that same error.
vacuum is run regularly via a cron job.
-jason
On Feb 14, 2004, at 2:29 PM, Joshua D. Drake wrote:
Hello,
When was the last time you ran a reindex? Or a vacuum / vacuum full?
Sincerely,
Joshua D. Drake
On Sat, 14 Feb 2004, Jason Essing
Hello,
When was the last time you ran a reindex? Or a vacuum / vacuum full?
Sincerely,
Joshua D. Drake
On Sat, 14 Feb 2004, Jason Essington wrote:
> I am running PostgreSQL 7.3.3 on OS X Server 10.2
>
> The database has been running just fine for quite some time now, but
> this morning it be
Can we clarify what is meant by the client? It is my
expectation/desire that the client library would handle this as a
setting similar to "AutoCommit", which would implicitly protect each
statement within a nested block (savepoint), causing only itself to
abort. Such as, "OnError=>[abort|continue
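One way to picture the proposed client-side setting: the library brackets each statement with its own savepoint, releasing it on success and rolling back to it on failure. A hedged sketch (the `OnError` option and every name below are hypothetical; the code only generates the SQL such a client might issue):

```python
def wrap_with_savepoints(statements):
    """Generate the SQL a client library might issue with a
    hypothetical OnError=continue setting: each statement is guarded
    by its own savepoint, so a failure aborts only that statement
    rather than the whole transaction block."""
    out = []
    for i, stmt in enumerate(statements):
        sp = f"s{i}"
        out.append(f"SAVEPOINT {sp}")
        out.append(stmt)
        # On an error the client would instead issue: ROLLBACK TO <sp>
        out.append(f"RELEASE SAVEPOINT {sp}")
    return out
```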
On Thu, 12 Feb 2004 21:18:10 +0100, Jo Voordeckers wrote:
> restored from a live system backup. One CLOG file turned out to be
> missing according to syslog; however, no database transactions are used
> in this database application.
Hmm, I think I misunderstood the transactional nature of the PG_C
I am running PostgreSQL 7.3.3 on OS X Server 10.2
The database has been running just fine for quite some time now, but
this morning it began pitching the error:
ERROR: cannot read block 176 of tfxtrade_details: Numerical result
out of range
any time the table tfxtrade_details is accessed.
A d
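Since the block size is fixed, the failing read maps to a specific file offset, which helps when correlating with OS-level errors such as the earlier cp failure on the same file. A small illustration (assumes the default 8 kB BLCKSZ the server is normally built with):

```python
BLCKSZ = 8192  # default block size; a custom build could differ

def block_offset(block_number):
    """Byte offset where a given block starts within its relation
    segment file; "cannot read block 176" means the read at this
    offset failed, here with an OS errno (ERANGE) rather than a
    PostgreSQL-level error."""
    return block_number * BLCKSZ
```

Block 176 thus starts 1,441,792 bytes into the file; an errno surfacing from a plain read there points at the kernel or filesystem rather than PostgreSQL itself.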
> and I'm willing to entertain other suggestions.
Very nice, but you missed the most important one: Command Tag.
--
Rod Taylor
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
PGP Key: http://www.rbt.ca/rbtpub.asc
> Oh, yea, that would be bad. So you want to invalidate the entire
> session on any error? That could be done.
>
> --
> Bruce Momjian| http://candle.pha.pa.us
> [EMAIL PROTECTED] | (610) 359-1001
Well, that's exactly the current behaviour, which crea
Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
I ended up not using a regex, which seemed to be a little heavy-handed,
but just writing a small custom recognition function that should (and I
think does) mimic the pattern recognition for these tokens used by the
backend lexer.
Tom Lane wrote:
This is a dead end. The --disable-triggers hack is already a time bomb
waiting to happen, because all dump scripts using it will break if we
ever change the catalog representations it is hacking. Disabling rules
by such methods is no better an idea; it'd double our exposure to
com
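For context, the "time bomb" is that pg_dump's --disable-triggers output writes pg_catalog directly instead of going through any supported command. A hedged reconstruction of roughly what it emits (the exact catalog text varies by version, and the helper function here is invented for illustration):

```python
def disable_triggers_sql(table):
    """Return roughly the statement pair pg_dump --disable-triggers
    emits around a table's data: zero out pg_class.reltriggers so the
    backend skips trigger firing during the restore, then recompute
    the count afterwards. Because this pokes the catalog's on-disk
    representation directly, any change to that representation breaks
    every dump script that used it."""
    return [
        f"UPDATE pg_catalog.pg_class SET reltriggers = 0 "
        f"WHERE oid = '{table}'::pg_catalog.regclass;",
        f"UPDATE pg_catalog.pg_class SET reltriggers = "
        f"(SELECT count(*) FROM pg_catalog.pg_trigger "
        f"WHERE tgrelid = '{table}'::pg_catalog.regclass) "
        f"WHERE oid = '{table}'::pg_catalog.regclass;",
    ]
```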
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> I ended up not using a regex, which seemed to be a little heavy-handed,
> but just writing a small custom recognition function that should (and I
> think does) mimic the pattern recognition for these tokens used by the
> backend lexer.
I looked at t
Joe Conway <[EMAIL PROTECTED]> writes:
> I didn't dispute the fact that disabling triggers (without unsupported
> hacks) is useful. I did agree with Tom that doing so with "permanent"
> commands is dangerous. I think the superuser-only SET variable idea is
> the best one I've heard for a way to
Does anyone know if the code for the Main Memory Storage Manager is
available somewhere (Berkeley?)?
Also, in released versions, MM.c is included but not used; does anyone
know if it should work if we define STABLE_MEMORY_STORAGE, or does a
lot of coding have to be done for it to work?
Please CC
Andreas Pflug wrote:
Joe Conway wrote:
Christopher Kings-Lynne wrote:
As an implementation issue, I wonder why these things are
hacking permanent on-disk data structures anyway, when what is
wanted is only a temporary suspension of triggers/rules within
a single backend. Some kind of superuser-onl
Joe Conway wrote:
Christopher Kings-Lynne wrote:
As an implementation issue, I wonder why these things are hacking
permanent on-disk data structures anyway, when what is wanted is only a
temporary suspension of triggers/rules within a single backend. Some
kind of superuser-only SET variable migh