Patryk Kordylewski wrote:
Hi Tom,
we are running PostgreSQL 8.2.4.
Tom Lane wrote:
Patryk Kordylewski [EMAIL PROTECTED] writes:
can someone help me understand why I get this error when I try to make
an update on that view? It seems that something is wrong with the
subselect.
Andrus wrote:
Apart from that: there was a bug in the ODBC driver prior to 08.02.0402 which
resulted in this error message whenever binary data of type
SQL_LONGVARBINARY
was sent.
Where is the 0402 driver?
The snapshot drivers can be found here:
Patryk Kordylewski wrote:
I made a test case which is not working for me. Perhaps something is wrong
with the SELECT DISTINCT ON () syntax when I specify more than 1 column?
At first this was working when I specified only one column in the SELECT
DISTINCT ON clause, then I inserted a dummy
Michael Glaesemann [EMAIL PROTECTED] writes:
char * theDigits = malloc(sizeof(XTypeMaximumStringLength));
...
char * subscriberNumber = malloc(XTypeMaximumStringLength);
One of those two is wrong, I suspect the first one. I wonder how you define
XTypeMaximumStringLength as
Hi,
How can I query the following information from the catalog:
- max number of index columns;
- max number of function arguments;
As far as I know these are compile-time constants, default is 32.
Thanks,
Otto
Hi,
I have a query that has run on 3 other *identical* machines (hardware,
software, postgresql.conf identical, just other data in the database)
that give me an out of memory error every time I try (see below).
Anyone any idea of where or how to look for the problem or the
solution?
From the
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.
What is this about and how do I solve this? A df -h on my system shows this:
Filesystem    Type
Phoenix Kiula wrote:
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.
What is this about and how do I solve this? A df -h on my system shows this:
Ok. I'd try locally from the machine first, so you know the krb
configurations are absolutely identical all the way. Just change your
pg_hba so it uses krb5 (and don't forget to use -h - krb5 only works over
TCP/IP sockets)
That said, I think your problem is that you use postgres as your SPN.
Looks like you're out of disk space on:
/dev/sda2    ext3    9.9G    9.5G    0    100%    /var
is this where your database resides?
Phoenix Kiula wrote:
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file
-- Forwarded message --
From: Sibte Abbas [EMAIL PROTECTED]
Date: Aug 31, 2007 8:32 AM
Subject: Re: [GENERAL] Query the catalog
To: Ottó Havasvölgyi [EMAIL PROTECTED]
On 8/31/07, Ottó Havasvölgyi [EMAIL PROTECTED] wrote:
Hi,
How can I query the following information from the
Hello,
I've recently been busy improving a query that yields a fixed number of
random records matching certain conditions. I have tried all the usual
approaches, and although they do work, they're all limited in some way
and don't really translate well to what you want. They're kludges, IMHO.
On 8/31/07, Phoenix Kiula [EMAIL PROTECTED] wrote:
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.
What is this about and how do I solve this?
On 8/31/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.
What is
Phoenix Kiula wrote:
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.
What is this about and how do I solve this? A df -h on my system shows this:
On 8/31/07, Alban Hertroys [EMAIL PROTECTED] wrote:
Hello,
I've recently been busy improving a query that yields a fixed number of
random records matching certain conditions. I have tried all the usual
approaches, and although they do work, they're all limited in some way
and don't translate
On 31/08/2007, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.
What
Alban Hertroys wrote:
I've recently been busy improving a query that yields a fixed
number of random records matching certain conditions.
Dear Santa,
I'd like my database to have functionality analogous to how
LIMIT works,
but for other - non-sequential - algorithms.
I was thinking
Dear Santa,
I'd like my database to have functionality analogous to how LIMIT works,
but for other - non-sequential - algorithms.
There was some discussion before to possibly reuse the algorithm ANALYZE
is using for sampling some given percentage of the table data and
provide this for some
On Fri, Aug 31, 2007 at 02:42:18PM +0200, Alban Hertroys wrote:
Examples:
* random(maxrows) would return random rows from the resultset.
* median() would return the rows in the middle of the result set (this
would require ordering to be meaningful).
It would be possible to write an aggregate
Hi,
Why not generate a random number in your application and then:
SELECT *
FROM table_x
WHERE condition = true
OFFSET generated_random_number
LIMIT xx
Kaloyan Iliev
Alban Hertroys wrote:
Hello,
I've recently been busy improving a query that yields a fixed number of
random records matching
Joost Kraaijeveld [EMAIL PROTECTED] writes:
I have a query that has run on 3 other *identical* machines (hardware,
software, postgresql.conf identical, just other data in the database)
that give me an out of memory error every time I try (see below).
Anyone any idea of where or how to look
On 31/08/2007, Josh Tolley [EMAIL PROTECTED] wrote:
On 8/31/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
In addition to what others have already said, when things calm down
you should consider implementing some sort of monitoring system that
is configured to start
On Aug 31, 2007, at 3:02 , Gregory Stark wrote:
Michael Glaesemann [EMAIL PROTECTED] writes:
char * theDigits = malloc(sizeof(XTypeMaximumStringLength));
...
char * subscriberNumber = malloc(XTypeMaximumStringLength);
One of those two is wrong, I suspect the first one.
Thanks,
So what's the problem? Increase max_locks_per_transaction. The reason
we have that as a tunable is mainly to support systems with very large
numbers of tables.
So increasing this value into the thousands is a reasonable approach?
If it is reasonable, that's fine. I'll certainly be
John Prevost [EMAIL PROTECTED] writes:
So what's the problem? Increase max_locks_per_transaction. The reason
we have that as a tunable is mainly to support systems with very large
numbers of tables.
So increasing this value into the thousands is a reasonable approach?
If it is reasonable,
On Aug 31, 2007, at 8:35 AM, Phoenix Kiula wrote:
Thanks everyone. Yes, /var was full because of the backups that are
going there.
Database is back working.
It was my backup script. It is set to save a daily backup to the /var/
folder, which is not clever. I'll change it to be in the backup
On Aug 31, 2007, at 8:34 AM, Kaloyan Iliev wrote:
Alban Hertroys wrote:
Hello,
I've recently been busy improving a query that yields a fixed
number of
random records matching certain conditions. I have tried all the
usual
approaches, and although they do work, they're all limited in some
Hi all.
I've a table with large objects, here is the definition:
PAFlow-emmebi=# \d documenti
       Tabella "public.documenti"
 Colonna |         Tipo          | Modificatori
---------+-----------------------+--------------
 id_doc  | character varying(50) | not null
 file    | oid                   |
On Thu, 2007-08-30 at 21:43 -0400, Merlin Moncure wrote:
Well, a SRF may be unsuitable for various reasons, but maybe views are
better. I really like views more and more lately (better than
functions as a rule, I think).
you have some query, select yadda
create view log_yadda as
select
Merlin Moncure wrote:
I seem to recall giving out a query about that in the IRC channel a
while back...so if you got it from me, now I'll attempt to finish the
job :-).
If you can get postfix to look at a view, maybe you could
CREATE VIEW email_v AS
SELECT
usr_email, dom_name,
Patryk Kordylewski [EMAIL PROTECTED] writes:
I think i found the problem and made 2 sql files to reproduce it. The
only difference between this 2 files is the order by-clause of the view...
http://fooby.imosnet.de/order_by_working.txt
http://fooby.imosnet.de/order_by_not_working.txt
Hmm,
Marco Bizzarri wrote:
       Tabella "public.documenti"
 Colonna |         Tipo          | Modificatori
---------+-----------------------+--------------
 id_doc  | character varying(50) | not null
 file    | oid                   |
 kind    | integer               | not null
On Fri, 2007-08-31 at 13:51 -0400, Merlin Moncure wrote:
how about this:
create view log_yadda as
select yadda where (select func());
the parens around the function call force it to be evaluated as a scalar.
if you declare func() immutable, you can (maybe) remove the parens
because
On 8/31/07, Jeff Davis [EMAIL PROTECTED] wrote:
On Fri, 2007-08-31 at 13:51 -0400, Merlin Moncure wrote:
how about this:
create view log_yadda as
select yadda where (select func());
the parens around the function call force it to be evaluated as a scalar.
if you declare
Merlin Moncure [EMAIL PROTECTED] writes:
At present, immutable functions are only treated as constants during a
query, which is what we want (no problems with prepare).
Uh, no, they'd be folded to constants at plan time, which is exactly
what Jeff doesn't want AFAICS.
The subselect trick is a
On 8/31/07, Tom Lane [EMAIL PROTECTED] wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
At present, immutable functions are only treated as constants during a
query, which is what we want (no problems with prepare).
Uh, no, they'd be folded to constants at plan time, which is exactly
what
On 8/31/07, Jeff Davis [EMAIL PROTECTED] wrote:
On Thu, 2007-08-30 at 21:43 -0400, Merlin Moncure wrote:
Well, a SRF may be unsuitable for various reasons, but maybe views are
better. I really like views more and more lately (better than
functions as a rule, I think).
you have some
On 8/31/07, Merlin Moncure [EMAIL PROTECTED] wrote:
On 8/31/07, Jeff Davis [EMAIL PROTECTED] wrote:
On Thu, 2007-08-30 at 21:43 -0400, Merlin Moncure wrote:
Well, a SRF may be unsuitable for various reasons, but maybe views are
better. I really like views more and more lately (better than
On Fri, 2007-08-31 at 16:42 -0400, Merlin Moncure wrote:
On 8/31/07, Tom Lane [EMAIL PROTECTED] wrote:
Merlin Moncure [EMAIL PROTECTED] writes:
At present, immutable functions are only treated as constants during a
query, which is what we want (no problems with prepare).
Uh, no,
On Fri, 2007-08-31 at 22:34 +0800, Phoenix Kiula wrote:
On 31/08/2007, Josh Tolley [EMAIL PROTECTED] wrote:
On 8/31/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
In addition to what others have already said, when things calm down
you should consider implementing
On 31/08/2007, Jeff Davis [EMAIL PROTECTED] wrote:
On Fri, 2007-08-31 at 22:34 +0800, Phoenix Kiula wrote:
On 31/08/2007, Josh Tolley [EMAIL PROTECTED] wrote:
On 8/31/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
In addition to what others have already said,
Hi guys,
I've got a bunch of PostgreSQL servers connected to external storage,
where a single server needs to be serving as WO database dealing with
INSERTs only, and a bunch of other guys need to obtain a copy of that
data for RO serving, without taking resources on WO server.
The idea is to
Alex Vinogradovs [EMAIL PROTECTED] writes:
The idea is to have say 2 raw devices which would be used as 2 WAL
segments (round-robin). RO servers will go after the one that's not used
at a given time with something like xlogdump utility and produce INSERT
statements to be then executed
WAL segments already have their structure. Filesystem would be an
overhead, plus I mentioned access to the same storage from
multiple hosts - no filesystem mounting, synchronization and
other problems.
I figured PG folks aren't interested in adding enterprise-level storage
functionality (movable
Alex Vinogradovs [EMAIL PROTECTED] writes:
WAL segments already have their structure. Filesystem would be an
overhead,
Just because you'd like that to be true doesn't make it true. We have
to manage a variable number of active segments; track whether a given
segment is waiting for future use,
But would it be a problem to have only 1 active segment at all times?
My inspiration pretty much comes from Oracle, where redo logs are
pre-configured and can be switched by a command issued to the instance.
Just because you'd like that to be true doesn't make it true. We have
to manage a
Alex Vinogradovs wrote:
WAL segments already have their structure. Filesystem would be an
overhead,
In this case you can choose a filesystem with lower overhead. For
example with WAL you don't need a journalling filesystem at all, so
using ext2 is not a bad idea. For Pg data files, you need
Probably you missed that part... In my setup, I need at least
2 boxes going after those files, while 3rd box keeps on writing
to them... I can't mount ext2 even in R/O mode while it's being
written to by another guy. I can't unmount it before mounting
exclusively on any of them either, since PG
Alex Vinogradovs wrote:
Hi guys,
I've got a bunch of PostgreSQL servers connected to external storage,
where a single server needs to be serving as WO database dealing with
INSERTs only, and bunch of other guys need to obtain a copy of that
Oh well, I guess I will just use some trigger to invoke a C
function and store the statements in a raw device with some
proprietary format, while the actual inserts don't take place
at all.
In case anyone has more ideas, please let me know.
On Fri, 2007-08-31 at 17:45 -0700, Joshua D. Drake
Alex Vinogradovs wrote:
Probably you missed that part... In my setup, I need at least
2 boxes going after those files, while 3rd box keeps on writing
to them... I can't mount ext2 even in R/O mode while it's being
written to by another guy. I can't unmount it before mounting
exclusively on
Yeah, that's the trick... I need high availability with
high performance and nearly real-time synchronization ;-)
Also, I've got FreeBSD here... ZFS will be out with 7.0
release, plus UFS2 has snapshotting capability too. But
the whole method isn't good enough anyway.
Oh, I see.
What I've
Alvaro Herrera [EMAIL PROTECTED] writes:
It continues with the next table if interrupted (SIGINT), but the worker
exits on any other error. I would ask you to review that code -- it's
in do_autovacuum, the PG_TRY block at the end. It was committed in rev
1.52 of autovacuum.c.
While looking
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
It continues with the next table if interrupted (SIGINT), but the worker
exits on any other error. I would ask you to review that code -- it's
in do_autovacuum, the PG_TRY block at the end. It was committed in rev
1.52 of
Alvaro Herrera [EMAIL PROTECTED] writes:
Also, note that the worst thing that can happen is that the wrong
process gets a SIGUSR1 signal, and the launcher misses an opportunity
for starting another worker and rebalancing the vacuum cost parameters.
Hmmm ... okay, but I note that part of that