[GENERAL] Strange message from pg_receivexlog

2013-08-19 Thread Sergey Konoplev
Hi all, My WAL archiving script based on pg_receivexlog reported the following error several days ago (just ignore everything before 'pg_receivexlog'; it's a message my script generates). Thu Aug 15 18:33:09 MSK 2013 ERROR archive_wal.sh: Problem occurred during WAL archiving: pg_receivexlog: coul

Re: [GENERAL] Denormalized field

2013-08-19 Thread BladeOfLight16
On Mon, Aug 19, 2013 at 4:27 AM, Vik Fearing wrote: > Yes, I would use a trigger for this. > > > This is definitely the right answer, but keep in mind that this will slow down your inserts since it calls slow_function for each insert. Make sure you can afford that performance hit.

Re: [GENERAL] Create a deferrably-unique index

2013-08-19 Thread Tom Lane
Paul Jungwirth writes: >> Deferrability is a property of a constraint, not an index > Yes, but creating a unique constraint implicitly creates an index, and > creating a unique index implicitly creates a constraint. No, it doesn't. I'm using "constraint" in a technical sense here, that is somet

Re: [GENERAL] Create a deferrably-unique index

2013-08-19 Thread Paul Jungwirth
> Deferrability is a property of a constraint, not an index Yes, but creating a unique constraint implicitly creates an index, and creating a unique index implicitly creates a constraint. So I'm wondering whether I can create a pair where the index is partial and the constraint is deferrable. It s

Re: [GENERAL] Create a deferrably-unique index

2013-08-19 Thread Tom Lane
Paul Jungwirth writes: > I'm trying to create a unique index where the unique constraint is > `deferrable initially immediate`. But I don't see any way to do this > in the syntax of the `create index` command. It looks like the only > way to do it is via `alter table foo add unique`. Is that right

[GENERAL] AccessShareLock on pg_authid

2013-08-19 Thread Granthana Biswas
Hi, Processes are failing due to the following error on PostgreSQL 8.3.5: FATAL: lock AccessShareLock on object 0/1260/0 is already held OID 1260 belongs to pg_authid. This error does not occur for every transaction. I have found these two links related to the above error, but they are not quite helpful:

[GENERAL] Create a deferrably-unique index

2013-08-19 Thread Paul Jungwirth
I'm trying to create a unique index where the unique constraint is `deferrable initially immediate`. But I don't see any way to do this in the syntax of the `create index` command. It looks like the only way to do it is via `alter table foo add unique`. Is that right, or can I do it as part of `cre
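The replies in this thread point at the answer: CREATE INDEX itself has no DEFERRABLE clause, but the table-level constraint form does, and it builds its backing index implicitly. A minimal sketch (table and column names are hypothetical):

```sql
CREATE TABLE foo (bar integer);

-- CREATE UNIQUE INDEX accepts no DEFERRABLE clause; the constraint
-- form does, and creates the unique index as a side effect:
ALTER TABLE foo
    ADD CONSTRAINT foo_bar_key UNIQUE (bar)
    DEFERRABLE INITIALLY IMMEDIATE;
```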

Re: [GENERAL] please suggest i need to test my upgrade

2013-08-19 Thread Vick Khera
On Wed, Aug 14, 2013 at 7:15 AM, Albe Laurenz wrote: > > This is the first thing that comes to mind: > > http://petereisentraut.blogspot.co.at/2008/03/readding-implicit-casts-in-postgresql.html > > > > But you may encounter other incompatibilities. > > Read the release notes of all major releases b

Re: [GENERAL] Select performance variation based on the different combinations of using where lower(), order by, and limit

2013-08-19 Thread Jeff Janes
On Sun, Aug 18, 2013 at 4:46 PM, Tyler Reese wrote: > I haven't heard of raising the statistics target, so I'll read up on that. > A few days ago, all 4 cases were responding equally fast. I had been > messing around with the postgres settings, and I went and dropped all of the > indexes and rec
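For reference, raising a statistics target is a per-column setting followed by a fresh ANALYZE; a short sketch with hypothetical table and column names:

```sql
-- Raise the planner's per-column sample detail (the default target
-- is 100), then rebuild statistics so the planner sees it:
ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 1000;
ANALYZE mytable;
```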

Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Robert Sosinski
At the moment, all guids are distinct, however before I zapped the duplicates, there were 280 duplicates. Currently, there are over 2 million distinct guids. -Robert On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule wrote: > > > > 2013/8/19 Robert Sosinski > >> Hi Pavel, >> >> What kind of exam

[GENERAL] thank you

2013-08-19 Thread Basavaraj
Ya, I got the answer. Here is the code: SELECT * FROM (SELECT row_number() over(), * FROM employee) t1 RIGHT OUTER JOIN (SELECT row_number() over(), * FROM managers) t2 ON t1.row_number = t2.row_number Thank you -- View this message in context: http://postgresql.1045698.n5.nabble.com/Here-is-my
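The query above relies on row_number() exposing a column named row_number when no alias is given, and with an empty OVER() the row pairing is arbitrary. A more deterministic sketch (the ordering column is hypothetical):

```sql
-- Pair two unrelated tables row-by-row; an explicit ORDER BY inside
-- OVER() makes the numbering, and hence the pairing, reproducible:
SELECT e.*, m.*
FROM (SELECT row_number() OVER (ORDER BY id) AS rn, * FROM employee) e
RIGHT OUTER JOIN
     (SELECT row_number() OVER (ORDER BY id) AS rn, * FROM managers) m
     ON e.rn = m.rn;
```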

Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Pavel Stehule
2013/8/19 Robert Sosinski > Hi Pavel, > > What kind of example do you need? I can't give you the actual data I have > in the table, but I can give you an example query and the schema attached > below. From there, I would just put in 2 million rows worth 1.2 Gigs of > data. Average size of the t

Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Robert Sosinski
Hi Pavel, What kind of example do you need? I can't give you the actual data I have in the table, but I can give you an example query and the schema attached below. From there, I would just put in 2 million rows worth 1.2 Gigs of data. Average size of the extended columns (using the pg_colum
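For context, the duplicate-hunting aggregate this thread describes looks roughly like the sketch below (table and column names are hypothetical). Each group's array is accumulated in backend memory, which is where a query over millions of guids can become expensive:

```sql
-- Collect the ids sharing each guid; groups with more than one id
-- are the duplicates mentioned upthread:
SELECT guid, array_agg(id) AS ids
FROM things
GROUP BY guid
HAVING count(*) > 1;
```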

Re: [GENERAL] Query on a record variable

2013-08-19 Thread Giuseppe Broccolo
Hi Janek, Hi, ok :) I suppose you have a table 'table' with 'col' (text), 'dede' (text) and 'vectors' (tsvector) as fields. In this case, you can do SELECT levenshtein(col, 'string') FROM table AS lev WHERE levenshtein(col, 'string') < 10 AND LENGTH(dede) BETWEEN x AND y AND plainto_tsqu
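A sketch of the suggested query, assuming the fuzzystrmatch extension provides levenshtein() and leaving the length bounds x and y as placeholders from the original message:

```sql
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;

SELECT levenshtein(col, 'string') AS lev
FROM "table"
WHERE levenshtein(col, 'string') < 10
  AND length(dede) BETWEEN x AND y          -- x, y: placeholder bounds
  AND vectors @@ plainto_tsquery('string');
```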

Re: [GENERAL] Denormalized field

2013-08-19 Thread Luca Ferrari
On Sun, Aug 18, 2013 at 5:56 AM, Robert James wrote: > What's the best way to do this automatically? Can this be done with > triggers? (On UPDATE or INSERT, SET slow_function_f = > slow_function(new_f) ) How? > Define a before trigger that updates your column. For instance: CREATE OR REPLACE FU
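The quoted reply is cut off mid-definition; a minimal sketch of what such a BEFORE trigger could look like (function, trigger, table, and column names are hypothetical):

```sql
CREATE OR REPLACE FUNCTION set_slow_function_f() RETURNS trigger AS $$
BEGIN
    -- Recompute the denormalized column before the row is written:
    NEW.slow_function_f := slow_function(NEW.f);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER denormalize_f
    BEFORE INSERT OR UPDATE OF f ON mytable
    FOR EACH ROW EXECUTE PROCEDURE set_slow_function_f();
```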

Re: [GENERAL] Denormalized field

2013-08-19 Thread Vik Fearing
On 08/18/2013 05:56 AM, Robert James wrote: > I have a slow_function. My table has field f, and since slow_function > is slow, I need to denormalize and store slow_function(f) as a field. > > What's the best way to do this automatically? Can this be done with > triggers? (On UPDATE or INSERT, SET

Re: [GENERAL] Memory Issue with array_agg?

2013-08-19 Thread Pavel Stehule
Hello, please, can you send some example or test? Regards Pavel Stehule 2013/8/19 Robert Sosinski > When using array_agg on a large table, memory usage seems to spike up > until Postgres crashes with the following error: > > 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection