Re: [GENERAL] How to check a table content efficiently? With LIMIT and OFFSET?

2011-05-30 Thread Stefan Keller
Hi Jaime

2011/5/30 Jaime Casanova ja...@2ndquadrant.com wrote:
 On Sun, May 29, 2011 at 4:55 PM, Stefan Keller sfkel...@gmail.com wrote:

 2. There's an autovacuum background process which already does the
 job, doesn't it?

 Yes, but in its own time. If you know there has been a batch of 
 inserts/deletes you might as well run analyse immediately on that table.

 My table is a read-only table after all.
 That's another reason why I'm reluctant to use ANALYZE on the table.


 sorry, i don't follow that... why do you think that a read-only table
 doesn't need an ANALYZE?

Thanks for joining the discussion.

I'm only reluctant to do an ANALYZE as part of a periodic (hourly)
check-table-contents function.

Such an ANALYZE command is already included in the periodic
(nightly) update script which mirrors OpenStreetMap data.

BTW: I've asked before about best parameters over at pgsql-performance
(How to configure a read-only database server?, 19 April 2011) and I
am still happy to receive any hints.

Yours, Stefan

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] UTC4115FATAL: the database system is in recovery mode

2011-05-30 Thread Mathew Samuel
Hi,

I see the following error as found in pg.log:
UTC4115FATAL:  the database system is in recovery mode

Actually that message was logged repeatedly for about 4 hours according to the 
logs (I don't have access to the system itself, just the logs).

Leading up to that error were the following in pg.log:
2011-03-28 10:44:06 UTC3609LOG:  checkpoints are occurring too frequently (11 
seconds apart)
2011-03-28 10:44:06 UTC3609HINT:  Consider increasing the configuration 
parameter checkpoint_segments.
2011-03-28 10:44:18 UTC3609LOG:  checkpoints are occurring too frequently (12 
seconds apart)
2011-03-28 10:44:18 UTC3609HINT:  Consider increasing the configuration 
parameter checkpoint_segments.
2011-03-28 10:44:28 UTC3609LOG:  checkpoints are occurring too frequently (10 
seconds apart)
2011-03-28 10:44:28 UTC3609HINT:  Consider increasing the configuration 
parameter checkpoint_segments.
2011-03-28 10:44:38 UTC3609LOG:  checkpoints are occurring too frequently (10 
seconds apart)
2011-03-28 10:44:38 UTC3609HINT:  Consider increasing the configuration 
parameter checkpoint_segments.
2011-03-28 10:44:42 UTC3932ERROR:  canceling statement due to statement timeout
2011-03-28 10:44:42 UTC3932STATEMENT:  vacuum full analyze _zamboni.sl_log_1
2011-03-28 10:44:42 UTC3932PANIC:  cannot abort transaction 1827110275, it was 
already committed
2011-03-28 10:44:42 UTC3566LOG:  server process (PID 3932) was terminated by 
signal 6
2011-03-28 10:44:42 UTC3566LOG:  terminating any other active server processes
2011-03-28 10:44:42 UTC13142WARNING:  terminating connection because of crash 
of another server process
2011-03-28 10:44:42 UTC13142DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another server 
process exited abnormally and possibly corrupted shared memory.
2011-03-28 10:44:42 UTC13142HINT:  In a moment you should be able to reconnect 
to the database and repeat your command.
2011-03-28 10:44:42 UTC29834WARNING:  terminating connection because of crash 
of another server process
2011-03-28 10:44:42 UTC29834DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another server 
process exited abnormally and possibly corrupted shared memory.
2011-03-28 10:44:42 UTC29834HINT:  In a moment you should be able to reconnect 
to the database and repeat your command.
2011-03-28 10:44:42 UTC3553WARNING:  terminating connection because of crash of 
another server process
2011-03-28 10:44:42 UTC3553DETAIL:  The postmaster has commanded this server 
process to roll back the current transaction and exit, because another server 
process exited abnormally and possibly corrupted shared memory.


In fact those last 3 lines are repeated over and over until UTC4115FATAL:  the 
database system is in recovery mode is logged, which continues for 4 hours. 
At some point, 4 hours later of course, it appears that the system recovers.

The checkpoint settings in postgresql.conf are commented out, so I guess the 
defaults are being used:
#checkpoint_segments = 3# in logfile segments, min 1, 16MB each
#checkpoint_timeout = 5min  # range 30s-1h
#checkpoint_warning = 30s   # 0 is off
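
Presumably the HINT is asking for something along these lines (an illustrative 
value on my part, not tested):

checkpoint_segments = 16        # up from the default of 3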


The system where this was seen was running pgsql-8.2.6 on RHEL4.

Not sure if this is a known bug (or if it is a bug at all, or something I can 
address with a different configuration), but I thought I would post here first 
in case anyone is familiar with this issue and can suggest a possible solution. 
Any ideas?

Cheers,
Matt



Re: [GENERAL] Shared Buffer Size

2011-05-30 Thread Carl von Clausewitz
Thanks Toby, I will check it, and change it.

regards,
Carl

2011/5/30 Toby Corkindale toby.corkind...@strategicdata.com.au

 On 28/05/11 18:42, Carl von Clausewitz wrote:

  a few months ago, when I installed my first PostgreSQL, I had the
  same problem. I tried to get some information about optimal memory
  configuration, and it's working, but there wasn't any optimal memory setting
  calculator on the internet, just some guidance in the postgres documentation
  (
  http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC
  ).
  I've got FreeBSD 8.2 AMD64, with 8 GB of memory (this server is just for
  PostgreSQL and a little PHP app with 2 users), and I have these settings
  in postgresql.conf (which are not the defaults):

  [snip]

 work_mem = 64MB# min 64kB
 maintenance_work_mem = 1024MB# min 1MB
 max_stack_depth = 64MB# min 100kB


 Just a warning - but be careful about setting work_mem to high values.
 The actual memory used by a query can be many times the value, depending on
 the complexity of your query.

 In a particular query I saw last week, we were regularly exceeding the
 available memory on a server, because the query was requiring 80 times the
 value of work_mem, and work_mem had been set to a high value.

 Reducing work_mem back to just 4MB reduced memory usage by a couple of
 gigabytes, and had almost no effect on the execution time. (Actually, it was
 marginally faster - probably because more memory was left for the operating
 system's cache)

 Toby

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general





[GENERAL] Index Size

2011-05-30 Thread Nick Raj
Hi,

The cube code is provided in the postgres contrib folder. It uses the NDBOX structure.
On creating an index, its size increases at a high rate.

Its behaviour on inserting some tuples and creating the index is shown below.

1. When there is only one tuple

select pg_size_pretty(pg_relation_size('cubtest'));   -- table size without index
 pg_size_pretty
----------------
 8192 bytes
(1 row)

select pg_size_pretty(pg_total_relation_size('cubtest')); -- table size with index
 pg_size_pretty
----------------
 16 kB
(1 row)

i.e. index size is nearly 8 kB

2. When tuples are 20,000

Table size without index - 1.6 MB
Table size with index - 11 MB
i.e. index size is nearly 9.4 MB

3. When tuples are 5 lakh (500,000)

Table size without index - 40 MB
Table size with index - 2117 MB
i.e. index size is nearly 2077 MB ~ 2 GB
It takes nearly 20-25 min to create the index for the 5 lakh tuples.

Can someone tell me why the index is becoming so large?
How can I compress or reduce its size?
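
(The index can also be measured by itself; a sketch — the index name here is an 
assumption, substitute the name \di reports:)

select pg_size_pretty(pg_relation_size('cubtest_cube_idx'));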

Thanks
Nick


Re: [GENERAL] Postgres 8.3.5 - ECPG and the use of descriptors and cursors in multi-threaded programs

2011-05-30 Thread Leif Jensen
   Hello Bosco,

   Thank you for your comment. Yes, it would be nice to get some more comments 
on the allocate/deallocate on a connection issue.

   I have verified that in my case, when deallocating a prepared statement, it 
guesses the wrong connection and returns an error. (The right one does 
auto-deallocation at disconnect time, though.)

   However, I just noticed that allocating a descriptor with the AT 
connection clause, 
EXEC SQL AT :_thisDbConn ALLOCATE DESCRIPTOR :descname;
generates an ECPGallocate_desc() call without any connection name and that this 
can screw up the ECPGget_desc() function when guessing a connection. I could 
of course use:
EXEC SQL SET CONNECTION connection name;
before the allocate, but that would need mutexes all over to make sure that 
other threads will not set the connection too.

   Any idea why the ecpg pre-compiler doesn't use the named connection for the 
ALLOCATE DESCRIPTOR statement even though it allows it ?

   Please help,

 Leif


- Bosco Rama postg...@boscorama.com wrote:

 Leif Jensen wrote:
  
  Is it really not possible to use 2 separate connections within 1 thread
  at the same time? Or is it an error in the ecpg library?
 
 It should be entirely possible to run multiple connections in a
 single
 thread as long as you manage the 'AT connName' clauses properly.
 
 Though, IIRC, using an 'AT connName' clause on any sort of
 'deallocate'
 statement generates an error in ecpg:
 
   ecpg -o test.c test.pgc
   test.pgc:35: ERROR: AT option not allowed in DEALLOCATE statement
 
 This happens when trying to deallocate a query or a prepared
 statement.
 I don't use descriptors but the error message indicates it's _any_
 sort
 of deallocate.
 
 So, it would appear that you can allocate on a connection but not
 deallocate from one.  :-(
 
 I wonder if Tom or Michael can shine some light on this one?
 
 Bosco.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


RES: [GENERAL] SELECT COUNT(*) execution time on large tables (v9.0.4-1)

2011-05-30 Thread Carlos Sotto Maior (SIM)
David, Thanks for your reply.

I will probably use the strategy of a trigger-driven counter, combined with a
temporal strategy: keeping totals up to last month cached and counting only the
current month, as the current month changes rapidly.

I also apologize for not being investigative enough. I did look at the wiki but
maybe I did not pursue my doubt there in the proper manner. I will surely
get more acquainted with the wiki to keep this channel clean of repeated
questions.

Once again, thank you very much.

Carlos Sotto Maior
+55 11 8244-7899
cso...@sistemassim.com.br

Sistemas Sim Serviços e Tecnologia Ltda.
+55 11 5041-3086
Rua Tenente Gomes Ribeiro, 78
Vila Clementino (Próximo ao Metro Santa Cruz)
São Paulo - SP 
04038-040

-Original Message-
From: David Johnston [mailto:pol...@yahoo.com] 
Sent: Friday, May 27, 2011 17:49
To: 'Carlos Sotto Maior (SIM)'; pgsql-general@postgresql.org
Subject: RE: [GENERAL] SELECT COUNT(*) execution time on large tables
(v9.0.4-1)

Counting live data is inherently imprecise.  There are supposedly some
system tables that can give you rough numbers.

You would be better off figuring out an alternative method to get the data
you desire and stop continually recounting all 5.7M records. 

A trigger-driven counter, for insert and delete, is probably the most
obvious method (a sketch follows below). Also, say for temporal data, cache the
prior monthly counts and only perform an actual count over the current
(changing) month(s).
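
A minimal sketch of the trigger-driven counter (the names counted_table and
row_counts are assumptions, and note that the single counter row can become a
contention hot spot under heavy concurrent writes):

CREATE TABLE row_counts (
    table_name text PRIMARY KEY,
    n          bigint NOT NULL
);

-- seed the counter once from the existing data
INSERT INTO row_counts SELECT 'counted_table', count(*) FROM counted_table;

CREATE OR REPLACE FUNCTION maintain_row_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = TG_TABLE_NAME;
    ELSE  -- DELETE
        UPDATE row_counts SET n = n - 1 WHERE table_name = TG_TABLE_NAME;
    END IF;
    RETURN NULL;  -- return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER counted_table_maintain_count
AFTER INSERT OR DELETE ON counted_table
FOR EACH ROW EXECUTE PROCEDURE maintain_row_count();

Reading the count then becomes an index lookup instead of a full scan:

SELECT n FROM row_counts WHERE table_name = 'counted_table';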

At your table size the brute-force approach is obviously not going to work
so an alternative method needs to be devised, one that eliminates
re-counting previously counted records.  The specific design is going to be
highly dependent on your specific requirements - which is why no generalized
solution exists.  If you provide the why behind the question, and not just
the question, people may be inclined to provide relevant suggestions. 

Issuing a count(*) is not a need - it is an implementation.  The need is
what you end up doing with that number.

Lastly, the time you spent combing the system catalogs would have been
better spent perusing the FAQ linked from the PostgreSQL homepage.  Your
question, in almost the same words, is in the FAQ with a link to the wiki,
which repeats all your observations, explains why the behavior is that
way, and suggests (links to) possible alternatives.  You may wish to go
there now to get more background and ideas.

David J.


 Hi,
 
 My application has a frequent need to issue a select count(*) on tables.
 Some have a large row count. (The examples below are from a 5.7 M row table; 
 some are larger.)
 
 Issuing either SELECT COUNT(*) or SELECT COUNT(Primary_Key_Column) 
 yields a sequential scan on the table;
 
 I have browsed the catalog tables, digging for a real-time row count, but so
 far did not find any.
 QUESTION: Is there a better (faster) way to obtain the row count from 
 a table?
 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is there any problem with pg_notify and memory consumption?

2011-05-30 Thread Per-Olov Esgard
The patch seemed to work for me too.

Thanks.

Regards,   Per-Olov Esgård



From:   Tom Lane t...@sss.pgh.pa.us
To: Per-Olov Esgard per-olov.esg...@micronic-mydata.com
Cc: Merlin Moncure mmonc...@gmail.com, pgsql-general@postgresql.org
Date:   05/27/2011 06:19 PM
Subject:Re: [GENERAL] Is there any problem with pg_notify and 
memory consumption?



I wrote:
 I think the right fix is to make sure that ProcessCompletedNotifies
 saves and restores the call-time CurrentMemoryContext.

The patch committed here appears to fix it for me:
http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=722548e4309c28631ada292fe6cad04ae8f9c151


 regards, tom lane




Re: [GENERAL] Postgres 8.3.5 - ECPG and the use of descriptors and cursors in multi-threaded programs

2011-05-30 Thread Leif Jensen
PS.: That goes for the AT clause on the GET DESCRIPTOR statement too. The 
connection name is not included in the ECPGget_desc() call.


- Leif Jensen l...@crysberg.dk wrote:

 Hello Bosco,
 
Thank you for your comment. Yes, it would be nice to get some more
 comments on the allocate/deallocate on a connection issue.
 
   I have verified that in my case, when deallocating a prepared statement,
 it guesses the wrong connection and returns an error. (The right one
 does auto-deallocation at disconnect time, though.)
 
However, I just noticed that allocating a descriptor with the AT
 connection clause,
 EXEC SQL AT :_thisDbConn ALLOCATE DESCRIPTOR :descname;
 generates an ECPGallocate_desc() call without any connection name and
 that this can screw up the ECPGget_desc() function when guessing a
 connection. I could of course use:
 EXEC SQL SET CONNECTION connection name;
 before the allocate, but that would need mutexes all over to make sure
 that other threads will not set the connection too.
 
Any idea why the ecpg pre-compiler doesn't use the named connection
 for the ALLOCATE DESCRIPTOR statement even though it allows it ?
 
Please help,
 
  Leif
 
 
 - Bosco Rama postg...@boscorama.com wrote:
 
  Leif Jensen wrote:
  
   Is it really not possible to use 2 separate connections within 1 thread
   at the same time? Or is it an error in the ecpg library?
 
  It should be entirely possible to run multiple connections in a
  single
  thread as long as you manage the 'AT connName' clauses properly.
 
  Though, IIRC, using an 'AT connName' clause on any sort of
  'deallocate'
  statement generates an error in ecpg:
 
ecpg -o test.c test.pgc
test.pgc:35: ERROR: AT option not allowed in DEALLOCATE statement
 
  This happens when trying to deallocate a query or a prepared
  statement.
  I don't use descriptors but the error message indicates it's _any_
  sort
  of deallocate.
 
  So, it would appear that you can allocate on a connection but not
  deallocate from one.  :-(
 
  I wonder if Tom or Michael can shine some light on this one?
 
  Bosco.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Inspecting a DB - psql or system tables ?

2011-05-30 Thread Nicholson, Brad (Toronto, ON, CA)
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Andrew Sullivan
 Sent: Friday, May 27, 2011 2:32 PM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Inspecting a DB - psql or system tables ?
 
 On Fri, May 27, 2011 at 08:26:33PM +0200, Tomas Vondra wrote:
   While parsing the output of psql is cumbersome, accessing the
   system tables seems more likely to break whenever a new version
   of PostgreSQL comes out.
 
  Really? Those catalogs are pretty stable, and when changed they're
  usually extended (new columns are added). So well written queries
 won't
  break very often. Actually I'd expect the psql output to change much
  more often.
 
 The whole point of the information_schema is that it's well-defined by
 the standard.  The system tables themselves do sometimes change
 between versions -- that's why you get warnings from psql when you
 start up a client with a different major version number than the
 server.  (If you want to see this in action, try using a 7.4-era
 client with 9.0, and do some tab completion or something like that.)
 

There is a sharp edge to watch out for when choosing between the system 
catalogs and the information schema for this data, and it's not mentioned in 
our docs anywhere.

The information schema queries will only return rows for objects that the user 
issuing the query has permissions on.  This is the correct behavior as per the 
SQL spec, I believe, but very different from the way the pg_catalog queries 
work, which return all objects regardless of the permissions on them.
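
A quick way to see the difference (a sketch; run it as an unprivileged user):

-- information_schema: only objects you have privileges on
SELECT table_schema, table_name FROM information_schema.tables;

-- pg_catalog: every table, regardless of your privileges
SELECT n.nspname, c.relname
FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r';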

Brad.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Universal certificate for verify-full ssl connection

2011-05-30 Thread Asia
Hi,

I am trying to generate a self-signed certificate for full ssl authentication. I 
need a universal version of this certificate for development purposes (so that 
any client can connect to any postgresql server with ssl on).
I am connecting using an IP address, I mean host=IP.

However, a verify-full connection works only when the Common Name in the 
certificate contains the fully qualified IP address; when I try to set CN to * 
(asterisk) I receive the error:

server common name * does not match hostname my_ip

According to the documentation here : 
http://www.postgresql.org/docs/current/static/libpq-ssl.html

If the connection is made using an IP address instead of a host name, the IP 
address will be matched (without doing any DNS lookups). 

Would you please advise what I am doing wrong? Or maybe there is another way to 
generate a wildcard certificate?

Thanks in advance !

Joanna

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Regular disk activity of an idle DBMS

2011-05-30 Thread Andrej Podzimek

Nothing changes there. When OpenFire, Courier-MTA and Apache are restarted,
a few numbers change, but otherwise they remain unchanged for a pretty long time. 
There is no obvious activity that could trigger a disk write 20 times a minute...


How many databases are in your pg cluster?


There are currently 19 of them, but only about 5 are used actively (queried at 
least once a day).





Re: [GENERAL] How to check a table content efficiently? With LIMIT and OFFSET?

2011-05-30 Thread Alban Hertroys
On 29 May 2011, at 23:55, Stefan Keller wrote:

 Hi Alban
 
 On 2011/5/29 Alban Hertroys wrote:
 On 29 May 2011, at 19:45, Stefan Keller wrote:
 
 But I'm hesitating to use ANALYZE for two reasons:
 1. It's very slow: it repeatedly takes 59000 ms on my machine.
 
 ANALYZE on a single table takes 59s?!? That's _really_ long. How big is that 
 table (it has about 180k rows, you did provide that information, but that's 
 not much at all) and how many indexes are on it? Are you sure you're not 
 overburdening your hardware in some way?
 
 Or are you in fact talking about a different command? For example, ANALYZE 
 (without specifying a table) or VACUUM ANALYZE table?
 
 You are right: I used ANALYZE (without specifying a table). But this
 still takes about 1 to 3 sec which is about 100 times slower than
 
 SELECT (count(*) = 1) FROM (SELECT osm_id FROM planet_osm_point LIMIT
 1 OFFSET 10) tmp;
 or

Still 1 to 3 seconds? It should be faster than a full table-scan, as it just 
takes samples across the table. Perhaps you have many indexes or some of an 
uncommon type? Or is your system I/O-bound perhaps? It does need to write those 
statistics to a system table at some point, in which an I/O-bound system won't 
perform very well of course.

 SELECT reltuples FROM pg_class WHERE relname = 'planet_osm_point';

That will only work if you want to know the total number of rows in the table. 
If you need to know how many rows will match a specific WHERE-clause, this falls 
on its behind. For cases like that you can use the output of EXPLAIN <query>, 
but that will not be very accurate, since it uses statistical information about 
value distribution and such of your data (which is gathered by ANALYSE).
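
For example (a sketch — the WHERE-clause is just an assumption), the planner's 
estimate shows up in the rows figure of the plan:

EXPLAIN SELECT * FROM planet_osm_point WHERE osm_id > 0;
--  Seq Scan on planet_osm_point  (cost=... rows=<estimate> width=...)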

If you really only need the total number of records and if you're indeed 
inserting/deleting in batches, then it's probably best to create 
statement-level INSERT/DELETE triggers (that could call the same function). 
I've done this in the past using a row-level trigger, but my data came in live. 
For batches of data it's probably more efficient to call a statement-level 
trigger once per query than a row-level one for each row. I think you can 
obtain the number of modified rows from GET DIAGNOSTICS, off the top of my head.

 2. There's an autovacuum background process which already does the
 job, doesn't it?
 
 Yes, but in its own time. If you know there has been a batch of 
 inserts/deletes you might as well run analyse immediately on that table.
 
 My table is a read-only table after all.
 That's another reason why I'm reluctant using ANALYZE table.

You probably won't need to run it as often as every time you need to know the 
number of rows in it. If the data doesn't change, then the row-count in the 
statistics won't either.
You probably do want to run this after a batch-INSERT/DELETE, or your 
row-counts will be inaccurate until auto-vacuum comes along.

 Also, on this mailing-list people don't appreciate it if you top-post. It 
 makes the context
 hard to decipher and sometimes even makes it difficult to give an accurate 
 answer
 because the information people want to refer to is far separated from the 
 bit where
 they're trying to reply to something you said/asked. Remember, people aren't 
 here for your sake.
 
 Thank you for the hint, which I didn't know:
 Is this really still part of this elderly USENET netiquette here?

There's nothing elderly about it. If you're communicating with a large number 
of people at once, the requirements change. You don't want to make it difficult 
on people to follow a thread they possibly didn't follow earlier, or they 
either won't bother to answer or they only pick up the latest bit of the 
thread. In both cases the chances that their answers will be irrelevant are 
quite significant, provided they even do reply.

In the end it has little to do with style and much more with common sense. The 
format used here (as well as in USENET) is more suitable for mailing lists.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Function Column Expansion Causes Inserts To Fail

2011-05-30 Thread David Johnston
PostgreSQL 9.0.4

The following script fails even though the pkonlytest table is empty, since we
just created it.

BEGIN SCRIPT

CREATE TABLE pkonlytest (
    pkid text PRIMARY KEY
);

CREATE OR REPLACE FUNCTION createpkrecord(INOUT pkvalue text, OUT col1 boolean, OUT col2 boolean)
RETURNS record
AS $$
BEGIN
    INSERT INTO pkonlytest (pkid) VALUES (pkvalue);
    col1 = true;
    col2 = false;
END;
$$
LANGUAGE 'plpgsql';

SELECT (createpkrecord('1')).*;

SQL Error: ERROR:  duplicate key value violates unique constraint "pkonlytest_pkey"
DETAIL:  Key (pkid)=(1) already exists.
CONTEXT:  SQL statement "INSERT INTO pkonlytest (pkid) VALUES (pkvalue)"
PL/pgSQL function "createpkrecord" line 2 at SQL statement

END SCRIPT

If you call the function without the column expansion (and the required
parentheses) it works just fine.

SELECT createpkrecord('1');

There is a workaround:

SELECT (func.result).* FROM (
    SELECT createpkrecord('4') AS result ) func

David J.
 



Re: [GENERAL] Shared Buffer Size

2011-05-30 Thread Cédric Villemain
2011/5/30 Toby Corkindale toby.corkind...@strategicdata.com.au:
 On 28/05/11 18:42, Carl von Clausewitz wrote:

  a few months ago, when I installed my first PostgreSQL, I had the
  same problem. I tried to get some information about optimal memory
  configuration, and it's working, but there wasn't any optimal memory setting
  calculator on the internet, just some guidance in the postgres documentation

  (http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC).
  I've got FreeBSD 8.2 AMD64, with 8 GB of memory (this server is just for
  PostgreSQL and a little PHP app with 2 users), and I have these settings
  in postgresql.conf (which are not the defaults):

 [snip]

 work_mem = 64MB# min 64kB
 maintenance_work_mem = 1024MB# min 1MB
 max_stack_depth = 64MB# min 100kB

 Just a warning - but be careful about setting work_mem to high values.
 The actual memory used by a query can be many times the value, depending on
 the complexity of your query.

 In a particular query I saw last week, we were regularly exceeding the
 available memory on a server, because the query was requiring 80 times the
 value of work_mem, and work_mem had been set to a high value.

 Reducing work_mem back to just 4MB reduced memory usage by a couple of
 gigabytes, and had almost no effect on the execution time. (Actually, it was
 marginally faster - probably because more memory was left for the operating
 system's cache)

Maybe you're also aware that Linux may decide to swap to protect its
buffer cache (depending on the strategy set in its configuration), and
also that you may be limited by committable memory. On a default
install where the swap is NOT at least twice the RAM size, you're not
able to commit all the RAM you have. But it protects the buffer cache
against the non-allocatable memory.

So maybe you've hit a step where you swapped your work_mem...
Anyway, interesting to have a query where a large work_mem is not
better... Would it be hard to isolate the case and make it public? In
the long term it might be a good test to add to a performance farm, if
it is not based on a non-optimal linux configuration (I mean, if the
issue *needs* the work_mem reduction to be fixed).


 Toby

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] deadlock problem

2011-05-30 Thread Sebastian Böhm
Hi,

I need a little help with a deadlock.

when I execute this function (at the end of the mail) in parallel, sometimes a 
deadlock happens.

This function implements insert-or-update functionality.

The error is:
DETAIL:  Process 29464 waits for ShareLock on transaction 1293098; blocked by 
process 29463.
Process 29463 waits for ShareRowExclusiveLock on relation 16585 of database 
16384; blocked by process 29464.

From the postgres documentation:
SHARE
Conflicts with the ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE ROW EXCLUSIVE, 
EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode protects a table against 
concurrent data changes.

Acquired by CREATE INDEX (without CONCURRENTLY).



So where is the ShareLock acquired? I don't create an index here.


The cause of the lock itself is clear to me, but I don't know where the 
ShareLock was acquired.



Kind Regards
Sebastian Boehm


---




CREATE FUNCTION acount(count_in integer) RETURNS integer
AS $$
DECLARE
    day_now timestamp with time zone;
    ii int;
    jj int;
BEGIN
    -- midnight of the current day in Los Angeles time
    SELECT date_trunc('day', now() AT TIME ZONE 'America/Los_Angeles')
           AT TIME ZONE 'America/Los_Angeles' INTO day_now;

    SELECT count FROM summary
    WHERE day = day_now
    INTO ii;

    IF (ii IS NULL) THEN
        LOCK TABLE summary IN SHARE ROW EXCLUSIVE MODE;

        -- re-check after taking the lock
        SELECT count FROM summary
        WHERE day = day_now
        INTO jj;

        IF (jj IS NULL) THEN
            INSERT INTO summary (day, count) VALUES (day_now, count_in);
        ELSE
            UPDATE summary SET count = count + count_in WHERE day = day_now;
        END IF;
    ELSE
        UPDATE summary SET count = count + count_in WHERE day = day_now;
    END IF;

    RETURN ii;
END;
$$
LANGUAGE plpgsql;

[GENERAL] pgpool-II 3.0.4 release delayed

2011-05-30 Thread Tatsuo Ishii
Sorry for the off-topic posting, but...

Since pgfoundry has been down, the release of pgpool-II 3.0.4, which was
to be released today, will be delayed. Sorry for the inconvenience. We
will resume the release work as soon as pgfoundry comes back.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] determine database and tables from deadlock

2011-05-30 Thread Machiel Richards
Hi Everyone

Some of our databases have been experiencing a sudden spike in
deadlocks being detected.

 As far as we are aware, there has been no new code taken live and no
other changes.

 The databases are running postgresql 8.1

 From the log files I am able to get the following information:

2011-05-30 00:19:05 SAST ERROR:  deadlock detected
2011-05-30 00:19:05 SAST DETAIL:  Process 534 waits for ShareLock on
transaction 20417220; blocked by process 29184.
Process 29184 waits for ExclusiveLock on tuple (127,56) of
relation 1502070 of database 1502000; blocked by process 534.
2011-05-30 00:19:05 SAST STATEMENT:  update Organisation set name=$1
where id=$2
2011-05-30 00:19:06 SAST ERROR:  deadlock detected
2011-05-30 00:19:06 SAST DETAIL:  Process 29389 waits for ShareLock on
transaction 20417220; blocked by process 29184.
Process 29184 waits for ExclusiveLock on tuple (127,56) of
relation 1502070 of database 1502000; blocked by process 29389.
2011-05-30 00:19:06 SAST STATEMENT:  update Organisation set name=$1
where id=$2


 I am however not sure how to trace these back to a database, table,
and maybe even a query, to determine where things are going wrong.

  I tried to look through the system tables, however I can't seem to
find the database (1502000) or the relation mentioned here; I am
sure I am not looking in the right places, as I do not know Postgresql
that well.

   I would really appreciate help on this, as all the deadlocks seem to
be happening on the same database and relation, just different tuples
and process ids.


Regards
Machiel

Re: [GENERAL] Regular disk activity of an idle DBMS

2011-05-30 Thread Scott Marlowe
On Sun, May 29, 2011 at 12:42 PM, Andrej Podzimek and...@podzimek.org wrote:

 Nothing changes there. When OpenFire, Courier-MTA and Apache are restarted,
 a few numbers change, but otherwise they remain unchanged for a pretty long time.
 There is no obvious activity that could trigger a disk write 20 times a minute...

How many databases are in your pg cluster?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] trigger - dynamic WHERE clause

2011-05-30 Thread Pavel Stehule
Hello

[...]
The USING clause doesn't do array unpacking.

You should generate a slightly different dynamic statement:
EXECUTE 'SELECT .. WHERE a = $1[1] AND b = $1[2]' USING ARRAY[...]

 I changed that but this wasn't my only problem; typecasting was the
 second issue. Column id1 is INT4 and the value obtained from NEW via
 each(hstore(NEW)) is converted to TEXT.

 I can fix this by explicit typecasting:
 '... WHERE id1 = $1[1]::int4 ...'



 But there's a few things I'd be interested to understand:

 1) My original version quoted all values regardless of type. I presume
 this worked with integers because there's some implicit typecasting
 going on?


It usually works - sometimes an explicit type can help with the
search for related functions. You can have a problem when a function or
operator is overloaded. You should test it.

 2) I took from your blog entry
 (http://okbob.blogspot.com/2008/06/execute-using-feature-in-postgresql-84.html)
 that it is good practice to use EXECUTE USING.
 Well, there's no danger of SQL injection as this particular DB runs on
 an internal network. However, I am wondering whether EXECUTE USING has
 a performance advantage?


You never know where or who the attacker is :)

The performance is very similar now - the slowest part is generating
the execution plan, not the IO operations.
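
Putting the pieces together, a minimal sketch (the table and column names here 
are assumptions):

EXECUTE 'UPDATE target_tbl SET name = $1[2] WHERE id1 = $1[1]::int4'
    USING ARRAY[NEW.id1::text, NEW.name];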

Regards

Pavel Stehule


 --

 Best Regards,
 Tarlika Elisabeth Schmitz

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] determine database and tables from deadlock

2011-05-30 Thread Craig Ringer

On 05/30/2011 03:45 PM, Machiel Richards wrote:


I am however not sure how to trace these back to a database, table, and
maybe even a query to determine where things are going wrong.


One of the deadlocking queries is shown in the deadlock error message, 
but one isn't really enough.


IIRC, newer versions of PostgreSQL print both queries.

You can use log_line_prefix to show the database, though I'm not sure it's in 
8.1.
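
You can also resolve the OIDs from the error message by hand (a sketch, using 
the numbers from your log):

SELECT datname FROM pg_database WHERE oid = 1502000;

-- then, connected to that database:
SELECT relname FROM pg_class WHERE oid = 1502070;
-- or simply:
SELECT 1502070::regclass;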


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Universal certificate for verify-full ssl connection

2011-05-30 Thread Craig Ringer

On 05/30/2011 03:58 PM, Asia wrote:

Would you please advise what I am doing wrong? Or maybe there is other way to 
generate wildcard certificate ?


I wouldn't be surprised if libpq didn't support wildcard certificates at 
all. I doubt there's ever been any demand for them.


Have you checked in the source code?

What version of libpq are you using, and what version of openssl is it 
compiled against?


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index Size

2011-05-30 Thread Craig Ringer

On 05/30/2011 08:53 PM, Nick Raj wrote:

Hi,

Cube code provided by postgres contrib folder. It uses the NDBOX structure.
On creating index, it's size increase at a high rate.


[snip]


Can some one tell me why index is becoming so large?
How to compress or reduce its size?


It'd help if you included some more details:

- Your PostgreSQL version

- A .sql file that demonstrated the problem,
  including your table definitions and index
  creation commands.

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] deadlock problem

2011-05-30 Thread Craig Ringer

On 05/30/2011 10:04 PM, Sebastian Böhm wrote:


Acquired by CREATE INDEX (without CONCURRENTLY).

so where the ShareLock is acquired? I don't create an index here.


There's some confusing historical terminology involved here, I'm afraid.

The documentation you referred to talks about table-level locks, used 
when a whole table is partially or wholly locked.


There are *also* row-level locks of both exclusive and shared kinds.

I *think* the deadlock you are experiencing is on a row-level ShareLock, 
rather than a table-level lock.


Here's a demo. 1> and 2> mark two different psql sessions open at once, 
and the sequence of commands shown below causes them to deadlock with 
each other, giving a message just like yours:


1> create table a (x integer);
1> insert into a(x) values (1),(2),(3);

1> begin;
1> delete from a where x = 1;
2> begin;
2> delete from a where x = 2;
2> delete from a where x = 1;
1> delete from a where x = 2;

Now one of the transactions will abort with:

ERROR:  deadlock detected
DETAIL:  Process 15727 waits for ShareLock on transaction 1272; blocked 
by process 15725.
Process 15725 waits for ShareLock on transaction 1273; blocked by 
process 15727.

HINT:  See server log for query details.


Hope this helps.

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] trigger - dynamic WHERE clause

2011-05-30 Thread Tarlika Elisabeth Schmitz
On Mon, 30 May 2011 11:02:34 +0200
Pavel Stehule pavel.steh...@gmail.com wrote:

 2) I took from your blog entry
 (http://okbob.blogspot.com/2008/06/execute-using-feature-in-postgresql-84.html)
 that it is good practice to use EXECUTE USING.
 Well, there's no danger of SQL injection as this particular DB runs
 on an internal network. However, I am wondering whether EXECUTE
 USING has a performance advantage?


You never know where or who the attacker is :)
The performance is very similar now - the slowest part is generating
the execution plan, not the IO operations.

I have converted my generic trigger to use EXECUTE ... USING.

I need to convert all NEW values to a text array, retaining their
ordinal position.
avals(hstore(NEW)) doesn't seem to do that:

NEW: (5,name5,1000,,,2)
avals(hstore(NEW)):  {5,name5,2,1000,NULL,NULL}

The best I can come up with is a JOIN with information_schema.columns.

-- 

Best Regards,
Tarlika Elisabeth Schmitz

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UTC4115FATAL: the database system is in recovery mode

2011-05-30 Thread Craig Ringer

On 05/30/2011 10:29 PM, Mathew Samuel wrote:


2011-03-28 10:44:28 UTC3609HINT: Consider increasing the configuration
parameter checkpoint_segments.
2011-03-28 10:44:38 UTC3609LOG: checkpoints are occurring too frequently
(10 seconds apart)
2011-03-28 10:44:38 UTC3609HINT: Consider increasing the configuration
parameter checkpoint_segments.
2011-03-28 10:44:42 UTC3932ERROR: canceling statement due to statement
timeout
2011-03-28 10:44:42 UTC3932STATEMENT: vacuum full analyze _zamboni.sl_log_1
2011-03-28 10:44:42 UTC3932PANIC: cannot abort transaction 1827110275,
it was already committed
2011-03-28 10:44:42 UTC3566LOG: server process (PID 3932) was terminated
by signal 6


Interesting. It almost looks like a VACUUM FULL ANALYZE was cancelled by 
statement_timeout, couldn't be aborted (assuming it was in fact 
1827110275) and then the backend crashed with a signal 6 (SIGABRT). 
SIGABRT can be caused by an assertion failure, certain fatal aborts in 
the C library caused by memory allocation errors, etc.


Alas, while PostgreSQL may have dumped a core file I doubt there's any 
debug information in your build. If you do find a core file for that 
process ID, it might be worth checking for a debuginfo rpm just in case.



In fact those last 3 lines are repeated over and over again repeatedly
until UTC4115FATAL: the database system is in recovery mode is logged
for 4 hours. At some point, 4 hours later of course, it appears that the
system recovers.


Wow. Four hours recovery with default checkpoint settings.

Is it possible that the server was completely overloaded and was 
swapping heavily? That could explain why VACUUM timed out in the first 
place, and would explain why it took such a long time to recover. Check 
your system logs around the same time for other indications of excessive 
load, and check your monitoring history if you have monitoring such as 
Cacti active.


See if there's anything interesting in the kernel logs too.

Just for completeness, can you send all non-commented-out, non-blank 
lines in your postgresql.conf ?


$ egrep '^[^#[:space:]]' postgresql.conf |cut -d '#' -f 1

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [9.1beta1] UTF-8/Regex Word-Character Definition excluding accented letters

2011-05-30 Thread David Johnston
PostgreSQL 9.1beta1, compiled by Visual C++ build 1500, 64-bit (EnterpriseDB
Install Executable)

CREATE DATABASE betatest 
TEMPLATE template0 
ENCODING 'UTF8'
LC_COLLATE 'C' 
LC_CTYPE 'C';

[connect to database]

CREATE DOMAIN idcode AS text
NOT NULL CHECK (VALUE ~* '^\w[-:\w]*$')
;

SELECT 'Aéa'::idcode; -- SQL Error: ERROR:  value for domain idcode violates 
check constraint "idcode_check" (note the accented “e” between the “A”s)

This is running just fine against a 9.0 install on the same machine.  [\w]
is Unicode-aware and the server encoding is set (and confirmed via SHOW) to be
“UTF8”.

David J.



[GENERAL] time estimation for a test

2011-05-30 Thread salah jubeh
Hello,

I have a view which is the result of the cross product of three tables, and I 
want to test how much time is required to populate this view. Also, I want to 
test the scalability of this view in the future. Since I already have live data, 
I am wondering if I can do that without creating automatic data generation, i.e.

time psql  -c 'SELECT * FROM view limit 100'
time psql  -c 'SELECT * FROM view limit 200'
...
time psql  -c 'SELECT * FROM view limit 1'


Also, I would like to do hot and cold tests. How can I do the cold tests on 
Postgres - should I shut down and restart the server, or are there some 
commands to clear the DBMS buffer?

How can I capture this in a formula? The time to calculate the view is 
proportional to the cross product of the three tables, i.e.

view(t) = number of rows of table1 * number of rows of table2 * number of rows 
of table3

Since I am doing a cross product, I assume a sequential scan will be used for 
data retrieval from the hard disk for all tables. What other parameters do I 
need to include in this formula, and how can I get them? I.e., I know that the 
row size is one factor and the time to read a data page from the hard disk is 
another one.
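
The per-table inputs for such a formula can be read from the planner 
statistics, and the planner's own estimate can be compared against it - a 
sketch, with the table names assumed:

SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('table1', 'table2', 'table3');

EXPLAIN SELECT * FROM table1, table2, table3;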

Regards

Re: [GENERAL] UTC4115FATAL: the database system is in recovery mode

2011-05-30 Thread Tom Lane
Craig Ringer cr...@postnewspapers.com.au writes:
 On 05/30/2011 10:29 PM, Mathew Samuel wrote:
 2011-03-28 10:44:42 UTC3932ERROR: canceling statement due to statement
 timeout
 2011-03-28 10:44:42 UTC3932STATEMENT: vacuum full analyze _zamboni.sl_log_1
 2011-03-28 10:44:42 UTC3932PANIC: cannot abort transaction 1827110275,
 it was already committed
 2011-03-28 10:44:42 UTC3566LOG: server process (PID 3932) was terminated
 by signal 6

 Interesting. It almost looks like a VACUUM FULL ANALYZE was cancelled by 
 statement_timeout, couldn't be aborted (assuming it was in fact 
 1827110275) and then the backend crashed with a signal 6 (SIGABRT). 

Yeah, that seems highly likely.  There's a long-known problem in the
pre-9.0 implementation of VACUUM FULL, that it marks itself as committed
well before the vacuuming is actually done.  Any error that occurs after
that point results in exactly the above symptom.  There's a hack
solution for that in releases made after Nov 2009, and getting rid of
the problem in a cleaner fashion was one of the motivations for
replacing the VACUUM FULL implementation in 9.0.  But I suppose the OP
is running something not too up-to-date :-(.

 Wow. Four hours recovery with default checkpoint settings.

Ouch ... the reason for that needs investigation.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [9.1beta1] UTF-8/Regex Word-Character Definition excluding accented letters

2011-05-30 Thread Tom Lane
David Johnston pol...@yahoo.com writes:
 PostgreSQL 9.1beta1, compiled by Visual C++ build 1500, 64-bit (EnterpriseDB
 Install Executable)

 CREATE DATABASE betatest 
 TEMPLATE template0 
 ENCODING 'UTF8'
 LC_COLLATE 'C' 
 LC_CTYPE 'C';

 CREATE DOMAIN idcode AS text
 NOT NULL CHECK (VALUE ~* '^\w[-:\w]*$')
 ;

 SELECT 'Aéa'::idcode; // - SQL Error: ERROR:  value for domain
 idcode violates check constraint idcode_check (note the accented “e”
 between all the “A”s)

AFAICS that's correct behavior.  C locale should not think that é is
a letter.

 This is running just fine against a 9.0 install on the same machine.

We made some strides towards getting locale-sensitive stuff to work as
it should in 9.1.  In particular, platform-specific creative
interpretations of what C locale means shouldn't happen anymore ...
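
(For comparison - a sketch, assuming the locale is available on the machine - 
creating the database with a non-C ctype makes \w match accented letters:)

CREATE DATABASE betatest2
TEMPLATE template0
ENCODING 'UTF8'
LC_COLLATE 'en_US.UTF-8'
LC_CTYPE 'en_US.UTF-8';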

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function Column Expansion Causes Inserts To Fail

2011-05-30 Thread Tom Lane
David Johnston pol...@yahoo.com writes:
 SELECT (   createpkrecord('1')).*;
 [ results in function being called more than once ]

Yeah.  Don't do that.  Better style is

SELECT * FROM createpkrecord('1');

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Shared Buffer Size

2011-05-30 Thread Toby Corkindale

On 30/05/11 20:41, Cédric Villemain wrote:

2011/5/30 Toby Corkindaletoby.corkind...@strategicdata.com.au:

On 28/05/11 18:42, Carl von Clausewitz wrote:


a few months ago, when I installed my first PostgreSQL, I had the
same problem. I tried to get some information about optimal memory
configuration, and it's working, but there wasn't any optimal memory setting
calculator on the internet, just some guidance in the postgres documentation

(http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC).
I've got FreeBSD 8.2 AMD64, with 8 GB of memory (this server is just for
PostgreSQL and a little PHP app with 2 users), and I have these settings
in postgresql.conf (which are not the defaults):


[snip]


work_mem = 64MB# min 64kB
maintenance_work_mem = 1024MB# min 1MB
max_stack_depth = 64MB# min 100kB


Just a warning - but be careful about setting work_mem to high values.
The actual memory used by a query can be many times the value, depending on
the complexity of your query.

In a particular query I saw last week, we were regularly exceeding the
available memory on a server, because the query was requiring 80 times the
value of work_mem, and work_mem had been set to a high value.

Reducing work_mem back to just 4MB reduced memory usage by a couple of
gigabytes, and had almost no effect on the execution time. (Actually, it was
marginally faster - probably because more memory was left for the operating
system's cache)


Maybe you're also aware that Linux may decide to swap to protect its
buffer cache (depending on the strategy set in its configuration), and
also that you may be limited by committable memory. On a default
install where the swap is NOT at least twice the RAM size, you're not
able to commit all the RAM you have. But it protects the buffer cache
against the non-allocatable memory.

So maybe you've hit a step where you swapped your work_mem...
Anyway, interesting to have a query where a large work_mem is not
better... Would it be hard to isolate the case and make it public? In
the long term it might be a good test to add to a performance farm, if
it is not based on a non-optimal linux configuration (I mean, if the
issue *needs* the work_mem reduction to be fixed).



In this case, it was not just slowing down due to the amount of work_mem 
allocated -- it was exceeding several gigabytes of memory usage and 
crashing out. Lower values of work_mem allowed the query to succeed, but 
it used almost 3GB. Even lower values of work_mem could do the query in 
only a few hundred MB - and were faster.


I note that if you exceed work_mem in a query, then I guess the temp 
files created are cached by the VM cache, so the performance hit won't 
be *too* bad?



I agree that the slowness of the 3GB version could be due to swapping or 
something like that, or just due to the VM cache being eliminated as I 
suggested.


Either way - the problem was that this (machine-generated) query was 
pivoting and joining many views-of-views. It's a pretty nasty query.


The key fact is that postgres (8.3) seems to allocate the full work_mem 
amount every time it needs *some* work_mem - even if it could have 
happily got by on just a few MB. So if your query allocates work_mem a 
hundred times, it'll consume $work_mem * 100 -- or die trying.
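
That behaviour is why it can be safer to keep the global setting low and raise 
it only where it is known to be needed - a minimal sketch (the role name is an 
assumption):

SET work_mem = '64MB';                            -- this session only
ALTER ROLE reporting_user SET work_mem = '64MB';  -- future sessions of one role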


I'm curious to know if Postgres 9.0 has improved this -- I'm going to 
try re-running this query on it once I get a chance, but due to 
contractual agreements this isn't quite as simple to test as you might 
think.
(And running the test over a much smaller example data set might not 
trigger the same query plan)

I'll get there eventually though :)

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index Size

2011-05-30 Thread Tom Lane
Craig Ringer cr...@postnewspapers.com.au writes:
 On 05/30/2011 08:53 PM, Nick Raj wrote:
 Cube code provided by postgres contrib folder. It uses the NDBOX structure.
 On creating index, it's size increase at a high rate.

 [snip]

 Can some one tell me why index is becoming so large?
 How to compress or reduce its size?

 It'd help if you included some more details:

 - Your PostgreSQL version

In particular, I wonder whether his version contains this fix:

Author: Robert Haas rh...@postgresql.org
Branch: master [4fa0a23c7] 2010-11-14 21:27:34 -0500
Branch: REL9_0_STABLE Release: REL9_0_2 [e6b380251] 2010-11-14 21:27:34 -0500
Branch: REL8_4_STABLE Release: REL8_4_6 [2519b8268] 2010-11-14 21:27:34 -0500
Branch: REL8_3_STABLE Release: REL8_3_13 [d589e4070] 2010-11-14 21:27:34 -0500
Branch: REL8_2_STABLE Release: REL8_2_19 [e642ca767] 2010-11-14 21:27:34 -0500
Branch: REL8_1_STABLE Release: REL8_1_23 [0e27a7319] 2010-11-14 21:27:34 -0500

Fix bug in cube picksplit algorithm.

Alexander Korotkov


regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index Size

2011-05-30 Thread Nick Raj
On Tue, May 31, 2011 at 8:50 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Craig Ringer cr...@postnewspapers.com.au writes:
  On 05/30/2011 08:53 PM, Nick Raj wrote:
  Cube code provided by postgres contrib folder. It uses the NDBOX
 structure.
  On creating index, it's size increase at a high rate.

  [snip]

  Can some one tell me why index is becoming so large?
  How to compress or reduce its size?

  It'd help if you included some more details:

  - Your PostgreSQL version

 In particular, I wonder whether his version contains this fix:

 Author: Robert Haas rh...@postgresql.org
 Branch: master [4fa0a23c7] 2010-11-14 21:27:34 -0500
 Branch: REL9_0_STABLE Release: REL9_0_2 [e6b380251] 2010-11-14 21:27:34
 -0500
 Branch: REL8_4_STABLE Release: REL8_4_6 [2519b8268] 2010-11-14 21:27:34
 -0500
 Branch: REL8_3_STABLE Release: REL8_3_13 [d589e4070] 2010-11-14 21:27:34
 -0500
 Branch: REL8_2_STABLE Release: REL8_2_19 [e642ca767] 2010-11-14 21:27:34
 -0500
 Branch: REL8_1_STABLE Release: REL8_1_23 [0e27a7319] 2010-11-14 21:27:34
 -0500

Fix bug in cube picksplit algorithm.

Alexander Korotkov




Is this bug fixed?
Does postgresql 9.0 have this bug or not?

Thanks


Re: [GENERAL] trigger - dynamic WHERE clause

2011-05-30 Thread Pavel Stehule
2011/5/31 Tarlika Elisabeth Schmitz postgres...@numerixtechnology.de:
 On Mon, 30 May 2011 11:02:34 +0200
 Pavel Stehule pavel.steh...@gmail.com wrote:

 2) I took from your blog entry
 (http://okbob.blogspot.com/2008/06/execute-using-feature-in-postgresql-84.html)
 that it is good practice to use EXECUTE USING.
 Well, there's no danger of SQL injection as this particular DB runs
 on an internal network. However, I am wondering whether EXECUTE
 USING has a performance advantage?


You never know where or who the attacker is :)
The performance is very similar now - the slowest part is generating
the execution plan, not the IO operations.

 I have converted my generic trigger to use EXECUTE ... USING.

 I need to convert all NEW values to a text array, retaining their
 ordinal position.
 avals(hstore(NEW)) doesn't seem to do that:

 NEW: (5,name5,1000,,,2)
 avals(hstore(NEW)):  {5,name5,2,1000,NULL,NULL}

 The best I can come up with is a JOIN with information_schema.columns.

Yup.

It should be relatively expensive (slow). If you need generic triggers,
use a different PL instead. I cannot know what requirements you have to
solve, but try looking at PL/Perl or PL/Python. Generic triggers can be
developed there with less work.

Regards

Pavel


 --

 Best Regards,
 Tarlika Elisabeth Schmitz

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general