Re: [GENERAL] Absolute value of intervals

2009-10-29 Thread Tom Lane
Jeff Davis  writes:
> Yes, that is a strange case. When you can't tell if an interval is
> positive or negative, how do you define the absolute value?

That was the point of my '1 day -25 hours' example.  Whether you
consider that positive or negative seems mighty arbitrary.
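
For instance (a sketch; the timezone and dates are picked purely for
illustration - the day component of an interval is added in local time,
while the hours component is added as elapsed time):

  SET timezone = 'America/New_York';
  -- 2009-11-01 is a 25-hour local day in that zone (DST fall-back),
  -- so the interval nets out to zero:
  SELECT timestamptz '2009-11-01 00:00' + interval '1 day -25 hours';
  --> 2009-11-01 00:00:00-04
  -- on an ordinary 24-hour day the same interval nets to minus one hour:
  SELECT timestamptz '2009-11-03 00:00' + interval '1 day -25 hours';
  --> 2009-11-02 23:00:00-05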

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] checkpoints/bgwriter tuning verification

2009-10-29 Thread Greg Smith

On Thu, 29 Oct 2009, Vick Khera wrote:

> Greg, do you have a performance tuning book?  If so, I really want to
> buy it!  Your articles are awesome.


Give me a few more months...

> basically, the next checkpoint starts within a few seconds of the prior
> one completing.


That's the expected behavior with a high setting for the completion 
target.  Each checkpoint finishes just before the next one needs to start. 
That's the most you can spread them out, and more spreading generally 
equals better behavior.  Usually, anyway; there are always exceptions, of course.



> Based on Greg's article and the above number showing that most
> checkpoints are triggered by running out of WAL segments, I should
> increase my checkpoint_segments.  Also, based on the article, I should
> increase the bgwriter_lru_maxpages (currently at default 100).


You might actually want to decrease bgwriter_lru_maxpages to 0.  The 
current background writer design presumes you have some spare I/O 
capacity and want to trade off some write overhead for possibly lower 
latency in client backends.  If you're at the point where the disks aren't 
keeping up with the checkpoint disk I/O, this could turn 
counter-productive.



> My plan is to bump checkpoint_segments to 48 and reduce
> checkpoint_completion_target to 0.7, and bump the
> bgwriter_lru_maxpages to 500.


In your situation, I would increase checkpoint_segments to 64, keep the 
completion target at 0.9, and decrease bgwriter_lru_maxpages to 0 so that 
feature is turned off altogether.  If you're still not happy, you can try 
dropping the completion target too; I've heard one report of that logic 
not working out so well in practice, where lower settings actually 
resulted in less of an I/O spike.  I would worry about the OS tuning first 
though, which brings us to:
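
In postgresql.conf terms, that suggestion amounts to something like this 
(the values are the ones from this thread, not a universal recommendation):

   checkpoint_segments = 64            # more WAL headroom between checkpoints
   checkpoint_completion_target = 0.9  # keep spreading the checkpoint writes out
   bgwriter_lru_maxpages = 0           # disable the LRU background writer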



> Can the checkpoint operation actually cause the DB to stop responding
> for a few seconds at a time?  That seems to be what I observe.
> Sometimes for 5 or more seconds one transaction will just stall.


There are two ways that writes can hang:

1) You've gotten to the point in the checkpoint cycle where it's calling 
fsync to flush everything out of the filesystem.  At this point you could 
potentially have a big chunk of data that needs to go out to disk, and any 
other client that needs to write something (or needs a resource locked by 
a writer) is stuck behind that.


2) The entire OS write cache is filled.  Once that happens, if a client 
tries to write something else, the OS will force it to nap until there's 
space again to hold that write.


How that all works really depends on the OS.  I know that under Linux, the 
way this is all handled is really bad, and can easily lead to multi-second 
hangs.  Generally you can watch the writes accumulate by looking at 
/proc/meminfo.  I wrote an example showing how the problem appears and 
what I did to improve long hangs on one server at 
http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html and 
it refers to a long theory page about Linux's pdflush on my web site.
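
If you just want to watch those counters fill, something like this (Linux 
only, and just a rough sketch) refreshes the relevant numbers every second:

   watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'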


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Absolute value of intervals

2009-10-29 Thread Jeff Davis
On Thu, 2009-10-29 at 16:39 -0700, Scott Bailey wrote:
> But there is some ambiguity around the length of a month. So INTERVAL '1 
> month - 30 days' =  INTERVAL '0 days' = INTERVAL '-1 month +30 days'. 
> But when added to a date, it makes no change for months with 30 days, 
> adds 1 day for months with 31 days and subtracts 2 days for February.

Yes, that is a strange case. When you can't tell if an interval is
positive or negative, how do you define the absolute value?

I think that's a strong argument not to provide an absolute value
function for INTERVALs.

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Absolute value of intervals

2009-10-29 Thread Scott Bailey



> I think this came up again recently and somebody pointed out that the
> correct definition isn't as obvious as all that.  The components of
> an interval can have different signs, so should abs('-1 day 1 hour') be
> '1 day -1 hour' or '1 day 1 hour'?  Or what about corner cases like
> '1 day -25 hours'?


I agree with Sam. The absolute value of a negative interval should be 
equidistant from zero, not the removal of negative signs. So abs('-1 day 
1 hour') should be ('1 day -1 hour'). I don't think your corner case is 
any different. So his function and operator should be perfectly valid.


But there is some ambiguity around the length of a month. So INTERVAL '1 
month - 30 days' =  INTERVAL '0 days' = INTERVAL '-1 month +30 days'. 
But when added to a date, it makes no change for months with 30 days, 
adds 1 day for months with 31 days and subtracts 2 days for February.
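
For example (dates picked for illustration):

   SELECT date '2009-04-01' + interval '1 month -30 days';  -- 2009-04-01, no change
   SELECT date '2009-03-01' + interval '1 month -30 days';  -- 2009-03-02, one day later
   SELECT date '2009-02-01' + interval '1 month -30 days';  -- 2009-01-30, two days earlier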


Scott Bailey

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres alpha testing docs and general test packs

2009-10-29 Thread Guillaume Lelarge
On Thursday 29 October 2009 at 02:13:51, Greg Smith wrote:
> On Wed, 28 Oct 2009, Thom Brown wrote:
> > All we have are a summary of changes.  We can find out all the
> > information if we do plenty of searching of mailing lists and comparing
> > old and new documentation, but obviously this can be off-putting and is
> > duplicated for everyone who wants to participate in testing.
> 
> For the last release, we had some people who updated blogs etc. with usage
> examples for many of the new major features.  That doesn't seem to be
> happening as well for the 8.5 development.
> 

I was thinking about this too. Lack of time, I suppose. Anyway, some blog 
posts are interesting for testers, but they are hard to find when you have to 
visit lots of different blogs. All that kind of information needs to be easy to 
find, and the best way to do that is to put it all in one place. Maybe the 
wiki can be such a place.

> In any case, the whole process is still being worked out.  I for example
> am working on some instructions for doing performance regression testing
> of the alpha releases.  There actually is a full regression test suite
> that gets run all the time on many platforms.  The point of the alphas is
> actually for you to try *your* tests, not for everyone to test the same
> thing.
> 
> There is another route to get information here that might be a bit easier
> than directly looking up things in the mailing lists or commit logs.
> Each alpha is being generated after a CommitFest period during which
> patches are commited.  The web application organizing that process
> provides one way to more easily find the relevant discussion leading up
> to that patch being applied, and many of those include better/more obvious
> examples and documentation.  The current alpha2 is based on the results of
> https://commitfest.postgresql.org/action/commitfest_view?id=3
> 

I hadn't thought about this, but it seems a better way to build this kind 
of document than browsing the commit mailing list. I'll try to write something 
based on the information available in the 2009-07 
(https://commitfest.postgresql.org/action/commitfest_view?id=2) and 2009-09 
(https://commitfest.postgresql.org/action/commitfest_view?id=3) commitfests. 
I'll begin the work next Monday.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Column Type Suggestions

2009-10-29 Thread Arjen Nienhuis
On Thu, Oct 29, 2009 at 9:34 PM, Blake Starkenburg  wrote:
> I am in the midst of planning for a new database which will contain specific
> keywords relating to different products. For example a "Microwave" may have
> the keywords HOUSEHOLD -> PANASONIC -> CAROUSEL -> CONVECTION -> 2200 WATTS.
> A pair of Levis may be CLOTHING -> LEVIS -> STONEWASHED -> BOOT CUT ->
> 36x34.
>
> Initially I had planned to use Postgresql's Array Column (keywords
> varchar(100)[]) but this has some limitations when using the LIKE operator,
> making searching for specific string of words difficult unless you know
> firsthand the array key (keywords[2] LIKE 'PANA%').
>
> Does anyone have any suggestions? Product keywords could vary from 1 up to
> 20, maybe more. Performance isn't a huge issue since the database table will
> be fairly small and on a local network. Appreciate all the input! NOTE:
> Using PHP5/Postgresql 8.3

Would ltree fit your problem?

http://www.postgresql.org/docs/8.4/interactive/ltree.html
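
A sketch of how that might look (table and column names invented; note 
that ltree labels only allow letters, digits and underscores, so "2200 
WATTS" becomes "2200_WATTS"):

  -- assumes contrib/ltree is installed in this database
  CREATE TABLE product (
      id       serial PRIMARY KEY,
      keywords ltree
  );
  INSERT INTO product (keywords)
      VALUES ('HOUSEHOLD.PANASONIC.CAROUSEL.CONVECTION.2200_WATTS');
  -- match a label prefix anywhere in the path, no array position needed
  SELECT * FROM product WHERE keywords ~ '*.PANA*.*'::lquery;
  -- a GiST index keeps such searches fast
  CREATE INDEX product_keywords_idx ON product USING gist (keywords);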

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Column Type Suggestions

2009-10-29 Thread Chris
>
>
>
> Initially I had planned to use Postgresql's Array Column (keywords
> varchar(100)[]) but this has some limitations when using the LIKE operator,
> making searching for specific string of words difficult unless you know
> firsthand the array key (keywords[2] LIKE 'PANA%').
>
> Does anyone have any suggestions? Product keywords could vary from 1 up to
> 20, maybe more. Performance isn't a huge issue since the database table
> will
> be fairly small and on a local network. Appreciate all the input! NOTE:
> Using PHP5/Postgresql 8.3
>

Sounds like you want a separate table holding the primary key of your product
table and the keyword.
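
A minimal sketch (names invented; the pos column preserves the
HOUSEHOLD -> PANASONIC -> ... ordering):

  CREATE TABLE product_keyword (
      product_id integer NOT NULL REFERENCES product(id),
      pos        integer NOT NULL,
      keyword    varchar(100) NOT NULL,
      PRIMARY KEY (product_id, pos)
  );
  -- searching no longer depends on knowing the array position
  SELECT DISTINCT product_id FROM product_keyword WHERE keyword LIKE 'PANA%';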

-- 
Chris Spotts
rfu...@gmail.com


[GENERAL] Column Type Suggestions

2009-10-29 Thread Blake Starkenburg
I am in the midst of planning for a new database which will contain specific
keywords relating to different products. For example a "Microwave" may have
the keywords HOUSEHOLD -> PANASONIC -> CAROUSEL -> CONVECTION -> 2200 WATTS.
A pair of Levis may be CLOTHING -> LEVIS -> STONEWASHED -> BOOT CUT ->
36x34.

Initially I had planned to use Postgresql's Array Column (keywords
varchar(100)[]) but this has some limitations when using the LIKE operator,
making searching for specific string of words difficult unless you know
firsthand the array key (keywords[2] LIKE 'PANA%').

Does anyone have any suggestions? Product keywords could vary from 1 up to
20, maybe more. Performance isn't a huge issue since the database table will
be fairly small and on a local network. Appreciate all the input! NOTE:
Using PHP5/Postgresql 8.3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] checkpoints/bgwriter tuning verification

2009-10-29 Thread Scott Marlowe
On Thu, Oct 29, 2009 at 12:46 PM, Vick Khera  wrote:
> On my primary DB I'm observing random slowness which just doesn't make
> sense to me.  The I/O system can easily do 40MB/sec writes, but I'm
> only seeing a sustained 5MB/sec, even as the application is stalling
> waiting on the DB.

Just one point on top of everything else you'll hear.  40 MB/sec
sequential throughput does not equal 40MB/sec random PLUS checkpoint
throughput.  Random access is gonna lower that 40MB/sec way down real
fast.

First step to speed things up is putting pg_xlog on its own disk(s).
Since pg_xlog is mostly sequentially accessed, it's much faster when
it's not competing with the rest of the db.  THEN if you need faster
disks you can buy them and throw them at path/data/base/.  However,
often just splitting things out like this will make a world of
difference on a write heavy database.
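
The usual way to split it out is a symlink (a sketch - paths invented, and
the server must be stopped first):

  pg_ctl -D /usr/local/pgsql/data stop
  mv /usr/local/pgsql/data/pg_xlog /disk2/pg_xlog
  ln -s /disk2/pg_xlog /usr/local/pgsql/data/pg_xlog
  pg_ctl -D /usr/local/pgsql/data start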

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] checkpoints/bgwriter tuning verification

2009-10-29 Thread Brad Nicholson
On Thu, 2009-10-29 at 14:46 -0400, Vick Khera wrote:
> On my primary DB I'm observing random slowness which just doesn't make
> sense to me.  The I/O system can easily do 40MB/sec writes, but I'm
> only seeing a sustained 5MB/sec, even as the application is stalling
> waiting on the DB.
> 
> My only guess is that I'm getting hit by checkpoints too often, and
> this is causing some delays.  I did a lot of reading and came across
> this most excellent article by Greg Smith
> (http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm).
> Reading that and the Pg manual and looking at the statistics, here is
> what I think I need to do.  (Greg, do you have a performance tuning
> book?  If so, I really want to buy it!  Your articles are awesome.)
> 
> current settings for checkpoints:
> checkpoint_segments = 32
> checkpoint_timeout = 900
> checkpoint_completion_target = 0.9
> log_checkpoints = on
> 
> In my logs I see this pretty consistently during times of slowness:
> 
> Oct 29 14:17:38 d04 postgres[54902]: [10990-1] LOG:  checkpoint starting: xlog
> Oct 29 14:22:04 d04 postgres[54902]: [10991-1] LOG:  checkpoint
> complete: wrote 52828 buffers (24.3%); 0 transaction log file(s)
> added, 0 removed, 32 recycled; write=265.881
> Oct 29 14:22:57 d04 postgres[54902]: [10992-1] LOG:  checkpoint starting: xlog
> Oct 29 14:26:56 d04 postgres[54902]: [10993-1] LOG:  checkpoint
> complete: wrote 52773 buffers (24.3%); 0 transaction log file(s)
> added, 0 removed, 32 recycled; write=234.846
> Oct 29 14:28:32 d04 postgres[54902]: [10994-1] LOG:  checkpoint starting: xlog
> Oct 29 14:33:32 d04 postgres[54902]: [10995-1] LOG:  checkpoint
> complete: wrote 53807 buffers (24.7%); 0 transaction log file(s)
> added, 0 removed, 32 recycled; write=299.170
> 
> 
> basically, the next checkpoint starts within a few seconds of the
> prior one completing.

That's most likely due to having checkpoint_completion_target set near
the maximum.  You are telling it to stretch the checkpoint out as long
as possible.

> 
> The stats show this:
> 
> # select * from pg_stat_bgwriter;
> -[ RECORD 1 ]--+---
> checkpoints_timed  | 3236
> checkpoints_req| 83044
> buffers_checkpoint | 1376460896
> buffers_clean  | 59124159
> maxwritten_clean   | 304410
> buffers_backend| 285595787
> buffers_alloc  | 6643047623
> 
> Based on Greg's article and the above number showing that most
> checkpoints are triggered by running out of WAL segments, I should
> increase my checkpoint_segments.  Also, based on the article, I should
> increase the bgwriter_lru_maxpages (currently at default 100).
> 
> My plan is to bump checkpoint_segments to 48 and reduce
> checkpoint_completion_target to 0.7, 

That's what I would do. I think you're getting hit by checkpointing too
frequently or stretching the checkpoints out too long (or a combo of
both).

I'd also keep bumping checkpoint_segments up until you are
checkpointing on the timeout.  This will give you a longer window
between checkpoints - which gives more time to smooth the checkpoint
out.

> and bump the
> bgwriter_lru_maxpages to 500.


I would probably start by increasing the number of WAL segments and
decreasing the checkpoint_completion_target and see if that solves the
problem.  I wouldn't change bgwriter_lru_maxpages at the same time,
and wouldn't necessarily increase it by a factor of 5 when doing so.

It does look like you need to increase it though.
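
As a first pass that might look like this in postgresql.conf (numbers taken
from earlier in the thread):

  checkpoint_segments = 48
  checkpoint_completion_target = 0.7
  # bgwriter_lru_maxpages left at its default for now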


> Can the checkpoint operation actually cause the DB to stop responding
> for a few seconds at a time?  That seems to be what I observe.
> Sometimes for 5 or more seconds one transaction will just stall.

Absolutely.  The smoothing is supposed to alleviate that, but I'm not
sure how well it works with aggressive activity driven checkpoints like
you have.  Keep us posted, I'm curious.

> Thanks for any ideas.
> 
-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] checkpoints/bgwriter tuning verification

2009-10-29 Thread Vick Khera
On my primary DB I'm observing random slowness which just doesn't make
sense to me.  The I/O system can easily do 40MB/sec writes, but I'm
only seeing a sustained 5MB/sec, even as the application is stalling
waiting on the DB.

My only guess is that I'm getting hit by checkpoints too often, and
this is causing some delays.  I did a lot of reading and came across
this most excellent article by Greg Smith
(http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm).
Reading that and the Pg manual and looking at the statistics, here is
what I think I need to do.  (Greg, do you have a performance tuning
book?  If so, I really want to buy it!  Your articles are awesome.)

current settings for checkpoints:
checkpoint_segments = 32
checkpoint_timeout = 900
checkpoint_completion_target = 0.9
log_checkpoints = on

In my logs I see this pretty consistently during times of slowness:

Oct 29 14:17:38 d04 postgres[54902]: [10990-1] LOG:  checkpoint starting: xlog
Oct 29 14:22:04 d04 postgres[54902]: [10991-1] LOG:  checkpoint
complete: wrote 52828 buffers (24.3%); 0 transaction log file(s)
added, 0 removed, 32 recycled; write=265.881
Oct 29 14:22:57 d04 postgres[54902]: [10992-1] LOG:  checkpoint starting: xlog
Oct 29 14:26:56 d04 postgres[54902]: [10993-1] LOG:  checkpoint
complete: wrote 52773 buffers (24.3%); 0 transaction log file(s)
added, 0 removed, 32 recycled; write=234.846
Oct 29 14:28:32 d04 postgres[54902]: [10994-1] LOG:  checkpoint starting: xlog
Oct 29 14:33:32 d04 postgres[54902]: [10995-1] LOG:  checkpoint
complete: wrote 53807 buffers (24.7%); 0 transaction log file(s)
added, 0 removed, 32 recycled; write=299.170


basically, the next checkpoint starts within a few seconds of the
prior one completing.


The stats show this:

# select * from pg_stat_bgwriter;
-[ RECORD 1 ]--+---
checkpoints_timed  | 3236
checkpoints_req| 83044
buffers_checkpoint | 1376460896
buffers_clean  | 59124159
maxwritten_clean   | 304410
buffers_backend| 285595787
buffers_alloc  | 6643047623
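
Put another way (a quick sketch against the same view):

# SELECT checkpoints_req::float8 / (checkpoints_timed + checkpoints_req)
    AS xlog_forced_fraction FROM pg_stat_bgwriter;

With the numbers above that's 83044 / 86280, i.e. about 96% of checkpoints
are forced by xlog consumption rather than by the timeout.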

Based on Greg's article and the above number showing that most
checkpoints are triggered by running out of WAL segments, I should
increase my checkpoint_segments.  Also, based on the article, I should
increase the bgwriter_lru_maxpages (currently at default 100).

My plan is to bump checkpoint_segments to 48 and reduce
checkpoint_completion_target to 0.7, and bump the
bgwriter_lru_maxpages to 500.

Can the checkpoint operation actually cause the DB to stop responding
for a few seconds at a time?  That seems to be what I observe.
Sometimes for 5 or more seconds one transaction will just stall.

Thanks for any ideas.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't connect

2009-10-29 Thread Richard Huxton
Bob Pawley wrote:
> I am on Windows and am running an anti virus program. But I was running
> the same programs on Windows before without this problem.

Well, as long as you're happy you've ruled out your anti-virus, and
you're running 8.3.8 then you'll want to monitor it and next time it
happens record:
1. What connections are already open
2. Whether PG receives any connection attempt at all
3. Whether your a/v+firewall sees the connection attempt

Items 2,3 will require you to check that you are logging the right
information (PG can log connection and disconnection). You'll need to
consult your a/v manuals for details on its logging.
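
For item 2, a sketch of the postgresql.conf settings involved:

  log_connections = on      # log each successful connection
  log_disconnections = on   # log session end, including duration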

If PostgreSQL isn't accepting any connections then you can't connect to
list what other connections it has. You should be able to see each in Task Manager,
but probably something like sysinternals' "process explorer" will be
more useful. Check how many connections and whether any are unreasonably
large, hung etc. Ideally the developers would probably want to see
output from a debugger, but I'm guessing that's not straightforward on
Windows.

It's possible there is a corner case that the fix in 8.3.8 doesn't
handle and it would be useful to know.

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't connect

2009-10-29 Thread Alvaro Herrera
Bob Pawley wrote:

> FATAL:  could not reattach to shared memory (key=1804, addr=0170): 487
> 2009-10-29 00:19:20 PDT WARNING:  worker took too long to start; cancelled
> 
> Is there some way of ensuring that the server always accepts a connection?

This is a known bug, supposedly fixed in 8.3.8.  Update and let us know
if it recurs.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is data compressed when stored?

2009-10-29 Thread Howard Cole

A. Kretschmer wrote:
> http://www.postgresql.org/docs/current/static/storage-toast.html
>
> Andreas

Thanks Andreas.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't connect

2009-10-29 Thread Bob Pawley
I am on Windows and am running an anti virus program. But I was running the 
same programs on Windows before without this problem.


Bob
- Original Message - 
From: "Richard Huxton" 

To: "Bob Pawley" 
Cc: 
Sent: Thursday, October 29, 2009 10:24 AM
Subject: Re: [GENERAL] Can't connect



Bob Pawley wrote:

Hi

My copy of PostgreSQL version 8.3 has decided not to receive a
connection after an idle time measured in hours.


Odd.


It accepts the connection once I stop the server and then restart it. At
this point, it always asks for the password.

Here is the log of the event -

FATAL:  could not reattach to shared memory (key=1804,
addr=0170): 487 2009-10-29 00:19:20 PDT WARNING:  worker took too
long to start; cancelled


That "worker took too long..." message is from autovacuum, so I'm not
sure it's directly responsible. The "could not reattach to shared
memory" error looks familiar though.

Are you:
1. On Windows?
2. Running some sort of anti-virus?

Also, you might want to read this news item from September:
 http://www.postgresql.org/about/news.1135

Are you already on 8.3.8?

--
 Richard Huxton
 Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't connect

2009-10-29 Thread Richard Huxton
Bob Pawley wrote:
> Hi
> 
> My copy of PostgreSQL version 8.3 has decided not to receive a
> connection after an idle time measured in hours.

Odd.

> It accepts the connection once I stop the server and then restart it. At
> this point, it always asks for the password.
> 
> Here is the log of the event -
> 
> FATAL:  could not reattach to shared memory (key=1804,
> addr=0170): 487 2009-10-29 00:19:20 PDT WARNING:  worker took too
> long to start; cancelled

That "worker took too long..." message is from autovacuum, so I'm not
sure it's directly responsible. The "could not reattach to shared
memory" error looks familiar though.

Are you:
1. On Windows?
2. Running some sort of anti-virus?

Also, you might want to read this news item from September:
  http://www.postgresql.org/about/news.1135

Are you already on 8.3.8?

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Can't connect

2009-10-29 Thread Bob Pawley
Hi

My copy of PostgreSQL version 8.3 has decided not to receive a connection after 
an idle time measured in hours.

It accepts the connection once I stop the server and then restart it. At this 
point, it always asks for the password.

Here is the log of the event -

FATAL:  could not reattach to shared memory (key=1804, addr=0170): 487
2009-10-29 00:19:20 PDT WARNING:  worker took too long to start; cancelled

Is there some way of ensuring that the server always accepts a connection?

Bob

Re: [GENERAL] [Newbie] UPDATE based on other table content

2009-10-29 Thread Daniel Chiaramello

Thom Brown wrote:
> 2009/10/29 Daniel Chiaramello :
>> Never mind, I finally found how:
>>
>> UPDATE
>>   product
>> SET
>>   qty = qty+s_count
>> FROM (
>>       SELECT
>>           intermediate.product_id,
>>           count(*) AS s_count
>>       FROM
>>           intermediate,
>>           orders
>>       WHERE
>>           orders.intermediate_id=intermediate.id
>>       GROUP BY
>>           intermediate.product_id
>>   ) AS summary
>> WHERE
>>   summary.product_id = product.id
>> ;
>>
>> Sorry for the disturbance!
>> Daniel
>
> Your solution looks like it would perform a cumulative calculation.
> Surely you'd want qty = s_count?
>
> In any case, wouldn't you be better off not having a quantity column
> at all and just calculating it with either a query or a view?
>
> Example:
>
> SELECT product.id, COUNT(intermediate.product_id)
> FROM product
> LEFT JOIN intermediate ON product.id = intermediate.product_id
> GROUP BY product.id
> ORDER BY product.id
>
> Or include a product name in the product table to get more meaningful
> output.  You'd then get an output like:
>
> id  name    count
> 1   Orange  5
> 2   Apples  7
> 3   Pears   2
> 4   Kiwi    0
>
> If you don't want rows for which there have been no orders (or
> whatever your intermediate table is for), use an INNER JOIN instead.
>
> Regards
>
> Thom

Thanks for the answer, but no, it really is adding the number of entries to my 
"qty" field. The "orders" table is a transient one and is cleaned out regularly.
Of course, the example I gave is a simplified one (there are no real orders 
or products - I chose those names to make the problem easier to follow); in 
reality, the problem is much more complicated than that :)

But thanks for the answer anyway.

Daniel

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] multiple identical calc and function in single query

2009-10-29 Thread Sam Mason
On Thu, Oct 29, 2009 at 12:05:54PM +0200, Sim Zacks wrote:
> 326/getdisplayconversionmultiplebypn(pnid) is in this query 6 times. How
> many times is it evaluated?

I'm pretty sure it'll be evaluated multiple times.  Why don't you put it
into a sub-select?  It'll at least save some typing.  Something like:

  SELECT pnid, calc + case when ... then .. else ... end AS qty
  FROM (
SELECT pnid, 326/getdisplayconversionmultiplebypn(pnid) AS calc, ...
FROM ...) x;

I have a feeling that it may still get evaluated several times in
certain cases, but you could put a "RAISE NOTICE" into your code to
find out.
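
For instance (a sketch only - the real body of your function isn't shown
here, so a dummy value stands in for it):

  CREATE OR REPLACE FUNCTION getdisplayconversionmultiplebypn(pnid integer)
  RETURNS numeric LANGUAGE plpgsql AS $$
  BEGIN
      RAISE NOTICE 'getdisplayconversionmultiplebypn(%) called', pnid;
      RETURN 2;  -- placeholder; keep your existing conversion logic here
  END;
  $$;

Run the query once and count the NOTICE lines you get back.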

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Newbie] UPDATE based on other table content

2009-10-29 Thread Thom Brown
2009/10/29 Daniel Chiaramello :
> Never mind, I finally found how:
>
> UPDATE
>   product
> SET
>   qty = qty+s_count
> FROM (
>       SELECT
>           intermediate.product_id,
>           count(*) AS s_count
>       FROM
>           intermediate,
>           orders
>       WHERE
>           orders.intermediate_id=intermediate.id
>       GROUP BY
>           intermediate.product_id
>   ) AS summary
> WHERE
>   summary.product_id = product.id
> ;
>
> Sorry for the disturbance!
> Daniel
>
> Daniel Chiaramello a écrit :
>>
>> Hello.
>>
>> I have a very basic question, related to the following "problem".
>>
>> I have the following tables:
>>
>> product
>>   id
>>   qty
>>
>> intermediate
>>   id
>>   product_id
>>
>> orders
>>   intermediate_id
>>
>> I want to update the "qty" field of the "product" table by incrementing it
>> each time there is an order in the "orders" table, referencing a given
>> product through the "intermediate" table.
>>
>> I tried the following query:
>>
>> UPDATE
>>    product
>> SET
>>       qty = qty+1
>> FROM
>>   intermediate,
>>   orders
>> WHERE
>>   orders.intermediate_id=intermediate.id AND
>>   intermediate.product_id=product.id
>> ;
>>
>> But of course it does the predictable thing - i.e. the qty field is
>> incremented only once, even if more than one entry references a given
>> product. But that's not what I was hoping for...
>>
>> What would be the "good" solution to do that UPDATE?
>>
>> Thanks for your attention!
>> Daniel Chiaramello
>>
>

Your solution looks like it would perform a cumulative calculation.
Surely you'd want qty = s_count?

In any case, wouldn't you be better off not having a quantity column
at all and just calculating it with either a query or a view?

Example:

SELECT product.id, COUNT(intermediate.product_id)
FROM product
LEFT JOIN intermediate ON product.id = intermediate.product_id
GROUP BY product.id
ORDER BY product.id

Or include a product name in the product table to get more meaningful
output.  You'd then get an output like:

id  name    count
1   Orange  5
2   Apples  7
3   Pears   2
4   Kiwi0

If you don't want rows for which there have been no orders (or
whatever your intermediate table is for), use an INNER JOIN instead.

Regards

Thom

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Newbie] UPDATE based on other table content

2009-10-29 Thread Daniel Chiaramello

Never mind, I finally found how:

UPDATE
   product
SET
   qty = qty+s_count
FROM (
   SELECT
   intermediate.product_id,
   count(*) AS s_count
   FROM
   intermediate,
   orders
   WHERE
   orders.intermediate_id=intermediate.id
   GROUP BY
   intermediate.product_id
   ) AS summary
WHERE
   summary.product_id = product.id
;

Sorry for the disturbance!
Daniel

Daniel Chiaramello wrote:
> Hello.
>
> I have a very basic question, related to the following "problem".
>
> I have the following tables:
>
> product
>    id
>    qty
>
> intermediate
>    id
>    product_id
>
> orders
>    intermediate_id
>
> I want to update the "qty" field of the "product" table by
> incrementing it each time there is an order in the "orders" table,
> referencing a given product through the "intermediate" table.
>
> I tried the following query:
>
> UPDATE
>    product
> SET
>    qty = qty+1
> FROM
>    intermediate,
>    orders
> WHERE
>    orders.intermediate_id=intermediate.id AND
>    intermediate.product_id=product.id
> ;
>
> But of course it does the predictable thing - i.e. the qty field is
> incremented only once, even if more than one entry references a
> given product. But that's not what I was hoping for...
>
> What would be the "good" solution to do that UPDATE?
>
> Thanks for your attention!
> Daniel Chiaramello




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is data compressed when stored?

2009-10-29 Thread A. Kretschmer
In response to Howard Cole :
> Hi everyone,
> 
> Is data compressed when stored in a database table/field?

http://www.postgresql.org/docs/current/static/storage-toast.html
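
Short version: yes, for larger values. text and bytea columns default to
the EXTENDED storage strategy, which lets TOAST compress them once a row
grows past a couple of kilobytes. The strategy can be changed per column,
e.g. (table and column names invented):

  ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;  -- disable compression
  ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTENDED;  -- the default: allow it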

Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is data compressed when stored?

2009-10-29 Thread Howard Cole

Hi everyone,

Is data compressed when stored in a database table/field?

If not, is there an easy way (invisible to the database client) to 
compress text and bytea data without actually using compressed disks?


Thanks.

Howard
www.selestial.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Newbie] UPDATE based on other table content

2009-10-29 Thread Daniel Chiaramello

Hello.

I have a very basic question, related to the following "problem".

I have the following tables:

product
   id
   qty

intermediate
   id
   product_id

orders
   intermediate_id

I want to update the "qty" field of the "product" table by incrementing 
it each time there is an order in the "orders" table, referencing a 
given product through the "intermediate" table.


I tried the following query:

UPDATE
   product
SET
   qty = qty+1
FROM
   intermediate,
   orders
WHERE
   orders.intermediate_id=intermediate.id AND
   intermediate.product_id=product.id
;

But of course it does the predictable thing - i.e. the qty field is 
incremented only once, even if more than one entry references a 
given product. But that's not what I was hoping for...


What would be the "good" solution to do that UPDATE?

Thanks for your attention!
Daniel Chiaramello

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Emal reg expression

2009-10-29 Thread Stephane Bortzmeyer
On Wed, Oct 28, 2009 at 05:45:14AM -0700,
 Xai  wrote 
 a message of 2 lines which said:

> i want to create a type for an email field but i'm not good with regx

Do not even try.

http://stackoverflow.com/questions/201323/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Emal reg expression

2009-10-29 Thread William Temperley
2009/10/28 Richard Huxton :
> Xai wrote:
>> i want to create a type for an email field but i'm not good with regx
>> can some one help me?
>
> Google for "email regex". Be warned - this is very complicated if you
> want to match *all* possible email addresses.
>

Just send your users an email asking them to reply. If they do, it's
obviously a valid address.

Best regards,

Will Temperley

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] multiple identical calc and function in single query

2009-10-29 Thread Sim Zacks
I have a query which includes the same calculation and function call
multiple times.

Is the calculation and function call evaluated one time or as many times
as it is in the query?

Example:

326/getdisplayconversionmultiplebypn(pnid) is in this query 6 times. How
many times is it evaluated?


select pnid, (326/getdisplayconversionmultiplebypn(pnid)) + case when
coalesce(mpq,0)=0 then 0 else
mpq-((326/getdisplayconversionmultiplebypn(pnid)) % mpq)end as qty,
case when not translatempq then
(326/getdisplayconversionmultiplebypn(pnid)) + case when
coalesce(mpq,0)=0 then 0 else
mpq-((326/getdisplayconversionmultiplebypn(pnid)) % mpq)end
else ((326/getdisplayconversionmultiplebypn(pnid)) + case when
coalesce(mpq,0)=0 then 0 else
mpq-((326/getdisplayconversionmultiplebypn(pnid)) % mpq)end)/mpq  end as
mpqqty
from manufacturerpartpn a
inner join leadstatemfgparts b on a.partid=b.partid and
a.manufacturerid=b.manufacturerid
where compatibilitygradeid in (50,100,200) and lifecycleid not
in (-400,-500)
and a.parentid is null and a.PartID=1108 and leadstateid = any
('{1,3}')
and leadstateid <> case when lfbcount>0 then 4 else -1 end
and leadstateid <> case when lpbcount>0 then 2 else -1 end


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slow running query with views...how to increase efficiency? with index?

2009-10-29 Thread Alban Hertroys

On 28 Oct 2009, at 14:51, fox7 wrote:



> I copy the results produced by EXPLAIN ANALYZE for the two
> queries...


For next time, if you attach that output as text files it won't get 
wrapped by e-mail clients, making it a bit easier to read. Also, this 
looks like output from pgAdmin? Command-line psql doesn't wrap the lines 
in quotes (the quotes cause problems in tools like 
http://explain-analyze.info/).



> --Query with views-
> "  ->  Sort  (cost=40863.02..40865.50 rows=994 width=436) (actual time=5142.974..5143.026 rows=40 loops=1)"
> "        Sort Key: TC.term1, v2TO.term2"
> "        Sort Method:  quicksort  Memory: 23kB"
> "        ->  Hash Join  (cost=38857.33..40813.53 rows=994 width=436) (actual time=3547.557..5142.853 rows=40 loops=1)"
> "              Hash Cond: ((TC.term2)::text = (v2TO.term2)::text)"
> "              ->  Unique  (cost=38837.21..40099.83 rows=49719 width=111) (actual time=3546.697..4869.647 rows=168340 loops=1)"
> "                    ->  Sort  (cost=38837.21..39258.08 rows=168350 width=111) (actual time=3546.691..4363.092 rows=168350 loops=1)"
> "                          Sort Key: TC.term1, TC.term2"
> "                          Sort Method:  external merge  Disk: 21032kB"


Here's your problem. The time taken jumps from a few hundred 
milliseconds to 3.5 seconds here.


Postgres is told to sort a largish dataset and it doesn't fit in 
work_mem, so it has to push it to disk. This may well be one of the 
unnecessary orderings or distinct specifiers you put in your views; 
I'd try removing some of those and see what happens.


Alternatively you can increase the amount of work_mem that's available  
per connection.
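
For example (the sort above spilled about 21MB to disk, so you'd want
something larger than that per sort; the exact value here is just a guess):

  SET work_mem = '32MB';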


> "  ->  Seq Scan on TC  (cost=0.00..4658.50 rows=168350 width=111) (actual time=0.010..294.459 rows=168350 loops=1)"
> "  ->  Hash  (cost=20.07..20.07 rows=4 width=218) (actual time=0.219..0.219 rows=2 loops=1)"
> "        ->  Subquery Scan v2TO  (cost=20.00..20.07 rows=4 width=218) (actual time=0.192..0.207 rows=2 loops=1)"
> "              ->  Unique  (cost=20.00..20.03 rows=4 width=108) (actual time=0.186..0.195 rows=2 loops=1)"
> "                    ->  Sort  (cost=20.00..20.01 rows=4 width=108) (actual time=0.182..0.185 rows=2 loops=1)"
> "                          Sort Key: TO.term1, TO.term2"
> "                          Sort Method:  quicksort  Memory: 17kB"
> "                          ->  Append  (cost=15.17..19.96 rows=4 width=108) (actual time=0.094..0.169 rows=2 loops=1)"
> "                                ->  Unique  (cost=15.17..15.19 rows=3 width=108) (actual time=0.090..0.100 rows=2 loops=1)"
> "                                      ->  Sort  (cost=15.17..15.18 rows=3 width=108) (actual time=0.086..0.088 rows=2 loops=1)"
> "                                            Sort Key: TO.term2"
> "                                            Sort Method:  quicksort  Memory: 17kB"
> "                                            ->  Bitmap Heap Scan on TO  (cost=4.28..15.15 rows=3 width=108) (actual time=0.064..0.067 rows=2 loops=1)"
> "                                                  Recheck Cond: ((term1)::text = 'c'::text)"
> "                                                  ->  Bitmap Index Scan on TO_index1  (cost=0.00..4.28 rows=3 width=0) (actual time=0.052..0.052 rows=2 loops=1)"
> "                                                        Index Cond: ((term1)::text = 'c'::text)"
> "                                ->  Seq Scan on TB  (cost=0.00..4.72 rows=1 width=104) (actual time=0.056..0.056 rows=0 loops=1)"
> "                                      Filter: ((term2)::text = 'c'::text)"
> "Total runtime: 5147.410 ms"


Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general