Re: [PERFORM] App very unresponsive while performing simple update

2006-05-31 Thread Greg Stark
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:

> I tried duplicating this but couldn't. What's the data in the tables?

Sorry, I had intended to include the definition and data:

stark=> create table t1 (a integer primary key, b integer);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "t1_pkey" for 
table "t1"
CREATE TABLE

stark=> create table t2 (a integer, b integer primary key);
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "t2_pkey" for 
table "t2"
CREATE TABLE

stark=> insert into t1 values (1,2);
INSERT 0 1

stark=> insert into t2 values (1,2);
INSERT 0 1

stark=> alter table t1 add constraint fk foreign key (b) references t2 
deferrable initially deferred ;
ALTER TABLE

stark=> alter table t2 add constraint fk foreign key (a) references t1 
deferrable initially deferred ;
ALTER TABLE

stark=> \d t1
  Table "public.t1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a  | integer | not null
 b  | integer | 
Indexes:
"t1_pkey" PRIMARY KEY, btree (a)
Foreign-key constraints:
"fk" FOREIGN KEY (b) REFERENCES t2(b) DEFERRABLE INITIALLY DEFERRED

stark=> \d t2
  Table "public.t2"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a  | integer | 
 b  | integer | not null
Indexes:
"t2_pkey" PRIMARY KEY, btree (b)
Foreign-key constraints:
"fk" FOREIGN KEY (a) REFERENCES t1(a) DEFERRABLE INITIALLY DEFERRED


stark=> select * from t1;
 a | b 
---+---
 1 | 2
(1 row)

stark=> select * from t2;
 a | b 
---+---
 1 | 2
(1 row)




-- 
greg




Re: [PERFORM] App very unresponsive while performing simple update

2006-05-31 Thread Jim C. Nasby
On Wed, May 31, 2006 at 11:24:05AM -0400, Greg Stark wrote:
> stark=> begin;
> BEGIN
>                                 stark=> begin;
>                                 BEGIN
> stark=> update t1 set a = 0;
> UPDATE 1
> stark=> update t1 set a = 1;
> UPDATE 1
>                                 stark=> update t2 set b = 0;
>                                 UPDATE 1
>                                 stark=> update t2 set b = 2;
>                                 UPDATE 1
> stark=> commit;
>                                 stark=> commit;
>                                 ERROR:  deadlock detected
>                                 DETAIL:  Process 16531 waits for ShareLock on transaction 245131; blocked by process 16566.
>                                 Process 16566 waits for ShareLock on transaction 245132; blocked by process 16531.
>                                 CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."t1" x WHERE "a" = $1 FOR SHARE OF x"
>                                 stark=>
> COMMIT

I tried duplicating this but couldn't. What's the data in the tables?
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [PERFORM] App very unresponsive while performing simple update

2006-05-31 Thread Jan de Visser
On Wednesday 31 May 2006 13:34, Brendan Duddridge wrote:
> Hi Jan,
>
> That sounds like a great idea! How would you control the update to
> occur only every 10,000 transactions?
>
> Is there a trigger setting for that somewhere?

I was thinking something like

IF count(*) % 10000 = 0 then
  ... do stuff ...
end if

Problem is that the count(*) may be a bit expensive; maybe better to have a
sequence and use the sequence value instead.

Or something like that.
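
For example, a rough sketch of the sequence-based variant (untested; the
product.id and product_click_history.product_id column names are just guesses
at your schema):

CREATE SEQUENCE click_batch_seq;

CREATE OR REPLACE FUNCTION bump_click_count() RETURNS trigger AS $$
BEGIN
    -- nextval() is cheap and never blocks, so every insert pays almost nothing
    IF nextval('click_batch_seq') % 10000 = 0 THEN
        -- refresh the count for the product that was just clicked
        UPDATE product
           SET click_count = (SELECT count(*)
                                FROM product_click_history h
                               WHERE h.product_id = NEW.product_id)
         WHERE id = NEW.product_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER product_click_history_count
    AFTER INSERT ON product_click_history
    FOR EACH ROW EXECUTE PROCEDURE bump_click_count();

That only refreshes the product that happened to receive the 10,000th click,
so other counts lag a bit, but it keeps the dead-row churn on product low.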

Also, maybe you should do the actual update of click_count not in the trigger 
itself, but have the trigger do a NOTIFY and have another process do a 
LISTEN. Depends how long the update takes.

jan

>
> Thanks,
>
> 
> Brendan Duddridge | CTO | 403-277-5591 x24 |  [EMAIL PROTECTED]
>
> ClickSpace Interactive Inc.
> Suite L100, 239 - 10th Ave. SE
> Calgary, AB  T2G 0V9
>
> http://www.clickspace.com
>
> On May 31, 2006, at 6:34 AM, Jan de Visser wrote:
> > On Wednesday 31 May 2006 02:29, Brendan Duddridge wrote:
> >> We'll probably have to write a process to update the click_count from
> >> querying our product_click_history table.
> >
> > How about an insert trigger on product_click_history which updates
> > click_count
> > every say 10,000 transactions or so?
> >
> > jan
> >
> > --
> > --
> > Jan de Visser [EMAIL PROTECTED]
> >
> > Baruk Khazad! Khazad ai-menu!
> > --

-- 
--
Jan de Visser                     [EMAIL PROTECTED]

                Baruk Khazad! Khazad ai-menu!
--



Re: [PERFORM] App very unresponsive while performing simple update

2006-05-31 Thread Brendan Duddridge

Hi Jan,

That sounds like a great idea! How would you control the update to  
occur only every 10,000 transactions?


Is there a trigger setting for that somewhere?

Thanks,


Brendan Duddridge | CTO | 403-277-5591 x24 |  [EMAIL PROTECTED]

ClickSpace Interactive Inc.
Suite L100, 239 - 10th Ave. SE
Calgary, AB  T2G 0V9

http://www.clickspace.com

On May 31, 2006, at 6:34 AM, Jan de Visser wrote:


On Wednesday 31 May 2006 02:29, Brendan Duddridge wrote:

We'll probably have to write a process to update the click_count from
querying our product_click_history table.


How about an insert trigger on product_click_history which updates  
click_count

every say 10,000 transactions or so?

jan

--
--
Jan de Visser [EMAIL PROTECTED]

Baruk Khazad! Khazad ai-menu!
--








Re: [PERFORM] App very unresponsive while performing simple update

2006-05-31 Thread Greg Stark
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:

> On Sun, May 28, 2006 at 07:20:59PM -0400, Greg Stark wrote:
> > Brendan Duddridge <[EMAIL PROTECTED]> writes:
> > 
> > > We do have foreign keys on other tables that reference the product  table.
> > > Also, there will be updates going on at the same time as this  update. 
> > > When
> > > anyone clicks on a product details link, we issue an  update statement to
> > > increment the click_count on the product. e.g.  update product set 
> > > click_count
> > > = click_count + 1;
> > 
> > You should realize this will produce a lot of garbage records and mean 
> > you'll
> > have to be running vacuum very frequently. You might consider instead of
> > updating the main table inserting into a separate clickstream table. That
> > trades off not getting instantaneous live totals with isolating the
> > maintenance headache in a single place. That table will grow large but you 
> > can
> > prune it at your leisure without impacting query performance on your main
> > tables.
>  
> Actually, you can still get instant results, you just have to hit two
> tables to do it.

But that defeats the purpose of moving this traffic out to the clickstream
table. The whole point is to avoid generating garbage records in your main
table that you're doing a lot of real-time queries against.

I would probably keep the clickstream table, then once a day or perhaps more
often perform an aggregate query against it to generate a summary table (and
then vacuum full or cluster it since it's half garbage). Then join from the
main product table to the summary table to sort by popularity.
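
Roughly, the daily job and the popularity query might look like this (just a
sketch; click_summary, product_id, and product.id are made-up names for the
summary table and keys):

-- refresh the summary in place from the raw clickstream
-- (new products would still need a corresponding row inserted into click_summary)
UPDATE click_summary
   SET click_count = h.cnt
  FROM (SELECT product_id, count(*) AS cnt
          FROM product_click_history
         GROUP BY product_id) AS h
 WHERE click_summary.product_id = h.product_id;

-- reclaim the dead rows left behind by the update
VACUUM FULL click_summary;

-- sort by popularity via a join against the summary
SELECT p.*, COALESCE(s.click_count, 0) AS clicks
  FROM product p
  LEFT JOIN click_summary s ON s.product_id = p.id
 ORDER BY clicks DESC;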

If you need results that are more up-to-date than 24 hours and/or can't stand
the downtime of the daily vacuum full on the summary table it becomes a lot
harder.

> > More likely you were blocking on some lock. Until that other query holding
> > that lock tries to commit Postgres won't actually detect a deadlock, it'll
> > just sit waiting until the lock becomes available.
> 
> Wow, are you sure that's how it works? I would think it would be able to
> detect deadlocks as soon as both processes are waiting on each other's
> locks.

I didn't mean to describe the general situation, just what I suspected was
happening in this case. The user had a large batch update that was performing
poorly. I suspect it may have been performing poorly because it was spending
time waiting to acquire an exclusive lock. There would be no deadlock yet,
just very slow updates.

However, the other client updating the other table has deferred foreign key
constraints pointing back to the table on which the big update is acquiring all
these exclusive locks. Locks for deferred constraints aren't taken until
they're checked, so the actual deadlock doesn't occur until commit.

In any case Tom said I was misunderstanding the deadlock message he posted.
The kind of situation I'm talking about would look something like this:

stark=> begin;
BEGIN
                                stark=> begin;
                                BEGIN
stark=> update t1 set a = 0;
UPDATE 1
stark=> update t1 set a = 1;
UPDATE 1
                                stark=> update t2 set b = 0;
                                UPDATE 1
                                stark=> update t2 set b = 2;
                                UPDATE 1
stark=> commit;
                                stark=> commit;
                                ERROR:  deadlock detected
                                DETAIL:  Process 16531 waits for ShareLock on transaction 245131; blocked by process 16566.
                                Process 16566 waits for ShareLock on transaction 245132; blocked by process 16531.
                                CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."t1" x WHERE "a" = $1 FOR SHARE OF x"
                                stark=>
COMMIT

Re: [PERFORM] App very unresponsive while performing simple update

2006-05-31 Thread Bruno Wolff III
On Wed, May 31, 2006 at 01:23:07 -0500,
  "Jim C. Nasby" <[EMAIL PROTECTED]> wrote:
> On Sun, May 28, 2006 at 07:20:59PM -0400, Greg Stark wrote:
> > Brendan Duddridge <[EMAIL PROTECTED]> writes:
> > More likely you were blocking on some lock. Until that other query holding
> > that lock tries to commit Postgres won't actually detect a deadlock, it'll
> > just sit waiting until the lock becomes available.
> 
> Wow, are you sure that's how it works? I would think it would be able to
> detect deadlocks as soon as both processes are waiting on each other's
> locks.

I don't see how it could wait for a commit. If a command is blocked waiting for
a lock, how are you going to get a commit (you might get a rollback if the
query is aborted)?



Re: [PERFORM] App very unresponsive while performing simple update

2006-05-31 Thread Jan de Visser
On Wednesday 31 May 2006 02:29, Brendan Duddridge wrote:
> We'll probably have to write a process to update the click_count from  
> querying our product_click_history table.

How about an insert trigger on product_click_history which updates click_count 
every say 10,000 transactions or so?

jan

-- 
--
Jan de Visser                     [EMAIL PROTECTED]

                Baruk Khazad! Khazad ai-menu!
--



Re: [PERFORM] Speedup hint needed, if available? :)

2006-05-31 Thread Nis Jorgensen

Mario Splivalo wrote:

Hello again.

I have to track user subscriptions to certain mailing lists, and I also
need to track credits users have on those mailing lists. On one side I
have procedures that add credits; on the other side I have procedures that
subtract available credits. Add/subtract is pretty intensive: around
30-50 adds per minute (usually 10 or 100 credits) and around 200-500
subtracts per minute (usually by one or two credits).

I have created a table user_subscriptions to track user subscriptions to a
certain mailing list. I have derived subscription_id as primary key. I
have two other tables, user_subscription_credits_given and
_credits_taken, which track credits added to or subtracted from a certain
subscription. I created those two tables so I could eliminate a lot of
UPDATEs on the user_subscriptions table (if I were to have a column
'credits' in that table).


It sounds to me like you have decided beforehand that the obvious
solution (update a credit field in the user_subscriptions table) is not 
going to perform well. Have you tried it? How does it perform?


If it does indeed give you performance problems, you could instead run 
some kind of batch job to update the credits field (and delete the 
/given/taken records).
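
A sketch of such a batch job (untested; it assumes a credits column on
user_subscriptions and the full table names from your description):

BEGIN;

-- fold the accumulated deltas into the credits column
UPDATE user_subscriptions
   SET credits = credits
       + COALESCE((SELECT sum(g.credits)
                     FROM user_subscription_credits_given g
                    WHERE g.subscription_id = user_subscriptions.subscription_id), 0)
       - COALESCE((SELECT sum(t.credits)
                     FROM user_subscription_credits_taken t
                    WHERE t.subscription_id = user_subscriptions.subscription_id), 0);

-- throw away the rows we just folded in
DELETE FROM user_subscription_credits_given;
DELETE FROM user_subscription_credits_taken;

COMMIT;

A real job would have to restrict both the UPDATE and the DELETEs to rows
that existed when it started (for example by remembering the maximum
primary key), so credits inserted while it runs don't get thrown away
uncounted.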


Finally: You could refactor the query to get rid of the union:

SELECT u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from,
u.valid_to, (
  SELECT sum(credits)
  FROM credits_given
  WHERE subscription_id = u.subscription_id
) - (
  SELECT sum(credits)
  FROM credits_taken
  WHERE subscription_id = u.subscription_id
) AS credits
FROM user_subscriptions u
WHERE u.user_id = 1

(Not tested).

You will probably need a COALESCE around each of the subqueries to avoid
problems with nulls. The sum of an empty set of numbers is 0. The
conjunction of an empty set of booleans is true. The SQL standard 
somehow manages to get this wrong.
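
Something like this, perhaps (again not tested):

SELECT u.subscription_id, u.user_id, u.mailinglist_id, u.valid_from,
       u.valid_to,
       COALESCE((SELECT sum(credits)
                   FROM credits_given
                  WHERE subscription_id = u.subscription_id), 0)
     - COALESCE((SELECT sum(credits)
                   FROM credits_taken
                  WHERE subscription_id = u.subscription_id), 0) AS credits
  FROM user_subscriptions u
 WHERE u.user_id = 1;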


/Nis


