Re: [PERFORM] [GENERAL] DELETE taking too much memory

2011-07-08 Thread Jose Ildefonso Camargo Tolosa
On Fri, Jul 8, 2011 at 4:35 AM, Dean Rasheed dean.a.rash...@gmail.com wrote:

  On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
  Hi,
 
  I have a delete query taking 7.2G of ram (and counting) but I do not
  understand why so much memory is necessary. The server has 12G, and
  I'm afraid it'll go into swap. Using postgres 8.3.14.
 
  I'm purging some old data from table t1, which should cascade-delete
  referencing rows in t2. Here's an anonymized rundown:
 
  # explain delete from t1 where t1id in (select t1id from t2 where
  foo=0 and bar < '20101101');

 It looks as though you're hitting one of the known issues with
 PostgreSQL and FKs. The FK constraint checks and CASCADE actions are
 implemented using AFTER triggers, which are queued up during the query
 to be executed at the end. For very large queries, this queue of
 pending triggers can become very large, using up all available memory.

 There's a TODO item to try to fix this for a future version of
 PostgreSQL (maybe I'll have another go at it for 9.2), but at the
 moment all versions of PostgreSQL suffer from this problem.
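
 For concreteness, here is a minimal sketch of the kind of schema involved
 (t1, t2 and the column names come from the anonymized query above; the
 types are assumptions):

 create table t1 (t1id integer primary key);
 create table t2 (
     t1id integer references t1 (t1id) on delete cascade, -- AFTER triggers
     foo  integer,
     bar  date
 );

 Each row deleted from t1 queues a pending cascade-trigger event for t2,
 and nothing is freed until the statement finishes -- that queue is where
 the memory goes.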

 The simplest work-around for you might be to break your deletes up
 into smaller chunks, say 100k or 1M rows at a time, eg:

 delete from t1 where t1id in (select t1id from t2 where foo=0
  and bar < '20101101' limit 100000);
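
 If you need to drive the chunks automatically, here is a rough sketch as
 a plpgsql helper (8.3 has no DO blocks, so a throwaway function; table
 and column names as above). Each delete is a separate statement, so the
 pending-trigger queue is drained between chunks:

 create or replace function purge_t1_chunked() returns bigint as $$
 declare
     chunk bigint;
     total bigint := 0;
 begin
     loop
         delete from t1 where t1id in (select t1id from t2
             where foo=0 and bar < '20101101' limit 100000);
         get diagnostics chunk = row_count;
         total := total + chunk;
         exit when chunk = 0;
     end loop;
     return total;
 end;
 $$ language plpgsql;

 select purge_t1_chunked();

 Note that the whole function still runs in one transaction; if memory
 keeps climbing, issue each chunk as its own statement from the client
 instead.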


I'd like to comment here: I had serious performance issues with a similar
query (the planner did horrible things), and I'm not sure if the planner
will do the same dumb thing it did for me. My query was against the same
table (ie, t1=t2). I had this query:

delete from t1 where ctid in (select ctid from t1 where
created_at < '20101231' limit 10000);   --- this was slow.  Changed to:

delete from t1 where ctid = any(array(select ctid from t1 where
created_at < '20101231' limit 10000));   --- a lot faster.

So... will the same principle work here, doing this?:

delete from t1 where t1id = any(array(select t1id from t2 where foo=0
and bar < '20101101' limit 100000));  -- would this query be faster than
the original one?
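
One way to check without risking the data (a sketch, using the same
assumed names) is to compare the two plans inside a transaction that gets
rolled back -- note that explain analyze really executes the delete:

begin;
explain analyze delete from t1 where t1id in (select t1id from t2
    where foo=0 and bar < '20101101' limit 100000);
rollback;

begin;
explain analyze delete from t1 where t1id = any(array(select t1id from t2
    where foo=0 and bar < '20101101' limit 100000));
rollback;

If the in() form shows a bad plan against a big t2 and the = any(array())
form does not, the rewrite should help here too.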




 Regards,
 Dean




Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4

2009-12-02 Thread Jose Ildefonso Camargo Tolosa
Hi!

On Thu, Dec 3, 2009 at 10:39 PM, Jerome Alet jerome.a...@univ-nc.nc wrote:
 On Thu, Dec 03, 2009 at 10:54:07AM +0800, Craig Ringer wrote:

 Anyway, it'd be nice if Bacula would convert file names to utf-8 at the
 file daemon, using the encoding of the client, for storage in a utf-8
 database.

 +1 for me.

+1 here: it does, in fact, have problems when restoring to a server with
a different code page than the original one.
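
A quick way to see the kind of breakage involved (just an illustration,
not Bacula-specific): the same bytes decode to different text depending
on the encoding they are read with:

select convert_from(decode('636166c3a9', 'hex'), 'UTF8');   -- 'café'
select convert_from(decode('636166c3a9', 'hex'), 'LATIN1'); -- 'cafÃ©'

A file name stored without a known encoding can only be interpreted by
guessing, which is exactly what breaks across servers with different
code pages.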


 this is the way to go.

 I understand people with an existing backup history won't be very happy
 with this unless you provide them the appropriate tools or instructions
 to convert their database's content, though.

 bye

 --
 Jérôme Alet - jerome.a...@univ-nc.nc - Centre de Ressources Informatiques
      Université de la Nouvelle-Calédonie - BPR4 - 98851 NOUMEA CEDEX
   Tél : +687 266754                                  Fax : +687 254829


