[SQL] vacuum process taking more than 33 hours

2007-01-15 Thread Mario Behring
Hi all,

I've executed a VACUUM FULL on a database 33GB in size. The process was going 
fine until it reached an index (I guess it's an index) and there it stopped for 
more than 30 hours... the whole weekend.

I've canceled it, but I desperately need to free some space on the server's 
disk. I was thinking about using the TRUNCATE statement on the table I know to 
be the largest one. I have some questions though:
Will the TRUNCATE free disk space?
Can I run this command and leave the openNMS software running at the same
time? The ONMS product will certainly try to execute INSERTs and such.
If the TRUNCATE does indeed free disk space, what should I do next: reindex,
vacuum?

Also, what else can I do to avoid this kind of problem? Would a REINDEX fix the 
problem and let the VACUUM command finish?

Please advise.

Thank you.

Best regards,

Mario Behring




 


Re: [SQL] vacuum process taking more than 33 hours

2007-01-15 Thread Andrew Sullivan
On Mon, Jan 15, 2007 at 06:23:23AM -0800, Mario Behring wrote:
> Hi all,
> 
> I've executed a VACUUM FULL on a database 33GB in size. The process
> was going fine until it reached an index (I guess it's an index) and
> there it stopped for more than 30 hours... the whole
> weekend.

It may not have been doing anything.  VACUUM FULL needs to take an
exclusive lock on each table it is processing.  It may have been
waiting for that lock.
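
Next time, you can check for that in the pg_locks system view; something
along these lines (a rough sketch; exact column names vary a little
across versions):

SELECT pid, relation::regclass AS rel, mode, granted
FROM pg_locks
WHERE NOT granted;

A VACUUM FULL that is stuck waiting shows up there as an ungranted lock
request, and the pid is the backend doing the waiting.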

> I've canceled it, but I desperately need to free some space on the
> server's disk. I was thinking about using the TRUNCATE statement on
> the table I know to be the largest one. I have some questions
> though:

TRUNCATE and VACUUM are different beasts.  VACUUM recovers space from
deleted or updated rows.  If you have done neither of those things,
then it won't recover any space.  TRUNCATE is like DELETE on
steroids: it simply removes all the data from your table.

TRUNCATE will indeed recover disk, although I can't remember whether
it actually returns that disk space to the operating system, or
whether it remains allocated for the table in question by postgres. 
If the latter, a VACUUM FULL on the table in question ought to be
enough to get you the space back.
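
In other words, something along these lines (the table name here is just
an example; substitute whichever table is actually the big one):

TRUNCATE TABLE events;
-- if the freed space still shows as allocated to the table:
VACUUM FULL events;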

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The fact that technology doesn't work is no bar to success in the marketplace.
--Philip Greenspun



Re: [SQL] vacuum process taking more than 33 hours

2007-01-15 Thread Ezequiel Luis Pellettieri

Hi Andrew,

Why don't you try dropping all indexes except the constraint ones for
each table? Then you'll free approximately half of the DB size. Then
re-create the indexes, one by one, and finally you can perform a full
vacuum.
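
Roughly like this (index, table, and column names here are made up;
substitute your own):

DROP INDEX events_nodeid_idx;   -- repeat for each non-constraint index
CREATE INDEX events_nodeid_idx ON events (nodeid);
VACUUM FULL events;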

that worked for me. :)

cheers

pelle.-

--
Best regards, Ezequiel L. Pellettieri


Re: [SQL] vacuum process taking more than 33 hours

2007-01-15 Thread Andrew Sullivan
On Mon, Jan 15, 2007 at 02:44:15PM +, Ezequiel Luis Pellettieri wrote:
> Hi Andrew,
> 
> Why don't you try dropping all indexes except the constraint ones for
> each table?

I don't have the problem, so I don't know whether that will help.

> Then you'll free approximately half of the DB size. Then re-create the
> indexes, one by one, and finally you can perform a full vacuum.
> 
> that worked for me. :)

On some versions of Postgres, it will, but AFAIK the latest versions
don't have the same index bloat problems.  Moreover, if your vacuum
regime is right, you should _never_ have to VACUUM FULL.
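
By a vacuum regime I mean routine maintenance along these lines (a
sketch; plain VACUUM takes no exclusive lock, so it can run alongside
normal traffic):

VACUUM ANALYZE;

On 8.1 and later you can instead set autovacuum = on in postgresql.conf
and let the server schedule it for you.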

A


-- 
Andrew Sullivan  | [EMAIL PROTECTED]
When my information changes, I alter my conclusions.  What do you do sir?
--attr. John Maynard Keynes



[SQL] Referencing tables are grinding UPDATE to a halt

2007-01-15 Thread William Scott Jordan

Hey all!

I'm having some trouble with a simple update on a table that only has 
about 250,000 rows in it.  The table itself looks something like:


CREATE TABLE price_details (
price_detail_id int PRIMARY KEY,
private bool
) ;

There is one table that references price_details, but isn't affected 
by the "private" column, and one table that references this second 
table.  They look something like:


CREATE TABLE prices (
p_id int PRIMARY KEY,
price_detail_id int NOT NULL REFERENCES price_details ON DELETE CASCADE,
max_sales int
) ;

CREATE INDEX prices_price_detail_id  ON prices (price_detail_id) ;

CREATE TABLE sales (
sales_id int PRIMARY KEY,
p_id int NOT NULL REFERENCES prices ON DELETE CASCADE,
sales int
) ;

CREATE INDEX sales_p_id ON sales (p_id) ;

I'm trying to do a simple update to the "private" column in the 
price_details table, which I expected to take a few seconds at 
most.  After 10 minutes, I gave up and ran explain, with this as the result:



EXPLAIN UPDATE price_details SET private = 't' WHERE private = 'f' ;

 Nested Loop  (cost=2663.45..363527947.70 rows=118759098 width=50)
   Join Filter: (subplan)
   ->  Seq Scan on sales  (cost=0.00..3685.27 rows=54627 width=42)
   ->  Materialize  (cost=2663.45..2706.93 rows=4348 width=12)
         ->  Seq Scan on price_details  (cost=0.00..2663.45 rows=4348 width=12)
               Filter: (private = false)
   SubPlan
     ->  Index Scan using prices_price_detail_id on prices  (cost=0.00..3.01 rows=1 width=4)
           Index Cond: (price_detail_id = $0)

 Seq Scan on price_details  (cost=0.00..2663.45 rows=4348 width=508)
   Filter: (private = false)
(12 rows)


So it looks to me like postgres is checking this table against the
table that references it, and the table that references that table,
making what should be a very easy transaction into something unusable.
Is there any way to avoid this without losing proper referencing?
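
For what it's worth, the foreign keys are enforced by system triggers,
and those can be listed from the catalogs (pg_trigger is a standard
system catalog; the RI trigger names are auto-generated):

SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'price_details'::regclass;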


Any suggestions would be appreciated.

-Scott




Re: [SQL] Resolved - Referencing tables are grinding UPDATE to a halt

2007-01-15 Thread William Scott Jordan

Yeah, I forgot about the RULE on that table.  Ignore my previous email.
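
In case anyone else hits this: the rules attached to a table can be
listed via the pg_rules system view, e.g.:

SELECT rulename, definition
FROM pg_rules
WHERE tablename = 'price_details';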

-Scott
