Re: [GENERAL] Recreating indices safely

2001-09-20 Thread Denis Perchine

On Wednesday 19 September 2001 11:25, Tom Lane wrote:
 Denis Perchine [EMAIL PROTECTED] writes:
  BTW, using begin; lock table; create index; drop index; commit; is not
  working,

 ??

 regression=# create table foo (f1 text);
 CREATE
 regression=# create index fooi1 on foo(f1);
 CREATE
 regression=# begin;
 BEGIN
 regression=# lock table foo;
 LOCK TABLE
 regression=# create index fooi2 on foo(f1);
 CREATE
 regression=# drop index fooi1;
 DROP
 regression=# end;
 COMMIT

 Please define not working.

Hmmm... I got deadlock detected... Something else was interfering as well...

BTW, sometimes I also get the following message, which really intrigues me.

ERROR:  Index 8734149 does not exist

When I restart my application it just disappears... Is it possible that backends lose
information about updated indices? And how can I debug/detect this?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--






Re: [GENERAL] Re: Performance decreasing

2001-08-20 Thread Denis Perchine

On Friday 17 August 2001 15:09, Ivan Babikov wrote:
 In this case, however, I think he may be understating too much.  I read
 the original question as PostgreSQL is not useful for production systems.
 Call me melodramatic if you like: you are probably right.

 The point, I guess, is this: it would be really useful to have a document
 somewhere that honestly described the limitations of (the current version
 of) PostgreSQL.

 Do you mean Postgres becomes very weak when the size of a database reaches
 1.5Gb or something close to it?

 Maybe this is one of those typical questions, but I have heard people complaining
 that Postgres is just for quite small databases. Now we have to choose a free
 database for the inexpensive branch of our project, and Interbase looks
 better in its ability to work with quite big databases (up to 10-20Gb). I am not
 sure that Postgres will work with databases greater than 10Gb; what does
 everyone think?

I do not see any problems. It works for me. The only problem you could have is
with vacuum, and that is solvable anyway. And if you do not have that many updates
it is not an issue either (I mean, if you do not update more than 25% of the DB
each day).
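
For what it's worth, "solvable" here just means running vacuum regularly
(a minimal sketch; the schedule and database name are arbitrary):

-- run periodically, e.g. nightly from cron: psql -d mydb -c "vacuum analyze;"
vacuum analyze;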

Actually, to anyone listening to such advice I would recommend creating a
test installation and stress-testing it before going to production. Interbase has
its own problems.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




Re: [GENERAL] memory leaks fixed?

2001-04-08 Thread Denis Perchine

Hello,

 Just finished reading all of the comments at

 http://openacs.org/philosophy/why-not-mysql.html

 and became concerned about the comments regarding severe memory leaks with
 PostgreSQL.  Is this true?  Have they been fixed?  Are there any
 workarounds?

There still are some, although not as large as mentioned in the comments.
If you have a long-running cursor, and the tables involved in this cursor are
heavily updated, you will end up with a 200MB postmaster in 1-2 days.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--




[GENERAL] Postgres eats up memory when using cursors

2001-02-28 Thread Denis Perchine

Hello,

I would like to bring the subject to your attention again.
The problem is the following:
I need to have a cursor open for a long time.
I declare a cursor on a table of approx. 1 million rows
and start fetching data 1000 rows at a time.
Data processing can take quite a long time (3-4 days).
Theoretically the postgres process should remain the same size.
But it grows... By the end of the 3rd day it becomes 256MB large.
And this is a REAL problem.
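
The access pattern is essentially this (a minimal sketch; the table and
cursor names below are made up):

begin;
-- hypothetical table of approx. 1 million rows
declare big_cur cursor for select * from messages;
-- repeated over the 3-4 days while each batch is processed on the client
fetch 1000 from big_cur;
-- ... process the batch, fetch the next one, and so on ...
close big_cur;
commit;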

Also I would like to mention that this problem was raised by another person
as well.

I would like to hear any comments on this issue.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] postgres load

2001-02-21 Thread Denis Perchine

On Wednesday 21 February 2001 01:19, Joseph wrote:
 I am reposting my question since I had no answer, and this is a serious
 problem for me... I've used explain, vacuum, indexes... and so on, a few
 nested queries...

Do not use ReiserFS on production machines. There is still an enormous amount
of bugs, including data corruption, etc. See recent linux-kernel discussions
for details. That is why what you are experiencing is possible.

 
 I am running PGSql 7.0.3 over Linux 2/ELF with a ReiserFS filesystem,
 Bi-P3 800 and 2Gb of RAM.

 My database jumps from a load of 8 to 32 without any real reason nor
 too many requests.

 I already do vacuum, even on the fly if ever that can decrease load,
 but nothing...

 I've created many indexes also...

 Can someone help me ?

 Emmanuel

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] Re: Load a database into memory

2001-01-25 Thread Denis Perchine

If you have enough RAM, the database will already be in memory, in a
  manner of speaking - your kernel will have all of the files held in disk
  cache.

 Ok, but if so, why do 10 simultaneous identical queries on a well-indexed table
 with only 500,000 records take such a long time? :-/

Which queries? Could you please provide EXPLAIN output for them? Could you
please provide execution stats for them under load?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] Problems with order by, limit, and indices

2001-01-07 Thread Denis Perchine

Hi,

another interesting thing...
This is current 7.1.

slygreetings=> explain select * from users where variant_id=5 AND active='f'
order by rcptdate,variant_id,active limit 60;
NOTICE:  QUERY PLAN:

Limit  (cost=13005.10..13005.10 rows=60 width=145)
  ->  Sort  (cost=13005.10..13005.10 rows=3445 width=145)
        ->  Index Scan using users_rcptdate_vid_key on users  (cost=0.00..12658.35 rows=3445 width=145)

EXPLAIN
slygreetings=> set enable_sort to off;
SET VARIABLE
slygreetings=> explain select * from users where variant_id=5 AND active='f'
order by rcptdate,variant_id,active limit 60;
NOTICE:  QUERY PLAN:

Limit  (cost=100013005.10..100013005.10 rows=60 width=145)
  ->  Sort  (cost=100013005.10..100013005.10 rows=3445 width=145)
        ->  Index Scan using users_rcptdate_vid_key on users  (cost=0.00..12658.35 rows=3445 width=145)

EXPLAIN

The cost is something really weird... Why?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] Problems with order by, limit, and indices

2001-01-06 Thread Denis Perchine

  Example with variant_id = 2
 
  slygreetings=> explain select * from users where variant_id=2 AND
  active='f' order by rcptdate limit 60;
  NOTICE:  QUERY PLAN:

  Limit  (cost=77117.18..77117.18 rows=60 width=145)
    ->  Sort  (cost=77117.18..77117.18 rows=162640 width=145)
          ->  Seq Scan on users  (cost=0.00..33479.65 rows=162640 width=145)

 This plan looks fine to me, considering that variant_id=2 is the vast
 majority of the table.  An indexscan will be slower, except perhaps if
 you've recently CLUSTERed the table on this index.  (If you don't
 believe me, try it with ENABLE_SEQSCAN set to OFF.)

I would agree with you if there were no limit specified. As far as I can
understand, it is possible to traverse users_rcptdate_vid_key forward,
get 60 tuples, and then finish. And those tuples will already be sorted (the index
includes rcptdate as well).

  Example with variant_id = 5
 
  slygreetings=> explain select * from users where variant_id=5 AND
  active='f' order by rcptdate limit 60;
  NOTICE:  QUERY PLAN:

  Limit  (cost=13005.10..13005.10 rows=60 width=145)
    ->  Sort  (cost=13005.10..13005.10 rows=3445 width=145)
          ->  Index Scan using users_rcptdate_vid_key on users  (cost=0.00..12658.35 rows=3445 width=145)

 You could probably get a plan without the sort step if you said
   ... order by variant_id, rcptdate;

No way, it just gets all tuples for the qual, sorts them, and then does the limiting.
That's horrible...

slygreetings=> explain select * from users where variant_id=5 AND active='f'
order by rcptdate,variant_id limit 60;
NOTICE:  QUERY PLAN:

Limit  (cost=13005.10..13005.10 rows=60 width=145)
  ->  Sort  (cost=13005.10..13005.10 rows=3445 width=145)
        ->  Index Scan using users_rcptdate_vid_key on users  (cost=0.00..12658.35 rows=3445 width=145)

EXPLAIN
slygreetings=> explain select * from users where variant_id=5 AND active='f'
order by rcptdate,variant_id,active limit 60;
NOTICE:  QUERY PLAN:

Limit  (cost=13005.10..13005.10 rows=60 width=145)
  ->  Sort  (cost=13005.10..13005.10 rows=3445 width=145)
        ->  Index Scan using users_rcptdate_vid_key on users  (cost=0.00..12658.35 rows=3445 width=145)

EXPLAIN

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] overhead of small large objects

2000-12-11 Thread Denis Perchine

  Is there significant overhead involved in using large objects that
  aren't very large?

 Yes, since each large object is a separate table in 7.0.* and before.
 The allocation unit for table space is 8K, so your 10K objects chew up
 16K of table space.  What's worse, each LO table has a btree index, and
 the minimum size of a btree index is 16K --- so your objects take 32K
 apiece.

 That accounts for a factor of 3.  I'm not sure where the other 8K went.
 Each LO table will require entries in pg_class, pg_attribute, pg_type,
 and pg_index, plus the indexes on those tables, but that doesn't seem
 like it'd amount to anything close to 8K per LO.

 7.1 avoids this problem by keeping all LOs in one big table.

Or you can use my patch for the same functionality in 7.0.x.
You can get it at: http://www.perchine.com/dyp/pg/

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[GENERAL] My patches are now on a webpage.

2000-12-08 Thread Denis Perchine

Hello,

I finally found time and put all my patches for 7.0.x on a webpage.
I use these patches in a heavily loaded system (it handles approx.
1M queries each day). That's why I can say that they are at least somewhat
tested. Also, all of these patches are included in the current 7.1 CVS.

The link is: http://www.perchine.com/dyp/pg/

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] Problems during doing reindex on 7.0.3

2000-11-13 Thread Denis Perchine
On 14 November 2000 07:33, Hiroshi Inoue wrote:
 Denis Perchine wrote:
  Hello,
 
  I have a very interesting problem. I have a quite highly loaded database.
  Once a day I create new indices, and after that drop the old ones. I do this
  one by one.

 Your "reindex" is different from the REINDEX command, isn't it?

Sure. It is a sequence like:
create index ix_name_1 ...;
drop index ix_name;
alter table ix_name_1 rename to ix_name;

for each index I want to recreate.
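
Spelled out for one hypothetical index (the table and index names below are
invented), that sequence is:

-- suppose users_email_key is an existing index on users(email)
create index users_email_key_new on users (email);
drop index users_email_key;
alter table users_email_key_new rename to users_email_key;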

  All is fine except this error message I sometimes get for some of the
  queries just after reindex.
 
  ERROR:  Index 2050642 does not exist
 
  Looks like the index OID was already resolved, but the index was not locked
  yet... And it was dropped out from under us...

 Parser-rewriter-planner acquires a short term lock.
 This may be improved in 7.1 though I'm not sure.

That's really bad, as it forces some of the queries to fail... And I cannot
detect the situation, as PostgreSQL does not have any error classes.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--


[GENERAL] ERROR: Index 1821202 does not exist

2000-11-09 Thread Denis Perchine

Hello,

any ideas what this message means?

ERROR: Index 1821202 does not exist

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] Large objects

2000-11-02 Thread Denis Perchine

  I want to make an image catalogue. I will use postgresql, perl and php.
 
  What are the advantages of having the images in the database instead of
  having them out in a directory?
 
  After all, to show the images I need them on a directory?

 Really, you can show images from the database, BUT
 there are not many reasons to put something in the database for which you can't
 say WHERE field=some_thing.
 Better to place images in the filesystem.

Except in one case... When you would like to be sure of transaction safety...

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] Large objects

2000-11-02 Thread Denis Perchine

 On Thu, Nov 02, 2000 at 05:35:04PM +0600, Denis Perchine wrote:
  Except in one case... When you would like to be sure of transaction safety...

 Ok, but not for an image gallery.

Again... If you can accept that you may end up with half of an image, it's OK.
If not...

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] Query caching

2000-11-01 Thread Denis Perchine

 PostgreSQL hits the disk on UPDATE/DELETE/INSERT operations. SELECTs
 are cached, but the default cache is only ½MB of RAM. You can change
 this to whatever you want.

 I'm using Cold Fusion and it can cache queries itself, so no database
 action is necessary. But I don't think PHP and others have this
 possibility. But Cold Fusion costs 1300$ :(

No, PHP has this.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[GENERAL] Fwd: Postgres update

2000-10-29 Thread Denis Perchine

Any comments?
This is vacuum... And it happens several times.
PostgreSQL 7.0.2

--  Forwarded Message  --
Subject: Postgres update
Date: Fri, 27 Oct 2000 23:00:01 -0500 (EST)
From: [EMAIL PROTECTED] (WebmailStation User)
To: [EMAIL PROTECTED]


psql:/home/www/www.webmailstation.com/sql/vacuum.limits.sql:1: NOTICE:  --Relation limits--
psql:/home/www/www.webmailstation.com/sql/vacuum.limits.sql:1: NOTICE:  Pages 72: Changed 55, reaped 10, Empty 0, New 0; Tup 9404: Vac 279, Keep/VTL 1658/1658, Crash 0, UnUsed 0, MinLen 56, MaxLen 56; Re-using: Free/Avail. Space 15864/15864; EndEmpty/Avail. Pages 0/10. CPU 0.03s/0.00u sec.
psql:/home/www/www.webmailstation.com/sql/vacuum.limits.sql:1: NOTICE:  Index limits_id_key: Pages 63; Tuples 9404: Deleted 279. CPU 0.01s/0.02u sec.
psql:/home/www/www.webmailstation.com/sql/vacuum.limits.sql:1: NOTICE:  Index ix_limits_sid_type: Pages 81; Tuples 9404: Deleted 279. CPU 0.01s/0.02u sec.
psql:/home/www/www.webmailstation.com/sql/vacuum.limits.sql:1: ERROR:  Parent tuple was not found

---

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] Using BLOBs with PostgreSQL

2000-10-09 Thread Denis Perchine

 PG definitely lacks BLOB support and it is a big drawback...

This statement is completely wrong.
Just look at the lo_create/lo_open/lo_read/lo_write/lo_close functions
in libpq.

 Even with PG 7.1 and TOAST, which will remove the 8kB limit on records, there
 will still be no BLOB support.

 As you have indicated, it seems that PG chokes on null characters. What is
 needed is a varbinary type, which stores binary data unformatted, and spits
 out the binary data inside a PQResult. Some additional functions are needed
 to read and write chunks of data inside that varbinary type.

 Any volunteer amongst the hackers?

 I may try to do it as a user-defined type inside a shared library, but I
 think PG as is may not handle it correctly...

 Cheers
 [EMAIL PROTECTED]


 -Original Message-
 From: Martin A. Marques
 To: Tim Kientzle; PostgreSQL general mailing list
 Sent: 8/10/00 11:11
 Subject: Re: [GENERAL] Using BLOBs with PostgreSQL

 On Sat, 07 Oct 2000, Tim Kientzle wrote:
  I've been using MySQL for initial development; it has pretty
  clean and easy-to-use BLOB support.  You just declare a BLOB
  column type, then read and write arbitrarily large chunks of data.
  In Perl, BLOB columns work just like varchar columns; in JDBC,
  the getBinaryStream()/setBinaryStream() functions provide support
  for streaming large data objects.

 If you're talking about BLOB texts, just declare the column as text and
 that's all. In the case of binary data, I don't have an idea. I only work with
 text data.

  How well-supported is this functionality in PostgreSQL?
  I did some early experimenting with PG, but couldn't
  find any column type that would accept binary data
  (apparently PG's parser chokes on null characters?).
 
  I've heard about TOAST, but have no idea what it really
  is, how to use it, or how well it performs.  I'm leery
  of database-specific APIs.

 As far as I have heard, it looks like a nice way to optimize searches in
 blobs. Don't know anything else.

 Saludos... :-)

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] Problem with nested query

2000-10-03 Thread Denis Perchine

 Denis Perchine [EMAIL PROTECTED] writes:
  As far as you can see it uses a sequential scan for the inner select. This is
  quite strange, as if I do this select with a constant it will use an index
  scan:

 Are the two tables' "email" fields declared as exactly the same
 datatype?

No...

 IIRC, 7.0.* has a problem with recognizing that a type-coerced parameter
 to an inner query is useful as an indexscan reference constant.  This is
 fixed in current sources, but in the meantime avoiding an implicit type
 coercion is the easiest workaround.

OK. Thanks.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[GENERAL] Using one of the indices to make query faster

2000-08-03 Thread Denis Perchine

Hello,

Just a small question about the postgres query optimizer.
If I have a table with 2 fields, a and b,
I have indices on both of them,
and the table is quite large ( 10).
I do:

select * from q where a = 10 and b = 20;
I know that the best solution is to build an index on both columns, but...
will postgres consider using one of the indices to make an initial cut?

This should be quite useful...

One example where this would be useful is:
we have a query where one of the conditions is very restrictive and
an index can be used on it, while another condition is a complicated functional one.

If we use the index on the first column and evaluate all the other conditions only
on the matching rows, this will be much faster than a sequential scan.
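
A sketch of that situation (the table, indexes, and the second condition below
are invented for illustration):

-- hypothetical schema
create table q (a integer, b integer, payload text);
create index q_a_idx on q (a);
create index q_b_idx on q (b);

-- a = 10 is restrictive and indexable; the LIKE expression stands in for the
-- complicated functional condition. The hope is that the planner scans q_a_idx
-- first and evaluates the expensive condition only on the few matching rows.
select *
from q
where a = 10
  and lower(payload) like '%urgent%';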

Any thoughts?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[GENERAL] Quoting routines in libpq

2000-07-17 Thread Denis Perchine

Hello,

Just a small question: why isn't there any quoting routine in libpq?
It would be very handy to have one. It is also quite hard to always
write a quoting routine per database if you want to have cross-database code.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



[GENERAL] libpq error codes

2000-06-21 Thread Denis Perchine

Hello all,

I am trying to add automatic connection-restoring capability to my app,
and I have the following problem:

When I execute a query I get:

query: 1024: 'select count(*) from pg_class'
ResStatus: PGRES_TUPLES_OK
Status: 0

ResStatus is the result of PQresultStatus, Status is the result of PQstatus.

If I shut down postgres between queries I get:

query: 1024: 'select count(*) from pg_class'
ResStatus: PGRES_FATAL_ERROR
Status: 0
except: pqReadData() --  read() failed: errno=32
Broken pipe

query: 1024: 'select count(*) from pg_class'
FATAL 1:  The system is shutting down
NOTICE:  AbortTransaction and not in in-progress state
Status: 1
except: pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.

Please note that Status is 0 in the first case. There is already no backend on the
other side, but Status is still OK. That's bad... And the second query just returns NULL
from PQexec.

The problem is that I cannot properly distinguish between errors in SQL or some
incorrect SQL usage, and situations when the connection is lost and I should try to
reconnect.

Any ideas how this can be implemented?

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] libpq error codes

2000-06-21 Thread Denis Perchine

  If I shutdown postgres between queries I get:
 
  query: 1024: 'select count(*) from pg_class'
  ResStatus: PGRES_FATAL_ERROR
  Status: 0
  except: pqReadData() --  read() failed: errno=32
  Broken pipe
 
 What version are you running, and are you sure you are using libpq
 correctly?  Using psql I see

7.0.2.

And you connect over a local pipe (Unix-domain socket), but I use TCP sockets. If I just
do psql -d db, all is as you've said, but if I do psql -d db -h localhost the picture is as follows:

db=> select count(*) from pg_class;
 count
---
 28531
(1 row)

db=> select count(*) from pg_class;
pqReadData() --  read() failed: errno=32
Broken pipe
db=> select count(*) from pg_class;
FATAL 1:  The system is shutting down
NOTICE:  AbortTransaction and not in in-progress state
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
 
 regression=# select count(*) from pg_class;
  count
 ---
260
 (1 row)
 
  in another window, kill postgres backend 
 
 regression=# select count(*) from pg_class;
 FATAL 1:  The system is shutting down
 NOTICE:  AbortTransaction and not in in-progress state
 pqReadData() -- backend closed the channel unexpectedly.
 This probably means the backend terminated abnormally
 before or while processing the request.
 regression=#
 
 which looks pretty reasonable.
 
 I should also point out that in the current system, normal shutdown
 (via pg_ctl stop or 'kill' on the postmaster) produces no such result
 because extant backends are allowed to finish their sessions normally.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--



Re: [GENERAL] DateStyle is Postgres with US (NonEuropean) conventions

2000-06-15 Thread Denis Perchine

  i have started postgresql doing postmaster -i 
 
 and  this is what i am getting in the shell.
 
 NOTICE:  DateStyle is Postgres with US (NonEuropean) conventions
 NOTICE:  DateStyle is Postgres with US (NonEuropean) conventions
 NOTICE:  DateStyle is Postgres with US (NonEuropean) conventions
 NOTICE:  DateStyle is Postgres with US (NonEuropean) conventions
 
  and it continues.
 
  does anyone know how I can start postgres to prevent this?

postmaster -i -o -e 

CU,
Denis.



[GENERAL] Problem when doing join from multiple views

2000-06-12 Thread Denis Perchine
02)
                          ->  Nested Loop  (cost=0.00..15.54 rows=1 width=298)
                                ->  Nested Loop  (cost=0.00..13.51 rows=1 width=278)
                                      ->  Nested Loop  (cost=0.00..11.48 rows=1 width=258)
                                            ->  Nested Loop  (cost=0.00..9.45 rows=1 width=238)
                                                  ->  Nested Loop  (cost=0.00..8.36 rows=1 width=234)
                                                        ->  Nested Loop  (cost=0.00..6.33 rows=1 width=214)
                                                              ->  Nested Loop  (cost=0.00..4.30 rows=1 width=194)
                                                                    ->  Nested Loop  (cost=0.00..3.28 rows=1 width=12)
                                                                          ->  Nested Loop  (cost=0.00..2.18 rows=1 width=8)
                                                                                ->  Seq Scan on dicts_type t  (cost=0.00..1.09 rows=1 width=4)
                                                                                ->  Seq Scan on dicts_type t  (cost=0.00..1.09 rows=1 width=4)
                                                                          ->  Seq Scan on dicts_type t  (cost=0.00..1.09 rows=1 width=4)
                                                                    ->  Seq Scan on users  (cost=0.00..1.01 rows=1 width=182)
                                                              ->  Index Scan using dicts_id_key on dicts d  (cost=0.00..2.01 rows=1 width=20)
                                                        ->  Index Scan using dicts_id_key on dicts d  (cost=0.00..2.01 rows=1 width=20)
                                                  ->  Seq Scan on dicts_type t  (cost=0.00..1.09 rows=1 width=4)
                                            ->  Index Scan using dicts_id_key on dicts d  (cost=0.00..2.01 rows=1 width=20)
                                      ->  Index Scan using dicts_id_key on dicts d  (cost=0.00..2.01 rows=1 width=20)
                                ->  Index Scan using dicts_id_key on dicts d  (cost=0.00..2.01 rows=1 width=20)
                          ->  Seq Scan on dicts_type t  (cost=0.00..1.09 rows=1 width=4)
                    ->  Seq Scan on dicts_type t  (cost=0.00..1.09 rows=1 width=4)
              ->  Index Scan using dicts_id_key on dicts d  (cost=0.00..2.01 rows=1 width=20)
        ->  Seq Scan on dicts_type t  (cost=0.00..1.09 rows=1 width=4)
  ->  Index Scan using dicts_id_key on dicts d  (cost=0.00..2.01 rows=1 width=20)

EXPLAIN

All is perfect :-((( Except for a small issue. It would be much better to do only one
seq (or index) scan on dicts and then do the join... Why are there SO many scans on
dicts? And why is so much time spent on prepare?

OK. I know that I am a bad guy... Let's do a simpler thing:
CREATE VIEW country AS select id, name from dicts where type = 1;
CREATE VIEW gender AS select id, name from dicts where type = 2;
CREATE VIEW income AS select id, name from dicts where type = 3;
CREATE VIEW occupation AS select id, name from dicts where type = 4;
CREATE VIEW question AS select id, name from dicts where type = 5;
CREATE VIEW state AS select id, name from dicts where type = 6;
CREATE VIEW born_year AS select id, name from dicts where type = 7;

And the same query's explain:
psql:test.sql:11: NOTICE:  QUERY PLAN:

Nested Loop  (cost=0.00..15.21 rows=1 width=294)
  ->  Nested Loop  (cost=0.00..13.18 rows=1 width=278)
        ->  Nested Loop  (cost=0.00..11.15 rows=1 width=262)
              ->  Nested Loop  (cost=0.00..9.12 rows=1 width=246)
                    ->  Nested Loop  (cost=0.00..7.10 rows=1 width=230)
                          ->  Nested Loop  (cost=0.00..5.07 rows=1 width=214)
                                ->  Nested Loop  (cost=0.00..3.04 rows=1 width=198)
                                      ->  Seq Scan on users  (cost=0.00..1.01 rows=1 width=182)
                                      ->  Index Scan using dicts_id_key on dicts  (cost=0.00..2.02 rows=1 width=16)
                                ->  Index Scan using dicts_id_key on dicts  (cost=0.00..2.02 rows=1 width=16)
                          ->  Index Scan using dicts_id_key on dicts  (cost=0.00..2.02 rows=1 width=16)
                    ->  Index Scan using dicts_id_key on dicts  (cost=0.00..2.02 rows=1 width=16)
              ->  Index Scan using dicts_id_key on dicts  (cost=0.00..2.02 rows=1 width=16)
        ->  Index Scan using dicts_id_key on dicts  (cost=0.00..2.02 rows=1 width=16)
  ->  Index Scan using dicts_id_key on dicts  (cost=0.00..2.02 rows=1 width=16)

EXPLAIN

Again... Lots of index scans on the same table... Is it possible to do one scan and
avoid all the separate index scans? Sorry to bother you, but I created the views to
avoid multiple scans...
But...

DISCLAIMER: after every change a vacuum analyze was done. :-))) Just to avoid some
questions.

-- 
Sincerely Yours,
Denis Perchine

--
E-Mail: [EMAIL PROTECTED]
HomePage: http://www.perchine.com/dyp/
FidoNet: 2:5000/120.5
--