[HACKERS] timeout for "idle in transaction"

2001-10-25 Thread Hannu Krosing


Is there a way to automatically ROLLBACK transactions that have been
"idle in transaction" for too long?

I remember it has been discussed on this list, but what is the current
status?

This is a problem that has haunted me on several web applications using
application servers that hold persistent connections (Zope, apache-php
with persistent connections).

---
Hannu

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Proposed new create command, CREATE OPERATOR CLASS

2001-10-25 Thread Teodor Sigaev

>>Wait a second, how can you do that? Doesn't that violate
>>pg_amop_opc_strategy_index ?
>>
> 
> It sure does, but running the script shows that the second insert
> doesn't try to insert any rows.  There's no entry in the temp table
> for ~~ because its left and right operands are not the types the
> SELECT/INTO is looking for.
> 
> This is evidently a bug in the script.  Oleg?
> 


Correct me if I'm mistaken.

When we were developing the @@ operator, I saw that Postgres does not use an
index in a SELECT if the operator has no commutator. But an operator whose
left and right argument types differ cannot be its own commutator. So I made
the ~~ operator only so that Postgres can use index access for the @@
operator. There is no difficulty in adding index support for ~~; the same
applies to the contrib/tsearch module.
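
For illustration, a hedged sketch (with hypothetical type and function names,
not the real contrib definitions) of declaring a pair of cross-type operators
as each other's commutators, which is what lets the planner use an index
whichever way the operands appear:

CREATE OPERATOR @@ (
    leftarg    = doc_index_t,
    rightarg   = doc_query_t,
    procedure  = doc_index_match,
    commutator = ~~
);

CREATE OPERATOR ~~ (
    leftarg    = doc_query_t,
    rightarg   = doc_index_t,
    procedure  = doc_query_match,   -- same test, arguments swapped
    commutator = @@
);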

But I do not think ~~ is needed for anything else.



-- 
Teodor Sigaev
[EMAIL PROTECTED]



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] inquiry using create function

2001-10-25 Thread fcarlo


Hello,

This is my first post here.
I've tried to search the archives and I haven't found
an answer to my problem... here it is...

I created a function with the "create function+setof" keywords...

create function employee(int) returns setof employee as 'select * from
employee where $1 = id'
language 'sql';


Instead of returning a tuple, I get this:

 ?column?
---
 136491256
 
I tried exchanging "$1" and "id" but it still did not work.  I
replaced the "*" with the actual fields in my table and it still would
not work.

What could be the problem?  By the way, I use PostgreSQL 7.1.3.

Thanks!

Carlo Florendo

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[HACKERS] Kind of "bug-report"

2001-10-25 Thread Antonio Fiol Bonnín


Hi!

I say kind of, as I am not sure about it, or whether there is a newer
version that does not exhibit the bug. Here's the description:

When I use the following query (either in psql or via JDBC), the
backend crashes, making the other backends fail spectacularly.

The beast:

select S.last_stat, hdb_comfort as "Confort Haut Débit", U.url as item
from url_stats S, urls U
where S.idzone =  and S.idurl = U.idurl
  and S.idurl in (543888938, -776493094)
  and last_stat between '2001-09-24 16:15:00.704' and '2001-10-25 00:00:00.0'
union
select trunc_3hour(last_stat) as last_stat,
       avg(hdb_comfort) as "Confort Haut Débit",
       idcontact::varchar(512) as item
from url_stats S, urls U, reports R
where S.idzone =  and S.last_stat between '2001-09-24 16:15:00.704' and '2001-10-25 00:00:00.0'
  and S.idurl = u.idurl and r.idurl = u.idurl
  and (status = 1 or status = 5)
  and (idcontact in (-431758079))
group by idcontact, trunc_3hour(last_stat)
order by last_stat;

(sorry about that ;-)

I have three (interesting for the example) tables:

Table url_stats ( hdb_comfort int, last_stat timestamp, idurl int,
idzone int, [...] )
Table urls ( idurl int, url varchar(512), status int [...] )
Table reports ( idurl int, idcontact int, [...] )

There are indices, called:
ident_url
url_by_id
 both on table urls (idurl)
url_by_status
 on table urls (status)

For table url_stats, they are quite straightforward:
Indices: stat_by_idurl,
 stat_by_idurl_idzone_laststat,
 stat_by_idurl_last_stat

The function trunc_3hour(timestamp) returns timestamp; it keeps the year,
month and day fields intact, sets minutes and seconds to zero, and truncates
the hour to hour/3*3 (so I only get 00:00:00, 03:00:00, 06:00:00, 09:00:00,
... 21:00:00).
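
A minimal sketch of what such a function might look like (the actual
definition is not included in this message; this assumes ordinary date_trunc
and interval arithmetic):

CREATE FUNCTION trunc_3hour(timestamp) RETURNS timestamp AS '
    SELECT date_trunc(''day'', $1)
           + trunc(date_part(''hour'', $1) / 3) * 3 * ''1 hour''::interval;
' LANGUAGE 'sql';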

Well, now you have all the elements.

An explain select ... shows:
Unique  (cost=41329.35..41329.56 rows=3 width=32)
  ->  Sort  (cost=41329.35..41329.35 rows=28 width=32)
        ->  Append  (cost=0.00..41328.66 rows=28 width=32)
              ->  Nested Loop  (cost=0.00..41222.22 rows=28 width=32)
                    ->  Seq Scan on urls u  (cost=0.00..68.31 rows=1431 width=16)
                    ->  Index Scan using stat_by_idurl_idzone_laststat on url_stats s  (cost=0.00..28.75 rows=1 width=16)
              ->  Aggregate  (cost=106.44..106.44 rows=0 width=28)
                    ->  Group  (cost=106.44..106.44 rows=1 width=28)
                          ->  Sort  (cost=106.44..106.44 rows=1 width=28)
                                ->  Nested Loop  (cost=0.00..106.43 rows=1 width=28)
                                      ->  Nested Loop  (cost=0.00..52.11 rows=2 width=12)
                                            ->  Index Scan using url_by_contact on reports r  (cost=0.00..13.26 rows=19 width=8)
                                            ->  Index Scan using url_by_id on urls u  (cost=0.00..2.02 rows=1 width=4)
                                      ->  Index Scan using stat_by_idurl_idzone_laststat on url_stats s  (cost=0.00..28.71 rows=6 width=16)


Would the verbose query plan be useful? I can send it to you if needed.


About the version:
$ psql --version
psql (PostgreSQL) 7.0.3
contains readline, history, multibyte support

I firmly believe that it's a RedHat compiled version.

I do not wish to upgrade unless it is absolutely required, as I have
about 2 GB of data and availability is a main concern.

More information:

If I execute (from psql) the two parts of the union separately, neither
crashes. If I select each part into tables temp1 and temp2 (not previously
created) and then issue "select * from temp1 union select * from temp2;",
it does not crash either.

The other clients tell me that the backend wishes them to reconnect, as
another backend died and shared memory could be corrupted. The crashing
one just says pqReadData() -- the backend closed the connection
unexpectedly, or something close to this.

If you have some clues, or some other way of writing the query without
dramatically degrading performance to unacceptable levels, anything will
be welcome.

The url_stats table contains 150+ tuples (I do not dare select
count(*) from url_stats ;-), urls contains 1000+ and reports contains
about 5000 (not sure, but >1000 and <10).

If you believe that upgrading could lead us to a notable performance
increase, we may study the situation.

Thank you for reading my e-mail.

Thank you very, very much for answering it.

Yours,

Antonio Fiol
W3ping


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] Index not used ! Why?

2001-10-25 Thread Gabor Csuri

Hi All,

 my database server had a very high load this morning.
I've found the problem: one of my indexes was not being used!
It's interesting:

addb=> \d banners
                                     Table "banners"
 Attribute  |           Type           |                      Modifier
------------+--------------------------+----------------------------------------------------
 b_no       | integer                  | not null default nextval('banners_b_no_seq'::text)
 usr_no     | integer                  |
 b_ext      | character varying(10)    |
 b_link     | character varying(100)   |
 b_from     | date                     |
 b_to       | date                     |
 b_lastview | timestamp with time zone | default now()
 b_maxview  | integer                  |
 b_curview  | integer                  | default 0
 b_maxclick | integer                  |
 b_curclick | integer                  | default 0
 b_weight   | integer                  | default 1
 b_curwg    | double precision         | default 0
 b_active   | boolean                  | default 'f'::bool
 last_upd   | timestamp with time zone | default now()
 upd_usr    | integer                  |
 b_name     | character varying(40)    |
Indices: b_usr_no_idx,
         banners_b_no_key

addb=> EXPLAIN SELECT b_link FROM banners WHERE b_no = 3;
NOTICE:  QUERY PLAN:

Seq Scan on banners  (cost=0.00..1.57 rows=1 width=12)

EXPLAIN
addb=> DROP INDEX banners_b_no_key;
DROP
addb=> CREATE INDEX banners_b_no_key ON banners (b_no);
CREATE
addb=> EXPLAIN SELECT b_link FROM banners WHERE b_no = 3;
NOTICE:  QUERY PLAN:

Index Scan using banners_b_no_key on banners  (cost=0.00..4.43 rows=1 width=12)

EXPLAIN
addb=> 

Why wasn't the index used?
postgresql-7.1.2, redhat 7.0, kernel:2.2.19

Thanks, Gabor


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] inquiry using create function

2001-10-25 Thread fcarlo


Hello,

I've tried to search some PostgreSQL mailing lists and I haven't found
an answer to my problem...

I create a function with the setof keyword...

create function employee(int) returns setof employee as 'select * from
employee where $1 = id'
language 'sql';


Instead of returning a tuple, I get this:

 ?column?
---
 136491256
 
I tried exchanging "$1" and "id" but it still did not work.  I
replaced the "*" with the actual fields in my table and it still would
not work.

What could be the problem?  By the way, I use PostgreSQL 7.1.3.

Thanks!

Carlo Florendo

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Kind of "bug-report"

2001-10-25 Thread Tom Lane

Antonio Fiol =?iso-8859-1?Q?Bonn=EDn?= <[EMAIL PROTECTED]> writes:
> I say kind of, as I am not sure about it, or whether there is a newer
> version that does not show up the bug. Here's the description:

Please update to 7.1.3 and let us know whether you still see the
problem.  We fixed a number of problems with UNION in 7.1.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] Index not used ! Why?

2001-10-25 Thread Gabor Csuri

> Hello!
> It needs some help by the command
> VACUUM [VERBOSE] ANALYZE table;
> to choose the ideal query strategy.

How can I choose a better query strategy than ... WHERE key_field = x; ?

Regards, Gabor.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Index not used ! Why? + Little graphical client ...

2001-10-25 Thread Nicolas Verger

>  my database server has very high load in this morning.
> I've found the problem. One of my index was not used so far!
> it's interesting:
> ...
> addb=> CREATE INDEX banners_b_no_key ON banners (b_no);
> CREATE
> addb=> EXPLAIN SELECT b_link FROM banners WHERE b_no = 3;
> NOTICE:  QUERY PLAN:
>
> Index Scan using banners_b_no_key on banners  (cost=0.00..4.43
> rows=1 width=12)
>
> EXPLAIN
> addb=>
>
> Why index wasn't used ?
> postgresql-7.1.2, redhat 7.0, kernel:2.2.19

Try to create a unique index:
CREATE UNIQUE INDEX banners_b_no_key ON banners (b_no);
or specify a primary key:
ALTER TABLE banners ADD CONSTRAINT pk_banners PRIMARY KEY (b_no);

then ANALYZE your table.
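
A compact sketch of that sequence, reusing the statements already given in
this thread (the expected plan is an assumption, not verified output):

ALTER TABLE banners ADD CONSTRAINT pk_banners PRIMARY KEY (b_no);
VACUUM ANALYZE banners;
EXPLAIN SELECT b_link FROM banners WHERE b_no = 3;
-- should now show an Index Scan rather than the Seq Scan seen before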

-- Nicolas --

We (my teammate and I) are trying to create a little graphical client for
PostgreSQL in Java. If someone wants to try it:
http://pgInhaler.ifrance.com. It's an alpha version with lots of bugs... Try
it and send us your feedback to [EMAIL PROTECTED]. Thanks...


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] Index of a table is not used (in any case)

2001-10-25 Thread Zeugswetter Andreas SB SD

Tom Lane writes:
> "Zeugswetter Andreas SB SD" <[EMAIL PROTECTED]> writes:
> > Imho one of the biggest sources for problems is people creating new
> > indexes on populated tables when the rest of the db/table has badly
> > outdated statistics or even only default statistics in place.
> > In this situation the optimizer is badly misguided, because it now
> > sees completely inconsistent statistics to work on.
> > (e.g. old indexes on that table may seem way too cheap compared 
> > to table scan) 
> 
> I don't think any of this is correct.  We don't have per-index
> statistics.  The only stats updated by CREATE INDEX are the same ones
> updated by plain VACUUM, viz the number-of-tuples and number-of-pages
> counts in pg_class.

1. Have I said anything about stats other than relpages and reltuples?

2. Even the most accurate pg_statistic entries are of only limited use if
reltuples and relpages are completely off. With the current behavior you
e.g. get:

rel1:  pages = 10     -- updated from "create index"
  index1 pages = 2    -- outdated
  index2 pages = 2000 -- current

rel2:  pages = 1      -- outdated

--> Optimizer will prefer join order: rel2, rel1

> I believe it's reasonable to update those stats
> more often than the pg_statistic stats (in fact, if we could keep them
> constantly up-to-date at a reasonable cost, we'd do so).

There is a whole lot of difference between keeping them constantly up to
date and modifying (part of) them in the "create index" command, so I do
not contradict your sentence above, but imho the conclusion is wrong.

> The
> pg_statistic stats are designed as much as possible to be independent
> of the absolute number of rows in the table, so that it's okay if they
> are out of sync with the pg_class stats.

On their own, they can only be good for choosing whether to use an
index or a seq scan; they are not sufficient to choose a good join order.

> The major reason why "you vacuumed but you never analyzed" is such a
> killer is that in the absence of any pg_statistic data, the default
> selectivity estimates are such that you may get either an index or seq
> scan depending on how big the table is.  The cost estimates are
> nonlinear (correctly so, IMHO, though I wouldn't necessarily 
> defend the
> exact shape of the curve) and ye olde default 0.01 will give you an
> indexscan for a small table but not for a big one.  In 7.2 I have
> reduced the default selectivity estimate to 0.005, for a number of
> reasons but mostly to get it out of the range where the decision will
> flip-flop.

Yes, the new selectivity is better, though imho still too high.
Imho the strategy should be to assume good selectivity
in the absence of pg_statistic evidence.
If the index was not selective enough for an average query, the
dba should not have created the index in the first place.

> test71=# create table foo (f1 int);
> test71=# create index fooi on foo(f1);
> test71=# explain select * from foo where f1 = 42;

> Index Scan using fooi on foo  (cost=0.00..8.14 rows=10 width=4)

> test71=# update pg_class set reltuples = 10, relpages = 
> 1000 where relname = 'foo';
> Index Scan using fooi on foo  (cost=0.00..1399.04 rows=1000 width=4)

> test71=# update pg_class set reltuples = 100, relpages = 
> 1 where relname = 'foo';

> Seq Scan on foo  (cost=0.00..22500.00 rows=1 width=4)

> In current sources you keep getting an indexscan as you increase the
> number of tuples...

As you can see, it toppled over at 10 million rows :-(

Andreas

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Can't cast bigint to smallint?

2001-10-25 Thread Tom Lane

Philip Warner <[EMAIL PROTECTED]> writes:
> The problem is in the code to re-enable triggers:
> ...reltriggers = (select Count(*)
> So perhaps this version now has Count returning a bigint rather than an int?

Okay, I've added conversion functions for int8-to-int2 and vice versa.
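
For illustration, a hedged reconstruction of the kind of statement that
needed it (the pg_dump query is abbreviated above, and the table name here
is hypothetical): count(*) now yields an int8, while pg_class.reltriggers
is an int2, so assigning one to the other needs the new conversion.

UPDATE pg_class
   SET reltriggers = (SELECT count(*) FROM pg_trigger
                       WHERE pg_trigger.tgrelid = pg_class.oid)
 WHERE relname = 'mytable';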

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Proposed new create command, CREATE OPERATOR CLASS

2001-10-25 Thread Bill Studenmund

On Wed, 24 Oct 2001, Tom Lane wrote:

> Bill Studenmund <[EMAIL PROTECTED]> writes:
> > [ revised proposal for CREATE OPERATOR CLASS syntax ]
>
> I don't like the idea of writing a bunch of consecutive commas (and
> having to count them correctly) for cases where we're inserting
> noncontigous amopstrategy or amprocnum numbers.  Perhaps the syntax
> for the elements of the lists could be
>
>   [ integer ]  operator  [ ( argtype, argtype ) ] [ RECHECK ]
>
>   [ integer ]  funcname ( argtypes )
>
> where if the integer is given, it is the strategy/procnum for this
> entry, and if it's not given then it defaults to 1 for the first
> item and previous-entry's-number-plus-one for later items.

That would work.

> Or just require the integer all the time.  That seems a lot less
> mistake-prone, really.  Concision is not a virtue in the case of
> a command as specialized as this.  Is there really anything wrong with
>
> CREATE OPERATOR CLASS complex_abs_ops
>   DEFAULT FOR TYPE complex USING btree
>   WITH
>   1  ||<,
>   2  ||<=,
>   3  ||=,
>   4  ||>=,
>   5  ||>
>   AND
>   1  complex_abs_cmp(complex, complex);

Not really. Especially when there are ones which are 3, 6, 7, 8, 20
floating around. :-)

> (One could imagine adding system catalogs that give symbolic names
> to the strategy/procnum numbers for each access method, and then
> allowing names instead of integers in this command.  I'm not sure
> whether GiST has sufficiently well-defined strategy numbers to make that
> work, but even if not, I like this better than a positional approach to
> figuring out which operator is which.)

Something like that (having a catalog of what the different operators are
supposed to be) would be nice. Especially for the support procs, so that
CREATE OPERATOR CLASS could make sure you gave the right ones for each
number.

> > I decided to change that to an operator followed by "needs_recheck" to
> > indicate a recheck is needed. "needs_recheck" is not handled as a keyword,
> > but as an IDENT which is examined at parse time.
>
> Ugh.  Make it a keyword.  As long as it can be a TokenId there is no
> downside to doing so, and doing it that way eliminates interesting
> issues about case folding etc.  (Did you know that case folding rules
> are slightly different for keywords and identifiers?)

Ok. Will do. Yes, I know the case folding is different, though I'm not
100% sure how. I assume it's something like: for identifiers, accents and
such get folded to unaccented characters?

> I still like RECHECK better than NEEDS_RECHECK, but that's a minor
> quibble.

RECHECK is one word. I'll go with it.

Take care,

Bill


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] 7.2b1 ...

2001-10-25 Thread Marc G. Fournier


... is now packaged ... mirrors will pick it up soon, but if anyone wants
to do a quick check, its in /pub/beta ...




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] 7.2b1 ...

2001-10-25 Thread Peter Eisentraut

Marc G. Fournier writes:

> ... is now packaged ... mirrors will pick it up soon, but if anyone wants
> to do a quick check, its in /pub/beta ...

What ever happened to 7.2beta1?

Sorry, but the inconsistency in naming of releases and CVS tags (if ever
there would be any) is driving me nuts.

-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://funkturm.homeip.net/~peter


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] schema support, was Package support for Postgres

2001-10-25 Thread Peter Eisentraut

Bill Studenmund writes:

> > Because SQL says so.  All objects in a schema belong to the owner of the
> > schema.  In simple setups you have one schema per user with identical
> > names.  This has well-established use patterns in other SQL RDBMS.
>
> Then implementing schemas will cause a backwards-incompatible change
> regarding who can add/own functions (and operators and ..).
>
> Mainly because when we introduce schemas, all SQL transactions will have
> to be performed in the context of *some* schema.  I think "DEFAULT" was the
> name you mentioned for when there was no schema matching the username. As
> "DEFAULT" (or whatever we call it) will be made by the PG super user (it
> will actually be added as part of initdb), then that means that only the
> super user will own functions.

If you want to own the function you should create it in your schema.  If
you want to create a function and let someone else own it, then ask
someone else for write access to their schema.  (This should be a rare
operation and I don't think SQL provides for it, so we can ignore it in
the beginning.)  If there is no schema you have write access to then you
cannot create things.  People have been dying for that kind of feature,
and schemas will enable us to have it.

Think about it this way:  In its simplest implementation (which is in fact
the Entry Level SQL92, AFAIR), a schema can only have the name of the user
that owns it.  I suspect that this is because SQL has no CREATE USER, so
CREATE SCHEMA is sort of how you become a user that can do things.  At the
same time, schemas would space off the things each user creates, and if
you want to access someone else's stuff you have to prefix it with the
user's name, as username.object, sort of like ~user/file.  The generic
"namespace" nature of schemas only comes from the fact that in higher
SQL92 levels a user can own more than one schema with different names.
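
A hedged sketch of that usage pattern (user and table names hypothetical,
syntax per SQL92 rather than any released PostgreSQL of the time):

-- "alice" creates objects in her own schema;
CREATE SCHEMA alice;
CREATE TABLE alice.accounts (id integer, balance numeric);

-- another user reaches them by prefixing the schema (= user) name,
-- much like ~user/file:
SELECT id, balance FROM alice.accounts;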

(Interesting thesis:  It might be that our users are in fact schemas
(minus the parser changes) and we can forget about the whole thing.)

Now what does this spell for the cooperative development environments you
described?  Difficult to tell, but perhaps some of these would do, none of
which are standard, AFAIK:

* schemas owned by groups/roles

* access privileges to schemas, perhaps some sort of sticky bit
  functionality

> Or we have to special case the DEFAULT schema. Which strikes me as a bad
> thing to do.

I don't necessarily think of the DEFAULT schemas as a real schema.  It
might just be there so that *some* schema context is set if you don't have
one set otherwise, but you don't necessarily have write access to it.
But it might not be necessary at all.

-- 
Peter Eisentraut   [EMAIL PROTECTED]   http://funkturm.homeip.net/~peter


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] schema support, was Package support for Postgres

2001-10-25 Thread Bill Studenmund

On Thu, 25 Oct 2001, Peter Eisentraut wrote:

> Bill Studenmund writes:
>
> > Mainly because when we introduce schemas, all SQL transactions will have
> > to be performed in the context of *some* schema.  I think "DEFAULT" was the
> > name you mentioned for when there was no schema matching the username. As
> > "DEFAULT" (or whatever we call it) will be made by the PG super user (it
> > will actually be added as part of initdb), then that means that only the
> > super user will own functions.
>
> If you want to own the function you should create it in your schema.  If
> you want to create a function and let someone else own it, then ask
> someone else for write access to their schema.  (This should be a rare
> operation and I don't think SQL provides for it, so we can ignore it in
> the beginning.)  If there is no schema you have write access to then you
> cannot create things.  People have been dying for that kind of feature,
> and schemas will enable us to have it.

I think I understand your descriptions of what you will be *able* to do
with schemas, and also that they may describe how you *should* do things
with schemas. I'm not disagreeing with you about that. But that's not the
angle I'm working from.

I guess to get at my point, I can ask this question: "Will schema support
invalidate existing PostgreSQL database designs?"

I would like the answer to be no. I would like our users to be able to
dump a pre-schema-release db, upgrade, and then restore into a
schema-aware PostgreSQL. And have their restore work.

Since the admin is restoring a db which was made before schema support,
there are no CREATE SCHEMA commands in it (or certainly not ones which do
a real schema create - right now CREATE SCHEMA is a synonym for CREATE
DATABASE). So the restore will create everything in the "DEFAULT" schema
(the schema where creates done without a CREATE SCHEMA go).

But right now, we can have different users owning things in one database.
So there will be restores out there which will have different users owning
things in the same restored-to schema, which will be "DEFAULT".

So we have to have (or rather, retain) the ability to have different users
owning things in one schema.

> Think about it this way:  In its simplest implementation (which is in fact
> the Entry Level SQL92, AFAIR), a schema can only have the name of the user
> that owns it.  I suspect that this is because SQL has no CREATE USER, so
> CREATE SCHEMA is sort of how you become a user that can do things.  At the
> same time, schemas would space off the things each user creates, and if
> you want to access someone else's stuff you have to prefix it with the
> user's name ., sort of like ~user/file.  The generic
> "namespace" nature of schemas only comes from the fact that in higher
> SQL92 levels a user can own more than one schema with different names.
>
> (Interesting thesis:  It might be that our users are in fact schemas
> (minus the parser changes) and we can forget about the whole thing.)

Hmmm... I don't think so, but hmmm..

> Now what does this spell for the cooperative development environments you
> described?  Difficult to tell, but perhaps some of these would do, none of
> which are standard, AFAIK:
>
> * schemas owned by groups/roles

I think that schemas owned by roles are part of SQL99.

> * access privileges to schemas, perhaps some sort of sticky bit
>   functionality
>
> > Or we have to special case the DEFAULT schema. Which strikes me as a bad
> > thing to do.
>
> I don't necessarily think of the DEFAULT schemas as a real schema.  It
> might just be there so that *some* schema context is set if you don't have
> one set otherwise, but you don't necessarily have write access to it.
> But it might not be necessary at all.

While if we were starting over, we might be able to (maybe should have)
design(ed) things so we don't need it, I think a "DEFAULT" schema would
help give users of the schema-aware PostgreSQL an experience similar to
what they have now.

And getting back to where this all started, I think we do need to have the
ability to have users other than the schema owner own things in the
schema, so we should keep the owner id column in the pg_package table. I'm
not against, when things are all said and done, having the default be that
only the schema owner can add things. But that's a policy decision. :-)

Take care,

Bill


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] 7.2b1 ...

2001-10-25 Thread Marc G. Fournier


CVS tags have been consistent since v7.1 ...


On Thu, 25 Oct 2001, Peter Eisentraut wrote:

> Marc G. Fournier writes:
>
> > ... is now packaged ... mirrors will pick it up soon, but if anyone wants
> > to do a quick check, its in /pub/beta ...
>
> What ever happened to 7.2beta1?
>
> Sorry, but the inconsistency in naming of releases and CVS tags (if ever
> there would be any) is driving me nuts.
>
> --
> Peter Eisentraut   [EMAIL PROTECTED]   http://funkturm.homeip.net/~peter
>
>


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] HISTORY updated

2001-10-25 Thread Bruce Momjian

I have updated the HISTORY file to be current as of today.  Marc, it may
be nice to repackage beta1 with that one file changed, but my guess is
that we will have a beta2 soon enough.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] LOCK SEQUENCE

2001-10-25 Thread Mikhail Terekhov

Hi,

In my application I use 'LOCK seq'. In 7.0.2 it worked fine, but in
7.1.2 Postgres complains that 'seq is not a table'. Is this
(disallowing locks on sequences) an intended change?
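
A minimal sketch of the behaviour change being described (sequence name
hypothetical):

CREATE SEQUENCE seq;
BEGIN;
LOCK seq;   -- accepted by 7.0.2; 7.1.2 complains that 'seq is not a table'
COMMIT;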

Thanks
Mikhail

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[HACKERS] [patch] helps fe-connect.c handle -EINTR more gracefully

2001-10-25 Thread David Ford

I'm fresh in the code, but this has solved my issues with PQconnect*
failing when interrupted by signals.  Some of it is sloppy and not to my
liking yet, but I'm still digging through to see if anything else needs
to be touched.  Comments appreciated.

Honestly, I'm a bit surprised that this issue hasn't been encountered 
before.

Summary:
* changes to connect() sections to handle errno=EINTR.  this solves 
libpq PQconnect* family problems if the connect is interrupted by a 
signal such as SIGALRM.
* not all read/recv/write/send calls have been updated

David



--- src/interfaces/libpq/fe-connect.c.orig  Wed Oct 24 17:43:52 2001
+++ src/interfaces/libpq/fe-connect.c   Wed Oct 24 17:43:54 2001
@@ -912,21 +912,35 @@
 * Thus, we have to make arrangements for all eventualities.
 * --
 */
+   
+   retry_socket:
if (connect(conn->sock, &conn->raddr.sa, conn->raddr_len) < 0)
{
-   if (SOCK_ERRNO == EINPROGRESS || SOCK_ERRNO == EWOULDBLOCK || 
SOCK_ERRNO == 0)
-   {
-   /*
-* This is fine - we're in non-blocking mode, and the
-* connection is in progress.
-*/
-   conn->status = CONNECTION_STARTED;
-   }
-   else
-   {
-   /* Something's gone wrong */
-   connectFailureMessage(conn, SOCK_ERRNO);
-   goto connect_errReturn;
+   switch (SOCK_ERRNO) {
+   case EINTR:
+   /*
+* interrupted by signal, keep trying
+*/
+goto retry_socket;
+break;
+
+   case 0:
+   case EINPROGRESS:
+   case EWOULDBLOCK:
+   /*
+* This is fine - we're in non-blocking mode, and the
+* connection is in progress.
+*/
+   conn->status = CONNECTION_STARTED;
+   break;
+
+   default:
+   /*
+* Something's gone wrong
+*/
+   connectFailureMessage(conn, SOCK_ERRNO);
+   goto connect_errReturn;
+   break;
}
}
else
@@ -2132,8 +2146,13 @@
   "PQrequestCancel() -- socket() failed: ");
goto cancel_errReturn;
}
-   if (connect(tmpsock, &conn->raddr.sa, conn->raddr_len) < 0)
+   while (connect(tmpsock, &conn->raddr.sa, conn->raddr_len) < 0)
{
+   /*
+* interrupted by a signal
+*/
+   if(errno==EINTR)
+   continue;
strcpy(conn->errorMessage.data,
   "PQrequestCancel() -- connect() failed: ");
goto cancel_errReturn;



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] LOCK SEQUENCE

2001-10-25 Thread Tom Lane

Mikhail Terekhov <[EMAIL PROTECTED]> writes:
> In my application I use 'LOCK seq'. In 7.0.2 it worked fine but in
> 7.1.2 Postgres complains that 'seq is not a table'. Is this
> (disabling to lock a sequences) an intended change?

Hmm, it wasn't thought about too much, but why in the world would you
want to lock a sequence?  Seems like that destroys the point of using
one.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] [patch] helps fe-connect.c handle -EINTR more gracefully

2001-10-25 Thread Brent Verner

On 25 Oct 2001 at 17:08 (-0400), David Ford wrote:
| I'm fresh in the code, but this has solved my issues with PQconnect* 
| failing when interrupted by signals.  Some of it is sloppy and not to my 
| liking yet, but I'm still digging through to see if anything else needs 
| touched.  Comments appreciated.

Disclaimer: I may be wrong as hell ;-), but...


I'm not sure this is correct.  I've tried to /make/ a SIGALRM cause
connect() to fail with errno==EINTR, but I can't produce this condition.
I suspect you have another signal being raised that is causing your symptom.
FTR, the textbook definition[1] of EINTR error for connect is:

  The attempt to establish a connection was interrupted by delivery 
  of a signal that was caught; the connection will be established 
  asynchronously.

Please check the attached prog to see if it is representative of your
code relating to the connect error you're seeing.  If it is, please
run it and see if you can get it to cause the EINTR error from connect.
If you can't I'm more certain that you have a problem elsewhere.

cheers.
  brent

1. http://www.opengroup.org/onlinepubs/7908799/xns/connect.html

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman


#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static int AT = 5; /* usecs for _ualarm to */

int nonblocking(int fd){
int fb = fcntl(fd, F_GETFL, 0);
if (fb == -1)
return -1;
return fcntl(fd, F_SETFL, fb|O_NONBLOCK);
}

void sigalrm(int arg){
  ualarm(AT,0);
}

int try_connect(){
  int s,c;
  struct sockaddr_in serv;
  memset(&serv,0,sizeof(serv));
  s = socket(PF_INET,SOCK_STREAM,6);
  nonblocking(s);
  serv.sin_family = AF_INET;
  serv.sin_port = htons(80);
  inet_aton("127.0.0.1",(struct in_addr*)&serv.sin_addr);
  c = connect(s,(struct sockaddr*)&serv,sizeof(serv));
  if( c < 0 && errno == EINTR )
perror("connect (EINTR):"); // this is 
  return c;
}

int main( int argc, char** argv ){
  signal(SIGALRM,sigalrm);
  ualarm(AT,0);
  while(1){
try_connect();
if( errno == EBADF ){
  break;
}
sleep(100); // this sleep never really sleeps 100 secs.
  }
  puts("ran out of file descriptors as expected.");
  return 0;
}




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] [patch] helps fe-connect.c handle -EINTR more gracefully

2001-10-25 Thread Tom Lane

Brent Verner <[EMAIL PROTECTED]> writes:
> I'm not sure this is correct.  I've tried to /make/ a SIGALRM cause
> connect to errno==EINTR, but I can't cause this condition.

It wouldn't surprise me in the least if this behavior is
platform-dependent.  It may well be that David's kernel will allow
connect() to be interrupted by SIGALRM while yours won't.  (Which
reminds me that neither of you specified what platforms you were
testing on.  For shame.)  Or maybe the difference depends on whether
you are trying to connect to a local or remote server.

Unless someone can point out a situation where retrying connect()
after EINTR is actively bad, my inclination is to accept the patch.
It seems like a net improvement in robustness to me, with no evident
downside other than a line or two more code.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] [patch] helps fe-connect.c handle -EINTR more gracefully

2001-10-25 Thread David Ford

Many signals may be the cause of -EINTR.  It depends on the signal as to
how it's normally handled.  SIGALRM is the most common, since it is a
timer event.

Generate a timer that expires as fast as possible (though not so fast that
it prevents code from executing), and you should see calls left and right
returning with -EINTR.

I'm very much aware of why SIGALRM is happening, I generate it and I 
catch it.  As per my original message on this thread, my program does 
maintenance on a scheduled basis.  The period of that maintenance is 
many times per second.

Sooo... :)

Now let's get on with the story.

Libpq doesn't deal with system calls being interrupted in the slightest.
None of the read/write or socket calls handle any errors.  Even benign
returns, i.e. EINTR, are treated as fatal errors and returned.  Not to
malign, but there is no reason not to continue on and handle EINTR.

David
p.s. you can't use the sleep() or alarm() functions and have a timer event
as well.  The only POSIX-compliant function that doesn't trample signal
timer events is nanosleep().

Brent Verner wrote:

On 25 Oct 2001 at 17:08 (-0400), David Ford wrote:
| I'm fresh in the code, but this has solved my issues with PQconnect* 
| failing when interrupted by signals.  Some of it is sloppy and not to my 
| liking yet, but I'm still digging through to see if anything else needs 
| touched.  Comments appreciated.

Disclaimer: I may be wrong as hell ;-), but...


I'm not sure this is correct.  I've tried to /make/ a SIGALRM cause
connect to errno==EINTR, but I can't cause this condition.  I suspect
you have another signal being raised that is causing your symptom.
FTR, the textbook definition[1] of EINTR error for connect is:

  The attempt to establish a connection was interrupted by delivery 
  of a signal that was caught; the connection will be established 
  asynchronously.

Please check the attached prog to see if it is representative of your
code relating to the connect error you're seeing.  If it is, please
run it and see if you can get it to cause the EINTR error from connect.
If you can't I'm more certain that you have a problem elsewhere.

cheers.
  brent

1. http://www.opengroup.org/onlinepubs/7908799/xns/connect.html



[snipped]


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] [patch] helps fe-connect.c handle -EINTR more gracefully

2001-10-25 Thread David Ford

>
>
>
>It wouldn't surprise me in the least if this behavior is
>platform-dependent.  It may well be that David's kernel will allow
>connect() to be interrupted by SIGALRM while yours won't.  (Which
>reminds me that neither of you specified what platforms you were
>testing on.  For shame.)  Or maybe the difference depends on whether
>you are trying to connect to a local or remote server.
>
>Unless someone can point out a situation where retrying connect()
>after EINTR is actively bad, my inclination is to accept the patch.
>It seems like a net improvement in robustness to me, with no evident
>downside other than a line or two more code.
>

I didn't specify my OS because this sort of a thing is standard *nix etc 
design (well, m$ excluded of course).

I use Linux.  Every *nix that I know of can have system calls be 
interrupted.

Please wait a day before applying the patch, I want to make it a bit 
more clean/readable and make sure I covered everything in fe-connect.c, 
I found that the SSL functions are traversed even if ssl is turned off 
in the config file and I have to handle that too.

David



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] [patch] helps fe-connect.c handle -EINTR more gracefully

2001-10-25 Thread Brent Verner

On 26 Oct 2001 at 00:05 (-0400), Tom Lane wrote:
| Brent Verner <[EMAIL PROTECTED]> writes:
| > I'm not sure this is correct.  I've tried to /make/ a SIGALRM cause
| > connect to errno==EINTR, but I can't cause this condition.
| 
| It wouldn't surprise me in the least if this behavior is
| platform-dependent.  It may well be that David's kernel will allow
| connect() to be interrupted by SIGALRM while yours won't.  (Which
| reminds me that neither of you specified what platforms you were
| testing on.  For shame.)  Or maybe the difference depends on whether
| you are trying to connect to a local or remote server.

sorry, I tested the attached prog on linux(2.2/2.4) and freebsd(4.4R)
to both local and remote(slow) servers.

| Unless someone can point out a situation where retrying connect()
| after EINTR is actively bad, my inclination is to accept the patch.
| It seems like a net improvement in robustness to me, with no evident
| downside other than a line or two more code.

  I've found numerous examples where connect() is retried after EINTR;
in fact it appears to be fairly common.

cheers,
  brent

-- 
"Develop your talent, man, and leave the world something. Records are 
really gifts from people. To think that an artist would love you enough
to share his music with anyone is a beautiful thing."  -- Duane Allman

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] timeout for "idle in transaction"

2001-10-25 Thread Tom Lane

Hannu Krosing <[EMAIL PROTECTED]> writes:
> Is there a way to automatically ROLLBACK transactions that are 
> in "idle in transaction" for too long ?

Nope, we don't have anything for that.  Not clear to me that it's
appropriate as a server-side function anyway.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] Cache query plan..

2001-10-25 Thread Christian Meunier

I was wondering whether Jan's work on SPI portal creation of prepared/saved
plans has something to do with caching query plans?

Thanks in advance.



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] pgindent run

2001-10-25 Thread Tom Lane

Bruce Momjian <[EMAIL PROTECTED]> writes:
> I have run pgindent on the C files and run pgjindent on the jdbc files
> as requested by the jdbc list.  You can package up beta now.  I will
> update the HISTORY file tomorrow with recent changes.

Please hold on that packaging until I add the int2<->int8 cast functions
that Philip pointed out pg_dump needs.  Will have it done in an hour or
two.

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] storing binary data

2001-10-25 Thread Lincoln Yeoh

>
>I'll take a shot at improving the documentation for bytea. I'm hoping 
>documentation patches are accepted during beta though ;-)
>
>Also, FWIW, 7.2 includes bytea support for LIKE, NOT LIKE, LIKE ESCAPE, 
>||, trim(), substring(), position(), length(), indexing, and various 
>comparators.
>

Cool!

Would it be practical to use substring for retrieving chunks of binary data
in manageable sizes? Or would the overheads be too high?
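
For instance, a hedged sketch of what chunked retrieval would look like
(table and column names hypothetical; whether the per-call overhead is
acceptable is exactly the open question):

SELECT substring(data FROM 1 FOR 8192)    FROM images WHERE id = 42;  -- first 8 kB
SELECT substring(data FROM 8193 FOR 8192) FROM images WHERE id = 42;  -- next 8 kB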

Cheerio,
Link.


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Index of a table is not used (in any case)

2001-10-25 Thread Zeugswetter Andreas SB SD


> Of course the question "did you vacuum" (better, did you analyze) is
> annoying, just as the requirement to analyze is annoying in the first
> place, but unless someone designs a better query planner it 
> will have to do.  The reason why we always ask that question first is 
> that people invariantly have not analyzed.

I think it is also not always useful to ANALYZE. There are applications
that get optimal plans with only the rudimentary statistics VACUUM
creates, and even some that get optimal plans with only the default
statistics in place.

Imho one of the biggest sources for problems is people creating new
indexes on populated tables when the rest of the db/table has badly
outdated statistics or even only default statistics in place.
In this situation the optimizer is badly misguided, because it now
sees completely inconsistent statistics to work on.
(e.g. old indexes on that table may seem way too cheap compared 
to table scan) 

I would thus propose a more differentiated approach to writing
the statistics gathered during "create index" to the system tables.

Something like:
if (default stats in place)
    write defaults
else if (this is the only index)
    write gathered statistics
else
    write only normalized statistics for the index
    (e.g. index.reltuples = table.reltuples;
          index.relpages  = index.gathered.relpages *
                            table.relpages / table.gathered.relpages)

Andreas

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [GENERAL] Using an SMP machine to make multiple indices on the same

2001-10-25 Thread Tom Lane

Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Also, I'm now a little worried about whether concurrent index creations
>> will actually work.  Both CREATE INDEX operations will try to update
>> the pg_class tuple to set relhasindex true.

> Yes but there's a big difference. It's at the end of the creation
> not at the beginning. Also note that UpdateStats() updates pg_class
> tuple in case of B-trees etc before updating relhasindex. I'm
> suspicious whether we should update Stats under transactional control.

It would probably be good to fix things so that there's only one update
done for both stats and relhasindex, instead of two.  But we *will* get
failures in simple_heap_update if we continue to use that routine.
The window for failure may be relatively short but it's real.  It's not
necessarily short, either; consider multiple CREATE INDEX commands
executed in a transaction block.

>> I still don't see any value in LockClassinfoForUpdate, however.

> ISTM that relying completely on the lock for the corresponding
> relation is a little misplaced.

Surely we *must* be able to rely on the relation lock.  For example:
how does SELECT FOR UPDATE of the relation's pg_class tuple prevent
writers from adding tuples to the relation?  It does not and cannot.
Only getting the appropriate relation lock provides a semantically
correct guarantee that the relation isn't changing underneath us.
Locking the pg_class tuple only locks the tuple itself, it has no wider
scope of meaning.
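
A hedged illustration of that point (table name hypothetical):

-- Session 1: lock only the pg_class row for "mytable"
BEGIN;
SELECT relname FROM pg_class WHERE relname = 'mytable' FOR UPDATE;

-- Session 2: not blocked, because the tuple lock says nothing about the
-- relation's contents; only a relation-level lock taken by session 1
-- (e.g. LOCK TABLE mytable) would make this wait.
INSERT INTO mytable VALUES (1);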

> For example ALTER TABLE OWNER
> doesn't acquire any lock on the table but it seems natural to me.

Seems like a bug to me.  Consider this scenario:

Backend 1                               Backend 2

begin;

lock table1;

select from table1;  -- works

                                        alter table1 set owner ...

select from table1;  -- fails, no permissions

That should not happen.  It wouldn't happen if ALTER TABLE OWNER
were acquiring an appropriate lock on the relation.

> UPDATE pg_class set .. doesn't acquire any lock on the corresponding
> relations of the target pg_class tuples but it seems natural to me,

While we allow knowledgeable users to poke at the system catalogs
directly, I feel that that is very much a "let the user beware"
facility.  I have no urge to try to guarantee cross-backend
transactional safety for changes executed that way.  But CREATE INDEX,
ALTER TABLE, and so forth should have safe concurrent behavior.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] schema support, was Package support for Postgres

2001-10-25 Thread Gunnar Rønning

* Bill Studenmund <[EMAIL PROTECTED]> wrote:

| I would like the answer to be no. I would like our users to be able to
| dump a pre-schema-release db, upgrade, and then restore into a
| schema-aware PostgreSQL. And have their restore work.


Important point. Also having a standard is fine, but by limiting ourselves
to it we are ignoring issues that might be very useful. Draw the line. 

-- 
Gunnar Rønning - [EMAIL PROTECTED]
Senior Consultant, Polygnosis AS, http://www.polygnosis.com/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] java virtual machine

2001-10-25 Thread Gunnar Rønning

* tweekie <[EMAIL PROTECTED]> wrote:
|
| I asked this question a while back but got no response - is there any way of 
| creating a Java stored procedure in a postgres database ? I can see that 
| there is a built-in PL/sql type of environment and a python one but it would  
| be nice if I could migrate Java stored procedures in an Oracle database into  
| postgres.
| 
| Any comments?


It would rock ;-) A Hungarian guy just sent a mail indicating that he
had a first prototype version of something with Kaffe up and running.
But I believe there are a lot of issues to be solved, especially
threading issues...

-- 
Gunnar Rønning - [EMAIL PROTECTED]
Senior Consultant, Polygnosis AS, http://www.polygnosis.com/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] [GENERAL] Using an SMP machine to make multiple indices on

2001-10-25 Thread Hiroshi Inoue

Tom Lane wrote:
> 
> Hiroshi Inoue <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> Also, I'm now a little worried about whether concurrent index creations
> >> will actually work.  Both CREATE INDEX operations will try to update
> >> the pg_class tuple to set relhasindex true.
> 
> > Yes but there's a big difference. It's at the end of the creation
> > not at the beginning. Also note that UpdateStats() updates pg_class
> > tuple in case of B-trees etc before updating relhasindex. I'm
> > suspicious whether we should update Stats under transactional control.
> 
> It would probably be good to fix things so that there's only one update
> done for both stats and relhasindex, instead of two.  

I don't fully agree with you on this point. It's pretty painful
to update relatively unrelated items together in some cases.
UpdateStats() updated both reltuples and relhasindex before 7.0;
I am the one who changed UpdateStats() to not update relhasindex when
I implemented the REINDEX command. REINDEX has to set relhasindex to
true after all the indexes of a table have been recreated.

> But we *will* get
> failures in simple_heap_update if we continue to use that routine.
> The window for failure may be relatively short but it's real.  It's not
> necessarily short, either; consider multiple CREATE INDEX commands
> executed in a transaction block.
> 
> >> I still don't see any value in LockClassinfoForUpdate, however.
> 
> > ISTM that relying completely on the lock for the corresponding
> > relation is a little misplaced.
> 
> Surely we *must* be able to rely on the relation lock.  For example:
> how does SELECT FOR UPDATE of the relation's pg_class tuple prevent
> writers from adding tuples to the relation? It does not and cannot.

I've never said that the relation lock is unnecessary.
The stuff around relhasindex is (was) an exception that keeps
a (possibly) long-term lock on the pg_class tuple apart from
the relevant relation lock.
What I've mainly intended is to guard our (at least my) code.
If our (my) code acquires an AccessExclusiveLock on a relation
and would update the corresponding pg_class tuple, I'd like to
get the locked tuple, not the unlocked one, because I couldn't
change unlocked tuples without anxiety. That's almost all.
In most cases the AccessExclusiveLock on the relation would
already block the other backends that must be blocked, as you
say, so the lock on the pg_class tuple would cause few
additional lock conflicts. What are the disadvantages of getting
locked pg_class tuples?

> Only getting the appropriate relation lock provides a semantically
> correct guarantee that the relation isn't changing underneath us.
> Locking the pg_class tuple only locks the tuple itself, it has no wider
> scope of meaning.
> 
> > For example ALTER TABLE OWNER
> > doesn't acquire any lock on the table but it seems natural to me.
> 
> Seems like a bug to me.  Consider this scenario:
> 
> Backend 1                               Backend 2
>
> begin;
>
> lock table1;
>
> select from table1;  -- works
>
>                                         alter table1 set owner ...
>
> select from table1;  -- fails, no permissions
> 
> That should not happen.  It wouldn't happen if ALTER TABLE OWNER
> were acquiring an appropriate lock on the relation.

Hmm ok agreed. One of my intentions is to guard our(my) code
from such careless(?) applications.

> 
> > UPDATE pg_class set .. doesn't acquire any lock on the corresponding
> > relations of the target pg_class tuples but it seems natural to me,
> 
> While we allow knowledgeable users to poke at the system catalogs
> directly, I feel that that is very much a "let the user beware"
> facility.

Me too. Again, what I intend is to guard our (my) code from
such knowledgeable users, not to guarantee them an expected(?)
result.

regards,
Hiroshi Inoue

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html


Re: [HACKERS] pgindent run

2001-10-25 Thread Marc G. Fournier

On Thu, 25 Oct 2001, Tom Lane wrote:

> "Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> > If we aren't putting that Packaging stuff into v7.2, can we get it into
> > beta as contrib also?  Before I do the first packaging of the beta?
>
> Uh ... what?
>
> I just meant to wait a little bit on wrapping the tarball while I make
> this last(?) catalog update.  I don't know of anything that should go
> into contrib.
>
> I saw you updated the version tag in configure, but aren't there three
> or four other places that need work to brand the version number?

Not that I've ever changed ... I know that Bruce does a bunch of docs
related stuff, like in HISTORY and whatnot ...


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] pgindent run

2001-10-25 Thread Tom Lane

"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> If we aren't putting that Packaging stuff into v7.2, can we get it into
> beta as contrib also?  Before I do the first packaging of the beta?

Uh ... what?

I just meant to wait a little bit on wrapping the tarball while I make
this last(?) catalog update.  I don't know of anything that should go
into contrib.

I saw you updated the version tag in configure, but aren't there three
or four other places that need work to brand the version number?

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Proposed new create command, CREATE OPERATOR CLASS

2001-10-25 Thread Bill Studenmund

On Thu, 25 Oct 2001, Teodor Sigaev wrote:

> Make me right if I mistake.
>
> When we was developing operator @@, I saw that postgres don't use index in
> select if operation has not commutator. But operator with different types in
> argument can't be commutator with itself. So I maked operator ~~ only for
> postgres can use index access for operator @@. There is no any difficulties to
> adding index support for operator ~~. The same things is with contrib/tsearch
> module.
>
> But I think that there is not any other necessity in presence ~~.

So only one of the two needs to go into pg_amop, correct? Then everything
else is fine.

Take care,

Bill


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [HACKERS] pgindent run

2001-10-25 Thread Bruce Momjian

> On Thu, 25 Oct 2001, Tom Lane wrote:
> 
> > "Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> > > If we aren't putting that Packaging stuff into v7.2, can we get it into
> > > beta as contrib also?  Before I do the first packaging of the beta?
> >
> > Uh ... what?
> >
> > I just meant to wait a little bit on wrapping the tarball while I make
> > this last(?) catalog update.  I don't know of anything that should go
> > into contrib.
> >
> > I saw you updated the version tag in configure, but aren't there three
> > or four other places that need work to brand the version number?
> 
> Not that I've ever changed ... I know that Bruce does a bunch of docs
> related stuff, like in HISTORY and whatnot ...

I noticed that SELECT version() shows:

test=> select  version();
version 

 PostgreSQL 7.2devel on i386-pc-bsdi4.2, compiled by GCC 2.95.2
   ^
(1 row) 

I see this in configure.in and am applying a patch to make it 7.2.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] pgindent run

2001-10-25 Thread Bruce Momjian

> On Thu, 25 Oct 2001, Tom Lane wrote:
> 
> > "Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> > > If we aren't putting that Packaging stuff into v7.2, can we get it into
> > > beta as contrib also?  Before I do the first packaging of the beta?
> >
> > Uh ... what?
> >
> > I just meant to wait a little bit on wrapping the tarball while I make
> > this last(?) catalog update.  I don't know of anything that should go
> > into contrib.
> >
> > I saw you updated the version tag in configure, but aren't there three
> > or four other places that need work to brand the version number?
> 
> Not that I've ever changed ... I know that Bruce does a bunch of docs
> related stuff, like in HISTORY and whatnot ...

Looks like Marc already got configure.in:

VERSION='7.2b1'

I will work on HISTORY now but you don't have to wait for me for beta1.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[HACKERS] java virtual machine

2001-10-25 Thread tweekie

Hello all

I asked this question a while back but got no response - is there any way of
creating a Java stored procedure in a postgres database? I can see that
there is a built-in PL/SQL-type environment and a Python one, but it would
be nice if I could migrate Java stored procedures from an Oracle database
into postgres.

Any comments?

Chris


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Index of a table is not used (in any case)

2001-10-25 Thread Tom Lane

"Zeugswetter Andreas SB SD" <[EMAIL PROTECTED]> writes:
> Imho one of the biggest sources for problems is people creating new
> indexes on populated tables when the rest of the db/table has badly
> outdated statistics or even only default statistics in place.
> In this situation the optimizer is badly misguided, because it now
> sees completely inconsistent statistics to work on.
> (e.g. old indexes on that table may seem way too cheap compared 
> to table scan) 

I don't think any of this is correct.  We don't have per-index
statistics.  The only stats updated by CREATE INDEX are the same ones
updated by plain VACUUM, viz the number-of-tuples and number-of-pages
counts in pg_class.  I believe it's reasonable to update those stats
more often than the pg_statistic stats (in fact, if we could keep them
constantly up-to-date at a reasonable cost, we'd do so).  The
pg_statistic stats are designed as much as possible to be independent
of the absolute number of rows in the table, so that it's okay if they
are out of sync with the pg_class stats.
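
A quick way to look at the two kinds of statistics side by side, assuming
the 7.2-style pg_stats view:

    -- reltuples/relpages: refreshed by VACUUM and by CREATE INDEX
    SELECT relname, reltuples, relpages FROM pg_class WHERE relname = 'foo';
    -- per-column distribution stats: only filled in by ANALYZE / VACUUM ANALYZE
    SELECT attname, n_distinct, most_common_vals FROM pg_stats WHERE tablename = 'foo';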

The major reason why "you vacuumed but you never analyzed" is such a
killer is that in the absence of any pg_statistic data, the default
selectivity estimates are such that you may get either an index or seq
scan depending on how big the table is.  The cost estimates are
nonlinear (correctly so, IMHO, though I wouldn't necessarily defend the
exact shape of the curve) and ye olde default 0.01 will give you an
indexscan for a small table but not for a big one.  In 7.2 I have
reduced the default selectivity estimate to 0.005, for a number of
reasons but mostly to get it out of the range where the decision will
flip-flop.  Observe:

test71=# create table foo (f1 int);
CREATE
test71=# create index fooi on foo(f1);
CREATE
test71=# explain select * from foo where f1 = 42;
NOTICE:  QUERY PLAN:

Index Scan using fooi on foo  (cost=0.00..8.14 rows=10 width=4)

EXPLAIN
test71=# select reltuples,relpages from pg_class where relname = 'foo';
 reltuples | relpages
---+--
  1000 |   10
(1 row)

test71=# update pg_class set reltuples = 100000, relpages = 1000 where relname = 'foo';
UPDATE 1
test71=# explain select * from foo where f1 = 42;
NOTICE:  QUERY PLAN:

Index Scan using fooi on foo  (cost=0.00..1399.04 rows=1000 width=4)

EXPLAIN
test71=# update pg_class set reltuples = 1000000, relpages = 10000 where relname = 'foo';
UPDATE 1
test71=# explain select * from foo where f1 = 42;
NOTICE:  QUERY PLAN:

Seq Scan on foo  (cost=0.00..22500.00 rows=10000 width=4)

EXPLAIN
test71=#

In current sources you keep getting an indexscan as you increase the
number of tuples...
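
For anyone wanting to follow the arithmetic, here is roughly where those
numbers come from, assuming the stock cost parameters (cpu_tuple_cost =
0.01, cpu_operator_cost = 0.0025, 1.0 per sequential page) -- a
back-of-the-envelope sketch, not the planner's exact formula:

    -- Small table: reltuples = 1000, relpages = 10, default selectivity 0.01
    --   estimated rows  = 0.01 * 1000                  = 10
    --   seqscan cost   ~= 10 + (0.01 + 0.0025) * 1000  = 22.5
    --   so the 8.14 indexscan above wins.
    -- Large table: the same arithmetic gives the 22500.00 seqscan figure:
    --   10000 + (0.01 + 0.0025) * 1000000              = 22500
    --   while the estimated indexscan cost grows faster still, so the
    --   plan flips to a seqscan.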

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] pgindent run

2001-10-25 Thread Marc G. Fournier


D'oh ...

Okay, will hold off on packaging, but have already tag'd it ...

If we aren't putting that Packaging stuff into v7.2, can we get it into
beta as contrib also?  Before I do the first packaging of the beta?

On Thu, 25 Oct 2001, Tom Lane wrote:

> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > I have run pgindent on the C files and run pgjindent on the jdbc files
> > as requested by the jdbc list.  You can package up beta now.  I will
> > update the HISTORY file tomorrow with recent changes.
>
> Please hold on that packaging until I add the int2<->int8 cast functions
> that Philip pointed out pg_dump needs.  Will have it done in an hour or
> two.
>
>   regards, tom lane
>


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] timeout for "idle in transaction"

2001-10-25 Thread Hannu Krosing

Tom Lane wrote:
> 
> Hannu Krosing <[EMAIL PROTECTED]> writes:
> > Is there a way to automatically ROLLBACK transactions that are
> > in "idle in transaction" for too long ?
> 
> Nope, we don't have anything for that.  Not clear to me that it's
> appropriate as a server-side function anyway.

This can't be done from the client side, and we do have other types of
deadlock detection on the server side, so this seems quite appropriate
from where I stand.

I guess it would be quite nice to have as a connection-level setting,
so that things that benefit from it can set it to some reasonable
value, while others that want to behave unsocially can do so as well ;)

The default could be 1-3 seconds of idle time in transaction for typical
client-server and web apps, while command-line clients (like psql) could
automatically set it to something more lenient.
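
Purely as an illustration of what that could look like -- the parameter
name below is invented, nothing like it exists yet:

    -- Hypothetical syntax only; 'idle_in_transaction_timeout' is not a real
    -- setting, just a sketch of the proposed per-connection knob.
    SET idle_in_transaction_timeout = 3;   -- web/app-server pools: roll back after 3s idle
    SET idle_in_transaction_timeout = 0;   -- interactive clients like psql: 0 = never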

-
Hannu

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] [SQL] Index of a table is not used (in any case)

2001-10-25 Thread Reiner Dassing

Hello Tom!

Tom Lane wrote:
> 
> Reiner Dassing <[EMAIL PROTECTED]> writes:
> > explain select * from wetter order by epoche desc;
> > NOTICE:  QUERY PLAN:
> 
> > Index Scan Backward using wetter_epoche_idx on wetter
> > (cost=0.00..3216018.59 rows=2034 width=16)
> 
> > explain select * from wetter where epoche between '1970-01-01' and
> > '1980-01-01' order by epoche asc;
> > NOTICE:  QUERY PLAN:
> 
> > Sort  (cost=480705.74..480705.74 rows=203400 width=16)
> >   ->  Seq Scan on wetter  (cost=0.00..454852.00 rows=203400 width=16)
> 
> It's hard to believe that you've done a VACUUM ANALYZE on this table,
> since you are getting a selectivity estimate of exactly 0.01, which
> just happens to be the default selectivity estimate for range queries.
> How many rows are there really in this date range?
> 
Well, I did not claim that I made a VACUUM ANALYZE; I just set up a new
table for testing purposes, doing just INSERTs.

After VACUUM ANALYSE the results look like:
explain select * from wetter where epoche between '1970-01-01' and
test_wetter-# '1980-01-01' order by epoche asc;
NOTICE:  QUERY PLAN:

Index Scan using wetter_epoche_idx on wetter  (cost=0.00..3313780.74
rows=20319660 width=16)

EXPLAIN

Now, the INDEX Scan is used and therefore, the query is very fast, as
expected.
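
For reference, the whole test boils down to something like the following
(the enable_seqscan switch is the comparison trick Tom suggests below):

    VACUUM ANALYZE wetter;          -- refresh the statistics the planner relies on
    EXPLAIN SELECT * FROM wetter
     WHERE epoche BETWEEN '1970-01-01' AND '1980-01-01' ORDER BY epoche ASC;
    SET enable_seqscan TO off;      -- temporarily forbid seqscans, for timing comparison
    EXPLAIN SELECT * FROM wetter
     WHERE epoche BETWEEN '1970-01-01' AND '1980-01-01' ORDER BY epoche ASC;
    SET enable_seqscan TO on;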

For me, as a user not involved in all the internals of
PostgreSQL, the question was

"Why is this SELECT so slow?" (a question that is asked many times on
these mailing lists)

Now, I would like to say thank you! You have explained to me, and
hopefully to many more users, what is going on behind the scenes.

> Anyway, the reason the planner is picking a seqscan+sort is that it
> thinks that will be faster than an indexscan.  It's not necessarily
> wrong.  Have you compared the explain output and actual timings both
> ways?  (Use "set enable_seqscan to off" to force it to pick an indexscan
> for testing purposes.)
> 
> regards, tom lane

--
Mit freundlichen Gruessen / With best regards
   Reiner Dassing

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])