Re: [GENERAL] Does SET STATISTICS lock the table?

2004-02-01 Thread Tom Lane
Joseph Shraibman <[EMAIL PROTECTED]> writes:
> Does ALTER TABLE <table> ALTER <column> SET STATISTICS 100; lock the 
> table?  I just tried to do that while a query was running, and the ALTER 
> is hanging.

Any ALTER TABLE will lock the table.
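
A minimal sketch of what that looks like, using a hypothetical table "foo"
and column "bar" (the pg_locks query assumes a reasonably recent server):

-- session 1: an open transaction holds a lock on the table
BEGIN;
SELECT count(*) FROM foo;

-- session 2: blocks until session 1 commits or rolls back, because
-- ALTER TABLE needs an exclusive lock on "foo"
ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 100;

-- session 3: shows the granted lock and the ungranted one the ALTER waits for
SELECT l.pid, l.mode, l.granted
  FROM pg_locks l, pg_class c
 WHERE l.relation = c.oid AND c.relname = 'foo';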

regards, tom lane


Re: [GENERAL] Search across multiple sources

2004-02-01 Thread Oleg Bartunov
Why not use schemas and a single search table that contains the indices
from the different schemas (use a trigger to keep the search table up to
date)?
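
A rough sketch of that layout, with made-up names ("news.stories",
"search_index", and so on), assuming the tsearch2 module and plpgsql are
installed in the database:

CREATE SCHEMA news;
CREATE SCHEMA usercontent;

-- one shared table holding the searchable text from every schema
CREATE TABLE public.search_index (
    src      text    NOT NULL,  -- which source schema the row came from
    src_id   integer NOT NULL,  -- primary key of the source row
    body     text,
    body_idx tsvector           -- tsearch2 vector, maintained by triggers
);
CREATE INDEX search_body_idx ON public.search_index USING gist (body_idx);

-- keep the shared table in sync with one of the source tables
CREATE FUNCTION news_to_search() RETURNS trigger AS '
BEGIN
    INSERT INTO public.search_index (src, src_id, body, body_idx)
    VALUES (''news'', NEW.id, NEW.body, to_tsvector(NEW.body));
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER news_search_trg AFTER INSERT ON news.stories
    FOR EACH ROW EXECUTE PROCEDURE news_to_search();

A single query against public.search_index then searches both sources at
once, and ranking is computed over one table instead of being merged by hand.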

On Sun, 1 Feb 2004, Merrall, Graeme wrote:

>
> I don't think there's an easy way to do this but I thought I better ask just in 
> case.  I'm trying to come up with a way to search across a number of databases 
> without resorting to lots of horrible scripts. In one database I have a lot of news 
> stories from our news provider while in another database I have a lot of user 
> entered content. Ideally I'd like to search across both databases via a single 
> web-based search form.
>
> The obvious way is to create a tsearch index/table in both databases and then to 
> connect to each one in turn and to merge the results together but that doesn't seem 
> like the best solution and potentially there could be issues with ranking and so on.
>
> Is it possible to create a single search database which can store search data from a 
> number of different databases. This is all on the same server of course.
>
> Another option I looked at was to use an external search tool like mnogosearch 
> (http://www.mnogosearch.ru/doc/msearch-extended-indexing.html#htdb) although since 
> I've used tsearch elsewhere it would be nice to use it here as well.
>
> Cheers,
>  Graeme
>

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83


Re: [GENERAL] Search across multiple sources

2004-02-01 Thread Merrall, Graeme

> On Sun, 1 Feb 2004, Merrall, Graeme wrote:
>
> > I don't think there's an easy way to do this but I thought I better
> > ask just in case.  I'm trying to come up with a way to search across
> > a number of databases without resorting to lots of horrible scripts.
> > In one database I have a lot of news stories from our news provider
> > while in another database I have a lot of user entered content.
> > Ideally I'd like to search across both databases via a single
> > web-based search form.
> >
> > The obvious way is to create a tsearch index/table in both databases
> > and then to connect to each one in turn and to merge the results
> > together but that doesn't seem like the best solution and potentially
> > there could be issues with ranking and so on.
> >
> > Is it possible to create a single search database which can store
> > search data from a number of different databases. This is all on the
> > same server of course.
> >
> > Another option I looked at was to use an external search tool like
> > mnogosearch
> > (http://www.mnogosearch.ru/doc/msearch-extended-indexing.html#htdb)
> > although since I've used tsearch elsewhere it would be nice to use it
> > here as well.
>
> Why not use schemas and a single search table that contains the indices
> from the different schemas (use a trigger to keep the search table up to
> date)?

Thanks for the reply, Oleg. I looked at using schemas and I'm not entirely
sure I want to go the schema route at this stage, although if that's my only
option I'm more than happy to consider it.  I'm giving it some serious
thought, though.

Cheers,
 Graeme


[GENERAL] Functions in CHECK constraint not getting dumped before tables.

2004-02-01 Thread Rajesh Kumar Mallah
Greetings!

I have found that pg_dump does not dump functions referred to in CHECK
constraint definitions before dumping the table definitions.  As a result
the tables do not get restored, because the functions they depend on are
missing.
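
A minimal example of the pattern that runs into this (names are made up):

CREATE FUNCTION is_valid_code(text) RETURNS boolean AS '
    SELECT $1 ~ ''^[A-Z][0-9]+$'';
' LANGUAGE sql IMMUTABLE;

CREATE TABLE items (
    code text CHECK (is_valid_code(code))
);

-- in the dump the CREATE TABLE comes out ahead of the CREATE FUNCTION,
-- so the table cannot be created because is_valid_code() is not defined yet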

Is this something that will be worked on in future?

regds
mallah.


Re: [GENERAL] PostgreSQL 7.4.1 and pgdb.py

2004-02-01 Thread Gaetano Mendola
Manuel Tejada wrote:

import pgdb
dbConnect = pgdb.connect(dsn='localhost:oracle', user='manuel',
password='')

cursor = dbConnect.cursor()
cursor.execute("select * from address")
Traceback (most recent call last):
   File "", line 1, in ?
   File "/usr/lib/python2.2/site-packages/pgdb.py", line 189, in execute
self.executemany(operation, (params,))
   File "/usr/lib/python2.2/site-packages/pgdb.py", line 221, in executemany
desc = type[1:2]+self._cache.getdescr(typ[2])
   File "/usr/lib/python2.2/site-packages/pgdb.py", line 149, in getdescr
self._source.execute(
_pg.error: ERROR: non exist the column "typprtlen"
--
This is a really old problem, already solved on 7.3; see this post of mine:

http://archives.postgresql.org/pgsql-bugs/2002-12/msg00082.php

I can confirm that my 7.4.1 installation is affected by the same
problem. I don't understand how a modification made for 7.3 failed to be
carried over to 7.4.
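
(Just as a sanity check, not part of the fix: one can ask the server which of
the typ*len columns pg_type actually has; on 7.4 only typlen comes back,
which is exactly why the query in pgdb.py fails.)

SELECT attname FROM pg_attribute
 WHERE attrelid = (SELECT oid FROM pg_class WHERE relname = 'pg_type')
   AND attname LIKE 'typ%len';
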
For the moment what you can do is substitute this select:

"SELECT typname, typprtlen, typlen "
"FROM pg_type WHERE oid = %s" % oid
inside the file pgdb.py with this one:

"SELECT typname, 4, typlen "
"FROM pg_type WHERE oid = %s" % oid
just so as not to break the rest of the file.

I'm not able to look at CVS to see where the modification was lost.

Regards
Gaetano Mendola


Re: [GENERAL] PostgreSQL 7.4.1 and pgdb.py

2004-02-01 Thread Gaetano Mendola
Tom Lane wrote:

"Manuel Tejada" <[EMAIL PROTECTED]> writes:

But now, when I enter the same syntax with the new installation (PostgreSQL
7.4.1), I get an error on the fourth line:


_pg.error: ERROR: non exist the column "typprtlen"


I believe this indicates you're using an old version of the PyGreSQL
module.  typprtlen disappeared from the pg_type system catalog several
releases back.  There is updated PyGreSQL code out there, but I'm not
very sure where --- have you looked at gborg.postgresql.org?
Unfortunately the pgdb.py that is shipped with

postgresql-python-7.4.1-1PGDG.i386.rpm

is wrong; this problem was already solved on 7.3.

See this:

http://archives.postgresql.org/pgsql-bugs/2002-12/msg00082.php

Something went wrong during the SRPM build for 7.4.1.

It would be a good idea to look into how this happened.



Regards
Gaetano Mendola


Re: [GENERAL] Functions in CHECK constraint not getting dumped before tables.

2004-02-01 Thread Tom Lane
> I have found that pg_dump does not dump functions referred to in CHECK
> constraint definitions before dumping the table definitions.  As a result
> the tables do not get restored, because the functions they depend on are
> missing.

> Is this something that will be worked on in future?

This is fixed in CVS tip.

regards, tom lane


Re: [GENERAL] PostgreSQL 7.4.1 and pgdb.py

2004-02-01 Thread Manuel Tejada
Thank you very much Gaetano

I edited the pgdb.py file setting "4" instead of "typprtlen".
Now I am able to connect to PostgreSQL using pgdb.py.

Just out of curiosity, can I set it to -1 as well, as Gerhard Haring suggested to you?


- Original Message - 
From: "Gaetano Mendola" <[EMAIL PROTECTED]>
Newsgroups: comp.databases.postgresql.general
Cc: "Manuel Tejada" <[EMAIL PROTECTED]>
Sent: Sunday, February 01, 2004 4:48 PM
Subject: Re: PostgreSQL 7.4.1 and pgdb.py


> Manuel Tejada wrote:
> >
> import pgdb
> dbConnect = pgdb.connect(dsn='localhost:oracle', user='manuel',
> >
> > password='')
> >
> cursor = dbConnect.cursor()
> cursor.execute("select * from address")
> >
> > Traceback (most recent call last):
> >File "", line 1, in ?
> >File "/usr/lib/python2.2/site-packages/pgdb.py", line 189, in execute
> > self.executemany(operation, (params,))
> >    File "/usr/lib/python2.2/site-packages/pgdb.py", line 221, in executemany
> > desc = type[1:2]+self._cache.getdescr(typ[2])
> >    File "/usr/lib/python2.2/site-packages/pgdb.py", line 149, in getdescr
> > self._source.execute(
> > _pg.error: ERROR: non exist the column "typprtlen"
> > --
>
> This is a really old problem already solved on 7.3 see this my post:
>
> http://archives.postgresql.org/pgsql-bugs/2002-12/msg00082.php
>
> I'm checking that my 7.4.1 installation is affected by the same
> problem. I don't understand how this could happen that a modification
> made on a 7.3 was not ported to 7.4
>
> For the moment what you can do is substitute this select:
>
> "SELECT typname, typprtlen, typlen "
> "FROM pg_type WHERE oid = %s" % oid
>
> inside the file pgdb.py with this one:
>
> "SELECT typname, 4, typlen "
> "FROM pg_type WHERE oid = %s" % oid
>
> just to not break all file.
>
> I'm not able to look at CVS to see where the modification was lost.
>
> Regards
> Gaetano Mendola
>
>
>


Re: [GENERAL] PostgreSQL 7.4.1 and pgdb.py

2004-02-01 Thread Gaetano Mendola
Manuel Tejada wrote:
> Thank you very much Gaetano
>
> I edited the pgdb.py file setting "4" instead of "typprtlen".
> Now I am able to connect to PostgreSQL using pgdb.py.
>
> Just out of curiosity, can I set it to -1 as well, as Gerhard Haring suggested to you?
I think so; I didn't really dig into it to see how that value is used.
Regards
Gaetano Mendola




[GENERAL] partial indexes

2004-02-01 Thread Unregistered

I have the following situation:
one "message" table and 3 other "message_{1,2,3}" tables which inherit
from "message".
Every day roughly 50,000 messages are inserted.

I needed something extra to speed up the queries beyond plain indexes,
so I thought of doing something like Oracle's table partitioning.
In PostgreSQL you can simulate this using partial indexes.

So what I did was:
I made a "today_idx", a "2_weeks_idx" and a "month_idx" on the attribute
"sent_date", which is what 90% of the queries filter on. These indexes
are rebuilt every night.

When I was testing this setup, the indexes covered date ranges like this
(roughly as in the DDL sketch below):
today_idx: 1/02
2_weeks_idx: 18/01 --> 1/02
month_idx: 2/01 --> 1/02
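
In DDL terms that was roughly the following (the exact predicates are
reconstructed from the ranges above and may differ from the real ones):

CREATE INDEX today_idx ON message (sent_date)
    WHERE sent_date >= '2004-02-01';
CREATE INDEX "2_weeks_idx" ON message (sent_date)
    WHERE sent_date >= '2004-01-18';
CREATE INDEX month_idx ON message (sent_date)
    WHERE sent_date >= '2004-01-02';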

When querying something like:
select * from messages where sent_date = '2004-02-01'

the optimiser chooses to search the month_idx index instead of the
optimal today_idx. This is because today (1/02) is also included in
month_idx (and also in 2_weeks_idx).
So I had to recreate the indexes so that each one excludes the ranges
(and boundary dates) covered by the others. Is this normal?

My question:
Is this a good way to solve the problem? Are there any other ways I can
do this?
Is inheritance a good, stable feature in PostgreSQL? (There isn't a lot of
documentation about it.)

Tnx in advance


Unregistered -


Re: [GENERAL] partial indexes

2004-02-01 Thread Tom Lane
Unregistered <[EMAIL PROTECTED]> writes:
> So what I did was:
> I made a "today_idx", a "2_weeks_idx" and a "month_idx" on the attribute
> "sent_date", which is what 90% of the queries filter on. These indexes
> are rebuilt every night.

> When I was testing this setup, the indexes covered date ranges like this:
> today_idx: 1/02
> 2_weeks_idx: 18/01 --> 1/02
> month_idx: 2/01 --> 1/02

> When querying something like:
> select * from messages where sent_date = '2004-02-01'

> the optimiser chooses to search the month_idx index instead of the
> optimal today_idx. This is because today (1/02) is also included in
> month_idx (and also in 2_weeks_idx).

There's really hardly any difference in practice.  The exact same number
of index entries will be scanned in both cases.  If you're lucky there
might be one less level of btree to descend through at the start of the
scan, but probably not more than that considering the index fanout will
be in the hundreds.  The planner's cost model for this is too crude to
tell the difference, and so it estimates the costs the same and might
choose either index.

The above really isn't a very profitable way to use partial indexes,
anyway.  You're triply indexing the most active part of the table,
thus tripling your index-updating costs, for what certainly won't
be a 3x payoff in access time.  See the Postgres user manual for some
more plausible use-cases for partial indexes.
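
For instance (hypothetical table and columns), one of the classic cases is
indexing only a small, frequently queried subset of a big table:

-- only unshipped orders are indexed; the index stays small even if the
-- table holds years of shipped history
CREATE INDEX orders_unshipped_idx ON orders (order_date)
    WHERE shipped = false;

-- a query that repeats the predicate can use the partial index
SELECT * FROM orders
 WHERE shipped = false AND order_date < '2004-01-01';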

regards, tom lane


[GENERAL] Error with pg_restore (parse error at or near "BY" at character 144

2004-02-01 Thread nednieuws | charles
What does this error mean:

pg_restore: creating TABLE author
pg_restore: creating SEQUENCE author_id
pg_restore: [archiver (db)] could not execute query: ERROR:  parser: parse error at or 
near "BY" at character 144
pg_restore: *** aborted because of error



The line in question is:

GRANT ALL ON TABLE author TO GROUP dvnl;
  [...]
);

^A^A
INCREMENT BY 1
NO MAXVALUE
NO MINVALUE
CACHE 1;

^A^A



The command used to restore the db:

pg_restore --dbname=dvnl --format=t --verbose --username=dvnl --no-privileges 
--no-owner dvnldump.tar



The source db is PostgreSQL 7.4 and the target db is 7.3.3
--
Regards, Charels.



Re: [GENERAL] Error with pg_restore (parse error at or near "BY" at character 144

2004-02-01 Thread Tom Lane
"nednieuws | charles" <[EMAIL PROTECTED]> writes:
> The source db is PostgreSQL 7.4 and the target db is 7.3.3

pg_dump is not designed to handle downgrades --- its output is intended
to be loaded into pg_dump's own version or later.  You should expect to
have to hand-edit the output to load it into an older version.
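
For instance, assuming the failing statement is the sequence definition
quoted above (reconstructed here from the fragment in the report): 7.3's
CREATE SEQUENCE does not accept the BY/WITH noise words or the NO MAXVALUE /
NO MINVALUE clauses that 7.4 emits, so the entry would need to be rewritten
along these lines:

-- as emitted by 7.4 pg_dump (7.3 stops at "BY"):
CREATE SEQUENCE author_id
    INCREMENT BY 1
    NO MAXVALUE
    NO MINVALUE
    CACHE 1;

-- hand-edited for 7.3:
CREATE SEQUENCE author_id
    INCREMENT 1
    CACHE 1;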

regards, tom lane


Re: [GENERAL] I can't upgrade to PostgreSQL 7.4 in RedHat 9.0

2004-02-01 Thread Stephen Robert Norris
On Fri, 2004-01-30 at 13:04, Alvaro Herrera wrote:
> On Thu, Jan 29, 2004 at 08:50:47PM -0500, Manuel Tejada wrote:
> 
> > By the way, what does RHEL3 mean?
> 
> "Red Hat Entreprise Linux", a commercial Linux distribution (meaning you
> shouldn't use it unless you pay for it).

No, it means you won't get support unless you pay for it, and if you get
support for a single machine in your organisation using RHEL, you must
get support for all of them.

Stephen

