Re: [GENERAL] [PERFORM] HELP speed up my Postgres

2004-11-24 Thread Christopher Kings-Lynne
		update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in
		(select mobile_num from LOADED_MOBILE_NUMBERS)
Change to:
update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where exists
(select 1 from LOADED_MOBILE_NUMBERS lmn
 where lmn.mobile_num = SUBSCRIPTIONTABLE.mobile_num);

That should run a lot faster.
Make sure you have indexes on both mobile_num columns.
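The index advice can be sketched as follows (index names are hypothetical; table and column names are taken from the original post):

```sql
-- Hypothetical index definitions for the two mobile_num columns
-- referenced in this thread:
CREATE INDEX subscriptiontable_mobile_num_idx
    ON SUBSCRIPTIONTABLE (mobile_num);
CREATE INDEX loaded_mobile_numbers_mobile_num_idx
    ON LOADED_MOBILE_NUMBERS (mobile_num);

-- Refresh planner statistics so the new indexes are considered:
ANALYZE SUBSCRIPTIONTABLE;
ANALYZE LOADED_MOBILE_NUMBERS;
```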
Chris
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] [PERFORM] HELP speed up my Postgres

2004-11-24 Thread Klint Gore
On Thu, 25 Nov 2004 14:00:32 +0800, JM <[EMAIL PROTECTED]> wrote:
>   update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in 
> (select 
> mobile_num from LOADED_MOBILE_NUMBERS)

does loaded_mobile_numbers have a primary key or index on mobile_num?
same for subscriptiontable?
have you analyzed both tables?
is mobile_num the same type in both tables?

how does this query compare?
   update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' 
   from loaded_mobile_numbers
   where subscriptiontable.mobile_num = LOADED_MOBILE_NUMBERS.mobile_num

klint.

Klint Gore
EMail : [EMAIL PROTECTED]
Snail Mail : A.B.R.I., University of New England, Armidale NSW 2351 Australia
Fax : +61 2 6772 5376
"Non rhyming slang - the possibilities are useless" - L.J.J.



Re: [GENERAL] [PERFORM] HELP speed up my Postgres

2004-11-24 Thread Tom Lane
JM <[EMAIL PROTECTED]> writes:
> PG Version 7.3.4

Avoid the "IN (subselect)" construct then.  7.4 is the first release
that can optimize that in any real sense.

regards, tom lane



Re: [GENERAL] [PERFORM] HELP speed up my Postgres

2004-11-24 Thread Iain
SQL:
update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in (select
mobile_num from LOADED_MOBILE_NUMBERS)
Could you try using the UPDATE ... FROM (SELECT ...) AS ... style syntax?
About 20 minutes ago, I changed an 8-minute update to an almost instant one
by doing that.
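A sketch of what such a rewrite might look like for the tables in this thread (table and column names assumed from the original post; not a tested statement):

```sql
-- UPDATE ... FROM (SELECT ...) AS ... form of the original query;
-- DISTINCT guards against multiply-listed mobile numbers:
UPDATE SUBSCRIPTIONTABLE
   SET ACTIVEFLAG = 'Y'
  FROM (SELECT DISTINCT mobile_num FROM LOADED_MOBILE_NUMBERS) AS lmn
 WHERE SUBSCRIPTIONTABLE.mobile_num = lmn.mobile_num;
```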

regards
Iain 



Re: [GENERAL] [PERFORM] HELP speed up my Postgres

2004-11-24 Thread JM
PG Version 7.3.4

On Thursday 25 November 2004 14:12, Vishal Kashyap @ [SaiHertz] wrote:
> Dear JM ,
>
> > I've been using postgres for 3 years and now we are having
> > problems with its
>
> PostgreSQL version please




Re: [GENERAL] [PERFORM] HELP speed up my Postgres

2004-11-24 Thread Vishal Kashyap @ [SaiHertz]
Dear JM ,



> I've been using postgres for 3 years and now we are having problems
> with its

PostgreSQL version please
-- 
With Best Regards,
Vishal Kashyap.
Lead Software Developer,
http://saihertz.com,
http://vishalkashyap.tk



[GENERAL] HELP speed up my Postgres

2004-11-24 Thread JM
Hi ALL,

I've been using postgres for 3 years and now we are having problems with its
performance.

Here are some givens..

We have 260 subscription tables per Database. 
We have 2 databases.

Our main client has given us 250,000 mobile numbers to 
deactivate.

--
What we are experiencing:
deactivating 91,000 mobile numbers took a week to finish for 1 DB only;
the second DB is still in the process of deactivating.

Algorithm to deactivate:
we loaded all subscription table names into a table
we loaded all mobile numbers to deactivate into a table

SQL:
update SUBSCRIPTIONTABLE set ACTIVEFLAG='Y' where mobile_num in
(select mobile_num from LOADED_MOBILE_NUMBERS)

the script was written in C

CONFIG FILE:
# This is ARA nmimain

tcpip_socket = true
max_connections = 150
superuser_reserved_connections = 2

port = 5433
shared_buffers = 45600
sort_mem = 4
max_locks_per_transaction=128

#fsync = true
#wal_sync_method = fsync

#
#   Locale settings
#
# (initialized by initdb -- may be changed)
LC_MESSAGES = 'en_US.UTF-8'
LC_MONETARY = 'en_US.UTF-8'
LC_NUMERIC = 'en_US.UTF-8'
LC_TIME = 'en_US.UTF-8'


The DB is being vacuumed every week.
My box is a dual Xeon with 15K RPM disks and 2 GB of memory.

That box is running 2 instances of the PG DB.



TIA,








Re: [GENERAL] SELECT...VIEW...UNION...LIMIT

2004-11-24 Thread Greg Stark

"Ed L." <[EMAIL PROTECTED]> writes:

>   create view big_view as
>   select *, 'big_table'::varchar as source from big_table
>   union
>   select *, 'small_table'::varchar as source from small_table;

Try "UNION ALL" instead of just "union"

The difference is that union has to avoid duplicates. If you want duplicates
to be included or know for certain there will be no duplicates then union all
is faster.
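Applied to the view from the original post, the rewrite would look something like this (a sketch, reusing the quoted definition):

```sql
-- Same view, but with UNION ALL so no duplicate-eliminating
-- sort/unique step is needed:
CREATE VIEW big_view AS
SELECT *, 'big_table'::varchar AS source FROM big_table
UNION ALL
SELECT *, 'small_table'::varchar AS source FROM small_table;
```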

-- 
greg




[GENERAL] select into temp tables withough using EXECUTE in plpgsql

2004-11-24 Thread Edmund Kleiser
I have a query about something that seems to be a fairly frequently asked
question: the use of temporary tables in plpgsql.

This initially resulted in the good old 'oid not found' error.
I learnt from the mailing list that I should be 'executing' (with the EXECUTE
command) my queries, because expressions used in a PL/pgSQL function are only
prepared and saved once (without using EXECUTE).

However, I would like to select distinct values in my temp table, which seems
impossible as:

" SELECT INTO is not currently supported within EXECUTE. So, the only way to 
extract a result from a dynamically-created SELECT is to use the FOR ... 
EXECUTE form described later."

(http://www.postgresql.org/docs/7.1/static/plpgsql-description.html)
I either cannot find or do not understand the documentation for this "FOR
... EXECUTE form" described somewhere later in the documentation.

So to recap: I'm creating a temp table fine.
I'm EXECUTING an insert into the temp table fine.
Then I cannot select from the table
in the form:
SELECT INTO int1 count(distinct(value)) from TEMP1;
Then it happily drops the table (without the select).
Any advice on how to select from a temp table into a variable would be
gratefully received.
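For what it's worth, the "FOR ... EXECUTE" form mentioned in the docs can read a result from a dynamically-created SELECT into a variable. A sketch (function and variable names hypothetical, in the old string-literal function-body style):

```sql
CREATE OR REPLACE FUNCTION count_distinct_temp() RETURNS integer AS '
DECLARE
    rec record;
    int1 integer := 0;
BEGIN
    -- FOR ... EXECUTE loops over the rows of a dynamically-built query,
    -- so the temp table is looked up fresh on every call:
    FOR rec IN EXECUTE ''SELECT count(DISTINCT value) AS n FROM TEMP1'' LOOP
        int1 := rec.n;
    END LOOP;
    RETURN int1;
END;
' LANGUAGE 'plpgsql';
```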

Many Thanks
Edmund



[GENERAL] Are subselects treated as atomic single commands?

2004-11-24 Thread Thomas Chille
Hi,

the documentation about the Read Committed transaction isolation level points
out: "... The point at issue here is whether or not within a single
command we see an absolutely consistent view of the database."

Because I don't want to use the Serializable transaction isolation
level or table locks if not necessary, I have one question:

Would an INSERT command with that subselect be treated as one single
command, and can I thereby prevent a race condition between multiple
function calls?

CREATE OR REPLACE FUNCTION "public"."count_parameter" (name, integer)
RETURNS "pg_catalog"."void" AS'
BEGIN

INSERT INTO parameter (parameter_name, parameter_value) SELECT $1,
$2 WHERE (
SELECT COUNT(*) = 0 FROM parameter WHERE parameter_name = $1
);
IF NOT FOUND THEN
UPDATE parameter SET parameter_value = parameter_value + $2
WHERE parameter_name = $1;
END IF;

RETURN;

END;
'LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

best regards,
thomas



[GENERAL] Nesting Stored Procedure Calls

2004-11-24 Thread Larry White
Is it possible for one stored procedure to call another?  I would like
a number of procs to call another proc to update an audit trail
whenever they're called.
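One common pattern, sketched below with entirely hypothetical names: a PL/pgSQL function can invoke another with PERFORM when the result is discarded, e.g. to write an audit row:

```sql
-- Hypothetical audit helper:
CREATE FUNCTION audit_log(text) RETURNS void AS '
BEGIN
    INSERT INTO audit_trail (event, logged_at) VALUES ($1, now());
    RETURN;
END;
' LANGUAGE 'plpgsql';

-- Any other function can call it; PERFORM discards the (void) result:
CREATE FUNCTION do_work() RETURNS void AS '
BEGIN
    PERFORM audit_log(''do_work called'');
    -- ... the function''s real work here ...
    RETURN;
END;
' LANGUAGE 'plpgsql';
```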



Re: [GENERAL] Upcoming Changes to News Server ...

2004-11-24 Thread \"Marc G. Fournier From\"@svr1.postgresql.org
[EMAIL PROTECTED] ("Gary L. Burnore") writes:

>It appears that his aliasing hasn't actually taken effect yet. Once it
>does, apparently things will be slightly better, because he's then sending
>posts to pgsql.* not comp.databases.postgres.* . As of a short while
>ago, we were still receiving articles from the list.  Now they show up in
>our unwanted.log file as groups we refuse to carry.  (The same thing google
>did.)

of course, as those that know how to run a news server have already pointed
out to you on news.groups, the reason why you are still receiving posts to
the comp.* groups is most likely a result of propagation issues from other
news servers, or other news servers that have users posting to those groups ...

... but, like with all your other postings, you like to avoid the facts and
deal with your own personal version of reality instead ...

>So now you'll have fewer places passing your posts along to the next NSP.
>Less propagation.  I know of at least two, DataBasix and Google.  I'm sure
>there are more.

Of course, that avoids the fact that the groups were *always* bogus but
you created them anyway, without twisting your arm, I might add ... but,
that's okay, again, that goes back to your view of reality vs the rest
of the world ...

See, the way I see it, you created the groups because there was traffic in
them, the same as every other site out there did that is carrying them, since
there never was a cmsg sent out to create them ... 




[GENERAL] why use SCHEMA? any real-world examples?

2004-11-24 Thread Miles Keaton
I just noticed PostgreSQL's schemas for my first time.
(http://www.postgresql.org/docs/current/static/ddl-schemas.html) 

I Googled around, but couldn't find any articles describing WHY or
WHEN to use schemas in database design.

Since the manual says HOW, could anyone here who has used schemas take
a minute to describe to a newbie like me why you did?   What benefits
did they offer you?   Any drawbacks?

Thanks for your time.

- Miles



[GENERAL] SELECT...VIEW...UNION...LIMIT

2004-11-24 Thread Ed L.

I have "big_table" (1M rows) and "small_table" (1K rows) with 
identical schemas and together in a view as follows:

create view big_view as
select *, 'big_table'::varchar as source from big_table
union
select *, 'small_table'::varchar as source from small_table;

I tried this query...

select * from big_view limit 1

...expecting a quick result, but no joy.  Is there something I can do 
to make this work?  Here's the explain:

$ psql -c "explain select * from big_view limit 1"
                                          QUERY PLAN
----------------------------------------------------------------------------------------------
 Limit  (cost=294405.67..294405.79 rows=1 width=711)
   ->  Subquery Scan big_view  (cost=294405.67..295871.93 rows=11730 width=711)
         ->  Unique  (cost=294405.67..295871.93 rows=11730 width=711)
               ->  Sort  (cost=294405.67..294698.92 rows=117301 width=711)
                     Sort Key: value, cdate, "key", source
                     ->  Append  (cost=0.00..183139.01 rows=117301 width=711)
                           ->  Subquery Scan "*SELECT* 1"  (cost=0.00..183119.01 rows=116301 width=711)
                                 ->  Seq Scan on big_table  (cost=0.00..183119.01 rows=116301 width=711)
                           ->  Subquery Scan "*SELECT* 2"  (cost=0.00..20.00 rows=1000 width=72)
                                 ->  Seq Scan on small_table  (cost=0.00..20.00 rows=1000 width=72)
(10 rows)




Re: [GENERAL] Any good report/form generators for postgresql?

2004-11-24 Thread Reid Thompson
Alberto Cabello Sanchez wrote:
On Sun, Nov 21, 2004 at 05:50:30PM +, Chris Green wrote:
 

I'm after a good report generator for a postgresql database.  I am
using Rekall at the moment which is quite good but I'm not quite able
to do some of the things I want.
Has anyone got any recommendations for forms/reports generators for
postgresql?  I can probably get the forms I want from Rekall so the
bigger requirement is a report generator.  The particular thing I
can't manage in Rekall at the moment is a 'running total' output
column on a report.
   

You can give jasperreports a try, along with some of the GUI design tools
such as iReports or JasperAssistant. JasperReports seems to be fairly
powerful.
 

--
Chris Green ([EMAIL PROTECTED])
   

 

you can take a look at Datavision reports also


Re: [GENERAL] Replication & BLOBs

2004-11-24 Thread Joshua D. Drake
Tatsuo Ishii wrote:
pgpool(http://www2b.biglobe.ne.jp/~caco/pgpool/index-e.html) surely
can handle large objects. I don't know any other replication solutions
can handle them. 
 

Mammoth Replicator can, but it is not open source.
Sincerely,
Joshua D. Drake

Be careful, however: please make sure that the master and the slave
database clusters are physically synced before starting
replication. Also, if multiple sessions are going to concurrently
create large objects, you need to impose cluster-wide explicit
concurrency controls. Otherwise OIDs may not end up identical on the
master and slave. To accomplish this, you can lock the shared table. For
example,
BEGIN;
LOCK TABLE pg_database;
creating large objec...
--
Tatsuo Ishii
 

Does anyone know of an open source  single master- multi (or single) slave 
replication system for Postgresql 7 or 8 which handles BLOBs?

Thanks,
Mark Childerson
Mark Childerson
espressoCode inc.
Ph: 416-963-8793
Fax: 416-963-8643
www.exdocs.com
www.espressocode.com


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL




Re: [GENERAL] Replication & BLOBs

2004-11-24 Thread Tatsuo Ishii
pgpool(http://www2b.biglobe.ne.jp/~caco/pgpool/index-e.html) surely
can handle large objects. I don't know any other replication solutions
can handle them. 

Be careful, however: please make sure that the master and the slave
database clusters are physically synced before starting
replication. Also, if multiple sessions are going to concurrently
create large objects, you need to impose cluster-wide explicit
concurrency controls. Otherwise OIDs may not end up identical on the
master and slave. To accomplish this, you can lock the shared table. For
example,

BEGIN;
LOCK TABLE pg_database;
creating large objec...
--
Tatsuo Ishii

> Does anyone know of an open source  single master- multi (or single) slave 
> replication system for Postgresql 7 or 8 which handles BLOBs?
> 
> Thanks,
> 
> Mark Childerson
> 
> 
> Mark Childerson
> espressoCode inc.
> Ph: 416-963-8793
> Fax: 416-963-8643
> 
> www.exdocs.com
> www.espressocode.com
> 
> 



Re: [GENERAL] pgdump of schema...

2004-11-24 Thread Net Virtual Mailing Lists
Actually this database has been carried forward since the "postgres95"
days, so you are definitely right in your analysis.

Would another (perhaps safer?) way of doing this be to remove the "CREATE
SEQUENCE" and "SELECT pg_catalog.setval" statements, replace the "DEFAULT
nextval" with "SERIAL", and then restore that?

Thanks as always!

- Greg



>Hmm.  What you've apparently got here is a serial column that you've
>carried forward from an old (pre 7.3 at least) database.  Had the serial
>default been created in 7.3 or later then it would be a fully qualified
>name (ie nextval('someschema.emailtemplate_email_templat_seq')) and
>there would be no issue.  For that matter, had the SERIAL column been
>created in 7.3 or later, pg_dump would know to say
>
>   CREATE TABLE emailtemplates (
>   email_template_id SERIAL,
>   ...
>
>instead of what it did say.  Now it is surely not pg_dump's charter
>to editorialize on default expressions that were supplied by the user
>(which this was, as far as the current database knows).  So this isn't a
>pg_dump bug.  What it is is a deficiency in the upgrade process that we
>had from pre-7.3 to 7.3 databases.  You might want to consider running
>contrib/adddepend against your database to fix things up.  (But note
>that it's just a contrib script and is not guaranteed; so keep a prior
>dump around ...)





[GENERAL] Query for postmaster stats start time?

2004-11-24 Thread Ed L.

Is there a SQL query to retrieve the start time of a) when the postmaster 
was started, and/or b) when the stats were last reset?  I'd like to 
calculate a few rates over time...

TIA.

Ed




Re: [GENERAL] Regexp matching: bug or operator error?

2004-11-24 Thread Tom Lane
Ken Tanzer <[EMAIL PROTECTED]> writes:
> Thanks for the quick responses yesterday.  At a minimum, it seems like 
> this behavior does not match what is described in the Postgres 
> documentation (more detail below).

After looking at this more, I think that it is actually behaving as
Spencer designed it to.  The key point is this bit from the fine print
in section 9.6.3.5:

A branch has the same preference as the first quantified atom in it
which has a preference.

("branch" being any regexp with no outer-level | operator)

What this apparently means is that if the RE begins with a non-greedy
quantifier, then the matching will be done in such a way that the whole
RE matches the shortest possible string --- that is, the whole RE is
non-greedy.  It's still possible for individual items within the RE to
be greedy or non-greedy, but that only affects how much of the shortest
possible total match they are allowed to eat relative to each other.
All the examples I've looked at seem to work "properly" when seen in
this light.

I can see that this behavior could have some usefulness, and if need be
you can always override it by writing (...){1,1} around the whole RE.
So at this point I'm disinclined to vary from the Tcl semantics.

This does leave us with a documentation problem though, because this
behavior is surely not obvious from what it says in 9.6.3.5.  If you've
got any thoughts about a better explanation, I'm all ears.

> Here's the actual regex we're working on--any help 
> reformulating this would be great!

> select substring('Searching for log 5376, referenced in this text'
> FROM
> '(?i)(?:.*?)logs?(?:\\s|\\n||| 
> )(?:entry|no|number|#)?(?:\\s|\\n|| )?([0-9]{1,7})(.*?)');

I don't see that you need either the leading (?:.*?) or the trailing
(.*?) here, and if you dropped them then the first quantifier would be
the "s?" which is greedy so the curious case goes away.  I suppose the
idea of adding (?:.*?) was to ensure that "log" will be matched to the
first possible place where it could match --- but that is true anyway,
per the first sentence of 9.6.3.5.

regards, tom lane



[GENERAL] tableoid

2004-11-24 Thread Jamie Deppeler
Hi
I have a bit of an issue. I'm planning on using tableoid to select the
appropriate table, but I'm not sure that you can in a SQL SELECT statement.
If not, is there another approach I could be using?

--
*Jamie Deppeler
*Database Administrator



Re: [GENERAL] VACUUM and ANALYZE With Empty Tables

2004-11-24 Thread gnari
From: "Mark Dexter" <[EMAIL PROTECTED]>


> Thanks very much for the information.  It would appear that our best
> option might be to vacuum analyze these tables in our application at a
> point in time when they contain rows instead of doing it at night.
> Needless to say, it would be nice to have an option to analyze with a
> target number of rows instead of the number presently in the table.

> I suppose another option would be to keep a small number of rows
> permanently in these tables.  In my testing, 100 rows (94 to be exact)
> did the trick.  Is this number going to vary from table to table?

or, you could add this procedure to your nightly vacuum job
(after the regular vacuum analyzes):
  insert a representative dummy row set into the empty table
  analyze the table
  remove the rows again

This way the dummy rows won't interfere with your regular operations.
If the table is not always empty at vacuum time, you need to be
able to differentiate the dummy rows from the regular ones so that
only the dummy ones are removed, of course.
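The procedure might be sketched like this (the table name and the is_dummy marker column are hypothetical; ANALYZE, unlike VACUUM, may run inside a transaction block):

```sql
BEGIN;
-- Insert a representative dummy row set, flagged so it can be found again:
INSERT INTO work_queue (payload, is_dummy) VALUES ('representative row', true);
-- (repeat, or use INSERT ... SELECT, until the set is large enough)

-- Gather statistics while the table is non-empty:
ANALYZE work_queue;

-- Remove only the dummy rows, leaving any real rows untouched:
DELETE FROM work_queue WHERE is_dummy;
COMMIT;
```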

gnari






[GENERAL] Replication & BLOBs

2004-11-24 Thread Mark Childerson
Hi,
Does anyone know of an open source  single master- multi (or single) slave 
replication system for Postgresql 7 or 8 which handles BLOBs?

Thanks,
Mark Childerson
Mark Childerson
espressoCode inc.
Ph: 416-963-8793
Fax: 416-963-8643
www.exdocs.com
www.espressocode.com


Re: [GENERAL] Upcoming Changes to News Server ...

2004-11-24 Thread Andrew - Supernews
On 2004-11-24, Marc G Fournier From : <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] (Andrew - Supernews) writes:
>>Any chance of there being regular (or even only occasional) signed
>>checkgroups messages for the new hierarchy?
>
> Sure, but I've never done it before, so if you can help ... ?

Sure. You can reach me by email or on the irc chan. You should already have
a copy of the signcontrol script itself (it's in the INN distribution).
The format of a checkgroups message is simple: one line in the message
body per newsgroup in this format:

group.name(tabs)Description of group

The description must end with " (Moderated)" (without the quotes) if it is
a moderated group in the Usenet sense, and not otherwise. Conventionally
the separator is enough tabs so that the description starts in column 24,
but the only real requirement is that there be one or more tabs (and not
any other sort of whitespace, and no tabs in the description). This is the
same format as the newsgroups file in INN.

For the headers, you want "Control: checkgroups", an Approved header,
and a Newsgroups: header with an appropriate announcement group in
(pgsql.announce should do; the message won't show up to normal readers).

Checkgroups should be posted preferably after any change to the group
list, and once per month or two even if there are no changes. Obviously
you need an appropriate PGP or GPG key (RSA seems to be best as far as
compatibility goes), which has to be published somewhere (but doesn't need
to be on keyservers).

Let me know if you have any questions or if you want me to verify any
messages.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services



Re: [GENERAL] PostGreSQL upgrade failed (Debian Packages), need advice...

2004-11-24 Thread Tom Lane
Eric D Nielsen <[EMAIL PROTECTED]> writes:
> There were two sets of errors.  One set dealing with "FATAL 1: unsupported
> frontend protocol" during the data dumping stage of the automatic update
> script.  It appears that the data was successfully dumped, however.  Should I
> be worried?  Is this FATAL warning actually routine?

It is if you are using a 7.4 or later client to talk to a 7.3 or older
server.  The client will first attempt to connect with 7.4 protocol, and
only fall back to the older protocol when that fails.  This leaves a
harmless gripe behind in the server's log...

> Dumping with new pg_dumpall
> FATAL 1:  unsupported frontend protocol

... and evidently that's exactly what the script is doing.  I'm not sure
why it's intermixing the server's log output with its own commentary
though.

> The second set of errors were caused by disappearance of the "debug-level"
> configuration parameter and by the upgrade script not over-writing the
> configuration file with a new one.  (This is where the user-error claim is
> arising.  I don't recall denying permission to overwrite, but the script is
> acting as if I did.)

Can't speak to this one.  It could be a file-permissions kind of failure?

> In this case the initdb didn't clean up the partially populated PGDATA
> directory, should it have?

Depends how the script invoked initdb --- there's a command line option
telling it which to do.

> /usr/lib/postgresql/bin/initdb: line 648: /etc/postgresql/4122: Permission
> denied

> How do I proceed here?  It looks like a permission issue, but there is no file
> by that name in that directory.

This seems to be related to Debian's local changes to Postgres; you'll
have to read their documentation to figure out what's up.  Or at least
look at their version of the initdb script (in the stock 7.4 script,
line 648 is noticeably further down than you evidently got).

> Assuming that this issue is resolved and I can initdb and restart postmaster
> what is the series of actions to finish recovery?
> 1. psql template1 < db.out

Check.

> 2. adddepend?  I'm coming from 7.2 to 7.4 so I believe I'm supposed to run
> this, but I haven't found documentation on it yet...

Look at contrib/adddepend/README (not sure where Debian puts this, but
it's there in the source tree).

regards, tom lane



Re: [GENERAL] PostGreSQL upgrade failed (Debian Packages), need advice...

2004-11-24 Thread Eric D Nielsen
Quoting Peter Eisentraut <[EMAIL PROTECTED]>:

> Eric D Nielsen wrote:
> > I recently tried to upgrade from the 7.2.1 PostGreSQL package on
> > Debian Stable to the 7.4.6 PostGreSQL package on Debian Testing.  The
> > automatic update failed, message included below.  The documentation
> > for manual upgrades references a script which does not appear to
> > exist (postgresql-dump) in the postgres/dumpall/7.2/ directory.
> 
> If the upgrade of the Debian package failed, please submit a bug report 
> for the Debian package (after scanning previous bug reports for 
> duplicates).  That might not help fixing your installation, but at 
> least the problem might be corrected in the future.

I've submitted the bug to Debian.  Their initial triage appears to suggest
user-error, on my part; I'm not quite accepting that yet.  In any case I'm
trying to figure out how to recover my install.

It looks like my attempt to include the script output in my email to the list
got truncated.  Here is a brief discussion of what the problems were and what
I've figured out so far.

There were two sets of errors.  One set dealing with "FATAL 1: unsupported
frontend protocol" during the data dumping stage of the automatic update
script.  It appears that the data was successfully dumped, however.  Should I
be worried?  Is this FATAL warning actually routine?  Why would it pop up but
still appear to finish the dump successfully?

SCRIPT OUTPUT
 
Stopping and restarting the postmaster
/var/lib/postgres/dumpall/7.2/postmaster -D /var/lib/postgres/data  -p 5431 -o
-d0
DEBUG:  database system was shut down at 2004-11-24 07:17:34 EST
DEBUG:  checkpoint record is at 1/A1C26620
DEBUG:  redo record is at 1/A1C26620; undo record is at 0/0; shutdown TRUE
DEBUG:  next transaction id: 73576388; next oid: 446733
DEBUG:  database system is ready
Dumping the database to /var/lib/postgres//db.out
pg_dumpall -N
Dumping with new pg_dumpall
FATAL 1:  unsupported frontend protocol
... ~30 lines of the same FATAL error repeat

END SCRIPT OUTPUT

The second set of errors were caused by disappearance of the "debug-level"
configuration parameter and by the upgrade script not over-writing the
configuration file with a new one.  (This is where the user-error claim is
arising.  I don't recall denying permission to overwrite, but the script is
acting as if I did.)

Relevant output:
creating directory /var/lib/postgres/data/base... ok
creating directory /var/lib/postgres/data/global... ok
creating directory /var/lib/postgres/data/pg_xlog... ok
creating directory /var/lib/postgres/data/pg_clog... ok
selecting default max_connections... 100
selecting default shared_buffers... 1000
ok
creating template1 database in /var/lib/postgres/data/base/1... FATAL: 
unrecognized configuration parameter "debug_level"

initdb: failed
initdb failed

END OUTPUT

In this case the initdb didn't clean up the partially populated PGDATA
directory, should it have?

I've gone in and manually removed the offending line in the configuration file.

Now I try to initdb manually and I receive
[EMAIL PROTECTED]:~$ initdb --debian-conffile
use debian conffile location
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale C.

creating directory /var/lib/postgres/data... ok
creating directory /var/lib/postgres/data/base... ok
creating directory /var/lib/postgres/data/global... ok
creating directory /var/lib/postgres/data/pg_xlog... ok
creating directory /var/lib/postgres/data/pg_clog... ok
selecting default max_connections... 100
selecting default shared_buffers... 1000
/usr/lib/postgresql/bin/initdb: line 648: /etc/postgresql/4122: Permission
denied

initdb: failed
initdb: removing data directory "/var/lib/postgres/data"

END OUTPUT

How do I proceed here?  It looks like a permission issue, but there is no file
by that name in that directory.

Assuming that this issue is resolved and I can initdb and restart postmaster
what is the series of actions to finish recovery?
1. psql template1 < db.out
   db.out is the all database dump, so it will create and connect to the
individual databases.  Or is there a dedicated restore tool I should be using?

2. adddepend?  I'm coming from 7.2 to 7.4 so I believe I'm supposed to run
this, but I haven't found documentation on it yet... Do I run it before
restore on the sql dump or against the live DB, etc?  I assume this answer
is in the mailing list archive, but searching hasn't been working for me
all day.

3.  Anything else?

Thank you.

Eric



Re: [GENERAL] Upcoming Changes to News Server ...

2004-11-24 Thread Andrew - Supernews
On 2004-11-23, "Marc G. Fournier" <[EMAIL PROTECTED]> wrote:
> On Wed, 24 Nov 2004, Peter Eisentraut wrote:
>> Marc G. Fournier wrote:
>>> Due to recent action by Google concerning the
>>> comp.databases.postgresql.* hierarchy, we are going to make some
>>> changes that should satisfy just about everyone ... over the next
>>> 24hrs or so, traffic *to*
>>> comp.databases.postgresql.* from the mailing lists will cease and be
>>> re-routed to pgsql.* instead ... on our server (and we encourage
>>> others to do the same), the comp.* groups will be aliased to the new
>>> pgsql.* hierarchy, so that posts to the old groups will still get
>>> through ...
>>
>> What exactly is this meant to achieve?
>
> To clean up the comp.* hierarchy ... even if the 4/5 that are being RFD'd 
> right now pass, people are going to continue screaming that the other 15-16 
> should be removed as well ... this way, those using news.postgresql.org can 
> still get access to the whole hierarchy, while the comp.* would only carry 
> those that are deemed "official"

Any chance of there being regular (or even only occasional) signed
checkgroups messages for the new hierarchy?

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Regexp matching: bug or operator error?

2004-11-24 Thread Ken Tanzer
Thanks for the quick responses yesterday.  At a minimum, it seems like 
this behavior does not match what is described in the Postgres 
documentation (more detail below).  But I still have a hard time 
understanding the results of these two queries:

   select SUBSTRING( 'X444X','.*?([0-9]{1,3}).*?');
   This is the original query I submitted, with the puzzling non-greedy 
match.  It returns '4'.

Adding start and end characters to the query, like so:
   select SUBSTRING( 'X444X','^.*?([0-9]{1,3}).*?$');
   returns '444'.
If the "whole RE" was being set non-greedy by the first ".*?", then 
shouldn't the subsequent "([0-9]{1,3})" also match non-greedily, 
returning a '4', with the last ".*?" then capturing the balance of 
"44X"?  Either way, I'm not sure why the start and end characters would 
effect the rest of the match.

In terms of the Postgres documentation, it definitely seems at odds with 
the observed behavior.  Here's my attempts to explain why:

a)select SUBSTRING( 'X444X','[0-9]{1,3}');
returns '444'.  This suggests that a "default" for the {m,n} syntax is 
greedy.

b)  Table 9-13 of the docs describes {m,n} syntax, then lists {m,n}? as 
a "non-greedy" version of the same.  That, and the fact that there 
doesn't seem to be a specific "greedy" modifier, would both also imply 
that {m,n} should be greedy.

Section 9.6.3.5: "A quantified atom with other normal quantifiers 
(including {m,n} with m equal to n) prefers longest match"  I can't find 
anything else in this section that would say otherwise.  I specifically 
can't find anything that says the whole expression becomes greedy or not.

If the regex code isn't going to change, it seems that changing the 
documentation would be very helpful to avoid confusion.  Of course, 
that's easy for me to say, since I wouldn't have to do the work! For 
that matter, I'd be willing to try editing the documentation, but I'd 
have to understand what the actual behavior is before I could try to 
describe it! :)  Either way, thanks for the great DB program!

Ken Tanzer
p.s., The suggested regex rewrites some people responded with were 
appreciated, but the regex I used was just a simplified example for this 
posting.  Here's the actual regex we're working on--any help 
reformulating this would be great!

select substring('Searching for log 5376, referenced in this text'
   FROM
   '(?i)(?:.*?)logs?(?:\\s|\\n||| 
)(?:entry|no|number|#)?(?:\\s|\\n|| )?([0-9]{1,7})(.*?)');

We were able to get this to work by adding start and end characters, 
like so, but it doesn't seem like it should be necessary:

select substring('Searching for log 5376, referenced in this text'
   FROM
   '(?i)^(?:.*?)logs?(?:\\s|\\n||| 
)(?:entry|no|number|#)?(?:\\s|\\n\| )?([0-9]{1,7})(.*?)$');


Tom Lane wrote:
Checking in the Tcl bug tracker reveals that this is an open issue
for them as well:
http://sourceforge.net/tracker/index.php?func=detail&aid=219219&group_id=10894&atid=110894
http://sourceforge.net/tracker/index.php?func=detail&aid=219358&group_id=10894&atid=110894
The first entry has Henry Spencer claiming that it is operating as
designed, but the second one seems to cast doubt on that claim.
In any case I tend to agree that the notation implies that greediness
should be an independent property of each quantifier.
However, if Henry can't or doesn't want to fix it, I'm not sure that
I care to wade in ;-)
			regards, tom lane
 

begin:vcard
fn:Kenneth Tanzer
n:Tanzer;Kenneth
org:Downtown Emergency Service Center;Information Services
adr:;;507 Third Avenue;Seattle;WA;98104;USA
email;internet:[EMAIL PROTECTED]
title:IS Manager
tel;work:(206) 464-1570 x 3061
tel;fax:(206) 624-4196
url:http://www.desc.org
version:2.1
end:vcard


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] VACUUM and ANALYZE With Empty Tables

2004-11-24 Thread Mark Dexter
Thanks very much for the information.  It would appear that our best option 
might be to vacuum analyze these tables in our application at a point in time 
when they contain rows, instead of doing it at night.  Needless to say, it 
would be nice to have an option to analyze for a target number of rows instead 
of the number presently in the table.
 
I suppose another option would be to keep a small number of rows permanently in 
these tables.  In my testing, 100 rows (94 to be exact) did the trick.  Is this 
number going to vary from table to table?
 
Thanks again for your help.  Mark



From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Wed 11/24/2004 1:26 AM
To: Mark Dexter
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] VACUUM and ANALYZE With Empty Tables



Mark Dexter wrote:
> We use a development environment that works with Postgres via ODBC and
> uses cursors to insert and update rows in Postgres tables.  I'm using
> Postgres version 7.4.5.

> A. If I TRUNCATE or DELETE all of the rows in the table and then run
> VACUUM or ANALYZE on the empty table, the test program takes over 15
> minutes to complete (i.e., 15X performance drop).

> If we routinely run VACUUM or VACUUM ANALYZE (e.g., nightly), these work
> tables will normally be empty when the VACUUM is run.  So it would
> appear from the testing above that they will experience performance
> problems when inserting large numbers of rows  through our application.

Yep - it's a known issue. The analyse is doing what you asked, it's just
not what you want.

> Is there some easy way around this problem?  If there a way to force
> VACUUM or ANALYZE to optimize for a set number of rows even if the table
> is empty when it is run?  Thanks for your help.   Mark

There are only two options I know of:
  1. Vacuum analyse each table separately (tedious, I know)
  2. Try pg_autovacuum in the contrib/ directory
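For option 1, a nightly job can analyse each busy work table by name once it has been loaded (the table names below are placeholders, not from the original post):

```sql
-- Run after the work tables have been populated, not while they are empty,
-- so the planner statistics reflect the real row counts.
VACUUM ANALYZE work_queue;
VACUUM ANALYZE staging_import;
```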

The autovacuum utility monitors activity for you and targets tables when
they've seen a certain amount of activity. Even if it hasn't got the
tunability you need, it should be a simple patch to add a list of
"excluded" tables.

--
   Richard Huxton
   Archonet Ltd



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Tom Lane
Francis Reed <[EMAIL PROTECTED]> writes:
> We tested the principle with postgres 7.4.6 but found what we believe is a
> compile-time dependency in create_conversion.sql where $libdir is not being
> resolved properly during the initdb process on the second machine.

We only started supporting the idea of a relocatable installation for
8.0; in prior versions you can't just arbitrarily install the files
to a different path than what you said at configure time.  Even in 8.0
the files have to remain in the same relative locations.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] PostGreSQL upgrade failed (Debian Packages), need

2004-11-24 Thread Robin Ericsson
On Wed, 2004-11-24 at 08:30 -0800, Joshua D. Drake wrote:
> Peter Eisentraut wrote:
> 
> >Joshua D. Drake wrote:
> >  
> >
> >>Well you can't just "upgrade" 7.2.1 to 7.4.6. You have to dump and
> >>restore.
> >>
> >>
> >
> >The Debian package does that automatically.  On some days...
> >  
> >
> Really? WOW! I wonder if Gentoo does that. That is pretty
> remarkable.

Gentoo tells you that you need to dump and remove the cluster before it
even tries to upgrade; at least it did for me when going from 7.3 to 7.4.



regards,
Robin


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Nesting Stored Procedure Calls

2004-11-24 Thread Tino Wildenhain
Hi, 

Am Mittwoch, den 24.11.2004, 11:17 -0500 schrieb Larry White:
> Is it possible for one stored procedure to call another?  

yes.

More specifically, these are stored functions rather than 
stored procedures in the M$-like sense.

Regards
Tino


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Nesting Stored Procedure Calls

2004-11-24 Thread Michael Fuhr
On Wed, Nov 24, 2004 at 11:17:26AM -0500, Larry White wrote:

> Is it possible for one stored procedure to call another?  I would like
> certain procs to call a different proc to update an audit trail
> whenever they're executed.

What happened when you tried it?  If you're having trouble then it
would be helpful to see what you're doing, what you'd like to happen,
and what actually does happen.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Nesting Stored Procedure Calls

2004-11-24 Thread Joshua D. Drake
Ben wrote:
Yes, it's no different than any other select statement. Just keep in 
mind that it's all one big happy transaction - if the inner stored 
proc aborts, so does the outer one. (Actually, that may have changed 
in version 8, but I don't know.)
In 8.0 you could use an exception in PL/pgSQL, I think.
Sincerely,
Joshua D. Drake


On Nov 24, 2004, at 8:17 AM, Larry White wrote:
Is it possible for one stored procedure to call another?  I would like
certain procs to call a different proc to update an audit trail
whenever they're executed.

---(end of broadcast)---
TIP 8: explain analyze is your friend

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL
begin:vcard
fn:Joshua Drake
n:Drake;Joshua
org:Command Prompt, Inc.
adr:;;PO Box 215 ;Cascade Locks;OR;97014;US
email;internet:[EMAIL PROTECTED]
title:Consultant
tel;work:503-667-4564
tel;fax:503-210-0334
x-mozilla-html:FALSE
url:http://www.commandprompt.com
version:2.1
end:vcard


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[GENERAL] Are subselects treated as atomic single commands?

2004-11-24 Thread Thomas Chille
Hi,
the documentation on the Read Committed transaction isolation level points out: "... The point at issue 
here is whether or not within a single command we see an absolutely consistent view of the 
database."

Because I don't want to use the Serializable transaction isolation level or table locks if it is not 
necessary, I have one question:

Would the insert command with that subselect be treated as one single command, and can I thereby prevent a 
race condition between multiple function calls?

CREATE OR REPLACE FUNCTION "public"."count_parameter" (name, integer)
RETURNS "pg_catalog"."void" AS'
BEGIN
   INSERT INTO parameter (parameter_name, parameter_value)
   SELECT $1, $2 WHERE (
   SELECT COUNT(*) = 0 FROM parameter WHERE parameter_name = $1
   );
  IF NOT FOUND THEN
   UPDATE parameter SET parameter_value = parameter_value + $2
   WHERE parameter_name = $1;
  END IF;
  RETURN;
END;
'LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;
best regards,
thomas 
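Even if the INSERT ... SELECT is a single command, two sessions can each see COUNT(*) = 0 before either inserts, so both will insert. A sketch of a lock-based variant that closes that window (assuming a table-level lock is acceptable for this workload; this is illustrative, not a drop-in answer):

```sql
CREATE OR REPLACE FUNCTION "public"."count_parameter" (name, integer)
RETURNS "pg_catalog"."void" AS'
BEGIN
   -- Serialize concurrent callers of this function: SHARE ROW EXCLUSIVE
   -- conflicts with itself, but does not block plain readers.
   LOCK TABLE parameter IN SHARE ROW EXCLUSIVE MODE;
   UPDATE parameter SET parameter_value = parameter_value + $2
   WHERE parameter_name = $1;
   IF NOT FOUND THEN
      INSERT INTO parameter (parameter_name, parameter_value)
      VALUES ($1, $2);
   END IF;
   RETURN;
END;
'LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;
```

The lock is released at the end of the calling transaction, so contention is limited to concurrent counter updates.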

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [GENERAL] PostGreSQL upgrade failed (Debian Packages), need advice...

2004-11-24 Thread Joshua D. Drake
Peter Eisentraut wrote:
Joshua D. Drake wrote:
 

Well you can't just "upgrade" 7.2.1 to 7.4.6. You have to dump and
restore.
   

The Debian package does that automatically.  On some days...
 

Really? WOW! I wonder if Gentoo does that. That is pretty
remarkable.
Sincerely,
Joshua D. Drake


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Nesting Stored Procedure Calls

2004-11-24 Thread Ben
Yes, it's no different than any other select statement. Just keep in 
mind that it's all one big happy transaction - if the inner stored proc 
aborts, so does the outer one. (Actually, that may have changed in 
version 8, but I don't know.)

On Nov 24, 2004, at 8:17 AM, Larry White wrote:
Is it possible for one stored procedure to call another?  I would like
certain procs to call a different proc to update an audit trail
whenever they're executed.

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Copying into Unicode - Correcting Errors

2004-11-24 Thread Hunter Hillegas
Peter,

Thanks for the reply.

Perhaps I should go into some more detail about what is going on.

Originally, the database was in SQL_ASCII and the data had been imported via
COPY from a text file. The text file is no longer available. The data went
into the table just fine.

When selecting from the table via JDBC, I see this exception:

'Invalid character data was found.  This is most likely caused by stored
data containing characters that are invalid for the character set the
database was created in.  The most common example of this is storing 8bit
data in a SQL_ASCII database.'

Ok, so I've never seen this but I do a little investigation and some of the
stuff I see online suggests that I should change the database encoding.

When I try UNICODE, I get the error below during my data import.

The 'bad' data looks like this when I SELECT:

| Ver?onica |

Is it possible that this is an issue with beta5 in conjunction with the JDBC
driver and encoding? I didn't see a CHANGELOG note that would make me
suspicious, but I'm not sure I would know it if I saw it.

Hunter

> From: Peter Eisentraut <[EMAIL PROTECTED]>
> Date: Wed, 24 Nov 2004 11:19:44 +0100
> To: Hunter Hillegas <[EMAIL PROTECTED]>
> Cc: PostgreSQL <[EMAIL PROTECTED]>
> Subject: Re: [GENERAL] Copying into Unicode - Correcting Errors
> 
> Hunter Hillegas wrote:
>> I need to import a file into a Unicode database.
>> 
>> I am getting an error:
>> 
>> ERROR:  Unicode characters greater than or equal to 0x10000 are not
>> supported
>> CONTEXT:  COPY mailing_list_entry, line 30928, column
>> first_last_name: "Ver?nica"
> 
> If your file really does have Unicode characters greater than or equal
> to 0x10000, then I don't have a good answer.
> 
> But more often, this error means that your file is not in Unicode in the
> first place.  If so, set the client encoding to the real encoding of
> your file, e.g.
> 
> export PGCLIENTENCODING=LATIN1
> 
> -- 
> Peter Eisentraut
> http://developer.postgresql.org/~petere/
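If the rows were loaded as LATIN1 bytes into the old SQL_ASCII database, one way to migrate is to dump the table, then reload it into the UNICODE database with the client encoding declaring what the bytes really are (the file path below is a placeholder; mailing_list_entry is the table from the error message):

```sql
-- In psql, connected to the new UNICODE database, before re-running COPY:
SET client_encoding TO 'LATIN1';
COPY mailing_list_entry FROM '/tmp/mailing_list.txt';
```

The server then transcodes LATIN1 to the database's UNICODE encoding on the way in, instead of rejecting the high-bit bytes.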



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[GENERAL] Nesting Stored Procedure Calls

2004-11-24 Thread Larry White
Is it possible for one stored procedure to call another?  I would like
certain procs to call a different proc to update an audit trail
whenever they're executed.

I thought about using triggers but want the trail to include info
that's not in the updated table -  specifically the application user
ID of the responsible party.
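For reference, a minimal sketch of one PL/pgSQL function calling another, passing the application user ID down explicitly (7.x-era positional-parameter syntax; table and function names are made up for illustration):

```sql
CREATE FUNCTION audit_log(text, text) RETURNS integer AS '
BEGIN
    -- $1 = application user id, $2 = action description
    INSERT INTO audit_trail (app_user, action, logged_at)
    VALUES ($1, $2, now());
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

CREATE FUNCTION update_account(text, integer) RETURNS integer AS '
BEGIN
    UPDATE accounts SET balance = balance + $2 WHERE owner = $1;
    -- Nested call: runs inside the same transaction as the caller,
    -- so if the caller aborts, the audit row is rolled back too.
    PERFORM audit_log($1, ''update_account'');
    RETURN 1;
END;
' LANGUAGE 'plpgsql';
```

Passing the application user ID as an argument sidesteps the trigger limitation mentioned above, since triggers only see the row being changed.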

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] PostGreSQL upgrade failed (Debian Packages), need advice...

2004-11-24 Thread Peter Eisentraut
Joshua D. Drake wrote:
> Well you can't just "upgrade" 7.2.1 to 7.4.6. You have to dump and
> restore.

The Debian package does that automatically.  On some days...

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] PostGreSQL upgrade failed (Debian Packages), need advice...

2004-11-24 Thread Peter Eisentraut
Eric D Nielsen wrote:
> I recently tried to upgrade from the 7.2.1 PostgreSQL package on
> Debian Stable to the 7.4.6 PostgreSQL package on Debian Testing.  The
> automatic update failed, message included below.  The documentation
> for manual upgrades references a script which does not appear to
> exist (postgresql-dump) in the postgres/dumpall/7.2/ directory.

If the upgrade of the Debian package failed, please submit a bug report 
for the Debian package (after scanning previous bug reports for 
duplicates).  That might not help fixing your installation, but at 
least the problem might be corrected in the future.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] pgdump of schema...

2004-11-24 Thread Tom Lane
"Net Virtual Mailing Lists" <[EMAIL PROTECTED]> writes:
> When I do a "pgdump --schema=someschema somedatabase > something.dump",
> the results of the dump file look like this:

> CREATE TABLE emailtemplates (
> email_template_id integer DEFAULT
> nextval('"emailtemplate_email_templat_seq"'::text) NOT NULL,

Hmm.  What you've apparently got here is a serial column that you've
carried forward from an old (pre 7.3 at least) database.  Had the serial
default been created in 7.3 or later then it would be a fully qualified
name (ie nextval('someschema.emailtemplate_email_templat_seq')) and
there would be no issue.  For that matter, had the SERIAL column been
created in 7.3 or later, pg_dump would know to say

CREATE TABLE emailtemplates (
email_template_id SERIAL,
...

instead of what it did say.  Now it is surely not pg_dump's charter
to editorialize on default expressions that were supplied by the user
(which this was, as far as the current database knows).  So this isn't a
pg_dump bug.  What it is is a deficiency in the upgrade process that we
had from pre-7.3 to 7.3 databases.  You might want to consider running
contrib/adddepend against your database to fix things up.  (But note
that it's just a contrib script and is not guaranteed; so keep a prior
dump around ...)
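If only a few columns are affected, a manual alternative to adddepend is to schema-qualify the default yourself (schema and sequence names below are taken from the quoted dump):

```sql
ALTER TABLE emailtemplates
    ALTER COLUMN email_template_id
    SET DEFAULT nextval('someschema.emailtemplate_email_templat_seq');
```

After that, a schema-only dump restores cleanly regardless of the search path, though the column still won't be dumped as SERIAL.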

regards, tom lane

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Joshua D. Drake
Peter Eisentraut wrote:
Francis Reed wrote:
 

We tested the principle with postgres 7.4.6 but found what we
believe is a compile-time dependency in create_conversion.sql where
$libdir is not being resolved properly during the initdb process on
the second machine. The usual environment variables don't seem to
help (LD_LIBRARY_PATH; PATH; PGLIB etc). Anyone come across that?
   

Moving the installation to a different path is not supported for 
permanent use before 8.0.  You better try to install in the same 
directory layout on all machines where you want to deploy.
 

You can also use links, which is what we do. Mammoth Replicator
automatically installs in /usr/local/pgsql, but we often put it
in /opt and just link /usr/local/pgsql back.
Sincerely,
Joshua D. Drake



--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] PostGreSQL upgrade failed (Debian Packages), need advice...

2004-11-24 Thread Joshua D. Drake
Eric D Nielsen wrote:
I recently tried to upgrade from the 7.2.1 PostgreSQL package on Debian Stable
to the 7.4.6 PostgreSQL package on Debian Testing.  The automatic update
failed, message included below.  The documentation for manual upgrades
references a script which does not appear to exist (postgresql-dump) in the
postgres/dumpall/7.2/ directory.
Can anyone advise me on how to proceed?  I would prefer to stick with the
Debian packages, but if I must I can deal with compiling from source for
intermediate versions, etc.
 

Well you can't just "upgrade" 7.2.1 to 7.4.6. You have to dump and restore.
My suggestion would be to dump your 7.2.1 database, using the 7.4.6 pg_dump 
(built from source) if you can. Then remove 7.2.1 and reinstall 7.4.6.
Once 7.4.6 is installed, restore your dump and you should be good to go.

Sincerely,
Joshua D. Drake

Thank you.
Eric Nielsen
- Begin script output
The postmaster did not start after postgresql was installed:
Stopping PostgreSQL database server: postmasterpg_ctl: could not find
/var/lib/postgres/data/postmaster.pid
Is postmaster running?
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
 


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] PostGreSQL upgrade failed (Debian Packages), need advice...

2004-11-24 Thread Eric D Nielsen
I recently tried to upgrade from the 7.2.1 PostgreSQL package on Debian Stable
to the 7.4.6 PostgreSQL package on Debian Testing.  The automatic update
failed, message included below.  The documentation for manual upgrades
references a script which does not appear to exist (postgresql-dump) in the
postgres/dumpall/7.2/ directory.

Can anyone advise me on how to proceed?  I would prefer to stick with the
Debian packages, but if I must I can deal with compiling from source for
intermediate versions, etc.

Thank you.

Eric Nielsen

- Begin script output
The postmaster did not start after postgresql was installed:

Stopping PostgreSQL database server: postmasterpg_ctl: could not find
/var/lib/postgres/data/postmaster.pid
Is postmaster running?
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] problem in modifing column datatype of a table

2004-11-24 Thread Peter Eisentraut
Durga Prasad Mohapatra wrote:
>   I am a newbie in PostgreSQL.  I want to modify the column datatype of a
> table; how can I do it?
> For example, there is a column with datatype varchar and I want to change
> it to text.
> How can I?

Before 8.0: Recreate the table and copy the data.
After 8.0: ALTER TABLE
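Concretely (table and column names are illustrative):

```sql
-- 8.0 and later:
ALTER TABLE my_table ALTER COLUMN my_col TYPE text;

-- Before 8.0: add a new column, copy the data, and swap the names.
BEGIN;
ALTER TABLE my_table ADD COLUMN my_col_new text;
UPDATE my_table SET my_col_new = my_col;
ALTER TABLE my_table DROP COLUMN my_col;
ALTER TABLE my_table RENAME COLUMN my_col_new TO my_col;
COMMIT;
```

Note that the pre-8.0 recipe changes the column's position and drops any default, constraints, and indexes on it, so those have to be recreated by hand.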

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Peter Eisentraut
Francis Reed wrote:
> We tested the principle with postgres 7.4.6 but found what we
> believe is a compile-time dependency in create_conversion.sql where
> $libdir is not being resolved properly during the initdb process on
> the second machine. The usual environment variables don't seem to
> help (LD_LIBRARY_PATH; PATH; PGLIB etc). Anyone come across that?

Moving the installation to a different path is not supported for 
permanent use before 8.0.  You better try to install in the same 
directory layout on all machines where you want to deploy.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Performance

2004-11-24 Thread Richard Huxton
Werdin Jens wrote:
Hello,
I have a big performance problem.
I'm running postgres 7.4.2 on Suse Linux 9.0 on a dual Xeon 3.0 GHz with 3
Gbyte Ram.
In postgres.conf I'm using the defaults.
That's the place to start. See the guide at:
  http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
For your hardware, the default configuration settings are far too small. 
Oh, and you should upgrade to the latest 7.4 as soon as convenient.
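As a very rough illustration of the kind of postgresql.conf changes that guide discusses for a 3 GB machine (the numbers below are starting points to tune, not recommendations):

```
# postgresql.conf sketch for a 3 GB RAM server (7.4 parameter names)
shared_buffers = 10000          # ~80 MB at 8 kB pages; the default is far lower
sort_mem = 8192                 # per-sort working memory, in kB
effective_cache_size = 200000   # ~1.6 GB; hint about the OS disk cache size
```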

> Filesystem is ext3 with writeback
journaling
I have 3 tables with ca 10 million entries with a gist index on GIS data and
5 tables with 10 million entries with an index on (timestamp,double,double).
There are 10 tables with 1 million entries and index on int. and some
smaller tables.
With 1 Gbyte of RAM all went fine. Then I added a new table and it started to
swap. I added 2 Gbyte but the problem is still there.
The kswapd and kjournald are running nearly permanently.
If the system is swapping that's not likely to be due to PostgreSQL, 
especially on the default configuration settings.

The first time I do a query it takes very long. But the second time it goes
a lot faster.
That's because the data is cached in RAM the second time.
Is postgres only using a certain amount of RAM for the indexes? But why is my
RAM full then?
Am I too short of RAM? Is the filesystem too slow?
What is "top" showing for memory usage?
What does vmstat show for activity when you are having problems?
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] problem in modifing column datatype of a table

2004-11-24 Thread Durga Prasad Mohapatra
Hi,

  I am newbie in postgresql.I want to modify column datatype of a table
  how can i do it.
for example there is a column with datatype varchar, i want to change it
to text.
How can i.

Thanks
Regards
Durga

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Richard Huxton
Francis Reed wrote:
Hi Richard, Sun/SPARC is the platform of choice in our case. We tested the
principle with postgres 7.4.6 but found what we believe is a compile-time
dependency in create_conversion.sql where $libdir is not being resolved
properly during the initdb process. The usual environment variables don't
seem to help (LD_LIBRARY_PATH; PATH; PGLIB etc). It's not clear whether the
concept of binary distribution is supported, hence my general question.
Sounds like it should be fine, for your situation*. It could be there is 
a glitch in create_conversion.sql, especially if it is something that 
won't show if you compile on the machine you want to run on. You might 
want to file a bug on it. I'm afraid I don't have a copy of the 7.4.6 
tarball to hand.

It might also be worth checking 8.0beta to see if it's fixed there - I 
know some relocation changes were made there (because of the Windows 
port iirc).

Bit puzzled there isn't some standard binary package already built for 
various flavours of Sun box though.

* of course, assuming you're on the same OS version with compatible 
libraries etc etc etc.

PS - cc the list and the sender when you reply, it's the convention 
round here (though not on most lists I'll grant you)
--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [GENERAL] just a test

2004-11-24 Thread ohp
last one I hope

-- 
Olivier PRENANT Tel: +33-5-61-50-97-00 (Work)
6, Chemin d'Harraud Turrou   +33-5-61-50-97-01 (Fax)
31190 AUTERIVE   +33-6-07-63-80-64 (GSM)
FRANCE  Email: [EMAIL PROTECTED]
--
Make your life a dream, make your dream a reality. (St Exupery)

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Performance

2004-11-24 Thread Werdin Jens

Hello,


I have a big performance problem.
I'm running postgres 7.4.2 on Suse Linux 9.0 on a dual Xeon 3.0 GHz with 3 Gbyte Ram.
In postgres.conf I'm using the defaults. Filesystem is ext3 with writeback journaling


I have 3 tables with ca 10 million entries with a gist index on GIS data and 5 tables with 10 million entries with an index on (timestamp,double,double).

There are 10 tables with 1 million entries and index on int. and some smaller tables.


With 1 Gbyte of RAM all went fine. Then I added a new table and it started to swap. I added 2 Gbyte but the problem is still there.

The kswapd and kjournald are running nearly permanently.


The first time I do a query it takes very long. But the second time it goes a lot faster.


Is postgres only using a certain amount of RAM for the indexes? But why is my RAM full then?
Am I too short of RAM? Is the filesystem too slow?


Can anyone help me?


Greeting
Jens


Re: [GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Doug McNaught
Francis Reed <[EMAIL PROTECTED]> writes:

> If I want to create a postgres database on multiple machines, is the
> practice of tarring or zipping up binaries compiled on one machine and
> untarring them on another, and using the binaries (initdb etc) acceptable?.
> This removes the need for having a compiler and environment on the target
> machine, or is it necessary always to have such an environment on any
> machine you intend to use postgres on? Postgres seems to have enough
> environment options to allow this to work, overriding the original library
> locations and paths etc from the original machine on which postgres was
> compiled.
>
> Does anyone see a problem with this approach?

I've had no problems doing this, though I generally standardize the
install location across multiple machines.

-Doug

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Francis Reed
That's what we had hoped.

We tested the principle with postgres 7.4.6 but found what we believe is a
compile-time dependency in create_conversion.sql, where $libdir is not being
resolved properly during the initdb process on the second machine. The usual
environment variables don't seem to help (LD_LIBRARY_PATH; PATH; PGLIB etc).
Anyone come across that?

Thx

-Original Message-
From: Martijn van Oosterhout [mailto:[EMAIL PROTECTED]
Sent: 24 November 2004 13:33
To: Francis Reed
Cc: '[EMAIL PROTECTED]'
Subject: Re: [GENERAL] Moving/Using Postgres Binaries on multiple
machines


Sure, this is what Linux distributors do. They compile postgresql into
a binary package which is installed on the user's machine.

It works as long as the environments are reasonably compatible, all
have readline, similar libc, etc.

Hope this helps,

On Wed, Nov 24, 2004 at 12:30:28PM -, Francis Reed wrote:
> If I want to create a postgres database on multiple machines, is the
> practice of tarring or zipping up binaries compiled on one machine and
> untarring them on another, and using the binaries (initdb etc)
acceptable?.
> This removes the need for having a compiler and environment on the target
> machine, or is it necessary always to have such an environment on any
> machine you intend to use postgres on? Postgres seems to have enough
> environment options to allow this to work, overriding the original library
> locations and paths etc from the original machine on which postgres was
> compiled.
> 
> Does anyone see a problem with this approach?
> 
> Thanks! 
> 
> Francis 
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster

-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Martijn van Oosterhout
Sure, this is what Linux distributors do. They compile postgresql into
a binary package which is installed on the user's machine.

It works as long as the environments are reasonably compatible, all
have readline, similar libc, etc.

Hope this helps,

On Wed, Nov 24, 2004 at 12:30:28PM -, Francis Reed wrote:
> If I want to create a postgres database on multiple machines, is the
> practice of tarring or zipping up binaries compiled on one machine and
> untarring them on another, and using the binaries (initdb etc) acceptable?.
> This removes the need for having a compiler and environment on the target
> machine, or is it necessary always to have such an environment on any
> machine you intend to use postgres on? Postgres seems to have enough
> environment options to allow this to work, overriding the original library
> locations and paths etc from the original machine on which postgres was
> compiled.
> 
> Does anyone see a problem with this approach?
> 
> Thanks! 
> 
> Francis 
> 
> ---(end of broadcast)---
> TIP 4: Don't 'kill -9' the postmaster

-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Richard Huxton
Francis Reed wrote:
If I want to create a postgres database on multiple machines, is the
practice of tarring or zipping up binaries compiled on one machine and
untarring them on another, and using the binaries (initdb etc) acceptable?.
This removes the need for having a compiler and environment on the target
machine, or is it necessary always to have such an environment on any
machine you intend to use postgres on? Postgres seems to have enough
environment options to allow this to work, overriding the original library
locations and paths etc from the original machine on which postgres was
compiled.
Does anyone see a problem with this approach?
Possibly hundreds. You clearly can't move from Sun/SPARC to BSD/x86 to 
Linux/PPC. What platform are you interested in and why isn't there a 
package manager for it?

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Creating index on a view?

2004-11-24 Thread Richard Huxton
Net Virtual Mailing Lists wrote:
CREATE TABLE table2 (
  table2_id INTEGER,
  table2_desc VARCHAR,
  table3_id   INTEGER[]
);
CREATE TABLE table3 (
  table3_id INTEGER,
  table3_desc VARCHAR
);
What I need is an "indirect index" (for lack of a better phrase) that
allows me to do:
SELECT b.table3_id, b.table3_desc FROM table2 a, table3 b WHERE
a.table2_id = 4 AND b.table3_id = ANY (a.table3_id);
.. in the above example, the "4" is the variable component in the
query...  The table3_id in table2 has the value of '{7}' - so when I do
the above select, it is actually retrieving records from table3 where
table3_id is equal to 7.
[snip]
SELECT b.table3_id, b.table3_desc FROM table3 b WHERE b.table3_id = 7; 
I don't think you want to use an array here. If you were to split your 
tables:
 table2 (t2_id, t2_desc);
 table3 (t3_id, t3_desc);
 table2_and_3 (t2_id, t3_id);
Then, you should find everything a lot easier.

Try not to use arrays as a set.
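A sketch of the split Richard suggests, with indexes on both sides of the join table (names follow his shorthand and are hypothetical):

```sql
CREATE TABLE table2 (
  t2_id   INTEGER PRIMARY KEY,
  t2_desc VARCHAR
);

CREATE TABLE table3 (
  t3_id   INTEGER PRIMARY KEY,
  t3_desc VARCHAR
);

-- One row per (t2, t3) pairing replaces the integer array column
CREATE TABLE table2_and_3 (
  t2_id INTEGER REFERENCES table2,
  t3_id INTEGER REFERENCES table3,
  PRIMARY KEY (t2_id, t3_id)
);
CREATE INDEX t23_t3_idx ON table2_and_3 (t3_id);

-- The original "= ANY (array)" query becomes an ordinary join,
-- which the planner can satisfy with index scans on both tables:
SELECT b.t3_id, b.t3_desc
FROM table2_and_3 j JOIN table3 b ON b.t3_id = j.t3_id
WHERE j.t2_id = 4;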
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Can't get planner to use multicolumn index on large

2004-11-24 Thread Neil Conway
Ulrich Meis wrote:
CREATE TABLE data.question_result (
id  bigserial   PRIMARY KEY,
trial_idbigint  NOT NULL REFERENCES data.trial(id),
question_id bigint  REFERENCES content.question(id),
,
);

mydb=# explain analyze select * from data.question_result where trial_id=1
and question_id=2;
This is a well-known optimizer deficiency. You need to single-quote the 
numeric literals or cast them to the type of the column, or else you 
won't get index scans for non-int4 columns. In other words:

explain analyze select * from data.question_result where trial_id='1' 
and question_id='2'

This is fixed in 8.0.
-Neil
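Spelled out against Ulrich's table (bigint columns, 7.4), the workaround Neil describes looks like this:

```sql
-- Unquoted integer literals are typed int4, which on 7.4 cannot
-- match an index on a bigint column, forcing a sequential scan:
EXPLAIN ANALYZE
SELECT * FROM data.question_result
WHERE trial_id = 1 AND question_id = 2;

-- Quoting lets each literal take on the column's type (bigint),
-- so the multicolumn index can be used:
EXPLAIN ANALYZE
SELECT * FROM data.question_result
WHERE trial_id = '1' AND question_id = '2';

-- An explicit cast works the same way:
SELECT * FROM data.question_result
WHERE trial_id = 1::bigint AND question_id = 2::bigint;
```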
---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Can't get planner to use multicolumn index on large table

2004-11-24 Thread Ulrich Meis
Hi!

I am trying to get postgres to use my index but it just doesn't. Please
Help!
Below are the table definition and a series of commands that I thought would
ensure usage of my index.

CREATE TABLE data.question_result (
id  bigserial   PRIMARY KEY,
trial_idbigint  NOT NULL REFERENCES data.trial(id),
question_id bigint  REFERENCES content.question(id),
,
);

mydb=# set enable_seqscan=false;
SET
mydb=# create unique index bothcols on data.question_result
(trial_id,question_id);
CREATE INDEX
mydb=# analyze verbose data.question_result;
INFO:  analyzing "data.question_result"
INFO:  "question_result": 4657 pages, 3000 rows sampled, 591439 estimated
total rows
ANALYZE
mydb=# explain analyze select * from data.question_result where trial_id=1
and question_id=2;
  QUERY
PLAN
--
 Seq Scan on question_result  (cost=100000000.00..100013528.58 rows=1
width=30) (actual time=883.641..883.641 rows=0 loops=1)
   Filter: ((trial_id = 1) AND (question_id = 2))
 Total runtime: 883.858 ms
(3 rows)

I inserted the data (not the schema) from a dump if that is of any
relevance.
I am running postgres 7.4.5 on gentoo linux 2.6.8.

Thanks for any Help,

Uli

P.S.: just did CLUSTER bothcols on data.question_result, analyze, explain
analyze...didn't help :-(

-- 
Geschenkt: 3 Monate GMX ProMail + 3 Top-Spielfilme auf DVD
++ Jetzt kostenlos testen http://www.gmx.net/de/go/mail ++

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Moving/Using Postgres Binaries on multiple machines

2004-11-24 Thread Francis Reed
If I want to create a postgres database on multiple machines, is the
practice of tarring or zipping up binaries compiled on one machine and
untarring them on another, and using the binaries (initdb etc) acceptable?
This removes the need for having a compiler and environment on the target
machine, or is it necessary always to have such an environment on any
machine you intend to use postgres on? Postgres seems to have enough
environment options to allow this to work, overriding the original library
locations and paths etc from the original machine on which postgres was
compiled.

Does anyone see a problem with this approach?

Thanks! 

Francis 

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Insert may fail if i create a primary key on the oid

2004-11-24 Thread Richard Huxton
songsubo wrote:
> I create a primary key on the "oid" column. The oid is generate by
> system itself. The oid may overlap, when this happen, this operation
> may fail?

Yes. If you want a self-incrementing primary key, why not use a SERIAL type?
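A minimal sketch of the SERIAL alternative, with a hypothetical table name:

```sql
-- SERIAL gives a sequence-backed integer key; unlike oid, the
-- PRIMARY KEY constraint guarantees it never collides.
CREATE TABLE item (
  item_id SERIAL PRIMARY KEY,
  name    VARCHAR NOT NULL
);

-- item_id is assigned automatically from the sequence:
INSERT INTO item (name) VALUES ('first');
```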

-- 
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Creating index on a view?

2004-11-24 Thread Net Virtual Mailing Lists
The problem in my case is that the view does a join between table2 and
table3 and I want to do a select on a value from table2.  So at the point
the expansion happens, I am actually doing a query on a column that does
not exist in table3 - it only exists in table2.

Given what you said, perhaps a better way of me explaining it would be
without the use of a view, since it seems irrelevant:



CREATE TABLE table2 (
  table2_id INTEGER,
  table2_desc VARCHAR,
  table3_id   INTEGER[]
);

CREATE TABLE table3 (
  table3_id INTEGER,
  table3_desc VARCHAR
);


What I need is an "indirect index" (for lack of a better phrase) that
allows me to do:

SELECT b.table3_id, b.table3_desc FROM table2 a, table3 b WHERE
a.table2_id = 4 AND b.table3_id = ANY (a.table3_id);

.. in the above example, the "4" is the variable component in the
query...  The table3_id in table2 has the value of '{7}' - so when I do
the above select, it is actually retrieving records from table3 where
table3_id is equal to 7.

.. assuming tables where table2 is very small and table3 is very large,
creating an index on table3(table3_id) does not seem to yield good
performance.  (In fact, I can't get it to use the index at all in
this case no matter what I do).  To be more precise, if table3 has 24,000
rows and selecting table2_id of "4" using the above query 800 rows would
be returned, it always does a sequential scan on table3.  Comparing this
with doing:

SELECT b.table3_id, b.table3_desc FROM table3 b WHERE b.table3_id = 7; 

.. when there is an index on table3(table3_id) - an index scan is
performed instead of a table scan.


As for why I want to do this it is because there is another table
(table1) where the schema is different from table3 and I'm trying to use
table2 as a means of doing an "on the fly conversion" (via a view) so
that a "SELECT ... FROM table1 UNION select ... FROM view1" will work.

I thought about the materialized view, but I'm concerned that with the
number of records I would essentially be doubling my disk usage.

I hope this clarifies... I think I'm confused just trying to explain it!

- Greg

>Net Virtual Mailing Lists wrote:
>> My question is regarding creating an index on a view, or perhaps
>> another way to accomplish this.
>
>Views are just macro expansions of queries (in a manner of speaking).  
>To make queries on views use indexes, you create the indexes on the 
>underlying tables in the same way as if you had typed in the expanded 
>view query yourself.  (In your example, you'd just need the usual 
>indexes on the primary keys.)
>
>If what you want is that the creation of an index on a view 
>automatically materializes that view (don't other database systems do 
>that?), then you will have to implement that manually in PostgreSQL, 
>with triggers and a bunch of code.
>
>-- 
>Peter Eisentraut
>http://developer.postgresql.org/~petere/
>



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [GENERAL] Insert may fail if i create a primary key on the oid column?

2004-11-24 Thread Michael Glaesemann
On Nov 24, 2004, at 8:18 PM, songsubo wrote:
I create a primary key on the "oid" column. The oid is generate by 
system itself. The oid may overlap, when this happen, this operation 
may fail?
Yes. Check the mailing list archives, as this was just recently 
discussed.

Michael Glaesemann
grzm myrealbox com
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Insert may fail if i create a primary key on the oid column?

2004-11-24 Thread songsubo
I create a primary key on the "oid" column. The oid is generate by system 
itself. The oid may overlap, when this happen, this operation may fail?




---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Best practice updating data in webapps?

2004-11-24 Thread Bjørn T Johansen
Maybe a bit outside the scope of PGSQL but when designing webapps where different users 
can edit the same record at the same time, what is the best way to solve this situation?
One way is to use an update field that gets selected along with the rest of the data; when 
updating, check this field against the one in the db - if they don't match, 
someone else has updated the record before you..
But is this the best way?
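The scheme Bjørn describes is usually called optimistic locking; a minimal sketch with a hypothetical table and version counter:

```sql
-- Hypothetical table with a version counter for optimistic locking
CREATE TABLE document (
  doc_id  INTEGER PRIMARY KEY,
  body    TEXT,
  version INTEGER NOT NULL DEFAULT 0
);

-- The reader fetches the row and remembers the version (say, 3):
SELECT body, version FROM document WHERE doc_id = 1;

-- The writer only succeeds if nobody updated the row in the meantime:
UPDATE document
SET body = 'new text', version = version + 1
WHERE doc_id = 1 AND version = 3;
-- If this reports "UPDATE 0", another user got there first and the
-- application should re-read the row and retry (or report a conflict).
```

The same check works with a timestamp column instead of a counter; the integer version avoids clock-resolution issues when two edits land in the same instant.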

Regards,
BTJ
--
---
Bjørn T Johansen
[EMAIL PROTECTED]
---
Someone wrote:
"I understand that if you play a Windows CD backwards you hear strange Satanic 
messages"
To which someone replied:
"It's even worse than that; play it forwards and it installs Windows"
---
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [GENERAL] Row creation time

2004-11-24 Thread Najib Abi Fadel
Actually I have an old table without any timestamp column. I want to know
the creation time of one row. One way is to look in the Postgres dump files
(which could take some time) to see the date the row was inserted.
I was hoping there was another way.

Thx anyways.


- Original Message - 
From: "Michael Glaesemann" <[EMAIL PROTECTED]>
To: "Najib Abi Fadel" <[EMAIL PROTECTED]>
Cc: "generalpost" <[EMAIL PROTECTED]>
Sent: Wednesday, November 24, 2004 12:40 PM
Subject: Re: [GENERAL] Row creation time


>
> On Nov 24, 2004, at 7:07 PM, Najib Abi Fadel wrote:
>
> > Is there a way to get the row creation time if we know its object ID?
>
> Only if you have a timestamp column on the table that records the
> creation time. For example,
>
> create table foo (
> foo_id serial not null unique
> , created_timestamp timestamptz not null
> default current_timestamp
> ) without oids;
>
> Some people also like to include a modified_timestamp column, which can
> easily be updated via an after update trigger.
>
> hth
>
> Michael Glaesemann
> grzm myrealbox com
>



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Row creation time

2004-11-24 Thread Michael Glaesemann
On Nov 24, 2004, at 7:07 PM, Najib Abi Fadel wrote:
Is there a way to get the row creation time if we know its object ID?
Only if you have a timestamp column on the table that records the 
creation time. For example,

create table foo (
foo_id serial not null unique
, created_timestamp timestamptz not null
default current_timestamp
) without oids;
Some people also like to include a modified_timestamp column, which can 
easily be updated via an after update trigger.
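A sketch of that modified_timestamp trigger in PL/pgSQL (hypothetical names; assumes plpgsql is installed in the database, e.g. via createlang). One detail: to change the row value being stored, the trigger has to fire BEFORE UPDATE rather than AFTER:

```sql
ALTER TABLE foo ADD COLUMN modified_timestamp TIMESTAMPTZ;

-- 7.4-style quoted function body (dollar quoting arrives in 8.0)
CREATE OR REPLACE FUNCTION touch_modified() RETURNS trigger AS '
BEGIN
    NEW.modified_timestamp := current_timestamp;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

-- BEFORE, so the modified NEW row is what actually gets written
CREATE TRIGGER foo_touch
    BEFORE UPDATE ON foo
    FOR EACH ROW EXECUTE PROCEDURE touch_modified();
```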

hth
Michael Glaesemann
grzm myrealbox com
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Creating index on a view?

2004-11-24 Thread Patrick Fiche
Some time ago, an excellent tutorial on materialized views with PostgreSQL
was pointed at this address
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html

Patrick

> -----------------------------------------------------------
> Patrick Fiche
> email : [EMAIL PROTECTED]
> tél : 01 69 29 36 18
> -----------------------------------------------------------

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Peter Eisentraut
Sent: mercredi 24 novembre 2004 11:15
To: Net Virtual Mailing Lists
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Creating index on a view?


Net Virtual Mailing Lists wrote:
> My question is regarding creating an index on a view, or perhaps
> another way to accomplish this.

Views are just macro expansions of queries (in a manner of speaking).
To make queries on views use indexes, you create the indexes on the
underlying tables in the same way as if you had typed in the expanded
view query yourself.  (In your example, you'd just need the usual
indexes on the primary keys.)

If what you want is that the creation of an index on a view
automatically materializes that view (don't other database systems do
that?), then you will have to implement that manually in PostgreSQL,
with triggers and a bunch of code.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]




Protected by Polesoft Lockspam
http://www.polesoft.com/refer.html


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Copying into Unicode - Correcting Errors

2004-11-24 Thread Peter Eisentraut
Hunter Hillegas wrote:
> I need to import a file into a Unicode database.
>
> I am getting an error:
>
> ERROR:  Unicode characters greater than or equal to 0x10000 are not
> supported
> CONTEXT:  COPY mailing_list_entry, line 30928, column
> first_last_name: "Ver?nica"

If your file really does have Unicode characters greater than or equal 
to 0x10000, then I don't have a good answer.

But more often, this error means that your file is not in Unicode in the 
first place.  If so, set the client encoding to the real encoding of 
your file, e.g.

export PGCLIENTENCODING=LATIN1
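Alternatively, the file can be converted to UTF-8 up front with iconv before loading it with COPY. A quick sketch (hypothetical file names; printf stands in for the real export file, 0xF3 being the LATIN1 byte for ó):

```shell
# Create a small LATIN1 sample file ("Verónica" with a LATIN1 ó)
printf 'Ver\363nica\n' > /tmp/list_latin1.txt

# Convert it to UTF-8 so a Unicode database accepts it as-is
iconv -f LATIN1 -t UTF-8 /tmp/list_latin1.txt > /tmp/list_utf8.txt

cat /tmp/list_utf8.txt
```

Either approach works; setting PGCLIENTENCODING lets the server do the conversion, while iconv keeps the fix entirely on the client side.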

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Row creation time

2004-11-24 Thread Najib Abi Fadel
Is there a way to get the row creation time if we know its object ID?

Thx


Re: [GENERAL] Creating index on a view?

2004-11-24 Thread Peter Eisentraut
Net Virtual Mailing Lists wrote:
> My question is regarding creating an index on a view, or perhaps
> another way to accomplish this.

Views are just macro expansions of queries (in a manner of speaking).  
To make queries on views use indexes, you create the indexes on the 
underlying tables in the same way as if you had typed in the expanded 
view query yourself.  (In your example, you'd just need the usual 
indexes on the primary keys.)

If what you want is that the creation of an index on a view 
automatically materializes that view (don't other database systems do 
that?), then you will have to implement that manually in PostgreSQL, 
with triggers and a bunch of code.
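The simplest manual version of what Peter describes is a snapshot table refreshed on demand (hypothetical names, using the view1 defined earlier in this thread; the trigger-maintained, incrementally-updated variants take considerably more code):

```sql
-- Materialize the view into a real table, which CAN be indexed
CREATE TABLE matview1 AS SELECT * FROM view1;
CREATE INDEX matview1_table2_id_idx ON matview1 (table2_id);

-- Refresh by hand (or from cron) when the base tables have changed;
-- wrapping it in a transaction means readers never see an empty table:
BEGIN;
DELETE FROM matview1;
INSERT INTO matview1 SELECT * FROM view1;
COMMIT;
```

The cost is staleness between refreshes and roughly double the disk space for the materialized rows.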

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Upcoming Changes to News Server ...

2004-11-24 Thread Martijn van Oosterhout
On Tue, Nov 23, 2004 at 09:07:04PM -0500, Gary L. Burnore wrote:
> We've removed all of the comp.databases.postgres.* groups from our server 
> and our feeds anyway.  So did Google.  So will anyone else who's still 
> holding the bogus groups.
> 
> Basically, the thing that Marc is doing that's 'bad', is unilaterally 
> making changes that affect your list without any discussion with those who 
> they affect, either ON the list or in USENet.  USENet people tried to help 
> and got a "we don't see it as broken from our side so who cares?" attitude.

I thought initially too that the discussion was "we have some bogus
groups here, let's just formalise them and all will be well". However,
the groups are being given no reprieve; they're being dropped all over
the place now. So from a purely practical point of view the right way
to go is to remove all the bogus groups and create them elsewhere.
After all, people still want to read them on usenet.

Even after the CFV goes through, the remaining dozen groups will still
be bogus and still need a place to live. Hence pgsql.*

This is all perfectly logical reasoning; I honestly can't understand why
this is "bad". It's the only way, by my understanding. As someone who
left usenet five years ago, all I can see are social problems, not
technical ones.

As for the list, it's been around for more than six years and will keep
going, with or without usenet.

Good day, 
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] VACUUM and ANALYZE With Empty Tables

2004-11-24 Thread Richard Huxton
Mark Dexter wrote:
We use a development environment that works with Postgres via ODBC and
uses cursors to insert and update rows in Postgres tables.  I'm using
Postgres version 7.4.5.

A. If I TRUNCATE or DELETE all of the rows in the table and then run
VACUUM or ANALYZE on the empty table, the test program takes over 15
minutes to complete (i.e., 15X performance drop).

If we routinely run VACUUM or VACUUM ANALYZE (e.g., nightly), these work
tables will normally be empty when the VACUUM is run.  So it would
appear from the testing above that they will experience performance
problems when inserting large numbers of rows  through our application.
Yep - it's a known issue. The analyse is doing what you asked, it's just 
not what you want.

Is there some easy way around this problem?  If there a way to force
VACUUM or ANALYZE to optimize for a set number of rows even if the table
is empty when it is run?  Thanks for your help.   Mark
There are only two options I know of:
 1. Vacuum analyse each table separately (tedious, I know)
 2. Try pg_autovacuum in the contrib/ directory
The autovacuum utility monitors activity for you and targets tables when 
they've seen a certain amount of activity. Even if it hasn't got the 
tunability you need, it should be a simple patch to add a list of 
"excluded" tables.

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] I need to add a column to a table

2004-11-24 Thread Richard Huxton
Krause, Lewis wrote:
I know there is a command to add the column. I want to back up the table
before I alter it. What is the best way. It has a couple indexes and a
trigger.
pg_dump -t tablename ...
--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Creating index on a view?

2004-11-24 Thread Net Virtual Mailing Lists
Hello,

First, let me apologize for my flurry of emails as of late... I'm working
on something which seems particularly difficult (at least to me)...

My question is regarding creating an index on a view, or perhaps another
way to accomplish this.  For example:

CREATE TABLE table1 (
   table1_id SERIAL,
   table2_id INTEGER
);

CREATE TABLE table2 (
  table2_id INTEGER,
  table2_desc VARCHAR,
  table3_id   INTEGER[]
);

This allows me, simply, to do queries like:

SELECT a.table1_id, b.table2_id, b.table2_desc FROM table1 a,table2 b
WHERE a.table2_id = b.table2_id; 

But now I have another table with data which needs to be converted into
the format of table1:

CREATE TABLE table3 (
  table3_id INTEGER,
  table3_desc VARCHAR
);

CREATE VIEW view1 (table1_id, table2_id) AS
  SELECT a.table3_id, b.table2_id
  FROM table3 a,
   table2 b
  WHERE a.table3_id = ANY (b.table3_id)
;


With this method I can execute the exact same query against view1 as I do
table1, but when I need to do something like:

SELECT * FROM view1 WHERE table2_id=1;

.. What I really want to do is:

CREATE INDEX view1_table2_id_idx ON view1(table2_id);

.. I can't figure out how to make such a query use an index
efficiently  What I am trying to do is sort-of do an on-the-fly data
conversion of table3 to table1, for purposes of doing a "UNION" on the
two of them.  My join is between several more tables than this
example 

Any thoughts on how to accomplish something like this?...

Thanks as always!

- Greg


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Any good report/form generators for postgresql?

2004-11-24 Thread Chris Green
On Wed, Nov 24, 2004 at 08:55:35AM +0100, Alberto Cabello Sanchez wrote:
> On Sun, Nov 21, 2004 at 05:50:30PM +, Chris Green wrote:
> > Has anyone got any recommendations for forms/reports generators for
> > postgresql?  I can probably get the forms I want from Rekall so the
> > bigger requirement is a report generator.  The particular thing I
> > can't manage in Rekall at the moment is a 'running total' output
> > column on a report.
> > 
> You can give a try to JasperReports and some of the GUI design tools such as
> iReports or JasperAssistant. JasperReports seems to be fairly powerful.
> 
Thanks, those look interesting.

-- 
Chris Green ([EMAIL PROTECTED])

"Never ascribe to malice, that which can be explained by incompetence."

---(end of broadcast)---
TIP 8: explain analyze is your friend