Re: [GENERAL] query results different after vacuum analyze? 7.4.0

2004-05-27 Thread Tom Lane
Klint Gore <[EMAIL PROTECTED]> writes:
> There are 55k rows in t1 (103 fields) and 10k in t2 (4 fields, 4 is text).
> before vacuum analyze the query gave 10k rows like it was doing an inner
> join.  after vacuum analyze gave the full 55k.

Sounds like a bug ...

> 7.4.0, rh linux 7.2, p4 (non ht) cpu.

You are of course aware that there are a lot of known bugs in 7.4.0?
Please try it on 7.4.2.

If you can still make it happen on 7.4.2 then I'd be interested in a
test case...

regards, tom lane



Re: [GENERAL] Naive schema questions

2004-05-27 Thread Tom Lane
<[EMAIL PROTECTED]> writes:
> And where do tables created with "CREATE LOCAL TEMPORARY TABLE..." fit
> into this, like if say a local temp table were created that has the same
> name as an existing normal (i.e., not a local temp) table?

Temp tables live in a schema that is effectively inserted into your
search path ahead of whatever schema(s) are explicitly listed there.
For instance, suppose

CREATE SCHEMA a;
SET search_path = a, public;
CREATE TABLE t1 (...);
-- t1 is created in schema a
SELECT * FROM t1;
-- same as SELECT * FROM a.t1;

Now if I do

CREATE TEMP TABLE t1 (...);

then SELECT * FROM t1 will reference the temp table ... but I can still
get to the permanent table by explicitly qualifying it as "a.t1".

> And what if I
> do an explicit DROP of the local temp table rather than relying on the
> automatic, end-of-session clean-up?

You drop the temp table.

> Is there any risk of losing the normal table?

Only if you do it twice --- after the initial DROP, the permanent table
would come back "into view".
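
A minimal sketch of the whole sequence (column definitions are placeholders):

    CREATE TEMP TABLE t1 (x int);
    SELECT * FROM t1;    -- now reads the temp table
    SELECT * FROM a.t1;  -- still reads the permanent table
    DROP TABLE t1;       -- drops only the temp table
    SELECT * FROM t1;    -- resolves to a.t1 again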

regards, tom lane



[GENERAL] query results different after vacuum analyze? 7.4.0

2004-05-27 Thread Klint Gore
query is

  select t2.field4, t1.*
  from t1
  left outer join t2 on t2.field1 = t1.field1 and t2.field2 = t1.field2

There are 55k rows in t1 (103 fields) and 10k in t2 (4 fields, 4 is text).
Before vacuum analyze the query gave 10k rows, as if it were doing an inner
join; after vacuum analyze it gave the full 55k.

t2 is a new table which has probably never been vacuumed before.  The
10k rows in t2 were entered via insert statements.  There are no
triggers on t2.  It has a primary key (fields 1-3).  Field types for
the joined fields are the same.  Selecting from either table separately
gives the expected number of rows.  Returning only "t2.field4, t1.field2"
gives the correct number.

7.4.0, rh linux 7.2, p4 (non ht) cpu.

Can anyone think of a situation where vacuum analyze would change the
results of a query?  Am I looking at something that's been missed in
setting up the table?  Or a bug that's since been fixed?  Or do I need to
try and work out a test case?

klint


Klint Gore
EMail: [EMAIL PROTECTED]
Snail Mail: A.B.R.I., University of New England,
            Armidale NSW 2351 Australia
Fax: +61 2 6772 5376
"Non rhyming slang - the possibilities are useless" - L.J.J.



Re: [GENERAL] PostgreSQL Logs

2004-05-27 Thread Postgresql
Basically, I'm looking for information on logging connections (who's
connecting and when), what the users are doing when connected (creating
tables, updating data, etc), and if there are any system messages (like when
the postgresql server is started, stopped, if it has problems, etc).


Thanks!

-Original Message-
From: Mike G [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 27, 2004 10:27 PM
To: Postgresql
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] PostgreSQL Logs

Yes to the first two questions under number 1.  Not sure what you are
looking for on the last one.

Yes to number 2 if you configure it correctly although I don't know if it
can write to a file and screen at the same time.

Open the postgresql.conf file.

This page in the docs might help.

http://www.postgresql.org/docs/7.4/static/runtime-config.html


On Thu, May 27, 2004 at 05:39:19PM -0400, Postgresql wrote:
> What logs does postgresql produce?
>
> 1)   It has transaction logs. Can it log connections? Are there other
> logs?
>
> 2)   When I start postgresql it outputs log messages to the console.
> Does this also log anywhere else?
>
> What I'm trying to do is find all the postgresql log files and then find a
> way to load them into a database and then have a program scan that data and
> look for anything out of the ordinary or to do reporting on the logs (like
> how many connections per hour, per day, etc).
>
> Thank you!
>
> ~ T r o y ~







Re: [GENERAL] PostgreSQL Logs

2004-05-27 Thread Mike G
Yes to the first two questions under number 1.  Not sure what you are looking for on 
the last one.

Yes to number 2 if you configure it correctly although I don't know if it can write to 
a file and screen at the same time.

Open the postgresql.conf file.

This page in the docs might help.

http://www.postgresql.org/docs/7.4/static/runtime-config.html
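
For example, connection and statement logging are controlled there. A
sketch using 7.4-era parameter names (verify against the page above);
note that syslog = 1 sends messages to both syslog and standard output:

    log_connections = true    # log each successful connection
    log_statement = true      # log every SQL statement
    log_duration = true       # log each statement's run time
    log_timestamp = true      # prefix log lines with a timestamp
    log_pid = true            # prefix log lines with the backend PID
    syslog = 1                # 0 = stdout only, 1 = both, 2 = syslog only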


On Thu, May 27, 2004 at 05:39:19PM -0400, Postgresql wrote:
> What logs does postgresql produce?
>
> 1)   It has transaction logs. Can it log connections? Are there other
> logs?
>
> 2)   When I start postgresql it outputs log messages to the console.
> Does this also log anywhere else?
>
> What I'm trying to do is find all the postgresql log files and then find a
> way to load them into a database and then have a program scan that data and
> look for anything out of the ordinary or to do reporting on the logs (like
> how many connections per hour, per day, etc).
>
> Thank you!
>
> ~ T r o y ~



[GENERAL] How do I unsubscribe?

2004-05-27 Thread Lee Hanxue
Sorry for posting this to the list, but I tried sending a message to [EMAIL PROTECTED] 
with the message "unsubscribe postgresql-general" and I got no reply.

The web interface does not seem to work either. 

How else can I get myself unsubscribed from this list? Thank you.



Re: [GENERAL] pg_restore quick question

2004-05-27 Thread Chris Browne
[EMAIL PROTECTED] ("Sally Sally") writes:
> I want to dump a database containing about five tables, each
> containing about 25 million records. I wanted to run pg_restore per
> table all at the same time. Would this have any undesirable
> consequences. I tried it on a smaller database and it seems to work
> fine.
> Let me know your thoughts ASAP

Performance may be questionable, and if they touch the same table,
it's possible for there to be deadlocks, but aside from that, it
should generally _work_.
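
For what it's worth, a sketch of the parallel per-table invocation
(database, table, and dump file names are hypothetical), assuming an
archive-format dump that pg_restore can read:

    pg_restore -d mydb -t t1 db.dump &
    pg_restore -d mydb -t t2 db.dump &
    pg_restore -d mydb -t t3 db.dump &
    wait    # returns when all background restores finish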
-- 
"cbbrowne","@","ntlug.org"
http://www3.sympatico.ca/cbbrowne/linuxxian.html
DO IT -- it's easier to get forgiveness than permission.



Re: [GENERAL] Naive schema questions

2004-05-27 Thread Jan Wieck
On 5/27/2004 6:03 PM, Rory Campbell-Lange wrote:
> Just a question on this, Jan. Would one expect UNIONS for this sort of
> work?
>
> I just did this which is useful anyway:
>
> schematest=> SELECT 
>     (select count(id) from b.messages) 
>     + 
>     (select count(id) from a.messages);
>  ?column? 
> ----------
>         5
> (1 row)
>
> I see the horizons expanding! Common data (I often have an 'info' table)
> can be shared between schemas. I think my search_path might go:
>
> this_schema, info_schema, public_schema

You can mix those in queries however you want. They are just namespaces 
with some additional security (even if you grant public access to an 
object inside a schema, one still needs access to the schema itself). 
The search path lets you hide one schema's objects behind another's by 
choosing the order. You can use qualified or unqualified names and 
different search paths where one or the other makes sense in your 
application and data design. After all, all the objects reside in the 
same database and all access is covered by the same transaction.

The problem with expanded horizons is that one has more possibilities to 
screw it up at the same time he gets more flexibility. Well used, this 
is a powerful feature. Poorly applied and inconsistently used it can 
become a maintenance nightmare.

Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #


Re: [GENERAL] Naive schema questions

2004-05-27 Thread Rory Campbell-Lange
Fabulous stuff! I am so delighted I chose Postgresql a couple of years
ago. Thank you for the valuable insights. A comment or two below:

On 27/05/04, Peter Eisentraut ([EMAIL PROTECTED]) wrote:
> Am Donnerstag, 27. Mai 2004 13:15 schrieb Rory Campbell-Lange:
> > I imagined schemas might allow me to globally update functions across a
> > database hosting many schemas with the same structure.

> Put your data tables in separate schemas, put the functions in yet
> another schema, and then when you connect set the schema search path
> to "dataschema, functionschema" (or maybe vice versa).

On 27/05/04, Nick Barr ([EMAIL PROTECTED]) wrote:
> > Put your data tables in separate schemas, put the functions in yet
> > another schema, and then when you connect set the schema search path
> > to "dataschema, functionschema" (or maybe vice versa).

> Or when you make the calls in the web app use the following:
> 
> SELECT function_schema.function1(arg1, arg2);
> instead of just:
> SELECT function1(arg1, arg2);
> But like Peter said have a schema per client/"instance" of your database.

Is it ok to use the public schema for the functions? It means it is that
much easier to reload the functions as one wouldn't need to specify the
search_path.

On 27/05/04, Jan Wieck ([EMAIL PROTECTED]) wrote:
...
> It is even better. The property that sets your "schema context" is 
> called search_path. This contains a list of schema names. For an 
> unqualified (schema name not explicitly given) object, be that a table, 
> sequence, view, function or whatever, the system looks in all those 
> schemas in that particular order and uses the first found.
> 
> With that, you can have your common or shared objects in a central 
> schema "schema_common", and everything that's application specific in 
> "schema_A", "schema_B". The connection just has to set the search_path 
> at the beginning with
> 
> set search_path = schema_A, schema_common;

This is brilliant. I hadn't noticed this in the documentation.

> >I suppose I'm trying to think of how I might implement the second point
> >in this list (also from dd-schemas.html):
...
> >- To organize database objects into logical groups to make them more
> >  manageable.
...
> Yes, yes and yes. Plus the ability for you to do cross database joins 
> for global analyzing for example.

Just a question on this, Jan. Would one expect UNIONS for this sort of
work?

I just did this which is useful anyway:
schematest=> SELECT 
    (select count(id) from b.messages) 
    + 
    (select count(id) from a.messages);
 ?column? 
----------
        5
(1 row)

I see the horizons expanding! Common data (I often have an 'info' table)
can be shared between schemas. I think my search_path might go:

this_schema, info_schema, public_schema
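
In SQL terms (reading "public_schema" as the built-in public schema):

    SET search_path = this_schema, info_schema, public;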

Thanks very much for the information.

Kind regards,
Rory
-- 
Rory Campbell-Lange 
<[EMAIL PROTECTED]>




[GENERAL] PostgreSQL Logs

2004-05-27 Thread Postgresql
What logs does postgresql produce?

1)   It has transaction logs. Can it log connections? Are there other logs?

2)   When I start postgresql it outputs log messages to the console. Does this also log anywhere else?

What I'm trying to do is find all the postgresql log files and then find a way to load them into a database and then have a program scan that data and look for anything out of the ordinary or to do reporting on the logs (like how many connections per hour, per day, etc).

Thank you!

~ T r o y ~

Re: [GENERAL] custom error messages/numbers

2004-05-27 Thread Martijn van Oosterhout
On Thu, May 27, 2004 at 06:26:53PM +0200, BARTKO Zoltan wrote:
> Folks,
> 
> I am developing an app for PostgreSQL, entirely with stored functions (somewhat 
> resembling the object-oriented approach. In fact it is not an app, just an API). All 
> my functions return an integer value - 0 if the function was successful, another 
> value otherwise.
> 
> Until now, all parameter checking was done manually in the stored function (if param 
> is null then return -8; end if; etc.) This makes things quite lengthy.
> 
> I know about check constraints. However, upon error they throw an error message and 
> abort the current transaction. So my question is: is there any possible way of 
> throwing my own error messages upon a check constraint violation? 

I don't know if this does what you want, but maybe RAISE EXCEPTION is what
you're looking for.
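
A minimal plpgsql sketch (function name and message are invented; note
the doubled quotes that 7.4-era function bodies require):

    CREATE FUNCTION check_param(p integer) RETURNS integer AS '
    BEGIN
        IF p IS NULL THEN
            RAISE EXCEPTION ''param must not be null'';
        END IF;
        RETURN 0;
    END;
    ' LANGUAGE plpgsql;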

-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




[GENERAL] pg_restore quick question

2004-05-27 Thread Sally Sally
I want to dump a database containing about five tables, each containing 
about 25 million records. I wanted to run pg_restore per table all at the 
same time. Would this have any undesirable consequences? I tried it on a 
smaller database and it seems to work fine.
Let me know your thoughts ASAP.
Thanks
Sally




Re: [GENERAL] Database Size Limiting

2004-05-27 Thread Campano, Troy
Is it possible to have a database spread across two data files?
So if it fills up filesystem1 then we could add a second filesystem and
allow it to grow onto this second filesystem?


Thanks!


-Original Message-
From: Richard Huxton [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 27, 2004 3:23 PM
To: Joshua D. Drake
Cc: Campano, Troy; Postgres general mailing list
Subject: Re: [GENERAL] Database Size Limiting

Joshua D. Drake wrote:
> Hello,
>
> You could use Filesystem quotas but I strongly suggest against it as it
> will probably bring about database corruption.

Ah - manual reference
http://www.postgresql.org/docs/current/static/disk-full.html

"24.2. Disk Full Failure

The most important disk monitoring task of a database administrator is 
to make sure the disk doesn't grow full. A filled data disk may result 
in subsequent corruption of database indexes, but not of the tables 
themselves. If the WAL files are on the same disk (as is the case for a 
default configuration) then a filled disk during database initialization 
may result in corrupted or incomplete WAL files. This failure condition 
is detected and the database server will refuse to start up."

-- 
   Richard Huxton
   Archonet Ltd



[GENERAL] HOWTO: Integrating Postgresql queries into an event loop

2004-05-27 Thread Mark Harrison
HOWTO: Integrating Postgresql queries into an event loop.

Mark Harrison
[EMAIL PROTECTED]
May 27, 2004

Problem
-------

The commonly used postgresql APIs will block until completed.
If you are in a GUI event loop, this will block your GUI
responsiveness until a query is completed.

If the queries are small and finish quickly, there is probably
not a problem.  Hanging for a few milliseconds will not
cause a problem.

However, if you have a really large or really slow query,
this will be a significant problem.

For example, one of my tables (call it "big") has about
14 million rows.  It takes about two minutes for
res = PQexec(conn, "select * from big");
to return.  An additional unpleasant side effect is that
the process then requires about 1.2 gig of memory to buffer
the returned data.
Solution, part 1:
-----------------

First, we need to break up the returned data into more
manageable chunks.  For this, we use an SQL cursor.  Here
are the relevant bits of code:

    res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE cur CURSOR FOR select * from big");
    PQclear(res);
    while (1) {
        res = PQexec(conn, "FETCH 1000 in cur");
        if (PQntuples(res) == 0) {
            PQclear(res);
            break;              /* no more rows: cursor is exhausted */
        }
        /* process rows of data */
        PQclear(res);
    }
    res = PQexec(conn, "CLOSE cur");
    PQclear(res);
    res = PQexec(conn, "END");
    PQclear(res);
This has two immediate benefits:
1.  There is not a two minute pause while the data is being
transferred and buffered from the server to the client.
2.  The memory requirements for the client program are much
lower.
Solution, part 2
----------------

Now that we have broken our data retrieval into manageable
chunks, we need to integrate this logic into the event loop.
As is typical for event loop programming, there are two
main bits of code:
1.  Set up query and callback.
conn = PQconnectdb("");
rc = PQsetnonblocking(conn, 1);
res = PQexec(conn, "BEGIN");
res = PQexec(conn, "DECLARE cur CURSOR FOR select * from big");
rc = PQsendQuery(conn, "FETCH 1000 in cur");
PQflush(conn);
sock = PQsocket(conn);
add_to_event_loop(READABLE, sock, myhandler);
2.  The callback which processes the returned data.  This is referred
to as myhandler() in the previous step.
rc = PQconsumeInput(conn);
while (!PQisBusy(conn)) {
rc = PQconsumeInput(conn); // (is this necessary?)
res = PQgetResult(conn);
if (res == NULL) {
// we have finished all the rows for this FETCH.  We need
// to send another FETCH to the server.
rc = PQsendQuery(conn, "FETCH 1000 in cur");
PQflush(conn);
return;
}
    else {
        if (PQntuples(res) == 0) {
            /* finished processing all rows.  Clean up the
               result and remove your callback from the
               event loop. */
        }
        else {
            /* process rows of data */
        }
        PQclear(res);
    }
}
If you wish to cancel a query midway through processing (e.g.,
if the user presses "cancel"), call
PQrequestCancel(conn);
Notes
-----

This HOWTO is released under the same license as the Postgresql
documentation...  Share and Enjoy!  Thanks to Tom Lane for pointing
out the need to use a cursor.


Re: [GENERAL] [ADMIN] Clustering Postgres

2004-05-27 Thread Greg Spiegelberg
Here are the basics; I ran out of time and budget to actually purchase
it, but here's what I know.

1 day to install.  Installed RedHat ES 3.0 from scratch, and the
PolyServe process took perhaps 1.5 hours to install including a
kernel build and system reboot.  Roughly 2 hours into PolyServe I
had my first psfs (shared cluster) partition set up on my 1Gb SAN.

Just doing a basic dd to write 1.5GB and 5GB to an ext3 filesystem on the
same hardware produced roughly 115 MB/s and 107 MB/s respectively.
It's my understanding that PostgreSQL won't create files larger than
2GB, though.

Doing the same tests on the PolyServe file system (psfs) accomplished
the tasks at 117 MB/s and 109 MB/s, just slightly faster.

If you're familiar with Veritas Cluster Server, PolyServe operates
similarly, though I didn't have a chance to fully test it.

Given the budget I'd definitely take another look at it.

Greg

Robert Treat wrote:
> Some basics on how easy it is to install/configure and some before/after type
> benchmarks showing how it affected performance.  Preferably you could
> measure simple vs. complex queries as well as update vs. insert performance.
> If you're really ambitious you could try running one of the tpc style
> benchmarks on the system, check the archives or the osdl site for more info
> on getting these up and running.
>
> Robert Treat
>
> On Monday 24 May 2004 09:59, Greg Spiegelberg wrote:
> > Robert,
> >
> > I am currently evaluating PolyServe Matrix Server which is a clustering
> > solution including a clustered file system (mounted read-write
> > everywhere).
> >
> > Anything special anyone wants to know?  I'm using PostgreSQL 7.4.2 in
> > a Linux cluster.
> >
> > Greg
> >
> > Robert Treat wrote:
> > > On Thu, 2004-05-20 at 14:30, Kris Kiger wrote:
> > > > Has anyone heard of clusgres or, better yet, had any experience with it?
> > > > I came across it while searching for active clustering projects.
> > > > http://www.linuxlabs.com/clusgres.html
> > > > Thanks in advance for the info!
> > >
> > > I have heard of it but I've not heard of any reports on these lists of
> > > how well it works. If you (or anyone else) gives it a spin please try to
> > > post an account to the list and/or if you send me the info it could
> > > probably make its way on to techdocs.
> > >
> > > Robert Treat


--
Greg Spiegelberg
 Product Development Manager
 Cranel, Incorporated.
 Phone: 614.318.4314
 Fax:   614.431.8388
 Email: [EMAIL PROTECTED]
Technology. Integrity. Focus. V-Solve!


Re: [GENERAL] Connection log entries - random port number recorded

2004-05-27 Thread Matt Van Mater
> On linux you can use "lsof -i" to see all current connections, I used to
> use something similar on Windows, but can't remember what it was.

The tool you're looking for is called fport, and is distributed for free by 
Foundstone.  It lists which program is using each open port.  Alternatively 
you can use netstat -an just to see which ports are actively 
communicating or listening, but it won't show you which program is using 
that port.

Matt



Re: [GENERAL] Database Size Limiting

2004-05-27 Thread Richard Huxton
Joshua D. Drake wrote:
> Hello,
>
> You could use Filesystem quotas but I strongly suggest against it as it
> will probably bring about database corruption.
Ah - manual reference
http://www.postgresql.org/docs/current/static/disk-full.html
"24.2. Disk Full Failure
The most important disk monitoring task of a database administrator is 
to make sure the disk doesn't grow full. A filled data disk may result 
in subsequent corruption of database indexes, but not of the tables 
themselves. If the WAL files are on the same disk (as is the case for a 
default configuration) then a filled disk during database initialization 
may result in corrupted or incomplete WAL files. This failure condition 
is detected and the database server will refuse to start up. "

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Database Size Limiting

2004-05-27 Thread Richard Huxton
Joshua D. Drake wrote:
> Hello,
>
> You could use Filesystem quotas but I strongly suggest against it as it
> will probably bring about database corruption.

I thought we handled that? I'm not saying it'd be pleasant, but I think 
it's no worse than a power outage.

Hmm, can't find anything in the manuals to back that up. I can find 
postings to the mailing lists though, and no-one is complaining that PG ate 
their data.

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Database Size Limiting

2004-05-27 Thread Campano, Troy
My understanding is databases reside within 'database clusters' in
postgresql. Is this right? Can I run multiple database clusters within
one postgresql/postmaster server? Then could I use filesystem quotas for
each cluster?


Thank you!


-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 27, 2004 2:12 PM
To: Campano, Troy
Cc: Postgres general mailing list
Subject: Re: [GENERAL] Database Size Limiting

Hello,

You could use Filsystem quotas but I strongly suggest against it as it 
will probably bring about database corruption.

A better solution would be to monitor the space being allocated so you
can add storage as required.

Sincerely,

Joshua D. Drake


Campano, Troy wrote:
> Is there a way to limit the size that an individual database can take up?
>
> My understanding is that all your databases go into a file system which
> is your 'catalog cluster'.
>
> But all the databases can grow until that filesystem is full.
>
> Is there a way to limit how big a database can be? Can you allocate
> space when the database is set up so you can say db1 can only be 100 MB?
>
> Thank you!
>
> Troy Campano


-- 
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL



Re: [GENERAL] Database Size Limiting

2004-05-27 Thread Joshua D. Drake
Hello,
You could use Filesystem quotas but I strongly suggest against it as it 
will probably bring about database corruption.

A better solution would be to monitor the space being allocated so you
can add storage as required.
Sincerely,
Joshua D. Drake
Campano, Troy wrote:
> Is there a way to limit the size that an individual database can take up?
>
> My understanding is that all your databases go into a file system which
> is your 'catalog cluster'.
>
> But all the databases can grow until that filesystem is full.
>
> Is there a way to limit how big a database can be? Can you allocate
> space when the database is set up so you can say db1 can only be 100 MB?
>
> Thank you!
>
> Troy Campano

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL




Re: [GENERAL] Database Size Limiting

2004-05-27 Thread Richard Huxton
Campano, Troy wrote:
> Is there a way to limit the size that an individual database can take up?
>
> My understanding is that all your databases go into a file system which
> is your 'catalog cluster'.
>
> But all the databases can grow until that filesystem is full.
>
> Is there a way to limit how big a database can be? Can you allocate
> space when the database is set up so you can say db1 can only be 100 MB?
Well, you've got two main options:
1. Run multiple instances of PG
If you run each as its own user "postgres1","postgres2" etc then you can 
use the standard user permissions to control resource usage.

2. Place each database on its own volume
Mount one volume per database, move and symlink that database to the 
volume. If your volume is only 100MB then the database will stop there.
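
A sketch of the move-and-symlink step, assuming the database's OID is
17234 (hypothetical) and the postmaster is stopped first:

    mv $PGDATA/base/17234 /mnt/vol_db1/17234
    ln -s /mnt/vol_db1/17234 $PGDATA/base/17234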

Of course, what you probably want to do is monitor each folder (tip - 
the folder names in .../data/base are the OIDs of databases) and warn at 
75MB and disallow access if usage is 125MB for more than 24 hours. 
That's something outside of PG's control, but someone will have done it.

Oh - final option would be one of the virtual server setups which let 
you allocate resource limits per vserver.

--
  Richard Huxton
  Archonet Ltd


[GENERAL] Database Size Limiting

2004-05-27 Thread Campano, Troy
Is there a way to limit the size that an individual database can take up?

My understanding is that all your databases go into a file system which is your 'catalog cluster'.

But all the databases can grow until that filesystem is full.

Is there a way to limit how big a database can be? Can you allocate space when the database is set up so you can say db1 can only be 100 MB?

Thank you!

Troy Campano



Re: [GENERAL] custom error messages/numbers

2004-05-27 Thread Richard Huxton
BARTKO Zoltan wrote:
> Folks,
>
> I am developing an app for PostgreSQL, entirely with stored functions
> (somewhat resembling the object-oriented approach. In fact it is not
> an app, just an API). All my functions return an integer value - 0 if
> the function was successful, another value otherwise.
>
> Until now, all parameter checking was done manually in the stored
> function (if param is null then return -8; end if; etc.) This makes
> things quite lengthy.
>
> I know about check constraints. However, upon error they throw an
> error message and abort the current transaction. So my question is:
> is there any possible way of throwing my own error messages upon a
> check constraint violation?
Not in v7.4 - there should be nested transactions available in 7.5 which 
might allow this. In fact, I think the developer of nested transactions 
is looking for testers at the moment - see the hackers list for details.

In actual fact, I'd be tempted to move the procedural API out of the 
database. If all your access is to be through the API and not using SQL 
there seems little point in embedding it inside PG.

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] I want to use postresql for this app, but...

2004-05-27 Thread Joshua D. Drake

> So are you saying that if you connect to any GPL database (e.g. gnumed
> is a GPL database created with Postgresql), you must GPL your code? Even
> when using something like ODBC as the connection method?

I don't think so. The difference is that PostgreSQL is BSD and you are 
connecting from a GPL source, versus say MySQL, which is GPL and you'd be 
connecting from a BSD source.

The GPL is pervasive in that if your app requires libs that are GPL then
your app must also be GPL. However, since PostgreSQL libs are BSD we 
don't have that issue.

IANAL but I believe that is the case.
Sincerely,
Joshua D. Drake

> Thanks,
> John DeSoi, Ph.D.

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL




[GENERAL] Howto handle views/Rules Privileges

2004-05-27 Thread Ulrich Meis
Hi!

Situation: two schemas.

webviews: Contains a set of views with insert rules on them, accessed 
via a GUI.

devconf: "Business logic" - all the tables, functions, triggers that I 
want to restrict access to as far as possible. Preferably, I wouldn't 
even grant USAGE on the schema.

I have a problem with the insert rules on the views:
the inserts triggered by the rules are executed with view owner 
privileges, but unfortunately any functions, triggers, or sequence 
updates that are invoked by the rule are not.

How do I solve this?

My solution right now is to do "security definer" on all triggers and 
functions invoked by the rules, and to grant update privileges for the 
sequences. I hope there is a better way.
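
A sketch of that workaround (all names invented; 7.4-era quoted
function body):

    CREATE FUNCTION devconf.audit_trig() RETURNS trigger AS '
    BEGIN
        -- business-logic side effects would go here
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql SECURITY DEFINER;

    GRANT UPDATE ON devconf.item_id_seq TO webviews_user;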

Thanks in advance for any comments and suggestions,
Uli





[GENERAL] custom error messages/numbers

2004-05-27 Thread BARTKO Zoltan



Folks,

I am developing an app for PostgreSQL, entirely with stored functions 
(somewhat resembling the object-oriented approach. In fact it is not 
an app, just an API). All my functions return an integer value - 0 if the 
function was successful, another value otherwise.

Until now, all parameter checking was done manually in the stored 
function (if param is null then return -8; end if; etc.) This makes 
things quite lengthy.

I know about check constraints. However, upon error they throw an error 
message and abort the current transaction. So my question is: is there 
any possible way of throwing my own error messages upon a check 
constraint violation?

Thank you very much

Zoltan Bartko


Re: [GENERAL] Naive schema questions

2004-05-27 Thread btober

> On 5/27/2004 7:15 AM, Rory Campbell-Lange wrote:
>
>> seems to suggest that the functions are schema specific.
>
> It is even better. The property that sets your "schema context" is
> called search_path. This contains a list of schema names. For an
> unqualified (schema name not explicitly given) object, be that a table,
>  sequence, view, function or whatever, the system looks in all those
> schemas in that particular order and uses the first found.

And where do tables created with "CREATE LOCAL TEMPORARY TABLE..." fit
into this, like if say a local temp table were created that has the same
name as an existing normal (i.e., not a local temp) table? And what if I
do an explicit DROP of the local temp table rather than relying on the
automatic, end-of-session clean-up? Is there any risk of losing the
normal table?

--Berend Tober






Re: [GENERAL] One more time on ONE-TO-MANY

2004-05-27 Thread Laura Vance
I agree with your statement.  I believe that the SQL standard does not 
support the 3rd option by design.  I could be wrong, but everything I've 
seen only discusses the first 2 options.  If an RDBMS does that 
automatically, think about this possible consequence.

Let's say that you have a database that tracks customers and their 
purchases.  Let's say that an operator has the ability to delete purchase 
records (children (FK) to the customer (PK)).  Let's say that a purchase 
gets entered by mistake while the operator is on the phone with the 
customer, and he/she needs to delete the mistake.  As she deletes the 
mistake, the customer record is wiped out along with it.  Then she has 
to re-create the customer record as the customer gets impatient because 
this information has already been given.

I would much rather have to manually remove a childless parent record 
than have the system do it for me.

Duane Lee - EGOVX wrote:
Ok, I think I understand now.  You are wondering if the capability exists
that whenever all the children of a parent are deleted, not via cascade
delete, can the parent also be deleted.  I know this capability does not
exist in DB2 and I'm pretty sure it doesn't in Postgres.  Personally, that
is not a "feature" I would choose, especially as any kind of default.
Duane
-Original Message-
From: Dennis Gearon [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 26, 2004 5:25 PM
To: Duane Lee - EGOVX
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] One more time on ONE-TO-MANY
Yes, and No. I am talking about that capability, which is the first two 
in the list, plus another, the last one in the list.

That is, when an attempt to delete the LAST CHILD takes place, the logic 
is that there is no need for the parent, or that there must be a child 
for every parent.
So, either the action is to say NO, error out, or the parent is deleted 
along with the child.

Currently, all Postgres supports natively is what should happen if the 
PARENT is deleted.

Supposedly, some RDBMs handle the options of when the LAST child is 
deleted, natively, by declaration in the constraint.

Duane Lee - EGOVX wrote:
 

It sounds like you are referring to a RI (Referential Integrity) 
constraint and if so one of the options when the constraint is defined 
is CASCADE DELETE, i.e., delete the children rows then delete the 
parent row and this is available in Postgres.

Is this what you were asking or did I mis-interpret your query?
Duane
-Original Message-
From: Dennis Gearon [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 26, 2004 3:25 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] One more time on ONE-TO-MANY
CC me, I'm digesting this list.
From:
   http://www.sum-it.nl/cursus/dbdesign/english/intro030.php3
A quote:
   ' In addition *the database designer chooses* an action for delete:
   * It's /only possible/ to delete a row in the one-table when there are
     no more related many-rows.
   * When deleting a row the RDBMS /automatically/ deletes the related
     data in the many table. This is called a /cascaded delete/.
   * When deleting the last 'many' the RDBMS /automatically/ deletes
     the related 'one' row.'
I'm pretty sure that Postgres does not support the last one
automatically. I shall have to do that one by either a cron script or a
post trigger.
Does anyone have experience with a database that will do the last one,
and what database would that be?
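
A sketch of the post-trigger approach mentioned above (table and column
names invented):

    CREATE FUNCTION drop_childless_parent() RETURNS trigger AS '
    BEGIN
        -- after a child disappears, drop the parent if no children remain
        IF NOT EXISTS (SELECT 1 FROM child WHERE parent_id = OLD.parent_id) THEN
            DELETE FROM parent WHERE id = OLD.parent_id;
        END IF;
        RETURN OLD;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER child_cleanup AFTER DELETE ON child
        FOR EACH ROW EXECUTE PROCEDURE drop_childless_parent();
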
--
Thanks,
Laura Vance
Systems Engineer
Winfree Academy Charter Schools, Data-Business Office
1711 W. Irving Blvd. Ste 310
Irving, Tx  75061
Web: www.winfreeacademy.com



Re: [GENERAL] insert/update

2004-05-27 Thread Jeff Eckermann
--- Tom Allison <[EMAIL PROTECTED]> wrote:
> Jeff Eckermann wrote:
> > --- Tom Allison <[EMAIL PROTECTED]> wrote:
> > 
> >>I seemed to remember being able to do this but I
> >>can't find the docs.
> >>
> >>Can I run a sql query to insert new or update
> >>existing rows in one query?
> 
> So I have to watch out for transactions on this?
> Essentially what I'm trying to do is one of the
> following two:
> 
> if exists update a field to field+1 on one record
> if it doesn't exist, insert a row with field = 1
> 

I'm not sure what you are asking here that is not
already covered.  I suggest you spend some time
reading the documentation on concurrency, and
searching the archives for some of the lengthy past
discussions on this topic.
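
For reference, a sketch of the usual two-step pattern (table and column
names invented); note there is still a race if two sessions try to
insert the same key at once, which is why the concurrency discussions
matter:

    BEGIN;
    UPDATE counters SET n = n + 1 WHERE key = 'widget';
    -- if the UPDATE matched no row, insert a fresh one
    INSERT INTO counters (key, n)
        SELECT 'widget', 1
        WHERE NOT EXISTS (SELECT 1 FROM counters WHERE key = 'widget');
    COMMIT;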







Re: [GENERAL] Naive schema questions

2004-05-27 Thread Nick Barr
- Original Message -
From: "Peter Eisentraut" <[EMAIL PROTECTED]>
To: "Rory Campbell-Lange" <[EMAIL PROTECTED]>
Cc: "Postgresql General List" <[EMAIL PROTECTED]>
Sent: Thursday, May 27, 2004 1:10 PM
Subject: Re: [GENERAL] Naive schema questions


> Am Donnerstag, 27. Mai 2004 13:15 schrieb Rory Campbell-Lange:
> > I imagined schemas might allow me to globally update functions across a
> > database hosting many schemas with the same structure.
>
> Put your data tables in separate schemas, put the functions in yet another
> schema, and then when you connect set the schema search path to
> "dataschema, functionschema" (or maybe vice versa).

Or when you make the calls in the web app use the following:

SELECT function_schema.function1(arg1, arg2);

instead of just:

SELECT function1(arg1, arg2);

But like Peter said have a schema per client/"instance" of your database.
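
A sketch of that layout (schema names invented):

    CREATE SCHEMA client_a;
    CREATE SCHEMA function_schema;
    -- at connect time, for client A:
    SET search_path = client_a, function_schema;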

Nick




