Re: [GENERAL] Practical question.

2007-03-17 Thread louis gonzales




Dear Hubert,
Two things
1) "statement-level" and "row-level" straight from PostgreSQL:
http://www.postgresql.org/docs/8.1/interactive/trigger-datachanges.html

  
- Statement-level triggers follow simple visibility rules: none of the
  changes made by a statement are visible to statement-level triggers that
  are invoked before the statement, whereas all modifications are visible
  to statement-level after triggers.

- The data change (insertion, update, or deletion) causing the trigger to
  fire is naturally not visible to SQL commands executed in a row-level
  before trigger, because it hasn't happened yet.

- However, SQL commands executed in a row-level before trigger will see
  the effects of data changes for rows previously processed in the same
  outer command. This requires caution, since the ordering of these change
  events is not in general predictable; a SQL command that affects
  multiple rows may visit the rows in any order.

- When a row-level after trigger is fired, all data changes made by the
  outer command are already complete, and are visible to the invoked
  trigger function.

2) Seeing as you have no idea - not attacking, stating fact - of the
rationale behind the statement-level insert creating a 1-to-1 table
for each statement-level insert, I'd say your presumption is unfounded.
If you have some benchmark data that quantifies why/how 50K records in a
single table, all of which would have N associated records in
another table, would outperform 50K records in a single table
referencing dedicated 'small' tables, please do share.

Thanks though.

hubert depesz lubaczewski wrote:
On 3/16/07, louis gonzales [EMAIL PROTECTED]
wrote:
  
I want to write a statement-level trigger - one that happens once per
statement - such that, immediately after an insert into a table (which
gets a unique integer value as an ID from a defined sequence, being the
primary key on the table), a new table is created with foreign key
constraint on that unique ID.
  
hi,
i think what you're trying to do is wrong - having that many tables
simply cannot work properly.
additionally - i think you're misinformed. the kind of action you
would like to "trigger on" is not "per statement" but "per row".
example:
insert into table x (field) select other_field from other_table;
if this insert would insert 10 records - a "once per statement" trigger
would be called only once.

but anyway - what you're proposing will lead to many, many problems.
(plus it will never scale correctly).

depesz



-- 
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
"Open the pod bay doors HAL!" -2001: A Space Odyssey
"Good morning starshine, the Earth says hello." -Willy Wonka






Re: [GENERAL] Practical question.

2007-03-16 Thread louis gonzales
Actually, there will be creation of 2 new tables for each insert on
'primary' table, so for 10K records, we would have 20K tables.  Each of
those tables will never grow beyond a few records.


Is it better to have 1 monolithic table and have to search it, or small 
individual tables but many of them? 


Ron Johnson wrote:



On 03/15/07 22:14, louis gonzales wrote:

Hello List,
I want to write a statement-level trigger - one that happens once per
statement - such that, immediately after an insert into a table(which
gets a unique integer value as an ID from a defined sequence, being the
primary key on the table), a new table is created with foreign key
constraint on that unique ID.

So if you insert 10,000 records into T, you then have 10,000 new tables?








---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Practical question.

2007-03-15 Thread louis gonzales

Hello List,
I want to write a statement-level trigger - one that happens once per
statement - such that, immediately after an insert into a table (which
gets a unique integer value as an ID from a defined sequence, being the
primary key on the table), a new table is created with foreign key
constraint on that unique ID.
My concern is with what happens if two such inserts occur at almost the
same time: what is the best way to ensure that I never miss the creation
of one of the tables because the second insert possibly reads the same
max or nextvalue that the immediately previous trigger read?


As an example:
insertX, which initiates the trigger, reads the 'nextvalue' from the
sequence and begins to create the associated table;
insertY happens almost at the same time, so that it gets the same
'nextvalue' from the sequence and would then create, or attempt to create,
the 'same' associated table while missing its 'true' 'nextvalue'.


Thanks for any insight!
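[For what it's worth, a row-level AFTER INSERT trigger sidesteps the feared race entirely: by the time it fires, the row's ID has already been assigned by nextval inside the insert, so the trigger can simply read it from NEW. A minimal sketch in 8.1-era PL/pgSQL - the table and function names here are invented for illustration, not from the thread:

```sql
-- Hypothetical parent table whose serial id drives the child tables.
CREATE TABLE parent (id serial PRIMARY KEY, name text);

CREATE OR REPLACE FUNCTION make_child_table() RETURNS trigger AS $$
BEGIN
    -- NEW.id is the value the sequence already handed this row;
    -- two concurrent inserts can never see the same NEW.id.
    EXECUTE 'CREATE TABLE child_' || NEW.id ||
            ' (parent_id integer REFERENCES parent(id), payload text)';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER parent_make_child
    AFTER INSERT ON parent
    FOR EACH ROW EXECUTE PROCEDURE make_child_table();
```

Whether creating one table per row is a good idea at all is exactly what the rest of the thread disputes.]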




Re: [GENERAL] Practical question.

2007-03-15 Thread louis gonzales
:), something that is analogous to a race condition.  Is this something
I shouldn't be concerned with?
I suppose if I knew for certain there was some kind of synchronous
behavior, then I wouldn't fear a potentially subsequent event completing
before the previous one does.


As a possible solution, I'm thinking that I can make the trigger a
before trigger, where the before trigger captures the 'nextvalue' on
which both the actual insert and the table creation would be based,
while incrementing the sequence to guarantee that each successive pull
on the nextvalue will have the correct one.


Does that sound plausible?

Thanks,

Tom Lane wrote:


louis gonzales [EMAIL PROTECTED] writes:

As an example:
insertX which initiates the trigger reads the 'nextvalue' from the
sequence and begins to create the associated table
insertY happens almost at the same time, so that it gets the same
'nextvalue' from the sequence

[ blink... ]  Whatever makes you think that could happen?

regards, tom lane






Re: [GENERAL] Practical question.

2007-03-15 Thread louis gonzales

Hey Joshua,
I appreciate the insight.  That's clear.

Thanks again,

Joshua D. Drake wrote:


Tom Lane wrote:

louis gonzales [EMAIL PROTECTED] writes:

As an example:
insertX which initiates the trigger reads the 'nextvalue' from the
sequence and begins to create the associated table
insertY happens almost at the same time, so that it gets the same
'nextvalue' from the sequence

That won't happen because of isolation :). When InsertX increments the
sequence, it is forever incremented, so when InsertY increments it gets
the next value... e.g.:

CREATE TABLE foo(id serial);

Transaction 1:
BEGIN;
INSERT INTO foo(id) VALUES (DEFAULT);
id now == 1

Transaction 2:
BEGIN;
INSERT INTO foo(id) VALUES (DEFAULT);
id now == 2

Transaction 1:
COMMIT;

Transaction 2:
COMMIT;

Even if Transaction 1 were to rollback, it has already incremented the
sequence so the next transaction would get 3.

Joshua D. Drake



 






[GENERAL] Is there anyway to...

2006-11-02 Thread louis gonzales

Hello all,
Is there an existing mechanism in postgresql that can automatically
increment/decrement on a daily basis w/out user interaction?  The use
case I'm considering is where a student is in some type of contract with
an instructor of some sort, and that contract puts a time limit on the
student, requiring her to pay a fee by a certain day.  IF that day comes
to pass - or a certain number of days elapse - and that payment
requirement hasn't been met, I want to trigger a function.


The one requirement I want to impose is that no end user of the DB
application needs to do anything to set the trigger, other than the
initial act of creating a student of this type.


An example would be:
Day 1 - Application user (typically the instructor) creates a profile for
a new student - John Doe - which sets a 30 day time limit for John Doe to
pay $100.00

Day 2 - Day 31 - John Doe didn't make the payment
Day 31 - Trigger of event occurs when the instructor logs in.

Basically on Day 1 when John Doe's profile was created, I want a
decrement counter to occur daily on his profile (some attribute/timer),
and nothing should happen until day 31 when he doesn't pay.


Any ideas?



Re: [GENERAL] Is there anyway to...

2006-11-02 Thread louis gonzales

Hey Brian,
Yeah I had considered this, using cron, I just feel like that is too dirty.

Actually I didn't see Andreas' post, can someone forward that?

I'm running this application on Solaris 9.  Ultimately what I want to 
know is, is there something that is internal to postgresql that can be 
used that doesn't need external action, to make it do some task?


Some built-in function that can be set to do some simple task on a daily
- or other time - interval, where all of the defined users may not have
any activity with the database for days or weeks at a time, but this
builtin function still operates?


Am I making any sense with how I'm asking this?  I could of course have
cron do a scheduled task of checking/incrementing/decrementing and
define triggers to occur when one of the cron-delivered actions sets the
appropriate trigger off, but are there other methods that are standard
in the industry, or are we stuck with this type of external influence?


Thanks all!

brian wrote:


louis gonzales wrote:


Hello all,
Is there an existing mechanism is postgresql that can automatically 
increment/decrement on a daily basis w/out user interaction?  The use 
case I'm considering is where a student is in some type of contract 
with an instructor of some sort, and that contract puts a time limit 
on the student requiring her to pay a fee by a certain day.  IF that 
day comes to pass - or a certain number of days elapse - and that 
payment requirement hasn't been met, I want to trigger a function.


The one requirement I want to impose is, that no end user of the DB 
application, needs to do anything to set the trigger, other than the 
initialization of making the student of this type.


An example would be:
Day1 - Application user(typically the instructor) creates a profile 
for a new student - John Doe, which sets a 30 day time limit for John 
Doe to pay $100.00

Day2 - Day31 - John Doe didn't make the payment
Day 31 - Trigger of event occurs when the instructor logs in.

Basically on Day 1 when John Doe's profile was created, I want a 
decrement counter to occur daily on his profile(some attribute/timer) 
and nothing should happen until day 31 when he doesn't pay.




Further to Andreas' suggestion to use CRON, you don't require a
decrement of anything. When the profile is created, your date_created
(or whatever) column will be set. Then your script (called by CRON)
only needs to test for rows that a) have not paid, and b) are outside
the bounds set in the script (e.g. MAX_GRACE_PERIOD = 30).


brian







Re: [GENERAL] Is there anyway to...

2006-11-02 Thread louis gonzales
To all who replied: first, thank you!  Second, I simply said 'dirty',
never said dislike or anything like that.  I've used and use cron for
different OS-related operations and have for years.  I LOVE cron itself.


What I mean by 'dirty' is that I'm trying to remove as many external
influences as possible from the database action I want to happen.


Fine, so let's say when the instructor creates the user profile, this can
trigger the creation of a sequence, say, but IF the instructor doesn't
visit that student's profile for a month, I want something that,
independent of any other action, will be either incrementing or
decrementing that sequence.  So when the instructor does go back to
visit the student profile, it will only show/flag the status if the time
has elapsed.  Is there like a sleep() function that postgresql has?
That could be part of the plan.  So sleep(24hours) - pseudo code - wake
up, increment something, and initiate a trigger to see if a requirement
has been met - say 10 days or 30 days or 1 year has elapsed - and if so,
set a flag attribute on the student profile.


During that 30 days, the instructor won't care what's going on behind 
the scenes, they just care when the time has elapsed, that they will get 
a flag, to say, hey buddy, over here, this student hasn't fulfilled the 
agreement.


Based on what everyone is saying, there is no such internal mechanism 
function/trigger or otherwise, that can do independent actions.



Andreas Kretschmer wrote:

louis gonzales [EMAIL PROTECTED] wrote:

Hey Brian,
Yeah I had considered this, using cron, I just feel like that is too dirty.

Why?

Actually I didn't see Andreas' post, can someone forward that?

Sorry, i posted to the list, and i can see my mail.

I'm running this application on Solaris 9.  Ultimately what I want to know
is, is there something that is internal to postgresql that can be used that
doesn't need external action, to make it do some task?

Take a look at http://pgfoundry.org/projects/pgjob/

Am I making any sense with how I'm asking this?  I could of course have
cron do a scheduled task of checking/incrementing/decrementing and define
triggers to occur when one of the cron-delivered actions sets the
appropriate trigger off, but are there other methods that are standard in
the industry or are we stuck with this type of external influence?

I'm using PG on Linux-systems and use CRON for any tasks, without any
problems. I can't understand your dislike...

Andreas
 






Re: [GENERAL] Is there anyway to...

2006-11-02 Thread louis gonzales
Apparently this isn't the first time someone else thought a sleep or 
timer mechanism, independent of user action would be of great value and 
didn't want to use external programs to accomplish it.


http://developer.postgresql.org/pgdocs/postgres/release-8-2.html
Add a server-side sleep function pg_sleep() (Joachim Wieland):
SELECT pg_sleep(1);




AgentM wrote:



On Nov 2, 2006, at 14:02 , Glen Parker wrote:


louis gonzales wrote:


Hey Brian,
Yeah I had considered this, using cron, I just feel like that is  
too dirty.

Actually I didn't see Andreas' post, can someone forward that?
I'm running this application on Solaris 9.  Ultimately what I want  
to know is, is there something that is internal to postgresql that  
can be used that doesn't need external action, to make it do some  
task?
Some built in function that can be set to do some simple task on a  
daily - or other time - interval, where all of the defined users  
may not have any activity with the database for day's or week's at  
a time, but this builtin function still operates?
Am I making any sense with how I'm asking this?  I could of course  
have cron do a scheduled task of checking/incrementing/ decrementing 
and define triggers to occur when one of the cron  delivered actions 
sets the appropriate trigger off, but are there  other methods that 
are standard in the industry or are we stuck  with this type of 
external influence?




Just some commentary...  This is exactly the sort of thing cron is  
for.  Duplicating that functionality in the RDBMS would be silly  
IMO.  I don't see why you could consider cron to be dirty for  this 
application...



I actually tried to come up with something for this. There are plenty  
of good reasons to have some timer functionality in the database:


1) it makes regular database-oriented tasks OS portable
2) your cron user needs specific permissions + authorization to  
access the database whereas postgres could handle sudo-like  
behavior transparently
3) there are triggers other than time that could be handy- on vacuum,  
on db start, on db quit, on NOTIFY


Unfortunately, the limitation I came across was for 2). There is no  
way to use set session authorization or set role safely because  
the wrapped code could always exit from the sandbox. So my timer only  
works for db superusers.


-M







Re: [GENERAL] Is there anyway to...

2006-11-02 Thread louis gonzales

A. Kretschmer wrote:

On Thu, 02.11.2006, at 14:24:20 -0500, louis gonzales wrote the following:

visit the student profile it will only show/flag the status if the time
has elapsed.  Is there like a sleep() function that postgresql has?
That could be part of the plan.  So sleep(24hours) - pseudo code - wake

Read the release notes for 8.2:
http://developer.postgresql.org/pgdocs/postgres/release-8-2.html :
Add a server-side sleep function pg_sleep()  (Joachim Wieland)

Andreas Kretschmer wrote:

louis gonzales [EMAIL PROTECTED] wrote:

I'm sorry, but i have a big dislike for Top Posting, Below Quoting.
It breaks the normal reading of text.

Hint:

A: Top-posting.

   Better answer: Unsolicited opinions!

Q: What is the most annoying thing on usenet?

Andreas
 






Re: [GENERAL] Is there anyway to...

2006-11-02 Thread louis gonzales

Glen,
Two things:
1) The instructor should only get a flag when the date conflict occurs;
once they create this student profile account, they shouldn't have to
manually check.  I want the program to do what programs should do:
automate as much as possible!  So the suggestion about 'generating a
query' is _wrong_ for the solution I want.
2) If you review all of the previous threads about this, I was using the
incrementing/decrementing 'task' as an example - as in, yes, I probably
would've done some simple 'date math' to achieve what I want.


The problem in particular that I'm looking for a solution to is having
an internal mechanism within the database that itself doesn't need an
event to drive its behavior, but whose behavior will
drive other events.


If there was a sleep() function, I could write a function that would
call sleep for, perhaps, 24hrs, calculate the date difference on a
table of dates, and put the difference of the dates into a
'remainder_date' attribute.  A second sleep() function could check at
the turn of midnight every night whether something has
expired/elapsed and then trigger a write-out of specific data that would
then show up upon instructor login - but the login itself shouldn't be
the trigger to generate the data.  What if the instructor logs in 15
times in one day?  They don't need to have the same information presented
to them every time by virtue of the trigger, nor should the computer
have to run the function redundantly.




Glen Parker wrote:


louis gonzales wrote:

Fine so let's say when the instructor creates the user profile, this 
can trigger the creation of a sequence, say, but IF the instructor 
doesn't visit that student's profile for a month, I want something 
that, independent of any other action, will be either incrementing or 
decrementing that sequence.  So when the instructor does go back to 
visit the student profile it will only show/flag the status if  the 
time has elapsed.  Is there like a sleep() function that postgresql 
has?  That could be part of the plan.  So sleep(24hours) - pseudo 
code - wake up and increment something initiate trigger to see if a 
requirement has been met - say 10day or 30day or 1year has elapsed - 
if so, set a flag attribute to the student profile.


During that 30 days, the instructor won't care what's going on behind 
the scenes, they just care when the time has elapsed, that they will 
get a flag, to say, hey buddy, over here, this student hasn't 
fulfilled the agreement.




I really think you're approaching this wrong by wanting to increment 
something every day.  You should generate a query that can run at any 
time and determine if any accounts are past due.  You can do the whole 
job using some simple date math.  If you do it that way, you could run 
the query every time the instructor logs in (not what I would do, but 
it would work), and at no other time.


What I would do is run it every evening, say, and send a email 
notification to someone (like the instructor) whenever past due 
accounts were found.


The problem with your increment design is, what happens if it fails 
for some reason?  Maybe the server was down for a memory upgrade, who 
knows?  You'll be a day off for every time the job failed to run.


If your logic works even if it skips a few days, as soon as it runs 
again, everything will be back up to date.


my .025...

-Glen
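[Glen's "query that can run at any time" amounts to plain date math. A sketch under invented names - none of this schema appears in the thread:

```sql
-- Hypothetical schema: one row per student contract.
CREATE TABLE student_contract (
    student_id   serial PRIMARY KEY,
    student_name text NOT NULL,
    date_created date NOT NULL DEFAULT current_date,
    paid         boolean NOT NULL DEFAULT false
);

-- Past-due accounts: unpaid and past the 30-day grace period.
-- date - date yields integer days, so this query gives the same
-- answer no matter when, or how often, it runs.
SELECT student_id, student_name
FROM student_contract
WHERE NOT paid
  AND current_date - date_created > 30;
```

Because the query recomputes the elapsed time from date_created each run, a skipped day costs nothing - exactly the robustness Glen argues for over a daily counter.]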







Re: [GENERAL] Is there anyway to...

2006-11-02 Thread louis gonzales

Wes,
Thanks.  That is fair.

Wes Sheldahl wrote:

On 11/2/06, AgentM [EMAIL PROTECTED] wrote:



On Nov 2, 2006, at 14:02 , Glen Parker wrote:

 louis gonzales wrote:
 Hey Brian,
 Yeah I had considered this, using cron, I just feel like that is
 too dirty.
 Actually I didn't see Andreas' post, can someone forward that?
 I'm running this application on Solaris 9.  Ultimately what I want
 to know is, is there something that is internal to postgresql that
 can be used that doesn't need external action, to make it do some
 task?
 Some built in function that can be set to do some simple task on a
 daily - or other time - interval, where all of the defined users
 may not have any activity with the database for day's or week's at
 a time, but this builtin function still operates?
 Am I making any sense with how I'm asking this?  I could of course
 have cron do a scheduled task of checking/incrementing/
 decrementing and define triggers to occur when one of the cron
 delivered actions sets the appropriate trigger off, but are there
 other methods that are standard in the industry or are we stuck
 with this type of external influence?


 Just some commentary...  This is exactly the sort of thing cron is
 for.  Duplicating that functionality in the RDBMS would be silly
 IMO.  I don't see why you could consider cron to be dirty for
 this application...

I actually tried to come up with something for this. There are plenty
of good reasons to have some timer functionality in the database:

1) it makes regular database-oriented tasks OS portable
2) your cron user needs specific permissions + authorization to
access the database whereas postgres could handle sudo-like
behavior transparently
3) there are triggers other than time that could be handy- on vacuum,
on db start, on db quit, on NOTIFY

Unfortunately, the limitation I came across was for 2). There is no
way to use set session authorization or set role safely because
the wrapped code could always exit from the sandbox. So my timer only
works for db superusers.

-M



None of those are good reasons to have timer functionality in the DB.
Portability can be achieved by having your cron job written in a portable
language, like java, ruby or perl. Consistent permissions can be
handled by having the java/whatever script connect to the db as a
particular user; it doesn't matter what user executes the cron job,
provided it can run the script. #3 has nothing to do with timer
functionality.


Glen was right about solving this problem with some basic date math:
IF (now - startdate) > '30 days' THEN EXPIRED. This could be
implemented at the application level, or in postgresql as a function
that the application calls whenever it wants to know about possible
expirations. So this particular problem may be better solved without
any timer functionality either in OR out of the database... if you did
have a cron job run to check, you would probably just have it set a
boolean field on expired records or something of that sort, and run it
a little after midnight, at the start of each day, assuming durations
were always being measured in days.
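[The flagging pass Wes describes could be a single UPDATE, run by cron a little after midnight. A sketch assuming a hypothetical student_contract table with date_created, paid, and an expired flag (names invented for illustration):

```sql
-- Nightly cron pass: flag unpaid contracts whose 30-day window has
-- elapsed and which are not yet flagged. Running it twice, or after a
-- missed day, is harmless: the WHERE clause makes it idempotent.
UPDATE student_contract
SET expired = true
WHERE NOT paid
  AND NOT expired
  AND current_date - date_created > 30;
```

The expired flag is what the instructor's login screen would read, so the login itself never has to do the expiry work.]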


Best of luck,
--
Wes Sheldahl
[EMAIL PROTECTED] mailto:[EMAIL PROTECTED] 






Re: [GENERAL] Is there anyway to...

2006-11-02 Thread louis gonzales
I suppose I'll just concede now :)  Thanks for putting up with my 
sarcasm and humoring my other ideas.  I really wanted to see if there 
were any other methods out there and do appreciate everyone's ideas.


Thanks again,  CRON it is.

Glen Parker wrote:


Wes Sheldahl wrote:

if you did have a cron job run to check, you would probably just have 
it set a boolean field on expired records or something of that sort, 
and run it a little after midnight, at the start of each day, 
assuming durations were always being measured in days.



Exactly.  You flag when you discover an expired condition, and you
flag again when the condition is acted upon.  Easy.  The instructor
doesn't need to be bothered any more than you choose.  You could even
set a timestamp indicating the last time the instructor was harassed
about it, so you can re-harass on a daily or weekly basis :-)


Now, my example that had the expire query run when the instructor logs 
on was just to illustrate that if you do this right, it doesn't matter 
when the code runs.  Personally, I'd have a cron job do it at 
midnight, but whenever it runs, even if it's multiple times a day, the 
outcome should still be correct. That's where your status flags come 
in.  Once a record has been expired, there's no reason for your expire 
code to ever look at that record again, right?


-Glen

---(end of broadcast)---
TIP 6: explain analyze is your friend




--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] postgresql books and convertion utilities

2006-10-29 Thread louis gonzales

Ganbold,
There are many PDF files out there that outline all of the workings of 
postgresql.  They are easy to read, but also have 'deep dive' 
information in them as well.  The PDFs are really valuable, both for 
the novice and for the experienced.


I'd start there.

Ganbold wrote:


Hi,

I'm new to postgresql and I'm looking for references and books. We are 
usually coding in php/C.


Can somebody recommend me good books which covers Postgresql 8.x?

Which one is worth from following books?

1. PostgreSQL (2nd Edition) by Korry Douglas (Paperback - Jul 26, 2005)
2. Beginning Databases with PostgreSQL: From Novice to Professional, 
Second Edition (Beginning from Novice to Professional) by Neil Matthew 
and Richard Stones (Paperback - April 6, 2005)
3. Beginning PHP and PostgreSQL 8: From Novice to Professional by W. 
Jason Gilmore and Robert H. Treat (Paperback - Feb 27, 2006)
4. Beginning PHP and PostgreSQL E-Commerce: From Novice to 
Professional (Beginning, from Novice to Professional) by Emilian 
Balanescu, Mihai Bucica, and Cristian Darie (Paperback - Dec 25, 2006)
5. Beginning PostgreSQL 8 by Edward Lecky-Thompson and Clive Gardner 
(Paperback - Sep 25, 2006)


I'm thinking to buy books published in 2006, but maybe I'm wrong.
I appreciate if somebody can point me to the right direction.

Also I'm thinking to convert our mysql db and application to postgresql.
Is there any method to convert mysql db/app to postgresql, maybe at 
least DB (tables, indexes, queries)?
Are there any tools that can convert php functions and database from 
mysql to postgresql?


thanks in advance,

Ganbold



---(end of broadcast)---
TIP 6: explain analyze is your friend




--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Maximum size of database

2006-10-18 Thread louis gonzales

also, run
EXPLAIN
on any command and show the results of this, in particular for any 
commands that are taking even longer.




roopa perumalraja wrote:


Thanks for your reply.
 
I have answered your questions below.
 
1 & 2) System: Microsoft Windows XP Professional

  Version 2002
  Computer: Intel Pentium CPU 3.40GHz, 960MB of RAM
 
3) shared_buffers = 2

autovacuum = on
 
4) Yes, I am vacuuming & analyzing the database once every day.
 
5) No concurrent activities, means I run one command at a time.
 
6) Nothing else running on the box other than Postgres.
 
I hope these answers will try to solve my problem. Thanks again.
 
Roopa

Michael Fuhr [EMAIL PROTECTED] wrote:

On Tue, Oct 17, 2006 at 07:26:25PM -0700, roopa perumalraja wrote:
 I would like to know that what can be the maximum size of
database in
 postgres 8.1.4.

http://www.postgresql.org/docs/faqs.FAQ.html#item4.4

 Currently my database size is 37GB & it's pretty slow. I wonder if
 it's b'cos of the huge amount of data in it.

37GB isn't all that huge; as the FAQ mentions, much larger databases
exist. Without more information we'll have to ask some of the
standard questions:

What's your hardware configuration?
What operating system and version are you using?
What are your non-default postgresql.conf settings?
Are you vacuuming and analyzing the database regularly?
How much concurrent activity do you have?
Does anything other than PostgreSQL run on the box?

If you have a specific query that's slow then please post the EXPLAIN
ANALYZE output. Also, you might get more help on the pgsql-performance
list.

-- 
Michael Fuhr









--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Maximum size of database

2006-10-18 Thread louis gonzales

explain analyze verbose select * from tk_  ;

roopa perumalraja wrote:


Hi
 
Thanks for your reply.
 
explain select * from tk_20060403;
                            QUERY PLAN
------------------------------------------------------------------
 Seq Scan on tk_20060403  (cost=0.00..95561.30 rows=3609530 width=407)
(1 row)

will this help?
 
louis gonzales [EMAIL PROTECTED] wrote:


also, run
EXPLAIN
on any command, show the results of this. In particular, if you have
some commands that are taking 'even longer?'



roopa perumalraja wrote:

 Thanks for your reply.

 I have answered your questions below.

 1 & 2) System: Microsoft Windows XP Professional
 Version 2002
 Computer: Intel Pentium CPU 3.40GHz, 960MB of RAM

 3) shared_buffers = 2
 autovacuum = on

 4) Yes, I am vacuuming & analyzing the database once every day.

 5) No concurrent activities, means I run one command at a time.

 6) Nothing else running on the box other than Postgres.

 I hope these answers will try to solve my problem. Thanks again.

 Roopa
 Michael Fuhr wrote:

 On Tue, Oct 17, 2006 at 07:26:25PM -0700, roopa perumalraja wrote:
  I would like to know that what can be the maximum size of
 database in
  postgres 8.1.4.

 http://www.postgresql.org/docs/faqs.FAQ.html#item4.4

  Currently my database size is 37GB & it's pretty slow. I wonder if
  it's b'cos of the huge amount of data in it.

 37GB isn't all that huge; as the FAQ mentions, much larger databases
 exist. Without more information we'll have to ask some of the
 standard questions:

 What's your hardware configuration?
 What operating system and version are you using?
 What are your non-default postgresql.conf settings?
 Are you vacuuming and analyzing the database regularly?
 How much concurrent activity do you have?
 Does anything other than PostgreSQL run on the box?

 If you have a specific query that's slow then please post the
EXPLAIN
 ANALYZE output. Also, you might get more help on the
pgsql-performance
 list.

 --
 Michael Fuhr









--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] not so sequential sequences

2006-10-17 Thread louis gonzales

Rhys,
You could create a sequence, then give the seq attribute of your table 
a default value of:

seq integer default nextval('your_sequence')

Then every time an insert is done into your table, seq will increment.  
Alternatively, you could have your insert statement supply 
nextval('your_sequence') for that position.


Does that help?
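A minimal sketch of that setup (table and column names hypothetical):

```sql
-- Create the sequence, then use it as the column default.
CREATE SEQUENCE your_sequence;

CREATE TABLE your_table (
    seq    integer DEFAULT nextval('your_sequence'),
    parish varchar(50)
);

-- Inserts that omit seq draw the next value automatically:
INSERT INTO your_table (parish) VALUES ('Kingston');
```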

Rhys Stewart wrote:


Hi all, looking for a method to number a table sequentially, but the
sequence only increments if the value in a certain column is
different. as in


seq|   parish

1  | Kingston
1  | Kingston
1  | Kingston
1  | Kingston
2  | Lucea
3  | Morant Bay
3  | Morant Bay
3  | Morant Bay
4  | Port Antonio
5  | Savannah-La-Mar
5  | Savannah-La-Mar
5  | Savannah-La-Mar

so i guess i would order by a certain column and then the 'magic
sequence' would be a column that only increments when the column
changes.

Rhys

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Is Postgres good for large Applications

2006-10-17 Thread louis gonzales
Is your server capable?  Does it have enough resources to handle many 
connections?


many = ???  100?  200?  1,000,000,000?  Are they concurrent users?

'good for large applications' = ???  I'd say how large your application 
is doesn't matter, right... because that's the front end.  How well is 
it coded, and does it make efficient, logical SQL calls to a well 
structured database?  That's another question.


I've got a question, who wants to play, ask 20 questions?

Sorry for the sarcasm... but this is now 2:00 a.m. EST, and questions 
have to be specific to warrant an answer.


If I were to say:

many = YES
good for large applications = YES

Wouldn't you come back then with, How many? and How large of 
applications?



Sandeep Kumar Jakkaraju wrote:


Hi All

Is Postgres good for large Applications ??
I mean where we have to make many simultaneous connections...

Thanks
Sandeep

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] create table foo( ... ..., _date date default current_date, ... ... );

2006-10-01 Thread louis gonzales

Group,
I want to set the default value of a date attribute _date to CURRENT_DATE.
CURRENT_DATE gives a date in the format YYYY-MM-DD.

my table is something similar to:

create table foo(
...  ...,
_date date default current_date,
... ...);

Now, every time a new entry is inserted, is it going to get the 
CURRENT_DATE of the day of insertion?  Or will this be the same value 
all of the time?


The behavior I'd like is, if today is   2006-10-01,
all new entries today will get that as default.
Then tomorrow   2006-10-02
all entries will get 2006-10-02 as the default.
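For what it's worth, here's a minimal way I could test it (a cut-down 
version of foo):

```sql
-- Sketch: if the default expression is re-evaluated at each insert,
-- rows inserted on different days should get that day's date.
CREATE TABLE foo (
    _comment text,
    _date    date DEFAULT CURRENT_DATE
);

INSERT INTO foo (_comment) VALUES ('inserted today');
SELECT _date FROM foo;
```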

when I do a:
\d foo

sseq   | integer   |
_iseq  | integer   |
_comment   | text  |
_day   | character varying(3)  |
_time  | character varying(5)  |
_meridiem  | character varying(2)  |
_paymentamount | character varying(13) |
_date  | date  | default ('now'::text)::date
Foreign-key constraints:
    "paymenthistory__iseq_fkey" FOREIGN KEY (_iseq) REFERENCES 
instructor(_iseq)

    "paymenthistory_sseq_fkey" FOREIGN KEY (sseq) REFERENCES students(sseq)

Is this going to give the desired behavior?

Thanks,

--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] cannot open pg_database

2006-08-21 Thread louis gonzales

Also, what is your $PGDATA variable pointing to? Issue:

env | grep PG

see what that comes out with.

Roman Neuhauser wrote:


# [EMAIL PROTECTED] / 2006-08-19 20:18:53 -0700:
 


Installing with yum, Fedora core 5.  Get error: could
not open file global/pg_database: No such file or
directory.   The file exists however, in
/var/lib/pgsql/data/global and contains 3 lines:
"postgres" 10793 1663 499 499
"template1" 1 1663 499 499
"template0" 10792 1663 499 499

From the user-comments in manual, chapter 17.1:
"If you get an error like
psql: FATAL: could not open file global/pg_database:
No such file or directory
make sure that in your init.d postgresql file (if you
have one) or in the env variables for the shell that
runs your server process that PGDATA is set properly.
Then try to restart the server. If the server will not
restart, check for an already running server process
(sudo ps -af | grep postgres). Sometimes they can hang
around, secretly, and screw things up."

As newbie, reluctant to start editing the init.d file.
Ideas appreciated.

*
bash-3.1$ su -c 'pg_ctl start -D /usr/local/pgsql/data -l serverlog' postgres
Password:
postmaster starting
bash-3.1$  psql template1
psql: FATAL:  could not open file
global/pg_database: No such file or directory
   



   what does this output?
   
   su -c 'ls -l /usr/local/pgsql/data/global' postgres


 




--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Is it possible (postgresql/mysql)

2006-08-16 Thread louis gonzales

Hello List,
PostgreSQL 8.0.1 (on Solaris 9)
There is a PERL program that a friend purchased which is used to create 
tables on a MySQL database, and of course ;) I want to run this on a 
PostgreSQL database server instead.  The below is the code:

$sth=runSQL("CREATE TABLE someTable (
    date_create bigint NOT NULL,
    date_end bigint NOT NULL,
    username VARCHAR(20) NOT NULL,
    $cat_definition
    id serial PRIMARY KEY,
    status VARCHAR(20) NOT NULL,
    $adfields
    visibility TEXT NOT NULL,
    priority TEXT NOT NULL,
    template TEXT NOT NULL,
    view bigint DEFAULT 0 NOT NULL,
    reply bigint DEFAULT 0 NOT NULL,
    save bigint DEFAULT 0 NOT NULL,
    updated bigint,
    photo VARCHAR(1) NOT NULL DEFAULT '0',
    INDEX(username),
    $cat_index
    INDEX(date_create) );");

My question is: the INDEX(...) declarations, which work this way in 
MySQL, don't work in PostgreSQL.  Does anybody know a synonymous way 
to modify the above code for compatibility with PostgreSQL?


FYI:  yourVariable INT UNSIGNED AUTO_INCREMENT    (MySQL)
      can be replaced by
      yourVariable serial


Thanks group!

--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Is it possible (postgresql/mysql)

2006-08-16 Thread louis gonzales

Harald,
I had thought of that, but I wasn't sure if there was a way to create 
the indexes upon table creation, as appears to be possible with MySQL.


As for replacing varchar(xx) with the text data type, why do you 
recommend this?  I want to stay as close as I can to the original 
code... but if you think there is a good reason and that it won't 
conflict with something else, then I'd like to know.  Granted, 
varchar(xx) is nothing but a string of characters, potentially xx in 
length, and a text datatype is also just a string of characters; I 
wonder whether a string comparison would treat these differently on 
the single fact of the different datatype?


Thanks for your help Harald!
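For reference, here is roughly how I'd expect the PostgreSQL-compatible 
version to look, with the indexes created separately as you suggest 
(the Perl variables and remaining columns elided, index names my own):

```sql
-- Sketch: PostgreSQL has no INDEX(...) clause inside CREATE TABLE;
-- the indexes are created afterwards with CREATE INDEX.
CREATE TABLE someTable (
    date_create bigint NOT NULL,
    date_end    bigint NOT NULL,
    username    varchar(20) NOT NULL,
    id          serial PRIMARY KEY,
    status      varchar(20) NOT NULL,
    photo       varchar(1) NOT NULL DEFAULT '0'
    -- ... remaining columns as in the original ...
);

CREATE INDEX someTable_username_idx    ON someTable (username);
CREATE INDEX someTable_date_create_idx ON someTable (date_create);
```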

Harald Armin Massa wrote:


Louis,

indizes are simply created outside the create table


CREATE INDEX someTable_Date_create
  ON someTable
  USING btree
  (date_create);

As you are working on transferring, maybe you like to drop those 
varchar(xx) and replace them with text. Saves a lot of hassle lateron.


Harald



On 8/16/06, *louis gonzales* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Hello List,
PostgreSQL 8.0.1 (on Solaris 9)
There is a PERL program that a friend purchased which is used to
create
tables on a MySQL database, and of course ;) I want to run this on a
PostgreSQL database server instead.  The below is the code:
$sth=runSQL(CREATE TABLE someTable (
date_create bigint NOT NULL,
date_end bigint NOT NULL,
username VARCHAR(20) NOT NULL,
$cat_definition
id serial PRIMARY KEY,
status VARCHAR(20) NOT NULL,
$adfields
   visibility TEXT NOT NULL,
   priority TEXT NOT NULL,
   template TEXT NOT NULL,
   view bigint DEFAULT 0 NOT NULL,
   reply bigint DEFAULT 0 NOT NULL,
   save bigint DEFAULT 0 NOT NULL,
   updated bigint,
   photo VARCHAR(1) NOT NULL DEFAULT '0',
   INDEX(username),
   $cat_index
   INDEX(date_create) ););

What my question is, the INDEX(...) function calls, which work this
way on MySQL, don't work in PostgreSQL.  Does anybody know what a
synonymous way to modify the above code, for compatibility with
PostgreSQL?

FYI:yourVariable  INT UNSIGNED AUTO_INCREMENT(MySQL)
can be replaced by
   yourVariable serial

Thanks group!

--
Email:[EMAIL PROTECTED]
mailto:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of
broadcast)---
TIP 5: don't forget to increase your free space map settings




--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Let's set so double the killer delete select all. 




--
Email:[EMAIL PROTECTED]
WebSite:  http://www.linuxlouis.net
Open the pod bay doors HAL! -2001: A Space Odyssey
Good morning starshine, the Earth says hello. -Willy Wonka


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Tuning to speed select

2006-08-09 Thread louis gonzales
What about creating views on areas of the table that are queried often?  
I don't know if you have access or the ability to find what type of 
trends the table has, in terms of queries, but if you create some views 
on frequently visited information, this could also help.
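For instance (a hypothetical sketch, reusing the table and column names 
from Tom's query below), such a view might be:

```sql
-- A view restricted to one frequently queried slice of the big table.
CREATE VIEW frequent_blast_results AS
    SELECT bs_fk
    FROM   blast_result
    WHERE  si_fk = 11843254;
```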


Tom Laudeman wrote:


Hi,

I'm running PostgreSQL version 8 on a dual 2.4GHz Xeon with 1GB of RAM 
and an IDE hard drive. My big table has around 9 million records.


Is there a tuning parameter I can change to increase speed of selects? 
Clearly, there's already some buffering going on since selecting an 
indexed ~50,000 records takes 17 seconds on the first try, and only 
0.5 seconds on the second try (from pgsql).


cowpea= explain analyze select bs_fk from blast_result where si_fk=11843254;
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Index Scan using si_fk_index on blast_result  (cost=0.00..22874.87 
rows=58118 width=4) (actual time=112.249..17472.935 rows=50283 loops=1)
   Index Cond: (si_fk = 11843254)
 Total runtime: 17642.522 ms
(3 rows)

cowpea= explain analyze select bs_fk from blast_result where si_fk=11843254;
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Index Scan using si_fk_index on blast_result  (cost=0.00..22874.87 
rows=58118 width=4) (actual time=0.178..341.643 rows=50283 loops=1)
   Index Cond: (si_fk = 11843254)
 Total runtime: 505.011 ms
(3 rows)

cowpea= 



Thanks,
Tom





---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Tuning to speed select

2006-08-09 Thread louis gonzales
I'm not so sure about that.  When you create a view on a table - at 
least with Oracle - that is a subset (the trivial or 'proper' subset 
being the entire table) of the information in the table, then when a 
select is issued against that table, Oracle at least determines whether 
there is already a view on the table with a potentially smaller amount 
of information to process.  As long as the view contains the proper 
constraints that meet your 'select' criteria, the RDBMS engine will 
have fewer records to process - which I'd say certainly constitutes a 
time benefit, in terms of 'performance gain.'


Hence my reasoning behind determining IF there is a subset of the 'big 
table' that is frequented: I'd create a view on it, assuming postgresql 
works this way too?  Maybe somebody else can answer that for the 
pgsql-general list's general information?


Query speed itself is going to be as fast or slow as your system is 
configured for; my point, however, was to shave some time off of a 1M+ 
record table by implementing views of 'frequently' visited records 
meeting the same specifications.


Harald Armin Massa wrote:


Louis,

Views certainly help in managing complexity. They do nothing to 
improve query-speed.


Querying a view gets rewritten to queries to the underlying tables on 
the fly.
(as long as there are no materialized views, which are still on a the 
TODO list)


--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Let's set so double the killer delete select all. 




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] psql -h host ...

2006-06-27 Thread louis gonzales

James,
the psql command, as you know, is just the command line program that 
requests a connection to a database; how you issue the command 
determines whether it attempts to connect through a local socket file 
or via a network protocol (commonly TCP/IP).  When you issue the 
command from a remote host, relative to where the database is located, 
once the database server receives the connection request, that request 
is looked up in the pg_hba.conf file to validate the user attempting 
the request.


If the user meets the constraints imposed within, the user is granted 
access to the database, otherwise, try again.


If you launch the psql command directly on the server to connect 
locally to the database, then in that scenario the server is also 'the 
client', so it may look as though pg_hba.conf is consulted on 'the 
client'.  But this case is no different from the remote client to the 
server: both roles are still being fulfilled, and ultimately 'the 
server' is the one looking up the request in pg_hba.conf.


Hope this helps.
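As an illustration (database, user, and address are all hypothetical), 
a pg_hba.conf entry granting such a remote user access might look like:

```
# TYPE  DATABASE  USER     ADDRESS          METHOD
host    mydb      myuser   192.168.1.10/32  md5
```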

Martijn van Oosterhout wrote:


On Mon, Jun 26, 2006 at 01:51:24PM -0700, James wrote:
 


In this command
psql -h host ...

does it look for pg_hba.conf in the client or in the server?
   



The server. Client machines do not necessarily have a pg_hba.conf and
even if they do, clients are unlikely to be able to read it.

Have a nice day,
 




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Adding foreign key constraints without integrity check?

2006-06-19 Thread louis gonzales

Florian,
Are you certain:

You can only create an FK if the fields you are referencing in the 
foreign table form a PK there. And creating a PK implicitly creates an 
index, which you can't drop without dropping the PK :-(




I'm not convinced of the necessity of a foreign key needing to 
reference a primary-keyed entry in a different table.





Florian G. Pflug wrote:


Wes wrote:


You could create the fk-constraints _first_, then disable them, load
the data, reindex, and reenable them afterwards.

pg_dump/pg_restore can enable and disable fk-constraints before 
restoring

the data, I believe. It does so by tweaking the system catalogs.



Are you referring to '--disable-triggers' on pg_dump?  Will this work for
foreign key constraints?  The doc talks about triggers, but doesn't say
anything about FK constraints (are these implemented as triggers?)  I 
don't

use pg_restore, just psql.


Yes, I was referring to --disable-triggers. I always assumes that it 
disables FK-Constraints as well as triggers, but now that you ask I 
realize that I might have never actually tried that ;-)


But FK-Constraints _are_ implemented as triggers internally, so I 
guess it should work.


The only problem I can see is that creating the fk-constraints might 
create
some indices too. But maybe you can manually drop those indices 
afterwards - I
don't know if the fk really _depends_ on the index, or if it creates 
it only

for convenience.



I don't see any indexes being added to the table beyond what I add, and
those added as a primary key constraint.  Currently, pg_dump outputs 
the FK
constraints after the indexes are built, as the last steps.  If I try 
to add
the FK constraints after loading the database definitions, but 
without any

indexes, I'm not sure what would happen.


Hm.. it i tried it out, and came to the conclusion that my approach 
doesn't work :-(


You can only create an FK if the fields you are referencing in the 
foreign table form a PK there. And creating a PK implicitly creates an 
index, which you can't drop without dropping the PK :-(


So unless you find a way to force postgres to ignore the index when 
inserting data, my suggestion won't work :-(


greetings, Florian Pflug

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Adding foreign key constraints without integrity check?

2006-06-19 Thread louis gonzales

Florian,
I understand where you're coming from.  Indexes are always unique and 
all RDBMS systems use them to 'uniquely' identify a row from the 
perspective of internal software management.  Index != PrimaryKey, so 
every table created, despite any Primary/Foreign key constraints put on 
it, always has a 1-1 index per row entry.  At least that's the way I 
understand it; can someone else affirm this statement or redirect a 
misguided 'me ;)'?


Thanks group,

Florian G. Pflug wrote:


louis gonzales wrote:


Florian,
Are you certain:

You can only create an FK if the fields you are referencing in the 
foreign table form a PK there. And creating a PK implicitly creates 
an index, which you can't drop without dropping the PK :-(



Arg.. Should have written unique index instead of primary key..
But it doesn't change much, since a unique index and a pk are nearly
the same.

I'm not sure I am convinced the necessity of a foreign key, needing 
to reference a primary keyed entry from a different table.


I tried the following:
create table a(id int4) ;
create table b(id int4, a_id int4) ;
alter table b add constraint pk foreign key (a_id) references a (id) ;

Not sure, but maybe the syntax on this is slightly ambiguous.  Try 
creating table b with a primary key constraint on a_id, then alter the 
table to add foreign key constraint.  I'm going to look up a couple of 
references and see what I can dig up.  That may be perfectly legitimate 
syntax, but it just seems off to me.


Sorry if it is, I've spent the last few days on Oracle 9i, so I'm 
jumping around in my memory quite a bit for validity amongst 
different syntax.


The alter table gave me an error stating that I need to have a unique 
index

defined on a.id...





greetings, Florian Pflug

---(end of broadcast)---
TIP 6: explain analyze is your friend




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Adding foreign key constraints without integrity check?

2006-06-19 Thread louis gonzales

Florian,

So if you:

create table test (
id varchar(2) primary key,
age int );

create table test2 (
id varchar(2) primary key,
age2 int );

alter table test2 add foreign key (id) references test (id);

\d test2

you'll see that attribute id from test2, now has both a primary key 
constraint and a foreign key that references the primary key of test.


perhaps you can assert two constraints at the same time during an alter 
table ... not sure why your example syntax is failing



louis gonzales wrote:


Florian,
I understand where you're coming from.  Indexes are always unique and 
all RDBMS systems use them to 'uniquely' identify a row from the 
perspective of internal software management.  Index != PrimaryKey, so 
every table created, despite any Primary/Foreign key constraints put on 
it, always has a 1-1 index per row entry.  At least that's the way I 
understand it; can someone else affirm this statement or redirect a 
misguided 'me ;)'?


Thanks group,

Florian G. Pflug wrote:


louis gonzales wrote:


Florian,
Are you certain:

You can only create an FK if the fields you are referencing in the 
foreign table form a PK there. And creating a PK implicitly creates 
an index, which you can't drop without dropping the PK :-(



Arg.. Should have written unique index instead of primary key..
But it doesn't change much, since a unique index and a pk are nearly
the same.

I'm not sure I am convinced the necessity of a foreign key, 
needing to reference a primary keyed entry from a different table.



I tried the following:
create table a(id int4) ;
create table b(id int4, a_id int4) ;
alter table b add constraint pk foreign key (a_id) references a (id) ;

Not sure, but maybe the syntax on this is slightly ambiguous.  Try 
creating table b with a primary key constraint on a_id, then alter the 
table to add foreign key constraint.  I'm going to look up a couple of 
references and see what I can dig up.  That may be perfectly 
legitimate syntax, but it just seems off to me.


Sorry if it is, I've spent the last few days on Oracle 9i, so I'm 
jumping around in my memory quite a bit for validity amongst 
different syntax.


The alter table gave me an error stating that I need to have a unique 
index

defined on a.id...





greetings, Florian Pflug

---(end of broadcast)---
TIP 6: explain analyze is your friend





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Problem Connecting to 5432

2006-06-14 Thread louis gonzales

Try using the following format in the pg_hba.conf file:

host  all  all (or your_user_account)  your_IP/32  trust    (the /32 is 
the same as 255.255.255.255, but in CIDR format)


As for the command line you started postmaster with, doesn't the -i 
require an interface such as an IP address too? If you look below in 
your comments, you specify -i after your DATA directory but never give 
the -i an argument?




Casey, J Bart wrote:


All,

I have read message after message and searched the internet for hours, 
yet I still can’t get a remote computer to connect to port 5432 on my 
Fedora Core 3 system running Postgresql 7.4.7.


What I have done:

1) Stopped the iptables service

2) Modified postgresql.conf and added the following lines

tcpip_socket = true

port = 5432

3) Modified pg_hba.conf and added

host all all (my ip address) 255.255.255.255 trust

4) Modified the postgresql startup script to use the -i flag

5) Verified that postmaster is running with the -i flag… ps ax | grep 
postmaster output:


4259 pts/1 S 0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data -i

6) Tried to verify that the server was listening on port 5432 only to 
find out that it isn’t. The netstat output follows:


tcp 0 0 127.0.0.1:8438 0.0.0.0:* LISTEN

tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN

tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN

tcp 0 0 :::80 :::* LISTEN

tcp 0 0 :::22 :::* LISTEN

tcp 0 0 :::443 :::* LISTEN

As you can see it is only listening on the loopback interface

I’m quite certain the issue is how I am starting the service, but I’ve 
added the -i flag.


I’m all out of ideas on this one. Any and all help is greatly appreciated.

Regards,

Bart




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Problem Connecting to 5432

2006-06-14 Thread louis gonzales
My mistake, the -h host_IP option explicitly states which IP address to 
listen on.


/usr/bin/postmaster -h your_IP -p 5432 -D /var/lib/pgsql/data -i

I'm not sure if postgresql v7.x.y already shipped the pg_ctl command, which 
is essentially a wrapper for postmaster; if so, use:


pg_ctl -w -o "-h your_IP -p your_PORT" -l logfile start

(note that the options passed through to postmaster must be quoted after 
-o; the -l logfile is optional)

if you use your_IP = 0.0.0.0, it will listen on all valid TCP/IP 
interfaces, including 127.0.0.1 (a.k.a. localhost)




louis gonzales wrote:


Try using the following format in the pg_hba.conf file:

host all all(or your_user_account) your_IP/32 trust (The 32 is the 
same as 255.255.255.255 but in CIDR format)


As for the command line you started postmaster with, doesn't the -i 
require an interface such as an IP address too? If you look below in 
your comments, you specify -i after your DATA directory but never 
give the -i an argument?




Casey, J Bart wrote:


All,

I have read message after message and searched the internet for 
hours, yet I still can’t get a remote computer to connect to port 
5432 on my Fedora Core 3 system running Postgresql 7.4.7.


What I have done:

1) Stopped the iptables service

2) Modified postgresql.conf and added the following lines

tcpip_socket = true

port = 5432

3) Modified pg_hba.conf and added

host all all (my ip address) 255.255.255.255 trust

4) Modified the postgresql startup script to use the -i flag

5) Verified that postmaster is running with the -i flag… ps ax | grep 
postmaster output:


4259 pts/1 S 0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data -i

6) Tried to verify that the server was listening on port 5432 only to 
find out that it isn’t. The netstat output follows:


tcp 0 0 127.0.0.1:8438 0.0.0.0:* LISTEN

tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN

tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN

tcp 0 0 :::80 :::* LISTEN

tcp 0 0 :::22 :::* LISTEN

tcp 0 0 :::443 :::* LISTEN

As you can see it is only listening on the loopback interface

I’m quite certain the issue is how I am starting the service, but 
I’ve added the -i flag.


I’m all out of ideas on this one. Any and all help is greatly 
appreciated.


Regards,

Bart




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] PostgreSQL and Apache

2006-06-13 Thread louis gonzales
PHP is one alternative; another is Perl with CGI for writing web-based 
programs that handle GET/POST input and output from the browser. To 
interface with an SQL database such as PostgreSQL, you can use Perl's DBI 
module.


Leif B. Kristensen wrote:


On Tuesday 13. June 2006 15:39, jqpx37 wrote:
 


I'm working on a project involving PostgreSQL and Apache.

Anyone know of any good books or online how-to's on getting PostgreSQL
and Apache to work together?  (I'm also using PHP.)
   



AFAIK, there are no dependencies between Apache and PostgreSQL. PHP is 
what you'll use as the glue between them.


I've worked with PHP and MySQL for some years, and found the transition 
to PostgreSQL rather painless, but still I've considered buying 
the Beginning PHP and PostgreSQL 8: From Novice to Professional by W. 
Jason Gilmore and Robert H. Treat.
 




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL scalability concerns

2006-03-16 Thread Louis Gonzales

Hope this helps:

http://www.postgresql.org/files/about/casestudies/wcgcasestudyonpostgresqlv1.2.pdf

http://www.postgresql.org/about/users

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Clustered PostgreSQL

2006-03-14 Thread Louis Gonzales

Jojo Paderes wrote:


Is it possible to cluster PostgreSQL? If yes where can I find the
resource information on how to implement it?

--
http://jojopaderes.multiply.com
http://jojopaderes.wordpress.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org
 

Slony-I is another clustering solution for PostgreSQL.  What OS are you 
running?  I just deployed it on Solaris 9, between a Sun Ultra 
Enterprise E450 and a Sun Ultra 30.


It's really great.  Currently, it only supports a single master with 
multiple slaves: the master is the only node where database updates can 
occur, and the changes are then propagated to the slave nodes.


http://gborg.postgresql.org/project/slony1/projdisplay.php

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Build failures on RedHat 3.0 with openssl/kerberos

2006-03-14 Thread Louis Gonzales

Wes,
Did you try to ./configure without --enable-thread-safety?  I recently 
compiled PostgreSQL 8.0.1 on Solaris and _needed_ --enable-thread-safety 
strictly for building Slony-I against a postgresql built with that feature 
enabled.


What is the reason you are compiling this _with_ the feature?
If it's necessary, then you may need --with-includes= and/or --with-libs= 
with additional include directories, such as /usr/include:/usr/include/sys, 
or wherever the thread .h files live on your OS.

This configure attempt could be failing because it can't locate the 
correct thread headers and/or libraries
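For what it's worth, the combined invocation might look like this (the library path /usr/kerberos/lib is my guess for a typical Red Hat layout, not verified against Wes's system):

```shell
./configure --prefix=/usr/local/pgsql8.1.3 \
    --with-openssl --with-pam --enable-thread-safety \
    --with-krb5 \
    --with-includes=/usr/kerberos/include \
    --with-libs=/usr/kerberos/lib
```

If that still fails the thread test, config.log should show which header or library the test program couldn't find.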


Wes wrote:


I try to build 8.1.3 with:

 ./configure --prefix=/usr/local/pgsql8.1.3 --with-openssl --with-pam
--enable-thread-safety

It fails the openssl test, saying openssl/ssl.h is unavailable.  Digging
deeper, I find that it is because the test program with

 #include <openssl/ssl.h>

is failing because it can't include krb5.h.

Based on another post, I tried adding --with-krb5.  That explicitly
aborted with it unable to find krb5.h.  I then tried:

./configure --prefix=/usr/local/pgsql8.1.3 --with-openssl --with-pam
--enable-thread-safety --with-krb5 --with-includes=/usr/kerberos/include

Now it gets past both the openssl and kerberos, but bites the dust with:

configure: error:
*** Thread test program failed.  Your platform is not thread-safe.
*** Check the file 'config.log' for the exact reason.
***
*** You can use the configure option --enable-thread-safety-force
*** to force threads to be enabled.  However, you must then run
*** the program in src/tools/thread and add locking function calls
*** to your applications to guarantee thread safety.

If I remove the --with-krb5, it works.  Why does enabling Kerberos break
threads?

I haven't been able to find any issues in the archives with krb5 and
threads.  Am I missing something here?

Wes



---(end of broadcast)---
TIP 6: explain analyze is your friend
 




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] 8.0 Client can't connect to 7.3 server?

2006-03-08 Thread Louis Gonzales

Jussi Saarinen wrote:


I have following environment:

Server1, rh9, ip:192.168.1.10:
postgresql-7.3.4-3.rhl9
postgresql-libs-7.3.4-3.rhl9
postgresql-server-7.3.4-3.rhl9
postgresql-jdbc-7.3.4-3.rhl9

Server2, fc4, ip:192.168.1.11:
postgresql-libs-8.0.7-1.FC4.1
postgresql-8.0.7-1.FC4.1
postgresql-server-8.0.7-1.FC4.1
postgresql-jdbc-8.0.7-1.FC4.1
postgresql-contrib-8.0.7-1.FC4.1

I can't connect to server1 (7.3.4) using client (8.0.7) at server2. 
I just get error:

psql: FATAL:  No pg_hba.conf entry for host 192.168.1.11, user joe,
database template1

Uncommented lines at server1's pg_hba.conf (postgresql service is
restarted after every change):
local all all trust trust
host all all 192.168.1.11 255.255.255.255 trust

Server1 also have line:
tcpip_socket = true
in postgresql.conf

Any ideas what's wrong?

 


Two things come to mind:
1) do you have a defined postgresql user joe?
2) try 192.168.1.11/32 (without looking it up, I'm not sure if you can 
specify a subnet mask as an alternative to the /CIDR notation)
This is almost certainly an issue with the entry in pg_hba.conf on the 
server being contacted; it's just missing the correct configuration.


Remember OS user joe != postgresql user joe.
postgresql user joe must have been granted access to the database 
instance you're attempting to connect to; then you can have an entry like:


host   all   all   192.168.1.11/32   trust   (where postgresql user joe 
would be implied)
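As a sketch, creating the database-level user on the 7.3 server might look like this (run as the postgres superuser; "joe" is from the post above, but the database name is my own placeholder):

```sql
-- as superuser on server1 (PostgreSQL 7.3)
CREATE USER joe;

-- optionally give joe a database of his own
-- (the OWNER clause should be available in 7.3, but verify on your version):
CREATE DATABASE joedb OWNER joe;
```

With trust authentication in pg_hba.conf, no password is needed; for anything beyond a lab setup you would want md5 (or at least password) instead of trust.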


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Louis Gonzales




Paul Newman wrote:

  
  
  
  
  Hi,
  We run with multiple identical schemas in our db. Each schema actually 
represents a client's db. What we'd like to do is have a common schema 
where trigger functions and the like are held, whilst each trigger 
defined against the tables is in its own particular schema. This would 
mean that there is one function per trigger type to maintain.
  
  However, at the moment we are placing the trigger functions within 
each schema along with the trigger itself. The reason is that we don't 
know of a function or a variable that says "Give me the schema of the 
trigger that is calling this function". We are therefore having to write 
the function into every schema and then use "set search_path = br1;" as 
the first line. This is a real headache for us since we are intending on 
putting 200-300 schemas in one db.
  
  My question is: is there such a function or variable? Or is there a 
better way for us to achieve this?
  
  Regards
  
  Paul Newman
  

Paul,
When you say "multiple identical schemas", are they all separate
explicit schemas? Or are they all under the general 'public' schema?
From my understanding, when you create a new db instance, it's under
the public level schema by default, unless you create an explicit schema
and subsequently a db instance - or several - therein, effectively
establishing sibling db instances belonging to a single schema. I know
at least that data access in the form of table access is allowed across
the siblings. I'd also assume that this would be the case for triggers
and functions that could be identified or defined at the 'root' level
schema.

Now I'm sure there is associated jargon with this type of hierarchical
or tiered schema layout, so please don't anybody shoot me because of my
analogy to 'root' level scenario.

I think this is a great opportunity for somebody to add additional
insight with their experience with utilizing explicit schemas, rather
than the default public schema.

We have to remember that for every database instance there is at
least one schema to which it belongs, meaning that a schema is a db
container of sorts; there can be many database instances that exist in
one schema - typically public by default.

I know I'm opening up a big can of worms... but hey... let's have it ;)
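Back to Paul's actual question: the per-schema wrapper he describes might look like this sketch (the schema name br1 is from his message; the function name and body are my own illustration, not his code):

```sql
-- one copy of this function per schema; only the search_path line differs
CREATE OR REPLACE FUNCTION br1.audit_trigger() RETURNS trigger AS $$
BEGIN
    -- pin unqualified table names in the shared logic to this schema
    SET search_path = br1;
    -- ... common trigger logic goes here ...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

(Later PostgreSQL releases added a TG_TABLE_SCHEMA variable to PL/pgSQL triggers, which answers the question directly, but check whether your version has it before relying on it.)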




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Louis Gonzales




Scott Marlowe wrote:

  On Wed, 2006-03-08 at 14:19, Louis Gonzales wrote:

  
  
Paul,
When you say "multiple identical schemas" are they all separate
explicit schemas?  Or are they all under a general 'public' schema.
From my understanding, when you create a new db instance, it's under
the public level schema by default unless you create an explicit
schema and subsequently a db instance - or several - therein,
effectively establishing sibling db instances belonging to a single
schema, I know at least that data in the form of table access is
allowed across the siblings.  I'd also assume that this would be the
case for triggers and functions that could be identified or defined at
the 'root' level 

  
  
Ummm.  In PostgreSQL, schemas are contained within databases, not the
other way around.  It goes: a cluster contains databases, which contain
schemas, which contain objects (tables, sequences, indexes, et al.)

---(end of broadcast)---
TIP 6: explain analyze is your friend
  

I stand corrected - that's right. Under a database you create your
explicit schemas to organize the tables which constitute your separate
data, and all of the schemas belonging to a database instance can
share resources without conflicting with one another.

I apologize for giving an inaccurate description of the database-to-schema
relationship.




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Triggers and Multiple Schemas.

2006-03-08 Thread Louis Gonzales

Paul,
What is the current schema layout for your db instances?  I don't think 
it's possible to share across db instances like this:


dbname1.myschema.sometable
dbname2.myschema.sometable

But you can share resources of the following type:

dbname.myschema1.sometable
dbname.myschema2.sometable
dbname.myschema2.sometable2
dbname.myschema2.sometable3

I think it's a mis-statement to call each separate schema a DB, but the 
group
dbname.myschema2.(collection of objects) is effectively a separate DB, 
in that the tables are what constitute a functional db.


so you can treat
dbname.myschema1.(...)
and
dbname.myschema2.(...)
as separate databases that share common resources, because they belong 
to the same db instance, namely dbname.
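A minimal sketch of that layout (schema and table names are illustrative, not from Paul's setup):

```sql
-- both schemas live in the same database, reachable from one connection
CREATE SCHEMA myschema1;
CREATE SCHEMA myschema2;
CREATE TABLE myschema1.sometable (id int);
CREATE TABLE myschema2.sometable (id int);

-- cross-schema access works with schema-qualified names:
SELECT a.id
FROM myschema1.sometable a
JOIN myschema2.sometable b ON a.id = b.id;
```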


---(end of broadcast)---
TIP 6: explain analyze is your friend