[GENERAL] Statistics and Indexes

2005-07-05 Thread postgresql
Hi

I am from an MSSQL background and am trying to understand something about
statistics in PostgreSQL.

Question 1:
In MSSQL, if you create an index (and you are using MSSQL's default
settings) the Server will automatically create appropriate statistics for
you. Does this happen in PostgreSQL? Or should I explicitly create
statistics for every one of my indexes?

Question 2:
I believe ANALYZE keeps indexes and statistics up to date. How often
should this be run (assume that my DB has 200,000 new records daily)?

Thanks in advance
Craig

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] PITR problems

2005-07-05 Thread Per Lauvås
Hi


PITR (Point in Time Recovery) seems like an excellent feature, but I have trouble getting the hang of it.


My environment:

PostgreSQL 8.0.3 on windows 2000.


My procedure:

- Set up archiving with the archive_command. The files are stored on a remote computer in a folder called wal_archive. Looks fine.

- Set up a scheduled task running a script every half hour. The script copies the last modified file in the pg_xlog dir (I call this file current_wal) to the same remote computer, but in a different folder. The script is described later.

- SELECT pg_start_backup('label')

- Copy the data directory to the remote computer. I remove the contents (in the copy) of the pg_xlog directory and archive_status directory. The pg_start_backup function produces a backup history file indicating where in the current WAL segment the backup took place.
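
For reference, the archiving in the first step above is configured in postgresql.conf; the share and folder names below are placeholders, not the poster's actual settings:

```
# postgresql.conf - a sketch of the archiving setup described above
# (Windows; \\backuphost\wal_archive is an invented UNC path)
archive_command = 'copy "%p" "\\\\backuphost\\wal_archive\\%f"'
```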

Question 1: What do I need to save in my wal_archive? The backup history file is automatically placed there. Do I need the WAL segment on which the backup file is based? It is not automatically copied to the wal_archive. In that case; what if the backup file is called .39.00198218.backup and I have no 39 WAL segment? Do I use the last modified segment, although it has a different file name, for example 3F? Do I rename it???

The DOC tells me that the WAL segments numerically less than the current WAL segment are no longer needed. My base backup gave me a backup history file numbered 39 although all the WAL segments are numerically greater than 39!

Question 2: Should I delete the other WAL segments in the pg_xlog directory, although they are numerically greater?


So, after setting up the backup I now test the recovery process.

I think I'm well off when I have my backup history file, the archived WAL segments, and the current_wal_segment from the data directory.

- I replace the data directory, fill the pg_xlog directory with the backup history file and the archived WAL segments. But what about the current_wal_segment produced by the scheduled task? It doesn't seem to fit in. The archived WAL segments are numbered in sequence based upon the backup history file. So the numbers do not match the numbers in the pg_xlog directory. The result of the copy is a current_wal_segment with a number not following the last archived wal.

The result is that I am able to recover to the last archived WAL segment, but I am not anywhere close to recovering the last half hour. Has anyone mastered this?

Per


The script:

@echo off


@setlocal


set PG_WAL_AREA=D:\PostgreSQL\8.0\data\pg_xlog

set CUR_WAL_AREA=path to current_wal


SET lastfil=

FOR /F %%f in ('DIR %PG_WAL_AREA% /AA /OD /B') do set lastfil=%%f

ECHO. last file in directory is %lastfil%

COPY /Y %PG_WAL_AREA%\%lastfil% %CUR_WAL_AREA%


@endlocal








Re: [GENERAL] Statistics and Indexes

2005-07-05 Thread Bricklen Anderson
[EMAIL PROTECTED] wrote:
 Hi
 
 I am from a MSSQL background and am trying to understand something about
 statistics in PostgreSQL.
 
 Question 1:
 In MSSQL, if you create an index (and you are using MSSQL's default
 settings) the Server will automatically create appropriate statistics for
 you. Does this happen in PostgreSQL? Or should I explicitly create
 statistics for every one of my indexes?
 

Q2 sort of answers Q1.

 Question 2:
 I believe ANALYZE keeps indexes and statistics up to date. How often
 should this be run (assume that my DB has 200,000 new records daily)?

Try out a VACUUM ANALYZE every hour or three.
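
In practice that means something like the following; the table and column names are only illustrative:

```sql
-- Refresh planner statistics (and reclaim dead rows) for the whole database
VACUUM ANALYZE;

-- Or target one table, raising the statistics detail for a skewed column
-- ("orders" and "customer_id" are invented names for the example)
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```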




Re: [GENERAL] How can this be optimized, if possible?

2005-07-05 Thread Greg Stark
Net Virtual Mailing Lists [EMAIL PROTECTED] writes:

 The query I want to run against these two tables is something like this:
 
 SELECT
  count(*) as count,
  category.category,
  nlevel(category.category) AS level,
  subpath(category.category,0,nlevel(category.category)-1) as parent,
  category.head_title,
  category.cat_title,
  category.subcat_title
 FROM
  test,
  category
 WHERE
  test.category @ category.category 
 GROUP BY
   category.category, category.head_title, category.cat_title,
 category.subcat_title
 
 
 Many times the WHERE clause will contain additional search criteria on
 the 'test' table.  What I am trying to get is a count of how many rows
 from the test table fall into each category, being limited by the search
 criteria.

Post the output of EXPLAIN ANALYZE SELECT ...

Also list any indexes you have on the tables. Do you have a GIST index on
the ltree column in test?
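
For what it's worth, a GiST index on an ltree[] column would be created along these lines (table and column as in the quoted schema; the index name is invented):

```sql
-- GiST index supporting the ltree containment operators on test.category
-- (requires the contrib/ltree module to be installed)
CREATE INDEX test_category_gist_idx ON test USING gist (category);
```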

There are a number of ways of rewriting this query, you'll probably have some
success with one of them.

You could for example write it:

SELECT *,
  (SELECT count(*)
     FROM test
    WHERE category @ category.category
      [AND search criteria...]) as count
  FROM category

Normally I would say your form with the join gives the planner the maximum
flexibility, but I don't think the planner is going to be able to do any
better than nested loops with a join clause like that so I don't think this
will be any worse than the join. And it might have a better chance of using an
index on test.category.

But note that it's still got to do 300 scans of the test index. If each one
takes .5s then this query is still going to take 150s or so. But with a gist
index on the test.category column it may be more like 10s total. It will depend
partly on how many categories you have that span a large number of records in
test. That is, how many parent categories you have.


-- 
greg




[GENERAL] Regex escape [ character and change text result into integer

2005-07-05 Thread David Pratt
Hi.  I am using array_dims to give me the dimensions of a multidimensional
array, i.e.:


[1:5][1:2]

In my function I want to retrieve the value of the second number from
array_dims (5 in the example above)


This is what I am trying:

count_str = substr(dimensions from '\[\d\:(\d+)\]\[\d\:\d\]')
(this should give me 5 in the above example)

I have two problems:

1) I can't seem to escape the [ character - so how do I do this?
2) How do I turn my result which is originally text into an integer?

Regards,
David



Re: [GENERAL] Regex escape [ character and change text result into integer

2005-07-05 Thread Tom Lane
David Pratt [EMAIL PROTECTED] writes:
 Hi.  I am using array_dims to give me dimensions of  multidimensional 
 array ie:

 [1:5][1:2]

 In my function I want to retrieve the value of the second number from 
 array_dims (5 in example above)

Why aren't you using array_upper()?

 This is what I am trying:
 count_str = substr(dimensions from '\[\d\:(\d+)\]\[\d\:\d\]')

Perhaps you forgot to double the backslashes?

regards, tom lane
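
Both suggestions can be illustrated briefly; the array value below is invented to match the [1:5][1:2] example in the thread:

```sql
-- array_upper(array, dimension) returns an integer directly,
-- so no text-to-integer cast is needed
SELECT array_upper(ARRAY[[1,2],[3,4],[5,6],[7,8],[9,10]], 1);  -- 5 (first dimension)
SELECT array_upper(ARRAY[[1,2],[3,4],[5,6],[7,8],[9,10]], 2);  -- 2 (second dimension)

-- If you do keep the regex approach: doubled backslashes plus a cast
SELECT substring('[1:5][1:2]' from '\\[\\d+:(\\d+)\\]')::integer;  -- 5
```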



[GENERAL] default tablespaces

2005-07-05 Thread Jaime Casanova
Hi all,

is there a way to specify a default tablespace for tables and a
different one for indexes without put it in the CREATE statement?

i guess one way is to set 'default_tablespace' to the tables' tablespace,
create the tables without any indexes, then set it to the indexes'
tablespace and create all the indexes - a workaround that came to my mind.
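
That workaround would look roughly like this (the tablespace, table, and index names are made up for the example, and the tablespaces are assumed to already exist):

```sql
SET default_tablespace = ts_tables;
CREATE TABLE accounts (id integer, name text);   -- lands in ts_tables

SET default_tablespace = ts_indexes;
CREATE INDEX accounts_id_idx ON accounts (id);   -- lands in ts_indexes
```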

Any other idea?

-- 
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)



Re: [GENERAL] Regex escape [ character and change text result into integer

2005-07-05 Thread David Pratt
Hi Tom.  I misread the manual.  I thought array_upper could not be used on a
multidimensional array, but that restriction applies to specific
concatenation functions.  Thank you for clarifying this.


Regards,
David

On Tuesday, July 5, 2005, at 01:22 PM, Tom Lane wrote:


David Pratt [EMAIL PROTECTED] writes:

Hi.  I am using array_dims to give me dimensions of a multidimensional
array ie:

[1:5][1:2]

In my function I want to retrieve the value of the second number from
array_dims (5 in example above)

Why aren't you using array_upper()?

This is what I am trying:
count_str = substr(dimensions from '\[\d\:(\d+)\]\[\d\:\d\]')

Perhaps you forgot to double the backslashes?

regards, tom lane



Re: [GENERAL] How can this be optimized, if possible?

2005-07-05 Thread Dann Corbit
Would it help if you created a trigger that puts a category count into a
statistics table for each table?  Perhaps you could gather most of the
information you need from such a process.  If an estimate is good enough
and you are using the statistics collector and if you have an index on
that column, perhaps you could even query the system statistics tables.
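
A minimal sketch of that trigger idea, with invented table and function names; it treats category as a single ltree for simplicity (the poster's column is actually ltree[], which would need a loop over its elements), uses 8.0 dollar quoting, and omits UPDATE/DELETE handling a real version would need:

```sql
-- category_counts caches per-category row counts for test
CREATE TABLE category_counts (
    category ltree PRIMARY KEY,
    cnt      bigint NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION bump_category_count() RETURNS trigger AS $$
BEGIN
    UPDATE category_counts SET cnt = cnt + 1 WHERE category = NEW.category;
    IF NOT FOUND THEN
        INSERT INTO category_counts (category, cnt) VALUES (NEW.category, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_count_trg AFTER INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE bump_category_count();
```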

Do you have a large number of categories, or is it only a few?

If it is only 10 categories or so, and if they do not change, then
perhaps you can reformulate it as a collection of equi-joins.

What does the query plan look like?  What sort of hardware are you
using?  What version of PostgreSQL are you using?

Your query speed does seem shockingly slow for only 30,000 rows in the
largest table.

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Net Virtual Mailing Lists
 Sent: Monday, July 04, 2005 9:47 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] How can this be optimized, if possible?
 
 Hello,
 
 
 My database has grown far faster than expected and a query which used to
 run acceptably now does not.  I'm trying to figure out a way to make this
 operate faster and scale better.  I'm very open to the idea that this
 does not need to be done using a SQL query at all - right now I'm really
 just in need of some conceptual/architectural help on this one.

 So I have two tables:

  Table category
     Column    |       Type        | Modifiers
 --------------+-------------------+-----------
  head_title   | character varying |
  cat_title    | character varying |
  subcat_title | character varying |
  category     | ltree             |

  Table test
   Column   |  Type   | Modifiers
 ----------+---------+------------------------------------------------------
  id       | integer | not null default nextval('master.test_id_seq'::text)
  category | ltree[] |

 ... there are other fields in the test table, but these are really the
 only two relevant to this.

 The query I want to run against these two tables is something like this:

 SELECT
  count(*) as count,
  category.category,
  nlevel(category.category) AS level,
  subpath(category.category,0,nlevel(category.category)-1) as parent,
  category.head_title,
  category.cat_title,
  category.subcat_title
 FROM
  test,
  category
 WHERE
  test.category @ category.category
 GROUP BY
   category.category, category.head_title, category.cat_title,
 category.subcat_title

 Many times the WHERE clause will contain additional search criteria on
 the 'test' table.  What I am trying to get is a count of how many rows
 from the test table fall into each category, being limited by the search
 criteria.

 This query is starting to take an enormous amount of time (30+ seconds)
 and I really need the results of this in a couple seconds tops.  I can
 do a select category from test and it completes in about .5 seconds.
 The category table currently only has 225 rows, the test table having
 approximately 30,000.  SELECT count(category),category FROM test GROUP BY
 category is quite slow and I thought of making a materialized view of
 this, but then of course I don't see any way to make that result limited
 by my search criteria.

 I am completely open to re-architecting this entirely; performance of
 this query is critical to my application - I really just am not sure
 where to start.  It seems like everything I do is worse than what I
 started with.

 ... It *almost* seems as if I need to build some sort of search engine
 like tool which performs all the queries against the database, has tons
 of memory, and caches the category attributes for each record in memory.
 This sure seems like a lot of work though - I sincerely hope there is an
 easier way.

 Thanks for your help, as always!

 - Greg
 


[GENERAL] double entries into database when using IE

2005-07-05 Thread D A GERM
I am having a problem when a browser dumps data into a table. 

When the browser is Firefox, it works perfectly; when the browser is IE,
it dumps 2 entries into the table just thousandths of a second apart.


What could be causing this? Is this an issue with IE, Apache or Postgres?

Thanks in advance for any help.

--
D. Aaron Germ
Scarborough Library, Shepherd University
(304) 876-5423

Well then what am I supposed to do with all my creative ideas- take a bath and wash 
myself with them? 'Cause that is what soap is for (Peter, Family Guy)





Re: [GENERAL] PostgreSQL sequence within function

2005-07-05 Thread Tony Caduto

Try this version of your function.
I don't think you can assign a value to a variable in the declaration 
section with the return value of a function.


CREATE OR REPLACE FUNCTION sp_slide_create(int4, bool, bool, bool, 
varchar, text, varchar, varchar, int4)

RETURNS int4 AS'
DECLARE
aScriptID ALIAS FOR $1;
aAllowDGP ALIAS FOR $2;
aAllowDGO ALIAS FOR $3;
aWaitForSlideFinish ALIAS FOR $4;
aTitle ALIAS FOR $5;
aText ALIAS FOR $6;
aFlashFileDGP ALIAS FOR $7;
aFlashFileDGO ALIAS FOR $8;
aSlideType ALIAS FOR $9;
seqID int4;
BEGIN
 seqID := nextval(''seqslideid'');
INSERT INTO tblslides
(slideid, scriptID, allowdgp, allowdgo, waitforslidefinish, title, 
text, flashfiledgp, flashfiledgo, slidetype)

VALUES
(seqID, aScriptID, aAllowDGP, aAllowDGO, aWaitForSlideFinish, aTitle, 
aText, aFlashFileDGP, aFlashFileDGO, aSlideType);


RETURN seqID;
END;'

LANGUAGE 'plpgsql' VOLATILE;


Clark Allan wrote:


Thanks for the help Tony,
But I'm still having some trouble.
 






Re: [GENERAL] PITR problems

2005-07-05 Thread Frank Finner
Hi,

I can tell you how I do this on Linux:

1. Set up the archiving command in postgresql.conf. Archiving of complete WAL 
files (that is, of WALs whose 16M space is completely filled) starts 
automatically after restarting the engine. No renaming, no nothing else with 
complete WALs. No deleting either. You'll need all the WALs since the time of 
the full backup for recovery!
2. Copy the database with the pg_start_backup feature. Clean up pg_xlog in that 
copy.
3. Run a cronjob that finds the latest incomplete WAL and copy it also into the 
archive (but into a subdirectory, not to confuse with the complete WALs).

I copy the full backup to another machine once I do the backup, usually once at 
night. Also I copy the WAL archive including the incomplete WAL to that other 
machine once per minute while taking the incomplete WAL with rsync. Also no 
renaming.

To start the backup database on that other machine I have written a script that 
basically does the following:

0. Shut down any running postgresql engine. Shut down the interface 
connecting to the master to prevent further copying of WALs.
1. Wipe the postgresql data directory and copy the full backup to that 
directory, not including any WALs that might be within the full backup. pg_xlog 
must be empty except an empty directory archive_status.
2. Create a file recovery.conf in the data directory containing the single 
line restore_command = 'cp /archivedirectory/%f %p'. /archivedirectory contains all the archived 
WALs including the incomplete last one. The oldest one must be at least as old 
as the start time of the full backup. I usually keep some even older ones, so I 
have at least one complete WAL, that is older than the full backup.
3. Start postgresql. It will recover with all the WALs from the archive 
directory, starting with the one the full backup requires and using all the 
WALs piece by piece up to and including the last incomplete one. Incomplete 
does NOT mean, that this WAL is shorter than 16M, but the 16M space is not 
completely filled.
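
Step 2's recovery.conf would contain, in full (the archive path is illustrative, as in Frank's description):

```
# recovery.conf - placed in the data directory before starting postgres;
# /archivedirectory is whatever holds the archived WALs
restore_command = 'cp /archivedirectory/%f %p'
```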

If everything goes well, you have a running consistent database. You may now 
start the interface to the master again.

Recovery might take some minutes due to data masses of WALs. For example, I 
have about 160M of WALs per hour.

Regards, Frank.



On Tue, 5 Jul 2005 15:24:35 +0200 Per Lauvås [EMAIL PROTECTED] thought long, 
then sat down and wrote:

 Hi
 
 PITR (Point in Time Recovery) seems like an excellent feature, but I have 
 trouble getting the hang of it.
 
 My environment:
 PostgreSQL 8.0.3 on windows 2000.
 
 My procedure:
 - Set up archiving with the archive_command. The files are stored on a remote 
 computer in a folder called wal_archive. Looks fine.
 - Set up a scheduled task running a script every half hour. The script copies 
 the last modified file in the pg_xlog dir (I call this file current_wal) to 
 the same remote computer, but in a different folder. The script is described 
 later.
 - SELECT pg_start_backup('label')
 - Copy the data directory to the remote computer. I remove the contents (in 
 the copy) of the pg_xlog directory and archive_status directory. The 
 pg_start_backup function produces a backup history file indicating where in 
 the current WAL segment the backup took place.
 Question 1: What do I need to save in my wal_archive. The backup file is 
 automatically placed there. Do I need the WAL segment in which the backup 
 file is based upon? It is not automatically copied to the wal_archive. In 
 that case; what if the backup file is called .39.00198218.backup and I 
 have no 39 WAL segment? Do I use the last modified segment, although it 
 has a different file name, for example 3F? Do I rename it???
 
 The DOC tells me that the WAL segments numerically less than the current WAL 
 segment are no longer needed. My base backup gave me a backup history file 
 numbered 39 although all the WAL segments are numerically greater than 39!
 
 Question 2: Should I delete the other WAL segments in the pg_xlog directory, 
 although they are numerically greater?
 
 So, after setting up the backup I now test the recovery process.
 I think I'm well off when I have my backup history file, the archived WAL 
 segments, the current_wal_segment on the data directory.
 - I replace the data directory, fill the pg_xlog directory with the backup 
 history file and the archived WAL segments. But what about the 
 current_wal_segment produced by the scheduled task? It doesn't seem to fit 
 in. The archived WAL segments are numbered in sequence based upon the backup 
 history file. So the numbers do not match the numbers in the pg_xlog 
 directory. The result of the copy is a current_wal_segment with a number not 
 following the last archived wal.
 
 The result is that I am able to backup to the last archived WAL segment, but 
 I am not anywhere close to recover to the last half hour. Has anyone mastered 
 this?
 
 Per
 
 The script:
 @echo off
 
 @setlocal
 
 set 

Re: [GENERAL] double entries into database when using IE

2005-07-05 Thread Michael Fuhr
On Tue, Jul 05, 2005 at 02:04:47PM -0400, D A GERM wrote:

 I am having a problem when a browser dumps data into a table. 

How exactly is the browser dumping data into a table?  Via a form
submission handled by a CGI program or the like?

 When the browser is Firefox , it works perfect; when the browser is IE, 
 it dumps 2 entries into the table just thousandths of a second apart.

If you're doing a form submission, do you see multiple entries for
IE in the web server's access logs?

 What could be causing this? Is this an issue with IE, Apache or Postgres.

We'd need more information to say for sure.  Can you tell us more
about the path from the browser to the database?  Have you tried
to simplify the problem, i.e., create the smallest possible test
case that exhibits the undesirable behavior?  Sometimes that means
writing a separate program that isn't useful to your application
per se, but that eliminates the irrelevant factors so you can focus
on what matters.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/



Re: [GENERAL] double entries into database when using IE

2005-07-05 Thread David Gagnon

Hi,

I have seen IE posting a request twice to the server when you post using a 
button that uses javascript.  For example, if you have a submit button 
and call a javascript function that actually posts the form... IE will still do 
the post attached to the submit button.  So you will have 2 submits.
It's stupid but it's true. 


It happens on some versions of IE6, I think .. not all, from what I remember.

You can search the web for the solution ... I don't remember what I did 
or if I actually did something to solve the problem.


/David

P.S.: This is quite OT for the postgresql mailing list .. don't you think?

 D A GERM wrote:


I am having a problem when a browser dumps data into a table.
When the browser is Firefox , it works perfect; when the browser is 
IE, it dumps 2 entries into the table just thousandths of a second apart.


What could be causing this? Is this an issue with IE, Apache or Postgres.

Thanks in advance for any help.






Re: [GENERAL] PostgreSQL sequence within function

2005-07-05 Thread Tony Caduto

Or upgrade your server to 8.x and use dollar quoting.
with dollar quoting all that is a thing of the past.

CREATE FUNCTION sp_slide_create(int4) RETURNS int4 AS
$$
DECLARE
aScriptID ALIAS FOR $1;
seqID int4 := nextval('genseq'); -- no magic needed with dollar quoting :-)
BEGIN

INSERT INTO tblslides (slideid) VALUES (seqID);

RETURN seqID;

END;
$$
LANGUAGE 'plpgsql' VOLATILE




[GENERAL] Custom C function shutdown-signaling

2005-07-05 Thread Otto Blomqvist
How do I signal a custom C function that the Postmaster wants to shut down ?

I want to use pg_ctl restart -mf  and not pg_ctl  restart -mi  because
of data integrity concerns...

Any ideas ?

Thanks

/Otto Blomqvist






Re: [GENERAL] double entries into database when using IE

2005-07-05 Thread SCassidy

Hi,

If a web page has  JavaScript called on a submit button, like
'onClick=someFunction();', and the function does some stuff, then does a
form.submit(), and DOES NOT REMEMBER TO RETURN FALSE, then the browser is
supposed to go ahead and submit the form.  So, something like that could
possibly happen, but only if the JavaScript is incorrectly written, as far
as I know.   Although, once a submit is done, it would be a gray area as to
whether the form data would still be available for the second submit.

I use JavaScript in forms all the time to interact with a PostgreSQL
database, and don't have any issues like this with debugged code, using
IE6.

Susan




   
David Gagnon wrote on 07/05/2005 02:24 PM:



Hi,

I have seen IE posting request twice to the server when you post using a
button that use javascript.  Per example, if you have a submit button
and call a javascript that actually post the form... IE will still do
the post attached to the submit button.  So you will have 2 submits
It's stupid but it's true.

It's happen on some version of IE6 I think .. not all for what I remember.

You can search the web for the solution ... I don't remember what I did
or if I actually did something to solve the problem.

/David

P.S.: This is quite OT for the postgresql mailing list .. you don't think ?

  D A GERM wrote:

 I am having a problem when a browser dumps data into a table.
 When the browser is Firefox , it works perfect; when the browser is
 IE, it dumps 2 entries into the table just thousandths of a second apart.

 What could be causing this? Is this an issue with IE, Apache or Postgres.

 Thanks in advance for any help.










--
See our award-winning line of tape and disk-based
backup  recovery solutions at http://www.overlandstorage.com
--




Re: [GENERAL] PostgreSQL sequence within function

2005-07-05 Thread Clark Allan
ahhh... very nice. Thank you.
On 7/5/05, Tony Caduto [EMAIL PROTECTED] wrote:
Or upgrade your server to 8.x and use dollar quoting.
With dollar quoting all that is a thing of the past.

CREATE FUNCTION sp_slide_create(int4) RETURNS int4 AS
$$
DECLARE
aScriptID ALIAS FOR $1;
seqID int4 := nextval('genseq'); -- no magic needed with dollar quoting :-)
BEGIN

INSERT INTO tblslides (slideid) VALUES (seqID);

RETURN seqID;

END;
$$
LANGUAGE 'plpgsql' VOLATILE


Re: [GENERAL] Custom C function shutdown-signaling

2005-07-05 Thread Tom Lane
Otto Blomqvist [EMAIL PROTECTED] writes:
 How do I signal a custom C function that the Postmaster wants to shut down ?

Do CHECK_FOR_INTERRUPTS(); at a suitable spot in the C function's main
loop.

regards, tom lane



Re: [GENERAL] double entries into database when using IE

2005-07-05 Thread David Gagnon








[EMAIL PROTECTED] wrote:

  Hi,

If a web page has JavaScript called on a submit button, like
'onClick=someFunction();', and the function does some stuff, then does a
form.submit(), and DOES NOT REMEMBER TO RETURN FALSE, then the browser is
supposed to go ahead and submit the form.  So, something like that could
possibly happen, but only if the JavaScript is incorrectly written, as far
as I know.   Although, once a submit is done, it would be a gray area as to
whether the form data would still be available for the second submit.


I think this thread discusses the problem .. like they say, I started
having the problem with some versions of IE6 but not IE5.
http://forums.devshed.com/showthread.php?t=50160&page=1&pp=15
http://www.tfug.org/pipermail/tfug/2004-October/007785.html

They don't give the answer, but here is what it might be:

<input class="input" type="submit"
 name="Submit" value="Submit" onclick="javascript:submitit();">

if you use javascript to submit the form,
do not use a button type of submit. it will submit twice.

correct code:

<input class="input" type="button"
 name="Submit" value="Submit" onclick="javascript:submitit();">

Like SCassidy said .. maybe returning FALSE will solve the problem ..
but I remember having tried this without success...

That's all I know :-)




  
I use _javascript_ in forms all the time to interact with a PostgreSQL
database, and don't have any issues like this with debugged code, using
IE6.

Susan



   

[GENERAL] How can I make SELECT return tuple numbers?

2005-07-05 Thread Roy Souther




Is there any way to make a SELECT return an extra field that is a sequential
number for the tuples that are returned?

I am trying to populate a table with an INSERT ... SELECT and would like to
make one of the fields a sequence starting with 1 for the first tuple and
incrementing. I don't want the field to be a unique index. As the query is
run again and again I want more tuples appended to the table, and each time
the counter should start at 1 and count up.







Roy Souther
www.SiliconTao.com
Let Open Source help your business move beyond.

For security this message is digitally authenticated by GnuPG.















Re: [GENERAL] How can I make SELECT return tuple numbers?

2005-07-05 Thread Michael Fuhr
On Tue, Jul 05, 2005 at 09:34:51PM -0600, Roy Souther wrote:

 Is there any way to make a SELECT return an extra field that is a
 sequential number for the tuples that are returned?

You could use a sequence.

 I am trying to populate a table with an INSERT SELECT and would like to
 make one of the fields a sequence starting with 1 for the first tuple
 and incrementing. I don't want the field to be a unique index. As the
 query is run again and again I want more tuples appended to the table
 and each time the counter should start at 1 and count up.

You could reset a sequence's start value before each insert.  You
might even be able to do it automagically with a statement-level before
trigger that resets the sequence value and a DEFAULT expression or
row-level trigger that uses nextval() to obtain the next sequence
value.  That appears to work in simple tests, although I admittedly
haven't thought it through very far to see if there would be any
problems.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/



Re: [GENERAL] How can I make SELECT return tuple numbers?

2005-07-05 Thread Michael Fuhr
On Tue, Jul 05, 2005 at 09:55:48PM -0600, Michael Fuhr wrote:
 
 You could reset a sequence's start value before each insert.  You
  might even be able to do it automagically with a statement-level before
 trigger that resets the sequence value and a DEFAULT expression or
 row-level trigger that uses nextval() to obtain the next sequence
 value.  That appears to work in simple tests, although I admittedly
 haven't thought it through very far to see if there would be any
 problems.

One problem would involve concurrent inserts obtaining values from
the same sequence; I think using a temporary (i.e., session-local)
sequence should mitigate that problem.  You could create the sequence
in a statement-level before trigger and drop it in a statement-level
after trigger to make its use somewhat invisible.  But again, I've
only done this in simple tests, so it deserves more thought.
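
A rough sketch of the temporary-sequence idea (the table, column, and sequence names are invented for the example):

```sql
-- per-session sequence; dropped automatically at end of session
CREATE TEMPORARY SEQUENCE rownum_seq;

-- number the inserted tuples 1..N for this batch
INSERT INTO target (rownum, payload)
SELECT nextval('rownum_seq'), payload FROM source;

-- restart at 1 before the next batch (false = next nextval returns 1)
SELECT setval('rownum_seq', 1, false);
```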

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/



[GENERAL] Seg fault in postgres 7.4.7?

2005-07-05 Thread Akash Garg
My postgres 7.4.7 installation crashed on its own today -- here is the
error log:

ERROR:  duplicate key violates unique constraint "toolbar_pkey"
LOG:  server process (PID 22753) was terminated by signal 11
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back
the current transaction and exit, because another server process
exited abnormally and possibly corrupted shared memory.

Is this a known bug?

Akash
