[GENERAL] Checkpoints questions

2008-03-04 Thread Henrik

Hi list,

I'm using 8.3 and I've started looking at the new checkpoint features.

As a starter does anyone have some clues how to analyse this:

db=# select * from pg_stat_bgwriter;
 checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc
-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------
               118 |             435 |            1925161 |        126291 |                7 |         1397373 |       2665693



Thanks!
//Henke

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Checkpoints questions

2008-03-04 Thread Devi

Hi,
Hope this helps
http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm

Thanks
DEVI.G
- Original Message - 
From: Henrik [EMAIL PROTECTED]

To: pgsql-general@postgresql.org
Sent: Tuesday, March 04, 2008 3:28 PM
Subject: [GENERAL] Checkpoints questions





Re: [GENERAL] Checkpoints questions

2008-03-04 Thread Greg Smith

On Tue, 4 Mar 2008, Henrik wrote:


As a starter does anyone have some clues how to analyse this:

db=# select * from pg_stat_bgwriter;
 checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean | maxwritten_clean | buffers_backend | buffers_alloc
-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------
               118 |             435 |            1925161 |        126291 |                7 |         1397373 |       2665693


Ah, nobody has asked this question yet.  This is a good sample and I'm 
going to assimilate it into my document that someone already suggested to 
you.


You had 118 checkpoints that happened because checkpoint_timeout passed.
The other 435 happened before the timeout was reached; typically those are
because checkpoint_segments was reached.  This suggests you might improve
your checkpoint situation by increasing checkpoint_segments, but that's not
a bad ratio.  Increasing that parameter and spacing checkpoints further
apart gives the checkpoint spreading logic of checkpoint_completion_target
more room to work with, which reduces the average load from the checkpoint
process.


During those checkpoints, 1,925,161 8K buffers were written out.  That 
means on average, a typical checkpoint is writing 3481 buffers out, which 
works out to be 27.2MB each.  Pretty low, but that's an average; there 
could have been some checkpoints that wrote a lot more while others wrote 
nothing, and you'd need to sample this data regularly to figure that out.


The background writer cleaned 126,291 buffers (cleaned=wrote out dirty 
ones) during that time.  7 times, it wrote the maximum number it was 
allowed to before meeting its other goals.  That's pretty low; if it were 
higher, it would be obvious you could gain some improvement by increasing 
bgwriter_lru_maxpages.


Since last reset, 2,665,693 8K buffers were allocated to hold database 
pages.  Out of those allocations, 1,397,373 times a database backend 
(probably the client itself) had to write a page in order to make space 
for the new allocation.  That's not awful, but it's not great.  You might 
try and get a higher percentage written by the background writer in 
advance of when the backend needs them by increasing 
bgwriter_lru_maxpages, bgwriter_lru_multiplier, and decreasing 
bgwriter_delay--making the changes in that order is the most effective 
strategy.
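That backend-versus-background-writer balance can be watched with a query over the same view. A sketch; this percentage is just the counters combined, not a built-in metric:

```sql
-- Share of dirty-buffer writes done by backends vs. checkpoints and the
-- background writer; a lower backend share generally means smoother I/O.
SELECT buffers_checkpoint,
       buffers_clean,
       buffers_backend,
       round(100.0 * buffers_backend
             / (buffers_checkpoint + buffers_clean + buffers_backend), 1)
           AS backend_write_pct
FROM pg_stat_bgwriter;
```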


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



Re: [GENERAL] debug nonstandard use of \\ in a string literal

2008-03-04 Thread Ivan Zolotukhin
Thanks guys, this simple solution worked. Why didn't I guess before?..

On Thu, Feb 28, 2008 at 2:28 PM, Albe Laurenz [EMAIL PROTECTED] wrote:

 Ivan Zolotukhin wrote:
   From time to time I face these well-known warnings in the
   PostgreSQL log, i.e.
  
   Feb 28 04:21:10 db7 postgres[31142]: [2-1] WARNING:  nonstandard use
   of escape in a string literal at character 62
   Feb 28 04:21:10 db7 postgres[31142]: [2-2] HINT:  Use the escape
   string syntax for escapes, e.g., E'\r\n'.
  
   This is fine, everybody knows about that and our PL/PgSQL developers
   try to make use of escape syntax. But sometimes errors occur anyway
   (by developers mistakes or something). So the question is: how to
   debug these annoying messages when pretty big application causes them?
   Is it possible to have a look what exact queries produced them?

  All I can think of is to set

  log_statement=all
  log_min_error_statement=WARNING
  log_min_messages=WARNING

  which will cause all statements and warnings to be logged.

  This might of course generate a lot of output...

  Yours,
  Laurenz Albe




Re: [GENERAL] Checkpoints questions

2008-03-04 Thread Henrik


4 mar 2008 kl. 13.45 skrev Greg Smith:




Ah, thank you Greg. I actually studied your paper before writing to
this list but couldn't apply your example to mine. Now I know how I
can interpret those numbers. Also, thank you for the performance
improvement suggestions. Knowing which parameters to tweak based on
the output of pg_stat_bgwriter is one of the most difficult things
to understand, but you helped me a great deal.


Thanks!

//Henke



Re: [GENERAL] Build 8.3 with OpenSSL on CentOS 5.x?

2008-03-04 Thread Douglas McNaught
On 3/4/08, Devrim GÜNDÜZ [EMAIL PROTECTED] wrote:
 Hi,


  On Tue, 2008-03-04 at 00:34 -0500, Tom Lane wrote:
   Karl Denninger [EMAIL PROTECTED] writes:
Anyone know where the magic incantation is to find the crypto
   libraries?
  
   If the RPM layout is the same as Fedora (which it surely oughta be)
   openssl is what provides libcrypto.so.


 I think you meant openssl-devel ?

That probably is needed if you're compiling against the library (since
it has the header files) but the actual runtime shared library is in
'openssl'--you don't need 'devel' if you're just installing binaries.

-Doug



[GENERAL] how do you write aggregate function

2008-03-04 Thread Justin
The documentation is not very clear, and I can't find examples of how to
write an aggregate function.

I searched the mailing list archives and did not find anything.

I need to write a weighted average function, so keeping track of multiple
variables between function calls is a must.

I see how the sfunc(internal-state, next-data-values) keeps track of the
passed values via the internal state. Can the internal state be an array
type, where I just add new entries to a multi-dimensional array and then
do all the math in the ffunc?

One person pointed out the PL/R plugin, but what I'm working on gets tied
into the PostBooks/OpenMfg project, and they are not big fans of adding
another language and plugin that they have to make sure their customers
have deployed (and I'm one of those customers).

PostBooks/OpenMfg will want all the functions in PL/pgSQL. Does this
present any major problems?
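A minimal weighted-average aggregate along these lines might look like the following sketch (assumes numeric inputs; the function, aggregate, and column names are made up):

```sql
-- State is a two-element array: {sum of value*weight, sum of weight}.
-- STRICT makes the aggregate skip rows with NULL value or weight.
CREATE OR REPLACE FUNCTION wavg_sfunc(state numeric[], val numeric, weight numeric)
RETURNS numeric[] AS $$
    SELECT ARRAY[state[1] + val * weight, state[2] + weight];
$$ LANGUAGE sql IMMUTABLE STRICT;

CREATE OR REPLACE FUNCTION wavg_ffunc(state numeric[])
RETURNS numeric AS $$
    SELECT CASE WHEN state[2] = 0 THEN NULL
                ELSE state[1] / state[2] END;
$$ LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE wavg(numeric, numeric) (
    sfunc     = wavg_sfunc,
    stype     = numeric[],
    finalfunc = wavg_ffunc,
    initcond  = '{0,0}'
);

-- Usage (hypothetical table): SELECT wavg(price, volume) FROM trades;
```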


Re: [GENERAL] how do you write aggregate function

2008-03-04 Thread Colin Wetherbee

Justin wrote:
The documentation is not very clear, and I can't find examples of how to
write an aggregate function.


Examples:

http://www.postgresql.org/docs/8.2/static/xaggr.html


I searched the archive of the mail list and did not find anything


The online documentation is excellent for these sorts of things.

I need to write a weighted average function, so keeping track of multiple
variables between function calls is a must?


You don't necessarily need an aggregate function to do this.

A weighted average takes several independent variables, weights them 
based on some constant (usually the difference between a static time and 
the time at which the data were recorded), and returns a value [0]. 
Maintaining state between calls is probably going to be more trouble 
than it's worth, especially if you're recomputing the weights all the 
time... which, in most cases, is what happens.


I perform exponential moving average analysis of stock market and 
trading data, for which I have a table that contains columns like the 
following (these data are not intended to resemble the performance of 
any particular security).


 id | time  | price
----+-------+-------
  1 | 09:30 | 89.54
  2 | 09:31 | 89.58
  3 | 09:32 | 89.53
  4 | 09:33 | 89.5
  5 | 09:34 | 89.51
  6 | 09:35 | 89.5
  7 | 09:36 | 89.42
  8 | 09:37 | 89.44

When I compute the exponential average of these data, I'm always looking 
at the most recent X prices as I loop over all the rows in which I'm 
interested, which means I need to recompute the weighted values for 
every minute of data (in the case of this sample table, anyway). 
Maintaining state for that sort of calculation wouldn't be worth the 
overhead.


I suggest writing a function (in PL/pgSQL or whatever your favorite 
flavor is) that performs a query to retrieve all the rows you need and 
outputs a SETOF data that contains the weighted averages.  If you only 
need one average at a time, just return a single value instead of a 
SETOF values.
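A sketch of that approach for the sample table above, computing an exponential moving average in PL/pgSQL (the table and column names, and the smoothing factor alpha, are assumptions):

```sql
-- Returns one EMA value per row of a hypothetical prices("time", price)
-- table, ordered by time. alpha is the smoothing factor in (0, 1].
CREATE OR REPLACE FUNCTION ema(alpha numeric)
RETURNS SETOF numeric AS $$
DECLARE
    r       RECORD;
    ema_val numeric := NULL;
BEGIN
    FOR r IN SELECT price FROM prices ORDER BY "time" LOOP
        IF ema_val IS NULL THEN
            ema_val := r.price;             -- seed with the first price
        ELSE
            ema_val := alpha * r.price + (1 - alpha) * ema_val;
        END IF;
        RETURN NEXT ema_val;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT * FROM ema(0.2);
```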


I hope this helps, but in case it doesn't, you should probably give us a 
little more detail about what you're actually trying to do.


Colin

[0] Using a formula like this: 
http://en.wikipedia.org/wiki/Moving_average#Weighted_moving_average




[GENERAL] PostgreSQL Conference East, only 3 weeks left

2008-03-04 Thread Joshua D. Drake

Hello,

There are only three weeks left to register for the PostgreSQL
Community Conference: East! 

The conference is scheduled on March 29th and 30th (a Saturday and
Sunday) at the University of Maryland. Come join us as Bruce Momjian
and Joshua Drake have a round table open to any PostgreSQL question
from the community. 

If a round table isn't your cup of tea, visit our talks page where you
can pick to learn from over 20 experts in the field.

http://www.postgresqlconference.org/talks/

To register just point that old fashion web browser over to:

http://www.postgresqlconference.org/

All registrations and sponsorships are donations to PostgreSQL via
Software in the Public Interest, Inc., a 501(c)3 non-profit corporation.

Thanks again to our Community Conference Sponsors:

Organizational Sponsor
Command Prompt, Inc. http://www.commandprompt.com/

Silver Sponsor
EnterpriseDB http://www.enterprisedb.com

Talk Sponsors
Afilias http://www.afilias.org/
Continuent http://www.continuent.com/
Sun http://www.sun.com/
Truviso http://www.truviso.com/
Xtuple http://www.xtuple.com/

Meal Sponsors
OTG http://www.otg-nc.com/

General Sponsor
Emma http://www.myemma.com/


Sincerely,

Joshua D. Drake


- -- 
The PostgreSQL Company since 1997: http://www.commandprompt.com/ 
PostgreSQL Community Conference: http://www.postgresqlconference.org/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL SPI Liaison | SPI Director |  PostgreSQL political pundit




[GENERAL] Benetl version 1.6

2008-03-04 Thread Benoît Carpentier
Dear all,

Version 1.6 of Benetl has been released.

It brings a new transformation engine (a new version of the formula
component), which makes it possible to transform data by combining
several mathematical functions.

It also corrects a timestamp problem introduced in version 1.3.

Thanks for your interest, and enjoy PostgreSQL (and Benetl).

Regards,

--
Benoît Carpentier
www.benetl.net | Founder of Benetl | Java Developer






[GENERAL] Question about PostgreSQL's bytea compatibility with MS Access as front-end

2008-03-04 Thread Kristina Yamamoto
Hi,

I'd really appreciate any input for this issue we've been having. 

I work for an environmental non-profit. We have a database that is used to 
store our wildlife tracking data, which includes photos of wildlife taken by 
motion-detector cameras set up near Vail, CO. The database was originally 
created in Microsoft Access 2003 (not by me), and because of the large number 
of records (4000+), the database is now at the 2 GB limit set by Microsoft. 
The non-profit employees are set on storing the photos in the database.

We decided to migrate the database to PostgreSQL, because of the ability to 
still use Access as the front-end (there are forms, queries, etc that are 
easier for the volunteers to use in Access than learning SQL.) We bought the 
DBConvert program, which stores the photos in bytea format in PostgreSQL. 
However, when we try to link to the PostgreSQL database using Access, the 
pictures are no longer recognized as OLE - instead they are binary and are no 
longer visible using Access' image viewer.

I realize this could be an issue on any of the three fronts: DbConvert, MS 
Access, or PostgreSQL. I'm currently in contact with the DBConvert company, but 
I'd like some feedback from the bright minds on this mailing list.

Has anyone encountered this sort of problem before? Short of taking the photos 
out of the database and just saving a link to their folder(s), can anyone think 
of another solution?

Thanks very much! 

-Kristina


  


Re: [GENERAL] Documenting a DB schema

2008-03-04 Thread Oisin Glynn

Shahaf Abileah wrote:


I’m looking for a systematic way to document the schema for the 
database behind our website (www.redfin.com http://www.redfin.com/), 
so that the developers using this database have a better idea what all 
the tables and columns mean and what data to expect. Any recommendations?


It would be great if the documentation could be kept as close to the 
code as possible – that way we stand a chance of keeping it up to 
date. So, in the same way that Java docs go right there on top of the 
class or method definitions, it would be great if I could attach my 
comments to the table definitions. It looks like MySQL has that kind 
of capability:


create table table_with_comments(a int comment 'this is column a...');

(see http://dev.mysql.com/doc/refman/5.0/en/create-table.html)

However, Postgres doesn’t support the “comment” keyword. Is there an 
alternative?


Thanks,

--S

*Shahaf Abileah *|* Lead Software Developer *

[EMAIL PROTECTED] | tel: 206.859.2869 | fax: 877.733.3469

Redfin Corporation
710 2nd Ave
Suite 600
Seattle, WA 98104

It's probably best to try one list and see if you get a response rather 
than sending the same message to three lists.

Comments are supported:

CREATE TABLE follow_me_destination
(
mailbox_number character varying(10), -- Follow me users mailbox number.
destination_number character varying(32), -- Follow me phone number.
dest_id serial NOT NULL
)
WITHOUT OIDS;
ALTER TABLE follow_me_destination OWNER TO postgres;
COMMENT ON TABLE follow_me_destination IS 'Stores follow me numbers for 
system users.';
COMMENT ON COLUMN follow_me_destination.mailbox_number IS 'Follow me 
users mailbox number.';
COMMENT ON COLUMN follow_me_destination.destination_number IS 'Follow me 
phone number.';




Re: [GENERAL] Documenting a DB schema

2008-03-04 Thread Alan Hodgson
On Tuesday 04 March 2008, Shahaf Abileah [EMAIL PROTECTED] wrote:

 However, Postgres doesn't support the comment keyword.  Is there an
 alternative?

comment on table table_name is 'comment';
comment on column table.column_name is 'comment';


-- 
Alan



[GENERAL] Documenting a DB schema

2008-03-04 Thread Shahaf Abileah
I'm looking for a systematic way to document the schema for the database
behind our website (www.redfin.com), so that the developers using this
database have a better idea what all the tables and columns mean and what
data to expect.  Any recommendations?

 

It would be great if the documentation could be kept as close to the
code as possible - that way we stand a chance of keeping it up to date.
So, in the same way that Java docs go right there on top of the class or
method definitions, it would be great if I could attach my comments to
the table definitions. It looks like MySQL has that kind of capability:

 

create table table_with_comments(a int comment 'this is
column a...');

 

(see http://dev.mysql.com/doc/refman/5.0/en/create-table.html)

 

However, Postgres doesn't support the comment keyword.  Is there an
alternative?

 

Thanks,

 

--S

 

Shahaf Abileah | Lead Software Developer 

[EMAIL PROTECTED] | tel: 206.859.2869 | fax: 877.733.3469

Redfin Corporation
710 2nd Ave
Suite 600
Seattle, WA 98104

 



Re: [GENERAL] Documenting a DB schema

2008-03-04 Thread Colin Wetherbee

Shahaf Abileah wrote:
It would be great if the documentation could be kept as close to the 
code as possible – that way we stand a chance of keeping it up to date.  
So, in the same way that Java docs go right there on top of the class or 
method definitions, it would be great if I could attach my comments to 
the table definitions. It looks like MySQL has that kind of capability:


Please do not cross-post.  One list is enough.

PostgreSQL has that functionality, too.

cww=# CREATE TABLE foo (a INTEGER, b INTEGER);
CREATE TABLE
cww=# COMMENT ON TABLE foo IS 'my comment';
COMMENT
cww=# \d+
  List of relations
 Schema | Name | Type  | Owner | Description
+--+---+---+-
 public | foo  | table | cww   | my comment
(1 row)

COMMENT is well-documented.

http://www.postgresql.org/docs/8.3/static/sql-comment.html

Colin



Re: [GENERAL] [SQL] Documenting a DB schema

2008-03-04 Thread Steve Crawford

Shahaf Abileah wrote:


I'm looking for a systematic way to document the schema for the 
database behind our website (www.redfin.com http://www.redfin.com/), 
so that the developers using this database have a better idea what all 
the tables and columns mean and what data to expect.  Any recommendations?


 

It would be great if the documentation could be kept as close to the 
code as possible -- that way we stand a chance of keeping it up to 
date.  So, in the same way that Java docs go right there on top of the 
class or method definitions, it would be great if I could attach my 
comments to the table definitions. It looks like MySQL has that kind 
of capability:


 

create table table_with_comments(a int comment 'this is 
column a...');


 


(see http://dev.mysql.com/doc/refman/5.0/en/create-table.html)

 

However, Postgres doesn't support the comment keyword.  Is there an 
alternative?



You mean like:
COMMENT ON mytable IS 'This is my table. Mine, mine, mine';

You can also comment columns, databases, functions, schemas, domains, etc.

Cheers,
Steve



Re: [GENERAL] Documenting a DB schema

2008-03-04 Thread Tomás Di Doménico
Check http://www.postgresql.org/docs/8.3/interactive/sql-comment.html

Cheers!



Re: [GENERAL] Planner: rows=1 after similar to where condition.

2008-03-04 Thread Joris Dobbelsteen
-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Joris 
Dobbelsteen
Sent: Monday, 25 February 2008 17:08
To: Tom Lane
Cc: Gregory Stark; Scott Marlowe; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Planner: rows=1 after similar to 
where condition. 

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Monday, 25 February 2008 16:34
To: Joris Dobbelsteen
Cc: Gregory Stark; Scott Marlowe; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Planner: rows=1 after similar to 
where condition. 

Joris Dobbelsteen [EMAIL PROTECTED] writes:
 Bitmap Heap Scan on log_syslog syslog  (cost=11168.32..16988.84
 rows=1
 width=221) (actual time=11145.729..30067.606 rows=212 loops=1)
   Recheck Cond: (((program)::text = 'amavis'::text) AND 
 ((facility)::text = 'mail'::text))
   Filter: (((priority)::text = 'notice'::text) AND ((text)::text ~
 '***:^(?:amavis\\[[0-9]+\\]: \\([0-9]+-[-0-9]+\\) Passed
[A-Za-z0-9]+,
 [][0-9\\.]* [^]+ - [^]+, Message-ID: [^]+,
 (Resent-Message-ID: [^]+, |)mail.id: [^ ,]+, Hits: [-+0-9\\.,]+,
 queued.as: [^ ,]+, [0-9]+ ms)$'::text))

It's not too surprising that you'd get a small selectivity 
estimate for 
such a long regexp; the default estimate is just based on the 
amount of 
fixed text in the pattern, and you've got a lot.

If you increase the stats target for the column to 100 or 
more then it 
will try actually applying the regexp to all the histogram entries.
That might or might not give you a better estimate.

I will try that; expect results back within a few days (I'll have it 
collect a better sample set). Unfortunately the regex is not so much 
for narrowing down the selection as for guaranteeing the format of 
the messages.
You seem to consider the common case differently, and I can agree for 
the most part; my use case is different from the expected one. That 
said, might a less aggressive selectivity estimation for long strings 
work better in the common case?

A new test case (I did a fresh VACUUM ANALYZE with your statistics target
for text set to 100):
Around 5288 rows out of 4.3 million match.

Bitmap Heap Scan on log_syslog syslog  (cost=1.94..53522.27 rows=1
width=226) (actual time=41661.354..92719.083 rows=5288 loops=1)
  Recheck Cond: (((program)::text = 'amavis'::text) AND
((facility)::text = 'mail'::text))
  Filter: (((priority)::text = 'notice'::text) AND ((text)::text ~
'***:^(?:amavis\\[[0-9]+\\]: \\([0-9]+-[-0-9]+\\) Passed [A-Za-z0-9]+,
[][0-9\\.]* [^]+ - [^]+, Message-ID: [^]+,
(Resent-Message-ID: [^]+, |)mail.id: [^ ,]+, Hits: [-+0-9\\.,]+,
queued.as: [^ ,]+, [0-9]+ ms)$'::text))
  -  BitmapAnd  (cost=1.94..1.94 rows=15279 width=0) (actual
time=4641.009..4641.009 rows=0 loops=1)
-  Bitmap Index Scan on IX_log_syslog_program
(cost=0.00..2908.86 rows=113370 width=0) (actual time=2913.718..2913.718
rows=113897 loops=1)
  Index Cond: ((program)::text = 'amavis'::text)
-  Bitmap Index Scan on IX_log_syslog_facility
(cost=0.00..14868.57 rows=591426 width=0) (actual
time=1715.591..1715.591 rows=586509 loops=1)
  Index Cond: ((facility)::text = 'mail'::text)
Total runtime: 92738.389 ms

Unfortunately, Tom, it seems the data varies too much and is not included
in the histogram. In this case, a regex used NOT for selection but rather
to enforce the input format should be handled differently.
My construction with the regex inside a substring() call and a
WHERE substring(...) IS NOT NULL condition seems to give a better estimate
in these cases. The result seems equivalent.

Bitmap Heap Scan on log_syslog syslog  (cost=17783.78..53966.33
rows=5844 width=226) (actual time=59095.076..110913.152 rows=5295
loops=1)
  Recheck Cond: (((program)::text = 'amavis'::text) AND
((facility)::text = 'mail'::text))
  Filter: (((priority)::text = 'notice'::text) AND
(substring((text)::text, 'amavis\\[[0-9]+\\]: \\([0-9]+-[-0-9]+\\)
Passed \\[A-Za-z0-9]+\\, [][0-9.]* [^]+ - [^]+, Message-ID:
[^]+, (Resent-Message-ID: [^]+, |)mail_id: [^ ,]+, Hits:
[-+0-9.,]+, queued_as: [^ ,]+, [0-9]+ ms'::text, '\\'::text) IS NOT
NULL))
  -  BitmapAnd  (cost=17783.78..17783.78 rows=15279 width=0) (actual
time=4003.657..4003.657 rows=0 loops=1)
-  Bitmap Index Scan on IX_log_syslog_program
(cost=0.00..2908.86 rows=113370 width=0) (actual time=1652.278..1652.278
rows=113939 loops=1)
  Index Cond: ((program)::text = 'amavis'::text)
-  Bitmap Index Scan on IX_log_syslog_facility
(cost=0.00..14868.57 rows=591426 width=0) (actual
time=2339.943..2339.943 rows=586653 loops=1)
  Index Cond: ((facility)::text = 'mail'::text)
Total runtime: 110921.978 ms

Note: the few added rows in the second run are because this is a live
table that receives input continuously.

Concluding:
Your estimator is really great and seems to give pretty good estimates,
except for regular expressions, which seem trickier in this regard.

A good note might be to 

Re: [GENERAL] [ADMIN] GRANT ALL ON recursive for all tables in my scheme?

2008-03-04 Thread Tom Lane
A Lau [EMAIL PROTECTED] writes:
 I recently searched for a way to grant permissions for a newly created user
 to all tables in a schema or database. I only found ways that use psql and
 scripts. But I'm astonished that there is no way to do it with a grant
 all on database [schema] ... option. Actually I thought that a grant on a
 schema or database would recursively set the corresponding permissions on
 the contained objects (e.g. tables, views, ...). Is there a way to do it
 easily in SQL syntax, without psql and scripting? Why can't it be done with
 the grant operator?

Because the SQL spec says what GRANT should do, and that's not in it.

If you plan in advance for this sort of thing then it can be quite
painless.  The best way is to grant permissions on the individual
objects to roles, and then grant membership in those roles to particular
users.  Users can come and go but the role permissions grants stay about
the same.
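In practice that pattern looks something like this (a sketch; the role, table, and user names are all placeholders):

```sql
-- Grant object privileges to a role once; manage people via membership.
CREATE ROLE app_readers;                   -- privilege-holding role
GRANT SELECT ON customers TO app_readers;  -- repeat per object
CREATE ROLE alice LOGIN;                   -- an actual user
GRANT app_readers TO alice;                -- users come and go; the
                                           -- object grants stay put
```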

If you didn't plan in advance then you find yourself wishing for
recursive grants, wildcard grants, future grants, and all sorts of
action-at-a-distance ideas that have been seen before on these lists
:-(.  Personally I think that scripts and plpgsql functions are
perfectly fine solutions for such needs, mainly because they're easily
customizable.  Anything we were to hard-wire into GRANT would solve only
some cases.
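A plpgsql function of the kind mentioned might be sketched like this (the function name and the choice of SELECT privilege are placeholders; adjust the relkind filter and privilege list to taste):

```sql
-- Grant SELECT on every table and view in a schema to a role.
CREATE OR REPLACE FUNCTION grant_select_on_schema(schemaname text, grantee text)
RETURNS void AS $$
DECLARE
    t RECORD;
BEGIN
    FOR t IN
        SELECT quote_ident(n.nspname) || '.' || quote_ident(c.relname) AS fqname
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind IN ('r', 'v') AND n.nspname = schemaname
    LOOP
        EXECUTE 'GRANT SELECT ON ' || t.fqname || ' TO ' || quote_ident(grantee);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT grant_select_on_schema('public', 'app_readers');
```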

regards, tom lane



[GENERAL] using warm standby with drbd???

2008-03-04 Thread libra dba
Hi All!

I have configured warm standby using WAL files and DRBD. The WAL files
are generated on server A. I am using DRBD to replicate these WAL files
between server A and server B. The DRBD service runs in the primary role on
server A and the secondary role on server B. The WAL files are written to
server B, but there is a drawback with DRBD: the filesystem must stay
unmounted on server B (where DRBD is in the secondary role).

So recovery is not continuous (the filesystem is unmounted). To start
recovery, I need to change the DRBD role to primary on server B and then
mount the filesystem. Once I mount the filesystem, the recovery process
starts.

Is there any way I can avoid switching the secondary/primary DRBD role
on server B?
Is there any way I can have the filesystem mounted on server B while it
runs in the DRBD secondary role?

How else can I replicate the WAL files? (I don't want to use a shared
filesystem such as NFS, etc.)

Another thing I want to ask: if we are generating archives every minute,
what happens to data written on server A, say, 35 seconds after the last
WAL file was generated, if server A then crashes?

Since this data has not been archived (it was due to be archived after
1 minute), am I right in thinking that those 35 seconds of data are lost
in a server A crash, as they have neither been written to the archived
logs nor transported to server B?

A quick reply is highly appreciated!

Thanks!


[GENERAL] GRANT ALL ON recursive for all tables in my scheme?

2008-03-04 Thread A Lau
I recently searched for a way to grant permissions to a newly created user
for all tables in a schema or database. I only found approaches that use
psql and scripts. But I'm astonished that there is no way to do it with a
"GRANT ALL ON DATABASE [SCHEMA] ..." option. Actually I thought that a
grant on a schema or database would recursively set the corresponding
permissions on the contained objects (e.g. tables, views...). Is there a
way to do it easily in SQL syntax, without psql and scripting? Why can't it
be done with the GRANT operator?

Thanks for the help.

Andreas




Re: [GENERAL] Planner: rows=1 after similar to where condition.

2008-03-04 Thread Tom Lane
Joris Dobbelsteen [EMAIL PROTECTED] writes:
> From: Tom Lane [mailto:[EMAIL PROTECTED]
>> If you increase the stats target for the column to 100 or more then it
>> will try actually applying the regexp to all the histogram entries.
>> That might or might not give you a better estimate.

> A new test case (I did a fresh VACUUM ANALYZE with your statistics for
> text set to 100):
> Around 5288 rows out of 4.3 million match.

Ah, you had not given us that number before.  That's one in 800 rows,
more or less, which means that there's no chance of getting a
well-founded statistical estimate with fewer than 800 items in the stats
collection.  Does it do any better with the stats target set to 1000?

I think though that the real problem may be that the index condition

   Recheck Cond: (((program)::text = 'amavis'::text) AND
 ((facility)::text = 'mail'::text))

selects rows that match the regex with much higher probability than the
general row population does.  Since we don't yet have any cross-column
statistics the planner has no chance of realizing that.

> My construction with the regex as a substring() construction and a
> WHERE substring() IS NOT NULL seems to give a better estimate in these
> cases. The result seems equivalent.

Actually, it's got exactly 0 knowledge about substring() and is giving
you a completely generic guess for this clause :-(

regards, tom lane
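[Editor's note: for reference, raising the per-column statistics target and
re-analyzing might look like this; the table and column names below are
hypothetical placeholders, not the poster's actual schema.]

```sql
-- Hypothetical table/column names; adjust to your own schema.
ALTER TABLE logentries ALTER COLUMN message SET STATISTICS 1000;
ANALYZE logentries;

-- Then re-check the planner's row estimate:
EXPLAIN SELECT * FROM logentries WHERE message ~ 'some pattern';
```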



Re: [GENERAL] [SQL] Documenting a DB schema

2008-03-04 Thread Adrian Klaver

-------------- Original message --------------
From: Shahaf Abileah [EMAIL PROTECTED]

> I'm looking for a systematic way to document the schema for the database
> behind our website (www.redfin.com), so that the developers using this
> database have a better idea what all the tables and columns mean and
> what data to expect.  Any recommendations?
>
> It would be great if the documentation could be kept as close to the
> code as possible - that way we stand a chance of keeping it up to date.
> So, in the same way that Java docs go right there on top of the class or
> method definitions, it would be great if I could attach my comments to
> the table definitions.  It looks like MySQL has that kind of capability:
>
>     create table table_with_comments(a int comment 'this is
>     column a...');
>
> (see http://dev.mysql.com/doc/refman/5.0/en/create-table.html)
>
> However, Postgres doesn't support the comment keyword.  Is there an
> alternative?
>
> Thanks,
>
> --S

See:
http://www.postgresql.org/docs/8.2/interactive/sql-comment.html

--
Adrian Klaver
[EMAIL PROTECTED]
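[Editor's note: as a sketch of the COMMENT command referenced above, with
made-up table and column names:]

```sql
CREATE TABLE listings (price numeric);

COMMENT ON TABLE listings IS 'One row per property listing';
COMMENT ON COLUMN listings.price IS 'Asking price in US dollars';

-- Comments appear in psql's \d+ output, or can be queried directly:
SELECT obj_description('listings'::regclass, 'pg_class');
```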

 




Re: [GENERAL] GRANT ALL ON recursive for all tables in my scheme?

2008-03-04 Thread Alan Hodgson
On Tuesday 04 March 2008, A Lau [EMAIL PROTECTED] wrote:
> I recently searched for a way to grant permissions to a newly created
> user for all tables in a schema or database. I only found approaches
> that use psql and scripts. But I'm astonished that there is no way to do
> it with a "GRANT ALL ON DATABASE [SCHEMA] ..." option. Actually I
> thought that a grant on a schema or database would recursively set the
> corresponding permissions on the contained objects (e.g. tables,
> views...). Is there a way to do it easily in SQL syntax, without psql
> and scripting? Why can't it be done with the GRANT operator?


Normally you would have a group role or roles that have appropriate 
permissions already, and then just grant role to new_user for group 
membership.

-- 
Alan
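[Editor's note: for completeness, the script-based workaround mentioned in
the thread can be wrapped in a plpgsql function that generates one GRANT
per table from the catalogs. This is a sketch; the function name, schema,
role, and privilege set are placeholders.]

```sql
-- Grant ALL on every table in a schema to a role (illustrative sketch).
CREATE OR REPLACE FUNCTION grant_all_in_schema(p_schema text, p_role text)
RETURNS void AS $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT quote_ident(schemaname) || '.' || quote_ident(tablename) AS tbl
        FROM pg_tables
        WHERE schemaname = p_schema
    LOOP
        -- Build and run one GRANT statement per table.
        EXECUTE 'GRANT ALL ON ' || r.tbl || ' TO ' || quote_ident(p_role);
    END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT grant_all_in_schema('public', 'newuser');
```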



Re: [GENERAL] using warm standby with drbd???

2008-03-04 Thread Alan Hodgson
On Tuesday 04 March 2008, libra dba [EMAIL PROTECTED] wrote:
> How else can I replicate the WAL files? (I don't want to use a common
> file system, NFS, etc.)

scp


> Another thing which I want to ask is that if we are generating archives
> every 1 minute, then what happens to the data which was written to
> server A in the 35 seconds after the last WAL file generation (server A
> crashes)?

It's gone.

If that isn't acceptable then I would suggest putting the active pg_xlog 
directory on DRBD. That may have speed implications.

-- 
Alan
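[Editor's note: the "up to a minute of data can be lost" window is set by
archive_timeout in postgresql.conf; WAL written since the last completed
segment switch has not been archived yet and is lost if only WAL shipping
is used. A sketch with illustrative values and a hypothetical archive path:]

```
# postgresql.conf (8.3) -- values and path are illustrative
archive_mode    = on
archive_timeout = 60    # force a WAL segment switch at least every 60 s
archive_command = 'cp %p /mnt/drbd/walarchive/%f'
```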



[GENERAL] I don't understand this WARNING on pg_ctl startup

2008-03-04 Thread Ralph Smith

===

[EMAIL PROTECTED]:/usr/lib/postgresql/8.2/bin$ 2008-03-04 15:47:21 PST LOG:  0: could not load root certificate file root.crt: no SSL error reported

2008-03-04 15:47:21 PST DETAIL:  Will not verify client certificates.
2008-03-04 15:47:21 PST LOCATION:  initialize_SSL, be-secure.c:785

-

ALSO, when I start PG via
  ./pg_ctl start -D /var/lib/postgresql/8.2/main
then `ps -ef` shows:
postgres   872 1  4 15:47 pts/2    00:00:00 /usr/lib/postgresql/8.2/bin/postgres

whereas at other times I get:
postgres 31784 1  5 15:20 pts/2    00:00:00 /usr/lib/postgresql/8.2/bin/postgres -D /var/lib/postgresql/8.2/main

Can anyone tell me what the difference is?


Thank you,
Ralph Smith
==






Re: [GENERAL] using warm standby with drbd???

2008-03-04 Thread Erik Jones


On Mar 4, 2008, at 6:02 PM, Alan Hodgson wrote:

> On Tuesday 04 March 2008, libra dba [EMAIL PROTECTED] wrote:
>> How else can I replicate the WAL files? (I don't want to use a common
>> file system, NFS, etc.)
>
> scp


Actually, scp is a bad choice for transferring WAL files if you're
planning on continuous replay.  You want something that supports
atomic transfers.  rsync is a typical choice, and I've been meaning to
check out unison for a while now.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com
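[Editor's note: an archive_command along these lines ships each finished
WAL segment with rsync, which writes to a temporary name and renames it
atomically on completion. The hostname and destination path below are
placeholders.]

```
# postgresql.conf on the primary -- hostname and path are hypothetical
archive_command = 'rsync -a %p standby:/var/lib/postgresql/walarchive/%f'
```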






[GENERAL] PGPOOL_HA

2008-03-04 Thread Luis Alberto Pérez Paz
Hi,

I'm going to implement PGPOOL-II in my system.
I was reading the official site (http://pgfoundry.org/projects/pgpool/)
and found an excellent solution called PGPOOL-HA; however, I have not
found information on how to implement it.

Has anyone implemented it? If so, can you share your experiences?



Best Regards,



-- 
paz, amor y comprensión
   (1967-1994)


[GENERAL] Import file into bytea field in SQL/plpgsql?

2008-03-04 Thread Erwin Brandstetter
Hi!

What I want to do:
Import a file from the file system into a bytea field of a table.

I know how to do it with large objects:
   INSERT INTO mytable(oid_fld) VALUES (lo_import('/mypath/myfile'));
And export from there:
   SELECT lo_export(oid_fld, '/mypath/myfile2') FROM mytable WHERE
some condition;

Now, I could copy over from pg_largeobject:
   INSERT INTO mytable(bytea_fld) SELECT data FROM pg_largeobject
WHERE loid = 1234567;
And create a large object and export from there as above.
But that seems unnecessarily complex, and  .. well .. stupid.

There must be a simpler way to import/export a file (as a whole, an
image for instance) into/out of my bytea field - in SQL or plpgsql?
Probably another set of functions I overlooked?


Thanks in advance
Erwin
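[Editor's note: the large-object detour the poster describes can at least be
wrapped in a single server-side function. This is a sketch: it requires the
file to be readable by the server process, and lo_import on the server side
needs superuser rights; mytable/bytea_fld are the poster's example names.]

```sql
CREATE OR REPLACE FUNCTION bytea_import(p_path text) RETURNS bytea AS $$
DECLARE
    l_oid    oid;
    r        record;
    l_result bytea := '';
BEGIN
    l_oid := lo_import(p_path);          -- server-side file read
    FOR r IN SELECT data FROM pg_largeobject
             WHERE loid = l_oid ORDER BY pageno
    LOOP
        l_result := l_result || r.data;  -- concatenate the pages
    END LOOP;
    PERFORM lo_unlink(l_oid);            -- drop the temporary large object
    RETURN l_result;
END;
$$ LANGUAGE plpgsql;

INSERT INTO mytable (bytea_fld) VALUES (bytea_import('/mypath/myfile'));
```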



Re: [GENERAL] I don't understand this WARNING on pg_ctl startup

2008-03-04 Thread Chris

Ralph Smith wrote:

===

[EMAIL PROTECTED]:/usr/lib/postgresql/8.2/bin$ 2008-03-04 15:47:21 PST LOG:  0: could not load root certificate file root.crt: no SSL error reported

2008-03-04 15:47:21 PST DETAIL:  Will not verify client certificates.
2008-03-04 15:47:21 PST LOCATION:  initialize_SSL, be-secure.c:785


You have:

ssl = true

in your postgresql.conf file but when postgres tries to start up, there 
is no root.crt file (or it can't be read due to permissions).


http://www.postgresql.org/docs/8.2/interactive/ssl-tcp.html

--
Postgresql & php tutorials
http://www.designmagick.com/



Re: [GENERAL] how do you write aggregate function

2008-03-04 Thread Richard Broersma
On Tue, Mar 4, 2008 at 8:44 AM, Justin [EMAIL PROTECTED] wrote:

> I searched the archive of the mailing list and did not find anything

I don't know if you've already seen this, but this is the question that I
asked a while back:
http://archives.postgresql.org/pgsql-general/2007-12/msg00681.php

Regards,
Richard Broersma Jr.
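[Editor's note: for a concrete illustration, a minimal user-defined
aggregate looks like this. The product() aggregate is invented for the
example; it is not built in.]

```sql
-- State transition function: multiply the running state by the next value.
CREATE FUNCTION numeric_mul(state numeric, next numeric)
RETURNS numeric AS $$
    SELECT $1 * $2;
$$ LANGUAGE sql IMMUTABLE STRICT;

-- The aggregate ties the transition function to a state type and an
-- initial condition.
CREATE AGGREGATE product (numeric) (
    SFUNC    = numeric_mul,
    STYPE    = numeric,
    INITCOND = '1'
);

SELECT product(n) FROM generate_series(1, 5) AS t(n);  -- 120
```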


[GENERAL] Find Number Of Sundays Between Two Dates

2008-03-04 Thread raghukumar

Hi everybody,

What query can I use to find the number of Sundays between two given
dates in Postgres?

Thanks in advance,
Raghu...
-- 
View this message in context: 
http://www.nabble.com/Find-Number-Of-Sundays-Between-Two-Dates-tp15843956p15843956.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Find Number Of Sundays Between Two Dates

2008-03-04 Thread A. Kretschmer
On Tue, 04.03.2008 at 22:16:07 -0800, raghukumar wrote the following:
> 
> Hi everybody,
> 
> What query can I use to find the number of Sundays between two given
> dates in Postgres?

No problem; for instance, for January 2008:

test=*# select count(1) from 
  (select '2008-01-01'::date + s*'1day'::interval as datum from 
generate_series(0,30) s)foo 
  where extract(dow from datum)=6;
 count
---
 4
(1 row)


With generate_series() I generate a list of dates and then check whether
each date is a Saturday (extract(dow ...) = 6). Since you want Sundays,
change the 6 to 0. And of course you can calculate the parameters for
generate_series() from the two dates themselves, like:

test=*# select count(1) from 
  (select '2008-01-01'::date + s*'1day'::interval as datum from 
generate_series(0,'2008-01-31'::date - '2008-01-01'::date) s)foo 
  where extract(dow from datum)=0;
 count
---
 4
(1 row)




Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net
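[Editor's note: a closed-form alternative (a sketch) that avoids generating
the series: find the first Sunday on or after the start date, then count
whole weeks up to the end date.]

```sql
-- Number of Sundays in [d1, d2]; dates below are the example range.
SELECT CASE WHEN s > d2 THEN 0
            ELSE (d2 - s) / 7 + 1   -- integer division of whole days
       END AS sundays
FROM (SELECT d1, d2,
             -- first Sunday on or after d1 (dow: Sunday = 0)
             d1 + ((7 - extract(dow FROM d1)::int) % 7) AS s
      FROM (SELECT date '2008-01-01' AS d1,
                   date '2008-01-31' AS d2) x) y;
-- sundays = 4 for January 2008 (the 6th, 13th, 20th, and 27th)
```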
