[GENERAL] Scaleable DB structure for counters...

2006-07-16 Thread Eci Souji
So we've got a table called "books" and we want to build records of how 
often each book is accessed and when.  How would you store such 
information so that it wouldn't become a huge unmanageable table? 
Before I go out trying to plan something like this I figured I'd ask and 
see if anyone had any experience with such a beast.


One idea I had was to create a separate DB for these counters and create 
a schema for each year.  Within each year schema I would create month 
tables.  Then I'd write a function to hit whatever schema existed, a la...


SELECT * FROM public.get_counters(date, hour, book_id);

get_day_counters would break up the date and based on the year do a 
select counters from "2006".may WHERE day=12 and book_id=37.  If hour 
had a value it could do select counters from "2006".may where day=12 and 
book_id=37 and hour=18.
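
Something like the following untested sketch, assuming each month table is
named like "2006".may and has (day, hour, book_id, counters) columns:

  CREATE OR REPLACE FUNCTION public.get_counters(p_date date, p_hour integer, p_book_id integer)
  RETURNS SETOF integer AS $$
  DECLARE
      tbl text;
      c   integer;
  BEGIN
      -- build "year".month from the date, e.g. "2006".may
      tbl := quote_ident(to_char(p_date, 'YYYY')) || '.' || to_char(p_date, 'mon');
      FOR c IN EXECUTE
          'SELECT counters FROM ' || tbl
          || ' WHERE day = '      || extract(day from p_date)::text
          || ' AND book_id = '    || p_book_id::text
          || CASE WHEN p_hour IS NULL THEN ''
                  ELSE ' AND hour = ' || p_hour::text END
      LOOP
          RETURN NEXT c;
      END LOOP;
      RETURN;
  END;
  $$ LANGUAGE plpgsql;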


Offline scripts would take care of generating and populating these 
tables, as they'd be historical and never real-time.


Thoughts?  I'm hoping someone has done something similar and can point 
me in the right direction.



- E






Re: [GENERAL] Scaleable DB structure for counters...

2006-07-16 Thread Harald Armin Massa
Eci,

the usual way is:

create table books (id_book serial, author text, title text ...)
create table access (id_access serial, id_book int4, timeofaccess timestamp, ...)

then for every access you write 1 record to access.

A rough estimate: a book may be lent out once every hour, so that is
about 8,760 records per year and book; IF you expect that table gets
"too big", you still can move over to inheritance:

create table access2006 () inherits (access);
create table access2007 () inherits (access);

and put rules on them to make sure the data goes into the correct table
when you access only the access table. Google up "constraint exclusion"
within the 8.1 release notes / the postgresql documentation.
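
A minimal untested sketch of that setup; the CHECK ranges, the rule names,
and the assumption that you partition on timeofaccess are only for
illustration:

  create table access2006 (
      check (timeofaccess >= '2006-01-01' and timeofaccess < '2007-01-01')
  ) inherits (access);

  create table access2007 (
      check (timeofaccess >= '2007-01-01' and timeofaccess < '2008-01-01')
  ) inherits (access);

  -- route inserts on the parent into the matching child table
  create rule access_insert_2006 as
      on insert to access
      where (new.timeofaccess >= '2006-01-01' and new.timeofaccess < '2007-01-01')
      do instead insert into access2006 values (new.*);

  create rule access_insert_2007 as
      on insert to access
      where (new.timeofaccess >= '2007-01-01' and new.timeofaccess < '2008-01-01')
      do instead insert into access2007 values (new.*);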
Harald

On 7/16/06, Eci Souji <[EMAIL PROTECTED]> wrote:

So we've got a table called "books" and we want to build records of how
often each book is accessed and when.  How would you store such
information so that it wouldn't become a huge unmanageable table?
Before I go out trying to plan something like this I figured I'd ask and
see if anyone had any experience with such a beast.

One idea I had was to create a separate DB for these counters and create
a schema for each year.  Within each year schema I would create month
tables.  Then I'd write a function to hit whatever schema existed, a la...

SELECT * FROM public.get_counters(date, hour, book_id);

get_day_counters would break up the date and based on the year do a
select counters from "2006".may WHERE day=12 and book_id=37.  If hour
had a value it could do select counters from "2006".may where day=12 and
book_id=37 and hour=18.

Offline scripts would take care of generating and populating these
tables, as they'd be historical and never real-time.

Thoughts?  I'm hoping someone has done something similar and can point
me in the right direction.

- E

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
on different matter: EuroPython 2006 is over. It was a GREAT conference.
If you missed it, now you can prepare budget for visiting EuroPython 2007.


Re: [GENERAL] Scaleable DB structure for counters...

2006-07-16 Thread Eci Souji
What if instead of book checkouts we were looking at how often a book
was referenced?  In which case we're talking multiple times an hour, and
we could easily have each book requiring hundreds of thousands of rows.
Multiply that by hundreds of thousands of books and the table seems
to become huge quite quickly.  Would breaking up the table by year still
make sense?  I'm just not familiar with having to deal with a table that
could easily hit millions of records.


Thanks for your reply,

- E


Harald Armin Massa wrote:

Eci,

the usual way is:

create table books (id_book serial, author text, title text ...)
create table access (id_access serial, id_book int4, timeofaccess 
timestamp,...)


then for every access you write 1 record to access.

A rough estimate: a book may be lent out once every hour, so that is
about 8,760 records per year and book;


IF you expect that table gets "too big", you still can move over to 
inheritance:


create table access2006 () inherits (access);
create table access2007 () inherits (access);

and put rules on them to make sure the data goes into the correct table 
when you access only the access table. Google up "constraint exclusion" 
within the 8.1 release notes / the postgresql documentation.


Harald


On 7/16/06, *Eci Souji* <[EMAIL PROTECTED] 
> wrote:


So we've got a table called "books" and we want to build records of how
often each book is accessed and when.  How would you store such
information so that it wouldn't become a huge unmanageable table?
Before I go out trying to plan something like this I figured I'd ask and
see if anyone had any experience with such a beast.

One idea I had was to create a separate DB for these counters and create
a schema for each year.  Within each year schema I would create month
tables.  Then I'd write a function to hit whatever schema existed, a la...

SELECT * FROM public.get_counters(date, hour, book_id);

get_day_counters would break up the date and based on the year do a
select counters from "2006".may WHERE day=12 and book_id=37.  If hour
had a value it could do select counters from "2006".may where day=12 and
book_id=37 and hour=18.

Offline scripts would take care of generating and populating these
tables, as they'd be historical and never real-time.

Thoughts?  I'm hoping someone has done something similar and can point
me in the right direction.


- E








--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
on different matter:
EuroPython 2006 is over. It was a GREAT conference. If you missed it, 
now you can prepare budget for visiting EuroPython 2007.






Re: [GENERAL] Scaleable DB structure for counters...

2006-07-16 Thread Harald Armin Massa
Eci,

I could not google them up quickly, but there are people dealing with
tables with millions of records in PostgreSQL. Per the technical data,
the number of rows in a table is unlimited in PostgreSQL:
http://www.postgresql.org/about/

There may be performance reasons to split up a table of that size, but
you can still trust PostgreSQL's table inheritance together with
constraint exclusion to deal with that: just inherit your tables on a
monthly basis:

create table access200601 () inherits (access);

and adjust your rules accordingly. Read up on this documentation for
examples of table partitioning, which is the technical term for the
method you are looking for:
http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html

Best wishes,

Harald
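
An untested monthly variant of the same idea, roughly along the lines of
the ddl-partitioning examples; the table names and date ranges are only
illustrative:

  create table access200601 (
      check (timeofaccess >= '2006-01-01' and timeofaccess < '2006-02-01')
  ) inherits (access);

  create table access200602 (
      check (timeofaccess >= '2006-02-01' and timeofaccess < '2006-03-01')
  ) inherits (access);

  -- let the planner skip children whose CHECK constraint rules them out
  set constraint_exclusion = on;

  -- touches only access200601, not every month
  select count(*)
  from access
  where timeofaccess >= '2006-01-15' and timeofaccess < '2006-01-16';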
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
on different matter: EuroPython 2006 is over. It was a GREAT conference.
If you missed it, now you can prepare budget for visiting EuroPython 2007.


Re: [GENERAL] Scaleable DB structure for counters...

2006-07-16 Thread hubert depesz lubaczewski
On 7/16/06, Eci Souji <[EMAIL PROTECTED]> wrote:

So we've got a table called "books" and we want to build records of how
often each book is accessed and when.  How would you store such
information so that it wouldn't become a huge unmanageable table?
Before I go out trying to plan something like this I figured I'd ask and
see if anyone had any experience with such a beast.

From your other email I understand that you will deal with a million or
so records in the access-list table.

The simplest approach would be not to divide it into multiple tables,
but instead just add a trigger on the access table to increment counters
per book.

Simple and very effective.

depesz
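
An untested sketch of that trigger approach; the counter table, its
columns, and the function name are made up here for illustration:

  create table book_access_count (
      id_book  int4 primary key,
      accesses bigint not null default 0
  );

  create or replace function bump_book_access() returns trigger as $$
  begin
      update book_access_count
         set accesses = accesses + 1
       where id_book = new.id_book;
      if not found then
          insert into book_access_count (id_book, accesses)
          values (new.id_book, 1);
      end if;
      return new;
  end;
  $$ language plpgsql;

  create trigger access_count_trg
      after insert on access
      for each row execute procedure bump_book_access();

Further down the thread Christian points out the flip side of this:
every concurrent access to the same book then contends for that book's
single counter row.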


[GENERAL] Browse database , schema

2006-07-16 Thread qnick
Hi All,

I have a question about rights to browse a database and a schema's
structure.  Is it normal that a user who doesn't have any permissions
(such as SELECT) on a database or schema can still explore it and see
its tables (table fields, etc.)?
How can I prohibit that?  Thanks

Regards
Qnick




Re: [GENERAL] Dynamic table with variable number of columns

2006-07-16 Thread nkunkov
Thank you very much.
Much appreciated.
NK

- Original Message -
From: Bruno Wolff III <[EMAIL PROTECTED]>
Date: Friday, July 14, 2006 2:50 pm
Subject: Re: Dynamic table with variable number of columns

> On Wed, Jul 12, 2006 at 13:38:34 -0700,
>  [EMAIL PROTECTED] wrote:
> > Hi,
> > Thanks again.
> > One more question.  Will the crosstab function work if I will not
> > know the number/names of columns beforehand?  Or do I need to supply
> > column headings?
> 
> I checked a bit into this, and the actual contrib name is 
> tablefunc, not
> crosstab. It provides crosstab functions for up to 4 columns, but 
> it isn't
> hard to make ones that handle more columns.
> 
> You can read the included readme file at:
> http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/tablefunc/README.tablefunc?rev=1.14
> 



Re: [GENERAL] Scaleable DB structure for counters...

2006-07-16 Thread Christian Kratzer

Hi,

On Sun, 16 Jul 2006, Eci Souji wrote:

What if instead of book checkouts we were looking at how often a book was 
referenced?  In which case we're talking multiple times an hour, and we could 
easily have each book requiring hundreds of thousands of rows.  Multiply that 
by hundreds of thousands of books and the table seems to become huge quite 
quickly.  Would breaking up the table by year still make sense?  I'm just not 
familiar with having to deal with a table that could easily hit millions of 
records.


you might want to keep a separate table with counters per book 
and per year or month which you regularly compute from your yearly 
or month totals.


something like the following untested code:

  INSERT INTO access_count
  SELECT id_book, date_trunc('day',timeofaccess) AS dayofaccess,count(id_book)
  FROM access
  WHERE date_trunc('day',timeofaccess) = date_trunc('day',now())
  GROUP BY id_book, dayofaccess
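
For the sketch above to run, access_count would need a definition along
these lines (assumed here, it is not spelled out in the thread):

  CREATE TABLE access_count (
      id_book      int4,
      dayofaccess  timestamp,
      count        bigint
  );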

That way you do not need to count all the access records. 
You just sum up the precomputed counts for each period.


  SELECT sum(count) FROM access_count WHERE id_book=?

You also have the option of throwing away the raw access data 
for a certain day or month once that period of time is over.


This is more efficient than calling a trigger on each access and
also more scalable as there is no contention over a per book count 
record.


Keeping the raw data in per-month or per-year partitions is also probably
a good idea as it allows you to easily drop specific partitions.

Greetings
Christian

--
Christian Kratzer   [EMAIL PROTECTED]
CK Software GmbHhttp://www.cksoft.de/
Phone: +49 7452 889 135 Fax: +49 7452 889 136



Re: [GENERAL] Browse database , schema

2006-07-16 Thread Michael Fuhr
On Sat, Jul 15, 2006 at 04:04:50AM -0700, [EMAIL PROTECTED] wrote:
> I have a question about rights to browse a database and a schema's
> structure.  Is it normal that a user who doesn't have any permissions
> (such as SELECT) on a database or schema can still explore it and see
> its tables (table fields, etc.)?

Users who can connect to a database can query that database's system
catalogs.

> How prohibit that ? Thanks

Don't allow users to connect to databases whose structures you don't
want the users to browse.
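
In the 8.1 timeframe that restriction is typically expressed in
pg_hba.conf; from 8.2 onwards the same thing can also be done in SQL,
roughly as follows (the database and role names are placeholders):

  REVOKE CONNECT ON DATABASE somedb FROM PUBLIC;
  GRANT  CONNECT ON DATABASE somedb TO trusted_user;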

-- 
Michael Fuhr



Re: [GENERAL] I have a questions, can you help-me ?

2006-07-16 Thread Adam Witney


vagner mendes wrote:
> How can I install PostgreSQL on my Mac?  What steps do I have to follow?
>  
> Thank you by your attention.

(best to send these requests for help to the mailing list)

There are several options for OSX, there is an Apple article here:

http://developer.apple.com/internet/opensource/postgres.html

or use the package here:

http://www.entropy.ch/software/macosx/postgresql/

or if you are comfortable building from source, PostgreSQL compiles from
source out of the box on OSX these days... details of this are in the
source distribution.

HTH

adam






Re: [GENERAL] Scaleable DB structure for counters...

2006-07-16 Thread Ron Johnson

Eci Souji wrote:
> What if instead of book checkouts we were looking at how often a book
> was referenced?  In which case we're talking multiple times an hour, and
> we could easily have each book requiring hundreds of thousands of rows.
>  Multiply that by hundreds of thousands of books and the table seems
> to become huge quite quickly.  Would breaking up the table by year still
> make sense?  I'm just not familiar with having to deal with a table that
> could easily hit millions of records.

Are all 20 books accessed every hour?  What kind of library is
this?  Do you have robot librarians moving at hyperspeed?  Wouldn't
a more reasonable value be 5000 books per *day*?

It's easy to know when a book is checked out.  How do you know when
a book is referenced?  Are all books only accessed by the librarians?

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.



[GENERAL] Postgresql and Oracle

2006-07-16 Thread Steve Atkins

Not an advocacy post. If I want advocacy, I know where to find it.

I have an application that uses Postgresql - nothing too fancy, some  
plpgsql, a couple of custom types, lots of text and no varchar.


For business reasons I need to also support Oracle. On the app side  
this is not a big problem - I'm already using a DB-independent access  
library (Qt) and most of my queries are fairly vanilla SQL.


On the database side it's a little more complex. I'm looking for  
resources comparing the two, and advice on porting PG -> Oracle  
(though I suspect that any docs about Oracle -> PG porting experience  
might be useful too). Google finds a bazillion advocacy comparisons,  
which makes it hard to find anything more useful.


I already know about OpenACS (a fairly complex database-backed CMS written  
for Oracle and ported to Pg) and their docs. If anyone has any other  
pointers, they'd be appreciated.


(If it's relevant I'm currently on 7.4, but would be looking at  
comparing 8.1 and 10g).


Cheers,
  Steve




Re: [GENERAL] customizing pg_dump together with copy.c's DoCopy function

2006-07-16 Thread Brian Mathis
pg_dump by default dumps to STDOUT, which you should use in a pipeline
to perform any modifications.  To me this seems pretty tricky, but should
be doable.  Modifying pg_dump really strikes me as the wrong way to go
about it.  Pipelines operate in memory, and should be very fast, depending
on how you write the filtering program.  You would need to dump the data
without compression, then compress it coming out the other end (maybe
split it up too).  Something like this:

    pg_dump | myfilter | gzip | split --bytes=2000M - mydump.

Also, you can't expect to have speed if you have no disk space.
Reading/writing to the same disk will kill you.  If you could set up some
temp space over NFS on the local network, that should gain you some speed.

On 11 Jul 2006 08:43:17 -0700, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> > Is it possible to compile-link together frontend pg_dump code with
> > backend code from copy.c?
>
> No.  Why do you think you need to modify pg_dump at all?

pg_dump and pg_restore provide important advantages for upgrading a
customer's database on site:

They are fast.  I want to minimize downtime.

They allow compression.  I often will have relatively little free disk
space to work with.

My concept is "customized dump", drop database, create new schema
database, "customized restore".

My upgrade requires many schema and data content changes.  I've tried
using standard SQL statements in perl scripts to do all of it, but even
with no indexes on inserts, later creating indexes for the lookup work,
and every other optimization I know of, a 100gb database requires
several days to turn our old database into a new one.  I was hoping that
I could modify the speedy pg_dump/pg_restore utilities to make these
changes "on the fly".  It gets tricky because I have to restore some of
the data to different tables having varying schema and also change the
table linking.  But this is all doable as long as I can "massage" the
SQL statements and data both when it goes into the dump file and when
it is getting restored back out.

Or am I trying to do the impossible?

-Lynn


[GENERAL] Lock changes with 8.1 - what's the right lock?

2006-07-16 Thread Wes
8.1 improved locking for foreign key references but had an unexpected
consequence to our application - no parallel loads in our application.

The application does an EXCLUSIVE lock on 'addresses'.  It then gets all of
the keys from 'addresses' it needs, and adds new ones encountered in this
load.  It then completes the transaction, releases the exclusive lock, and
inserts the other table's records using the values read from/inserted into
'addresses'.

There are foreign key constraints between the various tables and 'addresses'
to ensure referential integrity.

Previously (pgsql 7.4.5), multiple loads would run simultaneously - and
occasionally got 'deadlock detected' with the foreign key locks even though
they were referenced in sorted order.  When loading tables other than
'addresses', foreign key locks did not prevent other jobs from grabbing the
exclusive lock on 'addresses'.

With 8.1.4, the foreign key locks prevent other instances from grabbing the
lock, so they wait until the first job is complete - only one job loads at a
time.

About EXCLUSIVE locks, the manual says: "...only reads from the table can
proceed in parallel with a transaction holding this lock mode."

What is now the appropriate lock?  It needs to:

  1. Prevent others from updating the table
  2. Block other jobs that are requesting the same lock (if job 2 does a
SELECT and finds nothing, it will try to create the record that job 1 may
already have created in its transaction).
  3. Not conflict with foreign key reference locks

SHARE does not appear to be appropriate - it would fail #2.  Maybe "SHARE
UPDATE EXCLUSIVE"?

Wes





[GENERAL] Simple webuser setup

2006-07-16 Thread msiner
There must be something simple that I am missing, but here is my
problem.  I am setting up a standard pg install as a backend to a small
webapp.  I want to create a user "webuser" with only enough privileges
to query all of the tables in my database.  It has not been working for
me.  What is the simplest way to do this?  Do I need to start at the
top and then work down (db->schema->table) or is there any cascading
effect?  I am still pretty new to web development, so is there a
better/easier way to achieve the same effect?
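
A minimal sketch of what this usually takes (the role, schema, and table
names are placeholders; in this era of PostgreSQL there is no single
"grant select on every table" statement, so the per-table grants are
repeated or generated from pg_tables):

  CREATE USER webuser WITH PASSWORD 'secret';

  -- the user must have USAGE on the schema before table grants matter
  GRANT USAGE ON SCHEMA public TO webuser;

  -- one grant per table the webapp should be able to read
  GRANT SELECT ON public.some_table       TO webuser;
  GRANT SELECT ON public.some_other_table TO webuser;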




Re: [GENERAL] Scaleable DB structure for counters...

2006-07-16 Thread Eci Souji
I think "books" may have thrown everyone for a loop.  These are not 
physical books, but rather complete scanned collections that would be 
available for search and reference online.  One of the most important 
features required would be keeping track of how often each book was 
referenced and when.  Time of day, days of week, etc.  This is why I was 
looking into how to construct some form of counter system that would 
allow us to keep track of accesses.


Although I would love to see a robot librarian at work.  :-)

- E

Ron Johnson wrote:


Eci Souji wrote:


What if instead of book checkouts we were looking at how often a book
was referenced?  In which case we're talking multiple times an hour, and
we could easily have each book requiring hundreds of thousands of rows.
Multiply that by hundreds of thousands of books and the table seems
to become huge quite quickly.  Would breaking up the table by year still
make sense?  I'm just not familiar with having to deal with a table that
could easily hit millions of records.



Are all 20 books accessed every hour?  What kind of library is
this?  Do you have robot librarians moving at hyperspeed?  Wouldn't
a more reasonable value be 5000 books per *day*?

It's easy to know when a book is checked out.  How do you know when
a book is referenced?  Are all books only accessed by the librarians?

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.








Re: [GENERAL] Lock changes with 8.1 - what's the right lock?

2006-07-16 Thread Michael Fuhr
On Sun, Jul 16, 2006 at 05:46:16PM -0500, Wes wrote:
> Previously (pgsql 7.4.5), multiple loads would run simultaneously - and
> occasionally got 'deadlock detected' with the foreign key locks even though
> they were referenced in sorted order.  When loading tables other than
> 'addresses', foreign key locks did not prevent other jobs from grabbing the
> exclusive lock on 'addresses'.

Unless I'm misunderstanding you or a bug was fixed between 7.4.5
and 7.4.13 (the version I'm running), I'm not convinced that last
statement is true.  EXCLUSIVE conflicts with all lock types except
ACCESS SHARE; foreign key references prior to 8.1 use SELECT FOR
UPDATE and in 8.1 they use SELECT FOR SHARE, but in both cases they
acquire ROW SHARE on the referenced table, which conflicts with
EXCLUSIVE.

> With 8.1.4, the foreign key locks prevent other instances from grabbing the
> lock, so they wait until the first job is complete - only one job loads at a
> time.

Again, maybe I'm misunderstanding you, but the following example
behaves the same way in 8.1.4 and 7.4.13 (foo has a foreign key
reference to addresses):

T1: BEGIN;
T1: INSERT INTO foo (address_id) VALUES (1);
T2: BEGIN;
T2: LOCK TABLE addresses IN EXCLUSIVE MODE;
T2: (blocked until T1 completes)

Does this example differ from what you're doing or seeing?

> What is now the appropriate lock?  It needs to:
> 
>   1. Prevent others from updating the table
>   2. Block other jobs that are requesting the same lock (if job 2 does a
> SELECT and finds nothing, it will try to create the record that job 1 may
> already have created in its transaction).
>   3. Not conflict with foreign key reference locks

SHARE ROW EXCLUSIVE is the weakest lock that meets these requirements.
It conflicts with itself (#2) and with ROW EXCLUSIVE, which UPDATE,
DELETE, and INSERT acquire (#1), but doesn't conflict with ROW SHARE,
which is what SELECT FOR UPDATE/SHARE acquire (#3).
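
A sketch of the load step with that lock; the table and column names here
are only assumed from the earlier description:

  BEGIN;
  -- blocks other loaders and other writers, but not the ROW SHARE locks
  -- taken by foreign key checks against addresses
  LOCK TABLE addresses IN SHARE ROW EXCLUSIVE MODE;

  SELECT id FROM addresses WHERE address = '123 Example St';
  -- ...insert any keys that were not found...
  INSERT INTO addresses (address) VALUES ('123 Example St');

  COMMIT;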

-- 
Michael Fuhr



Re: [GENERAL] Scaleable DB structure for counters...

2006-07-16 Thread Ron Johnson

IOW, files.  No problem.

The # of files is known.  That's a start.  Is there any existing
metric as to how often they are accessed?  That's what you need to
know before deciding on a design.

This simple design might be perfectly feasible:
CREATE TABLE T_USAGE_TXN (
BOOK_ID INTEGER,
USER_ID INTEGER,
REFERENCED_DT   DATE,
REFERENCED_TM   TIME )

*All* the columns (in field order) would be the PK, and I'd then add
secondary indexes on
  USER_ID/BOOK_ID
  REFERENCED_DT/BOOK_ID
  REFERENCED_DT/USER_ID

Lastly, create and algorithmically *pre-populate* this table:
CREATE TABLE T_CALENDAR (
DATE_ANSI      DATE,
YEARNUM        SMALLINT,
MONTH_NUM      SMALLINT,
DAY_OF_MONTH   SMALLINT,
DAY_OF_WEEK    SMALLINT,
JULIAN_DAY     SMALLINT );
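
One possible way to do that pre-population with generate_series (the
30-year range is arbitrary, and JULIAN_DAY is read here as day-of-year):

  INSERT INTO T_CALENDAR
  SELECT d,
         EXTRACT(YEAR  FROM d),
         EXTRACT(MONTH FROM d),
         EXTRACT(DAY   FROM d),
         EXTRACT(DOW   FROM d),   -- 0 = Sunday, matching the query below
         EXTRACT(DOY   FROM d)
  FROM (SELECT DATE '2000-01-01' + s AS d
        FROM generate_series(0, 365 * 30) AS s) AS days;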

So, if you want a list and count of all books that were referenced
on Sundays in 2006:

SELECT UT.BOOK_ID, COUNT(*)
FROM T_USAGE_TXN UT,
 T_CALENDAR C
WHERE C.YEARNUM = 2006
  AND C.DAY_OF_WEEK = 0
  AND C.DATE_ANSI = UT.REFERENCED_DT
GROUP BY UT.BOOK_ID;


Eci Souji wrote:
> I think "books" may have thrown everyone for a loop.  These are
> not physical books, but rather complete scanned collections that
> would be available for search and reference online.  One of the
> most important features required would be keeping track of how
> often each book was referenced and when.  Time of day, days of
> week, etc.  This is why I was looking into how to construct some
> form of counter system that would allow us to keep track of
> accesses.
> 
> Although I would love to see a robot librarian at work.  :-)
> 
> - E
> 
> Ron Johnson wrote: Eci Souji wrote:
> 
 What if instead of book checkouts we were looking at how
 often a book was referenced?  In which case we're talking
 multiple times an hour, and we could easily have each book
 requiring hundreds of thousands of rows. Multiply that by
 hundreds of thousands of books and the table seems to
 become huge quite quickly.  Would breaking up the table by
 year still make sense?  I'm just not familiar with having
 to deal with a table that could easily hit millions of
 records.
> 
> 
> Are all 20 books accessed every hour?  What kind of library
> is this?  Do you have robot librarians moving at hyperspeed?
> Wouldn't a more reasonable value be 5000 books per *day*?
> 
> It's easy to know when a book is checked out.  How do you know
> when a book is referenced?  Are all books only accessed by the
> librarians?

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid?
For example, it is "common sense" to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that "common sense" is obviously wrong.



Re: [GENERAL] Log actual params for prepared queries: TO-DO item?

2006-07-16 Thread Jaime Casanova

On 7/15/06, Ed L. <[EMAIL PROTECTED]> wrote:


We'd like to attempt some log replay to simulate real loads, but
in 8.1.2, it appears the formal parameters are logged ('$')
instead of the actuals for prepared queries, e.g.:

EXECUTE   [PREPARE:  UPDATE sessions SET a_session = $1
WHERE id = $2]

Thoughts on making this a to-do item?

Ed



I think this is the one you are requesting; it's a TODO item already:

o Allow protocol-level BIND parameter values to be logged

--
Atentamente,
Jaime Casanova

"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs and the universe trying
to produce bigger and better idiots.
So far, the universe is winning."
  Richard Cook
