Re: [GENERAL] COPY problem on -- strings

2008-11-20 Thread Sabin Coanda
> Is that the *first* error message you got?
>

Yes it is.

In fact I made a mistake in the first email, so instead of:
INSERT INTO "A" ( "Col1", "Col2" )
VALUES (2, '-- any text' );

please replace it with:
INSERT INTO "A" ( "Col1", "Col2" )
VALUES (1, '-- any text' );

However I suppose this doesn't change the problem :(.

Regards,
Sabin 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connecting to old 7.1 Database

2008-11-20 Thread Andy Greensted
After trying a few different versions, 7.2.8 seems to compile OK and 
is happy connecting to a 7.1 server.


Andy

Andy Greensted wrote:

Hi All,

I need to connect to a version 7.1 PostgreSQL database. Unfortunately, I
cannot get the 7.1.3 source to compile. configure gives this error:

checking types of arguments for accept()... configure: error: could not
determine argument types

I'm not posting to pgsql-bugs because I think this is to do with me
having a much newer environment (gentoo linux) than was around when
7.1.3 was released.

So, two questions:

- Is there any way to run a newer version (8.3.5) of psql in some sort of
'backwards compatible' mode?

- Do you have any tips on making 7.1.3 compile on a newer system?

Many thanks
Andy




--
Dr. Andrew Greensted  Department of Electronics
Bio-Inspired Engineering  University of York, YO10 5DD, UK

Tel: +44(0)1904 432828Mailto: [EMAIL PROTECTED]
Fax: +44(0)1904 432335Web: http://www.bioinspired.com/users/ajg112

2009 IEEE Congress on Evolutionary Computation: http://www.cec-2009.org

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread Albe Laurenz
WireSpot wrote:
> I'm trying to use prepared statements in an application and I'm
> running into this error: "Query failed: prepared statement already
> exists".
> 
> The reason is obvious. What I want to know is the best way to avoid
> getting this error. The client application sets statement names as MD5
> of the actual query text, which means there's a possibility of a clash
> between different parts of the applications if they attempt to prepare
> the same query in the lifetime of a connection.
> 
> Possible solutions, please advise:
> 
> 1) Something like PREPARE IF NOT EXISTS. I'm guessing no such luck.
> 
> 2) Tweaking the Postgres error reporting to ignore this particular
> error. Is it possible? From a non-privileged client connection?
> 
> 3) Reading a list of all the currently defined prepared statements to
> see if the one I want is already prepared. I'm hoping some "magic"
> SELECT in pg's internal tables may do the trick. But I also worry
> about introducing overhead this way.
> 
> I also imagined some workarounds in the code (PHP), such as defining a
> global/static hash table and registering statement names with it. But
> I'd like to know if there's a better way.

Do you still need the old prepared statement?

If not, you can simply DEALLOCATE it and then try the PREPARE again.

Something like this:

try {
    PREPARE statementname AS SELECT ...;
} catch (SQLException e) {
    if (e.getSQLState().equals("42P05")) {
        DEALLOCATE statementname;
        PREPARE statementname AS SELECT ...;
    } else
        throw e;
}

(that's Java pseudocode, but I hope you'll understand what I mean).

If you still need the old statement, generate a new, different name
and try again.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Serial - last value

2008-11-20 Thread hendra kusuma
Hi there
a little basic question here

I usually use something like this in stored function
to get the last value of a serial type column

select last_value into ret from id_sequence;
return ret;


What I'm asking is:
if many people run the same stored function at the same time,
will they get the last_value they should really get,
or is there another way to code this that ensures they really get the
right value?

Thank you
Regards
Hendra


[GENERAL] row-level security (Dynamically rewrite queries)

2008-11-20 Thread Jonatan Evald Buus
Hi there,
Is it possible to dynamically rewrite an SQL query for all operations
(SELECT, INSERT, UPDATE, DELETE etc.)?
It seems that using RULES it's possible to replace one query with another,
but can the actual query that was executed be retrieved so it can be
dynamically modified?
I.e. I'd like to create a rule along the lines of:
CREATE RULE txn_vpd ON
SELECT TO Transaction_Tbl
DO INSTEAD replace('WHERE', 'INNER JOIN User_Tbl ON Transaction_Tbl.userid =
User_Tbl.id WHERE User_Tbl.name = \'CURRENT_USER\'', $SQL)
Where $SQL represents the actual SQL query that was executed on the table
and the table has a column: userid which indicates the owner of the row.

Essentially what I'd like to do is implement row-level security (what Oracle
calls "Virtual Private Database") but as far as I can find the last time
this was discussed is several years ago and the general consensus ended up
being "use veil".
Veil seems like an overly complicated approach for something that (in
theory) should be possible with a dynamic query rewrite using search and
replace prior to execution.
Oracle's implementation seems quite elegant for this, please see
http://www.oracle.com/technology/pub/articles/10gdba/week14_10gdba.html or
http://www.devshed.com/c/a/Oracle/RowLevel-Security-with-Virtual-Private-Database/
for examples.

One other approach that I could think of, would be to create a view for
every table and use the view for accessing the data:
CREATE VIEW Transaction_Vw AS
SELECT * FROM Transaction_Tbl Txn
INNER JOIN User_Tbl U ON Txn.userid = U.id
WHERE U.name = 'CURRENT_USER'
However, can usage of this view be enforced by the database by removing
SELECT privileges from Transaction_Tbl?
Also, I suspect that performance would go down the drain if complex joins
are used? There could potentially be a lot of unnecessary joins to User_Tbl
if multiple "secure views" were joined together.
Is the planner capable of taking this into account and auto-magically
optimize the query?

Are there any other alternatives to implementing row-level security that can
be enforced at the database level?

Appreciate the input

Cheers
Jonatan


[GENERAL] where in (select array)

2008-11-20 Thread Marcus Engene

Hi List,

I have the might_like table that contains products a user might like if 
he likes the present one (item).


CREATE TABLE might_like
(
item   INTEGER NOT NULL
,created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
,child  INTEGER NOT NULL
)
WITHOUT OIDS;

CREATE INDEX might_like_x1 ON might_like(item);

Since there are (will be) hundreds of thousands of items, and 20+ might-like
items, I thought it would be nice to reduce the set to 1/20th by
using a vector.


CREATE TABLE might_like_vector
(
item   INTEGER NOT NULL
,created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
,child_arr  INTEGER[]
)
WITHOUT OIDS;

CREATE INDEX might_like_vector_x1 ON might_like_vector(item);

But then this doesn't work:

select
   ...
from
   item pic
where
   pic.objectid in (
   select mlv.child_arr
   from might_like_vector mlv
   where mlv.item = 125 AND
 mlv.created_at > now() - interval '1 week'
   )
limit 16

This seems to work but is ugly:

select
   ...
from
   item pic
where
   pic.objectid in (
   select mlv.child_arr[s.a]
   from might_like_vector mlv
   ,generate_series(1,20) as s(a)
   where mlv.item = 125 AND
 mlv.created_at > now() - interval '1 week'
   )
limit 16

Is there a better way to do it?
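
One possible alternative (a rough sketch, not part of the original message;
it assumes the sub-select returns at most one row) is to compare against the
array value directly with = ANY:

select
   ...
from
   item pic
where
   pic.objectid = ANY ((
   select mlv.child_arr
   from might_like_vector mlv
   where mlv.item = 125 AND
 mlv.created_at > now() - interval '1 week'
   ))
limit 16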

Thanks,
Marcus


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scheduling backup via PgAgent

2008-11-20 Thread Dave Page
On Thu, Nov 20, 2008 at 5:41 AM, Abdul Rahman <[EMAIL PROTECTED]> wrote:
> Oh!
>
> I have found the mistake I made. Actually I didn't start the PostgreSQL
> Scheduling Agent - pgAgent service. Now it's working fine.
>
> I also have attached the complete document of my work for the sake of
> knowledge share.

I think you forgot the attachment. It would be useful if you could add
it to the wiki though: http://wiki.postgresql.org/


-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] row-level security (Dynamically rewrite queries)

2008-11-20 Thread Richard Huxton
Jonatan Evald Buus wrote:
> Essentially what I'd like to do is implement row-level security (what Oracle
> calls "Virtual Private Database") but as far as I can find the last time
> this was discussed is several years ago and the general consensus ended up
> being "use veil".
> Veil seems like an overly complicated approach for something that (in
> theory) should be possible with a dynamic query rewrite using search and
> replace prior to execution.

I've never used veil myself, but I can't believe it's less effort to
re-invent the wheel on this. The query-rewrite is what the views are doing.

> Oracle's implementation seems quite elegant for this, please see
> http://www.oracle.com/technology/pub/articles/10gdba/week14_10gdba.html or
> http://www.devshed.com/c/a/Oracle/RowLevel-Security-with-Virtual-Private-Database/
> for examples.
> 
> One other approach that I could think of, would be to create a view for
> every table and use the view for accessing the data:
> CREATE VIEW Transaction_Vw AS
> SELECT * FROM Transaction_Tbl Txn
> INNER JOIN User_Tbl U ON Txn.userid = U.id
> WHERE U.name = 'CURRENT_USER'
> However, can usage of this view be enforced by the database by removing
> SELECT privileges from Transaction_Tbl?

Yes, of course. This also seems to be what veil does for you. Beware -
views are basically macros that rewrite your query and you can get
unexpected results when e.g. inserting multiple rows at once. See
mailing-list archives for examples.
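
As a rough sketch of that pattern (the role name here is invented; note that
the view is checked with its owner's privileges, so the view owner needs
access to the base table):

REVOKE ALL ON Transaction_Tbl FROM PUBLIC;
GRANT SELECT ON Transaction_Vw TO app_role;  -- app_role is hypothetical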

> Also, I suspect that performance would go down the drain if complex joins
> are used? There could potentially be a lot of unnecessary joins to User_Tbl
> if multiple "secure views" were joined together.
> Is the planner capable of taking this into account and auto-magically
> optimize the query?

There's some overhead associated with rewriting the query, but the
planner should cope just fine. Which isn't to say there aren't
cases where you couldn't write a better query from scratch, but the
planner optimises rewritten queries as well as any other.

> Are there any other alternatives to implementing row-level security that can
> be enforced at the database level?

Use veil. Someone else has gone to the trouble to think this through. If
you write your own solution you'll only have 1 user (you). Even if veil
is only used by its author there will at least be 2 of you.

I frequently use a "my" schema with views that map to the relevant base
tables ("my.clients" etc). However, that's not for applications that
require some guarantee of security database-wide, it's as much about
simplifying my queries.

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] where in (select array)

2008-11-20 Thread Grzegorz Jaśkiewicz
Just as a little advice, I would actually use joins - for performance
reasons. 'where in' seems to be rather slow, especially if you use it on
large sets of data.


Re: [GENERAL] where in (select array)

2008-11-20 Thread Richard Huxton
Marcus Engene wrote:
> Hi List,
> 
> I have the might_like table that contains products a user might like if
> he likes the present one (item).
> 
> CREATE TABLE might_like
> (
> item   INTEGER NOT NULL
> ,created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
> ,child  INTEGER NOT NULL
> )
> WITHOUT OIDS;
> 
> CREATE INDEX might_like_x1 ON might_like(item);
> 
> Since there are (will be) hundreds of thousands of items, and 20+ might-like
> items, I thought it would be nice to reduce the set to 1/20th by
> using a vector.
> 
> CREATE TABLE might_like_vector
> (
> item   INTEGER NOT NULL
> ,created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
> ,child_arr  INTEGER[]
> )
> WITHOUT OIDS;

You haven't reduced the set at all, you've just turned part of it
sideways. You might gain something on your search, but I'm guessing
you've not tested it.

Hmm - the attached script generates 100,000 items and 10 liked ones for
each (well for the first 99,990 it says you like the next 10 items).
They're all given different timestamps at day intervals which means
you'll end up with six or seven matches for your sample query.

> But then this doesn't work:
> 
> select
>...
> from
>item pic
> where
>pic.objectid in (
>select mlv.child_arr
>from might_like_vector mlv
>where mlv.item = 125 AND
>  mlv.created_at > now() - interval '1 week'
>)
> limit 16

Without messing around with arrays you get this query (which seems
readable enough to me)

SELECT
objectid, objname
FROM
items i
JOIN might_like m ON (i.objectid = m.child)
WHERE
m.created_at > (now() - '1 week'::interval)
AND m.item = 125
ORDER BY
objectid
LIMIT
16
;

I'm getting times less than a millisecond for this - are you sure it's
worth fiddling with arrays?

-- 
  Richard Huxton
  Archonet Ltd
BEGIN;

CREATE SCHEMA mightlike;

SET search_path = mightlike;

CREATE TABLE items (
	objectid  integer NOT NULL,
	objname   text NOT NULL
);

CREATE TABLE might_like (
	item   integer NOT NULL,
	created_at timestamp with time zone NOT NULL DEFAULT CURRENT_TIMESTAMP,
	child  integer NOT NULL
);

INSERT INTO items SELECT i, 'item number ' || i
FROM generate_series(1, 100000) i;

INSERT INTO might_like SELECT i, (now() - j * '1 day'::interval), i+j
FROM generate_series(1, 99990) i, generate_series(1, 10) j;

ALTER TABLE items ADD PRIMARY KEY (objectid);
ALTER TABLE might_like ADD PRIMARY KEY (item, child);
ALTER TABLE might_like ADD CONSTRAINT valid_child FOREIGN KEY (child) REFERENCES items;
CREATE INDEX might_like_idx1 ON might_like (item, created_at);

-- EXPLAIN ANALYSE
SELECT
	objectid, objname
FROM
	items i
	JOIN might_like m ON (i.objectid = m.child)
WHERE
	m.created_at > (now() - '1 week'::interval)
	AND m.item = 125
ORDER BY
	objectid
LIMIT
	16
;

ROLLBACK;

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Serial - last value

2008-11-20 Thread Richard Huxton
hendra kusuma wrote:
> Hi there
> a little basic question here
> 
> I usually use something like this in stored function
> to get the last value of a serial type column
> 
> select last_value into ret from id_sequence;
> return ret;
> 
> What I'm asking is:
> if many people run the same stored function at the same time,
> will they get the last_value they should really get,
> or is there another way to code this that ensures they really get the right
> value?

You should use the functional interface: SELECT currval('my_sequence').
That guarantees you the right value for your current session.

This only works if you've called nextval('my_sequence') at some point in
the current session (although that could be automatic for a column of
type SERIAL).
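
A minimal plpgsql sketch of that pattern (table, column and sequence names
here are only examples):

CREATE FUNCTION add_item(p_name text) RETURNS integer AS $$
DECLARE
    ret integer;
BEGIN
    -- items.id is assumed to be a SERIAL column backed by items_id_seq
    INSERT INTO items (name) VALUES (p_name);
    SELECT currval('items_id_seq') INTO ret;  -- value assigned in this session
    RETURN ret;
END;
$$ LANGUAGE plpgsql;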

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using database to find file doublettes in my computer

2008-11-20 Thread Craig Ringer

Lothar Behrens wrote:


I expected a smaller number of records because, for 4 files each present
2 times (sum = 8), I have 8 records in ECADFiles but should have only 4 in
the above result.

So for an average of 2 duplicates I expected half the files from
ECADFiles, because one is exactly right and the other
is on the left.


It's a combinatorial problem. If you have 3 duplicate files, eg:

INSERT INTO paths (path) VALUES
(E'C:\\path\\file1.txt'),
(E'C:\\path2\\file1.txt'),
(E'/path/file1.txt');

then the query process I described above will output the matches:

 C:\path\file1.txt   | C:\path2\file1.txt
 /path/file1.txt | C:\path2\file1.txt
 /path/file1.txt | C:\path\file1.txt

because while it avoids showing both (A,B) and (B,A) pairs, for any A, B 
and C it'll show:


(A,B)
(A,C)
(B,C)

I've attached test SQL that does the above.

Presumably, you want to only show, say:

(A,B)
(A,C)

or maybe:

(filename, A)
(filename, B)
(filename, C)

If that's what you want, you need to work a little differently. The 
attached SQL in dups_test2.sql shows one way you might do it, by 
generating a list of files with duplicates then listing all the 
locations each appears in. Again, you can do it without the temp table, 
it'll probably just be slower. None of what I've written is particularly 
fast anyway - it evaluates those regular expressions many more times 
than should be required, for example.


--
Craig Ringer
BEGIN;

CREATE TABLE paths (
   path TEXT PRIMARY KEY
);

CREATE OR REPLACE FUNCTION get_filename(text, text) RETURNS VARCHAR AS $$
SELECT (regexp_split_to_array($1, $2))[array_upper(regexp_split_to_array($1, $2),1)];
$$ LANGUAGE SQL IMMUTABLE;

COMMENT ON FUNCTION get_filename(text, text) IS 'Extract filename part from path $1 using path separator $2';

CREATE OR REPLACE FUNCTION get_filename(text) RETURNS VARCHAR AS $$
SELECT get_filename($1, E'[\\\\/]');
$$ LANGUAGE SQL IMMUTABLE;

COMMENT ON FUNCTION get_filename(text) IS E'Extract filename part from path $1 using path separator / or \\';

INSERT INTO paths (path) VALUES
(E'C:\\path\\file1.txt'),
(E'C:\\path2\\file1.txt'),
(E'/path/file1.txt'),
(E'C:\\somewhere\\file2.txt'),
(E'/random/place/file2.txt'),
(E'/orphans/file3.blah');

COMMIT;
BEGIN;

SELECT 
get_filename(path) AS fn, count(path) AS n
INTO TEMPORARY TABLE dup_files
FROM paths
GROUP BY get_filename(path)
HAVING count(path) > 1;

SELECT * FROM dup_files;

-- Creates a UNIQUE index on fn as well
ALTER TABLE dup_files ADD CONSTRAINT dup_files_path_pkey PRIMARY KEY (fn);

-- Now build your side-by-side table of duplicates:

SELECT p1.path, p2.path
FROM paths p1 INNER JOIN paths p2
 ON (get_filename(p1.path) = get_filename(p2.path))
WHERE EXISTS(SELECT 1 FROM dup_files WHERE fn = get_filename(p1.path))
  AND p1.path > p2.path
ORDER BY get_filename(p1.path), p1.path, p2.path;

ROLLBACK;
SELECT get_filename(path) AS fn, min(path) AS path, count(path) AS dupcount
INTO dups
FROM paths
GROUP BY get_filename(path)
HAVING count(path) > 1;

SELECT * FROM dups;

SELECT dups.fn, paths.path
FROM dups INNER JOIN paths ON (dups.fn = get_filename(paths.path));

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: Res: Res: Res: [GENERAL] Archive files growth!!!

2008-11-20 Thread Craig Ringer

paulo matadr wrote:


teste=# \d cliente_fone
 Table "cadastro.cliente_fone"
 Column |Type |   Modifiers
+ -+

 cfon_id| integer | not null
 clie_id | integer| not null
 cfon_c | character(2)   | 
 cfon_nnfone | character varying(9)   | 
 cfon_nnfoneramal | character varying(4)   | 
 cfon_icfonepadrao   | smallint   | 
 fnet_id | integer| not null

 cfon_tmultimaalteracao | timestamp without time zone | not null default now()
 cfon_nmcontato | character varying(50)   | 
Indexes:

"cliente_fone_pkey" PRIMARY KEY, btree (cfon_id)
Foreign-key constraints:
"cliente_fone_clie_id_fkey" FOREIGN KEY (clie_id) REFERENCES 
cliente(clie_id) ON UPDATE RESTRICT ON DELETE RESTRICT
"cliente_fone_fnet_id_fkey" FOREIGN KEY (fnet_id) REFERENCES 
fone_tipo(fnet_id) ON UPDATE RESTRICT ON DELETE RESTRICT


I don't see anything there that would account for the growth either. 
However, I forgot to check one thing with you when I asked for the table 
sizes: Do you have any associated toast table, and if so how big is that?


You can find out with a query like:

select oid, relname, reltype, reltuples, relpages, relpages*8 AS size_kb
from pg_class where relname = 'TABLENAME'
  OR oid = (SELECT reltoastrelid FROM pg_class
WHERE relname = 'TABLENAME');
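
Another rough way to get the complete size (8.1 and later) is to let the
server add up the heap, index and TOAST segments for you:

select pg_size_pretty(pg_total_relation_size('cadastro.cliente_fone'));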


It's quite possible that your table, including associated TOAST data, is 
actually much bigger than you think it is.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL 8.4 download?

2008-11-20 Thread Craig Ringer

Scott Marlowe wrote:

On Wed, Nov 19, 2008 at 6:28 AM, Josh Harrison <[EMAIL PROTECTED]> wrote:

Hi
Is version 8.4 available for download now? If so can you direct me to the
website?


There's a daily snapshot available for download off the main dl tree.
Go to the download page and look for the link to browse the mirrors
directly.  Pick one and look for snapshot.  You get to compile it
yourself, but it's pretty easy to do.


... on a UNIX/Linux machine. It's less fun on Windows, though still not 
actually all that bad if you're unlucky enough to be reasonably familiar 
with Windows dev tools.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY problem on -- strings

2008-11-20 Thread Richard Huxton
Sabin Coanda wrote:
> Hi,
> 
> I have "PostgreSQL 8.3.5, compiled by Visual C++ build 1400" on Windows OS.
> 
> I try to use the COPY command to optimize the backup/restore performance, 
> but I found a problem. I reproduce it below.

I can't reproduce it here on 8.3 on linux.

> I backup the database plain with the command:
>  pg_dump.exe -U postgres -F p -v -f "backup_plain.sql" "DemoDB"
> 
> I create a new database, and I run the script. But it raises the error:
>   ERROR:  syntax error at or near "1"
>   LINE 49: 1 -- any text
> 
> I look for the error line and I saw how pg_dump created the script 
> statement:
>   COPY "A" ("Col1", "Col2") FROM stdin;
>   1 -- any text
>   \.

That's what I see too, and it's fine here.

Try trimming the file down to just those lines, manually create the
database and table and see if you can run the copy then. If so, then
Tom's right and there's an error before the COPY. If not, then you've
got something odd in the file (bad line-ending, invalid high-bit
character or some such).

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] transaction_read_only effects on performance, actual meaning (was: How to reduce impact of a query)

2008-11-20 Thread Craig Ringer

Howard Cole wrote:

You'll be pleased to know that changing the transaction to read only
(SET TRANSACTION READ ONLY) as you suggested seemed to prevent the server
from slowing to a halt.


Actually, I'm mostly surprised by that. I primarily suggested issuing 
the command to ensure that if your transaction was doing UPDATes or 
similar via triggers or function side-effects you weren't aware of, the 
transaction would fail and help you pin-point the problem area.


I'm quite curious about why setting the transaction to readonly helped 
its performance. Could it be to do with setting hint bits or something 
along those lines, perhaps? Anyone?


Is it not possible that the query optimisation process should determine
that the query should be read only without explicitly stating this?


I don't think it can. You can issue a series of statements that make no 
changes to the database, followed by an UPDATE/INSERT/DELETE/ALTER 
TABLE/whatever. You could also issue a SELECT that invokes a function 
(directly or via a rule) that modifies the database.


The database doesn't know what statements you will issue next.

That said, I thought the transaction_read_only flag had to be set before 
any operations were issued, but it seems to be permitted even after 
queries have been issued. In fact, it seems to be permitted after DML 
has been issued in a transaction, which really confuses me. Is 
transaction_read_only not actually applied to the transaction as a whole 
like, say, transaction_isolation, but rather on a statement-by-statement 
basis? If so, is it not somewhat misleadingly named?


I would not have expected the following to work:


CREATE TABLE a ( i serial primary key, j text );

BEGIN;

INSERT INTO a (j) VALUES ('oats'), ('wheat'), ('beans');

SET transaction_read_only = 1;

SELECT * FROM a;

COMMIT;



but it does.

--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] return MAX and when it happened

2008-11-20 Thread Sam Mason
On Wed, Nov 19, 2008 at 05:06:14PM -0600, Scara Maccai wrote:
> Sam Mason wrote:
> >The custom aggregate sounds the
> >most elegant, it's just annoying that it's so much fiddling to get it
> >all working to start with
> Thanks.
> 
> I think I wrote it, but there's something I don't get from the docs: do 
> I have to call
> 
> get_call_result_type(fcinfo, NULL, &tupdesc)

I've always tried to stay away from C level extensions so far!  How
many records are you expecting to aggregate across?  If it's only a few
thousand a simple SQL language function may be ok:

  CREATE TYPE nt AS ( n INTEGER, t TIMESTAMP );

  CREATE FUNCTION maxnt(nt, nt) RETURNS nt IMMUTABLE LANGUAGE SQL AS $$
SELECT CASE WHEN $1.n > $2.n THEN $1 ELSE COALESCE($2,$1) END $$;

  CREATE AGGREGATE MAX (nt) (
  SFUNC = maxnt,
  STYPE = nt
  );

This is about 20 times slower than a C function (80 vs ~1500 rows per
ms), but if you're only iterating over a few rows it's not going to
matter much.
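
As a rough usage sketch (the table and columns here are invented), the
aggregate can be fed a composite value built from the two columns:

  SELECT (max(ROW(n, t)::nt)).* FROM readings;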

Sorry I can't be of more help!


  Sam

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Fetch query which triggered a rule

2008-11-20 Thread Jonatan Evald Buus
Greetings,
Is it possible to fetch the query which triggered a rule?

I.e. suppose I create the following rule:
CREATE RULE MyRule AS ON Select TO MyTable_Tbl DO INSTEAD
process_query($sql);

Where $sql represents the query which triggered the rule.

Appreciate the input

Cheers
Jona


[GENERAL] hidden errors calling a volatile function inside a stable function

2008-11-20 Thread Sabin Coanda
Hi,

I have "PostgreSQL 8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 
3.4.2 (mingw-special)" on Windows OS , but I experienced the same problem on 
"PostgreSQL 8.3.5, compiled by Visual C++ build 1400" too.

I attach the demo database here, so it is available for testing according to
the following scenario.

I found that in a STABLE function, for instance "TEST_0"(), it is not allowed
to use an INSERT statement. Trying this will give me the error:
ERROR: INSERT is not allowed in a non-volatile function

Same behavior is for DELETE statement (e.g. "TEST_1"()).

If I set the function to VOLATILE (as "TEST_2"() ), it works very well.

I replace DELETE and INSERT statements with a volatile function call, in 
"TEST_3"(), and I call it. It works well too.

Finally I set the function as STABLE, not VOLATILE (see "TEST_4"()).

First I call:
DELETE FROM "A";

Then another call:
SELECT "TEST_4"();

I get no raised errors, but the results are wrong (see the log results), 
because "TEST_4"() doesn't see the changes made by the called function.

I find this behaviour very dangerous because it is completely hidden. 
What do you say ?

TIA,
Sabin 


[uuencoded attachment "081120_DEMO_01.backup" omitted]

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] where in (select array)

2008-11-20 Thread Marcus Engene

Richard Huxton wrote:

Marcus Engene wrote:
  

Hi List,

I have the might_like table that contains products a user might like if
he likes the present one (item).

CREATE TABLE might_like
(
item   INTEGER NOT NULL
,created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
,child  INTEGER NOT NULL
)
WITHOUT OIDS;

CREATE INDEX might_like_x1 ON might_like(item);

Since there are (will be) hundreds of thousands of items, and 20+ might-like
items, I thought it would be nice to reduce the set to 1/20th by
using a vector.

CREATE TABLE might_like_vector
(
item   INTEGER NOT NULL
,created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
,child_arr  INTEGER[]
)
WITHOUT OIDS;



You haven't reduced the set at all, you've just turned part of it
sideways. You might gain something on your search, but I'm guessing
you've not tested it.

Hmm - the attached script generates 100,000 items and 10 liked ones for
each (well for the first 99,990 it says you like the next 10 items).
They're all given different timestamps at day intervals which means
you'll end up with six or seven matches for your sample query.
  
Sorry, I was a bit unclear. This is run on a web server. The might-like
rows are generated if they are not available for an item when the
corresponding page is generated. The one-row-per-might-like-item version has
been live since yesterday and, at the time of writing, has half a million
rows in it. The caching is primarily initiated by a Google indexer agent.


This might-like generation is expensive, so caching the results in the db is
a must; the web server's CPU usage halved when this caching went live and
the most commonly used items were cached.


When the might-like data is read from the database, the generated
presentation html is put in memcached with a timeout of 1h (presently).
Memcached here is probably way overkill, but using it in situations like
this makes the site more scalable when adding webservers and postpones the
problem of clustering pg.


So with memcached I care less about saving a few ms in select latency
and more about postponing other approaching problems, like keeping the
dbdump manageable. Right now it's a 100MB gzipped dump, which is very
manageable, so where it's possible I'd like to keep the data compact. I
imagine it's cheaper disk- and dump-wise to do the array thing in this and
some other similar usages, and therefore it would be nice to have a
non-ugly usage pattern. Also, we're going to collect usage statistics to
further optimize the behavior of the site, and I'm really worried about
these millions of rows.


If this is a bad approach to the problem I'm very interested to hear
it. Regardless of the best approach, a "proper" solution to the IN
subquery would be super appreciated too :)


Thanks for your answer!

Best regards,
Marcus

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date range query help

2008-11-20 Thread Sam Mason
On Thu, Nov 20, 2008 at 01:10:41PM +1100, novice wrote:
> select * from mobile_custodian;
> 
>  custodian_id | user_id | issue_date | return_date |  mobile_no
> --------------+---------+------------+-------------+-------------
>             4 | Ben     | 2008-10-11 | 2008-10-13  | 09455225998
>             5 | Josh    | 2008-10-15 |             | 09455225998

I sometimes find it easier to store ranges like this with the
open ends at infinity.  This is easier with timestamps as they have
magic 'infinity' values in PG (both positive and negative).  Values of
date type don't have any special values like this which makes things a
bit more awkward.
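
A rough workaround sketch for the date case (the sentinel date is arbitrary)
is to coalesce the open end to a far-future value when testing a date
against the range:

  SELECT *
  FROM mobile_custodian
  WHERE issue_date <= date '2008-10-16'
    AND coalesce(return_date, date '9999-12-31') >= date '2008-10-16';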

I haven't seen it posted to the list for a while; but there's a nice
old book titled "Developing Time Oriented Database Applications in SQL"
that's now out of print but is available as a PDF.

  http://www.cs.arizona.edu/~rts/tdbbook.pdf


  Sam

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] join with redundant results VS simpler join plus multiple selects

2008-11-20 Thread WireSpot
I have a schema with galleries, people and images. Each person has a
bunch of private images. People can join any number of galleries and
can publish any of their images to the galleries they join (or not).

I'd like to retrieve a data set where for a given gallery id I get all
the people AND all the images they've published to that gallery.

I can do this in two ways.
1) Do a join that will give me the people that belong to said gallery,
then loop in the code and do simple selects to retrieve images in that
gallery for each of them.
2) Do a join between all three tables. The end result will have as
many rows as total images for all the people in the gallery.
Obviously, there's going to be redundant data, since a person's info
will be repeated for each image.

Which is better in terms of performance? I used EXPLAIN ANALYZE and
actual queries and it seems to suggest that option 2, while returning
redundant info, is faster.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread WireSpot
On Thu, Nov 20, 2008 at 10:56, Albe Laurenz <[EMAIL PROTECTED]> wrote:
> Do you still need the old prepared statement?
>
> If not, you can simple DEALLOCATE it and then try the PREPARE again.

Yes, I'd like to keep the old statements, that's part of the perks --
if a query will be repeated it will (possibly) benefit from the
statement being already prepared.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] transaction_read_only effects on performance, actual meaning (was: How to reduce impact of a query)

2008-11-20 Thread Ivan Sergio Borgonovo
On Thu, 20 Nov 2008 20:24:42 +0900
Craig Ringer <[EMAIL PROTECTED]> wrote:

> Howard Cole wrote:

> > You'll be pleased to know that changing the transaction to read
> > only (SET TRANSACTION READ ONLY)
> > as you suggested seemed to prevent the server from slowing to a
> > halt.

> Actually, I'm mostly surprised by that. I primarily suggested
> issuing the command to ensure that if your transaction was doing
> UPDATes or similar via triggers or function side-effects you
> weren't aware of, the transaction would fail and help you
> pin-point the problem area.

> I'm quite curious about why setting the transaction to readonly
> helped its performance. Could it be to do with setting hint bits
> or something along those lines, perhaps? Anyone?

Functions run inside transactions. I'd be curious to know whether
issuing SET TRANSACTION READ ONLY inside a function has any
meaning/effect once you've declared that function stable/immutable.

> I would not have expected the following to work:

> CREATE TABLE a ( i serial primary key, j text );
> 
> BEGIN;
> 
> INSERT INTO a (j) VALUES ('oats'), ('wheat'), ('beans');
> 
> SET transaction_read_only = 1;
> 
> SELECT * FROM a;
> 
> COMMIT;
> 
> 
> 
> but it does.

Interesting. Thank you for pointing it out.

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] hidden errors calling a volatile function inside a stable function

2008-11-20 Thread Merlin Moncure
On Thu, Nov 20, 2008 at 6:21 AM, Sabin Coanda
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have "PostgreSQL 8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC)
> 3.4.2 (mingw-special)" on Windows OS , but I experienced the same problem on
> "PostgreSQL 8.3.5, compiled by Visual C++ build 1400" too.
>
> I attach the demo database here, so it is available for testing according to
> the following scenario.
>
> I found that in a STABLE function, for instance "TEST_0"(), it is not allowed
> to use an INSERT statement. Trying this will give me the error:
> ERROR: INSERT is not allowed in a non-volatile function

Why did you lie when you declared your function stable? :-)

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread Merlin Moncure
On Wed, Nov 19, 2008 at 2:42 PM, WireSpot <[EMAIL PROTECTED]> wrote:
> I'm trying to use prepared statements in an application and I'm
> running into this error: "Query failed: prepared statement already
> exists".
>
> The reason is obvious. What I want to know is the best way to avoid
> getting this error. The client application sets statement names as MD5
> of the actual query text, which means there's a possibility of a clash
> between different parts of the applications if they attempt to prepare
> the same query in the lifetime of a connection.
>
> Possible solutions, please advise:
>
> 1) Something like PREPARE IF NOT EXISTS. I'm guessing no such luck.
>
> 2) Tweaking the Postgres error reporting to ignore this particular
> error. Is it possible? From a non-privileged client connection?
>
> 3) Reading a list of all the currently defined prepared statements to
> see if the one I want is already prepared. I'm hoping some "magic"
> SELECT in pg's internal tables may do the trick. But I also worry
> about introducing overhead this way.

pg_prepared_statements (on recent versions of postgresql)

also, watch out for race conditions.
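
For example, a lookup sketch (the statement name here is invented):

  SELECT 1 FROM pg_prepared_statements WHERE name = 'stmt_d41d8cd9';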

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread Albe Laurenz
Please, send your replies to the list as well.

WireSpot wrote:
> > Do you still need the old prepared statement?
> >
> > If not, you can simple DEALLOCATE it and then try the PREPARE again.
> 
> Yes, I'd like to keep the old statements, that's part of the perks --
> if a query will be repeated it will (possibly) benefit from the
> statement being already prepared.

I see.

Then you'll need a way to remember the names of prepared statements,
because otherwise you cannot reuse them.

You'll have to find a way to pick or generate unique names for the
prepared statements.
You could check for name collisions and disambiguate with a suffix
or something.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Res: [GENERAL] Archive files growth!!!

2008-11-20 Thread paulo matadr
see  results:
  oid  |   relname    | reltype | reltuples | relpages | size_kb 
-------+--------------+---------+-----------+----------+---------
 16506 | cliente_fone |   16507 |    341130 |     3345 |   26760

Can any configuration of wal_buffers vs. checkpoint_segments help me?





De: Craig Ringer <[EMAIL PROTECTED]>
Para: paulo matadr <[EMAIL PROTECTED]>
Cc: GENERAL 
Enviadas: Quinta-feira, 20 de Novembro de 2008 8:09:19
Assunto: Re: Res: Res: Res: [GENERAL] Archive files growth!!!

paulo matadr wrote:

> teste=# \d cliente_fone
>           Table "cadastro.cliente_fone"
>          Column         |            Type             |       Modifiers
> ------------------------+-----------------------------+------------------------
>  cfon_id                | integer                     | not null
>  clie_id                | integer                     | not null
>  cfon_c                 | character(2)                |
>  cfon_nnfone            | character varying(9)        |
>  cfon_nnfoneramal       | character varying(4)        |
>  cfon_icfonepadrao      | smallint                    |
>  fnet_id                | integer                     | not null
>  cfon_tmultimaalteracao | timestamp without time zone | not null default now()
>  cfon_nmcontato         | character varying(50)       |
> Indexes:
>     "cliente_fone_pkey" PRIMARY KEY, btree (cfon_id)
> Foreign-key constraints:
>     "cliente_fone_clie_id_fkey" FOREIGN KEY (clie_id) REFERENCES
> cliente(clie_id) ON UPDATE RESTRICT ON DELETE RESTRICT
>     "cliente_fone_fnet_id_fkey" FOREIGN KEY (fnet_id) REFERENCES
> fone_tipo(fnet_id) ON UPDATE RESTRICT ON DELETE RESTRICT

I don't see anything there that would account for the growth either. However, I 
forgot to check one thing with you when I asked for the table sizes: Do you 
have any associated toast table, and if so how big is that?

You can find out with a query like:

select oid, relname, reltype, reltuples, relpages, relpages*8 AS size_kb
from pg_class where relname = 'TABLENAME'
  OR oid = (SELECT reltoastrelid FROM pg_class
WHERE relname = 'TABLENAME');


It's quite possible that your table, including associated TOAST data, is 
actually much bigger than you think it is.

--
Craig Ringer

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




Re: Res: [GENERAL] Archive files growth!!!

2008-11-20 Thread Craig Ringer

paulo matadr wrote:

see  results:
>   oid  |   relname    | reltype | reltuples | relpages | size_kb 
> -------+--------------+---------+-----------+----------+---------
>  16506 | cliente_fone |   16507 |    341130 |     3345 |   26760



OK, so there is no TOAST table either. I am out of ideas. It's not 
indexes, it's not toast table contents, it's not some action being done 
by a trigger, so why is so much WAL logging data generated for these 
inserts?


Does anybody have any ideas about what's happening, or any suggestions 
about how to find out?


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] hidden errors calling a volatile function inside a stable function

2008-11-20 Thread Craig Ringer

Merlin Moncure wrote:

On Thu, Nov 20, 2008 at 6:21 AM, Sabin Coanda
<[EMAIL PROTECTED]> wrote:

Hi,

I have "PostgreSQL 8.2.4 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC)
3.4.2 (mingw-special)" on Windows OS , but I experienced the same problem on
"PostgreSQL 8.3.5, compiled by Visual C++ build 1400" too.

I attach the demo database here, so it is available for testing according to
the following scenario.

I found that in a STABLE function, for instance "TEST_0"(), it is not allowed
to use an INSERT statement. Trying this will give me the error:
ERROR: INSERT is not allowed in a non-volatile function


Why did you lie when your declared your function stable? :-)


I suspect the point here is that PostgreSQL tries to stop you violating 
your claims about the function's stability by preventing DML operations. 
However, it doesn't also attempt to prevent invocation of volatile 
functions. When you annotate a function as STABLE you are making a 
guarantee to PostgreSQL that the function is stable, rather than telling 
PostgreSQL to force the function to behave as STABLE.


This is probably reasonable enough - after all, a volatile function 
could even be invoked via an ON SELECT ... DO INSTEAD rule that 
transforms a statement, so the function doesn't even know about it. It 
could also be invoked dynamically via EXECUTE, which would probably be 
tricky to trap. There are more issues than DML when it comes to STABLE 
functions, too.


I do wonder if making STABLE functions execute as if `SET 
transaction_read_only = 1' had been issued is possible/practical. Would 
that work, and would it provide a way to trap at least most unsafe 
operations without undue overhead?


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] join with redundant results VS simpler join plus multiple selects

2008-11-20 Thread Craig Ringer

WireSpot wrote:

I have a schema with galleries, people and images. Each person has a
bunch of private images. People can join any number of galleries and
can publish any of their images to the galleries they join (or not).

I'd like to retrieve a data set where for a given gallery id I get all
the people AND all the images they've published to that gallery.

I can do this in two ways.
1) Do a join that will give me the people that belong to said gallery,
then loop in the code and do simple selects to retrieve images in that
gallery for each of them.
2) Do a join between all three tables. The end result will have as
many rows as total images for all the people in the gallery.
Obviously, there's going to be redundant data, since a person's info
will be repeated for each image.

Which is better in terms of performance? I used EXPLAIN ANALYZE and
actual queries and it seems to suggest that option 2, while returning
redundant info, is faster.


That's probably going to be the case.  PostgreSQL won't need to read the 
redundant info in from disk each time, and relative to the image data 
it's going to be pretty small. By doing it all in one join you're 
avoiding the overhead of all those network round trips (if on a 
network), statement preparation and planning, etc etc etc. Additionally, 
PostgreSQL is probably going to be using a join plan that's much more 
efficient than anything you'll get by looping over each user and asking 
for images.


If you wanted to avoid returning too much redundant info, you could 
always do it in two queries:


- Find a list of all users belonging to the gallery and any other 
non-image data associated with them; then


- Retrieve all images in one query using a join against the list of 
users who're members of the gallery, but only actually return (eg) the 
user id, gallery id, and image data for each image.
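
Rough sketches of those two queries (table and column names are invented):

-- 1: people who are members of the gallery, plus their non-image data
SELECT p.*
FROM person p
JOIN gallery_member gm ON gm.person_id = p.id
WHERE gm.gallery_id = $1;

-- 2: only the keys and image data for every image published to the gallery
SELECT i.owner_id, gi.gallery_id, i.image_data
FROM gallery_image gi
JOIN image i ON i.id = gi.image_id
WHERE gi.gallery_id = $1;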


In all honesty, though, it probably doesn't matter unless there's a LOT 
of additional data you want to obtain about each user.


Note, however, that when testing method (1) in your post you will REALLY 
need to make sure that you're using parameterized prepared statements 
for the image queries.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SQL] [GENERAL] date range query help

2008-11-20 Thread Frank Bax

novice wrote:

2008/11/20 Rodrigo E. De León Plicet <[EMAIL PROTECTED]>:

On Wed, Nov 19, 2008 at 10:03 PM, novice <[EMAIL PROTECTED]> wrote:

sorry I get nothing :(

Of course not. None of the dates you gave in the example overlap.



But it should still have the 1st entry with the name Ben?  Am I
missing something?





Yes, you are missing something.  You would only get the 1st entry with the
name Ben if the dates in the two tables were in the same year.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY problem on -- strings

2008-11-20 Thread Sabin Coanda
Sorry, my fault: I ran the script in the query window of pgAdmin, not in
the system console. I checked it again in the system console and it works
well.

Thanks,
Sabin 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread Alvaro Herrera
Merlin Moncure escribió:
> On Wed, Nov 19, 2008 at 2:42 PM, WireSpot <[EMAIL PROTECTED]> wrote:

> > 3) Reading a list of all the currently defined prepared statements to
> > see if the one I want is already prepared. I'm hoping some "magic"
> > SELECT in pg's internal tables may do the trick. But I also worry
> > about introducing overhead this way.
> 
> pg_prepared_statements (on recent versions of postgresql)
> 
> also, watch out for race conditions.

What race conditions?  Prepared statements are per-connection.

Perhaps the application could keep a hash of statements prepared so far,
to avoid having to query pg_prepared_statements all the time.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread WireSpot
> Merlin Moncure escribió:
>> pg_prepared_statements (on recent versions of postgresql)

Thank you, that's one of the things I wanted to know.

On Thu, Nov 20, 2008 at 15:30, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> Merlin Moncure escribió:
>> also, watch out for race conditions.
>
> What race conditions?  Prepared statements are per-connection.

I guess he means if connections are persistent, or if the same
connection is being used at the same time from different parts of the
application. Which brings us to the next point:

> Perhaps the application could keep a hash of statements prepared so far,
> to avoid having to query pg_prepared_statements all the time.

That's how I dealt with it so far, purely in the code. But see the
above scenarios.

If the connection is reused but the code is fresh (being PHP and
living in a webpage lifecycle) the hash in the code will be empty but
the statements are in fact still there.

Or if parts of the application issue their own PREPAREs without going
through the central mechanism with the hash (granted, this is an
internal issue of code discipline).

Still, I don't want to have to query pg_prepared_statements for every
query I make. I feel that a hybrid approach might be best. Early in
the lifecycle of every script I could load pg_prepared_statements into
the code hashtable, then the mechanism could proceed normally,
checking only against the hashtable.
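
The initial load could be a single query per connection, e.g. (8.2 and
later):

  SELECT name FROM pg_prepared_statements;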

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread Sam Mason
On Wed, Nov 19, 2008 at 09:42:33PM +0200, WireSpot wrote:
> I also imagined some workarounds in the code (PHP), such as defining a
> global/static hash table and registering statement names with it. But
> I'd like to know if there's a better way.

Have you thought about using stored procedures instead of prepared
statements?  No need to register them or keep track of that state.


  Sam

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] join with redundant results VS simpler join plus multiple selects

2008-11-20 Thread WireSpot
On Thu, Nov 20, 2008 at 15:05, Craig Ringer <[EMAIL PROTECTED]> wrote:
> That's probably going to be the case.  PostgreSQL won't need to read the
> redundant info in from disk each time, and relative to the image data it's
> going to be pretty small. By doing it all in one join you're avoiding the
> overhead of all those network round trips (if on a network), statement
> preparation and planning, etc etc etc. Additionally, PostgreSQL is probably
> going to be using a join plan that's much more efficient than anything
> you'll get by looping over each user and asking for images.

How about if the subset of images for each user is randomized? As in
ORDER BY RANDOM() LIMIT 3. I'm guessing that will put somewhat of a
cramp on the big join scenario and perhaps it becomes better to have
the RANDOM() in the small individual selects?

I'm probably going to give myself the answer, please advise if I'm not
thinking straight:

In this case, from EXPLAIN ANALYZE I get that after introducing
random() and limit, while the cost for the big join scenario is
practically the same, the actual execution time increases by about
0.100 ms. Whereas on individual selects with random() limit I get an
increase of 0.040 ms, and since the people will be shown paginated 10 per
page, I'm looking at a 10 x 0.040 = 0.400 ms increase.

So the big join still comes ahead.

> Note, however, that when testing method (1) in your post you will REALLY
> need to make sure that you're using parameterized prepared statements for
> the image queries.

Definitely, and I'm already working on that (see my other thread).

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] return MAX and when it happened

2008-11-20 Thread Scara Maccai

Sam Mason wrote:

On Wed, Nov 19, 2008 at 05:06:14PM -0600, Scara Maccai wrote:
  
I think I wrote it, but there's something I don't get from the docs: do 
I have to call


get_call_result_type(fcinfo, NULL, &tupdesc)



I've always tried to stay away from C level extensions so far!  How
many records are you expecting to aggregate across?  If it's only a few
thousand a simple SQL language function may be ok:
  


It's going to be used across millions of records.
That's why I went straight to the C implementation. But your SQL 
function is a lot easier to read.



Sorry I can't be of more help!
  

You helped me a lot, thank you.


Can anyone answer me?
Do I have to call

get_call_result_type

every time, or can I just call it the first time? I don't think the docs
are very clear for the case of a function returning a composite type.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread WireSpot
On Thu, Nov 20, 2008 at 15:45, Sam Mason <[EMAIL PROTECTED]> wrote:
> On Wed, Nov 19, 2008 at 09:42:33PM +0200, WireSpot wrote:
>> I also imagined some workarounds in the code (PHP), such as defining a
>> global/static hash table and registering statement names with it. But
>> I'd like to know if there's a better way.
>
> Have you thought about using stored procedures instead of prepared
> statements?  No need to register them or keep track of that state.

I'm not sure if it would work. What I'm trying to do is have an
application layer which takes all the client queries and makes
prepared statements out of them. Do you mean to say I should make
stored procedures out of them instead? Granted, CREATE FUNCTION has OR
REPLACE, but other than that it's only complicating matters. A
function needs to have the argument types defined, for example, I
can't get away with simply listing them like I do when executing a
statement.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread Alvaro Herrera
WireSpot escribió:

> I guess he means if connections are persistent, or if the same
> connection is being used at the same time from different parts of the
> application.

I guess if connections are persistent, you could clear them before each
usage with DISCARD (8.3 only)
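
For example (both commands are 8.3-only):

  DISCARD ALL;      -- resets the whole session, including prepared statements
  DEALLOCATE ALL;   -- drops only the prepared statements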

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] on error logs the whole multiline script

2008-11-20 Thread Sabin Coanda
Hi,

I have "PostgreSQL 8.3.5, compiled by Visual C++ build 1400", and I found 
when I run a script and an error occurs, all the script content is logged.

My previous postgres version "PostgreSQL 8.2.4 on i686-pc-mingw32, compiled 
by GCC gcc.exe (GCC)
3.4.2 (mingw-special)" logged just the function name stack where the error 
occurred.

Is there any configuration setting to disable script content logging on 
error, and to enable just the error and the function stack?

TIA,
Sabin 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread Sam Mason
On Thu, Nov 20, 2008 at 04:03:08PM +0200, WireSpot wrote:
> On Thu, Nov 20, 2008 at 15:45, Sam Mason <[EMAIL PROTECTED]> wrote:
> > Have you thought about using stored procedures instead of prepared
> > statements?  No need to register them or keep track of that state.
> 
> I'm not sure if it would work. What I'm trying to do is have an
> application layer which takes all the client queries and makes
> prepared statements out of them.

I think it depends on what level this abstraction wants to live at.  If
its purpose is just to make queries run a bit quicker then, no, stored
procedures aren't going to help at all.  If this abstraction can be
given knowledge of the things that you actually want to do against the
database then stored procedures could be a win.

The best structure will be determined by your problem; I just thought
that mentioning another solution might push your mental model around a bit
and help clarify the "best" solution.


  Sam

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread WireSpot
On Thu, Nov 20, 2008 at 16:07, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> I guess if connections are persistent, you could clear them before each
> usage with DISCARD (8.3 only)

Again, I'd be losing the advantage of the already prepared statements.
Basically, what it comes down to is that I want to benefit as much as
possible from previously prepared statements, while at the same time
avoiding name clashes.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Brandon Metcalf
Here is an example of the caching problem I described yesterday in a
post.  I have the following tables:

  db=> \d bmetcalf.foo1
        Table "bmetcalf.foo1"
    Column   | Type | Modifiers
  -----------+------+-----------
   country   | text |
   replicaID | text |
   host      | text |
   replica   | text |
  Triggers:
      foo_trig BEFORE DELETE ON foo1 FOR EACH ROW EXECUTE PROCEDURE foo_func()

  db=> \d bmetcalf.foo2
        Table "bmetcalf.foo2"
    Column   | Type | Modifiers
  -----------+------+-----------
   vobID     | text |
   replicaID | text |
  Triggers:
      foo_trig BEFORE DELETE ON foo2 FOR EACH ROW EXECUTE PROCEDURE foo_func()

And here is the SQL for the function and trigger definitions:

  CREATE OR REPLACE FUNCTION bmetcalf.foo_func()
  RETURNS TRIGGER
  LANGUAGE plperlu
  AS $$
  require 5.8.0;

  my $table = $_TD->{relname};
  warn "table name is $table";
  warn "BWM before call: table name is $table";

  do_delete();

  return 'SKIP';

  sub do_delete {
  warn "BWM in call: table name is $table";
  }

  $$;

  DROP TRIGGER foo_trig ON bmetcalf.foo1;

  DROP TRIGGER foo_trig ON bmetcalf.foo2;

  CREATE TRIGGER foo_trig BEFORE DELETE ON bmetcalf.foo1 FOR EACH ROW
EXECUTE PROCEDURE bmetcalf.foo_func();

  CREATE TRIGGER foo_trig BEFORE DELETE ON bmetcalf.foo2 FOR EACH ROW
EXECUTE PROCEDURE bmetcalf.foo_func();

Now, watch what happens when I execute a DELETE on bmetcalf.foo1 after
a DELETE on bmetcalf.foo2.  The table name when this trigger fires
for bmetcalf.foo1 changes:

  db=# delete from bmetcalf.foo2 where "replicaID" = '11';
  ...
  db=# delete from bmetcalf.foo1 where "replicaID" = '11';
  NOTICE:  table name is foo1 at line 6.

  NOTICE:  BWM before call: table name is foo1 at line 8.

  NOTICE:  BWM in call: table name is foo2 at line 15.

  DELETE 0

If I close my session and reconnect and only run the DELETE against
bmetcalf.foo1, the problem doesn't appear:

  db=# delete from bmetcalf.foo1 where "replicaID" = '11';
  NOTICE:  table name is foo1 at line 6.

  NOTICE:  BWM before call: table name is foo1 at line 8.

  NOTICE:  BWM in call: table name is foo1 at line 15.

  DELETE 0

Any ideas?

-- 
Brandon

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Richard Huxton
Brandon Metcalf wrote:
> Here is an example of the caching problem I described yesterday in a
> post.  I have the following tables:

> And here is the SQL for the function and trigger definitions:
> 
>   CREATE OR REPLACE FUNCTION bmetcalf.foo_func()
>   RETURNS TRIGGER
>   LANGUAGE plperlu
>   AS $$
>   require 5.8.0;
> 
>   my $table = $_TD->{relname};
>   warn "table name is $table";
>   warn "BWM before call: table name is $table";
> 
>   do_delete();
> 
>   return 'SKIP';
> 
>   sub do_delete {
>   warn "BWM in call: table name is $table";
>   }
> 
>   $$;

Umm - you've got a named closure inside your function here - "sub
do_delete".

Its warning isn't using the "my $table" variable, it's using a
localised copy of that variable. That copy gets defined when the sub is
defined, which will be on the first call (my $table=foo2), and it still
exists, unchanged, on the second call (my $table=foo1).

Warning - I can't remember if I'm using the right terminology in the
above, although I think it's the right diagnosis.

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] start/stop error message

2008-11-20 Thread Peter Eisentraut

David wrote:

[EMAIL PROTECTED]:/etc/init.d$ sudo /etc/init.d/postgresql-8.2 start
 * Starting PostgreSQL 8.2 database server
 * Error: specified cluster does not exist
 [fail]

and the same error when passing "stop"

The cluster clearly exists and is recognized by pg_ctl. As a result
PostgreSQL does not start with the system and must be started and
stopped manually.


I can imagine that you have some unsupported mixture of package 
installation and manually created cluster.  But it is hard to tell 
without more detail.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Brandon Metcalf
d == [EMAIL PROTECTED] writes:

 d> Brandon Metcalf wrote:
 d> > Here is an example of the caching problem I described yesterday in a
 d> > post.  I have the following tables:

 d> > And here is the SQL for the function and trigger definitions:
 d> >
 d> >   CREATE OR REPLACE FUNCTION bmetcalf.foo_func()
 d> >   RETURNS TRIGGER
 d> >   LANGUAGE plperlu
 d> >   AS $$
 d> >   require 5.8.0;
 d> >
 d> >   my $table = $_TD->{relname};
 d> >   warn "table name is $table";
 d> >   warn "BWM before call: table name is $table";
 d> >
 d> >   do_delete();
 d> >
 d> >   return 'SKIP';
 d> >
 d> >   sub do_delete {
 d> >   warn "BWM in call: table name is $table";
 d> >   }
 d> >
 d> >   $$;

 d> Umm - you've got a named closure inside your funciton here - "sub
 d> do_delete".

 d> It's warning isn't using the "my $table" variable, it's using a
 d> localised copy of that variable. That gets defined when the sub is
 d> defined, which will be on the first call (my $table=foo2) and still
 d> exists, unchanged on the second call (my $table=foo1).


Yep, it seems that's the problem.  If I pass in $table and use a
lexical variable defined inside do_delete(), the problem goes away.
So, this is where my understanding of how triggers work lacks.  For a
given session, each execution of a trigger isn't completely
independent?

-- 
Brandon

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Richard Huxton
Brandon Metcalf wrote:
> Yep, it seems that's the problem.  If I pass in $table and use a
> lexical variable defined inside do_delete(), the problem goes away.
> So, this is where my understanding of how triggers work lacks.  For a
> given session, each execution of a trigger isn't completely
> independent?

Nothing to do with triggers - it's all to do with your Perl code.

#!/usr/bin/perl

sub foo {
my $x = shift;
print "foo x = $x\n";
bar();
return;

sub bar {
print "bar x = $x\n";
}
}

foo(1);
foo(2);
exit;

$ ./perl_example.pl
foo x = 1
bar x = 1
foo x = 2
bar x = 1

If you use warnings it'll tell you about it too with this example.

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Brandon Metcalf
d == [EMAIL PROTECTED] writes:

 d> Brandon Metcalf wrote:
 d> > Yep, it seems that's the problem.  If I pass in $table and use a
 d> > lexical variable defined inside do_delete(), the problem goes away.
 d> > So, this is where my understanding of how triggers work lacks.  For a
 d> > given session, each execution of a trigger isn't completely
 d> > independent?

 d> Nothing to do with triggers - it's all to do with your Perl code.


I respectfully disagree because if I don't execute a DELETE on foo2 as
shown in my original email, the problem doesn't occur.  Somewhere in
the trigger execution it's remembering the first table on which the
trigger fired.  So, the information about foo2 is coming from
somewhere and it's in the Perl code.  In other words, I'm performing two
different DELETEs which cause two different invocations of the same
trigger.


 d> #!/usr/bin/perl

 d> sub foo {
 d> my $x = shift;
 d> print "foo x = $x\n";
 d> bar();
 d> return;

 d> sub bar {
 d> print "bar x = $x\n";
 d> }
 d> }

 d> foo(1);
 d> foo(2);
 d> exit;

 d> $ ./perl_example.pl
 d> foo x = 1
 d> bar x = 1
 d> foo x = 2
 d> bar x = 1

 d> If you use warnings it'll tell you about it too with this example.



-- 
Brandon

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Brandon Metcalf
b == [EMAIL PROTECTED] writes:

 b> d == [EMAIL PROTECTED] writes:

 b>  d> Brandon Metcalf wrote:
 b>  d> > Yep, it seems that's the problem.  If I pass in $table and use a
 b>  d> > lexical variable defined inside do_delete(), the problem goes away.
 b>  d> > So, this is where my understanding of how triggers work lacks.  For a
 b>  d> > given session, each execution of a trigger isn't completely
 b>  d> > independent?

 b>  d> Nothing to do with triggers - it's all to do with your Perl code.


 b> I respectfully disagree because if I don't execute a DELETE on foo2 as
 b> shown in my original email, the problem doesn't occur.  Somewhere in
 b> the trigger execution it's remembering the first table on which the
 b> trigger fired.  So, the information about foo2 is coming from
 b> somewhere and it's in the Perl code.  In other words, I performing two

That should read it's _not_ in the Perl code.


 b> different DELETEs which cause two different invocations of the same
 b> trigger.


 b>  d> #!/usr/bin/perl

 b>  d> sub foo {
 b>  d> my $x = shift;
 b>  d> print "foo x = $x\n";
 b>  d> bar();
 b>  d> return;

 b>  d> sub bar {
 b>  d> print "bar x = $x\n";
 b>  d> }
 b>  d> }

 b>  d> foo(1);
 b>  d> foo(2);
 b>  d> exit;

 b>  d> $ ./perl_example.pl
 b>  d> foo x = 1
 b>  d> bar x = 1
 b>  d> foo x = 2
 b>  d> bar x = 1

 b>  d> If you use warnings it'll tell you about it too with this example.





-- 
Brandon

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] lesslog "incorrect resource manager data checksum."

2008-11-20 Thread Jean-Christophe Arnu
Koichi-san,

full_page_writes is on by default in the global configuration file
(i.e. postgresql.conf). Querying the server showed that it is:

 show full_page_writes ;
 full_page_writes
------------------
 on

I use the RPM version provided on pgfoundry.

The main problem is on the pg_compresslog side (first): it is unable to
compress xlogs.
I am sending you (in private) a bunch of uncompressed logs that are known to
trigger the pg_compresslog problem.

Once pg_compresslog works, I plan to test "the other side" of the solution
with pg_decompresslog :)

Thanks for your reply!

2008/11/20 Koichi Suzuki <[EMAIL PROTECTED]>

> Hi,
>
> Sorry I was out and late to find your report.
>
> Lesslog replaces full page write to corresponding incremental log to
> decrease the size and so you should turn full_page_writes "on" for
> lesslog to work.
>
> Did you specify "pg_decpmpresslog" as restore command?  If so, I don't
> see anything missing so far.   Pg_decompresslog must be used in place
> of "cat".
>
> If possible, could you try to share your archive log (a part may help)
> and let me know how you built lesslog?  (Did you use rpm?)   They will
> help.   I've tested lesslog against pgbench and DBT-2 so I believe the
> coverage is reasonable.
>
> Sorry for the inconvenience and thanks a lot for your help.
>
> -
> Koichi Suzuki
>
> 2008/11/18 Jean-Christophe Arnu <[EMAIL PROTECTED]>:
> > Hi,
> >
> > I'm trying to use lesslog 1.1 (out of NTT rpm available on pgfoundry) on
> my
> > pg_xlog files out of a 8.3.3 postgresql server.
> > I always get messages such as "incorrect resource manager data checksum."
> >
> > This message is issued (as I've read the sources) when there's CRC
> mismatch
> > in a record.
> >
> > Do you have any idea of what's going wrong (i.e. why my CRC are not
> > matching)? Does this deals with full_page_writes to be set to on (by
> default
> > in GUC)  which is commented in my GUC (thus set to on) ? Is there
> something
> > missing?
> >
> > Note : I'm transferring my xlog to another node with rsync. The log I get
> (a
> > bunch of) for my test purpose are the ones on the remote node, they
> should
> > be exactly the same as the one on the server.
> >
> > Thanks by advance for any ideao!
> >
> > --
> > Jean-Christophe Arnu
> >
>



-- 
Jean-Christophe Arnu


Re: [GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Richard Huxton
Brandon Metcalf wrote:
> d == [EMAIL PROTECTED] writes:
> 
>  d> Brandon Metcalf wrote:
>  d> > Yep, it seems that's the problem.  If I pass in $table and use a
>  d> > lexical variable defined inside do_delete(), the problem goes away.
>  d> > So, this is where my understanding of how triggers work lacks.  For a
>  d> > given session, each execution of a trigger isn't completely
>  d> > independent?
> 
>  d> Nothing to do with triggers - it's all to do with your Perl code.
> 
> 
> I respectfully disagree because if I don't execute a DELETE on foo2 as
> shown in my original email, the problem doesn't occur.

Of course not.

>  Somewhere in
> the trigger execution it's remembering the first table on which the
> trigger fired. 

Yes, in your "sub do_delete", with its local variable.

> So, the information about foo2 is coming from
> somewhere and it's in the Perl code. 

Yes, your local copy of $table in do_delete.

> In other words, I performing two
> different DELETEs which cause two different invocations of the same
> trigger.

You've written your code such that do_delete has a local copy of $table.
  In fact, the way it actually works iirc is that when you exit the
trigger function "my $table" goes out of scope and vanishes, but the
"$table" in do_delete doesn't vanish and persists from call to call. You
might call this a static variable in C terms.

>  d> #!/usr/bin/perl
> 
>  d> sub foo {
>  d> my $x = shift;
>  d> print "foo x = $x\n";
>  d> bar();
>  d> return;
> 
>  d> sub bar {
>  d> print "bar x = $x\n";
>  d> }
>  d> }
> 
>  d> foo(1);
>  d> foo(2);
>  d> exit;

This code mirrors _exactly_ what is happening with your trigger. On the
first call to foo $x is set to 1, on the second it's set to 2. That
doesn't affect "sub bar" though because its copy of $x is still the one
from the first call.

Maybe the following makes it clearer:
#!/usr/bin/perl

sub foo {
my $x = shift;
print "foo x = $x, ";
bar();
return;

sub bar {
print "bar x = $x\n";
$x--;
}
}

for my $i (1..5) { foo($i); }
exit;

$ ./perl_example.pl
foo x = 1, bar x = 1
foo x = 2, bar x = 0
foo x = 3, bar x = -1
foo x = 4, bar x = -2
foo x = 5, bar x = -3

The two $x variables go their separate ways and the one in "bar" doesn't
go out of scope at the end of the function.

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] transaction_read_only effects on performance, actual meaning (was: How to reduce impact of a query)

2008-11-20 Thread Tom Lane
Craig Ringer <[EMAIL PROTECTED]> writes:
> I'm quite curious about why setting the transaction to readonly helped 
> its performance. Could it be to do with setting hint bits or something 
> along those lines, perhaps? Anyone?

AFAIK that's actually a pessimization.  Declaring the transaction READ
ONLY does not activate any optimizations that wouldn't be there
otherwise, and it causes a few more cycles to be expended to check that
each statement is allowed under READ ONLY rules.  So I think that this
report is mistaken, and the performance change came from something else.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: List Ettiquette (perhaps Off Topic by now) was: Re: [GENERAL] Database access over the Internet...

2008-11-20 Thread Bruce Momjian
Scott Marlowe wrote:
> > In my view, anything that causes people to check more closely about exactly
> > where a reply is being sent is a good thing.
> 
> Really?  Really?  I spend a fair bit of time on these lists helping
> people out.  If I have to examine my reply-tos every time I'm done.  I
> hit reply to all and type. Knowing that my reply goes where it's
> supposed to.  I have better things to do than argue this point over
> and over, and so do a lot of other very helpful folks on this list.
> There is an OPTION that fixes the cc problem for people with brain
> dead email clients that can't eliminate the duplicate for them.  It's
> in the FAQ.  I think we've done enough for those people.

Agreed.  Some of us could just add Michelle to our auto-delete mailbox
rules and then Michelle would never have to worry about getting email
help from us.

Michelle is asking to be involved in this community and there are
requirements for that involvement.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date range query help

2008-11-20 Thread Tom Lane
Sam Mason <[EMAIL PROTECTED]> writes:
> I sometimes find it easier to store ranges like this with having the
> open ends at infinity.  This is easier with timestamps as they have
> magic 'infinity' values in PG (both positive and negative).  Values of
> date type don't have any special values like this which makes things a
> bit more awkward.

Just FYI, date does have +/-infinity as of 8.4.
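
For example, on 8.4 (a tiny sketch; the "bookings" table is made up):

  SELECT 'infinity'::date > date '2008-11-20';   -- true

  -- with the open end stored as 'infinity' rather than NULL,
  -- a containment test is just a plain comparison:
  SELECT * FROM bookings
  WHERE start_date <= current_date
    AND end_date   >= current_date;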

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] hidden errors calling a volatile function inside a stable function

2008-11-20 Thread Tom Lane
Craig Ringer <[EMAIL PROTECTED]> writes:
> Merlin Moncure wrote:
>> On Thu, Nov 20, 2008 at 6:21 AM, Sabin Coanda
>> <[EMAIL PROTECTED]> wrote:
>>> I found in a STABLE function, for instance "TEST_0"(), it is not allowed to
>>> use INSERT statement. Trying this will give me the error:
>>> ERROR: INSERT is not allowed in a non-volatile function
>> 
>> Why did you lie when your declared your function stable? :-)

> I suspect the point here is that PostgreSQL tries to stop you violating 
> your claims about the function's stability by preventing DML operations. 
> However, it doesn't also attempt to prevent invocation of volatile 
> functions.

All this behavior is documented; see
http://www.postgresql.org/docs/8.3/static/xfunc-volatility.html
particularly the NOTE at the bottom.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Propose: enum with all registered table names?

2008-11-20 Thread Dmitry Koterov
Hello.

Sometimes I want to store in a table a reference to another table's name.
E.g. I am creating an accounting system which (via triggers) logs which
record is changed and in which table:

CREATE TABLE log(
  tbl XXX,
  id INTEGER,
  blahblah
);

Currently XXX may be:
1) Table name. But it is quite inefficient in terms of disk usage. Another
bad thing: if I rename a table, I have to update all rows in the log table.
2) Table OID. It is very bad for pg_dump: after restoration the log table
will be unusable.
3) ENUM with all table names. But I have to add an element to the ENUM each
time I create a new table, and, if I rename a table, I have to rename the
ENUM element too.

So it would be very useful if Postgres had a special, system ENUM (e.g.
pg_catalog.table_names_enum) which holds the names of all tables in the
database (format: schema.table), and whose elements are automatically
renamed when a table is renamed.


Re: [GENERAL] Propose: enum with all registered table names?

2008-11-20 Thread Richard Huxton
Dmitry Koterov wrote:
> 
> So it would be very useful if Postgres has a special, system ENUM (e.g.
> pg_catalog.table_names_enum) which holds names of all tables in the database
> (format: schema.table), and their elements are automatically renamed when a
> table is renamed.

Too late :-)

It's regclass you're after I think.

=> CREATE TABLE track_tables (t regclass);
=> INSERT INTO track_tables (t) VALUES ('"A"'::regclass);
INSERT 0 1
=> SELECT * FROM track_tables;
   t
-----
 "A"
(1 row)

=> ALTER TABLE "A" RENAME TO atable;
ALTER TABLE
=> SELECT * FROM track_tables;
   t
--------
 atable
(1 row)


-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] compiling libpq.dll with Borland C++, is it possible?

2008-11-20 Thread Bruce Momjian
Tony Caduto wrote:
> Hi,
> I am trying to compile my own copy of libpq.dll using bcc32.exe, the 
> docs say it is possible, but I get a error when it tries to compile dirent.c
> Has anyone been able to do this? 

Oh, that's a shame.  We usually get feedback once per major release from
Borland CC users and adjust things so they work, but then at some later
point things can break and there are no Borland CC users around to do
testing.

Looking at the errors I see a 'DIR' structure being defined:

DIR*d;

and the errors are in referencing fields in the structure:

d->dirname = malloc(strlen(dirname) + 4);

So would you check to see what fields are defined in your DIR structure?

Also, I don't see that this file has been changed for a few years, so I am
confused about why the compile is failing for you and not for previous
Borland CC users.

---


> 
> 
> C:\postgresql-8.3.5\src\interfaces\libpq>make -N -DCFG=Release /f bcc32.mak
> MAKE Version 5.3  Copyright (c) 1987, 2008 CodeGear
> Building the Win32 DLL and Static Library...
> 
> Configuration "Release"
> 
> bcc32.exe @MAKE0014.@@@
> Borland C++ 5.5.1 for Win32 Copyright (c) 1993, 2000 Borland
> ..\..\port\dirent.c:
> Error E2451 ..\..\port\dirent.c 35: Undefined symbol 
> 'INVALID_FILE_ATTRIBUTES' in function opendir
> Error E2451 ..\..\port\dirent.c 52: Undefined symbol 'dirname' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 53: Undefined symbol 'dirname' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 59: Undefined symbol 'dirname' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 60: Undefined symbol 'dirname' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 60: Undefined symbol 'dirname' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 61: Undefined symbol 'dirname' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 61: Undefined symbol 'dirname' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 62: Undefined symbol 'dirname' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 64: Undefined symbol 'dirname' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 65: Undefined symbol 'handle' in 
> function opendir
> Error E2451 ..\..\port\dirent.c 66: Undefined symbol 'ret' in function 
> opendir
> Error E2451 ..\..\port\dirent.c 67: Undefined symbol 'ret' in function 
> opendir
> Error E2451 ..\..\port\dirent.c 77: Undefined symbol 'handle' in 
> function readdir
> Error E2451 ..\..\port\dirent.c 79: Undefined symbol 'handle' in 
> function readdir
> Error E2451 ..\..\port\dirent.c 79: Undefined symbol 'dirname' in 
> function readdir
> Error E2451 ..\..\port\dirent.c 80: Undefined symbol 'handle' in 
> function readdir
> Error E2451 ..\..\port\dirent.c 88: Undefined symbol 'handle' in 
> function readdir
> Error E2451 ..\..\port\dirent.c 100: Undefined symbol 'ret' in function 
> readdir
> Error E2451 ..\..\port\dirent.c 102: Undefined symbol 'ret' in function 
> readdir
> Error E2451 ..\..\port\dirent.c 102: Undefined symbol 'ret' in function 
> readdir
> Error E2451 ..\..\port\dirent.c 103: Undefined symbol 'ret' in function 
> readdir
> Error E2451 ..\..\port\dirent.c 109: Undefined symbol 'handle' in 
> function closedir
> Error E2451 ..\..\port\dirent.c 110: Undefined symbol 'handle' in 
> function closedir
> Error E2451 ..\..\port\dirent.c 111: Undefined symbol 'dirname' in 
> function closedir
> Error E2228 ..\..\port\dirent.c 111: Too many error or warning messages 
> in function closedir
> *** 26 errors in Compile ***
> 
> ** error 1 ** deleting ".\Release\dirent.obj"
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Propose: enum with all registered table names?

2008-11-20 Thread Alvaro Herrera
Dmitry Koterov wrote:

> CREATE TABLE log(
>   tbl XXX,
>   id INTEGER,
>   blahblah
> );

Use type regclass.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread Daniel Verite

Albe Laurenz wrote:


You'll have to find a way to pick or generate unique names for the
prepared statements.
You could check for name collisions and disambiguate with a suffix
or something.


By the way, why do prepared statements need to be named at all?
With other DBMSs such as Oracle or MySQL, one can prepare statements 
without providing any name for them: the prepare() step returns a 
"statement handle" that is passed to subsequent exec() calls, and no 
unique name is involved. I know that you can pass an empty string to 
PQprepare(), but only one such unnamed statement can be used at a time, 
so it's not the same thing.


Currently with Postgres, using prepared statements more or less implies 
implementing an application-wide policy for naming them, otherwise 
there is always the risk that some code higher up the stack has a live 
statement with the same name. And what about contributed code or 
libraries? This would be easier if the global namespace for prepared 
statements didn't exist in the first place.
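
In the meantime, the pg_prepared_statements system view (available since
8.2) at least lets each session see what has already been prepared before
picking a name. A minimal sketch, with a made-up statement name and table:

  -- has anything been prepared under this name in the current session?
  SELECT name, statement
  FROM pg_prepared_statements
  WHERE name = 'stmt_a1b2c3';

  -- if not, prepare it; if yes, compare the statement text to spot a clash
  PREPARE stmt_a1b2c3 (integer) AS
      SELECT * FROM users WHERE id = $1;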


Best regards,
--
Daniel
PostgreSQL-powered mail user agent and storage: 
http://www.manitou-mail.org


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Propose: enum with all registered table names?

2008-11-20 Thread Tom Lane
Richard Huxton <[EMAIL PROTECTED]> writes:
> It's regclass you're after I think.

A fairly large problem with either regclass or a hypothetical
system-maintained enum is that a table with such a column effectively
has a data dependency on all the tables it mentions.  Which pg_dump
won't know about, so it might try to restore that table before it's
finished creating all the mentioned tables.

I think this will actually work alright in existing pg_dump releases
because it does all the table creation DDL before trying to load any
data.  You might have some problems with future parallel pg_restores
though.

On the whole I'd recommend staying away from the idea.  It doesn't
seem valuable enough to be worth taking risks over.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] compiling libpq.dll with Borland C++, is it possible?

2008-11-20 Thread Alvaro Herrera
Tony Caduto wrote:
> Hi,
> I am trying to compile my own copy of libpq.dll using bcc32.exe, the  
> docs say it is possible, but I get a error when it tries to compile 
> dirent.c

How hard would it be to set up a buildfarm member that exercises the
Borland compiler?  If it only compiles client code, maybe it's possible
to compile the rest of the stuff with MSVC or mingw.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Brandon Metcalf
d == [EMAIL PROTECTED] writes:

 d> Brandon Metcalf wrote:
 d> > d == [EMAIL PROTECTED] writes:
 d> >
 d> >  d> Brandon Metcalf wrote:
 d> >  d> > Yep, it seems that's the problem.  If I pass in $table and use a
 d> >  d> > lexical variable defined inside do_delete(), the problem goes away.
 d> >  d> > So, this is where my understanding of how triggers work lacks.  For 
a
 d> >  d> > given session, each execution of a trigger isn't completely
 d> >  d> > independent?
 d> >
 d> >  d> Nothing to do with triggers - it's all to do with your Perl code.
 d> >
 d> >
 d> > I respectfully disagree because if I don't execute a DELETE on foo2 as
 d> > shown in my original email, the problem doesn't occur.

 d> Of course not.

 d> >  Somewhere in
 d> > the trigger execution it's remembering the first table on which the
 d> > trigger fired.

 d> Yes. in your "sub do_delete" with it's local variable.

 d> > So, the information about foo2 is coming from
 d> > somewhere and it's in the Perl code.

 d> Yes, your local copy of $table in do_delete.

 d> > In other words, I performing two
 d> > different DELETEs which cause two different invocations of the same
 d> > trigger.

 d> You've written your code such that do_delete has a local copy of $table.
 d>   In fact, the way it actually works iirc is that when you exit the
 d> trigger function "my $table" goes out of scope and vanishes, but the
 d> "$table" in do_delete doesn't vanish and persists from call to call. You
 d> might call this a static variable in C terms.


OK.  I understand the Perl part of what is going on.  What I don't
understand is why $table in do_delete() hangs around.  It seems this
is more a characteristic of how triggers work in pgsql.  At any rate,
I appreciate the input since it provides me with a fix.


 d> >  d> #!/usr/bin/perl
 d> >
 d> >  d> sub foo {
 d> >  d> my $x = shift;
 d> >  d> print "foo x = $x\n";
 d> >  d> bar();
 d> >  d> return;
 d> >
 d> >  d> sub bar {
 d> >  d> print "bar x = $x\n";
 d> >  d> }
 d> >  d> }
 d> >
 d> >  d> foo(1);
 d> >  d> foo(2);
 d> >  d> exit;

 d> This code mirrors _exactly_ what is happening with your trigger. On the
 d> first call to foo $x is set to 1, on the second it's set to 2. That
 d> doesn't affect "sub bar" though because its copy of $x is still the one
 d> from the first call.

 d> Maybe the following makes it clearer:
 d> #!/usr/bin/perl

 d> sub foo {
 d> my $x = shift;
 d> print "foo x = $x, ";
 d> bar();
 d> return;

 d> sub bar {
 d> print "bar x = $x\n";
 d> $x--;
 d> }
 d> }

 d> for my $i (1..5) { foo($i); }
 d> exit;

 d> $ ./perl_example.pl
 d> foo x = 1, bar x = 1
 d> foo x = 2, bar x = 0
 d> foo x = 3, bar x = -1
 d> foo x = 4, bar x = -2
 d> foo x = 5, bar x = -3

 d> The two $x variables go their separate ways and the one in "bar" doesn't
 d> go out of scope at the end of the function.



-- 
Brandon

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Richard Huxton
Brandon Metcalf wrote:
> 
> OK.  I understand the Perl part of what is going on.  What I don't
> understand is why $table in do_delete() hangs around.  It seems this
> is more a characteristic of how triggers work in pgsql.  At any rate,
> I appreciate the input since it provides me with a fix.

Because it will persist until either:
1. "sub do_delete" expires
2. the whole perl interpreter expires (which then implies #1)

Your trigger function will be "compiled"* just before it is first called
and will then exist until #2 above happens (when you disconnect). Since
your trigger function exists, that implies do_delete continues to exist
which means its version of $table continues to exist. It really is
exactly like that example script I attached, where the "exit" equals
database end-of-session.


* ok, perl is technically interpreted, except of course it isn't really,
and I don't know what a better term than compiled would be.

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] example of really weird caching (or whatever) problem

2008-11-20 Thread Brandon Metcalf
d == [EMAIL PROTECTED] writes:

 d> Brandon Metcalf wrote:
 d> >
 d> > OK.  I understand the Perl part of what is going on.  What I don't
 d> > understand is why $table in do_delete() hangs around.  It seems this
 d> > is more a characteristic of how triggers work in pgsql.  At any rate,
 d> > I appreciate the input since it provides me with a fix.

 d> Because it will persist until either:
 d> 1. "sub do_delete" expires
 d> 2. the whole perl interpreter expires (which then implies #1)

 d> Your trigger function will be "compiled"* just before it is first called
 d> and will then exist until #2 above happens (when you disconnect). Since
 d> your trigger function exists, that implies do_delete continues to exist
 d> which means its version of $table continues to exist. It really is
 d> exactly like that example script I attached, where the "exit" equals
 d> database end-of-session.

OK, that makes sense.

 d> * ok, perl is technically interpreted, except of course it isn't really,
 d> and I don't know what a better term than compiled would be.

No, you're right.  It is compiled before being executed, but not in
the sense that C code is compiled.


-- 
Brandon

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] join with redundant results VS simpler join plus multiple selects

2008-11-20 Thread Craig Ringer
WireSpot wrote:
> On Thu, Nov 20, 2008 at 15:05, Craig Ringer <[EMAIL PROTECTED]> wrote:
>> That's probably going to be the case.  PostgreSQL won't need to read the
>> redundant info in from disk each time, and relative to the image data it's
>> going to be pretty small. By doing it all in one join you're avoiding the
>> overhead of all those network round trips (if on a network), statement
>> preparation and planning, etc etc etc. Additionally, PostgreSQL is probably
>> going to be using a join plan that's much more efficient than anything
>> you'll get by looping over each user and asking for images.
> 
> How about if the subset of images for each user is randomized? As in
> ORDER BY RANDOM() LIMIT 3. I'm guessing that will put somewhat of a
> cramp on the big join scenario and perhaps it becomes better to have
> the RANDOM() in the small individual selects?

I'm not even sure how you'd achieve that (exactly 3 randomly selected
images per user) with a single query. Then again, it's stupidly late
here, so my brain may not be working. Any chance you can post a query
that shows what you're doing?
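
For what it's worth, a single-query formulation does exist on much later
releases via LATERAL (added in 9.3). A sketch only, with hypothetical
users/images tables, and not something an 8.3-era server can run:

  SELECT u.id AS user_id, img.*
  FROM users u
  JOIN LATERAL (
      SELECT i.id, i.filename
      FROM images i
      WHERE i.user_id = u.id
      ORDER BY random()
      LIMIT 3
  ) img ON true;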

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Sorting JTA survey results

2008-11-20 Thread wstrzalka
http://www.postgresqlcertification.org/jta/2008/results

Having point 4 as an example:

For how long have you been a PostgreSQL database administrator?
  Less than 1 year   36
  I wish.            15
  3 to 5 years       42
  1 to 3 years       78
  Over 10 years       7
  5 to 7 years       33
  7 to 10 years      12
--

I think many people would appreciate it if this list could be sorted by
a) topic - in this case administrative experience
or
b) percentage

Currently it's really hard to interpret the results. Please put an
ORDER BY there :)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Josh Harrison
Hi,
I tried the 8.4-devel version and the CTE (Common Table Expression - WITH
RECURSIVE) patch is working pretty well.
I just have a question.

These are the queries and their plans.

The first query uses the RECURSIVE keyword (and has a recursive and a
non-recursive term in the CTE) while the second query uses only the WITH
keyword (and has no recursive term).
My question is: when I don't use the recursive term, does the optimizer just
treat it as a subquery, or does it work like Oracle's WITH clause
(subquery factoring)? Oracle's WITH clause boosts the performance of
queries. So does this do the same?

1. explain analyse
WITH RECURSIVE subdepartment AS
(
-- non-recursive term
SELECT * FROM department WHERE name = 'A'
UNION ALL
-- recursive term
SELECT d.* FROM department AS d
JOIN   subdepartment AS sd   ON (d.parent_department = sd.id)
)
SELECT *
FROM subdepartment

                                                     QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 CTE Scan on subdepartment  (cost=17.57..18.99 rows=71 width=40) (actual time=0.044..0.590 rows=5 loops=1)
   InitPlan
     ->  Recursive Union  (cost=0.00..17.57 rows=71 width=10) (actual time=0.034..0.536 rows=5 loops=1)
           ->  Seq Scan on department  (cost=0.00..1.10 rows=1 width=10) (actual time=0.025..0.031 rows=1 loops=1)
                 Filter: (name = 'A'::text)
           ->  Hash Join  (cost=0.33..1.51 rows=7 width=10) (actual time=0.080..0.107 rows=1 loops=4)
                 Hash Cond: (d.parent_department = sd.id)
                 ->  Seq Scan on department d  (cost=0.00..1.08 rows=8 width=10) (actual time=0.004..0.033 rows=8 loops=4)
                 ->  Hash  (cost=0.20..0.20 rows=10 width=4) (actual time=0.023..0.023 rows=1 loops=4)
                       ->  WorkTable Scan on subdepartment sd  (cost=0.00..0.20 rows=10 width=4) (actual time=0.004..0.009 rows=1 loops=4)
 Total runtime: 0.681 ms

2. explain analyse
WITH  subdepartment AS
(
-- non-recursive term
SELECT * FROM department WHERE name = 'A'
)
SELECT id,name FROM subdepartment

                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------
 CTE Scan on subdepartment  (cost=1.10..1.12 rows=1 width=36) (actual time=0.037..0.050 rows=1 loops=1)
   InitPlan
     ->  Seq Scan on department  (cost=0.00..1.10 rows=1 width=10) (actual time=0.024..0.030 rows=1 loops=1)
           Filter: (name = 'A'::text)
 Total runtime: 0.111 ms

Thanks
Josh


Re: [GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Tom Lane
"Josh Harrison" <[EMAIL PROTECTED]> writes:
> My question is when I don't use the Recursive term does the optimizer just
> consider it as a subquery or does it work like Oracle's WITH CLAUSE
> (Subquery Factoring) ? Oracle's WITH CLAUSE boosts the performance of the
> queries. So does this do the same?

See the fine manual, for instance last para here:
http://developer.postgresql.org/pgdocs/postgres/queries-with.html

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Christophe

On Nov 20, 2008, at 1:21 PM, Tom Lane wrote:

See the fine manual, for instance last para here:
http://developer.postgresql.org/pgdocs/postgres/queries-with.html


Fine manual indeed... this is the best explanation of WITH RECURSIVE
I've ever read.  Kudos to the documentation writer(s).


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Josh Harrison
Thanks Tom. This is wonderful

On Thu, Nov 20, 2008 at 4:21 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "Josh Harrison" <[EMAIL PROTECTED]> writes:
> > My question is when I don't use the Recursive term does the optimizer
> just
> > consider it as a subquery or does it work like Oracle's WITH CLAUSE
> > (Subquery Factoring) ? Oracle's WITH CLAUSE boosts the performance of the
> > queries. So does this do the same?
>
> See the fine manual, for instance last para here:
> http://developer.postgresql.org/pgdocs/postgres/queries-with.html
>
>regards, tom lane
>


Re: [GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Thomas Kellerer

Tom Lane wrote on 20.11.2008 22:21:

"Josh Harrison" <[EMAIL PROTECTED]> writes:

My question is when I don't use the Recursive term does the optimizer just
consider it as a subquery or does it work like Oracle's WITH CLAUSE
(Subquery Factoring) ? Oracle's WITH CLAUSE boosts the performance of the
queries. So does this do the same?


See the fine manual, for instance last para here:
http://developer.postgresql.org/pgdocs/postgres/queries-with.html



Cool :)

From http://umitanuki.net/pgsql/wfv08/design.html I got the impression that 
windowing functions will make it (at least partially) into 8.4, because on 
that page several items are marked with [DONE].


I can't see anything in the developer docs regarding them. Does that mean 
they won't make it, or simply that the documentation isn't there yet?


Cheers
Thomas




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Bruce Momjian
Thomas Kellerer wrote:
> Tom Lane wrote on 20.11.2008 22:21:
> > "Josh Harrison" <[EMAIL PROTECTED]> writes:
> >> My question is when I don't use the Recursive term does the optimizer just
> >> consider it as a subquery or does it work like Oracle's WITH CLAUSE
> >> (Subquery Factoring) ? Oracle's WITH CLAUSE boosts the performance of the
> >> queries. So does this do the same?
> > 
> > See the fine manual, for instance last para here:
> > http://developer.postgresql.org/pgdocs/postgres/queries-with.html
> > 
> 
> Cool :)
> 
>  From http://umitanuki.net/pgsql/wfv08/design.html I got the impression that 
> windowing functions will make into (at least partially) into 8.4 because on 
> that 
> page several items are marked with [DONE].
> 
> I can't see anything in the developer docs regarding them. Does that mean 
> they 
> won't make it or that simply the documentation isnt't yet there?

The patch is still being reviewed for inclusion in 8.4.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Thomas Kellerer

Bruce Momjian wrote on 20.11.2008 22:56:
From http://umitanuki.net/pgsql/wfv08/design.html I got the impression that 
windowing functions will make into (at least partially) into 8.4 because on that 
page several items are marked with [DONE].


I can't see anything in the developer docs regarding them. Does that mean they 
won't make it or that simply the documentation isnt't yet there?


The patch is still being reviewed for inclusion in 8.4.


Thanks for the quick reply.

I'm keeping my fingers crossed, because that is one of the features I'm really 
waiting for :)


Cheers
Thomas



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about the WITH RECURSIVE patch

2008-11-20 Thread Bruce Momjian
Thomas Kellerer wrote:
> Bruce Momjian wrote on 20.11.2008 22:56:
> >> From http://umitanuki.net/pgsql/wfv08/design.html I got the impression 
> >> that 
> >> windowing functions will make into (at least partially) into 8.4 because 
> >> on that 
> >> page several items are marked with [DONE].
> >>
> >> I can't see anything in the developer docs regarding them. Does that mean 
> >> they 
> >> won't make it or that simply the documentation isnt't yet there?
> > 
> > The patch is still being reviewed for inclusion in 8.4.
> > 
> Thanks for the quick reply.
> 
> I'm keeping my fingers crossed, because that is one of the features I'm 
> really 
> waiting for :)

I think there is a good chance it will be in 8.4;  several community
members have done a lot to help test it and make sure it is ready for
8.4.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Porting from Oracle PL/SQL

2008-11-20 Thread Daniel Clark
While doing a Google search recently, I found a nice site located at 
http://www.redhat.com/docs/manuals/database/RHDB-2.0-Manual/prog/plpgsql-porting.html.
The number and quality of Postgres-related book titles have grown and greatly 
improved over the years. With the increased popularity of open source 
databases, has anyone given any thought to publishing a title focused on 
migrating apps from Oracle to Postgres? Or maybe a title focused exclusively 
on PL/pgSQL? It's just a thought, no criticism whatsoever. Good day.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] lo data type

2008-11-20 Thread John Zhang
Hi there,

I am writing to seek your assistance on how to set up the lo data type for
large objects. I am using Postgres 8.3. By default, the data type lo is not
created by the installation, right? How do I create it? Any input would be
much appreciated. Thanks.

John


Re: [GENERAL] lo data type

2008-11-20 Thread Alan Hodgson
On Thursday 20 November 2008, "John Zhang" <[EMAIL PROTECTED]> wrote:
> Hi there,
>
> I am writing to seek your assistance on how to set up the lo data type
> for large objects. I am using postres 3.3. By default, the data type lo
> is not created by the installation, right? How to create one? Any input
> would be much appreciated. Thanks.
>

http://www.postgresql.org/docs/8.3/static/largeobjects.html
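
A minimal sketch, assuming the contrib/lo module has been installed (by
running its lo.sql script against the database on 8.3), which is what
provides the "lo" domain and the lo_manage() trigger function:

  CREATE TABLE image (
      title  text,
      raster lo
  );

  -- keep the underlying large object from being orphaned on UPDATE/DELETE
  CREATE TRIGGER t_raster
      BEFORE UPDATE OR DELETE ON image
      FOR EACH ROW EXECUTE PROCEDURE lo_manage(raster);

  -- server-side import of a file into a new large object
  INSERT INTO image (title, raster)
      VALUES ('logo', lo_import('/tmp/logo.png'));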


-- 
Corporations will ingest natural resources and defecate garbage until all 
resources are depleted, debt can no longer be repaid and our money becomes 
worthless - Jay Hanson

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Reversing transactions on a large scale

2008-11-20 Thread snacktime
Right now we are running mysql as that is what was there when I
entered the scene.  We might switch to postgres, but I'm not sure if
postgres makes this any easier.

We run a couple of popular games on social networking sites.  These
games have a simple economy, and we need to be able to time-warp the
economy back in time, which means reverting a whole lot of
transactions and inventories.  Our games generate around 1 million
user transactions per hour, which results in inserts/updates on 4
times that many rows.   Using PIT recovery would be a very reliable
way to accomplish this, but I'm wondering how long it would take.  If
it takes a full day to roll back an hour of game time, then I need to
find another solution.

Chris

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reversing transactions on a large scale

2008-11-20 Thread Scott Marlowe
On Thu, Nov 20, 2008 at 4:36 PM, snacktime <[EMAIL PROTECTED]> wrote:
> Right now we are running mysql as that is what was there when I
> entered the scene.  We might switch to postgres, but I'm not sure if
> postgres makes this any easier.
>
> We run a couple of popular games on social networking sites.  These
> games have a simple economy,and we need to be able to time warp the
> economy back in time, which means reverting a whole lot of
> transactions and inventories.  Our games generate around 1 million
> user transactions per hour, which results in inserts/updates on 4
> times that many rows.   Using PIT recovery would be a very reliable
> way to accomplish this, but I'm wondering how long it would take.  If
> it takes a full day to roll back an hour of game time, then I need to
> find another solution.

PITR is pretty fast, since it sequentially applies changes to the
database as fast as it can.  Your hardware has a lot to do with this
though.  Applying changes to a machine with plenty of memory, fast
CPUs, and a big rockin RAID-10 array will of course be much faster
than doing the same thing on a laptop.

If you make "base" sets every night at midnight with snapshots, then
it shouldn't take too long.  Is this gonna be a regular thing, or is
this more of an occasional occurance when things in the game go
horribly wrong?
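
In PostgreSQL terms the setup would look roughly like this (a sketch only;
the archive path and target timestamp are made up):

  # postgresql.conf on the game database (8.3)
  archive_mode = on
  archive_command = 'cp %p /var/lib/pgsql/wal_archive/%f'

  # recovery.conf placed in a fresh copy of the nightly base backup,
  # rolling forward only up to the chosen point in time
  restore_command = 'cp /var/lib/pgsql/wal_archive/%f %p'
  recovery_target_time = '2008-11-20 12:00:00'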

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Porting from Oracle PL/SQL

2008-11-20 Thread Scott Marlowe
On Thu, Nov 20, 2008 at 4:05 PM, Daniel Clark <[EMAIL PROTECTED]> wrote:
> While doing a Google search recently, I found a nice site located at
> http://www.redhat.com/docs/manuals/database/RHDB-2.0-Manual/prog/plpgsql-porting.html.
> The number and quality of Postgres related book titles has grown, and greatly
> improved over the years. With the increased popularity of Open Source
> databases, has anyone given any thought towards publishing a title focused on
> migrating apps from Oracle to Postgres? Or maybe a title focused exclusively
> on PL/pgSQL? It's just a thought, no criticism whatsover. Good day.

I would totally buy an in depth book on plpgsql.  Or one that split
between that and custome C functions.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reversing transactions on a large scale

2008-11-20 Thread Tom Lane
> On Thu, Nov 20, 2008 at 4:36 PM, snacktime <[EMAIL PROTECTED]> wrote:
>> Right now we are running mysql as that is what was there when I
>> entered the scene.  We might switch to postgres, but I'm not sure if
>> postgres makes this any easier.
>> 
>> We run a couple of popular games on social networking sites.  These
>> games have a simple economy,and we need to be able to time warp the
>> economy back in time, which means reverting a whole lot of
>> transactions and inventories.  Our games generate around 1 million
>> user transactions per hour, which results in inserts/updates on 4
>> times that many rows.

Just out of curiosity, I'm wondering how you make that happen now with
mysql.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgres mail list traffic over time

2008-11-20 Thread Tom Lane
I got interested by Bruce's plot of PG email traffic here
http://momjian.us/main/img/pgincoming.gif
and decided to try to extend it into the past.  The data I have
available is just my own incoming mail log, but being a pack-rat by
nature I have that back to April 1998.  Attached is a graph of Postgres
list messages per month since then.  I should note that this covers only
the mail lists I'm subscribed to, which has been most of them since
about 1999; but the first few numbers in this chart are undercounts by
comparison.  Also, the very last dot is month-to-date for November and
so is an underestimate.

So, to a first approximation, the PG list traffic has been constant
since 2000.  Not the result I expected.

regards, tom lane

[attachment: graph of Postgres list messages per month omitted]
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres mail list traffic over time

2008-11-20 Thread Bruce Momjian
Tom Lane wrote:
> I got interested by Bruce's plot of PG email traffic here
> http://momjian.us/main/img/pgincoming.gif
> and decided to try to extend it into the past.  The data I have
> available is just my own incoming mail log, but being a pack-rat by
> nature I have that back to April 1998.  Attached is a graph of Postgres
> list messages per month since then.  I should note that this covers only
> the mail lists I'm subscribed to, which has been most of them since
> about 1999; but the first few numbers in this chart are undercounts by
> comparison.  Also, the very last dot is month-to-date for November and
> so is an underestimate.
> 
> So, to a first approximation, the PG list traffic has been constant
> since 2000.  Not the result I expected.

Yes, I know Magnus did a graph for the PG-EU conference and it was also
flat;  perhaps he can post it here.  His chart was pulled from the
Postgres archives, so it is even more accurate than our graphs.

I also was confused by its flatness.  I am finding the email traffic
almost impossible to continue tracking, so something different is
happening, but it seems it is not volume-related.  I am going to post
another blog tomorrow with more thoughts.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres mail list traffic over time

2008-11-20 Thread Tom Lane
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> So, to a first approximation, the PG list traffic has been constant
>> since 2000.  Not the result I expected.

> I also was confused by its flatness.  I am finding the email traffic
> almost impossible to continue tracking, so something different is
> happening, but it seems it is not volume-related.

Yes, my perception also is that it's getting harder and harder to keep
up with the list traffic; so something is happening that a simple
volume count doesn't capture.

Does anyone have the data to break it down per mailing list?  That might
yield some more insight.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres mail list traffic over time

2008-11-20 Thread Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> So, to a first approximation, the PG list traffic has been constant
> >> since 2000.  Not the result I expected.
> 
> > I also was confused by its flatness.  I am finding the email traffic
> > almost impossible to continue tracking, so something different is
> > happening, but it seems it is not volume-related.
> 
> Yes, my perception also is that it's getting harder and harder to keep
> up with the list traffic; so something is happening that a simple
> volume count doesn't capture.

Agreed.  I am struggling to put some of my angst into words, but I am
concerned I will not be able to offer the same guarantee I have in
previous releases: that every bug has been either fixed or added to the
TODO list, and every submitted patch has been either applied or rejected.

There, I said it.  :-(

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres mail list traffic over time

2008-11-20 Thread Joshua D. Drake
On Thu, 2008-11-20 at 22:36 -0500, Tom Lane wrote:
> I got interested by Bruce's plot of PG email traffic here
> http://momjian.us/main/img/pgincoming.gif
> and decided to try to extend it into the past.  The data I have
> available is just my own incoming mail log, but being a pack-rat by
> nature I have that back to April 1998.  Attached is a graph of Postgres
> list messages per month since then.  I should note that this covers only
> the mail lists I'm subscribed to, which has been most of them since
> about 1999; but the first few numbers in this chart are undercounts by
> comparison.  Also, the very last dot is month-to-date for November and
> so is an underestimate.
> 
> So, to a first approximation, the PG list traffic has been constant
> since 2000.  Not the result I expected.

Am I reading your graph wrong? I see a sharp increase right before 2006,
then a small drop-off, and roughly constant traffic after that.

I know that my email (I am pretty sure I am subscribed to at least as
many lists as you) has been on a steady incline, especially through
-general and -hackers.

Joshua D. Drake


> 
>   regards, tom lane
> 
-- 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres mail list traffic over time

2008-11-20 Thread Tom Lane
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> I know that my email (I am pretty sure I am subscribed to at least as
> many lists as you) has been on a steady incline, especially through
> -general and -hackers.

I would have said the same, which is why I find it noteworthy that
my mail logs don't seem to support that impression.  Have you got
actual log data on the point?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres mail list traffic over time

2008-11-20 Thread brian

Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
>> I am finding the email traffic
>> almost impossible to continue tracking, so something different is
>> happening, but it seems it is not volume-related.
>
> Yes, my perception also is that it's getting harder and harder to keep
> up with the list traffic; so something is happening that a simple
> volume count doesn't capture.


Perhaps it's just subjective: we're all getting older.

Soon, these pesky whippersnappers will want to twitter their PG 
questions to this list over YouTube.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres mail list traffic over time

2008-11-20 Thread Joshua D. Drake
On Thu, 2008-11-20 at 23:46 -0500, Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > I know that my email (I am pretty sure I am subscribed to at least as
> > many lists as you) has been on a steady incline, especially through
> > -general and -hackers.
> 
> I would have said the same, which is why I find it noteworthy that
> my mail logs don't seem to support that impression.  Have you got
> actual log data on the point?

I purge my postgresql logs except for some specific ones (like PGFG).
However, I have the entire archives.postgresql.org.

pgsql-hackers (since its inception in 1997); first line is the date,
second line is the number of messages.

1997-01
939
1997-02
300
1997-03
534
1997-04
865
1997-05
484
1997-06
601
1997-07
392
1997-08
399
1997-09
579
1997-10
594
1997-11
381
1997-12
351
1998-01
870
1998-02
1326
1998-03
1121
1998-04
707
1998-05
632
1998-06
493
1998-07
490
1998-08
867
1998-09
675
1998-10
1221
1998-11
609
1998-12
600
1999-01
769
1999-02
699
1999-03
1008
1999-04
217
1999-05
1155
1999-06
1241
1999-07
1052
1999-08
705
1999-09
945
1999-10
962
1999-11
929
1999-12
1065
2000-01
1688
2000-02
1460
2000-03
288
2000-04
187
2000-05
1686
2000-06
1283
2000-07
1477
2000-08
890
2000-09
642
2000-10
1320
2000-11
1419
2000-12
1234
2001-01
1469
2001-02
1178
2001-03
1708
2001-04
1181
2001-05
1478
2001-06
1151
2001-07
955
2001-08
1220
2001-09
921
2001-10
1165
2001-11
1318
2001-12
970
2002-01
1411
2002-02
1233
2002-03
1246
2002-04
1565
2002-05
1169
2002-06
1045
2002-07
1339
2002-08
2308
2002-09
1843
2002-10
1469
2002-11
1257
2002-12
1172
2003-01
1356
2003-02
1324
2003-03
1262
2003-04
1033
2003-05
812
2003-06
1316
2003-07
1068
2003-08
1373
2003-09
1695
2003-10
1631
2003-11
1643
2003-12
836
2004-01
878
2004-02
1017
2004-03
1352
2004-04
1177
2004-05
1495
2004-06
1025
2004-07
1430
2004-08
1620
2004-09
953
2004-10
1084
2004-11
1226
2004-12
963
2005-01
1116
2005-02
987
2005-03
1086
2005-04
1022
2005-05
1626
2005-06
1598
2005-07
1162
2005-08
1217
2005-09
1484
2005-10
1442
2005-11
1587
2005-12
1278
2006-01
1050
2006-02
1282
2006-03
1343
2006-04
1158
2006-05
1386
2006-06
1645
2006-07
1660
2006-08
2060
2006-09
2397
2006-10
1583
2006-11
1031
2006-12
1437
2007-01
1663
2007-02
1953
2007-03
1871
2007-04
1285
2007-05
1201
2007-06
1140
2007-07
1019
2007-08
1244
2007-09
1230
2007-10
1575
2007-11
1380
2007-12
1000
2008-01
1236
2008-02
1324
2008-03
1308
2008-04
1928
2008-05
1128
2008-06
1161
2008-07
1512
2008-08
1391
2008-09
1910
2008-10
1715
2008-11
1431



> 
>   regards, tom lane
> 
-- 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres mail list traffic over time

2008-11-20 Thread Joshua D. Drake
On Thu, 2008-11-20 at 21:19 -0800, Joshua D. Drake wrote:
> On Thu, 2008-11-20 at 23:46 -0500, Tom Lane wrote:
> > "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > > I know that my email (I am pretty sure I am subscribed to at least as
> > > many lists as you) has been on a steady incline, especially through
> > > -general and -hackers.
> > 
> > I would have said the same, which is why I find it noteworthy that
> > my mail logs don't seem to support that impression.  Have you got
> > actual log data on the point?
> 
> I purge my postgresql logs except for some specific ones (like PGFG).
> However, I have the entire archives.postgresql.org.
> 
> pgsql-hackers (since its inception in 1997); first line is the date,
> second line is the number of messages.
> 

pgsql-general

1998-05
139
1998-06
337
1998-07
438
1998-08
226
1998-09
187
1998-10
283
1998-11
269
1998-12
242
1999-01
302
1999-02
356
1999-03
385
1999-04
332
1999-05
404
1999-06
470
1999-07
411
1999-08
496
1999-09
385
1999-10
606
1999-11
512
1999-12
631
2000-01
667
2000-02
477
2000-03
219
2000-04
705
2000-05
843
2000-06
803
2000-07
1180
2000-08
861
2000-09
999
2000-10
1337
2000-11
1084
2000-12
1002
2001-01
1700
2001-02
1623
2001-03
1656
2001-04
1568
2001-05
1710
2001-06
1651
2001-07
1342
2001-08
1303
2001-09
1195
2001-10
1223
2001-11
1124
2001-12
901
2002-01
1216
2002-02
1419
2002-03
1388
2002-04
1287
2002-05
1192
2002-06
1366
2002-07
1893
2002-08
1261
2002-09
1438
2002-10
1444
2002-11
1517
2002-12
1225
2003-01
1657
2003-02
1760
2003-03
1597
2003-04
1611
2003-05
1295
2003-06
1951
2003-07
1586
2003-08
1836
2003-09
1880
2003-10
1604
2003-11
1768
2003-12
1664
2004-01
1708
2004-02
1355
2004-03
1215
2004-04
1210
2004-05
965
2004-06
1236
2004-07
973
2004-08
1677
2004-09
1337
2004-10
1579
2004-11
1557
2004-12
1358
2005-01
1877
2005-02
1535
2005-03
1622
2005-04
1460
2005-05
1379
2005-06
1413
2005-07
1332
2005-08
1632
2005-09
1232
2005-10
1945
2005-11
1438
2005-12
1402
2006-01
1743
2006-02
1218
2006-03
1602
2006-04
1372
2006-05
1604
2006-06
1268
2006-07
1170
2006-08
1501
2006-09
1289
2006-10
1588
2006-11
1866
2006-12
1619
2007-01
1953
2007-02
1720
2007-03
1724
2007-04
1304
2007-05
1650
2007-06
1796
2007-07
1257
2007-08
2097
2007-09
1385
2007-10
1722
2007-11
1770
2007-12
1487
2008-01
1621
2008-02
1527
2008-03
1666
2008-04
1446
2008-05
1144
2008-06
1055
2008-07
1251
2008-08
1188
2008-09
1252
2008-10
1485
2008-11
1045 
-- 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres mail list traffic over time

2008-11-20 Thread Joshua D. Drake
On Fri, 2008-11-21 at 00:06 -0500, brian wrote:
> Tom Lane wrote:
> > Bruce Momjian <[EMAIL PROTECTED]> writes:
> >> I am finding the email traffic
> >> almost impossible to continue tracking, so something different is
> >> happening, but it seems it is not volume-related.
> > 
> > Yes, my perception also is that it's getting harder and harder to keep
> > up with the list traffic; so something is happening that a simple
> > volume count doesn't capture.
> 
> Perhaps it's just subjective: we're all getting older.

ouch


> Soon, these pesky whippersnappers will want to twitter their PG 
> questions to this list over YouTube.
> 

I assume you don't realize that is already happening :P

Joshua D. Drake


-- 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] join with redundant results VS simpler join plus multiple selects

2008-11-20 Thread WireSpot
On Thu, Nov 20, 2008 at 20:40, Craig Ringer <[EMAIL PROTECTED]> wrote:
> I'm not even sure how you'd achieve that (exactly 3 randomly selected
> images per user) with a single query. Then again, it's stupidly late
> here, so my brain may not be working. Any chance you can post a query
> that shows what you're doing?

Basically the big join would go like this:

SELECT * FROM gall2ppl gp
JOIN people p ON (gp.gallery_id=N AND gp.gallery_id=p.gallery_id)
JOIN gall2ppl2img gpi ON (gp.gp_id=gpi.gp_id)
JOIN images i ON (gpi.image_id=i.image_id)

Where gp links galleries to people, images are images, and
gall2ppl2img links images to the links between galleries and people.
As you'll notice, I have a gallery_id=N condition, which means I work
with one gallery at a time.

The second scenario would mean doing just the first JOIN, then cycling
through the results and doing the last JOIN for each person.

Getting 3 random images is achieved by slapping an ORDER BY random()
LIMIT 3 at the end of the last JOIN.
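
For concreteness, here is a minimal sketch of that per-person query in
the second scenario, using the table and column names from the join
above ($1 stands for the gp_id of the person currently being processed):

-- three random images for one gallery/person link
SELECT i.*
FROM gall2ppl2img gpi
JOIN images i ON (gpi.image_id = i.image_id)
WHERE gpi.gp_id = $1
ORDER BY random()
LIMIT 3;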

I think this means that, although the big join would be more efficient,
it's not practical, because I also need to limit the number of people
for pagination (which means using LIMIT and OFFSET on the people set),
and I don't see how I can limit both the people set and the image set
in the same join. Not to mention the added headache of ordering first
by some attribute of the people (name or date of joining) and then by
the images with the random() trick. The mixed ordering could probably
be done, but I really don't know about the simultaneous LIMITing. And
in achieving all this the big join may well lose its initial efficiency.
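
To show what I mean by the pagination part, here is a sketch of the
people query on its own (the ORDER BY column and the page size of 20
are just assumptions for illustration):

-- one page of people for gallery N
SELECT gp.gp_id, p.*
FROM gall2ppl gp
JOIN people p ON (gp.gallery_id = N AND gp.gallery_id = p.gallery_id)
ORDER BY p.name
LIMIT 20 OFFSET 0;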

Would it be better if I took the first join and made a view out of it,
then SELECTed from that to get the images? The possible advantage would
be that I wouldn't have to do the last JOIN for the images; it would
all be straight SELECTs.
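
In case a concrete sketch helps, this is the kind of view I mean (the
name gallery_people is made up, and it assumes people has no column
clashing with gp_id):

-- a view over the first join
CREATE VIEW gallery_people AS
SELECT gp.gp_id, p.*
FROM gall2ppl gp
JOIN people p ON (gp.gallery_id = p.gallery_id);

-- the paginated people query from above then becomes
SELECT * FROM gallery_people
WHERE gallery_id = N
ORDER BY name
LIMIT 20 OFFSET 0;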

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] start/stop error message

2008-11-20 Thread Martijn van Oosterhout
On Thu, Nov 20, 2008 at 12:22:24AM -0500, David wrote:
> I am unable to solve the following problem.
> 
> When I start or stop PostgreSQL using pg_ctl (without the -D option) the
> system works fine. No errors.  I have the PGDATA environment variable
> set and it points to my cluster.

What did you do prior to getting into this situation?

Firstly, run pg_lsclusters and see if your cluster is in there. If not
I suggest you explain more clearly what you did. Also, read the
documentation in /usr/share/doc/postgresql-8.2/README.Debian.gz and see
if it puts you on the right track.

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while 
> boarding. Thank you for flying nlogn airlines.




Re: [GENERAL] Prepared statement already exists

2008-11-20 Thread WireSpot
On Thu, Nov 20, 2008 at 19:19, Daniel Verite <[EMAIL PROTECTED]> wrote:
> By the way, why do the prepared statements require to be named at all?
> With other DBMS such as oracle or mysql, one can prepare statements without
> providing any name for them: the prepare() step returns a "statement handle"
> that is to be passed to subsequent exec() calls; no unique name is involved.

Isn't this basically the same thing? Except that instead of you having
to take care of the uniqueness aspect, the language takes care of it
automatically, since each handle becomes a separate variable.

> Currently with pg, using prepared statements more or less implies
> implementing an application-wide policy about naming them; otherwise there
> is always the risk that some code higher up in the stack has a live statement
> with the same name. And what about contributed code or libraries? That would
> be easier if this global namespace for prepared statements didn't exist in
> the first place.

Yeah, but if you wanted to reuse a statement you'd still have to
implement a mechanism in the code. In my case, for example, I'd still
need a way of recognizing the same query, so I'd still resort to a
hashtable with MD5s as keys, only instead of a boolean I'd store
statement handles in there.

So it would eliminate the possibility of clashes, but do nothing for
statement reuse.

What would make it even better is if the database did that last step
for you as well: automatically recognized statements that do the same
thing and returned the already existing handle.

Only then I'd be truly worry-free as far as the code goes; I could
prepare statements left and right knowing that both clashing and reuse
are taken care of by Postgres.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Serial - last value

2008-11-20 Thread Richard Huxton
hendra kusuma wrote:
> 
> Let me get this clear
> it should look like this?
> 
> create function something() returns integer as $$
> declare
>   ret integer;
> begin
>   -- just assume something table has a serial column as primary key
>   insert into something values ('a value');
>   select currval('something_sequence') into ret;
>   return ret;
> end
> 
> $$ language 'plpgsql';

That should work fine, although for that particular case with recent
versions you could just use:

INSERT INTO some_table (id, mytext) VALUES (DEFAULT, 'a value')
RETURNING id;
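
Putting the two together, a minimal sketch of the whole function using
RETURNING directly (the table and column names something, id and mytext
are just assumptions for illustration):

CREATE OR REPLACE FUNCTION something() RETURNS integer AS $$
DECLARE
  ret integer;
BEGIN
  -- let the serial column's default fill in id, and hand it straight back
  INSERT INTO something (mytext) VALUES ('a value')
  RETURNING id INTO ret;
  RETURN ret;
END;
$$ LANGUAGE plpgsql;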

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] transaction isolation level in plpgsql function

2008-11-20 Thread Sergey Moroz
Is there any way to set the transaction isolation level inside a plpgsql
function? In my case I have no control over the transaction before the
function is started.

-- 
Sincerely,
Sergey Moroz