Re: [GENERAL] boolean to int

2004-03-15 Thread Alex Satrapa
Pavel Stehule wrote:
create or replace function int2bool (integer) returns boolean as '
  select case when $1=1 then ''t''::boolean else ''f''::boolean end;
' language sql;
I'd do it slightly differently, if only to cater to the principle of least surprise:

create or replace function int2bool (integer) returns boolean as '
  select case when $1=0 then false else true end;
' language sql;
That way, 0 maps to false and any non-zero value becomes true.

create or replace function bool2int (boolean) returns integer as '
  select case when $1 then 0 else 1 end; ' language sql;
And that's back-to-front ;)

create or replace function bool2int (boolean) returns integer as '
  select case when $1 then 1 else 0 end;
' language sql;
Thanks for the example of the use of casts.
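As an aside, once a conversion function like int2bool exists it can be registered as a cast - this is a sketch from memory, for versions where an integer-to-boolean cast doesn't already exist (check CREATE CAST in the docs for your version):

```sql
-- Sketch: register int2bool as an explicit cast, so that
-- somecolumn::boolean works on integer columns.
CREATE CAST (integer AS boolean)
    WITH FUNCTION int2bool (integer);
```

After that, `select 1::boolean;` should go through int2bool.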

Alex Satrapa

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Foreign Key on Inheriting Table?

2004-02-08 Thread Alex Satrapa
Stephan Szabo wrote:
This still works the same way it has.

If you look through the archives, you should hopefully be able to scrounge
up some of the workaround information that's been sent in the past.
For reference, if anyone finds this thread through Google or whatever, here is the result of some quick Googling on my part. As an aside, if you want to find answers instead of questions in your Google search, add the phrase "Stephan Szabo" - I get the impression that Stephan has the "use a different table for the unique index for the hierarchy" answer in a .sig by now ;)

Workaround 1: Use a separate table for the field (eg: "id") that is going to be used as the foreign key, and have all tables in the hierarchy reference that table: 
  * http://archives.postgresql.org/pgsql-general/2002-06/msg01036.php
  * http://archives.postgresql.org/pgsql-novice/2004-01/msg00149.php

Workaround 2: Create unique indexes on each of the child tables in the hierarchy to 
allow that column on that child table to be used as a foreign key:
  * http://archives.postgresql.org/pgsql-bugs/2000-12/msg00131.php
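To save a trip to the archives, here is my sketch of what Workaround 1 looks like (table and column names are my own invention):

```sql
-- A separate table owns the keys; every table in the hierarchy,
-- and every foreign key elsewhere, points at it rather than at
-- any member of the hierarchy.
CREATE TABLE thing_ids (
    id integer PRIMARY KEY
);
CREATE TABLE things (
    id integer NOT NULL REFERENCES thing_ids (id)
);
CREATE TABLE special_things (
    colour text NOT NULL
) INHERITS (things);
-- Foreign keys reference thing_ids, so they see ids from the
-- whole hierarchy:
CREATE TABLE thing_widgets (
    thing integer REFERENCES thing_ids (id)
);
```

The price is that every insert into the hierarchy also has to insert the id into thing_ids (a trigger can automate that).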
From what I've read, the problem arises because the (unique) index doesn't descend an inheritance hierarchy. This causes two problems - the first being that child tables won't have the "unique" index on their inherited columns. The second problem is that the uniqueness check doesn't apply to data inserted into any tables other than the original parent. I guess that's just two different sides of the same coin - the index doesn't span the hierarchy, therefore the uniqueness doesn't span the hierarchy either. Any uniqueness is specific to a child table *iff* a unique index is specified on that child table.
I am not a PostgreSQL programmer, and I doubt I'll get any time to look at the code in the depth required to propose a solution to this problem, so I'll settle for re-documenting Stephan's suggestion.  How much would it break existing code to have PostgreSQL issue a warning or notice when someone attempts to CREATE (UNIQUE) INDEX on a column that is inherited? At least the programmer would then be alerted to the problem before it bites them.

For the record, this "foreign key not working on tables defined using inherit" issue affects all versions of PostgreSQL up to at least 7.4.1.

HTH
Alex Satrapa


[GENERAL] Foreign Key on Inheriting Table?

2004-02-08 Thread Alex Satrapa
There's some mention in the (old!) documentation that constraints such as foreign keys won't include data from inheriting tables, eg:

 CREATE TABLE foo (
id SERIAL PRIMARY KEY
 );
 CREATE TABLE bar (
attribute integer NOT NULL
 ) INHERITS (foo);
 CREATE TABLE bar_widgets (
bar integer CONSTRAINT bar_exists REFERENCES foo (id)
 );
Now if you populate bar, you can't put anything in bar_widgets, because the foreign key constraint is not satisfied.

Similarly, if you want to have self-referencing items (eg: two points linked together):

 CREATE TABLE anomalies (
id integer PRIMARY KEY,
x integer NOT NULL,
y integer NOT NULL
 );
 CREATE TABLE wormholes (
other_end integer CONSTRAINT has_end REFERENCES wormholes (id)
 ) INHERITS (anomalies);
This won't work because the wormholes table doesn't actually have the id column (the anomalies table has it).

This won't work either:

 CREATE TABLE wormhole_tubes (
left_end integer CONSTRAINT left_exists REFERENCES wormholes (id),
right_end integer CONSTRAINT right_exists REFERENCES wormholes (id)
 );
While I could adjust my code to treat wormholes separately to all other anomalies, I was hoping to take advantage of the object-relational features of PostgreSQL to make my work a little easier.

Does anyone know whether I'm just doing something wrong, or is the old documentation still correct?

Thanks
Alex Satrapa


Re: [GENERAL] SCO Extortion

2004-01-21 Thread Alex Satrapa
Gavin M. Roy wrote:
Feel free to link away :)

Gavin

Adam Ruth wrote:

...Also, if you prefer, I can mirror it on
my server and post that link so as to not chew up your bandwidth.
Just make sure you get someone to mirror it before it hits Slashdot. 
That way it's someone else's server that gets turned to slag.

Alex Satrapa



Re: [GENERAL] Best practice? Web application: single PostgreSQL

2004-01-13 Thread Alex Satrapa
Keith Murphy wrote:
At some point, I may try rolling my own PAM module (as Tom Lane 
suggested) that uses the user's browser-authenticated username and 
password to map to a PostgreSQL username that constitutes a "role" 
(assuming that's possible).
One option is to add an extra layer of indirection: the web server 
interacts with a "transaction server" through eg: XML-RPC or CORBA.

The list of transactions ("interactions") you can perform is controlled, 
each transaction can be logged, and each transaction handler can have 
its own access rights to the postgresql database.

The transaction server can be hardened by only allowing access from the 
web server. This does mean that anyone breaking into your web server can 
potentially alter data by interacting with the transaction server - but 
only to the extent allowed by the existing transactions. They can't make 
wholesale changes to your database such as "select * from 
credit_card_details; truncate invoices; drop table accounts_receivable;" 
etc.

Regards
Alex Satrapa


Re: [GENERAL] sql insert function

2004-01-12 Thread Alex Satrapa
Chris Ochs wrote:
My program starts a transaction, does about 20 inserts, then commits.  When
I replace once of the inserts with a function that does the insert, when I
do the commit I get this message:
WARNING:  there is no transaction in progress

The inserts all commit fine.  Do functions used through DBD::Pg do something
like turn on autocommit after a function is called?
Is your function calling 'commit' itself?  If so, it could be committing 
before your SQL statement issues the 'commit', thus attempting to commit 
a transaction which doesn't exist any more.

DBD::Pg defaults to "AutoCommit" behaviour, unless you explicitly turn 
it off:

   my $dbh = DBI->connect (
  "DBI:Pg:dbname=database", "user" , "password",
  {AutoCommit => 0}
   );
HTH
Alex Satrapa


Re: [GENERAL] Parse error help needed...

2004-01-12 Thread Alex Satrapa
Thapliyal, Deepak wrote:
create function loadme() return text as '
try "RETURNS" instead of "RETURN"

[the guys writing the function parser might want to consider reporting 
what the parser was expecting at this point]

Declare
   s_out text ;
Begin
  For i in 1..1 loop
insert into test values (i,''Test'');
  end loop;
return s_out;
End;
' language 'plpgsql';
The rest looks fine (works for me - yes I tested it this time)

Alex Satrapa



Re: [GENERAL] sql insert function

2004-01-12 Thread Alex Satrapa
Chris Ochs wrote:
Never mind, I forgot to quote the quote's...
Heh... and here I was thinking you were trying to build a function ;)

And I made the same mistake as you... guess I should proofread instead 
of copy-pasting ;)

Alex Satrapa



Re: [GENERAL] sql insert function

2004-01-12 Thread Alex Satrapa
Chris Ochs wrote:
CREATE FUNCTION taxship(varchar,integer,varchar,float,float) returns integer
AS '
insert into taxship(s_oid,order_id,mer_id,tax,shipping) values
('$1',$2,'$3',$4,$5);
SELECT 1;
' LANGUAGE SQL;
try

CREATE FUNCTION taxship (varchar,integer,varchar,float,float) RETURNS 
integer AS '
BEGIN
  insert into taxship(s_oid,order_id,mer_id,tax,shipping)
values ($1,$2,$3,$4,$5);
  return 1;
END' LANGUAGE 'plpgsql';

since what you are trying to do is a compound statement. (Don't wrap the parameters in quotes, though - in plpgsql they are already typed values, and ''$1'' would insert the literal string $1.)
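Alternatively, a SQL-language function can also contain several statements, as long as the inner quotes are doubled and the parameters left unquoted - something like this sketch:

```sql
CREATE FUNCTION taxship (varchar,integer,varchar,float,float)
RETURNS integer AS '
  insert into taxship (s_oid, order_id, mer_id, tax, shipping)
    values ($1, $2, $3, $4, $5);
  select 1;
' LANGUAGE SQL;
```

The result of the last statement is what the function returns.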





Re: [GENERAL] Any commercial shopping cart packages using postgresql?

2003-12-16 Thread Alex Satrapa
Dann Corbit wrote:
But as for the DEFAULT '' NOT NULL entries, CODD and Date eventually
decided that NULL data was a big mistake.
While SQL programmers are used to it, most end users with slim SQL
familiarity will be pretty shocked when:
And so, too, will many people with little or no understanding of internal combustion engines be surprised when their diesel engine explodes after putting high-octane unleaded fuel into it...

I tend to agree that every column should have a default and not be
allowed to become NULL.  Just to keep end-user astonishment at a
minimum.
The idea of NOT NULL is to make sure that *valid* stuff is put in. Most 
of these columns are set to absolutely meaningless defaults, ensuring 
that your database not only lacks referential integrity, but contains 
nothing of value either!

If I had enough hair left, I'd be pulling it out right about now ;)

Alex



Re: [GENERAL] Any commercial shopping cart packages using postgresql?

2003-12-16 Thread Alex Satrapa
Alex Satrapa wrote:
I'm currently exploring the Zelerate AllCommerce system 
http://allcommerce.sourceforge.net
Ick... this product is *so* the poster child of the MySQL generation:

#
# Table structure for table 'addresses'
#
CREATE TABLE addresses (
  objid varchar(20) DEFAULT '' NOT NULL,
  objclass varchar(20) DEFAULT '' NOT NULL,
  objtype varchar(20) DEFAULT '' NOT NULL,
  ...
  PRIMARY KEY (objid)
);
No foreign keys! Look at all those "DEFAULT '' NOT NULL" columns! What 
are they thinking?
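For contrast, here is a sketch of what a more relational version of the same table might look like (my guesses at the intent - a real foreign key, and NULL where "not yet known" is actually meaningful):

```sql
CREATE TABLE objects (
    objid varchar(20) PRIMARY KEY
);
CREATE TABLE addresses (
    objid    varchar(20) NOT NULL REFERENCES objects (objid),
    objclass varchar(20),   -- NULL = not yet classified, rather than ''
    objtype  varchar(20),
    PRIMARY KEY (objid)
);
```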

As penance for suggesting this product, I will clean up the SQL and at 
least post my experiences with installing and using this product on 
PostgreSQL.



Re: [GENERAL] Any commercial shopping cart packages using postgresql?

2003-12-10 Thread Alex Satrapa
Mike Nolan wrote:
I may need to move a web store to another hosting facility or rewrite it.
I'm currently exploring the Zelerate AllCommerce system 
http://allcommerce.sourceforge.net

I haven't even reached the stage of installing it yet. It's designed to 
run out-of-the-box on MySQL, so I'm a little wary of that (I'm still 
recovering from reading that MySQL Gotchas page). So this is just a 
pointer to something that exists.

HTH
Alex


Re: [GENERAL] PostgreSQL Advocacy, Thoughts and Comments

2003-12-04 Thread Alex Satrapa
Chris Travers wrote:
  Here is a paper I have written for the purposes of providing some
  additional educational material for the MySQL crowd.
Here's my contribution:

Why I choose PostgreSQL (PostgreSQL in 21 Seconds)

I choose referential integrity, meaning my lookups always work.

I choose stored procedures, meaning all my developers - Windows or Unix, 
Perl, C++ or Java - can access the database in the same way, using the 
same locking, with the same checking and cleaning.

I choose subselects and outer joins, which allow me to build complex 
queries to get exactly the information I want from the database, rather 
than wasting my time munging data in my code. Even better, I can put 
those common queries into stored procedures, so other developers can get 
the same results as I do!

I choose partial indexes, so lookups on NULL fields are just as fast if 
not faster.
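A partial index, for example, can cover just the rows a query cares about (names here are made up):

```sql
-- Index only unshipped orders; a query such as
--   SELECT * FROM orders WHERE shipped_at IS NULL AND customer_id = 42;
-- can use this small index instead of scanning everything.
CREATE INDEX unshipped_orders_idx
    ON orders (customer_id)
    WHERE shipped_at IS NULL;
```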

I choose a user community that believes getting the results right is 
more important than getting them quickly.

I choose getting the right results, right now!

I choose funny capitalisation, and a name that can't be pronounced!

I choose PostgreSQL.



Re: [GENERAL] Pronouncing PostgreSQL

2003-12-03 Thread Alex Satrapa
John Wells wrote:
Ok...this has irritated me for sometime.  I've seen reference to dividing
it up into proper syllables in the mailing archives, but couldn't find
pronunciation of the "gres" part...
FOLDOC (Free Online Dictionary of Computing) says:

PostgreSQL
/'post-gres-kyu-el/ An enhancement of the
   POSTGRES database system.
Though I tend to pronounce that last syllable as "ell" not "el".  The 
difference is one of those trivial matters which Alexander Pope observed 
are responsible for starting the greatest conflicts.

"you will go to *Dell* if you don't print my *label*".

Alex



Re: [GENERAL] DBD::Pg problem

2003-12-03 Thread Alex Satrapa
Ausrack Webmaster wrote:
The thing is...I am not. I am inserting it into a varchar field.
Are there any single quotes in the message body?  They will wreak havoc 
with the rest of the query. And why are you putting single quotes around 
'$parent'?

What happens if you move the '$body' to the end:

$sql="insert into it_contact (email, to_email, 
subject,modify,parent,details) values(?,?,?,now(),$parent,'$body')";

Alex



Re: [GENERAL] Undropping a column?

2003-12-01 Thread Alex Satrapa
Tom Lane wrote:
Greg Stark <[EMAIL PROTECTED]> writes:

I just dropped a column that I wish I hadn't. Is there some simple update i
could do to the pg_* tables that would undrop it?
You could rsync the database back from your live off-site backup, or 
rifle through the piles of backups to recover from last night's archive 
backup :)

Another alternative, if it's just the schema not the data you need back, 
is simply to "alter table add column ..." the column back.  Note that if 
you want it to be "not null" or "default x" you have to add the column, 
update the table with the default value in each column, alter the column 
to be not null then alter the column to be default x.
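In other words, something like this sketch (hypothetical table and column):

```sql
-- Re-create a dropped column that should be NOT NULL DEFAULT 'beige':
ALTER TABLE widgets ADD COLUMN colour text;
UPDATE widgets SET colour = 'beige';             -- backfill existing rows
ALTER TABLE widgets ALTER COLUMN colour SET NOT NULL;
ALTER TABLE widgets ALTER COLUMN colour SET DEFAULT 'beige';
```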

Though this probably isn't what you were looking for, it's still on the 
archives for someone else to stumble across c/o Google

Alex





Re: [GENERAL] disaster recovery

2003-11-26 Thread Alex Satrapa
Jason Tesser wrote:
We are evaluating Postgres and would like some input about disaster recovery.
I'm going to try to communicate what I understand, and other list 
members can correct me at their selected level of vehemence :)
Please send corrections to the list - I may take days to post follow-ups.

> I know in MsSQL they have a feature called transactional
logs that would enable a database to be put back together based off those logs.
A roughly parallel concept in PostgreSQL (what's the correct 
capitalisation and spelling?) is the "Write Ahead Log" (WAL). There is 
also a quite dissimilar concept called the query log - which is good to 
inspect for common queries to allow database tuning, but is not replay-able.

The theory is that given a PostgreSQL database and the respective WAL, 
you can recreate the database to the time that the last entry of the WAL 
was written to disk.

Some caveats though:
1) Under Linux, if you have the file system containing the WAL mounted 
with asynchronous writes, "all bets are off". The *BSD crowd (that I 
know of) take great pleasure in constantly reminding me that if the 
power fails, my file system will be in an indeterminate state - things 
could be half-written all over the file system.
2) If you're using IDE drives, under any operating system, and have write-caching turned on in the IDE drives themselves, again "all bets are off".
3) If you're using IDE drives behind a RAID controller, YMMV.

So to play things safe, one recommendation to ensure database robustness 
is to:
1) Store the WAL on a separate physical drive
2) Under Linux, mount that file system with synchronous writes (ie: 
fsync won't return until the data is actually, really, written to the 
interface)
3) If using IDE drives, turn off write caching on the WAL volume so that 
you know data is actually written to disk when the drive claims it is.

Note that disabling write caching will impact write performance 
significantly. Most people *want* write caching turned on for 
throughput-critical file systems, and turned off for mission-critical 
file systems.

Note too that SCSI systems tend to have no "write cache" as such, since 
they use "tagged command queues". The OS can say to the SCSI drive 
something that is effectively, "here are 15 blocks of data to write to 
disk, get back to me when the last one is actually written to the 
media", and continue on its way.  On IDE, the OS can only have one 
command outstanding - the purpose of the write cache is to allow 
multiple commands to be received and "acknowledged" before any data is 
actually written to the media.

When the host is correctly configured, you can recover a PostgreSQL 
database from a hardware failure by recovering the database file itself 
and "replaying" the WAL to that database.

Read more about WAL here: 
http://www.postgresql.org/docs/current/static/wal.html

Regards
Alex
PS: Please send corrections to the list
PPS: Don't forget to include "fire drills" as part of your disaster 
recovery plan - get plenty of practice at recovering a database from a 
crashed machine so that you don't make mistakes when the time comes that 
you actually need to do it!
PPPS: And follow your own advice ;)



Re: [GENERAL] Where is Postgesql ? - MYSQL SURPRISES WITH MAXDB /

2003-11-24 Thread Alex Satrapa
Randolf Richardson wrote:
	From my perspective MySQL and PostgreSQL are completely different 
projects (for starters they even use different licensing schemes -- BSD v. 
GPL) that are in competition.  Since PostgreSQL stands on its own without 
any dependencies on MySQL, I don't see any reason why PostgreSQL would 
suffer in any way if MySQL came to an end;
The headlines that "Jack and Jill Wall Street" will be reading in the 
nightmarish outcome of "something bad" happening to MySQL will not be 
"MySQL.com Fails To Migrate To SAPDB", it will be "Open Source Database 
Disaster".  The article will spin the story into rhetoric about how this 
team of hackers failed to accomplish the simple task of taking code from 
an ancient mainframe database system and making it work with the 
implication woven in that if open source/free software programmers can't 
even get old software to work, how can they get new software to work?

Jack and Jill Wall Street only read the headlines*. They rarely if ever 
read the first paragraph, and only the smallest fraction of them read 
the whole article.  Out of that minuscule fraction, the ones who actually do any groundwork of their own to figure out what the newspaper was talking about are yourselves and myself.

Combine that with the newspapers' habit of writing headlines to sell 
newspapers, rather than tell the truth, and you can see where any 
undesirable outcome in an open source project will lead. Especially with 
"open source is bad" being topic-du-jour what with SCO vs World going on 
right now.

Anyone who actually works with computers (as opposed to accomplishes 
their work using computers) is more likely to understand that there's 
more to any problem or statement than just the words. If we heard a 
story about "Is Your Car And Incinerator On Wheels?", we'd be more 
likely to think, "what are they on about?" - Jack and Jill Wall Street 
will be thinking, "what if my car *is* an incinerator on wheels?".

Once you come to the realisation that people are not inherently good or 
evil, they are just (as a whole) inherently stupid**, the whole world 
starts to make a lot more sense.

Alex

* I'm going to cop out here and say "I forget where these figures come 
from" - but in truth, it's just that Google can't remember for me ;) 
Check out how most people recommend teaching kids to read newspapers, 
and you'll find out why people develop bad habits like forming opinions 
from headlines!

** In any sufficiently large group of people, the average intelligence 
tends towards the minimum. I can't remember whose axiom that was... but 
it refers (more or less) to the fact that in a group, people want to 
conform, and conformity means not asking questions, which means 
believing whatever you're told, which means that you're being stupid.



Re: [GENERAL] Humor me: Postgresql vs. MySql (esp. licensing)

2003-11-19 Thread Alex Satrapa
Randolf Richardson, DevNet SysOp 29 wrote:
[sNip]

Do a shutdown -h on a live database machine with pg. It will gracefully
shut itself down.
Is that true for all OS flavors and is it dependent upon the DBA having
set up proper shutdown scripts?
[sNip]

	When I tested this on PostgreSQL on Novell NetWare 6 it shut down 
gracefully.  I don't see why this would be different on other Operating 
Systems since the code base is essentially the same (isn't it?).
No, it's not. Don't confuse the PostgreSQL code base with the operating 
system it's running on.

On Mac OS X (desktop version, at least) there are no shutdown scripts. 
All running applications are simply sent the "TERM" signal, then later 
sent the "KILL" signal. Luckily enough, PostgreSQL seems to respond to 
TERM by shutting down gracefully.

Totally off topic, but this lack of shutdown scripts, along with a lack 
of proper package management are the two most painful faults in Mac OS X.

Alex





[GENERAL] Query Planner Filtering Of Specified Value From 'On Distinct' Clause

2003-11-18 Thread Alex Satrapa
I have a table from which I'm trying to extract certain information. For 
historical reasons, we archive every action on a particular thing 
('thing' is identified, funnily enough, by 'id'). So the only way to 
find out the current state of a particular combination of attributes is 
to "select distinct on (id, ...) ... order by date desc".

In the examples below, I've taken real output from psql and done a 
global search/replace on various names to convince myself that I'm not 
leaking information. The numbers are intact, just the column/table/view 
names are different.

For sake of argument, here is the table structure:
database=> \d source
 Table "public.source"
 Column |            Type             |   Modifiers
--------+-----------------------------+---------------
 id     | integer                     | not null
 date   | timestamp without time zone | default 'now'
 second | integer                     | not null
 third  | text                        | not null
Indexes: source btree (id)
This table contains about 98000 rows. Let's create a view into the 
"source" table:

  database=> create view myview as select distinct
  on (id, second, third) *
  from source
  order by id, second, third, date desc;
  CREATE VIEW
Now let's look at the query plan for "select * from myview":

  database=> explain analyze select * from myview;
QUERY PLAN
  --
  Subquery Scan myview  (cost=12483.55..13463.96 rows=9804 width=63)
  (actual time=1262.08..1450.68 rows=21089 loops=1)
 ->  Unique  (cost=12483.55..13463.96 rows=9804 width=63)
  (actual time=1262.06..1406.71 rows=21089 loops=1)
   ->  Sort  (cost=12483.55..12728.65 rows=98041 width=63)
  (actual time=1262.06..1339.34 rows=98063 loops=1)
Sort Key: source.id, source.second, source."third",
  source.date
 ->  Seq Scan on source  (cost=0.00..2247.41 rows=98041
  width=63) (actual time=0.02..269.07 rows=98063 loops=1)
  Total runtime: 1467.78 msec
  (6 rows)
I'm not sure if there's really anything I can do to make that go faster. 
But look what happens when I specify a particular value for one of the 
fields in the "on distinct" clause:

  database=> explain analyze select * from myview where id=12345;
QUERY PLAN
  --
  Subquery Scan myview  (cost=12483.55..13463.96 rows=9804 width=63)
  (actual time=1446.58..1446.81 rows=2 loops=1)
  Filter: (id = 12345)
->  Unique  (cost=12483.55..13463.96 rows=9804 width=63)
  (actual time=1258.87..1434.39 rows=21089 loops=1)
  ->  Sort  (cost=12483.55..12728.65 rows=98041 width=63)
  (actual time=1258.87..1336.89 rows=98063 loops=1)
Sort Key: source.id, source.second, source."third",
  source.date
->  Seq Scan on source  (cost=0.00..2247.41 rows=98041 width=63)
  (actual time=0.02..268.54 rows=98063 loops=1)
  Total runtime: 1457.53 msec
  (7 rows)
Hmm.. not as efficient as I would have liked. The planner happily 
executed the whole view, and only then filtered out just the bits that I 
wanted. The most expensive parts of the query were done on the 
unfiltered set. The result set is only 0.002% of the whole data set, so 
I was a little shocked that it took *longer* to generate the results I wanted than it took to create the unfiltered results!

Is there any reason why it would not be sensible to push the filter down 
to just after the first sequential scan of source? Since "id" is one of 
the fields in the "on distinct" condition, doesn't it make sense that 
pushing the filter further down would reduce the amount of work done to 
get exactly the same result?  The same would be true if I was to filter 
on a specific "second" or "third" value.

I understand that filtering on columns not present in the "on distinct" 
clause would have drastically different effects when done before or 
after the "Unique" processing. However, since the column(s) I'm 
filtering on are only those present in the "on distinct" clause, it 
makes no difference whether the filter is before or after the 
"Unique"-ifying.
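For what it's worth, the workaround I'm using in the meantime is to skip the view and put the filter inside the DISTINCT ON query itself:

```sql
SELECT DISTINCT ON (id, second, third) *
  FROM source
 WHERE id = 12345
 ORDER BY id, second, third, date DESC;
```

That way the sort and Unique steps only see the rows matching the filter.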

Am I totally stark, raving mad?
Alex Satrapa


Re: [GENERAL] offset for query result ??

2003-11-17 Thread Alex Satrapa
Kathy Zhu wrote:
I know we have LIMIT to limit the number of rows returned, I wonder if there is
a way to indicate an offset.

Select * from Test offset 10, limit 4;
As per the PostgreSQL documentation, specifically the page on the 
"SELECT" SQL command:

LIMIT Clause

   LIMIT { count | ALL }
OFFSET start
 where count specifies the maximum number of rows to return, and start 
specifies the number of rows to skip before starting to return rows.



so the query you want is:

SELECT * FROM Test LIMIT 4 OFFSET 10;
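For example, paging through four rows at a time (assuming Test has an id column - and note the ORDER BY: without it the row order, and hence the contents of each page, is not guaranteed):

```sql
SELECT * FROM Test ORDER BY id LIMIT 4 OFFSET 0;   -- first page
SELECT * FROM Test ORDER BY id LIMIT 4 OFFSET 4;   -- second page
```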

Alex
