Re: [GENERAL] Performance Tuning, hardware-wise

2001-01-02 Thread Gordan Bobic

 As for the drive in that machine, doing inserts on it was SLOW. Slower
 even than on our beater development machine.  I suppose I could have
 fiddled with hdparm to increase the disk I/O, but that would have been
 a temporary fix at best. Our CGI applications were eating lots of CPU
 time, and we just needed more processors.

If you needed more CPU power, then that's fair. However, bear in mind that
hdparm is NOT necessarily just a TEMPORARY fix. Using DMA modes can HALVE
the CPU utilization required for heavy disk I/O. It is also not uncommon
to quadruple disk bandwidth by tuning the disks with hdparm. I have seen
tens, if not hundreds of Linux machines, and I have yet to see one that
didn't benefit greatly from using hdparm to tune up disk transfers. On my
database development server, CPU consumption under a heavy
SELECT/INSERT/UPDATE workload went from around 60% to around 40%, and
hdparm -t -T reported an increase from 4 MB/s on both buffer-cache and
buffered-disk transfers to about 50 MB/s and 20 MB/s respectively. It
takes a bit of work to get right, but if you know your disk and
motherboard specs, and don't feel like adventuring into overclocking and
overtuning, you are pretty much guaranteed success on the first try. Then
you just have to add the tune-up to your startup scripts, preferably
before swap is enabled: in my experience, in a very limited number of
cases, retuning the disk after swap has been started can cause some
minor, random stability problems.
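
For illustration, a typical tune-up might look like this - a minimal
sketch only, assuming an IDE disk at /dev/hda whose drive and chipset
both support DMA; the right flags depend entirely on your hardware, so
check man hdparm before copying any of it:

# Enable DMA, 32-bit I/O and IRQ unmasking (run as root):
/sbin/hdparm -d1 -c1 -u1 /dev/hda
# Verify the result: buffer-cache and buffered-disk read timings:
/sbin/hdparm -T -t /dev/hda

The first command is the one to put in a startup script that runs before
swapon, as described above.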

  So, we moved to a large database machine to service a group of web
 servers.  The database machine has four Xeon 700's in it, with 512 megs
 of RAM.  For the disk, we did decide to go with a RAID array, for disk
 redundancy as much as efficiency.  The fast disk array makes inserts go
 tremendously fast, but for selects, well, it's still the same - you're
 limited by CPU and memory.   Why not disk I/O?  Well, the entire dataset
 is in the disk cache anyway, so there's very little disk I/O involved
 with selects.

Glad to hear that this worked for you, but some of us have a setup where
you have to randomly query an 8+ GB database. Having it all in cache just
isn't going to happen in that case, and lots of disk I/O bandwidth is the
only feasible answer (I don't consider an 8 GB RAM disk to be a feasible
option).

  Our data directory is 340 megs, and the machine has 512 megs of RAM,
 and Postgres is just too efficient with RAM.   ; )  I start up the
 postmaster with 3072 buffer blocks, and yesterday increased the amount
 of memory each postmaster can use for sorting from the default of 4 megs
 to 32 megs, which did give a small speed increase, but the machine
 *still* has at least 120 megs completely unused at all times.  Maybe
 I'll give each backend more RAM for sorting. : )

You're lucky that you have a comparatively small data set. My data set is
about 20 times as big, and we were starting to experience crashes (well,
things would abort rather than crash, as they are well behaved, but
still) because the machine would blow through its 256 MB of RAM and 256
MB of swap. And it would grind to a halt long before that... 512 MB made
a world of difference... But as with all different applications, YMMV.
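
(For reference, the two settings quoted above map onto postmaster
switches. A minimal sketch, with the data directory path being an
assumption:

postmaster -D /usr/local/pgsql/data -B 3072 -o "-S 32768"

-B sets the number of shared buffer blocks, while -S is a backend option
- hence passed through -o - that sets the per-sort memory limit in
kilobytes, so 32768 corresponds to the 32 megs mentioned above.)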

   Under Linux, RAM disks aren't much faster/better than regular disks,
 from the benchmarks I've seen.

I can second this. Linux's caching seems to be too efficient for a RAM
disk to yield a vast improvement in a real-world application... I don't
know about benchmarking, though, as IMHO and in my experience, benchmark
numbers are often meaningless when used to assess a real-world situation.

 Besides, then a power outage can really hose
 you. : )

Well, that is entirely true, but technically, you ought to have a UPS if
you have a mission critical system. I recently had a complete disk's
worth of data hosed due to a power failure, as something went wrong and
the root inode got corrupted. The usefulness of backups is difficult to
overestimate...

HTH.

Gordan




[GENERAL] pg_dump/psql db.out issue

2001-01-02 Thread Michael Davis

I have several tables with text fields that contain single quotes (').
pg_dump exports these tables and the single quotes (') okay.  psql,
however, will not import the data into the tables because of the single
quote (') in one of the columns in one of the records.  Any idea how to
work around this other than using the -d/-D option in pg_dump?  I like
the -d option in pg_dump; however, importing the data via psql is much
slower with this option.

Thanks in advance, Michael Davis





Re: [GENERAL] MySQL and PostgreSQL speed compare

2001-01-02 Thread Gordan Bobic

  Actually, if he ran Postgresql with WAL enabled, fsync shouldn't
  make much of a difference.

 WAL seems to be enabled by default. What WAL is good for I do not know.
 But if I start PostgreSQL without the -S I see a lot of info about WAL
 this and WAL that.

You seem to be too hung up on defaults. I am not into advocacy, and
whatever database works better for you is the right one to use. However,
using the defaults as the basis for benchmarking is intrinsically flawed.
It ultimately depends on what the person who set up the distribution felt
like at the time of creating the packages. There may be guidelines which
err on the side of caution, to the point of paranoia. All these are quite
common. If you are serious enough about using a database to run into
bottlenecks of whatever sort, then you should also be serious enough to
RTFM and find out about tuning the database for a particular application
(I consider a benchmark to be an application in this case) before you
benchmark it. Posting results of a benchmark on a default installation
proves nothing at all.

 ...
  But isn't it recommended to run the server with fsync?  If so,
  you shouldn't disable it on a benchmark then.

 I run both MySQL and PostgreSQL as they are (minimum switches, no
 tuning, as default as it can be). That is, MySQL as the .rpm installed
 it (--datadir --pid-file --skip-locking) and PostgreSQL with -i -S -D.
 That's the way most people would be running them anyway. And default
 should be good enough for this test (simple queries, few rows (max
 1000) per table).

There you go with defaults again. And I'm afraid that your argument
"That's the way most people would be running them anyway" is also flawed
in the same way. People serious enough about using a database in a
sufficiently heavy environment to run up against speed problems would be
serious enough about reading up on the software they are using to find
out how to tune it for their application.

Is this some kind of Windows-induced dementia? Use everything as it was
installed, and expect it to always work in the best possible way for your
particular application? Use everything the way it was installed because
"users are too thick to play with the settings"? What about sysops? Would
you really want your business's mission-critical server to be operated by
someone who cannot even be bothered to read the documentation for the
software he is installing in sufficient depth to find out about things
like tuning?

The problem here is not the lack of knowledge - it is the resistance to the
concept of learning about something before judging it. Can you see what is
wrong with that approach?

 ...
 Well I expected MySQL to be the faster one, but this much.
 ...
   To me, all this is pointing toward the possibility that you haven't
   switched off fsync. This will make a MASSIVE difference to
   insert/update

 The idea was to run as recommended and as default as possible. But with
 the latest (alpha/beta/development) code.

Latest code doesn't matter in this case. If you are running a benchmark,
here are the things you should be considering if you are being serious
about measuring real-world performance AND usefulness.

1) Never benchmark pre-releases. Always use the latest RELEASE version,
with all the required stability/bugfix patches installed.
2) Always tune the software and hardware for the particular benchmark.
This will allow you to assess the ability of the software/hardware to
adapt to a specific application.
3) If you are testing pre-release versions, you should ALWAYS take the
results with a pinch of salt. Pre-releases are not necessarily stable
(although they often are), and they are often set up to allow for easier
bug tracking and reliability testing, rather than pure speed measuring.
4) ALWAYS contact the developers of the software before publishing the
results. They will give you useful hints on how to optimize things.
5) Default installations are usually completely meaningless for
benchmarking purposes.

 ...
   And in case you cannot be bothered, add the "-o -F" parameters (IIRC)
   to
 ...
   flushes its disk cache buffers after every query. This should even
   things out quite a lot.

 I'll test that. Even though it feels like tweaking PostgreSQL away from
 what is considered safe by the PostgreSQL developers. If it were safe,
 it would be the default.

OK, I am not a PostgreSQL developer (not quite yet, anyway), so they should
comment on this from their point of view.

However, if you are benchmarking speed, then tune the setup for speed. That
is what you are measuring, right? If you are testing something for
reliability and torture-proof features, then tune the setup for that. Not
tuning the system for the application is like using a sledgehammer to
unscrew a bolt. There is such a thing as the correct tool for the task!
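
To make that concrete: the -F switch tells the backends not to fsync,
and it is passed through the postmaster's -o option. A minimal sketch,
with the data directory path being an assumption:

postmaster -D /usr/local/pgsql/data -o -F

Just be aware of the trade-off: with fsync off, an OS crash or power
loss can leave the database in an inconsistent state.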

  Sir, thanks for sharing this with us. However, unless you can explain
  why queries inside of transactions run faster than 

Re: SV: [GENERAL] MySQL and PostgreSQL speed compare

2001-01-02 Thread Gordan Bobic

[tuning analogies snipped]

 Likewise with self-proclaimed computer tuners.

You have no idea how much I agree with you there.

  I really don't understand why people expect computers to do everything
  for them, the burden of using tools properly belongs to the user.

 I of course agree in principle to this statement (and the whole tone of
 your reply) -- but, my statement doesn't reflect my opinion -- it
 reflects reality.  Facts are stubborn things.

 Of course the defaults will never be perfect -- nor will all users RTM.
 But, therein lies the utility of 'defaults' benchmarking -- let's see
 what the trade offs really are so that we the 'experts' can
 intelligently recommend things -- as well as intelligently tweak the
 defaults.

Unfortunately, there are two ways you can load the defaults. You can strip
them down for maximum speed, or you can load them up for maximum
reliability. The latter will make the users complain about speed on the
support lists in a generally annoyed fashion.

Stripping things down for speed, OTOH, will work great - until someone
gets a power cut, or some other kind of weird hardware failure that
wipes out their data. Then they will come back again and complain.

And the answer is always to simply spend an hour or so reading the
documentation...

Some people, eh...

Regards.

Gordan




Re: [GENERAL] How passwords can be crypted in postgres?

2001-01-02 Thread Gordan Bobic

 I usually just run 'crypt()' on the clear text before storing it to the
 backend ...

Isn't this just as bad? If you store the encrypted password, that doesn't
help you in the slightest in this case, because if you can breach the list
of encrypted passwords, you still know what you need to send as the
"password" from the front end to let you into the database.

Unless I am missing something here, doing this doesn't make any
difference... Not for someone serious about breaching security, anyway...

Regards.

Gordan




Re: SV: [GENERAL] MySQL and PostgreSQL speed compare

2001-01-02 Thread Gordan Bobic

 Advanced tools do have advanced safety features, but are sold "ready
 for most use", not "safely disabled until you read all of the manuals
 so you can figure out how to make it work decently". I agree that
 reading the manuals is an important part of learning a new tool,
 but it shouldn't be *required* to make it work for basic use.

It isn't *required*. It works lovely the way it is shipped. But if you want
more speed, you should go and read the manual before complaining. It is not
crippled in any way - just tuned on the side of caution. It STILL works
well for MOST users who just want something to work, rather than ultimate
speed or reliability. It is up to the user to decide what is more important
for their particular application, and what is more appropriate given their
setup and budget.

 Users shouldn't have to know how to tune the fuel injection system
 for *optimum* performance in order to take a car for a test drive
 on a fast roadway.

No, they shouldn't. However, for THOSE users, the more appropriate way of
solving the problem would be to buy faster hardware - this is the analogy
you are following, right? If you want to drive faster than the car will let
you, buy a faster car, right?

 Computer software is, indeed, a tool which does not do everything
 for you. But is should come "from the factory" setup for the way
 a user would expect it to run, not partially disabled for maximum
 safety.

It is not "disabled" in any way. It works very well, for a vast majority of
uses. If you are setting up a web site, which you want people to see, then
you should consider yourself serious enough to read the documentation. If
you are intending to stake the future of your business on a server, then
exactly what are you thinking if you still refuse to RTFM?

 It's a power tool, and it can "hurt" if misused. If that's
 too much responsibility for a bad user, it won't matter how safely
 it's been tuned at the factory, the bad user will *still* modify it
 in unsafe ways, and often tune it or use it the wrong way, damaging
 the tool in the process.

There is a valid point in there somewhere. However, there is nothing
wrong with erring on the side of caution. All the functionality is there
- but if you need more speed, all it takes is reading through the
archives for an hour or so, and you will find all the answers you need.

 I don't expect my software to come optimized for my use. I expect
 it to come optimized for the most users and uses, not "dumbed down"
 for the worst case, or "safely disabled" for the worst users.

Why? What's your reasoning behind that? If all the functionality is
there, and the only penalty is speed, which is still adequate for most
uses, what is the problem? If you are happy with tuning things up for
your particular application, then the chances are that you will go
through the tuning process yourself regardless of how it is shipped. All
a slightly slower default will do is encourage you to read the docs that
little bit sooner, if your system becomes large enough for this to be an
issue.

Regards.

Gordan




Re: [GENERAL] How passwords can be crypted in postgres?

2001-01-02 Thread Jens Hartwig

  [...]
 Isn't this just as bad? If you store the encrypted password, that doesn't
 help you in the slightest in this case, because if you can breach the list
 of encrypted passwords, you still know what you need to send as the
 "password" from the front end to let you into the database.
  [...]

If you encrypt the input from the frontend as well and compare the
encrypted strings it will not help you to look into the list of
encrypted passwords ... or am I wrong?

Regards, Jens Hartwig

=
Jens Hartwig
-
debis Systemhaus GEI mbH
10875 Berlin
Tel. : +49 (0)30 2554-3282
Fax  : +49 (0)30 2554-3187
Mobil: +49 (0)170 167-2648
E-Mail   : [EMAIL PROTECTED]
=



[GENERAL] System Tables Questions

2001-01-02 Thread Colin Taylor

Hi,
Anyone know where I can get a decent description of the system tables and
their contents?
Thanks in advance,
Colin



Re: [GENERAL] How passwords can be crypted in postgres?

2001-01-02 Thread Gordan Bobic

   [...]
  Isn't this just as bad? If you store the encrypted password, that
  doesn't help you in the slightest in this case, because if you can
  breach the list of encrypted passwords, you still know what you need
  to send as the "password" from the front end to let you into the
  database.
   [...]

 If you encrypt the input from the frontend as well and compare the
 encrypted strings it will not help you to look into the list of
 encrypted passwords ... or am I wrong?

What problem are you trying to defeat? If you are worried about
"sniffing" passwords from packets in transit, then regardless of whether
the password field carries a plain text password or scrambled garbage, if
you know where the password field is, you can sniff it. If you are simply
using this for authentication, then it doesn't matter whether the
password is encrypted or not. You are still, effectively, transmitting a
"password string" that is used for authentication.

The security of passwords, encrypted or otherwise, rests purely on the
security of the database server that stores them.

Does that make sense?

Regards.

Gordan




[GENERAL] Hierarchical queries in pgsql

2001-01-02 Thread Betsemes

Does pgsql feature an equivalent to Oracle's hierarchical queries - those
that can be done with the "start with" and "connect by" clauses? Thanks.

Efrain Caro






Re: [GENERAL] How passwords can be crypted in postgres?

2001-01-02 Thread Karl DeBisschop

Jens Hartwig wrote:
 
   [...]
  Isn't this just as bad? If you store the encrypted password, that doesn't
  help you in the slightest in this case, because if you can breach the list
  of encrypted passwords, you still know what you need to send as the
  "password" from the front end to let you into the database.
   [...]
 
 If you encrypt the input from the frontend as well and compare the
 encrypted strings it will not help you to look into the list of
 encrypted passwords ... or am I wrong?

Slightly wrong -- you need to fetch the salt from the database first.

But even so, if you then transmit this ENCRYPTED password, it can be 
sniffed, and the results of that sniff are all that are needed to 
access the system.

-- 
Karl DeBisschop  [EMAIL PROTECTED]
Learning Network/Information Please  http://www.infoplease.com
Netsaint Plugin Developer[EMAIL PROTECTED]



[GENERAL] Possible off-topic

2001-01-02 Thread Tulio Oliveira


Hi, ALL

I just installed:

Postgres 7.0.2
Apache 1.3.14
PHP4
Over RedHat Linux 6.2
On a Notebook Toshiba Satellite Pro

It's working fine, but I have 2 problems.

1) The arrow keys (up, down, left, right) don't work at the psql prompt.
Instead, these keys produce ^[[A, ^[[B, ^[[C and ^[[D.
And when I access the notebook from another machine via TELNET, the
BACKSPACE key doesn't work either (^H instead).
It's very difficult to edit tables and manage the database without these
keys.
Does anyone know of a configuration setting to fix this?


2) When I start up the X system and run Netscape from the same notebook,
the time to make the CONNECTION to Postgres is very long: more than 30
seconds.
But when I access the notebook from another machine, the connection is
fast.
Does anyone know if the X system conflicts with the POSTMASTER?
I need to make some presentations with this notebook.


Regards,

Tulio Oliveira




-- 
==
AKACIA TECNOLOGIA
Desenvolvimento de sistemas para Internet
www.akacia.com.br



[GENERAL] Re: Possible off-topic

2001-01-02 Thread François LODIER



Hi all,

happy new year,

sorry to disturb you, but I am trying to create a trigger and it doesn't
seem to be working.

I have a function called helloworld that takes one argument and I am
trying to trigger it:

so I write something like:

CREATE TRIGGER helloworld_trigger AFTER INSERT
 ON data FOR EACH ROW
 EXECUTE PROCEDURE helloworld ( 'login' )

as I have a field named login in the data table.

and the response is:

CreateTrigger: function helloworld() does not exist

If anyone could help me, it would be great.

Thanks

--

François Lodier
[EMAIL PROTECTED]
Yeah! And why??



Re: [GENERAL] Re: Possible off-topic

2001-01-02 Thread Zachary Beane

On Tue, Jan 02, 2001 at 05:01:56PM +0100, François LODIER wrote:
 Hi all,
 
 happy new year,
 
 sorry to disturb you, but I am trying to create a trigger and it
 doesn't seem to be working.
 
 I have a function called helloworld that takes one argument and I am
 trying to trigger it:
 
 so I write something like:
 
 CREATE TRIGGER helloworld_trigger AFTER INSERT
 ON data FOR EACH ROW
 EXECUTE PROCEDURE helloworld ( 'login' )
 
 as I have a field named login in the data table.
 
 and the response is:
 
 CreateTrigger: function helloworld() does not exist
 
 If anyone could help me, it would be great.

When triggers are called, they are invoked with no arguments. The
arguments listed in the "create trigger" statement are placed in the
array TG_ARGV[], which can be referenced inside your procedure.
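
For instance, a trigger function for the statement above might look like
this - a minimal sketch, assuming PL/pgSQL has been installed in the
database (e.g. with "createlang plpgsql yourdb"):

CREATE FUNCTION helloworld () RETURNS opaque AS '
DECLARE
    arg text;
BEGIN
    -- the arguments given in CREATE TRIGGER arrive in TG_ARGV[]
    arg := TG_ARGV[0];
    RAISE NOTICE ''helloworld called with %'', arg;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER helloworld_trigger AFTER INSERT
    ON data FOR EACH ROW
    EXECUTE PROCEDURE helloworld ('login');

Note that the function is declared with no arguments even though the
trigger passes one.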

Zach
-- 
[EMAIL PROTECTED] Zachary Beane http://www.xach.com/



Re: [GENERAL] Two tables refenceing each other's columns

2001-01-02 Thread Stephan Szabo


You can use ALTER TABLE ADD CONSTRAINT to add foreign key constraints
after table creation.
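
For the mutually-referencing pair in question, that might look like the
following sketch (the constraint names are made up for the example):

CREATE TABLE table1 (
    id1  serial PRIMARY KEY,
    col2 int
);

CREATE TABLE table2 (
    id2  serial PRIMARY KEY,
    col2 int
);

ALTER TABLE table1 ADD CONSTRAINT table1_col2_fkey
    FOREIGN KEY (col2) REFERENCES table2 (id2);

ALTER TABLE table2 ADD CONSTRAINT table2_col2_fkey
    FOREIGN KEY (col2) REFERENCES table1 (id1);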

On Tue, 2 Jan 2001, GH wrote:

 
 Is something like the following allowed (or is it a Bad Idea)?
 
 table1
 -----+------------------------------
 id1  |serial primary key
 col2 |int references table2(id2)
 
 table2
 -----+------------------------------
 id2  |serial primary key
 col2 |int references table1(id1)
 
 
 Obviously, creating the tables is a problem since the constraints require
 that the other table exists. 
 If doing the above is *not* a Bad Idea, how could I work around this
 problem?
 (That is, (how) can I add the constraints after table creation?
 I imagine something with "create constraint trigger", but the manual is
 not very clear on that.)
 
 
 Thanks
 
 dan
 




[GENERAL] Re: pg_dump/psql db.out issue

2001-01-02 Thread Tom Lane

Michael Davis [EMAIL PROTECTED] writes:
 I have several tables with text fields that contain single quotes ('). 
  Pg_dump exports these tables and the single quotes (') okay.  Psql, 
 however, will not import the data into the tables because of the single 
 quote (') in one of the columns in one of the records.

Huh?  That's worked just fine for a long time.  What version are you
running?

play=> select * from foo;
    f1
----------
 I'm here
(1 row)

play=> \q
$ pg_dump -t foo play > quote.sql
$ cat quote.sql
\connect - tgl
CREATE TABLE "foo" (
        "f1" text
);
COPY "foo" FROM stdin;
I'm here
\.
$ psql play
Welcome ...
play=> drop table foo;
DROP
play=> \i quote.sql
You are now connected as new user tgl.
CREATE
play=> select * from foo;
    f1
----------
 I'm here
(1 row)

play=>

regards, tom lane



Re: [GENERAL] Possible off-topic

2001-01-02 Thread Tom Lane

Tulio Oliveira [EMAIL PROTECTED] writes:
 1) The arrow keys (up, down, left, right) don't work at the psql
 prompt. Instead, these keys produce ^[[A, ^[[B, ^[[C and ^[[D.
 And when I access the notebook from another machine via TELNET, the
 BACKSPACE key doesn't work either (^H instead).

Sounds like you have messed-up terminal settings.  Check that the
TERM environment variable matches what you are using.

 2) When I start up the X system and run Netscape from the same
 notebook, the time to make the CONNECTION to Postgres is very long:
 more than 30 seconds.

Your local DNS setup is messed up, I think.  That sounds like a
nameserver timeout while trying to look up "localhost" (or whatever
you have PGHOST set to).  Bone up on configuring DNS; I imagine there's
a HOWTO for it out there.

Or you could just unset PGHOST.  But the DNS problem will affect more
than just Postgres, so I'd recommend attacking the problem not just
this symptom.
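
In practice the checks might look like this (bash syntax; vt100 is only
an example - use whatever terminal type you are actually on):

echo $TERM            # should match your terminal, e.g. vt100 or xterm
export TERM=vt100     # fix it for the current session if it does not
unset PGHOST          # make psql connect over the local Unix socket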

regards, tom lane



Re: [GENERAL] System Tables Questions

2001-01-02 Thread Peter Eisentraut

Colin Taylor writes:

 Hi,
 Anyone know where I can get a decent description of the system tables and
 their contents?
 Thanks in advance,
 Colin

http://www.postgresql.org/devel-corner/docs/postgres/catalogs.htm

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




[GENERAL] How to drop a NOT NULL column constraint?

2001-01-02 Thread Ed Loehr

I have a table created like so:

CREATE TABLE foo (
id INTEGER NOT NULL
...
);

I need to alter the table to get rid of the "NOT NULL" constraint on the
'id' column.  Can I do this using ALTER TABLE?  Or do I have to
dump/recreate/reload the table?  

I'm looking for something like "ALTER TABLE foo ALTER COLUMN id DROP NOT
NULL", but the docs don't hint at it...

Regards,
Ed Loehr



Re: [GENERAL] How to drop a NOT NULL column constraint?

2001-01-02 Thread Brett W. McCoy

On Tue, 2 Jan 2001, Ed Loehr wrote:

 I have a table created like so:

   CREATE TABLE foo (
   id INTEGER NOT NULL
   ...
   );

 I need to alter the table to get rid of the "NOT NULL" constraint on the
 'id' column.  Can I do this using ALTER TABLE?  Or do I have to
 dump/recreate/reload the table?

 I'm looking for something like "ALTER TABLE foo ALTER COLUMN id DROP NOT
 NULL", but the docs don't hint at it...

I don't think Postgres supports DROP CONSTRAINT in the ALTER TABLE
statement yet.  You'll need to create a temp table with the same
structure but without the constraint and do a SELECT INTO to copy the
data over, then delete the old table and rename the new table to the
old name.
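
A sketch of that sequence (note that SELECT INTO carries over neither
constraints nor defaults and indexes, so recreate any of those you need
by hand):

SELECT * INTO foo_tmp FROM foo;    -- copies the data, drops constraints
DROP TABLE foo;
ALTER TABLE foo_tmp RENAME TO foo;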

-- Brett
 http://www.chapelperilous.net/~bmccoy/
---
Shah, shah!  Ayatollah you so!




Re: [GENERAL] How to drop a NOT NULL column constraint?

2001-01-02 Thread Tom Lane

Ed Loehr [EMAIL PROTECTED] writes:
 I need to alter the table to get rid of the "NOT NULL" constraint on the
 'id' column.  Can I do this using ALTER TABLE?

There isn't an ALTER TABLE variant for this at the moment, but you can
do it the hard way: reach in and change the attnotnull boolean in the
column's pg_attribute row.  The actual update would only require

UPDATE pg_attribute SET attnotnull = 'f' WHERE
attname = 'id' AND attrelid = whatever...

but I don't believe this will be noticed automatically by running
backends.  I think a VACUUM on your table afterwards would be sufficient
to force the backends to notice the change.

regards, tom lane



Re: [GENERAL] How to drop a NOT NULL column constraint?

2001-01-02 Thread Ed Loehr

Tom Lane wrote:
 
 Ed Loehr [EMAIL PROTECTED] writes:
  I need to alter the table to get rid of the "NOT NULL" constraint on the
  'id' column.  Can I do this using ALTER TABLE?
 
 There isn't an ALTER TABLE variant for this at the moment, but you can
 do it the hard way: reach in and change the attnotnull boolean in the
 column's pg_attribute row.  The actual update would only require
 
 UPDATE pg_attribute SET attnotnull = 'f' WHERE
 attname = 'id' AND attrelid = whatever...
 
 but I don't believe this will be noticed automatically by running
 backends.  I think a VACUUM on your table afterwards would be sufficient
 to force the backends to notice the change.

Thanks.  That's nice and easy.  For posterity, here's the command I used:

UPDATE pg_attribute SET attnotnull = 'f' 
FROM pg_class c 
WHERE attname = 'id' 
  AND attrelid = c.oid
  AND c.relkind = 'r'
  AND c.relname = 'foo';

(not sure the 'relkind' predicate is necessary).

Regards,
Ed Loehr



[GENERAL] Function TOP

2001-01-02 Thread Tomasz Janyska

Is there a function in Postgres like TOP in MSSQL?