Re: [GENERAL] young guy wanting (Postgres DBA) ammo

2007-11-01 Thread Scott Marlowe
On 11/1/07, Kevin Hunter <[EMAIL PROTECTED]> wrote:
> Hiya list,
>
> A friend recently told me that, among other things, the institutions for
> which he works tend to choose MySQL or MSSQL over Postgres because the
> latter requires a dedicated DBA while the former do not.  When they do
> spring for a DBA, they go with Oracle.  As a developer/end-user in/of
> the MySQL/Postgres/Oracle environments, I can say hands down that I
> prefer Postgres and I have a list of reasons why to which I can readily
> refer.

Anyone who thinks MSSQL server doesn't need a DBA is an idiot.
Period.  I've worked with it in the past, and it needed at least as
much DBA hand holding as PostgreSQL 7.4 did.

Now, as to the difference between PostgreSQL and MySQL re: DBA being
needed.  The main difference between the two databases is one of
behavior which has trained people to think MySQL is so simple you
don't need a DBA.  But MySQL will gladly corrupt your data silently
until it's far too late to get any of it back.  This is true of MyISAM
files getting corrupted, and is also true of its rather poorly behaved
out-of-the-box replication, which is documented, even on the MySQL
site, as quietly failing and losing sync.  The main server fails, you
switch to the backup, find it's 8 weeks out of date, or missing 1 in
100 rows, and it never told you.

PostgreSQL, OTOH, throws fits the second anything goes wrong and often
refuses to come up or keep running until you fix it.  It lets you know
something is wrong before you lose data.

Neither one will run particularly fast or well without SOMEONE who
knows how to keep them happy.  Neither one takes backups automatically
at midnight and ships them offsite for safe keeping.  Neither one
automatically finds slow running queries and analyzes them to find out
what you need to do to make them run faster.
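For instance, the midnight backup Scott mentions is a couple of cron
entries someone still has to write and own.  A minimal sketch for the
Postgres side (paths, database name, and backup host are hypothetical):

```
# crontab fragment: nightly logical dump at midnight, shipped offsite
0 0 * * * pg_dump -U postgres mydb | gzip > /var/backups/mydb.sql.gz
30 0 * * * scp /var/backups/mydb.sql.gz backuphost:/offsite/mydb.sql.gz
```

The MySQL equivalent just swaps in mysqldump; either way, someone has
to verify the dumps actually restore.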

The fact is that you can hire a part time DBA from one of the
wonderful PostgreSQL support companies that frequent this list who
will come in once a month or so and make sure your db is running
smoothly.

And Oracle does NOT require a DBA.  Oracle requires an entire
entourage of DBAs to keep happy.  It is a ravenous beast filled with
arcane switches which requires a lot of manual labor to keep it
running smoothly.  And not just for big transactional sites; for damned
near anything you throw at it.  Admittedly, when properly tuned and
maintained it's an impressive database.

So, in my opinion, MySQL is good for data you don't care about, or can
recreate, or can lose a day's worth of with no big loss.  Like bulletin
board systems, or content management.  Take a backup every night or
two, use rdiff-backup on it, and you're set for when it eats your data
one day.  Which it will, if you're running MyISAM tables.

MSSQL server is good for fast Windows based development, because it
integrates well with .NET et al.

PostgreSQL is good if you need to save money on licensing and want a
DB that can rival Oracle for the 80/20 split.  It's still not in the
same realm for really, really big transactional sites, but man is it
getting close fast.  Faster than Oracle is improving in that realm.

Oracle is one of the big ticket DBs and needs big ticket budget and
hardware to perform.

They ALL NEED A DBA.  Just for different things.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] young guy wanting (Postgres DBA) ammo

2007-11-01 Thread Lew

Kevin Hunter wrote:

At 12:35a -0400 on 02 Nov 2007, Tom Lane wrote:

Kevin Hunter <[EMAIL PROTECTED]> writes:

However, I'm not a DBA and only minimally know what's involved in doing
the job, so I don't have "ammo" to defend (or agree?) with my friend
when he says that "Postgres requires a DBA and MySQL doesn't so that's
why they choose the latter."

He's full of it ... mysql is not any easier to run or tune.


I expected as much, but would you give me something more than "Because
Tom says so!"  Good enough for me, but not for a
non-Postgres-indoctrinated person, I fear.  ;-)


MySQL comprises at least three different storage engines, one of which does 
not support referential integrity.


Where I used to work, we developed a MySQL-based solution that required 
foreign keys, so we used one of the engines that did support them.  The "DBA" 
for the production system forgot that instruction, and didn't use our scripts, 
I guess, because they configured the production system with the engine that 
didn't support foreign keys.  Whoops.


MySQL's configuration contains similar tuning parameters to PG's.  All you 
need to do to gather "ammo" is to visit the respective web sites and read up 
on the configuration parameters for both.


By "MSSQL", what do you mean?  SQL Server?  That also needs some tuning.

Tuning, of course, is only one chore for a DBA.  Designing and maintaining the 
dataspace, performing backups without sacrificing (too much) availability, 
managing indexes, perhaps writing and maintaining stored procedures, 
allocating usernames and passwords, creating and configuring schemas (or 
whatever they're called in the particular product) are all part of DBA work.


Does MySQL even support stored procedures?

PG surely doesn't need a DBA for small data stores, any more than MySQL does. 
 No DBMS will survive a heavy production environment for long without someone 
keeping an eye on it, particularly with large data sets.  Then you get into 
RAID storage, clustering, failover and business continuity, data striping, 
and segmenting the database so you can drop or maintain parts of it while 
leaving others in service; all of that is part of running any high-volume 
DBMS reliably.


Anybody who promulgates the idea that MySQL or SQL Server (assuming that's the 
one you meant) does not need a DBA simply has their head up their ass.  Someone 
has to handle these tasks, and if the workload is high enough, that needs to 
be someone's primary duty.


Unless, of course, you simply don't care about your data.  The lifeblood of 
your enterprise.


--
Lew



Re: [GENERAL] young guy wanting (Postgres DBA) ammo

2007-11-01 Thread Tom Lane
Kevin Hunter <[EMAIL PROTECTED]> writes:
> At 12:35a -0400 on 02 Nov 2007, Tom Lane wrote:
>> He's full of it ... mysql is not any easier to run or tune.

> I expected as much, but would you give me something more than "Because
> Tom says so!"  Good enough for me, but not for a
> non-Postgres-indoctrinated person, I fear.  ;-)

Well, let's see:

* Installation: on practically any platform that "I don't want to be a
DBA" people would use, it's a wash.  You download a package, you do
the local equivalent of "sudo service <name> start", and there you
are.  I'm only really familiar personally with the details of this for
Red Hat systems, but being personally responsible for RH's packaging of
both postgres and mysql, I can tell you they are isomorphic.

* Initial configuration: well, both DBs are a bit unfriendly when it
comes to setting up some initial database users and configuring a
sane security policy.  This is actually a hard problem since "sane"
means different things to different people, so a one-size-fits-all
preconfigured solution doesn't work.  IMHO, for both DBs you're
in for some manual-reading whether you like it or not.  Red Hat's
packages of the two are not isomorphic on this point --- RH forces ident
authorization by default for PG, while mysql is left with its default
behavior which is comparable to PG's "trust".  (Don't ask me why, those
decisions were made before I got here.)  We see the blow-back from the
forced ident authorization on our lists, in that people can't figure
out how to get into the DB initially, and I'm sure there are people
out there who don't get past that hurdle and just give up.  It would
be interesting to troll the mysql lists for evidence of the downside
of their default ... which'd be along the line of "someone broke into
my completely insecure database and stole/destroyed all my data" ...

* Tuning: there used to be some truth to the "PG is hard" meme here,
in that the out-of-the-box defaults for PG were designed for what we'd
now consider laughably small boxes.  PG is now cranked up to the point
of being sanely configured for merely small boxes, which is about where
the out-of-the-box mysql configuration is too.  If you want you can
award mysql a few points for providing sample config files for larger
configurations, but even knowing that one of those should be installed
is knowledge that an "I don't want to be a DBA" person wouldn't have.
I think that in either case you're going to end up fooling with the
configuration parameters if you're doing anything much more demanding
than cataloging your baseball cards.

* Routine backups: seems about the same to me; in both cases you
gotta do something along the lines of setting up a cron job to call
a DB-supplied program.

* Replication: I'll grant that mysql has got built-in replication that
is easier to set up than any PG alternative I know about.  But is the
"I don't want to be a DBA" crowd planning to run replication, or likely
to get it right without reading any manuals?

Certainly there is a perception out there that mysql is easier,
but I think it's based on out-of-date information.  You might also
care to read Greg Smith's take on the matter:
http://www.postgresql.org/docs/techdocs.83.html
He points out that each DB's developer community has been working
over the past few years to ameliorate their respective perceived
disadvantages ...

regards, tom lane



Re: [GENERAL] young guy wanting (Postgres DBA) ammo

2007-11-01 Thread Kevin Hunter
At 12:35a -0400 on 02 Nov 2007, Tom Lane wrote:
> Kevin Hunter <[EMAIL PROTECTED]> writes:
>> However, I'm not a DBA and only minimally know what's involved in doing
>> the job, so I don't have "ammo" to defend (or agree?) with my friend
>> when he says that "Postgres requires a DBA and MySQL doesn't so that's
>> why they choose the latter."
> 
> He's full of it ... mysql is not any easier to run or tune.

I expected as much, but would you give me something more than "Because
Tom says so!"  Good enough for me, but not for a
non-Postgres-indoctrinated person, I fear.  ;-)

Thanks,

Kevin



Re: [GENERAL] young guy wanting (Postgres DBA) ammo

2007-11-01 Thread Tom Lane
Kevin Hunter <[EMAIL PROTECTED]> writes:
> However, I'm not a DBA and only minimally know what's involved in doing
> the job, so I don't have "ammo" to defend (or agree?) with my friend
> when he says that "Postgres requires a DBA and MySQL doesn't so that's
> why they choose the latter."

He's full of it ... mysql is not any easier to run or tune.

regards, tom lane



[GENERAL] young guy wanting (Postgres DBA) ammo

2007-11-01 Thread Kevin Hunter
Hiya list,

A friend recently told me that, among other things, the institutions for
which he works tend to choose MySQL or MSSQL over Postgres because the
latter requires a dedicated DBA while the former do not.  When they do
spring for a DBA, they go with Oracle.  As a developer/end-user in/of
the MySQL/Postgres/Oracle environments, I can say hands down that I
prefer Postgres and I have a list of reasons why to which I can readily
refer.

However, I'm not a DBA and only minimally know what's involved in doing
the job, so I don't have "ammo" to defend (or agree?) with my friend
when he says that "Postgres requires a DBA and MySQL doesn't so that's
why they choose the latter."  Basically, I have so far not run across
the need to mess with any of the tunables.  (I am naive, I know.)
Clearly, one should generally pick the best tool for the job, and I'm not
claiming that Postgres is *always* the right answer, but I'd like to be
informed.  My questions:

- With 8.2, and the almost-out 8.3, what kinds of responsibilities
should a Postgres DBA expect to have?

- More in line with the conversation with my friend, what/why is it that
Postgres needs a DBA while MySQL doesn't?  I highly suspect that the
assumption that MySQL doesn't need a DBA is incorrect, but that's what
was posed to me and I couldn't agree or disagree.

I am familiar with Greg Smith et al's page:
http://www.postgresql.org/docs/techdocs.83

Thoughts?  Thanks in advance,

Kevin

P.S. BTW Greg et al, thank you a *ton* for putting that document
together.  I learned a lot watching the discussion on the list and
reading the final version.



Re: [GENERAL] Improve Search

2007-11-01 Thread Scott Marlowe
On 11/1/07, carter ck <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> Has anyone come across a solution for a faster and more powerful search
> stored procedure on a table that contains more than 500K records?
>
> I am currently trying to work around it.  All help is appreciated.

Are you using the tsearch2 full text engine?  If not, you're missing
the boat for fast text searching.  But you didn't really tell us what
exactly you're searching on, so I'm just guessing.
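For the archives, here is a minimal sketch of what the tsearch2 route
looks like on 8.2 (table and column names are hypothetical, and
contrib/tsearch2 must be installed in the database first):

```sql
-- add a tsvector column, populate it, and index it (tsearch2, PG 8.2)
ALTER TABLE docs ADD COLUMN search_vec tsvector;
UPDATE docs SET search_vec = to_tsvector('default', body);
CREATE INDEX docs_search_idx ON docs USING gist (search_vec);

-- query rows matching both terms
SELECT id FROM docs
 WHERE search_vec @@ to_tsquery('default', 'fast & search');
```

A trigger is normally used to keep the column current on insert and
update; see the tsearch2 documentation for details.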



Re: [GENERAL] Calculation for Max_FSM_pages : Any rules of thumb?

2007-11-01 Thread Ow Mun Heng

On Thu, 2007-11-01 at 21:22 -0400, Tom Lane wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> writes:
> > OK.. Vacuum verbose took 2 hours.. Vacuum full will likely take 2x that
> > I presume.
> 
> Probably a lot more, and it'll bloat your indexes while it's at it.
> Do you have a *reason* to run a vacuum full?


Maybe you didn't read my original post. I did a vacuum verbose (playing
with pgfouine) and found that my max_fsm_pages was exceeded, and based on
the archives, I have to do a vacuum full.

I decided to bite the bullet and go ahead with the vacuum full anyway.
It's been ~3 hours already and I _think_ it's about to finish.  (Is
there a way to determine which tables are left to vacuum?  Is there a
_list_ it traverses, etc.?  Tailing the vacuum log, I can see where
it's at, but not where it is in terms of overall status.)


> I'd suggest using contrib/pgstattuple to get a fix on how much dead
> space there is in your tables.  If it's really horrid (like more than
> 50%) then VACUUM FULL followed by REINDEX might be called for, but
> otherwise you should probably not sweat it.

pgstattuple.. Okay.. anyone have a centos rpm for it? Don't really have
access to a compiler on the (server) machine.

> If you do have a problem you need to reconsider your regular vacuuming
> policy, because it's not running often enough.  See if autovacuum makes
> sense for you.

autovacuum _is_ running on a regular basis.  (I'm not sure if it's
supposed to catch max_fsm_pages being exceeded, etc.)

> Also, if you are not low on disk space overall, consider CLUSTER as a
> substitute for VACUUM FULL + REINDEX.  It'll be faster and you might get
> a speed boost for subsequent queries using whichever index you cluster
> on.  The only drawback is that CLUSTER uses temp space equal to the
> table + index sizes ...

I'm not low.. I have ~300G available. Total DB size is ~60G. 
I guess I need to read up on CLUSTER. Thanks.





Re: [GENERAL] getting list of tables from command line

2007-11-01 Thread Craig White
On Thu, 2007-11-01 at 20:25 +0100, hubert depesz lubaczewski wrote:
> On Wed, Oct 31, 2007 at 08:01:41AM -0700, Craig White wrote:
> > my script looks like this...
> > (all I want is to get a list of the tables into a text file pg_tables)
> 
> everybody else showed some ways, but i'll ask a question:
> i hope you're not treating it as a backup? because when you do it that
> way (on a table-by-table basis) the "backup" is practically useless, as it
> doesn't have internal consistency.
> 
> if you want to make a backup, perhaps you can tell us exactly what you
> want to do, and why standard pg_dump is not enough.
> 
> if you're fully aware of the issue i pointed out - sorry, i'm just trying to
> make sure you'll not shoot yourself in the foot.

No - thanks...you raised a very good point.

First I would like to thank all of those who responded (Reece, T.J.,
David, Filip, Ron) with great ideas...I learned a lot.

I also run pg_dumpall on a weekly basis and pg_dump each database
nightly. I learned that lesson a long time ago when I was in early
development and migrated my Fedora installation, which updated postgres,
and my database no longer worked.

I wanted the per-table separation nightly as insurance against things
like bad migrations (Ruby on Rails, which alters the tables via
scripting), for the ability to reload the data for a table after poorly
conceived coding (not that I would ever do such a thing), and to make it
easier to move data from my production database to my development
database.

I am fortunate that even though I am now past 30 tables in my project
(production) and we've been adding data for a year and a half, the total
data backup is only 7 megabytes (/var/lib/pgsql/data is only 132
megabytes), so I can't have too many different backups made nightly via
cron scripts at this point. I also turned on autovacuum in the
preferences, but this method seems much more thorough.

For anyone interested, this is what I settled upon for my final script
(heck, I don't even bother tar/zip the things yet)...

*** begin pg_table_dump.scr ***
#!/bin/sh
#
# Script to identify tables, backup schema and data separately and
# then finally, vacuum each table
#
DB_NAME=MY_DB
BACKUP_PATH=/home/backup/postgres/production
MY_SCHEMA=public
PG_USER=craig
#
psql -U $PG_USER \
  --tuples-only --no-align \
  -c "SELECT tablename FROM pg_tables WHERE \
  schemaname = '$MY_SCHEMA';" \
  $DB_NAME > $BACKUP_PATH/pg_tables
#
for i in `cat $BACKUP_PATH/pg_tables`
do
  pg_dump \
--username=$PG_USER \
--schema=$MY_SCHEMA \
--table=$i \
--schema-only \
$DB_NAME > $BACKUP_PATH/schemas/$i.sql
  pg_dump \
--username=$PG_USER \
--schema=$MY_SCHEMA \
--table=$i \
--data-only \
$DB_NAME > $BACKUP_PATH/data/$i.sql
  vacuumdb \
   --username=$PG_USER \
   --dbname=$DB_NAME \
   --table=$MY_SCHEMA.$i \
   --verbose \
   --full
done
*** end pg_table_dump.scr ***

Craig




[GENERAL] Ignore just testing

2007-11-01 Thread Joshua D. Drake

Testing some changes on our end, please ignore

-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




[GENERAL] Improve Search

2007-11-01 Thread carter ck

Hi all,

Has anyone come across a solution for a faster and more powerful search 
stored procedure on a table that contains more than 500K records?

I am currently trying to work around it.  All help is appreciated.

Thanks.
_
Express yourself with the latest Windows Live Messenger! Get it free today!
http://www.get.live.com/wl/all



[GENERAL] XML database

2007-11-01 Thread Sean Davis
I have a large set of XML files (representing about 18M records) that I 
would like to load into postgres.  I have been loading the records into 
relational tables.  Is this the best way to go?  I am particularly 
interested in full-text searches of a subset of the elements.  I am on 
8.3Beta.
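Since 8.3 integrates text search into core, no contrib module is needed
there.  A hedged sketch, with hypothetical table and column names:

```sql
-- PG 8.3 built-in text search: expression index over the stored text
CREATE INDEX records_fts_idx ON records
    USING gin (to_tsvector('english', payload));

SELECT id FROM records
 WHERE to_tsvector('english', payload)
       @@ to_tsquery('english', 'gene & expression');
```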


Thanks,
Sean




Re: [GENERAL] Calculation for Max_FSM_pages : Any rules of thumb?

2007-11-01 Thread Tom Lane
Ow Mun Heng <[EMAIL PROTECTED]> writes:
> OK.. Vacuum verbose took 2 hours.. Vacuum full will likely take 2x that
> I presume.

Probably a lot more, and it'll bloat your indexes while it's at it.
Do you have a *reason* to run a vacuum full?

I'd suggest using contrib/pgstattuple to get a fix on how much dead
space there is in your tables.  If it's really horrid (like more than
50%) then VACUUM FULL followed by REINDEX might be called for, but
otherwise you should probably not sweat it.
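Something like the following, once contrib/pgstattuple is installed
(table name hypothetical):

```sql
-- reports tuple counts, dead-tuple percentage, and free space
SELECT * FROM pgstattuple('public.my_big_table');
```

The dead_tuple_percent and free_percent columns are the figures to
check against the ~50% rule of thumb above.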

If you do have a problem you need to reconsider your regular vacuuming
policy, because it's not running often enough.  See if autovacuum makes
sense for you.

Also, if you are not low on disk space overall, consider CLUSTER as a
substitute for VACUUM FULL + REINDEX.  It'll be faster and you might get
a speed boost for subsequent queries using whichever index you cluster
on.  The only drawback is that CLUSTER uses temp space equal to the
table + index sizes ...
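In 8.2 the index comes first in the command; a sketch with hypothetical
names:

```sql
-- rewrite my_table in my_table_pkey order (PG 8.2 syntax)
CLUSTER my_table_pkey ON my_table;
ANALYZE my_table;   -- refresh planner statistics after the rewrite
```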

regards, tom lane



Re: [GENERAL] Calculation for Max_FSM_pages : Any rules of thumb?

2007-11-01 Thread Ow Mun Heng

On Thu, 2007-11-01 at 20:56 -0400, Bill Moran wrote:
> Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> >
> > I just ran a vacuum verbose on the entire DB and this came out.
> > 
> >  number of page slots needed (274144) exceeds max_fsm_pages (153600)
> > 
> > Hence, I've changed the max to 400,000 (pulled it straight out of the
> > air). How does one calculate what's the number needed anyway?
> 
> It's not simple.  Every update or delete creates a "dead tuple" that
> needs to be tracked by an fsm entry.  So it depends on how frequently
> your database is changing in between vacuum runs.

Quite a lot, actually.

> 
> In my experience, the best bet is to do vacuum verbose on a regular
> basis and get a feel for what you need.  Every database load is
> different.


autovacuum is turned on by default, so I didn't think any issues
_might_ occur.  (Or rather, I didn't think about Murphy's law.)

> 
> > Another question is, based on what I've read in the archives (in my
> > laptop.. No-Inet conn @ work) Since I've overran my max_FSM, I'm
> > basically screwed and will have to do a vacuum verbose FULL on the
> > entire DB. Crap..
> 
> You don't _need_ to.  But it's generally a good idea to get table
> bloat reduced.

OK.. Vacuum verbose took 2 hours.. Vacuum full will likely take 2x that
I presume.

> > I'm planning to run vacuum verbose full tonight/over the weekend. (is
> > this sane?) Thanks for the advice..
> 
> vacuum full is sane, if that's what you mean.  The only problem is that
> it locks tables while working on them, so you have to take into account
> what other workload might be blocked while vacuum full is working, and
> how long vacuum full is liable to take.

It's pulling data from the master DB (it's a data mart) every 50 to 120
seconds.
I presume that it's blocked on a table-by-table basis?



Re: [GENERAL] Calculation for Max_FSM_pages : Any rules of thumb?

2007-11-01 Thread Bill Moran
Ow Mun Heng <[EMAIL PROTECTED]> wrote:
>
> I just ran a vacuum verbose on the entire DB and this came out.
> 
>  number of page slots needed (274144) exceeds max_fsm_pages (153600)
> 
> Hence, I've changed the max to 400,000 (pulled it straight out of the
> air). How does one calculate what's the number needed anyway?

It's not simple.  Every update or delete creates a "dead tuple" that
needs to be tracked by an fsm entry.  So it depends on how frequently
your database is changing in between vacuum runs.

In my experience, the best bet is to do vacuum verbose on a regular
basis and get a feel for what you need.  Every database load is
different.

> Another question is, based on what I've read in the archives (in my
> laptop.. No-Inet conn @ work) Since I've overran my max_FSM, I'm
> basically screwed and will have to do a vacuum verbose FULL on the
> entire DB. Crap..

You don't _need_ to.  But it's generally a good idea to get table
bloat reduced.

> (I was playing with pgfouine and then I found the above piece of advice)
> 
> I'm planning to run vacuum verbose full tonight/over the weekend. (is
> this sane?) Thanks for the advice..

vacuum full is sane, if that's what you mean.  The only problem is that
it locks tables while working on them, so you have to take into account
what other workload might be blocked while vacuum full is working, and
how long vacuum full is liable to take.

-- 
Bill Moran
http://www.potentialtech.com



[GENERAL] Calculation for Max_FSM_pages : Any rules of thumb?

2007-11-01 Thread Ow Mun Heng
I just ran a vacuum verbose on the entire DB and this came out.

 number of page slots needed (274144) exceeds max_fsm_pages (153600)

Hence, I've changed the max to 400,000 (pulled straight out of the
air).  How does one calculate the number needed, anyway?
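For reference, the setting lives in postgresql.conf and only takes
effect after a server restart; a sketch:

```
# postgresql.conf -- give the free space map headroom above the
# "page slots needed" figure that VACUUM VERBOSE reports
max_fsm_pages = 400000
```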

Another question: based on what I've read in the archives (on my
laptop; no 'net connection at work), since I've overrun my max_fsm_pages,
I'm basically screwed and will have to do a VACUUM FULL on the
entire DB.  Crap..

(I was playing with pgfouine and then I found the above piece of advice)

I'm planning to run vacuum verbose full tonight/over the weekend. (is
this sane?) Thanks for the advice..





Re: [GENERAL] postgresql-8.2 startup problem

2007-11-01 Thread Tom Lane
novice <[EMAIL PROTECTED]> writes:
> I just installed postgres-8.2 using the command line
> apt-get install postgres-8.2 and it seems the installation went ok,
> but when it tries to start postgres , it fails with this error message
> below...
> (by the way, I am running Ubuntu Feisty on an Amazon EC2 image).
> Any ideas what is wrong?

> 2007-11-01 14:16:26 CET LOG:  could not translate host name
> "localhost", service "5432" to address: Name or service not known

Your DNS and/or /etc/hosts configuration is broken.  Every machine
should be able to resolve "localhost" (to itself, of course).
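The conventional entry, in case it has gone missing:

```
# /etc/hosts -- the loopback entry every machine should carry
127.0.0.1   localhost
```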

regards, tom lane



Re: [GENERAL] Number to Words Conversion

2007-11-01 Thread Bruce Momjian
yogesh wrote:
> Hello Friends,
> 
> Is there any way to convert a number to its equivalent string
> (words)?
> 
>  e.g. 10 to TEN.  I urgently require the answer.

We have cash_words:

test=> SELECT cash_words('100');
 cash_words

 One hundred dollars and zero cents
(1 row)

but that is money-oriented.
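As a rough workaround, you can strip the currency wording off, e.g.
(a sketch; only sensible for whole-dollar inputs):

```sql
-- drop the ' dollars and ... cents' tail from cash_words' output
SELECT regexp_replace(cash_words('10'), ' dollars? and .*$', '');
-- e.g. 'Ten'
```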

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>http://momjian.us
  EnterpriseDB http://postgres.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [GENERAL] Number to Words Conversion

2007-11-01 Thread brian

yogesh wrote:

Hello Friends,

Is there any way to convert a number to its equivalent string
(words)?

 e.g. 10 to TEN.  I urgently require the answer.


(column sizes just an example)

CREATE TABLE integer_string (
number SMALLINT NOT NULL,
word VARCHAR(24) NOT NULL
);
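Populated, the lookup is then a simple point query or join (a sketch):

```sql
INSERT INTO integer_string (number, word)
    VALUES (1, 'ONE'), (2, 'TWO'), (10, 'TEN');

SELECT word FROM integer_string WHERE number = 10;  -- 'TEN'
```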

brian



Re: [GENERAL] select random order by random

2007-11-01 Thread Tom Lane
Gregory Stark <[EMAIL PROTECTED]> writes:
> So I guess having the parser do this substitution kind of makes sense
> if you're thinking about things the way the spec does. It doesn't make
> much sense if you're thinking the way Postgres does of having
> arbitrary expressions there independent of what's in the select list.

Again: this is not "Postgres vs the spec", it is "SQL92 vs SQL99".
I draw your attention to the relevant text...

SQL92:

         <order by clause> ::=
              ORDER BY <sort specification list>

         <sort specification list> ::=
              <sort specification> [ { <comma> <sort specification> }... ]

         <sort specification> ::=
              <sort key> [ <collate clause> ] [ <ordering specification> ]

         <sort key> ::=
                <column name>
              | <unsigned integer>

         <ordering specification> ::= ASC | DESC

...

 10)If ORDER BY is specified, then each <sort specification> in the
    <order by clause> shall identify a column of T.

Case:

a) If a <sort specification> contains a <column name>, then T
  shall contain exactly one column with that <column name> and
  the <sort specification> identifies that column.

b) If a <sort specification> contains an <unsigned integer>,
  then the <unsigned integer> shall be greater than 0 and not
  greater than the degree of T. The <sort specification>
  identifies the column of T with the ordinal position specified by
  the <unsigned integer>.

(T is the table emitted by the SELECT.)


SQL99:

         <order by clause> ::=
              ORDER BY <sort specification list>

         <sort specification list> ::=
              <sort specification> [ { <comma> <sort specification> }... ]

         <sort specification> ::=
              <sort key> [ <collate clause> ] [ <ordering specification> ]

         <sort key> ::=
              <value expression>

         <ordering specification> ::= ASC | DESC


18) If an <order by clause> is specified, then:

    a) Let K(i) be the <sort key> contained in the i-th <sort specification>.

b) Let DT be the declared type of K(i).

c) If DT is a user-defined type, then the comparison form of DT
  shall be FULL.

d) K(i) shall not be a .

    e) If QE is a <query expression> that is a <query expression body>
      that is a <query term> that is a <query primary> that is a
      <simple table> that is a <query specification>, then the
      <query specification> is said to be a simple table query.
f) Case:

      i) If <sort specification list> contains any <sort key> K(i)
         that contains a column reference to a column that is not a
         column of T, then:

 1) The  shall be a simple table
   query.

 2) Case:

           A) If K(i) is not equivalent to a <value expression>
              immediately contained in any <derived column> in the
              <select list> SL of the <query specification> QS contained
              in QE, then:

  I) T shall not be a grouped table.

 II) QS shall not specify the  DISTINCT
or directly contain one or more s.

III) Let C(j) be a column that is not a column of T and
whose column reference is contained in some K(i).

 IV) Let SKL be the list of s that are
s of column references to every C(j).
The columns C(j) are said to be extended sort key
columns.

              V) Let TE be the <table expression> immediately
                 contained in QS.

             VI) Let ST be the result of evaluating the <query specification>:

   SELECT SL, SKL FROM TE

   B) Otherwise:

  I) Let ST be T.

             II) For every <derived column> DC(e) of SL that is
                 equivalent to K(i), if DC(e) has a <column name>,
                 then let CN(e) be that <column name>; otherwise:

                 1) Let CN(e) be an implementation-defined <column
                    name> that is not equal to any <column name> of
                    any column of ST.

                 2) DC(e) is effectively replaced by DE(e) AS CN(e)
                    in the <select list> of ST, where DE(e) is the
                    <value expression> of DC(e).

III) K(i) is effectively replaced by CN(e).

 ii) Otherwise, let ST be T.

g) ST is said to be a sort table.

    h) K(i) is a <value expression>. The <value expression> shall
      not contain a <subquery> or a <set function specification>,
      but shall contain a <column reference>.

      i) Let X be any <column reference> directly contained in K(i).

     ii) If X does not contain an explicit <table name> or
         <correlation name>, then K(i) shall be a <column reference> that
         shall be equivalent to the name of exactly one column of
         ST.

  NOTE 287 - A previous version of ISO/IEC 9075 allows <sort key> to
  be an <unsigned integer> to denote a column
  See Annex E, "Incompatibilities with ISO/IEC 9075:1992 and
  ISO/IEC 9075-4:1996".


In the usual tradition of SQL99, the spec text is enormously less
readable than SQL92 was, but I *think* this says nearly the same thing
as what we do: a plain column reference in ORDER BY is first sought as
an output column name, and failing that sought as a column name of one
of the input tables.  They are more restrictive than we are but that's
OK.

For the particular issue at hand here, it seems to me that 18.f.i.2.B
dictates that a <sort key> matching an output column be treated as a
reference to the column, not as an independently evaluated expression.
Admittedl

[GENERAL] Storing float array

2007-11-01 Thread KeesKling

Hi.

I want to store an array of about 50,000 floats. My first idea was to
store them in a 'float[]' column, but using JDBC and Java it takes minutes
to convert and store the data.
My next try was to use a BYTEA column and store the data as 'byte[]', and
that worked. Inspecting the data on the client side, it looks OK, but when
I look at it on the server side with PL/Perl I see a big increase in data
size and a lot of escape (\0) characters.
Is it possible to avoid these characters and store the byte array as it is,
or do I have to use another column type or storage method?
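The escape characters come from bytea's text representation, where each
non-printable byte is rendered as an octal escape such as \000; the data
itself is not corrupted. A rough Python sketch of the size blow-up (the
50,000-element size is from the question above; the escaping function is a
simplified, illustrative version of PostgreSQL's "escape" bytea output
format, not the exact server code):

```python
import struct

# Pack 50,000 doubles into raw binary, as a bytea column would store them.
floats = [i * 0.5 for i in range(50000)]
raw = struct.pack("<%dd" % len(floats), *floats)
assert len(raw) == 50000 * 8  # 400,000 bytes of raw binary

def escape_bytea(data: bytes) -> str:
    """Simplified sketch of octal-escaped bytea text output."""
    out = []
    for b in data:
        if b == 0x5C:            # a backslash is doubled
            out.append("\\\\")
        elif 0x20 <= b <= 0x7E:  # printable ASCII passes through
            out.append(chr(b))
        else:                    # everything else becomes \nnn
            out.append("\\%03o" % b)
    return "".join(out)

escaped = escape_bytea(raw)
print(len(raw), len(escaped))  # the escaped text form is several times larger
```

Sending the value as a binary parameter (e.g. JDBC's setBytes, or binary-mode
COPY) avoids this inflation; what PL/Perl is showing is the escaped text
form, not extra stored data.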

Regards Kees Kling 
-- 
View this message in context: 
http://www.nabble.com/Storing-float-array-tf4731022.html#a13527815
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] postgresql-8.2 startup problem

2007-11-01 Thread novice
Hi guys,

I just installed PostgreSQL 8.2 using the command line
apt-get install postgresql-8.2 and it seems the installation went OK,
but when it tries to start Postgres, it fails with the error message
below...
(by the way, I am running Ubuntu Feisty on an Amazon EC2 image).
Any ideas what is wrong?

Regards,
Beat

Setting up postgresql-8.2 (8.2.5-0ubuntu0.7.04.1) ...
 * Starting PostgreSQL 8.2 database
server
* The PostgreSQL server failed to start. Please check the log output:
2007-11-01 14:16:26 CET LOG:  could not load root certificate file
"root.crt": no SSL error reported
2007-11-01 14:16:26 CET DETAIL:  Will not verify client certificates.
2007-11-01 14:16:26 CET LOG:  could not translate host name
"localhost", service "5432" to address: Name or service not known
2007-11-01 14:16:26 CET WARNING:  could not create listen socket for
"localhost"
2007-11-01 14:16:26 CET FATAL:  could not create any TCP/IP sockets
 
[fail]
invoke-rc.d: initscript postgresql-8.2, action "start" failed.
dpkg: error processing postgresql-8.2 (--configure):
 subprocess post-installation script returned error exit status 1
Errors were encountered while processing:
 postgresql-8.2
E: Sub-process /usr/bin/dpkg returned an error code (1)


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Number to Words Conversion

2007-11-01 Thread yogesh
Hello Friends,

Is there any way to convert a number to its equivalent string
(words)?

 e.g. 10 to TEN. I urgently require the answer.
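There is no such conversion built into standard SQL. If I recall correctly,
PostgreSQL ships a cash_words() function for the money type that comes close,
but it is also easy to spell numbers out on the client side. A small
illustrative Python sketch (the function and its spelling conventions are my
own, not from any library):

```python
ONES = ["ZERO", "ONE", "TWO", "THREE", "FOUR", "FIVE", "SIX", "SEVEN",
        "EIGHT", "NINE", "TEN", "ELEVEN", "TWELVE", "THIRTEEN", "FOURTEEN",
        "FIFTEEN", "SIXTEEN", "SEVENTEEN", "EIGHTEEN", "NINETEEN"]
TENS = ["", "", "TWENTY", "THIRTY", "FORTY", "FIFTY",
        "SIXTY", "SEVENTY", "EIGHTY", "NINETY"]

def number_to_words(n: int) -> str:
    """Spell out 0 <= n <= 999999 in uppercase English words."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, rest = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[rest] if rest else "")
    if n < 1000:
        hundreds, rest = divmod(n, 100)
        return ONES[hundreds] + " HUNDRED" + (" " + number_to_words(rest) if rest else "")
    thousands, rest = divmod(n, 1000)
    return number_to_words(thousands) + " THOUSAND" + (" " + number_to_words(rest) if rest else "")

print(number_to_words(10))  # TEN
```

The same logic translates directly into a PL/pgSQL or PL/Perl function if the
conversion has to happen inside the database.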


 Regards,
 Yogesh Arora


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Need References

2007-11-01 Thread Anand Kumar

Hi All,

I'm looking for some case studies, reference sites, or benchmarks for 
Solaris/Linux and PostgreSQL,
and application-level benchmarks like JBoss/Tomcat with Postgres as the 
backend.

It would be great if anybody could help me.

Regards
Anand Kumar.
begin:vcard
fn:Anand Kumar M
n:Meenakshi Sunadaram;Anand Kumar
org:Pre-Sales, Chennai;Sun Microsystems India Pvt Ltd
adr:12 Haddows Road,;;Appejay Business Centre,;Chennai;Tamilnadu;66;India
email;internet:[EMAIL PROTECTED]
title:Systems Engineer
tel;cell:+919840327287
url:http://www.in.sun.com
version:2.1
end:vcard


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] select random order by random

2007-11-01 Thread Gregory Stark
"Scott Marlowe" <[EMAIL PROTECTED]> writes:

> On 11/1/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
>> On Thu, Nov 01, 2007 at 02:22:58PM -0400, Tom Lane wrote:
>> > > SELECT random() FROM generate_series(1, 10) ORDER BY random();
>> > > SELECT random() AS foo FROM generate_series(1, 10) ORDER BY foo;
>> >
>> > (BTW, this is not the planner's fault; the collapsing of the two
>> > targetlist entries into one happens in the parser.)
>>
>> Something twigged telling me that in fact the latter expression is not
>> in standard SQL but a (very common) extension. A <sort key> is clearly
>> indicated to be a <value expression> with no indication anywhere that
>> column aliases are allowed here (though that may be in the common rules
>> somewhere).
>
> Well, the standard way I know of is to use column numbers.  i.e.:
>
> select random() from generate_series(1,10) order by 1
>
> That I'm pretty sure IS in the standard.  Don't see why column aliases
> would be disallowed.  It's not like the where clause where the select
> field doesn't exist when it fires.  The select field list does exist
> when order by fires, so referring to it makes sense.

Well IIRC the standard requires the sort keys to be columns from the select
list. You can't put any old expression there, only copies of the expressions
used in the select list.

So in the spec "random()" can't really be considered a second call to
random(), it's just a retyped instance of the "random()" in the select list.
That is, it's just a longwinded way of saying "order by 1" (meaning column 1).

So I guess having the parser do this substitution kind of makes sense if
you're thinking about things the way the spec does. It doesn't make much sense
if you're thinking the way Postgres does of having arbitrary expressions there
independent of what's in the select list.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Populating large DB from Perl script

2007-11-01 Thread Jorge Godoy
On Thursday 01 November 2007 16:57:36, Kynn Jones wrote:
>
> But it occurred to me that this is a generic enough problem, and that
> I'm probably re-inventing a thoroughly invented wheel.  Are there
> standard techniques or resources or Pg capabilities to deal with this
> sort of situation?

You can restore the database without the constraints and then add them back 
after the last table has been restored.

You can also use pg_dump / pg_restore / psql to do that.

You can also use COPY. 

If you want to keep the referential integrity checks in place from the 
beginning, you have to restore your data in dependency order.

-- 
Jorge Godoy  <[EMAIL PROTECTED]>


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] (Never?) Kill Postmaster?

2007-11-01 Thread Tom Lane
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> Tom, is it possible the backend was doing something that couldn't be
> immediately interrupted, like a long wait on IO or something?

Sherlock Holmes said that theorizing in advance of the data is a capital
mistake...

What we can be reasonably certain of is that that backend wasn't
reaching any CHECK_FOR_INTERRUPTS() macros.  Whether it was hung up
waiting for something, or caught in a tight loop somewhere, is
impossible to say without more data than we have.  AFAIR the OP didn't
even mention whether the backend appeared to be consuming CPU cycles
(which'd be a pretty fair tip about which of those to believe, but still
not enough to guess *where* the problem is). A gdb backtrace would tell
us more.

regards, tom lane



Re: [GENERAL] Scrolling cursors in PL/PgSQL

2007-11-01 Thread Tom Lane
"Dawid Kuroczko" <[EMAIL PROTECTED]> writes:
> Is there a way to scroll a cursor from within PL/PgSQL?

FWIW, in 8.3 plpgsql has full FETCH/MOVE support --- that might or might
not help you, depending on what your timescale for deploying this
application is ...

regards, tom lane



Re: [GENERAL] select random order by random

2007-11-01 Thread Tom Lane
Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> Something twigged telling me that in fact the latter expression is not
> in standard SQL but a (very common) extension. A <sort key> is clearly
> indicated to be a <value expression> with no indication anywhere that
> column aliases are allowed here (though that may be in the common rules
> somewhere).

SQL92 says differently.  The committee basically redefined ORDER BY
entirely between SQL92 and SQL99.

What we actually try to support is both SQL92 and SQL99 interpretations,
which is a pretty unholy mess, but enough people (and programs) are used
to the SQL92 way that I don't foresee being able to drop it.

regards, tom lane



Re: [GENERAL] Scrolling cursors in PL/PgSQL

2007-11-01 Thread Pavel Stehule
On 01/11/2007, Dawid Kuroczko <[EMAIL PROTECTED]> wrote:
> Hello.
>
> Is there a way to scroll a cursor from within PL/PgSQL?
>
> I tried EXECUTE, but:
>
> ERROR:  cannot manipulate cursors directly in PL/pgSQL
> HINT:  Use PL/pgSQL's cursor features instead.
>
> The idea would be that PL/pgsql function would look
> through (all) query results, then rewind the cursor and
> finally return it.
>
> My fellow developers have created a function which runs
> the (complex) query twice: first a count(*) on the result
> set, then returning the row count and a cursor to the results.
> Then they created a version which uses a temporary
> table as a placeholder for the query results, and returns the
> row count and a cursor to select * from tmp_table.
>
> Now I imagined a function which would open the cursor,
> MOVE FORWARD ALL, then MOVE ABSOLUTE 0;
> and return count and a cursor, yet it seems it won't work.
>
> Any other idea how to efficiently solve such a problem?
>

Hello

PostgreSQL 8.3 allows scrollable cursors in plpgsql

Regards
Pavel Stehule



Re: [GENERAL] Scrolling cursors in PL/PgSQL

2007-11-01 Thread Raymond O'Donnell

On 01/11/2007 20:19, Dawid Kuroczko wrote:

> Any other idea how to efficiently solve such a problem?

How about creating a temporary table within your function and using that 
instead?


Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] select random order by random

2007-11-01 Thread Scott Marlowe
On 11/1/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 01, 2007 at 02:22:58PM -0400, Tom Lane wrote:
> > > SELECT random() FROM generate_series(1, 10) ORDER BY random();
> > > SELECT random() AS foo FROM generate_series(1, 10) ORDER BY foo;
> >
> > (BTW, this is not the planner's fault; the collapsing of the two
> > targetlist entries into one happens in the parser.)
>
> Something twigged telling me that in fact the latter expression is not
> in standard SQL but a (very common) extension. A <sort key> is clearly
> indicated to be a <value expression> with no indication anywhere that
> column aliases are allowed here (though that may be in the common rules
> somewhere).

Well, the standard way I know of is to use column numbers.  i.e.:

select random() from generate_series(1,10) order by 1

That I'm pretty sure IS in the standard.  Don't see why column aliases
would be disallowed.  It's not like the where clause where the select
field doesn't exist when it fires.  The select field list does exist
when order by fires, so referring to it makes sense.



Re: [GENERAL] Solaris 10, mod_auth_pgsql2

2007-11-01 Thread Jeff MacDonald

Yup, this was it, thanks!

Jeff.

On 1-Nov-07, at 2:27 PM, Tom Lane wrote:


Jeff MacDonald <[EMAIL PROTECTED]> writes:

sandbox# /opt/csw/apache2/sbin/apxs -i -a -c -l /opt/csw/postgresql/


I think you want -I not -l in front of that include path ...

regards, tom lane



Jeff MacDonald
Interchange Technologies
613 292 6239







[GENERAL] Populating large DB from Perl script

2007-11-01 Thread Kynn Jones
Hi.  This is a recurrent problem that I have not been able to find a
good solution for.  I have  large database that needs to be built from
scratch roughly once every month.  I use a Perl script to do this.

The tables are very large, so I avoid as much as possible using
in-memory data structures, and instead I rely heavily on temporary
flat files.

The problem is the population of tables that refer to "internal" IDs
on other tables.  By "internal" I mean IDs that have no meaning
external to the database; they exist only to enable relational
referencing.  They are always defined as serial integers.  So the
script either must create and keep track of them, or it must populate
the database in stages, letting Pg assign the serial IDs, and query
the database for these IDs during subsequent stages.

I have solved this general problem in various ways, all of them
unwieldy (in the latest version, the script generates the serial ids
and uses Perl's so-called "tied hashes" to retrieve them when needed).

But it occurred to me that this is a generic enough problem, and that
I'm probably re-inventing a thoroughly invented wheel.  Are there
standard techniques or resources or Pg capabilities to deal with this
sort of situation?
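The "script generates the serial ids" approach can be sketched quite simply.
Here is an illustrative Python version (Python rather than Perl, and the
names and sequence are made up; a plain dict plays the role of Perl's tied
hash):

```python
import itertools

# Pre-assign internal IDs client-side, mirroring what a SERIAL column would do.
next_id = itertools.count(1)
ids = {}

def intern_id(name: str) -> int:
    """Return the id for `name`, assigning a fresh one on first sight."""
    if name not in ids:
        ids[name] = next(next_id)
    return ids[name]

# Parent rows (id, name), e.g. written to a flat file and loaded with COPY.
parents = [(intern_id(n), n) for n in ["alpha", "beta", "alpha"]]

# Child rows can now carry the parent id without querying the database.
children = [(intern_id("alpha"), "child-1"), (intern_id("beta"), "child-2")]

# After bulk loading, bump the sequence past the highest assigned id, e.g.:
#   SELECT setval('parent_id_seq', max_assigned_id);
```

The setval() step is the only database-side bookkeeping needed so that later
inserts through the SERIAL default do not collide with the pre-assigned ids.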

TIA!

kj

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] select random order by random

2007-11-01 Thread Martijn van Oosterhout
On Thu, Nov 01, 2007 at 02:22:58PM -0400, Tom Lane wrote:
> > SELECT random() FROM generate_series(1, 10) ORDER BY random();
> > SELECT random() AS foo FROM generate_series(1, 10) ORDER BY foo;
> 
> (BTW, this is not the planner's fault; the collapsing of the two
> targetlist entries into one happens in the parser.)

Something twigged telling me that in fact the latter expression is not
in standard SQL but a (very common) extension. A <sort key> is clearly
indicated to be a <value expression> with no indication anywhere that
column aliases are allowed here (though that may be in the common rules
somewhere).

Then again, I may be remembering all wrong...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy




Re: [GENERAL] getting list of tables from command line

2007-11-01 Thread hubert depesz lubaczewski
On Wed, Oct 31, 2007 at 08:01:41AM -0700, Craig White wrote:
> my script looks like this...
> (all I want is to get a list of the tables into a text file pg_tables)

everybody else showed some ways, but i'll ask a question:
i hope you're not treating it as a backup? because when you do it that
way (on a table-by-table basis) the "backup" is practically useless, as it
doesn't have internal consistency.

if you want to make a backup, perhaps you can tell us exactly what you
want to do, and why standard pg_dump is not enough.

if you're fully aware of the issue i pointed out - sorry, i'm just trying to
make sure you won't shoot yourself in the foot.

depesz

-- 
quicksil1er: "postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV!" :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)



Re: [GENERAL] (Never?) Kill Postmaster?

2007-11-01 Thread Scott Marlowe
On 10/31/07, Christian Schröder <[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
> >> Ok, you wrote "Postgres will recover automatically", but could this take
> >> several minutes?
> >>
> >
> > Yeah, potentially.  I don't suppose you have any idea how long it'd been
> > since your last checkpoint, but what do you have checkpoint_timeout and
> > checkpoint_segments set to?
> >
>
> I did not change these parameters from their default values, so
> checkpoint_timeout is 5 min and checkpoint_segments is 8.
>
> > What I'd like to know about is why the child process was unresponsive to
> > SIGINT in the first place.  There's little we can do about long-running
> > plpython functions, for instance, but if it was looping in Postgres
> > proper then we should do something about that.  Can you reproduce this
> > problem easily?
> >
>
> Unfortunately not. I have tried the same query and it took only about 1
> sec to complete. In fact, it's a simple seq scan with a single filter
> condition. No user defined functions are involved.
> Maybe it has something to do with the users connecting from their
> Windows machines to the PostgreSQL server using psqlodbc. On the other
> hand, it has not been the first time that such a user connection had to
> be terminated, and we never experienced this problem before.
> If I see the phenomenon again I will use strace or something similar to
> find out what the backend process is doing.

Tom, is it possible the backend was doing something that couldn't be
immediately interrupted, like a long wait on IO or something?



[GENERAL] Scrolling cursors in PL/PgSQL

2007-11-01 Thread Dawid Kuroczko
Hello.

Is there a way to scroll a cursor from within PL/PgSQL?

I tried EXECUTE, but:

ERROR:  cannot manipulate cursors directly in PL/pgSQL
HINT:  Use PL/pgSQL's cursor features instead.

The idea would be that PL/pgsql function would look
through (all) query results, then rewind the cursor and
finally return it.

My fellow developers have created a function which runs
the (complex) query twice: first a count(*) on the result
set, then returning the row count and a cursor to the results.
Then they created a version which uses a temporary
table as a placeholder for the query results, and returns the
row count and a cursor to select * from tmp_table.

Now I imagined a function which would open the cursor,
MOVE FORWARD ALL, then MOVE ABSOLUTE 0;
and return count and a cursor, yet it seems it won't work.

Any other idea how to efficiently solve such a problem?

   Regards,
 Dawid



Re: [GENERAL] select random order by random

2007-11-01 Thread piotr_sobolewski
On Thursday 01 November 2007 17:16, Scott Marlowe wrote:

> > > I was very surprised when I executed such SQL query (under PostgreSQL
> > > 8.2):
> > > select random() from generate_series(1, 10) order by random();
> > >
> (...)
> My guess is that it was recognized by the planner as the same function
> and evaluated once per row only.

I also had that hypothesis, but I think this query:
select random(), random() from generate_series(1, 10) order by random();
contradicts it.

-- 
Piotr Sobolewski
http://www.piotrsobolewski.w.pl



Re: [GENERAL] select random order by random

2007-11-01 Thread piotr_sobolewski
On Thursday 01 November 2007 17:08, brian wrote:

> > I was very surprised when I executed such SQL query (under PostgreSQL
> > 8.2): select random() from generate_series(1, 10) order by random();
> >
> > I don't understand - why the result is like that? It seems like in each
> > row both random()s were giving the same result. Why is it like that? What
> > caused it?
>
> Your query specifically requested that the result be ordered by the
> column "random" in the result set (the default ordering direction being
> ASC). Your query is semantically identical to:
> SELECT random() AS foo FROM generate_series(1, 10) ORDER BY foo ASC;

I also had that theory. But if I run this query:
select x from generate_series(1, 10) as x order by random();
the answer is shuffled in random order.
So why is "random()" treated as a column name in one case and as a function 
name in the other?

And when I run this query:
select random() as xxx, random() from generate_series(1, 10) order by 
random();
your theory would predict that the answer is ordered by the second column (as 
the first one is renamed to 'xxx'). However, in reality the answer is in 
random order.

> I should think that you would get a better result if you dropped the
> ORDER BY clause.

Yes, I know. However, I once ran this query just for fun and curiosity, and 
found that I didn't know why it works like that. Since then I have been 
thinking about it and trying to understand it - if in this case Postgres 
behaves in a way I don't understand, I probably don't understand it well at 
all.

-- 
Piotr Sobolewski
http://www.piotrsobolewski.w.pl



Re: [GENERAL] Solaris 10, mod_auth_pgsql2

2007-11-01 Thread Tom Lane
Jeff MacDonald <[EMAIL PROTECTED]> writes:
> sandbox# /opt/csw/apache2/sbin/apxs -i -a -c -l /opt/csw/postgresql/ 

I think you want -I not -l in front of that include path ...

regards, tom lane



Re: [GENERAL] select random order by random

2007-11-01 Thread Tom Lane
Richard Huxton <[EMAIL PROTECTED]> writes:
> Gregory Stark wrote:
>> This does strike me as wrong. random() is marked volatile and the planner
>> ought not collapse multiple calls into one. 

> I think I agree with the earlier poster. Surely these two queries should 
> be equivalent?

> SELECT random() FROM generate_series(1, 10) ORDER BY random();
> SELECT random() AS foo FROM generate_series(1, 10) ORDER BY foo;

Well, the latter case is why it acts that way, but Greg has a point that
when a volatile function is involved maybe they shouldn't be the same.
OTOH it's always been like that, and in the absence of a clear reason
to change it I'm inclined to leave it alone.

(BTW, this is not the planner's fault; the collapsing of the two
targetlist entries into one happens in the parser.)

regards, tom lane



Re: [GENERAL] select random order by random

2007-11-01 Thread Sam Mason
On Thu, Nov 01, 2007 at 04:49:16PM +, Richard Huxton wrote:
> Gregory Stark wrote:
> >This does strike me as wrong. random() is marked volatile and the planner
> >ought not collapse multiple calls into one. 
> 
> I think I agree with the earlier poster. Surely these two queries should 
> be equivalent?
> 
> SELECT random() FROM generate_series(1, 10) ORDER BY random();
> SELECT random() AS foo FROM generate_series(1, 10) ORDER BY foo;

If they were pure (in the functional programming sense) then this would
be a correct optimisation.  However, if they're marked as volatile then
they should be called independently---they're not pure anymore and
you're calling the code for its side-effects and optimising out the
either call changes the semantics.  Try playing around with monads in
Haskell or uniqueness types in Clean, they help to clarify what's going
on when you call a "function" in an impure language.
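The same point can be made outside SQL. In this illustrative Python sketch
(not from the thread), collapsing two volatile calls into a single evaluation
visibly changes the result:

```python
import random

random.seed(1)  # fixed seed so the demonstration is repeatable

# Two genuinely independent volatile calls per "row":
independent = [(random.random(), random.random()) for _ in range(5)]

# The "collapsed" interpretation: one evaluation per row, value reused:
random.seed(1)
collapsed = []
for _ in range(5):
    v = random.random()
    collapsed.append((v, v))

# Every collapsed pair is equal; the independent pairs are not.
assert all(a == b for a, b in collapsed)
assert any(a != b for a, b in independent)
```

For a pure function the two forms would be indistinguishable, which is
exactly why the optimisation is only safe for non-volatile functions.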


  Sam



[GENERAL] test

2007-11-01 Thread Greg Quinn
 



Re: [GENERAL] select random order by random

2007-11-01 Thread Chris Browne
[EMAIL PROTECTED] ("piotr_sobolewski") writes:
> I was very surprised when I executed such SQL query (under PostgreSQL 8.2):
> select random() from generate_series(1, 10) order by random();
>
> I thought I would receive ten random numbers in random order. But I received 
> ten random numbers sorted numerically:
>   random
> ---
>  0.102324520237744
>   0.17704638838768
>  0.533014383167028
>   0.60182224214077
>  0.644065519794822
>  0.750732169486582
>  0.821376844774932
>   0.88221683120355
>  0.889879426918924
>  0.924697323236614
> (10 rows)
>
> I don't understand - why the result is like that? It seems like in each row 
> both random()s were giving the same result. Why is it like that? What caused 
> it?

At first, I thought this was unsurprising, but it's pretty easy to
show that there's more going on than meets the eye...  It is a bit
more clear that something interesting is going on if you add extra
columns, and name them all.

For instance, consider:

[EMAIL PROTECTED]:5433=# select random() as r1, random() as r2, random() as r3 
from generate_series(1,10) order by r3;
r1 |r2 |r3 
---+---+---
 0.246548388153315 | 0.700139089021832 | 0.119033687748015
 0.627153669018298 | 0.813135434407741 | 0.197322080843151
 0.306632998399436 | 0.545771937351674 |  0.25644090725109
 0.345610864460468 | 0.474996185861528 | 0.350776285864413
 0.580351672600955 | 0.673816084861755 | 0.443187412340194
  0.73298008274287 | 0.756699057295918 | 0.594754341989756
 0.932091740425676 | 0.383943342603743 | 0.749452064745128
 0.955010122619569 | 0.972370331641287 | 0.893978256732225
 0.675367069896311 | 0.800306641962379 | 0.922712546307594
 0.778622157406062 |  0.51328693702817 | 0.978598471730947
(10 rows)

You can see that it's ordering by the third column.

If I replicate your query, with extra columns, AND NAMES, I get the following:

[EMAIL PROTECTED]:5433=# select random() as r1, random() as r2, random() as r3 
from generate_series(1,10) order by random();
 r1 | r2 |r3 
++---
 0.0288224648684263 |  0.904462072532624 |  0.27792159980163
  0.144174488261342 |  0.406729203648865 | 0.452183415647596
  0.263208176475018 |  0.752340068109334 | 0.927179601509124
  0.443778183776885 |  0.197728976141661 | 0.556072968058288
  0.613984462339431 | 0.0589730669744313 | 0.472951539326459
  0.641100264620036 |  0.152739099226892 | 0.528443300165236
  0.700987075921148 |  0.160180815029889 | 0.752044326625764
  0.778274529613554 |  0.579829142428935 | 0.078228241764009
  0.849023841321468 |  0.570575307123363 | 0.742937533650547
  0.870425369590521 |  0.837595224380493 | 0.986238476354629
(10 rows)

It is indeed somewhat curious that the query parser chose to interpret
"order by random()" as referring to column #1.
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "linuxfinances.info")
http://cbbrowne.com/info/lisp.html
"...I'm not one  of those who think Bill Gates is  the devil. I simply
suspect that if Microsoft ever met up with the devil, it wouldn't need
an interpreter." -- Nicholas Petreley, InfoWorld, Sept 16, 1996



Re: [GENERAL] select random order by random

2007-11-01 Thread Richard Huxton

Gregory Stark wrote:
> "Scott Marlowe" <[EMAIL PROTECTED]> writes:
>> I think that Piotr expected the random() to be evaluated in both
>> places separately.
>>
>> My guess is that it was recognized by the planner as the same function
>> and evaluated once per row only.
>>
>> If you try this:
>>
>> select random() from generate_series(1, 10) order by random()*1;
>>
>> then you'll get random ordering.
>
> This does strike me as wrong. random() is marked volatile and the planner
> ought not collapse multiple calls into one.

I think I agree with the earlier poster. Surely these two queries should
be equivalent?

SELECT random() FROM generate_series(1, 10) ORDER BY random();
SELECT random() AS foo FROM generate_series(1, 10) ORDER BY foo;


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] select random order by random

2007-11-01 Thread Gregory Stark
"Scott Marlowe" <[EMAIL PROTECTED]> writes:

> I think that Piotr expected the random() to be evaluated in both
> places separately.
>
> My guess is that it was recognized by the planner as the same function
> and evaluated once per row only.
>
> If you try this:
>
> select random() from generate_series(1, 10) order by random()*1;
>
> then you'll get random ordering.

This does strike me as wrong. random() is marked volatile and the planner
ought not collapse multiple calls into one. Note that it affects other
volatile functions too:

postgres=#  select nextval('s') from generate_series(1, 10) order by 
nextval('s');
 nextval 
-
   1
   2
   3
   4
   5
   6
   7
   8
   9
  10
(10 rows)

postgres=#  select nextval('s') from generate_series(1, 10) order by 
nextval('s');
 nextval 
-
  11
  12
  13
  14
  15
  16
  17
  18
  19
  20
(10 rows)

That's certainly not how I remembered it working but I'm not sure I ever
tested it before.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's RemoteDBA services!



Re: [GENERAL] select random order by random

2007-11-01 Thread Scott Marlowe
On 11/1/07, Lee Keel <[EMAIL PROTECTED]> wrote:
> > Dear sirs,
> >
> > I was very surprised when I executed such SQL query (under PostgreSQL
> > 8.2):
> > select random() from generate_series(1, 10) order by random();
> >
> > I thought I would receive ten random numbers in random order. But I
> > received
> > ten random numbers sorted numerically:
> >   random
> > ---
> >  0.102324520237744
> >   0.17704638838768
> >  0.533014383167028
> >   0.60182224214077
> >  0.644065519794822
> >  0.750732169486582
> >  0.821376844774932
> >   0.88221683120355
> >  0.889879426918924
> >  0.924697323236614
> > (10 rows)
> >
> > I don't understand - why the result is like that? It seems like in each
> > row
> > both random()s were giving the same result. Why is it like that? What
> > caused
> > it?
>
> Would this not have to do with the 'order by' you added to the end of the
> statement?  If you remove the order by clause, then it works for me...

I think that Piotr expected the random() to be evaluated in both
places separately.

My guess is that it was recognized by the planner as the same function
and evaluated once per row only.

If you try this:

select random() from generate_series(1, 10) order by random()*1;

then you'll get random ordering.



Re: [GENERAL] select random order by random

2007-11-01 Thread Lee Keel
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of piotr_sobolewski
> Sent: Thursday, November 01, 2007 9:25 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] select random order by random
> 
> Dear sirs,
> 
> I was very surprised when I executed such SQL query (under PostgreSQL
> 8.2):
> select random() from generate_series(1, 10) order by random();
> 
> I thought I would receive ten random numbers in random order. But I
> received
> ten random numbers sorted numerically:
>   random
> ---
>  0.102324520237744
>   0.17704638838768
>  0.533014383167028
>   0.60182224214077
>  0.644065519794822
>  0.750732169486582
>  0.821376844774932
>   0.88221683120355
>  0.889879426918924
>  0.924697323236614
> (10 rows)
> 
> I don't understand - why the result is like that? It seems like in each
> row
> both random()s were giving the same result. Why is it like that? What
> caused
> it?
> 
> --
> Piotr Sobolewski
> http://www.piotrsobolewski.w.pl
> 
> 
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
[Lee Keel] 

Would this not have to do with the 'order by' you added to the end of the
statement?  If you remove the order by clause, then it works for me...

-LK



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] select random order by random

2007-11-01 Thread brian

piotr_sobolewski wrote:
Dear sirs, 


I was very surprised when I executed such SQL query (under PostgreSQL 8.2):
select random() from generate_series(1, 10) order by random();

I thought I would receive ten random numbers in random order. But I received 
ten random numbers sorted numerically:

  random
---
 0.102324520237744
  0.17704638838768
 0.533014383167028
  0.60182224214077
 0.644065519794822
 0.750732169486582
 0.821376844774932
  0.88221683120355
 0.889879426918924
 0.924697323236614
(10 rows)

I don't understand - why the result is like that? It seems like in each row 
both random()s were giving the same result. Why is it like that? What caused 
it?




Your query specifically requested that the result be ordered by the 
column "random" in the result set (the default ordering direction being 
ASC). Your query is semantically identical to:


SELECT random() AS foo FROM generate_series(1, 10) ORDER BY foo ASC;

I should think that you would get a better result if you dropped the 
ORDER BY clause.


brian

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Solaris 10, mod_auth_pgsql2

2007-11-01 Thread Jeff MacDonald

Greetings.

I'm trying to compile mod_auth_pgsql2 for Solaris 10. I have the
blastwave packages installed for libpq, apache2 and apache2_devel.


I'm following this how to

http://archives.postgresql.org/pgsql-performance/2007-10/msg00076.php

However, when I run it I get this:

sandbox# /opt/csw/apache2/sbin/apxs -i -a -c -l /opt/csw/postgresql/include -L /opt/csw/postgresql/lib -lpq mod_auth_pgsql.c
/opt/csw/apache2/share/build/libtool --silent --mode=compile cc -prefer-pic -DSSL_EXPERIMENTAL -DSSL_ENGINE -xO3 -xarch=386 -xspace -xildoff -I/opt/csw/bdb44/include -I/opt/csw/include -I/opt/csw/bdb44/include -I/opt/csw/include -DSOLARIS2=8 -D_POSIX_PTHREAD_SEMANTICS -D_REENTRANT -mt -I/opt/csw/apache2/include -I/opt/csw/apache2/include -I/opt/csw/apache2/include -I/opt/csw/bdb44/include -I/opt/csw/include -c -o mod_auth_pgsql.lo mod_auth_pgsql.c && touch mod_auth_pgsql.slo

"mod_auth_pgsql.c", line 117: cannot find include file: "libpq-fe.h"
"mod_auth_pgsql.c", line 520: undefined symbol: PGresult
"mod_auth_pgsql.c", line 520: undefined symbol: pg_result
"mod_auth_pgsql.c", line 521: undefined symbol: PGconn
"mod_auth_pgsql.c", line 521: undefined symbol: pg_conn
"mod_auth_pgsql.c", line 533: warning: implicit function declaration: PQsetdbLogin
"mod_auth_pgsql.c", line 536: warning: implicit function declaration: PQstatus
"mod_auth_pgsql.c", line 536: undefined symbol: CONNECTION_OK
"mod_auth_pgsql.c", line 537: warning: implicit function declaration: PQreset
"mod_auth_pgsql.c", line 540: warning: implicit function declaration: PQerrorMessage
"mod_auth_pgsql.c", line 545: warning: implicit function declaration: PQfinish
"mod_auth_pgsql.c", line 555: warning: implicit function declaration: PQexec
"mod_auth_pgsql.c", line 565: warning: implicit function declaration: PQresultStatus
"mod_auth_pgsql.c", line 565: undefined symbol: PGRES_EMPTY_QUERY
"mod_auth_pgsql.c", line 566: warning: implicit function declaration: PQclear
"mod_auth_pgsql.c", line 571: undefined symbol: PGRES_TUPLES_OK
"mod_auth_pgsql.c", line 579: warning: implicit function declaration: PQntuples
"mod_auth_pgsql.c", line 580: warning: implicit function declaration: PQgetvalue
"mod_auth_pgsql.c", line 580: warning: improper pointer/integer combination: op "="
"mod_auth_pgsql.c", line 841: warning: implicit function declaration: crypt
"mod_auth_pgsql.c", line 1123: warning: syntax error: empty declaration
cc: acomp failed for mod_auth_pgsql.c
apxs:Error: Command failed with rc=65536

Now the strange thing is this

sandbox# ls -l /opt/csw/postgresql/include/libpq-fe.h
-rw-r--r--   1 root bin 17983 Feb 16  2007 /opt/csw/postgresql/include/libpq-fe.h
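One hedged guess, given that the header clearly exists: in the apxs invocation above, the include path was supplied with lowercase -l (which names a library to link) rather than uppercase -I (which adds an include directory), so the compiler likely never saw /opt/csw/postgresql/include. A corrected invocation might look like this (paths assumed from the original command, untested on Solaris):

```shell
# Sketch only: -I adds the include directory, -L/-l point at libpq for linking.
/opt/csw/apache2/sbin/apxs -i -a -c \
  -I/opt/csw/postgresql/include \
  -L/opt/csw/postgresql/lib -lpq \
  mod_auth_pgsql.c
```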


I'm not that great with compiling and whatnot, so I'm at a bit of a  
loss.


Advice ?

Thanks.


Jeff MacDonald
Interchange Technologies
613 292 6239





---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] AutoVacuum Behaviour Question

2007-11-01 Thread Jeff Amiel


Bruce Momjian wrote:


No, it isn't.  Please add a TODO item about it:
 * Prevent long-lived temp tables from causing frozen-Xid advancement
   starvation




  


Can somebody explain this one to me? Because of our auditing technique,
we have many LONG-lived temp tables (one per pooled connection), so as
long as the pool isn't disturbed, these temp tables can exist for a
long time (weeks... months?).


(previous thread about our use of temp tables and autovacuum/xid issues)
http://archives.postgresql.org/pgsql-general/2007-01/msg00690.php
http://archives.postgresql.org/pgsql-general/2007-01/msg00691.php





[GENERAL] select random order by random

2007-11-01 Thread piotr_sobolewski
Dear sirs, 

I was very surprised when I executed such SQL query (under PostgreSQL 8.2):
select random() from generate_series(1, 10) order by random();

I thought I would receive ten random numbers in random order. But I received 
ten random numbers sorted numerically:
  random
---
 0.102324520237744
  0.17704638838768
 0.533014383167028
  0.60182224214077
 0.644065519794822
 0.750732169486582
 0.821376844774932
  0.88221683120355
 0.889879426918924
 0.924697323236614
(10 rows)

I don't understand - why the result is like that? It seems like in each row 
both random()s were giving the same result. Why is it like that? What caused 
it?

-- 
Piotr Sobolewski
http://www.piotrsobolewski.w.pl


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] anybody attending FOSS.in 2007 ??

2007-11-01 Thread Mayuresh Nirhali

Hello,

http://foss.in/2007, the most happening open source event in India, is
taking place in Bangalore between Dec 4th and Dec 8th.

Is anybody from the postgres community planning to attend?
I thought it would be a good opportunity to get together and talk about
Postgres, and if there is enough interest we could even propose a
BOF session or something.

What say??

Cheers
Mayuresh

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] If Postgresql DLL files causing any kind of conflict for two different versions of Postgresql installed on one machine

2007-11-01 Thread Magnus Hagander
dhiraj bhosale wrote:
> Hi,
> 
> I would like to know if there is any kind of conflict or limitation
> occur between postgresql dll files for two different setups(different
> versions) in different data directories and running on two serepate
> ports. And if it has any conflict can anyone put some more details
> regarding the same.

There shouldn't be, as long as you are using 8.1 or newer. 8.0 installed
shared DLLs in system32, which could cause a conflict; 8.1 and newer
keep the DLLs in a version-local directory, so separate installations
don't interfere.

//Magnus

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Attaching information about users

2007-11-01 Thread David Goodenough
What is the proper way to attach additional information about users of
a database? I am using their DB login as their application ID, and I
need one or more tables that remember preferences and other
application-level things, keyed by their current userid. I want to be
sure that I only have entries for people who are currently users in
this DB. I suppose what I want is foreign key constraints, but those
would reference a PostgreSQL-specific table (I guess; or is the table
that holds the list of user IDs, and its field names, common across
DBs?).

David
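A minimal sketch of one common approach, assuming the application simply keys preferences by the role name; note this is an assumption, not something the database enforces, since foreign keys cannot reference system catalogs such as pg_authid, so role existence is not checked by any constraint:

```sql
-- Hypothetical preferences table keyed by login role name.
-- current_user is standard SQL; the catalog of roles (pg_authid/pg_roles)
-- is PostgreSQL-specific and cannot be the target of a foreign key.
CREATE TABLE user_prefs (
    username text NOT NULL DEFAULT current_user,
    pref_key text NOT NULL,
    pref_val text,
    PRIMARY KEY (username, pref_key)
);
```

Rows for dropped roles would have to be cleaned up by the application or a periodic job.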

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] If Postgresql DLL files causing any kind of conflict for two different versions of Postgresql installed on one machine

2007-11-01 Thread dhiraj bhosale
Hi,

I would like to know whether any kind of conflict or limitation can
occur between PostgreSQL DLL files when two different setups (different
versions) use different data directories and run on two separate ports.
If a conflict is possible, can anyone provide more details?


Thanks,
Dhiraj


Re: [GENERAL] day of week

2007-11-01 Thread Tomáš Vondra

Hi,

I have records with date column. Is there a way I can get which day of 
week this date is?


http://www.postgresql.org/docs/8.2/interactive/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT

t.v.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] day of week

2007-11-01 Thread A. Kretschmer
On Wed, 31.10.2007 at 16:34:44 +0200, Anton Andreev wrote:
> Hi,
> 
> I have records with date column. Is there a way I can get which day of 
> week this date is?

Yes, no problem: select extract(dow from date).
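A minimal sketch (the date is just an example; in PostgreSQL, dow runs 0 = Sunday through 6 = Saturday):

```sql
-- extract(dow from ...) returns 0 (Sunday) through 6 (Saturday)
select d as the_date,
       extract(dow from d) as day_of_week
from (select date '2007-11-01' as d) as t;
-- 2007-11-01 was a Thursday, so day_of_week is 4
```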


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] day of week

2007-11-01 Thread Thomas Kellerer

Anton Andreev wrote on 31.10.2007 15:34:

Hi,

I have records with date column. Is there a way I can get which day of 
week this date is?


What about the extract() function with the dow parameter?



Thomas


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] day of week

2007-11-01 Thread Pavel Stehule
On 31/10/2007, Anton Andreev <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have records with date column. Is there a way I can get which day of
> week this date is?
>
> Cheers,
> Anton

Hello

http://www.postgresql.org/docs/8.2/interactive/functions-datetime.html
EXTRACT or function date_part

Regards
Pavel Stehule

>
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match
>

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] day of week

2007-11-01 Thread Anton Andreev

Hi,

I have records with date column. Is there a way I can get which day of 
week this date is?


Cheers,
Anton

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] how can I shut off "more"?

2007-11-01 Thread Filip Rembiałkowski
2007/10/31, Gauthier, Dave <[EMAIL PROTECTED]>:
> When I run a query, and the number of lines exceeds what the screen can
> hold,  the results seem to get piped into "more" (or "less").  How can I
> turn that off and just have everything stream out without stopping?  I tried
> --echo-all, but that doesn't seem to do the trick.

man psql
search for 'pager'


-- 
Filip Rembiałkowski

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] REQ: pgagent to send out alert/emails if error occurs

2007-11-01 Thread Ow Mun Heng
Hi All,

Thanks to Dave Page's instructions, I'm now a pretty happy camper with
pgadmin's pgagent. It provides a graphical interface for the non-*nix
inclined people (otherwise a sh script would have done nicely).

Anyway, I've got a request, and I'm not sure if this has been talked
about (trawling the internet, I can't really find any/much docs on
pgagent). For pgagent job steps, there are 3 options to choose from:

* Fail
* Succeed
* Ignore

If Fail, the whole job fails. If Succeed, it will go on; if Ignore, it
will ignore the error and go on. (Actually, to be frank, I'm confused
as to the difference between Succeed and Ignore, but anyway, I digress.)

The crux of the issue is what happens when an error does occur, be it
via duplicate primary key values or X: the step will just be marked as
FAIL and the job continues happily along to the next step (if I mark it
as ignore). The problem is, the error will just sit there and _not_ get
fixed, because no alert is sent out, unless someone goes in to check on
the status; by that time, the job is already stale and old.

I've tried to find some kind of mail-sending capability; I found
pgsendmail and pgmail, but in my testing they didn't really work.

Hence I'm asking here for other methods, or whether I can make a
"feature" request for pgagent.

PS: I've read somewhere about using LISTEN/NOTIFY, but am unclear as to
_what_ it is exactly...
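For the PS, a hedged sketch of the LISTEN/NOTIFY idea, with a made-up channel name (in 8.2, NOTIFY carries only the event name, no payload): one session subscribes, any session that detects a failure raises the event, and a small daemon holding the listening connection could then send the alert e-mail:

```sql
-- In the monitoring session (e.g. a small daemon that sends the e-mail):
LISTEN pgagent_job_failed;

-- Wherever the failure is detected (channel name is hypothetical):
NOTIFY pgagent_job_failed;
```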


Thanks

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match