Re: [GENERAL] close database, nomount state

2008-04-29 Thread paul rivers

[EMAIL PROTECTED] wrote:

Hello,
I want to ask if there is something like nomount state or close database state
in which I can access postgresql to drop a database or to do some other stuff.

Because when there are some connections, drop database is not
possible. Or is this done some other way?

Lukas Houf
  


Short answer-- no.

Longer answer-- there's really no need for the Oracle-esque nomount 
state in Pg. If you're doing media recovery, it's very much all or 
nothing, cluster-wide. You are not going to do media recovery for a set 
of tablespaces, for example. If you'd like to drop a database, you can 
cut off connections (say, via pg_hba.conf or whatever floats your boat) 
and drop it with a single command. It's not such a big deal as it is in 
Oracle.
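
For illustration, a rough sketch of that (the database name and the
pg_hba.conf line are made up; adjust to taste):

# pg_hba.conf -- put a reject line ahead of the usual entries, then reload
host    doomed_db    all    0.0.0.0/0    reject

-- from psql, connected to some other database: confirm nothing is still
-- attached, then drop
SELECT datname, procpid FROM pg_stat_activity WHERE datname = 'doomed_db';
DROP DATABASE doomed_db;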


If this doesn't answer your question, could you say more about what your 
issue is?


Regards,
Paul



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SQL injection, php and queueing multiple statement

2008-04-12 Thread paul rivers

Ivan Sergio Borgonovo wrote:

Yeah... but how can I effectively enforce the policy that ALL input
will be passed through prepared statements?
  


Code reviews are about the only way to enforce this.



If I can't, and I doubt there is a system that will let me enforce
that policy at a reasonable cost, why not providing a safety net that
will at least raise the bar for the attacker at a very cheap cost?
  


How do you do this? Disallow string concatenation and/or variable 
interpolation for any string that's going to be shipped off to the 
database? Do you parse the SQL string according to the rules of any 
backend database you might be talking to, to see if you have a where 
clause not using a prepared statement? i.e. - Nothing is going to work here.


You're stuck with making sure developers know the most rudimentary 
things about talking to a database.
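
For what it's worth, the server-side form of what the drivers do is easy to
demo in psql (table and column names here are invented):

-- a minimal sketch of a parameterized query: the user-supplied value is only
-- ever bound as data, never spliced into the SQL text
PREPARE find_user(text) AS
    SELECT user_id, username FROM app_users WHERE username = $1;
EXECUTE find_user('anything; DROP TABLE app_users');  -- stays an inert string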




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SQL injection, php and queueing multiple statement

2008-04-12 Thread paul rivers

Gregory Stark wrote:

paul rivers [EMAIL PROTECTED] writes:

  

If I can't, and I doubt there is a system that will let me enforce
that policy at a reasonable cost, why not providing a safety net that
will at least raise the bar for the attacker at a very cheap cost?
  

How do you do this? Disallow string concatenation and/or variable interpolation
for any string that's going to be shipped off to the database? 



Actually there is a system that can do this. Perl with the -T option. It keeps
track of which strings are tainted by user-input and functions like eval
will cause errors if you try to pass them a tainted string. The database
drivers support this and will trigger an error if they're passed a tainted
string.

  


Good point. What happens in the case I query a string from the database, 
and use this result to build another sql string via concatenation? 
Assume the value in the database came from user input, albeit via 
another source and not this script. Will taint catch this? (Genuine 
question - I don't know.)





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] table size and storage location

2008-03-25 Thread paul rivers

chuckee wrote:

paul rivers-2 wrote:
  

chuckee wrote:


1) how do I find out the size, in MB, of a particular table (called
'capture' in this case).
I tried entering the SQL query SELECT (pg_tablespace_size('capture'));
The result was the following:
ERROR:  tablespace capture does not exist
  
  
You're looking for pg_relation_size('capture') or 
pg_total_relation_size('capture'). A tablespace is a named location for 
creating objects.






Thanks but I still get the error 'ERROR:  relation capture does not exist'
when trying these two alternative functions you mention above. There is
definitely a table called 'capture' in my database!
  


Is the schema for capture in your search_path? If not, include that in 
the function call: function('yourschema.capture'). Otherwise, what 
version are you on? I don't know when these functions were added; 
perhaps you're stuck doing the math yourself on page counts in pg_class.
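
If you do end up doing the math yourself, a rough sketch (this assumes the
default 8 kB block size, counts only the table itself, and relpages is only as
fresh as the last VACUUM or ANALYZE):

SELECT relname, relpages * 8192 AS approx_bytes
FROM pg_class
WHERE relname = 'capture';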


Paul




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] table size and storage location

2008-03-23 Thread paul rivers

chuckee wrote:

1) how do I find out the size, in MB, of a particular table (called
'capture' in this case).
I tried entering the SQL query SELECT (pg_tablespace_size('capture'));
The result was the following:
ERROR:  tablespace capture does not exist
  


You're looking for pg_relation_size('capture') or 
pg_total_relation_size('capture'). A tablespace is a named location for 
creating objects.
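
For example, something along these lines (pg_size_pretty is there in
reasonably recent versions; without it you just get bytes back):

SELECT pg_size_pretty(pg_relation_size('capture')) AS table_only,
       pg_size_pretty(pg_total_relation_size('capture')) AS with_indexes_and_toast;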




2) how do I find out where the actual files for the database are stored on
my system?
Where is the default storage location?
  


The complete story is laid out in the docs here:
http://www.postgresql.org/docs/8.3/interactive/storage-file-layout.html


HTH,
Paul


-
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG secure for financial applications ...

2008-03-14 Thread paul rivers

Micah Yoder wrote:
Just curious, would PostgreSQL be considered secure for applications involving 
financial matters where the clients have a direct database logon?


First, to clarify, I'm not in a serious position to write such an application.  
I'm just wondering.  :-)  If it is possible, I may make a proof of concept 
application and document it on a public website.  Kind of for fun, but also 
as a learning experience
  


My $0.02 - if you're trying to be pragmatic about it, your starting 
point should be whatever audit regulations govern your definition of 
financial matters, and how well-worn the path is to compliance on 
Postgres.


Some audit regulations range from dubious to absurd, but they are still 
going to be what you have to answer to in the financial world. There are 
areas where Postgres will have difficulties, at least against the regs 
I've worked with, but IMHO these areas have little to do with real security.


Paul



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG secure for financial applications ...

2008-03-14 Thread paul rivers

Joshua D. Drake wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On Fri, 14 Mar 2008 02:00:39 -0600
Micah Yoder [EMAIL PROTECTED] wrote:

  

Maybe it's nuts to consider such a setup (and if you're talking a
major bank it probably is) ... and maybe not.  At this point it's
kind of a mental exercise.  :-)



If you don't have enough control over the application to handle that
type of situation, no database is going to serve your purposes.

Beyond that, PostgreSQL is one of the most flexible database systems
around when it comes to security and my company professionally supports
several financial firms using PostgreSQL as their core database.

Sincerely,

Joshua D. Drake
  


Is it possible to share what audit regulations you have been able to 
meet with Postgres? Do you deal with SOX or PCI regs that require an 
audit trail for DBAs and SAs (e.g. PCI v1.1 10.1)? Short of building in 
some Oracle-like audit vault, I don't see how you can do this without 
falling back to mitigating controls loopholes.


Paul



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgre vs MySQL

2008-03-14 Thread paul rivers

Tom Lane wrote:

In connection with my Red Hat duties I've had to look at it occasionally
:-(.  They definitely have a lower standard for commenting than we do.
I sure hope that there is unpublished documentation somewhere ... 


And cut into the very lucrative "figuring out the MySQL source code" 
book market??  No way.  There have been at least 3 books out in the last 
year or so on just that topic.


Paul

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgre vs MySQL

2008-03-12 Thread paul rivers

Reece Hart wrote:

On Tue, 2008-03-11 at 06:47 -0700, rrahul wrote:

Any major clients of the two.
You can add you own points too.


Perhaps someone can comment on current MySQL backup procedures.  I 
believe that MySQL used to (still does?) require a shutdown to be backed 
up. I don't know whether this was true for all engines or whether it 
might have been fixed. Having to shut down a database to make a backup 
is a non-starter for anything other than a toy (or read-only) 
database.


-Reece



For a database of InnoDB tables, people tend to replicate the database, 
and then backup the slave (unless the db is trivially small, in which 
case, mysqldump).  For MyISAM, you can back it up hot, or do the same 
replication thing as with InnoDB tables.  

For larger and active MySQL installations, it's not uncommon to see a 
MySQL database replicate to 2 or more slaves, and:


- use a slave to initialize any future additional slaves
- use a slave for backups
- promote a slave to master in case of master failure

There's the hot backup tool you can buy for InnoDB, but I've yet to meet 
anyone who's actually used it.


Paul



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pain of postgres upgrade with extensions

2008-03-12 Thread paul rivers

David Potts wrote:

This is not a flame about current or previous release of Postgres.

I have just gone through the awful experience of upgrading from Postgres
8.2 to 8.3 with a database that had one of the many Postgres extensions
included. The problem comes down to the way that Postgres extensions are
packaged up, each extension tends to define some extension specific
functions, and when you do a dump of the database these functions get included.
If you upgrade from one version of Postgres to another, you take a dump of
the database, which then needs to be upgraded if there have been any
changes in the extension.  The problem is that there doesn't seem
to be a way of dumping the database without including extension-specific
information.
  


Is this something that wouldn't be fixed by:

- dump 8.2 database
- load dump into 8.3 database
- for each extension, run the 8.2 drop extension script in 8.2's contrib
- for each extension, run the 8.3 install extension script in 8.3's contrib

??
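
Roughly, for a single extension (paths and the extension name are
hypothetical; contrib in the 8.2/8.3 era ships plain install/uninstall SQL
scripts):

-- from psql, connected to the freshly restored 8.3 database
\i /usr/local/pgsql-8.2/share/contrib/uninstall_pgcrypto.sql
\i /usr/local/pgsql-8.3/share/contrib/pgcrypto.sql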

Or is it a matter of easily keeping an inventory of what extension is 
installed in what db?


Paul





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgre vs MySQL

2008-03-12 Thread paul rivers

Alvaro Herrera wrote:

Ivan Sergio Borgonovo wrote:
  

On Wed, 12 Mar 2008 09:13:14 -0700
paul rivers [EMAIL PROTECTED] wrote:



For a database of InnoDB tables, people tend to replicate the
database, and then backup the slave (unless the db is trivially
  

That recalled me the *unsupported* feeling I have that it is easier
to setup a HA replication solution on MySQL.



Well, if you have a crappy system that cannot sustain concurrent load or
even be backed up concurrently with regular operation, one solution is
to write a kick-ass replication system.

The other solution is to enhance the ability of the system to deal with
concurrent operation.

We keep hearing how great all those Web 2.0 sites are; Slashdot, Flickr,
etc; and they all run on farms and farms of MySQL servers, because
MySQL replication is so good.  I wonder if replication is an actual
_need_ or it's there just because the other aspects of the system are so
crappy


"Kick-ass" imho really means really simple to set up and included as 
part of the standard db.


There are all kinds of corner cases that can bite you with MySQL 
replication. Offhand, I wager most of these (at least in InnoDB) result 
from the fact that the replication commit status of a transaction lives in 
the binlogs, which is not the same as the InnoDB database commit status in 
the .ibd files. Writing out binlog entries happens at a higher level than 
the storage engine, so it's not hard to imagine what can go wrong there. 
There are a few my.cnf settings that let you really roll the dice with 
data integrity based on this dichotomy, if you so choose.


In those high volume shops, imho replication is a requirement, but in 
part to overcome technical limitations of MySQL. Or to phrase it from a 
MySQL point of view, to do it the MySQL way. If you have 50-ish minutes, 
this video by the YouTube people talks about their evolution with MySQL 
(among many other things) :


http://video.google.com/videoplay?docid=-6304964351441328559

The summary from the video is:

- Start with a MySQL instance using InnoDB
- Go to 1-M replication, and use the replicants as read-only versions.
- Eventually the cost of replication outweighs the gains, so go to 
database sharding
- Keep 1-M replication within a shard group to allow easy backups of a 
slave, some read-only use of the slaves, and a new master in case of 
master failure (i.e. high availability)



Almost everyone finds MyISAM unworkable in large scale environments 
because of the repairs necessary post-crash.



Big complaints about MySQL high-volume shops often, imho, come back to :

- You can only have so many active threads in the InnoDB storage engine 
module at a time. See e.g.:


http://dev.mysql.com/doc/refman/5.0/en/innodb-parameters.html#option_mysqld_innodb_thread_concurrency

- Auto_increment columns as pkeys in InnoDB tables are practically 
required, yet severely limit scalability due to how a transaction 
locks the structure to get the next auto-increment value (significantly 
improved in 5.1)


- Shutting down a MySQL engine can take forever, partly due to dirty page 
writes, partly due to insert buffer merging. See:


http://dev.mysql.com/doc/refman/5.1/en/innodb-insert-buffering.html


There are other complaints you'd expect people to have, but don't seem 
to get talked about much, because people are so used to (from my point 
of view) working around them. For example, statistics on an InnoDB table 
are calculated when the table is first accessed, but not stored 
anywhere, so there are extra costs on database startup. The backup issue 
with InnoDB has already been covered. Tablespace management in InnoDB 
seems exceptionally primitive, and is helped somewhat by the 
tablespace-per-table option. There are many more, again imho.


Paul







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgre vs MySQL

2008-03-12 Thread paul rivers

Scott Marlowe wrote:

On Wed, Mar 12, 2008 at 1:02 PM, paul rivers [EMAIL PROTECTED] wrote:

  

 - Auto_increment columns as pkeys in InnoDB tables are practically
 required, yet severely limited scalability due to how a transaction
 would lock the structure to get the next auto-increment (significantly
 improved in 5.1)



Pretty sure they implemented the fix for that in an early 5.0 release.
 I remember chatting with Heikki Tuuri about it.
  


Definitely not fixed until 5.1, in fact not until very recently (5.1.22) :

http://bugs.mysql.com/bug.php?id=16979

Anyway, enough of that for me.  It's a Postgres list, and my list of 
MySQL complaints is far longer than my enthusiasm for documenting them.



Paul




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Connect to postgres from a dynamic IP

2008-03-03 Thread paul rivers

Jorge Godoy wrote:

Em Monday 03 March 2008 08:08:36 Raymond O'Donnell escreveu:
  

On 03/03/2008 11:01, dfx wrote:


The question is: Is there a method to avoid inserting the addresses of
the clients in the pg_hba.conf and to allow connections from the internet
with security assured only by username and password?
  

Yes, that's what people have been explaining: you insert a line
something like:

   host  [database]   [user]   0.0.0.0/0   md5



But make it hostssl instead of host, to require some cryptography in the 
channel used, specially to authenticate the connection.


Opening your access to everyone without crypto sounds like something you don't 
want to do.  Specially if users can change their own passwords...


My understanding is no password is sent in the clear with md5 per:

http://www.postgresql.org/docs/8.3/interactive/auth-methods.html#AUTH-PASSWORD
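
Roughly, the difference in pg_hba.conf looks like this (names are
placeholders):

# md5 keeps the password itself out of the clear, even over plain TCP
host     mydb    myuser    0.0.0.0/0    md5
# hostssl additionally encrypts the whole session, not just the authentication
hostssl  mydb    myuser    0.0.0.0/0    md5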


Paul


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Connect to postgres from a dynamic IP

2008-03-03 Thread paul rivers

Collin wrote:




But make it hostssl instead of host, to require some 
cryptography in the channel used, specially to authenticate the 
connection.


Opening your access to everyone without crypto sounds like something 
you don't want to do.  Specially if users can change their own 
passwords...


My understanding is no password is sent in the clear with md5 per:

http://www.postgresql.org/docs/8.3/interactive/auth-methods.html#AUTH-PASSWORD 




Paul

However, it depends on the sort of data you are accessing. Sending a 
MD5 password is all well and good but if your data consists of credit 
card info or trade secrets then you'll want that encrypted too.




Yes true, if your data is sensitive, go with SSL.

On the other hand, if you're sending credit card data around, you must 
comply with the PCI audit regulation, in which case there is exactly 
0.0% chance you're putting your database port on a public network.


Regards,
Paul


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Query meltdown: caching results

2008-02-27 Thread paul rivers

Gordon wrote:

On Feb 26, 5:26 pm, [EMAIL PROTECTED] (Gregory Stark) wrote:
  

Norman Peelman [EMAIL PROTECTED] writes:


My options are, as far as I can tell,

1) replace the Database PDO extending class with something else that

provides query results caching in PHP, or
2) get Postgres itself to cache the results of queries to avoid
running them repeatedly during a session.


You might consider looking at memcached. One way to use it would be to have
the PHP application check for the cached object first and use it rather than
do any database queries. Then you can use pgmemcached to allow triggers to
invalidate cached objects whenever the underlying data changes. (Or you could
even just use pl/php to update or invalidate the cached object through the
same code library)

--
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!

---(end of broadcast)---
TIP 6: explain analyze is your friend


The problem is that there is a vast number of redundant queries being
run.    the principle that objects should not depend on a knowledge
of the inner workings of unrelated objects.   Results caching would 
eliminate the problem of
the same queries being run over and over  
The problem is the mechanics of actually implementing this caching.

I'm using prepared statements almost exclusively throughout the design,
meaning that the PDOStatement class probably needs to be extended
somehow and my Database prepare()  I can't have been the first person to 
run up against this problem
  


With memcached, your methods to retrieve data go from "get data from db" 
to "get data from cache, and on cache miss get from db and leave a copy 
for the next guy in cache."   Updating the data is not much more 
complicated.  I don't see why this wouldn't work for you.  It won't 
compromise anything on the encapsulation front you are concerned about, 
and you can still use your prepared statements for hitting the db, etc.


Regards,
Paul




---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] reindexing

2008-02-27 Thread paul rivers

Lew wrote:

Tom Lane wrote:

There never was a 7.1.4 release, so I suspect the OP meant 7.4.1
 not that that speaks very much better for his software maintenance
habits.  Even with the more charitable interpretation, it's a version
that was obsoleted four years ago next week.


In my experience at various big-iron shops (government agencies, 
large health-care organizations and the like), four years is not a 
long time for enterprise software - a version often has to be at least 
four years old before the powers-that-be decide to try it.  One has 
only to look at how many organizations still use Oracle 8, or Java 
1.3, for example, to see how conservative many shops are with respect 
to upgrades.


I'm not saying they should be that conservative, but many 
organizations are and we must be ready to deal with that.




This is completely the opposite of my experience at a very large global 
financial company.  They are extremely concerned with staying current, 
and in fact audit regulations require it for any software not written 
in-house.  If they were still running Oracle 8, for example, they would 
fail internal audit precisely because it is no longer a supported Oracle 
version, and thus security and such patches are no longer available.  
The same would go for operating system patches, firmware, whatever.  The 
release cycle does tend to be slower (from quarterly to yearly) for, 
say, things like AIX or z/OS or DB2, but updates are coming out 
routinely [including security and bug fixes, as well as feature 
additions], and in my experience these shops are definitely keeping up.


The only places I've had direct experience with that tend to run very 
old versions of things are doing so for all the wrong reasons.  They 
seem to be learning, albeit slowly and painfully, the demerits of not 
keeping current.


Just my $0.02,
Paul


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Pains in upgrading to 8.3

2008-02-15 Thread paul rivers

Phoenix Kiula wrote:

I'm glad I didn't go from 8.2.3 to 8.3 straight!

http://ogasawalrus.com/blog/node/462

  
Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me.  However, 
unlike the blogger you cite, I read the directions before, not after, 
attempting it.


Paul


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Pains in upgrading to 8.3

2008-02-15 Thread paul rivers

Tony Caduto wrote:

paul rivers wrote:
 
Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me.  
However, unlike the blogger you cite, I read the directions before, 
not after, attempting it.


The blogger has a point about pg_dump and restore, it could be much 
better, for example
the backup process could be part of the server core and instead of 
having a fat client where most of the process is running on the 
client, an API could be
used where the backup is generated on the server, with options 
for leaving it on the server or transferring it to the client's PC.


Using pg_dump remotely is becoming a pain because it's not really 
backwards compatible with earlier releases, so you end up having to 
have multiple copies laying around to use on different server versions.


While Firebird is mostly inferior, its backup system is much nicer 
than PostgreSQL's system.  Firebird uses a backup API, so if you 
backup remotely there is no fat client needed and it eliminates all 
the dependency issues on the client side.  The client access library 
implements the API and that's it.
You of course could hack something similar on PGSQL by using SSH and 
remotely executing pg_dump on the server, but that does not really 
help on windows servers where SSH is not a common thing.


The backup data is coming back to the client regardless, so why not 
just return it as a result set?


Just my opinion on the matter, no flames please.



I agree with you 100% it would be nice if this weren't necessary, so no 
flames intended!  It's just if the blogger is going to use a software 
package, it's in his/her best interests to rtfm.  It's no good to write, 
say, a lot of tricky SQL that depends on transactional control and 
properties of certain isolation levels, and then be surprised when in 
MySQL I get odd results, especially when my tables span storage engine 
types.  If I did that, I would blame myself, not MySQL, even if I also 
thought MySQL should reconsider the behavior.  MySQL did warn me after 
all, in the docs.


I do agree it would be nice to change this aspect, and no, I've no clue 
how hard it would be.  As a model of ease and flexibility, Microsoft's 
SQL Server is very good in this respect, probably the easiest I've ever 
worked with (at least from v2000 - v2005, prior version upgrades were a 
little rockier).  Hot backups of full databases via T-SQL commands, 
in-place upgrades that convert page structures as necessary, turn 
archive log mode on/off dynamically, differential vs incremental 
backups, backups by tablespace, etc.  All in all, they got that part of 
their engine mostly right, excepting from problems in 2000 with 
relocating master database files (and got a nice head-start that 
direction from Sybase).


Paul



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Perceived weaknesses of postgres

2008-02-14 Thread paul rivers

Stephen Cook wrote:

Magnus Hagander wrote:
I would guess they're referring to the ability to pin a table into 
memory, so that it always stays in the cache regardless of what else 
the database is doing. There is a narrow use-case where this can be 
very useful, but it can also be a very dangerous tool (hint: if you 
pin a table that grows up to say 80-90% of your RAM size, your 
database will not be fast for anything else)


I know that MS removed this ability in SQL Server 2005 for pretty much 
this reason; its usefulness was greatly outweighed by people screwing 
up their systems by not calculating things correctly.




What they removed was dbcc pintable, which would specify that data 
pages for a table should be pinned in the general buffer cache as they 
are requested.  This feature didn't allow you to divide up your buffer 
cache, and so this rather pointless feature went away in SQL Server 
2005.  A few large, active pinned tables is obviously going to really 
wreck performance for most databases.


What SQL Server never had is more like what you get with Sybase, where 
you can partition your buffer cache into different regions of whatever 
fraction of the overall buffer cache you wish.  This is IMHO a far more 
useful implementation.  You can specify which regions a particular (set 
of) tables should use.  You can further specify different page sizes for 
each buffer (say 2k pages for the intensely oltp stuff, and 16k pages 
for the more olapy things).  You don't end up trying to leave *every* 
page of a table in memory this way, since LRU (or whatever method) will 
still recycle pages as needed within a named cache.  This was all there 
in version 11 of the product, which was the last one I ever worked with 
(and is very dated at this point).  This feature never made it to SQL 
Server since Microsoft went off and did their own thing well before this.


It's more this Sybase-type implementation I assumed the original poster 
was asking about? 

You can do something kind of similar in MySQL with the MyISAM storage 
engine, but I've not heard of too many shops actually doing this (who 
knows).   The MySQL manual seems to strongly recommend it, anyway.
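
For the curious, the MyISAM flavor is roughly this (cache and table names
are made up); note it pins index blocks into a named key cache, not data
pages:

SET GLOBAL hot_cache.key_buffer_size = 128*1024*1024;
CACHE INDEX lookup_table IN hot_cache;
LOAD INDEX INTO CACHE lookup_table;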


Paul





---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] using SSL in psql

2008-02-11 Thread paul rivers

Willy-Bas Loos wrote:

Hi,

How, using psql,  can i connect to a PostgreSQL server that has 
sslhost in the pg_hba.conf file?

I can't find the SSL option in the manpage.

thx,

WBL
Make sure both your server and client have ssl support compiled in.  I'm 
not sure if that's there by default with the provided binaries, but if 
you compiled your own, you specified --with-openssl.  Checking pg_config 
will be helpful here.


Make sure your server is really configured to provide SSL support.  
ssl=on in the postgresql.conf, and be sure to have at least server.key 
and server.crt (and optionally your root.crt and root.crl).


Make sure to ask for an ssl connection, especially if you have both ssl 
and non-ssl options in the pg_hba.conf.  Use the environment variable 
PGSSLMODE=require to force the issue and test with psql.
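
For example (host, database, and user are placeholders):

PGSSLMODE=require psql -h db.example.com -d mydb -U myuser

If the server side isn't actually offering SSL, that connection attempt will
fail rather than silently fall back.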


If successful, you will see a line similar to this above the ready prompt:

[Usual welcome banner snipped]
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)

yourdb=

Alternatively, the only programmatic way to tell that I know of is the 
sslinfo contrib module, where you can install the function 
ssl_is_used() in your db.


Manual re: server setup for SSL:
http://www.postgresql.org/docs/8.3/interactive/ssl-tcp.html

Useful environment variables for the client:
http://www.postgresql.org/docs/current/static/libpq-envars.html

Regards,
Paul


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Another question about partitioning

2007-11-28 Thread paul rivers

Alex Vinogradovs wrote:

Yes, I enter query manually while testing. Here are explain plans :

for select count(*) from poll_3 where eid = 72333

Aggregate  (cost=34697.64..34697.65 rows=1 width=0)
  ->  Seq Scan on poll_3  (cost=0.00..34650.40 rows=18893 width=0)
        Filter: (eid = 72333)


for for select count(*) from poll where eid = 72333

Aggregate  (cost=320001.59..320001.60 rows=1 width=0)
  ->  Append  (cost=0.00..319570.78 rows=172323 width=0)
        ->  Seq Scan on poll  (cost=0.00..27.50 rows=17 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_0 poll  (cost=0.00..14348.85 rows=9014 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_1 poll  (cost=0.00..34796.82 rows=18735 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_2 poll  (cost=0.00..34993.84 rows=18527 width=0)
  
Do you have appropriate check constraints defined on table poll?  Can 
you include a \d poll?


Also, what version is this?

Paul



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Another question about partitioning

2007-11-28 Thread paul rivers

paul rivers wrote:

Alex Vinogradovs wrote:

Yes, I enter query manually while testing. Here are explain plans :

for select count(*) from poll_3 where eid = 72333

Aggregate  (cost=34697.64..34697.65 rows=1 width=0)
  ->  Seq Scan on poll_3  (cost=0.00..34650.40 rows=18893 width=0)
        Filter: (eid = 72333)


for for select count(*) from poll where eid = 72333

Aggregate  (cost=320001.59..320001.60 rows=1 width=0)
  ->  Append  (cost=0.00..319570.78 rows=172323 width=0)
        ->  Seq Scan on poll  (cost=0.00..27.50 rows=17 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_0 poll  (cost=0.00..14348.85 rows=9014 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_1 poll  (cost=0.00..34796.82 rows=18735 width=0)
              Filter: (eid = 72333)
        ->  Seq Scan on poll_2 poll  (cost=0.00..34993.84 rows=18527 width=0)
  
Do you have appropriate check constraints defined on table poll?  Can 
you include a \d poll?


Also, what version is this?

Paul



Sorry, I should have asked: do you have check constraints defined on all 
the child poll tables?  So, what's \d poll_3 look like, etc?  You've 
already said you're sure constraint exclusion is on, but you're also 
sure postmaster was restarted too?
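
What the planner needs to see in each child's definition is something along
these lines (the eid range here is invented):

CREATE TABLE poll_3 (
    CHECK ( eid >= 60000 AND eid < 80000 )
) INHERITS (poll);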


Paul




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Another question about partitioning

2007-11-27 Thread paul rivers

Alex Vinogradovs wrote:

Hello all,


I have a table which is partitioned by range into 10 pieces with
constraint exclusion. Constraint exclusion is enabled in the server
configuration too. For some reason, queries to the master table
are still slower than direct queries against partitions. Is there
any real reason for that, or I should look into misconfiguration ?

Thanks!


Best regards,
Alex Vinogradovs
  
Is that true even if you type the query yourself in psql and ensure that 
the values for the partitioned columns are constants in the where 
clause?  Can you post an explain of the sql?


Paul


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Enabling password complexity for password authentication

2007-11-09 Thread paul rivers


Is there an existing way to enforce password complexity for password 
authentication?


I am not seeing anything in the docs, and I can only turn up this 
reference to a pending patch for 8.2 (bottom of page):


http://www.postgresql.org/community/weeklynews/pwn20061210

Thanks in advance for any suggestions.

Paul


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] young guy wanting (Postgres DBA) ammo

2007-11-02 Thread paul rivers

Tom Lane wrote:

Kevin Hunter [EMAIL PROTECTED] writes:
  

However, I'm not a DBA and only minimally know what's involved in doing
the job, so I don't have ammo to defend (or agree?) with my friend
when he says that Postgres requires a DBA and MySQL doesn't so that's
why they choose the latter.



He's full of it ... mysql is not any easier to run or tune.
  

Kevin,

If you work at an edu and want to talk to our DBA team at a large edu 
around operational DBA issues with MySQL, Postgres, Oracle and SQL 
Server, feel free to contact me off-list. 


My long-winded version of Tom's succinctness:

Our shop supports all four.  I am not a fanboi of any.  Postgres 
continues to impress our shop with how reliable the core engine is, and 
how concerned with documented behavior and how concerned with following 
standards the project is.


I don't want to just rip on MySQL, as there are some things it does do 
well, but the perceived "it's so easy" is a total illusion.  I 
personally have gone on rescue missions to various departments around 
our University to rescue data (sometimes very important research data, 
upon which future grants depend) from the clutches of dead or dying 
MySQL installations that were just "set up so easily" some time 
before.   Projects where no one knows the database engine where their 
data is stored always end badly.


The commercial database platforms and mysql continue to pitch how easy 
their engine is, how it tunes itself, etc, in order to compete in the 
marketing arena of the perception of total cost of ownership.  Less DBA 
time is cheaper, goes the thinking, and so the smart manager avoids 
strategic decisions that carry larger fixed overhead costs.  It makes 
for colorful, glossy brochures.


It does not really match reality, though, because how well and how many 
projects a team of X DBAs can support is more a function of how far the 
projects push the envelope with the database engine.   And this pushing 
can happen in a lot of directions: what tools are being used, how large 
are the datasets, how demanding are the uptime requirements and 
performance requirements, how many features of the engine does the 
project exploit, how are releases done, etc etc.  This stuff never 
factors into the marketing hype, but this is where it gets real.


If your shop must meet any formal audit standards, you will be 
hard-pressed to do this without a DBA.  If you *are* able to meet audit, 
then some other group(s) must be doing this work.  A rose by any other 
name costs just as much.


There are other reasons that make sense for a shop to decide what RDBMS 
is best for them, but the alleged reason of "MySQL requires less time" 
is definitely not one of them.


HTH,
Paul







---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Partitioning: how to exclude unrelated partitions?

2007-10-30 Thread paul rivers

Sean Z. wrote:

Hi,

I partitioned a table events into 31 tables, based on day of 
event_time.


I did 3 steps to setup partition, after creating partition tables:

1. Add the constraint to the 31 partition tables like:

ALTER TABLE events_day_1
  ADD CONSTRAINT events_day_1_event_time_check CHECK 
(date_part('day'::text, event_time) = 1::double precision);


[snip]

Do I miss anything?

Best,
Sean



I believe you can only partition on literal values.  You'll probably 
need to include a derived 'day' column in your table that you can 
populate in the rule.  Your query will then need to include the literal 
day value in the where clause, rather than the event_time.
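
A rough sketch of that approach (column and constraint names are invented):

-- add the derived column on the parent; it propagates to the children
ALTER TABLE events ADD COLUMN event_day int;

-- each child gets a literal check, e.g. for day 1
ALTER TABLE events_day_1 ADD CONSTRAINT events_day_1_day_check
    CHECK ( event_day = 1 );

-- queries then supply the literal day so the planner can exclude partitions
SELECT count(*) FROM events
WHERE event_day = 8 AND event_time >= '2007-10-08';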


Check out the caveats section for partitioning here (bottom of page, 5.9.5):

http://www.postgresql.org/docs/8.2/interactive/ddl-partitioning.html

It's not terribly obvious at first reading, as the focus is more on 
querying than designing the table.  Maybe that would be worth expanding 
on a little in the docs?


Regards,
Paul






---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] looking for some real world performance numbers

2007-10-21 Thread paul rivers

snacktime wrote:

I'm working through the architecture design for a new product.  We
have a small group working on this.  It's a web app that will be using
ruby on rails.  The challenge I'm running into is that the latest
conventional wisdom seems to be that since obviously databases don't
scale on the web, you should just not use them at all.  I have a group
of otherwise very bright people trying to convince me that a rdbms is
not a good place to store relational data because eventually it won't
scale.  And of course we don't even have version 1 of our product out
of the door.  I'll admit we do have a very good chance of actually
getting tons of traffic, but my position is to use a rdbms for
relational data, and then if and when it won't scale any more, deal
with it then.

So what would really help me is some real world numbers on how
postgresql is doing in the wild under pressure.  If anyone cares to
throw some out I would really appreciate it.   


It sounds like the RoR people are talking about any relational database, 
and not just Postgres.   Many very busy sites do use relational 
databases successfully.  So it can work.  Many others have failed.  So it 
can fail, if the situation is exceptionally unusual or, IMHO more 
likely, it's poorly implemented.


What's the main argument of their "won't scale" stance?  Why not set up a 
test case to prove or disprove it?  I don't think anything we can 
suggest based on what we know of your project will help, unless someone 
happens to throw out a nearly identical case.


I would be surprised if avoiding a database is a better solution.  But 
regardless, I would be more worried about using a technology when most 
of the core group doesn't believe in it.  That often leads to bad 
results, regardless of whether it should.


Paul




---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Partitioned table limitation

2007-10-02 Thread paul rivers


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Goboxe
 Sent: Monday, October 01, 2007 11:26 AM
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Partitioned table limitation
 
 On Oct 2, 1:38 am, [EMAIL PROTECTED] (paul rivers) wrote:
   -Original Message-
   From: [EMAIL PROTECTED] [mailto:pgsql-general-
   [EMAIL PROTECTED] On Behalf Of Goboxe
   Sent: Monday, October 01, 2007 2:18 AM
   To: [EMAIL PROTECTED]
   Subject: [GENERAL] Partitioned table limitation
 
   Hi,
 
   Are there any limitations on number of child tables that can be use
   in
   partitioned table?
 
   [snip]
 
  We currently use partitioning by date and id, with 1/4 a year of dates
 and
  approximately 10 IDs (and slowly increasing).  Each partition runs from
  around 1 million to 20 million rows.
 
  Whether it's recommended or not, I don't know.  But for us, the
 partitioning
  works exactly as advertised.  As with anything new, I'd take the time to
  setup a simple test to see if it works for you, too.
 
  In particular, be sure to check the documentation on caveats.  You'll
 find
  these a little stricter than partitioning issues in Oracle or SQL
 Server.
 
  HTH,
  Paul
 
 
 
 Thanks Paul for your inputs.
 
 I am not really clear when you said partitioning by date and id, with
 1/4 a year of dates and
 approximately 10 IDs. Could you give some examples of your tables?
 
 
 TQ,
 G
 

Sure.  

The largest logical table has a primary key of fw_id, fw_date, fw_line_nbr.
We partition on fw_id, fw_date.  

fw_date ranges from today to about 120 days ago.  There are no gaps for any
fw_id in this rolling window.  Each fw_id + fw_date has between 1-20 million
rows, though most of them tend toward the smaller end of that scale.

We also generate child tables (partitions) for a few days into the future as
part of a nightly maintenance job.  We also drop ones older than the 120
days.  So all told, we have around 1400 partitions, and around a
trillion rows of data.  The rows average about 700 bytes or so
wide, with date, time, inet, cidr, varchar, bigint, smallint, and int types.

There are a variety of different processes loading the data constantly
during the day.  This data is used for ad-hoc troubleshooting during the
day, plus some near real-time monitoring alerts.  It sees a fair amount of
reading during the day.  On a nightly basis, it is rolled up into a
summarized format, and we keep this rollup data for years.  These rollup
tables are partitioned too, but it's not on the same scale as the above
table.  The rollup data is used for all kinds of trend analysis, further
reporting, etc.
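
As a concrete (made-up) example, the nightly job creates children along
these lines and retires the oldest:

CREATE TABLE fw_events_1_20071002 (
    CHECK ( fw_id = 1 AND fw_date = DATE '2007-10-02' )
) INHERITS (fw_events);

DROP TABLE fw_events_1_20070604;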

HTH,
Paul







---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Partitioned table limitation

2007-10-01 Thread paul rivers

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Goboxe
 Sent: Monday, October 01, 2007 2:18 AM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Partitioned table limitation
 
 Hi,
 
 Are there any limitations on number of child tables that can be use
 in
 partitioned table?
 
 [snip]

We currently use partitioning by date and id, with 1/4 a year of dates and
approximately 10 IDs (and slowly increasing).  Each partition runs from
around 1 million to 20 million rows.   

Whether it's recommended or not, I don't know.  But for us, the partitioning
works exactly as advertised.  As with anything new, I'd take the time to
setup a simple test to see if it works for you, too.

In particular, be sure to check the documentation on caveats.  You'll find
these a little stricter than partitioning issues in Oracle or SQL Server.  

HTH,
Paul



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] MVCC cons

2007-08-14 Thread paul rivers
 On 08/14/07 14:34, Kenneth Downs wrote:
  Tom Lane wrote:
  Kenneth Downs [EMAIL PROTECTED] writes:
 
  Speaking as an end-user, I can give only one I've ever seen, which is
  performance.  Because of MVCC, Postgres's write performance (insert
  and update) appears on my systems to be almost exactly linear to row
  size.  Inserting 1000 rows into a table with row size 100 characters
  takes twice as long as inserting 1000 rows into a table with row size
  50 characters.
 
 
  Not sure why you'd think that's specific to MVCC.  It sounds like
 purely
  an issue of disk write bandwidth.
 
  regards, tom lane
 
 
  I did not see this in MS SQL Server.
 
 It is only logical that it will take 2x as long to insert 2x as much
 data.
 
 Maybe SQL Server is compressing out white space?  Or (shudder)
 heavily caching writes?


There's no SQL Server magic.  It doesn't compress whitespace or cache writes
in any scary way.  Doubling the row width does double the insert time.


On SQL Server 2000 sp4:

Setup via:

create database test_db
use test_db
create table t50 ( f1 char(50) )
create table t100 ( f1 char(100) )

Test 1:

declare @start datetime
select @start = getdate()
begin transaction
insert into t50 ( f1 ) values (
'01234567890123456789012345678901234567890123456789' );
-- Repeat above insert 1000 times
commit transaction
select datediff(ms, @start, getdate()) 

Test 2:
declare @start datetime
select @start = getdate()
begin transaction
insert into t100 ( f1 ) values (
'0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789' );
-- Repeat above insert 1000 times
commit transaction
select datediff(ms, @start, getdate())


On my system, test one averages around 16ms over 100 tests.  Test 2 averages
around 33ms over 100 tests.

I would wager my week's coffee change on the same outcome on SQL 2005 sp2.  

Paul






---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] greatest/least semantics different between oracle and postgres

2007-06-30 Thread paul rivers

At risk of putting my foot in my mouth again, greatest() returns null if one
or more expressions are null for Oracle enterprise 9.2.0.7 and 10.2.0.3.  

The docs for greatest() don't talk of NULL:

http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions060.htm#SQLRF00645


SQL select greatest(1,2,null,3) from dual;

GREATEST(1,2,NULL,3)



SQL select greatest(1,2,3) from dual;

GREATEST(1,2,3)
---
  3

SQL select version from v$instance;

VERSION
---
9.2.0.7.0




SQL select greatest(1,2,null,3) from dual;

GREATEST(1,2,NULL,3)



SQL select greatest(1,2,3) from dual;

GREATEST(1,2,3)
---
  3

SQL select version from v$instance;

VERSION
---
10.2.0.3.0


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Pavel Stehule
 Sent: Saturday, June 30, 2007 10:37 AM
 To: Bruno Wolff III; Pavel Stehule; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] greatest/least semantics different between oracle
 and postgres
 
  Maybe that reference was for an earlier version of Oracle and the
 definition
  changed at some point? I only have access to version 9 and greatest and
  lest are strict there.
 
 
 I am installing OracleXE and I'll test it.
 
 Pavel
 
 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] greatest/least semantics different between oracle and postgres

2007-06-30 Thread paul rivers

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Pavel Stehule
 Sent: Saturday, June 30, 2007 10:37 AM
 To: Bruno Wolff III; Pavel Stehule; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] greatest/least semantics different between oracle
 and postgres
 
  Maybe that reference was for an earlier version of Oracle and the
 definition
  changed at some point? I only have access to version 9 and greatest and
  lest are strict there.
 
 
 I am installing OracleXE and I'll test it.
 
 Pavel
 

At risk of putting my foot in my mouth again, greatest() returns null if one
or more expressions are null for Oracle enterprise 9.2.0.7 and 10.2.0.3.
 
The docs for greatest() don't talk of NULL:

http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions060.htm#SQLRF00645

There are metalink documents that do seem to make it clear greatest/least
are defined to return null if one or more expressions has a null.  (see doc
207279.999 for example)

SQL select greatest(1,2,null,3) from dual;

GREATEST(1,2,NULL,3)



SQL select greatest(1,2,3) from dual;

GREATEST(1,2,3)
---
  3

SQL select version from v$instance;

VERSION
---
9.2.0.7.0




SQL select greatest(1,2,null,3) from dual;

GREATEST(1,2,NULL,3)



SQL select greatest(1,2,3) from dual;

GREATEST(1,2,3)
---
  3

SQL select version from v$instance;

VERSION
---
10.2.0.3.0




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] greatest/least semantics different between oracle and postgres

2007-06-29 Thread paul rivers

I believe the spec says nulls are ignored for min/max.  Postgres is as far
as I know behaving according to spec.

But I question the original poster's report of Oracle's behavior.  I don't
have 9.2.0.8 to test, but on 9.2.0.7:

SQL select f1, case when f1 is not null then 'not null' else 'null' end if
from t; 

F1 IF
-- 
 1 not null
 2 not null
   null

SQL select max(f1) from t;

   MAX(F1)
--
 2

SQL select version from v$instance;

VERSION
---
9.2.0.7.0


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Ben
 Sent: Friday, June 29, 2007 10:18 PM
 To: Tom Lane
 Cc: PostgreSQL General ((EN))
 Subject: Re: [GENERAL] greatest/least semantics different between oracle
 and postgres
 
 On Jun 29, 2007, at 9:15 PM, Tom Lane wrote:
 
  Hmm ... I fear Oracle's behavior is more correct, because if any
  argument is null (ie, unknown), then who can say what the greatest or
  least value is?  It's unknown (ie, null).  But I suspect our behavior
  is more useful.  Comments?
 
 I agree with you. I don't know what the spec says, but it seems clear
 Oracle is doing the proper thing and Postgres is doing the useful thing.
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] greatest/least semantics different between oracle and postgres

2007-06-29 Thread paul rivers


 
 Er ... your example doesn't actually seem to involve greatest() or
 least()?
 

So sorry, it's been a long day, I misread.  Yes, greatest/least definitely
does work on Oracle as the OP said.  Apologies again.
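
For the record, the Postgres side of the comparison is easy to show:

SELECT greatest(1, 2, NULL, 3);   -- Postgres returns 3; Oracle returns NULL here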




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] table partitioning and plpgsql functions in 8.2.3

2007-04-11 Thread paul rivers

Overview:

plpgsql functions seem to ignore partitioning, even with 
constraint_exclusion on.


Description:

Version is 8.2.3 on RHEL 4, constraint_exclusion is on.  I have an
events table (fw_events) partitioned by an int and a date (fw_id,
fw_date for discussion) following the recommendations outlined in
5.9 of the manual.

FWIW, each partition holds around 1M rows.  There are presently
about 250 partitions (2 ids, around 4+ months of dates).

explain select count(*) from fw_events where fw_id = 1 and
fw_date = '2007-04-08' shows that the single partition table is
examined, and results are snappy when executed.

I created a function to do the same count, and it took orders of
magnitude longer.

I then created a plpgsql function to return the explain plan
instead, which seemed to indicate the plpgsql function scans all 
the partitions.

Shouldn't the plpgsql function honor the partitioning and only
examine the single partition in the above example?


Thanks in advance,
Paul




---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] table partitioning and plpgsql functions in 8.2.3

2007-04-08 Thread paul rivers

Apologies, I should have read more carefully - this is already documented:

5.9.5 Caveats

Constraint exclusion only works when the query's WHERE clause contains
constants. A parameterized query will not be optimized, since the planner
cannot know what partitions the parameter value might select at run time.
For the same reason, stable functions such as CURRENT_DATE must be
avoided.
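
The usual workaround, for what it's worth, is to build the statement as a
string inside the function, so the planner sees constants at execution time.
A minimal sketch against the table described above (the function name is
invented):

CREATE OR REPLACE FUNCTION fw_count(p_id int, p_date date) RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    -- EXECUTE plans the statement per call with literal values in place,
    -- so constraint exclusion can prune down to the single partition
    EXECUTE 'SELECT count(*) FROM fw_events WHERE fw_id = ' || p_id
         || ' AND fw_date = ' || quote_literal(p_date::text)
       INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;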




-Original Message-
From: paul rivers [mailto:[EMAIL PROTECTED] 
Sent: Sunday, April 08, 2007 2:40 PM
To: 'pgsql general'
Subject: table partitioning and plpgsql functions in 8.2.3


Overview:

plpgsql functions seem to ignore partitioning, even with 
constraint_exclusion on.


Description:

Version is 8.2.3 on RHEL 4, constraint_exclusion is on.  I have an
events table (fw_events) partitioned by an int and a date (fw_id,
fw_date for discussion) following the recommendations outlined in
5.9 of the manual.

FWIW, each partition holds around 1M rows.  There are presently
about 250 partitions (2 ids, around 4+ months of dates).

explain select count(*) from fw_events where fw_id = 1 and
fw_date = '2007-04-08' shows that the single partition table is
examined, and results are snappy when executed.

I created a function to do the same count, and it took orders of
magnitude longer.

I then created a plpgsql function to return the explain plan
instead, which seemed to indicate the plpgsql function scans all 
the partitions.

Shouldn't the plpgsql function honor the partitioning and only
examine the single partition in the above example?


Thanks in advance,
Paul




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Notes on converting from MySQL 5.0.x to PostgreSQL

2006-06-30 Thread paul rivers

Out of curiosity, which big, expensive enterprise database are you spoiled
by?  Many that I support do not allow DDL within a transaction, or if they
allow it, there are many caveats and rules.  


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ron Johnson
Sent: Friday, June 30, 2006 8:22 AM
To: pgsql general
Subject: Re: [GENERAL] Notes on converting from MySQL 5.0.x to PostgreSQL

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Scott Marlowe wrote:
[snip]
 However, the more interesting thing here, is that every
 statement, including DDL is transactable, except for a couple of
 big odd ones, like create database. So, in postgresql, you can do:
 
 begin;
 create table xyz...
 alter table abc...
 insert into abc select * from iii
 update iii...;
 drop table iii;
 (oops, I messed up something)
 rollback;

But isn't that what it means to be transactional?  Or am I spoiled
 by my big, expensive enterprise database?

- --
Ron Johnson, Jr.
Jefferson LA  USA

Is common sense really valid?
For example, it is common sense to white-power racists that
whites are superior to blacks, and that those with brown skins
are mud people.
However, that common sense is obviously wrong.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.3 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEpUGPS9HxQb37XmcRAiJwAJ9/A/N/OgmslveSsX3Xym2QnDQz1gCghPD0
YX882Kv81hzZ4AKjaIVKHg8=
=Gsml
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] SSL for an AIX 5.x client - possible?

2006-04-09 Thread paul rivers


I am having difficulty getting SSL-enabled Postgres client
libs working on AIX with either vac or gcc using OpenSSL. SSL from other UNIX
flavors has not been a problem. 



Versions in question: 

AIX 5.1 on power4

OpenSSL 0.9.8

Postgres 8.1.3



I am not that familiar (yet) with the configure and build
part of Postgres, but I suspect this problem really comes down to how openssl
gives up trying to build a shared lib version on AIX, and postgres must (?) dynamically
link to openssl. (Incidentally, mysql has the same problem on AIX it would
seem, again said with a question mark.)



Any advice, guidance or links on how I might get SSL working
from AIX clients would be greatly appreciated. Googling around has not yielded
anything that's put me on the right track.



Thanks in advance,
Paul



Re: [GENERAL] PostgreSQL client api

2006-03-28 Thread paul rivers

Try factoring the connect time out of the test.  My experience is the
connect is more expensive for Postgres than MySQL.  With that out of the
way, I'd wager the times will be closer.
Regards,
Paul

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Antimon
Sent: Tuesday, March 28, 2006 5:02 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] PostgreSQL client api

Hi,
I was testing MySQL and PgSQL performances on my home box (amd athlon
64 3000, 1gig ddr ram, sata I hdd, win xp (x86)), select and insert
times seeemed identical with innoDB.

But when i try to query both using php, there's a huge difference even
for a funny query like select 1

Here's the code:

<?php
$mtime = microtime(true);

$pdo = new PDO('pgsql:host=localhost;dbname=test', 'testacc', 'pw');
for ($i = 0; $i < 10000; $i++)   // loop bound assumed; the original was lost in the archive
{
    $result = $pdo->query("Select " . $i);
}

echo microtime(true) - $mtime;
echo "<br>";

$mtime = microtime(true);

$pdo = new PDO('mysql:host=localhost;dbname=test', 'testacc', 'pw');
for ($i = 0; $i < 10000; $i++)
{
    $result = $pdo->query("Select " . $i);
}

echo microtime(true) - $mtime;
echo "<br>";
?>

output is:
2.7696590423584
0.89393591880798

Nearly 3 times slower even w/o any table queries. But I could not
reproduce this by writing stored procs on both which select 0-1 in
a loop into a variable. Results were almost the same.
(I tried pg_ and mysqli_ functions too, results were not too different)

Is it that the mysql client libraries perform better? Or are postgres stored
procs 3x faster? I cannot understand, since there is not even an io
operation or any query planning stuff, what the cause of this is.

Thanks.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org