Re: [GENERAL] WIP: CoC
> On Jan 12, 2016, at 12:20 AM, Tom Lane wrote:
>
> [ just a few comments on specific points ]
>
> "Greg Sabino Mullane" writes:
>>> 2. The CoC is not about being offended. The act of being offended is
>>> purely a recipient response and usually the offended individual is more
>>> interested in being a victim than moving forward.
>
>> Too defensive; not needed in the code of conduct.
>
> Agreed, let's avoid defensiveness here. There's not much advantage to
> this whole exercise unless we can be welcoming rather than dismissive.

How about a simple statement?

2. We expect people to be both tolerant and respectful of others.

Neil
Re: [GENERAL] Code of Conduct: Is it time?
> On Jan 12, 2016, at 7:50 AM, Vick Khera wrote:
>
> On Mon, Jan 11, 2016 at 6:55 PM, Steve Litt wrote:
>> All because somebody just *had* to personally insult someone else,
>> repeatedly, and nobody thought that was a bad thing, and when the
>> recipient finally objected, the objection was chalked up to him or her
>> valuing his/her victimhood.
>
> +1
>
> I was thinking along the same lines when I saw JD's original list
> containing that "victimhood" line. I think that one line pretty much
> eviscerates the entire purpose of having the CoC.

I don't remember the "victimhood" line, but it is important to make sure people understand that the problem can manifest itself in both directions: the complainer being too sensitive, and the group not being sensitive enough.

I do believe that any such document needs to state that everyone is expected to be tolerant of others. A free society cannot exist without some level of tolerance.

Neil
Re: [GENERAL] Code of Conduct: Is it time?
> On Jan 10, 2016, at 2:59 PM, Joshua D. Drake wrote:
>
> On 01/10/2016 10:44 AM, Regina Obe wrote:
>
>>> JD
>>
>> This may come as a big shock to many of you, but as a contributor
>> I don't care if you are racist, sexist, transphobic or whatever as long as
>> you
>
> I think this is reasonable but my point is that we don't care if you are
> sexist (in terms of .Org). We care if you allow your sexism to bleed into the
> community.
>
> In short, as long as you are professional and respectful, your personal
> beliefs may remain your own.

My problem with all of this is when there is a demand for zero tolerance. People cannot comfortably live and work without some level of their essence (good or bad) bleeding into their work.

I think Regina's comment above is the most important comment I have read. I want to work with Regina: right attitude, right focus. And if I did step over the line and Regina felt the need to address the issue, I would very much respect that. This is the attitude that a code of conduct should project, not all of the politically correct crap that is normally written.

It is important to protect the community from people who are on a mission to rid the world (or the community) of all ass-holes, racists, sexists, etc. That is never going to happen, and their personal hate trip and lack of tolerance should not be in the community either. Certainly there is a line that should not be crossed from either extreme, but we need to be tolerant while people are learning and adapting, so the gap between the two lines needs to be as wide as possible. The code of conduct IMO must address both extremes.

Honestly, I would rather work with someone that offended me every day than someone so easily offended that I had to watch every word in our communications. In managing projects, my experience is that, more often than not, the people who focused on the style of communications (politically correct, pleasing words, etc.) and were easily offended by it made contributions that were much less valuable than those of people who were neutral or rough around the edges.

The community will make more progress if it can find a way to accept these 'rough around the edges' people, not because they are rough, but because roughness does not degrade value except at the extreme. Often someone who is 'rough around the edges' has to be better at their work to make up for it. These are good people to keep around if possible.

Neil
Re: [GENERAL] PostgreSQL Developer Best Practices
> On Aug 25, 2015, at 1:38 PM, Karsten Hilbert wrote:
>
>> In most cases developers don't care about index, unique, foreign key, or
>> primary key names (from a coding standpoint)
>
> Until the day they'd like to write a reliable database change script.

Not sure I understand. Once the object is created the name is set; it does not change, so I don't see why it is not possible to write a reliable database change script. Dump and restore maintain the name.

Of course every project has periodic scripts that need to run, so objects that are dropped or manipulated in those scripts would have to be manually named, especially during development when the whole database might be dropped and recreated multiple times. My original comment covered that situation.

My projects typically have many, many objects that, once created, are never referred to again unless a DBA is doing some tuning or troubleshooting. In that case, the DBA just looks up the name.

I can see that if, say, two years later you want to create a development database from the original SQL that generated the table definitions, that could be problematic. But I have always used the current definitions, not the originals, and those can be exported with the current names.

It just seems like busy work to me, but I would love to be enlightened.

Neil
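For reference, when a name does need looking up, the system catalogs make it easy enough to find auto-generated constraint and index names instead of hard-coding them; a minimal sketch, where the accounts table name is only an example:

    -- Constraint names and types (p = primary key, f = foreign key,
    -- u = unique, c = check) that PostgreSQL generated for a table.
    SELECT conname, contype
    FROM pg_constraint
    WHERE conrelid = 'accounts'::regclass;

    -- Index names and their definitions for the same table.
    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE tablename = 'accounts';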
Re: [GENERAL] PostgreSQL Developer Best Practices
> On Aug 22, 2015, at 10:15 AM, Melvin Davidson wrote:
>
> 6. Although it is legal to use the form "column TYPE PRIMARY KEY," it is best
>    to specify as a CONSTRAINT, that way YOU get to choose the name, otherwise
>    postgres assigns a default name which may not be to your liking.
>    EG: , CONSTRAINT accounts_pk PRIMARY KEY (account_id)
>
> 10. Standardize Index names with the form table_name + col(s) + "idx"
>     EG: For accounts table:
>         accounts_name_idx
>         accounts_city_state_idx

I really like the standardization that PostgreSQL uses in auto-generating default names. The rule I use is to always accept the auto-generated names unless the object is referenced routinely in code. In most cases developers don't care about index, unique, foreign key, or primary key names (from a coding standpoint), so why should they be creating the names?

Since the PostgreSQL convention is to auto-generate names ending in '_pkey' for PRIMARY KEY, '_fkey' for FOREIGN KEY, and '_key' for UNIQUE, why not use the same rules for consistency? So I disagree with 6 and would extend 10 to include these other suffixes when names are created manually.

Interestingly enough, when I searched the 9.5 docs I could not find a description of these PostgreSQL naming conventions. Probably because the developers consider it an internal detail that could change, which is fine, since the names usually don't matter, until they do.

I would say use "column TYPE PRIMARY KEY", "column TYPE UNIQUE", and "column TYPE REFERENCES ..." every place you can and only create manual names when absolutely necessary. When you do create manual names, follow the standard PostgreSQL convention.

Now, I have worked on mostly smaller installations, so maybe someone should chime in if this is a bad best practice.

Neil
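As a small illustration of the two styles being discussed (the table names are invented, and the generated names shown in the comments are what current releases produce, though, as noted above, that convention is not documented as a guarantee):

    CREATE TABLE cities (
        id integer PRIMARY KEY                      -- generates cities_pkey
    );

    -- Inline declarations: PostgreSQL picks the names.
    CREATE TABLE accounts (
        account_id integer PRIMARY KEY,             -- generates accounts_pkey
        email      text    UNIQUE,                  -- generates accounts_email_key
        city_id    integer REFERENCES cities (id)   -- generates accounts_city_id_fkey
    );

    -- Explicit constraints: you choose the names (point 6 above).
    CREATE TABLE accounts2 (
        account_id integer,
        email      text,
        CONSTRAINT accounts2_pk        PRIMARY KEY (account_id),
        CONSTRAINT accounts2_email_key UNIQUE (email)
    );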
Re: [GENERAL] Create Virtual Indexes on Postgres
The system can't know what conditions will be present when your query executes sometime in the future unless you define those conditions. To define them, you create a simulated environment with the index, data, and load you want to test, and then you test it. Without more info, your requirements and constraints don't make any sense. So, no, there is no magic wand that can be waved to indicate whether your query will be faster or slower at some time in the future, with some unknown data, under some unknown db load. (I know that's not what you intended to ask, but it is not clear why you are asking what you are asking.)

If you are working on a live production system (in which case I do understand your reluctance to create indexes for testing), then you would be better off creating a duplicate system and testing the queries and indexes on it. If you are running on a development system with only sample data, then the analyzer won't give you good info anyway. Why not just create the indexes/data and evaluate the result in a simulated system? If you read up a little on how the planner works and how statistics are generated, you can get usable info about indexes and relative performance from pretty simple simulated environments.

Now, I'm just guessing, but you may be thinking that this simulation business is a lot of work. You could be right, which is why optimization often occurs after specific problems are identified rather than as an earlier intellectual exercise. Most people would just use general rules of thumb or their experience to create indexes until specific performance problems indicate something else is needed. Then they would solve the specific performance issue.

Neil

> On Feb 26, 2015, at 8:14 AM, Sreerama Manoj wrote:
>
> Hi, Neil
>
> Thanks for your reply... I understood that the planner chooses an
> index based on whether it increases performance or not... But, as part of
> my work I need to know whether an index increases or decreases the
> performance and the execution time of a query before actually creating that
> index. Is there any provision to do that in Postgres, or can you suggest any
> way to find that?
>
> On Thu, Feb 26, 2015 at 7:09 PM, Neil Tiffin <ne...@neiltiffin.com> wrote:
>>
>>> On Feb 26, 2015, at 12:47 AM, Sreerama Manoj <manoj.sreerama...@gmail.com> wrote:
>>>
>>> Hi,
>>> I use Postgres 9.4 database. Now, I am optimizing the queries by using
>>> the results of "explain" and "explain analyze". Sometimes I am creating
>>> Indexes to optimize them. But, I was not successful sometimes as even when I
>>> create an Index to optimize them, the planner is not using them.
>>
>> That is correct. The planner makes its decisions based on the availability of
>> indexes, the nature of your data, the hardware configuration, and the
>> planner-impacting settings (which might be different on different hardware).
>> To use an index the db must load the index blocks from disk, search through
>> them, find the reference it cares about, then load the block with the data
>> from disk. Sometimes it is quicker to load the data block from disk and use
>> it directly without the index. You need to read the parts of the
>> documentation about using the planner. It is very configurable to handle
>> many different situations. The planner will also respond very differently
>> with a limited set of test or debug data versus a fully loaded production
>> database as it is designed to adapt to your data and the db load.
>> Keep in mind that indexes will always slow down inserting and may or may not
>> increase the speed of retrieval. So indexes are not always a good solution,
>> especially in tables with high quantities of inserts. There are options to
>> use different kinds of indexes, and to configure the planner to treat
>> individual tables/columns differently than normal. Unless you know what you
>> are doing, these customizations should only be used to treat problems that
>> arise in production or in production-like simulations.
>>
>> The normal development process is to create your tables and indexes and get
>> your application up and running. When you start noticing slowdowns, track
>> them down and only make adjustments to the db when you are solving a real
>> problem.
>>
>> Neil
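One low-risk way to see whether the planner would actually use a candidate index is to create it inside a transaction, run EXPLAIN ANALYZE, and roll it back. A minimal sketch with a hypothetical orders table (note that CREATE INDEX still takes time and locks the table while it runs, so a copy of the database remains the safer place to experiment):

    BEGIN;
    -- Build the candidate index; nothing is permanent until COMMIT.
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);

    -- See whether the planner picks it up and what the runtime looks like.
    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE customer_id = 42;

    -- Discard the index again.
    ROLLBACK;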
Re: [GENERAL] Create Virtual Indexes on Postgres
> On Feb 26, 2015, at 12:47 AM, Sreerama Manoj wrote:
>
> Hi,
> I use Postgres 9.4 database. Now, I am optimizing the queries by using the
> results of "explain" and "explain analyze". Sometimes I am creating Indexes to
> optimize them. But, I was not successful sometimes as even when I create an
> Index to optimize them, the planner is not using them.

That is correct. The planner makes its decisions based on the availability of indexes, the nature of your data, the hardware configuration, and the planner-impacting settings (which might be different on different hardware). To use an index the db must load the index blocks from disk, search through them, find the reference it cares about, then load the block with the data from disk. Sometimes it is quicker to load the data block from disk and use it directly without the index. You need to read the parts of the documentation about using the planner. It is very configurable to handle many different situations. The planner will also respond very differently with a limited set of test or debug data versus a fully loaded production database, as it is designed to adapt to your data and the db load.

Keep in mind that indexes will always slow down inserting and may or may not increase the speed of retrieval. So indexes are not always a good solution, especially in tables with high quantities of inserts. There are options to use different kinds of indexes, and to configure the planner to treat individual tables/columns differently than normal. Unless you know what you are doing, these customizations should only be used to treat problems that arise in production or in production-like simulations.

The normal development process is to create your tables and indexes and get your application up and running. When you start noticing slowdowns, track them down and only make adjustments to the db when you are solving a real problem.

Neil
[GENERAL] Help with PostgreSQL 9.4 to expand jsonb int array into table with row numbers
Trying to wrap my head around PostgreSQL 9.4 jsonb and would like some help figuring out how to do the following.

Given the following example jsonb:

    '{"name1" : value1, "name2" : value2, "name3" : [int1, int2, int3]}'::jsonb AS table1.column1

Wanted: return the "name3" array only, as a table with a return signature of

    TABLE(var_name varchar, var_value int, var_row_num int)

So the resulting data would look like this:

    ('name3', int1, 1)
    ('name3', int2, 2)
    ('name3', int3, 3)

Assume the array could be any length except zero and 'name3' is guaranteed to exist.

Also posted on stackoverflow:
http://stackoverflow.com/questions/26691725/postgresql-9-4-expand-jsonb-int-array-into-table-with-row-numbers

Thanks,
Neil
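One way to get this shape in 9.4 combines jsonb_array_elements_text() with WITH ORDINALITY, both available in that release. A minimal sketch, assuming a hypothetical table1(column1 jsonb) and integer-valued array elements:

    SELECT 'name3'::varchar AS var_name,
           elem::int        AS var_value,
           ord::int         AS var_row_num
    FROM   table1,
           jsonb_array_elements_text(column1 -> 'name3')
             WITH ORDINALITY AS t(elem, ord);

The function call in FROM can reference column1 from the preceding table without an explicit LATERAL keyword, and the ordinality column provides the 1-based row number.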
Re: [GENERAL] Postgre SQL SHA-256 Compliance
On Sep 22, 2014, at 9:46 AM, Paul Jungwirth wrote:

>>> Can you confirm that your software is SHA-256 Compliant?
>
> Postgres's SSL certificate & key live at the value of ssl_cert_file
> and ssl_key_file in your postgresql.conf. Why not point it at a
> SHA-256 certificate, restart, and try it out?

Unfortunately, that is not the way the government usually works. The person requesting the info may not even have access to the system or know how to use the system. This is especially true if the system is classified at any level.
Re: [GENERAL] Offending My Tender Sensibilities -OR- OLTP on a Star Schema
On Feb 6, 2014, at 12:44 PM, Scott Marlowe wrote:

> Merlin, this reminds me of the quote from Mencken: For every complex
> problem there is an answer that is clear, simple, and wrong.

Or as Niklaus Wirth said:

    ... complexity has and will maintain a strong fascination for many people. It is true that we live in a complex world and strive to solve inherently complex problems, which often do require complex mechanisms. However, this should not diminish our desire for elegant solutions, which convince by their clarity and effectiveness. Simple, elegant solutions are more effective, but they are harder to find than complex ones, and they require more time, which we too often believe to be unaffordable.

    Communications of the ACM, Feb. 1985
Re: [GENERAL] Why are stored procedures looked on so negatively?
On Jul 23, 2013, at 7:29 PM, Some Developer wrote:

> I've done quite a bit of reading on stored procedures recently and the
> consensus seems to be that you shouldn't use them unless you really must.

Application architecture is a specific software engineering discipline. These types of generalizations come from coders who don't really understand application architecture and/or databases. There are specific reasons to put code in the database server, the application middleware, or the application. To make this decision, much more must be known than what has been presented in this thread.

For example, if you want to maintain data integrity, then you really want to use very specific table definitions with foreign keys, defaults, and constraints. While this is not related to stored procedures, application coders tend to shy away from these (like they do stored procedures) because they make working with the database harder. They force the data to be correct before it comes into the database. When foreign keys, defaults, and constraints are not enough to ensure data integrity, then stored procedures should be used. The question is: how important is your data, and how much time do you want to spend correcting it after it enters the database?

The next reason is performance. Stored procedures can, in certain circumstances, dramatically increase or decrease performance of both the client and the server, network traffic, and application response time. But which one is most important in your application? The consensus does not know. The best practice depends on the type of multitasking the application is performing, the type of client, the client coding environment, the locations of the data being processed, the locking requirements, the concurrency requirements, the capacity of the servers and clients, the network topology, the expected response time for the activity, etc. It is not at all uncommon to think that a stored procedure should be in the database server and then have performance testing show that it is better in the application, and vice versa. Keep in mind that as the database becomes loaded these performance trade-offs may change, and any decisions you make on a development database with only partial data may not prove out in the final application.

There may also be reasons to normalize/denormalize data in the database but present a different view to the application. This should, if done correctly, make the application code simpler to maintain and understand.

What is your support experience level? With no experienced DBAs, this is probably a bad idea. With relatively inexperienced application coders, it is probably a really good idea.

Sophisticated applications may even have more than one database server. One update server and multiple read-only servers is very common in the environments I work in. Since the update server is not burdened by serving all of the read-only data, it has much more capacity to handle stored procedures. Some of our environments see 80 or 90% of the load as read-only. This is the network topology part.

Another example: if the result of a procedure is one number but it requires 15 columns from 200 rows, the question is whether it is faster to compute it on the server and put only the one resulting number back on the network, or to have the client pull all 15 columns times 200 rows of data across the network and analyze them there. The answer is, well, it depends. Well, maybe not for this example, but hopefully you get the point.
Now, if part of the procedure requires data that comes from a GUI table or user-entered data that only resides in the application, then the situation changes. Wherever you put the code, you should have specific reasons for doing so, and for high-performance applications it is not appropriate to generalize that all the code should go exclusively into the database or the app.

Neil
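To make the 15-columns-by-200-rows example above concrete, here is a minimal sketch (the table, columns, and function name are invented for illustration): the server-side function sends one number over the wire, while the client-side alternative pulls every matching row and does the arithmetic in application code.

    -- Server-side: one round trip, one number comes back.
    CREATE FUNCTION order_margin(p_order_id integer)
    RETURNS numeric
    LANGUAGE sql STABLE
    AS $$
        SELECT sum(quantity * (unit_price - unit_cost))
        FROM   order_lines
        WHERE  order_id = p_order_id;
    $$;

    SELECT order_margin(1001);

    -- Client-side alternative: ship all columns of all matching rows
    -- across the network and compute the figure in the application.
    SELECT * FROM order_lines WHERE order_id = 1001;

Whether the first form is actually faster depends on the factors listed above, which is exactly why it needs to be measured rather than assumed.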
Re: [GENERAL] Postgres case insensitive searches
On Jun 29, 2013, at 11:24 AM, bhanu udaya wrote:

> Upper and Lower functions are not right choice when the table is > 2.5
> million and where we also have heavy insert transactions.

PostgreSQL and SQL Server are completely different. Rules that apply to SQL Server do not necessarily apply to PostgreSQL. Your problem is not the use of upper() or lower(); it is the assumption that what works in SQL Server is the best way to use PostgreSQL.

You'll get farther if you benchmark several of the suggestions and then, if the performance is not good enough, ask how to improve it. This will take a little work on your part, but that is how you learn.

Neil
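For reference, the usual PostgreSQL idiom for case-insensitive lookups without rewriting the stored data is an expression index on lower(); a minimal sketch with hypothetical table and column names (the citext extension is another option worth benchmarking):

    CREATE INDEX customers_lower_name_idx ON customers (lower(name));

    -- The query must use the same expression for the index to be considered.
    SELECT * FROM customers WHERE lower(name) = lower('Smith');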
Re: [GENERAL] Supporting SQL/MED DATALINK
On Jan 9, 2012, at 5:07 AM, Alban Hertroys wrote:

> On 9 January 2012 09:56, Damiano ALBANI wrote:
>>> I believe DB2 is pretty much it in this area.
>>
>> For the record, it looks like MS SQL Server has some equivalent feature:
>> FILESTREAM.
>
> And Oracle has BFILE.
>
> I've actually been thinking about how to implement something like this
> for Postgres, but the interaction with the file-system makes it a
> little more difficult to implement. Someone shouldn't be able to
> delete a file from the file-system that's still referenced from the
> database.
> Perhaps a file-system layer with FusionFS would be a suitable solution
> for something like this, but I couldn't convince myself that it is.
>
> In the end, it probably requires a custom file-system implementation
> that needs maintaining across all operating systems that Postgres runs
> on. That's a major undertaking!
>
> --
> If you can't see the forest for the trees,
> Cut the trees and you'll see there is no forest.

Not sure I see the need to keep the user/system from deleting a database-referenced file. BFILEs are created completely outside of Oracle, and Oracle will not insert data into or write to a BFILE. The reference to the external file is a very weak, read-only reference, as BFILE provides functions like fileExists() and isFileOpen(). They leave it up to the code to determine whether the file is available and has the correct permissions.
Re: [GENERAL] Installation woes via Macports on Mac OS X 10.7
On Oct 8, 2011, at 1:01 AM, Tom Lane wrote:

> René Fournier writes:
>> I've tried installing 8.4 and 9.0 on two different machines, and at the
>> end can't start Postgresql. Here's the basic story:
>
> No, you started it all right, because it's there in the ps output:
>
>> Fri Oct 07 14:18:20 -- Baby-Irmo :: ps -ax | grep "postgres"
>>   56 ??  0:08.28 /usr/bin/postgres -D /var/pgsql -c listen_addresses= -c log_connections=on -c log_directory=/Library/Logs -c log_filename=PostgreSQL.log -c log_lock_waits=on -c log_statement=ddl -c log_line_prefix=%t -c logging_collector=on -c unix_socket_directory=/var/pgsql_socket -c unix_socket_group=_postgres -c unix_socket_permissions=0770
>>  226 ??  0:33.67 postgres: logger process
>>  253 ??  1:41.76 postgres: writer process
>>  254 ??  1:12.68 postgres: wal writer process
>>  255 ??  0:23.39 postgres: autovacuum launcher process
>>  256 ??  0:38.62 postgres: stats collector process
>
> The problem is with that command-line option
> "unix_socket_directory=/var/pgsql_socket" that your startup wrapper is
> evidently feeding to the postmaster. That means your postmaster is
> listening on a socket file in /var/pgsql_socket, which is not where your
> psql is expecting to find it:
>
>> Fri Oct 07 14:18:04 -- Baby-Irmo :: /opt/local/lib/postgresql90/bin/psql -U postgres
>> psql: could not connect to server: No such file or directory
>>     Is the server running locally and accepting
>>     connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
>
> Here, psql shows it's expecting to find the socket file in /tmp, which
> is the default location for PG socket files.
>
> While there are defensible reasons for putting the socket file somewhere
> other than /tmp, I'd have to say that an installation wrapper that
> thinks it should change that setting on the postmaster command line is
> just about as incompetent as it could possibly be. That's equivalent to
> changing the default port number and not bothering to tell the clients
> about it. You need to find a packaging built by somebody with more of a
> clue than that. It's not helping any that you seem to be using a psql
> that was built independently of the postmaster (why is the postmaster in
> /usr/bin when your psql is in /opt/local/lib/postgresql90?) ... but
> frankly, I cannot imagine a sane reason for setting
> unix_socket_directory with that mechanism, unless you're intentionally
> trying to make it hard to connect. (And those unix_socket_group and
> unix_socket_permissions settings are making it even harder ...)

You might try http://www.postgresqlformac.com/

Neil
Re: [GENERAL] Mac OS X shared_buffers not same as postgresql.conf file
On Sep 24, 2011, at 4:21 PM, Guillaume Lelarge wrote:

> On Sat, 2011-09-24 at 14:43 -0500, Neil Tiffin wrote:
>> On Sep 24, 2011, at 1:31 PM, Joe Conway wrote:
>>
>>> On 09/23/2011 02:33 PM, Neil Tiffin wrote:
>>>> I have shared_buffers in the config file set for 32 MB and pgAdmin
>>>> reports a value of 32 MB, but pgAdmin also says the current value is
>>>> 4096. Can anyone point me to any docs about why the current value
>>>> may be different than the config value? Temp_buffers are the same
>>>> way, config file 8MB, but current value in pgAdmin is 1024?
>>>
>>> Internally shared_buffers is tracked as number of 8K pages.
>>>
>>> postgres=# show shared_buffers;
>>>  shared_buffers
>>> ----------------
>>>  32MB
>>> (1 row)
>>>
>>> postgres=# select setting from pg_settings where name='shared_buffers';
>>>  setting
>>> ---------
>>>  4096
>>> (1 row)
>>>
>>> postgres=# select 4096 * 8 / 1024 as MB;
>>>  mb
>>> ----
>>>  32
>>> (1 row)
>>>
>>> HTH,
>>
>> Excellent, just what I was looking for. I knew there had to be a simple
>> explanation, even though it seems a little retarded that pgAdmin does not
>> point this out.
>
> And how do you get that value from pgAdmin? The config editor gives me
> the value with its unit, so I guess it's not from the config editor.

I got the values in pgAdmin by selecting Tools -> Server Configuration -> postgresql.conf, which shows a table with setting name, value, current value, and comment columns. The 'value' column shows the unit, but the 'current value' column does not, at least on Mac OS X.

Neil
Re: [GENERAL] Mac OS X shared_buffers not same as postgresql.conf file
On Sep 24, 2011, at 1:31 PM, Joe Conway wrote:

> On 09/23/2011 02:33 PM, Neil Tiffin wrote:
>> I have shared_buffers in the config file set for 32 MB and pgAdmin
>> reports a value of 32 MB, but pgAdmin also says the current value is
>> 4096. Can anyone point me to any docs about why the current value
>> may be different than the config value? Temp_buffers are the same
>> way, config file 8MB, but current value in pgAdmin is 1024?
>
> Internally shared_buffers is tracked as number of 8K pages.
>
> postgres=# show shared_buffers;
>  shared_buffers
> ----------------
>  32MB
> (1 row)
>
> postgres=# select setting from pg_settings where name='shared_buffers';
>  setting
> ---------
>  4096
> (1 row)
>
> postgres=# select 4096 * 8 / 1024 as MB;
>  mb
> ----
>  32
> (1 row)
>
> HTH,

Excellent, just what I was looking for. I knew there had to be a simple explanation, even though it seems a little retarded that pgAdmin does not point this out.
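As a side note, pg_settings also exposes the unit each raw value is measured in, so the conversion can be done in one query; a minimal sketch:

    -- 'unit' is '8kB' for shared_buffers and temp_buffers, so the raw
    -- setting times 8192 bytes gives the human-readable size.
    SELECT name,
           setting,
           unit,
           pg_size_pretty(setting::bigint * 8192) AS size
    FROM   pg_settings
    WHERE  name IN ('shared_buffers', 'temp_buffers');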
[GENERAL] Mac OS X shared_buffers not same as postgresql.conf file
Hello all,

I am hoping someone can help me with a 9.0.4 server on an 8GB Mac w/Snow Leopard and the shared_buffers configuration setting.

I have shared_buffers in the config file set for 32 MB and pgAdmin reports a value of 32 MB, but pgAdmin also says the current value is 4096. Can anyone point me to any docs about why the current value may be different than the config value? Temp_buffers are the same way: config file 8MB, but current value in pgAdmin is 1024.

Thank you.

Neil