Re: [GENERAL] Pains in upgrading to 8.3
On Mon, Feb 18, 2008 at 06:35:11PM -0300, Alvaro Herrera wrote:
> Bruce Momjian wrote:
> > Magnus Hagander wrote:
> > > For the case of upgrading, it wouldn't work. But there are certainly other cases where it would help. Say from your central pgadmin console administering 10 servers from 3 different major release trees :-(
>
> What's wrong with providing statically-linked pg_dump-8.2, pg_dump-8.3 and so on, and asking the user which one to use (depending on the target server version)?

Other than the much-increased work in building things, probably nothing. (The package would be noticeably larger as well, of course, but that shouldn't be a big problem today.)

//Magnus

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
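The per-version binaries Alvaro suggests could be selected with a small wrapper script. This is only a sketch under the assumption that the binaries are installed side by side under names like pg_dump-8.2 and pg_dump-8.3; that naming scheme and the `pg_dump_for` helper are made up for illustration.

```shell
#!/bin/sh
# Hypothetical helper: map a full server version string (e.g. "8.2.6",
# as reported by "SHOW server_version") to a matching versioned
# pg_dump binary. The pg_dump-X.Y naming scheme is an assumption.
pg_dump_for() {
    major=$(echo "$1" | cut -d. -f1,2)   # "8.2.6" -> "8.2"
    echo "pg_dump-$major"
}

# One could then dump each managed server with its own major release, e.g.:
#   ver=$(psql -h "$host" -At -c 'SHOW server_version')
#   "$(pg_dump_for "$ver")" -h "$host" mydb > "$host-mydb.sql"
```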
Re: [GENERAL] Pains in upgrading to 8.3
On Feb 19, 2008 8:48 AM, Magnus Hagander [EMAIL PROTECTED] wrote:
> On Mon, Feb 18, 2008 at 06:35:11PM -0300, Alvaro Herrera wrote:
> > Bruce Momjian wrote:
> > > Magnus Hagander wrote:
> > > > For the case of upgrading, it wouldn't work. But there are certainly other cases where it would help. Say from your central pgadmin console administering 10 servers from 3 different major release trees :-(
> >
> > What's wrong with providing statically-linked pg_dump-8.2, pg_dump-8.3 and so on, and asking the user which one to use (depending on the target server version)?
>
> Other than the much-increased work in building things, probably nothing. (The package would be noticeably larger as well, of course, but that shouldn't be a big problem today.)

I suspect that building static versions of the utilities while retaining the OpenSSL and Kerberos support would be nigh-on impossible (I've never even managed to build my own dynamic version of Kerberos, which seems to rely heavily on the build environment used within MIT). In pgAdmin, bundling such utilities would be a big no-no. Imagine the docs: pgAdmin supports SSL encryption and Kerberos authentication, but if you wish to back up or restore your databases you'll need to turn off those requirements in the server.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
The Oracle-compatible database company
Re: [GENERAL] Pains in upgrading to 8.3
Bruce Momjian wrote:
> Magnus Hagander wrote:
> > For the case of upgrading, it wouldn't work. But there are certainly other cases where it would help. Say from your central pgadmin console administering 10 servers from 3 different major release trees :-(

What's wrong with providing statically-linked pg_dump-8.2, pg_dump-8.3 and so on, and asking the user which one to use (depending on the target server version)?

> Using the new pg_dump for dumping older versions during an upgrade is just inconvenient and something we should not need to do. At the worst we should have a way for us to upgrade the older version of pg_dump with whatever functionality we need, and just tell people to be running the most recent minor release before upgrading. What cases in the past have needed the new pg_dump?

Dependency handling, IIRC in 7.3 (or was it 7.2?), was a big change for pg_dump, and I don't think we would have liked to backpatch the pg_dump changes. Also, AFAIK the sequence stuff with OWNED BY also needed the newer pg_dump, which is more recent (8.2?). I don't think it's as rare as you suggest.

-- 
Alvaro Herrera  http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [GENERAL] Pains in upgrading to 8.3
Alvaro Herrera [EMAIL PROTECTED] writes:
> Bruce Momjian wrote:
> > What cases in the past have needed the new pg_dump?
>
> Dependency handling, IIRC in 7.3 (or was it 7.2?), was a big change for pg_dump, and I don't think we would have liked to backpatch the pg_dump changes. Also, AFAIK the sequence stuff with OWNED BY also needed the newer pg_dump, which is more recent (8.2?). I don't think it's as rare as you suggest.

Note that in straightforward cases, a dump from the older pg_dump will serve perfectly well. However, there have been cases in the past where the dump/reload behavior of a PG version was unfixably broken for certain corner cases --- renamed serial sequences being the latest example. In such cases, dumping with the older pg_dump and loading into a newer DB will not give you a correct copy of the state of your old database, whereas dumping with the new pg_dump often does (because the newer pg_dump knows how to exploit the feature we added to solve the problem, such as OWNED BY).

The other usual reason for recommending this is the fear that someone's old installation may not be up-to-date with its latest patch release, and so its pg_dump might be missing bug fixes even for problems that *are* soluble within its release series. The thought is that the version they are installing probably is an up-to-date one, and so its pg_dump might have fixes the other one doesn't.

None of this matters if you have a DB you can successfully dump and reload in the old installation version. In that case you haven't got any dump/reload bugs you are tripping over.

			regards, tom lane
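The recommended path Tom describes - running the *new* installation's pg_dump against the *old* server and loading the result into the new one - might look something like the sketch below. The paths, port numbers, and database name are all assumptions for illustration; the command is built as a string so it can be inspected before anything is touched.

```shell
#!/bin/sh
# Sketch of the dump/reload upgrade path: the new (8.3) pg_dump reads from
# the old (8.2) server and streams straight into the new one. The install
# path, ports, and database name below are illustrative assumptions.
NEW_BIN=/usr/local/pgsql-8.3/bin
OLD_PORT=5432   # old 8.2 server, still running
NEW_PORT=5433   # freshly initdb'd 8.3 server

upgrade_cmd() {
    echo "$NEW_BIN/pg_dump -p $OLD_PORT $1 | $NEW_BIN/psql -p $NEW_PORT $1"
}

# Review the command, then run it with:  eval "$(upgrade_cmd mydb)"
```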
Re: [GENERAL] Pains in upgrading to 8.3
Magnus Hagander wrote:
> Dave Page wrote:
> > On Fri, Feb 15, 2008 at 4:21 PM, Tony Caduto [EMAIL PROTECTED] wrote:
> > > paul rivers wrote:
> > > > Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me. However, unlike the blogger you cite, I read the directions before, not after, attempting it.
> > >
> > > The blogger has a point about pg_dump and restore; it could be much better. For example, the backup process could be part of the server core, and instead of having a fat client where most of the process is running on the client, an API could be used where the backup is generated on the server, with options to leave it on the server or transfer it to the client's PC.
> >
> > Not really an option - the reason it's recommended to use the new pg_dump version with the older server when upgrading is to allow the dump to be made in the way most compatible with the new server, effectively doing some of the upgrade process as part of the dump operation.
>
> For the case of upgrading, it wouldn't work. But there are certainly other cases where it would help. Say from your central pgadmin console administering 10 servers from 3 different major release trees :-(
>
> It can be done with commandline pg_dump, but it means you have to have three different installs on your management or backup or whatever machine. Those cases would certainly be easier if you could just call a backup API on the server that would feed you the data... (yes, there are ways to do it with ssh tunneling and whatever, but that's yet another external service that has to be set up and configured)

Using the new pg_dump for dumping older versions during an upgrade is just inconvenient and something we should not need to do. At the worst we should have a way for us to upgrade the older version of pg_dump with whatever functionality we need, and just tell people to be running the most recent minor release before upgrading. What cases in the past have needed the new pg_dump?
-- 
Bruce Momjian [EMAIL PROTECTED]  http://momjian.us
EnterpriseDB  http://postgres.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
[GENERAL] Pains in upgrading to 8.3
I'm glad I didn't go from 8.2.3 to 8.3 straight! http://ogasawalrus.com/blog/node/462
Re: [GENERAL] Pains in upgrading to 8.3
On Fri, Feb 15, 2008 at 8:21 AM, Phoenix Kiula [EMAIL PROTECTED] wrote:
> I'm glad I didn't go from 8.2.3 to 8.3 straight!

Either way, you need to update to 8.2.6.
Re: [GENERAL] Pains in upgrading to 8.3
Phoenix Kiula wrote:
> I'm glad I didn't go from 8.2.3 to 8.3 straight! http://ogasawalrus.com/blog/node/462

Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me. However, unlike the blogger you cite, I read the directions before, not after, attempting it.

Paul
Re: [GENERAL] Pains in upgrading to 8.3
On Fri, Feb 15, 2008 at 4:21 PM, Tony Caduto [EMAIL PROTECTED] wrote:
> paul rivers wrote:
> > Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me. However, unlike the blogger you cite, I read the directions before, not after, attempting it.
>
> The blogger has a point about pg_dump and restore; it could be much better. For example, the backup process could be part of the server core, and instead of having a fat client where most of the process is running on the client, an API could be used where the backup is generated on the server, with options to leave it on the server or transfer it to the client's PC.

Not really an option - the reason it's recommended to use the new pg_dump version with the older server when upgrading is to allow the dump to be made in the way most compatible with the new server, effectively doing some of the upgrade process as part of the dump operation.

-- 
Dave Page
EnterpriseDB UK: http://www.enterprisedb.com
The Oracle-compatible database company
Re: [GENERAL] Pains in upgrading to 8.3
Tony Caduto wrote:
> paul rivers wrote:
> > Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me. However, unlike the blogger you cite, I read the directions before, not after, attempting it.
>
> The blogger has a point about pg_dump and restore; it could be much better. For example, the backup process could be part of the server core, and instead of having a fat client where most of the process is running on the client, an API could be used where the backup is generated on the server, with options to leave it on the server or transfer it to the client's PC.
>
> Using pg_dump remotely is becoming a pain because it's not really backwards compatible with earlier releases, so you end up having to have multiple copies lying around to use on different server versions. While Firebird is mostly inferior, its backup system is much nicer than PostgreSQL's. Firebird uses a backup API, so if you back up remotely there is no fat client needed, and it eliminates all the dependency issues on the client side. The client access library implements the API and that's it.
>
> You could of course hack something similar on PGSQL by using SSH and remotely executing pg_dump on the server, but that does not really help on Windows servers, where SSH is not a common thing. The backup data is coming back to the client regardless, so why not just return it as a result set? Just my opinion on the matter, no flames please.

I agree with you 100% - it would be nice if this weren't necessary, so no flames intended! It's just that if the blogger is going to use a software package, it's in his/her best interests to rtfm. It's no good to write, say, a lot of tricky SQL that depends on transactional control and properties of certain isolation levels, and then be surprised when in MySQL I get odd results, especially when my tables span storage engine types. If I did that, I would blame myself, not MySQL, even if I also thought MySQL should reconsider the behavior. MySQL did warn me, after all, in the docs.

I do agree it would be nice to change this aspect, and no, I've no clue how hard it would be. As a model of ease and flexibility, Microsoft's SQL Server is very good in this respect, probably the easiest I've ever worked with (at least from v2000 to v2005; prior version upgrades were a little rockier). Hot backups of full databases via T-SQL commands, in-place upgrades that convert page structures as necessary, turning archive log mode on/off dynamically, differential vs. incremental backups, backups by tablespace, etc. All in all, they got that part of their engine mostly right, excepting problems in 2000 with relocating master database files (and they got a nice head-start in that direction from Sybase).

Paul
Re: [GENERAL] Pains in upgrading to 8.3
Dave Page wrote:
> On Fri, Feb 15, 2008 at 4:21 PM, Tony Caduto [EMAIL PROTECTED] wrote:
> > paul rivers wrote:
> > > Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me. However, unlike the blogger you cite, I read the directions before, not after, attempting it.
> >
> > The blogger has a point about pg_dump and restore; it could be much better. For example, the backup process could be part of the server core, and instead of having a fat client where most of the process is running on the client, an API could be used where the backup is generated on the server, with options to leave it on the server or transfer it to the client's PC.
>
> Not really an option - the reason it's recommended to use the new pg_dump version with the older server when upgrading is to allow the dump to be made in the way most compatible with the new server, effectively doing some of the upgrade process as part of the dump operation.

For the case of upgrading, it wouldn't work. But there are certainly other cases where it would help. Say from your central pgadmin console administering 10 servers from 3 different major release trees :-(

It can be done with commandline pg_dump, but it means you have to have three different installs on your management or backup or whatever machine. Those cases would certainly be easier if you could just call a backup API on the server that would feed you the data... (yes, there are ways to do it with ssh tunneling and whatever, but that's yet another external service that has to be set up and configured)

I'm not saying it's worth the work and potential downsides, just that there are clear upsides :-)

//Magnus
Re: [GENERAL] Pains in upgrading to 8.3
Tony Caduto [EMAIL PROTECTED] writes:
> paul rivers wrote:
> > However, unlike the blogger you cite, I read the directions before, not after, attempting it.
>
> The blogger has a point about pg_dump and restore,

Does he? He claims it didn't work, but there are no details about what went wrong. He also seems entirely misinformed on the difference between portable and PG-specific pg_dump output.

			regards, tom lane
Re: [GENERAL] Pains in upgrading to 8.3
On Fri, Feb 15, 2008 at 10:21:16PM +0800, Phoenix Kiula wrote:
> http://ogasawalrus.com/blog/node/462

Reading it more carefully, it sounds to me like that was the first read.
Re: [GENERAL] Pains in upgrading to 8.3
On Fri, 15 Feb 2008, Tom Lane wrote:
> He claims it didn't work, but there are no details about what went wrong. He also seems entirely misinformed on the difference between portable and PG-specific pg_dump output.

I just left a note on this and related subjects on the blog. If you search for "postgresql upgrade 8.3" on Google, that post comes back as hit #5 already, and it would be good to shut down some of the misunderstandings there (PostgreSQL doesn't recreate the databases during the restore process?) before they get any more publicity.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] Pains in upgrading to 8.3
paul rivers wrote:
> Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me. However, unlike the blogger you cite, I read the directions before, not after, attempting it.

The blogger has a point about pg_dump and restore; it could be much better. For example, the backup process could be part of the server core, and instead of having a fat client where most of the process is running on the client, an API could be used where the backup is generated on the server, with options to leave it on the server or transfer it to the client's PC.

Using pg_dump remotely is becoming a pain because it's not really backwards compatible with earlier releases, so you end up having to have multiple copies lying around to use on different server versions. While Firebird is mostly inferior, its backup system is much nicer than PostgreSQL's. Firebird uses a backup API, so if you back up remotely there is no fat client needed, and it eliminates all the dependency issues on the client side. The client access library implements the API and that's it.

You could of course hack something similar on PGSQL by using SSH and remotely executing pg_dump on the server, but that does not really help on Windows servers, where SSH is not a common thing. The backup data is coming back to the client regardless, so why not just return it as a result set? Just my opinion on the matter, no flames please.

Thanks,
Tony
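The SSH workaround mentioned above amounts to running the server's own pg_dump remotely and streaming the output back down the pipe, so the client needs no pg_dump install at all. A sketch, with a made-up host and database name, and assuming pg_dump is on the remote PATH; the command is built as a string so it can be reviewed before running.

```shell
#!/bin/sh
# Sketch of remote backup over SSH: the server-side pg_dump (which always
# matches the server version) writes a custom-format (-Fc) dump back over
# the SSH connection. Host, database, and file names are illustrative.
remote_dump_cmd() {
    host=$1; db=$2
    echo "ssh $host pg_dump -Fc $db > $db.dump"
}

# Review, then run with:  eval "$(remote_dump_cmd db1.example.com mydb)"
```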
Re: [GENERAL] Pains in upgrading to 8.3
On Fri, Feb 15, 2008 at 8:21 AM, Phoenix Kiula [EMAIL PROTECTED] wrote:
> I'm glad I didn't go from 8.2.3 to 8.3 straight! http://ogasawalrus.com/blog/node/462

If only he were on Debian or Ubuntu, he could have run pg_upgradecluster and been done.
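For the curious, the Debian/Ubuntu route mentioned above comes from the postgresql-common package, which wraps the whole dump/reload dance in one command. A sketch, assuming the default cluster name "main"; the command is built as a string so it can be checked before running it as root.

```shell
#!/bin/sh
# Sketch of the Debian/Ubuntu shortcut: pg_upgradecluster dumps the old
# cluster and reloads it into a newly created one in a single step. The
# version and cluster name are the usual defaults, assumed here.
upgradecluster_cmd() {
    echo "pg_upgradecluster $1 $2"   # e.g. version "8.2", cluster "main"
}

# Review, then (as root):  eval "$(upgradecluster_cmd 8.2 main)"
# Afterwards, pg_lsclusters should show both the old and the new cluster.
```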
Re: [GENERAL] Pains in upgrading to 8.3
Magnus Hagander wrote:
> For the case of upgrading, it wouldn't work. But there are certainly other cases where it would help. Say from your central pgadmin console administering 10 servers from 3 different major release trees :-(
>
> It can be done with commandline pg_dump, but it means you have to have three different installs on your management or backup or whatever machine. Those cases would certainly be easier if you could just call a backup API on the server that would feed you the data... (yes, there are ways to do it with ssh tunneling and whatever, but that's yet another external service that has to be set up and configured)
>
> I'm not saying it's worth the work and potential downsides, just that there are clear upsides :-)

Exactly - I didn't necessarily mean the blogger had a point about upgrades in general, just that pg_dump has room for improvement. Hey, maybe a backup API is something for the Google Summer of Code thing; it would be really nice to have, and it would make general backups much easier from an admin point of view.

Later,
Tony