Re: [GENERAL] timestamps, formatting, and internals
On 2012-05-18, David Salisbury salisb...@globe.gov wrote:

> So one question I have is if there is a way to set PG the way Oracle
> does it (set nls_date_format = '...') so I can query and see exactly
> what PG is seeing, even to the microseconds?

Probably not, but try:

    set datestyle to 'ISO';

> Is there a config parameter I can set in PG so that calculations are
> done only to the second?

No, but you can truncate explicitly:

    date_trunc('second', your_timestamp_expression)

The query:

    show integer_datetimes;

should return 'on', which means timestamps have microsecond precision. If it returns 'off', your database was built with floating-point timestamps and equality tests will be unreliable.

> It seems this join doesn't always find a record that's closest to solar
> noon, and therefore drops the summary and join record altogether.

You didn't show the actual join, only the WHERE clause. Given your task, I would create CTEs finding the first record before and after local solar noon, then combine them with a UNION and feed the result of that to another DISTINCT ON to filter the most appropriate record for each site.

-- 
⚂⚃ 100% natural

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
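A rough sketch of the CTE + UNION + DISTINCT ON approach described above. The table and column names here (a hypothetical readings table with a precomputed solar_noon column per row) are made up for illustration, since the original query was not posted:

```sql
WITH before_noon AS (
    SELECT DISTINCT ON (site_id) *
    FROM readings
    WHERE reading_time <= solar_noon
    ORDER BY site_id, reading_time DESC   -- last record before solar noon
),
after_noon AS (
    SELECT DISTINCT ON (site_id) *
    FROM readings
    WHERE reading_time > solar_noon
    ORDER BY site_id, reading_time ASC    -- first record after solar noon
),
candidates AS (
    SELECT * FROM before_noon
    UNION ALL
    SELECT * FROM after_noon
)
-- Of the (at most two) candidates per site, keep the one closest to noon.
SELECT DISTINCT ON (site_id) *
FROM candidates
ORDER BY site_id, abs(extract(epoch FROM reading_time - solar_noon));
```

Because each site contributes at most one row from each CTE, the final DISTINCT ON only has to choose between two candidates, so a site is never dropped as long as it has at least one reading.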
Re: [GENERAL] PG vs MSSQL language comparison ?
On 26/05/2012 06:36, Grant Allen wrote:

> On 26/05/2012 00:04, Andreas wrote:
>> Hi, I'm not into comparing which DBMS is better, as we all know...
>> kind of... well... I'd like to find resources to look up how one can
>> do X in MSSQL when one knows how it is done in PG's SQL, and the
>> other way around.
>> regards Andreas
>
> Hi Andreas,
> It's a little out of date (I've been meaning to update it for about 2
> years now) and has some gaps, but try the Database Rosetta Stone.
> http://www.grantondata.com/community/dbrosettastone.html

Very out of date... some examples:

Statistics Gathering:
- CPU Costing: says no, but it is YES.
- Disk/IO Costing: says no, but it is YES, and has been for a long time.

Query Management:
- Query/Resource Governor: says no, but it is YES (since v2008).
- View historic queries in system cache: says no, but it is YES (since v2005).

Parallelism:
- Parallel Sorts: says ?, but it is YES (for a long time now).
- Parallel Index (re)build: says ?, but it is YES (since v2005).
- Parallel (table) Reorganisation: says ?, but it is YES (for a long time now).

The same goes for data types: DATE, TIME, and DATETIME WITH TIME ZONE were added in the 2008 version. And some indications are wrong. Example: CLOB Datatype says no, but it is yes with the text type, almost since v7 (1999!). Totally incomplete for date functions...

> Let me know if you spot anything missing you'd like updated.
> Ciao Fuzzy :-)

-- 
Frédéric BROUARD - expert SGBDR et SQL - MVP SQL Server - 06 11 86 40 66
Le site sur le langage SQL et les SGBDR : http://sqlpro.developpez.com
Enseignant Arts Métiers PACA, ISEN Toulon et CESI/EXIA Aix en Provence
Audit, conseil, expertise, formation, modélisation, tuning, optimisation
*** http://www.sqlspot.com *
Re: [GENERAL] Forcefully adding a CHECK constraint
On Sat, 2012-05-26 at 22:06 +0300, Catalin(ux) M. Boie wrote:

> Hello. Thanks for the answer. I really want to avoid reading the whole
> table. It is too expensive, and with the proposed feature it would not
> be needed. I think it is much faster to forcefully add the check if you
> know the range of data. What do you think?

Why not just create the CHECK constraint as NOT VALID, and never validate it? It will still enforce the constraint for new rows; it just won't validate it against your old data, which sounds like what you want.

Regards,
	Jeff Davis
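Using the old_stats table and itime column from the original post (the constraint name and cutoff literal here are illustrative), the NOT VALID approach would look something like:

```sql
-- Added without scanning any existing rows; the constraint is enforced
-- for all inserts and updates from this point on.
ALTER TABLE old_stats
    ADD CONSTRAINT old_stats_itime_check
    CHECK (itime < '2012-04-01') NOT VALID;
```

If the old data is never re-validated, the constraint simply remains marked NOT VALID in the catalog while still guarding new writes.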
Re: [GENERAL] PG vs MSSQL language comparison ?
On Sat, May 26, 2012 at 7:04 AM, Andreas maps...@gmx.net wrote:

> I'd like to find resources to look up how one can do X in MSSQL when
> one knows how it is done in PG's SQL, and the other way around.

Here's another resource, going into more detail:
http://troels.arvin.dk/db/rdbms/

Regards,
Marti
[GENERAL] Re: [GENERAL] Forcefully adding a CHECK constraint
Hello. Thanks for the answer. I really want to avoid reading the whole table. It is too expensive, and with the proposed feature it would not be needed. I think it is much faster to forcefully add the check if you know the range of data. What do you think?

-- 
Catalin(ux) M. BOIE
http://kernel.embedromix.ro

----- Reply message -----
From: Jeff Davis pg...@j-davis.com
To: Catalin(ux) M. BOIE ca...@embedromix.ro
Cc: pgsql-general@postgresql.org
Subject: [GENERAL] Forcefully adding a CHECK constraint
Date: Sat, May 26, 2012 20:48

On Tue, 2012-05-15 at 12:52 +0300, Catalin(ux) M. BOIE wrote:

> The old_stats table is so big that I cannot afford to add a check
> constraint. But I know that all values of the itime field are before
> 2012-04, so it would be great if I could run something like:
>
>     ALTER TABLE old_stats ADD CONSTRAINT xxx
>         CHECK (itime < 2012_04_timestamp) FORCE;
>
> I never looked at the PostgreSQL sources, but the commit "Enable CHECK
> constraints to be declared NOT VALID"
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=897795240cfaaed724af2f53ed2c50c9862f951f
> inspired me to dive in. Is the PostgreSQL team willing to accept such
> a feature?

It looks like you already found the answer! Create the constraint using NOT VALID, and then sometime later (when you can afford the full scan) do a VALIDATE CONSTRAINT. Unfortunately, this is only available in 9.2, which is still in beta.
http://www.postgresql.org/docs/9.2/static/sql-altertable.html

CHECK constraints don't use indexes, so CREATE INDEX CONCURRENTLY doesn't help you.

Regards,
	Jeff Davis
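The two-step procedure Jeff describes, sketched against the old_stats table from the post (constraint name and cutoff literal are made up for the example; the VALIDATE step requires 9.2):

```sql
-- Step 1: cheap; no scan of existing rows, constraint enforced for
-- new writes immediately.
ALTER TABLE old_stats
    ADD CONSTRAINT old_stats_itime_check
    CHECK (itime < '2012-04-01') NOT VALID;

-- Step 2: later, when a full scan is affordable; this scans the table
-- to check the old rows and then marks the constraint as valid.
ALTER TABLE old_stats VALIDATE CONSTRAINT old_stats_itime_check;
```

Splitting the operation this way keeps the ALTER TABLE lock window short, deferring the expensive scan to a time of your choosing.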
[GENERAL] Attempting to do a rolling move to 9.2Beta (as a slave) fails
Here's what I'm trying to do in testing 9.2Beta1.

The current configuration is a master and a hot standby at a diverse location for both hot swap and online backup. Both are archived regularly so if something goes south I can recover (to either as a master.)

I am attempting to validate the path forward to 9.2, and thus tried the following:

1. Build 9.2Beta1; all fine.
2. Run a pg_basebackup from the current master machine (running 9.1) to a new directory on the slave machine, using the 9.2Beta1 pg_basebackup executable.
3. Run a pg_upgrade against that from the new binary directory, producing a 9.2Beta1 data store.
4. Attempt to start the result as a SLAVE against the existing 9.1 master.

Everything is OK until I try to start the result as a slave. I would think I should be able to, since this is exactly the procedure (minus the upgrade) that I used to get the slave in operation in the first place (although I did the archive/dump/copy to the slave machine manually rather than use pg_basebackup to get it.)

But the last step fails, claiming that wal_level was set to minimal when the WAL records were written. No it wasn't. Not only was it not set that way on the master where the base backup came from, it wasn't during the upgrade either, nor is it set that way on the new candidate slave.

Is this caused by the version mismatch? Note that it does NOT bitch about the versions not matching.

For obvious reasons I'm not interested in rolling the production master up to 9.2 until it's released, but running a second instance of my HA code against it as a slave would allow me to perform a very complete set of tests against 9.2Beta1 without any hassle or operational risks, yet keep the full working data set available and online during the testing.

Do I need to run a complete parallel environment instead of trying to attach a 9.2Beta1 slave to an existing 9.1 master? (And if so, why doesn't the code complain about the mismatch instead of the bogus WAL message?)
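For what it's worth, the wal_level claim in the message can be double-checked on each instance with a couple of SHOW commands (run in psql on the master, and on the candidate slave once it is up):

```sql
-- On 9.1, streaming replication requires 'hot_standby'
-- ('minimal' and 'archive' are the other possible values).
SHOW wal_level;

-- Confirms which major version each instance is actually running.
SHOW server_version;
```

If the master really reports hot_standby, the "minimal" complaint from the 9.2 slave is indeed misleading and the cross-version mismatch is the more likely culprit.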
-- -- Karl Denninger /The Market Ticker ®/ http://market-ticker.org Cuda Systems LLC
Re: [GENERAL] Attempting to do a rolling move to 9.2Beta (as a slave) fails
Hi Karl,

On Sun, May 27, 2012 at 9:18 PM, Karl Denninger k...@denninger.net wrote:

> Here's what I'm trying to do in testing 9.2Beta1. The current
> configuration is a master and a hot standby at a diverse location for
> both hot swap and online backup. Both are archived regularly so if
> something goes south I can recover (to either as a master.)

Okay

> 1. Build 9.2Beta1; all fine.
> 2. Run a pg_basebackup from the current master machine (running 9.1) to
>    a new directory on the slave machine, using the 9.2Beta1 pg_basebackup
>    executable.
> 3. Run a pg_upgrade against that from the new binary directory, producing
>    a 9.2Beta1 data store.
> 4. Attempt to start the result as a SLAVE against the existing 9.1 master.

Hmm - that's likely a problem: "In general, log shipping between servers running different major PostgreSQL release levels is not possible." [1]

> Is this caused by the version mismatch?

Probably.

> Do I need to run a complete parallel environment instead of trying to
> attach a 9.2Beta1 slave to an existing 9.1 master? (and if so, why
> doesn't the code complain about the mismatch instead of the bogus WAL
> message?)

Slony [2] or PGBouncer+Londiste [3] should allow you to do this in an integrated fashion. [4]

Cheers,

Jan

[1] http://www.postgresql.org/docs/current/static/warm-standby.html
[2] http://slony.info/
[3] http://itand.me/zero-downtime-upgrades-of-postgresql-with-pgb
[4] http://www.postgresql.org/docs/current/static/different-replication-solutions.html
Re: [GENERAL] Attempting to do a rolling move to 9.2Beta (as a slave) fails
On 5/27/2012 11:08 PM, Jan Nielsen wrote:

> Hmm - that's likely a problem: "In general, log shipping between servers
> running different major PostgreSQL release levels is not possible." [1]
>
>> Is this caused by the version mismatch?
>
> Probably.

Then the error message is wrong :-)

> Slony [2] or PGBouncer+Londiste [3] should allow you to do this in an
> integrated fashion. [4]

I ran Slony for quite a while before 9.x showed up; I could put it back into use for a while, but I really like the integrated setup that exists now with 9.x. I'll look at doing a parallel setup, but it will be more limited in what I can actually validate against in terms of workload than the above would have been...

-- 
-- Karl Denninger
/The Market Ticker ®/
http://market-ticker.org
Cuda Systems LLC