Re: [GENERAL] PostgreSQL Developer Best Practices
On Wed, Aug 26, 2015 at 5:23 AM, rob stone wrote:
> On Tue, 2015-08-25 at 20:17 -0400, Melvin Davidson wrote:
> > I think a lot of people here are missing the point. I was trying to
> > give examples of natural keys, but a lot of people are taking great
> > delight in pointing out exceptions to the examples rather than
> > understanding the point. So for the sake of argument, a natural key is
> > something that is in itself unique and for which the possibility of a
> > duplicate does not exist.
> > Before ANYONE continues to insist that a serial id column is good,
> > consider the case where the number of tuples will exceed a bigint.
> > Don't say it cannot happen, because it can.
> > However, if you have an alphanumeric field, let's say varchar(50), and
> > it's guaranteed that it will never have a duplicate, then THAT is a
> > natural primary key, and it beats the hell out of a generic "id" field.
> >
> > Further to the point: since I started this thread, I am holding to it
> > and will not discuss "natural primary keys" any further.
> >
> > Other suggestions for good PostgreSQL developer database (not web app)
> > guidelines are still welcome.
>
> Funny how Melvin's attempt to bring order to the chaos ended up as a
> discussion about primary keys.
>
> We once hired a "genius" to design an application to handle fixed
> assets. Every table had a primary key named "id". Some were integer and
> some were character, so the foreign key columns in child tables had to
> be named differently, and writing the joins was complex.
>
> I also know of an airline reservation system where you are unable to
> alter your e-mail address; it apparently needs a DBA-type person to
> make the change. I can only guess that the e-mail address is used as a
> foreign key in one or more tables. As well as assigning you a frequent
> flyer number, they also assign another integer identifier. A bit of
> common sense goes a long way when designing an application.
> Cheers,
> rob
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

I am in favour of using a BIGINT "id" as the primary key in each table I
create. I have found that fields in my tables that I thought would be
unique end up not being so in the longer term, and their values may also
need to be updated for some reason.

I use PRIMARY KEY (id), where id is of type BIGINT, on each table I
create, and I use a sequence to provide the default value for this field.
I create one such sequence object per table and then use it in the table
definition. For example, given a sequence "some_schema.some_table_seq"
for table "some_schema.some_table", the "id" field in the definition of
"some_schema.some_table" looks like this:

id BIGINT NOT NULL DEFAULT NEXTVAL('some_schema.some_table_seq')

When I use this "id" field as a foreign key in another table, I prefix it
with the name of its parent table followed by two underscores, as shown
below:

FOREIGN KEY (some_table__id) REFERENCES some_schema.some_table (id) ON UPDATE CASCADE ON DELETE CASCADE

For composite keys that are unique (for now) I create a unique constraint.

Allan.
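Put together, Allan's convention might look like the following DDL sketch
(the schema, table, and column names are illustrative, following his
example):

```sql
CREATE SEQUENCE some_schema.some_table_seq;
CREATE SEQUENCE some_schema.child_table_seq;

CREATE TABLE some_schema.some_table (
    id BIGINT NOT NULL DEFAULT NEXTVAL('some_schema.some_table_seq'),
    -- ... other columns ...
    PRIMARY KEY (id)
);

-- A child table names its FK column <parent_table>__id:
CREATE TABLE some_schema.child_table (
    id             BIGINT NOT NULL DEFAULT NEXTVAL('some_schema.child_table_seq'),
    some_table__id BIGINT NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (some_table__id) REFERENCES some_schema.some_table (id)
        ON UPDATE CASCADE ON DELETE CASCADE
);
```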
[GENERAL] backup and archive postgresql data older than 6 months
Hi,

Are there scripts which take a backup of a PostgreSQL database, archive
data older than 6 months, push it to a remote server using scp or rsync,
and purge/clean the local data on the hard disk at the same time? In
other words, at any given time we would have only six months of
PostgreSQL data on the database server.

Any help will be highly appreciated.

Regards,
Kaushal
Re: [GENERAL] backup and archive postgresql data older than 6 months
On 15/08/26 19:46, Kaushal Shriyan wrote:
> Hi,
>
> Are there scripts which takes backup of postgresql database and archive
> data older than 6 months and push it to a remote server using scp or
> rsync method and purge/clean the local data on the hard disk at the same
> time meaning at any given time we have only six months of postgresql
> data on the postgresql database server.

Have you looked at Barman ( http://www.pgbarman.org/ )?

Regards

Ian Barwick

--
Ian Barwick - http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services
Re: [GENERAL] PostgreSQL Developer Best Practices
On Wednesday 26 August 2015 10:27:04 Gavin Flower wrote:
> Actually I would suggest standardising on singular names, not JUST
> because that is the standard I prefer! :-)
>
> But (also) because:
>
> 1. Singular words tend to be shorter
>
> 2. Plurals are more ambiguous wrt spelling
>
> 3. There are other good reasons, that I've forgotten for now :-(
>    (but I remember having them!!!)

Oh, so you name your tables 'fish' and 'crow' instead of 'school' and
'murder'? Such wasted opportunities :p

--
Vincent de Phily
Re: [GENERAL] PostgreSQL Developer Best Practices
On Wed, Aug 26, 2015 at 7:03 AM, Vincent de Phily wrote:
> On Wednesday 26 August 2015 10:27:04 Gavin Flower wrote:
> > Actually I would suggest standardising on singular names, not JUST
> > because that is the standard I prefer! :-)
> > [...]
>
> Oh, so you name your tables 'fish' and 'crow' instead of 'school' and
> 'murder'? Such wasted opportunities :p

OTOH, either is better than some insane DBA who calls them "TableOf"
(camel case in quotes to preserve case) to be "self documenting and easy
to read" (likely a COBOL programmer in a former life). Oh, and then, "to
make it simple", all the SQL uses AS to "alias" each table name to a
single alphabetic character in [a-z].

--
Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.
Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.
He's about as useful as a wax frying pan.
10 to the 12th power microphones = 1 Megaphone
Maranatha! <><
John McKown
Re: [GENERAL] PostgreSQL Developer Best Practices
On 26-08-2015 10:13, Allan Kamau wrote:
> On Wed, Aug 26, 2015 at 5:23 AM, rob stone wrote:
> > [...]
>
> I am in favour of using BIGINT "id" for the primary key in each table I
> create. [...] When I use this "id" field as a foreign key in another
> table, I would prefix it with the name of its parent table followed by a
> couple of underscores. [...]

This reminds me of our database professor, who told us in the last
lecture that this style of naming is poor practice and that we should
avoid it at all costs if we can.

Cheers,
Hristo
Re: [GENERAL] backup and archive postgresql data older than 6 months
On Wed, 26 Aug 2015 10:46:53, Kaushal Shriyan wrote:
> Hi,
>
> Are there scripts which takes backup of postgresql database and archive
> data older than 6 months [...]

What system are you on? On Linux, you can use logrotate to rotate your
backups; just add a file to /etc/logrotate.d. Here is what I use to keep
a weekly backup over one year:

/var/backups/marica/marica.gz {
    weekly
    missingok
    rotate 52
    notifempty
}

And here is a naive bash script to send a dump file to a remote server;
it's launched by cron: http://pastebin.com/Xg4wXdjy

--
Salutations, Vincent Veyron
https://legalcase.libremen.com/
Legal case, contract and insurance claim management software
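For the purge half of the original question (keep only ~6 months of local
dumps), a minimal sketch using GNU find; the backup directory and the
dump-file naming are assumptions, not something from the thread:

```shell
# Prune local dump files older than ~6 months (183 days).
backup_dir=$(mktemp -d)   # stand-in for your real backup directory

# Simulate one stale and one fresh dump so the effect is visible.
touch -d "200 days ago" "$backup_dir/2015-02-01.dump.gz"
touch "$backup_dir/2015-08-26.dump.gz"

# Delete anything older than ~6 months; run this after the scp/rsync
# step has confirmed the remote copy exists.
find "$backup_dir" -name '*.dump.gz' -mtime +183 -delete

ls "$backup_dir"
```

Note this only manages backup files on disk; pulling 6-month-old rows
out of the live database is a separate (and harder) problem, as Adrian
points out elsewhere in the thread.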
Re: [GENERAL] PostgreSQL Developer Best Practices
On Tue, 25 Aug 2015 18:57:28 -0400, Neil Tiffin wrote:
> On Aug 25, 2015, at 1:38 PM, Karsten Hilbert wrote:
> > > In most cases developers don't care about index, unique, foreign
> > > key, or primary key names (from a coding standpoint)
> >
> > Until the day they'd like to write a reliable database change script.
>
> Not sure I understand. Once the object is created the name is set; it
> does not change, so I don't understand why it is not possible to write
> a reliable database change script. Dump and restore maintain the name.
> Of course every project has periodic scripts that need to run, so these
> objects would, if they are dropped or manipulated in the script, have
> to be manually named, especially during development, since the whole
> database might be dropped and recreated multiple times. My original
> comment covered that situation. My projects typically have many, many
> objects that, once created, are not referred to again unless a DBA is
> doing some tuning or troubleshooting; in that case, the DBA just looks
> up the name.
>
> I can see that if, say, 2 years later you want to create a development
> database from the original SQL that generated the original table
> definitions, that could be problematic. But I have always used the
> current definitions, not the original, and those can be exported with
> the current names.
>
> It just seems like busy work to me, but I would love to be enlightened.
>
> Neil

I suspect he's alluding to migration scripts from an ORM - which are
typically scaffolded with boilerplate, but almost invariably need to be
tweaked in order to effect the desired changes in the database.

- John
Re: [GENERAL] backup and archive postgresql data older than 6 months
On 08/26/2015 03:46 AM, Kaushal Shriyan wrote:
> Hi,
>
> Are there scripts which takes backup of postgresql database and archive
> data older than 6 months [...]
>
> Any help will be highly appreciable.

Not entirely sure what you want here:

1) Do you want to keep a set of backups that end no less than 6 months
ago, or

2) Do you want the actual data in the server to be no more than 6 months
old, with older data pulled out and archived?

If 2):

A) How do you propose determining the age of a record?

B) How do you deal with FK relationships and missing data?

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: [GENERAL] PostgreSQL Developer Best Practices
On Wed, Aug 26, 2015 at 09:04:08AM -0400, John Turner wrote:
> > > > In most cases developers don't care about index, unique, foreign
> > > > key, or primary key names (from a coding standpoint)
> > >
> > > Until the day they'd like to write a reliable database change
> > > script.
> >
> > Not sure I understand. [...] It just seems like busy work to me, but
> > I would love to be enlightened.
>
> I suspect he's alluding to migration scripts from an ORM

Not in the least.

https://github.com/ncqgm/gnumed/tree/master/gnumed/gnumed/server/sql

Karsten
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346
[GENERAL] Point in time recovery
Hi,

Can somebody please explain Point-In-Time Recovery (PITR) in the context
of http://www.pgbarman.org/about/?

Regards,
Kaushal
Re: [GENERAL] PostgreSQL Developer Best Practices
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Melvin Davidson
Sent: Tuesday, August 25, 2015 8:18 PM
To: Adrian Klaver
Cc: Jerry Sievers; John R Pierce <pie...@hogranch.com>; pgsql-general@postgresql.org
Subject: Re: [GENERAL] PostgreSQL Developer Best Practices

> ….
> Before ANYONE continues to insist that a serial id column is good,
> consider the case where the number of tuples will exceed a bigint.
> Don't say it cannot happen, because it can.
>
> Melvin Davidson

Now, it's easy to overcome this limitation: you just make a concatenated
PK (id1, id2) with both columns of BIGINT type.

In general, I see the main advantage of an artificial PK as having NO
NEED to change multiple child tables when the NATURAL key changes in the
parent table. And I never saw a system where the NATURAL key wouldn't
need to be changed eventually.

So, my conclusion: use an artificial PK (for db convenience) and a unique
NATURAL key (for GUI representation).

Regards,
Igor Neyman
Re: [GENERAL] PostgreSQL Developer Best Practices
On Wed, Aug 26, 2015 at 9:45 AM, Igor Neyman wrote:
> Now, it's easy to overcome this limitation.
> You just make concatenated PK (id1, id2) with both columns of BIGINT
> type.

Easy, yes, but at this point I'd probably resort to converting to a
length-limited text field (so as to ensure toasting never occurs).

> In general, I see the main advantage of artificial PK in NO NEED to
> change multiple child tables, when NATURAL key changes in the parent
> table. And I never saw a system where NATURAL key wouldn't need to be
> changed eventually.
>
> So, my conclusion: use artificial PK (for db convenience) and unique
> NATURAL key (for GUI representation).

I haven't really had a chance to implement this formally, but I've had
similar thoughts along these lines. One nice thing about this, in
theory, is that you can have a different lifecycle and usage policy for
those GUI identifiers, and they can be made to be inherently changeable:
a unique tag that you can remove from one entity and reuse on a
different one should the need arise.

David J.
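Igor's conclusion - an artificial PK for the database plus a unique
natural key for display - might be sketched like this (the table and
column names are invented for illustration):

```sql
CREATE TABLE customer (
    id          BIGSERIAL PRIMARY KEY,  -- artificial key: stable, used by FKs
    customer_no TEXT NOT NULL UNIQUE    -- natural key: shown in the GUI, may change
);

CREATE TABLE invoice (
    id          BIGSERIAL PRIMARY KEY,
    customer_id BIGINT NOT NULL REFERENCES customer (id)
);

-- Changing the natural key touches exactly one row; no child table
-- needs to be updated, since invoices reference the surrogate id:
UPDATE customer SET customer_no = 'C-0042-NEW' WHERE id = 1;
```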
Re: [GENERAL] PostgreSQL Developer Best Practices
On 08/25/2015 05:28 PM, Adrian Klaver wrote:
> On 08/25/2015 05:17 PM, Melvin Davidson wrote:
> > I think a lot of people here are missing the point. [...] So for the
> > sake of argument, a natural key is something that in itself is unique
> > and the possibility of a duplicate does not exist.
>
> Correct.
>
> > Before ANYONE continues to insist that a serial id column is good,
> > consider the case where the number of tuples will exceed a bigint.
> > Don't say it cannot happen, because it can.
>
> Yes it can.
>
> > However, if you have an alphanumeric field, let's say varchar 50, and
> > it's guaranteed that it will never have a duplicate, then THAT is a
> > natural primary key
>
> Wrong. Refer back to your above definition. It is definitely possible,
> based on a varchar(50), that a duplicate will happen. A better
> definition would be something along the lines of: a natural key is
> distinct and is derived from the data being stored.
>
> That is a big IF and a guarantee I would not put money on.

Right, here is a perfect example. Generally speaking, if you are storing
a United States company's information, a natural primary key could be an
FEIN. However, there is an exception that would have to be incorporated
into that idea: if the company is a sole proprietorship, the FEIN may
actually be the SSN of the owner, but not necessarily. Then you have to
ask yourself if that matters. It may not, depending on the application
you are building or the reason the data is being stored.

> > key and beats the hell out of a generic "id" field.
> >
> > Further to the point, since I started this thread, I am holding to it
> > and will not discuss "natural primary keys" any further.
>
> That doesn't mean others won't.

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.
Re: [GENERAL] PostgreSQL Developer Best Practices
Melvin Davidson wrote:
> Before ANYONE continues to insist that a serial id column is good,
> consider the case where the number of tuples will exceed a bigint.
> Don't say it cannot happen, because it can.

In practice, it cannot happen. A tuple with a bigint column weighs at
least 32 bytes (in the sense that this is what pg_column_size("table".*)
reports when "table" has only a bigint column). So the size of your
hypothetical table would be at a minimum:

  32 bytes * 2^63 tuples = 2^68 bytes

But a postgres table weighs 32 TB max, or 2^45 bytes (see
http://www.postgresql.org/about/ ), so the table with more rows than a
bigint can count would have to be 2^23 (= 8388608) times bigger than the
biggest possible table.

There's also the fact that COUNT() returns a BIGINT, so the tuples
couldn't even be counted in SQL. That by itself hints that counts of
tuples are expected always to fit in a BIGINT these days.

And what about pg_database_size() returning a bigint? Even if the
hypothetical table were alone in the database, and even if every tuple
occupied only 1 byte instead of 32+, the single table would exceed what
pg_database_size() can report.

Maybe at some point all these will be 128 bits, but that's years ahead.

Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite
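Daniel's arithmetic can be checked in a few lines (a sketch; the 32-byte
minimum tuple size and the 32 TB table limit are taken from his message):

```python
# Back-of-the-envelope check of the numbers above.
bytes_per_tuple = 32                   # minimum for a single-bigint row
tuples_to_exhaust_bigint = 2 ** 63     # distinct values of a signed bigint
max_table_bytes = 32 * 2 ** 40         # PostgreSQL's 32 TB table-size limit

hypothetical_table_bytes = bytes_per_tuple * tuples_to_exhaust_bigint
print(hypothetical_table_bytes == 2 ** 68)                     # True
print(hypothetical_table_bytes // max_table_bytes == 8388608)  # True, i.e. 2^23
```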
Re: [GENERAL] PostgreSQL Developer Best Practices
On 27/08/15 00:03, Vincent de Phily wrote:
> On Wednesday 26 August 2015 10:27:04 Gavin Flower wrote:
> > Actually I would suggest standardising on singular names, not JUST
> > because that is the standard I prefer! :-)
> > [...]
>
> Oh, so you name your tables 'fish' and 'crow' instead of 'school' and
> 'murder'? Such wasted opportunities :p

???
Re: [GENERAL] PostgreSQL Developer Best Practices
On 26/08/2015 19:54, Gavin Flower wrote:
> On 27/08/15 00:03, Vincent de Phily wrote:
> > Oh, so you name your tables 'fish' and 'crow' instead of 'school' and
> > 'murder'? Such wasted opportunities :p
>
> ???

A school of fish and a murder of crows... wonderfully evocative
collective nouns.

Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie
Re: [GENERAL] PostgreSQL Developer Best Practices
----- Original Message -----
From: Gavin Flower

> On 27/08/15 00:03, Vincent de Phily wrote:
> > Oh, so you name your tables 'fish' and 'crow' instead of 'school' and
> > 'murder'? Such wasted opportunities :p
>
> ???

Dish  - Dishes
Fish  - School
Bow   - Bows
Crow  - Murder
Goose - Geese
Moose - Moose
House - Houses
Mouse - Mice

and so on...

Cheers,
George
Re: [GENERAL] PostgreSQL Developer Best Practices
On 26/08/15 12:17, Melvin Davidson wrote:
> [...]
> So for the sake of argument, a natural key is something that in itself
> is unique and the possibility of a duplicate does not exist.
> Before ANYONE continues to insist that a serial id column is good,
> consider the case where the number of tuples will exceed a bigint.
> Don't say it cannot happen, because it can.

If you create tuples in your table at the rate of a million records per
second every hour of every day of every year, it will take over 250,000
years to exceed the value of a bigint!

> [...]
> --
> *Melvin Davidson*
> I reserve the right to fantasize. Whether or not you wish to share my
> fantasy is entirely up to you.

I will refrain from making snarky comments about your sig - tempting
though it might be! :-)
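For what it's worth, the 250,000-year figure checks out (a quick sketch,
assuming a signed 64-bit bigint and a 365-day year):

```python
# How long does it take to exhaust a bigint at one million inserts/second?
rows_per_second = 1_000_000
seconds_per_year = 60 * 60 * 24 * 365
max_bigint = 2 ** 63 - 1

years_to_exhaust = max_bigint / (rows_per_second * seconds_per_year)
print(int(years_to_exhaust))   # 292471 -- comfortably "over 250,000 years"
```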
Re: [GENERAL] PostgreSQL Developer Best Practices
On 27/08/15 06:59, Raymond O'Donnell wrote:
> On 26/08/2015 19:54, Gavin Flower wrote:
> > [...]
> > ???
>
> A school of fish and a murder of crows... wonderfully evocative
> collective nouns.
>
> Ray.

Hmm... reminds me of a story I was told where a programmer had used the
names of his favourite footballers as labels in an assembler program! :-)

-Gavin
[GENERAL] Most effective settings for deleting lots of data?
Hi, I am using Postgres 9.3 and am preparing to migrate to 9.4. As part
of the migration, I was hoping to also delete a bunch of data that is no
longer needed (100M+ rows across several tables). I can fairly trivially
delete the data with a simple statement like this:

DELETE FROM account WHERE id = 1234;

All the foreign keys are set up with "ON DELETE CASCADE", so this will
effectively drop the entire graph, which is what I want. My desire is to
do this as quickly as possible.

The first thing I have done is to disable autovacuuming on all of the
largest tables so that the entire system doesn't lock up, like so:

ALTER TABLE my_data SET (autovacuum_enabled = false, toast.autovacuum_enabled = false);

I was wondering what other settings might best be applied? For instance,
I was thinking that "work_mem" could be bumped up to some ridiculously
large value if needed. I have the benefit of exclusive access to the
database at the time this operation needs to happen, so most options are
on the table.

What settings would you recommend? Also, it just occurred to me that I
should try to disable/drop all indexes (especially since they will be
recreated later) so that those are not updated in the process.

thanks in advance,
--Cory
Re: [GENERAL] Most effective settings for deleting lots of data?
On Wednesday, August 26, 2015 08:25:02 PM Cory Tucker wrote:
> What settings would you recommend? Also, it just occurred to me that I
> should try to disable/drop all indexes (especially since they will be
> recreated later) so that those are not updated in the process.

Don't drop the indexes your foreign keys use to find cascading deletes.
Re: [GENERAL] Most effective settings for deleting lots of data?
On 08/26/2015 02:34 PM, Alan Hodgson wrote:
> On Wednesday, August 26, 2015 08:25:02 PM Cory Tucker wrote:
> > What settings would you recommend? [...]
>
> Don't drop the indexes your foreign keys use to find cascading deletes.

Or do drop the indexes and walk up your dependency graph manually.
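Pulling the advice in this subthread together, one possible sequence for
the exclusive-access case might look like the following (the settings,
index, and table names are illustrative, not tested recommendations):

```sql
-- 1. Stop autovacuum on the big tables so it doesn't fight the delete.
ALTER TABLE my_data SET (autovacuum_enabled = false,
                         toast.autovacuum_enabled = false);

-- 2. Optionally drop secondary indexes, but KEEP the indexes that the
--    cascading foreign keys use to find child rows, or each cascaded
--    delete degenerates into a sequential scan.
DROP INDEX IF EXISTS my_data_some_secondary_idx;

-- 3. Give maintenance operations plenty of memory for the rebuild phase.
SET maintenance_work_mem = '2GB';

-- 4. The delete itself; ON DELETE CASCADE removes the dependent graph.
DELETE FROM account WHERE id = 1234;

-- 5. Recreate indexes, re-enable autovacuum, and reclaim space.
CREATE INDEX my_data_some_secondary_idx ON my_data (some_column);
ALTER TABLE my_data RESET (autovacuum_enabled, toast.autovacuum_enabled);
VACUUM ANALYZE my_data;
```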
Re: [GENERAL] Grouping sets, cube and rollup
On Tue, Aug 25, 2015 at 7:04 PM, Edson Richter wrote:
> Any chance to get those amazing wonderful features backported to 9.4?

You might have some luck merging in the feature yourself if you're so
inclined.

merlin
Re: [GENERAL] Grouping sets, cube and rollup
On 2015-08-26 17:09:26 -0500, Merlin Moncure wrote:
> On Tue, Aug 25, 2015 at 7:04 PM, Edson Richter wrote:
> > Any chance to get those amazing wonderful features backported to 9.4?
>
> you might have some luck merging in the feature yourself if you're so
> inclined.

It's imo too large a feature for that:

 63 files changed, 5255 insertions(+), 618 deletions(-)

and in addition there's a bunch of followup commits fixing things.
There'll be a significant number of conflicts when applying this to 9.4.
More importantly, it breaks catalog compatibility, i.e. you can't patch
postgres and then continue with an existing data directory.

- Andres
Re: [GENERAL] Point in time recovery
On 08/26/2015 08:29 AM, Kaushal Shriyan wrote:
> Hi,
>
> Can somebody please explain about Point-In-Time-Recovery (PITR) in
> context to http://www.pgbarman.org/about/?

From the Postgres end:

http://www.postgresql.org/docs/9.4/static/continuous-archiving.html

From the Barman end:

http://docs.pgbarman.org/#main_features

See "Restoring to a given point in time".

--
Adrian Klaver
adrian.kla...@aklaver.com
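As an illustration of what a point-in-time restore looks like with
Barman, the commands below are placeholders - the server name, backup
ID, target time, and destination path are all assumptions:

```shell
# List the base backups Barman holds for a configured server
barman list-backup main

# Restore a base backup and replay WAL up to a chosen moment in time
barman recover --target-time "2015-08-26 12:00:00" \
    main 20150826T020000 /var/lib/pgsql/9.4/data-restored
```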
[GENERAL] two factor authentication for pgadmin iii
Hello,

I'm going to set up two-factor authentication for a pgAdmin server. I'm
using PostgreSQL 9.4 with pgAdmin III on Linux Mint 17.2 32-bit. I
already have single-factor password authentication, but for better
security I want to require two factors. The authentication factors could
be anything (what people have, what people know, where people are, or
what people are).

Could anybody help me with this?