Re: [GENERAL] Packages for Ubuntu Wily (15.10)
Thanks, this looks like it will do the job. Now we just have to hack the
Puppet Postgres module in order to handle the -testing source. :)

On 16 December 2015 at 16:25, Adrian Klaver wrote:
> On 12/16/2015 04:42 AM, Antony Gelberg wrote:
>> Hi Steve, Adrian and list, sorry for my delay in replying. As I
>> suggested in my OP, there aren't currently 9.3 (or 9.4) packages for
>> Wily in PGDG, which is why I was asking whether whoever is responsible
>> for these packages could flip the switch to build them. It would be
>> ever so helpful.
>
> Take a look at:
> http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg-testing/
>
>> See
>> http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg/9.3/binary-amd64/
>> and
>> http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg/9.4/binary-amd64/
>> - the Packages file is empty, whereas the
>> http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg/9.5/binary-amd64/
>> Packages file contains the relevant data.
>>
>> On 7 December 2015 at 19:43, Steve Crawford
>> <scrawf...@pinpointresearch.com> wrote:
>>
>>   You should be able to add the pgdg repository to your system and
>>   then install through apt as normal. Scroll down to the "PostgreSQL
>>   APT repository" section on this page:
>>   http://www.postgresql.org/download/linux/ubuntu/
>>
>>   Cheers,
>>   Steve
>>
>>   On Mon, Dec 7, 2015 at 9:27 AM, Antony Gelberg
>>   <antony.gelb...@gmail.com> wrote:
>>
>>     Hi all,
>>
>>     We want to run 9.3 on the above distro, which comes with 9.4 as
>>     standard. However, we note that there are only 9.5 packages in the
>>     PGDG repository for 15.10. Can somebody flip the switch to build
>>     these? We really aren't ready to upgrade to 9.4 at the present
>>     time.
>>
>>     Hope somebody can help. :)
>>
>>     Antony
>>
>>     --
>>     http://www.linkedin.com/in/antgel
>>     http://about.me/antonygelberg
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com

--
http://www.linkedin.com/in/antgel
http://about.me/antonygelberg
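For anyone doing this by hand rather than through Puppet, pointing apt at the -testing suite looks roughly like the following. This is a hedged sketch: the suite name and component follow the standard PGDG repository layout, but verify the exact line against the dists/ listing above before relying on it.

```shell
# Add the wily-pgdg-testing source (line format assumed from the usual
# PGDG layout: "deb <base-url> <suite> main <version>").
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ wily-pgdg-testing main 9.3" \
  > /etc/apt/sources.list.d/pgdg-testing.list'

# Import the PGDG signing key, then install 9.3 from the new source.
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-9.3
```

The Puppet module change mentioned above would then amount to emitting the same source line and key from the module's repo configuration.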
Re: [GENERAL] Packages for Ubuntu Wily (15.10)
Hi Steve, Adrian and list, sorry for my delay in replying. As I suggested
in my OP, there aren't currently 9.3 (or 9.4) packages for Wily in PGDG,
which is why I was asking whether whoever is responsible for these
packages could flip the switch to build them. It would be ever so helpful.

See
http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg/9.3/binary-amd64/
and
http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg/9.4/binary-amd64/
- the Packages file is empty, whereas the
http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg/9.5/binary-amd64/
Packages file contains the relevant data.

On 7 December 2015 at 19:43, Steve Crawford wrote:
> You should be able to add the pgdg repository to your system and then
> install through apt as normal. Scroll down to the "PostgreSQL APT
> repository" section on this page:
> http://www.postgresql.org/download/linux/ubuntu/
>
> Cheers,
> Steve
>
> On Mon, Dec 7, 2015 at 9:27 AM, Antony Gelberg wrote:
>
>> Hi all,
>>
>> We want to run 9.3 on the above distro, which comes with 9.4 as
>> standard. However, we note that there are only 9.5 packages in the
>> PGDG repository for 15.10. Can somebody flip the switch to build
>> these? We really aren't ready to upgrade to 9.4 at the present time.
>>
>> Hope somebody can help. :)
>>
>> Antony
>>
>> --
>> http://www.linkedin.com/in/antgel
>> http://about.me/antonygelberg

--
http://www.linkedin.com/in/antgel
http://about.me/antonygelberg
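The empty-versus-populated Packages indexes described above can be checked from the command line. A minimal sketch, using the URLs from the post (requires network access; the grep simply counts package stanzas):

```shell
# Count "Package:" stanzas in a suite's Packages index.
# An empty index prints 0; a populated one (e.g. the 9.5 suite) prints
# a non-zero count.
wget -qO- http://apt.postgresql.org/pub/repos/apt/dists/wily-pgdg/9.3/binary-amd64/Packages \
  | grep -c '^Package:'
```

This is a quicker confirmation than browsing the directory listing by hand.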
[GENERAL] Packages for Ubuntu Wily (15.10)
Hi all,

We want to run 9.3 on the above distro, which comes with 9.4 as standard.
However, we note that there are only 9.5 packages in the PGDG repository
for 15.10. Can somebody flip the switch to build these? We really aren't
ready to upgrade to 9.4 at the present time.

Hope somebody can help. :)

Antony

--
http://www.linkedin.com/in/antgel
http://about.me/antonygelberg
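Once a PGDG source is configured, apt itself can show which server versions the suite actually offers, which is a quick way to confirm the situation described above. A sketch (package names follow the standard PGDG `postgresql-<version>` convention):

```shell
# List the versioned server packages the configured sources provide.
apt-cache search --names-only '^postgresql-9\.[0-9]$'

# Show candidate version and source for a specific release; an empty
# candidate means the suite does not carry that version.
apt-cache policy postgresql-9.3
```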
Re: [GENERAL] Stuck trying to backup large database - best practice?
On Mon, Jan 12, 2015 at 7:08 PM, Adrian Klaver wrote:
> On 01/12/2015 08:40 AM, Antony Gelberg wrote:
>> On Mon, Jan 12, 2015 at 6:23 PM, Adrian Klaver wrote:
>>> On 01/12/2015 08:10 AM, Antony Gelberg wrote:
>>>> On Mon, Jan 12, 2015 at 5:31 PM, Adrian Klaver wrote:
>>>
>>> pg_basebackup has additional features which in your case are creating
>>> issues. pg_dump on the other hand is pretty much a straightforward
>>> data dump, and if you use -Fc you get compression.
>>
>> So I should clarify - we want to be able to get back to the same point
>> as we would once the WAL was applied. If we were to use pg_dump, would
>> we lose out in any way?
>
> pg_dump does not save WALs, so it would not work for that purpose.
>
>> Appreciate insight as to how pg_basebackup is scuppering things.
>
> From the original post it is not entirely clear whether you are using
> the -X or -x options. The command you show does not have them, but you
> mention -Xs. In any case it seems wal_keep_segments will need to be
> bumped up to keep WAL segments around that are being recycled during
> the backup process. How much will depend on determining how fast
> Postgres is using/recycling log segments. Looking at the turnover in
> the pg_xlog directory would be a start.

The original script used -xs, but that didn't make sense, so we used -Xs
in the end; but then we cancelled the backup, as we assumed that we
wouldn't have enough space for it uncompressed. Did we miss something?

I think your suggestion of looking in pg_xlog and tweaking
wal_keep_segments is interesting; we'll take a look, and I'll report back
with findings.

Thanks for your very detailed help.

Antony

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
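Adrian's pg_xlog suggestion can be turned into a back-of-envelope sizing calculation. All figures below are assumed examples, not measurements from the server in this thread; WAL segments are 16MB each by default:

```shell
# Measure recent turnover first, e.g.:
#   find /var/lib/postgresql/9.3/main/pg_xlog -type f -mmin -60 | wc -l
# (path is the default Debian/Ubuntu data directory - an assumption).

# Assumed example figures: a 12-hour backup window and ~40 segments
# recycled per hour means wal_keep_segments must cover their product.
SEGMENTS_PER_HOUR=40
BACKUP_HOURS=12
KEEP=$((SEGMENTS_PER_HOUR * BACKUP_HOURS))
SPACE_GB=$((KEEP * 16 / 1024))   # 16MB per segment
echo "wal_keep_segments = $KEEP (~${SPACE_GB}GB of pg_xlog retained)"
```

The point of the arithmetic: whatever value is chosen must hold segments for the *entire* backup duration, which is why a 430GB-and-climbing backup kept outrunning the default retention.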
Re: [GENERAL] Stuck trying to backup large database - best practice?
On Mon, Jan 12, 2015 at 6:23 PM, Adrian Klaver wrote:
> On 01/12/2015 08:10 AM, Antony Gelberg wrote:
>> On Mon, Jan 12, 2015 at 5:31 PM, Adrian Klaver wrote:
>>> On 01/12/2015 07:20 AM, Antony Gelberg wrote:
>>>>
>>>> pg_basebackup: could not get transaction log end position from server:
>>>> ERROR: requested WAL segment 00042B9F00B4 has already been
>>>> removed
>>>>
>>>> This attempted backup reached 430GB before failing.
>>>
>>> It fails because the WAL file it needs has been removed from under it.
>>
>> Okay. We simply understood that it took too long. Clearly we have a
>> lot to learn about WAL and its intricacies.
>
> See here:
>
> http://www.postgresql.org/docs/9.4/interactive/wal.html

Of course we read the docs before asking here, but really learning about
a subject comes with time. :)

>>>> We were advised on IRC to try -Xs, but that only works with a plain
>>>> (uncompressed) backup, and as you'll note from above, we don't have
>>>> enough disk space for this.
>>>>
>>>> Is there anything else we can do apart from getting a bigger disk
>>>> (not trivial at the moment)? Any best practice?
>>>
>>> What is the purpose of the backup?
>>>
>>> In other words, do you really want the data and the WALs together, or
>>> do you just want the data?
>>
>> No, we just want to be able to restore our data at a later point. (As
>> a secondary point, it's not that clear to me why it would be useful to
>> have both; I'd be interested in some insight.)
>
> Seems you may be better served by pg_dump:
>
> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>
> pg_basebackup has additional features which in your case are creating
> issues. pg_dump on the other hand is pretty much a straightforward data
> dump, and if you use -Fc you get compression.

So I should clarify - we want to be able to get back to the same point
as we would once the WAL was applied. If we were to use pg_dump, would
we lose out in any way?
Appreciate insight as to how pg_basebackup is scuppering things.

> Something I failed to ask in my previous post: how are you determining
> the size of the database?

It's a managed server - the hosting company told us it was 1.8TB. I just
ran the query at
http://stackoverflow.com/questions/2596624/how-do-you-find-the-disk-size-of-a-postgres-postgresql-table-and-its-indexes,
and I don't have the total, but I'd say the actual table data is less,
nearer 1TB at a quick glance.

> In addition, are you talking about a single database or the Postgres
> database cluster?

We only have one database in the cluster, so it's the same thing.

Antony
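The linked StackOverflow query boils down to `pg_total_relation_size`, which counts a table together with its indexes and TOAST data. A sketch of the two checks, run against a live server (the database name `mydb` is a placeholder):

```shell
# Total on-disk size of the current database.
psql -d mydb -c "SELECT pg_size_pretty(pg_database_size(current_database()));"

# Ten largest relations, indexes and TOAST included.
psql -d mydb -c "
  SELECT c.relname,
         pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
  ORDER BY pg_total_relation_size(c.oid) DESC
  LIMIT 10;"
```

The gap between the 1.8TB cluster figure and the ~1TB of table data would typically be indexes, TOAST, bloat, and pg_xlog itself.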
Re: [GENERAL] Stuck trying to backup large database - best practice?
On Mon, Jan 12, 2015 at 5:31 PM, Adrian Klaver wrote:
> On 01/12/2015 07:20 AM, Antony Gelberg wrote:
>>
>> pg_basebackup: could not get transaction log end position from server:
>> ERROR: requested WAL segment 00042B9F00B4 has already been
>> removed
>>
>> This attempted backup reached 430GB before failing.
>
> It fails because the WAL file it needs has been removed from under it.

Okay. We simply understood that it took too long. Clearly we have a lot
to learn about WAL and its intricacies.

>> We were advised on IRC to try -Xs, but that only works with a plain
>> (uncompressed) backup, and as you'll note from above, we don't have
>> enough disk space for this.
>>
>> Is there anything else we can do apart from getting a bigger disk (not
>> trivial at the moment)? Any best practice?
>
> What is the purpose of the backup?
>
> In other words, do you really want the data and the WALs together, or
> do you just want the data?

No, we just want to be able to restore our data at a later point. (As a
secondary point, it's not that clear to me why it would be useful to
have both; I'd be interested in some insight.)

Antony
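For the "just the data" case discussed here, a logical dump and restore round trip might look like the following sketch (database names, user, and paths are placeholders). Note the trade-off raised later in the thread: this restores to the moment of the dump only, with no WAL replay and hence no point-in-time recovery.

```shell
# Compressed custom-format dump; -Fc archives are compressed by default
# and can be restored selectively or in parallel.
pg_dump -Fc -U "$DBUSER" -f /backups/mydb.dump mydb

# Restore into a freshly created database, using four parallel jobs.
createdb -U "$DBUSER" mydb_restored
pg_restore -U "$DBUSER" -d mydb_restored -j 4 /backups/mydb.dump
```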
[GENERAL] Stuck trying to backup large database - best practice?
Hi,

We have a postgres 9.3.x box with 1.3TB free space, and a database of
around 1.8TB. Unfortunately, we're struggling to back it up. When we try
a compressed backup with the following command:

pg_basebackup -D "$BACKUP_PATH/$TIMESTAMP" -Ft -Z9 -P -U "$DBUSER" -w

we get the error:

pg_basebackup: could not get transaction log end position from server:
ERROR: requested WAL segment 00042B9F00B4 has already been
removed

This attempted backup reached 430GB before failing.

We were advised on IRC to try -Xs, but that only works with a plain
(uncompressed) backup, and as you'll note from above, we don't have
enough disk space for this.

Is there anything else we can do apart from getting a bigger disk (not
trivial at the moment)? Any best practice? I suspect that setting up WAL
archiving and/or playing with the wal_keep_segments setting might help,
but as you can probably gather, I'd like to be sure that I'm doing
something sane before I dive in.

Happy to give more detail if required.

Antony
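The two knobs floated at the end of this post translate into postgresql.conf roughly as follows. This is an illustrative fragment only: the values and the archive path are assumptions, not recommendations measured against this server.

```shell
# postgresql.conf fragment (9.3-era settings; values are examples).

# Option 1: retain enough WAL on the primary for the backup to finish.
#   wal_keep_segments = 512     # 512 * 16MB = 8GB of pg_xlog retained

# Option 2: archive each segment as it fills, so no segment is ever
# lost to recycling while a backup runs.
#   wal_level = archive
#   archive_mode = on
#   archive_command = 'test ! -f /wal_archive/%f && cp %p /wal_archive/%f'
```

With option 1, a backup that outlasts the retention window still fails; with option 2, the archive grows until something prunes it, so it needs its own housekeeping.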
[GENERAL] Lost database
Hi,

This problem is related to an sql-ledger database, but given its nature,
I think it's more appropriate here.

Our sql-ledger server had a filesystem corruption. We do have backups,
but it's only now, after reading the PostgreSQL manual, that I know it
would have been a good idea to run pg_dump every now and then, via cron
perhaps. We only have a filesystem backup of /var/lib/postgres.

I have tried restoring the directory to a new server with postgresql not
running. However, upon starting the database, sql-ledger reports
"database not found", and it is also not listed in the psql -l output.
However, when I grep the /var/lib/postgres/data directory for known
sql-ledger strings, they are found. So I'm hoping that there is, um,
hope.

I'm not an expert on Postgres internals (or externals!), but I was
wondering if anybody has an idea on how I could retrieve the data.

(Debian Sarge, pgsql 7.4)

Antony
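For the record, the cron-driven pg_dump this post wishes had been in place is only a couple of lines. A sketch (database name, schedule, and paths are assumptions; the `%` characters must be escaped in crontab syntax):

```shell
# /etc/cron.d/postgres-backup - nightly logical dump at 02:00 as the
# postgres user. Plain SQL piped through gzip keeps it simple on a
# 7.4-era server; rotate or prune the output directory separately.
0 2 * * * postgres pg_dump mydb | gzip > /var/backups/mydb-$(date +\%F).sql.gz
```

Unlike the filesystem copy described above, such a dump restores cleanly with `createdb` plus `psql -f`, even across Postgres versions.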