Hi Daniel,

This does not appear to be the case at all. 

I am running master (cd800151735f977783a8956f7ab9e5dd42104699) and did a 
backup-push to S3. The base cluster directory (/var/lib/pgsql/9.2/data) is 
about 14 GB (the 'base' subdirectory itself is 11 GB):

[root@gw1 data]# du --si base
6.7M base/12865
6.9M base/12870
11G base/19073
6.8M base/1
7.7M base/18521
197k base/pgsql_tmp
11G base

... but there is an additional user-defined tablespace called 'cdr_archive' 
(location /cdr_archive) that has another 34 GB:

[root@gw1 data]# du --si /cdr_archive/
34G /cdr_archive/PG_9.2_201204301/19073
34G /cdr_archive/PG_9.2_201204301
34G /cdr_archive/
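For anyone reproducing this: the tablespaces a base backup ought to include can be enumerated from the pg_tblspc symlinks in the data directory. A minimal sketch, using a throwaway directory in place of the real cluster (point the loop at $PGDATA/pg_tblspc in practice; the OID mirrors the symlink shown further down in this message):

```shell
# Enumerate tablespace OIDs and their on-disk locations via pg_tblspc symlinks.
# A throwaway directory stands in for the real data directory here.
DATADIR=$(mktemp -d)
mkdir -p "$DATADIR/pg_tblspc"
ln -s /cdr_archive "$DATADIR/pg_tblspc/27582562"   # mirrors the cdr_archive link
for link in "$DATADIR"/pg_tblspc/*; do
  printf '%s -> %s\n' "$(basename "$link")" "$(readlink "$link")"
done
# -> 27582562 -> /cdr_archive
```

On 9.2+ the same information should be available in SQL via
SELECT oid, spcname, pg_tablespace_location(oid) FROM pg_tablespace;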

The base dump that was pushed up to S3 by backup-push does not appear to 
have grabbed this tablespace at all. It grabbed the tablespace 
definition/DDL, of course, but not the actual data stored in /cdr_archive:

[root@gw1 ~]# s3cmd ls -H s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/
2014-09-03 22:14       203M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000000.tar.lzo
2014-09-03 22:14       187M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000001.tar.lzo
2014-09-03 22:14       310M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000002.tar.lzo
2014-09-03 22:14       282M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000003.tar.lzo
2014-09-03 22:14       121M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000004.tar.lzo
2014-09-03 22:14       178M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000005.tar.lzo
2014-09-03 22:14       196M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000006.tar.lzo
2014-09-03 22:15       339M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000007.tar.lzo
2014-09-03 22:15       597M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000008.tar.lzo
2014-09-03 22:15       448M  s3://.../PG_Backups/basebackups_005/base_000000010000028D0000009A_00000032/tar_partitions/part_00000009.tar.lzo
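Summing those part sizes confirms the total (a quick sketch with the sizes from the listing hard-coded; in practice you'd pipe the `s3cmd ls -H` output straight into awk):

```shell
# Sum the human-readable part sizes (all reported in MB above).
# Hard-coded for illustration; normally: s3cmd ls -H s3://.../tar_partitions/ | awk ...
printf '%sM\n' 203 187 310 282 121 178 196 339 597 448 \
  | awk '{ sub(/M$/, "", $1); total += $1 }
         END { printf "%d MB total (~%.1f GB)\n", total, total/1024 }'
# -> 2861 MB total (~2.8 GB)
```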

That's only ~2.8 GB compressed, and 48 GB of cluster data isn't going to 
compress down to that. But just in case, I went ahead and did an actual 
backup-fetch, first ensuring that the pg_tblspc symlink was in place on the 
restore target:

[root@walerestore data]# ls -l pg_tblspc
total 0
lrwxrwxrwx 1 postgres postgres 13 Sep  4 00:43 27582562 -> /cdr_archive/

But alas, no such luck:

[root@walerestore data]# ls -l /cdr_archive/
total 0

The restore also completed much too quickly for it to be plausible that this 
tablespace's data was included in it.
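A more direct check than timing would be to list what a pushed partition actually contains. The sketch below simulates that locally with a plain tar standing in for a downloaded part; against S3 you'd first `s3cmd get` a part_*.tar.lzo and decompress it with `lzop -d` (assuming lzop is installed):

```shell
# Simulate inspecting a base-backup partition for tablespace files.
# A locally built tar stands in for a fetched, lzop-decompressed part_*.tar.lzo.
WORK=$(mktemp -d)
mkdir -p "$WORK/base/19073"
touch "$WORK/base/19073/19073"          # a relation file in the default tablespace
tar -C "$WORK" -cf "$WORK/part.tar" base
# If tablespace data had been included, paths under pg_tblspc/ (or the
# tablespace's own directory) would show up in the listing:
if tar -tf "$WORK/part.tar" | grep -q 'pg_tblspc/'; then
  echo "tablespace data present"
else
  echo "no tablespace data in this partition"
fi
# -> no tablespace data in this partition
```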

Barring any option to backup-push that tells it to include these 
tablespaces, I think it's safe to say that this Doesn't Work(TM). :-)

-- Alex

On Friday, August 29, 2014 12:26:51 PM UTC-4, Daniel Farina wrote:
>
> On Fri, Aug 29, 2014 at 9:14 AM, Christophe Pettus <[email protected]> wrote: 
> > Reviewing the code and docs for HEAD, it appears the current situation 
> of user-defined tablespaces is that all of the data in the tablespaces is 
> archived with a backup-fetch without any special formality.  Is this 
> correct? 
>
> That is the intent.  I do not use that code myself and haven't heard 
> about surprises in it.  I'd appreciate reports if it works to expectation. 
>
> If the docs are not clear, please submit a patch. 
>

-- 
You received this message because you are subscribed to the Google Groups 
"wal-e" group.