On 12/02/2014 1:40 AM, Peter Palfrader wrote:
> On Wed, 29 Jan 2014, James Bromberger wrote:
>> Peter, does this sound sensible - the logic for restores, and the
>> setting of the content-disposition header so clients can download
>> directly from the AWS S3 bucket?
> I can totally see that being useful as a backup. I'm still less than
> convinced that offline files are suitable for something that people have
> in their sources.list or browse interactively.
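For reference, the Content-Disposition idea James raises could look like the sketch below: store the header with each object so a browser downloading directly from the bucket gets the original filename rather than the hashed key. The bucket name, key layout, and helper names here are made up for illustration; the real snapshot layout may differ.

```python
# Sketch: attach a Content-Disposition header to an S3 object so direct
# downloads use the original snapshot filename. (Hypothetical names.)

def content_disposition(filename: str) -> str:
    """Build a Content-Disposition value naming the download file."""
    escaped = filename.replace('"', '\\"')  # keep the quoted-string valid
    return f'attachment; filename="{escaped}"'

def put_object_args(bucket: str, key: str, filename: str) -> dict:
    """Arguments one might pass to an S3 put_object (or copy_object with
    MetadataDirective='REPLACE') so the header is stored with the object."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ContentDisposition": content_disposition(filename),
    }

args = put_object_args("snapshot-archive", "farm/ab/cd/abcd1234",
                       "bash_4.3-1.dsc")
print(args["ContentDisposition"])  # attachment; filename="bash_4.3-1.dsc"
```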
The objective is to keep the last X days on live storage, and the prior
history 'near line' (3-5 hours to restore). At the moment I have X at 365
days (since 7 January, when I finished importing the historical files).
Perhaps we try to ensure that 90/95/99% of files accessed are on live
storage by adjusting this time-to-archive value, while keeping the cost
minimised. If X needs to be 4 years then that's fine.
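The "last X days live, older files near-line" policy above could be expressed both as a per-object check and as an S3 lifecycle rule; a minimal sketch, assuming a 365-day window and a whole-bucket rule (the rule ID, storage class, and X value are illustrative, not the deployed configuration):

```python
from datetime import datetime, timedelta, timezone

LIVE_DAYS = 365  # X: days an object stays on live (Standard) storage

def should_be_offline(last_ingested: datetime, now: datetime,
                      live_days: int = LIVE_DAYS) -> bool:
    """True once an object is past the live window and may sit in
    archive-class storage, needing a 3-5 hour restore before download."""
    return now - last_ingested > timedelta(days=live_days)

# The same policy as an S3 lifecycle configuration (the dict shape is
# what boto3's put_bucket_lifecycle_configuration expects):
lifecycle = {
    "Rules": [{
        "ID": "archive-after-live-window",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},  # whole bucket
        "Transitions": [{"Days": LIVE_DAYS, "StorageClass": "GLACIER"}],
    }]
}

now = datetime(2014, 2, 12, tzinfo=timezone.utc)
print(should_be_offline(datetime(2014, 1, 7, tzinfo=timezone.utc), now))  # False
print(should_be_offline(datetime(2012, 6, 1, tzinfo=timezone.utc), now))  # True
```

Raising or lowering LIVE_DAYS is the single knob that trades restore latency for storage cost, which matches the 90/95/99% tuning idea above.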
>> Peter, is S3 receiving a copy of all newly ingested files into snapshot?
>> If not, who should do this?
> I suspect the easiest approach might be a new cronjob as the snapshot
> user on sibelius that does that?
Sure. I've got into it fine (thanks for access) - I see the farm-journal
has recent uploads (2 days' worth), so I can script around that to ensure
we copy across.
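The cronjob logic sketched from the exchange above: read the farm-journal of newly ingested files and copy across anything not yet in the bucket. The journal format (one hash per line) and the function name are assumptions about the snapshot tooling, not its actual interface.

```python
# Sketch: decide which journal entries still need uploading to S3.
# (Journal format is assumed: one file hash per line.)

def pending_uploads(journal_lines, already_in_s3):
    """Return hashes from the journal not yet present in the bucket,
    preserving journal order and dropping duplicate entries."""
    seen = set()
    todo = []
    for line in journal_lines:
        h = line.strip()
        if h and h not in already_in_s3 and h not in seen:
            seen.add(h)
            todo.append(h)
    return todo

journal = ["abcd1234\n", "ef567890\n", "abcd1234\n"]
print(pending_uploads(journal, {"ef567890"}))  # ['abcd1234']
```

A cronjob running as the snapshot user would feed this the last few days of journal entries and upload each returned hash; re-running it is harmless since already-uploaded files are skipped.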
> Have you figured out how to handle non-US yet?
I thought we had concluded that crypt-in-main set the precedent for the
archive containing all files.
>> Also, may I get read-only access to the live postgres DB, to be able
>> to monitor files being ingested and to check the real filenames that
>> should be set (access from localhost on sibelius is fine)?
> See other email.
Thanks again.
James
--
To UNSUBSCRIBE, email to [email protected]
with a subject of "unsubscribe". Trouble? Contact [email protected]
Archive: http://lists.debian.org/[email protected]