On 6/8/21 4:40 am, Frank Steinmetzger wrote:
> Am Tue, Aug 03, 2021 at 10:18:06AM +0200 schrieb Frank Steinmetzger:
>
>>> You could delete and rebuild the cache each time (or I think there is a
>>> way to do without it).
>> If the cache can be easily rebuilt, then there’d be no need to store it at
>> all.
Am Tue, Aug 03, 2021 at 10:18:06AM +0200 schrieb Frank Steinmetzger:
> > You could delete and rebuild the cache each time (or I think there is a
> > way to do without it).
>
> If the cache can be easily rebuilt, then there’d be no need to store it at
> all.
Here’s an afterthought that just hit me.
Am Tue, Aug 03, 2021 at 07:10:03AM +0800 schrieb William Kenworthy:
> >> Keep in mind that both repos have the same ID - you should also rsync
> >> the cache and security directories as well as they are now out of sync
> >> (hence the warning).
> > That thought crossed my mind recently but I was
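The cache and security directories mentioned above live outside the repo itself, which is why a plain rsync of the repository leaves them out of sync. A sketch of locating them, assuming Borg 1.x defaults (cache under ~/.cache/borg/&lt;repo-id&gt;, security info under ~/.config/borg/security/&lt;repo-id&gt;, repo ID read from the repository's own "config" file); the "abc123" ID in the demo is made up:

```python
# Find the local cache/security dirs belonging to a Borg repo, so they
# can be copied alongside an rsync'ed repository.
# Assumption: Borg 1.x default locations; no BORG_CACHE_DIR override.
import configparser
import os
import tempfile

def borg_local_dirs(repo_path: str) -> tuple[str, str]:
    # The repository ID is stored in the repo's "config" file (INI format).
    cfg = configparser.ConfigParser()
    cfg.read(os.path.join(repo_path, "config"))
    repo_id = cfg["repository"]["id"]
    home = os.path.expanduser("~")
    cache = os.path.join(home, ".cache", "borg", repo_id)
    security = os.path.join(home, ".config", "borg", "security", repo_id)
    return cache, security

# Demo with a synthetic repo config (a real repo writes this at init time):
with tempfile.TemporaryDirectory() as repo:
    with open(os.path.join(repo, "config"), "w") as f:
        f.write("[repository]\nversion = 1\nid = abc123\n")
    cache, security = borg_local_dirs(repo)
    print(cache)     # ends with .cache/borg/abc123
    print(security)  # ends with .config/borg/security/abc123
```

Since both repos share the same ID, these are the directories that would need to travel with the repo to silence the "repository moved" style warnings.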
On 3/8/21 6:03 am, Frank Steinmetzger wrote:
> Am Mon, Aug 02, 2021 at 02:12:24PM +0800 schrieb William Kenworthy:
> And you are storing several machines into a single repo? The docs say this
> is not supported officially. But I have one repo each for /, /home and
> data
> for both my PC and laptop.
On 3/8/21 5:52 am, Frank Steinmetzger wrote:
> Am Mon, Aug 02, 2021 at 01:38:31PM +0800 schrieb William Kenworthy:
>
>>> Yup. Today I did my (not so) weekly backup and rsynced the repo to the new
>>> drive. After that I wanted to compare performance of my old 3 TB drive and
>>> the new SMR one by deleting a snapshot from the repo on each drive.
Am Mon, Aug 02, 2021 at 02:12:24PM +0800 schrieb William Kenworthy:
> >>> And you are storing several machines into a single repo? The docs say this
> >>> is not supported officially. But I have one repo each for /, /home and
> >>> data
> >>> for both my PC and laptop. Using a wrapper script, I cr
Am Mon, Aug 02, 2021 at 01:38:31PM +0800 schrieb William Kenworthy:
> > Yup. Today I did my (not so) weekly backup and rsynced the repo to the new
> > drive. After that I wanted to compare performance of my old 3 TB drive and
> > the new SMR one by deleting a snapshot from the repo on each drive.
On 2/8/21 5:55 am, Frank Steinmetzger wrote:
> Am Sun, Aug 01, 2021 at 11:36:36AM +0800 schrieb William Kenworthy:
>
It’s not raid, just a btrfs single on disk (no partition). Contains a
single borgbackup repo for an offline backup of all the online
borgbackup repos I have for a 3 times a day backup rota of individual
machines/data stores.
On 2/8/21 5:38 am, Frank Steinmetzger wrote:
> Am Sun, Aug 01, 2021 at 11:46:02AM +0800 schrieb William Kenworthy:
>
And you are storing several machines into a single repo? The docs say this
is not supported officially. But I have one repo each for /, /home and data
for both my PC and laptop.
Am Sun, Aug 01, 2021 at 11:36:36AM +0800 schrieb William Kenworthy:
> >> It’s not raid, just a btrfs single on disk (no partition). Contains a
> >> single borgbackup repo for an offline backup of all the online
> >> borgbackup repos I have for a 3 times a day backup rota of individual
> >> machines/data stores.
Am Sun, Aug 01, 2021 at 11:41:48AM +0800 schrieb William Kenworthy:
>
> On 1/8/21 8:50 am, Frank Steinmetzger wrote:
> > Am Sat, Jul 31, 2021 at 12:58:29PM +0800 schrieb William Kenworthy:
> >
> > ...
> > And thanks to the cache, a new snapshot is usually done very fast. But for
> > a yet unknown reason, sometimes Borg re-hashes all files.
Am Sun, Aug 01, 2021 at 11:46:02AM +0800 schrieb William Kenworthy:
> >> And you are storing several machines into a single repo? The docs say this
> >> is not supported officially. But I have one repo each for /, /home and data
> >> for both my PC and laptop. Using a wrapper script, I create snap
On Sat, Jul 31, 2021 at 11:05 PM William Kenworthy wrote:
>
> On 31/7/21 9:30 pm, Rich Freeman wrote:
> >
> > I'd love server-grade ARM hardware but it is just so expensive unless
> > there is some source out there I'm not aware of. It is crazy that you
> > can't get more than 4-8GiB of RAM on an
On 1/8/21 11:36 am, William Kenworthy wrote:
> On 1/8/21 8:50 am, Frank Steinmetzger wrote:
>> Am Sat, Jul 31, 2021 at 12:58:29PM +0800 schrieb William Kenworthy:
>>
>>> It’s not raid, just a btrfs single on disk (no partition). Contains a
>>> single borgbackup repo for an offline backup of all the online
>>> borgbackup repos I have for a 3 times a day backup rota of individual
>>> machines/data stores.
On 1/8/21 8:50 am, Frank Steinmetzger wrote:
> Am Sat, Jul 31, 2021 at 12:58:29PM +0800 schrieb William Kenworthy:
>
> ...
> And thanks to the cache, a new snapshot is usually done very fast. But for
> a yet unknown reason, sometimes Borg re-hashes all files, even though I
> didn’t touch the cache.
On 1/8/21 8:50 am, Frank Steinmetzger wrote:
> Am Sat, Jul 31, 2021 at 12:58:29PM +0800 schrieb William Kenworthy:
>
>> It’s not raid, just a btrfs single on disk (no partition). Contains a
>> single borgbackup repo for an offline backup of all the online
>> borgbackup repos I have for a 3 times a day backup rota of individual
>> machines/data stores.
On 31/7/21 9:30 pm, Rich Freeman wrote:
> On Sat, Jul 31, 2021 at 8:59 AM William Kenworthy wrote:
>> I tried using moosefs with a rpi3B in the
>> mix and it didn't go well once I started adding data - rpi 4's were not
>> available when I set it up.
> Pi2/3s only have USB2 as far as I'm aware, and they stick the ethernet
> port on that USB bus.
On Sat, Jul 31, 2021 at 8:41 PM Frank Steinmetzger wrote:
>
> Am Sat, Jul 31, 2021 at 08:12:40AM -0400 schrieb Rich Freeman:
>
> > Plus it creates other kinds of confusion. Suppose you're measuring
> > recording densities in KB/mm^2. Under SI prefixes 1KB/mm^2 equals
> > 1MB/m^2
>
> *Cough* actually, 1 GB/m^2
Am Sat, Jul 31, 2021 at 12:58:29PM +0800 schrieb William Kenworthy:
> It’s not raid, just a btrfs single on disk (no partition). Contains a
> single borgbackup repo for an offline backup of all the online
> borgbackup repos I have for a 3 times a day backup rota of individual
> machines/data stores.
Am Sat, Jul 31, 2021 at 08:12:40AM -0400 schrieb Rich Freeman:
> Plus it creates other kinds of confusion. Suppose you're measuring
> recording densities in KB/mm^2. Under SI prefixes 1KB/mm^2 equals
> 1MB/m^2
*Cough* actually, 1 GB/m^2
;-)
--
Grüße | Greetings | Qapla’
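The correction above checks out; a quick sanity check in plain SI arithmetic (nothing thread-specific assumed):

```python
# Converting mm^2 to m^2 scales the figure by 10**6, not 10**3.
mm2_per_m2 = 1000 * 1000          # 1 m^2 = 10**6 mm^2

# 1 KB/mm^2 expressed in bytes per m^2 (SI: 1 KB = 10**3 bytes)
bytes_per_m2 = 10**3 * mm2_per_m2

print(bytes_per_m2 == 10**9)      # True: 1 GB/m^2, not 1 MB/m^2
```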
On 31/07/2021 05:58, William Kenworthy wrote:
It’s not raid, just a btrfs single on disk (no partition). Contains a
single borgbackup repo for an offline backup of all the online
borgbackup repos I have for a 3 times a day backup rota of individual
machines/data stores - I get an insane amount of
On Sat, Jul 31, 2021 at 8:59 AM William Kenworthy wrote:
>
> I tried using moosefs with a rpi3B in the
> mix and it didn't go well once I started adding data - rpi 4's were not
> available when I set it up.
Pi2/3s only have USB2 as far as I'm aware, and they stick the ethernet
port on that USB bus.
On 31/7/21 8:21 pm, Rich Freeman wrote:
> On Fri, Jul 30, 2021 at 11:50 PM Wols Lists wrote:
>> btw, you're scrubbing over USB? Are you running a raid over USB? Bad
>> things are likely to happen ...
> So, USB hosts vary in quality I'm sure, but I've been running USB3
> drives on lizardfs for a while now with zero issues.
On Fri, Jul 30, 2021 at 11:50 PM Wols Lists wrote:
>
> btw, you're scrubbing over USB? Are you running a raid over USB? Bad
> things are likely to happen ...
So, USB hosts vary in quality I'm sure, but I've been running USB3
drives on lizardfs for a while now with zero issues.
At first I was shu
On Sat, Jul 31, 2021 at 12:58 AM William Kenworthy wrote:
>
> I am amused in a cynical way at disk manufacturers using decimal values ...
>
So, the disk manufacturers obviously have marketing motivations.
However, IMO the programming community would be well-served to just
join basically every other
On 31/7/21 11:14 am, William Kenworthy wrote:
> On 30/7/21 10:29 pm, Rich Freeman wrote:
>> On Fri, Jul 30, 2021 at 1:14 AM William Kenworthy wrote:
>>> 2. btrfs scrub (a couple of days)
>>>
>> Was this a read-only scrub, or did this involve repair (such as after
>> losing a disk/etc)?
>>
>> My understanding of SMR is that it is supposed to perform identically
>> to CMR for reads.
On 31/7/21 11:50 am, Wols Lists wrote:
> On 31/07/21 04:14, William Kenworthy wrote:
>> (seagate lists it as a 5Tb drive managed SMR)
>>
>> It was sold as a USB3 4Tb desktop expansion drive, fdisk -l shows "Disk
>> /dev/sde: 3.64 TiB, 4000787029504 bytes, 7814037167 sectors" and Seagate
> is calling it 5Tb - marketing!
On 31/07/21 04:14, William Kenworthy wrote:
> (seagate lists it as a 5Tb drive managed SMR)
>
> It was sold as a USB3 4Tb desktop expansion drive, fdisk -l shows "Disk
> /dev/sde: 3.64 TiB, 4000787029504 bytes, 7814037167 sectors" and Seagate
> is calling it 5Tb - marketing!
Note that it's now of
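The fdisk figure and the label on the box are the same byte count under different prefixes (binary TiB vs. decimal TB); a minimal check using the byte count quoted above:

```python
size = 4000787029504     # bytes, from the fdisk output quoted above

tb = size / 10**12       # decimal terabytes (what the marketing label uses)
tib = size / 2**40       # binary tebibytes (what fdisk reports)

print(f"{tb:.2f} TB")    # 4.00 TB
print(f"{tib:.2f} TiB")  # 3.64 TiB
```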
On 30/7/21 10:29 pm, Rich Freeman wrote:
> On Fri, Jul 30, 2021 at 1:14 AM William Kenworthy wrote:
>> 2. btrfs scrub (a couple of days)
>>
> Was this a read-only scrub, or did this involve repair (such as after
> losing a disk/etc)?
>
> My understanding of SMR is that it is supposed to perform identically
> to CMR for reads.
On Fri, Jul 30, 2021 at 12:50 PM antlists wrote:
>
> On 30/07/2021 15:29, Rich Freeman wrote:
> > Honestly I feel like the whole SMR thing is a missed opportunity,
> > mainly because manufacturers decided to use it as a way to save a few
> > bucks instead of as a new technology that can be embraced as long as
> > you understand its benefits and limitations.
On 30/07/2021 15:29, Rich Freeman wrote:
Honestly I feel like the whole SMR thing is a missed opportunity,
mainly because manufacturers decided to use it as a way to save a few
bucks instead of as a new technology that can be embraced as long as
you understand its benefits and limitations. One t
On Fri, Jul 30, 2021 at 1:14 AM William Kenworthy wrote:
>
> 2. btrfs scrub (a couple of days)
>
Was this a read-only scrub, or did this involve repair (such as after
losing a disk/etc)?
My understanding of SMR is that it is supposed to perform identically
to CMR for reads. If you've just recen
Am Thu, Jul 29, 2021 at 11:31:48PM +0200 schrieb Frank Steinmetzger:
> Am Thu, Jul 29, 2021 at 10:55:18PM +0200 schrieb Frank Steinmetzger:
> > In case someone is interested, here’s a little experience report:
> > […]
> > I just finished transferring my existing Borg backup repos.
> > […]
> > I’ve since been writing 1.2 TiB to the drive with rsync happily without
> > any glitches.
On 30/7/21 4:55 am, Frank Steinmetzger wrote:
> Am Thu, Jul 29, 2021 at 05:46:16PM +0100 schrieb Wols Lists:
>
>>> Yea. First the SMR fiasco became public and then there was some other PR
>>> stunt they did that I can’t remember right now, and I said “I can’t buy WD
>> anymore”. But there is no real alternative these days.
Am Thu, Jul 29, 2021 at 10:55:18PM +0200 schrieb Frank Steinmetzger:
> In case someone is interested, here’s a little experience report:
> […]
> I just finished transferring my existing Borg backup repos.
> […]
> I’ve since been writing 1.2 TiB to the drive with rsync happily without
> any glitches.
Am Thu, Jul 29, 2021 at 05:46:16PM +0100 schrieb Wols Lists:
> > Yea. First the SMR fiasco became public and then there was some other PR
> > stunt they did that I can’t remember right now, and I said “I can’t buy WD
> > anymore”. But there is no real alternative these days. And CMR drives are
> >