Hi,
hw wrote:
> with CDs/DVDs, writing is not so easy.
Thus it is not as easy to overwrite them by mistake.
The complicated part of optical burning can be put into scripts.
But i agree that modern HDD sizes cannot be easily covered by optical
media.
I wrote:
> > [...] LTO tapes [...]
hw wrote
On Thu, 2022-11-10 at 15:32 +0100, Thomas Schmitt wrote:
> Hi,
>
> i wrote:
> > > the time window in which the backed-up data
> > > can become inconsistent on the application level.
>
> hw wrote:
> > Or are you referring to the data being altered while a backup is in
> > progress?
>
> Yes.
Ah I
On Mon, 2022-11-14 at 20:37 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> > > hw writes:
> > > > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
> [...]
> > How do you intend to copy files at any other level than at file level? At
> > that
On 11/14/22 13:48, hw wrote:
On Fri, 2022-11-11 at 21:55 -0800, David Christensen wrote:
Lots of snapshots slow down commands that involve snapshots (e.g. 'zfs
list -r -t snapshot ...'). This means sysadmin tasks take longer when
the pool has more snapshots.
Hm, how long does it take? It
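A quick way to gauge this on a given pool (the pool name "tank" here is only a placeholder) is to time the snapshot listing itself; the count shows how many snapshots the command has to walk:

  time zfs list -r -t snapshot tank | wc -l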
On Fri, 2022-11-11 at 21:55 -0800, David Christensen wrote:
> [...]
> As with most filesystems, performance of ZFS drops dramatically as you
> approach 100% usage. So, you need a data destruction policy that keeps
> storage usage and performance at acceptable levels.
>
> Lots of snapshots slow
hw writes:
On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> hw writes:
> > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
>
> [...]
>
> > > If you do not value the uptime, making actual (even
> > > scheduled) copies of the data may be recommendable over
> > > using a RAID bec
On 11/13/22 13:02, hw wrote:
On Fri, 2022-11-11 at 07:55 -0500, Dan Ritter wrote:
hw wrote:
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
Linux-Fan wrote:
[...]
* RAID 5 and 6 restoration incurs additional stress on the other
disks in the RAID which makes it more likely that one of
On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
>
> [...]
>
> > > If you do not value the uptime, making actual (even
> > > scheduled) copies of the data may be recommendable over
> > > using a RAID because such
On Fri, 2022-11-11 at 07:55 -0500, Dan Ritter wrote:
> hw wrote:
> > On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> > > Linux-Fan wrote:
> > >
> > >
> > > [...]
> > > * RAID 5 and 6 restoration incurs additional stress on the other
> > > disks in the RAID which makes it more likely th
David Christensen wrote:
> The Intel Optane Memory Series products are designed to be cache devices --
> when using compatible hardware, Windows, and Intel software. My hardware
> should be compatible (Dell PowerEdge T30), but I am unsure if FreeBSD 12.3-R
> will see the motherboard NVMe slot or
On 11/11/22 00:43, hw wrote:
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Taking snapshots is
hw writes:
On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
[...]
> If you do not value the uptime, making actual (even
> scheduled) copies of the data may be recommendable over
> using a RAID because such schemes may (among other advantages)
> protect you from accidental f
On Fri, Nov 11, 2022 at 09:03:45AM +0100, hw wrote:
On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
The advantage to RAID 6 is that it can tolerate a double disk failure.
With RAID 1 you need 3x your effective capacity to achieve that and even
though storage has gotten cheaper, it hasn't
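As a rough worked example (drive size chosen only for illustration): to get 10 TB of usable space that survives any two disk failures with 2 TB drives, RAID 6 needs 5 data + 2 parity = 7 drives (14 TB raw), while three-way RAID 1 mirrors need 15 drives (30 TB raw), i.e. 3x the effective capacity.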
hw wrote:
> On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> > Linux-Fan wrote:
> >
> >
> > [...]
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID which makes it more likely that one of them
> > will fail. The advantage of RAID 6 is that it ca
On 10.11.2022 14:40, Curt wrote:
(or maybe a RAID array is
conceivable over a network and a distance?).
Not only conceivable, but indeed practicable: Linbit DRBD
On Thursday, November 10, 2022 09:06:39 AM Dan Ritter wrote:
> If you need a filesystem that is larger than a single disk (that you can
> afford, or that exists), RAID is the name for the general approach to
> solving that.
Picking a nit, I would say: "RAID is the name for *a* general approach to
On 11.11.2022 at 07:36, hw wrote:
> That's on https://docs.freebsd.org/en/books/handbook/zfs/
>
> I don't remember where I read about 8, could have been some documentation
> about
> FreeNAS.
Well, OTOH there do exist some considerations, which may have led to
that number sticking somewhere, bu
On Thu, 2022-11-10 at 13:40 +, Curt wrote:
> On 2022-11-08, The Wanderer wrote:
> >
> > That more general sense of "backup" as in "something that you can fall
> > back on" is no less legitimate than the technical sense given above, and
> > it always rubs me the wrong way to see the unconditio
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
> On 11/10/22 07:44, hw wrote:
> > On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> > > On 11/9/22 00:24, hw wrote:
> > > > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> [...]
>
> >
> Taking snapshots is
On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID which makes it more likely that one of them
> > will fail.
>
> I believe that's mostly
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> Linux-Fan wrote:
>
>
> [...]
> * RAID 5 and 6 restoration incurs additional stress on the other
> disks in the RAID which makes it more likely that one of them
> will fail. The advantage of RAID 6 is that it can then recover
> from tha
On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> > > hw writes:
> > > > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > > > On 09/11/2022 at 12:41, hw wrote:
>
> [...]
>
> > > > I'd
> > > > have to use md
On Thu, 2022-11-10 at 14:28 +0100, DdB wrote:
> On 10.11.2022 at 13:03, Greg Wooledge wrote:
> > If it turns out that '?' really is the filename, then it becomes a ZFS
> > issue with which I can't help.
>
> just tested: i could create, rename, delete a file with that name on a
> zfs filesystem ju
On Thu, 2022-11-10 at 08:48 -0500, Dan Ritter wrote:
> hw wrote:
> > And I've been reading that when using ZFS, you shouldn't make volumes with
> > more
> > than 8 disks. That's very inconvenient.
>
>
> Where do you read these things?
I read things like this:
"Sun™ recommends that the number
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
ls -la
insgesamt 5
drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020 ?
namefoo@host /srv/datadir $ ls -la '?'
ls: Zugriff auf ? nicht möglich:
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Be careful that you do not confuse a ~33 GiB full backup set with 78
snapshots over six months of that same full
On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
* RAID 5 and 6 restoration incurs additional stress on the other
disks in the RAID which makes it more likely that one of them
will fail.
I believe that's mostly apocryphal; I haven't seen science backing that
up, and it hasn't been
Linux-Fan wrote:
> I think the arguments of the RAID5/6 critics summarized were as follows:
>
> * Running in a RAID level that is 5 or 6 significantly degrades performance
> while a disk is offline. RAID 10 keeps most of its speed and
> RAID 1 only degrades slightly for most use cases.
>
> *
On 10.11.2022 at 22:37, Linux-Fan wrote:
> Ext4 still does not offer snapshots. The traditional way to do
> snapshots outside of fancy BTRFS and ZFS file systems is to add LVM
> to the equation although I do not have any useful experience with
> tha
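A minimal sketch of that LVM approach (volume group "vg0" and logical volume "data" are hypothetical names):

  # create a temporary snapshot, back it up, then drop it
  lvcreate --snapshot --size 10G --name data-snap /dev/vg0/data
  mkdir -p /mnt/snap
  mount -o ro /dev/vg0/data-snap /mnt/snap
  # ... run the backup against /mnt/snap ...
  umount /mnt/snap
  lvremove -y /dev/vg0/data-snap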
hw writes:
On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > On 09/11/2022 at 12:41, hw wrote:
[...]
> > I'd
> > have to use mdadm to create a RAID5 (or use the hardware RAID but that
> > isn't
>
> AFAIK BT
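For reference, a minimal mdadm RAID5 creation looks like this (device names are made up for illustration):

  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  mkfs.ext4 /dev/md0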
On Thu, Nov 10, 2022 at 06:54:31PM +0100, hw wrote:
> Ah, yes. I tricked myself because I don't have hd installed,
It's just a symlink to hexdump.
lrwxrwxrwx 1 root root 7 Jan 20 2022 /usr/bin/hd -> hexdump
unicorn:~$ dpkg -S usr/bin/hd
bsdextrautils: /usr/bin/hd
unicorn:~$ dpkg -S usr/bin/hex
On Thu, 2022-11-10 at 09:30 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
>
> [...]
> > printf '%s\0' * | hexdump
> > 000 00c2 6177 7468
> > 007
>
> I dislike this outp
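If the word-oriented hexdump format is hard to read, the same filename bytes can be shown with ASCII alongside, or escaped directly by ls; both commands are standard and nothing here depends on the filesystem in question:

  printf '%s\0' * | hexdump -C
  ls -b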
On Wed, 2022-11-09 at 14:22 +0100, Nicolas George wrote:
> hw (12022-11-08):
> > When I want to have 2 (or more) generations of backups, do I actually want
> > deduplication? It leaves me with only one actual copy of the data which
> > seems
> > to defeat the idea of having multiple generations of
On Thu, 2022-11-10 at 10:47 +0100, DdB wrote:
> On 10.11.2022 at 06:38, David Christensen wrote:
> > What is your technique for defragmenting ZFS?
> well, that was meant more or less as a joke: there is none apart from
> offloading all the data, destroying and rebuilding the pool, and filling
> it ag
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> On 11/9/22 00:24, hw wrote:
> > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> > Hmm, when you can backup like 3.5TB with that, maybe I should put
> FreeBSD on my
> > server and give ZFS a try. Worst thing that can
On Wed, 09 Nov 2022 13:28:46 +0100
hw wrote:
> On Tue, 2022-11-08 at 09:52 +0100, DdB wrote:
> > On 08.11.2022 at 05:31, hw wrote:
> > > > That's only one point.
> > > What are the others?
> > >
> > > > And it's not really some valid one, I think, as
> > > > you do typically not run int
Brad Rogers wrote:
> On Thu, 10 Nov 2022 08:48:43 -0500
> Dan Ritter wrote:
>
> Hello Dan,
>
> >8 is not a magic number.
>
> Clearly, you don't read Terry Pratchett. :-)
In the context of ZFS, 8 is not a magic number.
May you be ridiculed by Pictsies.
-dsr-
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> Why restate it then needlessly?
>
> To NOT state that you were wrong when you were not.
>
> This branch of the discussion bores me. Goodbye.
>
This isn't solid enough for a branch. It couldn't support a hummingbird.
And me too! That o
Curt (12022-11-10):
> Why restate it then needlessly?
To NOT state that you were wrong when you were not.
This branch of the discussion bores me. Goodbye.
--
Nicolas George
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> > one drive fails → you can replace it immediately, no downtime
>> That's precisely what I said,
>
> I was not stating that THIS PART of what you said was wrong.
Why restate it then needlessly?
>> so I'm
Curt (12022-11-10):
> > one drive fails → you can replace it immediately, no downtime
> That's precisely what I said,
I was not stating that THIS PART of what you said was wrong.
> so I'm baffled by the redundancy of your
> words.
Hint: my mail did not stop at the l
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> Maybe it's a question of intent more than anything else. I thought RAID
>> was intended for a server scenario where, if a disk fails, your downtime
>> is virtually null, whereas a backup is intended to prevent data
>> loss.
>
> May
On 2022-11-10 at 09:06, Dan Ritter wrote:
> Now, RAID is not a backup because it is a single store of data: if
> you delete something from it, it is deleted. If you suffer a
> lightning strike to the server, there's no recovery from molten
> metal.
Here's where I find disagreement.
Say you didn'
On Thu, 10 Nov 2022 08:48:43 -0500
Dan Ritter wrote:
Hello Dan,
>8 is not a magic number.
Clearly, you don't read Terry Pratchett. :-)
Hi,
i wrote:
> > the time window in which the backed-up data
> > can become inconsistent on the application level.
hw wrote:
> Or are you referring to the data being altered while a backup is in
> progress?
Yes. Data of different files or at different places in the same file
may have relations wh
On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> good idea:
>
> printf %s * | hexdump
> 000 77c2 6861 0074
> 005
Looks like there might be more than one file here.
> > If you misrepresented the situat
Curt wrote:
> On 2022-11-08, The Wanderer wrote:
> >
> > That more general sense of "backup" as in "something that you can fall
> > back on" is no less legitimate than the technical sense given above, and
> > it always rubs me the wrong way to see the unconditional "RAID is not a
> > backup" trot
hw wrote:
> And I've been reading that when using ZFS, you shouldn't make volumes with
> more
> than 8 disks. That's very inconvenient.
Where do you read these things?
The number of disks in a zvol can be optimized, depending on
your desired redundancy method, total number of drives, and
tole
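For illustration only (pool and device names are made up), an 8-disk double-parity vdev would be created like this:

  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7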
Curt (12022-11-10):
> Maybe it's a question of intent more than anything else. I thought RAID
> was intended for a server scenario where, if a disk fails, your downtime
> is virtually null, whereas a backup is intended to prevent data
> loss.
Maybe just use common sense. RAID means your data
On 2022-11-10 at 08:40, Curt wrote:
> On 2022-11-08, The Wanderer wrote:
>
>> That more general sense of "backup" as in "something that you can
>> fall back on" is no less legitimate than the technical sense given
>> above, and it always rubs me the wrong way to see the unconditional
>> "RAID is
On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> > ls -la
> > insgesamt 5
> > drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
> > drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
> > drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020
On 2022-11-08, The Wanderer wrote:
>
> That more general sense of "backup" as in "something that you can fall
> back on" is no less legitimate than the technical sense given above, and
> it always rubs me the wrong way to see the unconditional "RAID is not a
> backup" trotted out blindly as if tha
On 10.11.2022 at 13:03, Greg Wooledge wrote:
> If it turns out that '?' really is the filename, then it becomes a ZFS
> issue with which I can't help.
just tested: i could create, rename, delete a file with that name on a
zfs filesystem just as with any other filesystem.
But: i recall having seen
On Thu, 2022-11-10 at 10:59 +0100, DdB wrote:
> On 10.11.2022 at 04:46, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> > Why would partitions be better than the block device itself?
On Thu, 2022-11-10 at 10:34 +0100, Christoph Brinkhaus wrote:
> On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> >
> > Why would partitions
On Wed, 2022-11-09 at 12:08 +0100, Thomas Schmitt wrote:
> Hi,
>
> i wrote:
> > > https://github.com/dm-vdo/kvdo/issues/18
>
> hw wrote:
> > So the VDO ppl say 4kB is a good block size
>
> They actually say that it's the only size which they support.
>
>
> > Deduplication doesn't work when f
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> ls -la
> insgesamt 5
> drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
> drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
> drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020 ?
> namefoo@host /srv/datadir $ ls -la '?'
> ls: Zugriff auf ? nic
On Wed, 09 Nov 2022 13:52:26 +0100 hw wrote:
Does that work? Does bees run as long as there's something to
deduplicate and
only stops when there isn't?
Bees is a service (daemon) which runs 24/7 watching btrfs transaction
state (the checkpoints). If there are new transactions then it kicks
On 10.11.2022 at 04:46, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
>> On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
>> [...]
>>> FreeBSD has ZFS but can't even configure the disk controllers, so that won't
>>> work.
>>
>> If I understand you right you mean R
On 10.11.2022 at 06:38, David Christensen wrote:
> What is your technique for defragmenting ZFS?
well, that was meant more or less as a joke: there is none apart from
offloading all the data, destroying and rebuilding the pool, and filling
it again from the backup. But i do it from time to time if fr
On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > [...]
> > > FreeBSD has ZFS but can't even configure the disk controllers, so that
> > > won't
> > > work.
> >
> > If
On 11/9/22 01:35, DdB wrote:
> But
i am satisfied with zfs performance from spinning rust, if i don't fill
up the pool too much, and defrag after a while ...
What is your technique for defragmenting ZFS?
David
On 11/9/22 03:08, Thomas Schmitt wrote:
So i would use at least four independent storage facilities interchangeably.
I would make snapshots, if the filesystem supports them, and backup those
instead of the changeable filesystem.
I would try to reduce the activity of applications on the filesyste
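On ZFS, one of the filesystems discussed in this thread, that "snapshot first, back up the frozen view" idea might look roughly like this (dataset and host names are hypothetical):

  zfs snapshot tank/data@backup-2022-11-10
  zfs send tank/data@backup-2022-11-10 | ssh backuphost zfs receive backup/data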
On 11/9/22 05:29, didier gaumet wrote:
- *BSDs nowadays have departed from old ZFS code and use the same source
code stack as Linux (OpenZFS)
AIUI FreeBSD 12 and prior use ZFS-on-Linux code, while FreeBSD 13 and
later use OpenZFS code.
On 11/9/22 05:44, didier gaumet wrote:
> I was usin
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
> Hmm, when you can backup like 3.5TB with that, maybe I should put
FreeBSD on my
> server and give ZFS a try. Worst thing that can happen is that it
crashes and
> I'd have made an experiment that wasn't
On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > On 09/11/2022 at 12:41, hw wrote:
>
> [...]
>
> > > I am really not so well aware of ZFS state but my impression was that:
> > > - FUSE implementation of ZoL (ZF
On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> [...]
> > FreeBSD has ZFS but can't even configure the disk controllers, so that won't
> > work.
>
> If I understand you right you mean RAID controllers?
yes
> According to my
hw writes:
On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> On 09/11/2022 at 12:41, hw wrote:
[...]
> I am really not so well aware of ZFS state but my impression was that:
> - FUSE implementation of ZoL (ZFS on Linux) is deprecated and that,
> Ubuntu excepted (classic module?), Z
On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
Hi hw,
> On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > On 09/11/2022 at 12:41, hw wrote:
> > [...]
> > > In any case, I'm currently tending to think that putting FreeBSD with ZFS
> > > on
> > > my
> > > server might be the best
On Wed, 2022-11-09 at 17:29 +0100, DdB wrote:
> On 09.11.2022 at 12:41, hw wrote:
> > In any case, I'm currently tending to think that putting FreeBSD with ZFS on
> > my
> > server might be the best option. But then, apparently I won't be able to
> > configure the controller cards, so that won't
On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> On 09/11/2022 at 12:41, hw wrote:
> [...]
> > In any case, I'm currently tending to think that putting FreeBSD with ZFS on
> > my
> > server might be the best option. But then, apparently I won't be able to
> > configure the controller ca
On Wed, 2022-11-09 at 14:44 +0100, didier gaumet wrote:
> On 09/11/2022 at 14:25, hw wrote:
>
> > I don't think it was, see https://docs.freebsd.org/en/books/handbook/zfs/
> >
> > It does mention performance, but I remember other statements saying that was
> > designed for arrays with 40+ disks
On 09.11.2022 at 12:41, hw wrote:
> In any case, I'm currently tending to think that putting FreeBSD with ZFS on
> my
> server might be the best option. But then, apparently I won't be able to
> configure the controller cards, so that won't really work. And ZFS with Linux
> isn't so great becau
hw wrote on 11/9/22 04:41:
configure the controller cards, so that won't really work. And ZFS with Linux
isn't so great because it keeps fuse in between.
That isn't true. I've been using ZFS with Debian for years without FUSE,
through the ZFSonLinux project.
The only slightly discomfortin
On 09/11/2022 at 14:25, hw wrote:
I don't think it was, see https://docs.freebsd.org/en/books/handbook/zfs/
It does mention performance, but I remember other statements saying that was
designed for arrays with 40+ disks and, besides data integrity, with ease of use
in mind. Performance doesn'
On 09/11/2022 at 12:41, hw wrote:
[...]
In any case, I'm currently tending to think that putting FreeBSD with ZFS on my
server might be the best option. But then, apparently I won't be able to
configure the controller cards, so that won't really work. And ZFS with Linux
isn't so great because
On Wed, 2022-11-09 at 11:05 +0100, didier gaumet wrote:
> On 09/11/2022 at 10:27, hw wrote:
> [...]
> > Yes, I've seen those. I can only wonder how much performance impact VDO
> > would
> > have for backups. And I wonder why it doesn't require as much memory as ZFS
> > seems to need for dedupli
hw (12022-11-08):
> When I want to have 2 (or more) generations of backups, do I actually want
> deduplication? It leaves me with only one actual copy of the data which seems
> to defeat the idea of having multiple generations of backups at least to some
> extent.
The idea of having multiple gene
On 09/11/2022 at 13:12, hw wrote:
On Wed, 2022-11-09 at 11:37 +0100, didier gaumet wrote:
[...]
in my opinion you are confusing
deduplicating during backup and incremental/differential backups.
[...]
I don't know why you think that.[...]
Because earlier in a previous message you stated:
"
hw wrote:
>
> The question is rather if it makes sense to have two full backups on the same
> machine for redundancy and to be able to go back in time, or if it's better to
> give up on redundancy and to have only one copy and use snapshots or whatever
> to
> be able to go back in time.
And fo
On Tue, 2022-11-08 at 15:07 +0100, hede wrote:
> On 08.11.2022 05:31, hw wrote:
> > That still requires you to have enough disk space for at least two full
> > backups.
>
> Correct, if you always do full backups then the second run will consume
> full backup space in the first place. (not fully
On Tue, 2022-11-08 at 10:04 -0500, The Wanderer wrote:
> On 2022-11-08 at 09:36, Nicolas George wrote:
>
> > Curt (12022-11-08):
> >
> > > Redundancy sounds a lot like a back up.
> >
> > RAID also sounds a lot like a backup, and the R means redundant.
> >
> > Yet raid is not a backup.
>
> That
On Tue, 2022-11-08 at 09:52 +0100, DdB wrote:
> On 08.11.2022 at 05:31, hw wrote:
> > > That's only one point.
> > What are the others?
> >
> > > And it's not really some valid one, I think, as
> > > you do typically not run into space problems with one single action
> > > (YMMV). Running mult
On Wed, 2022-11-09 at 11:37 +0100, didier gaumet wrote:
>
> I am no expert (in Linux, backporting or anything else) and cannot offer
> viable advice about what your backup plan should be. You are in a better
> position to evaluate your needs and your means and to design a satisfying
> backup plan acco
On Wed, 2022-11-09 at 12:13 +0100, to...@tuxteam.de wrote:
> On Wed, Nov 09, 2022 at 11:15:15AM +0100, hw wrote:
> > On Wed, 2022-11-09 at 09:46 +0100, to...@tuxteam.de wrote:
>
> [...]
> > > But, as others have said, deduplication at the file system level (or
> > > below,
> > > as VDO does) is ma
On Wed, 2022-11-09 at 10:35 +0100, DdB wrote:
> On 09.11.2022 at 09:24, hw wrote:
> > > Learn more about ZFS and invest in hardware to get performance.
> > Hardware like? In theory, using SSDs for cache with ZFS should improve
> > performance. In practice, it only wore out the SSDs after a while
On Tue, 2022-11-08 at 07:19 -0500, Dan Ritter wrote:
> hw wrote:
> > > As you say, deduplication in backup systems is quite common, and works
> > > pretty well. There's also an on-disk non-filesystem utility, rdfind,
> > > which is packaged in Debian. It can discover identical files and make
> > >
On Wed, Nov 09, 2022 at 11:15:15AM +0100, hw wrote:
> On Wed, 2022-11-09 at 09:46 +0100, to...@tuxteam.de wrote:
[...]
> > Perhaps you don't know about rsync's --link-dest option: you can, with rsync,
> > keep generations without duplicating between them.
>
> No, I didn't know that. My intention
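A minimal sketch of that --link-dest scheme (paths are made up): unchanged files in the new generation become hard links into the previous one, so only changed files consume extra space.

  rsync -a --link-dest=/backup/2022-11-09 /srv/data/ /backup/2022-11-10/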
Hi,
i wrote:
> > https://github.com/dm-vdo/kvdo/issues/18
hw wrote:
> So the VDO ppl say 4kB is a good block size
They actually say that it's the only size which they support.
> Deduplication doesn't work when files aren't sufficiently identical,
The definition of sufficiently identical pro
I am no expert (in Linux, backporting or anything else) and cannot offer
viable advice about what your backup plan should be. You are in a better
position to evaluate your needs and your means and to design a satisfying
backup plan accordingly.
What I was underlining is that in my opinion you are
On Wed, 2022-11-09 at 09:46 +0100, to...@tuxteam.de wrote:
> On Wed, Nov 09, 2022 at 09:39:45AM +0100, hw wrote:
>
> [...]
>
> > When you keep N full generations of backups it's different. Using rsync,
> > you'll
> > only write the changes anyway, switching between the generations. Most of
> >
On Tue, 2022-11-08 at 11:11 +0100, Thomas Schmitt wrote:
> Hi,
>
> hw wrote:
> > I still wonder how VDO actually works.
>
> There is a comparer/decider named UDS which holds an index of the valid
> storage blocks, and a device driver named VDO which performs the
> deduplication and hides its int
On 09/11/2022 at 10:27, hw wrote:
[...]
Yes, I've seen those. I can only wonder how much performance impact VDO would
have for backups. And I wonder why it doesn't require as much memory as ZFS
seems to need for deduplication.
It's *only* an hypothesis, but I would suppose that ZFS was desi
On 09.11.2022 at 09:24, hw wrote:
>> Learn more about ZFS and invest in hardware to get performance.
> Hardware like? In theory, using SSDs for cache with ZFS should improve
> performance. In practice, it only wore out the SSDs after a while, and now
> it's
> not any faster without SSD cache.
>
On Tue, 2022-11-08 at 10:04 +0100, didier gaumet wrote:
> On 08/11/2022 at 05:13, hw wrote:
> > On Mon, 2022-11-07 at 13:57 -0500, rhkra...@gmail.com wrote:
> > >
> > >
> > > I didn't (and don't) know much about deduplication (beyond what you might
> > > deduce from the name), so I google and f
On Wed, Nov 09, 2022 at 09:39:45AM +0100, hw wrote:
[...]
> When you keep N full generations of backups it's different. Using rsync,
> you'll
> only write the changes anyway, switching between the generations. Most of the
> data is being stored N times.
Perhaps you don't know about rsync's --l
On Tue, 2022-11-08 at 10:26 +0100, didier gaumet wrote:
> On 08/11/2022 at 04:49, hw wrote:
> [...]
> > When I want to have 2 (or more) generations of backups, do I actually want
> > deduplication? It leaves me with only one actual copy of the data which
> > seems
> > to defeat the idea of havin
On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
> On 11/7/22 23:13, hw wrote:
> > On Mon, 2022-11-07 at 21:46 -0800, David Christensen wrote:
>
> > Are you deduplicating?
>
>
> Yes.
>
>
> > Apparently some people say bad things happen when ZFS
> > runs out of memory from deduplic
On Tue, Nov 08, 2022 at 05:44:00PM -0500, Stefan Monnier wrote:
> > I had to look up the word deduplicate (I was going to say, "That isn't even
> > a
> > word!"), which reveals my extensive knowledge of the matter.
>
> It was originally called to "duplicate duplicate", but then
> self-application
On 11/7/22 23:13, hw wrote:
On Mon, 2022-11-07 at 21:46 -0800, David Christensen wrote:
Are you deduplicating?
Yes.
Apparently some people say bad things happen when ZFS
runs out of memory from deduplication.
Okay.
16 GiB seems to be enough for my SOHO server.
I put rsync based