On Mon, 2022-11-14 at 20:37 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> > > hw writes:
> > > > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
> [...]
> > How do you intend to copy files at any other level than at file level? At
> > that
On 11/14/22 13:48, hw wrote:
On Fri, 2022-11-11 at 21:55 -0800, David Christensen wrote:
Lots of snapshots slow down commands that involve snapshots (e.g. 'zfs
list -r -t snapshot ...'). This means sysadmin tasks take longer when
the pool has more snapshots.
Hm, how long does it take? It
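A quick way to put a number on it, assuming a pool named tank (the -H flag
suppresses the header line):

  # count the snapshots and time the enumeration
  time zfs list -H -r -t snapshot tank | wc -l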
On Fri, 2022-11-11 at 21:55 -0800, David Christensen wrote:
> [...]
> As with most filesystems, performance of ZFS drops dramatically as you
> approach 100% usage. So, you need a data destruction policy that keeps
> storage usage and performance at acceptable levels.
>
> Lots of snapshots slow
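For reference, how close a pool is to that point shows up in the CAP column
of zpool list; a minimal check, with tank as a stand-in pool name:

  # the CAP column reports the percentage of pool capacity in use
  zpool list tank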
hw writes:
On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> hw writes:
> > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
>
> [...]
>
> > > If you do not value the uptime, making actual (even
> > > scheduled) copies of the data may be recommendable over
> > > using a RAID bec
On 11/13/22 13:02, hw wrote:
On Fri, 2022-11-11 at 07:55 -0500, Dan Ritter wrote:
hw wrote:
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
Linux-Fan wrote:
[...]
* RAID 5 and 6 restoration incurs additional stress on the other
disks in the RAID, which makes it more likely that one of
On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
>
> [...]
>
> > > If you do not value the uptime, making actual (even
> > > scheduled) copies of the data may be recommendable over
> > > using a RAID because such
On Fri, 2022-11-11 at 07:55 -0500, Dan Ritter wrote:
> hw wrote:
> > On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> > > Linux-Fan wrote:
> > >
> > >
> > > [...]
> > > * RAID 5 and 6 restoration incurs additional stress on the other
> > > disks in the RAID, which makes it more likely th
David Christensen wrote:
> The Intel Optane Memory Series products are designed to be cache devices --
> when using compatible hardware, Windows, and Intel software. My hardware
> should be compatible (Dell PowerEdge T30), but I am unsure if FreeBSD 12.3-R
> will see the motherboard NVMe slot or
On 11/11/22 00:43, hw wrote:
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Taking snapshots is
hw writes:
On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
[...]
> If you do not value the uptime, making actual (even
> scheduled) copies of the data may be recommendable over
> using a RAID because such schemes may (among other advantages)
> protect you from accidental f
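A minimal sketch of such a scheduled copy, assuming cron and rsync and with
made-up paths:

  # crontab entry: plain nightly copy at 03:00 instead of live redundancy
  0 3 * * * rsync -a --delete /srv/data/ /mnt/backupdisk/data/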
On Fri, Nov 11, 2022 at 09:03:45AM +0100, hw wrote:
On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
The advantage to RAID 6 is that it can tolerate a double disk failure.
With RAID 1 you need 3x your effective capacity to achieve that, and even
though storage has gotten cheaper, it hasn't
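To put rough numbers on that, a sketch with six hypothetical 4 TB disks:

  # RAID 6 over n disks leaves (n-2) disks of usable space
  echo $(( (6 - 2) * 4 ))   # 16 TB usable, survives any two failures
  # three-way mirrors (RAID 1) leave n/3 disks of usable space
  echo $(( (6 / 3) * 4 ))   # 8 TB usable for the same double-failure tolerance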
hw wrote:
> On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> > Linux-Fan wrote:
> >
> >
> > [...]
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID, which makes it more likely that one of them
> > will fail. The advantage of RAID 6 is that it ca
On 11.11.2022 at 07:36, hw wrote:
> That's on https://docs.freebsd.org/en/books/handbook/zfs/
>
> I don't remember where I read about 8; it could have been some documentation
> about
> FreeNAS.
Well, OTOH there do exist some considerations, which may have led to
that number sticking somewhere, bu
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
> On 11/10/22 07:44, hw wrote:
> > On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> > > On 11/9/22 00:24, hw wrote:
> > > > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> [...]
>
> >
> Taking snapshots is
On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID, which makes it more likely that one of them
> > will fail.
>
> I believe that's mostly
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> Linux-Fan wrote:
>
>
> [...]
> * RAID 5 and 6 restoration incurs additional stress on the other
> disks in the RAID, which makes it more likely that one of them
> will fail. The advantage of RAID 6 is that it can then recover
> from tha
On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> > > hw writes:
> > > > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > > > On 09/11/2022 at 12:41, hw wrote:
>
> [...]
>
> > > > I'd
> > > > have to use md
On Thu, 2022-11-10 at 14:28 +0100, DdB wrote:
> On 10.11.2022 at 13:03, Greg Wooledge wrote:
> > If it turns out that '?' really is the filename, then it becomes a ZFS
> > issue with which I can't help.
>
just tested: I could create, rename, delete a file with that name on a
zfs filesystem ju
On Thu, 2022-11-10 at 08:48 -0500, Dan Ritter wrote:
> hw wrote:
> > And I've been reading that when using ZFS, you shouldn't make volumes with
> > more
> > than 8 disks. That's very inconvenient.
>
>
> Where do you read these things?
I read things like this:
"Sun™ recommends that the number
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
ls -la
total 5
drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020 ?
namefoo@host /srv/datadir $ ls -la '?'
ls: cannot access ?:
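To see what such a name really contains, something like this helps (ls -b is
GNU ls; od is POSIX):

  # print C-style escapes for non-printable bytes in file names
  ls -b
  # or dump the raw bytes of every name in the directory
  printf '%s\n' * | od -c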
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Be careful that you do not confuse a ~33 GiB full backup set with 78
snapshots over six months of that same full
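The distinction matters because snapshots only consume space for blocks that
have changed since they were taken; per-dataset accounting can be inspected
with something like this, assuming a pool named tank:

  # the USEDSNAP column shows space consumed by snapshots alone
  zfs list -o space -r tank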
On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
* RAID 5 and 6 restoration incurs additional stress on the other
disks in the RAID, which makes it more likely that one of them
will fail.
I believe that's mostly apocryphal; I haven't seen science backing that
up, and it hasn't been
Linux-Fan wrote:
> I think the arguments of the RAID5/6 critics summarized were as follows:
>
> * Running a RAID 5 or 6 array while a disk is offline significantly
> degrades performance. RAID 10 keeps most of its speed and
> RAID 1 only degrades slightly for most use cases.
>
> *
On 10.11.2022 at 22:37, Linux-Fan wrote:
> Ext4 still does not offer snapshots. The traditional way to do
> snapshots outside of fancy BTRFS and ZFS file systems is to add LVM
> to the equation, although I do not have any useful experience with
> tha
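For what it is worth, the LVM route looks roughly like this (volume group and
names are made up):

  # create a 5 GB copy-on-write snapshot of an ext4 logical volume
  lvcreate --snapshot --name data-snap --size 5G vg0/data
  # mount it read-only, e.g. to take a consistent backup
  mount -o ro /dev/vg0/data-snap /mnt/snap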
hw writes:
On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > On 09/11/2022 at 12:41, hw wrote:
[...]
> > I'd
> > have to use mdadm to create a RAID5 (or use the hardware RAID but that
> > isn't
>
> AFAIK BT
On Thu, Nov 10, 2022 at 06:54:31PM +0100, hw wrote:
> Ah, yes. I tricked myself because I don't have hd installed,
It's just a symlink to hexdump.
lrwxrwxrwx 1 root root 7 Jan 20 2022 /usr/bin/hd -> hexdump
unicorn:~$ dpkg -S usr/bin/hd
bsdextrautils: /usr/bin/hd
unicorn:~$ dpkg -S usr/bin/hex
On Thu, 2022-11-10 at 09:30 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
>
> [...]
> > printf '%s\0' * | hexdump
> > 0000000 00c2 6177 7468
> > 0000007
>
> I dislike this outp
On Thu, 2022-11-10 at 10:47 +0100, DdB wrote:
> On 10.11.2022 at 06:38, David Christensen wrote:
> > What is your technique for defragmenting ZFS?
> well, that was meant more or less as a joke: there is none apart from
> offloading all the data, destroying and rebuilding the pool, and filling
> it ag
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> On 11/9/22 00:24, hw wrote:
> > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> > Hmm, when you can back up like 3.5TB with that, maybe I should put
> > FreeBSD on my
> > server and give ZFS a try. Worst thing that can
Brad Rogers wrote:
> On Thu, 10 Nov 2022 08:48:43 -0500
> Dan Ritter wrote:
>
> Hello Dan,
>
> 8 is not a magic number.
>
> Clearly, you don't read Terry Pratchett. :-)
In the context of ZFS, 8 is not a magic number.
May you be ridiculed by Pictsies.
-dsr-
On Thu, 10 Nov 2022 08:48:43 -0500
Dan Ritter wrote:
Hello Dan,
> 8 is not a magic number.
Clearly, you don't read Terry Pratchett. :-)
On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> good idea:
>
> printf %s * | hexdump
> 0000000 77c2 6861 0074
> 0000005
Looks like there might be more than one file here.
> > If you misrepresented the situat
hw wrote:
> And I've been reading that when using ZFS, you shouldn't make volumes with
> more
> than 8 disks. That's very inconvenient.
Where do you read these things?
The number of disks in a vdev can be optimized, depending on
your desired redundancy method, total number of drives, and
tole
On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> > ls -la
> > total 5
> > drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
> > drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
> > drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020
On 10.11.2022 at 13:03, Greg Wooledge wrote:
> If it turns out that '?' really is the filename, then it becomes a ZFS
> issue with which I can't help.
just tested: I could create, rename, delete a file with that name on a
zfs filesystem just as with any other filesystem.
But: I recall having seen
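The test described amounts to something like this (the quotes keep the shell
from expanding ? as a glob):

  touch '?'      # create a file literally named ?
  mv '?' qmark   # rename it
  rm qmark       # delete it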
On Thu, 2022-11-10 at 10:59 +0100, DdB wrote:
> On 10.11.2022 at 04:46, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> > Why would partitions be better than the block device itself?
On Thu, 2022-11-10 at 10:34 +0100, Christoph Brinkhaus wrote:
> On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> >
> > Why would partitions
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> ls -la
> total 5
> drwxr-xr-x  3 namefoo namefoo    3 16. Aug 22:36 .
> drwxr-xr-x 24 root    root    4096  1. Nov 2017 ..
> drwxr-xr-x  2 namefoo namefoo    2 21. Jan 2020 ?
> namefoo@host /srv/datadir $ ls -la '?'
> ls: cannot access ?
On 10.11.2022 at 04:46, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
>> On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
>> [...]
>>> FreeBSD has ZFS but can't even configure the disk controllers, so that won't
>>> work.
>>
>> If I understand you right you mean R
On 10.11.2022 at 06:38, David Christensen wrote:
> What is your technique for defragmenting ZFS?
well, that was meant more or less as a joke: there is none apart from
offloading all the data, destroying and rebuilding the pool, and filling
it again from the backup. But I do it from time to time if fr
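That offload-and-rebuild cycle is typically a send/receive round trip; a
sketch with assumed pool names:

  # stream every dataset and snapshot of tank into a freshly created newtank
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs receive -F newtank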
On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > [...]
> > > FreeBSD has ZFS but can't even configure the disk controllers, so that
> > > won't
> > > work.
> >
> > If
On 11/9/22 01:35, DdB wrote:
> But
> I am satisfied with zfs performance from spinning rust, if I don't fill
> up the pool too much, and defrag after a while ...
What is your technique for defragmenting ZFS?
David
On 11/9/22 05:29, didier gaumet wrote:
- *BSDs nowadays have departed from old ZFS code and use the same source
code stack as Linux (OpenZFS)
AIUI FreeBSD 12 and prior use the older Illumos-derived ZFS code, while
FreeBSD 13 and later use OpenZFS code (the continuation of the ZFS-on-Linux
codebase).
On 11/9/22 05:44, didier gaumet wrote:
> I was usin
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
> Hmm, when you can back up like 3.5TB with that, maybe I should put
> FreeBSD on my
> server and give ZFS a try. Worst thing that can happen is that it
> crashes and
> I'd have made an experiment that wasn't
On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > On 09/11/2022 at 12:41, hw wrote:
>
> [...]
>
> > > I am really not so well aware of ZFS state but my impression was that:
> > > - FUSE implementation of ZoL (ZF
On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> [...]
> > FreeBSD has ZFS but can't even configure the disk controllers, so that won't
> > work.
>
> If I understand you right you mean RAID controllers?
yes
> According to my
hw writes:
On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> On 09/11/2022 at 12:41, hw wrote:
[...]
> I am really not so well aware of ZFS state but my impression was that:
> - FUSE implementation of ZoL (ZFS on Linux) is deprecated and that,
> Ubuntu excepted (classic module?), Z
On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
Hi hw,
> On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > On 09/11/2022 at 12:41, hw wrote:
> > [...]
> > > In any case, I'm currently tending to think that putting FreeBSD with ZFS
> > > on
> > > my
> > > server might be the best
On Wed, 2022-11-09 at 17:29 +0100, DdB wrote:
> On 09.11.2022 at 12:41, hw wrote:
> > In any case, I'm currently tending to think that putting FreeBSD with ZFS on
> > my
> > server might be the best option. But then, apparently I won't be able to
> > configure the controller cards, so that won't
On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> On 09/11/2022 at 12:41, hw wrote:
> [...]
> > In any case, I'm currently tending to think that putting FreeBSD with ZFS on
> > my
> > server might be the best option. But then, apparently I won't be able to
> > configure the controller ca
On 09.11.2022 at 12:41, hw wrote:
> In any case, I'm currently tending to think that putting FreeBSD with ZFS on
> my
> server might be the best option. But then, apparently I won't be able to
> configure the controller cards, so that won't really work. And ZFS with Linux
> isn't so great becau
hw wrote on 11/9/22 04:41:
configure the controller cards, so that won't really work. And ZFS with Linux
isn't so great because it keeps fuse in between.
That isn't true. I've been using ZFS with Debian for years without FUSE,
through the ZFSonLinux project.
The only slightly discomfortin
On 09/11/2022 at 12:41, hw wrote:
[...]
In any case, I'm currently tending to think that putting FreeBSD with ZFS on my
server might be the best option. But then, apparently I won't be able to
configure the controller cards, so that won't really work. And ZFS with Linux
isn't so great because
On Wed, 2022-11-09 at 10:35 +0100, DdB wrote:
> On 09.11.2022 at 09:24, hw wrote:
> > > Learn more about ZFS and invest in hardware to get performance.
> > Hardware like? In theory, using SSDs for cache with ZFS should improve
> > performance. In practice, it only wore out the SSDs after a while
On 09.11.2022 at 09:24, hw wrote:
>> Learn more about ZFS and invest in hardware to get performance.
> Hardware like? In theory, using SSDs for cache with ZFS should improve
> performance. In practice, it only wore out the SSDs after a while, and now
> it's
> not any faster without SSD cache.
>
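For reference, a cache device of the kind discussed here is attached and
detached like this (pool and device names are placeholders):

  # add an SSD as L2ARC; losing or removing it never endangers pool data
  zpool add tank cache /dev/nvme0n1
  # remove it again, e.g. once it has worn out
  zpool remove tank /dev/nvme0n1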