On Tuesday, July 09, 2019, at 8:10 PM, Matthew Ahrens wrote:
> We expect to open PR's against ZoL for this work within the next month (it
> depends on https://github.com/zfsonlinux/zfs/pull/8442).
Oh, after reading this, things are much clearer.
And this explains why I could see a massive IO
On Tuesday, July 09, 2019, at 8:10 PM, Matthew Ahrens wrote:
> This behavior is not really specific to having a lot of pools. If you had
> one big pool with all the disks in it, ZFS would still try to allocate from
> each disk, causing most of that disk's metaslabs to be loaded (ZFS selects
>
It looks like your disks are quite fragmented, with most of the free space
being in 8K-63KB chunks. There are very few larger (>=128K) free chunks,
which can cause ZFS to load most of the metaslabs when looking for a 128K
free chunk (which it does periodically, especially if the ZIL is in heavy
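For reference, a quick way to see that kind of fragmentation from userland is sketched below (the pool name "disk05" is a placeholder, and both commands assume a reasonably recent zpool/zdb):

zpool get fragmentation disk05    # pool-wide fragmentation percentage
zdb -mm disk05 | less             # per-metaslab space maps and free-space histograms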
On Friday, July 05, 2019, at 7:43 PM, Matthew Ahrens wrote:
> How many metaslabs are there total, and how many are loaded?
zdb -m for a disk (on machine 03, disk5, for future reference) says:
Metaslabs:
vdev 0
metaslabs   116   offset            spacemap       free
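If it's useful, the per-metaslab free column can be pulled straight out of that output with something like the sketch below (pool name is a placeholder; the column layout varies a bit between zdb versions):

zdb -m disk05 | awk '$1 == "metaslab" { print $2, $NF }'   # metaslab index and its free space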
On Friday, July 05, 2019, at 7:43 PM, Matthew Ahrens wrote:
> Given that Nagy is using recordsize=128K or 1M, writing large files, and not
> deleting any files
No, there are deletes; the files are just written once and never modified. (The original
sentence was: *The usage pattern is write once files
On Friday, July 05, 2019, at 4:36 PM, Allan Jude wrote:
> How much memory was actually in use after importing the 46 pools?
Wired memory in top was between 40 and 50 GiB. The ARC was only 2-4 GiB at most at that time.
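For the record, the same two figures can be read directly with the sketch below (FreeBSD sysctl names; these are assumed locations, not something taken from the thread):

sysctl -n vm.stats.vm.v_wire_count hw.pagesize   # wired pages and page size; multiply for bytes
sysctl -n kstat.zfs.misc.arcstats.size           # current ARC size in bytes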
On Fri, Jul 5, 2019 at 7:37 AM Allan Jude wrote:
> On 2019-07-05 04:21, nagy.att...@gmail.com wrote:
> > On Friday, July 05, 2019, at 12:34 AM, Allan Jude wrote:
> >> And now which values are growing. This breaks down each UMA cache and
> >> how much slack it contains.
> > I had one, but lost it
On 2019-07-05 04:21, nagy.att...@gmail.com wrote:
> On Friday, July 05, 2019, at 12:34 AM, Allan Jude wrote:
>> And now which values are growing. This breaks down each UMA cache and
>> how much slack it contains.
> I had one, but lost it with machine reboots (my fault).
>
> What I have now is
On Thursday, July 04, 2019, at 8:03 PM, K. R. Sanborn wrote:
> One way to deal with the pool import, is to manually run them sequentially.
> It will take longer, but it's more controlled.
I did that; things were the same.
> Next, since these are in essence R/O files, I'd disable "atime" in the
>
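A minimal sketch of both suggestions, with placeholder pool and dataset names:

# Import the pools one at a time instead of letting them all import in parallel:
for p in disk01 disk02 disk03; do
    zpool import "$p"
done

# Disable atime updates on the write-once datasets:
zfs set atime=off disk01/data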
On Friday, July 05, 2019, at 12:34 AM, Allan Jude wrote:
> And now which values are growing. This breaks down each UMA cache and
> how much slack it contains.
I had one, but lost it with machine reboots (my fault).
What I have now is from a quick import, which didn't eat all of the memory:
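(A sketch of one way such a UMA breakdown can be produced on FreeBSD, not the actual output from that import: it sorts zones by how much memory they hold and how much of that is slack, and the field positions assume the stock vmstat -z format.)

vmstat -z | awk -F'[:,]' 'NF > 4 {
    used  = $2 * $4 / 1048576    # item size * used items, in MiB
    slack = $2 * $5 / 1048576    # item size * cached-but-free items, in MiB
    printf "%10.1f MiB used %10.1f MiB slack  %s\n", used, slack, $1
}' | sort -rn | head -20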
On 2019-07-02 07:50, nagy.att...@gmail.com wrote:
> ZFS's feature set is pretty much enough for my use case, why should I
> reinvent those?
> It's strange to hear on a file system forum that "you should write your
> own file system". :)
>
> From ZFS viewpoint the only difference here is that I
> On Jul 2, 2019, at 5:48 AM, nagy.att...@gmail.com wrote:
>
> Glad to hear that! :)
> I'll try to be more verbose then.
> For example I have a machine with 44*4T SATA disks. Each of these disks have
> a zpool on them, so I have 44 zpools on the machine (with one zfs on each
> zpool).
> I put
On Wednesday, July 03, 2019, at 10:23 AM, George Melikov wrote:
> On your main question - ZoL 0.7.13, Debian, 1-2 pools <1TB in size definitely
> DON'T eat 1-1.5 GB RAM per pool only on import for me.
>
> IIRC the ARC will grow only when you access (meta)data.
Well, these are in the range of
On Tuesday, July 02, 2019, at 11:28 PM, Richard Laager wrote:
> Why are you doing 44 single disk zpools? One big downside then is that
> you have no redundancy.
I've tried to explain that above.
I have redundancy over multiple hosts. I replicate all objects between the
machines.
This way doing
On your main question - ZoL 0.7.13, Debian, 1-2 pools <1TB in size definitely DON'T eat 1-1.5 GB RAM per pool only on import for me.
IIRC the ARC will grow only when you access (meta)data.
02.07.2019, 15:49, "nagy.att...@gmail.com":
> Glad to hear that! :)
> I'll try to be more verbose then.
> For example
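A sketch of the equivalent check on the ZoL side right after an import (the proc paths are the standard 0.7-era ones; treat the exact fields as an assumption, not something reported in the thread):

awk '$1 == "size" { printf "ARC: %.2f GiB\n", $3 / 2^30 }' /proc/spl/kstat/zfs/arcstats
grep -E '^(Slab|SUnreclaim):' /proc/meminfo   # kernel slab memory outside the ARC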
On 7/2/19 7:48 AM, nagy.att...@gmail.com wrote:
> For example I have a machine with 44*4T SATA disks. Each of these disks
> have a zpool on them, so I have 44 zpools on the machine (with one zfs
> on each zpool).
Why are you doing 44 single disk zpools? One big downside then is that
you have no
Glad to hear that! :)
I'll try to be more verbose then.
For example, I have a machine with 44*4T SATA disks. Each of these disks has a
zpool on it, so I have 44 zpools on the machine (with one zfs on each zpool).
I put files onto these zfs/zpools into hashed directories.
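A rough sketch of that layout (the two-level hash split, the pool path, and the hash tool are illustrative assumptions, not the actual scheme):

obj="some-object-name"                           # placeholder object/file name
h=$(printf '%s' "$obj" | sha256sum | cut -c1-4)  # first 4 hex chars (use "sha256 -q" on FreeBSD)
dir="/disk05/${h%??}/${h#??}"                    # e.g. /disk05/3f/9a
mkdir -p "$dir" && cp "$obj" "$dir/"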
On file numbers/sizes:
ZFS's feature set is pretty much enough for my use case, why should I reinvent
those?
It's strange to hear on a file system forum that "you should write your own
file system". :)
From ZFS's viewpoint, the only difference here is that I have as many zpools as
disks. Otherwise it's used as a
On Tue, Jul 2, 2019 at 1:50 PM wrote:
> ZFS's feature set is pretty much enough for my use case, why should I
> reinvent those?
> It's strange to hear on a file system forum that "you should write your
> own file system". :)
>
How did you construe my question to mean anything like that?
As I