I knew there would be a trade-off between the number of metaslabs, system
memory usage, and simply being able to keep track of it all without
affecting write latency as the pool fills up. No matter what happens, it is
nice to see this being addressed. Again, thanks.

On Fri, Nov 28, 2014 at 2:40 PM, <adam.levent...@delphix.com> wrote:

> While I agree with George, I would note that we’ve been considering some
> of the fundamental tensions of sizing metaslabs and of allocating more
> generally. For us, large devices and small allocation sizes (our clients
> mostly write compressible 8KB chunks) can lead to large in-memory
> structures for each metaslab. Smaller metaslabs, though, mean more
> metaslabs to manage per device, and finding “good” areas to allocate can be
> more difficult.
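>
> To put rough, purely illustrative numbers on it: a 10TB device split into
> roughly 200 metaslabs gives metaslabs of roughly 50GB each, and tracking
> free space at 8KB granularity means a badly fragmented metaslab can hold
> on the order of millions of free segments, each of which costs memory in
> the in-core structures once the metaslab is loaded.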
>
> This is an open and active area of investigation for us.
>
> Adam
>
> —
> Adam Leventhal
> CTO, Delphix
> blog.delphix.com/ahl
>
>
> On Mon, Nov 24, 2014 at 3:36 PM, George Wilson <george.wil...@delphix.com>
> wrote:
>
>> Jason,
>>
>> Yes, it does make sense and something that we at Delphix have discussed.
>> This tunable was the first attempt at making this adjustable so expect to
>> see more in this space as things continue to evolve.
>>
>> Thanks,
>> George
>>
>> On 11/24/14, 11:52 AM, Jason Cox wrote:
>>
>>  (I apologize if this has already been posted to the list by me. I have
>> had some issues and am not sure whether it was successfully posted yet or not.)
>>
>> My company has been dealing with fragmentation issues on ZFS with some
>> write-heavy OLTP databases, and performance degrades over time as the DB
>> grows and data is written. We have reached the point where, every so often
>> (once a quarter or so), we defragment by creating new LUNs and doing a ZFS
>> send/receive. Over time we have learned to create more, smaller LUNs so
>> that we get more vdevs, and thus more (and smaller) metaslabs in the pool
>> and better write performance to the SAN, without having to mess with
>> zfs_vdev_max_pending as much. (Please note that we are currently using
>> Oracle Solaris 11, but I have been making the case to switch to an
>> Illumos-based OS instead to take advantage of some of the improvements
>> that have come out.)
>>
>> I was doing some digging recently and found that back in September, a
>> change was committed:
>>
>> "5161 add tunable for number of metaslabs per vdev
>>
>>  https://reviews.csiden.org/r/95/"
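>>
>> From skimming that review, the change appears to add a metaslabs_per_vdev
>> tunable (defaulting to 200) that vdev_metaslab_set_size() uses to pick the
>> metaslab shift. Roughly, and treating the exact names and details as my
>> reading of the patch rather than gospel:
>>
>>     /* sketch of the sizing logic as I read the patch; details may differ */
>>     int metaslabs_per_vdev = 200;
>>
>>     void
>>     vdev_metaslab_set_size(vdev_t *vd)
>>     {
>>             /* aim for about metaslabs_per_vdev metaslabs per top-level vdev */
>>             vd->vdev_ms_shift = highbit64(vd->vdev_asize / metaslabs_per_vdev);
>>             vd->vdev_ms_shift = MAX(vd->vdev_ms_shift, SPA_MAXBLOCKSHIFT);
>>     }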
>>
>> My question is: does it make sense to have a tunable only for the number
>> of metaslabs per vdev, or could we look at the option of a knob that lets
>> you set the maximum size you would prefer per metaslab and then have the
>> code determine the number of metaslabs per vdev automatically? Would it
>> make sense to try to optimize the metaslab size like that? Is there a
>> point at which a high number of metaslabs hurts performance in some way?
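>>
>> Purely as a hypothetical sketch of what I mean (the zfs_metaslab_max_size
>> name and the logic are made up by me, not anything that exists today), the
>> sizing function could instead be driven by a preferred maximum metaslab
>> size:
>>
>>     /* hypothetical: cap the metaslab size and let the count fall out */
>>     uint64_t zfs_metaslab_max_size = 16ULL << 30;   /* e.g. 16GB */
>>
>>     void
>>     vdev_metaslab_set_size(vdev_t *vd)
>>     {
>>             /* largest power-of-two metaslab that does not exceed the cap */
>>             vd->vdev_ms_shift = highbit64(zfs_metaslab_max_size) - 1;
>>             vd->vdev_ms_shift = MAX(vd->vdev_ms_shift, SPA_MAXBLOCKSHIFT);
>>             /* the metaslab count is then vdev_asize >> vdev_ms_shift */
>>     }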
>>
>> Anyway, I look forward to being able to use the new option for setting
>> the number of metaslabs per vdev in time. Improvements like this help me
>> make the case for a change.
>>
>> --Jason
>>
>>
>>
>>
>>
>
_______________________________________________
developer mailing list
developer@open-zfs.org
http://lists.open-zfs.org/mailman/listinfo/developer
