Marcelo,

I just finished writing up my test results; hopefully they will answer
most of your questions. You can find them on my blog at this permalink:

http://blogs.sun.com/blogfinger/entry/zfs_and_the_uberblock_part

Regards,

Bernd

Marcelo Leal wrote:
>> Marcelo,
>  Hello there... 
>> I did some more tests.
> 
> You are getting very useful information from your tests. Thanks a lot!!
> 
>> I found that not every uberblock_update() is followed by a write to
>> the disk (although the txg is increased every 30 seconds for each of
>> the three zpools of my 2008.11 system). In these cases,
>> ub_rootbp.blk_birth stays at the same value while txg is incremented
>> by 1.
>   Are you sure about that? I mean, what I could understand from the
> on-disk format document is that there is a 1:1 correlation between txg,
> creation time, and uberblock. Each time there is a write to the pool, we
> have another "state" of the filesystem. Actually, we just need another
> valid uberblock when we change the filesystem state (write to it).
>  
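As a rough sketch of the behaviour I am describing (illustrative Python
only, the names are made up and this is not the actual ZFS code): the txg
counter advances every sync interval, but the on-disk uberblock, and with
it ub_rootbp.blk_birth, only moves forward when the pool actually had
dirty data to sync.

  def sync_interval(state, pool_was_dirty):
      state["txg"] += 1                       # uberblock_update(): txg always advances
      if pool_was_dirty:
          state["ondisk_txg"] = state["txg"]  # vdev_uberblock_sync(): copy goes to disk
          state["blk_birth"] = state["txg"]   # blk_birth follows the actual write
      return state

  state = {"txg": 100, "ondisk_txg": 100, "blk_birth": 100}
  for dirty in [True, False, False, False]:   # four 30-second intervals, only one dirty
      state = sync_interval(state, dirty)
  print(state)  # txg is 104, but the on-disk copy and blk_birth still say 101
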
>> But each sync command at the OS level is followed by a
>> vdev_uberblock_sync() directly after the uberblock_update(), and then
>> by four writes to the four uberblock copies (one per label copy) on
>> disk.
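If it helps, here is how I picture where those four writes land, assuming
the layout from the on-disk format document: four label copies per vdev
(two at the front and two at the end of the device), each holding a ring
of 128 uberblock slots, with the active slot derived from the txg. The
slot count and the modulo selection here are assumptions for illustration,
not something read back from the disk.

  LABELS = ["L0", "L1", "L2", "L3"]   # the four label copies on the vdev
  UB_SLOTS = 128                      # uberblock ring size per label

  def uberblock_writes(txg):
      slot = txg % UB_SLOTS           # the same slot is only reused every 128 txgs
      return [(label, slot) for label in LABELS]   # one write per label copy = 4 writes

  print(uberblock_writes(12345))      # [('L0', 57), ('L1', 57), ('L2', 57), ('L3', 57)]
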
>  Hmm, maybe the uberblock_update is not really important in our discussion... 
> ;-)
>  
>> And a change to one or more files in any pool during the 30-second
>> interval is also followed by a vdev_uberblock_sync() of that pool at
>> the end of the interval.
> 
>  So, what is this uberblock_update(), then? 
>> So on my system (a web server), during times when there is enough
>> activity that each uberblock_update() is followed by a
>> vdev_uberblock_sync(), I get:
>>
>>       2 writes per minute (*60)
> 
>  I'm totally lost... 2 writes per minute?
> 
>>     120 writes per hour (*24)
>>    2880 writes per day
>> but only each 128th write goes to the same block ->
>> = 22.5 writes to the same block on the drive per day.
>>
>> If we take the lower number of max. writes in the referenced paper,
>> which is 10,000, we get 10,000/22.5 = 444.4 days, or one year and 79
>> days.
>>
>> For 100,000, we get 4,444.4 days, or more than 12 years.
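Just to make that arithmetic easy to re-check, a small plain-Python sketch
of the same numbers (the 30-second interval, the 128-slot ring and the
10,000/100,000 cycle figures are the assumptions already stated above):

  writes_per_day = 2 * 60 * 24              # 2 uberblock syncs per minute -> 2880/day
  per_slot_per_day = writes_per_day / 128   # ring of 128 slots -> 22.5 hits per slot

  for max_cycles in (10_000, 100_000):      # erase/write endurance figures from the paper
      days = max_cycles / per_slot_per_day
      print(max_cycles, "cycles ->", round(days, 1), "days =",
            round(days / 365, 1), "years")
  # 10,000 -> 444.4 days (about 1.2 years); 100,000 -> 4444.4 days (about 12.2 years)
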
> 
>  Ok, but I think the number is 10,000. 100,000 would require static wear
> leveling, and that is a non-trivial implementation for USB pen drives, right?
>> During times without HTTP access to my server, only about every 5th
>> to 10th uberblock_update() is followed by a vdev_uberblock_sync() for
>> rpool, and much less often for the two data pools, which means that
>> the corresponding uberblocks on disk will be skipped for writing (if I
>> did not overlook anything), and the device will likely wear out
>> later.
>  I need to know what uberblock_update() is... it seems not to be related
> to txg, syncing of disks, labels, anything... ;-) 
> 
>  Thanks a lot Bernd.
> 
>  Leal
> [http://www.eall.com.br/blog]
>> Regards,
>>
>> Bernd
>>
>> Marcelo Leal wrote:
>>> Hello Bernd,
>>>  Now I see your point... ;-)
>>>  Well, following some "very simple" math:
>>>
>>>  - One txg each 5 seconds = 17280/day;
>>>  - Each txg writing 1MB (L0-L3) = 17GB/day
>>>  
>>>  In the paper the math was 10 years = (2.7 * the size of the USB
>>> drive) writes per day, right?
>>>  So, on a 4GB drive, that would be ~10GB/day. Then just the label
>>> updates would make our USB drive live for 5 years... and if each txg
>>> updates 5MB of data, our drive would live for just a year.
>>>  Help, I'm not good with numbers... ;-)
>>>
>>>  Leal
>>> [http://www.eall.com.br/blog]
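For what it is worth, here is a rough re-check of that estimate in plain
Python, assuming the paper's rule of thumb that writing 2.7x the drive
size per day wears the drive out in ten years, and that the lifetime
scales inversely with the daily write volume (the 1MB and 5MB per txg
figures are estimates, not measurements):

  drive_gb = 4
  budget_gb_per_day = 2.7 * drive_gb   # ~10.8 GB/day corresponds to a 10-year life

  txgs_per_day = 86_400 // 5           # one txg every 5 seconds -> 17280 per day

  for mb_per_txg in (1, 5):
      gb_per_day = txgs_per_day * mb_per_txg / 1024
      years = 10 * budget_gb_per_day / gb_per_day
      print(mb_per_txg, "MB/txg ->", round(gb_per_day, 1), "GB/day ->",
            round(years, 1), "years")
  # prints about 6.4 years for 1 MB/txg and about 1.3 years for 5 MB/txg

That lands in the same ballpark as the figures quoted above; the exact
result depends on how the 2.7x budget is rounded.
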

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
