Peter C. Norton writes:
 > On Tue, Aug 21, 2007 at 05:34:38PM +0200, Roch - PAE wrote:
 > > 
 > > Adrian Cockcroft writes:
 > >  > Why can't you adapt ZFS for swap? It does the transactional
 > >  > clustering of random writes into sequential related blocks,
 > >  > aggressive prefetch on read, and would also guard against corrupt
 > >  > blocks in swap. Anon-ZFS?
 > >  > Adrian
 > > 
 > > Good point. And it works already: swap to a zvol.
 > > 
 > >    http://blogs.sun.com/scottdickson/entry/fun_with_zvols_-_swap
 > >    ZFS admin guide (currently down):
 > >    http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qsl?a=view
 > > 
 > > For swap-out, ZFS will get streaming performance out of the disks
 > > and, as you say, will order pages from a given object (an anon
 > > segment?).
 > > 
 > > For swap-in, I don't think we'll trigger read-ahead (the zfetch
 > > code), but the low-level vdev prefetch could be activated. That
 > > heuristic was recently adjusted so that it won't trigger on data
 > > blocks; we might need to revisit this for the zvol/swap case.
 > > 
 > > -r
 > > 
 > > PS. I would set up the zvol to use volblocksize == pagesize.
 > 
 > The swap-to-zvol idea is very interesting. But this brings up another
 > question in the puzzle for us: what is this mythical creature, a page
 > size, in the Solaris VM, when it comes to tuning? You have variable
 > page sizes in Solaris, but in practice you can only get a single large
 > page size on x86 (SPARC seems to do better), which suggests a
 > horrendous amount of fragmentation in the VM (never mind how that is
 > reflected in swap), enough that you just can't get large-page
 > performance.
 > 

I don't know how swap treats large pages. Can anyone comment on this?
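
For what it's worth, a quick way to see which page sizes a platform
supports and which sizes a given process actually gets (standard
Solaris commands, but take this as a rough sketch):

   pagesize -a      # list all page sizes the hardware/OS supports
   pmap -s <pid>    # show the page size in use for each mapping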

 > In any case, are you optimizing for 2k or 2M pages? Does it make a
 > difference? I suspect it skews some of the considerations, but I'm not
 > sure exactly how.
 > 

For volblocksize I had in mind 8K for SPARC and 4K for x86 (Intel).
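
For example, something along these lines should do it (pool and volume
names and the size are placeholders; 8k shown for SPARC, use 4k on x86):

   # create a 4 GB zvol whose block size matches the base page size
   zfs create -V 4g -o volblocksize=8k tank/swapvol
   # add it as a swap device and check the result
   swap -a /dev/zvol/dsk/tank/swapvol
   swap -l

Note that volblocksize can only be set when the volume is created, so
it has to be chosen up front.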

-r

 > -Peter
 > 
 > -- 
 > The 5 year plan:
 > In five years we'll make up another plan.
 > Or just re-use this one.

_______________________________________________
perf-discuss mailing list
[email protected]
