On 2012-Oct-12 08:11:13 +0100, andy thomas <a...@time-domain.co.uk> wrote:
>This is apparently what had been done in this case:
>
>       gpart add -b 34 -s 6000000 -t freebsd-swap da0
>       gpart add -b 6000034 -s 1947525101 -t freebsd-zfs da1
>       gpart show

Assuming you can be sure you'll keep 512B-sector disks, that's OK,
but I'd recommend aligning both the swap and ZFS partitions on at
least 4KiB boundaries for future-proofing (i.e. you could then
safely reuse the same partition table on a 4KiB-sector disk in
future).
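
For example, something along these lines (a sketch only, assuming
both partitions live on the same disk as the starting offsets in
your commands suggest; the sizes are just your originals shrunk and
rounded to multiples of 8 sectors so everything starts on a 4KiB
boundary):

	gpart add -b 2048 -s 6000000 -t freebsd-swap da0
	gpart add -b 6002048 -s 1947523080 -t freebsd-zfs da0
	gpart show da0

On 9.x, gpart can also work out the alignment for you if you pass
"-a 4k" to "gpart add".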

>Is this a good scheme? The server has 12 G of memory (upped from 4 GB last 
>year after it kept crashing with out of memory reports on the console 
>screen) so I doubt the swap would actually be used very often.

Having enough swap to hold a crashdump is useful.  You might consider
using gmirror for swap redundancy, though a 3-way mirror is overkill.
(And I'd strongly recommend against swapping to a zvol or ZFS -
FreeBSD has "issues" with that combination.)

>The other issue with this server is it needs to be rebooted every 8-10 
>weeks as disk I/O slows to a crawl over time and the server becomes 
>unusable. After a reboot, it's fine again. I'm told ZFS 13 on FreeBSD 8.0 
>has a lot of problems

Yes, it does - and your symptoms match one of the known problems.
Once I/O starts slowing down, does top(1) report lots of inactive
and cached memory but very little free memory, and is
kstat.zfs.misc.arcstats.memory_throttle_count high?
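
You can check with sysctl once things have slowed down; something
like this (the arcstats sysctls are standard on FreeBSD's ZFS, the
exact values will obviously vary):

	sysctl kstat.zfs.misc.arcstats.memory_throttle_count
	sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max

If memory_throttle_count keeps climbing, you're probably hitting the
problem I have in mind.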

> so I was planning to rebuild the server with FreeBSD 
>9.0 and ZFS 28 but I didn't want to make any basic design mistakes in 
>doing this.

I'd suggest you test 9.1-RC2 (just released) with a view to using 9.1,
rather than installing 9.0.

Since your questions are FreeBSD specific, you might prefer to ask on
the freebsd-fs list.

-- 
Peter Jeremy
