> This involves a lot of guesswork when defining slice sizes, with 
> potential problems. For instance, undersize /var and pretty soon there 
> will be no space left to apply patch clusters. Undersize the root 
> filesystem and you are in trouble.
> Nowadays, servers come standard with 72/146 GB disks and, if one is 
> willing to pay for it, 300 GB disks. My thinking is to no longer follow 
> the above recommendation and instead use a new approach, such as:
> s1 = swap (size=1GB)
> s0 = /  (size = disk capacity - 1GB)
> 
> This way one gets rid of the guesswork of sizing the partitions.
> 
> Any comments about this approach? Pros and cons?

That's what I do.

Cons:  
Old machines (pre sun4u) cannot use large (>2GB) root filesystems.
Problems in one filesystem can affect all the others.
For very large disks, it may cause backup/restore issues.

Pros:
No need to worry about size allocation.
Much simpler.
If you're doing VxVM mirroring, no need to explicitly publish slices.

I used to hear tales about "if your root filesystem fills up, the
machine crashes".  I've never seen that happen.  I've been doing the
one-big-filesystem thing since 2.6.  Yes, it might fill if you're not
paying attention, but I've never had it crash or hang.  Log in, clean
up, everything was okay.  

I might make a more application-specific partition for special needs.
For example, a busy mailserver might get root, swap, and /var/spool/mail....
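As a rough sketch of the arithmetic behind both layouts (the plain two-slice
one and the mailserver variant above), here's a small Python helper.  The
function name and the slice-name labels are my own invention for illustration,
not any Solaris tool's output:

```python
def plan_slices(disk_gb, swap_gb=1, extra=None):
    """Sketch of the two-slice plan discussed in this thread:
    s1 = swap (fixed size), s0 = / (everything that's left).

    `extra` is an optional mapping of slice label -> GB for
    application-specific slices (e.g. a mailserver's /var/spool/mail).
    Labels and signature are hypothetical, for illustration only.
    """
    extra = extra or {}
    root_gb = disk_gb - swap_gb - sum(extra.values())
    if root_gb <= 0:
        raise ValueError("disk too small for the requested slices")
    layout = {"s1 (swap)": swap_gb, "s0 (/)": root_gb}
    layout.update(extra)
    return layout

# A standard 72 GB disk with the simple layout:
print(plan_slices(72))
# A 146 GB mailserver giving a dedicated 40 GB mail spool:
print(plan_slices(146, extra={"s5 (/var/spool/mail)": 40}))
```

The point the sketch makes is that only the fixed-size slices need a decision;
root absorbs whatever remains, so there's nothing left to guess.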


-- 
Darren Dunham                                           [EMAIL PROTECTED]
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >
_______________________________________________
Solaris-Users mailing list
[email protected]
http://www.filibeto.org/mailman/listinfo/solaris-users
