That's a good point.
In my data center, the risk of power supply failure is minimized as much
as possible. All servers have dual power supplies, and each power supply
is fed by an independent power line, not to mention backup diesel
generators. Besides, all root disks are mirrored via Veritas with the
logging option enabled.
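(The "logging option" here is a dirty region log on the mirrored volume.
As a rough sketch only, assuming the standard rootdg/rootvol names from
root encapsulation:)

    # Inspect the root volume, its plexes, and any existing logs:
    vxprint -g rootdg -ht rootvol

    # Add a dirty region log (DRL) to the mirrored volume if it lacks one:
    vxassist -g rootdg addlog rootvol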
I think that when 300 GB disks become standard (meaning cheap), it won't
matter whether a system has several partitions or only one. Filesystems
will be big one way or the other: either a gigantic 300 GB / partition or
several partitions (/var, /usr, /opt, etc.) of dozens of GB each.
"Leonardo Lagos" <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]
07/18/2005 12:02 PM
Please respond to leo; Please respond to Solaris-Users mailing list
To: "'Solaris-Users mailing list'"
<[email protected]>
cc:
Subject: RE: [Solaris-Users] slices/partitions
Failure scenario: power supply failure, corrupted disk data, fsck. Fsck
will take hours to repair such a big disk, and you will need to be there
to watch the entire process, press "y" from time to time, etc.
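(Two ways to soften that, sketched below with a hypothetical device name:
run fsck non-interactively, or enable UFS logging so a full check is
rarely needed after a crash.)

    # Answer "yes" to every fsck question automatically:
    fsck -y /dev/rdsk/c0t0d0s0

    # Solaris 7 and later: UFS logging avoids most post-crash fscks.
    # Add "logging" to the mount options in /etc/vfstab, or on a live root:
    mount -F ufs -o remount,logging /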
I do something similar, a few partitions (plus swap, of course), but of
4 GB minimum each...
/ (just one for the whole operating system)
/var (for patches)
/opt (for specific application software; typically 10+ GB)
/usr/local (for additional free/GNU software)
/export (for homes and user data)
That's it.
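Roughly, creating that layout looks like this once the disk has been
labeled (the disk name and slice numbers are just an example, not a
prescription):

    # Hypothetical slice assignments on disk c0t0d0 (slice 1 = swap):
    newfs /dev/rdsk/c0t0d0s0    # /
    newfs /dev/rdsk/c0t0d0s3    # /var
    newfs /dev/rdsk/c0t0d0s4    # /opt
    newfs /dev/rdsk/c0t0d0s5    # /usr/local
    newfs /dev/rdsk/c0t0d0s6    # /export
    # then add the matching entries to /etc/vfstab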
Regards,
Leo
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Darren Dunham
Sent: Friday, July 15, 2005 8:46 PM
To: [email protected]
Subject: Re: [Solaris-Users] slices/partitions
> This involves a lot of guesswork when defining the slice sizes, with
> potential problems. For instance, undersize /var and pretty soon there
> will be no space left to apply patch clusters. Undersize the root
> filesystem and you are in trouble.
> Nowadays, servers come standard with 72/146 GB disks and, if one is
> willing to pay for it, 300 GB disks. My thinking is to no longer follow
> the above recommendation and to use a new approach, such as:
> s1 = swap (size=1GB)
> s0 = / (size = disk capacity - 1GB)
>
> This way one gets rid of the guesswork of sizing the partitions.
>
> Any comments about this approach? Pros and cons?
That's what I do.
Cons:
Old machines (pre sun4u) cannot use large (>2GB) root filesystems.
Problems in one filesystem can affect all.
For very large disks, it may cause backup/restore issues.
Pros:
No need to worry about size allocation.
Much simpler.
If you're doing VxVM mirroring, no need to explicitly publish slices.
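For reference, that s0/s1 layout can be written with fmthard instead of
walking through format by hand. A sketch only; the sector counts are
illustrative for a 72 GB (143112192-sector) disk, and the device name is
made up:

    # vtoc.txt: slice  tag  flag  first_sector  sector_count
    # slice 1: 1 GB of swap at the front of the disk
    1  3  01  0        2097152
    # slice 0: root, everything else
    0  2  00  2097152  141015040
    # slice 2: conventional whole-disk "backup" slice
    2  5  01  0        143112192

    fmthard -s vtoc.txt /dev/rdsk/c1t0d0s2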
I used to hear tales that "if your root filesystem fills up, the
machine crashes". I've never seen that happen. I've been doing the
one-big-filesystem thing since 2.6. Yes, it might fill up if you're not
paying attention, but I've never had it crash or hang. Log in, clean
up, and everything was okay.
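(Finding what filled it is quick; Solaris du has a -d flag that keeps it
from crossing mount points, so something like this stays on the root
filesystem:)

    # Twenty largest files/directories on / only:
    du -akd / | sort -n | tail -20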
I might make a more application-specific partition for special needs.
Like a busy mailserver might get root, swap, and /var/spool/mail....
--
Darren Dunham [EMAIL PROTECTED]
Senior Technical Consultant TAOS http://www.taos.com/
Got some Dr Pepper? San Francisco, CA bay area
< This line left intentionally blank to confuse you. >
_______________________________________________
Solaris-Users mailing list
[email protected]
http://www.filibeto.org/mailman/listinfo/solaris-users