Forgot to mention earlier that we have two, now three, other large zpools in 
operation in our datacenter.

We had originally intended to use PureDisk; we had told their professional 
services what hardware their engineer was coming out to upgrade our system 
to....but they didn't mention until after he'd been here a while that the 6140 
we purchased for this upgrade wasn't supported for PureDisk.

Instead, we have two 10TB zpools on each of our T2000 NetBackup media servers.  
We recently added another media server, and it has a 12TB zpool from a 6180.  A 
fourth media server is planned, and I hear a 100+TB zpool on Hitachi storage is 
planned for it (apparently the only time other than NetApp that we've gone with 
non-Sun/Oracle storage...we took delivery of the Oracle-rebadged, Sun-badged 
Hitachi 9990 a week before they announced they were breaking up with Hitachi).  
This media server will be at a remote datacenter, to do local backups of the 
systems there and off-site backups for here...it's also a T2000.  Not sure what 
the fate of one of the 10TB zpools is.  One of the media servers is also the 
master, and Symantec recently recommended that we make it master-only....

The zpools are carved up into <2TB chunks for NetBackup.  Before the upgrade to 
T2000s/6140/Sol10, we had a 3511 on an 880/Sol9...and all of the 3511 was 
carved into <1TB LUNs and formatted UFS for disk staging.  The 3511 was junk 
though....the docs even said not to rely on it for important data.  But it was 
rammed down our throats by a former manager.  She had originally asked me to 
research a whole bunch of storage vendors for an affordable solution (cheaper 
and better than the 3511)....and when I went to present what I had found, I was 
informed that she had gone ahead and bought the 3511.  I had also talked to 
both the storage admin and the backup admin during my research, and that was 
the first they had heard of this project....


----- Original Message -----
> responding to excerpts:
> 
> $.02, you should have spent a little more to get VxVM + VxFS. A defrag
> would have fixed you right up and the snapshots, journaling and speed
> are all great.

Symantec/Veritas priced themselves out of our datacenter.  We continue to run 
3.x on the systems that still have it, but when we upgrade those systems they 
usually become ZFS.

We really like VxVM, though....it was really nice when we were moving from the 
9980 to the 9990.  No downtime.  With ZFS, we had to quiesce, send/receive, 
etc.  We keep asking when we'll be able to add and remove storage from a 
zpool; still no answer....
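
The quiesce-and-copy dance looked roughly like this (names are hypothetical; 
this is the general shape, not our exact runbook):

```shell
# 1. Baseline snapshot, copied while the old pool is still live:
zfs snapshot -r oldpool/data@base
zfs send -R oldpool/data@base | zfs receive -F newpool/data

# 2. Quiesce the application, then send only the changes since the baseline:
zfs snapshot -r oldpool/data@final
zfs send -R -i @base oldpool/data@final | zfs receive -F newpool/data

# 3. Repoint the application at newpool and resume.
```

The incremental send keeps the outage window down to however long step 2 
takes, but it's still an outage -- which is the point of the comparison with 
VxVM's online moves.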

> > Back when we ran our own email....we were running 2+TB ZFS
> > filesystems for mail spools (shared out over NFS to 3 MDA's and 8
> > imap/pop servers) The only hiccup was we had to turn ZIL off,
> > because ZFS insisting that our 9980 flush cache and blocking
> > everything until it did wasn't working too well (also ran into an
> > issue with IPF and NFS...it was a T2000 and IPF would pin one CPU
> > and cap our throughput....we knew we could do better from testing,
> > but put 40,000 users on it and couldn't get there...)
> 
> For small files over NFS, ZFS is sub-optimal when compared to UFS. A
> fast SSD ZIL makes the metadata performance rock for lots of small
> random IOs. For large-file, sequential performance, you don't want a
> flash SSD ZIL because the controller to the ZIL becomes the bottleneck
> for throughput storage. YMMV.
> 
These are things we learn from experience, I guess, and later by reading....
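
For the record, back then on Solaris 10 turning the ZIL off was a system-wide 
tunable in /etc/system, not something you could scope to one dataset:

```shell
# /etc/system fragment (Solaris 10 era) -- disables the ZIL globally.
# Affects every pool on the host and requires a reboot to take effect.
set zfs:zil_disable = 1
```

Later releases added a per-dataset `sync=disabled` property, which is a much 
finer-grained knob than the all-or-nothing tunable we had.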

The storage admin was trying strange things with raidz and SSDs before he 
followed the specific Oracle recommendation...he's still not happy; he's 
convinced for some reason that we need raidz across the SSDs even though the 
single-SSD configuration works.
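
Worth noting: raidz isn't a supported layout for separate log devices anyway; 
the most redundancy you can give a slog is a mirror.  Something like (device 
names hypothetical):

```shell
# Add a mirrored pair of SSDs as a separate intent log (slog) for pool "tank".
zpool add tank log mirror c2t0d0 c2t1d0
```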

Though I had once played with raidz versus (software) RAID5...to see if the 
variable block sizing of raidz would get me better performance than 
RAID5....but that was on Linux, so it wasn't enough to edge ahead.  Ended up 
going with RAID10 instead.
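
The Linux side of that test was plain md RAID, for what it's worth.  Roughly 
(device names hypothetical, filesystem choice era-appropriate):

```shell
# Software RAID10 across four disks with mdadm, then a filesystem on top.
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
mkfs -t ext3 /dev/md0
```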

-- 
Who: Lawrence K. Chen, P.Eng. - W0LKC - Senior Unix Systems Administrator
For: Enterprise Server Technologies (EST) -- & SafeZone Ally
Snail: Computing and Telecommunications Services (CTS)
Kansas State University, 109 East Stadium, Manhattan, KS 66506-3102
Phone: (785) 532-4916 - Fax: (785) 532-3515 - Email: [email protected]
Web: http://www-personal.ksu.edu/~lkchen - Where: 11 Hale Library
_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
 http://lopsa.org/
