On Tue, 2009-03-03 at 13:47 -0500, Luke S Crawford wrote:
> Adam Levin <[email protected]> writes:
> 
> > I'd like to open a brief discussion on iSCSI. Recently, we've had two
> > vendors tell us to abandon iSCSI. We're not using it extensively -- just
> > investigating it for possible use in certain applications in the data
> > center and remote offices (primarily remote offices).
> 
> If I may hijack this thread, I'm quite interested in iSCSI as well, though
> it sounds like I'm a few notches down from you in terms of cost-per-gigabyte
> budget.
> 
> First, some info about me: I don't have much experience at all with iSCSI.
> (I've used fibre-channel, though; prgmr.com ran entirely on 1Gb fibre-channel
> for the first two years or so. Nice, and cheap, and pretty quick. But the
> cost per gigabyte is quite large, and it's complex enough that it's really
> easy for the new guy to take down everything.) I've moved entirely to
> mirrored local storage and I have no regrets. We haven't had data loss due
> to 'new guy error' since we switched, we've gained a lot in flexibility (we
> can put one server in its own location without worrying about proximity to
> a SAN), and we can now afford new parts with warranty.
> 
> Now, I'm starting to look at software iSCSI. I've got 6 of these massive
> SuperMicro SC833 cases in my front room right now (3U, 8 hot-swap SATA
> bays). My current setup is a dual-socket quad-core Opteron with 32GB RAM
> and a mirror of 1TB SATA drives. (Currently I use the SuperMicro 1U twin --
> the A+ board with 16 RAM slots each. I recommend it, if that is the sort
> of thing you need.) I then partition with Xen and rent out the resulting
> slices.
> 
> I want to do the same thing with these new cases (8 cores, 32GB RAM, 2x1TB
> disk), but then I want to fill the other 6 disk slots with 1.5TB Seagates
> and import them into an OpenSolaris DomU using Xen's pvscsi stuff. With
> OpenSolaris, I plan to raidz (or raidz2) it and export over iSCSI.
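For what it's worth, the raidz-plus-iSCSI-export step is only a few commands
on OpenSolaris. A sketch, assuming your six 1.5TB disks come up as c1t2d0
through c1t7d0 (device names here are made up -- check `format` for yours),
and using the old shareiscsi shortcut rather than full COMSTAR:

```shell
# Pool the six 1.5TB data disks as raidz2 (two disks' worth of parity).
# Device names are hypothetical -- substitute what `format` shows you.
zpool create tank raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

# Carve out a 500GB zvol for one client...
zfs create -V 500g tank/client1

# ...and export it as an iSCSI target. This is the OpenSolaris-era
# one-liner; newer builds replace it with COMSTAR (stmfadm/itadm).
zfs set shareiscsi=on tank/client1
```

The initiator side then just discovers the target over your gig link; the
zvol shows up as an ordinary block device you can hand to a DomU.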
> 
> (The big driver for ZFS is that I want to use cheap crap disks. Silent
> data corruption happens even on good disks, but it happens a lot more on
> cheap kit. It is my hope that ZFS will make up for the quality
> difference.)
> 
> The other unique bit about my situation is that I have more RAM than even
> my most spendy clients who do everything over NetApp SAN. Because I use
> Opterons (thus registered ECC, not FBDIMMs) and motherboards with 16 RAM
> slots, my standard setup with 8 cores and 32GB RAM sets me back around
> $2000, so I can throw RAM at my storage solution if that will help.
> 
> (If you don't have enough RAM, don't use Xen. Xen is awesome because of
> its strong partitioning, but other virtualization solutions are much
> better at being miserly with RAM. There is a cost to everything, though,
> and my experience has been that just paying for the RAM and using Xen
> is a lot cheaper than using those other solutions, once you count
> sysadmin time. RAM is cheap.)
> 
> The whole thing has got to be cheap. Amazon charges $0.10-$0.13 per gig,
> so I'd have to charge $0.04-$0.05 per gig or less to compete, so
> proprietary solutions are right out.
> 
> So yeah. My concern is that the only time I've used software iSCSI
> (granted, this was two years ago over 100M pipes, and I've got gig now)
> it was pretty much useless due to speed, even compared to old PATA
> drives.
> 
> I mean, it doesn't need to be super fast, but it needs to be usable.
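On the cost-per-gig math: raidz2 gives up two disks' worth of capacity to
parity, which matters when you're pricing against Amazon. A quick back-of-
the-envelope sketch (the $130-per-drive figure is my own assumption for a
1.5TB Seagate, not a number from your mail):

```python
def raidz2_usable_gb(n_disks: int, disk_gb: int) -> int:
    """Usable capacity of a raidz2 vdev: two disks' worth goes to parity."""
    assert n_disks >= 4, "raidz2 needs at least 4 disks"
    return (n_disks - 2) * disk_gb

# Six 1.5TB data disks per box, as described above.
usable = raidz2_usable_gb(6, 1500)           # 6000 GB usable
drive_cost = 130                             # assumed $/drive (hypothetical)
raw_cost_per_gb = 6 * drive_cost / usable    # disks only; no chassis/RAM

print(usable, round(raw_cost_per_gb, 2))     # 6000 0.13
```

So the disks alone run about $0.13 per raw usable gig up front; whether
that beats a $0.04-$0.05/gig price depends entirely on how many months you
amortize the hardware over and what the chassis and RAM add.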
Luke,

Your ideas may work, as I do something similar. I use a Silicon Mechanics
Storform (rebranded Supermicro) with 1TB drives running the Open-E DSS
software, which provides both NAS and iSCSI. I connect it to an 8-core,
32GB RAM 1U Supermicro running ESXi. The system runs nicely, is easy to
maintain, and is relatively low cost -- great for an SMB that needs 10-20
servers (in this case, a small software development firm).

cheers,

ski

-- 
"When we try to pick out anything by itself, we find it connected to the
entire universe"  John Muir

Chris "Ski" Kacoroski, [email protected], 206-501-9803 or ski98033 on
most IM services

_______________________________________________
Discuss mailing list
[email protected]
http://lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
http://lopsa.org/
