Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-24 Thread Edmund White
On 11/24/12 5:51 PM, "Erik Trimble"  wrote:


>On 11/24/2012 5:17 AM, Edmund White wrote:
>> Heh, I wouldn't be using G5's for ZFS purposes now. G6 and better
>> ProLiants are a better deal for RAM capacity and CPU core count…
>>
>> Either way, I also use HP systems as the basis for my ZFS/Nexenta
>>storage
>> systems. Typically DL380's, since I have expansion room for either 16
>> drive bays, or for using them as a head unit to a D2700 or D2600 JBOD.
>>
>> The right replacement for the old DL320s storage server is the DL180 G6.
>> This model was available in a number of configurations, but the best
>> solutions for storage were the 2U 12-bay 3.5" model and the 2U 25-bay
>>2.5"
>> model. Both models have a SAS expander on the backplane, but with a nice
>> controller (LSI 9211-4i), make good ZFS storage servers.
>>
>
>Really?  I mean, sure, the G6 is beefier, but I can still get 8 cores of
>decently-fast CPU and 64GB of RAM in a G5, which, unless I'm doing Dedup
>and need a *stupid* amount of RAM, is more than sufficient for anything
>I've ever seen as a ZFS appliance.   I'd agree that the 64GB of RAM
>limit can be annoying if you really want to run a Super App Server + ZFS
>server on them, but they're so much more powerful than the X4500/X4540
>that I'd think they make an excellent drop-in replacement when paired
>with an MSA70, particularly on cost. The G6 is over double the cost of
>the G5.

My X4540 wasn't lacking in power... Just the annoyance of SATA drive
timeouts. Regardless, recommending a G5 ProLiant nowadays is a bad deal.
I've nearly replaced all of the G5 units I installed between 2006 and
2009. You're limited to 3Gb/s SAS, and the constrained (super $$$) RAM
supply is an issue.

>One thing that I do know about the G6 is that they have Nehalem CPUs
>(X5500-series), which support VT-d, the I/O virtualization
>technology from Intel, while the G5's X5400-series Harpertowns don't.
>If you're running zones on the system, it won't matter, but VirtualBox
>will care.

VT-d can be handy. As can HyperThreading, *moar* RAM, DirectPath, etc.

>
>Thanks for the DL180 link.  Once again, I think I'd go for the G5 rather
>than the G6 - it's roughly half the cost (or less, as the 2.5"-enabled
>G6s seem to be expensive), and these boxes make nice log servers, not
>app servers. The DL180 G5 seems to be pretty much a DL380 G5 with a
>different hard drive layout (12x3.5" rather than 8x2.5").

While there is a DL180 G5, the DL180 G6 is the right recommendation
because it fixes a lot of the ugly issues present in the G5. The 180 G5
platform is nothing like the DL380 G5. Different system boards, backplane,
management.

Not sure where you're looking, but eBay and a couple of the HP liquidators
are good sources for these systems. E.g. http://r.ebay.com/p0xrLu

-- 
Edmund White
ewwh...@mac.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-24 Thread Erik Trimble

On 11/24/2012 5:17 AM, Edmund White wrote:

Heh, I wouldn't be using G5's for ZFS purposes now. G6 and better
ProLiants are a better deal for RAM capacity and CPU core count…

Either way, I also use HP systems as the basis for my ZFS/Nexenta storage
systems. Typically DL380's, since I have expansion room for either 16
drive bays, or for using them as a head unit to a D2700 or D2600 JBOD.

The right replacement for the old DL320s storage server is the DL180 G6.
This model was available in a number of configurations, but the best
solutions for storage were the 2U 12-bay 3.5" model and the 2U 25-bay 2.5"
model. Both models have a SAS expander on the backplane, but with a nice
controller (LSI 9211-4i), make good ZFS storage servers.



Really?  I mean, sure, the G6 is beefier, but I can still get 8 cores of 
decently-fast CPU and 64GB of RAM in a G5, which, unless I'm doing Dedup 
and need a *stupid* amount of RAM, is more than sufficient for anything 
I've ever seen as a ZFS appliance.   I'd agree that the 64GB of RAM 
limit can be annoying if you really want to run a Super App Server + ZFS 
server on them, but they're so much more powerful than the X4500/X4540 
that I'd think they make an excellent drop-in replacement when paired 
with an MSA70, particularly on cost. The G6 is over double the cost of 
the G5.


One thing that I do know about the G6 is that they have Nehalem CPUs
(X5500-series), which support VT-d, the I/O virtualization technology
from Intel, while the G5's X5400-series Harpertowns don't.
If you're running zones on the system, it won't matter, but VirtualBox
will care.


---

Thanks for the DL180 link.  Once again, I think I'd go for the G5 rather 
than the G6 - it's roughly half the cost (or less, as the 2.5"-enabled 
G6s seem to be expensive), and these boxes make nice log servers, not 
app servers. The DL180 G5 seems to be pretty much a DL380 G5 with a
different hard drive layout (12x3.5" rather than 8x2.5").


---

One word here for everyone getting HP equipment:  you want to get the
Px1x or Px2x (e.g. P812) series of SmartArray controllers if you plan
on running SATA drives attached to them.  The older Px0x series only
supports SATA I (1.5Gb/s) and SAS 3Gb/s, which is a serious handicap if
you want to do SSDs on that channel. The newer series do SATA II (3Gb/s)
and SAS 6Gb/s.


http://h18004.www1.hp.com/products/servers/proliantstorage/arraycontrollers/index.html


-Erik



Re: [zfs-discuss] Woeful performance from an iSCSI pool

2012-11-24 Thread Ian Collins

Ian Collins wrote:

I look after a remote server that has two iSCSI pools.  The volumes for
each pool are sparse volumes, and a while back the target's storage
became full, causing weird and wonderful corruption issues until they
managed to free some space.

Since then, one pool has been reasonably OK, but the other has terrible
performance receiving snapshots.  Despite both iSCSI devices using the
same IP connection, iostat shows one with reasonable service times while
the other shows really high (up to 9 seconds) service times and 100%
busy.  This kills performance for snapshots with many random file
removals and additions.

I'm currently zero filling the bad pool to recover space on the target
storage to see if that improves matters.


It did. Maybe the volume's free space had become very fragmented.

There are a couple of lessons here:

1) When using a thin-provisioned volume for an iSCSI target, don't let
the pool backing the volume become full!


2) If the pool using the iSCSI target has a lot of churn, consider zero
filling the pool to flush out the free blocks.
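
For anyone wanting to try the zero-fill trick, a minimal sketch is below
(the mountpoint is a placeholder; on a real pool you would omit the size
cap so dd keeps writing until the pool fills):

```shell
# Zero-fill sketch; POOL_MNT is a hypothetical mountpoint, substitute your own.
# On a real pool, drop 'count' so dd writes zeros until ENOSPC, so the
# thin-provisioned backing store sees the reclaimed blocks as zeroed.
POOL_MNT=${POOL_MNT:-/tmp}
dd if=/dev/zero of="$POOL_MNT/zerofill" bs=1M count=64 2>/dev/null
sync                          # push the zeros out to the backing store
rm "$POOL_MNT/zerofill"       # then free the blocks again
echo "zero-fill pass complete"
```

One caveat: if compression is enabled on the pool, ZFS compresses all-zero
blocks away, so they may never reach the backing store at all.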


--
Ian.



Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-24 Thread Edmund White
Heh, I wouldn't be using G5's for ZFS purposes now. G6 and better
ProLiants are a better deal for RAM capacity and CPU core count…

Either way, I also use HP systems as the basis for my ZFS/Nexenta storage
systems. Typically DL380's, since I have expansion room for either 16
drive bays, or for using them as a head unit to a D2700 or D2600 JBOD.

The right replacement for the old DL320s storage server is the DL180 G6.
This model was available in a number of configurations, but the best
solutions for storage were the 2U 12-bay 3.5" model and the 2U 25-bay 2.5"
model. Both models have a SAS expander on the backplane, but with a nice
controller (LSI 9211-4i), make good ZFS storage servers.

-- 
Edmund White




On 11/23/12 8:51 PM, "Erik Trimble"  wrote:

>On 11/23/2012 5:50 AM, Edward Ned Harvey
>(opensolarisisdeadlongliveopensolaris) wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>>   
>>> I wonder if it would make weird sense to get the boxes, forfeit the
>>> cool-looking Fishworks, and install Solaris/OI/Nexenta/whatever to
>>> get the most flexibility and bang for a buck from the owned hardware...
>> This is what we decided to do at work, and this is the reason why.
>> But we didn't buy the appliance-branded boxes; we just bought normal
>>servers running solaris.
>>
>>
>
>I gave up and am now buying HP-branded hardware for running Solaris on
>it. Particularly if you get off-lease used hardware (for which, HP is
>still very happy to let you buy a HW support contract), it's cheap, and
>HP has a lot of Solaris drivers for their branded stuff. Their whole
>SmartArray line of adapters has much better Solaris driver coverage than
>the generic stuff or the equivalent IBM or Dell items.
>
>For instance, I just got a couple of DL380 G5 systems with dual
>Harpertown CPUs, fully loaded with 8 2.5" SAS drives and 32GB of RAM,
>for about $800 total.  You can attach their MSA30/50/70-series (or
>D2700-series, if you want new) as dumb JBODs via SAS, and the nice
>SmartArray controllers have 1GB of NVRAM, which is sufficient for many
>purposes, so you don't even have to cough up the dough for a nice ZIL SSD.
>
>HP even made a sweet little "appliance" thing that was designed for
>Windows, but happens to run Solaris really, really well.  The DL320s
>(the "s" is part of the model designation).   14x 3.5" SAS/SATA hot swap
>bays, a Xeon 3070 dual-core CPU, SmartArray controller, 2x gigabit NICs, LOM,
>and a free 1x PCI-E expansion slot. The only drawback is that it only
>takes up to 8GB of RAM.   It makes a *fabulous* little backup system for
>logs and stuff, and it's under about $2000 even after you splurge for
>1TB drives and an SSD for the thing.
>
>I am in the market for something newer than that, though. Anyone know
>what HP's using as a replacement for the DL320s?
>
>-Erik
>
>
