Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Albert Shih
 On 19/10/2011 at 21:30:31+0700, Fajar A. Nugraha wrote:
  Sorry for cross-posting. I don't know which mailing list I should post this
  message to.
 
  I would like to use FreeBSD with ZFS on some Dell servers with some
  MD1200s (classic DAS).
 
  When we buy an MD1200 we need a PERC H800 RAID card on the server, so we have
  two options:
 
         1/ create one LV on the PERC H800 so the server sees a single volume,
         put the zpool on that volume, and let the hardware manage the RAID.
 
         2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
         and ZFS manage the RAID.
 
  Which one is the best solution?
 
 Neither.
 
 The best solution is to find a controller which can pass the disks through as
 JBOD (not encapsulated as virtual disks). Failing that, I'd go with (1)
 (though others might disagree).

Thanks. That's going to be very complicated...but I'm going to try.

 
 
  Any advice on the RAM I need on the server (currently one MD1200, so
  12x2 TB disks)?
 
 The more the better :)

Well, my employer is not so rich. 

It's the first time I'm going to use ZFS on FreeBSD in production (I use it on
my laptop, but that means nothing), so what, in your opinion, is the minimum
RAM I need? Is something like 48 GB enough?

 Just make sure you do NOT use dedup until you REALLY know what you're
 doing (which usually means buying lots of RAM and SSDs for L2ARC).

Ok. 

Regards.

JAS
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Telephone: 01 45 07 76 26 / 06 86 69 95 71
Heure locale/Local time:
Thu 20 Oct 2011 11:30:49 CEST


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Fajar A. Nugraha
On Thu, Oct 20, 2011 at 4:33 PM, Albert Shih albert.s...@obspm.fr wrote:
  Any advice on the RAM I need on the server (currently one MD1200, so
  12x2 TB disks)?

 The more the better :)

 Well, my employer is not so rich.

 It's the first time I'm going to use ZFS on FreeBSD in production (I use it
 on my laptop, but that means nothing), so what, in your opinion, is the
 minimum RAM I need? Is something like 48 GB enough?

If you don't use dedup (recommended), that should be more than enough.

If you use dedup, search the zfs-discuss archive for the calculation methods
that have been posted.
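
One rough approach (a sketch, assuming a pool named tank): simulate dedup on
the existing data and budget a few hundred bytes of RAM per unique block it
reports.

    # read-only dedup simulation; prints a DDT histogram for the pool
    zdb -S tank
    # rough rule of thumb: ~320 bytes of in-core DDT per unique block, so
    # 24 TB of 128 KB blocks (~190 million blocks) is on the order of 60 GB of DDT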

For comparison purposes, you could also look at Oracle's zfs storage
appliance configuration:
https://shop.oracle.com/pls/ostore/f?p=dstore:product:3479784507256153::NO:RP,6:P6_LPI,P6_PROD_HIER_ID:424445158091311922637762,114303924177622138569448

-- 
Fajar


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Albert Shih
 On 19/10/2011 at 10:52:07-0400, Krunal Desai wrote:
 On Wed, Oct 19, 2011 at 10:14 AM, Albert Shih albert.s...@obspm.fr wrote:
  When we buy an MD1200 we need a PERC H800 RAID card on the server, so we have
  two options:
 
         1/ create one LV on the PERC H800 so the server sees a single volume,
         put the zpool on that volume, and let the hardware manage the RAID.
 
         2/ create 12 LVs on the PERC H800 (so without RAID) and let FreeBSD
         and ZFS manage the RAID.
 
  Which one is the best solution?
 
  Any advice on the RAM I need on the server (currently one MD1200, so
  12x2 TB disks)?
 
 I know the PERC H200 can be flashed with IT firmware, making it in
 effect a dumb HBA perfect for ZFS usage. Perhaps the H800 has the
 same? (If not, can you get the machine configured with an H200?)

I'm not sure what you mean when you say «H200 flashed with IT firmware»?

 If that's not an option, I think Option 2 will work. My first ZFS
 server ran on a PERC 5/i, and I was forced to make 8 single-drive RAID
 0s in the PERC Option ROM, but Solaris did not seem to mind that.
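
Either way, once FreeBSD sees the twelve drives individually (true JBOD or
twelve single-drive volumes), letting ZFS manage the redundancy looks roughly
like this (a sketch; da1 through da12 are assumed device names):

    # one raidz2 vdev across all 12 disks: usable capacity of roughly 10 disks,
    # and the pool survives any two disk failures
    zpool create tank raidz2 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12
    zpool status tank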

OK.

I don't have a choice (too complex to explain, and it's meaningless here); I
can only buy from Dell at this moment.

On the Dell website I have the choice between:


SAS 6Gbps External Controller
PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe 
PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe 
PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 256MB Cache
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 512MB Cache
LSI2032 SCSI Internal PCIe Controller Card

I have no idea what the first thing is. But from what I understand, the best
solution is the first or the last?

Regards.

JAS

-- 
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Telephone: 01 45 07 76 26 / 06 86 69 95 71
Heure locale/Local time:
Thu 20 Oct 2011 11:44:39 CEST


Re: [zfs-discuss] Stream versions in Solaris 10.

2011-10-20 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Ian Collins
 
   I just tried sending from an oi151a system to a Solaris 10 backup
 server and the server barfed with
 
 zfs_receive: stream is unsupported version 17
 
 I can't find any documentation linking stream version to release, so
 does anyone know the Update 10 stream version?

What zpool versions are you using on each system?  More importantly, what
zpool versions are supported on each system?
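
A quick way to compare both (a sketch, assuming a pool named tank on each box):

    # versions currently in use
    zpool get version tank
    zfs get version tank
    # highest versions this system's software supports
    zpool upgrade -v
    zfs upgrade -v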



Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Dennis Glatting



On Thu, 20 Oct 2011, Fajar A. Nugraha wrote:


On Thu, Oct 20, 2011 at 7:56 AM, Dave Pooser dave@alfordmedia.com wrote:

On 10/19/11 9:14 AM, Albert Shih albert.s...@obspm.fr wrote:


When we buy an MD1200 we need a PERC H800 RAID card on the server


No, you need a card that includes 2 external x4 SFF8088 SAS connectors.
I'd recommend an LSI SAS 9200-8e HBA flashed with the IT firmware-- then
it presents the individual disks and ZFS can handle redundancy and
recovery.


Exactly, thanks for suggesting an exact controller model that can
present disks as JBOD.

With hardware RAID, you'd pretty much rely on the controller to behave 
nicely, which is why I suggested simply creating one big volume for zfs 
to use (so you pretty much only use features like snapshots, clones, etc., 
but don't use zfs's self-healing feature). Again, others might (and have) 
disagreed and suggest using a volume per individual disk (even when you're 
still relying on a hardware RAID controller). But ultimately there's no 
question that the best possible setup is to present the disks as JBOD and 
let zfs handle them directly.




I saw something interesting and different today, which I'll just throw 
out.


A buddy has an HP370 loaded with disks (not the only machine that provides 
these services, but rather the one he was showing off). The 370's disks are 
managed by the underlying hardware RAID controller, which he built as 
multiple RAID1 volumes.


ESXi 5.0 is loaded and in control of the volumes, some of which are 
partitioned. Consequently, his result is vendor-supported interfaces 
between disks, RAID controller, ESXi, and managing/reporting software.


The HP370 has multiple FreeNAS instances whose disks are the disks 
(volumes/partitions) from ESXi (all on the same physical hardware). The 
FreeNAS instances are partitioned according to their physical and logical 
function within the infrastructure, whether by physical or logical 
connections. The FreeNAS instances then serve their disks to consumers.


We have not done any performance testing. Generally, his NAS consumers are 
not I/O pigs, though we want the best performance possible (some consumers 
are over the WAN, which may make any HP/ESXi/FreeNAS performance issues 
moot). (I want to do some performance testing because, well, it may have 
significant amusement value.) A question we have is whether ZFS (ARC, maybe 
L2ARC) within FreeNAS is possible or would provide any value.





Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Damien Fleuriot


On 20 Oct 2011, at 05:24, Dennis Glatting free...@penx.com wrote:

 
 
 [...]
 
 We have not done any performance testing. Generally, his NAS consumers are
 not I/O pigs, though we want the best performance possible. A question we
 have is whether ZFS (ARC, maybe L2ARC) within FreeNAS is possible or would
 provide any value.
 


Possible, yes.
Provides value, somewhat.

You still get to use snapshots, compression, dedup...
You don't get ZFS self-healing, though, which IMO is a big loss.

Regarding the ARC, it totally depends on the kind of files you serve and the 
amount of RAM you have available.

If you keep serving huge, different files all the time, it won't help as much 
as when clients request the same small/avg files over and over again.
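
If you want to see whether the ARC is actually earning its keep on a
FreeNAS/FreeBSD box, the arcstats sysctls are a rough way to check (a sketch;
exact counter names can vary between releases):

    # current ARC size in bytes, and its configured ceiling
    sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
    # hit/miss counters; a high miss ratio suggests the ARC is not helping much
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses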


Re: [zfs-discuss] Growing CKSUM errors with no READ/WRITE errors

2011-10-20 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Jim Klimov

 new CKSUM errors
 are being found. There are zero READ or WRITE error counts,
 though.
 
 Should we be worried about replacing the ex-hotspare drive
 ASAP as well?

You should not be accumulating CKSUM errors.  There is something wrong.  I cannot 
say it's necessarily the fault of the drive, but it probably is.  When some 
threshold is reached, ZFS should mark the drive as faulted due to too many 
cksum errors.  I don't recommend waiting for it.
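
For instance (a sketch; the pool and device names here are hypothetical):

    # show per-device error counters and any files affected by checksum errors
    zpool status -v tank
    # proactively replace the suspect drive rather than waiting for ZFS to fault it
    zpool replace tank c0t3d0 c0t9d0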



Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Koopmann, Jan-Peter


 
 On the Dell website I have the choice between:
 
 
SAS 6Gbps External Controller
PERC H800 RAID Adapter for External JBOD, 512MB Cache, PCIe 
PERC H800 RAID Adapter for External JBOD, 512MB NV Cache, PCIe 
PERC H800 RAID Adapter for External JBOD, 1GB NV Cache, PCIe
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 256MB Cache
PERC 6/E SAS RAID Controller, 2x4 Connectors, External, PCIe 512MB Cache
LSI2032 SCSI Internal PCIe Controller Card
 

The first one is probably an LSI card. However, check with Dell (and if it is 
LSI, check exactly which card). Also check whether, with that controller, all 
individual drives in the chassis can be seen as JBOD.

Otherwise, consider buying the chassis without the controller and getting just 
the LSI card from someone else.
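
Whichever card you end up with, it is easy to verify from FreeBSD whether the
drives are really exposed individually (a sketch):

    # list what the controller presents to the OS; with true JBOD you should
    # see 12 separate disk devices (da0..da11 or similar), not one large volume
    camcontrol devlist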

Regards,
  JP


Re: [zfs-discuss] ZFS on Dell with FreeBSD

2011-10-20 Thread Krunal Desai
On Thu, Oct 20, 2011 at 5:49 AM, Albert Shih albert.s...@obspm.fr wrote:
 I'm not sure what you mean when you say «H200 flashed with IT firmware»?

IT is Initiator Target, and many LSI chips have a version of their
firmware available that will put them into this mode, which is
desirable for ZFS. This is opposed to other LSI firmware modes such as
IR, which I believe is RAID (and which you do not want). Since the H200
uses an LSI chip, you can download that firmware from LSI and flash it
to the card, turning it into an IT-mode card and a simple HBA.
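
The reflash itself is typically done with LSI's sas2flash utility from a DOS
or EFI boot environment; roughly (a sketch only: the firmware filenames are
examples, and you should record the card's SAS address and follow the LSI/Dell
documentation before erasing anything):

    # note the adapter and its SAS address before touching it
    sas2flash -listall
    # erase the existing (IR) flash, then write the IT firmware and BIOS images
    sas2flash -o -e 6
    sas2flash -o -f 2118it.bin -b mptsas2.rom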

--khd