Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Eugen Leitl
On Tue, Feb 26, 2013 at 06:01:39PM +0100, Sašo Kiselkov wrote:
> On 02/26/2013 05:57 PM, Eugen Leitl wrote:
> > On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
> >> On Feb 26, 2013, at 12:44 AM, "Sašo Kiselkov" wrote:
> >>
> >> I'd also recommend that you go and subscribe to z...@lists.illumos.org, 
> >> since
> > 
> > I can't seem to find this list. Do you have an URL for that?
> > Mailman, hopefully?
> 
> http://wiki.illumos.org/display/illumos/illumos+Mailing+Lists

Oh, it's the illumos-zfs one. Had me confused.


Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Eugen Leitl
On Tue, Feb 26, 2013 at 06:51:08AM -0800, Gary Driggs wrote:
> On Feb 26, 2013, at 12:44 AM, "Sašo Kiselkov" wrote:
> 
> I'd also recommend that you go and subscribe to z...@lists.illumos.org, since

I can't seem to find this list. Do you have an URL for that?
Mailman, hopefully?

> this list is going to get shut down by Oracle next month.
> 
> 
> Whose description still reads, "everything ZFS running on illumos-based
> distributions."
> 
> -Gary


-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-04 Thread Eugen Leitl
On Fri, Jan 04, 2013 at 06:57:44PM -, Robert Milkowski wrote:
> 
> > Personally, I'd recommend putting a standard Solaris fdisk
> > partition on the drive and creating the two slices under that.
> 
> Why? In most cases giving zfs an entire disk is the best option.
> I wouldn't bother with any manual partitioning.

Caches are ok, but log needs a mirror, and I only have
two SSDs.
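
(For reference, one way to get both out of two SSDs is to slice each drive and mirror only the slog slices; the device names and the s0/s1 split below are only illustrative:

# zpool add tank0 log mirror c4t1d0s0 c4t2d0s0
# zpool add tank0 cache c4t1d0s1 c4t2d0s1

The log vdev ends up mirrored across both SSDs, while the two cache slices are simply striped, since L2ARC contents are disposable and need no redundancy.)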


Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-04 Thread Eugen Leitl
On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
> Eugen,

Thanks Phillip and others, most illuminating (pun intended).
 
>   Be aware that p0 corresponds to the entire disk, regardless of how it 
> is partitioned with fdisk.  The fdisk partitions are 1 - 4.  By using p0 for 
> log and p1 for cache, you could very well be writing to same location on the 
> SSD and corrupting things.

Does this mean that with 

Part        Tag   Flag     Cylinders        Size           Blocks
  0  unassigned    wm      0 -   668      4.00GB    (669/0/0)     8391936
  1  unassigned    wm    669 - 12455     70.50GB    (11787/0/0) 147856128
  2      backup    wu      0 - 12456     74.51GB    (12457/0/0) 156260608
  3  unassigned    wm      0                  0     (0/0/0)             0
  4  unassigned    wm      0                  0     (0/0/0)             0
  5  unassigned    wm      0                  0     (0/0/0)             0
  6  unassigned    wm      0                  0     (0/0/0)             0
  7  unassigned    wm      0                  0     (0/0/0)             0
  8        boot    wu      0 -     0      6.12MB    (1/0/0)         12544
  9  unassigned    wm      0                  0     (0/0/0)             0

/dev/dsk/c4t1d0p0 and /dev/dsk/c4t2d0p0 refer to the whole disk?
I thought the backup partition would be that, and that's p2?

>   Personally, I'd recommend putting a standard Solaris fdisk partition on 
> the drive and creating the two slices under that.

Can you please give me a rundown of the commands for that?
I seem to partition a Solaris disk every decade or so, so
I have no idea what I'm doing.
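
(For the archives, a minimal sketch of such a session; the device names are the ones from this thread, the slice sizes are only illustrative, and the format step is interactive:

# fdisk -B /dev/rdsk/c4t1d0p0
# format c4t1d0
  (in the partition menu: make s0 a few GB for the slog, s1 the rest for L2ARC, then label)
# prtvtoc /dev/rdsk/c4t1d0s2 | fmthard -s - /dev/rdsk/c4t2d0s2

fdisk -B claims the whole disk for a single Solaris partition, destroying whatever is on it, and the prtvtoc | fmthard line copies the same slice table to the second SSD. The two s0 slices can then go in as a mirrored log vdev and the two s1 slices as cache devices via zpool add.)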

I've redone the

# zpool remove tank0 /dev/dsk/c4t1d0p1 /dev/dsk/c4t2d0p1
# zpool remove tank0 mirror-1

so the pool is back to mice and pumpkins:

  pool: tank0
 state: ONLINE
  scan: scrub in progress since Fri Jan  4 16:55:12 2013
773G scanned out of 3.49T at 187M/s, 4h15m to go
0 repaired, 21.62% done
config:

NAME   STATE READ WRITE CKSUM
tank0  ONLINE   0 0 0
  raidz3-0 ONLINE   0 0 0
c3t5000C500098BE9DDd0  ONLINE   0 0 0
c3t5000C50009C72C48d0  ONLINE   0 0 0
c3t5000C50009C73968d0  ONLINE   0 0 0
c3t5000C5000FD2E794d0  ONLINE   0 0 0
c3t5000C5000FD37075d0  ONLINE   0 0 0
c3t5000C5000FD39D53d0  ONLINE   0 0 0
c3t5000C5000FD3BC10d0  ONLINE   0 0 0
c3t5000C5000FD3E8A7d0  ONLINE   0 0 0

errors: No known data errors



Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Eugen Leitl
On Thu, Jan 03, 2013 at 03:44:54PM -0600, Phillip Wagstrom wrote:
> 
> On Jan 3, 2013, at 3:33 PM, Eugen Leitl wrote:
> 
> > On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
> >> Eugen,
> >> 
> >>Be aware that p0 corresponds to the entire disk, regardless of how it 
> >> is partitioned with fdisk.  The fdisk partitions are 1 - 4.  By using p0 
> >> for log and p1 for cache, you could very well be writing to same location 
> >> on the SSD and corrupting things.
> > 
> > My partitions are like this:
> > 
> > partition> print
> > Current partition table (original):
> > Total disk cylinders available: 496 + 2 (reserved cylinders)
> > 
> > Part        Tag   Flag     Cylinders        Size           Blocks
> >   0  unassigned    wm      0                  0     (0/0/0)             0
> >   1  unassigned    wm      0                  0     (0/0/0)             0
> >   2      backup    wu      0 - 11709     70.04GB    (11710/0/0) 146890240
> >   3  unassigned    wm      0                  0     (0/0/0)             0
> >   4  unassigned    wm      0                  0     (0/0/0)             0
> >   5  unassigned    wm      0                  0     (0/0/0)             0
> >   6  unassigned    wm      0                  0     (0/0/0)             0
> >   7  unassigned    wm      0                  0     (0/0/0)             0
> >   8        boot    wu      0 -     0      6.12MB    (1/0/0)         12544
> >   9  unassigned    wm      0                  0     (0/0/0)             0
> > 
> > am I writing to the same location?
> 
>   Okay.  The above are the slices within the Solaris fdisk partition.  
> These would be the "s0" part of "c0t0d0s0".  These are modified via
> format under "partition".
>   p1 through p4 refers to the x86 fdisk partition which is administered 
> with the fdisk command or called from the format command via "fdisk"
> > 
> >>Personally, I'd recommend putting a standard Solaris fdisk partition on 
> >> the drive and creating the two slices under that.
> > 
> > Which command invocations would you use to do that, under Open Indiana?
> 
>   format -> partition then set the size of each there.

Thanks. Apparently, the napp-it web interface did not do what I asked it to.
I'll try to remove the cache and the log devices from the pool, and redo it
from the command line interface.


Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Eugen Leitl
On Thu, Jan 03, 2013 at 03:21:33PM -0600, Phillip Wagstrom wrote:
> Eugen,
> 
>   Be aware that p0 corresponds to the entire disk, regardless of how it 
> is partitioned with fdisk.  The fdisk partitions are 1 - 4.  By using p0 for 
> log and p1 for cache, you could very well be writing to same location on the 
> SSD and corrupting things.

My partitions are like this:

partition> print
Current partition table (original):
Total disk cylinders available: 496 + 2 (reserved cylinders)

Part        Tag   Flag     Cylinders        Size           Blocks
  0  unassigned    wm      0                  0     (0/0/0)             0
  1  unassigned    wm      0                  0     (0/0/0)             0
  2      backup    wu      0 - 11709     70.04GB    (11710/0/0) 146890240
  3  unassigned    wm      0                  0     (0/0/0)             0
  4  unassigned    wm      0                  0     (0/0/0)             0
  5  unassigned    wm      0                  0     (0/0/0)             0
  6  unassigned    wm      0                  0     (0/0/0)             0
  7  unassigned    wm      0                  0     (0/0/0)             0
  8        boot    wu      0 -     0      6.12MB    (1/0/0)         12544
  9  unassigned    wm      0                  0     (0/0/0)             0

am I writing to the same location?

>   Personally, I'd recommend putting a standard Solaris fdisk partition on 
> the drive and creating the two slices under that.

Which command invocations would you use to do that, under Open Indiana?


Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Eugen Leitl
On Thu, Jan 03, 2013 at 12:44:26PM -0800, Richard Elling wrote:
> 
> On Jan 3, 2013, at 12:33 PM, Eugen Leitl  wrote:
> 
> > On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
> >> 
> >> Happy $holidays,
> >> 
> >> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
> > 
> > Just a little update on the home NAS project.
> > 
> > I've set the pool sync to disabled, and added a couple
> > of
> > 
> >   8. c4t1d0 
> >  /pci@0,0/pci1462,7720@11/disk@1,0
> >   9. c4t2d0 
> >  /pci@0,0/pci1462,7720@11/disk@2,0
> 
> Setting sync=disabled means your log SSDs (slogs) will not be used.
>  -- richard

Whoops. Set it back to sync=standard. Will rerun the bonnie++ once
the scrub finishes, and post the results.
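
(For reference, checking and flipping the property back looks like:

# zfs get sync tank0
# zfs set sync=standard tank0

With sync=standard, synchronous writes go through the ZIL again, so the mirrored slog actually gets exercised by the rerun.)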


Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-03 Thread Eugen Leitl
On Sun, Dec 30, 2012 at 06:02:40PM +0100, Eugen Leitl wrote:
> 
> Happy $holidays,
> 
> I have a pool of 8x ST31000340AS on an LSI 8-port adapter as

Just a little update on the home NAS project.

I've set the pool sync to disabled, and added a couple
of

   8. c4t1d0 
  /pci@0,0/pci1462,7720@11/disk@1,0
   9. c4t2d0 
  /pci@0,0/pci1462,7720@11/disk@2,0

I had no clue what the partition names (created with the napp-it web
interface, a la 5% log and 95% cache of the 80 GByte SSDs) were, so I
did an iostat -xnp (columns: r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device):

    1.4    0.3    5.5    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0
    0.1    0.0    3.7    0.0  0.0  0.0    0.0    0.5   0   0 c4t1d0s2
    0.1    0.0    2.6    0.0  0.0  0.0    0.0    0.5   0   0 c4t1d0s8
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.2   0   0 c4t1d0p0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0p1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0p2
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0p3
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0p4
    1.2    0.3    1.4    0.0  0.0  0.0    0.0    0.0   0   0 c4t2d0
    0.0    0.0    0.6    0.0  0.0  0.0    0.0    0.4   0   0 c4t2d0s2
    0.0    0.0    0.7    0.0  0.0  0.0    0.0    0.4   0   0 c4t2d0s8
    0.1    0.0    0.0    0.0  0.0  0.0    0.0    0.2   0   0 c4t2d0p0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t2d0p1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t2d0p2

then issued

# zpool add tank0 cache /dev/dsk/c4t1d0p1 /dev/dsk/c4t2d0p1
# zpool add tank0 log mirror /dev/dsk/c4t1d0p0 /dev/dsk/c4t2d0p0

which resulted in 

root@oizfs:~# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Wed Jan  2 21:09:23 2013
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c4t3d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: tank0
 state: ONLINE
  scan: scrub repaired 0 in 5h17m with 0 errors on Wed Jan  2 17:53:20 2013
config:

NAME   STATE READ WRITE CKSUM
tank0  ONLINE   0 0 0
  raidz3-0 ONLINE   0 0 0
c3t5000C500098BE9DDd0  ONLINE   0 0 0
c3t5000C50009C72C48d0  ONLINE   0 0 0
c3t5000C50009C73968d0  ONLINE   0 0 0
c3t5000C5000FD2E794d0  ONLINE   0 0 0
c3t5000C5000FD37075d0  ONLINE   0 0 0
c3t5000C5000FD39D53d0  ONLINE   0 0 0
c3t5000C5000FD3BC10d0  ONLINE   0 0 0
c3t5000C5000FD3E8A7d0  ONLINE   0 0 0
logs
  mirror-1 ONLINE   0 0 0
c4t1d0p0   ONLINE   0 0 0
c4t2d0p0   ONLINE   0 0 0
cache
  c4t1d0p1 ONLINE   0 0 0
  c4t2d0p1 ONLINE   0 0 0

errors: No known data errors

which resulted in the following bonnie++ numbers.

befo':

NAME   SIZE   Bonnie  Date        File    Seq-Wr-Chr %CPU  Seq-Write %CPU  Seq-Rewr %CPU  Seq-Rd-Chr %CPU  Seq-Read %CPU  Rnd Seeks %CPU  Files  Seq-Create  Rnd-Create
rpool  59.5G  start   2012.12.28  15576M   24 MB/s    61    47 MB/s   18    40 MB/s  19    26 MB/s    98   273 MB/s  48   2657.2/s   25    16     12984/s     12058/s
tank0  7.25T  start   2012.12.29  15576M   35 MB/s    86   145 MB/s   48   109 MB/s  50    25 MB/s    97   291 MB/s  53    819.9/s   12    16     12634/s      9194/s

aftuh:

NAME   SIZE   Bonnie  Date        File    Seq-Wr-Chr %CPU  Seq-Write %CPU  Seq-Rewr %CPU  Seq-Rd-Chr %CPU  Seq-Read %CPU  Rnd Seeks %CPU  Files  Seq-Create  Rnd-Create
rpool  59.5G  start   2012.12.28  15576M   24 MB/s    61    47 MB/s   18    40 MB/s  19    26 MB/s    98   273 MB/s  48   2657.2/s   25    16     12984/s     12058/s
tank0  7.25T  start   2013.01.03  15576M   35 MB/s    86   149 MB/s   48   111 MB/s  50    26 MB/s    98   404 MB/s  76   1094.3/s   12    16     12601/s      9937/s

Does the layout make sense? Do the stats make sense, or is there still 
something very wrong
with that pool?

Thanks. 


Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-02 Thread Eugen Leitl
On Sun, Dec 30, 2012 at 10:40:39AM -0800, Richard Elling wrote:
> On Dec 30, 2012, at 9:02 AM, Eugen Leitl  wrote:

> > The system is a MSI E350DM-E33 with 8 GByte PC1333 DDR3
> > memory, no ECC. All the systems have Intel NICs with mtu 9000
> > enabled, including all switches in the path.
> 
> Does it work faster with the default MTU?

No, it was even slower; that's why I went from 1500 to 9000.
I estimate it brought ~20 MByte/s more peak throughput on Windows 7 64-bit CIFS.

> Also check for retrans and errors, using the usual network performance
> debugging checks.

Wireshark or tcpdump on Linux/Windows? What would
you suggest for OI?
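
(A few starting points that exist on stock OpenIndiana; the interface name is hypothetical:

# netstat -s | grep -i retrans          TCP retransmission counters
# dladm show-link -s e1000g0            per-link packet/byte/error counters
# snoop -d e1000g0 -o /tmp/nfs.cap      raw capture, readable later with snoop -i or Wireshark
)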
 
> > P.S. Not sure whether this is pathological, but the system
> > does produce occasional soft errors like e.g. dmesg
> 
> More likely these are due to SMART commands not being properly handled

Otherwise, napp-it reports full SMART support.

> for SATA devices. They are harmless.


[zfs-discuss] poor CIFS and NFS performance

2012-12-30 Thread Eugen Leitl

Happy $holidays,

I have a pool of 8x ST31000340AS on an LSI 8-port adapter as
a raidz3 (no compression or dedup) with reasonable bonnie++
1.03 values, e.g. 145 MByte/s Seq-Write @ 48% CPU and 291 MByte/s
Seq-Read @ 53% CPU. It scrubs at 230+ MByte/s with reasonable
system load. No hybrid pools yet. This is the latest beta napp-it
on an OpenIndiana 151a5 server, living on a dedicated 64 GByte SSD.

The system is an MSI E350DM-E33 with 8 GByte PC1333 DDR3
memory, no ECC. All the systems have Intel NICs, and MTU 9000 is
enabled everywhere, including all switches in the path.

My problem is pretty poor network throughput. An NFS or CIFS
mount on Ubuntu 12.04 64-bit (MTU 9000) reads at about 23 MByte/s.
Windows 7 64-bit (also jumbo frames) reads at about 65 MByte/s.
The highest transfer speed on Windows just touches 90 MByte/s
before falling back to the usual 60-70 MByte/s.

I kinda can live with the above values, but I have a feeling
the setup should be able to saturate GBit Ethernet with
large file transfers, especially on Linux (20 MByte/s
is nothing to write home about).

Does anyone have any suggestions on how to debug/optimize
throughput?
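
(One way to narrow this down is to separate disk from network: read a big file locally on the server, e.g.

# dd if=/tank0/somebigfile of=/dev/null bs=1024k

and then measure the raw TCP path with something like iperf between client and server, if it can be installed. A fast local read plus near-wire-speed TCP would point at the NFS/CIFS layer rather than at the pool; the file name above is hypothetical.)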

Thanks, and happy 2013.

P.S. Not sure whether this is pathological, but the system
does produce occasional soft errors like e.g. dmesg

Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0  
   Error Block: 0
Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA 
   Serial Number:  
Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:00 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (), ASCQ: 0x1d, FRU: 0x0
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.warning] WARNING: 
/scsi_vhci/disk@g5000c50009c72c48 (sd9):
Dec 30 17:45:01 oizfs   Error for Command: Error Level: 
Recovered
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0  
   Error Block: 0
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA 
   Serial Number:  
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (), ASCQ: 0x1d, FRU: 0x0
Dec 30 17:45:01 oizfs pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) 
instance 0 irq 0xe vector 0x45 ioapic 0x3 intin 0xe is bound to cpu 0
Dec 30 17:45:01 oizfs pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) 
instance 0 irq 0xe vector 0x45 ioapic 0x3 intin 0xe is bound to cpu 1
Dec 30 17:45:01 oizfs pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) 
instance 0 irq 0xe vector 0x45 ioapic 0x3 intin 0xe is bound to cpu 0
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.warning] WARNING: 
/scsi_vhci/disk@g5000c50009c73968 (sd4):
Dec 30 17:45:01 oizfs   Error for Command: Error Level: 
Recovered
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0  
   Error Block: 0
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA 
   Serial Number:  
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:01 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (), ASCQ: 0x1d, FRU: 0x0
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.warning] WARNING: 
/scsi_vhci/disk@g5000c500098be9dd (sd10):
Dec 30 17:45:03 oizfs   Error for Command: Error Level: 
Recovered
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0  
   Error Block: 0
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA 
   Serial Number:  
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:03 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (), ASCQ: 0x1d, FRU: 0x0
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.warning] WARNING: 
/pci@0,0/pci1462,7720@11/disk@3,0 (sd8):
Dec 30 17:45:04 oizfs   Error for Command: Error Level: 
Recovered
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] Requested Block: 0  
   Error Block: 0
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] Vendor: ATA 
   Serial Number:  
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] Sense Key: Soft_Error
Dec 30 17:45:04 oizfs scsi: [ID 107833 kern.notice] ASC: 0x0 (no additional 
sense info), ASCQ: 0x0, FRU: 0x0



Re: [zfs-discuss] Sonnet Tempo SSD supported?

2012-12-04 Thread Eugen Leitl
On Tue, Dec 04, 2012 at 11:07:17AM +0100, Eugen Leitl wrote:
> On Mon, Dec 03, 2012 at 06:28:17PM -0500, Peter Tripp wrote:
> > HI Eugen,
> > 
> > Whether it's compatible entirely depends on the chipset of the SATA 
> > controller.
> 
> This is what I was trying to find out. I guess I just have to 
> test it empirically.
>  
> > Basically that card is just a dual port 6gbps PCIe SATA controller with the 
> > space to mount one ($149) or two ($299) 2.5inch disks.  Sonnet, a mac 
> > focused company, offers it as a way to better utilize existing Mac Pros 
> > already in the field without an external box.  Mac Pros only have 3gbps 
> > SATA2 and a 4x3.5inch drive backplane, but nearly all have a free 
> > full-length PCIe slot.  This product only makes sense if you're trying to 
> > run OpenIndiana on a Mac Pro, which in my experience is more trouble than 
> > it's worth, but to each their own I guess. 
> 
> My application is to stick 2x SSDs into a SunFire X2100 M2,
> without resorting to splicing into power cables and mounting
> SSD in random location with double-side sticky tape. Depending
> on hardware support I'll either run OpenIndiana or Linux
> with a zfs hybrid pool (2x SATA drives as mirrored pool).
>  
> > If you can confirm the chipset you might get lucky and have it be a 
> > supported chip.  The big chip is labelled PLX, but I can't read the 
> > markings and wasn't aware PLX made any PCIe SATA controllers (PCIe and 
> > USB/SATA bridges sure, but not straight controllers) so that may not even 
> > be the chip we care about. 
> > http://www.profil-marketing.com/uploads/tx_lipresscenter/Sonnet_Tempo_SSD_Pro_01.jpg
> 
> Either way I'll know the hardware support situation soon
> enough.

I see a Marvell 88SE9182 on that Sonnet.


Re: [zfs-discuss] Sonnet Tempo SSD supported?

2012-12-04 Thread Eugen Leitl
On Tue, Dec 04, 2012 at 03:38:07AM -0800, Gary Driggs wrote:
> On Dec 4, 2012, Eugen Leitl wrote:
> 
> > Either way I'll know the hardware support situation soon
> > enough.
> 
> Have you tried contacting Sonnet?

No, but I did some digging. It *might* be a Marvell 88SX7042,
which would then be supported by Linux, but not by Solaris:
http://www.nexentastor.org/boards/1/topics/2383


Re: [zfs-discuss] Sonnet Tempo SSD supported?

2012-12-04 Thread Eugen Leitl
On Mon, Dec 03, 2012 at 06:28:17PM -0500, Peter Tripp wrote:
> HI Eugen,
> 
> Whether it's compatible entirely depends on the chipset of the SATA 
> controller.

This is what I was trying to find out. I guess I just have to 
test it empirically.
 
> Basically that card is just a dual port 6gbps PCIe SATA controller with the 
> space to mount one ($149) or two ($299) 2.5inch disks.  Sonnet, a mac focused 
> company, offers it as a way to better utilize existing Mac Pros already in 
> the field without an external box.  Mac Pros only have 3gbps SATA2 and a 
> 4x3.5inch drive backplane, but nearly all have a free full-length PCIe slot.  
> This product only makes sense if you're trying to run OpenIndiana on a Mac 
> Pro, which in my experience is more trouble than it's worth, but to each 
> their own I guess. 

My application is to stick 2x SSDs into a SunFire X2100 M2,
without resorting to splicing into power cables and mounting
the SSDs in random locations with double-sided sticky tape. Depending
on hardware support I'll either run OpenIndiana or Linux
with a zfs hybrid pool (2x SATA drives as mirrored pool).
 
> If you can confirm the chipset you might get lucky and have it be a supported 
> chip.  The big chip is labelled PLX, but I can't read the markings and wasn't 
> aware PLX made any PCIe SATA controllers (PCIe and USB/SATA bridges sure, but 
> not straight controllers) so that may not even be the chip we care about. 
> http://www.profil-marketing.com/uploads/tx_lipresscenter/Sonnet_Tempo_SSD_Pro_01.jpg

Either way I'll know the hardware support situation soon
enough.


[zfs-discuss] Sonnet Tempo SSD supported?

2012-12-03 Thread Eugen Leitl

Anyone here using http://www.sonnettech.com/product/tempossd.html
with a zfs-capable OS? Is e.g. OpenIndiana supported? 

Thanks.


Re: [zfs-discuss] zfs on SunFire X2100M2 with hybrid pools

2012-11-27 Thread Eugen Leitl
On Tue, Nov 27, 2012 at 12:12:43PM +, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Eugen Leitl
> > 
> > can I make e.g. LSI SAS3442E
> > directly do SSD caching (it says something about CacheCade,
> > but I'm not sure it's an OS-side driver thing), as it
> > is supposed to boost IOPS? Unlikely shot, but probably
> > somebody here would know.
> 
> Depending on the type of work you will be doing, the best performance thing 
> you could do is to disable zil (zfs set sync=disabled) and use SSD's for 
> cache.  But don't go *crazy* adding SSD's for cache, because they still have 
> some in-memory footprint.  If you have 8G of ram and 80G SSD's, maybe just 
> use one of them for cache, and let the other 3 do absolutely nothing.  Better 
> yet, make your OS on a pair of SSD mirror, then use pair of HDD mirror for 
> storagepool, and one SSD for cache.  Then you have one SSD unused, which you 
> could optionally add as dedicated log device to your storagepool.  There are 
> specific situations where it's ok or not ok to disable zil - look around and 
> ask here if you have any confusion about it.  
> 
> Don't do redundancy in hardware.  Let ZFS handle it.

Thanks. I'll try doing that, and see how it works out.
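
(With the OS already on the two-SSD mirror, the rest of the layout Edward describes would look roughly like this; device names are hypothetical:

# zpool create tank mirror c0t2d0 c0t3d0       data pool on the two HDDs
# zpool add tank cache c0t4d0                  one SSD as L2ARC
# zpool add tank log c0t5d0                    optional: remaining SSD as dedicated slog

Note that zfs set sync=disabled tank bypasses the slog entirely, so it is one or the other, depending on how much synchronous-write safety the workload needs.)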


[zfs-discuss] zfs on SunFire X2100M2 with hybrid pools

2012-11-26 Thread Eugen Leitl

Dear internets,

I've got an old SunFire X2100M2 with 6-8 GBytes ECC RAM, which
I wanted to put into use with Linux, using the Linux
VServer patch (an analog of zones), and 2x 2 TByte
nearline (WD RE4) drives. It occurred to me that the
1U case had enough space to add some SSDs (e.g.
2-4 80 GByte Intel SSDs), and the power supply
should be able to take both the 2x SATA HDs as well
as 2-4 SATA SSDs, though I would need to splice into
existing power cables.

I also have a few LSI and an IBM M1015 (potentially 
reflashable to IT mode) adapters, so having enough ports
is less an issue (I'll probably use an LSI
with 4x SAS/SATA for 4x SSD, and keep the onboard SATA
for HDs, or use each 2x for SSD and HD).

There are multiple possible configurations for this,
some using Linux (root fs on a RAID10, /home on
RAID 1), some using zfs. Now, zfs on Linux probably wouldn't
do hybrid zfs pools (would it?), and it probably wouldn't
be stable enough for production. Right?

Assuming I won't have to compromise CPU performance
(it's an anemic Opteron 1210 1.8 GHz, dual core, after all, and
it will probably run several tens of zones in production) or
sacrifice data integrity, can I make e.g. an LSI SAS3442E
directly do SSD caching (it says something about CacheCade,
but I'm not sure whether that's an OS-side driver thing), as it
is supposed to boost IOPS? An unlikely shot, but probably
somebody here would know.

If not, should I go directly OpenIndiana, and use
a hybrid pool?

Should I use all 4x SATA SSDs and 2x SATA HDs to
do a hybrid pool, or would this be overkill?
The SSDs are Intel SSDSA2M080G2GC 80 GByte, so no speed demons
either. However, they've seen some wear and tear and
none of them has keeled over yet. So I think they'll
be good for a few more years.

How would you lay out the pool with OpenIndiana
in either case to maximize IOPS and minimize CPU
load (assuming it's an issue)? I wouldn't mind
to trade 1/3rd to 1/2 of CPU due to zfs load, if
I can get decent IOPS.

This is terribly specific, I know, but I figured
somebody had tried something like that with an X2100 M2,
it being a rather popular Sun (RIP) Solaris box at
the time. Or not.

Thanks muchly, in any case.

-- Eugen


Re: [zfs-discuss] mixing WD20EFRX and WD2002FYPS in one pool

2012-11-21 Thread Eugen Leitl
On Wed, Nov 21, 2012 at 08:31:23AM -0700, Jan Owoc wrote:
> HI Eugen,
> 
> 
> On Wed, Nov 21, 2012 at 3:45 AM, Eugen Leitl  wrote:
> > Secondly, has anyone managed to run OpenIndiana on an AMD E-350
> > (MSI E350DM-E33)? If it doesn't work, my only options would
> > be all-in-one with ESXi, FreeNAS, or zfs on Linux.
> 
> I'm currently running OI 151a7 on an AMD E-350 system (installed as
> 151a1, I think). I think it's the ASUS E35M-I [1]. I use it as a NAS,
> so I only know that the SATA ports, USB port and network ports work -
> sound, video acceleration, etc., are untested.

Thanks, this is great to know. The box will be headless, and
run in text-only mode. I have an Intel NIC in there, and don't
intend to use the Realtek port for anything serious.
I intend to boot off USB flash stick, and runn OI with napp-it.
8 GByte RAM, unfortunately not ECC, but it will do for a secondary 
SOHO NAS, as data is largely read-only.
 
> [1] http://www.asus.com/Motherboards/AMD_CPU_on_Board/E35M1I/


[zfs-discuss] mixing WD20EFRX and WD2002FYPS in one pool

2012-11-21 Thread Eugen Leitl
Hi,

after a flaky 8-drive Linux RAID10 just shredded about 2 TByte worth
of my data at home (conveniently, just before I could make
a backup), I've decided to go both full redundancy and
all-zfs at home.

A couple of questions: is there a way to make the WD20EFRX (2 TByte, 4k
sectors) and the WD2002FYPS (4k internally, reported as 512 bytes?)
work well together on a current OpenIndiana? Which parameters
do I need to give the zfs pool with regard to alignment?

Or should I just give up, and go 4x WD20EFRX?
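
(For what it's worth, the alignment knob is the pool's ashift: 4k-sector drives want ashift=12, and mixing them with 512-byte-reporting drives in one vdev just means the whole vdev should be created at ashift=12. ZFS-on-Linux-style OpenZFS can force this at creation time, e.g.

# zpool create -o ashift=12 tank mirror c0t0d0 c0t1d0

with hypothetical device names; on OpenIndiana of this vintage there is no such zpool option, and the usual workaround is an sd.conf physical-block-size override for drives that misreport their sector size.)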

Secondly, has anyone managed to run OpenIndiana on an AMD E-350
(MSI E350DM-E33)? If it doesn't work, my only options would
be all-in-one with ESXi, FreeNAS, or zfs on Linux.

Thanks,
-- Eugen


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-09 Thread Eugen Leitl
On Thu, Nov 08, 2012 at 04:57:21AM +, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:

> Yes you can, with the help of Dell, install OMSA to get the web interface
> to manage the PERC.  But it's a pain, and there is no equivalent option for
> most HBA's.  Specifcally, on my systems with 3ware, I simply installed the
> solaris 3ware utility to manage the HBA.  Which would not be possible on
> ESXi.  This is important because the systems are in a remote datacenter, and
> it's the only way to check for red blinking lights on the hard drives.  ;-)

I thought most IPMI implementations came with full KVM, plus SNMP and some built-in ssh.


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Eugen Leitl
On Wed, Nov 07, 2012 at 01:33:41PM +0100, Sašo Kiselkov wrote:
> On 11/07/2012 01:16 PM, Eugen Leitl wrote:
> > I'm very interested, as I'm currently working on an all-in-one with
> > ESXi (using N40L for prototype and zfs send target, and a Supermicro
> > ESXi box for production with guests, all booted from USB internally
> > and zfs snapshot/send source).
> 
> Well, seeing as Illumos KVM requires an Intel CPU with VT-x and EPT
> support, the N40L won't be usable for that test.

Ok; I know it does support ESXi and disk pass-through though,
and even the onboard NIC (though I'll add an Intel NIC) with
the HP patched ESXi.
 
> > Why would you advise against the free ESXi, booted from USB, assuming
> > your hardware has disk pass-through? The UI is quite friendly, and it's
> > easy to deploy guests across the network.
> 
> Several reasons:
> 
> 1) Zones - much cheaper VMs than is possible with ESXi and at 100%
>native bare-metal speed.

I use Linux VServer for that, currently. It wouldn't fit this
particular application though, as the needs of the VM guests are
highly heterogeneous, including plenty of Windows (uck, ptui).

> 2) Crossbow integrated straight in (VNICs, virtual switches, IPF, etc.)
>- no need for additional firewall boxes or VMs

ESXi does this as well, and for this (corporate) application the
firewall is a rented service, administered by the hoster. For my
personal small-business needs I have a pfSense dual-machine cluster,
with fully redundant hardware and the ability to deal with up to
1 GBit/s data rates.

> 3) Tight ZFS integration with the possibility to do VM/zone snapshots,
>replication, etc.

Well, I get this with an NFS-export of an all-in-one as well, with
the exception of zones. But, I cannot use zones for this anyway.
 
> In general, for me Illumos is just a tighter package with many features
> built-in for which you'd need dedicated hardware in an ESX(i)
> deployment. ESX(i) makes sense if you like GUIs for setting things up

In a corporate environment, I need to create systems which play well
with external customers and can be used by others. GUIs are actually
very useful for less technical co-workers.

> and fitting inside neat use-cases and for that it might be great. But if
> you need to step out of line at any point, you're pretty much out of
> luck. I'm not saying it's good or bad, I just mean that for me and my
> needs, Illumos is a much better hypervisor than VMware.

Thanks!


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Eugen Leitl
On Wed, Nov 07, 2012 at 12:58:04PM +0100, Sašo Kiselkov wrote:
> On 11/07/2012 12:39 PM, Tiernan OToole wrote:
> > Morning all...
> > 
> > I have a Dedicated server in a data center in Germany, and it has 2 3TB
> > drives, but only software RAID. I have got them to install VMWare ESXi and
> > so far everything is going ok... I have the 2 drives as standard data
> > stores...
> > 
> > But i am paranoid... So, i installed Nexenta as a VM, gave it a small disk
> > to boot off and 2 1Tb disks on separate physical drives... I have created a
> > mirror pool and shared it with VMWare over NFS and copied my ISOs to this
> > share...
> > 
> > So, 2 questions:
> > 
> > 1: If you where given the same hardware, what would you do? (RAID card is
> > an extra EUR30 or so a month, which i don't really want to spend, but
> > could, if needs be...)

A RAID card will only hurt you with an all-in-one. Do you have hardware passthrough
with Hetzner (I presume you're with them, from the sound of it) on ESXi?

> > 2: should i mirror the boot drive for the VM?
> 
> If it were my money, I'd throw ESXi out the window and use Illumos for
> the hypervisor as well. You can use KVM for full virtualization and
> zones for light-weight. Plus, you'll be able to set up a ZFS mirror on

I'm very interested, as I'm currently working on an all-in-one with
ESXi (using N40L for prototype and zfs send target, and a Supermicro
ESXi box for production with guests, all booted from USB internally
and zfs snapshot/send source).

Why would you advise against the free ESXi, booted from USB, assuming
your hardware has disk pass-through? The UI is quite friendly, and it's
easy to deploy guests across the network.

> the data pair and set copies=2 on the rpool if you don't have another
> disk to complete the rpool with it. Another possibility, though somewhat
> convoluted, is to slice up the disks into two parts: a small OS part and
> a large datastore part (e.g. 100GB for the OS, 900GB for the datastore).
> Then simply put the OS part in a three-way mirror rpool and the
> datastore part in a raidz (plus do a grubinstall on all disks). That
> way, you'll be able to sustain a single-disk failure of any one of the
> three disks.


[zfs-discuss] [Freenas-announce] FreeNAS 8.3.0-RELEASE

2012-10-26 Thread Eugen Leitl
- Forwarded message from Josh Paetzel  -

From: Josh Paetzel 
Date: Fri, 26 Oct 2012 09:55:22 -0700
To: freenas-annou...@lists.sourceforge.net
Subject: [Freenas-announce] FreeNAS 8.3.0-RELEASE
User-Agent: Mozilla/5.0 (X11; FreeBSD amd64;
rv:13.0) Gecko/20120621 Thunderbird/13.0.1

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

The FreeNAS development team is pleased to announce the immediate
availability of FreeNAS 8.3.0-RELEASE.

Images and plugins can be downloaded from the following site:

http://sourceforge.net/projects/freenas/files/FreeNAS-8.3.0/RELEASE/

FreeNAS 8.3.0 is based on FreeBSD 8.3 with version 28 of the ZFS
filesystem.  This is a major milestone in FreeNAS development, bringing
in the plugin system with ZFS version 28.  Development of the FreeNAS 8.2
branch has come to a halt, as both ZFS version 15 as well as FreeBSD 8.2
are no longer supported.

There have been no major changes between 8.3.0-RC1 and RELEASE, mostly
bugfixes and minor usability improvements to the GUI.  See the
release notes for a complete list:

http://sourceforge.net/projects/freenas/files/FreeNAS-8.3.0/RC1/README/download

The bug tracker for FreeNAS is available at http://support.freenas.org

Discussion about FreeNAS occurs in the FreeNAS forums, located at:
http://forums.freenas.org as well as in the official FreeNAS IRC channel
on FreeNode in #freenas.

- -- 
Thanks,

Josh Paetzel
Director of IT, iXsystems
Servers For Open Source  http://www.ixsystems.com

-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.19 (FreeBSD)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQEcBAEBAgAGBQJQisB6AAoJECFKQTJR8TNdPccH/0BWu5Soil8eurr7088azhfI
qm1Euk/W2y0mvg7cC4PvzGclX8S7Lsd40fVYlr7u5igtdtbbbG9mR5SuonzG9IZY
rqRwuNMdo67RUwSMXPvMG9uGx7FxrtOlrAkvQqFxpSl8TMKpfW93tgkKpDdaoeUz
SSFaPon18hyKd/Ic9ZD/7I10d2t/pwfgbJ+XxljU/8pQrWtmyQZwrtm4GAOlogQ4
vrDa54HsvvwWx+CTAtlilSdbxbnYMbePzfZ2xMHP9LH/Zf58K3ok83j28LngffEX
Bb0CjViXHXW+zOpK0LYsIIWC1igqwS05y5QAlFoc1P9tv0j5wfj9jhrn17A9lV8=
=W0RS
-END PGP SIGNATURE-


- End forwarded message -
-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


[zfs-discuss] looking for slides for basic zfs intro

2012-10-19 Thread Eugen Leitl

Hi,

I would like to give a short talk at my organisation in order
to sell them on zfs in general, and on zfs-all-in-one and
zfs as remote backup (zfs send).

Does anyone have a short set of presentation slides or maybe 
a short video I could pillage for that purpose? Thanks.

-- Eugen


[zfs-discuss] OT: does the LSI 9211-8i fit into the HP N40L?

2012-09-19 Thread Eugen Leitl

Hi again,

thanks for all the replies to the all-in-one-with-ESXi thread; it was
most illuminating. I will use this setup at my day job.

Now for a slight variation on the theme: an N40L running ESXi with raw drive
passthrough, and OpenIndiana/napp-it exporting the underlying devices
via NFS or iSCSI. This particular setup is for a home VMware lab, using
spare hardware parts I have around.

I'm trying to do something like 

http://forums.servethehome.com/showthread.php?464-HP-MicroServer-HBA-controller-recommendation

(using 4x SSD in http://www.sharkoon.com/?q=en/node/1824 and
4x SATA in the internal drive cage), and from that thread alone I
can't quite tell whether it would fit physically.

Anyone running an LSI 9211-8i in that little box without a 90-degree
Mini-SAS cable? If you do indeed need a 90-degree Mini-SAS, do you have a
part number for me, perchance?

Would you use the 9211-8i for all 8 SATA devices internally,
disregarding the onboard mini-SAS to SATA chipset? Or use the 4x onboard
SATA to avoid saturating the port? I might or might not
use 2 TByte SAS instead of SATA, assuming I can get those
cheaply. It's 2-3 TByte SATA disks otherwise.

In the case of 4x SSD (Intel 1st or 2nd gen, ~80 GByte), would you go for
a 2x mirror for L2ARC and a 2x mirror for ZIL? Some other configuration?
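
(One note: zpool will refuse a mirrored cache vdev, since L2ARC contents are disposable; only the log can be mirrored. With four SSDs a typical split would therefore be, device names hypothetical:

# zpool add tank log mirror c2t0d0 c2t1d0
# zpool add tank cache c2t2d0 c2t3d0

i.e. two SSDs as a mirrored slog and two as independent, striped cache devices.)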

Thanks!

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


[zfs-discuss] all in one server

2012-09-18 Thread Eugen Leitl

I'm currently thinking about rolling a variant of

http://www.napp-it.org/napp-it/all-in-one/index_en.html

with remote backup (via snapshot and send) to 2-3
other (HP N40L-based) zfs boxes for production in
our organisation. The systems themselves would
be either Dell or Supermicro (latter with ZIL/L2ARC
on SSD, plus SAS disks (pools as mirrors) all with 
hardware pass-through).

The idea is to use zfs for data integrity and
backup via snapshots (especially important
data will also be backed up to conventional DLT
tapes).

Before I test this --

Is anyone using this in production? Any caveats?

Can I actually have a year's worth of snapshots in
zfs without too much performance degradation?
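
(Large snapshot counts mostly cost space and listing time rather than steady-state performance. A simple manual rotation, dataset and snapshot names hypothetical, would be:

# zfs snapshot -r tank/data@2012-09-18
# zfs list -t snapshot -o name,used -s name | tail
# zfs destroy tank/data@2011-09-18

or let something like napp-it's auto-snapshot jobs do the same on a schedule.)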

Thanks.


Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-04 Thread Eugen Leitl
On Fri, Aug 03, 2012 at 08:39:55PM -0500, Bob Friesenhahn wrote:

> For the slog, you should look for a SLC technology SSD which saves  
> unwritten data on power failure.  In Intel-speak, this is called  
> "Enhanced Power Loss Data Protection".  I am not running across any  
> Intel SSDs which claim to match these requirements.

The 
http://www.intel.com/content/www/us/en/solid-state-drives/solid-state-drives-710-series.html
seems to qualify:

"Enhanced power-loss data protection. Saves all cached data in the process of 
being 
written before the Intel SSD 710 Series shuts down, which helps minimize 
potential 
data loss in the event of an unexpected system power loss."

> Extreme write IOPS claims in consumer SSDs are normally based on large  
> write caches which can lose even more data if there is a power failure.

Intel 311 with a good UPS would seem to be a reasonable tradeoff.


Re: [zfs-discuss] [zfs] LZ4 compression algorithm

2012-07-23 Thread Eugen Leitl
- Forwarded message from Bob Friesenhahn  
-

From: Bob Friesenhahn 
Date: Mon, 23 Jul 2012 12:55:44 -0500 (CDT)
To: z...@lists.illumos.org
cc: Radio młodych bandytów ,
Pawel Jakub Dawidek , develo...@lists.illumos.org
Subject: Re: [zfs] LZ4 compression algorithm
User-Agent: Alpine 2.01 (GSO 1266 2009-07-14)
Reply-To: z...@lists.illumos.org

On Mon, 23 Jul 2012, Sašo Kiselkov wrote:
>
> Anyway, the mere caring for clang by ZFS users doesn't necessarily mean
> that clang is unusable. It just may not be usable for kernel
> development. The userland story, however, can be very different.

FreeBSD 10 is clang-based and still includes ZFS which tracks the Illumos 
code-base.

Bob
-- 
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/




- End forwarded message -
-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


[zfs-discuss] [Freenas-announce] FreeNAS 8.2.0-RC1

2012-07-15 Thread Eugen Leitl
s on OS X.

- Upgrades from FreeNAS 0.7 aren't supported.

- The installer doesn't check the size of the install media before attempting
  an install.  A 2 GB device is required, but the install will appear to
  complete successfully on smaller devices, only to fail at boot.

- The installer will let you switch from i386 to amd64 architecture and
  vice-versa, but some files, such as the rrd files used by the statistics
  graphing package are architecture dependent.

- There are known interoperability issues with FreeNAS and Samba 4.x being
  used as a PDC due to Samba 4.x not conforming to the Microsoft CIFS
  specification and the way LDAP queries are executed on FreeNAS. Please see
  the following support ticket for more details:
  http://support.freenas.org/ticket/1135 .

Filename:
FreeNAS-8.2.0-RC1-x64.GUI_Upgrade.txz
SHA256 Hash:
3dc6ca50b2ee8105aebab34dd806e30e45b325afdfc7aee2311f38b5b7edcf8f

Filename:
FreeNAS-8.2.0-RC1-x64.GUI_Upgrade.xz
SHA256 Hash:
a11a96fa5617d70e69c936acf6819b53dc6a329a1015be34386b2c98dbf8a2fa

Filename:
FreeNAS-8.2.0-RC1-x64.img.xz
SHA256 Hash:
caf3c103773e74111c1555463bfd8a2c198b9e028ae1a517069b0978bf848e8f

Filename:
FreeNAS-8.2.0-RC1-x64.iso
SHA256 Hash:
c5d6787afeb7d8a20d925668e0548132e09990d2118d64bdd0839ceed88182ed

Filename:
FreeNAS-8.2.0-RC1-x86.GUI_Upgrade.txz
SHA256 Hash:
004cc7fc2deacd611659bd45eac5dfd79d6d259c11fd1a5ae609f9624dd7402e

Filename:
FreeNAS-8.2.0-RC1-x86.GUI_Upgrade.xz
SHA256 Hash:
071a93d6db20bfa235c2b9cf59d7e5c7e62a5457e85d2cc284301d6f768db8da

Filename:
FreeNAS-8.2.0-RC1-x86.img.xz
SHA256 Hash:
1b32c427ce997c0c82e0480c4af7892ca950f23db682e842a02bf2f7d8e73e43

Filename:
FreeNAS-8.2.0-RC1-x86.iso
SHA256 Hash:
e9368863fd316e502df294bc56208247cc6720cd7408bf9a328f666b3af10045






- End forwarded message -
-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


Re: [zfs-discuss] Solaris derivate with the best long-term future

2012-07-11 Thread Eugen Leitl

On Wed, Jul 11, 2012 at 08:48:54AM -0400, Hung-Sheng Tsao Ph.D. wrote:
> hi
> if U have not check this page please do
> http://en.wikipedia.org/wiki/ZFS
> interesting info about the  status of ZFS in various OS
> regards

Thanks for the pointer. It doesn't answer my question though --
where the most development momentum is. It does seem that Illumian
is a Debian-flavored distro of OpenIndiana, so I wonder why
the napp-it author puts OpenIndiana first in his list.

It would be interesting to see when zpool versions >28 will
be available in the open forks. Particularly encryption is
a very useful functionality.

> my 2c
> 1)if you have the money buy ZFS appliance

I would certainly not give any money to Oracle.

> 2)if you want to build your self napp-it get solaris 11 support, it only  

Oracle Solaris is dead to me for pretty much the same reasons.

> charge the SW/socket  and not change by storage capacity
> Nexenta  Enterprise platform charge U $$ for raw capacity

I'm currently using NexentaCore which is EOL. It seems
my choices are either OpenIndiana or Illumian, which seem
to be very closely related.

> On 7/11/2012 7:51 AM, Eugen Leitl wrote:
>> As a napp-it user who recently needs to upgrade from NexentaCore I recently 
>> saw
>> "preferred for OpenIndiana live but running under Illumian, NexentaCore and 
>> Solaris 11 (Express)"
>> as a system recommendation for napp-it.
>>
>> I wonder about the future of OpenIndiana and Illumian, which
>> fork is likely to see the most continued development, in your opinion?
>>
>> Thanks.


[zfs-discuss] Solaris derivate with the best long-term future

2012-07-11 Thread Eugen Leitl

As a napp-it user who needs to upgrade from NexentaCore, I recently saw
"preferred for OpenIndiana live but running under Illumian, NexentaCore and
Solaris 11 (Express)"
as a system recommendation for napp-it.

I wonder about the future of OpenIndiana and Illumian, which
fork is likely to see the most continued development, in your opinion?

Thanks.


[zfs-discuss] making network configuration sticky in nexenta core/napp-it

2012-01-10 Thread Eugen Leitl

Sorry for an off-topic question, but does anyone know how to make
network configuration (done with ifconfig/route add) sticky in
NexentaCore/napp-it?

After a reboot the system reverts to 0.0.0.0 and doesn't honor
/etc/defaultrouter.
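
(A sketch of what usually makes this stick on NexentaCore/OpenSolaris, assuming nwam is the culprit; interface name and addresses are purely illustrative:

# echo "192.168.1.10 netmask 255.255.255.0" > /etc/hostname.e1000g0
# echo 192.168.1.1 > /etc/defaultrouter
# svcadm disable network/physical:nwam
# svcadm enable network/physical:default

The contents of /etc/hostname.<interface> are handed to ifconfig at boot, and nwam ignores both of these legacy files, which is the usual reason manual settings evaporate on reboot.)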

Thanks.


[zfs-discuss] status iCore

2011-11-01 Thread Eugen Leitl

As a happy napp-it user (by the way, I suggest donating to the
further development of the project: http://napp-it.org/ see Donate 
button) I've noticed the following

> New Illumos project iCore: Intension is to offer a common code base/ core OS 
> for NexentaStor and OpenIndiana.
> Based on this, you can use NexentaStor or free OpenIndiana distributions. 
> OpenSolaris based NexentaCore is end of live.
> https://www.illumos.org/projects/icore + 
> http://www.nexenta.org/projects/consolidation 

Does anyone know what the status of iCore is? I presume that's
the only game in town, now that Oracle has killed the Solaris star?

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-16 Thread Eugen Leitl
On Mon, Aug 15, 2011 at 01:38:36PM -0700, Brandon High wrote:
> On Thu, Aug 11, 2011 at 1:00 PM, Ray Van Dolson  wrote:
> > Are any of you using the Intel 320 as ZIL?  It's MLC based, but I
> > understand its wear and performance characteristics can be bumped up
> > significantly by increasing the overprovisioning to 20% (dropping
> > usable capacity to 80%).
> 
> Intel recently added the 311, a small SLC-based drive for use as a
> temp cache with their Z68 platform. It's limited to 20GB, but it might
> be a better fit for use as a ZIL than the 320.

Works fine over here (Nexenta Core 3.1).

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


Re: [zfs-discuss] trouble adding log and cache on SSD to a pool

2011-08-10 Thread Eugen Leitl
On Sat, Aug 06, 2011 at 07:19:56PM +0200, Eugen Leitl wrote:
> 
> Upgrading to hacked N36L BIOS seems to have done the trick:
> 
> eugen@nexenta:~$ zpool status tank
>   pool: tank
>  state: ONLINE
>  scan: none requested
> config:
> 
> NAMESTATE READ WRITE CKSUM
> tankONLINE   0 0 0
>   raidz2-0  ONLINE   0 0 0
> c0t0d0  ONLINE   0 0 0
> c0t1d0  ONLINE   0 0 0
> c0t2d0  ONLINE   0 0 0
> c0t3d0  ONLINE   0 0 0
> logs
>   c0t5d0s0  ONLINE   0 0 0
> cache
>   c0t5d0s1  ONLINE   0 0 0
> 
> errors: No known data errors
> 
> Anecdotally, the drive noise and system load have gone
> down as well. It seems even with small SSDs hybrid pools
> are definitely worthwhile.

The system is still stable. Here is zilstat on a lightly loaded box
(this is the N36L with 8 GByte RAM and 4x 1 and 1.5 TByte Seagate
drives in raidz2):

root@nexenta:/tank/tank0/eugen# ./zilstat.ksh -t 60
TIME                   N-Bytes  N-Bytes/s  N-Max-Rate    B-Bytes  B-Bytes/s  B-Max-Rate  ops  <=4kB  4-32kB  >=32kB
2011 Aug 11 10:38:31  17475360     291256     5464560   34078720     567978    10747904  260      0       0     260
2011 Aug 11 10:39:31  10417568     173626     6191832   20447232     340787    12189696  156      0       0     156
2011 Aug 11 10:40:31  19264288     321071     5975840   34603008     576716     9961472  264      0       0     264
2011 Aug 11 10:41:31  11176512     186275     6124832   22151168     369186    12189696  169      0       0     169
2011 Aug 11 10:42:31  14544432     242407    13321424   26738688     445644    24117248  204      0       0     204
2011 Aug 11 10:43:31  13470688     224511     5019744   25821184     430353     9961472  197      0       0     197
2011 Aug 11 10:44:31   9147112     152451     4225464   18350080     305834     8519680  140      0       0     140
2011 Aug 11 10:45:31  12167552     202792     7760864   23068672     384477    15204352  176      0       0     176
2011 Aug 11 10:46:31  13306192     221769     8467424   25034752     417245    15335424  191      0       0     191
2011 Aug 11 10:47:31   8634288     143904     8254112   15990784     266513    15204352  122      0       0     122
2011 Aug 11 10:48:31   4442896      74048     4078408    9175040     152917     8257536   70      0       0      70
2011 Aug 11 10:49:31   8256312     137605     5283744   15859712     264328     9961472  121      0       0     121

I've also run bonnie++ and a scrub while under about the same load;
the scrub was doing 80-90 MByte/s.
 
> 
> On Fri, Aug 05, 2011 at 10:43:02AM +0200, Eugen Leitl wrote:
> > 
> > I think I've found the source of my problem: I need to reflash
> > the N36L BIOS to a hacked russian version (sic) which allows
> > AHCI in the 5th drive bay
> > 
> > http://terabyt.es/2011/07/02/nas-build-guide-hp-n36l-microserver-with-nexenta-napp-it/
> > 
> > ...
> > 
> > Update BIOS and install hacked Russian BIOS
> > 
> > The HP BIOS for N36L does not support anything but legacy IDE emulation on 
> > the internal ODD SATA port and the external eSATA port. This is a problem 
> > for Nexenta which can detect false disk errors when using the ODD drive on 
> > emulated IDE mode. Luckily an unknown Russian hacker somewhere has modified 
> > the BIOS to allow AHCI mode on both the internal and eSATA ports. I have 
> > always said, “Give the Russians two weeks and they will crack anything” and 
> > usually that has held true. Huge thank you to whomever has modified this 
> > BIOS given HPs complete failure to do so.
> > 
> > I have enabled this with good results. The main one being no emails from 
> > Nexenta informing you that the syspool has moved to a degraded state when 
> > it actually hasn’t :) 
> > 
> > ...
> > 
> > On Fri, Aug 05, 2011 at 09:05:07AM +0200, Eugen Leitl wrote:
> > > On Thu, Aug 04, 2011 at 11:58:47PM +0200, Eugen Leitl wrote:
> > > > On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote:
> > > > >
> > > > >> root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0
> > > > >
> > > > > You should use c3d1s0 here.
> > > > >
> > > > >> Th
> > > > >> root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk/c3d1p1
> > > > >
> > > > > Use c3d1s1.
> > > > 
> > > > Thanks, that did the trick!
> > > > 
> > > > root@nexenta:/export/home/eugen# zpool status tank
> > >

Re: [zfs-discuss] [vserver] hybrid zfs pools as iSCSI targets for vserver

2011-08-06 Thread Eugen Leitl
- Forwarded message from Gordan Bobic  -

From: Gordan Bobic 
Date: Sat, 06 Aug 2011 21:37:30 +0100
To: vser...@list.linux-vserver.org
Subject: Re: [vserver] hybrid zfs pools as iSCSI targets for vserver
Reply-To: vser...@list.linux-vserver.org
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.18) 
Gecko/20110621 Red Hat/3.1.11-2.el6_1 Lightning/1.0b2 Thunderbird/3.1.11

On 08/06/2011 09:30 PM, John A. Sullivan III wrote:
> On Sat, 2011-08-06 at 21:40 +0200, Eugen Leitl wrote:
>> I've recently figured out how to make low-end hardware (e.g. HP N36L)
>> work well as zfs hybrid pools. The system (Nexenta Core + napp-it)
>> exports the zfs pools as CIFS, NFS or iSCSI (Comstar).
>>
>> 1) is this a good idea?
>>
>> 2) any of you are running vserver guests on iSCSI targets? Happy with it?
>>
> Yes, we have been using iSCSI to hold vserver guests for a couple of
> years now and are generally unhappy with it.  Besides our general
> distress at Nexenta, there is the constraint of the Linux file system.
>
> Someone please correct me if I'm wrong because this is a big problem for
> us.  As far as I know, Linux file system block size cannot exceed the
> maximum memory page size and is limited to no more than 4KB.

I'm pretty sure it is _only_ limited by memory page size, since I'm pretty 
sure I remember that 8KB blocks were available on SPARC.

> iSCSI
> appears to acknowledge every individual block that is sent. That means
> the most data one can stream without an ACK is 4KB. That means the
> throughput is limited by the latency of the network rather than the
> bandwidth.

Hmm, buffering in the FS shouldn't be dependant on the block layer  
immediately acknowledging unless you are issuing fsync()/barriers. What FS 
are you using on top of the iSCSI block device and is your application 
fsync() heavy?

Gordan

- End forwarded message -
-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


Re: [zfs-discuss] [vserver] hybrid zfs pools as iSCSI targets for vserver

2011-08-06 Thread Eugen Leitl
- Forwarded message from "John A. Sullivan III" 
 -

From: "John A. Sullivan III" 
Date: Sat, 06 Aug 2011 16:30:04 -0400
To: vser...@list.linux-vserver.org
Subject: Re: [vserver] hybrid zfs pools as iSCSI targets for vserver
Reply-To: vser...@list.linux-vserver.org
X-Mailer: Evolution 2.30.3 

On Sat, 2011-08-06 at 21:40 +0200, Eugen Leitl wrote:
> I've recently figured out how to make low-end hardware (e.g. HP N36L)
> work well as zfs hybrid pools. The system (Nexenta Core + napp-it)
> exports the zfs pools as CIFS, NFS or iSCSI (Comstar).
> 
> 1) is this a good idea?
> 
> 2) any of you are running vserver guests on iSCSI targets? Happy with it?
> 
Yes, we have been using iSCSI to hold vserver guests for a couple of
years now and are generally unhappy with it.  Besides our general
distress at Nexenta, there is the constraint of the Linux file system.

Someone please correct me if I'm wrong because this is a big problem for
us.  As far as I know, Linux file system block size cannot exceed the
maximum memory page size and is limited to no more than 4KB.  iSCSI
appears to acknowledge every individual block that is sent. That means
the most data one can stream without an ACK is 4KB. That means the
throughput is limited by the latency of the network rather than the
bandwidth.

Nexenta is built on OpenSolaris and has a significantly higher internal
network latency than Linux.  It is not unusual for us to see round trip
times from host to Nexenta well upwards of 100us (micro-seconds).  Let's
say it was even as good as 100us.  One could send up to 10,000 packets
per second * 4KB = 40MBps maximum throughput for any one iSCSI
conversation.  That's pretty lousy disk throughput.
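
Spelling that arithmetic out as a one-liner (same numbers, decimal megabytes):

awk 'BEGIN { rtt=100e-6; bs=4096; printf "%.0f MB/s\n", bs/rtt/1e6 }'   # ~41 MB/s at 100us RTT
awk 'BEGIN { rtt=1e-3;   bs=4096; printf "%.0f MB/s\n", bs/rtt/1e6 }'   # ~4 MB/s at 1ms RTT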

Other than that, iSCSI is fabulous because it appears as a local block
device.  We typically mount a large data volume into the VServer host
and the mount rbind it into the guest file systems.  A magically well
working file server without a file server or the hassles of a network
file system.  Our single complaint other than about Nexenta themselves
is the latency constrained throughput.

Any one have a way around that? Thanks - John

- End forwarded message -
-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] trouble adding log and cache on SSD to a pool

2011-08-06 Thread Eugen Leitl

Upgrading to hacked N36L BIOS seems to have done the trick:

eugen@nexenta:~$ zpool status tank
  pool: tank
 state: ONLINE
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz2-0  ONLINE   0 0 0
c0t0d0  ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0
logs
  c0t5d0s0  ONLINE   0 0 0
cache
  c0t5d0s1  ONLINE   0 0 0

errors: No known data errors

Anecdotally, the drive noise and system load have gone
down as well. It seems even with small SSDs hybrid pools
are definitely worthwhile.
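
For anyone wanting to confirm the slog and L2ARC are actually being exercised,
the counters can be watched while the load runs (illustrative only, not a
benchmark):

zpool iostat -v tank 5        # per-vdev traffic, including the log and cache devices
kstat -n arcstats | grep l2   # L2ARC size and hit/miss counters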


On Fri, Aug 05, 2011 at 10:43:02AM +0200, Eugen Leitl wrote:
> 
> I think I've found the source of my problem: I need to reflash
> the N36L BIOS to a hacked russian version (sic) which allows
> AHCI in the 5th drive bay
> 
> http://terabyt.es/2011/07/02/nas-build-guide-hp-n36l-microserver-with-nexenta-napp-it/
> 
> ...
> 
> Update BIOS and install hacked Russian BIOS
> 
> The HP BIOS for N36L does not support anything but legacy IDE emulation on 
> the internal ODD SATA port and the external eSATA port. This is a problem for 
> Nexenta which can detect false disk errors when using the ODD drive on 
> emulated IDE mode. Luckily an unknown Russian hacker somewhere has modified 
> the BIOS to allow AHCI mode on both the internal and eSATA ports. I have 
> always said, “Give the Russians two weeks and they will crack anything” and 
> usually that has held true. Huge thank you to whoever has modified this BIOS 
> given HP's complete failure to do so.
> 
> I have enabled this with good results. The main one being no emails from 
> Nexenta informing you that the syspool has moved to a degraded state when it 
> actually hasn’t :) 
> 
> ...
> 
> On Fri, Aug 05, 2011 at 09:05:07AM +0200, Eugen Leitl wrote:
> > On Thu, Aug 04, 2011 at 11:58:47PM +0200, Eugen Leitl wrote:
> > > On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote:
> > > >
> > > >> root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0
> > > >
> > > > You should use c3d1s0 here.
> > > >
> > > >> Th
> > > >> root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk/c3d1p1
> > > >
> > > > Use c3d1s1.
> > > 
> > > Thanks, that did the trick!
> > > 
> > > root@nexenta:/export/home/eugen# zpool status tank
> > >   pool: tank
> > >  state: ONLINE
> > >  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug  5 03:04:57 2011
> > > config:
> > > 
> > > NAMESTATE READ WRITE CKSUM
> > > tankONLINE   0 0 0
> > >   raidz2-0  ONLINE   0 0 0
> > > c0t0d0  ONLINE   0 0 0
> > > c0t1d0  ONLINE   0 0 0
> > > c0t2d0  ONLINE   0 0 0
> > > c0t3d0  ONLINE   0 0 0
> > > logs
> > >   c3d1s0ONLINE   0 0 0
> > > cache
> > >   c3d1s1ONLINE   0 0 0
> > > 
> > > errors: No known data errors
> > 
> > Hmm, it doesn't seem to last more than a couple hours
> > under test load (mapped as a CIFS share receiving a
> > bittorrent download with 10 k small files in it at
> > about 10 MByte/s) before falling from the pool:
> > 
> > root@nexenta:/export/home/eugen# zpool status tank
> >   pool: tank
> >  state: DEGRADED
> > status: One or more devices are faulted in response to persistent errors.
> > Sufficient replicas exist for the pool to continue functioning in a
> > degraded state.
> > action: Replace the faulted device, or use 'zpool clear' to mark the device
> > repaired.
> >  scan: none requested
> > config:
> > 
> > NAMESTATE READ WRITE CKSUM
> > tankDEGRADED 0 0 0
> >   raidz2-0  ONLINE   0 0 0
> > c0t0d0  ONLINE   0 0 0
> > c0t1d0  ONLINE   0 0 0
> > c0t2d0  ONLINE   0 0 0
> > c0t3d0  ONLINE   0 0 0
> > logs
> >   c3d1s0FAULTED  0 4 0  too many errors
> > cache
> >   c3d1s1FAULTED 13 7.68K 0  too many errors
> > 
> > errors: No

Re: [zfs-discuss] trouble adding log and cache on SSD to a pool

2011-08-05 Thread Eugen Leitl

I think I've found the source of my problem: I need to reflash
the N36L BIOS to a hacked russian version (sic) which allows
AHCI in the 5th drive bay

http://terabyt.es/2011/07/02/nas-build-guide-hp-n36l-microserver-with-nexenta-napp-it/

...

Update BIOS and install hacked Russian BIOS

The HP BIOS for N36L does not support anything but legacy IDE emulation on the 
internal ODD SATA port and the external eSATA port. This is a problem for 
Nexenta which can detect false disk errors when using the ODD drive on emulated 
IDE mode. Luckily an unknown Russian hacker somewhere has modified the BIOS to 
allow AHCI mode on both the internal and eSATA ports. I have always said, “Give 
the Russians two weeks and they will crack anything” and usually that has held 
true. Huge thank you to whomever has modified this BIOS given HPs complete 
failure to do so.

I have enabled this with good results. The main one being no emails from 
Nexenta informing you that the syspool has moved to a degraded state when it 
actually hasn’t :) 

...

On Fri, Aug 05, 2011 at 09:05:07AM +0200, Eugen Leitl wrote:
> On Thu, Aug 04, 2011 at 11:58:47PM +0200, Eugen Leitl wrote:
> > On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote:
> > >
> > >> root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0
> > >
> > > You should use c3d1s0 here.
> > >
> > >> Th
> > >> root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk/c3d1p1
> > >
> > > Use c3d1s1.
> > 
> > Thanks, that did the trick!
> > 
> > root@nexenta:/export/home/eugen# zpool status tank
> >   pool: tank
> >  state: ONLINE
> >  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug  5 03:04:57 2011
> > config:
> > 
> > NAMESTATE READ WRITE CKSUM
> > tankONLINE   0 0 0
> >   raidz2-0  ONLINE   0 0 0
> > c0t0d0  ONLINE   0 0 0
> > c0t1d0  ONLINE   0 0 0
> > c0t2d0  ONLINE   0 0 0
> > c0t3d0  ONLINE   0 0 0
> > logs
> >   c3d1s0ONLINE   0 0 0
> > cache
> >   c3d1s1ONLINE   0 0 0
> > 
> > errors: No known data errors
> 
> Hmm, it doesn't seem to last more than a couple hours
> under test load (mapped as a CIFS share receiving a
> bittorrent download with 10 k small files in it at
> about 10 MByte/s) before falling from the pool:
> 
> root@nexenta:/export/home/eugen# zpool status tank
>   pool: tank
>  state: DEGRADED
> status: One or more devices are faulted in response to persistent errors.
> Sufficient replicas exist for the pool to continue functioning in a
> degraded state.
> action: Replace the faulted device, or use 'zpool clear' to mark the device
> repaired.
>  scan: none requested
> config:
> 
> NAMESTATE READ WRITE CKSUM
> tankDEGRADED 0 0 0
>   raidz2-0  ONLINE   0 0 0
> c0t0d0  ONLINE   0 0 0
> c0t1d0  ONLINE   0 0 0
> c0t2d0  ONLINE   0 0 0
> c0t3d0  ONLINE   0 0 0
> logs
>   c3d1s0FAULTED  0 4 0  too many errors
> cache
>   c3d1s1FAULTED 13 7.68K 0  too many errors
> 
> errors: No known data errors
> 
> dmesg sez
> 
> Aug  5 05:53:26 nexenta EVENT-TIME: Fri Aug  5 05:53:26 CEST 2011
> Aug  5 05:53:26 nexenta PLATFORM: ProLiant-MicroServer, CSN: CN7051P024, 
> HOSTNAME: nexenta
> Aug  5 05:53:26 nexenta SOURCE: zfs-diagnosis, REV: 1.0
> Aug  5 05:53:26 nexenta EVENT-ID: 516e9c7c-9e29-c504-a422-db37838fa676
> Aug  5 05:53:26 nexenta DESC: A ZFS device failed.  Refer to 
> http://sun.com/msg/ZFS-8000-D3 for more information.
> Aug  5 05:53:26 nexenta AUTO-RESPONSE: No automated response will occur.
> Aug  5 05:53:26 nexenta IMPACT: Fault tolerance of the pool may be 
> compromised.
> Aug  5 05:53:26 nexenta REC-ACTION: Run 'zpool status -x' and replace the bad 
> device.
> Aug  5 05:53:39 nexenta fmd: [ID 377184 daemon.error] SUNW-MSG-ID: 
> ZFS-8000-FD, TYPE: Fault, VER: 1, SEVERITY: Major
> Aug  5 05:53:39 nexenta EVENT-TIME: Fri Aug  5 05:53:39 CEST 2011
> Aug  5 05:53:39 nexenta PLATFORM: ProLiant-MicroServer, CSN: CN7051P024, 
> HOSTNAME: nexenta
> Aug  5 05:53:39 nexenta SOURCE: zfs-diagnosis, REV: 1.0
> Aug  5 05:53:39 nexenta EVENT-ID: 3319749a-b6f7-c305-ec86-d94897dde85b
> Aug  5 05:53:39 nexenta DESC: The number of I/O errors associated with a ZFS 
> d

Re: [zfs-discuss] trouble adding log and cache on SSD to a pool

2011-08-05 Thread Eugen Leitl
On Thu, Aug 04, 2011 at 11:58:47PM +0200, Eugen Leitl wrote:
> On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote:
> >
> >> root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0
> >
> > You should use c3d1s0 here.
> >
> >> Th
> >> root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk/c3d1p1
> >
> > Use c3d1s1.
> 
> Thanks, that did the trick!
> 
> root@nexenta:/export/home/eugen# zpool status tank
>   pool: tank
>  state: ONLINE
>  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug  5 03:04:57 2011
> config:
> 
> NAMESTATE READ WRITE CKSUM
> tankONLINE   0 0 0
>   raidz2-0  ONLINE   0 0 0
> c0t0d0  ONLINE   0 0 0
> c0t1d0  ONLINE   0 0 0
> c0t2d0  ONLINE   0 0 0
> c0t3d0  ONLINE   0 0 0
> logs
>   c3d1s0ONLINE   0 0 0
> cache
>   c3d1s1ONLINE   0 0 0
> 
> errors: No known data errors

Hmm, it doesn't seem to last more than a couple hours
under test load (mapped as a CIFS share receiving a
bittorrent download with 10 k small files in it at
about 10 MByte/s) before falling from the pool:

root@nexenta:/export/home/eugen# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
repaired.
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
tankDEGRADED 0 0 0
  raidz2-0  ONLINE   0 0 0
c0t0d0  ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0
logs
  c3d1s0FAULTED  0 4 0  too many errors
cache
  c3d1s1FAULTED 13 7.68K 0  too many errors

errors: No known data errors

dmesg sez

Aug  5 05:53:26 nexenta EVENT-TIME: Fri Aug  5 05:53:26 CEST 2011
Aug  5 05:53:26 nexenta PLATFORM: ProLiant-MicroServer, CSN: CN7051P024, 
HOSTNAME: nexenta
Aug  5 05:53:26 nexenta SOURCE: zfs-diagnosis, REV: 1.0
Aug  5 05:53:26 nexenta EVENT-ID: 516e9c7c-9e29-c504-a422-db37838fa676
Aug  5 05:53:26 nexenta DESC: A ZFS device failed.  Refer to 
http://sun.com/msg/ZFS-8000-D3 for more information.
Aug  5 05:53:26 nexenta AUTO-RESPONSE: No automated response will occur.
Aug  5 05:53:26 nexenta IMPACT: Fault tolerance of the pool may be compromised.
Aug  5 05:53:26 nexenta REC-ACTION: Run 'zpool status -x' and replace the bad 
device.
Aug  5 05:53:39 nexenta fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-FD, 
TYPE: Fault, VER: 1, SEVERITY: Major
Aug  5 05:53:39 nexenta EVENT-TIME: Fri Aug  5 05:53:39 CEST 2011
Aug  5 05:53:39 nexenta PLATFORM: ProLiant-MicroServer, CSN: CN7051P024, 
HOSTNAME: nexenta
Aug  5 05:53:39 nexenta SOURCE: zfs-diagnosis, REV: 1.0
Aug  5 05:53:39 nexenta EVENT-ID: 3319749a-b6f7-c305-ec86-d94897dde85b
Aug  5 05:53:39 nexenta DESC: The number of I/O errors associated with a ZFS 
device exceeded
Aug  5 05:53:39 nexenta  acceptable levels.  Refer to 
http://sun.com/msg/ZFS-8000-FD for more information.
Aug  5 05:53:39 nexenta AUTO-RESPONSE: The device has been offlined and marked 
as faulted.  An attempt
Aug  5 05:53:39 nexenta  will be made to activate a hot spare if 
available.
Aug  5 05:53:39 nexenta IMPACT: Fault tolerance of the pool may be compromised.
Aug  5 05:53:39 nexenta REC-ACTION: Run 'zpool status -x' and replace the bad 
device.
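
Before re-adding the devices I intend to try the following (assuming the pool
is at version 19 or later, which slog removal requires):

zpool clear tank            # reset the error counters
zpool remove tank c3d1s1    # cache devices can be removed at any time
zpool remove tank c3d1s0    # removes the faulted slog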

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] trouble adding log and cache on SSD to a pool

2011-08-04 Thread Eugen Leitl
On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote:
>
>> root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0
>
> You should use c3d1s0 here.
>
>> Th
>> root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk/c3d1p1
>
> Use c3d1s1.

Thanks, that did the trick!

root@nexenta:/export/home/eugen# zpool status tank
  pool: tank
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug  5 03:04:57 2011
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz2-0  ONLINE   0 0 0
c0t0d0  ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0
logs
  c3d1s0ONLINE   0 0 0
cache
  c3d1s1ONLINE   0 0 0

errors: No known data errors


-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] trouble adding log and cache on SSD to a pool

2011-08-04 Thread Eugen Leitl

I'm a bit solaristarded, can somebody please help?

I've got a 

 c3d1 
  /pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0

which looks like

partition> print
Current partition table (original):
Total disk sectors available: 39074830 + 16384 (reserved sectors)

Part  TagFlag First SectorSizeLast Sector
  0usrwm   256   4.00GB 8388863
  1usrwm   8388864  14.63GB 39074830
  2 unassignedwm 0  0  0
  3 unassignedwm 0  0  0
  4 unassignedwm 0  0  0
  5 unassignedwm 0  0  0
  6 unassignedwm 0  0  0
  8   reservedwm  39074831   8.00MB 39091214

I'm trying to add the 4 GByte partition as log and
the 14.6 GByte as cache to below pool:

root@nexenta:/export/home/eugen# zpool status tank
  pool: tank
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug  5 03:04:57 2011
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz2-0  ONLINE   0 0 0
c0t0d0  ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0

errors: No known data errors

root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0

This works:

root@nexenta:/export/home/eugen# zpool status tank
  pool: tank
 state: ONLINE
 scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug  5 03:04:57 2011
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  raidz2-0  ONLINE   0 0 0
c0t0d0  ONLINE   0 0 0
c0t1d0  ONLINE   0 0 0
c0t2d0  ONLINE   0 0 0
c0t3d0  ONLINE   0 0 0
logs
  c3d1p0ONLINE   0 0 0

errors: No known data errors

This doesn't:

root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk/c3d1p1
cannot open '/dev/dsk/c3d1p1': No such device or address

root@nexenta:/export/home/eugen# ls -la /dev/dsk/c3d1*
lrwxrwxrwx 1 root root 52 Aug  5 00:45 /dev/dsk/c3d1 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:wd
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1p0 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:q
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1p1 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:r
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1p2 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:s
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1p3 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:t
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1p4 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:u
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s0 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:a
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s1 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:b
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s10 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:k
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s11 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:l
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s12 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:m
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s13 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:n
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s14 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:o
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s15 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:p
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s2 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:c
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s3 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:d
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s4 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:e
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s5 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:f
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s6 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:g
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s8 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:i
lrwxrwxrwx 1 root root 51 Aug  5 00:45 /dev/dsk/c3d1s9 -> 
../../devices/pci@0,0/pci-ide@14,1/ide@0/cmdk@1,0:j

It's probably something blindingly obvious to a seasoned
Solaris user, but I'm stumped. Any ideas?
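
Or am I supposed to feed zpool the slices rather than the fdisk partitions,
i.e. (untested):

zpool add tank log /dev/dsk/c3d1s0
zpool add tank cache /dev/dsk/c3d1s1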

-- 
Eugen* Leitl leitl http://leitl.org
___

Re: [zfs-discuss] NexentaCore 3.1 - ZFS V. 28

2011-07-31 Thread Eugen Leitl
On Sun, Jul 31, 2011 at 05:19:07AM -0700, Erik Trimble wrote:

>
> Yes. You can attach a ZIL or L2ARC device anytime after the pool is created.

Excellent.

> Also, I think you want an Intel 320, NOT the 311, for use as a ZIL.  The  
> 320 includes capacitors, so if you lose power, your ZIL doesn't lose  
> data.  The 311 DOESN'T include capacitors.

This is basically just a test system for hybrid pools, will
be on UPS in production, and mostly read-only.

The nice advantage of Nexenta core + napp-it is that it includes
apache + mysql + php, which saves the need for a dedicated machine
or virtual guest.

The appliance will host some 600+ k small (few MBytes) files. 
Does zfs need any special tuning for this case?
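
What I have pencilled in as a starting point, unless someone objects (the
dataset name is just an example):

zfs set atime=off tank/docs    # skip the extra write per read across 600k+ files
zfs get recordsize tank/docs   # files are a few MB each, so the default 128K should be fine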

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NexentaCore 3.1 - ZFS V. 28

2011-07-31 Thread Eugen Leitl
On Sun, Jul 31, 2011 at 03:45:23PM +0200, Volker A. Brandt wrote:

> I would be very interested in hearing about your success.  Especially,
> if the Hitachi HDS5C3030ALA630 SATA-III disks work in the N36L at all.
> 
> My guess would be that the on-board SATA-II controller will not
> support more than 2TB, but I have not found a definitive statement.
> HP certainly will not sell you disks bigger than 2TB for the N36L.

I'm 99% sure N36L takes 3 TByte SATA, as we have 5 of such
systems in production using the more expensive 3 TByte Hitachis.

You can't boot from them, of course, but that's what the internal
USB and external eSATA ports are good for.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NexentaCore 3.1 - ZFS V. 28

2011-07-31 Thread Eugen Leitl
On Sat, Jul 30, 2011 at 12:56:38PM +0200, Eugen Leitl wrote:
> 
> apt-get update
> apt-clone upgrade
> 
> Any first impressions?

I finally came around installing NexentaCore 3.1 along with
napp-it and AMP on a HP N36L with 8 GBytes RAM. I'm testing
it with 4x 1 and 1.5 TByte consumer SATA drives (Seagate)
with raidz2 and raidz3 and like what I see so far.

Given http://opensolaris.org/jive/thread.jspa?threadID=139315
I've ordered an Intel 311 series for ZIL/L2ARC.

I hope to use above with 4x 3 TByte Hitachi Deskstar 5K3000 HDS5C3030ALA630
given the data from Blackblaze in regards to their reliability.
Suggestion for above layout (8 GByte RAM 4x 3 TByte as raidz2)
I should go with 4 GByte for slog and 16 GByte for L2ARC, right?

Is it possible to attach slog/L2ARC to a pool after the fact?
I'd rather not wear out the small SSD with ~5 TByte avoidable
writes.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] NexentaCore 3.1 - ZFS V. 28

2011-07-30 Thread Eugen Leitl

apt-get update
apt-clone upgrade

Any first impressions?

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Backblaze likes Hitachi Deskstar 5K3000 HDS5C3030ALA630

2011-07-21 Thread Eugen Leitl

http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/

Seem to be real 512 Byte sectors, too.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zil on multiple usb keys

2011-07-15 Thread Eugen Leitl
On Fri, Jul 15, 2011 at 04:21:13PM +, Tiernan OToole wrote:
> This might be a stupid question, but here goes... Would adding, say, 4x 4 or 
> 8gb usb keys as a zil make enough of a difference for writes on an iscsi 
> shared vol? 
> 
> I am finding reads are not too bad (40ish mb/s over gige on 2 500gb drives 
> striped) but writes top out at about 10 and drop a lot lower... If I were 
> to add a couple usb keys for zil, would it make a difference?

Speaking of which, is there a point in using an eSATA flash stick?
If yes, which?
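
For what it's worth, the syntax for a mirrored slog (to protect against a
single stick failing) would just be -- device names are placeholders:

zpool add tank log mirror c8t1d0 c9t1d0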

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] RealSSD C300 -> Crucial CT064M4SSD2

2011-06-08 Thread Eugen Leitl

Anyone running a Crucial CT064M4SSD2? Any good, or should
I try getting a RealSSD C300, as long as these are still 
available?

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-05-27 Thread Eugen Leitl
On Fri, May 27, 2011 at 04:38:15PM +0400, Jim Klimov wrote:

> And if the ZFS is supposedly smart enough to use request coalescing
> as to minimize mechanical seek times, then it might actually be
> possible that your disks would get "stuck" averagely serving requests
> from different parts of the platter, i.e. middle-inside and middle-outside
> and this might even be averagely more than 2x faster than a single
> drive (due to non-zero track-to-track seek times).

In practice I've just found out I'm completely CPU-bound. 
Load goes to >11 during scrub, dd a large file causes ssh 
to crap out, etc. Completely unusable, in other words.

So I think I'll try to go with a mirrored pool, and see whether the
CPU load will go down.

Maybe it's a FreeBSD (FreeNAS 8.0) brain damage, and things would
have been better with OpenSolaris. I'll have to try the HP N36L
setup to see what the CPU load with either raidz2 or mirrored
pools will be.

> This is purely my speculation, but now that I thought about it, can't get
> rid of the idea ;) ...

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-05-26 Thread Eugen Leitl

How bad would raidz2 do on mostly sequential writes and reads
(Athlon64 single-core, 4 GByte RAM, FreeBSD 8.2)? 

The best way to go is striping mirrored pools, right?
I'm worried about losing the two "wrong" drives out of 8.
These are all 7200.11 Seagates, refurbished. I'd scrub
once a week, that'd probably suck on raidz2, too?
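
For concreteness, the two layouts being weighed (alternatives, and the device
names are placeholders):

zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
# raidz2 survives any 2 of the 8 failing; the striped mirrors only lose the
# pool if both halves of the same mirror die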

Thanks.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [cryptography] rolling hashes, EDC/ECC vs MAC/MIC, etc.

2011-05-22 Thread Eugen Leitl
able.

BTW, IIRC I filed an RFE at Sun to expose a Merkle hash tree checksum
to applications.  I believe I've seen others ask for this before too
at various times, and it may be that the RFE I filed (if I did) was in
response to a request on one of the OSOL discuss lists -- this must
have been back in 2005.  This is a really, really obvious enhancement
to want -- it should be obvious the moment you realize that Merkle
hash trees are a godsend.

Nico
--
_______
cryptography mailing list
cryptogra...@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

- End forwarded message -
-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [cryptography] rolling hashes, EDC/ECC vs MAC/MIC, etc.

2011-05-21 Thread Eugen Leitl
- Forwarded message from Zooko O'Whielacronx  -

From: Zooko O'Whielacronx 
Date: Sat, 21 May 2011 12:50:19 -0600
To: Crypto discussion list 
Subject: Re: [cryptography] rolling hashes, EDC/ECC vs MAC/MIC, etc.
Reply-To: Crypto discussion list 

Dear Nico Williams:

Thanks for the reference! Very cool.

What I would most want is for ZFS (and every other filesystem) to
maintain a Merkle Tree over the file data with a good secure hash.
Whenever a change to a file is made, the filesystem can update the
Merkle Tree this with mere O(log(N)) work in the size of the file plus
O(N) work in the size of the change. For a modern filesystem like ZFS
which is already maintaining a checksum tree the *added* cost of
maintaining the secure hash Merkle Tree could be minimal.

Then, the filesystem should make this Merkle Tree available to
applications through a simple query.

This would enable applications—without needing any further
in-filesystem code—to perform a Merkle Tree sync, which would range
from "noticeably more efficient" to "dramatically more efficient" than
rsync or zfs send. :-)
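
As a toy illustration of what such a sync looks like from userland (shell with
GNU coreutils; the chunk size is arbitrary and this is only a two-level tree,
not a full Merkle Tree):

split -b 1M bigfile chunk.         # fixed-size chunks
sha256sum chunk.* > chunk-hashes   # leaf hashes
sha256sum chunk-hashes             # "root" hash -- exchanging only this line detects equality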

Of course it is only more efficient because we're treating the
maintenance of the secure-hash Merkle Tree as free. There are two
senses in which this is legitimate and it is almost free:

1. Since the values get maintained persistently over the file's
lifetime then the total computation required is approximately O(N)
where N is the total size of all deltas that have been applied to the
file in its life. (Let's just drop the logarithmic part for now,
because see 2. below.)

Compare this to the cost of doing a fast, insecure CRC over the whole
file such as in rsync. The cost of that is O(N) * K where N is the
(then current) size of the file and K is the number of times you run
rsync on that file.

The extreme case is if the file hasn't changed. Then for the
application-level code to confirm that the file on this machine is the
same as the file on that machine, it merely has to ask the filesystem
for the root hash on each machine and transmit that root hash over the
network. This is optimally fast compared to rsync, and unlike "zfs
send|recv" it is optimally fast whenever the two files are identical
even if they have both changed since the last time they were synced.

2. Since the modern, sophisticated filesystem like ZFS is maintaining
a tree of checksums over the data *anyway* you can piggy-back this
computation onto that work, avoiding any extra seeks and minimizing
extra memory access.

In fact, ZFS itself can actually use SHA-256 for the checksum tree,
which would make it almost provide exactly what I want, except for:

2. a. From what I've read, nobody uses the SHA-256 configuration in
ZFS because it is too computationally expensive, so they use an
insecure checksum (fletcher2/4) instead.

2. b. I assume the shape of the resulting checksum tree is modified by
artifacts of the ZFS layout instead of being a simple canonical shape.
This is a show-stopper for this use case because if the same file data
exists on a different system, and some software on that system
computes a Merkle Tree over the data, it might come out with different
hashes than the ZFS checksum tree, thus eliminating all of the
performance benefits of this approach.

But, if ZFS could be modified to fix these problems or if a new
filesystem would add a feature of maintaining a canonical,
reproducible Merkle Tree, then it might be extremely useful.
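
For reference, the checksum algorithm is already a per-dataset ZFS property,
so the SHA-256 mode discussed in 2.a above is just (dataset name is an
example):

zfs set checksum=sha256 tank/fs
zfs get checksum tank/fs

Whether the CPU cost is acceptable is, of course, the open question from 2.a.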

Thanks to Brian Warner and Dan Shoutis for discussions about this idea.

Regards,

Zooko
___
cryptography mailing list
cryptogra...@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

- End forwarded message -
-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-20 Thread Eugen Leitl
On Tue, Jan 18, 2011 at 07:07:50AM -0800, Richard Elling wrote:

> > I'd expect more than 105290K/s on a sequential read as a peak for a single 
> > drive, let alone a striped set. The system has a relatively decent CPU, 
> > however only 2GB memory, do you think increasing this to 4GB would 
> > noticeably affect performance of my zpool? The memory is only DDR1.
> 
> 2GB or 4GB of RAM + dedup is a recipe for pain. Do yourself a favor, turn off 
> dedup
> and enable compression.

Assuming 4x 3 TByte drives and 8 GByte RAM, and a lowly dual-core 1.3 GHZ
AMD Neo, should I do the same? Or should I even not bother with compression?
The data set is a lot of scanned documents, already compressed (TIF and PDF).
I presume the incidence of identical blocks will be very low under such
circumstances.

Oh, and with 4x 3 TByte SATA mirrored pool is pretty much without
alternative, right?
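
My current plan, measuring rather than guessing (dataset name is an example):

zfs set dedup=off tank/scans
zfs set compression=lzjb tank/scans
# after copying over a test batch of the TIFs/PDFs, see whether it bought anything:
zfs get compressratio tank/scans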

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HP ProLiant N36L

2011-01-18 Thread Eugen Leitl
On Mon, Jan 17, 2011 at 02:19:23AM -0800, Trusty Twelve wrote:
> I've successfully installed NexentaStor 3.0.4 on this microserver using PXE. 
> Works like a charm.

I've got 5 of them today, and for some reason NexentaCore 3.0.1 b134
was unable to write to disks (whether internal USB or the 4x SATA).

Known problem? Should I go to stable, or try NexentaStor instead?
(I'd rather keep options open with Nexenta Core and napp-it).

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] OT: anyone aware how to obtain 1.8.0 for X2100M2?

2010-12-19 Thread Eugen Leitl

I realize this is off-topic, but Oracle has completely
screwed up the support site from Sun. I figured someone
here would know how to obtain

Sun Fire X2100 M2 Server Software 1.8.0 Image contents:

* BIOS is version 3A21
* SP is updated to version 3.24 (ELOM)
* Chipset driver is updated to 9.27 

from

http://www.sun.com/servers/entry/x2100/downloads.jsp

I've been trying for an hour, and I'm at the end of
my rope. 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-07 Thread Eugen Leitl
On Tue, Dec 07, 2010 at 05:17:08PM -0800, Brandon High wrote:
> On Mon, Dec 6, 2010 at 7:10 PM, taemun  wrote:
> > Sorry, you're right. If they're using 512B internally, this is a non-event
> > here. I think that most folks talking about 3TB drives in this list are
> > looking for internal drives. That the desktop dock (USB, I presume)
> > coalesces blocks doesn't really make any difference.
> 
> It's a shame that Seagate doesn't sell their 3TB drive bare, but right
> now it's cheaper by about $30 to buy the 7200 rpm Seagate and throw
> away the desktop dock than it is to buy a WD EARS drive. Consider it a
> fancy anti-shock packaging.

What about Hitachi HDS723030ALA640 (aka Deskstar 7K3000, claimed
24/7)?

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OCZ RevoDrive ZFS support

2010-11-27 Thread Eugen Leitl
On Sat, Nov 27, 2010 at 01:19:50PM -0600, Tim Cook wrote:

> They're a standard SATA hard drive.  You can use them for whatever you'd
> like.  For the price though, they aren't really worth the money to buy just
> to put your OS on.   Your system drive on a Solaris system generally doesn't
> see enough I/O activity to require the kind of IOPS you can get out of most

I run hundreds of vserver guests from an SSD, only the /home is mounted
on a hard drive/RAID.

> modern SSD's.  If you were using the system as a workstation, it'd
> definitely help, as applications tend to feel more responsive with an SSD.
> That's all I run in my laptops now.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread Eugen Leitl
On Fri, Nov 12, 2010 at 09:34:48AM -0600, Tim Cook wrote:
> Channeling Ethernet will not make it any faster. Each individual connection
> will be limited to 1gbit.  iSCSI with mpxio may work, nfs will not.

Would NFSv4 as cluster system over multiple boxes work?
(This question is not limited to ESX). I have a problem that
people want to have scalable in ~30 TByte increments solution,
and I'd rather avoid adding SAS expander boxes but add
identical boxes in a cluster, and not just as invididual
NFS mounts.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread Eugen Leitl
On Fri, Nov 12, 2010 at 10:03:08AM -0500, Edward Ned Harvey wrote:
> Since combining ZFS storage backend, via nfs or iscsi, with ESXi heads, I'm
> in love.  But for one thing.  The interconnect between the head & storage.
> 
>  
> 
> 1G Ether is so cheap, but not as fast as desired.  10G ether is fast enough,

So bundle four of those. Or use IB, assuming ESX can handle IB.

> but it's overkill and why is it so bloody expensive?  Why is there nothing
> in between?  Is there something in between?  Is there a better option?  I
> mean . sata is cheap, and it's 3g or 6g, but it's not suitable for this
> purpose.  But the point remains, there isn't a fundamental limitation that
> *requires* 10G to be expensive, or *requires* a leap directly from 1G to
> 10G.  I would very much like to find a solution which is a good fit. to
> attach ZFS storage to vmware.
> 
>  
> 
> What are people using, as interconnect, to use ZFS storage on ESX(i)?

Why do you think 10 GBit Ethernet is expensive? An Intel NIC is 200 EUR,
and a crossover cable is enough. No need for a 10 GBit switch.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HP ProLiant N36L

2010-11-11 Thread Eugen Leitl

Big thanks! I think I'll also buy one before long. The
power savings alone should be worth it over lifetime.

On Wed, Nov 10, 2010 at 11:03:21PM -0800, Krist van Besien wrote:
> I just bought one. :-)
> 
> My imprssions:
> 
> - Installed Nexentastor community edition in it. All hardware was recognized 
> and works. No problem there. I am however rather underwhelmed by the 
> Nexentastor system and will probably just install Opensolaris on it (b134) 
> this evening. I want to use the box as a NAS, serving CIFS to clients (a 
> mixture of MAC and Linux machines) but as I don't have that much 
> administration to do in it I'll just do it on the command line and forgo 
> fancy broken guis...
> - The system is well built. Quality is good. I could get the whole motherboard 
> tray out without needing to use tools. It comes with 1GB of ram that I plan 
> to upgrade.
> - The system does come with four HD trays and all the screws you need. I 
> plunked in 4 2T disks, and a small SSD for the OS.
> - The motherboard has a minisas connector, which is connected to the 
> backplane, and a seperate SATA connector that is intended for an optical 
> drive. I used that to connect a SSD which lives in the optical drive bay. 
> There is also an internal USB connector you could just put a USB stick in.
> - Performance under nexentastor appears OK. I have to do some real tests 
> though.
> - It is very quiet. Can certainly live with it in my office. (But will move 
> it into the basement anyway.)
> - A nice touch is the eSata connector on the back. It does have a VGA 
> connector, but no keyboard/mouse. This is completely legacy free...
> 
> All in all this is an excellent platform to build a NAS on.
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-30 Thread Eugen Leitl
On Sat, Oct 30, 2010 at 02:10:49PM -0700, zfs user wrote:

> 1 Mangy-Cours CPU  
^

Dunno whether deliberate, or malapropism, but I love it.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Running on Dell hardware?

2010-10-26 Thread Eugen Leitl
On Tue, Oct 26, 2010 at 12:50:16PM +, Markus Kovero wrote:
> 
> > Add about 50% to the last price list from Sun and you will get the price
> > it costs now ...
> 
> Seems oracle does not want to sell its hardware so much, several 
> month delays with sales rep providing prices and pricing nowhere 
> close to its competitors.

Yeah, no more Sun hardware for us, either. Mostly Supermicro,
Dell, HP.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HP ProLiant N36L

2010-10-25 Thread Eugen Leitl
On Mon, Oct 25, 2010 at 05:54:09PM +0600, Yuri Vorobyev wrote:
> 28.09.2010 10:45, Brandon High wrote:
>
>>> Anyone had any luck getting either OpenSolaris or FreeBSD with
>>> zfs working on
>>
>> I looked at it some, and all the hardware should be supported. There
>> is a half-height PCIe x16 and a x1 slot as well.
>
> Somebody has already bought this microserver? :)

Not yet, though I'm thinking about putting those new 4x 3 TByte SATA
disks into it. Resilver times in raidz3 will be a nightmare,
though.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Jeff Bonwick leaves Sun/Oracle

2010-09-28 Thread Eugen Leitl

http://blogs.sun.com/bonwick/en_US/entry/and_now_page_2 

Monday Sep 27, 2010

And now, page 2

To my team:

After 20 incredible years at Sun/Oracle, I have decided to try something new.

This was a very hard decision, and not one made lightly.  I have always enjoyed 
my work, and still do -- everything from MTS-2 to Sun Fellow to Oracle VP.  I 
love the people I work with and the technology we've created together, which is 
why I've been doing it for so long.  But I have always wanted to try doing a 
startup, and recently identified an opportunity that I just can't resist.  (We 
are in stealth mode, so that's all I can say for now.)

This team will always have a special place in my heart.  Being part of the 
Solaris team means doing the Right Thing, innovating, changing the rules, and 
being thought leaders -- creating the ideas that everyone else wants to copy. 
Add to that Oracle's unmatched market reach and ability to execute, and you 
have a combination that I believe will succeed in ways we couldn't have 
imagined two years ago.  I hope that Solaris and ZFS Storage are wildly 
successful, and that you have fun making it happen.

To the ZFS community:

Thank you for being behind us from Day One.  After a decade in the making, ZFS 
is now an adult.  Of course there's always more to do, and from this point 
forward, I look forward to watching you all do it.  There is a great quote 
whose origin I have never found: "Your ideas will go further if you don't 
insist on going with them."  That has proven correct many times in my life, and 
I am confident that it will prove true again.

For me, it's time to try the Next Big Thing.  Something I haven't fully fleshed 
out yet.  Something I don't fully understand yet.  Something way outside my 
comfort zone.  Something I might fail at.  Everything worth doing begins that 
way.  I'll let you know how it goes.

My last day at Oracle will be this Thursday, September 30, 2010.  After that 
you can reach me at my personal mac.com e-mail, with the usual first-dot-last 
construction.

It has truly been a wonderful couple of decades.  To everyone who taught me, 
worked with me, learned from me, and supported my efforts in countless ways 
large and small:  Thank you.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] HP ProLiant N36L

2010-09-27 Thread Eugen Leitl

Anyone had any luck getting either OpenSolaris or FreeBSD with
zfs working on 

http://h10010.www1.hp.com/wwpc/uk/en/sm/WF06b/15351-15351-4237916-4237917-4237917-4248009-4248034.html

?

The Neo has a lot more oomph than the Atoms, and the box can
handle up to 8 GByte ECC memory.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] FreeBSD 8.1 out, has zfs vserion 14 and can boot from zfs

2010-07-26 Thread Eugen Leitl

http://www.h-online.com/open/news/item/FreeBSD-8-1-arrives-1044996.html 

FreeBSD 8.1 arrives

FreeBSD Logo Originally scheduled for the 9th of July, the FreeBSD Release
Engineering Team has now issued version 8.1 of its popular free Unix
derivative, the first stable major point update to version 8.0 of FreeBSD
from November of last year. According to the developers, FreeBSD 8.1 has
improved functionality and introduces some new features.

FreeBSD 8.1 features version 14 of the ZFS subsystem, the addition of the ZFS
Loader (zfsloader), allowing users to boot from ZFS, and NFSv4 ACL support
for the UFS and ZFS file systems. Desktop updates include version 2.30.1 of
GNOME and the latest 4.4.5 release of KDE SC. Other changes include SMP
support in PowerPC G5, the integration of BIND 9.6.2-P2 and various package
updates, including version 8.14.4 of sendmail and OpenSSH 5.4p1. Support for
UltraSPARC IV/IV+ and SPARC64 V has also been added.

More details about the release can be found in the official release
announcement, release notes and on the Errata page. FreeBSD 8.1 is available
to download as an ISO image for AMD64, i386, ia64, PC98, PowerPC and SPARC64
systems from one of the project's FTP sites. An installation guide is
provided. Users currently running FreeBSD 7.0 or later can upgrade via the
freebsd-update utility.

See also:

* PC-BSD 8.1 "Hubble Edition" released, a report from The H.

* Health Check: FreeBSD, a feature from The H.

(crve)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CPU requirements for zfs performance

2010-07-21 Thread Eugen Leitl
On Wed, Jul 21, 2010 at 04:56:26PM +0200, Roy Sigurd Karlsbakk wrote:

> It'll probably be ok. If you use lzjb compresion, it'll probably suffice as 
> well. Give it gzip-9 compression, and you might have a cpu bottleneck, but 
> then, for most use, that config will probably do. What sort of traffic do you 
> expect?

Thanks for the thumbs-up. Just local GBit LAN. If it does
~20-40 MByte/s it should be quite enough.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] CPU requirements for zfs performance

2010-07-21 Thread Eugen Leitl

How badly would a dual-core 1.6 GHz Atom with 4 GBytes RAM
be underpowered for serving 4-6 SATA drives? What kind of
transfer speed (GBit Ethernet, Intel NICs) can I expect with 
raidz2 or raidz3?

Thanks.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-14 Thread Eugen Leitl
On Wed, Jul 14, 2010 at 07:18:59AM -0700, Erik Trimble wrote:

> Not to beat a dead horse here, but that's an Apples-to-Oranges

No, no, 'e's uh,...he's resting. 

> comparison (it's raining idioms!).  You can't compare an OEM server
> (Dell, Sun, whatever) to a custom-built box from a parts assembler.  Not
> that same thing. Different standards, different prices.

Sure, if your 3rd party disks don't play nice with your chassis, or
you need cubic-carbon-studded platinum level support for your mission
critical piece of infrastructure, you're out to lunch if
it hits it ;p 

However, in a whole series of anecdotes I've done quite well ditching 
Dells and Suns and HPs for Supermicro, and sourcing disks (and sometimes
memory) from the likes of TechData and IngramMicro. No doubt, others have 
very different stories to tell. 

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-14 Thread Eugen Leitl
On Wed, Jul 14, 2010 at 04:28:44PM +1200, Ian Collins wrote:

> >If you're new to solaris etc, I might not recommend the Dell because
> >installation isn't straightforward.  Hardware support exists, but it's less
> >"enterprise" than what you might expect.  The sun hardware is the
> >recommended way to go, but it's also more expensive.
> >   
> 
> Not in my neck of the woods, Sun have always been most competitive.

You find Sun to be a better deal than Supermicro? Especially
when you're sticking a very large number of disks into it, and
can't source the diskless caddies elsewhere? 

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2010-06-10 Thread Eugen Leitl
On Thu, Jun 10, 2010 at 04:04:42PM +0300, Pasi Kärkkäinen wrote:

> > Intel X25-M G1 firmware 8820 (80GB MLC)
> > Intel X25-M G2 firmware 02HD (160GB MLC)
> > 
> 
> What problems did you have with the X25-M models?

I'm not the OP, but I've had two X25M G2's (80 and 160 GByte)
suddenly die out me, out of a sample size of maybe 20.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Review: SuperMicro’s SC847 (SC847A) 4U chassis with 36 drive bays

2010-05-19 Thread Eugen Leitl

http://www.natecarlson.com/2010/05/07/review-supermicros-sc847a-4u-chassis-with-36-drive-bays/
 

Review: SuperMicro’s SC847 (SC847A) 4U chassis with 36 drive bays

May 7, 2010 · 9 comments

in Geek Stuff, Linux, Storage, Virtualization, Work Stuff

SuperMicro SC847 Thumbnail

[Or "my quest for the ultimate home-brew storage array."] At my day job, we
use a variety of storage solutions based on the type of data we’re hosting.
Over the last year, we have started to deploy SuperMicro-based hardware with
OpenSolaris and ZFS for storage of some classes of data. The systems we have
built previously have not had any strict performance requirements, and were
built with SuperMicro’s SC846E2 chassis, which supports 24 total SAS/SATA
drives, with an integrated port multiplier in the backplane to support
multipath to SAS drives. We’re building out a new system that we hope to be
able to promote to tier-1 for some “less critical data”, so we wanted better
drive density and more performance. We landed on the relatively new
SuperMicro SC847 chassis, which supports 36 total 3.5″ drives (24 front and
12 rear) in a 4U enclosure. While researching this product, I didn’t find
many reviews and detailed pictures of the chassis, so figured I’d take some
pictures while building the system and post them for the benefit of anyone
else interested in such a solution.

In the systems we’ve built so far, we’ve only deployed SATA drives since
OpenSolaris can still get us decent performance with SSD for read and write
cache. This means that in the 4U cases we’ve used with integrated port
multipliers, we have only used one of the two SFF-8087 connectors on the
backplane; this works fine, but limits the total throughput of all drives in
the system to 4 3gbit/s channels (on this chassis, 6 drives would be on each
3gbit channel.) On our most recent build, we built it with the intention of
using it both for “nearline”-class storage, and as a test platform to see if
we can get the performance we need to store VM images. As part of this
decision, we decided to go with a backplane that supports full throughput to
each drive.

[...]
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hard drives for ZFS NAS

2010-05-12 Thread Eugen Leitl
On Wed, May 12, 2010 at 09:05:14PM +1000, Emily Grettel wrote:
> 
> Hello,
> 
>  
> 
> I've decided to replace my WD10EADS and WD10EARS drives as I've checked the 
> SMART values and they've accrued some insanely high numbers for the 
> load/unload counts (40K+ in 120 days on one!).
> 
>  
> 
> I was leaning towards the Black drives but now I'm a bit worried about the 
> TLER lackingness, which was a mistake made by my previous sysadmin.
> 
>  
> 
> I'm wondering what other people are using, even though the Green series has 
> let me down, I'm still a Western Digital gal.

Try WD RE3 and RE4 series, no issues here so far. Presumably, 1 TByte would
be better than 2 TByte due to resilver times.
Some say Hitachis work, too.
 
> Would you recommend any of these for use in a ZFS NAS?
> 
>  
> 
> 
> 4x WD2003FYYS - http://www.wdc.com/en/products/Products.asp?DriveID=732 [RE4]
> 4x WD2002FYPS - http://www.wdc.com/en/products/products.asp?DriveID=610 
> [Green]
> 6x WD1002FBYS - http://www.wdc.com/en/products/Products.asp?DriveID=503 [RE3]
>  
> 
> What do people already use on their enterprise level NAS's? Any good Seagates?

After a 7200.11 debacle (in fact, SMART just told me I've got another deader
on my hands) I'm quite leery. Maybe the SAS Seagates are better.

-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle to no longer support ZFS on OpenSolaris?

2010-04-21 Thread Eugen Leitl
On Tue, Apr 20, 2010 at 06:51:01PM +0100, Bayard Bell wrote:

> These folks running the relevant business lines have already said  
> publicly to the OGB that Oracle's corporate management accepts the  
> basic premise of OpenSolaris, so why pass the time waiting to learn  
> how they're going to make good on this by concocting baroque  
> conspiracy theories about how they're going to reverse themselves in  
> some material fashion or passing along rumours to that effect?

It doesn't take 'baroque conspiracy theories'; just look at
Oracle's track record with past technology acquisitions. The burden
of proof is quite onerous, and quite in their court. Words
are not nearly enough.

It seems the technology is finished, unless a credible fork is 
forthcoming. 



Re: [zfs-discuss] SSD sale on newegg

2010-04-07 Thread Eugen Leitl
On Tue, Apr 06, 2010 at 05:22:25PM -0700, Carson Gaspar wrote:

> I just found an 8 GB SATA Zeus (Z4S28I) for £83.35 (~US$127) shipped to 
> California. That should be more than large enough for my ZIL @home, 
> based on zilstat.

Transcend sells an 8 GByte SLC SSD for about 70 EUR. The specs
are not awe-inspiring though (I used it in an embedded firewall).
 
> The web site says EOL, limited to current stock.
> 
> http://www.dpieshop.com/stec-zeus-z4s28i-8gb-25-sata-ssd-solid-state-drive-industrial-temp-p-410.html
> 
> Of course this seems _way_ too good to be true, but I decided to take 
> the risk.



[zfs-discuss] FYI: Ben Rockwood: Solaris no longer free

2010-03-29 Thread Eugen Leitl
 solaris.

jeremy (Email) - 29 March '10 - 12:03
Jeremy, sorry, but I believe you are misinformed.

Larry (and the whole Oracle crew) has stated that the future of high-end will 
be Solaris. Read the transcript of his webcasts, or watch them in full. You can 
find some of Larry’s remarks such as:

“I think Solaris is way far advanced, and I love Linux, but I think Solaris is 
a more capable operating system,”

“I think Solaris’ home is in the high-end of the data center, and it will be a 
long time before Linux catches up”

“But again we will have Linux—I’m a Linux fan and if you want Linux we have the 
best Linux in the world. If you want UNIX, we have the best UNIX in the world. 
And again, they are different and I don’t think the high end is in trouble at 
all.”

As you can see, they are positioning Solaris first (high-end), and if you want 
Linux, well, they have that too.

Sorry, but Ubuntu doesn’t even come close to Solaris in any way. It doesn’t 
even directly compare to RHEL.

Once again, I invite you to read and research a bit before posting such 
groundless comments.

Phobos (Email) - 29 March '10 - 12:15
@Phobos
I am all for reading and researching and tapping the minds of knowledgeable 
folk, a group your post seems to imply you belong to =). So I 
would like to ask for your comments on the performance of FreeBSD vs Solaris.
kr
Phil

phil (Email) - 29 March '10 - 12:46
Phil, my only comment is that your mileage may vary!

Both Solaris and FreeBSD are good OSs for general use, but in order to know 
which one suits you best, nothing beats hands-on experience. ZFS performance 
on FreeBSD still has some details that need to be addressed, but it is already 
labeled as production ready.

DTrace on FreeBSD also works great. I believe it doesn’t have all the probes 
Solaris has yet, but it performs great anyway.

Personally, I would recommend you to test both systems. Maybe you will find 
some comparative benchmarks in Phoronix, but those don’t guarantee your 
application will work better on one or the other.

Hope that helps.

Phobos (Email) - 29 March '10 - 13:12 


Re: [zfs-discuss] ZFS where to go!

2010-03-26 Thread Eugen Leitl
On Fri, Mar 26, 2010 at 07:46:01AM -0400, Edward Ned Harvey wrote:

> And FreeBSD in general will be built using older versions of packages than
> what's in OpenSolaris.
> 
> Both are good OSes.  If you can use FreeBSD but OpenSolaris doesn't have the
> driver for your hardware, go for it.

While I use zfs with FreeBSD (FreeNAS appliance with 4x SATA 1 TByte drives), 
it is trailing OpenSolaris by at least a year if not longer, and hence lacks
many of the key features for which people pick zfs over other file systems.
The performance, especially CIFS, is quite lacking. Purportedly (I have never
seen the source, nor am I a developer), such crucial features are nontrivial
to backport because FreeBSD doesn't practice the same layer separation.
Whether this will still be true in the future, we'll see once the Oracle/Sun
dust settles.



Re: [zfs-discuss] suggested ssd for zil

2010-03-01 Thread Eugen Leitl
On Mon, Mar 01, 2010 at 12:18:45AM -0500, rwali...@washdcmail.com wrote:

> > ACARD ANS-9010, as mentioned several times here recently (also sold as
> > hyperdrive5) 
> 
> You are right.  I saw that in a recent thread.  In my case I don't have a 
> spare bay for it.  I'm similarly constrained on some of the PCI solutions 
> that have either battery backup or external power.
> 
> But this seems like a good solution if someone has the space.

It doesn't do ECC memory though, which is a real pity.



Re: [zfs-discuss] snv_133 - high cpu

2010-02-23 Thread Eugen Leitl
On Tue, Feb 23, 2010 at 01:03:04PM -0600, Bob Friesenhahn wrote:

> Zfs can consume appreciable CPU if compression, sha256 checksums, 
> and/or deduplication is enabled.  Otherwise, substantial CPU 
> consumption is unexpected.

In terms of scaling, does zfs on OpenSolaris play well on multiple
cores? How many disks (assuming 100 MByte/s throughput for each)
would be considered pushing it for a current single-socket quadcore?
 
> Are compression, sha256 checksums, or deduplication enabled for the 
> filesystem you are using?
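
For reference, a quick way to check from the shell ('tank' is a placeholder
pool/dataset name):

  zfs get compression,checksum,dedup tank
  mpstat 5                 # does the checksum/compression load spread across cores?
  zpool iostat -v tank 5   # per-vdev throughput while the workload runs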



[zfs-discuss] future of OpenSolaris

2010-02-22 Thread Eugen Leitl

Oracle's silence is starting to become a bit ominous. What are
the future options for zfs, should OpenSolaris be left dead
in the water by Suracle? I have no insight into who the core
zfs developers are (have any been fired by Sun even prior to
the merger?), or who's paying them. Assuming a worst-case
scenario, what would be the best candidate for a fork? Nexenta?
Debian already took FreeBSD into its fold as a kernel flavor;
it seems Nexenta could also be a good candidate.

Maybe anyone in the know could provide a short blurb on what
the state is, and what the options are.



Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Eugen Leitl
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:

> I found the Hyperdrive 5/5M, which is a half-height drive bay sata 
> ramdisk with battery backup and auto-backup to compact flash at power 
> failure.
> Promises 65,000 IOPS and thus should be great for ZIL. It's pretty 
> reasonable priced (~230 EUR) and stacked with 4GB or 8GB DDR2-ECC should 
> be more than sufficient.

Wouldn't it be better to invest these 300-350 EUR in 16 GByte or more of
system memory, and a cheap UPS?
 
> http://www.hyperossystems.co.uk/07042003/hardware.htm



Re: [zfs-discuss] Abysmal ISCSI / ZFS Performance

2010-02-18 Thread Eugen Leitl
On Wed, Feb 17, 2010 at 11:21:07PM -0800, Matt wrote:
> Just out of curiosity - what Supermicro chassis did you get?  I've got the 
> following items shipping to me right now, with SSD drives and 2TB main drives 
> coming as soon as the system boots and performs normally (using 8 extra 500GB 
> Barracuda ES.2 drives as test drives).

That looks like a sane combination. Please report how this particular
setup performs; I'm quite curious.

One question though:
 
> 
> http://www.acmemicro.com/estore/merchant.ihtml?pid=5440&lastcatid=53&step=4
> http://www.newegg.com/Product/Product.aspx?Item=N82E16820139043
> http://www.acmemicro.com/estore/merchant.ihtml?pid=4518&step=4

Just this one SAS adaptor? Are you connecting to the drive
backplane with one cable for the 4 internal SAS connectors?
Are you using SAS or SATA drives? Will you be filling up 24
slots with 2 TByte drives, and are you sure you won't be 
oversubscribed with just 4x SAS? And for the SSDs, which drives are you 
using and in which mounts (internal or external caddies)?

> http://www.acmemicro.com/estore/merchant.ihtml?pid=6708&step=4
> http://www.newegg.com/Product/Product.aspx?Item=N82E16819117187
> http://www.newegg.com/Product/Product.aspx?Item=N82E16835203002



Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-10 Thread Eugen Leitl
On Tue, Feb 09, 2010 at 11:16:44PM -0700, Eric D. Mudama wrote:

> >no one is selling disk brackets without disks.  not Dell, not EMC, not
> >NetApp, not IBM, not HP, not Fujitsu, ...
> 
> http://discountechnology.com/Products/SCSI-Hard-Drive-Caddies-Trays

I don't see why we have to hunt down random parts.

My lesson from this is that I no longer buy Sun, Dell, HP or Fujitsu.
Screw them. I can get complete systems with all slots populated with
empty caddies, e.g. from Supermicro. Drives are a commodity. Same thing
with buying extra licenses to un-cripple IPMI crippleware (HP, Fujitsu). 



Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-04 Thread Eugen Leitl
On Wed, Feb 03, 2010 at 03:02:21PM -0800, Brandon High wrote:

> Another solution, for a true DIY x4500: BackBlaze has schematics for
> the 45 drive chassis that they designed available on their website.
> http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/
> 
> Someone brought it up on the list a few months ago (which is how I
> know about it) and there was some interesting discussion at that time.

IIRC the consensus was that the vibration dampening was inadequate,
the interfaces oversubscribed, and the disks (not being nearline parts)
too unreliable, but I might be misremembering.

I'm still happy with my 16x WD RE4 drives (linux mdraid RAID 10,
CentOS, Oracle, no zfs). Supermicro now does a 36-drive chassis
(http://www.supermicro.com/products/chassis/4U/?chs=847), so budget
DIY for zfs is about 72 TByte raw storage with 2 TByte nearline
SATA drives.

I've had trouble finding internal mounts from Supermicro that fit two 
2.5" SSDs into one 3.5" bay for hybrid zfs, but no doubt one 
could improvise something from the usual ricer supplies. 

On a smaller scale, http://www.supermicro.com/products/chassis/2U/?chs=216
works well with 2.5" Intel SSDs and VelociRaptors. I hope to be able
to use one for a hybrid zfs iSCSI target for VMWare, probably with
10 GBit Ethernet.

> There's no way I would use something like this for most installs, but
> there is definitely some use. Now that opensolaris supports sata pmp,
> you could use a similar chassis for a zfs pool.



Re: [zfs-discuss] FreeNAS 0.7 zfs performance

2009-11-30 Thread Eugen Leitl
On Mon, Nov 30, 2009 at 10:20:09PM +0100, Harald Dumdey wrote:

> please have a look at my this blogpost ->
> http://harryd71.blogspot.com/2009/06/benchmark-of-freenas-07-and-single-ssd.html
> 105 MByte/s Read - 77 MByte/s Write over 1 GBit/s Ethernet is not too
> bad for a single SSD...

That is indeed quite nice. I was hoping to be able to use ZIL and L2ARC
in a hybrid storage pool with FreeBSD 8.0, but FreeNAS just committed
suicide (went Linux): 
http://sourceforge.net/apps/phpbb/freenas/viewtopic.php?f=5&t=3966

This is highly unfortunate, since a fork is arbitrarily improbable,
and leaves me with just the options of NexentaStor, OpenSolaris or 
FreeBSD at the command line level, or a random Linux NAS with btrfs
(if and when it eventually goes production).

Not a happy day.



[zfs-discuss] FreeNAS 0.7 zfs performance

2009-11-30 Thread Eugen Leitl

Just as a random data point, I have about 80-100 MBit/s write
performance to a CIFS share on a 4x 1 TByte Seagate 7200.11 
system (all four drives on the same PCI SATA Adaptec at 1.5 GBit), 
2 GByte RAM, 2 GHz Athlon 64 with FreeNAS 0.7 (FreeBSD 7.2). This is raidz2.
Interface is GBit Ethernet (Intel NIC, PCI), jumbo frames (MTU 9000).
When scrub is in progress, write falls down to 30-40 MBit/s. 
Memory usage and CPU load during normal write is about 20-30%.

This is pretty bad; whether it is due to hardware issues
or to a poor zfs implementation in FreeBSD 7.2 is beyond my ken.
Here's hoping FreeBSD 8.0, which is just out and claims zfs is ready for
production, will do better.
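
A quick way to separate the pool itself from the Samba/CIFS and network path is a
local streaming write on the box (pool name and path are examples; note that
/dev/zero is misleading if compression is enabled):

  # raw streaming write into the pool, bypassing CIFS entirely
  dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=4096
  # in a second shell, watch per-disk activity while it runs
  zpool iostat -v tank 5

If the local numbers are fine and the CIFS numbers are not, the bottleneck is in
Samba or the network rather than in zfs itself.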



[zfs-discuss] FreeNAS 0.7 with zfs out

2009-11-10 Thread Eugen Leitl
'Dib)
Monday, 20 April 2009
Majors changes:

* Add another WOL patch. It is tested for nfe(4) and xl(4). Thanks to 
Tobias Reber.
* Add switch in 'System|Advanced' WebGUI to enable the console screensaver 
(FR 2777301).
* Upgrade Adaptec SCSI RAID administration tool to 6.10.18359.
* Add ability to enable or disable rc.conf variables configured via 
'System|Advanced|rc.conf'.
* Add danish WebGUI translation. Thanks to all translators.
* Add kernel patches to get ARTiGO A2000 hardware working. Thanks to David 
Davis for the patches.
* Add ability to use %d (date) and %h (hostname) in email subjects (e.g. 
Services|UPS) (FR 2796141).


Minors changes:

* Add 'MaxLoginAttempts' event to FTP ban list rules (FR 2777481).
* Add 'ClientConnectRate' event to FTP ban list rules.
* Allow selecting the key length of the cryptographic algorithm used to 
encrypt a disk (FR 2779692).
* Add system power control options to 'System|Advanced|rc.conf' (FR 
2784889).
* Show FTP transfer log in 'Diagnostics|Logs|FTP|Transfer' (FR 2785325).
* Add filechooser button to 'Home directory' editbox in 'Access|Users|Edit' 
WebGUI (FR 2790909).
* Sort various lists displayed in the WebGUI using a 'natural order' 
algorithm (FR 2481934). Thanks to Marion Desnault for the patch.


Bug fixes:

* It was not possible to configure multiple FTP ban list rules. Thanks to 
Michael Zoon.
* Modify Fuppes UPnP configuration to get PS3 with firmware 2.70 working 
again (BR 2782729).
* Editing existing config items in WebGUI will display incorrect data (e.g. 
'System|Advanced|rc.conf' or 'System|Advanced|sysctl.conf') (BR 2792956).
* Fix bug in WebGUI which is caused by unhandled special HTML characters 
used in various languages (BR 2793875).
* Set Quixplorer user permissions to 'View only' for security reasons, 
because Quixplorer does not respect system user permissions (BR 2798934).
* Disk temperature not detected correctly for SCSI devices (BR 2801565).
* Fix JPCERT/CC JVN#89791790 (Cross-site scripting vulnerability).




Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-28 Thread Eugen Leitl
On Wed, Oct 28, 2009 at 12:27:50PM -0400, David Magda wrote:

> The "problem" is that many of these units use 'embedded' processors, and
> (Open)Solaris does not readily run on many of them (e.g., PowerPC- and
> ARM-based SoCs). Though AFAIK, ReadyNAS actually runs (ran?) on SPARC
> (Leon), but used Linux nonetheless.

Embedded means many things these days. Is AMD's Geode embedded? 
Is Intel's Atom? 
 
> Perhaps as Intel and AMD build processors more suited to embedded /
> light-weight systems, Solaris and ZFS may be used in more situations.
> There's also FreeBSD, which also has ZFS and has been scaling up its

FreeNAS 0.7 final with zfs will be out Any Day Now. It may lag behind
OpenSolaris, but it is usable.

> support for embedded platforms (MIPS, ARM, PowerPC) recently. Not sure of
> the porting progress of OpenSolaris off-hand.



Re: [zfs-discuss] zfs code and fishworks "fork"

2009-10-28 Thread Eugen Leitl
On Wed, Oct 28, 2009 at 01:40:12PM +0800, "C. Bergström" wrote:

> >So use Nexenta?
> Got data you care about?
> 
> Verify extensively before you jump to that ship.. :)

So you're saying Nexenta has been known to drop bits on
the floor, unprovoked? Inquiring minds...



Re: [zfs-discuss] Stupid to have 2 disk raidz?

2009-10-15 Thread Eugen Leitl
On Thu, Oct 15, 2009 at 03:32:44PM -0700, Erik Trimble wrote:

> Expanding a RAIDZ (i.e. adding another disk for data, not parity) is a 
> constantly-asked-for feature.
> 
> It's decidedly non-trivial (frankly, I've been staring at the code for a 
> year now, trying to figure out how, and I'm just not up to the task).   
> The biggest issue is interrupted expansion - i.e. I've got code to do it 
> (expansion), but it breaks all over the place when I interrupt the 
> expansion - horrible pool corruption all the time.  And I think that's 
> the big problem - how to do the expansion in stages, while keeping the 
> pool active.  At this point, the only way I can get it to work is to 
> offline (ie export) the whole pool, and then pray that nothing 
> interrupts the expansion process.

Does anyone know how Drobo does it?



Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-10-01 Thread Eugen Leitl
On Wed, Sep 30, 2009 at 05:03:21PM -0700, Brandon High wrote:

> Supermicro has a 3 x 5.25" bay rack that holds 5 x 3.5" drives. This
> doesn't leave space for a optical drive, but I used a USB drive to
> install the OS and don't need it anymore.

I've had such a bay rack for years, and it survived one big tower,
and is now dwelling in a cheap Sharkoon case. The fan is a bit noisy,
but then, the server is behind a couple of doors, and serves the
house LAN. It's currently running Linux, but already has FreeNAS
preinstalled on an IDE DOM.



[zfs-discuss] poor man's Drobo on FreeNAS

2009-09-30 Thread Eugen Leitl

Somewhat hairy, but interesting. FYI.

https://sourceforge.net/apps/phpbb/freenas/viewtopic.php?f=97&t=1902



Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-29 Thread Eugen Leitl
On Tue, Sep 29, 2009 at 07:28:13AM -0400, rwali...@washdcmail.com wrote:

> I agree completely with the ECC.  It's for home use, so the power  
> supply issue isn't huge (though if it's possible that's a plus).  My  
> concern with this particular option is noise.  It will be in a closet,  
> but one with louvered doors right off a room where people watch TV.   
> Anything particularly loud would be an issue.  The comments on Newegg  
> make this sound pretty loud.  Have you tried one outside of a server  
> room environment?

No, basically all rackmount gear (especially 1-2 rack units high) which 
dissipates nontrivial power is loud, since it has to maintain air flow, 
which at small geometries means high-rpm, high-pitched fans. I've
just hauled a 3U Supermicro chassis from my office, where it had
been running for a couple of weeks, into a server room, where such
systems belong.

The only way to deal with that is watercooling, or operating the thing
out of your earshot (cellar, etc). 

There are large enclosures with large, slow-moving fans which are
suitable for the living room, but I doubt you can miss 16-24 drives
in action. 



Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-28 Thread Eugen Leitl
On Mon, Sep 28, 2009 at 06:04:01PM -0400, Thomas Burgess wrote:
> personally i like this case:
> 
> 
> http://www.newegg.com/Product/Product.aspx?Item=N82E16811219021
> 
> it's got 20 hot swap bays, and it's surprisingly well built.  For the money,
> it's an amazing deal.

You don't like http://www.supermicro.com/products/nfo/chassis_storage.cfm ?
I must admit I don't have a price list of these.

When running that many hard drives I would insist on redundant
power supplies, and server motherboards with ECC memory. Unless
it's for home use, where a downtime of days or weeks is not critical.



Re: [zfs-discuss] Collecting hardware configurations (was Re: White box server for OpenSolaris)

2009-09-25 Thread Eugen Leitl
On Fri, Sep 25, 2009 at 10:18:15AM +0100, Tim Foster wrote:

> I don't have enough experience myself in terms of knowing what's the
> best hardware on the market, but from time to time, I do think about
> upgrading my system at home, and would really appreciate a
> zfs-community-recommended configuration to use.
> 
> Any takers?

I'm willing to contribute (zfs on OpenSolaris, mostly Supermicro
boxes and FreeNAS (FreeBSD 7.2, next 8.x probably)). Is there a 
wiki for that somewhere?



Re: [zfs-discuss] RAIDZ versus mirrored

2009-09-17 Thread Eugen Leitl
On Thu, Sep 17, 2009 at 12:55:35PM +0200, Tomas Ögren wrote:

> It's not a fixed value per technology, it depends on the number of disks
> per group. RAID5/RAIDZ1 "loses" 1 disk worth to parity per group.
> RAID6/RAIDZ2 loses 2 disks. RAIDZ3 loses 3 disks. RAID1/mirror loses
> half the disks. So in your 14 drive case, if you go for one big
> raid6/raidz2 setup (which is larger than recommended for performance

I presume for 24 disks (my next project, the current 16-disk 
one had to be converted to CentOS for software compatibility reasons) 
you would recommend splitting them into two groups of 12 disks each. 
With raidz3, there would be 9 data disks left per group, 18 total -- 
36 TBytes effective with 2 TByte WD RE4 drives, half that 
with WD Caviar Black. How many hot spares should I leave in 
each pool, one or more? 

Is it safe to stripe over two such 12-disk pools? 
Or is mirror the right thing to do, regardless of drive costs?
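
For concreteness, the two-group raidz3 layout described above would be built
along these lines (device names are hypothetical):

  zpool create tank \
      raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0  c1t5d0 \
             c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 \
      raidz3 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0  c2t5d0 \
             c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0
  # hot spares, shared by both groups, can be added separately:
  zpool add tank spare c3t0d0

Striping over the two groups is simply what ZFS does with two top-level vdevs;
there is nothing unsafe about it as such, as long as each vdev can survive on
its own parity.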

Speaking of which, does anyone use NFSv4 clustering in production
to aggregate individual zfs boxes? Experiences good/bad?

> reasons), you will lose 2 disks worth of storage to parity leaving 12
> disks worth of data. With raid10 you will lose half, 7 disks to
> parity/redundancy. With two raidz2 sets, you will get (5+2)+(5+2), that
> is 5+5 disks worth of storage and 2+2 disks worth of redundancy. The
> actual redudancy/parity is spread over all disks, not like raid3 which
> has a dedicated parity disk.

So raidz3 has a dedicated parity disk? I couldn't see that from
skimming http://blogs.sun.com/ahl/entry/triple_parity_raid_z
 
> For more info, see for example http://en.wikipedia.org/wiki/RAID

Unfortunately, this is very thin on zfs.

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
is very helpful, but it doesn't offer concrete layout examples for
odd numbers of disks (understandable, since Sun has to sell the 
Thumper), and is pretty mum on raidz3.

Thank you. This list is fun, and helpful.



Re: [zfs-discuss] RAIDZ versus mirrored

2009-09-17 Thread Eugen Leitl
On Wed, Sep 16, 2009 at 10:23:01AM -0700, Richard Elling wrote:

> This line of reasoning doesn't get you very far.  It is much better to  
> take a look at
> the mean time to data loss (MTTDL) for the various configurations.  I  
> wrote a
> series of blogs to show how this is done.
> http://blogs.sun.com/relling/tags/mttdl

Excellent information, thanks! I presume MTTDL[1] and
MTTDL[2] are the same as in 
http://blogs.sun.com/relling/entry/a_story_of_two_mttdl 

Do you think it would be possible to publish the same information
for 24 drives (not all of us can buy a Thumper), and maybe 
include raidz3 in the number crunching?
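
For a rough feel for the numbers, the classic whole-disk-failure approximation
(which is what I take MTTDL[1] to be, ignoring unrecoverable read errors) for a
P-parity group of N disks is MTTF^(P+1) / (N*(N-1)*...*(N-P)*MTTR^P); a quick
sketch with assumed figures for a 12-disk raidz2 group:

  # assumed: 1,000,000 h MTTF per drive, 24 h resilver time
  awk 'BEGIN {
      mttf = 1e6; mttr = 24; n = 12
      mttdl = mttf^3 / (n * (n-1) * (n-2) * mttr^2)
      printf "raidz2, 12 disks: ~%.2e hours (~%.0f years)\n", mttdl, mttdl / 8760
  }'

MTTDL[2], as I understand it, additionally accounts for unrecoverable read errors
hit during the resilver, which is what drags the numbers for large SATA drives
back down to earth.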

Thanks!



Re: [zfs-discuss] RAIDZ versus mirrored

2009-09-17 Thread Eugen Leitl
On Wed, Sep 16, 2009 at 08:02:35PM +0300, Markus Kovero wrote:

> It's possible to do 3-way (or more) mirrors too, so you may achieve better 
> redundancy than raidz2/3

I understand there's almost no additional performance penalty to raidz3 
over raidz2 in terms of CPU load. Is that correct?

So, according to some recent posts on this list, SSDs for ZIL/L2ARC don't
bring that much when used with raidz2/raidz3, at least if I write a lot
and don't hit the cache very much.

How much drive space am I losing with mirrored pools versus raidz3? IIRC
in RAID 10 it's only 10% over RAID 6, which is why I went for RAID 10 in
my 14-drive SATA (WD RE4) setup.

Let's assume I want to fill a 24-drive Supermicro chassis with 1 TByte
WD Caviar Black or 2 TByte RE4 drives, and use 4x X25-M 80 GByte
2nd gen Intel consumer drives, mirrored, each pair as ZIL/L2ARC
for the 24 SATA drives behind them. Let's assume CPU is not an issue,
with dual-socket Nehalems and 24 GByte RAM or more. There are applications
packaged in Solaris containers running on the same box, however.

Let's say the workload is mostly multiple streams (hundreds to thousands
simultaneously, some continuous, some bursty) each writing data 
to the storage system. However, some few clients will be using database-like
queries to read, potentially on the entire data store.

With the above workload, is raidz2/raidz3 right out, and will I need mirrored
pools? 

How would you lay out the pools for the above workload, assuming 24 SATA
drives/chassis (24-48 TBytes raw storage), and 80 GByte SSD each for ZIL/L2ARC 
(is that too little?  Would 160 GByte work better?)
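
Not an answer to the layout question, but for comparison the mirrored variant
with the four SSDs in front might look roughly like this (hypothetical device
names; cache devices cannot be mirrored, so the two L2ARC SSDs are simply both
added):

  zpool create tank \
      mirror c1t0d0 c1t1d0  mirror c1t2d0 c1t3d0  mirror c1t4d0 c1t5d0 \
      mirror c1t6d0 c1t7d0  mirror c2t0d0 c2t1d0  mirror c2t2d0 c2t3d0 \
      mirror c2t4d0 c2t5d0  mirror c2t6d0 c2t7d0  mirror c3t0d0 c3t1d0 \
      mirror c3t2d0 c3t3d0  mirror c3t4d0 c3t5d0  mirror c3t6d0 c3t7d0 \
      log mirror c4t0d0 c4t1d0 \
      cache c4t2d0 c4t3d0

That gives twelve striped mirror pairs for the random reads, a mirrored slog for
the synchronous writes, and the remaining two SSDs as (unmirrored) L2ARC.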

Thanks lots.
 


Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Eugen Leitl
On Thu, Sep 10, 2009 at 11:54:16AM -0700, Chris Du wrote:
> Why do you need 3x LSI SAS3081E-R? The backplane has an LSI SAS x36 expander, so 
> you only need 1x 3081E. If you want multipathing, you need the E2 model.

Can you use SATA drives with expanders at all? (I have to stick
to enterprise/nearline SATA (100 EUR/TByte vs. 60 EUR/TByte
consumer SATA) for cost reasons).

Also, won't you get oversubscription at such large disk populations
on a single host adapter? I was thinking 8x SATA on an 8-lane
PCI Express (what, about 8*200 MByte/s nominal bandwidth?) 
was a more conservative setting.
 
> Second, I'd say use Seagate ES 2 1TB SAS disk especially if you want 
> multipathing. I believe E2 only supports SAS disks.
> 
> I have Supermicro 936E1 (LSI SAS X28 expander) as diskshelf and 

What is the advantage of using external disk expanders?
They use up more rack height units and add hardware expense
and cabling hassle, with very little to show for it, IMHO.

Supermicro makes 24-drive chassis with redundant power
supplies which can take server motherboards with enough
PCI Express slots and CPU power to serve them. If you need
more storage, a cluster file system (e.g. pNFS aka NFS 4.1
or PVFS2) can be used to build up nodes. Granted, you'll
probably need InfiniBand in order to make optimal use 
of it within the cluster as even channel-bonded GBit
Ethernet will peak at 480 MBytes/s, or so.

> LSI 3080X on head unit, Intel X25-E as ZIL, works like charm. 
> Your setup is very well supported by Solaris.

Thank you for the confirmation.

> For motherboard, my Supermicro X8SAX and X8ST3 both work well with Solaris. 
> You may want a dual-proc board that supports more memory. ECC is a given on 
> i7-based boards when using Xeon.

Given that this is a hybrid application (application in
Solaris containers accessing zfs pool on the same machine)
I realize ECC is very important.



Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-10 Thread Eugen Leitl
On Thu, Sep 10, 2009 at 01:11:49PM -0400, Eric Sproul wrote:

> I would not use the Caviar Black drives, regardless of TLER settings.  The RE3
> or RE4 drives would be a better choice, since they also have better vibration
> tolerance.  This will be a significant factor in a chassis with 20 spinning 
> drives.

Yes, I'm aware of the issue, and am using 16x RE4 drives in my current 
box right now (which I unfortunately had to convert to CentOS 5.3 for Oracle/
custom software compatibility reasons). I've had very bad experiences
with Seagate 7200.11 in RAID in the past.

Thanks for your advice against Caviar Black. 
 
> > Do you think above is a sensible choice? 
> 
> All your other choices seem good.  I've used a lot of Supermicro gear with 
> good
> results.  The very leading-edge hardware is sometimes not supported, but

I've been using 
http://www.supermicro.com/products/motherboard/QPI/5500/X8DAi.cfm
in above box.

> anything that's been out for a while should work fine.  I presume you're going
> for an Intel Xeon solution-- the peripherals on those boards a a bit better
> supported than the AMD stuff, but even the AMD boards work well.

Yes, dual-socket quadcore Xeon.


