[zfs-discuss] External Backup with Time Slider Manager

2011-06-24 Thread Alex
Hi all,

I'm trying to understand the "external backup" feature of Time Slider Manager 
in Solaris 11 Express. In the window, I have "Replicate backups to an external 
drive" checked. The "Backup Device" is the mount point of my backup drive. In 
"File Systems To Back Up," "Select" and "Replicate" are checked for everything 
except the backup filesystem.

After having set this a few times, only two filesystems have been backed up: my 
root fs, and an fs for one of my zones. These are located at 
"/backup/TIMESLIDER/hostname/fsname".

I let it sit for a few weeks and checked again, thinking maybe it would start 
with a weekly snapshot or something, but this does not seem to be the case. 
Restarting the time-slider and auto-snapshot processes doesn't seem to do 
anything either and there is no relevant info in any of the relevant SMF log 
files.
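
Locating and tailing an SMF service's log is generic, so for anyone who wants to 
look over my shoulder, a minimal sketch (nothing Time-Slider-specific here) is:

# svcs -xv svc:/application/time-slider:default            # any diagnosis or dependency problems?
# svcs -L svc:/application/time-slider/plugin:rsync        # prints the path of the plugin's log file
# tail -50 `svcs -L svc:/application/time-slider/plugin:rsync`

The auto-snapshot instances listed below can be checked the same way.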

My understanding is that this feature is supposed to replicate via rsync all 
Time Slider snapshots for all filesystems that have been selected to replicate, 
to the backup drive. I wonder if this understanding is incorrect, or if I'm 
doing something wrong.

The relevant time-slider services appear as follows:

online Jun_11   svc:/application/time-slider:default
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:daily
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:monthly
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:frequent
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:weekly
online Jun_11   svc:/system/filesystem/zfs/auto-snapshot:hourly
online Jun_11   svc:/application/time-slider/plugin:rsync

The capacity of the backup drive is 928GB; the total capacity of all 
filesystems to back up is 1161GB; however, only 410GB are used.

Thanks,
Alex
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Question: adding a single drive to a mirrored zpool

2011-06-24 Thread alex stun
Hello,
I have a zpool consisting of several mirrored vdevs. I was in the middle of 
adding another mirrored vdev today, but found out one of the new drives is bad. 
I will be receiving the replacement drive in a few days. In the mean time, I 
need the additional storage on my zpool.

Is the command to add a single drive to a mirrored zpool:
zpool add -f tank drive1?

Does the -f flag cause any issues?
I realize that there will be no redundancy on that drive for a few days, and I 
can live with that as long as the rest of my zpool remains intact.
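
For concreteness, the sequence I have in mind looks like this (device names are 
placeholders, and my understanding is that -f is only needed because the lone 
disk doesn't match the pool's mirrored redundancy):

# zpool add -f tank c9t9d0             # adds a single-disk, non-redundant top-level vdev
# ...replacement disk arrives...
# zpool attach tank c9t9d0 c9t10d0     # converts that vdev into a mirror
# zpool status tank                    # watch the resilver complete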

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-24 Thread Alex Dolski
Sure enough Cindy, the eSATA cables had been crossed. I exported, powered off, 
reversed the cables, booted, imported, and the pool is currently resilvering 
with both c5t0d0 & c5t1d0 present in the mirror. :) Thank you!!
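
For the archives, the sequence was essentially this (a rough sketch from memory):

# zpool export tank
  (power off, swap the eSATA cables back, boot)
# zpool import tank
# zpool status tank        # resilver in progress, both devices now shown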

Alex



On May 24, 2011, at 9:58 AM, Cindy Swearingen wrote:

> Hi Alex,
> 
> If the hardware and cables were moved around then this is probably
> the root cause of your problem. You should see if you can move the
> devices/cabling back to what they were before the move.
> 
> The zpool history output provides the original device name, which
> isn't c5t1d0, either:
> 
> # zpool create tank c13t0d0
> 
> You might grep the zpool history output to find out which disk was
> eventually attached, like this:
> 
> # zpool history | grep attach
> 
> But it's clear from the zdb -l output that the devid for this
> particular device changed, which we've seen happen on some hardware. If
> the devid persists, ZFS can follow the devid of the device even if its
> physical path changes and is able to recover more gracefully.
> 
> If you continue to use this hardware for your storage pool, you should
> export the pool before making any kind of hardware change.
> 
> Thanks,
> 
> Cindy
> 
> 
> On 05/21/11 18:05, Alex Dolski wrote:
>> Hi Cindy,
>> Thanks for the advice. This is just a little old Gateway PC provisioned as 
>> an informal workgroup server. The main storage is two SATA drives in an 
>> external enclosure, connected to a Sil3132 PCIe eSATA controller. The OS is 
>> snv_134b, upgraded from snv_111a.
>> I can't identify a cause in particular. The box has been running for several 
>> months without much oversight. It's possible that the two eSATA cables got 
>> reconnected to different ports after a recent move.
>> The backup has been made and I will try the export & import, per your advice 
>> (if zpool command works - it does again at the moment, no reboot!). I will 
>> also try switching the eSATA cables to opposite ports.
>> Thanks,
>> Alex
>> Command output follows:
>> # format
>> Searching for disks...done
>> AVAILABLE DISK SELECTIONS:
>>   0. c5t1d0 
>>  /pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0
>>   1. c8d0 
>>  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
>>   2. c9d0 
>>  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
>>   3. c11t0d0 
>>  /pci@0,0/pci107b,5058@1a,7/storage@1/disk@0,0
>> # zpool history tank
>> History for 'tank':
>> 2010-06-18.15:14:16 zpool create tank c13t0d0
>> 2011-05-07.02:00:07 zpool scrub tank
>> 2011-05-14.02:00:08 zpool scrub tank
>> 2011-05-21.02:00:12 zpool scrub tank
>> <remaining zpool history output omitted>
>> # zdb -l /dev/dsk/c5t1d0s0
>> 
>> LABEL 0
>> 
>>version: 14
>>name: 'tank'
>>state: 0
>>txg: 3374337
>>pool_guid: 6242690959503408617
>>hostid: 8697169
>>hostname: 'wdssandbox'
>>top_guid: 17982590661103377266
>>guid: 1717308203478351258
>>vdev_children: 1
>>vdev_tree:
>>type: 'mirror'
>>id: 0
>>guid: 17982590661103377266
>>whole_disk: 0
>>metaslab_array: 23
>>metaslab_shift: 32
>>ashift: 9
>>asize: 500094468096
>>is_log: 0
>>children[0]:
>>type: 'disk'
>>id: 0
>>guid: 1717308203478351258
>>path: '/dev/dsk/c5t1d0s0'
>>devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a'
>>phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
>>whole_disk: 1
>>DTL: 27
>>children[1]:
>>type: 'disk'
>>id: 1
>>guid: 9267693216478869057
>>path: '/dev/dsk/c5t1d0s0'
>>devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1769949/a'
>>phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
>>whole_disk: 1
>>DTL: 893
>> 
>> LABEL 1
>> 
>>version: 14
>>name: 'tank'
>>state: 0
>>txg: 3374337
>>pool_guid: 6242690959503408617
>>hostid: 8697169
>>hostname: 'wdssandbox'

Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-21 Thread Alex Dolski
Hi Cindy,

Thanks for the advice. This is just a little old Gateway PC provisioned as an 
informal workgroup server. The main storage is two SATA drives in an external 
enclosure, connected to a Sil3132 PCIe eSATA controller. The OS is snv_134b, 
upgraded from snv_111a.

I can't identify a cause in particular. The box has been running for several 
months without much oversight. It's possible that the two eSATA cables got 
reconnected to different ports after a recent move.

The backup has been made and I will try the export & import, per your advice 
(if zpool command works - it does again at the moment, no reboot!). I will also 
try switching the eSATA cables to opposite ports.

Thanks,
Alex


Command output follows:

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
   0. c5t1d0 
  /pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0
   1. c8d0 
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c9d0 
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   3. c11t0d0 
  /pci@0,0/pci107b,5058@1a,7/storage@1/disk@0,0


# zpool history tank
History for 'tank':
2010-06-18.15:14:16 zpool create tank c13t0d0
2011-05-07.02:00:07 zpool scrub tank
2011-05-14.02:00:08 zpool scrub tank
2011-05-21.02:00:12 zpool scrub tank



# zdb -l /dev/dsk/c5t1d0s0

LABEL 0

version: 14
name: 'tank'
state: 0
txg: 3374337
pool_guid: 6242690959503408617
hostid: 8697169
hostname: 'wdssandbox'
top_guid: 17982590661103377266
guid: 1717308203478351258
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 17982590661103377266
whole_disk: 0
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 500094468096
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 1717308203478351258
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 27
children[1]:
type: 'disk'
id: 1
guid: 9267693216478869057
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1769949/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 893

LABEL 1

version: 14
name: 'tank'
state: 0
txg: 3374337
pool_guid: 6242690959503408617
hostid: 8697169
hostname: 'wdssandbox'
top_guid: 17982590661103377266
guid: 1717308203478351258
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 17982590661103377266
whole_disk: 0
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 500094468096
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 1717308203478351258
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 27
children[1]:
type: 'disk'
id: 1
guid: 9267693216478869057
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1769949/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 893

LABEL 2

version: 14
name: 'tank'
state: 0
txg: 3374337
pool_guid: 6242690959503408617
hostid: 8697169
hostname: 'wdssandbox'
top_guid: 17982590661103377266
guid: 1717308203478351258
vdev_children: 1
vdev_tree:
type: 'mirror'
id: 0
guid: 17982590661103377266
whole_disk: 0
metaslab_array: 23
metaslab_shift: 32
ashift: 9
asize: 500094468096
is_log: 0
children[0]:
type: 'disk'
id: 0
guid: 1717308203478351258
path: '/dev/dsk/c5t1d0s0'
devid: 'id1,sd@SATA_WDC_WD5000AAKS-0_WD-WCAWF1939879/a'
phys_path: '/pci@0,0/pci8086,2845@1c,3/pci1095,3132@0/disk@1,0:a'
whole_disk: 1
DTL: 27
 

[zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-19 Thread Alex
I thought this was interesting - it looks like we have a failing drive in our 
mirror, but the two device nodes in the mirror are the same:

  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: scrub completed after 1h9m with 0 errors on Sat May 14 03:09:45 2011
config:

NAMESTATE READ WRITE CKSUM
tankDEGRADED 0 0 0
  mirror-0  DEGRADED 0 0 0
c5t1d0  ONLINE   0 0 0
c5t1d0  FAULTED  0 0 0  corrupted data

c5t1d0 does indeed only appear once in the "format" list. I wonder how to go 
about correcting this if I can't uniquely identify the failing drive.

"format" takes forever to spill its guts, and the zpool commands all hang.. 
clearly there is hardware error here, probably causing that, but not sure how 
to identify which disk to pull.
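
One thing I'm wondering about (just a sketch, and the GUID below is a 
placeholder): the labels carry a separate guid and devid (which includes the 
drive serial) for each side of the mirror, so perhaps the faulted half can be 
addressed by its vdev GUID rather than by the ambiguous device node:

# zdb -l /dev/dsk/c5t1d0s0           # note each child's 'guid' and 'devid' (serial number)
# zpool detach tank 1234567890123    # placeholder: the guid of the faulted child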
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS back on Mac OS X

2011-03-15 Thread Alex Blewitt
MacZFS has been running on OSX since Apple dropped the ball, but only up to 
onnv_74 for the stable branch. 

Alex

Sent from my iPhone 4

On 15 Mar 2011, at 15:21, Jerry Kemp  wrote:

> FYI.
> 
> This came across a Mac OS X server list that I am subscribed to.
> 
> Jerry
> 
> 
> 
> +
> Don Brady, former senior Apple engineer has started a company to bring
> what appears to be a commercially supported version of ZFS to OS X.
> 
> Ten's Complement: http://info.tenscomplement.com/
> 
> Details are sparse, but they have a twitter feed and email newsletter.
> When I signed up I got a invitation to join the beta program.
> 
> From his twitter (http://twitter.com/#!/tenscomplement):
> 
> $ uname -prs
> Darwin 10.6.0 i386
> $ zpool upgrade
> This system is currently running ZFS pool version 28.
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS root clone problem

2011-01-28 Thread alex bartonek
Hey Cindy...

wanted to post up on here since you've been helping me in email (which I 
greatly appreciate!).


I figured it out. I've done the 'dd' thing before, etc. I got it all the way 
to where it was complaining that it cannot use an EFI-labeled drive. When I did 
a prtvtoc | fmthard on the drive, I was never able to change it to an SMI label. 
So I went in there, changed the cylinder info, relabeled, changed it back, 
relabeled... and voila, now I can mirror again!!
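
(For anyone hitting the same wall later, the label-copy step I'm referring to 
is the usual one-liner - the slice numbers are just the ones from my setup:)

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2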

Thank you for taking the time to personally email me with my issue.

-Alex
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS root clone problem

2011-01-28 Thread alex bartonek
(for some reason I cannot find  my original thread..so I'm reposting it)

I am trying to move my data off of a 40gb 3.5" drive to a 40gb 2.5" drive.  
This is in a Netra running Solaris 10.

Originally what I did was:

zpool attach -f rpool c0t0d0 c0t2d0.

Then I did an installboot on c0t2d0s0.
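
(For a SPARC box like the Netra, the bootblock step was of roughly this form, 
per the ZFS docs:)

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0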

Didn't work.  I was not able to boot from my second drive (c0t2d0).

I cannot remember my other commands but I ended up removing c0t2d0 from my 
pool.  So here is how it looks now:

# zpool status -v
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c0t0d0s0  ONLINE   0 0 0

zfs list shows no other drive connected to the pool.

I am trying to redo this to see where I went wrong but I get the following 
error:
zpool attach -f rpool c0t0d0 c0t2d0


# zpool attach -f rpool c0t0d0 c0t2d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t2d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
/dev/dsk/c0t2d0s2 is part of active ZFS pool rpool. Please see zpool(1M).


How can I remove c0t2d0 from the pool?
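
What I suspect is needed is wiping the stale ZFS labels off c0t2d0 by hand. A 
rough sketch only - ZFS keeps two 256K labels at the front of the device and two 
at the end, so zeroing a little at each end should clear them; please 
sanity-check the numbers before pointing dd at anything:

# SECTORS=`prtvtoc /dev/rdsk/c0t2d0s0 | nawk '$1 == "0" {print $5}'`   # slice 0 size in 512-byte sectors
# dd if=/dev/zero of=/dev/rdsk/c0t2d0s0 bs=512 count=2048              # clear the front labels
# dd if=/dev/zero of=/dev/rdsk/c0t2d0s0 bs=512 count=2048 oseek=`expr $SECTORS - 2048`   # clear the end labels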
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-11 Thread Alex Blewitt


On Dec 11, 2010, at 14:15, Frank Van Damme wrote:


> 2010/12/10 Freddie Cash:
>> On Fri, Dec 10, 2010 at 5:31 AM, Edward Ned Harvey wrote:
>>> It's been a while since I last heard anybody say anything about this.
>>> What's the latest version of publicly released ZFS?  Has oracle made it
>>> closed-source moving forward?
>>>
>>> Nexenta ... openindiana ... etc ... Are they all screwed?
>>
>> ZFSv28 is available for FreeBSD 9-CURRENT.
>>
>> We won't know until after Oracle releases Solaris 11 whether or not
>> they'll live up to their promise to open the source to ZFSv31.  Until
>> Solaris 11 is released, there's really not much point in debating it.
>
> And if they don't, it will be Sad, both in terms of useful code not
> being available to a wide community to review and amend, and in terms
> of Oracle not really getting the point about open source development.


I think it's a known fact that Oracle hasn't got the point of open  
source development. Forks ahoy!


http://www.jroller.com/niclas/entry/apache_leaves_jcp_ec
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance issues with iSCSI under Linux [SEC=UNCLASSIFIED]

2010-10-14 Thread Wilkinson, Alex

0n Thu, Oct 14, 2010 at 09:54:09PM -0400, Edward Ned Harvey wrote: 

>If you happen to find that MegaCLI is the right tool for your hardware, let
>me know, and I'll paste a few commands here, which will simplify your life.
>When I first started using it, I found it terribly cumbersome.  But now I've
>gotten used to it, and MegaCLI commands just roll off the tongue.

can you paste them anyway ?

  -Alex

IMPORTANT: This email remains the property of the Department of Defence and is 
subject to the jurisdiction of section 70 of the Crimes Act 1914. If you have 
received this email in error, you are requested to contact the sender and 
delete the email.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-26 Thread Alex Blewitt
On 25 Sep 2010, at 19:56, Giovanni Tirloni  wrote:

> We have correctable memory errors on ECC systems on a monthly basis. It's not 
> if they'll happen but how often.

"DRAM Errors in the wild: a large-scale field study" is worth a read if you 
have time. 

http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf

Alex
(@alblue on Twitter)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iSCSI targets mapped to a VMWare ESX server

2010-09-07 Thread Alex Fler
check fler.us
Solaris 10 iSCSI Target for Vmware ESX
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Postmortem - file system recovered [SEC=UNCLASSIFIED]

2010-08-29 Thread Wilkinson, Alex

0n Sun, Aug 29, 2010 at 08:09:22PM -0700, Brian wrote: 

>The fix:
>"""the trick was to modify mode in in-kernel buffer containing 
znode_phys_t and then force ZFS to flush it out to disk."""

Can you give an example of how you did this ?

   -Alex

IMPORTANT: This email remains the property of the Department of Defence and is 
subject to the jurisdiction of section 70 of the Crimes Act 1914. If you have 
received this email in error, you are requested to contact the sender and 
delete the email.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirrored pool unimportable (FAULTED)

2010-08-28 Thread Alex Blewitt
On 28 Aug 2010, at 16:25, Norbert Harder  wrote:

> Later, since the development of the ZFS extension was discontinued ...

The MacZFS project lives on at Google Code and http://github.com/alblue/mac-zfs

Not that it helps if the data has already become corrupted. 

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slog and TRIM support [SEC=UNCLASSIFIED]

2010-08-25 Thread Wilkinson, Alex

0n Wed, Aug 25, 2010 at 02:54:42PM -0400, LaoTsao ?? wrote: 

>IMHO, U want -E for ZIL and -M for L2ARC

Why ?

   -Alex

IMPORTANT: This email remains the property of the Department of Defence and is 
subject to the jurisdiction of section 70 of the Crimes Act 1914. If you have 
received this email in error, you are requested to contact the sender and 
delete the email.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirrored raidz

2010-07-26 Thread Alex Blewitt
On 26 Jul 2010, at 19:51, Dav Banks  wrote:

> I wanted to test it as a backup solution. Maybe that's crazy in itself but I 
> want to try it.
> 
> Basically, once a week detach the 'backup' pool from the mirror, replace the 
> drives, add the new raidz to the mirror and let it resilver and sit for a 
> week.

Why not do it the other way around? Create a pool which consists of mirrored 
pairs (or triples) of drives. You don't need raidz to make it appear that the 
pool is bigger and it will use disks in the pool appropriately. If you want to 
have more copies of data, set copies=2 and zfs will try to schedule writes 
across different mirrored pairs. 
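
A sketch of the shape I mean (device names are invented):

# zpool create backuppool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# zfs create backuppool/data
# zfs set copies=2 backuppool/data     # extra per-block copies, spread across the mirrors where possible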

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS compression

2010-07-25 Thread Alex Blewitt
On 25 Jul 2010, at 14:12, Ben  wrote:

> I've read a small amount about compression, enough to find that it'll affect 
> performance (not a problem for me) and that once you enable compression it 
> only affects new files written to the file system.  

Yes, that's true. compression=on defaults to lzjb, which is fast; but gzip-9 can 
be twice as good. (I've just done some tests on the MacZFS port - see my blog for 
more info.)

> Is this still true of b134?  And if it is, how can I compress all of the 
> current data on the file system?  Do I have to move it off then back on?

Any changes to the filesystem only take effect on newly written/updated files. 
You could do a cp to force a rewrite, but in the interim it would take space for 
the old and new copies; furthermore, if you have snapshots, then even removing 
the old (uncompressed) files won't get the space back. 

If you destroy all snapshots, then do a cp/rm on a file by file basis you may 
be able to do an in-place compression. 
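
A sketch of the sort of thing I mean, with gzip-9 as the example level (the 
rewrite loop is illustrative only - try it on something unimportant first, and 
remember the snapshot caveat above):

# zfs set compression=gzip-9 tank/data
# cd /tank/data
# find . -type f -exec sh -c 'cp -p "$1" "$1.tmp" && mv "$1.tmp" "$1"' sh {} \;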

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-07-12 Thread Alex Krasnov
> From this output it appears as if Solaris, via the
> BIOS I presume, thinks it doesn't have ECC RAM,
> even though all the memory
> modules are indeed ECC modules.
> 
> Might be time to check (1) my current BIOS settings,
> even though I felt sure ECC was enabled in the BIOS
> already, and (2) check for a newer BIOS update. A
> pity, as the machine has been rock-solid so far, and
> I don't like changing stable BIOSes...

My apologies for resurrecting this thread, but I am curious whether you have 
had any success enabling ECC on your M2N-SLI machine, using either the BIOS or 
the setpci scripts. I am experiencing a similar issue with my M2N32-SLI 
machine. The BIOS reports that ECC is turned on, but smbios reports that it is 
turned off:

ID    SIZE TYPE
0 106  SMB_TYPE_BIOS (BIOS information)

  Vendor: Phoenix Technologies, LTD
  Version String: ASUS M2N32-SLI DELUXE ACPI BIOS Revision 2001
  Release Date: 05/19/2008
  Address Segment: 0xe000
  ROM Size: 1048576 bytes
  Image Size: 131072 bytes
  Characteristics: 0x7fcb9e80

ID    SIZE TYPE
6315   SMB_TYPE_MEMARRAY (physical memory array)

  Location: 3 (system board or motherboard)
  Use: 3 (system memory)
  ECC: 3 (none)
  Number of Slots/Sockets: 4
  Memory Error Data: Not Supported
  Max Capacity: 17179869184 bytes
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-09 Thread Alex Blewitt
On 9 Jul 2010, at 20:38, Garrett D'Amore wrote:

> On Fri, 2010-07-09 at 15:02 -0400, Miles Nordin wrote:
>>>>>>> "ab" == Alex Blewitt  writes:
>> 
>>ab> All Mac Minis have FireWire - the new ones have FW800.
>> 
>> I tried attaching just two disks to a ZFS host using firewire, and it
>> worked very badly for me.  I found:
>> 
>> 1. The solaris firewire stack isn't as good as the Mac OS one.
> 
> Indeed.  There has been some improvement here in the past year or two,
> but I still wouldn't deem it ready for serious production work.

That may be true for Solaris; but not so for Mac OS X. And after all, that's 
what I'm working to get ZFS on.

>> 3. The quality of software inside the firewire cases varies wildly
>>and is a big source of stability problems.  (even on mac)

It would be good if you could refrain from spreading FUD if you don't have 
experience with it. I have used FW400 and FW800 on Mac systems for the last 8 
years; the only problem was with the Oxford 911 chipset in OSX 10.1 days. Since 
then, I've not experienced any issues to do with the bus itself. 

It may not suit everyone's needs, and it may not be supported well on 
OpenSolaris, but it works fine on a Mac.

Alex

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Legality and the future of zfs...

2010-07-09 Thread Alex Blewitt
On 9 Jul 2010, at 08:55, James Van Artsdalen  wrote:

>> On Thu, 8 Jul 2010, Edward Ned Harvey wrote:
>> Yep.  Provided it supported ZFS, a Mac Mini makes for
>> a compelling SOHO server.
> 
> Warning: a Mac Mini does not have eSATA ports for external storage.  It's 
> dangerous to use USB for external storage since many (most? all?) USB->SATA 
> chips discard SYNC instead of passing FLUSH to the drive - very bad for ZFS.

All Mac Minis have FireWire - the new ones have FW800. In any case, the 
server-class mini has two internal hard drives, which makes it amenable to mirroring. 

The Mac ZFS port limps on in any case - though I've not managed to spend much 
time on it recently, I have been making progress this week. 

The Google code project is at http://code.google.com/p/maczfs/ and my Github is 
at http://github.com/alblue/ for those that are interested. 

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Alex Blewitt

On Jun 11, 2010, at 11:03, Joerg Schilling wrote:


Alex Blewitt  wrote:


On Jun 11, 2010, at 10:43, Joerg Schilling wrote:


Jason King  wrote:


Well technically they could start with the GRUB zfs code, which is
GPL licensed, but I don't think that's the case.


As explained in depth in a previous posting, there is absolutely no legal
problem with putting the CDDLd original ZFS implementation into the Linux
kernel.


You are sadly mistaken.

From GNU.org on license compatibilities:

http://www.gnu.org/licenses/license-list.html


What you read there is completely wrong :-(

The FSF even knows that it is wrong as the FSF did never sue Veritas
for publishing a modified version of GNU tar that links against
close source libs from veritas.

The best you can do is to ignore it and to ask independent lawyers.

I encourage you to read my other post that explains in depth why the FSF
publishes incorrect claims.


There was nothing there other than fluff from a different website,
though. And your argument that "Look, it says it's Open Source here" means
they are compatible is not the generally held position of almost
everyone else who has looked into this.


The GPL doesn't prevent you doing things. However, it does withdraw
the agreement that you are permitted to copy someone else's work if
you do those things. So whilst one can compile and link code together,
you may not have the rights to use others' code without every
committer's individual agreement that you can copy their code.


The GPL doesn't prevent; it just withdraws rights - without which, you  
may be breaking copyright. And the GPL has been tested a number of  
times in court with regards to copyright violations where the GPL no  
longer covers you to do the same.


As an observation, the Eclipse Foundation lawyers have agreed that the  
GPL is incompatible with the EPL for the same reasons:


http://www.eclipse.org/legal/eplfaq.php#GPLCOMPATIBLE

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Native ZFS for Linux

2010-06-11 Thread Alex Blewitt

On Jun 11, 2010, at 10:43, Joerg Schilling wrote:


Jason King  wrote:

Well technically they could start with the GRUB zfs code, which is GPL
licensed, but I don't think that's the case.


As explained in depth in a previous posting, there is absolutely no legal
problem with putting the CDDLd original ZFS implementation into the Linux
kernel.


You are sadly mistaken.

From GNU.org on license compatibilities:

http://www.gnu.org/licenses/license-list.html

Common Development and Distribution License (CDDL), version 1.0
	This is a free software license. It has a copyleft with a scope  
that's similar to the one in the Mozilla Public License, which makes  
it incompatible with the GNU GPL. This means a module covered by the  
GPL and a module covered by the CDDL cannot legally be linked  
together. We urge you not to use the CDDL for this reason.


	Also unfortunate in the CDDL is its use of the term “intellectual  
property”.


Whether a license is classified as "Open Source" or not does not imply  
that all open source licenses are compatible with each other.


Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] why both dedup and compression?

2010-05-05 Thread Alex Blewitt
Dedup came much later than compression. Also, compression saves both  
space and therefore load time even when there's only one copy. It is  
especially good for e.g. HTML or man page documentation which tends to  
compress very well (versus binary formats like images or MP3s that  
don't).


It gives me an extra, say, 10g on my laptop's 80g SSD which isn't bad.

Alex

Sent from my (new) iPhone

On 6 May 2010, at 02:06, Richard Jahnel  wrote:


I've googled this for a bit, but can't seem to find the answer.

What does compression bring to the party that dedupe doesn't cover  
already?


Thank you for you patience and answers.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS mirrored boot disks

2010-04-23 Thread Alex
I was having this same problem with snv_134. I executed all the same commands 
as you did. The cloned disk booted up to the "Hostname:" line and then died. 
Booting with the "-kv" kernel option in GRUB, it died at a different point each 
time, most commonly after:

"srn0 is /pseudo/s...@0"

What's worse, my primary disk wouldn't boot either! I tried all manner of 
swapping disks in and out, unplugging & plugging certain disks, changing boot 
order in BIOS, etc. These are PATA disks and I tried changing master to slave, 
slave to master, booting with one drive but not the other, enabling/disabling 
DMA on the drives, etc.

But anyway, after my customary 8 hours of Googling, I found the fix:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6923585

Looks like I neglected to detach the mirror before removing it...
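
In other words, the step I skipped (device name here is a placeholder) was simply:

# zpool detach rpool c1t1d0s0      # detach the second half of the root mirror

before physically removing the drive.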
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-23 Thread Alex Blewitt

On 22 Apr 2010, at 20:50, Rich Teer  wrote:


On Thu, 22 Apr 2010, Alex Blewitt wrote:

Hi Alex,


For your information, the ZFS project lives (well, limps really) on
at http://code.google.com/p/mac-zfs. You can get ZFS for Snow Leopard
from there and we're working on moving forwards from the ancient pool
support to something more recent. I've relatively recently merged in
the onnv-gate repository (at build 72) which should make things  
easier

to track in the future.


That's good to hear!  I thought Apple yanking ZFS support from Mac  
OS was

a really dumb idea.  Do you work for Apple?


No, the entire effort is community based. Please feel free to join up  
to the mailing list from the project page if you're interested in ZFS  
on Mac OSX.


Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-22 Thread Alex Blewitt
Rich, Shawn,

> Of course, it probably doesn't help that Apple, in their infinite wisdom,
> canned native support for ZFS in Snow Leopard (idiots).

For your information, the ZFS project lives (well, limps really) on at 
http://code.google.com/p/mac-zfs. You can get ZFS for Snow Leopard from there 
and we're working on moving forwards from the ancient pool support to something 
more recent. I've relatively recently merged in the onnv-gate repository (at 
build 72) which should make things easier to track in the future.

> Ah.  The file systems I'm trying to use are locally attached to the server, 
> and
> shared via NFS.

What are the problems? I have read-write files over a (Mac-exported) ZFS share 
via NFS to Mac clients, and that has no problem at all. It's possible that it 
could be permissions related, especially if you're using NFSv4 - AFAIK the Mac 
client is an alpha stage of that on Snow Leopard. 

You could try listing the files (from OSX) with ls -l@e, which should show you 
all the extended attributes and ACLs to see if that's causing a problem.

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS/OSOL/Firewire...

2010-03-19 Thread Alex Blewitt
On 19 Mar 2010, at 15:30, Bob Friesenhahn wrote:

> On Fri, 19 Mar 2010, Khyron wrote:
>> Getting better FireWire performance on OpenSolaris would be nice though.
>> Darwin drivers are open...hmmm.
> 
> OS-X is only (legally) used on Apple hardware.  Has anyone considered that 
> since Firewire is important to Apple, they may have selected a particular 
> Firewire chip which performs particularly well?

Darwin is open-source.

http://www.opensource.apple.com/source/xnu/xnu-1486.2.11/
http://www.opensource.apple.com/source/IOFireWireFamily/IOFireWireFamily-417.4.0/

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] USB 3.0 possibilities

2010-02-22 Thread Alex Blewitt

On Feb 22, 2010, at 18:02, Richard Elling wrote:


On Feb 22, 2010, at 9:46 AM, A. Krijgsman wrote:

Today I received a commecial offer on some external USB 3.0 disk  
enclosure.
Since it was new to me I googled my way to wikipedia and found that  
the specs say

USB 3.0 should have a 5 Gbit speeds capability.
Could it be an interesting solution to build a very cheap storage  
area network?
( Ofcourse ZFS in the middle to manage the shares. ) Or is this  
wishfull (e.g. bad) thinking?


It all depends on if the USB disk honors cache flush commands.


'Cheap' is the keyword - you get what you pay for. I found with some  
USB drives that ZFS scrubs were showing hundreds of checksum errors,  
so I ditched the drives straight away. The problem is more the  
firmware rather than the interface itself; but you won't know until  
you find it.
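
A quick way to see whether a given enclosure is silently mangling data (pool 
name is a placeholder):

# zpool scrub usbpool
# zpool status -v usbpool      # non-zero CKSUM counts after the scrub are a bad sign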


On the other hand, I've not found drives with FireWire 800 support to  
have problems, and in any case, FW800 has a better real-world  
throughput than USB2. By the time you go much above it, you end up  
with a single drive's spindle being the bottleneck rather than the bus  
- though of course, multiple drives will start to fill up a bus anyway.


It's worth noting that USB leeches control from the host computer, so  
even if the bandwidth is there, the performance might not be for  
several competing drives on the same bus, regardless of how big the  
number is printed.


Here's some (old) analysis of ZFS and HFS on USB and firewire:

http://alblue.blogspot.com/2008/04/review-iomega-ultramax-and-hfz-vs-zfs.html

How much that translates over to USB 3, I don't know, but there's a  
difference between 'theoretical' and 'practical'. It would be good to  
see what kind of performance numbers you can come up with, or if you  
run into the same kind of problems with USB that existed for slower  
models. (Sadly, the planned FW3200 seems to have disappeared into a  
hole in the ground.)


Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS compression on Clearcase

2010-02-04 Thread Alex Blewitt
On 4 Feb 2010, at 16:35, Bob Friesenhahn wrote:

> On Thu, 4 Feb 2010, Darren J Moffat wrote:
>>> Thanks - IBM basically haven't test clearcase with ZFS compression 
>>> therefore, they don't support currently. Future may change, as such my 
>>> customer cannot use compression. I have asked IBM for roadmap info to find 
>>> whether/when it will be supported.
>> 
>> That is FUD generation in my opinion and being overly cautious.  The whole 
>> point of the POSIX interfaces to a filesystem is that applications don't 
>> actually care how the filesystem stores their data.
> 
> Clearcase itself implements a versioning filesystem so perhaps it is not 
> being overly cautious.  Compression could change aspects such as how free 
> space is reported.

I'd also like to echo Bob's observations here. Darren's FUDFUD is based on 
limited experience of ClearCase, I expect ...

On the client side, ClearCase actually presents itself as a mounted filesystem, 
regardless of what the OS has under the covers. In other words, a ClearCase 
directory will never be 'ZFS' because it's not ZFS, it's ClearCaseFS. On the 
server side (which might be the case here) the way ClearCase works is to 
represent the files and contents in a way more akin to a database (e.g. Oracle) 
than traditional file-system approaches to data (e.g. CVS, SVN). In much the 
same way there are app-specific issues with ZFS (e.g. matching block-sizes, 
dealing with ZFS snapshots on a VM image and so forth) there may well be some 
with ClearCase.

At the very least, though, IBM may just be unable/unwilling to test it at the 
time and put their stamp of approval on it. In many cases for IBM products, 
there are supported platforms (often with specific patch levels), much like 
there are officially supported Solaris platforms and hot-fixes to go for certain 
applications. They may well just be cautious in what they support until they've 
had time to test it out for themselves - or more likely, until the first set of 
paying customers wants to get invoiced for the investigation. But to claim it's 
FUD without any real data to back it up is just FUD^2.

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool fragmentation issues? (dovecot) [SEC=UNCLASSIFIED]

2010-01-14 Thread Wilkinson, Alex

0n Thu, Jan 14, 2010 at 08:43:06PM -0800, Michael Keller wrote: 

>> The best Mail Box to use under Dovecot for ZFS is
>> MailDir, each email is stored as an individual file.
>
>Can not agree on that. dbox is about 10x faster - at least if you have > 
>1 messages in one mailbox folder. That's not because of ZFS but dovecot 
>just handles dbox files (one for each message like maildir) better in 
>terms of indexing. 

Got a link to this magic dbox format ?

  -Alex

IMPORTANT: This email remains the property of the Australian Defence 
Organisation and is subject to the jurisdiction of section 70 of the CRIMES ACT 
1914.  If you have received this email in error, you are requested to contact 
the sender and delete the email.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-14 Thread Alex Lam S.L.
Very interesting product indeed!

Given the volume one of these cards take up inside the server though,
I couldn't help but think that 4GB is a bit on the low side.

Alex.



On Wed, Jan 13, 2010 at 5:51 PM, Christopher George
 wrote:
> The DDRdrive X1 OpenSolaris device driver is now complete,
> please join us in our first-ever ZFS Intent Log (ZIL) beta test
> program.  A select number of X1s are available for loan,
> preferred candidates would have a validation background
> and/or a true passion for torturing new hardware/driver :-)
>
> We are singularly focused on the ZIL device market, so a test
> environment bound by synchronous writes is required.  The
> beta program will provide extensive technical support and a
> unique opportunity to have direct interaction with the product
> designers.
>
> Would you like to take part in the advancement of Open
> Storage and explore the far-reaching potential of ZFS
> based Hybrid Storage Pools?
>
> If so, please send an inquiry to "zfs at ddrdrive dot com".
>
> The drive for speed,
>
> Christopher George
> Founder/CTO
> www.ddrdrive.com
>
> *** Special thanks goes out to SUN employees Garrett D'Amore and
> James McPherson for their exemplary help and support.  Well done!
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Tutorial slides from USENIX LISA09 [SEC=UNCLASSIFIED]

2010-01-07 Thread Wilkinson, Alex

0n Thu, Jan 07, 2010 at 10:49:50AM -0800, Richard Elling wrote: 

>I have posted my ZFS Tutorial slides from USENIX LISA09 on  
>slideshare.net.

>http://richardelling.blogspot.com/2010/01/zfs-tutorial-at-usenix-lisa09-slides.html

Is there a PDF available of this ?

  -Alex



IMPORTANT: This email remains the property of the Australian Defence 
Organisation and is subject to the jurisdiction of section 70 of the CRIMES ACT 
1914.  If you have received this email in error, you are requested to contact 
the sender and delete the email.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rethinking RaidZ and Record size [SEC=UNCLASSIFIED]

2010-01-06 Thread Wilkinson, Alex

0n Wed, Jan 06, 2010 at 11:00:49PM -0800, Richard Elling wrote: 

>On Jan 6, 2010, at 10:39 PM, Wilkinson, Alex wrote:
>>
>>0n Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote:
>>
>>> Rather, ZFS works very nicely with "hardware RAID" systems or JBODs
>>> iSCSI, et.al.  You can happily  add the
>>
>> Im not sure how ZFS works very nicely with say for example an EMC  
>> Cx310 array ?
>
>Why would ZFS be any different than other file systems on a Cx310?

Well, not specifically the filesystem but using ZFS as a volume manager.
Please see: 
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-April/028089.html

  -Alex


IMPORTANT: This email remains the property of the Australian Defence 
Organisation and is subject to the jurisdiction of section 70 of the CRIMES ACT 
1914.  If you have received this email in error, you are requested to contact 
the sender and delete the email.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] rethinking RaidZ and Record size [SEC=UNCLASSIFIED]

2010-01-06 Thread Wilkinson, Alex

0n Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote: 

>Rather, ZFS works very nicely with "hardware RAID" systems or JBODs
>iSCSI, et.al.  You can happily  add the

I'm not sure how ZFS works very nicely with say for example an EMC Cx310 array?

  -Alex

IMPORTANT: This email remains the property of the Australian Defence 
Organisation and is subject to the jurisdiction of section 70 of the CRIMES ACT 
1914.  If you have received this email in error, you are requested to contact 
the sender and delete the email.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving a pool from FreeBSD 8.0 to opensolaris

2009-12-24 Thread Alex Blewitt

On 24 Dec 2009, at 21:27, Mattias Pantzare  wrote:


An EFI label isn't "OS specific formatting"!


It is. Not all OS will read an EFI label.


You misunderstood the concept of OS specific, I feel. EFI is indeed OS
independent; however, that doesn't necessarily imply that all OSs can read
EFI disks. My Commodore 128D could boot CP/M but couldn't understand FAT32 -
that doesn't mean that therefore FAT32 isn't OS independent either.


On a PC EFI is very OS specific as most OS on that platform does not
support EFI.



Still false, I'm afraid. There is nothing OS specific about EFI,  
regardless of whether any given OS supports EFI or not. Nor does it  
need to be a "PC" - I have several Mac PPCs that can read EFI  
partitioned disks (as well as some Intel ones). These can also be read  
by other systems that understand EFI partitioned disks.


Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving a pool from FreeBSD 8.0 to opensolaris

2009-12-24 Thread Alex Blewitt

On 24 Dec 2009, at 10:33, Mattias Pantzare  wrote:


On Thu, Dec 24, 2009 at 04:36, Ian Collins  wrote:



An EFI label isn't "OS specific formatting"!


It is. Not all OS will read an EFI label.


You misunderstood the concept of OS specific, I feel. EFI is indeed OS  
independent; however, that doesn't necessarily imply that all OSs can  
read EFI disks. My Commodore 128D could boot CP/M but couldn't  
understand FAT32 - that doesn't mean that therefore FAT32 isn't OS  
independent either.


Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What can I get with 2x250Gb ?

2009-11-08 Thread Alex Blewitt
If you want any kind of data guarantee, you need to go for a mirrored
pool. If you don't want a data guarantee, you can create a single striped
(non-mirrored) pool of the two devices, which will give you ~500GB. The key
is in the 'zpool create' command:


zpool create twofifty mirror disk1 disk2
zpool create fivehundred disk1 disk2

Alex

On Nov 8, 2009, at 15:09, Wael Nasreddine (a.k.a eMxyzptlk) wrote:


Hello,

I'm sure this question has been asked many times already, but I  
couldn't find the answer myself. Anyway I have a laptop with 2  
identical hard disks 250Gb each, I'm currently using Linux on RAID0  
which gave me ~500Gb..


I'm planning to switch to FreeBSD but I want to know before I do,  
what can I get with these hard disks? do I get ~500Gb or less? can  
ZFS be setup to use RAIDz with only 2 hard disks ?


Thank you

--
Wael Nasreddine

Weem Chief-Development Officer - http://www.weem.com

Blog: http://wael.nasreddine.com
E-mail  : wael.nasredd...@weem.com
gTalk   : wael.nasredd...@gmail.com
Tel : +33.6.32.94.70.13
Skype   : eMxyzptlk
Twitter : @eMxyzptlk

PGP: 1024D/C8DD18A2 06F6 1622 4BC8 4CEB D724  DE12 5565 3945 C8DD 18A2

.: An infinite number of monkeys typing into GNU emacs,
  would never make a good program. (L. Torvalds 1995) :.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Location of ZFS documentation (source)?

2009-11-04 Thread Alex Blewitt

On 3 Nov 2009, at 14:48, Cindy Swearingen wrote:


Alex,

You can download the man page source files from this URL:

http://dlc.sun.com/osol/man/downloads/current/


FYI there's a couple of nits in the man pages:

* the zpool create synopsis hits the 80 char mark. Might be better to  
fit on several lines e.g.


   zpool create [-fn] [-o property=value] ...
   [-O file-system-property=value] ...
   [-m mountpoint] [-R root] pool vdev ...

* same for the zpool import
* There's a few bold open square brackets when there probably  
shouldn't be;

zfs.1m:\fB[\fB-ug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...]\fR
zfs.1m:\fB[\fB-e\fR] \fIperm\fR|@\fIsetname\fR[,...]\fR
zfs.1m:\fB[\fB-ld\fR] \fIfilesystem\fR|\fIvolume\fR\fR
zpool.1m:\fB[\fB-O\fR \fIfile-system-property=value\fR] ...\fR
->
zfs.1m:[\fB-ug\fR] "\fIeveryone\fR"|\fIuser\fR|\fIgroup\fR[,...]\fR
zfs.1m:[\fB-e\fR] \fIperm\fR|@\fIsetname\fR[,...]\fR
zfs.1m:[\fB-ld\fR] \fIfilesystem\fR|\fIvolume\fR\fR
zpool.1m:[\fB-O\fR \fIfile-system-property=value\fR] ...\fR

* zpool upgrade looks like it's an older message,
zpool.1m:This system is currently running ZFS version 2.
->
 # zpool upgrade -a
 This system is currently running ZFS pool version 8.

 All pools are formatted using this version.

This is confusing, especially since ZFS filesystem version 2 is the  
default:

# zfs upgrade
This system is currently running ZFS filesystem version 2.

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Location of ZFS documentation (source)?

2009-11-04 Thread Alex Blewitt
On Tue, Nov 3, 2009 at 2:48 PM, Cindy Swearingen
 wrote:
> Alex,
>
> You can download the man page source files from this URL:
>
> http://dlc.sun.com/osol/man/downloads/current/

Thanks, that's great.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Location of ZFS documentation (source)?

2009-11-02 Thread Alex Blewitt
The man pages documentation from the old Apple port
(http://github.com/alblue/mac-zfs/tree/master/zfs_documentation/man8/)
don't seem to have a corresponding source file in the onnv-gate
repository (http://hub.opensolaris.org/bin/view/Project+onnv/WebHome)
although I've found the text on-line
(http://docs.sun.com/app/docs/doc/819-2240/zfs-1m)

Can anyone point me to where these are stored, so that we can update
the documentation in the Apple fork?

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Alex Lam S.L.
Looks great - and by the time OpenSolaris build has it, I will have a
brand new laptop to put it on ;-)

One question though - I have a file server at home with 4x750GB on
raidz1. When I upgrade to the latest build and set dedup=on, given
that it does not have an offline mode, there is no way to operate on
the existing dataset?

As a workaround I can move files in and out of the pool through an
external 500GB HDD, and with the ZFS snapshots I don't really risk
much about losing data if anything goes (not too horribly, anyway)
wrong.
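
The shuffle I have in mind, as a sketch with made-up dataset names (dedup only 
applies to blocks written after it is switched on, and this needs enough free 
space to hold a second copy while both exist):

# zfs set dedup=on tank
# zfs snapshot tank/data@pre-dedup
# zfs send tank/data@pre-dedup | zfs receive tank/data.new    # rewritten blocks get deduped
# zfs destroy -r tank/data                                    # only once the copy is verified
# zfs rename tank/data.new tank/data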

Thanks to you guys again for the great work!

Alex.


On Mon, Nov 2, 2009 at 1:20 PM, Jeff Bonwick  wrote:
>> Terrific! Can't wait to read the man pages / blogs about how to use it...
>
> Just posted one:
>
> http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup
>
> Enjoy, and let me know if you have any questions or suggestions for
> follow-on posts.
>
> Jeff
>



-- 

Mike Ditka  - "If God had wanted man to play soccer, he wouldn't have
given us arms." -
http://www.brainyquote.com/quotes/authors/m/mike_ditka.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Alex Lam S.L.
Terrific! Can't wait to read the man pages / blogs about how to use it...

Alex.

On Mon, Nov 2, 2009 at 12:21 PM, David Magda  wrote:
> Deduplication was committed last night by Mr. Bonwick:
>
>> Log message:
>> PSARC 2009/571 ZFS Deduplication Properties
>> 6677093 zfs should have dedup capability
>
>
> http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html
>
>
>
> Via c0t0d0s0.org.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 

Marie von Ebner-Eschenbach  - "Even a stopped clock is right twice a
day." - 
http://www.brainyquote.com/quotes/authors/m/marie_von_ebnereschenbac.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] automate zpool scrub

2009-11-01 Thread Alex Blewitt
On Sun, Nov 1, 2009 at 3:45 PM, Vano Beridze  wrote:
> Now I've logged in and there was a mail saying that cron did not find zpool
>
> it's in my path
> which zpool
> /usr/sbin/zpool
>
> Does cron use different PATH setting?

Yes. Typically your PATH is set up by various shell initialisations
which may not get run for Cron jobs. In any case, it's safer to assume
it's not.

> Is it ok to specify /usr/sbin/zpool in crontab file?

It is in fact preferable to specify fully qualified paths in crontabs
generally, so yes.
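
For example, a weekly scrub (03:00 every Sunday; pool name is a placeholder) 
added via crontab -e:

0 3 * * 0 /usr/sbin/zpool scrub tank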

Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Apple cans ZFS project

2009-10-24 Thread Alex Blewitt
Apple has finally canned [1] the ZFS port [2]. To try and keep momentum up and 
continue to use the best filing system available, a group of fans have set up a 
continuation project and mailing list [3,4].

If anyone's interested in joining in to help, please join in the mailing list.

[1] http://alblue.blogspot.com/2009/10/apple-finally-kill-off-zfs.html
[2] http://zfs.macosforge.org
[3] http://code.google.com/p/maczfs/
[4] http://groups.google.com/group/zfs-macos
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-10 Thread Alex Li
We finally resolved this issue by change LSI driver. For details, please refer 
to here http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] SMBS share disappears daily

2009-08-15 Thread Alex Lam S.L.
Hi there,

I have just upgraded to b118 - except that the mouse is now unable to
select / focus on any other windows except the ones that the desktop
randomly decided to give the focus to.

So I guess I have a bigger problem than just SMB not working here...

Alex.


On Fri, Aug 14, 2009 at 7:32 PM, Will Murnane wrote:
> On Fri, Aug 14, 2009 at 13:35, Alex Lam S.L. wrote:
>> Hi there,
>>
>> I have a raid-z(1) data pool with "sharesmb=name=data" (4x750GB local SATA
>> II), which I open up for my Windows machines to dump files into.
>>
>> Trouble is, the SMB share would disappear from the network at least
>> once a day, and when it happens, I have to reboot OpenSolaris
>> (2009.06) in order to make it available again. Restarting the SMB
>> server service without rebooting does not resolve the issue.
>>
>> Is this a known issue, or have I missed something obvious?
> Known issue.  Upgrading to b117/118 from the /dev repository fixed
> this for me, give it a shot.
>
> Will
>



--

Ted Turner  - "Sports is like a war without the killing." -
http://www.brainyquote.com/quotes/authors/t/ted_turner.html



-- 

Stephen Leacock  - "I detest life-insurance agents: they always argue
that I shall some day die, which is not so." -
http://www.brainyquote.com/quotes/authors/s/stephen_leacock.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] SMBS share disappears daily

2009-08-14 Thread Alex Lam S.L.
Hi there,

I have a raid-z(1) data pool with "sharesmb=name=data" (4x750GB local SATA
II), which I open up for my Windows machines to dump files into.

Trouble is, the SMB share would disappear from the network at least
once a day, and when it happens, I have to reboot OpenSolaris
(2009.06) in order to make it available again. Restarting the SMB
server service without rebooting does not resolve the issue.

Is this a known issue, or have I missed something obvious?

Regards,
Alex.

-- 

Pablo Picasso  - "Computers are useless. They can only give you
answers." - http://www.brainyquote.com/quotes/authors/p/pablo_picasso.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] LZO versus LZJB

2009-08-14 Thread Alex Lam S.L.
Just to answer my own question - this one might be interesting:

http://www.quicklz.com/

Alex.


On Fri, Aug 14, 2009 at 3:15 PM, Alex Lam S.L. wrote:
> Thanks for the informative analysis!
>
> Just wondering - are there better candidates out there than even LZO
> for this purpose?
>
> Alex.
>
>
> On Fri, Aug 14, 2009 at 8:05 AM, Denis Ahrens wrote:
>> Hi
>>
>> Some developers here said a long time ago that someone should show
>> the code for LZO compression support for ZFS before talking about the
>> next step. I made that code with a friend and we also made a little
>> benchmark to give a first impression:
>>
>> http://denisy.dyndns.org/lzo_vs_lzjb/
>>
>> I hope we made no technical error, but if you find something
>> not accurate, we will correct it.
>>
>> Denis
>> --
>> This message posted from opensolaris.org
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
>
>
> --
>
> Marie von Ebner-Eschenbach  - "Even a stopped clock is right twice a
> day." - 
> http://www.brainyquote.com/quotes/authors/m/marie_von_ebnereschenbac.html
>



-- 

Mike Ditka  - "If God had wanted man to play soccer, he wouldn't have
given us arms." -
http://www.brainyquote.com/quotes/authors/m/mike_ditka.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] LZO versus LZJB

2009-08-14 Thread Alex Lam S.L.
Thanks for the informative analysis!

Just wondering - are there better candidates out there than even LZO
for this purpose?

Alex.


On Fri, Aug 14, 2009 at 8:05 AM, Denis Ahrens wrote:
> Hi
>
> Some developers here said a long time ago that someone should show
> the code for LZO compression support for ZFS before talking about the
> next step. I made that code with a friend and we also made a little
> benchmark to give a first impression:
>
> http://denisy.dyndns.org/lzo_vs_lzjb/
>
> I hope we made no technical error, but if you find something
> not accurate, we will correct it.
>
> Denis
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 

Marie von Ebner-Eschenbach  - "Even a stopped clock is right twice a
day." - 
http://www.brainyquote.com/quotes/authors/m/marie_von_ebnereschenbac.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Alex Lam S.L.
At first glance, your production server's numbers look fairly similar to
the "small file workload" results from your development server.

I thought you were saying that the development server has faster performance?

Alex.


On Tue, Aug 11, 2009 at 1:33 PM, Ed Spencer wrote:
> I've come up with a better name for the concept of file and directory
> fragmentation, which is "Filesystem Entropy": where, over time, an
> active and volatile filesystem moves from an organized state to a
> disorganized state, resulting in backup difficulties.
>
> Here are some stats which illustrate the issue:
>
> First the development mail server:
> ==
> (Jump frames, Nagle disabled and tcp_xmit_hiwat,tcp_recv_hiwat set to
> 2097152)
>
> Small file workload (copy from zfs on iscsi network to local ufs
> filesystem)
> # zpool iostat 10
>               capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> --  -  -  -  -  -  -
> space       70.5G  29.0G      3      0   247K  59.7K
> space       70.5G  29.0G    136      0  8.37M      0
> space       70.5G  29.0G    115      0  6.31M      0
> space       70.5G  29.0G    108      0  7.08M      0
> space       70.5G  29.0G    105      0  3.72M      0
> space       70.5G  29.0G    135      0  3.74M      0
> space       70.5G  29.0G    155      0  6.09M      0
> space       70.5G  29.0G    193      0  4.85M      0
> space       70.5G  29.0G    142      0  5.73M      0
> space       70.5G  29.0G    159      0  7.87M      0
>
> Large File workload (cd and dvd iso's)
> # zpool iostat 10
>               capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> --  -  -  -  -  -  -
> space       70.5G  29.0G      3      0   224K  59.8K
> space       70.5G  29.0G    462      0  57.8M      0
> space       70.5G  29.0G    427      0  53.5M      0
> space       70.5G  29.0G    406      0  50.8M      0
> space       70.5G  29.0G    430      0  53.8M      0
> space       70.5G  29.0G    382      0  47.9M      0
>
> The production mail server:
> ===
> Mail system is running with 790 imap users logged in (low imap work
> load).
> Two backup streams are running.
> Not using jumbo frames, nagle enabled, tcp_xmit_hiwat,tcp_recv_hiwat set
> to 2097152
>    - we've never seen any effect of changing the iscsi transport
> parameters
>      under this small file workload.
>
> # zpool iostat 10
>               capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> --  -  -  -  -  -  -
> space       1.06T   955G     96     69  5.20M  2.69M
> space       1.06T   955G    175    105  8.96M  2.22M
> space       1.06T   955G    182     16  4.47M   546K
> space       1.06T   955G    170     16  4.82M  1.85M
> space       1.06T   955G    145    159  4.23M  3.19M
> space       1.06T   955G    138     15  4.97M  92.7K
> space       1.06T   955G    134     15  3.82M  1.71M
> space       1.06T   955G    109    123  3.07M  3.08M
> space       1.06T   955G    106     11  3.07M  1.34M
> space       1.06T   955G    120     17  3.69M  1.74M
>
> # prstat -mL
>   PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG
> PROCESS/LWPID
>  12438 root      12 6.9 0.0 0.0 0.0 0.0  81 0.1 508  84  4K   0 save/1
>  27399 cyrus     15 0.5 0.0 0.0 0.0 0.0  85 0.0  18  10 297   0 imapd/1
>  20230 root     3.9 8.0 0.0 0.0 0.0 0.0  88 0.1 393  33  2K   0 save/1
>  25913 root     0.5 3.3 0.0 0.0 0.0 0.0  96 0.0  22   2  1K   0 prstat/1
>  20495 cyrus    1.1 0.2 0.0 0.0 0.5 0.0  98 0.0  14   3 191   0 imapd/1
>  1051 cyrus    1.2 0.0 0.0 0.0 0.0 0.0  99 0.0  19   1  80   0 master/1
>  24350 cyrus    0.5 0.5 0.0 0.0 1.4 0.0  98 0.0  57   1 484   0 lmtpd/1
>  22645 cyrus    0.6 0.3 0.0 0.0 0.0 0.0  99 0.0  53   1 603   0 imapd/1
>  24904 cyrus    0.3 0.4 0.0 0.0 0.0 0.0  99 0.0  66   0 863   0 imapd/1
>  18139 cyrus    0.3 0.2 0.0 0.0 0.0 0.0  99 0.0  24   0 195   0 imapd/1
>  21459 cyrus    0.2 0.3 0.0 0.0 0.0 0.0  99 0.0  54   0 635   0 imapd/1
>  24891 cyrus    0.3 0.3 0.0 0.0 0.9 0.0  99 0.0  28   0 259   0 lmtpd/1
>   388 root     0.2 0.3 0.0 0.0 0.0 0.0 100 0.0   1   1  48   0
> in.routed/1
>  21643 cyrus    0.2 0.3 0.0 0.0 0.2 0.0  99 0.0  49   7 540   0 imapd/1
>  18684 cyrus    0.2 0.3 0.0 0.0 0.0 0.0 100 0.0  48   1 544   0 imapd/1
>  25398 cyrus    0.2 0.2 0.0 0.0 0.0 0.0 100 0.0  47   0 466   0 pop3d/1
>  23724 cyrus    0.2 0.2 0.0 0.0 0.0 0.0 100 0.0  47   0 540   0 imapd/1
>  24909 cyrus    0.1 0.2 0.0 0.0 0.2 0.0  99 0.0  25   1 251   0 l

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-07-30 Thread Alex Li
We found lots of SAS controller resets and errors on the SSDs on our servers 
(OpenSolaris 2008.05 and 2009.06 with a third-party JBOD and X25-E). Whenever 
there is an error, MySQL inserts take more than 4 seconds. It was quite 
scary.

Eventually our engineer disabled the Fault Management (FMA) SMART polling, and 
that seems to be working.
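
If I recall correctly, the change amounted to unloading the FMA disk-transport
module (which does the periodic SMART/health polling); reconstructing from
memory, something like:

# fmadm unload disk-transport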
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Alex Lam S.L.
On Thu, Jun 11, 2009 at 2:08 AM, Aaron Blew wrote:
> That's quite a blanket statement.  MANY companies (including Oracle)
> purchased Xserve RAID arrays for important applications because of their
> price point and capabilities.  You easily could buy two Xserve RAIDs and
> mirror them for what comparable arrays of the time cost.
>
> -Aaron

I'd very much doubt that, but I guess one can always push their time
budgets around ;-)

Alex.


>
> On Wed, Jun 10, 2009 at 8:53 AM, Bob Friesenhahn
>  wrote:
>>
>> On Wed, 10 Jun 2009, Rodrigo E. De León Plicet wrote:
>>
>>>
>>> http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
>>
>> Maybe Apple will drop the server version of OS-X and will eliminate their
>> only server hardware (Xserve) since all it manages to do is lose money for
>> Apple and distracts from releasing the next iPhone?
>>
>> Only a lunatic would rely on Apple for a mission-critical server
>> application.
>>
>> Bob
>> --
>> Bob Friesenhahn
>> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>



-- 

Josh Billings  - "Every man has his follies - and often they are the
most interesting thing he has got." -
http://www.brainyquote.com/quotes/authors/j/josh_billings.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD ? Or Singe MetaLUN ?)

2009-04-30 Thread Wilkinson, Alex

0n Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote: 

>On Thu, 30 Apr 2009, Wilkinson, Alex wrote:
>>
>> I currently have a single 17TB MetaLUN that i am about to present to an
>> OpenSolaris initiator and it will obviously be ZFS. However, I am 
constantly
>> reading that presenting a JBOD and using ZFS to manage the RAID is best
>> practice ? Im not really sure why ? And isn't that a waste of a high 
performing
>> RAID array (EMC) ?
>
>The JBOD "advantage" is that then ZFS can schedule I/O for the disks 
>and there is less chance of an unrecoverable pool since ZFS is assured 
>to lay out redundant data on redundant hardware and ZFS uses more 
>robust error detection than the firmware on any array.  When using 
>mirrors there is considerable advantage since writes and reads can be 
>concurrent.
>
>That said, your EMC hardware likely offers much nicer interfaces for 
>indicating and replacing bad disk drives.  With the ZFS JBOD approach 
>you have to back-track from what ZFS tells you (a Solaris device ID) 
>and figure out which physical drive is not behaving correctly.  EMC 
>tech support may not be very helpful if ZFS says there is something 
>wrong but the raid array says there is not. Sometimes there is value 
>with taking advantage of what you paid for.

So, shall I forget ZFS and use UFS ?

 -aW


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD ? Or Singe MetaLUN ?)

2009-04-30 Thread Wilkinson, Alex

0n Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote: 

>On Thu, 30 Apr 2009, Wilkinson, Alex wrote:
>>
>> I currently have a single 17TB MetaLUN that i am about to present to an
>> OpenSolaris initiator and it will obviously be ZFS. However, I am 
constantly
>> reading that presenting a JBOD and using ZFS to manage the RAID is best
>> practice ? Im not really sure why ? And isn't that a waste of a high 
performing
>> RAID array (EMC) ?
>
>The JBOD "advantage" is that then ZFS can schedule I/O for the disks 
>and there is less chance of an unrecoverable pool since ZFS is assured 
>to lay out redundant data on redundant hardware and ZFS uses more 
>robust error detection than the firmware on any array.  When using 
>mirrors there is considerable advantage since writes and reads can be 
>concurrent.
>
>That said, your EMC hardware likely offers much nicer interfaces for 
>indicating and replacing bad disk drives.  With the ZFS JBOD approach 
>you have to back-track from what ZFS tells you (a Solaris device ID) 
>and figure out which physical drive is not behaving correctly.  EMC 
>tech support may not be very helpful if ZFS says there is something 
>wrong but the raid array says there is not. Sometimes there is value 
>with taking advantage of what you paid for.

So forget ZFS and use UFS ? Or use UFS with a ZVOL ? Or Just use Vx{VM,FS} ?
It kinda sux that you get no benefit from using such a killer volume manager
+ filesystem with an EMC array :(

 -aW



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS + EMC Cx310 Array (JBOD ? Or Singe MetaLUN ?)

2009-04-30 Thread Wilkinson, Alex
Hi all,

In terms of best practices and high performance would it be better to present a
JBOD to an OpenSolaris initiator or a single MetaLUN ?

The scenario is:

I currently have a single 17TB MetaLUN that I am about to present to an
OpenSolaris initiator, and it will obviously be ZFS. However, I am constantly
reading that presenting a JBOD and letting ZFS manage the RAID is best
practice. I'm not really sure why, and isn't that a waste of a high-performing
RAID array (EMC)?
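
To be concrete, by "presenting a JBOD" I mean exposing the individual disks and
letting ZFS build the redundancy itself, i.e. something along these lines
(device names made up):

# zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0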

 -aW


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Problems using ZFS on Smart Array P400

2009-01-27 Thread Alex
I am having trouble getting ZFS to behave as I would expect.

I am using the HP driver (cpqary3) for the Smart Array P400 (in an HP ProLiant 
DL385 G2) with 10k 2.5" 146GB SAS drives. The drives appear correctly; however, 
because the controller does not offer JBOD functionality, I had to configure 
each drive as a RAID0 logical drive.

Everything appears to work fine: the drives are detected, and I created a 
mirror for the OS to install to and an additional raidz2 array with the 
remaining 6 disks.

But when I remove a disk and then reinsert it, I cannot get ZFS to accept it 
back into the array; see below for the details.

I thought it might be a problem with using the whole disks (e.g. c1t*d0), so I 
created a single partition on each and used that, but had the same results. The 
module seems to detect that the drive has been reinserted successfully, but the 
OS doesn't seem to want to write to it.
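
I assume the next step would be a replace against the same device, something
like the command below, though I am not sure of the correct form when the "new"
disk is physically the same one:

# zpool replace test c1t5d0p0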

Any help would be most appreciated as I would much prefer to use ZFS's software 
capabilities rather than the hardware card in the machine.

When rebooting the system the Array BIOS also displays some interesting 
behavior.

### BIOS Output

1792-Slot 1 Drive Array - Valid Data Found in the Array Accelerator
Data will automatically be written to the drive array
1779-Slot 1 Drive Array - Replacement drive(s) detected OR previously failed 
drives(s) no appear to be operational
POrt 2I: Box1: Bay3
Logical drives(s) disabled due to possible data loss.
Select "F1" to continue with logical drive(s) disabled
Select "F2" to accept data loss and to re-enable logical drive(s)

 Terminal output

bash-3.00# zpool status test

pool: test
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: resilver completed after 0h0m with 0 errors on Tue Jan 27 03:30:16 2009
config:

NAME STATE READ WRITE CKSUM
test DEGRADED 0 0 0
raidz2 DEGRADED 0 0 0
c1t2d0p0 ONLINE 0 0 0
c1t3d0p0 ONLINE 0 0 0
c1t4d0p0 ONLINE 0 0 0
c1t5d0p0 UNAVAIL 0 0 0 cannot open
c1t6d0p0 ONLINE 0 0 0
c1t8d0p0 ONLINE 0 0 0

errors: No known data errors
bash-3.00# zpool online test c1t5d0p0
warning: device 'c1t5d0p0' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present

bash-3.00# dmesg

Jan 27 03:27:40 unknown cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array 
P400 Controller
Jan 27 03:27:40 unknown cpqary3: [ID 823470 kern.notice] Hot-plug drive 
inserted, Port: 2I Box: 1 Bay: 3
Jan 27 03:27:40 unknown cpqary3: [ID 479030 kern.notice] Configured Drive ? 
... YES
Jan 27 03:27:40 unknown cpqary3: [ID 10 kern.notice]
Jan 27 03:27:40 unknown cpqary3: [ID 823470 kern.notice] NOTICE: Smart Array 
P400 Controller
Jan 27 03:27:40 unknown cpqary3: [ID 834734 kern.notice] Media exchange 
detected, logical drive 6
Jan 27 03:27:40 unknown cpqary3: [ID 10 kern.notice]
...
Jan 27 03:36:24 unknown scsi: [ID 107833 kern.warning] WARNING: 
/p...@38,0/pci1166,1...@10/pci103c,3...@0/s...@5,0 (sd6):
Jan 27 03:36:24 unknown SYNCHRONIZE CACHE command failed (5)
...
Jan 27 03:47:58 unknown scsi: [ID 107833 kern.warning] WARNING: 
/p...@38,0/pci1166,1...@10/pci103c,3...@0/s...@5,0 (sd6):
Jan 27 03:47:58 unknown drive offline
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Alex Viskovatoff
Hi Cindy,

I now suspect that the boot blocks are located outside of the space in 
partition 0 that actually belongs to the zpool, in which case it is not 
necessarily a bug that zpool attach does not write those blocks, IMO. Indeed, 
that must be the case, since GRUB needs to get to stage2 in order to be able to 
read zfs file systems. I'm just glad zpool attach warned me that I need to 
invoke installgrub manually!

Thank you for making things less mysterious.

Alex
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Alex Viskovatoff
Thanks for clearing that up. That all makes sense.

I was wondering why ZFS doesn't use the whole disk in the standard OpenSolaris 
install. That explains it.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Alex Viskovatoff
Cindy,

Well, it worked. The system can boot off c4t0d0s0 now.

But I am still a bit perplexed. Here is how the invocation of installgrub went:

a...@diotiima:~# installgrub -m /boot/grub/stage1 /boot/grub/stage2 
/dev/rdsk/c4t0d0s0
Updating master boot sector destroys existing boot managers (if any).
continue (y/n)?y
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 267 sectors starting at 50 (abs 16115)
stage1 written to master boot sector
a...@diotima:~# 

So installgrub writes to partition 0. How does one know that those sectors have 
not already been used by zfs, in its mirroring of the first drive by this 
second drive? And why is writing to partition 0 even necessary? Since c3t0d0 
must contain stage1 and stage2 in its partition 0, wouldn't c4t0d0 already have 
stage1 and stage2 in its partition 0 through the resilvering process?

I don't find the present disk format/label/partitioning experience particularly 
unpleasant (except for installgrub writing directly into a partition which 
belongs to a zpool). I just wish I understood what it involves.

Thank you for that link to the System Administration Guide. I just looked at it 
again, and it says partition 8 "Contains GRUB boot information". So partition 8 
is the master boot sector and contains GRUB stage1?

Alex
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Questions about OS 2008.11 partitioning scheme

2009-01-06 Thread Alex Viskovatoff
Hi all,

I did an install of OpenSolaris in which I specified that the whole disk should 
be used for the installation. Here is what "format> verify" produces for that 
disk:

Part  TagFlag Cylinders SizeBlocks
  0   rootwm   1 - 60797  465.73GB(60797/0/0) 976703805
  1 unassignedwm   00 (0/0/0) 0
  2 backupwu   0 - 60797  465.74GB(60798/0/0) 976719870
  3 unassignedwm   00 (0/0/0) 0
  4 unassignedwm   00 (0/0/0) 0
  5 unassignedwm   00 (0/0/0) 0
  6 unassignedwm   00 (0/0/0) 0
  7 unassignedwm   00 (0/0/0) 0
  8   bootwu   0 - 07.84MB(1/0/0) 16065
  9 unassignedwm   00 (0/0/0) 0

I have several questions. First, what is the purpose of partitions 2 and 8 
here? Why not simply have partition 0, the "root" partition, be the only 
partition, and start at cylinder 0 as opposed to 1?

My second question concerns the disk I have used to mirror the first root zpool 
disk. After I set up the second disk to mirror the first one with "zpool attach 
-f rpool c3t0d0s0 c4t0d0s0", I got the response

Please be sure to invoke installgrub(1M) to make 'c4t0d0s0' bootable.

Is that correct? Or do I want to make c4t0d0s8 bootable, given that the label 
of that partition is "boot"? I cannot help finding this a little confusing. As 
far as I can tell, c4t0d0s8 (as well as c3t0d0s8 from the original disk which I 
mirrored), cylinder 0, is not used for anything.

Finally, is the correct command to make the disk I have added to mirror the 
first disk bootable

"installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t0d0s0" ?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Easiest way to replace a boot disk with a larger one?

2008-12-16 Thread Alex Viskovatoff
Hi again Cindy,

Well, I got the two new 1.5 TB disks, but I ran into a snag:

> a...@diotima:~# zpool attach rpool c3t0d0s0 c3t1d0
> cannot label 'c3t1d0': EFI labeled devices are not supported on root pools.

The Solaris 10 System Administration Guide: Devices and File Systems gives some 
pertinent information on p. 191.

> Comparison of the EFI Label and the VTOC Label
> The EFI disk label differs from the VTOC disk label in the following ways:
>Provides support for disks greater than 1 terabyte in size.

Well, that explains why my new disks have EFI labels; furthermore, it appears 
to mean that I must keep the EFI label if I want to use the full capacity of 
the disks.

> Restrictions of the EFI Disk Label
> Keep the following restrictions in mind when determining whether using disks 
> greater than 1
   terabyte is appropriate for your environment:
> [...]
>  You cannot boot from a disk with an EFI disk label.

It thus appears that my original plan to replace my 500 GB boot disk with a 1.5 
TB boot disk was ill advised, since there is at present no such thing as a 1.5 
TB Solaris boot disk.
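
(The workaround I have seen suggested for the attach error is to relabel the
new disk with an SMI/VTOC label via format -e and attach a slice rather than
the whole disk, roughly as below; but as I understand it an SMI label tops out
at 1 TB right now, which defeats the purpose of a 1.5 TB boot disk anyway.)

# format -e c3t1d0           (label menu: choose SMI rather than EFI)
# zpool attach rpool c3t0d0s0 c3t1d0s0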

I was thinking of trying an install of OS 2008.11 one of the 1.5 TB disks, just 
to see what would happen, but then I found the following thread: 
http://opensolaris.org/jive/thread.jspa?threadID=81852&tstart=0

which refers to this "bug":

> 4051 opensolaris b99b/b100a does not install on 1.5 TB disk or boot fails 
> after install

It thus appears that I will have to create a new non-root zpool for the 1.5 TB 
disks. (Fortunately, I have an extra 500 GB disk which I'm now going to set up 
to mirror my boot disk.)

Does anyone know if there are any plans in the works to make disks with an EFI 
disk label bootable? The present situation of their not being bootable means 
that OpenSolaris cannot boot off disks greater than 1 TB.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Easiest way to replace a boot disk with a larger one?

2008-12-11 Thread Alex Viskovatoff
Hi Cindy,

Thanks for clearing that up. I don't mind rebooting, just as long as that makes 
the zpool use the additional space. I did read about the export/import 
workaround, but wasn't sure if rebooting would have the same effect.

The ZFS documentation convinced me to set up a mirrored pool, even though I've 
never mirrored disks or used RAID before. So I've already put in a word with 
Santa about a second disk.

Btw, I would never consider using a disk with bleeding-edge capacity for my 
system (as opposed to for expendable data like movies) with any file system 
other than ZFS.

Alex
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Easiest way to replace a boot disk with a larger one?

2008-12-11 Thread Alex Viskovatoff
Thanks, that's what I thought. Just wanted to make sure.

I guess the writers of the documentation think that this is so obviously the 
way things would work in a well designed system that there is no reason to 
mention it explicitly.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Easiest way to replace a boot disk with a larger one?

2008-12-11 Thread Alex Viskovatoff
Maybe this has been discussed before, but I haven't been able to find any 
relevant threads.

I have a simple OpenSolaris 2008.11 setup with one ZFS pool consisting of the 
whole of the single hard drive on the system. What I want to do is to replace 
the present 500 GB drive with a 1.5 TB drive. (The latter costs what the former 
cost a year ago. :-) Once the replacement is complete, I will install a second 
1.5 TB drive to mirror the first one. The smaller drive will go into my legacy 
Linux box.)

The way I hope I can do this is by first using the larger drive to mirror the 
smaller one. Once the resilvering is complete, I would remove the smaller drive. 
My question is: once the smaller drive has been removed, will the zpool use all 
of the larger, replacement drive?
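
In other words, I am picturing something along these lines (guessing at the
device names, since I have not done this before):

# zpool attach rpool c3t0d0s0 c3t1d0s0   (add the 1.5 TB disk as a mirror)
# zpool status rpool                     (watch until the resilver completes)
# zpool detach rpool c3t0d0s0            (drop the old 500 GB disk)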

The ZFS Administration Guide does not appear to give an answer to this. The 
only thing I could find in the December 2008 version is the following about 
"Replacing Devices in a Storage Pool" on p. 115: "If the replacement device is 
larger, the pool capacity is increased when the replacement is complete." But 
"zpool replace" does not seem relevant to what I want to do, since I don't see 
how you can use the procedure described there to replace a drive which 
comprises the root zpool, which is what I want to do.

The only thing I have been able to find about this anywhere is the following 
from the Wikipedia article on ZFS:

"Capacity expansion is normally achieved by adding groups of disks as a vdev 
(stripe, RAID-Z, RAID-Z2, or mirrored). Newly written data will dynamically 
start to use all available vdevs. It is also possible to expand the array by 
iteratively swapping each drive in the array with a bigger drive and waiting 
for ZFS to heal itself — the heal time will depend on amount of stored 
information, not the disk size. The new free space will not be available until 
all the disks have been swapped."

Is this correct?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs not yet suitable for HA applications?

2008-11-14 Thread alex black
hi All,

I realize the subject is a bit incendiary, but we're running into what  
I view as a design omission with ZFS that is preventing us from  
building highly available storage infrastructure; I want to bring some  
attention (again) to this major issue:

Currently we have a set of iSCSI targets published from a storage host  
which are consumed by a ZFS host.

If a _single_disk_ on the storage host goes bad, ZFS pauses for a full  
180 seconds before allowing read/write operations to resume. This is  
an aeon, beyond TCP timeout, etc.

I've read the claims that ZFS is unconcerned with underlying
infrastructure and agree with the basic sense of those claims (see [1]);
however:

* If ZFS experiences _any_ behavior when interacting with a device  
which is not consistent with known historical performance norms
-and-
* ZFS knows the data it is attempting to fetch from that device is  
resident on another device

Why then would it not make a decision, dynamically based on a  
reasonably small sample of recent device performance to drop its  
current attempt and instead fetch the data from the other device?

I don't even think a configurable timeout is that useful - it should  
be based on a sample of performance from (say) a day - or, hey, for  
the moment, just to make it easy, a configurable timeout!
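
(The only knob I am aware of today is the generic sd driver command timeout in
/etc/system, and I am not even sure it applies cleanly to iSCSI-backed devices;
the value below is illustrative only:)

* default sd command timeout is 0x3c (60 seconds)
set sd:sd_io_time = 0x20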

As it is, I can't put this in production. 180 seconds is not "highly  
available", it's users seeing "The Connection has Timed Out".

Everything - and I mean every other tiny detail - of ZFS that I have  
seen and used is crystalline perfection.

So, ZFS is (for us) a diamond with a little bit of volcanic crust  
remaining to be polished off.

Is there any intention of dealing with this problem in the (hopefully  
very) near future?

If you're in the bay area, I will personally deliver (2) cases of the  
cold beer of your choice (including trappist) if you solve this problem.

If offering a bounty would have any effect, I'd offer one. We need  
this to work.

thanks,

_alex


Related:
[1] 
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-August/thread.html#50609


-- 
alex black, founder
the turing studio, inc.
888.603.6023 / main
510.666.0074 / office
[EMAIL PROTECTED]



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] add autocomplete feature for zpool, zfs command

2008-10-09 Thread Alex Peng
Is it fun to have autocomplete in zpool or zfs command?

For instance -

"zfs cr 'Tab key' " will become "zfs create"
"zfs clone 'Tab key' " will show me the available snapshots
"zfs set 'Tab key' " will show me the available properties, then "zfs set 
com 'Tab key'" will become "zfs set compression=",  another 'Tab key' here 
would show me "on/off/lzjb/gzip/gzip-[1-9]"
..
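
A very rough sketch of what the zfs half might look like as a bash completion
function (subcommand list abbreviated, purely illustrative):

# offer zfs subcommand names for the first argument
_zfs() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    local subcmds="create destroy snapshot rollback clone promote rename list
                   set get inherit mount unmount share unshare send receive"
    if [ "$COMP_CWORD" -eq 1 ]; then
        COMPREPLY=( $(compgen -W "$subcmds" -- "$cur") )
    fi
}
complete -F _zfs zfs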


Looks like a good RFE.

Thanks,
-Alex
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Wilkinson, Alex

0n Thu, Oct 09, 2008 at 06:37:23AM -0500, Mike Gerdts wrote: 

>FWIW, I belive that I have hit the same type of bug as the OP in the
>following combinations:
>
>- T2000, LDoms 1.0, various builds of Nevada in control and guest
>  domains.
>- Laptop, VirtualBox 1.6.2, Windows XP SP2 host, OpenSolaris 2008.05 @
>  build 97 guest
>
>In the past year I've lost more ZFS file systems than I have any other
>type of file system in the past 5 years.  With other file systems I
>can almost always get some data back.  With ZFS I can't get any back.

That's scary to hear!

 -aW



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] An slog experiment (my NAS can beat up your NAS)

2008-10-08 Thread Wilkinson, Alex

0n Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote: 

>The big thing here is I ended up getting a MASSIVE boost in
>performance even with the overhead of the 1GB link, and iSCSI.
>The iorate test I was using went from 3073 IOPS on 90% sequential
>writes to 23953 IOPS with the RAM slog added.  The service time 
>was also significantly better than the physical disk.

Curious, what tool did you use to benchmark your IOPS?

 -aW



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-29 Thread Wilkinson, Alex

0n Mon, Sep 29, 2008 at 09:28:53PM -0700, Richard Elling wrote: 

>EMC does not, and cannot, provide end-to-end data validation.  So how
>would measure its data reliability?  If you search the ZFS-discuss 
archives,
>you will find instances where people using high-end storage also had data
>errors detected by ZFS.  So, you should consider them complementary rather
>than adversaries.

Mmm ... got any keywords to search for ?

 -aW



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun samba <-> ZFS ACLs

2008-09-03 Thread Wilkinson, Alex
0n Wed, Sep 03, 2008 at 12:57:52PM -0700, Paul B. Henson wrote: 

>I tried installing the Sun provided samba source code package to try to do
>some debugging on my own, but it won't even compile, configure fails with:

Oh, where did you get that from ?

 -aW



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GUI support for ZFS root?

2008-08-15 Thread Wilkinson, Alex
0n Thu, Aug 14, 2008 at 09:00:12AM -0700, Rich Teer wrote: 

>Summary: Solaris Express Community Edition (SXCE) is like the OpenSolaris
>of old; OpenSolaris .xx is apparently Sun's intended future direction
>for Solaris.  Based on what I've heard, I've not tried the latter.  If I
>wanted Linux I'd use Linux.  But for the foreseeable future, I'm sticking
>to SXCE.

Does that mean SXCE is going to disappear and be replaced by .xx?

 -aW



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] install opensolaris on raidz

2008-06-11 Thread Alex
What do you mean by "mirrored vdevs"? Hardware RAID1? Because I have only 
ICH9R and OpenSolaris doesn't know about it.
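
If you mean a plain ZFS mirror for the root pool, I guess the idea would be to
install to a single disk first and then attach the second one, something like
this (device names guessed):

# zpool attach -f rpool c1d0s0 c2d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c2d0s0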

Would network boot be a good idea?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] install opensolaris on raidz

2008-06-02 Thread Alex
Hi,

Using the OpenSolaris installer I've created a raidz array from two 500GB hdds, 
but the installer keeps seeing two hdds, not the array I've just made.
How do I install OpenSolaris on a raidz array?

Thanks!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Alex
Yes, you're of course right. I will make archive copies of this stuff and store 
it offsite. However, I am treating the backup piece of this as occasional 
archiving: basically an online storage site which can back up my content once a 
week or so.

Thank you for your suggestions and for pointing out that I need to pay 
attention to backups. You are obviously right, and it's easy to dismiss personal 
data as non-essential. Though when I think of the hundreds of hours spent 
converting vinyl to MP3, it's a different story.

-Alex
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Alex
Thanks a bunch! I'll look into this very config. Just one Q, where did you get 
the case?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Alex
Hi,

I'm sure this has been asked many times and though a quick search didn't reveal 
anything illuminating, I'll post regardless.

I am looking to make a storage system available on my home network. I need 
storage space on the order of terabytes, as I have a growing iTunes collection 
and tons of MP3s that I converted from vinyl. At this time I am unsure of the 
growth rate, but I suppose it isn't unreasonable to look for 4TB usable 
storage. Since I will not be backing this up, I think I want RAIDZ2.
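
My rough sizing arithmetic, with the drive size as a placeholder:

  raidz2 usable capacity ~= (number of drives - 2) x drive size
  e.g. 6 x 1 TB drives   -> about 4 TB usable
       8 x 750 GB drives -> about 4.5 TB usable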

Since this is for home use, I don't want to spend an inordinate amount of 
money. I did look at the cheaper STK arrays, but they're more than what I want 
to pay, so I am thinking that puts me in the white-box market. Power 
consumption would be nice to keep low also.

I don't really care if it's external or internal disks. Even though I don't 
want to get completely skinned over the money, I also don't want to buy 
something that is unreliable.

I am very interested in your thoughts and experiences on this, e.g. what to 
buy and what to stay away from.

Thanks in advance!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Remove a mirrored pair from a pool

2008-01-07 Thread Alex
Hi, I have a question regarding a situation I have with my zfs pool.

I have a zfs pool "ftp" and within it are 3 250gb drives in a raid z and 2 
400gb drives in a simple mirror. The pool itself has more than 400gb free and I 
would like to remove the 400gb drives from the server. My concern is how to 
remove them without causing the entire pool to become inconsistent. Is there a 
way to tell zfs to get all data off the 400gb mirror so the disks can safely be 
removed? 

Thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Strange zfs mirror behavior

2007-10-03 Thread Alex
Hi,

We are running a V240 with a zfs pool mirrored across two 3310s (SCSI). During 
redundancy testing, when offlining one 3310, all zfs data became unusable:
- zpool hangs without displaying any info
- trying to read the filesystem hangs the command (df, ls, ...)
- /var/log/messages keeps logging errors for the faulty disk
 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],70/[EMAIL 
PROTECTED],1/[EMAIL PROTECTED],0 (sd41): offline
scsi: [ID 107833 kern.warning] WARNING: /[EMAIL PROTECTED],70/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0 (sd5):
disk not responding to selection
and others.


Using sotruss with the zpool command leads to:
libzfs.so.2:*zpool_iter(0x92b88, 0x1ae38, 0x87fa0)

and using mdb, the zpool commands are all in the same state:
> ::pgrep zpool | ::walk thread | ::findstack
stack pointer for thread 30003adb900: 2a105092d51
[ 02a105092d51 turnstile_block+0x604() ]
  02a105092e01 mutex_vector_enter+0x478()
  02a105092ec1 spa_all_configs+0x64()
  02a105092f81 zfs_ioc_pool_configs+4()
  02a105093031 zfsdev_ioctl+0x15c()
  02a1050930e1 fop_ioctl+0x20()
  02a105093191 ioctl+0x184()
  02a1050932e1 syscall_trap32+0xcc()
stack pointer for thread 3000213a9e0: 2a1050b2ca1
[ 02a1050b2ca1 cv_wait+0x38() ]
  02a1050b2d51 spa_config_enter+0x38()
  02a1050b2e01 spa_open_common+0x1e0()
  02a1050b2eb1 spa_get_stats+0x1c()
  02a1050b2f71 zfs_ioc_pool_stats+0x10()
  02a1050b3031 zfsdev_ioctl+0x15c()
  02a1050b30e1 fop_ioctl+0x20()
  02a1050b3191 ioctl+0x184()
  02a1050b32e1 syscall_trap32+0xcc()
stack pointer for thread 30003adafa0: 2a100c7aca1
[ 02a100c7aca1 cv_wait+0x38() ]
  02a100c7ad51 spa_config_enter+0x38()
  02a100c7ae01 spa_open_common+0x1e0()
  02a100c7aeb1 spa_get_stats+0x1c()
  02a100c7af71 zfs_ioc_pool_stats+0x10()
  02a100c7b031 zfsdev_ioctl+0x15c()
  02a100c7b0e1 fop_ioctl+0x20()
  02a100c7b191 ioctl+0x184()
  02a100c7b2e1 syscall_trap32+0xcc()
stack pointer for thread 3000213a080: 2a1051e8ca1
[ 02a1051e8ca1 cv_wait+0x38() ]
  02a1051e8d51 spa_config_enter+0x38()
  02a1051e8e01 spa_open_common+0x1e0()
  02a1051e8eb1 spa_get_stats+0x1c()
  02a1051e8f71 zfs_ioc_pool_stats+0x10()
  02a1051e9031 zfsdev_ioctl+0x15c()
  02a1051e90e1 fop_ioctl+0x20()
  02a1051e9191 ioctl+0x184()
  02a1051e92e1 syscall_trap32+0xcc()
stack pointer for thread 30001725960: 2a100d98c91
[ 02a100d98c91 cv_wait+0x38() ]
  02a100d98d41 spa_config_enter+0x88()
  02a100d98df1 spa_vdev_enter+0x20()
  02a100d98ea1 spa_vdev_setpath+0x10()
  02a100d98f71 zfs_ioc_vdev_setpath+0x3c()
  02a100d99031 zfsdev_ioctl+0x15c()
  02a100d990e1 fop_ioctl+0x20()
  02a100d99191 ioctl+0x184()
  02a100d992e1 syscall_trap32+0xcc()


Has anyone got info about a problem like this with zfs?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool status faulted, but raid1z status is online?

2007-05-09 Thread Alex
The drive in my Solaris box that had the OS on it decided to kick the bucket this 
evening, a joyous occasion for all, but luckily all my data is stored on a zpool 
and the OS is nothing but a shell to serve it up. One quick install later and I'm 
back trying to import my pool, and things are not going well.

Once I have things where I want them, I issue an import
# zpool import
  pool: ftp
id: 1752478903061397634
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

ftp FAULTED   corrupted data
  raidz1DEGRADED
c1d0ONLINE
c1d1ONLINE
c4d0UNAVAIL   cannot open

Looks like c4d0 died as well; they were purchased at the same time, but oh well. 
ZFS should still be able to recover because I have 2 working drives, and the 
raidz1 says it's degraded but not destroyed. But the pool itself reads as 
faulted?

I issue an import with force, thinking the system is just being silly.

# zpool import -f ftp
cannot import 'ftp': I/O error

Odd. After looking at the threads here, I see that when importing, the label of 
a drive is rather important, so I go look at what zdb thinks the labels for my 
drives are.

first, the pool itself
# zdb -l ftp

LABEL 0

failed to read label 0

LABEL 1

failed to read label 1

LABEL 2

failed to read label 2

LABEL 3

failed to read label 3

That's not good; how about the drives?

# zdb -l /dev/dsk/c1d0 

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

version=3
name='ftp'
state=2
txg=21807
pool_guid=7724307712458785867
top_guid=14476414087876222880
guid=3298133982375235519
vdev_tree
type='raidz'
id=0
guid=14476414087876222880
nparity=1
metaslab_array=13
metaslab_shift=32
ashift=9
asize=482945794048
children[0]
type='disk'
id=0
guid=4586792833877823382
path='/dev/dsk/c0d0s3'
devid='id1,[EMAIL PROTECTED]/d'
whole_disk=0
children[1]
type='disk'
id=1
guid=3298133982375235519
path='/dev/dsk/c4d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0

LABEL 3

version=3
name='ftp'
state=2
txg=21807
pool_guid=7724307712458785867
top_guid=14476414087876222880
guid=3298133982375235519
vdev_tree
type='raidz'
id=0
guid=14476414087876222880
nparity=1
metaslab_array=13
metaslab_shift=32
ashift=9
asize=482945794048
children[0]
type='disk'
id=0
guid=4586792833877823382
path='/dev/dsk/c0d0s3'
devid='id1,[EMAIL PROTECTED]/d'
whole_disk=0
children[1]
type='disk'
id=1
guid=3298133982375235519
path='/dev/dsk/c4d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
#   
 

So the label on the disk itself is there... mostly.
Now for disk 2:
# zdb -l /dev/dsk/c1d1  
 

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

version=3
name='ftp'
state=2
txg=21807
pool_guid=7724307712458785867
top_guid=11006938707951749786
guid=11006938707951749786
vdev_tree
type='disk'
id=1
guid=11006938707951749786
path='/dev/dsk/c1d0p0'
devid='id1,[EMAIL PROTECTED]/q'
whole_disk=0
metaslab_array=112
metaslab_shift=31
ashift=9
asize=250053918720
--

[zfs-discuss] zfs cache

2006-07-09 Thread Alex
With the upcoming Thumper server, I understand that there won't be any hardware 
RAID. ZFS would be the solution to use on this platform. One apparent use for 
this would be an NFS server. But does it really make sense to do this over a 
disk cabinet (e.g. a SCSI-to-SATA enclosure) with some sort of write cache? Realizing of 
course that ZFS allows for near-platter speed, aren't we becoming even more 
dependent on hard drive performance?

Can one dedicate RAM as a write cache to speed up remote I/O? Understanding, of 
course, that there would be some issues with ensuring data integrity with 
asynchronous writes to such a cache, it would seem that such a solution would 
give certain disk cabinet manufacturers a run for their money.

-Alex
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] On-write Automatic Snapshot

2006-05-22 Thread Alex Barclay

Darren, thank you for your reply.

While it didn't come out correctly (need to brush up on nomenclature),
I did mean snapshot on closure.


Now if what you really mean is snapshot on file closure I think you
might well be on to something useful.  Whats more NTFS has some cool
stuff in this area for consolidating identical files. The hooks that
would need to be put into ZFS to do snapshot on file close could be used
for other things like single instance storage (though isn't that the
opposite of ditto blocks on user data hmn whats the opposite of ditto :-)).


Hmmm, I will try and research what NTFS and others do.


You can also use dtrace to simulate the every single write case and see
for yourself the massive explosion of snapshots that would occur as a
result.


Yeah, this would be bad.


Thank you, will try and see if other filesystems do anything with a
closure hook.
--
Alex Barclay
University of Tulsa
Center for Information Security
Enterprise Research Group
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] On-write Automatic Snapshot

2006-05-22 Thread Alex Barclay

Apologies if this has been addressed, but looking at some of the Sun blogs and
doing Google searches, I have not been able to find an answer.

Does ZFS support on-write automatic snapshots?

For example, according to defined policy, every time a file is written
a snapshot is created with the diff stored. I can see this being
useful in high security environments and companies that have extreme
regulatory requirements.

If not, would there be a way besides scripts/programs to emulate this feature?

Thank You,
Alex
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss