Re: [zfs-discuss] SAS/short stroking vs. SSDs for ZIL

2010-12-29 Thread Kevin Walker
You do seem to misunderstand ZIL.

ZIL is quite simply write cache and using a short stroked rotating drive is
never going to provide a performance increase that is worth talking about
and more importantly ZIL was designed to be used with a RAM/Solid State
Disk.

We use sata2 *HyperDrive5* RAM disks in mirrors and they work well and are
far cheaper than STEC or other enterprise SSD's and have non of the issue
related to trim...

Highly recommended... ;-)

http://www.hyperossystems.co.uk/

Kevin


On 29 December 2010 13:40, Edward Ned Harvey 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
  Sent: Tuesday, December 28, 2010 9:23 PM
 
   The question of IOPS here is relevant to conversation because of ZIL
   dedicated log.  If you have advanced short-stroking to get the write
 latency
   of a log device down to zero, then it can compete against SSD for
 purposes
   of a log device, but nobody seems to believe such technology currently
   exists, and it certainly couldn't compete against SSD for random reads.
   (ZIL log is the only situation I know of, where write performance of a
 drive
   matters and read performance does not matter.)
 
  It seems that you may be confused.  For the ZIL the drive's rotational
  latency (based on RPM) is the dominating factor and not the lateral
  head seek time on the media.  In this case, the short-stroking you
  are talking about does not help any.  The ZIL is already effectively
  short-stroking since it writes in order.

 Nope.  I'm not confused at all.  I'm making a distinction between short
 stroking and advanced short stroking.  Where simple short stroking
 does
 as you said - eliminates the head seek time but still susceptible to
 rotational latency.  As you said, the ZIL already effectively accomplishes
 that end result, provided a dedicated spindle disk for log device, but does
 not do that if your ZIL is on the pool storage.  And what I'm calling
 advanced short stroking are techniques that effectively eliminate, or
 minimize both seek  latency, to zero or near-zero.  What I'm calling
 advanced short stroking doesn't exist as far as I know, but is
 theoretically possible through either special disk hardware or special
 drivers.


 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Swapping disks in pool to facilitate pool growth

2010-10-07 Thread Kevin Walker
Hi Guys,

We are a running a Solaris 10 production server being used for backup
services within our DC. We have 8 500GB drives in a zpool and we wish to
swap them out 1 by 1 for 1TB drives.

I would like to know if it is viable to add larger disks to zfs pool to grow
the pool size and then remove the smaller disks?

I would assume this would degrade the pool and require it to resilver?

Any advice would be gratefully received.

Kind regards

Kevin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-15 Thread Kevin Walker
To be fair, he did talk some sense about how everyone was claiming to have a
product that was cloud computing, but I still don't like Oracle. With there
current Java Patent war with Google and now this with OpenSolaris, it leaves
a very bad taste in my mouth.

Will this affect ZFS being used in FreeBSD?

On 15 August 2010 15:13, David Magda dma...@ee.ryerson.ca wrote:

 On Aug 14, 2010, at 19:39, Kevin Walker wrote:

  I once watched a video interview with Larry from Oracle, this ass rambled
 on
 about how he hates cloud computing and that everyone was getting into
 cloud
 computing and in his opinion no one understood cloud computing, apart from
 him... :-|


 If this is the video you're talking about, I think you misinterpreted what
 he meant:

  Cloud computing is not only the future of computing, but it is the
 present, and the entire past of computing is all cloud. [...] All it is is a
 computer connected to a network. What do you think Google runs on? Do you
 think they run on water vapour? It's databases, and operating systems, and
 memory, and microprocessors, and the Internet. And all of a sudden it's none
 of that, it's the cloud. [...] All the cloud is, is computers on a
 network, in terms of technology. In terms of business model, you can say
 it's rental. All SalesForce.com was, before they were cloud computing, was
 software-as-a-service, and then they became cloud computing. [...] Our
 industry is so bizarre: they change a term and think they invented
 technology.


 http://www.youtube.com/watch?v=rmrxN3GWHpM#t=45m

 I don't see any inaccurate in what said.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-14 Thread Kevin Walker
I once watched a video interview with Larry from Oracle, this ass rambled on
about how he hates cloud computing and that everyone was getting into cloud
computing and in his opinion no one understood cloud computing, apart from
him... :-| From that day on I felt enlightened about Oracle and how they
want do business; they are run by a CEO who is narrow minded and clearly
doesn't understand Open Source or  cloud computing and Oracle are very, very
greedy...

I only hope that OpenSolaris can live on the Illumos project and assist
great projects such as Nexentastor.

http://www.illumos.org/

K

On 15 August 2010 00:02, Mark Bennett mark.benn...@public.co.nz wrote:

 On 8/13/10 8:56 PM -0600 Eric D. Mudama wrote:
  On Fri, Aug 13 at 19:06, Frank Cusack wrote:
  Interesting POV, and I agree. Most of the many distributions of
  OpenSolaris had very little value-add. Nexenta was the most interesting
  and why should Oracle enable them to build a business at their expense?
 
  These distributions are, in theory, the gateway drug where people
  can experiment inexpensively to try out new technologies (ZFS, dtrace,
  crossbow, comstar, etc.) and eventually step up to Oracle's big iron
  as their business grows.

 I've never understood how OpenSolaris was supposed to get you to Solaris.
 OpenSolaris is for enthusiasts and great great folks like Nexenta.
 Solaris lags so far behind it's not really an upgrade path.

 Fedora is a great beta test arena for what eventually becomes a commercial
 Enterprise offering. OpenSolaris was the Solaris equivalent.

 Losing the free bleeding edge testing community will no doubt impact on the
 Solaris code quality.

 It is now even more likely Solaris will revert to it's niche on SPARC over
 the next few years.

 Mark.
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raidz2 drive failure zpool will not import

2010-04-17 Thread Kevin Denton
Thanks Richard,
I tried removing the replacement drive and received the same error.
Output of zdb -l /dev/rdsk/c5d1s0 results in:
ke...@opensolaris:~# zdb -l /dev/rdsk/c5d1s0
cannot open '/dev/rdsk/c5d1s0': No such device or address
All other drives have 4 readable labels 0-3
I even attempted the old trick of putting the failed drive in the freezer for 
an hour and it did spin up, but only for a minute and not long enough to be 
recognized by the system.
Not sure what to try next.
~kevin
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] raidz2 drive failure zpool will not import

2010-04-15 Thread Kevin Denton
After attempting unsuccessfully to replace a failed drive in a 10 drive raidz2 
array and reading as many forum entries as I could find I followed a suggestion 
to export and import the pool.

In another attempt to import the pool I reinstalled the OS, but I have so far 
been unable to import the pool.

Here is the output from format and zpool commands:

ke...@opensolaris:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c8d0s0ONLINE   0 0 0

errors: No known data errors
ke...@opensolaris:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c4d0 ST350083- 9QG0LW8-0001-465.76GB
  /p...@0,0/pci8086,2...@1e/pci-...@1/i...@0/c...@0,0
   1. c4d1 ST350063- 9QG1E50-0001-465.76GB
  /p...@0,0/pci8086,2...@1e/pci-...@1/i...@0/c...@1,0
   2. c5d0 ST350063- 9QG3AM7-0001-465.76GB
  /p...@0,0/pci8086,2...@1e/pci-...@1/i...@1/c...@0,0
   3. c5d1 ST350063- 9QG19MY-0001-465.76GB
  /p...@0,0/pci8086,2...@1e/pci-...@1/i...@1/c...@1,0
   4. c6d0 ST350063- 9QG19VY-0001-465.76GB
  /p...@0,0/pci8086,2...@1e/pci-...@2/i...@0/c...@0,0
   5. c6d1 ST350063- 5QG019W-0001-465.76GB
  /p...@0,0/pci8086,2...@1e/pci-...@2/i...@0/c...@1,0
   6. c7d0 ST350063- 9QG1DKF-0001-465.76GB
  /p...@0,0/pci8086,2...@1e/pci-...@2/i...@1/c...@0,0
   7. c7d1 ST350063- 5QG0B2Y-0001-465.76GB
  /p...@0,0/pci8086,2...@1e/pci-...@2/i...@1/c...@1,0
   8. c8d0 DEFAULT cyl 9961 alt 2 hd 255 sec 63
  /p...@0,0/pci-...@1f,1/i...@0/c...@0,0
   9. c10d0 ST350083- 9QG0LR5-0001-465.76GB
  /p...@0,0/pci-...@1f,2/i...@0/c...@0,0
  10. c11d0 ST350083- 9QG0LW6-0001-465.76GB
  /p...@0,0/pci-...@1f,2/i...@1/c...@0,0
Specify disk (enter its number): ^C
ke...@opensolaris:~# zpool import
  pool: storage
id: 18058787158441119951
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

storage  UNAVAIL  insufficient replicas
  raidz2-0   DEGRADED
c4d0 ONLINE
c4d1 ONLINE
c5d0 ONLINE
replacing-3  DEGRADED
  c5d1   ONLINE
  c5d1   FAULTED  corrupted data
c6d0 ONLINE
c6d1 ONLINE
c7d0 ONLINE
c7d1 ONLINE
c10d0ONLINE
c11d0ONLINE
ke...@opensolaris:~# zpool import -f
  pool: storage
id: 18058787158441119951
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

storage  UNAVAIL  insufficient replicas
  raidz2-0   DEGRADED
c4d0 ONLINE
c4d1 ONLINE
c5d0 ONLINE
replacing-3  DEGRADED
  c5d1   ONLINE
  c5d1   FAULTED  corrupted data
c6d0 ONLINE
c6d1 ONLINE
c7d0 ONLINE
c7d1 ONLINE
c10d0ONLINE
c11d0ONLINE
ke...@opensolaris:~# zpool import -f storage
cannot import 'storage': one or more devices is currently unavailable
Destroy and re-create the pool from
a backup source.


Prior to exporting the pool I was able to offline the failed drive.

Finally about a month ago I upgraded the zpool version to enable dedupe.

The suggestions I have read include playing with the metadata and this is 
something I would need help with as I am just an informed user.

I am hoping that as only one drive failed and this is a dual parity raid that 
there is someway to recover the pool.

Thanks in advance,
Kevin
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] vPool unavailable but RaidZ1 is online

2010-04-04 Thread Kevin
I am trying to recover a raid set, there are only three drives that are part of 
the set.  I attached a disk and discovered it was bad.  It was never part of 
the raid set.  The disk is now gone and when I try to import the pool I get the 
error listed below.  Is there a chance to recover?  TIA!

Sun Microsystems Inc.   SunOS 5.11  snv_112 November 2008
# zpool import
  pool: vpool
id: 14231674658037629037
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

vpool   UNAVAIL  missing device
  raidz1ONLINE
c0t0d0  ONLINE
c0t1d0  ONLINE
c0t2d0  ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
# bash
bash-3.2# zpool import -fF
  pool: vpool
id: 14231674658037629037
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

vpool   UNAVAIL  missing device
  raidz1ONLINE
c0t0d0  ONLINE
c0t1d0  ONLINE
c0t2d0  ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + fsck

2009-11-04 Thread Kevin Walker
Hi all,

Just subscribed to the list after a debate on our helpdesk lead me to the 
posting about  ZFS corruption and the need for a fsck repair tool of some 
kind...

Has there been any update on this?



Kind regards,
 
Kevin Walker
Coreix Limited
 
DDI: (+44) 0207 183 1725 ext 90
Mobile: (+44) 07960 967818
Fax: (+44) 0208 53 44 111

*
This message is intended solely for the use of the individual or organisation 
to whom it is addressed. It may contain privileged or confidential information. 
If you are not the intended recipient, you should not use, copy, alter, or 
disclose the contents of this message
*
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-29 Thread Kevin Maguire
Hi

We have been using a Solaris 10 system (Sun-Fire-V245) for a while as
our primary file server. This is based on Solaris 10 06/06, plus
patches up to approx May 2007. It is a production machine, and until
about a week ago has had few problems.

Attached to the V245 is a SCSI RAID array, which presents one LUN to
the OS.  On this lun is a zpool (tank), and within that 300+ zfs file
systems (one per user for automounted home directories). The system is
connected to our LAN via gigabit Ethernet,. most of our NFS clients
have just 100FD network connection.

In recent days performance of the file server seems to have gone off a
cliff.  I don't know how to troubleshoot what might be wrong? Typical
zpool iostat 120 output is shown below. If I run truss -D df I see
each call to statvfs64(/tank/bla) takes 2-3 seconds. The RAID itself
is healthy, and all disks are reporting as OK.

I have tried to establish if some client or clients are thrashing the
server via nfslogd, but without seeing anything obvious.  Is there
some kind of per-zfs-filesystem iostat?

End users are reporting just saving small files can take 5-30 seconds?
prstat/top shows no process using significant CPU load.  The system
has 8GB of RAM, vmstat shows nothing interesting.

I have another V245, with the same SCSI/RAID/zfs setup, and a similar
(though a bit less) load of data and users where this problem is NOT
apparent there?

Suggestions?
Kevin

Thu Jan 29 11:32:29 CET 2009
   capacity operationsbandwidth
pool used  avail   read  write   read  write
--  -  -  -  -  -  -
tank2.09T   640G 10 66   825K  1.89M
tank2.09T   640G 39  5  4.80M   126K
tank2.09T   640G 38  8  4.73M   191K
tank2.09T   640G 40  5  4.79M   126K
tank2.09T   640G 39  5  4.73M   170K
tank2.09T   640G 40  3  4.88M  43.8K
tank2.09T   640G 40  3  4.87M  54.7K
tank2.09T   640G 39  4  4.81M   111K
tank2.09T   640G 39  9  4.78M   134K
tank2.09T   640G 37  5  4.61M   313K
tank2.09T   640G 39  3  4.89M  32.8K
tank2.09T   640G 35  7  4.31M   629K
tank2.09T   640G 28 13  3.47M  1.43M
tank2.09T   640G  5 51   433K  4.27M
tank2.09T   640G  6 51   450K  4.23M
tank2.09T   639G  5 52   543K  4.23M
tank2.09T   640G 26 57  3.00M  1.15M
tank2.09T   640G 39  6  4.82M   107K
tank2.09T   640G 39  3  4.80M   119K
tank2.09T   640G 38  8  4.64M   295K
tank2.09T   640G 40  7  4.82M   102K
tank2.09T   640G 43  5  4.79M   103K
tank2.09T   640G 39  4  4.73M   193K
tank2.09T   640G 39  5  4.87M  62.1K
tank2.09T   640G 40  3  4.88M  49.3K
tank2.09T   640G 40  3  4.80M   122K
tank2.09T   640G 42  4  4.83M  82.0K
tank2.09T   640G 40  3  4.89M  42.0K
...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS iSCSI (For VirtualBox target) and SMB

2009-01-05 Thread Kevin Pattison
Thanks Sanjeevb,

By the way, this only seems to fail when I set up a volume instead of a file 
system. Should I be setting up a volume in this case, or will a file system 
suffice?

If I turn off snapshots for this then it should work. I'll try this.

Regards,
Kevin
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS iSCSI (For VirtualBox target) and SMB

2009-01-02 Thread Kevin Pattison
Hey all,

I'm setting up a ZFS based fileserver to use both as a shared network drive and 
separately to have an iSCSI target to be used as the Hard disk of a windows 
based VM runninf on another machine.

I've built the machine, installed the OS, created the RAIDZ pool and now have a 
couple of questions (I'm pretty much new to Solaris by the way but have been 
using Linux for some time). In my attempt to create the iSCSI target to be used 
and the VM disk I created (through the web frontend) a new dataset under the 
main pool of type Volume and gave it 30GB of space and called it iTunesVM. I 
then tried to run:
zfs set shareiscsi=on tank/iTunesVM
but got the error:
cannot share 'tank/iTunesVM': iscsitgtd failed request to share
cannot share 'tank/itune...@zfs-auto-snap:weekly-2009-01-02-15:02': iscsitgtd 
failed request to share

I've checked and my iSCSI target service is on and running.

With regards a network share accessible to both Windows, Linux and Mac OS 
machines on the network, what protocol would be best to use (NFS or SMB). I 
would then like to set up a locally hosted headless windows VM to run a Windows 
Media Player/iTunes share over the network for access to the music from my 
xbox/PS3.

All help appreciated,
Kevpatts
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] lofiadm -d keeps a lock on file in an nbmand-mounted zfs

2008-12-29 Thread Kevin Sze
Hi,

Has anyone seen the following problem?

After lofiadm -d removes an association, the file is still locked and cannot 
be moved or deleted if the file resides in a ZFS mounted with nbmand=on.

There are two ways to remove the lock.  (1) remount the zfs by the 
unmount+mount; the lock is removed even if nbmand=on option is given again, or 
(2) reboot the system.

I don't have a system with UFS to test the nbmand mount-option to see if the 
problem exists for UFS as well.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any commands to dump all zfs snapshots like NetApp snapmirror

2008-09-11 Thread Haiou Fu (Kevin)
Excuse me but could you please copy and paste the part of zfs send -l  ?  
I couldn't find it in the link you send me:

http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?a=view

What release is this send -l  option available ?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any commands to dump all zfs snapshots like NetApp snapmirror

2008-09-10 Thread Haiou Fu (Kevin)
Can you explain more about zfs send -lI know zfs send -i but didn't 
know there is a -l option? In which release is this option available?
Thanks!
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any commands to dump all zfs snapshots like NetApp snapmirror

2008-09-10 Thread Haiou Fu (Kevin)
The closest thing I can find is:
http://bugs.opensolaris.org/view_bug.do?bug_id=6421958

But just like it says:   Incremental +
recursive will be a bit tricker, because how do you specify the multiple
source and dest snaps?  

Let me clarify this more:

Without send -r I need do something like this;

   Given  a zfs file system myzfs in zpool  mypool,  it has N snapshots:
mypool/myzfs
mypool/[EMAIL PROTECTED]
mypool/[EMAIL PROTECTED]

mypool/[EMAIL PROTECTED],

   Do following things:

   zfs snapshot mypool/[EMAIL PROTECTED] 
   zfs send mypool/[EMAIL PROTECTED] | gzip -   /somewhere/myzfs-current.gz
   zfs send -i mypool/[EMAIL PROTECTED] mypool/[EMAIL PROTECTED] | gzip -  
/somewhere/myzfs-1.gz
   zfs send -i mypool/[EMAIL PROTECTED] mypool/[EMAIL PROTECTED] | gzip -  
/somewhere/myzfs-2.gz 
  ..
   zfs send -i mypool/[EMAIL PROTECTED] mypool/[EMAIL PROTECTED] | gzip -  
/somewhere/myzfs-N.gz

   As you can see, above commands are kind of a stupid solution, and it 
didn't reach maximum efficiency because those myzfs-1 ~ N.gz files contain 
a lot of common stuffs in them!
I wonder how will send -r do in above situation?  How does it choose 
multiple source and dest snaps? And can -r efficient enough to just dump the 
incremental changes?  What is the corresponding receive command for send -r?
(receive -r ? I guess? )

Thanks!
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] raid card vs zfs

2008-06-22 Thread kevin williams
digg linked to an article related to the apple port of ZFS 
(http://www.dell.com/content/products/productdetails.aspx/print_1125?c=uscs=19l=ens=dhss).
  I dont have a mac but was interested in ZFS. 

The article says that ZFS eliminates the need for a RAID card and is faster 
because the striping is running on the main cpu rather than an old chipset on a 
card.  My question is, is this true?  Can I install opensolaris with zfs and 
stripe and mirror a bunch of sata disc for a home NAS server?  I sure would 
like to do that but the cost of the good raid cards has put me off; maybe this 
is the solution.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling ZFS ACL

2008-05-28 Thread kevin kramer
that is my thread and I'm still having issues even after applying that patch. 
It just came up again this week.

[locahost] uname -a
Linux dv-121-25.centtech.com 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:37:38 EST 
2008 x86_64 x86_64 x86_64 GNU/Linux
[localhost] cat /etc/issue
CentOS release 5 (Final)
Kernel \r on an \m

[localhost: /n/scr20] touch test
[localhost: /n/scr20] mv test /n/scr01/test/ ** this is a UFS mount on FreeBSD

mv: preserving permissions for `/n/scr01/test/test': Operation not supported
mv: preserving ACL for `/n/scr01/test/test': Operation not supported
mv: preserving permissions for `/n/scr01/test/test': Operation not supported

If I move it to the local /tmp, I get no errors.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ACL invalid argument from client

2008-04-16 Thread kevin kramer
new problem. We have patched the system and it has fixed the error creating 
dirs/files on the ZFS filesystem. now I am getting permission errors with mv/cp 
from one of these ZFS areas to a regular FreeBSD server using UFS. thoughts?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Degraded zpool won't online disk device, instead resilvers spare

2007-12-11 Thread Kevin
|spare  -  -  0137  0  5.95M
|--c5t29d0  -  -  0  0  0  0
|--c5t21d0  -  -  0136  0  5.95M
|c4t37d0-  - 74 23  3.86M   190K
-  -  -  -  -  -  -

So notice that there is 0 disk traffic for the disk we are trying to bring 
online (c5t29d0), but there is write disk traffic for the spare disk AND the 
other spare disk. So it looks like it's resilvering both mirror disks again? 
(why would it need to do that?)

So I try using the replace command instead of the online command to tell it to 
bring itself online (and resilver only what has changed since it was brought 
online). But now it's complaining that the disk is already part of the same 
pool (since it's reading the old yet valid on-disk metadata for that disk which 
is still valid):

ROOT $ zpool replace tank.2 c5t29d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c5t29d0s0 is part of active ZFS pool tank.2. Please see zpool(1M).

I could try the -f command to force it, but I want it to only resilver those 
parts that have changed.

I tried detaching the mirror in hope that it would recognize that the c5t29d0 
is online again:

ROOT $ zpool detach tank.2 c5t21d0

However, running zpool status again shows that the spare has been removed, but 
no change other than that. When I reattach the spare device immediately, the 
resilver process begins again (it looks like again from zpool iostat or iostat 
-xn that it is resilvering both of the attached spares, not just the one that 
I'm attaching again). Also, this resilver process takes quite a long time (like 
it has to resilver everything all over again, as opposed to just changes). Does 
the resilver logic work differently if there is a spare involved?

Any idea what is going wrong here? It seems that zfs should be able to online 
the disks since the OS can read/write perfectly fine to those devices. And it 
seems that if the online fails it shouldn't cause a resilver of both of the 
attached spares.

You will notice that the pool was renamed by doing zpool export tank, zpool 
import tank tank.2. Could this be causing ZFS to get confused when the device 
is brought online?

We are willing to try zpool replace -f on the disks that need to be brought 
online during the weekend to see what happens.

Here is the system info:
ROOT $ uname -a
SunOS x.x.com 5.10 Generic_120012-14 i86pc i386 i86pc

Will send showrev -p output if desired.

Thanks,
Kevin
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [xen-discuss] xVm blockers!

2007-11-29 Thread Kevin Fox

On Nov 28, 2007, at 5:38 AM, K wrote:


 1/ Anchor vNic, the equivalent of linux dummy interfaces, we need more
 flexibility in the way we setup xen networking. What is sad is that
 the code is already available in the unreleased crossbow bits... but
 it won't appear in nevada until Q1 2008 :(

Indeed ... a frustration for many, including myself, who need specific
pieces of functionality from larger projects that have larger schedules.

We've been using a dummy NIC driver for the development of the Virtual
Router project which should be coming online soon.  I'm in the  
process of
getting a legal approval for making the driver available sooner ...  
It's based
on afe, which is already in the OpenSolaris repository, so I don't  
see any
problems with making it available, but I need to check first.

The intent of the driver, of course, is to bridge the gap until  
Crossbow Anchor
VNICs appear in Nevada, so any long term dependency on the driver  
should be
discouraged, but having to allocate hardware NICs for virtual  
interfaces in the
mean time is certainly a more substantial discouragement.

Kev

 This is a real blocker for me as my ISP just started implementing port
 security and locks my connection everytime it sees a foreign mac
 address using one of the IP addresses that were originally assigned to
 my dom0. On linux, I can setup a dummy interface and create a bridge
 with it for a domU but on Solaris I need a physical NIC per bridge ! 
 $!!
 @#$!

 For this particular feature, I am ready to give a few hundred dollars
 as booty if anyone has a workaround.

 2/ Pci passthru, this is really useful so you can let a domU access a
 PCI card. It comes really handy if you want to virtualize a PBX that
 is using cheap zaptel FXO cards. Again on linux, xen pci passthru has
 been available for a while. Last time I mention this on the xen
 solaris discussion, I received a very dry reply.

 3/ Problem with DMA under Xen ... e.g. my areca raid cards works
 perfect on a 8GB box without xen but because of the way xen allocates
 memory... I am forced to allocate only 1 or 2 gig for the dom0 or the
 areca drivers will fail miserably trying to do DMA above the first 4G
 address space. This very same problem affected xen under linux over a
 year ago and seems to have been addressed. Several  persons on the ZFS
 discuss list who complain about poor ZFS IO performance are affected
 by this issue.

 4/ Poor exploit mitigation under Solaris. In comparaison, OpenBSD,
 grsec linux and Windows = XP SP2 have really good exploit
 mitigation It is a shame because solaris offered a non-exec stack
 before nearly everyone else... but it stopped there... no heap
 protection, etc...

 The only thing that is preventing me from switching back to linux (no
 zfs), freebsd (no xen) or openbsd (no xen and no zfs), right now is
 ZFS and it is the same reason I switched to Solaris in the first  
 place.



 ___
 xen-discuss mailing list
 [EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mysterious corruption with raidz2 vdev

2007-07-30 Thread Kevin
We'll try running all of the diagnostic tests to rule out any other issues.

But my question is, wouldn't I need to see at least 3 checksum errors on the 
individual devices in order for there to be a visible error in the top level 
vdev? There doesn't appear to be enough raw checksum errors on the disks for 
there to have been 3 errors in the same vdev block. Or am I not understanding 
the checksum count correctly?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mysterious corruption with raidz2 vdev (1 checksum err on disk, 2 on vd

2007-07-26 Thread Kevin
Here's some additional output from the zpool and zfs tools:

$ zpool list
NAMESIZEUSED   AVAILCAP  HEALTH ALTROOT
tank   10.2T   8.58T   1.64T83%  ONLINE -

$ zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank  5.11T   901G  5.11T  /tank

Record size is 128K, checksums are on, compression is off, atime is off. This 
is the only zpool/filesystem in the system. Thanks.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Mysterious corruption with raidz2 vdev (1 checksum err on disk, 2 on vdev?)

2007-07-25 Thread Kevin
After a scrub of a pool with 3 raidz2 vdevs (each with 5 disks in them) I see 
the following status output. Notice that the raidz2 vdev has 2 checksum errors, 
but only one disk inside the raidz2 vdev has a checksum error. How is this 
possible? I thought that you would have to have 3 errors in the same 'stripe' 
within a raidz2 vdev in order for the error to become unrecoverable.

And I have not reset any errors with zpool clear ...

Comments will be appreciated. Thanks.

$ zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed with 1 errors on Mon Jul 23 19:59:07 2007
config:

NAME STATE READ WRITE CKSUM
tank ONLINE   0 0 2
  raidz2 ONLINE   0 0 2
c2t0d0   ONLINE   0 0 1
c2t1d0   ONLINE   0 0 0
c2t2d0   ONLINE   0 0 0
c2t3d0   ONLINE   0 0 0
c2t4d0   ONLINE   0 0 0
  raidz2 ONLINE   0 0 0
c2t5d0   ONLINE   0 0 0
c2t6d0   ONLINE   0 0 0
c2t7d0   ONLINE   0 0 0
c2t8d0   ONLINE   0 0 0
c2t9d0   ONLINE   0 0 0
  raidz2 ONLINE   0 0 0
c2t10d0  ONLINE   0 0 0
c2t11d0  ONLINE   0 0 0
c2t12d0  ONLINE   0 0 1
c2t13d0  ONLINE   0 0 0
c2t14d0  ONLINE   0 0 0
spares
  c2t15d0AVAIL   

errors: The following persistent errors have been detected:

  DATASET  OBJECT   RANGE
  55fe9784  lvl=0 blkid=40299
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] NFS share problem with mac os x client

2007-02-07 Thread Kevin Bortis
Hello, I test right now the beauty of zfs. I have installed opensolaris on a 
spare server to test nfs exports. After creating tank1 with zpool and a 
subfilesystem with zfs tank1/nfsshare, I have set the option sharenfs=on to 
tank1/nfsshare.

With Mac OS X as client I can mount the filesystem in Finder.app with 
nfs://server/tank1/nfsshare, but if I copy a file an error ocours. Finder say 
The operation cannot be completed because you do not have sufficient 
privileges for some of the items..

Until now I have shared the filesystems always with samba so I have almost no 
experience with nfs. Any ideas?

Kevin
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] solaris - ata over ethernet - zfs - HPC

2007-02-05 Thread Kevin Abbey

Hi,

I'd like to consider using the coraid products with solaris and ZFS but 
I need them to work with x86_64 on on generic opteron/amd compatible 
hardware.   Currently the AOE driver is beta for sparc only.  I am 
planning to use the ZFS file system so the raid hardware in the coraid 
device will not be used as recommended for ZFS.   Only the direct access 
over ethernet to the disks will be used.  The installation will be part 
of new HPC cluster for computational chemistry and biology.


Does this seem like a good idea?  I am not a storage expert and am 
attempting to create a scalable distributed storage cluster for an HPC 
cluster.  I expect have a limited budget and commodity hardware, except 
for the coraid box.


Relevant links:
http://en.wikipedia.org/wiki/Ata_over_ethernet
http://www.coraid.com/
http://www.coraid.com/support/solaris/


All comments are welcome.
Thank you,
Kevin



--
Kevin C. Abbey
System Administrator
Rutgers University - BioMaPS Institute

Email: [EMAIL PROTECTED]


Hill Center - Room 279
110 Frelinghuysen Road
Piscataway, NJ  08854
Phone and Voice mail: 732-445-3288   



Wright-Rieman Laboratories Room 201
610 Taylor Rd.
Piscataway, NJ  08854-8087
Phone: 732-445-2069
Fax: 732-445-5958 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss