[zfs-discuss] Re: ZFS vs. Apple XRaid

2006-09-22 Thread Sergey
Please read also http://docs.info.apple.com/article.html?artnum=303503.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Veritas NetBackup Support for ZFS

2006-09-22 Thread Nicolas Dorfsman
 I am using Netbackup 6.0 MP3 on several ZFS systems
 just fine.  I
 think that NBU won't back up some exotic ACLs of ZFS,
 but if you
 are using ZFS like other filesystems (UFS, etc) then there aren't  any issues.

  Hum. ACLs are not so exotic.

  This IS a really BIG issue.  If you are using ACLs, even POSIX ones, moving 
production to ZFS filesystems means losing all ACLs in backups.

   In other words, if you're only using 30-year-old UNIX permissions, no problem.

   If I had to give a list of complaints about ZFS, that would be the first on 
my list!   Sun SHOULD put pressure on backup software vendors (or send them 
some engineers) to support ZFS.
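
For anyone who wants to verify what their backup tool actually preserves, a 
quick hand test might look like the following (the dataset name tank/acltest 
and the user are hypothetical):

# zfs create tank/acltest
# touch /tank/acltest/file1
# chmod A+user:webservd:read_data/write_data:allow /tank/acltest/file1
# ls -v /tank/acltest/file1      # shows the NFSv4 ACL entries ZFS stores

Back that file up and restore it elsewhere with the tool under test, then run 
'ls -v' on the restored copy; if the added ACL entry is gone, the backup is 
silently dropping ZFS ACLs.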
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Newbie in ZFS

2006-09-22 Thread Alf

Hi all,
as I am a newbie to ZFS, yesterday I played with it a little bit; there 
are so many good things, but I've noted a few things I couldn't explain.


1) It's no longer possible to create a file system with a specific size 
within a pool. If I have 2 file systems I can't decide to give, for 
example, 10g to one and 20g to the other unless I set a reservation 
for them. I also tried to manually create pools on slices and have for 
each pool a FS with the size I wanted. Is that true?


2) I mirrored 2 disks within the same D1000 and, while I was putting a 
big tar ball in the FS, I tried to physically remove one mirror and 
... I had to turn off the system as I couldn't log in, even through 
the console. Is there something wrong with what I did?


cheers

--
Alf



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Newbie in ZFS

2006-09-22 Thread Alf

Hi,
Dick Davies wrote:

On 22/09/06, Alf [EMAIL PROTECTED] wrote:


1) It's no longer possible to create a file system with a specific size
within a pool. If I have 2 file systems I can't decide to give, for
example, 10g to one and 20g to the other unless I set a reservation
for them. I also tried to manually create pools on slices and have for
each pool a FS with the size I wanted. Is that true?


zfs set quota=5G poolname/fsname

will give you a filesystem that shows up as 5GiB in 'df' - is that
what you want?

I tried quota and it works fine, but if you have another fs that takes 
all the space you have in the pool, your FS will not have space for 
itself. So setting a reservation for the FS is fine, but compared with other 
volume managers it is different... that's what I am asking!


 You mean pull it out? Does your hardware support hotswap?

As far as I know the D1000 supports it... does it?

cheers









--
Alfredo De Luca


==
May you live in interesting times.
==


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Newbie in ZFS

2006-09-22 Thread Alf

Hi Michael,
I completely agree with you. I was just wondering about the differences 
between ZFS and other volume managers, and also whether I got the essence of it.


Also, customers could ask about these things, and whether they can use ZFS 
filesystems in the old-fashioned way, setting a specific size.


What do you think about pulling out a mirror on the D1000 and the complete 
hang of the system?


-- Alf

Michael Schuster wrote:

Alf wrote:

Hi,
Dick Davies wrote:

On 22/09/06, Alf [EMAIL PROTECTED] wrote:


1) It's no longer possible to create a file system with a specific size
within a pool. If I have 2 file systems I can't decide to give, for
example, 10g to one and 20g to the other unless I set a reservation
for them. I also tried to manually create pools on slices and have for
each pool a FS with the size I wanted. Is that true?


zfs set quota=5G poolname/fsname

will give you a filesystem that shows up as 5GiB in 'df' - is that
what you want?

I tried quota and it works fine, but if you have another fs that takes 
all the space you have in the pool, your FS will not have space for 
itself. So setting a reservation for the FS is fine, but compared with 
other volume managers it is different... that's what I am asking!


with volume managers (ZFS is more than a VM), you don't get to share 
all the free space between file systems; being able to do that is one of 
the BIG advantages of zfs, IMO. I think using reservations is a very minor 
nuisance in comparison to the administrative effort you have to go 
to in other volume managers to move free space from one FS to another.
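
To make the quota/reservation combination concrete, a minimal sketch (the pool 
and filesystem names are hypothetical) would be:

# zfs set quota=10G pool/fs1          # fs1 can never use more than 10G
# zfs set reservation=10G pool/fs1    # ...and 10G of the pool is guaranteed to it
# zfs set quota=20G pool/fs2
# zfs set reservation=20G pool/fs2
# zfs get quota,reservation pool/fs1 pool/fs2

Everything not reserved stays in the shared pool, which is exactly the 
flexibility a traditional volume manager doesn't give you.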






--
Alfredo De Luca


==
May you live in interesting times.
==


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Newbie in ZFS

2006-09-22 Thread Michael Schuster

Alf wrote:

What do you think about pulling out a mirror on the D1000 and the complete 
hang of the system?


I deliberately left that for others to answer - I don't know the HW anywhere 
near well enough :-)


--
Michael Schuster  +49 89 46008-2974 / x62974
visit the online support center:  http://www.sun.com/osc/

Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Newbie in ZFS

2006-09-22 Thread Alf

Hi James,
I agree with you, but I think it could take a while...

cheers

Alf


James C. McPherson wrote:

Alf wrote:

Hi Michael,
I completely agree with you. I was just wondering about the 
differences between ZFS and other volume managers, and also whether I got 
the essence of it.
Also, customers could ask about these things, and whether they can use ZFS 
filesystems in the old-fashioned way, setting a specific size.


That is part of the problem - ZFS _requires_ a complete re-working
of your understanding of how storage works, because the old limitations
are no longer valid.

If the customer actually wants to get benefit from ZFS then they have
to be prepared to undergo a paradigm shift.


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
  http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson





--
Alfredo De Luca


==
May you live in interesting times.
==


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Newbie in ZFS

2006-09-22 Thread David Dyer-Bennet

On 9/22/06, Dick Davies [EMAIL PROTECTED] wrote:

On 22/09/06, Alf [EMAIL PROTECTED] wrote:




 2) I mirrored 2 disks within the same D1000 and, while I was putting a
 big tar ball in the FS, I tried to physically remove one mirror and

You mean pull it out? Does your hardware support hotswap?


And even more to the point, do the Solaris drivers support hotswap on
your hardware?  When I was first inquiring about hotswap hardware
(about which I knew nothing then) nobody warned me about this, and I'm
now the proud owner of a fine case with 8 hot-swap bays which work
fine -- but it turns out that Solaris doesn't support hotswap on the
SATA controllers on my motherboard, although it accesses the disks
through them fine.

In fact, it turns out (read recent postings) that even chipsets it
claims to support are still being issued on new hardware in steppings
that aren't actually supported.

This area seems to be a major minefield currently.
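
If you want to sanity-check a given box before trusting it, one rough test 
(a sketch only; the attachment point name is hypothetical) is to see whether 
the controller even exposes hot-pluggable attachment points:

# cfgadm -al                     # do the disk ports show up as attachment points?
# cfgadm -c unconfigure sata0/3  # offline a disk cleanly before pulling it
# cfgadm -c configure sata0/3    # bring the replacement online

If cfgadm doesn't list the ports at all, the driver almost certainly isn't 
doing hotplug for you, whatever the case and backplane claim to support.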
--
David Dyer-Bennet, mailto:[EMAIL PROTECTED], http://www.dd-b.net/dd-b/
RKBA: http://www.dd-b.net/carry/
Pics: http://www.dd-b.net/dd-b/SnapshotAlbum/
Dragaera/Steven Brust: http://dragaera.info/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Newbie in ZFS

2006-09-22 Thread Darren Dunham
   You mean pull it out? Does your hardware support hotswap?
 
 As far as I know D1000 support itdoes it?

I'm sure the D1000 is fine with the concept.  It's probably something in
the software stack that is upset.

I was told that a similar issue I once had when testing was likely
due to limitations in how ZFS and the sd driver communicate: the
sd driver will take a really long time to time out each of what may
be several I/Os to it.
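
If long per-I/O timeouts really are what hangs things, one tunable that comes 
up in this context is the sd driver's per-command timeout.  Treat the 
following /etc/system fragment as a sketch to research and test, not a 
recommendation:

* sd_io_time defaults to 60 seconds per command; several retried commands
* to a pulled disk can therefore stall callers for minutes.
set sd:sd_io_time=10

A reboot is needed for /etc/system changes to take effect, and lowering the 
timeout too far can cause healthy-but-busy disks to be flagged as failed.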

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
  This line left intentionally blank to confuse you. 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: live upgrade incompability

2006-09-22 Thread Aric Gregson
I believe I am experiencing a similar, but more severe, issue and I do 
not know how to resolve it. I used liveupgrade from s10u2 to NV b46 
(via the Solaris Express release). My second disk is zfs with the file 
system fitz. I did a 'zpool export fitz'.


Rebooting with init 6 into the new environment, NV b46, I get the following 
error:

cannot mount '/fitz' : directory is not empty
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a 
failed: exit status 1
svc.startd[7]: svc:/system/filesystem/local:default: Method 
/lib/svc/method/fs-local failed with exit status 95.


zfs list = nothing listed.

There is already a /fitz directory filled with the zpool fitz files, so it 
cannot be mounted. Since the filesystem/local svc won't start, I cannot start X, 
which is critical to using the computer. I now see that there was no 
real need to export the pool fitz and that I should have just imported 
it once in the new BE. How can I now solve this issue? (BTW, attempting 
to boot back into s10u2, the original BE, results in a kernel panic, so 
I cannot go back).


thanks,

aric

--On September 21, 2006 10:01:28 AM -0700 Haik Aftandilian 
[EMAIL PROTECTED] wrote:



I did a liveupgrade from NV b41 to b47 and I still ran into this
problem on one of my ZFS mounts. Both mounts failed to mount in the
new BE because directories were created for the mount points, but
only one of the mounts actually had its data copied into the BE. I
checked /etc/default/lu and I do have the fix for

  6335531 Liveupgrade should not copy zfs file systems into new BEs

which was putback to build 27. Here's my configuration

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
scratch   3.07G  39.5G  3.07G  /scratch
twosyncs  52.1G   176G  24.5K  /twosyncs
twosyncs/home 52.1G   176G  25.5K  /export/home
twosyncs/home/haik52.1G   176G  51.4G  /export/home/haik

The data in /scratch was copied into a /scratch directory in the new
BE. /export/home/haik wasn't copied into the new BE, but directories
were created in the new BE preventing it from mounting on boot.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: live upgrade incompability

2006-09-22 Thread Haik Aftandilian
 I believe I am experiencing a similar, but more
 severe issue and I do 
 not know how to resolve it. I used liveupgrade from
 s10u2 to NV b46 
 (via solaris express release). My second disk is zfs
 with the file 
 system fitz. I did a 'zpool export fitz'
 
 Reboot with init 6 into new environment, NV b46, I
 get the following 
 error:
 cannot mount '/fitz' : directory is not empty
 svc:/system/filesystem/local:default: WARNING:
 /usr/sbin/zfs mount -a 
 failed: exit status 1
 svc.startd[7]: svc:/system/filesystem/local:default:
 Method 
 /lib/svc/method/fs-local failed with exit status
 95.
 
 zfs list = nothing listed.
 
 There is already a /fitz directory filled with the
 zpool fitz files on 
 mounted. Since filesystem/local svc won't start, I
 cannot start X, 
 which is critical to using the computer. I now see
 that there was no 
 real need to export the pool fitz and that I should
 have just imported 
 it once in the new BE. How can I now solve this
 issue? (BTW, attempting 
 to boot back into s10u2, the original BE, results in
 a kernel panic, so 
 I cannot go back).

Aric,

It sounds like you can resolve this issue by simply booting into the new BE, 
deleting the /fitz directory, and then rebooting back into the new BE. 
I say this because, from your message, it sounds like the data from your zfs 
filesystem in /fitz was copied to /fitz in the new BE (instead of just being 
mounted in the new BE). BEFORE DELETING ANYTHING, please make sure /fitz is not 
a zfs mount but just a plain directory, and therefore just a copy of what is in 
your zpool. Be careful, I don't want you to lose any data.
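
A quick way to check that (a sketch; on Solaris 'df -n' prints the filesystem 
type for a mount point) would be something like:

# zfs mount | grep fitz        # is anything from the pool mounted under /fitz?
# df -n /fitz                  # does /fitz report zfs, or the BE's root filesystem?
# zfs list -o name,mountpoint,mounted

If /fitz is not a zfs mount, what's in it is only the stray copy.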

Also, what does zpool list report?

Lastly, ZFS people might be interested in the panic message you get when you 
boot back into Solaris 10.

Haik



 
 thanks,
 
 aric

 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: live upgrade incompability

2006-09-22 Thread Aric Gregson

Apologies for any confusion, but I am now able to give more output
regarding the zpool fitz.

unknown# zfs list -- returns list of zfs file system fitz and related
snapshots

unknown# zpool status
pool: fitz
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool
can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scrub: none requested
config:
NAME      STATE
fitz      ONLINE
  c2d0s7  ONLINE
errors: No known data errors

unknown# zpool upgrade -v
This system is currently running ZFS version 3.

the following versions are supported:
..

unknown# zfs mount -- lists the zfs pool as mounted as it should be at
/fitz

but 'zfs unmount fitz' returns 'cannot unmount 'fitz' : not currently
mounted

zpool import -- no pools available to import
zpool import -d /fitz -- no pools available to import

thanks,

aric



I believe I am experiencing a similar, but more severe issue and I do
not know how to resolve it. I used liveupgrade from s10u2 to NV b46
(via solaris express release). My second disk is zfs with the file
system fitz. I did a 'zpool export fitz'

Reboot with init 6 into new environment, NV b46, I get the following
error:
cannot mount '/fitz' : directory is not empty
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a
failed: exit status 1
svc.startd[7]: svc:/system/filesystem/local:default: Method
/lib/svc/method/fs-local failed with exit status 95.

zfs list = nothing listed.

There is already a /fitz directory filled with the zpool fitz files on
mounted. Since filesystem/local svc won't start, I cannot start X,
which is critical to using the computer. I now see that there was no
real need to export the pool fitz and that I should have just imported
it once in the new BE. How can I now solve this issue? (BTW, attempting
to boot back into s10u2, the original BE, results in a kernel panic, so
I cannot go back).

thanks,

aric


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Building large home file server with SATA

2006-09-22 Thread Alexei Rodriguez
 Alexei Rodriguez wrote:
 Unless they break the spec, yes, it should work.  PCI

Excellent to know! I will verify that the motherboard and the PCI-X cards play 
well together.

Thanks!


Alexei
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Building large home file server with SATA

2006-09-22 Thread Frank Cusack
On September 22, 2006 10:26:01 AM -0700 Alexei Rodriguez 
[EMAIL PROTECTED] wrote:

Alexei Rodriguez wrote:
Unless they break the spec, yes, it should work.  PCI


Excellent to know! I will verify that the motherboard and the PCI-X cards
play well together.


You might run into a problem with 3.3V cards vs 5V slots.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] slow reads question...

2006-09-22 Thread Harley Gorrell

   I have set up a small box to work with zfs.  (2x 2.4GHz
xeons, 4GB memory, 6x scsi disks) I made one drive the boot
drive and put the other five into a pool with the zpool
create tank command right out of the admin manual.

   The administration experience has been very nice and most
everything has worked as expected.  (Setting up new
filesystems, swapping out failed drives, etc.)  What isn't as
expected is the slow speed.

   When using a raw device, a scsi disk on the system reads
at 34MB/s.  About what I would expect for these disks.

| # time dd if=/dev/rdsk/c0t1d0 of=/dev/null bs=8k count=102400
| 102400+0 records in
| 102400+0 records out
|
| real    0m23.182s
| user    0m0.135s
| sys     0m1.979s

   However when reading from a 10GB file of zeros, made with
mkfile, the read performance is much lower, 11MB/s.

| # time dd if=zeros-10g of=/dev/null bs=8k count=102400
| 102400+0 records in
| 102400+0 records out
|
| real    1m8.763s
| user    0m0.104s
| sys     0m1.759s

   After reading the list archives, I saw ztune.sh.  Using
it I tried a couple of different settings and didn't see any
changes.  After that I toggled the compression, atime,
recordsize and checksum options on and off to no avail.

   Am I expecting too much from this setup?  What might be
changed to speed things up?  Wait until snv_45?

   The version of open solaris is:

| # uname -a
| SunOS donatella 5.11 snv_44 i86pc i386 i86pc

   The options on the filesystem are:

| # zfs get all tank/home
| NAME   PROPERTY   VALUE  SOURCE
| tank/home  type   filesystem - 
| tank/home  creation   Fri Sep 22 10:47 2006  - 
| tank/home  used   39.1K  - 
| tank/home  available  112G   - 
| tank/home  referenced 39.1K  - 
| tank/home  compressratio  1.00x  - 
| tank/home  mountedyes- 
| tank/home  quota  none   default 
| tank/home  reservationnone   default 
| tank/home  recordsize 128K   default 
| tank/home  mountpoint /export/zfs  local
| tank/home  sharenfs   on local
| tank/home  checksum   on default 
| tank/home  compressionoffdefault 
| tank/home  atime  on default 
| tank/home  deviceson default 
| tank/home  exec   on default 
| tank/home  setuid on default 
| tank/home  readonly   offdefault 
| tank/home  zoned  offdefault 
| tank/home  snapdirhidden default 
| tank/home  aclmodegroupmask  default 
| tank/home  aclinherit secure default


thanks,
harley.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: slow reads question...

2006-09-22 Thread johansen
ZFS uses a 128k record size by default.  If you change dd to use bs=128k, do you observe 
any performance improvement?

 | # time dd if=zeros-10g of=/dev/null bs=8k
 count=102400
 | 102400+0 records in
 | 102400+0 records out

 | real    1m8.763s
 | user    0m0.104s
 | sys     0m1.759s

It's also worth noting that this dd used less system and user time than the 
read from the raw device, yet took a longer time in real time.
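
For reference, the rerun being suggested (reusing the file and dataset from 
the original post) would look something like:

# zfs get recordsize tank/home      # 128K unless it was changed
# time dd if=zeros-10g of=/dev/null bs=128k count=81920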
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] I'm dancin' in the streets

2006-09-22 Thread Anantha N. Srirama
Wow! I solved a tricky problem this morning thanks to Zones and ZFS integration. 
We have a SAS SPDS database environment running on Sol10 06/06. The SPDS 
database is unique in that when a table is being updated by one user it is 
unavailable to the rest of the user community. Our nightly update jobs 
(occasionally they turn into day jobs when they take longer :-() were getting 
in the way of our normal usage.

So I put on my ZFS cap and figured it could be simply solved by deploying the 
'clone' feature. Simply stated, I'd create a clone of all the SPDS filesystems 
and start another instance of SPDS to read/write from the cloned data. 
Unfortunately I hit a wall when I realized that there is no way to update the 
SPDS metadata (a binary file containing a description of the physical structure 
of the database) with the new directory path.

I was stumped until it occurred to me that I could solve it by simply marrying 
the clones with a Solaris Zone. Now our problem is solved as follows (a 
command-level sketch follows the list):

1. Stop local zone
2. Reclaim the ZFS clones in the global-zone
3. Destroy the clone/snapshot
4. Recreate the clone/snapshot
5. Restart the local zone
6. Start SPDS in the local zone and it works beautifully because it sees all 
the files it needs per its metadata!!!
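
A minimal sketch of steps 1-5 (the zone, pool, and filesystem names are 
hypothetical, and the clone is assumed to be delegated to the zone, e.g. via 
zonecfg's "add dataset", so it returns to global-zone control when the zone 
is halted):

# zoneadm -z spdszone halt                             # 1+2. stop the zone, reclaiming the clone
# zfs destroy spdspool/spds-clone                      # 3a. drop the old clone
# zfs destroy spdspool/spds@nightly                    # 3b. ...and its snapshot
# zfs snapshot spdspool/spds@nightly                   # 4a. snapshot the current data
# zfs clone spdspool/spds@nightly spdspool/spds-clone  # 4b. recreate the clone
# zoneadm -z spdszone boot                             # 5. restart the local zone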

To accomplish the same with traditional methods would have required a SAN disk, 
a disk merge/split, ... You get the picture: ugly!

Chalk up one more victory for Solaris 10 Zones/ZFS!!! Thanks to the developers 
of these features for enabling me to elegantly solve a difficult problem.

-Anantha-
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool always thinks it's mounted on another system

2006-09-22 Thread Rich

The history is quite simple:
1) Installed nv_b32 or around there on a zeroed drive. Created this
ZFS pool for the first time.
2) Non-live upgraded to nv_b42 when it came out, zpool upgrade on the
zpool in question from v2 to v3.
3) Tried to non-live upgrade to nv_b44, upgrade failed every time, so
I just blew away my existing partition scheme and install nv_b44
cleanly.
4) Problem begins.

I can't think of any sane reason I could have blown away that
directory accidentally, so I don't know.

- Rich

On 9/22/06, Eric Schrock [EMAIL PROTECTED] wrote:

On Fri, Sep 22, 2006 at 03:36:36AM -0400, Rich wrote:
 ...huh.

 So /etc/zfs doesn't exist. At all.

 Creating /etc/zfs using mkdir, then importing the pool with zpool
 import -f, then rebooting, the behavior vanishes, so...yay.

 Problem solved, I guess, but shouldn't ZFS be smarter about creating
 its own config directory?

That seems a reasonable RFE, but I wonder how you got into this
situation in the first place.  What is the history of the OS on this
system?  Nevada? Solaris 10?  Upgraded? Patched?  I assume that you
don't tend to go around removing random /etc directories on purpose, so
I want to make sure that our software didn't screw up somehow.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock




--
Friends don't let friends use Internet Explorer or Outlook.

Choose something better.
www.mozilla.org
www.getfirefox.com
www.getthunderbird.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: slow reads question...

2006-09-22 Thread Harley Gorrell

On Fri, 22 Sep 2006, johansen wrote:

ZFS uses a 128k block size.  If you change dd to use a
bs=128k, do you observe any performance improvement?


   I had tried other sizes with much the same results, but
hadn't gone as large as 128K.  With bs=128K, it gets worse:

| # time dd if=zeros-10g of=/dev/null bs=128k count=102400
| 81920+0 records in
| 81920+0 records out
|
| real    2m19.023s
| user    0m0.105s
| sys     0m8.514s


It's also worth noting that this dd used less system and
user time than the read from the raw device, yet took a
longer time in real time.


   I think some of the blocks might be cached, as I have run
this a number of times.  I really don't know how the time
might be accounted for -- however, the real time is correct,
as that is what I see while waiting for the command to
complete.

   Is there any other info I can provide which would help?

harley.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: live upgrade incompability

2006-09-22 Thread Haik Aftandilian
 Haik,
 
 Thank you very much. 'zpool list' yields
 NAME   SIZE   USED   AVAIL  CAP   HEALTH   ALTROOT
 z      74.5G  22.9G  51.6G  30%   ONLINE   -
 
 How do I confirm that /fitz is not currently a zfs
 mountpoint? 'zfs mount' yields
 
 fitz/home           /fitz/home
 fitz/home/aorchid   /fitz/home/aorchid
 fitz/music          /fitz/music
 fitz/pg             /fitz/pg
 fitz/pictures       /fitz/pictures

Ah, OK. It's good that you didn't delete /fitz. This is what I recommend that 
you do.

# zfs unmount -a
# zfs mount
This should produce no output since now all zfs filesystems are unmounted
# find /fitz
This should produce no files, only empty directories
At this point, as long as there is nothing important in /fitz, you can go 
ahead and delete it
# rm -r /fitz
Or just delete everything inside /fitz
# zfs mount -a
 Now /fitz should be all set. When you reboot you should not see the /fitz 
filesystem mount error

Someone else please chime in if this looks wrong.

Hope that helps.

Haik



 
 'ls -la /fitz' yields
 total 85
 drwxr-xr-x  7 root  sys   512 Sep 20 10:41 .
 drwxr-xr-x 30 root  root  512 Sep 21 18:28 ..  --
 this is when I ran 'zpool export fitz'
 drwxr-xr-x  3 root  sys   3 Jul 25 12:22 home
 etc...
 
 /etc/vfstab does not have /fitz and umount /fitz
 returns 
 umount: warning: /fitz not in mnttab
 umount: /fitz not mounted
 
  Lastly, ZFS people might be interested in the
 panic
  message you get when you boot back into Solaris
 10.
 
 They are all related to the NVIDIA driver, gfxp, from
 what I remember from two weeks ago. I am on an Ultra
 20. 
 
 thanks,
 aric
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: slow reads question...

2006-09-22 Thread johansen-osdev
Harley:

I had tried other sizes with much the same results, but
 hadn't gone as large as 128K.  With bs=128K, it gets worse:
 
 | # time dd if=zeros-10g of=/dev/null bs=128k count=102400
 | 81920+0 records in
 | 81920+0 records out
 | 
 | real    2m19.023s
 | user    0m0.105s
 | sys     0m8.514s

I may have done my math wrong, but if we assume that the real
time is the actual amount of time we spent performing the I/O (which may
be incorrect) haven't you done better here?

In this case you pushed 81920 128k records in ~139 seconds -- approx
75437 k/sec.

Using ZFS with 8k bs, you pushed 102400 8k records in ~68 seconds --
approx 12047 k/sec.

Using the raw device you pushed 102400 8k records in ~23 seconds --
approx 35617 k/sec.

I may have missed something here, but isn't this newest number the
highest performance so far?

What does iostat(1M) say about your disk read performance?

Is there any other info I can provide which would help?

Are you just trying to measure ZFS's read performance here?

It might be interesting to change your outfile (of) argument and see if
we're actually running into some other performance problem.  If you
change of=/tmp/zeros does performance improve or degrade?  Likewise, if
you write the file out to another disk (UFS, ZFS, whatever), does this
improve performance?
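
Concretely (a sketch; run the iostat in a second window while the dd is going):

# iostat -xnz 5                               # per-disk throughput and service times
# time dd if=zeros-10g of=/tmp/zeros bs=128k  # /tmp is tmpfs, so the source disks
                                              # are the only ones in the data path
# time dd if=zeros-10g of=/dev/null bs=128k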

-j
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: low disk performance

2006-09-22 Thread Gino Ruopolo
Update ...

iostat output during zpool scrub

                  extended device statistics
device    r/s    w/s   Mr/s   Mw/s  wait  actv  svc_t  %w  %b
sd34      2.0  395.2    0.1    0.6   0.0  34.8   87.7   0 100
sd35     21.0  312.2    1.2    2.9   0.0  26.0   78.0   0  79
sd36     20.0    1.0    1.2    0.0   0.0   0.7   31.4   0  13
sd37     20.0    1.0    1.0    0.0   0.0   0.7   35.1   0  21

sd34 is always at 100% ...
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: low disk performance

2006-09-22 Thread Gino Ruopolo
 Update ...
 
 iostat output during zpool scrub
 
                   extended device statistics
 device    r/s    w/s   Mr/s   Mw/s  wait  actv  svc_t  %w  %b
 sd34      2.0  395.2    0.1    0.6   0.0  34.8   87.7   0 100
 sd35     21.0  312.2    1.2    2.9   0.0  26.0   78.0   0  79
 sd36     20.0    1.0    1.2    0.0   0.0   0.7   31.4   0  13
 sd37     20.0    1.0    1.0    0.0   0.0   0.7   35.1   0  21
 sd34 is always at 100% ...


  pool: zpool1
 state: ONLINE
 scrub: scrub in progress, 0.13% done, 72h39m to go
config:

NAME   STATE READ WRITE CKSUM
zpool1ONLINE   0 0 0
  raidzONLINE   0 0 0
c4t60001FE100118DB91190724700C7d0  ONLINE   0 0 0
c4t60001FE100118DB91190724700C9d0  ONLINE   0 0 0
c4t60001FE100118DB91190724700CBd0  ONLINE   0 0 0
c4t60001FE100118DB91190724700CCd0  ONLINE   0 0 0

72 hours?? Isn't that too much for 370GB of data?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: slow reads question...

2006-09-22 Thread Harley Gorrell

On Fri, 22 Sep 2006, [EMAIL PROTECTED] wrote:

Are you just trying to measure ZFS's read performance here?


   That is what I started looking at.  We scrounged around
and found a set of 300GB drives to replace the old ones we
started with.  Comparing these new drives to the old ones:

Old 36GB drives:

| # time mkfile -v 1g zeros-1g
| zeros-1g 1073741824 bytes
| 
| real    2m31.991s
| user    0m0.007s
| sys     0m0.923s

Newer 300GB drives:

| # time mkfile -v 1g zeros-1g
| zeros-1g 1073741824 bytes
| 
| real    0m8.425s
| user    0m0.010s
| sys     0m1.809s

   At this point I am pretty happy.

   I am wondering if there is something other than capacity
and seek time which has changed between the drives.  Would a
different SCSI command set or feature set make this dramatic a
difference?

thanks!,
harley.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: slow reads question...

2006-09-22 Thread johansen-osdev
Harley:

 Old 36GB drives:
 
 | # time mkfile -v 1g zeros-1g
 | zeros-1g 1073741824 bytes
 | 
 | real    2m31.991s
 | user    0m0.007s
 | sys     0m0.923s
 
 Newer 300GB drives:
 
 | # time mkfile -v 1g zeros-1g
 | zeros-1g 1073741824 bytes
 | 
 | real    0m8.425s
 | user    0m0.010s
 | sys     0m1.809s

This is a pretty dramatic difference.  What type of drives were your old
36g drives?

I am wondering if there is something other than capacity
 and seek time which has changed between the drives.  Would a
 different scsi command set or features have this dramatic a
 difference?

I'm hardly the authority on hardware, but there are a couple of
possibilities.  Your newer drives may have a write cache.  It's also
quite likely that the newer drives have a faster rotational speed and
seek time.

If you subtract the usr + sys time from the real time in these
measurements, I suspect the result is the amount of time you were
actually waiting for the I/O to finish.  In the first case, you spent
about 99% of your total time waiting for stuff to happen, whereas in the
second case it was only ~78% of your overall time.
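
Spelling that out with the times quoted above:

  old 36GB drives:  (151.991 - 0.007 - 0.923) / 151.991  ~= 99.4% waiting
  new 300GB drives: (  8.425 - 0.010 - 1.809) /   8.425  ~= 78.4% waiting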

-j
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: low disk performance

2006-09-22 Thread Rich

On 9/22/06, Gino Ruopolo [EMAIL PROTECTED] wrote:

 Update ...

 iostat output during zpool scrub

                   extended device statistics
 device    r/s    w/s   Mr/s   Mw/s  wait  actv  svc_t  %w  %b
 sd34      2.0  395.2    0.1    0.6   0.0  34.8   87.7   0 100
 sd35     21.0  312.2    1.2    2.9   0.0  26.0   78.0   0  79
 sd36     20.0    1.0    1.2    0.0   0.0   0.7   31.4   0  13
 sd37     20.0    1.0    1.0    0.0   0.0   0.7   35.1   0  21
 sd34 is always at 100% ...


  pool: zpool1
 state: ONLINE
 scrub: scrub in progress, 0.13% done, 72h39m to go
config:

NAME   STATE READ WRITE CKSUM
zpool1ONLINE   0 0 0
  raidzONLINE   0 0 0
c4t60001FE100118DB91190724700C7d0  ONLINE   0 0 0
c4t60001FE100118DB91190724700C9d0  ONLINE   0 0 0
c4t60001FE100118DB91190724700CBd0  ONLINE   0 0 0
c4t60001FE100118DB91190724700CCd0  ONLINE   0 0 0

72 hours?? Isn't that too much for 370GB of data?


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



For what it's worth, I've found that usually, within the first ~5m or
so of starting a scrub, the time estimate is disproportionate to the
actual time the scrub will take.

- Rich
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss