[zfs-discuss] Some questions about Intent Log Devices

2007-11-23 Thread Brian Hechinger
I finally got some new drives for my Ultra 80. I have two 73 GB 10K
RPM SCSI disks in it now, with 60 GB in a ZFS mirror. I am going to be
adding 4x500 GB SATA disks in a RAIDZ, and I was thinking about using
the "old" ZFS space on the SCSI disks for intent logs.

My questions are these:

1) Is it possible to add intent log devices after the fact? (I'll need
   to build the new array first so I can move the data over to it.)

2) What would be a better use of the two 60G intent log partitions?
   Mirrored? Striped?

3) How much space is needed for the intent logs? For 1.5TB of data space,
   would 60G be enough?  120G?

4) What version should I be running?  I've got 64a on it now, but have
   been considering upgrading to the latest SXCE.

I guess a lot of this revolves around the answer to #1, since if I can
do things after the fact with regard to intent logs, then I can answer
those other questions at a later date.
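
For context, here's roughly what I have in mind -- the device names below
are just placeholders, and I haven't verified that my current build even
supports separate log devices:

# build the new raidz pool out of the 4x500 GB SATA disks
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
# later, add the old SCSI slices back as a mirrored intent log
zpool add tank log mirror c0t0d0s7 c1t0d0s7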

Thanks!!

-brian
-- 
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is built by people who'd be better
suited to making sure that my burger is cooked thoroughly."  -- Jonathan 
Patschke
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic receiving incremental snapshots

2007-11-23 Thread Mike Gerdts
On Aug 25, 2007 8:36 PM, Stuart Anderson <[EMAIL PROTECTED]> wrote:
> Before I open a new case with Sun, I am wondering if anyone has seen this
> kernel panic before? It happened on an X4500 running Sol10U3 while it was
> receiving incremental snapshot updates.
>
> Thanks.
>
>
> Aug 25 17:01:50 ldasdata6 ^Mpanic[cpu0]/thread=fe857d53f7a0:
> Aug 25 17:01:50 ldasdata6 genunix: [ID 895785 kern.notice] dangling dbufs 
> (dn=fe82a3532d10, dbuf=fe8b4e338b90)

I saw "dangling dbufs" panics beginning with S10U4 beta and the then
current (May '07) nevada builds.  If you are running a kernel newer
than the x86 equivalent of 125100-10, you may be seeing the same
thing.  The panics I saw were not triggered by zfs receive, so you may
be seeing something different.  An IDR was produced for me.  If you
have Sun support search for my name, you can likely get the same IDR
(errr, an IDR with the same fix - mine was SPARC) to see if it
addresses your problem.
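
To check which kernel patch you're actually running, the stock command is
enough (the output shown is just an example):

uname -v    # e.g. Generic_125100-10; the number after "Generic_" is the kernel patch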

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic receiving incremental snapshots

2007-11-23 Thread Stuart Anderson
This kernel panic when running "zfs receive" has been solved with
IDR127787-10.  Does anyone know when this large set of ZFS bug fixes
will be released as a normal/official S10 patch?

Thanks.


On Sat, Aug 25, 2007 at 07:36:25PM -0700, Stuart Anderson wrote:
> Before I open a new case with Sun, I am wondering if anyone has seen this
> kernel panic before? It happened on an X4500 running Sol10U3 while it was
> receiving incremental snapshot updates.
> 
> Thanks.
> 
> 
> Aug 25 17:01:50 ldasdata6 ^Mpanic[cpu0]/thread=fe857d53f7a0: 
> Aug 25 17:01:50 ldasdata6 genunix: [ID 895785 kern.notice] dangling dbufs 
> (dn=fe82a3532d10, dbuf=fe8b4e338b90)
...

-- 
Stuart Anderson  [EMAIL PROTECTED]
http://www.ligo.caltech.edu/~anderson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Home Motherboard

2007-11-23 Thread Vincent Fox
Hmm, well, it depends on what you are looking for.  Is the speed not enough,
or the size of the RAM?  I believe people found that the original GLY would
actually work with a 2 GB DIMM, so it's possible the GLY2 will accept 2 GB
as well, which seems plenty to me.  YMMV.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Questions from a windows admin - Samba, shares & quotas

2007-11-23 Thread Ross
Yeah, I'd seen that, but we're only going to be running 100 users, so the boot 
time shouldn't be too bad. :-)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Home Motherboard

2007-11-23 Thread Andy Lubel
Areca, nice!
 
Any word on whether 3ware has come around yet?  I've been bugging them for 
months to do something about getting a driver made for Solaris.
 
-Andy



From: [EMAIL PROTECTED] on behalf of James C. McPherson
Sent: Thu 11/22/2007 5:06 PM
To: mike
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Home Motherboard



mike wrote:
> I actually have a related motherboard, chassis, dual power-supplies
> and 12x400 gig drives already up on ebay too. If I recall Areca cards
> are supported in OpenSolaris...

At the moment you can download the Areca "arcmsr" driver
from areca.com.tw, but I'm in the process of integrating
it into OpenSolaris:

http://bugs.opensolaris.org/view_bug.do?bug_id=6614012
6614012 add Areca SAS/SATA RAID adapter driver


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Questions from a windows admin - Samba, shares & quotas

2007-11-23 Thread Akhilesh Mritunjai
Yes it will work, and quite nicely indeed. But you need to be careful.

Currently ZFS mounting is not "instantaneous"; if you have, say, a few 
thousand users, you might be in for a rude surprise as the system takes its 
own merry time (a few hours) mounting them all at the next reboot. Even with 
the automounter, things won't be that fast.

The ZFS philosophy of "helluva tons of filesystems" breaks a lot of tools 
built on the assumption of "who would ever need more than 4 filesystems?".

To test it, create $NUM_USERS filesystems, reboot the server, and see whether 
everything comes back up OK and in an acceptable time.
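
A minimal sketch of such a test (the pool/dataset names and the user count
are placeholders -- adjust them to match your environment):

i=1
while [ $i -le 3000 ]; do
    zfs create tank/home/user$i    # one filesystem per user
    i=`expr $i + 1`
done
# reboot and time how long the mounts take, or approximate it with:
zfs unmount -a ; time zfs mount -a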
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] R/W ZFS on Leopard Question -or- Where's my 40MB?

2007-11-23 Thread Akhilesh Mritunjai
> SUMMARY:
> 1) Why the difference between pool size and fs capacity?

With ZFS, take df output with a grain of salt -- and add more salt if 
compression is turned on.

ZFS being quite complicated, it seems only an "approximate" free space is 
reported, which won't be too far wrong and should suffice for most purposes. 
But if you're expecting it to be correct to the last block, it won't be.
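
If you want ZFS's own numbers rather than df's approximation, ask it
directly (substitute your own pool name for jpool):

zfs get used,available,referenced,compressratio jpool
zfs list -o name,used,avail,refer jpool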

> 2) If this is normal overhead, then how do you examine these aspects of
> the fs (commands to use, background links to read, etc. (If you say RTFM
> then please supply a page number for "817-2271.pdf"))?

No public mechanism currently exists, AFAIK. Some black magic with DTrace 
might make it possible to look at the FS data structures, or, by reading the 
code and the ZFS on-disk format document, one /could/ possibly figure it out.

> 3) What's the relationship between pools (zpool) and filesystems (zfs
> command)?  Is there a default fs created when the pool is created?

Yes. As soon as you create a pool, it can be used as a FS. Nothing else is 
needed. You can, of course, create additional filesystems in the pool, but one 
is always available to you (you may or may not like it... I keep it unmounted).
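
A quick illustration (the disk and dataset names here are just placeholders):

zpool create tank mirror c1t0d0 c1t1d0   # this also creates a filesystem named "tank"
zfs create tank/stuff                    # additional filesystems live inside the pool
zfs set mountpoint=none tank             # how I keep the top-level one unmounted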

> 4) BONUS QUESTION: Is Sun currently using / promoting / shipping hardware
> that *boots* ZFS? (e.g. last I checked even stuff like "Thumper" did not
> use ZFS for the 2 mirror'd boot drives (UFS?) but used ZFS for the 10,000
> other drives (OK, maybe there aren't 10,000 drives but there sure are a
> lot)).

ZFS boot didn't get integrated into even Nevada until very recently, let alone 
backported to Solaris 10. I doubt it is ready for production use yet. 

The new "Opensolaris Dev. preview aka. Project Indiana" by default installs ZFS 
boot (no UFS needed). So, things are moving but we still need to go a long way 
before all things are stabilized, documented, corner cases identified, recovery 
tools & OS install/update applications updated etc etc

> 5) BONUS QUESTION #2: How does a frustrated yet extremely seasoned Mac/
> OS X technician with a terrific Solaris background find happiness by
> landing a job at his other favorite company, Sun? (My "friend" wants to
> know.)

WARNING: Zen mode ON!

One has to find happiness within.

A more correct question might be: would it be better for you to switch to 
working for Sun?  Well, I personally admire Sun's engineering. It's one of the 
*few* places left where you are allowed to dream and, of course, to build! If 
that is what you want to do, you might like working for them very much!

> 6) FINAL QUESTION (2 parts): (a) When will we see default booting to
> ZFS?

You can see it now... download the "OpenSolaris Developer Preview" live CD and 
install it to HDD. It's there!

> (b) [When] will we see ZFS as the default fs on OS X?

Only when uncle Stevie says so! (Don't hold your breath)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] R/W ZFS on Leopard Question -or- Where's my 40MB?

2007-11-23 Thread jzone
Howdy,

Cross-posted to: zfs-discuss@opensolaris.org

I am playing around with the latest Read-Write ZFS on Leopard and am
confused about why the available size of my admittedly tiny test pool
(100 MB) is showing at ~2/3 (63 MB) of the expected capacity.  I used
mkfile to create test "disks".  Is this due to normal ZFS overhead? If
so, how can I list / view / examine these properties?  I don't think
it's compression related (BTW, is compression ON or OFF by default in
OS X's current implementation of ZFS?).

tcpb:jpool avatar$ uname -a
Darwin tcpb.local 9.1.0 Darwin Kernel Version 9.1.0: Wed Oct 31
17:48:21 PDT 2007; root:xnu-1228.0.2~1/RELEASE_PPC Power Macintosh

tcpb:jpool avatar$ sw_vers
ProductName:Mac OS X
ProductVersion: 10.5.1
BuildVersion:   9B18

tcpb:aguas avatar$ kextstat | grep zfs
  125    0  0x3203a000  0xcf000    0xce000    com.apple.filesystems.zfs (6.0) <7 6 5 2>

I created a test pool in the "aguas" directory on an external firewire
HDD:

cd to my zfs test directory: "aguas" on an external HDD..
cd /Volumes/jDrive/aguas/

Create 5 100MB files to act as "Disks" in my Pool...
sudo mkfile 100M disk1
sudo mkfile 100M disk2
sudo mkfile 100M disk3
sudo mkfile 100M disk4
sudo mkfile 100M disk5

Create MIRROR'd Pool, "jpool" using 1st two Disks...
sudo zpool create jpool mirror /Volumes/jDrive/aguas/disk1 /Volumes/jDrive/aguas/disk2

zpool list =>
NAME     SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
jpool   95.5M    151K   95.4M     0%  ONLINE  -

zpool status =>
 pool: jpool
 state: ONLINE
 scrub: none requested
config:

NAME                             STATE     READ WRITE CKSUM
jpool                            ONLINE       0     0     0
  mirror                         ONLINE       0     0     0
    /Volumes/jDrive/aguas/disk1  ONLINE       0     0     0
    /Volumes/jDrive/aguas/disk2  ONLINE       0     0     0

errors: No known data errors
=

Added a spare:
sudo zpool add jpool spare /Volumes/jDrive/aguas/disk5

zpool status =>
  pool: jpool
 state: ONLINE
 scrub: none requested
config:

NAME                             STATE     READ WRITE CKSUM
jpool                            ONLINE       0     0     0
  mirror                         ONLINE       0     0     0
    /Volumes/jDrive/aguas/disk1  ONLINE       0     0     0
    /Volumes/jDrive/aguas/disk2  ONLINE       0     0     0
spares
  /Volumes/jDrive/aguas/disk5    AVAIL

errors: No known data errors
=

"jpool" NOW SHOWS UP ON THE FINDER...

tcpb:aguas avatar$ df -h
Filesystem       Size   Used  Avail Capacity  Mounted on
/dev/disk0s3    112Gi  103Gi  8.7Gi      93%  /
devfs           114Ki  114Ki    0Bi     100%  /dev
fdesc           1.0Ki  1.0Ki    0Bi     100%  /dev
map -hosts        0Bi    0Bi    0Bi     100%  /net
map auto_home     0Bi    0Bi    0Bi     100%  /home
/dev/disk1s14    56Gi   50Gi  5.4Gi      91%  /Volumes/jDrive ONE
/dev/disk1s10    75Gi   68Gi  7.3Gi      91%  /Volumes/jDrive
/dev/disk1s12    55Gi   52Gi  2.5Gi      96%  /Volumes/Free 55
jpool            63Mi   59Ki   63Mi       1%  /Volumes/jpool
=

OK, GIVEN:
zpool list =>
NAME     SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
jpool   95.5M    151K   95.4M     0%  ONLINE  -

*WHY* ONLY 63MB?!?:
jpool            63Mi   59Ki   63Mi       1%  /Volumes/jpool

More info (I turned COMPRESSION on after I noticed the
discrepancy.) ...

tcpb:jpool avatar$ sudo zfs get all jpool =>
NAME   PROPERTY   VALUE  SOURCE
jpool  type   filesystem -
jpool  creation   Tue Nov 20 14:48 2007  -
jpool  used   392K   -
jpool  available  63.1M  -
jpool  referenced 59K-
jpool  compressratio  1.00x  -
jpool  mountedyes-
jpool  quota  none   default
jpool  reservationnone   default
jpool  recordsize 128K   default
jpool  mountpoint /Volumes/jpool default
jpool  sharenfs   offdefault
jpool  checksum   on default
jpool  compressionon local
jpool  atime  on default
jpool  deviceson default
jpool  exec   on default
jpool  setuid on default
jpool  readonly   offdefault
jpool  zoned  offdefault
jpool  snapdirhidden default
jpool  aclmode    groupmask  default
jpool  aclinherit secure default
jpool  canmount   on default
jpool  shareiscsi offdefault
jpool  xattr  on default
jpool  copies 1  default