Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-11 Thread Matthew Ahrens

James McPherson wrote:

On 10/12/06, Steve Goldberg <[EMAIL PROTECTED]> wrote:

Where is the ZFS configuration (zpools, mountpoints, filesystems,
etc) data stored within Solaris?  Is there something akin to vfstab
or perhaps a database?



Have a look at the contents of /etc/zfs for an in-filesystem artefact
of zfs. Apart from that, the information required is stored on the
disk itself.

There is really good documentation on ZFS at the ZFS community
pages found via http://www.opensolaris.org/os/community/zfs.


FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot 
up.  Everything else (mountpoints, filesystems, etc) is stored in the 
pool itself.
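For instance, a quick way to see the split (just a sketch, assuming a pool 
named "tank"):

  # zdb -C                       # dump the cached configs from /etc/zfs/zpool.cache
  # zfs get -r mountpoint tank   # mountpoints are properties stored in the pool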


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] system hangs on POST after giving zfs a drive

2006-10-11 Thread Matthew Ahrens

John Sonnenschein wrote:

I *just* figured out this problem, looking for a potential solution
(or at the very least some validation that I'm not crazy)

Okay, so here's the deal. I've been using this terrible horrible
no-good very bad hackup of a couple partitions spread across 3 drives
as a zpool.

I got sick of having to dig up the info of what slices do what every
time I need to do something, so I shuffled around some data & created
a new zpool out of my SATA drive. ( [i]# zpool create xenophanes
c2d0[/i] ).

Everything works, etc. for a while, then I rebooted the machine.

As it turns out now, something about the drive is causing the machine
to hang on POST. It boots fine if the drive isn't connected, and if I
hot plug the drive after the machine boots, it works fine, but the
computer simply will not boot with the drive attached.


As I recall, some BIOSs get confused by EFI labels, which ZFS uses when 
you give it a whole disk (as opposed to just a slice).  You might want 
to see if there's a BIOS update available for your motherboard, or 
search this forum for a previous thread on this topic, or maybe someone 
here has a better memory than I do...
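If it comes to that, one possible (untested) workaround is to stop giving ZFS 
the whole disk, so it uses a traditional SMI label instead of EFI; something 
like:

  # zpool destroy xenophanes
  # format -e c2d0     (write an SMI/VTOC label, make a slice 0 covering the disk)
  # zpool create xenophanes c2d0s0

You'd give up the benefits of whole-disk mode (e.g. ZFS enabling the write 
cache), but some BIOSes seem happier without the EFI label.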


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] system hangs on POST after giving zfs a drive

2006-10-11 Thread Chris Csanady

On 10/11/06, John Sonnenschein <[EMAIL PROTECTED]> wrote:


As it turns out now, something about the drive is causing the machine to hang 
on POST. It boots fine if the drive isn't connected, and if I hot plug the 
drive after the machine boots, it works fine, but the computer simply will not 
boot with the drive attached.

any thoughts on resolution?


Are you using an nforce4 based board?

I have a Tyan K8E, and it hangs on boot if there are EFI labeled disks
present.  (Which is what ZFS uses when you give it whole disks.)  If this
is the problem, configure the BIOS settings so as to not probe those
disks, and then it should boot.

Of course, it won't be possible to boot off those disks, but they should
work fine in Solaris.

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 10 / ZFS file system major/minor number

2006-10-11 Thread Sanjeev Bagewadi

Hi Darren,

Comments inline
Darren Dunham wrote:

ZFS creates a unique FSID for every filesystem (called an object set in 
ZFS terminology).


The unique id is saved (ondisk) as part of dsl_dataset_phys_t in 
ds_fsid_guid.

And this id is a random number generated when the FS is created.

This id is used to populate the zfs_t structure (refer to zfs_init_fs()).

And the same id would be used as FSID for NFS.
   



Sorry, allow me to be dense for a moment.

Does this mean that I should expect to be able to bring up any machine
with the same ZFS pool and the same IP address and have it serve
filehandles handed out by a previous server?  Including NFS3 and 4?

How about if I have to mount the filesystem on an alternate root?

I don't really have a setup that I could move between machines to test
this at the moment...
 


This works... I tried it here and things worked fine. Here is what I did 
(roughly as sketched below):
- Configured an additional IP address (IP1) on a host (HOST1)
- Shared a pool over NFS
- Automounted the filesystem on my desktop
- Started a "tail -f" on one of the files
- Exported the pool on HOST1
- Unplumbed the IP address IP1 on HOST1
- Plumbed up the IP address on HOST2
- Imported the pool on HOST2 with an alternate root, i.e. the -R option

- Any further appends to the file were seen by the "tail -f"

So, it works...
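Roughly, the sequence looked like this (interface names, addresses and the 
pool name below are made up):

  HOST1# ifconfig bge0 addif 192.168.10.50/24 up   # plumb the extra address IP1
  HOST1# zpool create tank c1t1d0
  HOST1# zfs set sharenfs=on tank
  (desktop automounts the share, runs "tail -f" on a file)
  HOST1# zpool export tank
  HOST1# ifconfig bge0 removeif 192.168.10.50
  HOST2# ifconfig bge0 addif 192.168.10.50/24 up
  HOST2# zpool import -R /mnt tank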

Thanks and regards,
Sanjeev.

PS: This is very similar to what HA-ZFS does in SunCluster 3.2

--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dale Ghent

On Oct 12, 2006, at 12:23 AM, Frank Cusack wrote:

On October 11, 2006 11:14:59 PM -0400 Dale Ghent  
<[EMAIL PROTECTED]> wrote:

Today, in 2006 - much different story. I even had Linux AND Solaris
problems with my machine's MCP51 chipset when it first came out. Both
forcedeth and nge croaked on it. Welcome to the bleeding edge. You're
unfortunately on the bleeding edge of hardware AND software.


Yeah, Solaris x86 is so bleeding edge that it doesn't even support
Sun's own hardware!  (x2100 SATA, which is now already in its second
generation)


You know, I'm really perplexed over that, especially given that the 
Silicon Image chips (AFAIK) aren't in any Sun product and yet they 
have a SATA framework driver.


/dale
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Frank Cusack
On October 11, 2006 11:14:59 PM -0400 Dale Ghent <[EMAIL PROTECTED]> 
wrote:

Today, in 2006 - much different story. I even had Linux AND Solaris
problems with my machine's MCP51 chipset when it first came out. Both
forcedeth and nge croaked on it. Welcome to the bleeding edge. You're
unfortunately on the bleeding edge of hardware AND software.


Yeah, Solaris x86 is so bleeding edge that it doesn't even support
Sun's own hardware!  (x2100 SATA, which is now already in its second
generation)

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] system hangs on POST after giving zfs a drive

2006-10-11 Thread John Sonnenschein
I *just* figured out this problem, looking for a potential solution (or at the 
very least some validation that I'm not crazy)

Okay, so here's the deal. I've been using this terrible horrible no-good very 
bad hackup of a couple partitions spread across 3 drives as a zpool. 

I got sick of having to dig up the info of what slices do what every time I 
need to do something, so I shuffled around some data & created a new zpool out 
of my SATA drive. ( [i]# zpool create xenophanes c2d0[/i] ). 

Everything works, etc. for a while, then I rebooted the machine. 

As it turns out now, something about the drive is causing the machine to hang 
on POST. It boots fine if the drive isn't connected, and if I hot plug the 
drive after the machine boots, it works fine, but the computer simply will not 
boot with the drive attached. 

any thoughts on resolution?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread David Dyer-Bennet

On 10/11/06, Dale Ghent <[EMAIL PROTECTED]> wrote:

On Oct 11, 2006, at 7:36 PM, David Dyer-Bennet wrote:

> I've been running Linux since kernel 0.99pl13, I think it was, and
> have had amazingly little trouble.  Whereas I'm now sitting on $2k of
> hardware that won't do what I wanted it to do under Solaris, so it's a
> bit of a hot-button issue for me right now.

Yes, but remember back in the days of Linux 0.99, the amount of PC
hardware was nowhere near as varied as it is today. Integrated
chipsets? A pipe dream! Aside from video card chips and proprietary
pre-ATAPI CDROM interfaces, you didn't have to reach far to find a
driver which covered a given piece of hardware because when you got
down to it, most hardware was the same. NE2000, anyone?


Yep, I had NE2000 cards; still have some I think, but not in use anymore.

Don't forget SCSI controllers!  Of course I was running SCSI disks in
the Linux boxes back then (and the windows boxes).

And multi-serial cards, and multi-modem cards.  I had a 16-port fast
serial card for the BBS (overkill, but 4 was nowhere near enough).


Today, in 2006 - much different story. I even had Linux AND Solaris
problems with my machine's MCP51 chipset when it first came out. Both
forcedeth and nge croaked on it. Welcome to the bleeding edge. You're
unfortunately on the bleeding edge of hardware AND software.


Yeah, and that's probably a mistake. But I already own the hardware.

What I'm pissed about, though, is that I tried fairly hard to
determine not just what hardware probably worked, but *how paranoid I
had to be* about hardware choice.  I didn't, I feel, get the necessary
warning about the level of paranoia needed.  That might have led me
to different hardware, or it might have led me to giving up on
Solaris, but it probably would have kept me out of the current
unfortunate position.

So one thing I'm trying to do to be helpful is to give people some
idea of how paranoid they have to be.  I now have stories of people
who couldn't run cards that worked for others because of the wrong chipset
revision, and my own system, whose SATA subsystem doesn't support
hot-swap, is shown as fully supported in 32- and 64-bit mode by the
install test tool.


When in that situation, one can be patient, be helpful, or go back to
where one came from.


And in fact it seems fairly likely that Linux will have ZFS before
Solaris has SATA drivers for me.  And I  now have so many 400GB drives
that I no longer care about pool expandability for the next 2-3 years.

"Helpful" would be nice of course; though I haven't worked in device
drivers for Unix seriously since the early 90s, and don't know the
current Solaris module and driver environment at all.  And it would
take many months to get anywhere, which doesn't really fit the plan
with that much money tied up in the hardware.
--
David Dyer-Bennet, , 
RKBA: 
Pics: 
Dragaera/Steven Brust: 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dale Ghent

On Oct 11, 2006, at 7:36 PM, David Dyer-Bennet wrote:


I've been running Linux since kernel 0.99pl13, I think it was, and
have had amazingly little trouble.  Whereas I'm now sitting on $2k of
hardware that won't do what I wanted it to do under Solaris, so it's a
bit of a hot-button issue for me right now.


Yes, but remember back in the days of Linux 0.99, the amount of PC  
hardware was nowhere near as varied as it is today. Integrated  
chipsets? A pipe dream! Aside from video card chips and proprietary  
pre-ATAPI CDROM interfaces, you didn't have to reach far to find a  
driver which covered a given piece of hardware because when you got  
down to it, most hardware was the same. NE2000, anyone?


Today, in 2006 - much different story. I even had Linux AND Solaris  
problems with my machine's MCP51 chipset when it first came out. Both  
forcedeth and nge croaked on it. Welcome to the bleeding edge. You're  
unfortunately on the bleeding edge of hardware AND software.


When in that situation, one can be patient, be helpful, or go back to  
where one came from.


/dale
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread clockwork
Well, that's probably because both Windows and Linux were designed with the
intel/x86/cheap-crap market in mind. A more valid comparison would be OS X,
since it is also designed to run on a somewhat specific set of hardware.

Solaris will get there, but the open aspect of Solaris on Intel is still
fairly new; newer than 0.99 was at the time. I am writing this from my T60,
which is running Linux. Ask me about the wireless driver. It's as flaky as a
blonde from southern California circa the hair band era.

On 10/11/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote:

On 10/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > The more I learn about Solaris hardware support, the more I see it as
> > a minefield.
>
> I've found this to be true for almost all open source platforms where
> you're trying to use something that hasn't been explicitly used and
> tested by the developers.

I've been running Linux since kernel 0.99pl13, I think it was, and
have had amazingly little trouble.  Whereas I'm now sitting on $2k of
hardware that won't do what I wanted it to do under Solaris, so it's a
bit of a hot-button issue for me right now.  I've never had to
consider Linux issues in selecting hardware (in fact I haven't
selected hardware, my Linux boxes have all been castoffs originally
purchased to run Windows), whereas I made considerable efforts to
find out what should work and how careful I had to be, including
asking for advice on this list, and I have still ended up getting
screwed.  Yeah, I'm a little bitter about this.
--
David Dyer-Bennet, [EMAIL PROTECTED]
RKBA: 
Pics: <http://www.dd-b.net/dd-b/SnapshotAlbum/>
Dragaera/Steven Brust: 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Where is the ZFS configuration data stored?

2006-10-11 Thread James McPherson

On 10/12/06, Steve Goldberg <[EMAIL PROTECTED]> wrote:

Where is the ZFS configuration (zpools, mountpoints, filesystems,
etc) data stored within Solaris?  Is there something akin to vfstab
or perhaps a database?



Have a look at the contents of /etc/zfs for an in-filesystem artefact
of zfs. Apart from that, the information required is stored on the
disk itself.

There is really good documentation on ZFS at the ZFS community
pages found via http://www.opensolaris.org/os/community/zfs.


cheers,
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Where is the ZFS configuration data stored?

2006-10-11 Thread Steve Goldberg
Hi All,

Where is the ZFS configuration (zpools, mountpoints, filesystems, etc) data 
stored within Solaris?  Is there something akin to vfstab or perhaps a database?

Thanks,

Steve
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool misses the obvious?

2006-10-11 Thread James Litchfield

Artem Kachitchkine wrote:



# fstyp  c3t0d0s0
zfs


s0? How is this disk labeled? From what I saw, when you put an EFI label 
on a USB disk, the "whole disk" device is going to be d0 (without a 
slice). What do these commands print:


# fstyp /dev/dsk/c3t0d0


unknown_fstyp (no matches)


# fdisk -W - /dev/rdsk/c3t0d0



/dev/rdsk/c3t0d0 default fdisk table
Dimensions:
   512 bytes/sector
63 sectors/track
   255 tracks/cylinder
  36483 cylinders

[ eliding almost all the systid cruft ]

*  238: EFI_PMBR

* Id    Act  Bhead  Bsect  Bcyl   Ehead  Esect  Ecyl   Rsect  Numsect
  238   0    255    63     1023   255    63     1023   1      586114703


# fdisk -W /dev/rdsk/c3t0d0p0


Same dimension info as above...

* Id    Act  Bhead  Bsect  Bcyl   Ehead  Esect  Ecyl   Rsect  Numsect
  238   0    255    63     1023   255    63     1023   1      586114703


-Artem.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread David Dyer-Bennet

On 10/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


> The more I learn about Solaris hardware support, the more I see it as
> a minefield.


I've found this to be true for almost all open source platforms where
you're trying to use something that hasn't been explicitly used and
tested by the developers.


I've been running Linux since kernel 0.99pl13, I think it was, and
have had amazingly little trouble.  Whereas I'm now sitting on $2k of
hardware that won't do what I wanted it to do under Solaris, so it's a
bit of a hot-button issue for me right now.  I've never had to
consider Linux issues in selecting hardware (in fact I haven't
selected hardware, my Linux boxes have all been castoffs originally
purchased to run Windows), whereas I made considerable efforts to
find out what should work and how careful I had to be, including
asking for advice on this list, and I have still ended up getting
screwed.  Yeah, I'm a little bitter about this.
--
David Dyer-Bennet, , 
RKBA: 
Pics: 
Dragaera/Steven Brust: 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool misses the obvious?

2006-10-11 Thread Artem Kachitchkine



# fstyp  c3t0d0s0
zfs


s0? How is this disk labeled? From what I saw, when you put an EFI label on a USB 
disk, the "whole disk" device is going to be d0 (without a slice). What do these 
commands print:


# fstyp /dev/dsk/c3t0d0

# fdisk -W /dev/rdsk/c3t0d0

# fdisk -W /dev/rdsk/c3t0d0p0

-Artem.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Darren . Reed

David Dyer-Bennet wrote:


On 10/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


There are tools around that can tell you if hardware is supported by
Solaris.
One such tool can be found at:
http://www.sun.com/bigadmin/hcl/hcts/install_check.html



Beware of this tool.  It reports "Y" for both 32-bit and 64-bit on the
nVidia MCP55 SATA controller -- but in the real world, it's supported
only in compatibility mode, and (fatal flaw for me) *it doesn't
support hot-swap with this controller*.  So apparently even a clean
result from this utility isn't a safe indication that the device is
fully supported.

Also, it says that the nVidia MCP55 ethernet is NOT supported in
either 32 or 64 bit, but actually nv_44 found the ethernet without any
trouble.  Maybe that's just that the support was extended recently;
the install tool is based on S10 6/06.



Driver support for Solaris Nevada is not the same as Solaris 10 Update 2,
so it is not surprising to see these discrepancies.

In some cases, getting Solaris to support a piece of hardware is as simple
as running the update_drv command to tell it about a new PCI ID (these
change often and are central to driver support on all x86 platforms).
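For example, something along these lines (the PCI ID here is made up; check
"prtconf -pv" for the real vendor/device ID of the controller in question):

  # update_drv -a -i '"pci10de,266"' nge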


The more I learn about Solaris hardware support, the more I see it as
a minefield.



I've found this to be true for almost all open source platforms where
you're trying to use something that hasn't been explicitly used and
tested by the developers.

Darren

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread David Dyer-Bennet

On 10/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


There are tools around that can tell you if hardware is supported by
Solaris.
One such tool can be found at:
http://www.sun.com/bigadmin/hcl/hcts/install_check.html


Beware of this tool.  It reports "Y" for both 32-bit and 64-bit on the
nVidia MCP55 SATA controller -- but in the real world, it's supported
only in compatibility mode, and (fatal flaw for me) *it doesn't
support hot-swap with this controller*.  So apparently even a clean
result from this utility isn't a safe indication that the device is
fully supported.

Also, it says that the nVidia MCP55 ethernet is NOT supported in
either 32 or 64 bit, but actually nv_44 found the ethernet without any
trouble.  Maybe that's just that the support was extended recently;
the install tool is based on S10 6/06.

The more I learn about Solaris hardware support, the more I see it as
a minefield.
--
David Dyer-Bennet, , 
RKBA: 
Pics: 
Dragaera/Steven Brust: 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] best dual HD install-- RAID vs ZFS?

2006-10-11 Thread Patrick
I'm replacing the stock HD in my Vaio notebook with 2 100GB 7200 RPM Hitachis-- 
yes, it can hold 2 HDs. ;)  I was thinking about doing some sort of striping 
setup to get even more performance, but I am hardly a storage expert, so I'm 
not sure if it is better to set them up to do software RAID or to install 
Solaris on a normal UFS partition on one and then make a big zpool spanning 
both disks-- I'm sure I read in the ZFS docs that it would stripe to two disks 
just fine.
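(From the ZFS docs, I think the spanning setup would look roughly like this,
with made-up device names:

  # zpool create tank c0d0s7 c1d0

i.e. a leftover slice on the UFS boot disk plus the whole second disk, and
ZFS then stripes writes across both.)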

Can anyone recommend the best course of action?

thanks!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Re: Metadata corrupted

2006-10-11 Thread Siegfried Nikolaivich
> On Mon, Oct 09, 2006 at 11:08:14PM -0700, Matthew
> Ahrens wrote:
> You may also want to try 'fmdump -eV' to get an idea
> of what those
> faults were.

I am not sure how to interpret the results, maybe you can help me.  It looks 
like the following with many more similar pages following:

% fmdump -eV
TIME   CLASS
Oct 07 2006 17:28:48.265102839 ereport.fs.zfs.checksum
nvlist version: 0
class = ereport.fs.zfs.checksum
ena = 0x933872163a1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0xbe23c6961def3450
vdev = 0x46f50fe03a3fd818
(end detector)

pool = tank
pool_guid = 0xbe23c6961def3450
pool_context = 0
vdev_guid = 0x46f50fe03a3fd818
vdev_type = disk
vdev_path = /dev/dsk/c0t1d0s0
parent_guid = 0x3bb6ede3be1cf975
parent_type = raidz
zio_err = 0
zio_offset = 0x1c3644ae00
zio_size = 0xac00
zio_objset = 0x20
zio_object = 0x78
zio_level = 0
zio_blkid = 0xafaf
__ttl = 0x1
__tod = 0x45284640 0xfcd25f7

Oct 07 2006 17:31:24.616729701 ereport.fs.zfs.checksum
nvlist version: 0
class = ereport.fs.zfs.checksum
ena = 0xb7a0bad55900401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0xbe23c6961def3450
vdev = 0xa543197df30d1460
(end detector)

pool = tank
pool_guid = 0xbe23c6961def3450
pool_context = 0
vdev_guid = 0xa543197df30d1460
vdev_type = disk
vdev_path = /dev/dsk/c0t2d0s0
parent_guid = 0x3bb6ede3be1cf975
parent_type = raidz
zio_err = 0
zio_offset = 0x30d218e00
zio_size = 0xac00
zio_objset = 0x20
zio_object = 0xea
zio_level = 0
zio_blkid = 0x7577
__ttl = 0x1
__tod = 0x452846dc 0x24c28c65

Oct 07 2006 17:31:24.903968466 ereport.fs.zfs.checksum
nvlist version: 0
class = ereport.fs.zfs.checksum
ena = 0xb7b1da39251
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0xbe23c6961def3450
vdev = 0x46f50fe03a3fd818
(end detector)

pool = tank
pool_guid = 0xbe23c6961def3450
pool_context = 0
vdev_guid = 0x46f50fe03a3fd818
vdev_type = disk
vdev_path = /dev/dsk/c0t1d0s0
parent_guid = 0x3bb6ede3be1cf975
parent_type = raidz
zio_err = 0
zio_offset = 0x30e558800
zio_size = 0xac00
zio_objset = 0x20
zio_object = 0xea
zio_level = 0
zio_blkid = 0x7724
__ttl = 0x1
__tod = 0x452846dc 0x35e176d2

Oct 07 2006 17:31:52.178481693 ereport.fs.zfs.checksum
nvlist version: 0
class = ereport.fs.zfs.checksum
ena = 0xbe0bb6f3b11
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0xbe23c6961def3450
vdev = 0xa543197df30d1460
(end detector)

pool = tank
pool_guid = 0xbe23c6961def3450
pool_context = 0
vdev_guid = 0xa543197df30d1460
vdev_type = disk
vdev_path = /dev/dsk/c0t2d0s0
parent_guid = 0x3bb6ede3be1cf975
parent_type = raidz
zio_err = 0
zio_offset = 0x375e12800
zio_size = 0xac00
zio_objset = 0x20
zio_object = 0xec
zio_level = 0
zio_blkid = 0x7788
__ttl = 0x1
__tod = 0x452846f8 0xaa36a1d
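(A quick way to tally these by device path, just as a sketch:

  % fmdump -eV | grep vdev_path | sort | uniq -c

and even the excerpt above shows checksum errors on both c0t1d0s0 and
c0t2d0s0.)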

Cheers,
Albert
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Darren . Reed

Dick Davies wrote:


On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:


Hi There,

You might want to check the HCL at http://www.sun.com/bigadmin/hcl to 
find out which hardware is supported by Solaris 10.


Greetings,
Peter



I tried that myself - there really isn't very much on there.
I can't believe Solaris runs on so little hardware (well, I know most of
my kit isn't on there), so I assume it isn't updated that much...



There are tools around that can tell you if hardware is supported by 
Solaris.

One such tool can be found at:
http://www.sun.com/bigadmin/hcl/hcts/install_check.html

There is a process for submitting input back to Sun on driver testing BUT
this requires the submitter to sign a contract of sorts, not just email.

Darren

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] A versioning FS

2006-10-11 Thread Nicolas Williams
On Wed, Oct 11, 2006 at 08:24:13PM +0200, Joerg Schilling wrote:
> Before we start defining the first official functionality for this Sun
> feature, we should define a mapping for Mac OS, FreeBSD and Linux. It may
> make sense to define a sub-directory of the attribute directory for keeping
> old versions of a file.

Definitely a sub-directory would be needed, yes, but I don't agree with the
first part.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] A versioning FS

2006-10-11 Thread Joerg Schilling
Nicolas Williams <[EMAIL PROTECTED]> wrote:

> On Mon, Oct 09, 2006 at 12:44:34PM +0200, Joerg Schilling wrote:
> > Nicolas Williams <[EMAIL PROTECTED]> wrote:
> > 
> > > You're arguing for treating FV as extended/named attributes :)
> > >
> > > I think that'd be the right thing to do, since we have tools that are
> > > aware of those already.  Of course, we're talking about somewhat magical
> > > attributes, but I think that's fine (though, IIRC, NFSv4 [RFC3530] has
> > > some strange verbiage limiting attributes to "applications").
> > 
> > I thought NFSv4 supports extended attributes. What "limiting" are you 
> > aware of?
>
> It does.  I meant this on pg. 12:
>
>  [...]  Named attributes
>are meant to be used by client applications as a method to associate
>application specific data with a regular file or directory.

FreeBSD and Linux implement something different that is also called extended 
attributes. There should be a way to map from FreeBSD/Linux to Solaris.

> and this on pg. 36:
>
>Named attributes are intended for data needed by applications rather
>than by an NFS client implementation.  NFS implementors are strongly
>encouraged to define their new attributes as recommended attributes
>by bringing them to the IETF standards-track process.

See above... Since extended attributes appeared in Solaris (8, update???), 
I have been looking for a way to map the simpler extended attribute 
implementations, like those on Mac OS, FreeBSD and Linux, to the more general 
implementation on Solaris.

Before we start defining the first official functionality for this Sun feature, 
we should define a mapping for Mac OS, FreeBSD and Linux. It may make sense to 
define a sub-directory of the attribute directory for keeping old versions
of a file.
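(For reference, the Solaris interface in question is the per-file attribute
directory reachable with runat(1); a small sketch, assuming a file /tank/foo
on a filesystem with extended attributes enabled:

  $ runat /tank/foo ls -l             # list the file's attribute directory
  $ runat /tank/foo cp /tmp/notes .   # store an attribute file alongside it

Old file versions would presumably live in a similar per-file namespace.)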

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool misses the obvious?

2006-10-11 Thread James Litchfield

I have a zfs pool on a USB hard drive attached to my system.
I had unplugged it and when I reconnect it, zpool import does
not see the pool.

# cd /dev/dsk
# fstyp  c3t0d0s0
zfs

When I truss zpool import, it looks everywhere (seemingly) *but*
c3t0d0s0 for the pool...

The relevant portion...

stat64("/dev/dsk/c3t0d0s1", 0x08043150) = 0
open64("/dev/dsk/c3t0d0s1", O_RDONLY)   Err#5 EIO
stat64("/dev/dsk/c1t0d0p3", 0x08043150) = 0
open64("/dev/dsk/c1t0d0p3", O_RDONLY)   Err#16 EBUSY

This is Nevada B49, BFUed to B50 and then BFUed to
10/9/2006 nightly. I have been seeing this behavior for a while
so I don't think it is the result of a very recent change...

Thoughts?

Jim Litchfield

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 10 / ZFS file system major/minor number

2006-10-11 Thread Darren Dunham
> ZFS creates a unique FSID for every filesystem (called an object set in 
> ZFS terminology).
> 
> The unique id is saved (ondisk) as part of dsl_dataset_phys_t in 
> ds_fsid_guid.
> And this id is a random number generated when the FS is created.
> 
> This id is used to populate the zfs_t structure (refer to zfs_init_fs()).
> 
> And the same id would be used as FSID for NFS.

Sorry, allow me to be dense for a moment.

Does this mean that I should expect to be able to bring up any machine
with the same ZFS pool and the same IP address and have it serve
filehandles handed out by a previous server?  Including NFS3 and 4?

How about if I have to mount the filesystem on an alternate root?

I don't really have a setup that I could move between machines to test
this at the moment...

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dana H. Myers
Al Hopper wrote:
> On Wed, 11 Oct 2006, Dana H. Myers wrote:
> 
>> Al Hopper wrote:
>>
>>> Memory: DDR-400 - your choice but Kingston is always a safe bet.  2*512Mb
>>> sticks for a starter, cost effective, system.  4*512Mb for a good long
>>> term solution.
>> Due to fan-out considerations, every BIOS I've seen will run DDR400
>> memory at 333MHz when connected to more than 1 DIMM-per-channel (I
>> believe at AMD's urging).
> 
> Really!?  That's surprising.  Is there a way to verify that on an Ultra20
> running Solaris 06/06?

Have a look at the BIOS set-up screen; see what speed it's running
your DDR at.  It may make a difference whether you have single-sided
vs. double-sided DIMMs.  It's not an OS issue, it's a hardware issue
handled by the BIOS.
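One way to check from the running system, without rebooting into the BIOS
setup (just a sketch; the output format varies by platform):

  # smbios -t SMB_TYPE_MEMDEVICE | grep -i speed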

> Now you've gone & done it Dana - you've aroused my curiosity!  :)

My apologies ;-)

Dana
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Al Hopper
On Wed, 11 Oct 2006, Dana H. Myers wrote:

> Al Hopper wrote:
>
> > Memory: DDR-400 - your choice but Kingston is always a safe bet.  2*512Mb
> > sticks for a starter, cost effective, system.  4*512Mb for a good long
> > term solution.
>
> Due to fan-out considerations, every BIOS I've seen will run DDR400
> memory at 333MHz when connected to more than 1 DIMM-per-channel (I
> believe at AMD's urging).

Really!?  That's surprising.  Is there a way to verify that on an Ultra20
running Solaris 06/06?

Now you've gone & done it Dana - you've aroused my curiosity!  :)

> In other words, you might save a few dollars using DDR333 for 4 x 512MB
> if you're not going to run 2 x 1GB (which is the preferred approach).
>
> Dana
>

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dana H. Myers
Al Hopper wrote:

> Memory: DDR-400 - your choice but Kingston is always a safe bet.  2*512Mb
> sticks for a starter, cost effective, system.  4*512Mb for a good long
> term solution.

Due to fan-out considerations, every BIOS I've seen will run DDR400
memory at 333MHz when connected to more than 1 DIMM-per-channel (I
believe at AMD's urging).

In other words, you might save a few dollars using DDR333 for 4 x 512MB
if you're not going to run 2 x 1GB (which is the preferred approach).

Dana

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dale Ghent

On Oct 11, 2006, at 10:10 AM, [EMAIL PROTECTED] wrote:

So are there any PCIe SATA cards that are supported? I was hoping  
to go with a Sempron64. Using old PCI seems like a waste.


Yes.

I wrote up a little review of the SIIG SC-SAE412-S1 card which is a  
two port PCIe card based on the Silicon Image 3132 chip:


http://elektronkind.org/2006/09/siig-esata-ii-pcie-card-and-opensolaris

The card is a two port eSATA2 card, but SIIG also sells a two port  
internal SATA card based on the same chip as well.


This card is running fine under SX:CR build 47 and would presumably  
also run fine under Solaris 10 Update 2 or later.


/dale

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Al Hopper

Follow-up - if you also want to use the machine as a workstation:

Graphics card (PCI Express): pick an Nvidia based board to take advantage
of the excellent Solaris native driver[0].  The 7600GS has a great
price/performance ratio.  This ref [1] also mentions the 7600GT - although
I'm (almost) sure you won't be interested in volt-modding them.

[0] http://www.nvidia.com/object/solaris_display_1.0-8774.html
[1] 
http://www.xbitlabs.com/articles/video/display/geforce7600gs-voltmodding.html


Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Al Hopper
On Tue, 10 Oct 2006 [EMAIL PROTECTED] wrote:

> All,
>  So I have started working with Solaris 10 at work a bit (I'm a Linux
> guy by trade) and I have a dying nfs box at home. So the long and short of
> it is as follows: I would like to setup a SATAII whitebox that uses ZFS as
> its filesystem. The box will probably be very lightly used, streaming media
> to my laptop and workstation would be the bulk of the work. However I do
> have quite a good deal of data, roughly 400G. So what I would like to know
> is what hardware solutions work best for this?  I don't need to have 2TB of
> storage on day one, but I might need it sometime down the road. I would
> prefer to keep the price low (400 - 600), but I don't buy house-brand
> motherboards or controllers either. So who makes a natively supported board,
> controller (PCIe?), gigE card and so on. I have a DVD+-RW made by Samsung
> which I would imagine would work. Any assistance is welcomed and
> appreciated.

I'll shoot for a failsafe, cost effective system that is known to run
Solaris:

Motherboard: Tyan S2865ANRF - same motherboard as used in the Sun
Ultra20[0].  MonarchComputer.com part# 110624 $169

CPU: 939-pin AMD 64, two choices:

a) AMD Athlon 64 3800+ 512K 90nm Rev. E Venice (939) (Retail Box-w-Fan)
Code: 120274 Price: $119.99

b) AMD Athlon 64 X2 4400+ Dual-Core 1MB Per Core 90nm (939) (Retail
Box-w-Fan) Code: 120241 Price: $229.99

Upgraded heatsink: ThermalRight XP90C (copper)  [1]
heatsink fans: FBA09A12M 92mm Panaflo 92x92x25 [2]
heatsink compound: Arctic Silver (use very sparingly)

Memory: DDR-400 - your choice but Kingston is always a safe bet.  2*512Mb
sticks for a starter, cost effective, system.  4*512Mb for a good long
term solution.

Ethernet: Use the built-in interface on the motherboard

SATA controller (for ZFS): 4 port Si3114 PCI card purchased from newegg:

http://www.newegg.com/Product/Product.asp?Item=N82E16815124020

See details on this list[3].

Caveats: DDR2 memory used with the current AMD AM2 based products is too
expensive to meet your budget.  The system I spec'ed is a one-shot deal,
since DDR memory will become harder to find and the non-AM2 939-pin CPUs
will cease to be made at the end of this year.  Some are already hard to
find.


[0] A user on the Solaris on Intel list upgraded to the Sun Ultra 20 BIOS!
[1] very heavy.  Don't transport the resulting system without removing
the heatsink first.
[2] suffix  "M" indicates one of L (low-speed), M (medium), H (hi-speed)
[3] Date: Thu, 5 Oct 2006 17:29:40 -0700
 From: "David Dyer-Bennet" <[EMAIL PROTECTED]>
 To: zfs-discuss@opensolaris.org
 Subject: Fwd: [zfs-discuss] solaris-supported 8-port PCI-X SATA controller

Feel free to email me offlist if you have any other questions.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread clockwork
So are there any PCIe SATA cards that are supported? I was hoping to go with a
Sempron64. Using old PCI seems like a waste.

Regards.

On 10/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:

On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:
> Hi There,
>
> You might want to check the HCL at http://www.sun.com/bigadmin/hcl to find
> out which hardware is supported by Solaris 10.
>
> Greetings,
> Peter

I tried that myself - there really isn't very much on there.
I can't believe Solaris runs on so little hardware (well, I know most of
my kit isn't on there), so I assume it isn't updated that much...

My dream machine at the minute is a nice quiet Athlon 64 X2 based
system (probably one of the energy-efficient Windsors, so you get low heat
and virtualization support). ZFS root mirror running iSCSI targets.

Have yet to find a good recommendation for an AM2 based SATA II motherboard
(although in dreamland, Solaris has a solid Xen domain0 which takes advantage
of Pacifica/AMD-V hardware, so I doubt I'll need to make this reality before
next Christmas :)

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Dick Davies

On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:

Hi There,

You might want to check the HCL at http://www.sun.com/bigadmin/hcl to find out 
which hardware is supported by Solaris 10.

Greetings,
Peter


I tried that myself - there really isn't very much on there.
I can't believe Solaris runs on so little hardware (well, I know most of
my kit isn't on there), so I assume it isn't updated that much...

My dream machine at the minute is a nice quiet Athlon 64 X2 based
system (probably one of the energy-efficient Windsors, so you get low heat
and virtualization support). ZFS root mirror running iSCSI targets.

Have yet to find a good recommendation for an AM2 based SATA II motherboard
(although in dreamland, Solaris has a solid Xen domain0 which takes advantage
of Pacifica/AMD-V hardware, so I doubt I'll need to make this reality before next
Christmas :)

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Erik Trimble
Generally, I've found the way to go is to get a 4-port SATA PCI 
controller (something based on the Silicon Image stuff seems to be 
cheap, common, and supported), and then plunk it into any old PC you can 
find (or get off of eBay).


The major caveat here is that I'd recommend trying to find a PC which 
has a 64-bit processor, something like an AMD Sempron64 or Intel Celeron 
D 331 (or similar).  Running Solaris in 64-bit mode makes things so much 
simpler (and usually faster) than 32-bit mode.


Avoid like the plague any of the on-board "RAID" solutions. At best, you 
can use the SATA ports as normal ports. In many cases, they're just useless.


-Erik


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 10 / ZFS file system major/minor number

2006-10-11 Thread Sanjeev Bagewadi

Hi Luke,

Luke Schwab wrote:


Hi,

In migrating from **VM to ZFS am I going to have an issue with Major/Minor 
numbers with NFS mounts? Take the following scenario.

1. NFS clients are connected to an active NFS server that has SAN shared 
storage between the active and standby nodes in a cluster.
2. The NFS clients are using the major/minor numbers on the active node in the 
cluster to communicate to the NFS active server.
3. The active node fails over to the secondary node in the cluster.
4. The NFS clients can no longer access the same major minor number for NFS shares. 

 

AFAIK NFS uses the fsid provided by the underlying FS to identify the 
file system.
Most of the underlying filesystems use major/minor number as the fsid 
(as reported by stat(2))...
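(A quick way to see the fsid a mounted filesystem reports, as a sketch:

  # df -g /mypool/myfs

and look at the "filesys id" field in the output.)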


ZFS creates a unique FSID for every filesystem (called an object set in 
ZFS terminology).
The following is the stack trace which shows how we create a unique ID 
when a new FS is created:


-- snip --
#  zfs create mypool/test
-- snip --
# dtrace -n 'fbt::unique_create:entry { stack(); ustack()}'
dtrace: description 'fbt::unique_create:entry ' matched 1 probe
CPU IDFUNCTION:NAME
1  38106  unique_create:entry
zfs`dsl_dataset_create_sync+0x10e
zfs`dmu_objset_create_sync+0x42
zfs`dsl_dir_sync+0x47
zfs`dsl_pool_sync+0xd4
zfs`spa_sync+0x110
zfs`txg_sync_thread+0x1a5
unix`thread_start+0x8
-- snip --

The unique id is saved (ondisk) as part of dsl_dataset_phys_t in 
ds_fsid_guid.

And this id is a random number generated when the FS is created.

This id is used to populate the zfs_t structure (refer to zfs_init_fs()).

And the same id would be used as FSID for NFS.

Hope that answers your questions.

-- Sanjeev.


Does anyone know how ZFS fixes this problem?  I read something about NFSv4 on 
Linux having an "fsid" mount option that allows you to set the device name instead of 
the major/minor number. Does Solaris 10 have anything like this?

Thanks,

ljs


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 




--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Peter van Gemert
Hi There,

You might want to check the HCL at http://www.sun.com/bigadmin/hcl to find out 
which hardware is supported by Solaris 10.

Greetings,
Peter
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss