Re: [zfs-discuss] ZFS Boot: Dividing up the name space

2007-05-01 Thread Torrey McMahon

Mike Dotson wrote:

On Sat, 2007-04-28 at 17:48 +0100, Peter Tribble wrote:
  

On 4/26/07, Lori Alt <[EMAIL PROTECTED]> wrote:


Why do administrators do 'df' commands?  It's to find out how much space
is used or available in a single file system.  That made sense when file
systems each had their own dedicated slice, but now it doesn't make that
much sense anymore.  Unless you've assigned a quota to a zfs file system,
"space available" is more meaningful at the pool level.

True, but it's actually quite hard to get at the moment. It's easy if
you have a single pool - it doesn't matter which line you look at.
But once you have 2 or more pools (and that's the way it would
work, I expect - a boot pool and 1 or more data pools) there's
an awful lot of output you may have to read. This isn't helped
by zpool and zfs giving different answers, with the one from zfs
being the one I want. The point is that every filesystem adds
more output that the administrator has to mentally filter. (For
one thing, you have to map a directory name to a containing
pool.)



It's actually quite easy, and easier than the other alternatives (ufs,
veritas, etc.):

# zfs list -rH -o name,used,available,refer rootdg

And it's already set up to be parsed by a script (-H), since the output is
tab-separated.  The -r says to recursively display children of the parent,
and -o with the specified fields says to display only those fields.

(output from one of my systems)

blast(9):> zfs list -rH -o name,used,available,refer rootdg
rootdg                  4.39G   44.1G   32K
rootdg/nvx_wos_62       4.38G   44.1G   503M
rootdg/nvx_wos_62/opt   793M    44.1G   793M
rootdg/nvx_wos_62/usr   3.01G   44.1G   3.01G
rootdg/nvx_wos_62/var   113M    44.1G   113M
rootdg/swapvol          16K     44.1G   16K

Even though the mount points are set up as legacy mount points, I know where
each one is mounted from the volume name.
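
For example, a quick sketch of feeding that tab-separated output to awk
(same rootdg pool as above; the output formatting is arbitrary):

zfs list -rH -o name,used,available rootdg | \
    awk -F'\t' '{ printf("%-30s used=%s avail=%s\n", $1, $2, $3) }'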


And yes, this system has more than one pool:

blast(10):> zpool list
NAME     SIZE    USED    AVAIL   CAP  HEALTH  ALTROOT
lpool    17.8G   11.4G   6.32G   64%  ONLINE  -
rootdg   49.2G   4.39G   44.9G    8%  ONLINE  -


  

With zfs, file systems are in many ways more like directories than what
we used to call file systems.   They draw from pooled storage.  They
have low overhead and are easy to create and destroy.  File systems
are sort of like super-functional directories, with quality-of-service
control and cloning and snapshots.  Many of the things that sysadmins
used to have to do with file systems just aren't necessary or even
meaningful anymore.  And so maybe the additional work of managing
more file systems is actually a lot smaller than you might initially think.
  

Oh, I agree. The trouble is that sysadmins still have to work using
their traditional tools, including their brains, which are tooled up
for cases with a much lower filesystem count. What I don't see as
part of this are new tools (or enhancements to existing tools) that
make this easier to handle.



Not sure I agree with this.  Many times you end up dealing with
multiple vxvols and file systems.  Anything over 12 filesystems and
you're in overload (at least for me ;), so I used my monitoring and
scripting tools to do that filtering for me.


Many of the systems I admin'd were set up quite differently based on use,
functionality, and disk size.

Most of my tools were set up to take those differences into consideration,
along with the fact that we ran almost every flavor of UNIX possible, using
the features of each OS as appropriate.

Most of those tools will still work with zfs (if they're using df, etc.),
but zfs actually makes things easier once you do have a monitoring issue -
running out of space, for example.

Most tools have high and low water marks, so when a file system gets too
full you get a warning.  ZFS makes this much easier to admin: you can see
which file system is being the hog and go hunt directly in that file
system, instead of first having to find the file system - hence the old
debate between the all-in-one / slice and breaking the OS up into its
major file systems.
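
As a rough sketch (the 90% threshold and the use of Solaris df -F zfs are
just examples, not from any particular monitoring package), a per-filesystem
watermark check can stay df-based:

df -F zfs -k | awk 'NR > 1 {
    sub(/%/, "", $5)
    if ($5 + 0 > 90)
        print "WARNING: " $6 " is " $5 "% full"
}'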

The benefit of an all-in-one / was that you didn't have to guess how much
space you needed for each slice, so you could upgrade or add optional
software without needing to grow or shrink the OS slices.

The drawback: if you filled up the file system, you had to hunt for where
it was filling up - /dev, /usr, /var/tmp, /var, / ???


The benefit of multiple slices was that one fs didn't affect the others if
you filled it up, and you could find the problem fs very easily; but if you
estimated incorrectly, you had wasted disk space in one slice and not
enough in another.

ZFS gives you the benefit of both the all-in-one and the partitioned
layouts: it draws from a single pool of storage, but it also lets you find
which fs is being the problem and lock it down with quotas and reservations.
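
For example (the limits are made up; the dataset names are from the
listing earlier in this thread):

zfs set quota=10G rootdg/nvx_wos_62/var        # cap the space hog
zfs set reservation=2G rootdg/nvx_wos_62/usr   # guarantee room elsewhere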

  

For example, backup tools are currently filesystem based.



And this changes the scenario how?  

Re: [zfs-discuss] zfs boot image conversion kit is posted

2007-05-01 Thread Torrey McMahon

Brian Hechinger wrote:

On Fri, Apr 27, 2007 at 02:44:02PM -0700, Malachi de Ælfweald wrote:
  


2. ZFS mirroring can work without the metadb, but if you want the dump
mirrored too, you need the metadb (I don't know if it needs to be mirrored,
but I wanted both disks to be identical in case one died)



I can't think of any real good reason you would need a mirrored dump device.
The only place that would help you is if your main disk died between panic
and next boot.  ;)
  


If you lose the primary drive, and your dump device points to the 
metadevice, then you wouldn't have to reset it. Also, most folks use the 
swap device for dumps. You wouldn't want to lose that on a live box. 
(Though honestly I've never just yanked the swap device and seen if the 
system keels over.)
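
For reference, pointing the dump device at the SVM metadevice would look
something like this (d20 is just an example name, not from this thread):

dumpadm -d /dev/md/dsk/d20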



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Multiple filesystem costs? Directory sizes?

2007-05-01 Thread Jeff Bonwick

Mario,

For the reasons you mentioned, having a few different filesystems
(on the order of 5-10, I'd guess) can be handy.  Any time you want
different behavior for different types of data, multiple filesystems
are the way to go.

For maximum directory size, it turns out that the practical limits
aren't in ZFS -- they're in your favorite applications, like ls(1)
and file browsers.  ZFS won't mind if you put millions of files
in a directory, but ls(1) will be painfully slow.  Similarly, if
you're using a mail program and you go to a big directory to grab
an attachment... you'll wait and wait while it reads the first few
bytes of every file in the directory to determine its type.
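
One small mitigation, for what it's worth: both the Solaris and GNU ls
accept -f, which lists entries in directory order without sorting, so a
rough count of a huge directory (/bigdir is hypothetical) at least skips
the sort:

ls -f /bigdir | wc -l     # note: the count includes . and ..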

Hope that helps,

Jeff

Mario Goebbels wrote:

While setting up my new system, I'm wondering whether I should go with plain
directories or use ZFS filesystems for specific stuff. About the cost of ZFS
filesystems, I read on some Sun blog in the past about something like 64k of
kernel memory (or whatever) per active filesystem. What, however, are the
additional costs?

The reason I'm considering multiple filesystems is, for instance, easy ZFS
backups and snapshots, but also tuning the recordsizes. When storing lots of
generic pictures from the web, smaller recordsizes may be appropriate to trim
down the waste once the file size surpasses the record size, as well as using
large recordsizes for video files on a separate filesystem. Turning compression
and access times on and off for performance reasons is another thing.

Also, in this same message, I'd like to ask what sensible maximum directory
sizes are. As in number of files.

Thanks.
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very Large Filesystems

2007-05-01 Thread Peter Tribble

On 4/28/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:

On Sat, Apr 28, 2007 at 05:02:47PM +0100, Peter Tribble wrote:
>
> In practical terms, backing up much over a terabyte
> in a single chunk isn't ideal. What I would like to see
> here is more flexibility from something like Legato
> in terms of defining schedules that would allow us to
> back this up sensibly. (Basically, the changes are
> relatively small, so it would be nice to use quarterly
> schedules - Legato only really does weekly or monthly.)

So what you *really* want is TSM.  I wonder if IBM would ever
consider supporting ZFS.


Educate me. In what way would TSM help? The only real
issue I have with Legato (or NetBackup) is the inability to define
schedules just the way I want. (The workaround is to set up
a monthly schedule and then manually override some of the
months.)

Thanks,

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Multiple filesystem costs? Directory sizes?

2007-05-01 Thread Richard Elling

Mario Goebbels wrote:
While setting up my new system, I'm wondering whether I should go with plain
directories or use ZFS filesystems for specific stuff. About the cost of ZFS
filesystems, I read on some Sun blog in the past about something like 64k of
kernel memory (or whatever) per active filesystem. What, however, are the
additional costs?


I don't think the resource costs are well characterized, yet.
IMHO, you should only create file systems if you need to have different
policies for the file systems.  Search this forum for more discussion on
this topic.

The reason I'm considering multiple filesystems is, for instance, easy ZFS
backups and snapshots, but also tuning the recordsizes. When storing lots of
generic pictures from the web, smaller recordsizes may be appropriate to trim
down the waste once the file size surpasses the record size, as well as using
large recordsizes for video files on a separate filesystem. Turning compression
and access times on and off for performance reasons is another thing.


Compression and atime settings are policies.  Recordsize could also be a
policy; however, it seems to me that you are confused about ZFS and
recordsize.  The reason it exists is for those applications (eg. databases)
which use a fixed recordsize, where we want to match that record size to
avoid doing extra work.  For example, if the application recordsize is fixed
at 8 kBytes, then we don't want to prefetch 128 kBytes (or 56 kBytes) as
that could be wasted work.  By default, ZFS will dynamically adjust its
recordsize, which is probably what you want.
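
To make that concrete (the pool and dataset names are examples only), the
property only needs to be set on the dataset that actually holds
fixed-record data:

zfs create tank/db
zfs set recordsize=8k tank/db     # match an 8 kByte database block size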

Also, in this same message, I'd like to ask what sensible maximum directory
sizes are. As in number of files.


Dunno. In theory, you could go until you run out of space.  Several people have
commented on their usage, so you can look in the archives.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs boot image conversion kit is posted

2007-05-01 Thread Lori Alt



The name of the pool should not matter.

Lori

Malachi de Ælfweald wrote:
That's a good catch - I had indeed changed mine to rootpool, but 
didn't think the chosen name mattered.


On 5/1/07, *Rob Logan* < [EMAIL PROTECTED] > wrote:

> sits there for a second, then boot loops and comes back to the
grub menu.

I noticed this too when I was playing... using
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
I could see vmunix loading, but it quickly NMIed around the
rootnex: [ID 349649 kern.notice] isa0 at root
point... changing "bootfs root/snv_62" to "bootfs rootpool/snv_62"
and rebuilding the pool EXACTLY the same way fixed it.

try changing "dataset mypool" to "dataset rootpool..."
and I bet it will work..

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org 
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs boot image conversion kit is posted

2007-05-01 Thread Malachi de Ælfweald

That's a good catch - I had indeed changed mine to rootpool, but didn't
think the chosen name mattered.

On 5/1/07, Rob Logan <[EMAIL PROTECTED]> wrote:


> sits there for a second, then boot loops and comes back to the grub
menu.

I noticed this too when I was playing... using
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
I could see vmunix loading, but it quickly NMIed around the
rootnex: [ID 349649 kern.notice] isa0 at root
point... changing "bootfs root/snv_62" to "bootfs rootpool/snv_62"
and rebuilding the pool EXACTLY the same way fixed it.

try changing "dataset mypool" to "dataset rootpool..."
and I bet it will work..

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool command causes a crash of my server

2007-05-01 Thread Leon Koll
Hello,
on my sparc server running s10/u3 with all the latest patches,
I created a zpool and one fs and started to copy the data to it. The host
crashed during the tar ... | tar ... run.
After it happened I tried "zpool destroy" and the host crashed. The same with
"zpool export".
It looks like bug 6393634: http://bugs.opensolaris.org/view_bug.do?bug_id=6393634
Is there a fix for this bug for s10/u3?
If somebody is interested, I can provide the crash dumps. Let me know before I
start the reinstall of this host (I don't see any other way to restore it).

Thanks,
[i]-- leon[/i]

[EMAIL PROTECTED] mdb -k *.2
Loading modules: [ unix krtld genunix dtrace specfs ufs sd pcisch ip sctp usba 
fcp fctl emlxs nca lofs ssd crypto zfs random logindmux ptm md cpc fcip sppp 
nfs ]
> ::status
debugging crash dump vmcore.2 (64-bit) from myhost
operating system: 5.10 Generic_125100-05 (sun4u)
panic message: assertion failed: dmu_read(os, smo->smo_object, offset, size, 
entry_map) == 0 (0x5 == 0x0), file: ../../common/fs/zfs/space_map.c, line: 307
dump content: kernel pages only

> $c
vpanic(11ed500, 7b69a6c8, 5, 7b69a708, 0, 7b69a710)
assfail3+0x94(7b69a6c8, 5, 7b69a708, 0, 7b69a710, 133)
space_map_load+0x1a4(600091cb578, 600091cd000, 1000, 600091cb248, 0, 1)
metaslab_activate+0x3c(600091cb240, 8000, c000, 
e7d4d6000, 600091cb240, c000)
metaslab_group_alloc+0x1bc(3fff, 400, 8000, 347fe68000, 
60014988000, )
metaslab_alloc_dva+0x114(0, 347fe68000, 60014988000, 400, 60008352a80, 6e3)
metaslab_alloc+0x2c(60002c12080, 400, 60014988000, 2, 6e3, 0)
zio_dva_allocate+0x4c(600090db480, 7b67b5a8, 60014988000, 7047a508, 7047a400, 
20001)
zio_write_compress+0x1ec(600090db480, 23e20b, 23e000, 1f001f, 3, 60014988000)
arc_write+0xe4(600090db480, 60002c12080, 7, 3, 2, 6e3)
dbuf_sync+0x6c0(600149714a0, 600090db700, 0, 3, 7, 6e3)
dnode_sync+0x35c(0, 0, 600090db700, 6001309cd80, 0, 7)
dmu_objset_sync_dnodes+0x6c(600090d5940, 600090d5a80, 6001309cd80, 6000935ef38, 
0, 0)
dmu_objset_sync+0x7c(600090d5940, 6001309cd80, 3, 3, 600090e5688, 6e3)
dsl_dataset_sync+0xc(60011f4e940, 6001309cd80, 60011f4e9d0, 600068d25b8, 
600068d25b8, 60011f4e940)
dsl_pool_sync+0x64(600068d2500, 6e3, 60011f4e940, 60009485ec0, 600083bc5c0, 
600083bc5e8)
spa_sync+0x1b0(60002c12080, 6e3, 0, 0, 2a100d7dcc4, 1)
txg_sync_thread+0x134(600068d2500, 6e3, 0, 2a100d7dab0, 600068d2610, 
600068d2612)
thread_start+4(600068d2500, 0, 2820290a2020205b, 2070726576696f75, 
73205d0a3b0a0a0a, 766f636162756
> $q
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] "ZFS: Under The Hood" at LOSUG (16/05/07)

2007-05-01 Thread Brian Gupta

Any chance you might do a video podcast of this?

-Brian

On 5/1/07, Joy Marshall <[EMAIL PROTECTED]> wrote:


The speaker at our next London OpenSolaris User Group (LOSUG) session will
be Jarod Nash, TSC Systems Engineer.

Jarod will present "ZFS: Under The Hood", which aims to lift up the ZFS
Bonnet and explain the building blocks of the ZFS engine.  Without getting
too bogged down with specific implementation details, he will cover the
technical function of all the key components such as the ZPL, the DMU, and
the SPA. Jarod says, "You could think of this as a 'ZFS Internals Lite'."

Refreshments will be served from 18.00 at the Sun Microsystems Customer
Briefing Centre, Regis House, 45 King William Street, London, EC4R 9AN and
the presentation will start at 18.30, following which there will be food
and drinks provided.

Anyone with an interest in OpenSolaris is welcome at LOSUG from Students &
Campus Ambassadors to Customers and Developers, so please come along and see
what it's all about.

To register to attend the next LOSUG meeting on 16th May, please send your
name to Sean Sprague - [EMAIL PROTECTED] asking to be added to the
LOSUG 16/05 list.

We're looking forward to seeing you there.

Joy


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: zfs boot image conversion kit is posted

2007-05-01 Thread Mike Walker
I am attempting to install b62 from the b62_zfsboot.iso that was posted last
week.

> Mike makes a good point.  We have some severe problems with build 63.
> I've been hoping to get an answer for what's going on with it, but so
> far, I don't have one.
>
> So, note to everyone:  for zfs boot purposes, build 63 appears to be
> DOA.  We'll get information out on that as soon as possible, and try
> to get it fixed for build 64, but until then, stick with build 62.
>
> Lori
>
> Mike Dotson wrote:
> > Lori,
> >
> > Couldn't tell but is he running build 63?
> >
> > On Tue, 2007-05-01 at 08:16 -0600, Lori Alt wrote:
> >
> >> It looks to me like what you did should have worked.
> >> The "cluster" line is fine.  I almost always include one
> >> in my profiles.
> >>
> >> So here are a couple of things to try:
> >>
> >> 1.  After the install completes, but before you reboot, look at
> >>     the GRUB menu file:
> >>
> >>     #  mount -F zfs mypool /mnt
> >>     #  cat /mnt/boot/grub/menu.lst
> >>
> >>     The tail of the file should look like this:
> >>
> >> #-- ADDED BY BOOTADM - DO NOT EDIT --
> >> title Solaris Nevada snv_62 X86
> >> kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
> >> module$ /platform/i86pc/$ISADIR/boot_archive
> >> #-END BOOTADM
> >> #-- ADDED BY BOOTADM - DO NOT EDIT --
> >> title Solaris failsafe
> >> kernel /boot/platform/i86pc/kernel/unix -s
> >> module /boot/x86.miniroot-safe
> >> #-END BOOTADM
> >>
> >>     If it doesn't, something has gone wrong.  Although, as long as
> >>     you have the first of those two entries, it should still boot.
> >>
> >> 2.  You can edit the first kernel$ line above to add "-kd" to it.
> >>     That will cause the system to boot into kmdb (assuming it boots
> >>     at all).  If it gets that far, you can either poke around in
> >>     kmdb if you know it, or just type ":c" to complete rebooting,
> >>     at which point maybe you'll get some useful error messages.
> >>
> >> 3.  When you did the install, were there any error messages?
> >>
> >> Offhand, I don't have any ideas as to what the problem is.  But
> >> these are some of the things I'd do to debug it.  Let me know how
> >> it goes.
> >>
> >> Lori
> >>
> >> Mike Walker wrote:
> >>
> >>> I also downloaded the .iso file, burned it, and started the
> >>> install process.  I followed these instructions for creating the
> >>> profile.
> >>>
> >>> [i]Here's a quick-and-dirty way to do a profile-driven install:
> >>>
> >>> 1. Boot your system off the net or from the DVD in the usual manner.
> >>>
> >>> 2. Select "Interactive Install".  Then, at the first opportunity
> >>>    to exit out of it (which will be after you've answered the
> >>>    system configuration questions, such as whether you want
> >>>    Kerberos and what the root password will be), exit out to a shell.
> >>>
> >>> 3. Create a profile for the install in /tmp/profile.  (The contents
> >>>    of the profile are described below).
> >>>
> >>> 4. Execute the following:
> >>>
> >>>    # pfinstall /tmp/profile
> >>>
> >>> When it's done, reboot.  You should get a GRUB menu.  Select the
> >>> entry with the title "Solaris  X86".  The failsafe
> >>> entry should work too.
> >>>
> >>> Creating a profile for the install
> >>> --
> >>> The system profile you use should look something like this:
> >>>
> >>> install_type initial_install
> >>> cluster SUNWCuser
> >>> filesys c0t0d0s1 auto swap
> >>> pool mypool free / mirror c0t0d0s0 c0t1d0s0
> >>> dataset mypool/BE1 auto /
> >>> dataset mypool/BE1/usr auto /usr
> >>> dataset mypool/BE1/opt auto /opt
> >>> dataset mypool/BE1/var auto /var
> >>> dataset mypool/BE1/export auto /export[/i]
> >>>
> >>> Obviously I changed the drives as required.  Then I ran pfinstall
> >>> on the profile I created.  The install looked like it worked
> >>> correctly, but after a reboot I'm having issues.
> >>>
> >>> I get to the grub menu, which only has one entry, "Solaris", which
> >>> when you edit the line is the following: [b]kernel$
> >>> /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS[/b].  When I
> >>> pick this option it sits there for a second, then boot loops and
> >>> comes back to the grub menu.
> >>>
> >>> Any suggestions?  Any way I can see what it's doing when it pauses
> >>> before the reboot?  I'm kinda new at this OpenSolaris stuff, so any
> >>> debugging tips/tricks would be greatly appreciated.
> >>>
> >>> Mike
> >>>
> >>> This message posted from opensolaris.org
> >>> ___
> >>> zfs-discuss mailing list
> >>> zfs-discuss@opensolaris.org
> >>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >>
> >> ___
> >> zfs-discuss mailing list
> >> zfs-discuss@opensolaris.org
> >> http://mail.op

[zfs-discuss] "ZFS: Under The Hood" at LOSUG (16/05/07)

2007-05-01 Thread Joy Marshall
The speaker at our next London OpenSolaris User Group (LOSUG) session will be 
Jarod Nash, TSC Systems Engineer.

Jarod will present "ZFS: Under The Hood", which aims to lift up the ZFS Bonnet 
and explain the building blocks of the ZFS engine.  Without getting too bogged 
down with specific implementation details, he will cover the technical function 
of all the key components such as the ZPL, the DMU, and the SPA. Jarod says 
"You could think of this as a "ZFS Internals Lite".

Refreshments will be served from 18.00 at the Sun Microsystems Customer 
Briefing Centre, Regis House, 45 King William Street, London, EC4R 9AN and the 
presentation will start at 18.30, following which there will be food and drinks 
provided.
 
Anyone with an interest in OpenSolaris is welcome at LOSUG from Students & 
Campus Ambassadors to Customers and Developers, so please come along and see 
what it's all about.

To register to attend the next LOSUG meeting on 16th May, please send your name 
to Sean Sprague - [EMAIL PROTECTED] asking to be added to the LOSUG 16/05 list. 

We're looking forward to seeing you there.

Joy
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs boot image conversion kit is posted

2007-05-01 Thread Rob Logan

> sits there for a second, then boot loops and comes back to the grub menu.

I noticed this too when I was playing... using
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
I could see vmunix loading, but it quickly NMIed around the
rootnex: [ID 349649 kern.notice] isa0 at root
point... changing "bootfs root/snv_62" to "bootfs rootpool/snv_62"
and rebuilding the pool EXACTLY the same way fixed it.

try changing "dataset mypool" to "dataset rootpool..."
and I bet it will work..
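
For anyone following along, the kind of GRUB entry being discussed would
presumably end up looking roughly like this sketch (pool and BE names from
the example above; not the literal generated menu.lst):

title Solaris ZFS boot (snv_62)
bootfs rootpool/snv_62
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive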

Rob
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs boot image conversion kit is posted

2007-05-01 Thread Lori Alt


Mike makes a good point.  We have some severe problems
with build 63.  I've been hoping to get an answer for what's
going on with it, but so far, I don't have one.

So, note to everyone:  for zfs boot purposes, build 63 appears
to be DOA.  We'll get information out on that as soon as possible,
and try to get it fixed for build 64, but until then, stick with build 62.

Lori

Mike Dotson wrote:

Lori,

Couldn't tell but is he running build 63?

On Tue, 2007-05-01 at 08:16 -0600, Lori Alt wrote:
  

It looks to me like what you did should have worked.
The "cluster" line is fine.  I almost always include one
in my profiles.

So here are a couple of things to try:

1.  After the install completes, but before you reboot, look at
the GRUB menu file:

#  mount -F zfs mypool /mnt
#  cat /mnt/boot/grub/menu.lst

The tail of the file should look like this:

#-- ADDED BY BOOTADM - DO NOT EDIT --
title Solaris Nevada snv_62 X86
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
#-END BOOTADM
#-- ADDED BY BOOTADM - DO NOT EDIT --
title Solaris failsafe
kernel /boot/platform/i86pc/kernel/unix -s
module /boot/x86.miniroot-safe
#-END BOOTADM

   If it doesn't, something has gone wrong.  Although, as long as you have
   the first of those two entries, it should still boot.

2.  You can edit the first kernel$ line above to add "-kd" to it.  That
   will cause the system to boot into kmdb (assuming it boots at all).
   If it gets that far, you can either poke around in kmdb if you know it,
   or just type ":c" to complete rebooting, at which point maybe you'll
   get some useful error messages.

3.  When you did the install, were there any error messages? 


Offhand, I don't have any ideas as to what the problem is.  But these
are some of the things I'd do to debug it.  Let me know how it goes.

Lori

Mike Walker wrote:


I also downloaded the .iso file, burned it, and started the install process.  I 
followed these instructions for creating the profile.

[i]Here's a quick-and-dirty way to do a profile-driven install:

1. Boot your system off the net or from the DVD in the usual manner.

2. Select "Interactive Install".  Then, at the first opportunity
   to exit out of it (which will be after you've answered the
   system configuration questions, such as whether you want
   Kerberos and what the root password will be), exit out to a shell.

3. Create a profile for the install in /tmp/profile.  (The contents
   of the profile are described below).

4. Execute the following:

   # pfinstall /tmp/profile

When it's done, reboot.  You should get a GRUB menu.  Select the
entry with the title "Solaris  X86".  The failsafe
entry should work too.


Creating a profile for the install
--
The system profile you use should look something like this:

install_type initial_install
cluster SUNWCuser
filesys c0t0d0s1 auto swap
pool mypool free / mirror c0t0d0s0 c0t1d0s0
dataset mypool/BE1 auto /
dataset mypool/BE1/usr auto /usr
dataset mypool/BE1/opt auto /opt
dataset mypool/BE1/var auto /var
dataset mypool/BE1/export auto /export[/i]

Obviously I changed the drives as required.  Then I ran pfinstall on the 
profile I created.  The install looked like it worked correctly, but after a 
reboot I'm having issues.

I get to the grub menu, which only has one entry, "Solaris", which when you 
edit the line is the following: [b]kernel$ /platform/i86pc/kernel/$ISADIR/unix -B 
$ZFS-BOOTFS[/b].  When I pick this option it sits there for a second, then boot loops 
and comes back to the grub menu.

Any suggestions?  Any way I can see what it's doing when it pauses before the 
reboot?  I'm kinda new at this OpenSolaris stuff, so any debugging tips/tricks 
would be greatly appreciated.

Mike
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs boot image conversion kit is posted

2007-05-01 Thread Lori Alt

Jason King wrote:

I tried it and it worked great.  Even cloned my boot environment, and BFU'd the 
clone and it seemed to work (minus a few unrelated annoyances I haven't tracked 
down yet).  I'm quite excited about the possibilities :)

I am wondering, though: is it possible to skip the creation of the pool and have 
it install to an empty filesystem (or filesystems) in an existing pool (assuming 
the pool is already set up w/ grub and the like)?  I'm thinking of installing new 
builds (no upgrades), etc., as time goes on until the new installer is here.
 
  

Yes, eventually, we should be able to do that.  But the version
of pfinstall you have right now doesn't support it.

Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Re: Re: zfs boot image conversion kit is posted

2007-05-01 Thread Lori Alt

It looks to me like what you did should have worked.
The "cluster" line is fine.  I almost always include one
in my profiles.

So here are a couple of things to try:

1.  After the install completes, but before you reboot, look at
   the GRUB menu file:

   #  mount -F zfs mypool /mnt
   #  cat /mnt/boot/grub/menu.lst

   The tail of the file should look like this:

#-- ADDED BY BOOTADM - DO NOT EDIT --
title Solaris Nevada snv_62 X86
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
#-END BOOTADM
#-- ADDED BY BOOTADM - DO NOT EDIT --
title Solaris failsafe
kernel /boot/platform/i86pc/kernel/unix -s
module /boot/x86.miniroot-safe
#-END BOOTADM

  If it doesn't, something has gone wrong.  Although, as long as you have
  the first of those two entries, it should still boot.

2.  You can edit the first kernel$ line above to add "-kd" to it (see the example after this list).  That
  will cause the system to boot into kmdb (assuming it boots at all).
  If it gets that far, you can either poke around in kmdb if you know it,
  or just type ":c" to complete rebooting, at which point maybe you'll
  get some useful error messages.

3.  When you did the install, were there any error messages? 
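
As a concrete illustration of step 2, with only "-kd" added and everything
else left alone, the edited line would read:

kernel$ /platform/i86pc/kernel/$ISADIR/unix -kd -B $ZFS-BOOTFS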


Offhand, I don't have any ideas as to what the problem is.  But these
are some of the things I'd do to debug it.  Let me know how it goes.

Lori

Mike Walker wrote:

I also downloaded the .iso file, burned it, and started the install process.  I 
followed these instructions for creating the profile.

[i]Here's a quick-and-dirty way to do a profile-driven install:

1. Boot your system off the net or from the DVD in the usual manner.

2. Select "Interactive Install".  Then, at the first opportunity
   to exit out of it (which will be after you've answered the
   system configuration questions, such as whether you want
   Kerberos and what the root password will be), exit out to a shell.

3. Create a profile for the install in /tmp/profile.  (The contents
   of the profile are described below).

4. Execute the following:

   # pfinstall /tmp/profile

When it's done, reboot.  You should get a GRUB menu.  Select the
entry with the title "Solaris  X86".  The failsafe
entry should work too.


Creating a profile for the install
--
The system profile you use should look something like this:

install_type initial_install
cluster SUNWCuser
filesys c0t0d0s1 auto swap
pool mypool free / mirror c0t0d0s0 c0t1d0s0
dataset mypool/BE1 auto /
dataset mypool/BE1/usr auto /usr
dataset mypool/BE1/opt auto /opt
dataset mypool/BE1/var auto /var
dataset mypool/BE1/export auto /export[/i]

Obviously I changed the drives as required.  Then I ran pfinstall on the 
profile I created.  The install looked like it worked correctly, but after a 
reboot I'm having issues.

I get to the grub menu, which only has one entry, "Solaris", which when you 
edit the line is the following: [b]kernel$ /platform/i86pc/kernel/$ISADIR/unix -B 
$ZFS-BOOTFS[/b].  When I pick this option it sits there for a second, then boot loops 
and comes back to the grub menu.

Any suggestions?  Any way I can see what it's doing when it pauses before the 
reboot?  I'm kinda new at this OpenSolaris stuff, so any debugging tips/tricks 
would be greatly appreciated.

Mike
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Multiple filesystem costs? Directory sizes?

2007-05-01 Thread Mario Goebbels
While setting up my new system, I'm wondering whether I should go with plain
directories or use ZFS filesystems for specific stuff. About the cost of ZFS
filesystems, I read on some Sun blog in the past about something like 64k of
kernel memory (or whatever) per active filesystem. What, however, are the
additional costs?

The reason I'm considering multiple filesystems is, for instance, easy ZFS
backups and snapshots, but also tuning the recordsizes. When storing lots of
generic pictures from the web, smaller recordsizes may be appropriate to trim
down the waste once the file size surpasses the record size, as well as using
large recordsizes for video files on a separate filesystem. Turning compression
and access times on and off for performance reasons is another thing.
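
As a rough sketch of the kind of per-filesystem policy split being described
(the pool and dataset names are invented):

zfs create tank/pictures
zfs create tank/video
zfs set compression=on tank/pictures    # per-dataset compression policy
zfs set atime=off tank/video            # per-dataset atime policy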

Also, in this same message, I'd like to ask what sensible maximum directory
sizes are. As in number of files.

Thanks.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: B62 AHCI and ZFS

2007-05-01 Thread Peter Goodman
Changed to:

zpool create -f mtf c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

works fine, and the throughput is still a very satisfactory sustained
sequential 360MB/s write and 385MB/s read.


Thanks
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Very Large Filesystems

2007-05-01 Thread Jan-Frode Myklebust
On 2007-04-28, Brian Hechinger <[EMAIL PROTECTED]> wrote:
>> 
> So what you *really* want is TSM.  I wonder if IBM would ever
> consider supporting ZFS.
>

TSM can do file-level backup of any normal POSIX file system if you specify
"VirtualMountPoint /directory". You just won't get the extra features like
extended attributes and ACLs covered.
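
For example, the client options file (dsm.sys) could carry one line per ZFS
filesystem -- the paths here are illustrative only:

* Illustrative only: expose each ZFS filesystem to TSM as its own file space.
VirtualMountPoint /export/home
VirtualMountPoint /export/projects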


  -jf

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs performance on fuse (Linux) compared to other fs

2007-05-01 Thread Georg-W. Koltermann
Not much bliss to report with OpenSolaris.

Getting there was an adventure.  I had to find out how to boot from a changed
root device (after reconnecting my primary disk, which I had disconnected
during the Solaris install so I wouldn't accidentally overwrite it).  I had to
find a way of transferring my /home: ext3 wasn't recognized, so I copied it to
zfs on Linux -- but that was on a logical partition, and Solaris doesn't seem
to recognize those; ok, redid the copy to a primary partition -- now Solaris
complains about not being able to read ZFS version 6; ok, redid the copy a
third time, this time initializing the zfs on the Solaris side and then
writing to it from Linux -- finally it worked.

The times, though, are a "little" discouraging:

1.
real    4m26.736s
user    0m0.509s
sys     0m8.572s

2.
real    4m27.082s
user    0m0.515s
sys     0m8.609s

So zfs-fuse actually outperforms zfs on Solaris by a factor of 2!  Now I'm
impatiently waiting until it gets optimized :)

Disclaimer:

I admit doing a find is a silly FS benchmark.  But I didn't want to benchmark;
I wanted to find the best solution for my workload, which is more or less
random access to a bunch of small files.  I admit doing a build in Eclipse
would have been a better sample than find(1), but installing Eclipse on
OpenSolaris would even require that I build Eclipse from source.  That was
more time than I was willing to invest.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss