Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Boyd Adamson
Phil Harman phil.har...@sun.com writes:

 Gary Mills wrote:
 The Solaris implementation of mmap(2) is functionally correct, but the
 wait for a 64 bit address space rather moved the attention of
 performance tuning elsewhere. I must admit I was surprised to see so
 much code out there that still uses mmap(2) for general I/O (rather
 than just to support dynamic linking).

Probably this is encouraged by documentation like this:

 The memory mapping interface is described in Memory Management
 Interfaces. Mapping files is the most efficient form of file I/O for
 most applications run under the SunOS platform.

Found at:

http://docs.sun.com/app/docs/doc/817-4415/fileio-2?l=ena=view


Boyd.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenStorage GUI

2008-11-11 Thread Boyd Adamson
Bryan Cantrill [EMAIL PROTECTED] writes:

 On Tue, Nov 11, 2008 at 02:21:11PM -0500, Ed Saipetch wrote:
 Can someone clarify Sun's approach to opensourcing projects and  
 software?  I was under the impression the strategy was to charge for  
 hardware, maintenance and PS.  If not, some clarification would be nice.

 There is no single answer -- we use open source as a business strategy,
 not as a checkbox or edict.  For this product, open source is an option
 going down the road, but not a priority.  Will our software be open
 sourced in the fullness of time?  My Magic 8-Ball tells me "signs
 point to yes" (or is that "ask again later"?) -- but it's certainly 
 not something that we have concrete plans for at the moment...

I think that's fair enough. What Sun chooses to do is, of course, up to
Sun.

One can, however, understand that people might have expected otherwise
given statements like this:

 With our announced intent to open source the entirety of our software
 offerings, every single developer across the world now has access to
 the most sophisticated platform available for web 1.0, 2.0 and beyond

- Jonathan Schwartz
http://www.sun.com/smi/Press/sunflash/2005-11/sunflash.20051130.1.xml

-- 
Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] add autocomplete feature for zpool, zfs command

2008-10-10 Thread Boyd Adamson
Alex Peng [EMAIL PROTECTED] writes:
 Is it fun to have autocomplete in zpool or zfs command?

 For instance -

 zfs cr 'Tab key'  will become zfs create
 zfs clone 'Tab key'  will show me the available snapshots
 zfs set 'Tab key'  will show me the available properties, then zfs set 
 com 'Tab key' will become zfs set compression=,  another 'Tab key' here 
 would show me on/off/lzjb/gzip/gzip-[1-9]
 ..


 Looks like a good RFE.

This would be entirely under the control of your shell. The zfs and
zpool commands have no control until after you press enter on the
command line.

Both bash and zsh have programmable completion that could be used to add
this (and I'd like to see it for these and other Solaris-specific
commands).

I'm sure ksh93 has something similar.
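For illustration, here's a minimal bash completion sketch (subcommand
names only, and the list is hard-coded rather than queried from the
tool, so treat it as a starting point, not a finished completion):

  # Complete the first argument of 'zfs' with a fixed list of subcommands.
  _zfs_complete() {
      local cur=${COMP_WORDS[COMP_CWORD]}
      local subcmds="create destroy snapshot rollback clone promote rename \
  list set get inherit mount unmount share unshare send receive"
      if [ "$COMP_CWORD" -eq 1 ]; then
          COMPREPLY=( $(compgen -W "$subcmds" -- "$cur") )
      fi
  }
  complete -F _zfs_complete zfs

A real completion would go on to complete dataset names (from
"zfs list -H -o name") and property names as well.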

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] add autocomplete feature for zpool, zfs command

2008-10-10 Thread Boyd Adamson

On 10/10/2008, at 5:12 PM, Nathan Kroenert wrote:
 On 10/10/08 05:06 PM, Boyd Adamson wrote:
 Alex Peng [EMAIL PROTECTED] writes:
 Is it fun to have autocomplete in zpool or zfs command?

 For instance -

zfs cr 'Tab key'  will become zfs create
zfs clone 'Tab key'  will show me the available snapshots
zfs set 'Tab key'  will show me the available properties,  
 then zfs set com 'Tab key' will become zfs set compression=,   
 another 'Tab key' here would show me on/off/lzjb/gzip/gzip-[1-9]
..


 Looks like a good RFE.
 This would be entirely under the control of your shell. The zfs and
 zpool commands have no control until after you press enter on the
 command line.
 Both bash and zsh have programmable completion that could be used  
 to add
 this (and I'd like to see it for these and other solaris specific
 commands).
 I'm sure ksh93 has something similar.
 Boyd
 Hm -

 This caused me to ask the question: Who keeps the capabilities in  
 sync?

 Is there a programmatic way we can have bash (or other shells)  
 interrogate zpool and zfs to find out what it's capabilities are?

 I'm thinking something like having bash spawn a zfs command to see  
 what options are available in that current zfs / zpool version...

 That way, you would never need to do anything to bash/zfs once it  
 was done the first time... do it once, and as ZFS changes, the  
 prompts change automatically...

 Or - is this old hat, and how we do it already? :)

 Nathan.

I can't speak for bash, but there are certainly some completions in  
zsh that do this kind of thing. I'm pretty sure there is some  
completion code that runs commands gnu-style with --help and then  
parses the output.

As long as there is a reliable and regular way to query the  
subcommands I think it's reasonable.
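For example, the subcommand names could plausibly be scraped from the
usage message the tools print when run with no arguments (an untested
sketch; it assumes the subcommand lines in the usage output are
tab-indented, which may differ between releases):

  zfs 2>&1 | nawk '/^\t/ { print $1 }' | sort -u

Anything beyond that (snapshot names, property values) would still need
per-subcommand knowledge in the completion itself.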

(Personally I'd like to see a more general and complete way for  
commands to provide completion info to shells, either in a separate  
file of some sort or by calling them in a certain way, but that's a  
pipe dream I fear)

Boyd
.. who has a vague feeling that that's how it worked for DCL on VMS


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comments on green-bytes

2008-10-10 Thread Boyd Adamson
David Dyer-Bennet [EMAIL PROTECTED] writes:
 On Tue, October 7, 2008 09:19, Johan Hartzenberg wrote:

 Wouldn't it be great if programmers could just focus on writing code
 rather than having to worry about getting sued over whether someone
 else is able or not to make a derivative program from their code?

 If that's what you want, it's easy to achieve.  Simply place your code
 explicitly in the public domain.

 So stop trying to muck up what lots of the rest of us want, which is
 that developments based on free code *stay* free, okay?

Alas, it's not even as simple as that. The author of SQLite, D. Richard
Hipp, took this approach for reasons like those above. He's said[1] that
he wouldn't do it again, since there are problems for users in some
jurisdictions that have no concept of Public Domain.

He said he'd probably use a BSD licence, if he did it all again.

[1] See, e.g. http://twit.tv/floss26
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle DB sequential dump questions

2008-10-01 Thread Boyd Adamson
Carson Gaspar [EMAIL PROTECTED] writes:

 Joerg Schilling wrote:
 Carson Gaspar[EMAIL PROTECTED]  wrote:

 Louwtjie Burger wrote:
 Dumping a large file from memory using tar to LTO yields 44 MB/s ... I 
 suspect the CPU cannot push more since it's a single thread doing all the 
 work.

 Dumping oracle db files from filesystem yields ~ 25 MB/s. The interesting 
 bit (apart from it being a rather slow speed) is the fact that the speed 
 fluctuates from the disk area.. but stays constant to the tape. I see up 
 to 50-60 MB/s spikes over 5 seconds, while the tape continues to push its
 steady 25 MB/s.
 ...
 Does your tape drive compress (most do)? If so, you may be seeing
 compressible vs. uncompressible data effects.

 HW Compression in the tape drive usually increases the speed of the drive.

 Yes. Which is exactly what I was saying. The tar data might be more 
 compressible than the DB, thus be faster. Shall I draw you a picture, or 
 are you too busy shilling for star at every available opportunity?

Sheesh, calm down, man.

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [Fwd: File system size reduction]

2008-08-27 Thread Boyd Adamson
Vikas Kakkar [EMAIL PROTECTED] writes:
[..]
 Would you know if ZFS is supported for Sun Cluster?


ZFS is supported as a failover filesystem in SunCluster 3.2. There is no
support for ZFS as a global filesystem.
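For reference, the usual way to do that in SC 3.2 is an HAStoragePlus
resource that imports and exports the pool on failover. A rough sketch
(the resource group and pool names here are made up; check the Sun
Cluster data service docs for the exact procedure):

# clresourcetype register SUNW.HAStoragePlus
# clresourcegroup create zfs-rg
# clresource create -g zfs-rg -t SUNW.HAStoragePlus -p Zpools=tank tank-hasp-rs
# clresourcegroup online -M zfs-rg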

HTH,

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Boyd Adamson
Enda O'Connor ( Sun Micro Systems Ireland) [EMAIL PROTECTED]
writes:
[..]
 meant to add that on x86 the following should do the trick ( again I'm open 
 to correction )

 installgrub /boot/grub/stage1 /zfsroot/boot/grub/stage2 /dev/rdsk/c1t0d0s0

 haven't tested the x86 one though.

I used

installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

(i.e., no /zfsroot)

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool upgrade -v

2008-07-03 Thread Boyd Adamson
Walter Faleiro [EMAIL PROTECTED] writes:

 Hi,
 I reinstalled our Solaris 10 box using the latest update available.
 However I could not upgrade the zpool

 bash-3.00# zpool upgrade -v
 This system is currently running ZFS version 4.

 The following versions are supported:

 VER  DESCRIPTION
 ---  --------------------------------------------------------
  1   Initial ZFS version
  2   Ditto blocks (replicated metadata)
  3   Hot spares and double parity RAID-Z
  4   zpool history

 For more information on a particular version, including supported releases,
 see:

 http://www.opensolaris.org/os/community/zfs/version/N

 Where 'N' is the version number.

 bash-3.00# zpool upgrade -a
 This system is currently running ZFS version 4.

 All pools are formatted using this version.


 The sun docs said to use zpool upgrade -a. Looks like I have missed something.

Not that I can see. Not every Solaris update includes a change to the
zpool version number. Zpool version 4 is the most recent version on
Solaris 10 releases.

HTH,

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] CIFS HA service with solaris 10 and SC 3.2

2008-06-22 Thread Boyd Adamson
Marcelo Leal [EMAIL PROTECTED] writes:
 Hello all,

 [..]
 
  1) What the difference between the smb server in solaris/opensolaris,
  and the new project CIFS?

What you refer to as the smb server in solaris/opensolaris is in fact
Samba, which sits on top of a plain unix system. This has limitations in
the areas of user accounts and ACLs, among others. The new CIFS project
provides a CIFS server that's integrated from the ground up, including
the filesystem itself.

  2) I think samba.org has an implementation of CIFS protocol, to make
  a unix-like operating system to be a SMB/CIFS server. Why don't use
  that? license problems? the smbserver that is already on
  solaris/opensolaris is not a samba.org implementation?

See above

  3) One of the goals to the CIFS Server project on OpenSolaris, is to
  support OpenSolaris as a storage operating system... we can not do it
  with samba.org implementation, or smbserver implementation that is
  already there?

See above

  4) And the last one: ZFS has smb/cifs share/on/off capabilities,
  what is the relation of that with all of that??

Those properties are part of the administrative interface for the new
in-kernel CIFS server.

  5) Ok, there is another question... there is a new projetc (data
  migration manager/dmm), that is intend to migrate NFS(GNU/Linux)
  services, and CIFS(MS/Windows) services to Solaris/Opensolaris and
  ZFS. That project is on storage community i think...but, how can we
  create a migration plan if we can not handle the services yet? or
  can?

I'm not sure what you mean by "we can not handle the services yet". As
mentioned above, OpenSolaris now has two separate ways to provide SMB/CIFS
services, and has had NFS support since... oh, about when Sun invented
NFS, I'd guess. :) And it's way more solid than Linux's.

 Ok, I'm very confused, but it's not just my fault; I think all these
 efforts without any glue are a little complicated, don't you agree?

 And on top of all that, there is a need to have an agent to implement HA
 services on it... I want to implement a SMB/CIFS server on
 Solaris/OpenSolaris, and don't know if we have the solution in our
 community or not, and whether there is an agent to provide HA or we need
 to create a project to implement that...

Have you seen this?
http://opensolaris.org/os/community/ha-clusters/ohac/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Quota question

2008-06-11 Thread Boyd Adamson
Glaser, David [EMAIL PROTECTED] writes:

 Hi all, I'm new to the list and I thought I'd start out on the right
 foot. ZFS is great, but I have a couple of questions.

 I have a Try-n-buy x4500 with one large zfs pool with 40 1TB drives in
 it. The pool is named backup.

 Of this pool, I have a number of volumes.

 backup/clients

 backup/clients/bob

 backup/clients/daniel

 ...

 Now bob and Daniel are populated by rsync over ssh to synchronize
 filesystems with client machines. (The data will then be written to a
 SL500.) I'd like to set the quota on /backup/clients to some arbitrarily
 small amount. Seems pretty handy, since nothing should go into
 backup/clients but into the volumes backup/clients/*. But when I set
 the quota on backup/clients, I am unable to increase the quota for the
 sub-volumes (bob, Daniel, etc).

 Any ideas if this is possible or how to do it?

Sounds like you want refquota:

From: zfs(1M)

 refquota=size | none

 Limits the amount of space a dataset can  consume.  This
 property  enforces  a  hard limit on the amount of space
 used. This hard limit does not  include  space  used  by
 descendents, including file systems and snapshots.
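In concrete terms (dataset names taken from your mail; the 10M figure
is just an arbitrarily small number):

# zfs set refquota=10M backup/clients
# zfs set quota=500G backup/clients/bob
# zfs set quota=500G backup/clients/daniel

refquota caps only what backup/clients itself holds, so the per-client
quotas on the descendants can still be raised independently. refquota is
a relatively recent property, so check that it appears in zfs(1M) on
your release first.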
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] new install - when is zfs root offered? (snv_90)

2008-06-04 Thread Boyd Adamson
A Darren Dunham [EMAIL PROTECTED] writes:

 On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard L. Hamilton wrote:
 How about SPARC - can it do zfs install+root yet, or if not, when?
 Just got a couple of nice 1TB SAS drives, and I think I'd prefer to
 have a mirrored pool where zfs owns the entire drives, if possible.
 (I'd also eventually like to have multiple bootable zfs filesystems in
 that pool, corresponding to multiple versions.)

 Are they just under 1TB?  I don't believe there's any boot support in
 Solaris for EFI labels, which would be required for 1TB+.

ISTR that I saw an ARC case go past about a week ago about extending SMI
labels to allow > 1TB disks, for exactly this reason.

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Create ZFS now, add mirror later

2008-06-02 Thread Boyd Adamson
Richard Elling [EMAIL PROTECTED] writes:

 W. Wayne Liauh wrote:
 E. Mike Durbin wrote:
 
 Is there a way to to a create a zfs file system
 (e.g. zpool create boot /dev/dsk/c0t0d0s1)

 Then, (after vacating the old boot disk) add
   
 another
 
 device and make the zpool a mirror?
   
   
 zpool attach
  -- richard
 
 (as in: zpool create boot mirror /dev/dsk/c0t0d0s1
   
 /dev/dsk/c1t0d0s1)
 
 Thanks!

 emike
  
  
   

 Thanks, but for starters, where is the best place to find info like this 
 (i.e., the easiest to get started on zfs)?
   

 The main ZFS community site is:
 http://www.opensolaris.org/os/community/zfs/

 There is a lot of good information and step-by-step examples
 in the ZFS Administration Guide located under the docs
 section:
 http://www.opensolaris.org/os/community/zfs/docs/
  -- richard

I'd further add that, since there are only 2 zfs related commands[1],
reading the man pages for zpool(1M) and zfs(1M) is a good investment of
time.
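For the specific case that started the thread, the one-liner is the
zpool attach Richard mentioned; using the device names from the
original post:

# zpool create boot c0t0d0s1
  ... later, once the old boot disk has been vacated ...
# zpool attach boot c0t0d0s1 c1t0d0s1

zpool status will then show the resilver in progress and, once it
completes, a two-way mirror.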

Boyd.

[1] I'm aware of zdb, but it's not really relevant to this discussion.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Space used by the snapshot

2008-06-02 Thread Boyd Adamson
Silvio Armando Davi [EMAIL PROTECTED] writes:

 Hi,

 I create a pool mirrored with 1gb of space. After I create a file
 system in that pool and put a file (file1) of 300MB it that file
 system. After that, I create a snapshot in the file system. With the
 zfs list command the space used by the snapshot is 0 (zero). It's ok.

 Well after that I copied the file of 300 mb to a another file (file2)
 in the same file system. Listing the files in the file system I can
 see the two files and listing the files in the snapshot I can see only
 the first file. It´s ok too, but the zfs list now shows that the
 snapshot uses 23.5KB of space.

 I suppose that the copy of the file1 change the atime of the inode and
 for this reason the inode of the file1 needed to be copied to the
 snapshot using the space of the snapshot. I tried to set the atime to
 off, but the space of 23.5KB of the snapshot still being used after
 the copy of the file.

 Anyone knows the reason the snapshot uses that 23.5kb of space?

I would guess that there is a lot more than the atime that needs
tracking. The containing directory at the destination has changed, so
the "before" version of that would take up space. I think that the free
space maps go in there too.
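If you want to watch it happen, a throwaway file-backed pool makes a
safe sandbox (a sketch; paths are arbitrary):

# mkfile 1g /var/tmp/demo-disk
# zpool create demo /var/tmp/demo-disk
# mkfile 300m /demo/file1
# zfs snapshot demo@before
# cp /demo/file1 /demo/file2
# zfs list -t snapshot

The snapshot's USED column shows the handful of kilobytes of metadata
(old directory blocks and the like) that had to be kept.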

Others will probably be able to answer more authoritatively.

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Version Correct

2008-05-21 Thread Boyd Adamson
Kenny [EMAIL PROTECTED] writes:

 I have Sun Solaris 5.10 Generic_120011-14 and the zpool version is 4.
 I've found references to version 5-10 on the Open Solaris site.

 Are these versions for Open solaris only?  I've searched the SUN site
 for ZFS patches and found nothing (most likely operator headspace).
 Can I update ZFS on my Sun box and if so where are the updates?

I didn't see anyone answer your specific question about the zpool
version on solaris 10.

At this stage zpool version 4 is the latest version in Solaris. S10u6 is
expected to take that to version 10.

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] path-name encodings

2008-03-05 Thread Boyd Adamson
Marcus Sundman [EMAIL PROTECTED] writes:
 So, you see, there is no way for me to use filenames intelligibly unless
 their encodings are knowable. (In fact I'm quite surprised that zfs
 doesn't (and even can't) know the encoding(s) of filenames. Usually Sun
 seems to make relatively sane design decisions. This, however, is more
 what I'd expect from Linux with their overpragmatic "who cares if it's
 sane, as long as it kinda works" attitudes.)

To be fair, ZFS is constrained by compatibility requirements with
existing systems. For the longest time the only interpretation that Unix
kernels put on the filenames passed by applications was to treat / and
\000 specially. The interfaces provided to applications assume this is
the entire extent of the process. 

Changing this incompatibly is not an option, and adding new interfaces
to support this is meaningless unless there is a critical mass of
applications that use them. It's not reasonable to talk about ZFS
doing this, since it's just a part of the wider ecosystem.

To solve this problem at the moment takes one of two approaches.

1. A userland convention is adopted to decide on what meaning the byte
strings that the kernel provides have.

2. Some new interfaces are created to pass this information into the
kernel and get it back.

Leaving aside the merits of either approach, both of them require
significant agreement from applications to use a certain approach before
they reap any benefits. There's not much ZFS itself can do there.

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dealing with Single Bit Flips - WAS: Cause for data corruption?

2008-03-03 Thread Boyd Adamson
Nathan Kroenert [EMAIL PROTECTED] writes:
 Bob Friesenhahn wrote:
 On Tue, 4 Mar 2008, Nathan Kroenert wrote:

 It does seem that some of us are getting a little caught up in disks 
 and their magnificence in what they write to the platter and read 
 back, and overlooking the potential value of a simple (though 
 potentially computationally expensive) circus trick, which might, just 
 might, make your broken 1TB archive useful again...
 
 The circus trick can be handled via a user-contributed utility.  In 
 fact, people can compete with their various repair utilities.  There are 
 only 1048576 1-bit permuations to try, and then the various two-bit 
 permutations can be tried.

 That does not sound 'easy', and I consider that ZFS should be... :) and 
 IMO it's something that should really be built in, not attacked with an 
 addon.

 I had (as did Jeff in his initial response) considered that we only need 
 to actually try to flip 128KB worth of bits once... That many flips 
 means that we are in a way 'processing' some 128GB in the worst case when 
 re-generating checksums.  Internal to a CPU, depending on Cache 
 Aliasing, competing workloads, threadedness, etc, this could be 
 dramatically variable... something I guess the ZFS team would want to 
 keep out of the 'standard' filesystem operation... hm. :\

Maybe an option to scrub... something that says "work on bitflips for
bad blocks", or "work on bitflips for bad blocks in this file".

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs. Novell NSS

2008-02-28 Thread Boyd Adamson
Richard Elling [EMAIL PROTECTED] writes:
 Tim wrote:

 The greatest hammer in the world will be inferior to a drill when 
 driving a screw :)


 The greatest hammer in the world is a rotary hammer, and it
 works quite well for driving screws or digging through degenerate
 granite ;-)  Need a better analogy.
 Here's what I use (quite often) on the ranch:
 http://www.hitachi-koki.com/powertools/products/hammer/dh40mr/dh40mr.html

Hasn't the greatest hammer in the world lost the ability to drive
nails? 

I'll have to start belting them in with the handle of a screwdriver...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-26 Thread Boyd Adamson
Uwe Dippel [EMAIL PROTECTED] writes:
 Any completed write needs to be CDP-ed.

And that is the rub, precisely. There is nothing in the current
application-to-kernel interface that indicates that a write has completed
to a state that is meaningful to the application.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can't offline second disk in a mirror

2008-01-30 Thread Boyd Adamson
Since I spend a lot of time going from machine to machine, I thought
I'd carry a pool with me on a couple of USB keys. It all works fine
but it's slow, so I thought I'd attach a file vdev to the pool and
then offline the USB devices for speed, then undo that when I want to take
the keys with me. Unfortunately, it seems that once I've offlined one
device, the mirror is marked as degraded and then I'm not allowed to
take the other USB key offline:

# zpool create usb mirror /dev/dsk/c5t0d0p0 /dev/dsk/c6t0d0p0
# mkfile 2g /file
# zpool attach usb c6t0d0p0 /file
# zpool status
  pool: usb
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Jan 31 13:24:22 2008
config:

        NAME          STATE     READ WRITE CKSUM
        usb           ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c5t0d0p0  ONLINE       0     0     0
            c6t0d0p0  ONLINE       0     0     0
            /file     ONLINE       0     0     0

errors: No known data errors
# zpool offline usb c5t0d0p0
Bringing device c5t0d0p0 offline
# zpool status
  pool: usb
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in
        a degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed with 0 errors on Thu Jan 31 13:24:22 2008
config:

        NAME          STATE     READ WRITE CKSUM
        usb           DEGRADED     0     0     0
          mirror      DEGRADED     0     0     0
            c5t0d0p0  OFFLINE      0     0     0
            c6t0d0p0  ONLINE       0     0     0
            /file     ONLINE       0     0     0

errors: No known data errors
# zpool offline usb c6t0d0p0
cannot offline c6t0d0p0: no valid replicas
# cat /etc/release
 Solaris 10 8/07 s10x_u4wos_12b X86
Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
 Use is subject to license terms.
 Assembled 16 August 2007


I've experimented with other configurations (not just keys and files,  
but slices as well) and found the same thing - once one device in a  
mirror is offline I can't offline any others, even though there are  
other (sometimes multiple) copies left.

Of course, I can detach the device, but I was hoping to avoid a full  
resilver when I reattach.

Is this the expected behaviour? Am I missing something that would mean  
that what I'm trying to do is a bad idea?

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-26 Thread Boyd Adamson
On 27/08/2007, at 12:36 AM, Rainer J.H. Brandt wrote:
 Sorry, this is a bit off-topic, but anyway:

 Ronald Kuehn writes:
 No. You can neither access ZFS nor UFS in that way. Only one
 host can mount the file system at the same time (read/write or
 read-only doesn't matter here).

 I can see why you wouldn't recommend trying this with UFS
 (only one host knows which data has been committed to the disk),
 but is it really impossible?

 I don't see why multiple UFS mounts wouldn't work, if only one
 of them has write access.  Can you elaborate?

Even with a single writer you would need to be concerned with read  
cache invalidation on the read-only hosts and (probably harder)  
ensuring that read hosts don't rely on half-written updates (since  
UFS doesn't do atomic on-disk updates).

Even without explicit caching on the read-only hosts there is some  
implicit caching when, for example, a read host reads a directory  
entry and then uses that information to access a file. The file may  
have been unlinked in the meantime. This means that you need atomic  
reads, as well as writes.

Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Will there be a GUI for ZFS ?

2007-08-16 Thread Boyd Adamson
Craig Cory [EMAIL PROTECTED] writes:
 The GUI is an implementation of the webmin tool. You must be running the
 server - started with

Actually, I think webmin is a completely different tool.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS problems in dCache

2007-08-01 Thread Boyd Adamson
On 01/08/2007, at 7:50 PM, Joerg Schilling wrote:
 Boyd Adamson [EMAIL PROTECTED] wrote:

 Or alternatively, are you comparing ZFS(Fuse) on Linux with XFS on
 Linux? That doesn't seem to make sense since the userspace
 implementation will always suffer.

 Someone has just mentioned that all of UFS, ZFS and XFS are  
 available on
 FreeBSD. Are you using that platform? That information would be  
 useful
 too.

 FreeBSD does not use what Solaris calls UFS.

 Both Solaris and FreeBSD did start with the same filesystem code, but
 Sun started enhancing UFS in the late 1980s while BSD did not take over
 the changes. Later BSD started a fork of the filesystem code. Filesystem
 performance thus cannot be compared.

I'm aware of that, but they still call it UFS. I'm trying to  
determine what the OP is asking.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS problems in dCache

2007-07-31 Thread Boyd Adamson
Sergey Chechelnitskiy [EMAIL PROTECTED] writes:

 Hi All,

 We have a problem running a scientific application dCache on ZFS.
 dCache is a java based software that allows to store huge datasets in
 pools.  One dCache pool consists of two directories pool/data and
 pool/control. The real data goes into pool/data/ For each file in
 pool/data/ the pool/control/ directory contains two small files, one
 is 23 bytes, another one is 989 bytes.  When dcache pool starts it
 consecutively reads all the files in control/ directory.  We run a
 pool on ZFS.

 When we have approx 300,000 files in control/ the pool startup time is
 about 12-15 minutes. When we have approx 350,000 files in control/ the
 pool startup time increases to 70 minutes. If we setup a new zfs pool
 with the smallest possible blocksize and move control/ there, the
 startup time decreases to 40 minutes (in case of 350,000 files).  But
 if we run the same pool on XFS the startup time is only 15 minutes.
 Could you suggest to reconfigure ZFS to decrease the startup time.

 When we have approx 400,000 files in control/ we were not able to
 start the pool in 24 hours. UFS did not work either in this case, but
 XFS worked.

 What could be the problem ?  Thank you,

I'm not sure I understand what you're comparing. Is there an XFS
implementation for Solaris that I don't know about?

Are you comparing ZFS on Solaris vs XFS on Linux? If that's the case it
seems there is much more that's different than just the filesystem.

Or alternatively, are you comparing ZFS(Fuse) on Linux with XFS on
Linux? That doesn't seem to make sense since the userspace
implementation will always suffer.

Someone has just mentioned that all of UFS, ZFS and XFS are available on
FreeBSD. Are you using that platform? That information would be useful
too.

Boyd

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS vs VXFS

2007-07-17 Thread Boyd Adamson
Richard Elling [EMAIL PROTECTED] writes:

 Vishal Dhuru wrote:
 Hi ,
 I am looking for customer shareable presentation on the ZFS vs VxFS
 , Any pointers to URL or direct attached prezo is highly appreciated
 !

 40,000 foot level, one slide for PHBs, one slide for Dilberts :-)
  -- richard

Priceless! Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Wishlist items

2007-06-27 Thread Boyd Adamson

On 26/06/2007, at 12:08 PM, [EMAIL PROTECTED] wrote:

I've been saving up a few wishlist items for zfs. Time to share.

1. A verbose (-v) option to the zfs commandline.

In particular zfs sometimes takes a while to return from zfs  
snapshot -r tank/[EMAIL PROTECTED] in the case where there are a great  
many iscsi shared volumes underneath. A little progress feedback  
would go a long way. In general I feel the zfs tools lack  
sufficient feedback and/or logging of actions, and this'd be a  
great start.


Since IIRC snapshot -r is supposed to be atomic (one TXG) I'm not  
sure that progress reports would be meaningful.


Have you seen zpool history?


2. LUN management and/or general iscsi integration enhancement

Some of these iscsi volumes I'd like to be under the same target  
but with different LUNs. A means for mapping that would be  
excellent. As would a means to specify the IQN explicitly, and the  
set of permitted initiators.


3. zfs rollback on clones. It should be possible to rollback a  
clone to the origin snapshot, yes? Right now the tools won't allow  
it. I know I can hack in a race-sensitive snapshot of the new  
volume immediately after cloning, but I already have many hundreds  
of entities and I'm trying not to proliferate them.


Yes, since rollback only takes the snapshot as an argument there  
seems not to be a way to rollback a clone to the fork snapshot.


You could, of course just blow away the clone and make a new one from  
the same snapshot


Similarly the ability to do zfs send -i [clone origin snapshot1]  
snapshot2 in order to efficiently transmit/backup clones would be  
terrific.


It seems that a way to use [EMAIL PROTECTED] as an alias for  
[EMAIL PROTECTED] would solve both of these problems, at least at the  
user interface level.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Suggestions on 30 drive configuration?

2007-06-27 Thread Boyd Adamson

On 28/06/2007, at 12:29 AM, Victor Latushkin wrote:
It is not so easy to predict.  ZFS will coalesce writes.  A single  
transaction
group may have many different writes in it.  Also, raidz[12] is  
dynamic, and
will use what it needs, unlike separate volume managers who do not  
have any

understanding of the context of the data.


There is a good slide which illustrates how stripe width is  
selected dynamically in RAID-Z. Please see slide 13 in this slide  
deck:
http://www.snia.org/events/past/sdc2006/zfs_File_Systems-bonwick-moore.pdf

[...]
Btw, I believe there's no link to this presentation on  
opensolaris.org. unfortunately...


Indeed. Is there any reason that the presentation at
http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf
couldn't be updated to the one that Victor mentions?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Data Management API

2007-03-20 Thread Boyd Adamson

IIRC, there is at least some of the necessary code for file change
notification present in order to support NFSv4 delegations on the server
side. Last time I looked it wasn't exposed to userspace.

On 3/21/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:






On the file event monitor portion of the OP,  has Solaris added dnotify,
inotify or FAM support to the kernel or is the goal still to extend  the
ports/poll framework junk with a file events notification facility?  As
far as I know the file attributes do not handle file change monitoring.


http://mail.opensolaris.org/pipermail/perf-discuss/2006-May/000540.html



Wade Stuart
Fallon Worldwide
P: 612.758.2660
C: 612.877.0385

I guess I'm not a good cook.


[EMAIL PROTECTED] wrote on 03/20/2007 11:40:15 AM:

 Erast Benson wrote:
  On Tue, 2007-03-20 at 09:29 -0700, Erast Benson wrote:
  On Tue, 2007-03-20 at 16:22 +, Darren J Moffat wrote:
  Robert Milkowski wrote:
  Hello devid,
 
  Tuesday, March 20, 2007, 3:58:27 PM, you wrote:
 
  d Does ZFS have a Data Management API to monitor events on files
and
  d to store arbitrary attribute information with a file? Any answer
on
  d this would be really appreciated.
 
  IIRC correctly there's being developed file event mechanism - more
  general which should work with other file systems too. I have no
idea
  of its status or if someone even started coding it.
 
  Your second question - no, you can't.
  Yes you can and it has been there even before ZFS existed see
fsattr(5)
  it isn't ZFS specific but a generic attribute extension to the
  filesystems, currently supported by ufs, nfs, zfs, tmpfs.
  apparently fsattr is not part of OpenSolaris or at least I can't find
  it..
 
  oh, this is API...

 the (5) is a section 5 man page, which is the misc dumping ground for
 man pages.

 If you want a CLI interface to this see runat(1), for example to create
 an attribute called mime-type with the content 'text/plain' on file foo
 you could do this:

 $ runat foo 'echo text/plain > mime-type'

 To see the value of mime-type for file foo do this:

 $ runat foo cat mime-type
 text/plain



 --
 Darren J Moffat
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: ZFS or UFS - what to do?

2007-01-29 Thread Boyd Adamson

On 29/01/2007, at 12:50 AM, [EMAIL PROTECTED] wrote:





On 28-Jan-07, at 7:59 AM, [EMAIL PROTECTED] wrote:





On 27-Jan-07, at 10:15 PM, Anantha N. Srirama wrote:


... ZFS will not stop alpha particle induced memory corruption
after data has been received by server and verified to be correct.
Sadly I've been hit with that as well.



My brother points out that you can use a rad hardened CPU. ECC  
should

take care of the RAM. :-)

I wonder when the former will become data centre best practice?


Alpha particles which hit CPUs must have their origin inside said
CPU.

(Alpha particles do not penetrate skin, paper, let alone system cases
or CPU packaging.)


Thanks. But what about cosmic rays?



I was just in pedantic mode; "cosmic rays" is the term covering
all the different particles, including alpha, beta and gamma rays.

Alpha rays don't reach us from the cosmos; they are caught
long before they can do any harm.  Ditto beta rays.  Both have
an electrical charge that makes passing magnetic fields or passing
through materials difficult.  Both do exist in the free but are
commonly caused by slow radioactive decay of our natural environment.

Gamma rays are photons with high energy; they are not captured by
magnetic fields (such as those existing in atoms: electrons, protons).
They need to take a direct hit before they're stopped; they can only
be stopped by dense materials, such as lead.  Unfortunately, naturally
occurring lead is polluted by polonium and uranium and is an alpha/beta
source in its own right.  That's why 100 year old lead from roofs is
worth more money than new lead: its radioisotopes have been depleted.


ludicrous_topic_drift

Ok, I'll bite. It's been a long day, so that may be why I can't see  
why the radioisotopes in lead that was dug up 100 years ago would be  
any more depleted than the lead that sat in the ground for the  
intervening 100 years. Half-life is half-life, no?


Now if it were something about the modern extraction process that  
added contaminants, then I can see it.


/ludicrous_topic_drift
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Boyd Adamson

On 18/01/2007, at 9:55 PM, Jeremy Teo wrote:

On the issue of the ability to remove a device from a zpool, how
useful/pressing is this feature? Or is this more along the line of
nice to have?


Assuming we're talking about removing a top-level vdev..

I introduce new sysadmins to ZFS on a weekly basis. After 2 hours of  
introduction this is the single feature that they most often realise  
is missing.


The most common reason is migration of data to new storage  
infrastructure. The experience is often that the growth in disk size  
allows the new storage to consist of fewer disks/LUNs than the old.


I can see that it will become increasingly needed as more and more
storage goes under ZFS. Sure, we can put 256 quadrillion zettabytes  
in the pool, but if you accidentally add a disk to the wrong pool or  
with the wrong redundancy you have a long long wait for your tape  
drive :)


Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs exported a live filesystem

2006-12-11 Thread Boyd Adamson


On 12/12/2006, at 8:48 AM, Richard Elling wrote:


Jim Hranicky wrote:

By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors. Should I file this as a bug, or  
should I just not do that :-


Don't do that.  The same should happen if you umount a shared UFS
file system (or any other file system types).
 -- richard


Except that it doesn't:

# mount /dev/dsk/c1t1d0s0 /mnt
# share /mnt
# umount /mnt
umount: /mnt busy
# unshare /mnt
# umount /mnt
#
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool list No known data errors

2006-10-09 Thread Boyd Adamson

On 10/10/2006, at 10:05 AM, ttoulliu2002 wrote:

Hi:

I have zpool created
# zpool list
NAME      SIZE    USED    AVAIL   CAP   HEALTH   ALTROOT
ktspool   34,5G   33,5K   34,5G    0%   ONLINE   -

However, zpool status shows no known data error.  May I know what  
is the problem

# zpool status
  pool: ktspool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
ktspool ONLINE   0 0 0
  c0t1d0s6  ONLINE   0 0 0

errors: No known data errors


Umm... from the information you've provided, I'd say "there is no
problem".


What makes you think there is a problem?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [request-sponsor] request sponsor for #4890717

2006-10-04 Thread Boyd Adamson

On 05/10/2006, at 8:10 AM, Darren J Moffat wrote:

Jeremy Teo wrote:

Hello,
request sponsor for #4890717 want append-only files.
I have a working prototype where the administrator can put a zfs fs
into append only mode by setting the zfs appendonly property to
on using zfs(1M).
append only mode in this case means
1. Applications can only append to any existing files, but cannot
 truncate files by creating a new file with the same filename as an
existing file, or by writing in a file at an offset other than the  
end

of the file. (Applications can still create new files)
2. Applications cannot remove existing files/directories.
3. Applications cannot rename/move existing files/directories.
Thanks! I hope this is still wanted. :)


 How does this interact with the append_only ACL that ZFS supports?

How does this property work in the face of inheritance.

 How does this property work in the user delegation environment?


I was wondering the same thing. Personally, I'd rather see the  
append_only ACL work than a whole new fs property.


Last time I looked there was some problem with append_only, but I  
can't remember what it was.


Boyd

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [request-sponsor] request sponsor for #4890717

2006-10-04 Thread Boyd Adamson

On 05/10/2006, at 11:28 AM, Mark Shellenbaum wrote:

Boyd Adamson wrote:

On 05/10/2006, at 8:10 AM, Darren J Moffat wrote:

Jeremy Teo wrote:

Hello,
request sponsor for #4890717 want append-only files.
I have a working prototype where the administrator can put a zfs fs
into append only mode by setting the zfs appendonly property to
on using zfs(1M).
append only mode in this case means
1. Applications can only append to any existing files, but cannot
truncate files by creating a new file with the same filename an
existing file, or by writing in a file at an offset other than  
the end

of the file. (Applications can still create new files)
2. Applications cannot remove existing files/directories.
3. Applications cannot rename/move existing files/directories.
Thanks! I hope this is still wanted. :)


 How does this interact with the append_only ACL that ZFS supports?


How does this property work in the face of inheritance.

 How does this property work in the user delegation environment?
I was wondering the same thing. Personally, I'd rather see the  
append_only ACL work than a whole new fs property.
Last time I looked there was some problem with append_only, but I  
can't remember what it was.


The basic problem at the moment with append_only via ACLs is the  
following:


We have a problem with the NFS server, where there is no notion of  
O_APPEND.  An open operation over NFS does not convey whether the  
client wishes to append or do a general write; only at the time of  
a write operation can the server see whether the client is  
appending. Therefore, a process could receive an error, e.g.  
ERANGE, EOVERFLOW, or ENOSPC, upon issuing an attempted write()  
somewhere other than at EOF. This adds unwanted overhead in the  
write path.


I recently created a prototype that adds support for append only  
files in local ZFS file systems via ACLs.  However, NFS clients  
will receive EACCES when attempting to open append only files.


Ah, that's right... it was NFS over ZFS. Am I the only person who  
sees it as odd that an ACL feature derived from NFSv4 is, in fact,  
not implemented in NFSv4?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS API (again!), need quotactl(7I)

2006-09-13 Thread Boyd Adamson

On 13/09/2006, at 2:29 AM, Eric Schrock wrote:

On Tue, Sep 12, 2006 at 07:23:00AM -0400, Jeff A. Earickson wrote:


Modify the dovecot IMAP server so that it can get zfs quota  
information
to be able to implement the QUOTA feature of the IMAP protocol  
(RFC 2087).

 In this case pull the zfs quota numbers for the quota'd home directory/zfs
filesystem.  Just like what quotactl() would do with UFS.

I am really surprised that there is no zfslib API to query/set zfs
filesystem properties.  Doing a fork/exec just to execute a zfs get
or zfs set is expensive and inelegant.


The libzfs API will be made public at some point.  However, we need to
finish implementing the bulk of our planned features before we can  
feel

comfortable with the interfaces.  It will take a non-trivial amount of
work to clean up all the interfaces as well as document them.  It will
be done eventually, but I wouldn't expect it any time soon - there are
simply too many important things to get done first.

If you don't care about unstable interfaces, you're welcome to use  
them

as-is.  If you want a stable interface, you are correct that the only
way is through invoking 'zfs get' and 'zfs set'.


I'm sure I'm missing something, but is there some reason that statvfs()
is not good enough?
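As a quick illustration (the dataset and mountpoint names are invented):
with a quota in place, df(1) -- which is just statvfs() underneath --
already reports the quota as the filesystem size:

# zfs set quota=1g tank/home/jeff
# df -k /export/home/jeff

The "kbytes" column comes back as roughly 1048576, and used/avail are
measured against the quota, which is all the IMAP QUOTA extension needs.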


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: ZFS + rsync, backup on steroids.

2006-09-12 Thread Boyd Adamson

On 12/09/2006, at 1:28 AM, Nicolas Williams wrote:

On Mon, Sep 11, 2006 at 06:39:28AM -0700, Bui Minh Truong wrote:

Does ssh -v tell you any more ?
 I don't think the problem is ZFS send/recv. I think it takes a lot of
 time to connect over SSH.
 I tried to access SSH by typing: ssh remote_machine. It also takes
 several seconds (one or half a second) to connect. Maybe because
 of Solaris SSH.

 If you have 1000 files, it may take: 1000 x 0.5 = 500 seconds


You're not doing making an SSH connection for every file though --
you're making an SSH connection for every snapshot.

Now, if you're taking snapshots every second, and each SSH connection
takes on the order of .5 seconds, then you might have a problem.


So that I gave up that solution. I wrote 2 pieces of perl script:
client and server. Their roles are similar to ssh and sshd, then I  
can

connect faster.


But is that secure?


Do you have any suggestions?


Yes.

First, let's see if SSH connection establishment latency is a real
problem.

Second, you could adapt your Perl scripts to work over a persistent  
SSH

connection, e.g., by using SSH port forwarding:

% ssh -N -L 12345:localhost:56789 remote-host

Now you have a persistent SSH connection to remote-host that forwards
connections to localhost:12345 to port 56789 on remote-host.

So now you can use your Perl scripts more securely.


It would be *so* nice if we could get some of the OpenSSH behaviour  
in this area. Recent versions include the ability to open a  
persistent connection and then automatically re-use it for subsequent  
connections to the same host/user.
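For reference, the OpenSSH options in question are ControlMaster and
ControlPath (SunSSH doesn't have them); a client-side ~/.ssh/config
sketch, with a made-up host name:

  Host backuphost
      ControlMaster auto
      ControlPath ~/.ssh/cm-%r@%h:%p

The first ssh to backuphost sets up the master; as long as it stays
open, later connections to the same host reuse that TCP session and
skip the key exchange.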


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + rsync, backup on steroids.

2006-08-29 Thread Boyd Adamson

On 30/08/2006, at 5:17 AM, James Dickens wrote:

ZFS + rsync, backup on steroids.

I was thinking today about backing up filesystems, and came up with an
awesome idea. Use the power of rsync and ZFS together.

 Start with one or two large SATA/PATA drives. If you use two and
 don't need the space you can mirror them, otherwise just use them as in
 raid0. Enable compression unless your files are mostly precompressed, and
 use rsync as the backup tool; the first time you just copy the data over.
 After you are done, take a snapshot, export the pool, and uninstall
 the drives until next time. When next time rolls around, have rsync
 update the changed files; as it does block copies of changed data,
 only a small part of the data has changed. After that is done, take a
 snapshot.

Now thanks to ZFS you have complete access to incremental backups,
just look at the desired snapshots. For now rsync doesn't support
nfsv4 acls, but at least you have the data.

The best part of this solution is that its completely free, and uses
tools that you most likely are are already familiar with, and has
features that are only available in commercial apps.


I've been doing this for a while (although I don't remove the disks,  
just keep them on the other side of the network).


I got the idea from the tool I was using before (http://www.rsnapshot.org/),
which uses hard links to reduce the space usage at the destination.


You might like to consider the --inplace option to rsync which should  
reduce the space usage for files which change in place, since rsync  
will just do the changed blocks, rather than making a copy then  
applying the changes. The latter will result in all unchanged  blocks  
in the file being duplicated (in snapshots) on ZFS.
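Something along these lines, with placeholder paths (take the snapshot
on the backup host afterwards):

# rsync -a --inplace --delete /data/ backuphost:/backup/data/
# ssh backuphost zfs snapshot backup/data@`date +%Y%m%d`

Without --inplace, rsync writes a whole temporary copy and renames it
over the original, so every block of a changed file ends up held by the
previous snapshot; with it, only the blocks that actually changed do.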


Boyd

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] does anybody port the zfs webadmin to webmin?

2006-08-25 Thread Boyd Adamson

On 26/08/2006, at 4:32 AM, Richard Elling - PAE wrote:

Hawk Tsai wrote:

Webmin is faster and light weight compared to SMC.


 ... and most people don't know it ships with Solaris.  See webmin(1m)
 and webminsetup(1m).
 -- richard


I suspect the real question was that in the subject, but not in the  
body of the email (why do people do that?)


Anyway, I'm not aware of any port of ZFS management to webmin, but  
there *is* the ZFS web interface already.


Boyd

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import: snv_33 to S10 6/06

2006-08-23 Thread Boyd Adamson

On 24/08/2006, at 6:40 AM, Matthew Ahrens wrote:
However, once you upgrade to build 35 or later (including S10  
6/06), do

not downgrade back to build 34 or earlier, per the following message:

Summary: If you use ZFS, do not downgrade from build 35 or later to
build 34 or earlier.

This putback (into Solaris Nevada build 35) introduced a backwards-
	compatible change to the ZFS on-disk format.  Old pools will be
seamlessly accessed by the new code; you do not need to do anything
special.

However, do *not* downgrade from build 35 or later to build 34 or
	earlier.  If you do so, some of your data may be inaccessible with the
	old code, and attempts to access this data will result in an assertion
failure in zap.c.


This reminds me of something that I meant to ask when this came up  
the first time.


Isn't the whole point of the zpool upgrade process to allow users to
decide when they want to give up the option of falling back to the old
version?


In other words shouldn't any change that eliminates going back to an  
old rev require an explicit zpool upgrade?


Boyd

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Need Help: didn't create the pool as radiz but stripes

2006-08-23 Thread Boyd Adamson

On 24/08/2006, at 10:14 AM, Arlina Goce-Capiral wrote:

Hello James,

Thanks for the response.

Yes. I got the bug id# and forwarded that to the customer. But the
customer said that he can create a file as large as the stripe of the 3
disks. And if he pulls a disk, the whole zpool fails, so there's no
degraded pool, it just fails.

Any idea on this?


The output of your zpool command certainly shows a raidz pool. It may  
be that the failing pool and the size issues are unrelated.


How are they creating a huge file? It's not sparse is it? Compression  
involved?


As to the failure mode, you may like to include any relevant
/var/adm/messages lines and errors from fmdump -e.


Boyd

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Querying ZFS version?

2006-08-10 Thread Boyd Adamson

On 08/08/2006, at 10:44 PM, Luke Scharf wrote:
The release I'm playing with (Alpha 5) does, indeed, have ZFS.   
However, I can't determine what version of ZFS is included.   
Dselect gives the following information, which doesn't ring any  
bells for me:

*** Req base sunwzfsr 5.11.40-1   5.11.40-1   ZFS (Root)


I'm no nexenta expert, just an intrigued solaris and debian user, but  
I'd interpret that version number as being from build 40 of nevada,  
nexenta package rev 1.


Boyd
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS Web administration interface

2006-05-21 Thread Boyd Adamson

On 22/05/2006, at 6:41 AM, Ron Halstead wrote:
To expand on the original question: in nv 38 and 39, I start the  
Java Web Console https://localhost:6789 and log in as root. Instead  
of the available application including ZFS admin, I get this page:


You Do Not Have Access to Any Application
No application is registered with this Sun Java(TM) Web Console, or  
you have no rights to use any applications that are registered. See  
your system administrator for assistance.


I've tried smreg and wcadmin but do not know the /location/name of  
the ZFS app to register. Any help is appreciated, google and  
sunsolve come up empty. On the same note, are there any other apps  
that can be registed in the Sun Java Web Console?


I've noticed the same problem on b37 and followed the same path. I  
also have no answer :(


Boyd

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and databases

2006-05-10 Thread Boyd Adamson
One question that has come up a number of times when I've been
speaking with people (read: evangelizing :) ) about ZFS is about
database storage. Conventionally, storage for databases has separated
redo logs from table space on a spindle basis.


I'm not a database expert but I believe the reasons boil down to a  
combination of:

- Separation for redundancy

- Separation for reduction of bottlenecks (most write ops touch both  
the logs and the table)


- Separation of usage patterns (logs are mostly sequential writes,  
tables are random).


The question then comes up about whether in a ZFS world this  
separation is still needed. It seems to me that each of the above  
reasons is to some extent ameliorated by ZFS:
- Redundancy is handled at the pool level, typically across all disks
in the pool.


- Dynamic striping and copy-on-write mean that all write ops can be  
striped across vdevs and the log writes can go right next to the  
table writes


- Copy-on-write also turns almost all writes into sequential writes  
anyway.


So it seems that the old reasoning may no longer apply. Is my  
thinking correct here? Have I missed something? Do we have any  
information to support either the use of a single pool or of separate  
pools for database usage?
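If separation is still wanted, it can at least move from spindles to
datasets: one pool, with the table space on a filesystem whose
recordsize matches the database block size (a sketch; the 8K figure is
an assumption that happens to match a common Oracle db_block_size):

# zfs create tank/oradata
# zfs set recordsize=8k tank/oradata
# zfs create tank/oralog

The logs keep the default 128K recordsize, which suits their large
sequential writes, while the 8K records avoid read-modify-write
inflation on random table I/O.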


Boyd
Melbourne, Australia

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss