Re: [zfs-discuss] GUI to set ACLs

2012-01-31 Thread Chris Ridd

On 31 Jan 2012, at 12:20, Achim Wolpers wrote:

> Hi all!
> 
> I'm searching for a GUI tool to set ZFS (NFSv4) ACLs. I found some nautilus 
> add-ons on the web but they don't seem to work with the nautilus shipped with OI. 
> Any solution?

Does Windows count? Windows can certainly edit ZFS ACLs when they're exposed to 
it over CIFS.

;-)

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Apple's ZFS-alike - Re: Does raidzN actually protect against bitrot? If yes - how?

2012-01-16 Thread Chris Ridd

On 16 Jan 2012, at 16:56, Rich Teer wrote:

> On Mon, 16 Jan 2012, Freddie Cash wrote:
> 
>>> There would likely be a market if someone was to sell pre-packaged zfs for
>>> Apple OS-X at a much higher price than the operating system itself.
> 
> 10's Complement (?) are planning such a thing, although I have no idea
> on their pricing.  The software is still in development.

They have announced pricing for 2 of their 4 ZFS products: see 
.

Chris


Re: [zfs-discuss] SATA hardware advice

2011-12-17 Thread Chris Ridd

On 17 Dec 2011, at 19:35, Edmund White wrote:

> On 12/17/11 8:27 PM, "Chris Ridd"  wrote:
> 
> 
>> 
>> Can you explain how you got the SSDs into the HP sleds? Did you buy blank
>> sleds from somewhere, or cannibalise some "cheap" HP drives?
>> 
>> I assumed some part of the HP hardware would freak out if it ever saw a
>> drive with non-HP firmware - is that a problem?
>> 
>> We've got an HP D2700 JBOD attached to an LSI SAS 9208 controller in a
>> DL360G7, and I'm keen on getting a ZIL into the mix somewhere - either
>> into the JBOD or the spare bays in the DL360.
>> 
>> Chris
> 
> Chris,
> 
> It's possible to obtain the HP drive carriers in bulk on eBay. I haven't

So they are - googling suggests the part number is "hp 378343-002" and finds a 
good number for sale. Good tip, thanks!

> had many issues with HP backplanes or RAID controllers complaining about
> non-HP disks. There was one instance of a particular Intel SSD that didn't
> provide proper temperature data to the HP drive backplane, but that's the
> worst issue I've ever encountered. Later revisions of the same SSD worked.
> 
> I also have DL380 G7 with D2700 JBOD setups running. In one, I'm using a
> Pliant/Sandisk SSD for ZIL. The other has a DDRdrive installed in the
> storage head.

A DDRdrive is beyond our budget :-(

My plan B was to put an OCZ revodrive in the spare PCIe slot. But an HP drive 
carrier + cheap small SSD would be perfect.

Chris


Re: [zfs-discuss] SATA hardware advice

2011-12-17 Thread Chris Ridd

On 16 Dec 2011, at 23:48, Edmund White wrote:

> If you're building from scratch, please choose nearline/midline SAS disks
> instead of SATA if you're looking for capacity. For detailed reasoning,
> see: http://serverfault.com/a/331504/13325
> 
> For the server, I've had great success with HP ProLiant systems, focusing
> on the DL380 G6/G7 models. If you can budget 4U of rackspace, the DL370 G6
> is a good option that can accommodate 14LFF or 24 SFF disks (or a
> combination). I've built onto DL180 G6 systems as well. If you do the
> DL180 G6, you'll need a 12-bay LFF model. I'd recommend a Lights-Out 100
> license key to gain remote console. The backplane has a built-in SAS
> expander, so you'll only have a single 4-lane SAS cable to the controller.
> I typically use LSI controllers. In the DL180, I would spec a LSI 9211-4i
> SAS HBA. You have room to mount a ZIL or L2Arc internally and leverage the
> motherboard SATA ports. Otherwise, consider a LSI 9211-8i HBA and use the
> second 4-lane SAS connector for those.
> 
> See: http://www.flickr.com/photos/ewwhite/sets/72157625918734321/ for an
> example of the DL380 G7 build.

Can you explain how you got the SSDs into the HP sleds? Did you buy blank sleds 
from somewhere, or cannibalise some "cheap" HP drives?

I assumed some part of the HP hardware would freak out if it ever saw a drive 
with non-HP firmware - is that a problem?

We've got an HP D2700 JBOD attached to an LSI SAS 9208 controller in a DL360G7, 
and I'm keen on getting a ZIL into the mix somewhere - either into the JBOD or 
the spare bays in the DL360.

Chris


Re: [zfs-discuss] is there an inmemory map of all files in a zfs filesystem, or how can I get it ?

2011-09-04 Thread Chris Ridd

On 31 Aug 2011, at 09:36, spambuf...@orcon.net.nz wrote:

> I really want something like the MFT in NTFS, which has an easily accessible 
> list of everything on the filesystem.
>  
> I want easy access to the realtime events happening in the filesystem.

ZFS supports virus scanners via the "ICAP" protocol. If you wrote your tool 
pretending to be a virus scanner, I think it would be notified of all the 
necessary changes. This may not be the completely correct solution, but it 
might work.

Some practical advice on the ICAP side is at 


Chris


Re: [zfs-discuss] [OpenIndiana-discuss] Question about ZFS/CIFS

2011-08-13 Thread Chris Ridd

On 12 Aug 2011, at 21:47, Roy Sigurd Karlsbakk wrote:

> Hi all
> 
> We've migrated from an old samba installation to a new box with openindiana, 
> and it works well, but... It seems Windows now honours the executable bit, so 
> that .exe files for installing packages, are no longer directly executable. 
> While it is positive that windows honours this bit, it breaks things when we 
> have a software repository on this server.
> 
> Does anyone know a way to counter this without chmod -R o+x?

Does setting the aclinherit=passthrough-x zfs property on the filesystem help?

I'm not sure, but you may still need to do a chmod -R on each filesystem to set 
the ACLs on each existing directory.
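Spelled out as commands, the two steps might look like this (the dataset name tank/repo and the exact ACL spec are illustrative assumptions, not from the thread; the chmod uses Solaris NFSv4 ACL syntax):

```shell
# Hypothetical dataset name; substitute your own.
FS=tank/repo

# New files created over CIFS/NFS will inherit execute through the ACL:
zfs set aclinherit=passthrough-x "$FS"

# aclinherit only affects files created afterwards, so fix existing
# .exe files once, granting read+execute to everyone@:
find "/$FS" -name '*.exe' \
  -exec chmod A+everyone@:read_data/execute:allow {} +
```

This is narrower than a blanket chmod -R o+x, since only the installer files gain execute.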

Chris


Re: [zfs-discuss] Replacing failed drive

2011-07-22 Thread Chris Ridd

On 22 Jul 2011, at 21:29, Chris Dunbar - Earthside, LLC wrote:

> It's resilvering now - thanks for the help!

I think the command you were trying to recall was prtvtoc.

Chris



Re: [zfs-discuss] 700GB gone? "zfs list" and "df" differs!

2011-07-04 Thread Chris Ridd

On 4 Jul 2011, at 12:58, Orvar Korvar wrote:

> PS. I do not have any snapshots:
> 
> root@frasse:~# zfs list
> NAME                      USED  AVAIL  REFER  MOUNTPOINT
> TempStorage               916G  45,1G  37,3G  /mnt/TempStorage
> TempStorage/Backup        799G  45,1G   177G  /mnt/TempStorage/Backup
> TempStorage/EmmasFolder  78,6G  45,1G  78,6G  /mnt/TempStorage/EmmasFolder
> TempStorage/Stuff        1,08G  45,1G  1,08G  /mnt/TempStorage/Stuff

In recent Solaris variants "zfs list" doesn't show snapshots by default; you 
need to add "-t snapshot" (or "-t all") to see them.

Chris


Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Chris Ridd

On 19 May 2011, at 14:44, Evaldas Auryla wrote:

> Hi Chris, there is no sestopo on this box (Solaris Express 11 151a), fmtopo 
> -dV works nice, although it's a bit "overkill" with manually parsing the 
> output :)

You need to install pkg:/system/io/tests.

Chris


Re: [zfs-discuss] Mapping sas address to physical disk in enclosure

2011-05-19 Thread Chris Ridd

On 19 May 2011, at 08:55, Evaldas Auryla wrote:

> Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single 
> path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with 
> sas-addresses such as this in "zpool status" output:
> 
>NAME   STATE READ WRITE CKSUM
>cuve   ONLINE   0 0 0
>  mirror-0 ONLINE   0 0 0
>c9t5000C50025D5AF66d0  ONLINE   0 0 0
>c9t5000C50025E5A85Ad0  ONLINE   0 0 0
>  mirror-1 ONLINE   0 0 0
>c9t5000C50025D591BEd0  ONLINE   0 0 0
>c9t5000C50025E1BD56d0  ONLINE   0 0 0
>   ...
> 
> Is there an easy way to map these sas-addresses to the physical disks in 
> enclosure ?

Does /usr/lib/scsi/sestopo or /usr/lib/fm/fmd/fmtopo help? I can't recall how 
you work out the arg to pass to sestopo :-(

Chris


Re: [zfs-discuss] GPU acceleration of ZFS

2011-05-10 Thread Chris Ridd

On 10 May 2011, at 16:44, Hung-Sheng Tsao (LaoTsao) Ph. D. wrote:

> 
> IMHO, zfs need to run in all kind of HW
> T-series CMT server that can help sha calculation since T1 day, did not see 
> any work in ZFS to take advantage it

That support would be in the crypto framework though, not ZFS per se. So I 
think the OP might consider how best to add GPU support to the crypto framework.

Chris


Re: [zfs-discuss] Drive id confusion

2011-02-06 Thread Chris Ridd

On 6 Feb 2011, at 03:14, David Dyer-Bennet wrote:

> I'm thinking either Solaris' appalling mess of device files is somehow scrod, 
> or else ZFS is confused in its reporting (perhaps because of cache file 
> contents?).  Is there anything I can do about either of these?  Does devfsadm 
> really create the appropriate /dev/dsk and etc. files based on what's present?

Is reviewing the source code to devfsadm helpful? I bet it hasn't changed much 
from:



Chris


Re: [zfs-discuss] zfs + charset

2011-01-15 Thread Chris Ridd

On 15 Jan 2011, at 13:57, Achim Wolpers wrote:

> Am 15.01.11 14:52, schrieb Chris Ridd:
>> What are the normalization properties of these filesystems? The zfs man page 
>> says they're used when comparing filenames:
> The normalization properties are set to none. Is this the key to my
> solution?

Judging from some discussion of normalization on this list in 2009 
<http://opensolaris.org/jive/thread.jspa?threadID=110207> I would say so.

But I am not really certain of the practical implications of the other 
settings. It feels like anything apart from "none" would be OK, but maybe one 
is better than the others?

Cheers,

Chris


Re: [zfs-discuss] zfs + charset

2011-01-15 Thread Chris Ridd

On 15 Jan 2011, at 13:44, Achim Wolpers wrote:

> Hi!
> 
> I have a problem with the charset in the following scenario:
> 
> - OSOL Server with zfs pool und NFS/CIFS shares enabled
> - OSX Client with CIFS mounts
> - OSX Client with NFSv3 mounts
> 
> If one of the clients saves a file with a special character in the
> filename like 'äüöß', the other client can not access that file. The
> characters are displayed correctly on both clients as well as on the
> server. What is the reason for this incompatibility between CIFS- and
> NFS-created filenames and how can I get around it?

What are the normalization properties of these filesystems? The zfs man page 
says they're used when comparing filenames:

---
 normalization = none | formC | formD | formKC | formKD

 Indicates whether the file system should perform a  uni-
 code normalization of file names whenever two file names
 are compared, and which normalization  algorithm  should
 be  used. File names are always stored unmodified, names
 are normalized as part of  any  comparison  process.  If
 this  property  is set to a legal value other than none,
 and the utf8only  property  was  left  unspecified,  the
 utf8only  property  is  automatically  set  to  on.  The
 default value of the  normalization  property  is  none.
 This property cannot be changed after the file system is
 created.
---
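A quick way to see why normalization matters: the "same" visible filename can be two different byte sequences, and a filesystem that doesn't normalize (e.g. normalization=none, or plain ext4 where this demo runs) treats them as two distinct files:

```shell
# Create "the same" name twice: once precomposed (NFC), once decomposed (NFD).
d=$(mktemp -d)
printf 'x' > "$d/$(printf '\303\244')"     # NFC: a-umlaut as U+00E4
printf 'x' > "$d/$(printf 'a\314\210')"    # NFD: 'a' + combining diaeresis
ls "$d" | wc -l    # without normalization these are two distinct entries
```

A Mac CIFS client typically sends NFD names while other clients send NFC, which matches the symptom described above.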

Cheers,

Chris


Re: [zfs-discuss] What are .$EXTEND directories?

2011-01-03 Thread Chris Ridd

On 3 Jan 2011, at 17:08, Volker A. Brandt wrote:

>> On our build 147 server (pool version 22) I've noticed that some directories 
>> called ".$EXTEND" (no quotes) are appearing underneath some shared NFS 
>> filesystems, containing an empty file called "$QUOTA". We aren't using 
>> quotas.
>> 
>> What are these ? Googling for the names doesn't really work too well :-(
>> 
>> I don't think they're doing any harm, but I'm curious. Someone's bound to
>> notice and ask me as well :-) 
> 
> Well, googling for '.$EXTEND' and '$QUOTA' does give some results,
> especially when combined with 'NTFS'. :-)

Aha! Foolishly I'd used zfs in my search string :-)

> Check out the table on "Metafiles" here:
> 
>  http://en.wikipedia.org/wiki/NTFS

OK, so they're probably an artefact of having set sharesmb=on, even though I've 
not joined the box to a domain yet.

Cheers,

Chris


[zfs-discuss] What are .$EXTEND directories?

2011-01-03 Thread Chris Ridd
On our build 147 server (pool version 22) I've noticed that some directories 
called ".$EXTEND" (no quotes) are appearing underneath some shared NFS 
filesystems, containing an empty file called "$QUOTA". We aren't using quotas.

What are these ? Googling for the names doesn't really work too well :-(

I don't think they're doing any harm, but I'm curious. Someone's bound to 
notice and ask me as well :-)

Cheers,

Chris


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-04-25 Thread Chris Ridd

On 26 Apr 2010, at 06:02, Dave Pooser wrote:

> On 4/25/10 6:07 PM, "Rich Teer"  wrote:
> 
>> Sounds fair enough!  Let's move this to email; meanwhile, what's the
>> packet sniffing incantation I need to use?  On Solaris I'd use snoop,
>> but I don't think Mac OS comes with that!
> 
> Use Wireshark (formerly Ethereal); works great for me. It does require X11
> on your machine.

Macs come with the command-line tcpdump tool. Wireshark (recommended anyway!) 
can read files saved by tcpdump and snoop.
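For example, a capture session might look like this (the interface name en0 and the NFS port 2049 are assumptions; adjust for your setup):

```shell
# Capture full packets (-s 0) to a file Wireshark can open later:
sudo tcpdump -i en0 -s 0 -w /tmp/nfs.pcap port 2049
# ...reproduce the problem, then stop the capture with Ctrl-C.

# Quick text summary without Wireshark:
tcpdump -n -r /tmp/nfs.pcap | head
```

Open /tmp/nfs.pcap in Wireshark for full protocol decoding.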

Cheers,

Chris


Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Chris Ridd

On 31 Mar 2010, at 17:50, Bob Friesenhahn wrote:

> On Wed, 31 Mar 2010, Chris Ridd wrote:
> 
>>> Yesterday I noticed that the Sun Studio 12 compiler (used to build 
>>> OpenSolaris) now costs a minimum of $1,015/year.  The "Premium" service 
>>> plan costs $200 more.
>> 
>> The download still seems to be a "free, full-license copy" for SDN members; 
>> the $1015 you quote is for the standard Sun Software service plan. Is a 
>> service plan now *required*, a la Solaris 10?
> 
> There is no telling.  Everything is subject to evaluation by Oracle and it is 
> not clear which parts of the web site are confirmed and which parts are still 
> subject to change.  In the past it was free to join SDN but if one was to put 
> an 'M' in front of that SDN, then there would be a substantial yearly charge 
> for membership (up to $10,939 USD per year according to Wikipedia).  This is 
> a world that Oracle has been commonly exposed to in the past.  Not everyone 
> who uses a compiler qualifies as a "developer".

Indeed, but Microsoft still give out free "express" versions of their tools. If 
memory serves, you're not allowed to distribute binaries built with them but 
otherwise they're not broken in any significant way.

Maybe this will also be the difference between Sun Studio and Sun Studio 
Express.

Perhaps we should take this to tools-compilers.

Cheers,

Chris




Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Chris Ridd
On 31 Mar 2010, at 17:23, Bob Friesenhahn wrote:

> Yesterday I noticed that the Sun Studio 12 compiler (used to build 
> OpenSolaris) now costs a minimum of $1,015/year.  The "Premium" service plan 
> costs $200 more.

The download still seems to be a "free, full-license copy" for SDN members; the 
$1015 you quote is for the standard Sun Software service plan. Is a service 
plan now *required*, a la Solaris 10?

Cheers,

Chris


Re: [zfs-discuss] " . . formatted using older on-disk format . ."

2010-03-10 Thread Chris Ridd

On 11 Mar 2010, at 04:17, Erik Trimble wrote:

> Matt Cowger wrote:
>> On Mar 10, 2010, at 6:30 PM, Ian Collins wrote:
>> 
>>  
>>> Yes, noting the warning.  
>> 
>> Is it safe to execute on a live, active pool?
>> 
>> --m
>>  
> Yes.  No reboot necessary.
> 
> The Warning only applies to this circumstance:  if you've upgraded from an 
> older build, then upgrading the zpool /may/ mean that you will NOT be able to 
> reboot to the OLDER build and still read the now-upgraded zpool.
> 
> 
> So, say you're currently on 111b (fresh 2009.06 build).   It has zpool 
> version X (I'm too lazy to look up the actual version numbers now).  You now 
> decide to live on the bleeding edge, and upgrade to build 133.  That has 
> zpool version X+N.   Without doing anything, all pool are still at version X, 
> and everything can be read by either BootEnvironment (BE).  However, you want 
> the neat features in zpool X+N.  You boot to the 133 BE, and run 'zpool 
> upgrade' on all pools.  You now get all those fancy features, instantly.  
> Naturally, these new features don't change any data that is already on the 
> disk (it doesn't somehow magically dedup previously written data).  HOWEVER, 
> you are now in the situation where you CAN'T boot to the 111b BE, as that 
> version doesn't understand the new pool format.
> 
> Basically, it boils down to this:  upgrade your pools ONLY when you are sure 
> the new BE is stable and working for you, and you have no desire to revert to 
> the old pool.   I run a 'zpool upgrade' right after I do a 'beadm destroy 
> '

I'd also add that for disaster recovery purposes you should also have a live CD 
handy which supports your new zpool version.
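Spelled out as commands, the flow described above might look like this (the BE name is a made-up example; note that the pool upgrade is one-way):

```shell
beadm list                        # confirm which BEs exist and which is active
beadm destroy opensolaris-111b    # drop the old BE once the new one is proven
zpool upgrade                     # list pools still on an older version
zpool upgrade -a                  # upgrade them all -- irreversible!
```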

Cheers,

Chris


Re: [zfs-discuss] future of OpenSolaris

2010-02-25 Thread Chris Ridd

On 25 Feb 2010, at 14:28, Sean Sprague wrote:

> Bob,
> 
>> On Tue, 23 Feb 2010, Joerg Schilling wrote:
>>> 
>>> and what uname -s reports.
>> 
>> It will surely report "OrkOS".
> 
> For OpenSolaris, "OracOS" - surely there must be Blakes 7 fans in Oracle 
> Corp.?

You can see all the working bits courtesy of dtrace...

>> I am glad to be able to contribute positively and constructively to this 
>> discussion.
> 
> Metoo ;-) ... Sean.

I'll get my coat.

Cheers,

Chris


Re: [zfs-discuss] snv_133 - high cpu

2010-02-23 Thread Chris Ridd

On 23 Feb 2010, at 19:53, Bruno Sousa wrote:

> The system becames really slow during the data copy using network, but i copy 
> data between 2 pools of the box i don't notice that issue, so probably i may 
> be hitting some sort of interrupt conflit in the network cards...This system 
> is configured with alot of interfaces, being :
> 
> 4 internal broadcom gigabit
> 1 PCIe 4x, Intel Dual Pro gigabit
> 1 PCIe 4x, Intel 10gbE card
> 2 PCIe 8x Sun non-raid HBA
> 
> 
> With all of this, is there any way to check if there is indeed an interrupt 
> conflit or some other type of conflit that leads this high load? I also 
> noticed some messages about acpi..can this acpi also affect the performance 
> of the system?

To see what interrupts are being shared:

# echo "::interrupts -d" | mdb -k

Running intrstat might also be interesting.

Cheers,

Chris


Re: [zfs-discuss] etc on separate pool

2010-01-22 Thread Chris Ridd

On 22 Jan 2010, at 08:55, Alexander wrote:

> Is it possible to have /etc on separate zfs pool in OpenSolaris? 
> The purpose is to have rw non-persistent main pool and rw persistent /etc...
> I've tried to make legacy etcpool/etc file system and mount it in 
> /etc/vfstab... 
> Is it possible to extend boot-archive in such a way that it include most of 
> the files necessary for mounting /etc from separate pool? Have someone tried 
> such configurations?

What does the live CD do?

Cheers,

Chris


Re: [zfs-discuss] link in zpool upgrade -v broken

2010-01-07 Thread Chris Ridd

On 7 Jan 2010, at 23:52, Ian Collins wrote:

> http://www.opensolaris.org/os/community/zfs/version/
> 
> No longer exists.  Is there a bug for this yet?

I don't think so. But 
 is where 
they've moved to.

Cheers,

Chris


Re: [zfs-discuss] solaris 10U7

2009-12-13 Thread Chris Ridd

On 13 Dec 2009, at 09:05, dick hoogendijk wrote:

> I just noticed that my zpool is still running v10 and my zfs filesystems
> are on v3. This is on solaris 10U3. Before upgrading the zpool and ZFS
> versions I'd like to know the supported versions by solaris 10 update.7
> I'd rather not make my zpools unaccessable ;)

zpool upgrade -v should point you at:



where N is the pool version. However those links don't work any more since the 
opensolaris.org mass-reorg. This looks equivalent:



Cheers,

Chris


Re: [zfs-discuss] Zpool problems

2009-12-07 Thread Chris Ridd

On 6 Dec 2009, at 16:14, Michael Armstrong wrote:

> Hi, I'm using zfs version 6 on mac os x 10.5 using the old macosforge pkg. 
> When I'm writing files to the fs they are appearing as 1kb files and if I do 
> zpool status or scrub or anything the command is just hanging. However I can 
> still read the zpool ok, just write is having problems and any diagnostics. 
> Any ideas how I can get more information or what my symptoms are resemblent 
> of? I'm considering using the freebsd ppc port (as i have a powermac) for 
> better zfs support. Any thoughts would be great on why I'm having these 
> problems.

You may be better off talking to the folks at 
 who are actively using and working 
on the Mac port of ZFS.

Cheers,

Chris


Re: [zfs-discuss] Can't rm file when "No space left on device"...

2009-10-01 Thread Chris Ridd


On 1 Oct 2009, at 19:34, Andrew Gabriel wrote:

> Pick a file which isn't in a snapshot (either because it's been
> created since the most recent snapshot, or because it's been
> rewritten since the most recent snapshot so it's no longer sharing
> blocks with the snapshot version).


Out of curiosity, is there an easy way to find such a file?
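One rough approach, sketched below with plain files: use a reference file whose mtime matches the latest snapshot's creation time and ask find for anything strictly newer. I'm not certain ZFS exposes such a timestamp as a file, so the 'marker' file here is an assumption you'd have to supply yourself:

```shell
# 'marker' stands in for "the moment of the latest snapshot".
d=$(mktemp -d)
touch "$d/old"        # pretend this file is captured by the snapshot
sleep 1
touch "$d/marker"     # reference timestamp: snapshot creation time
sleep 1
touch "$d/new"        # created after the snapshot
find "$d" -type f -newer "$d/marker"    # lists only the 'new' file
```

This won't spot files rewritten in place without an mtime update, but those are rare.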

Cheers,

Chris


Re: [zfs-discuss] Checksum property change does not change pre-existing data - right?

2009-09-24 Thread Chris Ridd


On 24 Sep 2009, at 03:09, Mark J Musante wrote:



> On 23 Sep, 2009, at 21.54, Ray Clark wrote:
> 
>> My understanding is that if I "zfs set checksum=" to change the
>> algorithm that this will change the checksum algorithm for all FUTURE
>> data blocks written, but does not in any way change the checksum for
>> previously written data blocks.
>> 
>> I need to corroborate this understanding.  Could someone please point
>> me to a document that states this?  I have searched and searched and
>> cannot find this.
> 
> I haven't googled for a specific doc, but I can at least tell you that
> your understanding is correct.  If you change the checksum algorithm,
> that checksum is applied only to future writes.  Other properties work
> similarly, such as compression or copies.  I see that the zfs manpage
> (viewable here: http://docs.sun.com/app/docs/doc/816-5166/zfs-1m?a=view)
> only indicates that this is true for the copies property.  I guess
> we'll have to update that doc.


It mentions something similar for the recordsize property too:

---
 Changing the file system's recordsize affects only files
 created afterward; existing files are unaffected.
---

Cheers,

Chris


Re: [zfs-discuss] Real help

2009-09-21 Thread Chris Ridd


On 20 Sep 2009, at 19:46, dick hoogendijk wrote:



> On Sun, 2009-09-20 at 11:41 -0700, vattini giacomo wrote:
>> Hi there, i'm in a bad situation: under Ubuntu i was trying to import
>> a solaris zpool that is in /dev/sda1, while Ubuntu is in /dev/sda5.
>> Not being able to mount the solaris pool i decided to destroy the
>> pool, created like that:
>> 
>> sudo zfs-fuse
>> sudo zpool create hazz0 /dev/sda1
>> sudo zpool destroy hazz0
>> sudo reboot
>> 
>> Now opensolaris is not booting, everything is vanished.
>> Is there anyhow to restore everything?
> 
> Any idea about the meaning of the verb DESTROY ?

Does zpool destroy prompt "are you sure" in any way? Some admin tools
do (beadm destroy for example) but there's not a lot of consistency.


Cheers,

Chris


Re: [zfs-discuss] Space has not been freed up!

2009-09-10 Thread Chris Ridd


On 10 Sep 2009, at 09:38, Fajar A. Nugraha wrote:


> On Thu, Sep 10, 2009 at 3:27 PM, Gino wrote:
>> # cd /dr/netapp11bkpVOL34
>> # rm -r *
>> # ls -la
>> #
>> 
>> Now there are no files in /dr/netapp11bkpVOL34, but
>> 
>> # zfs list | egrep netapp11bkpVOL34
>> dr/netapp11bkpVOL34   1.34T   158G  1.34T  /dr/netapp11bkpVOL34
>> 
>> Space has not been freed up!
> 
> Are there hidden files in that directory? Try "ls -la"
> What happens when you export-import that pool?

Also are there snapshots? "zfs list -t all"

Cheers,

Chris


Re: [zfs-discuss] Failure of Quicktime *.mov files after move to zfs disk

2009-08-22 Thread Chris Ridd


On 21 Aug 2009, at 22:35, Scott Laird wrote:


> Checksum all of the files using something like md5sum and see if
> they're actually identical.  Then test each step of the copy and see
> which one is corrupting your files.

It might be worth checking if they've got funny Unicode chars in the
names. What normalization's happening on both servers, what version of
NFS is being used? How big are the files?
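Concretely, the checksum comparison could look like this (simulated here with two local temp files; in practice you'd run the checksum on each server - md5sum is the GNU tool, Macs ship `md5 -q` instead):

```shell
# Two stand-ins for "the same file" on the source and destination server:
src=$(mktemp)
dst=$(mktemp)
printf 'quicktime payload' > "$src"
printf 'quicktime payload' > "$dst"
a=$(md5sum "$src" | awk '{print $1}')
b=$(md5sum "$dst" | awk '{print $1}')
# Matching digests mean the bytes survived the copy intact:
if [ "$a" = "$b" ]; then echo "identical"; else echo "corrupted"; fi
```

If the digests match everywhere, the failure is more likely metadata (resource forks, filenames) than file content.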


Cheers,

Chris


Re: [zfs-discuss] filesystem notification / query

2009-08-21 Thread Chris Ridd


On 20 Aug 2009, at 21:22, Felix Nielsen wrote:


> Hi
> 
> Is it possible to get filesystem notification like when files are
> created, modified, deleted? or can the "activity" be exported?

If you have a vscan service then that will get notified when files are
accessed or modified; would that be sufficient for your purposes?




Cheers,

Chris


Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Chris Ridd


On 27 Jul 2009, at 18:49, Thomas Burgess wrote:



> i was under the impression it was virtualbox and its default setting
> that ignored the command, not the hard drive

Do other virtualization products (eg VMware, Parallels, Virtual PC)
have the same default behaviour as VirtualBox?

I've a suspicion they all behave similarly dangerously, but actual
data would be useful.


Cheers,

Chris



Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Chris Ridd


On 11 Jun 2009, at 10:52, Paul van der Zwan wrote:



> On 11 jun 2009, at 11:48, Sami Ketola wrote:
>> On 11 Jun 2009, at 12:44, Paul van der Zwan wrote:
>>> Strange thing I noticed in the keynote is that they claim the disk
>>> usage of Snow Leopard is 6 GB less than Leopard, mostly because of
>>> compression. Either they have implemented compressed binaries or
>>> they use filesystem compression. Neither feature is present in
>>> Leopard AFAIK. Filesystem compression is a ZFS feature, so 
>> 
>> I think this is because they are removing PowerPC support from the
>> binaries.
> 
> I really doubt the PPC specific code is 6GB. A few 100 MB perhaps.
> Most of a fat binary or an .app folder is architecture independent
> and will remain. And Phil Schiller specifically mentioned it was
> because of compression.

They might just have changed the localized resources format from a
directory (English.lproj) containing loads of files into a zip file.


There's probably a better place to discuss this.

Cheers,

Chris


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Chris Ridd


On 28 Feb 2009, at 07:26, C. Bergström wrote:


> Blake wrote:
>> Gnome GUI for desktop ZFS administration
> 
> With the libzfs java bindings I am plotting a web based interface..
> I'm not sure if that would meet this gnome requirement though..
> Knowing specifically what you'd want to do in that interface would
> be good.. I planned to compare it to fishworks and the nexenta
> appliance as a base..

Recent builds of OpenSolaris come with SWT from the Eclipse project,
which makes it possible for Java apps to use real GNOME/GTK native
UIs. So your libzfs bindings may well be useful with that.


Cheers,

Chris


Re: [zfs-discuss] Does your device honor write barriers?

2009-02-10 Thread Chris Ridd

On 10 Feb 2009, at 18:35, Bryant Eadon wrote:

> Given that ZFS is planned to be used in Snow Leopard, is it worth
> setting something up for consumer grade appliance vendors to
> 'certify' against?  ("Ok, you play nice with ZFS by doing the right
> things", etc.)  Maybe you can give them a 'Gold Star' == 'Supports
> ZFS'.  That'll give them a selling point to consumers and Sun some
> free marketing?

Curiously though, Apple's only mentioning ZFS in the context of Snow
Leopard *Server*, so that's probably enterprise-type disks again.


Cheers,

Chris


Re: [zfs-discuss] zfs send -R slow

2009-01-28 Thread Chris Ridd

On 28 Jan 2009, at 19:40, BJ Quinn wrote:

>>> What about when I pop in the drive to be resilvered, but right  
>>> before I add it back to the mirror, will Solaris get upset that I  
>>> have two drives both with the same pool name?
>> No, you have to do a manual import.
>
> What you mean is that if Solaris/ZFS detects a drive with an  
> identical pool name to a currently mounted pool, that it will safely  
> not disrupt the mounted pool and simply not mount the same-named  
> pool on the newly inserted drive?
>
> Can I mount a pool "as" another pool name?

Yes: "zpool import oldname newname"
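For completeness, a transcript-style sketch (pool names here are made up):

```shell
# With the drive attached but the pool not yet imported:
zpool import                  # lists importable pools and their numeric IDs
zpool import oldname newname  # import the pool "oldname" under the name "newname"
zpool status newname          # confirm it is now running as "newname"
```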

Cheers,

Chris


Re: [zfs-discuss] freeze/ thaw zfs file system

2009-01-27 Thread Chris Ridd

On 27 Jan 2009, at 17:59, Richard Elling wrote:

> ajit jain wrote:
>> Hi Andrew,
>>
>> I am writing a filtering device which tracks the write to the
>> file-system. I am doing it for ufs, vxfs and for zfs. Sometime for
>> consistent point I need to freeze the file-system which flushes dirty
>> block to the disk and block every IO on the top level. So, for ufs  
>> and
>> vxfs I got lockfs and VX_FREEZE/VX_THAW, but with zfs I didn't get  
>> the
>> luck.
>>
>
> I see no need to freeze the file system, or maybe I would say, I see
> no benefit to freezing a file system.  Perhaps your needs will be met
> by the snapshot feature which will also cause a sync(2).

It sounds like the OP could take advantage of the vscan service 


Cheers,

Chris


Re: [zfs-discuss] SDXC and the future of ZFS

2009-01-14 Thread Chris Ridd

On 14 Jan 2009, at 10:01, Andrew Gabriel wrote:

> DOS/FAT filesystem implementations in appliances can be found in less
> than 8K code and data size (mostly that's code). Limited functionality
> implementations can be smaller than 1kB size.

Just for the sake of comparison, how big is the limited ZFS  
implementation in grub?

Cheers,

Chris


Re: [zfs-discuss] ZFS import on pool with same name?

2009-01-05 Thread Chris Ridd

On 5 Jan 2009, at 17:59, Josh Rivel wrote:

> How can I import the rpool from the 2nd hard drive and have it mount  
> on
> a different partition or something so I can recover the data from it?

You rename it as you import it, eg:

zpool import 6530745808930953819 newname

Cheers,

Chris


Re: [zfs-discuss] Hybrid Pools - Since when?

2008-12-14 Thread Chris Ridd

On 14 Dec 2008, at 16:58, Andrew Gabriel wrote:

> Rafael Friedlander wrote:
>> Hi,
>>
>> Does anyone know since when hybrid pools are available in ZFS? Are  
>> there
>> ZFS "versions"?
>>
>> XVM Server is based on Nevada b93, and I need to know if it supports
>> hybrid pools.
>>
>
> Hybrid pool slogs (ZIL) were introduced in Nevada builds 68 and 69,
> and are also in the latest Solaris 10.
>
> I'm not sure when "cache" (L2ARC) devices were introduced.
> There used to be a web page with a list of all the zpool versions, but
> it's not where it was, and I can't find it now.

Run "zpool upgrade -v" for a list of the versions known to that  
version of zpool. According to that, separate ZIL devices came in zpool  
version 7, and L2ARC came in zpool version 10.

It also reports:

---
For more information on a particular version, including supported  
releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
---

Cheers,

Chris 


Re: [zfs-discuss] Slow death-spiral with zfs gzip-9 compression

2008-11-30 Thread Chris Ridd

On 30 Nov 2008, at 02:59, Bob Friesenhahn wrote:

> On Sun, 30 Nov 2008, Ian Collins wrote:
>
>> What did you expect?  A 3GHz Opteron core takes about a minute to
>> attempt to compress a 1GB .mkv file.  So your P3 would probably take
>> between 5 and 10 minutes.  Now move that to the kernel and your  
>> system
>> will crawl.  High gzip compressions are only really feasible on fast
>> multi-core systems (the compression is threaded).
>
> The gzip manual page says that the default compression level for gzip
> is -6.  Experimentation will show that the compression ratio does not
> increase much at -9 so it is not worth it when you are short on time
> or CPU.

Would it also help if the blocksize were reduced down from the default  
(128K?) in the filesystem with gzip compression?

It feels like it might - there'd be more (and smaller) blocks being  
compressed, so more chance of other things being able to happen in  
between blocks.

I stress this is a WAG, but it is an easy variable to alter.
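If anyone wants to try the experiment, a sketch (dataset names made up; note
that recordsize only affects files written after the change):

```shell
# Create a test filesystem with a smaller recordsize and gzip compression
zfs create -o recordsize=32K -o compression=gzip-9 tank/gztest
# Or change an existing filesystem -- only newly written files pick it up
zfs set recordsize=32K tank/existing
# Verify the properties took effect
zfs get recordsize,compression tank/gztest
```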

Cheers,

Chris


Re: [zfs-discuss] Problem importing degraded Pool

2008-11-29 Thread Chris Ridd

On 29 Nov 2008, at 09:35, Philipp Haußleiter wrote:

> Hello...
>
> I somehow have a strange problem.
> I built a normal zfs pool of two disks (jbod) and set some folders  
> to copies=2

Mirrored disks?

> Yesterday one of the disks failed and so the zpool status changed to:
>
>> pool: tank
>> state: DEGRADED
>> status: One or more devices could not be opened. Sufficient  
>> replicas exist for
>> the pool to continue functioning in a degraded state.
>> action: Attach the missing device and online it using 'zpool online'.
>
> I replaced the failed disk with a new one with the same size and the  
> same time.

What zpool commands did you run at this point?



suggests you should have done:

zpool replace pool olddev newdev.
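Concretely, the sequence looks something like this (device names invented):

```shell
zpool status tank                  # identify the failed device, e.g. c1t2d0
zpool replace tank c1t2d0 c1t3d0   # resilver onto the new disk c1t3d0
zpool status tank                  # watch the resilver progress
# If the new disk went into the same physical slot, "zpool replace tank c1t2d0"
# with no new-device argument also works.
```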

Cheers,

Chris


Re: [zfs-discuss] Best practice for swap and root pool

2008-11-28 Thread Chris Ridd

On 28 Nov 2008, at 00:07, Peter Brouwer, Principal Storage Architect  
wrote:

>
>
> dick hoogendijk wrote:
>>
>> On Thu, 27 Nov 2008 12:58:20 +
>> Chris Ridd <[EMAIL PROTECTED]> wrote:
>>
>>
>>> I'm not 100% convinced it'll boot if half the mirror's not there,
>>>
>> Believe me, it will (been there done that). You -have- to make sure
>> though that both disks have installgrub ... And that your BIOS is  
>> able to boot from the other disk.

Nod. I've had one boot failure already, but maybe that was down to the  
mirrors rebuilding while trying to boot? The result was an error in  
grub: "Error 37: file system not found" (IIRC). Google doesn't tell me  
much about that one.

I was able to boot from the live CD and import the rpool OK, and  
rebooting off the disks then succeeded. As long as I can do that, I'm  
sort of happy.
>> You can always try it out by pulling a plug from one of the disks ;-)
>>
> Or less drastic, use the BIOS to point to one of the disks in the  
> mirror to boot from.
> If you have used installgrub to setup the stage 1&2 for both boot  
> drives you can boot from either one.

That wouldn't test if there was some interdependency between the disks  
- I'm not sure how such a thing would happen but with my luck this  
week I'm sure I could do it.

Is there a better list to discuss boot problems?

Cheers,

Chris


Re: [zfs-discuss] Best practice for swap and root pool

2008-11-27 Thread Chris Ridd

On 26 Nov 2008, at 17:08, Chris Ridd wrote:

> It feels a lot like "don't start from here" (ie from my 2008.05
> install) so I'm doing an install of 101b from CD onto one of the new
> disks right now. At least format's not showing a swap slice now,
> yippee. I'll try and get it mirroring afterwards, and then see how
> much I can extract from my broken disk.

I've successfully got 101b installed and booting and mirrored.

I did get a "Bad PBR sig" error after the install onto a single disk,  
and subsequently noticed the slices on that disk were slightly off:  
slice 8 (boot) was on cylinder 0, but slices 0 and 2 ended on  
different cylinders. I adjusted the end of slice 0, ran installgrub on  
it, and it now boots. I copied that vtoc over to the other disk and  
ran installgrub on it too.
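For reference, the installgrub invocations for both halves of the mirror look
like this (using the device names from this thread):

```shell
# Write the grub stage1/stage2 to slice 0 of each disk in the root mirror,
# so either disk can boot on its own
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c9t1d0s0
```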

I'm not 100% convinced it'll boot if half the mirror's not there, but  
I was rearranging the drives on the controller at the time. Presumably  
that's a terrible idea?

Now to recover the bits from my dying disk...

Cheers,

Chris


Re: [zfs-discuss] Best practice for swap and root pool

2008-11-26 Thread Chris Ridd

On 26 Nov 2008, at 14:37, Darren J Moffat wrote:

> dick hoogendijk wrote:
>> On Wed, 26 Nov 2008 12:51:04 +
>> Chris Ridd <[EMAIL PROTECTED]> wrote:
>>> But what do I do with that swap slice? Should I ditch it and create
>>> an rpool/swap area? Do I still need a boot slice?
>
>
> Depending on where it is in the VTOC you may be able to consolidate it
> into the pool.  Only do this if it is at the END of the area used for
> the pool, and DON'T add it as a new vdev; let format(1M) grow the slice.

Rather inconveniently swap starts at cylinder 1, and the pool starts  
at 262. So no joy there.

> As for the boot slice you can't actually do anything about that and it
> is unique to x86 (SPARC doesn't have s8 and s9).

It feels a lot like "don't start from here" (ie from my 2008.05  
install) so I'm doing an install of 101b from CD onto one of the new  
disks right now. At least format's not showing a swap slice now,  
yippee. I'll try and get it mirroring afterwards, and then see how  
much I can extract from my broken disk.

Thanks for the clarifications!

Cheers,

Chris


Re: [zfs-discuss] Best practice for swap and root pool

2008-11-26 Thread Chris Ridd

On 26 Nov 2008, at 13:12, dick hoogendijk wrote:

> On Wed, 26 Nov 2008 12:51:04 +
> Chris Ridd <[EMAIL PROTECTED]> wrote:
>
>> I'm replacing the disk with my rpool with a mirrored pool, and
>> wondering how best to do that.
>>
>> The disk I'm replacing is partitioned with root on s0, swap on s1
>> and boot on s8, which is what the original 2008.05 installer created
>> for me.
>
> Are you sure about this? OS2008-05 uses the whole disk as a ZFS pool and
> within that rpool creates the separate filesystems (swap/dump, ...)

Yep.

> I've never seen a ZFS system on separate slices. Slices are things  
> from
> the past ;-)

Maybe this is just a hangover from my original 2008.05 install?

>> to mirror the root. The -f is to stop zpool whining about s0
>> overlapping s2.
>
> If I use a disk for a root pool I create just one slice on it (s0).
> Nothing else. This is needed because booting off EFI-labeled disks is
> not supported (yet).

Nod, I had to use format -e to force an SMI label.

>> But what do I do with that swap slice? Should I ditch it and create
>> an rpool/swap area? Do I still need a boot slice?
>
> ALL parts are created within the one rpool.

Hm, so it might be better to do a new install onto the new disk with  
whatever slices the installer wants to set up, and then migrate the  
filesystems across from the old rpool.
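For anyone following along, swap inside the root pool is just a zvol; a rough
sketch of creating one on the new install (the 2G size is illustrative):

```shell
zfs create -V 2G rpool/swap        # create a 2 GB volume to use for swap
swap -a /dev/zvol/dsk/rpool/swap   # add it as a swap device now
swap -l                            # verify it is in use
# To make it permanent, add a line like this to /etc/vfstab:
# /dev/zvol/dsk/rpool/swap  -  -  swap  -  no  -
```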

So where does installgrub put the boot bits?

Cheers,

Chris


[zfs-discuss] Best practice for swap and root pool

2008-11-26 Thread Chris Ridd
I'm replacing the disk with my rpool with a mirrored pool, and  
wondering how best to do that.

The disk I'm replacing is partitioned with root on s0, swap on s1 and  
boot on s8, which is what the original 2008.05 installer created for  
me. I've partitioned the new disk in the same way and am now running

zpool attach -f rpool c5d0s0 c9t1d0s0

to mirror the root. The -f is to stop zpool whining about s0  
overlapping s2.

But what do I do with that swap slice? Should I ditch it and create an  
rpool/swap area? Do I still need a boot slice?

What's the recommended practice here?

Cheers,

Chris


[zfs-discuss] Odd filename in zpool status -v output

2008-11-25 Thread Chris Ridd
My non-redundant rpool (2 replacement disks have been ordered :-) is  
reporting errors:

canopus% pfexec zpool status -v rpool
  pool: rpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub in progress for 4h18m, 72.07% done, 1h40m to go
config:

        NAME      STATE     READ WRITE CKSUM
        rpool     ONLINE       8     0     0
          c5d0s0  ONLINE     818     0     0  540K repaired

errors: Permanent errors have been detected in the following files:

        rpool/ROOT/opensolaris-101:/var/tmp/stmAAAaXaWkb.0015
        rpool/canopus1:<0x0>

So I don't think I care about the damage to /var/tmp/stmAAAaXaWkb.0015,
but what's the second filename printed there?

The pool has an rpool/canopus1 filesystem so I guess it is somehow  
related to that.

I'm running the current public build (101b) of OpenSolaris.

Cheers,

Chris


Re: [zfs-discuss] Downgrading a zpool

2008-11-06 Thread Chris Ridd

On 6 Nov 2008, at 10:19, Christian Vallo wrote:

> Hi Chris,
>
> I think there is no way to downgrade. I think you must copy/sync the  
> data from one pool (v4) to another pool (v3).

Darn. I've managed to push back on doing the downgrade for now anyway...

Cheers,

Chris


Re: [zfs-discuss] Downgrading a zpool

2008-11-06 Thread Chris Ridd

On 6 Nov 2008, at 09:53, Ian Collins wrote:

> Chris Ridd wrote:
>> I probably need to downgrade a machine from 10u5 to 10u3. The zpool  
>> on
>> u5 is a v4 pool, and AIUI 10u3 only supports up to v3 pools.
>>
>> Will this pool automatically import when I downgrade the OS?
>>
>>
> No you are out of luck.

I thought that might be the case :-)

>> Assuming I'm not that lucky, can I use 10u5's zfs send to take a
>> backup of the filesystems, and zfs receive on 10u3 to restore them?
>>
>>
> Same again. You can't receive a stream sent from a newer pool version.

That's a pity. I'm slightly surprised that the pool version affects  
the filesystem/snapshot stream format.

Cheers,

Chris


[zfs-discuss] Downgrading a zpool

2008-11-06 Thread Chris Ridd
I probably need to downgrade a machine from 10u5 to 10u3. The zpool on  
u5 is a v4 pool, and AIUI 10u3 only supports up to v3 pools.

Will this pool automatically import when I downgrade the OS?

Assuming I'm not that lucky, can I use 10u5's zfs send to take a  
backup of the filesystems, and zfs receive on 10u3 to restore them?

Cheers,

Chris


Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Chris Ridd
Adrian Danielson wrote:
> 1.  After the install I created a zfs mirror of the root disk c0t0d0 to 
> c0t1d0. Format shows the mirrored disk with sectors instead of cylinders; is 
> this normal or correct?  Is there a way to reverse this back to cylinders if 
> it is not?  The same goes for the external disk pool using SAN disk from the 
> IBM SVC.

Format shows sectors when the disk has an EFI label, and cylinders when 
the disk has a Sun (SMI) label. ZFS puts an EFI label on whole disks it is 
given, so you're seeing the right thing.

You can change the label (blowing away the disk contents of course) 
using format -e. The label menu changes with the -e flag to let you 
choose the kind of label.
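Roughly like this (interactive; the exact menu wording varies by release, and
relabelling destroys the disk contents):

```shell
format -e
# ... select the disk, then at the format> prompt:
# format> label
#   [0] SMI Label
#   [1] EFI Label
# Specify Label type[1]: 0    <- pick SMI to get a cylinder-based label back
```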

Cheers,

Chris


Re: [zfs-discuss] Remove old boot environment?

2008-07-08 Thread Chris Ridd
Ted Carr wrote:
> Hello All,
> 
> Is there a way I can remove my old boot environments?  Is it as simple as 
> performing a 'zfs destroy' on the older entries, followed by removing the 
> entry from the menu.lst??  I have been searching, but have not found 
> anything...  Any help would be much appreciated!!

The beadm command is probably the tool of choice here.

Cheers,

Chris


Re: [zfs-discuss] OT: extremely poor experience with Sun Download Manager

2007-06-14 Thread Chris Ridd
On 14/6/07 11:16, "Graham Perrin" <[EMAIL PROTECTED]> wrote:

> Intending to experiment with ZFS, I have been struggling with what
> should be a simple download routine.
> 
> Sun Download Manager leaves a great deal to be desired.
> 
> In the Online Help for Sun Download Manager there's a section on
> troubleshooting, but if it causes *anyone* this much trouble
>  then
> it should, surely, be fixed.
> 
> Sun Download Manager -- a FORCED step in an introduction to

Read Sun's page more carefully. It isn't a forced step, it is a
*recommended* step. (OK, a *highly recommended* step.)

There's nothing stopping you from downloading each file separately using your
web browser or curl or something.

Cheers,

Chris




Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-10 Thread Chris Ridd
On 10/6/07 5:23, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:

> 
>> On 9/6/07 10:01, "Eric Schrock" <[EMAIL PROTECTED]> wrote:
>> 
>>> On Sat, Jun 09, 2007 at 01:56:35PM -0700, Ed Ravin wrote:
 
 I encountered the problem in NetBSD's scandir(), when reading off
 a Solaris NFS fileserver with ZFS filesystems.  I've already filed a
 bug report with NetBSD.  They were using the st_size, divided by 24, to
 determine how much memory to allocate with malloc() before reading in
 the directory entries.  All without any sanity checking.
>>> 
>>> Ah, so the original bug should never been filed against our scandir(3c),
>>> which is resilient to this type of failure.
>> 
>> I think when I originally filed this bug I was looking at the wrong scandir
>> implementation, ie the one in
>> /onnv/onnv-gate/usr/src/lib/libbc/libc/gen/common/scandir.c instead of the
>> one in /onnv/onnv-gate/usr/src/lib/libc/port/gen/scandir.c
>> 
>> Is there any way to mark the bug as resolved? Or maybe to change the
>> category etc?
> 
> Possibly; it would still need to be fixed if folks encountered this
> on their old SunOS 4.x app running on Solaris.

I can't see that being too high up on Sun's agenda :-)

Cheers,

Chris




Re: [zfs-discuss] Re: zfs reports small st_size for directories?

2007-06-10 Thread Chris Ridd
On 9/6/07 10:01, "Eric Schrock" <[EMAIL PROTECTED]> wrote:

> On Sat, Jun 09, 2007 at 01:56:35PM -0700, Ed Ravin wrote:
>> 
>> I encountered the problem in NetBSD's scandir(), when reading off
>> a Solaris NFS fileserver with ZFS filesystems.  I've already filed a
>> bug report with NetBSD.  They were using the st_size, divided by 24, to
>> determine how much memory to allocate with malloc() before reading in
>> the directory entries.  All without any sanity checking.
> 
> Ah, so the original bug should never been filed against our scandir(3c),
> which is resilient to this type of failure.

I think when I originally filed this bug I was looking at the wrong scandir
implementation, ie the one in
/onnv/onnv-gate/usr/src/lib/libbc/libc/gen/common/scandir.c instead of the
one in /onnv/onnv-gate/usr/src/lib/libc/port/gen/scandir.c

Is there any way to mark the bug as resolved? Or maybe to change the
category etc?

Cheers,

Chris




Re: [zfs-discuss] Google paper on disk reliability

2007-02-18 Thread Chris Ridd
On 18/2/07 4:56, "Akhilesh Mritunjai" <[EMAIL PROTECTED]> wrote:

> Hi Folks
> 
> I believe that the word would have gone around already, Google engineers have
> published a paper on disk reliability. It might supplement the ZFS FMA
> integration and well - all the numerous debates on spares etc etc over here.
> 
> To quote /.
> 
> "The Google engineers just published a paper on Failure Trends in a Large Disk
> Drive Population. Based on a study of 100,000 disk drives over 5 years they
> find some interesting stuff. To quote from the abstract: 'Our analysis
> identifies several parameters from the drive's self monitoring facility
> (SMART) that correlate highly with failures. Despite this high correlation, we
> conclude that models based on SMART parameters alone are unlikely to be useful
> for predicting individual drive failures. Surprisingly, we found that
> temperature and activity levels were much less correlated with drive failures
> than previously reported.'"
> 
> Link to the paper is http://labs.google.com/papers/disk_failures.pdf

There was another similar paper (written at CMU) given at the same
conference:



Cheers,

Chris




Re: [zfs-discuss] RACF: SunFire x4500 Thumper Evaluation

2007-02-11 Thread Chris Ridd
On 11/2/07 3:04, "Ian Collins" <[EMAIL PROTECTED]> wrote:

> Rayson Ho wrote:
> 
>> Interesting...
>> 
>> http://www.rhic.bnl.gov/RCF/LiaisonMeeting/20070118/Other/thumper-eval.pdf
>> 
>> 
> I wonder where they got the information that "Solaris 10 doesn't support
> dual-core Intel" from?

Does OpenSolaris or Solaris 10 support the Intel Core Duo chips and/or the
Core 2 Duo chips?

Cheers,

Chris




Re: [zfs-discuss] can I use zfs on just a partition?

2007-01-25 Thread Chris Ridd
On 25/1/07 3:16, "Jeremy Teo" <[EMAIL PROTECTED]> wrote:

> On 1/25/07, Tim Cook <[EMAIL PROTECTED]> wrote:
>> Just want to verify, if I have say, 1 160GB disk, can I format it so that the
>> first say 40GB is my main UFS parition with the base OS install, and then
>> make the rest of the disk zfs?  Or even better yet, for testing purposes make
>> two 60GB partitions out of the rest of it and make them a *mirror*?  Or do I
>> have to give zfs the entire disk?
> 
> If you're talking about slices (partitions), yes. You can give zfs
> just a slice of the disk, while giving UFS another slice. You can
> mirror across slices (though performance will be sub-optimal).

The zpool man page suggests that you can use a regular file as a backing
store, though this is only really intended for experimental purposes.
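A sketch of such a file-backed test pool (paths made up; the backing files must
be given by absolute path):

```shell
mkfile 128m /var/tmp/zf1 /var/tmp/zf2       # create two backing files
zpool create testpool mirror /var/tmp/zf1 /var/tmp/zf2
zpool status testpool                       # a mirror of two plain files
zpool destroy testpool                      # clean up when finished
```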

Cheers,

Chris




Re: [zfs-discuss] Thumper Origins Q

2007-01-24 Thread Chris Ridd
On 24/1/07 9:06, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:

> But Fowler said the name was too risque (!).  Fortunately the name
> "Thumper" stuck...

I assumed it was a reference to Bambi... That's what comes from having small
children :-)

Cheers,

Chris

