On Thu, Jun 03, 2010 at 12:40:34PM -0700, Frank Cusack wrote:
> On 6/3/10 12:06 AM -0400 Roman Naumenko wrote:
> >I think there is a difference. Just quickly checked netapp site:
> >
> >Adding new disks to a RAID group: If a volume has more than one RAID
> >group, you can specify the RAID group to w
On Sun, May 09, 2010 at 10:55:08PM +0530, Johnson Thomas wrote:
> Customer has this query
> "If there is a way to flush ARC for filebench runs without rebooting
> the system"
>
> He is running firmware 2010.02.09.0.2,1-1.13 on the NAS 7410
In the pre-ZFS world I would have suggested unmounting th
On Wed, Apr 28, 2010 at 09:49:04PM +0200, Ragnar Sundblad wrote:
> On 28 apr 2010, at 14.06, Edward Ned Harvey wrote:
>
> What indicators do you have that ONTAP/WAFL has inode->name lookup
> functionality? I don't think it has any such thing - WAFL is pretty
> much a UFS/FFS that does COW instead
On Wed, Apr 21, 2010 at 04:49:30PM +0100, Darren J Moffat wrote:
> /foo is the filesystem
> /foo/bar is a directory in the filesystem
>
> cd /foo/bar/
> touch stuff
>
> [ you wait, time passes; a snapshot is taken ]
>
> At this point /foo/bar/.snapshot/.../stuff exists
>
> Now do this:
>
> rm
On Wed, Apr 21, 2010 at 10:10:09PM -0400, Edward Ned Harvey wrote:
> > From: Nicolas Williams [mailto:nicolas.willi...@oracle.com]
> >
> > POSIX doesn't allow us to have special dot files/directories outside
> > filesystem root directories.
>
> So? Tell it to Netapp. They don't seem to have any
On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
> > "zfs list -t snapshot" lists in time order.
>
> Good to know. I'll keep that in mind for my "zfs send" scripts but it's not
> relevant for the case at hand. Because "zfs list" isn't available on the
> NFS client, where the us
On Tue, Mar 02, 2010 at 11:42:30AM -0800, Carson Gaspar wrote:
>
> NetApp does _not_ expose an ACL via NFSv3, just old school POSIX
> mode/owner/group info. I don't know how NetApp deals with chmod, but
> I'm sure it's documented.
I can't get a chmod to succeed in that situation. This particular
On Fri, Feb 19, 2010 at 07:43:10AM -0800, Thanos Makatos wrote:
> Hello.
>
> I want to know what is the unit of compression in ZFS. Is it 4 KB or larger?
> Is it tunnable?
It is the ZFS filesystem block.
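For example, the unit is governed by the dataset's recordsize property
(128K by default), and the result can be inspected per dataset; a sketch
with hypothetical pool/filesystem names:

  zfs get recordsize,compression tank/data   # 'tank/data' is hypothetical
  zfs set recordsize=64K tank/data           # affects newly written blocks only
  zfs get compressratio tank/data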
--
Darren
On Fri, Feb 05, 2010 at 08:35:15AM -0800, J wrote:
> To be more descriptive, I plan to have a Raid 1 array for the OS, and
> then I will need 3 additional Raid5/RaidZ/etc arrays for data
> archiving, backups and other purposes. There is plenty of
> documentation on how to recover an array if one o
On Wed, Jan 27, 2010 at 10:55:21AM -0800, Albert Frenz wrote:
> hi there,
>
> maybe this is a stupid question, yet i haven't found an answer anywhere ;)
> let say i got 3x 1,5tb hdds, can i create equal partitions out of each and
> make a raid5 out of it? sure the safety would drop, but that is n
On Thu, Jan 21, 2010 at 12:38:56AM +0100, Ragnar Sundblad wrote:
> On 21 jan 2010, at 00.20, Al Hopper wrote:
> > I remember from about 5 years ago (before LTO-4 days) that streaming
> > tape drives would go to great lengths to ensure that the drive kept
> > streaming - because it took so much time
On Wed, Jan 20, 2010 at 08:11:27AM +1300, Ian Collins wrote:
> True, but I wonder how viable its future is. One of my clients
> requires 17 LTO4 tapes for a full backup, which cost more and take
> up more space than the equivalent in removable hard drives.
What kind of removable hard drives are
On Fri, Jan 15, 2010 at 02:07:40PM -0600, Gary Mills wrote:
> I have a ZFS filesystem that I wish to split into two
> ZFS filesystems at one of the subdirectories. I understand that I
> first need to make a snapshot of the filesystem and then make a clone
> of the snapshot, with a different name.
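As a rough sketch of the snapshot-plus-clone step being described
(hypothetical dataset names, not the poster's actual layout):

  zfs snapshot tank/data@split
  zfs clone tank/data@split tank/data2     # 'tank/data2' is hypothetical
  # afterwards, delete the subdirectory from the original and everything
  # else from the clone; 'zfs promote tank/data2' reverses the clone/origin
  # dependency if the clone is to become the primary copy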
On Thu, Jan 14, 2010 at 06:11:10PM -0500, Miles Nordin wrote:
> zpool offline / zpool online of a mirror component will indeed
> fast-resync, and I do it all the time. zpool detach / attach will
> not.
Yes, but the offline device is still part of the pool. What are you
doing with the device when
On Wed, Jan 13, 2010 at 04:38:42PM +0200, Cyril Plisko wrote:
> On Wed, Jan 13, 2010 at 4:35 PM, Max Levine wrote:
> > Veritas has this feature called fast mirror resync where they have a
> > DRL on each side of the mirror and, detaching/re-attaching a mirror
> > causes only the changed bits to b
On Tue, Jan 05, 2010 at 04:49:00PM +, Robert Milkowski wrote:
> A possible *workaround* is to use SVM to set-up RAID-5 and create a
> zfs pool on top of it.
> How does SVM handle R5 write hole? IIRC SVM doesn't offer RAID-6.
As far as I know, it does not address it. It's possible that adding
On Sun, Dec 27, 2009 at 06:02:18PM +0100, Colin Raven wrote:
> Are there any negative consequences as a result of a force import? I mean
> STUNT; "Sudden Totally Unexpected and Nasty Things"
> -Me
If the pool is not in use, no. It's a safety check to avoid problems
that can easily crop up when st
On Tue, Dec 29, 2009 at 02:37:20PM -0800, Brad wrote:
> I would appreciate some feedback on what I've understood so far:
>
> WRITES
>
> raid5 - A FS block is written on a single disk (or multiple disks
> depending on size data???)
There is no direct relationship between a filesystem and the RAID
s
On Thu, Dec 17, 2009 at 12:30:29PM -0800, Stacy Maydew wrote:
> So thanks for that answer. I'm a bit confused though if the dedup is
> applied per zfs filesystem, not zpool, why can I only see the dedup on
> a per pool basis rather than for each zfs filesystem?
>
> Seems to me there should be a wa
On Mon, Dec 14, 2009 at 09:30:29PM +0300, Andrey Kuzmin wrote:
> ZFS deduplication is block-level, so to deduplicate one needs data
> broken into blocks to be written. With compression enabled, you don't
> have these until data is compressed. Looks like cycles waste indeed,
> but ...
ZFS compressi
On Thu, Dec 03, 2009 at 12:44:16PM -0800, Per Baatrup wrote:
> >any of f1..f5's last blocks are partial
> Does this mean that f1,f2,f3,f4 need to be exact multiples of the ZFS
> blocksize? This is a severe restriction that will fail except in very
> special cases. Is this related to the disk form
On Thu, Dec 03, 2009 at 09:36:23AM -0800, Per Baatrup wrote:
> The reason I was speaking about "cat" instead of "cp" is that in
> addition to copying a single file I would like also to concatenate
> several files into a single file. Can this be accomplished with your
> "(z)cp"?
Unless you have s
On Tue, Nov 10, 2009 at 03:04:24PM -0600, Tim Cook wrote:
> No. The whole point of a snapshot is to keep a consistent on-disk state
> from a certain point in time. I'm not entirely sure how you managed to
> corrupt blocks that are part of an existing snapshot though, as they'd be
> read-only.
Ph
On Mon, Nov 09, 2009 at 03:25:02PM -0700, Robert Thurlow wrote:
> Andrew Daugherity wrote:
>
> >if I invoke bart via truss, I see it calls statvfs() and fails. Way to
> >keep up with the times, Sun!
>
> % file /bin/truss /bin/amd64/truss
>
> /bin/truss: ELF 32-bit LSB executable 80386 Vers
On Thu, Nov 05, 2009 at 11:55:58AM -0800, Ilya wrote:
> Slide 18 shows variably sized extents but doesn't explain the process
> of full-on write. What I'm looking for is one example. I still don't
> understand how it works with variable sized extents. So if you have 2
> stripe units on one disk an
On Wed, Nov 04, 2009 at 04:41:34PM +, Andrew Gabriel wrote:
> A Darren Dunham wrote:
> >I don't think the second fdisk partition can be used. The system
> >doesn't like to have multiple "Solaris" partitions.
> >
>
> Make sure it isn't
On Wed, Nov 04, 2009 at 09:59:05AM +, Andrew Gabriel wrote:
> It can be done by careful use of fdisk (with some risk of blowing away
> the data if you get it wrong), but I've seen other email threads here
> that indicate ZFS then won't mount the pool, because the two labels at
> the end of t
On Mon, Oct 26, 2009 at 10:24:16AM -0700, Brian wrote:
> Why does resilvering an entire disk yield different amounts of data
> resilvered each time?
> I have read that ZFS only resilvers what it needs to, but in the case
> of replacing an entire disk with another formatted clean disk, you
On Fri, Oct 16, 2009 at 01:42:49PM +0200, Sander Smeenk wrote:
> Recently I switched on 'snapdir=visible' on one of the zfs volumes to
> easily expose the available snapshots and then I noticed rsync -removes-
> snapshots even though I am not able to do so myself, even as root, with
> plain /bin/rm
On Thu, Oct 15, 2009 at 05:31:42AM -0700, Julio Pérez wrote:
> I am thinking in another possibility. Format the current NTFS
> partition to ZFS and then I would be able to use this space like
> another disk, to store the user home for example, or other
> stuff. Would it be possible?
Not easily. S
On Tue, Oct 13, 2009 at 05:32:35AM -0700, Julio wrote:
> Hi,
>
> I have the following partions on my laptop, Inspiron 6000, from fdisk:
>
> 1       Other OS          0     11      12      0
> 2       EXT LBA          12   2561    2550     26
> 3
On Tue, Oct 06, 2009 at 06:53:15PM -0500, Harry Putnam wrote:
> I don't get that... I was thinking of something like
>
> set use:"z3 use - mirror rhosts public_html"
Probably something more like:
zfs set local:use="mirror rhosts public_html" tank/pubfs
"local" means nothing here. Just something
On Mon, Oct 05, 2009 at 02:14:24PM -0700, Mark Horstman wrote:
> I have a snapshot that I'd like to destroy:
If you have a filesystem and a clone of that filesystem, a snapshot
always connects them. You can destroy the snapshot only if there are no
clones.
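A minimal illustration of that dependency, with hypothetical names:

  zfs destroy tank/fs@snap     # refused while a clone depends on the snapshot
  zfs destroy tank/fs-clone    # remove (or promote) the dependent clone first
  zfs destroy tank/fs@snap     # now the snapshot can be destroyed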
--
Darren
> >Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a
> >RAIDZ, you will get only 1TB of usable space.
On Wed, Aug 12, 2009 at 05:30:14PM -0400, Adam Sherman wrote:
> I believe you will get .5 TB in this example, no?
The slices used on each of the three disks will be .5TB. Mult
On Wed, Aug 12, 2009 at 04:53:20AM -0700, Sascha wrote:
> confirmed, it's really an EFI Label. (see below)
>
>format> label
>[0] SMI Label
>[1] EFI Label
>Specify Label type[1]: 0
>Warning: This disk has an EFI label. Changing to SMI label will erase all
>current partitions
On Tue, Aug 11, 2009 at 09:35:53AM -0700, Sascha wrote:
> Then creating a zpool:
> zpool create -m /zones/huhctmp huhctmppool
> c6t6001438002A5435A0001005Ad0
>
> zpool list
> NAME          SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
> huhctmppool  59.5G   103K  59.5G    0%
On Mon, Aug 03, 2009 at 01:15:49PM -0700, Jan wrote:
> Yes, I have an EFI label on that device.
> This is my procedure to try growing the capacity of the device:
> -> export the zpool
> -> overwrite the existing EFI label with format tool
> -> auto-configure it
> -> import the zpool
>
> What do y
On Wed, Jul 29, 2009 at 03:51:22AM -0700, Jan wrote:
> Hi all,
> I need to know if it is possible to expand the capacity of a zpool
> without loss of data by growing the LUN (2TB) presented from an HP EVA
> to a Solaris 10 host.
Yes.
> I know that there is a possible way in Solaris Express Commun
On Sun, Jun 07, 2009 at 10:38:29AM -0700, Leonid Zamdborg wrote:
> Out of curiosity, would destroying the zpool and then importing the
> destroyed pool have the effect of recognizing the size change? Or
> does 'destroying' a pool simply label a pool as 'destroyed' and make
> no other changes...
I
On Mon, Jun 01, 2009 at 03:19:59PM -0700, Jonathan Loran wrote:
>
> Kinda scary then. Better make sure we delete all the bad files before
> I back it up.
That shouldn't be necessary. Clearing the error count doesn't disable
checksums. Every read is going to verify checksums on the file data
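For instance ('tank' is a hypothetical pool name here), the error counters
and the list of affected files can be handled without touching checksumming
at all:

  zpool status -v tank    # lists files with unrecoverable errors
  zpool clear tank        # resets the error counters only
  zpool scrub tank        # re-reads and re-verifies every block in the pool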
On Fri, Apr 10, 2009 at 01:18:05PM -0500, Harry Putnam wrote:
> I'm looking for ways to backup data on a linux server that has been
> using rsync with the script `rsnapshot'. Some of you may know how
> that works... I won't explain it here other than to say only changed
> data gets rsynced to the
On Wed, Apr 01, 2009 at 12:41:25AM +, A Darren Dunham wrote:
> On Wed, Apr 01, 2009 at 01:41:06AM +0300, Dimitar Vasilev wrote:
> > Hi all,
> > Could someone give a hint if it's possible to create rpool/tmp, mount
> > it as /tmp so that tmpfs has some disk-based back
On Wed, Apr 01, 2009 at 01:41:06AM +0300, Dimitar Vasilev wrote:
> Hi all,
> Could someone give a hint if it's possible to create rpool/tmp, mount
> it as /tmp so that tmpfs has some disk-based back-end instead of
> memory-based size-limited one.
You mean you want /tmp to be a regular ZFS filesyst
On Wed, Mar 18, 2009 at 07:13:41PM +0100, Carsten Aulbert wrote:
> Well, consider one box being installed from CD (external USB-CD) and
> another one which is jumpstarted via the network. The results usually
> are two different boot device names :(
>
> Q: Is there an easy way to reset this without
On Tue, Mar 17, 2009 at 03:51:25PM -0700, Neal Pollack wrote:
> >Step 3, you'll be presented with the disks to be selected as in
> >previous releases. So, for example, to select the boot disks on the
> >Thumper,
> >select both of them:
> >
> >[x] c5t0d0
> >[x] c4t0d0
>
> Why have the controller
On Mon, Mar 16, 2009 at 09:54:57PM +0100, Carsten Aulbert wrote:
> o what happens when a user opens the file and does a lot of seeking
> inside the file? For example our scientists use a data format where
> quite compressible data is contained in stretches and the file header
> contains a dictionar
On Mon, Mar 16, 2009 at 10:34:54PM +0100, Carsten Aulbert wrote:
> Can ZFS make educated guesses where the seek targets might be or will it
> read the file block by block until it reaches the target position, in
> the latter case it might be quite inefficient if the file is huge and
> has a large v
On Tue, Mar 10, 2009 at 05:57:16PM -0500, Bob Friesenhahn wrote:
> On Tue, 10 Mar 2009, Moore, Joe wrote:
>
> >As far as workload, any time you use RAIDZ[2], ZFS must read the
> >entire stripe (across all of the disks) in order to verify the
> >checksum for that data block. This means that a 12
On Thu, Feb 12, 2009 at 10:33:40AM -0500, Greg Mason wrote:
> What I'm looking for is a faster way to do this than format -e -d
> -f
On Wed, Jan 14, 2009 at 04:39:03PM -0600, Gary Mills wrote:
> I realize that any error can occur in a storage subsystem, but most
> of these have an extremely low probability. I'm interested in this
> discussion in only those that do occur occasionally, and that are
> not catastrophic.
What level
On Tue, Jan 06, 2009 at 04:10:10PM -0500, JZ wrote:
> Hello Darren,
> This one, ok, was a validate thought/question --
Darn, I was hoping...
> On Solaris, root pools cannot have EFI labels (the boot firmware doesn't
> support booting from them).
> http://blog.yucas.info/2008/11/26/zfs-boot-sola
On Tue, Jan 06, 2009 at 01:24:17PM -0800, Alex Viskovatoff wrote:
> a...@diotiima:~# installgrub -m /boot/grub/stage1 /boot/grub/stage2
> /dev/rdsk/c4t0d0s0
> Updating master boot sector destroys existing boot managers (if any).
> continue (y/n)?y
> stage1 written to partition 0 sector 0 (abs 160
On Tue, Jan 06, 2009 at 11:49:27AM -0700, cindy.swearin...@sun.com wrote:
> My wish for this year is to boot from EFI-labeled disks so examining
> disk labels is mostly unnecessary because ZFS pool components could be
> constructed as whole disks, and the unpleasant disk
> format/label/partitioning
On Tue, Jan 06, 2009 at 10:22:20AM -0800, Alex Viskovatoff wrote:
> I did an install of OpenSolaris in which I specified that the whole disk
> should be used for the installation. Here is what "format> verify" produces
> for that disk:
>
> Part      Tag    Flag     Cylinders         Size
On Tue, Jan 06, 2009 at 08:44:01AM -0800, Jacob Ritorto wrote:
> Is this increase explicable / expected? The throughput calculator
> sheet output I saw seemed to forecast better iops with the striped
> raidz vdevs and I'd read that, generally, throughput is augmented by
> keeping the number of vd
On Sat, Jan 03, 2009 at 09:58:37PM -0500, JZ wrote:
>> Under what situations would you expect any differences between the ZFS
>> checksums and the Netapp checksums to appear?
>>
>> I have no evidence, but I suspect the only difference (modulo any bugs)
>> is how the software handles checksum failur
On Wed, Dec 31, 2008 at 01:53:03PM -0500, Miles Nordin wrote:
> The thing I don't like about the checksums is that they trigger for
> things other than bad disks, like if your machine loses power during a
> resilver, or other corner cases and bugs. I think the Netapp
> block-level RAID-layer check
On Thu, Dec 18, 2008 at 10:24:26AM +0200, Johan Hartzenberg wrote:
> Similarly, adding a device into a raid-Z vdev seems easy to do: All future
> writes include that device in the list of devices from which to allocate
> blocks.
In general, I agree completely. But in practice there are limitatio
On Wed, Dec 17, 2008 at 01:57:37PM -0600, Tim wrote:
> On Wed, Dec 17, 2008 at 10:23 AM, wrote:
>
> > Hi Alex,
> >
> > Sorry, I missed the 1.5 TB disk/boot issue previously.
> >
> > A project is underway to provide booting for disks that are large
> > than 1 TB. This project is outside of a futur
On Tue, Dec 16, 2008 at 12:07:52PM +, Ross Smith wrote:
> It sounds to me like there are several potentially valid filesystem
> uberblocks available, am I understanding this right?
>
> 1. There are four copies of the current uberblock. Any one of these
> should be enough to load your pool wit
On Wed, Nov 26, 2008 at 04:30:59PM +0100, "C. Bergström" wrote:
> Ok. here's a trick question.. So to the best of my understanding zfs
> turns off write caching if it doesn't own the whole disk.. So what if s0
> *is* the whole disk? Is write cache supposed to be turned on or off?
Actually, ZFS
On Tue, Nov 04, 2008 at 05:52:33AM -0800, Ivan Wang wrote:
> > $ /usr/bin/amd64/ls -l .gtk-bookmarks
> > -rw-r--r-- 1 user opc0 oct. 16 2057
> > .gtk-bookmarks
> >
> > This is a bit absurd. I thought Solaris was fully 64
> > bit. I hope those tools will be integrated soon.
Solaris
On Sat, Oct 11, 2008 at 03:19:49AM +0300, Marcus Sundman wrote:
> I've used format's "volname" command to give labels to my drives
> according to their physical location. I did quite a lot of work
> labeling all my drives (I couldn't figure out which controller got
> which numbers so I had to disco
On Tue, Sep 30, 2008 at 03:19:40PM -0700, Erik Trimble wrote:
> To make Will's argument more succinct (), with a NetApp,
> undetectable (by the NetApp) errors can be introduced at the HBA and
> transport layer (FC Switch, slightly damage cable) levels. ZFS will
> detect such errors, and fix th
On Tue, Sep 23, 2008 at 08:56:39AM +0200, Nils Goroll wrote:
>> That case appears to be about trying to get a raidz sized properly
>> against disks of different sizes. I don't see a similar issue for
>> someone preferring a concat over a stripe.
>
> I don't quite understand your comment.
>
> The q
On Mon, Sep 22, 2008 at 01:03:13PM +0200, Nils Goroll wrote:
> See
>
> http://www.opensolaris.org/jive/thread.jspa?messageID=271983
>
> The case mentioned there is one where concatenation in zdevs would be
> useful.
That case appears to be about trying to get a raidz sized properly
against disks
On Fri, Sep 19, 2008 at 10:31:07AM -0400, Michael Dvinyaninov wrote:
> Hello,
>
> I am sure that this question was answered already but I could not find
> an answer.
> Is it possible to force zfs pool to have concatenation not striping or
> it can't be specified.
No, it can't.
How would having
On Thu, Sep 18, 2008 at 01:26:09PM +0200, Nils Goroll wrote:
> Thank you very much for correcting my long-time misconception.
>
> On the other hand, isn't there room for improvement here? If it was
> possible to break large writes into smaller blocks with individual
> checksums (for instance those w
On Thu, Sep 11, 2008 at 04:28:03PM -0400, Jim Dunham wrote:
>
> On Sep 11, 2008, at 11:19 AM, A Darren Dunham wrote:
>
>> On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
>>> The issue with any form of RAID >1, is that the instant a disk fails
>>>
On Thu, Sep 11, 2008 at 10:33:00AM -0400, Jim Dunham wrote:
> The issue with any form of RAID >1, is that the instant a disk fails
> out of the RAID set, with the next write I/O to the remaining members
> of the RAID set, the failed disk (and its replica) are instantly out
> of sync.
Does ra
On Fri, Sep 05, 2008 at 03:17:44PM -0400, Paul Raines wrote:
> [EMAIL PROTECTED] # ls -l
> ./README: Value too large for defined data type
> total 36
> -rw-r- 1 mreuter mreuter 1019 Sep 25 2006 Makefile
> -rw-r- 1 mreuter mreuter 3185 Feb 22 2000 lcompgre.cc
> -rw-r- 1
On Fri, Aug 22, 2008 at 10:54:00AM -0700, Gordon Ross wrote:
> I noted this PSARC thread with interest:
> Re: zpool autoexpand property [PSARC/2008/353 Self Review]
> because it so happens that during a recent disk upgrade
> on a laptop, I've migrated a zpool off of one partition
> onto a slight
On Thu, Aug 07, 2008 at 11:34:12AM -0700, Richard Elling wrote:
> Anton B. Rang wrote:
> > First, there are two types of utilities which might be useful in the
> > situation where a ZFS pool has become corrupted. The first is a file system
> > checking utility (call it zfsck); the second is a dat
On Thu, Jun 12, 2008 at 07:28:23AM -0400, Brian Hechinger wrote:
> I think something else that might help is if ZFS were to boot, see that
> the volume it booted from is older than the other one, print a message
> to that effect and either halt the machine or issue a reboot pointing
> at the other
On Thu, Jun 12, 2008 at 07:29:08AM -0700, Rich Teer wrote:
> Hi all,
>
> Booting from a two-way mirrored metadevice created using SVM
> can be a bit risky, especially when one of the drives fails
> (not being able to form a quorum, the kernel will panic).
SVM doesn't panic in that situation. At b
On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote:
> However, some apps will probably be very unhappy if i/o takes 60 seconds
> to complete.
It's certainly not uncommon for that to occur in an NFS environment.
All of our applications seem to hang on just fine for minor planned and
un
On Tue, Jun 10, 2008 at 11:33:36AM -0700, Wyllys Ingersoll wrote:
> I'm running build 91 with ZFS boot. It seems that ZFS will not allow
> me to add an additional partition to the current root/boot pool
> because it is a bootable dataset. Is this a known issue that will be
> fixed or a permanent l
On Thu, Jun 05, 2008 at 11:13:01AM -0400, Luke Scharf wrote:
> So, can I build a working system without s2?
Build? I'm not so sure. The first label is going to have s2 by
default. You'd have to remove it later. I doubt there's language in
the jumpstart scripts to remove it then.
But yes, remo
On Wed, Jun 04, 2008 at 06:28:58PM -0400, Luke Scharf wrote:
>2. The number s2 is arbitrary. If it were s0, then there would at
> least be the beginning of the list. If it were s3, it would be at
> the end of a 2-bit list, which could be explained historically.
> If it were
On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard L. Hamilton wrote:
> How about SPARC - can it do zfs install+root yet, or if not, when?
> Just got a couple of nice 1TB SAS drives, and I think I'd prefer to
> have a mirrored pool where zfs owns the entire drives, if possible.
> (I'd also eventuall
On Wed, May 21, 2008 at 02:43:26PM -0400, Will Murnane wrote:
> So, my questions are:
> * Are there options I can set server- or client-side to make Solaris
> child mounts happen automatically (i.e., match the Linux behavior)?
I think these are known as "mirror-mounts" in Solaris. They first
inte
On Fri, May 16, 2008 at 07:29:31PM -0700, Paul B. Henson wrote:
> For ZFS root, is it required to have a partition and slices? Or can I just
> give it the whole disk and have it write an EFI label on it?
Last I heard, no support yet for EFI boot. I'm not sure if that's
something that's being acti
On Tue, May 13, 2008 at 04:33:29PM +0200, Simon Breden wrote:
>
> If multiple snapshots reference (own?) the same file, what's the quickest
> way to zap that file from all snapshots?
There is no way.
If you could do that, then they wouldn't really be "snapshots".
I'm not saying that the abili
On Tue, May 13, 2008 at 10:02:01AM -0700, Marc Glisse wrote:
> Can't you turn the snapshot into a clone (kind of an editable
> snapshot)? Or does the existence of a clone created from this snapshot
> prevent from removing the snapshot afterwards?
You can create a clone from the snapshot, but it do
On Mon, May 12, 2008 at 06:44:39PM +0200, Ralf Bertling wrote:
> ...you should be able to "simulate" a scrub on the latest data by
> using
> zfs send > /dev/null
> Since the primary purpose is to verify latent bugs and to have zfs
> auto-correct them, simply reading all data would be sufficient
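A rough sketch of that idea, with a hypothetical dataset name; reading
everything a snapshot references forces checksum verification of that data:

  zfs snapshot tank/data@verify          # 'tank/data' is hypothetical
  zfs send tank/data@verify > /dev/null
  # unlike 'zpool scrub tank', this does not touch blocks referenced only
  # by older snapshots, nor every redundant copy of each block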
On Sat, Apr 19, 2008 at 10:28:45AM -0700, Richard Elling wrote:
> Bob Friesenhahn wrote:
> > I don't agree that if swap is used that performance will necessarily
> > suck. If swap is available, Solaris will mount /tmp there, which
> > helps temporary file performance. It is best to look at syst
On Sat, Apr 19, 2008 at 12:16:11PM -0500, Bob Friesenhahn wrote:
> On Sat, 19 Apr 2008, Richard Elling wrote:
> >
> > Don't worry about swapping on CF. In most cases, you won't be
> > using the swap device for normal operations. You can use the
> > swap -l command to observe the swap device usage
On Thu, Apr 17, 2008 at 12:51:03PM -0500, Bob Friesenhahn wrote:
> Even though I am on a bunch of Sun propaganda lists, I have not yet
> spotted an announcement for Solaris 10U5 even though it is now
> available for download. Sun's formal web site is useless for
> comparing what is in different
On Fri, Mar 21, 2008 at 09:55:38AM -0700, Tim Wood wrote:
> Hi,
> I'm interested in the overhead of making, cloning, and destroying snapshots.
> It sounds like the cost for all of these is low, but how low??
>
> For example, could I make snapshots of a system every 5 seconds?
> every second? Mo
On Thu, Mar 20, 2008 at 03:12:01PM -0700, Walter Faleiro wrote:
> Layman's method would be to try and total the space it lists against each
> snapshot, but that's not how ZFS calculates it. So I go on deleting the
> snapshots, until the last one.
Yes. This has been discussed before. There doesn't
On Wed, Mar 12, 2008 at 11:41:21AM -0500, Scott Gaspard wrote:
> I have a customer who has implemented the following layout: As you can
> see, he has mostly raidz zvols but has one raidz2 in the same zpool.
> What are the implications here? Is this a bad thing to do? Please
> elaborate.
It's
On Tue, Feb 19, 2008 at 12:13:25PM -0800, Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
> > It may not be relevant, but I've seen ZFS add weird delays to things too. I
> > deleted a file to free up space, but when I checked no more space was
> > reported. A second or two later the space appear
On Wed, Feb 13, 2008 at 02:48:25PM -0800, Sam wrote:
> I saw some other people have a similar problem but reports claimed
> this was 'fixed in release 42' which is many months old, I'm running
> the latest version. I made a RAIDz2 of 8x500GB which should give me a
> 3TB pool:
How many sectors on
I notice that files within a snapshot show a different deviceID to stat
than the parent file does. But this is not true when mounted via NFS.
Is this a limitation of the NFS client, or just what the ZFS fileserver
is doing?
Will this change in the future? With NFS4 mirror mounts?
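For reference, the difference can be seen locally by comparing st_dev for a
file and its copy under .zfs/snapshot (hypothetical paths; assumes a stat(1)
that can print st_dev, e.g. GNU stat):

  # paths below are hypothetical; '%d' prints the device number
  stat -c '%d  %n' /tank/fs/file /tank/fs/.zfs/snapshot/snap1/file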
--
Darren Dun
On Wed, Jan 23, 2008 at 11:11:38AM -0800, Matt Newcombe wrote:
> Creating an empty zpool & zfs
> Creating a 6MB text file
> Taking a snapshot
>
> So far so good. The filesystem size is 6MB and the snapshot 0MB
>
> Now I edit the first 4 characters of the text file. I would have
> expected the siz
On Wed, Nov 14, 2007 at 09:40:59AM -0800, Boris Derzhavets wrote:
> I was able to create second Solaris partition by running
>
> #fdisk /dev/rdsk/c1t0d0p0
I'm afraid that won't do you much good.
Solaris only works with one "Solaris" partition at a time (on any one
disk). If you have free space
On Tue, Nov 13, 2007 at 07:33:20PM -0200, Toby Thain wrote:
> >>> Yup - that's exactly the kind of error that ZFS and
> >> WAFL do a
> >>> perhaps uniquely good job of catching.
> >>
> >> WAFL can't catch all: It's distantly isolated from
> >> the CPU end.
> >
> > WAFL will catch everything that ZF
On Sat, Nov 10, 2007 at 02:05:04PM -0200, Toby Thain wrote:
> > Yup - that's exactly the kind of error that ZFS and WAFL do a
> > perhaps uniquely good job of catching.
>
> WAFL can't catch all: It's distantly isolated from the CPU end.
How so? The checksumming method is different from ZFS, bu
On Tue, Oct 23, 2007 at 09:55:58AM -0700, Scott Laird wrote:
> I'm writing a couple scripts to automate backups and snapshots, and I'm
> finding myself cringing every time I call 'zfs destroy' to get rid of a
> snapshot, because a small typo could take out the original filesystem
> instead of a sna
On Mon, Oct 22, 2007 at 11:41:57AM -0700, Michael Schuster wrote:
> Mike DeMarco wrote:
> > Looking for a way to mount a zfs filesystem ontop of another zfs
> > filesystem without resorting to legacy mode.
>
> doesn't simply 'zfs set mountpoint=...' work for you?
Does this have boot-time problems