Hi,
We are currently using NetApp file clone option to clone multiple VMs on our FS.
The ZFS dedup feature is great storage-space-wise, but when we need to clone a
lot of VMs it just takes a lot of time.
Is there a way (or a planned way) to clone a file without going through the
process of actually
On Fri, Apr 16, 2010 at 10:54 AM, Edward Ned Harvey
wrote:
> there's a file or something you want to rollback, it's presently difficult
> to know how far back up the tree you need to go, to find the correct ".zfs"
> subdirectory, and then you need to figure out the name of the snapshots
There is
Thanks. That was it
-----Original Message-----
From: Brandon High [mailto:bh...@freaks.com]
Sent: Wednesday, 21 April 2010 6:57 AM
To: Ryan John
Cc: zfs-discuss
Subject: Re: [zfs-discuss] Double slash in mountpoint
On Tue, Apr 20, 2010 at 7:38 PM, Ryan John wrote:
> Anyone know how to fix it?
> I can't even do a zfs destroy
zfs unmount -a -f
-B
--
Brandon High : bh...@freaks.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi Timothy,
That didn't work either.
# zfs inherit mountpoint dataPool/SoftwareRepo
cannot unmount '/sw-repo1/dir2': Device busy
Regards
John
-----Original Message-----
From: Timothy Haley [mailto:tim.ha...@oracle.com]
Sent: Wednesday, 21 April 2010 5:52 AM
To: Ryan John
Cc: zfs-discuss@opensolaris.org
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nicolas Williams
>
> The .zfs/snapshot directory is most certainly available over NFS.
I'm not sure you've been following this thread. Nobody said .zfs/snapshot
wasn't available over NFS. It
Ryan John wrote:
Hi,
I've accidentally put a double slash in a mountpoint, and now can't change it.
# zfs list
...
dataPool/SoftwareRepo 529G 31.3T 73.1K /sw-repo1/
dataPool/SoftwareRepo/dir1 6.10G 31.3T 6.10G /sw-repo1//dir1
dataPool/SoftwareRepo/dir2 26.0G 31.3T 25.7G /sw-repo1//dir2
> From: cas...@holland.sun.com [mailto:cas...@holland.sun.com] On Behalf
> Of casper@sun.com
>
> >On Mon, 19 Apr 2010, Edward Ned Harvey wrote:
> >> Improbability assessment aside, suppose you use something like the
> DDRDrive
> >> X1 ... Which might be more like 4G instead of 32G ... Is it ev
> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
>
> On Mon, 19 Apr 2010, Edward Ned Harvey wrote:
> >
> > Just be aware that if *any* of your devices fail, all is lost.
> (Because
> > you've said it's configured as a nonredundant stripe.)
>
> The good news is that it is easy to conv
I'm doing a little research study on ZFS benchmarking and performance
profiling. Like most, I've had my favorite methods, but I'm
re-evaluating my choices and trying to be a bit more scientific than I
have in the past.
To that end, I'm curious if folks wouldn't mind sharing their work on
the sub
Hi,
I've accidentally put a double slash in a mountpoint, and now can't change it.
# zfs list
...
dataPool/SoftwareRepo 529G 31.3T 73.1K /sw-repo1/
dataPool/SoftwareRepo/dir1 6.10G 31.3T 6.10G /sw-repo1//dir1
dataPool/SoftwareRepo/dir2 26.0G 31.3T 25.7G /sw-repo1//dir2
...
#
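For the archive: the stray trailing slash can be handled in plain shell before touching the pool. A minimal sketch, assuming the dataset names shown above; the zfs lines are illustrative, not a tested procedure:

```shell
# The trailing slash on the parent's mountpoint is what propagates "//" into
# the children. Strip it with parameter expansion before handing it back:
bad="/sw-repo1/"
fixed="${bad%/}"   # drop one trailing "/"
echo "$fixed"      # prints /sw-repo1

# Illustrative zfs commands (require the actual pool; untested sketch):
#   zfs unmount -a -f                                  # force-unmount the busy datasets
#   zfs set mountpoint="$fixed" dataPool/SoftwareRepo  # reset without the trailing slash
#   zfs mount -a                                       # children remount as /sw-repo1/dir1, ...
```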
I have a storage server with snv_134 installed. It has four ZFS file systems
shared over iSCSI that are mounted as ZFS volumes on a Sun v480.
Everything has been working great for about a month, and all of a sudden the
v480 has timeout errors when trying to connect to the iscsi volumes on the
On Tue, Apr 20, 2010 at 12:55:10PM -0600, Cindy Swearingen wrote:
> You can use the OpenSolaris beadm command to migrate a ZFS BE over
> to another root pool, but you will also need to perform some manual
> migration steps, such as
> - copy over your other rpool datasets
> - recreate swap and dump
I believe Ned's question and the answers given have more far-reaching
consequences than have been discussed so far.
When I read this thread I thought there was an easy solution to
deleting files from a snapshot by using clones instead. Clones are a
writable copy so you should be able to delet
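As a concrete sketch of that clone approach (pool, snapshot, and file names are all hypothetical, and promoting is only needed if the edited copy should replace the original):

```shell
# Make a writable copy of the snapshot and delete the unwanted file in it:
zfs clone tank/home@monday tank/home_edit
rm /tank/home_edit/unwanted-file

# Optionally make the clone independent of the original and swap names:
zfs promote tank/home_edit
zfs rename tank/home tank/home_old
zfs rename tank/home_edit tank/home
```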
Ken,
The sharpest parts of my remarks weren't directed your way, and I
regret if that wasn't as clear as I had thought. For clarification: I
was referring to the thread as starting with what you forwarded by URL
(which was sent from a gmail address to the freebsd list), and my
objection w
Geoff Nordli wrote:
> With our particular use case we are going to do a "save
> state" on their
> virtual machines, which is going to write 100-400 MB
> per VM via CIFS or
> NFS, then we take a snapshot of the volume, which
> guarantees we get a
> consistent copy of their VM.
maybe you left out
On Tue, 2010-04-20 at 18:51 +0100, Bayard Bell wrote:
> This thread starts with someone who doesn't claim to have any
> authoritative information or attempt to cite any sources using a gmail
> account to post to a mailgroup. Now people turn around and say that
Whoa! By way of clarification:
1)
Brandon,
You can use the OpenSolaris beadm command to migrate a ZFS BE over
to another root pool, but you will also need to perform some manual
migration steps, such as
- copy over your other rpool datasets
- recreate swap and dump devices
- install bootblocks
- update BIOS and GRUB entries to bo
Nicolas Williams wrote:
On Tue, Apr 20, 2010 at 04:28:02PM +, A Darren Dunham wrote:
On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
"zfs list -t snapshot" lists in time order.
Good to know. I'll keep that in mind for my "zfs send" scripts but it's not
relevant for the
This thread starts with someone who doesn't claim to have any
authoritative information or attempt to cite any sources using a gmail
account to post to a mailgroup. Now people turn around and say that
they doubt the sourcing on this, but looking at the archives of this
list, there are a num
On Tue, 20 Apr 2010, Don Turnbull wrote:
Not to be a conspiracy nut but anyone anywhere could have registered that
gmail account and supplied that answer. It would be a lot more believable
from Mr Kay's Oracle or Sun account.
It is true that gmail accounts are just as free and untrustworthy
On Tue, Apr 20 at 11:41, Don Turnbull wrote:
Not to be a conspiracy nut but anyone anywhere could have registered
that gmail account and supplied that answer. It would be a lot more
believable from Mr Kay's Oracle or Sun account.
+1
Glad I wasn't the only one who noticed.
--
Eric D. Mudama
On Tue, Apr 20, 2010 at 04:28:02PM +, A Darren Dunham wrote:
> On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
> > > "zfs list -t snapshot" lists in time order.
> >
> > Good to know. I'll keep that in mind for my "zfs send" scripts but it's not
> > relevant for the case at
Not to be a conspiracy nut but anyone anywhere could have registered
that gmail account and supplied that answer. It would be a lot more
believable from Mr Kay's Oracle or Sun account.
On 4/20/2010 9:40 AM, Ken Gunderson wrote:
On Tue, 2010-04-20 at 13:57 +0100, Dominic Kay wrote:
Oracle
From: Richard Elling [mailto:richard.ell...@gmail.com]
>Sent: Monday, April 19, 2010 10:17 PM
>
>Hi Geoff,
>The Canucks have already won their last game of the season :-)
>more below...
Hi Richard,
I didn't watch the game last night, but obviously Vancouver better pick up
their socks or they wi
Hi All.
I have a pool (3 disks, raidz1). I recabled the disks, and now some of the
disks in the pool are not available (cannot open). Reverting the cabling is
not possible. Can I recover data from this pool?
Thanks.
On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
> > "zfs list -t snapshot" lists in time order.
>
> Good to know. I'll keep that in mind for my "zfs send" scripts but it's not
> relevant for the case at hand. Because "zfs list" isn't available on the
> NFS client, where the us
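Where zfs list is available (i.e., on the server rather than the NFS client), the time ordering can be made explicit in a send script rather than relied on implicitly. A sketch with a hypothetical dataset tank/data and backup host; untested:

```shell
# Oldest-first snapshot listing, sorted explicitly by the creation property:
zfs list -H -t snapshot -o name -s creation -r tank/data

# Incremental send of the newest snapshot relative to the one before it:
prev=$(zfs list -H -t snapshot -o name -s creation -r tank/data | tail -2 | head -1)
last=$(zfs list -H -t snapshot -o name -s creation -r tank/data | tail -1)
zfs send -i "$prev" "$last" | ssh backuphost zfs recv -du backup/data
```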
I did the same experiment in a VMware guest (SLES10 x64). The archive was
stored on the vdisk and the untarring went to the same vdisk.
The storage backend is a Sun system with 64 GB RAM, 2 quad-core CPUs, 24 SAS
disks with 450 GB each, 4 vdevs with 6 disks as RAIDZ2, and an Intel X25-E as
log device (c2t1d0).
A Stor
On Tue, 2010-04-20 at 05:48 -0700, Tonmaus wrote:
> Don't copy the netiquette issue you are seeing, as I am talking about nothing
> but an issue in a post on this forum. Why should I contact the OP off record
> about this?
> There is no need to read intentions either. I just made clear once more
1    -> kmem_free
1      -> kmem_cache_free
1      <- kmem_cache_free        0
1    <- kmem_free                0
1  <- dmu_tx_commit              0
1  -> txg_wait_synced
1  ->
I looked through that distributor page already and none of the ones I visited
listed the IOPS SSD's- they all listed DRAM and other memory from STEC- but not
the SSD's.
I'm not looking to get the same number of IOPS out of 15k RPM drives. I'm
looking for an appropriate number of IOPS for my env
Yes, I apologize. I didn't notice you were running the OpenSolaris
release. What I outlined below would work on a Solaris 10 system.
I wonder if beadm supports a similar migration. I will find out
and let you know.
Thanks,
Cindy
On 04/19/10 17:22, Brandon High wrote:
On Mon, Apr 19, 2010 at 7
Good news for Nexenta and OpenSolaris community in general:
http://www.nexenta.com/corp/blog/2010/04/06/bill-moore-joins-nexenta-advisory-board/
Nexenta is recruiting talent and hiring OpenSolaris kernel/API engineers. If
you are in the SF Bay Area and you think you are qualified, send your resume
by f
On Mon, April 19, 2010 23:05, Don wrote:
>> A STEC Zeus IOPS SSD (45K IOPS) will behave quite differently than an
>> Intel X-25E (~3.3K IOPS).
>
> Where can you even get the Zeus drives? I thought they were only in the
> OEM market and last time I checked they were ludicrously expensive. I'm
> look
On Tue, 2010-04-20 at 13:57 +0100, Dominic Kay wrote:
> Oracle has no plan to move from ZFS as the principal storage platform
> for Solaris 10 and OpenSolaris. It remains key to both data management
> and to the OS infrastructure such as root/boot, install and upgrade.
> Thanks
>
> Dominic Kay
Khyron,
Finally, Michael S. made the best recommendation...talk to your sales
rep if you're
a paying customer.
... but don't expect any commitments or generic answer from them at the
moment.
I do, however, applaud your quoting of Mr. Harman in your .sig ;-)
Regards... Sean.
Oracle has no plan to move from ZFS as the principal storage platform for
Solaris 10 and OpenSolaris. It remains key to both data management and to
the OS infrastructure such as root/boot, install and upgrade.
Thanks
Dominic Kay
Product Manager, Filesystems
Oracle
2010/4/20 Khyron
> This is how
Don't copy the netiquette issue you are seeing, as I am talking about nothing
but an issue in a post on this forum. Why should I contact the OP off record
about this?
There is no need to read intentions either. I just made clear once more what is
obvious from board metadata anyhow.
Besides that,
Cindy Swearingen writes:
> Hi Harry,
>
> Both du and df are pre-ZFS commands and don't really understand ZFS
> space issues, which are described in the ZFS FAQ here:
>
> http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
>
> Why does du(1) report different file sizes for ZFS and UFS? Why
>
> On Mon, Apr 19, 2010 at 1:42 AM, Ian Garbutt
> <ian.g.garb...@newcastle.gov.uk> wrote:
> Having looked through the forum I gather that you
> cannot just add an addition
Tonmaus,
> you talking about and to whom were you
> responding?
My intention was a response to the OP, which I guess from what I am
seeing in the jive forum, happened as well. Indeed, my concern was the
broken link in the first post which would be simple to fix if
intended. That not being the cas
> you talking about and to whom were you
> responding?
My intention was a response to the OP, which I guess from what I am seeing in
the jive forum, happened as well. Indeed, my concern was the broken link in the
first post which would be simple to fix if intended. That not being the case
incre
Harry Putnam writes:
> I'm seeing a really big (too big to be excused lightly) difference with
> the 2 zfs native methods zpool and rpool
Typo alert: The above line should have read:
the 2 zfs native methods ZPOOL list and ZFS list
> compared to 2 native unix methods, du and
I have no idea who you're talking to, but presumably you mean this link:
http://lists.freebsd.org/pipermail/freebsd-questions/2010-April/215269.html
Worked fine for me. I didn't post it. I'm not the OP on this thread or on
the FreeBSD thread. So what "broken link" are you talking about and to
OK, thanks for the fast info. That sounds really awesome. I am glad I tried
out ZFS, so I no longer have to worry about these issues, and the fact that I
can go back and forth between stripe and mirror is amazing. Money was short,
so only 2 disks had been put in, and since the data is not that w
Why don't you just fix the apparently broken link to your source, then?
Regards,
Tonmaus
Thank you very much for your help! I wasn't aware of those options.
...sending end is running rsync < 3.0 (Ubuntu 8.04 LTS), crossing my fingers,
hoping it'll work.
I have certainly moved a root pool from one disk to another, with the
same basic process, ie:
- fuss with fdisk and SMI labels (sigh)
- zpool create
- snapshot, send and recv
- installgrub
- swap disks
I looked over the "root pool recovery" section in the Best Practices guide
at the time,
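Filled out with hypothetical names (old pool rpool, new disk c1t1d0s0, BE name mybe), those steps might look like the following; a sketch under those assumptions, not a verified recovery procedure:

```shell
# After fdisk/SMI-labeling the new disk (the "fuss" step):
zpool create rpool2 c1t1d0s0

# Snapshot the whole root pool recursively and replicate it:
zfs snapshot -r rpool@move
zfs send -R rpool@move | zfs recv -Fdu rpool2

# Make the new pool bootable (x86; SPARC would use installboot instead):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
zpool set bootfs=rpool2/ROOT/mybe rpool2   # "mybe" is a hypothetical BE name

# Then swap the disks and adjust the BIOS boot order.
```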
>On Mon, 19 Apr 2010, Edward Ned Harvey wrote:
>> Improbability assessment aside, suppose you use something like the DDRDrive
>> X1 ... Which might be more like 4G instead of 32G ... Is it even physically
>> possible to write 4G to any device in less than 10 seconds? Remember, to
>> achieve worst