On Tue, Dec 15, 2009 at 2:31 AM, Craig S. Bell wrote:
> Mike, I believe that ZFS treats runs of zeros as holes in a sparse file,
> rather than as regular data. So they aren't really present to be counted for
> compressratio.
>
> http://blogs.sun.com/bonwick/entry/seek_hole_
s,
but that would seem to contribute to a higher compressratio rather
than a lower compressratio.
If I disable compression and enable dedup, does it count deduplicated
blocks of zeros toward the dedupratio?
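A quick way to test this empirically (a sketch, assuming a scratch pool named
tank and a build with dedup support; dataset and file names are made up):
# zfs create -o dedup=on -o compression=off tank/dtest
# dd if=/dev/zero of=/tank/dtest/zeros bs=128k count=1000
# sync
# zpool get dedupratio tank
# zdb -DD tank
The zdb -DD output shows the DDT statistics, so it should be apparent whether
the zero-filled blocks ever land in the dedup table.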
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-di
d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: Hitachi HTS5425 Revision: Serial No: 080804BB6300HCG Size:
160.04GB <160039305216 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
...
That /should/ be printed on the di
od to know that this can
be done.
On Wed, Dec 9, 2009 at 5:16 AM, Alexander J. Maidak wrote:
> On Tue, 2009-12-08 at 09:15 -0800, Mike wrote:
> > I had a system that I was testing zfs on using EMC Luns to create a
> striped zpool without using the multi-pathing software PowerPath. Of c
Alex, thanks for the info. You made my heart stop a little when reading about your
problem with PowerPath, but MPxIO seems like it might be a good option for me.
I'll try that as well, although I have not used it before. Thank you!
--
This message posted from opensolaris.org
___
Thanks Cindys for your input... I love your fear example too, but lucky for me
I have 10 years before I have to worry about that, and hopefully we'll all be in
hovering bumper cars by then.
It looks like I'm going to have to create another test system and try the
recommendations given here...and hop
I had a system that I was testing zfs on using EMC LUNs to create a striped
zpool without using the multi-pathing software PowerPath. Of course a storage
emergency came up, so I lent this storage out for temp storage and we're still
using it. I'd like to add PowerPath to take advantage of the multi
used as a starting point.
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_raidz.c
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I'm sure it's been asked a thousand times, but is there any prospect of being
able to remove a vdev from a pool anytime soon?
Thanks!
--
Mike
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinf
On Thu, Nov 26, 2009 at 8:53 PM, Toby Thain wrote:
>
> On 26-Nov-09, at 8:57 PM, Richard Elling wrote:
>
>> On Nov 26, 2009, at 1:20 PM, Toby Thain wrote:
>>>
>>> On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
>>>
>>>> On 2009-Nov-24 14:07:06
s.org/pipermail/zfs-discuss/2008-July/019762.html.
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/019762.html
Mike
On Thu, Jul 10, 2008 at 4:42 AM, Darren J Moffat wrote:
> I regularly create new zfs filesystems or snapshots and I find it
> annoying that I have to type the full d
t is small enough that it is somewhat likely that many
of those random reads will be served from cache. A dtrace analysis of
just how random the reads are would be interesting. I think that
hotspot.d from the DTrace Toolkit would be a good starting place.
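As a rough first pass (not hotspot.d itself), an io-provider one-liner like
this would show the distribution of block offsets per device; the aggregation
name is arbitrary:
# dtrace -n 'io:::start { @off[args[1]->dev_statname] = quantize(args[0]->b_blkno); }'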
--
Mike Gerdts
http://mgerdts.blogspo
On Tue, Nov 24, 2009 at 1:39 PM, Richard Elling
wrote:
> On Nov 24, 2009, at 11:31 AM, Mike Gerdts wrote:
>
>> On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling
>> wrote:
>>>
>>> Good question! Additional thoughts below...
>>>
>>> On Nov 24, 2
On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling
wrote:
> Good question! Additional thoughts below...
>
> On Nov 24, 2009, at 6:37 AM, Mike Gerdts wrote:
>
>> Suppose I have a storage server that runs ZFS, presumably providing
>> file (NFS) and/or block (iSCSI, FC) service
characteristics in
this area?
Is there less to be concerned about from a performance standpoint if
the workload is primarily read?
To maximize the efficacy of dedup, would it be best to pick a fixed
block size and match it between the layers of zfs?
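If you did want to pin the block sizes, the knobs would be recordsize for file
systems and volblocksize (settable only at creation time) for zvols. A sketch,
with made-up dataset names:
# zfs set recordsize=8k tank/nfsdata
# zfs create -V 100g -o volblocksize=8k tank/iscsivol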
--
Mike Gerdts
http://mgerdts.blogspot.com
pub" from agent
> Server refused to allocate pty
> Sun Microsystems Inc. SunOS 5.11 snv_127 November 2008
This looks like...
http://defect.opensolaris.org/bz/show_bug.cgi?id=12380
But that was supposed to be fixed in snv_126. Can you check
/etc/minor_perm for this entry:
Maybe to create snapshots "after the fact" as a part of some larger disaster
recovery effort.
(What did my pool/file-system look like at 10am?... Say 30-minutes before the
database barffed on itself...)
With some enhancements might this functionality be extendable into a "poor
man's CDP" offeri
;s. It becomes quite
significant if you have 5000 (e.g. on a ZFS-based file server).
Assuming that the deduped blocks stay deduped in the ARC, it means
that it is feasible for every block that is accessed with any frequency
to be in memory. Oh yeah, and you save a lot of disk space.
--
Mike Gerdts
ht
ording to page 35 of
http://www.slideshare.net/ramesh_r_nagappan/wirespeed-cryptographic-acceleration-for-soa-and-java-ee-security,
a T2 CPU can do 41 Gb/s of SHA256. The implication here is that this
keeps the MAU's busy but the rest of the core is still idle for things
like compression, TCP,
hms implemented in
software and sha256 implemented in hardware?
I've been waiting very patiently to see this code go in. Thank you
for all your hard work (and the work of those that helped too!).
--
Mike Gerdts
http://mgerdts.blogspot.com/
_
Anyone have any creative solutions for near-synchronous replication between
2 ZFS hosts?
Near-synchronous, meaning an RPO approaching zero (RPO ---> 0).
I realize performance will take a hit.
Thanks,
Mike
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
h
Once data resides within a pool, there should be an efficient method of moving
it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove.
Here's my scenario... When I originally created a 3TB pool, I didn't know the
best way to carve up the space, so I used a single, flat ZFS file s
Any reason why ZFS would not work on a FDE (Full Data Encryption) Hard drive?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Does anyone know when this will be available? Project says Q4 2009 but does not
give a build.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> Current Size: 4206 MB (arcsize)
> Target Size (Adaptive): 4207 MB (c)
That looks a lot like ~ 4 * 1024 MB. Is this a 64-bit capable system
that you have booted from a 32-bit kernel?
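A quick way to check:
$ isainfo -kv
A 32-bit boot reports something like "32-bit i386 kernel modules"; a 64-bit
boot reports "64-bit amd64 kernel modules" (or sparcv9 on SPARC).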
--
Mike Gerdts
http://mgerdts.blogspot.com/
__
host1# zoneadm -z zone1 detach
host1# zfs snapshot zonepool/zo...@migrate
host1# zfs send -r zonepool/zo...@migrate \
| ssh host2 zfs receive zones/zo...@migrate
host2# zonecfg -z zone1 create -a /zones/zone1
host2# zoneadm -z zone1 attach
host2# zoneadm -z zone1 boot
--
Mike Gerdts
http://m
On Wed, Sep 23, 2009 at 7:32 AM, bertram fukuda wrote:
> Thanks for the info Mike.
>
> Just so I'm clear. You suggest: 1) create a single zpool from my LUN, 2) create
> a single ZFS filesystem, 3) create 2 zones in the ZFS filesystem. Sound right?
Correct.
--
to it, so I will give each thing X/Y space. This is
because it is quite likely that someone will do the operation Y++ and
there are very few storage technologies that allow you to shrink the
amount of space allocated to each item.
--
Mike Gerdts
h
g/pipermail/fm-discuss/2009-June/000436.html
from June 10 suggests you are running firmware release (045C)8626. On
August 11 they released firmware revisions 8820, 8850, and 02G9,
depending on the drive model.
http://downloadcenter.intel.com/Detail_Desc.aspx?agr
On Wed, Sep 2, 2009 at 4:46 PM, Richard Elling wrote:
> Thanks Cindy!
>
> Mike, et.al.,
> I think the confusion is surrounding replacing an enterprise backup
> scheme with send-to-file. There is nothing wrong with send-to-file,
> it functions as designed. But it isn'
On Wed, Sep 2, 2009 at 4:06 PM, wrote:
> Hi Mike,
>
> I reviewed this doc and the only issue I have with it now is that it uses
> /var/tmp as an example of storing snapshots in "long-term storage"
> elsewhere.
One other point comes from zfs(1M):
The format of t
to do
things that will lead them to unsympathetic ears if things go poorly.
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Try a: zfs get -pH -o value creation
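For example, against a made-up dataset name:
$ zfs get -pH -o value creation tank/home
The -p flag prints the creation time as seconds since the epoch, which sorts
and compares cleanly in scripts.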
-- MikeE
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Baker
Sent: Friday, August 28, 2009 10:52 AM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Snapshot creati
ry for a project that we are working on
together. Unfortunately, his umask was messed up and I can't modify
the files in ~alice/proj1. Can you do a 'chmod -fR a+rw
/home/alice/proj1' for me? Thanks!" | mailx -s "permissions fix"
Helpdesk$ pfexec chmod -fR a+r
cessible. But
if the snapshots were created after the mount - they are not accessible
from inside of a zone.
So is this correct behavior, or is it a bug? Are there any workarounds?
Thanks in advance for all comments.
Regards,
Mike
___
zfs-discuss mailing list
zf
//opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#404589
http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#405835
http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#405308
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
anpages/vxfs/man1m/fcladm.html
This functionality would come in very handy. It would seem that it
isn't too big of a deal to identify the files that changed, as this
type of data is already presented via "zpool status -v" when
corruption is detected.
http://docs.sun.com/app/
in
the parallelism gaps as the longer-running ones finish.
3. That is, there is sometimes benefit in having many more jobs to run
than you have concurrent streams. This avoids having one save set
that finishes long after all the others because of poorly balanced
save sets.
--
Mike Gerdts
http
"sequential" I mean that one doesn't start until the other
finishes. There is certainly a better word, but it escapes me at the
moment.
At an average file size of 45 KB, that translates to about 3 MB/sec.
As you run two data streams, you are seeing throughput that looks
roughly like 2 * 3 MB/sec.
With 4 backup streams do you get something that looks like 4 * 3 MB/s?
How does that affect iostat output?
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
ase of creating snapshots, there is also this:
# mkdir .zfs/snapshot/foo
# zfs list | grep foo
rpool/ROOT/s10u7_...@foo 0 - 9.76G -
# rmdir .zfs/snapshot/foo
# zfs list | grep foo
I don't know of a similar shortcut for the create or clone subcommands.
--
Mike Gerdts
http://mg
On Sat, Aug 8, 2009 at 3:25 PM, Ed Spencer wrote:
>
> On Sat, 2009-08-08 at 15:12, Mike Gerdts wrote:
>
>> The DBA's that I know use files that are at least hundreds of
>> megabytes in size. Your problem is very different.
> Yes, definitely.
>
> I'm relat
peed
with SSD's than there is in read speeds. However, the NVRAM on the
NetApp that is backing your iSCSI LUNs is probably already giving you
most of this benefit (assuming low latency on network connections).
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
increase the performance of a zfs
> filesystem without causing any downtime to an Enterprise email system
> used by 30,000 intolerant people, when you don't really know what is
> causing the performance issues in the first place? (Yeah, it sucks to be
> me!)
Hopefully I've helped
s/2007-September/013233.html
Quite likely related to:
http://bugs.opensolaris.org/view_bug.do?bug_id=6684721
In other words, it was a buggy Sun component that didn't do the right
thing with cache flushes.
--
Mike Gerdts
http://mgerdts.blogspot.com/
lly?
It appears as though there is an upgrade path.
http://www.c0t0d0s0.org/archives/5750-Upgrade-of-a-X4500-to-a-X4540.html
However, the troll that you have to pay to follow that path demands a
hefty sum ($7995 list). Oh, and a reboot is required. :)
--
Mike Gerdts
http://m
roducts (eg VMware, Parallels, Virtual PC) have the
> same default behaviour as VirtualBox?
I've lost a pool due to LDoms doing the same. This bug seems to be related.
http://bugs.opensolaris.org/view_bug.do?bug_id=6684721
--
Mike Gerdts
http://mgerdts.blogspot.com/
_
u would
have enough to pay this credit card bill.
http://www.cnn.com/2009/US/07/15/quadrillion.dollar.glitch/index.html
> - Rich
>
> (Footnote: I ran ntpdate between starting the scrub and it finishing,
> and time rolled backwards. Nothing more exciting.)
And Visa is willing to waive
Use is subject to license terms.
Assembled 07 May 2009
# uname -srvp
SunOS 5.11 snv_111b sparc
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
ied via truss that each
read(2) was returning 128K. I thought I had seen excessive reads
there too, but now I can't reproduce that. Creating another fs with
recordsize=8k seems to make this behavior go away - things seem to be
working as designed. I'll go upd
On Mon, Jul 13, 2009 at 3:16 PM, Joerg
Schilling wrote:
> Bob Friesenhahn wrote:
>
>> On Mon, 13 Jul 2009, Mike Gerdts wrote:
>> >
>> > FWIW, I hit another bug if I turn off primarycache.
>> >
>> > http://defect.opensolaris.org/bz/show_bug.c
real    4m21.57s
user    0m9.72s
sys     0m36.30s
Doing second 'cpio -o > /dev/null'
4800025 blocks
real    4m21.56s
user    0m9.72s
sys     0m36.19s
Feel free to clean up with 'zfs destroy testpool/zfscachetest'.
This bug report contains more detail of the configuration. O
r trouble in the long term without
deduplication to handle ongoing operation.
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
his for smallish (8KB) directories.
>
>
> BTW: If you like to fix the software, you should know that Linux has at least
> one filesystem that returns the entries for "." and ".." out of order.
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
/lib/libc/port/gen/readdir.c
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libbc/libc/gen/common/readdir.c
The libbc version hasn't changed since the code became public. You
can get to an older libc variant of it by clicking on the history link
or using th
Thanks Darren,
I might request that it gets added.
That is, if anyone else thinks it might be a useful feature?
Regards,
Mike.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http
very tidy, thanks! :)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi,
I'd like to be able to select zfs filesystems, based on the value of properties.
Something like this:
zfs select mounted=yes
Is anyone aware if this feature might be available in the future?
If not, is there a clean way of achieving the same result?
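The closest I can get today is filtering "zfs list" output, which works but
isn't exactly elegant:
$ zfs list -H -o name,mounted | awk '$2 == "yes" { print $1 }'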
Thanks, Mike.
--
This message p
009 09:06:09 KST
> open(/dev/dtrace/helper)
>
> libc.so.1`open
> libCrun.so.1`0x7a50aed8
> libCrun.so.1`0x7a50b0f4
> ld.so.1`call_fini+0xd0
> ld.so.1`atexit_fini+0x80
> libc.so.1`_exithandle+0x48
> libc.so.1`exit+0x4
> oracle`_start+0x184
>
> ***
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
4682e64#l8.80>
*/
For some reason, the CR's listed above are not available through
bugs.opensolaris.org. However, at least 6833605 is available through
sunsolve if you have a support contract.
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
sible?
>
> Thanks...
I stumbled across this just now while performing a search for something else.
http://opensolaris.org/jive/thread.jspa?messageID=377018
I have no idea of the quality or correctness of this solution.
--
Mike Gerdts
http://mgerdts.blogspot.com/
_
16 16 WDC WD4000YS-01MPB1 400.1GB Pass Through
> ===
> GuiErrMsg<0x00>: Success.
> r...@nfs0009:~#
Perhaps you have changed the configuration of the array since the last
reconfiguration boot. I
On Wed, May 6, 2009 at 2:54 AM, wrote:
>
>>On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike wrote:
>>> PS: At one point the old JumpStart code was encumbered, and the
>>> community wasn't able to assist. I haven't looked at the next-gen
>>> jumpsta
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike wrote:
> PS: At one point the old JumpStart code was encumbered, and the
> community wasn't able to assist. I haven't looked at the next-gen
> jumpstart framework that was delivered as part of the OpenSolaris SPARC
> preview.
How about a generic "zfs options" field in the JumpStart profile?
(essentially an area where options can be specified that are all applied
to the boot-pool (with provisions to deal with a broken-out-var))
That should future proof things to some extent allowing for
compression=x, copies=x, blocksiz
quota locally to support ZFS.
>
> - river.
For the benefit of those finding this conversation in the archives,
this looks like it will be fixed in snv_114.
http://bugs.opensolaris.org/view_bug.do?bug_id=6824968
http://hg.genunix.org/onnv-gate.hg/rev/4f68f041ddcd
--
Mike Gerdts
http://
Create the zpool with a 'log' vdev for the ZIL and a 'cache' vdev for the L2ARC.
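For example (device names are placeholders):
# zpool create tank mirror c0t0d0 c0t1d0 log c0t2d0 cache c0t3d0
or, on an existing pool:
# zpool add tank log c0t2d0
# zpool add tank cache c0t3d0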
On Sat, Apr 25, 2009 at 11:13 PM, Richard Elling
wrote:
> Gary Mills wrote:
>
>> On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote:
>>
>>
>>> Gary Mills wrote:
>>>
>>>
Does anyone k
Wow... that's seriously cool!
Throw in some of this... http://www.nexenta.com/demos/auto-cdp.html and
now we're really getting somewhere...
Nice to see this level of innovation here. Anyone try to employ these
types of techniques on s10? I haven't used nexenta in the past, and I'm
not clear in
On Sun, Apr 19, 2009 at 10:58 AM, Gary Mills wrote:
> On Sat, Apr 18, 2009 at 11:45:54PM -0500, Mike Gerdts wrote:
>> Also, you may want to consider doing backups from the NetApp rather
>> than from the Solaris box.
>
> I've certainly recommended finding a differ
or due to some research
project that happens to be on the same spindles? What does the
network look like from the NetApp side?
Are the mail server and the NetApp attached to the same switch, or are
they at opposite ends of the campus? Is there something between them
th
nd of it is not overly complicated. Is now
too early to file the RFE? For some reason it feels like the person
on the other end of bugs.opensolaris.org will get confused by the
request to enhance a feature that doesn't yet exist.
--
Mike Gerdts
http://mgerdts.blogspot.com/
_
the
global zone and the dataset is delegated to a non-global zone, display
the UID rather than a possibly mistaken username.
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
ions are correct?
Thanks,
Mike.
-
Dr Mike Pacey, Email: m.pa...@lancaster.ac.uk
High Performance Systems Support, Phone: 01524 593543
Information Systems Services, Fax: 01524 594459
Lancaster University,
Lancaster LA1 4YW
__
Hello
> 1) Dual IO module option
> 2) Multipath support
> 3) Zone support [multi host connecting to same JBOD or same set of JBOD's
> connected in series. ]
This sounds interesting - where can I read more about connecting two
hosts to the same J4200 e
me.
Also, making the tools simpler - absolutely no UI, for instance. Does it
really need one to dump out things? :)
On Wed, Mar 11, 2009 at 7:15 PM, David Magda wrote:
>
> On Mar 11, 2009, at 21:59, mike wrote:
>
>> On Wed, Mar 11, 2009 at 6:53 PM, David Magda wrote:
>>>
>
Doesn't it require Java and X11?
On Wed, Mar 11, 2009 at 6:53 PM, David Magda wrote:
>
> On Mar 11, 2009, at 20:14, mike wrote:
>
>>
>> http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
>> http://www.intel.com/support/motherboards
2007) would be forward compatible...
On Wed, Mar 11, 2009 at 5:14 PM, mike wrote:
> http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
> http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm
>
> It's hard to use the HAL sometimes.
>
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm
http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm
It's hard to use the HAL sometimes.
I am trying to locate chipset info but having a hard time...
_
Brad Stone about
rolling up daily snapshots into monthly snapshots, which would roll up
into yearly snapshots...
On Mon, Mar 9, 2009 at 1:29 PM, Richard Elling wrote:
> mike wrote:
>>
>> Well, I could just use the same script to create my daily snapshot to
>> remove a snapshot
unted. Changes have
> been made to speed this up by reducing the number of mnttab lookups.
>
> And zfs list has been changed to no longer show snapshots by default.
> But it still might make sense to limit the number of snapshots saved:
> http://blogs.sun.com/timf/entry/zfs_automatic_s
I do a daily snapshot of two filesystems, and over the past few months
it's obviously grown to a bunch.
"zfs list" shows me all of those.
I can change it to use the "-t" flag to not show them, so that's good.
However, I'm worried about boot times and other things.
Will it get to a point with 100
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
wrote:
> On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote:
>> On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
>> wrote:
>> > On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
>> >> &
each database may be constrained to
a set of spindles so that each database can be replicated or copied
independent of the various others.
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
pp "snapmirror to tape"
- Even having a zfs(1M) option that could list the files that change
between snapshots could be very helpful to prevent file system crawls
and to avoid being fooled by bogus mtimes.
--
Mike Gerdts
http://mgerdts.blogspot.com/
__
/os/about/faq/licensing_faq/#patents.
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
atomic operation.
The snapshots are created together (all at once) or not created at
all. The benefit of atomic snapshot operations is that the snapshot
data is always taken at one consistent time, even across descendent
file systems.
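Concretely, assuming a pool named tank, this snapshots tank and every
descendent dataset at the same instant:
# zfs snapshot -r tank@nightly
# zfs list -t snapshot -r tank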
--
Mike Gerdts
http://mgerdts.blogspot.com/
ng as the list of zfs
mount points does not overflow the maximum command line length.
$ fsstat $(zfs list -H -o mountpoint | nawk '$1 !~ /^(\/|-|legacy)$/') 5
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@o
Does this all go away when BP-rewrite gets fully resolved/implemented?
Short of the pool being 100% full, it should allow a rebalancing
operation and possible LUN/device-size-shrink to match the new device
that is being inserted?
Thanks,
-- MikeE
-Original Message-
From: zfs-discuss-bo
Hi
> It would be also nice to be able to specify the zpool version during pool
> creation. E.g. If I have a newer machine and I want to move data to an older
> one, I should be able to specify the pool version, otherwise it's a one-way
> street.
zpool create -o vers
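For example (pool and device names are placeholders; "zpool upgrade -v" lists
the versions your build supports):
# zpool upgrade -v
# zpool create -o version=10 tank c0t0d0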
I'm not sure how many VIA chips support 64-bit, which seems to be
highly recommended.
Atoms seem to be more suitable.
On Mon, Jan 12, 2009 at 1:14 PM, Joe S wrote:
> In the last few weeks, I've seen a number of new NAS devices released
> from companies like HP, QNAP, VIA, Lacie, Buffalo, Iomega,
t performance... even if you want to get the list
of snapshots with no other properties "zfs list -oname -t snapshot -r
file/system" it still takes quite long time if there are hundreds of
snapshots, while "ls /file/system/.zfs/snapshot" returns immediately.
Can this also be im
ise
it's almost useless in practice.
Regards
Mike
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
.png
Try running
svcs -v zfs/auto-snapshot
The last few lines of the log files mentioned in the output from the
above command may provide helpful hints.
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discu
AKE=gmake
If you are on a 64-bit system you may want to compile a 64-bit version:
./configure --prefix=/usr/local --disable-debug CFLAGS="-O -m64" MAKE=gmake
5) gmake && gmake install
6) /usr/local/bin/mbuffer -V
Regards
Mike
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Well, I knew it wasn't available. I meant to ask: what is the status of
the feature's development? Not started, I presume.
Is there no timeline?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
In theory, with 2 80GB drives, you would always have a copy somewhere else.
But a single drive, no.
I guess I'm thinking in the optimal situation. With multiple drives, copies
are spread through the vdevs. I guess it would work better if we could define
that if copies=2 or more, that at leas
With ZFS, we can enable copies=[1,2,3] to configure how many copies of data
there are. With copies of 2 or more, in theory, an entire disk can have read
errors, and the zfs volume still works.
The unfortunate part here is that the redundancy lies in the volume, not the
pool vdev like with ra
I've seen discussions as far back as 2006 that say development is underway to
allow the addition and removal of disks in a raidz vdev to grow/shrink the
group. Meaning, if a 4x100GB raidz only used 150GB of space, one could do
'zpool remove tank c0t3d0' and data residing on c0t3d0 would be migra
On Tue, Dec 2, 2008 at 6:13 PM, Lori Alt <[EMAIL PROTECTED]> wrote:
> On 12/02/08 10:24, Mike Gerdts wrote:
> I follow you up to here. But why do the next steps?
>
> > zonecfg -z $zone
> > remove fs dir=/var
> >
> > zfs set mountpoint=/zones/$zone/root/var r
$zone
remove fs dir=/var
zfs set mountpoint=/zones/$zone/root/var rpool/zones/$zone/var
--
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> Boot from the other root drive, mount up the "bad"
> one at /mnt. Then:
>
> # mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad
>
>
>
> On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco
> <[EMAIL PROTECTED]> wrote:
> > My root dri