Sure, but that will put me back into the original situation.
-Scott
That is likely it. I created the volume using 2009.06, then later upgraded to
124. I just now created a new zvol, connected it to my Windows server,
formatted it, and added some data. Then I snapped the zvol, cloned the snap, and
used 'pfexec sbdadm create-lu'. When presented to the windows server,
I plan on filing a support request with Sun, and will try to post back with any
results.
Scott
I don't think adding an SSD mirror to an existing pool will do much for
performance. Some of your data will surely go to those SSDs, but I don't think
Solaris will know they are SSDs and move blocks in and out according to
usage patterns to give you an all-around boost. They will just be used
You might have to force the import with -f.
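Something like this, with your pool name in place of 'tank':
zpool import -f tank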
Scott
>To be clear, you can do what you want with the following items (besides
>your server):
>(1) OpenSolaris LiveCD
>(1) 8GB USB Flash drive
>As many tapes as you need to store your data pools on.
>Make sure the USB drive has a saved stream from your rpool. It should
>also have a downloaded copy of
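(For the archives: saving that rpool stream is roughly the following, with
snapshot name and USB mount point as placeholders:
pfexec zfs snapshot -r rpool@backup
pfexec zfs send -R rpool@backup > /media/usb/rpool.stream )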
Greg, I am using NetBackup 6.5.3.1 (7.x is out) with fine results. Nice and
fast.
-Scott
This is what I used:
http://wikis.sun.com/display/OpenSolarisInfo200906/How+to+Configure+iSCSI+Target+Ports
I distilled that to:
Disable the old, enable the new (COMSTAR):
* sudo svcadm disable iscsitgt
* sudo svcadm enable stmf
Then four steps (using my zfs/zpool info - substitute for yours):
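Roughly, the four steps follow the usual COMSTAR sequence (my names and sizes
below are placeholders - see the wiki page above for the details):
* pfexec zfs create -V 100g data01/vol01
* pfexec sbdadm create-lu /dev/zvol/rdsk/data01/vol01
* pfexec stmfadm add-view <GUID reported by create-lu>
* pfexec itadm create-target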
It is hard, as you note, to recommend a box without knowing the load. How many
Linux boxes are you talking about?
I think having a lot of space for your L2ARC is a great idea.
Will you mirror your SLOG, or load balance them? I ask because perhaps one will
be enough, IO-wise. My box has one SLOG
>I was planning to mirror them - mainly in the hope that I could hot swap a new
>one in the event that an existing one started to degrade. I suppose I could
>start with one of each and convert to a mirror later although the prospect of
>losing either disk fills me with dread.
You do not need to
>Apple users have different expectations regarding data loss than Solaris and
>Linux users do.
Come on, no Apple user bashing. Not true, not fair.
Scott
You will get much better random IO with mirrors, but better reliability while a
disk is failed with raidz2. Six sets of mirrors are fine for a pool. From what I
have read, a hot spare can be shared across pools. I think the correct term
would be "load balanced mirrors", vs RAID 10.
What kind of perf
> One of the reasons I am investigating solaris for
> this is sparse volumes and dedupe could really help
> here. Currently we use direct attached storage on
> the dom0s and allocate an LVM to the domU on
> creation. Just like your example above, we have lots
> of those "80G to start with please"
I have used build 124 in this capacity, although I did zero tuning. I had about
4T of data on a single 5T iSCSI volume over gigabit. The Windows server was a
VM, and the OpenSolaris box is a Dell 2950 with 16G of RAM, an X25-E for the
ZIL, and no L2ARC cache device. I used COMSTAR.
It was being used as
At the time we had it set up as 3 x 5-disk raidz, plus a hot spare. These 16
disks were in a SAS cabinet, and the slog was on the server itself. We are
now running 2 x 7-disk raidz2 plus a hot spare and slog, all inside the cabinet.
Since the disks are 1.5T, I was concerned about resilver times fo
My use case for OpenSolaris is as a storage server for a VM environment (we
also use EqualLogic, and soon an EMC CX4-120). To that end, I use iometer
within a VM, simulating my VM IO activity, with some balance given to easy
benchmarking. We have about 110 VMs across eight ESX hosts. Here is wha
VMware will properly handle sharing a single iSCSI volume across multiple ESX
hosts. We have six ESX hosts sharing the same iSCSI volumes - no problems.
-Scott
iSCSI writes require a sync to disk for every write. SMB writes get cached in
memory, and are therefore much faster.
I am not sure why it is so slow for reads.
Have you tried COMSTAR iSCSI? I have read in these forums that it is faster.
-Scott
You might bring over all of your old data and snaps, then clone that into a new
volume. Bring your recent stuff into the clone. Since the clone only updates
blocks that are different than the underlying snap, you may see a significant
storage savings.
Two clones could even be made - one for you
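A rough sketch of the commands (dataset names are examples only):
zfs snapshot data/old@moved
zfs clone data/old@moved data/new
New writes land in the clone, while unchanged blocks keep referencing the snapshot.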
Price? I cannot find it.
Look again at how XenServer does storage. I think you will find it already has
a solution, both for iSCSI and NFS.
Reaching into the dusty regions of my brain, I seem to recall that, since RAIDZ
does not work like a traditional RAID 5 (particularly because of its variably
sized stripes), the data may not hit all of the disks, but will always be
redundant.
I apologize for not having a reference for this a
If these files are deduped, and there is not a lot of RAM on the machine, it
can take a long, long time to work through the dedupe portion. I don't know
enough to know if that is what you are experiencing, but it could be the
problem.
How much RAM do you have?
Scott
Another data point - I used three 15K disks striped using my RAID controller as
a slog for the zil, and performance went down. I had three raidz sata vdevs
holding the data, and my load was VMs, i.e. a fair amount of small, random IO
(60% random, 50% write, ~16k in size).
Scott
Are there other file systems underneath daten/backups that have snapshots?
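You can check with:
zfs list -r -t snapshot daten/backups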
This has been a very enlightening thread for me, and explains a lot of the
performance data I have collected on both 2008.11 and 2009.06 which mirrors the
experiences here. Thanks to you all.
NFS perf tuning, here I come...
-Scott
You can use a separate SSD ZIL.
Note - this has a mini PCIe interface, not PCIe.
I had the 64GB version in a Dell Mini 9. While it was great for its small
size, low power and low heat characteristics (no fan on the Mini 9!), it was
only faster than the striped SATA drives in my Mac Pro when it came to random
reads. Everythin
> ZFS absolutely observes synchronous write requests (e.g. by NFS or a
> database). The synchronous write requests do not benefit from the
> long write aggregation delay so the result may not be written as
> ideally as ordinary write requests. Recently zfs has added support
> for using a SSD as
Yes! That would be icing on the cake.
My EqualLogic arrays do not disconnect when resizing volumes.
When I need to resize, on the Windows side I open the iSCSI control panel, and
get ready to click the 'logon' button. I then resize the volume on the
OpenSolaris box, and immediately after that is complete, on the Windows side,
re-lo
You can try:
zpool iostat -v pool_name 1
This will show you IO on each vdev at one-second intervals. Perhaps you will
see different IO behavior on any suspect drive.
-Scott
Roman, are you saying you want to install OpenSolaris on your old servers, or
make the servers look like an external JBOD array, that another server will
then connect to?
As I understand it, when you expand a pool, the data do not automatically
migrate to the other disks. You will have to rewrite the data somehow, usually
a backup/restore.
-Scott
You are completely off your rocker :)
No, just kidding. Assuming the virtual front-end servers are running on
different hosts, and you are doing some sort of raid, you should be fine.
Performance may be poor due to the inexpensive targets on the back end, but you
probably know that. A while bac
Roch Bourbonnais Wrote:
""100% random writes produce around 200 IOPS with a 4-6 second pause
around every 10 seconds. "
This indicates that the bandwidth you're able to transfer
through the protocol is about 50% greater than the bandwidth
the pool can offer to ZFS. Since, this is is not sustainabl
I am still not buying it :) I need to research this to satisfy myself.
I can understand that the writes come from memory to disk during a txg write
for async, and that is the behavior I see in testing.
But for sync, data must be committed, and an SSD ZIL makes that faster because
you are writing
This sounds like the same behavior as opensolaris 2009.06. I had several disks
recently go UNAVAIL, and the spares did not take over. But as soon as I
physically removed a disk, the spare started replacing the removed disk. It
seems UNAVAIL is not the same as the disk not being there. I wish the
So what happens during the txg commit?
For example, if the ZIL is a separate device, SSD for this example, does it not
work like:
1. A sync operation commits the data to the SSD
2. A txg commit happens, and the data from the SSD are written to the spinning
disk
So this is two writes, correct?
Doh! I knew that, but then forgot...
So, for the case of no separate device for the ZIL, the ZIL lives on the disk
pool. In which case, the data are written to the pool twice during a sync:
1. To the ZIL (on disk)
2. From RAM to disk during the txg commit
If this is correct (and my history in this thread
So, I just re-read the thread, and you can forget my last post. I had thought
the argument was that the data were not being written to disk twice (assuming
no separate device for the ZIL), but it was just explaining to me that the data
are not read from the ZIL to disk, but rather from memory to
Yes, I was getting confused. Thanks to you (and everyone else) for clarifying.
Sync or async, I see the txg flushing to disk starve read IO.
Scott
I only see the blocking while load testing, not during regular usage, so I am
not so worried. I will try the kernel settings to see if that helps if/when I
see the issue in production.
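For reference, the sort of /etc/system entries I mean - treat these as a
sketch, since the tunable names and defaults vary by build:
set zfs:zfs_txg_timeout = 5
set zfs:zfs_write_limit_override = 0x10000000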
For what it is worth, here is the pattern I see when load testing NFS (iometer,
60% random, 65% read, 8k chun
True, this setup is not designed for high random I/O, but rather lots of
storage with fair performance. This box is for our dev/test backend storage.
Our production VI runs at 500-700 IOPS (80+ VMs, production plus dev/test)
on average, so for our development VI, we are expecting half of tha
I think in theory the ZIL/L2ARC should make things nice and fast if your
workload includes sync requests (database, iscsi, nfs, etc.), regardless of the
backend disks. But the only sure way to know is test with your work load.
-Scott
How can I verify if the ZIL has been disabled or not?
I am trying to see how much benefit I might get by using an SSD as a ZIL. I
disabled the ZIL via the ZFS Evil Tuning Guide:
echo zil_disable/W0t1 | mdb -kw
and then rebooted. However, I do not see any benefits for my NFS workload.
Thanks,
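P.S. I believe the current value can be read back with mdb, e.g.:
echo zil_disable/D | mdb -k
with 1 meaning disabled - assuming the zil_disable variable from the tuning guide.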
Thank you both, much appreciated.
I ended up having to put the flag into /etc/system. When I disabled the ZIL and
umount/mounted without a reboot, my ESX host would not see the NFS export, nor
could I create a new NFS connection from my ESX host. I could get into the file
system from the host i
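For reference, the /etc/system line is of this form (per the Evil Tuning Guide):
set zfs:zil_disable = 1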
> zfs share -a
Ah-ha! Thanks.
FYI, I got between 2.5x and 10x improvement in performance, depending on the
test. So tempting :)
-Scott
It is more cost, but a WAN Accelerator (Cisco WAAS, Riverbed, etc.) would be a
big help.
Scott
Requires a login...
I have an Intel X25-E 32G in the mail (actually the Kingston version), and
wanted to get a sanity check before I start.
System:
Dell 2950
16G RAM
16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no extra drive
slots, a single zpool.
snv_124, but with my zpool still running at the
Thanks Frédéric, that is a very interesting read.
So my options as I see them now:
1. Keep the X25-E, and disable the cache. Performance should still be improved,
but not by a *whole* lot, right? I will google for an expectation, but if
anyone knows off the top of their head, I would be app
Ed, your comment:
>If solaris is able to install at all, I would have to acknowledge, I
>have to shutdown anytime I need to change the Perc configuration, including
>replacing failed disks.
Replacing failed disks is easy when PERC is doing the RAID. Just remove the
failed drive and replace with
I don't think so. But, you can clone at the ZFS level, and then just use the
vmdk(s) that you need. As long as you don't muck about with the other stuff in
the clone, the space usage should be the same.
-Scott
Hi Jeremy,
I had a loosely similar problem with my 2009.06 box. In my case (which may not
be yours), working with support we found a bug that was causing my pool to
hang. I also got spurious errors when I did a scrub (3 x 5-disk raidz). I am
using the same LSI controller. A sure fire way to k
Hi all,
I received my SSD, and wanted to test it out using fake zpools with files as
backing stores before attaching it to my production pool. However, when I
exported the test pool and imported, I get an error. Here is what I did:
I created a file to use as a backing store for my new pool:
mkf
Excellent! That worked just fine. Thank you Victor.
-Scott
I am sorry that I don't have any links, but here is what I observe on my
system. dd does not do sync writes, so the ZIL is not used. iSCSI traffic does
sync writes (as of 2009.06, but not 2008.05), so if you repeat your test using
an iSCSI target from your system, you should see log activity. Sa
I second the use of zilstat - very useful, especially if you don't want to mess
around with adding a log device and then having to destroy the pool if you
don't want the log device any longer.
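Running it is simple - something like this for one-second samples, ten times
(check the script's usage notes for the exact options):
./zilstat.ksh 1 10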
On Nov 18, 2009, at 2:20 AM, Dushyanth wrote:
> Just to clarify : Does iSCSI traffic from a Solaris iS
#1. It may help to use 15k disks as the ZIL. When I tested using three 15k
disks striped as my ZIL, it made my workload go slower, even though it seems
like it should have been faster. My suggestion is to test it out, and see if it
helps.
#3. You may get good performance with an inexpensive SS
If the 7310s can meet your performance expectations, they sound much better
than a pair of x4540s. Auto-fail over, SSD performance (although these can be
added to the 4540s), ease of management, and a great front end.
I haven't seen if you can use your backup software with the 7310s, but from
It does 'just work', however you may have some file and/or file system
corruption if the snapshot was taken at the moment that your Mac is updating
some files. So use the time slider function and take a lot of snaps. :)
Yes, a coworker lost a second disk during a rebuild of a raid5 and lost all
data. I have not had a failure, however when migrating EqualLogic arrays in and
out of pools, I lost a disk on an array. No data loss, but it concerns me
because during the moves, you are essentially reading and writing
I think Y is such a variable and complex number it would be difficult to give a
rule of thumb, other than to 'test with your workload'.
My server, having three five-disk raidzs (striped) and an Intel X25-E as a ZIL,
can fill my two GbE pipes over NFS (~200MBps) during mostly sequential
It looks like there is not a free slot for a hot spare? If that is the case,
then it is one more factor to push towards raidz2, as you will need time to
remove the failed disk and insert a new one. During that time you don't want to
be left unprotected.
Link aggregation can use different algorithms to load balance. Using L4 (IP
plus originating port, I think), a single client computer using the same
protocol (NFS) but different originating ports has allowed me to saturate both
NICs in my LAG. So yes, you just need more than one 'conversation'.
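For reference, setting the L4 policy with dladm looks something like this
(link and aggregation names are examples):
dladm create-aggr -P L4 -l e1000g0 -l e1000g1 aggr1
or on an existing aggregation:
dladm modify-aggr -P L4 aggr1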
I have a single zfs volume, shared out using COMSTAR and connected to a Windows
VM. I am taking snapshots of the volume regularly. I now want to mount a
previous snapshot, but when I go through the process, Windows sees the new
volume, but thinks it is blank and wants to initialize it. Any ideas
Thanks Dan.
When I try the clone then import:
pfexec zfs clone
data01/san/gallardo/g...@zfs-auto-snap:monthly-2009-12-01-00:00
data01/san/gallardo/g-testandlab
pfexec sbdadm import-lu /dev/zvol/rdsk/data01/san/gallardo/g-testandlab
The sbdadm import-lu gives me:
sbdadm: guid in use
which mak
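If the problem is the clone sharing the source LU's GUID, then creating a fresh
LU over the clone might be the way around it, since create-lu should assign a
new GUID - a guess on my part:
pfexec sbdadm create-lu /dev/zvol/rdsk/data01/san/gallardo/g-testandlab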
The SATA drive will be your bottleneck, and you will lose any speed advantages
of the SAS drives, especially using 3 vdevs on a single SATA disk.
I am with Richard, figure out what performance you need, and build accordingly.
My testing with 2008.11 iSCSI vs NFS was that iSCSI was about 2x faster. I used
a 3-stripe, 5-disk raidz (15 1.5TB SATA disks). I just used the default ZIL, no
SSD or similar to make NFS faster.
I think (don't quote me) that ESX can only mount 64 iSCSI targets, so you
aren't much better off. But
Both iSCSI and NFS are slow? I would expect NFS to be slow, but in my iSCSI
testing with OpenSolaris 2008.11, performance was reasonable, about 2x NFS.
Setup: Dell 2950 with a SAS HBA and SATA 3x5 raidz (15 disks, no separate ZIL),
iSCSI using vmware ESXi 3.5 software initiator.
Scott
So how are folks getting around the NFS speed hit? Using SSD or battery backed
RAM ZILs?
Regarding limited NFS mounts, underneath a single NFS mount, would it work to:
* Create a new VM
* Remove the VM from inventory
* Create a new ZFS file system underneath the original
* Copy the VM to that fi
Generally, yes. Test it with your workload and see how it works out for you.
-Scott
Oh boy, there are a lot of things here :)
How many people in your office will be using these services? If it is just 50
people or so, you would probably be fine with just about any configuration; 500
or 5000 would be a different story, and you would have to be much more careful.
If possible, y
For ~100 people, I like Bob's answer. RAID 10 will get you lots of speed.
Perhaps RAID 50 would be just fine for you as well and give you more space, but
without measuring, you won't be sure. Don't forget a hot spare (or two)!
Your MySQL database - will that generate a lot of IO?
Also, to ensur
See this thread for information on load testing for vmware:
http://communities.vmware.com/thread/73745?tstart=0&start=0
Within the thread there are instructions for using iometer to load test your
storage. You should test out your solution before going live, and compare what
you get with what yo
> if those servers are on physical boxes right now i'd do some perfmon
> caps and add up the iops.
Using perfmon to get a sense of what is required is a good idea. Use the 95th
percentile to be conservative. The counters I have used are in the Physical
Disk object. Don't ignore the latency counter
> Isn't that section of the evil tuning guide you're quoting actually about
> checking if the NVRAM/driver connection is working right or not?
Miles, yes, you are correct. I just thought it was interesting reading about
how syncs and such work within ZFS.
Regarding my NFS test, you remind me tha
I ran the RealLife iometer profile on NFS based storage (vs. SW iSCSI), and got
nearly identical results to having the disks on iSCSI:
iSCSI
IOPS: 1003.8
MB/s: 7.8
Avg Latency (ms): 27.9
NFS
IOPS: 1005.9
MB/s: 7.9
Avg Latency (ms): 29.7
Interesting!
Here is how the pool was behaving during the t
Hi,
When you have a lot of random read/writes, raidz/raidz2 can be fairly slow.
http://blogs.sun.com/roch/entry/when_to_and_not_to
The recommendation is to break the disks into smaller raidz/z2 stripes, thereby
improving IO.
>From the ZFS Best Practices Guide:
http://www.solarisinternals.com/wi
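As a sketch of what the smaller stripes look like, ten disks as two 5-disk
raidz vdevs rather than one wide vdev (device names made up):
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 raidz c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0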
For what it is worth, I too have seen this behavior when load testing our zfs
box. I used iometer and the RealLife profile (1 worker, 1 target, 65% reads,
60% random, 8k, 32 IOs in the queue). When writes are being dumped, reads drop
close to zero, from 600-700 read IOPS to 15-30 read IOPS.
zpo
> On Tue, 30 Jun 2009, Bob Friesenhahn wrote:
>
> Note that this issue does not apply at all to NFS
> service, database
> service, or any other usage which does synchronous
> writes.
I see read starvation with NFS. I was using iometer on a Windows VM, connecting
to an NFS mount on a 2008.11 phy
> which gap?
>
> 'RAID-Z should mind the gap on writes' ?
>
I believe this is in reference to the RAID 5 write hole, described here:
http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance
RAIDZ should avoid this via its copy-on-write model:
http:
"I had already begun the process of migrating my 134 boxes over to Nexenta
before Oracle's cunning plans became known. This just reaffirms my decision. "
Us too. :)
Craig,
3. I do not think you will get much dedupe on video, music and photos. I would
not bother. If you really wanted to know at some later stage, you could create
a new file system, enable dedupe, and copy your data (or a subset) into it just
to see. In my experience there is a significant CP
Hi Peter,
dedupe is pool wide. File systems can opt in or out of dedupe. So if multiple
file systems are set to dedupe, then they all benefit from using the same pool
of deduped blocks. In this way, if two files share some of the same blocks,
even if they are in different file systems, they wil
"Can I disable dedup on the dataset while the transfer is going on?"
Yes. Only the blocks copied after disabling dedupe will not be deduped; the
stuff you have already copied will stay deduped.
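(For reference, that is just: zfs set dedup=off tank/fs, with your dataset name
substituted.)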
"Can I simply Ctrl-C the procress to stop it?"
Yes, you can do that to a mv process.
Maybe stop the pr
When I do the calculations, assuming 300 bytes per block to be conservative,
with 128K blocks, I get 2.34G of cache (RAM, L2ARC) per terabyte of deduped
data. But block size is dynamic, so you will need more than this.
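The arithmetic, for anyone checking: 1 TB / 128 KB per block = 8,388,608
blocks, and 8,388,608 x 300 bytes is roughly 2.34 GB.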
Scott
I just realized that the email I sent to David and the list did not make the
list (at least as Jive can see it), so here is what I sent on the 23rd:
Brilliant. I set those parameters via /etc/system, rebooted, and the pool
imported with just the -f switch. I had seen this as an option earlier,
I am running nexenta CE 3.0.3.
I have a file system that at some point in the last week went from a directory
per 'ls -l' to a special character device. This results in not being able to
get into the file system. Here is my file system, scott2, along with a new file
system I just created, as
On 9/27/10 9:56 AM, "Victor Latushkin" wrote:
>
> On Sep 27, 2010, at 8:30 PM, Scott Meilicke wrote:
>
>> I am running nexenta CE 3.0.3.
>>
>> I have a file system that at some point in the last week went from a
>> directory per 'ls -l'
--
Scott Meilicke | Enterprise Systems Administrator | Crane Aerospace &
Electronics | +1 425-743-8153 | M: +1 206-406-2670
Has it been running long? Initially the numbers are way off. After a while
it settles down into something reasonable.
How many disks, and what size, are in your raidz2?
-Scott
On 9/29/10 8:36 AM, "LIC mesh" wrote:
> Is there any way to stop a resilver?
>
> We gotta stop this thing - at minimu
llions (about 30 mins in) and restarts.
>
> Never gets past 0.00% completion, and K resilvered on any LUN.
>
> 64 LUNs, 32x5.44T, 32x10.88T in 8 vdevs.
>
>
>
>
> On Wed, Sep 29, 2010 at 11:40 AM, Scott Meilicke
> wrote:
>> Has it been running long? Initially
http://pastebin.com/pan9DBBS
>
>
>
> On Wed, Sep 29, 2010 at 12:17 PM, Scott Meilicke
> wrote:
>> OK, let me see if I have this right:
>>
>> 8 shelves, 1T disks, 24 disks per shelf = 192 disks
>> 8 shelves, 2T disks, 24 disks per shelf = 192 disks
>> Ea
This must be resilver day :)
I just had a drive failure. The hot spare kicked in, and access to the pool
over NFS was effectively zero for about 45 minutes. Currently the pool is still
resilvering, but for some reason I can access the file system now.
Resilver speed has been beaten to death I
I should add I have 477 snapshots across all files systems. Most of them are
hourly snaps (225 of them anyway).
On Sep 29, 2010, at 3:16 PM, Scott Meilicke wrote:
> This must be resilver day :)
>
> I just had a drive failure. The hot spare kicked in, and access to the pool
>
Scott Meilicke