Thank you Bob and Richard. I will go with A, as it also keeps things simple.
One physical device per pool.
-Scott
On 10/20/09 6:46 PM, "Bob Friesenhahn" wrote:
> On Tue, 20 Oct 2009, Richard Elling wrote:
>>
>> The ZIL device will never require more space than RAM.
I am leaning towards option C. Any gotchas I should be aware of?
Thanks,
Scott
On what is now a live system, I had previously been tinkering with ZFS,
creating and destroying pools and datasets.
Those old pools still seem to be visible to the system even though I've
re-created new pools with new names:

# zpool status
  pool: BackupP0
 state: ONLINE
 scrub: none requested
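
(If the old pools were destroyed rather than overwritten, their labels are
still on the disks, and they show up like this; a sketch:)

# zpool import      # lists exported pools whose labels remain on disk
# zpool import -D   # also lists destroyed pools that could be re-imported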
It costs more, but a WAN accelerator (Cisco WAAS, Riverbed, etc.) would be a
big help.
Scott
> zfs share -a
Ah-ha! Thanks.
FYI, I got between 2.5x and 10x improvement in performance, depending on the
test. So tempting :)
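
(For reference, the usual sequence is to set the share property and then
share everything; dataset name hypothetical:)

# zfs set sharenfs=on tank/export   # mark the dataset as shared over NFS
# zfs share -a                      # share every dataset with sharenfs set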
-Scott
itself of course.
-Scott
Bob Friesenhahn wrote:
On Fri, 18 Sep 2009, David Magda wrote:
If you care to keep your pool up and alive as much as possible, then
mirroring across SAN devices is recommended.
One suggestion I heard was to get a LUN that's twice the size and
set "copies=2". This way you have some redundancy.
I think in theory the ZIL/L2ARC should make things nice and fast if your
workload includes sync requests (database, iSCSI, NFS, etc.), regardless of
the backend disks. But the only sure way to know is to test with your workload.
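
(A sketch of how the two are usually attached, with hypothetical SSD device
names:)

# zpool add tank log c3t0d0     # separate intent log (slog) for sync writes
# zpool add tank cache c3t1d0   # L2ARC cache device for reads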
-Scott
True, this setup is not designed for high random I/O, but rather lots of
storage with fair performance. This box is for our dev/test backend storage.
Our production VI runs at 500-700 IOPS (80+ VMs, production plus dev/test)
on average, so for our development VI, we are expecting half of that.
raidz, Dell 2950, 16GB RAM.
-Scott
Yes, I was getting confused. Thanks to you (and everyone else) for clarifying.
Sync or async, I see the txg flushing to disk starve read IO.
Scott
So, I just re-read the thread, and you can forget my last post. I had thought
the argument was that the data were not being written to disk twice (assuming
no separate device for the ZIL), but it was just explained to me that the data
are not read from the ZIL to disk, but rather written from memory to disk.
Doh! I knew that, but then forgot...
So, for the case of no separate device for the ZIL, the ZIL lives on the disk
pool. In which case, the data are written to the pool twice during a sync:
1. To the ZIL (on disk)
2. From RAM to disk during the txg commit
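
(One way to watch those ZIL commits as they happen is DTrace on the kernel's
zil_commit entry point; a sketch, run as root:)

# dtrace -qn 'fbt::zil_commit:entry { @c = count(); }
  tick-1s { printa("zil_commit calls/sec: %@d\n", @c); trunc(@c); }'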
If this is correct (and my history in this thread
the spare
*would* take over in these cases, since the pool is degraded.
-Scott
writing to the SSD/ZIL, and not to spinning disk. Eventually that data
on the SSD must get to spinning disk.
To the books I go!
-Scott
a good idea, although
I have not yet tried and tested it myself.
-Scott
re) with a single mirror using two 7200 RPM drives gave me
about 200 IOPS using the same test, presumably because of the large amount of
RAM available for ARC caching.
-Scott
As I understand it, when you expand a pool, the data do not automatically
migrate to the other disks. You will have to rewrite the data somehow, usually
a backup/restore.
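
(A zfs send/receive into a new dataset rewrites every block, and so spreads
the data across all vdevs, including the new ones; names hypothetical:)

# zfs snapshot tank/data@rebalance
# zfs send tank/data@rebalance | zfs receive tank/data.new
# zfs rename tank/data tank/data.old
# zfs rename tank/data.new tank/data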
-Scott
Roman, are you saying you want to install OpenSolaris on your old servers, or
make the servers look like an external JBOD array that another server will
then connect to?
may help to
isolate poorly performing individual disks.
Scott Meilicke wrote:
You can try:
zpool iostat -v pool_name 1
This will show you IO on each vdev at one-second intervals. Perhaps you will
see different IO behavior on any suspect drive.
-Scott
serge goyette wrote:
actually i did apply the latest recommended patches
Recommended patches and upgrade clusters are different, by the way:
10_Recommended != Upgrade Cluster. An upgrade cluster will upgrade the
system to, effectively, the Solaris release that the upgrade cluster
is minu
All rights reserved.
Use is subject to license terms.
Assembled 27 October 2008
Checksum all of the files using something like md5sum and see if
they're actually identical. Then test each step of the copy and see
which one is corrupting your files.
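
(On Solaris the stock tool is digest(1); a sketch comparing a source and a
destination copy, paths hypothetical:)

$ digest -a md5 /export/src/clip.mov /export/dst/clip.mov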
On Fri, Aug 21, 2009 at 1:43 PM, Harry Putnam wrote:
> During the course of backup I had occasion to copy a number of
> quicktime
than you have concurrent streams. This avoids having one save set
that finishes long after all the others because of poorly balanced
save sets.
Couldn't agree more, Mike.
--
Mike Gerdts
http://mgerdts.blogspot.com/
t one path
active, you should be fine.
-Scott
Yes! That would be icing on the cake.
writes to a system without a separate ZIL also be written as intelligently as
with a separate ZIL?
Thanks,
Scott
this will be just fine :)
-Scott
You can use a separate SSD ZIL.
This has been a very enlightening thread for me, and explains a lot of the
performance data I have collected on both 2008.11 and 2009.06 which mirrors the
experiences here. Thanks to you all.
NFS perf tuning, here I come...
-Scott
Tobias Exner wrote:
Hi list,
some months ago I spoke with a ZFS expert at a Sun storage event.
He said it's possible to grow a zpool by replacing every single disk
with a larger one.
After replacing and resilvering all disks of the pool, ZFS will
provide the new size automatically.
Now
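
(That replace-in-place procedure, sketched with hypothetical device names;
the autoexpand pool property only exists on newer builds, and older ones pick
up the new size once the pool is reopened, e.g. via export/import:)

# zpool replace tank c1t2d0 c1t6d0   # repeat for each disk, one at a time
# zpool status tank                  # wait for each resilver to complete
# zpool set autoexpand=on tank       # newer builds; otherwise export/import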
Dave Stubbs wrote:
I don't mean to be offensive, Russel, but if you do
ever return to ZFS, please promise me that you will
never, ever, EVER run it virtualized on top of NTFS
(a.k.a. worst file system ever) in a production
environment. Microsoft Windows is a horribly
unreliable operating system
work core routers and
needless to say achieves very high throughput. I have seen it pushing
the full capacity of the SAS link to the J4500 quite
commonly. This is probably the choke point for this system.
/Scott
nbupool  40.8T  34.4T  6.37T  84%  ONLINE  -
[r...@solnbu1 /]#>
support are
invited from the list.
Thanks,
Scott.
--
_____
Scott Lawson
Systems Architect
Information Communication Technology Services
Manukau Institute of Technology
Private Bag 94006
South Auckland Mail Centre
Manukau 2240
Auckland
New Zealand
Phone : +64 09 968 7611
Fax: +64 09 968 7641
Mobile : +64 27 568 7611
ZFS's Copy-on-Write model:
http://en.wikipedia.org/wiki/Zfs#Copy-on-write_transactional_model
So I'm not sure what the 'RAID-Z should mind the gap on writes' comment is
getting at either.
Clarification?
-Scott
Have each node record results locally, and then merge pair-wise until
a single node is left with the final results? If you can do merges
that way while reducing the size of the result set, then that's
probably going to be the most scalable way to generate overall
results.
On Thu, Jul 16, 2009 at
Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    1m59.11s
user    0m9.93s
sys     1m49.15s

Feel free to clean up with 'zfs destroy nbupool/zfscachetest'.
Scott Lawson wrote:
Bob,
Output of my run for you. System is an M3000 with 16 GB RAM and one zpool
called test1
Bob Friesenhahn wrote:
On Wed, 15 Jul 2009, Scott Lawson wrote:
        NAME        STATE     READ WRITE CKSUM
        test1       ONLINE       0     0     0
          mirror    ONLINE       0     0     0
Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    3m25.13s
user    0m2.67s
sys     0m28.40s

Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks

real    8m53.05s
user    0m2.69s
sys     0m32.83s

Feel free to clean up with 'zfs destroy test1/zfscachetest'.
Looks like a 25% performance loss for me. I was seeing around 80 MB/s
sustained on the first run and around 60 MB/s sustained on the 2nd.
/Scott.
Bob Friesenhahn wrote:
There has been no forward progress on the ZFS read performance issue
for a week now. A 4X r
data01      59.7G  20.4T     32     23   483K  2.97M
data01      59.7G  20.4T     37     37   538K  4.70M
While writes are being committed to the ZIL all the time, periodic dumping to
the pool still occurs, and during those times reads are starved. Maybe this
doesn't happen in the
David Magda wrote:
On Jun 30, 2009, at 14:08, Bob Friesenhahn wrote:
I have seen UPSs help quite a lot for short glitches lasting seconds,
or a minute. Otherwise the outage is usually longer than the UPSs
can stay up since the problem required human attention.
A standby generator is neede
data01      55.6G  20.4T     13  4.37K  87.1K  23.9M
data01      55.6G  20.4T     21  3.33K   136K  18.6M
data01      55.6G  20.4T    468    496  2.89M  1.82M
data01      55.6G  20.4T    687      0  4.13M      0
-Scott
Monish Shah wrote:
A related question: If you are on a UPS, is it OK to disable the ZIL?
I think the answer to this is no. UPSes do fail. If you have two
redundant units, the answer *might* be maybe. But prudence says *no*.
I have seen numerous UPS failures over the years, cascading UPS
failures
Haudy Kazemi wrote:
Hello,
I've looked around Google and the zfs-discuss archives but have not
been able to find a good answer to this question (and the related
questions that follow it):
How well does ZFS handle unexpected power failures? (e.g.
environmental power failures, power supply
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations
"The recommended number of disks per group is between 3 and 9. If you have more
disks, use multiple groups."
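
(With twelve disks, for example, that means two six-disk raidz2 groups in a
single pool rather than one wide group; device names hypothetical:)

# zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0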
-Scott
iSCSI is not (See my earlier zpool
iostat data for iSCSI). Isn't this what we expect, because NFS does syncs,
while iSCSI does not (assumed)?
-Scott
It's actually worse than that--it's not just "recent CPUs" without VT
support. Very few of Intel's current low-price processors, including
the Q8xxx quad-core desktop chips, have VT support.
On Wed, Jun 24, 2009 at 12:09 PM, roland wrote:
>>Dennis is correct in that there are significant areas wh
t, you remind me that my test was flawed, in that my iSCSI
numbers were using the ESXi iSCSI SW initiator, while the NFS tests were
performed with the VM as the guest, not ESX. I'll give ESX as the NFS client,
vmdks on NFS, a go and get back to you. Thanks!
Scott
a separate device. The periodic high
writes show it being flushed. You can also see reads stall to nearly zero as
the ZIL is dumping. Not good. This thread is discussing this behavior:
http://www.opensolaris.org/jive/thread.jspa?threadID=106453
Coming from a mostly Windows world, I really like th
replacing a disk.
HTH,
Thomas
ZIL usage: from what I have read, you will only see benefits if you
are using NFS-backed storage, but they can be significant. Remove the ZIL
for testing to see the maximum benefit you could get. Don't do this in
production!
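
(On builds of this era that usually means the global zil_disable tunable;
a sketch, strictly for benchmarking:)

# echo 'set zfs:zil_disable = 1' >> /etc/system
# reboot    # remove the line and reboot again when testing is done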
-Scott
f IO?
Also, to ensure you can recover from failures, consider separate pools for your
database files and log files, both for MySQL and Exchange.
Good luck!
-Scott
, try to get a card that supports JBOD mode so you can use software raid if
you change your mind.
-Scott
Generally, yes. Test it with your workload and see how it works out for you.
-Scott
So how are folks getting around the NFS speed hit? Using SSD or battery backed
RAM ZILs?
Regarding limited NFS mounts, underneath a single NFS mount, would it work to:
* Create a new VM
* Remove the VM from inventory
* Create a new ZFS file system underneath the original
* Copy the VM to that file system
Both iSCSI and NFS are slow? I would expect NFS to be slow, but in my iSCSI
testing with OpenSolaris 2008.11, performance was reasonable, about 2x NFS.
Setup: Dell 2950 with a SAS HBA and a 3x5 SATA raidz (15 disks, no separate
ZIL), iSCSI using the VMware ESXi 3.5 software initiator.
Scott
flexibility over iSCSI (quotas, reservations, etc.)
-Scott
The SATA drive will be your bottleneck, and you will lose any speed advantages
of the SAS drives, especially using 3 vdevs on a single SATA disk.
I am with Richard, figure out what performance you need, and build accordingly.
# wcadmin deploy -a zfs -x zfs /usr/share/webconsole/webapps/zfs
Also, if you wish to make the webconsole accessible from more than just
the localhost, use:
# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# smcwebserver restart
Hope this helps,
Scott.
cindy.swearin...@su
swamped with such a load.
We're running Solaris 10, not OpenSolaris, so it could also be the case that
there is a regression somewhere in there.
Scott Duckworth, Systems Programmer II
Clemson University School of Computing
On Tue, May 12, 2009 at 10:10 PM, Rince wrote:
> Hi world,
> I h
Bob Friesenhahn wrote:
On Thu, 7 May 2009, Scott Lawson wrote:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
Just thought I would point out that these are hardware backed RAID
arrays. You
Roger Solano wrote:
Hello,
Does it make any sense to use a bunch of 15K SAS drives as L2ARC
cache for several TBs of SATA disks?
For example:
A STK2540 storage array with this configuration:
* Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs.
* Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
Wilkinson, Alex wrote:
On Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote:
>On Thu, 30 Apr 2009, Wilkinson, Alex wrote:
>>
>> I currently have a single 17TB MetaLUN that i am about to present to an
>> OpenSolaris initiator and it will obviously be ZFS. However
Richard Elling wrote:
Some history below...
Scott Lawson wrote:
Michael Shadle wrote:
On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson
wrote:
If possible though you would be best to let the 3ware controller
expose
the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris
as you
Michael Shadle wrote:
On Mon, Apr 27, 2009 at 5:32 PM, Scott Lawson
wrote:
One thing you haven't mentioned is the drive type and size that you are
planning to use as this
greatly influences what people here would recommend. RAIDZ2 is built for
big, slow SATA
disks as reconstruction
Michael Shadle wrote:
On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson
wrote:
If possible though you would be best to let the 3ware controller expose
the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you
will then
gain the full benefits of ZFS. Block self healing etc etc
gives you
greater cover in
the event of a drive failing in a large vdev stripe.
/Scott
Leon Meßner wrote:
Hi,
i'm new to the list so please bear with me. This isn't an OpenSolaris
related problem but i hope it's still the right list to post to.
I'm on the way to move a ba
service contract, etc, wasn't
important to you.
Compare the URL above with this one:
http://www.intel.com/design/flash/nand/extreme/index.htm
Scott
you best read performance. Also
chuck in as much RAM as you can
for ARC caching.
Hope this real world case is of use to you. Feel free to ask any more
questions..
Cheers,
Scott.
Francois wrote:
Hello list,
What would be the best zpool configuration for a cache/proxy server
(probably ba
Michael Shadle wrote:
On Wed, Apr 1, 2009 at 3:19 AM, Michael Shadle wrote:
I'm going to try to move one of my disks off my rpool tomorrow (since
it's a mirror) to a different controller.
According to what I've heard before, ZFS should automagically
recognize this new location and have no
Michael Shadle wrote:
On Mon, Mar 30, 2009 at 4:13 PM, Michael Shadle wrote:
Sounds like a reasonable idea, no?
Follow up question: can I add a single disk to the existing raidz2
later on (if somehow I found more space in my chassis) so instead of a
7 disk raidz2 (5+2) it becomes a 6+2
then importing to the new system. This works well as long as both
systems are of the same OS revision, or greater on the target system.
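
(The move itself is just an export on the old host and an import on the new
one, assuming a pool named tank:)

hostA# zpool export tank
hostB# zpool import tank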
/Scott.
Grant Lowe wrote:
Hi Erik,
A couple of questions about what you said in your email. In synopsis 2, if
hostA has gone belly up and is no longer accessible, th
to them and they might not understand
having to change their shell paths
to get the userland that they want ;)
On Wed, Mar 4, 2009 at 2:47 AM, Scott Lawson wrote:
Stephen Nelson-Smith wrote:
Hi,
I recommended a ZFS-based archive solution to a client needing to have
a network-b
ver
a commercially supported solution for them.
Thanks,
S.
Miles Nordin wrote:
"sl" == Scott Lawson writes:
sl> Electricity *is* the lifeblood of available storage.
I never meant to suggest computing machinery could run without
electricity. My suggestion is, if your focus is _reliability_ rather
than availability
t the twinstrata website. (as should others).
Sorry to all if we are diverging too much from zfs-discuss.
/Scott
This stuff does happen. When you have been around for a while you see it.
Robin Harris wrote:
Calculating the availability and economic trade-offs of configurations
is hard. Rule of
Robert Milkowski wrote:
Hello Asif,
Wednesday, February 18, 2009, 1:28:09 AM, you wrote:
AI> On Tue, Feb 17, 2009 at 5:52 PM, Robert Milkowski wrote:
Hello Asif,
Tuesday, February 17, 2009, 7:43:41 PM, you wrote:
AI> Hi All
AI> Does anyone have any experience on running qmail on solar
Hi Andras,
No problems writing direct. Answers inline below. (If there are any
typos, it's because it's late and I have had a very long day ;))
andras spitzer wrote:
Scott,
Sorry for writing you directly, but most likely you have missed my
questions regarding your SW design, wheneve
David Magda wrote:
On Feb 17, 2009, at 21:35, Scott Lawson wrote:
Everything we have has dual power supplies, fed from dual power
rails, fed from separate switchboards, through separate very large
UPSes, backed by generators, fed by two substations, and then cloned
to another
Toby Thain wrote:
On 17-Feb-09, at 3:01 PM, Scott Lawson wrote:
Hi All,
...
I have seen other people discussing power availability on other threads
recently. If you
want it, you can have it. You just need the business case for it. I
don't buy the comments
on UPS unreliability.
H
ms.
I have 0% Solaris older than Solaris 10. Why would you?
In short I hope people don't hold back from adoption of ZFS because they
are unsure
about it. Judge for yourself as I have done and dip your toes in at
whatever rate you
are happy to do so. That's what I did.
/Scott.
I also use it at home too with an old D1000 attached
Have you tried the procedure in the ZFS Troubleshooting Guide?
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Panic.2FReboot.2FPool_Import_Problems
Looks like your scrub was not finished yet. Did you check it later? You should
not have had to replace the disk. You might have to reinstall the bootblock.
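
(On x86 that is done with installgrub against the root slice; a sketch with a
hypothetical disk, run from the booted system — on SPARC the equivalent is
installboot with the platform bootblk:)

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0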
controller port, so that the new device will have the same device name as the
failed one.
-- Scott
Well, the second resilver finished, and everything looks okay now. Doing one
more scrub to be sure...
-- Scott
they wind up on different ports? If so, seems like it needs
to back-map that information to the device names when mounting. Or something :)
-- Scott