pool by overwriting the start and end of two member disks (and possibly some
data). I assume that if I could have restored the lost metadata I could
have recovered most of the real data.
Thanks
Scott
On Mon, Aug 13, 2012 at 10:40:45AM -0700, Richard Elling wrote:
>
> On Aug 13, 2012, at 2:24 AM, Sašo Kiselkov wrote:
>
> > On 08/13/2012 10:45 AM, Scott wrote:
> >> Hi Saso,
> >>
> >> thanks for your reply.
> >>
> >> If all d
Thanks again Saso,
at least I have closure :)
Scott
On Mon, Aug 13, 2012 at 11:24:55AM +0200, Sašo Kiselkov wrote:
> On 08/13/2012 10:45 AM, Scott wrote:
> > Hi Saso,
> >
> > thanks for your reply.
> >
> > If all disks are the same, is the root pointer the sam
Hi Saso,
thanks for your reply.
If all disks are the same, is the root pointer the same?
Also, is there a "signature" or something unique to the root block that I can
search for on the disk? I'm going through the On-disk specification at the
moment.
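For reference, the uberblocks do carry a fixed magic number, 0x00bab10c, so
one crude thing to try is scanning a label for it. A rough sketch only (the
device path is just an example; this covers label L0 in the first 256K of
the disk):

  # Look for the uberblock magic 0x00bab10c in either byte order.
  dd if=/dev/lofi/1 bs=512 count=512 2>/dev/null | od -A d -t x4 | grep -i bab10c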
Scott
On Mon, Aug 13, 201
t the labels using the information from the 3
valid disks?
Thanks
Scott
e to get some of my data back. Any recovery is a bonus.
If anyone is keen, I have enabled SSH into the Open Indiana box
which I'm using to try and recover the pool, so if you'd like to take a shot
please let me know.
Thanks in advance,
Scott
On Sat, Jun 16, 2012 at 09:58:40AM -0500, Gregg Wonderly wrote:
>
> On Jun 16, 2012, at 9:49 AM, Scott Aitken wrote:
>
> > On Sat, Jun 16, 2012 at 09:09:53AM -0500, Gregg Wonderly wrote:
> >> Use 'dd' to replicate as much of lofi/2 as you can onto another devic
in that slot so that it will import and then you can 'zpool replace'
> the
> new disk into the pool perhaps?
>
> Gregg Wonderly
>
> On 6/16/2012 2:02 AM, Scott Aitken wrote:
> > On Sat, Jun 16, 2012 at 08:54:05AM +0200, Stefan Ring wrote:
> >
was /dev/lofi/2
/dev/lofi/5 ONLINE 0 0 0
/dev/lofi/4 ONLINE 0 0 0
/dev/lofi/3 ONLINE 0 0 0
/dev/lofi/1 ONLINE 0 0 0
root@openindiana-01:/mnt# zpool sc
in the second import, it complains that it can't open the device, rather
than saying it has corrupted data.
It's interesting that even though 4 of the 5 disks are available, it still
can't import it as DEGRADED.
Thanks again.
Scott
disk with an incorrect label. But how I can reconstruct
that label is a problem.
Also, there are four drives of the five-drive RAIDZ available. Based on what
criteria does ZFS decide that it is FAULTED and not DEGRADED? Odd.
Thanks,
Scott
ps I'm downloading OpenIndiana now.
>
> Whe
On Thu, Jun 14, 2012 at 09:56:43AM +1000, Daniel Carosone wrote:
> On Tue, Jun 12, 2012 at 03:46:00PM +1000, Scott Aitken wrote:
> > Hi all,
>
> Hi Scott. :-)
>
> > I have a 5 drive RAIDZ volume with data that I'd like to recover.
>
> Yeah, still..
>
>
so make the solaris machine available via SSH if some wonderful
person wants to poke around. If I lose the data that's ok, but it'd be nice
to know all avenues were tried before I delete the 9TB of images (I need the
space...)
Many thanks,
Scott
zfs-list at thismonkey dot com
Did you 4k align your partition table and is ashift=12?
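A quick way to check on a pool that already imports (pool name is only an
example) is to pull ashift out of zdb's copy of the config:

  # ashift=12 means the vdev is laid out on 4K boundaries; ashift=9 is 512B.
  zdb -C tank | grep ashift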
boxes
we have left. M$ also heavily discounts Exchange CALs to Edu, and Oracle
is not as friendly as Sun was with their JES licensing. So it is bye-bye
Sun Messaging Server for us.
On 2011-06-13 1:14, Scott Lawson wrote:
Hi All,
I have an interesting question that may or may not be an
On 13/06/11 10:28 AM, Nico Williams wrote:
On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson
wrote:
I have an interesting question that may or may not be answerable from some
internal
ZFS semantics.
This is really standard Unix filesystem semantics.
I understand this, just wanting
gards,
Scott.
I don't disagree that zfs is the better choice, but...
> Seriously though. UFS is dead. It has no advantage
> over ZFS that I'm aware
> of.
>
When it comes to dumping and restoring filesystems, there is still no official
replacement for ufsdump and ufsrestore. The discussion has been had
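The closest thing in practice is a recursive snapshot plus zfs send/receive;
a sketch only, with made-up pool and path names, and with the caveat that a
raw send stream is not an archive format the way a ufsdump image is:

  # "Full dump" of a dataset tree to a file, and the matching restore.
  zfs snapshot -r tank/home@dump0
  zfs send -R tank/home@dump0 > /backup/home.dump0.zfs
  zfs receive -d -F tank2 < /backup/home.dump0.zfs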
Hi,
Took me a couple of minutes to find the download for this in My Oracle
Support. Search for the patch like this:
Patches and Updates Panel -> Patch Search -> Patch Name or Number is: 10275731
Pretty easy really.
Scott.
PS. I found that patch by using product or family equals
-2413
> Unix Administrator
>
>
> "From a little spark may burst a mighty flame."
> -Dante Alighieri
Hello Peter,
Read the ZFS Best Practices Guide to start. If you still have questions, post
back to the list.
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Storage_Pool_Performance_Considerations
-Scott
On Oct 13, 2010, at 3:21 PM, Peter Taps wrote:
> Folks,
>
sk designed to be sequential, while writes to the
ZIL/SLOG will be more random (in order to commit quickly)?
Scott Meilicke
r reliability), and which may have bugs.
At some point you have to rely on your backups for the unexpected and
unforeseen. Make sure they are good!
Michael, nice reliability write up!
--
Scott Meilicke
Scott Meilicke
Scott Meilicke
Scott Meilicke
I should add I have 477 snapshots across all file systems. Most of them are
hourly snaps (225 of them anyway).
On Sep 29, 2010, at 3:16 PM, Scott Meilicke wrote:
> This must be resilver day :)
>
> I just had a drive failure. The hot spare kicked in, and access to the pool
>
insights.
-Scott
you do it :). Although stopping a scrub is pretty
innocuous.
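For reference, stopping one is a single command (pool name is an example):

  # Cancel an in-progress scrub.
  zpool scrub -s tank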
-Scott
On 9/29/10 9:22 AM, "LIC mesh" wrote:
> You almost have it - each iSCSI target is made up of 4 of the raidz vdevs - 4
> * 6 = 24 disks.
>
> 16 targets total.
>
> We have one LUN with status of
llions(about 30mins in) and restarts.
>
> Never gets past 0.00% completion, and K resilvered on any LUN.
>
> 64 LUNs, 32x5.44T, 32x10.88T in 8 vdevs.
>
>
>
>
> On Wed, Sep 29, 2010 at 11:40 AM, Scott Meilicke
> wrote:
>> Has it been running long? Initially
Has it been running long? Initially the numbers are way off. After a while
it settles down into something reasonable.
How many disks, and what size, are in your raidz2?
-Scott
On 9/29/10 8:36 AM, "LIC mesh" wrote:
> Is there any way to stop a resilver?
>
> We gotta s
Brilliant. I set those parameters via /etc/system, rebooted, and the pool
imported with just the -f switch. I had seen this as an option earlier,
although not that thread, but was not sure it applied to my case.
Scrub is running now. Thank you very much!
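(The excerpt doesn't show which parameters were involved; the recovery
tunables usually passed around in these threads are the two below. Treat
them as the usual suspects rather than a quote from this thread, and back
them out once the pool is imported.)

  # /etc/system -- panic-avoidance tunables for a one-off recovery import,
  # followed by a reboot and 'zpool import -f <pool>'.
  set zfs:zfs_recover=1
  set aok=1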
-Scott
On 9/23/10 7:07 PM, "
--
Scott Meilicke | Enterprise Systems Administrator | Crane Aerospace &
Electronics | +1 425-743-8153 | M: +1 206-406-2670
---
On 9/27/10 9:56 AM, "Victor Latushkin" wrote:
>
> On Sep 27, 2010, at 8:30 PM, Scott Meilicke wrote:
>
>> I am running nexenta CE 3.0.3.
>>
>> I have a file system that at some point in the last week went from a
>> directory per 'ls -l'
st created, as seen by ls -l:
drwxr-xr-x   4 root     root          4 Sep 27 09:14 scott
crwxr-xr-x   9 root     root       0, 0 Sep 20 11:51 scott2
Notice the 'c' vs. 'd' at the beginning of the permissions list. I had been
fiddling with permissions last week, then had problems with a kernel panic.
Update: The scrub finished with zero errors.
When I do the calculations, assuming 300 bytes per block to be conservative,
with 128K blocks, I get 2.34G of cache (RAM, L2ARC) per Terabyte of deduped
data. But block size is dynamic, so you will need more than this.
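The arithmetic behind that figure, for anyone who wants to check it:

  # 1 TiB of unique data at 128K per block, ~300 bytes of DDT per block:
  echo $(( 1024 * 1024 * 1024 * 1024 / (128 * 1024) * 300 ))
  # 2516582400 bytes, i.e. roughly 2.34 GiB of ARC/L2ARC per TiB deduped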
Scott
Maybe stop the process, delete the deduped file system (your copy target), and
create a new file system without dedupe to see if that is any better?
Scott
will dedupe.
I am not sure why reporting is not done at the file system level. It may be an
accounting issue, i.e. which file system owns the dedupe blocks. But it seems
some fair estimate could be made. Maybe the overhead to keep a file system
updated with these stats is too high?
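As it stands, the only built-in number is pool-wide (pool name is an example):

  # Dedup is accounted for at the pool level only.
  zpool get dedupratio tank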
-Scott
CPU penalty as well. My four
core (1.86GHz xeons, 4 yrs old) box nearly maxes out when putting a lot of data
into a deduped file system.
-Scott
"I had already begun the process of migrating my 134 boxes over to Nexenta
before Oracle's cunning plans became known. This just reaffirms my decision. "
Us too. :)
Are there other file systems underneath daten/backups that have snapshots?
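A quick way to check:

  # List every snapshot under that subtree.
  zfs list -r -t snapshot daten/backups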
Another data point - I used three 15K disks striped using my RAID controller as
a slog for the zil, and performance went down. I had three raidz sata vdevs
holding the data, and my load was VMs, i.e. a fair amount of small, random IO
(60% random, 50% write, ~16k in size).
Scott
If these files are deduped, and there is not a lot of RAM on the machine, it
can take a long, long time to work through the dedupe portion. I don't know
enough to know if that is what you are experiencing, but it could be the
problem.
How much RAM do you have?
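One way to see how big the dedup table actually is (pool name is an example):

  # Print DDT statistics and a histogram for the pool; the totals give a
  # feel for how much of the table can realistically stay in RAM.
  zdb -DD tank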
Scott
> At this point, I will repeat my recommendation about
> using
> zpool-in-files as a backup (staging) target.
> Depending where you host, and how you combine the files, you can achieve
> these scenarios
> without clunkery, and with all the benefits a zpool
> provides.
>
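A sketch of that idea with throwaway paths and sizes (the backing files
could just as well sit on NFS or removable media):

  # Build a staging pool out of plain files and receive the stream into it.
  mkfile 10g /backup/vdev1 /backup/vdev2
  zpool create stagepool /backup/vdev1 /backup/vdev2
  zfs send -R tank@monday | zfs receive -d stagepool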
This is another good sch
evik wrote:
Reading this list for a while made it clear that zfs send is not a
backup solution, it can be used for cloning the filesystem to a backup
array if you are consuming the stream with zfs receive so you get
notified immediately about errors. Even one bitflip will render the
stream unusa
>
> if, for example, the network pipe is bigger than one unsplit stream
> of zfs send | zfs recv, then splitting it into multiple streams should
> optimize the network bandwidth, shouldn't it?
>
Well, I guess so. But I wonder what the bottleneck is here. If it is the
rate at which zfs
>
> would be nice if i could pipe the zfs send stream to
> a split and then
> send of those splitted stream over the
> network to a remote system. it would help sending it
> over to remote
> system quicker. can your tool do that?
>
> something like this
>
>s | ->
o. The only way I know of achieving that is by using zfs send etc.
>
> On 6/28/2010 11:26 AM, Tristram Scott wrote:
[snip]
> >
> > Tristram
>
For quite some time I have been using zfs send -R fsn...@snapname | dd
of=/dev/rmt/1ln to make a tape backup of my zfs file system. A few weeks back
the size of the file system grew to larger than would fit on a single DAT72
tape, and I once again searched for a simple solution to allow dumping
assertion, so I may be
completely wrong.
I assume your hardware is recent, the controllers are on PCIe x4 buses, etc.
-Scott
Look again at how XenServer does storage. I think you will find it already has
a solution, both for iSCSI and NFS.
need certain kernel features turned on.
--
Scott Kaelin
0x6BE43783
Price? I cannot find it.
            c10t3d5  ONLINE       0     0     0
            c10t3d6  ONLINE       0     0     0
          spares
            c10t3d7  AVAIL
Is ZFS dependent on the order of the drives? Will this cause any issue down
the road? Thank you all;
Scott
your live data, another to access the
historical data.
-Scott
iSCSI writes require a sync to disk for every write. SMB writes get cached in
memory, therefore are much faster.
I am not sure why it is so slow for reads.
Have you tried comstar iSCSI? I have read in these forums that it is faster.
-Scott
VMware will properly handle sharing a single iSCSI volume across multiple ESX
hosts. We have six ESX hosts sharing the same iSCSI volumes - no problems.
-Scott
sn't much control over which one it keeps - for
> backups you may really want to keep the earliest (or latest?) backup the
> file appeared in.
I've used "Dirvish" http://www.dirvish.org/ and rsync to do just
that...worked great!
Scott
>
> Using ZFS Dedup is an
.
Scott
At the time we had it set up as 3 x 5 disk raidz, plus a hot spare. These 16
disks were in a SAS cabinet, and the slog was on the server itself. We are
now running 2 x 7 raidz2 plus a hot spare and slog, all inside the cabinet.
Since the disks are 1.5T, I was concerned about resilver times fo
as a target for Doubletake, so it only saw write IO, with
very little read. My load testing using iometer was very positive, and I would
not have hesitated to use it as the primary node serving about 1000 users,
maybe 200-300 active at a time.
Scott
ied to use a ZVOL from rpool (on fast 15k rpm drives) as a cache device
for another pool (on slower 7.2k rpm drives). It worked great up until it
hit the race condition and hung the system. It would have been nice if zfs
had issued a warning, or at least if this fact was better documented.
Scott
> One of the reasons I am investigating solaris for
> this is sparse volumes and dedupe could really help
> here. Currently we use direct attached storage on
> the dom0s and allocate an LVM to the domU on
> creation. Just like your example above, we have lots
> of those "80G to start with please"
at kind of performance do you need? Maybe raidz2 will give you the
performance you need. Maybe not. Measure the performance of each configuration
and decide for yourself. I am a big fan of iometer for this type of work.
-Scott
>Apple users have different expectations regarding data loss than Solaris and
>Linux users do.
Come on, no Apple user bashing. Not true, not fair.
Scott
>I was planning to mirror them - mainly in the hope that I could hot swap a new
>one in the event that an existing one started to degrade. I suppose I could
>start with one of each and convert to a mirror later although the prospect of
>losing either disk fills me with dread.
You do not need to
disks?
Hopefully your switches support NIC aggregation?
The only issue I have had on 2009.06 using iSCSI (I had a windows VM directly
attaching to an iSCSI 4T volume) was solved and back ported to 2009.06 (bug
6794994).
-Scott
volume, no security.
Not quite a one liner. After you create the target once (step 3), you do not
have to do that again for the next volume. So three lines.
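For a new volume the three lines are roughly these (names and size are
placeholders, and the GUID comes from the sbdadm output; the target itself
is the once-per-host step):

  # Per volume: create the zvol, register it as a LU, and expose it.
  zfs create -V 100g tank/vol2
  sbdadm create-lu /dev/zvol/rdsk/tank/vol2
  stmfadm add-view <GUID-from-sbdadm>
  # Once only: create the iSCSI target.
  itadm create-target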
-Scott
Greg, I am using NetBackup 6.5.3.1 (7.x is out) with fine results. Nice and
fast.
-Scott
ave a downloaded copy of whichever main backup software you use.
>That's it. You backup data using Amanda/Bacula/et al onto tape. You
>backup your boot/root filesystem using 'zfs send' onto the USB key.
Erik, great! I never thought of the USB key to store an rpool copy. I
You might have to force the import with -f.
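i.e. something along the lines of (pool name is an example):

  # Force the import if the pool looks like it is still in use elsewhere.
  zpool import -f tank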
Scott
d up reads.
Here is the ZFS best practices guide, which should help with this decision:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Read that, then come back with more questions.
Best,
Scott
I plan on filing a support request with Sun, and will try to post back with any
results.
Scott
That is likely it. I created the volume using 2009.06, then later upgraded to
build 124. I just now created a new zvol, connected it to my Windows server,
formatted, and added some data. Then I snapped the zvol, cloned the snap, and
used 'pfexec sbdadm create-lu'. When presented to the windows server,
Sure, but that will put me back into the original situation.
-Scott
.
Thanks,
Scott
Gallardo can see the LUN, but like I said, it looks
blank to the OS. I suspect the 'sbdadm create-lu' phase.
Any help to get Windows to see it as a LUN with NTFS data would be appreciated.
Thanks,
Scott
'conversation', but the LAG
setup will determine how a conversation is defined.
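On the OpenSolaris side that policy is chosen when the aggregation is
created, e.g. (interface names are examples):

  # L3,L4 hashes conversations on IP address and port.
  dladm create-aggr -P L3,L4 -l e1000g0 -l e1000g1 aggr1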
Scott
It looks like there is not a free slot for a hot spare? If that is the case,
then it is one more factor to push towards raidz2, as you will need time to
remove the failed disk and insert a new one. During that time you don't want to
be left unprotected.
> Thus far there is no evidence that there is anything wrong with your
> storage arrays, or even with zfs. The problem seems likely to be
> somewhere else in the kernel.
Agreed. And I tend to think that the problem lies somewhere in the LDOM
software. I mainly just wanted to get some experience
No errors reported on any disks.
$ iostat -xe
                  extended device statistics       ---- errors ----
device    r/s   w/s   kr/s  kw/s wait actv svc_t  %w  %b s/w h/w trn tot
vdc0      0.6   5.6   25.0  33.5  0.0  0.1  17.3   0   2   0   0   0   0
vdc1     78.1  24.4
[Cross-posting to ldoms-discuss]
We are occasionally seeing massive time-to-completions for I/O requests on ZFS
file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200,
and using a SSD drive as a ZIL device. Primary access to this system is via
NFS, and with NFS COMMITs b
sequential
writes. That same server can only consume about 22 MBps using an artificial
load designed to simulate my VM activity (using iometer). So it varies greatly
depending upon Y.
-Scott
protection.
Scott
It does 'just work'; however, you may have some file and/or file system
corruption if the snapshot was taken at the moment your Mac is updating
some files. So use the time slider function and take a lot of snaps. :)
If the 7310s can meet your performance expectations, they sound much better
than a pair of x4540s. Auto-fail over, SSD performance (although these can be
added to the 4540s), ease of management, and a great front end.
I haven't seen if you can use your backup software with the 7310s, but from
ncern about losing power and having the X25
RAM cache disappear during a write.
-Scott
iSCSI volume has
nothing to do with ZFS' zil usage.
-Scott
ivity. Same for NFS. I
see no ZIL activity using rsync, for an example of a network file transfer that
does not require sync.
Scott
upgrade to the latest dev release fixed the problem for me.
I have a repeatable test case for this incident. Every time I access my ZFS
cifs shared file system with Adobe Photoshop Elements 6.0 via my Vista
workstation the OpenSolaris server stops serving CIFS. The share functions as
expected for all other CIFS operations.
-Begin Configuration
Excellent! That worked just fine. Thank you Victor.
-Scott
the SSD to my production pool. Any ideas why I am
getting the import error?
Thanks,
Scott
ago, I
have had no problems.
Again, I don't know if this would fix your problem, but it may be worth a try.
Just don't upgrade your ZFS version, and you will be able to roll back to
2009.06 at any time.
-Scott
I don't think so. But, you can clone at the ZFS level, and then just use the
vmdk(s) that you need. As long as you don't muck about with the other stuff in
the clone, the space usage should be the same.
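i.e. something like (dataset names are examples):

  # The clone shares blocks with the snapshot, so pulling one vmdk out of
  # it costs almost no extra space.
  zfs snapshot tank/vmstore@grab-vmdk
  zfs clone tank/vmstore@grab-vmdk tank/vmstore-clone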
-Scott
Interesting. We must have different setups with our PERCs. Mine have
always auto rebuilt.
--
Scott Meilicke
On Oct 22, 2009, at 6:14 AM, "Edward Ned Harvey"
wrote:
Replacing failed disks is easy when PERC is doing the RAID. Just
remove
the failed drive and replace with a goo
lace with a good one, and the PERC will rebuild
automatically. But are you talking about OpenSolaris managed RAID? I am pretty
sure, but not tested, that in pseudo JBOD mode (each disk a raid 0 or 1), the
PERC would still present a replaced disk to the OS without reconfiguring the
PERC BIO
the regular pool for the ZIL,
correct? Assuming this is correct, a mirror would be to preserve performance
during a failure?
Thanks everyone, this has been really helpful.
-Scott
Thanks Ed. It sounds like you have run in this mode? No issues with
the PERC?
--
Scott Meilicke
On Oct 20, 2009, at 9:59 PM, "Edward Ned Harvey"
wrote:
System:
Dell 2950
16G RAM
16 1.5T SATA disks in a SAS chassis hanging off of an LSI 3801e, no
extra drive slots, a si