Never mind this: I destroyed the raid volume, then checked each hard drive one
by one, and when I put it back together the problem fixed itself. I'm now
getting 30-60 MB/s read and write, which is still slow as heck, but works well
for my application.
Do you mean that OI148 might have a bug that Solaris 11 Express might solve? I
will download the Solaris 11 Express LiveUSB and give it a shot.
Hello all,
I'm building a file server (or really just storage that I intend to access over a
workgroup share, primarily from Windows machines) using ZFS raidz2 and OpenIndiana
148. I will be using this to stream Blu-ray movies and other media, so I will
be happy if I get just 20 MB/s reads, which seems like a pr
>> New to ZFS, I made a critical error when migrating data and
>> configuring zpools according to needs - I stored a snapshot stream to
>> a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".
>
>Why is this a critical error? I thought you were supposed to be
>able to save the outp
Hey all,
New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored a snapshot stream to
a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]".
When I attempted to receive the stream onto the newly configured
pool, I ended up with a
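For reference, the round trip being described would look roughly like this (pool,
filesystem and file names here are made up; the point of the thread is that the
receive step can fail outright if anything in the stored stream file is damaged):
   # zfs snapshot -r tank/data@migrate
   # zfs send -R tank/data@migrate > /backup/tank-data.zstream
   # zfs receive -dF newpool < /backup/tank-data.zstream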
On Aug 2, 2010, at 8:18 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jonathan Loran
>>
> Because you're at pool v15, it does not matter if the log device fails while
> you
Will the GUID for each pool get found by
the system from the partitioned log drives?
Please give me your sage advice. Really appreciate it.
Jon
Hi,
I would really appreciate it if any of you can help me get the modified mdb and zdb
(in any version of OpenSolaris) for digital forensic research purposes.
Thank you.
Jonathan Cifuentes
Can anyone confirm my action plan is the proper way to do this? The reason I'm
doing this is that I want to end up with a 2x raidz2 pool instead of expanding my
current 2x raidz1 pool. So I'll create a pool with one raidz2 vdev, migrate my
current 2x raidz1 pool over, destroy that pool, and then add its disks as a second raidz2 vde
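A sketch of that plan with made-up device names and six disks per vdev (this is only
what the steps would look like, not a confirmation that the plan is safe):
   # zpool create newtank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
   # zfs snapshot -r oldtank@move
   # zfs send -R oldtank@move | zfs receive -dF newtank
   # zpool destroy oldtank
   # zpool add newtank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0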
>
> Do worry about media errors. Though this is the most
> common HDD
> error, it is also the cause of data loss.
> Fortunately, ZFS detected this
> and repaired it for you.
Right. I assume you do recommend swapping the faulted drive out though?
> Other file systems may not be so gracious.
Yeah,
--
$smartctl -d sat,12 -i /dev/rdsk/c5t0d0
smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)
I just ran 'iostat -En'. This is what was reported for the drive in question
(all other drives showed 0 errors across the board).
All drives indicated the "illegal request... predictive failure analysis"
--
c7t1d0
I just started replacing drives in this zpool (to increase storage). I pulled
the first drive, and replaced it with a new drive and all was well. It
resilvered with 0 errors. This was 5 days ago. Just today I was looking around
and noticed that my pool was degraded (I see now that this occurred
First, a little background: I'm running b130 and have a zpool with two
raidz1 vdevs (each 4 drives, all WD RE4-GPs). They're in a
Norco-4220 case ("home" server), which just consists of SAS backplanes
(AOC-USAS-L8i -> 8087 -> backplane -> SATA drives). A couple of the drives are
showing a
/work with the LSI-SAS
expander in the Supermicro chassis. Using a 1068e-based HBA works fine and
works well with osol.
Jonathan
The real problem for us comes down to the fact that ufsdump and ufsrestore
handled tape spanning and zfs send does not.
We looked into having a wrapper write "zfs send" output to a file and running gtar
(which does support tape spanning), or cpio ... then we looked at the amount we
started storing
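The sort of wrapper being considered would look roughly like this (staging path, tape
device and the -L tape length are illustrative; GNU tar's -c -M -L options do the
multi-volume spanning, and /dev/rmt/0n is the no-rewind tape device):
   # zfs send -R tank@backup > /staging/tank.zstream
   # gtar -c -M -L 409600000 -f /dev/rmt/0n /staging/tank.zstream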
On Sep 9, 2009, at 9:29 PM, Bill Sommerfeld wrote:
On Wed, 2009-09-09 at 21:30 +, Will Murnane wrote:
Some hours later, here I am again:
scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go
Any suggestions?
Let it run for another day.
A pool on a build server I manage takes ab
On Aug 14, 2009, at 11:14 AM, Peter Schow wrote:
On Thu, Aug 13, 2009 at 05:02:46PM -0600, Louis-Frédéric Feuillette
wrote:
I saw this question on another mailing list, and I too would like to
know. And I have a couple questions of my own.
== Paraphrased from other list ==
Does anyone have a
>> We have a SC846E1 at work; it's the 24-disk, 4U version of the 826E1.
>> It's working quite nicely as a SATA JBOD enclosure. We'll probably be
>> buying another in the coming year to have more capacity.
> Good to hear. What HBA(s) are you using against it?
I've got one too and it
On Jul 4, 2009, at 11:57 AM, Bob Friesenhahn wrote:
This brings me to the absurd conclusion that the system must be
rebooted immediately prior to each use.
see Phil's later email .. an export/import of the pool or a remount of
the filesystem should clear the page cache - with mmap'd files
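In other words, something like the following (pool and filesystem names made up):
   # zpool export tank && zpool import tank
or, for a single filesystem:
   # zfs umount tank/fs && zfs mount tank/fs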
On Jul 4, 2009, at 12:03 AM, Bob Friesenhahn wrote:
% ./diskqual.sh
c1t0d0 130 MB/sec
c1t1d0 130 MB/sec
c2t202400A0B83A8A0Bd31 13422 MB/sec
c3t202500A0B83A8A0Bd31 13422 MB/sec
c4t600A0B80003A8A0B096A47B4559Ed0 191 MB/sec
c4t600A0B80003A8A0B096E47B456DAd0 192 MB/sec
c4t600A0B80003A8A0B00
I've seen a problem where periodically a 'zfs mount -a' and sometimes
a 'zpool import' can create what appears to be a race condition
on nested mounts ... that is, let's say that I have:
FS                mountpoint
pool/export
pool/fs1
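The mountpoints are cut off above, but the layout in question is ordinary nested
mountpoints, e.g. (names made up):
   pool/export        /export
   pool/export/fs1    /export/fs1
where the child has to end up mounted after its parent, otherwise the parent mount
covers the child's mountpoint; 'zfs list -r -o name,mountpoint pool' shows the layout.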
the ZFS layer, and also do backups.
Unfortunately for me, penny pinching has precluded both for us until
now.
Jon
On Jun 1, 2009, at 4:19 PM, A Darren Dunham wrote:
On Mon, Jun 01, 2009 at 03:19:59PM -0700, Jonathan Loran wrote:
Kinda scary then. Better make sure we delete all the bad fil
On Jun 1, 2009, at 2:41 PM, Paul Choi wrote:
"zpool clear" just clears the list of errors (and # of checksum
errors) from its stats. It does not modify the filesystem in any
manner. You run "zpool clear" to make the zpool forget that it ever
had any issues.
-Paul
Jonat
es intact?
I'm going to perform a full backup of this guy (not so easy on my
budget), and I would rather only get the good files.
Thanks,
Jon
Daniel Rock wrote:
> Jonathan wrote:
>> OpenSolaris Forums wrote:
>>> if you have a snapshot of your files and rsync the same files again,
>>> you need to use the "--inplace" rsync option, otherwise completely new
>>> blocks will be allocated for the
> blocks will be allocated for the new files. That's because rsync will
> write an entirely new file and rename it over the old one.
ZFS will allocate new blocks either way, check here
http://all-unix.blogspot.com/2007/03/zfs-cow-and-relate-features.html
for more information about how
Michael Shadle wrote:
> On Sat, Mar 28, 2009 at 1:37 AM, Peter Tribble wrote:
>
>> zpool add tank raidz1 disk_1 disk_2 disk_3 ...
>>
>> (The syntax is just like creating a pool, only with add instead of create.)
>
> so I can add individual disks to the existing tank zpool anytime I want?
Using th
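For illustration, what gets added is a whole new vdev; with made-up device names:
   # zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
   # zpool status tank
whereas 'zpool add tank c2t0d0' would stripe in a single unprotected disk, and on
these releases an added vdev cannot be removed again.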
On Mar 6, 2009, at 8:58 AM, Andrew Gabriel wrote:
Jim Dunham wrote:
ZFS, the filesystem, is always on-disk consistent, and ZFS does
maintain filesystem consistency through coordination between the
ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately
for SNDR, ZFS caches a lot o
not quite .. it's 16KB at the front and 8MB at the back of the disk (16384
sectors) for the Solaris EFI label - so you need to zero out both of these.
Of course, since these drives are <1TB, I find it's easier to format them
to SMI (VTOC) .. with format -e (choose SMI, label, save, validate -
then choose EFI
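A sketch of the zeroing step, with a made-up device name; p0 is the whole-disk device
on x86, and DISK_SECTORS stands for the disk's size in 512-byte sectors (available
from format or prtvtoc):
   # dd if=/dev/zero of=/dev/rdsk/c1t2d0p0 bs=512 count=32
   # dd if=/dev/zero of=/dev/rdsk/c1t2d0p0 bs=512 seek=$((DISK_SECTORS - 16384)) count=16384
The first command clears the front 16KB (32 sectors), the second the last 8MB (16384 sectors).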
's easier just to spend the money on enough
hardware to do it properly without the chance of data loss and the
extended downtime. "Doesn't invest the time in" may be a better
phrase than "avoids" though. I doubt Sun actually goes out of their way
to make things harder for people.
Hope that helps,
Jonathan
u start seeing hundreds of errors, be sure to check things like the
cable. I had a SATA cable come loose on a home ZFS fileserver, and scrub
was throwing hundreds of errors even though the drive itself was fine. I
don't want to think about what could have happened with UFS...
H
the system board for this machine would make use of ECC
memory either, which is not good from a ZFS perspective. How many SATA
ports are there on the motherboard in this guy?
Jon
y, give it a go and see what happens. I'm sure I can still dimly
recall a time when 500MHz/512MB was a kick-ass system...
Jonathan
(*) This machine can sustain 110MB/s off of the 4-disk RAIDZ1 set,
which is substantially more than I can get over my 100Mb network.
tools, resilience of the platform, etc.)..
>
> .. Of course though, I guess a lot of people who may have never had a
> problem wouldn't even be signed up on this list! :-)
>
>
> Thanks!
Hi
Please see the query below. Appreciate any help.
Rgds
jonathan
Original Message
Would you mind helping me ask your tech guy whether there will be
repercussions when I try to run this command in view of the situation below:
# zpool add -f zhome raidz
two vdevs out
of two raidz to see if you get twice the throughput, more or less. I'll
bet the answer is yes.
Jon
asis in reality until it's about 1% done or so. I think there is some
bookkeeping or something ZFS does at the start of a scrub or resilver that
throws off the time estimate for a while. That's just my experience with
it, but it's been like that pretty consistently for me.
Jonathan
On 25 Sep 2008, at 17:14, Darren J Moffat wrote:
> Chris Gerhard has a zfs_versions script that might help:
> http://blogs.sun.com/chrisg/entry/that_there_is
Ah. Cool. I will have to try this out.
Jonathan
s requires me to a) type more; and b) remember where the top of
the filesystem is in order to split the path. This is obviously more
of a pain if the path is 7 items deep, and the split means you can't
just use $PWD.
[My choice of .snapshot/nightly.0 is a deliberate nod to the
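For anyone following along, the contrast is roughly this (paths made up): ZFS exposes
snapshots at the root of each filesystem, so a file at
/export/home/jonathan/project/notes.txt on the filesystem mounted at /export/home is
reached under a snapshot as
   /export/home/.zfs/snapshot/nightly.0/jonathan/project/notes.txt
which is why you have to know where the filesystem boundary is in order to split the
path, instead of just prefixing .snapshot/nightly.0 relative to the current directory.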
expected number of failures in one year:
Fe = 46% failures/month * 12 months = 5.52 failures
Jon
e a chance of being recovered. If
it stops halfway, it has _no_ chance of recovering that data, so I favor my
odds of letting it go on to at least try :)
Or is that an entirely new CR itself?
Jonathan
ID=220125
It's way over my head, but if anyone can tell me the mdb commands I'm happy to
try them, even if they do kill my cat. I don't really have anything to lose
with a copy of the data, and I'll do it all in a VM anyway.
Thanks,
Jonathan
over the /home fs
from the pre-zfsroot.zfs dump? Since there seems to be a problem with the first
fs (faith/virtualmachines), I need to find a way to skip restoring that zfs, so
it can focus on the faith/home fs.
How can this be achieved with zfs receive?
Jonathan
other helpful chap pointed out, if tar encounters an error in the
bitstream it just moves on until it finds usable data again. Can zfs not do
something similar?
I'll take whatever I can get!
Jonathan
Jorgen Lundman wrote:
> # /usr/X11/bin/scanpci | /usr/sfw/bin/ggrep -A1 "vendor 0x11ab device
> 0x6081"
> pci bus 0x0001 cardnum 0x01 function 0x00: vendor 0x11ab device 0x6081
> Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
>
> But it claims resolved for our version:
it's not so!), why can't I at least have the 20GB of data that
it can restore before it bombs out with that checksum error?
Thanks for any help with this!
Jonathan
Miles Nordin wrote:
>> "s" == Steve <[EMAIL PROTECTED]> writes:
>>
>
> s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
>
> no ECC:
>
> http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets
>
This MB will take these:
http://www.inte
e best position to monitor the device.
> >
> > The primary goal of ZFS is to be able to correctly read data which was
> > successfully committed to disk. There are programming interfaces
> > (e.g. fsync(), msync()) which may be used to en
it be possible to have a number of possible places to store this
> log? What I'm thinking is that if the system drive is unavailable,
> ZFS could try each pool in turn and attempt to store the log there.
>
> In fact e-mail alerts or external error logging would be a great
> addition to ZFS. Surely it makes sense that filesy
tml
This has the advantage of requiring no other libraries and no compile
phase at all.
Jonathan
d your tree
is and what your churn rate is .. we know on QFS we can go up to 100M,
but I trust the tree layout a little better there, can separate the
metadata out if I need to and have planned on it, and know that we've
got some tools to relayout the metadata or dump/restore for
sed upon
block reference count. If a block has few references, it should expire
first, and vice versa: blocks with many references should be the last
out. With all the savings on disks, think how much RAM you could buy ;)
Jon
> Check out the following blog:
>
> http://blogs.sun.com/erickustarz/entry/how_dedupalicious_is_your_pool
>
>
Unfortunately we are on Solaris 10 :( Can I get a zdb for zfs V4 that
will dump those checksums?
Jon
e willing to run it and provide feedback. :)
>
> -Tim
>
>
Me too. Our data profile is just like Tim's: terabytes of satellite
data. I'm going to guess that the d11p (dedup) ratio won't be fantastic for
us. I sure would like
ardware and software, but they are all steep on the ROI
curve. I would be very excited to see block level ZFS deduplication
roll out. Especially since we already have the infrastructure in place
using Solaris/ZFS.
Cheers,
Jon
ions.
>
>
Ben,
Haven't read this whole thread, and this has been brought up before, but
make sure your power supply is running clean. I can't tell you how many
times I've seen very strange and intermittent system errors occur from a
ld presumably expect it to be instantaneous if it was creating
a sparse file. It's not a compressed filesystem though, is it? /dev/zero
tends to be fairly compressible ;-)
I think, as someone else pointed out, running zpool iostat at the same
time might
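Something along these lines, with made-up dataset and file names: check whether
compression is enabled, then watch pool I/O while the write runs:
   # zfs get compression tank/fs
   # dd if=/dev/zero of=/tank/fs/zeros.bin bs=1024k count=4096 &
   # zpool iostat tank 1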
base files or large log files. The actual modified/appended
blocks would be sent rather than the whole changed file. This may be
an important point depending on your file modification patterns.
Jonathan
backup disk to the primary system and import it as the new
primary pool.
It's a bit-perfect incremental backup strategy that requires no
additional tools.
Jonathan
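A minimal sketch of that strategy, with made-up pool and snapshot names: take a new
recursive snapshot, send only the delta since the previous one, and the backup disk
remains an ordinary importable pool:
   # zfs snapshot -r tank@2009-06-02
   # zfs send -R -i tank@2009-06-01 tank@2009-06-02 | zfs receive -dF backup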
e-based
access, full history (although it could be collapsed by deleting older
snapshots as necessary), and no worries about stream format changes.
Jonathan
have
all of the dmu_zfetch() logic in that instead of in-line with the
original dbuf_read().
Jonathan
PS: Hi Darren!
Jonathan Loran wrote:
> Since no one has responded to my thread, I have a question: Is zdb
> suitable to run on a live pool? Or should it only be run on an exported
> or destroyed pool? In fact, I see that it has been asked before on this
> forum, but is there a users
--
Jonathan Loran - IT Manager - Space Sciences Laboratory, UC Berkeley -
(510) 643-5146 [EMAIL PROTECTED
Hi List,
First of all: S10u4 120011-14
So I have a weird situation. Earlier this week, I finally mirrored up
two iSCSI-based pools. I had been wanting to do this for some time,
because the availability of the data in these pools is important. One
pool mirrored just fine, but the other po
s, which use an indirect map,
we just use the Solaris map, thus:
auto_home:
*    zfs-server:/home/&
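(The matching auto_master entry would be the usual one, e.g.
   /home    auto_home
so every /home/<user> lookup maps to zfs-server:/home/<user>.)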
Sorry to be so off (ZFS) topic.
Jon
Dominic Kay wrote:
> Hi
>
> Firstly apologies for the spam if you got this email via multiple aliases.
>
> I'm trying to document a number of common scenarios where ZFS is used
> as part of the solution such as email server, $homeserver, RDBMS and
> so forth but taken from real implementations
Bob Friesenhahn wrote:
> On Tue, 22 Apr 2008, Jonathan Loran wrote:
>>>
>> But that's the point. You can't correct silent errors on write once
>> media because you can't write the repair.
>
> Yes, you can correct the error (at time of read) due to
Bob Friesenhahn wrote:
>> The "problem" here is that by putting the data away from your machine,
>> you lose the chance to "scrub"
>> it on a regular basis, i.e. there is always the risk of silent
>> corruption.
>>
>
> Running a scrub is pointless since the media is not writeable. :-)
>
>
Luke Scharf wrote:
> Maurice Volaski wrote:
>
>>> Perhaps providing the computations rather than the conclusions would
>>> be more persuasive on a technical list ;>
>>>
>>>
>> 2 16-disk SATA arrays in RAID 5
>> 2 16-disk SATA arrays in RAID 6
>> 1 9-disk SATA array in RAID 5.
>>
>
Chris Siebenmann wrote:
> | What your saying is independent of the iqn id?
>
> Yes. SCSI objects (including iSCSI ones) respond to specific SCSI
> INQUIRY commands with various 'VPD' pages that contain information about
> the drive/object, including serial number info.
>
> Some Googling turns up
Just to report back to the list... Sorry for the lengthy post.
So I've tested the iSCSI-based ZFS mirror on Sol 10u4, and it does more
or less work as expected. If I unplug one side of the mirror - unplug
or power down one of the iSCSI targets - I/O to the zpool stops for a
while, perhaps a
On Apr 9, 2008, at 11:46 AM, Bob Friesenhahn wrote:
> On Wed, 9 Apr 2008, Ross wrote:
>>
>> Well the first problem is that USB cables are directional, and you
>> don't have the port you need on any standard motherboard. That
>
> Thanks for that info. I did not know that.
>
>> Adding iSCSI suppor
Vincent Fox wrote:
> Followup, my initiator did eventually panic.
>
> I will have to do some setup to get a ZVOL from another system to mirror
> with, and see what happens when one of them goes away. Will post in a day or
> two on that.
>
>
On Sol 10 U4, I could have told you that. A few
kristof wrote:
> If you have a mirrored iSCSI zpool, it will NOT panic when one of the
> submirrors is unavailable.
>
> zpool status will hang for some time, but after, I think, 300 seconds it will
> mark the device as unavailable.
>
> The panic was the default in the past, and it only occurs if all
> This guy seems to have had lots of fun with iSCSI :)
> http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html
>
>
This is scaring the heck out of me. I have a project to create a zpool
mirror out of two iSCSI targets, and if the failure of one of them will
panic my system, that wil
Bob Friesenhahn wrote:
> On Tue, 25 Mar 2008, Robert Milkowski wrote:
>> As I wrote before - it's not only about RAID config - what if you have
>> hundreds of file systems, with some share{nfs|iscsi|cifs} enabled with
>> specific parameters, then specific file system options, etc.
>
> Some zfs-re
On Mar 20, 2008, at 2:00 PM, Bob Friesenhahn wrote:
> On Thu, 20 Mar 2008, Jonathan Edwards wrote:
>>
>> in that case .. try fixing the ARC size .. the dynamic resizing on
>> the ARC
>> can be less than optimal IMHO
>
> Is a 16GB ARC size not considered
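For what it's worth, pinning the ARC is typically done with a line in /etc/system
(the value here is illustrative, 0x400000000 = 16GB; it takes effect at the next boot):
   set zfs:zfs_arc_max = 0x400000000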
On Mar 20, 2008, at 11:07 AM, Bob Friesenhahn wrote:
> On Thu, 20 Mar 2008, Mario Goebbels wrote:
>
>>> Similarly, read block size does not make a
>>> significant difference to the sequential read speed.
>>
>> Last time I did a simple bench using dd, supplying the record size as
>> blocksize to it
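i.e. something along the lines of (file name made up, 128k matching the default
recordsize):
   # dd if=/tank/fs/bigfile of=/dev/null bs=128k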
On Mar 14, 2008, at 3:28 PM, Bill Shannon wrote:
> What's the best way to backup a zfs filesystem to tape, where the size
> of the filesystem is larger than what can fit on a single tape?
> ufsdump handles this quite nicely. Is there a similar backup program
> for zfs? Or a general tape manageme
Robert Milkowski wrote:
Hello Jonathan,
Friday, March 14, 2008, 9:48:47 PM, you wrote:
>
Carson Gaspar wrote:
Bob Friesenhahn wrote:
On Fri, 14 Mar 2008, Bill Shannon wrote:
What's the best way to backup a zfs filesystem to tape, where the size
of the files
's choice of NFSv4 ACLs. This is the only way to ensure
CIFS compatibility, and it is the way the industry will be moving.
Jon
Patrick Bachmann wrote:
Jonathan,
On Tue, Mar 04, 2008 at 12:37:33AM -0800, Jonathan Loran wrote:
I'm not sure I follow how this would work.
The keyword here is thin provisioning. The sparse zvol only uses
as much space as the actual data needs. So, if you use a sparse
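For example, a zvol created with -s is sparse (thin provisioned), so the size given
is only an upper bound and space is consumed only as data is written (name and size
made up):
   # zfs create -s -V 500g tank/sparsevol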
Patrick Bachmann wrote:
> Jonathan,
>
> On Mon, Mar 03, 2008 at 11:14:14AM -0800, Jonathan Loran wrote:
>
>> What I'm left with now is to do more expensive modifications to the new
>> mirror to increase its size, or using zfs send | receive or rsync to
>>
Shawn Ferry wrote:
On Mar 3, 2008, at 2:14 PM, Jonathan Loran wrote:
Now I know this is counterculture, but it's biting me in the backside
right now, and ruining my life.
I have a storage array (iSCSI SAN) that is performing badly, and
requires some upgrades/reconfiguration. I h
with Solaris instead on the SAN box? It's just commodity x86 server
hardware.
My life is ruined by too many choices, and not enough time to evaluate
everything.
Jon
the ZIO pipeline gets filled from the dmu_tx routines (for the whole
pool), I guess it would make the most sense to look at the
dmu_tx_create() entry from vnops (as Jeff already pointed out).
---
jonathan
On Mar 1, 2008, at 4:14 PM, Bill Shannon wrote:
> Ok, that's much better! At least I'm getting output when I touch files
> on zfs. However, even though zpool iostat is reporting activity, the
> above program isn't showing any file accesses when the system is idle.
>
> Any ideas?
assuming th
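One way to see ZFS activity that iosnoop misses is to trace the ZFS vnode ops
directly, assuming the fbt probes for zfs_read/zfs_write exist on this build (a rough
sketch, not a polished tool):
   # dtrace -n 'fbt::zfs_read:entry,fbt::zfs_write:entry { @[execname] = count(); }'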
On Mar 1, 2008, at 3:41 AM, Bill Shannon wrote:
> Running just plain "iosnoop" shows accesses to lots of files, but none
> on my zfs disk. Using "iosnoop -d c1t1d0" or "iosnoop -m /export/home/shannon"
> shows nothing at all. I tried /usr/demo/dtrace/iosnoop.d too, still
> nothing.
> nothing.
hi Bill
Roch Bourbonnais wrote:
>
> On 28 Feb 08, at 21:00, Jonathan Loran wrote:
>
>>
>>
>> Roch Bourbonnais wrote:
>>>
>>> On 28 Feb 08, at 20:14, Jonathan Loran wrote:
>>>
>>>>
>>>> Quick question:
>>>>
Roch Bourbonnais wrote:
>
> On 28 Feb 08, at 20:14, Jonathan Loran wrote:
>
>>
>> Quick question:
>>
>> If I create a ZFS mirrored pool, will the read performance get a boost?
>> In other words, will the data/parity be read round robin between the
>
Quick question:
If I create a ZFS mirrored pool, will the read performance get a boost?
In other words, will the data/parity be read round robin between the
disks, or do both mirrored sets of data and parity get read off of both
disks? The latter case would have a CPU expense, so I would thi
On Feb 27, 2008, at 8:36 AM, Uwe Dippel wrote:
> As much as ZFS is revolutionary, it is far away from being the
> 'ultimate file system', if it doesn't know how to handle event-
> driven snapshots (I don't like the word), backups, versioning. As
> long as a high-level system utility needs to
David Magda wrote:
> On Feb 24, 2008, at 01:49, Jonathan Loran wrote:
>
>> In some circles, CDP is big business. It would be a great ZFS offering.
>
> ZFS doesn't have it built-in, but AVS may be an option in some cases:
>
> http://opensolaris.org/os/project/avs
Uwe Dippel wrote:
>> google found that solaris does have file change notification:
>> http://blogs.sun.com/praks/entry/file_events_notification
>
> Didn't see that one, thanks.
>
>> Would that do the job?
>
> It is not supposed to do a job, thanks :), it is for a presentation at a
[EMAIL PROTECTED] wrote:
On Tue, Feb 12, 2008 at 10:21:44PM -0800, Jonathan Loran wrote:
Thanks for any help anyone can offer.
I have faced a similar problem (although not exactly the same) and was going to
monitor the disk queue with dtrace but couldn't find any docs/urls abo
up for the VFS layer.
>
> I'd also check syscall latencies - it might be too obvious, but it can be
> worth checking (eg, if you discover those long latencies are only on the
> open syscall)...
>
> Brendan
>
>
>
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart
more on random I/O. The s
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
...
I know, I know, I should have gone with a JBOD setup, but it's too late for
that in this iteration of this server. When we set this up, I had the gear
already, and it's not in my budget to get new stuff right now.
What kind of arra