Re: [zfs-discuss] Raidz2 slow read speed (under 5MB/s)

2011-07-22 Thread Jonathan Chang
Nevermind this, I destroyed the raid volume, then checked each hard drive one by one, and when I put it back together, the problem fixed itself. I'm now getting 30-60MB/s read and write, which is still slow as heck, but works well for my application.

Re: [zfs-discuss] Raidz2 slow read speed (under 5MB/s)

2011-07-21 Thread Jonathan Chang
Do you mean that OI148 might have a bug that Solaris 11 Express might solve? I will download the Solaris 11 Express LiveUSB and give it a shot.

[zfs-discuss] Raidz2 slow read speed (under 5MB/s)

2011-07-21 Thread Jonathan Chang
Hello all, I'm building a file server (or just a storage that I intend to access by Workgroup from primarily Windows machines) using zfs raidz2 and openindiana 148. I will be using this to stream blu-ray movies and other media, so I will be happy if I get just 20MB/s reads, which seems like a pr

[zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jonathan Walker
>> New to ZFS, I made a critical error when migrating data and >> configuring zpools according to needs - I stored a snapshot stream to >> a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]". > >Why is this a critical error, I thought you were supposed to be >able to save the outp

[zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jonathan Walker
Hey all, New to ZFS, I made a critical error when migrating data and configuring zpools according to needs - I stored a snapshot stream to a file using "zfs send -R [filesystem]@[snapshot] >[stream_file]". When I attempted to receive the stream onto to the newly configured pool, I ended up with a
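A minimal sketch of the two approaches, with "tank", "backup" and the snapshot name as placeholders - as this thread illustrates, a stream stored in a file cannot be received at all if any part of it is corrupted, so piping send straight into receive surfaces errors immediately:

    zfs snapshot -r tank@migrate
    # risky: any corruption of the stored file makes the whole stream unreceivable
    zfs send -R tank@migrate > /backup/tank-migrate.zfs
    # safer: stream directly into the destination pool
    zfs send -R tank@migrate | zfs receive -Fdu backup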

Re: [zfs-discuss] Using multiple logs on single SSD devices

2010-08-03 Thread Jonathan Loran
On Aug 2, 2010, at 8:18 PM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Jonathan Loran >> > Because you're at pool v15, it does not matter if the log device fails while > you&

[zfs-discuss] Using multiple logs on single SSD devices

2010-08-02 Thread Jonathan Loran
Will the GUID for each pool get found by the system from the partitioned log drives? Please give me your sage advice. Really appreciate it. Jon

[zfs-discuss] modified mdb and zdb

2010-07-28 Thread Jonathan Cifuentes
Hi, I would really appreciate it if any of you can help me get the modified mdb and zdb (in any version of OpenSolaris) for digital forensic research purposes. Thank you. Jonathan Cifuentes

[zfs-discuss] Migrating ZFS/data pool to new pool on the same system

2010-05-04 Thread Jonathan
Can anyone confirm my action plan is the proper way to do this? The reason I'm doing this is I want to create 2xraidz2 pools instead of expanding my current 2xraidz1 pool. So I'll create a 1xraidz2 vdev, migrate my current 2xraidz1 pool over, destroy that pool and then add it as a 1xraidz2 vde
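A rough sketch of that sequence, assuming six-disk vdevs and hypothetical device names (verify the copy before destroying anything):

    zpool create tank2 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    zfs snapshot -r tank@move
    zfs send -R tank@move | zfs receive -Fdu tank2
    zpool destroy tank        # only once the copy is verified
    zpool add tank2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0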

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
> Do worry about media errors. Though this is the most common HDD error, it is also the cause of data loss. Fortunately, ZFS detected this and repaired it for you. Right. I assume you do recommend swapping the faulted drive out though? > Other file systems may not be so gracious.

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
Yeah, -- $smartctl -d sat,12 -i /dev/rdsk/c5t0d0 smartctl 5.39.1 2010-01-28 r3054 [i386-pc-solaris2.11] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)

Re: [zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
I just ran 'iostat -En'. This is what was reported for the drive in question (all other drives showed 0 errors across the board). All drives indicated the "illegal request... predictive failure analysis" -- c7t1d0

[zfs-discuss] Replaced drive in zpool, was fine, now degraded - ohno

2010-04-14 Thread Jonathan
I just started replacing drives in this zpool (to increase storage). I pulled the first drive, and replaced it with a new drive and all was well. It resilvered with 0 errors. This was 5 days ago. Just today I was looking around and noticed that my pool was degraded (I see now that this occurred

Re: [zfs-discuss] Couple Questions about replacing a drive in a zpool

2010-03-08 Thread Jonathan
> First a little background, I'm running b130, I have a > zpool with two Raidz1(each 4 drives, all WD RE4-GPs) > "arrays" (vdev?). They're in a Norco-4220 case > ("home" server), which just consists of SAS > backplanes (aoc-usas-l8i ->8087->backplane->SATA > drives). A couple of the drives are sh

[zfs-discuss] Couple Questions about replacing a drive in a zpool

2010-03-08 Thread Jonathan
First a little background, I'm running b130, I have a zpool with two Raidz1(each 4 drives, all WD RE4-GPs) "arrays" (vdev?). They're in a Norco-4220 case ("home" server), which just consists of SAS backplanes (aoc-usas-l8i ->8087->backplane->SATA drives). A couple of the drives are showing a

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-25 Thread Jonathan Borden
/work with the LSI-SAS expander in the supermicro chassis. Using an 1068e based HBA works fine and works well with osol. Jonathan

Re: [zfs-discuss] zfsdump

2009-11-04 Thread Jonathan Adams
The real problem for us is down to the fact that ufsdump and ufsrestore handled tape spanning and zfs send does not. We looked into having a wrapper to "zfs send" to a file and running gtar (which does support tape spanning), or cpio ... then we looked at the amount we started storing
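A sketch of that wrapper idea, assuming GNU tar (gtar) and a no-rewind tape device at /dev/rmt/0n - gtar's -M flag prompts for the next volume when a tape fills, which is the spanning behaviour ufsdump had and zfs send lacks:

    zfs snapshot tank/data@backup
    zfs send tank/data@backup > /staging/data.zfs
    gtar -cvMf /dev/rmt/0n /staging/data.zfs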

Re: [zfs-discuss] This is the scrub that never ends...

2009-09-10 Thread Jonathan Edwards
On Sep 9, 2009, at 9:29 PM, Bill Sommerfeld wrote: On Wed, 2009-09-09 at 21:30 +, Will Murnane wrote: Some hours later, here I am again: scrub: scrub in progress for 18h24m, 100.00% done, 0h0m to go Any suggestions? Let it run for another day. A pool on a build server I manage takes ab

Re: [zfs-discuss] Books on File Systems and File System Programming

2009-08-15 Thread Jonathan Edwards
On Aug 14, 2009, at 11:14 AM, Peter Schow wrote: On Thu, Aug 13, 2009 at 05:02:46PM -0600, Louis-Frédéric Feuillette wrote: I saw this question on another mailing list, and I too would like to know. And I have a couple questions of my own. == Paraphrased from other list == Does anyone have a

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-16 Thread Jonathan Borden
> > > We have a SC846E1 at work; it's the 24-disk, 4u > version of the 826e1. > > It's working quite nicely as a SATA JBOD enclosure. > We'll probably be > buying another in the coming year to have more > capacity. > Good to hear. What HBA(s) are you using against it? > I've got one too and it

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Jonathan Edwards
On Jul 4, 2009, at 11:57 AM, Bob Friesenhahn wrote: This brings me to the absurd conclusion that the system must be rebooted immediately prior to each use. see Phil's later email .. an export/import of the pool or a remount of the filesystem should clear the page cache - with mmap'd files

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Jonathan Edwards
On Jul 4, 2009, at 12:03 AM, Bob Friesenhahn wrote: % ./diskqual.sh c1t0d0 130 MB/sec c1t1d0 130 MB/sec c2t202400A0B83A8A0Bd31 13422 MB/sec c3t202500A0B83A8A0Bd31 13422 MB/sec c4t600A0B80003A8A0B096A47B4559Ed0 191 MB/sec c4t600A0B80003A8A0B096E47B456DAd0 192 MB/sec c4t600A0B80003A8A0B00

Re: [zfs-discuss] cannot mount '/tank/home': directory is not empty

2009-06-10 Thread Jonathan Edwards
i've seen a problem where periodically a 'zfs mount -a' and sometimes a 'zpool import ' can create what appears to be a race condition on nested mounts .. that is .. let's say that i have: FS mountpoint pool/export pool/fs1

Re: [zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
he zfs layer, and also do backups. Unfortunately for me, penny pinching has precluded both for us until now. Jon On Jun 1, 2009, at 4:19 PM, A Darren Dunham wrote: On Mon, Jun 01, 2009 at 03:19:59PM -0700, Jonathan Loran wrote: Kinda scary then. Better make sure we delete all the bad fil

Re: [zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
on On Jun 1, 2009, at 2:41 PM, Paul Choi wrote: "zpool clear" just clears the list of errors (and # of checksum errors) from its stats. It does not modify the filesystem in any manner. You run "zpool clear" to make the zpool forget that it ever had any issues. -Paul Jonat
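In other words, something like the following sequence (pool name hypothetical) - clear only resets the counters, so corrupted files still have to be restored and a scrub re-run to confirm the pool is clean:

    zpool status -v tank    # list files with unrecoverable errors
    # restore or delete the affected files, then:
    zpool clear tank        # forget the error counts; repairs nothing
    zpool scrub tank        # re-verify every block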

[zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
es intact? I'm going to perform a full backup of this guy (not so easy on my budget), and I would rather only get the good files. Thanks, Jon

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Jonathan
Daniel Rock wrote: > Jonathan schrieb: >> OpenSolaris Forums wrote: >>> if you have a snapshot of your files and rsync the same files again, you need to use the "--inplace" rsync option, otherwise completely new blocks will be allocated for the

Re: [zfs-discuss] Data size grew.. with compression on

2009-04-09 Thread Jonathan
blocks will be allocated for the new files. That's because rsync will write an entirely new file and rename it over the old one. ZFS will allocate new blocks either way, check here http://all-unix.blogspot.com/2007/03/zfs-cow-and-relate-features.html for more information about how
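For reference, the flag under discussion - whether it actually saves space on a copy-on-write filesystem is exactly what this thread disputes, and the paths here are placeholders:

    # default: rsync writes a temporary copy and renames it over the old file
    rsync -a /source/dir/ /tank/dir/
    # --inplace rewrites only the changed regions of the existing file
    rsync -a --inplace /source/dir/ /tank/dir/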

Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Jonathan
Michael Shadle wrote: > On Sat, Mar 28, 2009 at 1:37 AM, Peter Tribble wrote: > >> zpool add tank raidz1 disk_1 disk_2 disk_3 ... >> >> (The syntax is just like creating a pool, only with add instead of create.) > > so I can add individual disks to the existing tank zpool anytime i want? Using th
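A sketch of the add syntax with hypothetical device names (note that this adds a whole new raidz1 vdev rather than growing the existing one); the -n dry run shows the resulting layout without touching the pool:

    zpool add -n tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
    zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
    zpool status tank    # the pool now stripes across both raidz1 vdevs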

Re: [zfs-discuss] ZFS and SNDR..., now I'm confused.

2009-03-06 Thread Jonathan Edwards
On Mar 6, 2009, at 8:58 AM, Andrew Gabriel wrote: Jim Dunham wrote: ZFS the filesystem is always on disk consistent, and ZFS does maintain filesystem consistency through coordination between the ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately for SNDR, ZFS caches a lot o

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-22 Thread Jonathan Edwards
not quite .. it's 16KB at the front and 8MB at the back of the disk (16384 sectors) for the Solaris EFI - so you need to zero out both of these. Of course, since these drives are <1TB, I find it's easier to format to SMI (vtoc) .. with format -e (choose SMI, label, save, validate - then choose EFI
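A sketch of zeroing those two regions, with a placeholder device name and DISK_SECTORS standing in for the disk's total sector count (read it from format or prtvtoc first) - be very sure of the device before running dd against it:

    # first 16KB of the disk
    dd if=/dev/zero of=/dev/rdsk/c5t0d0p0 bs=512 count=32
    # last 8MB (16384 sectors)
    dd if=/dev/zero of=/dev/rdsk/c5t0d0p0 bs=512 seek=$((DISK_SECTORS - 16384)) count=16384
    # or relabel interactively as described above
    format -e c5t0d0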

Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-15 Thread Jonathan
's easier just to spend the money on enough hardware to do it properly without the chance of data loss and the extended down time. "Doesn't invest the time in" may be a better phrase than "avoids" though. I doubt Sun actually goes out of their way to make things harder for people. Hope that helps, Jonathan

Re: [zfs-discuss] Drive Checksum error

2008-12-16 Thread Jonathan
u start seeing hundreds of errors be sure to check things like the cable. I had a SATA cable come loose on a home ZFS fileserver and scrub was throwing 100's of errors even though the drive itself was fine, I don't want to think about what could have happened with UFS... H

Re: [zfs-discuss] Inexpensive ZFS home server

2008-11-12 Thread Jonathan Loran
the system board for this machine would make use of ECC memory either, which is not good from a ZFS perspective. How many SATA plugs are there on the MB in this guy? Jon

Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-06 Thread Jonathan Hogg
y, give it a go and see what happens. I'm sure I can still dimly recall a time when 500MHz/512MB was a kick-ass system... Jonathan (*) This machine can sustain 110MB/s off of the 4-disk RAIDZ1 set, which is substantially more than I can get over my 100Mb network.

Re: [zfs-discuss] [storage-discuss] ZFS Success Stories

2008-10-20 Thread Jonathan Loran
tools, resilience of the platform, etc.).. > > .. Of course though, I guess a lot of people who may have never had a > problem wouldn't even be signed up on this list! :-) > > > Thanks!

[zfs-discuss] [Fwd: Another ZFS question]

2008-09-27 Thread jonathan sai
Hi Please see the query below. Appreciate any help. Rgds jonathan Original Message Would you mind helping me ask your tech guy whether there will be repercussions when I try to run this command in view of the situation below: # zpool add -f zhome raidz

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-26 Thread Jonathan Loran
two vdevs out of two raidz to see if you get twice the throughput, more or less. I'll bet the answer is yes. Jon

Re: [zfs-discuss] zfs resilvering

2008-09-26 Thread jonathan
asis in reality until it's about 1% done or so. I think there is some bookkeeping or something ZFS does at the start of a scrub or resilver that throws off the time estimate for a while. That's just my experience with it but it's been like that pretty consistently for me. Jonathan

Re: [zfs-discuss] zfs-auto-snapshot default schedules

2008-09-25 Thread Jonathan Hogg
On 25 Sep 2008, at 17:14, Darren J Moffat wrote: > Chris Gerhard has a zfs_versions script that might help: > http://blogs.sun.com/chrisg/entry/that_there_is Ah. Cool. I will have to try this out. Jonathan

Re: [zfs-discuss] zfs-auto-snapshot default schedules

2008-09-25 Thread Jonathan Hogg
s requires me to a) type more; and b) remember where the top of the filesystem is in order to split the path. This is obviously more of a pain if the path is 7 items deep, and the split means you can't just use $PWD. [My choice of .snapshot/nightly.0 is a deliberate nod to the

Re: [zfs-discuss] pulling disks was: ZFS hangs/freezes after disk failure,

2008-08-28 Thread Jonathan Loran
value of a failure in one year: Fe = 46% failures/month * 12 months = 5.52 failures Jon

Re: [zfs-discuss] corrupt zfs stream? checksum mismatch

2008-08-15 Thread Jonathan Wheeler
e a chance of being recovered. If it stops half way, it has _no_ chance of recovering that data, so I favor my odds of letting it go on to at least try :) Or is that an entirely new CR itself? Jonathan

Re: [zfs-discuss] corrupt zfs stream? checksum mismatch

2008-08-13 Thread Jonathan Wheeler
ID=220125 It's way over my head, but if anyone can tell me the mdb commands I'm happy to try them, even if they do kill my cat. I don't really have anything to lose with a copy of the data, and I'll do it all in a VM anyway. Thanks, Jonathan

Re: [zfs-discuss] corrupt zfs stream? checksum mismatch

2008-08-13 Thread Jonathan Wheeler
over the /home fs from the pre-zfsroot.zfs dump? Since there seems to be a problem with the first fs (faith/virtualmachines), I need to find a way to skip restoring that zfs, so it can focus on the faith/home fs. How can this be achieved with zfs receive? Jonathan

Re: [zfs-discuss] corrupt zfs stream? "checksum mismatch"

2008-08-12 Thread Jonathan Wheeler
other helpful chap pointed out, if tar encounters an error in the bitstream it just moves on until it finds usable data again. Can zfs not do something similar? I'll take whatever I can get! Jonathan

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jonathan Loran
Jorgen Lundman wrote: > # /usr/X11/bin/scanpci | /usr/sfw/bin/ggrep -A1 "vendor 0x11ab device > 0x6081" > pci bus 0x0001 cardnum 0x01 function 0x00: vendor 0x11ab device 0x6081 > Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller > > But it claims resolved for our version:

[zfs-discuss] corrupt zfs stream? checksum mismatch

2008-08-10 Thread Jonathan Wheeler
it's not so!), why can't I at least have the 20GB of data that it can restore before it bombs out with that checksum error? Thanks for any help with this! Jonathan

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-31 Thread Jonathan Loran
Miles Nordin wrote: >> "s" == Steve <[EMAIL PROTECTED]> writes: >> > > s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354 > > no ECC: > > http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets > This MB will take these: http://www.inte

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-30 Thread Jonathan Loran
e best position to monitor the device. > > > > The primary goal of ZFS is to be able to correctly read data which was > > successfully committed to disk. There are programming interfaces > > (e.g. fsync(), msync()) which may be used to en

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-29 Thread Jonathan Loran
it be possible to have a number of possible places to store this > log? What I'm thinking is that if the system drive is unavailable, > ZFS could try each pool in turn and attempt to store the log there. > > In fact e-mail alerts or external error logging would be a great > addition to ZFS. Surely it makes sense that filesy

Re: [zfs-discuss] Announcement: The Unofficial Unsupported Python ZFS API

2008-07-14 Thread Jonathan Hogg
tml This has the advantage of requiring no other libraries and no compile phase at all. Jonathan

Re: [zfs-discuss] Largest (in number of files) ZFS instance tested

2008-07-11 Thread Jonathan Edwards
d your tree is and what your churn rate is .. we know on QFS we can go up to 100M, but i trust the tree layout a little better there, can separate the metadata out if i need to and have planned on it, and know that we've got some tools to relayout the metadata or dump/restore for

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Jonathan Loran
sed upon block reference count. If a block has few references, it should expire first, and vice versa, blocks with many references should be the last out. With all the savings on disks, think how much RAM you could buy ;) Jon

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Jonathan Loran
> Check out the following blog..: > > http://blogs.sun.com/erickustarz/entry/how_dedupalicious_is_your_pool > > Unfortunately we are on Solaris 10 :( Can I get a zdb for zfs V4 that will dump those checksums? Jon

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Jonathan Loran
e willing to run it and provide feedback. :) > > -Tim > > Me too. Our data profile is just like Tim's: terabytes of satellite data. I'm going to guess that the d11p ratio won't be fantastic for us. I sure would like

Re: [zfs-discuss] ZFS deduplication

2008-07-07 Thread Jonathan Loran
ardware and software, but they are all steep on the ROI curve. I would be very excited to see block level ZFS deduplication roll out. Especially since we already have the infrastructure in place using Solaris/ZFS. Cheers, Jon

Re: [zfs-discuss] Cannot delete errored file

2008-06-13 Thread Jonathan Loran
ions. > > Ben, Haven't read this whole thread, and this has been brought up before, but make sure your power supply is running clean. I can't tell you how many times I've seen very strange and intermittent system errors occur from a

Re: [zfs-discuss] SATA controller suggestion

2008-06-09 Thread Jonathan Hogg
ld presumably expect it to be instantaneous if it was creating a sparse file. It's not a compressed filesystem though is it? /dev/zero tends to be fairly compressible ;-) I think, as someone else pointed out, running zpool iostat at the same time might

Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-30 Thread Jonathan Hogg
base files or large log files. The actual modified/appended blocks would be sent rather than the whole changed file. This may be an important point depending on your file modification patterns. Jonathan

Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Jonathan Hogg
backup disk to the primary system and import it as the new primary pool. It's a bit-perfect incremental backup strategy that requires no additional tools. Jonathan
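A sketch of that strategy with placeholder pool and snapshot names - a full send seeds the backup pool, nightly incrementals keep it current, and the backup can be imported directly if the primary is lost:

    zfs snapshot -r tank@day1
    zfs send -R tank@day1 | zfs receive -Fdu backup
    # each night, send only the changes since the previous snapshot
    zfs snapshot -r tank@day2
    zfs send -R -i tank@day1 tank@day2 | zfs receive -Fdu backup
    # after a primary failure, attach the backup disk and import it
    zpool import backup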

Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Jonathan Hogg
e-based access, full history (although it could be collapsed by deleting older snapshots as necessary), and no worries about stream format changes. Jonathan

[zfs-discuss] Video streaming and prefetch

2008-05-06 Thread Jonathan Hogg
have all of the dmu_zfetch() logic in that instead of in-line with the original dbuf_read(). Jonathan PS: Hi Darren!

Re: [zfs-discuss] Inconcistancies with scrub and zdb

2008-05-05 Thread Jonathan Loran
Jonathan Loran wrote: > Since no one has responded to my thread, I have a question: Is zdb > suitable to run on a live pool? Or should it only be run on an exported > or destroyed pool? In fact, I see that it has been asked before on this > forum, but is there a users

Re: [zfs-discuss] Inconcistancies with scrub and zdb

2008-05-05 Thread Jonathan Loran
-- Jonathan Loran, IT Manager, Space Sciences Laboratory, UC Berkeley, (510) 643-5146

[zfs-discuss] Inconcistancies with scrub and zdb

2008-05-04 Thread Jonathan Loran
Hi List, First of all: S10u4 120011-14 So I have the weird situation. Earlier this week, I finally mirrored up two iSCSI based pools. I had been wanting to do this for some time, because the availability of the data in these pools is important. One pool mirrored just fine, but the other po

Re: [zfs-discuss] share zfs hierarchy over nfs

2008-04-29 Thread Jonathan Loran
s, which use an indirect map, we just use the Solaris map, thus: auto_home: *zfs-server:/home/& Sorry to be so off (ZFS) topic. Jon
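Spelled out a little more, assuming the server's home datasets are mounted under /home and shared over NFS (names are hypothetical):

    # on the file server
    zfs create tank/home/alice
    zfs set sharenfs=on tank/home
    # on the clients, /etc/auto_home (an indirect map)
    *    zfs-server:/home/&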

Re: [zfs-discuss] ZFS - Implementation Successes and Failures

2008-04-29 Thread Jonathan Loran
Dominic Kay wrote: > Hi > > Firstly apologies for the spam if you got this email via multiple aliases. > > I'm trying to document a number of common scenarios where ZFS is used > as part of the solution such as email server, $homeserver, RDBMS and > so forth but taken from real implementations

Re: [zfs-discuss] ZFS for write-only media?

2008-04-22 Thread Jonathan Loran
Bob Friesenhahn wrote: > On Tue, 22 Apr 2008, Jonathan Loran wrote: >>> >> But that's the point. You can't correct silent errors on write once >> media because you can't write the repair. > > Yes, you can correct the error (at time of read) due to

Re: [zfs-discuss] ZFS for write-only media?

2008-04-22 Thread Jonathan Loran
Bob Friesenhahn wrote: >> The "problem" here is that by putting the data away from your machine, you lose the chance to "scrub" it on a regular basis, i.e. there is always the risk of silent corruption. > > Running a scrub is pointless since the media is not writeable. :-) >

Re: [zfs-discuss] 24-port SATA controller options?

2008-04-15 Thread Jonathan Loran
Luke Scharf wrote: > Maurice Volaski wrote: > >>> Perhaps providing the computations rather than the conclusions would >>> be more persuasive on a technical list ;> >>> >>> >> 2 16-disk SATA arrays in RAID 5 >> 2 16-disk SATA arrays in RAID 6 >> 1 9-disk SATA array in RAID 5. >> >

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-10 Thread Jonathan Loran
Chris Siebenmann wrote: > | What your saying is independent of the iqn id? > > Yes. SCSI objects (including iSCSI ones) respond to specific SCSI > INQUIRY commands with various 'VPD' pages that contain information about > the drive/object, including serial number info. > > Some Googling turns up

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-09 Thread Jonathan Loran
Just to report back to the list... Sorry for the lengthy post. So I've tested the iSCSI based zfs mirror on Sol 10u4, and it does more or less work as expected. If I unplug one side of the mirror - unplug or power down one of the iSCSI targets - I/O to the zpool stops for a while, perhaps a

Re: [zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-09 Thread Jonathan Edwards
On Apr 9, 2008, at 11:46 AM, Bob Friesenhahn wrote: > On Wed, 9 Apr 2008, Ross wrote: >> >> Well the first problem is that USB cables are directional, and you >> don't have the port you need on any standard motherboard. That > > Thanks for that info. I did not know that. > >> Adding iSCSI suppor

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-05 Thread Jonathan Loran
Vincent Fox wrote: > Followup, my initiator did eventually panic. > > I will have to do some setup to get a ZVOL from another system to mirror > with, and see what happens when one of them goes away. Will post in a day or > two on that. > > On Sol 10 U4, I could have told you that. A few

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-05 Thread Jonathan Loran
kristof wrote: > If you have a mirrored iscsi zpool. It will NOT panic when 1 of the > submirrors is unavailable. > > zpool status will hang for some time, but after I think 300 seconds it will > put the device on unavailable. > > The panic was the default in the past, and it only occurs if all

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-04 Thread Jonathan Loran
> This guy seems to have had lots of fun with iSCSI :) > http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html > > This is scaring the heck out of me. I have a project to create a zpool mirror out of two iSCSI targets, and if the failure of one of them will panic my system, that wil

Re: [zfs-discuss] Backup-ing up ZFS configurations

2008-03-25 Thread Jonathan Loran
Bob Friesenhahn wrote: > On Tue, 25 Mar 2008, Robert Milkowski wrote: >> As I wrote before - it's not only about RAID config - what if you have >> hundreds of file systems, with some share{nfs|iscsi|cifs) enabled with >> specific parameters, then specific file system options, etc. > > Some zfs-re
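One possible way to capture that state alongside the data backups, so the pool layout and per-filesystem share/quota settings can be replayed by hand later (pool name is a placeholder):

    zpool status tank      > /backup/tank.zpool-status
    zpool history -l tank  > /backup/tank.zpool-history
    zfs get -rHp all tank  > /backup/tank.zfs-properties
    zfs list -rH -o name,mountpoint,sharenfs,quota tank > /backup/tank.zfs-list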

Re: [zfs-discuss] ZFS I/O algorithms

2008-03-20 Thread Jonathan Edwards
On Mar 20, 2008, at 2:00 PM, Bob Friesenhahn wrote: > On Thu, 20 Mar 2008, Jonathan Edwards wrote: >> >> in that case .. try fixing the ARC size .. the dynamic resizing on >> the ARC >> can be less than optimal IMHO > > Is a 16GB ARC size not considered

Re: [zfs-discuss] ZFS I/O algorithms

2008-03-20 Thread Jonathan Edwards
On Mar 20, 2008, at 11:07 AM, Bob Friesenhahn wrote: > On Thu, 20 Mar 2008, Mario Goebbels wrote: > >>> Similarly, read block size does not make a >>> significant difference to the sequential read speed. >> >> Last time I did a simple bench using dd, supplying the record size as >> blocksize to it

Re: [zfs-discuss] zfs backups to tape

2008-03-16 Thread Jonathan Edwards
On Mar 14, 2008, at 3:28 PM, Bill Shannon wrote: > What's the best way to backup a zfs filesystem to tape, where the size > of the filesystem is larger than what can fit on a single tape? > ufsdump handles this quite nicely. Is there a similar backup program > for zfs? Or a general tape manageme

Re: [zfs-discuss] zfs backups to tape

2008-03-14 Thread Jonathan Loran
Robert Milkowski wrote: Hello Jonathan, Friday, March 14, 2008, 9:48:47 PM, you wrote: > Carson Gaspar wrote: Bob Friesenhahn wrote: On Fri, 14 Mar 2008, Bill Shannon wrote: What's the best way to backup a zfs filesystem to tape, where the size of the files

Re: [zfs-discuss] zfs backups to tape

2008-03-14 Thread Jonathan Loran
's choice of NFS v4 ACLs. This is the only way to ensure CIFS compatibility, and it is the way the industry will be moving. Jon

Re: [zfs-discuss] Mirroring to a smaller disk

2008-03-04 Thread Jonathan Loran
Patrick Bachmann wrote: Jonathan, On Tue, Mar 04, 2008 at 12:37:33AM -0800, Jonathan Loran wrote: I'm not sure I follow how this would work. The keyword here is thin provisioning. The sparse zvol only uses as much space as the actual data needs. So, if you use a sparse
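For illustration, creating such a sparse zvol and attaching it as a mirror side, with hypothetical pool and device names - the zvol advertises its full size but only consumes space as data is actually written:

    zfs create -s -V 500g backup/mirrorvol
    zpool attach tank c1t0d0 /dev/zvol/dsk/backup/mirrorvol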

Re: [zfs-discuss] Mirroring to a smaller disk

2008-03-04 Thread Jonathan Loran
Patrick Bachmann wrote: > Jonathan, > > On Mon, Mar 03, 2008 at 11:14:14AM -0800, Jonathan Loran wrote: > >> What I'm left with now is to do more expensive modifications to the new >> mirror to increase its size, or using zfs send | receive or rsync to >>

Re: [zfs-discuss] Mirroring to a smaller disk

2008-03-03 Thread Jonathan Loran
Shawn Ferry wrote: On Mar 3, 2008, at 2:14 PM, Jonathan Loran wrote: Now I know this is counterculture, but it's biting me in the back side right now, and ruining my life. I have a storage array (iSCSI SAN) that is performing badly, and requires some upgrades/reconfiguration. I h

[zfs-discuss] Mirroring to a smaller disk

2008-03-03 Thread Jonathan Loran
with Solaris instead on the SAN box? It's just commodity x86 server hardware. My life is ruined by too many choices, and not enough time to evaluate everything. Jon

Re: [zfs-discuss] [dtrace-discuss] periodic ZFS disk accesses

2008-03-01 Thread Jonathan Edwards
the ZIO pipeline gets filled from the dmu_tx routines (for the whole pool), I guess it would make the most sense to look at the dmu_tx_create() entry from vnops (as Jeff already pointed out.) --- jonathan

Re: [zfs-discuss] periodic ZFS disk accesses

2008-03-01 Thread Jonathan Edwards
On Mar 1, 2008, at 4:14 PM, Bill Shannon wrote: > Ok, that's much better! At least I'm getting output when I touch > files > on zfs. However, even though zpool iostat is reporting activity, the > above program isn't showing any file accesses when the system is idle. > > Any ideas? assuming th

Re: [zfs-discuss] periodic ZFS disk accesses

2008-03-01 Thread Jonathan Edwards
On Mar 1, 2008, at 3:41 AM, Bill Shannon wrote: > Running just plain "iosnoop" shows accesses to lots of files, but none > on my zfs disk. Using "iosnoop -d c1t1d0" or "iosnoop -m /export/ > home/shannon" > shows nothing at all. I tried /usr/demo/dtrace/iosnoop.d too, still > nothing. hi Bil

Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran
Roch Bourbonnais wrote: > > Le 28 févr. 08 à 21:00, Jonathan Loran a écrit : > >> >> >> Roch Bourbonnais wrote: >>> >>> Le 28 févr. 08 à 20:14, Jonathan Loran a écrit : >>> >>>> >>>> Quick question: >>>>

Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran
Roch Bourbonnais wrote: > > Le 28 févr. 08 à 20:14, Jonathan Loran a écrit : > >> >> Quick question: >> >> If I create a ZFS mirrored pool, will the read performance get a boost? >> In other words, will the data/parity be read round robin between the >

[zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran
Quick question: If I create a ZFS mirrored pool, will the read performance get a boost? In other words, will the data/parity be read round robin between the disks, or do both mirrored sets of data and parity get read off of both disks? The latter case would have a CPU expense, so I would thi

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-27 Thread Jonathan Edwards
On Feb 27, 2008, at 8:36 AM, Uwe Dippel wrote: > As much as ZFS is revolutionary, it is far away from being the > 'ultimate file system', if it doesn't know how to handle event- > driven snapshots (I don't like the word), backups, versioning. As > long as a high-level system utility needs to

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-25 Thread Jonathan Loran
David Magda wrote: > On Feb 24, 2008, at 01:49, Jonathan Loran wrote: > >> In some circles, CDP is big business. It would be a great ZFS offering. > > ZFS doesn't have it built-in, but AVS may be an option in some cases: > > http://opensolaris.org/os/project/avs

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-23 Thread Jonathan Loran
Uwe Dippel wrote: > "google found that solaris does have file change notification: http://blogs.sun.com/praks/entry/file_events_notification" > > Didn't see that one, thanks. > > "Would that do the job?" > > It is not supposed to do a job, thanks :), it is for a presentation at a

Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread Jonathan Loran
[EMAIL PROTECTED] wrote: On Tue, Feb 12, 2008 at 10:21:44PM -0800, Jonathan Loran wrote: Thanks for any help anyone can offer. I have faced a similar problem (although not exactly the same) and was going to monitor disk queue with dtrace but couldn't find any docs/urls abo

Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread Jonathan Loran
up for the VFS layer. > > I'd also check syscall latencies - it might be too obvious, but it can be worth checking (eg, if you discover those long latencies are only on the open syscall)... > > Brendan
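For example, a quick (hypothetical) one-liner for the open(2) case, showing the latency distribution in nanoseconds:

    dtrace -n 'syscall::open*:entry { self->ts = timestamp }
               syscall::open*:return /self->ts/ {
                   @["open (ns)"] = quantize(timestamp - self->ts); self->ts = 0; }'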

Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread Jonathan Loran
Marion Hakanson wrote: [EMAIL PROTECTED] said: It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP. Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart more on random I/O. The s

Re: [zfs-discuss] Which DTrace provider to use

2008-02-13 Thread Jonathan Loran
Marion Hakanson wrote: [EMAIL PROTECTED] said: ... I know, I know, I should have gone with a JBOD setup, but it's too late for that in this iteration of this server. When we set this up, I had the gear already, and it's not in my budget to get new stuff right now. What kind of arra
