Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Torrey McMahon
On 4/6/2011 11:08 AM, Erik Trimble wrote: Traditionally, the reason for a separate /var was one of two major items: (a) /var was writable, and / wasn't - this was typical of diskless or minimal local-disk configurations. Modern packaging systems are making this kind of configuration increasi

Re: [zfs-discuss] ZFS Performance

2011-02-28 Thread Torrey McMahon
On 2/25/2011 4:15 PM, Torrey McMahon wrote: On 2/25/2011 3:49 PM, Tomas Ögren wrote: On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes: > Hi All, > > In reading the ZFS Best practices, I'm curious if this statement is > still true about 80% utilizatio

Re: [zfs-discuss] ZFS Performance

2011-02-25 Thread Torrey McMahon
On 2/25/2011 3:49 PM, Tomas Ögren wrote: On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes: > Hi All, > > In reading the ZFS Best practices, I'm curious if this statement is > still true about 80% utilization. It happens at about 90% for me.. all of a sudden, the mail

Re: [zfs-discuss] One LUN per RAID group

2011-02-15 Thread Torrey McMahon
On 2/14/2011 10:37 PM, Erik Trimble wrote: That said, given that SAN NVRAM caches are true write caches (and not a ZIL-like thing), it should be relatively simple to swamp one with write requests (most SANs have little more than 1GB of cache), at which point, the SAN will be blocking on flushi

Re: [zfs-discuss] [storage-discuss] multipath used inadvertantly?

2011-02-15 Thread Torrey McMahon
in.mpathd is the IP multipath daemon. (Yes, it's a bit confusing that mpathadm is the storage multipath admin tool. ) If scsi_vhci is loaded in the kernel you have storage multipathing enabled. (Check with modinfo.) On 2/15/2011 3:53 PM, Ray Van Dolson wrote: I'm troubleshooting an existing
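For anyone hitting the same confusion, a quick sketch of how to tell the two layers apart (assuming root on a Solaris host with MPxIO available):

    # Storage multipathing (scsi_vhci / MPxIO): is the driver loaded?
    modinfo | grep scsi_vhci
    # List the logical units it is managing:
    mpathadm list lu
    # IP multipathing (in.mpathd) is a separate facility, configured
    # through IPMP interface groups, not mpathadm.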

Re: [zfs-discuss] Best choice - file system for system

2011-01-31 Thread Torrey McMahon
daily basis to do ufsdumps of large fs'es. Mark On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote: On 1/30/2011 5:26 PM, Joerg Schilling wrote: Richard Elling wrote: ufsdump is the problem, not ufsrestore. If you ufsdump an active file system, there is no guarantee you can ufsrestore it

Re: [zfs-discuss] Best choice - file system for system

2011-01-30 Thread Torrey McMahon
On 1/30/2011 5:26 PM, Joerg Schilling wrote: Richard Elling wrote: ufsdump is the problem, not ufsrestore. If you ufsdump an active file system, there is no guarantee you can ufsrestore it. The only way to guarantee this is to keep the file system quiesced during the entire ufsdump. Needless

Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-25 Thread Torrey McMahon
On 1/25/2011 2:19 PM, Marion Hakanson wrote: The only special tuning I had to do was turn off round-robin load-balancing in the mpxio configuration. The Seagate drives were incredibly slow when running in round-robin mode, very speedy without. Interesting. Did you switch to the load-balance op

Re: [zfs-discuss] How well does zfs mirror handle temporary disk offlines?

2011-01-18 Thread Torrey McMahon
On 1/18/2011 2:46 PM, Philip Brown wrote: My specific question is, how easily does ZFS handle *temporary* SAN disconnects, to one side of the mirror? What if the outage is only 60 seconds? 3 minutes? 10 minutes? an hour? Depends on the multipath drivers and the failure mode. For example, if

Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Torrey McMahon
I've seen a lot of cases where enabling compression helps with systems that are disk-bound. If you've got extra CPU ... give it a shot. On 1/18/2011 10:11 AM, Michael Armstrong wrote: I've since turned off dedup, added another 3 drives and results have improved to around 148388K/sec on average
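A minimal sketch of the experiment, assuming a placeholder dataset tank/data:

    zfs set compression=on tank/data   # only blocks written after this are compressed
    zfs get compressratio tank/data    # check whether it's paying off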

Re: [zfs-discuss] Changing GUID

2010-11-15 Thread Torrey McMahon
Are those really your requirements? What is it that you're trying to accomplish with the data? Make a copy and provide it to another host? On 11/15/2010 5:11 AM, sridhar surampudi wrote: Hi I am looking in similar lines, my requirement is 1. create a zpool on one or many devices ( LUNs ) from a

Re: [zfs-discuss] ZFS no longer working with FC devices.

2010-05-23 Thread Torrey McMahon
On 5/23/2010 11:49 AM, Richard Elling wrote: FWIW, the A5100 went end-of-life (EOL) in 2001 and end-of-service-life (EOSL) in 2006. Personally, I hate them with a passion and would like to extend an offer to use my tractor to bury the beast:-). I'm sure I can get some others to help. Can I sm

Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Torrey McMahon
The author mentions multipathing software in the blog entry. Kind of hard to mix that up with cache mirroring if you ask me. On 4/5/2010 9:16 PM, Brad wrote: I'm wondering if the author is talking about "cache mirroring" where the cache is mirrored between both controllers. If that is the ca

Re: [zfs-discuss] mpxio load-balancing...it doesn't work??

2010-04-05 Thread Torrey McMahon
Not true. There are different ways that a storage array, and its controllers, connect to the host-visible front-end ports, which might be confusing the author, but I/O isn't duplicated as he suggests. On 4/4/2010 9:55 PM, Brad wrote: I had always thought that with mpxio, it load-balances IO re

Re: [zfs-discuss] demise of community edition

2010-01-31 Thread Torrey McMahon
This is a topic for indiana-discuss, not zfs-discuss. If you read through the archives of that alias you should see some pointers. On 1/31/2010 11:38 AM, Tom Bird wrote: Afternoon, I note to my dismay that I can't get the "community edition" any more past snv_129, this version was closest to

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Torrey McMahon
On 1/8/2010 10:04 AM, James Carlson wrote: Mike Gerdts wrote: This unsupported feature is supported with the use of Sun Ops Center 2.5 when a zone is put on a "NAS Storage Library". Ah, ok. I didn't know that. Does anyone know how that works? I can't find it in the docs, no on

Re: [zfs-discuss] ZFS and LiveUpgrade

2010-01-07 Thread Torrey McMahon
Make sure you have the latest LU patches installed. There were a lot of fixes put back in that area within the last six months or so.

Re: [zfs-discuss] Thin device support in ZFS?

2009-12-30 Thread Torrey McMahon
On 12/30/2009 2:40 PM, Richard Elling wrote: There are a few minor bumps in the road. The ATA PASSTHROUGH command, which allows TRIM to pass through the SATA drivers, was just integrated into b130. This will be more important to small servers than SANs, but the point is that all parts of the sof

Re: [zfs-discuss] primarycache and secondarycache properties on Solaris 10 u8

2009-10-15 Thread Torrey McMahon
Suggest you start with the man page: http://docs.sun.com/app/docs/doc/819-2240/zfs-1m On 10/15/2009 4:19 PM, Javier Conde wrote: Hello, I've seen in the "what's new" of Solaris 10 update 8 just released that ZFS now includes the "primarycache" and "secondarycache" properties. Is this the "e
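For the impatient, a sketch of the knobs (dataset name is a placeholder; valid values for both properties are all, none, and metadata):

    zfs set primarycache=metadata tank/db    # ARC caches metadata only
    zfs set secondarycache=all tank/db       # L2ARC may hold data and metadata
    zfs get primarycache,secondarycache tank/db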

Re: [zfs-discuss] Petabytes on a budget - blog

2009-09-02 Thread Torrey McMahon
As some Sun folks pointed out 1) No redundancy at the power or networking side 2) Getting 2TB drives in a x4540 would make the numbers closer 3) Performance isn't going to be that great with their design but...they might not need it. On 9/2/2009 2:13 PM, Michael Shadle wrote: Yeah I wrote the

[zfs-discuss] Compression/copies on root pool RFE

2009-05-05 Thread Torrey McMahon
Before I put one in ... anyone else seen one? Seems we support compression on the root pool but there is no way to enable it at install time outside of a custom script you run before the installer. I'm thinking it should be a real install time option, have a jumpstart keyword, etc. Same with c
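Until an install-time option exists, the after-the-fact step looks roughly like this (pool name assumed; already-written blocks stay uncompressed):

    zfs set compression=on rpool    # future writes compressed; inherited by child datasets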

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD ? Or Singe MetaLUN ?)

2009-05-01 Thread Torrey McMahon
On 5/1/2009 2:01 PM, Miles Nordin wrote: I've never heard of using multiple-LUN stripes for storage QoS before. Have you actually measured some improvement in this configuration over a single LUN? If so that's interesting. Because of the way queueing works in the OS and in most array controllers

Re: [zfs-discuss] StorageTek 2540 performance radically changed

2009-04-20 Thread Torrey McMahon
On 4/20/2009 7:26 PM, Robert Milkowski wrote: Well, you need to disable cache flushes on zfs side then (or make a firmware change work) and it will make a difference. If you're running recent OpenSolaris/Solaris/SX builds you shouldn't have to disable cache flushing on the array. The drive

Re: [zfs-discuss] ZFS vs ZFS + HW raid? Which is best?

2009-01-20 Thread Torrey McMahon
On 1/20/2009 1:14 PM, Richard Elling wrote: > Orvar Korvar wrote: > >> What does this mean? Does that mean that ZFS + HW raid with raid-5 is not >> able to heal corrupted blocks? Then this is evidence against ZFS + HW raid, >> and you should only use ZFS? >> >> http://www.solarisinternals.com

Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Torrey McMahon
On 12/29/2008 10:36 PM, Tim wrote: > On Mon, Dec 29, 2008 at 8:52 PM, Torrey McMahon wrote: > > On 12/29/2008 8:20 PM, Tim wrote: > > I run into the same thing but once I say, "I can add more space > without d

Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Torrey McMahon
On 12/29/2008 8:20 PM, Tim wrote: > On Mon, Dec 29, 2008 at 6:09 PM, Torrey McMahon wrote: > > > There are some mainframe filesystems that do such things. I think > there > was also an STK array - Iceberg[?]

Re: [zfs-discuss] Zero page reclaim with ZFS

2008-12-29 Thread Torrey McMahon
Cyril Payet wrote: >> Hello there, >> Hitachi USP-V (sold as 9990V by Sun) provides thin provisioning, >> known as Hitachi Dynamic Provisioning (HDP). >> This gives a way to make the OS believes that a huge lun is >> available whilst its size is not physically allocated on the >> DataSystem side

Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Torrey McMahon
Ian Collins wrote: > > On Mon 08/12/08 08:14 , Torrey McMahon [EMAIL PROTECTED] sent: > >> I'm pretty sure I understand the importance of a snapshot API. (You take >> the snap, then you do the backup or whatever) My point is that, at >> least on my quick r

Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-07 Thread Torrey McMahon
I hope everyone enjoyed the discussion. I did.

Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Torrey McMahon
gration??? > > Talking about garbage! > z

Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-06 Thread Torrey McMahon
Richard Elling wrote: > Joseph Zhou wrote: > >> Yeah? >> http://www.adaptec.com/en-US/products/Controllers/Hardware/sas/value/SAS-31605/_details/Series3_FAQs.htm >> Snapshot is a big deal? >> >> > > Snapshot is a big deal, but you will find most "hardware" RAID > implementations > are s

Re: [zfs-discuss] Tuning ZFS for Sun Java Messaging Server

2008-10-24 Thread Torrey McMahon
Richard Elling wrote: > Adam N. Copeland wrote: > >> Thanks for the replies. >> >> It appears the problem is that we are I/O bound. We have our SAN guy >> looking into possibly moving us to faster spindles. In the meantime, I >> wanted to implement whatever was possible to give us breathing room

Re: [zfs-discuss] Tuning ZFS for Sun Java Messaging Server

2008-10-24 Thread Torrey McMahon
You may want to ask your SAN vendor if they have a setting you can make to no-op the cache flush. That way you don't have to worry about the flush behavior if you change/add different arrays. Adam N. Copeland wrote: > Thanks for the replies. > > It appears the problem is that we are I/O bound. W

Re: [zfs-discuss] X4540

2008-07-10 Thread Torrey McMahon
Richard Elling wrote: > Torrey McMahon wrote: >> Spencer Shepler wrote: >> >>> On Jul 10, 2008, at 7:05 AM, Ross wrote: >>> >>> >>>> Oh god, I hope not. A patent on fitting a card in a PCI-E slot, >>>> or using nvram

Re: [zfs-discuss] X4540

2008-07-10 Thread Torrey McMahon
Spencer Shepler wrote: > On Jul 10, 2008, at 7:05 AM, Ross wrote: > > >> Oh god, I hope not. A patent on fitting a card in a PCI-E slot, or >> using nvram with RAID (which raid controllers have been doing for >> years) would just be ridiculous. This is nothing more than cache, >> and eve

[zfs-discuss] ZFS Deferred Frees

2008-06-16 Thread Torrey McMahon
I'm doing some simple testing of ZFS block reuse and was wondering when deferred frees kick in. Is it on some sort of timer to ensure data consistency? Does an other routine call it? Would something as simple as sync(1M) get the free block list written out so future allocations could use the sp

Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-11 Thread Torrey McMahon
A Darren Dunham wrote: > On Tue, Jun 10, 2008 at 05:32:21PM -0400, Torrey McMahon wrote: > >> However, some apps will probably be very unhappy if i/o takes 60 seconds >> to complete. >> > > It's certainly not uncommon for that to occur in an NFS environm

Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-10 Thread Torrey McMahon
Richard Elling wrote: > Tobias Exner wrote: > >> Hi John, >> >> I've done some tests with a SUN X4500 with zfs and "MAID" using the >> powerd of Solaris 10 to power down the disks which weren't access for >> a configured time. It's working fine... >> >> The only thing I run into was the proble

Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-19 Thread Torrey McMahon
wnside is parity is being transmitted from the host to the disks > rather than living on the controller entirely. > > -Andy

Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-19 Thread Torrey McMahon
Bob Friesenhahn wrote: > On Mon, 19 May 2008, Kenny wrote: > > >> Bob M.- Thanks for the heads up on the 2 TB (1.998 TB) LUN limit. >> This has me a little concerned esp. since I have 1 TB drives being >> delivered! Also thanks for the scsi cache flushing heads up, yet >> another item to look up!

Re: [zfs-discuss] Backup-ing up ZFS configurations

2008-03-23 Thread Torrey McMahon
eric kustarz wrote: > > On Mar 21, 2008, at 2:03 PM, David W. Smith wrote: >> If you import a zpool you only get the history from that point forward I >> believe, so you might not have all the past history, such as how the >> pool was originally created. Having a way to dump the config for >> as e

Re: [zfs-discuss] Backup-ing up ZFS configurations

2008-03-21 Thread Torrey McMahon
Cyril Plisko wrote: > On Fri, Mar 21, 2008 at 6:53 PM, Torrey McMahon <[EMAIL PROTECTED]> wrote: > >> eric kustarz wrote: >> > So even with the above, if you add a vdev, slog, or l2arc later on, >> > that can be lost via the history being a ring buffer.

Re: [zfs-discuss] Backup-ing up ZFS configurations

2008-03-21 Thread Torrey McMahon
eric kustarz wrote: > So even with the above, if you add a vdev, slog, or l2arc later on, > that can be lost via the history being a ring buffer. There's a RFE > for essentially taking your current 'zpool status' output and > outputting a config (one that could be used to create a brand new

Re: [zfs-discuss] Round-robin NFS protocol with ZFS

2008-03-13 Thread Torrey McMahon
Tim wrote: > He wants to mount the ZFS filesystem (I'm assuming off of a backend > SAN storage array) to two heads, then round-robin NFS connections > between the heads to essentially *double* the throughput. pNFS is the droid you are looking for.

[zfs-discuss] SunMC module for ZFS

2008-02-15 Thread Torrey McMahon
Anyone have a pointer to a general ZFS health/monitoring module for SunMC? There isn't one baked into SunMC proper, which means I get to write one myself if someone hasn't already done it. Thanks.

Re: [zfs-discuss] Case #65841812

2008-02-02 Thread Torrey McMahon
I'm not an Oracle expert but I don't think Oracle checksumming can correct data. If you have ZFS checksums enabled, and you're mirroring in your zpools, then ZFS can self-correct as long as the checksum on the other half of the mirror is good. Mertol Ozyoney wrote: > Don't take my words as an expe
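A sketch of exercising that self-correction, assuming a mirrored pool named tank:

    zpool scrub tank       # verify every block; repair bad copies from the good mirror half
    zpool status -v tank   # the CKSUM column shows what was detected and repaired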

Re: [zfs-discuss] Hardware RAID vs. ZFS RAID

2008-01-31 Thread Torrey McMahon
Kyle McDonald wrote: > Vincent Fox wrote: > >> So the point is, a JBOD with a flash drive in one (or two to mirror the >> ZIL) of the slots would be a lot SIMPLER. >> >> We've all spent the last decade or two offloading functions into specialized >> hardware, that has turned into these massiv

Re: [zfs-discuss] ZFS under VMware

2008-01-30 Thread Torrey McMahon
Lewis Thompson wrote: > Hello, > > I'm planning to use VMware Server on Ubuntu to host multiple VMs, one > of which will be a Solaris instance for the purposes of ZFS > I would give the ZFS VM two physical disks for my zpool, e.g. /dev/sda > and /dev/sdb, in addition to the VMware virtual disk for

Re: [zfs-discuss] NFS performance on ZFS vs UFS

2008-01-25 Thread Torrey McMahon
Robert Milkowski wrote: > Hello Darren, > > DJM> BTW there isn't really any such thing as "disk corruption" there is > DJM> "data corruption" :-) > > Well, if you scratch it hard enough :) > http://www.philohome.com/hammerhead/broken-disk.jpg :-)

Re: [zfs-discuss] iscsi on zvol

2008-01-24 Thread Torrey McMahon
Jim Dunham wrote: > > This raises a key point that you should be aware of. ZFS does not > support shared access to the same ZFS filesystem. ...unless you put NFS or something on top of it. (I always forget that part myself.)
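In other words (placeholder dataset names, using the share properties of that era):

    zfs set shareiscsi=on tank/vol1   # zvol exported as an iSCSI target: one initiator at a time
    zfs set sharenfs=on tank/fs       # NFS on top is the safe multi-client route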

Re: [zfs-discuss] ZFS via Virtualized Solaris?

2008-01-07 Thread Torrey McMahon
Peter Schuller wrote: >>> From what I read, one of the main things about ZFS is "Don't trust the >>> underlying hardware". If this is the case, could I run Solaris under VirtualBox or under some other emulated environment and still get the benefits of ZFS such as end to end

Re: [zfs-discuss] Does Oracle support ZFS as a file system with Oracle RAC?

2007-12-23 Thread Torrey McMahon
Louwtjie Burger wrote: > On 12/19/07, David Magda <[EMAIL PROTECTED]> wrote: > >> On Dec 18, 2007, at 12:23, Mike Gerdts wrote: >> >> >>> 2) Database files - I'll lump redo logs, etc. in with this. In Oracle >>>RAC these must live on a shared-rw (e.g. clustered VxFS, NFS) file >>>s

Re: [zfs-discuss] [storage-discuss] SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush

2007-11-27 Thread Torrey McMahon
Nicolas Dorfsman wrote: > On Nov 27, 2007, at 16:17, Torrey McMahon wrote: > >> According to the array vendor the 99xx arrays no-op the cache flush >> command. No need to set the /etc/system flag. >> >> http://blogs.sun.com/torrey/entry/zfs_and_99xx_storage_a

Re: [zfs-discuss] SAN arrays with NVRAM cache : ZIL and zfs_nocacheflush

2007-11-27 Thread Torrey McMahon
Marion Hakanson wrote: > [EMAIL PROTECTED] said: > >> They clearly suggest to disable cache flush http://www.solarisinternals.com/ >> wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH . >> >> It seems to be the only serious article on the net about this subject. >> >> Could someone here state on this

Re: [zfs-discuss] [storage-discuss] zpool io to 6140 is really slow

2007-11-17 Thread Torrey McMahon
Have you tried disabling the zil cache flushing? http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes Asif Iqbal wrote: > (Including storage-discuss) > > I have 6 6140s with 96 disks. Out of which 64 of them are Seagate > ST337FC (300GB - 1 RPM FC-AL) > > I c
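The tunable that guide describes goes in /etc/system (reboot required, and only sane when every pool sits on arrays with nonvolatile cache):

    * Tell ZFS not to issue cache-flush requests (ZFS Evil Tuning Guide)
    set zfs:zfs_nocacheflush = 1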

Re: [zfs-discuss] Sun's storage product roadmap?

2007-10-18 Thread Torrey McMahon
The profit stuff has been NDA for a while but we started telling the street a while back and they seem to like the idea. :) Selim Daoud wrote: > wasn't that an NDA info?? > > s- > > On 10/18/07, Torrey McMahon <[EMAIL PROTECTED]> wrote: > >> MC wrote:

Re: [zfs-discuss] Sun's storage product roadmap?

2007-10-18 Thread Torrey McMahon
MC wrote: > Sun's storage strategy: > > 1) Finish Indiana and distro constructor > 2) (ship stuff using ZFS-Indiana) > 3) Success 4) Profit :)

Re: [zfs-discuss] ZFS & array NVRAM cache?

2007-09-26 Thread Torrey McMahon
Albert Chin wrote: > On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote: > >> I don't understand. How do you >> >> "setup one LUN that has all of the NVRAM on the array dedicated to it" >> >> I'm pretty familiar with 3510 and 3310. Forgive me for being a bit >> thick here, but can you

Re: [zfs-discuss] The ZFS-Man.

2007-09-21 Thread Torrey McMahon
Jonathan Edwards wrote: > On Sep 21, 2007, at 14:57, eric kustarz wrote: > > >>> Hi. >>> >>> I gave a talk about ZFS during EuroBSDCon 2007, and because it won the >>> best talk award and some find it funny, here it is: >>> >>> http://youtube.com/watch?v=o3TGM0T1CvE >>> >>> a bit b

Re: [zfs-discuss] ZFS Solaris 10 Update 4 Patches

2007-09-20 Thread Torrey McMahon
Did you upgrade your pools? "zpool upgrade -a" John-Paul Drawneek wrote: > err, I installed the patch and am still on zfs 3? > > solaris 10 u3 with kernel patch 120011-14
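For reference, the check-then-upgrade sequence (note that upgraded pools can no longer be imported by older releases):

    zpool upgrade      # report the on-disk version of every pool
    zpool upgrade -a   # bring all pools up to the running software's version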

Re: [zfs-discuss] Mirrored zpool across network

2007-08-19 Thread Torrey McMahon
Mark wrote: > Hi All, > > I'm just wondering (I figure you can do this but don't know what hardware and > stuff I would need) if I can set up a mirror of a raidz zpool across a > network. > > Basically, the setup is a large volume of Hi-Def video is being streamed from > a camera, onto an editing

[zfs-discuss] Snapshots and worm devices

2007-08-14 Thread Torrey McMahon
Has anyone thought about using snapshots and WORM devices. In theory, you'd have to keep the WORM drive out of the pool, or as a special device, and it would have to be a full snapshot even though we really don't have those. Any plans in this area? I could take a snapshot, clone it, then copy i
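A sketch of that snapshot-clone-copy workflow, with placeholder names:

    zfs snapshot tank/data@worm1
    zfs clone tank/data@worm1 tank/worm1
    # ...then copy tank/worm1's contents onto the WORM device with tar, cpio, etc.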

Re: [zfs-discuss] how to remove sun volume mgr configuration?

2007-07-16 Thread Torrey McMahon
James C. McPherson wrote: > > > The T3B with fw v3.x (I think) and the T4 (aka 6020 tray) allow > more than two volumes, but you're still quite restricted in what > you can do with them. > You are limited to two raid groups with slices on top of those raid groups presented as LUNs. I'd just st

Re: [zfs-discuss] how to remove sun volume mgr configuration?

2007-07-16 Thread Torrey McMahon
Bill Sommerfeld wrote: > On Mon, 2007-07-16 at 18:19 -0700, Russ Petruzzelli wrote: > >> Or am I just getting myself into shark infested waters? >> > > configurations that might be interesting to play with: > (emphasis here on "play"...) > > 1) use the T3's management CLI to reconfigure th

Re: [zfs-discuss] ZFS and powerpath

2007-07-16 Thread Torrey McMahon
Darren Dunham wrote: >>> If it helps at all. We're having a similar problem. Any LUN's >>> configured with their default owner to be SP B, don't get along with >>> ZFS. We're running on a T2000, With Emulex cards and the ssd driver. >>> MPXIO seems to work well for most cases, but the SAN g

Re: [zfs-discuss] ZFS and powerpath

2007-07-16 Thread Torrey McMahon
Carisdad wrote: > Peter Tribble wrote: > >> # powermt display dev=all >> Pseudo name=emcpower0a >> CLARiiON ID=APM00043600837 [] >> Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46] >> state=alive; policy=CLAROpt; priority=0; queued-IOs=0 >> Owner: default=SP B, current=SP B >>

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Torrey McMahon
[EMAIL PROTECTED] wrote: > > > > [EMAIL PROTECTED] wrote on 07/13/2007 02:21:52 PM: > > >> Peter Tribble wrote: >> >> >>> I've not got that far. During an import, ZFS just pokes around - there >>> doesn't seem to be an explicit way to tell it which particular devices >>> or SAN paths to use

Re: [zfs-discuss] ZFS and powerpath

2007-07-13 Thread Torrey McMahon
Peter Tribble wrote: > On 7/13/07, Alderman, Sean <[EMAIL PROTECTED]> wrote: > >> I wonder what kind of card Peter's using and if there is a potential >> linkage there. We've got the Sun branded Emulux cards in our sparcs. I >> also wonder if Peter were able to allocate an additional LUN to hi

Re: [zfs-discuss] Plans for swapping to part of a pool

2007-07-12 Thread Torrey McMahon
I really don't want to bring this up but ... Why do we still tell people to use swap volumes? Would we have the same sort of issue with the dump device so we need to fix it anyway? ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.ope
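For context, the swap-on-zvol setup in question looks like this (pool name and size are placeholders):

    zfs create -V 4g rpool/swap
    swap -a /dev/zvol/dsk/rpool/swap
    swap -l    # verify the new swap device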

Re: [zfs-discuss] pool analysis

2007-07-11 Thread Torrey McMahon
David Dyer-Bennet wrote: > Kent Watsen wrote: > >>> #define MTTR_HOURS_NO_SPARE 16 >>> >>> I think this is optimistic :-) >>> >>> >> Not really for me as the array is in my basement - so I assume that I'll >> swap in a drive when I get home from work ;) >> >> > Yes, it's in

Re: [zfs-discuss] How to take advantage of PSARC 2007/171: ZFS Separate Intent Log

2007-07-08 Thread Torrey McMahon
Bryan Cantrill wrote: > On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote: > >> PSARC 2007/171 will be available in b68. Any documentation anywhere on >> how to take advantage of it? >> >> Some of the Sun storage arrays contain NVRAM. It would be really nice >> if the array NVRAM would
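The b68 syntax is simple; a sketch assuming a spare NVRAM-backed LUN c3t0d0:

    zpool add tank log c3t0d0                            # add a dedicated intent-log device
    zpool create tank mirror c1t0d0 c1t1d0 log c3t0d0    # or specify it at creation time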

Re: [zfs-discuss] Re: ZFS - SAN and Raid

2007-06-24 Thread Torrey McMahon
Gary Mills wrote: On Wed, Jun 20, 2007 at 12:23:18PM -0400, Torrey McMahon wrote: James C. McPherson wrote: Roshan Perera wrote: But Roshan, if your pool is not replicated from ZFS' point of view, then all the multipathing and raid controller backup in the world will not m

Re: [zfs-discuss] Re: ZFS - SAN and Raid

2007-06-24 Thread Torrey McMahon
Victor Engle wrote: On 6/20/07, Torrey McMahon <[EMAIL PROTECTED]> wrote: Also, how does replication at the ZFS level use more storage - I'm assuming raw block - then at the array level? ___ Just to add to the previous comments. In the

Re: [zfs-discuss] zfs space efficiency

2007-06-24 Thread Torrey McMahon
The interesting collision is going to be file system level encryption vs. de-duplication as the former makes the latter pretty difficult. dave johnson wrote: How other storage systems do it is by calculating a hash value for said file (or block), storing that value in a db, then checking every

Re: [zfs-discuss] Re: ZFS - SAN and Raid

2007-06-20 Thread Torrey McMahon
James C. McPherson wrote: Roshan Perera wrote: But Roshan, if your pool is not replicated from ZFS' point of view, then all the multipathing and raid controller backup in the world will not make a difference. James, I Agree from ZFS point of view. However, from the EMC or the customer point

Re: [zfs-discuss] zfs and EMC

2007-06-15 Thread Torrey McMahon
This sounds familiar... like something about the powerpath device not responding to the SCSI inquiry strings. Are you using the same version of powerpath on both systems? Same type of array on both? Dominik Saar wrote: Hi there, have a strange behavior if I create a zfs pool at an EMC Powe

Re: [zfs-discuss] IRC: thought: irc.freenode.net #zfs for platform-agnostic or multi-platform discussion

2007-06-08 Thread Torrey McMahon
Graham Perrin wrote: We have … and the other channels listed at … AND growing discussion of ZFS in Mac-, FUSE- and Linux-oriented channels BUT unless I'm missing something, no IRC channel for ZFS. Please: * which IRC channel will be

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-25 Thread Torrey McMahon
Toby Thain wrote: On 25-May-07, at 1:22 AM, Torrey McMahon wrote: Toby Thain wrote: On 22-May-07, at 11:01 AM, Louwtjie Burger wrote: On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote: What if your HW-RAID-controller dies? in say 2 years or more.. What will read your disk

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Torrey McMahon
ming - It's end to end... Nathan. Torrey McMahon wrote: Toby Thain wrote: On 22-May-07, at 11:01 AM, Louwtjie Burger wrote: On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote: What if your HW-RAID-controller dies? in say 2 years or more.. What will read your disks as a configure

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-24 Thread Torrey McMahon
Toby Thain wrote: On 22-May-07, at 11:01 AM, Louwtjie Burger wrote: On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote: What if your HW-RAID-controller dies? in say 2 years or more.. What will read your disks as a configured RAID? Do you know how to (re)configure the controller or restore

Re: [zfs-discuss] No zfs_nocacheflush in Solaris 10?

2007-05-24 Thread Torrey McMahon
Albert Chin wrote: On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote: I'm getting really poor write performance with ZFS on a RAID5 volume (5 disks) from a storagetek 6140 array. I've searched the web and these forums and it seems that this zfs_nocacheflush option is the solution,

Re: [zfs-discuss] Re: ZFS - Use h/w raid or not? Thoughts. Considerations.

2007-05-21 Thread Torrey McMahon
Phillip Fiedler wrote: Thanks for the input. So, I'm trying to meld the two replies and come up with a direction for my case and maybe a "rule of thumb" that I can use in the future (i.e., near future until new features come out in zfs) when I have external storage arrays that have built in R

Re: [zfs-discuss] Re: Re: Lots of overhead with ZFS - what am I doing wrong?

2007-05-19 Thread Torrey McMahon
Jonathan Edwards wrote: On May 15, 2007, at 13:13, Jürgen Keil wrote: Would you mind also doing: ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1 to see the raw performance of underlying hardware. This dd command is reading from the block device, which might cache data and proba

Re: [zfs-discuss] Re: AVS replication vs ZFS send recieve for odd sized volume pairs

2007-05-19 Thread Torrey McMahon
John-Paul Drawneek wrote: Yes, I am also interested in this. We can't afford two super-fast setups so we are looking at having a huge pile of SATA to act as a real-time backup for all our streams. So what can AVS do, and what are its limitations? Would just using zfs send and receive do, or does AVS

[zfs-discuss] Re: Making 'zfs destroy' safer

2007-05-19 Thread Torrey McMahon
ZFS can make for complicated environments. Your dog wants management tools. :) Seriously - We're adding all of these options to ZFS. Where are the tools that let someone make an informed decision concerning what their actions are going to do to the system? Where is the option that lets someon

Re: [zfs-discuss] Re: ZFS Support for remote mirroring

2007-05-09 Thread Torrey McMahon
Anantha N. Srirama wrote: For whatever reason EMC notes (on PowerLink) suggest that ZFS is not supported on their arrays. If one is going to use a ZFS filesystem on top of a EMC array be warned about support issues. They should have fixed that in their matrices. It should say something like,

Re: [zfs-discuss] ZFS Support for remote mirroring

2007-05-07 Thread Torrey McMahon
Matthew Ahrens wrote: Aaron Newcomb wrote: Does ZFS support any type of remote mirroring? It seems at present my only two options to achieve this would be Sun Cluster or Availability Suite. I thought that this functionality was in the works, but I haven't heard anything lately. You could put s

Re: [zfs-discuss] Re: ZFS Support for remote mirroring

2007-05-02 Thread Torrey McMahon
Aaron Newcomb wrote: Terry, Yes. AVS is pretty expensive. If ZFS did this out of the box it would be a huge differentiator. I know ZFS does snapshots today, but if we could extend this functionality to work across distance then we would have something that could compete with expensive solutio

Re: [zfs-discuss] ZFS Support for remote mirroring

2007-05-02 Thread Torrey McMahon
Aaron Newcomb wrote: Does ZFS support any type of remote mirroring? It seems at present my only two options to achieve this would be Sun Cluster or Availability Suite. I thought that this functionality was in the works, but I haven't heard anything lately. AVS is working today. (See Jim Du

Re: [zfs-discuss] ZFS Boot: Dividing up the name space

2007-05-01 Thread Torrey McMahon
Mike Dotson wrote: On Sat, 2007-04-28 at 17:48 +0100, Peter Tribble wrote: On 4/26/07, Lori Alt <[EMAIL PROTECTED]> wrote: Peter Tribble wrote: Why do administrators do 'df' commands? It's to find out how much space is used or available in a single file system

Re: [zfs-discuss] zfs boot image conversion kit is posted

2007-05-01 Thread Torrey McMahon
Brian Hechinger wrote: On Fri, Apr 27, 2007 at 02:44:02PM -0700, Malachi de Ælfweald wrote: 2. ZFS mirroring can work without the metadb, but if you want the dump mirrored too, you need the metadb (I don't know if it needs to be mirrored, but I wanted both disks to be identical in case one

Re: [zfs-discuss] slow sync on zfs

2007-04-23 Thread Torrey McMahon
Dickon Hood wrote: [snip] I'm currently playing with ZFS on a T2000 with 24x500GB SATA discs in an external array that presents as SCSI. After having much 'fun' with the Solaris SCSI driver not handling LUNs >2TB That should work if you have the latest KJP and friends. (Actually, it should h

Re: [zfs-discuss] ZFS+NFS on storedge 6120 (sun t4)

2007-04-21 Thread Torrey McMahon
to zvol iscsi targets. thanks anyways.. back to the drawing board on how to resolve this! -Andy

Re: [zfs-discuss] Bandwidth requirements (was Re: Preferred backup mechanism for ZFS?)

2007-04-21 Thread Torrey McMahon
Lyndon Nerenberg wrote: But a tape in a van is a very high bandwidth connection :) Australia used to get its usenet feed on FedExed 9-tracks. But you had to put them in the reader upside down and read them back to front.

Re: [zfs-discuss] ZFS+NFS on storedge 6120 (sun t4)

2007-04-20 Thread Torrey McMahon
Marion Hakanson wrote: [EMAIL PROTECTED] said: We have been combing the message boards and it looks like there was a lot of talk about this interaction of zfs+nfs back in november and before but since i have not seen much. It seems the only fix up to that date was to disable zil, is that sti
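The workaround referenced there was the old /etc/system tunable below; it sacrifices synchronous semantics, so a server crash can silently lose data NFS clients believe is committed:

    * Historic ZIL-disable workaround; unsafe for NFS service
    set zfs:zil_disable = 1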

Re: [zfs-discuss] Re: Testing of UFS, VxFS and ZFS

2007-04-17 Thread Torrey McMahon
Anton B. Rang wrote: Second, VDBench is great for testing raw block i/o devices. I think a tool that does file system testing will get you better data. OTOH, shouldn't a tool that measures raw device performance be reasonable to reflect Oracle performance when configured for raw devices?

Re: [zfs-discuss] Testing of UFS, VxFS and ZFS

2007-04-16 Thread Torrey McMahon
Tony Galway wrote: I had previously undertaken a benchmark that pits “out of box” performance of UFS via SVM, VxFS and ZFS but was waylaid due to some outstanding availability issues in ZFS. These have been taken care of, and I am once again undertaking this challenge on behalf of my custome

Re: [zfs-discuss] snapshot features

2007-04-16 Thread Torrey McMahon
Frank Cusack wrote: On April 16, 2007 10:24:04 AM +0200 Selim Daoud <[EMAIL PROTECTED]> wrote: hi all, when doing several zfs snapshots of a given fs, there are dependencies between snapshots that complexify the management of snapshots. Is there a plan to ease these dependencies, so we can reach

Re: [zfs-discuss] Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?

2007-04-11 Thread Torrey McMahon
Frank Cusack wrote: On April 11, 2007 11:54:38 AM +0200 Constantin Gonzalez Schmitz <[EMAIL PROTECTED]> wrote: Hi Mark, Mark J Musante wrote: On Tue, 10 Apr 2007, Constantin Gonzalez wrote: Has anybody tried it yet with a striped mirror? What if the pool is composed out of two mirrors? Can I

[zfs-discuss] Size taken by a zfs symlink

2007-04-02 Thread Torrey McMahon
If I create a symlink inside a zfs file system and point the link to a file on a ufs file system on the same node how much space should I expect to see taken in the pool as used? Has this changed in the last few months? I know work is being done under 6516171 to make symlinks "dittoable" but I don
