VERITAS Storage Foundation has had the multi-volume VxFS (MVFS) component
since the SF 5.0 product was introduced in 2006. Send file data into one
volume, metadata into another, file system storage checkpoints into a third,
and so on. The volumes can be "thin provisioned" to automatically request
growth up to their pre-configured sizes.
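
A minimal sketch of setting one up (the disk group, volume, and mount point
names here are made up):

    # group two VxVM volumes into a volume set, then lay one VxFS over it
    vxvset -g appdg make appvset datavol
    vxvset -g appdg addvol appvset metavol
    mkfs -F vxfs /dev/vx/rdsk/appdg/appvset
    mount -F vxfs /dev/vx/dsk/appdg/appvset /app

Allocation policies (fsapadm) can then direct file data, metadata, and
checkpoints to specific volumes within the set.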


You have to give a lot of credit to the hardware engineers for developing
drives of truly astonishing capacity (I've recently ordered my first laptop
with solid-state drives, another remarkable achievement). But size matters
only slightly for performance: what is being offered is more storage, not
faster storage (except in the vacuum of advertising).

On Thu, Jan 7, 2010 at 1:46 PM, Hudes, Dana <hud...@hra.nyc.gov> wrote:

>  Interesting point about striping and use of the entire disk. So if my LUNs
> are pieces of disks rather than entire disks, do I lose the benefit of
> striping?
> This is more and more common as physical drives, even 15K RPM Fiber Channel
> ones, increase in size. I think the 9990V is using 320GB drives where the
> 9980 had 73GB drives.
>
> This brings us to a benefit of the 'many filesystems - one volume' approach
> of ZFS when combined with virtual hosts (containers/non-global-zones).
> Because I can get/justify a larger pool of storage for the entire platform
> rather than piecemealing LUNs on a per-vhost basis, I can get the entire
> physical device for each column in the RAID group.  The trouble is that with
> ZFS I can't peer inside the LUNs the way you describe with VxVM. I suppose I
> could use VxVM + ISP and then use the resulting volumes as my devices for
> ZFS. Once I have a zpool I can parcel out logical storage to each non-global
> zone.
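>
> As a rough sketch of that last step (the pool, volume, and zone names are
> made up, and using a VxVM volume as the pool device is just one option):
>
>    # build the pool on a host-managed volume, then delegate a dataset
>    # with a quota to each non-global zone
>    zpool create apppool /dev/vx/dsk/appdg/zvol01
>    zfs create apppool/zone1
>    zfs set quota=50G apppool/zone1
>    zonecfg -z zone1 'add dataset; set name=apppool/zone1; end'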
>
> The one-filesystem-per-volume VxFS+VxVM approach results in lots of wasted
> storage in our shop, as admins have to leave room for growth in each
> filesystem instead of letting them all pull from one pool -- but we have to
> have separate filesystems because applications demand different mount
> points.
>
>
>  ------------------------------
>  *From:* William Havey [mailto:bbha...@gmail.com]
> *Sent:* Thursday, January 07, 2010 12:22 PM
>
> *To:* Hudes, Dana
> *Cc:* veritas-vx@mailman.eng.auburn.edu
> *Subject:* Re: [Veritas-vx] Relayout volume from 2 to 4 columns
>
>  The array stays a high-performance machine, and few, if any, features are
> sacrificed. The software expense is reduced tremendously: the functionality
> is provided on the host by software that is needed anyway to link the OS to
> an array, i.e., a volume management product. The hardware vendor's software
> functionality becomes the redundant, extra-cost feature. Total cost of
> ownership (cost per gigabyte) is reduced by running software capable of
> doing I/O to any device, not just one specific device.
>
> On the technical side, as przemol indicates in his latest post, cache is
> not infinite; it will become saturated, de-staging kicks in, and performance
> analysis and improvement once again comes down to how to make the disks work
> the fastest. To the items already cited in the discussion I would add the
> addressing of data blocks to specific storage locations. I think this is
> what striping does best. Striping can be pushed through the LUN object down
> to the disk object. Successive I/Os can be sent either to the same disk (if
> within an address range) or to the next disk in the raid group when the
> address is outside the range. Striping across LUNs is beneficial when each
> device in the raid group comprising the LUN spans the entire physical
> device. The address of an I/O then includes both a LUN and a specific disk
> within the raid group, i.e., striping over stripes.
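>
> To make that mapping concrete, here is a back-of-the-envelope sketch (the
> numbers are assumptions: a 2-column host stripe with a 128-sector stripe
> unit over LUNs that are themselves striped across a 6-disk raid group with
> a hypothetical 256-sector segment size; real arrays vary):
>
>    off=3000                                 # volume offset in sectors
>    host_su=128; host_ncol=2                 # host-side stripe geometry
>    arr_seg=256; arr_ndisk=6                 # assumed array-side geometry
>    su_idx=$(( off / host_su ))
>    col=$(( su_idx % host_ncol ))            # which LUN (VxVM column)
>    lun_off=$(( (su_idx / host_ncol) * host_su + off % host_su ))
>    disk=$(( (lun_off / arr_seg) % arr_ndisk ))   # which spindle in the RG
>    echo "volume offset $off -> LUN $col, LUN offset $lun_off, disk $disk"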
>
> On Wed, Jan 6, 2010 at 1:09 PM, Hudes, Dana <hud...@hra.nyc.gov> wrote:
>
>>  So indeed this feature turns your $$$$$$$ Hitachi 9990V into a JBOD.
>> Wow. I guess there are products that need this sort of thing.
>> And enterprises where the people running the SAN array can't manage it
>> properly, so the server administrators need to bypass them.
>>
>> The other question, one SCSI queue per column as a benefit of striping,
>> is interesting.
>> Doesn't this just generate extra disk I/O? The array is already doing RAID
>> with 2 parity stripes. Now what?
>> Yet this is effectively what ZFS does, so there must be a performance
>> gain. Hmm. Multiple SCSI queues might actually make sense if you have a
>> large number of CPUs (like the Sun 4v architecture, esp. the 5240 with 128
>> hardware threads or the 5440 with 256, or a 4+ board domain on the SunFire
>> 25K, which gives you 32 cores), all of which are running threads that do
>> disk I/O.
>> This benefit seems more practical in the ZFS approach, where you have one
>> volume-equivalent (the zpool is both disk group and VM volume in that it
>> holds the storage layout) and many filesystems, so you would likely have
>> multiple processes doing independent disk I/O. In the VxVM
>> one-volume-one-filesystem model your Oracle table space, for example, is in
>> one filesystem as one huge file (possibly other databases are files in
>> other filesystems). Even if you have multiple listeners doing their thing,
>> ultimately there's one file they're working on... of course Oracle has row
>> locking and other parallelization... hmm.
>>
>>
>>
>>
>>  ------------------------------
>> *From:* William Havey [mailto:bbha...@gmail.com]
>> *Sent:* Wednesday, January 06, 2010 12:30 PM
>> *To:* Hudes, Dana
>>
>> *Cc:* veritas-vx@mailman.eng.auburn.edu
>> *Subject:* Re: [Veritas-vx] Relayout volume from 2 to 4 columns
>>
>>   Yes, it certainly does. And that is why Symantec put the feature in the
>> VM product: to use host-based software to construct and control storage
>> from host to physical disk. This would help eliminate multi-vendor chaos in
>> the storage aspects of the data center.
>>
>> On Wed, Jan 6, 2010 at 12:19 PM, Hudes, Dana <hud...@hra.nyc.gov> wrote:
>>
>>>  >the ISP feature of VM would allow you to drill down to individual
>>> spindles and place subdisks on each spindle.
>>>
>>> Individual spindles of the RAID group? Doesn't that defeat the purpose of
>>> the RAID group?
>>> Striping across LUNs gets... interesting; we usually just use them
>>> concatenated. Of course that's with a real SAN array such as a Hitachi
>>> 99x0 or Sun 61x0.
>>> I'm not sure I see the point of striping LUNs. If you are having
>>> performance problems from the array, fix the layout of the RAID group on
>>> the array: that's why you pay the big bucks to Hitachi for their hardware.
>>> I'm not sure I want to know about the load that could flatline a RAID-6
>>> array of 6 15K RPM Fiber Channel disks backed by a multigigabyte RAM
>>> cache.
>>>
>>> I have certainly seen bad storage layout on the host cause hot spots.
>>> That's when people make ridiculous numbers of small (gigabyte or so)
>>> volumes scattered all over the place -- another argument against the old
>>> way of doing things with databases and raw volumes (if you're going to use
>>> raw volumes, at least use decent-sized ones, not 2GB each). While old
>>> (pre-Solaris 10) AIO did indeed suck dead bunnies through a straw for
>>> performance, that's no longer a problem with Solaris 10 ZFS if you use it
>>> natively (using "branded" zones to run Solaris 8 and 9 puts the old AIO
>>> interface in front), nor would I expect it to be a problem with VxFS.
>>>
>>>
>>>  ------------------------------
>>> *From:* veritas-vx-boun...@mailman.eng.auburn.edu [mailto:
>>> veritas-vx-boun...@mailman.eng.auburn.edu] *On Behalf Of *William Havey
>>> *Sent:* Wednesday, January 06, 2010 12:00 PM
>>> *To:* przemol...@poczta.fm
>>> *Cc:* veritas-vx@mailman.eng.auburn.edu
>>> *Subject:* Re: [Veritas-vx] Relayout volume from 2 to 4 columns
>>>
>>>   VM simply sees the LUNs presented by the two raid groups; it needn't be
>>> concerned with the layout of each raid group. To change from 2 columns to
>>> 4 columns, use the relayout option of vxassist and also specify the two
>>> new LUNs on which to place the two new columns.
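>>>
>>> A minimal sketch (the disk group, volume, and disk media names below are
>>> made up; substitute the names VM actually assigned to the two new LUNs):
>>>
>>>    vxassist -g appdg relayout appvol ncol=4 appdg13 appdg19
>>>    vxrelayout -g appdg status appvol     # check on the conversion
>>>    vxtask list                           # or watch the running task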
>>>
>>> That being said, the ISP feature of VM would allow you to drill down to
>>> individual spindles and place subdisks on each spindle.
>>>
>>> Bill
>>>
>>> On Wed, Jan 6, 2010 at 6:36 AM, <przemol...@poczta.fm> wrote:
>>>
>>>> Hello,
>>>>
>>>> we are using SF 5.0 MP3 on Solaris 10 attached to a SAN-based hardware
>>>> array.
>>>> On this array we have created 2 raid groups and on each RG we have
>>>> created
>>>> a few LUNs:
>>>>
>>>> raid group:   RG1    RG2
>>>>              LUN1   LUN7
>>>>              LUN2   LUN8
>>>>              LUN3   LUN9
>>>>              LUN4   LUN10
>>>>              LUN5   LUN11
>>>>              LUN6   LUN12
>>>>
>>>> For performance reasons some of our volumes are striped across the two
>>>> raid groups (using two columns, ncol=2), e.g.:
>>>>
>>>> pl <name> <vol> ENABLED ACTIVE 419256320 STRIPE 2/128 RW
>>>>
>>>> In this configuration I/Os involve two raid groups.
>>>>
>>>> It seems that in the future, in certain cases, performance might not be
>>>> as expected, so we would like to add two additional LUNs (taken from two
>>>> additional raid groups) and relayout the whole volume from 2 columns to
>>>> 4 columns, e.g.:
>>>>
>>>> raid group:   RG1    RG2    RG3    RG4
>>>>              LUN1   LUN7   LUN13  LUN19
>>>>              LUN2   LUN8   LUN14  LUN20
>>>>              LUN3   LUN9   LUN15  LUN21
>>>>              LUN4   LUN10  LUN16  LUN22
>>>>              LUN5   LUN11  LUN17  LUN23
>>>>              LUN6   LUN12  LUN18  LUN24
>>>>
>>>> Is it possible to relayout existing volumes to spread them over all four
>>>> RGs? Can I somehow specify that the relayout should use these particular
>>>> LUNs?
>>>>
>>>>
>>>> Regards
>>>> Przemyslaw Bak (przemol)
>>>> --
>>>> http://przemol.blogspot.com/
>>>
>>
>
_______________________________________________
Veritas-vx maillist  -  Veritas-vx@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-vx
