Re: Fw: Linux CDL packs on RVA

2003-11-25 Thread Don Mulvey
>I have forwarded this discussion to the developers and
>they do know about the problem. In SP3, they have set
>the DBL1STAR field of the fmt1 DSCB to zero to show
>100% full in SP3.

The format 1 dscb has a DS1LSTAR field ... last used track&block.  Is that
what you meant?  I would not have thought that setting it to 0 would
indicate that the volume was full.   Anyway ... I looked through the code and it
gets initialized to zero but doesn't seem to ever get changed or used by
any of the tools or dasd driver code.

>However, the bottom line is the same - beware of IXFP
>DDSR from MVS against a Linux CDL volume, especially
>prior to SP3 and other distributions, depending on
>their level.

Gotcha.

-Don


Re: Linux CDL pack and RVA free space collection.

2003-11-24 Thread Don Mulvey
Hi Jim,

>It uses a Log Structured Array to map virtual tracks
>to the back end raid array hardware. This is analogous
>to paging in MVS. As in MVS, a track is not
>allocated on the hardware until it is written to.

This is kinda how evms builds a sparse LV.  The logical volume is divided
up into chunks (like a snapshot).  If a write is made to a chunk that has
no backing storage an exception causes the chunk to be mapped before the
i/o continues.  This lets you create volumes much larger than the actual
physical storage.  The maps are saved for persistence.  Originally
conceived of as a means of testing very large volumes on very modest disks.
On-demand storage  :)

>The technology is based on several assumptions true
>for an MVS environment: 1) The VTOC extent maps may say
>the data set has been allocated with so much space,
>but in reality, only a small part of it is used, and
>2) free space really takes up space on a volume, and
>3) there are a lot of repeated characters (mostly
>blanks) on MVS volumes.

>Typically, MVS volumes may be run at 50% full because
>of the need to expand without abending a process.

Don't know anything about MVS.  But a format_1 dscb has room for describing
multiple extents.  Can't MVS automatically grow the dataset until these
extent descriptors are used up before abending the process?


>In addition, they added space compression, to get the
>repeated characters compressed out.

>The net effect is that the REAL hardware space
>occupied for a volume can range from zero (0) to a real full
>volume. In MVS, the volumes get up to 66% compression
>of the data and the packs are only 50% full.

Neat.

>Storage Tech took this one step further. Since they
>do not have to have all the space on a volume
>reserved on the hardware, they only back the virtual
>space with a fraction of the real hardware space. For
>example, on one of my RVA's I have 512 3390-3 volumes
>defined for a virtual capacity of 1.4 Terabytes.
>However, the real dasd on the RVA is only 219 GB! So
>the assumption is that there will be a 6.3x
>"overallocation" of the pack due to compressible data
>and unused free space.

Gotcha.

>Actually, quite brilliant when you think about it.
>Remember, this is late 1980's technology when disk
>drives were still expensive and RAID technology was
>new. I don't think anyone uses this approach now because the
>disk drives are so cheap. Vendors are using RAID10 now
>- mirrored raid5, so there is actually twice as much
>disk hardware as is needed for recovery purposes.

Yep ... and linear concatenations of raid10 or feeding raid10 devices into
an lvm volume ... to help when it comes time to resize the volume.

>If you look at an MVS VTOC, the extent map may show
>that the pack is full and for conventional dasd and
>most vendors implementations, the space is reserved,
>though it may not be actually used.

Sure ... that is exactly how I viewed a cdl partition ... 100%
allocated/reserved ... but I think the amount actually used would have to
be determined by asking the file system.

>This then brings up question of garbage collection.
>There is an interface between MVS allocation and the
>RVA called IXFP in the IBM RVA that communicates
>allocations and free up of space between MVS and the
>RVA. There is also dynamic dasd space reclamation
>(DDSR) that runs periodically, interrogates the VTOC,
>and frees any free tracks based on the VTOC. This is
>the exposure that we are talking about. Depending upon
>what IXFP interrogates in the VTOC, there is a data
>loss exposure. I have seen no reports that anyone has
>tested this, just recommendations that you keep your
>Linux DASD separate from your other OS.

Sounds like DDSR finds dataset extents and then determines which tracks in
the extent are used or not.  In the evms case, the sparse volume knows
which chunks have had write operations and it knows for certain which
chunks are free or not.  I guess IXFP must be doing something similar by
talking to mvs allocation/free routines.

-Don


Re: Linux CDL pack and RVA free space collection.

2003-11-21 Thread Don Mulvey
>Well, I'm not sure if I'm coming or going on this!

You and me both :)

>I just formatted a volume CDL on another RVA with
>SLES8 SP2 and it shows 0% USED and 50,083 tracks FREE!

First off ... could somebody tell me what an RVA is?

If you just formatted a cdl disk then it should only have 2 dscbs in the
vtoc; a type 4 vtoc descriptor and a type 5 freespace descriptor.  So the
info seems fine.

>The other volume I formatted was with SLES8 SP3 and it
>showed 100% USED and 0 tracks FREE!

Somebody has to ask ... are you sure you typed in cdl rather than ldl?  Can
you confirm that it is indeed cdl and there isn't a bug that produced an
ldl volume?

>So the MVS guy wasn't wrong after all! He was looking
>at a volume I had formatted CDL with SLES8 SP2.

>So this seems to be release dependent and there IS AN
>EXPOSURE if you have a lower release than SLES8 SP3. I
>don't know what would happen with RedHat. I'll try
>that after I get an RHEL3 system running.

Doesn't make any sense.  If you reformat a volume you're going to reblock and
refresh: ipl records, vol label record and the vtoc itself.  A format
doesn't create any type 1 dscbs.  If there is a bug that resulted in an ldl
volume ... even if you specified cdl ... then this would make sense.  I am
assuming that RVA (whatever it is) looks for any/all type 5 dscbs if the
vtoc track is available.  If it thinks the vtoc track is missing (ldl
volume) then my guess is that it would declare no datasets and 100% free.

>Bottom line still seems to be - don't run DDSR on your
>CDL formatted volumes!


-Don


Re: Filesystems and growing them online

2003-11-18 Thread Don Mulvey
>A while back I asked about the adding of dasd without a reboot.  First,
>thank you to all who answered; I have proven the steps and everything
>works.  Now I have a question about the growing of filesystems without
>unmounting them.  It seems that most of the documentation I have found
>discusses ext3 and / or ext2, not much on reiserfs, so my questions
>include the following:

>1. Is there an issue with reiserfs on the Z platform installs of Linux?

I don't know of any.

>2. Is there a way of growing a filesystem without unmounting it?  I
>thought you could do this with Reiserfs but I have found nothing that is
>related to Reiserfs on the Z Platform.

You can expand a reiser fs while it is mounted but you can only shrink it
unmounted.
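
For example, if the fs sits on top of an lvm volume, growing it online boils
down to something like this (device names and sizes are made up):

  lvextend -L +1G /dev/vg01/lvol1     # grow the logical volume first
  resize_reiserfs /dev/vg01/lvol1     # no size given ... grows the fs to fill the device, works mounted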

-Don


Re: How to configure EMC SAN

2003-09-17 Thread Don Mulvey
>We are attempting to have LINUX/390 use an EMC SAN.
>scsiload puts out a message about an 'unknown partition table'
>Are other folks using EMC SYMMETRIC successfully? Is there doc anywhere
>on
>how to configure the EMC SAN for LINUX/390?

Unknown partition table simply means that your kernel was unable to
recognize any partitions on the disk.   Usually means that you did not
build the kernel with the partition scheme found on the disk.   For
example, the disk may have been formerly used on a Sun or IA64 system and
could contain a Solaris or GPT partition scheme.   Happens when you get
drives that have not been wiped clean.   Fdisk only understands a few
partition schemes.   I can probably tell you what is on the disk if you
send me some of the first tracks of the disk  ...

Let's see ... fdisk said ...

>Disk /dev/sda: 1 heads, 15 sectors, 1024 cylinders
>Units = cylinders of 15 * 512 bytes


Weird ... 1 head ... must be the EMC controller producing a contrived track
geometry.  You could do ... dd if=/dev/sda of=sda.tracks bs=512
count=60 ... this will give you the first 4 tracks, then send the
file sda.tracks to [EMAIL PROTECTED]

In the meantime ... if you don't care about what is on this disk ... then
... just go ahead and repartition it.
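
For example ... assuming /dev/sda really is the disk you think it is (worth
double checking before writing anything to it):

  dd if=/dev/sda of=sda.tracks bs=512 count=60   # grab the first blocks for a look
  file -s /dev/sda                               # often names a foreign label or fs if one is there
  fdisk /dev/sda                                 # then lay down a fresh partition table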

-Don


Re: Slightly fuzzy on what to do next in making a copy of a system on new volumes IPL-able

2003-09-11 Thread Don Mulvey
>I know to change all the device numbers to their corresponding devices,
>but root= parameter? Do I leave it as /dev/dasdb1 and run zipl, or do I
>change it to /dev/dasdg1 for the running of zipl. I just wanna make sure
>the boot record goes to the right place.

I install to dasda, root is dasda1.
Ipl cms and ddr to a second volume ... dasdc.
Ipl 200 and mount dasdc1 as /data2.

Then ..

zipl.conf
[defaultboot]
default=linux
[linux]
  target=/data2/boot/
  image=/data2/boot/vmlinuz
  parameters="root=/dev/dasda1 dasd=200-20f"

And zipl ... now you have an ipl-able 202 ... ipl 202 clear.
You can now play with the install on 202.  Since I was doing kernel work I
kept the same root volume.  If you're doing more than playing with kernels
then you may want to pick up a different root vol.  You can always reipl 200
if you bugger up 202 ... and start over ... ddr ... mount ... zipl ...
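
The command side of that boils down to something like (the path to the config
file is just an example of where you might keep it):

  mount /dev/dasdc1 /data2
  zipl -c /data2/etc/zipl.conf    # writes the boot record onto the 202 volume using the stanza above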

-Don


Re: Possible enhancement idea for dasdfmt

2003-09-06 Thread Don Mulvey
Hi Leland.

>I have looked into this, but from a z/VM standpoint.  I had to table it
>until I got some other things cleaned up.  If interest is high, I will move
>it closer to the edge of the table and it might fall off sooner.  ;-)

Cool.

>What I had planned was this...

>Port portions of the e2fsprogs distro, mainly the ext2fs library
>Provide the necessary lowlevel I/O routines

I like to remind folks that file systems live within logical volumes.  In a
simple world your ext2/3 fs would be living within a single disk
partition.  However, what do you do for md, lvm or evms volumes?  You would
not have the logical volume manager code that is capable of understanding
metadata and building the logical volumes.  And a 390 that isn't using
linear or lvm volumes just isn't being used to its potential.

>Create simple command line utes like "ls", "cp", "chmod", and such to access
>the filesystem

Port busybox. Don't know about soft links on vm or mvs though.

>Ultimately, create a Rexx function package to directly access the file
>system

>Any suggestions, comments, rants?

Well, how do you build a fs on VM or MVS?  I think most everything lives in
your addr space and the i/o is the only thing that transcends.   Like you
could memory map the dasd partition into a dataspace or build a logical
volume using several dataspaces and then run your linux fs in the process
space ... kinda like a flattened fs.  This would be like running device
mapper and evms in the process space and having the device mapper code
build the dataspaces for you when evms sends it target mappings.

-Don


Re: Possible enhancement idea for dasdfmt

2003-09-06 Thread Don Mulvey
>Here's a radical thought. Why not let USS under MVS
>use volumes formatted for/by Linux with

Well ... do USS and MVS understand MD, LVM and EVMS logical volume
metadata?  If not ... then you can't build the volumes that your file
system lives within.   So, you're only going to be able to look at
compatibility volumes built from partitions.  Seems like a waste to not be
able to use linear, mirroring or lvm volumes on a 390.

>ext2/ext3/reiserfs write and read! Then the two OS
>could interchange data more freely and at a higher
>data rate than is possible with TCP/IP applications.

Why not run a cluster fs (e.g. opengfs) on mvs or uss (what is uss by the
way?) and not worry about such things.

>As to the data interlock issue, it could initially be
>solved with two volumes

>volume 1  rw to USS, ro to Linux
>volume 2  rw to Linux, ro to USS

>And invent a "refresh command" that rereads the meta
>data on the ro system?

>The refresh command would also allow ro filesystems to
>be shared between Linux systems and and the systems
>would not have to be rebooted to get updates. Just
>issue to "refresh" on the ro systems.

This is a bit of a reach.

-Don


Re: Possible enhancement idea for dasdfmt

2003-09-05 Thread Don Mulvey
>I've noticed that when the "dasdfmt" program formats a "cdl" volume, there
>is one thing "missing". DS1LSTAR is not set.
>Is this "by design" or just because it was too much of a bother to put it
in
>there?

An ldl has no vtoc and the partition starts right after the label record
and occupies the remainder of the disk.  The cdl partitions use fmt1 dscbs
and their sizes and locations are defined by DS1EXT1, the first extent
descriptor.  Are you saying that the DS1LSTAR field should also be set?  It
seems largely ignored.

-Don


Re: Force purge of buffercache??

2003-08-19 Thread Don Mulvey
>Anybody know of a reliable way to cause buffer cache pages to be purged (not
>written to disk, but released)??

Sure ... umount ... but this isn't going to help you :)

>Problem: In a shared disk environment, changes made by the owner (R/W) are
>not seen by R/O sharers in a timely fashion.  This is especially problematic

How are your disk readers ever going to know that disk hard sectors have
been updated ... in any kind of reliable fashion?  Scenario ... the vm with
writeable access to the disk rewrites some hard sectors.  Some/All of these
sectors have already been read ... and buffered ... by the vms with read
access to the disk.  So, (1) the readers have no reason to go out and
re-read these sectors again and (2) you have no idea if the readers will see
all the updated sectors.

>when the Linux virtual machine has a sizable virtual memory size allowing it
>to have lots of buffercache.  In small memory Linux images, the changes are
>recognized fairly quickly, you can force by editing a sizeable file which
>will cause the buffercache pages to be re-used (stolen).

How are the changes recognized ... are the readers doing directory scans of
the file system on the shared dasd?  Are you using cron or something?
There are no events to tell the disk readers (1) that changes have been
made (2) what sectors have been updated.  How about remounting the file
system?  Is this possible in your situation?

>I realize that this is unsafe in the general sense (writing to a minidisk
>that is shared by others), but I believe that it is fairly safe in the usage
>I am attempting.

Dunno about that ...

>Have tried several variations of commands on both the owner and sharers.
>Sync'ing and even shutting down the owner (to make sure the buffers are
>written) does not have any effect.  On the sharing systems, 'mount -o
>remount', various other mount options and various blockdev options
>(flushbufs, rereadpt) don't seem to help.  Interestingly enough, 'blockdev
>--rereadpt'  will cause deleted files to be noticed immediately, but not
>newly created files (I would have thought it wouldn't have affected either,
>but was trying about anything!).

>Have scoured some kernel code and believe there are routines that will do
>what is needed.  Usually used when unmounting a device or some other act of
>removing, but don't see a way to force them to be called via any system
>call/command available.

Use a cluster file system (opengfs) or a network file system (nfs) and
forget trying to sync your reader vms with the writer vm.  The simplest
method is probably just to nfs mount the file system.  You still will not
know when changes occur but at least you'll be in sync amongst the vms with
the nfs mount.
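
A bare bones nfs setup would be something like (host names and paths are made
up):

  # on the writer vm ... export the fs read-only via /etc/exports, e.g.
  #   /shared   *(ro,sync)
  exportfs -a
  # on each reader vm:
  mount -t nfs writervm:/shared /mnt/shared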

-Don


Re: big and little endian

2003-08-08 Thread Don Mulvey
> I keep seeing references to big endian and little endian.  I am going to
> show off my ignorance here and ask - What does this mean?  I do not know
> what the term endian means.

Basically, it is how numbers are stored ... if I have the number 0x01020304
and I store it in memory ... will it be stored as [1][2][3][4] ... which is
big endian ... or will it be stored as [4][3][2][1] ... which is little
endian.  On a little endian architecture (e.g. i386) the low-order byte goes
into memory first, so when you trade data with a big endian machine (like the
390) the bytes come out swapped ... and you can find Linux byte swap macros
that can handle this for you.
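
You can see it from a shell prompt ... perl here is just a quick way to write
out a native 32 bit integer (the output depends on the box you run it on):

  perl -e 'print pack("L", 0x01020304)' | od -An -tx1
  # i386 prints  04 03 02 01 ... s390 prints  01 02 03 04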

-Don


Re: kernel debugging

2003-07-31 Thread Don Mulvey
>S/390 specific or general? Some S/390 specific info. is at

Just to clarify things ... I am not looking for 390 specific information.
I am looking for tips/suggestions/references/ideas on how to perform kernel
debugging on an s390 install.  I have worked on i386 and ppc but the s390
is still elusive.  Other than sprinkling printk statements (my current practice)
... or causing a kernel panic to get a back trace (guilty as charged) ...
what are folks doing?  kdb, ltt, dprobes ... ?

Thanks,

Don


kernel debugging

2003-07-30 Thread Don Mulvey
Could anyone point me to a simple how-to on the subject?   I haven't had
any luck finding references on kernel debugging on an s390.   All
suggestions and pointers are welcome!

-Don


Re: Update: Parallel Access Volumes with SLES7

2003-06-26 Thread Don Mulvey
>I tried Mark Post's recommendation that I CMS format the volume but skip
>the CMS reserve, and that did in fact allow me to install a new instance.
>As a matter of fact, I have tried this a few times to make sure that the
>first time was not a fluke, and apparently the CMS reserve step needs to be
>skipped if installing to PAV enabled DASD.
>If there are detailed instructions for actually making use of PAV with LVM
>in SLES8, I would still be interested in seeing those.
>I can even offer my services as a guinea pig if someone needs to test with
>our specific hardware/software combination.

I thought I understood how PAV works and it was my impression that the i/o
subsystem knew the addresses belonged to a PAV and would/could drive
simultaneous i/o to the phys device.  If so ... then the dasd driver would
register a single disk with the block layer.  Which means that lvm1 would
not know about the multiple PAV addresses used by the i/o subsys ... only
seeing a single disk.  However, coming in through scsi, with multiple
interfaces out to the storage, you would indeed see multiple disks
registered with the block layer ... all of them mapped to the same phys
disk.  This is a situation in which I think you'd use lvm1 with its support
for multi-paths.  MD has also introduced multi-path support by adding an mp
personality.  Its job is to aggregate the multi-path disks and present them
as a single md volume.  This is really interesting stuff and I hope others
can comment on it.


-Don


Re: DASD requirements

2003-06-03 Thread Don Mulvey
>In my case I used 3 disks and then used LVM to manage them but they could
>easily be separate devices. The /boot device is an ext2 partition of a
>minidisk from which we IPL. Here are the details:

Yep ... and lvm will let you toss additional disks into the volume group to
create freespace that you can then use to grow the size of existing logical
volumes.  BUT ... make sure the file system allows resizing!
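
The sequence is roughly (names and sizes made up ... and check what your fs
actually supports first):

  pvcreate /dev/dasdd1              # prep the new disk partition as a physical volume
  vgextend vg01 /dev/dasdd1         # toss it into the volume group
  lvextend -L +2G /dev/vg01/lvol1   # grow the logical volume
  resize_reiserfs /dev/vg01/lvol1   # reiserfs grows mounted ... ext2 needs resize2fs on an unmounted fs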

-Don


Re: Accessing IBM 5120 storage from zseries Linux

2003-06-02 Thread Don Mulvey
Hi Vic,

You mentioned using an fdisk symlink to access fdasd.  This seems kinda
strange to me.  Doesn't fdasd try to create ibm partitions ... format 1
dscbs?  Does this make sense on a scsi disk?  I guess you want a vtoc and
named datasets for backup to a zOS but heck ... this just doesn't seem
right.   I mean if you have a vtoc for the dscbs then you also have a
couple of ipl records and a vtoc record ... on a scsi disk??  Strange.  I
would have thought that the scsi disks would have used some other
partitioning scheme ... I think Neil's share paper even said they needed
MSDOS partition support configured ... so I am a bit confused.   Oh yeah ...
can you boot off a scsi disk (which would be saying that it has ipl
records) ?

Thanks,

-Don


Accessing IBM 5120 storage from zseries Linux

2003-05-29 Thread Don Mulvey
I was wondering if anyone could comment about accessing 5120 storage from
their Linux systems.   I heard that the disks appear as scsi disks and so
the 390 dasd block driver isn't involved in accessing/formatting these
devices.   In fact, my understanding is that you use the IBM Subsystem
Device Driver to access the disks, using the multipath support found in the
IBM SDD.   Can anyone describe their experiences with the 5120 ... aka
shark ... and confirm/clarify how you access the disks?

Thanks!

Don Mulvey


Re: CDL/LDL

2003-02-17 Thread Don Mulvey
I just picked up on this thread and thought I'd make a few comments ... :-)


>CDL exists to foster interop between Linux and z/OS.
>That's good.   That's VERY good.   But z/OS has a problem
>in its (lack of) layering of device drivers and filesystem drivers.
>Read on.   [sorry this has gotten long]

Right but block device drivers and file systems are not the entire story.
A logical volume might also include LVM, MD and EVMS metadata.  It might
also include other partitioning/slicing schemes.  The proper orchestration
would be for the block device layer to export a logical device ... that the
slicing layer exports extents from ... that features like LVM, MD and EVMS
use to construct logical volumes ... that are consumed by file systems.


>(And,  again,  if CDL allows for EXT2 in partition zero,
>I don't care.   It's the *capability* I'm after,  not LDL itself.)
>We should probably investigate to learn if CDL breaks part zero FS.

CDL doesn't "allow" for ext2 in partition zero.  CDL allows for several ipl
records and a vtoc at the start of the phys volume.   Partition zero is a
reference to the entire disk ... /dev/hda and /dev/hda0 are the same thing.
A cdl disk differs from an ldl disk because it maintains a vtoc and has a
ptr to the vtoc in the 3rd ipl block I think.  So, you get at least a
tracks worth of dscbs that can be used to describe the disk.  There doesn't
seem to be any harm in this.  There may be other differences to system
software but at the block level the fundamental difference that I see is
the vtoc.


>I agree with the principle of least astonishment.
>I'm not suggesting that all DASD on VM (or otherwise) be unpartitioned.
>I'm only saying that the ability to use unpartitioned DASD
>when hosted by z/VM is a Good Thing,  a capability,  an option.

I like to hand raw disks (with no boot blocks) to region managers like LVM
and AIX who suballoc the disks anyway using phys extents.  This works fine
with scsi and ide disks but I am not sure about dasd. Dasd devices have
default ipl records that may have strange i/o characteristics (short block
writes).  Also, if you're into handing an entire disk over to a file system
then you're really saying that you don't ever want to resize (expand/shrink)
the file system.  I don't think this setup is very flexible.  The ability
to resize a logical volume is determined by all the components in the
logical volume stack ... first the file system must permit resizing, then
everything below it must permit resizing.  If the file system sits directly
on top of a disk you're never going to resize it.  If the file system sat on
top of an LVM volume you could add/remove extents to/from the volume.  If
it sat on top of a partition you could resize the partition ... and so
forth.


>INCIDENTALLY,
>many FS do not collide with the IBM DASD label.
>EXT2FS does not collide with an IBM DASD label on FBA.
>(Does collide with the IBM DASD label on CKD,  sadly.)
>ISO9660 FS does not collide with any IBM DASD label,  FBA or CKD.

Are you sure?  The reason I ask is because file systems can keep duplicate
copies of the super block. So, if the first copy was clobbered by being
written to the 1st track on the device the file system might be surviving
by finding an alternate super block.
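
You can see where the backup copies live on an ext2 volume (device name made
up):

  mke2fs -n /dev/dasdb1        # -n just reports where the superblocks would go ... writes nothing
  e2fsck -b 32768 /dev/dasdb1  # e2fsck can be pointed at a backup copy if the primary is clobbered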


>IN FACT
>a non-partitioned DASD (either CKD or FBA)
>could be made IPLable with minor adjustment to the bootstrap.
>I'm not interested in "fixing" zipl.   But I'll argue that
>characteristics of zipl do not demonstrate inability of the hardware.

Don't know anything about zipl.



>   If I can always assume that the 1st X blocks of a disk are a
> fixed format label structure of some kind, and that structure is the
> same with dedicated DASD or minidisk, then the driver code can be
> simplifed and optimized for that case. If I have to figure out that this
> might be a different format each time I go to mount a disk structure,
> then the code continues to be hard(er) to maintain.


I think that a logical volume is properly constructed by having each layer
in the logical volume export useable space to layers above it.  The block
driver properly exports the entire disk so that boot blocks are addressable
to software layers above it.  A slicing/partitioning layer removes the
blocks/tracks that it keeps its metadata on and exports the remaining
useable space on the disk to layers above it.  Eventually reaching the top
of the logical volume the file system only sees the useable space on the
disk ... it doesn't have addressability to those blocks/tracks that are
holding metadata for lower layers.  Having the file system assume that X
blocks of a disk are used for whatever is a layering violation.  The file
system would need to know about different block device drivers and what
their metadata looks like and where it resides and so forth.  This really
isn't the way to construct logical volumes.



>The reason I'm kicking so hard against this
>is because we're talking about blurring the line between
>hard

Re: Root almost filled on 3390-3

2003-02-07 Thread Don Mulvey
> One more thing.  I wouldn't get too enamored of LVM just yet.  I was in a
> meeting yesterday and one of the Unix gurus we work with on Linux/390 (and
> who is usually pretty knowledgeable about these things) mentioned that LVM
> is going to be sunsetted.  It is rumored that Sistina will not be enhancing
> it beyond the 2.4 kernel, only providing basic maintenance.  This same
> person said that Linus won't be putting LVM into 2.6 when it comes out.
> LVM will apparently be replaced by something similar but more capable from
> IBM, and that this new filesystem is already in the 2.4.17 kernel.  There
> was an IBM rep there at the meeting and he seemed to know about this change
> as well.   We've put all our expansion of LVM on hold until we find out if
> this rumor is true and (if so) what the replacement is and how you work
> with it.

According to Sistina's press release dated 12/9/02 ... LVM2 will be a
standard 2.6 component ... and sure enough you will find device mapper in
the std 2.5.xx trees.   I also run device mapper on my 2.4.19 390 kernels
and I have used it with a 2.4.20 ppc64 kernel ... testing testing testing
testing.
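
If you want to check whether device mapper is in the kernel you are running
(assuming the dmsetup tool is installed):

  dmsetup version                # reports driver and library versions when dm is present
  grep device-mapper /proc/misc  # the dm misc device should show up here once the driver is loaded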

-Don



Re: CDL layout, and the Logical Volume Manager

2003-02-06 Thread Don Mulvey
>As I am sitting here designating the file system for my next Linux LPAR, it
>occurs to me that it will be advantageous to take the 2 remaining 3390-mod
>3 1 gig custom volumes I have and make them into a single 2 gig volume with
>LVM. My management wants all this done in CDL so that we can take volume
>backups, etc. via S/390.

>The question I have is this - what are the implications/problems/concerns
>with having 2 CDL volumes (2 vtocs, etc) in a Linux managed LVM?

Don't know much about the 390 backup facilities.   I figure they look for
format 1-3 dscbs and back up the dataset to some media.   I doubt you'll
see additional extent dscb or indexed data dscb entries on a Linux system.
So, basically you're just backing up a format 1 dscb ... copying the 390
"partition" to some media.  I also figure that lvm pulls the format 1 dasd
extents as physical volumes (pv)  into the volume group ... and then carves
them up into its own phys extents (pe) which are used when mapping logical
extents (le) in the volume.   So, you are basically talking about backing
up a single extent (format 1 dscb) on each dasd.   Not knowing how they
actually get backed up, I'd say the biggest headache is keeping them in
sync.  Since your volume will be spread across two different backups you'll
want to be sure the backups are done together and then related to each
other somehow - date/time/sequence number/etc.  I assume the 390 backup
facility takes care of all of this stuff ;-)

-Don



Re: EVMS scrapped?

2003-01-07 Thread Don Mulvey
>"Sistina's announcement that LVM 2.0 would be incorporated into the 2.6
Linux
>kernel came shortly after IBM programmers working on their own competing
>Enterprise Volume Management System (EVMS) announced they would scrap much
of
>their project.

The new kernel service is device mapper and we (evms) just released a 2.0
alpha that uses device mapper kernel services. Some of our metadata layouts
(sparse seg mgr, bad block reloc, os2) require additional dm kernel plugins
and we will provide them as needed. Our own kernel runtime code remains
fully supported -BUT- will be phased out gradually as we move entirely over
to device mapper.  So, you will find the evms team actively supporting
1.2.x, releasing dm kernel patches, working on a 2.0 release, testing
dm+evms on platforms ... like the 390 ... and ... clarifying statements
like the above.


--


>EVMS isn't exactly dead. EVMS is two things - one of them was a rather
>overengineered (IMHO) kernel driver which partly due to that had some
>fun bugs, security holes etc and was hard to follow. Partly a very

Please ... give it a rest. We bent over backwards trying to satisfy
concerns and criticisms and have moved on.  I don't want to go back and
rehash issues.


>nice integrated view of volumes and good strong tools. The tools end
>of EVMS is alive and well, its merely the low level implementation
>details which have changed.

Exactly.  Our kernel runtime requirements will be met by device mapper
(with additional device mapper plugins as needed) and we already have an
alpha release of our user-space tools.  But ... lots more work to do ...
like testing on various hw platforms like the 390.


>So it'll look like EVMS, configure like EVMS, and run like LVM2

You're right ... in a nutshell ... it will look and act like evms but run
with dm plugins.

-Don



Re: Anyone running a 2.5 kernel?

2002-11-12 Thread Don Mulvey
Hi Mark,

No development system here.  I am running Linux in a VM to work on volume
mgt code. I need to verify evms user interface tools with device mapper on
various platforms and was looking for any gotchas before proceeding on 390.

-Don


-Original Message-
Date:Mon, 11 Nov 2002 10:34:10 -0500
From:"Post, Mark K" <[EMAIL PROTECTED]>
Subject: Re: Anyone running a 2.5 kernel?

Don,

The only warning I can think of is that you're going to be running a
"development" kernel.  Unless you're planning on being part of the
development process, providing feedback to the kernel developers, etc., you
don't want to do that.  If that _is_ your intent, then go for it.

Mark Post



Anyone running a 2.5 kernel?

2002-11-11 Thread Don Mulvey
Hi,  I was wondering if any of you were running a 2.5 kernel.   I haven't
tried it yet myself and am about to do so.  Any warnings/suggestions would
be appreciated.   Thanks,  Don



Re: CDL/LDL - was Adding a DASD on RedHat 7.2

2002-09-18 Thread Don Mulvey

Marcy wrote:

>I did see a value to it, so much so that I went back and re-did
>some volumes in CDL format.

>The things I found I liked were to be able to access the minidisk
>from CMS without it blowing away CMS, putting a meaningful (to me)

Can you elaborate a bit on this?  What kinda access?  Does the vm drop to
CP?  What error?

>minidisk label on, and the biggie, being able to run VMBACKUP
>against the minidisks for disaster recovery (while Linux is down
>of course).

Both LDL and CDL disks have label tracks and a volume label record. Where
they differ is that LDL does not have a vtoc.

-Don



More than 3 partitions on CDL disks ...

2002-05-24 Thread Don Mulvey

FDASD only allows 3 partitions on a compatibility disk.   However, in every
case I have looked at there is room in the vtoc for more partitions.   Does
anyone know if there is some reason for not allowing more partitions on a
cdl disk?

Thanks,

Don



Re: Multipath I/O on 390 Linux

2002-05-20 Thread Don Mulvey

>I have a minor point related to your efforts, not multipath itself:

>What happens if another task (or machine) looks at that volume when it
>has your bogus volid?


Hi John,

First off ... I think that multipath should be handled below the gendisk
layer and it shouldn't be my problem at all.  In the meantime I am trying
to prevent confusion in our volume management code by recognizing multipath
devices on 390 in the method I described.  Threading issues - none -
whoever opens the engine (user mode config tool) is guaranteed exclusive
access.  Shared device issues - we are adding clustering support this year
and the distributed lock manager should protect me.   But let me reiterate
- I don't see the benefit of allowing multipath devices to surface past the
gendisk layer.  Um ... I think I answered your question.  The approach
seems Ok to me ... just inadequate ... I think more work needs to be done to
support multipath devices.

-Don



Re: Multipath I/O on 390 Linux

2002-05-20 Thread Don Mulvey

On Fri 17 May at 09:05:47 -0700 [EMAIL PROTECTED] said:

>> So ... I'd -LOVE- to see multipath support on s390 that would give me the
>> comfort of knowing I won't ever see multiple instances on the same physical
>> disk being reported and also ... the added benefit of improved performance

>Is this any different than what happens in pretty much any OS or any of the
>other linux architectures?  If you have a Shark or whatever and have multiple
>scsi or FC paths to its disks you get multiple sd's or hdisks or whatever that
>all correspond to the same physical disk.

>Patrick Mansfield just posted to the linux-scsi list the other day a patch
>that pushes multipathing into the scsi mid-layer.  That would allow you to
>hide the multiple instances.

Hi Tim,

From what I understand ... there are both differences as well as
similarities. Scsi multipath support (1) only handles scsi drives (2)
usually is limited to newer drives with burnt in drive ids (3) is supported
in the generic scsi code i think and so scsi mp disk recognition in kept
beneath the gendisk layer.  390 multipath support (1) handles any channel
attached storage device (2) identifies drives by ch-cu-dev addressing (3)
if supported in the io subsystem layer then the OS can't see and won't add
multiple instances of the device to the gendisk list.

From recent posts ... sounds like I should be asking questions about PAV
devices rather than multipath.

-Don



Re: Multipath I/O on 390 Linux

2002-05-17 Thread Don Mulvey

>The kernel of current Linux-Distributions does not support
>multiple pathes to a dasd device at all.
>A workaround is to spread the data over multiple devices
>using LVM or MD in striping mode. Using the same amount
>of devices like the amount of pathes available (or a multiple
>of it) should fit best.
>This problem is already addressed in the current (experimental)
>2.4.17 code.


Hi Carsten,

I work on the Enterprise Volume Management System (evms) project.   You can
look us up on sourceforge.   We support lvm and md volumes in addition to
evms volumes, compatibility volumes, etc.   I don't understand how any
flavor of raid is a workaround for not supporting multipath.  This is not
... i repeat ... not ... a performance question at all.   So, let me
explain a bit further ... you undoubtedly know all this ...  A linux block
device driver reports disks to the kernel (2.4) by calling register_blkdev
() I think. If the block device driver is incapable of recognizing
alternate paths to the same device, either by scsi id or ch-cu-dev
addressing or whatever,  then later on you're going to find multiple
instances of the same disk in the gendisk list.   Then, our evms logical
device manager walks the gendisk list and thinks it is finding unique disks
... but it truly isn't.   It hands the logical disks up the feature stack
so that disk segments, regions, containers ... volumes can be discovered
... only it's now handing up multiple instances of the same disk.   I
actually work on the user mode configuration tools and honestly don't know
much about the kernel side of things.  However, my engine plugins (I write
features like drive linking and segment managers like msdos or the 390
segmgr) need to be able to run discovery paths just like the kernel and
life suddenly gets tougher for me when I might see multiple instances of
the same physical disk.

You say that 2.4.17 supports multipath.   I built a 2.4.17 evms kernel  to
test this out on ... I don't think my 390 has multi channel paths
configured ... at least I don't think so ... and so I tried the following :

shutdown -h now
cp link * 208 209 mw
ipl 200 clear

And now I have dasdj showing up on my machine.   This isn't a very good
test case but it does seem to simulate what can happen when the same
physical device appears more than once in the gendisk list.Currently, I
look for disks with the same volume id.   So, dasdi and dasdj are both
going to report a volid of 0x0208 (in EBCDIC) and so I inspect them a bit
closer ... by writing a test pattern to the volid field on one of the disks
and then reading the volume label on the second disk to see if the test
pattern appears in the volid field.  If it does ... then I am looking at
the same disk through different gendisk entries ... restore the volid on
the disk ... remove the second instance of the disk ... continue along the
volume discovery path.Ok, this seems to allow me to recognize multiple
entries ... prevent anyone from trying to allocate a data extent on the
second entry ... even do failover to the second entry if the first starts
producing i/o errors ... but I can't test for channel busy and start the
i/o on a non busy chpid to get improved performance.   Plus ... I now have
a paranoid segment manager  on the 390 (a plugin that manages a disk data
extent) that is worried about multipath devices when it should only be
concerned about managing segments.

So ... I'd -LOVE- to see multipath support on s390 that would give me the
comfort of knowing I won't ever see multiple instances of the same physical
disk being reported and also ... the added benefit of improved performance
by utilizing non-busy channel paths.   But my test case seems to show that
on a 2.4.17 kernel I will still see mp disks showing up in the gendisk list
multiple times.   Is this just a bad test case?

Also,  I think your suggested configuration was to have striping equal the
number of channel paths to the device.   Would it not be more desirable to
have each physical volume on a separate control unit with multiple channel
paths to each control unit?  This would seem to lessen the chance of
getting channel busy back when trying to read/write a stripe to its device.


Mit Freundlichem ... and Thanks!

Don Mulvey
IBM Austin
Linux Technology Center
(512) 838-1787



Multipath I/O on 390 Linux

2002-05-16 Thread Don Mulvey

I have heard that multipath i/o is supported on s390 and was curious how
this is accomplished.   If anyone has any information I'd appreciate a
reference or a reply to this post.

On other architectures,  multipath i/o is handled in the scsi layer, using
the scsi id found on newer drives.   However,  on a 390 system I haven't a
clue as to how multipath would be handled ... unless the dasd driver is
handling it ... but I don't think it does.Can anyone shed some light on
this?

Thanks!

-Don



[Evms-devel] [ANNOUNCE] EVMS Release 1.0.0

2002-03-28 Thread Don Mulvey

I thought I'd cross post this from the evms mailing list for those of you
interested in volume management.

-Don


The EVMS team is announcing the first full release of the Enterprise Volume
Management System. Package 1.0.0 is now available for download at the
project
web site:
http://www.sf.net/projects/evms

Highlights for version 1.0.0:

v1.0.0 - 3/28/02
 - Core Kernel
   - Support for /proc.
 - New directory /proc/evms/.
 - File entries: volumes, info, plugins
 - Sysctl entry: /proc/sys/dev/evms/evms_info_level can be used to
   query and set the EVMS kernel messaging level.
 - GUI
   - Option panel fixes.
   - New columns in most panels: Read-only and Corrupt.
   - Default engine logging level changed from "everything" to "default".
   - Check for minimum required engine API version.
 - Text-Mode UI
   - Added "F5=Commit" to menu to allow saving changes without exiting.
   - Screen refresh fixes.
   - Default engine logging level changed from "everything" to "default".
   - Check for minimum required engine API version.
 - Command Line
   - On-line help cleanup.
 - New Plugin: s390 Segment Manager
   - Recognizes existing CDL, LDL, and CMS partitions.
   - Can build on top of these partitions in the engine, but
 cannot yet create new s390 partitions.
 - MD Plugin
   - Added proc entry: /proc/evms/mdstat
   - Added sysctl entries: /proc/sys/dev/evms/md/speed_limit_[min|max] for
 controlling the speed of RAID-1 and RAID-5 sync'ing.
 - BBR Plugin
   - Bug fixes to the I/O error remap handling.
 - AIX Plugin
   - Bug fixes in the discovery path and mirroring I/O path.
 - LVM Plugin
   - Added proc entry: /proc/evms/lvm/global



Re: Logical Volume Problems

2002-03-26 Thread Don Mulvey

>I then build 5 Logical volume groups with Yast, and selected /dev/dasdb1-f1
>for each of the 5 Logical volume groups respectively.
>In each group, I selected 1 logical volume.  In the " Set target
>partitions/filesystems" of Yast I pointed each logical volume to the
>respective mount points I required.  My /etc/fstab currently is as
>follows:

I think LVM places metadata at the front of the partitions ... so your
approach would wipe out any existing file system.  Build logical volumes
...  mkfs ... mount.
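
In other words, something like (names made up):

  pvcreate /dev/dasdb1
  vgcreate vg01 /dev/dasdb1
  lvcreate -L 500M -n lvol1 vg01
  mke2fs /dev/vg01/lvol1           # make the fs *after* the logical volume exists
  mount /dev/vg01/lvol1 /mnt/data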

-Don



Re: insmod: dasd: no module by that name found

2002-03-25 Thread Don Mulvey

>On a running 2.2 system, the only way to add/remove DASD is to update the
>parmfile, re-run silo, and re-boot.  On a 2.4 system, it can be done
>dynamically.  Since I believe you're trying to use a SuSE 7.0 system to
>install the Red Hat 7.2 system, then I would have to say you're looking at a
>reboot.

Mark, how do you dynamically add dasd to a 2.4 kernel?



Re: insmod: dasd: no module by that name found

2002-03-25 Thread Don Mulvey

>Hello,
>Does anyone know why I might receive the following error when trying to add
>an additional
>dasd vol to my system ? ('cause you're a dolt.. is not a valid answer ;)
>dasd vol to my system ? ('cause you're a dolt.. is not a valid answer ;)

># insmod dasd dasd=33a2
>insmod: dasd: no module by that name found

>I have added volumes before, but that was during the initial install phase.

A block device driver can either be statically linked with the kernel or
built as a separately loadable module. You choose how you wish the module
to be linked with the kernel when you configure the kernel prior to
building it. So, if insmod can't find a module then either it wasn't built
or else it is already linked in with the kernel.  Since your system is up
and running then this driver must have been linked into the kernel.  The
only way I know how to specify dasd to the block device driver is via the
parm file or in your zipl.conf file.  BTW, the statement you show above is
using insmod to set a module variable called dasd to a value of 33a2. I
don't know how this would cause the dasd driver to do a rediscover of disks
but then I am kinda new to 390.
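
So my approach would be to add the new address to the dasd= list in
/etc/zipl.conf, re-run zipl and re-ipl ... something like (the 200-20f range
is just from my own config, with your 33a2 tacked on):

  parameters="root=/dev/dasda1 dasd=200-20f,33a2"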

-Don



can I boot Linux off either cdl or ldl formatted disks?

2002-03-15 Thread Don Mulvey

Hope you folks don't mind simple questions ;-)

Can I ipl a Linux image off either a cdl or ldl format disk?

I am going to try this and see if it works but I thought I'd ask and see
what I can learn.   BTW ... could somebody point me to a good explanation
of cdl vs. ldl?

Thanks,

-Don



Re: building gtk apps ...

2002-03-15 Thread Don Mulvey

Hi Mark,

Yeah, but there isn't any glib-config script or any of the header files that
this gtk app needs to build.   So, I figure that development support isn't
really there.

-Don





-----Original Message-----

Don,

When I look at my copy of Red Hat 7.2 for Linux/390, I see
-rw-r--r--1   145511 Dec 18 11:39 glib-1.2.10-5.s390.rpm
-rw-r--r--1   120616 Dec 18 11:39 glib-devel-1.2.10-5.s390.rpm
-rw-r--r--1   962339 Dec 18 11:51 gtk+-1.2.10-11.s390.rpm
-rw-r--r--1  1169075 Dec 18 11:52 gtk+-devel-1.2.10-11.s390.rpm

Mark Post

-Original Message-----
From: Don Mulvey [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, March 12, 2002 5:49 PM
To: [EMAIL PROTECTED]
Subject: building gtk apps ...


I am trying to build gtk apps on my 390 RedHat install.   I looked and
didn't see glib development support.  Has anyone gone this route and
installed the gtk devel rpm(s) and built gtk apps?   I am kinda new to 390
Linux and if there is a url or reference I should refer to ... I'd
appreciate hearing about it.

Thanks,

Don Mulvey
IBM Austin
Linux Technology Center
(512) 838-1787



building gtk apps ...

2002-03-12 Thread Don Mulvey

I am trying to build gtk apps on my 390 RedHat install.   I looked and
didn't see glib development support.  Has anyone gone this route and
installed the gtk devel rpm(s) and built gtk apps?   I am kinda new to 390
Linux and if there is a url or reference I should refer to ... I'd
appreciate hearing about it.

Thanks,

Don Mulvey
IBM Austin
Linux Technology Center
(512) 838-1787