RHEL 6 - Unable to read package metadata ... Cannot retrieve repository metadata at install time

2011-02-22 Thread Merino, Gaston
All,

I am trying to install RHEL 6 on a virtual machine. After it creates the 
partition, I get the error: 
Unable to read package metadata. This may be due to a missing repodata 
directory. Please ensure that your install tree has been correctly generated.
   
   Cannot retrieve repository metadata (repomd.xml) for
   repository: anaconda-RedHatEnterpriseLinux-201009222021.s390x. 
   Please verify its path and try again.

The options are Exit Installer, Edit (which errors out), and Retry (which gives
the same error).

Any assistance is greatly appreciated.

Thanks.

Gaston J. Merino
Sr. Platform Systems Administrator
IS&T R&D Support
BMC Software

phone: 713.918.1772
mobile: 713.494.2109
fax: 713.918.2022

2101 City West Blvd. 
Houston, TX 77042

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Writing a Systems Programmer Resume

2011-02-22 Thread Joe Gallaher
I would like to invite anyone attending next week's SHARE conference in Anaheim 
to come to my session on "How to Write a Resume for
a Mainframe Systems Programmer" (session 8903).  It is the third time I have 
given this presentation at SHARE and it contains a lot
of useful information and samples for the aspiring resume writer.  Here is a 
link to my session:



http://share.confex.com/share/116/webprogram/Session8903.html



If you cannot attend, feel free to send me an email (or LinkedIn message) and I 
will send you a link to my PowerPoint slides (which
will be available after Feb. 28).  I look forward to seeing you Monday!



Joe Gallaher

j...@spci.net

www.SPCI.net

www.linkedin.com/in/joegallaher

323-822-1569




Re: Convert filesystems now or wait for SLES1x?

2011-02-22 Thread Leland Lucius

On 2/22/11 12:54 AM, Mark Post wrote:

On 2/22/2011 at 01:42 AM, Leland Lucius  wrote:

Up til now we've used Reiser and would convert to ext3 as part of the
rebuilds.  But, should we leave all of this until the next upgrade?
Will the next recommendation be btrfs or ext4?  Will they still be too
"new" to bet the house on?


I would wait until a system gets replaced or upgraded.  I doubt very much that 
ext4 will ever be a SUSE Linux standard.  btrfs might be, but not in the short 
term, and certainly not for SLES11.  SLES12 would be the earliest, if at all.



Then we shall just stick with ext3.

Thanks,

Leland



Re: Shared filesystem in redhat, MQ redundancy

2011-02-22 Thread Michael MacIsaac
modprobe vmcp
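If the /dev/vmcp node is still missing after loading the module, a rough sketch of creating it by hand (assumption: vmcp registers as a misc character device, so the major is 10 and the dynamic minor is read from /proc/misc — verify on your system):

```shell
# Sketch only: create /dev/vmcp by hand when udev does not.
# Misc devices share major 10; the minor is dynamic, so read it
# from /proc/misc instead of hard-coding it.
make_vmcp_node() {
    modprobe vmcp || return 1
    # /proc/misc lists "minor name" pairs, one per line
    minor=$(awk '$2 == "vmcp" { print $1 }' /proc/misc)
    [ -n "$minor" ] || { echo "vmcp not registered in /proc/misc" >&2; return 1; }
    mknod /dev/vmcp c 10 "$minor"
    chmod 600 /dev/vmcp
}
```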

"Mike MacIsaac"(845) 433-7061



Re: Shared filesystem in redhat, MQ redundancy

2011-02-22 Thread Scott Rohling
Are you referring to XLINK?   If you define XLINK volumes and systems in
SYSTEM CONFIG - and XLINK FORMAT the volumes - then you get 'LINK
protection' across systems.   From VM1, attempt an RW link to a minidisk on VM2
that already has an RW link; your LINK will get a return code indicating the disk
is in RW use by 'VM2'.   (not which user on VM2 - just VM2)

Scott Rohling

On Thu, Dec 16, 2010 at 1:11 AM, Agblad Tore  wrote:

> It's two different z/VM systems in two different z196s.
> We have software in z/VM that enables each z/VM system to check what the
> other one is using. I don't remember the abbreviation now.
> That is sort of a requirement, because now only one server can LINK to the
> disk in write mode. We even tested a logic where both servers actually try
> to LINK in write mode; the one that got it first is the current MQ.
> Worked perfectly, but it was harder to control the traffic from the app servers.
> So the switch is operator-initiated now, much safer.
> And by the way, we run SLES11 SP1, but it's the same for RedHat I guess.
>
> ___
> Tore Agblad
>



Re: Shared filesystem in redhat, MQ redundancy

2011-02-22 Thread Roger Evans
This shared VM disk solution looks pretty neat, and I would like to use
it to make database backups visible to two Linux VMs running under the
same VM.   But when I try the vmcp command, I get a missing-device message,
e.g.:

# vmcp q disk
Error: Could not open device /dev/vmcp: No such file or directory

modprobe -l shows the vmcp module loaded.

Do I need a mknod command?  If so, what are the major/minor numbers?

Running SLES10 sp3, kernel: 2.6.16.60-0.74.7

Roger

On Thu, 2010-12-16 at 09:11 +0100, Agblad Tore wrote:

> It's two different z/VM systems in two different z196s.
> We have software in z/VM that enables each z/VM system to check what the
> other one is using. I don't remember the abbreviation now.
> That is sort of a requirement, because now only one server can LINK to the
> disk in write mode. We even tested a logic where both servers actually try
> to LINK in write mode; the one that got it first is the current MQ.
> Worked perfectly, but it was harder to control the traffic from the app servers.
> So the switch is operator-initiated now, much safer.
> And by the way, we run SLES11 SP1, but it's the same for RedHat I guess.
>
> ___
> Tore Agblad
> Volvo Information Technology
> Infrastructure Mainframe Design & Development, Linux servers
> Dept 4352  DA1S
> SE-405 08, Gothenburg  Sweden
>
> Telephone: +46-31-3233569
> E-mail: tore.agb...@volvo.com
>
> http://www.volvo.com/volvoit/global/en-gb/
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Marcy 
> Cortes
> Sent: den 15 december 2010 18:37
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Shared filesystem in redhat
>
> Agblad,
>
> Do you configure both MQ server names in your app server?  Or do you move 
> host names to the new server?  Or use VIP or something?
> Do you have anything in place to prevent write links from both servers?  Are 
> they on the same VM system?
>
> Just curious, we have a MQ MI implementation in proof of concept now but set 
> up with NFS.
> Marcy
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@vm.marist.edu] On Behalf Of Agblad 
> Tore
> Sent: Wednesday, December 15, 2010 7:37 AM
> To: LINUX-390@vm.marist.edu
> Subject: Re: [LINUX-390] Shared filesystem in redhat
>
> Agreed that NFS has a serious drawback, especially when you
> need redundancy (duplicate servers).
>
> Here is just a hint to get rid of that problem when using MQ:
>
> We needed a redundant solution for MQ, which stores, for example, its queues
> in normal files. The recommendation when you have two (or more) MQ servers
> was to use NFS-mounted files, and there goes redundancy :(
>
> We finally ended up with two servers with MQ installed, but active in only
> one of them, and both (redundant) app servers called the active one.
> The MQ server chosen as active at start did VMCP LINK 800 as writable,
> dasd_configure 0.0.0800 1 1
> (we used DIAG) and mounted it before starting MQ.
> When stopping MQ we added umount, dasd_configure 0.0.0800 0 0 and VMCP DET 800.
> You need to put in a wait of 10 seconds, since dasd_configure starts the
> requested action asynchronously.
> Now we can switch MQ between the two servers, making it redundant.
> It's very stable and we get normal real-disk performance.
>
>
> ___
> Tore Agblad
> Volvo Information Technology
> Infrastructure Mainframe Design & Development, Linux servers
> Dept 4352  DA1S
> SE-405 08, Gothenburg  Sweden
>
> Telephone: +46-31-3233569
> E-mail: tore.agb...@volvo.com
>
> http://www.volvo.com/volvoit/global/en-gb/
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Richard 
> Troth
> Sent: den 14 december 2010 16:21
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Shared filesystem in redhat
>
> Good points.
> I did not mention your #5 because it affects shared DASD too and was
> trying to contrast NFS and shared disk.
>
> -- R;   <><
> Rick Troth
> Velocity Software
> http://www.velocitysoftware.com/
>
>
>
>
>
> On Tue, Dec 14, 2010 at 10:12, Patrick Spinler  
> wrote:
> > On 12/14/2010 07:47 AM, Richard Troth wrote:
> >>
> >> Having just recommended NFS, I must respond to this question.  YES,
> >> there are reasons why one might NOT want to use it.
> >>
> >> NFS is my first choice for shared RW storage or for any shared storage
> >> where you don't have a hardware sharing option.  Caleb can share the
> >> disks at the HW level, so that is preferred.
> >>
> >> So why or when would one not want to go NFS?
> >>
> >> #1 - NFS firstly requires the network.  The sharing systems cannot
> >> operate independently.  (They cannot be isolated.  Sometimes people
> >> isolate systems.  Lots of reasons for that; do I need to enumerate?
> >> And don't get me started about port-grained access controls in
> >> switches and VLANs.)  The requirement for the network also affects the
> >> sequencing at startup. 
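Tore's LINK / dasd_configure switch-over sequence quoted above can be sketched as a pair of shell functions. Device 800 and the 10-second settle time come from his post; MQOWNER (the minidisk owner), the by-path device name, and the /var/mqm mount point are hypothetical placeholders:

```shell
# Sketch of the operator-initiated MQ switch-over described above.
# Only ever run start_mq_side on one guest at a time.
start_mq_side() {
    vmcp "LINK MQOWNER 800 800 MR" || return 1   # writable link to the shared minidisk
    dasd_configure 0.0.0800 1 1                  # online, second "1" = use DIAG access
    sleep 10                                     # dasd_configure acts asynchronously
    mount /dev/disk/by-path/ccw-0.0.0800-part1 /var/mqm
    # ... start MQ here ...
}

stop_mq_side() {
    # ... stop MQ here ...
    umount /var/mqm
    dasd_configure 0.0.0800 0 0                  # take the DASD offline
    sleep 10                                     # again, the action is asynchronous
    vmcp "DET 800"                               # detach so the other guest can link RW
}
```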

Re: PAV in SUSE 10 SP2

2011-02-22 Thread Eric R Farman
Samir,

Stefan mentioned:

> When Linux is running in an LPAR, it enables PAV itself,
> and everything should work as you expect. In the case of a z/VM
> system, however, it is z/VM that enables PAV, and in your case
> it enables Hyper PAV. Perhaps it is possible to configure the
> use of base PAV / Hyper PAV in z/VM, but I am not sure.

z/VM will put each logical control unit at the highest level the DASD
supports, provided the corresponding z/VM support is present (HyperPAV
support was introduced in z/VM 5.3.0).  So if the DASD supports HyperPAV,
by default z/VM will try to use the aliases as HyperPAV.  To change the level of
PAV being used by z/VM for a logical control unit, use the SET CU command
or CU statement in SYSTEM CONFIG.  You will need to VARY OFF the alias
subchannels before issuing the SET CU command, and you can get the ssid
parameter from the output of QUERY DASD DETAILS.
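A rough CP sketch of that sequence (the device numbers are from this thread, the ssid value is hypothetical; take the real one from QUERY DASD DETAILS, and check the exact SET CU operands for your z/VM level against the CP Commands and Utilities Reference):

```text
QUERY DASD DETAILS 9D8       note the SSID reported for the logical control unit
VARY OFFLINE 9D8-9D9         take the alias subchannels offline first
SET CU PAV nnnn              switch the LCU from HyperPAV to base PAV (nnnn = ssid)
```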

Regards,
Eric

Eric Farman
z/VM I/O Development
IBM Endicott, NY



Re: PAV in SUSE 10 SP2

2011-02-22 Thread Stefan Weinhuber
Hi,

Samir Reddahi  wrote:
> in z/VM:
> att 9ce 9d8 9d9 *
> 00: DASD 09CE ATTACHED TO LINUXSYS 09CE WITH DEVCTL HYPERPAV BASE
> 00: DASD 09D8 ATTACHED TO LINUXSYS 09D8 WITH DEVCTL HYPERPAV ALIAS
> 00: DASD 09D9 ATTACHED TO LINUXSYS 09E1 WITH DEVCTL HYPERPAV ALIAS
> 
[..] 
> in dmesg I get:
> dasd(eckd): 0.0.09d8: 3390/0A(CU:3990/01) Cyl:0 Head:0 Sec:0
> dasd(eckd): 0.0.09d8: Cannot online device that reports no cylinder or
> head information.
> dasd_generic couldn't online device 0.0.09d8 with discipline ECKD rc=-95
> 
> Is this because these are HyperPAV devices? I thought that it was possible
> to use HyperPAV devices as regular PAV devices in SUSE 10?

The base PAV / Hyper PAV compatibility you are looking for is
usually provided by the storage server. Before the storage server
shows any alias devices to an LPAR, the operating system running
in that LPAR has to explicitly tell the storage server which
kind of PAV it supports. If the operating system only supports
base PAV (e.g. SLES10 SP2), then the storage server will enable
the alias devices as base PAV aliases; if it supports Hyper PAV,
then the alias devices will be Hyper PAV aliases.
This is also the reason why you do not see any alias devices
in sysfs before you have enabled at least one base device.

When Linux is running in an LPAR, it enables PAV itself,
and everything should work as you expect. In the case of a z/VM
system, however, it is z/VM that enables PAV, and in your case
it enables Hyper PAV. Perhaps it is possible to configure the
use of base PAV / Hyper PAV in z/VM, but I am not sure.
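The "aliases only appear after a base device is online" behaviour can be checked from the Linux guest; a hedged sketch using the s390-tools commands and the device numbers from Samir's post (the sysfs alias attribute path is assumed as on current distributions):

```shell
# Sketch: bring a base device online, then look for its aliases.
# 0.0.09ce is the base and 0.0.09d8 an alias, per the post above.
show_pav_state() {
    chccwdev -e 0.0.09ce                        # set the base device online
    lsdasd                                      # alias devices should now be listed
    cat /sys/bus/ccw/devices/0.0.09d8/alias     # "1" marks an alias subchannel
}
```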

Mit freundlichen Grüßen / Kind regards
 
Stefan Weinhuber

-- 
Linux for zSeries kernel development
IBM Systems & Technology Group, Systems Software Development / SW Linux für 
zSeries Entwicklung

IBM Deutschland
Schoenaicher Str. 220
71032 Boeblingen
E-Mail: w...@de.ibm.com

IBM Deutschland Research & Development GmbH / Vorsitzender des 
Aufsichtsrats: Martin Jetter
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, 
HRB 243294
