Hey All
I wrote a doc covering how I got my RHEL 8-based IdM cluster updated to
RHEL 9. It's not terrible, but it is somewhat involved.
You should be able to get it from https://ibm.biz/Bdar8z
Share and Enjoy!
--
Jay Brenneman
--
try https://www.redbooks.ibm.com/abstracts/sg248303.html
On Thu, Aug 22, 2024 at 12:33 AM Jake Anderson
wrote:
> Hello
>
> I am looking for redhat installation using SCSI on zVM.
>
> is there any documentation which describes step by step to build zlinux on
> SCSI?
>
> Jake
>
> -
Hey Yall
I just put a paper online called "Writing Baby's First Ansible Module"
which is exactly what it sounds like - a guide to getting started writing
yourself a role or module for Ansible. If you're having trouble getting
started ( like I was this time last year ) I hope this eases your pain.
"virtus in medio stat"
> "Perfect is the enemy of the Good"
>
>
> On Tue, Sep 19, 2023 at 4:19 PM Robert J Brenneman
> wrote:
>
> > Greetings All,
> >
> > I've just gotten permission from the powers that be to release some tools
> >
Greetings All,
I've just gotten permission from the powers that be to release some tools
I've been using internally on my team for a couple months now. These are a
set of Ansible modules that you can use to create/destroy and modify
virtual machines on z/VM via SMAPI.
Yes - I am aware of everyone
#CP SYSTEM RESTART will do a PSW restart which in turn triggers a kdump if
the machine is enabled for kdump
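Before issuing the PSW restart it's worth confirming from inside the guest (if you can still get a shell) that a crash kernel is actually loaded - a minimal sketch, assuming a standard Linux sysfs:

```shell
# 1 means a crash kernel is loaded and a PSW restart will kdump;
# 0 means the machine will just fall over instead.
if [ -r /sys/kernel/kexec_crash_loaded ]; then
    cat /sys/kernel/kexec_crash_loaded
else
    echo "kexec not available on this kernel"
fi
```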
On Mon, Feb 20, 2023 at 10:51 AM Dave Jones wrote:
> I have a client with an unresponsive zLinux guest. The client wants to
> (somehow) force a kernel dump; but does not have access to ei
My understanding is that you now get OpenJ9 with the IBM security providers
from the IBM Semeru Runtimes:
https://developer.ibm.com/languages/java/semeru-runtimes/
Yes, this is a colossal PITA after we finally got a multi-arch enabled and
fully optimized Java runtime from AdoptOpenJDK - but appare
at a really bad time.
On Mon, Mar 7, 2022 at 17:36 Mark Post wrote:
> On 3/7/22 09:11, Robert J Brenneman wrote:
> > Yeah - thats them, depending on platform generation. ( z14, z15 )
> > If the virtual functions are configured to the LPAR and configured online
> > from the S
Yeah - that's them, depending on platform generation. ( z14, z15 )
If the virtual functions are configured to the LPAR and configured online
from the SE then they just show up as Mellanox PCI adapter VFs in linux and
you configure them exactly like you do a PCI Ethernet device on x86.
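For example, once the VFs are online you can spot them with ordinary Linux tooling - a sketch, nothing s390x-specific about it (the grep pattern assumes Mellanox silicon, and lspci may not be installed everywhere):

```shell
# The VFs show up as regular PCI NICs alongside everything else.
command -v lspci >/dev/null && lspci | grep -i mellanox || true
ip link show
```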
pci=on kernel
who
don't talk to or like each other, then you want NPIV enabled on all the
linux FCP chpids so that they cannot accidentally or maliciously attach to
each others disk in the SAN.
On Tue, Feb 8, 2022 at 11:31 AM Steffen Maier wrote:
> On 2/8/22 00:16, Robert J Brenneman wrote:
> > IO
IODF devices for FCP do not point to disks - they are simply a hole to dump
frames into the network and pull frames out of the network with. It's a lot
more like OSA devices than DASD. The OSA device is not an endpoint in the
IP network that you are talking to, and neither is the FCP device in th
You don't run the X server process on the remote system; the X server runs
on the system with the video card in it - which is generally the
workstation/pc/laptop you are looking at. The programs running on the
remote server are the X clients, and you tell them where the server is by
setting the D
I thought it would be cool if the hardware platform could emulate a set of
Keyboard/Video/Mouse devices via pretend PCI devices presented into the
LPAR that hook into like a VNC / RemoteDesktop / SPICE software service
that runs on the SE or HMC.
If you virtualize it like pretend PCI devices ( sim
I run a small spectrum scale cluster for my team.
The cool thing about spectrum scale is it does exactly what it says on the
side of the box: you can create a multi writer cluster with a lot of
clients and no single point of failure that can scale in basically any
dimension you need it to.
In my
only thing that pops to mind immediately is to make sure you're not limited
by a single CPU thread. Similar issue can be seen when comparing a single
thread transfer over a HiperSocket vs a dedicated OSA in a CPU
unconstrained system.
Maybe try comparing multiple processes running in parallel ?
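A minimal way to sanity-check the single-thread suspicion - a sketch that fakes the transfer locally with dd (substitute your real transfer command, e.g. scp or iperf, for the stream function):

```shell
# Four concurrent streams vs. one; wrap each variant in `time` (or watch
# your monitor) - if four finish in roughly the same wall time as one,
# a single CPU thread was the bottleneck.
stream() { dd if=/dev/zero of=/dev/null bs=1M count=256 2>/dev/null; }
stream                                       # single stream
stream & stream & stream & stream & wait     # four parallel streams
echo "done"
```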
Red Hat provides podman, which is a whitebox reimplementation of the docker
management tools. You can run docker images in podman and vice versa.
Both podman and docker are the low-level tools to create/destroy/start/stop
individual container instances on container hosting systems.
OpenShift is the
You've got a gig of swap used, and you said %system CPU time is way higher
while the system becomes unusable?
Are you actively swapping during that time when the system is not
responsive? If yes, you need to either add memory or reduce the
memory demand on the system.
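To tell active paging apart from stale pages merely parked in swap, watch the si/so columns while the slowdown is happening - a sketch assuming procps vmstat is installed:

```shell
# Sustained nonzero si (swap-in) / so (swap-out) means the guest is
# actively thrashing, not just holding old pages in swap space.
if command -v vmstat >/dev/null; then
    vmstat 1 3
else
    echo "vmstat not installed"
fi
```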
On Tue, Nov 3, 2020 a
sweet!
On Fri, Oct 30, 2020 at 7:02 PM Neale Ferguson wrote:
> Still a lot of work to be done but this is .NET runtime running on s390x.
>
> $ ~/dotnet/dotnet HelloWorld.dll
> Hello World!
>
> This is built from the Microsoft github dotnet/runtime with a handful of
> build-related patches.
>
> -
^c
On Wed, Oct 7, 2020 at 10:21 AM Frank M. Ramaekers
wrote:
> Thanks...ah, how to stop a ping w/o Ctrl-C? (z/VM console of zLinux)
>
> Frank M. Ramaekers Jr.
> Unisys | Systems Senior Administrator
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Beh
Ah yeah - that was before Neale was at SN - thanks Mark!
On Mon, Jun 1, 2020 at 12:20 PM Mark Post wrote:
> On 6/1/20 10:46 AM, Robert J Brenneman wrote:
> > I think I remember Neale booted up windows something or
> > other under qemu on linux on s390 ... a while ago.
>
> L
If you are already emulating a different architecture ( s390x on top of
amd64 ) then I would imagine that the specifics of the OS running inside
the emulator would not really be an issue at all. The emulator is handling
binary translation of the instruction sequence to an instruction sequence
that
Guys...
Linux is not UNIX. It used to kinda sorta be, but it isn't anymore. The
people who like the UNIX way are a small minority in the Linux community.
Everyone ( besides you ) wants it to go towards the magical MacOS land of
'do what I want without asking me about it all the time'
for example
[deepest sarcasm voice]
excuse me, but the system you refer to as Linux is actually a SystemD
system with a Linux kernel and GNU userspace.
On Thu, Apr 30, 2020 at 11:21 AM Luciano Mannucci
wrote:
> On Thu, 30 Apr 2020 14:42:33 +
> David Boyes wrote:
>
> >somebody please make it stop
>
If anyone has a chance to see Liz do her thing on the conference circuit
you should absolutely take the opportunity to do so.
>
> --
Jay Brenneman
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to
If you're working on open source projects on github a new thing happened:
Public Github projects can now take advantage of Travis build automation to
build s390x, ppc64le, and arm64 binaries in addition to x86. For Free.
ref:
https://blog.travis-ci.com/2019-11-12-multi-cpu-architecture-ibm-power-
Barton - you are correct.
RH only supports 'unlimited' numbers of virtual machines on their dedicated
hypervisor offering which is RHV ( used to be RHEV ). RHV is not yet
available on s390x. I am not a part of the team that can say whether it
will become available for s390x, either, so your guess
aahhh yea. a dark corner of memory gets its first ray of light
in a while
V7000 and other storwize related products ( SVC, FS9xxx ) now do NPIV on
the storage side by default if you are doing a new install. You have to
explicitly enable it if you are upgrading from older versions.
Also be aware that if you want to use NPIV you /must/ have a switch and the
switch must explicitly support NPIV since that is a function that requires
both FCP Host adapter and switch support to work.
On Tue, Sep 17, 2019 at 7:35 AM Jim Elliott wrote:
> Christian:
>
> Thanks. We will go with a s
Is your root fs btrfs? That's the most meaningful difference I can think of
between SLES 11 and 12.
On Wed, Aug 28, 2019 at 9:53 AM Rich Smrcina
wrote:
> You have zVPS. Our zAlert component can detect this situation and notify
> interested parties.
>
> Rich Smrcina
> Sr. Systems Engineer
>
> Vel
OSPF is not to be taken lightly. It introduces new, exciting, and
difficult to debug layers of complexity and there are now better solutions
to 95% of the issues people used to use it to address. ( Link Aggregation /
Bonding / Teaming on the host side and VRRP and fancier virtual router
interfa
If the only thing you want to change is the layer2 mode setting you can do
this as root, from the device's ccwgroup directory (substitute your own
device number for 0.0.1000):
cd /sys/bus/ccwgroup/drivers/qeth/0.0.1000
echo 0 > online
echo 1 > layer2
echo 1 > online
and the interface ought to come online with the same IP address as before.
this will be temporary though and it will go back
no.
On Mon, Jan 14, 2019 at 9:14 AM Terri C. Glowaniak <
terri.glowan...@regions.com> wrote:
> Is anyone running Redhat OpenShift on the z platform? I'm having a hard
> time finding the info on the Redhat website.
>
> Thanks,
> Terri
>
> --
KVM Host bridges require a L2 network interface with the 'bridge_role'
attribute set on the OSA device supporting the bridge.
ref: https://public.dhe.ibm.com/software/dw/linux390/docu/lhs4dd05.pdf
chapter 14, section Layer 2 promiscuous mode on p205.
But you're not using a L2 Vswitch - the 'q lan
Thanks for the gentle reminder about VMWorkshop guys - I've already got
stuff on the docket that I can't back out of for that week though.
It's due for a refresh with current releases in any event - is there
anything you'd like to see in the way of 'stuff to add to a ramdisk to make
recover
I had a linux recovery session at share a couple years ago:
http://linuxvm.org/Present/SHARE110/S9240jb.pdf
the last section covers ripping open the Linux initrd and adding more tools
to it to do bare metal recovery from the ramdisk - the approach is still
valid, but some of the tools used have pr
Check your gskit version :
http://www-01.ibm.com/support/docview.wss?uid=swg1PI90141
There's a known issue with certain gskit versions on z14 where some tasks
hang forever and possibly also drive high CPU utilization while hung.
The fix is an updated gskit version, or a z14-specific workaround on the
sweet! thanks Neale!
On Tue, Jan 23, 2018 at 10:11 PM, Neale Ferguson
wrote:
> Over the past couple of months I’ve worked with the Docker folks to make
> our base ClefOS docker image “official”. This process is now complete and
> it is now available at: https://hub.docker.com/_/clefos/
>
> Neale
I have the following at the very end of my
/etc/udev/rules.d/51-qeth-0.0.1080.rules
file:
>>>
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.1080", ATTR{layer2}="1"
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.1080",
ATTR{buffer_count}="128"
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNE
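For reference, a complete rules file of this shape might look like the following - this is purely illustrative (device number 0.0.1080 and the attribute values come from the fragment above; the final online rule is my own addition showing the usual way to bring the group online last):

```
# Hypothetical 51-qeth-0.0.1080.rules - adjust device number and values.
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.1080", ATTR{layer2}="1"
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.1080", ATTR{buffer_count}="128"
ACTION=="add", SUBSYSTEM=="ccwgroup", KERNEL=="0.0.1080", ATTR{online}="1"
```

Note the udev syntax: `==` is a match, a single `=` assigns the attribute.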
Ancient history: http://www.redbooks.ibm.com/redpapers/pdfs/redp3871.pdf
Without NPIV you're in that same boat.
Even if you had NPIV you would still have to mount the new clone and fix
the ramdisk so that it points to the new target device instead of the
golden image.
This is especially an issue
Your old recipe is still usable with the current version of those Linux
flavors on z14, I think - i.e. if you built the recipe on SLES 11 on an
EC12, then with a SLES 11 at current service on a z14 you could do the
same configuration and get hardware-accelerated https working.
There are chang
y
> the other mechanisms of which you speak. The --ip and --hostname options
> take care of the network configuration. I would be interested in what the
> anti-VOLUMEites propose for things like persistent data.
>
> On 8/4/17, 11:04 AM, "Linux on 390 Port on behalf of
"Linux on 390 Port on behalf of Robert J Brenneman"
> wrote:
>
> >best practice with docker is to keep a docker image absolutely generic so
> >that it can be deployed to dev/test/production without any changes. Any
> >personalization that needs to be
best practice with docker is to keep a docker image absolutely generic so
that it can be deployed to dev/test/production without any changes. Any
personalization that needs to be done to make the docker container do its
job should be either
a) passed to the docker container at startup through env
Apache is generally provided by the distribution ( RedHat / SUSE / Ubuntu )
as part of their base OS. It might be called apache or httpd or something
similar to that in the package name, but it should be there as part of the
core distribution that you can install directly with yum/zypper/apt-get
W
Yeah, works as expected.
These guys have a huge listing of open source projects that they've gotten
ported to s390x:
https://www.ibm.com/developerworks/community/forums/html/topic?id=5dee144a-7c64-4bfe-884f-751d6308dbdf&ps=
--
Jay Brenneman
check for udev rules for the eth1 and eth2 devices in /etc/udev/rules.d
You may need to copy an existing rule set for the new devices and alter the
contents to reflect the correct device numbers.
Also be aware of persistent naming - sometimes other udev rules decide that
eth1 really needs to be r
The /best/ way to edit linux files on 3270 is not to. Plan ahead and make
backup copies of important config files before making changes and rebooting,
so you can just rename the backup back into place.
But when one must, ed is available. You have to think of it as a typewriter
though. It is not capab
we use the issue tracker that comes with GitLab now.
On Mon, Sep 19, 2016 at 2:06 PM, Ed Jaffe
wrote:
> On 9/19/2016 4:04 AM, Michael MacIsaac wrote:
>
>> I downloaded the tgz file and looked at the installation requirments - it
>> requires 'Genshi' which also comes from Edgewall and has a .tgz
How are your devices mounted in /etc/fstab ?
If you are using device UUIDs which are based on hardware tokens those will
change with the new clone since it is using different disk extents.
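One possible workaround when cloning is to mount by a stable path-based identifier instead of UUID, so the clone boots unmodified - an illustrative /etc/fstab line (the device number and filesystem type are made up; this assumes the clone keeps the same device numbers):

```
/dev/disk/by-path/ccw-0.0.0100-part1  /  ext4  defaults  0 1
```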
On Tue, Sep 6, 2016 at 9:32 AM, Michael Weiner
wrote:
> Hi there
>
> Is there anything special that needs
It depends on what you consider to be the bigger problem.
A) having to care upfront about what needs space where is hard. Make / one
huge logical volume and don't worry about space until you need more, then
just grow / online.
B) fixing a machine that won't boot is hard. When /'s contents are sme
Hi All -
We were having a discussion in the test lab here on the customer usage of
guest relocation on VM with SSI - so I decided to hit up the experts.
If you are able to, do you actually use the RELOCATE function ?
If so, how often ? How many at once ?
Do you drive it manually or have you writt
Hmmm... When configuring STP and the HMC to retrieve time from a GPS based
NTP server appliance, I am betting that the GPS time source also already
has the leap seconds applied. In that case - Should we be setting the HMC
leap second value for the STP network to 0 ?
On Tue, Jul 26, 2016 at 4:37 AM
In a heterogeneous environment if your non Z NTP systems are off by roughly
26 seconds compared to your STP managed Z Linux clocks it could be due to
this:
https://access.redhat.com/articles/15145
basically - the default Linux timezone files do not respect leap seconds,
but the long time mainframe
see also:
https://www.ibm.com/developerworks/community/blogs/2280dc86-a78a-441b-89d7-5b4c41595852/entry/Ubuntu_16_04_Xenial_on_KVM_for_z_network_Installation_Guide?lang=en
On Tue, May 31, 2016 at 8:02 AM, Michael MacIsaac
wrote:
> James,
>
> The paper was written under z/VM not KVM so of cours
On Thu, Apr 21, 2016 at 2:00 PM, Grzegorz Powiedziuk
wrote:
> I see. I never used btrfs before so it makes sense.
> So isn't using LVM with btrfs on top of it complicating things more?
>
Basically yes.
The default SLES 12 install does not use LVM though, it only uses the btrfs
functions to
On Thu, Apr 21, 2016 at 12:27 PM, Grzegorz Powiedziuk wrote:
> /dev/mapper/system-root 5.3G 3.0G 2.0G 60% /opt
> /dev/mapper/system-root 5.3G 3.0G 2.0G 60% /home
> /dev/mapper/system-root 5.3G 3.0G 2.0G 60% /boot/grub2/s390x-emu
>
>
> According to this, the same logical volume is mou
In both environments smaller virtual machines are easier to dispatch and
are better guests as far as the hypervisor is concerned. However- on x86
the hypervisor has to do a lot more work to virtualize the IO, and the
platform in general has to do a lot more work for IO, and so everyone adds
system
Ubuntu specifically looks at the machine type during install. They compiled
everything with the EC12 level instructions so it depends on what
instruction set the zPDT supports. If zPDT supports the EC12 instructions
then you should open a bug against Ubuntu to get 1090 added to the
supported machin
It's failing due to LibreOffice ( an office productivity suite ) and
libpurple ( a Sametime chat protocol library ) not being available, but
they are getting pulled in as dependencies of other packages when you
select 'with GUI'
Do you want to run a full desktop environment on your Z system ? If no
First make sure you have enough empty spool space to contain the entire
guest's memory, i.e. for a 1 GB guest have 1 GB of spool available.
Then, from the broken guest's console: #CP VMDUMP
While you wait for that to run, read:
http://public.dhe.ibm.com/software/dw/linux390/docu/l26edt00.pdf
Pay a
some use a big spreadsheet
some nothing at all and just look for free devices and space in the disk
subsystem to create a new LUN, then ask for more when they run out.
some manage the whole thing using home grown automation driven from config
files that live ... somewhere
some use a combination
> Sent: den 11 februari 2016 4:43
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: TSM backup LAN connection problem
>
> On Tuesday, 02/09/2016 at 04:16 GMT, Robert J Brenneman
> wrote:
> > Multi homed linux seems to consider any MAC address as good as any other
> > whe
et.ipv4.conf.eth1.arp_announce = 0
> > net.ipv4.conf.eth2.arp_announce = 1
> >
> > Still no effect.
> > I will take some more tcpdump on all interfaces as you suggested.
> >
> >
> > BR /Tore
> >
> > ________
Multi homed linux seems to consider any MAC address as good as any other
when responding to an ARP. The default seems to assume you have every
ethernet port plugged to the same logical network. If you run tcpdump on
all interfaces for both machines, I bet you see the target of the telnet
request so
We create a user in the VM directory called '191USER' whose sole purpose is
to be a container for Linux system 191 A disks that get shared read only.
This allows us to keep all the CMS PROFILE EXEC automation in one place and
have it shared by all the Linux systems that need it. Since it is owned b
see also: https://www.ibm.com/developerworks/linux/linux390/docker.html
On Tue, Jan 19, 2016 at 1:10 AM, Huckert, James
wrote:
> Thanks John, that is pretty neat will have to go check it out. I have
> never played with GO but will look into it for sure.
>
>
> Thank you
> James H.
>
> -Origin
Are you running under z/VM or in native LPARs ?
If you are running under z/VM what performance tools do you have available
?
On Wed, Oct 28, 2015 at 9:26 AM, Victor Echavarry Diaz <
vechava...@evertecinc.com> wrote:
> Hi people:
>
> We've had several incidents where one Linux server has high CPU
from the guest's console issue #CP VMDUMP to trigger a CP managed dump of
the virtual machine. This dumps to spool space and it shows up in the
guest's rdr device. You can then 'vmur receive --convert
/path/to/where/dump/goes' to import the vmcore to the Linux file system
when you IPL after the du
1) format the disk with something else first. It almost doesn't matter what
- just make a real volume label on cyl 0.
2) just click all the way through the install and take the defaults
for the disk layout.
If that works, open a PMR describing what you were trying to do originally -
this is to pro
If linux is running in an LPAR the scm driver can see the flash attached to
the LPAR as a set of block devices, and you can LVM them together and put
a file system on them if needed.
I use it to create a large space to kdump to in order to make the dump
outage as short as possible, but you can
Another option would be to boot the install media on the system and run
through enough of the process to get TCPIP configured, and then ssh into
the installer environment as root - not as the install user.
At this point, manually echo values into /sys to bring your new storage
unit and LUN online
To put it another way - When running a FCP chpid in Non-NPIV mode - the
demands made on the channel by the CPC and made on the SAN switch by the
channel are very low. You can totally run 256 subchannels in a single LPAR,
or like 1000 something in total across multiple LPARs using a single FCP
port
Please use the native package format of the operating system.
Things like InstallMangler seem like good ideas to product owners since it
allows them to provide the illusion of a simplified install experience that
is common across all platforms, but more often just cause a lot of
confusion which tu
Attachments don't make it through the listserve software, but I can guess
that what is happening to your system is that the relocation involves a
pause while the processor state is relocated to the new target system,
followed by an I/O recovery interrupt.
the FCP / SCSI / Multipath driver stack in
It's there for when you bring Linux up in an LPAR with bajillions of
devices defined, like an old z/OS LPAR for example. The IPL takes forever
as udev enumerates all those devices in /sys and /dev, and then you're
running a system that can touch all the devices which it should not have
access to.
Mother's rule of z/VM Service number 1: Never change anything IBM sends you
Mother's rule of z/VM Service number 2: Never mix your stuff with IBM's
stuff.
In my shop we create a user in the directory named after our department and
set the PW to NOLOG. We give that new user a 191 disk and control a
try cramming it into the existing rescue or install media initrd ?
I did that with a TSM agent and it worked OK. You may have to manually copy
a couple additional libraries into the ramdisk to support the package, but
it //should// work.
On Tue, Oct 14, 2014 at 4:26 PM, Neale Ferguson
wrote:
>
Echoing what M. Post said earlier - please do not use the Marist Linux
distribution for s390 to evaluate Linux on System Z. It does not reflect
the current state of what you get from the Linux distributors any longer.
There have been huge changes to how Linux itself handles enumerating
I/O devi
conversely - if you answer yes here - the installer will do a bunch of
customization for you which is very difficult to undo if it is not to your
liking.
On Mon, Oct 21, 2013 at 3:18 PM, Michael MacIsaac wrote:
> Chris,
>
> Thanks for the feedback.
>
> I got this answer to your question:
> I
try 'lszfcp -H -P -D' to see the Host adapters, then the target Ports, then
the Devices listed out.
On Sun, Aug 25, 2013 at 7:09 PM, Smith, Ann (CTO Service Delivery) <
ann.sm...@thehartford.com> wrote:
> Is there a good doc on san commands with SLES11?
> Trying to see what commands were dropped
What do you mean by replicate?
a) Do you want to move the database entirely from x86_64 to s390x ? I
suspect this can be done using Oracle's native dump and restore utilities.
b) Or do you want a hybrid HA cluster where some nodes are x86_64 and
others are s390x ? I do not believe this is possible.
check these papers for what we did in the Pok test lab:
http://www-03.ibm.com/systems/resources/linux_ha_ospf.pdf
http://www-03.ibm.com/systems/resources/systems_services_platformtest_z_ospfscaling.pdf
http://www-03.ibm.com/systems/resources/ZSW03221-USEN-00.pdf
The first shows our initial OSPF s
Additionally - why does Linux not make better use of the I/O subsystem ?
For example, a z/OS DSF copy job copying a dataset from one volume to
another uses like 6% of a CPU at most, whereas Linux dd or cp uses
100% of a CPU, and doesn't go noticeably faster than the DSF job.
Could Linux make bette
> Dave Stuart
> Prin. Info. Systems Support Analyst
> County of Ventura, CA
> 805-662-6731
> david.stu...@ventura.org
>
>
>
> >>> "Robert J Brenneman" 10/1/2012 2:45 PM >>>
> If you know a little about your network topology you can pi
If you know a little about your network topology you can ping the default
gateway from your linux system, then the next hop out, and so on to see if
you have a connectivity issue or a routing issue.
Are you using vlans? Did something in the vlan config change?
Did the other OSA sharing lpars cha
I relocate my Linux systems and I IPL CMS by default so I can use
SWAPGEN too - what is the actual error message you're getting when you
attempt to relocate ?
--
Jay Brenneman
Have a look at
http://mobile.share.org/client_files/SHARE_in_Atlanta/Session_10327_handout_2560_0.pdf
I've got some working examples of ospf configurations in there, but
they are not exactly what you're trying to do.
The zebra daemon is used to acquire the network interface state from
the kernel,
I second what Rob said - the WWPN tool is designed for this very situation.
> Also, the NPIV definitions need to be carried from the old LPAR to the new
> one. Have a look at the "WWPN tool" (from Resource Link) It should help
> preparing the definitions that allow the SAN changes to be made befor
Can you run multiple parallel CCL processes on the same Linux guest and
attach more OSN devices to it?
--
Jay Brenneman
More generally - the SCSI attached tape requirement is a "SCSI
attached tape LIBRARY requirement".
If you want to talk to stand alone tape drives, either SCSI or FICON
attached drives work just fine. If you want to send commands directly
to the robot that loads the drives, it has to be SCSI. Linux
SELinux used to confuse me and make me angry until I had an epiphany:
It's really just Mandatory Access Controls that override the normal
Unixish permissions. Since I sorta understood the RACFVM MAC concepts
I was able to transfer the ideas over to Linux and suddenly all the
docs & training materia
If the HMC is reporting this connectivity issue it has nothing to do
with the OS.
Your IOCP deck looks fine, so the hardware is configured properly on
your end, unless you made a change to the IOCP and forgot to activate
/ POR that change.
Are the link lights on? Is the fiber known to be good? Do
Obviously something is not the same...
Ask the storage admin to verify that the subsystem port that pchid 113
is connected to is configured as an open systems or FCP port - not a
FICON port.
--
Jay Brenneman
ok - from the maintenance prompt:
lsluns
lsscsi
pvscan
vgscan
pvdisplay
vgdisplay
according to that output, are you missing a volume group or a pv ?
--
Jay Brenneman
Any other error messages earlier in the log? Right after the "Waiting
for driver initialization" message?
--
Jay Brenneman
I've got a TSM server running on Linux under z/VM for backing up
internal test systems and it's happily running in 512 MB of Storage.
If the disk and tape subsystems are fast enough you can make it pretty
small and still be OK. It depends on your daily load. Mine is pretty
light.
--
Jay Brenneman
I thought the whole point of OCFS2 was that it was a cluster aware
file system - so you had multiple guests with write access to the same
media and it would all work out correctly. If one guest goes down the
others in the cluster can continue to provide access to the media, and
so on.
Am I badly m
Oops - re: LUN expansion:
I've done it, but I've always had to drive all the devices completely
out of the SCSI stack and re-add them to get them to rescan and find
the extra space. I don't think Linux normally supports LUN expansion
gracefully.
I'm sure someone will speak up to correct me if I'm
There are two different places to look for the SAN login.
The first is on the FC switch itself, you should be able to see the
NPIV WWPN login on the same port as the channel's normal WWPN.
On the storage controller... e... it's just kinda always worked
for me. But then again, I'm at level 4.some