z/VM 540 - Simulate I/O error

2016-09-29 Thread Mariusz Walczak
Hello,

I recently installed a PTF for EREP on a second-level z/VM system. I'm looking
for a way to test the PTF. Does anyone know how to simulate a hardware error
and force EREP to write records?
So far I tried...
 - writing to a file simultaneously from 2 users
 - attempting to write to a read-only mdisk
 - writing to a file in a loop until the mdisk runs out of space

...but EREP doesn't catch that.

Best regards,
Mariusz

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: z/VM 540 - Simulate I/O error

2016-09-30 Thread Mariusz Walczak
This solution worked. Thank you, Alan!
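
For the archive, a minimal sketch of the kind of test Alan describes, with a
hypothetical guest name and device number (exact command classes and operands
are in the CP Commands and Utilities Reference):

  From the first-level system, detach a device out from under the guest
  that runs the second-level z/VM:

    CP DETACH 0200 FROM VMTEST

  Then, on the second-level system, drive I/O against that device, confirm
  error recording is active, and pull an EREP report:

    CP QUERY RECORDING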

---
Regards
Mariusz

2016-09-30 9:27 GMT+02:00 Alan Altmark :

> You would have to use a 2nd level system and detach minidisks and such
> that the 2nd level system is using.
>
> EREP gathers device I/O errors and CU notifications.
>
> Regards,
>   Alan



Accounting on zVM

2017-05-25 Thread Mariusz Walczak
Hello Community,

We are running z/VM 6.4. Does anyone know of a tool or manual for processing
accounting records on z/VM? The files are accumulating on our systems and I'd
like to create a report from them. However, the data from column 29 onward is
not human readable (""b"}a"") and I'm not sure how to process it.


Thanks!
Mariusz



Re: Accounting on zVM

2017-06-14 Thread Mariusz Walczak
Thank you for all the replies. I will make use of this knowledge.
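
For the archive: the fields from column 29 onward in a CP accounting record
are binary counters (connect time, CPU time, and so on), which is why they
look like line noise in an editor - the exact field layout is in the z/VM CP
Planning and Administration book. A CMS Pipelines sketch along the lines Dave
suggests below, untested and assuming the records were saved as ACCOUNT DATA
on the A-disk:

  PIPE < ACCOUNT DATA A | take 5 | spec 1-8 1 29-32 c2d nextword 33-36 c2d nextword | console

(user ID from columns 1-8, then two of the binary fields converted to
decimal).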

Mariusz

2017-05-25 17:39 GMT+02:00 Alan Altmark :

> On Thursday, 05/25/2017 at 02:40 GMT, Dave Jones 
> wrote:
> > For quick and dirty z/VM accounting reports, I find CMS Pipelines very
> > useful. It can even connect to the CP *ACCOUNT service to produce
> > reports in real time as data arrives.
>
> But let's not put the cart before the horse.  If you don't need the
> accounting data, don't collect it.
>   CP RECORDING ACCOUNT OFF PURGE
>
> If memory serves, CP will remember this across IPLs.
>
> Alan Altmark
>
> Senior Managing z/VM and Linux Consultant
> Lab Services System z Delivery Practice
> IBM Systems & Technology Group
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile: 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott
>



TSM client - DSMC interface hung

2018-03-09 Thread Mariusz Walczak
Hello,

We are running TSM server v7.1.8 and TSM clients v6.2.4 on zLinux SLES 11.4.
I recently updated the TSM clients to v7.1.8. Of the 32 updated servers, 14
suffer from an issue where I cannot issue any command through the DSMC
interface. Any command hangs DSMC for 10 to 25 minutes, and if I wait out the
hang, I get the response. The scheduler, DSMCAD, works without issues. If I
downgrade the client to v6.4.2, the issue is gone. A trace shows it stops
somewhere in the IBM GSKit toolkit - GSKitPasswordFile.cpp:

02/23/18   10:10:47.385 [016035] [2285885216] : GSKitPasswordFile.cpp(1517): 
IDX file is ../TSM.IDX
02/23/18   10:34:29.145 [016035] [2285885216] : psPasswordFile.cpp  (1879): 
psOpenPswdFile(): file name is '../TSM.sth', mode is 'ab+'

Any ideas what is happening?


Thanks,
Mariusz




Re: TSM client - DSMC interface hung

2018-03-14 Thread Mariusz Walczak
Hello Jay,

Thanks for the hint. Indeed, the problem is gone when I moved the zLinux guest
to a zEC12. I have a PMR open and will post the solution once we fix this.



Thanks again ,
Mariusz


-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Robert J 
Brenneman
Sent: Friday, March 9, 2018 5:03 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: TSM client - DSMC interface hung

Check your GSKit version:
http://www-01.ibm.com/support/docview.wss?uid=swg1PI90141

There's a known issue with certain GSKit versions on z14 where some tasks hang
forever and possibly also drive high CPU utilization while hung.

The fix is an updated GSKit version, or a z14-specific workaround on the
current version if the software product has not yet delivered an update with
the new GSKit version included. The details of the workaround vary by software
product - open a PMR against the TSM client and they will steer you towards
the best solution in your case.




--
Jay Brenneman



Re: TSM client - DSMC interface hung

2018-03-21 Thread Mariusz Walczak
Hello,

If you are planning to run a TSM client newer than v6.4.3 on a z14, ask IBM to
provide the gskcrypt and gskssl packages at level 8.0.50.84 or later. That
fixed the issue.
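
A quick way to check which GSKit level is installed on SLES (a sketch; the
package names shown are the ones we saw and may vary by product level):

  rpm -qa | grep -i gsk
  # e.g. gskcrypt64-8.0.50.84-...  and  gskssl64-8.0.50.84-...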


Thanks,
Mariusz




zLinux with HyperPAV

2018-03-22 Thread Mariusz Walczak
Hello,

I'm trying to set up HyperPAV on our SLES 11.4 systems. From the documentation
I read that I must define a full-pack minidisk in one of these forms:

MDISK basevdev devtype DEVNO rdev mode
MDISK basevdev devtype 0 END volser mode

I used DEVNO and I believe I set everything up correctly (I see the alias
device in lsdasd -u). However, when I run dasdfmt on the base device, I lose
the VM label from the DASD. Is there a way to preserve the label on cylinder
0? We reference the labels in other programs.


Regards,
Mariusz
Infrastructure Support
zVM zLinux



Re: zLinux with HyperPAV

2018-03-23 Thread Mariusz Walczak
Hello,

Thank you for the replies. The dasdfmt -l option preserved the label.

@Alan, your explanation is interesting, but we reference those DASD labels on
z/OS for backup purposes. If I understood correctly, you recommend forgetting
about the labels on VM and using real device numbers, which is a no-go for us.


Thanks for help,
Mariusz
Infrastructure Support
zVM zLinux

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Davis, 
Jim [PRI-1PP]
Sent: Thursday, March 22, 2018 7:14 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: zLinux with HyperPAV

When you use dasdfmt in Linux, add the -l volser option to reapply the VM
volume label.

A full-pack minidisk formatted with dasdfmt will look just like a z/OS volume
with a VTOC containing 1-3 data sets, and it can be backed up by DFDSS on a
z/OS system.
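
For example (a sketch - the device node, block size and volser below are
placeholders; check the dasdfmt man page for your s390-tools level):

  dasdfmt -b 4096 -d cdl -y -l LX3005 /dev/dasdaa

-l sets the volume serial written to cylinder 0, and -d cdl keeps the
compatible disk layout that z/OS tools expect.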



LVM volume size

2018-04-23 Thread Mariusz Walczak
Hello group,

Recently we had a DASD failure, and I had to delete and recreate a logical
volume in LVM. We had two model-9 DASDs defined as two logical volumes (in
different volume groups). I used lvcreate -l 1760 to make sure the number of
extents is the same as before. When I mount it, the df command shows a 110 MB
difference from the old volume, even though the number of allocated PEs in
LVM is the same.
Volume db4lv was created by LVM2 version 2.02.39 back in 2014... volume db1lv
was created by LVM2 version 2.02.98(2). They have to match in size.
Any idea what the reason for this could be?

Filesystem              1K-blocks    Used  Available  Use%  Mounted on
/dev/mapper/DB4VG-db4lv   7208008  978512    6229496   14%  /DB4
/dev/mapper/DB1VG-db1lv   7095808   15828    6719532    1%  /DB1


# vgs -a -o +devices
  VG    #PV #LV #SN Attr   VSize VFree Devices
  DB1VG   1   1   0 wz--n- 6.88g    0  /dev/dasdae1(0)
  DB2VG   1   1   0 wz--n- 6.88g    0  /dev/dasdaa1(0)


Bus-ID     Status  Name    Device  Type  BlkSz  Size     Blocks
==============================================================================
0.0.0305   active  dasdaa  94:104  ECKD  4096   7042MB   1802880
0.0.0309   active  dasdae  94:120  ECKD  4096   7042MB   1802880

# pvdisplay
  --- Physical volume ---
  PV Name   /dev/dasdae1
  VG Name   DB1VG
  PV Size   6.88 GiB / not usable 2.41 MiB
  Allocatable   yes (but full)
  PE Size   4.00 MiB
  Total PE  1760
  Free PE   0
  Allocated PE  1760

  --- Physical volume ---
  PV Name   /dev/dasdaa1
  VG Name   TSM1DB2VG
  PV Size   6.88 GiB / not usable 2.41 MiB
  Allocatable   yes (but full)
  PE Size   4.00 MiB
  Total PE  1760
  Free PE   0
  Allocated PE  1760
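
(To narrow it down, I would compare the LV and filesystem geometry with
something like the following - a sketch, assuming the filesystems are
ext3/ext4:

  lvdisplay --units k DB4VG/db4lv DB1VG/db1lv   # confirm the LVs are byte-identical
  tune2fs -l /dev/mapper/DB4VG-db4lv            # compare block count, inodes, journal size
  tune2fs -l /dev/mapper/DB1VG-db1lv

If the LV sizes match, the df difference presumably comes from mkfs metadata
defaults that changed between versions, not from LVM.)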



Thanks,
Mariusz
Infrastructure Support
zVM zLinux
Deutsche Börse Services s.r.o.
Sokolovská 662/136b
186 00 Prague 8
Czech Republic
+420 2964 28581
+420 737208624




zLinux authentication on windows AD LDAP

2019-03-29 Thread Mariusz Walczak
Hello Group,

We are running zLinux SLES 12 SP3 on z/VM. I'm looking for a way to
authenticate user logons to a zLinux server against Windows AD LDAP.
I configured the LDAP client to point to the Windows LDAP server. Then I used
ldapsearch to query for the id user01 - I got results. I used the YaST "auth"
module to "test connection" - the bind was successful.
Then I tried to log on as user01, which is not defined locally on Linux -
only in AD.
SSH returns the error "sshd: input_userauth_request: invalid user  [preauth]".

Is it technically possible to authenticate logons with Active Directory LDAP?
I've heard rumors this might be a problem, because AD users do not carry
POSIX attributes.
Has anyone tried to authenticate against AD?

Thanks in advance,
Mariusz



Re: zLinux authentication on windows AD LDAP

2019-04-11 Thread Mariusz Walczak
Hello,

FYI: to make the Linux LDAP client work with AD, I had to add POSIX
attributes (uid, gid, uidNumber, etc.) to my AD user. I configured the LDAP
client using sssd on SLES 12 and I'm now happily authenticating against AD.
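
A minimal sketch of the resulting sssd.conf - every host name, domain and DN
below is a placeholder; the bind account and search base depend on your AD
layout:

  [sssd]
  config_file_version = 2
  services = nss, pam
  domains = EXAMPLE

  [domain/EXAMPLE]
  id_provider = ldap
  auth_provider = ldap
  ldap_schema = ad
  ldap_uri = ldaps://ad.example.com
  ldap_search_base = dc=example,dc=com
  ldap_default_bind_dn = cn=svc-linux,ou=service,dc=example,dc=com
  ldap_default_authtok = changeme

Plus enabling sssd in NSS/PAM (pam-config or the YaST auth client does this).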

Thanks for help,
Mariusz

Mon, Apr 1, 2019 at 16:19 Alan Altmark  wrote:

> On Monday, 04/01/2019 at 08:21 GMT, "Harder, Pieter"
>  wrote:
> > Until 2 years ago our AD was 2003. And that was a really big headache.
> And I
> > think they dropped the last win2003 servers quite recently.
> > Since moving to a more recent AD the win guys have been debating moving
> off
> > NTLM. But it seems there are some oldish applications that don't talk
> Kerberos
> > and require NTLM.
> > Anyway, it's not my problem. But I thought I would just mention it when
> I saw
> > your statement, in case anybody else does have NTLM still active.
>
> To your original question, though, many clients have integrated LDAP-based
> clients with AD.  As David said, AD is just a variation of LDAP.  If all
> you need is authentication, then it's supposedly pretty straightforward
> (I've never personally done it).
>
> Ignoring the specific application (ITM), I found this to be helpful in
> understanding how LDAP fits into AD:
>
> https://www.ibm.com/support/knowledgecenter/en/SSTFXA_6.3.0/com.ibm.itm.doc_6.3/adminuse/msad_ldap_beforeyoubegin.htm#msad_ldap_beforeyoubegin__tepuser
> .  Mostly I was happy because it had screen shots.  :-)  It may be that AD
> administration for LDAP clients is more integrated into the AD admin tools
> than is shown.
>
> Alan Altmark
>
> Senior Managing z/VM and Linux Consultant
> IBM Systems Lab Services
> IBM Z Delivery Practice
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile: 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott
>
>
>



Re: Quick question on single user mode

2020-03-11 Thread Mariusz Walczak
Hello Chris,

If you are on z/VM, boot SLES into installation mode (from the VM reader,
using the installer image and a parm file). A temporary root password and an
IP address are specified in the parm file. Then you connect via SSH, activate
the DASD, mount it, and chroot (see the SLES Admin Guide, "repair
bootloader").
Regards.
M.

On Wed, 11 Mar 2020, 12:30 Alan Altmark,  wrote:

> Please copy and paste text, not images.
>
> Regards,
> Alan Altmark
> IBM
>
> > On Mar 11, 2020, at 7:17 AM, Will, Chris  wrote:
> >
> > Here is all we see, never get a menu.  Root password does not work.
> >
> > [cid:image001.gif@01D5F621.645C6C70]
> >
> > 
> > From: Linux on 390 Port  on behalf of ITschak
> Mugzach 
> > Sent: Wednesday, March 11, 2020 2:29 AM
> > To: LINUX-390@VM.MARIST.EDU 
> > Subject: Re: Quick question on single user mode
> >
> >
> >
> > Cris,
> >
> > I had the same issue here. As I wrote to you, if you type E on the boot
> > selection menu, and edit the line displayed, you get into single root
> user
> > mode without specifying a password. see, for example,
> >
>
> http://chriscientific.blogspot.com/2016/03/suse-sles-12-single-user-mode-w.html
>
> >
> > ITschak
> > ITschak Mugzach
> > | IronSphere Platform | Information Security Continuous Monitoring
> > for z/OS, x/Linux & IBM i | z/VM coming soon
> >
> >
> >
> >
> >> On Wed, Mar 11, 2020 at 1:02 AM Will, Chris  wrote:
> >>
> >> Thanks. We are having issues booting a guest, and all we see are messages
> >> about rdsosreport.txt.  I was hoping that single user mode would
> >> bypass the enter-root-password prompt, but it does not.  Our root
> password
> >> vault product must have not synced properly since I cannot get into the
> >> shell to see what the issue is.  All started when expanding a SLES 12
> SP4
> >> XFS file system that contained the /var file system.
> >>
> >> 
> >> From: Linux on 390 Port  on behalf of
> Edgington,
> >> Jerry 
> >> Sent: Monday, March 9, 2020 12:38:29 PM
> >> To: LINUX-390@VM.MARIST.EDU 
> >> Subject: Re: Quick question on single user mode
> >>
> >>
> >>
> >> If you are running under z/VM, then you can shut down the server, mount
> >> the root mini-disk to another Linux server, correct the configuration
> issue
> >> and restart the server.
> >>
> >> -Original Message-
> >> From: Linux on 390 Port  On Behalf Of Will,
> Chris
> >> Sent: Monday, March 9, 2020 12:29 PM
> >> To: LINUX-390@VM.MARIST.EDU
> >> Subject: Quick question on single user mode
> >>
> >>
> >> Have a broken SLES 12 server.  I believe at one point there was an option
> >> on the "ipl 100" to go into single user mode.  Any help?
> >>
> >> Chris Will
> >> Enterprise Linux/UNIX (ELU)
> >> (313) 549-9729 Cell
> >> cw...@bcbsm.com
> >>
> >>
> >>

Overcommitting zLinux CPU

2020-07-31 Thread Mariusz Walczak
Hello Group,

We have 4 IFLs on the mainframe box, 4 logical IFLs in the z/VM LPAR, and 4
vCPUs on a zLinux guest. I'd like to gain zLinux capacity (run more
processes) and increase to 16 vCPUs. Will I degrade the performance of
single-threaded workloads if I do this?

All the best,
Mariusz



Re: Overcommitting zLinux CPU

2020-08-01 Thread Mariusz Walczak
Hello Mark,

Yes, that is what I meant.
Is there a way to increase zLinux capacity without buying physical IFLs for
the mainframe? Can I make use of LPAR and z/VM virtualization features to
achieve this?

I have already learnt that enabling SMT on z/VM doubled our logical CPU count
(from 4 to 8) on the hypervisor. After enabling the feature, CPU utilization
on the mainframe dropped from 95% to 60%. On the other hand, I'm wondering
whether it negatively affected single-threaded workloads?
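
(For reference, the SMT state can be checked from z/VM with

  CP QUERY MULTITHREAD

which reports whether multithreading is enabled and the activated thread
count per core type - the exact response text varies by z/VM level.)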

All the best
Mariusz


On Sun, 2 Aug 2020, 00:34 Mark Post,  wrote:

> On 7/31/20 7:46 PM, Mariusz Walczak wrote:
> > We have 4 IFL on Mainframe box, 4 IFL on zVM and 4 cpu on zLinux. I'd
> like
> > to gain zLinux capacity (run more processes) and increase to 16 cpu.
> Will I
> > degrade performance of a single threaded workload if I do this?
>
> Do you mean you want to specify 16 virtual CPUs for one Linux guest,
> while not changing the number of actual CPUs available to the LPAR?
> That's usually a bad idea.
>
> If you're talking about something different, please say so.
>
>
> Mark Post



Re: Overcommitting zLinux CPU

2020-08-02 Thread Mariusz Walczak
The motivation for increasing vCPUs on zLinux was to increase the capacity of
our OpenShift cluster. We have 3 worker nodes with 4 vCPUs each = 12 vCPUs on
the cluster. This allows us to run 7 Postgres databases in parallel (1 vCPU
each). We would like to have 24 vCPUs on the cluster without degrading the
performance of the containers.
How can we achieve this?
- Increasing vCPUs per zLinux guest is not an option (thank you, Alan, for
the explanation).
- If I deploy 3 more worker nodes with 4 vCPUs each, we will have 24 vCPUs of
capacity, but isn't that the same as increasing the vCPUs on the current
worker nodes?

Do we have to buy more IFLs?

All the best,
Mariusz

Sun, Aug 2, 2020 at 05:37 Alan Altmark  wrote:

>
> A physical core has a certain amount of “horsepower” in it.  It can, at top
> speed, do X amount of work.
>
> In SMT, you split the core in half, creating two execution contexts (CPUs)
> instead of just one.  The two CPUs share resources on the physical core,
> but the total horsepower doesn’t increase. In fact, it gets a little
> smaller in the sense that the core must now spend cycles managing the two
> CPUs (threads) on it.   Some workloads need more threads. Other workloads
> need faster CPUs. So you choose between SMT (threads) or non-SMT (speed).
> Knowing which is best means measuring workload response times.
>
> You have 4 cores.  Four execution contexts.  All of the virtual CPU in all
> of the LPARs, plus the work the Control Program does for itself, all must
> run on the 4 cores. If a guest has 16 virtual, you are absolutely 100%
> guaranteed that at least 12 of them will NOT be running at any given
> moment.   Plus you’ve added workload to CP to manage all of those virtual
> CPUs.
>
> All of your LPARs fight for those same 4 CPUs. LPAR hypervisor creates
> logical CPUs, each getting horsepower according to the weighting of the
> LPAR.
>
> For these reasons, you generally do not want to have more virtual CPUs in a
> guest than you have logical CPUs to run them on.
>
> Regards,
> Alan Altmark
> IBM
>
> > On Aug 1, 2020, at 8:00 PM, Mariusz Walczak  wrote:
> >
> > Hello Mark,
> >
> > Yes, that is what I meant.
> > Is there a way to increase zLinux capacity without buying physical IFL on
> > Mainframe? Can I make use of LPAR and zVM virtualization features to
> > achieve this?
> >
> > I already learnt that enabling SMT on zVM doubled our vCPU count (from 4
> to
> > 8) on Hypervisor. After enabling the feature, CPU utilization On
> Mainframe
> > dropped from 95% to 60%.  On the other hand, I'm wondering if it
> negatively
> > affected single threaded workloads?
> >
> > All the best
> > Mariusz
> >
> >
> >> On Sun, 2 Aug 2020, 00:34 Mark Post,  wrote:
> >>
> >>> On 7/31/20 7:46 PM, Mariusz Walczak wrote:
> >>> We have 4 IFL on Mainframe box, 4 IFL on zVM and 4 cpu on zLinux. I'd
> >> like
> >>> to gain zLinux capacity (run more processes) and increase to 16 cpu.
> >> Will I
> >>> degrade performance of a single threaded workload if I do this?
> >>
> >> Do you mean you want to specify 16 virtual CPUs for one Linux guest,
> >> while not changing the number of actual CPUs available to the LPAR?
> >> That's usually a bad idea.
> >>
> >> If you're talking about something different, please say so.
> >>
> >>
> >> Mark Post
> >>
> >> --
> >> For LINUX-390 subscribe / signoff / archive access instructions,
> >> send email to lists...@vm.marist.edu with the message: INFO LINUX-390
> or
> >> visit
> >>
>
> https://urldefense.proofpoint.com/v2/url?u=http-3A__www2.marist.edu_htbin_wlvindex-3FLINUX-2D390&d=DwIBaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=XX3LPhXj6Fv4hkzdpbonTd1gcy88ea-vqLQGEWWoD4M&m=5qmeqOqJ_S3Xrj8kpDz10wUxZzgyAQQV3Oj2C8I2R2M&s=7zg8_fQyMzoAA0_qnz5cgRWfEOX_D2E_bwfMTEhYBAU&e=
>
> >>
> >
> > --
> > For LINUX-390 subscribe / signoff / archive access instructions,
> > send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> >
>
> https://urldefense.proofpoint.com/v2/url?u=http-3A__www2.marist.edu_htbin_wlvindex-3FLINUX-2D390&d=DwIBaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=XX3LPhXj6Fv4hkzdpbonTd1gcy88ea-vqLQGEWWoD4M&m=5qmeqOqJ_S3Xrj8kpDz10wUxZzgyAQQV3Oj2C8I2R2M&s=7zg8_fQyMzoAA0_qnz5cgRWfEOX_D2E_bwfMTEhYBAU&e=
>
> >
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www2.marist.edu/htbin/wlvindex?LINUX-390
>



Re: Overcommitting zLinux CPU

2020-08-03 Thread Mariusz Walczak
Hello,

I also thought that enabling SMT would degrade the performance of a
single-threaded process, but I could not find anyone to confirm that. Thank
you, Christian.

Alan:
Postgres is not constrained by having only 4 cores; it runs fine with 1 vCPU.
The problem is the overall OpenShift cluster CPU count. Right now we have 3
worker nodes with 4 vCPUs each = 12 vCPUs. We can schedule at most 7 Postgres
workloads.

Our test shows that, running 7 identical Postgres workloads on OpenShift
simultaneously, we get to 66% IFL utilization on the CEC (we have 4 IFLs).
This means there is still some "horsepower" available on the mainframe,
right?

How do we properly use z/VM and zLinux virtualization technology to increase
the OpenShift cluster capacity from 12 vCPUs to 24 vCPUs without degrading
performance? Which of the approaches below is correct?
- 6 worker nodes (zLinux) with 4 vCPUs each?
- 1 worker node with 24 vCPUs? (Let's forget about the single point of
failure for now.)

I'm trying to understand whether it makes any difference which option I take.
In the end it is vCPUs fighting for real CPU time on the CEC (and with option
2 I avoid the extra OS overhead and OS management).

All the best,
Mariusz



Mon, Aug 3, 2020 at 08:29 Christian Borntraeger  wrote:

> On 02.08.20 06:36, Alan Altmark wrote:
> >
> > A physical core has a certain amount of “horsepower” in it.  It can, at
> top
> > speed do X amount of work.
> >
> > In SMT, you split the core in half, creating two execution contexts
> (CPUs)
> > instead of just one.  The two CPUs share resources on the physical core,
> > but the total horsepower doesn’t increase. In fact, it gets a little
> > smaller in the sense that the core must now spend cycles managing the two
> > CPUs (threads) on it.   Some workloads need more threads. Other workloads
> > need faster CPUs. So you choose between SMT (threads) or non-SMT (speed).
> > Knowing which is best means measuring workload response times.
>
> To add some more detail: the sum of both SMT threads is usually larger
> than one single thread. This is because a CPU has many execution units
> (floating point, fixed point, etc.), and it can only execute things where
> all dependencies are resolved. So several units sit idle, e.g. after a
> wrong branch prediction, until the pipeline has enough things in the
> out-of-order window again. With SMT there are now 2 independent dependency
> tracking streams that can make use of the execution units.
>
> So as a rule of thumb: IF you have enough parallel threads and you are
> bound
> by overall capacity, enabling SMT is usually a win for throughput. The z15
> technical guide says 25% on average over single thread. As an example that
> could mean instead of 100% you get for example 60% + 65%.
>
> What Alan was pointing out is that this of course DOES have an impact on
> latency: when one single thread only gets, let's say, 65%, the latency is
> larger.
> So you balance latency vs throughput. And if you only have one thread on
> the
> whole system, then this thread would be faster without SMT.
>
> Now, as latency might also depend on the question "do I get resources at
> all",
> I also think that for highly virtualized systems with many guests and
> overcommitment of CPUs, SMT is usually a win as z/VM or KVM have more
> threads to
> distribute load. There is even some math for queueing systems that can show
> reduced wait time for an idealized workload.
>
> There are cases where SMT is even worse, though. Some workloads really
> go to the limit of the execution units, and if you have two such workloads,
> splitting the overall number of, let's say, rename registers might actually
> hurt performance, so that the sum is smaller than just one single thread.
> The
> CPUs
> got better over time and from z13->z14 and from 14 to z15 we identified
> several
> of these cases and improved the CPUs. So on z14 and z15 SMT is a win most
> of
> the time.
>
>
> [...]
> > For these reasons, you generally do not want to have more virtual CPUs
> in a
> > guest than you have logical CPUs to run them on.
>
> Absolutely. Having more virtual than physical never makes sense apart from
> corner
> cases like testing. But this statement is mostly for a single guest or a
> single
> LPAR.
>
> The sum of all virtual cpus can be higher as long as there is enough idle
> time,
> e.g. not all cpus run 100% all the time.
> For example with 4IFLs and SMT you can have 8 vcpus active at the same
> time. With
> lots of idle systems that could also mean lets say 100 guests with 1vcpu
> each.
>


zVM VSwitch networking

2020-10-09 Thread Mariusz Walczak
Hello,

I have the following two network configuration scenarios. For scenario 1, is
it correct to assume that the physical network infrastructure attached to the
mainframe is not involved in the communication between the two zLinux hosts,
and that the traffic is handled at the z/VM level?

Scenario 1:
2x zLinux on VSWITCH (option Layer-2), same VLAN, connected to same VSWITCH
(same CHPID) on the same zVM LPAR.

Scenario 2:
2x zLinux on VSWITCH (option Layer-2), same VLAN, running in 2 different
Datacentres , on 2 different LPARs.

All the best,
Mariusz



Elasticsearch and Openshift on zVM - Suffering from CPU steal ?

2021-02-06 Thread Mariusz Walczak
Hello,

I hope someone can share their experience or shed some light on the problem.
I will refer to Elasticsearch as "ES" in this email.

We are running 2 OpenShift clusters on 1 z/VM LPAR (16 logical CPUs, SMT2).

Cluster 1 (development workload):
3x Master node (each zLinux, 8 vCPUs) (VSWITCH-1, VLAN 1)
3x Worker node (each 10 vCPUs) (VSWITCH-1, VLAN 1)
1x Infra node (6 vCPUs) (VSWITCH-1, VLAN 1) ("ES" ON) (high CPU use)

Cluster 2 (no workload, just "ES" ON):
3x Master node (each zLinux, 4 vCPUs) (VSWITCH-2, VLAN 2)
4x Worker node (each 4 vCPUs) (VSWITCH-2, VLAN 2)
2x Infra node (6 vCPUs) (VSWITCH-2, VLAN 2) ("ES" ON on each) (high CPU use)

Problem:
With "ES" OFF on both clusters, the batch time of APP1 is ~600 seconds.
With "ES" ON on both clusters, the batch time is ~1200 seconds.

Symptoms:
- high CPU steal on the zLinux nodes (top) with Elasticsearch active
- bad network response (git clone, downloading images)
- CPU steal drops if we shut down Elasticsearch
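
(The steal numbers come from top; roughly the same can be sampled with, for
example:

  vmstat 5 3    # the last column, "st", is percent steal
  sar -u 5 3    # %steal column, if sysstat is installed

both as seen from inside the guest.)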


With "ES" ON: zVM perfkit LPAR CPU at ~60% . CEC IFL usage 40%.

Where do you expect the bottleneck and what is causing high CPU steal on
zLinux nodes ?

Some more info - there is Fluentd pod running on every cluster node and is
sending log data constantly (quite big amounts) to Infra node
(elasticsearch)



IBM gave a tip that, CPU steal is accounted to zLinux when VSWITCH is
processing network requests for this zLinux. If so, how can we solve this ?

- run guests on Direct Attached OSA ?

- split Nodes to different VSWITCHES ? (currently all nodes + DB running on
1 vswitch same VLAN)

- ?


The application we are using for testing, is split into 6 microservices
(processes). PROC1 is used to read file and insert data to DB. I saw that
CPU time (top) of PROC1 accounted when file is processed = 10 sec, but the
real time we have to wait to see this work done is from 60-300 seconds
(depending if ES is running)



So far everyone is just saying "you need more IFLs". But why do I need more
IFLs if I'm using 40% of CEC IFL capacity ?





Thanks you,

Mariusz



Re: Elasticsearch and Openshift on zVM - Suffering from CPU steal ?

2021-02-17 Thread Mariusz Walczak
Hello,

Unfortunately, I won't be able to share monitor data due to company rules. I
started by reducing the number of vCPUs, and this indeed helped application
performance. I also increased the relative SHARE of the worker nodes, where
the application is running, and set a CPU pool cap on the infrastructure node
(where ES is running). This also helped application performance - the
application gets CPU time more often (on the other hand, ES is now suffering
and would like more CPU).
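
(The vCPU reduction followed Michel's chcpu tip below - taking CPUs offline
inside Linux, without a CP DETACH; the CPU numbers are examples:

  lscpu --extended    # list CPUs and their online state
  chcpu -d 3          # take vCPU 3 offline in Linux
  chcpu -e 3          # bring it back online later
)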

Michel, to answer your questions:
1) Is there any contention or resource wait for disk I/O (where are the
files/data)? - At the moment Elasticsearch is writing to an NFS share
(unfortunately). DASD response is 1.55 for 151 I/Os (interim monitor interval
30 minutes). But bad ES performance is not my concern; my concern is bad
application performance while ES is active.

2) How about z/VM memory, Linux memory and paging? - z/VM memory is 77%
utilized, infra node memory is at 50%, worker node memory ~30%. No paging is
occurring (on z/VM or Linux), and there is no swap partition defined on any
OpenShift node.

3) Any network contention? - How can I check that? This is actually my
biggest concern, because ES and Prometheus/Grafana send a lot of data between
the worker and infra nodes, plus they write data to NFS (persistent storage).


Thank you for the help so far,
Mariusz

Sat, Feb 6, 2021 at 18:51 Michel Beaulieu  wrote:

> In my experience, Elastic Search (ES) is resource intensive: CPU, memory,
> disk I/O and network.
> Is there any contention or resource wait for disk i/o (where are the
> files/data)?
> How about z/VM Memory, Linux memory and paging?
> Any network contention?
> ==
> About steal in Linux: if Linux is not using all of its vCPUs at a high
> percentage, reducing the number of vCPUs may help.
> It is easier to add a vCPU, which needs no Linux reboot.
> To reduce the vCPU count you can use chcpu; just don't CP DETACH it from
> the Linux VM, as that forces a reset.
> ==
> Michel Beaulieu
> IBM Services (Canada)
> /* Comments expressed here are my own and do not engage my company in any
> way */
>
> 
> From: Linux on 390 Port  on behalf of Rob van
> der Heij 
> Sent: February 6, 2021 12:18 PM
> To: LINUX-390@VM.MARIST.EDU 
> Subject: Re: Elasticsearch and Openshift on zVM - Suffering from CPU steal
> ?
>
> On Sat, 6 Feb 2021 at 12:45, Mariusz Walczak wrote:
>
> > So far everyone is just saying "you need more IFLs". But why do I need
> > more IFLs if I'm using 40% of CEC IFL capacity?
>
>
> Anyone interested in z/VM performance would indeed want to study monitor
> data from the two situations. I’m no smarter to try without either.
>
> IIRC Elastic Search is mostly Java, which likes CPU and is normally
> multithreaded. But Java also assumes it has the virtual CPUs to itself. If
> you spread very thin, so that you have lots of virtual CPUs competing, Java
> finds each virtual CPU running only for a short time and uses only part of
> your capacity. I'd start by reducing the number of virtual CPUs in your
> OCP worker nodes.
>
> Rob
>
