qeth utilities [equivalent rarp]?

2006-03-31 Thread Johnny Tan
Hi All,

I would like to ask about the qeth utilities. As far as I know, Linux on the
mainframe cannot display the MAC addresses of servers on the same
network/subnet using the standard arp command.

Hence, the equivalent of the "arp" program for Linux on the mainframe is "qetharp".
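
For reference, this is how I list the ARP entries the qeth interface knows
about today (the interface name is from my setup; qetharp ships with the
s390 qeth utilities):

  # show the ARP entries held for the qeth interface eth0
  qetharp -q eth0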

Is there an equivalent "rarp" program for Linux on the mainframe?

Note:
I am running SLES 8.1 SP4 on z/VM 4.4.

Thanks for your attention.




Re: Bonding

2006-02-14 Thread Johnny Tan
Hi David,

I am using SLES 8 SP4. The OSA is attached to z/VM 4.4, and z/VM assigned three
device numbers to the Linux guest to form one NIC.

Given that qetharp is the ARP equivalent on zLinux, is there any network
teaming software equivalent to bonding? A "qethbonding", perhaps?

Cheers.

-- Original Message --
From: "David Boyes" <[EMAIL PROTECTED]>
Date:  Tue, 14 Feb 2006 10:07:06 -0500

>
>> I found that SLES 8 includes the "bonding" driver. It works fine
>> for SLES 8 on Intel. However, I ran into a problem with SLES 8 on S390.
>>
>> Has anyone tried using "bonding" with SLES 8 (SP4) on the S390 platform?
>
>Haven't tried it, but unless you're running very recent network drivers
>(later than SLES8 SP2) and attaching the guests directly to the OSA or a
>VSWITCH, I doubt it'll work. A lot of those tools rely on ARP actually
>functioning all the way to the guest, and until the layer 2 code was
>introduced, OSAs and guest LANs didn't actually allow ARP to work (by
>design -- the ARP function is offloaded into the OSA and not directly
>accessible to the guest).
>
>




Bonding

2006-02-14 Thread Johnny Tan
Hi All,

Recently, I tried to team two network cards so that they act as one network
interface for HA purposes.

I found that SLES 8 includes the "bonding" driver. It works fine for SLES 8 on
Intel. However, I ran into a problem with SLES 8 on S390.
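
For reference, on Intel I brought the bond up with the classic 2.4 ifenslave
method, roughly as below (the mode, miimon value, and addresses are just from
my test box, not recommendations):

  # load the bonding driver in active-backup mode with 100ms link monitoring
  modprobe bonding mode=1 miimon=100
  # give the bond an address and enslave the two real NICs
  ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
  ifenslave bond0 eth0 eth1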

Has anyone tried using "bonding" with SLES 8 (SP4) on the S390 platform?

/var/log/warn has the following entry:
kernel:  qeth: eth2: not enough headroom in skb. Increasing the add_hhlen 
parameter by 30 may help.
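
I have not tried raising add_hhlen yet. If I read the message right, something
like the line below should add the missing headroom, though the exact
mechanism is a guess on my part (on the 2.6 qeth driver it appears as a sysfs
attribute; on my 2.4 SLES 8 system it would presumably go through the chandev
configuration instead):

  # untested guess: the device number 0.0.0600 is from my setup
  echo 30 > /sys/bus/ccwgroup/drivers/qeth/0.0.0600/add_hhlen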

I am not sure whether "bonding" is meant to work with qeth devices as well; it
works well on standard eth interfaces. It is just like the arp command, which
finds the MAC addresses of ethX interfaces but not of qeth-based ones; for
qeth, you need the qetharp command.

Note:
I am using an OSA-Express card as the network interface card on the z890.






Re: LVM for system fs in production?

2005-09-21 Thread Johnny Tan
Hi,

We did encounter problems (filesystem corruption) when using reiserfs on
SLES 8 (31-bit) early in the installation. The corruption was resolved by
moving to a newer kernel (i.e., 2.4.21-278) and applying the latest reiserfs
RPM; after that, the reiserfs corruption could be repaired.
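
For anyone hitting the same thing, the repair itself was along these lines
(the device name is from our setup; run it only against an unmounted
filesystem):

  # check first, then let reiserfsck repair what it safely can
  reiserfsck --check /dev/dasdb1
  reiserfsck --fix-fixable /dev/dasdb1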




z/VM Minidisk Cache. Still relevant today?

2005-09-04 Thread Johnny Tan
Hi,

I have a question here on z/VM minidisk cache.

By default, z/VM minidisk cache is enabled. As far as I know, minidisk cache
uses z/VM real memory and expanded storage. Would anyone recommend using
minidisk cache on a z/VM system whose main purpose is to support Linux guests?

Today's DASD technology has fast write, which means a write to DASD no longer
goes directly to the disk; instead it goes to the cache (memory) of the DASD
controller. In view of this fast-write feature, is minidisk cache still
relevant? If so, does z/VM minidisk cache predict open-systems I/O behaviour
well, given that open-systems I/O patterns are random rather than sequential,
compared to MVS?

Here are some statistics (by default, no MDC limits are set on storage or
xstore):

q mdcache
Minidisk cache ON for system
Storage MDC min=0M max=10240M, usage=55%, bias=1.00
Xstore MDC min=0M max=2048M, usage=65%, bias=1.00

Total memory allocation to Linux guests = 7GB.
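
In case it helps the discussion, these are the knobs I am considering, as I
understand them (the limits shown are placeholders, not recommendations):

  CP SET MDCACHE STORAGE 0M 1024M   (cap MDC's use of central storage)
  CP SET MDCACHE XSTORE 0M 0M       (keep MDC out of expanded storage)

and, per minidisk, a MINIOPT NOMDC statement after the MDISK line in the user
directory to turn caching off for that one disk.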

Thanks for your attention. 






V=R?

2005-07-19 Thread Johnny Tan
Hi

I have a question about running Linux on z/VM and about z/VM memory
management.

My current environment (production)

Total real memory = 12GB
Total page volumes = +- 5GB
Total memory allocation to Linux servers (guests) = 7104MB (+- 7GB)
Paging usage = 40 - 50% (2GB - 2.5GB)
XSTORE = 2GB (recently added after noticing that z/VM was paging even though
we have enough real memory)

Main storage used by z/VM is only about 5GB (meaning that at any one time,
2GB or more is paged out)
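
For completeness, the paging numbers above come from the usual CP displays:

  CP INDICATE LOAD        (overall storage and paging load)
  CP QUERY ALLOC PAGE     (allocation and usage of the paging space)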

Operating Systems:
z/VM 4.4
SLES 8.1 SP4 (31-bit)

I observed that by default the memory mode for a Linux guest is V=V, so z/VM
takes control of memory management. Since I have enough real memory, why does
z/VM still page? Is there any way to avoid paging on z/VM? I would prefer to
give the Linux servers real memory (V=R). Any comments on this? How would I
set it in the user directory, and what fine-tuning can be done?
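
Short of true V=R, one alternative I have read about is CP's reserved-page
support. If I understand the command correctly, something like the line below
would keep roughly 1GB of a guest's pages resident (the userid and page count
are examples; 262144 pages x 4KB = 1GB):

  CP SET RESERVED LINUX01 262144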

I also heard that V=R will no longer be supported in z/VM 5.1. Is that true?

Thanks for your attention.

Regards,
Johnny Tan 
