Re: Undeleting files

2011-12-15 Thread Rafael Godinez Perez
El 14/12/11 23:00, Tom Duerbusch escribió:
 While I know the answer to this question generally, I wonder if this can be 
 done in a very well-defined situation.

 I have disk /dev/dasdb1, formatted with ext3.
 There is one directory on it.
 That directory had about 40 files on it of a few megabytes each.
 This is SLES 10 SP 2.

 I connected to the Linux image with WinSCP.
 I brought up that directory in one pane, and in the other pane I brought up my 
 thumb drive.
 I wanted to copy the files to my thumb drive.

 Instead of copying the files, I thought syncing the directories would be 
 easier.
 Well, I synced an empty directory to the Linux directory.  All files are gone.

 In most cases, recovering deleted files is very dependent on whether any of the 
 space or directory structure has been reused.  In this case, the space hasn't 
 been reused, but I don't know if the deletion of 40 files, one at a time, 
 would reuse the directory blocks or just mark them as available.

 Before I go too far with this:
 Am I just out of luck?
 Or is there a decent chance I can recover these files?

 Thanks

 Tom Duerbusch
 THD Consulting

 --
 For LINUX-390 subscribe / signoff / archive access instructions,
 send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
 http://www.marist.edu/htbin/wlvindex?LINUX-390
 --
 For more information on Linux on System z, visit
 http://wiki.linuxvm.org/
You may want to try this tool.
It has worked for me many times.

http://www.cgsecurity.org/wiki/PhotoRec_Step_By_Step
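In case it helps to understand why this works even after the directory entries are gone: PhotoRec recovers files by "carving", i.e. scanning the raw device for known file signatures instead of trusting filesystem metadata. A toy Python illustration of the idea (this is not PhotoRec itself, and real carvers know many signatures and validate internal structure):

```python
# Toy file carver: scan a raw image for JPEG start/end markers.
JPEG_SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpegs(image):
    """Return (start, end) byte offsets of JPEG-like regions."""
    regions = []
    pos = 0
    while True:
        start = image.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = image.find(JPEG_EOI, start + len(JPEG_SOI))
        if end == -1:
            break
        regions.append((start, end + len(JPEG_EOI)))
        pos = end + len(JPEG_EOI)
    return regions

# Fake "disk image": free space, one deleted JPEG, more free space.
image = b"\x00" * 100 + JPEG_SOI + b"fake jpeg body" + JPEG_EOI + b"\x00" * 50
print(carve_jpegs(image))  # one carved region starting at offset 100
```

Because carving never consults the directory blocks, it doesn't matter whether the deletion reused them; the downside is that recovered files lose their names.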

HTH,
Rafa.

-- 
Rafael Godínez Pérez
Red Hat - Senior Solution Architect EMEA
RHCE, RHCVA, RHCDS
Tel: +34 91 414 8800 - Ext. 68815
Mo: +34 600 418 002

Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, Planta 3ºD, 
28016 Madrid, Spain
Dirección Registrada: Red Hat S.L., C/ Velazquez 63, Madrid 28001, Spain
Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941



Fedora 16 for IBM System z 64bit official release

2011-12-15 Thread Dan Horák
There was again a longer delay after the Fedora 16 release for the
primary architectures than we would like to see, but at least we were
faster than with Fedora 15, and so here we are.

As of today, the Fedora IBM System z (s390x) Secondary Arch team proudly 
presents the Fedora 16 for IBM System z 64bit official release!

The links to the actual release are here:

http://secondary.fedoraproject.org/pub/fedora-secondary/releases/16/Fedora/s390x/

http://secondary.fedoraproject.org/pub/fedora-secondary/releases/16/Everything/s390x/os/

and obviously on all mirrors that mirror the secondary arch content.

The first directory contains the normal installation trees as well as
one DVD ISO with the complete release.

Everything as usual contains, well, everything. :)


Additional information about known issues, the current progress and plans
for future releases, where and how the team can be reached, and just about
anything else related to IBM System z on Fedora can be found here:


http://fedoraproject.org/wiki/Architectures/s390x/16
for architecture specific release notes

and 

http://fedoraproject.org/wiki/Architectures/s390x
for the general information.


Thanks go out to everyone involved in making this happen!


Your Fedora/s390x Maintainers

-- 
Dan Horák, RHCE
Senior Software Engineer, BaseOS

Red Hat Czech s.r.o., Purkyňova 99, 612 45 Brno



Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-15 Thread Jan Glauber
On Wed, 2011-12-14 at 13:40 +0100, Rob van der Heij wrote:
 On Wed, Dec 14, 2011 at 1:20 PM, Jan Glauber j...@linux.vnet.ibm.com wrote:

  Since there seems to be collective disapproval of the requirement to
  touch the memory settings for large disks I'm looking into changing
  that...

 You don't get a random sample of the impacted audience  ;-)   While
 the ideal world has no limitations, no rain when you walk the dog and
 good performance every day, you can't always have it. You probably do
 need to document the limitation and have a useful error message. But
 if it hurts, then don't do it.

Thanks Rob ;-) I've indeed documented this limitation for our new Device
Drivers book, p. 485:
http://public.dhe.ibm.com/software/dw/linux390/docu/l3n1dd13.pdf

But I guess everybody including myself prefers things that work
out-of-the box...

Jan


 I think we can agree that CMS mini disks of 20 GB is beyond common
 usage and intended purpose of cmsfs. In fact, CMS will show its
 limitations as well when you try to do something with it. When looking
 at where to put your energy, I have a dozen other things that seem far
 more significant to make the platform work...

 Rob (from the peanut gallery)





Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-15 Thread Jan Glauber
On Wed, 2011-12-14 at 11:39 -0700, Mark Post wrote:
  On 12/14/2011 at 07:40 AM, Rob van der Heij rvdh...@gmail.com wrote:
  I think we can agree that CMS mini disks of 20 GB is beyond common
  usage and intended purpose of cmsfs.

 Is 2.3GB too much to ask?  That was my situation, and I found it rather 
 irritating even then.

Well, it kind of depends on the distribution's settings:

h4245006:~ # cat /etc/SuSE-release
SUSE Linux Enterprise Server 11 (s390x)
VERSION = 11
PATCHLEVEL = 1

h4245006:~ # ulimit -v
816320

I can't imagine cmsfs-fuse is the only program that will hit that limit
by creating a memory mapping bigger than ~800MB.
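For what it's worth, the effect is easy to reproduce without cmsfs-fuse at all. A sketch (Linux-specific, using /proc and an artificially lowered address-space limit in place of a small ulimit -v):

```python
import mmap
import resource

def vm_size_bytes():
    """Current virtual memory size of this process (Linux /proc)."""
    with open("/proc/self/statm") as f:
        pages = int(f.read().split()[0])
    return pages * resource.getpagesize()

# Mimic a small "ulimit -v": leave only ~256 MiB of headroom.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
limit = vm_size_bytes() + 256 * 1024 * 1024
if hard != resource.RLIM_INFINITY:
    limit = min(limit, hard)
resource.setrlimit(resource.RLIMIT_AS, (limit, hard))

# Now a large anonymous mapping fails just like cmsfs-fuse's mmap
# of a big minidisk does.
try:
    m = mmap.mmap(-1, 1 << 30)  # try to map 1 GiB
    m.close()
    failed = False
except OSError as e:
    failed = True
    print("mmap failed:", e)

resource.setrlimit(resource.RLIMIT_AS, (soft, hard))  # restore the limit
```

Any program that maps more than the remaining address-space allowance will see the same ENOMEM, which is the point: the limit is per process, not per mapping.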

Jan


 Mark Post





Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-15 Thread Jan Glauber
On Wed, 2011-12-14 at 11:43 -0700, Mark Post wrote:
  On 12/14/2011 at 07:20 AM, Jan Glauber j...@linux.vnet.ibm.com wrote:

  I can easily replace the mmap-memcpy with pread/pwrite. Unfortunately I
  see huge performance drops if I do so. Currently I'm looking into why the
  system call variant costs so much more than mmap.

 Define huge performance drops.  I don't know that most people are going to 
 be reading large files from a CMS minidisk.  They're more likely to have 
 large disks with lots of reasonably sized files.  I suspect that some people 
 are wanting to read very large files, but I don't know how many people, or 
 how large the files are.

I've seen read performance go down by a factor of ten by using pread vs.
memcpy. But that measurement no longer seems worth the bytes I'm
wasting on it: today everything is fast (enough) again, regardless of pread or
memcpy. I suspect there was something fishy with the disk I was using.

While I would not like to introduce something that costs a factor of 10,
I think CMS EDF is not a high-performance file system, and neither is
cmsfs-fuse, for various reasons.

 Would it be possible to use pread for the FST, and then mmap-memcpy the 
 actual file being read?  I would find it far more understandable to need more 
 memory to read a very large file than I would to just mount the file system.

That would not be possible, since files (and also the FSTs and the
allocation map) can consist of (nearly) any blocks on the disk, so
they are not contiguous. And mmap'ing single blocks would not be worth
the effort.

Jan


 Mark Post





Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-15 Thread Rob van der Heij
On Thu, Dec 15, 2011 at 2:05 PM, Jan Glauber j...@linux.vnet.ibm.com wrote:

 I can't imagine cmsfs-fuse is the only program that will hit that limit
 by creating a memory mapping bigger than ~800MB.

You could not even run FF8 with that, I fear...  Maybe that's the
reason the JVM unzips all the files into private memory before use
;-)

I think the really big stuff does not rely on the defaults of the
distribution. Things like Oracle come with their own setup
instructions, for example to set ulimit -v to allow for 4G mapped; I
guess you need to raise that even further when you grow the SGA beyond
that.

Rob



Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-15 Thread Jan Glauber
On Wed, 2011-12-14 at 16:27 -0500, Richard Troth wrote:
 I suggest two things:
 First, draw a line (this would be a somewhat arbitrary number).  If
 the disk is less than (for example) 256M (roughly 350 cyls of 3390),
 then continue to use mmap/memcpy.  If larger, then switch to
 pread/pwrite.  Second, provide a mount option so that the user can
 always specify which model they want.  (With the option to be
 explicit, there is little reason for any of us to complain about where
 you draw that line.)

I like that idea of using both methods, but I don't like the artificial
limit.

What I would like to do is the following:

1. mmap the disk (as before)
2. check for the -ENOMEM case
   2a. mmap was successful - go on to use memcpy in the read/write functions
   2b. mmap failed - use pread/pwrite in the read/write functions

That way we can avoid any changes to the memory settings, since pread/pwrite
will also work with very little memory. If the sysadmin sets ulimit -v,
cmsfs-fuse will still work. And in case there is enough memory, it can be
used to cache the disk operations.
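A minimal sketch of that try-mmap-then-fall-back logic (in Python for brevity; cmsfs-fuse itself is C, and all names below are made up for illustration):

```python
import errno
import mmap
import os
import tempfile

class DiskReader:
    """Serve reads from an mmap of the whole 'disk' when the mapping
    succeeds, and transparently fall back to pread when it does not."""

    def __init__(self, path):
        self.fd = os.open(path, os.O_RDONLY)
        size = os.fstat(self.fd).st_size
        try:
            # Step 1: mmap the disk, as before.
            self.map = mmap.mmap(self.fd, size, prot=mmap.PROT_READ)
        except OSError as e:
            # Step 2b: mmap failed with ENOMEM - fall back to pread.
            if e.errno != errno.ENOMEM:
                raise
            self.map = None

    def read(self, offset, length):
        if self.map is not None:
            # Step 2a: mmap was successful - copy out of the mapping.
            return self.map[offset:offset + length]
        return os.pread(self.fd, length, offset)

# Demo on a small temporary "disk" image.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"CMS1 fake disk contents")
    path = f.name
reader = DiskReader(path)
print(reader.read(5, 9))
os.unlink(path)
```

Callers never need to know which path is in use, which is what makes the fallback invisible to the memory settings.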

Is everybody OK with that approach, or do you still see the need for an
additional switch?

Jan

 -- R;   
 Rick Troth
 Velocity Software
 http://www.velocitysoftware.com/





 On Wed, Dec 14, 2011 at 07:20, Jan Glauber j...@linux.vnet.ibm.com wrote:
  On Fri, 2011-12-02 at 14:05 -0600, David Boyes wrote:
   It's just mmap'ing the whole disk into the process's address space for
   the programmer's sake.
 
  BAD idea (in the sense of 'broken as designed'). You're taxing the virtual 
  memory system in a shared-resource environment, which hurts the entire 
  environment for a little convenience in programming.
 
   If that turns out to be a problem we could theoretically go back to
   pread/pwrite. But I'm not sure how many users have such large CMS disks?
 
  Please do. Doesn't matter if it's an edge case, it shouldn't do this.
 
  Since there seems to be collective disapproval of the requirement to
  touch the memory settings for large disks I'm looking into changing
  that...
 
  I can easily replace the mmap-memcpy with pread/pwrite. Unfortunately I
  see huge performance drops if I do so. Currently I'm looking into why the
  system call variant costs so much more than mmap.
 
  Jan
 
 
 





Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-15 Thread Jan Glauber
On Wed, 2011-12-14 at 12:29 -0600, David Boyes wrote:
  I can easily replace the mmap-memcpy with pread/pwrite. Unfortunately I
  see huge performance drops if I do so. Currently I'm looking into why the
  system call variant costs so much more than mmap.

 An alternative approach would be to mmap *only* the minidisk label and FST on 
 access of the minidisk, and then mmap individual files as you need them. I'm 
 not objecting to the use of mmap, just the gratuitous use of mmap on the 
 whole disk rather than what you actually *need* at that moment.

No, that won't work: files and the FST (and also the allocation map) can be
spread across any blocks of the disk. And mmap'ing single blocks
together is complex and not worth the effort.

Please see the other thread for my proposal...

Jan

 It's like the old Anton Chekhov quote about what's necessary in a play: if 
 there's a pistol on stage, you need to have killed someone by Act II, Scene 
 1. Otherwise, take it out -- it didn't need to be there.



Re: Porting old application- numeric keypad

2011-12-15 Thread McKown, John
 -Original Message-
 From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On 
 Behalf Of Henry Schaffer
 Sent: Wednesday, December 14, 2011 9:19 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: Porting old application- numeric keypad
 
 On Wed, Dec 14, 2011 at 10:06 PM, John McKown 
 joa...@swbell.net wrote:
  On Wed, 2011-12-14 at 18:34 -0500, Smith, Ann (ISD, IT) wrote:
  Took a while to figure out what they had done.
  Keep telling them to stop pulling everything over from HPUX server.
 
 
  I do have another question for folks.
  They seemed to have used the dircmp command a lot - in 
 particular 'dircmp -d'.
  It appears that this Linux distro does not have dircmp.
  Trying to find an equivalent for SLES10.
 
 
  I am not an expert in any way, shape, or form. But that's never stopped
  me from talking. <grin>
  Looking here: http://nixdoc.net/man-pages/hp-ux/man1/dircmp.1.html
  <quote>
  -d      Compare the contents of files with the same name in both
                    directories and output a list telling what must be
                    changed in the two files to bring them into agreement.
                    The list format is described in diff(1).
  </quote>
  it appears to me that GNU diff would do some similar functions.
 
  diff dir1 dir2
 
 diff does compare files
 
 dir1 and dir2 need to be files - but they are directories
 
 even if the contents of the directories are put into files, e.g.
 ls dir1 > dir1-file
 then the diff will show how the two directories differ, and not say
 anything about the contents of files with the same name in both
 directories
 
   I think I've seen references to a script which will go through the
 steps described above for 'dircmp -d'.
 
 --henry schaffer

Have you tried? I assure you that I have compared entire directories of files 
using diff dir1 dir2. Example from my Linux/Intel desktop:

[tsh009@it-mckownjohn2 zos]$ ls -ld sys1.dev1.vtampds sys1.lih1.vtampds
drwxr-xr-x 2 tsh009 TSHG 4096 Dec  2 07:46 sys1.dev1.vtampds
drwxr-xr-x 2 tsh009 TSHG 4096 Dec  2 07:46 sys1.lih1.vtampds
[tsh009@it-mckownjohn2 zos]$ diff sys1.dev1.vtampds/ sys1.lih1.vtampds/
Only in sys1.lih1.vtampds/: ADVFLOGM
diff sys1.dev1.vtampds/ASMCOS sys1.lih1.vtampds/ASMCOS
16c16
< //SYSINDD  DSN=SYS1.DEV1.VTAMPDS(ISTSDCOS),DISP=SHR
---
> //SYSINDD  DSN=SYS1.LIH1.VTAMPDS(ISTSDCOS),DISP=SHR
diff sys1.dev1.vtampds/ASMMODE sys1.lih1.vtampds/ASMMODE
1c1
< //TSH010MT JOB (H0I),'D.STREET---TECH',CLASS=Z,
---
> //ASMMODE  JOB (H0I),'D.STREET---TECH',CLASS=Z,
3c3
< // NOTIFY=TSH010
---
> // NOTIFY=SYSUID
22c22
< //SYSLMOD  DD  DISP=SHR,DSN=SYS1.LIH1.VTAMLIB
---
> //SYSLMOD  DD  DISP=SHR,DSN=SYS1.DEV1.VTAMLIB
Only in sys1.lih1.vtampds/: ASMUSS
diff sys1.dev1.vtampds/ASMUSSXB sys1.lih1.vtampds/ASMUSSXB
1c1
< //TSH010UB JOB (H0I),'D.STREET---TECH',CLASS=Q,
---
> //TSH010UB JOB (H0I),'D.STREET---TECH',CLASS=Z,
20c20
< //SYSINDD  DSN=SYS1.DEV1.VTAMPDS(USSTXBSC),DISP=SHR
---
> //SYSINDD  DSN=SYS1.LIH1.VTAMPDS(USSTXBSC),DISP=SHR
26c26
< //SYSLMOD  DD  DISP=SHR,DSN=SYS1.DEV1.VTAMLIB
---
> //SYSLMOD  DD  DISP=SHR,DSN=SYS1.LIH1.VTAMLIB
diff sys1.dev1.vtampds/ASMUSSXS sys1.lih1.vtampds/ASMUSSXS
1c1
< //TSH010US JOB (H0I),'D.STREET---TECH',CLASS=Q,
---
> //TSH010US JOB (H0I),'D.STREET---TECH',CLASS=Z,
7,9c7,9
< //** MEMBER USSTXBSO OLD UICI SCREEN
< //** MEMBER USSTXBSC NEW HEALTHMARKETS SCREEN
< //** COPY MEMBER USSTXBSN TO USSTXBSC AFTER TESTING OK
---
> //** MEMBER USSTXSNO OLD UICI SCREEN
> //** MEMBER USSTXSNN NEW HEALTHMARKETS SCREEN
> //** COPY MEMBER USSTXSNN TO USSTXSN1 AFTER TESTING OK
20c20
< //SYSINDD  DSN=SYS1.DEV1.VTAMPDS(USSTXSN1),DISP=SHR
---
> //SYSINDD  DSN=SYS1.LIH1.VTAMPDS(USSTXSN1),DISP=SHR
26c26
< //SYSLMOD  DD  DISP=SHR,DSN=SYS1.DEV1.VTAMLIB
---
> //SYSLMOD  DD  DISP=SHR,DSN=SYS1.LIH1.VTAMLIB
Only in sys1.lih1.vtampds/: ASSMOLD
diff sys1.dev1.vtampds/AUSS sys1.lih1.vtampds/AUSS
5c5
< //**   ASSEMBLE USS TABES FOR DEV1 SYSTEM
---
> //** ASSEMBLE USS TABES FOR ESA SYSTEM
16c16,17
< //SYSINDD  DSN=SYS1.DEV1.VTAMPDS(USSTXBSC),DISP=SHR
---
> //SYSINDD  DSN=SYS1.LIH1.VTAMPDS(USSTXBSC),DISP=SHR
> //*YSINDD  DSN=SYS1.LIH1.VTAMPDS(USSTXSN1),DISP=SHR
Only in sys1.lih1.vtampds/: EFGTPX
Only in sys1.lih1.vtampds/: IBMGWN
Only in sys1.lih1.vtampds/: IEBNCPLD
Only in sys1.lih1.vtampds/: JES2MOD1
Only in sys1.lih1.vtampds/: LU6262
Only in sys1.lih1.vtampds/: MODETAB
Only in sys1.lih1.vtampds/: MODETABA
Only in sys1.lih1.vtampds/: MODETABO
diff sys1.dev1.vtampds/MODETABP sys1.lih1.vtampds/MODETABP


and more, but I don't want to overload the example. I don't know whether this is 
exactly what dircmp -d does, but it is what I would expect from the documentation 
that I found on the web.

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets®

9151 Boulevard 26 . N. Richland Hills . TX 76010
(817) 255-3225 phone . 
john.mck...@healthmarkets.com . www.HealthMarkets.com


Re: Porting old application- numeric keypad

2011-12-15 Thread Smith, Ann (ISD, IT)
They have tried diff.
It has some of the functions, but apparently not all that dircmp -d provides.


Re: Porting old application- numeric keypad

2011-12-15 Thread Malcolm Beattie
Smith, Ann (ISD, IT) writes:
 They have tried diff.

As John says, GNU diff, as available on any Linux, provides a lot
of powerful functionality, including recursion support (specified
explicitly via the -r option). I'd encourage anyone using diff to
use the -u option to produce unified-diff output, which is much
more human-readable and provides more context; patching software
uses that context to behave more robustly when applying patches
to slightly modified files. Using diff to do diff -ur dir1 dir2
and the like is something I do fairly frequently, and I've never
found any glaring omissions in its functionality.

 It has some of the functions, but apparently not all that dircmp -d provides.

What functionality do they think is missing from GNU diff?
I wouldn't be surprised if education plus possibly some minor
pre/post-processing with other utilities solved their problems.

--Malcolm

--
Malcolm Beattie
Mainframe Systems and Software Business, Europe
IBM UK



Re: Running top on second level RHEL5

2011-12-15 Thread David Boyes
 Quick question.  When you run top on RHEL5 linux on systemz, is it accurate
 or is it a best guess?

It's mostly not very useful. It reflects only the usage when the virtual 
machine is dispatched. I/O counts are mostly OK in that it's a reflection of 
virtual I/O requests (does not reflect that CP may have done something smart 
with the I/O to make it more efficient -- think about VDISKs for a moment). 
Memory usage is from the perspective of inside the virtual machine (doesn't 
reflect impact on the total machine).

Note also that running top keeps the virtual machine active, so in this case, 
the person running top is actually making it worse by trying to measure how bad 
it is. 8-)



Re: Running top on second level RHEL5

2011-12-15 Thread Rob van der Heij
On Thu, Dec 15, 2011 at 4:46 PM, David Boyes dbo...@sinenomine.net wrote:
 Quick question.  When you run top on RHEL5 linux on systemz, is it accurate
 or is it a best guess?

 It's mostly not very useful. It reflects only the usage when the virtual 
 machine is dispatched. I/O counts are mostly OK in that it's a reflection 
 of virtual I/O requests (does not reflect that CP may have done something 
 smart with the I/O to make it more efficient -- think about VDISKs for a 
 moment). Memory usage is from the perspective of inside the virtual machine 
 (doesn't reflect impact on the total machine).

You dismiss CPU usage too quickly as mostly OK - while vague, it is probably
still not useful.

* On recent kernels, the Linux CPU measurements show only virtual time;
they miss the overhead. Thanks to virtualization hardware, you get
close. But in this configuration, with Linux running on second-level
z/VM, the overhead is significant (read: high). So top inside that
Linux will not show you the amount of CPU resources consumed to run
the guest.

* Most people using top don't really care about what they use, but
about what they don't use. The assumption is that what you don't use
is still available to use. But not in a virtualized environment...

You can't have it all, and in this case you can't have either of them.
For the full picture you need a performance monitor that combines data
from inside and outside the virtual machine.
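One half-measure from inside the guest, on kernels new enough to report it (2.6.11 and later, if I remember correctly): the time the hypervisor runs something else while the guest is runnable shows up as the "steal" field in /proc/stat, so you can at least see what top's percentages are silently leaving out. A quick sketch:

```python
def cpu_times():
    """Parse the aggregate 'cpu' line from /proc/stat (Linux)."""
    with open("/proc/stat") as f:
        fields = f.readline().split()
    names = ["user", "nice", "system", "idle",
             "iowait", "irq", "softirq", "steal"]
    return dict(zip(names, (int(v) for v in fields[1:1 + len(names)])))

t = cpu_times()
total = sum(t.values()) or 1
# Ticks during which the hypervisor ran something else:
print("steal ticks:", t.get("steal", 0))
print("steal share: %.2f%%" % (100.0 * t.get("steal", 0) / total))
```

This only quantifies one of the two gaps; the full picture still needs a monitor with data from outside the virtual machine.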
-- 
Rob van der Heij
Velocity Software
http://www.velocitysoftware.com/



Re: zFCP issue with RHEL5

2011-12-15 Thread David Boyes
  Any particular reason for the different name? It seems like a gratuitous
 change that really doesn't have any useful purpose.
 
 One reason is historical and the second, more important, is that s390utils is
 a collection of not only s390-tools but also other s390-related tools (cmsfs,
 src_vipa, libzfcphbaapi and a few other small
 scripts/tools) and the previous maintainers wanted all of them in one
 package.

How annoying. Wouldn't it have been easier to have meta-packages that just 
prereqed the appropriate other packages...? *sigh*
It's just that this is one more thing to document that makes running in a mixed 
environment a PITA.

OK, thanks. 

-- db





Re: Running top on second level RHEL5

2011-12-15 Thread Scott Rohling
It depends on what you're trying to get from top.  If it's to find out the
highest users of CPU relative to others within the same virtual guest,
it's fine.  If you're trying to compare this with something outside of
that virtual guest, or are looking for an accurate overall CPU usage for
the guest, it's not.

Scott Rohling

On Wed, Dec 14, 2011 at 11:25 AM, Shumate, Scott scshum...@bbandt.com wrote:

 Quick question.  When you run top on RHEL5 linux on systemz, is it
 accurate or is it a best guess?


 Thanks
 Scott





Re: Porting old application- numeric keypad

2011-12-15 Thread David Boyes
 I am not an expert in any way, shape, or form. But that's never stopped me
 from talking. <grin> Looking here: http://nixdoc.net/man-pages/hp-ux/man1/dircmp.1.html
 <quote>
 -d  Compare the contents of files with the same name in both
     directories and output a list telling what must be
     changed in the two files to bring them into agreement.
     The list format is described in diff(1).
 </quote>
 it appears to me that GNU diff would do some similar functions.
 
 diff dir1 dir2

Rsync would probably also work (and actually provide a way to DO the changes as 
well). 



Re: zFCP issue with RHEL5

2011-12-15 Thread Dan Horák
David Boyes píše v Čt 15. 12. 2011 v 10:44 -0600: 
   Any particular reason for the different name? It seems like a gratuitous
  change that really doesn't have any useful purpose.
  
  One reason is historical and the second, more important, is that s390utils
  is a collection of not only s390-tools but also other s390-related tools
  (cmsfs, src_vipa, libzfcphbaapi and a few other small scripts/tools), and
  the previous maintainers wanted all of them in one package.
 
 How annoying. Wouldn't it have been easier to have meta-packages that just 
 prereqed the appropriate other packages...? *sigh*
 It's just that this is one more thing to document that makes running in a 
 mixed environment a PITA.

On a system installed with the standard installer you always have the base
s390utils/s390-tools packages pre-installed (even all of them, if I see
correctly). I plan to move some components into their own package(s), and in
the end we could rename the remaining package back to s390-tools.


Dan



Re: zFCP issue with RHEL5

2011-12-15 Thread David Boyes
 on a system installed with standard installer you always have the base
 s390utils/s390-tools pre-installed (even all if I see correctly). I plan to 
 move
 some stuff to own package(s) and at the end we could rename the remaining
 package back to s390-tools.

I think that would be very nice. 

Currently it can be hard to trace where something comes from, and particularly 
for appliance creators, it's nice to be able to pare off all that stuff cleanly 
with RPM transactions if you don't need/want it. 


Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-15 Thread David Boyes
 While I would not like to introduce something that costs a factor of 10, I
 think CMS EDF is not a high-performance file system, and neither is
 cmsfs-fuse, for various reasons.

Natively, EDF is pretty good as single-user OSes go. CMS VSAM... well, there's
a tale for others to tell. 
 
 That would not be possible since files (and also FST's and the allocation map)
 could consist of (nearly) any block on the disk so they are not contiguous.
 And mmap'ing single blocks would not be worth the effort.

I guess I don't understand this comment. Mapping the label and running the FST 
to determine what blocks to map seems worthwhile, since you could use common 
code to mmap the blocks of files you want to read into a contiguous memory area 
for efficient access. I don't see why that couldn't be a contiguous struct in 
Linux space regardless of where the blocks actually are on the disk.  For R/O 
access, you generally don't care about updates to the FST until the next access 
of the minidisk (standard CMS semantics), and for R/W, you have transactional 
control of the blocks updated, and you need to update the FST anyway, so you 
want to be able to run that chain efficiently to find the right places to 
update.

Look at how DIAG 250 works in the CP Programming Services manual to get an idea 
of a way to approach this with minimum memory usage. The way DIAG 250 handles 
it (essentially assemble a list of block operations (read AND write) and a 
memory buffer to populate, then execute the I/O to fill the buffer, then hand 
it back to the application logic to parse the memory buffer) is extremely 
efficient on memory and CPU, works nicely on LPAR and VM, and would work 
equally well for the minidisk access routines and file access (both read-only 
and read-write).  The FST gives you sufficient information to determine the 
size of the buffer needed to mmap the file you're accessing (in Mark's example 
case of a large disk with lots of small files on it -- probably the most common 
use case IMHO -- you probably want one or maybe two files off that disk). Note: 
I'm talking about only the I/O list logic in the DIAG 250 description -- I am 
explicitly *not* asking you to do a clone of DIAG 250. 8-)

That gives you a use-only-what-you-need approach, and then the size of the 
minidisk doesn't ever matter. 

-- db
 



ext3...

2011-12-15 Thread Offer Baruch
Hi,

This is not a z/Linux or z/VM question, but let me ask you guys something
about ext3... I guess that most of you are using it in production...
How is it possible that I am using ext3 in my production systems and
face stuff like:
1. Corrupted FS during normal work that needs to be fixed with fsck or
worse restore from a backup
2. Resizing a FS requires me to fsck before I resize (as if the FS does not
trust itself to be valid forcing me to umount the FS before a resize)
3. Resizing a FS offline actually corrupts the FS
4. The fstab parameters, that states that it is normal to fsck your FS
every boot or every several mounts...
5. FS is busy although it is not mounted or in use by anyone...
6. fuser command will not always show the using processes
7. open files can be removed without any warning from the rm command.
8. removing files from the FS will not free up space in the FS

I can go on with Linux stuff that bothers me, but let's stick to ext3, and I
guess some of my issues might not be accurate.
I am a z/OS system programmer and maybe I am expecting too much, but
even Windows doesn't have this kind of stuff anymore...
I am using Red Hat V5.2 (not too old) and was recently asked by my local
Red Hat representative to upgrade my kernel to V5.6 (2.6.18-238).
To my huge surprise I am still seeing these kinds of issues even with the
new kernel...

Am I alone here? How can this be? Why are we all using Linux if it is still
not ready for production?
Will ext4 fix that, or is it just bigger and faster but based on the same
unstable technology?

Blowing out some steam...
Offer Baruch



Re: Running top on second level RHEL5

2011-12-15 Thread David Boyes
  It's mostly not very useful. It reflects only the usage when the virtual
 machine is dispatched. I/O counts are mostly OK in that it's a reflection of
 virtual I/O requests (does not reflect that CP may have done something
 smart with the I/O to make it more efficient -- think about VDISKs for a
 moment). Memory usage is from the perspective of inside the virtual
 machine (doesn't reflect impact on the total machine).
 
 You dismiss CPU usage too quick as mostly OK - while vague, probably still
 not useful.

You missed a period there. "Mostly OK" in that sentence applied to I/O
counts, not CPU. I actually didn't comment on CPU at all. 

WRT top and CPU: it's less wrong on I/O than it is on CPU, but top is still 
probably somewhat wrong on I/O (see the comment about reflecting only virtual 
I/O counts). 

The only useful measure in top related to CPU (IMHO) is proportionally how much 
of the kernel workload is allocated to which virtual processors *inside* the 
virtual machine. Which tells you pretty much nothing useful unless your 
application is massively multi-threaded and you want to know if threads are 
getting dispatched on all available processors. 

 * Most people using top don't really care about what they use, but about
 what they don't use. The assumption is that what you don't use is still
 available to use. But not in a virtualized environment...

My original point (and statement): Top is mostly not very useful. 

 You can't have it all, and in this case you can't have either of them.
 For the full picture you need a performance monitor that combines data
 from inside and outside the virtual machine.

Agreed. You need data from both perspectives. Top sees only one perspective. 

Also true on any other virtualized architecture (POWER domains, Solaris zones, 
VMWare, Xen, etc, etc, etc). 
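
Not a substitute for data from both sides, but one inside-the-guest number that does reflect the hypervisor's influence is steal time -- the %st field in top's header, taken from /proc/stat (present since kernel 2.6.11). A minimal sketch:

```shell
# "Steal" ticks: time this guest wanted CPU but the hypervisor ran
# something else.  It's the 8th value on the "cpu" line of /proc/stat.
awk '/^cpu /{print "steal ticks since boot:", $9}' /proc/stat
```

High steal with low apparent CPU usage inside the guest is exactly the case where top's percentages mislead.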



Re: Running top on second level RHEL5

2011-12-15 Thread Rob van der Heij
On Dec 15, 2011 7:50 PM, David Boyes dbo...@sinenomine.net wrote:

   It's mostly not very useful. It reflects only the usage when the virtual
  machine is dispatched. I/O counts are mostly OK in that it's a reflection
  of virtual I/O requests (does not reflect that CP may have done something
  smart with the I/O to make it more efficient -- think about VDISKs for a
  moment). Memory usage is from the perspective of inside the virtual
  machine (doesn't reflect impact on the total machine).
 
  You dismiss CPU usage too quick as mostly OK - while vague, probably
  still not useful.

 You missed a period there. "Mostly OK" in that sentence applied to I/O
 counts, not CPU. I actually didn't comment on CPU at all.

Yes, my apologies. Old man with new tiny devices... :-)

 WRT top and CPU: it's less wrong on I/O than it is on CPU, but top is
 still probably somewhat wrong on I/O (see the comment about reflecting
 only virtual I/O counts).

 The only useful measure in top related to CPU (IMHO) is proportionally how
 much of the kernel workload is allocated to which virtual processors
 *inside* the virtual machine. Which tells you pretty much nothing useful
 unless your application is massively multi-threaded and you want to know
 if threads are getting dispatched on all available processors.

This tends to get less exciting with virtualized servers because you're
less likely to be running lots of apps on one server.

Rob



Re: ext3...

2011-12-15 Thread Richard Troth
Your #2 and #7 and #8 are all normal.  The rest has me worried.

Something bad is happening with the underlying storage.

My replies peppered throughout:

 This is not a z/Linux or z/VM question but Let me ask you guys something
 about ext3... I guess that most of you are using it in production...

I use EXT3 personally, and we use it at work, and I see it all around.

 How is it possible that i am using ext3 in my production systems and
 face stuff like:
 1. Corrupted FS during normal work that needs to be fixed with fsck or
 worse restore from a backup

It's good that this is your #1 question, because it is probably the #1 clue.

In my experience, it is quite rare for EXT3 to be corrupted.

Define "restore from a backup".  Tell me you don't mean an image backup.
A true image backup of an EXT3 should restore more cleanly than an
(active) EXT2, but ... where was the image copy taken?  If not from
the owning Linux system, then all bets are off in case blocks were not
flushed from cache and written to disk.

If you're on VM and try to CP LINK the disk to make a copy, then the
copy will almost certainly NOT be current.  This is not a problem with
VM nor with Linux on VM nor with zLinux.  It is the nature of shared
disks.  When they are being written, what the sharing system sees will
never match what the writing system sees.

 2. Resizing a FS requires me to fsck before I resize

That is normal for offline resize.

 (as if the FS does not
 trust itself to be valid forcing me to umount the FS before a resize)

You cannot resize a mounted filesystem unless you use the online
resize tools.  I don't use them.  (Mostly because they are still new
to me.)  But online resize is really convenient, and is reliable
(especially when driven via distributor tools).

If you try to resize a mounted filesystem with the offline resize
tools, you will trash it.

 3. Resizing a FS offline actually corrupts the FS

This goes back to your #1.
It makes me wonder what else is able to touch the underlying storage.
Is the disk shared?  Is it possible that there is a disk overlap?
This symptom does not point to EXT3 per se, but to the media.

 4. The fstab parameters, that states that it is normal to fsck your FS
 every boot or every several mounts...

EXT3 is a journaling filesystem, so you could get away without a
forced 'fsck' as often.

How often is this filesystem unmounted and remounted?  How often is
the owning system rebooted?  Maybe you need to adjust the number of
mounts.  That setting is in the superblock.  Use 'tune2fs' most
likely.
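
As a sketch (on a throw-away file image rather than a real DASD partition; assumes e2fsprogs is installed), the knobs are tune2fs -c (max mount count) and -i (check interval):

```shell
img=$(mktemp)                             # scratch file standing in for a device
dd if=/dev/zero of="$img" bs=1024 count=16384 2>/dev/null
mke2fs -q -F -j "$img"                    # small ext3 image
tune2fs -c -1 -i 0 "$img" >/dev/null      # -c -1: never fsck by mount count;
                                          # -i 0: never fsck by elapsed time
tune2fs -l "$img" | grep -E 'Maximum mount count|Check interval'
```

On a real system you'd point this at the unmounted device node instead of a scratch file.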

 5. FS is busy although it is not mounted or in use by anyone...

Here's another clue.  It comes back to #1 again.

A corrupted filesystem can appear to be stuck busy.

 6. fuser command will not always show the using processes

FS corruption ... again.

 7. open files can be removed without any warning from the rm command.

NOT an error.  That is normal Unix behavior.

But it is a clue.  Maybe something unique to your environment is
making an incorrect assumption about this filesystem?

 8. removing files from the FS will not free up space in the FS

This goes along with #7.  Files can be removed from the directory (and
rendered invisible) but still be in use.  Blocks are still used, until
the last process with an open filehandle to that file closes it.  THEN
the file is truly removed and all blocks are freed.  This is normal.
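
The whole lifecycle is easy to watch from a shell prompt (scratch file, plain POSIX shell):

```shell
f=$(mktemp)
echo hello > "$f"
exec 3< "$f"     # hold the file open on descriptor 3
rm "$f"          # directory entry is gone; blocks are NOT freed yet
cat <&3          # the open descriptor still reads the data: prints "hello"
exec 3<&-        # last close: now the inode and blocks are freed
```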

 I can go on with Linux stuff that bother me but lets stick to ext3, and i
 guess maybe some of my issues might not be accurate.

Aside from 2, 7, and 8, the rest of your points suggest that something
is wrong.
If this is your only exposure to Linux, I can see why you would be
very disappointed.
Here on the Linux-390 discussion list, there are plenty of people who
will agree: not normal, and not acceptable.

 I am a z/OS system programmer and maybe i am expecting for too much, but
 even windows don't have this kind of stuff anymore...

See my prior paragraph.  I would agree with your conclusion, but have
never had such trouble from Linux.  (Until Windows 7, Linux for me was
years ahead of Windows in terms of reliability and stability ... and
the jury is still out w/r/t W7 because I have not had time to use it
much.)

Some of the behavior you see should be reflected in Unix on MVS (USS).

 I am using redhat V5.2 (not too old) and recently was asked from my local
 redhat representative to upgrade my kernel to V5.6 (2.6.18-238).
 To my huge surprise i am still seeing this kind of issues even with the new
 kernel...

I see no reason, based on your report, to think that you have a kernel problem.

 Am i alone here? how can this be? Why are we all using linux if it is still
 not ready for production?

Yes, and no.  You are not alone in astonishment with using Linux.  You
are somewhat alone in the number of problems and the severity.
Hundreds of us have been using Linux on the mainframe for a long time,
some of us for more than a decade.  And most of us have used Linux on
other 

Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-15 Thread Mark Post
 On 12/15/2011 at 08:33 AM, Jan Glauber j...@linux.vnet.ibm.com wrote: 
 1. mmap the disk (as before)
 2. check for the -ENOMEM case
2a. mmap was successful - go on use memcpy in the read/write
 functions
2b. mmap failed - use pread/pwrite in the read/write functions
 
 That way we can avoid any memory setting changes since pread/pwrite will
 also work with very little memory. If the sysadmin sets ulimit -v
 cmsfs-fuse will also work. And in case there is enough memory it can be
 used to cache the disk operations.
 
 Is anybody ok with that approach or do you still see the need for an
 additional switch???

I'm willing to say it's good for a first whack at the problem.  Further testing 
may show that something more (or else) is needed.  Either way, I'll report back 
what I find.


Mark Post



Re: ext3...

2011-12-15 Thread Offer Baruch
Hi,

Thanks for your time and answer.
I am sure my storage vendor will disagree with you :-)
The FS corruption sometimes involves shared storage and sometimes not;
either way, the FSes are not mounted on the other systems, so there should
be no reason for corruption.
Let's put the corruption aside: all the things you say are "normal to Linux"
shouldn't be normal and should not be accepted.
An open file should not be deleted in any case.
I am using online resize (normally it works), but sometimes it asks me to
umount the FS and fsck it.
The whole approach just seems immature...
I am having a hard time accepting any of the stuff you call normal in an
enterprise production solution.


Offer

On Thu, Dec 15, 2011 at 9:07 PM, Richard Troth vmcow...@gmail.com wrote:

 Your #2 and #7 and #8 are all normal.  The rest has me worried.

 Something bad is happening with the underlying storage.

 My replies peppered throughout:

  This is not a z/Linux or z/VM question but Let me ask you guys something
  about ext3... I guess that most of you are using it in production...

 I use EXT3 personally, and we use it at work, and I see it all around.

  How is it possible that i am using ext3 in my production systems and
  face stuff like:
  1. Corrupted FS during normal work that needs to be fixed with fsck or
  worse restore from a backup

 It's good that this is your #1 question, because it is probably the #1
 clue.

 In my experience, it is quite rare for EXT3 to be corrupted.

 Define restore from a backup.  Tell me you don't mean image backup.
 A true image backup of an EXT3 should restore more cleanly than an
 (active) EXT2, but ... where was the image copy taken?  If not from
 the owning Linux system, then all bets are off in case blocks were not
 flushed from cache and written to disk.

 If you're on VM and try to CP LINK the disk to make a copy, then the
 copy will almost certainly NOT be current.  This is not a problem with
 VM nor with Linux on VM nor with zLinux.  It is the nature of shared
 disks.  When they are being written, what the sharing system sees will
 never match what the writing system sees.

  2. Resizing a FS requires me to fsck before I resize

 That is normal for offline resize.

  (as if the FS does not
  trust itself to be valid forcing me to umount the FS before a resize)

 You cannot resize a mounted filesystem unless you use the online
 resize tools.  I don't use them.  (Mostly because they are still new
 to me.)  But online resize is really convenient, and is reliable
 (especially when driven via distributor tools).

 If you try to resize a mounted filesystem with the offline resize
 tools, you will trash it.

  3. Resizing a FS offline actually corrupts the FS

 This goes back to your #1.
 It makes me wonder what else is able to touch the underlying storage.
 Is the disk shared?  Is it possible that there is a disk overlap?
 This symptom does not point to EXT3 per se, but to the media.

  4. The fstab parameters, that states that it is normal to fsck your FS
  every boot or every several mounts...

 EXT3 is a journaling filesystem, so you could get away without a
 forced 'fsck' as often.

 How often is this filesystem unmounted and remounted?  How often is
 the owning system rebooted?  Maybe you need to adjust the number of
 mounts.  That setting is in the superblock.  Use 'tune2fs' most
 likely.

  5. FS is busy although it is not mounted or in use by anyone...

 Here's another clue.  It comes back to #1 again.

 A corrupted filesystem can appear to be stuck busy.

  6. fuser command will not always show the using processes

 FS corruption ... again.

  7. open files can be removed without any warning from the rm command.

 NOT an error.  That is normal Unix behavior.

 But it is a clue.  Maybe something unique to your environment is
 making an incorrect assumption about this filesystem?

  8. removing files from the FS will not free up space in the FS

 This goes along with #7.  Files can be removed from the directory (and
 rendered invisible) but still be in use.  Blocks are still used, until
 the last process with an open filehandle to that file closes it.  THEN
 the file is truly removed and all blocks are freed.  This is normal.

  I can go on with Linux stuff that bother me but lets stick to ext3, and i
  guess maybe some of my issues might not be accurate.

 Aside from 2, 7, and 8, the rest of your points suggest that something
 is wrong.
 If this is your only exposure to Linux, I can see why you would be
 very disappointed.
 Here on the Linux-390 discussion list, there are plenty of people who
 will agree: not normal, and not acceptable.

  I am a z/OS system programmer and maybe i am expecting for too much, but
  even windows don't have this kind of stuff anymore...

 See my prior paragraph.  I would agree with your conclusion, but have
 never had such trouble from Linux.  (Until Windows 7, Linux for me was
 years ahead of Windows in terms of reliability and stability ... and
 the jury is 

Re: Running top on second level RHEL5

2011-12-15 Thread David Boyes
  The only useful measure in top related to CPU (IMHO) is proportionally
 how much of the kernel workload is allocated to which virtual processors
 *inside* the virtual machine. Which tells you pretty much nothing useful
 unless your application is massively multi-threaded and you want to know if
 threads are getting dispatched on all available processors.
 
 This tends to get less exciting with virtualized servers because you're less
 likely running lots of apps on one server.

Yeah. It's still useful if you have some very complex Apache configurations 
or WAS (Apache + some other ugliness) apps, but as you say, "one app, one 
host" tends not to be all that interesting (unless you get into the threading 
issues, and most business apps don't have that level of parallelism). 



Re: ext3...

2011-12-15 Thread Christian Borntraeger
On 15/12/11 19:09, Offer Baruch wrote:
 Hi,

 This is not a z/Linux or z/VM question but Let me ask you guys something
 about ext3... I guess that most of you are using it in production...
 How is it possible that i am using ext3 in my production systems and
 face stuff like:
 1. Corrupted FS during normal work that needs to be fixed with fsck or
 worse restore from a backup

Unless you do something wrong (typical candidates:
- mounting an ext3 at the same time on several systems
- file system on /dev/dasda instead of /dev/dasda1 on a CDL-formatted disk
- logical volume on /dev/dasda instead of /dev/dasda1 on a CDL-formatted disk
- full disk backup of a running system from an external system
- misconfiguring the partition size after a resize
) or have broken hardware this should never happen.

 2. Resizing a FS requires me to fsck before I resize (as if the FS does not
 trust itself to be valid forcing me to umount the FS before a resize)
 3. Resizing a FS offline actually corrupts the FS

Can't tell about resizing.

 4. The fstab parameters, that states that it is normal to fsck your FS
 every boot or every several mounts...

You are the first z/OS admin that complains about too many sanity checks :-)
You can disable or change these values with tune2fs -c and -i.

 5. FS is busy although it is not mounted or in use by anyone...

This can also happen if another file system is mounted on top of it.

 6. fuser command will not always show the using processes

Might happen if there are short-lived processes or processes
that open/close all the time. (fuser takes a snapshot, and if those
processes are not accessing a file at that moment it doesn't catch them.)

 7. open files can be removed without any warning from the rm command

This is actually a very nice Unix feature. The directory entry is gone, but
the file does not go away (due to reference counting on the inode) until
the last user closes the file. And really, it's not a bug, it's a feature --
e.g. programs can make sure that temp files get deleted when they crash.

 8. removing files from the FS will not free up space in the FS

Can be explained by 7. As soon as the last user goes away the space
will be freed.



Your 1. should really worry you. Again this should not happen. (and it
does not happen on my systems)

Christian



Re: ext3...

2011-12-15 Thread Rick Troth
(portions removed for brevity)

 I am sure my storage vendor will disagree with you :-)

I did not mean to point at the vendor.
It could be something in the configuration, in the layout, in the VM
side ... any number of things.

 let's put the corruption aside. all the thing you say are normal to linux
 shouldn't be normal and should not be accepted.
 an open file should not be deleted in any case.

No.  That is not the design of Unix or Linux.

I can see that it is astonishing that an open file can be deleted, but
that does not make it wrong.  This feature turns out to be really very
useful.  When I found on HP-UX that it was not normally allowed, I was
just as astonished then as you are now.  (Except that I had the
knowledge that HP had removed a useful feature from Unix.)

Windows is not Unix, so they can make up their own rules w/r/t open
files being deleted or not.

 i am using online resize (normally it works) but sometimes it asks me to
 umount the FS and fsck it.
 the hole approach just seems immature...

As I said, I do not use the online resize (for reasons that I won't go
into further).  But it makes sense that if SOMETHING ELSE is wrong,
and the online resize has detected a problem ... requiring you to
unmount the filesystem is a basic safety requirement.

 I am having a hard time accepting any of the stuff you call normal in an
 enterprise production solution.

Offer, I cannot make you like these things which are new to you.
And I won't try.

Clearly, something is trashing your filesystem(s).  It is difficult to
tell what, since we only just started the conversation.

As a z/OS professional, you value high availability and you are
accustomed to live changes.  Linux runs on a broad range of
hardware.  Unix, as a genre, runs on a broader range, and must
accommodate sysadmins with varying skills.  Most of the tools and
utilities in Linux have been developed over time and tested by this
broad range of people.  So if the online resize detects a problem, it
cannot assume that the user will be as highly skilled as you are.

Most of us on this list are quite pleased to have Linux on the
mainframe, and are anxious to help you, as a peer, to get your
implementation working very very well.  It will never work like z/OS
(or even quite like USS).  But we will try to walk you through making
things reliable.  (The filesystem corruption you describe is NOT
normal and is NOT something any of us tolerate for production work.)

Many of us on this list are also long time mainframe people.  You are
not alone.

Given that so much code these days is written for the POSIX model,
your best alternative (as a z/OS guy) to using mainframe Linux is to
use USS.  If you can get the porting done and can get support from
your vendors, go for it!

-- R;   



Re: ext3...

2011-12-15 Thread David Boyes
 How is it possible that i am using ext3 in my production systems and face
 stuff like:
 1. Corrupted FS during normal work that needs to be fixed with fsck or worse
 restore from a backup 

If you are getting corrupted filesystems during normal operations and with 
normal shutdowns (ie, you don't just force the Linux guests or let them die 
horribly with CP SHUTDOWN), then something in your configuration is actively 
clobbering data behind Linux' back. Ext3 is a journaling filesystem and will 
survive most user abuse with reasonable care. 

What other systems have access to those disks? If z/OS does, do they have 
valid VOL1 labels and VTOCs so z/OS thinks they're full and keeps its grubby 
mitts off them? 

 2. Resizing a FS requires me to fsck before I resize (as
 if the FS does not trust itself to be valid forcing me to umount the FS 
 before a
 resize) 

An unmounted filesystem is not guaranteed to be error free or consistent. The 
fsck requirement is a safety measure to ensure the filesystem is error free and 
consistent on disk before you do major surgery -- and resizes ARE major 
surgery. If you don't want to do the fsck, you can explicitly disable it, and 
it's your gun, your foot. 
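
The sequence, sketched on a scratch file image (a stand-in for the real unmounted device; assumes e2fsprogs):

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=16384 2>/dev/null
mke2fs -q -F -j "$img"     # 16 MiB ext3 image, 1 KiB blocks
e2fsck -f -p "$img"        # the consistency check resize2fs insists on
resize2fs "$img" 8192      # offline shrink to 8192 blocks
e2fsck -f -n "$img"        # surgery done; verify it is still consistent
```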

 3. Resizing a FS offline actually corrupts the FS 

See #1. Something is happening outside Linux's control. 

 4. The fstab
 parameters, that states that it is normal to fsck your FS every boot or every
 several mounts...

This is another "err on the side of safety" item, and probably dates from 
before Linux made the transition to reliable hardware. Most systems don't have 
the hardware reliability guarantees that System z has, so you do this to adapt 
to unreliable hardware on smaller systems. You are free to change this to 
match your risk tolerance, and most Unix shops (of any sort) with decent iron 
do change these parameters.  Having a cautious default seems right to me -- 
especially since it's telling you about a serious problem. 

 5. FS is busy although it is not mounted or in use by anyone...

An example would help here. You can also try lsof, which (IMHO) tends to be 
more reliable in finding who has what open.

 6. fuser command will not always show the using processes

Lsof again. 
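
When both tools come up empty, you can also look at what they read under the covers: every open file of every process is a symlink in /proc/<pid>/fd. A sketch, checking the current shell's own table:

```shell
f=$(mktemp)
exec 5< "$f"                        # hold the file open
ls -l /proc/$$/fd | grep -c "$f"    # prints 1: descriptor 5 points at it
exec 5<&-
rm -f "$f"
```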

  7. open files can
 be removed without any warning from the rm command.

Working as designed. Remember, unlike z/OS, Unix is "if you have permission 
to do it, you asked for it, you got it." Your gun, your foot. Locking is an 
application issue.  If you want the safety, check out configuring SELinux. 
Almost as ugly as RACF, but you can stop root from doing stupid stuff. 

 8. removing files from the FS will not free up space in the FS

All Unix filesystems reserve some space as a soft quota to halt runaway 
processes and let admins fix things before everything comes casters-up. 
Is the filesystem showing 100% (or more) usage in df? If so, you've probably 
hit that reserve, and the number won't really change until you drop below 
that threshold AND all open files are closed. The filesystem won't release 
space until nobody is using it.
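
On ext2/ext3 specifically, that reserve is the "reserved block count" (5% by default, usable only by root), visible with dumpe2fs or tune2fs -l. A sketch on a scratch image (assumes e2fsprogs):

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=16384 2>/dev/null
mke2fs -q -F "$img"        # default: 5% of blocks reserved for root
dumpe2fs -h "$img" 2>/dev/null | grep -E 'Block count|Reserved block count'
```

tune2fs -m changes the percentage on an existing filesystem.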

 I am a z/OS system programmer and maybe i am expecting for too much, but
 even windows don't have this kind of stuff anymore...

No, they have other variations of other problems. 

 I am using redhat V5.2 (not too old) and recently was asked from my local
 redhat representative to upgrade my kernel to V5.6 (2.6.18-238).
 To my huge surprise i am still seeing this kind of issues even with the new
 kernel...

Sounds like someone is easter-egging the problem. Kernel upgrades are unlikely 
to fix this kind of problem. Something else is corrupting your data outside 
Linux' control. The Linux kernel can't compensate for what it doesn't know 
about. 

 Am i alone here? how can this be? Why are we all using linux if it is still 
 not
 ready for production?

Production Linux (or any Unix) requires different techniques to manage it. It's 
learning a lot from z/OS (finally), but this is pretty much the state of the 
world for *any* Unix implementation at scale. Your data corruption problems are 
something else, though -- something unique to your configuration.

 will ext4 fix that or is it just bigger, faster but based on the same unstable
 technology?

Ext4 isn't going to fix your problem (in fact, it has a whole new set of 
issues). 



Re: cmsfs-fuse: mmap failed: Cannot allocate memory

2011-12-15 Thread David Boyes
 -Original Message-
 From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
 Rob van der Heij
 As Jan points out the FST is fragmented.

Agreed. However, each piece contains pointers to the next piece you need, and 
you need that information anyway, so following the breadcrumbs is not an 
operational loss, as it happens in only two scenarios: at first access, and 
after an update to the FST in R/W mode.

 The purpose of mmap() is that you
 map all blocks in virtual mem 

The purpose of mmap() is to map *A* specified object (disk, shared memory 
block, etc) of a specified size starting at a specified offset from the start 
of the object to *A* memory segment of equal size in the process' address 
space. It does not have to map ALL blocks of a disk to access some of the data 
on it. 

POSIX (IEEE Std 1003.1) definition of mmap():

The mmap() function shall establish a mapping between a process' address space 
and a file, shared memory object, or typed memory object. The format of the 
call is as follows:

pa=mmap(addr, len, prot, flags, fildes, off);

The mmap() function shall establish a mapping between the address space of the 
process at an address pa for len bytes to the memory object represented by the 
file descriptor fildes at offset off for len bytes. The value of pa is an 
implementation-defined function of the parameter addr and the values of flags, 
further described below. A successful mmap() call shall return pa as its 
result. The address range starting at pa and continuing for len bytes shall be 
legitimate for the possible (not necessarily current) address space of the 
process. The range of bytes starting at off and continuing for len bytes shall 
be legitimate for the possible (not necessarily current) offsets in the file, 
shared memory object, or typed memory object represented by fildes.
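Python's mmap module wraps this same call, so the point about partial mappings
is easy to demonstrate. A minimal sketch (file name and contents invented for
illustration) mapping len bytes at offset off rather than the whole object:

```python
import mmap
import os
import tempfile

# The offset must be a multiple of the platform's allocation granularity
# (the page size on Linux).
gran = mmap.ALLOCATIONGRANULARITY
path = os.path.join(tempfile.mkdtemp(), "minidisk.img")
with open(path, "wb") as f:
    f.write(b"\x00" * gran + b"FST data" + b"\x00" * gran)

with open(path, "rb") as f:
    # Map just 8 bytes starting at offset gran -- the rest of the "disk"
    # is never brought into the address space.
    m = mmap.mmap(f.fileno(), length=8, offset=gran,
                  access=mmap.ACCESS_READ)
    print(m[:8])                       # b'FST data'
    m.close()
```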

 and then simply access the blocks in memory.
 Linux does the I/O under the covers.

I follow the concept, and see the advantages of using mmap to do the I/O under 
the covers. At this point, we're optimizing to minimize the amount of data we 
need, and thus the impact on other stuff that uses memory in the same virtual 
machine (and WSS of same).

 Since your blocks can be anywhere on
 disk, you map the entire thing. 

Here's where we diverge. 

There are two issues here: 

1) accessing the minidisk and representing its contents to Linux at a point in 
time
2) accessing the content of the minidisk

Mmap()ing the whole disk is a convenient solution to both problems, HOWEVER:

To access a minidisk and represent it to Linux, you do NOT need every block on 
the disk to be represented in a structure; you need the label data and the FST 
data (which, BTW, you need to read first ANYWAY to mmap the whole disk, as you 
need to know the logical number of blocks to set up the mmap!). 

To use the files on the minidisk, you need the blocks CONTAINED in the file, 
not the entire disk. You get that from the FST and you mmap() those blocks.

Quote (again from IEEE Std 1003.1): "Use of mmap() may reduce the amount of 
memory available to other memory allocation functions." 

This is what triggered the discussion. In no case do you ever need the entire 
set of blocks on the disk at the same time, unless they are contained in a 
single large file, which our use case (big disk with lots of small to 
medium-size files) makes unlikely, if not explicitly impossible by definition 
of the problem. 

 To map just record 3-5 is no gain if you need
 to point still at the rest of the blocks.

See above. It is a gain at access time (you don't need ALL the blocks, you need 
the ones that identify the volume, create a view of the volume contents, and 
say where the interesting content is, or at least starts). For R/O you need to 
build an in-memory copy exactly once (on first access to the minidisk; then you 
can use it until the next update). For R/W you need a live copy of the entire 
FST in memory, which you need to build and maintain regardless of activity or 
access method. The in-memory copy does not have to be discontiguous -- in fact, 
you *want* it to be contiguous so you can use simple indexing of a structure 
pattern over the FST entries for performance. 

You don't need the other data AT ALL until you actually access a file in some 
way, and then you need only the blocks that comprise the file you want. 

 DIAG250 is a block driver, just like Linux can do. Extra work is to allocate
 memory to hold blocks while you work on them, make sure to flush the
 updated pages, etc.

I suggested looking at DIAG 250 for ideas on how to approach the problem. I 
explicitly said that I do not want a duplicate of DIAG 250. 

Yes, it's going to be a little more work on the housekeeping tasks if you want 
R/W access, but you'd have to do substantially the same housekeeping with the 
full-disk-map approach. It's mostly buffer management issues, though; the 
actual update to the data on disk can still be done with memcpy.

Re: ext3...

2011-12-15 Thread Marcy Cortes
We use SuSE 10 and 11 and ext3 everywhere.  Here's my answers.

1. I've never seen that happen
2. Typically we use online resizing and that does not require unmounting and 
fsck'ing
3. That's never happened here
4. You can change that if you wish
5. Haven't seen that.
6. Haven't used that command - I use lsof
7. I think that is normal behavior in Linux
8. Haven't seen that either

You do sound alone here :(

Marcy 

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@vm.marist.edu] On Behalf Of Offer 
Baruch
Sent: Thursday, December 15, 2011 10:10 AM
To: LINUX-390@vm.marist.edu
Subject: [LINUX-390] ext3...

Hi,

This is not a z/Linux or z/VM question, but let me ask you guys something
about ext3... I guess that most of you are using it in production...
How is it possible that I am using ext3 on my production systems and
face stuff like:
1. Corrupted FS during normal work that needs to be fixed with fsck or
worse restore from a backup
2. Resizing a FS requires me to fsck before I resize (as if the FS does not
trust itself to be valid forcing me to umount the FS before a resize)
3. Resizing a FS offline actually corrupts the FS
4. The fstab parameters, that states that it is normal to fsck your FS
every boot or every several mounts...
5. FS is busy although it is not mounted or in use by anyone...
6. fuser command will not always show the using processes
7. open files can be removed without any warning from the rm command.
8. removing files from the FS will not free up space in the FS

I can go on with Linux stuff that bothers me, but let's stick to ext3, and
I guess maybe some of my issues might not be accurate.
I am a z/OS system programmer and maybe I am expecting too much, but
even Windows doesn't have this kind of stuff anymore...
I am using Red Hat V5.2 (not too old) and was recently asked by my local
Red Hat representative to upgrade my kernel to V5.6 (2.6.18-238).
To my huge surprise I am still seeing these kinds of issues even with the
new kernel...

Am I alone here? How can this be? Why are we all using Linux if it is still
not ready for production?
Will ext4 fix that, or is it just bigger and faster but based on the same
unstable technology?

Blowing out some steam...
Offer Baruch



Re: ext3...

2011-12-15 Thread Alan Cox
 How is it possible that i am using ext3 in my production systems and
 face stuff like:
 1. Corrupted FS during normal work that needs to be fixed with fsck or
 worse restore from a backup

You have something wrong with your configuration or hardware eg two
systems with the same fs mounted at once, the same blocks assigned to two
things etc.

 2. Resizing a FS requires me to fsck before I resize (as if the FS does not
 trust itself to be valid forcing me to umount the FS before a resize)

Ext3 should allow a resize if the disk is clean.

 3. Resizing a FS offline actually corrupts the FS

You definitely have something wrong with your configuration, hardware or
tools

 4. The fstab parameters, that states that it is normal to fsck your FS
 every boot or every several mounts...

Usually every n reboots or every few months. If it's doing it regularly, the
filesystem probably wasn't cleanly unmounted and so is being checked - see
1, 2, 3 above. The 180-day check is done to make sure nothing is slowly going
astray (eg bad hardware), ditto the every-'n'-mounts check. These are tunable
and you can turn them off if your hardware is trusted. In PC space they tend
to be a good idea because cheap disks and cheap hardware with limited ECC now
and then do bad things.

 5. FS is busy although it is not mounted or in use by anyone...

If you have a process with its current directory there, or it is NFS
exported, or other things like that, then you may find it busy.

 6. fuser command will not always show the using processes

See 5; also check that your command-line choices include current working
directory etc.

 7. open files can be removed without any warning from the rm command.

That's a feature

 8. removing files from the FS will not free up space in the FS

Ditto if they are in use.

 Am i alone here? how can this be? Why are we all using linux if it is still
 not ready for production?
 will ext4 fix that or is it just bigger, faster but based on the same
 unstable technology?

It's based on the same stable technology. I think you have a local
problem. In the PC space I'd also suspect bad memory but on a 390 the
hardware is probably less of a suspect.

But if you can't figure it out then you could upgrade to a PC - it's
really stable on those ;)

Alan



Re: ext3...

2011-12-15 Thread Offer Baruch
Thank you all for all the answers...

I had no intention for you guys to try and solve those issues :-)
I am more than aware that some of the stuff I wrote is intentional by
Linux design... and that is exactly my point.
I can assure all of you I don't mount a FS on 2 machines and blame ext3 for
corrupting it... when I do that I blame myself :-)
I am using Linux both on System z and on IBM blades (VMware and physical
servers).
Actually I rarely have any ext3 corruption issues on System z (I think it
has happened only once).
Out of 200 guests (not too many), every once in a while (let's say every 2
months) one of my Linux filesystems needs an fsck.
And when I think about it, all the features and messages I get imply that
fsck is normal. Other than mounting a FS in 2 places (and other stuff like
that) I do not expect FS corruption ever.
All I am trying to say is that I expected more of Linux, and it looks
different.

Offer.

On Thu, Dec 15, 2011 at 8:09 PM, Offer Baruch offerbar...@gmail.com wrote:

 Hi,

 This is not a z/Linux or z/VM question but Let me ask you guys something
 about ext3... I guess that most of you are using it in production...
 How is it possible that i am using ext3 in my production systems and
 face stuff like:
 1. Corrupted FS during normal work that needs to be fixed with fsck or
 worse restore from a backup
 2. Resizing a FS requires me to fsck before I resize (as if the FS does
 not trust itself to be valid forcing me to umount the FS before a resize)
 3. Resizing a FS offline actually corrupts the FS
 4. The fstab parameters, that states that it is normal to fsck your FS
 every boot or every several mounts...
 5. FS is busy although it is not mounted or in use by anyone...
 6. fuser command will not always show the using processes
 7. open files can be removed without any warning from the rm command.
 8. removing files from the FS will not free up space in the FS

 I can go on with Linux stuff that bother me but lets stick to ext3, and i
 guess maybe some of my issues might not be accurate.
 I am a z/OS system programmer and maybe i am expecting for too much, but
 even windows don't have this kind of stuff anymore...
 I am using redhat V5.2 (not too old) and recently was asked from my local
 redhat representative to upgrade my kernel to V5.6 (2.6.18-238).
 To my huge surprise i am still seeing this kind of issues even with the
 new kernel...

 Am i alone here? how can this be? Why are we all using linux if it is
 still not ready for production?
 will ext4 fix that or is it just bigger, faster but based on the same
 unstable technology?

 Blowing out some steam...
 Offer Baruch




lsscsi issue with RHEL5

2011-12-15 Thread Shumate, Scott
 I have an issue with zfcp LUNs. I have 2 LUNs: one 100GB LUN and one 1GB LUN.
The 100GB LUN has 4 paths: /dev/sda, /dev/sdb, /dev/sdd, and /dev/sde.
The 1GB LUN has one path, /dev/sdc.  I want paths /dev/sda thru /dev/sdd
for the 100GB LUN and /dev/sde for the 1GB LUN.  Is there anything I can
do to make this happen?  I've manually removed the SCSI disk and
rebooted, but it still has the same issue.  What am I doing wrong?

[root@wil-zstsintgdbprdbu01 ~]# cat /etc/zfcp.conf
0.0.dc00   0x50060e800571f006   0x0020
0.0.dd00   0x50060e800571f016   0x0020
0.0.de00   0x50060e800571f007   0x0020
0.0.df00   0x50060e800571f017   0x0020

[root@wil-zstsintgdbprdbu01 ~]# lszfcp -D
0.0.dc00/0x50060e800571f006/0x0020 0:0:0:1
0.0.dc00/0x50060e800571f006/0x0026 0:0:0:2
0.0.dd00/0x50060e800571f016/0x0020 1:0:0:1
0.0.de00/0x50060e800571f007/0x0020 2:0:0:1
0.0.df00/0x50060e800571f017/0x0020 3:0:0:1


[root@wil-zstsintgdbprdbu01 ~]# lsscsi
[0:0:0:1]    disk    HITACHI   OPEN-V   6008   /dev/sda
[0:0:0:2]    disk    HITACHI   OPEN-V   6008   /dev/sdc
[1:0:0:1]    disk    HITACHI   OPEN-V   6008   /dev/sdd
[2:0:0:1]    disk    HITACHI   OPEN-V   6008   /dev/sde
[3:0:0:1]    disk    HITACHI   OPEN-V   6008   /dev/sdb


Thanks
Scott



Re: ext3...

2011-12-15 Thread Alan Altmark
On Thursday, 12/15/2011 at 05:15 EST, Offer Baruch offerbar...@gmail.com
wrote:
 Thank you all for all the answers...

 I had no intention for you guys to try and solve those issues :-)

Then don't post them.  :-)  Problem.  Kill it!

 I am more than aware that some of the stuff i wrote are intentional by
 linux design... and that is exactly my point.
 I can assure all of you i don't mount a FS in 2 machines and blame ext3
for
 corrupting it... when i do that i blame my self :-)
 i am using both linux on system z and on IBM Blades (VMware and physical
 servers).
 actually i rarely have any ext3 corruption issues on system z (i think
it
 happened only once).
 out of 200 guest (not too much) every once in a while (lets say 2
months)
 one of my linux FS needs an fsck.
 and when i think about it all features and messages i get implies that
fsck
 is normal. other than mounting a FS in 2 places (and other stuff like
 that)  i do not expect FS corruption ever.
 all i am trying to say is that i expected more of linux and it looks
 different.

The problem isn't necessarily you, by the way.  If those drives are
accessible by more than one host, then you have no control what other
hosts do.  The SAN admin should zone the WWPNs and mask the LUNs to ensure
that only the proper server has access to a specific LUN.

The storage admin can help you determine if someone else is accessing the
LUNs, whether from System z or elsewhere.  (Hopefully you are using NPIV
in all of your virtualization environments.)

If you're getting errors on ECKD, then I suspect another LPAR is doing
something to your disks.

The fact that an OS has a different design point than your fave isn't
unusual.  You should see all the weird things z/OS does from my
perspective.  :-)  But I think it's a credit to Linux that it *expects*
hardware to be unreliable or interfered with, and it takes steps to
protect you from it.

Oh, and ext3 is way better than ext2 was.  The fsck'ing (careful, there!)
with ext2 would have driven you crazy.  :-)

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott



Re: ext3...

2011-12-15 Thread Offer Baruch
I am the SAN admin as well, and I am using NPIV. Only guests that need shared 
storage can access it, like clusters. I see your point about z/OS's strange 
stuff as well. It has its downsides...

Sent by emoze push mail


-
From:   Alan Altmark alan_altm...@us.ibm.com
Subject:Re: ext3...
Date:   16 December 2011 00:55




Re: Fedora 16 for IBM System z 64bit official release

2011-12-15 Thread Bern VK2KAD
And the first to post a Hercules disk image of a clean install will get a 
big thank you from me   ;)


--
From: Dan Horák dho...@redhat.com
Sent: Thursday, December 15, 2011 10:09 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Fedora 16 for IBM System z 64bit official release


There was still a longer delay after the Fedora 16 release for the
primary architectures than we would like to see, but at least we are
faster than with Fedora 15, and so here we are.

As of today, the Fedora IBM System z (s390x) Secondary Arch team proudly
presents the Fedora 16 for IBM System z 64bit official release!

The links to the actual release are here:

http://secondary.fedoraproject.org/pub/fedora-secondary/releases/16/Fedora/s390x/

http://secondary.fedoraproject.org/pub/fedora-secondary/releases/16/Everything/s390x/os/

and obviously on all mirrors that mirror the secondary arch content.

The first directory contains the normal installation trees as well as
one DVD ISO with the complete release.

Everything as usual contains, well, everything. :)


Additional information about known issues, the current progress and state
of future releases, where and how the team can be reached, and just about
anything else IBM System z on Fedora related can be found here:


http://fedoraproject.org/wiki/Architectures/s390x/16
for architecture specific release notes

and

http://fedoraproject.org/wiki/Architectures/s390x
for the general information.


Thanks go out to everyone involved in making this happen!


Your Fedora/s390x Maintainers

--
Dan Horák, RHCE
Senior Software Engineer, BaseOS

Red Hat Czech s.r.o., Purkyňova 99, 612 45 Brno






Re: lsscsi issue with RHEL5

2011-12-15 Thread Steffen Maier

On 12/15/2011 11:47 PM, Shumate, Scott wrote:

  I have an issue with zfcp LUNs. I have 2 LUNs: one 100GB LUN and one 1GB LUN.
The 100GB lun has 4 paths  /dev/sda, /dev/sdb, /dev/sdd, and /dev/sde.
The 1GB lun has one path /dev/sdc.  I want paths /dev/sda thru /dev/sdd
for the 100GB lun and /dev/sde for the 1GB lun.  Is there anything I can
do to make this happen?  I've manually removed the scsi disk and
rebooted, but it still has the same issue.  What am I doing wrong?


No issue here. /dev/sdX are kernel device names allocated by no 
particular rule a user can rely on. Therefore, udev provides persistent 
device names implemented by means of symbolic links under /dev. For 
storage, those can be found under /dev/disk/by-.../...
For those cases where I really need to refer to an individual path of a 
zfcp attached SCSI disk, I usually rely on 
/dev/disk/by-path/ccw-devbusid-zfcp-wwpn:lun.
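The by-path names are plain symlinks, so resolving one shows which kernel name
it currently points at. A minimal sketch, simulated in a temp directory (the
by-path string is modeled on the devbusid/WWPN/LUN values from this thread,
not read from a real system):

```python
import os
import tempfile

# udev's persistent names are just symlinks; the kernel's sdX name they
# resolve to may change across boots, the symlink name does not.
d = tempfile.mkdtemp()
kernel_node = os.path.join(d, "sda")          # unstable kernel name
open(kernel_node, "w").close()
stable = os.path.join(d, "ccw-0.0.dc00-zfcp-0x50060e800571f006:0x0020")
os.symlink(kernel_node, stable)
print(os.path.basename(os.path.realpath(stable)))   # sda
```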


http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/persistent_naming.html
and also the Section SCSI device nodes in the zfcp chapter in the 
Device Drivers, Features, and Commands book on 
http://www.ibm.com/developerworks/linux/linux390/documentation_red_hat.html#rhel5


Just curious: Having configured multiple paths I presume you use 
device-mapper multipathing on top which doesn't care about the device 
names of individual paths.

What do you need device names of underlying physical paths for?


[root@wil-zstsintgdbprdbu01 ~]# cat /etc/zfcp.conf
0.0.dc00   0x50060e800571f006   0x0020
0.0.dd00   0x50060e800571f016   0x0020
0.0.de00   0x50060e800571f007   0x0020
0.0.df00   0x50060e800571f017   0x0020


This only contains persistent configuration for four paths to your presumed 
1GB LUN. However, where does the config for the one path to the 100GB LUN 
come from?



[root@wil-zstsintgdbprdbu01 ~]# lszfcp -D
0.0.dc00/0x50060e800571f006/0x0020 0:0:0:1
0.0.dc00/0x50060e800571f006/0x0026 0:0:0:2
0.0.dd00/0x50060e800571f016/0x0020 1:0:0:1
0.0.de00/0x50060e800571f007/0x0020 2:0:0:1
0.0.df00/0x50060e800571f017/0x0020 3:0:0:1

[root@wil-zstsintgdbprdbu01 ~]# lsscsi
[0:0:0:1]    disk    HITACHI   OPEN-V   6008   /dev/sda
[0:0:0:2]    disk    HITACHI   OPEN-V   6008   /dev/sdc
[1:0:0:1]    disk    HITACHI   OPEN-V   6008   /dev/sdd
[2:0:0:1]    disk    HITACHI   OPEN-V   6008   /dev/sde
[3:0:0:1]    disk    HITACHI   OPEN-V   6008   /dev/sdb


Looks perfectly good to me.

Steffen

Linux on System z Development

IBM Deutschland Research  Development GmbH
Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294



Required packages on a zLinux server running Oracle vs Put everything on

2011-12-15 Thread CHAPLIN, JAMES (CTR)
I got into a discussion with a co-worker over packages that are
installed on a zLinux Oracle server. We are running RHEL 5.7 at our
site, and are using Oracle 10g (about to go to 11g). I noticed that our
Oracle servers have an average of 1192 RPM packages installed and 91
defined system services, compared to our other non-Oracle servers
(application, Java, MQ & WebSphere) having only 450-480 installed RPM
packages and 53 defined services.

 

I am not an oracle expert. Can anyone point me to a list of required
software packages to be installed to support Oracle 10g? If you have any
suggestions or personal experiences with oracle and the zLinux base
platform, your comments are welcome.

 

Another statement was "It does not matter what we have installed, as
long as Oracle is working," or "don't touch unless it is broken." A sample
of the over 600 packages is httpd (Apache) and eklogin. Others, like squid,
I believe are needed. I am just looking for a good baseline and an
argument for cleaning these servers of unneeded software.

 

James Chaplin

Systems Programmer, MVS, zVM & zLinux




Re: Required packages on a zLinux server running Oracle vs Put everything on

2011-12-15 Thread Marcy Cortes
The argument for not having them there is that you are subject to far less 
security patching.
Now, some organizations don't seem to care about that.  Some others care more 
than one can ever imagine.


Marcy 

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@vm.marist.edu] On Behalf Of CHAPLIN, 
JAMES (CTR)
Sent: Thursday, December 15, 2011 3:20 PM
To: LINUX-390@vm.marist.edu
Subject: [LINUX-390] Required packages on a zLinux server running Oracle vs 
Put everything on




Re: lsscsi issue with RHEL5

2011-12-15 Thread Shumate, Scott
Can you tell me why it loses the configuration I set up manually?

I did update /etc/zfcp.conf with the following:

[root@wil-zstsintgdbprdbu01 by-path]# cat /etc/zfcp.conf
0.0.dc00   0x50060e800571f006   0x0020
0.0.dd00   0x50060e800571f016   0x0020
0.0.de00   0x50060e800571f007   0x0020
0.0.df00   0x50060e800571f017   0x0020
0.0.dc00   0x50060e800571f006   0x0026

When I re-added the SCSI disk, it looks like this:

[root@wil-zstsintgdbprdbu01 by-path]# lszfcp -D
0.0.dc00/0x50060e800571f006/0x0020 0:0:0:1
0.0.dc00/0x50060e800571f006/0x0026 0:0:0:2
0.0.dd00/0x50060e800571f016/0x0020 1:0:0:1
0.0.de00/0x50060e800571f007/0x0020 2:0:0:1
0.0.df00/0x50060e800571f017/0x0020 3:0:0:1

But when I reboot, it looks like this:

[root@wil-zstsintgdbprdbu01 by-path]# lszfcp -D
0.0.dc00/0x50060e800571f006/0x0020 0:0:0:1
0.0.dc00/0x50060e800571f006/0x0026 0:0:0:2
0.0.df00/0x50060e800571f017/0x0020 2:0:0:1

Does /etc/zfcp.conf control what it connects to, or is there another place I 
need to look? 


Thanks
Scott

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Steffen 
Maier
Sent: Thursday, December 15, 2011 8:03 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: lsscsi issue with RHEL5

On 12/15/2011 11:47 PM, Shumate, Scott wrote:
   I have an issue with zfcp LUNs. I have 2 LUNs: one 100GB LUN and one 1GB LUN.
 The 100GB lun has 4 paths  /dev/sda, /dev/sdb, /dev/sdd, and /dev/sde.
 The 1GB lun has one path /dev/sdc.  I want paths /dev/sda thru 
 /dev/sdd for the 100GB lun and /dev/sde for the 1GB lun.  Is there 
 anything I can do to make this happen?  I've manually removed the scsi 
 disk and rebooted, but it still has the same issue.  What am I doing wrong?

No issue here. /dev/sdX are kernel device names allocated by no particular rule 
a user can rely on. Therefore, udev provides persistent device names 
implemented by means of symbolic links under /dev. For storage, those can be 
found under /dev/disk/by-.../...
For those cases where I really need to refer to an individual path of a zfcp 
attached SCSI disk, I usually rely on 
/dev/disk/by-path/ccw-devbusid-zfcp-wwpn:lun.
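To illustrate, a by-path name of that form decomposes into the three identifiers that stay stable across reboots. A sketch (the devbusid and WWPN are the ones quoted in this thread; the 64-bit LUN string is an assumed expansion of the short 0x0020 form, since udev uses the full FCP LUN in the link name):

```python
# Sketch: decompose a zfcp by-path persistent name of the form
# ccw-<devbusid>-zfcp-<wwpn>:<lun>. The example name is illustrative,
# built from values quoted in this thread.
name = "ccw-0.0.dc00-zfcp-0x50060e800571f006:0x0020000000000000"
ccw_part, zfcp_part = name.split("-zfcp-")
devbusid = ccw_part[len("ccw-"):]     # "0.0.dc00"
wwpn, lun = zfcp_part.split(":")      # target WWPN and full FCP LUN
print(devbusid, wwpn, lun)
```

Whatever /dev/sdX the kernel happens to assign, the symlink with this name keeps pointing at the same physical path.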

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/persistent_naming.html
and also the Section SCSI device nodes in the zfcp chapter in the Device 
Drivers, Features, and Commands book on
http://www.ibm.com/developerworks/linux/linux390/documentation_red_hat.html#rhel5

Just curious: having configured multiple paths, I presume you use device-mapper
multipathing on top, which doesn't care about the device names of individual
paths.
What do you need device names of underlying physical paths for?

 [root@wil-zstsintgdbprdbu01 ~]# cat /etc/zfcp.conf
 0.0.dc00   0x50060e800571f006   0x0020
 0.0.dd00   0x50060e800571f016   0x0020
 0.0.de00   0x50060e800571f007   0x0020
 0.0.df00   0x50060e800571f017   0x0020

This only contains persistent configuration for four paths to, presumably, your
1GB LUN. However, where does the config for the one path to the 100GB LUN come
from?

 [root@wil-zstsintgdbprdbu01 ~]# lszfcp -D 
 0.0.dc00/0x50060e800571f006/0x0020 0:0:0:1 
 0.0.dc00/0x50060e800571f006/0x0026 0:0:0:2 
 0.0.dd00/0x50060e800571f016/0x0020 1:0:0:1 
 0.0.de00/0x50060e800571f007/0x0020 2:0:0:1 
 0.0.df00/0x50060e800571f017/0x0020 3:0:0:1

 [root@wil-zstsintgdbprdbu01 ~]# lsscsi
 [0:0:0:1]  disk  HITACHI  OPEN-V  6008  /dev/sda
 [0:0:0:2]  disk  HITACHI  OPEN-V  6008  /dev/sdc
 [1:0:0:1]  disk  HITACHI  OPEN-V  6008  /dev/sdd
 [2:0:0:1]  disk  HITACHI  OPEN-V  6008  /dev/sde
 [3:0:0:1]  disk  HITACHI  OPEN-V  6008  /dev/sdb

Looks perfectly good to me.

Steffen

Linux on System z Development

IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martin Jetter
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to 
lists...@vm.marist.edu with the message: INFO LINUX-390 or visit 
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit http://wiki.linuxvm.org/


Re: ext3...

2011-12-15 Thread r.stricklin
On Dec 15, 2011, at 10:09 AM, Offer Baruch wrote:

 1. Corrupted FS during normal work that needs to be fixed with fsck or
 worse restore from a backup

If you use the E2FS utility to edit linux files on CMS, using an ext3 
filesystem that has not been cleanly unmounted, you will see this after 
bringing the filesystem online in linux, due to an unfortunate interaction with 
the filesystem journal. We ran into this where we had CMS automation changing 
linux IP address information, in an old DR process (that was changed later, 
partly for this reason).

 2. Resizing a FS requires me to fsck before I resize (as if the FS does not
 trust itself to be valid forcing me to umount the FS before a resize)
 3. Resizing a FS offline actually corrupts the FS
 4. The fstab parameters, that states that it is normal to fsck your FS
 every boot or every several mounts...

These three reasons (in reverse order of priority, for us) were why we switched 
from ext3 to xfs for our linux deployments. The periodic fsck at boot caused a 
thundering herd problem after a POR (or during a DR test) which made these 
things take many times longer than was strictly necessary. The balkiness 
surrounding online volume resizing was also an important motivator... I 
personally was completely disgusted that it would _sometimes_ work for ext3, 
and there was no real way to tell ahead of time whether it would or wouldn't. 
As we were going ahead full-steam with LVM we really didn't have the time or 
inclination to argue with it.
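(For anyone stuck on ext3, the boot-time periodic fsck can at least be suppressed per filesystem with tune2fs. A sketch, demonstrated on a small file-backed image so it runs without root; on a real system you would point tune2fs at the block device, e.g. /dev/dasdb1, instead. This disables only the count/interval-triggered checks, not checks forced by an unclean state.)

```shell
# Sketch: disable ext3's mount-count and time-interval fsck triggers.
dd if=/dev/zero of=/tmp/ext3-demo.img bs=1M count=8 2>/dev/null
mke2fs -F -q -j /tmp/ext3-demo.img           # -j = ext3 (journalled ext2)
tune2fs -c 0 -i 0 /tmp/ext3-demo.img         # no count/interval-based fsck
tune2fs -l /tmp/ext3-demo.img | grep -E 'mount count|Check interval'
```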

xfs solved both these problems for us, and apparently gave us better filesystem
performance at lower CPU cost too---though I never quantified it beyond a
general sense and anecdotal reports collected from our customers. I never had cause
to regret abandoning ext3, though SuSE accidentally leaving it out of the YaST 
installer in SLES 11 was annoying.

 7. open files can be removed without any warning from the rm command.
 8. removing files from the FS will not free up space in the FS

This is just the way UNIX works. Linux could not reasonably change this, though 
others have already written more detailed replies on the topic.
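For anyone following along, those unlink semantics are easy to demonstrate: removing a file only drops its directory entry, and the inode (and its blocks) survive until the last open descriptor is closed. A minimal sketch, not specific to ext3:

```python
import os
import tempfile

# Create a file, keep it open, then "rm" it while the descriptor is live.
fd, path = tempfile.mkstemp()
os.write(fd, b"still here")
os.unlink(path)                      # succeeds even though fd is still open
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 100)              # inode still readable through the fd
os.close(fd)                         # only now is the space actually freed
print(data)                          # prints: b'still here'
```

This is also why deleting a huge log file a daemon still has open frees no space until the daemon closes (or is told to reopen) it.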

ok
bear.

-- 
until further notice

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: ext3...

2011-12-15 Thread Rob van der Heij
From the performance gallery...

Be aware that the ext3 journal normally covers only the metadata. So when
something bad happens, you know *which* files are lost, but you don't
have their contents. When there is no file-creation activity, the
journal does not add much value, unless you make ext3 journal the data
as well, which gets very expensive.
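(For reference: the mode is selected with the `data=` mount option. The default, `data=ordered`, journals metadata only but flushes data blocks before committing; `data=journal` pushes all data through the journal too, writing everything twice, which is the expense described above. A sketch of an /etc/fstab entry forcing full data journaling; device and mount point are placeholders:

```
/dev/dasdb1   /data   ext3   defaults,data=journal   1 2
```

For a database that does its own write ordering inside a few big preallocated files, even the default mode's journaling CPU cost can be pure overhead.)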

A database typically creates a few big files and then does its own
thing inside those files. Some people will still use ext3 because they
always did or because ext3 sounds better than ext2. From profiling
data I noticed that the extra CPU cost spent in the ext3 journaling
can be significant. That's a waste when it does not add any value.

PS: The same is true for the typical disabled firewall that leaves the
iptables and conntrack kernel modules loaded with all gates wide open.
If you have a lot of network traffic, some improvement can be had by
dropping the modules. But that's a different story...

Rob

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/