Re: FBA/SCSI vs ECKD zLinux on VM

2011-03-11 Thread Stephen Powell
On Fri, 11 Mar 2011 12:57:17 -0500 (EST), Donald Russell wrote:
>
> But, what are other people doing? My experience so far seems to be
> that the choice of SCSI or ECKD depends on the background of the
> people making the decision: people with an s390 background are
> familiar with ECKD; people coming from a unix/linux background
> are familiar with SCSI.

No-one else has mentioned this; so I will.  Don't forget about the
DIAG driver (kernel module dasd_diag_mod).  You can usually get
some performance improvement out of it, as compared with the
standard ECKD driver (dasd_eckd_mod) or the standard FBA driver
(dasd_fba_mod).  I've heard that the performance improvement
is even more significant when using emulated FBA devices on
SCSI disks.  But when using the DIAG driver, one can't use the
cdl format.  I personally use CMS RESERVED minidisks.  They work
with the DIAG driver and also work well with standard VM/CMS
backup software.
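
For anyone who wants to try it, switching an existing DASD over to the
DIAG access method goes roughly like this (a sketch only; the device
number 0.0.0201 is made up, and the device must be offline before
use_diag can be changed):

modprobe dasd_diag_mod
chccwdev -d 0.0.0201                               # take the device offline
echo 1 > /sys/bus/ccw/devices/0.0.0201/use_diag    # switch to the DIAG access method
chccwdev -e 0.0.0201                               # bring it back online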

--
  .''`. Stephen Powell
 : :'  :
 `. `'`
   `-



Meeting RSVP

2011-03-11 Thread Neale Ferguson
If you haven't already RSVP'd, please do so as soon as practicable. We are
trying to get numbers for catering.

Neale



Re: FBA/SCSI vs ECKD zLinux on VM

2011-03-11 Thread David Boyes
> They are defined as EDEVs under z/VM; keep them in mind when you
> compare the options.  Know that the largest such device is around
> 300 GB; I've forgotten the exact size.

300G is the largest EDEV you can boot VM itself from. Otherwise, EDEVs can be 
as big as the underlying LUN. 



Re: FBA/SCSI vs ECKD zLinux on VM

2011-03-11 Thread David Boyes
> On one hand using ECKD will get us some CPU cycles back due to more
> work being done by the SAP, but just recently I heard that IO can be
> faster (higher throughput) with FBA/SCSI.

Not likely. The ability to drive up to sixteen 200 MB/sec paths full on is
very valuable, as is PAV.

> 
> I'm assuming there's no clear answer as to which is best, because like
> so many performance tuning things, the answer is always "it depends".
> 
> But, what are other people doing? 

My general recommendation these days is ECKD for performance-sensitive data, 
FBA/SCSI for volume of data. Apps with large amounts of data tend to want 
multiple-hundred gig volumes, and that's just a nightmare to deal with using 
LVM. If you've got users expecting real-time response, then the ECKD route 
blows the FCP stuff away. If users want more than about 100G filesystems, use 
FCP/SCSI storage. 

The second decision (if you go FBA/SCSI) is raw FCP or EDEV. I prefer EDEV 
because it's a lot more natural to the VM world and tools, but there is a 
performance impact due to the necessary CPU consumption of the EDEV emulation. 
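
For reference, defining an EDEV is a one-liner on the CP side, something
along these lines (the device number, FCP subchannel, WWPN and LUN below
are invented placeholders; check the SET EDEVICE syntax for your z/VM level):

CP SET EDEVICE 0200 TYPE FBA ATTR 2107 FCP_DEV B200 WWPN 5005076300C20B8E LUN 5241000000000000

A matching EDEVICE statement in SYSTEM CONFIG makes it come back after an IPL.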

Last decision: can you afford the ECKD/FICON adapters in your storage units? 
They cost about 5 times as much as FCP adapters for the same storage units. If you
do FCP storage and/or EDEVs, you can reuse existing FCP adapters that may be 
free on your SAN. 

That's how we decide what to put where. YMMV. 


Re: cleaning up /tmp

2011-03-11 Thread Philip Rowlands

On 11/03/2011 15:18, McKown, John wrote:

>> On a strict reading of the above, you can't rely on a /tmp file
>> existing "between invocations of the program", in other words when
>> a file isn't actively held open by a process. This would break many
>> many shell scripts I've read and written over the years :)
>
> I don't see this. When you run a shell script, there is a shell
> process running the entire time with your UID. Or did I say it wrong?
> I didn't mean the file existed but was closed. I mean a file existed
> with a given UID and no process existed in the system anywhere which
> was using that UID. Although if you run setuid programs from a script
> ... I hadn't thought of that. This is why I'm asking. To avoid a
> stupid mistake due to my ignorance.

Sorry, I'm talking about the Linux FHS doc, not your deletion scheme.

>> Before doing that, however, I'd question why the /tmp directory is
>> so space-constrained. If software isn't cleaning up its own stuff,
>> fix it!
>
> Not always within my power. If a user refuses due to lack of time, I
> can't do anything other than complain and get told to shut up. We are
> always short of space. Space costs money. We have no money.

It sounds more like a cultural problem than a technical one at this
point. You're bean-counting on un-quota'd disk space, but have no
leverage to get problem apps addressed?

I see on the mvs-oe thread you've already discussed per-user /tmp areas,
which was going to be my next point :)


Cheers,
Phil



Re: FBA/SCSI vs ECKD zLinux on VM

2011-03-11 Thread Raymond Higgs
Careful with those numbers.  Some of those tests were on some pretty old
hardware.  There have been z/VM SCSI performance improvements, qioassist,
and two rounds of FCP firmware improvements in the meantime.  More
up-to-date info is probably out there; you'll just have to click through
some of the links.

Also, this might help:

http://public.dhe.ibm.com/common/ssi/ecm/en/zsw03129usen/ZSW03129USEN.PDF

Ray Higgs
System z FCP Development
Bld. 706, B24
2455 South Road
Poughkeepsie, NY 12601
(845) 435-8666,  T/L 295-8666
rayhi...@us.ibm.com

Christian Paro wrote on 03/11/2011 01:00 PM:

> A full set of benchmarks for different disk technologies with z/VM:
>
> http://www.vm.ibm.com/perf/reports/zvm/html/520dasd.html
>
> On Fri, Mar 11, 2011 at 12:57 PM, Donald Russell wrote:
>
> > I currently have a dozen or so RHEL 5.6 zLinux running on multiple VM 6.1
> > (well, 5, but 6.1 RSN), on z10 processors.
> >
> > The largest (disk space) is about 3 TB and is currently FBA/SCSI.
> >
> > We're thinking of changing this to ECKD to take advantage of the SAP to do
> > the real IO, instead of IO being handled within zLinux itself.
> >
> > (We have other zLinux systems using ECKD.)
> >
> > On one hand using ECKD will get us some CPU cycles back due to more work
> > being done by the SAP, but just recently I heard that IO can be faster
> > (higher throughput) with FBA/SCSI.
> >
> > I'm assuming there's no clear answer as to which is best, because like so
> > many performance tuning things, the answer is always "it depends".
> >
> > But, what are other people doing? My experience so far seems to be that
> > the choice of SCSI or ECKD depends on the background of the people making
> > the decision: people with an s390 background are familiar with ECKD;
> > people coming from a unix/linux background are familiar with SCSI.
> >
> > Thank you
> >



Re: FBA/SCSI vs ECKD zLinux on VM

2011-03-11 Thread Craig Collins
I'd recommend you also look at using emulated FBA devices while you are
considering alternatives.  They are defined as EDEVs under z/VM; keep them in
mind when you compare the options.  Know that the largest such device is
around 300 GB; I've forgotten the exact size.  It's an option to be aware of
for a number of reasons, including multipathing and disaster recovery if you
need those, without the requirement that the disk be defined as CKD.  As
already stated by Christian Paro, weigh the performance aspects of all of the
options against the requirements of your workloads.

We currently use EDEV FBA devices for boot/root/swap and Linux-server-managed
FBA/SCSI for data LUNs, but continue to think about moving it all to EDEV FBA
definitions.
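
For the Linux-managed side, attaching a data LUN through zfcp on RHEL 5 looks
roughly like the lines below (the FCP device number, WWPN and LUN are invented
for illustration; newer kernels discover the ports automatically):

echo 1 > /sys/bus/ccw/devices/0.0.b200/online
echo 0x500507630303c562 > /sys/bus/ccw/devices/0.0.b200/port_add
echo 0x4012401500000000 > /sys/bus/ccw/devices/0.0.b200/0x500507630303c562/unit_add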

Craig Collins
State of WI

On Fri, Mar 11, 2011 at 12:00 PM, Christian Paro wrote:

> A full set of benchmarks for different disk technologies with z/VM:
>
> http://www.vm.ibm.com/perf/reports/zvm/html/520dasd.html
>
> On Fri, Mar 11, 2011 at 12:57 PM, Donald Russell wrote:
>
> > I currently have a dozen or so RHEL 5.6 zLinux running on multiple VM 6.1
> > (well, 5, but 6.1 RSN), on z10 processors.
> >
> > The largest (disk space) is about 3 TB and is currently FBA/SCSI.
> >
> > We're thinking of changing this to ECKD to take advantage of the SAP to
> > do the real IO, instead of IO being handled within zLinux itself.
> >
> > (We have other zLinux systems using ECKD.)
> >
> > On one hand using ECKD will get us some CPU cycles back due to more work
> > being done by the SAP, but just recently I heard that IO can be faster
> > (higher throughput) with FBA/SCSI.
> >
> > I'm assuming there's no clear answer as to which is best, because like so
> > many performance tuning things, the answer is always "it depends".
> >
> > But, what are other people doing? My experience so far seems to be that
> > the choice of SCSI or ECKD depends on the background of the people making
> > the decision: people with an s390 background are familiar with ECKD;
> > people coming from a unix/linux background are familiar with SCSI.
> >
> > Thank you
> >



Re: FBA/SCSI vs ECKD zLinux on VM

2011-03-11 Thread Christian Paro
A full set of benchmarks for different disk technologies with z/VM:

http://www.vm.ibm.com/perf/reports/zvm/html/520dasd.html

On Fri, Mar 11, 2011 at 12:57 PM, Donald Russell wrote:

> I currently have a dozen or so RHEL 5.6 zLinux running on multiple VM 6.1
> (well, 5, but 6.1 RSN), on z10 processors.
>
> The largest (disk space) is about 3TB and is currently FBA/SCSI
>
> We're thinking of changing this to ECKD to take advantage of the SAP to do
> the real IO, instead of IO being handled within zLinux itself.
>
> (We have other zLinux systems using ECKD.)
>
> On one hand using ECKD will get us some CPU cycles back due to more work
> being done by the SAP, but just recently I heard that IO can be faster
> (higher throughput) with FBA/SCSI.
>
> I'm assuming there's no clear answer as to which is best, because like so
> many performance tuning things, the answer is always "it depends".
>
> But, what are other people doing? My experience so far seems to be that the
> choice of SCSI or ECKD depends on the background of the people making the
> decision: people with an s390 background are familiar with ECKD; people
> coming from a unix/linux background are familiar with SCSI.
>
> Thank you
>



FBA/SCSI vs ECKD zLinux on VM

2011-03-11 Thread Donald Russell
I currently have a dozen or so RHEL 5.6 zLinux running on multiple VM 6.1
(well, 5, but 6.1 RSN), on z10 processors.

The largest (disk space) is about 3 TB and is currently FBA/SCSI.

We're thinking of changing this to ECKD to take advantage of the SAP to do
the real IO, instead of IO being handled within zLinux itself.

(We have other zLinux systems using ECKD.)

On one hand using ECKD will get us some CPU cycles back due to more work
being done by the SAP, but just recently I heard that IO can be faster
(higher throughput) with FBA/SCSI.

I'm assuming there's no clear answer as to which is best, because like so
many performance tuning things, the answer is always "it depends".

But, what are other people doing? My experience so far seems to be that the
choice of SCSI or ECKD depends on the background of the people making the
decision: people with an s390 background are familiar with ECKD; people
coming from a unix/linux background are familiar with SCSI.

Thank you



Re: cleaning up /tmp

2011-03-11 Thread Edmund R. MacKenty
On Friday, March 11, 2011 10:15:49 am Richard Troth wrote:
> Mack said:
> > You might also note that according to the FHS, /tmp is only supposed to
> > be used by system processes.  User-level processes are supposed to use
> > /var/tmp. But of course, many programs violate that.  Still, you might
> > want to be cleaning up both directories.
>
> Yes ... keep an eye on /var/tmp also.
>
> I respect Ed, but I don't get this from my read of the FHS.  In my
> experience, it's the reverse:  users typically are aware of /tmp and
> use it and expect it to be available (without per-ID constraints as
> suggested in the MVS-OE thread), while /var/tmp may actually be better
> controlled (and less subject to clutter) and is lesser known to lay
> users.  My read of this part of the FHS fits.  They recommend that
> /var/tmp cleanup be less frequent than /tmp cleanup.  (Content in
> /var/tmp is explicitly expected to persist across reboots.)

Well, that was from memory, so I probably did get it wrong.  I've always
viewed /var/tmp as the place where you can mount a big filesystem for users to
play in, because /tmp may well be on the root filesystem and you don't want
that to fill up.  Of course, Rick is right about users: they often write to
/tmp anyway.  So I tend to also mount a separate filesystem on /tmp.

Personally, when I write a program or script that needs a temporary file, I
put it in /var/tmp.  When I want to temporarily save a file as a user, I put
it in $HOME/tmp.  That way I'm responsible for cleaning it up and it comes out
of my quota.  I'll bet no one else does that. :-)
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: cleaning up /tmp

2011-03-11 Thread McKown, John
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On 
> Behalf Of Philip Rowlands
> Sent: Friday, March 11, 2011 8:52 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: cleaning up /tmp
> 
> On 11/03/2011 14:23, McKown, John wrote:
> > There's a discussion going on over on the MVS-OE forum (which I
> > started) about the /tmp subdirectory. It's gone away from my original
> > towards how to keep it clean. So I thought I'd ask the UNIX wizards
> > over here what the "industry standard" is.
> 
> I don't speak for "industry", but here are some Linux standards:
> http://www.pathname.com/fhs/pub/fhs-2.3.html#TMPTEMPORARYFILES
> http://www.pathname.com/fhs/pub/fhs-2.3.html#VARTMPTEMPORARYFILESPRESERVEDBETWEE
> 
> > One thing mentioned by a person boiled down to "delete all the files
> > in /tmp which belong to a specific user when the last process which is
> > running with that UID terminates" (rephrased by me). This got me to
> > thinking. Is there any need for a file in /tmp to exist when there is
> > no process running by a given user?
> 
> On a strict reading of the above, you can't rely on a /tmp file existing
> "between invocations of the program", in other words when a file isn't
> actively held open by a process. This would break many many shell
> scripts I've read and written over the years :)

I don't see this. When you run a shell script, there is a shell process running 
the entire time with your UID. Or did I say it wrong? I didn't mean the file 
existed but was closed. I mean a file existed with a given UID and no process 
existed in the system anywhere which was using that UID. Although if you run 
setuid programs from a script ... I hadn't thought of that. This is why I'm 
asking. To avoid a stupid mistake due to my ignorance.

> 
> The typical way to clear up /tmp is with the tmpwatch utility, fired
> from cron, which selects files to delete based on their last access
> timestamp. Some distros go further and clean out /tmp completely on
> every boot.
> 
> > find /tmp -type f -exec ls -ln {} \; |\
> > awk '{print $3;}' |\
> > sort -u |\
> > while read XUID; do
> > echo Processing UID: $XUID;
> > ps -u $XUID -U $XUID >/dev/null || find /tmp -type f -uid $XUID -exec rm -f {} \;
> > done
> 
> I can see this approach yielding a lot of false negatives; i.e. leaving
> files in place because UID has some unrelated process running.

Very true. "Better safe than sorry." in this case. But it will leave a lot of 
garbage.

> 
> If you're desperate to have a tidy /tmp, a frequent call to tmpwatch
> along these lines might work:
> 
> tmpwatch --atime --all --fuser 6 /tmp
> 
> This 6-hour deadline is a lot more severe than Red Hat's
> default of 10 days.
> 
> Before doing that, however, I'd question why the /tmp directory is so
> space-constrained. If software isn't cleaning up its own 
> stuff, fix it!

Not always within my power. If a user refuses due to lack of time, I can't do 
anything other than complain and get told to shut up. We are always short of 
space. Space costs money. We have no money. We need the space for long-term 
data such as databases. Also, my system is not zLinux, but z/OS. I was asking 
here because this is where the UNIX wizards hang out. And space management on 
z/OS is a whole 'nother beastie, because management on z/OS is more 
parsimonious. We are used to extremely tight control. Having /tmp overallocated 
by 1000% for the occasional need is viewed with horror. I can't grow and shrink 
filesystems automatically on-the-fly. It's a weird management thing about DASD 
space management on z/OS. They don't apply it to our Linux/Intel or Windows 
environments. They're used to "slop" over there. I'm held to a higher standard. 

> One further trick is to unlink a /tmp file while it's open, which
> guarantees cleanup as soon as the process ends.

I like this, but it is the responsibility of the software. Which I can mention, 
but not enforce.

> 
> 
> Cheers,
> Phil

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * 
john.mck...@healthmarkets.com * www.HealthMarkets.com


Re: cleaning up /tmp

2011-03-11 Thread Richard Troth
Mack said:
> You might also note that according to the FHS, /tmp is only supposed to be
> used by system processes.  User-level processes are supposed to use /var/tmp.
> But of course, many programs violate that.  Still, you might want to be
> cleaning up both directories.

Yes ... keep an eye on /var/tmp also.

I respect Ed, but I don't get this from my read of the FHS.  In my
experience, it's the reverse:  users typically are aware of /tmp and
use it and expect it to be available (without per-ID constraints as
suggested in the MVS-OE thread), while /var/tmp may actually be better
controlled (and less subject to clutter) and is lesser known to lay
users.  My read of this part of the FHS fits.  They recommend that
/var/tmp cleanup be less frequent than /tmp cleanup.  (Content in
/var/tmp is explicitly expected to persist across reboots.)

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Fri, Mar 11, 2011 at 10:01, Edmund R. MacKenty wrote:
> On Friday, March 11, 2011 09:43:47 am Alan Cox wrote:
>> > "industry standard" is. One thing mentioned by a person boiled down to
>> > "delete all the files in /tmp which belong to a specific user when the
>> > last process which is running with that UID terminates" (rephrased by
>> > me). This got me
> ...
>> The usual approach is just to bin stuff that is a few hours/days/weeks
>> old. I guess it depends what storage costs you. On a PC its what - 10
>> cents a gigabyte - so there is no real hurry.
>
> I agree with Alan: delete things older than a day.  That's how I've seen it
> done for many years.  The only problem with that would be long-running
> programs that write a /tmp file early on and then read from it periodically
> after that.
>
> You might also note that according to the FHS, /tmp is only supposed to be
> used by system processes.  User-level processes are supposed to use /var/tmp.
> But of course, many programs violate that.  Still, you might want to be
> cleaning up both directories.
>
> A UID-based deletion scheme makes sense to me as a security thing if your goal
> is to make the system clean up all /tmp files for a user after they log out,
> but the general rule as proposed may not work well for system UIDs, such as
> lp, which don't really have the concept of a "session" after which cleanup
> should occur.  If you're going with a UID-based scheme, I'd limit it to UIDs
> greater than or equal to UID_MIN, as defined in /etc/login.defs.
>        - MacK.
> -
> Edmund R. MacKenty
> Software Architect
> Rocket Software
> 275 Grove Street  -  Newton, MA 02466-2272  -  USA
> Tel: +1.617.614.4321
> Email: m...@rs.com
> Web: www.rocketsoftware.com
>



Re: cleaning up /tmp

2011-03-11 Thread Richard Troth
Many Linux installations use "tmpfs" for /tmp.  Personally, I do that
as a rule.  (All rules are subject to exception, and I do that too.)

The advantage of tmpfs is that it magically cleans up every time you
reboot.  You can get the same effect from explicit deletion of /tmp
contents when the system comes up.  That bit me a year ago ... I think
it was Kubuntu.  Irritating!  I knew the box was not using tmpfs for
/tmp, so I expected the content to remain.  But like your users, after
learning one time I stopped leaving clutter in /tmp on that system.
(I have been following the MVS-OE thread too.)  Rebooting is the most
common catastrophic event that happens to a computer system on a
regular basis.  It is therefore the most natural choice for /tmp
cleanup trigger.
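
Trying it is a one-liner; for example (the size here is an arbitrary choice):

mount -t tmpfs -o size=512m tmpfs /tmp

or the equivalent /etc/fstab entry:

tmpfs   /tmp   tmpfs   defaults,size=512m   0 0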

About your proposed 'find' pipeline, there are A LOT of reasons why
people use /tmp.  Just because the owner of a file is not presently
logged on does not mean either that they are ignorant (of the intent
of /tmp) or forgetful (that they left something there).  /tmp is
commonly used as a staging area.  In any case, I would not do per-UID
selective removal (unless that user has been deleted).

Some people go with file age selection to clean up /tmp, but don't use
mod time for that.  (Some of us are insistent on retaining mod times,
so the mod time DOES NOT have any bearing on when a file landed under
/tmp.)

Whatever means you employ, someone almost certainly WILL get bitten.
Your phone will ring.  But you need to stop their bad behavior.  You
will have to exercise "managerial courage", bite the bullet, pull the
trigger, get er done.

I have refrained from jumping in on the MVS-OE thread.  I recommend
that you be judicious and selective about the USS-specific methods you
employ.  Some of the features of USS are excellent and really helpful.
 But where they vary from the POSIX standard you may have
interoperability issues and you WILL have sysadmin education
requirements.  Same thing happens in Linux.  (To that point, for
example, I advise people to back off from BASHisms in their shell
scripts ... if they ever want to use said scripts on USS or OpenVM or
Solaris or ... whatever.)  In other words, whatever you do, try to use
common Unix tools if you can.

Looks like other responses are pouring in already.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Fri, Mar 11, 2011 at 09:23, McKown, John wrote:
> There's a discussion going on over on the MVS-OE forum (which I started) 
> about the /tmp subdirectory. It's gone away from my original towards how to 
> keep it clean. So I thought I'd ask the UNIX wizards over here what the 
> "industry standard" is. One thing mentioned by a person boiled down to 
> "delete all the files in /tmp which belong to a specific user when the last 
> process which is running with that UID terminates" (rephrased by me). This 
> got me to thinking. Is there any need for a file in /tmp to exist when there 
> is no process running by a given user? IOW, can some process be dependent on 
> a file in /tmp which is owned by a UID other than its own UID (and/or maybe 
> 0). Or rephrasing again. If I have a cron entry remove all the files in /tmp 
> which are owned by a given UID (not 0) when there are no processes running 
> with that UID, could this cause a problem? If you prefer an example, what if 
> I run the following script daily by root:
>
> find /tmp -type f -exec ls -ln {} \; |\
> awk '{print $3;}'|\
> sort -u|\
> while read XUID; do
> echo Processing UID: $XUID;
> ps -u $XUID -U $XUID >/dev/null || find /tmp -type f -uid $XUID -exec rm -f {} \;
> done
>
> Perhaps I should do an "lsof" to see if the file is "in use" before doing the 
> "rm" on it? And the script needs to be made more efficient. I don't like 
> doing two find commands.
>
> --
> John McKown
> Systems Engineer IV
> IT
>
> Administrative Services Group
>
> HealthMarkets(r)
>
> 9151 Boulevard 26 * N. Richland Hills * TX 76010
> (817) 255-3225 phone *
> john.mck...@healthmarkets.com * www.HealthMarkets.com
>

Re: cleaning up /tmp

2011-03-11 Thread Edmund R. MacKenty
On Friday, March 11, 2011 09:43:47 am Alan Cox wrote:
> > "industry standard" is. One thing mentioned by a person boiled down to
> > "delete all the files in /tmp which belong to a specific user when the
> > last process which is running with that UID terminates" (rephrased by
> > me). This got me
...
> The usual approach is just to bin stuff that is a few hours/days/weeks
> old. I guess it depends what storage costs you. On a PC it's what - 10
> cents a gigabyte - so there is no real hurry.

I agree with Alan: delete things older than a day.  That's how I've seen it
done for many years.  The only problem with that would be long-running
programs that write a /tmp file early on and then read from it periodically
after that.

You might also note that according to the FHS, /tmp is only supposed to be
used by system processes.  User-level processes are supposed to use /var/tmp.
But of course, many programs violate that.  Still, you might want to be
cleaning up both directories.

A UID-based deletion scheme makes sense to me as a security thing if your goal
is to make the system clean up all /tmp files for a user after they log out,
but the general rule as proposed may not work well for system UIDs, such as
lp, which don't really have the concept of a "session" after which cleanup
should occur.  If you're going with a UID-based scheme, I'd limit it to UIDs
greater than or equal to UID_MIN, as defined in /etc/login.defs.
- MacK.
-
Edmund R. MacKenty
Software Architect
Rocket Software
275 Grove Street  -  Newton, MA 02466-2272  -  USA
Tel: +1.617.614.4321
Email: m...@rs.com
Web: www.rocketsoftware.com



Re: cleaning up /tmp

2011-03-11 Thread McKown, John
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On 
> Behalf Of Shane G
> Sent: Friday, March 11, 2011 8:44 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: cleaning up /tmp
> 
> I've been known to drop files in /tmp for later collection - 
> by myself or others.
> 
> Have you considered skulker?

Yes, but how to know "how long" to keep the files in /tmp? A day, week, month, 
quarter, longer? I may be unduly influenced by my z/OS background of DASD 
management. In that arena, if a storage area is over provisioned, I get hit up 
as to why I am "wasting" space. Curious, to me, that space assigned to a 
function but unused is "wasted" whereas the same space assigned to the 
"available to be assigned" pool (but not usable until it is so assigned) is not 
considered "wasted".

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * 
john.mck...@healthmarkets.com * www.HealthMarkets.com



Re: cleaning up /tmp

2011-03-11 Thread Shane G
Just to clarify, this was based on the OE reference - i.e. UNIX System
Services running under z/OS rather than zLinux.

Shane ...

On Sat, Mar 12th, 2011 at 1:44 AM, I wrote:

> Have you considered skulker?



Re: cleaning up /tmp

2011-03-11 Thread Philip Rowlands

On 11/03/2011 14:23, McKown, John wrote:

> There's a discussion going on over on the MVS-OE forum (which I
> started) about the /tmp subdirectory. It's gone away from my original
> towards how to keep it clean. So I thought I'd ask the UNIX wizards
> over here what the "industry standard" is.


I don't speak for "industry", but here are some Linux standards:
http://www.pathname.com/fhs/pub/fhs-2.3.html#TMPTEMPORARYFILES
http://www.pathname.com/fhs/pub/fhs-2.3.html#VARTMPTEMPORARYFILESPRESERVEDBETWEE


> One thing mentioned by a
> person boiled down to "delete all the files in /tmp which belong to a
> specific user when the last process which is running with that UID
> terminates" (rephrased by me). This got me to thinking. Is there any
> need for a file in /tmp to exist when there is no process running by
> a given user?


On a strict reading of the above, you can't rely on a /tmp file existing
"between invocations of the program", in other words when a file isn't
actively held open by a process. This would break many many shell
scripts I've read and written over the years :)

The typical way to clear up /tmp is with the tmpwatch utility, fired
from cron, which selects files to delete based on their last access
timestamp. Some distros go further and clean out /tmp completely on
every boot.


> find /tmp -type f -exec ls -ln {} \; |\
> awk '{print $3;}' |\
> sort -u |\
> while read XUID; do
> echo Processing UID: $XUID;
> ps -u $XUID -U $XUID >/dev/null || find /tmp -type f -uid $XUID -exec rm -f {} \;
> done


I can see this approach yielding a lot of false negatives; i.e. leaving
files in place because UID has some unrelated process running.

If you're desperate to have a tidy /tmp, a frequent call to tmpwatch
along these lines might work:

tmpwatch --atime --all --fuser 6 /tmp

This 6-hour deadline is a lot more severe than Red Hat's default of 10 days.

Before doing that, however, I'd question why the /tmp directory is so
space-constrained. If software isn't cleaning up its own stuff, fix it!

One further trick is to unlink a /tmp file while it's open, which
guarantees cleanup as soon as the process ends.
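
In shell terms that trick looks something like this (illustrative only; the
filename and descriptor number are arbitrary choices):

scratch=$(mktemp /tmp/myjob.XXXXXX)
exec 3<>"$scratch"        # keep a read/write descriptor open
rm -f "$scratch"          # the name is gone; the data lives until fd 3 closes
echo "work in progress" >&3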


Cheers,
Phil



Re: cleaning up /tmp

2011-03-11 Thread Shane G
I've been known to drop files in /tmp for later collection - by myself or 
others.

Have you considered skulker?



Re: cleaning up /tmp

2011-03-11 Thread Alan Cox
> "industry standard" is. One thing mentioned by a person boiled down to 
> "delete all the files in /tmp which belong > to a specific user when the last 
> process which is running with that UID terminates" (rephrased by me). This 
> got me

That one I would consider brave. There are cases where things exist which
are temporary, user-owned but actually being used by non-user processes
(eg spoolers).

The usual approach is just to bin stuff that is a few hours/days/weeks
old. I guess it depends what storage costs you. On a PC it's what - 10
cents a gigabyte - so there is no real hurry.

It's also possible to play games with namespaces and have things like per
user private /tmp areas, which some secure systems setups like to do.
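
pam_namespace is one way to do that: an /etc/security/namespace.conf line
roughly like

/tmp    /tmp-inst/    user    root,adm

plus "session required pam_namespace.so" in the relevant PAM service file
gives each user a polyinstantiated private /tmp. Treat the fields as a
sketch and check the namespace.conf man page before relying on them.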

Alan



cleaning up /tmp

2011-03-11 Thread McKown, John
There's a discussion going on over on the MVS-OE forum (which I started) about 
the /tmp subdirectory. It's gone away from my original towards how to keep it 
clean. So I thought I'd ask the UNIX wizards over here what the "industry 
standard" is. One thing mentioned by a person boiled down to "delete all the 
files in /tmp which belong to a specific user when the last process which is 
running with that UID terminates" (rephrased by me). This got me to thinking. 
Is there any need for a file in /tmp to exist when there is no process running 
by a given user? IOW, can some process be dependent on a file in /tmp which is 
owned by a UID other than its own UID (and/or maybe 0)? Or, rephrasing again: if 
I have a cron entry that removes all the files in /tmp which are owned by a 
given UID (not 0) when there are no processes running with that UID, could this 
cause a problem? If you prefer an example, what if I run the following script 
daily as root:
 
find /tmp -type f -exec ls -ln {} \; |\
awk '{print $3;}'|\
sort -u|\
while read XUID; do
echo Processing UID: $XUID;
ps -u $XUID -U $XUID >/dev/null || find /tmp -type f -uid $XUID -exec rm -f {} \;
done

Perhaps I should do an "lsof" to see if the file is "in use" before doing the 
"rm" on it? And the script needs to be made more efficient. I don't like doing 
two find commands.
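
Something like this single-pass version is what I have in mind (untested; it
keeps the ps check, folds in the lsof test per file, and assumes GNU find for
the -printf option):

find /tmp -type f -printf '%U %p\n' |
while read -r XUID XFILE; do
    ps -u "$XUID" -U "$XUID" >/dev/null && continue   # owner still has processes running
    lsof -- "$XFILE" >/dev/null 2>&1 && continue      # file is currently open by someone
    rm -f -- "$XFILE"
done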

-- 
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * 
john.mck...@healthmarkets.com * www.HealthMarkets.com
