Re: SFS Catalog Storage Pool

2011-07-07 Thread Schuh, Richard
I am not just anyone. :-)

I have fought my battles. Let someone who has skin in the game take up the 
banner.


Regards, 
Richard Schuh 

 

> -----Original Message-----
> From: The IBM z/VM Operating System 
> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Alan Altmark
> Sent: Thursday, July 07, 2011 10:19 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: SFS Catalog Storage Pool
> 
> On Thursday, 07/07/2011 at 12:35 EDT, "Schuh, Richard" wrote:
> > Something ought to be done about the instructions regarding MAXUSERS.
> > A few years ago, I had a small system that only had 64 users in the
> > directory. It had performance problems (a complete freeze of any user
> > that tried to use SFS) with MU of 100, 200, or 300. Upping it to 1000
> > fixed the problem.
> >
> > And no, Alan or Mike, I will not open an incident or call the Support
> > Center (Unless I call to say goodbye) :-)
> 
> Anyone can send e-mail to mhvr...@us.ibm.com and comment on the pubs.
> 
> Alan Altmark
> 
> Senior Managing z/VM and Linux Consultant
> IBM System Lab Services and Training
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile: 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott
> 

Re: SFS Catalog Storage Pool

2011-07-07 Thread Alan Altmark
On Thursday, 07/07/2011 at 12:35 EDT, "Schuh, Richard" wrote:
> Something ought to be done about the instructions regarding MAXUSERS.  A
> few years ago, I had a small system that only had 64 users in the
> directory. It had performance problems (a complete freeze of any user that
> tried to use SFS) with MU of 100, 200, or 300. Upping it to 1000 fixed the
> problem.
>
> And no, Alan or Mike, I will not open an incident or call the Support
> Center (Unless I call to say goodbye) :-)

Anyone can send e-mail to mhvr...@us.ibm.com and comment on the pubs.

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: SFS Catalog Storage Pool

2011-07-07 Thread Schuh, Richard
Something ought to be done about the instructions regarding MAXUSERS. A few 
years ago, I had a small system that only had 64 users in the directory. It had 
performance problems (a complete freeze of any user that tried to use SFS) with 
MU of 100, 200, or 300. Upping it to 1000 fixed the problem.

And no, Alan or Mike, I will not open an incident or call the Support Center 
(Unless I call to say goodbye) :-)


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of Kris Buelens
Sent: Thursday, July 07, 2011 2:52 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS Catalog Storage Pool

The physical space for the catalog is large enough, but SFS also has a notion 
of "logical space" (SFS is derived from SQL/DS; remember DBEXTENTs and 
DBSPACEs).
You need to run FILESERV REGENERATE and increase MAXUSERS and/or MAXDISKS; 
see chapter 11 in the SFS administration manual.

2011/7/7 clifford jackson <cliffordjackson...@msn.com>
I have a large SFS. While monitoring my Catalog Space Information, I observed 
that my percent of used index blocks was at 97%. I performed a control data 
backup and a FILESERV REORG. This in turn gave me 94% used index blocks. My 
catalog storage pool 1 has the following specifications:

MDISK   STORAGE GROUP   PERCENT Utilized
304 1   100%
310 1 20%
312 1 20%


What do I need to do to get the percent of used index blocks down and keep it 
at something manageable?



Cliff Jackson
Senior Systems Programmer



--
Kris Buelens,
IBM Belgium, VM customer support


Re: SFS Catalog Storage Pool

2011-07-07 Thread clifford jackson

Thanks, Kris...

Date: Thu, 7 Jul 2011 11:51:57 +0200
From: kris.buel...@gmail.com
Subject: Re: SFS Catalog Storage Pool
To: IBMVM@LISTSERV.UARK.EDU

The physical space for the catalog is large enough, but SFS also has a notion 
of "logical space" (SFS is derived from SQL/DS; remember DBEXTENTs and 
DBSPACEs).
You need to run FILESERV REGENERATE and increase MAXUSERS and/or MAXDISKS; 
see chapter 11 in the SFS administration manual.



2011/7/7 clifford jackson

I have a large SFS. While monitoring my Catalog Space Information, I observed 
that my percent of used index blocks was at 97%. I performed a control data 
backup and a FILESERV REORG. This in turn gave me 94% used index blocks. My 
catalog storage pool 1 has the following specifications:


MDISK   STORAGE GROUP   PERCENT Utilized
304 1   100%
310 1 20%
312 1 20%



What do I need to do to get the percent of used index blocks down and keep it 
at something manageable?


Cliff Jackson
Senior Systems Programmer

  


-- 
Kris Buelens,
IBM Belgium, VM customer support
  

Re: SFS Catalog Storage Pool

2011-07-07 Thread Kris Buelens
The physical space for the catalog is large enough, but SFS also has a
notion of "logical space" (SFS is derived from SQL/DS; remember DBEXTENTs
and DBSPACEs).
You need to run FILESERV *RE*GENERATE and increase MAXUSERS and/or MAXDISKS;
see chapter 11 in the SFS administration manual.
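
A hedged sketch of what that involves, with all names hypothetical: assume
a file pool FPOOL1 whose definition file is FPOOL1 POOLDEF on the server's
191 disk.  With the server stopped and logged on, raise the logical limits
in the POOLDEF file and regenerate:

    * FPOOL1 POOLDEF (excerpt) - only the logical limits change
    MAXUSERS=1000
    MAXDISKS=500
    DDNAME=CONTROL  VDEV=301
    DDNAME=MDK00001 VDEV=304 GROUP=1

    FILESERV REGENERATE

The DDNAME (physical space) entries stay as they are; REGENERATE rereads
the POOLDEF file and enlarges the catalog's logical capacity.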

2011/7/7 clifford jackson 

>  I have a large SFS. While monitoring my Catalog Space Information, I
> observed that my percent of used index blocks was at 97%. I performed a
> control data backup and a FILESERV REORG. This in turn gave me 94% used
> index blocks. My catalog storage pool 1 has the following specifications:
>
> MDISK   STORAGE GROUP   PERCENT Utilized
> 304 1   100%
> 310 1 20%
> 312 1 20%
>
>
> What do I need to do to get the percent of used index blocks down and keep
> it at something manageable?
>
>
>
> Cliff Jackson
> Senior Systems Programmer
>



-- 
Kris Buelens,
IBM Belgium, VM customer support


SFS Catalog Storage Pool

2011-07-07 Thread clifford jackson

I have a large SFS. While monitoring my Catalog Space Information, I observed 
that my percent of used index blocks was at 97%. I performed a control data 
backup and a FILESERV REORG. This in turn gave me 94% used index blocks. My 
catalog storage pool 1 has the following specifications:
MDISK   STORAGE GROUP   PERCENT Utilized
304 1   100%
310 1 20%
312 1 20%

What do I need to do to get the percent of used index blocks down and keep it 
at something manageable?


Cliff Jackson
Senior Systems Programmer

Re: SFS Possible problem

2011-04-28 Thread Kris Buelens
You have to redesign.  The 16MB is indeed a restriction, and when you have
really many files, an ACCESS of the dirid takes ages.

You could use a Directory Control directory (DIRC) and place it in a
dataspace; the 16MB limit still applies, but you get the full 16MB (no space
required for other resident stuff, like parts of the CMS nucleus).  But a
DIRC can be accessed R/W by only one virtual machine at a time.  If you
place the DIRC in a dataspace, and if you assure that one user always has it
accessed, an ACCESS command is very fast, and data and FST space are shared
amongst all users.
But your users won't see new files until they reACCESS the DIRC, and if you
have many updates throughout the day, it would cause many copies of the
dataspace.  For my former customer we had a structure like this:
  - a normal, small DIR accessed as e.g. F, where updates during the day
were placed
  - a big DIRC, with a dataspace, accessed behind F.
Each night, the files from the small DIR were moved into the DIRC.  Then the
service machine accessed it as R/O, so we always had at least this one user
accessing the DIRC (and keeping the dataspace alive).  Then users of the
DIRC were recycled.  We had some other service machine checking the number
of dataspaces for that DIRC (using QUERY ACCESSORS dirid (DATASPACE and
checking the number of levels; if more than x, we sent out an email).

A few months ago I redesigned a bad setup for another customer: they had two
directories with batch job listings, 'this month' and 'previous month', with
a monthly cleanup.  But near the end of the month ACCESS became slow,
sometimes impossible due to lack of storage.  Now we have a dirid per day.
Problems gone, and much faster.  To make the transition invisible, the use
of temporary ALIASes can help (the code storing the files was changed at
time x; the code reading the files still found its stuff via ALIASes, and I
had some extra days to fix the read code).
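
A hedged sketch of that nightly move as a CMS exec; the names are
hypothetical (TEST:OWNER.DAY is the small DIR at F, TEST:OWNER.BIG the
DIRC):

    /* REXX - nightly roll of the day's files into the DIRC (sketch) */
    'ACCESS TEST:OWNER.DAY F'            /* the small daily DIR      */
    'ACCESS TEST:OWNER.BIG G (FORCERW'   /* the DIRC, accessed R/W   */
    'COPYFILE * * F = = G (OLDDATE'      /* copy the day's files     */
    if rc = 0 then 'ERASE * * F'         /* then empty the small DIR */
    'ACCESS TEST:OWNER.BIG G'            /* drop back to R/O so the  */
                                         /* dataspace stays in use   */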


2011/4/28 clifford jackson 

>  Hello All, I have an SFS database with a hierarchical directory structure
> of File Control directories. My tree structure currently has one top-level
> directory, which is used as a report repository. This directory has grown
> very large. I have an application that accesses this directory for
> customers (through worker machines) to view reports. I've run into a
> problem where, upon each request to the SFS database for a report, it
> seems the complete top-level directory is being loaded into the
> requester's virtual machine storage, and it seems there is a 16MB line
> restriction.
>
>
>
> Is there a solution that I can implement without interfering with my
> production environment too much, or can I possibly redesign the SFS
> database structure and use Directory Control and possibly Data Spaces?
>
>
>
> Cliff Jackson
>
> Senior Systems Programmer
>



-- 
Kris Buelens,
IBM Belgium, VM customer support


SFS Possible problem

2011-04-28 Thread clifford jackson

Hello All, I have an SFS database with a hierarchical directory structure of 
File Control directories. My tree structure currently has one top-level 
directory, which is used as a report repository. This directory has grown very 
large. I have an application that accesses this directory for customers 
(through worker machines) to view reports. I've run into a problem where, upon 
each request to the SFS database for a report, it seems the complete top-level 
directory is being loaded into the requester's virtual machine storage, and it 
seems there is a 16MB line restriction.
 
Is there a solution that I can implement without interfering with my production 
environment too much, or can I possibly redesign the SFS database structure and 
use Directory Control and possibly Data Spaces?
 
Cliff Jackson
Senior Systems Programmer 

Re: SFS problem

2011-04-20 Thread John Hall
Thank you, yes, that is correct... Apologies for mixing my terms.  I should
have been saying Alternate ID.

John

On Wed, Apr 20, 2011 at 12:33 PM, Schuh, Richard  wrote:

>  Actually, SECUSER still exists and is something quite
> different. SECUSER defines a secondary user for receiving console messages
> and entering commands and responses. The entries are done as though they
> came from, not on behalf of, the user.
>
> Regards,
> Richard Schuh
>
>
>
> From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
> Behalf Of John Hall
> Sent: Wednesday, April 20, 2011 7:04 AM
>
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: SFS problem
>
> If I recall correctly, DIAG D4 is the one that manipulates the secondary
> ID, aka "Alternate ID".  (SECUSER is an old term).  Diag 88 provides the
> ability to link minidisks and perform userid/password validations (if the
> issuer has appropriate authority).
>
> An interesting usage note for this is that DIAG D4 does not change the
> userid associated with any existing (already active) SFS connections.  This
> is because it is a CP function and manipulates the VMDBK.  Once set, future
> connections (via APPC/VM Connect) will utilize the Alternate ID. ... This is
> why severing all connections prior to setting the AltID will "fix" this type
> of problem, because CMS will (re) connect and use the AltID.
>
> If this is Nora's problem, an easy workaround would be to wrap the job with
> an exec that uses DMSGETWU and DMSPUSWU to set a new default work unit that
> contains the appropriate user, then run the job from the exec, then finally
> reset with DMSPOPWU. (If I'm remembering all of this correctly)
>
> John
>
> --
> John Hall
> Safe Software, Inc.
> 727-608-8799
> johnh...@safesoftware.com
>
>
>
> On Tue, Apr 19, 2011 at 11:48 AM, Schuh, Richard  wrote:
>
>>  Isn't that DIAG 88, instead of SECUSER?
>>
>>
>> Regards,
>> Richard Schuh
>>
>>
>>
>>
>> From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
>> Behalf Of John Hall
>> Sent: Tuesday, April 19, 2011 6:41 AM
>>
>> To: IBMVM@LISTSERV.UARK.EDU
>> Subject: Re: SFS problem
>>
>>  Nora,
>> Batch jobs normally run with the privileges of the "owner" of the job,
>> using the SECUSER facility in z/VM.  With SFS, this can lead to
>> unexpected results when a prior batch job leaves the worker with a
>> connection to the filepool under a different user's id.  If the job
>> ordering/selection of batch workers is somewhat random, you could see the
>> outcome that you're experiencing (sometimes it works, sometimes it fails).
>>
>>
>


Re: SFS problem

2011-04-20 Thread Schuh, Richard
Actually, SECUSER still exists and is something quite different. SECUSER 
defines a secondary user for receiving console messages and entering commands 
and responses. The entries are done as though they came from, not on behalf of, 
the user.
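
For illustration, a hedged example of the CP command involved (user IDs
hypothetical); with the privileged form, this makes OPER the secondary user
of WORK7, so OPER sees WORK7's console output and can enter commands as
though they came from WORK7:

    CP SET SECUSER WORK7 OPER

That is unrelated to the alternate ID that SFS picks up.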

Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of John Hall
Sent: Wednesday, April 20, 2011 7:04 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

If I recall correctly, DIAG D4 is the one that manipulates the secondary ID, 
aka "Alternate ID".  (SECUSER is an old term).  Diag 88 provides the ability to 
link minidisks and perform userid/password validations (if the issuer has 
appropriate authority).

An interesting usage note for this is that DIAG D4 does not change the userid 
associated with any existing (already active) SFS connections.  This is because 
it is a CP function and manipulates the VMDBK.  Once set, future connections 
(via APPC/VM Connect) will utilize the Alternate ID. ... This is why severing 
all connections prior to setting the AltID will "fix" this type of problem, 
because CMS will (re) connect and use the AltID.

If this is Nora's problem, an easy workaround would be to wrap the job with an 
exec that uses DMSGETWU and DMSPUSWU to set a new default work unit that 
contains the appropriate user, then run the job from the exec, then finally 
reset with DMSPOPWU. (If I'm remembering all of this correctly)

John

--
John Hall
Safe Software, Inc.
727-608-8799
johnh...@safesoftware.com



On Tue, Apr 19, 2011 at 11:48 AM, Schuh, Richard <rsc...@visa.com> wrote:
Isn't that DIAG 88, instead of SECUSER?


Regards,
Richard Schuh






From: The IBM z/VM Operating System 
[mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of 
John Hall
Sent: Tuesday, April 19, 2011 6:41 AM

To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

Nora,
Batch jobs normally run with the privileges of the "owner" of the job, using 
the SECUSER facility in z/VM.  With SFS, this can lead to unexpected results 
when a prior batch job leaves the worker with a connection to the filepool 
under a different user's id.  If the job ordering/selection of batch workers is 
somewhat random, you could see the outcome that you're experiencing (sometimes 
it works, sometimes it fails).




Re: SFS problem

2011-04-20 Thread Mike Walter
Nora,

Is there ever a case where the command (EXEC, or whatever) fails when it 
is NOT run in a VMBATCH worker?

If they never fail outside of VMBATCH workers, John probably has a good 
handle on the diagnosis.  You may have to read a little bit more in the 
VMBATCH documentation about how VMBATCH workers get initialized, end, and 
especially: how the post-job "cleanup" process works.  It might be as 
simple as changing the program to release the SFS directory before ending 
(and perhaps examining the connection and looping until it has been 
cleared).
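
A hedged sketch of that end-of-job cleanup, assuming the job had the SFS
directory accessed at filemode G (an assumption):

    /* REXX - end-of-job cleanup (sketch): drop the SFS access so   */
    /* the worker does not carry a stale connection into the next   */
    /* job.                                                         */
    'RELEASE G'
    if rc <> 0 then say 'RELEASE G failed, rc='rc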

BTW, the other common VMBATCH product is from CA, called VM:Batch.  Many 
moons ago there were other publicly available freeware VMBATCH 
solutions, too -- IIRC, from colleges.

Mike Walter
Aon Corporation
The opinions expressed herein are mine alone, not my employer's.




"Graves Nora E"  

Sent by: "The IBM z/VM Operating System" 
04/20/2011 09:17 AM
Please respond to
"The IBM z/VM Operating System" 



To
IBMVM@LISTSERV.UARK.EDU
cc

Subject
Re: SFS problem






I do not believe that this is the problem. I was giving you information on 
the failing job that I am most familiar with.  Other jobs that fail are 
submitted to VMBatch by the ID that also owns the SFS directories.
 
FYI, "VMBatch" is the IBM VM Batch Facility Version 2.2,  I was not aware 
until recently that there are other products also known as "VMBatch".  If 
this has caused confusion, I apologize.
 
Nora Graves
nora.e.gra...@irs.gov
Main IRS, Room 6531
(202) 622-6735 
Fax (202) 622-3123
SE:W:CAR:MP:D:KS:BRSI
 

From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On 
Behalf Of John Hall
Sent: Wednesday, April 20, 2011 10:04 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

If I recall correctly, DIAG D4 is the one that manipulates the secondary 
ID, aka "Alternate ID".  (SECUSER is an old term).  Diag 88 provides the 
ability to link minidisks and perform userid/password validations (if the 
issuer has appropriate authority).

An interesting usage note for this is that DIAG D4 does not change the 
userid associated with any existing (already active) SFS connections. This 
is because it is a CP function and manipulates the VMDBK.  Once set, 
future connections (via APPC/VM Connect) will utilize the Alternate ID. 
... This is why severing all connections prior to setting the AltID will 
"fix" this type of problem, because CMS will (re) connect and use the 
AltID.

If this is Nora's problem, an easy workaround would be to wrap the job with 
an exec that uses DMSGETWU and DMSPUSWU to set a new default work unit 
that contains the appropriate user, then run the job from the exec, then 
finally reset with DMSPOPWU. (If I'm remembering all of this correctly)

John

--
John Hall
Safe Software, Inc.
727-608-8799
johnh...@safesoftware.com



On Tue, Apr 19, 2011 at 11:48 AM, Schuh, Richard  wrote:
Isn't that DIAG 88, instead of SECUSER?
 
Regards, 
Richard Schuh 
 
 

From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On 
Behalf Of John Hall
Sent: Tuesday, April 19, 2011 6:41 AM 

To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

Nora,
Batch jobs normally run with the privileges of the "owner" of the job, 
using the SECUSER facility in z/VM.  With SFS, this can lead to unexpected 
results when a prior batch job leaves the worker with a connection to the 
filepool under a different user's id.  If the job ordering/selection of 
batch workers is somewhat random, you could see the outcome that you're 
experiencing (sometimes it works, sometimes it fails).








Re: SFS problem

2011-04-20 Thread Hughes, Jim
Interesting problem, Nora.

 

Nothing recorded in the SFS log either.

 

Please post your POOLDEF statements and your DMSPARMS for the filepool
having the problems.

 

Also, what is the virtual storage size defined for the SFS server?
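
A hedged sketch of one way to gather that, assuming a file pool named
FPOOL1 served by machine VMSERVU (both hypothetical); the POOLDEF and
DMSPARMS files normally live on the server's 191 disk:

    QUERY FILEPOOL OVERVIEW FPOOL1     server settings and usage
    CP LINK VMSERVU 191 291 RR
    ACCESS 291 Z
    TYPE FPOOL1 POOLDEF Z
    TYPE VMSERVU DMSPARMS Z

CP QUERY VIRTUAL STORAGE, issued on the server itself, shows its virtual
storage size.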

 

Regards,


Jim Hughes
Consulting Systems Programmer
Mainframe Technical Support Group
Department of Information Technology
State of New Hampshire
27 Hazen Drive
Concord, NH 03301
603-271-5586    Fax 603.271.1516




From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Graves Nora E
Sent: Wednesday, April 20, 2011 10:17 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

 

I do not believe that this is the problem. I was giving you information
on the failing job that I am most familiar with.  Other jobs that fail
are submitted to VMBatch by the ID that also owns the SFS directories.

 

FYI, "VMBatch" is the IBM VM Batch Facility Version 2.2,  I was not
aware until recently that there are other products also known as
"VMBatch".  If this has caused confusion, I apologize.

 

Nora Graves

nora.e.gra...@irs.gov

Main IRS, Room 6531

(202) 622-6735 

Fax (202) 622-3123

SE:W:CAR:MP:D:KS:BRSI

 

 



From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of John Hall
Sent: Wednesday, April 20, 2011 10:04 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

If I recall correctly, DIAG D4 is the one that manipulates the secondary
ID, aka "Alternate ID".  (SECUSER is an old term).  Diag 88 provides the
ability to link minidisks and perform userid/password validations (if
the issuer has appropriate authority).

An interesting usage note for this is that DIAG D4 does not change the
userid associated with any existing (already active) SFS connections.
This is because it is a CP function and manipulates the VMDBK.  Once
set, future connections (via APPC/VM Connect) will utilize the Alternate
ID. ... This is why severing all connections prior to setting the AltID
will "fix" this type of problem, because CMS will (re) connect and use
the AltID.

If this is Nora's problem, an easy workaround would be to wrap the job
with an exec that uses DMSGETWU and DMSPUSWU to set a new default work
unit that contains the appropriate user, then run the job from the exec,
then finally reset with DMSPOPWU. (If I'm remembering all of this
correctly)

John

--
John Hall
Safe Software, Inc.
727-608-8799
johnh...@safesoftware.com




On Tue, Apr 19, 2011 at 11:48 AM, Schuh, Richard 
wrote:

Isn't that DIAG 88, instead of SECUSER?

 

Regards, 
Richard Schuh 

 

 

 





From: The IBM z/VM Operating System
[mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of John Hall
Sent: Tuesday, April 19, 2011 6:41 AM 


To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

 

Nora,
Batch jobs normally run with the privileges of the "owner" of
the job, using the SECUSER facility in z/VM.  With SFS, this can lead to
unexpected results when a prior batch job leaves the worker with a
connection to the filepool under a different user's id.  If the job
ordering/selection of batch workers is somewhat random, you could see
the outcome that you're experiencing (sometimes it works, sometimes it
fails).

 



Re: SFS problem

2011-04-20 Thread Graves Nora E
I do not believe that this is the problem. I was giving you information
on the failing job that I am most familiar with.  Other jobs that fail
are submitted to VMBatch by the ID that also owns the SFS directories.
 
FYI, "VMBatch" is the IBM VM Batch Facility Version 2.2,  I was not
aware until recently that there are other products also known as
"VMBatch".  If this has caused confusion, I apologize.
 
Nora Graves
nora.e.gra...@irs.gov
Main IRS, Room 6531
(202) 622-6735 
Fax (202) 622-3123
SE:W:CAR:MP:D:KS:BRSI
 



From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of John Hall
Sent: Wednesday, April 20, 2011 10:04 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem


If I recall correctly, DIAG D4 is the one that manipulates the secondary
ID, aka "Alternate ID".  (SECUSER is an old term).  Diag 88 provides the
ability to link minidisks and perform userid/password validations (if
the issuer has appropriate authority).

An interesting usage note for this is that DIAG D4 does not change the
userid associated with any existing (already active) SFS connections.
This is because it is a CP function and manipulates the VMDBK.  Once
set, future connections (via APPC/VM Connect) will utilize the Alternate
ID. ... This is why severing all connections prior to setting the AltID
will "fix" this type of problem, because CMS will (re) connect and use
the AltID.

If this is Nora's problem, an easy workaround would be to wrap the job
with an exec that uses DMSGETWU and DMSPUSWU to set a new default work
unit that contains the appropriate user, then run the job from the exec,
then finally reset with DMSPOPWU. (If I'm remembering all of this
correctly)

John

--
John Hall
Safe Software, Inc.
727-608-8799
johnh...@safesoftware.com




On Tue, Apr 19, 2011 at 11:48 AM, Schuh, Richard 
wrote:


Isn't that DIAG 88, instead of SECUSER?
 

Regards, 
Richard Schuh 

 

 




From: The IBM z/VM Operating System
[mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of John Hall
Sent: Tuesday, April 19, 2011 6:41 AM 

To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem


Nora,
Batch jobs normally run with the privileges of the
"owner" of the job, using the SECUSER facility in z/VM.  With SFS, this
can lead to unexpected results when a prior batch job leaves the worker
with a connection to the filepool under a different user's id.  If the
job ordering/selection of batch workers is somewhat random, you could
see the outcome that you're experiencing (sometimes it works, sometimes
it fails).






Re: SFS problem

2011-04-20 Thread John Hall
If I recall correctly, DIAG D4 is the one that manipulates the secondary ID,
aka "Alternate ID".  (SECUSER is an old term).  Diag 88 provides the ability
to link minidisks and perform userid/password validations (if the issuer has
appropriate authority).

An interesting usage note for this is that DIAG D4 does not change the
userid associated with any existing (already active) SFS connections.  This
is because it is a CP function and manipulates the VMDBK.  Once set, future
connections (via APPC/VM Connect) will utilize the Alternate ID. ... This is
why severing all connections prior to setting the AltID will "fix" this type
of problem, because CMS will (re) connect and use the AltID.

If this is Nora's problem, an easy workaround would be to wrap the job with an
exec that uses DMSGETWU and DMSPUSWU to set a new default work unit that
contains the appropriate user, then run the job from the exec, then finally
reset with DMSPOPWU. (If I'm remembering all of this correctly)
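
A hedged sketch of that wrapper as a CMS exec; MYJOB EXEC is a hypothetical
job name, and the CSL parameter lists are from memory:

    /* REXX - run a job under a fresh default work unit (sketch)     */
    call csl 'DMSGETWU retc reas wuid'      /* get a new work unit id */
    if retc = 0 then,
      call csl 'DMSPUSWU retc reas wuid'    /* push it as the default */
    if retc <> 0 then exit retc
    'EXEC MYJOB'                            /* run the real job       */
    jobrc = rc
    call csl 'DMSPOPWU retc reas'           /* restore prior default  */
    exit jobrc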

John

--
John Hall
Safe Software, Inc.
727-608-8799
johnh...@safesoftware.com



On Tue, Apr 19, 2011 at 11:48 AM, Schuh, Richard  wrote:

>  Isn't that DIAG 88, instead of SECUSER?
>
>
> Regards,
> Richard Schuh
>
>
>
>
> From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
> Behalf Of John Hall
> Sent: Tuesday, April 19, 2011 6:41 AM
>
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: SFS problem
>
> Nora,
> Batch jobs normally run with the privileges of the "owner" of the job,
> using the SECUSER facility in z/VM.  With SFS, this can lead to unexpected
> results when a prior batch job leaves the worker with a connection to the
> filepool under a different user's id.  If the job ordering/selection of
> batch workers is somewhat random, you could see the outcome that you're
> experiencing (sometimes it works, sometimes it fails).
>
>


Re: SFS problem

2011-04-19 Thread Ethan Lanz
Just to exhaust the possibilities in this line, we have had occasions when a
job would fail for another reason, then be re-submitted by another userid
without the required authority.

On Tue, Apr 19, 2011 at 7:47 AM, Graves Nora E wrote:

>  Kris,
>
> The submitter is ABC, the directory owner is XYZ.  The submitter has
> WRITE/NEWWRITE authority to the directory and files.
>
>  Nora Graves
> nora.e.gra...@irs.gov
> Main IRS, Room 6531
> (202) 622-6735
> Fax (202) 622-3123
> SE:W:CAR:MP:D:KS:BRSI
>
>
> From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
> Behalf Of Kris Buelens
> Sent: Thursday, April 14, 2011 4:58 PM
>
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: SFS problem
>
> Note: if you use VMBATCH, the worker machine connects to SFS with the
> authority of the job submitter.
> You say "This user is the owner of most of the directories"  You mean: the
> submitter is userid ABC, the dirids are all named "fpoolid:ABC.something"?
>
>
> 2011/4/14 Graves Nora E 
>
>>  We are having an intermittent problem with SFS and I'm hoping someone
>> may have some ideas of what to pursue next.
>>
>> We have several batch jobs that run under VMBATCH overnight.  Sometimes
>> they are not able to create a file in a directory, even though most times it
>> is successful.  The only differences in the executions are the file names;
>> for many of these the File Type is the date.
>>
>> In the job I am most familiar with, these are the specifics.
>>
>> The job runs Monday-Saturday.  This year, it has failed on January 4,
>> January 12, February 9, March 18, March 25, and April 13.  It has run
>> successfully the other days.  Other than the QUERY statements below, it has
>> not changed.
>> The job runs in a work machine, WORK7.
>> The job is submitted by the User ID of the database owner.
>> The SFS directories are owned by a 3rd user.  Failures occur in many of
>> the subdirectories, not just one subdirectory owned by this user.  This user
>> is the owner of most of the directories containing the data files we create
>> in batch, so I don't think it's significant that it's the ID that has the
>> problem.  However, as far as I know, it is the only ID that does have the
>> problem.
>> This job uses VMLINK to acquire a write link to SFS directory.  This
>> always looks to be successful--no error is given.  (Other jobs use GETFMADDR
>> and ACCESS to acquire the write link to the directory.  This always
>> appears successful as well).
>> Once the file is ready to be copied from the Work Machine's 191 disk to
>> the SFS directory, the intermittent error appears.  The vast majority of the
>> time, the write is successful.  However, sometimes, the job gets this error
>> message:
>> DMSOPN1258E You are not authorized to write to file XX 20110413 Z1
>>
>> The file is not large--last night's file was only 12 blocks.
>>
>> At the suggestion of our systems programmer, I've put in a lot of query
>> statements.  I've issued QUERY LIMITS for the job submitter; it's only used
>> 84% of the allocation, with several thousand blocks available. The SFS
>> directory owner has only used 76% of its allocation, with several thousand
>> more blocks still available.  The filepool is not full.
>>
>> I've issued QUERY FILEPOOL CONFLICT.  There is no conflict.
>>
>> I've issued QUERY ACCESSED.  The directory shows that it is accessed R/W.
>>
>> When the write is unsuccessful, the program then loops through 5 tries of
>> releasing the access, reacquiring the access, and attempting to write the
>> file again.  This has never been successful.  I've issued both a COPYFILE
>> and a PIPE to try to write the file; these do not work once there has been a
>> failure.
>>
>> We've looked at the operator consoles to see if we can find any jobs
>> running at the same time.  We haven't found any that are accessing that
>> directory structure.
>>
>> There aren't any dumps to look at--it looks perfectly successful other
>> than the fact that it won't write the file.
>>
>> Does anyone have any suggestions of something to try next?
>>
>>
>>  Nora Graves
>> nora.e.gra...@irs.gov
>>
>>
>
>
>
> --
> Kris Buelens,
> IBM Belgium, VM customer support
>


Re: SFS problem

2011-04-19 Thread Schuh, Richard
Isn't that DIAG 88, instead of SECUSER?


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of John Hall
Sent: Tuesday, April 19, 2011 6:41 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

Nora,
Batch jobs normally run with the privileges of the "owner" of the job, using 
the SECUSER facility in z/VM.  With SFS, this can lead to unexpected results 
when a prior batch job leaves the worker with a connection to the filepool 
under a different user's id.  If the job ordering/selection of batch workers is 
somewhat random, you could see the outcome that you're experiencing (sometimes 
it works, sometimes it fails).



Re: SFS problem

2011-04-19 Thread John Hall
Nora,
Batch jobs normally run with the privileges of the "owner" of the job, using
the SECUSER facility in z/VM.  With SFS, this can lead to unexpected results
when a prior batch job leaves the worker with a connection to the filepool
under a different user's id.  If the job ordering/selection of batch workers
is somewhat random, you could see the outcome that you're experiencing
(sometimes it works, sometimes it fails).

For example:
If WORK7 first runs a job for "JOHN" and JOHN utilizes the same filepool,
WORK7 could be left with a connection to the filepool that appears to be
from the user "JOHN".
If your failing job runs next, it would likely end up accessing the filepool
as JOHN, not as the actual owner, which could cause the failure.

If on another night, WORK7 is recycled before your failing job runs, then it
would establish a connection to the file pool with the owner's ID (say
OWNER) and then would run with the access rights of OWNER, and you'd see
success.

If this is your problem, there are many ways to solve it.  The easiest would
be to ensure that worker machines are recycled (logoff/xautolog or IPL CMS)
prior to the job running.  I wonder if there are VMBATCH options (system or
job) that provide the ability to do a more advanced "clean up" of a worker
prior to a job run.
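
A hedged sketch of that recycle, using the WORK7 worker from Nora's
description (operator/class A authority assumed):

    /* REXX - recycle a batch worker before the job runs (sketch) */
    'CP FORCE WORK7'             /* log the worker off             */
    'CP SLEEP 5 SEC'             /* give logoff a moment to finish */
    'CP XAUTOLOG WORK7'          /* log it back on, clean state    */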

John

--
John Hall
Safe Software, Inc.
727-608-8799
johnh...@safesoftware.com


On Thu, Apr 14, 2011 at 3:45 PM, Graves Nora E wrote:

>  We are having an intermittent problem with SFS and I'm hoping someone may
> have some ideas of what to pursue next.
>
> We have several batch jobs that run under VMBATCH overnight.  Sometimes
> they are not able to create a file in a directory, even though most times it
> is successful.  The only differences in the executions are the file names;
> for many of these the File Type is the date.
>
> In the job I am most familiar with, these are the specifics.
>
> The job runs Monday-Saturday.  This year, it has failed on January 4,
> January 12, February 9, March 18, March 25, and April 13.  It has run
> successfully the other days.  Other than the QUERY statements below, it has
> not changed.
> The job runs in a work machine, WORK7.
> The job is submitted by the User ID of the database owner.
> The SFS directories are owned by a 3rd user.  Failures occur in many of the
> subdirectories, not just one subdirectory owned by this user.  This user is
> the owner of most of the directories containing the data files we create in
> batch, so I don't think it's significant that it's the ID that has the
> problem.  However, as far as I know, it is the only ID that does have the
> problem.
> This job uses VMLINK to acquire a write link to SFS directory.  This always
> looks to be successful--no error is given.  (Other jobs use GETFMADDR and
> ACCESS to acquire the write link to the directory.  This always
> appears successful as well).
> Once the file is ready to be copied from the Work Machine's 191 disk to the
> SFS directory, the intermittent error appears.  The vast majority of the
> time, the write is successful.  However, sometimes, the job gets this error
> message:
> DMSOPN1258E You are not authorized to write to file XX 20110413 Z1
>
> The file is not large--last night's file was only 12 blocks.
>
> At the suggestion of our systems programmer, I've put in a lot of query
> statements.  I've issued QUERY LIMITS for the job submitter; it's only used
> 84% of the allocation, with several thousand blocks available. The SFS
> directory owner has only used 76% of its allocation, with several thousand
> more blocks still available.  The filepool is not full.
>
> I've issued QUERY FILEPOOL CONFLICT.  There is no conflict.
>
> I've issued QUERY ACCESSED.  The directory shows that it is accessed R/W.
>
> When the write is unsuccessful, the program then loops through 5 tries of
> releasing the access, reacquiring the access, and attempting to write the
> file again.  This has never been successful.  I've issued both a COPYFILE
> and a PIPE to try to write the file; these do not work once there has been a
> failure.
>
> We've looked at the operator consoles to see if we can find any jobs
> running at the same time.  We haven't found any that are accessing that
> directory structure.
>
> There aren't any dumps to look at--it looks perfectly successful other than
> the fact that it won't write the file.
>
> Does anyone have any suggestions of something to try next?
>
>
>  Nora Graves
> nora.e.gra...@irs.gov
>
>


Re: SFS problem

2011-04-19 Thread Graves Nora E
Jim,

There are basically no messages at all in the console for the SFS
service machine.  One of the failures was on April 13.  This is the
console for that day--no editing at all of the contents.

HCPMID6001I  TIME IS 00:00:00 EDT WEDNESDAY 04/13/11
00:00:00
HCPMID6001I  TIME IS 00:00:00 EDT THURSDAY 04/14/11
00:00:00


Nora Graves
nora.e.gra...@irs.gov
Main IRS, Room 6531
(202) 622-6735 
Fax (202) 622-3123
SE:W:CAR:MP:D:KS:BRSI

-----Original Message-----
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Hughes, Jim
Sent: Thursday, April 14, 2011 5:23 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

Nora,

Have you looked at the SFS service machine's console log for any odd
messages?

Could the SFS server be doing some sort of control backup?

Jim Hughes
Consulting Systems Programmer 
Mainframe Technical Support Group
Department of Information Technology
State of New Hampshire
27 Hazen Drive
Concord, NH 03301
603-271-5586    Fax 603.271.1516



Re: SFS problem

2011-04-19 Thread Quay, Jonathan (IHG)
I've had similar issues before.  There are really two choices you can
make.  The first is to open a PMR and do traces and such.  Since it's a
very intermittent failure, this can be difficult.  The other is to make
the user doing the writes a filepool administrator and get on with life.
Mgmt chose the latter.
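
For reference, a hedged sketch of that change, assuming the server machine
is VMSERVU and the writing user is ABC (both hypothetical): add the ID to
the ADMIN statement in the server's DMSPARMS file and restart the server.

    * VMSERVU DMSPARMS (excerpt)
    ADMIN MAINT ABC

An administrator bypasses the per-file authorization checks, which is why
this makes the intermittent DMSOPN1258E disappear.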

-----Original Message-----
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Graves Nora E
Sent: Tuesday, April 19, 2011 7:43 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

Mike,

Yes, I checked out the message.  The thing is, the job runs nightly, and
only fails intermittently.  There is no reason to believe that the
authority to access the directory is changing during that time period;
there are only 2 people who regularly grant permissions to that user's
directory.   Neither of us has made a change.


Nora Graves
nora.e.gra...@irs.gov
Main IRS, Room 6531
(202) 622-6735 
Fax (202) 622-3123
SE:W:CAR:MP:D:KS:BRSI

-----Original Message-----
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Mike Walter
Sent: Thursday, April 14, 2011 4:13 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

Nora,

Did you issue: HELP DMSOPN1258E

The response from the msg indicates an SFS authorization problem:
--
(c) Copyright IBM Corporation 1990, 2008

 DMS1258E You are not authorized to write (to) file

 Explanation: You attempted to write to a file for which you do not have
 write authority, or you attempted to create a new file in a directory for
 which you do not have write authority.

 System Action: RC=12 or 28. Execution of the command is terminated.

 User Response: Ensure that you specified the correct file. If so, contact
 the owner to gain proper authorization to the file or directory. If the
 file specified is an alias, you may enter the QUERY ALIAS command to
 determine the owner of the base file.

 For a BFS file, enter the OPENVM LISTFILE command with the OWNER option
 for the file. This will tell you the owning user ID and group name for the
 file you wish to use. Refer to the z/VM: OpenExtensions Command Reference
 or enter HELP OPENVM for more information on the OPENVM commands.
--

z/VM has a terrific HELP facility (including most messages); it's worth 
investing a bit of time to familiarize yourself with it.

Were you using the VMLINK command in every case with the "(FORCERW" 
option?  Regardless, check the syntax for your SFS authorizations.
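
For what it's worth, a hedged example of that form, with a hypothetical
SFS directory ID; FORCERW tells VMLINK to obtain read/write access even if
the directory is already accessed read-only:

    VMLINK .DIR VMSYSU:XYZ.REPORTS (FORCERW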

And if all those bazillion intricate SFS authorizations become 
overwhelming, consider checking out SafeSFS from safesfs.com.  We're very 
satisfied customers; it makes authorizations much MUCH simpler and saves 
huge amounts of effort and time.

Mike Walter
Aon Corporation
The opinions expressed herein are mine alone, not my employer's.
 



"Graves Nora E"  

Sent by: "The IBM z/VM Operating System" 
04/14/2011 02:45 PM
Please respond to
"The IBM z/VM Operating System" 



To
IBMVM@LISTSERV.UARK.EDU
cc

Subject
SFS problem






We are having an intermittent problem with SFS and I'm hoping someone may 
have some ideas of what to pursue next.
 
We have several batch jobs that run under VMBATCH overnight.  Sometimes 
they are not able to create a file in a directory, even though most times 
it is successful.  The only differences in the executions are the file 
names; for many of these the File Type is the date.
 
In the job I am most familiar with, these are the specifics.
 
The job runs Monday-Saturday.  This year, it has failed on January 4, 
January 12, February 9, March 18, March 25, and April 13.  It has run 
successfully the other days.  Other than the QUERY statements below, it 
has not changed.
The job runs in a work machine, WORK7.
The job is submitted by the User ID of the database owner.
The SFS directories are owned by a 3rd user.  Failures occur in many of 
the subdirectories, not just one subdirectory owned by this user.  This 
user is the owner of most of the directories containing the data files we 
create in batch, so I don't think it's significant that it's the ID that 
has the problem.  However, as far as I know, it is the only ID that does 
have the problem.
This job uses VMLINK to acquire a write link to the SFS directory.  This 
always looks to be successful--no error is given.  (Other jobs use 
GETFMADDR and ACCESS to acquire the write link to the directory.  This 
always appears successful as well.)
Once the file is ready to be copied from the Work Machine's 191 disk to 
the SFS directory, the intermittent error appears.  The vast majority of 
the time, the write is successful.  However, sometimes, the job gets this 
error message: 
DMSOPN1258E You are not authorized to write to file XX 20110413 Z1
 
The file is not large--last night's file was only 12 blocks. 
 
At the suggestion of our systems programmer, I'

Re: SFS problem

2011-04-19 Thread Graves Nora E
Jim,

I didn't think to look there, I will.

Our regular backups are VMBACKUP, which is scheduled to run 90 minutes
after this job runs.  This job runs only 2 minutes maximum, so I know
that isn't the problem. But the SFS console may have some good
indications. I'll see what I can find there.  

Thanks,


Nora Graves
nora.e.gra...@irs.gov
Main IRS, Room 6531
(202) 622-6735 
Fax (202) 622-3123
SE:W:CAR:MP:D:KS:BRSI

-----Original Message-----
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Hughes, Jim
Sent: Thursday, April 14, 2011 5:23 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

Nora,

Have you looked at the SFS service machine's console log for any odd
messages?

Could the SFS server be doing some sort of control backup?

Jim Hughes
Consulting Systems Programmer 
Mainframe Technical Support Group
Department of Information Technology
State of New Hampshire
27 Hazen Drive
Concord, NH 03301
603-271-5586    Fax 603.271.1516



Re: SFS problem

2011-04-19 Thread Graves Nora E
Kris,
 
The submitter is ABC, the directory owner is XYZ.  The submitter has
WRITE/NEWWRITE authority to the directory and files.
 
Nora Graves
nora.e.gra...@irs.gov
Main IRS, Room 6531
(202) 622-6735 
Fax (202) 622-3123
SE:W:CAR:MP:D:KS:BRSI
 



From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Kris Buelens
Sent: Thursday, April 14, 2011 4:58 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem


Note: if you use VMBATCH, the worker machine connects to SFS with the
authority of the job submitter.
You say "This user is the owner of most of the directories"  You mean:
the submitter is userid ABC, the dirids are all named
"fpoolid:ABC.something"?


2011/4/14 Graves Nora E 


We are having an intermittent problem with SFS and I'm hoping
someone may have some ideas of what to pursue next.
 
We have several batch jobs that run under VMBATCH overnight.
Sometimes they are not able to create a file in a directory, even though
most times it is successful.  The only differences in the executions are
the file names; for many of these the File Type is the date.
 
In the job I am most familiar with, these are the specifics.
 
The job runs Monday-Saturday.  This year, it has failed on
January 4, January 12, February 9, March 18, March 25, and April 13.  It
has run successfully the other days.  Other than the QUERY statements
below, it has not changed.
The job runs in a work machine, WORK7.
The job is submitted by the User ID of the database owner.
The SFS directories are owned by a 3rd user.  Failures occur in
many of the subdirectories, not just one subdirectory owned by this
user.  This user is the owner of most of the directories containing the
data files we create in batch, so I don't think it's significant that
it's the ID that has the problem.  However, as far as I know, it is the
only ID that does have the problem.
This job uses VMLINK to acquire a write link to SFS directory.
This always looks to be successful--no error is given.  (Other jobs use
GETFMADDR and ACCESS to acquire the write link to the directory.  This
always appears successful as well).
Once the file is ready to be copied from the Work Machine's 191
disk to the SFS directory, the intermittent error appears.  The vast
majority of the time, the write is successful.  However, sometimes, the
job gets this error message: 
DMSOPN1258E You are not authorized to write to file XX
20110413 Z1
 
The file is not large--last night's file was only 12 blocks.  
 
At the suggestion of our systems programmer, I've put in a lot
of query statements.  I've issued QUERY LIMITS for the job submitter;
it's only used 84% of the allocation, with several thousand blocks
available. The SFS directory owner has only used 76% of its allocation,
with several thousand more blocks still available.  The filepool is not
full.
 
I've issued QUERY FILEPOOL CONFLICT.  There is no conflict.
 
I've issued QUERY ACCESSED.  The directory shows that it is
accessed R/W.
 
When the write is unsuccessful, the program then loops through 5
tries of releasing the access, reacquiring the access, and attempting to
write the file again.  This has never been successful.  I've issued both
a COPYFILE and a PIPE to try to write the file; these do not work once
there has been a failure.
 
We've looked at the operator consoles to see if we can find any
jobs running at the same time.  We haven't found any that are accessing
that directory structure.
 
There aren't any dumps to look at--it looks perfectly successful
other than the fact that it won't write the file.
 
Does anyone have any suggestions of something to try next?

 
Nora Graves
nora.e.gra...@irs.gov
 




-- 
Kris Buelens,
IBM Belgium, VM customer support



Re: SFS problem

2011-04-19 Thread Graves Nora E
Alan,

The submitter does have NEWWRITE permissions.  And nothing in that job
erases the file.  The job runs only once per day, and the File Type is
the date, such as 20110419. 

However, I didn't know that authority to write the file disappeared with
the file if NEWWRITE wasn't granted, so I learned something!


Nora Graves
nora.e.gra...@irs.gov
Main IRS, Room 6531
(202) 622-6735 
Fax (202) 622-3123
SE:W:CAR:MP:D:KS:BRSI

-----Original Message-----
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Alan Altmark
Sent: Thursday, April 14, 2011 4:52 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

On Thursday, 04/14/2011 at 03:47 EDT, Graves Nora E wrote:

> DMSOPN1258E You are not authorized to write to file XX 20110413 Z1
:
> I've issued QUERY ACCESSED.  The directory shows that it is accessed R/W.
>
> When the write is unsuccessful, the program then loops through 5 tries of
> releasing the access, reacquiring the access, and attempting to write the
> file again.  This has never been successful.  I've issued both a COPYFILE
> and a PIPE to try to write the file; these do not work once there has
> been a failure.
>
> We've looked at the operator consoles to see if we can find any jobs
> running at the same time.  We haven't found any that are accessing that
> directory structure.
>
> There aren't any dumps to look at--it looks perfectly successful other
> than the fact that it won't write the file.
>
> Does anyone have any suggestions of something to try next?

As you can see, it's an authorization error.  Does the worker/submitter 
have NEWWRITE authority to the directory?  If not, and some part of the 
job ERASEs the file from the directory and then tries to recreate it, it 
will fail, as the authority to the file is deleted if the file itself is 
deleted.

In a FILECONTROL directory, R/W access to the directory does not imply 
R/W access to all the files in it!
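
A hedged example of the grant that avoids this, with a hypothetical
directory ID and user; NEWWRITE covers files created in the directory
after the grant, so a re-created file stays writable:

    GRANT AUTHORITY VMSYSU:XYZ.REPORTS TO ABC (WRITE NEWWRITE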

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: SFS problem

2011-04-19 Thread Graves Nora E
Mike,

Yes, I checked out the message.  The thing is, the job runs nightly, and
only fails intermittently.  There is no reason to believe that the
authority to access the directory is changing during that time period;
there are only 2 people who regularly grant permissions to that user's
directory.   Neither of us has made a change.


Nora Graves
nora.e.gra...@irs.gov
Main IRS, Room 6531
(202) 622-6735 
Fax (202) 622-3123
SE:W:CAR:MP:D:KS:BRSI

-----Original Message-----
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Mike Walter
Sent: Thursday, April 14, 2011 4:13 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS problem

Nora,

Did you issue: HELP DMSOPN1258E

The response from the msg indicates an SFS authorization problem:
--
(c) Copyright IBM Corporation 1990, 2008

 DMS1258E You are not authorized to write (to) file

 Explanation: You attempted to write to a file for which you do not have
 write authority, or you attempted to create a new file in a directory for
 which you do not have write authority.

 System Action: RC=12 or 28. Execution of the command is terminated.

 User Response: Ensure that you specified the correct file. If so, contact
 the owner to gain proper authorization to the file or directory. If the
 file specified is an alias, you may enter the QUERY ALIAS command to
 determine the owner of the base file.

 For a BFS file, enter the OPENVM LISTFILE command with the OWNER option
 for the file. This will tell you the owning user ID and group name for the
 file you wish to use. Refer to the z/VM: OpenExtensions Command Reference
 or enter HELP OPENVM for more information on the OPENVM commands.
--

z/VM has a terrific HELP facility (including most messages); it's worth 
investing a bit of time to familiarize yourself with it.

Were you using the VMLINK command in every case with the "(FORCERW" 
option?  Regardless, check the syntax for your SFS authorizations.

And if all those bazillion intricate SFS authorizations become 
overwhelming, consider checking out SafeSFS from safesfs.com.  We're very 
satisfied customers; it makes authorizations much MUCH simpler and saves 
huge amounts of effort and time.

Mike Walter
Aon Corporation
The opinions expressed herein are mine alone, not my employer's.
 



"Graves Nora E"  

Sent by: "The IBM z/VM Operating System" 
04/14/2011 02:45 PM
Please respond to
"The IBM z/VM Operating System" 



To
IBMVM@LISTSERV.UARK.EDU
cc

Subject
SFS problem






We are having an intermittent problem with SFS and I'm hoping someone may 
have some ideas of what to pursue next.
 
We have several batch jobs that run under VMBATCH overnight.  Sometimes 
they are not able to create a file in a directory, even though most times 
it is successful.  The only differences in the executions are the file 
names; for many of these the File Type is the date.
 
In the job I am most familiar with, these are the specifics.
 
The job runs Monday-Saturday.  This year, it has failed on January 4, 
January 12, February 9, March 18, March 25, and April 13.  It has run 
successfully the other days.  Other than the QUERY statements below, it 
has not changed.
The job runs in a work machine, WORK7.
The job is submitted by the User ID of the database owner.
The SFS directories are owned by a 3rd user.  Failures occur in many of 
the subdirectories, not just one subdirectory owned by this user.  This 
user is the owner of most of the directories containing the data files we 
create in batch, so I don't think it's significant that it's the ID that 
has the problem.  However, as far as I know, it is the only ID that does 
have the problem.
This job uses VMLINK to acquire a write link to the SFS directory.  This 
always looks to be successful--no error is given.  (Other jobs use 
GETFMADDR and ACCESS to acquire the write link to the directory.  This 
always appears successful as well.)
Once the file is ready to be copied from the Work Machine's 191 disk to 
the SFS directory, the intermittent error appears.  The vast majority of 
the time, the write is successful.  However, sometimes, the job gets this 
error message: 
DMSOPN1258E You are not authorized to write to file XX 20110413 Z1
 
The file is not large--last night's file was only 12 blocks. 
 
At the suggestion of our systems programmer, I've put in a lot of query 
statements.  I've issued QUERY LIMITS for the job submitter; it's only 
used 84% of the allocation, with several thousand blocks available. The 
SFS directory owner has only used 76% of its allocation, with several 
thousand more blocks still available.  The filepool is not full.
 
I've issued QUERY FILEPOOL CONFLICT.  There is no conflict.
 
I've issued QUERY ACCESSED.  The directory shows that it is accessed R/W.
 
When the write is unsuccessful, the program then loops through 5 tr

Re: SFS problem

2011-04-14 Thread Hughes, Jim
Nora,

Have you looked at the SFS service machine's console log for any odd
messages?

Could the SFS server be doing some sort of control backup?

Jim Hughes
Consulting Systems Programmer 
Mainframe Technical Support Group
Department of Information Technology
State of New Hampshire
27 Hazen Drive
Concord, NH 03301
603-271-5586    Fax 603.271.1516



Re: SFS problem

2011-04-14 Thread Kris Buelens
Note: if you use VMBATCH, the worker machine connects to SFS with the
authority of the job submitter.
You say "This user is the owner of most of the directories"  You mean: the
submitter is userid ABC, the dirids are all named "fpoolid:ABC.something"?

2011/4/14 Graves Nora E 

>  We are having an intermittent problem with SFS and I'm hoping someone may
> have some ideas of what to pursue next.
>
> We have several batch jobs that run under VMBATCH overnight.  Sometimes
> they are not able to create a file in a directory, even though most times it
> is successful.  The only differences in the executions are the file names;
> for many of these the File Type is the date.
>
> In the job I am most familiar with, these are the specifics.
>
> The job runs Monday-Saturday.  This year, it has failed on January 4,
> January 12, February 9, March 18, March 25, and April 13.  It has run
> successfully the other days.  Other than the QUERY statements below, it has
> not changed.
> The job runs in a work machine, WORK7.
> The job is submitted by the User ID of the database owner.
> The SFS directories are owned by a 3rd user.  Failures occur in many of the
> subdirectories, not just one subdirectory owned by this user.  This user is
> the owner of most of the directories containing the data files we create in
> batch, so I don't think it's significant that it's the ID that has the
> problem.  However, as far as I know, it is the only ID that does have the
> problem.
> This job uses VMLINK to acquire a write link to SFS directory.  This always
> looks to be successful--no error is given.  (Other jobs use GETFMADDR and
> ACCESS to acquire the write link to the directory.  This always
> appears successful as well).
> Once the file is ready to be copied from the Work Machine's 191 disk to the
> SFS directory, the intermittent error appears.  The vast majority of the
> time, the write is successful.  However, sometimes, the job gets this error
> message:
> DMSOPN1258E You are not authorized to write to file XX 20110413 Z1
>
> The file is not large--last night's file was only 12 blocks.
>
> At the suggestion of our systems programmer, I've put in a lot of query
> statements.  I've issued QUERY LIMITS for the job submitter; it's only used
> 84% of the allocation, with several thousand blocks available. The SFS
> directory owner has only used 76% of its allocation, with several thousand
> more blocks still available.  The filepool is not full.
>
> I've issued QUERY FILEPOOL CONFLICT.  There is no conflict.
>
> I've issued QUERY ACCESSED.  The directory shows that it is accessed R/W.
>
> When the write is unsuccessful, the program then loops through 5 tries of
> releasing the access, reacquiring the access, and attempting to write the
> file again.  This has never been successful.  I've issued both a COPYFILE
> and a PIPE to try to write the file; these do not work once there has been a
> failure.
>
> We've looked at the operator consoles to see if we can find any jobs
> running at the same time.  We haven't found any that are accessing that
> directory structure.
>
> There aren't any dumps to look at--it looks perfectly successful other than
> the fact that it won't write the file.
>
> Does anyone have any suggestions of something to try next?
>
>
>  Nora Graves
> nora.e.gra...@irs.gov
>
>



-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: SFS problem

2011-04-14 Thread Alan Altmark
On Thursday, 04/14/2011 at 03:47 EDT, Graves Nora E 
 wrote:

> DMSOPN1258E You are  not authorized to write to file XX 20110413 Z1
:  
> I've issued QUERY  ACCESSED.  The directory shows that is accessed R/W.
>  
> When the write is  unsuccessful, the program then loops through 5 tries 
of 
> releasing the access,  reacquiring the access, and attempting to write 
the file 
> again.   This has never been successful.  I've issued both a COPYFILE 
and a 
> PIPE to try to write the file; these do not work once there has been  a 
failure.
>  
> We've looked at the  operator consoles to see if we can find any jobs 
running 
> at the same time.   We haven't found any that are accessing that 
directory 
> structure.
>  
> There aren't any  dumps to look at--it looks perfectly successful other 
than 
> the fact that it  won't write the file.
>  
> Does anyone have any  suggestions of something to try next?

As you can see, it's an authorization error.  Does the worker/submitter 
have NEWRITE authority to the directory?  If not, and some part of the job 
ERASEs the file from the directory and then tries to recreate it, it will 
fail, as the authority to the file is deleted if the file itself is 
deleted.

In a FILECONTROL directory, R/W access to the directory does not imply R/W 
access to all the files in it!
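For illustration only, assuming the directory is VMSYSU:OWNER3.BATCH.DATA and
the submitter runs as userid SUBMITR (both names invented here), the grant
would look something like this, using the NEWWRITE operand of GRANT AUTHORITY:

  GRANT AUTHORITY VMSYSU:OWNER3.BATCH.DATA TO SUBMITR (WRITE NEWWRITE
  QUERY AUTHORITY VMSYSU:OWNER3.BATCH.DATA

With NEWWRITE in place, a file that is erased and re-created in that directory
is immediately writable again, because the authority no longer lives only on
the (now deleted) file.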

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: SFS problem

2011-04-14 Thread Mike Walter
Nora,

Did you issue: HELP DMSOPN1258E

The response from the msg indicates an SFS authorization problem.:
--
(c) Copyright IBM Corporation 1990, 2008

 DMS1258E You are not authorized to write (to) file fn ft fm

 Explanation: You attempted to write to a file for which you do not have write
 authority, or you attempted to create a new file in a directory for which you
 do not have write authority.

 System Action: RC=12 or 28. Execution of the command is terminated.

 User Response: Ensure that you specified the correct file. If so, contact the
 owner to gain proper authorization to the file or directory. If the file
 specified is an alias, you may enter the QUERY ALIAS command to determine the
 owner of the base file.

 For a BFS file, enter the OPENVM LISTFILE command with the OWNER option for
 the file. This will tell you the owning user ID and group name for the file
 you wish to use. Refer to the z/VM: OpenExtensions Command Reference or enter
 HELP OPENVM for more information on the OPENVM commands.
--

z/VM has a terrific HELP facility (including most messages), it's worth 
investing a bit of time to familiarize yourself with it.

Were you using the VMLINK command in every case with the "(FORCERW" 
option?  Regardless, check the syntax for your SFS authorizations.

And if all those bazillion intricate SFS authorizations become 
overwhelming, consider checking out SafeSFS from safesfs.com.  We're very
satisfied customers; it makes authorizations much MUCH simpler and saves huge
amounts of effort and time.

Mike Walter
Aon Corporation
The opinions expressed herein are mine alone, not my employer's.
 



"Graves Nora E"  

Sent by: "The IBM z/VM Operating System" 
04/14/2011 02:45 PM
Please respond to
"The IBM z/VM Operating System" 



To
IBMVM@LISTSERV.UARK.EDU
cc

Subject
SFS problem






We are having an intermittent problem with SFS and I'm hoping someone may 
have some ideas of what to pursue next.
 
We have several batch jobs that run under VMBATCH overnight.  Sometimes 
they are not able to create a file in a directory, even though most times 
it is successful.  The only differences in the executions are the file 
names; for many of these the File Type is the date.
 
In the job I am most familiar with, these are the specifics.
 
The job runs Monday-Saturday.  This year, it has failed on January 4, 
January 12, February 9, March 18, March 25, and April 13.  It has run 
successfully the other days.  Other than the QUERY statements below, it 
has not changed.
The job runs in a work machine, WORK7.
The job is submitted by the User ID of the database owner.
The SFS directories are owned by a 3rd user.  Failures occur in many of 
the subdirectories, not just one subdirectory owned by this user.  This 
user is the owner of most of the directories containing the data files we 
create in batch, so I don't think it's significant that it's the ID that 
has the problem.  However, as far as I know, it is the only ID that does 
have the problem.
This job uses VMLINK to acquire a write link to SFS directory.  This 
always looks to be successful--no error is given.  (Other jobs use 
GETFMADDR and ACCESS to acquire the write link to the directory.  This 
always appears successful as well).
Once the file is ready to be copied from the Work Machine's 191 disk to 
the SFS directory, the intermittent error appears.  The vast majority of 
the time, the write is successful.  However, sometimes, the job gets this 
error message: 
DMSOPN1258E You are not authorized to write to file XX 20110413 Z1
 
The file is not large--last night's file was only 12 blocks. 
 
At the suggestion of our systems programmer, I've put in a lot of query 
statements.  I've issued QUERY LIMITS for the job submitter; it's only 
used 84% of the allocation, with several thousand blocks available. The 
SFS directory owner has only used 76% of its allocation, with several 
thousand more blocks still available.  The filepool is not full.
 
I've issued QUERY FILEPOOL CONFLICT.  There is no conflict.
 
I've issued QUERY ACCESSED.  The directory shows that it is accessed R/W.
 
When the write is unsuccessful, the program then loops through 5 tries of 
releasing the access, reacquiring the access, and attempting to write the 
file again.  This has never been successful.  I've issued both a COPYFILE 
and a PIPE to try to write the file; these do not work once there has been 
a failure.
 
We've looked at the operator consoles to see if we can find any jobs 
running at the same time.  We haven't found any that are accessing that 
directory structure.
 
There aren't any dumps to look at--it looks perfectly successful other 
than the fact that it won't write the file.
 
Does anyone have any suggestions of something to try next?

 
Nora Graves
nora.e.gra...@irs.gov
 





Re: SFS problem

2011-04-14 Thread Peter . Webb
Would issuing a QUERY AUTHORITY at the time of the error give anything
useful, I wonder. Maybe try for both the directory and the file?
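For example, issued right after the failure (the dirid is invented here; the
file name is the one from Nora's error message):

  QUERY AUTHORITY VMSYSU:OWNER3.BATCH.DATA
  QUERY AUTHORITY XX 20110413 Z1

Comparing the two would show whether the directory authority survived while
the file-level authority vanished along with an erased file.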

 

Peter

 

-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Graves Nora E
Sent: April 14, 2011 15:46
To: IBMVM@LISTSERV.UARK.EDU
Subject: SFS problem

 

We are having an intermittent problem with SFS and I'm hoping someone
may have some ideas of what to pursue next.

 

We have several batch jobs that run under VMBATCH overnight.  Sometimes
they are not able to create a file in a directory, even though most
times it is successful.  The only differences in the executions are the
file names; for many of these the File Type is the date.

 

In the job I am most familiar with, these are the specifics.

 

The job runs Monday-Saturday.  This year, it has failed on January 4,
January 12, February 9, March 18, March 25, and April 13.  It has run
successfully the other days.  Other than the QUERY statements below, it
has not changed.

The job runs in a work machine, WORK7.

The job is submitted by the User ID of the database owner.

The SFS directories are owned by a 3rd user.  Failures occur in many of
the subdirectories, not just one subdirectory owned by this user.  This
user is the owner of most of the directories containing the data files
we create in batch, so I don't think it's significant that it's the ID
that has the problem.  However, as far as I know, it is the only ID that
does have the problem.

This job uses VMLINK to acquire a write link to SFS directory.  This
always looks to be successful--no error is given.  (Other jobs use
GETFMADDR and ACCESS to acquire the write link to the directory.  This
always appears successful as well).

Once the file is ready to be copied from the Work Machine's 191 disk to
the SFS directory, the intermittent error appears.  The vast majority of
the time, the write is successful.  However, sometimes, the job gets
this error message: 

DMSOPN1258E You are not authorized to write to file XX 20110413 Z1

 

The file is not large--last night's file was only 12 blocks.  

 

At the suggestion of our systems programmer, I've put in a lot of query
statements.  I've issued QUERY LIMITS for the job submitter; it's only
used 84% of the allocation, with several thousand blocks available. The
SFS directory owner has only used 76% of its allocation, with several
thousand more blocks still available.  The filepool is not full.

 

I've issued QUERY FILEPOOL CONFLICT.  There is no conflict.

 

I've issued QUERY ACCESSED.  The directory shows that it is accessed R/W.

 

When the write is unsuccessful, the program then loops through 5 tries
of releasing the access, reacquiring the access, and attempting to write
the file again.  This has never been successful.  I've issued both a
COPYFILE and a PIPE to try to write the file; these do not work once
there has been a failure.

 

We've looked at the operator consoles to see if we can find any jobs
running at the same time.  We haven't found any that are accessing that
directory structure.

 

There aren't any dumps to look at--it looks perfectly successful other
than the fact that it won't write the file.

 

Does anyone have any suggestions of something to try next?

 

 

Nora Graves

nora.e.gra...@irs.gov

 





SFS problem

2011-04-14 Thread Graves Nora E
We are having an intermittent problem with SFS and I'm hoping someone
may have some ideas of what to pursue next.
 
We have several batch jobs that run under VMBATCH overnight.  Sometimes
they are not able to create a file in a directory, even though most
times it is successful.  The only differences in the executions are the
file names; for many of these the File Type is the date.
 
In the job I am most familiar with, these are the specifics.
 
The job runs Monday-Saturday.  This year, it has failed on January 4,
January 12, February 9, March 18, March 25, and April 13.  It has run
successfully the other days.  Other than the QUERY statements below, it
has not changed.
The job runs in a work machine, WORK7.
The job is submitted by the User ID of the database owner.
The SFS directories are owned by a 3rd user.  Failures occur in many of
the subdirectories, not just one subdirectory owned by this user.  This
user is the owner of most of the directories containing the data files
we create in batch, so I don't think it's significant that it's the ID
that has the problem.  However, as far as I know, it is the only ID that
does have the problem.
This job uses VMLINK to acquire a write link to SFS directory.  This
always looks to be successful--no error is given.  (Other jobs use
GETFMADDR and ACCESS to acquire the write link to the directory.  This
always appears successful as well).
Once the file is ready to be copied from the Work Machine's 191 disk to
the SFS directory, the intermittent error appears.  The vast majority of
the time, the write is successful.  However, sometimes, the job gets
this error message: 
DMSOPN1258E You are not authorized to write to file XX 20110413 Z1
 
The file is not large--last night's file was only 12 blocks.  
 
At the suggestion of our systems programmer, I've put in a lot of query
statements.  I've issued QUERY LIMITS for the job submitter; it's only
used 84% of the allocation, with several thousand blocks available. The
SFS directory owner has only used 76% of its allocation, with several
thousand more blocks still available.  The filepool is not full.
 
I've issued QUERY FILEPOOL CONFLICT.  There is no conflict.
 
I've issued QUERY ACCESSED.  The directory shows that it is accessed R/W.
 
When the write is unsuccessful, the program then loops through 5 tries
of releasing the access, reacquiring the access, and attempting to write
the file again.  This has never been successful.  I've issued both a
COPYFILE and a PIPE to try to write the file; these do not work once
there has been a failure.
 
We've looked at the operator consoles to see if we can find any jobs
running at the same time.  We haven't found any that are accessing that
directory structure.
 
There aren't any dumps to look at--it looks perfectly successful other
than the fact that it won't write the file.
 
Does anyone have any suggestions of something to try next?

 
Nora Graves
nora.e.gra...@irs.gov
 


Re: SFS management question

2011-03-31 Thread Schuh, Richard
Are you running the CRR server? I seem to remember that, even though you do not 
need it for its stated purpose, it is needed for performance reasons.

Another possibility is that the log m-disks might be filling up. The control 
data backup is initiated at 80% full, a non-configurable threshold. 
If, while that backup is running, the log reaches 95% full, all activity is 
suspended until it drops back below a given percentage.
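A quick way to check, assuming the file pool is named VMSYSU (substitute your
own pool ID):

  QUERY FILEPOOL LOG VMSYSU:

The output includes the log minidisks and their percent in use, which shows
whether the 80% control-backup threshold is being approached.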


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of Kris Buelens
Sent: Thursday, March 31, 2011 3:16 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS management question

In some release it became less "required" to run SFS reorganisations.  You can 
run a FILEPOOL REORG, which will only reorganise storage pool 1, that is, the 
SFS DB2-like catalog.  There is no tool to reorganise the other storage groups 
(apart from a backup & restore).
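As a sketch, the reorg is run from the file pool server machine itself, roughly
like this (see the SFS administration book for the authoritative procedure):

  STOP                 <== entered on the running server's console
  FILESERV REORG       <== reorganises the catalogs (storage group 1)
  FILESERV START       <== bring the server back up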

2011/3/31 Tony Thigpen 
It's been a long time since I managed a production SFS system so I need
a little refresher.

On our development system, I ran out of space on my SFS pool, but was
able to reduce the usage down to just 45% by deleting a bunch of stuff
we no longer needed. Now the SFS seems 'slow'. Do I need to run a job to
'compress' the SFS or re-org the directory?

--

Tony Thigpen



--
Kris Buelens,
IBM Belgium, VM customer support


Re: SFS management question

2011-03-31 Thread Kris Buelens
In some release it became less "required" to run SFS reorganisations.  You
can run a FILEPOOL REORG, which will only reorganise storage pool 1, that is,
the SFS DB2-like catalog.  There is no tool to reorganise the other storage
groups (apart from a backup & restore).

2011/3/31 Tony Thigpen 

> It's been a long time since I managed a production SFS system so I need
> a little refresher.
>
> On our development system, I ran out of space on my SFS pool, but was
> able to reduce the usage down to just 45% by deleting a bunch of stuff
> we no longer needed. Now the SFS seems 'slow'. Do I need to run a job to
> 'compress' the SFS or re-org the directory?
>
> --
>
> Tony Thigpen
>



-- 
Kris Buelens,
IBM Belgium, VM customer support


SFS management question

2011-03-31 Thread Tony Thigpen
It's been a long time since I managed a production SFS system so I need
a little refresher.

On our development system, I ran out of space on my SFS pool, but was
able to reduce the usage down to just 45% by deleting a bunch of stuff
we no longer needed. Now the SFS seems 'slow'. Do I need to run a job to
'compress' the SFS or re-org the directory?

-- 

Tony Thigpen


Re: SFS

2011-03-30 Thread Alan Altmark
On Wednesday, 03/30/2011 at 06:00 EDT, clifford jackson 
 wrote:
> If I allocate x amount of storage to a SFS file pool and the tree 
structure is 
> such that I have x subdirectories under my top directory, will my 
> subdirectories have access to all allocated storage within that specific 

> storage pool, say for instance storage pool 3.

Yes.  All of the data within a file space is stored in the storage group 
assigned to the file space at ENROLL USER time.

> Also will my subdirectories inherit all security controls from my top 
directory.

No. Each directory and subdirectory must be managed separately.  If you 
have ESM (e.g. RACF) protection for SFS, then the ESM can use things like 
groups to make life easier.
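For example (user names and block counts invented for illustration): enrolling
a user into storage group 3, then authorizing each directory level separately:

  ENROLL USER CLIFF VMSYSU: (BLOCKS 4000 STORGROUP 3
  GRANT AUTHORITY VMSYSU:CLIFF.TOP TO MARY (READ
  GRANT AUTHORITY VMSYSU:CLIFF.TOP.SUB1 TO MARY (READ

The second GRANT is needed even though SUB1 sits under TOP; nothing is
inherited.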

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: SFS

2011-03-30 Thread clifford jackson

If I allocate x amount of storage to a SFS file pool and the tree structure is 
such that I have x subdirectories under my top directory, will my 
subdirectories have access to all allocated storage within that specific 
storage pool, say for instance storage pool 3.
 
Also will my subdirectories inherit all security controls from my top directory.

 


Date: Wed, 23 Mar 2011 10:52:15 +0100
From: kris.buel...@gmail.com
Subject: Re: SFS
To: IBMVM@LISTSERV.UARK.EDU

Yes indeed, I meant "are not copied".  Very bad eyes and bad typist, sorry.


2011/3/23 Les Koehler 

Spilling error below?
Les


Kris Buelens wrote:

The RENAME and ERASE commands are indeed "atomic operations".
You can't even issue a RENAME or ERASE when the current workunit has
uncommited work.  Then you can get around by getting a new workunit, making
it the default, etc.  There is no *need* to use other than the deafult
workunit with DMSFILEC or DMSRENAM if you can afford committing all work
that is pending in that workunit.

Some other thing when using SFS: if you'd use the sequence COPY into
tempfile, followed by a RENAME to install a new version of "MY FILE" note
then that the auhtories granted to the orignal "MY FILE" or not copied to
You meant: 'are not copied', right?




the new version of "MY FILE".  That pleads to use DMSFILEC with REPLACE.

2011/3/22 Alan Altmark 


On Tuesday, 03/22/2011 at 12:18 EDT, Les Koehler 
wrote:

So with FILE CONTROL you can't replace co-dependent files in one atomic
operation, as one would do with the 'copyfile two-step' (love that
phrase!)?

Yes, you can (and in DIRECTORY CONTROL directories, too), but you need to
become familiar with workunits, DMSGETWU, DMSPUSWU and DMSPOPWU.  See the
discussion of workunits in Chapter 12 of the CMS Application Dev. Guide.

If memory serves, you get a new workunit, push it on the stack, do all
your file updates, then pop the workunit.  But it's been a long time. Just
don't expect RENAME and ERASE to obey the rules.  I think they are atomic
operations.

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott






-- 
Kris Buelens,
IBM Belgium, VM customer support
  

Re: SFS

2011-03-23 Thread Kris Buelens
Yes indeed, I meant "are not copied".  Very bad eyes and bad typist,
sorry.

2011/3/23 Les Koehler 

> Spilling error below?
> Les
>
>
> Kris Buelens wrote:
>
>> The RENAME and ERASE commands are indeed "atomic operations".
>> You can't even issue a RENAME or ERASE when the current workunit has
>> uncommited work.  Then you can get around by getting a new workunit,
>> making
>> it the default, etc.  There is no *need* to use other than the deafult
>> workunit with DMSFILEC or DMSRENAM if you can afford committing all work
>> that is pending in that workunit.
>>
>> Some other thing when using SFS: if you'd use the sequence COPY into
>> tempfile, followed by a RENAME to install a new version of "MY FILE" note
>> then that the auhtories granted to the orignal "MY FILE" or not copied to
>>
> You meant: 'are not copied', right?
>
>  the new version of "MY FILE".  That pleads to use DMSFILEC with REPLACE.
>>
>> 2011/3/22 Alan Altmark 
>>
>>  On Tuesday, 03/22/2011 at 12:18 EDT, Les Koehler >> >
>>> wrote:
>>>
>>>> So with FILE CONTROL you can't replace co-dependent files in one atomic
>>>> operation, as one would do with the 'copyfile two-step' (love that
>>>>
>>> phrase!)?
>>>
>>> Yes, you can (and in DIRECTORY CONTROL directories, too), but you need to
>>> become familiar with workunits, DMSGETWU, DMSPUSWU and DMSPOPWU.  See the
>>> discussion of workunits in Chapter 12 of the CMS Application Dev. Guide.
>>>
>>> If memory serves, you get a new workunit, push it on the stack, do all
>>> your file updates, then pop the workunit.  But it's been a long time.
>>> Just
>>> don't expect RENAME and ERASE to obey the rules.  I think they are atomic
>>> operations.
>>>
>>> Alan Altmark
>>>
>>> z/VM and Linux on System z Consultant
>>> IBM System Lab Services and Training
>>> ibm.com/systems/services/labservices
>>> office: 607.429.3323
>>> mobile; 607.321.7556
>>> alan_altm...@us.ibm.com
>>> IBM Endicott
>>>
>>>
>>
>>
>>


-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: SFS

2011-03-23 Thread Les Koehler

Spilling error below?
Les

Kris Buelens wrote:

The RENAME and ERASE commands are indeed "atomic operations".
You can't even issue a RENAME or ERASE when the current workunit has
uncommited work.  Then you can get around by getting a new workunit, making
it the default, etc.  There is no *need* to use other than the deafult
workunit with DMSFILEC or DMSRENAM if you can afford committing all work
that is pending in that workunit.

Some other thing when using SFS: if you'd use the sequence COPY into
tempfile, followed by a RENAME to install a new version of "MY FILE" note
then that the auhtories granted to the orignal "MY FILE" or not copied to

You meant: 'are not copied', right?

the new version of "MY FILE".  That pleads to use DMSFILEC with REPLACE.

2011/3/22 Alan Altmark 


On Tuesday, 03/22/2011 at 12:18 EDT, Les Koehler 
wrote:

So with FILE CONTROL you can't replace co-dependent files in one atomic
operation, as one would do with the 'copyfile two-step' (love that

phrase!)?

Yes, you can (and in DIRECTORY CONTROL directories, too), but you need to
become familiar with workunits, DMSGETWU, DMSPUSWU and DMSPOPWU.  See the
discussion of workunits in Chapter 12 of the CMS Application Dev. Guide.

If memory serves, you get a new workunit, push it on the stack, do all
your file updates, then pop the workunit.  But it's been a long time. Just
don't expect RENAME and ERASE to obey the rules.  I think they are atomic
operations.

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott







Re: SFS

2011-03-23 Thread Kris Buelens
The RENAME and ERASE commands are indeed "atomic operations".
You can't even issue a RENAME or ERASE when the current workunit has
uncommited work.  Then you can get around by getting a new workunit, making
it the default, etc.  There is no *need* to use other than the deafult
workunit with DMSFILEC or DMSRENAM if you can afford committing all work
that is pending in that workunit.

Some other thing when using SFS: if you'd use the sequence COPY into
tempfile, followed by a RENAME to install a new version of "MY FILE" note
then that the auhtories granted to the orignal "MY FILE" or not copied to
the new version of "MY FILE".  That pleads to use DMSFILEC with REPLACE.
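A rough REXX outline of that flow (the CSL routine names all come from this
thread; the argument lists here are abbreviated, so check the CSL reference
for the full parameter lists before relying on this):

  /* Replace several files as one atomic SFS commit                  */
  Call csl 'DMSGETWU retc reasc wuid'   /* obtain a new workunit     */
  Call csl 'DMSPUSWU retc reasc wuid'   /* make it the default       */
  /* ... one DMSFILEC call per file, each with NOCOMMIT ...          */
  Call csl 'DMSCOMM retc reasc'         /* commit everything at once */
  If retc <> 0 Then Call csl 'DMSROLLB retc reasc'  /* or undo it    */
  Call csl 'DMSPOPWU retc reasc'        /* restore the old default   */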

2011/3/22 Alan Altmark 

> On Tuesday, 03/22/2011 at 12:18 EDT, Les Koehler 
> wrote:
> > So with FILE CONTROL you can't replace co-dependent files in one atomic
> > operation, as one would do with the 'copyfile two-step' (love that
> phrase!)?
>
> Yes, you can (and in DIRECTORY CONTROL directories, too), but you need to
> become familiar with workunits, DMSGETWU, DMSPUSWU and DMSPOPWU.  See the
> discussion of workunits in Chapter 12 of the CMS Application Dev. Guide.
>
> If memory serves, you get a new workunit, push it on the stack, do all
> your file updates, then pop the workunit.  But it's been a long time. Just
> don't expect RENAME and ERASE to obey the rules.  I think they are atomic
> operations.
>
> Alan Altmark
>
> z/VM and Linux on System z Consultant
> IBM System Lab Services and Training
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile; 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott
>



-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: SFS

2011-03-22 Thread Alan Altmark
On Tuesday, 03/22/2011 at 12:18 EDT, Les Koehler  
wrote:
> So with FILE CONTROL you can't replace co-dependent files in one atomic
> operation, as one would do with the 'copyfile two-step' (love that 
phrase!)?

Yes, you can (and in DIRECTORY CONTROL directories, too), but you need to 
become familiar with workunits, DMSGETWU, DMSPUSWU and DMSPOPWU.  See the 
discussion of workunits in Chapter 12 of the CMS Application Dev. Guide.

If memory serves, you get a new workunit, push it on the stack, do all 
your file updates, then pop the workunit.  But it's been a long time. Just 
don't expect RENAME and ERASE to obey the rules.  I think they are atomic 
operations.

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: SFS

2011-03-22 Thread Les Koehler

Kris,
Thanks for correcting my memory! Back in my IGS days I had an EXEC that would do 
it all for my service machines, via MSG from authorized users. I just sent the 
DVM the files and a list of files identified by Change Request number. The EXEC 
did all the work, including sending notification of success/failure to a Notify 
List. It could even back out a change, since all files were RENAMEd.


Les

Kris Buelens wrote:

Alan forgot to mention one restriction: with a DIRC, only one user can get
R/W access at a given time.  And, the benefit: a DIRC can be placed in a
dataspace.

As for the trick to update a few files in one atomic operation: you had it
almost right, except that COPYFILE has no NOUPDIRT option, but that isn't
needed.  The use of temp files and the RENAMEs with NOUPDIRT take care of
that.

With SFS however, RENAME does not honor the NOUPDIRT option.

I never tried but you can use the CSL routine DMSFILEC, it has a NOCOMMIT
option.  At the end of the DMSFILEC calls, you then call DMSCOMM to commit
all copy operations (or DMSROLLB if you change your mind).  There is also a
DMSRENAM CSL routine, (with a NOCOMMIT option) but that is no longer
required thanks to DMSFILEC NOCOMMIT.

2011/3/22 Les Koehler 


So with FILE CONTROL you can't replace co-dependent files in one atomic
operation, as one would do with the 'copyfile two-step' (love that phrase!)?

Les


Alan Altmark wrote:


On Monday, 03/21/2011 at 10:54 EDT, Les Koehler 
wrote:


Does FILE CONTROL thus have the same exposure to a file being changed
'under the covers' as a minidisk has, possibly causing an error for the other
users?


Is there something similar to the minidisk trick of:

copyfile new1 version a pseudo1 name k (noupdir
copyfile new2 version a pseudo2 name k (noupdir
rename old1 version k save1 version k (noupdir
rename old2 version k save2 version k (noupdir
rename pseudo1 name k old1 version k (noupdir
rename pseudo2 name k old2 version k

(hope I got that right!) to avoid impacting other users?


You never get ERROR 3 READING FILE with SFS and you don't use the
"copyfile twostep" with SFS.  You just change the files.  If someone
accesses the directory while you're updating, then they will see changed
files only for files that are closed when they access it.  In that respect,
DIRCONTROL directories are better than minidisks.

FILECONTROL directories reveal file changes immediately.  I.e. you can
open a file and see one set of content, then close and re-open and see
something new.  But you never see a mix and you never see the file while it
is open and being written to - you see the old version.  And if you erase a
file in a FILECONTROL directory, all authorizations are lost.

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott







Re: SFS

2011-03-21 Thread Kris Buelens
Alan forgot to mention one restriction: with a DIRC, only one user can get
R/W access at a given time.  And, the benefit: a DIRC can be placed in a
dataspace.

As for the trick to update a few files in one atomic operation: you had it
almost right, except that COPYFILE has no NOUPDIRT option, but that isn't
needed.  The use of temp files and the RENAMEs with NOUPDIRT take care of
that.

With SFS however, RENAME does not honor the NOUPDIRT option.

I never tried but you can use the CSL routine DMSFILEC, it has a NOCOMMIT
option.  At the end of the DMSFILEC calls, you then call DMSCOMM to commit
all copy operations (or DMSROLLB if you change your mind).  There is also a
DMSRENAM CSL routine, (with a NOCOMMIT option) but that is no longer
required thanks to DMSFILEC NOCOMMIT.

2011/3/22 Les Koehler 

> So with FILE CONTROL you can't replace co-dependent files in one atomic
> operation, as one would do with the 'copyfile two-step' (love that phrase!)?
>
> Les
>
>
> Alan Altmark wrote:
>
>> On Monday, 03/21/2011 at 10:54 EDT, Les Koehler 
>> wrote:
>>
>>> Does FILE CONTROL thus have the same exposure to a file being changed
>>>
>> 'under the
>>
>>> covers' as a minidisk has, possibly causing an error for the other
>>>
>> users?
>>
>>> Is there something similar to the minidisk trick of:
>>>
>>> copyfile new1 version a pseudo1 name k (noupdir
>>> copyfile new2 version a pseudo2 name k (noupdir
>>> rename old1 version k save1 version k (noupdir
>>> rename old2 version k save2 version k (noupdir
>>> rename pseudo1 name k old1 version k (noupdir
>>> rename pseudo2 name k old2 version k
>>>
>>> (hope I got that right!) to avoid impacting other users?
>>>
>>
>> You never get ERROR 3 READING FILE with SFS and you don't use the
>> "copyfile twostep" with SFS.  You just change the files.  If someone
>> accesses the directory while you're updating, then they will see changed
>> files only for files that are closed when they access it.  In that respect,
>> DIRCONTROL directories are better than minidisks.
>>
>> FILECONTROL directories reveal file changes immediately.  I.e. you can
>> open a file and see one set of content, then close and re-open and see
>> something new.  But you never see a mix and you never see the file while it
>> is open and being written to - you see the old version.  And if you erase a
>> file in a FILECONTROL directory, all authorizations are lost.
>>
>> Alan Altmark
>>
>> z/VM and Linux on System z Consultant
>> IBM System Lab Services and Training 
>> ibm.com/systems/services/labservices
>> office: 607.429.3323
>> mobile; 607.321.7556
>> alan_altm...@us.ibm.com
>> IBM Endicott
>>
>>


-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: SFS

2011-03-21 Thread Les Koehler
So with FILE CONTROL you can't replace co-dependent files in one atomic 
operation, as one would do with the 'copyfile two-step' (love that phrase!)?


Les

Alan Altmark wrote:
On Monday, 03/21/2011 at 10:54 EDT, Les Koehler  
wrote:
Does FILE CONTROL thus have the same exposure to a file being changed
'under the covers' as a minidisk has, possibly causing an error for the other
users?

Is there something similar to the minidisk trick of:

copyfile new1 version a pseudo1 name k (noupdir
copyfile new2 version a pseudo2 name k (noupdir
rename old1 version k save1 version k (noupdir
rename old2 version k save2 version k (noupdir
rename pseudo1 name k old1 version k (noupdir
rename pseudo2 name k old2 version k

(hope I got that right!) to avoid impacting other users?


You never get ERROR 3 READING FILE with SFS and you don't use the 
"copyfile twostep" with SFS.  You just change the files.  If someone 
accesses the directory while you're updating, then they will see changed 
files only for files that are closed when they access it.  In that respect, 
DIRCONTROL directories are better than minidisks.


FILECONTROL directories reveal file changes immediately.  I.e. you can 
open a file and see one set of content, then close and re-open and see 
something new.  But you never see a mix and you never see the file while 
it is open and being written to - you see the old version.  And if you 
erase a file in a FILECONTROL directory, all authorizations are lost.


Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323

mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott



Re: SFS

2011-03-21 Thread Alan Altmark
On Monday, 03/21/2011 at 10:54 EDT, Les Koehler  
wrote:
> Does FILE CONTROL thus have the same exposure to a file being changed 'under the
> covers' as a minidisk has, possibly causing an error for the other users?
> 
> Is there something similar to the minidisk trick of:
> 
> copyfile new1 version a pseudo1 name k (noupdir
> copyfile new2 version a pseudo2 name k (noupdir
> rename old1 version k save1 version k (noupdir
> rename old2 version k save2 version k (noupdir
> rename pseudo1 name k old1 version k (noupdir
> rename pseudo2 name k old2 version k
> 
> (hope I got that right!) to avoid impacting other users?

You never get ERROR 3 READING FILE with SFS and you don't use the 
"copyfile twostep" with SFS.  You just change the files.  If someone 
accesses the directory while you're updating, then they will see changed 
files only for files that are closed when they access it.  In that respect, 
DIRCONTROL directories are better than minidisks.

FILECONTROL directories reveal file changes immediately.  I.e. you can 
open a file and see one set of content, then close and re-open and see 
something new.  But you never see a mix and you never see the file while 
it is open and being written to - you see the old version.  And if you 
erase a file in a FILECONTROL directory, all authorizations are lost.
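So when refreshing a file that others hold GRANTs on, overwrite it in place
rather than ERASE and re-create it, e.g. (file names taken from the earlier
minidisk example, purely illustrative):

  COPYFILE NEW1 VERSION A OLD1 VERSION K (REPLACE OLDDATE

REPLACE rewrites the existing file, so the authorizations on OLD1 VERSION
survive.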

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: SFS

2011-03-21 Thread Les Koehler
Does FILE CONTROL thus have the same exposure to a file being changed 'under the 
covers' as a minidisk has, possibly causing an error for the other users?


Is there something similar to the minidisk trick of:

 copyfile new1 version a pseudo1 name k (noupdir
 copyfile new2 version a pseudo2 name k (noupdir
 rename old1 version k save1 version k (noupdir
 rename old2 version k save2 version k (noupdir
 rename pseudo1 name k old1 version k (noupdir
 rename pseudo2 name k old2 version k

(hope I got that right!) to avoid impacting other users?

Les

Alan Altmark wrote:
On Monday, 03/21/2011 at 05:50 EDT, clifford jackson 
 wrote:
What are the major differences between filecontrol and directorycontrol 
in a 
SFS structure 


DIRECTORY CONTROL directories have two important attributes:
1.  You don't see any changes to any files until you re-ACCESS the 
directory.
2.  Authority for the files resides at the directory level (DIRREAD, 
DIRWRITE).  That is, all files have the same access rights, matching the 
access rights of the directory.  You just grant access to the directory 
and the person has those rights to all the files.  If there are new files, 
they automatically get access, but only after they re-access the 
directory.


The above is called "Access-to-Release Consistency" and mimics the way a 
minidisk works.


FILE CONTROL directories have separate file and directory access rights, 
and any changes to a file are visible to others as soon as the file is 
closed.


Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323

mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott



Re: SFS

2011-03-21 Thread Alan Altmark
On Monday, 03/21/2011 at 05:50 EDT, clifford jackson 
 wrote:
> What are the major differences between filecontrol and directorycontrol 
in a 
> SFS structure 

DIRECTORY CONTROL directories have two important attributes:
1.  You don't see any changes to any files until you re-ACCESS the 
directory.
2.  Authority for the files resides at the directory level (DIRREAD, 
DIRWRITE).  That is, all files have the same access rights, matching the 
access rights of the directory.  You just grant access to the directory 
and the person has those rights to all the files.  If there are new files, 
they automatically get access, but only after they re-access the 
directory.

The above is called "Access-to-Release Consistency" and mimics the way a 
minidisk works.

FILE CONTROL directories have separate file and directory access rights, 
and any changes to a file are visible to others as soon as the file is 
closed.
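For example (pool, directory, and user names invented for illustration):

  CREATE DIRECTORY VMSYSU:OWNER.DIRC1 (DIRCONTROL
  GRANT AUTHORITY VMSYSU:OWNER.DIRC1 TO READER1 (DIRREAD
  GRANT AUTHORITY VMSYSU:OWNER.DIRC1 TO WRITER1 (DIRWRITE

One GRANT per user covers every file in the directory, present and future;
with a FILE CONTROL directory you would also be granting on individual files.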

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile; 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: Temp SFS environment

2011-03-20 Thread Alain Benveniste
The work space is not allocated the same way depending on whether the control 
minidisk and logs have many blocks defined. I did several tests to suit my needs 
and noticed that DASD pages could be used or not (vs. memory), depending on what 
is coded in the POOLDEF file. That seems normal to me anyway, and I am glad to 
see that this resolves my need.

Alain

Sent from my iPhone

On 17 Mar 2011 at 20:23, "Gentry, Stephen"  wrote:

> Wasn't the original poster trying to do this with virtual disk(s)?  And
> if that's the case it has a high potential of being in memory anyway?
> Or did I miss something.
> Steve
> 
> -Original Message-
> From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
> Behalf Of David Boyes
> Sent: Thursday, March 17, 2011 2:51 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: Temp SFS environment
> 
>> All that brings a subsidiary question : is it important to preserve
> the machine xc 
>> in the VMSERVx directories (I should read the doc I think...) ?
> 
> You need the XC mode setting if you want SFS to use VM dataspaces to map
> parts of the data into memory (which would be consistent with the errors
> you saw) for performance reasons. It works fine without it (as you've
> seen), but performance will be impaired with very heavily loaded SFS
> servers. 


Re: Temp SFS environment

2011-03-17 Thread David Boyes
> Wasn't the original poster trying to do this with virtual disk(s)?  And
> if that's the case it has a high potential of being in memory anyway?
> Or did I miss something.
> Steve

In this case, it probably doesn't matter at all, but that's the reason the XC 
mode setting is in the VMSERVx directory entries. 

There are probably some coding advantages to being able to just flip over to another 
address space and directly read or write a page, but I doubt it matters nearly 
as much as it used to. 


Re: Temp SFS environment

2011-03-17 Thread Kris Buelens
With Dataspaces, the SFS end-user can directly read the CMS file data,
without APPC communication to the SFS server.  Only at ACCESS time, the
end-user of a "dataspaced" DIRC must talk to the SFS server.
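The CP directory statements involved look something like this (sizes are
illustrative only; the file pool planning book gives real guidance):

  USER VMSERVU XXXXXXXX 64M 64M BG
   MACHINE XC
   XCONFIG ADDRSPACE MAXNUMBER 100 TOTSIZE 8192M SHARE

MACHINE XC lets the server create dataspaces, and XCONFIG ADDRSPACE controls
how many and how large; without them the end-users fall back to APPC requests
to the server for all file data.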

2011/3/17 Gentry, Stephen 

> Wasn't the original poster trying to do this with virtual disk(s)?  And
> if that's the case it has a high potential of being in memory anyway?
> Or did I miss something.
> Steve
>
> -Original Message-
> From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
> Behalf Of David Boyes
> Sent: Thursday, March 17, 2011 2:51 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: Temp SFS environment
>
> > All that brings a subsidiary question : is it important to preserve
> the machine xc
> > in the VMSERVx directories (I should read the doc I think...) ?
>
> You need the XC mode setting if you want SFS to use VM dataspaces to map
> parts of the data into memory (which would be consistent with the errors
> you saw) for performance reasons. It works fine without it (as you've
> seen), but performance will be impaired with very heavily loaded SFS
> servers.
>



-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: Temp SFS environment

2011-03-17 Thread Gentry, Stephen
Wasn't the original poster trying to do this with virtual disk(s)?  And
if that's the case it has a high potential of being in memory anyway?
Or did I miss something.
Steve

-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of David Boyes
Sent: Thursday, March 17, 2011 2:51 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Temp SFS environment

> All that brings a subsidiary question : is it important to preserve
the machine xc 
> in the VMSERVx directories (I should read the doc I think...) ?

You need the XC mode setting if you want SFS to use VM dataspaces to map
parts of the data into memory (which would be consistent with the errors
you saw) for performance reasons. It works fine without it (as you've
seen), but performance will be impaired with very heavily loaded SFS
servers. 


Re: Temp SFS environment

2011-03-17 Thread David Boyes
> All that brings a subsidiary question : is it important to preserve the 
> machine xc 
> in the VMSERVx directories (I should read the doc I think...) ?

You need the XC mode setting if you want SFS to use VM dataspaces to map parts 
of the data into memory (which would be consistent with the errors you saw) for 
performance reasons. It works fine without it (as you've seen), but performance 
will be impaired with very heavily loaded SFS servers. 


Re: Temp SFS environment

2011-03-17 Thread Alain Benvéniste
> Feedback: Reading Alan's answer, I thought I was unlucky, so I changed my point of
> view by making some tests, defining my MDISK 301 to 305 as 3390 T-DISK.
> I fell into what Simon talked about: the fileserv generate cmd didn't see the tdisks
> and asked me to link them...  More ugly than my previous tests. I decided to
> freely 'supersede' Alan's directives, go back to my v-disks, and check the
> directory again...
> and I got an idea. I had copied a VMSERVx directory to create my own SFS user.
> VMSERVx comes with a MACHINE XC definition. I removed that line. And it works!
> I am 'ready for communication'. I don't say I will not have problems later,
> but I think I am on the right way.
> All that brings a subsidiary question: is it important to preserve the
> MACHINE XC in the VMSERVx directories (I should read the doc, I think...)?
> 
> Thanks to all
> Alain
> 



Re: Temp SFS environment

2011-03-17 Thread RPN01
V-disks stay around until the last user releases them, so could you have
them linked read-only by a second user as well to keep them alive until
you're done with them?

-- 
Robert P. Nix  Mayo Foundation.~.
RO-OC-1-18 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
"In theory, theory and practice are the same, but
 in practice, theory and practice are different."



On 3/17/11 8:43 AM, "Ivica Brodaric"  wrote:

> I don't know about detach, but re-link definitely happens. File pool minidisks
> can be defined in the directory with linkmode R and they still become linked
> R/W when the server starts and remain R/W after the server terminates.
> 
> Ivica Brodaric
> BNZ
> 



Re: Temp SFS environment

2011-03-17 Thread Ivica Brodaric
I don't know about detach, but re-link definitely happens. File pool
minidisks can be defined in the directory with linkmode R and they still
become linked R/W when the server starts and remain R/W after the server
terminates.

Ivica Brodaric
BNZ


Re: Temp SFS environment

2011-03-17 Thread Wandschneider, Scott
Thank you for sharing your reasoning.  Makes sense to me.

Thank you,
Scott


-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of Alain Benveniste
Sent: Thursday, March 17, 2011 1:47 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Temp SFS environment

Scott

My VM is used as a hypervisor for DR needs and is defined with minimal DASD. I was 
looking for a way to transfer big files without adding perm or temp DASD space 
and without increasing backup time... And we are waiting for a new DASD array 
coming at the end of May. All the rest of the free space is used to migrate 3380 to 
3390.

Alain

Sent from my iPhone

On 16 Mar 2011 at 23:12, "Wandschneider, Scott"  wrote:

> My curious mind wants me to ask. . . Why would you want a temporary SFS
> Directory in memory?
> 
> Thank you,
> Scott
> 
> 
> -Original Message-
> From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
> Behalf Of Alain Benveniste
> Sent: Wednesday, March 16, 2011 5:01 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Temp SFS environment
> 
> Hi,
> 
> I tried to create a Temp SFS environment using FB-512 V-DISK.
> Part of my directory looks like this:
> MDISK 0191 3390 11 1 VM0HVF MR
> MDISK 0301 FB-512 V-DISK 64000
> MDISK 0302 FB-512 V-DISK 64000
> MDISK 0303 FB-512 V-DISK 64000
> MDISK 0304 FB-512 V-DISK 64000
> MDISK 0305 FB-512 V-DISK 64000
> 
> The fileserv generate command works perfectly but when I do a fileserv start
> I get: 
> DMSWFV1117I FILESERV processing begun at 22:55:02 on 16 Mar 2011
> DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
> DMS4PD3400I Initializing begins for DDNAME = CONTROL
> DMS4PD3400I Initializing ends for DDNAME = CONTROL
> DMS4PD3400I Initializing begins for DDNAME = MDK1
> DMS4PD3400I Initializing ends for DDNAME = MDK1
> DMS4PD3400I Initializing begins for DDNAME = MDK2
> DMS4PD3400I Initializing ends for DDNAME = MDK2
> DMS4PG3404W File pool limit of 2 minidisks has been reached
> DMS4PD3400I Initializing begins for DDNAME = LOG1
> DMS4PD3400I Initializing ends for DDNAME = LOG1
> DMS4PD3400I Initializing begins for DDNAME = LOG2
> DMS4PD3400I Initializing ends for DDNAME = LOG2
> DMS5FD3032I File pool server has terminated
> DMSWFV1120I File SFS POOLDEF A1 created or replaced
> DMSWFV1117I FILESERV processing ended at 22:55:52 on 16 Mar 2011
> DMSWFV1117I FILESERV processing begun at 22:55:52 on 16 Mar 2011
> DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
> DMSWFV1121I SFS POOLDEF A1 will be used for FILESERV processing
> DMS5FE3040E File pool server system error occurred - DMS4FK 02
> 
> DMS3040E  File pool server system error occurred - modulename nn
> 
> Explanation: An internal error occurred within the file pool server. A dump is
> taken according to the dump option chosen in the server startup parameters.
> This is a system error.
> 
> The modulename is the name of the module that detected the error.
> 
> The nn is the error detection point within the module.
> 
> The modulename nn is intended only for service personnel.
> 
> 
> What does modulename 02 mean?
> 
> Fileserv works fine when the MDK are 'normal' mdisk but not when they
> are defined as temp...
> 
> Is it just possible to do that ?
> 
> Alain 
> 



Re: Temp SFS environment

2011-03-16 Thread Alain Benveniste
Scott

My VM is used as a hypervisor for DR needs and is defined with minimal DASD. I was 
looking for a way to transfer big files without adding perm or temp DASD space 
and without increasing backup time... And we are waiting for a new DASD array 
coming at the end of May. All the rest of the free space is used to migrate 3380 to 
3390.

Alain

Sent from my iPhone

On 16 Mar 2011 at 23:12, "Wandschneider, Scott"  wrote:

> My curious mind wants me to ask. . . Why would you want a temporary SFS
> Directory in memory?
> 
> Thank you,
> Scott
> 
> 
> -Original Message-
> From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
> Behalf Of Alain Benveniste
> Sent: Wednesday, March 16, 2011 5:01 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Temp SFS environment
> 
> Hi,
> 
> I tried to create a Temp SFS environment using FB-512 V-DISK.
> Part of my directory looks like this:
> MDISK 0191 3390 11 1 VM0HVF MR
> MDISK 0301 FB-512 V-DISK 64000
> MDISK 0302 FB-512 V-DISK 64000
> MDISK 0303 FB-512 V-DISK 64000
> MDISK 0304 FB-512 V-DISK 64000
> MDISK 0305 FB-512 V-DISK 64000
> 
> The fileserv generate command works perfectly but when I do a fileserv start
> I get: 
> DMSWFV1117I FILESERV processing begun at 22:55:02 on 16 Mar 2011
> DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
> DMS4PD3400I Initializing begins for DDNAME = CONTROL
> DMS4PD3400I Initializing ends for DDNAME = CONTROL
> DMS4PD3400I Initializing begins for DDNAME = MDK1
> DMS4PD3400I Initializing ends for DDNAME = MDK1
> DMS4PD3400I Initializing begins for DDNAME = MDK2
> DMS4PD3400I Initializing ends for DDNAME = MDK2
> DMS4PG3404W File pool limit of 2 minidisks has been reached
> DMS4PD3400I Initializing begins for DDNAME = LOG1
> DMS4PD3400I Initializing ends for DDNAME = LOG1
> DMS4PD3400I Initializing begins for DDNAME = LOG2
> DMS4PD3400I Initializing ends for DDNAME = LOG2
> DMS5FD3032I File pool server has terminated
> DMSWFV1120I File SFS POOLDEF A1 created or replaced
> DMSWFV1117I FILESERV processing ended at 22:55:52 on 16 Mar 2011
> DMSWFV1117I FILESERV processing begun at 22:55:52 on 16 Mar 2011
> DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
> DMSWFV1121I SFS POOLDEF A1 will be used for FILESERV processing
> DMS5FE3040E File pool server system error occurred - DMS4FK 02
> 
> DMS3040E  File pool server system error occurred - modulename nn
> 
> Explanation: An internal error occurred within the file pool server. A dump is
> taken according to the dump option chosen in the server startup parameters.
> This is a system error.
> 
> The modulename is the name of the module that detected the error.
> 
> The nn is the error detection point within the module.
> 
> The modulename nn is intended only for service personnel.
> 
> 
> What does modulename 02 mean?
> 
> Fileserv works fine when the MDK are 'normal' mdisk but not when they
> are defined as temp...
> 
> Is it just possible to do that ?
> 
> Alain 
> 


Re: Temp SFS environment

2011-03-16 Thread Shimon Lebowitz
I am not sure about this, but I think it might be that the mdisks
get detached and re-linked, which would kill your SFS server
if they were TDISK.

See what response you get to:
#cp q v 301-305
immediately after the failure.

The reason I think I remember this is that years ago I once
tried to define the mdisks as slightly different addresses
than the ones in the CP directory (#CP DEF B301 301),
and ran into problems. IIRC the server wanted to be able
to LINK to the addresses I had given it.

Shimon



On Thu, Mar 17, 2011 at 12:01 AM, Alain Benveniste wrote:

> Hi,
>
> I tried to create a Temp SFS environment using FB-512 V-disks.
> Part of my directory looks like this:
> MDISK 0191 3390 11 1 VM0HVF MR
> MDISK 0301 FB-512 V-DISK 64000
> MDISK 0302 FB-512 V-DISK 64000
> MDISK 0303 FB-512 V-DISK 64000
> MDISK 0304 FB-512 V-DISK 64000
> MDISK 0305 FB-512 V-DISK 64000
>
> The fileserv generate command works perfectly but when I do a fileserv
> start I get:
> DMSWFV1117I FILESERV processing begun at 22:55:02 on 16 Mar 2011
> DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
> DMS4PD3400I Initializing begins for DDNAME = CONTROL
> DMS4PD3400I Initializing ends for DDNAME = CONTROL
> DMS4PD3400I Initializing begins for DDNAME = MDK1
> DMS4PD3400I Initializing ends for DDNAME = MDK1
> DMS4PD3400I Initializing begins for DDNAME = MDK2
> DMS4PD3400I Initializing ends for DDNAME = MDK2
> DMS4PG3404W File pool limit of 2 minidisks has been reached
> DMS4PD3400I Initializing begins for DDNAME = LOG1
> DMS4PD3400I Initializing ends for DDNAME = LOG1
> DMS4PD3400I Initializing begins for DDNAME = LOG2
> DMS4PD3400I Initializing ends for DDNAME = LOG2
> DMS5FD3032I File pool server has terminated
> DMSWFV1120I File SFS POOLDEF A1 created or replaced
> DMSWFV1117I FILESERV processing ended at 22:55:52 on 16 Mar 2011
> DMSWFV1117I FILESERV processing begun at 22:55:52 on 16 Mar 2011
> DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
> DMSWFV1121I SFS POOLDEF A1 will be used for FILESERV processing
> DMS5FE3040E File pool server system error occurred - DMS4FK 02
>
> DMS3040E  File pool server system error occurred - modulename nn
>
> Explanation: An internal error occurred within the file pool server. A
> dump is taken according to the dump option chosen in the server startup
> parameters. This is a system error.
>
> The modulename is the name of the module that detected the error.
>
> The nn is the error detection point within the module.
>
> The modulename nn is intended only for service personnel.
>
>
> What does modulename 02 mean?
>
> Fileserv works fine when the MDKs are 'normal' minidisks but not when
> they are defined as temp...
>
> Is it even possible to do that?
>
> Alain
>


Re: Temp SFS environment

2011-03-16 Thread Alan Altmark
On Wednesday, 03/16/2011 at 06:01 EDT, Alain Benveniste 
 wrote:

> DMS5FE3040E File pool server system error occurred - DMS4FK 02

That error code indicates a failure of the MAPMDISK IDENTIFY function (see 
CP Programming Services), and the likely candidate error is "V-DISK not 
supported".

I think you should be able to use T-disk, however.
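 
For example, an untested sketch of the same 0301-0305 statements recoded 
for T-disk; the 45-cylinder size is only a rough 3390 equivalent of each 
64000-block FB-512 V-disk, and it assumes enough TDSK space is allocated 
on your CP-owned volumes:

  MDISK 0301 3390 T-DISK 45
  MDISK 0302 3390 T-DISK 45
  MDISK 0303 3390 T-DISK 45
  MDISK 0304 3390 T-DISK 45
  MDISK 0305 3390 T-DISK 45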

Alan Altmark

z/VM and Linux on System z Consultant
IBM System Lab Services and Training 
ibm.com/systems/services/labservices 
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: Temp SFS environment

2011-03-16 Thread Wandschneider, Scott
My curious mind wants me to ask... why would you want a temporary SFS
directory in memory?

Thank you,
Scott


-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Alain Benveniste
Sent: Wednesday, March 16, 2011 5:01 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Temp SFS environment

Hi,

I tried to create a Temp SFS environment using FB-512 V-disks.
Part of my directory looks like this:
MDISK 0191 3390 11 1 VM0HVF MR
MDISK 0301 FB-512 V-DISK 64000
MDISK 0302 FB-512 V-DISK 64000
MDISK 0303 FB-512 V-DISK 64000
MDISK 0304 FB-512 V-DISK 64000
MDISK 0305 FB-512 V-DISK 64000

The fileserv generate command works perfectly but when I do a fileserv
start I get: 
DMSWFV1117I FILESERV processing begun at 22:55:02 on 16 Mar 2011
DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
DMS4PD3400I Initializing begins for DDNAME = CONTROL
DMS4PD3400I Initializing ends for DDNAME = CONTROL
DMS4PD3400I Initializing begins for DDNAME = MDK1
DMS4PD3400I Initializing ends for DDNAME = MDK1
DMS4PD3400I Initializing begins for DDNAME = MDK2
DMS4PD3400I Initializing ends for DDNAME = MDK2
DMS4PG3404W File pool limit of 2 minidisks has been reached
DMS4PD3400I Initializing begins for DDNAME = LOG1
DMS4PD3400I Initializing ends for DDNAME = LOG1
DMS4PD3400I Initializing begins for DDNAME = LOG2
DMS4PD3400I Initializing ends for DDNAME = LOG2
DMS5FD3032I File pool server has terminated
DMSWFV1120I File SFS POOLDEF A1 created or replaced
DMSWFV1117I FILESERV processing ended at 22:55:52 on 16 Mar 2011
DMSWFV1117I FILESERV processing begun at 22:55:52 on 16 Mar 2011
DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
DMSWFV1121I SFS POOLDEF A1 will be used for FILESERV processing
DMS5FE3040E File pool server system error occurred - DMS4FK 02

DMS3040E  File pool server system error occurred - modulename nn

Explanation: An internal error occurred within the file pool server. A
dump is taken according to the dump option chosen in the server startup
parameters. This is a system error.

The modulename is the name of the module that detected the error.

The nn is the error detection point within the module.

The modulename nn is intended only for service personnel.


What does modulename 02 mean?

Fileserv works fine when the MDKs are 'normal' minidisks but not when
they are defined as temp...

Is it even possible to do that?

Alain 



Temp SFS environment

2011-03-16 Thread Alain Benveniste
Hi,

I tried to create a Temp SFS environment using FB-512 V-disks.
Part of my directory looks like this:
MDISK 0191 3390 11 1 VM0HVF MR
MDISK 0301 FB-512 V-DISK 64000
MDISK 0302 FB-512 V-DISK 64000
MDISK 0303 FB-512 V-DISK 64000
MDISK 0304 FB-512 V-DISK 64000
MDISK 0305 FB-512 V-DISK 64000

The fileserv generate command works perfectly but when I do a fileserv
start I get: 
DMSWFV1117I FILESERV processing begun at 22:55:02 on 16 Mar 2011
DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
DMS4PD3400I Initializing begins for DDNAME = CONTROL
DMS4PD3400I Initializing ends for DDNAME = CONTROL
DMS4PD3400I Initializing begins for DDNAME = MDK1
DMS4PD3400I Initializing ends for DDNAME = MDK1
DMS4PD3400I Initializing begins for DDNAME = MDK2
DMS4PD3400I Initializing ends for DDNAME = MDK2
DMS4PG3404W File pool limit of 2 minidisks has been reached
DMS4PD3400I Initializing begins for DDNAME = LOG1
DMS4PD3400I Initializing ends for DDNAME = LOG1
DMS4PD3400I Initializing begins for DDNAME = LOG2
DMS4PD3400I Initializing ends for DDNAME = LOG2
DMS5FD3032I File pool server has terminated
DMSWFV1120I File SFS POOLDEF A1 created or replaced
DMSWFV1117I FILESERV processing ended at 22:55:52 on 16 Mar 2011
DMSWFV1117I FILESERV processing begun at 22:55:52 on 16 Mar 2011
DMSWFV1121I SFS DMSPARMS A1 will be used for FILESERV processing
DMSWFV1121I SFS POOLDEF A1 will be used for FILESERV processing
DMS5FE3040E File pool server system error occurred - DMS4FK 02

DMS3040E  File pool server system error occurred - modulename nn

Explanation: An internal error occurred within the file pool server. A
dump is taken according to the dump option chosen in the server startup
parameters. This is a system error.

The modulename is the name of the module that detected the error.

The nn is the error detection point within the module.

The modulename nn is intended only for service personnel.


What does modulename 02 mean?

Fileserv works fine when the MDKs are 'normal' minidisks but not when
they are defined as temp...

Is it even possible to do that?

Alain


Re: Old codger question: Can SFS be networked across systems?

2011-03-16 Thread Mark Wheeler

You CAN put TSAF in between SFS and AVS+VTAM, but it's not necessary.
 
Mark Wheeler
UnitedHealth Group

 



Date: Wed, 16 Mar 2011 13:13:28 -0400
From: rhamil...@cas.org
Subject: Re: Old codger question: Can SFS be networked across systems?
To: IBMVM@LISTSERV.UARK.EDU






Yep; I used TSAF over AVS+VTAM and connected users+SFSs; that should work 
regardless of the size of your country.
 
R;
 
 
Rob Hamilton
Chemical Abstracts Service
  

Re: Old codger question: Can SFS be networked across systems?

2011-03-16 Thread Richard Troth
Les --

About the authentication nightmare part, the same criticism is
frequently nailed on NFS in Unix/Linux land.  The problem there is the
same:  You have placed a certain amount of admin trust in the file
sharing peer systems.  If you don't trust the sysadmin of an NFS
client, don't export the content to his machines.  (There are various
ways people have tried to close the gap in NFS, with varying results
and varied satisfaction, all very confusing.)  Similarly, if you don't
trust the sysadmin of the other VM system(s), don't play SFS with
them.

Rob mentions username mapping with IPGATE.  They do the same thing in
NFS, but not quite as open-ended.  IPGATE is kinda cool.

-- R;   <><
Rick Troth
Velocity Software
http://www.velocitysoftware.com/





On Wed, Mar 16, 2011 at 12:25, Les Koehler  wrote:
> A friend and I were (dis)cussing SFS and he thinks it can be networked
> cross-country. Possible? I would guess that it would be an authentication
> nightmare at the user level. Thoughts?
>
> Les
>


Re: Old codger question: Can SFS be networked across systems?

2011-03-16 Thread Hamilton, Robert
Yep; I used TSAF over AVS+VTAM and connected users+SFSs; that should
work regardless of the size of your country.

 

R;

 

 

Rob Hamilton

Chemical Abstracts Service

 



From: The IBM z/VM Operating System [mailto:IBMVM@listserv.uark.edu] On
Behalf Of Mark Wheeler
Sent: Wednesday, March 16, 2011 12:57 PM
To: IBMVM@listserv.uark.edu
Subject: Re: Old codger question: Can SFS be networked across systems?

 

In the unlikely situation where you run VTAM on your z/VM systems, you
can config your AVS gateways and connect the two SFS's via LU6.2.

Mark Wheeler
UnitedHealth Group

 
> Date: Wed, 16 Mar 2011 17:38:33 +0100
> From: rvdh...@gmail.com
> Subject: Re: Old codger question: Can SFS be networked across systems?
> To: IBMVM@LISTSERV.UARK.EDU
> 
> On Wed, Mar 16, 2011 at 5:25 PM, Les Koehler 
wrote:
> > A friend and I were (dis)cussing SFS and he thinks it can be
networked
> > cross-country. Possible? I would guess that it would be an
authentication
> > nightmare at the user level. Thoughts?
> 
> If you share via ISFC that means both sides must be in the same
> administrative domain. But given the length of the FICON connection,
> it will only work for small countries ;-)
> IPGATE runs over TCP/IP connections and does long distance. Latency can
> make it less fun for regular use, but IMHO you really should not have
> critical data on a remote system. You define user access and mapping
> in the IPGATE configuration files. Provided there is some
> understanding between userid management on both sides, you could even
> use dummy userids as the target of the mapping (eg map MAINT at SYSA
> to user MAINT-A at SYSB).
> 
> Rob





Re: Old codger question: Can SFS be networked across systems?

2011-03-16 Thread Mark Wheeler

In the unlikely situation where you run VTAM on your z/VM systems, you can 
config your AVS gateways and connect the two SFS's via LU6.2.

Mark Wheeler
UnitedHealth Group

 
> Date: Wed, 16 Mar 2011 17:38:33 +0100
> From: rvdh...@gmail.com
> Subject: Re: Old codger question: Can SFS be networked across systems?
> To: IBMVM@LISTSERV.UARK.EDU
> 
> On Wed, Mar 16, 2011 at 5:25 PM, Les Koehler  wrote:
> > A friend and I were (dis)cussing SFS and he thinks it can be networked
> > cross-country. Possible? I would guess that it would be an authentication
> > nightmare at the user level. Thoughts?
> 
> If you share via ISFC that means both sides must be in the same
> administrative domain. But given the length of the FICON connection,
> it will only work for small countries ;-)
> IPGATE runs over TCP/IP connections and does long distance. Latency can
> make it less fun for regular use, but IMHO you really should not have
> critical data on a remote system. You define user access and mapping
> in the IPGATE configuration files. Provided there is some
> understanding between userid management on both sides, you could even
> use dummy userids as the target of the mapping (eg map MAINT at SYSA
> to user MAINT-A at SYSB).
> 
> Rob
  

Re: Old codger question: Can SFS be networked across systems?

2011-03-16 Thread Rob van der Heij
On Wed, Mar 16, 2011 at 5:25 PM, Les Koehler  wrote:
> A friend and I were (dis)cussing SFS and he thinks it can be networked
> cross-country. Possible? I would guess that it would be an authentication
> nightmare at the user level. Thoughts?

If you share via ISFC that means both sides must be in the same
administrative domain. But given the length of the FICON connection,
it will only work for small countries ;-)
IPGATE runs over TCP/IP connections and does long distance. Latency can
make it less fun for regular use, but IMHO you really should not have
critical data on a remote system. You define user access and mapping
in the IPGATE configuration files. Provided there is some
understanding between userid management on both sides, you could even
use dummy userids as the target of the mapping (eg map MAINT at SYSA
to user MAINT-A at SYSB).

Rob


Old codger question: Can SFS be networked across systems?

2011-03-16 Thread Les Koehler
A friend and I were (dis)cussing SFS and he thinks it can be networked 
cross-country. Possible? I would guess that it would be an authentication 
nightmare at the user level. Thoughts?


Les


Re: SFS question

2011-03-14 Thread Gentry, Stephen
Ivica and others.  Thank you for the help.  Increasing the size of the
control disk fixed the problem.

Steve

 

From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Ivica Brodaric
Sent: Saturday, March 12, 2011 8:12 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS question

 

Stephen, 

 

Do I understand the formula and definitions correctly?

You do, but that's just to get you started. It really all depends on how
many objects are going to be created in that file pool. So, take that
formula as an attempt to predict the future.

 

MAXUSERS determines the logical size of the catalog and that formula is
MAXUSERS * 85 (and that's a real formula). Half of that goes to data
blocks and half to index blocks. The physical size of the catalog
(number of blocks in group 1) can initially be much smaller than that.
That smaller space would again be divided equally to data and index
blocks.

 

It is much easier to add another minidisk to group 1 (when you hit the
physical limit) than to regenerate the file pool (when you hit the logical
limit). Therefore, allocate MAXUSERS generously, so that you don't have
to do this again soon. 

 

In your case, you have 640 cylinders in group 1 (16 * 40 cyl), which
equates to 115200 blocks. You may have over-allocated group 1, but to
decrease that now would be a pain, so leave it. To be able to use all
that space, the logical size of the catalog has to be at least that
much, so MAXUSERS must be at least 1356, which is 115200 / 85, rounded
up. I would suggest that you use at least 5000 if not 10000 for
MAXUSERS.

 

Over-allocating MAXUSERS is not harmful, it just takes a bit of space in
control minidisk. Size of control minidisk (or of the rest of it)
determines the number of potentially addressable blocks in the file pool
(maximum size to which the file pool can grow to). You can see that
number if you do QUERY FILEPOOL OVERVIEW, but roughly, it is a bit less
than one million addressable blocks for each control disk cylinder.
There is no formula for this, because other things go to control disk as
well (MAXDISKS has a hand in it too), but for example (roughly again),
with a 50-cylinder control disk you will be left with at least 45
million potentially addressable blocks, which equates to around 75 3390
mod3's. That's the maximum size of disk space that you will ever be able
to have in the file pool before having to regenerate it.

 

While regenerating file pool, you will have to increase the size of the
control disk. You have to do this to allow for the increase of MAXUSERS
in order to maintain at least the same number of potentially addressable
blocks. Increase it by 25-33% to be sure, but do *not* over-allocate
control disk. Whenever the file pool server performs control data backup
(and it does it automatically when log disks fill up more than 80%), the
file pool will be unavailable to end users for the duration of the
backup. You want that backup to be quick especially if your log disks
are small and the backup is run often. Control data backup backs up
*entire* control disk and *used* blocks in group 1 (catalog). Therefore,
over-allocating group 1 is OK, but over-allocating control disk is not.

 

Follow the procedure to regenerate a repository file pool, to which you
have already been pointed, and you'll be fine. One final word of
advice though: before doing anything like this, make sure that you also
have a reliable backup of the whole file pool (storage groups 2 and
above) handy.

 

Ivica Brodaric

BNZ



Re: SFS question

2011-03-12 Thread Ivica Brodaric
Stephen,

Do I understand the formula and definitions correctly?
>
> You do, but that's just to get you started. It really all depends on how
many objects are going to be created in that file pool. So, take that
formula as an attempt to predict the future.

MAXUSERS determines the logical size of the catalog and that formula is
MAXUSERS * 85 (and that's a real formula). Half of that goes to data blocks
and half to index blocks. The physical size of the catalog (number of blocks
in group 1) can initially be much smaller than that. That smaller space
would again be divided equally to data and index blocks.

It is much easier to add another minidisk to group 1 (when you hit the
physical limit) than to regenerate the file pool (when you hit the logical limit).
Therefore, allocate MAXUSERS generously, so that you don't have to do this
again soon.

In your case, you have 640 cylinders in group 1 (16 * 40 cyl), which equates
to 115200 blocks. You may have over-allocated group 1, but to decrease that
now would be a pain, so leave it. To be able to use all that space, the
logical size of the catalog has to be at least that much, so MAXUSERS must
be at least 1356, which is 115200 / 85, rounded up. I would suggest that you
use at least 5000 if not 10000 for MAXUSERS.
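
If you want to sanity-check that arithmetic, here is a small REXX sketch
(assumptions: 3390 DASD at 180 4K blocks per cylinder, and the
MAXUSERS * 85 relationship described above):

  /* Minimum MAXUSERS needed to cover all of group 1 */
  cyls   = 16 * 40                 /* 40 cyl on each of 16 packs  */
  blocks = cyls * 180              /* 4K blocks per 3390 cylinder */
  say 'Group 1 blocks:  ' blocks             /* 115200            */
  say 'Minimum MAXUSERS:' (blocks + 84) % 85 /* 1356, the ceiling */
                                             /* of blocks / 85    */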

Over-allocating MAXUSERS is not harmful, it just takes a bit of space in
control minidisk. Size of control minidisk (or of the rest of it) determines
the number of potentially addressable blocks in the file pool (maximum size
to which the file pool can grow to). You can see that number if you do QUERY
FILEPOOL OVERVIEW, but roughly, it is a bit less than one million
addressable blocks for each control disk cylinder. There is no formula for
this, because other things go to control disk as well (MAXDISKS has a hand
in it too), but for example (roughly again), with a 50-cylinder control disk
you will be left with at least 45 million potentially addressable blocks,
which equates to around 75 3390 mod3's. That's the maximum size of disk
space that you will ever be able to have in the file pool before having to
regenerate it.
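
As a rough REXX cross-check of those numbers (assuming standard mod-3
geometry of 3339 cylinders, at 180 4K blocks per cylinder):

  /* Express 45 million addressable 4K blocks in 3390 mod-3s */
  blocksPerMod3 = 3339 * 180             /* = 601020 blocks   */
  say 45000000 % blocksPerMod3 'mod-3s'  /* 74, so around 75  */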

While regenerating file pool, you will have to increase the size of the
control disk. You have to do this to allow for the increase of MAXUSERS in
order to maintain at least the same number of potentially addressable
blocks. Increase it by 25-33% to be sure, but do *not* over-allocate control
disk. Whenever the file pool server performs control data backup (and it
does it automatically when log disks fill up more than 80%), the file pool
will be unavailable to end users for the duration of the backup. You want
that backup to be quick especially if your log disks are small and the
backup is run often. Control data backup backs up *entire* control disk and
*used* blocks in group 1 (catalog). Therefore, over-allocating group 1 is
OK, but over-allocating control disk is not.

Follow the procedure to regenerate a repository file pool, to which you have
already been pointed, and you'll be fine. One final word of advice though:
before doing anything like this, make sure that you also have a reliable
backup of the whole file pool (storage groups 2 and above) handy.

Ivica Brodaric
BNZ


Re: SFS question

2011-03-11 Thread Dave Jones
Yup, that's correct, Steve. You need to set MAXUSERS to at least 1500.
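
Just to spell out the arithmetic (a trivial REXX check of the planning
formula with your numbers):

  /* maximum enrolled users = 300 * (defined / active) */
  defined = 1000                   /* 750, rounded up   */
  active  = 200                    /* average logged on */
  say 'MAXUSERS estimate:' 300 * (defined / active)   /* 1500 */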

DJ

On 03/11/2011 08:16 AM, Gentry, Stephen wrote:
> So far, thanks to all who have replied.
> I'm going over various numbers and need some clarification.  In the CMS
> Pool Planning guide, Estimate Max Pool Size, the formula is:  maximum
> enrolled users = 300 * (# system defined users / # system active users)
> If I understand the definitions correctly, #system defined users is the
> number of users in the USER DIRECT file. For us that number is 750 (but
> to CMA, I'm going to round up to 1000). #system active users is the
> number of users logged on; on average we have about 200 logged on.  So,
> plugging in the values to the formula above:
> 1500 = 300 * (1000 / 200)
> So the MAXUSERS statement should be 1500?
> If so, I'm way low, currently 300.
> Do I understand the formula and definitions correctly?
> 
> Steve
> 
> -Original Message-
> From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
> Behalf Of Sue Farrell
> Sent: Friday, March 11, 2011 8:56 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: SFS question
> 
> Steve,
>  
> You need to increase your MAXUSERS setting by doing a FILESERV
> REGENERATE for the file pool.
> 
> Although it's buried, the 51010 reason code is mentioned in Chapter 5 of
> the CMS File Pool Planning, Administration, and Operation manual:
> 
> What Happens When the Limit is Reached: Logical catalog space is
> reserved during file pool generation. FILESERV GENERATE processing uses
> the MAXUSERS value to estimate and set the maximum logical catalog
> space. When the server runs out of logical space, it displays a warning
> message on its console and continues processing. Depending on their use
> of the file pool, users may receive error messages (DMS1146E) and error
> return codes (with reason codes 51010 or 51020). When the logical
> catalog space is exhausted, you need to increase the MAXUSERS value for
> the file pool. Follow the instructions in Chapter 11, "Regenerating a
> Repository File Pool," on page xxx.
> 
> Thanks,
> Sue
> 

-- 
Dave Jones
V/Soft Software
www.vsoft-software.com
Houston, TX
281.578.7544


Re: SFS question

2011-03-11 Thread Gentry, Stephen
So far, thanks to all who have replied.
I'm going over various numbers and need some clarification.  In the CMS
Pool Planning guide, Estimate Max Pool Size, the formula is:  maximum
enrolled users = 300 * (# system defined users / # system active users)
If I understand the definitions correctly, #system defined users is the
number of users in the USER DIRECT file. For us that number is 750 (but to
CMA, I'm going to round up to 1000). #system active users is the number of
users logged on; on average we have about 200 logged on. So,
plugging in the values to the formula above:
1500 = 300 * (1000 / 200)
So the MAXUSERS statement should be 1500?
If so, I'm way low, currently 300.
Do I understand the formula and definitions correctly?

Steve

-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Sue Farrell
Sent: Friday, March 11, 2011 8:56 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS question

Steve,
 
You need to increase your MAXUSERS setting by doing a FILESERV
REGENERATE for the file pool.

Although it's buried, the 51010 reason code is mentioned in Chapter 5 of
the CMS File Pool Planning, Administration, and Operation manual:

What Happens When the Limit is Reached: Logical catalog space is reserved
during file pool generation. FILESERV GENERATE processing uses the
MAXUSERS value to estimate and set the maximum logical catalog space. When
the server runs out of logical space, it displays a warning message on its
console and continues processing. Depending on their use of the file pool,
users may receive error messages (DMS1146E) and error return codes (with
reason codes 51010 or 51020). When the logical catalog space is exhausted,
you need to increase the MAXUSERS value for the file pool. Follow the
instructions in Chapter 11, "Regenerating a Repository File Pool," on page
xxx.

Thanks,
Sue


Re: SFS question

2011-03-11 Thread Sue Farrell
Steve,
 
You need to increase your MAXUSERS setting by doing a FILESERV
REGENERATE for the file pool.

Although it's buried, the 51010 reason code is mentioned in Chapter 5 of
the CMS File Pool Planning, Administration, and Operation manual:

What Happens When the Limit is Reached: Logical catalog space is reserved
during file pool generation. FILESERV GENERATE processing uses the
MAXUSERS value to estimate and set the maximum logical catalog space. When
the server runs out of logical space, it displays a warning message on its
console and continues processing. Depending on their use of the file pool,
users may receive error messages (DMS1146E) and error return codes (with
reason codes 51010 or 51020). When the logical catalog space is exhausted,
you need to increase the MAXUSERS value for the file pool. Follow the
instructions in Chapter 11, “Regenerating a Repository File Pool,” on page
xxx.

Thanks,
Sue


Re: SFS question

2011-03-11 Thread Ward, Mike S
How large is the catalog?

-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Dave Jones
Sent: Thursday, March 10, 2011 3:40 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS question

Hi, Steve.
Yes, you need to expand your storage group 1 size by adding more DASD
space

DJ

On 03/10/2011 03:33 PM, Gentry, Stephen wrote:
> I'm getting the following error on a DMSCLOSE:
>
> 51010 - No space for data left in catalog space.
>
> I am writing a group of files, 733 of them, to an SFS pool
>
> Does the error message mean that I don't have enough room in storage
> group 1?
>
> TIA
>
> Steve
>

--
Dave Jones
V/Soft Software
www.vsoft-software.com
Houston, TX
281.578.7544



Re: SFS question

2011-03-10 Thread Ivica Brodaric
Do QUERY FILEPOOL CATALOG to see the amount of catalog data blocks and
catalog index blocks used. Total number of catalog space blocks (data+index)
is MAXUSERS*85. Maybe your MAXUSERS value is too small?

Ivica Brodaric
BNZ


Re: SFS question

2011-03-10 Thread Gentry, Stephen
Here are the results from the QUERY:

q filepool storgrp vmsyse:

   VMSYSE File Pool Storage Groups

Start-up Date 03/02/11  Query Date 03/10/11
Start-up Time 23:39:25  Query Time 16:57:49

STORAGE GROUP INFORMATION

 Storage      4K Blocks          4K Blocks
 Group No.    In-Use             Free
 1            17973   -  16%     96971
 2            8712106 -  67%     4278828

The total number of files is 733, occupying approx. 2022 4K blocks.
So, I don't understand why I'm getting this error.

I have defined storage group 1 with 40 cylinders (7184 4K blocks)
across 16 3390 mod3's.  To clarify, that's 40 cylinders on each mod3.

I'm wondering if there isn't enough room on one of the mod3's and thus I
get this message.  Due to the nature of SFS, I assume it would spread
the load across the 16 drives.

Steve

 

 

From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Kris Buelens
Sent: Thursday, March 10, 2011 4:43 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS question

 

That's what I'd conclude too.

Try Q FILEPOOL STORGRP  (or use SFSULIST, it shows the summary too)

2011/3/10 Gentry, Stephen 

I'm getting the following error on a DMSCLOSE:

51010 - No space for data left in catalog space.

I am writing a group of files, 733 of them, to an SFS pool

Does the error message mean that I don't have enough room in storage
group 1?

TIA

Steve




-- 
Kris Buelens,
IBM Belgium, VM customer support



Re: SFS question

2011-03-10 Thread Schuh, Richard
Yes.


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf 
Of Gentry, Stephen
Sent: Thursday, March 10, 2011 1:33 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: SFS question

I'm getting the following error on a DMSCLOSE:
51010 - No space for data left in catalog space.
I am writing a group of files, 733 of them, to an SFS pool
Does the error message mean that I don't have enough room in storage group 1?
TIA
Steve


Re: SFS question

2011-03-10 Thread Kris Buelens
That's what I'd conclude too.

Try Q FILEPOOL STORGRP  (or use SFSULIST, it shows the summary too)

2011/3/10 Gentry, Stephen 

> I’m getting the following error on a DMSCLOSE:
>
> 51010 - No space for data left in catalog space.
>
> I am writing a group of files, 733 of them, to an SFS pool
>
> Does the error message mean that I don’t have enough room in storage group
> 1?
>
> TIA
>
> Steve
>



-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: SFS question

2011-03-10 Thread Dave Jones
Hi, Steve.
Yes, you need to expand your storage group 1 size by adding more DASD
space

DJ

On 03/10/2011 03:33 PM, Gentry, Stephen wrote:
> I’m getting the following error on a DMSCLOSE:
> 
> 51010 - No space for data left in catalog space.
> 
> I am writing a group of files, 733 of them, to an SFS pool
> 
> Does the error message mean that I don’t have enough room in storage
> group 1?
> 
> TIA
> 
> Steve
> 

-- 
Dave Jones
V/Soft Software
www.vsoft-software.com
Houston, TX
281.578.7544


SFS question

2011-03-10 Thread Gentry, Stephen
I'm getting the following error on a DMSCLOSE:

51010 - No space for data left in catalog space.

I am writing a group of files, 733 of them, to an SFS pool

Does the error message mean that I don't have enough room in storage
group 1?

TIA

Steve



Re: CMS SFS Question

2011-03-01 Thread Schuh, Richard
I don't believe that I said DELETE USER RICHARD. I certainly did not intend to 
imply that, nor did I intend for someone to infer it. I should have stated it 
better.

Regards, 
Richard Schuh 

 

> -Original Message-
> From: The IBM z/VM Operating System 
> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Les Koehler
> Sent: Tuesday, March 01, 2011 2:07 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: CMS SFS Question
> 
> That's NOT the scenario you gave in your original note! You 
> wrote about deleting Richard when you wrote:
> 
> It is possible for one user to grant access to other users
> who are not enrolled. DELETE USER does not clean up these
> permissions.
> 
> I don't see *any* indication that would trigger a DELETE USER 
> Les (using your scenario, which was reversed from mine, 
> further confusing the issue).
> 
> Les
> 
> Schuh, Richard wrote:
> > It is permissions granted to users who are not enrolled 
> that is the issue. Here is the scenario:
> > 
> > User Richard is enrolled
> > User Les is not enrolled
> > Richard grants Les some SFS authorities.
> > DELETE USER LES is issued without enrolling LES (or no 
> DELETE USER is 
> > issued for LES) The authorities granted to LES by RICHARD 
> are left hanging and will be applied to any newly created LES 
> regardless of the identity of the owner.
> > 
> > If LES is enrolled before the DELETE USER, those 
> authorities granted to LES by others are removed. By doing 
> the ENROLL for 0 blocks for any userid that is to be deleted, 
> no ghost authorities are given to new users. The userids are 
> unconditionally enrolled. If the user has already been 
> enrolled and owns a file space, the enroll will fail. Because 
> all I care about is that the user be enrolled, I ignore that failure. 
> > 
> > 
> > Regards,
> > Richard Schuh
> > 
> >  
> > 
> >> -Original Message-
> >> From: The IBM z/VM Operating System
> >> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Les Koehler
> >> Sent: Tuesday, March 01, 2011 1:24 PM
> >> To: IBMVM@LISTSERV.UARK.EDU
> >> Subject: Re: CMS SFS Question
> >>
> >> I guess there's something implied there that I don't get. 
> >> Scenario, from your note:
> >>
> >> Your task is to delete LES, who is enrolled, from the SFS 
> system LES 
> >> has granted rights to RICHARD but RICHARD is not enrolled
> >>
> >> How does enrolling LES for 0 blocks do anything about the granted 
> >> rights that RICHARD has?
> >>
> >> Les
> >>
> >> Schuh, Richard wrote:
> >>> I simply enroll any user to be deleted for 0 blocks. The
> >> alternative is to scan the sfs directories and files 
> looking for such 
> >> users. It is much easier to attempt the enroll. If it fails, it is 
> >> because the user is already enrolled.
> >>> Regards,
> >>> Richard Schuh
> >>>
> >>>  
> >>>
> >>>> -Original Message-
> >>>> From: The IBM z/VM Operating System 
> >>>> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Les Koehler
> >>>> Sent: Tuesday, March 01, 2011 12:22 PM
> >>>> To: IBMVM@LISTSERV.UARK.EDU
> >>>> Subject: Re: CMS SFS Question
> >>>>
> >>>> I'm curious: How do you find the user who is not enrolled, but 
> >>>> granted rights to the target user to be deleted?
> >>>>
> >>>> Les
> >>>>
> >>>> Schuh, Richard wrote:
> >>>>> The Pipe is the easiest. 
> >>>>>
> >>>>> PIPE < user list | spec /delete user/ 1 w1 nw | cms | >
> >> delete log a
> >>>>> Note, however, that if you have an SFS that has a lot of
> >>>> files and permissions, each DELETE USER can take a long
> >> time, so you
> >>>> do not want to do this on an id that you might need soon 
> after you 
> >>>> enter the PIPE command. In our shop, an individual 
> DELETE USER can 
> >>>> take upwards of 10 minutes.
> >>>>> Cleaning up SFS when a userid is deleted is important from
> >>>> a security standpoint. If the same id should be given to a
> >> different
> >>>> person, it would automatically inherit permissions from 
> the prior 
> >>>> owner. You should be doing a DELETE USER every time that a
> >>>> userid is deleted from the directory.

Re: CMS SFS Question

2011-03-01 Thread Les Koehler
That's NOT the scenario you gave in your original note! You wrote about deleting 
Richard when you wrote:


It is possible for one user to grant access to other users
who are not enrolled. DELETE USER does not clean up these
permissions.

I don't see *any* indication that would trigger a DELETE USER Les (using your 
scenario, which was reversed from mine, further confusing the issue).


Les

Schuh, Richard wrote:

It is permissions granted to users who are not enrolled that is the issue. Here 
is the scenario:

User Richard is enrolled
User Les is not enrolled
Richard grants Les some SFS authorities.
DELETE USER LES is issued without enrolling LES (or no DELETE USER is issued 
for LES)
The authorities granted to LES by RICHARD are left hanging and will be applied to any newly created LES regardless of the identity of the owner. 

If LES is enrolled before the DELETE USER, those authorities granted to LES by others are removed. By doing the ENROLL for 0 blocks for any userid that is to be deleted, no ghost authorities are given to new users. The userids are unconditionally enrolled. If the user has already been enrolled and owns a file space, the enroll will fail. Because all I care about is that the user be enrolled, I ignore that failure. 



Regards, 
Richard Schuh 

 


-Original Message-
From: The IBM z/VM Operating System 
[mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Les Koehler

Sent: Tuesday, March 01, 2011 1:24 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: CMS SFS Question

I guess there's something implied there that I don't get. 
Scenario, from your note:


Your task is to delete LES, who is enrolled, from the SFS 
system LES has granted rights to RICHARD but RICHARD is not enrolled


How does enrolling LES for 0 blocks do anything about the 
granted rights that RICHARD has?


Les

Schuh, Richard wrote:
I simply enroll any user to be deleted for 0 blocks. The 
alternative is to scan the sfs directories and files looking 
for such users. It is much easier to attempt the enroll. If 
it fails, it is because the user is already enrolled.

Regards,
Richard Schuh

 


-Original Message-
From: The IBM z/VM Operating System
[mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Les Koehler
Sent: Tuesday, March 01, 2011 12:22 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: CMS SFS Question

I'm curious: How do you find the user who is not enrolled, but 
granted rights to the target user to be deleted?


Les

Schuh, Richard wrote:
The Pipe is the easiest. 

PIPE < user list | spec /delete user/ 1 w1 nw | cms | > 

delete log a

Note, however, that if you have an SFS that has a lot of
files and permissions, each DELETE USER can take a long 
time, so you 
do not want to do this on an id that you might need soon after you 
enter the PIPE command. In our shop, an individual DELETE USER can 
take upwards of 10 minutes.

Cleaning up SFS when a userid is deleted is important from
a security standpoint. If the same id should be given to a 
different 
person, it would automatically inherit permissions from the prior 
owner. You should be doing a DELETE USER every time that a 
userid is 

deleted from the directory.

It is possible for one user to grant access to other users
who are not enrolled. DELETE USER does not clean up these 
permissions. To get rid of them, you have to first enroll 
the user in 
the pool even if it is for 0 blocks. To solve this in our 
automated 
process, each user to be deleted is enrolled for 0 blocks, 
ignoring 
the return code. We don't care if the user is already 
enrolled, the 
attempt does no harm. After the enroll, the deletion will 
clean out 

all permissions granted to or by the user being deleted.

Regards,
Richard Schuh

 


-Original Message-
From: The IBM z/VM Operating System 
[mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Rick Troth

Sent: Tuesday, March 01, 2011 10:54 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: CMS SFS Question

Nahh ... even easier ... Pipes.
I'm thinking two pipes.  One to gather the Q ENROLL 
output then a 
second to actually perform the deletes.  In between shove that Q 
ENROLL output into a file, manually edit for confirmation,

then feed

the selected content into DELETE USER.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/



On Tue, 1 Mar 2011, Rich Smrcina wrote:


REXX?

On 03/01/2011 12:35 PM, Wandschneider, Scott wrote:

Is there a way to delete multiple users at once or create

a "batch" job to delete multiple users that are enrolled in SFS?

Thank you,
Scott R Wandschneider
Systems Programmer 3 || Infocrossing, a Wipro Company || 11707
Miracle Hills Drive, Omaha, NE, 68154-4457 || 402.963.8905 ||
847.849.7223 || scott.wandschnei...@infocrossing.com
**Think Green - Please print responsibly**




Re: CMS SFS Question

2011-03-01 Thread Schuh, Richard
It is permissions granted to users who are not enrolled that is the issue. Here 
is the scenario:

User Richard is enrolled
User Les is not enrolled
Richard grants Les some SFS authorities.
DELETE USER LES is issued without enrolling LES (or no DELETE USER is issued 
for LES)
The authorities granted to LES by RICHARD are left hanging and will be applied 
to any newly created LES regardless of the identity of the owner. 

If LES is enrolled before the DELETE USER, those authorities granted to LES by 
others are removed. By doing the ENROLL for 0 blocks for any userid that is to 
be deleted, no ghost authorities are given to new users. The userids are 
unconditionally enrolled. If the user has already been enrolled and owns a file 
space, the enroll will fail. Because all I care about is that the user be 
enrolled, I ignore that failure. 


Regards, 
Richard Schuh 

 

> -Original Message-
> From: The IBM z/VM Operating System 
> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Les Koehler
> Sent: Tuesday, March 01, 2011 1:24 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: CMS SFS Question
> 
> I guess there's something implied there that I don't get. 
> Scenario, from your note:
> 
> Your task is to delete LES, who is enrolled, from the SFS 
> system LES has granted rights to RICHARD but RICHARD is not enrolled
> 
> How does enrolling LES for 0 blocks do anything about the 
> granted rights that RICHARD has?
> 
> Les
> 
> Schuh, Richard wrote:
> > I simply enroll any user to be deleted for 0 blocks. The 
> alternative is to scan the sfs directories and files looking 
> for such users. It is much easier to attempt the enroll. If 
> it fails, it is because the user is already enrolled.
> > 
> > Regards,
> > Richard Schuh
> > 
> >  
> > 
> >> -Original Message-
> >> From: The IBM z/VM Operating System
> >> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Les Koehler
> >> Sent: Tuesday, March 01, 2011 12:22 PM
> >> To: IBMVM@LISTSERV.UARK.EDU
> >> Subject: Re: CMS SFS Question
> >>
> >> I'm curious: How do you find the user who is not enrolled, but 
> >> granted rights to the target user to be deleted?
> >>
> >> Les
> >>
> >> Schuh, Richard wrote:
> >>> The Pipe is the easiest. 
> >>>
> >>> PIPE < user list | spec /delete user/ 1 w1 nw | cms | > 
> delete log a
> >>>
> >>> Note, however, that if you have an SFS that has a lot of
> >> files and permissions, each DELETE USER can take a long 
> time, so you 
> >> do not want to do this on an id that you might need soon after you 
> >> enter the PIPE command. In our shop, an individual DELETE USER can 
> >> take upwards of 10 minutes.
> >>> Cleaning up SFS when a userid is deleted is important from
> >> a security standpoint. If the same id should be given to a 
> different 
> >> person, it would automatically inherit permissions from the prior 
> >> owner. You should be doing a DELETE USER every time that a 
> userid is 
> >> deleted from the directory.
> >>> It is possible for one user to grant access to other users
> >> who are not enrolled. DELETE USER does not clean up these 
> >> permissions. To get rid of them, you have to first enroll 
> the user in 
> >> the pool even if it is for 0 blocks. To solve this in our 
> automated 
> >> process, each user to be deleted is enrolled for 0 blocks, 
> ignoring 
> >> the return code. We don't care if the user is already 
> enrolled, the 
> >> attempt does no harm. After the enroll, the deletion will 
> clean out 
> >> all permissions granted to or by the user being deleted.
> >>>
> >>> Regards,
> >>> Richard Schuh
> >>>
> >>>  
> >>>
> >>>> -Original Message-
> >>>> From: The IBM z/VM Operating System 
> >>>> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Rick Troth
> >>>> Sent: Tuesday, March 01, 2011 10:54 AM
> >>>> To: IBMVM@LISTSERV.UARK.EDU
> >>>> Subject: Re: CMS SFS Question
> >>>>
> >>>> Nahh ... even easier ... Pipes.
> >>>> I'm thinking two pipes.  One to gather the Q ENROLL 
> output then a 
> >>>> second to actually perform the deletes.  In between shove that Q 
> >>>> ENROLL output into a file, manually edit for confirmation,
> >> then feed
> >>>> the selected content into DELETE USER.
> >>>>
> >>>>

Re: CMS SFS Question

2011-03-01 Thread Les Koehler

I guess there's something implied there that I don't get. Scenario, from your 
note:

Your task is to delete LES, who is enrolled, from the SFS system
LES has granted rights to RICHARD but RICHARD is not enrolled

How does enrolling LES for 0 blocks do anything about the granted rights that 
RICHARD has?


Les

Schuh, Richard wrote:

I simply enroll any user to be deleted for 0 blocks. The alternative is to scan 
the sfs directories and files looking for such users. It is much easier to 
attempt the enroll. If it fails, it is because the user is already enrolled.

Regards, 
Richard Schuh 

 


-Original Message-
From: The IBM z/VM Operating System 
[mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Les Koehler

Sent: Tuesday, March 01, 2011 12:22 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: CMS SFS Question

I'm curious: How do you find the user who is not enrolled, 
but granted rights to the target user to be deleted?


Les

Schuh, Richard wrote:
The Pipe is the easiest. 


PIPE < user list | spec /delete user/ 1 w1 nw | cms | > delete log a

Note, however, that if you have an SFS that has a lot of 
files and permissions, each DELETE USER can take a long time, 
so you do not want to do this on an id that you might need 
soon after you enter the PIPE command. In our shop, an 
individual DELETE USER can take upwards of 10 minutes. 
Cleaning up SFS when a userid is deleted is important from 
a security standpoint. If the same id should be given to a 
different person, it would automatically inherit permissions 
from the prior owner. You should be doing a DELETE USER every 
time that a userid is deleted from the directory. 
It is possible for one user to grant access to other users 
who are not enrolled. DELETE USER does not clean up these 
permissions. To get rid of them, you have to first enroll the 
user in the pool even if it is for 0 blocks. To solve this in 
our automated process, each user to be deleted is enrolled 
for 0 blocks, ignoring the return code. We don't care if the 
user is already enrolled, the attempt does no harm. After the 
enroll, the deletion will clean out all permissions granted 
to or by the user being deleted.


Regards,
Richard Schuh

 


-Original Message-
From: The IBM z/VM Operating System
[mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Rick Troth
Sent: Tuesday, March 01, 2011 10:54 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: CMS SFS Question

Nahh ... even easier ... Pipes.
I'm thinking two pipes.  One to gather the Q ENROLL output then a 
second to actually perform the deletes.  In between shove that Q 
ENROLL output into a file, manually edit for confirmation, 
then feed 

the selected content into DELETE USER.

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/



On Tue, 1 Mar 2011, Rich Smrcina wrote:


REXX?

On 03/01/2011 12:35 PM, Wandschneider, Scott wrote:

Is there a way to delete multiple users at once or create

a "batch" job to delete multiple users that are enrolled in SFS?

Thank you,
Scott R Wandschneider
Systems Programmer 3 || Infocrossing, a Wipro Company || 11707
Miracle Hills Drive, Omaha, NE, 68154-4457 || 402.963.8905 ||
847.849.7223 || scott.wandschnei...@infocrossing.com
**Think Green - Please print responsibly**




--
Rich Smrcina
Velocity Software, Inc.
http://www.velocitysoftware.com

Catch the WAVV! http://www.wavv.org
WAVV 2011 - April 15-19, 2011 Colorado Springs, CO




Re: CMS SFS Question

2011-03-01 Thread Schuh, Richard
I simply enroll any user to be deleted for 0 blocks. The alternative is to scan 
the sfs directories and files looking for such users. It is much easier to 
attempt the enroll. If it fails, it is because the user is already enrolled.
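
A minimal sketch of that in REXX (VMSYSU: is only an assumed file pool
name here; substitute your own):

  /* Unconditionally enroll, then delete, one userid */
  userid = 'LES'                            /* example id       */
  'ENROLL USER' userid 'VMSYSU: (BLOCKS 0'  /* rc ignored; the  */
                                  /* id may already be enrolled */
  'DELETE USER' userid            /* cleans grants to/by the id */
  say 'DELETE USER' userid 'ended with rc' rc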

Regards, 
Richard Schuh 

 

> -Original Message-
> From: The IBM z/VM Operating System 
> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Les Koehler
> Sent: Tuesday, March 01, 2011 12:22 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: CMS SFS Question
> 
> I'm curious: How do you find the user who is not enrolled, 
> but granted rights to the target user to be deleted?
> 
> Les
> 
> Schuh, Richard wrote:
> > The Pipe is the easiest. 
> > 
> > PIPE < user list | spec /delete user/ 1 w1 nw | cms | > delete log a
> > 
> > Note, however, that if you have an SFS that has a lot of 
> files and permissions, each DELETE USER can take a long time, 
> so you do not want to do this on an id that you might need 
> soon after you enter the PIPE command. In our shop, an 
> individual DELETE USER can take upwards of 10 minutes. 
> > 
> > Cleaning up SFS when a userid is deleted is important from 
> a security standpoint. If the same id should be given to a 
> different person, it would automatically inherit permissions 
> from the prior owner. You should be doing a DELETE USER every 
> time that a userid is deleted from the directory. 
> > 
> > It is possible for one user to grant access to other users 
> who are not enrolled. DELETE USER does not clean up these 
> permissions. To get rid of them, you have to first enroll the 
> user in the pool even if it is for 0 blocks. To solve this in 
> our automated process, each user to be deleted is enrolled 
> for 0 blocks, ignoring the return code. We don't care if the 
> user is already enrolled, the attempt does no harm. After the 
> enroll, the deletion will clean out all permissions granted 
> to or by the user being deleted.
> > 
> > 
> > Regards,
> > Richard Schuh
> > 
> >  
> > 
> >> -Original Message-----
> >> From: The IBM z/VM Operating System
> >> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Rick Troth
> >> Sent: Tuesday, March 01, 2011 10:54 AM
> >> To: IBMVM@LISTSERV.UARK.EDU
> >> Subject: Re: CMS SFS Question
> >>
> >> Nahh ... even easier ... Pipes.
> >> I'm thinking two pipes.  One to gather the Q ENROLL output then a 
> >> second to actually perform the deletes.  In between shove that Q 
> >> ENROLL output into a file, manually edit for confirmation, 
> then feed 
> >> the selected content into DELETE USER.
> >>
> >> -- R;
> >> Rick Troth
> >> Velocity Software
> >> http://www.velocitysoftware.com/
> >>
> >>
> >>
> >> On Tue, 1 Mar 2011, Rich Smrcina wrote:
> >>
> >>> REXX?
> >>>
> >>> On 03/01/2011 12:35 PM, Wandschneider, Scott wrote:
> >>>> Is there a way to delete multiple users at once or create
> >> a "batch" job to delete multiple users that are enrolled in SFS?
> >>>> Thank you,
> >>>> Scott R Wandschneider
> >>>> Systems Programmer 3 || Infocrossing, a Wipro Company || 11707
> >>>> Miracle Hills Drive, Omaha, NE, 68154-4457 || 402.963.8905 ||
> >>>> 847.849.7223 || scott.wandschnei...@infocrossing.com
> >>>> **Think Green - Please print responsibly**
> >>>>
> >>>>
> >>>>
> >>>
> >>> --
> >>> Rich Smrcina
> >>> Velocity Software, Inc.
> >>> http://www.velocitysoftware.com
> >>>
> >>> Catch the WAVV! http://www.wavv.org
> >>> WAVV 2011 - April 15-19, 2011 Colorado Springs, CO
> >>>
> >>>
> 

Re: CMS SFS Question

2011-03-01 Thread Les Koehler
I'm curious: How do you find the user who is not enrolled, but granted rights to 
the target user to be deleted?


Les

Schuh, Richard wrote:
The Pipe is the easiest. 


PIPE < user list | spec /delete user/ 1 w1 nw | cms | > delete log a

Note, however, that if you have an SFS that has a lot of files and permissions, each DELETE USER can take a long time, so you do not want to do this on an id that you might need soon after you enter the PIPE command. In our shop, an individual DELETE USER can take upwards of 10 minutes. 

Cleaning up SFS when a userid is deleted is important from a security standpoint. If the same id should be given to a different person, it would automatically inherit permissions from the prior owner. You should be doing a DELETE USER every time that a userid is deleted from the directory. 


It is possible for one user to grant access to other users who are not 
enrolled. DELETE USER does not clean up these permissions. To get rid of them, 
you have to first enroll the user in the pool even if it is for 0 blocks. To 
solve this in our automated process, each user to be deleted is enrolled for 0 
blocks, ignoring the return code. We don't care if the user is already 
enrolled, the attempt does no harm. After the enroll, the deletion will clean 
out all permissions granted to or by the user being deleted.


Regards, 
Richard Schuh 

 


-Original Message-
From: The IBM z/VM Operating System 
[mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Rick Troth

Sent: Tuesday, March 01, 2011 10:54 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: CMS SFS Question

Nahh ... even easier ... Pipes.
I'm thinking two pipes.  One to gather the Q ENROLL output 
then a second to actually perform the deletes.  In between 
shove that Q ENROLL output into a file, manually edit for 
confirmation, then feed the selected content into DELETE USER.


-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/



On Tue, 1 Mar 2011, Rich Smrcina wrote:


REXX?

On 03/01/2011 12:35 PM, Wandschneider, Scott wrote:
Is there a way to delete multiple users at once or create 

a "batch" job to delete multiple users that are enrolled in SFS?

Thank you,
Scott R Wandschneider
Systems Programmer 3 || Infocrossing, a Wipro Company || 11707
Miracle Hills Drive, Omaha, NE, 68154-4457 || 402.963.8905 ||
847.849.7223 || scott.wandschnei...@infocrossing.com
**Think Green - Please print responsibly**





--
Rich Smrcina
Velocity Software, Inc.
http://www.velocitysoftware.com

Catch the WAVV! http://www.wavv.org
WAVV 2011 - April 15-19, 2011 Colorado Springs, CO




Re: CMS SFS Question

2011-03-01 Thread Schuh, Richard
The Pipe is the easiest. 

PIPE < user list | spec /delete user/ 1 w1 nw | cms | > delete log a

Note, however, that if you have an SFS that has a lot of files and permissions, 
each DELETE USER can take a long time, so you do not want to do this on an id 
that you might need soon after you enter the PIPE command. In our shop, an 
individual DELETE USER can take upwards of 10 minutes. 

Cleaning up SFS when a userid is deleted is important from a security 
standpoint. If the same id should be given to a different person, it would 
automatically inherit permissions from the prior owner. You should be doing a 
DELETE USER every time that a userid is deleted from the directory. 

It is possible for one user to grant access to other users who are not 
enrolled. DELETE USER does not clean up these permissions. To get rid of them, 
you have to first enroll the user in the pool even if it is for 0 blocks. To 
solve this in our automated process, each user to be deleted is enrolled for 0 
blocks, ignoring the return code. We don't care if the user is already 
enrolled, the attempt does no harm. After the enroll, the deletion will clean 
out all permissions granted to or by the user being deleted.
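A minimal REXX sketch of that enroll-then-delete flow (assumptions: you have
admin authority in the pool, a file USERS LIST A holds one userid per line,
SFSPOOL is a placeholder pool name, and the exact ENROLL USER and DELETE USER
operands should be checked against your release):

/* DELUSERS EXEC - sketch only, see assumptions above                */
pool = 'SFSPOOL:'
'PIPE < USERS LIST A | stem user.'
do i = 1 to user.0
   parse upper var user.i id .
   'ENROLL USER' id pool '(BLOCKS 0'  /* rc ignored: may be enrolled */
   'DELETE USER' id pool              /* drops grants to and by id   */
   if rc <> 0 then say 'DELETE USER' id 'ended with rc' rc
end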


Regards, 
Richard Schuh 

 

> -Original Message-
> From: The IBM z/VM Operating System 
> [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Rick Troth
> Sent: Tuesday, March 01, 2011 10:54 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: CMS SFS Question
> 
> Nahh ... even easier ... Pipes.
> I'm thinking two pipes.  One to gather the Q ENROLL output 
> then a second to actually perform the deletes.  In between 
> shove that Q ENROLL output into a file, manually edit for 
> confirmation, then feed the selected content into DELETE USER.
> 
> -- R;
> Rick Troth
> Velocity Software
> http://www.velocitysoftware.com/
> 
> 
> 
> On Tue, 1 Mar 2011, Rich Smrcina wrote:
> 
> > REXX?
> > 
> > On 03/01/2011 12:35 PM, Wandschneider, Scott wrote:
> > > Is there a way to delete multiple users at once or create a "batch" job
> > > to delete multiple users that are enrolled in SFS?
> > >
> > > Thank you,
> > > Scott R Wandschneider
> > > Systems Programmer 3 || Infocrossing, a Wipro Company || 11707 Miracle Hills
> > > Drive, Omaha, NE, 68154-4457 || 402.963.8905 || 847.849.7223 ||
> > > scott.wandschnei...@infocrossing.com **Think Green - Please print responsibly**
> > >
> > >
> > >
> > 
> > 
> > --
> > Rich Smrcina
> > Velocity Software, Inc.
> > http://www.velocitysoftware.com
> > 
> > Catch the WAVV! http://www.wavv.org
> > WAVV 2011 - April 15-19, 2011 Colorado Springs, CO
> > 
> > 
> 

Re: CMS SFS Question

2011-03-01 Thread Rick Troth
Nahh ... even easier ... Pipes.
I'm thinking two pipes.  One to gather the Q ENROLL output
then a second to actually perform the deletes.  In between
shove that Q ENROLL output into a file, manually edit for confirmation,
then feed the selected content into DELETE USER.
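For the record, a sketch of those two pipes (assumptions: VMSYSU is the pool,
the userid is the first word of each Q ENROLL output line, and the exact
QUERY ENROLL operands should be checked against your release; review USERS
LIST before running the second pipe):

/* Pipe 1: capture the enrollment list for manual review            */
'PIPE command QUERY ENROLL USER ALL VMSYSU: | > USERS LIST A'
/* ... XEDIT USERS LIST A: keep only the ids to be deleted ...      */
/* Pipe 2: prefix each surviving userid with DELETE USER and run it */
'PIPE < USERS LIST A | spec /DELETE USER/ 1 w1 nw | cms | > DELETE LOG A'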

-- R;
Rick Troth
Velocity Software
http://www.velocitysoftware.com/



On Tue, 1 Mar 2011, Rich Smrcina wrote:

> REXX?
> 
> On 03/01/2011 12:35 PM, Wandschneider, Scott wrote:
> > Is there a way to delete multiple users at once or create a "batch" job to 
> > delete multiple users that are enrolled in SFS?
> >
> > Thank you,
> > Scott R Wandschneider
> > Systems Programmer 3 || Infocrossing, a Wipro Company || 11707 Miracle Hills
> > Drive, Omaha, NE, 68154-4457 || 402.963.8905 || 847.849.7223 ||
> > scott.wandschnei...@infocrossing.com **Think Green - Please print responsibly**
> >
> >
> >
> 
> 
> -- 
> Rich Smrcina
> Velocity Software, Inc.
> http://www.velocitysoftware.com
> 
> Catch the WAVV! http://www.wavv.org
> WAVV 2011 - April 15-19, 2011 Colorado Springs, CO
> 
> 


Re: CMS SFS Question

2011-03-01 Thread Rich Smrcina

REXX?

On 03/01/2011 12:35 PM, Wandschneider, Scott wrote:

Is there a way to delete multiple users at once or create a "batch" job to 
delete multiple users that are enrolled in SFS?

Thank you,
Scott R Wandschneider
Systems Programmer 3 || Infocrossing, a Wipro Company || 11707 Miracle Hills
Drive, Omaha, NE, 68154-4457 || 402.963.8905 || 847.849.7223 ||
scott.wandschnei...@infocrossing.com **Think Green - Please print responsibly**






--
Rich Smrcina
Velocity Software, Inc.
http://www.velocitysoftware.com

Catch the WAVV! http://www.wavv.org
WAVV 2011 - April 15-19, 2011 Colorado Springs, CO


CMS SFS Question

2011-03-01 Thread Wandschneider, Scott
Is there a way to delete multiple users at once or create a "batch" job to 
delete multiple users that are enrolled in SFS?  

Thank you,
Scott R Wandschneider
Systems Programmer 3 || Infocrossing, a Wipro Company || 11707 Miracle Hills
Drive, Omaha, NE, 68154-4457 || 402.963.8905 || 847.849.7223 ||
scott.wandschnei...@infocrossing.com **Think Green - Please print responsibly**




Re: SFS question

2011-01-04 Thread Kris Buelens
Not really true, Dave: the SFSULIST panel is complete even when you are not an
SFS admin.  Only the Catalog stats are not available to non-admins (upper right
corner).
But, yes, using PF11 to start DIRLIST for the user pointed to will only list
the subdirs one is authorized to see.

2011/1/4 Dave Jones 

> Steve, note that your user id must be enrolled in the file pool as an
> administrator for SFSULIST to return all of the meaningful results.
>
> DJ
>
> On 01/04/2011 08:25 AM, Gentry, Stephen wrote:
> > Thanks Dave.  I was sure there was something out there that would
> > provide this info.  I had SFSULIST already downloaded just didn't know
> > that's what it would/could do.
> > Steve
> >
> > -Original Message-
> > From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
> > Behalf Of Dave Jones
> > Sent: Tuesday, January 04, 2011 9:20 AM
> > To: IBMVM@LISTSERV.UARK.EDU
> > Subject: Re: SFS question
> >
> > Steve, grab Kris Buelens' SFSULIST tool from the VM download page; it
> > does exactly what you are looking for.
> >
> > Have a good one.
> >
> > On 01/04/2011 08:13 AM, Gentry, Stephen wrote:
> >> How can I tell what users and/or what files are in a storage group?
> > I'd
> >> prefer to know users but can trace it back if I know what files.
> >>
> >> Thanks,
> >>
> >> Steve
> >>
> >>
> >
>
> --
> Dave Jones
> V/Soft Software
> www.vsoft-software.com
> Houston, TX
> 281.578.7544
>



-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: SFS question

2011-01-04 Thread Dave Jones
Steve, note that your user id must be enrolled in the file pool as an
administrator for SFSULIST to return all of the meaningful results.

DJ

On 01/04/2011 08:25 AM, Gentry, Stephen wrote:
> Thanks Dave.  I was sure there was something out there that would
> provide this info.  I had SFSULIST already downloaded just didn't know
> that's what it would/could do.
> Steve
> 
> -Original Message-
> From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
> Behalf Of Dave Jones
> Sent: Tuesday, January 04, 2011 9:20 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: SFS question
> 
> Steve, grab Kris Buelens' SFSULIST tool from the VM download page; it
> does exactly what you are looking for.
> 
> Have a good one.
> 
> On 01/04/2011 08:13 AM, Gentry, Stephen wrote:
>> How can I tell what users and/or what files are in a storage group?
> I'd
>> prefer to know users but can trace it back if I know what files.
>>
>> Thanks,
>>
>> Steve
>>
>>
> 

-- 
Dave Jones
V/Soft Software
www.vsoft-software.com
Houston, TX
281.578.7544


Re: SFS question

2011-01-04 Thread Kris Buelens
SFSULIST is an interactive tool.  If you need this information in a REXX
exec, Q LIMITS is your friend.  If you also need to know the files: after Q
LIMITS, you use LISTDIR, ACCESS, and LISTFILE...
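A short sketch of that (assumptions: VMSYSU is the pool, the output has one
heading line, userid and storage group are the first two words of each data
line, and the exact Q LIMITS operands should be checked against your release):

/* List each enrolled user and its storage group                    */
'PIPE command QUERY LIMITS ALL VMSYSU: | stem line.'
do i = 2 to line.0            /* skip the heading line(s)           */
   parse var line.i userid grp .
   say userid 'is in storage group' grp
end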

2011/1/4 Gentry, Stephen 

> Thanks Dave.  I was sure there was something out there that would
> provide this info.  I had SFSULIST already downloaded just didn't know
> that's what it would/could do.
> Steve
>
> -Original Message-
> From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
> Behalf Of Dave Jones
> Sent: Tuesday, January 04, 2011 9:20 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: SFS question
>
> Steve, grab Kris Buelens' SFSULIST tool from the VM download page; it
> does exactly what you are looking for.
>
> Have a good one.
>
> On 01/04/2011 08:13 AM, Gentry, Stephen wrote:
> > How can I tell what users and/or what files are in a storage group?
> I'd
> > prefer to know users but can trace it back if I know what files.
> >
> > Thanks,
> >
> > Steve
> >
> >
>
> --
> Dave Jones
> V/Soft Software
> www.vsoft-software.com
> Houston, TX
> 281.578.7544
>



-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: SFS question

2011-01-04 Thread Gentry, Stephen
Thanks Dave.  I was sure there was something out there that would
provide this info.  I had SFSULIST already downloaded just didn't know
that's what it would/could do.
Steve

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Dave Jones
Sent: Tuesday, January 04, 2011 9:20 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: SFS question

Steve, grab Kris Buelens' SFSULIST tool from the VM download page; it
does exactly what you are looking for.

Have a good one.

On 01/04/2011 08:13 AM, Gentry, Stephen wrote:
> How can I tell what users and/or what files are in a storage group?
I'd
> prefer to know users but can trace it back if I know what files.
> 
> Thanks,
> 
> Steve
> 
> 

-- 
Dave Jones
V/Soft Software
www.vsoft-software.com
Houston, TX
281.578.7544


Re: SFS question

2011-01-04 Thread Dave Jones
Steve, grab Kris Buelens' SFSULIST tool from the VM download page; it
does exactly what you are looking for.

Have a good one.

On 01/04/2011 08:13 AM, Gentry, Stephen wrote:
> How can I tell what users and/or what files are in a storage group?  I'd
> prefer to know users but can trace it back if I know what files.
> 
> Thanks,
> 
> Steve
> 
> 

-- 
Dave Jones
V/Soft Software
www.vsoft-software.com
Houston, TX
281.578.7544


SFS question

2011-01-04 Thread Gentry, Stephen
How can I tell what users and/or what files are in a storage group?  I'd
prefer to know users but can trace it back if I know what files.

Thanks,

Steve



Re: Time out an SFS command

2010-12-24 Thread Alan Ackerman
On Fri, 24 Dec 2010 18:44:04 +0100, Berry van Sleeuwen  wrote:

>Would PIPSTOP cancel the SFS command (or any CP/CMS command for that
>matter)? If so then you could use PIPSTOP to end the pipeline. Just a
>theory, not tested in any way...

As John Hartmann explained:

>> From: "John P. Hartmann" 
>> Subject:  Re: Time out a CMS command
>>
>> The pipeline is dead in the water while the CMS command is executing;
>> no way it can force a timeout.  If you have two virtual machines to
>> play with, the situation is different.  Then you can send the command
>> and wait for a response with STARMSG and send HX when it times out.
>> Whether CMS reacts to the HX is another matter.


Re: Time out an SFS command

2010-12-24 Thread Alan Ackerman
We think the problem is IPGATE, which IBM does not support, right?

In one case when my PROFILE EXEC in Richmond, VA issues 

ACCESS SFSKC:ACKERMAN.TOOLS C

I can fix the problem via

#CP IPL CMS
ACC (NOPROF
FORCE IPGATE
AUTOLOG IPGATE

In the case I reported here, I don't even get an error message until the 
RC = 55, 18 hours after the 
hang started. At that point the problem is gone.

On Fri, 24 Dec 2010 00:20:52 -0500, Alan Altmark  wrote:

>John was right, you can't set a timeout for sfs commands.  Having another
>ID hx your ID after a too-long wait is about it.  That said, you're better
>off trying to find out why your workunit is hanging and solve the *real*
>problem.  Maybe the server is hung up on a backup.
>
>Depending on what's going on, you may have grounds for a PMR.
>
>Alan Altmark
>IBM Lab Services
>
>Merry Christmas, everyone!
>


Re: Time out an SFS command

2010-12-24 Thread Dave Jones
If anyone is interested, I have a TIMEBOMB module that lets a CMS user
set the CP command(s) to be executed and the amount of time to wait
before the timebomb 'explodes'.

Drop me a note off list if anyone would like a copy.

Merry Christmas to all

DJ

On 12/24/2010 07:44 AM, Ronald van der Laan wrote:
> Or use the CP timebomb to issue a timed CP command within your own userid.
> 
> Ronald van der Laan
> 

-- 
Dave Jones
V/Soft Software
www.vsoft-software.com
Houston, TX
281.578.7544


Re: Time out an SFS command

2010-12-24 Thread Berry van Sleeuwen
Would PIPSTOP cancel the SFS command (or any CP/CMS command for that
matter)? If so then you could use PIPSTOP to end the pipeline. Just a
theory, not tested in any way...

'PIPE (end \) command LISTDIR ...', /* execute command  */
'| stem emsg.',                     /* store the output */
'\ literal +60',                    /* 60 seconds       */
'| delay',                          /* wait for it      */
'| pipstop'                         /* cancel command   */

Regards, Berry.

Op 24-12-10 01:09, Alan Ackerman schreef:
> Moving this question from CMS-PIPELINES to IBMVM, since I am assured by
> John Hartmann that it cannot be done with CMS Pipelines.  Anyone figured 
> out how to get it to time out?
>
> Sender:   CMSTSO Pipelines Discussion List <pipeli...@vm.marist.edu>
> From: "Ackerman, Alan" 
> Subject:  Time out a CMS command
>
> Is there a PIPELINE idiom to force a CMS command to timeout after a
> certain length of time?
>
> I am issuing:
>
> 'PIPE command LISTDIR SFSLNB00:SFSADMIN.GETSMTP | stem emsg.'
>
> Occasionally (about once a week), this hangs for a long time (18 hours
> the last time) and then returns with RC = 55.
>
> I can live with RC=55, but not with my virtual machine being tied up for
> 18 hours. (There are other VMSCHED jobs it should be running.)
>
> From: "Schuh, Richard" 
> Subject:  Re: Time out a CMS command
>
> Look at the beat stage. It may work for this problem.
>
> From: "John P. Hartmann" 
> Subject:  Re: Time out a CMS command
>
> The pipeline is dead in the water while the CMS command is executing;
> no way it can force a timeout.  If you have two virtual machines to
> play with, the situation is different.  Then you can send the command
> and wait for a response with STARMSG and send HX when it times out.
> Whether CMS reacts to the HX is another matter.
>
> From: Rob van der Heij 
> Subject:  Re: Time out a CMS command
>
> Some SFS commands have the ability to hang forever and prevent any
> recovery. I recall we did some check right before these, just to make
> sure the remote file pool is actually alive. But I don't remember
> which check it was, maybe Bruce or Rod can fill in the blanks...
>
> From: "Ackerman, Alan" 
> Subject:  Re: Time out a CMS command
>
> I suppose I can just CP FORCE the hung userid. I'm not sure what state
> that would leave SFS in, though.
>
> Date: Tue, 21 Dec 2010 20:16:34 -0500
> From: Rich Greenberg 
> Subject:  Re: Time out a CMS command
>
> If you don't find a fix, instead of running it from VMSCHED, have
> VMSCHED autolog another service machine for the actual LISTDIR.  It
> wouldn't be a problem if the extra SVM gets hung.
>
> Date: Wed, 22 Dec 2010 12:09:49 +0100
> From: Rod Furey 
> Subject:  Re: Time out a CMS command
>
> Methinks Holger did it this way:
>
> start up a thread or two via multitasking CMS
> set up a timer on one thread
> set up the access (or whatever) on the other thread
> if the access completes before the time ticks, kill the timer thread
> if the timer thread ticks before the access completes, kill the access
> thread
>
> Don't ask me about the ramifications of doing this and what happens
> about cleanup. Multitasking CMS was never an area I hit before I went
> in search of other work.
>
> I do recall that the dev group did discover some problems in the mtCMS
> area at the time.
> I would hope that they've been fixed by now.
>
> Date: Wed, 22 Dec 2010 08:07:34 -0500
> From: Bruce Hayden 
> Subject:  Re: Time out a CMS command
>
> I haven't looked at this in years!  But, looking at the code, you have
> it essentially correct.  The only sticky part is that the code doesn't
> attempt to do an access, but tries to simulate some kind of SFS
> communication via APPC, which I'm sure is not a documented interface
> (it is coded in the exec as a long hex string.)
>
> Anyway, the best approach for the rest of us to this problem would be
> to use 2 userids, as was already suggested.  I doubt it would have any
> effect on SFS.  If this happened to my own id, I just enter #CP IPL
> CMS to cancel the APPC wait and recover and there was never a problem
> in SFS after the communication was fixed or reset.
>
>
>   


Re: Time out an SFS command

2010-12-24 Thread jcb

On 12/24/2010 8:44 AM, Ronald van der Laan wrote:

Or use the CP timebomb to issue a timed CP command within your own userid.

Ronald van der Laan
What about doing an async dmsconn csl call, and then if you get no 
response in several iterations of dmscheck, then close the workunit ???


John


Re: Time out an SFS command

2010-12-24 Thread Ronald van der Laan
Or use the CP timebomb to issue a timed CP command within your own userid.

Ronald van der Laan


Re: Time out an SFS command

2010-12-23 Thread Alan Altmark
John was right, you can't set a timeout for sfs commands.  Having another
ID hx your ID after a too-long wait is about it.  That said, you're better
off trying to find out why your workunit is hanging and solve the *real*
problem.  Maybe the server is hung up on a backup.

Depending on what's going on, you may have grounds for a PMR.

Alan Altmark
IBM Lab Services

Merry Christmas, everyone!


Re: Time out an SFS command

2010-12-23 Thread Alan Altmark
John was right, you can't cause an sfs command to time out.


- Original Message -
From: Alan Ackerman [alan.acker...@bankofamerica.com]
Sent: 12/23/2010 06:09 PM CST
To: IBMVM@LISTSERV.UARK.EDU
Subject: [IBMVM] Time out an SFS command



Moving this question from CMS-PIPELINES to IBMVM, since I am assured by
John Hartmann that it cannot be done with CMS Pipelines.  Anyone figured
out how to get it to time out?

Sender:   CMSTSO Pipelines Discussion List 
From: "Ackerman, Alan" 
Subject:  Time out a CMS command

Is there a PIPELINE idiom to force a CMS command to timeout after a
certain length of time?

I am issuing:

'PIPE command LISTDIR SFSLNB00:SFSADMIN.GETSMTP | stem emsg.'

Occasionally (about once a week), this hangs for a long time (18 hours
the last time) and then returns with RC = 55.

I can live with RC=55, but not with my virtual machine being tied up for
18 hours. (There are other VMSCHED jobs it should be running.)

From: "Schuh, Richard" 
Subject:  Re: Time out a CMS command

Look at the beat stage. It may work for this problem.

From: "John P. Hartmann" 
Subject:  Re: Time out a CMS command

The pipeline is dead in the water while the CMS command is executing;
no way it can force a timeout.  If you have two virtual machines to
play with, the situation is different.  Then you can send the command
and wait for a response with STARMSG and send HX when it times out.
Whether CMS reacts to the HX is another matter.
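For the record, a minimal watchdog sketch along those lines (assumptions: the
hung machine is WORKER, the watchdog is WORKER's secondary user so that CP
SEND reaches its console, and ten minutes is the budget):

/* Run on the watchdog after WORKER starts the SFS command */
'PIPE literal +600',             /* one record meaning 600 seconds */
  '| delay',                     /* wait that long                 */
  '| spec /SEND WORKER HX/ 1',   /* build the CP command           */
  '| cp'                         /* issue it: try to halt WORKER   */

As noted above, whether CMS honors the HX is another matter.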

From: Rob van der Heij 
Subject:  Re: Time out a CMS command

Some SFS commands have the ability to hang forever and prevent any
recovery. I recall we did some check right before these, just to make
sure the remote file pool is actually alive. But I don't remember
which check it was, maybe Bruce or Rod can fill in the blanks...

From: "Ackerman, Alan" 
Subject:  Re: Time out a CMS command

I suppose I can just CP FORCE the hung userid. I'm not sure what state
that would leave SFS in, though.

Date: Tue, 21 Dec 2010 20:16:34 -0500
From: Rich Greenberg 
Subject:  Re: Time out a CMS command

If you don't find a fix, instead of running it from VMSCHED, have
VMSCHED autolog another service machine for the actual LISTDIR.  It
wouldn't be a problem if the extra SVM gets hung.
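A hedged sketch of that hand-off (assumptions: LDWORKER is the extra service
machine, its PROFILE EXEC runs the LISTDIR and logs the result, and the
VMSCHED id has the privilege for FORCE and XAUTOLOG):

/* In the VMSCHED job: clear any worker hung since last time, */
/* then start a fresh one to run the risky LISTDIR            */
'CP FORCE LDWORKER'      /* rc ignored if it is not logged on  */
'CP XAUTOLOG LDWORKER'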

Date: Wed, 22 Dec 2010 12:09:49 +0100
From: Rod Furey 
Subject:  Re: Time out a CMS command

Methinks Holger did it this way:

start up a thread or two via multitasking CMS
set up a timer on one thread
set up the access (or whatever) on the other thread
if the access completes before the time ticks, kill the timer thread
if the timer thread ticks before the access completes, kill the access
thread

Don't ask me about the ramifications of doing this and what happens
about cleanup. Multitasking CMS was never an area I hit before I went
in search of other work.

I do recall that the dev group did discover some problems in the mtCMS
area at the time.
I would hope that they've been fixed by now.

Date: Wed, 22 Dec 2010 08:07:34 -0500
From: Bruce Hayden 
Subject:  Re: Time out a CMS command

I haven't looked at this in years!  But, looking at the code, you have
it essentially correct.  The only sticky part is that the code doesn't
attempt to do an access, but tries to simulate some kind of SFS
communication via APPC, which I'm sure is not a documented interface
(it is coded in the exec as a long hex string.)

Anyway, the best approach for the rest of us to this problem would be
to use 2 userids, as was already suggested.  I doubt it would have any
effect on SFS.  If this happened to my own id, I just enter #CP IPL
CMS to cancel the APPC wait and recover and there was never a problem
in SFS after the communication was fixed or reset.

