Re: Speculation: Random thoughts on web based report access

2011-09-06 Thread Gilbert C Cardenas
The technology currently exists (sort of).  Our content management software is 
as you describe in your boring historical stuff: LPR--W2K Server--CM 
Application--Tomcat--Web Viewer.
The vendor provides the ability for a user to select any report they have 
access to and check a box to receive a URL link by email whenever a new 
instance of that report is added to the repository.  That way, when they check 
their email they can readily see what's new and open the report directly via the link.
They also have another option in the web viewer that allows users to flag 
their favorite reports (i.e., Favorites).  Then they can go to their Favorites 
link and pull up only the reports they really want.
Not exactly RSS, but it accomplishes the same thing.
By no means am I promoting this vendor, because there are some areas where they 
are lacking; I'm just pointing out that the technology currently 
exists... somewhat.


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of 
John McKown
Sent: Sunday, September 04, 2011 10:40 PM
To: IBM-MAIN@bama.ua.edu
Subject: Speculation: Random thoughts on web based report access



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Speculation: Random thoughts on web based report access

2011-09-05 Thread Cris Hernandez #9
Any webpage will do if the report is properly formatted for the software used 
to view it.  Security considerations are site specific.  I prefer not to have 
the overhead of maintaining a webpage.  Instead, I prefer to format my reports 
for simple text/WordPad viewing on the PC if no further data manipulation is 
expected, or to format them with delimited columns for Excel if the recipient 
wants to tinker further with the data.  SAS works wonders here too.  The 
delivery method is email.  I prefer to keep the original data on the mainframe 
for archival.  No doubt COTS products are, or will be, available to help 
impress management and add to overhead costs.



Speculation: Random thoughts on web based report access

2011-09-04 Thread John McKown
<boring historical stuff>
Taking reports from the SPOOL and putting them in some sort of archive
is now rather well established. I remember host-based-only systems such
as SAR, RMDS, InfoPAC (now ViewDirect?) and others. And they still exist
and are in use. They seem to fall into two groups. The first consists of
actual reports generated by an application. The second consists of the
JES-related SPOOL files like JESMSGLG, JESJCL, JESYSMSG, and maybe utility
messages to SYSPRINT. 

Most of these started out being accessed by either TSO ISPF applications
or VTAM applications or both. Many of these are now accessible via Web
Browsers.

Some even keep the data on other platforms such as Windows or Linux. We
do this where I work. We have a product which reads the JES SPOOL and
uses the LPR protocol to send the print files to a Windows server, which
indexes them and writes the output into proprietary files. Another server
running Tomcat serves up the reports. 
</boring historical stuff>

Now for my random thought. Many web sites such as news sites and blogs
use RSS and/or Atom news feeds. The user subscribes to the feeds that
they are interested in. Their PC or tablet or smartphone periodically
scans those feeds for new articles. So I'm curious as to whether people
who read reports could also use that facility. That is, instead of
coming in, firing up a browser, and checking to see if there is a new
xyz report, they subscribe to the xyz report feed. The report archive
software, or whatever, would create the feed. Now they just do a fast
scan of their aggregator to see if a new report is ready, instead of
needing to click on a lot of links to see what is available. 
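As a rough sketch of what the feed-creation side might look like, here is a minimal RSS 2.0 generator using only the Python standard library. The channel title, URLs, and report names are invented placeholders, not taken from any actual product:

```python
import email.utils
import time
import xml.etree.ElementTree as ET

def build_feed(reports):
    """Build a minimal RSS 2.0 feed listing newly archived reports.

    `reports` is a list of (title, url, epoch_seconds) tuples -- in a real
    archive these would come from the report index, not be hard-coded.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "New reports"
    ET.SubElement(channel, "link").text = "https://reports.example.com/"
    ET.SubElement(channel, "description").text = "Reports added to the archive"
    for title, url, ts in reports:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = url
        # RSS wants RFC 822 date strings for pubDate.
        ET.SubElement(item, "pubDate").text = email.utils.formatdate(ts)
    return ET.tostring(rss, encoding="unicode")

feed = build_feed([
    ("xyz report 2011-09-04",
     "https://reports.example.com/xyz/20110904", time.time()),
])
print(feed)
```

The archive software would regenerate (or append to) this document each time a new report lands, and the user's aggregator would poll it like any other feed.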

Now, the user can look at the report from wherever they are, subject
to appropriate authority. And subject to the ability of the device to
display the report intelligibly, of course. This function would likely
require an HTTPS connection instead of simple HTTP for security reasons,
as well as some sort of user validation (I'd prefer a digital cert, but
userid/password would work too). They fire up their aggregator and
see a list of all the new reports to which they are subscribed.
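On the aggregator side, fetching a feed protected by HTTPS plus userid/password validation could use plain HTTP Basic authentication. This is a generic standard-library sketch (the URL and credentials are placeholders), not a description of any particular product:

```python
import urllib.request

def make_opener(url, userid, password):
    """Build an opener that sends HTTP Basic credentials for `url`."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, userid, password)
    return urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr))

def fetch_feed(url, userid, password):
    """Fetch the feed document over HTTPS; returns the raw XML text."""
    with make_opener(url, userid, password).open(url) as resp:
        return resp.read().decode("utf-8")

# Example (placeholder URL; a real call needs a reachable server):
# xml_text = fetch_feed("https://reports.example.com/feeds/xyz.xml",
#                       "user1", "secret")
```

Certificate-based validation would replace the password manager with a client cert supplied via an `ssl.SSLContext`, but the fetch-and-parse flow stays the same.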

Am I stating the obvious, something already implemented? Or is this
actually a new use of existing technology? If this is new, I freely
release any and all interest that might theoretically be mine to the
community to implement. I say that because somebody is likely to try to
patent it in the U.S. And I hate most software patents.

-- 
John McKown
Maranatha! 



Re: Speculation: Random thoughts on web based report access

2011-09-04 Thread Brian Westerman
Hi John,

A very similar function is part of our newer releases of SyzSPOOL/z.  
We have made part of the SyzSpool log file (where everything is indexed) 
available to the web interface; depending on your settings on the web side of 
things, and on what the mainframe side has set, you can see (with limits) 
information on the tasks that executed.  The new push interface will 
update, at the interval the user selects, with the newest task information 
they are interested in.  Our design spec was to make it act like RSS, and 
we found that we could do it, but not in one step, because we had to change the 
log file first to get the required information into it so that we could take 
advantage of it without exposing everything to the interface.  We decided that 
it would be best to use two small releases to get the pieces all in place; then 
a whole new version will add the RSS/push part.


So far we have the two releases done and out in the field, and we have updated 
the current release to support the new web pages that we use to push things.  
We are now in design testing of the new version that uses those pages and can 
push the actual data.

Our biggest hurdle was that we thought we had to keep things so secure that 
there was no possibility that anyone would see that a job had even ended, when 
that wasn't really an issue.  Knowing that the payroll job ended is not the big 
secret, well... it is, but not to the extent that we thought originally.  
Actually seeing anything about that job is the secret.  So what we ended up 
doing was splitting the log file into internal pieces which can be controlled 
not only by RACF (or ACF2, etc.) resource rules, but also so that some parts 
are simply not accessible from some interfaces.  That allowed us to have 
multiple classes of access (there are 255), and still control everything from 
the mainframe side.

It's not that we don't trust the web users... well, I guess it's true, we don't 
trust them, but the way we have implemented things, the site can decide how much 
(to a point) it wants to disclose under different authentication modes.  But 
we didn't change the rules for the spool data itself: if you want to browse 
the actual output, you still have to have RACF access to it in the first 
place.

In the older releases, you had to have the RACF authority to look at the output 
even to see that it was there; now you might or might not need that authority 
(depending on the class assigned by the site), and we can push that 
information out to the authenticated user.  

The idea is that eventually it would be nice to provide standard RSS-feed-style 
access, but security comes first.  Even if the site wants to be 
non-secure, we don't allow that for the actual output, only for a subset of the 
task execution information.  
How large that subset is depends on what the site wants to do with the 
classifications on the mainframe side.

Brian
