Re: z/VM CTCA won't start
On Tue, 20 Jan 2009 00:06:05, Alan Altmark wrote:

One of the Usage Restrictions in the license agreement: "z/VM Version 5 Release 3 Evaluation Edition operates on the IBM System z10 Enterprise Class (z10 EC) and IBM System z10 Business Class (z10 BC). The Program requires hardware that implements the IBM 64-bit z/Architecture in order to execute properly and therefore You are not authorized to install or use this Program on any machine that does not properly implement 64-bit z/Architecture. For information about specific z/VM machine requirements and programming requirements, see the z/VM: General Information manual, GC24-6095."

There is the interesting phrase "properly implement". As a scientist, technical professional, and industry veteran, I do not believe that I could legitimately claim to properly implement *any* piece of machinery for which the complete specifications and verification tools are not published. If I cannot verify proper implementation, then authorization does not exist.

Interesting, but I think I would tend to disagree: absence of evidence is by no means evidence of absence...

Jan

From the Hercules entry in Wikipedia: "Newer operating systems, such as OS/390, z/OS, VSE/ESA, z/VSE, VM/ESA, and z/VM will run, but cannot legally be used except in very limited circumstances for license reasons."

Alan Altmark
z/VM Development
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: 2.6 device node help
Adam,

You will also need to install module-init-tools; this installs 2.6-compatible versions of insmod et al. Other than that, a woody system upgraded to sarge is fine. (Also be aware of the grouped devices for your network interfaces.)

Jan.

From: Adam Thornton [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: 2.6 device node help
Date: Mon, 26 Apr 2004 11:24:08 -0500

On Mon, 2004-04-26 at 11:05, Post, Mark K wrote:
Are you using the 1.3.0 version of the s390-tools?

Probably not. Still, would it matter in the initial IPL? I can't find the root device, whether I call it /dev/dasd/0.0.0150/part1 or /dev/dasd/0150/part1 (which is what it was before). The IPL record shows the new kernel parameters when I boot. This is a Debian system, installed at woody, and then apt-gotten dist-upgraded to sarge, with a 2.6.5 kernel built with virgin sources plus the IBM fixes.

Adam
New cio drivers and network devices
How does one relate a given device address to a network device under the latest kernels? This relation was previously made in /etc/chandev.conf; however, the new common I/O structure has made that obsolete. I do not seem to be able to find the right documentation or code that makes this relation. (I am running kernel 2.6.5 with all the latest patches applied.)

Thanks,
Jan Jaeger
Re: New cio drivers and network devices
I do not believe I can use that to associate an eth0 or ctc0 (as in ifconfig) with a unit address.

Jan

From: Post, Mark K [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: New cio drivers and network devices
Date: Thu, 15 Apr 2004 13:23:08 -0400

That would be the /sys file system, similar to /proc.

Mark Post

-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Jan Jaeger
Sent: Thursday, April 15, 2004 1:11 PM
To: [EMAIL PROTECTED]
Subject: New cio drivers and network devices

How does one relate a given device address to a network device under the latest kernels? This relation was previously made in /etc/chandev.conf; however, the new common I/O structure has made that obsolete. I do not seem to be able to find the right documentation or code that makes this relation. (I am running kernel 2.6.5 with all the latest patches applied.)

Thanks,
Jan Jaeger
Re: New cio drivers and network devices
It took me a while, but for those who are interested, here is an example configuration for a CTC on 0a00-0a01:

# echo 0.0.0a00,0.0.0a01 > /sys/bus/ccwgroup/drivers/ctc/group

The grouping device /sys/bus/ccwgroup/drivers/ctc/0.0.0a00 will be created.

# echo 1 > /sys/bus/ccwgroup/drivers/ctc/0.0.0a00/online

Device ctc0 will now be registered with a read device of 0a00 and a write device of 0a01.

I guess we will need a few scripts to make this a little more user-friendly, but this new interface is a lot nicer than the old chandev code.

Thanks,
Jan Jaeger.
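[Editor's note] The sequence in the post above can be wrapped in a small shell helper. This is a sketch of my own, not part of s390-tools: the function name is invented, and the device numbers are simply the example pair from the post. The helper prints the two sysfs writes instead of executing them, so it can run on any machine; on a real 2.6 s390 system you would run the printed commands as root.

```shell
# Sketch (not part of s390-tools): print the two sysfs writes that group a
# CTC read/write subchannel pair and bring the resulting device online.
ctc_group_cmds() {
    read_dev=$1
    write_dev=$2
    drv=/sys/bus/ccwgroup/drivers/ctc
    # Step 1: group the read and write subchannels into one ccwgroup device.
    printf 'echo %s,%s > %s/group\n' "$read_dev" "$write_dev" "$drv"
    # Step 2: set the group device (named after the read device) online.
    printf 'echo 1 > %s/%s/online\n' "$drv" "$read_dev"
}

ctc_group_cmds 0.0.0a00 0.0.0a01
```

After executing the printed commands on a real system, the driver registers ctc0 with read device 0a00 and write device 0a01, as described in the post.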
Re: s390 storage key inconsistency? [was Re: msync() behaviour broken for MS_ASY
The reference bit is not 100% accurate anyway:

"The reference bit may be set to one by fetching data or instructions that are neither designated nor used by the program, and, under certain conditions, a reference may be made without the reference bit being set to one. Under certain unusual circumstances, a reference bit may be set to zero by other than explicit program action."

Jan.

From: Martin Schwidefsky [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: s390 storage key inconsistency? [was Re: msync() behaviour broken for MS_ASYNC, revert patch?]
Date: Thu, 1 Apr 2004 20:13:40 +0200

Hi Stephen,

I just happened to follow the function and noticed that on s390, page_test_and_clear_dirty() has the comment:

 * Test and clear dirty bit in storage key.
 * We can't clear the changed bit atomically. This is a potential
 * race against modification of the referenced bit. This function
 * should therefore only be called if it is not mapped in any
 * address space.

but in this case the page is clearly mapped in the caller's address space, else we wouldn't have reached this. Is this a problem?

The clearing of the dirty bit while the page is still mapped somewhere races against the setting of the referenced bit. The worst that can happen is that the setting of a referenced bit gets lost. This can lead to a bad swapping decision, because a page is considered to be old but in reality someone accessed it recently. This is not nice, but it doesn't happen often. This isn't a problem as long as no dirty bit gets lost.

blue skies,
Martin

Linux/390 Design & Development, IBM Deutschland Entwicklung GmbH
Schönaicherstr. 220, D-71032 Böblingen, Telefon: +49 - (0)7031 - 16-2247
E-Mail: [EMAIL PROTECTED]
Re: OT: Intel gets virtualization clue?
My guess as to why the z990 only supports LPAR mode is that quite radical changes would be required to z/OS, z/VM, Linux for zSeries, or any other operating system that is going to run in basic mode. By using a hypervisor one can actually emulate the current channel implementation, so these operating systems do not need to be rewritten to take advantage of multiple LSSes.

I would guess that the z990 architecture with its multiple logical channel subsystems implements an SSID other than X'0001', so one would have subchannel numbers starting with X'0001' for LSS0, and subchannel numbers starting with X'0002' for LSS1, or something like that. A simple modification to SIE could swap X'0001' to X'0002' for those LPARs which are attached to the 2nd LSS. This keeps everything to do with an additional LSS shielded from the guest operating systems, and as such they will not require any major modifications. Just guessing, but there are not that many other possibilities for this to work within the current architecture.

Jan Jaeger.

From: David Boyes [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: OT: Intel gets virtualization clue?
Date: Mon, 13 Oct 2003 09:40:25 -0400

The limit of 6 preferred guests has always puzzled me -- it always seemed more a political decision (thou shalt not make LPAR look bad) than a technical decision. Was it that, or was there some serious technical problem that prohibited just marching on up through storage computing offsets until you run out of storage?

Although it's probably useful to point out that that decision is driven more by use of IFLs forcing LPAR mode than any real necessity for LPARs. It's understandable why IFLs require LPAR mode, but it's not a particularly good reason to eliminate basic mode operation.

I still don't understand the z990 channel system well enough to argue that point, but I do wonder whether it's substantially more complicated than dealing with the split channel system that the 3084 and its ilk had. We seemed to deal with that well enough without losing basic mode.

The psychological argument you mentioned can be continued to point out that creating a new virtual machine in a basic mode system is even less commitment of resources than an LPAR requires, and with VM still trailing z/OS in some of the hardware management functions, shops running without z/OS really do much better operationally running in basic mode and not ever disturbing the machine configuration.

-- db
Re: OT: Intel gets virtualization clue?
Rick, see this from the positive side: once the SIE assist code etc. has been removed, there will no longer be an argument for OCO ;-)

Jan Jaeger.
(How about z/VM V5 all source again?)

From: Alan Altmark [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: OT: Intel gets virtualization clue?
Date: Thu, 9 Oct 2003 08:58:22 -0400

On Thursday, 10/09/2003 at 12:38 EST, Richard Troth [EMAIL PROTECTED] wrote:

Jim ... I don't like where this is going. But then, I'm a purist: I see what VM offers and find little value in VM in the hardware other than to sell to those customers who either have the rare real problem with VM support or the stereotypical allergy to it. (Can't make people LIKE something.) Consider what MPG offered: increased performance.

Moving to a more powerful machine plus the ability to RESERVE or LOCK guest pages helps make up for the loss of MPG. Plus, the limit of 6 preferred guests makes it less interesting for server consolidation, IMHO.

I have been bothered by the lack of basic mode for the past couple of years. Maybe this is not a problem, since I hear few customers complaining. But then perhaps there just are not enough customers who have been hit by the issue like Jan has.

Intellectually, from the purist's perspective, I'm sure the loss of MPG hurts, but the reality is that of those who run zLinux, the vast majority run in LPARs. So, the z990 changes nothing in this respect.

30 LPARs is great, and probably serves a great number of customers. But 30 LPARs lose a whole shipload of other value that VM offers, that I don't need to enumerate -- preaching to the choir, this is.

I don't think 30 LPARs cost VM anything. I think it makes using LPARs less painful for those times when you need one. Psychologically, 1/30th of the machine is less impact than 1/15th. That means getting an LPAR when you need one is easier. With HiperSockets, IEEE VLAN, and the z/VM 4.4 virtual switch, the management of the images in those LPARs is much easier. You can still clone and manage content from within VM. Whether you IPL in a virtual machine or in an LPAR is a choice based on performance requirements.

Alan Altmark
Sr. Software Engineer
IBM z/VM Development
Re: OT: Intel gets virtualization clue?
I did not say anything about the removal of SIE, just the SIE assists (which require OCO). SIE itself is documented in SA22-7095 and invoked from HCPRUN; that's not OCO.

Jan Jaeger.

From: Alan Altmark [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: OT: Intel gets virtualization clue?
Date: Thu, 9 Oct 2003 13:25:21 -0400

On Thursday, 10/09/2003 at 04:53 GMT, Jan Jaeger [EMAIL PROTECTED] wrote:

Rick, see this from the positive side, once SIE assist code etc has been removed, there will no longer be an argument for OCO ;-)

No one said anything about the removal of SIE; there continues to be support for two levels of SIE in the hardware. The I/O assists were the main attraction of V=F (IMO). Looking down the road, the DMA aspects of QDIO (for SCSI and network devices) reduce the benefit of I/O assists anyway.

Alan Altmark
Sr. Software Engineer
IBM z/VM Development
Re: OT: Intel gets virtualization clue?
What is the status of multiple preferred guests on z990 machines? IIRC the z990 cannot run in basic mode, which was always a prerequisite for multiple preferred guests. When running under PR/SM, the MHPGF would be taken by PR/SM. Running multiple preferred guests will require something like 2nd-level zones, unless PR/SM has changed such that it no longer uses the MHPGF.

Jan Jaeger.

From: Jim Elliott [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: OT: Intel gets virtualization clue?
Date: Tue, 7 Oct 2003 20:57:26 EDT

This was further enhanced in announcements on June 11, 1987 with the VM/XA System Product and the Multiple High Performance Guest Support facility (MHPGS) and February 15, 1988 as the Processor Resource/Systems Manager (PR/SM) which provides the Logical Partitioning facility (the first ever reference to Logical Partitions to my knowledge).

True. Before that they were called domains, and you could only get them from Amdahl. The PR/SM announcement remains one of the very few to offer pronunciation guidelines.

The Amdahl Multiple Domain Feature (MDF) was a different implementation from that in the 3090, with somewhat the same goals. The big difference with PR/SM is that it could be used by z/VM to provide preferred guests, or by LPAR to provide Logical Partitions. In any case, the Intel Vanderpool architecture is much closer to SIE than to MDF or PR/SM.

The current VM product, z/VM, makes extensive use of this function to provide support for running a great many guests (in some environments 100s), and the current LPAR support provides for 60 Logical Partitions on the z990 mainframe.

According to the preview PDF for today's announcements that I received late yesterday, 60 LPARs is still a Statement of Direction.

Correct, a typo on my part. With today's announcement 30 LPARs are available on the z990, with the SoD being 60 LPARs.

Regards, Jim
Re: No DIAG discipline in the 64-bit world?
The question I have is: what are the VM development plans for full 64-bit support? If that is going to be fixed soon, then the DIAG support would be even easier to implement.

Jan Jaeger.

From: Lucius, Leland [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: No DIAG discipline in the 64-bit world?
Date: Sun, 13 Jul 2003 03:53:37 -0500

What are the issues with it? DIAG 210/250 only support 24/31-bit addresses? Maybe I'm being a bit naive, but wouldn't it be a simple matter to allow the use, but only if there's less than 2GB of storage? And maybe use SAM31/64 to switch in and out of 64-bit mode? Oh yea, there might be a slight problem with int and long usage, but that should be fixable also. (Just asking before I post a patch so I don't make a COMPLETE fool of myself... ;-))

Leland
Re: Suggestions on how to implement LOADPARM
Why not implement this somewhat like lilo does? One could simply put out a prompt on the HMC, and if no response is given within a certain amount of time, then simply continue (as lilo does).

Jan Jaeger

From: Lucius, Leland [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: Suggestions on how to implement LOADPARM
Date: Sun, 9 Mar 2003 14:45:31 -0600

I guess what you should do is support a number of IPL sections identified by something of 4 bytes, say a number of 'slots' in the IPL record. Either teach zipl how to put things in these slots, or use a separate userspace program that can copy the first entry (written by zipl) to one of the others. And you probably could have different slots point to the same image but use a different command line.

This is almost exactly what I had in mind. But I couldn't decide how much of the LOADPARM field to use. We could even get down to 2 characters, since the bootmap is currently limited to 256 entries. That would mean we could store a maximum of 128 different configs. That might be too short though, as it wouldn't be easy to remember. Then I thought, use the full 8 characters of the zipl.conf section name, but that seemed like a waste. So your suggestion of 4 might very well be a good middle of the road.

A new value would have to be added to zipl.conf so the user could specify a LABEL that would be used for the LOADPARM.

Another thing I couldn't decide on was what should happen if someone entered a LOADPARM that wasn't valid. Disabled wait, or use whatever was coded in the defaultboot section? The latter could be very dangerous...

Leland
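[Editor's note] To make the slot idea in the post above concrete, a zipl.conf along the proposed lines might look as follows. This is a hypothetical sketch: the "label" keyword is the extension being discussed, not an option zipl actually supported at the time, and the image paths and parameters are invented for illustration.

```
# Hypothetical sketch only: "label" is the proposed LOADPARM selector,
# not an existing zipl.conf keyword. Paths and parameters are examples.
[defaultboot]
default = linux

[linux]
label = LNX1
target = /boot
image = /boot/image
parameters = "root=/dev/dasda1 ro"

[rescue]
label = RESC
target = /boot
image = /boot/image.rescue
parameters = "root=/dev/dasda1 ro single"
```

Entering LOADPARM "RESC" at IPL would then select the [rescue] section; an empty LOADPARM would fall back to the defaultboot entry.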
Re: Suggestions on how to implement LOADPARM
#CP uppercasing the response on VINPUT is inconsistent with real machine behaviour. What I would prefer is a TERM MODE SYSCONS setting in VM, such that operation under VM using the syscons interface becomes more acceptable, and the need to dual-path code (i.e., support the VM console) when running under VM is removed.

Jan Jaeger.

From: Alan Altmark [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: Suggestions on how to implement LOADPARM
Date: Mon, 10 Mar 2003 12:23:14 -0500

When a VM guest attempts to talk to the integrated HMC console, it is virtualized on the server's virtual console. The #CP VINPUT command is used to talk to it. Note, however, that #CP will uppercase the response. Only CP can talk to the real HMC.

Alan Altmark
Sr. Software Engineer
IBM z/VM Development
Re: CPU Arch Security
From: Ulrich Weigand [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: CPU Arch Security
Date: Sun, 10 Nov 2002 17:55:53 +0100

Well, Linux has capabilities nowadays, but they aren't much used in your typical distribution. There are also patches that implement a variety of more elaborate authorisation schemes, like mandatory access control (see e.g. the NSA's SELinux patch or the RSBAC project).

You are right, but most of these schemes are based on user access to objects, where the authorisation is based on the user. This is different from the operating model where programs are compartmentalised in their execution environment such that they only have access to what is strictly needed. The latter is an integrity issue, whereas the former is an authorisation issue. I am mainly concerned about the integrity issue, as that is the area that I think is the weakest part, and the part where most vulnerabilities reside that can be exploited (other than configuration issues).

But even so, I think the confusion stems from another source. The question really is: what entity is the 'holder' of any authorisation? The various schemes (uid-based rights, set-uid, capabilities, MAC, ...) are all different in the details of how you acquire any particular authorization, but once you have it, the authority rests in the *process*. This means that either the process has the right to perform any particular act or it doesn't. But at no place does this right depend on just what code within its address space the process is currently executing.

I think that is exactly my point: once a process is running, there is nothing left that restricts any program (i.e. piece of code) from doing anything to any of the resources it has access to, including code/data of other programs within the same process (either on purpose or inadvertently).

What you call a 'program in the shared lib' is a concept that is fundamentally alien to Unix. A process is a process; it executes code from within its address space. If the code was loaded from a shared library, it is still executed as part of this process; it has no 'identity' of its own. I'm really not sure how this fundamental concept could be changed so that the result still somewhat resembles Unix ...

I am not sure that one would run away from the concept of Unix. One can easily (in concept anyway) add more spaces to one process: separate spaces for code, stack, and data, for example. Similarly, shared libs could reside in their own space; one would need a different linkage mechanism, but it can effectively be done. Such a linkage mechanism can also reduce/extend the program authority (based on key masks, for example, and on what spaces are accessible from the shared lib, and to what extent). The same logic can be applied to different code segments within one process. The (initial) linkage would need some supervisor assistance in order to set up/verify program authorisations.

Nevertheless, something like what you describe can of course be modelled in Unix, but you need to use multiple processes to represent the different authorization domains; these processes can then use shared memory and other inter-process communication facilities to interact in a way that precisely implements the various privileges you describe.

I would actually like to see those mechanisms applicable to one process; inter-process communication is a different issue. Surely, one can do a lot of message passing between processes in order to isolate different program segments, but that is not quite what I see as an issue here, or even as a solution.

What I still have not seen in this whole discussion is: even *if* one were to implement some sort of authorization domains on a finer granularity than processes, and even if this would all work out somehow, what *benefit* would this bring over and above what is already now possible in Unix using multiple processes as just described?

If one were to implement any sort of facility that compartmentalises programs and code fragments in their execution environment, then that could greatly reduce the damage a potential exploit or programming error could do.

Jan Jaeger.
Re: CPU Arch Security [was: Re: Probably the first published shell code]
Linas,

Do I understand you correctly, in that you propose a multi-layered system integrity design, whereby shared libs, for example, have a different authorisation from normal apps (almost like a multi-ring structure)?

One of the issues I can see with such an implementation in Linux is that the solutions to achieve something like this are going to be very hw-platform dependent. S/390 offers a wealth of features to implement this efficiently, whereas other hw platforms which are more RISC-based will need to do a lot of tricks. In order to keep Linux Linux, one could think of some kind of microkernel, which is then hw dependent and includes all the hw-dependent services (including the above), under which one would run Linux. I think a hw-dependent microkernel/hypervisor would make the above issues easier to solve. Such a model is by no means new: AIX V2 (RT) ran under a virtual resource manager, and AIX/370 only ran under VM; both of these AIXes used the hypervisor to take care of the hw specifics in one way or another.

Jan Jaeger

From: Linas Vepstas [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: CPU Arch Security [was: Re: Probably the first published shell code]
Date: Thu, 7 Nov 2002 11:09:04 -0600

On Tue, Nov 05, 2002 at 10:16:28PM +0100, Ulrich Weigand was heard to remark:

Adam Thornton wrote: (However, changing the Linux tool chain and basically *all* applications from a flat address space

I don't see that you need to change *all* apps. This would be only for apps that really care.

to multiple address spaces is an *enormous* task; and I'm not

I didn't say it wasn't enormous. It's not tiny, but I'm not sure it's that big either. Depends on how easy you want to make it for the app developer. Certainly a prototype would not do this to everything, not by default. If you were to default it in the wrong way, it would be enormous, and it would break many (most?) apps.

convinced this buys you anything w.r.t. security that can't be achieved much more easily, e.g. by StackGuard-type compilers.

What I had in mind was the following: preventing an app from getting write access to a file that the shared library opened. Or preventing the app from getting write access to a socket or other IPC that the shared lib created/opened/is using. Or, for example, the following database stunt: having a database shared lib memory-map a database file, and then giving the app read-write access to one page of that memory map, but not all of them.

In some ways, I suppose this is possible today, but it's hard and makes you jump through hoops. Client-server loops, in particular. Complex IPC setups. Haven't you ever noticed how few apps actually use traditional IPC (semaphores, shmem, etc.)? That's because it's hard, it's complexity, it's crap that the app developer has to design, and it takes a lot of effort.

Today, the only kind of address-space security that Unix has is that one process cannot corrupt the address space of another process. Thus, if you want to have address-space security, you *must* write multiple-process apps, which means you *must* use IPC to coordinate the processes. Ugh. *That is what I'm talking about.*

--linas

-- pub 1024D/01045933 2001-02-01 Linas Vepstas (Labas!) [EMAIL PROTECTED]
PGP Key fingerprint = 8305 2521 6000 0B5E 8984 3F54 64A9 9A82 0104 5933
Re: CPU Arch Security [was: Re: Probably the first published shell code]
I am not sure that you would need DCSSes to protect one from arbitrarily jumping into shared libraries (as may be used by exploits). If one was to design shared libraries such that each shared library has its own address space, then one could use cross-memory to execute from that address space. One could have a PC call for each shared library function, and as such normal users would never be able to get to that code, other than by means of the PC call, which executes a predetermined function. This, together with a non-executable stack, will make things harder for any viruses. I think that hardware functionality, with OS support, is the best answer to viruses, although it will probably never be 100% failsafe.

Jan Jaeger

From: Adam Thornton [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: CPU Arch Security [was: Re: Probably the first published shell code]
Date: Tue, 5 Nov 2002 16:01:57 -0500

On Tue, Nov 05, 2002 at 08:03:35PM +, Alan Cox wrote:

Flavour of the year appears to be maths sign/overflow mishandling. Buffer overflows are no longer a growth area as programmers learn that one.

Gee, only took 'em, what, 40 years?

For this to catch on in the mainstream, other CPU architectures would need to add similar features as well. But given the recent burbling from Microsoft and Intel about Palladium and how CPU arch changes can enhance security (which Intel seems to be actually working on), I do not think that it is too wild, too early, or too impractical to engage in this task.

I don't really see how fiddling with libraries helps you, but enlighten me.

Well, one thing I can see exploiting under VM would be an aggressive use of DCSSes (or something like them -- I don't know if you can put DCSSes in other data spaces, and I don't think you can execute code from data spaces, but you see where this is going), so you could share your shared libraries between Linux images. If each one were in its own read-only address space, you'd get a vast reduction in overall memory footprint, plus code couldn't exploit bugs in the standard libraries -- even if you have a buffer overflow (or whatever) vulnerability, a) the code is off in its own private address space, so you can't go trash anything else, and b) your virtual machine has that segment marked read-only anyway.

Good lord, I can't believe that I'm arguing for a segmented architecture.

Adam
Re: Probably the first published shell code example for Linux/390
True, but it is a common virus technique to execute machine code by utilizing buffer overflows in a scripting language, and as such bypassing the limitations that are imposed by the scripting language.

Jan Jaeger

From: John Summerfield [EMAIL PROTECTED]
Reply-To: Linux on 390 Port [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Re: Probably the first published shell code example for Linux/390
Date: Sat, 2 Nov 2002 15:25:52 +0800

On Fri, 1 Nov 2002 22:50, you wrote:
When we are talking about storing (ie overlaying) programs (trojans) on the

Maybe I'm being picky, but trojans are always present by invitation. A user is sucked into installing a program that (maybe) does what's claimed of it, but also does something you might not like.

--
Cheers
John Summerfield

Microsoft's most solid OS: http://www.geocities.com/rcwoolley/
Join the Linux Support by Small Businesses list at http://mail.computerdatasafe.com.au/mailman/listinfo/lssb
Re: Probably the first published shell code example for Linux/390
When we are talking about storing (ie overlaying) programs (trojans) on the stack space, then only hardware protection can really help. One would need to come to a model where instructions cannot be executed from the stack. One can achive this in S/390, by making the stack space a separate space, which is only addressable thru an access register (like an MVS data space). This way instructions can never be executed from the stack space, however, I am afraid that such an implementation would break a few things. Jan Jaeger. From: Ross Patterson [EMAIL PROTECTED] Reply-To: Linux on 390 Port [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: Probably the first published shell code example for Linux/390 Date: Thu, 31 Oct 2002 18:33:57 -0500 At 13:10 10/31/2002 -0600, Ward, Garry wrote: push something to the stack, decrement the address, and if you've gone negative, you've gone too far? Sure, and the same is true of upwards-growing stacks (only in the other direction, natch). The issue isn't accidental stack overflow. The difference is in the impact of storage overlays - if your stack grows down, the memory above the current stack frame is your caller's. If your stack grows up, the memory above it is your callee's. Now imagine storing 1000 bytes into a 10-byte buffer on the stack (the classic shellcode-insertion hack). In the grows-down case, you overlay some active memory including possibly the savearea containing the register's you're going to reload when you hit the return statement. In the grows-up case, you overlay some inactive memory. sorry, PC assembler is a long time past, but I vaguely remember the argument being made that top down stacking was easier to manage. That's true on platforms that actually have stacks (sometimes). The 8080 and it's descendants do, and Intel chose to grow them downwards. It's a design issue, just like little-endian-ness, and IMHO just as wrong. 
:-) S/390 doesn't have a general-purpose hardware stack, so it's a matter of implementation preference. Ross Patterson
Re: Failed to get ACL's to work
Hi Tim, I believe the kernel patches are incomplete as far as s390(x) is concerned. You will need to rework the kernel patch that updates the svc table. This table is written in assembler in the arch-specific part. rsbac has the same issue. It is quite trivial to do, but it needs to be done. Jan Jaeger

From: Tim Verhoeven [EMAIL PROTECTED] Reply-To: Linux on 390 Port [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Failed to get ACL's to work Date: Mon, 30 Sep 2002 09:37:17 +0200

Hello All, I'm trying to implement Samba and ACLs on a G5 S/390. The installation went fine (of everything) but the ACL tools now give me an error. When I try to set an ACL (using setfacl) I get a "function not implemented" error, but I have seen no error in the compilation of these tools (I used the source rpm packages). I did a strace and this is the relevant part:

stat64(acl.txt, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0 getuid() = -1 ENOSYS (Function not implemented) getuid() = -1 ENOSYS (Function not implemented) write(2, setfacl: acl.txt: Function not i..., 43) = 43 _exit(1)

The list of software involved and their versions: 2.4.17acl s390 (SuSE SLES 7 / SuSE 7.2) acl-2.0.18-0.src.rpm e2fsprogs-1.27ea-26.4.src.rpm attr-2.0.10-0.src.rpm fileutils-4.1.8acl-65.5.src.rpm glibc-2.2.4-31 linux-2.4.17acl-0.8.28.diff linux-2.4.17ea-0.8.26.diff

Has anyone had a similar issue? Or can you give me hints to further debug this? Thanks, Tim -- === Tim Verhoeven Linux Open Source Specialist GSM : 0496 / 693 453 + e-business solutions Email : [EMAIL PROTECTED] + consulting URL : www.sin.khk.be/~dj/ + Server consolidation ===
Re: Strange fs behavior
I think that an fs-full situation may occur in your case; have you tried a sleep after the rm? My system behaves the same, and frequent df displays after the rm of a large file show an increasing amount of free space. Jan Jaeger

From: Ferguson, Neale [EMAIL PROTECTED] Reply-To: Linux on 390 Port [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: Strange fs behavior Date: Wed, 11 Sep 2002 08:14:25 -0400

-Original Message- Neale, I do not quite understand what you are saying; when I look at the REXX code I see a loop using various block sizes which: 1) writes to a file (with sufficient real storage, you will write to cache only) In this instance the devices I'm using greatly exceed real memory. 2) sleeps for 2 seconds (what happens here? is there an update-type sync task running which might take control?) I put in a sleep before (and after, in a later version) to see if that allowed things to settle down. However, I still get full reports. 3) rm of the file (iirc rm 'syncs' the inode bam blocks asynchronously) 4) sync (which should not do much as the cache has been invalidated) Do you mean to say that you get an fs full after removing the file (i.e. each round through the loop)? Yes, I get a full condition reported after the remove and dd tries to write its 1st record. Note there's a typo in the exec. It should be: count = (1075 * (4096 / bs.I_bs)) % 1
Re: [ANN] Mainframe FS and Hitachi hardware patch for kernel 2.4.7 available
babelfish translates the Hitachi disclaimer acknowledgement to either 'it agrees' or 'it agrees'. Unfortunately the code itself also contains comments in Japanese, which is not quite the Linux standard, I believe... Jan Jaeger

From: Alan Cox [EMAIL PROTECTED]

Use one of the translation pages? Be glad it's not English that you can't read. The first one is a caution, the second is a license check, then the code. The code has some problems (it's been ifdeffed to hell) rather than avoiding all the noise with some kind of is_hitachi()/is_ibm() macro set. Otherwise it doesn't look too bad
Re: Putting current in register
Pete, You should NOT use control registers, for the following reasons: 1) They are slow to load and store (especially under VM). 2) If you do get it to work, you cannot use the function for which the control register was designed. 3) If you use unassigned bits, future architectural changes will cause unpredictable results in your case. 4) Future architectural changes might alter the register's contents without you knowing it. The overhead of loading/storing in the PSA is very low; I think it is actually faster than loading/storing access registers. Jan Jaeger

From: Pete Zaitcev [EMAIL PROTECTED] Reply-To: Linux on 390 Port [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Putting current in register Date: Fri, 21 Jun 2002 00:23:19 -0400

Since interrupt stacks were split, current sits in the prefix page. Did anyone think about returning it into a register? I am not sufficiently familiar with the architecture to figure it out. Candidates include control register 10 (PER start), control register 15 (linkage), and GPR 12. Possibly I missed something. I was thinking about abusing X15, but it seems that applications may do some mischief trying to extract saved registers. PER appears to be used; I am not sure if that's actually the case. So, I am zooming in on R12. Anyone want to comment? -- Pete
Re: [OT] Neale's effective use of irony and sarcasm
Oh boy, Australian jokes about sheep... outside the scope of listserv guidelines... jj

From: Phil Payne [EMAIL PROTECTED] Reply-To: Linux on 390 Port [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: [OT] Neale's effective use of irony and sarcasm Date: Thu, 6 Jun 2002 15:16:44 +0200

Sorry to disappoint you, old friend, but I seem to recall that Rolf Harris was born in Cardiff! To which the logical reply is: What did he do to a sheep to get to Australia? However - he was born in Perth, Western Australia. (I think his wife - Alwen - might be Welsh.) -- Phil Payne http://www.isham-research.com +44 7785 302 803 +49 173 6242039
Re: [OT] Neale's effective use of irony and sarcasm
G'day Rod, I am not sure that you get what you want if you ask for some dead horse in one of those stake houses of yours ;-) Jan Jaeger.

From: Rod Nash [EMAIL PROTECTED] Reply-To: Linux on 390 Port [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Re: [OT] Neale's effective use of irony and sarcasm Date: Thu, 6 Jun 2002 16:09:14 +1000

Exposing the world to Aussie-isms is goodness. It raises everyone to a higher level of awareness! Exposing Australia to the world by the proliferation of Outback Steakhouse restaurants is badness and must be stopped immediately. I am used to seeing the great American icons throughout Asia, on nearly every street corner in fact. Of course I am talking about Maca's and KFC. I can handle those. Even when Starbucks started to spread I thought, oh well, at least I know where to get a semi-decent coffee. But now you are taking it too far. Outback Steakhouses are spreading; at last count there were 11 in Seoul, Korea. This has got to stop now or it will not be safe for Aussies to travel around Asia without someone asking what Kookaburra wings are really like or where Walhalla is.

Date: Wed, 5 Jun 2002 15:15:46 -0400 From: Ferguson, Neale [EMAIL PROTECTED] Subject: Re: [OT] Neale's effective use of irony and sarcasm

Fair suck of the sav! Don't come the raw prawn with me sport. (With apologies to Barry McKenzie). -Original Message- No, Neale, we've just gotten used to it. (All very embarrassing, I'm sure.) What we were all actually hoping for was for you to stop using very strange Aussie-isms that require a special committee to be established simply to determine (a) what words you used, and (2) what they meant. ;-) On the other hand, such things broaden our horizons and keep us on our toes (and some other metaphors). 
Rod Nash Senior IT Specialist Linux on zSeries Asia Pacific IBM Australia 60 City Road, Southbank, Victoria, Australia, 3006 Office +61-3-9626 6587 Mobile 041 221 4292 Internet - [EMAIL PROTECTED]
Re: [OT] Neale's effective use of irony and sarcasm
From: Phil Payne [EMAIL PROTECTED] Subject: Re: [OT] Neale's effective use of irony and sarcasm

Oh boy, Australian jokes about sheep... outside the scope of listserv guidelines... I would have thought any discussion of Rolf Harris well outside 'list guidelines'. Yet it's my post you pick on ... Sorry mate, but you're the sport that brought the sheep along ;-) jj
Re: S390/zSeries CPU questions
In general, only the I/O instructions and model-dependent instructions (such as Diagnose) cause intercepts, so if you avoid these as much as you can, you should be OK as far as SIE is concerned. When running V=R or V=F, most of the I/O instructions will be assisted, i.e. they will not cause an intercept. Jan Jaeger

Rob van der Heij wrote: This I understand in theory; in practice, what, if anything, can an application writer do to minimize this? And what does SIE mean?

SIE is Start Interpretive Execution, used by VM to dispatch the virtual machine. The SIE control block is used by the SIE microcode to tell which operations will cause an intercept. To some extent the configuration of the virtual machine determines what can be handled by SIE. I don't think application programmers can do a lot in this area. If anything, it would be Linux kernel-level work. One of the options in this area is to replace costly virtualisation of part of the architecture with a high-level VM-only interface (e.g. ECKD channel programs vs Diagnose I/O). Rob