Re: Running on CP or IFL ?

2014-07-14 Thread Pavelka, Tomas
> There are no conditions in which a CPU's type is unknown.  Rather, it's a bug 
> in hyptop.

If anyone actually tries to use this, note that the version of hyptop I was 
running is quite old. I did not test whether newer versions fix this behavior.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Running on CP or IFL ?

2014-07-14 Thread Chuck Tribolet
load test in a software development environment.



Chuck Tribolet
trib...@us.ibm.com (IBM business)
trib...@garlic.com (Personal)
http://www.almaden.ibm.com/cs/people/triblet



From:   Scott Rohling 
To: LINUX-390@vm.marist.edu
Date:   07/14/2014 08:17 AM
Subject:Re: Running on CP or IFL ?
Sent by:Linux on 390 Port 



Hi Chuck - can you elaborate?   When do you need such precise
measurements?  What are the measurements used for?   Just interested if
this is more of a 'test' (as in load tests) or a production
implementation.

Scott Rohling


On Mon, Jul 14, 2014 at 8:01 AM, Chuck Tribolet  wrote:

> We are running Linux native in an LPAR in cases where precise and
> consistent performance measurements are required.  When running under
> zVM, the data can get quite noisy.
>
>
>
> Chuck Tribolet
> trib...@us.ibm.com (IBM business)
> trib...@garlic.com (Personal)
> http://www.almaden.ibm.com/cs/people/triblet
>
>
>
> From:   Mike Shorkend 
> To: LINUX-390@vm.marist.edu,
> Date:   07/12/2014 03:46 PM
> Subject:Re: Running on CP or IFL ?
> Sent by:Linux on 390 Port 
>
>
>
> Is anybody doing that? Running Linux natively in an LPAR?
>
> If yes, why?
>
>
> On 12 July 2014 01:45, Marcy Cortes  wrote:
>
> > "  It can't be used natively in an LPAR."
> >
> > Who'd want to do that anyway! :)
> >
> > Marcy
> >


Re: zLinux: Dynamic Memory Management - from a Share presentation

2014-07-14 Thread Pedro Principeza
Joe.

There's a good paper about that on the IBM Knowledge Center, covering
both CPU and memory hotplug through cpuplugd, plus use cases.

http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaag/l0cpup00.pdf
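In case it helps, a cpuplugd configuration in the spirit of that paper looks roughly like the sketch below (file location and rule variables as documented for cpuplugd; the threshold values are illustrative, not recommendations):

```
# /etc/sysconfig/cpuplugd -- illustrative sketch, values are examples only
UPDATE="10"          # evaluate the rules every 10 seconds

CPU_MIN="1"          # never unplug below one online CPU
CPU_MAX="0"          # 0 = no extra upper limit

# Plug a CPU when the load average outruns the online CPUs and the
# system is busy; unplug when load drops or the system is mostly idle.
HOTPLUG="(loadavg > onumcpus + 0.75) & (idle < 10.0)"
HOTUNPLUG="(loadavg < onumcpus - 0.25) | (idle > 50)"
```

The same daemon can also manage memory ballooning (the CMM_* and MEMPLUG / MEMUNPLUG settings); the paper above covers both.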


HTH,
Pedro Principeza



From:   "Vitale, Joseph" 
To: LINUX-390@vm.marist.edu,
Date:   14/07/2014 18:29
Subject:zLinux: Dynamic Memory Management - from a Share
presentation
Sent by:Linux on 390 Port 



Hello,

I found some SHARE slides from 2011/2012 on "Dynamically Adding Memory"
to Red Hat Linux.  Curious to know if anyone uses it, and with which
guests: all, or only large-memory guests?

Also, the procedure described seems manually intensive.  Any comments on
that?


Thanks
Joe

Joseph Vitale
Technology Services Group
Mainframe Operating Systems
95 Christopher Columbus Drive
Floor 14
Jersey City,  N.J.  07302
Work  201-395-1509
Cell917-903-0102


The information contained in this e-mail, and any attachment, is
confidential and is intended solely for the use of the intended recipient.
Access, copying or re-use of the e-mail or any attachment, or any
information contained therein, by any other person is not authorized. If
you are not the intended recipient please return the e-mail to the sender
and delete it from your computer. Although we attempt to sweep e-mail and
attachments for viruses, we do not guarantee that either are virus-free
and accept no liability for any damage sustained as a result of viruses.

Please refer to http://disclaimer.bnymellon.com/eu.htm for certain
disclosures relating to European legal entities.



zLinux: Dynamic Memory Management - from a Share presentation

2014-07-14 Thread Vitale, Joseph
Hello,

I found some SHARE slides from 2011/2012 on "Dynamically Adding Memory"
to Red Hat Linux.  Curious to know if anyone uses it, and with which
guests: all, or only large-memory guests?

Also, the procedure described seems manually intensive.  Any comments on that?
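For what it's worth, on kernels with memory hotplug the standby storage shows up as memory blocks under sysfs, and onlining one is a single write. This is just a sketch of that generic interface (block numbers here are invented; they are machine-dependent), not the exact procedure from the slides:

```shell
# List the memory blocks the kernel knows about and their state
# (online/offline).  The block size varies by machine; see
# /sys/devices/system/memory/block_size_bytes.
list_memory_blocks() {
    n=0
    for blk in /sys/devices/system/memory/memory*/state; do
        [ -e "$blk" ] || continue
        printf '%s: %s\n' "$blk" "$(cat "$blk")"
        n=$((n + 1))
    done
    echo "memory blocks found: $n"
}
list_memory_blocks
# To bring a standby block online (as root), write to its state file, e.g.
#   echo online > /sys/devices/system/memory/memory8/state
```

The manual part the slides describe is deciding which blocks to online and when; daemons such as cpuplugd can automate some of that.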


Thanks
Joe

Joseph Vitale
Technology Services Group
Mainframe Operating Systems
95 Christopher Columbus Drive
Floor 14
Jersey City,  N.J.  07302
Work  201-395-1509
Cell917-903-0102





Re: Running on CP or IFL ?

2014-07-14 Thread Alan Altmark
On Monday, 07/14/2014 at 04:14 EDT, "Pavelka, Tomas"  wrote:
> I have not found the doc for diag 204, nor do I know how it is possible
> to call a diagnose when not running under z/VM (would that be a
> diagnose of PR/SM?), but if you look inside the kernel sources, you may
> be able to find out.

IBM does not publish the specifications for DIAGNOSE 0x204, as its
behavior is machine-specific, but, yes, it works in an LPAR.

> There is one catch that I know of: besides CP and IFL, there is also an
> unknown processor type. When I run hyptop under z/VM 6.3 and RHEL 6.1,
> the CPU shows up as unknown (CPU-T: UN). I am running on an IFL-only
> system. If you wanted to use this detection in practice, you would have
> to figure out under which conditions the CPU can be reported as
> unknown.

There are no conditions in which a CPU's type is unknown.  Rather, it's a
bug in hyptop.

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott



Re: openssl CA certificate maintenance

2014-07-14 Thread David L. Craig
On 14Jul14:1553-0400, Alan Altmark wrote:

> On Monday, 07/14/2014 at 11:03 EDT, Rick Troth  wrote:
>
> > A growing number of experts are making noise about the sad state of PKI
> > on this planet.
>
> The people who make the most noise about old mousetraps typically have a
> vested interest in a new model.  There's nothing wrong with the PKI model,
> IMO, but there is an incredible lack of understanding about how to manage
> its deployment so that it improves your overall security posture.

I'd suggest the model is flawed if it assumes all CAs
are above reproach--history has proven otherwise.
--

May the LORD God bless you exceedingly abundantly!

Dave_Craig__
"So the universe is not quite as you thought it was.
 You'd better rearrange your beliefs, then.
 Because you certainly can't rearrange the universe."
__--from_Nightfall_by_Asimov/Silverberg_



Re: openssl CA certificate maintenance

2014-07-14 Thread Alan Altmark
On Monday, 07/14/2014 at 11:03 EDT, Rick Troth  wrote:
> On 07/14/2014 08:54 AM, Alan Altmark wrote:
> > As far as I know, the only place you'll find a URI is in the CRL
> > portion of the certificate.  When a client or server sends a
> > certificate, the only assumption it can make is that the base CA
> > certificate is known to and trusted by the peer.  All intermediate
> > CAs, plus the base CA, must be sent to the peer during the TLS
> > handshake.
>
> Look for the "Authority Information Access" object.
>
> There are URIs in other sections too. "They're everywhere."

I believe that the only place a URI is mandatory is in the CRL
Distribution Points extension.  My VeriSign cert doesn't have it anywhere
else.

> There must be a doc somewhere that affirms the mandate you cite about
> the intermediates. Not clear how well that is implemented in practice. I
> regularly see certificates which lack their chaining intermediates.
> Dunno if those are separated out of band. Possible.

From RFC 2246 (TLS 1.0), re: the certificate_list:
   This is a sequence (chain) of X.509v3 certificates. The sender's
   certificate must come first in the list. Each following
   certificate must directly certify the one preceding it. Because
   certificate validation requires that root keys be distributed
   independently, the self-signed certificate which specifies the
   root certificate authority may optionally be omitted from the
   chain, under the assumption that the remote end must already
   possess it in order to validate it in any case.


> > If you just use the public CAs, then the bundles are fine, but that's
> > the point of asking the question.  Each way has its advantages, but I
> > want to know how people deal with managing well-known public CAs,
> > self-signed certs (boo! hiss!), and private CAs in an openssl
> > environment.
>
> _Self-signed does not equate to bad._
> It simply requires manual assertion or IT department pre-loading.
> You're frowning on the bad practice of letting end /users perform manual
> assertion/ of a self-signed cert. That /is/ bad.
> It's all about trust.

Outside of a closed environment, self-signed certs are bad.  If
something-A needs a cert, then something-B will be asked to trust it.
"Trust me, you need to trust the untrustworthy."   Self-signed (non-CA)
certs are effortless to obtain and therefore have no value for
authentication.
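To underline how low that bar is, here is the whole "effort" with a stock openssl (the file names and CN are invented for the example):

```shell
# Mint a self-signed certificate in one command -- no identity check,
# no third party, nothing.  That is why it proves nothing by itself.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout key.pem -out cert.pem -days 1 \
    -subj "/CN=anyone-at-all"

# Issuer and subject are the same entity: the cert vouches only for itself.
openssl x509 -in cert.pem -noout -subject -issuer
```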

> A growing number of experts are making noise about the sad state of PKI
> on this planet.

The people who make the most noise about old mousetraps typically have a
vested interest in a new model.  There's nothing wrong with the PKI model,
IMO, but there is an incredible lack of understanding about how to manage
its deployment so that it improves your overall security posture.

> Managing certs and carrying issue/sign chains makes the PKI trust model
> more like the PGP trust model. One criticism of the PGP trust model (for
> consumer use) is the need for such work.
> Ironic.

This is that lack of understanding about PKI.  If the rule is "If Mom says
it's ok, then its ok," you ignore what Dad says unless it's "Go ask your
Mom."  But, no, we start using certs before we have PKI baked into the
infrastructure, so the only "rule" that works is to obey Mom.  And Dad.
And your siblings.  Oh, and the family pet ferret.  Ferrets are
trustworthy, right?  (More so than siblings, perhaps.)

Outside of the mainframe, I haven't seen a centralized cert management and
TLS implementation.  On Linux, for example, every application can have a
different implementation with its own cert store.  Only by convention and
configuration do we manage to centralize things.  No enforcement.  There
are too many places where the "trusted root" begins.  I should be able to
deploy a new ca root bundle once on each host and KNOW that all subsystems
that use ca root certs will be using the ones I just deployed. Application
stacks don't get to control policy - they follow it.  This is why z/VM and
z/OS have System SSL.  Vermin-ridden applications don't have access to
certs and keys, just as God intended.

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott



Re: openssl CA certificate maintenance

2014-07-14 Thread R P Herrold
On Mon, 14 Jul 2014, Alan Altmark wrote:

> I'm not concerned about the format of the files, but how you
> and/or your Linux admins like to manage them, particularly
> in light of the fact that the bundles of well-known CAs is
> updated from time to time.

We are researching / testing in this space, as Oracle has
decided to honor .jar code signing only when signed by 'yet
another' clutch of CAs, not including the one we prefer and
use

The Mozilla.org folks have their (different) rules for
inclusion in their base bundle as well

And I saw a note that Google / Chrome had decided to restrict
trust in a secondary Indian governmental CA to just eight or
nine domains, after it incautiously signed several CSRs
purporting to be, but not actually, from Google ...

And there is my personal long-standing desire to be able to
inject, at our boundary 'squid' proxies, a local CA wildcard
certificate so all interior content is retrieved and proxyable
over SSL only

-- Russ herrold



Re: Running on CP or IFL ?

2014-07-14 Thread Scott Rohling
Hi Chuck - can you elaborate?   When do you need such precise measurements?
What are the measurements used for?   Just interested if this is more of a
'test' (as in load tests) or a production implementation.

Scott Rohling


On Mon, Jul 14, 2014 at 8:01 AM, Chuck Tribolet  wrote:

> We are running Linux native in an LPAR in cases where precise and
> consistent performance measurements are required.  When running under zVM,
> the data can get quite noisy.
>
>
>
> Chuck Tribolet
> trib...@us.ibm.com (IBM business)
> trib...@garlic.com (Personal)
> http://www.almaden.ibm.com/cs/people/triblet
>
>
>
> From:   Mike Shorkend 
> To: LINUX-390@vm.marist.edu,
> Date:   07/12/2014 03:46 PM
> Subject:Re: Running on CP or IFL ?
> Sent by:Linux on 390 Port 
>
>
>
> Is anybody doing that? Running Linux natively in an LPAR?
>
> If yes, why?
>
>
> On 12 July 2014 01:45, Marcy Cortes  wrote:
>
> > "  It can't be used natively in an LPAR."
> >
> > Who'd want to do that anyway! :)
> >
> > Marcy
> >


Re: Running on CP or IFL ?

2014-07-14 Thread Chuck Tribolet
We are running Linux native in an LPAR in cases where precise and
consistent performance measurements are required.  When running under zVM,
the data can get quite noisy.



Chuck Tribolet
trib...@us.ibm.com (IBM business)
trib...@garlic.com (Personal)
http://www.almaden.ibm.com/cs/people/triblet



From:   Mike Shorkend 
To: LINUX-390@vm.marist.edu,
Date:   07/12/2014 03:46 PM
Subject:Re: Running on CP or IFL ?
Sent by:Linux on 390 Port 



Is anybody doing that? Running Linux natively in an LPAR?

If yes, why?


On 12 July 2014 01:45, Marcy Cortes  wrote:

> "  It can't be used natively in an LPAR."
>
> Who'd want to do that anyway! :)
>
> Marcy
>


Re: openssl CA certificate maintenance

2014-07-14 Thread Rick Troth
On 07/14/2014 08:54 AM, Alan Altmark wrote:
> As far as I know, the only place you'll find a URI is in the CRL portion
> of the certificate.  When a client or server sends a certificate, the only
> assumption it can make is that the base CA certificate is known to and
> trusted by the peer.  All intermediate CAs, plus the base CA, must be sent
> to the peer during the TLS handshake.

Look for the "Authority Information Access" object.

There are URIs in other sections too. "They're everywhere."

There must be a doc somewhere that affirms the mandate you cite about
the intermediates. Not clear how well that is implemented in practice. I
regularly see certificates which lack their chaining intermediates.
Dunno if those are separated out of band. Possible.

> An application like a web browser will happily let you add the base CA to
> your inventory of trusted CAs.

Correct. And that's normally all one needs.

> If you just use the public CAs, then the bundles are fine, but that's the
> point of asking the question.  Each way has its advantages, but I want to
> know how people deal with managing well-known public CAs, self-signed
> certs (boo! hiss!), and private CAs in an openssl environment.

_Self-signed does not equate to bad._
It simply requires manual assertion or IT department pre-loading.
You're frowning on the bad practice of letting end /users perform manual
assertion/ of a self-signed cert. That /is/ bad.
It's all about trust.

A growing number of experts are making noise about the sad state of PKI
on this planet.

> I'm not concerned about the format of the files, but how you and/or your
> Linux admins like to manage them, particularly in light of the fact that
> the bundles of well-known CAs are updated from time to time.

Yep! It's a pain. More than just pain, it's an exposure for many sites.

Managing certs and carrying issue/sign chains makes the PKI trust model
more like the PGP trust model. One criticism of the PGP trust model (for
consumer use) is the need for such work.
Ironic.


On 07/14/2014 09:23 AM, David Boyes wrote:
> Option 1 has a high probability of human error, and if you break one, you 
> break them all.   ...

That's a failure of the implementation. I haven't seen such. Guessing
you have.

> ...   It's also kind of a pain to determine what certs are installed where.

Too true.

> Option 2 permits easily distributing and installing certificates using RPMs, 
> which makes updating them (or removing them) a snap. It's also a lot easier 
> to make sure that any necessary intermediate certificates get pulled in 
> (package dependencies + something like yum work a treat) and it's super easy 
> to know which systems are affected if a cert is compromised (rpm -qa |grep 
> local-cert-x). It also makes it trivial to automate the c_rehash run in a 
> post-install script so you don't ever forget to do it.

Agreed: individual files are the way to go for general certificate
management.

I like the bundles for loading a known batch of trusted certs.

> It's a little more work to set up certificate distribution that way the first 
> time, but it's worth it.

Yes, it's a good exercise to go through.

Keep in mind that discrete-to-bundle and bundle-to-discrete are trivial
operations where PEM is used.
Got Pipes?
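To make that concrete: with everything in PEM, bundle-to-discrete is one awk rule and discrete-to-bundle is cat. The toy "certificates" below are dummy payloads with real PEM framing, just for illustration:

```shell
# Build a toy two-stanza bundle (dummy base64 payloads, real PEM markers).
cat > bundle.pem <<'EOF'
-----BEGIN CERTIFICATE-----
QUFBQQ==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
QkJCQg==
-----END CERTIFICATE-----
EOF

# Bundle -> discrete: start a new output file at each BEGIN marker.
awk '/-----BEGIN /{n++} n{print > ("cert-" n ".pem")}' bundle.pem

# Discrete -> bundle: concatenation is all it takes.
cat cert-*.pem > rebundled.pem
```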



--

Rick Troth
Senior Software Developer

Velocity Software Inc.
Mountain View, CA 94041
Main: (877) 964-8867
Direct: (614) 594-9768
ri...@velocitysoftware.com 






Re: LINUX-390 Digest - 12 Jul 2014 to 13 Jul 2014 (#2014-122)

2014-07-14 Thread David Boyes
> From:Alan Altmark 
> Subject: openssl CA certificate maintenance
> 
> I (think I) know that openSSL provides two ways to manage certificates:
> 1.  A single PEM file that has all of your CA certificates in it.  I
> say "single" as a matter of practice.
> 2.  A single directory that contains all of the certificates stored in
> separate PEM files.  You use the c_rehash utility each time you add or
> delete a certificate to/from the directory.
> I'm curious as to which way most people do it, and why.

Whenever possible, option 2. Some applications that try to be "smart" about 
certificates don't like this approach, but those seem to be getting rarer 
(yay). 

Option 1 has a high probability of human error, and if you break one, you break 
them all. It's also kind of a pain to determine what certs are installed where. 

Option 2 permits easily distributing and installing certificates using RPMs, 
which makes updating them (or removing them) a snap. It's also a lot easier to 
make sure that any necessary intermediate certificates get pulled in (package 
dependencies + something like yum work a treat) and it's super easy to know 
which systems are affected if a cert is compromised (rpm -qa |grep 
local-cert-x). It also makes it trivial to automate the c_rehash run in a 
post-install script so you don't ever forget to do it. 

It's a little more work to set up certificate distribution that way the first 
time, but it's worth it. 
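A minimal sketch of option 2 with stock OpenSSL (directory name and test cert invented for the example; `openssl rehash` is the built-in equivalent of the c_rehash script on OpenSSL 1.1+):

```shell
# Create a CA directory, drop one cert into it, and rehash so that
# -CApath style lookups (<subject-hash>.0 symlinks) can find it.
mkdir -p ca-dir
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout test-ca.key -out ca-dir/test-ca.pem -days 1 \
    -subj "/CN=example-test-ca"
openssl rehash ca-dir          # or: c_rehash ca-dir
ls ca-dir                      # now contains a <hash>.0 symlink
```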



Re: openssl CA certificate maintenance

2014-07-14 Thread Alan Altmark
On Sunday, 07/13/2014 at 09:25 EDT, Rick Troth  wrote:
> Planning to set up your own CA?

No.  :-)

> It's actually a single file with multiple PEM instances.
> [...]
> Velocity uses the same format for our "CA Bundle". [...]

Indeed.  It's excellent for a CA bundle.  And for those who only use certs
signed by the Usual Suspects, it's fine.

> > And if you're using a private certificate, then you probably have a
> > separate PEM file that contains it and the certificate chain, and the
> > cert's associated private key.
>
> Sure. Just be really really really careful where that private key hides.
>
> I've seen a certificate in the same file with its private key. (Two PEM
> stanzas; one file.) But I've not yet encountered a cert chain (multiple
> cert PEM stanzas) in the same file with a private key.
>
> Note: chaining can often be done dynamically. It's common for a
> certificate to contain a URL where the issuer's certificate can be
> downloaded.

As far as I know, the only place you'll find a URI is in the CRL portion
of the certificate.  When a client or server sends a certificate, the only
assumption it can make is that the base CA certificate is known to and
trusted by the peer.  All intermediate CAs, plus the base CA, must be sent
to the peer during the TLS handshake.

An application like a web browser will happily let you add the base CA to
your inventory of trusted CAs.

When you receive a signed cert request for an end-entity (server or
client), the signer includes his own certificate and all of the
intermediate CAs that prove the worth of the signature.  The "receive
certificate" function tears the chain apart and adds them all to the
certificate inventory (regardless of how the inventory is stored).

> > I'm curious as to which way most people do it, and why.
>
> Academically, it doesn't matter. Either way, the individual certificates
> need to be separated for processing.
>
> Velocity's zSSL uses both methods. We use a bundle for all pre-loaded
> certificates. (For validating clients, should you need that function.)
> Intermediate and client certificates are stored as individual files. In
> my experience, the single file is easier for a massive root store.
> Separate files make more sense when automating.

If you just use the public CAs, then the bundles are fine, but that's the
point of asking the question.  Each way has its advantages, but I want to
know how people deal with managing well-known public CAs, self-signed
certs (boo! hiss!), and private CAs in an openssl environment.

I'm not concerned about the format of the files, but how you and/or your
Linux admins like to manage them, particularly in light of the fact that
the bundles of well-known CAs are updated from time to time.

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott



Re: openssl CA certificate maintenance

2014-07-14 Thread Rick Troth
More about the acronyms and encoding.

PEM (originally "Privacy Enhanced Mail") refers to a base64-encoded DER
format with the "-----" markers at the start and end. It's common to
have free-form text outside the "-----BEGIN whatever-----" and
"-----END whatever-----" markers. The base64 stuff is always between
them.
DER is "Distinguished Encoding Rules" which refers to a nifty binary
structure for holding X.509 data or related SSL stuff.

ASN.1 stands for "Abstract Syntax Notation 1". It's a tag-length-data
format used for data at rest (like a certificate) or data in flight
(LDAP, VoIP, even Kerberos). DER is based on ASN.1.

The command 'openssl asn1parse' will break apart a certificate so you
can see its structure. It takes either PEM or DER input (but specify
which).
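Any certificate will do for a test drive; here a throwaway self-signed one is generated first (names invented for the example):

```shell
# Generate a scratch certificate, then dump its ASN.1/DER structure.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout scratch.key -out scratch.pem -days 1 -subj "/CN=asn1-demo"
openssl asn1parse -in scratch.pem | head -n 5
```

The first line of the dump is the outer SEQUENCE (the Certificate itself), which wraps the TBSCertificate, the signature algorithm, and the signature.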

I hope this helps.




--


Rick Troth
Senior Software Developer

Velocity Software Inc.
Mountain View, CA 94041
Main: (877) 964-8867
Direct: (614) 594-9768
ri...@velocitysoftware.com 






Re: Running on CP or IFL ?

2014-07-14 Thread Pavelka, Tomas
> There must be some way to determine. At least when using hyptop in LPAR, it 
> tells the number of available IFL and CP in the top line.

I have looked at the source of hyptop and it reads the information from 
debugfs, namely from these two files:

/sys/kernel/debug/s390_hypfs/diag_204  - if running on LPAR
/sys/kernel/debug/s390_hypfs/diag_2fc  - if running under VM

I assume that DIAG 2FC is run by the kernel and saved to the file. The diagnose 
is documented here:
http://pic.dhe.ibm.com/infocenter/zvm/v6r3/topic/com.ibm.zvm.v630.hcpb4/hcpb4295.htm

I have not found the doc for diag 204, nor do I know how it is possible
to call a diagnose when not running under z/VM (would that be a diagnose
of PR/SM?), but if you look inside the kernel sources, you may be able to
find out.

There is one catch that I know of: besides CP and IFL, there is also an
unknown processor type. When I run hyptop under z/VM 6.3 and RHEL 6.1,
the CPU shows up as unknown (CPU-T: UN). I am running on an IFL-only
system. If you wanted to use this detection in practice, you would have to
figure out under which conditions the CPU can be reported as unknown.
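For anyone who wants to poke at this directly, the two files are easy to probe for; on anything other than System z with hypfs mounted under debugfs they will simply show up as absent:

```shell
# Probe for the hypfs debugfs files that hyptop reads its CPU-type
# data from (s390 only; needs debugfs mounted at /sys/kernel/debug).
for f in /sys/kernel/debug/s390_hypfs/diag_204 \
         /sys/kernel/debug/s390_hypfs/diag_2fc; do
    if [ -r "$f" ]; then
        echo "present: $f"
    else
        echo "absent:  $f"
    fi
done
```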

Tomas