Re: Satellite instead of subscription-manager PLEASE HELP (BLOCKED)

2016-10-14 Thread Jason DeTiberus
On Fri, Oct 14, 2016 at 10:35 AM, Dean Peterson 
wrote:

> I went to the link: "https://access.redhat.com/rhn-to-rhsm". It says
> Satellite users should be unaffected. I'm a little confused. I'm using
> Satellite, but when I type subscription-manager register it says I'm
> registered. However, when I run "subscription-manager attach --auto", it
> spins for a while then says I am not registered.
>


The two commands talk to separate systems: register reports that you are
registered only because the RHN Classic tooling is configured and answers
for the host, while attach tries to attach subscriptions from RHSM and
fails because RHSM does not manage the system.
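
A quick way to confirm which side a host is actually registered with
(assuming the stock file locations) is:

# RHSM registration - errors out if the host is unknown to RHSM
subscription-manager identity

# RHN Classic registration drops a system ID file here
ls -l /etc/sysconfig/rhn/systemid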



> We pay a lot of money for OpenShift Enterprise and it will not work
> without upgrading our entire Satellite system?
>

For the host subscription/entitlement information to be propagated into the
container, the host would need to be subscribed either to Satellite 6 or to
the hosted subscription management service (RHSM).


> Right now we are on version 5.5 of Satellite. There is no way to make this
> work with our existing setup?
>

Possible options I can think of off the top of my head:
- Subscribe OpenShift systems directly to Subscription Manager, instead of
Satellite 5.5
- Access packages through a reposync'd mirror:
https://access.redhat.com/solutions/9892, and configure the mirror as part
of the container build.
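
For the second option, the build just needs yum inside the container pointed
at your internal mirror; a minimal sketch of a repo file you could drop in
during the image build (the mirror URL below is only a placeholder) would be
something like:

# /etc/yum.repos.d/internal-mirror.repo
[internal-rhel-mirror]
name=Internal reposync mirror of RHEL 7 Server RPMs
baseurl=http://mirror.example.com/rhel-7-server-rpms/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release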

I'd suggest contacting support and/or your account manager, since they may
know of other options available and could potentially help advocate for
adding Satellite 5 support.

--
Jason



>
> On Thu, Oct 13, 2016 at 3:58 PM, Jason DeTiberus 
> wrote:
>
>>
>>
>> On Thu, Oct 13, 2016 at 4:48 PM, Dean Peterson 
>> wrote:
>>
>>> Our machines use RHN Classic. If I try to run subscription-manager
>>> register it says I am already registered with RHN Classic. However, this
>>> does not seem to be compatible with Docker and OpenShift. Operations wants to
>>> stick with RHN Classic and Satellite. Is this possible?
>>>
>>
>> I don't think this is currently possible; the entitlement/subscription
>> mapping is done through a set of plugins that are specific to
>> subscription-manager. With RHN Classic approaching end of life (
>> https://access.redhat.com/rhn-to-rhsm) I don't really see that changing,
>> but you could always reach out to support to file a formal RFE.
>>
>> --
>> Jason DeTiberus
>>
>>
>>>
>>> On Thu, Oct 13, 2016 at 3:29 PM, Kent Perrier 
>>> wrote:
>>>
 subscription-manager is used to register your host to your local
 satellite as well. How are you patching your hosts if they are not
 registered?

 Kent

 On Thu, Oct 13, 2016 at 3:05 PM, Dean Peterson  wrote:

> Can anyone please help? We use Satellite for access to our software.
> We do not use subscription-manager. Unfortunately when running docker
> builds, the containers cannot access the host's repositories because they
> expect to access auto-attached subscription-manager subscriptions.
> How is OpenShift supposed to work with Satellite instead of
> subscription-manager?
>
>
>


 --
 Kent Perrier
 Technical Account Manager


>>
>


-- 
Jason DeTiberus


Re: How are codes injected into builder image?

2016-10-14 Thread Jonathan Yu
Hey David,

The s2i tool clones the repository and injects the source into the
container. This means that git is not required inside the container.

There's a label that defines where it goes, so it can be overridden if
you'd like.  Example from the WildFly image:
https://github.com/openshift-s2i/s2i-wildfly/blob/master/10.1/Dockerfile#L17

Documentation that explains the labels and what they do:
https://docs.openshift.com/container-platform/3.3/creating_images/metadata.html

Specifically for S2I builders, there are a few other labels:
https://docs.openshift.com/container-platform/3.3/creating_images/s2i.html
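
From memory, the relevant line in that WildFly Dockerfile is along these
lines (check the first link above for the authoritative version):

# Tell s2i where the injected source and scripts land inside the builder image
LABEL io.openshift.s2i.destination="/opt/s2i/destination"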

Cheers,

Jonathan

On Fri, Oct 14, 2016 at 4:30 AM, David Strejc 
wrote:

> I will answer my own question:
>
> sources goes to /opt/s2i/destination
>
> I don't know if this is image specific - I've just not encountered this
> particular info in any documentation.
>
> Thank you.
> David Strejc
> https://octopussystems.cz
> t: +420734270131
> e: david.str...@gmail.com
>
>
> On Fri, Oct 14, 2016 at 1:12 PM, David Strejc 
> wrote:
> > I am using Wildfly builder image (just for testing purposes)
> > and I wrote my own assemble and run scripts.
> >
> > What am I doing wrong when I need to do git clone
> > inside of assemble script?
> >
> > When openshift triggers build it downloads provided
> > git url and then injects .s2i scripts into builder image
> > but how and where are codes injected into builder image?
> >
> > What am I missing?
> >
> > Thanks for any suggestion - links, docs etc. I am trying
> > to get into build process.
> >
> > David Strejc
> > https://octopussystems.cz
> > t: +420734270131
> > e: david.str...@gmail.com
>
>



-- 
Jonathan Yu, P.Eng. / Software Engineer, OpenShift by Red Hat / Twitter
(@jawnsy) is the quickest way to my heart 

*“A master in the art of living draws no sharp distinction between his work
and his play; his labor and his leisure; his mind and his body; his
education and his recreation. He hardly knows which is which. He simply
pursues his vision of excellence through whatever he is doing, and leaves
others to determine whether he is working or playing. To himself, he always
appears to be doing both.”* — L. P. Jacks, Education through Recreation
(1932), p. 1


Re: Satellite instead of subscription-manager PLEASE HELP (BLOCKED)

2016-10-14 Thread Dean Peterson
I went to the link: "https://access.redhat.com/rhn-to-rhsm". It says
Satellite users should be unaffected. I'm a little confused. I'm using
Satellite, but when I type subscription-manager register it says I'm
registered. However, when I run "subscription-manager attach --auto", it
spins for a while then says I am not registered. We pay a lot of money for
OpenShift Enterprise and it will not work without upgrading our entire
Satellite system? Right now we are on version 5.5 of Satellite. There is no
way to make this work with our existing setup?

On Thu, Oct 13, 2016 at 3:58 PM, Jason DeTiberus 
wrote:

>
>
> On Thu, Oct 13, 2016 at 4:48 PM, Dean Peterson 
> wrote:
>
>> Our machines use RHN Classic. If I try to run subscription-manager
>> register it says I am already registered with RHN Classic. However, this
>> does not seem to be compatible with Docker and OpenShift. Operations wants to
>> stick with RHN Classic and Satellite. Is this possible?
>>
>
> I don't think this is currently possible; the entitlement/subscription
> mapping is done through a set of plugins that are specific to
> subscription-manager. With RHN Classic approaching end of life (
> https://access.redhat.com/rhn-to-rhsm) I don't really see that changing,
> but you could always reach out to support to file a formal RFE.
>
> --
> Jason DeTiberus
>
>
>>
>> On Thu, Oct 13, 2016 at 3:29 PM, Kent Perrier 
>> wrote:
>>
>>> subscription-manager is used to register your host to your local
>>> satellite as well. How are you patching your hosts if they are not
>>> registered?
>>>
>>> Kent
>>>
>>> On Thu, Oct 13, 2016 at 3:05 PM, Dean Peterson 
>>> wrote:
>>>
 Can anyone please help? We use Satellite for access to our software. We
 do not use subscription-manager. Unfortunately when running docker builds,
 the containers cannot access the host's repositories because they expect to
 access auto-attached subscription-manager subscriptions.
 How is OpenShift supposed to work with Satellite instead of
 subscription-manager?



>>>
>>>
>>> --
>>> Kent Perrier
>>> Technical Account Manager
>>>
>>>
>


Re: HTTPS certificate change

2016-10-14 Thread Jim Minter

Hi Mila,

Try:

oc delete clusterrolebinding/router-router-role

instead of

oc delete rolebinding/router-router-role
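
i.e., putting that together with the rest of your cleanup, something roughly
like this should let the re-create succeed (untested, adjust to your setup):

oc delete dc/router svc/router clusterrolebinding/router-router-role \
    serviceaccounts/router secret/router-certs
oadm router --default-cert=cert.new.pem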

Cheers,

Jim

--
Jim Minter
Principal Software Engineer, Red Hat UK


On 14/10/16 09:27, Miloslav Vlach wrote:

Hi Jim,

thanks for reply. I have made some investigation how it works and I have
an idea.

We have problem with certification authority and we bought the new
wildcard certificate.
I tried to change the certificate in the secured route but nothing
happen. I dive into router
pod and I found this row

bind 127.0.0.1:10444 ssl no-sslv3 crt /etc/pki/tls/private/tls.crt crt
/var/lib/haproxy/router/certs accept-proxy

In /etc/pki/tls/private/tls.crt is the wildcard certificate for the
domain rohlik.cz, and in the directory /var/lib/haproxy/router/certs there
are three certificates. Two are the same as the default certificate and
the last is the “new” certificate (a wildcard certificate too).
The HAProxy documentation says that certificates are picked in
alphabetical order.


If a directory name is used instead of a PEM file, then all files found in
that directory will be loaded in alphabetic order unless their name ends with
'.issuer' or '.ocsp' (reserved extensions). This directive may be specified
multiple times in order to load certificates from multiple files or
directories. The certificates will be presented to clients who provide a valid
TLS Server Name Indication field matching one of their CN or alt subjects.
Wildcards are supported, where a wildcard character '*' is used instead of the
first hostname component (eg: *.example.org matches www.example.org but not
www.sub.example.org).

When I delete the environment settings from the dc/router (the default
certificate) and delete the other 2 certificates, everything starts
working. Why? Because there is only one certificate which matches, and
HAProxy picks up the correct one.

In the OpenShift documentation there is no information on how to change the
certificate. I can deploy a new router with a changed --default-cert, but
how can I correctly delete the old router? I tried this

oc delete dc/router svc/router  rolebinding/router-router-role
serviceaccounts/router secret/router-certs

deploymentconfig "router" deleted

service "router" deleted

serviceaccount "router" deleted

secret "router-certs" deleted

Error from server: rolebinding "router-router-role" not found

and creating a new one fails too

oadm router --default-cert=cert.new.pem

info: password for stats user admin has been set to AaTk1rxtyh

--> Creating router router ...

secret "router-certs" created

serviceaccount "router" created

error: rolebinding "router-router-role" already exists

deploymentconfig "router" created

service "router" created

--> Failed



How can I correctly delete the role binding and redeploy the router?

Thanks Mila

On 14 October 2016 at 10:13:28, Jim Minter (jmin...@redhat.com) wrote:


Hi Mila,

There are a number of different HTTPS certificates in OpenShift. I'm
supposing you're talking about the one served by the haproxy for actual
end-user services hosted on OpenShift?

'Route' objects in OpenShift can specify their own TLS certs, overriding
the default specifically for the route in question. See [1] as a
starting point.

The default TLS cert presented by haproxy can be set using oadm router
--default-cert. There's a bit of information at [2] as a starting point.

It's also worth noting that some browsers don't react very well to the
TLS cert changing under their feet, and they don't always report what's
going on correctly until a restart. The following command can be useful
in seeing what's going on:

$ openssl s_client -connect <hostname>:443 -servername <hostname>

[1]
https://docs.openshift.org/latest/architecture/core_concepts/routes.html#secured-routes

[2]
https://docs.openshift.org/latest/install_config/router/default_haproxy_router.html#using-wildcard-certificates


Cheers,

Jim

--
Jim Minter
Principal Software Engineer, Red Hat UK

On 13/10/16 20:32, Miloslav Vlach wrote:
> Hi all,
>
> I would like to change the https certificate. I modified the routes and the
> certificate served has not changed. Does anybody know why? The certificates
> are correctly written to the router pod. I don't understand.
>
>   bind 127.0.0.1:10444 ssl no-sslv3 crt
> /etc/pki/tls/private/tls.crt crt /var/lib/haproxy/router/certs accept-proxy
>
>
> In the directory certs there are many PEM certificates. But the server
> returns the /etc/pki/tls/private/tls.crt
>
> I have questions:
>
> 1. how to correctly change the certificate for all routes
> 2. why didn't this solution work for the specific route
>
> Is there any way to deploy/update a new router (oadm router) without
> deleting it first?
>
> Thanks Mila
>
>
>

Re: How are codes injected into builder image?

2016-10-14 Thread David Strejc
I will answer my own question:

The sources go to /opt/s2i/destination.

I don't know if this is image-specific - I've just not encountered this
particular info in any documentation.

Thank you.
David Strejc
https://octopussystems.cz
t: +420734270131
e: david.str...@gmail.com


On Fri, Oct 14, 2016 at 1:12 PM, David Strejc  wrote:
> I am using Wildfly builder image (just for testing purposes)
> and I wrote my own assemble and run scripts.
>
> What am I doing wrong when I need to do git clone
> inside of assemble script?
>
> When openshift triggers build it downloads provided
> git url and then injects .s2i scripts into builder image
> but how and where are codes injected into builder image?
>
> What am I missing?
>
> Thanks for any suggestion - links, docs etc. I am trying
> to get into build process.
>
> David Strejc
> https://octopussystems.cz
> t: +420734270131
> e: david.str...@gmail.com



How are codes injected into builder image?

2016-10-14 Thread David Strejc
I am using the WildFly builder image (just for testing purposes)
and I wrote my own assemble and run scripts.

What am I doing wrong when I need to do a git clone
inside of the assemble script?

When OpenShift triggers a build it downloads the provided
git URL and then injects the .s2i scripts into the builder image,
but how and where is the code injected into the builder image?

What am I missing?

Thanks for any suggestions - links, docs, etc. I am trying
to get into the build process.

David Strejc
https://octopussystems.cz
t: +420734270131
e: david.str...@gmail.com



Re: Does NFS PVC wipe existing data before attaching?

2016-10-14 Thread Michail Kargakis
You should be able to set pvc.spec.volumeName to the name of the volume you
want to bind.
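
For what it's worth, a minimal sketch of the pairing (all names, sizes,
server and path below are placeholders) could look roughly like this; the
PV keeps its data on release via the Retain policy Seth mentions below,
and the PVC pins itself to that specific PV via volumeName:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # keep the data when the claim is released
  nfs:
    server: nfs.example.com               # placeholder NFS server
    path: /exports/shared                 # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: shared-data-pv              # bind explicitly to the PV above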

On Thu, Oct 13, 2016 at 10:59 PM, Lionel Orellana 
wrote:

> In my limited experimentation I had problems with NFS PVs getting wiped
> out even though the policy was set to Retain. In fact I ended up in this
> situation where if I created a file in the NFS volume it was deleted in
> front of my eyes in a few seconds. Obviously I did something very wrong
> with the PV.  I've found mounting the NFS volume directly into the pod as
> Seth suggested a lot easier.  I also was unable to force a particular PV to
> be bound to a PVC. They seem to work like a pool and you get what you get.
> So if you have an existing nfs volume with data you want to mount into a
> particular pod there might not be a way of doing that with PV's but I would
> love to be proven wrong by others.
>
> On 14 October 2016 at 07:32, Seth Jennings  wrote:
>
>> NFS mounts can be mounted directly into pods without being PVs like this:
>>
>> volumes:
>> - name: shared
>>   nfs:
>>     server: <NFS server hostname>
>>     path: <exported path>
>>
>> If you are using NFS PVs, then the persistentVolumeReclaimPolicy
>> determines if the data is wiped when the PVC is released.  The default
>> value is "Retain".  It will not delete the data unless you set it to
>> "Recycle".
>>
>> https://docs.openshift.com/enterprise/3.0/admin_guide/persistent_storage_nfs.html#reclaiming-resources
>>
>> Hope that answers your question!
>>
>> On Thu, Oct 13, 2016 at 10:19 AM, Dean Peterson 
>> wrote:
>> > If I create a persistent volume claim using an NFS share that has
>> > existing data, will the data be wiped? Same thing with creating the
>> > persistent volume - will the existing data be deleted? I want to make
>> > existing data accessible to multiple pods/containers in an NFS share.
>> > If I make a persistent volume pointing to that existing path, how do I
>> > get the persistent volume claim to access that existing path and make
>> > the containers with the claim see it?
>> >


Re: HTTPS certificate change

2016-10-14 Thread Miloslav Vlach
Hi Jim,

thanks for the reply. I have done some investigation into how it works and I
have an idea.

We had a problem with our certification authority and we bought a new
wildcard certificate. I tried to change the certificate in the secured route
but nothing happened. I dove into the router pod and found this line:

bind 127.0.0.1:10444 ssl no-sslv3 crt
/etc/pki/tls/private/tls.crt crt /var/lib/haproxy/router/certs accept-proxy

In /etc/pki/tls/private/tls.crt is the wildcard certificate for the
domain rohlik.cz, and in the directory /var/lib/haproxy/router/certs there
are three certificates. Two are the same as the default certificate and the
last is the “new” certificate (a wildcard certificate too).
The HAProxy documentation says that certificates are picked in
alphabetical order.


If a directory name is used instead of a PEM file, then all files found in
that directory will be loaded in alphabetic order unless their name ends with
'.issuer' or '.ocsp' (reserved extensions). This directive may be specified
multiple times in order to load certificates from multiple files or
directories. The certificates will be presented to clients who provide a valid
TLS Server Name Indication field matching one of their CN or alt subjects.
Wildcards are supported, where a wildcard character '*' is used instead of the
first hostname component (eg: *.example.org matches www.example.org
but not www.sub.example.org).

When I delete the environment settings from the dc/router (the default
certificate) and delete the other 2 certificates, everything starts working.
Why? Because there is only one certificate which matches, and HAProxy picks
up the correct one.

In the OpenShift documentation there is no information on how to change the
certificate. I can deploy a new router with a changed --default-cert, but
how can I correctly delete the old router? I tried this

oc delete dc/router svc/router  rolebinding/router-router-role
serviceaccounts/router secret/router-certs

deploymentconfig "router" deleted

service "router" deleted

serviceaccount "router" deleted

secret "router-certs" deleted
Error from server: rolebinding "router-router-role" not found

and creating a new one fails too

oadm router --default-cert=cert.new.pem

info: password for stats user admin has been set to AaTk1rxtyh

--> Creating router router ...

secret "router-certs" created

serviceaccount "router" created

error: rolebinding "router-router-role" already exists

deploymentconfig "router" created

service "router" created

--> Failed


How can I correctly delete the role binding and redeploy the router?

Thanks Mila

On 14 October 2016 at 10:13:28, Jim Minter (jmin...@redhat.com) wrote:

Hi Mila,

There are a number of different HTTPS certificates in OpenShift. I'm
supposing you're talking about the one served by the haproxy for actual
end-user services hosted on OpenShift?

'Route' objects in OpenShift can specify their own TLS certs, overriding
the default specifically for the route in question. See [1] as a
starting point.

The default TLS cert presented by haproxy can be set using oadm router
--default-cert. There's a bit of information at [2] as a starting point.

It's also worth noting that some browsers don't react very well to the
TLS cert changing under their feet, and they don't always report what's
going on correctly until a restart. The following command can be useful
in seeing what's going on:

$ openssl s_client -connect <hostname>:443 -servername <hostname>

[1]
https://docs.openshift.org/latest/architecture/core_concepts/routes.html#secured-routes
[2]
https://docs.openshift.org/latest/install_config/router/default_haproxy_router.html#using-wildcard-certificates

Cheers,

Jim

-- 
Jim Minter
Principal Software Engineer, Red Hat UK

On 13/10/16 20:32, Miloslav Vlach wrote:
> Hi all,
>
> I would like to change the https certificate. I modified the routes and the
> certificate served has not changed. Does anybody know why? The certificates
> are correctly written to the router pod. I don't understand.
>
> bind 127.0.0.1:10444 ssl no-sslv3 crt
> /etc/pki/tls/private/tls.crt crt /var/lib/haproxy/router/certs accept-proxy
>
>
> In the directory certs there are many PEM certificates. But the server
> returns the /etc/pki/tls/private/tls.crt
>
> I have questions:
>
> 1. how to correctly change the certificate for all routes
> 2. why didn't this solution work for the specific route
>
> Is there any way to deploy/update a new router (oadm router) without
> deleting it first?
>
> Thanks Mila
>
>
>