Re: Web Console default password

2016-06-23 Thread Jason DeTiberus
You can also set htpasswd users with the variables here:
https://github.com/openshift/openshift-ansible/blob/9193a58d129716601091b2f3ceb7ca3960a694cb/inventory/byo/hosts.origin.example#L91
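For reference, you can generate the hashed entry locally and paste it into that inventory variable. A hedged sketch (the variable name `openshift_master_htpasswd_users` and the hash below are assumptions/placeholders):

```
# Print an htpasswd entry without writing a file (-n prints to stdout, -b takes the password inline)
htpasswd -nb admin changeme
# -> admin:$apr1$placeholderhash

# Then, assuming the inventory variable is openshift_master_htpasswd_users:
# openshift_master_htpasswd_users={'admin': '$apr1$placeholderhash'}
```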
On Jun 23, 2016 10:44 AM, "Olaf Radicke"  wrote:

> Yes, thank you Den. All is fine now. A restart of the master is not needed.
>
> Olaf
>
> On 06/23/2016 11:19 AM, Den Cowboy wrote:
>
>> You have to go into the folder that contains your htpasswd file and create a user:
>> htpasswd htpasswd admin
>> prompt for password: 
>>
>> User is created (don't really know if you have to restart your master).
>> To make your user cluster-admin
>>
>> $ oc login -u system:admin (authenticates with admin.kubeconfig)
>> $ oadm policy add-cluster-role-to-user cluster-admin admin (if admin is
>> your user)
>>
>>
>> To: users@lists.openshift.redhat.com
>>> From: o.radi...@meteocontrol.de
>>> Subject: Web Console default password
>>> Date: Thu, 23 Jun 2016 10:06:40 +0200
>>>
>>> Hi,
>>>
>>> I have a second basic question: I can't find a default password in the online
>>> documentation for the first login to the Web Console.
>>>
>>> I enter this in my playbook:
>>>
>>>
>>>  snip 
>>> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
>>> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
>>> 'filename': '/etc/origin/master/htpasswd'}]
>>>  snap -
>>>
>>> But the /etc/origin/master/htpasswd file is empty. Do I have to create the
>>> first entry myself? With...
>>>
>>>  snip 
>>> htpasswd /etc/origin/master/htpasswd admin
>>>  snap 
>>>
>>> Is this right?
>>>
>>> Thank you,
>>>
>>> Olaf Radicke
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Creating from a template: get parameters from a file

2016-06-23 Thread Luke Meyer
`oc process -v` and `oc new-app -p` work exactly the same, both being
implemented the same. You can specify multiple of either. I thought there
was supposed to be a way to escape commas but I can't find it now.

FWIW you can specify newlines - anything, really, except a comma - in
parameters.

However, have you considered using a Secret or ConfigMap to supply the
parameters? It's easy to put strings and files in those with oc create
secret|configmap. If they're only needed at runtime, not for the actual
template, that seems simplest.
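For example, a minimal sketch of both approaches (template file and parameter names are placeholders):

```
# Pass several parameters explicitly when processing a template
oc process -f mytemplate.yaml -v DB_USER=app -v DB_PASSWORD=s3cret | oc create -f -

# Or keep runtime-only values out of the template entirely
oc create secret generic app-secrets --from-file=settings.conf=./settings.conf
oc create configmap app-config --from-literal=LOG_LEVEL=debug
```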

On Fri, Jun 17, 2016 at 6:07 PM, Clayton Coleman 
wrote:

> The -v flag needs to be fixed for sure (splitting flag values is bad).
>
> New-app should support both -f FILE and -p (which you can specify multiple
> -p, one for each param).
>
> Do you have any templates that require new lines?
>
> On Jun 17, 2016, at 5:55 PM, Alex Wauck  wrote:
>
> I need to create services from a template that has a lot of parameters.
> In addition to having a lot of parameters, it has parameters with values
> containing commas, which does not play well with the -v flag for oc
> process.  Is there any way to make oc process get the parameter values from
> a file?  I'm currently tediously copy/pasting the values into the web UI,
> which is not a good solution.
>
> --
>
> Alex Wauck // DevOps Engineer
> *E X O S I T E*
> *www.exosite.com *
> Making Machines More Human.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: What is the consequence if I switch from ovs-subnet to ovs-multitenant on a production cluster?

2016-06-23 Thread Alex Wauck
We made the change, and it appears to have gone smoothly.  Nagios didn't
scream at me at any point during the process, so there must not have been
significant downtime.

On Thu, Jun 23, 2016 at 10:08 AM, Alex Wauck  wrote:

> We're planning to do this to our production cluster today.  I'll report in
> once we're done.
>
> On Thu, Jun 23, 2016 at 8:57 AM, Philippe Lafoucrière <
> philippe.lafoucri...@tech-angels.com> wrote:
>
>> Thanks!
>> We would love some feedback from people having done this before.
>> We have a test cluster, with snapshots, but sometimes it's all about the
>> details, and something could fail after a while :)
>>
>> ​
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: All my image streams have a bad docker registry IP, where is my mistake?

2016-06-23 Thread Andy Goldstein
https://docs.openshift.org/latest/install_config/install/docker_registry.html#maintaining-the-registry-ip-address

On Thu, Jun 23, 2016 at 12:46 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

>
> On Thu, Jun 23, 2016 at 12:37 PM, Clayton Coleman 
> wrote:
>
>> Did you delete and recreate your docker registry?
>>
>>
> yes, several times.
> And we can't find any clue as to where this IP is coming from.
> We have grepped all the files, searched in etcd, and found nothing.
> It's a mystery :)
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: All my image streams have a bad docker registry IP, where is my mistake?

2016-06-23 Thread Clayton Coleman
The IP is cached by the master when you install the registry.  If you
delete the service, you'll need to restart your masters.  The values in
etcd will be wrong though, so you'll either want to recreate the service
using the original service ip (spec.clusterIP) or expect your images to
need to be pushed again.  1.3.0-alpha.3 will include a migrator for fixups
like this.  At some point soon we'll move to DNS for the registry but that
requires node setup.

Generally, don't delete your registry service, just recreate the DC.
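A hedged sketch of the "recreate with the original IP" option (assumes you still know the old spec.clusterIP; the file name is arbitrary):

```
# Export the current registry service and pin it back to the original address,
# then recreate it and restart the masters so their cached value is refreshed.
oc get svc docker-registry -n default -o yaml > docker-registry-svc.yaml
# edit docker-registry-svc.yaml: set spec.clusterIP to the original IP and drop resourceVersion/uid
oc delete svc docker-registry -n default
oc create -f docker-registry-svc.yaml
systemctl restart origin-master
```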

On Jun 23, 2016, at 12:47 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:


On Thu, Jun 23, 2016 at 12:37 PM, Clayton Coleman 
wrote:

> Did you delete and recreate your docker registry?
>
>
yes, several times.
And we can't find any clue as to where this IP is coming from.
We have grepped all the files, searched in etcd, and found nothing.
It's a mystery :)
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: All my image streams have a bad docker registry IP, where is my mistake?

2016-06-23 Thread Philippe Lafoucrière
On Thu, Jun 23, 2016 at 12:37 PM, Clayton Coleman 
wrote:

> Did you delete and recreate your docker registry?
>
>
yes, several times.
And we can't find any clue as to where this IP is coming from.
We have grepped all the files, searched in etcd, and found nothing.
It's a mystery :)
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: All my image streams have a bad docker registry IP, where is my mistake?

2016-06-23 Thread Stéphane Klein
Yes, many time already.

2016-06-23 18:37 GMT+02:00 Clayton Coleman :

> Did you delete and recreate your docker registry?
>
> On Thu, Jun 23, 2016 at 12:34 PM, Stéphane Klein <
> cont...@stephane-klein.info> wrote:
>
>> Hi,
>>
>> I have this configuration:
>>
>> ```
>>  -bash-4.2# oc status
>>
>> In project default on server https://...
>>
>> svc/docker-registry - 172.30.75.178:5000
>>   dc/docker-registry deploys
>> docker.io/openshift/origin-docker-registry:v1.2.0
>> deployment #1 deployed about an hour ago - 1 pod
>> ```
>>
>> I've this imagestream configuration:
>>
>> ```
>> # cat /tmp/is.yaml
>> apiVersion: v1
>> items:
>> - apiVersion: v1
>>   kind: ImageStream
>>   metadata:
>> labels:
>>   name: test-is
>> name: test-is
>>   spec: {}
>>   status:
>> dockerImageRepository: debian
>> kind: List
>> metadata: {}
>> ```
>>
>> I create this ImageStream:
>>
>> ```
>> # oc create -f /tmp/is.yaml
>> ```
>>
>> Next, when I look this ImageStream I see:
>>
>> ```
>> # oc get is
>> NAME DOCKER REPO
>> TAGS  UPDATED
>> test-is  172.30.218.93:5000/tech-angels-slots-site/test-is
>> ```
>>
>> and the IP address is wrong; it's not my docker-registry IP: 172.30.75.178
>>
>> Where is my mistake? How can I debug this?
>>
>> Best regards,
>> Stéphane
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>


-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: logs aggregation

2016-06-23 Thread Luke Meyer
Origin docs are at
https://docs.openshift.org/latest/install_config/aggregate_logging.html --
though
https://github.com/openshift/origin-aggregated-logging/blob/master/deployer/README.md
refers to some recent usability improvements that haven't made their way
into the official docs yet.

There is some integration with the console UI (
https://docs.openshift.org/latest/install_config/aggregate_logging.html#kibana)
but I'm not sure it does exactly what you asked. However you can fairly
easily have Kibana pull up all the logs by pod label, which you could
choose to be the label your DC/RC uses to identify its pods. This would
include pods that no longer exist and containers that have been restarted
and lost historical logs.
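As a rough illustration only (the exact field names depend on the fluentd/Elasticsearch mapping shipped with the deployer, so treat them as assumptions), a Kibana search along these lines pulls together logs from every pod carrying a given label:

```
kubernetes.namespace_name:"myproject" AND kubernetes.labels.deploymentconfig:"myapp"
```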

On Wed, Jun 22, 2016 at 10:14 AM, Rich Megginson 
wrote:

> On 06/22/2016 07:05 AM, Philippe Lafoucrière wrote:
>
> Hi,
>
> Something would be very useful in the console: logs from all replicas
> aggregated in a single page.
> This is particularly useful when several web servers are serving the same
> site, and we need to debug something (à la docker-compose).
> We could do that with the latest graylog, but directly in the console
> would be a killer feature.
>
>
> By "console" do you mean the origin UI console?  If so, how would you do
> that with graylog?
> Have you tried using
> https://github.com/openshift/origin-aggregated-logging ?  This doesn't
> integrate with the origin UI console, but it does provide Kibana.
>
>
> Thanks!
> Philippe
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: All my image streams have a bad docker registry IP, where is my mistake?

2016-06-23 Thread Clayton Coleman
Did you delete and recreate your docker registry?

On Thu, Jun 23, 2016 at 12:34 PM, Stéphane Klein <
cont...@stephane-klein.info> wrote:

> Hi,
>
> I have this configuration:
>
> ```
>  -bash-4.2# oc status
>
> In project default on server https://...
>
> svc/docker-registry - 172.30.75.178:5000
>   dc/docker-registry deploys
> docker.io/openshift/origin-docker-registry:v1.2.0
> deployment #1 deployed about an hour ago - 1 pod
> ```
>
> I've this imagestream configuration:
>
> ```
> # cat /tmp/is.yaml
> apiVersion: v1
> items:
> - apiVersion: v1
>   kind: ImageStream
>   metadata:
> labels:
>   name: test-is
> name: test-is
>   spec: {}
>   status:
> dockerImageRepository: debian
> kind: List
> metadata: {}
> ```
>
> I create this ImageStream:
>
> ```
> # oc create -f /tmp/is.yaml
> ```
>
> Next, when I look this ImageStream I see:
>
> ```
> # oc get is
> NAME DOCKER REPO
> TAGS  UPDATED
> test-is  172.30.218.93:5000/tech-angels-slots-site/test-is
> ```
>
> and the IP address is wrong; it's not my docker-registry IP: 172.30.75.178
>
> Where is my mistake? How can I debug this?
>
> Best regards,
> Stéphane
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


All my image streams have a bad docker registry IP, where is my mistake?

2016-06-23 Thread Stéphane Klein
Hi,

I have this configuration:

```
 -bash-4.2# oc status

In project default on server https://...

svc/docker-registry - 172.30.75.178:5000
  dc/docker-registry deploys
docker.io/openshift/origin-docker-registry:v1.2.0
deployment #1 deployed about an hour ago - 1 pod
```

I've this imagestream configuration:

```
# cat /tmp/is.yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ImageStream
  metadata:
labels:
  name: test-is
name: test-is
  spec: {}
  status:
dockerImageRepository: debian
kind: List
metadata: {}
```

I create this ImageStream:

```
# oc create -f /tmp/is.yaml
```

Next, when I look this ImageStream I see:

```
# oc get is
NAME DOCKER REPO
TAGS  UPDATED
test-is  172.30.218.93:5000/tech-angels-slots-site/test-is
```

and the IP address is wrong; it's not my docker-registry IP: 172.30.75.178

Where is my mistake? How can I debug this?
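For what it's worth, the two values can be compared like this (a sketch; output formats assumed):

```
# Compare the registry service's current IP with what the image stream recorded
oc get svc docker-registry -n default -o jsonpath='{.spec.clusterIP}{"\n"}'
oc get is test-is -o jsonpath='{.status.dockerImageRepository}{"\n"}'
# If they differ, the master may still be using a cached or stale registry address
# (see the replies in this thread about restarting masters after recreating the registry).
```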

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Bad CentOS packages

2016-06-23 Thread Troy Dawson
Hi Alex,
We're still working on our CentOS workflow / testing.  The tests that
let this through didn't check the version the binary thought it was
running.  That test is being updated.  We'd much rather find and fix
an issue before we push it all the way out, due to this delay of
pushing to a released repo.

If possible, you could install the fixed/updated package from the
-testing repo via yum

yum --enablerepo=centos-openshift-origin-testing update "*origin*"

It's not ideal, but it's currently the only workaround I have.
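For anyone verifying which build they actually ended up with, comparing the packaged version against what the binary reports is the quick check (this is the mismatch described in the original report below):

```
rpm -q origin
openshift version
```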


On Thu, Jun 23, 2016 at 10:55 AM, Alex Wauck  wrote:
> Whoops, I didn't see the rest of the thread "define openshift origin version
> (stable 1.2.0) for Ansible install" before sending this.  I guess the fix is
> on the way.  It's still kind of annoying that this is still failing a day
> later.
>
> On Thu, Jun 23, 2016 at 10:54 AM, Alex Wauck  wrote:
>>
>> The current latest packages in the CentOS repository (as installed by
>> openshift-ansible) are 1.2.0-2.el7.  The version of OpenShift in these
>> packages is actually v1.2.0-rc1-13-g2e62fab.  This causes OpenShift to
>> attempt to download an origin-pod image with that tag, which does not exist.
>> This prevents all pods from starting.  The repository does not appear to
>> contain the last known good packages, which are 1.2.0-1.el7.
>>
>> As a result, I now have a cluster with 5 good nodes and one bad node that
>> I can't fix.
>>
>> Here is the repo config:
>>
>> [centos-openshift-origin]
>> name=CentOS OpenShift Origin
>> baseurl=http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
>> enabled=1
>> gpgcheck=1
>> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
>>
>> [centos-openshift-origin-testing]
>> name=CentOS OpenShift Origin Testing
>> baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin/
>> enabled=0
>> gpgcheck=0
>> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
>>
>> [centos-openshift-origin-debuginfo]
>> name=CentOS OpenShift Origin DebugInfo
>> baseurl=http://debuginfo.centos.org/centos/7/paas/x86_64/
>> enabled=0
>> gpgcheck=1
>> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
>>
>> [centos-openshift-origin-source]
>> name=CentOS OpenShift Origin Source
>> baseurl=http://vault.centos.org/centos/7/paas/Source/openshift-origin/
>> enabled=0
>> gpgcheck=1
>> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
>>
>> --
>>
>> Alex Wauck // DevOps Engineer
>>
>> E X O S I T E
>> www.exosite.com
>>
>> Making Machines More Human.
>
>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> E X O S I T E
> www.exosite.com
>
> Making Machines More Human.
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Bad CentOS packages

2016-06-23 Thread Alex Wauck
Whoops, I didn't see the rest of the thread "define openshift origin
version (stable 1.2.0) for Ansible install" before sending this.  I guess
the fix is on the way.  It's still kind of annoying that this is still
failing a day later.

On Thu, Jun 23, 2016 at 10:54 AM, Alex Wauck  wrote:

> The current latest packages in the CentOS repository (as installed by
> openshift-ansible) are 1.2.0-2.el7.  The version of OpenShift in these
> packages is actually v1.2.0-rc1-13-g2e62fab.  This causes OpenShift to
> attempt to download an origin-pod image with that tag, which does not
> exist.  This prevents all pods from starting.  The repository does not
> appear to contain the last known good packages, which are 1.2.0-1.el7.
>
> As a result, I now have a cluster with 5 good nodes and one bad node that
> I can't fix.
>
> Here is the repo config:
>
> [centos-openshift-origin]
> name=CentOS OpenShift Origin
> baseurl=http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
> enabled=1
> gpgcheck=1
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
>
> [centos-openshift-origin-testing]
> name=CentOS OpenShift Origin Testing
> baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin/
> enabled=0
> gpgcheck=0
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
>
> [centos-openshift-origin-debuginfo]
> name=CentOS OpenShift Origin DebugInfo
> baseurl=http://debuginfo.centos.org/centos/7/paas/x86_64/
> enabled=0
> gpgcheck=1
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
>
> [centos-openshift-origin-source]
> name=CentOS OpenShift Origin Source
> baseurl=http://vault.centos.org/centos/7/paas/Source/openshift-origin/
> enabled=0
> gpgcheck=1
> gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Bad CentOS packages

2016-06-23 Thread Alex Wauck
The current latest packages in the CentOS repository (as installed by
openshift-ansible) are 1.2.0-2.el7.  The version of OpenShift in these
packages is actually v1.2.0-rc1-13-g2e62fab.  This causes OpenShift to
attempt to download an origin-pod image with that tag, which does not
exist.  This prevents all pods from starting.  The repository does not
appear to contain the last known good packages, which are 1.2.0-1.el7.

As a result, I now have a cluster with 5 good nodes and one bad node that I
can't fix.

Here is the repo config:

[centos-openshift-origin]
name=CentOS OpenShift Origin
baseurl=http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin-testing]
name=CentOS OpenShift Origin Testing
baseurl=http://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin/
enabled=0
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin-debuginfo]
name=CentOS OpenShift Origin DebugInfo
baseurl=http://debuginfo.centos.org/centos/7/paas/x86_64/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

[centos-openshift-origin-source]
name=CentOS OpenShift Origin Source
baseurl=http://vault.centos.org/centos/7/paas/Source/openshift-origin/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-PaaS

-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: What is the consequence if I switch from ovs-subnet to ovs-multitenant on a production cluster?

2016-06-23 Thread Alex Wauck
We're planning to do this to our production cluster today.  I'll report in
once we're done.

On Thu, Jun 23, 2016 at 8:57 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Thanks!
> We would love some feedback from people having done this before.
> We have a test cluster, with snapshots, but sometimes it's all about the
> details, and something could fail after a while :)
>
> ​
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Web Console default password

2016-06-23 Thread Olaf Radicke

Yes, thank you Den. All is fine now. A restart of the master is not needed.

Olaf

On 06/23/2016 11:19 AM, Den Cowboy wrote:

You have to go into the folder that contains your htpasswd file and create a user:
htpasswd htpasswd admin
prompt for password: 

User is created (don't really know if you have to restart your master).
To make your user cluster-admin

$ oc login -u system:admin (authenticates with admin.kubeconfig)
$ oadm policy add-cluster-role-to-user cluster-admin admin (if admin is
your user)



To: users@lists.openshift.redhat.com
From: o.radi...@meteocontrol.de
Subject: Web Console default password
Date: Thu, 23 Jun 2016 10:06:40 +0200

Hi,

I have a second basic question: I can't find a default password in the online
documentation for the first login to the Web Console.

I enter this in my playbook:


 snip 
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
'filename': '/etc/origin/master/htpasswd'}]
 snap -

But the /etc/origin/master/htpasswd file is empty. Do I have to create the
first entry myself? With...

 snip 
htpasswd /etc/origin/master/htpasswd admin
 snap 

Is this right?

Thank you,

Olaf Radicke

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: What is the consequence if I switch from ovs-subnet to ovs-multitenant on a production cluster?

2016-06-23 Thread Philippe Lafoucrière
Thanks!
We would love some feedback from people having done this before.
We have a test cluster, with snapshots, but sometimes it's all about the
details, and something could fail after a while :)

​
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: What is the consequence if I switch from ovs-subnet to ovs-multitenant on a production cluster?

2016-06-23 Thread Andy Grimm
CC'ing a couple of people who had experience with a change like this
earlier in the week in a real-world environment.  They might be able to
provide some insight.

On Thu, Jun 23, 2016 at 9:24 AM, Scott Dodson  wrote:

> It's not a lot of detail, but this is documented here
>
> https://docs.openshift.org/latest/install_config/configuring_sdn.html#migrating-between-sdn-plugins
>
> On Thu, Jun 23, 2016 at 9:18 AM, Philippe Lafoucrière
>  wrote:
> > @Clayton, any idea on this?
> > Thanks
> >
> > ___
> > users mailing list
> > users@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: What is the consequence if I switch from ovs-subnet to ovs-multitenant on a production cluster?

2016-06-23 Thread Scott Dodson
It's not a lot of detail, but this is documented here
https://docs.openshift.org/latest/install_config/configuring_sdn.html#migrating-between-sdn-plugins
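For a rough idea of what that migration involves, a sketch based on that page (file paths and service names assume an RPM-based Origin install; adjust for containerized setups):

```
# In /etc/origin/master/master-config.yaml on each master:
#   networkConfig:
#     networkPluginName: "redhat/openshift-ovs-multitenant"
# In /etc/origin/node/node-config.yaml on each node:
#   networkPluginName: "redhat/openshift-ovs-multitenant"
# Then restart masters first, nodes afterwards:
systemctl restart origin-master
systemctl restart origin-node
```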

On Thu, Jun 23, 2016 at 9:18 AM, Philippe Lafoucrière
 wrote:
> @Clayton, any idea on this?
> Thanks
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: What is the consequence if I switch from ovs-subnet to ovs-multitenant on a production cluster?

2016-06-23 Thread Philippe Lafoucrière
@Clayton, any idea on this?
Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-23 Thread Den Cowboy
Why are you actually building 1.2.0-4 to get 1.2.0 working instead of downgrading to
(or using the older) origin-1.2.0-1.git.10183.7386b49.el7 like alexwauck?
Because in Ansible I'm able to use
openshift_pkg_version=-1.2.0-1.git.10183.7386b49.el7 but not
openshift_pkg_version=-1.2.0-4.el7

Probably because you said: "This version is still getting signed and pushed 
out.  That takes more time."

Or is this because the version for origin-1.2.0-1.git.10183.7386b49.el7 is:
v1.2.0-1-g7386b49

Which is also a 'bad' version.
So, as far as I understand, we have to wait until origin-1.2.0-4.el7 is available
for our Ansible install?
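Once origin-1.2.0-4.el7 reaches the released repo, the inventory pin would presumably look roughly like this (hedged; the leading '-' follows the openshift_pkg_version format used earlier in this thread):

```
[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_pkg_version=-1.2.0-4.el7
openshift_image_tag=v1.2.0
```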



From: dencow...@hotmail.com
To: tdaw...@redhat.com
Subject: RE: define openshift origin version (stable 1.2.0) for Ansible install
Date: Thu, 23 Jun 2016 11:17:12 +
CC: users@lists.openshift.redhat.com




Can you maybe explain how to use this?
I performed a yum --enablerepo=centos-openshift-origin-testing install origin\*

oc version gives me 
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

But how do I add nodes (using Ansible) and that kind of stuff? After
performing the yum install I have just one master and one node on the same host.
Thanks



> From: tdaw...@redhat.com
> Date: Wed, 22 Jun 2016 17:27:17 -0500
> Subject: Re: define openshift origin version (stable 1.2.0) for Ansible 
> install
> To: alexwa...@exosite.com
> CC: dencow...@hotmail.com; users@lists.openshift.redhat.com
> 
> Yep, seems that my new way of creating the rpms for CentOS got the
> version of the rpm right, but wrong for setting the ldflags, which was
> causing the binary to have a different version.
> 
> At some point in the near future we need to re-evaluate git tags and
> versions in the origin.spec file.  (Why is the rpm spec version
> always 0.0.1 when in reality the version everywhere else is 1.2.0?)
> 
> Worked with Scott to figure out a correct way to consistently build
> the rpms.  In the end, neither of our workflows failed in sneaky ways,
> so I just fixed things manually.  Not something we can do
> consistently, but I really needed to get a working 1.2.0 version out.
> 
> What works:  origin-1.2.0-4.el7
> https://cbs.centos.org/koji/buildinfo?buildID=11349
> 
> You should be able to test it within an hour via
> yum --enablerepo=centos-openshift-origin-testing install origin\*
> 
> This version is still getting signed and pushed out.  That takes more time.
> 
> Sorry for all the problems this has caused.
> 
> Troy
> 
> 
> On Wed, Jun 22, 2016 at 2:57 PM, Alex Wauck  wrote:
> > This seems to be caused by the 1.2.0-2.el7 packages containing the wrong
> > version.  I had a conversation on IRC about this earlier (#openshift), and
> > somebody confirmed it.  I suspect a new release will be available soon.  At
> > any rate, downgrading to 1.2.0-1.el7 worked for us.
> >
> > On Wed, Jun 22, 2016 at 8:55 AM, Den Cowboy  wrote:
> >>
> >> I tried:
> >> [OSEv3:vars]
> >> ansible_ssh_user=root
> >> deployment_type=origin
> >> openshift_pkg_version=-1.2.0
> >> openshift_image_tag=-1.2.0
> >>
> >> But it installed a release candidate and not v1.2.0
> >>
> >> oc v1.2.0-rc1-13-g2e62fab
> >> kubernetes v1.2.0-36-g4a3f9c5
> >>
> >> 
> >> From: dencow...@hotmail.com
> >> To: cont...@stephane-klein.info
> >> Subject: RE: define openshift origin version (stable 1.2.0) for Ansible
> >> install
> >> Date: Wed, 22 Jun 2016 12:51:18 +
> >> CC: users@lists.openshift.redhat.com
> >>
> >>
> >> Thanks for your fast reply
> >> This is the beginning of my playbook:
> >>
> >> [OSEv3:vars]
> >> ansible_ssh_user=root
> >> deployment_type=origin
> >> openshift_pkg_version=v1.2.0
> >> openshift_image_tag=v1.2.0
> >>
> >> But I got an error:
> >> TASK [openshift_master_ca : Install the base package for admin tooling]
> >> 
> >> FAILED! => {"changed": false, "failed": true, "msg": "No Package matching
> >> 'originv1.2.0' found available, installed or updated", "rc": 0, "results":
> >> []}
> >>
> >> 
> >> From: cont...@stephane-klein.info
> >> Date: Wed, 22 Jun 2016 13:53:57 +0200
> >> Subject: Re: define openshift origin version (stable 1.2.0) for Ansible
> >> install
> >> To: dencow...@hotmail.com
> >> CC: users@lists.openshift.redhat.com
> >>
> >> Personally I use this options to fix OpenShift version:
> >>
> >> openshift_pkg_version=v1.2.0
> >> openshift_image_tag=v1.2.0
> >>
> >>
> >> 2016-06-22 13:24 GMT+02:00 Den Cowboy :
> >>
> >> Is it possible to define an origin version in your Ansible install?
> >> At the moment we have so many issues with our newest install (while we had
> >> 1.1.6 pretty stable for some time)
> >> We want to go to a stable 1.2.0
> >>
> >> Our issues:
> >> version = oc v1.2.0-rc1-13-g2e62fab
> >> So images are pulled with tag oc v1.2.0-rc1-13-g2e62fab which doesn't
> >> exist in OpenShift. Okay, we have a workaround by editing the master and
> >> node configs and using 'i--image', but we don't like this approach
> >>
> >> logs on 

RE: define openshift origin version (stable 1.2.0) for Ansible install

2016-06-23 Thread Den Cowboy
Can you maybe explain how to use this?
I performed a yum --enablerepo=centos-openshift-origin-testing install origin\*

oc version gives me 
oc v1.2.0
kubernetes v1.2.0-36-g4a3f9c5

But how do I add nodes (using Ansible) and that kind of stuff? After
performing the yum install I have just one master and one node on the same host.
Thanks



> From: tdaw...@redhat.com
> Date: Wed, 22 Jun 2016 17:27:17 -0500
> Subject: Re: define openshift origin version (stable 1.2.0) for Ansible 
> install
> To: alexwa...@exosite.com
> CC: dencow...@hotmail.com; users@lists.openshift.redhat.com
> 
> Yep, seems that my new way of creating the rpms for CentOS got the
> version of the rpm right, but wrong for setting the ldflags, which was
> causing the binary to have a different version.
> 
> At some point in the near future we need to re-evaluate git tags and
> versions in the origin.spec file.  (Why is the rpm spec version
> always 0.0.1 when in reality the version everywhere else is 1.2.0?)
> 
> Worked with Scott to figure out a correct way to consistently build
> the rpms.  In the end, neither of our workflows failed in sneaky ways,
> so I just fixed things manually.  Not something we can do
> consistently, but I really needed to get a working 1.2.0 version out.
> 
> What works:  origin-1.2.0-4.el7
> https://cbs.centos.org/koji/buildinfo?buildID=11349
> 
> You should be able to test it within an hour via
> yum --enablerepo=centos-openshift-origin-testing install origin\*
> 
> This version is still getting signed and pushed out.  That takes more time.
> 
> Sorry for all the problems this has caused.
> 
> Troy
> 
> 
> On Wed, Jun 22, 2016 at 2:57 PM, Alex Wauck  wrote:
> > This seems to be caused by the 1.2.0-2.el7 packages containing the wrong
> > version.  I had a conversation on IRC about this earlier (#openshift), and
> > somebody confirmed it.  I suspect a new release will be available soon.  At
> > any rate, downgrading to 1.2.0-1.el7 worked for us.
> >
> > On Wed, Jun 22, 2016 at 8:55 AM, Den Cowboy  wrote:
> >>
> >> I tried:
> >> [OSEv3:vars]
> >> ansible_ssh_user=root
> >> deployment_type=origin
> >> openshift_pkg_version=-1.2.0
> >> openshift_image_tag=-1.2.0
> >>
> >> But it installed a release candidate and not v1.2.0
> >>
> >> oc v1.2.0-rc1-13-g2e62fab
> >> kubernetes v1.2.0-36-g4a3f9c5
> >>
> >> 
> >> From: dencow...@hotmail.com
> >> To: cont...@stephane-klein.info
> >> Subject: RE: define openshift origin version (stable 1.2.0) for Ansible
> >> install
> >> Date: Wed, 22 Jun 2016 12:51:18 +
> >> CC: users@lists.openshift.redhat.com
> >>
> >>
> >> Thanks for your fast reply
> >> This is the beginning of my playbook:
> >>
> >> [OSEv3:vars]
> >> ansible_ssh_user=root
> >> deployment_type=origin
> >> openshift_pkg_version=v1.2.0
> >> openshift_image_tag=v1.2.0
> >>
> >> But I got an error:
> >> TASK [openshift_master_ca : Install the base package for admin tooling]
> >> 
> >> FAILED! => {"changed": false, "failed": true, "msg": "No Package matching
> >> 'originv1.2.0' found available, installed or updated", "rc": 0, "results":
> >> []}
> >>
> >> 
> >> From: cont...@stephane-klein.info
> >> Date: Wed, 22 Jun 2016 13:53:57 +0200
> >> Subject: Re: define openshift origin version (stable 1.2.0) for Ansible
> >> install
> >> To: dencow...@hotmail.com
> >> CC: users@lists.openshift.redhat.com
> >>
> >> Personally I use this options to fix OpenShift version:
> >>
> >> openshift_pkg_version=v1.2.0
> >> openshift_image_tag=v1.2.0
> >>
> >>
> >> 2016-06-22 13:24 GMT+02:00 Den Cowboy :
> >>
> >> Is it possible to define an origin version in your Ansible install?
> >> At the moment we have so many issues with our newest install (while we had
> >> 1.1.6 pretty stable for some time)
> >> We want to go to a stable 1.2.0
> >>
> >> Our issues:
> >> version = oc v1.2.0-rc1-13-g2e62fab
> >> So images are pulled with tag oc v1.2.0-rc1-13-g2e62fab which doesn't
> >> exist in OpenShift. Okay, we have a workaround by editing the master and
> >> node configs and using 'i--image', but we don't like this approach
> >>
> >> logs on our nodes:
> >>  level=error msg="Error reading loginuid: open /proc/27182/loginuid: no
> >> such file or directory"
> >> level=error msg="Error reading loginuid: open /proc/27182/loginuid: no
> >> such file or directory"
> >>
> >> We started a mysql instance. We weren't able to use the service name to
> >> connect:
> >> mysql -u test -h mysql -p did NOT work
> >> mysql -u test -h 172.30.x.x (service ip) -p did work..
> >>
> >> So we have too many issues on this version of OpenShift. We've deployed in
> >> a team several times and are pretty confident with the setup and it was
> >> always working fine for us. But now this last weird versions seem really 
> >> bad
> >> for us.
> >>
> >> ___
> >> users mailing list
> >> users@lists.openshift.redhat.com
> >> http://lists.openshift.redhat.com

RE: Web Console default password

2016-06-23 Thread Den Cowboy
You have to go into the folder that contains your htpasswd file and create a user:
htpasswd htpasswd admin
prompt for password: 

User is created (don't really know if you have to restart your master).
To make your user cluster-admin

$ oc login -u system:admin (authenticates with admin.kubeconfig)
$ oadm policy add-cluster-role-to-user cluster-admin admin (if admin is your 
user)
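Putting that together with the playbook snippet quoted below, a minimal sequence would be roughly (hedged; pass -c to htpasswd only if the file does not exist yet):

```
# Add a user to the file referenced by the identity provider
htpasswd /etc/origin/master/htpasswd admin
# Grant that user cluster-admin
oc login -u system:admin
oadm policy add-cluster-role-to-user cluster-admin admin
```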

> To: users@lists.openshift.redhat.com
> From: o.radi...@meteocontrol.de
> Subject: Web Console default password
> Date: Thu, 23 Jun 2016 10:06:40 +0200
> 
> Hi,
> 
> I have a second basic question: I can't find a default password in the online
> documentation for the first login to the Web Console.
> 
> I enter this in my playbook:
> 
> 
>  snip 
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 
> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 
> 'filename': '/etc/origin/master/htpasswd'}]
>  snap -
> 
> But the /etc/origin/master/htpasswd file is empty. Do I have to create the
> first entry myself? With...
> 
>  snip 
> htpasswd /etc/origin/master/htpasswd admin
>  snap 
> 
> Is this right?
> 
> Thank you,
> 
> Olaf Radicke
> 
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Web Console default password

2016-06-23 Thread Olaf Radicke

Hi,

I have a second basic question: I can't find a default password in the online
documentation for the first login to the Web Console.


I enter this in my playbook:


 snip 
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 
'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 
'filename': '/etc/origin/master/htpasswd'}]

 snap -

But the /etc/origin/master/htpasswd file is empty. Do I have to create the
first entry myself? With...


 snip 
htpasswd /etc/origin/master/htpasswd admin
 snap 

Is this right?

Thank you,

Olaf Radicke

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Web Console and Public IP

2016-06-23 Thread Olaf Radicke

Hi,

I have a basic question. Is the IP address of the load balancer host the
public IP or the IP of the master host? And does this IP need to be set as a
variable in the Ansible inventory?
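For reference, here is my rough understanding of how the load balancer shows up in the inventory (a hedged sketch; hostnames are placeholders, and the cluster hostname variables appear to be what end up in the API and console URLs):

```
[OSEv3:vars]
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb.internal.example.com        # address masters/nodes use
openshift_master_cluster_public_hostname=openshift.example.com   # public address users hit

[lb]
lb.internal.example.com

[masters]
master1.example.com
master2.example.com
```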


Thank you,

Olaf Radicke

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users