Re: Cron tasks?

2016-02-23 Thread Mateus Caruccio
FYI, the Dockerfile was wrong in setting suid on /usr/bin/crontab. That
led to the /var/spool/cron/user file being owned by root, thus preventing crond
from reading it.
The working version is at
https://github.com/getupcloud/sti-ruby-extra/blob/master/1.9/Dockerfile#L24-L28

Regards,

*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Tue, Feb 23, 2016 at 9:12 AM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> Hello David.
>
> I've got cron to work as expected by doing this:
>
> 1 - Create an "extra" image and add the necessary packages (cronie
> crontabs nss_wrapper uid_wrapper):
>
> https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/Dockerfile#L18
>
> We need to relax security here, otherwise neither crond nor crontab will
> work, since both are run as regular users:
>
> https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/Dockerfile#L24-L26
>
> 2 - Create a script to activate nss_wrapper and (optionally) uid_wrapper:
>
> https://github.com/getupcloud/sti-ruby-extra/blob/master/1.9/nss-wrapper-setup
>
> libuid_wrapper is required by /usr/bin/crontab so that it believes it is
> running as root.
> For crond to start, it needs to have the current user in your passwd.
> You can achieve this by using nss_wrapper with a "fake" passwd file [1] and
> instructing everything to use it [2]
>
> 3 - From your repo's  (.sti|.s2i)/bin/run, "source" the wrapper and start
> crond.
>
> if [ -x ${STI_SCRIPTS_PATH}/nss-wrapper-setup ]; then
> source ${STI_SCRIPTS_PATH}/nss-wrapper-setup -u
> crond-start
> fi
>
>
>
> I chose to run it from the same code container so it can reach the code
> itself.
>
> Feedback is very much appreciated.
>
> Best Regards.
>
> [1]
> https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/nss-wrapper-setup#L22-L27
> [2]
> https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/nss-wrapper-setup#L29-L31
>
>
> *Mateus Caruccio*
> Master of Puppets
> +55 (51) 8298.0026
> gtalk: mateus.caruc...@getupcloud.com
> twitter: @MateusCaruccio
> This message and any attachment are solely for the intended
> recipient and may contain confidential or privileged information
> and it can not be forwarded or shared without permission.
> Thank you!
>
> On Tue, Feb 23, 2016 at 7:03 AM, Maciej Szulik 
> wrote:
>
>>
>> On 02/23/2016 10:41 AM, David Strejc wrote:
>>
>>> Does anyone have any experience with cron tasks as they were in OS v2?
>>>
>>
>> v3 does not have cron support yet, there was a proposal already accepted
>> in k8s. In the following weeks/months I'll be working on implementing
>> such functionality.
>>
>> I would like to let our developers maintain cron tasks through git .s2i
>>> folder as it was in v2.
>>> Is it good idea to build cron into docker image and link crontab to file
>>> inside .s2i?
>>>
>>
>> I'm not sure this will work as you expect. You'd still need a separate
>> mechanism that will actually trigger build, or other action when the
>> right time comes.
>>
>> What I can suggest as a temporary solution is writing/deploying some
>> kind of cron scheduler inside of OpenShift.
>>
>> Maciej
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Aggregated Logging Template Fails

2016-02-23 Thread Tim Moor
Thanks for the quick reply Eric,

The template appears to be using openshift/origin-logging-deployer:v1.1

I’ll pull the latest version of the repo and give that a go.

Thanks

From: Eric Wolinetz <ewoli...@redhat.com>
Date: Wednesday, 24 February 2016 at 12:29 PM
To: Tim Moor <tim.m...@spring.co.nz>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Aggregated Logging Template Fails

Hi Tim,

What version of the logging-deployment image is your logging-deployer.yaml file 
using?
There was a backwards compatibility issue between the previous
support-pre-template and the version of origin you're using -- it was
specifying a target port for the Elasticsearch headless services, which is now
validated differently.

The related commit is here [0].

On Tue, Feb 23, 2016 at 5:18 PM, Tim Moor <tim.m...@spring.co.nz> wrote:
Hey list, having a few issues with the Aggregated Logging template provided as 
part of the default install.

When running the deployer it’s failing with the following errors. I’ve also 
attached a script I’ve been using to redeploy on failure as part of debugging.

Architecture
- aklkvm019.corp   kubernetes.io/hostname=,region=infra,zone=default     Ready   7d
- aklkvm020.corp   kubernetes.io/hostname=,region=primary,zone=east      Ready   7d
- aklkvm021.corp   kubernetes.io/hostname=,region=primary,zone=west      Ready   7d


Software Versions
- origin-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
- origin-node-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
- origin-sdn-ovs-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
- origin-clients-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
- origin-master-1.1.3-0.git.0.8edc1be.el7.centos.x86_64



Errors
- Error from server: Service "logging-es-cluster" is invalid: 
spec.ports[0].port: Invalid value: 9300: must be equal to targetPort when 
clusterIP = None
- Error from server: Service "logging-es-ops-cluster" is invalid: 
spec.ports[0].port: Invalid value: 9300: must be equal to targetPort when 
clusterIP = None


Any help greatly appreciated.

Tim

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users

[0] 
https://github.com/openshift/origin-aggregated-logging/commit/a0dc8655af2a79957fdfe7afb40fd7cffc4cf2a2
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Mark Turansky
Hm.  This is the worst good news for a developer.  I'm glad it works, but I
don't know why; at least the Windows reboot trick worked again.

On Tue, Feb 23, 2016 at 5:29 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Hmm, I had to restart origin-node on the scheduled node, and now the pod
> is running.​
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Aggregated Logging Template Fails

2016-02-23 Thread Jordan Liggitt
Fwiw, that validation is being reverted to maintain compatibility [0][1],
but you should still update to the corrected template. It's not correct to
specify a different targetPort for a headless service, and if you do, it
will just be ignored.

[0] https://github.com/openshift/origin/pull/7495
[1] https://github.com/kubernetes/kubernetes/pull/21680


On Feb 23, 2016, at 6:30 PM, Eric Wolinetz  wrote:

Hi Tim,

What version of the logging-deployment image is your logging-deployer.yaml
file using?
There was a backwards compatibility issue between the previous
support-pre-template and the version of origin you're using -- it was
specifying a target port for the Elasticsearch headless services, which is
now validated differently.

The related commit is here [0].

On Tue, Feb 23, 2016 at 5:18 PM, Tim Moor  wrote:

> Hey list, having a few issues with the Aggregated Logging template
> provided as part of the default install.
>
> When running the deployer it’s failing with the following errors. I’ve
> also attached a script I’ve been using to redeploy on failure as part of
> debugging.
>
> Architecture
> - aklkvm019.corp   kubernetes.io/hostname=,region=infra,zone=default     Ready   7d
> - aklkvm020.corp   kubernetes.io/hostname=,region=primary,zone=east      Ready   7d
> - aklkvm021.corp   kubernetes.io/hostname=,region=primary,zone=west      Ready   7d
>
>
> Software Versions
> - origin-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-node-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-sdn-ovs-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-clients-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-master-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
>
>
>
> Errors
> - Error from server: Service "logging-es-cluster" is invalid:
> spec.ports[0].port: Invalid value: 9300: must be equal to targetPort when
> clusterIP = None
> - Error from server: Service "logging-es-ops-cluster" is invalid:
> spec.ports[0].port: Invalid value: 9300: must be equal to targetPort when
> clusterIP = None
>
>
> Any help greatly appreciated.
>
> Tim
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[0]
https://github.com/openshift/origin-aggregated-logging/commit/a0dc8655af2a79957fdfe7afb40fd7cffc4cf2a2


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Aggregated Logging Template Fails

2016-02-23 Thread Eric Wolinetz
Hi Tim,

What version of the logging-deployment image is your logging-deployer.yaml
file using?
There was a backwards compatibility issue between the previous
support-pre-template and the version of origin you're using -- it was
specifying a target port for the Elasticsearch headless services, which is
now validated differently.

The related commit is here [0].
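A quick way to check locally (just a sketch; it assumes the template file is the logging-deployer.yaml mentioned above):

```
grep -n "origin-logging-deployer" logging-deployer.yaml
```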

On Tue, Feb 23, 2016 at 5:18 PM, Tim Moor  wrote:

> Hey list, having a few issues with the Aggregated Logging template
> provided as part of the default install.
>
> When running the deployer it’s failing with the following errors. I’ve
> also attached a script I’ve been using to redeploy on failure as part of
> debugging.
>
> Architecture
> - aklkvm019.corp   kubernetes.io/hostname=,region=infra,zone=default     Ready   7d
> - aklkvm020.corp   kubernetes.io/hostname=,region=primary,zone=east      Ready   7d
> - aklkvm021.corp   kubernetes.io/hostname=,region=primary,zone=west      Ready   7d
>
>
> Software Versions
> - origin-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-node-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-sdn-ovs-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-clients-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
> - origin-master-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
>
>
>
> Errors
> - Error from server: Service "logging-es-cluster" is invalid:
> spec.ports[0].port: Invalid value: 9300: must be equal to targetPort when
> clusterIP = None
> - Error from server: Service "logging-es-ops-cluster" is invalid:
> spec.ports[0].port: Invalid value: 9300: must be equal to targetPort when
> clusterIP = None
>
>
> Any help greatly appreciated.
>
> Tim
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users


[0]
https://github.com/openshift/origin-aggregated-logging/commit/a0dc8655af2a79957fdfe7afb40fd7cffc4cf2a2
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Aggregated Logging Template Fails

2016-02-23 Thread Tim Moor
Hey list, having a few issues with the Aggregated Logging template provided as 
part of the default install. 

When running the deployer it’s failing with the following errors. I’ve also 
attached a script I’ve been using to redeploy on failure as part of debugging.

Architecture
- aklkvm019.corp   kubernetes.io/hostname=,region=infra,zone=default     Ready   7d
- aklkvm020.corp   kubernetes.io/hostname=,region=primary,zone=east      Ready   7d
- aklkvm021.corp   kubernetes.io/hostname=,region=primary,zone=west      Ready   7d


Software Versions
- origin-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
- origin-node-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
- origin-sdn-ovs-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
- origin-clients-1.1.3-0.git.0.8edc1be.el7.centos.x86_64
- origin-master-1.1.3-0.git.0.8edc1be.el7.centos.x86_64


 
Errors
- Error from server: Service "logging-es-cluster" is invalid: 
spec.ports[0].port: Invalid value: 9300: must be equal to targetPort when 
clusterIP = None
- Error from server: Service "logging-es-ops-cluster" is invalid: 
spec.ports[0].port: Invalid value: 9300: must be equal to targetPort when 
clusterIP = None


Any help greatly appreciated. 

Tim
#!/bin/bash

# create the logging project if it doesn't exist yet
PROJECT_EXISTS=$(oc get projects | grep -c logging)
if [[ $PROJECT_EXISTS -gt 0 ]]; then
    echo "Project logging already exists"
else
    oadm new-project logging
fi

# switch to the logging project if we aren't already in it
PROJECT_LOGGING=$(oc project | grep -c logging)
if [[ $PROJECT_LOGGING -gt 0 ]]; then
    echo "We are in project logging"
else
    oc project logging
fi

# remove any leftover deployer secret before re-creating it
LOGGING_SECRET=$(oc get secrets | grep -c logging-deployer)
if [[ $LOGGING_SECRET -gt 0 ]]; then
    oc delete secret logging-deployer
fi

oc secrets new logging-deployer ca.crt=/etc/origin/master/ca.crt \
    ca.key=/etc/origin/master/ca.key

# remove existing service accounts so they get re-created
SERVICE_ACCOUNTS=$(oc get serviceaccounts | wc -l)
if [[ $SERVICE_ACCOUNTS -gt 1 ]]; then    # more than just the header line
    oc delete serviceaccounts --all
fi

oc create -f - 

Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Philippe Lafoucrière
Hmm, I had to restart origin-node on the scheduled node, and now the pod is
running.​
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Philippe Lafoucrière
>
> For now, yes.  We're looking at ways to make dynamic provisioning more
> widely available, even outside of a cloud environment.  We'd prefer to not
> implement more recyclers and instead make more provisioners.
>

Ok thanks, the PV is Bound again:

status:
  accessModes:
  - ReadWriteOnce
  - ReadWriteMany
  capacity:
storage: 20Gi
  phase: Bound

Anyway, the pods seem to be waiting for it:

NAME                         READY     STATUS             RESTARTS   AGE
hawkular-cassandra-1-7im2u   0/1       Pending            0          42m
hawkular-metrics-n4iv3       0/1       CrashLoopBackOff   9          42m
heapster-m66tt               0/1       Pending            0          42m



And describe doesn't give more info:


Name:   hawkular-cassandra-1-7im2u
Namespace:  openshift-infra
Image(s):   docker.io/openshift/origin-metrics-cassandra:latest
Node:   node-1
Labels:
metrics-infra=hawkular-cassandra,name=hawkular-cassandra-1,type=hawkular-cassandra
Status: Pending
Reason:
Message:
IP:
Controllers:ReplicationController/hawkular-cassandra-1
Containers:
  hawkular-cassandra-1:
Container ID:
Image:  docker.io/openshift/origin-metrics-cassandra:latest
Image ID:
Command:
  /opt/apache-cassandra/bin/cassandra-docker.sh
  --cluster_name=hawkular-metrics
  --data_volume=/cassandra_data
  --internode_encryption=all
  --require_node_auth=true
  --enable_client_encryption=true
  --require_client_auth=true
  --keystore_file=/secret/cassandra.keystore
  --keystore_password_file=/secret/cassandra.keystore.password
  --truststore_file=/secret/cassandra.truststore
  --truststore_password_file=/secret/cassandra.truststore.password
  --cassandra_pem_file=/secret/cassandra.pem
QoS Tier:
  cpu:  BestEffort
  memory:   BestEffort
State:  Waiting
Ready:  False
Restart Count:  0
Environment Variables:
  CASSANDRA_MASTER: true
  POD_NAMESPACE:openshift-infra (v1:metadata.namespace)
Volumes:
  cassandra-data:
Type:   PersistentVolumeClaim (a reference to a
PersistentVolumeClaim in the same namespace)
ClaimName:  metrics-cassandra-1
ReadOnly:   false
  hawkular-cassandra-secrets:
Type:   Secret (a secret that should populate this volume)
SecretName: hawkular-cassandra-secrets
  cassandra-token-sciym:
Type:   Secret (a secret that should populate this volume)
SecretName: cassandra-token-sciym
Events:
  FirstSeen   LastSeen   Count   From                   SubobjectPath   Type     Reason      Message
  ---------   --------   -----   ----                   -------------   ----     ------      -------
  43m         43m        1       {default-scheduler }                   Normal   Scheduled   Successfully assigned hawkular-cassandra-1-7im2u to node-1


Thanks
Philippe
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Create app with image from own docker registry on OpenShift 3.1

2016-02-23 Thread Andy Goldstein
You need to call it like this: oc new-app --insecure-registry 
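For example, using the image reference from your message (a sketch; the registry host is elided here just as in the thread):

```
oc new-app --insecure-registry ec2-xxx:5000/test/image:1
```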

On Tue, Feb 23, 2016 at 6:20 AM, Den Cowboy  wrote:

> I've added it + restarted docker:
> INSECURE_REGISTRY='--insecure-registry
> ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com'
>
> I'm able to perform a docker login and pull the image manually but
>
> oc new-app ec2-xxx:5000/test/image:1 or /test/image
>
> error: can't look up Docker image "ec2-xxx:5000/dbm/ponds-ui-nodejs:83":
> Internal error occurred: Get https://ec2-xxx:5000/v2/: x509: certificate
> signed by unknown authority
> error: no match for "ec2-xxx:5000/test/image:1"
>
> --
> From: bpar...@redhat.com
> Date: Thu, 18 Feb 2016 09:48:32 -0500
> Subject: Re: Create app with image from own docker registry on OpenShift
> 3.1
> To: dencow...@hotmail.com; users@lists.openshift.redhat.com
>
>
> INSECURE_REGISTRY is needed because your registry is using a self-signed
> cert, whether it is secured or not.
>
>
> On Thu, Feb 18, 2016 at 4:59 AM, Den Cowboy  wrote:
>
> No didn't do that. I'm using a secure registry for OpenShift. So the tag
> was not on insecure.
>
> --
> From: bpar...@redhat.com
> Date: Wed, 17 Feb 2016 10:53:48 -0500
> Subject: Re: Create app with image from own docker registry on OpenShift
> 3.1
> To: dencow...@hotmail.com
> CC: users@lists.openshift.redhat.com
>
>
> is ec2-xxx listed as an insecure registry in your docker daemon's
> configuration?
>
> /etc/sysconfig/docker
> INSECURE_REGISTRY='--insecure-registry ec2-'
>
> I believe that is needed for docker to communicate with registries that
> use self-signed certs.
>
> (you'll need to restart the docker daemon after adding that setting)
>
>
>
> On Wed, Feb 17, 2016 at 8:15 AM, Den Cowboy  wrote:
>
> I have my own docker registry secured with a selfsigned certificate. On
> other servers, I'm able to login on the registry and pull/push images from
> it. So that seems to work fine.
> But when I want to create an app from the image using OpenShift, it does
> not seem to work:
>
> oc new-app ec2-xxx:5000/test/image1
> error: can't look up Docker image "ec2-xx/test/image1": Internal error 
> occurred: Get https://ec2-xxx:5000/v2/: x509: certificate signed by unknown 
> authority
> error: no match for "ec2-xxx:5000/test/image1"
>
> What could be the issue? I'm able to login in the registry and pull the
> image manually.
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
>
>
> --
> Ben Parees | OpenShift
>
>
>
>
> --
> Ben Parees | OpenShift
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: All ports seem to be blocked after ansible install

2016-02-23 Thread Andy Goldstein
Just 1 place is sufficient - thanks!

On Mon, Feb 22, 2016 at 11:13 PM, Dean Peterson 
wrote:

> Oh, I opened the bug on bugzilla but I can open it on github too:
> https://bugzilla.redhat.com/show_bug.cgi?id=1310968
>
>
> On Mon, Feb 22, 2016 at 9:37 PM, Clayton Coleman 
> wrote:
>
>> Can you open an issue on GitHub with the results of `ip addr list` and
>> `ip route`?  It sounds like the SDN configuration may be disabling
>> your network, or some other unexpected interaction with the host is
>> blocking traffic.
>>
>> On Mon, Feb 22, 2016 at 1:23 PM, Dean Peterson 
>> wrote:
>> > Any ideas how the ansible installer may have made my machine
>> inaccessible to
>> > the outside world even with iptables turned off?
>> >
>> > On Feb 21, 2016 10:57 PM, "Dean Peterson" 
>> wrote:
>> >>
>> >> I performed an ansible install of openshift origin.  If I am on the
>> local
>> >> machine, I can bring up openshift in the browser.  However, on any
>> external
>> >> machine, I am no longer able to access anything on the openshift
>> master.  I
>> >> am unable to ssh, or visit the openshift web console in a browser.  I
>> >> checked iptables and all of the necessary ports are open.  I even
>> stopped
>> >> it.  Firewalld is also not running.  I was able to ssh prior to the
>> ansible
>> >> install but something during that install blocked all external
>> traffic.  How
>> >> do I open things back up?
>> >
>> >
>> > ___
>> > users mailing list
>> > users@lists.openshift.redhat.com
>> > http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> >
>>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Mark Turansky
On Tue, Feb 23, 2016 at 9:25 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> On Tue, Feb 23, 2016 at 9:00 AM, Mark Turansky 
> wrote:
>
>> There is no recycler for glusterfs, so "no volume plugin matched" would
>> occur when the volume is being reclaimed by the cluster after its release
>> from a claim.
>>
>
> yes, the pvc was probably removed when the metrics-deploy-template was used
> to replace cassandra, heapster, etc.
> So I have to manually "recycle" the pv? (i.e. delete and recreate it)
>


For now, yes.  We're looking at ways to make dynamic provisioning more
widely available, even outside of a cloud environment.  We'd prefer to not
implement more recyclers and instead make more provisioners.
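For example, a minimal sketch of the manual recycle, assuming the PV definition from this thread is kept in a file (the file name below is hypothetical):

```
# delete the failed PV and re-create it from its saved definition
oc delete persistentvolume pv-storage-1
oc create -f pv-storage-1.yaml
```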
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Philippe Lafoucrière
On Tue, Feb 23, 2016 at 9:00 AM, Mark Turansky  wrote:

> There is no recycler for glusterfs, so "no volume plugin matched" would
> occur when the volume is being reclaimed by the cluster after its release
> from a claim.
>

yes, the pvc was probably removed when the metrics-deploy-template was used
to replace cassandra, heapster, etc.
So I have to manually "recycle" the pv? (i.e. delete and recreate it)
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to import private image from hub.docker to ImageStream ?

2016-02-23 Thread Maciej Szulik



On 02/23/2016 01:56 PM, Stéphane Klein wrote:

I've tried to append :

```
# oc secrets add serviceaccount/default secrets/hub.docker.io --pull
# oc secrets add serviceaccount/default secrets/hub.docker.io --for=pull
# oc secrets add serviceaccount/default secrets/hub.docker.io
# oc secrets add serviceaccount/deployer secrets/hub.docker.io
```

I still always get:

```
# oc import-image api
The import completed successfully.

Name:api
Created:3 hours ago
Labels:
Annotations:
openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
Docker Pull Spec:172.30.27.206:5000/foobar/api

TagSpecCreatedPullSpecImage
latestapi3 hours agoimport failed: you may not have
access to the Docker image "api"
```

Best regards,
Stéphane

2016-02-23 12:48 GMT+01:00 Stéphane Klein :


2016-02-23 11:05 GMT+01:00 Maciej Szulik :


Have you checked this doc:


https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#private-registries




Thanks for this url :)

I've created my hub.docker.io secret with (I have replaced with my
credentials) :

```
oc secrets new-dockercfg SECRET --docker-server=DOCKER_REGISTRY_SERVER
--docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD
--docker-email=DOCKER_EMAIL
```

Now I've :

```
# oc get secret hub.docker.io -o json
{
 "kind": "Secret",
 "apiVersion": "v1",
 "metadata": {
 "name": "hub.docker.io",
 "namespace": "foobar-staging",
 "selfLink": "/api/v1/namespaces/foobar-staging/secrets/
hub.docker.io",
 "uid": "3b1b2aa4-da15-11e5-b613-080027143490",
 "resourceVersion": "19813",
 "creationTimestamp": "2016-02-23T10:07:22Z"
 },
 "data": {
 ".dockercfg": ".."
 },
 "type": "kubernetes.io/dockercfg"
}
```

When I execute :

```
# oc import-image api
The import completed successfully.

Name:api
Created:2 hours ago
Labels:
Annotations:
openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
Docker Pull Spec:172.30.27.206:5000/foobar-staging/api

TagSpecCreatedPullSpecImage
latestapi2 hours agoimport failed: you may not have
access to the Docker image "api"
```

Where is my mistake? How can I tell my ImageStream to use my
hub.docker.io secret?



It looks like there's an error in the import-image command if the
first import failed; I've created an issue to address that:
https://github.com/openshift/origin/issues/7555

The current workaround is to re-create the image stream, and the import
should then pick up the proper secret. Btw, make sure the server is either
auth.docker.io/token or index.docker.io/v1/, otherwise it won't match
the server. The former is the new auth endpoint, the latter is the old one.
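For example, a minimal sketch of that workaround, assuming the image stream is named "api" as above and its definition is kept in a file (the file name is hypothetical):

```
# re-create the image stream, then re-run the import so it picks up the secret
oc delete imagestream api
oc create -f api-imagestream.yaml
oc import-image api
```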

Maciej

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: unsupported volume type after update to 1.1.3

2016-02-23 Thread Mark Turansky
Hi Philippe,

Has the claim for this volume been deleted?

There is no recycler for glusterfs, so "no volume plugin matched" would
occur when the volume is being reclaimed by the cluster after its release
from a claim.

Mark

On Tue, Feb 23, 2016 at 8:46 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Hi,
>
> We have a volume with status = "Failed" after upgrading to 1.1.3.
> All our volumes are mounted through glusterfs, and all the others are
> fine, the issue is just with one of them:
>
> Name:   pv-storage-1
> Labels: 
> Status: Failed
> Claim:  openshift-infra/metrics-cassandra-1
> Reclaim Policy: Recycle
> Access Modes:   RWO,RWX
> Capacity:   20Gi
> Message:no volume plugin matched
> Source:
> Type:   Glusterfs (a Glusterfs mount on the host that
> shares a pod's lifetime)
> EndpointsName:  glusterfs-cluster
> Path:   pv-staging-gemnasium-20G-2
> ReadOnly:   false
>
>
> /sbin/mount.glusterfs is available on all nodes, and I can mount the
> volume by hand (everything was working fine before the update).
>
> Any idea to fix this?
> Thanks
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


unsupported volume type after update to 1.1.3

2016-02-23 Thread Philippe Lafoucrière
Hi,

We have a volume with status = "Failed" after upgrading to 1.1.3.
All our volumes are mounted through glusterfs, and all the others are fine,
the issue is just with one of them:

Name:   pv-storage-1
Labels: 
Status: Failed
Claim:  openshift-infra/metrics-cassandra-1
Reclaim Policy: Recycle
Access Modes:   RWO,RWX
Capacity:   20Gi
Message:no volume plugin matched
Source:
Type:   Glusterfs (a Glusterfs mount on the host that
shares a pod's lifetime)
EndpointsName:  glusterfs-cluster
Path:   pv-staging-gemnasium-20G-2
ReadOnly:   false


/sbin/mount.glusterfs is available on all nodes, and I can mount the volume
by hand (everything was working fine before the update).

Any idea to fix this?
Thanks
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Create image-stream for image from insecure private docker registry

2016-02-23 Thread Maciej Szulik



On 02/23/2016 11:44 AM, Den Cowboy wrote:

I  try to create an image-stream for my image from a docker registry.
The registry is insecure (it's using selfsigned certificates) and there is a 
login + password on my registry.
I've put the certs on the nodes of my openshift cluster and I'm able to login 
and pull the images I want.
But I need to create image-streams for this.
My registry is: ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com:5000

docker login ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com:5000
Username: 
Password:
Email: 
WARNING: login credentials saved in /home/centos/.docker/config.json
Login Succeeded
$ docker pull 
ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com:5000/test/my-image:83
Trying to pull repository 
ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com:5000/test/my-image:83: Pulling 
from test/my-image
77e39ee82117: Pull complete
5eb1402f0414: Pull complete
9287fae7a16e: Pull complete
0288ae931294: Pull complete
9536cbaf1242: Pull complete
ddfb2360ce1e: Pull complete
8ab6f3fcbdb5: Pull complete
20ed370cdb6e: Pull complete
ebcf22a55440: Pull complete
5f8d821c760f: Pull complete
cfa77085638d: Pull complete
e154104e0560: Pull complete
9774ad57345c: Pull complete
fea97a1ec848: Pull complete
4b8c16278ead: Pull complete
dc18e7f95e9b: Pull complete
308e99456a16: Pull complete
e95130b212d6: Pull complete
7e48c416298a: Pull complete
Digest: sha256:03d4c5090dd06a29ba3473870efdbf6324c0074b94345b3a346d5a8e2dd0a141
Status: Downloaded newer image for ...

But okay. Now I have the image only on one of my nodes. So I have to create an 
image-stream for it:
I want it in my project testing:
$ oc new-project testing
I try to create a secret to make it possible to login on my registry for each 
node:
$ oc secrets new-dockercfg SECRET \
    --docker-server=ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com \
    --docker-username=*** --docker-password=*** --docker-email=***
The Secret "SECRET" is invalid.
metadata.name:
  Invalid value: "SECRET": must be a DNS subdomain (at most 253
characters, matching regex
[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*): e.g.
"example.com"


You should replace SECRET with the name under which the secret
should be stored. The requirement for that is lowercase letters
and numbers - see the regex above.
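For example (a sketch; "ec2-registry" is just a hypothetical lowercase name):

```
oc secrets new-dockercfg ec2-registry \
    --docker-server=ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com \
    --docker-username=*** --docker-password=*** --docker-email=***
```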


Why is it invalid?
After that I want to create my image-stream:


kind: ImageStream
apiVersion: v1
metadata:
  name: my-image
  annotations:
    openshift.io/image.insecureRepository: "true"
spec:
  dockerImageRepository: ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com/test/my-image

Is this the right approach?






___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Multi Clusters : Token management

2016-02-23 Thread Aleksandar Kostadinov

Srinivas Naga Kotaru (skotaru) wrote on 02/22/2016 08:26 PM:

Thanks guys for having some discussion on this topic. Please confirm whether my
understanding of multi-cluster authentication and token management is correct.

1. The OSE3 authentication subsystem can use an external OAuth-based solution (a
corporate solution). This SSO only works for browser-based clients (console
etc.), but not for CLI clients like oc.


For the CLI you can obtain a token with the browser and do `oc login
--token=...`; you can also use a service account. But yeah, you cannot
directly log in with the CLI unless you already have a user token or a
service account token.
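For example, a minimal sketch with two clusters (hypothetical API URLs, placeholder tokens):

```
oc login https://cluster-a.example.com:8443 --token=<token-for-cluster-a>
oc login https://cluster-b.example.com:8443 --token=<token-for-cluster-b>

# both logins are recorded in ~/.kube/config; list them with `oc config view`
# and switch between clusters with:
oc config use-context <context-name-for-cluster-a>
```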



2. A client-cert-based solution might help both browser and CLI, but it is
difficult to operate and manage unless decent PKI infrastructure is available
for cert issuing and revocation.

3. It's not best practice to have the same token used across multiple clusters,
and no effort is currently going into integrating them. It is assumed that each
cluster has its own token key and lifetime.

4. If a client is dealing with multiple clusters and his applications are spread
across all of them, he has to authenticate against each cluster to manage them.
His .kube/config file might have details for all these clusters, but he still
has to log in to each separately. Administrators can increase the token validity
to reduce the number of login attempts, but that is still a pain from an
experience perspective.


Even if you have a single token on all clusters, it would be equally
convenient/inconvenient to switch between clusters (as you'll have to
copy/paste the token). Perhaps the easiest would be if you have a Kerberos
infrastructure so that you can log in everywhere passwordless (including
web and cli). But I'm not sure the openshift cli supports that yet. And
running Kerberos is also non-trivial.

It's not like any SSO is trivial actually :)

Again you can look at freeIPA as it does provide both Kerberos/KDC and 
PKI capabilities. And is hopefully reasonably user-friendly.



Please add any helpful ideas to provide simple authentication layer in a multi 
cluster environment


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to import private image from hub.docker to ImageStream ?

2016-02-23 Thread Stéphane Klein
I've tried to append :

```
# oc secrets add serviceaccount/default secrets/hub.docker.io --pull
# oc secrets add serviceaccount/default secrets/hub.docker.io --for=pull
# oc secrets add serviceaccount/default secrets/hub.docker.io
# oc secrets add serviceaccount/deployer secrets/hub.docker.io
```

I still always get:

```
# oc import-image api
The import completed successfully.

Name:              api
Created:           3 hours ago
Labels:
Annotations:       openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
Docker Pull Spec:  172.30.27.206:5000/foobar/api

Tag      Spec   Created       PullSpec   Image
latest   api    3 hours ago              import failed: you may not have access to the Docker image "api"
```

Best regards,
Stéphane

2016-02-23 12:48 GMT+01:00 Stéphane Klein :

> 2016-02-23 11:05 GMT+01:00 Maciej Szulik :
>
>> Have you checked this doc:
>>
>>
>> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#private-registries
>>
>>
>>
> Thanks for this url :)
>
> I've created my hub.docker.io secret with (I have replaced with my
> credentials) :
>
> ```
> oc secrets new-dockercfg SECRET --docker-server=DOCKER_REGISTRY_SERVER
> --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD
> --docker-email=DOCKER_EMAIL
> ```
>
> Now I've :
>
> ```
> # oc get secret hub.docker.io -o json
> {
> "kind": "Secret",
> "apiVersion": "v1",
> "metadata": {
> "name": "hub.docker.io",
> "namespace": "foobar-staging",
> "selfLink": "/api/v1/namespaces/foobar-staging/secrets/
> hub.docker.io",
> "uid": "3b1b2aa4-da15-11e5-b613-080027143490",
> "resourceVersion": "19813",
> "creationTimestamp": "2016-02-23T10:07:22Z"
> },
> "data": {
> ".dockercfg": ".."
> },
> "type": "kubernetes.io/dockercfg"
> }
> ```
>
> When I execute :
>
> ```
> # oc import-image api
> The import completed successfully.
>
> Name:api
> Created:2 hours ago
> Labels:
> Annotations:
> openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
> Docker Pull Spec:172.30.27.206:5000/foobar-staging/api
>
> TagSpecCreatedPullSpecImage
> latestapi2 hours agoimport failed: you may not have
> access to the Docker image "api"
> ```
>
> Where is my mistake ? how can I say to my ImageStream to use my
> hub.docker.io secret ?
>
> Best regards,
> Stéphane
>



-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Cron tasks?

2016-02-23 Thread Mateus Caruccio
Hello David.

I've got cron to work as expected by doing this:

1 - Create an "extra" image and add the necessary packages (cronie crontabs
nss_wrapper uid_wrapper):

https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/Dockerfile#L18

We need to relax security here, otherwise neither crond nor crontab will
work, since both are run as regular users:
https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/Dockerfile#L24-L26

2 - Create a script to activate nss_wrapper and (optionally) uid_wrapper:

https://github.com/getupcloud/sti-ruby-extra/blob/master/1.9/nss-wrapper-setup

libuid_wrapper is required by /usr/bin/crontab so that it believes it is
running as root.
For crond to start, it needs to have the current user in your passwd.
You can achieve this by using nss_wrapper with a "fake" passwd file [1] and
instructing everything to use it [2].
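For reference, a minimal sketch of what such a wrapper setup typically does (not the exact script linked above; the paths and the fake user name are illustrative):

```
# create a fake passwd entry for the current (arbitrary) UID and preload nss_wrapper
export NSS_WRAPPER_PASSWD=/tmp/passwd
export NSS_WRAPPER_GROUP=/etc/group
echo "default:x:$(id -u):$(id -g):Default User:${HOME}:/bin/bash" > ${NSS_WRAPPER_PASSWD}
export LD_PRELOAD=libnss_wrapper.so
```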

3 - From your repo's  (.sti|.s2i)/bin/run, "source" the wrapper and start
crond.

if [ -x ${STI_SCRIPTS_PATH}/nss-wrapper-setup ]; then
    source ${STI_SCRIPTS_PATH}/nss-wrapper-setup -u
    crond-start
fi



I chose to run it from the same code container so it can reach the code
itself.

Feedback is very much appreciated.

Best Regards.

[1]
https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/nss-wrapper-setup#L22-L27
[2]
https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/nss-wrapper-setup#L29-L31


*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Tue, Feb 23, 2016 at 7:03 AM, Maciej Szulik  wrote:

>
> On 02/23/2016 10:41 AM, David Strejc wrote:
>
>> Does anyone have any experience with cron tasks as they were in OS v2?
>>
>
> v3 does not have cron support yet, there was a proposal already accepted
> in k8s. In the following weeks/months I'll be working on implementing
> such functionality.
>
> I would like to let our developers maintain cron tasks through git .s2i
>> folder as it was in v2.
>> Is it good idea to build cron into docker image and link crontab to file
>> inside .s2i?
>>
>
> I'm not sure this will work as you expect. You'd still need a separate
> mechanism that will actually trigger build, or other action when the
> right time comes.
>
> What I can suggest as a temporary solution is writing/deploying some
> kind of cron scheduler inside of OpenShift.
>
> Maciej
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Cron tasks?

2016-02-23 Thread Clayton Coleman
In the meantime, you can run something like foreman or another
userspace process manager in your container, or create pods that have
two containers using the same image, one that starts the main process
and one that starts a cron daemon.

> On Feb 23, 2016, at 5:05 AM, Maciej Szulik  wrote:
>
>
>> On 02/23/2016 10:41 AM, David Strejc wrote:
>> Does anyone have any experience with cron tasks as they were in OS v2?
>
> v3 does not have cron support yet, there was a proposal already accepted
> in k8s. In the following weeks/months I'll be working on implementing
> such functionality.
>
>> I would like to let our developers maintain cron tasks through git .s2i
>> folder as it was in v2.
>> Is it good idea to build cron into docker image and link crontab to file
>> inside .s2i?
>
> I'm not sure this will work as you expect. You'd still need a separate
> mechanism that will actually trigger build, or other action when the
> right time comes.
>
> What I can suggest as a temporary solution is writing/deploying some
> kind of cron scheduler inside of OpenShift.
>
> Maciej
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to import private image from hub.docker to ImageStream ?

2016-02-23 Thread Stéphane Klein
2016-02-23 11:05 GMT+01:00 Maciej Szulik :

> Have you checked this doc:
>
>
> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#private-registries
>
>
>
Thanks for this url :)

I've created my hub.docker.io secret with (I have replaced with my
credentials) :

```
oc secrets new-dockercfg SECRET --docker-server=DOCKER_REGISTRY_SERVER
--docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD
--docker-email=DOCKER_EMAIL
```

Now I've :

```
# oc get secret hub.docker.io -o json
{
"kind": "Secret",
"apiVersion": "v1",
"metadata": {
"name": "hub.docker.io",
"namespace": "foobar-staging",
"selfLink": "/api/v1/namespaces/foobar-staging/secrets/hub.docker.io
",
"uid": "3b1b2aa4-da15-11e5-b613-080027143490",
"resourceVersion": "19813",
"creationTimestamp": "2016-02-23T10:07:22Z"
},
"data": {
".dockercfg": ".."
},
"type": "kubernetes.io/dockercfg"
}
```

When I execute :

```
# oc import-image api
The import completed successfully.

Name:              api
Created:           2 hours ago
Labels:
Annotations:       openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
Docker Pull Spec:  172.30.27.206:5000/foobar-staging/api

Tag      Spec   Created       PullSpec   Image
latest   api    2 hours ago              import failed: you may not have access to the Docker image "api"
```

Where is my mistake? How can I tell my ImageStream to use my
hub.docker.io secret?

Best regards,
Stéphane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


RE: Create app with image from own docker registry on OpenShift 3.1

2016-02-23 Thread Den Cowboy
I've added it + restarted docker:
INSECURE_REGISTRY='--insecure-registry 
ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com'

I'm able to perform a docker login and pull the image manually but
oc new-app ec2-xxx:5000/test/image:1 or /test/image
error: can't look up Docker image "ec2-xxx:5000/dbm/ponds-ui-nodejs:83": Internal error occurred: Get
https://ec2-xxx:5000/v2/: x509: certificate signed by unknown authority
error: no match for "ec2-xxx:5000/test/image:1"

From: bpar...@redhat.com
Date: Thu, 18 Feb 2016 09:48:32 -0500
Subject: Re: Create app with image from own docker registry on OpenShift 3.1
To: dencow...@hotmail.com; users@lists.openshift.redhat.com

INSECURE_REGISTRY is needed because your registry is using a self-signed cert, 
whether it is secured or not.


On Thu, Feb 18, 2016 at 4:59 AM, Den Cowboy  wrote:



No, I didn't do that. I'm using a secure registry for OpenShift, so the tag was
not for an insecure one.

From: bpar...@redhat.com
Date: Wed, 17 Feb 2016 10:53:48 -0500
Subject: Re: Create app with image from own docker registry on OpenShift 3.1
To: dencow...@hotmail.com
CC: users@lists.openshift.redhat.com

is ec2-xxx listed as an insecure registry in your docker daemon's configuration?

/etc/sysconfig/docker
INSECURE_REGISTRY='--insecure-registry ec2-'

I believe that is needed for docker to communicate with registries that use 
self-signed certs.

(you'll need to restart the docker daemon after adding that setting)
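For example, a rough sketch on a systemd-based host, using the value from your message (`tee -a` appends, so edit the existing INSECURE_REGISTRY line instead if one is already there):

```
echo "INSECURE_REGISTRY='--insecure-registry ec2-xx-xx-xx-xx.xx-central-1.compute.amazonaws.com'" \
  | sudo tee -a /etc/sysconfig/docker
sudo systemctl restart docker
```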



On Wed, Feb 17, 2016 at 8:15 AM, Den Cowboy  wrote:





I have my own docker registry secured with a selfsigned certificate.
On other servers, I'm able to login on the registry and pull/push images from 
it. So that seems to work fine.


But when I want to create an app from the image using OpenShift, it does not
seem to work:


oc new-app ec2-xxx:5000/test/image1
error: can't look up Docker image "ec2-xx/test/image1": Internal error 
occurred: Get https://ec2-xxx:5000/v2/: x509: certificate signed by unknown 
authority
error: no match for "ec2-xxx:5000/test/image1"


What could be the issue?
I'm able to login in the registry and pull the image manually.

  

___

users mailing list

users@lists.openshift.redhat.com

http://lists.openshift.redhat.com/openshiftmm/listinfo/users




-- 
Ben Parees | OpenShift


  


-- 
Ben Parees | OpenShift


  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Create image-stream for image from insecure private docker registry

2016-02-23 Thread Den Cowboy
I'm trying to create an image-stream for my image from a docker registry.
The registry is insecure (it's using selfsigned certificates) and there is a 
login + password on my registry.
I've put the certs on the nodes of my openshift cluster and I'm able to login 
and pull the images I want.
But I need to create image-streams for this. 
My registry is: ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com:5000

docker login ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com:5000
Username: 
Password: 
Email: 
WARNING: login credentials saved in /home/centos/.docker/config.json
Login Succeeded
$ docker pull 
ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com:5000/test/my-image:83
Trying to pull repository 
ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com:5000/test/my-image:83: Pulling 
from test/my-image
77e39ee82117: Pull complete 
5eb1402f0414: Pull complete 
9287fae7a16e: Pull complete 
0288ae931294: Pull complete 
9536cbaf1242: Pull complete 
ddfb2360ce1e: Pull complete 
8ab6f3fcbdb5: Pull complete 
20ed370cdb6e: Pull complete 
ebcf22a55440: Pull complete 
5f8d821c760f: Pull complete 
cfa77085638d: Pull complete 
e154104e0560: Pull complete 
9774ad57345c: Pull complete 
fea97a1ec848: Pull complete 
4b8c16278ead: Pull complete 
dc18e7f95e9b: Pull complete 
308e99456a16: Pull complete 
e95130b212d6: Pull complete 
7e48c416298a: Pull complete 
Digest: sha256:03d4c5090dd06a29ba3473870efdbf6324c0074b94345b3a346d5a8e2dd0a141
Status: Downloaded newer image for ...

But okay. Now I have the image only on one of my nodes. So I have to create an 
image-stream for it:
I want it in my project testing:
$ oc new-project testing
I try to create a secret to make it possible to log in to my registry from each
node:
$ oc secrets new-dockercfg SECRET \
    --docker-server=ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com \
    --docker-username=*** --docker-password=*** --docker-email=***
The Secret "SECRET" is invalid.
metadata.name:
 Invalid value: "SECRET": must be a DNS subdomain (at most 253 
characters, matching regex 
[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*): e.g. 
"example.com"

Why is it invalid?
After that I want to create my image-stream:


kind: ImageStream
apiVersion: v1
metadata:
  name: my-image
  annotations:
    openshift.io/image.insecureRepository: "true"
spec:
  dockerImageRepository: ec2-xx-xx-xx-xx.xx-xx-1.compute.amazonaws.com/test/my-image

Is this the right approach?


  ___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: How to import private image from hub.docker to ImageStream ?

2016-02-23 Thread Maciej Szulik

Have you checked this doc:

https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#private-registries


On 02/23/2016 10:52 AM, Stéphane Klein wrote:

Hi,

I try to understand how to import private image from hub.docker to
ImageStream.

I have this in my configuration :

```
- kind: ImageStream
   apiVersion: v1
   metadata:
 name: api
   spec:
 dockerImageRepository: foobar/api
```

I've configured docker login on my host to access to my private hub.docker
images.

```
# oc import-image api
The import completed successfully.

Name:api
Created:17 minutes ago
Labels:
Annotations:
openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
Docker Pull Spec:172.30.27.206:5000/foobar/api

TagSpecCreatedPullSpecImage
latestapi16 minutes agoimport failed: you may not have
access to the Docker image "api"
```

Why this error ?

I can pull my image with success :

```
# docker pull foobar/api
Using default tag: latest
latest: Pulling from foobar/api
bada6a63fdfa: Already exists
9eebb47979e1: Already exists
Digest:
sha256:1cd26e60704d87a1f01035ab6841761abe844ba3ad255ff9c83dded9540fb073
Status: Image is up to date for foobar/api:latest
```

```
# oc get is
NAME DOCKER REPO   TAGS
UPDATED
api  foobar/api latest
```

Questions :

* How can I fix « import failed: you may not have access to the Docker
image "api" » ?
* How can I use "oc import-image api --from…" to import local docker images
?
* Is it possible to append some documentation about external private
registry in this section
https://docs.openshift.com/enterprise/3.1/architecture/core_concepts/builds_and_image_streams.html#image-streams
? maybe an example with hub.docker ?

Best regards,
Stéphane



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users



___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Cron tasks?

2016-02-23 Thread Maciej Szulik


On 02/23/2016 10:41 AM, David Strejc wrote:

Does anyone have any experience with cron tasks as they were in OS v2?


v3 does not have cron support yet, there was a proposal already accepted
in k8s. In the following weeks/months I'll be working on implementing
such functionality.


I would like to let our developers maintain cron tasks through git .s2i
folder as it was in v2.
Is it good idea to build cron into docker image and link crontab to file
inside .s2i?


I'm not sure this will work as you expect. You'd still need a separate
mechanism that will actually trigger a build, or another action, when the
right time comes.

What I can suggest as a temporary solution is writing/deploying some
kind of cron scheduler inside of OpenShift.
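As a rough illustration only (not an OpenShift feature), such an in-container scheduler can be as small as a loop; the task path below is hypothetical:

```
#!/bin/bash
# minimal "poor man's cron": run the repo's task script every hour
TASK=/opt/app-root/src/.s2i/cron-task.sh    # hypothetical path inside the app image
while true; do
    if [ -x "$TASK" ]; then
        "$TASK" || echo "cron task failed" >&2
    fi
    sleep 3600
done
```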

Maciej

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: After private Docker registry re-deployment, old ImageStreams still point to old registry -> app (re)deployment fails. How to cleanup ?

2016-02-23 Thread Maciej Szulik



On 02/22/2016 07:24 PM, Florian Daniel Otel wrote:

Hello all,

I have the following problem I need some guidance with:

I had to delete and redeploy my private Docker registry.


Yeah, it's still problematic and we track the issue here:
https://github.com/openshift/origin/issues/6283.
In such situations we recommend recording old registry IP and
re-deploying it with the same IP. If it's not a problem for you
I'd suggest to do just that. This will save you trouble finding
all the places that needs changing. Some guidance is here:
https://github.com/openshift/openshift-docs/issues/1494


The problem is that for some reason re-deploying new apps -- e.g. the ruby
sample app from here -- fails since it refers to (and tries to push into) the
old private registry.

After a further inspection, it seems I have some old "ImageStreams"
pointing to the old registry, so I need to delete those.

Any guidance on how to do that ?


Actually, no delete is needed; rather, you need to re-import all the images
again. Just invoke oc import-image <imagestream_name> for every single IS you
have.
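For example, a small sketch that re-imports every image stream in the current project (it strips whatever type prefix `oc get is -o name` puts in front of the name):

```
for is in $(oc get is -o name); do
    oc import-image "${is##*/}"
done
```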


And, in addition to that, what else do I need to clean up after
re-deploying the Docker registry


Unfortunately the information about last used IS is also stored in
builds and deployments, iirc, not sure about others, though. So if
it's not a problem for you I'd suggest re-deploying the registry
with the previous IP.


Thanks,

Florian

P.S. The particular reason I have deleted & re-deployed the Docker registry
was that previous app deployments failed with:

0221 10:41:34.979617   1 sti.go:223] Registry server Address:
I0221 10:41:34.979650   1 sti.go:224] Registry server User Name:
serviceaccount
I0221 10:41:34.979662   1 sti.go:225] Registry server Email:
serviceacco...@example.org
I0221 10:41:34.979672   1 sti.go:230] Registry server Password:
<>
F0221 10:41:34.979703   1 builder.go:185] Error: build error: Failed to
push image. Response from registry is: digest invalid: provided digest did
not match uploaded content

and so far I haven't found a clean way of removing all images /
blobs from the registry -- even if I am using the "host volume" option. If
there is an easier way to clean those images / blobs up, other than deleting
the previous Docker registry altogether and removing that volume, please
advise. Thanks.




Maciej

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


How to import private image from hub.docker to ImageStream ?

2016-02-23 Thread Stéphane Klein
Hi,

I'm trying to understand how to import a private image from hub.docker into an
ImageStream.

I have this in my configuration :

```
- kind: ImageStream
  apiVersion: v1
  metadata:
name: api
  spec:
dockerImageRepository: foobar/api
```

I've configured docker login on my host to access to my private hub.docker
images.

```
# oc import-image api
The import completed successfully.

Name:              api
Created:           17 minutes ago
Labels:
Annotations:       openshift.io/image.dockerRepositoryCheck=2016-02-23T09:14:34Z
Docker Pull Spec:  172.30.27.206:5000/foobar/api

Tag      Spec   Created          PullSpec   Image
latest   api    16 minutes ago              import failed: you may not have access to the Docker image "api"
```

Why this error ?

I can pull my image with success :

```
# docker pull foobar/api
Using default tag: latest
latest: Pulling from foobar/api
bada6a63fdfa: Already exists
9eebb47979e1: Already exists
Digest:
sha256:1cd26e60704d87a1f01035ab6841761abe844ba3ad255ff9c83dded9540fb073
Status: Image is up to date for foobar/api:latest
```

```
# oc get is
NAME DOCKER REPO   TAGS
UPDATED
api  foobar/api latest
```

Questions :

* How can I fix « import failed: you may not have access to the Docker
image "api" »?
* How can I use "oc import-image api --from…" to import local docker images?
* Is it possible to add some documentation about external private
registries to this section:
https://docs.openshift.com/enterprise/3.1/architecture/core_concepts/builds_and_image_streams.html#image-streams
? Maybe an example with hub.docker?

Best regards,
Stéphane
-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Cron tasks?

2016-02-23 Thread David Strejc
Does anyone have any experience with cron tasks as they were in OS v2?

I would like to let our developers maintain cron tasks through the git .s2i
folder as it was in v2.
Is it a good idea to build cron into the docker image and link the crontab to a
file inside .s2i?

Thank you.

David Strejc
t: +420734270131
e: david.str...@gmail.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Timezone for PODs

2016-02-23 Thread Aleksandar Lazic
You're welcome.



From: users-boun...@lists.openshift.redhat.com 
 on behalf of Per Carlson 

Sent: Tuesday, February 23, 2016 08:00
To: aleks
Cc: openshift
Subject: Re: Timezone for PODs

Hi.

Is there a preferred way to make OpenShift inject timezone info
into containers?

I would suggest setting TZ in the dc.

DC_NAME=$(oc get dc  -o name )
oc env ${DC_NAME} TZ=Europe/Vienna
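A quick way to verify it took effect (a sketch; the pod name is a placeholder):

```
oc env ${DC_NAME} --list | grep TZ    # confirm the variable is set on the dc
oc exec <pod-name> -- date            # a re-deployed pod should now print Europe/Vienna time
```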

That worked perfectly. Thanks!

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users