Re: Changing Prometheus rules

2019-11-20 Thread Mateus Caruccio
Isn't the cluster version operator something from OKD 4.x?

On Wed, Nov 20, 2019, 05:37, Simon Pasquier wrote:

> On Tue, Nov 19, 2019 at 6:33 PM Mateus Caruccio
>  wrote:
> >
> > You must disable cluster-monitoring-operator since it will try to
> reconcile the whole monitoring stack.
> >
> > $ oc scale --replicas=0 deploy/cluster-monitoring-operator
>
> You'd need to disable the cluster version operator too IIRC and this
> has a bigger impact.
>
> >
> > Muting alerts using inhibit rules may have an unexpected side-effect, as
> > noted in [1]. The recommended approach is to send alerts to a "blackhole"
> > receiver (rationale and example in the link).
> >
> > [1]
> https://medium.com/@wrossmann/suppressing-informational-alerts-with-prometheus-and-alertmanager-4237feab7ce9
>
> What I've described should work because source and target labels won't
> match the same alerts. Agreed that blackholing the notification is
> also a good solution.
>
> >
> > --
> > Mateus Caruccio / Master of Puppets
> > GetupCloud.com
> > We make the infrastructure invisible
> > Gartner Cool Vendor 2017
> >
> >
> > On Tue, Nov 19, 2019 at 13:27, Tim Dudgeon wrote:
> >>
> >> No joy with that approach. I tried editing the ConfigMap and the CRD
> but both got reset when the cluster-monitoring-operator was restarted.
> >>
> >> Looks like I'll have to live with silencing the alert.
> >>
> >> On 19/11/2019 07:56, Vladimir REMENAR wrote:
> >>
> >> Hi Tim,
> >>
> >> You need to stop cluster-monitoring-operator and then edit the
> >> configmap. If cluster-monitoring-operator is running while you edit the
> >> configmap, it will always revert it to the default.
> >>
> >>
> >> Uz pozdrav,
> >> Vladimir Remenar
> >>
> >>
> >>
> >> From: Tim Dudgeon 
> >> To: Simon Pasquier 
> >> Cc: users 
> >> Date: 18.11.2019 17:46
> >> Subject: Re: Changing Prometheus rules
> >> Sent by: users-boun...@lists.openshift.redhat.com
> >> 
> >>
> >>
> >>
> >> The KubeAPILatencyHigh alert fires several times a day for us (on 2
> >> different OKD clusters).
> >>
> >> On 18/11/2019 15:17, Simon Pasquier wrote:
> >> > The Prometheus instances deployed by the cluster monitoring operator
> >> > are read-only and can't be customized.
> >> >
> https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html#alerting-rules_prometheus-cluster-monitoring
> >> >
> >> > Can you provide more details about which alerts are noisy?
> >> >
> >> > On Mon, Nov 18, 2019 at 2:43 PM Tim Dudgeon 
> wrote:
> >> >> What is the "right" way to edit Prometheus rules that are deployed by
> >> >> default on OKD 3.11?
> >> >> I have alerts that are annoyingly noisy, and want to silence them
> forever!
> >> >>
> >> >> I tried editing the definition of the PrometheusRule CRD and/or the
> >> >> prometheus-k8s-rulefiles-0 ConfigMap in the openshift-monitoring
> project
> >> >> but my changes keep getting reverted back to the original.
> >> >>
> >> >> ___
> >> >> users mailing list
> >> >> users@lists.openshift.redhat.com
> >> >> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
> >> >>


Re: Changing Prometheus rules

2019-11-19 Thread Mateus Caruccio
You must disable cluster-monitoring-operator since it will try to
reconcile the whole monitoring stack.

$ oc scale --replicas=0 deploy/cluster-monitoring-operator

Muting alerts using inhibit rules may have an unexpected side-effect, as
noted in [1]. The recommended approach is to send alerts to a "blackhole"
receiver (rationale and example in the link).

[1]
https://medium.com/@wrossmann/suppressing-informational-alerts-with-prometheus-and-alertmanager-4237feab7ce9
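For reference, a minimal sketch of the blackhole approach. The alert name, the file name, and the `alertmanager-main` secret name are assumptions for an OKD 3.11 layout; adapt them to your cluster:

```shell
# Write an Alertmanager config that routes the noisy alert to a receiver
# with no notification targets ("blackhole").
cat > alertmanager.yaml <<'EOF'
route:
  receiver: default
  routes:
  - match:
      alertname: KubeAPILatencyHigh   # hypothetical noisy alert
    receiver: blackhole
receivers:
- name: default
- name: blackhole                     # no webhook/email config: alerts go nowhere
EOF

# Then load it into the cluster (commented out here, since it needs a live cluster):
#   oc -n openshift-monitoring create secret generic alertmanager-main \
#     --from-file=alertmanager.yaml --dry-run -o yaml | oc replace -f -
```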

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


On Tue, Nov 19, 2019 at 13:27, Tim Dudgeon wrote:

> No joy with that approach. I tried editing the ConfigMap and the CRD but
> both got reset when the cluster-monitoring-operator was restarted.
>
> Looks like I'll have to live with silencing the alert.
> On 19/11/2019 07:56, Vladimir REMENAR wrote:
>
> Hi Tim,
>
> You need to stop cluster-monitoring-operator and then edit the configmap.
> If cluster-monitoring-operator is running while you edit the configmap, it
> will always revert it to the default.
>
>
> Uz pozdrav,
> *Vladimir Remenar*
>
>
>
> From: Tim Dudgeon 
> To: Simon Pasquier 
> Cc: users 
>
> Date: 18.11.2019 17:46
> Subject: Re: Changing Prometheus rules
> Sent by: users-boun...@lists.openshift.redhat.com
> --
>
>
>
> The KubeAPILatencyHigh alert fires several times a day for us (on 2
> different OKD clusters).
>
> On 18/11/2019 15:17, Simon Pasquier wrote:
> > The Prometheus instances deployed by the cluster monitoring operator
> > are read-only and can't be customized.
> >
> https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html#alerting-rules_prometheus-cluster-monitoring
> >
> > Can you provide more details about which alerts are noisy?
> >
> > On Mon, Nov 18, 2019 at 2:43 PM Tim Dudgeon 
>  wrote:
> >> What is the "right" way to edit Prometheus rules that are deployed by
> >> default on OKD 3.11?
> >> I have alerts that are annoyingly noisy, and want to silence them
> forever!
> >>
> >> I tried editing the definition of the PrometheusRule CRD and/or the
> >> prometheus-k8s-rulefiles-0 ConfigMap in the openshift-monitoring project
> >> but my changes keep getting reverted back to the original.
> >>


Re: sftp service on cluster - how to do it

2019-11-18 Thread Mateus Caruccio
I guess one could use either Service.type=LoadBalancer (one ELB per service
on port 22) or Service.type=NodePort with single ELB mapping
ELB-PORT:NODE-PORT for each service.
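As a sketch, the NodePort variant might look like this (the pod label and node port are assumptions; the ELB would then forward ELB:22 to NODE:30022 on every node):

```shell
# Generate a NodePort Service manifest exposing an sftp pod's port 22.
cat > sftp-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sftp
spec:
  type: NodePort
  selector:
    app: sftp            # hypothetical pod label
  ports:
  - name: ssh
    port: 22
    targetPort: 22
    nodePort: 30022      # must fall in the cluster's NodePort range (default 30000-32767)
EOF
# oc apply -f sftp-service.yaml   # commented out: needs a live cluster
```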

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


On Sun, Nov 17, 2019 at 22:13, Just Marvin <marvin.the.cynical.ro...@gmail.com> wrote:

> Tobias,
>
> I _will_ have access to load balancers if needed, but at the moment, I
> need to understand how it works. Assume that I do: what exactly does "proxy
> to the internal sftp service" mean? I assume "sftp service" would be the
> service that I set up, but which piece is the proxy? I don't see that load
> balancer and proxy functions as being the same, so it seems like you are
> talking about a third piece. What piece is that?
>
> Regards,
> Marvin
>
> On Sun, Nov 17, 2019 at 1:30 PM Tobias Florek 
> wrote:
>
>> Hi!
>>
>> I assume you don't have easy access to load balancers, because that
>> would be easiest.  Just proxy to the internal sftp service.
>>
>> If you don't I have used Nodeport service in the past.  You will lose
>> the nice port 22 though.  If you control the node's ssh daemon, you can
>> also use ProxyJumps.  Be sure to lock down ssh for the users though.
>>
>> Cheers,
>>  Tobias Florek
>>
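To make the ProxyJump suggestion above concrete, a hypothetical sketch (the host names and Service IP are invented; the node's ssh daemon must allow the jump, and ProxyJump needs OpenSSH 7.3+):

```shell
# A client-side ssh_config fragment: reach the in-cluster sftp Service by
# jumping through a node's ssh daemon.
cat > ssh_config.sftp <<'EOF'
Host okd-node
    HostName node1.example.com      # a cluster node reachable from outside
    User cloud-user

Host cluster-sftp
    HostName 172.30.42.10           # the sftp Service ClusterIP (visible from the node)
    Port 22
    ProxyJump okd-node
EOF
# sftp -F ssh_config.sftp cluster-sftp   # commented out: needs real hosts
```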


Re: atomic install wants to use /var/lib/containers instead of /var/lib/docker for storage

2019-01-25 Thread Mateus Caruccio
I'm not sure that is the case. This file is included from [1]. Maybe
someone from the Red Hat team could drop some wisdom.


[1]
https://github.com/openshift/openshift-ansible/blob/release-3.11/roles/container_runtime/tasks/package_docker.yml#L174



--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


On Fri, Jan 25, 2019 at 16:48, mabi wrote:

> There are no symlinks and /var/lib/docker still exists. Looking at the
> following playbook:
>
>
> https://github.com/openshift/openshift-ansible/blob/master/roles/container_runtime/tasks/common/post.yml
>
> I can see that the symlink only gets created when using CRI-O with
> OpenShift (openshift_use_crio). As I am not using CRI-O but Docker it
> looks like the symlink will never get created correctly...
>
>
>
> Is this maybe an issue to be reported?
>
> ‐‐‐ Original Message ‐‐‐
> On Friday, January 25, 2019 7:02 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
> There should be a symlink /var/lib/docker pointing to
> /var/lib/containers/docker.
>
>
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
>
> On Fri, Jan 25, 2019 at 11:43, mabi wrote:
>
>> I changed the /etc/sysconfig/docker-storage-setup to use
>> /var/lib/containers instead of /var/lib/docker in the
>> CONTAINER_ROOT_LV_MOUNT_PATH variable and the OpenShift ansible
>> prerequisites.yml playbook now worked fine.
>>
> >> So now I went on to the next step of the installation, which is to run
> >> the OpenShift ansible deploy_cluster.yml playbook, and this one fails
> >> because it tries to pull a docker image and store it into /var/lib/docker,
> >> as you can see from the output below in this mail...
>>
>> Here is the command I am running:
>>
>> atomic install --system --storage=ostree --set INVENTORY_FILE=/root/hosts
>> --set
>> PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
>> --set OPTS="-v" docker.io/openshift/origin-ansible:v3.11
>>
>> And here the failing output from ansible:
>>
>> TASK [openshift_node : Copy node container image to ostree storage]
>> 
>> Friday 25 January 2019  13:18:08 + (0:00:31.595)   0:03:12.069
>> 
>> FAILED - RETRYING: Copy node container image to ostree storage (3 retries
>> left).
>> FAILED - RETRYING: Copy node container image to ostree storage (3 retries
>> left).
>> FAILED - RETRYING: Copy node container image to ostree storage (3 retries
>> left).
>> FAILED - RETRYING: Copy node container image to ostree storage (2 retries
>> left).
>> FAILED - RETRYING: Copy node container image to ostree storage (2 retries
>> left).
>> FAILED - RETRYING: Copy node container image to ostree storage (2 retries
>> left).
>> FAILED - RETRYING: Copy node container image to ostree storage (1 retries
>> left).
>> fatal: [inst4.mydomain.org]: FAILED! => {"attempts": 3, "changed":
>> false, "cmd": ["atomic", "pull", "--storage=ostree", "docker:
>> docker.io/openshift/origin-node:v3.11"], "delta": "0:00:08.381259",
>> "end": "2019-01-25 14:19:01.462409", "msg": "non-zero return code", "rc":
>> 1, "start": "2019-01-25 14:18:53.081150", "stderr":
>> "time=\"2019-01-25T14:19:01+01:00\" level=fatal msg=\"Error initializing
>> source docker-daemon:openshift/origin-node:v3.11: Error loading image from
>> docker engine: Error response from daemon: write
>> /var/lib/docker/tmp/docker-export-196848125/d43adf9eb4cc67bbc8d08f6922ff9fdfcbb1830a8b586f24dfc4afa335a1c51b/layer.tar:
>> no space left on device\" ", "stderr_lines":
>> ["time=\"2019-01-25T14:19:01+01:00\" level=fatal msg=\"Error initializing
>> source docker-daemon:openshift/origin-node:v3.11: Error loading image from
>> docker engine: Error response from daemon: write
>> /var/lib/docker/tmp/docker-export-196848125/d43adf9eb4cc67bbc8d08f6922ff9fdfcbb1830a8b586f24dfc4afa335a1c51b/layer.tar:
>> no space left on device\" "], "stdout": "", "stdout_lines": []}
>>
>>
>>
>>
>> ‐‐‐ Original Message ‐‐‐
>> On Thursday, January 24, 2019 9:40 PM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
> >> oops, didn't notice that message. I guess you could simply mount overlay
> >> over /var/lib/containers

Re: atomic install wants to use /var/lib/containers instead of /var/lib/docker for storage

2019-01-25 Thread Mateus Caruccio
There should be a symlink /var/lib/docker pointing to
/var/lib/containers/docker.



--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


On Fri, Jan 25, 2019 at 11:43, mabi wrote:

> I changed the /etc/sysconfig/docker-storage-setup to use
> /var/lib/containers instead of /var/lib/docker in the
> CONTAINER_ROOT_LV_MOUNT_PATH variable and the OpenShift ansible
> prerequisites.yml playbook now worked fine.
>
> So now I went on to the next step of the installation, which is to run
> the OpenShift ansible deploy_cluster.yml playbook, and this one fails
> because it tries to pull a docker image and store it into /var/lib/docker,
> as you can see from the output below in this mail...
>
> Here is the command I am running:
>
> atomic install --system --storage=ostree --set INVENTORY_FILE=/root/hosts
> --set
> PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
> --set OPTS="-v" docker.io/openshift/origin-ansible:v3.11
>
> And here the failing output from ansible:
>
> TASK [openshift_node : Copy node container image to ostree storage]
> 
> Friday 25 January 2019  13:18:08 + (0:00:31.595)   0:03:12.069
> 
> FAILED - RETRYING: Copy node container image to ostree storage (3 retries
> left).
> FAILED - RETRYING: Copy node container image to ostree storage (3 retries
> left).
> FAILED - RETRYING: Copy node container image to ostree storage (3 retries
> left).
> FAILED - RETRYING: Copy node container image to ostree storage (2 retries
> left).
> FAILED - RETRYING: Copy node container image to ostree storage (2 retries
> left).
> FAILED - RETRYING: Copy node container image to ostree storage (2 retries
> left).
> FAILED - RETRYING: Copy node container image to ostree storage (1 retries
> left).
> fatal: [inst4.mydomain.org]: FAILED! => {"attempts": 3, "changed": false,
> "cmd": ["atomic", "pull", "--storage=ostree", "docker:
> docker.io/openshift/origin-node:v3.11"], "delta": "0:00:08.381259",
> "end": "2019-01-25 14:19:01.462409", "msg": "non-zero return code", "rc":
> 1, "start": "2019-01-25 14:18:53.081150", "stderr":
> "time=\"2019-01-25T14:19:01+01:00\" level=fatal msg=\"Error initializing
> source docker-daemon:openshift/origin-node:v3.11: Error loading image from
> docker engine: Error response from daemon: write
> /var/lib/docker/tmp/docker-export-196848125/d43adf9eb4cc67bbc8d08f6922ff9fdfcbb1830a8b586f24dfc4afa335a1c51b/layer.tar:
> no space left on device\" ", "stderr_lines":
> ["time=\"2019-01-25T14:19:01+01:00\" level=fatal msg=\"Error initializing
> source docker-daemon:openshift/origin-node:v3.11: Error loading image from
> docker engine: Error response from daemon: write
> /var/lib/docker/tmp/docker-export-196848125/d43adf9eb4cc67bbc8d08f6922ff9fdfcbb1830a8b586f24dfc4afa335a1c51b/layer.tar:
> no space left on device\" "], "stdout": "", "stdout_lines": []}
>
>
>
>
> ‐‐‐ Original Message ‐‐‐
> On Thursday, January 24, 2019 9:40 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
> oops, didn't notice that message. I guess you could simply mount overlay
> over /var/lib/containers
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
>
> On Thu, Jan 24, 2019 at 18:10, mabi wrote:
>
>> Thank you Mateus for the hint regarding the new storage dir.
>>
> >> However I am not sure if it will help with the installation because you
>> mention that I should mount my overlay2 LVM partition in
>> docker_alt_storage_path, which is actually /var/lib/containers/docker
>>
>> Now the "no space left on device" error message I got during the
>> installation was for the directory: /var/lib/containers/atomic/... The
>> /var/lib/containers/atomic directory is still located on the small 3 GB
>> root partition.
>>
>> Or am I missing something?
>>
>>
>>
>> ‐‐‐ Original Message ‐‐‐
>> On Thursday, January 24, 2019 7:44 PM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>> Since the introduction of cri-o, the docker storage dir was moved to
>> /var/lib/containers/docker and /var/lib/docker is a symlink to it. You can
>> see it in action in [1] and [2].
>>
> Make sure to mount your overlay2 partition at [3]`docker_alt_storage_path`

Re: atomic install wants to use /var/lib/containers instead of /var/lib/docker for storage

2019-01-24 Thread Mateus Caruccio
oops, didn't notice that message. I guess you could simply mount overlay
over /var/lib/containers

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


On Thu, Jan 24, 2019 at 18:10, mabi wrote:

> Thank you Mateus for the hint regarding the new storage dir.
>
> However I am not sure if it will help with the installation because you
> mention that I should mount my overlay2 LVM partition in
> docker_alt_storage_path, which is actually /var/lib/containers/docker
>
> Now the "no space left on device" error message I got during the
> installation was for the directory: /var/lib/containers/atomic/... The
> /var/lib/containers/atomic directory is still located on the small 3 GB
> root partition.
>
> Or am I missing something?
>
>
>
> ‐‐‐ Original Message ‐‐‐
> On Thursday, January 24, 2019 7:44 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
> Since the introduction of cri-o, the docker storage dir was moved to
> /var/lib/containers/docker and /var/lib/docker is a symlink to it. You can
> see it in action in [1] and [2].
>
> Make sure to mount your overlay2 partition at [3]`docker_alt_storage_path`
>
> [1]
> https://github.com/openshift/openshift-ansible/blob/master/roles/container_runtime/tasks/common/post.yml#L2
> [2]
> https://github.com/openshift/openshift-ansible/blob/master/roles/container_runtime/tasks/common/setup_docker_symlink.yml
> [3]
> https://github.com/openshift/openshift-ansible/blob/master/roles/container_runtime/defaults/main.yml#L47
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>
>
> On Thu, Jan 24, 2019 at 15:52, mabi wrote:
>
>> Hi,
>>
>> I am trying to install OKD version 3.11 on CentOS 7 Atomic Host using the
>> official documentation here:
>>
>>
>> https://docs.okd.io/3.11/install/running_install.html#running-the-advanced-installation-containerized
>>
>> So after running the following command:
>>
>> atomic install --system --storage=ostree --set INVENTORY_FILE=/root/hosts
>> --set
>> PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
>> --set OPTS="-v" docker.io/openshift/origin-ansible:v3.11
>>
>> I get the following output/error:
>>
>> Getting image source signatures
>> Copying blob
>> sha256:a02a4930cb5d36f3290eb84f4bfa30668ef2e9fe3a1fb73ec015fc58b9958b17
>>  71.68 MB / 71.68 MB
>> [==] 6s
>> Copying blob
>> sha256:16ffbd784f7768f3ebfc1697df136659c6c3481754263b6f852317c5f948b860
>>  9.57 MB / 9.57 MB
>> [] 0s
>> Copying blob
>> sha256:df4469e6f51747d98306dd4d60d23b0ef6386e0ae55eafc2473745ce0d12f6f5
>>  271 B / 271 B
>> [] 0s
>> Copying blob
>> sha256:36fd26a639ac48c560feb36254ec2f3de38e915b93d5f6bf7749abd675f7f0c7
>>  201.01 MB / 201.01 MB
>> [===] 18s
>> Copying config
>> sha256:184868402205ab5533ce71b76c1cc1edf9d9d98227c2de845f90b83a67f0a52c
>>  4.80 KB / 4.80 KB
>> [] 0s
>> Writing manifest to image destination
>> Storing signatures
>> FATA[0118] Error committing the finished image: mkdir
>> /var/lib/containers/atomic/.CjIXZk/docker.io_2Fopenshift_2Forigin-ansible_3Av3.11/36fd26a639ac48c560feb36254ec2f3de38e915b93d5f6bf7749abd675f7f0c7/root/usr/share/ansible/openshift-ansible/roles/nuage_node/vars:
>> no space left on device
>>
>> As you see it looks like the directory /var/lib/containers is used but it
>> should actually be using /var/lib/docker, that is where I have configured
>> my LVM thin layer volume for Docker storage.
>>
>> How can I change that directory? or should I configure my docker storage
>> to use /var/lib/containers instead?
>>
>> For setting up my docker storage using the Overlay2 driver I followed the
>> following guide:
>>
>>
>> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/managing_storage_with_docker_formatted_containers#using_the_overlay_graph_driver
>>
>> Thanks for any hints.
>>
>> Regards,
>> Mabi
>>


Re: atomic install wants to use /var/lib/containers instead of /var/lib/docker for storage

2019-01-24 Thread Mateus Caruccio
Since the introduction of cri-o, the docker storage dir was moved to
/var/lib/containers/docker and /var/lib/docker is a symlink to it. You can
see it in action in [1] and [2].

Make sure to mount your overlay2 partition at [3]`docker_alt_storage_path`

[1]
https://github.com/openshift/openshift-ansible/blob/master/roles/container_runtime/tasks/common/post.yml#L2
[2]
https://github.com/openshift/openshift-ansible/blob/master/roles/container_runtime/tasks/common/setup_docker_symlink.yml
[3]
https://github.com/openshift/openshift-ansible/blob/master/roles/container_runtime/defaults/main.yml#L47
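A sketch of what that could look like in an openshift-ansible inventory (the variable name comes from [3]; the value shown is its default, and the inventory file name is hypothetical):

```shell
# Append the override to an inventory file.
cat >> hosts.okd <<'EOF'

[OSEv3:vars]
# Where the playbooks expect docker's data once /var/lib/docker becomes a symlink:
docker_alt_storage_path=/var/lib/containers/docker
EOF
grep docker_alt_storage_path hosts.okd
```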

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


On Thu, Jan 24, 2019 at 15:52, mabi wrote:

> Hi,
>
> I am trying to install OKD version 3.11 on CentOS 7 Atomic Host using the
> official documentation here:
>
>
> https://docs.okd.io/3.11/install/running_install.html#running-the-advanced-installation-containerized
>
> So after running the following command:
>
> atomic install --system --storage=ostree --set INVENTORY_FILE=/root/hosts
> --set
> PLAYBOOK_FILE=/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
> --set OPTS="-v" docker.io/openshift/origin-ansible:v3.11
>
> I get the following output/error:
>
> Getting image source signatures
> Copying blob
> sha256:a02a4930cb5d36f3290eb84f4bfa30668ef2e9fe3a1fb73ec015fc58b9958b17
>  71.68 MB / 71.68 MB
> [==] 6s
> Copying blob
> sha256:16ffbd784f7768f3ebfc1697df136659c6c3481754263b6f852317c5f948b860
>  9.57 MB / 9.57 MB
> [] 0s
> Copying blob
> sha256:df4469e6f51747d98306dd4d60d23b0ef6386e0ae55eafc2473745ce0d12f6f5
>  271 B / 271 B
> [] 0s
> Copying blob
> sha256:36fd26a639ac48c560feb36254ec2f3de38e915b93d5f6bf7749abd675f7f0c7
>  201.01 MB / 201.01 MB
> [===] 18s
> Copying config
> sha256:184868402205ab5533ce71b76c1cc1edf9d9d98227c2de845f90b83a67f0a52c
>  4.80 KB / 4.80 KB
> [] 0s
> Writing manifest to image destination
> Storing signatures
> FATA[0118] Error committing the finished image: mkdir
> /var/lib/containers/atomic/.CjIXZk/docker.io_2Fopenshift_2Forigin-ansible_3Av3.11/36fd26a639ac48c560feb36254ec2f3de38e915b93d5f6bf7749abd675f7f0c7/root/usr/share/ansible/openshift-ansible/roles/nuage_node/vars:
> no space left on device
>
> As you see it looks like the directory /var/lib/containers is used but it
> should actually be using /var/lib/docker, that is where I have configured
> my LVM thin layer volume for Docker storage.
>
> How can I change that directory? or should I configure my docker storage
> to use /var/lib/containers instead?
>
> For setting up my docker storage using the Overlay2 driver I followed the
> following guide:
>
>
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/managing_storage_with_docker_formatted_containers#using_the_overlay_graph_driver
>
> Thanks for any hints.
>
> Regards,
> Mabi
>
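One way to reconcile the two directories, sketched as the change mabi later describes: point docker-storage-setup's root LV at /var/lib/containers. `CONTAINER_ROOT_LV_MOUNT_PATH` is a real container-storage-setup variable, but the VG and LV names here are assumptions, and the file is written locally as a mock-up:

```shell
# Local mock-up of /etc/sysconfig/docker-storage-setup with the root LV
# mounted where atomic/cri-o expect container storage.
cat > docker-storage-setup <<'EOF'
STORAGE_DRIVER=overlay2
VG=dockervg                                   # hypothetical volume group
CONTAINER_ROOT_LV_NAME=dockerlv
CONTAINER_ROOT_LV_SIZE=100%FREE
CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/containers
EOF
# On the host: copy this to /etc/sysconfig/docker-storage-setup and re-run
# container-storage-setup before starting docker.
```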


Re: problem with installation of Okd 3.11

2018-11-28 Thread Mateus Caruccio
I've faced the same issue on 3.9. The problem was that nodes were unable to
complete an SSL connection to the master in order to create that
`80-openshift-network.conf` file under `/etc/cni/net.d`. AFAIK, this is
origin-node's job.

Check for any error logs about being unable to connect to the master.
Raise the node log level to >=4 in /etc/sysconfig/origin-node
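A sketch of that tweak, shown against a local copy of the file (on a node the path is /etc/sysconfig/origin-node, and DEBUG_LOGLEVEL is the verbosity knob in OKD 3.x sysconfig files; the other lines are placeholders):

```shell
# Mock the node sysconfig file, then raise the verbosity to 4.
cat > origin-node <<'EOF'
OPTIONS=
DEBUG_LOGLEVEL=2
EOF
sed -i 's/^DEBUG_LOGLEVEL=.*/DEBUG_LOGLEVEL=4/' origin-node
grep DEBUG_LOGLEVEL origin-node
# Then: systemctl restart origin-node && journalctl -u origin-node -f
```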
--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


On Wed, Nov 28, 2018 at 03:13, Dharmit Shah wrote:

> On 27/11, Erekle Magradze wrote:
> > Hi,
> >
> > Can you please drop the link to github issue here?
>
> Sorry for missing that out in my original response. Here's the link to
> the issue: https://github.com/openshift/openshift-ansible/issues/10690
>
> Regards,
> Dharmit
>
> > On 11/27/18 3:02 PM, Dharmit Shah wrote:
> > > On 27/11, Erekle Magradze wrote:
> > > >   Nov 26 06:55:55 os-master origin-node: E1126 06:55:53.495294 5353 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
> > > >   Nov 26 06:55:58 os-master origin-node: W1126 06:55:58.496250 5353 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
> > > I faced similar error while upgrading from OKD 3.9 to 3.10. While
> trying
> > > to bring up the master node, openshift-ansible always failed and
> > > journalctl had similar logs. Looking around, I figured that this was
> due
> > > to missing `80-openshift-network.conf` file under `/etc/cni/net.d` on
> > > the master node. Other nodes had this file.
> > >
> > > So I copy pasted the file from a node to master and then `oc get nodes`
> > > showed me "Ready" instead of "NotReady" for the master. I then tried to
> > > update again from 3.9 to 3.10 but would fail with same error and same
> > > issue. I opened a GitHub issue [1] with all the details I could find
> but
> > > I haven't received any help yet.
> > >
> > > However, in my case issue was with 3.9 to 3.10 upgrade.
> > >
> > > Regards,
> > > Dharmit
> > >
>
> --
> Dharmit Shah
> Red Hat Developer Tools (https://developers.redhat.com/)
> irc, mattermost: dharmit
> https://dharmitshah.com
>


Re: Kustomize with OpenShift

2018-09-01 Thread Mateus Caruccio
Pretty cool project. Didn't know about kustomize. Thanks for sharing!

On Sat, Sep 1, 2018, 16:37, David Schweikert wrote:

> Hi,
>
> I tried out kustomize when it was announced a couple of months ago and I
> really liked the approach of having "overlays" of configuration, instead of
> the usual templating. Unfortunately, it didn't work so well with OpenShift,
> because route objects are not properly supported.
>
> Now I finally managed to get also routes working, and I wanted to share it
> here:
> https://github.com/adnovum/kustomize-openshift
>
> Maybe it is of interest to some of you.
>
> Cheers
> David


Re: ansible version issues

2018-04-26 Thread Mateus Caruccio
Hey Tracy.
Have you tried to use the images from
https://hub.docker.com/r/openshift/origin-ansible/ ?
Best,

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2018-04-26 0:15 GMT-03:00 Tracy Reed <tr...@ultraviolet.org>:

>
> Is there really no version of ansible currently suitable for running
> atomic-openshift-installer install?
>
> I keep plowing ahead trying to get openshift up and running and keep
> running into weird obstacles. Possibly entirely my fault but I'm keen to
> get this up and running!
>
> Most recently while running the openshift installer I ran into this:
>
> https://pastebin.com/35Ra7ZKH
>
> and googling turns up this:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1499627
>
> so I uninstall my latest version ansible which is apparently not
> compatible and install ansible-2.3.2.0-2.el7.noarch. I run the installer
> again and I get this:
>
> https://pastebin.com/qJDFGbVT
>
> for which google turns up this:
>
> https://github.com/ansible/ansible/issues/14991
>
> which says "you have an old version of the setup module and are trying
> to run it with a new version of the module_utils."
>
> So the first version of ansible I tried, the most recent in the rhel
> repo, is apparently too new. But the recommended version in that first
> bug report is too old. So maybe I now need to find the sweet spot
> version of ansible which works? The next version up is
> ansible-2.4.0.0-5.el7.noarch so I try that.
>
> The installer says:
>
> Gathering information from hosts...
> [DEPRECATION WARNING]: 'include' for playbook includes. You should use
> 'import_playbook' instead. This feature will be removed in version 2.8.
> Deprecation warnings can be disabled by setting
> deprecation_warnings=False in ansible.cfg.
> ERROR! Unexpected Exception, this is probably a bug: 'CallbackModule'
> object has no attribute 'set_options'
> There was a problem fetching the required information. Please see
> /tmp/ansible.log for details.
>
> and the log contains:
>
> https://pastebin.com/QB7JVEf3
>
> So the next version of ansible is 2.4.1.0-1.el7 so I do yum install
> 2.4.1.0-1.el7 and try running the installer again:
>
> atomic-openshift-installer install
>
> which gets me:
>
> Gathering information from hosts...
> [DEPRECATION WARNING]: 'include' for playbook includes. You should use
> 'import_playbook' instead. This feature will be removed in version 2.8.
> Deprecation warnings can be disabled by setting
> deprecation_warnings=False in ansible.cfg.
> [DEPRECATION WARNING]: default callback, does not support setting
> 'options', it will work for now,  but this will be required in the
> future and should be updated,  see the 2.4 porting guide for details..
> This feature will be removed in
> version 2.9. Deprecation warnings can be disabled by setting
> deprecation_warnings=False in ansible.cfg.
>  [WARNING]: Failure using method (v2_playbook_on_start) in callback
>  plugin (  at 0x7fa589c60fd0>): playbook_on_start() takes exactly 1 argument (2
>  given)
>   [WARNING]: Failure using method (v2_runner_on_ok) in callback plugin
>   (   0x7fa589c60fd0>): runner_on_ok() takes exactly 3 arguments (2 given)
>[WARNING]: Failure using method (v2_runner_on_failed) in callback
>plugin (object at 0x7fa589c60fd0>): runner_on_failed() takes at least 3
>arguments (3 given)
>There was a problem fetching the required information. Please see
>/tmp/ansible.log for details.
>
> https://pastebin.com/MuZXjA9g
>
> so I try the next version of ansible:
>
> yum install ansible-2.4.2.0-2.el7.noarch
>
> and run the installer again which results in:
>
> Gathering information from hosts...
> [DEPRECATION WARNING]: 'include' for playbook includes. You should use
> 'import_playbook' instead. This feature will be removed in version 2.8.
> Deprecation warnings can be disabled by setting
> deprecation_warnings=False in ansible.cfg.
> [DEPRECATION WARNING]: default callback, does not support setting
> 'options', it will work for now,  but this will be required in the
> future and should be updated,  see the 2.4 porting guide for details..
> This feature will be removed in
> version 2.9. Deprecation warnings can be disabled by setting
> deprecation_warnings=False in ansible.cfg.
> [WARNING]: Failure using method (v2_playbook_on_start) in callback
> plugin (<CallbackModule object at 0x7f2527601b10>): playbook_on_start()
> takes exactly 1 argument (2 given)
> [WARNING]: Failure using method (v2_runner_on_ok) in callback plugin
> (<CallbackModule object at 0x7f2527601b10>): runner_on_ok() takes
> exactly 3 arguments (2 given)
>[WARNING]: Failure using method 

Re: Openshift router certificate chain

2017-11-17 Thread Mateus Caruccio
What is the value of `ROUTER_CIPHERS`?

$ oc -n default env --list dc/router | grep ROUTER_CIPHERS

Maybe you need to set it to `old` in order to support sha1.
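For reference, a sketch of how that switch could be applied (the `oc` command form is per OpenShift 3.x routers and should be checked against your version; only the profile-defaulting check below actually runs locally):

```shell
# Switch the router to the "old" Mozilla cipher profile, the only built-in
# set that keeps SHA1/3DES-era compatibility (cluster command shown for
# reference only):
#   oc -n default set env dc/router ROUTER_CIPHERS=old
#
# The haproxy template falls back to "intermediate" when the variable is
# unset or empty; this mirrors that defaulting logic locally:
ROUTER_CIPHERS=""                                  # simulate a stock router
profile="${ROUTER_CIPHERS:-intermediate}"
case "$profile" in
  modern|intermediate|old) echo "built-in profile: $profile" ;;
  *) echo "custom cipher list: $profile" ;;
esac
```

After changing the variable, the router pod must be redeployed for the new haproxy config to take effect.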



--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-11-17 10:42 GMT-02:00 Marcello Lorenzi <cell...@gmail.com>:

> Hi Mateus,
> this is the output reported:
>
>
>   # Prevent vulnerability to POODLE attacks
>   ssl-default-bind-options no-sslv3
>
> # The default cipher suite can be selected from the three sets recommended
> by https://wiki.mozilla.org/Security/Server_Side_TLS,
> # or the user can provide one using the ROUTER_CIPHERS environment
> variable.
> # By default when a cipher set is not provided, intermediate is used.
> {{- if eq (env "ROUTER_CIPHERS" "intermediate") "modern" }}
>   # Modern cipher suite (no legacy browser support) from
> https://wiki.mozilla.org/Security/Server_Side_TLS
>   tune.ssl.default-dh-param 2048
>   ssl-default-bind-ciphers ECDHE-ECDSA-AES256-GCM-SHA384:
> ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:
> ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:
> ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:
> ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
> {{ else }}
>
>   {{- if eq (env "ROUTER_CIPHERS" "intermediate") "intermediate" }}
>   # Intermediate cipher suite (default) from https://wiki.mozilla.org/
> Security/Server_Side_TLS
>   tune.ssl.default-dh-param 2048
>   ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:
> ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:
> ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:
> ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-
> RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-
> AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-
> SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:
> ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-
> SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-
> AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-
> SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-
> SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS
>   {{ else }}
>
> {{- if eq (env "ROUTER_CIPHERS" "intermediate") "old" }}
>
>   # Old cipher suite (maximum compatibility but insecure) from
> https://wiki.mozilla.org/Security/Server_Side_TLS
>   tune.ssl.default-dh-param 1024
>   ssl-default-bind-ciphers ECDHE-ECDSA-CHACHA20-POLY1305:
> ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:
> ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:
> ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-
> DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-
> SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:
> ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-
> AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-
> SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-
> SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-
> AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:
> EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:
> AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-
> CBC3-SHA:HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!
> PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP
>
> {{- else }}
>   # user provided list of ciphers (Colon separated list as seen above)
>   # the env default is not used here since we can't get here with empty
> ROUTER_CIPHERS
>   tune.ssl.default-dh-param 2048
>   ssl-default-bind-ciphers {{env "ROUTER_CIPHERS" "ECDHE-ECDSA-CHACHA20-
> POLY1305"}}
> {{- end }}
>   {{- end }}
> {{- end }}
>
> defaults
>   maxconn {{env "ROUTER_MAX_CONNECTIONS" "2"}}
>
>   # Add x-forwarded-for header.
> {{- if ne (env "ROUTER_SYSLOG_ADDRESS" "") "" }}
>   {{- if ne (env "ROUTER_SYSLOG_FORMAT" "") "" }}
>
> Marcello
>
> On Fri, Nov 17, 2017 at 1:36 PM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> Hey Marcello.
>>
>> Correct me if I'm wrong, but you could look into haproxy's config and set
>> all ciphers you need:
>>
>> $ oc -n default rsh dc/router grep -C 10 ssl-default-bind-ciphers
>> haproxy-config.template
>>
>> With the env var `ROUTER_CIPHERS` you can choose one of the standard
>> profiles (modern|intermediate|old) or define your own list.
>>
>> Hope this helps.
>>
>> Mateus
>>
&

Re: Openshift router certificate chain

2017-11-17 Thread Mateus Caruccio
Hey Marcello.

Correct me if I'm wrong, but you could look into haproxy's config and set
all ciphers you need:

$ oc -n default rsh dc/router grep -C 10 ssl-default-bind-ciphers
haproxy-config.template

With the env var `ROUTER_CIPHERS` you can choose one of the standard
profiles (modern|intermediate|old) or define your own list.

Hope this helps.

Mateus


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-11-17 10:28 GMT-02:00 Marcello Lorenzi <cell...@gmail.com>:

> Hi All,
> we tried to configure a new route on Openshift Origin 3.6 to expose a pod
> where the SSL termination is enabled. We have a problem to configure a
> re-encrypt route because we noticed that the application is not present on
> the router and after some investigation we discovered that the problem is
> related to pod certificate chain. The chain is formed by:
>
> - root certificate sha1
> - intermediate certificate sha256
> - server certificate sha256
>
> We have updated the root certificate to sha256 and now everything works fine.
>
> Could you confirm if the Openshift router doesn't support the sha1
> certificate?
>
> Thanks,
> Marcello
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>


Re: DNS resolving problem - in pod

2017-10-19 Thread Mateus Caruccio
Alpine's musl libc only supports "search" starting from version 1.1.13.
Check if this is your case.
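The behaviour at stake is easy to picture: short names like `kubernetes` only resolve because the libc expands them with the `search` domains from `/etc/resolv.conf`, a step older musl versions skipped. A local sketch of that expansion (domain names are illustrative):

```shell
# A resolv.conf like the one the kubelet injects into pods:
cat > /tmp/resolv.conf.example <<'EOF'
search myproject.svc.cluster.local svc.cluster.local cluster.local
nameserver 172.30.0.1
EOF
# Names a "search"-aware libc (glibc, musl >= 1.1.13) would try, in order,
# for the short name "kubernetes":
awk '/^search/ {for (i = 2; i <= NF; i++) print "kubernetes." $i}' \
    /tmp/resolv.conf.example
```

A libc without `search` support tries only the literal name, which fails for cluster-internal short names — hence the intermittent resolution failures described above.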

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-10-19 10:58 GMT-02:00 Cameron Braid <came...@braid.com.au>:

> I had that happen quite a bit within containers based on alpine linux
>
> Cam
>
> On Thu, 19 Oct 2017 at 23:49 Łukasz Strzelec <lukasz.strze...@gmail.com>
> wrote:
>
>> Dear all :)
>>
>> I have following problem:
>>
>> [image: inline image 1]
>>
>>
>> Frequently I have to restart origin-node to solve this issue, but I can't
>> find the root cause of it.
>> Does anybody have any idea? Where to start looking?
>> In addition, this problem is affecting different cluster nodes -
>> randomly different pods have this issue.
>>
>>
>> Best regards
>> --
>> Ł.S.
>>
>
>
>


Re: oc observe quota - double update

2017-10-03 Thread Mateus Caruccio
Simply ignore status.* and compare old spec.hard.cpu against new
spec.hard.cpu.
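One way to sketch that comparison on a captured watch stream, using only POSIX tools (the sed-based JSON scraping is illustrative and fragile — real code should use jq or a JSON library):

```shell
# Three quota updates: the first two differ only in status.used,
# the third actually raises spec.hard.cpu.
cat > /tmp/quota-events.json <<'EOF'
{"metadata":{"name":"q"},"spec":{"hard":{"cpu":"4"}},"status":{"used":{"cpu":"1"}}}
{"metadata":{"name":"q"},"spec":{"hard":{"cpu":"4"}},"status":{"used":{"cpu":"2"}}}
{"metadata":{"name":"q"},"spec":{"hard":{"cpu":"8"}},"status":{"used":{"cpu":"2"}}}
EOF
# Project every event down to spec.hard.cpu, then collapse consecutive
# duplicates -- status-only updates disappear:
sed -n 's/.*"spec":{"hard":{"cpu":"\([^"]*\)".*/\1/p' /tmp/quota-events.json | uniq
```

This prints `4` then `8`: the update that changed only `status.used` produces no new line, so the downstream script fires once per real spec change.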

On Oct 3, 2017, 03:46, "Tobias Brunner" wrote:

> Thanks for your answer!
>
> On 02.10.2017 23:54, Clayton Coleman wrote:
> > You'd need to filter that out in your script - this is just how the
> > changes get passed down.
>
> Yeah, I had this idea too, but I think that's gonna be tricky as I
> probably would have to compare older states and so forth.
>
> > Generally you don't want to trigger based on changes - but instead
> trigger on "state"
>
> Could you explain this a bit more? My goal is to record changes to quota
> objects in a database and in my impression is that the update trigger is
> exactly what I want. What do you mean with "trigger on state"?
>
> Best,
> Tobias
>
>
>
>


Re: Metrics not accessible

2017-09-18 Thread Mateus Caruccio
In fact there is a var openshift_hosted_metrics_deployer_version=3.6.0 [1],
but it looks like the default value "3.6.0" is not being enforced.

[1]
https://github.com/openshift/openshift-ansible/blob/release-3.6/inventory/byo/hosts.origin.example#L561
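For reference, a hedged inventory fragment pinning the metrics images (variable names from the linked 3.6 example inventory; whether the value needs the leading "v" should be checked against the published image tags, since the thread's working fix used the v3.6.0 tag):

```ini
[OSEv3:vars]
openshift_release=v3.6
# Pin the deployer version so ansible does not fall back to :latest images
openshift_hosted_metrics_deployer_version=v3.6.0
```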



On Sep 18, 2017, 11:04, "Tim Dudgeon" <tdudgeon...@gmail.com> wrote:

> That worked for me too.
> I used web console to look at the hawkular-metrics replication controller
> definition, edited its yaml and changed the image from
> docker.io/openshift/origin-metrics-hawkular-metrics:latest to
> docker.io/openshift/origin-metrics-hawkular-metrics:v3.6.0 and then it
> ran OK.
>
> So what is the best way to deal with this?
>
> My ansible inventory is specifying:
> openshift_release=v3.6
> though the openshift/openshift-ansible repo is checked out from master.
> Does not the openshift_release variable tell the installer to use images
> tagged with v3.6?
>
> Is there a way to specify the right images to use to the ansible installer.
>
> And, presumably this is a bug that should be reported to the issues for
> openshift/openshift-ansible? Happy to do so if someone can confirm.
>
> Tim
>
>
> On 14/09/2017 11:29, Daniel Kučera wrote:
>
>> Thank you! That helped. I was running latest from ansible installation.
>>
>> Changing to v3.6.0 helped and now it runs ok.
>>
>> 2017-09-14 12:11 GMT+02:00 Mateus Caruccio <mateus.caruc...@getupcloud.com>:
>>
>>> Check if you are running the latest version of the images. If that is
>>> your
>>> case change it to v3.6.0 for cassandra, metrics and heapster and restart
>>> all
>>> of them as stated before.
>>>
>>> Ansible always installs the :latest tag of the images by default.
>>>
>>> On Sep 14, 2017, 07:05, "Daniel Kučera" <daniel.kuc...@gmail.com> wrote:
>>>
>>>> I'm getting the same error, is there any workaround?
>>>>
>>>>
>>>> 2017-09-14 09:50:00,572 SEVERE
>>>> [com.google.common.util.concurrent.ExecutionList]
>>>> (cluster3-nio-worker-7) RuntimeException while executing runnable
>>>> rx.observable.ListenableFutureObservable$2$1@54d5c01c with executor
>>>>
>>>> com.google.common.util.concurrent.MoreExecutors$ListeningDec
>>>> orator@29a43cd3:
>>>> java.util.concurrent.RejectedExecutionException: Task
>>>> rx.observable.ListenableFutureObservable$2$1@54d5c01c rejected from
>>>> java.util.concurrent.ThreadPoolExecutor@2d127c2f[Terminated, pool size
>>>> = 0, active threads = 0, queued tasks = 0, completed tasks = 78]
>>>>
>>>> --
>>>>
>>>> S pozdravom / Best regards
>>>> Daniel Kucera.
>>>>
>>>>
>>>
>>
>>
>


Re: Problem about logging in openshift origin

2017-09-15 Thread Mateus Caruccio
You can look in two places for clues: the pod's log itself (oc -n
logging logs -f logging-es-data-master-lf6al5rb-5) and the project
events (oc -n logging get events).

On Sep 15, 2017, 07:10, "Yu Wei" wrote:

> Hi,
>
> I setup OpenShift origin 3.6 cluster successfully and enabled metrics and
> logging.
>
> Metrics worked well and logging didn't worked.
>
> Pod *logging-es-data-master-lf6al5rb-5-deploy* in logging frequently
> crashed with below logs,
>
> *--> Scaling logging-es-data-master-lf6al5rb-5 to 1 *
> *--> Waiting up to 10m0s for pods in rc logging-es-data-master-lf6al5rb-5
> to become ready *
> *error: update acceptor rejected logging-es-data-master-lf6al5rb-5: pods
> for rc "logging-es-data-master-lf6al5rb-5" took longer than 600 seconds to
> become ready*
>
> I didn't find other information. How could I debug such problem?
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
>
>


Re: OpenShift Origin 3.6 + Ceph persistent storage problems with secret

2017-09-15 Thread Mateus Caruccio
Hey Piotr, I believe you'd have a better chance asking on
d...@lists.openshift.redhat.com (CCed)

Cheers,
Mateus


On Sep 15, 2017, 05:08, "Piotr Baranowski" wrote:

*bump*

Anyone?

--

*From: *"Piotr Baranowski" 
*To: *"users" 
*Sent: *Thursday, August 31, 2017 20:22:19
*Subject: *OpenShift Origin 3.6 + Ceph persistent storage problems with secret

Hey group,

I'm struggling a little trying to integrate Origin 3.6 with ceph.
First there are several docs that are not actually in sync and send
contradicting messages.

docs.openshift.org
access.redhat.com

They have slighly different examples on how to set up that integration.

I have an issue:
Creating the pvc automatically creates a PV.
I see that it was successful:

date master1.foo.bar origin-master-controllers[26578]: I0831
19:58:45.018017   26578 rbd.go:324] successfully created rbd image
"kubernetes-dynamic-pvc-07ef3830-8e76-11e7-80e4-5254000e374d"

I see that that rbd image was created:

[root@ceph1 ~]# rbd --pool=kube ls
kubernetes-dynamic-pvc-07ef3830-8e76-11e7-80e4-5254000e374d

[root@ceph1 ~]# rbd --pool=kube info kubernetes-dynamic-pvc-
07ef3830-8e76-11e7-80e4-5254000e374d
rbd image 'kubernetes-dynamic-pvc-07ef3830-8e76-11e7-80e4-5254000e374d':
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rb.0.3e0e6.2ae8944a
format: 1

I can create such storage from default project as well as from any other
project i want.

However when i try to use it i end up with Creating Container  and state
Pending.
date node1.foo.bar origin-node[36836]: E0831 20:16:22.995258   36836
rbd.go:459] failed to get secret from ["foo"/"ceph-secret-user"]
date node1.foo.bar origin-node[36836]: E0831 20:16:22.995296   36836
rbd.go:111] Couldn't get secret from foo/{
Name:ceph-secret-user,}
date node1.foo.bar origin-node[36836]: E0831 20:16:22.995338   36836
reconciler.go:308] operationExecutor.MountVolume failed for volume "
kubernetes.io/rbd/18573675-8e77-11e7-8a05-5254000e374d-
pvc-07e8d3a0-8e76-11e7-94f2-5254008efc4e" (spec.Name:
"pvc-07e8d3a0-8e76-11e7-94f2-5254008efc4e") pod
"18573675-8e77-11e7-8a05-5254000e374d"
(UID: "18573675-8e77-11e7-8a05-5254000e374d")
controllerAttachDetachEnabled: true with err: MountVolume.NewMounter failed
for volume "kubernetes.io/rbd/18573675-8e77-11e7-8a05-5254000e374d-
pvc-07e8d3a0-8e76-11e7-94f2-5254008efc4e" (spec.Name:
"pvc-07e8d3a0-8e76-11e7-94f2-5254008efc4e") pod
"18573675-8e77-11e7-8a05-5254000e374d"
(UID: "18573675-8e77-11e7-8a05-5254000e374d") with: failed to get secret
from ["foo"/"ceph-secret-user"]

(the message is for another attempt so pvc-id does not match but that does
not matter. Ther error message is pretty much the same for all attempts)

Any idea what's wrong?

br

-- 
Piotr Baranowski



-- 
Piotr Baranowski
CTO/VP/Chief Instructor@OSEC  mob://0048504242337
Why do IT people confuse Halloween with Christmas?
Because 31 OCT == 25 DEC



Re: Metrics not accessible

2017-09-14 Thread Mateus Caruccio
Check if you are running the latest version of the images. If that is your
case change it to v3.6.0 for cassandra, metrics and heapster and restart
all of them as stated before.

Ansible always installs the :latest tag of the images by default.

On Sep 14, 2017, 07:05, "Daniel Kučera" wrote:

> I'm getting the same error, is there any workaround?
>
>
> 2017-09-14 09:50:00,572 SEVERE
> [com.google.common.util.concurrent.ExecutionList]
> (cluster3-nio-worker-7) RuntimeException while executing runnable
> rx.observable.ListenableFutureObservable$2$1@54d5c01c with executor
> com.google.common.util.concurrent.MoreExecutors$
> ListeningDecorator@29a43cd3:
> java.util.concurrent.RejectedExecutionException: Task
> rx.observable.ListenableFutureObservable$2$1@54d5c01c rejected from
> java.util.concurrent.ThreadPoolExecutor@2d127c2f[Terminated, pool size
> = 0, active threads = 0, queued tasks = 0, completed tasks = 78]
>
> --
>
> S pozdravom / Best regards
> Daniel Kucera.
>
>


Re: Ansible - permission denied gathering facts

2017-09-12 Thread Mateus Caruccio
Just figured out I'd created the directory /etc/ansible/facts.d/openshift.fact
when that should have been the path of the fact file itself.
Took me only 24h!

Anyway, I believe the error message is misleading:

$ mkdir mydir && echo > mydir
bash: mydir: Is a directory

That is the message I'd expect when a file is being created over a
directory with the same name.


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-09-12 15:47 GMT-03:00 Mateus Caruccio <mateus.caruc...@getupcloud.com>:

> Hello.
>
> I'm running openshift-ansible and hit on a permission denied error during
> fact gathering of playbook "Evaluate node groups"
>
> It doesn't matter if it runs as regular (getup) or root user.
> Both users have ssh {getup,root}@localhost access using private keys and
> sudo without a password.
> I even tried to chmod -R 777 /etc/ansible in order to run as a regular user,
> but no success.
>
> Any directions? Am I missing something?
>
> This is the latest and only error:
>
> *$ ansible-playbook -i ./hosts getup.yaml*
> PLAY [Evaluate node groups] **
> 
> **
>
> TASK [Gathering Facts] **
> 
> ***
> Using module file /usr/lib/python2.7/site-packages/ansible/modules/
> system/setup.py
> <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: getup
> <127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
> fatal: [localhost]: FAILED! => {
> "changed": false,
> "cmd": "/etc/ansible/facts.d/openshift.fact",
> "failed": true,
> "invocation": {
> "module_args": {
> "fact_path": "/etc/ansible/facts.d",
> "filter": "*",
> "gather_subset": [
> "all"
> ],
> "gather_timeout": 10
> }
> },
> "msg": "[Errno 13] Permission denied",
> "rc": 13
> }
>
> *$ cat getup.yaml*
> ---
> - include: /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
>   vars:
> openshift_node_local_quota_per_fsgroup: 1Gi
> deployment_type: origin
> containerized: false
>
> $ ansible --version
> ansible 2.3.2.0
>   config file = /home/getup/getup-engine/ansible/ansible.cfg
>   configured module search path = Default w/o overrides
>   python version = 2.7.5 (default, Nov  6 2016, 00:28:07) [GCC 4.8.5
> 20150623 (Red Hat 4.8.5-11)]
>
> *$ cat /home/getup/getup-engine/ansible/ansible.cfg*
> # config file for ansible -- http://ansible.com/
> # ==
> [defaults]
> #callback_plugins = ../openshift-ansible/ansible-profile/callback_plugins
> forks = 50
> host_key_checking = False
> #hostfile = ~centos/hosts
> roles_path = /usr/share/ansible/openshift-ansible/roles:/opt/ansible/
> roles:./roles:../../roles:
> remote_user = getup
> gathering = smart
> retry_files_enabled = false
> nocows = true
> #lookup_plugins = ./playbooks/lookup_plugins
> #log_path = /tmp/ansible.log
>
> [privilege_escalation]
> become = True
>
> [ssh_connection]
> ssh_args = -o RequestTTY=yes -o ControlMaster=auto -o ControlPersist=900s
> -o GSSAPIAuthentication=no
> control_path = /var/tmp/%%h-%%r
> pipelining = True
>
>
> *$ cd /usr/share/ansible/openshift-ansible*
> *$ git branch -v*
>   master  4acdef4 Merge pull request #5340 from sdodson/bz1489913
> * release-3.6 d53c565 Automatic commit of package [openshift-ansible]
> release [3.6.173.0.32-1].
>
>
> --
> Mateus Caruccio / Master of Puppets
> GetupCloud.com
> We make the infrastructure invisible
> Gartner Cool Vendor 2017
>


Ansible - permission denied gathering facts

2017-09-12 Thread Mateus Caruccio
Hello.

I'm running openshift-ansible and hit on a permission denied error during
fact gathering of playbook "Evaluate node groups"

It doesn't matter if it runs as regular (getup) or root user.
Both users have ssh {getup,root}@localhost access using private keys and
sudo without a password.
I even tried to chmod -R 777 /etc/ansible in order to run as a regular user,
but no success.

Any directions? Am I missing something?

This is the latest and only error:

*$ ansible-playbook -i ./hosts getup.yaml*
PLAY [Evaluate node groups]


TASK [Gathering Facts]
*
Using module file
/usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: getup
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 && sleep 0'
fatal: [localhost]: FAILED! => {
"changed": false,
"cmd": "/etc/ansible/facts.d/openshift.fact",
"failed": true,
"invocation": {
"module_args": {
"fact_path": "/etc/ansible/facts.d",
"filter": "*",
"gather_subset": [
"all"
],
"gather_timeout": 10
}
},
"msg": "[Errno 13] Permission denied",
"rc": 13
}

*$ cat getup.yaml*
---
- include: /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
  vars:
openshift_node_local_quota_per_fsgroup: 1Gi
deployment_type: origin
containerized: false

$ ansible --version
ansible 2.3.2.0
  config file = /home/getup/getup-engine/ansible/ansible.cfg
  configured module search path = Default w/o overrides
  python version = 2.7.5 (default, Nov  6 2016, 00:28:07) [GCC 4.8.5
20150623 (Red Hat 4.8.5-11)]

*$ cat /home/getup/getup-engine/ansible/ansible.cfg*
# config file for ansible -- http://ansible.com/
# ==
[defaults]
#callback_plugins = ../openshift-ansible/ansible-profile/callback_plugins
forks = 50
host_key_checking = False
#hostfile = ~centos/hosts
roles_path =
/usr/share/ansible/openshift-ansible/roles:/opt/ansible/roles:./roles:../../roles:
remote_user = getup
gathering = smart
retry_files_enabled = false
nocows = true
#lookup_plugins = ./playbooks/lookup_plugins
#log_path = /tmp/ansible.log

[privilege_escalation]
become = True

[ssh_connection]
ssh_args = -o RequestTTY=yes -o ControlMaster=auto -o ControlPersist=900s
-o GSSAPIAuthentication=no
control_path = /var/tmp/%%h-%%r
pipelining = True


*$ cd /usr/share/ansible/openshift-ansible*
*$ git branch -v*
  master  4acdef4 Merge pull request #5340 from sdodson/bz1489913
* release-3.6 d53c565 Automatic commit of package [openshift-ansible]
release [3.6.173.0.32-1].


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


Re: Let's Encrypt certificates

2017-09-05 Thread Mateus Caruccio
Hey.
How ready is it for use in production? Are there any plans to change
interfaces/mechanisms in the near future?

Thanks, great job! ;)

On Sep 5, 2017, 2:23 PM, "Tim Dudgeon" wrote:

> Tomas
>
> Thanks, that helped.
>
> The problem was that it wasn't clear that you needed to install into a new
> project, and then update the
>
> oc adm policy add-cluster-role-to-user acme-controller
> system:serviceaccount:acme:default
>
> command and replace acme with the name of the project. Once done it
> installs fine and issues certificates as described.
>
> Thanks
> Tim
>
>
> On 05/09/2017 17:38, Tomas Nozicka wrote:
>
>> Hi Tim,
>>
>> (see inline...)
>>
>> On Tue, 2017-09-05 at 17:12 +0100, Tim Dudgeon wrote:
>>
>>> Thanks.
>>>
>>> I'm having problems getting this running.
>>> When I deploy the deploymentconfig the pod fails to start and the
>>> logs
>>> contain these errors:
>>>
>>> 2017-09-05T16:03:11.764025351Z  ERROR cmd.go:138 Unable to
 bootstrap
 certificate database: 'User

>>> and
>>>
 2017-09-05T16:03:11.766213869Z  ERROR cmd.go:173 Couln't
 initialize
 RouteController: 'RouteController could not find its own service:
 'User "system:serviceaccount:acme-controller:default" cannot get
 services in project "acme-controller"''

>>> misconfigured SA is system:serviceaccount:acme-controller:default
>> - notably the namespace is **acme-controller**
>>
>> I already deployed the clusterrole and executed
>>>
>>> oc adm policy add-cluster-role-to-user acme-controller
 system:serviceaccount:acme:default

>>> Even tried as suggested:
>>>
>>> oc adm policy add-cluster-role-to-user cluster-admin
 system:serviceaccount:acme:default

>>> You are modifying SA in namespace **acme** not **acme-controller**
>>
>> I tried this in the default project and in a new acme-controller
>>> project.
>>>
>>> Could you help describe steps to get this running in a new openshift
>>> environment?
>>>
>> Try looking at the exact steps our CI is using to create it from
>> scratch but it should work as described in our docs.
>>
>>https://github.com/tnozicka/openshift-acme/blob/master/.travis.yml#L6
>> 7-L73
>>
>> Thanks
>>> Tim
>>>
>>>
>>>
>>> On 04/09/2017 09:44, Tomas Nozicka wrote:
>>>
 Hi Tim,

 On Mon, 2017-09-04 at 09:16 +0100, Tim Dudgeon wrote:

> Tomas
>
> Thanks for that. Looks very interesting.
>
> I've looked it over and not totally sure how to use this.
>
> Am I right that if this controller is deployed and running
> correctly
> then all you need to do for any routes is add the
> 'kubernetes.io/tls-acme: "true"' annotation to your route  and
> the
> controller will handle creating the initial certificate and
> renewing
> it
> as needed?
>
 Correct.

 And in doing so it will generate/renew certificate for the
> hostname,
> add/update it as a secret, and update the route definition to use
> that
> certificate?
>
 For Routes it will generate a secret with that certificate and also
 inline it into the Route as it doesn't support referencing it.
 (Ingresses do, but the project doesn't support those yet.) The
 secret
 can be useful for checking or mounting it into pods directly if you
 don't want to terminate your TLS in the router but in pods.

 And that this will only apply to external routes. Some mechanism,
> such
> as the Ansible playbook, will still be required to maintain the
> certificates that are used internally by the Openshift
> infrastructure?
>
 I have some thoughts on this but no code :/

 As I said at this point you need to bootstrap the infra using your
 own
 CA/self-signed cert and then you can expose the OpenShift API + web
 console using a Route. This should work fine even for 'oc' client
 unless the Router is down and you need to fix it. For that rare
 case,
 when only the admin will need to log in to fix the router he can
 use
 the internal cert or ssh into the cluster directly.

 So this hack should cover all the use cases for users except this
 special case for an admin.

 Thanks
> Tim
>
> On 25/08/2017 17:09, Tomas Nozicka wrote:
>
>> Hi Tim,
>>
>> there is a controller to take care about generating and
>> renewing
>> Let's
>> Encrypt certificates for you.
>>
>> https://github.com/tnozicka/openshift-acme
>>
>> That said it won't generate it for masters but you can expose
>> master
>> API using Route and certificate for that Route would be fully
>> managed
>> by openshift-acme.
>>
>> Further integrations might be possible in future but this is
>> how
>> you
>> can get it done now.
>>
>> Regards,
>> Tomas
>>
>>
>> On Fri, 2017-08-25 at 16:27 +0100, Tim Dudgeon wrote:
>>
>>> Does 

Re: oc -w timeout

2017-09-05 Thread Mateus Caruccio
Thanks a lot! It would have taken me forever to realize the masters are
behind an ELB ;)

Best

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017

2017-09-05 9:44 GMT-03:00 Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com>:

> Hi,
>
> You might want to take a look at this thread: https://lists.
> openshift.redhat.com/openshift-archives/users/2017-June/msg00135.html
> ​
> Cheers
>


oc -w timeout

2017-09-05 Thread Mateus Caruccio
Hi there.
Where is the config located to change the timeout of watch operations? I'm
getting disconnected after 5 minutes and would like to increase this value.

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible
Gartner Cool Vendor 2017


Re: [EXTERNAL] Re: garbage collection docker metadata

2017-06-09 Thread Mateus Caruccio
I do basically the same in a node cronjob: docker rmi $(docker images -q)
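A gentler variant of such a cron job removes only dangling (untagged) images, so tagged images still referenced by running containers survive. Commands are shown for reference only; `docker image prune` exists from Docker 1.13 onward, and the crontab line is illustrative:

```shell
# Pre-1.13 engines: remove only untagged layers.
# docker rmi $(docker images -f dangling=true -q) 2>/dev/null
#
# Docker >= 1.13 equivalent:
# docker image prune -f
#
# Illustrative nightly crontab entry:
# 0 3 * * * root /usr/bin/docker image prune -f >> /var/log/docker-prune.log 2>&1
```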

--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible

2017-06-09 9:30 GMT-03:00 Gary Franczyk <gary.franc...@availity.com>:

> I regularly run an app named “docker-gc” to clean up unused images and
> containers.
>
>
>
> https://github.com/spotify/docker-gc
>
>
>
>
>
> *Gary Franczyk*
>
> Senior Unix Administrator, Infrastructure
>
>
>
> Availity | 10752 Deerwood Park Blvd S. Ste 110, Jacksonville FL 32256
> W 904.470.4953 <(904)%20470-4953> | M 561.313.2866 <(561)%20313-2866>
>
> *gary.franc...@availity.com <gary.franc...@availity.com>*
>
>
>
> *From: *<users-boun...@lists.openshift.redhat.com> on behalf of Andrew
> Lau <and...@andrewklau.com>
> *Date: *Friday, June 9, 2017 at 8:27 AM
> *To: *Fernando Lozano <floz...@redhat.com>
> *Cc: *"users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com
> >
> *Subject: *[EXTERNAL] Re: garbage collection docker metadata
>
>
>
> WARNING: This email originated outside of the Availity email system.
> DO NOT CLICK links or open attachments unless you recognize the sender and
> know the content is safe.
> --
>
> The error was from a different node.
>
>
>
> `docker info` reports plenty of data storage free. Manually removing
> images from the node has always fixed the metadata storage issue, hence why
> I was asking if garbage collection did take into account metadata or only
> data storage.
>
>
>
> On Fri, 9 Jun 2017 at 22:11 Fernando Lozano <floz...@redhat.com> wrote:
>
> If the Docker GC complains images are in use and you get out of disk space
> errors, I'd assume you need more space for docker storage.
>
>
>
> On Fri, Jun 9, 2017 at 8:37 AM, Andrew Lau <and...@andrewklau.com> wrote:
>
>
>
> On Fri, 9 Jun 2017 at 21:10 Aleksandar Lazic <al...@me2digital.eu> wrote:
>
> Hi Andrew Lau.
>
> on Freitag, 09. Juni 2017 at 12:35 was written:
>
> Does garbage collection get triggered when the docker metadata storage is
> full? Every few days I see some nodes fail to create new containers due to
> the docker metadata storage being full. Docker data storage has plenty of
> capacity.
>
> I've been cleaning out the images manually as the garbage collection
> doesn't seem to trigger.
>
>
>
> Have you tried changing the default settings?
>
> https://docs.openshift.org/latest/admin_guide/garbage_collection.html#image-garbage-collection
>
> How was the lvm thinpool created?
> https://docs.openshift.org/latest/install_config/install/host_preparation.html#configuring-docker-storage
>
> docker-storage-setup normally allocates 0.1% of the pool for metadata, as
> described in this line:
> https://github.com/projectatomic/container-storage-setup/blob/master/container-storage-setup.sh#L380
>
>
>
>
>
> Garbage collection thresholds are set to 80 (high) and 70 (low).
>
>
>
> Garbage collection is working; I see it complain about images in use on
> other nodes:
>
>
> ImageGCFailed wanted to free 3289487769, but freed 3466304680 space with
> errors in image deletion: [Error response from daemon: {"message":"conflict:
> unable to delete 96f1d6e26029 (cannot be forced) - image is being used by
> running container 3ceb5410db59"}, Error response from daemon:
> {"message":"conflict: unable to delete 4e390ce4fc8b (cannot be forced) -
> image is being used by running container 0040546d8f73"}, Error response
> from daemon: {"message":"conflict: unable to delete 60b78ced07a8 (cannot be
> forced) - image has dep

Re: Backup of databases on OpenShift

2017-06-08 Thread Mateus Caruccio
Hi Jens,

We are using a crontab-based pod to trigger backups using rsh + upload to
object storage (s3 or azure blob).
Here is the repo: https://github.com/getupcloud/backup

It will be updated to use CronJob/ScheduledJob as soon as I get some time to
work on it.
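Until then, a CronJob sketch of the same idea; the API version, image name, DB host and bucket below are all placeholders to adjust for your cluster:

```yaml
apiVersion: batch/v1beta1        # CronJob API group varies by Kubernetes version
kind: CronJob
metadata:
  name: pg-backup
spec:
  schedule: "0 3 * * *"          # nightly at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: backup
            image: example/backup-tools:latest   # assumed: has pg_dump + aws cli
            command: ["/bin/sh", "-c"]
            args:
            - pg_dump -h postgresql -U "$PGUSER" mydb | gzip |
              aws s3 cp - "s3://my-backups/mydb-$(date +%F).sql.gz"
```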

Best regards,


--
Mateus Caruccio / Master of Puppets
GetupCloud.com
We make the infrastructure invisible

2017-06-08 17:05 GMT-03:00 Aleksandar Lazic <al...@me2digital.eu>:

> Hi Jens.
>
> on Donnerstag, 08. Juni 2017 at 16:46 was written:
>
>
> Hi,
>
> We recently set up an OpenShift Enterprise cloud and we're wondering what
> the best practices are for backing up databases running in an OpenShift
> cloud. I will focus on PostgreSQL here, but the same goes for MongoDB,
> MariaDB...
>
> - Should we rely on backups of the persistent volumes (we're using NFS)?
> This would mean assuming the on-disk state is always recoverable. Which it
> *should* be, but it does feel like a hack...
> - Should we have an admin-level oc script that filters out all running
> database containers and does some 'oc exec pg_dump ... > backup.sql' magic
> on them?
> - Should we provide some simple templates to our users that contain
> nothing but a cron script that calls pg_dump?
> ...
>
> Please share your solutions?
> I like this one.
>
> oc rsh  mysqldump/pg_dump/... > backup_file
>
> Some users use filesystem backup, as you have mentioned.
>
> I have seen somewhere a concept with a sidecar container, but I can't
> find it now.
>
> What I have seen in the past is that the backup is not the problem; the
> restore is the difficult part.
> I once needed to restore a db (postgresql) and it was not easy and not
> automatic!
>
>
>
> Kind Regards,
>
>
> Jens
>
>
>
>
> *-- Best Regards Aleksandar Lazic - ME2Digital e. U. *
> https://me2digital.online/
>
>
>


Re: In OpenShift Ansible, what is the differences between roles/openshift_hosted_metrics and roles/openshift_metrics ?

2017-04-28 Thread Mateus Caruccio
I guess openshift_metrics is a refactor of openshift_hosted_metrics. Am I
right?

On 28/04/2017 13:51, "Alex Wauck"  wrote:

> I think Stéphane meant to link to this: https://github.com/openshift/
> openshift-ansible/tree/master/roles/openshift_hosted_metrics
>
> What's the difference between that one and openshift_metrics?
>
> On Fri, Apr 28, 2017 at 11:46 AM, Tim Bielawa  wrote:
>
>> I believe that openshift-hosted-logging installs kibana (logging
>> exploration) whereas openshift-metrics will install hawkular (a metric
>> storage engine).
>>
>> On Fri, Apr 28, 2017 at 9:25 AM, Stéphane Klein <
>> cont...@stephane-klein.info> wrote:
>>
>>> Hi,
>>>
>>> what are the differences between:
>>>
>>> * roles/openshift_hosted_metrics (https://github.com/openshift/
>>> openshift-ansible/tree/master/roles/openshift_hosted_logging)
>>> * and roles/openshift_metrics (https://github.com/openshift/
>>> openshift-ansible/tree/master/roles/openshift_metrics)
>>>
>>> ?
>>>
>>> Best regards,
>>> Stéphane
>>> --
>>> Stéphane Klein 
>>> blog: http://stephane-klein.info
>>> cv : http://cv.stephane-klein.info
>>> Twitter: http://twitter.com/klein_stephane
>>>
>>>
>>>
>>
>>
>> --
>> Tim Bielawa, Software Engineer [ED-C137]
>> Cell: 919.332.6411 <(919)%20332-6411>  | IRC: tbielawa (#openshift)
>> 1BA0 4FAB 4C13 FBA0 A036  4958 AD05 E75E 0333 AE37
>>
>>
>>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com *
>
> Making Machines More Human.
>
>
>
>


Re: Liveness probe frequency

2016-12-07 Thread Mateus Caruccio
Hi.

Yes, you can set periodSeconds of the probe. More info in the docs at
https://docs.openshift.org/latest/rest_api/openshift_v1.html#v1-probe
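For reference, a sketch of where those fields live in a container spec (values illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15   # grace period before the first probe
  periodSeconds: 30         # probe frequency, the field asked about
  timeoutSeconds: 1
readinessProbe:
  tcpSocket:
    port: 8080
  periodSeconds: 10
```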

--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Wed, Dec 7, 2016 at 8:12 AM, Sobkowiak Krzysztof <
krzys.sobkow...@gmail.com> wrote:

> Hi
>
> Is it possible to set the frequency the liveness/readiness probes are
> performed? If yes, how can I do it in OpenShift?
>
> Kindly regards
> Krzysztof
>
>
> --
> Krzysztof Sobkowiak
>
> JEE & OSS Architect, Integration Architect
> Apache Software Foundation Member (http://apache.org/)
> Apache ServiceMix Committer & PMC Member (http://servicemix.apache.org/)
> Senior Solution Architect @ Capgemini SSC (http://www.capgeminisoftware.
> pl/)
>
>
>


Re: Git integration via HTTPS

2016-11-21 Thread Mateus Caruccio
Yep. Works fine, just create a secret and use it in your buildconfig:

$ oc export secrets basic-secret -o yaml
apiVersion: v1
kind: Secret
type: Opaque
data:
  password: bXlwYXNzd3JkCg==
  username: bXl1c2VyCg==
metadata:
  


$ oc export bc/app
  ...
  source:
type: Git
contextDir: /
git:
  ref: stable
  uri: g...@github.com:caruccio/private-project.git
sourceSecret:
  name: basic-secret
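One detail worth noting about the data fields above: they are base64, and the trailing `Cg==` shows a newline got encoded into the credentials (e.g. from `echo` without `-n`), which can break authentication. A quick plain-Python illustration:

```python
import base64

# Encode credentials for a secret's "data" fields; an echo-style trailing
# newline changes the encoded value.
print(base64.b64encode(b"myuser").decode())     # bXl1c2Vy
print(base64.b64encode(b"myuser\n").decode())   # bXl1c2VyCg== (as in the secret above)

# Decoding shows the newline that snuck in:
print(repr(base64.b64decode("bXlwYXNzd3JkCg==").decode()))  # 'mypasswrd\n'
```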



--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Mon, Nov 21, 2016 at 7:51 PM, Subhendu Ghosh <sghosh...@gmail.com> wrote:

> Anyone using https integration for git instead of ssh?
>
> Thanks
> Subhendu
>
>
>


Re: How to set an proxy in the openshift origin to pull the image

2016-07-21 Thread Mateus Caruccio
Hi.

You could try a Chinese mirror. The following article shows how to do it
(I haven't tried it myself):
http://rzhw.me/blog/2015/12/faster-docker-pulls-in-china-with-daocloud/
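On the original question of actually setting a proxy: on a systemd host (RHEL/CentOS) the usual spot is a drop-in for the docker unit. The proxy address and NO_PROXY list below are placeholders:

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf (hypothetical values)
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,172.30.0.0/16"
# afterwards: systemctl daemon-reload && systemctl restart docker
```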


--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Thu, Jul 21, 2016 at 4:33 AM, 周华康 <huakang.z...@qq.com> wrote:

> Hi
> When I try to deploy the example apps, the log shows that I need to set a
> proxy, but how?
> log:
> "API error (500): Get
> https://registry-1.docker.io/v2/library/dancer-example/manifests/latest:
> Get
> https://auth.docker.io/token?scope=repository%3Alibrary%2Fdancer-example%3Apull=registry.docker.io:
> dial tcp: lookup auth.docker.io on 10.202.72.116:53: read udp
> 10.161.67.132:57753->10.202.72.116:53: i/o timeout\n"
>
>
>


Re: best practices for custom S2I STI builder images

2016-03-30 Thread Mateus Caruccio
Hi Dale.

I believe you don't need to redo most of the work already done by the base
image openshift/python-27-centos7.

Regarding python, you could simply add cx_Oracle to the requirements.txt of
your source project and let the base image install it [1].

Also, unless you provide your own ./s2i and ./contrib files, there is no
need to COPY them again.

Take a look at this example[2]. It just adds some extra RPMs into the new
image. Everything else is already provided by the base image.

[1]
https://github.com/openshift/sti-python/blob/529c67c24609ead2962c7a5d465541bb07898a0c/2.7/s2i/bin/assemble#L16-L19
[2] https://github.com/getupcloud/sti-php-extra/blob/master/5.6/Dockerfile
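Put together, the derived builder can be this small (the Oracle package name is a placeholder; the S2I scripts and everything else come from the base image):

```dockerfile
# Hypothetical minimal derived builder
FROM openshift/python-27-centos7
USER 0
RUN yum install -y oracle-instantclient-basic && yum clean all
USER 1001
# cx_Oracle goes in the app's requirements.txt; the base image's assemble
# script pip-installs it at build time.
```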

Regards,

--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Wed, Mar 30, 2016 at 12:44 PM, Dale Bewley <d...@bewley.net> wrote:

> - On Mar 22, 2016, at 7:05 AM, Ben Parees <bpar...@redhat.com> wrote:
>
> On Tue, Mar 22, 2016 at 12:58 AM, Dale Bewley <d...@bewley.net> wrote:
>
>> I'm trying to understand best practices for creating and maintaining
>> builder images.
>>
>> For example, I would like to start with this repo
>> https://github.com/openshift/sti-python/blob/master/2.7/ and customize
>> the Dockerfile to the include the Oracle instantclient libraries.
>>
>
> ​So first off, rather than customizing the dockerfile and building the
> whole image over again, you should do:
> FROM centos/python-27-centos7
> RUN yum install oracle-instantclient  # or whatever
>
>
> Do you mean `FROM openshift/python-27-centos7` ?
>
> This is the approach I'm taking at the moment. Does it look sane?
>
>
> 1. Clone  https://github.com/openshift/sti-python/
> <https://github.com/openshift/sti-python/blob/master/2.7/> and make it my
> own by changing the 2.7/Dockerfile to look like this:
>
> ```
>
> FROM openshift/python-27-centos7
>
> # This image provides a Python 2.7 environment you can use to run your Python
> # applications.
>
> MAINTAINER Admin <ad...@example.com>
>
> EXPOSE 8080
>
> ENV PYTHON_VERSION=2.7 \
>  PATH=$HOME/.local/bin/:$PATH
>
> LABEL io.k8s.description="Platform for building and running Python 2.7 applications with Oracle Support" \
>  io.k8s.display-name="Python 2.7 Oracle" \
>  io.openshift.expose-services="8080:http" \
>  io.openshift.tags="builder,python,python27,rh-python27,oracle,example"
>
> USER 0
>
> # Setup oracle environment for Example
> # RUN yum install oracle things
>
> # Install python support for Oracle
> RUN /opt/rh/python27/root/usr/bin/pip install cx_Oracle
>
> # Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH
> COPY ./s2i/bin/ $STI_SCRIPTS_PATH
>
> # Each language image can have a 'contrib' directory with extra files
> # needed to run and build the applications.
> COPY ./contrib/ /opt/app-root
>
> # In order to drop the root user, we have to make some directories world
> # writable as OpenShift default security model is to run the container under
> # random UID.
> RUN chown -R 1001:0 /opt/app-root && chmod -R og+rwx /opt/app-root
>
> USER 1001
>
> # Set the default CMD to print the usage of the language image
> CMD $STI_SCRIPTS_PATH/usage
>
> ```
>
>
> 2. Create a `example` project and give `system:authenticated` group view
> and pull permissions.
>
> ```
>
> oadm policy add-role-to-group system:image-puller system:authenticated -n example
>
> oadm policy add-role-to-group view system:authenticated -n example
>
> ```
>
>
> 3. In the example project create a imagestream like this
>
> ```
>
> apiVersion: v1
>
> kind: ImageStream
> metadata:
>   annotations:
>   name: python-27-centos7
> spec:
>   dockerImageRepository: example/python-27-centos7
>
> ```
>
>
> 4. In the example project create a buildconfig like this
>
>
> ```
> apiVersion: v1
> kind: BuildConfig
> metadata:
>   name: python-27-centos7
>   annotations:
>     description: Defines how to build the python-27-centos7 builder image
> spec:
>   output:
>     to:
>       kind: ImageStreamTag
>       name: python-27-centos7:latest
>   source:
>     type: Git
>     git:
>       uri: http://gitlab.example.com/openshift/sti-python.git
>     contextDir: "2.7"
>   strategy:
>     type: Docker
>     dockerStrategy: {}
>   triggers:
>   - type: "imagechange"
>     imageChange:
>       from:
>         kind: "ImageStreamTag"
>         name: "openshift/python-27-centos7:latest"
>
> ```
>
> 5. In the example project `oc start-build python-27-centos7`
>
> 6. Create a template to use example/python-27-centos7 + developer's python
> app git repo.
>

Re: Rust STI image

2016-03-21 Thread Mateus Caruccio
Cool!

It's great to see how easy it now is to build runtime images compared to
OpenShift v2 cartridges.
I myself made one or two here: https://hub.docker.com/r/getupcloud/ (BTW,
contributions are very welcome)




--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Mon, Mar 21, 2016 at 6:56 AM, Michalis Kargakis <mkarg...@redhat.com>
wrote:

> I was playing around with source-to-image and Rust yesterday and here is
> the outcome:
>
> https://github.com/kargakis/sti-rust
>
> Needs more polishing but it's possible to have Rust sti builds. Note that
> it is not an official image, merely a toy of mine.
>
>
>


Re: Cron tasks?

2016-02-23 Thread Mateus Caruccio
FYI, the Dockerfile was wrong in setting the setuid bit on /usr/bin/crontab.
That led to the /var/spool/cron/user file being owned by root, preventing
crond from reading it.
The working version is at
https://github.com/getupcloud/sti-ruby-extra/blob/master/1.9/Dockerfile#L24-L28
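The gist of the fix, sketched as Dockerfile fragments (paths as in the linked repo; treat as illustrative, not a drop-in):

```dockerfile
# Broken: setuid crontab writes /var/spool/cron/<user> owned by root,
# which crond then refuses to read:
#   RUN chmod u+s /usr/bin/crontab
# Working: leave crontab non-setuid and instead make the spool directory
# group-writable for the arbitrary UID the container runs as:
RUN yum install -y cronie crontabs && \
    chgrp -R 0 /var/spool/cron && \
    chmod -R g+rwX /var/spool/cron
```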

Regards,

*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Tue, Feb 23, 2016 at 9:12 AM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> Hello David.
>
> I've got cron to work as expected by doing this:
>
> 1 - Create an "extra" image and add the necessary packages (cronie
> crontabs nss_wrapper uid_wrapper):
>
> https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/Dockerfile#L18
>
> We need to relax security here, otherwise neither crond nor crontab will
> work, since both are run as regular users:
>
> https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/Dockerfile#L24-L26
>
> 2 - Create a script to activate nss_wrapper and (optionally) uid_wrapper:
>
> https://github.com/getupcloud/sti-ruby-extra/blob/master/1.9/nss-wrapper-setup
>
> libuid_wrapper is required by /usr/bin/crontab so that it believes it is
> running as root.
> For crond to start, the current user needs to be present in passwd. You can
> achieve this by using nss_wrapper with a "fake" passwd file [1] and
> instructing everything to use it [2].
>
> 3 - From your repo's  (.sti|.s2i)/bin/run, "source" the wrapper and start
> crond.
>
> if [ -x ${STI_SCRIPTS_PATH}/nss-wrapper-setup ]; then
> source ${STI_SCRIPTS_PATH}/nss-wrapper-setup -u
> crond-start
> fi
>
>
>
> I chose to run it from the same code container so it can reach the code
> itself.
>
> Please, feedback is very appreciated.
>
> Best Regards.
>
> [1]
> https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/nss-wrapper-setup#L22-L27
> [2]
> https://github.com/getupcloud/sti-ruby-extra/blob/1dfed4ca7ca153e261c880f0b036129c5d9011ca/1.9/nss-wrapper-setup#L29-L31
>
>
> *Mateus Caruccio*
> Master of Puppets
> +55 (51) 8298.0026
> gtalk: mateus.caruc...@getupcloud.com
> twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
> This message and any attachment are solely for the intended
> recipient and may contain confidential or privileged information
> and it can not be forwarded or shared without permission.
> Thank you!
>
> On Tue, Feb 23, 2016 at 7:03 AM, Maciej Szulik <maszu...@redhat.com>
> wrote:
>
>>
>> On 02/23/2016 10:41 AM, David Strejc wrote:
>>
>>> Does anyone have any experience with cron tasks as they were in OS v2?
>>>
>>
>> v3 does not have cron support yet, there was a proposal already accepted
>> in k8s. In the following weeks/months I'll be working on implementing
>> such functionality.
>>
>> I would like to let our developers maintain cron tasks through git .s2i
>>> folder as it was in v2.
>>> Is it good idea to build cron into docker image and link crontab to file
>>> inside .s2i?
>>>
>>
>> I'm not sure this will work as you expect. You'd still need a separate
>> mechanism that will actually trigger the build, or another action, when
>> the right time comes.
>>
>> What I can suggest as a temporary solution is writing/deploying some
>> kind of cron scheduler inside of OpenShift.
>>
>> Maciej
>>
>>
>
>


Re: Best practise to trigger actions when application is created

2016-02-11 Thread Mateus Caruccio
From my own experience, monitoring etcd is one way to do it, but it requires
an extra component (the monitor) to be always up and running.
This monitor must have cluster roles, since it needs to watch both project
and app (bc/dc) objects.

The other way is to provide your users with templates containing all the
stuff they will need.


*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Thu, Feb 11, 2016 at 1:21 PM, David Strejc <david.str...@gmail.com>
wrote:

> Dear all,
>
> what is the best practice for triggering actions when I am creating an
> application?
>
> Let's say I want to create database (as I am using external database
> cluster) and glusterfs volume for my application.
>
> Which approach should I look at? Should I make a docker container ssh
> somewhere and let it trigger some scripts?
>
> Or is there any other way? Should I somehow monitor etcd for app creation
> (as it was with activemq messages in Open Shift v2)?
>
> Thank you.
>
> David Strejc
> t: +420734270131
> e: david.str...@gmail.com
>
>
>


Re: Persistent Volumes

2016-02-05 Thread Mateus Caruccio
In order to avoid this conflict you can modify the template so it always
creates a random name for your pvc:

1 - Download a fresh mysql-persistent template:

  $ oc export templates/mysql-persistent -o yaml > mysql-persistent-custom.yaml

2 - Open mysql-persistent-custom.yaml in a real text editor (vim, obviously
;) and update accordingly:

  -> PersistentVolumeClaim.metadata.name:
     "${DATABASE_SERVICE_NAME}-${PVC_SUFFIX}"
  -> DeploymentConfig.spec.template.spec.volumes.name.persistentVolumeClaim.claimName:
     "${DATABASE_SERVICE_NAME}-${PVC_SUFFIX}"

3 - Add to "parameters":

  - description: Suffix for pvc names
from: [A-Z0-9]{6}
generate: expression
name: PVC_SUFFIX
required: true

4 - Optionally, create a new template:

  $ oc create -f mysql-persistent-custom.yaml

It should appear in web console.

Keep in mind that every time you use this template, a new PVC is created.
It's up to you to delete it after use.


Regards,


*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Fri, Feb 5, 2016 at 10:37 AM, Mark Turansky <mtura...@redhat.com> wrote:

> The PVC is created as part of the template, so the naming conflict makes
> sense.
>
> On Fri, Feb 5, 2016 at 6:59 AM, Mateus Caruccio <
> mateus.caruc...@getupcloud.com> wrote:
>
>> The only way (that I know of) is to delete the PVC prior to creating from
>> the template again:
>>
>> $ oc delete pvc/same-name
>>
>> Now you can use your template.
>>
>> *Mateus Caruccio*
>> Master of Puppets
>> +55 (51) 8298.0026
>> gtalk: mateus.caruc...@getupcloud.com
>> twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
>> This message and any attachment are solely for the intended
>> recipient and may contain confidential or privileged information
>> and it can not be forwarded or shared without permission.
>> Thank you!
>>
>> On Fri, Feb 5, 2016 at 8:57 AM, Alejandro Nieto Boza <ale9...@gmail.com>
>> wrote:
>>
>>> But if I don't delete the PVC, when I launch the template
>>> mysql-persistent-template again with the same name to use the same volume,
>>> this error appears:
>>>
>>> Cannot create persistentvolumeclaims. persistentvolumeclaims
>>> "same-name" already exists.
>>>
>>>
>>> Is there any way to launch an application named "Gandalf" from
>>> mysql-persistent-template, delete this application, and launch again the
>>> same application with the same name "Gandalf" for using the same PVC
>>> "Gandalf"?
>>>
>>> El lun., 1 feb. 2016 a las 14:30, Mark Turansky (<mtura...@redhat.com>)
>>> escribió:
>>>
>>>> Yes, you can re-use the same volume by not deleting the PVC.  The
>>>> lifecycle of a claim is independent of a pod's lifecycle.  You can create
>>>> and delete pods all day long using the same claim, but once you delete the
>>>> claim, you are relinquishing your hold on that volume, hence it is 
>>>> Released.
>>>>
>>>> Mark
>>>>
>>>> On Mon, Feb 1, 2016 at 8:08 AM, Alejandro Nieto Boza <ale9...@gmail.com
>>>> > wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I create the following NFS PV in Openshift:
>>>>>
>>>>> apiVersion: "v1"
>>>>> kind: "PersistentVolume"
>>>>> metadata:
>>>>>   name: "pv01"
>>>>> spec:
>>>>>   capacity:
>>>>> storage: "5Gi"
>>>>>   accessModes:
>>>>> - "ReadWriteOnce"
>>>>>   nfs:
>>>>> path: "/mnt"
>>>>>server: "..."
>>>>>
>>>>> Then, I create a pod with mysql-persistent-template and the pod uses
>>>>> correctly the PV and the PV appears "Bound".
>>>>> Now I delete the pod and the PVC. When I delete the PVC the PV appears
>>>>> "Released".
>>>>> Now I want to launch the same template mysql-persistent and I want the
>>>>> new pod uses the same PV but the PV doesn't appear "Available" again.
>>>>> Is there any way to use the same volume?
>>>>>
>>>>> Thanks.
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>


Re: wiring microservices together

2016-01-27 Thread Mateus Caruccio
One amazing aspect of templates on OpenShift is that you can process them
without depending on the web GUI:

$ oc process -f template.yml -v PARM1=value1,PARM2=43 -o json >
processed-template.json

Then create all objects at once:

$ oc create -f processed-template.json

Or, if you prefer, in a single shot:

$ oc process -f template.yml -v PARM1=value1,PARM2=43 -o json | oc create
-f -



*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk: mateus.caruc...@getupcloud.com
twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Wed, Jan 27, 2016 at 2:36 PM, Candide Kemmler <candide@intrinsic.world>
wrote:

> I didn't realize that. That's awesome. Loving OpenShift a little more
> everyday.
>
> Templates indeed look like the way to go for me and being able to look at
> the source of working templates is fantastic.
>
> Best,
>
> Candide
>
> On 26 Jan 2016, at 19:24, Mateus Caruccio <mateus.caruc...@getupcloud.com>
> wrote:
>
> Hi Candide.
>
> What you need is already there. All services may be referenced by name;
> there is an internal DNS service for that.
> Suppose you've created 2 microservices: ms1 and ms2. For ms1 to connect to
> ms2, just use the service name as the hostname, i.e. "ms2".
>
> Regarding deployments, "templates" are exactly what you are looking for.
> Templates have "parameters", where one can input data. Those values can be
> used inside other objects of the template (a template is basically a list
> of objects to be built, plus optional parameters). Those parameters can be
> referenced like shell variables.
> For example, see how this[1] parameter is being used here[2].
>
> You may want to start from an existing template in your own installation.
> Just "oc get templates -n openshift", then "oc export
> templates/ -n openshift".
>
> [1]
> https://github.com/openshift/origin/blob/8d872505a3c85b381cb28e862d18a279a09714f9/examples/sample-app/application-template-stibuild.json#L411-L416
> [2]
> https://github.com/openshift/origin/blob/8d872505a3c85b381cb28e862d18a279a09714f9/examples/sample-app/application-template-stibuild.json#L245
>
>
> *Mateus Caruccio*
> Master of Puppets
> +55 (51) 8298.0026
> gtalk: mateus.caruc...@getupcloud.com
> twitter: @MateusCaruccio <https://twitter.com/MateusCaruccio>
> This message and any attachment are solely for the intended
> recipient and may contain confidential or privileged information
> and it can not be forwarded or shared without permission.
> Thank you!
>
> On Tue, Jan 26, 2016 at 3:57 PM, Candide Kemmler <candide@intrinsic.world>
> wrote:
>
>> I understand how it is possible, using OpenShift to create a few pods as
>> microservices and to wire them together to create a composite application.
>> Each pod/microservice gets its own build and deployment lifecycle, which is
>> great. With my current knowledge the way I would gradually build this
>> application is less than optimal:
>>
>> I would start by deploying service 1, note its IP address then,
>> I would deploy service 2 and wire service 1's IP address as it depends on
>> it
>> ...and so on
>>
>> So I'm wondering if there is a way that I can discover services at
>> runtime, possibly by name. I know about fabric8's api but at first glance
>> it seems a bit cumbersome to use.
>>
>> Ideally I would like to deploy the entire app made of multiple services
>> in one step, as a template, for instance. Again what I don't understand is
>> how the wiring of service is accomplished in a generic way.
>>
>>
>
>
>


Re: wiring microservices together

2016-01-26 Thread Mateus Caruccio
Hi Candide.

What you need is already there. All services may be referenced by name;
there is an internal DNS service for that.
Suppose you've created 2 microservices: ms1 and ms2. For ms1 to connect to
ms2, just use the service name as the hostname, i.e. "ms2".
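For example, from any pod in the same project (port and path are illustrative):

```shell
$ curl http://ms2:8080/api/status   # "ms2" resolves through the cluster DNS
```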

Regarding deployments, "templates" are exactly what you are looking for.
Templates have "parameters", where one can input data. Those values can be
used inside other objects of the template (a template is basically a list
of objects to be built, plus optional parameters). Those parameters can be
referenced like shell variables.
For example, see how this[1] parameter is being used here[2].

You may want to start from an existing template in your own installation.
Just "oc get templates -n openshift", then "oc export templates/ -n
openshift".

[1]
https://github.com/openshift/origin/blob/8d872505a3c85b381cb28e862d18a279a09714f9/examples/sample-app/application-template-stibuild.json#L411-L416
[2]
https://github.com/openshift/origin/blob/8d872505a3c85b381cb28e862d18a279a09714f9/examples/sample-app/application-template-stibuild.json#L245


*Mateus Caruccio*
Master of Puppets
+55 (51) 8298.0026
gtalk:


*mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
@MateusCaruccio <https://twitter.com/MateusCaruccio>*
This message and any attachment are solely for the intended
recipient and may contain confidential or privileged information
and it can not be forwarded or shared without permission.
Thank you!

On Tue, Jan 26, 2016 at 3:57 PM, Candide Kemmler <candide@intrinsic.world>
wrote:

> I understand how it is possible, using OpenShift to create a few pods as
> microservices and to wire them together to create a composite application.
> Each pod/microservice gets its own build and deployment lifecycle, which is
> great. With my current knowledge the way I would gradually build this
> application is less than optimal:
>
> I would start by deploying service 1, note its IP address then,
> I would deploy service 2 and wire service 1's IP address as it depends on
> it
> ...and so on
>
> So I'm wondering if there is a way that I can discover services at
> runtime, possibly by name. I know about fabric8's api but at first glance
> it seems a bit cumbersome to use.
>
> Ideally I would like to deploy the entire app made of multiple services in
> one step, as a template, for instance. Again what I don't understand is how
> the wiring of service is accomplished in a generic way.
>
>