Re: Provisioning persistence for metrics with GlusterFS

2018-05-21 Thread Rodrigo Bersa
Hi Dan!

> Now what I also don't understand is how the initial volume group for
> the registry got created with just 26GB of storage if the default is
> 100GB? Is there a rule such as: "create block-hosting volume of default
> size=100GB or max available"?
> The integrated registry's persistence is set to 5GB. This is, I believe, a
> default value, as I haven't set anything related to it in my inventory file
> when installing Openshift Origin. How can I use the remaining storage in my
> vg with glusterFS and Openshift?
>

The registry-storage volume is not a block volume; it is a file volume, which
has no "minimum" size. So you can create many other small file volumes with
no problem. The only restriction applies to creating block volumes, which
need at least 100GB.
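
For example (a rough sketch; the StorageClass name and project below are
placeholders, check what actually exists in your cluster with "oc get
storageclass"), a small file-backed claim would look like this:

# oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: small-file-claim
  namespace: my-project                  # placeholder project
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: glusterfs-registry   # placeholder: use your file-based class
EOF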

Best regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55-11-99557-5841
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Mon, May 21, 2018 at 7:49 AM, Dan Pungă <dan.pu...@gmail.com> wrote:

> Hello Rodrigo, I appreciate your answer!
>
> In the meantime I had reached for the heketi-cli related support(chat) and
> I got the same reference. There's a config map generated by the installer
> for the heketi-registry pod that has the default for block-hosting volumes
> size set at 100GB.
> What I thought was that the "block hosting volume" would be an equivalent
> of a logical volume and it(heketi-cli) tries to create a lv of size 100GB
> inside the already created vg_bd61a1e6f317bb9decade964449c12e8(which has
> 26GB).
>
> I've actually modified the encrypted json config and tried to restart the
> heketi-registry pod, which failed. So I ended up with some unmanaged
> glusterFS storage, but since I'm on a test environment, it's fine.
> Otherwise, good to know for the future.
>
> Now what I also don't understand is how the initial volume group for
> the registry got created with just 26GB of storage if the default is
> 100GB? Is there a rule such as: "create block-hosting volume of default
> size=100GB or max available"?
> The integrated registry's persistence is set to 5GB. This is, I believe, a
> default value, as I haven't set anything related to it in my inventory file
> when installing Openshift Origin. How can I use the remaining storage in my
> vg with glusterFS and Openshift?
>
> Thank you!
>
> On 19.05.2018 02:43, Rodrigo Bersa wrote:
>
> Hi Dan,
>
> Gluster Block volumes work with the concept of a block-hosting volume,
> and these are created with 100GB by default.
>
> To clarify, the block volumes will be provisioned on top of the block-hosting
> volumes.
>
> Let's say you need a 10GB block volume: it will create a block-hosting
> volume of 100GB and then the 10GB block volume on it, as will the next block
> volumes requested, until the 100GB is used. After that a new block-hosting
> volume will be created, and so on.
>
> So, if you have just 26GB available in each server, it's not enough to
> create the block hosting volume. You may need to add more devices to your
> CNS Cluster to grow your free space.
>
>
> Kind regards,
>
>
> Rodrigo Bersa
>
> Cloud Consultant, RHCVA, RHCE
>
> Red Hat Brasil <https://www.redhat.com>
>
> rbe...@redhat.com  M: +55-11-99557-5841
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> Red Hat is recognized among the best companies to work for in Brazil
> by *Great Place to Work*.
>
> On Wed, May 16, 2018 at 10:35 PM, Dan Pungă <dan.pu...@gmail.com> wrote:
>
>> Hello all!
>>
>> I have setup a cluster with 3 glusterFS nodes for disk persistence just
>> as specified in the docs. I have configured the inventory file to install
>> the containerized version to be used by Openshift's integrated registry.
>> This works fine.
>>
>> Now I wanted to install the metrics component and I followed the
>> procedure described here: https://docs.openshift.org/lat
>> est/install_config/persistent_storage/persistent_storage_
>> glusterfs.html#install-example-infra
>>
>> I end up with openshift-infra project set up, but with 3 pods failing to
>> start and I think this has to do with the PVC for cassandra that fails to
>> create.
>>
>> oc get pvc metrics-cassandra-1 -o yaml
>>
>> apiVersion: v1
>> kind: PersistentVolumeClaim
>> metadata:
>>   annotations:
>> control-plane.alpha.kubernetes.io/leader:
>> '{"holderIdentity":"8

Re: Provisioning persistence for metrics with GlusterFS

2018-05-18 Thread Rodrigo Bersa
Hi Dan,

Gluster Block volumes work with the concept of a block-hosting volume,
and these are created with 100GB by default.

To clarify, the block volumes will be provisioned on top of the block-hosting
volumes.

Let's say you need a 10GB block volume: it will create a block-hosting
volume of 100GB and then the 10GB block volume on it, as will the next block
volumes requested, until the 100GB is used. After that a new block-hosting
volume will be created, and so on.

So, if you have just 26GB available on each server, it's not enough to
create the block-hosting volume. You may need to add more devices to your
CNS cluster to grow your free space.
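
If you go that route, a rough sketch of the heketi-cli steps (the route,
credentials, device name and node ID below are placeholders for your
environment):

# export HEKETI_CLI_SERVER=http://heketi-registry-default.apps.example.com
# heketi-cli --user admin --secret <admin-key> topology info        # find node IDs and free space
# heketi-cli --user admin --secret <admin-key> device add \
    --name=/dev/vdc --node=<node-id>                                 # add the new disk to that node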


Kind regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55-11-99557-5841
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Wed, May 16, 2018 at 10:35 PM, Dan Pungă <dan.pu...@gmail.com> wrote:

> Hello all!
>
> I have setup a cluster with 3 glusterFS nodes for disk persistence just as
> specified in the docs. I have configured the inventory file to install the
> containerized version to be used by Openshift's integrated registry. This
> works fine.
>
> Now I wanted to install the metrics component and I followed the procedure
> described here: https://docs.openshift.org/latest/install_config/
> persistent_storage/persistent_storage_glusterfs.html#install-example-infra
>
> I end up with openshift-infra project set up, but with 3 pods failing to
> start and I think this has to do with the PVC for cassandra that fails to
> create.
>
> oc get pvc metrics-cassandra-1 -o yaml
>
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
>   annotations:
> control-plane.alpha.kubernetes.io/leader:
> '{"holderIdentity":"8ef584d1-5923-11e8-8730-0a580a830040","
> leaseDurationSeconds":15,"acquireTime":"2018-05-17T00:
> 38:34Z","renewTime":"2018-05-17T00:55:33Z","leaderTransitions":0}'
> kubectl.kubernetes.io/last-applied-configuration: |
>   {"apiVersion":"v1","kind":"PersistentVolumeClaim","
> metadata":{"annotations":{"volume.beta.kubernetes.io/storage-provisioner
> ":"gluster.org/glusterblock"},"labels":{"metrics-infra":"hawkular-
> cassandra"},"name":"metrics-cassandra-1","namespace":"
> openshift-infra"},"spec":{"accessModes":["ReadWriteOnce"]
> ,"resources":{"requests":{"storage":"6Gi"}},"storageClassName":"glusterfs-
> registry-block"}}
> volume.beta.kubernetes.io/storage-provisioner:
> gluster.org/glusterblock
>   creationTimestamp: 2018-05-17T00:38:34Z
>   labels:
> metrics-infra: hawkular-cassandra
>   name: metrics-cassandra-1
>   namespace: openshift-infra
>   resourceVersion: "1204482"
>   selfLink: /api/v1/namespaces/openshift-infra/persistentvolumeclaims/
> metrics-cassandra-1
>   uid: a18b8c20-596a-11e8-8a63-fa163ed601cb
> spec:
>   accessModes:
>   - ReadWriteOnce
>   resources:
> requests:
>   storage: 6Gi
>   storageClassName: glusterfs-registry-block
> status:
>   phase: Pending
>
> oc describe pvc metrics-cassandra-1 shows these warnings:
>
>   36m  23m  13  gluster.org/glusterblock
> glusterblock-registry-provisioner-dc-1-tljbb  8ef584d1-5923-11e8-8730-0a580a830040
> Warning  ProvisioningFailed  Failed to provision volume with StorageClass
> "glusterfs-registry-block": failed to create volume: [heketi] failed to
> create volume: Failed to allocate new block volume: No space
>   36m  21m  14  gluster.org/glusterblock
> glusterblock-registry-provisioner-dc-1-tljbb  8ef584d1-5923-11e8-8730-0a580a830040
> Normal  Provisioning  External provisioner is provisioning volume for claim
> "openshift-infra/metrics-cassandra-1"
>   21m  21m  1  gluster.org/glusterblock
> glusterblock-registry-provisioner-dc-1-tljbb  8ef584d1-5923-11e8-8730-0a580a830040
> Warning  ProvisioningFailed  Failed to provision volume with StorageClass
> "glusterfs-registry-block": failed to create volume: [heketi] failed to
> create volume: Post http://heketi-registry-default.apps.my.net/blockvolumes:
> dial tcp: lookup heketi-registry-default.apps.my.net on 192.168.150.16:53: no such host
>
> In the default project, if I check the logs for heketi-r

Re: Container Ready/Native storage for OpenShift 3.9

2018-05-17 Thread Rodrigo Bersa
Hi Veselin,

I don't think that allowing applications to run on StorageNodes is a good
practice, so the resources on these machines can be kept exclusive to Gluster.

I would use a different label to keep applications off the StorageNodes
instead of using schedulable=false.
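
As a rough sketch (node names and label key/value are just examples), that
could be something like labeling the storage nodes and pointing the
application projects at a different selector:

# oc label node storage-node-1.example.com role=storage
# oc label node storage-node-2.example.com role=storage
# oc label node storage-node-3.example.com role=storage
# oc adm new-project my-app --node-selector='role=app'    # app projects never land on the storage nodes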


Regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55-11-99557-5841
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Tue, May 15, 2018 at 6:06 AM, Veselin Hristov <veselin.hris...@itgix.com>
wrote:

> Hi Todd,
>
> Appreciate the answer!
>
> I will definitely try it out; last question: what about the 'storage'
> nodes, are they allowed to run applications? Maybe they are set as
> 'schedulable=False' and with 'Taints and Tolerations' you allow the 3
> gluster storage pods you mentioned?
>
> Regards,
> Veselin
>
> -Original Message-
> From: Walters, Todd [mailto:todd_walt...@unigroup.com]
> Sent: Monday, May 14, 2018 6:25 PM
> To: users@lists.openshift.redhat.com
> Cc: veselin.hris...@itgix.com
> Subject: Re: Container Ready/Native storage for OpenShift 3.9
>
> Hello Veselin,
>
> We’ve deployed GlusterFS on 3.9. A couple of items that may help you. We chose
> CNS to be part of our existing OpenShift cluster, or ‘Containerized’ as
> described here
> - https://docs.openshift.org/latest/install_config/
> persistent_storage/persistent_storage_glusterfs.html
>
> We created 3 nodes for ‘app’ storage and labeled/tagged them as glusterfs
> nodes. So basically we ran our scaleup playbook to add the 3 nodes. Once
> those nodes were online we configured them for gluster.
> - same link as above but to the ‘advanced install’ section
> - https://docs.openshift.org/latest/install_config/
> persistent_storage/persistent_storage_glusterfs.html#
> install-advanced-installer
>
> We set these for our inventory ‘roles’
> glusterfs_wipe: true
> glusterfs_devices: [ "/dev/xvdc" ]
> openshift_node_labels:
> region: gluster
> zone: default
>
> and these in our OSEv3
> `# Glusterfs # CNS storage for applications
> openshift_storage_glusterfs_namespace: app-storage
> openshift_storage_glusterfs_block_deploy: False
> openshift_storage_glusterfs_is_native: True
> #openshift_storage_glusterfs_timeout: 600
> #openshift_storage_glusterfs_wipe: True`
>
> We made sure our playbooks matched the release-3.9 and then ran this
> playbook to install glusterfs:
> - ansible-playbook -vvv /usr/share/ansible/openshift-
> ansible/playbooks/byo/openshift-glusterfs/config.yml
>
> We used this page for additional roles
> - https://github.com/openshift/openshift-ansible/tree/master/
> roles/openshift_storage_glusterfs
>
> This will install gluster into namespace ‘app-storage’ and will deploy 4
> pods, 1 heketi pod, and 3 gluster storage pods (1 for each storage node)
>
> The only issue we have had is adding devices. For example, we haven’t been
> successful adding a 2nd disk device to each node to increase cluster
> capacity. The nodes see it, but our heketi-cli commands keep failing. Once
> we resolve this issue, we’ll deploy to prod.
>
> Thanks,
>
> Todd
>
> Today's Topics:
>
>1. Container Ready/Native storage for OpenShift 3.9
>   (Veselin Hristov)
>
> --
>
> Message: 1
> Date: Mon, 14 May 2018 14:04:47 +0300
> From: "Veselin Hristov" <veselin.hris...@itgix.com>
> To: <users@lists.openshift.redhat.com>
> Subject: Container Ready/Native storage for OpenShift 3.9
> Message-ID: <200f01d3eb73$5f4414a0$1dcc3de0$@itgix.com>
> Content-Type: text/plain; charset="utf-8"
>
> Dear All,
>
>
>
> I am in need of assistance on having Persistent Storage for our
> OpenShift (Origin) 3.9. Here are more details: 10 VMs in total - 3 Masters
> + 7 nodes.
>
> The plan is to have a redundant storage solution for our containerized apps
> and infrastructure components such as the Registry (RWX ReadWriteMany),
> Aggregated Logging, and Metrics.
>
> In the future we might need RWX also for our apps, which leads me to use
> Red Hat's GlusterFS.
>
>
>
> So far I have been researching Container-Ready Storage
> <https://access.redhat.com/documentation/en-us/container-native_storage/3.9/>

Re: adding glusterfs to an existing cluster

2018-04-13 Thread Rodrigo Bersa
Hi Tim,

I normally add the storage=True parameter in the [nodes] section on each
glusterfs node:

[glusterfs]
orn-gluster-storage-001 glusterfs_ip=10.0.0.30 glusterfs_devices='[ "/dev/vdb" ]'
orn-gluster-storage-002 glusterfs_ip=10.0.0.33 glusterfs_devices='[ "/dev/vdb" ]'
orn-gluster-storage-003 glusterfs_ip=10.0.0.7 glusterfs_devices='[ "/dev/vdb" ]'

[nodes]

orn-gluster-storage-001 *storage=True* openshift_hostname=orn-gluster-storage-001.openstacklocal
orn-gluster-storage-002 *storage=True* openshift_hostname=orn-gluster-storage-002.openstacklocal
orn-gluster-storage-003 *storage=True* openshift_hostname=orn-gluster-storage-003.openstacklocal
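
After updating the inventory, re-running the GlusterFS playbook should pick
the nodes up (the path below assumes an RPM install of openshift-ansible; on
a git checkout use the matching path for your release branch):

# ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-glusterfs/config.yml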

Kind regards,

Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55-11-99557-5841
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Fri, Apr 13, 2018 at 8:28 AM, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> I'm having unreliability problems installing a complete cluster, so am
> trying to do this piece by piece.
> First I deploy a basic origin cluster and then I try to deploy glusterfs
> using the playbooks/common/openshift-glusterfs/config.yml playbook (this
> is using v3.7 and the release-3.7 branch of openshift-ansible).
>
> I already have the three gluster nodes as normal nodes in the cluster, and
> now add the gluster sections to the inventory file like this:
>
> [glusterfs]
> orn-gluster-storage-001 glusterfs_ip=10.0.0.30 glusterfs_devices='[ "/dev/vdb" ]'
> orn-gluster-storage-002 glusterfs_ip=10.0.0.33 glusterfs_devices='[ "/dev/vdb" ]'
> orn-gluster-storage-003 glusterfs_ip=10.0.0.7 glusterfs_devices='[ "/dev/vdb" ]'
>
> [nodes]
> 
> orn-gluster-storage-001 openshift_hostname=orn-gluster-storage-001.openstacklocal
> orn-gluster-storage-002 openshift_hostname=orn-gluster-storage-002.openstacklocal
> orn-gluster-storage-003 openshift_hostname=orn-gluster-storage-003.openstacklocal
>
>
> But when I run the playbooks/common/openshift-glusterfs/config.yml
> playbook gluster does not get installed and I see this in the log:
>
> PLAY [Configure GlusterFS] ***************************************************
> skipping: no hosts matched
>
> What's the right procedure for doing this?
>
> Tim
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Not able to route to services

2018-03-29 Thread Rodrigo Bersa
Hi TIm,

Did you try to curl directly on the docker-registry POD?

If it works maybe the docker-registry endpoint is missing. You can also try
to recreate the docker-registry service.
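
A rough way to check that (the pod IP and names come from your own cluster):

# oc get pods -n default -o wide | grep docker-registry     # note the pod IP
# curl -kv https://<pod-ip>:5000/healthz                    # curl the pod directly, bypassing the service
# oc get endpoints docker-registry -n default               # the service should list that pod IP as an endpoint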


Best,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55-11-99557-5841
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Wed, Mar 28, 2018 at 6:05 AM, Tim Dudgeon <tdudgeon...@gmail.com> wrote:

> A little more on this.
> I have two systems, installed in an identical manner as is possible.
> One works fine, on the other I can't connect to services.
>
> For instance, from the master node I try to connect the docker-registry
> service on the infrastructure node. If I try:
>
> curl -I https://:5000/healthz
> It works on the working environment, but gets a "No route to host" error
> on the failing one.
>
> If I try:
>
> sudo traceroute -T -p 5000 
>
> it confirms the problem. On the working environment:
>
> $ sudo traceroute -T -p 5000 172.30.145.23
> traceroute to 172.30.145.23 (172.30.145.23), 30 hops max, 60 byte packets
>  1  docker-registry.default.svc.cluster.local (172.30.145.23)  3.044 ms
> 2.723 ms  2.307 ms
>
> On the failing one:
>
> $ sudo traceroute -T -p 5000 172.30.76.145
> traceroute to 172.30.76.145 (172.30.76.145), 30 hops max, 60 byte packets
>  1  docker-registry.default.svc.cluster.local (172.30.76.145)  3004.572
> ms !H  3004.517 ms !H  3004.502 ms !H
>
> The !H means the host is unreachable.
> If I run the same commands from the infrastructure node where the service
> is actually running then it works OK.
>
> The security group for both servers leaves all TCP traffic open. e.g.
>
> ALLOW IPv4 1-65535/tcp to 0.0.0.0/0
> ALLOW IPv4 1-65535/tcp from 0.0.0.0/0
>
> Any thoughts on what is blocking the traffic?
>
> Tim
>
>
>
>
> On 27/03/18 21:54, Tim Dudgeon wrote:
>
> Sorry, I am using port 5000. I wrote that bit incorrectly.
> I did do some more digging based on what's here (
> https://docs.openshift.org/latest/admin_guide/sdn_troubleshooting.html)
> and it looks like there's something wrong with the node to node
> communications.
> From the master I try to contact the infrastructure node:
>
> $ ping 192.168.253.126
> PING 192.168.253.126 (192.168.253.126) 56(84) bytes of data.
> 64 bytes from 192.168.253.126: icmp_seq=1 ttl=64 time=0.657 ms
> 64 bytes from 192.168.253.126: icmp_seq=2 ttl=64 time=0.588 ms
> 64 bytes from 192.168.253.126: icmp_seq=3 ttl=64 time=0.605 ms
> ^C
> --- 192.168.253.126 ping statistics ---
> 3 packets transmitted, 3 received, 0% packet loss, time 2000ms
> rtt min/avg/max/mdev = 0.588/0.616/0.657/0.041 ms
>
> $ tracepath 192.168.253.126
>  1?: [LOCALHOST] pmtu 1450
>  1:  no reply
>  2:  no reply
>  3:  no reply
>  4:  no reply
> ^C
>
> I can ping the node but tracepath can't reach it. On a working cluster
> tracepath has no problems.
>
> I don't know the cause. Any ideas?
>
> On 27/03/18 21:46, Louis Santillan wrote:
>
> Isn't the default port for your Registry 5000? Try `curl -kv
> https://docker-registry.default.svc:5000/healthz`
> <https://docker-registry.default.svc:5000/> [0][1].
>
> [0] https://access.redhat.com/solutions/1616953#health
> [1] https://docs.openshift.com/container-platform/3.7/
> install_config/registry/accessing_registry.html#accessing-registry-metrics
>
> ___
>
> LOUIS P. SANTILLAN
>
> Architect, OPENSHIFT, MIDDLEWARE & DEVOPS
>
> Red Hat Consulting, <https://www.redhat.com/> Container and PaaS Practice
>
> lsant...@redhat.com   M: 3236334854
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
>
>
>
> On Tue, Mar 27, 2018 at 6:39 AM, Tim Dudgeon <tdudgeon...@gmail.com>
> wrote:
>
>> Something strange has happened in my environment which has resulted in
>> not being able to route to any of the services.
>> Earlier this was all working fine. The install was done using the ansible
>> installer and this is happening with 3.6.1 and 3.7.1.
>> The services are all there are running fine, and DNS is working, but I
>> can't reach them. e.g. from the master node:
>>
>> $ host docker-registry.default.svc
>> docker-registry.default.svc.cluster.local has address 172.30.243.173
>> $ curl -k https://docker-registry.default.svc/healthz
>> curl: (7) Failed connect to docker-registry.def

Re: Pods stuck on Terminating status

2018-03-16 Thread Rodrigo Bersa
Bahhoo,

I believe that the namespace will get stuck as well, because it will only be
deleted after all of its objects have been deleted.

I would try restarting the master services first.
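
Something along these lines, depending on whether it is Origin or OCP and
whether the masters run the split api/controllers services (names below are
the usual ones, adjust to your install):

# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers   # OCP 3.7 HA masters
# systemctl restart origin-master-api origin-master-controllers                       # Origin equivalent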


Regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55-11-99557-5841
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Fri, Mar 16, 2018 at 5:25 PM, Bahhoo <bah...@gmail.com> wrote:

> Hi  Rodrigo,
>
> No PVs are used. One of the pods is a build pod, the other one's a normal
> pod without storage.
> I'll try deleting the namespace. I didn't want to do that, since I had
> running pods in the namespace.
>
> Best,
> Bahho
> --
> From: Rodrigo Bersa <rbe...@redhat.com>
> Sent: 16.3.2018 16:12
> To: Bahhoo <bah...@gmail.com>
> Cc: rahul334...@gmail.com; users <users@lists.openshift.redhat.com>
>
> Subject: Re: Pods stuck on Terminating status
>
> Hi Bahhoo,
>
> Are you using PVs on the "Terminating" POD? I heard about some issues with
> PODs bound to PV/PVCs provided by dynamic storage, where you have to
> first remove the volume from the POD, then the PV/PVC. Only after that remove
> the POD or the DeploymentConfig.
>
> If it's not the case, maybe restarting the atomic-openshift-master-*
> services can work to remove the inconsistent POD.
>
>
> Regards,
>
>
> Rodrigo Bersa
>
> Cloud Consultant, RHCVA, RHCE
>
> Red Hat Brasil <https://www.redhat.com>
>
> rbe...@redhat.com  M: +55-11-99557-5841
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> Red Hat is recognized among the best companies to work for in Brazil
> by *Great Place to Work*.
>
> On Thu, Mar 15, 2018 at 7:28 PM, Bahhoo <bah...@gmail.com> wrote:
>
>> Hi Rahul,
>>
>> That won't do it either.
>>
>> Thanks
>> Bahho
>> --
>> From: Rahul Agarwal <rahul334...@gmail.com>
>> Sent: 15.3.2018 22:26
>> To: bahhooo <bah...@gmail.com>
>> Cc: users <users@lists.openshift.redhat.com>
>> Subject: Re: Pods stuck on Terminating status
>>
>> Hi Bahho
>>
>> Try: oc delete all -l app=
>>
>> Thanks,
>> Rahul
>>
>> On Thu, Mar 15, 2018 at 5:19 PM, bahhooo <bah...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I have some zombie pods stuck on Terminating status on a OCP 3.7
>>> HA-cluster.
>>>
>>> oc delete with --grace-period=0 --force etc. won't work.
>>> Docker restart. server reboot won't help either.
>>>
>>> I tried to find the pod key in etcd either in order to delete it
>>> manually. I couldn't find it.
>>>
>>> Is there a way to delete these pods?
>>>
>>>
>>>
>>>
>>> Bahho
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Pods stuck on Terminating status

2018-03-16 Thread Rodrigo Bersa
Hi Bahhoo,

Are you using PVs on the "Terminating" POD? I heard about some issues with
PODs bound to PV/PVCs provided by dynamic storage, where you have to
first remove the volume from the POD, then the PV/PVC. Only after that remove
the POD or the DeploymentConfig.

If it's not the case, maybe restarting the atomic-openshift-master-*
services can work to remove the inconsistent POD.
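
As a rough sketch of that order (dc, volume, PVC and pod names are
placeholders):

# oc set volume dc/<dc-name> --remove --name=<volume-name>   # detach the volume from the dc first
# oc delete pvc <pvc-name>                                    # then remove the claim (and PV, if needed)
# oc delete pod <pod-name> --grace-period=0 --force           # only then force-delete the stuck pod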


Regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55-11-99557-5841
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Thu, Mar 15, 2018 at 7:28 PM, Bahhoo <bah...@gmail.com> wrote:

> Hi Rahul,
>
> That won't do it either.
>
> Thanks
> Bahho
> --
> From: Rahul Agarwal <rahul334...@gmail.com>
> Sent: 15.3.2018 22:26
> To: bahhooo <bah...@gmail.com>
> Cc: users <users@lists.openshift.redhat.com>
> Subject: Re: Pods stuck on Terminating status
>
> Hi Bahho
>
> Try: oc delete all -l app=
>
> Thanks,
> Rahul
>
> On Thu, Mar 15, 2018 at 5:19 PM, bahhooo <bah...@gmail.com> wrote:
>
>> Hi all,
>>
>> I have some zombie pods stuck on Terminating status on a OCP 3.7
>> HA-cluster.
>>
>> oc delete with --grace-period=0 --force etc. won't work.
>> Docker restart. server reboot won't help either.
>>
>> I tried to find the pod key in etcd either in order to delete it
>> manually. I couldn't find it.
>>
>> Is there a way to delete these pods?
>>
>>
>>
>>
>> Bahho
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Help regarding mounting hostPath volumes

2018-03-07 Thread Rodrigo Bersa
Hi Gaurav,

You need to set the privileged security context for the DeploymentConfig and
the Project/Namespace:

# oc adm policy add-scc-to-user privileged -z <serviceaccount>

# oc patch dc <dc-name> -p
'{"spec":{"template":{"spec":{"containers":[{"name":"router","securityContext":{"privileged":true}}]}}}}'


... and/or set the hostmount-anyuid context for the Project/Namespace:

# oc adm policy add-scc-to-user hostmount-anyuid -z default



https://docs.openshift.com/container-platform/3.7/admin_guide/manage_scc.html#grant-access-to-the-privileged-scc
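
Once the SCC is granted, a rough sketch of wiring the hostPath volume into
the DeploymentConfig (dc name, container name and paths are placeholders):

# oc patch dc <dc-name> -p '{"spec":{"template":{"spec":{
    "volumes":[{"name":"host-dir","hostPath":{"path":"/var/data"}}],
    "containers":[{"name":"<container-name>","volumeMounts":[{"name":"host-dir","mountPath":"/data"}]}]}}}}'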


Regards,


Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55-11-99557-5841
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Red Hat is recognized among the best companies to work for in Brazil
by *Great Place to Work*.

On Wed, Mar 7, 2018 at 11:45 AM, Fernando Lozano <floz...@redhat.com> wrote:

> Hi Gaurav,
>
> We usually don't change a pod directly -- we change the deployment
> configuration (dc) that creates and manages the pod. Changing the dc
> automatically destroys existing pods and creates new ones, using the
> updated configuration.
>
> []s, Fernando Lozano
>
>
> On Wed, Mar 7, 2018 at 10:42 AM, Vyacheslav Semushin <vsemu...@redhat.com>
> wrote:
>
>> 2018-03-07 4:00 GMT+01:00 Gaurav Ojha <gauravo...@gmail.com>:
>>
>>> Hi,
>>>
>>> I would like some help from you guys if possible.
>>>
>>> I am trying to mount a directory on my host machine to my OpenShift
>>> instance.
>>>
>>> As per the kubernetes document here
>>> <https://kubernetes.io/docs/concepts/storage/volumes/#hostpath> , it
>>> mentions that changing the pod spec by simply adding the hostPath volume
>>> should work, however, when I do that,  OpenShift throws an error whereby it
>>> says that I am not permitted to modify other than a few handful.
>>>
>>
>> If you provide also error message, we'll be able to provide a better
>> solution for you.
>>
>> As far as I remember, Kubernetes doesn't allow you to _modify_ all the pod
>> fields, only a subset of them. Have you tried to _create_ a pod instead of
>> editing it?
>>
>> Is there any way to get this permission? I already have added set the
>>> allowHostDirVolumePlugin to true and my containers run as root.
>>>
>>
>> So, you've already seen this https://docs.openshift.org/1.2
>> /admin_guide/manage_scc.html#use-the-hostpath-volume-plugin ?
>>
>>
>> --
>> Slava Semushin | OpenShift
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Re-configure openshift cluster using ansible

2017-11-22 Thread Rodrigo Bersa
Hi Alon,

You can just run the config.yml again or, if you prefer, run the
uninstall.yml playbook and then the config.yml.
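
For reference, the playbook paths on an RPM install of openshift-ansible
(adjust to your inventory location and checkout):

# ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
# ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml    # only for a full clean redeploy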

Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55-11-99557-5841
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>

On Wed, Nov 22, 2017 at 5:37 AM, Alon Zusman <aloniko@gmail.com> wrote:

> Thanks.
> But when I need to change things that are a bit more involved, like different
> hostnames or certificates, what should I run? Upgrade?
>
>
> On 22 Nov 2017, at 2:32, Joel Pearson <japear...@agiledigital.com.au>
> wrote:
>
> For reference what you're after is:
>
> openshift_disable_check=disk_availability
>
> On Wed, Nov 22, 2017 at 5:05 AM Scott Dodson <sdod...@redhat.com> wrote:
>
>> It really depends on the configuration changes you want to make whether
>> or not you can simply re-run config.yml and get what you're looking for.
>> Things like hostnames that get placed in certs and certain network
>> configuration such as services and cluster CIDR ranges are immutable and
>> cannot be changed via the installer.
>>
>> As far as the health check goes, you should be able to disable any health
>> check by setting the variable that's emitted in the error message.
>>
>> On Tue, Nov 21, 2017 at 11:25 AM, Alon Zusman <aloniko@gmail.com>
>> wrote:
>>
>>> Hello,
>>> I could not figure out how I can change the inventory file for new
>>> configurations and then Re-configure my current cluster.
>>>
>>> Whenever I re run the configure.yml in the byo folder, it checks again
>>> the minimal requirements and my /var is already less than 40G after the
>>> installation.
>>>
>>> Thanks.
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift Origin Active Directory Authentication

2017-07-12 Thread Rodrigo Bersa
Hi Mark,

I believe the syntax may not be right.

Could you try this?

oauthConfig:
  assetPublicURL: https://master.domain.local:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: Active_Directory
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id:
        - dn
        email:
        - mail
        name:
        - cn
        preferredUsername:
        - uid
      bindDN: "cn=openshift,cn=users,dc=domain,dc=local"
      bindPassword: "password"
      insecure: true
      url: ldap://dc.domain.local:389/cn=users,dc=domain,dc=local?uid
  masterPublicURL: https://master.domain.local:8443
  masterURL: https://master.domain.local:8443
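
After editing /etc/origin/master/master-config.yaml, restart the master and
test a login (service name depends on Origin vs. OCP and on single vs. HA
masters):

# systemctl restart origin-master        # or atomic-openshift-master, or the -api/-controllers pair on HA setups
# oc login https://master.domain.local:8443 -u <ad-user>   # quick check that the provider is picked up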


Best regards,

Rodrigo Bersa

Cloud Consultant, RHCVA, RHCE

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55 11 99557-5841 <+55-11-99557-5841>
<https://red.ht/sig> [image: Red Hat] <http://www.redhat.com.br>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>


<http://www.redhat.com.br>

On Wed, Jul 12, 2017 at 2:15 PM, Javier Palacios <jpalac...@net4things.com>
wrote:

>
> > I did try sAMAccountName at first and was getting the same results. Then
> I
> > had read that variable was for older Windows machines so I tried uid as
> that
> > was the other example I saw.
>
> The relevant part of my master-config.yaml is below, and appart from using
> ldaps, I don't see any other difference. If the uid attribute is valid on
> your schema, the yours seems ok.
>
> Javier Palacios
>
>   identityProviders:
>   - challenge: true
> login: true
> mappingMethod: claim
> name: n4tdc1
> provider:
>   apiVersion: v1
>   attributes:
> email:
> - mail
> id:
> - dn
> name:
> - cn
> preferredUsername:
> - sAMAccountName
>   bindDN: CN=openshift,OU=N4T-USERS,dc=net4things,dc=local
>   bindPassword: 
>   ca: ad-ldap-ca.crt
>   insecure: false
>   kind: LDAPPasswordIdentityProvider
>   url: ldaps://n4tdc1.net4things.local/dc=net4things,dc=local?
> sAMAccountName
>
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: cannot push image

2017-05-23 Thread Rodrigo Bersa
Hi Hetz,

You need to do one of these two options:

1. Enable scheduling on your master node.
2. Label node1 and node2 with region=infra.

I would choose the second option and remove the label from the master node.
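
Roughly (node names taken from your "oc get nodes" output, and assuming the
default router/registry selector of region=infra):

# oc label node node1-home region=infra --overwrite
# oc label node node2-home region=infra --overwrite
# oc label node master-home region-                       # remove the infra label from the master
# oc adm manage-node master-home --schedulable=true       # only needed if you go with option 1 instead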



Rodrigo Bersa

Cloud Consultant, RHCSA, RHCVA

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55 11 99557-5841 <+55-11-99557-5841>
<https://red.ht/sig> [image: Red Hat] <http://www.redhat.com.br>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>


<http://www.redhat.com.br>

On Tue, May 23, 2017 at 10:52 AM, Ben Parees <bpar...@redhat.com> wrote:

>
>
> On Tue, May 23, 2017 at 9:49 AM, Hetz Ben Hamo <h...@hetz.biz> wrote:
>
>> That's true. I didn't want to have container apps on it.
>>
>
> Since you labeled it infra (based on your inventory), it won't. But it's
> also your only infra-labeled node, and the registry has to run on an
> infra node.
>
> So you either need to add another node labeled infra that's schedulable,
> or make your master schedulable.
>
>
>
>>
>> # oc get nodes
>> NAME  STATUS AGE
>> master-home   Ready,SchedulingDisabled   1h
>> node1-homeReady  1h
>> node2-homeReady  1h
>>
>>
>> Thanks,
>> *Hetz Ben Hamo*
>> You're welcome to visit the consulting blog <http://linvirtstor.net/> or my personal blog
>> <http://benhamo.org>
>>
>> On Tue, May 23, 2017 at 4:45 PM, Ben Parees <bpar...@redhat.com> wrote:
>>
>>> sounds like maybe your master node is not scheduleable, can you run:
>>>
>>> $ oc get nodes
>>>
>>> $ oc describe node master
>>>
>>> ?
>>>
>>>
>>> On Tue, May 23, 2017 at 9:42 AM, Hetz Ben Hamo <h...@hetz.biz> wrote:
>>>
>>>> Sure, here it is:
>>>>
>>>> # oc describe pod docker-registry-2-deploy
>>>> Name:   docker-registry-2-deploy
>>>> Namespace:  default
>>>> Security Policy:restricted
>>>> Node:   /
>>>> Labels: openshift.io/deployer-pod-for.
>>>> name=docker-registry-2
>>>> Status: Pending
>>>> IP:
>>>> Controllers:
>>>> Containers:
>>>>   deployment:
>>>> Image:  openshift/origin-deployer:v1.4.1
>>>> Port:
>>>> Volume Mounts:
>>>>   /var/run/secrets/kubernetes.io/serviceaccount from
>>>> deployer-token-sbvm4 (ro)
>>>> Environment Variables:
>>>>   KUBERNETES_MASTER:https://master-home:8443
>>>>   OPENSHIFT_MASTER: https://master-home:8443
>>>>   BEARER_TOKEN_FILE:/var/run/secrets/kubernetes.i
>>>> o/serviceaccount/token
>>>>   OPENSHIFT_CA_DATA:-BEGIN CERTIFICATE-
>>>> MIIC6jCCAdKgAwIBAgIBATANBgkqhkiG9w0BAQsFADAmMSQwIgYDVQQDDBtvcGVu
>>>> c2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEwHhcNMTcwNTIzMTIxMzMwWhcNMjIwNTIy
>>>> MTIxMzMxWjAmMSQwIgYDVQQDDBtvcGVuc2hpZnQtc2lnbmVyQDE0OTU1NDE2MTEw
>>>> ggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC9imtjAe8JBjpD99nt3D4h
>>>> VCwlWKCMugpIGWYdnHaICBS71KuIim8pWaOWYPUb73QhoUUZhZ80MYOzlB7lk/xK
>>>> NWUnQBDFYc9zKqXkxjiWlTXHv1UCyB56mxFdfxPTHN61JbE8dD9jbiBLRudgb1cq
>>>> Vhff4CRXqkdDURk8KjpnGkWW57Ky0Icp0rbOrRT/OhYv5CB8sqJedSC2VKfe9qtz
>>>> +L4ykOOa4Q1qfqD7YqPDAqnUEJFXEbqjFCdLe6q2TS0vscx/rRJcANmzApgw4BRd
>>>> OxEHH1KX6ariXSNkSWxhQIBa8qDukDrGc2dvAoLHi8ALBbnpGLE0zwtf087zdyF/
>>>> AgMBAAGjIzAhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqG
>>>> SIb3DQEBCwUAA4IBAQA7Nn3iGUVH0HJN6WxR6oirpIv9VdqRgugqOoBM8O5GlV7D
>>>> 7kd4VGFSzFtXKr0kHCgA+/6sEiu0ZlQZT7IvwDWgiY/bXOY/gT8whMWVLXbXBGGT
>>>> 4brdqSRQVdgjv56kBG/cqIWedwNzItGFb+eye+AjHi20fUuVKW49Z7lvStHcvOHK
>>>> c4XyP+e3S/wg6VEMT64kAuZUvvRLhUvJK9ZfxlEGZnjQ+qrYCEpGGDjeDTeXOxMi
>>>> 6NL7Rh09p/yjemw3u+EZkfNlBMgBsA2+zEOxKbAGmENjjctFGRTJVGKq+FWR2HMi
>>>> P2pHCOPEcn2on3GAyTncdp1ANcBNTjb8gTnsoPbc
>>>> -END CERTIFICATE-
>>>>
>>>>   OPENSHIFT_DEPLOYMENT_NAME:docker-registry-2
>>>>   OPENSHIFT_DEPLOYMENT_NAMESPACE:   default
>>>> Conditions:
>>>>   Type  Status
>>>>   PodScheduled  False
>>>> Volumes:
>>>>   deployer-token-sbvm4:
>>>> Type:   Secret (a volume populated by a Secret)
>>>>

Re: cannot push image

2017-05-23 Thread Rodrigo Bersa
Hi Hetz,

It seems that your Registry and Router PODs are not running. Probably
there's a problem preventing them from deploying.

Can you send the output of the commands below?

# oc describe pod docker-registry-1-deploy
# oc describe pod router-1-deploy



Rodrigo Bersa

Cloud Consultant, RHCSA, RHCVA

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55 11 99557-5841 <+55-11-99557-5841>
<https://red.ht/sig> [image: Red Hat] <http://www.redhat.com.br>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>


<http://www.redhat.com.br>

On Tue, May 23, 2017 at 8:28 AM, Hetz Ben Hamo <h...@hetz.biz> wrote:

> ]# oc get pods -n default
> NAME                        READY     STATUS    RESTARTS   AGE
> docker-registry-1-deploy    0/1       Pending   0          16m
> registry-console-1-deploy   0/1       Error     0          15m
> router-1-deploy             0/1       Pending   0          17m
> [root@master-home ~]# oc logs registry-console-1-deploy
> --> Scaling registry-console-1 to 1
> --> Waiting up to 10m0s for pods in rc registry-console-1 to become ready
> error: update acceptor rejected registry-console-1: pods for rc
> "registry-console-1" took longer than 600 seconds to become ready
> [root@master-home ~]# oc logs router-1-deploy
> [root@master-home ~]# oc logs docker-registry-1-deploy
> [root@master-home ~]# oc logs docker-registry-1-deploy -n default
> [root@master-home ~]# oc get pods
>
>
> Thanks,
> *Hetz Ben Hamo*
> You're welcome to visit the consulting blog <http://linvirtstor.net/> or my personal blog
> <http://benhamo.org>
>
> On Tue, May 23, 2017 at 1:49 AM, Ben Parees <bpar...@redhat.com> wrote:
>
>>
>>
>> On Mon, May 22, 2017 at 6:18 PM, Hetz Ben Hamo <h...@hetz.biz> wrote:
>>
>>> Hi,
>>>
>>> I've built on a 3 nodes openshift origin using the host file included
>>> below, but it seems few things are getting broken. I didn't modify anything
>>> yet on the openshift, just used the openshift-Ansible checked out from
>>> today.
>>>
>>> Problem one: After building an image from the examples (I chose Java
>>> with the example of wildfly) I get:
>>>
>>> [INFO] 
>>> 
>>> [INFO] BUILD SUCCESS
>>> [INFO] 
>>> 
>>> [INFO] Total time: 12.182 s
>>> [INFO] Finished at: 2017-05-22T22:08:21+00:00
>>> [INFO] Final Memory: 14M/134M
>>> [INFO] 
>>> 
>>> Moving built war files into /wildfly/standalone/deployments for later
>>> deployment...
>>> Moving all war artifacts from /opt/app-root/src/target directory into
>>> /wildfly/standalone/deployments for later deployment...
>>> '/opt/app-root/src/target/ROOT.war' -> '/wildfly/standalone/deploymen
>>> ts/ROOT.war'
>>> Moving all ear artifacts from /opt/app-root/src/target directory into
>>> /wildfly/standalone/deployments for later deployment...
>>> Moving all rar artifacts from /opt/app-root/src/target directory into
>>> /wildfly/standalone/deployments for later deployment...
>>> Moving all jar artifacts from /opt/app-root/src/target directory into
>>> /wildfly/standalone/deployments for later deployment...
>>> ...done
>>> Pushing image 172.30.172.85:5000/test1/wf:latest ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Warning: Push failed, retrying in 5s ...
>>> Registry server Address:
>>> Registry server User Name: serviceaccount
>>> Registry server Email: serviceacco...@example.org
>>> Registry server Password: <>
>>> error: build error: Failed to push image: Get
>>> https://172.30.172.85:5000/v1/_ping: dial tcp 172.30.172.85:5000:
>>> getsockopt: connection refused
>>>
>>>
>> can you confirm your registry pod is running in the default namespace (oc
>> get pods -n default)?  Can you get logs from it?
>>
>>
>>
>>>
>>> Another problem: I added the metrics option so it installed hawkler but
>>> when it complains that it needs SSL approval (it shows a message about a
>>> problem with hawkler and gives a link to open it) I get upon clicki

[ openshift-sme ] - Wrong Nodes Status

2017-05-09 Thread Rodrigo Bersa
Hi guys,

IHAC who ran some failover and load-balancing tests with RH OCP 3.4, and
we found some odd behaviour when shutting down the Nodes/Masters.

First test: we shut down master1. But when it enters the NotReady state,
master2 and one of the nodes go with it, even with the services up
and everything accessible (including the PODs hosted on them).

Second test: after shutting down master3, it doesn't enter the NotReady state,
and because of this the platform doesn't move its PODs to the other nodes.

In both tests the web console got really slow and sometimes requests a new
login.


Anyone have seen something like this?


Thank you very much.


Rodrigo Bersa
Cloud Consultant | RedHat Brasil
+55 11 99557-5841 | rbe...@redhat.com
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Build pod is failing to push the image to docker registry

2017-04-07 Thread Rodrigo Bersa
Hi Madhukar,

I know it can be obvious, but did you restart the docker.service after
configuring the no-proxy exception?

Also, you can configure this proxy exception in
/etc/origin/master/master-config.yaml and restart the
atomic-openshift-master.service.
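
A rough sketch of what that usually looks like on the node (addresses are
placeholders; the registry service IP is the one from your error message):

# grep -i proxy /etc/sysconfig/docker
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=10.104.6.164,.cluster.local,.svc
# systemctl restart docker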


Regards,

Rodrigo Bersa

Cloud Consultant, RHCSA

Red Hat Brasil <https://www.redhat.com>

rbe...@redhat.com  M: +55 11 99557-5841 <+55-11-99557-5841>
<https://red.ht/sig> [image: Red Hat] <http://www.redhat.com.br>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>


<http://www.redhat.com.br>

On Fri, Apr 7, 2017 at 11:18 PM, Madhukar Nayakbomman <
madhukarnaya...@gmail.com> wrote:

> Hi Tero,
>
> Thanks for the reply. I did try these things but unfortunately the same
> error keeps popping up. So, as per your reply, the build pod uses the docker
> daemon of the node to push the image to the registry? Is there documentation
> that helps to understand how the build container pushes the image to the
> docker registry? That would be really helpful.
>
> Thanks,
> Madhukar
>
> On Thu, Apr 6, 2017 at 10:42 PM, Tero Ahonen <taho...@redhat.com> wrote:
>
>> And forgot proxy “beast” ….if you are using proxy then you need to add
>> docker registry to no proxy
>>
>> .t
>>
>> On 7 Apr 2017, at 6.59, Madhukar Nayakbomman <madhukarnaya...@gmail.com>
>> wrote:
>>
>> Hello Experts,
>>
>> I am a newbie to the OpenShift world. Any help/assistance in solving the
>> below problem is really appreciated.
>>
>> We are creating an application using the below json file, however the
>> build is failing with below error
>>
>>
>> *input json*: https://github.com/openshift/o
>> rigin/blob/master/examples/sample-app/application-template-stibuild.json
>>
>> *Error*: error: build error: Failed to push image: Get
>> https://10.104.6.164:5000/v1/_ping: dial tcp 10.104.6.164:5000:
>> getsockopt: no route to host
>>
>> *Version details:*
>>
>> [root@a4s8 ~]# oc version
>> oc v3.4.1.10
>> kubernetes v1.4.0+776c994
>> features: Basic-Auth GSSAPI Kerberos SPNEGO
>>
>> Server https://a4s8:8443
>> openshift v3.4.1.10
>> kubernetes v1.4.0+776c994
>>
>> *Build logs:*
>>
>> [root@a4s8 ~]# oc logs ruby-sample-build-5-build -f
>> Cloning "https://github.com/openshift/ruby-hello-world.git; ...
>> Commit: 022d87e4160c00274b63cdad7c238b5c6a299265 (Merge pull
>> request #58 from junaruga/feature/fix-for-ruby24)
>> Author: Ben Parees <bpar...@users.noreply.github.com>
>> Date:   Fri Mar 3 15:29:12 2017 -0500
>> ---> Installing application source ...
>> ---> Building your Ruby application from source ...
>> ---> Running 'bundle install --deployment --without development:test' ...
>> Fetching gem metadata from https://rubygems.org/..
>> Installing rake 10.3.2
>> Installing i18n 0.6.11
>> Installing json 1.8.6
>> Installing minitest 5.4.2
>> Installing thread_safe 0.3.4
>> Installing tzinfo 1.2.2
>> Installing activesupport 4.1.7
>> Installing builder 3.2.2
>> Installing activemodel 4.1.7
>> Installing arel 5.0.1.20140414130214
>> Installing activerecord 4.1.7
>> Installing mysql2 0.3.16
>> Installing rack 1.5.2
>> Installing rack-protection 1.5.3
>> Installing tilt 1.4.1
>> Installing sinatra 1.4.5
>> Installing sinatra-activerecord 2.0.3
>> Using bundler 1.7.8
>> Your bundle is complete!
>> Gems in the groups development and test were not installed.
>> It was installed into ./bundle
>> ---> Cleaning up unused ruby gems ...
>> Running post commit hook ...
>> /opt/rh/rh-ruby22/root/usr/bin/ruby -I"lib"
>> -I"/opt/app-root/src/bundle/ruby/gems/rake-10.3.2/lib"
>> "/opt/app-root/src/bundle/ruby/gems/rake-10.3.2/lib/rake/rake_test_loader.rb"
>> "test/*_test.rb"
>> Run options: --seed 63498
>> # Running:
>> .
>> Finished in 0.000908s, 1101.1930 runs/s, 1101.1930 assertions/s.
>> 1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
>> Pushing image 10.104.6.164:5000/default/origin-ruby-sample:latest ...
>> error: build error: Failed to push image: Get
>> https://10.104.6.164:5000/v1/_ping: dial tcp 10.104.6.164:5000:
>> getsockopt: no route to host
>>
>>
>> Thanks,
>> Madhukar
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: syncing ldap groups with openshift 1.4

2017-03-21 Thread Rodrigo Bersa
Hi Joseph!

In the logs we can see:
Error determining LDAP group membership for "cn=staff,ou=Group,dc=acme,dc=net":
membership lookup for user "jgutierr" in group
"cn=staff,ou=Group,dc=acme,dc=net" failed because of "could not search by
dn, invalid dn value: DN ended with incomplete type, value pair".

And the usersQuery is set to:
baseDN: "ou=People,dc=acme,dc=net"

Maybe you can try setting the baseDN to a higher level, like
dc=acme,dc=net, so the ldapsearch can search/find anything below this.



Rodrigo Bersa
Cloud Consultant | Red Hat Brasil
rbe...@redhat.com | M: +55 11 9 9557-5841
Av. Brigadeiro Faria Lima 3900, 8° Andar. São Paulo, Brasil.
RED HAT | TRIED. TESTED. TRUSTED. Find out why at redhat.com
<https://www.redhat.com/pt-br/about/trusted>
<http://www.redhat.com/es/about/trusted>
[image: Red Hat] <http://www.redhat.com.br>

On Tue, Mar 21, 2017 at 4:34 PM, Joseph Lorenzini <jalo...@gmail.com> wrote:

> Hi Rodrigo,
>
> Yea, I figured as much. I am kinda tearing my hair out. It's certainly
> possible there's something wrong with my user input, but trying to figure
> out why it's having problems is really difficult. I have actually started
> tracing through the actual Go code to see if I can figure out why it's
> having such problems. Here's my latest configuration. It's not much
> different from what you have, except that groupNameAttributes is set to cn
> instead of ou. I even tcpdumped the LDAP communication -- nada.
>
> kind: LDAPSyncConfig
> apiVersion: v1
> url: ldap://server:389
> insecure: true
> rfc2307:
> groupsQuery:
> baseDN: "ou=Group,dc=acme,dc=net"
> scope: sub
> derefAliases: never
> pageSize: 0
> filter: (objectClass=posixGroup)
> groupUIDAttribute: dn
> groupNameAttributes: [ cn ]
> groupMembershipAttributes: [ memberUid ]
> usersQuery:
> baseDN: "ou=People,dc=acme,dc=net"
> scope: sub
> derefAliases: never
> pageSize: 0
> userUIDAttribute: dn
> userNameAttributes: [ uid ]
> tolerateMemberNotFoundErrors: false
> tolerateMemberOutOfScopeErrors: false
>
>
> It successfully finds the group *and* the list of users in the group. But
> when it tries to do a membership lookup it fails with the following. I
> don't know why it's having this particular problem with the DN. Is it
> somehow having an issue trying to create the user DN and matching that to
> the memberUID attribute in the group?
>
> membership lookup for user "jdoe" in group "cn=staff,ou=Group,dc=acme,dc=net"
> failed because of "could not search by dn, invalid dn value: DN ended with
> incomplete type, value pair"
>
>
> Here are the logs.
>
> I0321 14:26:17.070608  130788 groupsyncer.go:56] Listing with
> &{[cn=staff,ou=Group,dc=acme,dc=net]}
> I0321 14:26:17.070699  130788 groupsyncer.go:62] Sync ldapGroupUIDs
> [cn=staff,ou=Group,dc=acme,dc=net]
> I0321 14:26:17.070707  130788 groupsyncer.go:65] Checking LDAP group
> cn=staff,ou=Group,dc=acme,dc=net
> I0321 14:26:17.071770  130788 query.go:228] searching LDAP server with
> config {Scheme: ldap Host: server:389 BindDN:  len(BbindPassword): 0
> Insecure: true} with dn="cn=staff,ou=Group,dc=acme,dc=net" and scope 0
> for (objectClass=*) requesting [cn dn memberUid]I0321 14:26:17.075034
>  130788 query.go:245] found dn="cn=staff,ou=Group,dc=acme,dc=net"
> I0321 14:26:17.075052  130788 query.go:198] found
> dn="cn=staff,ou=Group,dc=acme,dc=net" for (objectClass=*)
> Error determining LDAP group membership for 
> "cn=staff,ou=Group,dc=acme,dc=net":
> membership lookup for user "jgutierr" in group
> "cn=staff,ou=Group,dc=acme,dc=net" failed because of "could not search by
> dn, invalid dn value: DN ended with incomplete type, value pair".
> apiVersion: v1
> items: []
> kind: List
> metadata: {}
> membership lookup for user "jdoe" in group "cn=staff,ou=Group,dc=acme,dc=net"
> failed because of "could not search by dn, invalid dn value: DN ended with
> incomplete type, value pair"
>
>
> On Tue, Mar 21, 2017 at 2:23 PM, Rodrigo Bersa <rbe...@redhat.com> wrote:
>
>> Hi Joseph,
>>
>> Yes, it's not possible do a sync without an objectClass, but you it's
>> possible to use DN as objectClass. I had some problems syncing the
>> LDAPGroups in a client before, and after change the scopes and attributes a
>> lot of times, I got to this LDAPSyncConfig, to work correctly. I think that
>> you just need to find the right parameters =).
>>
>> kind: LDAPSyncConfig
>&

Re: syncing ldap groups with openshift 1.4

2017-03-21 Thread Rodrigo Bersa
Hi Joseph,

Yes, it's not possible to do a sync without an objectClass, but it is
possible to use the DN as the UID attribute. I had some problems syncing the
LDAP groups at a client before, and after changing the scopes and attributes
many times, I got to this LDAPSyncConfig that works correctly. I think that
you just need to find the right parameters =).

kind: LDAPSyncConfig
apiVersion: v1
url: "ldap://ldapserver.client.com.br;
insecure: true
bindDN: "uid=openShiftAdm,ou=openShift,ou=accounts,O=CLIENT.COM"
bindPassword: "password"
rfc2307:
groupsQuery:
baseDN: "ou=openShift,ou=accounts,o=client.com"
scope: sub
derefAliases: never
filter: (objectClass=groupOfNames)
groupUIDAttribute: dn
groupNameAttributes: [ ou ]
groupMembershipAttributes: [ member ]
usersQuery:
baseDN: "O=CLIENT.COM"
scope: sub
derefAliases: never
userUIDAttribute: dn
userNameAttributes: [ uid ]
tolerateMemberNotFoundErrors: false
tolerateMemberOutOfScopeErrors: false
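
Then the sync itself can be run (the file name below is whatever you saved
the config as), first without --confirm as a dry run:

# oc adm groups sync --sync-config=ldap-sync-config.yaml             # dry run, prints the groups it would create
# oc adm groups sync --sync-config=ldap-sync-config.yaml --confirm   # actually create/update the groups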

Hope this can help!!

Regards,


Rodrigo Bersa
Cloud Consultant | Red Hat Brasil
rbe...@redhat.com | M: +55 11 9 9557-5841
Av. Brigadeiro Faria Lima 3900, 8° Andar. São Paulo, Brasil.
RED HAT | TRIED. TESTED. TRUSTED. Find out why at redhat.com
<https://www.redhat.com/pt-br/about/trusted>
<http://www.redhat.com/es/about/trusted>
[image: Red Hat] <http://www.redhat.com.br>

On Tue, Mar 21, 2017 at 10:02 AM, Joseph Lorenzini <jalo...@gmail.com>
wrote:

> Hi all,
>
> I am following the documentation here:
>
> https://docs.openshift.org/latest/install_config/syncing_
> groups_with_ldap.html
>
>
> I used a yaml config here:
>
> https://gist.github.com/jaloren/ec7b76feea980dd23d757c477680f751
>
>
> Which failed with:
>
> error: validation of LDAP sync config failed: usersQuery.filter: Invalid
> value: "(objectclass=inetOrgPerson)": cannot specify a filter when using
> "dn" as the UID attribute
>
> Seems like the bug here in the docs has not actually been fixed.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1381674
>
> But okay so you can't use DN with a object class filter that's fine. So
> then I tried it without an object class but left everything else the same
> and now I see this:
>
> error: validation of LDAP sync config failed: groupsQuery.filter: Invalid
> value: "": invalid query filter: LDAP Result Code 201 "": ldap: filter does
> not start with an '('
>
> So if I can't use an object class with a DN as the UID attribute and I
> can't do a sync without an object class, my questions are: how does one get
> this to work where the DN is the UID attribute and if DN is not acceptable
> for the UID attribute, then what is?
>
> Thanks,
>
> Joe
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users