Re: Limits for CPU worth? Vs benefits

2018-03-23 Thread Srinivas Naga Kotaru (skotaru)
Yep, we are doing pretty much the same.

We create a LimitRange for every project with defaults.
We create a quota for every project and use it as a cost model for CPU, memory, and storage.
We configured cluster overcommit at 50% for memory and 10% for CPU:

ClusterResourceOverride:
  configuration:
    apiVersion: v1
    cpuRequestToLimitPercent: '10'
    kind: ClusterResourceOverrideConfig
    memoryRequestToLimitPercent: '50'

We monitor resource usage and add more nodes as needed.

We are trying to tune CPU a little, as CPU is a bit confusing and is a compressible resource.

Thanks for your information. It was really useful for clearing some doubts around CPU requests vs. limits.


--
Srinivas Kotaru
From: Frederic Giloux 
Date: Friday, March 23, 2018 at 11:55 AM
To: Srinivas Naga Kotaru 
Cc: users 
Subject: Re: Limits for CPU worth? Vs benefits

In the previous example we looked at setting a limit on the pod having the lower request, but you may rather want one on the pod having the higher request. In this extreme scenario (a node with 32 cores) pod A was gaining 21 cores on top of its request, whereas pod B gained only 3, when no limit was set. You may find that out of proportion and may want to cap what a pod with a high request can get.
Another aspect is resource fragmentation, of CPU in this case. Basically you get better density (you can place more pods/containers, not just in number but also as a sum of requested CPU) with smaller CPU request/limit chunks: the remainders are smaller. A cluster administrator may want to address this aspect by creating a limit range with a max CPU and potentially a maxLimitRequestRatio. If a max CPU limit range is set, then you have to set a CPU limit.
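
A minimal LimitRange sketch along those lines (the name and values are illustrative, not taken from this thread):

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range            # hypothetical name
spec:
  limits:
  - type: Container
    max:
      cpu: '2'                     # no container may ask for more than 2 cores
    defaultRequest:
      cpu: 100m                    # applied when a container sets no request
    default:
      cpu: 500m                    # applied when a container sets no limit
    maxLimitRequestRatio:
      cpu: '10'                    # the limit may be at most 10x the request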
That said, my advice would be not to over-engineer it. Start with simple and tolerant settings: make requests mandatory or provide defaults for all your pods or containers (otherwise the scheduler has a hard time), and have quotas so that a single project does not starve your complete cluster (a sketch follows below). Monitor your cluster, its resource consumption and the consumption patterns, and react where needed.
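
A project quota of that kind could look like this (name and hard values are again illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota              # hypothetical name
spec:
  hard:
    requests.cpu: '10'             # total CPU the project's pods may request
    requests.memory: 20Gi
    limits.cpu: '20'
    limits.memory: 40Gi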
Regards,
Frédéric


Re: Limits for CPU worth? Vs benefits

2018-03-23 Thread Srinivas Naga Kotaru (skotaru)
Thanks, Frederic, for providing more than I asked for. This explanation is enough to understand how CPU shares work at run time, and requests vs. limits.

So basically limits don't help much unless we want to throttle. Since CPU is a compressible resource, is it better to use only requests, and not depend on or control via limits, for cluster planning and efficient utilization of the CPU (requests) configuration?

Will CPU scheduling honor QoS, or does QoS play any role in the explanation below? Like Guaranteed, Burstable, and BestEffort? Since pod B has limits, will it get more preference than pod A, which doesn't have limits?


--
Srinivas Kotaru
From: Frederic Giloux <fgil...@redhat.com>
Date: Friday, March 23, 2018 at 12:21 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: users <users@lists.openshift.redhat.com>
Subject: Re: Limits for CPU worth? Vs benefits

Srinivas,
Let me write the scenarios in a different way if you don't mind:
- pod A requests 7 cores and no limit
- pod B requests 1 core and 3 cores as limit
Node 1 has more than 8 cores available (additional cores may have been reserved for system and kubelet processes, but we will ignore that) and no other pod running on it. Pod A and B can both be scheduled on node 1 (the requests fit). When there is contention, pod A will get 7 cores and pod B 1 core, as requests are guaranteed (and the scheduler takes care of not having more requests than cores available).
When there is no contention, extra cycles get allocated proportionally to the request ratio. Let's say there is 1 additional core free. Pod A will get 7/8 of the 9 cores and pod B 1/8 of them: pod A uses 7.875 cores and pod B 1.125.
Now let's say that the node has plenty of cores: 32.
According to the CPU shares configured, pod A should get 7/8*32 = 28 cores and pod B should get 1/8*32 = 4 cores. But wait: we set a limit of 3 cores for pod B, so it gets throttled and cannot consume more than the 3 cores. What happens to the cycles of the remaining core? Idle? No, pod A can freely use them, as a CPU share is what is guaranteed to the process, not a limit.
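
For concreteness, the two pods in this scenario could be written as below (pod names and image are placeholders; the comments use the usual Kubernetes mapping of 1 core to 1024 cpu.shares and a 100ms CFS period):

apiVersion: v1
kind: Pod
metadata:
  name: pod-a                      # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest      # placeholder image
    resources:
      requests:
        cpu: '7'                   # -> cpu.shares ~7168; no limit, so no CFS quota
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-b                      # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest      # placeholder image
    resources:
      requests:
        cpu: '1'                   # -> cpu.shares ~1024
      limits:
        cpu: '3'                   # -> CFS quota of 300ms per 100ms period; throttled above 3 cores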
I hope this helps.
Regards,
Frédéric


On Thu, Mar 22, 2018 at 9:09 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Frederic, thanks for the quick reply. You are touching on the QoS tier.

Let us take a scenario to understand me better. Pod A has 7000 shares as requests (--cpu-shares) but no limits. Pod B has 1000 shares as requests and 3000 as limits. In a CPU contention situation, how do scheduling and QoS work in the Kubernetes world?

Will pod A get more CPU time than pod B? Or will pod B get its guaranteed CPU slices first, before the CPU scheduler serves pod A, since pod A doesn't have limits?


--
Srinivas Kotaru
From: Frederic Giloux <fgil...@redhat.com<mailto:fgil...@redhat.com>>
Date: Thursday, March 22, 2018 at 9:22 AM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: users 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: Limits for CPU worth? Vs benefits

Hi Srinivas,
here are a couple of scenarios where I find setting limits useful:
- When I do performance tests and want to compare results between runs, setting CPU limits = CPU requests gives me confidence that the CPU cycles available between the runs were more or less the same (see the sketch after this list). If you don't set a limit, or have a higher limit, anything between the two values is best effort and depends on what is happening on the node, including resources consumed by other pods.
- You may also set CPU limits when you want to differentiate between applications that are able to consume the "extra" CPU cycles, the ones that haven't been "requested". Or you may want to limit how much "extra" these applications can get. An example is batch processing, which can use lots of CPU cycles but where you may not mind it finishing a bit earlier or later.
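
A sketch of the first case, with requests equal to limits so that every run sees the same CPU budget (pod name, image, and sizes are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: perf-test                  # hypothetical name
spec:
  containers:
  - name: bench
    image: example/bench:latest    # placeholder image
    resources:
      requests:
        cpu: '2'
        memory: 2Gi
      limits:
        cpu: '2'                   # limits == requests -> Guaranteed QoS, stable CPU between runs
        memory: 2Gi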
I hope this helps.
Regards,
Frédéric

On Thu, Mar 22, 2018 at 4:59 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
CPU requests are enforced using shares. Even in a contention situation, the kernel still schedules based on shares; depending on their shares, pods get their own share, and this never leads to a CPU bottleneck or high load on the nodes. Basically it never causes a noisy-neighbour problem.

I understand CPU limits are enforced using CPU quota and help with throttling.

The question, or argument, is: do we still need limits when CPU shares are already doing their job well in both non-contention and contention situations? What extra benefit do they bring?

I need some clarity in the context of the noisy-neighbour problem: do limits help prevent a node from going down, or prevent one or a few bad pods from disturbing every pod on the node?

Basically I am looking for the benefit of having or not having CPU limits for pods.

Sent from my iPhone


Re: Limits for CPU worth? Vs benefits

2018-03-22 Thread Srinivas Naga Kotaru (skotaru)
Frederic, thanks for the quick reply. You are touching on the QoS tier.

Let us take a scenario to understand me better. Pod A has 7000 shares as requests (--cpu-shares) but no limits. Pod B has 1000 shares as requests and 3000 as limits. In a CPU contention situation, how do scheduling and QoS work in the Kubernetes world?

Will pod A get more CPU time than pod B? Or will pod B get its guaranteed CPU slices first, before the CPU scheduler serves pod A, since pod A doesn't have limits?


--
Srinivas Kotaru
From: Frederic Giloux <fgil...@redhat.com>
Date: Thursday, March 22, 2018 at 9:22 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: users <users@lists.openshift.redhat.com>
Subject: Re: Limits for CPU worth? Vs benefits

Hi Srinivas,
here are a couple of scenarios where I find setting limits useful:
- When I do performance tests and want to compare results between runs, setting CPU limits = CPU requests gives me confidence that the CPU cycles available between the runs were more or less the same. If you don't set a limit, or have a higher limit, anything between the two values is best effort and depends on what is happening on the node, including resources consumed by other pods.
- You may also set CPU limits when you want to differentiate between applications that are able to consume the "extra" CPU cycles, the ones that haven't been "requested". Or you may want to limit how much "extra" these applications can get. An example is batch processing, which can use lots of CPU cycles but where you may not mind it finishing a bit earlier or later.
I hope this helps.
Regards,
Frédéric

On Thu, Mar 22, 2018 at 4:59 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
CPU requests are enforced using shares. Even in a contention situation, the kernel still schedules based on shares; depending on their shares, pods get their own share, and this never leads to a CPU bottleneck or high load on the nodes. Basically it never causes a noisy-neighbour problem.

I understand CPU limits are enforced using CPU quota and help with throttling.

The question, or argument, is: do we still need limits when CPU shares are already doing their job well in both non-contention and contention situations? What extra benefit do they bring?

I need some clarity in the context of the noisy-neighbour problem: do limits help prevent a node from going down, or prevent one or a few bad pods from disturbing every pod on the node?

Basically I am looking for the benefit of having or not having CPU limits for pods.

Sent from my iPhone




--
Frédéric Giloux
Principal App Dev Consultant
Red Hat Germany

fgil...@redhat.com M: +49-174-172-4661

redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted

Red Hat GmbH, http://www.de.redhat.com/ Sitz: Grasbrunn,
Handelsregister: Amtsgericht München, HRB 153243
Geschäftsführer: Paul Argiry, Charles Cachera, Michael Cunningham, Michael 
O'Neill


Limits for CPU worth? Vs benefits

2018-03-22 Thread Srinivas Naga Kotaru (skotaru)
CPU requests are enforced using shares. Even in a contention situation, the kernel still schedules based on shares; depending on their shares, pods get their own share, and this never leads to a CPU bottleneck or high load on the nodes. Basically it never causes a noisy-neighbour problem.

I understand CPU limits are enforced using CPU quota and help with throttling.
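
For reference, requests and limits map onto the kernel's CFS controller roughly as follows, sketched for a container with requests.cpu=1 and limits.cpu=3 (exact cgroup paths vary by node, runtime, and cgroup driver):

# cpu.shares        = 1024 * requests.cpu -> 1024    (relative weight under contention)
# cpu.cfs_period_us = 100000                         (100ms scheduling period)
# cpu.cfs_quota_us  = 100000 * limits.cpu  -> 300000 (hard ceiling; throttled above it)
# Inspect on the node, e.g. (<container-cgroup> is node-specific):
cat /sys/fs/cgroup/cpu/<container-cgroup>/cpu.shares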

The question, or argument, is: do we still need limits when CPU shares are already doing their job well in both non-contention and contention situations? What extra benefit do they bring?

I need some clarity in the context of the noisy-neighbour problem: do limits help prevent a node from going down, or prevent one or a few bad pods from disturbing every pod on the node?

Basically I am looking for the benefit of having or not having CPU limits for pods.

Sent from my iPhone



Re: Heptio Contour

2018-01-24 Thread Srinivas Naga Kotaru (skotaru)
Clayton

Good analysis. That is exactly what I am looking for. Thanks for the great info.

Also happy that you already did a prototype and compared it with the current OCP routing solution.

Also, can you share your thoughts on how Ambassador fits this ecosystem? My research shows Ambassador would be a good fit for a north/south ingress controller, whereas Istio would be a great fit for east/west service traffic. Both use Envoy internally.

Would Ambassador then be another competitor to Contour?

I know there are a lot of moving parts on the routes, ingress, and services side, but none is ready for prime time with high-scale workloads.

--
Srinivas Kotaru
From: Clayton Coleman <ccole...@redhat.com>
Date: Wednesday, January 24, 2018 at 10:32 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: users <users@lists.openshift.redhat.com>
Subject: Re: Heptio Contour

At this point in time, Contour is still pretty new, so expect some rough edges.  I did a prototype of routes with Envoy (similar to Contour, but preserving the router features) a few months back, and identified a set of challenges which made it not a great fit as a replacement for the OOTB OpenShift router.

In general, when comparing to haproxy and the general state of things, here's the list:

PROs (envoy):

* supports http2 natively
* deeper insight into traffic passing through

CONs (envoy):

* scale is not great right now - even using dynamic programming I couldn't get much above 100 backends before hitting the wall (wouldn't scale to very large, dense clusters)
* memory use is much higher than haproxy - so another density challenge (I was 
using 30GB at 1k backends, vs 5GB for 15k backends that we see with haproxy)
* web sockets can't be transparent - so you have to run another port for them 
instead of sharing the HTTP port
* SNI passthrough not ready, maybe in 6mo
* reencrypt was really hacky, I couldn't get it to work right now (again, 6mo 
should be fixed)
* general fragility - was easy to break when programming config

CONs (contour, vs openshift router):

* None of the security isolation stuff (preventing one tenant from using 
someone else's hostname)
* None of the protection against bad certs (preventing someone from breaking 
the router by using someone else's tenant)
* No status reporting back

I think the biggest long term challenge with envoy will be pure scale - HAProxy 
is at least two orders of magnitude more efficient right now, and I think it 
will be a while before envoy even gets close.  So if you have 10k+ frontends, 
haproxy is the only game in town.  On ingress vs routes, routes are still more polished, so it's really just a "do you want the features routes have that ingress doesn't" question.

On the other downsides to envoy, I expect to see progress over the next year or 
two to fixing it.  I had originally done the prototype expecting that maybe we 
would use envoy as the "out of the box" plugin to the router (continuing to 
support routes and ingress and all the features, but with envoy underneath), 
but the biggest challenge is that envoy isn't really better than haproxy for 
the feature set that ingress and routes expose.  Where envoy really shines is 
in something like istio, where you have a richer model for the ingress / 
service definition that can use the extra bells and whistles.  Ultimately 
ingress + annotations and routes are both targeted at "simple, high scale" use 
of web frontends.  I would expect a lot of people to have their apps "grow up" 
and use node ports or istio ingress as the apps get bigger and more important.  
I don't see them as directly competing.



On Fri, Jan 19, 2018 at 1:36 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
How is it different from the OpenShift router, and what extra benefits does it bring? Can anyone educate me on the differences, or possible use cases where it fits into the ecosystem? Is it replacing the ingress controller, and will it solve the ingress controller's 244-address limitation?

https://blog.heptio.com/announcing-contour-0-3-37f4aa7bc6f7



--
Srinivas Kotaru



Re: authentication for oadm prune in cron job

2016-12-05 Thread Srinivas Naga Kotaru (skotaru)
I am also interested to know the answer.

I am thinking we don't need a token for the oadm command, since it doesn't use token- or OAuth-based authentication. Since it is installed with root privileges, we are using sudo oadm to execute commands.

# sudo oadm prune builds --orphans --confirm
NAMESPACE             NAME
java-hello-universe   os-sample-java-web-1
upgrade               upgrade-1
sujchinncae-test      django-1

We're not running an internal registry for builds. I am not sure we still need to run prune operations in this scenario.

--
Srinivas Kotaru
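
For the token-based route the question below asks about, one hedged sketch (service account name, role, and server URL are made up; the role should be narrowed from cluster-admin to whatever pruning actually needs):

oc create serviceaccount pruner -n default
oadm policy add-cluster-role-to-user cluster-admin system:serviceaccount:default:pruner
TOKEN=$(oc serviceaccounts get-token pruner -n default)   # recent oc clients
# crontab entry, e.g. daily at 02:00:
# 0 2 * * * oadm prune builds --orphans --confirm --token=$TOKEN --server=https://master.example.com:8443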

From:  on behalf of Den Cowboy 

Date: Monday, December 5, 2016 at 12:37 AM
To: "users@lists.openshift.redhat.com" 
Subject: authentication for oadm prune in cron job


We are able to delete old deployments + old images (also inside the registry) 
with our oadm prune commands.
We want to put this in cronjobs. But to perform oadm commands we need to be 
authenticated. Which is the best way to authenticate in a cron job?

At the moment we have 1 admin account (with cluster-admin permissions) + we 
have the system:admin account.

Do we need a new account (or service account) for our cronjobs and which 
permission would we need?



Thanks


Re: oc new-app with root privileges

2016-12-02 Thread Srinivas Naga Kotaru (skotaru)
Sorry, forgot to put blog link

http://developers.redhat.com/blog/2016/10/21/understanding-openshift-security-context-constraints/

--
Srinivas Kotaru

From: Srinivas Naga Kotaru <skot...@cisco.com>
Date: Friday, December 2, 2016 at 2:27 PM
To: Akshaya Khare <khare...@husky.neu.edu>, Ben Parees <bpar...@redhat.com>
Cc: users <users@lists.openshift.redhat.com>, Jordan Liggitt 
<jligg...@redhat.com>
Subject: Re: oc new-app with root privileges

This is the blog post I am using for the steps mentioned here. I haven't tested it yet, but this article talks about how to run a container using anyuid SCC privileges.

--
Srinivas Kotaru

From: Akshaya Khare <khare...@husky.neu.edu>
Date: Friday, December 2, 2016 at 1:59 PM
To: Ben Parees <bpar...@redhat.com>
Cc: users <users@lists.openshift.redhat.com>, Srinivas Naga Kotaru 
<skot...@cisco.com>, Jordan Liggitt <jligg...@redhat.com>
Subject: Re: oc new-app with root privileges

Thanks Ben,

I'll check this reference.
Our developers in the team will need to start a service once the container is up, but systemd is only accessible in my image if it is run as root.

Maybe I can try adding this startup script into the Dockerfile as well.
I'll check both and let you know...

Regards,
AK

On Fri, Dec 2, 2016 at 4:47 PM, Ben Parees 
<bpar...@redhat.com<mailto:bpar...@redhat.com>> wrote:


On Fri, Dec 2, 2016 at 4:35 PM, Akshaya Khare 
<khare...@husky.neu.edu<mailto:khare...@husky.neu.edu>> wrote:
Hi again,

I tried using the suggestions you gave, but somehow it's still failing. On further analysis I understood that this is not actually the image which I created.

Since I'm using source2image, the GitHub source is being mapped onto my image, which has root privileges. Now my image creates a build, and then a new pod is spawned using that build.

Is there some other configuration within these steps which allows me to run the pod as a root user? Or do these steps have nothing to do with the user issue I'm facing?

You can control the user the pod runs as by setting the pod's security context:
http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_podsecuritycontext

But it would be better to try to understand why your image needs to run as root, and change file/etc. permissions so that it does not require that.
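
A minimal sketch of that securityContext field (the pod name is hypothetical; an SCC such as anyuid must still permit the UID):

apiVersion: v1
kind: Pod
metadata:
  name: root-pod                   # hypothetical name
spec:
  securityContext:
    runAsUser: 0                   # run as root; only allowed if the granted SCC permits it
  containers:
  - name: app
    image: my-image-name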



Thanks,
AK

On Thu, Dec 1, 2016 at 6:31 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
I was thinking the below are the right steps, as per my knowledge:


1.   Create a service account

2.   Grant the anyuid SCC to this service account

3.   Add the service account details to the dc object


I might be wrong, but the above steps are what I have in mind. I would also like to get clarity on this topic: what is the right approach to run a container using anyuid privileges?


--
Srinivas Kotaru

From: 
<users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>>
 on behalf of Ben Parees <bpar...@redhat.com<mailto:bpar...@redhat.com>>
Date: Thursday, December 1, 2016 at 1:37 PM
To: Akshaya Khare <khare...@husky.neu.edu<mailto:khare...@husky.neu.edu>>, 
Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>
Cc: users 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: oc new-app with root privileges



On Thu, Dec 1, 2016 at 4:18 PM, Akshaya Khare 
<khare...@husky.neu.edu<mailto:khare...@husky.neu.edu>> wrote:
Hi,

I created my own image which can use s2i with git URLs for my internal projects.

The image has been created such that the systemd services will be working, and in order to do that the image had to be created with the root user.

Now the container spawned from this image only works properly if I spawn it with the below command:

docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -d my-image-name

The container works fine.

Unfortunately, whenever I try to create the container from the OpenShift UI, it creates the pod successfully, but the pod doesn't have access to run since it isn't run as the root user.

I tried to provide this command:

oadm policy add-scc-to-user anyuid -z project-name

But still the pod is created without the root user.

Is there any way to run the pod with root user via both cli or ui?

Assuming your built image defaults to running as root, adding the anyuid SCC should be all you need to do for the image to run as that user, as far as I know.




--
Thanks & Regards,
Akshaya Khare
312-785-3508




--
Ben Parees | OpenShift



--
Thanks & Regards,
Akshaya Khare
312-785-3508



--
Ben Parees | OpenShift



--
Thanks & Regards,
Akshaya Khare
312-785-3508


Re: oc new-app with root privileges

2016-12-01 Thread Srinivas Naga Kotaru (skotaru)
I was thinking the below are the right steps, as per my knowledge:


1.   Create a service account

2.   Grant the anyuid SCC to this service account

3.   Add the service account details to the dc object


I might be wrong, but the above steps are what I have in mind (see the sketch below). I would also like to get clarity on this topic: what is the right approach to run a container using anyuid privileges?
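
Those steps, sketched as commands (project, service account, and dc names are hypothetical):

oc create serviceaccount myapp-sa -n myproject
oadm policy add-scc-to-user anyuid -z myapp-sa -n myproject
# point the deployment config at the service account:
oc patch dc/myapp -n myproject \
  -p '{"spec":{"template":{"spec":{"serviceAccountName":"myapp-sa"}}}}'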


--
Srinivas Kotaru

From:  on behalf of Ben Parees 

Date: Thursday, December 1, 2016 at 1:37 PM
To: Akshaya Khare , Jordan Liggitt 
Cc: users 
Subject: Re: oc new-app with root privileges



On Thu, Dec 1, 2016 at 4:18 PM, Akshaya Khare 
> wrote:
Hi,

I created my own image which can use s2i with git URLs for my internal projects.

The image has been created such that the systemd services will be working, and in order to do that the image had to be created with the root user.

Now the container spawned from this image only works properly if I spawn it with the below command:

docker run -ti -v /sys/fs/cgroup:/sys/fs/cgroup:ro -d my-image-name

The container works fine.

Unfortunately, whenever I try to create the container from the OpenShift UI, it creates the pod successfully, but the pod doesn't have access to run since it isn't run as the root user.

I tried to provide this command:

oadm policy add-scc-to-user anyuid -z project-name

But still the pod is created without the root user.

Is there any way to run the pod with root user via both cli or ui?

Assuming your built image defaults to running as root, adding the anyuid SCC should be all you need to do for the image to run as that user, as far as I know.




--
Thanks & Regards,
Akshaya Khare
312-785-3508




--
Ben Parees | OpenShift


Re: Openshift discovery

2016-11-03 Thread Srinivas Naga Kotaru (skotaru)


% oc get svc
NAME        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
net-tools   172.30.112.9                 8080/TCP   18h

/ $ cat /etc/resolv.conf
search sd-testing.svc.cluster.local svc.cluster.local cluster.local cisco.com
nameserver 173.36.96.19
nameserver 173.37.137.85
nameserver 173.37.142.73
nameserver 173.37.87.157
options timeout:1 attempts:1
options ndots:5

/ $ dig +short net-tools.sd-testing.svc.cluster.local
172.30.112.9

/ $ dig +short yahoo.com

/ $ curl -I yahoo.com
HTTP/1.1 301 Moved Permanently
Date: Thu, 03 Nov 2016 17:22:30 GMT
Server: ATS
Location: https://www.yahoo.com/
Content-Language: en
Cache-Control: no-store, no-cache
Content-Length: 304
Content-Type: text/html
Via: https/1.1 ir37.fp.ne1.yahoo.com (ApacheTrafficServer), 1.1 
alln01-mda1-dmz-wsa-2.cisco.com:80 (Cisco-WSA/9.0.1-162)
Connection: keep-alive


$ nslookup 173.37.137.85
Server:   173.36.96.19
Address:  173.36.96.19#53

** server can't find 85.137.37.173.in-addr.arpa: REFUSED

/ $ nslookup 173.36.96.19
Server:   173.36.96.19
Address:  173.36.96.19#53

19.96.36.173.in-addr.arpa  name = l3ipn-id2-002.cisco.com.


It seems to be working, but I didn't understand why DNS resolution against the other entries in /etc/resolv.conf says the server can't find the record. The last 3 entries in /etc/resolv.conf are our enterprise DNS servers, which might be automatically added to the container's /etc/resolv.conf from the host's /etc/resolv.conf.

--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Thursday, November 3, 2016 at 10:11 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Openshift discovery

Can you show me the output of dig for kubernetes.default.svc.cluster.local AND 
contents of resolv.conf?

On Thu, Nov 3, 2016 at 12:38 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
SKOTARU-M-H06U:~ $ oc get pods
NAME                READY     STATUS             RESTARTS   AGE
net-tools-1-pp4t4   0/1       CrashLoopBackOff   208        17h
SKOTARU-M-H06U:~ $

SKOTARU-M-H06U:~ $ oc debug net-tools-1-pp4t4
Debugging with pod/net-tools-1-pp4t4-debug, original command: sh
Waiting for pod to start ...
Pod IP: 10.1.4.10
If you don't see a command prompt, try pressing enter.

/ $ dig

; <<>> DiG 9.10.4-P3 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 18102
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;.    IN    NS

;; Query time: 0 msec
;; SERVER: 173.36.96.19#53(173.36.96.19)
;; WHEN: Thu Nov 03 16:37:12 UTC 2016
;; MSG SIZE  rcvd: 17


--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Thursday, November 3, 2016 at 7:02 AM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: Openshift discovery

If you "oc debug" the crashing pods, do you get a shell up?

On Nov 3, 2016, at 9:56 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Clayton

Sorry for the confusion. The original problem was that service discovery is not working in regular OpenShift apps, with out-of-the-box images as well as custom images.

I was trying to build an image with net tools for debugging, so it is easy to troubleshoot, as the out-of-the-box images do not have basic net tools. OpenShift throws CrashLoopBackOff for any image I build, so I might be making some mistake. These images work fine in standard Docker.


Sent from my iPhone

On Nov 3, 2016, at 6:24 AM, Clayton Coleman 
<ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:
Alpine uses musl which has known differences from glibc in how it handles DNS 
resolution.  *usually* this is because multiple  nameservers are listed in 
resolv.conf and the first one doesn't answer queries for *svc.cluster.local.  
You can check that by execing into containers and looking at the resolv.conf.

In 3.3, at the host level we configure dnsmasq by default to offer a single 
resolver (so musl doesn't get confused).  You can check how that is configured 
on your hosts.

On Nov 2, 2016, at 5:06 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Trying to debug the below issue reported by a client. For some reason, service discovery has never worked in our platform.

Building an image with net tools for easily troubleshooting these issues from the platform side. I'm sure I'm making a silly mistake.

Re: Openshift discovery

2016-11-03 Thread Srinivas Naga Kotaru (skotaru)
SKOTARU-M-H06U:~ $ oc get pods
NAME                READY     STATUS             RESTARTS   AGE
net-tools-1-pp4t4   0/1       CrashLoopBackOff   208        17h
SKOTARU-M-H06U:~ $

SKOTARU-M-H06U:~ $ oc debug net-tools-1-pp4t4
Debugging with pod/net-tools-1-pp4t4-debug, original command: sh
Waiting for pod to start ...
Pod IP: 10.1.4.10
If you don't see a command prompt, try pressing enter.

/ $ dig

; <<>> DiG 9.10.4-P3 <<>>
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 18102
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;.    IN    NS

;; Query time: 0 msec
;; SERVER: 173.36.96.19#53(173.36.96.19)
;; WHEN: Thu Nov 03 16:37:12 UTC 2016
;; MSG SIZE  rcvd: 17


--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Thursday, November 3, 2016 at 7:02 AM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Openshift discovery

If you "oc debug" the crashing pods, do you get a shell up?

On Nov 3, 2016, at 9:56 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Clayton

Sorry for the confusion. The original problem was that service discovery is not working in regular OpenShift apps, with out-of-the-box images as well as custom images.

I was trying to build an image with net tools for debugging, so it is easy to troubleshoot, as the out-of-the-box images do not have basic net tools. OpenShift throws CrashLoopBackOff for any image I build, so I might be making some mistake. These images work fine in standard Docker.


Sent from my iPhone

On Nov 3, 2016, at 6:24 AM, Clayton Coleman 
<ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:
Alpine uses musl which has known differences from glibc in how it handles DNS 
resolution.  *usually* this is because multiple  nameservers are listed in 
resolv.conf and the first one doesn't answer queries for *svc.cluster.local.  
You can check that by execing into containers and looking at the resolv.conf.

In 3.3, at the host level we configure dnsmasq by default to offer a single 
resolver (so musl doesn't get confused).  You can check how that is configured 
on your hosts.

On Nov 2, 2016, at 5:06 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Trying to debug the below issue reported by a client. For some reason, service discovery has never worked in our platform.

Building an image with net tools for easily troubleshooting these issues from the platform side. I'm sure I'm making a silly mistake, but an image built from the code below always throws a CrashLoopBackOff error.

Wondering what mistake I am making here?

FROM alpine:latest
RUN apk update && apk add bind-tools net-tools curl
ENTRYPOINT ["sh"]

I observed any image I build throwing the same error, for example the ubuntu image from Docker Hub. What is preventing OpenShift from running it?

--
Srinivas Kotaru



Tried all of those options.


In fact, even the first one should work, since resolv.conf has search domains configured.  That would be ideal, since it makes the configuration of pod dependencies easier to port across projects.

Regards,
Tom.




Re: Openshift discovery

2016-11-03 Thread Srinivas Naga Kotaru (skotaru)
Clayton

Sorry for the confusion. The original problem was that service discovery is not working in regular OpenShift apps, with out-of-the-box images as well as custom images.

I was trying to build an image with net tools for debugging, so it is easy to troubleshoot, as the out-of-the-box images do not have basic net tools. OpenShift throws CrashLoopBackOff for any image I build, so I might be making some mistake. These images work fine in standard Docker.


Sent from my iPhone

On Nov 3, 2016, at 6:24 AM, Clayton Coleman 
<ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:

Alpine uses musl which has known differences from glibc in how it handles DNS 
resolution.  *usually* this is because multiple  nameservers are listed in 
resolv.conf and the first one doesn't answer queries for *svc.cluster.local.  
You can check that by execing into containers and looking at the resolv.conf.

In 3.3, at the host level we configure dnsmasq by default to offer a single 
resolver (so musl doesn't get confused).  You can check how that is configured 
on your hosts.

On Nov 2, 2016, at 5:06 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:

Trying to debug the below issue reported by a client. For some reason, service discovery has never worked in our platform.

Building an image with net tools for easily troubleshooting these issues from the platform side. I'm sure I'm making a silly mistake, but an image built from the code below always throws a CrashLoopBackOff error.

Wondering what mistake I am making here?

FROM alpine:latest
RUN apk update && apk add bind-tools net-tools curl
ENTRYPOINT ["sh"]

I observed any image I build throwing the same error, for example the ubuntu image from Docker Hub. What is preventing OpenShift from running it?

--
Srinivas Kotaru
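
One plausible cause, offered as a hedged guess rather than an answer from the thread: ENTRYPOINT ["sh"] exits immediately when the container runs without an attached TTY, so the pod is restarted in a loop and reports CrashLoopBackOff. A variant that keeps PID 1 alive:

FROM alpine:latest
RUN apk update && apk add bind-tools net-tools curl
# sh exits at once when no TTY is attached; keep the container running instead:
ENTRYPOINT ["tail", "-f", "/dev/null"]

One can then debug interactively with: oc exec -it <pod> -- sh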



Tried all of those options.


In fact, even the first one should work, since resolv.conf has search domains configured.  That would be ideal, since it makes the configuration of pod dependencies easier to port across projects.

Regards,
Tom.




Re: quota increase

2016-10-25 Thread Srinivas Naga Kotaru (skotaru)
Oh, OK. That is exactly what I did: edited the JSON/YAML files. As directly editing these files is prone to syntax errors, I was trying to explore a better way.
Thanks for trying to help

--
Srinivas Kotaru

From: David Eads <de...@redhat.com>
Date: Tuesday, October 25, 2016 at 12:43 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: quota increase

Try `oc edit quota/foo`.  Similar command for `limitranges`.  You can also 
write `oc patch` commands, but they tend to be more difficult.
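
For example, a hedged oc patch one-liner (the quota name and values are made up):

oc patch quota/foo -p '{"spec":{"hard":{"limits.cpu":"20","limits.memory":"40Gi"}}}'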

On Tue, Oct 25, 2016 at 3:03 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Hi

Can a cluster-admin increase quota and limits on an existing limits object using an oc or oadm command? If yes, what is the syntax? I couldn't find anything useful.

--
Srinivas Kotaru



quota increase

2016-10-25 Thread Srinivas Naga Kotaru (skotaru)
Hi

Can a cluster-admin increase quota and limits on an existing limits object using an oc or oadm command? If yes, what is the syntax? I couldn't find anything useful.

--
Srinivas Kotaru


Re: AW: Router Sharding

2016-09-26 Thread Srinivas Naga Kotaru (skotaru)
It seems writing documentation is more difficult than coding. I always felt it was the other way ☺

I have been scratching my head for the last 2 days trying to test the basic use case of multiple routers/shards, but no luck yet.

With all due respect, OpenShift documentation has to be improved a lot for real consumption. The current documentation is very high level. We have to spend a lot of time to understand how a feature really works in multiple contexts. It's not at all written keeping customers or platform teams in mind. Just to prove a feature takes weeks and weeks to understand, POC, and prove it. A lot of time wasted…

Is the shards feature similar to a multiple-routers deployment (every router has its own IP failover pods and floating IP, and the same ports to avoid port conflicts)? Since every router has its own floating IP address, is there no more port conflict (80/443), and are no special iptables rules required?

Or is sharding different from a multiple-routers deployment?

--
Srinivas Kotaru

From: Aleksandar Lazic <aleksandar.la...@cloudwerkstatt.com>
Date: Monday, September 26, 2016 at 1:53 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>, Andrew Lau 
<and...@andrewklau.com>, "users@lists.openshift.redhat.com" 
<users@lists.openshift.redhat.com>
Subject: AW: Router Sharding

Hi.

I agree with you, and I have tried to contribute to the doc, but that wasn't an easy task, so I stopped. Maybe I was also too naïve, so blame me for having stopped contributing.

@1: Currently that's not possible; you will need to add the label for the dedicated router to every route.

'oc create route …'

has no option to set labels; you will need to use

oc expose service ... --labels='router=one' --hostname='...'

or you can set the labels in the web console.

Oh, and by the way, the default router MUST also have ROUTE_LABELS if you don't want to expose all routes to the default router (see the sketch below).
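
A hedged sketch of that labelling (router and label names are made up):

# dedicate a sharded router to routes labelled router=one:
oc set env dc/router-one ROUTE_LABELS='router=one'
# give the default router its own selector so it does not pick up everything:
oc set env dc/router ROUTE_LABELS='router=default'
# create a route carrying the shard label:
oc expose service myservice --labels='router=one' --hostname='myapp.example.com'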

@2: You will need the new template from OCP 3.3; there are additional env variables necessary to be able to use more than one router on the same node.

https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L147
https://github.com/openshift/origin/blob/master/images/router/haproxy/conf/haproxy-config.template#L184

and you need to open the additional ports on the router nodes in the iptables chain 'OS_FIREWALL_ALLOW'.

@3: This would be a little bit tricky on the same node, due to the fact that

https://github.com/openshift/origin/blob/master/images/ipfailover/keepalived/lib/failover-functions.sh#L11-L12

only handles one config file. Maybe there is a way with *VIPS, but I have never tried this.

Hth

Aleks

Von: users-boun...@lists.openshift.redhat.com 
[mailto:users-boun...@lists.openshift.redhat.com] Im Auftrag von Srinivas Naga 
Kotaru (skotaru)
Gesendet: Montag, 26. September 2016 21:31
An: Andrew Lau <and...@andrewklau.com>; users@lists.openshift.redhat.com
Betreff: Re: Router Sharding


The current sharding documentation is very high level; it doesn't cover actual real-world use cases step by step.

Anyway, I succeeded in creating 2 shards. I have a lot of questions on this topic and on how to proceed next…


1.  How do I tell a project that all apps created in this project should use router #1 or router #2?

2.  Now we have 3 routers (the default created as part of installation + the additional 2 routers created). How do the ports work? 80, 443 & 1936 are assigned to the default router. I changed the ports to 81/444/1937 and 82/445/1938 for shards #1 and #2 respectively. Are these ports opened automatically, or is explicit action required?

3.  IP failover (floating VIP) is bound to the default router. Do we need to create additional IP failover pods with different IPs matched to shards #1 and #2? Or can we share the same IP failover pods, with a single floating VIP, with the newly created shards as well?

--
Srinivas Kotaru

From: Andrew Lau <and...@andrewklau.com<mailto:and...@andrewklau.com>>
Date: Friday, September 23, 2016 at 7:41 PM
To: Srinivas Naga Kotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, 
"users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: Router Sharding

There are docs here:
- 
https://docs.openshift.org/latest/architecture/core_concepts/routes.html#router-sharding
- 
https://docs.openshift.org/latest/install_config/router/default_haproxy_router.html#creating-router-shards


On Sat, 24 Sep 2016 at 06:13 Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Just saw 3.3 features blog

https://blog.openshift.com/whats-new-openshift-3-3-cluster-management/

We're rethinking our cluster design and want to consolidate to 1 cluster per data center. Initially we were planning on 2 clusters per data center, to serve internal and external traffic each on its own dedicated cluster.

Consolidating to a single cluster per DC will offer multiple advantages to us.

Re: Router Sharding

2016-09-26 Thread Srinivas Naga Kotaru (skotaru)

The current sharding documentation is very high level; it doesn't cover actual real-world use cases step by step.

Anyway, I succeeded in creating 2 shards. I have a lot of questions on this topic and on how to proceed next…


1.   How do I tell a project that all apps created in this project should use router #1 or router #2?

2.   Now we have 3 routers (the default created as part of installation + the additional 2 routers created). How do the ports work? 80, 443 & 1936 are assigned to the default router. I changed the ports to 81/444/1937 and 82/445/1938 for shards #1 and #2 respectively. Are these ports opened automatically, or is explicit action required?

3.   IP failover (floating VIP) is bound to the default router. Do we need to create additional IP failover pods with different IPs matched to shards #1 and #2? Or can we share the same IP failover pods, with a single floating VIP, with the newly created shards as well?

--
Srinivas Kotaru

From: Andrew Lau <and...@andrewklau.com>
Date: Friday, September 23, 2016 at 7:41 PM
To: Srinivas Naga Kotaru <skot...@cisco.com>, 
"users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Router Sharding

There are docs here:
- 
https://docs.openshift.org/latest/architecture/core_concepts/routes.html#router-sharding
- 
https://docs.openshift.org/latest/install_config/router/default_haproxy_router.html#creating-router-shards


On Sat, 24 Sep 2016 at 06:13 Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Just saw 3.3 features blog

https://blog.openshift.com/whats-new-openshift-3-3-cluster-management/

We're rethinking our cluster design and want to consolidate to 1 cluster per data center. Initially we were planning on 2 clusters per data center, to serve internal and external traffic each on its own dedicated cluster.

Consolidating to a single cluster per DC will offer multiple advantages to us. We are currently running the latest 3.2.1 release.

Is Router Sharding available in the 3.2.x branch, or do we need to wait for 3.3? I was thinking this feature has been available from 3.x onwards, as per the available documentation. I am not sure what it means for the upcoming 3.3.

We really want to take advantage of this feature and test it ASAP. The current documentation is not clear, or explains things only at a high level.

Can you help me, or point me to the right documentation which explains step by step how to test this feature?

Can we control routes at the project level, so that clients can't modify their routes to move them from prod to non-prod, or from internal to external routers?

--
Srinivas Kotaru


Router Sharding

2016-09-23 Thread Srinivas Naga Kotaru (skotaru)
Just saw 3.3 features blog

https://blog.openshift.com/whats-new-openshift-3-3-cluster-management/

We're rethinking our cluster design and want to consolidate to 1 cluster per data center. Initially we were planning on 2 clusters per data center, to serve internal and external traffic each on its own dedicated cluster.

Consolidating to a single cluster per DC will offer multiple advantages to us. We are currently running the latest 3.2.1 release.

Is Router Sharding available in the 3.2.x branch, or do we need to wait for 3.3? I was thinking this feature has been available from 3.x onwards, as per the available documentation. I am not sure what it means for the upcoming 3.3.

We really want to take advantage of this feature and test it ASAP. The current documentation is not clear, or explains things only at a high level.

Can you help me, or point me to the right documentation which explains step by step how to test this feature?

Can we control routes at the project level, so that clients can't modify their routes to move them from prod to non-prod, or from internal to external routers?

--
Srinivas Kotaru


Re: scenarios of entire app in a cluster unavailable

2016-09-22 Thread Srinivas Naga Kotaru (skotaru)
Thank you for the info. It is useful.


-- 
Srinivas Kotaru

On 9/20/16, 5:37 AM, "Brenton Leanhardt" <blean...@redhat.com> wrote:

On Mon, Sep 19, 2016 at 6:40 PM, Srinivas Naga Kotaru (skotaru)
<skot...@cisco.com> wrote:
> Trying to understand in which scenarios all the instances of an
> application running in a cluster become unavailable?
>
>
>
>
> OS upgrade failure??
>
> Openshift upgrade bugs/failures/downtime?

The best way to mitigate risks from the first two are to upgrade
independent sets of Nodes in batches to prevent downtime in the event
of unforeseen problems.  This should be rare if there is sufficient
monitoring in the environment.

In the Origin 1.4, OCP 3.4 timeframe it will be much easier to upgrade
batches of Nodes.  It's possible today but it takes a little more
involvement with the ansible inventory.  In large environments with
strict maintenance windows it's common to only update a set of Nodes
during each window.

>
> Router failures ??

This is likely the most common source of user-facing downtime.

>
> Keepalive containers failed??

Unless this event triggered a failover to a pod that was actually in
outage I don't think the Keepalive pod failing would cause a
user-facing outage.  The platform would spawn another.

>
> Floating IP shared by keepalive container had issues??

If somehow the floating IP was in use by another interface on the
network I'm certain bad things would happen.

>
> VXLAN bug or upgrade caused entire cluster network failure?

Catastrophic network failures could indeed cause a major outage.

>
> Human config error ( what those???)

Always.  Best avoided by using a tool like Ansible and testing changes
in other environments before production.

>
>
>
> Is the above list accurate? Can we think of any other possible scenarios where
> the whole application would be down in a cluster due to platform issues?
> whole application will be down in cluster duet to platform issues?
>

I would mention downtime caused by load.  Anecdotally, this is
probably the second most common cause of downtime.  It often relates
to the human error and lack of monitoring.  The more dense the
platform operators wish to keep the environment the more rigor is
needed for monitoring.

This could simply be an error of the pod owner as well.  eg, the JVM
inside the pod might be online however the application running in the
JVM might be throwing out of memory errors due to incorrect assignment
of limits.

>
>
> --
>
> Srinivas Kotaru
>
>


routers probes Vs container probes

2016-08-01 Thread Srinivas Naga Kotaru (skotaru)
I have a few questions on how router probes work in conjunction with the container liveness & readiness probes. Just looking at the router config, it was configured with basic TCP probes but not HTTP. How does a rolling restart scenario work in relation to router probes vs. container probes?

--
Srinivas Kotaru


Re: Hawkular Metrics

2016-06-17 Thread Srinivas Naga Kotaru (skotaru)
Thanks Matt. That is good info 



-- 
Srinivas Kotaru

On 6/17/16, 12:48 PM, "Matt Wringe" <mwri...@redhat.com> wrote:

>- Original Message -
>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> To: users@lists.openshift.redhat.com
>> Sent: Friday, June 17, 2016 2:30:38 PM
>> Subject: Hawkular Metrics
>> 
>> 
>> 
>> Trying to understand how an API can be crafted to get metrics and prepare
>> custom dashboard. My metrics service is up and running, and seeing nice CPU
>> and memory graphs against container metrics tab
>> 
>> 
>> 
>> Few questions around it
>> 
>> 
>> 
>> 1. If I don’t want to use console, can we use an API to get hawkular metrics
>> and prepare custom dashboards?
>
>If you want to use the Hawkular Metrics API directly to read metrics, there is 
>some documentation here for that: 
>https://github.com/openshift/origin-metrics/blob/master/docs/hawkular_metrics.adoc
>
>> 
>> 2. Can I feed hawkular data to any other custom systems to prepare graphs
>> automatically rather we our self create graphs? Like zabbix, cacti or any
>> other graphs generated systems?
>
>From the Hawkular Metrics API call you will get back json data. If you wish to 
>import that into another system you can do so. But we do not provide support 
>for bringing this data into a third party system.
>
>> 
>> 3. I am seeing CPU and memory; are there any other metrics it will generate
>> or expose?
>
>Newer versions should have network metrics. We also collect node level metrics 
>as well (but those are not displayed in the console).
>
>In the future we should have support for pods to provide their own custom 
>metrics as well.
>
>> 
>> 4. Finally, what Is the exact API call we can use to get metrics?
>
>Please see the docs: 
>https://github.com/openshift/origin-metrics/blob/master/docs/hawkular_metrics.adoc
>
>> 
>> 
>> 
>> I am using the below call and am able to see a lot of data, but am not sure
>> which are the interesting fields or the right API call
>> 
>> 
>> 
>> curl -H "Authorization: Bearer  -H "Hawkular-tenant: " -X GET
>> https:// /hawkular/metrics/metrics | jq
>> 
>
>That will give you the list of all metrics for that project. From there you 
>will need to narrow it down and select which metrics you are interested in, 
>and over what time period.
>
>> 
>> 
>> 
>> 
>> 
>> --
>> 
>> 
>> Srinivas Kotaru
>> 


Hawkular Metrics

2016-06-17 Thread Srinivas Naga Kotaru (skotaru)
Trying to understand how an API can be crafted to get metrics and prepare 
custom dashboard. My metrics service is up and running, and seeing nice CPU and 
memory graphs against container metrics tab

Few questions around it


1.   If I don't want to use the console, can we use an API to get Hawkular metrics and prepare custom dashboards?

2.   Can I feed Hawkular data to other custom systems to prepare graphs automatically, rather than creating the graphs ourselves? Like Zabbix, Cacti, or any other graph-generating system?

3.   I am seeing CPU and memory; are there any other metrics it will generate or expose?

4.   Finally, what is the exact API call we can use to get metrics?

I am using the below call and am able to see a lot of data, but I am not sure which are the interesting fields or the right API call:

curl -H "Authorization: Bearer <token>" -H "Hawkular-tenant: <project>" -X GET https://<host>/hawkular/metrics/metrics | jq



--
Srinivas Kotaru


Feedback

2016-06-16 Thread Srinivas Naga Kotaru (skotaru)
Just to share an update:

I was able to successfully install and configure metrics and logging using real certs, node classification, and separate apps and ops ES clusters.

I am happy that this setup is finally working and live. I want to share some feedback from the operations side. There is some scope to improve and make it easier to set up the logging, metrics, router, and registry components. They seem a little difficult, with so many manual steps. There is scope to improve the documentation too.

Thanks for your help; as usual, your cooperation and willingness to help are always on top. I also used Red Hat global support extensively to bring all these services up and running in a prod-grade environment. It was a great help from them too.

--
Srinivas Kotaru

From: skotaru <skot...@cisco.com>
Date: Wednesday, June 15, 2016 at 2:55 PM
To: Eric Wolinetz <ewoli...@redhat.com>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: ENABLE_OPS_CLUSTER

OK, thanks. I deleted the whole stack; let me run the deployer again with it enabled (set to true).



--
Srinivas Kotaru

From: Eric Wolinetz <ewoli...@redhat.com>
Date: Wednesday, June 15, 2016 at 2:41 PM
To: skotaru <skot...@cisco.com>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: ENABLE_OPS_CLUSTER



On Wed, Jun 15, 2016 at 4:01 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Hi

While deploying the EFK stack, I didn't toggle ENABLE_OPS_CLUSTER to true. The default value is "false".

Now my EFK stack is fully installed and working fine. Is there any way we can enable the ops logs without deleting the whole stack and recreating it?

If you do not need to have physical separation of your operations logs and your application logs, you can leave ENABLE_OPS_CLUSTER as false. Setting it to true doesn't add any extra logs; it just creates a second Elasticsearch cluster (the ops cluster) and an ops Kibana instance to serve up the logs within the Elasticsearch ops cluster, and tells Fluentd that the operations logs it is processing go to this new cluster instead.

To be honest, I would recommend reinstalling with ENABLE_OPS_CLUSTER=true and tricking Fluentd into reprocessing all your logs as if it were a new installation. You are missing the ops templates for the different components, which will come in handy especially when you want to later scale up the number of ES nodes for a cluster.

Also you have the added benefit that some of your operations logs aren't in the 
same ES cluster as your application logs (the main benefit for using this 
deployment option)

You can trick Fluentd into reprocessing logs on its node as follows (sketched below):
1. Stop Fluentd on that node
2. Delete the "/var/log/es-containers.log.pos" and "/var/log/node.log.pos" files on that node
3. Start Fluentd on that node again; it will act as if it had not processed any log files yet
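
Sketched as commands, assuming Fluentd is deployed via the logging daemonset keyed off the logging-infra-fluentd node label (the node name is a placeholder):

oc label node <node> logging-infra-fluentd=false --overwrite    # 1. stop Fluentd on the node
ssh <node> 'rm -f /var/log/es-containers.log.pos /var/log/node.log.pos'   # 2. delete the position files
oc label node <node> logging-infra-fluentd=true --overwrite     # 3. start Fluentd again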



--
Srinivas Kotaru



ENABLE_OPS_CLUSTER

2016-06-15 Thread Srinivas Naga Kotaru (skotaru)
Hi

While deploying the EFK stack, I didn't toggle ENABLE_OPS_CLUSTER to true. The default value is "false".

Now my EFK stack is fully installed and working fine. Is there any way we can enable the ops logs without deleting the whole stack and recreating it?


--
Srinivas Kotaru


Re: Metrics deployment

2016-06-15 Thread Srinivas Naga Kotaru (skotaru)
Matt

This issue is fixed. I worked with the Red Hat support team last night. It was narrowed down to dnsmasq, which was stopped on all nodes. As you know, dnsmasq is responsible for forwarding node DNS requests to the master for service resolution. Since it was stopped, the Hawkular service was unable to resolve the Metrics-Cassandra service. I am not sure at this point whether the Ansible-based install is supposed to enable and restart dnsmasq across all worker nodes.

All good now. The metrics service is up and running, and we are able to visualize graphs on the console.
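
The checks involved, sketched (hawkular-cassandra lives in the openshift-infra project on OCP 3.x):

systemctl status dnsmasq                               # should be active on every node
systemctl enable dnsmasq && systemctl start dnsmasq    # if it was stopped
dig +short hawkular-cassandra.openshift-infra.svc.cluster.local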

Thanks for your help. It was great cooperation and collaboration.

-- 
Srinivas Kotaru

On 6/15/16, 8:45 AM, "Matt Wringe" <mwri...@redhat.com> wrote:

>The 'unknown error' is most likely because Hawkular Metrics cannot resolve the 
>'hawkular-cassandra' hostname.
>
>You should be able to exec into the Hawkular Metrics pod before it gets 
>shut down and try to dig, curl or ping the 'hawkular-cassandra' hostname. If 
>you get an error message here about the hostname not being resolvable then 
>that is the problem.
>
>- Original Message -
>> From: "Matt Wringe" <mwri...@redhat.com>
>> To: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> Cc: users@lists.openshift.redhat.com
>> Sent: Wednesday, June 15, 2016 11:36:15 AM
>> Subject: Re: Metrics deployment
>> 
>> Can you please list the secrets used and the deployment options used when
>> deploying metrics?
>> 
>> If you redeploy everything, does the error still exist?
>> 
>> I am trying to check a few error conditions to see if I can reproduce the
>> 'unknown error' here, but not displaying the full error is a bug in Hawkular
>> Metrics which needs to be fixed.
>> 
>> - Original Message -
>> > From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> > To: "Matt Wringe" <mwri...@redhat.com>
>> > Cc: users@lists.openshift.redhat.com
>> > Sent: Tuesday, June 14, 2016 7:14:46 PM
>> > Subject: Re: Metrics deployment
>> > 
>> > 18:34:03,414 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics
>> > service
>> > 18:34:03,532 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to
>> > Cassandra cluster - assuming its not up yet: hawkular-cassandra: unknown
>> > error
>> > 18:34:03,533 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS24: [5] Retrying
>> > connecting
>> > to Cassandra cluster in [1]s...
>> > 18:34:04,534 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics
>> > service
>> > 18:34:04,534 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to
>> > Cassandra cluster - assuming its not up yet: hawkular-cassandra
>> > 18:34:04,535 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS24: [6] Retrying
>> > connecting
>> > to Cassandra cluster in [2]s...
>> > 18:34:06,535 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics
>> > service
>> > 18:34:06,535 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to
>> > Cassandra cluster - assuming its not up yet: hawkular-cassandra
>> > 18:34:06,536 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS24: [7] Retrying
>> > connecting
>> > to Cassandra cluster in [3]s...
>> > 18:34:09,536 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics
>> > service
>> > 18:34:09,537 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to
>> > Cassandra cluster - assuming its not up yet: hawkular-cassandra
>> > 18:34:09,537 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle]
>> > (metricsservice-lifecycle-thread) HAWKMETRICS24: [8] Retrying
>> > connecting
>> 

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
18:34:03,414 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
18:34:03,532 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra: unknown error
18:34:03,533 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [5] Retrying connecting to 
Cassandra cluster in [1]s...
18:34:04,534 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
18:34:04,534 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra
18:34:04,535 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [6] Retrying connecting to 
Cassandra cluster in [2]s...
18:34:06,535 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
18:34:06,535 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra
18:34:06,536 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [7] Retrying connecting to 
Cassandra cluster in [3]s...
18:34:09,536 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
18:34:09,537 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra
18:34:09,537 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [8] Retrying connecting to 
Cassandra cluster in [4]s...
18:34:13,537 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
18:34:13,653 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra: unknown error
18:34:13,653 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [9] Retrying connecting to 
Cassandra cluster in [1]s...
18:34:14,654 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
18:34:14,654 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra
18:34:14,654 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [10] Retrying connecting 
to Cassandra cluster in [2]s...
18:34:16,655 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service



-- 
Srinivas Kotaru

On 6/14/16, 4:02 PM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> wrote:

>A firewall rule for port 9042 was created for node-to-node communication across 
>all nodes, but the issue still persists. Port 53 is already open between all 
>nodes and the masters.
>
>I'm almost completely stuck; any pointers for fresh thoughts are appreciated. 
>
>-- 
>Srinivas Kotaru
>
>On 6/14/16, 3:16 PM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> 
>wrote:
>
>>OK. Let me open port 9042. This port should be open from node to node, 
>>right? 
>>
>>
>>-- 
>>Srinivas Kotaru
>>
>>On 6/14/16, 3:02 PM, "Matt Wringe" <mwri...@redhat.com> wrote:
>>
>>>- Original Message -
>>>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>>>> To: "Matt Wringe" <mwri...@redhat.com>
>>>> Cc: users@lists.openshift.redhat.com
>>>> Sent: Tuesday, June 14, 2016 5:40:36 PM
>>>> Subject: Re: Metrics deployment
>>>> 
>>>> Matt.
>>>> 
>>>> Sure, let us figure out the Hawkular side. I'm pasting 2 logs here:
>>>> 
>>>> 1. oc logs -f
>>>> 2. cat /opt/eap/standalone/log/server.log
>>>
>>>Hmm, it's getting an 'unknown error' when trying to connect to Cassandra, 
>>>which doesn't really tell us anything :/

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
A firewall rule for port 9042 was created for node-to-node communication across 
all nodes, but the issue still persists. Port 53 is already open between all 
nodes and the masters.

I'm almost completely stuck; any pointers for fresh thoughts are appreciated. 
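
(A quick connectivity sketch; the host names are placeholders for your 
environment:)

# From any node, check the Cassandra CQL port on the node running the pod:
nc -zv cassandra-node.example.com 9042
# And confirm that service DNS answers from a master:
dig +short @master-01 hawkular-cassandra.openshift-infra.svc.cluster.local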

-- 
Srinivas Kotaru

On 6/14/16, 3:16 PM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> wrote:

>OK. Let me open port 9042. This port should be open from node to node, 
>right? 
>
>
>-- 
>Srinivas Kotaru
>
>On 6/14/16, 3:02 PM, "Matt Wringe" <mwri...@redhat.com> wrote:
>
>>- Original Message -
>>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>>> To: "Matt Wringe" <mwri...@redhat.com>
>>> Cc: users@lists.openshift.redhat.com
>>> Sent: Tuesday, June 14, 2016 5:40:36 PM
>>> Subject: Re: Metrics deployment
>>> 
>>> Matt.
>>> 
>>> Sure, let us figure it out Hawkular side. Am here pasting 2 logs
>>> 
>>> 1. oc logs –f
>>> 2. cat /opt/eap/standalone/log/server.log
>>
>>Hmm, it's getting an 'unknown error' when trying to connect to Cassandra, 
>>which doesn't really tell us anything :/
>>
>>The port that Hawkular Metrics uses to connect to Cassandra is 9042; you may 
>>also want to make sure that the DNS port is open.
>>
>>> 
>>> Srinivas Kotaru
>>> 
>>>  
>>> --
>>> Srinivas Kotaru
>>> 
>>> On 6/14/16, 2:28 PM, "Matt Wringe" <mwri...@redhat.com> wrote:
>>> 
>>> >
>>> >
>>> >- Original Message -
>>> >> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>>> >> To: "Matt Wringe" <mwri...@redhat.com>
>>> >> Cc: users@lists.openshift.redhat.com
>>> >> Sent: Tuesday, June 14, 2016 4:44:04 PM
>>> >> Subject: Re: Metrics deployment
>>> >> 
>>> >> 
>>> >> I'm still stuck with this issue. It is kind of a chicken-and-egg problem.
>>> >> Heapster health probes are failing since it is waiting for Hawkular to start.
>>> >> Hawkular health probes are failing since it is unable to connect to
>>> >> Cassandra.
>>> >> The Cassandra health probe is also failing.
>>> >
>>> >The first step is to get Cassandra running, ignore Hawkular Metrics and
>>> >Heapster until you have Cassandra running properly. Without Cassandra being
>>> >able to run, those other components will not fully start.
>>> >
>>> >> 
>>> >> @Matt:  Internal DNS looks ups working. I’m able to create apps, build 
>>> >> and
>>> >> deploy code. Router and registry components also working as expected
>>> >> 
>>> >> 
>>> >> oc get pods
>>> >> NAME READY STATUS  RESTARTS   AGE
>>> >> hawkular-cassandra-1-mxd2m   1/1   Running 0  1h
>>> >> hawkular-metrics-gvp9k   0/1   Running 4  11m
>>> >> heapster-uleul   0/1   Running 4  11m
>>> >> metrics-deployer-2z75w   0/1   Completed   0  1h
>>> >
>>> >Cassandra being in "READY 1/1" means that it started up properly. So
>>> >Cassandra is running. Why do you think it's not running? Things like
>>> >readiness probes are expected to fail until the pod is ready. Just because
>>> >there is a failure in the events doesn't mean it's an error condition.
>>> >
>>> >Hawkular Metrics is not running here. So let's figure out why. Ignore
>>> >Heapster until Hawkular Metrics is started.
>>> >
>>> >> 
>>> >> heapster events:
>>> >> 
>>> >> 
>>> >> 
>>> >> Events:
>>> >>   FirstSeen  LastSeenCount   From
>>> >> SubobjectPath   TypeReason  
>>> >> Message
>>> >>   -  -   
>>> >> -   --
>>> >>  ---
>>> >>   1m 1m  1   {default-scheduler }
>>> >> Normal  Scheduled   
>>> >> Successfully
>>> >>  

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
Ok. Let me open the port 9043. This port should be open between node to node 
right? 


-- 
Srinivas Kotaru

On 6/14/16, 3:02 PM, "Matt Wringe" <mwri...@redhat.com> wrote:

>- Original Message -
>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> To: "Matt Wringe" <mwri...@redhat.com>
>> Cc: users@lists.openshift.redhat.com
>> Sent: Tuesday, June 14, 2016 5:40:36 PM
>> Subject: Re: Metrics deployment
>> 
>> Matt.
>> 
>> Sure, let us figure out the Hawkular side. I'm pasting 2 logs here:
>> 
>> 1. oc logs -f
>> 2. cat /opt/eap/standalone/log/server.log
>
>Hmm, it's getting an 'unknown error' when trying to connect to Cassandra, which 
>doesn't really tell us anything :/
>
>The port that Hawkular Metrics uses to connect to Cassandra is 9042; you may 
>also want to make sure that the DNS port is open.
>
>> 
>> Srinivas Kotaru
>> 
>>  
>> --
>> Srinivas Kotaru
>> 
>> On 6/14/16, 2:28 PM, "Matt Wringe" <mwri...@redhat.com> wrote:
>> 
>> >
>> >
>> >- Original Message -
>> >> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> >> To: "Matt Wringe" <mwri...@redhat.com>
>> >> Cc: users@lists.openshift.redhat.com
>> >> Sent: Tuesday, June 14, 2016 4:44:04 PM
>> >> Subject: Re: Metrics deployment
>> >> 
>> >> 
>> >> I'm still stuck with this issue. It is kind of a chicken-and-egg problem.
>> >> Heapster health probes are failing since it is waiting for Hawkular to start.
>> >> Hawkular health probes are failing since it is unable to connect to Cassandra.
>> >> The Cassandra health probe is also failing.
>> >
>> >The first step is to get Cassandra running, ignore Hawkular Metrics and
>> >Heapster until you have Cassandra running properly. Without Cassandra being
>> >able to run, those other components will not fully start.
>> >
>> >> 
>> >> @Matt:  Internal DNS looks ups working. I’m able to create apps, build and
>> >> deploy code. Router and registry components also working as expected
>> >> 
>> >> 
>> >> oc get pods
>> >> NAME READY STATUS  RESTARTS   AGE
>> >> hawkular-cassandra-1-mxd2m   1/1   Running 0  1h
>> >> hawkular-metrics-gvp9k   0/1   Running 4  11m
>> >> heapster-uleul   0/1   Running 4  11m
>> >> metrics-deployer-2z75w   0/1   Completed   0  1h
>> >
>> >Cassandra being in "READY 1/1" means that it started up properly. So
>> >Cassandra is running. Why do you think it's not running? Things like
>> >readiness probes are expected to fail until the pod is ready. Just because
>> >there is a failure in the events doesn't mean it's an error condition.
>> >
>> >Hawkular Metrics is not running here. So let's figure out why. Ignore
>> >Heapster until Hawkular Metrics is started.
>> >
>> >> 
>> >> heapster events:
>> >> 
>> >> 
>> >> 
>> >> Events:
>> >>   FirstSeen   LastSeenCount   From
>> >> SubobjectPath   TypeReason  
>> >> Message
>> >>   -   -   
>> >> -   --
>> >>   ---
>> >>   1m  1m  1   {default-scheduler }
>> >> Normal  Scheduled   
>> >> Successfully
>> >>   assigned heapster-uleul to l3inpn-id2-004.cisco.com
>> >>   1m  1m  1   {kubelet 
>> >> l3inpn-id2-004.cisco.com}  spec.containers{heapster}
>> >>   Normal  Pulling pulling image
>> >>   "registry.access.redhat.com/openshift3/metrics-heapster:latest"
>> >>   1m  1m  1   {kubelet 
>> >> l3inpn-id2-004.cisco.com}  spec.containers{heapster}
>> >>   Normal  Pulled  Successfully pulled image
>> >>   "registry.access.redhat.com/openshift3/metrics-heapster:latest"
>> >>   1m  1m 

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
Matt,

Sure, let us figure out the Hawkular side. I'm pasting 2 logs here:

1. oc logs -f 
2. cat /opt/eap/standalone/log/server.log

Srinivas Kotaru

 
-- 
Srinivas Kotaru

On 6/14/16, 2:28 PM, "Matt Wringe" <mwri...@redhat.com> wrote:

>
>
>- Original Message -
>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> To: "Matt Wringe" <mwri...@redhat.com>
>> Cc: users@lists.openshift.redhat.com
>> Sent: Tuesday, June 14, 2016 4:44:04 PM
>> Subject: Re: Metrics deployment
>> 
>> 
>> I'm still stuck with this issue. It is kind of a chicken-and-egg problem.
>> Heapster health probes are failing since it is waiting for Hawkular to start.
>> Hawkular health probes are failing since it is unable to connect to Cassandra.
>> The Cassandra health probe is also failing.
>
>The first step is to get Cassandra running, ignore Hawkular Metrics and 
>Heapster until you have Cassandra running properly. Without Cassandra being 
>able to run, those other components will not fully start.
>
>> 
>> @Matt:  Internal DNS lookups are working. I'm able to create apps, build and
>> deploy code. Router and registry components are also working as expected
>> 
>> 
>> oc get pods
>> NAME READY STATUS  RESTARTS   AGE
>> hawkular-cassandra-1-mxd2m   1/1   Running 0  1h
>> hawkular-metrics-gvp9k   0/1   Running 4  11m
>> heapster-uleul   0/1   Running 4  11m
>> metrics-deployer-2z75w   0/1   Completed   0  1h
>
>Cassandra being in "READY 1/1" means that it started up properly. So Cassandra 
>is running. Why do you think it's not running? Things like readiness probes are 
>expected to fail until the pod is ready. Just because there is a failure in 
>the events doesn't mean it's an error condition.
>
>Hawkular Metrics is not running here. So let's figure out why. Ignore Heapster 
>until Hawkular Metrics is started.
>
>> 
>> heapster events:
>> 
>> 
>> 
>> Events:
>>   FirstSeen  LastSeen  Count  From                                SubobjectPath              Type     Reason     Message
>>   1m         1m        1      {default-scheduler}                                            Normal   Scheduled  Successfully assigned heapster-uleul to l3inpn-id2-004.cisco.com
>>   1m         1m        1      {kubelet l3inpn-id2-004.cisco.com}  spec.containers{heapster}  Normal   Pulling    pulling image "registry.access.redhat.com/openshift3/metrics-heapster:latest"
>>   1m         1m        1      {kubelet l3inpn-id2-004.cisco.com}  spec.containers{heapster}  Normal   Pulled     Successfully pulled image "registry.access.redhat.com/openshift3/metrics-heapster:latest"
>>   1m         1m        1      {kubelet l3inpn-id2-004.cisco.com}  spec.containers{heapster}  Normal   Created    Created container with docker id a22bb9a246ca
>>   1m         1m        1      {kubelet l3inpn-id2-004.cisco.com}  spec.containers{heapster}  Normal   Started    Started container with docker id a22bb9a246ca
>>   1m         5s        10     {kubelet l3inpn-id2-004.cisco.com}  spec.containers{heapster}  Warning  Unhealthy  Readiness probe failed: The heapster process is not yet started, it is waiting for the Hawkular Metrics to start.
>
>Readiness probe failing here is expected, and we know why from the error 
>message:
>"Readiness probe failed: The heapster process is not yet started, it is 
>waiting for the Hawkular Metrics to start."
>
>Once Hawkular Metrics is running then Heapster should automatically start 
>functioning.
>
>> 
>> Hawkular events :
>> ===
>> 
>> Events:
>>   FirstSeen  LastSeenCount   From
>> SubobjectPath   TypeReason  
>> Message
>>   -  -   
>> -   --  
>> ---
>>   1m 1m  1   {default-scheduler 

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
694864f768a1) 
[/cassandra_data/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/la-3-big-Data.db:level=0,
 
/cassandra_data/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/la-2-big-Data.db:level=0,
 
/cassandra_data/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/la-1-big-Data.db:level=0,
 
/cassandra_data/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/la-4-big-Data.db:level=0,
 ]
INFO  20:15:20 Compacted (b7bca870-326c-11e6-a5f7-694864f768a1) 4 sstables to 
[/cassandra_data/data/system/schema_usertypes-3aa752254f82350b8d5c430fa221fa0a/la-5-big,]
 to level=0.  128 bytes to 32 (~25% of original) in 83ms = 0.000368MB/s.  0 
total partitions merged to 1.  Partition merge counts were {4:1, }
INFO  20:15:20 Compacted (b7ae77a0-326c-11e6-a5f7-694864f768a1) 4 sstables to 
[/cassandra_data/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/la-23-big,]
 to level=0.  2,538 bytes to 527 (~20% of original) in 176ms = 0.002856MB/s.  0 
total partitions merged to 3.  Partition merge counts were {4:3, }



I'm almost stuck here. Any pointers on what to look for? Do any ports need to be 
opened explicitly? 







-- 
Srinivas Kotaru

On 6/14/16, 11:27 AM, "Matt Wringe" <mwri...@redhat.com> wrote:

>- Original Message -
>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> To: "Matt Wringe" <mwri...@redhat.com>
>> Cc: users@lists.openshift.redhat.com
>> Sent: Tuesday, June 14, 2016 2:09:49 PM
>> Subject: Re: Metrics deployment
>> 
>> Not sure what do you mean by installation. Am just running oc new-app -f
>> metrics-deployer.yaml with default values except HOST_NAME and PV storage.
>
>I would suspect something wrong with your node or cluster installation, or you 
>have firewall rules blocking connections between your nodes so that pods cannot 
>connect with each other or access the OpenShift DNS server.
>
>The lifecycle hooks exist to make sure that components only enter the ready 
>state when they are fully started and ready.
>
>Can you check the Hawkular Metrics status page and see what that outputs? eg 
>https://${HAWKULAR_METRICS_HOSTNAME}/hawkular/metrics/status
>
>> 
>> I just deleted entire metrics setup and re running. But not sure this will
>> fix the issue.
>> 
>> $ ./delete_metrics-infra.sh
>> replicationcontroller "hawkular-cassandra-1" deleted
>> replicationcontroller "hawkular-metrics" deleted
>> replicationcontroller "heapster" deleted
>> route "hawkular-metrics" deleted
>> service "hawkular-cassandra" deleted
>> service "hawkular-cassandra-nodes" deleted
>> service "hawkular-metrics" deleted
>> service "heapster" deleted
>> pod "heapster-lyf65" deleted
>> serviceaccount "cassandra" deleted
>> serviceaccount "hawkular" deleted
>> serviceaccount "heapster" deleted
>> template "hawkular-cassandra-node-emptydir" deleted
>> template "hawkular-cassandra-node-pv" deleted
>> template "hawkular-cassandra-services" deleted
>> template "hawkular-heapster" deleted
>> template "hawkular-metrics" deleted
>> template "hawkular-support" deleted
>> secret "hawkular-cassandra-certificate" deleted
>> secret "hawkular-cassandra-secrets" deleted
>> secret "hawkular-metrics-account" deleted
>> secret "hawkular-metrics-certificate" deleted
>> secret "hawkular-metrics-secrets" deleted
>> secret "heapster-secrets" deleted
>> 
>> --
>> Srinivas Kotaru
>> 
>> On 6/14/16, 10:53 AM, "Matt Wringe" <mwri...@redhat.com> wrote:
>> 
>> >- Original Message -
>> >> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> >> To: "Matt Wringe" <mwri...@redhat.com>
>> >> Cc: users@lists.openshift.redhat.com
>> >> Sent: Tuesday, June 14, 2016 1:37:01 PM
>> >> Subject: Re: Metrics deployment
>> >> 
>> >> I removed readiness probes from both hawkular-cassandra-1 &
>> >> hawkular-metrics
>> >> as both status shows probes failed.
>> >
>> >You should not have to remove the probes, this indicates that something is
>> >wrong with your installation.
>> >
>> >> 
>> >> It looks good now. Both containers looks and running
>> >> (hawkular-cassandra-1-kr8ka , hawkular-metrics-vhe3u) however
>> >> heapster-7yl34
>> >> logs still shows Could not connect 

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
Matt

I'm rerunning the setup again. Can you give tips on how to check whether something 
is blocking traffic between nodes? I can see DNS is working, as I pasted earlier 
with examples. Do we need to open any firewall rules? I'm assuming this is SDN and 
open by default within the cluster. Let me know if I explicitly need to open any 
ports between the nodes. 
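
(If a host firewall is the culprit, a rule along these lines opens 9042 between 
nodes; the OS_FIREWALL_ALLOW chain name is an assumption based on 
Ansible-installed hosts, so adjust it to your environment and persist the rule 
in /etc/sysconfig/iptables:)

iptables -A OS_FIREWALL_ALLOW -p tcp --dport 9042 -j ACCEPT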

I'm getting the below output from the browser when I hit the FQDN (this was taken 
before I deleted the stack): 

{
"errorMsg": "Service unavailable while initializing."
}
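
(The same check from the command line, with the route hostname as a placeholder:)

curl -k https://hawkular-metrics.example.com/hawkular/metrics/status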


-- 
Srinivas Kotaru

On 6/14/16, 11:27 AM, "Matt Wringe" <mwri...@redhat.com> wrote:

>- Original Message -
>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> To: "Matt Wringe" <mwri...@redhat.com>
>> Cc: users@lists.openshift.redhat.com
>> Sent: Tuesday, June 14, 2016 2:09:49 PM
>> Subject: Re: Metrics deployment
>> 
>> Not sure what do you mean by installation. Am just running oc new-app -f
>> metrics-deployer.yaml with default values except HOST_NAME and PV storage.
>
>I would suspect something wrong with your node or cluster installation, or you 
>have firewall rules blocking connections between your nodes so that pods cannot 
>connect with each other or access the OpenShift DNS server.
>
>The lifecycle hooks exist to make sure that components only enter the ready 
>state when they are fully started and ready.
>
>Can you check the Hawkular Metrics status page and see what that outputs? eg 
>https://${HAWKULAR_METRICS_HOSTNAME}/hawkular/metrics/status
>
>> 
>> I just deleted entire metrics setup and re running. But not sure this will
>> fix the issue.
>> 
>> $ ./delete_metrics-infra.sh
>> replicationcontroller "hawkular-cassandra-1" deleted
>> replicationcontroller "hawkular-metrics" deleted
>> replicationcontroller "heapster" deleted
>> route "hawkular-metrics" deleted
>> service "hawkular-cassandra" deleted
>> service "hawkular-cassandra-nodes" deleted
>> service "hawkular-metrics" deleted
>> service "heapster" deleted
>> pod "heapster-lyf65" deleted
>> serviceaccount "cassandra" deleted
>> serviceaccount "hawkular" deleted
>> serviceaccount "heapster" deleted
>> template "hawkular-cassandra-node-emptydir" deleted
>> template "hawkular-cassandra-node-pv" deleted
>> template "hawkular-cassandra-services" deleted
>> template "hawkular-heapster" deleted
>> template "hawkular-metrics" deleted
>> template "hawkular-support" deleted
>> secret "hawkular-cassandra-certificate" deleted
>> secret "hawkular-cassandra-secrets" deleted
>> secret "hawkular-metrics-account" deleted
>> secret "hawkular-metrics-certificate" deleted
>> secret "hawkular-metrics-secrets" deleted
>> secret "heapster-secrets" deleted
>> 
>> --
>> Srinivas Kotaru
>> 
>> On 6/14/16, 10:53 AM, "Matt Wringe" <mwri...@redhat.com> wrote:
>> 
>> >- Original Message -
>> >> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> >> To: "Matt Wringe" <mwri...@redhat.com>
>> >> Cc: users@lists.openshift.redhat.com
>> >> Sent: Tuesday, June 14, 2016 1:37:01 PM
>> >> Subject: Re: Metrics deployment
>> >> 
>> >> I removed readiness probes from both hawkular-cassandra-1 &
>> >> hawkular-metrics
>> >> as both status shows probes failed.
>> >
>> >You should not have to remove the probes, this indicates that something is
>> >wrong with your installation.
>> >
>> >> 
>> >> It looks good now. Both containers looks and running
>> >> (hawkular-cassandra-1-kr8ka , hawkular-metrics-vhe3u) however
>> >> heapster-7yl34
>> >> logs still shows Could not connect to
>> >> https://hawkular-metrics:443/hawkular/metrics/status. Curl exit code: 6.
>> >> Status Code 000.
>> >> 
>> >> Are we good or still had issues?
>> >> 
>> >> 
>> >> # oc get pods
>> >> NAME READY STATUSRESTARTS   AGE
>> >> hawkular-cassandra-1-kr8ka   1/1   Running   0  6m
>> >> hawkular-metrics-vhe3u   1/1   Running   2  5m
>> >> heapster-7yl34   0/1   Running 

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
I'm not sure what you mean by installation. I'm just running oc new-app -f 
metrics-deployer.yaml with default values except HOST_NAME and PV storage. 

I just deleted the entire metrics setup and am re-running, but I'm not sure this 
will fix the issue. 

$ ./delete_metrics-infra.sh
replicationcontroller "hawkular-cassandra-1" deleted
replicationcontroller "hawkular-metrics" deleted
replicationcontroller "heapster" deleted
route "hawkular-metrics" deleted
service "hawkular-cassandra" deleted
service "hawkular-cassandra-nodes" deleted
service "hawkular-metrics" deleted
service "heapster" deleted
pod "heapster-lyf65" deleted
serviceaccount "cassandra" deleted
serviceaccount "hawkular" deleted
serviceaccount "heapster" deleted
template "hawkular-cassandra-node-emptydir" deleted
template "hawkular-cassandra-node-pv" deleted
template "hawkular-cassandra-services" deleted
template "hawkular-heapster" deleted
template "hawkular-metrics" deleted
template "hawkular-support" deleted
secret "hawkular-cassandra-certificate" deleted
secret "hawkular-cassandra-secrets" deleted
secret "hawkular-metrics-account" deleted
secret "hawkular-metrics-certificate" deleted
secret "hawkular-metrics-secrets" deleted
secret "heapster-secrets" deleted

-- 
Srinivas Kotaru

On 6/14/16, 10:53 AM, "Matt Wringe" <mwri...@redhat.com> wrote:

>- Original Message -
>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> To: "Matt Wringe" <mwri...@redhat.com>
>> Cc: users@lists.openshift.redhat.com
>> Sent: Tuesday, June 14, 2016 1:37:01 PM
>> Subject: Re: Metrics deployment
>> 
>> I removed readiness probes from both hawkular-cassandra-1 & hawkular-metrics
>> as both status shows probes failed.
>
>You should not have to remove the probes, this indicates that something is 
>wrong with your installation.
>
>> 
>> It looks good now. Both containers looks and running
>> (hawkular-cassandra-1-kr8ka , hawkular-metrics-vhe3u) however heapster-7yl34
>> logs still shows Could not connect to
>> https://hawkular-metrics:443/hawkular/metrics/status. Curl exit code: 6.
>> Status Code 000.
>> 
>> Are we good or still had issues?
>> 
>> 
>> # oc get pods
>> NAME READY STATUSRESTARTS   AGE
>> hawkular-cassandra-1-kr8ka   1/1   Running   0  6m
>> hawkular-metrics-vhe3u   1/1   Running   2  5m
>> heapster-7yl34   0/1   Running   2  5m
>> 
>> 
>> 
>> 
>> 
>> --
>> Srinivas Kotaru
>> 
>> On 6/14/16, 10:07 AM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> wrote:
>> 
>> >Matt
>> >
>> >Just want to share more info by running describe pod.
>> >
>> >It seems to be health probe failing. Do you think it is the issue?
>> >
>> >
>> >
>> ># oc describe pod hawkular-cassandra-1-it5uh
>> >Name:   hawkular-cassandra-1-it5uh
>> >Namespace:  openshift-infra
>> >Node:   l3inpn-id2-003.cisco.com/173.36.96.16
>> >Start Time: Tue, 14 Jun 2016 16:36:21 +
>> >Labels:
>> >
>> > metrics-infra=hawkular-cassandra,name=hawkular-cassandra-1,type=hawkular-cassandra
>> >Status: Running
>> >IP: 10.1.9.2
>> >Controllers:ReplicationController/hawkular-cassandra-1
>> >Containers:
>> >  hawkular-cassandra-1:
>> >Container ID:
>> >
>> > docker://17a9575eb655145859a9207f5c4bde7456f947e27188a056ff2bd08c4ce6ae5d
>> >Image:  
>> > registry.access.redhat.com/openshift3/metrics-cassandra:latest
>> >Image ID:
>> >
>> > docker://ee2117c9848298ca5a0cbbce354fd4adff370435225324ab9d60cd9cd9a95c53
>> >Ports:  9042/TCP, 9160/TCP, 7000/TCP, 7001/TCP
>> >Command:
>> >  /opt/apache-cassandra/bin/cassandra-docker.sh
>> >  --cluster_name=hawkular-metrics
>> >  --data_volume=/cassandra_data
>> >  --internode_encryption=all
>> >  --require_node_auth=true
>> >  --enable_client_encryption=true
>> >  --require_client_auth=true
>> >  --keystore_file=/secret/cassandra.keystore
>> >  --keystore_password_file=/secret/cassandra.keystore.password
>> >  --tr

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
I can see the internal DNS server is also resolving:

dig +short @master-01 hawkular-metrics.openshift-infra.svc.cluster.local
172.30.117.176

So all looks good, but there still seem to be some issues. 
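
(Node-level dig queries the master directly, while pods resolve through the 
node's dnsmasq, so it is worth checking from inside a pod as well; the pod name 
below is taken from earlier output and may differ in your run:)

oc exec hawkular-metrics-vhe3u -- getent hosts hawkular-cassandra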

-- 
Srinivas Kotaru

On 6/14/16, 10:37 AM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> 
wrote:

>I removed the readiness probes from both hawkular-cassandra-1 and 
>hawkular-metrics, as both statuses showed probe failures. 
>
>It looks good now. Both containers are up and running 
>(hawkular-cassandra-1-kr8ka, hawkular-metrics-vhe3u); however, the heapster-7yl34 
>logs still show "Could not connect to 
>https://hawkular-metrics:443/hawkular/metrics/status. Curl exit code: 6. 
>Status Code 000." 
>
>Are we good, or do we still have issues? 
>
>
># oc get pods
>NAME READY STATUSRESTARTS   AGE
>hawkular-cassandra-1-kr8ka   1/1   Running   0  6m
>hawkular-metrics-vhe3u   1/1   Running   2  5m
>heapster-7yl34   0/1   Running   2  5m
>
>
>
>
>
>-- 
>Srinivas Kotaru
>
>On 6/14/16, 10:07 AM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> 
>wrote:
>
>>Matt
>>
>>Just want to share more info by running describe pod.
>>
>>It seems to be health probe failing. Do you think it is the issue? 
>>
>>
>>
>># oc describe pod hawkular-cassandra-1-it5uh
>>Name: hawkular-cassandra-1-it5uh
>>Namespace:openshift-infra
>>Node: l3inpn-id2-003.cisco.com/173.36.96.16
>>Start Time:   Tue, 14 Jun 2016 16:36:21 +
>>Labels:   
>>metrics-infra=hawkular-cassandra,name=hawkular-cassandra-1,type=hawkular-cassandra
>>Status:   Running
>>IP:   10.1.9.2
>>Controllers:  ReplicationController/hawkular-cassandra-1
>>Containers:
>>  hawkular-cassandra-1:
>>Container ID: 
>> docker://17a9575eb655145859a9207f5c4bde7456f947e27188a056ff2bd08c4ce6ae5d
>>Image:
>> registry.access.redhat.com/openshift3/metrics-cassandra:latest
>>Image ID: 
>> docker://ee2117c9848298ca5a0cbbce354fd4adff370435225324ab9d60cd9cd9a95c53
>>Ports:9042/TCP, 9160/TCP, 7000/TCP, 7001/TCP
>>Command:
>>  /opt/apache-cassandra/bin/cassandra-docker.sh
>>  --cluster_name=hawkular-metrics
>>  --data_volume=/cassandra_data
>>  --internode_encryption=all
>>  --require_node_auth=true
>>  --enable_client_encryption=true
>>  --require_client_auth=true
>>  --keystore_file=/secret/cassandra.keystore
>>  --keystore_password_file=/secret/cassandra.keystore.password
>>  --truststore_file=/secret/cassandra.truststore
>>  --truststore_password_file=/secret/cassandra.truststore.password
>>  --cassandra_pem_file=/secret/cassandra.pem
>>QoS Tier:
>>  cpu:BestEffort
>>  memory: BestEffort
>>State:Running
>>  Started:Tue, 14 Jun 2016 16:37:01 +
>>Ready:True
>>Restart Count:0
>>Readiness:exec 
>> [/opt/apache-cassandra/bin/cassandra-docker-ready.sh] delay=0s timeout=1s 
>> period=10s #success=1 #failure=3
>>Environment Variables:
>>  CASSANDRA_MASTER:   true
>>  POD_NAMESPACE:  openshift-infra (v1:metadata.namespace)
>>Conditions:
>>  TypeStatus
>>  Ready   True
>>Volumes:
>>  cassandra-data:
>>Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim 
>> in the same namespace)
>>ClaimName:metrics-cassandra-1
>>ReadOnly: false
>>  hawkular-cassandra-secrets:
>>Type: Secret (a volume populated by a Secret)
>>SecretName:   hawkular-cassandra-secrets
>>  cassandra-token-4urfd:
>>Type: Secret (a volume populated by a Secret)
>>SecretName:   cassandra-token-4urfd
>>Events:
>>  FirstSeen   LastSeenCount   From
>> SubobjectPath   TypeReason  
>> Message
>>  -   -   
>> -   --  
>> ---
>>  27m 27m 1   {default-scheduler }
>> Normal  Scheduled   
>> Successfully assigned hawkular-cassandra-1-it5uh to l3inpn-id2-003.cisco.com
>>  27m 27m 1   {kubelet l3

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
Matt

Just want to share more info from running describe pod.

It seems to be the health probe failing. Do you think that is the issue? 



# oc describe pod hawkular-cassandra-1-it5uh
Name:   hawkular-cassandra-1-it5uh
Namespace:  openshift-infra
Node:   l3inpn-id2-003.cisco.com/173.36.96.16
Start Time: Tue, 14 Jun 2016 16:36:21 +
Labels: 
metrics-infra=hawkular-cassandra,name=hawkular-cassandra-1,type=hawkular-cassandra
Status: Running
IP: 10.1.9.2
Controllers:ReplicationController/hawkular-cassandra-1
Containers:
  hawkular-cassandra-1:
Container ID:   
docker://17a9575eb655145859a9207f5c4bde7456f947e27188a056ff2bd08c4ce6ae5d
Image:  
registry.access.redhat.com/openshift3/metrics-cassandra:latest
Image ID:   
docker://ee2117c9848298ca5a0cbbce354fd4adff370435225324ab9d60cd9cd9a95c53
Ports:  9042/TCP, 9160/TCP, 7000/TCP, 7001/TCP
Command:
  /opt/apache-cassandra/bin/cassandra-docker.sh
  --cluster_name=hawkular-metrics
  --data_volume=/cassandra_data
  --internode_encryption=all
  --require_node_auth=true
  --enable_client_encryption=true
  --require_client_auth=true
  --keystore_file=/secret/cassandra.keystore
  --keystore_password_file=/secret/cassandra.keystore.password
  --truststore_file=/secret/cassandra.truststore
  --truststore_password_file=/secret/cassandra.truststore.password
  --cassandra_pem_file=/secret/cassandra.pem
QoS Tier:
  cpu:  BestEffort
  memory:   BestEffort
State:  Running
  Started:  Tue, 14 Jun 2016 16:37:01 +
Ready:  True
Restart Count:  0
Readiness:  exec 
[/opt/apache-cassandra/bin/cassandra-docker-ready.sh] delay=0s timeout=1s 
period=10s #success=1 #failure=3
Environment Variables:
  CASSANDRA_MASTER: true
  POD_NAMESPACE:openshift-infra (v1:metadata.namespace)
Conditions:
  Type  Status
  Ready True
Volumes:
  cassandra-data:
Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim 
in the same namespace)
ClaimName:  metrics-cassandra-1
ReadOnly:   false
  hawkular-cassandra-secrets:
Type:   Secret (a volume populated by a Secret)
SecretName: hawkular-cassandra-secrets
  cassandra-token-4urfd:
Type:   Secret (a volume populated by a Secret)
SecretName: cassandra-token-4urfd
Events:
  FirstSeen LastSeenCount   From
SubobjectPath   TypeReason  Message
  - -   
-   --  ---
  27m   27m 1   {default-scheduler }
Normal  Scheduled   
Successfully assigned hawkular-cassandra-1-it5uh to l3inpn-id2-003.cisco.com
  27m   27m 1   {kubelet l3inpn-id2-003.cisco.com}  
spec.containers{hawkular-cassandra-1}   Normal  Pulling pulling 
image "registry.access.redhat.com/openshift3/metrics-cassandra:latest"
  27m   27m 1   {kubelet l3inpn-id2-003.cisco.com}  
spec.containers{hawkular-cassandra-1}   Normal  Pulled  
Successfully pulled image 
"registry.access.redhat.com/openshift3/metrics-cassandra:latest"
  27m   27m 1   {kubelet l3inpn-id2-003.cisco.com}  
spec.containers{hawkular-cassandra-1}   Normal  Created Created 
container with docker id 17a9575eb655
  27m   27m 1   {kubelet l3inpn-id2-003.cisco.com}  
spec.containers{hawkular-cassandra-1}   Normal  Started Started 
container with docker id 17a9575eb655
  27m   26m 3   {kubelet l3inpn-id2-003.cisco.com}  
spec.containers{hawkular-cassandra-1}   Warning Unhealthy   
Readiness probe failed: cat: /etc/ld.so.conf.d/*.conf: No such file or directory
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection 
refused'.
Cassandra not in the up and normal state. Current state is
/opt/apache-cassandra/bin/cassandra-docker-ready.sh: line 28: [: =: unary 
operator expected
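
(The probe failures above are expected while Cassandra is still booting; once it 
settles, the ring state can be confirmed directly. The pod name is a placeholder 
from the output above, and this assumes nodetool is on the container's PATH:)

oc exec hawkular-cassandra-1-it5uh -- nodetool status   # look for "UN" (Up/Normal)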





-- 
Srinivas Kotaru

On 6/14/16, 10:00 AM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> 
wrote:

>Matt
>
>DNS service is working 
>
>
>~ dig +short @master-01 kubernetes.default.svc.cluster.local
>172.30.0.1
>~  dig +short @master-01  jenkins.alln-test.svc.cluster.local
>172.30.85.148
>~  dig +short @master-01 cakephp-example.alln-test.svc.cluster.local
>172.30.31.6
>
>I captured the hawkular-metrics log, and it shows the problem. It seems to be 
>unable to connect to the Cassandra cluster:
>
># oc exe

Re: Metrics deployment

2016-06-14 Thread Srinivas Naga Kotaru (skotaru)
wkular-cassandra: unknown error
12:41:33,377 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [13] Retrying connecting 
to Cassandra cluster in [1]s...
12:41:34,377 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
12:41:34,378 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra
12:41:34,378 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [14] Retrying connecting 
to Cassandra cluster in [2]s...
12:41:36,378 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
12:41:36,379 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra
12:41:36,379 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [15] Retrying connecting 
to Cassandra cluster in [3]s...
12:41:39,379 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
12:41:39,380 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra
12:41:39,380 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [16] Retrying connecting 
to Cassandra cluster in [4]s...
12:41:43,380 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
12:41:43,503 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra: unknown error
12:41:43,504 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [17] Retrying connecting 
to Cassandra cluster in [1]s...
12:41:44,504 INFO  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS22: Initializing metrics 
service
12:41:44,505 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS23: Could not connect to 
Cassandra cluster - assuming its not up yet: hawkular-cassandra
12:41:44,505 WARN  [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] 
(metricsservice-lifecycle-thread) HAWKMETRICS24: [18] Retrying connecting 
to Cassandra cluster in [2]s...

-- 
Srinivas Kotaru

On 6/14/16, 6:06 AM, "Matt Wringe" <mwri...@redhat.com> wrote:

>- Original Message -
>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> To: "Matt Wringe" <mwri...@redhat.com>
>> Cc: users@lists.openshift.redhat.com
>> Sent: Monday, June 13, 2016 7:26:06 PM
>> Subject: Re: Metrics deployment
>> 
>> Matt
>> 
>> The PV issue is resolved. I was able to see the PV successfully bound and the
>> Cassandra container has been running. However, it seems the puzzle is not fully
>> solved yet.
>
>Are you sure the OpenShift DNS server is running?
>
>If you are running OSE 3.1, can you please follow this 
>https://access.redhat.com/solutions/2329131 and see if you are now seeing 
>errors in the Hawkular Metrics logs (essentially just run `oc exec 
>hawkular-metrics-x cat /opt/eap/standalone/log/server.log`)
>
>> 
>> I could see other container(heapster) not coming up, and seeing below errors
>> 
>> [skotaru@l3imas-id2-01 metrics]$ oc logs -f heapster-fnkdc
>> Endpoint Check in effect. Checking
>> https://hawkular-metrics:443/hawkular/metrics/status
>> Could not connect to https://hawkular-metrics:443/hawkular/metrics/status.
>> Curl exit code: 6. Status Code 000
>> 'https://hawkular-metrics:443/hawkular/metrics/status' is not accessible
>> [HTTP status code: 000. Curl exit code 6]. Retrying.
>> Could not connect to https://hawkular-metrics:443/hawkular/metrics/status.
>> Curl exit code: 6. Status Code 000
>> 'https://hawkular-metrics:443/hawkular/metrics/status' is not accessible
>> [HTTP status code: 000. Curl exit code 6]. Retrying.
>> 
>> 
>> # oc get pv
>> pv-5gb-0011   5GiRWO   Bound
>> openshift-infra/metrics-cassandra-1 2

Re: Metrics deployment

2016-06-13 Thread Srinivas Naga Kotaru (skotaru)
Matt

That is a good catch. I ran with USE_PERSISTENT_STORAGE=false and it is working.

I adjusted the PV to 5Gi and reran. I will update with progress.

Thank you for your help so far. 

-- 
Srinivas Kotaru

On 6/13/16, 2:27 PM, "Matt Wringe" <mwri...@redhat.com> wrote:

>
>
>- Original Message -
>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> To: "Matt Wringe" <mwri...@redhat.com>
>> Cc: users@lists.openshift.redhat.com
>> Sent: Monday, June 13, 2016 5:21:01 PM
>> Subject: Re: Metrics deployment
>> 
>> Oh ok
>> 
>> Am using PV for metrics
>> 
>> description: "The persistent volume size for each of the Cassandra nodes"
>>   name: CASSANDRA_PV_SIZE
>>   value: "10Gi"
>> 
>> oc get pv
>> NAME  CAPACITY   ACCESSMODES   STATUS  CLAIMREASON
>> AGE
>> pv-1gb-0011GiRWO   Available
>> 4d
>> pv-1gb-0021GiRWO   Available
>> 4d
>> pv-1gb-0031GiRWO   Available
>> 4d
>> pv-1gb-0041GiRWO   Bound   thlatt/mongodb
>> 4d
>> pv-1gb-0051GiRWO   Available
>> 4d
>> pv-2gb-0010   2GiRWO   Available
>> 4d
>> pv-2gb-0062GiRWO   Available
>> 4d
>> pv-2gb-0072GiRWO   Available
>> 4d
>> pv-2gb-0082GiRWO   Available
>> 4d
>> pv-2gb-0092GiRWO   Available
>> 4d
>> pv-5gb-0011   5GiRWO   Available
>> 4d
>> pv-5gb-0012   5GiRWO   Available
>> 4d
>> pv-5gb-0013   5GiRWO   Available
>> 4d
>> pv-5gb-0014   5GiRWO   Available
>> 4d
>> pv-5gb-0015   5GiRWO   Available
>> 4d
>> 
>> am running with below command
>> 
>> $ oc new-app -f metrics-deployer.yaml  ( hardcoded HOSTNAME, MASTER_API and
>> PV info so not passing any parameters)
>> 
>
>I would suspect that Cassandra is blocked because it's waiting for a 10Gi PV to 
>become available, and none of the PVs listed above are big enough.
>
>> 
>> --
>> Srinivas Kotaru
>> 
>> On 6/13/16, 2:12 PM, "Matt Wringe" <mwri...@redhat.com> wrote:
>> 
>> >- Original Message -
>> >> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> >> To: "Matt Wringe" <mwri...@redhat.com>
>> >> Cc: users@lists.openshift.redhat.com
>> >> Sent: Monday, June 13, 2016 4:55:55 PM
>> >> Subject: Re: Metrics deployment
>> >> 
>> >> Matt
>> >> 
>> >> Thanks for looking into. I rerun the setup, but had the same issue
>> >> 
>> >> # oc get pods
>> >> NAME READY STATUS  RESTARTS   AGE
>> >> hawkular-cassandra-1-y2egy   0/1   ContainerCreating   0  5m
>> >> hawkular-metrics-4b16f   0/1   Running 1  4m
>> >> heapster-x2gj2   0/1   Running 2  4m
>> >> metrics-deployer-9v7vc   0/1   Completed   0  6m
>> >> 
>> >> $ oc logs -f hawkular-cassandra-1-y2egy
>> >> Error from server: container "hawkular-cassandra-1" in pod
>> >> "hawkular-cassandra-1-y2egy" is waiting to start: ContainerCreating
>> >
>> >Ok, so it looks like something is blocking the Cassandra pod from starting.
>> >
>> >If you are using persistent storage, Cassandra will not start until the PV
>> >is available. There may be some more information about Cassandra in the pod
>> >section of the console under events.
>> >
>> >What command did you use when deploying the deployer?
>> >
>> >> 
>> >> $ oc logs -f hawkular-metrics-4b16f
>> >> 
>> >> 16:54:25,703 DEBUG [org.jboss.as.config] (MSC service thread 1-4) VM
>> >> Arguments: -Duser.home=/home/jboss -Duser.name=jboss -D[Standalone]
>> >> -XX:+UseCompressedOops -verbose:gc -Xloggc:/opt/eap/standalone/log/gc.log
>> >> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation
>> >> -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading
>> >> -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true
>> >> -Djboss.modules.system.pkgs=org.jboss.logmanager -Djava.awt.headless=true
>>

Re: Metrics deployment

2016-06-13 Thread Srinivas Naga Kotaru (skotaru)
Oh, OK. 

I'm using a PV for metrics:

description: "The persistent volume size for each of the Cassandra nodes"
  name: CASSANDRA_PV_SIZE
  value: "10Gi"

oc get pv
NAME  CAPACITY   ACCESSMODES   STATUS  CLAIMREASON
AGE
pv-1gb-0011GiRWO   Available  4d
pv-1gb-0021GiRWO   Available  4d
pv-1gb-0031GiRWO   Available  4d
pv-1gb-0041GiRWO   Bound   thlatt/mongodb 4d
pv-1gb-0051GiRWO   Available  4d
pv-2gb-0010   2GiRWO   Available  4d
pv-2gb-0062GiRWO   Available  4d
pv-2gb-0072GiRWO   Available  4d
pv-2gb-0082GiRWO   Available  4d
pv-2gb-0092GiRWO   Available  4d
pv-5gb-0011   5GiRWO   Available  4d
pv-5gb-0012   5GiRWO   Available  4d
pv-5gb-0013   5GiRWO   Available  4d
pv-5gb-0014   5GiRWO   Available  4d
pv-5gb-0015   5GiRWO   Available  4d

I'm running with the below command: 

$ oc new-app -f metrics-deployer.yaml  (HOSTNAME, MASTER_API, and PV info are 
hardcoded, so I'm not passing any parameters)
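
(Instead of hardcoding values in the template, the parameters can be passed on 
the command line; the values below are placeholders:)

oc new-app -f metrics-deployer.yaml \
  -p HAWKULAR_METRICS_HOSTNAME=hawkular-metrics.example.com \
  -p MASTER_URL=https://master.example.com:443 \
  -p CASSANDRA_PV_SIZE=5Gi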



-- 
Srinivas Kotaru

On 6/13/16, 2:12 PM, "Matt Wringe" <mwri...@redhat.com> wrote:

>- Original Message -
>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>> To: "Matt Wringe" <mwri...@redhat.com>
>> Cc: users@lists.openshift.redhat.com
>> Sent: Monday, June 13, 2016 4:55:55 PM
>> Subject: Re: Metrics deployment
>> 
>> Matt
>> 
>> Thanks for looking into. I rerun the setup, but had the same issue
>> 
>> # oc get pods
>> NAME READY STATUS  RESTARTS   AGE
>> hawkular-cassandra-1-y2egy   0/1   ContainerCreating   0  5m
>> hawkular-metrics-4b16f   0/1   Running 1  4m
>> heapster-x2gj2   0/1   Running 2  4m
>> metrics-deployer-9v7vc   0/1   Completed   0  6m
>> 
>> $ oc logs -f hawkular-cassandra-1-y2egy
>> Error from server: container "hawkular-cassandra-1" in pod
>> "hawkular-cassandra-1-y2egy" is waiting to start: ContainerCreating
>
>Ok, so it looks like something is blocking the Cassandra pod from starting.
>
>If you are using persistent storage, Cassandra will not start until the PV is 
>available. There may be some more information about Cassandra in the pod 
>section of the console under events.
>
>What command did you use when deploying the deployer?
>
>> 
>> $ oc logs -f hawkular-metrics-4b16f
>> 
>> 16:54:25,703 DEBUG [org.jboss.as.config] (MSC service thread 1-4) VM
>> Arguments: -Duser.home=/home/jboss -Duser.name=jboss -D[Standalone]
>> -XX:+UseCompressedOops -verbose:gc -Xloggc:/opt/eap/standalone/log/gc.log
>> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation
>> -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading
>> -Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true
>> -Djboss.modules.system.pkgs=org.jboss.logmanager -Djava.awt.headless=true
>> -Djboss.modules.policy-permissions=true
>> -Xbootclasspath/p:/opt/eap/jboss-modules.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-1.5.4.Final-redhat-1.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/javax.json-1.0.4.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/jboss-logmanager-ext-1.0.0.Alpha2-redhat-1.jar
>> -Djava.util.logging.manager=org.jboss.logmanager.LogManager
>> -javaagent:/opt/eap/jolokia.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false
>> -Djava.security.egd=file:/dev/./urandom
>> -Dorg.jboss.boot.log.file=/opt/eap/standalone/log/server.log
>> -Dlogging.configuration=file:/opt/eap/standalone/configuration/logging.properties
>> 16:54:27,079 INFO  [org.xnio] (MSC service thread 1-3) XNIO Version
>> 3.0.14.GA-redhat-1
>> 16:54:27,083 INFO  [org.xnio.nio] (MSC service thread 1-3) XNIO NIO
>> Implementation Version 3.0.14.GA-redhat-1
>> 16

Re: Metrics deployment

2016-06-13 Thread Srinivas Naga Kotaru (skotaru)
I'm not sure this is the issue. I don't see an image name in the default 
metrics-deployer.yaml file:

  description: 'Specify prefix for metrics components; e.g. for 
"openshift/origin-metrics-deployer:latest", set prefix "openshift/origin-"'
  name: IMAGE_PREFIX
  value: "registry.access.redhat.com/openshift3/"

Do we need to specify openshift/origin-metrics-deployer:latest?



-- 
Srinivas Kotaru

On 6/13/16, 1:55 PM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> wrote:

>Matt 
>
>Thanks for looking into. I rerun the setup, but had the same issue
>
># oc get pods
>NAME READY STATUS  RESTARTS   AGE
>hawkular-cassandra-1-y2egy   0/1   ContainerCreating   0  5m
>hawkular-metrics-4b16f   0/1   Running 1  4m
>heapster-x2gj2   0/1   Running 2  4m
>metrics-deployer-9v7vc   0/1   Completed   0  6m
>
>$ oc logs -f hawkular-cassandra-1-y2egy
>Error from server: container "hawkular-cassandra-1" in pod 
>"hawkular-cassandra-1-y2egy" is waiting to start: ContainerCreating
>
>$ oc logs -f hawkular-metrics-4b16f
>
>16:54:25,703 DEBUG [org.jboss.as.config] (MSC service thread 1-4) VM 
>Arguments: -Duser.home=/home/jboss -Duser.name=jboss -D[Standalone] 
>-XX:+UseCompressedOops -verbose:gc -Xloggc:/opt/eap/standalone/log/gc.log 
>-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation 
>-XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=3M -XX:-TraceClassUnloading 
>-Xms1303m -Xmx1303m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true 
>-Djboss.modules.system.pkgs=org.jboss.logmanager -Djava.awt.headless=true 
>-Djboss.modules.policy-permissions=true 
>-Xbootclasspath/p:/opt/eap/jboss-modules.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/main/jboss-logmanager-1.5.4.Final-redhat-1.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/javax.json-1.0.4.jar:/opt/eap/modules/system/layers/base/org/jboss/logmanager/ext/main/jboss-logmanager-ext-1.0.0.Alpha2-redhat-1.jar
> -Djava.util.logging.manager=org.jboss.logmanager.LogManager 
>-javaagent:/opt/eap/jolokia.jar=port=8778,protocol=https,caCert=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt,clientPrincipal=cn=system:master-proxy,useSslClientAuthentication=true,extraClientCheck=true,host=0.0.0.0,discoveryEnabled=false
> -Djava.security.egd=file:/dev/./urandom 
>-Dorg.jboss.boot.log.file=/opt/eap/standalone/log/server.log 
>-Dlogging.configuration=file:/opt/eap/standalone/configuration/logging.properties
>16:54:27,079 INFO  [org.xnio] (MSC service thread 1-3) XNIO Version 
>3.0.14.GA-redhat-1
>16:54:27,083 INFO  [org.xnio.nio] (MSC service thread 1-3) XNIO NIO 
>Implementation Version 3.0.14.GA-redhat-1
>16:54:27,101 INFO  [org.jboss.as.server] (Controller Boot Thread) JBAS015888: 
>Creating http management service using socket-binding (management-http)
>16:54:27,104 INFO  [org.jboss.remoting] (MSC service thread 1-3) JBoss 
>Remoting version 3.3.5.Final-redhat-1
>
>$ oc logs -f heapster-x2gj2
>Endpoint Check in effect. Checking 
>https://hawkular-metrics:443/hawkular/metrics/status
>Could not connect to https://hawkular-metrics:443/hawkular/metrics/status. 
>Curl exit code: 6. Status Code 000
>'https://hawkular-metrics:443/hawkular/metrics/status' is not accessible [HTTP 
>status code: 000. Curl exit code 6]. Retrying.
>Could not connect to https://hawkular-metrics:443/hawkular/metrics/status. 
>Curl exit code: 6. Status Code 000
>'https://hawkular-metrics:443/hawkular/metrics/status' is not accessible [HTTP 
>status code: 000. Curl exit code 6]. Retrying.
>Could not connect to https://hawkular-metrics:443/hawkular/metrics/status. 
>Curl exit code: 6. Status Code 000
>
>
> $ oc logs -f metrics-deployer-9v7vc
>
>++ oc create -f -
>serviceaccount "heapster" created
>service "heapster" created
>replicationcontroller "heapster" created
>+ echo 'Success!'
>Success!
>
>-- 
>Srinivas Kotaru
>
>On 6/13/16, 1:49 PM, "Matt Wringe" <mwri...@redhat.com> wrote:
>
>>
>>
>>- Original Message -
>>> From: "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com>
>>> To: users@lists.openshift.redhat.com
>>> Sent: Monday, June 13, 2016 3:58:12 PM
>>> Subject: Metrics deployment
>>> 
>>> 
>>> 
>>> Hi
>>> 
>>> 
>>> 
>>> Am trying to configure metrics in our newly installed clusters. Am seeing
>>> below errors once metrics-deploy script was successful. I used our
>>> environment specific HAWKULAR_METRICS_HOSTNAME and MASTER_URL
>>> 

Metrics deployment

2016-06-13 Thread Srinivas Naga Kotaru (skotaru)
Hi

I'm trying to configure metrics in our newly installed clusters. I'm seeing the 
below errors even though the metrics-deployer script was successful. I used our 
environment-specific HAWKULAR_METRICS_HOSTNAME and MASTER_URL.

# oc new-app -f metrics-deployer.yaml

Note: CASSANDRA PV, MASTER_URL, and HAWKULAR_METRICS_HOSTNAME are customized 
(hardcoded as values)

template "hawkular-heapster" created
Deploying the Heapster component
++ echo 'Deploying the Heapster component'
++ '[' -n '' ']'
++ oc create -f -
++ oc process hawkular-heapster -v 
IMAGE_PREFIX=registry.access.redhat.com/openshift3/,IMAGE_VERSION=latest,MASTER_URL=https://lae3-alln-int-idev01.cisco.com:443,NODE_ID=nodename
serviceaccount "heapster" created
service "heapster" created
replicationcontroller "heapster" created
+ echo 'Success!'
Success!

# oc get pods
NAME READY STATUS  RESTARTS   AGE
hawkular-cassandra-1-9nzio   0/1   ContainerCreating   0  4m
hawkular-metrics-hi7mb   0/1   Running 1  4m
heapster-e8gbu   0/1   Running 2  4m
metrics-deployer-64703   0/1   ContainerCreating   0  3s
metrics-deployer-cd1nf   0/1   Completed   0  5m


$ oc logs -f heapster-e8gbu
Endpoint Check in effect. Checking 
https://hawkular-metrics:443/hawkular/metrics/status
Could not connect to https://hawkular-metrics:443/hawkular/metrics/status. Curl 
exit code: 6. Status Code 000
'https://hawkular-metrics:443/hawkular/metrics/status' is not accessible [HTTP 
status code: 000. Curl exit code 6]. Retrying.
Could not connect to https://hawkular-metrics:443/hawkular/metrics/status. Curl 
exit code: 6. Status Code 000


What is wrong? And why is it checking just hawkular-metrics rather than the full 
routing URL that was provided as HAWKULAR_METRICS_HOSTNAME?
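
(Curl exit code 6 means "could not resolve host": the pod cannot resolve the 
hawkular-metrics service name, so this points at cluster DNS rather than at 
Hawkular itself. Heapster checks the internal service name, not the route, 
which is why the route URL does not appear. A sketch to reproduce the failure 
from inside the pod, assuming curl exists in the image as its readiness check 
implies:)

oc exec heapster-e8gbu -- curl -k -s https://hawkular-metrics:443/hawkular/metrics/status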



--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Prune operations

2016-06-09 Thread Srinivas Naga Kotaru (skotaru)
I also need clarity on whether this cron job needs to run on every node in the cluster, 
only on the masters, or only on the etcd servers. The documentation is not clear.
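
For illustration, a minimal sketch of such a cron entry, scheduled on a single master 
(the schedule, retention flags, and kubeconfig path are assumptions based on a default 
install; pruning images additionally needs a user token):

# /etc/cron.d/openshift-prune -- run on one master only, weekly
0 2 * * 0 root oadm --config=/etc/origin/master/admin.kubeconfig prune builds --orphans --keep-complete=5 --keep-failed=1 --confirm
30 2 * * 0 root oadm --config=/etc/origin/master/admin.kubeconfig prune deployments --keep-complete=5 --keep-failed=1 --confirm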

-- 
Srinivas Kotaru

On 6/9/16, 10:19 AM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> wrote:

>Clayton 
>
>Any recommended interval to run the cron job? Daily, fortnightly, weekly, monthly?  
>
>
>
>-- 
>Srinivas Kotaru
>
>On 6/8/16, 9:56 PM, "Clayton Coleman" <ccole...@redhat.com> wrote:
>
>>At the current time cron would be the recommended approach.
>>
>>On Wed, Jun 8, 2016 at 11:56 PM, Srinivas Naga Kotaru (skotaru)
>><skot...@cisco.com> wrote:
>>> Currently all prune operations are run by oadm command manually.  Is there
>>> any way to automate and schedule? Is old friend Cron is best recommended or
>>> something else?
>>>
>>>
>>>
>>> https://docs.openshift.com/enterprise/3.2/admin_guide/pruning_resources.html
>>>
>>>
>>>
>>>
>>>
>>> Pl advise
>>>
>>>
>>>
>>> --
>>>
>>> Srinivas Kotaru
>>>
>>>
>>> ___
>>> users mailing list
>>> users@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>
>


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Prune operations

2016-06-09 Thread Srinivas Naga Kotaru (skotaru)
Clayton 

Any recommended interval to run the cron job? Daily, fortnightly, weekly, monthly?  



-- 
Srinivas Kotaru

On 6/8/16, 9:56 PM, "Clayton Coleman" <ccole...@redhat.com> wrote:

>At the current time cron would be the recommended approach.
>
>On Wed, Jun 8, 2016 at 11:56 PM, Srinivas Naga Kotaru (skotaru)
><skot...@cisco.com> wrote:
>> Currently all prune operations are run by oadm command manually.  Is there
>> any way to automate and schedule? Is old friend Cron is best recommended or
>> something else?
>>
>>
>>
>> https://docs.openshift.com/enterprise/3.2/admin_guide/pruning_resources.html
>>
>>
>>
>>
>>
>> Pl advise
>>
>>
>>
>> --
>>
>> Srinivas Kotaru
>>
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Prune operations

2016-06-08 Thread Srinivas Naga Kotaru (skotaru)
Thanks Clayton for confirmation 

Srinivas Kotaru

Sent from my iPhone

> On Jun 8, 2016, at 9:56 PM, Clayton Coleman <ccole...@redhat.com> wrote:
> 
> At the current time cron would be the recommended approach.
> 
> On Wed, Jun 8, 2016 at 11:56 PM, Srinivas Naga Kotaru (skotaru)
> <skot...@cisco.com> wrote:
>> Currently all prune operations are run by oadm command manually.  Is there
>> any way to automate and schedule? Is old friend Cron is best recommended or
>> something else?
>> 
>> 
>> 
>> https://docs.openshift.com/enterprise/3.2/admin_guide/pruning_resources.html
>> 
>> 
>> 
>> 
>> 
>> Pl advise
>> 
>> 
>> 
>> --
>> 
>> Srinivas Kotaru
>> 
>> 
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>> 

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Prune operations

2016-06-08 Thread Srinivas Naga Kotaru (skotaru)
Currently all prune operations are run manually with the oadm command. Is there any 
way to automate and schedule them? Is our old friend cron the best recommendation, or 
is there something else?

https://docs.openshift.com/enterprise/3.2/admin_guide/pruning_resources.html


Please advise

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: DockerBuild Vs STI

2016-03-19 Thread Srinivas Naga Kotaru (skotaru)


Sent from my iPhone

On Mar 18, 2016, at 7:08 AM, Ben Parees 
<bpar...@redhat.com<mailto:bpar...@redhat.com>> wrote:



On Fri, Mar 18, 2016 at 2:55 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
We're thinking about the best approach for our code deployment and promotion.

This is our proposed flow for each approach

Docker build: ( Outside of Openshift)
==

Eclipse -> Git -> Jenkins to build and create artifacts -> Jenkins Docker 
Plug-in to create image and push to corporate repo -> oc import-image and oc 
deploy --latest

Basically, build & image creation happen outside of OpenShift.

OpenShift native:
==

Eclipse -> GIT -> Jenkins to build artifacts -> oc binary deploy by the CI/CD tool 
against each app, as CI/CD has admin access to each project

We have two choices here: a) a binary build for each life cycle, or b) a build for the 
dev life cycle, promoted to the other life cycles using docker tag and push. Option B 
naturally makes more sense.
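
As an illustration of option B (the project, app, and registry names here are made up), 
promotion can be as simple as re-tagging:

# within one cluster's registry
oc tag sales/sales-app:dev sales/sales-app:stage
# across clusters: pull from the dev registry, re-tag, and push to the prod registry
docker pull dev-registry.example.com/sales/sales-app:dev
docker tag dev-registry.example.com/sales/sales-app:dev prod-registry.example.com/sales/sales-app:stage
docker push prod-registry.example.com/sales/sales-app:stage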

This approach uses native OpenShift for builds and deployments, and uses the OpenShift 
internal registries to store the final build images for each life cycle.

Can you comment on the pros and cons of each, both for scale (hundreds of thousands of 
deployments) and for ease of operation and maintenance? Whatever the approach, it should 
be repeatable and reliable without errors, since we will automate everything as part of 
the CI/CD pipeline.

The obvious advantage to building images inside openshift is that you get 
resource reuse from your openshift cluster.  If you perform image builds 
outside openshift, you effectively need a separate build farm that can perform 
docker builds.


Yes, we do have an external CI/CD platform that currently does LAE2 builds and deploys 
using binary deployment. We want to leverage it if we go with the docker-build-based 
approach.

I want to hear what advantages we get by doing builds inside versus outside. Are we 
missing .sti capabilities? The speed and scale of the OpenShift build process? 
Integrated deploy notifications? Anything else?

I am interested in comments and discussion around this, as no documentation exists to 
educate or explain.






Thanks in advance and appreciated feedback

--
SrinivasKotaru

___
users mailing list
users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




--
Ben Parees | OpenShift

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: binary tar.gz format

2016-03-18 Thread Srinivas Naga Kotaru (skotaru)
Ben

Does a binary deploy using --from-dir support the .sti/bin/run script?

--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com<mailto:bpar...@redhat.com>>
Date: Thursday, March 10, 2016 at 10:18 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, 
"users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: binary tar.gz format



On Thu, Mar 10, 2016 at 12:54 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can someone comment on whether this is doable or not? I am looking for compatibility 
similar to the OSE 2.x binary deploy.

I don't think it will work exactly like as it did in v2, if you provide an 
archive as your binary input, then the build, when it runs, will have that 
archive available, but it will not be extracted, so you either need to:

1) use a directory (--from-dir pointing to a directory containing your 
extracted content) as the binary input
or
2) your build (s2i assemble script, or your Dockerfile) needs to include logic 
to extract the archive you are providing, prior to proceeding with the build 
logic.
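
A quick sketch of option 1 with the tarball from this thread:

mkdir extracted && tar -xzf sales-dev.tar.gz -C extracted
oc start-build sales-dev --from-dir=extracted --follow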
​




--
Srinivas Kotaru







On 3/9/16, 10:11 PM, 
"users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
 on behalf of Srinivas Naga Kotaru (skotaru)" 
<users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
 on behalf of skot...@cisco.com<mailto:skot...@cisco.com>> wrote:

>One more question
>
>Am exploring binary deployment using .tar.gz format. The reason for this 
>exercise is to take advantage of our OSE2 build system which currently package 
>and generate final artifact in .tar.gz format ( OSE 2.x binary deploy format)
>
>Is OSE 3.x binary deploy support tar.gz format? As per my testing, it is not 
>working
>
># tar -czvf sales-dev.tar.gz ./Deployments ./Configuration
># oc start-build sales-dev --from-file=sales-dev.tar.gz
>
>I rsh into pod and checked source folder. It was not untared
>
># oc rsh sales-dev-3-mdcs3
># ls -l source/
>total 12
>-rw-r--r--. 1 jboss jboss 8395 Mar  9 23:49 sales-dev.tar.gz
>
>
>
>
>--
>Srinivas Kotaru
>
>
>
>
>
>
>On 3/9/16, 9:46 PM, 
>"users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
> on behalf of Srinivas Naga Kotaru (skotaru)" 
><users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
> on behalf of skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
>
>>Ok thanks.
>>
>>Can we raise a RFE for tracking purpose if you guys think it useful feature.
>>
>>
>>--
>>Srinivas Kotaru
>>
>>
>>
>>
>>
>>
>>
>>On 3/9/16, 9:06 PM, "Clayton Coleman" 
>><ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:
>>
>>>Binary builds today have to come from direct user input (directly from
>>>a start command or a call to the rest API).  In the future we plan on
>>>supporting other ways of getting the content.
>>>
>>>> On Mar 9, 2016, at 11:59 PM, Srinivas Naga Kotaru (skotaru) 
>>>> <skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
>>>>
>>>> Clayton
>>>>
>>>> What you described already working if I pass using start-build.
>>>>
>>>> I am trying to pass one sample.war as a argument to template and use this 
>>>> to create initial application. Think about this is sample hello world 
>>>> program as part of provision. Once app was provisioned, app teams can 
>>>> deploy the way you described.
>>>>
>>>> If I put empty string to asFile, app creation is successful but build is 
>>>> waiting forever. So if clients hit browser, they wont get any output and 
>>>> might get confuse.
>>>>
>>>> Am sure we can pass git repo by adjusting strategy but exploring if 
>>>> possible to use  a sample.war as argument to template
>>>>
>>>>
>>>>
>>>> --
>>>> Srinivas Kotaru
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> On 3/9/16, 8:49 PM, "Clayton Coleman" 
>>>>> <ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:
>>>>>
>>>>> The container itself is

Re: binary tar.gz format

2016-03-18 Thread Srinivas Naga Kotaru (skotaru)
Thank you, it's a great help. I am now able to add a few extra arguments, as required by 
nss_wrapper, to the default run script so that the JBoss process looks for the passwd 
file in a different location. I want to run each application as its own OS generic user 
name rather than the default “jboss”.

I leveraged nss_wrapper as per Brenton's advice. It is working for both Java and 
non-Java apps.
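
For the archive, the nss_wrapper additions to the run script look roughly like this 
(the user name and passwd path are placeholders; libnss_wrapper.so is provided by the 
nss_wrapper package):

export NSS_WRAPPER_PASSWD=/tmp/passwd
export NSS_WRAPPER_GROUP=/etc/group
echo "appuser:x:$(id -u):0:app user:${HOME}:/bin/bash" > $NSS_WRAPPER_PASSWD
export LD_PRELOAD=libnss_wrapper.so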


--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com<mailto:bpar...@redhat.com>>
Date: Wednesday, March 16, 2016 at 12:53 PM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, 
"users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: binary tar.gz format

the scripts are in /usr/local/s2i inside the image, so:

docker run 
registry.access.redhat.com/jboss-eap-6/eap64-openshift<http://registry.access.redhat.com/jboss-eap-6/eap64-openshift>
 ls /usr/local/s2i
docker run 
registry.access.redhat.com/jboss-eap-6/eap64-openshift<http://registry.access.redhat.com/jboss-eap-6/eap64-openshift>
 cat /usr/local/s2i/assemble

etc



On Wed, Mar 16, 2016 at 3:49 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Got it . Thanks

Where I can find default assemble and run scripts for JBOSS EAP? I want to add 
few extra export arguments to run script to be affective at build time. Is it 
possible to run both my run script along with default build time run script?

--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com<mailto:bpar...@redhat.com>>
Date: Wednesday, March 16, 2016 at 12:38 PM

To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, 
"users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: binary tar.gz format



On Wed, Mar 16, 2016 at 3:28 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Ben

Did binary deploy using —from-dir support .sti/bin/run script?

It really has nothing to do with the run script.

the contents supplied via --from-dir end up being made available to the 
assemble script, just as if you had provided a git repo and we had cloned the 
git repo.

So it's up to the assemble script what to do with the content you supplied via 
--from-dir.
​



--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com<mailto:bpar...@redhat.com>>
Date: Thursday, March 10, 2016 at 10:18 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, 
"users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: binary tar.gz format



On Thu, Mar 10, 2016 at 12:54 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can someone comment on whether this is doable or not? I am looking for compatibility 
similar to the OSE 2.x binary deploy.

I don't think it will work exactly like as it did in v2, if you provide an 
archive as your binary input, then the build, when it runs, will have that 
archive available, but it will not be extracted, so you either need to:

1) use a directory (--from-dir pointing to a directory containing your 
extracted content) as the binary input
or
2) your build (s2i assemble script, or your Dockerfile) needs to include logic 
to extract the archive you are providing, prior to proceeding with the build 
logic.
​




--
Srinivas Kotaru







On 3/9/16, 10:11 PM, 
"users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
 on behalf of Srinivas Naga Kotaru (skotaru)" 
<users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
 on behalf of skot...@cisco.com<mailto:skot...@cisco.com>> wrote:

>One more question
>
>Am exploring binary deployment using .tar.gz format. The reason for this 
>exercise is to take advantage of our OSE2 build system which currently package 
>and generate final artifact in .tar.gz format ( OSE 2.x binary deploy format)
>
>Is OSE 3.x binary deploy support tar.gz format? As per my testing, it is not 
>working
>
># tar -czvf sales-dev.tar.gz ./Deployments ./Configuration
># oc start-build sales-dev --from-file=sales-dev.tar.gz
>
>I rsh into pod and

Re: DockerBuild Vs STI

2016-03-18 Thread Srinivas Naga Kotaru (skotaru)
John

Thank you. It is very detailed, and I appreciate you sharing your real-life scenario.

It makes a lot of sense as you described. I think we are in the same boat in terms of 
planning and cluster design. We have multiple clusters; in fact, our prod has two 
clusters, each in a separate data center. Our prod app is a composite app spanning 
multiple data centers for HA reasons.

You rightly brought up the pros and cons of using builds inside and outside of OpenShift.

--
Srinivas Kotaru

From: "Skarbek, John" <john.skar...@ca.com<mailto:john.skar...@ca.com>>
Date: Friday, March 18, 2016 at 4:05 AM
To: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>, 
skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Subject: Re: DockerBuild Vs STI


Srinivas,

I’d like to throw another option your way.

Eclipse -> Git -> Jenkins to build and create artifacts -> Jenkins Docker 
Plug-in to create image -> push image to the built-in openshift docker registry

Something you’ll need before the above pipeline is a configuration already in 
place on the openshift cluster (deployment config pointing to the built-in 
registry, service, route, etc…), and the built-in registry needs to be exposed.
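
As a rough sketch of the Jenkins-side push (registry host, project, and user are 
placeholders; the token comes from oc whoami -t):

docker login -u jenkins -p $(oc whoami -t) docker-registry.example.com
docker tag myapp:latest docker-registry.example.com/myproject/myapp:latest
docker push docker-registry.example.com/myproject/myapp:latest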

We are using this in my environment and it works well. We are using our own 
existing build processes that we’ve already had in place without any extra 
unnecessary work, and simply plugging in openshift to complete the deployment. 
Sending an image to the internal registry is essentially what the openshift 
native build process does; therefore, one is able to abuse the native 
extensions that openshift has built to complete deployments in an automated 
fashion.

We do, however, utilize multiple openshift clusters. One for testing and one 
for prod as an example. So we simply have two deploy jobs that get kicked off 
appropriately. If by chance you utilize the same cluster for both dev and prod, 
you can probably take the above model and utilize tagging appropriately to send 
the final deployment image into production.

We like this as we are able to integrate our existing large and ugly pipelines 
that have already been fine tuned to our liking. It also allows our deployment 
engineers to continue the path they already do without having to learn a new 
system. The downside I see to this is that we may be missing out on some 
features that openshift may provide in its build process. It also forces 
some undue troubleshooting, as we are building our own docker containers, which 
is a little difficult for our developers to test with locally. The things we’ve had 
to deal with are documented well within openshift, and most of it concerns the 
use of SCCs and image file permissions. Though, thus far, it hasn’t been 
terrible.


--
John Skarbek


On March 18, 2016 at 02:57:28, Srinivas Naga Kotaru (skotaru) 
(skot...@cisco.com<mailto:skot...@cisco.com>) wrote:

We’re thinking what is the best approach for our code deployment and promotion.

This is our proposed flow for each approach

Docker build: ( Outside of Openshift)
==

Eclipse -> Git -> Jenkins to build and create artifacts -> Jenkins Docker 
Plug-in to create image and push to corporate repo -> oc import-image and oc 
deploy --latest

Basically build & Image creation happening out side of Openshift.

OpenShift native:
==

Eclipse -> GIT -> Jenkins to build artifacts -> OC binary deploy by CI/CD tool 
against each app as CI/CD has admin access to each project

We have two choices here: a) a binary build for each life cycle, or b) a build for the 
dev life cycle, promoted to the other life cycles using docker tag and push. Option B 
naturally makes more sense.

This approach using native openshift for build and deployments. Also using 
openshift internal registries to store final build images for each life cycle.

Can you comment on each pros and cons? From scaling ( hundred thousand 
deployments) as well as easy to operate and maintain. Whatever approach it 
should be repeatable and reliable without errors since we will  automate 
everything as part of CI/CD pipeline.

Thanks in advance and appreciated feedback

--
SrinivasKotaru
___
users mailing list
users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openshift.redhat.com_openshiftmm_listinfo_users=CwICAg=_hRq4mqlUmqpqlyQ5hkoDXIVh6I6pxfkkNxQuL0p-Z0=8IlWeJZqFtf8Tvx1PDV9NsLfM_M0oNfzEXXNp-tpx74=lwMOrMH5uQuS0bXtBW5_dMK5rygsmaJRq1XyBxGrjm0=--Mau5gkyzLBfAg_YMyH1xFRPx739yA31PmjW9CZcDc=
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: dockerfiles for standard images

2016-03-15 Thread Srinivas Naga Kotaru (skotaru)
OK, that sounds like good info. I remember seeing a similar error even after changing 
to the numeric 185; I will try once again.

Finally, are you saying to use a numeric UID: 1001 for non-Java and 185 for Java-based 
apps? Am I right?
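
Per Ben's reply below, a sketch of how the end of that Dockerfile could look (assuming 
the image's jboss user maps to UID 185, as in the EAP image):

FROM myrepo.example.com/mycompnay/eap64-openshift
USER root
RUN yum --enablerepo='rhel-7-server-ose-3.0-rpms' install -y nss_wrapper && yum clean all -y
# ... other RUN/ADD lines as in the original Dockerfile ...
RUN chown -R 185:0 /opt/eap /opt/oracle
USER 185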

--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com<mailto:bpar...@redhat.com>>
Date: Tuesday, March 15, 2016 at 11:12 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: dockerfiles for standard images



On Tue, Mar 15, 2016 at 1:56 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Ben

Thanks for link.

Simple question.

I was trying to build a new JBoss EAP builder image by adding some specific libs per 
our requirements. It involves a few RUN and yum commands. Do we need to switch to the 
root user before installing and then move back to the builder user?

​
Yes, you need to set the user back to root prior to performing root operations 
like yum install.  At the end of your dockerfile, you should set the user back 
to 185.

if you docker inspect the image, you can see it runs as user 185 by default.
​


For JBoss EAP, all processes run as jboss, and the /etc/passwd entry for this user is 
185. When I did something like the below, pod creation failed with an error saying it 
should have a numeric UID.

Yes, this is a restriction that ensures your builder image is not running as 
root, or using a named user that equates to root.  Discussed here: 
https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-specific-guidelines

"Lastly, the final USER declaration in the Dockerfile should specify the user 
ID (numeric value) and not the user name. This allows OpenShift to validate the 
authority the image is attempting to run with and prevent running images that 
are trying to run as root, because running containers as a privileged user 
exposes potential security 
holes<https://docs.openshift.org/latest/install_config/install/prerequisites.html#security-warning>.
 If the image does not specify a USER, it inherits the USER from the parent 
image."


​



FROM 
myrepo.example.com/mycompnay/eap64-openshift<http://myrepo.example.com/mycompnay/eap64-openshift>
USER root
RUN yum --enablerepo='rhel-7-server-ose-3.0-rpms' install -y nss_wrapper && \
yum clean all -y
RUN  mkdir -p /opt/oracle/product/instantclient-basic-12.1.0.2.0
ADD  ./instantclient_12_1/* /opt/oracle/product/instantclient-basic-12.1.0.2.0/
RUN  ln -s /opt/oracle/product/instantclient-basic-12.1.0.2.0/ 
/opt/oracle/product/current
RUN chown -R jboss:jboss /opt/eap
RUN chown -R jboss:jboss /opt/oracle
USER jboss

If I change it to like below, all looks good.

RUN chown -R 1001:0 /opt/eap
RUN chown -R 1001:0 /opt/oracle
USER 1001

I know that for non-Java images you are using 1001. My question: is it the same for 
Java images, for example Tomcat and JBoss EAP? I can see the 1001 user doesn't exist 
in the /etc/passwd file of Tomcat and JBoss EAP based pods.

--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com<mailto:bpar...@redhat.com>>
Date: Tuesday, March 15, 2016 at 10:39 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: dockerfiles for standard images

You can see most of them here:
https://github.com/openshift/?utf8=%E2%9C%93=sti-

sti-base serves as a base image for the others.

Dockerfile.rhel7 is the rhel dockerfile, Dockerfile is the centos dockerfile.


On Tue, Mar 15, 2016 at 12:56 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can someone point me to a link to the standard images' Dockerfiles? I am more interested 
in the OSE images rather than Origin. I know it might require access, but since we 
have access, that should be fine.

--
Srinivas Kotaru

___
users mailing list
users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




--
Ben Parees | OpenShift




--
Ben Parees | OpenShift

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: dockerfiles for standard images

2016-03-15 Thread Srinivas Naga Kotaru (skotaru)
Ben

Thanks for link.

Simple question.

I was trying to build a new JBoss EAP builder image by adding some specific libs per 
our requirements. It involves a few RUN and yum commands. Do we need to switch to the 
root user before installing and then move back to the builder user?

For JBoss EAP, all processes run as jboss, and the /etc/passwd entry for this user is 
185. When I did something like the below, pod creation failed with an error saying it 
should have a numeric UID.

FROM myrepo.example.com/mycompnay/eap64-openshift
USER root
RUN yum --enablerepo='rhel-7-server-ose-3.0-rpms' install -y nss_wrapper && \
yum clean all -y
RUN  mkdir -p /opt/oracle/product/instantclient-basic-12.1.0.2.0
ADD  ./instantclient_12_1/* /opt/oracle/product/instantclient-basic-12.1.0.2.0/
RUN  ln -s /opt/oracle/product/instantclient-basic-12.1.0.2.0/ 
/opt/oracle/product/current
RUN chown -R jboss:jboss /opt/eap
RUN chown -R jboss:jboss /opt/oracle
USER jboss

If I change it to like below, all looks good.

RUN chown -R 1001:0 /opt/eap
RUN chown -R 1001:0 /opt/oracle
USER 1001

I know that for non-Java images you are using 1001. My question: is it the same for 
Java images, for example Tomcat and JBoss EAP? I can see the 1001 user doesn't exist 
in the /etc/passwd file of Tomcat and JBoss EAP based pods.

--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com<mailto:bpar...@redhat.com>>
Date: Tuesday, March 15, 2016 at 10:39 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: dockerfiles for standard images

You can see most of them here:
https://github.com/openshift/?utf8=%E2%9C%93=sti-

sti-base serves as a base image for the others.

Dockerfile.rhel7 is the rhel dockerfile, Dockerfile is the centos dockerfile.


On Tue, Mar 15, 2016 at 12:56 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can someone point me to a link to the standard images' Dockerfiles? I am more interested 
in the OSE images rather than Origin. I know it might require access, but since we 
have access, that should be fine.

--
Srinivas Kotaru

___
users mailing list
users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




--
Ben Parees | OpenShift

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


dockerfiles for standard images

2016-03-15 Thread Srinivas Naga Kotaru (skotaru)
Can someone point me to a link to the standard images' Dockerfiles? I am more interested 
in the OSE images rather than Origin. I know it might require access, but since we 
have access, that should be fine.

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Java heap and arguments

2016-03-14 Thread Srinivas Naga Kotaru (skotaru)
OSE 2.x supported adding Java arguments, including setting the heap and other values, 
by the clients themselves.

What is the recommended procedure for OSE 3.x? Is there any documentation that 
describes it?
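
In case it helps future readers: with the xPaaS images this is typically done through 
environment variables on the deployment config rather than by editing the image. A 
sketch (JAVA_OPTS_APPEND is what the JBoss images honor, if I recall correctly; other 
images may use a different variable):

oc env dc/myapp JAVA_OPTS_APPEND='-Xms512m -Xmx1024m'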

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: api and console port : 8443

2016-03-13 Thread Srinivas Naga Kotaru (skotaru)
For node routing, we have to use DMZ-based proxy servers. They are the endpoints for 
clients, and they proxy to the OpenShift routers.

OpenShift routers don't support the DMZ. We can't directly expose or put OpenShift 
nodes into the DMZ, as they share the same VXLAN with the application nodes. I heard 
there is a tunneling option, but I didn't understand its concepts, and the 
documentation isn't clear.

Since we have multiple data centers we have something like

GLB -> DC RP -> OpenShift Routers -> OpenShift Nodes


--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Saturday, March 12, 2016 at 1:43 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>, 
Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Subject: Re: api and console port : 8443


Hi.


To be more precise.


Do you use the openshift ability to route based on labels ( ROUTE_LABELS ) and 
dedicated management labeled nodes?

BR Aleks


From: 
users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
 
<users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>>
 on behalf of Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Sent: Friday, March 11, 2016 20:54
To: Srinivas Naga Kotaru (skotaru); Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443


Hi.


You mean different network routes, right?


what else have you changed to use the master on 443?


Which version of HA have you chosen?

https://docs.openshift.com/enterprise/3.1/architecture/infrastructure_components/kubernetes_infrastructure.html#high-availability-masters


BR Aleks



From: Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>>
Sent: Friday, March 11, 2016 19:17
To: Aleksandar Lazic; Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443

Thanks for sharing your experience and writeup

We decided to go with different route. don’t want to involve run time layer 
with management traffic and also simplify as much as possible since we have 
multiple clusters in each life cycle ( non prod, prod etc)

This is final approach we decided to go

1.  Change port 8443 to 443 during ansible fresh installation ( Our Dev builds 
starting this week onwards)
2. Use a DNS based load balancer to forward to 3 masters in each cluster.

Hope this works. Pl comment if it doesn’t work so we can a fresh look.

--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Friday, March 11, 2016 at 2:29 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, Jordan Liggitt 
<jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443


Hi.


I have read this post and the solution works.

The handycap from my point of view is that you will need to use official 
certificates in the master(s).

I have written a more or less detailed description how we at cloudwerkstatt 
solved this issue.


https://alword.wordpress.com/2016/03/11/make-openshift-console-available-on-port-443-https/



Feedback is very welcome.


Best Regards

Aleks


From: Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>>
Sent: Thursday, March 10, 2016 18:47
To: Aleksandar Lazic; Jordan Liggitt; Clayton Coleman
Cc: users@

Re: api and console port : 8443

2016-03-11 Thread Srinivas Naga Kotaru (skotaru)
Yes, got it. Yes, that's right.

--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Friday, March 11, 2016 at 10:59 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>,
 Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443

I mean, low TTL for the DNS, so that if you want to dynamically update them you 
can at least in theory change them.  If you have those being VIPS, less of a 
concern.

On Fri, Mar 11, 2016 at 1:48 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Thanks Clayton. Am also excited to see how it works. As you said, it should as 
per theory

Sure will keep a low TTL for master VIP. Just curios, any reason why low TTL ?


--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Friday, March 11, 2016 at 10:33 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>,
 Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>

Subject: Re: api and console port : 8443

It should, although I would set a low TTL on the load balancer.  We'll make 
sure to test with this configuration as well.

On Fri, Mar 11, 2016 at 1:17 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Thanks for sharing your experience and writeup

We decided to go with different route. don’t want to involve run time layer 
with management traffic and also simplify as much as possible since we have 
multiple clusters in each life cycle ( non prod, prod etc)

This is final approach we decided to go

1.  Change port 8443 to 443 during ansible fresh installation ( Our Dev builds 
starting this week onwards)
2. Use a DNS based load balancer to forward to 3 masters in each cluster.

Hope this works. Pl comment if it doesn’t work so we can a fresh look.

--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Friday, March 11, 2016 at 2:29 AM

To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, Jordan Liggitt 
<jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443


Hi.


I have read this post and the solution works.

The handycap from my point of view is that you will need to use official 
certificates in the master(s).

I have written a more or less detailed description how we at cloudwerkstatt 
solved this issue.


https://alword.wordpress.com/2016/03/11/make-openshift-console-available-on-port-443-https/



Feedback is very welcome.


Best Regards

Aleks


From: Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>>
Sent: Thursday, March 10, 2016 18:47
To: Aleksandar Lazic; Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443

Got it  thanks

Someone write a decent article on how to run master on 443 by taking advantage 
of service and external end point.
https://blog.openshift.com/run-openshift-console-port-443/

Re: api and console port : 8443

2016-03-11 Thread Srinivas Naga Kotaru (skotaru)
Thanks, Clayton. I am also excited to see how it works. As you said, it should in 
theory.

Sure, I will keep a low TTL for the master VIP. Just curious, any reason for the low TTL?


--
Srinivas Kotaru

From: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Date: Friday, March 11, 2016 at 10:33 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>,
 Jordan Liggitt <jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443

It should, although I would set a low TTL on the load balancer.  We'll make 
sure to test with this configuration as well.

On Fri, Mar 11, 2016 at 1:17 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Thanks for sharing your experience and writeup

We decided to go with different route. don’t want to involve run time layer 
with management traffic and also simplify as much as possible since we have 
multiple clusters in each life cycle ( non prod, prod etc)

This is final approach we decided to go

1.  Change port 8443 to 443 during ansible fresh installation ( Our Dev builds 
starting this week onwards)
2. Use a DNS based load balancer to forward to 3 masters in each cluster.

Hope this works. Pl comment if it doesn’t work so we can a fresh look.

--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Friday, March 11, 2016 at 2:29 AM

To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, Jordan Liggitt 
<jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443


Hi.


I have read this post and the solution works.

The handycap from my point of view is that you will need to use official 
certificates in the master(s).

I have written a more or less detailed description how we at cloudwerkstatt 
solved this issue.


https://alword.wordpress.com/2016/03/11/make-openshift-console-available-on-port-443-https/



Feedback is very welcome.


Best Regards

Aleks


From: Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>>
Sent: Thursday, March 10, 2016 18:47
To: Aleksandar Lazic; Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443

Got it  thanks

Someone write a decent article on how to run master on 443 by taking advantage 
of service and external end point.
https://blog.openshift.com/run-openshift-console-port-443/


Your setup or article content is pretty much inline with hosting a simple tcp 
based load balancer and listen on VIP:443 for client requests and forward it to 
masters:8443.

I knew api and console can be load balanced for HA. Am not tested we can use 
the same VIP for controller. I knew it is still active/passive.

--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Thursday, March 10, 2016 at 1:20 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, Jordan Liggitt 
<jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: "users@lists.opensh

Re: api and console port : 8443

2016-03-11 Thread Srinivas Naga Kotaru (skotaru)
Thanks for sharing your experience and write-up.

We decided to go a different route: we don't want to involve the runtime layer in 
management traffic, and we want to simplify as much as possible, since we have multiple 
clusters in each life cycle (non-prod, prod, etc.).

This is the final approach we decided on:

1.  Change port 8443 to 443 during the ansible fresh installation (our dev builds 
start this week).
2. Use a DNS-based load balancer to forward to the 3 masters in each cluster.

I hope this works. Please comment if it won't, so we can take a fresh look.
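
For step 1, these are the openshift-ansible inventory variables involved, as far as I 
can tell (the hostname is a placeholder):

[OSEv3:vars]
openshift_master_api_port=443
openshift_master_console_port=443
openshift_master_cluster_public_hostname=master.example.com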

--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Friday, March 11, 2016 at 2:29 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, Jordan Liggitt 
<jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443


Hi.


I have read this post and the solution works.

The handycap from my point of view is that you will need to use official 
certificates in the master(s).

I have written a more or less detailed description how we at cloudwerkstatt 
solved this issue.


https://alword.wordpress.com/2016/03/11/make-openshift-console-available-on-port-443-https/



Feedback is very welcome.


Best Regards

Aleks


From: Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>>
Sent: Thursday, March 10, 2016 18:47
To: Aleksandar Lazic; Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443

Got it  thanks

Someone write a decent article on how to run master on 443 by taking advantage 
of service and external end point.
https://blog.openshift.com/run-openshift-console-port-443/


Your setup or article content is pretty much inline with hosting a simple tcp 
based load balancer and listen on VIP:443 for client requests and forward it to 
masters:8443.

I knew api and console can be load balanced for HA. Am not tested we can use 
the same VIP for controller. I knew it is still active/passive.

--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Thursday, March 10, 2016 at 1:20 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, Jordan Liggitt 
<jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443


Hi.


       [tls passthrough]

openshift-default-router ---> [POD own haproxy with ssl] --> master:8443


you can think on this like a reverse proxy, which it is ;-)


BR Aleks


From: Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>>
Sent: Thursday, March 10, 2016 09:41
To: Aleksandar Lazic; Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443

Aleksandar

Thanks for reply. I didn’t quite understand the flow how it works. Can you 
please explain me a little brief?


--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Thursday, Mar

Re: how to tag and search exposed registry images

2016-03-10 Thread Srinivas Naga Kotaru (skotaru)
Thank you, thank you!

All working now. 
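
For the archive: the pull/tag/push sequence Clayton describes below, with the 
repository names from this thread, looks roughly like this:

docker pull docker-registry-default.laetest5.cisco.com/sales/sales-prod
docker tag docker-registry-default.laetest5.cisco.com/sales/sales-prod \
  docker-registry-default.laetest1.cisco.com/sales/sales-prod1
docker push docker-registry-default.laetest1.cisco.com/sales/sales-prod1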


-- 
Srinivas Kotaru







On 3/10/16, 1:12 PM, "Clayton Coleman" <ccole...@redhat.com> wrote:

>Search is not supported today.  Tag would work, the error you're
>describing seems like sales/sales-prod doesn't exist in the local
>docker client.  To tag from the docker client you have to pull, then
>tag, then push.
>
>On Thu, Mar 10, 2016 at 3:58 PM, Srinivas Naga Kotaru (skotaru)
><skot...@cisco.com> wrote:
>> Am trying to promote a image created dev cluster registry to prod cluster
>> registry
>>
>> Unable to search and tag if I use exposed registry URL for login.
>>
>> docker login docker-registry-default.laetest5.cisco.com
>> Username (skotaru): **
>> Password:
>> WARNING: login credentials saved in /home/skotaru/.docker/config.json
>> Login Succeeded
>>
>> # docker tag sales/sales-prod
>> docker-registry-default.laetest1.cisco.com/sales/sales-prod1 ( another
>> cluster registry)
>> Error response from daemon: no such id: sales/sales-prod
>>
>> # docker search sales-prod
>> NAME  DESCRIPTION   STARS OFFICIAL   AUTOMATED
>>
>> Below is my pod and respective image stream
>>
>> #oc get pods
>> NAME READY STATUS  RESTARTS   AGE
>> sales-prod-2-s5sg5   1/1   Running 0  11h
>>
>> # oc get is
>> NAME DOCKER REPOTAGS  UPDATED
>> sales-prod   172.30.238.173:5000/sales/sales-prod   latest11 hours ago
>>
>> 172.30.238.173:5000 is cluster IP and
>> docker-registry-default.laetest5.cisco.com is exposed route.
>>
>>
>> Am I doing anything wrong here?
>>
>> --
>> Srinivas Kotaru
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


how to tag and search exposed registry images

2016-03-10 Thread Srinivas Naga Kotaru (skotaru)
I am trying to promote an image created in the dev cluster registry to the prod cluster 
registry.

I am unable to search and tag if I use the exposed registry URL for login.

docker login docker-registry-default.laetest5.cisco.com
Username (skotaru): **
Password:
WARNING: login credentials saved in /home/skotaru/.docker/config.json
Login Succeeded

# docker tag sales/sales-prod 
docker-registry-default.laetest1.cisco.com/sales/sales-prod1 ( another cluster 
registry)
Error response from daemon: no such id: sales/sales-prod

# docker search sales-prod
NAME  DESCRIPTION   STARS OFFICIAL   AUTOMATED

Below is my pod and respective image stream

#oc get pods
NAME READY STATUS  RESTARTS   AGE
sales-prod-2-s5sg5   1/1   Running 0  11h

# oc get is
NAME DOCKER REPOTAGS  UPDATED
sales-prod   172.30.238.173:5000/sales/sales-prod   latest11 hours ago

172.30.238.173:5000 is cluster IP and 
docker-registry-default.laetest5.cisco.com is exposed route.


Am I doing anything wrong here?

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: binary tar.gz format

2016-03-10 Thread Srinivas Naga Kotaru (skotaru)
Thanks Ben.

The 1st approach is tested and working for me.

Let me test the 2nd one.

--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com<mailto:bpar...@redhat.com>>
Date: Thursday, March 10, 2016 at 10:18 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>
Cc: "ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>, 
"users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: binary tar.gz format



On Thu, Mar 10, 2016 at 12:54 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
Can someone comment on whether this is doable or not? I am looking for compatibility 
similar to the OSE 2.x binary deploy.

I don't think it will work exactly like as it did in v2, if you provide an 
archive as your binary input, then the build, when it runs, will have that 
archive available, but it will not be extracted, so you either need to:

1) use a directory (--from-dir pointing to a directory containing your 
extracted content) as the binary input
or
2) your build (s2i assemble script, or your Dockerfile) needs to include logic 
to extract the archive you are providing, prior to proceeding with the build 
logic.
​




--
Srinivas Kotaru







On 3/9/16, 10:11 PM, 
"users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
 on behalf of Srinivas Naga Kotaru (skotaru)" 
<users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
 on behalf of skot...@cisco.com<mailto:skot...@cisco.com>> wrote:

>One more question
>
>Am exploring binary deployment using .tar.gz format. The reason for this 
>exercise is to take advantage of our OSE2 build system which currently package 
>and generate final artifact in .tar.gz format ( OSE 2.x binary deploy format)
>
>Is OSE 3.x binary deploy support tar.gz format? As per my testing, it is not 
>working
>
># tar -czvf sales-dev.tar.gz ./Deployments ./Configuration
># oc start-build sales-dev --from-file=sales-dev.tar.gz
>
>I rsh into pod and checked source folder. It was not untared
>
># oc rsh sales-dev-3-mdcs3
># ls -l source/
>total 12
>-rw-r--r--. 1 jboss jboss 8395 Mar  9 23:49 sales-dev.tar.gz
>
>
>
>
>--
>Srinivas Kotaru
>
>
>
>
>
>
>On 3/9/16, 9:46 PM, 
>"users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
> on behalf of Srinivas Naga Kotaru (skotaru)" 
><users-boun...@lists.openshift.redhat.com<mailto:users-boun...@lists.openshift.redhat.com>
> on behalf of skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
>
>>Ok thanks.
>>
>>Can we raise a RFE for tracking purpose if you guys think it useful feature.
>>
>>
>>--
>>Srinivas Kotaru
>>
>>
>>
>>
>>
>>
>>
>>On 3/9/16, 9:06 PM, "Clayton Coleman" 
>><ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:
>>
>>>Binary builds today have to come from direct user input (directly from
>>>a start command or a call to the rest API).  In the future we plan on
>>>supporting other ways of getting the content.
>>>
>>>> On Mar 9, 2016, at 11:59 PM, Srinivas Naga Kotaru (skotaru) 
>>>> <skot...@cisco.com<mailto:skot...@cisco.com>> wrote:
>>>>
>>>> Clayton
>>>>
>>>> What you described already working if I pass using start-build.
>>>>
>>>> I am trying to pass one sample.war as a argument to template and use this 
>>>> to create initial application. Think about this is sample hello world 
>>>> program as part of provision. Once app was provisioned, app teams can 
>>>> deploy the way you described.
>>>>
>>>> If I put empty string to asFile, app creation is successful but build is 
>>>> waiting forever. So if clients hit browser, they wont get any output and 
>>>> might get confuse.
>>>>
>>>> Am sure we can pass git repo by adjusting strategy but exploring if 
>>>> possible to use  a sample.war as argument to template
>>>>
>>>>
>>>>
>>>> --
>>>> Srinivas Kotaru
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> On 3/9/16, 8:49 PM, "Clayton Coleman" 
>>>>> <ccole...@redhat.com<mailto:ccole...@redhat.com>> wrote:
>>>>>
>>>>> The co

Re: api and console port : 8443

2016-03-10 Thread Srinivas Naga Kotaru (skotaru)
Got it, thanks.

Someone wrote a decent article on how to run the master on 443 by taking advantage 
of a service and an external endpoint:
https://blog.openshift.com/run-openshift-console-port-443/

Your setup and the article's content are pretty much in line: host a simple TCP-based 
load balancer that listens on VIP:443 for client requests and forwards them to 
masters:8443.

I know the API and console can be load balanced for HA. I have not tested whether we 
can use the same VIP for the controllers; I know they are still active/passive.

--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Thursday, March 10, 2016 at 1:20 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, Jordan Liggitt 
<jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443


Hi.


   [tls passthrough]

openshift-default-router ---> [POD own haproxy with ssl] --> master:8443


you can think on this like a reverse proxy, which it is ;-)


BR Aleks


From: Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com<mailto:skot...@cisco.com>>
Sent: Thursday, March 10, 2016 09:41
To: Aleksandar Lazic; Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443

Aleksandar

Thanks for the reply. I didn't quite understand how the flow works. Can you 
please explain it briefly?


--
Srinivas Kotaru

From: Aleksandar Lazic 
<aleksandar.la...@cloudwerkstatt.com<mailto:aleksandar.la...@cloudwerkstatt.com>>
Date: Thursday, March 10, 2016 at 12:18 AM
To: skotaru <skot...@cisco.com<mailto:skot...@cisco.com>>, Jordan Liggitt 
<jligg...@redhat.com<mailto:jligg...@redhat.com>>, 
"ccole...@redhat.com<mailto:ccole...@redhat.com>" 
<ccole...@redhat.com<mailto:ccole...@redhat.com>>
Cc: "users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>" 
<users@lists.openshift.redhat.com<mailto:users@lists.openshift.redhat.com>>
Subject: Re: api and console port : 8443


Hi.


We solved this issue with our own haproxy pod in front of the master, and added 
the following variables to the ansible hosts file.


#

...

openshift_master_public_api_url=https://manage.{{ osm_default_subdomain }}
openshift_master_public_console_url={{ openshift_master_public_api_url 
}}/console
openshift_master_metrics_public_url={{ openshift_master_public_api_url 
}}/hawkular/metrics

...

#


In this haproxy setup you can add the manage.{{ osm_default_subdomain }} certificate 
or the wildcard certificate as a secret.


###

oc secrets new wildcard-cloud-cert cloud.pem=...cloud_all.pem
oc secrets add serviceaccount/default secret/wildcard-cloud-cert


###


With this solution you don't need to expose your master to the internet ;-)


Best Regards

Aleks


From: users-boun...@lists.openshift.redhat.com 
<users-boun...@lists.openshift.redhat.com> on behalf of Srinivas Naga Kotaru 
(skotaru) <skot...@cisco.com>
Sent: Wednesday, March 09, 2016 21:37
To: Jordan Liggitt; Clayton Coleman
Cc: users@lists.openshift.redhat.com
Subject: Re: api and console port : 8443

Thanks Jordan/Jason/Clayton for the quick replies.

Good to know that we can change the port at provision time using the ansible 
environment variables mentioned by Jason.

However, this seems messy and confusing in that users won't be able to change it 
after provisioning; at the least it is too difficult unless all files across the 
board reflect the new port.

Can we run a simple load balancer that listens on 443 and forwards to all masters 
on port 8443? All users would use the standard vip:443. OpenShift would still 
create all kubeconfig files with the 8443 reference.

Can you validate the above approach? It might be OK to also run the load balancer 
on 8443 and forward to 8443, but I'm thinking clients shouldn't have to bother 
entering 8443 every time they connect to the API or console.

The idea is to run a simple load balancer for balancing multiple API masters.



--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com>
Date: Wednesday, March 9, 2016 at 12:05 PM
To: "ccole...@redhat.com" <ccole...@redhat.com>
Cc: skotaru <skot...@cisco.com>

Re: binary deploy

2016-03-10 Thread Srinivas Naga Kotaru (skotaru)
That might work, but I need to test it. Can you explain what you mean by the s2i 
ENTRYPOINT in this scenario?

The documentation says we can insert a war file as mentioned below:

https://docs.openshift.org/latest/dev_guide/builds.html#binary-source

Not sure why it is not working …


--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com>
Date: Wednesday, March 9, 2016 at 11:43 PM
To: skotaru <skot...@cisco.com>
Cc: "ccole...@redhat.com" <ccole...@redhat.com>, 
"users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: binary deploy

I think you could use a docker-type build with an inline dockerfile that ADDs 
the remote file:

https://docs.openshift.org/latest/dev_guide/builds.html#dockerfile-source

you can use the same base image, just have your dockerfile FROM the base image, 
ADD the war to the correct location inside the image, and set the ENTRYPOINT to 
the s2i run script.
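
Something along these lines should be close. A rough sketch; the deployments 
directory and the s2i run script path inside the EAP image are assumptions, so 
verify them against your builder image:

# inline-Dockerfile build that ADDs the war over HTTP
# (assumes an image stream named sample-app already exists for the output)
oc create -f - <<'EOF'
apiVersion: v1
kind: BuildConfig
metadata:
  name: sample-war-build
spec:
  source:
    type: Dockerfile
    dockerfile: |
      FROM registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2
      ADD https://tomcat.apache.org/tomcat-6.0-doc/appdev/sample/sample.war /opt/eap/standalone/deployments/
      ENTRYPOINT ["/usr/local/s2i/run"]
  strategy:
    type: Docker
  output:
    to:
      kind: ImageStreamTag
      name: sample-app:latest
EOF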


On Wed, Mar 9, 2016 at 11:59 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com> wrote:
Clayton

What you described already working if I pass using start-build.

I am trying to pass one sample.war as a argument to template and use this to 
create initial application. Think about this is sample hello world program as 
part of provision. Once app was provisioned, app teams can deploy the way you 
described.

If I put empty string to asFile, app creation is successful but build is 
waiting forever. So if clients hit browser, they wont get any output and might 
get confuse.

Am sure we can pass git repo by adjusting strategy but exploring if possible to 
use  a sample.war as argument to template



--
Srinivas Kotaru






On 3/9/16, 8:49 PM, "Clayton Coleman" <ccole...@redhat.com> wrote:

>The container itself is what determines whether the image will be used
>and what directory is it expecting to see WARs in
>
>I *think* you need to do
>
>$ mkdir deployments
>$ mv .../sample.war deployments/
>$ oc start-build --from-dir=.
>
>Binary builds require you to launch start-build --from-X, otherwise
>the build will wait forever for you to send it the binary.
>
>
>
>
>On Wed, Mar 9, 2016 at 11:04 PM, Srinivas Naga Kotaru (skotaru)
><skot...@cisco.com> wrote:
>> I think that is a pretty desirable feature. I can think of multiple use cases; 
>> one could be taking final artifacts from Jenkins builds.
>>
>> BTW, I'm still having issues. I'm creating an application using a template. This 
>> time I specified sample.war and copied the sample.war file to the folder where 
>> I'm running oc. I also copied the same file to the templates folder where my 
>> template exists. In either case the build is failing.
>>
>> "spec": {
>>     "source": {
>>         "type": "Binary",
>>         "binary": {
>>             "asFile": "sample.war"
>>         },
>>         "contextDir": "${CONTEXT_DIR}"
>>
>>
>>
>> # oc logs sales-dev-1-build  
>> master  ✗ ✭ ✱
>>
>> I0309 22:58:52.610618   1 sti.go:173] The value of ALLOWED_UIDS is [1-]
>> I0309 22:58:52.642387   1 docker.go:242] Pulling Docker image 
>> registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
>> I0309 22:58:59.932801   1 sti.go:195] Creating a new S2I builder with 
>> build config: "Builder Name:\t\tJBoss EAP 6.4\nBuilder 
>> Image:\t\tregistry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2\nSource:\t\t\tfile:///tmp/s2i-build632502898/upload/src\nContext
>>  Directory:\t/Users/skotaru/lae3/build/ose-binary-builds\nOutput Image 
>> Tag:\t172.30.238.173:5000/sales/sales-dev:latest\nEnvironment:\t\tOPENSHIFT_BUILD_NAME=sales-dev-1,OPENSHIFT_BUILD_NAMESPACE=sales\nIncremental
>>  Build:\tdisabled\nRemove Old Build:\tdisabled\nBuilder Pull 
>> Policy:\talways\nQuiet:\t\t\tdisabled\nLayered 
>> Build:\t\tdisabled\nWorkdir:\t\t/tmp/s2i-build632502898\nDocker 
>> NetworkMode:\tcontainer:05752cac5dbdce4a5f77d60ed23030dda17a9344aa904ec3c9786e231a858233\nDocker
>>  Endpoint:\tunix:///var/run/docker.sock\n"
>> I0309 22:58:59.932858   1 docker.go:242] Pulling Docker imag

Re: binary deploy

2016-03-09 Thread Srinivas Naga Kotaru (skotaru)
OK, thanks.

Can we raise an RFE for tracking purposes if you think it's a useful feature?


-- 
Srinivas Kotaru







On 3/9/16, 9:06 PM, "Clayton Coleman" <ccole...@redhat.com> wrote:

>Binary builds today have to come from direct user input (directly from
>a start command or a call to the rest API).  In the future we plan on
>supporting other ways of getting the content.
>
>> On Mar 9, 2016, at 11:59 PM, Srinivas Naga Kotaru (skotaru) 
>> <skot...@cisco.com> wrote:
>>
>> Clayton
>>
>> What you described already works if I pass it using start-build.
>>
>> I am trying to pass a sample.war as an argument to the template and use it to 
>> create the initial application. Think of this as a sample hello world program 
>> as part of provisioning. Once the app is provisioned, app teams can deploy the 
>> way you described.
>>
>> If I put an empty string in asFile, app creation is successful but the build is 
>> waiting forever. So if clients hit the browser, they won't get any output and 
>> might get confused.
>>
>> I'm sure we can pass a git repo by adjusting the strategy, but I'm exploring 
>> whether it's possible to use a sample.war as an argument to the template.
>>
>>
>>
>> --
>> Srinivas Kotaru
>>
>>
>>
>>
>>
>>
>>> On 3/9/16, 8:49 PM, "Clayton Coleman" <ccole...@redhat.com> wrote:
>>>
>>> The container itself is what determines whether the image will be used
>>> and what directory is it expecting to see WARs in
>>>
>>> I *think* you need to do
>>>
>>> $ mkdir deployments
>>> $ mv .../sample.war deployments/
>>> $ oc start-build --from-dir=.
>>>
>>> Binary builds require you to launch start-build --from-X, otherwise
>>> the build will wait forever for you to send it the binary.
>>>
>>>
>>>
>>>
>>> On Wed, Mar 9, 2016 at 11:04 PM, Srinivas Naga Kotaru (skotaru)
>>> <skot...@cisco.com> wrote:
>>>> I think that is a pretty desirable feature. I can think of multiple use cases; 
>>>> one could be taking final artifacts from Jenkins builds.
>>>>
>>>> BTW, I'm still having issues. I'm creating an application using a template. 
>>>> This time I specified sample.war and copied the sample.war file to the folder 
>>>> where I'm running oc. I also copied the same file to the templates folder 
>>>> where my template exists. In either case the build is failing.
>>>>
>>>> "spec": {
>>>>     "source": {
>>>>         "type": "Binary",
>>>>         "binary": {
>>>>             "asFile": "sample.war"
>>>>         },
>>>>         "contextDir": "${CONTEXT_DIR}"
>>>>
>>>>
>>>>
>>>> # oc logs sales-dev-1-build  
>>>> master  ✗ ✭ ✱
>>>>
>>>> I0309 22:58:52.610618   1 sti.go:173] The value of ALLOWED_UIDS is [1-]
>>>> I0309 22:58:52.642387   1 docker.go:242] Pulling Docker image 
>>>> registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
>>>> I0309 22:58:59.932801   1 sti.go:195] Creating a new S2I builder with 
>>>> build config: "Builder Name:\t\tJBoss EAP 6.4\nBuilder 
>>>> Image:\t\tregistry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2\nSource:\t\t\tfile:///tmp/s2i-build632502898/upload/src\nContext
>>>>  Directory:\t/Users/skotaru/lae3/build/ose-binary-builds\nOutput Image 
>>>> Tag:\t172.30.238.173:5000/sales/sales-dev:latest\nEnvironment:\t\tOPENSHIFT_BUILD_NAME=sales-dev-1,OPENSHIFT_BUILD_NAMESPACE=sales\nIncremental
>>>>  Build:\tdisabled\nRemove Old Build:\tdisabled\nBuilder Pull 
>>>> Policy:\talways\nQuiet:\t\t\tdisabled\nLayered 
>>>> Build:\t\tdisabled\nWorkdir:\t\t/tmp/s2i-build632502898\nDocker 
>>>> NetworkMode:\tcontainer:05752cac5dbdce4a5f77d60ed23030dda17a9344aa904ec3c9786e231a858233\nDocker
>>>>  Endpoint:\tunix:///var/run/docker.sock\n"
>>>> I0309 22:58:59.932858   1 docker.go:242] Pulling Docker image 
>>>> registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
>>>> I0309 22:59:01.449811   1 sti.go:140] Preparing to build 
>>>> 172.30.238.173:5000/sales/sales-dev:latest
>>>> I0309 22:59:01.453593   1 source.go:151] Receiving source from STDIN 
>>>> as file sample.wa

Re: binary deploy

2016-03-09 Thread Srinivas Naga Kotaru (skotaru)
Clayton,

What you described already works if I pass it using start-build.

I am trying to pass a sample.war as an argument to the template and use it to 
create the initial application. Think of this as a sample hello world program as 
part of provisioning. Once the app is provisioned, app teams can deploy the way you 
described.

If I put an empty string in asFile, app creation is successful but the build is 
waiting forever. So if clients hit the browser, they won't get any output and might 
get confused.

I'm sure we can pass a git repo by adjusting the strategy, but I'm exploring whether 
it's possible to use a sample.war as an argument to the template.



-- 
Srinivas Kotaru






On 3/9/16, 8:49 PM, "Clayton Coleman" <ccole...@redhat.com> wrote:

>The container itself is what determines whether the image will be used
>and what directory is it expecting to see WARs in
>
>I *think* you need to do
>
>$ mkdir deployments
>$ mv .../sample.war deployments/
>$ oc start-build --from-dir=.
>
>Binary builds require you to launch start-build --from-X, otherwise
>the build will wait forever for you to send it the binary.
>
>
>
>
>On Wed, Mar 9, 2016 at 11:04 PM, Srinivas Naga Kotaru (skotaru)
><skot...@cisco.com> wrote:
>> I think that is a pretty desirable feature. I can think of multiple use cases; 
>> one could be taking final artifacts from Jenkins builds.
>>
>> BTW, I'm still having issues. I'm creating an application using a template. This 
>> time I specified sample.war and copied the sample.war file to the folder where 
>> I'm running oc. I also copied the same file to the templates folder where my 
>> template exists. In either case the build is failing.
>>
>> "spec": {
>>     "source": {
>>         "type": "Binary",
>>         "binary": {
>>             "asFile": "sample.war"
>>         },
>>         "contextDir": "${CONTEXT_DIR}"
>>
>>
>>
>> # oc logs sales-dev-1-build  
>> master  ✗ ✭ ✱
>>
>> I0309 22:58:52.610618   1 sti.go:173] The value of ALLOWED_UIDS is [1-]
>> I0309 22:58:52.642387   1 docker.go:242] Pulling Docker image 
>> registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
>> I0309 22:58:59.932801   1 sti.go:195] Creating a new S2I builder with 
>> build config: "Builder Name:\t\tJBoss EAP 6.4\nBuilder 
>> Image:\t\tregistry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2\nSource:\t\t\tfile:///tmp/s2i-build632502898/upload/src\nContext
>>  Directory:\t/Users/skotaru/lae3/build/ose-binary-builds\nOutput Image 
>> Tag:\t172.30.238.173:5000/sales/sales-dev:latest\nEnvironment:\t\tOPENSHIFT_BUILD_NAME=sales-dev-1,OPENSHIFT_BUILD_NAMESPACE=sales\nIncremental
>>  Build:\tdisabled\nRemove Old Build:\tdisabled\nBuilder Pull 
>> Policy:\talways\nQuiet:\t\t\tdisabled\nLayered 
>> Build:\t\tdisabled\nWorkdir:\t\t/tmp/s2i-build632502898\nDocker 
>> NetworkMode:\tcontainer:05752cac5dbdce4a5f77d60ed23030dda17a9344aa904ec3c9786e231a858233\nDocker
>>  Endpoint:\tunix:///var/run/docker.sock\n"
>> I0309 22:58:59.932858   1 docker.go:242] Pulling Docker image 
>> registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
>> I0309 22:59:01.449811   1 sti.go:140] Preparing to build 
>> 172.30.238.173:5000/sales/sales-dev:latest
>> I0309 22:59:01.453593   1 source.go:151] Receiving source from STDIN as 
>> file sample.war
>> [ose-binary-builds]  
>>   master  ✗ ✭ ✱
>> [ose-binary-builds]  
>>   master  ✗ ✭ ✱
>> [ose-binary-builds] oc logs sales-dev-1-build -f 
>>   master  ✗ ✭ ✱
>> I0309 22:58:52.610618   1 sti.go:173] The value of ALLOWED_UIDS is [1-]
>> I0309 22:58:52.642387   1 docker.go:242] Pulling Docker image 
>> registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
>> I0309 22:58:59.932801   1 sti.go:195] Creating a new S2I builder with 
>> build config: "Builder Name:\t\tJBoss EAP 6.4\nBuilder 
>> Image:\t\tregistry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2\nSource:\t\t\tfile:///tmp/s2i-build632502898/upload/src\nContext
>>  Directory:\t/Users/skotaru/lae3/build/ose-binary-builds\nOutput Image 
>> Tag:\t172.30.238.173:5000/sales/sales-dev:latest\nEnvironment:\t\tOPENSHIFT_BUILD_NAME=sales-dev-1,OPENSHIFT_BUILD_NAMESPACE=sales\nIncremental
>>  Build:\tdisabled\nRemove Old Build:\tdisabled\nBuilder Pull 
>> Policy:\talways\nQu

Re: binary deploy

2016-03-09 Thread Srinivas Naga Kotaru (skotaru)
I think that is a pretty desirable feature. I can think of multiple use cases; one 
could be taking final artifacts from Jenkins builds.

BTW, I'm still having issues. I'm creating an application using a template. This 
time I specified sample.war and copied the sample.war file to the folder where I'm 
running oc. I also copied the same file to the templates folder where my template 
exists. In either case the build is failing.

"spec": {
    "source": {
        "type": "Binary",
        "binary": {
            "asFile": "sample.war"
        },
        "contextDir": "${CONTEXT_DIR}"



# oc logs sales-dev-1-build  master 
 ✗ ✭ ✱

I0309 22:58:52.610618   1 sti.go:173] The value of ALLOWED_UIDS is [1-]
I0309 22:58:52.642387   1 docker.go:242] Pulling Docker image 
registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
I0309 22:58:59.932801   1 sti.go:195] Creating a new S2I builder with build 
config: "Builder Name:\t\tJBoss EAP 6.4\nBuilder 
Image:\t\tregistry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2\nSource:\t\t\tfile:///tmp/s2i-build632502898/upload/src\nContext
 Directory:\t/Users/skotaru/lae3/build/ose-binary-builds\nOutput Image 
Tag:\t172.30.238.173:5000/sales/sales-dev:latest\nEnvironment:\t\tOPENSHIFT_BUILD_NAME=sales-dev-1,OPENSHIFT_BUILD_NAMESPACE=sales\nIncremental
 Build:\tdisabled\nRemove Old Build:\tdisabled\nBuilder Pull 
Policy:\talways\nQuiet:\t\t\tdisabled\nLayered 
Build:\t\tdisabled\nWorkdir:\t\t/tmp/s2i-build632502898\nDocker 
NetworkMode:\tcontainer:05752cac5dbdce4a5f77d60ed23030dda17a9344aa904ec3c9786e231a858233\nDocker
 Endpoint:\tunix:///var/run/docker.sock\n"
I0309 22:58:59.932858   1 docker.go:242] Pulling Docker image 
registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
I0309 22:59:01.449811   1 sti.go:140] Preparing to build 
172.30.238.173:5000/sales/sales-dev:latest
I0309 22:59:01.453593   1 source.go:151] Receiving source from STDIN as 
file sample.war
[ose-binary-builds] 
   master  ✗ ✭ ✱
[ose-binary-builds] 
   master  ✗ ✭ ✱
[ose-binary-builds] oc logs sales-dev-1-build -f
   master  ✗ ✭ ✱
I0309 22:58:52.610618   1 sti.go:173] The value of ALLOWED_UIDS is [1-]
I0309 22:58:52.642387   1 docker.go:242] Pulling Docker image 
registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
I0309 22:58:59.932801   1 sti.go:195] Creating a new S2I builder with build 
config: "Builder Name:\t\tJBoss EAP 6.4\nBuilder 
Image:\t\tregistry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2\nSource:\t\t\tfile:///tmp/s2i-build632502898/upload/src\nContext
 Directory:\t/Users/skotaru/lae3/build/ose-binary-builds\nOutput Image 
Tag:\t172.30.238.173:5000/sales/sales-dev:latest\nEnvironment:\t\tOPENSHIFT_BUILD_NAME=sales-dev-1,OPENSHIFT_BUILD_NAMESPACE=sales\nIncremental
 Build:\tdisabled\nRemove Old Build:\tdisabled\nBuilder Pull 
Policy:\talways\nQuiet:\t\t\tdisabled\nLayered 
Build:\t\tdisabled\nWorkdir:\t\t/tmp/s2i-build632502898\nDocker 
NetworkMode:\tcontainer:05752cac5dbdce4a5f77d60ed23030dda17a9344aa904ec3c9786e231a858233\nDocker
 Endpoint:\tunix:///var/run/docker.sock\n"
I0309 22:58:59.932858   1 docker.go:242] Pulling Docker image 
registry.access.redhat.com/jboss-eap-6/eap64-openshift:1.2 ...
I0309 22:59:01.449811   1 sti.go:140] Preparing to build 
172.30.238.173:5000/sales/sales-dev:latest
I0309 22:59:01.453593   1 source.go:151] Receiving source from STDIN as 
file sample.war



I think the build is still expecting the sample.war file from STDIN.
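
For reference, this is roughly how the binary has to be supplied to the waiting 
build (a sketch; the build config name comes from my template, and --from-file 
depends on the oc version):

# send the war directly to the waiting binary build
oc start-build sales-dev --from-file=sample.war --follow

# or stage it the way Clayton described and send the whole directory
mkdir deployments && mv sample.war deployments/
oc start-build sales-dev --from-dir=. --follow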


-- 
Srinivas Kotaru







On 3/9/16, 7:55 PM, "Clayton Coleman" <ccole...@redhat.com> wrote:

>No, binaries are passed directly to the build, we don't support
>download from URL as a build source yet.
>
>On Wed, Mar 9, 2016 at 10:35 PM, Srinivas Naga Kotaru (skotaru)
><skot...@cisco.com> wrote:
>> Can we pass FQDN to fetch WAR file like below?
>>
>> "spec": {
>>     "source": {
>>         "type": "Binary",
>>         "binary": {
>>             "asFile":
>>                 "https://tomcat.apache.org/tomcat-6.0-doc/appdev/sample/sample.war"
>>         },
>>         "contextDir": "${CONTEXT_DIR}"
>>
>>
>> When I try it is failing …
>>
>> spec.source.binary.asFile: invalid value
>> 'https://tomcat.apache.org/tomcat-6.0-doc/appdev/sample/sample.war',
>> Details: file name may not contain slashes or relative path segments and
>> must be 

Re: api and console port : 8443

2016-03-09 Thread Srinivas Naga Kotaru (skotaru)
Thanks Jordan/Jason/Clayton for the quick replies.

Good to know that we can change the port at provision time using the ansible 
environment variables mentioned by Jason.

However, this seems messy and confusing in that users won't be able to change it 
after provisioning; at the least it is too difficult unless all files across the 
board reflect the new port.

Can we run a simple load balancer that listens on 443 and forwards to all masters 
on port 8443? All users would use the standard vip:443. OpenShift would still 
create all kubeconfig files with the 8443 reference.

Can you validate the above approach? It might be OK to also run the load balancer 
on 8443 and forward to 8443, but I'm thinking clients shouldn't have to bother 
entering 8443 every time they connect to the API or console.

The idea is to run a simple load balancer for balancing multiple API masters.



--
Srinivas Kotaru

From: Jordan Liggitt <jligg...@redhat.com>
Date: Wednesday, March 9, 2016 at 12:05 PM
To: "ccole...@redhat.com" <ccole...@redhat.com>
Cc: skotaru <skot...@cisco.com>, 
"users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: api and console port : 8443

also would need to adjust the port in the kubeconfig files used to connect to 
the master

On Wed, Mar 9, 2016 at 3:03 PM, Clayton Coleman 
<ccole...@redhat.com> wrote:
As long as you change the config, no.  We chose 8443 in case you
wanted to run a local TLS proxy, or in case you are running as a
developer.

On Wed, Mar 9, 2016 at 2:55 PM, Srinivas Naga Kotaru (skotaru)
<skot...@cisco.com> wrote:
> Any reason why the api and console are exposed on 8443 rather than 443?
>
> Any impact if we change 8443 to 443 by find and replace 8443 with 443 on
> /etc/origin/master/master-config.yaml and restart master service?
>
> Do we need to change anything on node or etcd  side?
>
> --
> Srinivas Kotaru
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Private registry : unable to pull

2016-03-04 Thread Srinivas Naga Kotaru (skotaru)
As I said, if I toggle the image to public, I am able to create the app.

Since we have multiple clusters, we are exploring using a single corporate 
repository to host all images. That includes OpenShift-specific images and other 
certified images in our environment. We want to avoid multiple internal repos 
associated with each cluster, for ease of operations and maintenance.

We are also exploring building and deploying outside of OpenShift using a CI/CD 
tool chain and copying the final images to the central repo rather than the 
OpenShift repo.

That is the idea of this exercise. We don't want to expose all images to everyone, 
and want to limit them to private. A special secret will be added to all projects 
and to the CI/CD Jenkins bot, which has access to push and pull to this corporate 
repo.

--
Srinivas Kotaru

From: Ben Parees <bpar...@redhat.com>
Date: Friday, March 4, 2016 at 10:29 AM
To: skotaru <skot...@cisco.com>, "ccole...@redhat.com" <ccole...@redhat.com>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Private registry : unable to pull



On Thu, Mar 3, 2016 at 11:03 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com> wrote:
Am trying to create an app using an image from corporate private registry.

>>>>>>>>>>
# oc new-app --docker-image=myrepo/skotaru/ruby-22-rhel7 --name quayapp1

I0303 19:55:15.769901   88132 componentresolvers.go:126] Errors occurred during 
resolution: []error{(*errors.errorString)(0xc208546d00)}
F0303 19:55:15.770051   88132 helpers.go:96] error: no match for 
"myrepo/skotaru/ruby-22-rhel7", specify --allow-missing-images to use this 
image name.

<<<<<<<<<<

It is failing to create the app. If I change the image to public, I am able to 
create the app.

I created a secret and added it to the default service account.

# oc secrets new-dockercfg repo-login --docker-server=myrepo 
--docker-username=skotaru --docker-password=<> 
--docker-email=f...@fake.com
# oc secrets add serviceaccount/default secrets/repo-login --for=pull

Am I missing anything here? Why am I unable to create the app if the image is private?

oc new-app now goes through the openshift internal registry to pull all 
images, so the question is how to ensure the openshift registry has the 
necessary credentials to access your private registry.  The answer to that is 
you need to create an imagestream that points to your private registry, and 
includes your credentials.

Then you can invoke new-app w/ the imagestream name.

Alternatively, if you're confident you've setup your service account 
credentials correctly, you can go ahead and use the "--allow-missing-images" 
flag and new-app will construct the DeploymentConfig despite not being able to 
confirm the image exists.

Adding Clayton in case there are more details to the registry pull-through 
feature.
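
For what it's worth, a rough sketch of that approach, assuming an image stream 
named ruby-22-rhel7 (adjust the repository, secret, and project names to yours):

# pull secret for the corporate registry
oc secrets new-dockercfg repo-login --docker-server=myrepo \
    --docker-username=skotaru --docker-password=<> --docker-email=f...@fake.com
oc secrets add serviceaccount/default secrets/repo-login --for=pull

# an image stream that points at the private repository
oc create -f - <<'EOF'
apiVersion: v1
kind: ImageStream
metadata:
  name: ruby-22-rhel7
spec:
  dockerImageRepository: myrepo/skotaru/ruby-22-rhel7
EOF

# then create the app from the image stream
oc new-app --image-stream=ruby-22-rhel7 --name=quayapp1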




--
Srinivas Kotaru

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




--
Ben Parees | OpenShift

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Private registry : unable to pull

2016-03-04 Thread Srinivas Naga Kotaru (skotaru)
Any idea why I am unable to create an app if the repo type is private, even though 
I created a secret with the repo credentials and added it to the default SA account?

--
Srinivas Kotaru

From: skotaru <skot...@cisco.com>
Date: Thursday, March 3, 2016 at 8:03 PM
To: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Private registry : unable to pull

Am trying to create an app using an image from corporate private registry.

>>
# oc new-app --docker-image=myrepo/skotaru/ruby-22-rhel7 --name quayapp1

I0303 19:55:15.769901   88132 componentresolvers.go:126] Errors occurred during 
resolution: []error{(*errors.errorString)(0xc208546d00)}
F0303 19:55:15.770051   88132 helpers.go:96] error: no match for 
"myrepo/skotaru/ruby-22-rhel7", specify --allow-missing-images to use this 
image name.

<<

It is failing to create the app. If I change the image to public, I am able to 
create the app.

I created a secret and added it to the default service account.

# oc secrets new-dockercfg repo-login --docker-server=myrepo 
--docker-username=skotaru --docker-password=<> 
--docker-email=f...@fake.com
# oc secrets add serviceaccount/default secrets/repo-login --for=pull

Am I missing anything here? Why am I unable to create the app if the image is private?

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Private registry : unable to pull

2016-03-03 Thread Srinivas Naga Kotaru (skotaru)
Am trying to create an app using an image from corporate private registry.

>>
# oc new-app --docker-image=myrepo/skotaru/ruby-22-rhel7 --name quayapp1

I0303 19:55:15.769901   88132 componentresolvers.go:126] Errors occurred during 
resolution: []error{(*errors.errorString)(0xc208546d00)}
F0303 19:55:15.770051   88132 helpers.go:96] error: no match for 
"myrepo/skotaru/ruby-22-rhel7", specify --allow-missing-images to use this 
image name.

<<

It is failing to create the app. If I change the image to public, I am able to 
create the app.

I created a secret and added it to the default service account.

# oc secrets new-dockercfg repo-login --docker-server=myrepo 
--docker-username=skotaru --docker-password=<> --docker-email=f...@fake.com
# oc secrets add serviceaccount/default secrets/repo-login --for=pull

Am I missing anything here? Why am I unable to create the app if the image is private?

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Apache Router

2016-03-01 Thread Srinivas Naga Kotaru (skotaru)
Any plans of offering an Apache-based router? Currently it is HAProxy, and it works 
well. Some OSE customers like us need an Apache-based router since we have a 
dependency on existing SSO agents (or) WAF solutions. These solutions don't 
currently work with HAProxy. This is forcing us to introduce another Apache layer. 
Some customers like us are already introducing additional reverse proxies for their 
DMZ requirements. Most of the time this reverse proxy is again HAProxy, not Apache. 
These combinations add multiple hops to the path. If we could use an Apache router 
instead of the HAProxy router, it would help us remove one layer instead of putting 
another Apache on top of it.

It would be nice if customers had the choice of an Apache- or HAProxy-based router. 
It would help us a lot. An Apache router should be able to generate VHost-like 
entries for routing purposes.


--
Srinivas Kotaru


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Visualizing the OpenShift API with Swagger

2016-02-29 Thread Srinivas Naga Kotaru (skotaru)
I am not seeing the Swagger UI as shown and described in the blog post below:

http://blog.andyserver.com/2015/09/openshift-api-swagger/

The screenshot below shows my Swagger in action. It seems to be mostly a text 
interface rather than the original Swagger native UI:

http://www.screencast.com/t/wgUBi8vtALn

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


External repo : how openshift deploy works

2016-02-26 Thread Srinivas Naga Kotaru (skotaru)
Hi

I built an image and pushed it to an external repo, then created an OSE app using 
this image. The app was created successfully.

I then modified the image with new code and pushed it to the repo using docker tag 
and docker push.

What I did:


# oc import-image 

It was successful

# oc deploy  -n ccp --latest

Are the above steps right, and in the correct order?
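
For clarity, the full sequence I'm running looks roughly like this (the image and 
stream names are from my setup):

# push a new image version to the external repo
docker tag ccpapp-jboss myreposerver1/gats/ccpapp-jboss:latest
docker push myreposerver1/gats/ccpapp-jboss:latest

# refresh the image stream so OpenShift sees the new tag
oc import-image ccpapp-stg -n ccp

# roll out the deployment config from the latest image
oc deploy ccpapp-stg -n ccp --latest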

What do I want? I am testing 2 scenarios:


  1.  Automatically trigger an OSE deploy whenever a new version is pushed to the 
external repo
  2.  Have our CI/CD manually do the deploy once the external build system has 
created a new version of the image.

In both cases we want the latest image to be deployed, not the old or current 
version.


Questions:


  1.  Will OSE automatically trigger a new deploy once a new version is pushed to 
the external repo?
  2.  As part of the deploy, will OSE take the new image or the current image?

We want to use external build and deploy systems for continuous code deployment to 
the 1st life cycle (dev) and promote code to the other life cycles (stage, prod) 
manually.



Current image stream definition

},
"spec": {
"dockerImageRepository": "myreposerver1/gats/ccpapp-jboss"
},
"status": {
"dockerImageRepository": "172.30.238.173:5000/ccp/ccpapp-stg",
"tags": [
{
"tag": "latest",
"items": [
{
"created": "2016-02-25T04:25:35Z",
"dockerImageReference": 
"myreposerver1/gats/ccpapp-jboss:latest",
"image": 
"95007fb5b8d452502a9506c1bb4e529d93f6118f84c51228a6a979b9a1090dd2"
}
]
}
]
}
}


Deployment specs

"triggers": [
{
"type": "ConfigChange"
},
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"ccpapp-stg"
],
"from": {
"kind": "ImageStreamTag",
"name": "ccpapp-stg:latest"
},
"lastTriggeredImage": 
"myreposerver1/gats/ccpapp-jboss:latest"
}
}
],
"replicas": 0,
"selector": {
"app": "ccpapp-stg",
"deploymentconfig": "ccpapp-stg"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "ccpapp-stg",
"deploymentconfig": "ccpapp-stg"
},
"annotations": {
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"containers": [
{
"name": "ccpapp-stg",
"image": "quay.cisco.com/gats/ccpapp-jboss:latest",
"ports": [
{
"containerPort": 8080,
"protocol": "TCP"
},
{
"containerPort": 8181,
"protocol": "TCP"
},
{
"containerPort": 9990,
"protocol": "TCP"
}
],
"env": [
{
"name": "CISCO_LIFE",
"value": "stg"
}
],
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {}
}
}
},
"status": {
"latestVersion": 1,
"details": {
"causes": [
{
"type": "ImageChange",
"imageTrigger": {
"from": {}
}
}
]
}
}
}



--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Multi Clusters : Token management

2016-02-19 Thread Srinivas Naga Kotaru (skotaru)
I like the client cert authentication. Do we have any working instructions to test 
it?

Please confirm: does it mean every client needs to have their own cert? Don't you 
think that would be very difficult to administer in a big organization?
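
For anyone who wants to try it, a minimal sketch of minting a client cert plus 
kubeconfig on a master (user, group, and hostnames are made up for the example, 
and the generated file names may differ slightly):

# generate a client certificate and a ready-made kubeconfig for one user
oadm create-api-client-config \
  --certificate-authority=/etc/origin/master/ca.crt \
  --client-dir=/root/alice \
  --master=https://master.example.com:8443 \
  --signer-cert=/etc/origin/master/ca.crt \
  --signer-key=/etc/origin/master/ca.key \
  --signer-serial=/etc/origin/master/ca.serial.txt \
  --user=alice --groups=dev-team

# the client can then use the generated kubeconfig against any cluster
# whose masters trust the same CA
KUBECONFIG=/root/alice/alice.kubeconfig oc whoami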

-- 
Srinivas Kotaru







On 2/19/16, 10:49 AM, "Aleksandar Kostadinov" <akost...@redhat.com> wrote:

>Srinivas Naga Kotaru (skotaru) wrote on 02/19/2016 08:00 PM:
>> David
>>
>> Thanks for info
>>
>> It looks like a big problem from a management or client experience
>> perspective. I have seen that most clients are using a single cluster,
>> but what about a client that has multiple clusters with a common client
>> base? Authentication, authorization, and API end points are all
>> different, or need to be managed independently of each other.
>
>I think you can setup proper certificate auth on all clusters to avoid 
>need to obtain different tokens from each cluster. i.e. all clusters 
>would accept the same client certificates. I'm not sure trying to make 
>tokens work across clusters is a good idea. At least doing it right 
>might not be easier than cert auth, I suspect it will be ugly.
>
>Btw for web console users, one can have same SSO across clusters so that 
>user will login only once per time period. For example kerberos or 
>google auth. This would be much easier than certificate auth but limited 
>to web console.
>
>> Is this the current solution, or can we change anything for a better
>> client experience in multi-cluster environments?
>
>Do you mean only auth or also other difficulties?

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Adding a node to the cluster without ansible

2016-02-04 Thread Srinivas Naga Kotaru (skotaru)
Thanks, I appreciate the quick action.


--
Srinivas Kotaru

From: Nakayama Kenjiro <nakayamakenj...@gmail.com>
Date: Thursday, February 4, 2016 at 8:49 PM
To: skotaru <skot...@cisco.com>
Cc: v <vekt...@gmx.net>, 
"users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Adding a node to the cluster without ansible

> Also Please update with latest relevant information on redhat portal page
>
> https://access.redhat.com/solutions/1983683

Sorry, that's my task. I updated it to link to this page: 
https://access.redhat.com/solutions/2150381

Thanks,
Kenjiro

On Fri, Feb 5, 2016 at 3:10 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com> wrote:
Also Please update with latest relevant information on redhat portal page

https://access.redhat.com/solutions/1983683

--
Srinivas Kotaru

From: <users-boun...@lists.openshift.redhat.com> on behalf of skotaru 
<skot...@cisco.com>
Date: Thursday, February 4, 2016 at 10:04 AM
To: v <vekt...@gmx.net>

Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Adding a node to the cluster without ansible

Will Ansible touch the existing configuration, and is there any chance it will 
overwrite custom config we have put in?

Just adding a new node, the required steps look scary to me (both Ansible and 
manual). Can we do a better job here by automating this task while guaranteeing no 
disruption to the existing cluster's health?

My worry is about real prod environments where uptime is always guaranteed with SLAs.

--
Srinivas Kotaru

From: <users-boun...@lists.openshift.redhat.com> on behalf of v <vekt...@gmx.net>
Date: Thursday, February 4, 2016 at 7:51 AM
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Adding a node to the cluster without ansible

Nice one, scaleup.yml is a very good idea!

origin-sdn-ovs is installed on the node, but it was a 1.0.x node. After the 
update to 1.1 the error is gone. :)

Will create a PR for the instructions.

On 2016-02-04 at 16:40, Jason DeTiberus wrote:
I would like to add an additional node to the cluster without using ansible.
(We have modified our cluster in many ways and don't dare running ansible 
because it might break our cluster.)

The scale-up playbooks take this into account.

They will query the master, generate and distribute the new certificates for the 
new node, and then run the config playbooks on the new nodes only.

To take advantage of this, you will need to add a group to your inventory called 
[new_nodes] and configure the hosts as you would for a new install under the 
[nodes] group, as sketched below.
Then you would run the playbooks/byo/openshift-cluster/scaleup.yml playbook.
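
A minimal sketch of that inventory change (the host name and labels are 
placeholders; remember that [new_nodes] also has to be listed under your 
[OSEv3:children] section):

# ansible inventory: the node to be added goes under [new_nodes]
cat >> /etc/ansible/hosts <<'EOF'
[new_nodes]
node4.example.com openshift_node_labels="{'region': 'primary'}"
EOF

# run only the scale-up playbook so existing hosts are left alone
ansible-playbook -i /etc/ansible/hosts \
    playbooks/byo/openshift-cluster/scaleup.yml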


On Thu, Feb 4, 2016 at 9:55 AM, v <vekt...@gmx.net> 
wrote:
All right, looks like it works. These are the commands for the master with 3.1:


oadm create-api-client-config \
  --certificate-authority=/etc/origin/master/ca.crt \
  --client-dir=/root/xyz4 \
  --master=https://xyz1.eu:8443 \
  --signer-cert=/etc/origin/master/ca.crt \
  --signer-key=/etc/origin/master/ca.key \
  --signer-serial=/etc/origin/master/ca.serial.txt \
  --groups=system:nodes \
  --user=system:node:xyz4.eu

oadm create-node-config \
  --node-dir=/root/xyz4 \
  --node=xyz.eu \
  --hostnames=xyz4.eu,123.456.0.5 \
  --certificate-authority /etc/origin/master/ca.crt \
  --signer-cert /etc/origin/master/ca.crt \
  --signer-key /etc/origin/master/ca.key \
  --signer-serial /etc/origin/master/ca.serial.txt \
  --master=https://xyz1.eu:8443 \
  --node-client-certificate-authority /etc/origin/master/ca.crt


Then I copied all the created files to /etc/origin/node on the new node.
Took node-config.yaml from an old, working node, edited the hostnames and used 
it as node-config.yaml on the new node.

It seems to work. The only thing that bugs me is that I'm being spammed with 
the following error on the new node:
manager.go:313] NetworkPlugin redhat/openshift-ovs-subnet failed on the status 
hook for pod 'xy-router-2-imubn' - exit status 1
manager.go:313] NetworkPlugin redhat/openshift-ovs-subnet failed on the status 
hook 

swagger UI

2016-02-02 Thread Srinivas Naga Kotaru (skotaru)
Does the OSE master expose swagger-ui? I tested it but it's not working. I'm 
wondering whether we have to do anything to get it to work. I feel this is a good 
way to learn and use the master API.

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: swagger UI

2016-02-02 Thread Srinivas Naga Kotaru (skotaru)
Thanks for the quick reply.

I am getting Method Not Allowed. I tried the CLI and a browser; the browser simply 
throws up a blank download page.

# curl -Ik https://:8443/swaggerapi/oapi/v1

HTTP/1.1 405 Method Not Allowed
Date: Wed, 03 Feb 2016 00:15:20 GMT
Content-Length: 23
Content-Type: text/plain; charset=utf-8
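
(Worth noting: curl's -I flag sends a HEAD request, which the API server rejects; 
a plain GET should return the swagger JSON. A quick sketch, with the master host 
as a placeholder:)

curl -k https://master.example.com:8443/swaggerapi/oapi/v1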

--
Srinivas Kotaru

From: Nakayama Kenjiro <nakayamakenj...@gmail.com>
Date: Tuesday, February 2, 2016 at 4:02 PM
To: skotaru <skot...@cisco.com>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: swagger UI

Hi,

> Does OSE master expose swagger-ui?

Yes, it does.

> Wondering whether we have to do anything to get it to work.

No, you don't need anything.

Basically, you can access the (OpenShift) API with this URL:

  curl -k https://:8443/swaggerapi/oapi/v1

Could you please tell us what results you get from the above curl?

Thanks,
Kenjiro


On Wed, Feb 3, 2016 at 8:53 AM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com> wrote:
Does the OSE master expose swagger-ui? I tested it but it's not working. I'm 
wondering whether we have to do anything to get it to work. I feel this is a good 
way to learn and use the master API.

--
Srinivas Kotaru

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users




--
Kenjiro NAKAYAMA <nakayamakenj...@gmail.com>
GPG Key fingerprint = ED8F 049D E67A 727D 9A44  8E25 F44B E208 C946 5EB9
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Dockerfile in OpenShift

2016-01-26 Thread Srinivas Naga Kotaru (skotaru)
Thanks Clayton.

It is handy to have the servicename.namespace.svc.cluster.local syntax to resolve 
service names. Is there any similar syntax to resolve pods, instead of using the 
pod IP?

What happens if we set clusterIP: None? Will DNS records still be created?
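
(For context, a minimal sketch of what I mean by clusterIP: None, i.e. a headless 
service; the names are illustrative:)

# a headless service: no cluster IP is allocated, and DNS for "db" is
# expected to return the pod endpoint IPs directly
oc create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  selector:
    app: db
  ports:
  - port: 5432
EOF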

--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Monday, January 25, 2016 at 10:28 PM
To: skotaru <skot...@cisco.com>
Cc: Den Cowboy <dencow...@hotmail.com>, 
"users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Dockerfile in OpenShift

Endpoints are stored in etcd (written by a process that watches for pod 
changes) and skydns serves them by reading from the API.  So what you get on 
the cli is exactly what DNS is using to serve the names.

On Jan 25, 2016, at 11:48 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com> wrote:

Thanks Clayton. My question was around where these service IP records are stored: 
etcd, or some flat file similar to BIND zone files, etc.

The command was useful for translating a service to an IP. Thanks for sharing.

--
Srinivas Kotaru

From: "ccole...@redhat.com" <ccole...@redhat.com>
Date: Monday, January 25, 2016 at 5:16 PM
To: skotaru <skot...@cisco.com>
Cc: Den Cowboy <dencow...@hotmail.com>, 
"users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Dockerfile in OpenShift

oc describe svc NAME will show you the service mapping and backing endpoints.  
dig @masterip servicename.namespace.svc.cluster.local will show you what is in 
DNS
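
(A concrete sketch of those two commands, with made-up service and host names:)

# show the service's cluster IP and the backing pod endpoints
oc describe svc db -n myproject

# ask the cluster DNS what it serves for that name
dig @master.example.com db.myproject.svc.cluster.local +short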

On Jan 25, 2016, at 8:12 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com> wrote:

Is skyDNS responsible for this resolution?

Where can we see all service entries and their associated IP addresses? I am trying 
to understand cluster-wide name resolution and container -> external DNS 
communication a little better.

--
Srinivas Kotaru

From: <users-boun...@lists.openshift.redhat.com> on behalf of 
"ccole...@redhat.com" <ccole...@redhat.com>
Date: Sunday, January 24, 2016 at 9:24 AM
To: Den Cowboy <dencow...@hotmail.com>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: Dockerfile in OpenShift

In each container DNS is set up so that the name for each service is a 
resolvable address (which means normal network operations like ping, curl, etc 
can use the service name in place of the service IP).  If you have a service 
called "db", every container is "linked" to that service.

On Jan 24, 2016, at 4:39 AM, Den Cowboy 
<dencow...@hotmail.com> wrote:

Hi Clayton,
Can you maybe give an example with commands?

I know how to create a service etc. But I don't fully understand "in every pod 
the name "db".

> Date: Fri, 22 Jan 2016 13:49:38 -0500
> Subject: Re: Dockerfile in OpenShift
> From: ccole...@redhat.com
> To: rcarv...@redhat.com
> CC: dencow...@hotmail.com; 
> users@lists.openshift.redhat.com
>
> OpenShift and Kube already have the equivalent of "link" through
> services. If you create service "db" in a namespace, in every pod the
> name "db" resolves to the service IP or the endpoints (depending on
> what kind of service you created) - so you don't need to directly
> link, you can just use the hostname "db" as your remote endpoint.
>
> On Fri, Jan 22, 2016 at 4:55 AM, Rodolfo Carvalho 
> <rcarv...@redhat.com> wrote:
> > Hi Den,
> >
> >
> >
> > On Fri, Jan 22, 2016 at 9:32 AM, Den Cowboy 
> > <dencow...@hotmail.com> wrote:
> >>
> >> Thanks for the answers. I have 2 containers which need to work together:
> >> they are started by:
> >>
> >> docker run -d --name "name1" test/image1:1
> >>
>

Re: routing/vhost alias

2016-01-20 Thread Srinivas Naga Kotaru (skotaru)
Perfect Erik, this is exactly what I am looking for.

I was making the same mistake you highlighted: reusing the implicit route name, 
which defaults to the service name. Now I am getting the results I want.

$ oc expose svc cakephp-ex --hostname=alias1.example.com --name alias1
route "alias1" exposed

$ oc expose svc cakephp-ex --hostname=alias2.example.com --name alias2
route "alias2" exposed

$ oc get routes
NAME     HOST/PORT            PATH   SERVICE      LABELS           INSECURE POLICY   TLS TERMINATION
alias1   alias1.example.com          cakephp-ex   app=cakephp-ex
alias2   alias2.example.com          cakephp-ex   app=cakephp-ex

Thank you, appreciated.

--
Srinivas Kotaru

From: Erik Jacobs <ejac...@redhat.com>
Date: Wednesday, January 20, 2016 at 12:07 PM
To: skotaru <skot...@cisco.com>
Cc: "blean...@redhat.com" <blean...@redhat.com>, Dale Bewley <d...@bewley.net>, 
"users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>, 
dev <d...@lists.openshift.redhat.com>
Subject: Re: routing/vhost alias

Hi Srinivas,

Can you show where you were unable to create multiple routes for the same 
service? I have been able to use "oc expose" on the same service multiple 
times. You need to take care to use the --name attribute to give the new route 
a unique ID/name.

For example, given a service "foo":

oc expose svc foo --hostname=alias1.somedomain.com
oc expose svc foo --hostname=alias2.otherdomain.com

The 2nd expose will fail because it will try to create another route called 
"foo" (an ID of foo, which is not unique since a route already exists with that 
name/id).

You would need to do:

oc expose svc foo --hostname=alias2.otherdomain.com --name=foo2

I hope this helps!


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Wed, Jan 20, 2016 at 1:19 PM, Srinivas Naga Kotaru (skotaru) 
<skot...@cisco.com> wrote:
Brenton

That part I understood; however, I was not able to create more than one route for a 
service. So is it expected behavior to need more than one service if I want more 
than one route/alias for the same application?


--
Srinivas Kotaru







On 1/20/16, 6:02 AM, "Brenton Leanhardt" 
<blean...@redhat.com> wrote:

>On Wed, Jan 20, 2016 at 2:29 AM, Srinivas Naga Kotaru (skotaru)
><skot...@cisco.com> wrote:
>> Dale
>>
>> Thanks for the reply. I am aware of this command, but I am not sure this is
>> what I am looking for.
>>
>> We have 2 URLs for every application. The 1st one is generated by the openshift
>> router and the 2nd one is client generated. The client-generated URL is simply
>> a pointer to our DMZ reverse proxy servers. Once the initial traffic lands in
>> the DMZ and passes proper security filtering, it is proxied back to the
>> openshift url.
>>
>> For this setup to work, the openshift side should have an alias matching the
>> 1st URL for routing to work. In the case of apache, this was done using the
>> vhost server alias mechanism. Since 3.x uses an HAProxy-based router, I am
>> thinking we should have an equivalent setup matching the Host header of the
>> 1st URL. At the moment the HAProxy router has ACLs matching the 2nd (openshift)
>> Host header.  Without an ACL entry matching the 1st URL, HAProxy won't be able
>> to proxy to the endpoints.  You will get 503 errors etc. unless we do a Host
>> header conversion from the 1st URL to the 2nd at the RP layer.
>>
>> Hope it helps.
>
>Hi Srinivas,
>
>To achieve the same result as the 'alias' command from 2.x you would
>need to create a Route that matches the desired name.  Then you would
>create a CNAME entry for that in your DNS server that points your
>desired hostname to your router.  Hopefully I'm understanding you
>correctly and this is what you'd like to do.
>
>--Brenton
>
>>
>> --
>> Srinivas Kotaru
>>
>> From: Dale Bewley <d...@bewley.net>
>> Date: Tuesday, January 19, 2016 at 7:21 PM
>> To: skotaru <skot...@cisco.com>
>> Cc: dev <d...@lists.openshift.redhat.com>

Re: OpenShift SDN

2016-01-15 Thread Srinivas Naga Kotaru (skotaru)
Thanks Dan and Brenton.

That is a good piece of info. However, while drilling further into the issue, it 
seems that a single cluster per data center spanning internal and external nodes 
has some security issues due to the shared routers.

Routers need to be shared by, or accessible to, both internal and external nodes. 
If we put the routers in:

1. An external subnet: we need to open ports 80 and 443 from the routers to the 
internal network. Any compromise of an external application gives direct access to 
internal applications.

2. The internal network: more security risk. All external traffic has to come into 
the internal router network to reach an external application. Security is 
compromised at the internal layer; a big hole.

3. A dedicated subnet for routers: this is similar to option 1. We need to open 
ports from this dedicated subnet to the internal and external nodes for router 
communication. If any external application is compromised, the attacker has direct 
access to the internal network or applications due to the shared subnet.


-- 
Srinivas Kotaru






On 1/15/16, 6:40 AM, "Brenton Leanhardt" <blean...@redhat.com> wrote:

>On Fri, Jan 15, 2016 at 9:35 AM, Dan Winship <d...@redhat.com> wrote:
>> On 01/14/2016 05:54 PM, Srinivas Naga Kotaru (skotaru) wrote:
>>> Dan
>>>
>>> One question
>>>
>>> Masters are also using the same port for VXLAN communication with nodes,
>>> right? If we block the port between the internal and external subnets,
>>> but we put the masters in the internal network, they won't be able to
>>> talk to the external nodes or vice versa, right?
>>
>> The VXLAN is only used for communication with *pods*. So in that
>> situation, the master wouldn't directly be able to reach pods on
>> external nodes, but that may or may not be a problem. (There is some
>> reason that we make the master also be a node by default, which has
>> something to do with some tool which wants to have access to the pods,
>> but I don't remember what that is.)
>
>If the Master's can't reach Pods then the Web Console integration with
>java Pods (via jolokia) won't work.
>
>>
>> Master<->Node communication (eg, to launch new pods, etc) happens by the
>> nodes connecting to port 8443 on the master, so wherever the master is,
>> both kinds of nodes need to be able to reach that port.
>>
>>> One solution could be to put the masters in another subnet and control
>>> access between the master, internal, and external subnets. Any other
>>> better approach without doing this?
>>
>> Sure. Or just have some firewall holes specific to the master.
>>
>> -- Dan
>>
>> ___
>> dev mailing list
>> d...@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Router Sharding

2016-01-15 Thread Srinivas Naga Kotaru (skotaru)

Brenton said you guys are working on router sharding:

https://trello.com/c/DtPlixdb/49-8-router-sharding-traffic-ingress

I didn't quite follow the description. What is this feature, how is it useful, what 
are the use cases, and when will it be released?

Can we create separate routers for internal or external apps, or get more control 
over grouping routes by labels or node selector (region or zone)?

--
Srinivas Kotaru
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Router Sharding

2016-01-15 Thread Srinivas Naga Kotaru (skotaru)
Thanks Brenton. It is clear now. When will this feature be released? 3.2?


-- 
Srinivas Kotaru






On 1/15/16, 12:30 PM, "Brenton Leanhardt" <blean...@redhat.com> wrote:

>On Fri, Jan 15, 2016 at 12:47 PM, Srinivas Naga Kotaru (skotaru)
><skot...@cisco.com> wrote:
>>
>> Brenton said you guys are working on router sharding
>>
>> https://trello.com/c/DtPlixdb/49-8-router-sharding-traffic-ingress
>>
>> I didn't quite follow the description. What is this feature, how is it useful,
>> what are the use cases, and when will it be released?
>
>One use case would be large-scale deployments, where it's not practical
>to have hundreds of thousands of routes loaded in a single haproxy
>instance.  Sharding allows the problem to be carved up into smaller
>pieces.
>
>Another use case would simply be to have routing images that are tuned
>for specific workloads.  If I create a custom router image that is
>only useful for a certain class of applications, this feature would
>allow the router to only listen to the correct subset.
>
>>
>> Can we create separate routers for internal or external apps, or get more
>> control over grouping routes by labels or node selector (region or zone)?
>
>My understanding is that this would all be possible.
>
>>
>> --
>> Srinivas Kotaru

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: OpenShift SDN

2016-01-14 Thread Srinivas Naga Kotaru (skotaru)
Dan,

One question:

Masters also use the same port for VXLAN communication with nodes, right? If we 
block the port between the internal and external subnets, but we put the masters 
in the internal network, they won't be able to talk to the external nodes or vice 
versa, right?

One solution could be to put the masters in another subnet and control access 
between the master, internal, and external subnets. Any other better approach 
without doing this?

-- 
Srinivas Kotaru






On 1/14/16, 11:03 AM, "Srinivas Naga Kotaru (skotaru)" <skot...@cisco.com> 
wrote:

>Thank you Dan. It is all clear now.
>
>It is a much better solution than installing 2 separate cluster installations 
>in each data center just to isolate internal vs. external traffic.
>
>Appreciated Dan..
>
>
>Srinivas Kotaru
>
>
>
>
>
>
>On 1/14/16, 10:00 AM, "Dan Winship" <d...@redhat.com> wrote:
>
>>On 01/14/2016 12:56 PM, Srinivas Naga Kotaru (skotaru) wrote:
>>> Thanks Dan for info. Are you saying we need to block VXLAN port using 
>>> traditional subnet firewall between Internal <-> External Nodes?
>>
>>Yes. (Though I assume your firewall is already doing this.)
>>
>>> Is it blocking port 4789 between subnets? Any impact from blocking port 4789 
>>> apart from blocking Internal <-> External communication?
>>
>>Yes (UDP). No other effect.
>>
>>-- Dan
>>
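
(For reference, a minimal sketch of blocking the VXLAN port as Dan describes; the 
subnet ranges are placeholders for your own internal and external node subnets:)

# drop VXLAN (UDP 4789) in both directions between the two node subnets
iptables -A FORWARD -p udp --dport 4789 -s 10.1.0.0/16 -d 10.2.0.0/16 -j DROP
iptables -A FORWARD -p udp --dport 4789 -s 10.2.0.0/16 -d 10.1.0.0/16 -j DROP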

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users