Re: [3.x]: openshift router and its own metrics

2019-08-16 Thread Clayton Coleman
On Aug 16, 2019, at 4:55 AM, Daniel Comnea  wrote:



On Thu, Aug 15, 2019 at 7:46 PM Clayton Coleman  wrote:

>
>
> On Aug 15, 2019, at 12:25 PM, Daniel Comnea  wrote:
>
> Hi Clayton,
>
> Certainly some of the metrics should be preserved across reloads: a
> counter like *haproxy_server_http_responses_total* should survive a
> reload (though, to an extent, Prometheus can handle counter resets
> natively).
>
> However, the metric
> *haproxy_server_http_average_response_latency_milliseconds* also appears
> to be accumulating when we wouldn't expect it to. (According to the
> haproxy stats, I think that's a rolling average over the last 1024
> calls -- so it should go up and down.)
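For context, an "average over the last 1024 requests" of the kind the haproxy stats expose can be maintained as an exponentially weighted running sum, where each new sample displaces 1/n of the accumulated total; the estimate therefore rises and falls with traffic rather than accumulating without bound. A hedged Python sketch of that style of estimator (the 1024 window comes from the stats description; this is not haproxy's actual code):

```python
class RollingAvg:
    """Approximate average over roughly the last n samples.

    Each new sample displaces 1/n of the running sum, so the
    estimate tracks recent traffic and can go both up and down,
    unlike a monotonically accumulating counter."""

    def __init__(self, n=1024):
        self.n = n
        self.total = 0.0

    def add(self, sample):
        self.total += sample - self.total / self.n
        return self.total / self.n  # current estimate

avg = RollingAvg(n=4)  # small window so the behaviour is visible
for latency_ms in (100, 100, 10, 10, 10, 10):
    estimate = avg.add(latency_ms)
# After a run of low-latency samples, the estimate has decayed
# toward the recent values instead of accumulating.
```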
>
>
> File a bug with more details, can’t say off the top of my head
> [DC]: Thank you. Do you have a preference/suggestion for where I should
> open it for OKD? I guess BZ is not suitable for OKD, or am I wrong?
>

There should be BZ components for origin


> Thoughts?
>
>
> Cheers,
> Dani
>
>
> On Thu, Aug 15, 2019 at 3:59 PM Clayton Coleman 
> wrote:
>
>> Metrics memory use in the router should be proportional to number of
>> services, endpoints, and routes.  I doubt it's leaking there and if it were
>> it'd be really slow since we don't restart the router monitor process
>> ever.  Stats should definitely be preserved across reloads, but will not be
>> preserved across the pod being restarted.
>>
>> On Thu, Aug 15, 2019 at 10:30 AM Dan Mace  wrote:
>>
>>>
>>>
>>> On Thu, Aug 15, 2019 at 10:03 AM Daniel Comnea 
>>> wrote:
>>>
 Hi,

 Would appreciate it if anyone could confirm that my understanding is
 correct w.r.t. the way the router haproxy image [1] is built.
 Am I right to assume that the image [1] is built as-is, without any
 other layer being added to include [2]?
 Also, am I right to say the haproxy metrics [2] are part of the origin
 package?


 A bit of background/ context:

 A while back, on OKD 3.7, we had to swap the OpenShift 3.7.2 router
 image for the 3.10 one because we were seeing some problems with the
 reload, and we wanted the benefit of the native haproxy 1.8 reload
 feature to stop affecting traffic.

 While everything was nice and working okay, we've noticed recently that
 the haproxy stats slowly increase, and we wonder whether this is an
 accumulation caused (maybe?) by the reloads. I'm aware of a change made
 in [3]; however, I suspect it is not part of the 3.10 image, hence my
 question to double-check whether my understanding is wrong.


 Cheers,
 Dani

 [1]
 https://github.com/openshift/origin/tree/release-3.10/images/router/haproxy
 [2]
 https://github.com/openshift/origin/tree/release-3.10/pkg/router/metrics
 [3]
 https://github.com/openshift/origin/commit/8f0119bdd9c3b679cdfdf2962143435a95e08eae#diff-58216897083787e1c87c90955aabceff
 ___
 dev mailing list
 dev@lists.openshift.redhat.com
 http://lists.openshift.redhat.com/openshiftmm/listinfo/dev

>>>
>>> I think Clayton (copied) has the history here, but the nature of the
>>> metrics commit you referenced is that many of the exposed metrics
>>> are counters which were being reset across reloads. The patch was (I
>>> think) to enable counter metrics to correctly accumulate across reloads.
>>>
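The accumulation technique described above can be shown in miniature: the exporter keeps a per-counter offset that absorbs whatever the process had counted before each reload, so the exposed total keeps growing even though the underlying process restarts from zero. This is a generic sketch of the pattern, not the actual origin code:

```python
class ReloadSafeCounter:
    """Expose a monotonically increasing total even though the
    underlying process (e.g. an haproxy worker) restarts from zero
    on every reload."""

    def __init__(self):
        self.base = 0     # accumulated before the last reload
        self.current = 0  # latest value scraped from the live process

    def observe(self, raw):
        # A raw value lower than the previous one means the process
        # reloaded and its counter restarted from zero.
        if raw < self.current:
            self.base += self.current
        self.current = raw

    def value(self):
        return self.base + self.current

c = ReloadSafeCounter()
for raw in (10, 42, 3, 7):  # a reload happens after 42
    c.observe(raw)
print(c.value())  # -> 49
```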
>>> As to how the image itself is built, the pkg directory is part of the
>>> router controller code included with the image. Not sure if that answers
>>> your question.
>>>
>>> --
>>>
>>> Dan Mace
>>>
>>> Principal Software Engineer, OpenShift
>>>
>>> Red Hat
>>>
>>> dm...@redhat.com
>>>
>>>
>>>


Re: RFC: OKD4 Roadmap Draft

2019-08-16 Thread Kevin Lapagna
On Fri, Aug 16, 2019 at 4:50 PM Clayton Coleman  wrote:

> Single master / single node configurations are possible, but they will be
> hard.  Many of the core design decisions of 4 are there to ensure the
> cluster can self host, and they also require that machines really be
> members of the cluster.
>

How about (as an alternative) spinning up multiple virtual machines to
simulate "the real thing"? Sure, that uses a lot of memory, but it would
nicely show what 4.x is capable of.


Re: RFC: OKD4 Roadmap Draft

2019-08-16 Thread Michael Gugino
I'm not sure what you mean by 'bootstrap machine is anything'.

I haven't seriously looked into a single-host cluster yet, but from a
high level, creating the bootstrap node as normal but not pivoting the
API to the 'real' cluster would seem to do some of what we want, if what
we want is 'just give me a working API so I can run containers' and not
'all of openshift stuffed into a single host'.

On Fri, Aug 16, 2019 at 11:36 AM Clayton Coleman 
wrote:

>
>
> On Aug 16, 2019, at 11:29 AM, Michael Gugino  wrote:
>
> Pretty much already had all of this working here:
> https://github.com/openshift/openshift-ansible/pull/10898
>
> For a single-host cluster, I think the path of least resistance would be
> to modify the bootstrap host to not pivot, make it clear it's 'not for
> production', and we can take lots of shortcuts for someone just looking
> for an easy, 1-VM openshift API.
>
>
> Does that assume bootstrap machine is “anything”?
>
>
> I'm most interested in running OKD 4.x on Fedora rather than CoreOS.  I
> might try to do something with that this weekend as a POC.
>
>
> Thanks
>
>
> On Fri, Aug 16, 2019 at 10:49 AM Clayton Coleman 
> wrote:
>
>>
>>
>> On Aug 16, 2019, at 10:39 AM, Michael McCune  wrote:
>>
>>
>>
>> On Wed, Aug 14, 2019 at 12:27 PM Christian Glombek 
>> wrote:
>>
>>> The OKD4 roadmap is currently being drafted here:
>>>
>>> https://hackmd.io/Py3RcpuyQE2psYEIBQIzfw
>>>
>>> There was an initial discussion on it in yesterday's WG meeting, with
>>> some feedback given already.
>>>
>>> I have updated the draft and am now calling for comments a final time;
>>> a formal Call for Agreement will follow at the beginning of next week
>>> on the OKD WG Google group list.
>>>
>>> Please add your comments before Monday. Thank you.
>>>
>>>
>> i'm not sure if i should add this to the document, but is there any
>> consensus (one way or the other) about the notion of bringing forward the
>> all-in-one work that was done in openshift-ansible for version 3?
>>
>> i am aware of CodeReady Containers, but i would really like to see us
>> provide the option for a single-machine install.
>>
>>
>> It’s possible for someone to emulate much of the install, bootstrap, and
>> subsequent operations on a single machine (the installer isn’t that much
>> code, the bulk of the work is across the operators).  You’d end up copying
>> a fair bit of the installer, but it may be tractable.  You’d need to really
>> understand the config passed to bootstrap via ignition, how the bootstrap
>> script works, and how you would trick etcd to start on the bootstrap
>> machine.  When the etcd operator lands in 4.3, that last becomes easier
>> (the operator runs and configures a local etcd).
>>
>> Single master / single node configurations are possible, but they will be
>> hard.  Many of the core design decisions of 4 are there to ensure the
>> cluster can self host, and they also require that machines really be
>> members of the cluster.
>>
>> A simpler path might be (once we have the OKD prototype working) to
>> create a custom payload that excludes the installer and the MCD, and to
>> use ansible to configure the prereqs on a single machine (etcd in a
>> specific config), then emulate parts of the bootstrap script and run a
>> single instance (which in theory should work today).  You might be able
>> to update it.  Someone exploring this could get openshift running on a
>> non-CoreOS control plane, so it's worth exploring if someone has the time.
>>
>>
>> peace o/
>>
>>
>>> Christian Glombek
>>>
>>> Associate Software Engineer
>>>
>>> Red Hat GmbH 
>>>
>>> 
>>>
>>> cglom...@redhat.com 
>>> 
>>>
>>>
>>> Red Hat GmbH, http://www.de.redhat.com/, Sitz: Grasbrunn, Germany,
>>> Handelsregister: Amtsgericht München, HRB 153243,
>>> Geschäftsführer: Charles Cachera, Michael O'Neill, Tom Savage, Eric Shander
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "okd-wg" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to okd-wg+unsubscr...@googlegroups.com.
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/okd-wg/CAABn9-8khhZ4VmHpKXXogo_kH-50QnR6vZgc525FROA41mxboA%40mail.gmail.com
>>> 
>>> .
>>>

Re: RFC: OKD4 Roadmap Draft

2019-08-16 Thread Clayton Coleman
On Aug 16, 2019, at 11:29 AM, Michael Gugino  wrote:

Pretty much already had all of this working here:
https://github.com/openshift/openshift-ansible/pull/10898

For a single-host cluster, I think the path of least resistance would be
to modify the bootstrap host to not pivot, make it clear it's 'not for
production', and we can take lots of shortcuts for someone just looking
for an easy, 1-VM openshift API.


Does that assume bootstrap machine is “anything”?


I'm most interested in running OKD 4.x on Fedora rather than CoreOS.  I
might try to do something with that this weekend as a POC.


Thanks


On Fri, Aug 16, 2019 at 10:49 AM Clayton Coleman 
wrote:

>
>
> On Aug 16, 2019, at 10:39 AM, Michael McCune  wrote:
>
>
>
> On Wed, Aug 14, 2019 at 12:27 PM Christian Glombek 
> wrote:
>
>> The OKD4 roadmap is currently being drafted here:
>>
>> https://hackmd.io/Py3RcpuyQE2psYEIBQIzfw
>>
>> There was an initial discussion on it in yesterday's WG meeting, with
>> some feedback given already.
>>
>> I have updated the draft and am now calling for comments a final time;
>> a formal Call for Agreement will follow at the beginning of next week
>> on the OKD WG Google group list.
>>
>> Please add your comments before Monday. Thank you.
>>
>>
> i'm not sure if i should add this to the document, but is there any
> consensus (one way or the other) about the notion of bringing forward the
> all-in-one work that was done in openshift-ansible for version 3?
>
> i am aware of CodeReady Containers, but i would really like to see us
> provide the option for a single-machine install.
>
>
> It’s possible for someone to emulate much of the install, bootstrap, and
> subsequent operations on a single machine (the installer isn’t that much
> code, the bulk of the work is across the operators).  You’d end up copying
> a fair bit of the installer, but it may be tractable.  You’d need to really
> understand the config passed to bootstrap via ignition, how the bootstrap
> script works, and how you would trick etcd to start on the bootstrap
> machine.  When the etcd operator lands in 4.3, that last becomes easier
> (the operator runs and configures a local etcd).
>
> Single master / single node configurations are possible, but they will be
> hard.  Many of the core design decisions of 4 are there to ensure the
> cluster can self host, and they also require that machines really be
> members of the cluster.
>
> A simpler path might be (once we have the OKD prototype working) to
> create a custom payload that excludes the installer and the MCD, and to
> use ansible to configure the prereqs on a single machine (etcd in a
> specific config), then emulate parts of the bootstrap script and run a
> single instance (which in theory should work today).  You might be able
> to update it.  Someone exploring this could get openshift running on a
> non-CoreOS control plane, so it's worth exploring if someone has the time.
>
>
> peace o/
>
>
>>

Re: RFC: OKD4 Roadmap Draft

2019-08-16 Thread Michael Gugino
Pretty much already had all of this working here:
https://github.com/openshift/openshift-ansible/pull/10898

For a single-host cluster, I think the path of least resistance would be
to modify the bootstrap host to not pivot, make it clear it's 'not for
production', and we can take lots of shortcuts for someone just looking
for an easy, 1-VM openshift API.

I'm most interested in running OKD 4.x on Fedora rather than CoreOS.  I
might try to do something with that this weekend as a POC.

On Fri, Aug 16, 2019 at 10:49 AM Clayton Coleman 
wrote:

>
>
> On Aug 16, 2019, at 10:39 AM, Michael McCune  wrote:
>
>
>
> On Wed, Aug 14, 2019 at 12:27 PM Christian Glombek 
> wrote:
>
>> The OKD4 roadmap is currently being drafted here:
>>
>> https://hackmd.io/Py3RcpuyQE2psYEIBQIzfw
>>
>> There was an initial discussion on it in yesterday's WG meeting, with
>> some feedback given already.
>>
>> I have updated the draft and am now calling for comments a final time;
>> a formal Call for Agreement will follow at the beginning of next week
>> on the OKD WG Google group list.
>>
>> Please add your comments before Monday. Thank you.
>>
>>
> i'm not sure if i should add this to the document, but is there any
> consensus (one way or the other) about the notion of bringing forward the
> all-in-one work that was done in openshift-ansible for version 3?
>
> i am aware of CodeReady Containers, but i would really like to see us
> provide the option for a single-machine install.
>
>
> It’s possible for someone to emulate much of the install, bootstrap, and
> subsequent operations on a single machine (the installer isn’t that much
> code, the bulk of the work is across the operators).  You’d end up copying
> a fair bit of the installer, but it may be tractable.  You’d need to really
> understand the config passed to bootstrap via ignition, how the bootstrap
> script works, and how you would trick etcd to start on the bootstrap
> machine.  When the etcd operator lands in 4.3, that last becomes easier
> (the operator runs and configures a local etcd).
>
> Single master / single node configurations are possible, but they will be
> hard.  Many of the core design decisions of 4 are there to ensure the
> cluster can self host, and they also require that machines really be
> members of the cluster.
>
> A simpler path might be (once we have the OKD prototype working) to
> create a custom payload that excludes the installer and the MCD, and to
> use ansible to configure the prereqs on a single machine (etcd in a
> specific config), then emulate parts of the bootstrap script and run a
> single instance (which in theory should work today).  You might be able
> to update it.  Someone exploring this could get openshift running on a
> non-CoreOS control plane, so it's worth exploring if someone has the time.
>
>
> peace o/
>
>
>>

Re: RFC: OKD4 Roadmap Draft

2019-08-16 Thread Clayton Coleman
On Aug 16, 2019, at 10:39 AM, Michael McCune  wrote:



On Wed, Aug 14, 2019 at 12:27 PM Christian Glombek 
wrote:

> The OKD4 roadmap is currently being drafted here:
>
> https://hackmd.io/Py3RcpuyQE2psYEIBQIzfw
>
> There was an initial discussion on it in yesterday's WG meeting, with some
> feedback given already.
>
> I have updated the draft and am now calling for comments a final time;
> a formal Call for Agreement will follow at the beginning of next week
> on the OKD WG Google group list.
>
> Please add your comments before Monday. Thank you.
>
>
i'm not sure if i should add this to the document, but is there any
consensus (one way or the other) about the notion of bringing forward the
all-in-one work that was done in openshift-ansible for version 3?

i am aware of CodeReady Containers, but i would really like to see us
provide the option for a single-machine install.


It’s possible for someone to emulate much of the install, bootstrap, and
subsequent operations on a single machine (the installer isn’t that much
code, the bulk of the work is across the operators).  You’d end up copying
a fair bit of the installer, but it may be tractable.  You’d need to really
understand the config passed to bootstrap via ignition, how the bootstrap
script works, and how you would trick etcd to start on the bootstrap
machine.  When the etcd operator lands in 4.3, that last becomes easier
(the operator runs and configures a local etcd).

Single master / single node configurations are possible, but they will be
hard.  Many of the core design decisions of 4 are there to ensure the
cluster can self host, and they also require that machines really be
members of the cluster.

A simpler path might be (once we have the OKD prototype working) to
create a custom payload that excludes the installer and the MCD, and to
use ansible to configure the prereqs on a single machine (etcd in a
specific config), then emulate parts of the bootstrap script and run a
single instance (which in theory should work today).  You might be able
to update it.  Someone exploring this could get openshift running on a
non-CoreOS control plane, so it's worth exploring if someone has the time.


peace o/


>


Re: RFC: OKD4 Roadmap Draft

2019-08-16 Thread Neal Gompa
On Fri, Aug 16, 2019 at 10:39 AM Michael McCune  wrote:

>
>
> On Wed, Aug 14, 2019 at 12:27 PM Christian Glombek 
> wrote:
>
>> The OKD4 roadmap is currently being drafted here:
>>
>> https://hackmd.io/Py3RcpuyQE2psYEIBQIzfw
>>
>> There was an initial discussion on it in yesterday's WG meeting, with
>> some feedback given already.
>>
>> I have updated the draft and am now calling for comments a final time;
>> a formal Call for Agreement will follow at the beginning of next week
>> on the OKD WG Google group list.
>>
>> Please add your comments before Monday. Thank you.
>>
>>
> i'm not sure if i should add this to the document, but is there any
> consensus (one way or the other) about the notion of bringing forward the
> all-in-one work that was done in openshift-ansible for version 3?
>
> i am aware of CodeReady Containers, but i would really like to see us
> provide the option for a single-machine install.
>
>
I would personally like to see some kind of all-in-one setup support like
in OKD v3. I made my own inventory for doing this for my personal
setups: https://pagure.io/openshift-allinone-deployment-configuration

It helps when playing with these things. :)


-- 
真実はいつも一つ!/ Always, there's only one truth!


Re: [3.x]: openshift router and its own metrics

2019-08-16 Thread Daniel Comnea
On Thu, Aug 15, 2019 at 7:46 PM Clayton Coleman  wrote:

>
>
> On Aug 15, 2019, at 12:25 PM, Daniel Comnea  wrote:
>
> Hi Clayton,
>
> Certainly some of the metrics should be preserved across reloads: a
> counter like *haproxy_server_http_responses_total* should survive a
> reload (though, to an extent, Prometheus can handle counter resets
> natively).
>
> However, the metric
> *haproxy_server_http_average_response_latency_milliseconds* also appears
> to be accumulating when we wouldn't expect it to. (According to the
> haproxy stats, I think that's a rolling average over the last 1024
> calls -- so it should go up and down.)
>
>
> File a bug with more details, can’t say off the top of my head
> [DC]: Thank you. Do you have a preference/suggestion for where I should
> open it for OKD? I guess BZ is not suitable for OKD, or am I wrong?
>
>
> Thoughts?
>
>
> Cheers,
> Dani
>
>
> On Thu, Aug 15, 2019 at 3:59 PM Clayton Coleman 
> wrote:
>
>> Metrics memory use in the router should be proportional to number of
>> services, endpoints, and routes.  I doubt it's leaking there and if it were
>> it'd be really slow since we don't restart the router monitor process
>> ever.  Stats should definitely be preserved across reloads, but will not be
>> preserved across the pod being restarted.
>>
>> On Thu, Aug 15, 2019 at 10:30 AM Dan Mace  wrote:
>>
>>>
>>>
>>> On Thu, Aug 15, 2019 at 10:03 AM Daniel Comnea 
>>> wrote:
>>>
 Hi,

 Would appreciate it if anyone could confirm that my understanding is
 correct w.r.t. the way the router haproxy image [1] is built.
 Am I right to assume that the image [1] is built as-is, without any
 other layer being added to include [2]?
 Also, am I right to say the haproxy metrics [2] are part of the origin
 package?


 A bit of background/ context:

 A while back, on OKD 3.7, we had to swap the OpenShift 3.7.2 router
 image for the 3.10 one because we were seeing some problems with the
 reload, and we wanted the benefit of the native haproxy 1.8 reload
 feature to stop affecting traffic.

 While everything was nice and working okay, we've noticed recently that
 the haproxy stats slowly increase, and we wonder whether this is an
 accumulation caused (maybe?) by the reloads. I'm aware of a change made
 in [3]; however, I suspect it is not part of the 3.10 image, hence my
 question to double-check whether my understanding is wrong.


 Cheers,
 Dani

 [1]
 https://github.com/openshift/origin/tree/release-3.10/images/router/haproxy
 [2]
 https://github.com/openshift/origin/tree/release-3.10/pkg/router/metrics
 [3]
 https://github.com/openshift/origin/commit/8f0119bdd9c3b679cdfdf2962143435a95e08eae#diff-58216897083787e1c87c90955aabceff

>>>
>>> I think Clayton (copied) has the history here, but the nature of the
>>> metrics commit you referenced is that many of the exposed metrics
>>> are counters which were being reset across reloads. The patch was (I
>>> think) to enable counter metrics to correctly accumulate across reloads.
>>>
>>> As to how the image itself is built, the pkg directory is part of the
>>> router controller code included with the image. Not sure if that answers
>>> your question.
>>>
>>> --
>>>
>>> Dan Mace
>>>
>>> Principal Software Engineer, OpenShift
>>>
>>> Red Hat
>>>
>>> dm...@redhat.com
>>>
>>>
>>>