Re: Haproxy subdomain going to wrong backend

2016-11-14 Thread Bryan Talbot
Use “reply-all” so the thread stays on the list.


> On Nov 14, 2016, at 4:33 AM, Azam Mohammed  wrote:
> 
> Hi Bryan,
> 
> Thanks for your email.
> 
> I was doing a bit of testing on haproxy.
> 
> I used hdr to match the subdomain in the frontend, but I got a 503 error:
> "503 Service Unavailable - No server is available to handle this request."
> 
> Haproxy Log:
> http-in http-in/ -1/-1/-1/-1/163 503 212 - - SC-- 4/4/0/0/0 0/0 "GET 
> /favicon.ico HTTP/1.1"
> 
> 
> http-in http-in/ -1/-1/-1/-1/0 503 212 - - SC-- 2/2/0/0/0 0/0 "GET 
> /favicon.ico HTTP/1.1"
> 
> But using hdr_dom(host) works fine
> 
> Haproxy Log:
> 
> 

Clearly the Host header being sent doesn't exactly match the strings you're
checking for.
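Bryan's diagnosis can be illustrated with a minimal frontend sketch; the hostnames and backend name below are invented for the example, not taken from Azam's config:

```
frontend http-in
    bind :80

    # hdr(host) is an exact string match: it fails if the client sends,
    # say, "Host: qa2.example.com:8080" or any value that is not
    # byte-for-byte identical to the pattern.
    acl host_qa2_exact hdr(host) -i qa2.example.com

    # hdr_dom(host) is a domain match: it tolerates delimiters around the
    # pattern (e.g. a trailing port), which is likely why it "works fine".
    acl host_qa2_dom hdr_dom(host) -i qa2.example.com

    use_backend qa2_servers if host_qa2_dom
```

Logging the actual Host value received (e.g. with `capture request header Host len 64`) would confirm what string the exact match is failing against.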

-Bryan



> http-in ppqa2argaamplus/web01 0/0/2/26/28 200 1560 - - --VN 6/6/0/0/0 0/0 
> "GET /content/ar/images/argaam-plus-icon.ico HTTP/1.1"
> 
> All our websites are developed on ASP.NET.
> 
> I want to use hdr (which, as you mentioned, matches the exact string) to
> match the subdomain.
> 
> Could you please help me to fix this.
> 
> 
> --
> 
> Thanks & Regards, 
>  
> Azam Sheikh Mohammed
> IT Network & System Admin
>   


Re: problem building haproxy 1.6.9 on ar71xx

2016-11-14 Thread Thomas Heil
Hi,

On 14.11.2016 18:26, Willy Tarreau wrote:
> Hi Thomas,
> 
> On Fri, Nov 11, 2016 at 05:33:55PM +0100, Thomas Heil wrote:
>>> Lede has OPENSSL_WITH_DEPRECATED menuconfig [1], which defaults to yes
>>> (so a default LEDE build should be fine).
>>>
>>> Can you confirm your config has OPENSSL_WITH_DEPRECATED = y?
>>>
>>
>> Wow, nice hint. Well, I added OPENSSL_WITH_DEPRECATED as a dependency now.
>>
>>> Also can you post the output of "openssl version -a" please? That would
>>> have to come from the executable though; so on the destination machine
>>> in your cross-compile situation.
>>>
>>>
>> will do that later.
> (...)
> 
> And for what it's worth, we do use haproxy-1.6 and openssl-1.0.2 on
> the GL-iNet platform without any problem. It's ath79-based, which is
> nothing more than an updated ar71xx (a mips-24kc as well, AR9331 to
> be exact). We do it on our own distro however, building openssl for
> OS "linux-generic32". So I think Lukas is right, you might be missing
> some deprecated stuff.

Indeed, that was the solution. I recently pushed the new package and the
buildbots are happy now.

> 
> Also you may want to check if openwrt's openssl doesn't have some patches
> to strip down the resulting libraries by removing useless stuff, which
> would cause the issue you're facing.
> 

I've checked it, but the issue was that we need OPENSSL_WITH_DEPRECATED=y

> Cheers,
> Willy
> 
> 
thank you

cheers
thomas



Re: problem building haproxy 1.6.9 on ar71xx

2016-11-14 Thread Willy Tarreau
Hi Thomas,

On Fri, Nov 11, 2016 at 05:33:55PM +0100, Thomas Heil wrote:
> > Lede has OPENSSL_WITH_DEPRECATED menuconfig [1], which defaults to yes
> > (so a default LEDE build should be fine).
> > 
> > Can you confirm your config has OPENSSL_WITH_DEPRECATED = y?
> > 
> 
> Wow, nice hint. Well, I added OPENSSL_WITH_DEPRECATED as a dependency now
> 
> > Also can you post the output of "openssl version -a" please? That would
> > have to come from the executable though; so on the destination machine
> > in your cross-compile situation.
> > 
> > 
> will do that later.
(...)

And for what it's worth, we do use haproxy-1.6 and openssl-1.0.2 on
the GL-iNet platform without any problem. It's ath79-based, which is
nothing more than an updated ar71xx (a mips-24kc as well, AR9331 to
be exact). We do it on our own distro however, building openssl for
OS "linux-generic32". So I think Lukas is right, you might be missing
some deprecated stuff.

Also you may want to check if openwrt's openssl doesn't have some patches
to strip down the resulting libraries by removing useless stuff, which
would cause the issue you're facing.

Cheers,
Willy



Re: [PATCH 1/8] MINOR: tcp: Store upstream proxy TCP informations before overwrite

2016-11-14 Thread Willy Tarreau
Hi Bertrand,

On Mon, Nov 14, 2016 at 08:49:28AM +0100, Willy Tarreau wrote:
> I'll pick your fixes from the patchset though ;-)

OK, patches 4 to 8 applied, and 1-3 kept for later, thanks!

Willy



Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Willy Tarreau
Hi,

On Mon, Nov 14, 2016 at 03:29:58PM +, Mirek Svoboda wrote:
> What if we have the descriptions in the source code, serving as a single
> source of truth, and generate the JSON schema file from the source code
> upon build?

... or on the fly. That's what I was thinking as well, i.e.
"show stats json-schema" and use that output.

> There might also be another use case for the descriptions in the source
> code in the future, though I cannot come up with an example now.

Clearly the source code doesn't need the descriptions, however it's the
easiest place to ensure consistency. When you add a new field and you
only have to type 5 words in a 3rd column, you have no excuse for not
doing it. When you have to open a file you don't know exists, or to try
to remember what file this was because you remember being instructed to
do so in the past, it's quite different.

Regards,
Willy



Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Mirek Svoboda
Hi

> > OK. So does this mean that a schema will have to be maintained by hand in
> > parallel or will it be deduced from the dump ? I'm starting to be worried
> > about something not being kept up to date if we have to maintain it, or
> > causing a slow down in adoption of new stats entries.
>
> I envisage the schema being maintained in the same way that documentation
> is. In the draft schema I posted it should not be necessary to update each
> time a new item is added to the output of show stat or show info. Rather,
> the schema would need to be updated if the format of the data changes
> somehow: e.g. a new field is added, which would be analogous to adding a
> new column to the typed output, or a new type of value, such as u16, is
> added.
>

What if we have the descriptions in the source code, serving as a single
source of truth, and generate the JSON schema file from the source code
upon build?
There might also be another use case for the descriptions in the source
code in the future, though I cannot come up with an example now.
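Mirek's idea of generating the schema from a single source of truth can be sketched like this. The Python table below merely stands in for the stats field array in HAProxy's C source; all names and descriptions are illustrative, not the real implementation.

```python
# Sketch: keep field descriptions in one place and generate the JSON schema
# fragment from that table at build time. FIELDS stands in for HAProxy's
# C stats field array; the entries are illustrative only.
import json

# Single source of truth: (name, json type, description) per field.
FIELDS = [
    ("pos",  "integer", "Position of field"),
    ("name", "string",  "Name of field"),
]

def field_schema(fields):
    """Build a json-schema object definition from the field table."""
    return {
        "type": "object",
        "properties": {
            name: {"type": jtype, "description": desc}
            for name, jtype, desc in fields
        },
        "required": [name for name, _, _ in fields],
    }

print(json.dumps(field_schema(FIELDS), indent=4))
```

Because the descriptions live next to the field definitions, adding a new stats field automatically updates the generated schema, which addresses Willy's worry about the two drifting apart.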

Regards,
Mirek Svoboda

>


Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Simon Horman
On Mon, Nov 14, 2016 at 08:50:54AM -0500, hapr...@stormcloud9.net wrote:
> Might help to see an example of what the results look like when using
> this schema, however I do have one comment below.

Yes, agreed. I plan to work on making that so.

> On 2016/11/14 03:09, Simon Horman wrote:
> > Hi Willy, Hi All,
> >
> > On Thu, Nov 10, 2016 at 04:52:56PM +0100, Willy Tarreau wrote:
> >> Hi Simon!
> >>
> >> On Thu, Nov 10, 2016 at 04:27:15PM +0100, Simon Horman wrote:
> >>> My preference is to take things calmly as TBH I am only just getting
> >>> started on this and I think the schema could take a little time to get
> >>> a consensus on.
> >> I totally agree with you. I think the most difficult thing is not to
> >> run over a few arrays and dump them but manage to make everyone agree
> >> on the schema. And that will take more than a few days I guess. Anyway
> >> I'm fine with being proven wrong :-)
> > I took a first pass at defining a schema.
> >
> > * The schema follows what is described on json-schema.org (or at least
> >   tries to). Is this a suitable approach?
> > * The schema only covers "show info" and "show stat" and the fields
> >   are based on the typed output variants of those commands.
> >   This leads me to several questions:
> >   - Is this field selection desirable? It seems to make sense to me
> > as presumably the intention of the JSON output is for it to
> > be machine readable.
> >   - Is such an approach appropriate for other show commands?
> >   - And more generally, which other show commands are desired to
> > support output in JSON (in the near term)?
> >
> > {
> > "$schema": "http://json-schema.org/draft-04/schema#;,
> > "oneOf": [
> > {
> > "title": "Info",
> > "description": "Info about HAProxy status",
> > "type": "array",
> > "items": {
> > "properties": {
> > "title": "Info Item",
> > "type": "object",
> > "field": { "$ref": "#/definitions/field" },
> > "processNum": { "$ref": "#/definitions/processNum" },
> > "tags": { "$ref": "#/definitions/tags" },
> > "value": { "$ref": "#/definitions/typedValue" }
> > },
> > "required": ["field", "processNum", "tags", "value"]
> > }
> > },
> > {
> > "title": "Stat",
> > "description": "HAProxy statistics",
> > "type": "array",
> > "items": {
> > "title": "Info Item",
> > "type": "object",
> > "properties": {
> > "objType": {
> > "enum": ["F", // Frontend
> >  "B", // Backend
> >  "L", // Listener
> >  "S"  // Server
> Do we really need to save a few bytes and abbreviate these? We're
> already far more chatty than the CSV output as you're outputting field
> names (e.g. "proxyId" and "processNum"), so abbreviating the values when
> you've got full field names seems rather contrary. And then as you've
> demonstrated, this requires defining a "sub-schema" for explaining what
> "F", "B", etc, are. Thus requiring anyone parsing the json to have to
> keep a mapping of the values (and do the translation) within their code.
> Ditto for all the other "enum" types down below.

Good point. I'm not sure why that didn't occur to me.
But it does seem like a good idea.

> > ]
> > },
> > "proxyId": {
> > "type": "integer",
> > "minimum": 0
> > },
> > "id": {
> > "description": "Unique identifyier of object within 
> > proxy",
> > "type": "integer",
> > "minimum": 0
> > },
> > "field": { "$ref": "#/definitions/field" },
> > "processNum": { "$ref": "#/definitions/processNum" },
> > "tags": { "$ref": "#/definitions/tags" },
> > "typedValue": { "$ref": "#/definitions/typedValue" }
> > },
> > "required": ["objType", "proxyId", "id", "field", 
> > "processNum",
> >  "tags", "value"]
> > }
> > }
> > ],
> > "definitions": {
> > "field": {
> > "type": "object",
> > "pos": {
> > "description": "Position of field",
> > "type": "integer",
> > "minimum": 0
> > },
> > "name": {
> > "description": "Name of field",
> > "type": "string"
> > },
> > "required": ["pos", "name"]
> > },
> > "processNum": {
> > 
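Patrick's objection can be seen from the consumer side: with single-letter codes in the output, every client must carry a translation table like the hypothetical one below (a sketch of a consumer, not HAProxy code; the codes follow the draft schema in this thread).

```python
# Translation table a JSON consumer would need if the schema keeps the
# single-letter objType codes from the draft schema (sketch, not HAProxy code).
OBJ_TYPE = {
    "F": "Frontend",
    "B": "Backend",
    "L": "Listener",
    "S": "Server",
}

def expand_obj_type(code: str) -> str:
    """Map an abbreviated objType code to its full name."""
    try:
        return OBJ_TYPE[code]
    except KeyError:
        raise ValueError("unknown objType code: %r" % code)

print(expand_obj_type("F"))  # Frontend
```

Emitting the full names directly would make both this table and the "sub-schema" explaining the codes unnecessary.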

Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Simon Horman
Hi Willy,

On Mon, Nov 14, 2016 at 03:10:18PM +0100, Willy Tarreau wrote:
> On Mon, Nov 14, 2016 at 11:34:18AM +0100, Simon Horman wrote:
> > > Sometimes a description like above appears in your example, is it just
> > > for a few fields or do you intend to describe all of them ? I'm asking
> > > because we don't have such descriptions right now, and while I won't
> > > deny that forcing contributors to add one when adding new stats could be
> > > reasonable (it's like doc), I fear that it would significantly inflate
> > > the output.
> > 
> > My understanding is that the description is part of the schema but would
> > not be included in a JSON instance. Or on other words, would not
> > be included in the output of a show command.
> 
> OK. So does this mean that a schema will have to be maintained by hand in
> parallel or will it be deduced from the dump ? I'm starting to be worried
> about something not being kept up to date if we have to maintain it, or
> causing a slow down in adoption of new stats entries.

I envisage the schema being maintained in the same way that documentation
is. In the draft schema I posted it should not be necessary to update each
time a new item is added to the output of show stat or show info. Rather,
the schema would need to be updated if the format of the data changes
somehow: e.g. a new field is added, which would be analogous to adding a new
column to the typed output, or a new type of value, such as u16, is added.

> > My intention was to add descriptions for all fields. But in many case
> > the field name seemed to be sufficiently descriptive or at least I couldn't
> > think of a better description. And in such cases I omitted the description
> > to avoid being repetitive.
> 
> OK that's a good point. So we can possibly have a first implementation reusing
> the field name everywhere, and later make these descriptions mandatory in the
> code for new fields so that the output description becomes more readable.
> 
> > I do not feel strongly about the descriptions. I'm happy to remove some or
> > all of them if they are deemed unnecessary or otherwise undesirable; to add
> > them to every field for consistency; or something in between.
> 
> I think dumping only known descriptions and falling back to the name (or
> simply suggesting that the consumer just uses the same when there's no desc)
> sounds reasonable to me for now.
> 
> > > Also, do you have an idea about the verbosity of the dump here ? For
> > > example let's say you have 100 listeners with 4 servers each (which is
> > > an average-sized config). I'm just looking for a rough order of magnitude,
> > > i.e. closer to 10-100k or to 1-10M. The typed output is already quite heavy
> > > for large configs so it should not be a big deal, but it's something we
> > > have to keep in mind.
> > 
> > I don't think the type, description, etc... should be included in such
> > output as they can be supplied by the schema out-of-band. But the field
> > name and values along with syntactic elements (brackets, quotes, etc...) do
> > need to be included.
> 
> OK.
> 
> > I can try and come up with an estimate if it is
> > important but my guess is the result would be several times the size of the
> > typed output (mainly owing to the size of the field names in the output).
> 
> No, don't worry, this rough estimate is enough.
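Simon's guess can be sanity-checked with a back-of-envelope calculation. Only the 100-proxies-with-4-servers shape comes from the thread; every other constant below is an assumption made for illustration.

```python
# Back-of-envelope JSON dump size for the config Willy describes:
# 100 proxies ("listeners") with 4 servers each. The field count and
# per-field byte cost are assumptions, not measured values.
proxies = 100
servers = proxies * 4
objects = proxies + servers          # ignore per-bind listener lines for simplicity

fields_per_object = 80               # assumed: roughly the CSV column count
bytes_per_field = 100                # assumed: field names + JSON syntax dominate

total_bytes = objects * fields_per_object * bytes_per_field
print(f"~{total_bytes / 1_000_000:.1f} MB")  # ~4.0 MB
```

Under these assumptions the dump lands in the 1-10M bracket, consistent with Simon's "several times the size of the typed output" estimate.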

-- 
Simon Horman  si...@horms.nl
Horms Solutions BV  www.horms.nl
Parnassusweg 819, 1082 LZ Amsterdam, Netherlands
Tel: +31 (0)20 800 6155    Skype: horms7



Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Willy Tarreau
Hi Simon,

On Mon, Nov 14, 2016 at 09:09:21AM +0100, Simon Horman wrote:
> I took a first pass at defining a schema.
> 
> * The schema follows what is described on json-schema.org (or at least
>   tries to). Is this a suitable approach?

I'll let others respond as I have no idea since I never need nor use JSON :-)

> * The schema only covers "show info" and "show stat" and the fields
>   are based on the typed output variants of those commands.
>   This leads me to several questions:
>   - Is this field selection desirable? It seems to make sense to me
> as presumably the intention of the JSON output is for it to
> be machine readable.

Yes in my opinion it's the goal. And these are the two only parts that
were converted to typed output for this reason.

>   - Is such an approach appropriate for other show commands?

At the moment I don't think so because the other ones are more related
to state management than statistics.

>   - And more generally, which other show commands are desired to
> support output in JSON (in the near term)?

I can't think of any right now.

However I have a question below :

> "id": {
> "description": "Unique identifyier of object within 
> proxy",
> "type": "integer",
> "minimum": 0
> },

Sometimes a description like above appears in your example, is it just
for a few fields or do you intend to describe all of them ? I'm asking
because we don't have such descriptions right now, and while I won't
deny that forcing contributors to add one when adding new stats could be
reasonable (it's like doc), I fear that it would significantly inflate
the output.

Also, do you have an idea about the verbosity of the dump here ? For
example let's say you have 100 listeners with 4 servers each (which is
an average-sized config). I'm just looking for a rough order of magnitude,
i.e. closer to 10-100k or to 1-10M. The typed output is already quite heavy
for large configs so it should not be a big deal, but it's something we
have to keep in mind.

Oh BTW just to let you know, I'm working on a painful bug and possibly a
small regression which will force me to revert some recent fixes, so you
may still have a bit of time left :-)

Thanks,
Willy



Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Simon Horman
Hi Willy, Hi All,

On Thu, Nov 10, 2016 at 04:52:56PM +0100, Willy Tarreau wrote:
> Hi Simon!
> 
> On Thu, Nov 10, 2016 at 04:27:15PM +0100, Simon Horman wrote:
> > My preference is to take things calmly as TBH I am only just getting
> > started on this and I think the schema could take a little time to get
> > a consensus on.
> 
> I totally agree with you. I think the most difficult thing is not to
> run over a few arrays and dump them but manage to make everyone agree
> on the schema. And that will take more than a few days I guess. Anyway
> I'm fine with being proven wrong :-)

I took a first pass at defining a schema.

* The schema follows what is described on json-schema.org (or at least
  tries to). Is this a suitable approach?
* The schema only covers "show info" and "show stat" and the fields
  are based on the typed output variants of those commands.
  This leads me to several questions:
  - Is this field selection desirable? It seems to make sense to me
as presumably the intention of the JSON output is for it to
be machine readable.
  - Is such an approach appropriate for other show commands?
  - And more generally, which other show commands are desired to
support output in JSON (in the near term)?

{
"$schema": "http://json-schema.org/draft-04/schema#;,
"oneOf": [
{
"title": "Info",
"description": "Info about HAProxy status",
"type": "array",
"items": {
"properties": {
"title": "Info Item",
"type": "object",
"field": { "$ref": "#/definitions/field" },
"processNum": { "$ref": "#/definitions/processNum" },
"tags": { "$ref": "#/definitions/tags" },
"value": { "$ref": "#/definitions/typedValue" }
},
"required": ["field", "processNum", "tags", "value"]
}
},
{
"title": "Stat",
"description": "HAProxy statistics",
"type": "array",
"items": {
"title": "Info Item",
"type": "object",
"properties": {
"objType": {
"enum": ["F", // Frontend
 "B", // Backend
 "L", // Listener
 "S"  // Server
]
},
"proxyId": {
"type": "integer",
"minimum": 0
},
"id": {
"description": "Unique identifyier of object within 
proxy",
"type": "integer",
"minimum": 0
},
"field": { "$ref": "#/definitions/field" },
"processNum": { "$ref": "#/definitions/processNum" },
"tags": { "$ref": "#/definitions/tags" },
"typedValue": { "$ref": "#/definitions/typedValue" }
},
"required": ["objType", "proxyId", "id", "field", "processNum",
 "tags", "value"]
}
}
],
"definitions": {
"field": {
"type": "object",
"pos": {
"description": "Position of field",
"type": "integer",
"minimum": 0
},
"name": {
"description": "Name of field",
"type": "string"
},
"required": ["pos", "name"]
},
"processNum": {
"description": "Relative process number",
"type": "integer",
"minimum": 1
},
"tags": {
"type": "object",
"origin": {
"description": "Origin value was extracted from",
"type": "string",
"enum": ["M", // Metric
 "S", // Status
 "K", // Sorting Key
 "C", // From Configuration
 "P"  // From Product
]
},
"nature": {
"description": "Nature of information carried by field",
"type": "string",
"enum": ["A", // Age since last event
 "a", // Averaged value
 "C", // Cumulative counter
 "D", // Duration for a status
 "G", // Gague - measure at one instant
 "L", // Limit
 "M", // Maximum
 "m", // Minimum
 "N", // Name
 "O", // Free text output
 "R", // Event rate - measure at one instant
 "T"  // Date or time
]
},