Re: new primes in haproxy after logjam

2015-06-04 Thread Aleksandar Lazic

Hi.

Am 04-06-2015 23:29, schrieb Emmanuel Thomé:

On Thu, Jun 04, 2015 at 05:54:51PM +0200, Willy Tarreau wrote:

I simply used openssl dhparam size as suggested, and am trusting
openssl to provide something reasonably safe since this is how every user
builds their own dhparam when they don't want to use the initial one.

I have no idea how openssl does it internally, I'm not a cryptanalyst,
just a user and I have to trust openssl not to fail on me.


openssl dhparam size can be assumed to do its job reasonably well. The
only problem is that with the default primes you are in effect a third
party generating the prime, and you cannot provide a certificate that the
prime you've put as default was indeed produced by this mechanism.


 A paranoid user would believe that it has been generated by
 (say) NSA, which convinced you to claim that it's random material

Yes but such paranoid users also accuse everyone of much funnier things
so I don't care much about what they believe.


Fair enough. I just point you at the relevant information, you're free to
do whichever way seems most appropriate to you. I agree that the paranoid
user would want to generate his own parameters anyway.


Because the generation takes some time, I have created a cronjob which
does this every day at 2.


It's nothing special and really straightforward, but it solves the problem.

# cat /root/regenerate_dh_files.sh
#!/bin/bash

cd /tmp

openssl dhparam -out dh_512.pem 512 && mv dh_512.pem /etc/ssl/dh_512.pem
openssl dhparam -out dh_1024.pem 1024 && cp dh_1024.pem /etc/ssl/dh_1024.pem && mv dh_1024.pem /etc/postfix/dh_1024.pem
openssl dhparam -out dh_2048.pem 2048 && mv dh_2048.pem /etc/ssl/dh_2048.pem

#

Then a restart/reload and everything is in place.
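The daily job described above would correspond to a crontab entry along these lines (the 02:00 time and the script path are taken from the message; adjust to your environment):

```
# m h dom mon dow  command
0 2 * * * /root/regenerate_dh_files.sh
```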

BR Aleks


Best,

E.

P.S: openssl dhparams takes a while because prime testing is slow. At
least, algorithmically speaking, this is the difficult point.




Re: new primes in haproxy after logjam

2015-06-04 Thread Willy Tarreau
Hi Shawn,

On Thu, Jun 04, 2015 at 03:24:19PM -0600, Shawn Heisey wrote:
 On 6/4/2015 9:54 AM, Willy Tarreau wrote:
  I simply used openssl dhparam size as suggested, and am trusting
  openssl to provide something reasonably safe since this is how every user
  builds their own dhparam when they don't want to use the initial one.
 
 I've been trying to read up on this vulnerability and how to prevent it.
  I admit that I'm having a hard time grasping everything.

Welcome :-)  That said, Rémi has provided a very good overview in another
thread last week.

 I decided to look for HOWTO information on mitigating the problem
 instead of trying to understand it.  I found a preferred cipher list to
 use with haproxy, and the rest of the info I *think* can be summarized
 as create a new dhparam of 2048 bits with openssl and append it to each
 PEM certificate file.
 
 https://weakdh.org/sysadmin.html#haproxy
 
 Is that right?  If not, what exactly should I be doing?

Yes that's it. If I understood well Rémi's explanation, DHE is not supposed
to be used a lot since most browsers support ECDHE, but a few clients will
have to use DHE. It's possible to disable DHE, but then those clients are
worse off, since they get no perfect forward secrecy at all.

Also, if for you 2048 bits induce too high a CPU usage, you can fall back
to 1024 with a dhparam that you generate yourself, but it's not recommended
for the long term.
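The weakdh.org mitigation referenced in this thread boils down to something like the following commands (the PEM path is a placeholder for your own combined cert/key file that the bind line points at):

```
# generate a fresh 2048-bit DH group (this can take a while)
openssl dhparam -out dhparams.pem 2048
# append it to the PEM file that haproxy loads
cat dhparams.pem >> /etc/haproxy/certs/example.pem
```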

Regards,
Willy





Re: new primes in haproxy after logjam

2015-06-04 Thread Willy Tarreau
On Thu, Jun 04, 2015 at 11:29:00PM +0200, Emmanuel Thomé wrote:
 On Thu, Jun 04, 2015 at 05:54:51PM +0200, Willy Tarreau wrote:
  I simply used openssl dhparam size as suggested, and am trusting
  openssl to provide something reasonably safe since this is how every user
  builds their own dhparam when they don't want to use the initial one.
  
  I have no idea how openssl does it internally, I'm not a cryptanalyst,
  just a user and I have to trust openssl not to fail on me.
 
 openssl dhparam size can be assumed to do its job reasonably well. The
 only problem is that with the default primes you are in effect a third
 party generating the prime, and you cannot provide a certificate that the
 prime you've put as default was indeed produced by this mechanism.

Absolutely, that's the limit of this model. But given that oakley was
supposedly properly generated and is now considered broken, I guess
the situation is not worse.

As I said, my take on this one is I checked that my system looked OK
and that I was alone on it, I generated the params and checked in
parallel that there was enough entropy available. That's the best I
can do. People can of course think I'm lying and I carefully crafted
the string. Just like I could imagine that openssl doesn't really do
what it claims it does. That's the principle of using libs or software,
you have to trust others for things you cannot do yourself. When you
know how to do things yourself, you can limit your dependency on others.

   A paranoid user would believe that it has been generated by
   (say) NSA, which convinced you to claim that it's random material
  
  Yes but such paranoid users also accuse everyone of much funnier things
  so I don't care much about what they believe.
 
 Fair enough. I just point you at the relevant information, you're free to
 do whichever way seems most appropriate to you. I agree that the paranoid
 user would want to generate his own parameters anyway.

Yep.

 P.S: openssl dhparams takes a while because prime testing is slow. At
 least, algorithmically speaking, this is the difficult point.

That was my understanding as well, explaining why sometimes it's fast and
sometimes very slow.

Thanks,
Willy












Re: RFC: appsession removal in 1.6 ?

2015-06-04 Thread Willy Tarreau
Hi Aleks,

On Fri, Jun 05, 2015 at 12:15:42AM +0200, Aleksandar Lazic wrote:
(...)
 I'm proud that this part survived for such a long time.

You can!

 As before with the hash table handling I have the product point of view
 and if there is a better solution for the same use case I'm fine with the
 decision to remove legacy config options.

 We talked in the past to be able to share the session across n processes 
 or servers and the peers framework offers this now.

Yes, that's the idea here.

 I have identified that it can match a cookie name prefix (not sure
 anyone has ever needed this after 2005 or so), and the ability to
 match the cookie name in the path split on semi-colons (something
 we could easily do by providing a new fetch method).
 
 It would be nice if we could add a replacement example to the
 documentation. It could also be an example in the tests or in the
 examples directory for this use case.
 
 Since I don't use haproxy as intensively as before, here is a first
 shot at a pseudo config example.
 
 
 appsession cookie len length timeout holdtime [request-learn] 
 [prefix] [mode path-parameters|query-string]
 
 
 I think this could be the statements which can be used to replace the 
 appsession.
 
 https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stick%20store-response
 https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stick%20store-request
 https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stick-table%20type
 https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.6-cookie
 https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.6-set-cookie
 https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.6-path
 https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#stick%20match
 
 
 frontend session_sticky
 
   stick on path_sub OR url_sub ??? (JSESSIONID,;)
   stick on cookie(JSESSIONID)
 
 backend http
 ...
 
   # init the appsession table
   stick-table type string len 52 size 10m expire 3h ...
   appsession  ^similar^^^timeout 3h
 
   # save the Set-Cookie or Set-Cookie2 sessionID from the server
   stick store-response cookie(JSESSIONID) 
   appsession   ^ first param
 
   # save the URL param sessionID from the server
   stick store-response path_sub OR url_sub ??? (JSESSIONID,;)
   appsession [mode path-parameters|query-string ]
 
   # save the URL param sessionID from the server
   stick store-request cookie(JSESSIONID)
   stick store-request path_sub OR url_sub ??? (JSESSIONID,;)
   appsession [request-learn]

That's a good idea. We could indeed keep the keyword in the doc in 1.6
with such mappings.

 What could be the solution for this arguments?
  [prefix]

No solution at the moment, it will require some modifications to the
cookie sample fetch function to match against a prefix. But I'm not
too much worried, as at the moment I'm not sure it's needed anymore
at all. It used to be needed for ASPSESSIONXXX. If it was still in
use, I suspect we would also have received requests to support this
for the regular cookie prefix mode.

Thanks,
Willy




Re: new primes in haproxy after logjam

2015-06-04 Thread Emmanuel Thomé
On Thu, Jun 04, 2015 at 05:54:51PM +0200, Willy Tarreau wrote:
 I simply used openssl dhparam size as suggested, and am trusting
 openssl to provide something reasonably safe since this is how every user
 builds their own dhparam when they don't want to use the initial one.
 
 I have no idea how openssl does it internally, I'm not a cryptanalyst,
 just a user and I have to trust openssl not to fail on me.

openssl dhparam size can be assumed to do its job reasonably well. The
only problem is that with the default primes you are in effect a third
party generating the prime, and you cannot provide a certificate that the
prime you've put as default was indeed produced by this mechanism.

  A paranoid user would believe that it has been generated by
  (say) NSA, which convinced you to claim that it's random material
 
 Yes but such paranoid users also accuse everyone of much funnier things
 so I don't care much about what they believe.

Fair enough. I just point you at the relevant information, you're free to
do whichever way seems most appropriate to you. I agree that the paranoid
user would want to generate his own parameters anyway.

Best,

E.

P.S: openssl dhparams takes a while because prime testing is slow. At
least, algorithmically speaking, this is the difficult point.



Re: RFC: appsession removal in 1.6 ?

2015-06-04 Thread Aleksandar Lazic

Hi Willy.

Am 04-06-2015 17:42, schrieb Willy Tarreau:

Hi all,

while discussing with Emeric about what is changing in 1.6, we
were speaking about appsession which doesn't make much sense
anymore given that it is not replicated between nodes and almost
all it does can be done using stick tables.

So the question is : does anyone have a strong objection against it
being removed in 1.6 ? (don't cry too much Aleks, your first contrib
used to be useful for more than 10 years). And if anyone is currently
relying on it, is there anything there that you cannot do using stick
tables ?


Well, yes, 10 years is a long time ;-)
HAProxy is much more flexible now than it was at the point when we added
the appsession feature.


I'm proud that this part survived for such a long time.
As before with the hash table handling I have the product point of view
and if there is a better solution for the same use case I'm fine with the
decision to remove legacy config options.


We talked in the past to be able to share the session across n processes 
or servers and the peers framework offers this now.



I have identified that it can match a cookie name prefix (not sure
anyone has ever needed this after 2005 or so), and the ability to
match the cookie name in the path split on semi-colons (something
we could easily do by providing a new fetch method).


It would be nice if we could add a replacement example to the
documentation. It could also be an example in the tests or in the
examples directory for this use case.


Since I don't use haproxy as intensively as before, here is a first
shot at a pseudo config example.



appsession cookie len length timeout holdtime [request-learn] 
[prefix] [mode path-parameters|query-string]



I think this could be the statements which can be used to replace the 
appsession.


https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stick%20store-response
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stick%20store-request
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-stick-table%20type
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.6-cookie
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.6-set-cookie
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.6-path
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#stick%20match


frontend session_sticky

  stick on path_sub OR url_sub ??? (JSESSIONID,;)
  stick on cookie(JSESSIONID)

backend http
...

  # init the appsession table
  stick-table type string len 52 size 10m expire 3h ...
  appsession  ^similar^^^timeout 3h

  # save the Set-Cookie or Set-Cookie2 sessionID from the server
  stick store-response cookie(JSESSIONID) 
  appsession   ^ first param

  # save the URL param sessionID from the server
  stick store-response path_sub OR url_sub ??? (JSESSIONID,;)
  appsession [mode path-parameters|query-string ]

  # save the URL param sessionID from the server
  stick store-request cookie(JSESSIONID)
  stick store-request path_sub OR url_sub ??? (JSESSIONID,;)
  appsession [request-learn]
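For the cookie-based part of the mapping above, a minimal concrete sketch using the 1.5 sample fetches could look like this (untested; JSESSIONID and the sizes are just carried over from the pseudo config):

```
backend http
  # replaces: appsession JSESSIONID len 52 timeout 3h request-learn
  stick-table type string len 52 size 10m expire 3h
  stick store-response res.cook(JSESSIONID)
  stick match req.cook(JSESSIONID)
```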


What could be the solution for this arguments?
 [prefix]


I'm interested in any feedback on this.

Thanks,
Willy


BR Aleks



Re: Limiting concurrent range connections

2015-06-04 Thread Baptiste
Could you give more information about the issue: haproxy version,
compilation procedure, some gdb output, etc.?

Baptiste

On Thu, Jun 4, 2015 at 1:43 PM, Sachin Shetty sshe...@egnyte.com wrote:
 I did try it, it needs 1.6.dev1 and that version segfaults as soon as the
 request is made

 (egnyte_server)egnyte@egnyte-laptop:~/haproxy$ ~/haproxy/sbin/haproxy -f
 conf/haproxy.conf -d
 [WARNING] 154/044207 (24974) : Setting tune.ssl.default-dh-param to 1024
 by default, if your workload permits it you should set it to at least
 2048. Please set a value >= 1024 to make this warning disappear.
 Note: setting global.maxconn to 2000.
 Available polling systems :
   epoll : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result FAILED
 Total: 3 (2 usable), will use epoll.
 Using epoll() as the polling mechanism.
 :haproxy_l2.accept(0005)=0009 from [192.168.56.102:50119]
 Segmentation fault



 Thanks
 Sachin


 On 6/4/15 3:45 PM, Baptiste bed...@gmail.com wrote:

Hi sachin,

Look my conf, I turned your tcp-request content statement into
http-request.

Baptiste

On Thu, Jun 4, 2015 at 12:05 PM, Sachin Shetty sshe...@egnyte.com wrote:
 Tried it, I don't see the table populating at all.

 stick-table type string size  1M expire 10m store conn_cur
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 #tcp-request content track-sc1 base32 if is_range is_path_throttled
 http-request set-header X-track %[url]
 tcp-request content track-sc1 req.hdr(X-track) if is_range
 is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

 (egnyte_server)egnyte@egnyte-laptop:~$ echo show table haproxy_l2 |
 socat /tmp/haproxy.sock stdio
 # table: haproxy_l2, type: string, size:1048576, used:0

 (egnyte_server)egnyte@egnyte-laptop:~$






 On 6/3/15 8:36 PM, Baptiste bed...@gmail.com wrote:

Yes, the url sample copies whole URL as sent by the client.
Simply give it a try on a staging server and let us know the status.

Baptiste

On Wed, Jun 3, 2015 at 3:19 PM, Sachin Shetty sshe...@egnyte.com
wrote:
 Thanks Baptiste - Will http-request set-header X-track %[url] help
me
 track URL with query parameters as well?

 On 6/3/15 6:36 PM, Baptiste bed...@gmail.com wrote:

On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com
wrote:
 Hi,

 I am trying to write some throttles that would limit concurrent
connections
 for Range requests + specific urls. For example I want to allow
only 2
 concurrent range requests downloading a file
 /public-api/v1/fs-content-download

 I have a working rule:

 stick-table type string size  1M expire 10m store conn_cur
 tcp-request inspect-delay 5s
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 tcp-request content track-sc1 base32 if is_range is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range
is_path_throttled

 Just wanted to see if there is a better way of doing this? Is this
efficient
 enough.

 I need to include the query string as well in my tracker, but I
could
not
 figure that out.

 Thanks
 Sachin


Hi Sachin,

I would do it like this:

 stick-table type string size  1M expire 10m store conn_cur
 tcp-request inspect-delay 5s
 tcp-request accept if HTTP
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 http-request set-header X-track %[url]
 http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

There might be some typo, but you get the idea.

Baptiste









Re: new primes in haproxy after logjam

2015-06-04 Thread Shawn Heisey
On 6/4/2015 9:54 AM, Willy Tarreau wrote:
 I simply used openssl dhparam size as suggested, and am trusting
 openssl to provide something reasonably safe since this is how every user
 builds their own dhparam when they don't want to use the initial one.

I've been trying to read up on this vulnerability and how to prevent it.
 I admit that I'm having a hard time grasping everything.

I decided to look for HOWTO information on mitigating the problem
instead of trying to understand it.  I found a preferred cipher list to
use with haproxy, and the rest of the info I *think* can be summarized
as create a new dhparam of 2048 bits with openssl and append it to each
PEM certificate file.

https://weakdh.org/sysadmin.html#haproxy

Is that right?  If not, what exactly should I be doing?

Thanks,
Shawn





Re: new primes in haproxy after logjam

2015-06-04 Thread Willy Tarreau
Hi Emmanuel,

On Thu, Jun 04, 2015 at 05:07:42PM +0200, Emmanuel Thomé wrote:
 Hi,
 
 I heard that following logjam (which I'm a coauthor of), haproxy has
 changed its default set of primes.
 
 That's a good start. However you give no information as to *how* you
 generated the primes (correct me if I'm mistaken -- I just didn't see
 such a thing in the commit log, but haven't searched further). This is a
 problem. The recommended practice is to generate primes in a
 reproducible fashion.

I simply used openssl dhparam size as suggested, and am trusting
openssl to provide something reasonably safe since this is how every user
builds their own dhparam when they don't want to use the initial one.

 Example 1: the Oakley primes are generated as follows (IIRC -- I haven't
 checked back): p = 2^768-2^704 + 2^64-1 + 2^702*(floor(pi)+i) is a safe
 prime, with i smallest such that this holds (safe prime means (p-1)/2
 prime too).
 
 Fictitious example 2 to generate a 1024-bit prime: take an integer seed
 i, and concatenate SHA256(i)||SHA256(i+1)||SHA256(i+2)||SHA256(i+3) such
 that the 1024-bit concatenation is a safe prime (e.g pick smallest such
 i).

I have no idea how openssl does it internally, I'm not a cryptanalyst,
just a user and I have to trust openssl not to fail on me.

 There's also a prime generation process in FIPS 186-3.
 
 Why does this matter ? Because the cost of attacking DLP mod p is not
 uniform across all primes p (even safe primes). There's a class of
 special primes for which the attack is easier. Easy-to-spot primes in
 this class are those of the form 2^n-c for instance. But the class,
 despite being completely negligible in weight, is somewhat broader. There
 is a way to generate a prime within this class (and know the trapdoor --
 in fact you generate the trapdoor first), without someone being
 able to see the quirk (even significant computing power would not detect
 it).

Isn't this the reason it takes ages for openssl to emit one set ?

 Now you say: this bitstring is random, and it is prime.

Oh no I'm not saying this at all, and I even have no way to verify this.
I'm just applying the method that is recommended for such a use and that
people who understand this area consider safe for use.

 Should I trust you ?

Absolutely not. That question was brought before the dhparams were
generated and the basic idea was that if people trust me for the code
I merge, they don't take extra risks for a random present in the code.
I mean if I have bad intents and am skilled enough to craft a special
one, I can as well be smart enough to insert subtle bugs in the code
that will have the same effect.

 You should first convince me that it is really an innocent
 bitstring.

No, that's much better, you can simply force yours in each of your certs,
that's what a number of people do and what they did when oakley2 was
announced as unsafe. In short instead of having a choice between something
known broken and doing yours, now you have the choice between something
you don't know whether it's broken or not and yours. If you trust me not
to cheat on you, you can use the new one. If you don't trust me (and you
probably shouldn't since we don't know each other), you'd rather build
yours.

 A paranoid user would believe that it has been generated by
 (say) NSA, which convinced you to claim that it's random material

Yes but such paranoid users also accuse everyone of much funnier things
so I don't care much about what they believe.

 -- the secret goal being to foster the use of weak primes.

The goal is to avoid using weak primes and at the same time not to
incite clueless users (like me) to deploy them once then forget them
even when they're cracked. Advanced users will generate theirs and
will care about them because they follow such news. Mind you that if
we hadn't had oakley in haproxy, I wouldn't have heard about logjam
and would never have even known that any of my certs was relying on
it, so I would still be using it years after the disclosure of its
weakness.

My take here is that if haproxy (as a community project) can help
*me* stay safe enough, it surely can help other users like me.

Best regards,
Willy




RFC: appsession removal in 1.6 ?

2015-06-04 Thread Willy Tarreau
Hi all,

while discussing with Emeric about what is changing in 1.6, we
were speaking about appsession which doesn't make much sense
anymore given that it is not replicated between nodes and almost
all it does can be done using stick tables.

So the question is : does anyone have a strong objection against it
being removed in 1.6 ? (don't cry too much Aleks, your first contrib
used to be useful for more than 10 years). And if anyone is currently
relying on it, is there anything there that you cannot do using stick
tables ?

I have identified that it can match a cookie name prefix (not sure
anyone has ever needed this after 2005 or so), and the ability to
match the cookie name in the path split on semi-colons (something
we could easily do by providing a new fetch method).

I'm interested in any feedback on this.

Thanks,
Willy








RE: Syslog messages get truncated at 1kb (syslog server config is ok)

2015-06-04 Thread Damiano Giorgi
Hi Lukas, thank you for the time ! 

I compiled haproxy with DEFINE=-DREQURI_LEN=8192 and everything seems to be 
fine now, recompiling is not a problem.
Tomorrow I'll deploy the changes from staging to production and let you know, 
we have around 1200 queries per second and the process takes only 80 megabytes, 
so we can take the risk :)

Thank you again.

Damiano

Hi Damiano,


 Dear all, an update: logging using sockets doesn't change anything.
 After some grepping the code and tinkering I found that changing 
 REQURI_LEN in include/common/defaults.h does the job

Thanks for your analysis.



 the strange thing is that there's also #define MAX_SYSLOG_LEN 1024 in 
 the same file but it doesn't modify logging behaviour.

That's because that's just a default, overwritten by your len configuration.
Syslog length is not the problem, URI length is.



 I don't know the side effect of this: maybe increased memory usage for 
 each request ? Do I have to file a bug ?

Yes, it will definitely increase memory usage.

Reading the following thread, I think this is expected behavior:
http://thread.gmane.org/gmane.comp.web.haproxy/3679/focus=3689


Workaround is compile with DEFINE=-DREQURI_LEN=2048 (supported since
1.5-dev19) - at least you avoid source code patches, however you still have to 
recompile.

I guess a runtime configuration parameter would be nice.
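Such a build would look roughly like this (the TARGET value is an assumption for a typical 1.5-era Linux build; reuse whatever your existing build command already passes):

```
make TARGET=linux2628 DEFINE=-DREQURI_LEN=8192
```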



Regards,

Lukas

  



RE: Syslog messages get truncated at 1kb (syslog server config is ok)

2015-06-04 Thread Lukas Tribus
 Hi Lukas, thank you for the time !

 I compiled haproxy with DEFINE=-DREQURI_LEN=8192 and everything
 seems to be fine now, recompiling is not a problem.
 Tomorrow I'll deploy the changes from staging to production and let
 you know, we have around 1200 queries per second and the process takes
 only 80 megabytes, so we can take the risk :)

I would monitor memory usage anyway ... just in case.



Regards,

Lukas

  


new primes in haproxy after logjam

2015-06-04 Thread Emmanuel Thomé
Hi,

I heard that following logjam (which I'm a coauthor of), haproxy has
changed its default set of primes.

That's a good start. However you give no information as to *how* you
generated the primes (correct me if I'm mistaken -- I just didn't see
such a thing in the commit log, but haven't searched further). This is a
problem. The recommended practice is to generate primes in a reproducible
fashion.

Example 1: the Oakley primes are generated as follows (IIRC -- I haven't
checked back): p = 2^768-2^704 + 2^64-1 + 2^702*(floor(pi)+i) is a safe
prime, with i smallest such that this holds (safe prime means (p-1)/2
prime too).

Fictitious example 2 to generate a 1024-bit prime: take an integer seed
i, and concatenate SHA256(i)||SHA256(i+1)||SHA256(i+2)||SHA256(i+3) such
that the 1024-bit concatenation is a safe prime (e.g pick smallest such
i).

There's also a prime generation process in FIPS 186-3.

Why does this matter ? Because the cost of attacking DLP mod p is not
uniform across all primes p (even safe primes). There's a class of
special primes for which the attack is easier. Easy-to-spot primes in
this class are those of the form 2^n-c for instance. But the class,
despite being completely negligible in weight, is somewhat broader. There
is a way to generate a prime within this class (and know the trapdoor --
in fact you generate the trapdoor first), without someone being
able to see the quirk (even significant computing power would not detect
it).

Now you say: this bitstring is random, and it is prime. Should I trust
you ? You should first convince me that it is really an innocent
bitstring. A paranoid user would believe that it has been generated by
(say) NSA, which convinced you to claim that it's random material -- the
secret goal being to foster the use of weak primes.

Hope this helps,

E.



Re: Dynamic backend selection using maps

2015-06-04 Thread David Reuss
Baptiste, thanks -- seems to work, but is it supposed to work for both
blah.foo.com, and foo.com ? .. Because that doesn't seem to be the case, so
i need both '.foo.com', and 'foo.com', in my map.

Is that correct?

Also a quick question regarding acl's and fetching -- is there any way this
could be written with an acl first, and then a use_backend declaration?

Something like:

acl is_worker hdr_dom(Host),map(/etc/haproxy/worker.map),lower
use_backend ??? if is_worker

This is just to get a better understanding of how acl's work with maps, and
how to perform lookups, and storing them for later use.



On Wed, Jun 3, 2015 at 5:03 PM, Baptiste bed...@gmail.com wrote:

 hi Jim,

 hdr_end could do the trick if you include the '.' in the matching string.

 Baptiste


 On Wed, Jun 3, 2015 at 4:55 PM, Jim Gronowski jgronow...@ditronics.com
 wrote:
  I’m not very familiar with the map function, but does hdr_end(host) work
 in
  this context?
 
 
 
  If so, in order to only match *.foo.com and not blahfoo.com, you’d need
 to
  include the dot in your map – ‘.foo.com’ instead of ‘foo.com’.
 
 
 
 
 
  From: David Reuss [mailto:shuffle...@gmail.com]
  Sent: Wednesday, June 03, 2015 05:23
  To: haproxy@formilux.org
  Subject: Dynamic backend selection using maps
 
 
 
  Hello,
 
 
 
  I have this use_backend declaration:
 
 
 
  use_backend
  %[req.hdr(host),lower,map_dom(/etc/haproxy/worker.map,b_nodes_default)]
 
 
 
  Which seems to work wonderfully, but say i have foo.com in my map, it
 will
  match foo.com.whatever.com, and ideally i'd like to only match if the
 domain
  ends with my value (foo.com), and also, it should NOT match blahfoo.com
 
 
 
  How would i achieve that?
 
 
 
  Ditronics, LLC email disclaimer:
  This communication, including attachments, is intended only for the
  exclusive use of addressee and may contain proprietary, confidential, or
  privileged information. Any use, review, duplication, disclosure,
  dissemination, or distribution is strictly prohibited. If you were not
 the
  intended recipient, you have received this communication in error. Please
  notify sender immediately by return e-mail, delete this communication,
 and
  destroy any copies.



Re: Dynamic backend selection using maps

2015-06-04 Thread David Reuss
Never mind it not working -- I had map_end as the match at that point. It
works wonderfully, but the second question is still up for grabs :)

On Thu, Jun 4, 2015 at 9:28 AM, David Reuss shuffle...@gmail.com wrote:

 Baptiste, thanks -- seems to work, but is it supposed to work for both
 blah.foo.com, and foo.com ? .. Because that doesn't seem to be the case,
 so i need both '.foo.com', and 'foo.com', in my map.

 Is that correct?

 Also a quick question regarding acl's and fetching -- is there any way
 this could be written with an acl first, and then a use_backend declaration?

 Something like:

 acl is_worker hdr_dom(Host),map(/etc/haproxy/worker.map),lower
 use_backend ??? if is_worker

 This is just to get a better understanding of how acl's work with maps,
 and how to perform lookups, and storing them for later use.



 On Wed, Jun 3, 2015 at 5:03 PM, Baptiste bed...@gmail.com wrote:

 hi Jim,

 hdr_end could do the trick if you include the '.' in the matching string.

 Baptiste


 On Wed, Jun 3, 2015 at 4:55 PM, Jim Gronowski jgronow...@ditronics.com
 wrote:
  I’m not very familiar with the map function, but does hdr_end(host)
 work in
  this context?
 
 
 
  If so, in order to only match *.foo.com and not blahfoo.com, you’d
 need to
  include the dot in your map – ‘.foo.com’ instead of ‘foo.com’.
 
 
 
 
 
  From: David Reuss [mailto:shuffle...@gmail.com]
  Sent: Wednesday, June 03, 2015 05:23
  To: haproxy@formilux.org
  Subject: Dynamic backend selection using maps
 
 
 
  Hello,
 
 
 
  I have this use_backend declaration:
 
 
 
  use_backend
  %[req.hdr(host),lower,map_dom(/etc/haproxy/worker.map,b_nodes_default)]
 
 
 
  Which seems to work wonderfully, but say i have foo.com in my map,
 it will
  match foo.com.whatever.com, and ideally i'd like to only match if the
 domain
  ends with my value (foo.com), and also, it should NOT match blahfoo.com
 
 
 
  How would i achieve that?
 
 
 
  Ditronics, LLC email disclaimer:
  This communication, including attachments, is intended only for the
  exclusive use of addressee and may contain proprietary, confidential, or
  privileged information. Any use, review, duplication, disclosure,
  dissemination, or distribution is strictly prohibited. If you were not
 the
  intended recipient, you have received this communication in error.
 Please
  notify sender immediately by return e-mail, delete this communication,
 and
  destroy any copies.





RE: Syslog messages get truncated at 1kb (syslog server config is ok)

2015-06-04 Thread Damiano Giorgi
Also tried with 
log localhost len 8192 local0 

The message gets truncated also on the loopback (only one packet is sent).

I'll try with unix domain sockets and let you know

Damiano


-Original Message-
From: Damiano Giorgi [mailto:damiano.gio...@trovaprezzi.it] 
Sent: mercoledì 3 giugno 2015 14:28
To: haproxy@formilux.org
Subject: RE: Syslog messages get truncated at 1kb (syslog server config is ok)

Hi Lukas (sorry for my quoting, I still haven't managed to get this software
to behave correctly).

 Hi Lukas, my mtu is set to 1500 and the message looks truncated.
 I am able to ping the server using that mtu

 root@lbha01:~# ping -s 1500 syslog

-s 1472 -M do is what you would use for this test. Instead, you are sending
ICMP requests of 1528 bytes without the DF bit, so they will get fragmented.
Anyway, it's unlikely that this is the problem.

Sorry, I forgot to set the DF flag and to adjust the size. I can confirm that
MTU is not a problem:

root@lbhasolr01:~# ping syslog -s 1472 -M do PING syslog.7pixel.local 
(10.1.0.150) 1472(1500) bytes of data.
1480 bytes from 10.1.0.150: icmp_req=1 ttl=63 time=0.385 ms


 this is my dump (tcpdump -X) (the message is truncated and I don't 
 see other packets flowing).

Ok, can you confirm that haproxy has been reloaded/restarted after adding the
len keyword to your logging configuration?

Yes, haproxy has been restarted after the change

 With the logger utility this line gets split into multiple packets

I'm not familiar with this utility. Can you elaborate whether this SENDS
packets to your syslog-ng or if it receives logs from haproxy?

Logger is part of the util-linux package
(ftp://ftp.kernel.org/pub/linux/utils/util-linux/); it sends syslog messages
(it's useful for logging in shell scripts). With this utility, log packets
are split into multiple parts. (Btw, the version in Debian 7 has a bug that
prevents sending to remote syslog servers via UDP; I had to compile it from
scratch to use it.)

Iirc, a syslog message must fit into a single packet.

I don't know; when I was searching the archives I found this about syslog
message size: http://marc.info/?l=haproxy&m=139169691604703&w=2

Damiano 




Regards,

Lukas

  




Re: HAProxy responding with NOSRV SC

2015-06-04 Thread Igor Cicimov
On Thu, Jun 4, 2015 at 12:21 PM, RAKESH P B pb.rakes...@gmail.com wrote:

 Hi All,

 I have a strange situation where requests to my HAProxy are returning with
 a 503 error. The HAProxy logs show a NOSRV error for POST requests from the
 application's REST service.

 api-https-in~ api-https-in/NOSRV -1/-1/-1/-1/40 503 1237 - - SC--
 15/0/0/0/0 0/0 POST /PATH HTTP/1.1


According to the docs the SC connection termination flags mean:

 SC   The server or an equipment between it and haproxy explicitly
refused
  the TCP connection (the proxy received a TCP RST or an ICMP
message
  in return). Under some circumstances, it can also be the network
  stack telling the proxy that the server is unreachable (eg: no
route,
  or no ARP response on local network). When this happens in HTTP
mode,
  the status code is likely a 502 or 503 here.

So if you are confident that you are looking at the same type of requests
and the same time period for both cases you are showing (with and without
HAP), then you should turn your attention to the networking side of things.
Make sure nothing is blocking the connections between HAP and the backends
(i.e. can you at least telnet to port 80 from HAP to the backend), confirm
that your health check HEAD /test.jsp HTTP/1.0 really works, confirm that
your backend understands and actually uses the X-Forwarded-Proto header,
confirm that your backend has capacity for 8096 simultaneous connections,
etc.
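One concrete thing to verify in the configuration quoted below (an observation by the editor, not advice from the thread): `option httpchk` only runs against servers that carry the `check` keyword, and the server line in the backend omits it, so the health check never actually executes and the server state is never updated. A sketch of the corrected backend (the address is a placeholder):

```
backend name1
    mode http
    option httpchk HEAD /test.jsp HTTP/1.0
    # 'check' is what enables the httpchk health check on this server
    server name 10.0.0.1:80 check
```

The frontend also defines no default_backend, so any request whose Host header fails the host_soap ACL gets a 503 with NOSRV, which matches the log line shown.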



 During this time, the backend server was confirmed up and was receiving
 traffic for GET requests from a web browser and also POST requests from the
 POSTMAN REST client.


  api-https-in~ name1/name 669/0/2/4/675 200 513 - -  2/2/0/1/0 0/0
 GET /PATH HTTP/1.1

  api-https-in~ name1/name 336/0/1/4/341 415 95 - -  2/2/0/1/0 0/0
 POST /PATH HTTP/1.1


 Here is my configuration file

 frontend http-in
 bind *:80
 redirect scheme https code 301 if !{ ssl_fc }
 maxconn 8096


 frontend api-https-in
 bind X.X.X.X:443 ssl crt PATH1
 reqadd X-Forwarded-Proto:\ https
 acl host_soap hdr_end(host) -i example.com
 use_backend name1 if host_soap
 acl secure dst_port eq 44



 backend name1

 mode http
 option httpchk  HEAD /test.jsp HTTP/1.0
 appsession JSESSIONID len 32 timeout 1800s
 server  name X.X.X.X:80




-- 
Igor Cicimov | DevOps


p. +61 (0) 433 078 728
e. ig...@encompasscorporation.com http://encompasscorporation.com/
w*.* encompasscorporation.com
a. Level 4, 65 York Street, Sydney 2000


Re: Limiting concurrent range connections

2015-06-04 Thread Baptiste
Hi sachin,

Have a look at my conf: I turned your tcp-request content statement into
http-request.
Baptiste
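The likely reason the table stayed empty in the quoted configuration (an inference by the editor, not spelled out in the thread): tcp-request content rules are evaluated before http-request rules, so the X-track header does not exist yet when track-sc1 samples it. Tracking at the http-request stage keeps the ordering right:

```
# http-request rules run top to bottom after HTTP parsing,
# so X-track is set before it is tracked
http-request set-header X-track %[url]
http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled
http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled
```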

On Thu, Jun 4, 2015 at 12:05 PM, Sachin Shetty sshe...@egnyte.com wrote:
 Tried it, I don't see the table populating at all.

 stick-table type string size  1M expire 10m store conn_cur
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 #tcp-request content track-sc1 base32 if is_range is_path_throttled
 http-request set-header X-track %[url]
 tcp-request content track-sc1 req.hdr(X-track) if is_range
 is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

 (egnyte_server)egnyte@egnyte-laptop:~$ echo show table haproxy_l2 |
 socat /tmp/haproxy.sock stdio
 # table: haproxy_l2, type: string, size:1048576, used:0

 (egnyte_server)egnyte@egnyte-laptop:~$






 On 6/3/15 8:36 PM, Baptiste bed...@gmail.com wrote:

Yes, the url sample copies whole URL as sent by the client.
Simply give it a try on a staging server and let us know the status.

Baptiste

On Wed, Jun 3, 2015 at 3:19 PM, Sachin Shetty sshe...@egnyte.com wrote:
 Thanks Baptiste - Will http-request set-header X-track %[url] help me
 track URL with query parameters as well?

 On 6/3/15 6:36 PM, Baptiste bed...@gmail.com wrote:

On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com
wrote:
 Hi,

 I am trying to write some throttles that would limit concurrent
connections
 for Range requests + specific urls. For example I want to allow only 2
 concurrent range requests downloading a file
 /public-api/v1/fs-content-download

 I have a working rule:

 stick-table type string size  1M expire 10m store conn_cur
 tcp-request inspect-delay 5s
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 tcp-request content track-sc1 base32 if is_range is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

 Just wanted to see if there is a better way of doing this? Is this
efficient
 enough.

 I need to include the query string as well in my tracker, but I could
not
 figure that out.

 Thanks
 Sachin


Hi Sachin,

I would do it like this:

 stick-table type string size  1M expire 10m store conn_cur
 tcp-request inspect-delay 5s
 tcp-request accept if HTTP
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 http-request set-header X-track %[url]
 http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

There might be some typo, but you get the idea.

Baptiste








RE: Syslog messages get truncated at 1kb (syslog server config is ok)

2015-06-04 Thread Damiano Giorgi
Dear all, an update: logging using sockets doesn't change anything.
After some grepping of the code and tinkering, I found that changing
REQURI_LEN in include/common/defaults.h does the job. The strange thing is
that there's also #define MAX_SYSLOG_LEN 1024 in the same file, but it
doesn't modify the logging behaviour.

I don't know the side effects of this: maybe increased memory usage for each
request? Do I have to file a bug?

Regards,
Damiano

-Original Message-
From: Damiano Giorgi [mailto:damiano.gio...@trovaprezzi.it] 
Sent: giovedì 4 giugno 2015 09:49
To: haproxy@formilux.org
Subject: RE: Syslog messages get truncated at 1kb (syslog server config is ok)

Also tried with 
log localhost len 8192 local0 

The message gets truncated also on the loopback (only one packet is sent).

I'll try with unix domain sockets and let you know

Damiano


-Original Message-
From: Damiano Giorgi [mailto:damiano.gio...@trovaprezzi.it]
Sent: mercoledì 3 giugno 2015 14:28
To: haproxy@formilux.org
Subject: RE: Syslog messages get truncated at 1kb (syslog server config is ok)

Hi Lukas (sorry for my quoting, I still haven't managed to get this software
to behave correctly).

 Hi Lukas, my mtu is set to 1500 and the message looks truncated.
 I am able to ping the server using that mtu

 root@lbha01:~# ping -s 1500 syslog

-s 1472 -M do is what you would use for this test. Instead, you are sending
ICMP requests of 1528 bytes without the DF bit, so they will get fragmented.
Anyway, it's unlikely that this is the problem.

Sorry, I forgot to set the DF flag and to adjust the size. I can confirm that
MTU is not a problem:

root@lbhasolr01:~# ping syslog -s 1472 -M do PING syslog.7pixel.local 
(10.1.0.150) 1472(1500) bytes of data.
1480 bytes from 10.1.0.150: icmp_req=1 ttl=63 time=0.385 ms


 this is my dump (tcpdump -X) (the message is truncated and I don't 
 see other packets flowing).

Ok, can you confirm that haproxy has been reloaded/restarted after adding the
len keyword to your logging configuration?

Yes, haproxy has been restarted after the change

 With the logger utility this line gets split into multiple packets

I'm not familiar with this utility. Can you elaborate whether this SENDS
packets to your syslog-ng or if it receives logs from haproxy?

Logger is part of the util-linux package
(ftp://ftp.kernel.org/pub/linux/utils/util-linux/); it sends syslog messages
(it's useful for logging in shell scripts). With this utility, log packets
are split into multiple parts. (Btw, the version in Debian 7 has a bug that
prevents sending to remote syslog servers via UDP; I had to compile it from
scratch to use it.)

Iirc, a syslog message must fit into a single packet.

I don't know; when I was searching the archives I found this about syslog
message size: http://marc.info/?l=haproxy&m=139169691604703&w=2

Damiano 




Regards,

Lukas

  






Re: Limiting concurrent range connections

2015-06-04 Thread Sachin Shetty
Tried it, I don't see the table populating at all.

stick-table type string size  1M expire 10m store conn_cur
acl is_range  hdr_sub(Range) bytes=
acl is_path_throttled path_beg /public-api/v1/fs-content-download
#tcp-request content track-sc1 base32 if is_range is_path_throttled
http-request set-header X-track %[url]
tcp-request content track-sc1 req.hdr(X-track) if is_range
is_path_throttled
http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

(egnyte_server)egnyte@egnyte-laptop:~$ echo show table haproxy_l2 |
socat /tmp/haproxy.sock stdio
# table: haproxy_l2, type: string, size:1048576, used:0

(egnyte_server)egnyte@egnyte-laptop:~$






On 6/3/15 8:36 PM, Baptiste bed...@gmail.com wrote:

Yes, the url sample copies whole URL as sent by the client.
Simply give it a try on a staging server and let us know the status.

Baptiste

On Wed, Jun 3, 2015 at 3:19 PM, Sachin Shetty sshe...@egnyte.com wrote:
 Thanks Baptiste - Will http-request set-header X-track %[url] help me
 track URL with query parameters as well?

 On 6/3/15 6:36 PM, Baptiste bed...@gmail.com wrote:

On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com
wrote:
 Hi,

 I am trying to write some throttles that would limit concurrent
connections
 for Range requests + specific urls. For example I want to allow only 2
 concurrent range requests downloading a file
 /public-api/v1/fs-content-download

 I have a working rule:

 stick-table type string size  1M expire 10m store conn_cur
 tcp-request inspect-delay 5s
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 tcp-request content track-sc1 base32 if is_range is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

 Just wanted to see if there is a better way of doing this? Is this
efficient
 enough.

 I need to include the query string as well in my tracker, but I could
not
 figure that out.

 Thanks
 Sachin


Hi Sachin,

I would do it like this:

 stick-table type string size  1M expire 10m store conn_cur
 tcp-request inspect-delay 5s
 tcp-request accept if HTTP
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 http-request set-header X-track %[url]
 http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

There might be some typo, but you get the idea.

Baptiste







Re: Limiting concurrent range connections

2015-06-04 Thread Sachin Shetty
I did try it; it needs 1.6-dev1, and that version segfaults as soon as the
request is made:

(egnyte_server)egnyte@egnyte-laptop:~/haproxy$ ~/haproxy/sbin/haproxy -f
conf/haproxy.conf -d
[WARNING] 154/044207 (24974) : Setting tune.ssl.default-dh-param to 1024
by default, if your workload permits it you should set it to at least
2048. Please set a value >= 1024 to make this warning disappear.
Note: setting global.maxconn to 2000.
Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
:haproxy_l2.accept(0005)=0009 from [192.168.56.102:50119]
Segmentation fault



Thanks
Sachin


On 6/4/15 3:45 PM, Baptiste bed...@gmail.com wrote:

Hi sachin,

Have a look at my conf: I turned your tcp-request content statement into
http-request.

Baptiste

On Thu, Jun 4, 2015 at 12:05 PM, Sachin Shetty sshe...@egnyte.com wrote:
 Tried it, I don't see the table populating at all.

 stick-table type string size  1M expire 10m store conn_cur
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 #tcp-request content track-sc1 base32 if is_range is_path_throttled
 http-request set-header X-track %[url]
 tcp-request content track-sc1 req.hdr(X-track) if is_range
 is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

 (egnyte_server)egnyte@egnyte-laptop:~$ echo show table haproxy_l2 |
 socat /tmp/haproxy.sock stdio
 # table: haproxy_l2, type: string, size:1048576, used:0

 (egnyte_server)egnyte@egnyte-laptop:~$






 On 6/3/15 8:36 PM, Baptiste bed...@gmail.com wrote:

Yes, the url sample copies whole URL as sent by the client.
Simply give it a try on a staging server and let us know the status.

Baptiste

On Wed, Jun 3, 2015 at 3:19 PM, Sachin Shetty sshe...@egnyte.com
wrote:
 Thanks Baptiste - Will http-request set-header X-track %[url] help
me
 track URL with query parameters as well?

 On 6/3/15 6:36 PM, Baptiste bed...@gmail.com wrote:

On Wed, Jun 3, 2015 at 2:17 PM, Sachin Shetty sshe...@egnyte.com
wrote:
 Hi,

 I am trying to write some throttles that would limit concurrent
connections
 for Range requests + specific urls. For example I want to allow
only 2
 concurrent range requests downloading a file
 /public-api/v1/fs-content-download

 I have a working rule:

 stick-table type string size  1M expire 10m store conn_cur
 tcp-request inspect-delay 5s
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 tcp-request content track-sc1 base32 if is_range is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range
is_path_throttled

 Just wanted to see if there is a better way of doing this? Is this
efficient
 enough.

 I need to include the query string as well in my tracker, but I
could
not
 figure that out.

 Thanks
 Sachin


Hi Sachin,

I would do it like this:

 stick-table type string size  1M expire 10m store conn_cur
 tcp-request inspect-delay 5s
 tcp-request accept if HTTP
 acl is_range  hdr_sub(Range) bytes=
 acl is_path_throttled path_beg /public-api/v1/fs-content-download
 http-request set-header X-track %[url]
 http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled
 http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

There might be some typo, but you get the idea.

Baptiste









Re: Choosing servers based on IP address

2015-06-04 Thread Holger Just
Hi Andy,

Please always CC the mailing list so that others can help you too and
can learn from the discussion.

Franks Andy (IT Technical Architecture Manager) wrote:
 Hi Holger,
   Sorry, I will elaborate a bit more!
 We are going to implement Microsoft exchange server 2010 (sp3) over two
 AD sites. At the moment we have two servers, one at each site.
 With a two site AD implementation with out-of-the-box settings, even if
 the two sites are connected via a decent link, clients from site A are
 not permitted to use the interface to the database (the CAS) at site B
 to get to the database at site A, unless the whole site is down.
 I would like to have 2 load balancing solutions - one at each site with
 a primary connection to the server at same site, but then a failover if
 that server goes down.
 That's all fine, but it would be ideal if we had a load balancing
 solution that could take connections from site A and route them to the
 server at site B in normal situations too with some logic that said If
 client is from IP x.x.x.x, then always use server B rather than A/B
 depending on the hard coded weight.
 It would open up lots more DR recovery potential for a multiple site
 like this. Thinking about it, I can't really understand why it's not
 done more - redirecting based on where something is coming from.. You
 could redirect DMZ traffic one way and ordinary another without
 complicated routing.
 Am I missing a trick?
 Thanks
 Andy

If I understood you right, you have two sites, each with an Exchange
server and some clients. You normally want the clients on Site A to only
connect to EXCH-A (the Exchange server at Site A). However, if that server
is down, you want the clients on Site A to connect to the Exchange server
on Site B instead.


SITE A|SITE B
--+
  |
Client-1A ---,|   ,--- Client-2A
  \   |  /
Client-1B -- HAPROXY -+ HAPROXY -- Client-2B
  /   \\  | //   \
Client-1C ---'   EXCH-A   |  EXCH-B   `--- Client-2C
  |

This is easily possible with a backend section where one server is
designated as a backup server, which will thus only be used if all
non-backup servers are down:

backend SMTP-A
  server exch-a 10.1.0.1:25 check
  server exch-b 10.2.0.1:25 check backup

With this config, the primary server (exch-a) is used for all
connections. If it is down, the backup server exch-b is used until
exch-a is up again.

Now, in order to route clients from Site B to their own Exchange server,
even if they arrive on the HAProxy at Site A, you can define an additional
backend with flipped roles:

backend SMTP-B
  server exch-a 10.1.0.1:25 check backup
  server exch-b 10.2.0.1:25 check

You can then route requests in the frontend to the appropriate backend
based on the source IP:

frontend smtp
  bind :25

  acl from-site-a src 10.1.0.0/16
  acl from-site-b src 10.2.0.0/16

  use_backend SMTP-A if from-site-a
  use_backend SMTP-B if from-site-b
  default_backend SMTP-A

I hope this is clear. Please read the configuration manual regarding
additional server options which can affect stickiness and the handling of
existing sessions on failover:

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2

Regards,
Holger



RE: Syslog messages get truncated at 1kb (syslog server config is ok)

2015-06-04 Thread Lukas Tribus
Hi Damiano,


 Dear all, an update: logging using sockets doesn't change anything.
 After some grepping the code and tinkering I found that changing REQURI_LEN
 in include/common/defaults.h does the job

Thanks for your analysis.



 the strange thing is that there's also #define MAX_SYSLOG_LEN 1024 in the
 same file but it doesn't modify logging behaviour.

That's because that's just a default, overridden by your len configuration.
The syslog length is not the problem; the URI length is.



 I don't know the side effect of this: maybe increased memory usage for each
 request ? Do I have to file a bug ?

Yes, it will definitely increase memory usage.

Reading the following thread, I think this is expected behavior:
http://thread.gmane.org/gmane.comp.web.haproxy/3679/focus=3689


The workaround is to compile with DEFINE=-DREQURI_LEN=2048 (supported since
1.5-dev19); at least you avoid source code patches, though you still have
to recompile.

I guess a runtime configuration parameter would be nice.
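For reference, the recompile Lukas describes would look something like this (the TARGET value is an assumption for a 1.5-era Linux build; adjust for your platform):

```shell
# Rebuild haproxy with a larger captured-URI buffer, no source patch needed
make TARGET=linux2628 DEFINE=-DREQURI_LEN=2048
```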



Regards,

Lukas