[squid-users] squid+apache+php-fpm How squid could mark DEAD a peer where Apache is alive and php-fpm dead?

2020-06-28 Thread Patrick Chemla

  
  
Hi,


Thank you all for the great work on Squid.


Last week I hit a bad issue where a Squid 4.0.23, set to
  send requests to 4 VMs, didn't detect a dead peer.


Each VM runs Apache as the front end and php-fpm to
  execute the PHP scripts.


Squid is using the sourcehash load-balancing algorithm.


But if php-fpm is down or busy, Apache returns a 504
  Gateway Timeout to Squid and the browser, and Squid keeps
  sending requests to the same Apache, even though the answer will
  always be HTTP code 504, because Apache itself is ALIVE.


Apache is there, still responding, but the user is not getting the
  right answer.


And all client IPs hashed to this VM will fail, so 25% of
  all users suffer unavailability.


I have set up watchdog scripts that now, from inside the VM
  itself, check whether Apache returns a 504 code and, in that
  case, restart php-fpm.


But how can I tell Squid to mark a peer dead when it returns a 504 code?
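
For reference, this is roughly the peer setup (a sketch, with hypothetical addresses). As far as I can tell, connect-fail-limit only counts failures to open the TCP connection, so a peer whose Apache still accepts connections but always answers 504 is never marked DEAD by it:

# peers go DEAD only after N failed TCP connects;
# an HTTP 504 from a living Apache does not count as a failure
cache_peer 10.0.0.11 parent 80 0 no-query originserver sourcehash connect-timeout=3 connect-fail-limit=3 name=VM1
cache_peer 10.0.0.12 parent 80 0 no-query originserver sourcehash connect-timeout=3 connect-fail-limit=3 name=VM2
cache_peer 10.0.0.13 parent 80 0 no-query originserver sourcehash connect-timeout=3 connect-fail-limit=3 name=VM3
cache_peer 10.0.0.14 parent 80 0 no-query originserver sourcehash connect-timeout=3 connect-fail-limit=3 name=VM4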


Thanks
Patrick


  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] domains with accented international characters fail with Invalid URL

2020-05-12 Thread Patrick Chemla

  
  
Sorry for this; it seems it is not linked to accented characters.
  Other, non-accented domains don't work either.



I am checking my configuration.


Patrick

On 12/05/2020 at 14:40, Patrick Chemla wrote:


  
  
  Hi,
  
  
In the past I asked a few questions here and got very efficient
help, so I am trying again.
  
  
I have a French domain using accented letters in the domain
name itself, like "aaaébbbéooo.com", known as
xn--xxx-xxx.fr in the DNS and Apache configuration.
  
  
I am trying to set up Squid 3.5.20 (the latest version available
as a package on CentOS 7.7) as a proxy cache to deliver
images for all the domains on this server, but it fails for that
particular domain with the message Invalid URL.
  
  
  Here are the access_logs:
  
  
- 88.88.88.xx - - - [12/May/2020:08:05:43 +0200] " @%F1%F4%EA%F8?%80%E6Q%EAWH%D5%04zEa)%B1%A7%C1%9B%AA - HTTP/1.1" 400 3941 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:05:52 +0200] "GET / - HTTP/1.1" 400 3755 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:05:52 +0200] "GET http://hostname:3128/squid-internal-static/icons/SN.png /squid-internal-static/icons/SN.png HTTP/1.1" 200 12939 0 "http://www.xn--xxx-xxx.fr:3128/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0" TCP_MEM_HIT:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:05:52 +0200] "GET /favicon.ico - HTTP/1.1" 400 3777 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:07:23 +0200] "GET / - HTTP/1.1" 400 3755 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:07:23 +0200] "GET http://hostname:3128/squid-internal-static/icons/SN.png /squid-internal-static/icons/SN.png HTTP/1.1" 200 12939 0 "http://xn--xxx-xxx.fr:3128/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0" TCP_MEM_HIT:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:07:23 +0200] "GET /favicon.ico - HTTP/1.1" 400 3777 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:16:16 +0200] "GET / - HTTP/1.1" 400 3755 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:16:16 +0200] "GET http://hostname:3128/squid-internal-static/icons/SN.png /squid-internal-static/icons/SN.png HTTP/1.1" 200 4098 0 "http://www.xn--xxx-xxx.fr:3128/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0" TCP_MEM_HIT_ABORTED:HIER_NONE
  
  
  
I tried searching Google but can't find any details on a
special setting in squid.conf to accept accented international
letters.
  
  
  Maybe someone here could help?
  
  
  Many thanks in advance.
  
  
  Patrick
  
  
  
  ___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] domains with accented international characters fail with Invalid URL

2020-05-12 Thread Patrick Chemla

  
  
Hi,


In the past I asked a few questions here and got very efficient
  help, so I am trying again.


I have a French domain using accented letters in the domain name
  itself, like "aaaébbbéooo.com", known as xn--xxx-xxx.fr in
  the DNS and Apache configuration.


I am trying to set up Squid 3.5.20 (the latest version available
  as a package on CentOS 7.7) as a proxy cache to deliver
  images for all the domains on this server, but it fails for that
  particular domain with the message Invalid URL.


Here are the access_logs:


- 88.88.88.xx - - - [12/May/2020:08:05:43 +0200] " @%F1%F4%EA%F8?%80%E6Q%EAWH%D5%04zEa)%B1%A7%C1%9B%AA - HTTP/1.1" 400 3941 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:05:52 +0200] "GET / - HTTP/1.1" 400 3755 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:05:52 +0200] "GET http://hostname:3128/squid-internal-static/icons/SN.png /squid-internal-static/icons/SN.png HTTP/1.1" 200 12939 0 "http://www.xn--xxx-xxx.fr:3128/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0" TCP_MEM_HIT:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:05:52 +0200] "GET /favicon.ico - HTTP/1.1" 400 3777 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:07:23 +0200] "GET / - HTTP/1.1" 400 3755 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:07:23 +0200] "GET http://hostname:3128/squid-internal-static/icons/SN.png /squid-internal-static/icons/SN.png HTTP/1.1" 200 12939 0 "http://xn--xxx-xxx.fr:3128/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0" TCP_MEM_HIT:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:07:23 +0200] "GET /favicon.ico - HTTP/1.1" 400 3777 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:16:16 +0200] "GET / - HTTP/1.1" 400 3755 0 "-" "-" TAG_NONE:HIER_NONE
- 88.88.88.xx - - - [12/May/2020:08:16:16 +0200] "GET http://hostname:3128/squid-internal-static/icons/SN.png /squid-internal-static/icons/SN.png HTTP/1.1" 200 4098 0 "http://www.xn--xxx-xxx.fr:3128/" "Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0" TCP_MEM_HIT_ABORTED:HIER_NONE



I tried searching Google but can't find any details on a special
  setting in squid.conf to accept accented international letters.


Maybe someone here could help?


Many thanks in advance.


Patrick

  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Multiple SSL certificates on same IP

2018-12-19 Thread Patrick Chemla

  
  
Hi all,


Thanks for the great work you do/provide with squid.


I have been using Squid for years, I like it very much, and I am now
  installing an SSL load-balancing unit for about 80
  domains/sub-domains.


My OS release is Fedora release 29 (Twenty Nine)


My Squid version and parameters are:


# squid -v
Squid Cache: Version 4.4
  Service Name: squid
  
  This binary uses OpenSSL 1.1.1-pre9 (beta) FIPS 21 Aug 2018. For
  legal restrictions on distribution see
  https://www.openssl.org/source/license.html
  
  configure options:  '--build=x86_64-redhat-linux-gnu'
  '--host=x86_64-redhat-linux-gnu' '--program-prefix='
  '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin'
  '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share'
  '--includedir=/usr/include' '--libdir=/usr/lib64'
  '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
  '--mandir=/usr/share/man' '--infodir=/usr/share/info'
  '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid'
  '--localstatedir=/var' '--datadir=/usr/share/squid'
  '--sysconfdir=/etc/squid' '--with-logdir=/var/log/squid'
  '--with-pidfile=/var/run/squid.pid'
  '--disable-dependency-tracking' '--enable-eui'
  '--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,PAM,POP3,RADIUS,SASL,SMB,SMB_LM'
  '--enable-auth-ntlm=SMB_LM,fake' '--enable-auth-digest=file,LDAP'
  '--enable-auth-negotiate=kerberos'
'--enable-external-acl-helpers=LDAP_group,time_quota,session,unix_group,wbinfo_group,kerberos_ldap_group'
  '--enable-storeid-rewrite-helpers=file' '--enable-cache-digests'
  '--enable-cachemgr-hostname=localhost' '--enable-delay-pools'
  '--enable-epoll' '--enable-icap-client' '--enable-ident-lookups'
  '--enable-linux-netfilter' '--enable-removal-policies=heap,lru'
  '--enable-snmp' '--enable-ssl' '--enable-ssl-crtd'
  '--enable-storeio=aufs,diskd,ufs,rock' '--enable-diskio'
  '--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio'
  '--with-default-user=squid' '--with-dl' '--with-openssl'
  '--with-pthreads' '--disable-arch-native' '--with-pic'
  '--disable-security-cert-validators'
  'build_alias=x86_64-redhat-linux-gnu'
  'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall
  -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2
  -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong
  -grecord-gcc-switches
  -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
  -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
  -fasynchronous-unwind-tables -fstack-clash-protection
  -fcf-protection -fPIC' 'LDFLAGS=-Wl,-z,relro   -Wl,-z,now
  -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -pie -Wl,-z,relro
  -Wl,-z,now -Wl,--warn-shared-textrel' 'CXXFLAGS=-O2 -g -pipe -Wall
  -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2
  -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong
  -grecord-gcc-switches
  -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
  -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
  -fasynchronous-unwind-tables -fstack-clash-protection
  -fcf-protection -fPIC'
  'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'


The problem I have is that all these domains are currently on one
  IP only, on a single server running nginx with multiple SSL
  certificates on that single IP, and I would like to do the same
  with Squid.


I did this a few years ago with HAProxy, but I would prefer to keep
  Squid.


3 choices:


- Have more than one IP on the server, and create SSL certificates
  from Let's Encrypt, each covering a list of some domains and
  sub-domains
- Create one very big certificate and have Squid use it (not the
  best choice, because the domains have very different content, far
  from one another)

- Have Squid manage all the certificates on a single IP (the
  best, because some domains have very high encryption needs, and
  Let's Encrypt is not their preference)



Like a bottle in the sea: is it possible to have multiple
  certificates with Squid 4.4 on a single IP?
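
For illustration, the first option would look something like this sketch (hypothetical addresses, paths and domains): one https_port per IP, each presenting a SAN certificate covering one group of domains:

# each listening IP carries its own certificate bundle
https_port 203.0.113.10:443 accel defaultsite=www.example.com cert=/etc/squid/ssl/group1.crt key=/etc/squid/ssl/group1.key
https_port 203.0.113.11:443 accel defaultsite=www.example.net cert=/etc/squid/ssl/group2.crt key=/etc/squid/ssl/group2.key

The third choice would need Squid to pick among several certificates by SNI on one port, which is exactly what I am asking about.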


Thanks for your help.


Patrick


  

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Redirect input http to https

2018-02-26 Thread Patrick Chemla

Hi all,

A lot of very good people here could help with this:

I have a squid 3.5.20 in front of some backends to balance traffic 
according to where the website is.


I have set up certificates on Squid for all the sites, and the 
backends currently accept traffic over both HTTP and HTTPS.


On some websites I have set up, in the Apache parameters, a Redirect / 
https://www.domain.tld/ and of course traffic coming in over HTTP is 
redirected to HTTPS.


I searched for some acl/redirect in Squid to do the redirect inside 
Squid rather than have Apache send this 302 redirect.


Is there a way to have Squid send a 302 redirect to HTTPS for all 
requests coming in over HTTP?
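
Something like this sketch is what I imagine (acl names and the domain are hypothetical), assuming the plain-HTTP listener is on port 80 and that deny_info URLs accept %R (the request URL path) on 3.5:

acl siteA dstdomain .domain.tld
acl port80 localport 80
# deny plain-HTTP requests for the site; deny_info on the last
# matched acl turns the denial into a 302 to the HTTPS site
http_access deny port80 siteA
deny_info 302:https://www.domain.tld%R siteA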


Many thanks

Patrick


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] tcp_outgoing_address issue how to deny traffic to other IPs

2018-02-26 Thread Patrick Chemla

Hi Alex, Ivan,

I finally found time to change and test the squid config to load-balance 
the outgoing IPs, and thank you very much, it works very well. All traffic 
is now output to the right IPs according to the rules.
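
For the archives, a sketch of the corrected approach (hypothetical IPs): the random ACLs are evaluated in sequence, so the probabilities must grow (1/5, 1/4, 1/3, 1/2) to give ~20% each, and a final tcp_outgoing_address with no ACL catches the rest, so nothing ever falls back to the main IP:

acl Percent001 random 1/5
acl Percent002 random 1/4
acl Percent003 random 1/3
acl Percent004 random 1/2

tcp_outgoing_address 192.0.2.10 Percent001
tcp_outgoing_address 192.0.2.21 Percent002
tcp_outgoing_address 192.0.2.31 Percent003
tcp_outgoing_address 192.0.2.34 Percent004
# no ACL here: catch-all for the remaining ~1/5
tcp_outgoing_address 192.0.2.38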


For my other question I will open a new thread.

Many many thanks.

Patrick


On 23/02/2018 at 00:09, Alex Rousskov wrote:

On 02/22/2018 02:52 PM, Ivan Larionov wrote:

Your balancing rules are incorrect. This is how we balance 30% per IP:

You won the race! Perhaps our similar emails will increase the page rank
of the correct answers to this FAQ. :-).

Alex.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] tcp_outgoing_address issue how to deny traffic to other IPs

2018-02-22 Thread Patrick Chemla

Hi,

I have googled for days and can't find the right settings to distribute 
outgoing requests over part of the local IPs of my server.


This is the conf I built according to what I found in docs and forums:


Squid Cache: Version 4.0.17



blablabla

blablabla

blablabla



acl Percent001 random 1/5
acl Percent002 random 1/5
acl Percent003 random 1/5
acl Percent004 random 1/5
acl Percent005 random 1/5

server_persistent_connections off


tcp_outgoing_address XX.3X.YYY.10 Percent001
tcp_outgoing_address XX.X3.YYY.21 Percent002
tcp_outgoing_address XX.5X.YYY.31 Percent003
tcp_outgoing_address XX.X9.YYY.34 Percent004
tcp_outgoing_address XX.5X.YYY.38 Percent005

balance_on_multiple_ip on

forwarded_for delete
via off

My problem is that this server has

- a main IP MA.IN.IP.00 of course

- a localhost 127.0.0.1 of course

- some secondary IPs attached to the same interface as the main IP


The input traffic comes in through one of the secondaries, and I need the 
output traffic to go out randomly through the other secondary IPs, with 
no Squid traffic from the main IP.


When I look at the log, or use a tcpdump network analyzer, I can see 
that there is Squid outgoing traffic on this IP, and I can't find how to 
prevent tcp_outgoing_address from ever using the main IP.


I hope this is clear; I need help after searching many 
combinations for days.


Many thanks

Patrick

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-21 Thread Patrick Chemla

Hi Alex, and all others

No, I have set it up for multiple domains, and it works really fine. Again, 
many thanks.


But I have a new requirement:

Within one of the sites, where Squid handles the HTTPS connection and then 
communicates with the internal VM over HTTP, there is one page (at least; 
maybe we will find others) that, I don't know why, the devs want to be 
HTTP only.

When I come through the menu to this page, the app returns an http:// link 
to Squid. Squid encrypts and sends an https:// link to the browser, but 
when the user hits the link, some of the components of the page should 
stay http://, and there the browser detects an https page with http 
components embedded, and blocks them.


Is there a way to tell Squid to leave some links as http?

My domain is domain.tld:

the browser asks for https://domain.tld

Squid decrypts, recognizes this domain and, according to the acl, goes to 
VM1 in http:// mode, not encrypted.


The site on VM1 returns a page in http:// mode, with all links as http 
too, and Squid sends it back encrypted to the browser with all the 
embedded links as https://.


I want a special link on the page http://domain.tld/special/ to stay http.

How can I instruct Squid to leave that one as it is, but convert all the others?

Thanks

Patrick


On 17/11/2016 at 20:11, Patrick Chemla wrote:


Hi Alex, sorry for disturbing you, but it works with

https_port 5.39.105.241:443 accel defaultsite=www.sempli.com 
cert=/etc/squid/ssl/sempli.com.crt 
key=/etc/squid/ssl/sempli.com.key


Many, many, many thanks for the valuable help.

Patrick
On 17/11/2016 at 19:48, Patrick Chemla wrote:

Hi Alex,

I followed the

http://wiki.squid-cache.org/SquidFaq/ReverseProxy

I am getting errors when trying to connect. What could it be?

This is the config: Is there something bad there?

==
debug_options   ALL,1  33,2 28,9

http_port 5.39.105.241:443 accel defaultsite=www.sempli.com 
cert=/etc/squid/ssl/sempli.com.crt 
key=/etc/squid/ssl/sempli.com.key


cache_peer 172.16.16.83 parent 80 0 no-query originserver login=PASS 
sourcehash weight=80 connect-timeout=3 connect-fail-limit=3 standby=5 
name=SEMP1
cache_peer 172.16.17.83 parent 80 0 no-query originserver login=PASS 
sourcehash weight=80 connect-timeout=3 connect-fail-limit=3 standby=5 
name=SEMP2


acl w3_sempli dstdomain .sempli.com
cache_peer_access SEMP1 allow w3_sempli
cache_peer_access SEMP1 deny all

http_access allow w3_sempli

=

$ wget https://www.sempli.com
--2016-11-17 19:34:49--  https://www.sempli.com/
Resolving www.semplitech.com (www.sempli.com)… xxx.xxx.xxx.xxx
Connecting to www.semplitech.com 
(www.sempli.com)|xxx.xxx.xxx.xxx|:443… connected.
OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown 
protocol

Unable to establish SSL connection.

Same error with the browser
=
This is what I have in the access_log file:
- ccc.ccc.ccc.ccc - - - [17/Nov/2016:18:34:49 +0100] "NONE 
error:invalid-request - HTTP/1.1" 400 4468 "-" "-" TAG_NONE:HIER_NONE
- ccc.ccc.ccc.ccc - - - [17/Nov/2016:18:35:30 +0100] "NONE 
error:invalid-request - HTTP/1.1" 400 4468 "-" "-" TAG_NONE:HIER_NONE


===
This is what I have in cache.log:
2016/11/17 18:35:28.724 kid1| 28,4| FilledChecklist.cc(66) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x78737acd2520
2016/11/17 18:35:28.725 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x78737acd2520
2016/11/17 18:35:30.752 kid1| 28,4| Eui48.cc(178) lookup: 
id=0xf55ca8ed404 query ARP table
2016/11/17 18:35:30.752 kid1| 28,4| Eui48.cc(222) lookup: 
id=0xf55ca8ed404 query ARP on each interface (480 found)
2016/11/17 18:35:30.752 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface lo
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(237) lookup: 
id=0xf55ca8ed404 looking up ARP address for ccc.ccc.ccc.ccc on eth2
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:1
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:2
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:3
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:4
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:5
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:6
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:7
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:8
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 fo

Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-17 Thread Patrick Chemla


Hi Alex, sorry for disturbing you, but it works with

https_port 5.39.105.241:443 accel defaultsite=www.sempli.com 
cert=/etc/squid/ssl/sempli.com.crt key=/etc/squid/ssl/sempli.com.key


Many, many, many thanks for the valuable help.

Patrick
On 17/11/2016 at 19:48, Patrick Chemla wrote:

Hi Alex,

I followed the

http://wiki.squid-cache.org/SquidFaq/ReverseProxy

I am getting errors when trying to connect. What could it be?

This is the config: Is there something bad there?

==
debug_options   ALL,1  33,2 28,9

http_port 5.39.105.241:443 accel defaultsite=www.sempli.com 
cert=/etc/squid/ssl/sempli.com.crt 
key=/etc/squid/ssl/sempli.com.key


cache_peer 172.16.16.83 parent 80 0 no-query originserver login=PASS 
sourcehash weight=80 connect-timeout=3 connect-fail-limit=3 standby=5 
name=SEMP1
cache_peer 172.16.17.83 parent 80 0 no-query originserver login=PASS 
sourcehash weight=80 connect-timeout=3 connect-fail-limit=3 standby=5 
name=SEMP2


acl w3_sempli dstdomain .sempli.com
cache_peer_access SEMP1 allow w3_sempli
cache_peer_access SEMP1 deny all

http_access allow w3_sempli

=

$ wget https://www.sempli.com
--2016-11-17 19:34:49--  https://www.sempli.com/
Resolving www.semplitech.com (www.sempli.com)… xxx.xxx.xxx.xxx
Connecting to www.semplitech.com 
(www.sempli.com)|xxx.xxx.xxx.xxx|:443… connected.
OpenSSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown 
protocol

Unable to establish SSL connection.

Same error with the browser
=
This is what I have in the access_log file:
- ccc.ccc.ccc.ccc - - - [17/Nov/2016:18:34:49 +0100] "NONE 
error:invalid-request - HTTP/1.1" 400 4468 "-" "-" TAG_NONE:HIER_NONE
- ccc.ccc.ccc.ccc - - - [17/Nov/2016:18:35:30 +0100] "NONE 
error:invalid-request - HTTP/1.1" 400 4468 "-" "-" TAG_NONE:HIER_NONE


===
This is what I have in cache.log:
2016/11/17 18:35:28.724 kid1| 28,4| FilledChecklist.cc(66) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x78737acd2520
2016/11/17 18:35:28.725 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x78737acd2520
2016/11/17 18:35:30.752 kid1| 28,4| Eui48.cc(178) lookup: 
id=0xf55ca8ed404 query ARP table
2016/11/17 18:35:30.752 kid1| 28,4| Eui48.cc(222) lookup: 
id=0xf55ca8ed404 query ARP on each interface (480 found)
2016/11/17 18:35:30.752 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface lo
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(237) lookup: 
id=0xf55ca8ed404 looking up ARP address for ccc.ccc.ccc.ccc on eth2
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:1
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:2
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:3
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:4
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:5
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:6
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:7
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth2:8
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface eth3
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(237) lookup: 
id=0xf55ca8ed404 looking up ARP address for ccc.ccc.ccc.ccc on eth3
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(228) lookup: 
id=0xf55ca8ed404 found interface virbr0
2016/11/17 18:35:30.753 kid1| 28,4| Eui48.cc(237) lookup: 
id=0xf55ca8ed404 looking up ARP address for ccc.ccc.ccc.ccc on virbr0
2016/11/17 18:35:30.753 kid1| 28,3| Eui48.cc(520) lookup: 
id=0xf55ca8ed404 ccc.ccc.ccc.ccc NOT found
2016/11/17 18:35:30.753 kid1| 28,4| FilledChecklist.cc(66) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x78737acd2660
2016/11/17 18:35:30.753 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x78737acd2660
2016/11/17 18:35:30.753 kid1| 33,2| client_side.cc(2583) 
clientProcessRequest: clientProcessRequest: Invalid Request
2016/11/17 18:35:30.753 kid1| 33,2| client_side.cc(816) swanSong: 
local=5.39.105.241:443 remote=ccc.ccc.ccc.ccc:48745 flags=1
2016/11/17 18:35:30.753 kid1| 28,3| Checklist.cc(70) preCheck: 
0x78737acd23c0 checking fast ACLs
2016/11/17 18:35:30.753 kid1| 28,5| Acl.cc(138) matches: checking 
access_log daemon:/var/log/squid/access.log
2016/11/17 18:35:30.753 kid1| 28,5| Acl.cc(138) matches: checking 
(access_log daemon:/var/l

Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-17 Thread Patrick Chemla
hecklist.cc(63) markFinished: 
0x78737acd23c0 answer ALLOWED for match
2016/11/17 18:35:30.754 kid1| 28,4| FilledChecklist.cc(66) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x78737acd23c0
2016/11/17 18:35:30.754 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x78737acd23c0
2016/11/17 18:36:15.609 kid1| 28,4| FilledChecklist.cc(66) 
~ACLFilledChecklist: ACLFilledChecklist destroyed 0x78737acd2520
2016/11/17 18:36:15.609 kid1| 28,4| Checklist.cc(197) ~ACLChecklist: 
ACLChecklist::~ACLChecklist: destroyed 0x78737acd2520


Thanks for help
Patrick

On 16/11/2016 at 20:16, Patrick Chemla wrote:
Many thanks Alex. I will try in the next few hours and let you know if I 
am successful.


Patrick


On 16/11/2016 at 20:04, Alex Crow wrote:


On 16/11/16 17:33, Patrick Chemla wrote:
Thanks for your answers. I am not doing anything illegal; I am trying to
build a performant platform.

I have a big server running about 10 different websites.

I have virtual machines on this server, each specialized for one or a few
websites, and squid helps me send the traffic to the destination
website on the internal VM according to the URL.

Some VMs are paired, so squid will load-balance the traffic over a group
of VMs according to the URL/acls.

All this works in HTTP, thanks to Amos's advice a few weeks ago.

Now I need to set up SSL traffic, and because the domains are different I
need to use different IPs:443 to be able to use different certificates.

I tried many times in the past to make squid work with SSL and never
succeeded, because of the many options, and this question: should the
traffic between squid and the backend be SSL? If yes, it's OK for me,
nothing illegal.

The second question: how to set up the SSL link on squid, receiving the
SSL request and sending it to the backend. The backend can actually
handle SSL traffic; it's OK for me if I find the way to make squid handle
the traffic according to the acls. Squid must decrypt the request,
evaluate the acls, then re-encrypt to send to the backend.

The reason I asked not to re-encrypt is performance. All this
is on the same server, from the host to the VMs, and decrypt, then
re-encrypt, then decrypt again will be resource-consuming. But I can do
it like that.

Now, do you have any clear howto that will help? I found many on
Google and none gave me a working solution.

The other question is about trusted certificates. We have trusted
certificates on the websites. Should we use the same ones on the squid?

Thanks for your appreciated help

Patrick



You are using a reverse proxy/web accelerator setup. Nothing you do
there will be illegal if you're using it for your own servers! You
should be able to use HTTP to the backend and just offer HTTPS from
squid. This will avoid loading the backend with encryption cycles. You
don't need any certificate generation as AFAIK you already have all the
certs you need.

See:

http://wiki.squid-cache.org/SquidFaq/ReverseProxy

for starters. You can adapt the wildcard example; if you have specific
certs for each domain, just listen on a different IP for each domain and
set up multiple https_port lines with a different listening IP for each
site. If you have a wildcard cert, i.e. *.mydomain.com, follow it directly.

Here's a couple more:

http://wiki.univention.com/index.php?title=Cool_Solution_-_Squid_as_Reverse_SSL_Proxy 



(I found the above with a simple google for "squid reverse ssl proxy".
Google is your friend here... )

http://www.squid-cache.org/Doc/config/https_port/

That's as far as my knowledge goes on reverse proxying in Squid; at my
site we use nginx. But AFAIK, if you're doing what I think you're doing,
that should be enough. Squid does have a lot of config parameters, but then
so does any other fully capable proxy server. Just focus on the parts
you need for your role and it will be much easier. Specifically ignore
bump/peek+splice, it's just for forward proxy.

Alex
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-16 Thread Patrick Chemla
Many thanks Alex. I will try in the next few hours and let you know if I 
am successful.


Patrick


On 16/11/2016 at 20:04, Alex Crow wrote:


On 16/11/16 17:33, Patrick Chemla wrote:

Thanks for your answers. I am not doing anything illegal; I am trying to
build a performant platform.

I have a big server running about 10 different websites.

I have virtual machines on this server, each specialized for one or a few
websites, and squid helps me send the traffic to the destination
website on the internal VM according to the URL.

Some VMs are paired, so squid will load-balance the traffic over a group
of VMs according to the URL/acls.

All this works in HTTP, thanks to Amos's advice a few weeks ago.

Now I need to set up SSL traffic, and because the domains are different I
need to use different IPs:443 to be able to use different certificates.

I tried many times in the past to make squid work with SSL and never
succeeded, because of the many options, and this question: should the
traffic between squid and the backend be SSL? If yes, it's OK for me,
nothing illegal.

The second question: how to set up the SSL link on squid, receiving the
SSL request and sending it to the backend. The backend can actually
handle SSL traffic; it's OK for me if I find the way to make squid handle
the traffic according to the acls. Squid must decrypt the request,
evaluate the acls, then re-encrypt to send to the backend.

The reason I asked not to re-encrypt is performance. All this
is on the same server, from the host to the VMs, and decrypt, then
re-encrypt, then decrypt again will be resource-consuming. But I can do
it like that.

Now, do you have any clear howto that will help? I found many on
Google and none gave me a working solution.

The other question is about trusted certificates. We have trusted
certificates on the websites. Should we use the same ones on the squid?

Thanks for your appreciated help

Patrick



You are using a reverse proxy/web accelerator setup. Nothing you do
there will be illegal if you're using it for your own servers! You
should be able to use HTTP to the backend and just offer HTTPS from
squid. This will avoid loading the backend with encryption cycles. You
don't need any certificate generation as AFAIK you already have all the
certs you need.

See:

http://wiki.squid-cache.org/SquidFaq/ReverseProxy

for starters. You can adapt the wildcard example; if you have specific
certs for each domain, just listen on a different IP for each domain and
set up multiple https_port lines with a different listening IP for each
site. If you have a wildcard cert, i.e. *.mydomain.com, follow it directly.

Here's a couple more:

http://wiki.univention.com/index.php?title=Cool_Solution_-_Squid_as_Reverse_SSL_Proxy

(I found the above with a simple google for "squid reverse ssl proxy".
Google is your friend here... )

http://www.squid-cache.org/Doc/config/https_port/

That's as far as my knowledge goes on reverse proxying in Squid; at my
site we use nginx. But AFAIK, if you're doing what I think you're doing,
that should be enough. Squid does have a lot of config parameters, but then
so does any other fully capable proxy server. Just focus on the parts
you need for your role and it will be much easier. Specifically ignore
bump/peek+splice, it's just for forward proxy.

Alex
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Trusted CA Certificate with ssl_bump

2016-11-16 Thread Patrick Chemla
Thanks for your answers. I am not doing anything illegal; I am trying to 
build a performant platform.


I have a big server running about 10 different websites.

I have virtual machines on this server, each specialized for one or a few 
websites, and squid helps me send the traffic to the destination 
website on the internal VM according to the URL.


Some VMs are paired, so squid will load-balance the traffic over a group 
of VMs according to the URL/acls.


All this works in HTTP, thanks to Amos's advice a few weeks ago.

Now I need to set up SSL traffic, and because the domains are different I 
need to use different IPs:443 to be able to use different certificates.


I tried many times in the past to make squid work with SSL and never 
succeeded, because of the many options, and this question: should the 
traffic between squid and the backend be SSL? If yes, it's OK for me, 
nothing illegal.


The second question: how to set up the SSL link on squid, receiving the 
SSL request and sending it to the backend. The backend can actually 
handle SSL traffic; it's OK for me if I find the way to make squid handle 
the traffic according to the acls. Squid must decrypt the request, 
evaluate the acls, then re-encrypt to send to the backend.


The reason I asked not to re-encrypt is performance. All this 
is on the same server, from the host to the VMs, and decrypt, then 
re-encrypt, then decrypt again will be resource-consuming. But I can do 
it like that.


Now, do you have any clear howto that will help? I found many on 
Google and none gave me a working solution.


The other question is about trusted certificates. We have trusted 
certificates on the websites. Should we use the same ones on the squid?


Thanks for your appreciated help

Patrick



On 16/11/2016 at 14:27, Amos Jeffries wrote:

On 16/11/2016 9:11 p.m., Patrick Chemla wrote:

Hi,

I have the same problem, and I need to use trusted CA certificates, so what
is the solution?

Not to do illegal bad things that violate your contract with the CA.

Any CA which lets you intercept traffic by generating sub-certificates
with their root *will* be blacklisted and effectively "thrown off the
Internet". It has happened already for several CA who thought that was
an idle threat.


I have a squid 3.5.20 used for multiple domains, multiple backends,
using both HTTP and HTTPS.

As Alex said, what you describe here sounds a lot more like
reverse-proxy than interception.

Sergey who started this thread was intercepting HTTPS traffic sent by
clients to an explicit proxy. All answers so far have been about that
topic, which is probably *not* what you are facing.

The configurations and limitations are very different. So first thing to
do is be clear about what actually you are trying to do.



So questions:

1/ Should I set up the squid certificate as ONLY self-signed, or is there
a way to use trusted certificates? If only self-signed, will the user
always be forced to accept the self-signed certificate the first time?
Not really good for commercial sites.


Are you the owner of the website(s) or an authorized CDN/Hosting
provider for them?



2/ Should the backend cache_peer be set as ssl on port 443, or could it be
simple http on port 80 (the backends are internal VMs on the same server,
no external network between squid and the backends)?


That depends on your answer to the above.


3/ Will the acl rules work OK to direct each request to the right
backend according to the domain, even in HTTPS?


Yes. But the detail may not be what you expect. It depends on the above
answers.


4/ Do you know of some clear and easy howtos or examples for such
settings, from which I could learn how to do this?


<http://wiki.squid-cache.org/ConfigExamples/> contains all of the
configurations you might need. But which one(s) are correct for you
depends on what you are actually needing to do.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Groups of peers load-balancing

2016-10-07 Thread Patrick Chemla

Many thanks Amos for your help.

I will set up my servers in the next few days, and will let you know how 
it works; well, I am sure it will.


Patrick


On 06/10/2016 at 13:11, Amos Jeffries wrote:

On 6/10/2016 8:52 p.m., George William Herbert wrote:

Usually you load balance with another tool...

HTTP Load Balancer is one of the roles Squid is designed for.

When you need to converge the LB, routing, and caching tasks Squid is
the product for the job.


On Oct 6, 2016, at 12:16 AM, Patrick Chemla wrote:

Hi,

I am using Squid Cache: Version 3.5.20 on 2 Fedora 24 servers.

I have to set up a load-balancer for multiple sites, each using
different peers, on both servers + cloud instances.

Squid is the entry point for all the websites. According to the domain,
I will have 2 to 5 peers to handle the load. But, as I have 2 big
domains plus a group of other domains, I need dedicated peers for
each big domain, and another group of peers for the other domains.

So squid must route requests:

- for domain A to peers A1 A2 A3

- for domain B to peers B1 B2 B3 B4 B5

- for all other domains to peers O1 O2

The load-balancing method within a group could be different, as some
domains need a user to always reach the same peer, while other
domains could simply use round-robin balancing.

I can't find how to group peers A1 A2 A3 into group A, peers B1
B2..B5 into group B, and O1 O2 into group O, and then set the
cache_peer_access to the needed group.

Grouping is done by cache_peer_access ACLs.

You need to define one ACL which will only match for one group.
Typically the domain name is used for that. The "group" is simply the
peers which the request is allowed to be sent to.

For example:

  acl groupA dstdomain .example.com
  acl groupB dstdomain .example.net
  acl groupO dstdomain .example.org

  cache_peer_access A1 allow groupA
  cache_peer_access A1 deny all
  cache_peer_access A2 allow groupA
  cache_peer_access A2 deny all
  ...

  cache_peer_access B1 allow groupB
  cache_peer_access B1 deny all
  cache_peer_access B2 allow groupB
  cache_peer_access B2 deny all
  ...


Since your peers are split into distinct groups just add the relevant
algorithm to the cache_peer in that group which is using it.
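
For example (hypothetical addresses, assuming origin web servers), group A
pinned by client IP with sourcehash and group B balanced round-robin:

  cache_peer 10.0.0.11 parent 80 0 no-query originserver sourcehash name=A1
  cache_peer 10.0.0.12 parent 80 0 no-query originserver sourcehash name=A2
  cache_peer 10.0.1.21 parent 80 0 no-query originserver round-robin name=B1
  cache_peer 10.0.1.22 parent 80 0 no-query originserver round-robin name=B2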

Things can get complex if you have one peer being part of two groups.
But then you just define two cache_peer lines for it, one in each group
with the relevant LB algorithm. The name= parameter is used to distinguish
which cache_peer line is relevant for the cache_peer_access rules.
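
For instance, a sketch of one host acting as peer A3 in group A and as
peer O2 in group O (hypothetical address; one cache_peer line per role):

  cache_peer 10.0.0.13 parent 80 0 no-query originserver round-robin name=A3
  cache_peer 10.0.0.13 parent 80 0 no-query originserver sourcehash name=O2

  cache_peer_access A3 allow groupA
  cache_peer_access A3 deny all
  cache_peer_access O2 allow groupO
  cache_peer_access O2 deny all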

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Groups of peers load-balancing

2016-10-06 Thread Patrick Chemla

Hi,

I am using Squid Cache: Version 3.5.20 on 2 Fedora 24 servers.

I have to set up a load-balancer for multiple sites, each using different 
peers, on both servers + cloud instances.


Squid is the entry point for all the websites. According to the domain, I 
will have 2 to 5 peers to handle the load. But, as I have 2 big domains 
plus a group of other domains, I need dedicated peers for each big 
domain, and another group of peers for the other domains.


So squid must route requests:

- for domain A to peers A1 A2 A3

- for domain B to peers B1 B2 B3 B4 B5

- for all other domains to peers O1 O2

The load-balancing method within a group could be different, as some 
domains need a user to always reach the same peer, while other domains 
could simply use round-robin balancing.


I can't find how to group peers A1 A2 A3 into group A, peers B1 B2..B5 
into group B, and O1 O2 into group O, and then set the cache_peer_access 
to the needed group.


Can you help? Do you have similar examples?

Thanks

Patrick


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] affinity session load balancing

2015-11-16 Thread Patrick Chemla

Hi Antony,

Thanks for your answer.

Actually, I am doing load balancing with sourcehash, so on the source IP.
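
For context, my current setup is essentially this sketch (hypothetical addresses):

cache_peer 10.0.0.11 parent 80 0 no-query originserver sourcehash name=BACK1
cache_peer 10.0.0.12 parent 80 0 no-query originserver sourcehash name=BACK2
# sourcehash keys only on the client source IP, so one dominant
# source IP always hashes to the same single backend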

The problem is that about 80% of the clients come from the same IP, so I 
have one highly loaded backend while the others are sleeping.


So, whatever you call it (on HAProxy they call it session-affinity LB), 
my need is to use round-robin load balancing but, very important, 
each user should always be directed to the same backend.


Can we do that with squid, while avoiding a user login on squid (userhash 
is not convenient)?


Patrick

On 16/11/2015 11:41, Antony Stone wrote:

On Monday 16 November 2015 at 10:35:39, Patrick Chemla wrote:


Hi,

I have been using squid for years, maybe with basic features, and today I
have a problem with an app where I need to manage multiple backends and
be sure that a user is always sent to the same one, because the app
writes to local disk, and I have 80% of users coming from the same IP.

Is this Squid operating in accelerator mode (in front of the server/s) or in
proxying mode (being used by the clients)?


So I need to load-balance, but not on the source IP, and I can't have a
login on squid to identify each user, because it would create a double
login procedure on top of the application login.

How does the app distinguish between different clients *without* Squid being
involved?


Is there a way for squid to recognize a new connection, maybe from the
same IP, and load-balance it to any backend using round-robin? Some
session-affinity load balancing?

The first thing needed to answer that is a definition of "session".


Regards,

Antony.



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] affinity session load balancing

2015-11-16 Thread Patrick Chemla

Hi,

I have been using squid for years, maybe with basic features, and today I 
have a problem with an app where I need to manage multiple backends and 
be sure that a user is always sent to the same one, because the app 
writes to local disk, and I have 80% of users coming from the same IP.


So I need to load-balance, but not on the source IP, and I can't have a 
login on squid to identify each user, because it would create a double 
login procedure on top of the application login.


Is there a way for squid to recognize a new connection, maybe from the 
same IP, and load-balance it to any backend using round-robin? Some 
session-affinity load balancing?


Thanks
Patrick

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-15 Thread Patrick Chemla

Hi Eliezer,

I have disabled SELinux; it doesn't help.
The shm partition has been mounted OK from the beginning.

I can't find basic_data.sh, neither on my disk after installing the squid 
package or sources, nor using Google.


I am trying to compile squid-3.4.6 to add the rock cache type (not 
included by default), but I get errors with the crypto and ssl libraries:


Making all in anyp
make[3]: Entering directory '/usr/local/src/squid-3.4.6/src/anyp'
/bin/sh ../../libtool  --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H  
-I../.. -I../../include -I../../lib -I../../src -I../../include   
-I../../libltdl   -Wall -Wpointer-arith -Wwrite-strings -Wcomments 
-Wshadow -Werror -pipe -D_REENTRANT -O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches  -m64 -mtune=generic 
-fpie -march=native -std=c++11 -c -o PortCfg.lo PortCfg.cc
libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../lib -I../../src -I../../include -I../../libltdl -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe 
-D_REENTRANT -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches 
-m64 -mtune=generic -march=native -std=c++11 -c PortCfg.cc  -fPIC -DPIC 
-o .libs/PortCfg.o

In file included from ../../src/anyp/PortCfg.h:9:0,
 from PortCfg.cc:2:
../../src/ssl/gadgets.h:32:9: error: 'SSL_METHOD' does not name a type
 typedef SSL_METHOD * ContextMethod;
 ^
In file included from ../../src/anyp/PortCfg.h:9:0,
 from PortCfg.cc:2:
../../src/ssl/gadgets.h:76:40: error: variable or field 'X509_free_cpp' 
declared void

 CtoCpp1(X509_free, X509 *)
^
../../src/ssl/gadgets.h:76:40: error: 'X509' was not declared in this scope
../../src/ssl/gadgets.h:76:47: error: 'a' was not declared in this scope
 CtoCpp1(X509_free, X509 *)
   ^
../../src/ssl/gadgets.h:77:24: error: 'X509' was not declared in this scope
 typedef LockingPointer<X509, X509_free_cpp, CRYPTO_LOCK_X509> X509_Pointer;

^
../../src/ssl/gadgets.h:77:30: error: 'X509_free_cpp' was not declared 
in this scope
 typedef LockingPointer<X509, X509_free_cpp, CRYPTO_LOCK_X509> X509_Pointer;


Do you have an idea?
Thanks
Patrick

On 14/07/2014 at 22:03, Eliezer Croitoru wrote:

On 07/14/2014 08:42 PM, Patrick Chemla wrote:

Hey Eliezer,

Happy to hear from you.

What do you mean by rock as a cache_dir?


Squid uses cache_dir to store objects on disk.
If you don't know what it is I will refer you to the configuration pages:
http://www.squid-cache.org/Doc/config/cache_dir/

Your basic issue is related to SHM and/or SELinux.
You can use the basic_data.sh script to get most of the needed 
information about your system and the issue.


You need to first disable selinux or use permissive mode.
Then make sure you have a SHM partition mounted.
Only then will squid work with SMP support.

Good Luck,
Eliezer




Re: [squid-users] Re: Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-15 Thread Patrick Chemla


Thanks for help.

The problem is that I can have several external IPs and hundreds of ports 
for each IP on the same box.


Up to now, I have been using virtual machines for the IPs and I route the 
ip:ports with iptables to the right VM (hundreds of ports each). There, 
one squid instance listens on 128 ports (the squid limit). It works very 
well.


Some customers want more power, so I need to give some VMs more CPUs 
(which I have), and run more than one squid process on the same hundreds 
of ports.


Designing a load-balanced configuration with iptables, or a 
frontend/backend setup with squid, for the hundreds of incoming ports 
will lead to thousands of ports inside.


It is possible, I think, but building the configurator is no tiny job. 
Also, configurations can change while in production, with squid -k 
reconfigure for thousands of ports.


Of course I can split the ports across separate squid instances and limit 
each port's traffic.


I need to think well about the solution.

Patrick

On 15/07/2014 at 00:24, babajaga wrote:

Besides SMP, there is still the old-fashioned option of multiple instances
of squid, in a sandwich config.
http://wiki.squid-cache.org/MultipleInstances

Besides the described port rotation, you can set up 3 squids, for example:
one frontend, just doing ACLs and request dispatching (carp), and 2
backends with the real caching.
This variant has the advantage of avoiding double caching, which might
happen in the port-rotation alternative.
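
E.g., a minimal sketch of the frontend in such a sandwich (hypothetical
ports; the backends run as separate instances on 4001/4002):

http_port 3128
# the frontend caches nothing itself, it only dispatches via CARP
cache deny all
cache_peer 127.0.0.1 parent 4001 0 carp no-query no-digest name=back1
cache_peer 127.0.0.1 parent 4002 0 carp no-query no-digest name=back2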







[squid-users] Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-14 Thread Patrick Chemla

Hi,

I have a multi-port squid config, running since version 3.1.19, now 
upgraded to 3.3.12. It works like a charm, but the traffic is reaching 
the limit of one CPU.


I want to use the SMP capabilities, with SMP workers, on my 8-CPU / 64 GB 
RAM Fedora 20 box.


I saw in the 
http://wiki.squid-cache.org/Features/SmpScale#What_can_workers_share.3F 
that workers can share http_ports, right?


When I run with workers 1, I can see the squid-1 process listening on the 
designated port with netstat.


When I run with workers greater than 1, I can see the processes squid-1, 
squid-2... squid-n with ps -ef|fgrep squid, but no process listening on 
any TCP port with netstat -apn (I do see all the processes listening on 
UDP ports).


I can't find any configuration example featuring the SMP workers 
capability for squid 3.3.12, including the http_port lines.
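
For reference, the minimal shape I am after would be something like this
sketch (as the replies below note, SMP workers need a shared rock
cache_dir; UFS/AUFS cache_dirs cannot be shared between workers):

# all workers share the same listening port
workers 4
http_port 3128
cache_dir rock /var/spool/squid/rock 4096 max-size=32768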


Could anyone help me there?

Thanks a lot
Patrick



Re: [squid-users] Problem to set up multi-cpu multi-ports squid 3.3.12

2014-07-14 Thread Patrick Chemla
 UNSTARTED 8 DNS Socket IPv6
2014/07/14 17:12:05 kid4| Open FD UNSTARTED 9 DNS Socket IPv4
2014/07/14 17:12:05 kid1| Shutting down...
2014/07/14 17:12:05 kid1| storeDirWriteCleanLogs: Starting...
2014/07/14 17:12:05 kid1|   Finished.  Wrote 0 entries.
2014/07/14 17:12:05 kid1|   Took 0.00 seconds (  0.00 entries/sec).
CPU Usage: 0.046 seconds = 0.034 user + 0.012 sys
Maximum Resident Size: 64176 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:5812 KB
Ordinary blocks: 5795 KB  3 blks
Small blocks:   0 KB 21 blks
Holding blocks:  9744 KB  5 blks
Free Small blocks:  1 KB
Free Ordinary blocks:  16 KB
Total in use:   15539 KB 267%
Total free:17 KB 0%
2014/07/14 17:12:05 kid1| Logfile: closing log 
stdio:/var/log/squid/access.log

2014/07/14 17:12:05 kid1| Open FD UNSTARTED 8 DNS Socket IPv6
2014/07/14 17:12:05 kid1| Open FD UNSTARTED 9 DNS Socket IPv4
2014/07/14 17:12:05 kid4| Squid Cache (Version 3.3.12): Exiting normally.
2014/07/14 17:12:05 kid1| Squid Cache (Version 3.3.12): Exiting normally.
2014/07/14 17:12:05 kid2| Squid Cache (Version 3.3.12): Exiting normally.

Do you see anything strange?

Patrick
On 14/07/2014 at 18:43, Eliezer Croitoru wrote:

Hey There,

It depends. If you are using a UFS/AUFS cache_dir, you cannot use SMP 
with it.
You will need to use rock, and only rock, as a cache_dir for the time 
being.
You need to run squid -k parse to make sure your settings make sense 
to squid.
Other than that, you should look at cache.log to see if there is any 
output that makes sense of the result.

If you want us to look at your squid.conf, share it.

I will not spoil anything about it; Fedora is a powerful system, but not 
every sysadmin will like to work with it for long, due to the short life 
cycle of this OS.


Eliezer

On 07/14/2014 06:30 PM, Patrick Chemla wrote:

Hi,

I have a multi-port squid config, running since version 3.1.19, now
upgraded to 3.3.12. It works like a charm, but the traffic is reaching
the limit of one CPU.

I want to use the SMP capabilities, with SMP workers, on my 8-CPU / 64 GB
RAM Fedora 20 box.

I saw in the
http://wiki.squid-cache.org/Features/SmpScale#What_can_workers_share.3F
that workers can share http_ports, right?

When I run with workers 1, I can see the squid-1 process listening on the
designated port with netstat.

When I run with workers greater than 1, I can see the processes squid-1,
squid-2... squid-n with ps -ef|fgrep squid, but no process listening on
any TCP port with netstat -apn (I do see all the processes listening on
UDP ports).

I can't find any configuration example featuring the SMP workers
capability for squid 3.3.12, including the http_port lines.

Could anyone help me there?

Thanks a lot
Patrick