RE: [squid-users] tcp_outgoing_mark + https

2012-12-13 Thread Sébastien WENSKE
Hi Eliezer,

I ran the tests, and first, there is no IP in the CONNECT request:

13/Dec/2012:07:30:13.508 240535 10.4.10.25 TCP_MISS/200 14882 CONNECT www.kernel.org:443 - HIER_DIRECT/www.kernel.org -

Now the debug:
In HTTP, I see the ACL:
2012/12/13 08:45:03.434 kid1| ACLList::matches: checking fibre
2012/12/13 08:45:03.434 kid1| ACL::checklistMatches: checking 'fibre'
2012/12/13 08:45:03.434 kid1| aclMatchDomainList: checking 'www.kernel.org'
2012/12/13 08:45:03.434 kid1| aclMatchDomainList: 'www.kernel.org' found
2012/12/13 08:45:03.434 kid1| ACL::ChecklistMatches: result for 'fibre' is 1
2012/12/13 08:45:03.434 kid1| aclmatchAclList: 0x7fff52f3eab0 returning true (AND list satisfied)
2012/12/13 08:45:03.434 kid1| ACLChecklist::markFinished: 0x7fff52f3eab0 checklist processing finished

But in HTTPS, nothing. Below, the complete log for a request to 
https://www.kernel.org:
2012/12/13 09:09:49.255 kid1| Acl.cc(321) matches: ACLList::matches: checking all
2012/12/13 09:09:49.255 kid1| Acl.cc(310) checklistMatches: ACL::checklistMatches: checking 'all'
2012/12/13 09:09:49.255 kid1| Ip.cc(571) match: aclIpMatchIp: '10.4.10.76:52320' found
2012/12/13 09:09:49.255 kid1| Acl.cc(312) checklistMatches: ACL::ChecklistMatches: result for 'all' is 1
2012/12/13 09:09:49.255 kid1| Checklist.cc(251) matchAclList: aclmatchAclList: 0x72c159e0 returning true (AND list satisfied)
2012/12/13 09:09:49.255 kid1| Checklist.cc(156) markFinished: ACLChecklist::markFinished: 0x72c159e0 checklist processing finished
2012/12/13 09:09:49.255 kid1| Acl.cc(321) matches: ACLList::matches: checking all
2012/12/13 09:09:49.255 kid1| Acl.cc(310) checklistMatches: ACL::checklistMatches: checking 'all'
2012/12/13 09:09:49.255 kid1| Ip.cc(571) match: aclIpMatchIp: '10.4.10.76:52321' found
2012/12/13 09:09:49.255 kid1| Acl.cc(312) checklistMatches: ACL::ChecklistMatches: result for 'all' is 1
2012/12/13 09:09:49.255 kid1| Checklist.cc(251) matchAclList: aclmatchAclList: 0x72c159e0 returning true (AND list satisfied)
2012/12/13 09:09:49.255 kid1| Checklist.cc(156) markFinished: ACLChecklist::markFinished: 0x72c159e0 checklist processing finished
2012/12/13 09:09:49.256 kid1| Checklist.cc(162) preCheck: ACLChecklist::preCheck: 0x54fde78 checking 'http_access allow swe'
2012/12/13 09:09:49.256 kid1| Acl.cc(321) matches: ACLList::matches: checking swe
2012/12/13 09:09:49.256 kid1| Acl.cc(310) checklistMatches: ACL::checklistMatches: checking 'swe'
2012/12/13 09:09:49.256 kid1| Ip.cc(571) match: aclIpMatchIp: '10.4.10.76:52320' NOT found
2012/12/13 09:09:49.256 kid1| Acl.cc(312) checklistMatches: ACL::ChecklistMatches: result for 'swe' is 0
2012/12/13 09:09:49.256 kid1| Checklist.cc(229) matchAclList: aclmatchAclList: async=0 nodeMatched=0 async_in_progress=0 lastACLResult() = 0 finished() = 0
2012/12/13 09:09:49.256 kid1| Checklist.cc(243) matchAclList: aclmatchAclList: 0x54fde78 returning (AND list entry awaiting an async lookup)
2012/12/13 09:09:49.256 kid1| Checklist.cc(162) preCheck: ACLChecklist::preCheck: 0x54fde78 checking 'http_access allow localhost'
2012/12/13 09:09:49.256 kid1| Acl.cc(321) matches: ACLList::matches: checking localhost
2012/12/13 09:09:49.256 kid1| Acl.cc(310) checklistMatches: ACL::checklistMatches: checking 'localhost'
2012/12/13 09:09:49.256 kid1| Ip.cc(571) match: aclIpMatchIp: '10.4.10.76:52320' NOT found
2012/12/13 09:09:49.256 kid1| Acl.cc(312) checklistMatches: ACL::ChecklistMatches: result for 'localhost' is 0
2012/12/13 09:09:49.256 kid1| Checklist.cc(229) matchAclList: aclmatchAclList: async=0 nodeMatched=0 async_in_progress=0 lastACLResult() = 0 finished() = 0
2012/12/13 09:09:49.256 kid1| Checklist.cc(243) matchAclList: aclmatchAclList: 0x54fde78 returning (AND list entry awaiting an async lookup)
2012/12/13 09:09:49.256 kid1| Checklist.cc(162) preCheck: ACLChecklist::preCheck: 0x54fde78 checking 'http_access allow manager localhost'
2012/12/13 09:09:49.256 kid1| Acl.cc(321) matches: ACLList::matches: checking manager
2012/12/13 09:09:49.256 kid1| Acl.cc(310) checklistMatches: ACL::checklistMatches: checking 'manager'
2012/12/13 09:09:49.256 kid1| RegexData.cc(70) match: aclRegexData::match: checking 'www.kernel.org:443'
2012/12/13 09:09:49.256 kid1| RegexData.cc(81) match: aclRegexData::match: looking for '(^cache_object://)'
2012/12/13 09:09:49.256 kid1| RegexData.cc(81) match: aclRegexData::match: looking for '(^https?://[^/]+/squid-internal-mgr/)'
2012/12/13 09:09:49.256 kid1| Acl.cc(312) checklistMatches: ACL::ChecklistMatches: result for 'manager' is 0
2012/12/13 09:09:49.256 kid1| Checklist.cc(229) matchAclList: aclmatchAclList: async=0 nodeMatched=0 async_in_progress=0 lastACLResult() = 0 finished() = 0
2012/12/13 09:09:49.256 kid1| Checklist.cc(243) matchAclList: aclmatchAclList: 0x54fde78 returning (AND list entry awaiting an async lookup)
2012/12/13 09:09:49.256 kid1| Checklist.cc(162) preCheck: ACLChecklist::preCheck: 0x54fde78 checking
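
For context, the configuration being tested has roughly this shape - a sketch only, with an illustrative ACL definition and mark value:

acl fibre dstdomain .kernel.org
tcp_outgoing_mark 0x10 fibre

So the question is why the fibre ACL is evaluated for the plain-HTTP request but never for the CONNECT one.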

Re: [squid-users] Squid3 extremely slow for some website cnn.com

2012-12-13 Thread Muhammed Shehata

Dear Amos,
- The interrelation: the logs are from two similar Squid servers that differ only in version. The client in both cases does not disconnect. The aborted may mean that Squid can't fetch this URL (a JavaScript file), but then I wonder why the other Squid can fetch it successfully.

- Here are the logs with timestamps:
squid2 on CentOS 5.2: 1355387935.418  7 x.x.x.x TCP_MISS/304 324 GET http://cdn.optimizely.com/js/128727546.js - DIRECT/23.50.196.211 text/javascript

squid3 on CentOS 6.3:
13/Dec/2012:10:39:05 +0200  20020 x.x.x.x TCP_MISS_ABORTED/000 0 GET http://cdn.optimizely.com/js/128727546.js - HIER_DIRECT/cdn.optimizely.com -
13/Dec/2012:10:39:25 +0200  20020 x.x.x.x TCP_MISS_ABORTED/000 0 GET http://cdn.optimizely.com/js/128727546.js - HIER_DIRECT/cdn.optimizely.com -



N.B. x.x.x.x is the IP of the same host, which fetches the website at the same time without disconnecting.


Mshehata
IT NS
On 12/13/2012 09:56 AM, Amos Jeffries wrote:


On 13/12/2012 8:04 p.m., Muhammed Shehata wrote:

Dear Eliezer,
I tried removing dans from the setup already, and I'm pretty sure 
that the issue is in Squid itself, as its logs declare.



Your logs presented in that first post declare that out of two requests:
* a client connected, delivered a request and disconnected before 
anything was returned.
* a client connected and requested the same URL which was delivered 
successfully.


The log lines were incomplete, so we don't know anything about timing 
and interrelations.


Amos


Re: [squid-users] Ideas for Squid statistics Web UI development

2012-12-13 Thread Marcello Romani

On 19/11/2012 01:05, George Machitidze wrote:

Hello

I've started development of open sourced Web UI for gathering stats
for Squid proxy server and need your help to clarify needs and
resources.

Where this came from:
Enterprises require auditing, reporting, configuration check/visibility, and statistics. Most of these things are easy to implement and provide in different ways, except reporting and stats. Additionally, the currently available solutions I've found don't meet some requirements for functionality and a nice interface, and their state of maintenance, future development, etc. is very unclear - ineffective, though still acceptable or sufficient for _some_ installations. If you know something that can do all this, please let me know.
So I've decided to write everything from scratch, though I may take some publicly-licensed parts from other projects.

Architecture:
The starting point is gathering stats; then we need to manipulate and store them; then we can add some regular jobs (I will avoid this); and then we need a way to view it all.

Gathering data
Available sources:
1. Logs, available via files or logging daemon (traffic, errors)
2. Stats available via SNMP  (status/counters/config)
3. Cache Manager (status/counters/config)
4. OS-level things (footprint, processes, disk, cpu etc)
[anything else?]

This part will be done by a local logging daemon; I won't use file logging, for well-known reasons.
BTW, a good starting point is log_mysql_daemon by Marcello, available under the GPL and written in Perl. It's effective enough to start with and can load any data into a DB - it's simple, and it took me 10-15 minutes to analyze the code, set it up, and configure it.



Hi, I'm the author of log_mysql_daemon.

As time permits, I'm willing to help. At the time I wrote it, I had some 
ideas (well, mostly questions, in fact) about how to complement that 
10-liner with some decent DB / Perl / whatever programming to deal with 
the long-term data-retention and analysis issues that the 
one-log-line-one-table-row approach poses.
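
For anyone following along, the daemon interface is simple: Squid pipes records to the helper's stdin, and the first byte of each line is a command - per the bundled logfile-daemon, 'L' appends a log line, 'R' rotates, 'O' reopens, 'T' truncates, 'F' flushes. A minimal file-writing sketch in C++ (the output path is a placeholder; a MySQL daemon would parse the 'L' payload and INSERT instead):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    const std::string path = "/var/log/squid/daemon-access.log"; // placeholder
    std::ofstream out(path, std::ios::app);
    std::string line;
    while (std::getline(std::cin, line)) {
        if (line.empty())
            continue;
        const std::string data = line.substr(1); // payload after the command byte
        switch (line[0]) {
        case 'L':            // append one access.log record
            out << data << '\n';
            break;
        case 'F':            // flush buffered data
            out.flush();
            break;
        case 'R':            // rotate
        case 'O':            // reopen
            out.close();
            out.open(path, std::ios::app);
            break;
        default:             // 'T' and the startup options are ignored here
            break;
        }
    }
    return 0;
}

Squid would be pointed at it with something like: access_log daemon:/var/log/squid/daemon-access.log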


--
Marcello Romani


Re: [squid-users] ssl interception causes zero byte replies sometimes

2012-12-13 Thread Alex Rousskov
On 12/11/2012 02:40 AM, Sean Boran wrote:
 Hi,
 
 It happens a few times daily that, on submitting a login request to
 sites like Atlassian Confluence (not just at Atlassian, but elsewhere
 too), or Redmine, the user gets a screen "The requested URL could
 not be retrieved" with a zero-sized reply.
 
 It does not happen every time.
 If one refreshes the browser, it is OK.
 If the destination is excluded from SSL interception, it does not happen.

Yes, this is a known issue with bumped requests and persistent
connection races. Our patch for this bug is available at
http://article.gmane.org/gmane.comp.web.squid.devel/19190

and we are also working on a better approach to address the same bug:
http://article.gmane.org/gmane.comp.web.squid.devel/19256
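
Until one of those lands, the workaround mentioned above - excluding the destination from interception - would look roughly like this sketch (mode-based ssl_bump syntax from newer releases; the domain is a placeholder):

acl nobump dstdomain .example.com
ssl_bump none nobump
ssl_bump client-first all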


HTH,

Alex.





RE: [squid-users] 3.2.4 build problem

2012-12-13 Thread Alan Lehman
 On 13.12.2012 11:48, Alan Lehman wrote:
  On 8/12/2012 11:02 a.m., Alan Lehman wrote:
   I'm having trouble building 3.2.4 on RHEL5.
  
   I configured with options :
   --enable-ssl --enable-useragent-log --enable-referer-log
   --with-filedescriptors=8192 --enable-delay-pools
  
   make all says:
   ext_file_userip_acl.cc: In function 'int main(int, char**)':
   ext_file_userip_acl.cc:254: error: 'errno' was not declared in this scope
   make[3]: *** [ext_file_userip_acl.o] Error 1
  
   Any ideas?
 
  Use the daily update package please. This was fixed a few hours after release.
 
  When I have time to confirm how that got past testing and that there
  are no others hiding anywhere else there will be a new release.
 
  HTH
  Amos
 
  Still having trouble building. I am trying 3.2.5-20121212-r11739, and
  it gives me the following errors. I've tried removing all the
  configure options, but the results look about the same regardless.
 
  Thanks for any help.
 
 
 
  /home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'

 What version of GCC/G++ are you using?

 Amos

4.1.2
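
For what it's worth, the failing symbol can be probed outside Squid. If the snippet below also fails to link, the compiler target lacks the 4-byte GCC atomic builtins (on 32-bit x86 that usually means building with -march=i486 or newer) - a guess at the cause, not a confirmed fix:

// atomic_test.cc - build with: g++ atomic_test.cc
int counter = 0;

int main() {
    // the 4-byte variant resolves to __sync_fetch_and_add_4 at link time
    return __sync_fetch_and_add(&counter, 1);
}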





Re: [squid-users] Squid3 extremely slow for some website cnn.com

2012-12-13 Thread Eliezer Croitoru

Hey Muhammed,

From my point of view: if the error you are having is Squid's fault, then I and anyone else using the same version and build should have it too, am I right?


So I am using Squid 3.2.1-3 and 3.3.0.1, and I don't have any of the issues you are talking about.


If you can come up with a clean CentOS 6.3 setup in a VM that shows the same side effect you are talking about, and I can reproduce the result and see it with my own eyes on my network line, then I would assume we have an issue related to Squid, or to a specific build of Squid under specific conditions. Unless you provide enough data to make it a fact, nobody can help you with it.


If it's CentOS 6.3, you can probably run a small VM on a desktop and get the same results.


Regards,
Eliezer

On 12/13/2012 9:56 AM, Amos Jeffries wrote:

On 13/12/2012 8:04 p.m., Muhammed Shehata wrote:

Dear Eliezer,
I tried removing dans from the setup already, and I'm pretty sure
that the issue is in Squid itself, as its logs declare.



Your logs presented in that first post declare that out of two requests:
* a client connected, delivered a request and disconnected before
anything was returned.
* a client connected and requested the same URL which was delivered
successfully.

The log lines were incomplete, so we don't know anything about timing
and interrelations.

Amos


--
Eliezer Croitoru
https://www1.ngtech.co.il
sip:ngt...@sip2sip.info
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] websites blocked using wccpv2 and squid2.7stable9

2012-12-13 Thread Mustafa Raji
Hi,
I have a problem with certain websites. I'm using a Cisco router connected to Squid with a WCCPv2 configuration. The problem: there is one website I can't open when the traffic goes through the cache server; if I use the connection without the Squid box, I can reach the website normally. I then tried connecting a normal PC using the same cache IP address, and it can get the website, so I can be sure the cause is not the cache server (Squid) itself - but I can't find the problem, so I can't give a solution. Has anyone faced the same problem and can help me?

Thanks,
Mustafa
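
For comparison, a typical Squid 2.7 WCCPv2 client section has this shape - a sketch; the router address is a placeholder and method 1 means GRE encapsulation:

wccp2_router 192.168.1.1
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_service standard 0

If the same-IP PC test above is sound, one common culprit for single sites failing only via WCCP is MTU/fragmentation trouble on the GRE tunnel carrying the redirected traffic - a guess, but worth checking.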


Re: [squid-users] Squid3 extremely slow for some website cnn.com

2012-12-13 Thread Amos Jeffries

On 13/12/2012 9:41 p.m., Muhammed Shehata wrote:

Dear Amos,
- The interrelation: the logs are from two similar Squid servers that differ only in version. The client in both cases does not disconnect. The aborted may mean that Squid can't fetch this URL (a JavaScript file), but then I wonder why the other Squid can fetch it successfully.

- Here are the logs with timestamps:
squid2 on CentOS 5.2: 1355387935.418  7 x.x.x.x TCP_MISS/304 324 GET http://cdn.optimizely.com/js/128727546.js - DIRECT/23.50.196.211 text/javascript

squid3 on CentOS 6.3:
13/Dec/2012:10:39:05 +0200  20020 x.x.x.x TCP_MISS_ABORTED/000 0 GET http://cdn.optimizely.com/js/128727546.js - HIER_DIRECT/cdn.optimizely.com -
13/Dec/2012:10:39:25 +0200  20020 x.x.x.x TCP_MISS_ABORTED/000 0 GET http://cdn.optimizely.com/js/128727546.js - HIER_DIRECT/cdn.optimizely.com -




Aha, thanks, this makes more sense: 7 ms with a response versus 20 seconds with nothing returned.


Although for better debugging you should get the squid-3 to log the upstream server IP address. It could be a problem with which IP Squid is connecting to.


With 3.2, at debug_options 11,2 you get a cache.log HTTP trace of what is going on between Squid, optimizely, and the client. I suspect optimizely is not responding when a request is delivered to them - but you need to track that down.
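
For reference, that suggestion goes into squid.conf as below; ALL,1 keeps every other debug section at the default level while section 11 (the HTTP traffic section) logs at level 2:

debug_options ALL,1 11,2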


Amos


[squid-users] Port allow question

2012-12-13 Thread Paras pradhan
Hi,

I have 0-65535 in Safe_ports and it is allowed.

acl Safe_ports port 0-65535
http_access deny !Safe_ports



But I am seeing this in access.log.

--
1355433138.267  0 192.168.0.2 TCP_DENIED/403 3413 CONNECT
192.168.0.2:35357 - NONE/- text/html
--

How do we allow 35357?

Thanks!
Paras.


Fw: [squid-users] access_log, squid and NTLM : HaProxy

2012-12-13 Thread David Touzeau

Dear

I'm using HAProxy in order to load-balance two Squid 3.2.x instances connected to Active Directory with NTLM.

The NTLM is correctly forwarded to Squid, but in access_log Squid does not write the NTLM session username.

In debug mode, I correctly see the NTLM forwarded by HAProxy, e.g.:
Host: www.google-analytics.com
Proxy-Connection: keep-alive
Proxy-Authorization: NTLM 
TlRMTVNTUAADGAAYAJAsASwBqBIAEgBYEAAQAGoWABYAegDUAQAABYKIogYC8CMPHk8Ya0Be7brddwFRGwsVREEARgBFAE8ATgBMAEkATgBFAGQAdABvAHUAegBlAGEAdQAyADUAMgBEADgAMAAxAFQAQQBPAEwAV+knjgSCxCFS6pn9EnoeWQEBlojs7RzXzQGZ+wOfrEADZwACABIAQQBGAEUATwBOAEwASQBOAEUAAQAWADAAMAAwAFMATAAwADQAUABSAE8AWAAEABoAYQBmAGUAbwBuAGwAaQBuAGUALgBuAGUAdAADADIAMAAwADAAUwBMADAANABQAFIATwBYAC4AYQBmAGUAbwBuAGwAaQBuAGUALgBuAGUAdAAIADAAMCAAAK6dfzwK8q0yctw3nb8Es7vizb1e17w0TPPsIlbX/BvHCgAQAAAJACgASABUAFQAUAAvADEAMAAuADMAMgAuADAALgAyADAAOgAzADEAMgA4
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11

Accept: */*
Referer: http://www.google.com
Accept-Encoding: gzip,deflate,sdch
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

When browsers connect directly to Squid, usernames are correctly written to access_log.
Why does Squid not write usernames to access_log when going through the HAProxy load balancer?


Best regards





[squid-users] Reverse Proxy not re-encrypt SSL

2012-12-13 Thread David Touzeau


Dear

I'm using Squid 3.2.4 in reverse (accelerator) mode with multiple SSL web servers.

I need to force Squid not to use the default certificate for specific target web servers, and I do not know how to do it...

I keep going in circles on this issue...

Example:
http_port 80 accel vhost
https_port 443 accel cert=/etc/squid3/ssl/cacert.pem key=/etc/squid3/ssl/privkey.pem vhost



For this cache_peer I need Squid to simply forward the SSL requests (CONNECT method) to the remote server without re-encrypting the SSL, in order to let the remote web server establish the SSL tunnel itself.


cache_peer 10.32.0.10 parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER name=ssldef


Is it possible to do that?
Or, when an accel port 443 is configured, is re-encryption mandatory for all SSL web sites?


Best regards



Re: [squid-users] Port allow question

2012-12-13 Thread Amos Jeffries

On 14/12/2012 11:53 a.m., Paras pradhan wrote:

Hi,

I have 0-65535 in Safe_ports and it is allowed.

acl Safe_ports port 0-65535
http_access deny !Safe_ports


This is not an ALLOW. This is a not-DENIED, otherwise known as 'check the next rule'.



NP: there are a number of ports in the 0-1024 range which are seriously risky to permit HTTP connections to - the SMTP and FTP ports, for example.







But I am seeing this in access.log.

--
1355433138.267  0 192.168.0.2 TCP_DENIED/403 3413 CONNECT
192.168.0.2:35357 - NONE/- text/html
--

How do we allow 35357?



This is a CONNECT request, so acl SSL_ports port 35357 should do it. But consider carefully why the client needs a binary tunnel opened to that destination, and whether letting it is a good idea.
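
In terms of the stock configuration, that means adding the port to the SSL_ports list checked by the default CONNECT rule - a sketch:

acl SSL_ports port 443
acl SSL_ports port 35357
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports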


Amos


[squid-users] Custom error page for HTTP status 400-404, 500

2012-12-13 Thread Paul Ch
Hi,

I am running a Squid 3.2.1 server as a reverse proxy. I have several
Microsoft Windows IIS servers as cache_peers.

I am trying to set up a custom error page for various HTTP status codes
such as 404 and 500. This is the relevant extract from my squid.conf
file:

#squid config extract#

acl denied_status http_status 400-404 500 502 503

#Production JC
cache_peer api.mydomain.com parent 443 0 no-query originserver ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on name=jc login=PASSTHRU
acl sites_jc dstdomain api.mydomain.com
cache_peer_access jc deny sites_jc denied_status
cache_peer_access jc allow sites_jc serviceHours1
acl http proto http
acl https proto https

#EOF#

If I try to access api.mydomain.com/nonexistant, I still see the IIS 404
error page rather than the Squid access-denied error.

Any ideas?

Cheers.

-- 
http://www.fastmail.fm - A fast, anti-spam email service.



Re: [squid-users] Custom error page for HTTP status 400-404, 500

2012-12-13 Thread Amos Jeffries

On 14/12/2012 5:41 p.m., Paul Ch wrote:

Hi,

I am running a Squid 3.2.1 server as a reverse proxy. I have several
Microsoft Windows IIS servers as cache_peers.

I am trying to set up a custom error page for various HTTP status codes
such as 404 and 500. This is the relevant extract from my squid.conf
file:

#squid config extract#

acl denied_status http_status 400-404 500 502 503

#Production JC
cache_peer api.mydomain.com parent 443 0 no-query originserver ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on name=jc login=PASSTHRU
acl sites_jc dstdomain api.mydomain.com
cache_peer_access jc deny sites_jc denied_status
cache_peer_access jc allow sites_jc serviceHours1
acl http proto http
acl https proto https

#EOF#

If I try to access api.mydomain.com/nonexistant, I still see the IIS 404
error page rather than the Squid access-denied error.

Any ideas?


cache_peer_access determines whether the request is allowed to be serviced by the peer.


How do you expect the future result from the peer to be used to determine whether to fetch it from there?


Use http_reply_access instead.
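
Applied to the config above, that would look something like this sketch; the deny_info line is optional, and its template name is a placeholder (without it, Squid serves its generic access-denied page):

acl denied_status http_status 400-404 500 502 503
http_reply_access deny sites_jc denied_status
http_reply_access allow all
deny_info ERR_CUSTOM_ERROR denied_status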

Amos


Re: [squid-users] Custom error page for HTTP status 400-404, 500

2012-12-13 Thread Paul Ch
Thanks Amos, this works perfectly.

So cache_peer_access can block the request from even touching the peer,
whereas http_reply_access blocks it after it's been processed by
the peer. Makes sense.

Cheers!

-- 
  Paul Ch
  sima...@operamail.com

On Fri, Dec 14, 2012, at 04:57 AM, Amos Jeffries wrote:
 On 14/12/2012 5:41 p.m., Paul Ch wrote:
  Hi,
 
 I am running a Squid 3.2.1 server as a reverse proxy. I have several
 Microsoft Windows IIS servers as cache_peers.

 I am trying to set up a custom error page for various HTTP status codes
 such as 404 and 500. This is the relevant extract from my squid.conf
 file:
 
  #squid config extract#
 
  acl denied_status http_status 400-404 500 502 503
 
  #Production JC
  cache_peer api.mydomain.com parent 443 0 no-query originserver ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on name=jc login=PASSTHRU
  acl sites_jc dstdomain api.mydomain.com
  cache_peer_access jc deny sites_jc denied_status
  cache_peer_access jc allow sites_jc serviceHours1
  acl http proto http
  acl https proto https
 
  #EOF#
 
  If I try to access api.mydomain.com/nonexistant, I still see the IIS 404
  error page rather than the Squid access-denied error.
 
  Any ideas?
 
 cache_peer_access determines whether the request is allowed to be
 serviced by the peer.
 
 How do you expect the future result from the peer to be used to
 determine whether to fetch it from there?
 
 Use http_reply_access instead.
 
 Amos

-- 
http://www.fastmail.fm - A fast, anti-spam email service.



Re: [squid-users] Reverse Proxy not re-encrypt SSL

2012-12-13 Thread Jakob Curdes

On 14.12.2012 01:23, David Touzeau wrote:




For this cache_peer I need Squid to simply forward the SSL requests (CONNECT method) to the remote server without re-encrypting the SSL, in order to let the remote web server establish the SSL tunnel itself.

Is it possible to do that?
Or, when an accel port 443 is configured, is re-encryption mandatory for all SSL web sites?
If you do not decrypt the packets, you cannot see what is inside. Squid is an HTTP proxy; if it does not decrypt the packet, it will never see a CONNECT or any other HTTP command...
What you want is packet forwarding at the firewall level - in other words, destination network address translation (DNAT). But this means you are exposing the backend HTTPS server, with its operating system's network stack, directly to the outside.
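
On Linux, that kind of pass-through is a NAT rule rather than a Squid setting - a sketch, assuming iptables and the backend address from the earlier post:

iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.32.0.10:443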



HTH, Jakob Curdes