Re: [squid-users] Odd problem with the new squid

2012-09-21 Thread turgut kalfaoğlu

Many thanks, Amos. I'm reading about it now.

PS: I tried updating my squid to the CVS version; however, it does not 
compile under CentOS 6:
IPAddress.cc: In member function ‘void 
IPAddress::GetAddrInfo(addrinfo*&, int) const’:

IPAddress.cc:819: error: suggest parentheses around ‘&&’ within ‘||’

-turgut


On 09/22/2012 03:43 AM, Amos Jeffries wrote:

On 21/09/2012 11:37 p.m., turgut kalfaoğlu wrote:

Hi there. On a server I have squid-3.2.1-20120906-r11653 running.

The server uses IPv6 on a hamachi VPN, and regular IPv4 for the internet.

Squid is configured to listen only to ipv6 using some line like:
http_port [1610:ab::225:e1ee]:1399

Whenever a client uses squid to browse, he or she gets a 3- or 4-digit 
hex number embedded in the web page.
If the page is short, the number appears only at the top. If the page is 
long, the number repeats at regular intervals.
It's as if it's some status code, or maybe a length code. I can't 
figure it out.


However, it corrupts the page, and corrupts the data if it happens to be 
binary data.


Any ideas where these numbers are coming from?


This sounds like chunked encoding being sent without the encoding 
headers.


Amos
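
For reference, a chunked HTTP/1.1 reply interleaves hex chunk-size lines 
with the payload; if the "Transfer-Encoding: chunked" header goes missing 
while the body stays chunked, the client renders those size lines, which 
matches the 3- or 4-digit hex numbers described above. A minimal sketch of 
the wire format, with placeholder sizes and content:

  HTTP/1.1 200 OK
  Content-Type: text/html
  Transfer-Encoding: chunked

  1a4
  ...first 0x1a4 bytes of the page...
  3ff
  ...next 0x3ff bytes of the page...
  0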






Re: [squid-users] Cache Manager working on Apache server as ICP sibling

2012-09-21 Thread Amos Jeffries

On 22/09/2012 6:51 a.m., Eliezer Croitoru wrote:

On 9/21/2012 10:23 AM, Amos Jeffries wrote:

Apache is not a server which can maintain a proxy cache. Why are you
making it a sibling (potential alternative *cache*) instead of a parent
(potential data *source*)?

Amos

Because some people think it gives them what they need.



Which is why I asked "why", to figure out if they were mistaken or if 
that is a missing feature.


Amos


Re: [squid-users] squid_swap.log

2012-09-21 Thread Amos Jeffries

On 22/09/2012 3:17 a.m., Eliezer Croitoru wrote:

On 9/21/2012 1:20 PM, aa ss wrote:

Hello.

  I'm running squid 2.6.STABLE23 integrated with Websense, so I need 
to run this version and cannot upgrade.


That has not been true for several years.

  I read lots of documentation and lots of posts about this 
problem, without finding a real solution.

  I have 3 separate disks acting as cache.
  The problem is that when these 3 disks reach the storage limit imposed 
by squid's replacement policy (about 92%), the 3 squid_swap.log files 
grow indefinitely, filling all disk space.
  To solve this I need to stop squid, delete all cache and swap 
files, and rebuild.



  Is there a solution for my problem?

OK, filling, but with what message?
What about the cache.log file?
And how exactly does Websense force you to use such an old squid version?


The old Websense filters tried to operate through the squid-2.5/2.6 
re-write interfaces and required some patching of 2.6 to work right. They 
did not provide patches for 2.7 or 3.0 support, and that fact was 
documented quite vocally at the time.


Websense's current services provide ICAP interfaces, which means squid-3.1 
and later can easily be plugged in that way, the same as almost any other 
AV and filtering vendor's service.


Amos
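
As a rough illustration of that ICAP route in squid.conf (the service 
name, host, port and ICAP path here are placeholders rather than the real 
Websense endpoint, so check the vendor documentation), squid-3.1 or later 
would be pointed at the filter with something like:

  icap_enable on
  icap_service ws_filter reqmod_precache bypass=off icap://filter.example.local:1344/reqmod
  adaptation_access ws_filter allow all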


Re: [squid-users] Odd problem with the new squid

2012-09-21 Thread Amos Jeffries

On 21/09/2012 11:37 p.m., turgut kalfaoğlu wrote:

Hi there. On a server I have squid-3.2.1-20120906-r11653 running.

The server uses IPv6 on a hamachi VPN, and regular IPv4 for the internet.

Squid is configured to listen only to ipv6 using some line like:
http_port [1610:ab::225:e1ee]:1399

Whenever a client uses squid to browse, he or she gets a 3- or 4-digit 
hex number embedded in the web page.
If the page is short, the number appears only at the top. If the page is 
long, the number repeats at regular intervals.
It's as if it's some status code, or maybe a length code. I can't 
figure it out.


However, it corrupts the page, and corrupts the data if it happens to be 
binary data.


Any ideas where these numbers are coming from?


This sounds like chunked encoding being sent without the encoding headers.

Amos



Re: [squid-users] Cache Manager working on Apache server as ICP sibling

2012-09-21 Thread Eliezer Croitoru

On 9/21/2012 10:23 AM, Amos Jeffries wrote:

Apache is not a server which can maintain a proxy cache. Why are you
making it a sibling (potential alternative *cache*) instead of a parent
(potential data *source*)?

Amos

Because some people think it gives them what they need.

Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Chaining url_rewrite_programs

2012-09-21 Thread Eliezer Croitoru

On 9/21/2012 7:19 PM, Eric Nichols wrote:

Can we chain using a combination of the zapchain script and the
custom.pl example from this link
http://www.powershelladmin.com/wiki/Linux_squid_proxy_url_rewriting_or_redirection
by specifying this in the squid.conf?

url_rewrite_program /opt/zapchain/zapchain /opt/custom/custom.pl
  /opt/websense/websense
or
url_rewrite_program (/opt/zapchain/zapchain /opt/custom/custom.pl
  /opt/websense/websense)

Can anyone recommend another approach?

No, you can't do it this way.
The more reasonable way to handle these services is ICAP or external_acl.

Using squid 2.6 is fine and everything, but it's really old.

You can use these same scripts on any newer version of squid, so you can 
migrate to at least 3.1 if you don't want to jump to the latest 3.2.1 
stable.


I suggest you take a look at qlproxy, which offers a lot via ICAP.

If you are paying for Websense it would be sensible to talk with the 
company and ask to migrate from url_rewrite to external_acl.


You can write a wrapper that chains all of these if you want, but I really 
think it's a bad idea for performance.


Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il
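
For illustration of the external_acl route Eliezer mentions, a minimal 
squid.conf sketch might look like the following; the helper path, format 
fields and children count are placeholder assumptions, and the helper must 
answer OK or ERR for each lookup it is handed:

  external_acl_type url_filter children=5 %SRC %URI /opt/custom/custom_acl.pl
  acl blocked_by_filter external url_filter
  http_access deny blocked_by_filter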


Re: [squid-users] squid_swap.log

2012-09-21 Thread Eliezer Croitoru

On 9/21/2012 1:20 PM, aa ss wrote:

Hello.

  I'm running squid 2.6.STABLE23 integrated with Websense, so I need to run 
this version and cannot upgrade.
  I read lots of documentation and lots of posts about this problem, without 
finding a real solution.
  I have 3 separate disks acting as cache.
  The problem is that when these 3 disks reach the storage limit imposed by 
squid's replacement policy (about 92%), the 3 squid_swap.log files grow 
indefinitely, filling all disk space.
  To solve this I need to stop squid, delete all cache and swap files, and 
rebuild.


  Is there a solution for my problem?

OK, filling, but with what message?
What about the cache.log file?
And how exactly does Websense force you to use such an old squid version?
What you can do if you want to upgrade is to use the 2.6 as a cache_peer 
with no cache at all.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il
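
One possible reading of that suggestion, sketched in squid.conf (address 
and ports are placeholders): put a newer squid in front to do the caching 
and keep the Websense-patched 2.6 behind it purely as a filtering hop, 
with caching disabled on the 2.6 box so its swap logs stop growing.

On the newer front squid:
  cache_peer 192.0.2.10 parent 8080 0 no-query default
  never_direct allow all

On the old 2.6 box:
  cache deny all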


Re: [squid-users] Can't access facebook.com through squid 3.1.19

2012-09-21 Thread Jose-Marcio Martins da Cruz


Sorry for top posting...

I'm having problems here too with Facebook. Sometimes it works and sometimes 
it doesn't. The behaviour looks like a "timeout".


A simple experiment shows that with or without a proxy, I have the same problem.

So the problem may come from something other than squid.




Marcello Romani wrote:

Il 17/09/2012 19:20, Wilson Hernandez ha scritto:

Hello.

As of a couple of days ago I've been experiencing a very strange problem
with squid 3.1.19: Facebook doesn't work!

Every site works well except FB. I checked the access.log file while
accessing FB and nothing about FB shows. At first I thought it might be
a DNS problem, but it is not. I checked cache.log and nothing was there either.

I re-installed version 3.1.19 and still can't get access to FB.

Any idea what could be happening?

Thanks in advance for your time and help.




I don't use facebook but it seems to me it's forcing https even if initially 
accessed via http. Check if squid.conf has
some config line that affects https access.

HTH




--

 Envoyé de ma machine à écrire.



Re: [squid-users] Can't access facebook.com through squid 3.1.19

2012-09-21 Thread Marcello Romani

Il 17/09/2012 19:20, Wilson Hernandez ha scritto:

Hello.

As of a couple of days ago I've been experiencing a very strange problem
with squid 3.1.19: Facebook doesn't work!

Every site works well except FB. I checked the access.log file while
accessing FB and nothing about FB shows. At first I thought it might be
a DNS problem, but it is not. I checked cache.log and nothing was there either.

I re-installed version 3.1.19 and still can't get access to FB.

Any idea what could be happening?

Thanks in advance for your time and help.




I don't use facebook but it seems to me it's forcing https even if 
initially accessed via http. Check if squid.conf has some config line 
that affects https access.


HTH

--
Marcello Romani
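
The stock squid.conf rules that most commonly affect HTTPS are the CONNECT 
ones; as a starting point it is worth checking that the defaults (or 
something equivalent) are still present and that no later rule denies 
CONNECT to Facebook's addresses:

  acl SSL_ports port 443
  acl CONNECT method CONNECT
  http_access deny CONNECT !SSL_ports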


RE: [squid-users] Cache Manager working on Apache server as ICP sibling

2012-09-21 Thread Andrew Krupiczka
Thanks Amos,

Our system is a custom product design with some special requirements, so we 
ended up with such a configuration and a somewhat customized Squid.
We thought that perhaps some easy way around it might be available, but in the 
end this is not a very critical issue for us.

Andrew

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, September 21, 2012 3:24 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Cache Manager working on Apache server as ICP sibling

On 21/09/2012 3:33 a.m., Andrew Krupiczka wrote:
> Thanks,
>
> To be more precise, my Apache server peer is configured as: cache_peer 
> 127.0.0.1 sibling 1114 3130 no-digest originserver
>
> The Cache Manager only works after adding the following line: 
> neighbor_type_domain 127.0.0.1 parent AKDME7071BCE29DE9
>
> which "transforms" my sibling peer into the parent peer, which we don't 
> want to do.

Apache is not a server which can maintain a proxy cache. Why are you making it 
a sibling (potential alternative *cache*) instead of a parent (potential data 
*source*)?

Amos


[squid-users] Odd problem with the new squid

2012-09-21 Thread turgut kalfaoğlu

Hi there. On a server I have squid-3.2.1-20120906-r11653 running.

The server uses IPv6 on a hamachi VPN, and regular IPv4 for the internet.

Squid is configured to listen only to ipv6 using some line like:
http_port [1610:ab::225:e1ee]:1399

Whenever a client uses squid to browse, he or she gets a 3- or 4-digit 
hex number embedded in the web page.
If the page is short, the number appears only at the top. If the page is 
long, the number repeats at regular intervals.
It's as if it's some status code, or maybe a length code. I can't figure 
it out.


However, it corrupts the page, and corrupts the data if it happens to be binary data.

Any ideas where these numbers are coming from?

Thanks, -t




[squid-users] squid_swap.log

2012-09-21 Thread aa ss
Hello.

 I'm running squid 2.6.STABLE23 integrated with Websense, so I need to run this 
version and cannot upgrade.
 I read lots of documentation and lots of posts about this problem, without 
finding a real solution.
 I have 3 separate disks acting as cache.
 The problem is that when these 3 disks reach the storage limit imposed by 
squid's replacement policy (about 92%), the 3 squid_swap.log files grow 
indefinitely, filling all disk space.
 To solve this I need to stop squid, delete all cache and swap files, and 
rebuild.


 Is there a solution for my problem?
 Many thanks.
 Giulius


Re: [squid-users] problems with ssl_crtd

2012-09-21 Thread Linos
On 21/09/12 09:20, Amos Jeffries wrote:
> Firstly, is this problem still occurring with a recent snapshot? We have done a
> lot of stabilization on squid-3 in the months working up towards the 3.2.1 release,
> and the SSL code has had two new features added to improve the bumping process
> and behaviours.
> 
> 
> Secondly, the issue as you found is not in Squid but in the helper. You should
> be able to add -d option to the helper command line to get a debug trace out 
> of
> it into cache.log. Set Squid to a normal (0 or 1) level to avoid any squid 
> debug
> confusing the helper traces.
> 
> In 3.2 helpers crashing is not usually a fatal event, you will simply see an
> annoying amount of that:
> "
> 
> 2012/09/20 14:58:23| WARNING: ssl_crtd #2 exited
> 2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)
> 2012/09/20 14:58:23| Starting new helpers
> "
> 
> 
> In this case there is something in the cert database or system environment 
> which
> is triggering the crash and persisting across into newly started helpers,
> crashing them as well. This is the one case where Squid is still killed by
> helpers dying faster than they can be sent lookups, thus the
> 
> "FATAL: The ssl_crtd helpers are crashing too rapidly, need help!"
> 
> HTH
> Amos
> 

I tested squid-3.HEAD-20120921-r12321; squid crashes very quickly with this
version, so I had no time to test the SSL problem:

squid3 -N
2012/09/21 11:09:49| SECURITY NOTICE: auto-converting deprecated "ssl_bump allow
" to "ssl_bump client-first " which is usually inferior to the newer
server-first bumping mode. Update your ssl_bump rules.
Abortado (`core' generado)

About the core file: no matter what I put in squid.conf, squid does not generate
it. I have this line right now:
coredump_dir /var/log/squid3

I have also tried using the squid cache_dir itself and that does not work either.
I executed it under gdb and got this backtrace:


#0  0x7579a445 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7579dbab in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x556cf63d in xassert (
msg=0x55906778 "!conn() || conn()->clientConnection == NULL ||
conn()->clientConnection->fd == aDescriptor", file=, line=103)
at debug.cc:565
#3  0x557c8985 in ACLFilledChecklist::fd (this=0x5691b418,
aDescriptor=11) at FilledChecklist.cc:103
#4  0x556f73bd in FwdState::initiateSSL (this=0x57b00268)
at forward.cc:831
#5  0x557fd204 in AsyncCall::make (this=0x577c9cf0)
at AsyncCall.cc:35
#6  0x55800227 in AsyncCallQueue::fireNext (this=)
at AsyncCallQueue.cc:52
#7  0x55800380 in AsyncCallQueue::fire (this=0x55d5aba0)
at AsyncCallQueue.cc:38
#8  0x556e8604 in EventLoop::runOnce (this=0x7fffe460)
at EventLoop.cc:130
#9  0x556e86d8 in EventLoop::run (this=0x7fffe460)
at EventLoop.cc:94
#10 0x55749249 in SquidMain (argc=,
argv=) at main.cc:1518
#11 0x55678536 in SquidMainSafe (argv=,
argc=) at main.cc:1240
#12 main (argc=, argv=) at main.cc:1232


Regards,
Miguel Angel.


Re: [squid-users] problems with ssl_crtd

2012-09-21 Thread Linos
On 21/09/12 09:20, Amos Jeffries wrote:
> Firstly, is this problem still occurring with a recent snapshot? We have done a
> lot of stabilization on squid-3 in the months working up towards the 3.2.1 release,
> and the SSL code has had two new features added to improve the bumping process
> and behaviours.
> 
> 
> Secondly, the issue as you found is not in Squid but in the helper. You should
> be able to add -d option to the helper command line to get a debug trace out 
> of
> it into cache.log. Set Squid to a normal (0 or 1) level to avoid any squid 
> debug
> confusing the helper traces.
> 
> In 3.2 helpers crashing is not usually a fatal event, you will simply see an
> annoying amount of that:
> "
> 
> 2012/09/20 14:58:23| WARNING: ssl_crtd #2 exited
> 2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)
> 2012/09/20 14:58:23| Starting new helpers
> "
> 
> 
> In this case there is something in the cert database or system environment 
> which
> is triggering the crash and persisting across into newly started helpers,
> crashing them as well. This is the one case where Squid is still killed by
> helpers dying faster than they can be sent lookups, thus the
> 
> "FATAL: The ssl_crtd helpers are crashing too rapidly, need help!"
> 
> HTH
> Amos
> 

I have not tried a recent snapshot, but I am going to do that right now.

I have added the -d option; now I have this line in squid.conf:
sslcrtd_program /usr/lib/squid3/ssl_crtd -d -s /var/spool/squid3/squid_ssl_db -M
16MB

Still, I don't get anything new in cache.log; this is the last crash:

(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/21 10:33:10| WARNING: ssl_crtd #2 exited
2012/09/21 10:33:10| Too few ssl_crtd processes are running (need 1/10)
2012/09/21 10:33:10| Starting new helpers
2012/09/21 10:33:10| helperOpenServers: Starting 1/10 'ssl_crtd' processes
2012/09/21 10:33:10| client_side.cc(3477) sslCrtdHandleReply: "ssl_crtd" helper
return  reply
(ssl_crtd): Cannot create ssl certificate or private key.
2012/09/21 10:33:10| WARNING: ssl_crtd #1 exited
2012/09/21 10:33:10| Too few ssl_crtd processes are running (need 1/10)
2012/09/21 10:33:10| Closing HTTP port 0.0.0.0:3128
2012/09/21 10:33:10| Closing HTTP port [::]:3150
2012/09/21 10:33:10| storeDirWriteCleanLogs: Starting...
2012/09/21 10:33:10| 65536 entries written so far.
2012/09/21 10:33:10|   Finished.  Wrote 112080 entries.
2012/09/21 10:33:10|   Took 0.04 seconds (2691254.86 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!

Squid Cache (Version 3.2.1): Terminated abnormally.
(ssl_crtd): Cannot create ssl certificate or private key.
CPU Usage: 1.196 seconds = 0.720 user + 0.476 sys
Maximum Resident Size: 199824 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:   34196 KB
Ordinary blocks:33966 KB 52 blks
Small blocks:   0 KB  1 blks
Holding blocks: 37268 KB  8 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 229 KB
Total in use:   71234 KB 208%
Total free:   229 KB 1%


I have tried attaching to the five ssl_crtd processes, but after the crash I get:

[Inferior 1 (process 465) exited normally]
[Inferior 1 (process 463) exited normally]
[Inferior 1 (process 464) exited normally]
[Inferior 1 (process 466) exited with code 01]
[Inferior 1 (process 467) exited with code 01]

So no backtrace, neither in gdb nor in cache.log.

About the environment: the problem seems to be related to the Google domains.
I don't know if I could trigger it with others, but certainly not as easily.

I am going to try the latest snapshot in a while and post my results here.

Regards,
Miguel Angel.


Re: [squid-users] Re: stale client blocking other downloads

2012-09-21 Thread Amos Jeffries

On 20/09/2012 11:34 p.m., Stephane CHAZELAS wrote:

2012-09-19 17:07:46 +0300, Eliezer Croitoru:

On 9/19/2012 4:36 PM, Stephane CHAZELAS wrote:

But only when providing a User-Agent, oddly enough (and also
when inserting a 10 second delay between the two requests).
Then, I tried to do the same query from a different client and
since then, I cannot reproduce it anymore (even if I do a
different request), and I see no merging in conntrackd. So I
don't have the full picture here, and reckon it's going to be
difficult to investigate as I don't know how to reproduce it
consistently.

I've got network captures, but only on the down side (between
the client and the proxy).

I think it's better to be moved into squid-users since it's not
really about squid code.
so next time reply to squid-users@squid-cache.org

OK. Note that squid has been working perfectly with tproxy for
several months, and the issue is only coming up now because of
that new buggy HTTP client we're evaluating that behaves in an
unexpected way (makes a request, doesn't read the response and
makes another identical request). I was raising it to -dev
because it looks potentially like a serious denial of service
vulnerability in squid.


It is a vulnerability in theory, but not one we can do anything about in 
practice. It is perfectly legitimate HTTP to receive millions of 
requests per second from a single client (for example another proxy 
serving as gateway for a whole country). We also make no practice of 
retaining previous requests around to compare details; it is far more 
efficient to reject two requests as they arrive than to compare N 
requests against each other in case one is a repeat.


 The DoS vulnerability aspect comes down to how well and efficiently you 
configure the proxy access controls. When done well, the proxy should 
have a very high rejection speed for most potential attack requests. 
Most, but not all. DoS mitigation is all about raising the bar as high as 
possible.



Amos
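
By way of illustration (the network range is a placeholder), keeping the 
cheap, fast checks at the top of the http_access list is what buys that 
rejection speed, since expensive helper-backed lookups such as 
external_acl or proxy_auth are only reached by traffic that survives the 
early denials:

  acl SSL_ports port 443
  acl CONNECT method CONNECT
  acl localnet src 192.168.0.0/16
  http_access deny !localnet
  http_access deny CONNECT !SSL_ports
  # slower, helper-backed ACL checks go below this point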


Re: [squid-users] Generell Squid setup

2012-09-21 Thread Amos Jeffries

On 21/09/2012 4:26 a.m., Farkas H wrote:

Hi Amos,

thank you for your response.
I think I didn't explain the problem correctly. Let me mention that
the http-Post request contains
- one or two URLs,
- name and parameters of one mathematical function from a set of
server functions,
- mime-type of the response.

The client sends a http-Post request to the server

The server ...
(1) fetches the data (xml, up to 100 MB) via http-Get using the
reference which is embedded in the client http-Post request,
(2) computes a complete new dataset (xml, up to 100 MB) using the
fetched data and the requested function and parameters,
(3) converts the dataset to the requested mime-type,
(4) returns the resulting dataset as a response of the http-Post
request to the client.


Aha. So you are *not* transforming a POST into a GET as earlier stated.

You are generating an entirely new resource (at the web service URL being 
POSTed to), from remote data sources contacted via GET.


Because the front-end requests are POST, there is no caching of them by 
Squid. The web service is free to run its own cache to optimize 
computation responses, but that is outside of HTTP.


The back-end requests, being GET, can be freely cached by a Squid right 
next to the calculator web service, erasing the latency to the original 
sources and de-duplicating repeat requests by the web service.
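
For illustration, such a back-end cache could be a stock Squid that the 
web service sends its GET fetches through; the paths and sizes below are 
placeholders, with maximum_object_size raised because the usual default 
(4 MB) is far below the 100 MB datasets mentioned above:

  http_port 3128
  cache_dir ufs /var/spool/squid 20480 16 256
  maximum_object_size 128 MB

The web service would then simply use this instance as its outgoing HTTP 
proxy for the GET requests.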


In theory the body of a response to a POST carrying a Content-Location: 
header could be cached under the Content-Location URL for later GET 
requests to it, but Squid does not yet support that, and it sounds like in 
your system the clients would not re-request using GET anyway. (Sponsorship 
is welcome if you want to go that way regardless.)




If we would implement a simple server cache the process would be like this:
(- client sends request via http-Post to the server)
- server generates a unique key of the client http-Post request,
because the URL of the http-Post request is always the same,
- server checks if the unique key / filename is in the cache; no:
normal process,
- server checks via http-Head (origin server) if the cached data is
fresh; if not: normal process,
- server sends the cached data associated to the unique key / filename
as response of the http-Post request to the client; end,
- normal process: server fetches the data via http-Get; server
computes the result dataset using the fetched data and the requested
function and parameters; server stores data in cache; server sends the
result data as response of the http-Post request to the client; end.

I wonder if Squid could help us to cache the data associated with the
client http-Post request. My idea is to generate a unique key from the
http-Post request to make it cacheable.


A POST response can carry a Content-Location and an ETag as a unique 
location and identifier. That technically makes the response body 
cacheable, but only when fetched later from the Content-Location with GET 
requests. POST requests themselves still cannot be answered with a HIT, 
only with a MISS.
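
For instance (the URL and values are illustrative only), such a response 
would look roughly like this on the wire, and only a later GET for 
/results/abc123 could then be served from a cache that supported this:

  HTTP/1.1 200 OK
  Content-Location: http://calc.example/results/abc123
  ETag: "abc123"
  Cache-Control: public, max-age=3600
  Content-Type: application/xml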


Amos



Thanks so much for your response.
Farkas


On 8 September 2012 04:48, Amos Jeffries  wrote:

On 7/09/2012 9:39 a.m., Farkas H wrote:

Hi Amos,

thanks for your response.
I modified the web application. Now we have the following infrastructure.
client --> http-Post [embedded http-Get] --> Server / web application
--> http-Get --> Squid -> Servers (-> Squid -> Server / web
application -> client)
Advantage: The Server / web application doesn't have to request data
from the remote servers if it's in the Squid cache.

Additionally I want to cache the http-Post requests.
client --> http-Post --> Squid --> Server / web application /
processing the response (-> Squid -> client)


Request bodies are not cacheable in HTTP.

The body content is the state to be changed at the end-server resource in the
URL. Caching it is meaningless, since next time the state needs to be
changed you CANNOT simply reply from a middleware cache saying "server state
now changed" without passing any of those details to the server.

Additionally, your translation service is re-writing the POST into GET
requests so the POST ceases to exist at your gateway service. There is
nothing to cache.




The idea: We modify the header of the http-post request to make it unique.
The information whether Squid has a stored response to the modified
request (true or false) should be added to the request / should be
forwarded to the destination server. There are two possibilities.
(1) The modified request is not stored in Squid (new request).
(2) The modified request is stored in Squid. We don't know yet if the
data is still fresh.
The request should be forwarded in both(!) possibilities, (1) and (2),
to the destination server.
Is that possible with Squid?


Of course. That depends on the Cache-Control: headers the server
supplies to Squid with its responses and applies to each response
independently of anything else.

Send "Cache-Control: must-revalidate" and Squid will query the

Re: [squid-users] problems with ssl_crtd

2012-09-21 Thread Linos
On 20/09/12 12:58, Ahmed Talha Khan wrote:
> Hey Guy, All
> 
> I have started facing a very similar issue now. I have been using
> squid-3.HEAD-20120421-r12120 for about 5 months without any issues.
> Suddenly, since yesterday, I've started getting crashes in the ssl_crtd
> process.
> 
> 
> In my case I am the only user, but I observe that the behaviour is
> random. Sometimes it crashes and sometimes it works. Different https
> pages give the crash. Even non-https pages have caused the crash.
> 
>  These occur especially on Google https pages like Docs, Mail, Calendar, etc.
> 
> The signing cert is also ok and has NOT expired.
> 
> 

I can confirm my problem is not reproducible with https://www.apple.com (for
example), or at least not nearly as easily as with the Google domains.

Regards,
Miguel Angel.



Re: [squid-users] Cache Manager working on Apache server as ICP sibling

2012-09-21 Thread Amos Jeffries

On 21/09/2012 3:33 a.m., Andrew Krupiczka wrote:

Thanks,

To be more precise my Apache server peer is configured as:  cache_peer 
127.0.0.1 sibling 1114 3130 no-digest originserver

The Cache Manager only works after adding the following line:   
neighbor_type_domain 127.0.0.1 parent AKDME7071BCE29DE9

which "transforms" my sibling peer into to the parent peer which we don't want 
to do.


Apache is not a server which can maintain a proxy cache. Why are you 
making it a sibling (potential alternative *cache*) instead of a parent 
(potential data *source*)?


Amos
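
For reference, the parent form Amos is describing would look something 
like the line below; options other than the peer type are carried over 
from the existing config and may need adjusting:

  cache_peer 127.0.0.1 parent 1114 0 no-query originserver no-digest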


Re: [squid-users] problems with ssl_crtd

2012-09-21 Thread Amos Jeffries
Firstly, is this problem still occurring with a recent snapshot? We have 
done a lot of stabilization on squid-3 in the months working up towards 
the 3.2.1 release, and the SSL code has had two new features added to 
improve the bumping process and behaviours.



Secondly, the issue as you found is not in Squid but in the helper. You 
should be able to add the -d option to the helper command line to get a 
debug trace out of it into cache.log. Set Squid to a normal (0 or 1) debug 
level to avoid any squid debug output confusing the helper traces.


In 3.2, helpers crashing is not usually a fatal event; you will simply 
see an annoying amount of this:

"

2012/09/20 14:58:23| WARNING: ssl_crtd #2 exited
2012/09/20 14:58:23| Too few ssl_crtd processes are running (need 1/5)
2012/09/20 14:58:23| Starting new helpers
"


In this case there is something in the cert database or system 
environment which is triggering the crash and persisting across into 
newly started helpers, crashing them as well. This is the one case where 
Squid is still killed by helpers dying faster than they can be sent 
lookups, thus the


"FATAL: The ssl_crtd helpers are crashing too rapidly, need help!"

HTH
Amos