with-openssl should
> resolve it.
>
> Amos
>
Dan Steen
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
tain.
>
> Amos
>
>
> Original message
> From: Dan Steen
> Date: Wed, 12 May 2021, 10:06
> To: squid-users@lists.squid-cache.org
> Subject: [squid-users] https_port not correctly sending ssl cert information?
>> Hi!
>>
>> I've recently been trying to
enable-ssl-crtd, and the new
version only has --with-gnutls. Would that be the issue? I appreciate the
help!
Thanks!
Dan Steen
I'd be all over any Squid 4 RPMs for EL6, for what that's worth.
I had downloaded your source RPM for EL7 at one point and tried to build
one for EL6. Dealing with the compiler issues was a bit beyond me though,
sadly.
On Tue, 14 Aug 2018 at 05:46, Eliezer Croitoru wrote:
> I need to test it
Copy, Amos — receiving you loud and clear :)
On Mon, 4 Jun 2018 at 15:47, Amos Jeffries wrote:
> Hi anyone,
> just testing to see if the list server is still operational. Things
> have been suspiciously quiet this week.
>
> Amos
unfamiliar errors.
Any advice welcome!
Best,
Dan
Okay, cool — thanks for clarifying.
Guess I'll nuke it myself and reinitialise a blank one.
Best,
Dan
On 19 May 2017 at 23:29, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 19/05/17 15:47, Dan Charlesworth wrote:
>
>> Hey all
>>
>> I'm fairly new to rock cac
10240
# du --max-depth=1 /var/spool/squid/ -h
137G /var/spool/squid/rock
What am I missing?
Best,
Dan
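For reference, the second argument to a rock cache_dir is its size limit in megabytes, so 10240 should cap the cache at 10 GB. A hedged sketch of the directive (the path matches the du output above; everything else is an assumption, not taken from this thread):

```
# cache_dir rock <directory> <size-in-MB>
# 10240 means a 10 GB cap, well below the 137 GB observed on disk
cache_dir rock /var/spool/squid/rock 10240
```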
;Catalog":{"Hash":"0ubiHCQUm5xIzgzlKW9Gbw=="},"Ts":636282501023081355}
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://sheppartonnews.pressreader.com
ws: 5
svc: 5
ws: azure
Date: Thu, 20 Apr 2017 01:55:02 GMT
X-Cache: MISS from 10.0.1.15
Hi everyone,
This is a super weird one!
This Pressreader site (http://sheppartonnews.pressreader.com/shepparton-news)
gets a totally different (erroneous) response from the server when accessing it
through squid on a particular school's network.
It doesn’t happen through any other squid box
Quoting Alex Rousskov :
On 04/12/2017 12:16 PM, Amos Jeffries wrote:
Changes to http_access defaults
Clearly stating what you are trying to accomplish with these changes may
help others evaluate your proposal. Your initial email focuses on _how_
you are
l have 2500 accounts ...
>
I have my ACLs based off what group an individual belongs to in a LDAP
tree.
Perhaps something like that would be helpful in your setup.
-Dan
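As a hedged illustration of that approach (the base DN, group name, helper path, and search filter are all assumptions, not taken from this thread), group membership can be checked through the bundled LDAP group helper:

```
# hypothetical sketch: an external ACL keyed on the login name,
# answered by the ext_ldap_group_acl helper
external_acl_type ldap_group %LOGIN /usr/lib/squid3/ext_ldap_group_acl \
    -b "dc=example,dc=local" \
    -f "(&(memberUid=%v)(cn=%g))"
acl staff external ldap_group staff
http_access allow staff
```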
I just want to throw my support behind seeking a solution to this problem.
Luke’s clearly considered it in way more detail than anyone so far, myself
included.
This affects the squids under my purview every day.
Best,
Dan
> On 14 Sep. 2016, at 10:18 am, squid-us...@filter.luko.org wr
Hey Steve,
Deployed a 3.5.20 build with both of those patches and have noticed a big
improvement in memory consumption of squid processes at a couple of
splice-heavy sites.
Thank you, sir!
Dan
> On 12 Aug 2016, at 7:05 PM, Steve Hill <st...@opendium.com> wrote:
>
>
>&g
Pretty sure this is affecting our 3.5.x systems as well — we use a very
similar splicing implementation.
I'll keep an eye out in hope someone adapts that patch!
Dan
On 12 August 2016 at 06:22, Alex Rousskov <rouss...@measurement-factory.com>
wrote:
> On 08/11/2016 10:56 AM, Steve H
appropriate?
Any advice welcome.
Thanks!
Dan
It looks like I'm probably going to get fobbed off by this site's
administrators. "It's our load balancer" — "Simply set up a bypass" etc.
Is there any straightforward way to disable the X-Forwarded-For header just
for requests to this one website? What would be implicati
That’s a super helpful analysis, thanks Amos.
Now to see if I can track down the site admins
> On 5 Jul 2016, at 3:04 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>
> On 5/07/2016 4:25 p.m., Dan Charlesworth wrote:
>> This website seems not send back a proper web page if
This website doesn’t seem to send back a proper web page if the request comes
via a (squid?) proxy.
http://passporttosafety.com.au/
Can anyone tell what might be going wrong here?
Best,
Dan
or anything like
that? I’ve probably overlooked the discussion on the list.
> On 1 Jun 2016, at 10:26 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>
> Hi Dan,
> sorry RL getting in the way these weeks.
>
> Two things stand out for me.
>
> Its a bit odd that ex
AM, Dan Charlesworth <d...@getbusi.com> wrote:
>
> I’ve now got mgr:mem output from a leaky box for comparison but I’m having a
> hard time spotting where the problem might be.
>
> Would anyone more experienced mind taking a look at these and seeing if anything
> jumps out as
1KB Strings 0 0
4KB Strings 0 1
16KB Strings 0 5
Other Strings 0 0
Large buffers: 0 (0 KB)
Thanks!
> On 11 May 2016, at 2:37 PM, Dan Charlesworth <d...@getbusi.com> wrote:
>
> Thanks Amos -
>
> Not sure how self-explanatory the output
0 0
Large buffers: 0 (0 KB)
> On 10 May 2016, at 6:02 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>
> On 10/05/2016 2:35 p.m., Dan Charlesworth wrote:
>> A small percentage of deployments of our squid-based product are using
>> oodles of memory—
A small percentage of deployments of our squid-based product are using oodles
of memory—there doesn’t seem to be a limit to it.
I’m wondering what the best way might be to analyse what squid is reserving it
all for in the latest 3.5 release?
The output of squidclient mgr:cache_mem is
g outside of squid.
>
> Eliezer
>
> On 07/03/2016 06:50, Dan Charlesworth wrote:
>> Alright, we’re getting somewhere.
>>
>> A plain curl is about as slow as a default squid config curl:
>>
>> P.S. I sent you a Skype request
>>
>> ---
>
entioned?
>
> Another one to try is:
> http://www.squid-cache.org/Doc/config/dns_v4_first/
>
> try adding to the end of squid.conf
> dns_v4_first on
>
> All The Bests,
> Eliezer
>
> On 04/03/2016 00:42, Dan Charlesworth wrote:
>> Thanks for your inp
18:07:21 2016
;; MSG SIZE rcvd: 93
real 0m0.037s
user 0m0.003s
sys 0m0.001s
> On 3 Mar 2016, at 5:44 PM, Eliezer Croitoru <elie...@ngtech.co.il> wrote:
>
> can you try the next command:
> dig -x 10.100.128.1
>
> Eliezer
>
> On 03/03/2016 08:04, Dan Ch
96.50
ns-1756.awsdns-27.co.uk. 11489 IN A 205.251.198.220
;; Query time: 21 msec
;; SERVER: 192.231.203.3#53(192.231.203.3)
;; WHEN: Thu Mar 3 17:03:04 2016
;; MSG SIZE rcvd: 246
real 0m0.026s
user 0m0.004s
sys 0m0.001s
> On 3 Mar 2016, at 4:55 PM, Eliezer Croitoru <elie...@ngtech
Right now we have 1 squid box (out of a lot), running 3.5.13, which does
something like this for every request, taking about 10 seconds:
2016/03/03 16:30:48.883 kid1| 78,3| dns_internal.cc(1794) idnsPTRLookup:
idnsPTRLookup: buf is 43 bytes for 10.100.128.1, id = 0x733a
2016/03/03 16:30:48.883
I’m just catching up with this one, but we’ve observed some memory leaks on a
small percentage of our boxes, which we migrated to Peek & Splice late last
year.
We’re on 3.5.13, about to move to 3.5.15.
What’s the least disruptive way to keep this under control, if there is one?
Is there
)
But now I can’t even find a source for that … I need to spend some quality time with
Google I think.
> On 24 Feb 2016, at 5:50 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>
> On 23/02/2016 1:05 p.m., Dan Charlesworth wrote:
>> I'm bumping this question back up, because I al
I'm bumping this question back up, because I also would like to know.
We'd rather not need users of our squid-based software to need to deploy
new CentOS 7 servers to run it.
On 12 February 2016 at 19:59, Jason Haar wrote:
> Hi there
>
> Given the real work on ssl-bump
It's been a while since I've looked at this—because the software we use to
generate our squid.conf just works around now—but we found that Squid 3
would only enforce exactly half the configured rate on HTTP requests but
enforce the full rate on HTTPS requests.
So we now make two delay pools for
It’s been a far superior client experience to bumping on the deployments I’ve
seen. Obviously MITM-ing a connection is always going to be a less amenable
situation for clients; technically and ethically.
The only problem I’ve had with splicing is this Host Header Forgery error squid
has when
gt;
> On 25/11/2015 12:20 p.m., Dan Charlesworth wrote:
>> Thanks for the perspective on this, folks.
>>
>> Going back to the technical stuff—and this isn’t really a squid thing—but is
>> there any way I can minimise this using my DNS server?
>>
>> Can
Thanks for the perspective on this, folks.
Going back to the technical stuff—and this isn’t really a squid thing—but is
there any way I can minimise this using my DNS server?
Can I force my local DNS to only ever return 1 address from the pool on a
hostname I’m having trouble with?
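One hedged possibility, not confirmed anywhere in this thread: pin the name to a single address in a hosts file that Squid reads, since hosts_file entries are loaded into the IP cache ahead of DNS answers. The address and hostname below are placeholders:

```
# squid.conf — point Squid at the hosts file
hosts_file /etc/hosts
```

```
# /etc/hosts — pin the troublesome name to one address
203.0.113.10  troublesome.example.com
```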
> On 30
of IPs
apparently at random.
> On 29 Oct 2015, at 3:46 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>
> On 29/10/2015 1:16 p.m., Dan Charlesworth wrote:
>> It looks like there’s certain hosts that are designed to load balance (or
>> something) between a few I
the client and the proxy are going to get the same IPs at the same
time.
What is one to do about that?
> On 22 Oct 2015, at 10:00 PM, Yuri Voinov <yvoi...@gmail.com> wrote:
>
>
>
> 22.10.15 15:58, Amos Jeffries wrote:
>> On 21/10/2015 4:53 p.m., Dan Charlesw
I’m getting these very frequently for api.github.com and github.com
I’m using the same DNS servers as my intercepting squid 3.5.10 proxy and they
only return the one IP when I do an nslookup as well …
Any updates from your end, Roel?
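For what it's worth, a hedged mitigation sketch (the resolver address is a placeholder): give Squid the same resolver the clients use and shorten its positive DNS cache, so both sides are more likely to see the same member of a load-balanced pool:

```
# use the clients' resolver rather than the resolv.conf entries
dns_nameservers 192.0.2.53
# re-ask sooner instead of caching answers for the default 6 hours
positive_dns_ttl 1 minute
```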
> On 8 Oct 2015, at 8:29 PM, Eliezer Croitoru
Amos -
I’m going to assume that request was directed at Alex, as I don’t have editor
access to the wiki. Let me know if not.
> On 16 Oct 2015, at 4:22 PM, Amos Jeffries wrote:
>
> Can you please add to the Troubleshooting section at the end of
>
Great, thanks. Don’t know why I didn’t think of it before but I’ll try
elevating it from Login -> System keychain and see what happens.
> On 16 Oct 2015, at 11:51 AM, Jason Haar <jason_h...@trimble.com> wrote:
>
> On 16/10/15 13:34, Dan Charlesworth wrote:
>> Th
ason_h...@trimble.com> wrote:
>
> On 16/10/15 13:08, Dan Charlesworth wrote:
>> ORLY
>>
>> I seem to recall this happening on 10.10 as well, but it could be an El
>> Capitan thing. Do you mind reminding me of your squid config Jason?
>
> With my config I trying to
anything to do with Elliptic Curves or pinning
>
> Jason
>
> On 15/10/15 12:19, Alex Rousskov wrote:
>> On 10/14/2015 05:00 PM, Dan Charlesworth wrote:
>>
>>> I feel like if server-first is working there must be *some*
>>> combination of peek/s
, and Jason for your help on this.
> On 16 Oct 2015, at 11:55 AM, Dan Charlesworth <d...@getbusi.com> wrote:
>
> Great, thanks. Don’t know why I didn’t think of it before but I’ll try
> elevating it from Login -> System keychain and see what happens.
>
>> On 16 Oct
to use server-first if they decide to
employ bumping, so if any of you smart people have any other suggestions,
please send them through.
Thanks
> On 15 Oct 2015, at 1:34 AM, Alex Rousskov <rouss...@measurement-factory.com>
> wrote:
>
> On 10/13/2015 09:08 PM, Dan
t 2:39 PM, Dan Charlesworth <d...@getbusi.com> wrote:
>
> ¯\_(ツ)_/¯
>
> All I really have to go on is those errors com.apple.WebKit.Networking is
> logging which apparently points to a specific thing it’s missing called
> “forward transport security”. Only the peek@st
aar <jason_h...@trimble.com> wrote:
>
> On 14/10/15 16:08, Dan Charlesworth wrote:
>> I thought that fixed it for a second …
>>
>> But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually
>> splicing everything, it seems.
>>
>> Any
I thought that fixed it for a second …
But in reality ssl_bump peek step1 & ssl_bump bump step3 is actually splicing
everything, it seems.
Any other advice? :-)
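For comparison, the commonly documented bump-everything arrangement peeks only at step 1, because peeking at step 2 (once the server certificate is fetched) generally rules out bumping later. A minimal sketch, not a guaranteed fix for the behaviour described above:

```
acl step1 at_step SslBump1
# peek at the ClientHello only, then bump the rest
ssl_bump peek step1
ssl_bump bump all
```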
> On 14 Oct 2015, at 1:51 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
>
> On 14/10/2015 1:13 p.m., Dan
Throwing this out to the list in case anyone else might be trying to get SSL
Bump to work with the latest version of Safari.
Every other browser on OS X (and iOS) is happy with bumping for pretty much all
HTTPS sites, so long as the proxy’s CA is trusted.
However Safari throws generic “secure
Same here—I've been meaning to ask the list about this too. I’m still on 3.5.9,
by the way.
> On 6 Oct 2015, at 10:55 PM, Roel van Meer wrote:
>
> Hi everyone,
>
> I have a Squid setup on a linux box with transparent interception of both
> http and https traffic. Everything
It seems there’s no way to get the equivalent of the `dst` internal ACL into an
external ACL. %DST returns the hostname from DNS not the origin IP.
Am I missing something? Perhaps there's a more creative way to pass the IP to
an external ACL regardless of what the hostname is?
Thanks!
On 09/25/2015 06:09 PM, Amos Jeffries wrote:
> On 26/09/2015 2:26 a.m., Dan Purgert wrote:
>> Quoting TarotApprentice:
>>
>>> Is there a chance we can get 3.5.9 into Debian please.
>>>
>>
>> Think this is more a question for the Debian maintainers,
Quoting TarotApprentice :
Is there a chance we can get 3.5.9 into Debian please.
Think this is more a question for the Debian maintainers, than the
squid ones. I ended up building 3.5.8 from source because of it.
TBH though, the built-from-source 3.5.8 seems
Thanks for all the info here, people.
This is probably because of some other dumb thing I’m doing in my ssl_bump
config, but if I change ssl_bump peek step1 to ssl_bump peek all, I get this
assertion failure:
PeerConnector.cc:747: "!callback"
> On 9 Sep 2015, at 6:59 pm, Amos Jeffries
10.0.1.7 TCP_TUNNEL 200 13741 CONNECT
192.30.252.126:443 api.github.com - splice - ORIGINAL_DST/192.30.252.126 -
> On 8 Sep 2015, at 5:39 pm, Dan Charlesworth <d...@getbusi.com> wrote:
>
> Thanks Amos.
>
> To clarify about the user agents: I’m talking about anything with a (log
t log a UA when an explicit CONNECT
does.
> On 8 Sep 2015, at 5:17 pm, Amos Jeffries <squ...@treenet.co.nz> wrote:
>
> On 8/09/2015 5:36 p.m., Dan Charlesworth wrote:
>> Hello all
>>
>> I’ve been testing out an SSL bumping config using 3.5.8 for the last week or
Hello all
I’ve been testing out an SSL bumping config using 3.5.8 for the last week or so
and am scratching my head over a couple of things.
First, here’s my config (shout out to James Lay):
acl tcp_level at_step SslBump1
acl client_hello_peeked at_step SslBump2
acl bump_bypass_domains
I’m trying to figure out if there’s a way to avoid those 0 byte “peeked”
requests being processed by the rest of our external ACLs etc. by allowing them
early on in the transaction.
Unfortunately there doesn’t seem to be a way to target just those ones with
http_access—the TAG_NONE isn’t an
://somesite.com' isn't just for your blog), you'll probably do
better to create a subdomain (blog.somesite.com) so that you don't
make a mess of things ;)
Regards,
Dan
it deduce[sic] a lot of hit ratio
Here's the same phrase worded the way I think that HaxkXBack /meant/ --
Yeah Joe,
I don't know why people don't give the bug higher priority as it is
significantly reducing the hit ratio
HTH :)
-Dan
least, slapped in the back of the head.
On 8/6/2015 6:44 PM, Dan Charlesworth wrote:
This used to just cause a WARNING right? Is this really a good enough
reason to stop Squid from starting up?
2015/08/07 09:25:43| ERROR: '.ssl.gstatic.com' is a subdomain
This used to just cause a WARNING right? Is this really a good enough reason to
stop Squid from starting up?
2015/08/07 09:25:43| ERROR: '.ssl.gstatic.com' is a subdomain of '.gstatic.com'
2015/08/07 09:25:43| ERROR: You need to remove
antony.st...@squid.open.source.it
wrote:
On Monday 03 August 2015 at 08:06:35 (EU time), Dan Charlesworth wrote:
Probably a lot of forward proxy users here have encountered applications
which, if they can’t get their web requests through the proxy (because of
407 Proxy Auth Required
Quoting Eliezer Croitoru elie...@ngtech.co.il:
I managed to make it work!
I am using ubuntu 14.04.2 with openLDAP and phpldapadmin.
I have changed my server to look like yours and it still didn't work.
So what I did was this: I changed the command to:
/usr/lib/squid3/ext_ldap_group_acl -d -b
Quoting Eliezer Croitoru elie...@ngtech.co.il:
I wanted to test the ext_ldap_group_acl so I created a ldap domain.
The command I am testing is:
/usr/lib/squid3/ext_ldap_group_acl -b DC=ngtech,DC=local -D
CN=admin,DC=ngtech,DC=local -w password -f
Hey folks
Is 3.4.14 going to be a thing or should we be moving to v3.5 if we want new
bug fixes?
Not to go off-topic here, but you folks are all SSL Bumping youtube.com /
googlevideo.com in order to do this caching, right?
Want to make sure I’m not missing some secret way to make YouTube use plain
HTTP.
On Fri, Jul 24, 2015 at 8:24 AM, Eliezer Croitoru elie...@ngtech.co.il
wrote:
Hey
On Sun, 12 Jul 2015 11:13:02 -0700, Jason Enzer wrote:
[...]
Looks like this:
[snip]
http_access allow tasty3171 ip1
http_access deny ip1 tasty3171
[snip]
http_access allow inc3172 ip2
http_access deny *inc3172 ip2*
[snip]
http_access allow inc3173 ip3
http_access deny *inc3173
On Fri, 03 Jul 2015 18:08:49 +, Dan Purgert wrote:
I'm setting up a squid proxy with LDAP user/group authentication, and so
far have been able to sort out the problems I've run into with a little
help from google and caches of the various squid mailing lists.
Currently, it's in a mostly
On July 4, 2015 2:57:20 AM EDT, Amos Jeffries squ...@treenet.co.nz wrote:
On 4/07/2015 6:08 a.m., Dan Purgert wrote:
I'm setting up a squid proxy with LDAP user/group authentication, and
so
far have been able to sort out the problems I've run into with a
little
help from google and caches
I'm setting up a squid proxy with LDAP user/group authentication, and so
far have been able to sort out the problems I've run into with a little
help from google and caches of the various squid mailing lists.
Currently, it's in a mostly working state for nearly everything (i.e.
user
It's also worth pointing out that your messages are getting flagged as Spam
by Gmail, which probably isn't helping visibility.
On 23 June 2015 at 06:11, mohammad al_luha...@yahoo.com wrote:
why is no-one answering this ?!!
BTW, i tried the kernel patch 2.6.35 from ZPH, it worked
Firstly, I think the biggest roadblocks you’re going to hit with caching
YouTube are:
1) It’s all encrypted now (thanks Google). Squid can’t cache what it can’t see
inside an SSL tunnel.
2) They have a pretty intense CDN which you’ll need a StoreID helper to deal
with.
There are
Thanks Amos. We're using the CONNECT ACL and everything is working as
expected.
On 29 April 2015 at 20:28, Amos Jeffries squ...@treenet.co.nz wrote:
On 29/04/2015 5:44 p.m., dan wrote:
I mentioned last time that we had to x2 all our delay_parameter’s
bytes because of a weird bug where squid
I mentioned last time that we had to double all our delay_parameters bytes
because of a weird bug where squid would apply it at half speed for no reason.
It just occurred to me that (obviously) this is why HTTPS downloads are going
too fast; because this bug must only affect HTTP traffic.
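For context, the doubling workaround looks something like this (the values are hypothetical, not from the original post): to get an effective 64 KB/s on HTTP, the configured rate is set to twice that:

```
delay_pools 1
delay_class 1 1
# restore/max in bytes: 131072 configured, roughly 65536 effective
# on HTTP because of the half-speed bug described above
delay_parameters 1 131072/131072
delay_access 1 allow all
```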
connected to a specific
site. Is this possible?
Dan Berry
Data Network Engineer
Thanks Amos
Sorry if that wasn’t clear, but yeah, 7 KB/s was the desired speed in that
test.
I was testing against an ISO in an S3 bucket of ours. I would start the
download using http:// and get 7 KB/s (great). Then cancel it and edit the URL
to https:// and get ~90 KB/s.
Oh, and
=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC'
'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'
--enable-ltdl-convenience
On 28 Mar 2015, at 3:11 am, Dan Charlesworth d...@getbusi.com wrote:
Roger—thanks for heads up Amos.
On Fri
Bumping this because I think it might have gone into the black hole the other
night.
On 23 Mar 2015, at 5:44 pm, Dan Charlesworth d...@getbusi.com wrote:
Turns out it’s also shitting the bed whenever I go to an SSL site now that
I’ve added --enable-storeio=rock:
2015/03/23 17:40:13 kid1
will upload them to the bug.
Thanks folks.
On 25 March 2015 at 09:28, Dan Charlesworth d...@getbusi.com wrote:
Resending this after the last attempt went into the mail server black hole:
Hey Amos
I decided I’m not confident enough in 3.5.HEAD, after last time, to go
back into production
with optimisations
disabled and it seems to be doing fine performance and stability-wise. I
only managed to capture one crash with optimisations disabled, so far, but
it seemed to have some memory-related corruption, unfortunately.
Updates to come over the next few days.
On 23 March 2015 at 16:59, Dan
posted
before?
Kind regards
Dan
On 19 Mar 2015, at 5:18 pm, Amos Jeffries squ...@treenet.co.nz wrote:
On 19/03/2015 6:36 p.m., Dan Charlesworth wrote:
Hey y’all
Finally got 3.5.2 running. I was under the impression that using
server-first SSL bump would still be compatible, despite all
this in my cache logs I stop squid, remove the swap.state file and run
squid3 -z; after that I start squid again and the issue is gone.
Regards
On 3/19/15, Dan Charlesworth d...@getbusi.com wrote:
Hi John
This bug has been affecting me on an off for a while as well. I believe it
only affects aufs
0x4135 in ?? ()
No symbol table info available.
#14 0x0020 in ?? ()
No symbol table info available.
#15 0x in ?? ()
No symbol table info available.
On 16 Mar 2015, at 6:18 pm, Amos Jeffries squ...@treenet.co.nz wrote:
On 16/03/2015 7:16 p.m., Dan Charlesworth
Hey Eliezer
I don't actually use SMP. I could be wrong about the aufs thing; I haven't
personally tested—and don't currently plan to test—any other cache types. I
just gleaned that from the comments in the bug reports.
Kind regards
Dan
On 20 March 2015 at 13:45, Eliezer Croitoru elie
Dan:
I used squid 2.7.STABLE9 before, and I am still unsure whether squid
3.5.2 is the most stable choice for us. And you?
Which version in the squid 3.x series do you think is the most stable?
Well I got 3.5.2 into production for a few hours
seen this issue frequently when I reduced my cache size,
from 70 GB to 30 GB now.
Regards
On 3/19/15, Dan Charlesworth d...@getbusi.com wrote:
Hey Eliezer
I don't actually use SMP. I could be wrong about the aufs thing; I haven't
personally tested—and don't currently plan to test—any other
Hey y’all
Finally got 3.5.2 running. I was under the impression that using server-first
SSL bump would still be compatible, despite all the Peek Splice changes, but
apparently not. Hopefully someone can explain what might be going wrong here ...
Using the same SSL Bump config that we used for
Hi Donny
I gathered that much. I guess what I specifically am asking for is:
- Which CentOS 6 package includes the missing perl modules?
- How do I grant the “pinger” the correct permissions in CentOS 6?
Cheers
Dan
On 18 Mar 2015, at 4:58 pm, Donny Vibianto l4n...@gmail.com wrote:
hi Dan
.
Tory
Sent via the wild blue yonder
On Mar 17, 2015, at 20:16, Dan Charlesworth d...@getbusi.com
mailto:d...@getbusi.com wrote:
Hey Eliezer
Do you have any plans to maintain a Squid 3.5.x rpm for CentOS 6?
I can see you’ve published one for CentOS 7. In fact I tried to use your
Bumpity bump
Had this go down exactly the same way this past Monday at Deployment #1.
On 10 Mar 2015, at 4:51 pm, Dan Charlesworth d...@getbusi.com wrote:
Hey folks
After having many of our systems running Squid 3.4.12 for a couple of weeks
now we had two different deployments fail
Hey folks
After having many of our systems running Squid 3.4.12 for a couple of weeks now
we had two different deployments fail today due to SSL DB corruption.
Never seen this in almost 9 months of SSL bump being in production and there
were no problems in either cache log until the “wrong
Alright I got abrtd on board, finally. Here’s a backtrace from this morning
(bt and bt full versions included separately):
#0 0x00397e232625 in raise (sig=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1 0x00397e233e05 in abort () at abort.c:92
#2 0x005656ef in xassert
Thanks Amos!
I reckon that dns_packet_max directive might be playing into it. Most of the
problematic hostnames seem to return large pools of IPs.
Only one way to find out ...
On Thu, Feb 26, 2015 at 3:59 PM, Amos Jeffries squ...@treenet.co.nz
wrote:
On 26/02/2015 2:23 p.m., Dan
where this still crashes “cleanly”.
Both on CentOS 6.6 and Squid 3.4.12.
Anyone have a clue what might cause this “deadlocking” type behaviour after
an “assertion failed” crash?
On Fri, Feb 20, 2015 at 5:23 PM, Amos Jeffries squ...@treenet.co.nz
wrote:
On 20/02/2015 7:15 p.m., Dan Charlesworth
certainly narrow it down a lot further than before.
Cheers
Dan
On 20 Feb 2015, at 2:57 pm, Eliezer Croitoru elie...@ngtech.co.il wrote:
Hey Dan,
I am not the best at reading squid long debug output and it is needed in
order to understand the path that the request is traveling between
Thanks Amos -
So then it more than likely is related to our external ACLs that deal with the
HTTP response?
On 20 Feb 2015, at 5:06 pm, Amos Jeffries squ...@treenet.co.nz wrote:
On 20/02/2015 5:46 p.m., Eliezer Croitoru wrote:
Hey Dan,
The basic rule of thumb in programming lands
its impact?
Thanks
Dan
On 12 February 2015 at 09:51, Dan Charlesworth d...@getbusi.com wrote:
Hey Eliezer
With the response_size_100 ACL definition:
- 100 tells the external ACL the limit in MB
- 192.168.0.10 tells the external ACL the squid IP
I think one or both of these is only needed
other info I can provide that might point towards
the cause of this crash.
And thanks again for taking a look.
On 3 Feb 2015, at 2:49 pm, Dan Charlesworth d...@getbusi.com wrote:
Hi Eliezer
Thanks for paying attention, as always. I’m working on getting an
(appropriately censored) example