On Fri, Feb 10, 2017 at 2:58 PM, Tory M Blue <tmb...@gmail.com> wrote:
> Sorry image didn't come through, i'm talking about this error
>
> ERROR: The requested URL could not be retrieved
> --
>
> An "Invalid Request" error was encountered
> (truncated URL-encoded request detail; decoded it reads: Cookie: dtuid=1471486181650244744, Host: cache03)
On Fri, Feb 10, 2017 at 12:52 PM, Tory M Blue <tmb...@gmail.com> wrote:
>
> The request just from the browser or curl:
> http://cache04.prod.ca.domain.net/squid-internal-static/icons/SN.png
>
>
The request just from the browser or curl:
http://cache04.prod.ca.domain.net/squid-internal-static/icons/SN.png
Anyone know what could have changed in 3.5.20 on CentOS 7 to cause this check
to fail? I use it for internal load balancers to note whether the system is able
to handle requests.
Not sure why
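For anyone wanting the same kind of health check: since that icon is generated by Squid itself, fetching it proves the proxy is answering without involving any origin. A minimal sketch (hostname is just the example from above):
curl -sf -o /dev/null http://cache04.prod.ca.domain.net/squid-internal-static/icons/SN.png && echo UP || echo DOWN
A non-zero exit (timeout or non-2xx) means the worker itself isn't responding.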
On Thu, Feb 2, 2017 at 3:51 PM, Amos Jeffries wrote:
> On 3/02/2017 7:56 a.m., tmb...@gmail.com wrote:
> > asnani_satish wrote
> >> This happens when size specified in cache_mem >= cache_dir
> >> Example:
> >> cache_dir aufs /var/spool/squid 1000 32 512
> >>implies 1000
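(If I'm reading the quoted advice right, the point is just to keep cache_mem below the cache_dir size; a hedged illustration using the numbers above:
cache_dir aufs /var/spool/squid 1000 32 512
cache_mem 256 MB
i.e. a 1000 MB disk cache with the memory cache kept well under it.)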
On Tue, Jan 31, 2017 at 7:29 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 1/02/2017 4:09 p.m., Tory M Blue wrote:
> > I moved to a different disk today. System was down, I rsyncd the cache
> > directory over, including everything and the swap files etc. Squid starts
So we are moving from an F5 LB to an AWS ELB. In the F5 we have an iRule
that inserts a header that our origin servers look for so they can return
https URLs.
The ELB and Squid combination ends up rewriting the X-Forwarded-Proto header
from
HTTP_X_FORWARDED_PROTO: https
to
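(The snippet is cut off above. For the archive: if the header is being lost or rewritten on the way to the origin, one hedged workaround in 3.5 is to re-add it for traffic arriving from the ELB; the source range here is purely illustrative:
acl from_elb src 10.0.0.0/8
request_header_add X-Forwarded-Proto https from_elb
request_header_add only affects what Squid sends upstream, so the client-facing side is untouched.)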
On Tue, May 3, 2016 at 5:58 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
> On 4/05/2016 11:12 a.m., Tory M Blue wrote:
>> My configs have always consisted of http_port 80 accel vhost.. With
>> the latest 3.5.17 (I guess) if you don't list 0.0.0.0:80 squid won't
>> e
My configs have always consisted of http_port 80 accel vhost. With
the latest 3.5.17 (I guess), if you don't list 0.0.0.0:80, squid won't
even attempt to listen or talk on IPv4.
So adding 0.0.0.0:80 allows it to at least talk via IPv4.
This seems wrong, odd.
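For reference, the workaround is just the explicit IPv4 wildcard on the listening line:
http_port 0.0.0.0:80 accel vhost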
I understand you are removing methods
> I've seen the dns issue when IPv6 is not being handled properly. One way to
> test ( ya ya ) is to disable IPv6 via sysctl and see if you still see the
> delays.
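(For the archive, the usual quick way to run that test on a CentOS-style box, reversible and only meant for the experiment, is something like:
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
then watch whether the DNS delays disappear.)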
Tory
Can we get an update on the bug mentioned here:
http://bugs.squid-cache.org/show_bug.cgi?id=4223
With this unfixed one can't use siblings with HTCP, or anything actually. I
should be able to have my origin and a sibling; I should be able to make a
request to my sibling for a document, and if
X-Cnection: close
X-Cnection ??
Can someone explain that one to me? I don't recall seeing it in previous
releases
Thanks
Tory
So I was playing with squid-internal-mgr (the replacement for cachemgr.cgi, it
seems), but I have no real authentication access, other than my ACLs
acl manager url_regex -i ^cache_object:// +i
^https?://[^/]+/squid-internal-mgr/
And limited to my networks obviously.
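(A minimal sketch of the access side to go with that ACL, source range purely illustrative:
acl localnet src 10.0.0.0/8
http_access allow manager localnet
http_access deny manager
keeps the manager pages reachable only from the internal range.)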
But as of now those pages are
On Tue, Jul 28, 2015 at 12:54 PM, Amos Jeffries squ...@treenet.co.nz
wrote:
On 29/07/2015 5:53 a.m., Tory M Blue wrote:
I just reproduced this by hand, using an HTTP sniffer tool. I requested
the same URL twice, with about a 0.25 second delay between fetches, and
the
2nd attempt
Was hoping this was fixed in 3.5.6
The following error was encountered while trying to retrieve the URL:
http://view-dev.eng.domain.net/rimfire/adm/search?
http://view-dev.eng.admission.net/rimfire/admission/search?
A valid document was not found in the cache and the only-if-cached directive was
Wondering when a 3.5.6 RPM will be available. I can build the betas no
issue, but I've spent a couple of days trying to get 3.5.6 packaged up
and am failing. So it would be nice to get a 3.5.6 spun up the way the earlier
3.5.x was provided; that one was clean, but for some reason I can't get the 3.5.6 to
Just tried to put 3.5 into production and it's dying. This runs fine with
low volume, but once the volume is up there it dies.
squid-3.5.0.2-1.el6.x86_64
2.6.32-504.16.2.el6.x86_64 #1 SMP
Squid's cache is being handled in memory, so this can't be a disk I/O issue. As I
have other systems running 2.7
I had my build scripts working with the 3.5 betas, but I've switched over
to the mainline and I'm getting a ton of invalid switches.
So far I've had to comment out the following, as I'm getting errors such as
config.status: executing libtool commands
+ --enable-snmp
So it appears I have some corruption? I can hit this server with tests just
fine, but if I put it into production I get these errors.
I've got no debug statements active, so this has me a bit concerned
Tory
2015/06/16 15:26:08 kid1| WARNING: 1 swapin MD5 mismatches
2015/06/16 15:26:08 kid1|
On Tue, Jun 16, 2015 at 4:29 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On 17/06/2015 10:28 a.m., Tory M Blue wrote:
So it appears I have some corruption? I can hit this server with tests just
fine, but if I put it into production I get these errors.
I've got no debug statements active
On Mon, Jun 15, 2015 at 4:34 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On 16/06/2015 10:25 a.m., Tory M Blue wrote:
On Thu, Jun 4, 2015 at 3:56 PM, Amos Jeffries squ...@treenet.co.nz
wrote:
On 5/06/2015 5:58 a.m., Tory M Blue wrote:
I am running HTCP, or at least testing
2015/06/15 16:36:29 kid1|
'/usr/share/squid/errors/es-us/ERR_ONLY_IF_CACHED_MISS': (2) No such file
or directory
2015/06/15 16:36:29 kid1| WARNING: Error Pages Missing Language: es-us
Got this a few times when trying to enable HTCP.
Any ideas?
Tory
On Thu, Jun 4, 2015 at 3:56 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On 5/06/2015 5:58 a.m., Tory M Blue wrote:
I am running HTCP, or at least testing with it, and thus have ICP
disabled. I
know it's disabled but I don't need it yelling at me every few
minutes/seconds. How can I tell
On Thu, Jun 11, 2015 at 4:07 PM, Tory M Blue tmb...@gmail.com wrote:
Okay, well I took the RPM from your counterpart's link and it gave me
http://wiki.squid-cache.org/KnowledgeBase/CentOS
squid-3.5.0.4-1.el6.x86_64.rpm
So am I using the wrong version?
Thank you sir! :)
Tory
Okay Amos
I've got logs and now finally a core (the whole "squid isn't signed with
the proper key" thing took a bit to get around).
I'd rather not post the core here; is there a better place? Not sure it's a
bug yet, so opening a bug report seems premature.
Thanks
Tory
3.5.0.2
CentOS
I've tried using the -N option and
On Thu, Jun 11, 2015 at 12:51 PM, Tory M Blue tmb...@gmail.com wrote:
On Thu, Jun 11, 2015 at 12:25 PM, Eliezer Croitoru elie...@ngtech.co.il
wrote:
What is the issue??
Did you try the latest RPMs??
http://wiki.squid-cache.org/KnowledgeBase/CentOS
Eliezer
I spun my own
/2015 10:26 a.m., Tory M Blue wrote:
On Thu, Jun 11, 2015 at 3:21 PM, Amos Jeffries squ...@treenet.co.nz
wrote:
On 12/06/2015 9:48 a.m., Tory M Blue wrote:
On Thu, Jun 11, 2015 at 12:51 PM, Tory M Blue tmb...@gmail.com
wrote:
On Thu, Jun 11, 2015 at 12:25 PM, Eliezer Croitoru
I am running HTCP, or at least testing with it, and thus have ICP disabled. I
know it's disabled, but I don't need it yelling at me every few
minutes/seconds. How can I tell Squid: yes, thank you, I'm aware I'm not
using ICP and it's disabled, now be quiet?!
Thanks
Tory
Wondering why I'm getting this error, what config param am I missing?
I have 1 parent, 2 squid servers configured as siblings for each other
http_port 80 accel vhost
cache_peer apps-preprod.domain.net parent 80 0 no-digest no-query
originserver no-netdb-exchange
cache_peer
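(The second cache_peer line is cut off above; for reference, a typical sibling entry in this kind of setup looks something like the following, hostname illustrative, 4827 being the default HTCP port:
cache_peer cache02.prod.domain.net sibling 80 4827 htcp proxy-only
)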
Greetings
I'm getting further along with my testing and am trying to go the route of
ICP or HTCP since I have a 3-node squid cache cluster.
So the questions are..
1) What is the best debug level to set in order to see either ICP or HTCP
information? (I've been using tcpdump at the server level, just
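(On the debug-level question, with the caveat that I'm going from memory on the section numbers: ICP should be debug section 12 and HTCP section 31, so something like
debug_options ALL,1 12,3 31,3
shows their chatter in cache.log without drowning everything else.)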
Afternoon
Have a question, is there a negative to running -k rotate more than
once a day?
I've recently moved squid to a ramcache (it's glorious), however my
cache.swap file continues to grow and brings me to an uncomfortable
95%.
If I run rotate it goes from 95% to 83% (9-12gb cache dir), it
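(For what it's worth, running rotate more often is normally just scheduled from cron; a hedged example, binary path may differ per distro:
0 */6 * * * /usr/sbin/squid -k rotate
Rotate also rewrites the swap.state journal, which is presumably why the usage drops afterwards.)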
On Thu, Jun 2, 2011 at 12:45 AM, Amos Jeffries squ...@treenet.co.nz wrote:
On 02/06/11 18:27, Tory M Blue wrote:
Afternoon
Have a question, is there a negative to running -k rotate more than
once a day?
All your active connections will pause while Squid deals with the logs.
Ahh wasn't
On Fri, May 27, 2011 at 1:39 AM, Amos Jeffries squ...@treenet.co.nz wrote:
On 27/05/11 10:05, Tory M Blue wrote:
Seems to be some confusion.
Stale vs non-stale content is a matter for the website cache control HTTP
headers. Squid *will* serve stale content according to RFC 2616.
Proxy
Hiya :)
I would like to have, via the primary squid instance, a method to grab a
local image to verify that squid is up and running.
I have F5s that I want to add a monitor to; if the squid box goes down,
take it out of the VIP. However I can't have the monitor query the
squid box and have it pull
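(For the archive: one way to do this without touching any origin is to point the monitor at one of Squid's internally generated icons, the same squid-internal-static trick mentioned further up. An illustrative F5 HTTP monitor send string, hostname being an example only:
GET /squid-internal-static/icons/SN.png HTTP/1.1\r\nHost: cache01.domain.net\r\nConnection: close\r\n\r\n
and match on "200 OK" in the receive string.)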
I've got weird load behavior that crops up and this box is only
running squid. I am close to what I set my cache_dirs to in terms of
size, so wondering if that's it.
Just trying to figure out why my server will run at a load of 1-1.5
and the next thing it's up to 5-6, with no real increase in traffic.
On Wed, May 25, 2011 at 8:01 PM, Amos Jeffries squ...@treenet.co.nz wrote:
High and low watermarks for the disk were at 85 and 95%; bumped it just to
The watermark difference and total size determine how much disk gets erased
when it overflows. Could be lots or not much. Very likely this is it.
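(For reference, those knobs are cache_swap_low and cache_swap_high; a narrower band, hedged example below, makes each cleanup pass erase less at a time:
cache_swap_low 90
cache_swap_high 95
)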
On Wed, May 25, 2011 at 9:03 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On Wed, 25 May 2011 20:27:05 -0700, Tory M Blue wrote:
On Wed, May 25, 2011 at 8:01 PM, Amos Jeffries squ...@treenet.co.nz
wrote:
backup, so I was leery. CPU cycles sure, but the squid process shows:
PID USER PR
On Tue, Apr 5, 2011 at 12:28 PM, Tory M Blue tmb...@gmail.com wrote:
On Tue, Apr 5, 2011 at 12:32 AM, Amos Jeffries squ...@treenet.co.nz wrote:
On 05/04/11 17:09, Tory M Blue wrote:
Problem is that this is happening in every cache server. Even if I
start clean I get these. What debug level
On Tue, Apr 5, 2011 at 12:32 AM, Amos Jeffries squ...@treenet.co.nz wrote:
On 05/04/11 17:09, Tory M Blue wrote:
Problem is that this is happening in every cache server. Even if I
start clean I get these. What debug level/numbers can I use to track
this down? This happens constantly, so ya
What does "storeClientReadHeader: no URL!" mean? What is it telling me?
I'm seeing this quite a bit and can't find with normal searches what
this means or what is causing it.
Thanks
Tory
2011/04/04 10:18:45| storeClientReadHeader: no URL!
2011/04/04 10:18:49| storeClientReadHeader: no URL!
On Mon, Apr 4, 2011 at 4:10 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On Mon, 4 Apr 2011 10:24:14 -0700, Tory M Blue wrote:
What does storeClientReadHeader: no URL! mean, what is it telling me
I'm seeing this quite a bit and can't find with normal searches what
this means, what
Problem is that this is happening in every cache server. Even if I
start clean I get these. What debug level/numbers can I use to track
this down? This happens constantly, so yeah, as you said, something is
going on, but it doesn't appear to be someone mucking with the cache
or some other oddity, since
On Tue, May 4, 2010 at 4:14 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On Tue, 4 May 2010 11:17:18 -0700, Tory M Blue tmb...@gmail.com wrote:
I'm seeing this error on occasion and trying to figure out how to
capture what is causing it.
2010/05/04 11:06:03| urlParse: Illegal character
I'm seeing this error on occasion and trying to figure out how to
capture what is causing it.
2010/05/04 11:06:03| urlParse: Illegal character in hostname '!host!'
!host!.
I've thought maybe it was actually in a URI but I've added access
logging with urlpath_regex -i \!host and nothing is
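(One hedged way to catch the offending request itself, if I have the debug section numbers right: section 23 covers URL parsing, so temporarily raising it, e.g.
debug_options ALL,1 23,3
and grepping cache.log around the urlParse lines may show the raw URL; drop it back afterwards since it can get chatty.)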
On Thu, Feb 18, 2010 at 12:27 AM, Henrik Nordstrom
hen...@henriknordstrom.net wrote:
On Wed 2010-02-17 at 21:40 -0800, Tory M Blue wrote:
And sorry, "sleeping" was just my way of saying the box shows no load,
almost no IO (4-5) when I'm hitting it hard. I do not see this issue
with fewer threads
On Tue, Feb 16, 2010 at 7:38 PM, Tory M Blue tmb...@gmail.com wrote:
/usr/local/squid/etc/squid/squid.conf ??
So it's really odd. Not getting anything to stdin/stdout
But don't want to get too into the config piece when the big deal
seems to be the congestion. Why more congestion
2010/2/17 Henrik Nordström hen...@henriknordstrom.net:
On Thu 2010-02-18 at 14:51 +1300, Amos Jeffries wrote:
Henrik seems to have re-appeared and he has more disk IO experience than
me, so he may have an idea what to look for ... ?
My first reaction is to run a small benchmark in parallel to
I'm starting to lose my mind here. New hardware test bed including a
striped set of SSDs.
Same hardware, controller, etc. as my other squid servers, just added
SSDs for testing. I've used default threads and I've built with 24
threads. And what's blowing my mind is I get the error immediately
upon
2010/02/16 14:18:15| squidaio_queue_request: WARNING - Queue congestion
2010/02/16 14:18:26| squidaio_queue_request: WARNING - Queue congestion
What can I look for, if I don't believe it's IO wait or load (the box
is sleeping)? What else can it be? I thought creating a new build with
24
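(For context, the aufs thread count is a configure-time knob; a hedged sketch of the relevant part of the build line, other options elided:
./configure --enable-storeio=aufs,ufs --with-aufs-threads=24 ...
)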
On Tue, Feb 16, 2010 at 4:45 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On Tue, 16 Feb 2010 16:24:22 -0800, Tory M Blue tmb...@gmail.com wrote:
2010/02/16 14:18:15| squidaio_queue_request: WARNING - Queue
congestion
2010/02/16 14:18:26| squidaio_queue_request: WARNING - Queue
congestion
/usr/local/squid/etc/squid/squid.conf ??
So it's really odd. Not getting anything to stdin/stdout
But don't want to get too into the config piece when the big deal
seems to be the congestion. Why more congestion with faster disks and
I'm just thinking if there is actually another config
Squid 2.7.STABLE7
F12
AUFS on a ext3 FS
6gigs ram
dual proc
cache_dir aufs /cache 32000 16 256
Filesystem Size Used Avail Use% Mounted on
/dev/vda2 49G 3.8G 42G 9% /cache
configure options: '--host=i686-pc-linux-gnu'
'--build=i686-pc-linux-gnu'
Greetings,
I just recently upgraded (or am in the midst of testing) and I note that
3 servers that I upgraded from 2.6 STABLE13 to 2.7 STABLE6 are
running 3-4x the load of the identical servers running the 2.6 STABLE
variety.
I was wondering what would cause this?
Should I stick with 2.6 stable 13
On Fri, May 2, 2008 at 6:17 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
On Wed, 2008-04-30 at 11:10 -0700, Tory M Blue wrote:
I was wondering if there was a way for Squid to pass on some basic
information to the server citing that the original request was Secure,
so that the backend
On Mon, May 5, 2008 at 9:23 AM, Tory M Blue [EMAIL PROTECTED] wrote:
On Fri, May 2, 2008 at 6:17 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
On Wed, 2008-04-30 at 11:10 -0700, Tory M Blue wrote:
I was wondering if there was a way for Squid to pass on some basic
information
On Fri, May 2, 2008 at 5:25 AM, Amos Jeffries [EMAIL PROTECTED] wrote:
You made the situation clear. I mentioned the only reasonably easy
solution.
If you didn't understand me, Keith M Richad provided you with the exact
squid.conf settings I was talking about before.
Obviously I have
On Thu, May 1, 2008 at 2:02 AM, Amos Jeffries [EMAIL PROTECTED] wrote:
You could make a second peer connection using HTTPS between squid and the
back-end server and ACL the traffic so that only requests coming in via SSL
are sent over that link. Leaving non-HTTPS incoming going over the old
I was wondering if there was a way for Squid to pass on some basic
information to the server citing that the original request was Secure,
so that the backend server will respond correctly.
Right now Squid takes and handles the SSL, passes back to the server
via standard http and the application
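(One approach in the 2.6/2.7 accelerator setups, hedged, is the front-end-https cache_peer option, which makes Squid add a "Front-End-Https: On" header toward the origin so the app can tell the original request was SSL; hostname illustrative:
cache_peer origin.domain.net parent 80 0 no-query originserver front-end-https=on
)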
On Jan 19, 2008 8:22 PM, Andrew Miehs [EMAIL PROTECTED] wrote:
What exactly was the three second delay? and what did F5 do to fix this?
Thanks
Andrew
Sorry Andrew for the delay.. I believe I posted this when I first had
the issue, but reposting so that it can be logged
.42 = Squid
.153 =
Okay
So still working thru some http 1.1 issues as we keep finding more
well that won't work..
Due to various bugs in IE 4-6 we have to return 1.1 or they get a
script error (it's a .js file).
Tested both on IE7 and IE6, and in both cases with 1.1 enabled the
page is fine. Once 1.1 is disabled,
On Jan 19, 2008 2:06 PM, Henrik Nordström [EMAIL PROTECTED] wrote:
2.7 has the support you need for this, assuming you speak of using Squid
as an accelerator/frontend server..
Regards
Henrik
How so, as you've read and provided further information re my gzip
workaround (thanks), I'm
I didn't notice that Squid does a nice thing, all things considered..
When the protocol is sent as HTTP 1.1 to the Squid cache, it rewrites
the request as HTTP 1.0 (changing the SERVER_PROTOCOL header), but
sticks the origin client protocol version into the Via header.
So for a snippet of our
On Jan 18, 2008 12:46 AM, Ash Damle [EMAIL PROTECTED] wrote:
Hello. Any pointers on how to get Squid to do gzip compression and then
e-tags when used as a reverse proxy cache.
Thanks
-Ash
Has to do with HTTP 1.1 vs gzip. But since Squid passes the HTTP 1.0
version to your origin
I'm running into more connection stacking, and while I solved my 3
second delay thanks to F5, I'm still seeing over 9000
connections on my web servers, all in TIME_WAIT and most of them from
Squid.
As I continue to look thru config options, kernel params, I noticed this;
Do not set
So I've discovered that much of my connection stacking is due to Squid
responding as 1.0 for everything; this has also caused some issues in
my app.
So before I abandon squid, since we must use gzip encoding and various
other 1.1 specific features, I'm wondering if there is a way to
capture and
On Dec 12, 2007 7:14 AM, [EMAIL PROTECTED] wrote:
Hello,
Hello :)
snmpwalk -m /usr/share/squid/mib.txt 127.0.0.1:3405 -c public cacheHttpHits
snmpwalk: Timeout (Sub-id not found: (top) - cacheHttpHits)
snmpwalk 127.0.0.1:3405 -c public -m /usr/share/squid/mib.txt
snmpwalk: Timeout
What
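(In case it helps the archive: timeouts like that usually mean the agent side isn't answering at all, so it's worth confirming squid.conf has the SNMP bits enabled; a hedged sketch, assuming the stock localhost ACL is defined:
snmp_port 3405
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
snmp_access deny all
and then walking the Squid subtree by OID, e.g.
snmpwalk -v2c -c public -m /usr/share/squid/mib.txt 127.0.0.1:3405 .1.3.6.1.4.1.3495.1
)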
I have some important information that I would like to log, like when
the origin servers or others disappear, or when squid times out trying to
connect to a peer, etc.
However I have a ton of information that my developers say can't be
removed (basically an http error): Dec 10 16:34:33 cache01
On Dec 10, 2007 5:18 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
Well, this is a critical error for the data connection.
A source server is pumping data into squid without proper HTTP header
information to say what it is.
The server is sending a Content-Length: header with the wrong length
On 10/20/07, Adrian Chadd [EMAIL PROTECTED] wrote:
On Sat, Oct 20, 2007, Tory M Blue wrote:
On this particular box
squid-2.6.STABLE12-1.fc6
I do have squid-2.6.STABLE13-1.fc6, installed on another test box
(have not tested this behavior)
Try it out; I think Henrik fixed that bug
are you trying this
with?
Adrian
On Fri, Oct 19, 2007, Tory M Blue wrote:
Sorry yet another question.
I am using origin hosts or vhosts for my cache_peers (not talking to
other caches).
What I've found, is that in my test environment, if I take the origin
server or vhost down
On 10/15/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
Probably you have a TCP connection based load balancer instead of one
that balances on actual traffic, and the Netcaches have persistent
connections disabled..
See the client_persistent_connections and persistent_request_timeout
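(A hedged sketch of those directives, values illustrative, to bring Squid closer to the behavior described for the Netcaches:
client_persistent_connections off
server_persistent_connections on
persistent_request_timeout 30 seconds
)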
Trying to figure out how I can reduce the connections sitting around on
my Squid boxes.
I'm still running with both Netcaches and a few Squid boxes and what
I'm seeing in my loadbalancer is that the Netcaches have 50% less
connections at any given time than the Squid boxes. Also the Netcache
, Tory M Blue wrote:
I'm not sure what is going on and have done so much tracing that I've
just probably confused things more than anything else.
I'm running Squid Cache: Version 2.6.STABLE12, on Fedora Core 6.
It's configured to point to a single parent (which is a Virtual IP on
the LB
On 7/27/07, Adrian Chadd [EMAIL PROTECTED] wrote:
Have you looked at it through tcpdump?
Those sorts of delays could be simple stuff like forward/reverse
DNS..
Adrian
Adrian, I have used a straight IP instead of the VIP name with no change
in symptoms, so it's not DNS.
I've done tcpdumps and
I'm not sure what is going on and have done so much tracing that I've
just probably confused things more than anything else.
I'm running Squid Cache: Version 2.6.STABLE12, on Fedora Core 6.
It's configured to point to a single parent (which is a Virtual IP on
the LB) with multiple servers
On 7/27/07, Adrian Chadd [EMAIL PROTECTED] wrote:
On Fri, Jul 27, 2007, Tory M Blue wrote:
Adrian, I have used a straight IP instead of the VIP name with no change
What's debugging on the F5 say?
(I've not got an F5 so I can't do any testing at my end..)
the tcpdumps are showing a reset
I have working squid 3.0 boxes, well, I think they are working and feel
like they are working, but as I dive further and further into my
configs and the user guides, I find that I have some gum holding
things together.
So my second post...
I currently have a squid config with 3 http_port accel
Good morning, afternoon and or evening.
I am either not searching correctly or, nahhh, I've failed to locate
something that must be out there I'm sure.
Squid acl's.
I would like to match on the URI and the URL; I would like to apply
no-cache rules to a domain matching a specific URL.
Example:
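(Assuming the goal from the description, a hedged example of the usual pattern, domain and path being placeholders:
acl nocache_dom dstdomain .example.com
acl nocache_path urlpath_regex -i ^/private/
cache deny nocache_dom nocache_path
which stops caching only when both match; older 2.x configs spell the last directive no_cache.)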