We have a squid 3.1.1 box running transparently. Certain hostnames for
local web servers are not getting resolved by the name server which squid
and the local system are properly pointed at.
If the users point their web browser to the ip address of these web
servers, they can access them just fine.
Getting the following error when attempting to access:
http://hiring.monster.com/jobs/createtitle.aspx?:
The request or reply is too large.
If you are making a POST or PUT request, then your request body (the thing
you are trying to upload) is too large. If you are making a GET request,
then
Actually, I originally tried that, although "coss" was first in my list. I
assume that doesn't matter.
I took your idea and ran with it, though. I got things to compile
properly when I omitted the quotes.
So: --enable-storeio=coss,ufs,aufs,null
Thanks for everyone's suggestions.
Adrian Chadd
Okay, I have the latest stable source.
The exact same issue continues.
"Amos Jeffries" <[EMAIL PROTECTED]> wrote on 01/22/2008 10:21:46 PM:
> > My build/configure command:
> > ./configure --prefix=/services/proxy --enable-icmp --enable-snmp
> > --enable-cachemgr-hostname=kmiproxy01 --enable-arp-a
My build/configure command:
./configure --prefix=/services/proxy --enable-icmp --enable-snmp
--enable-cachemgr-hostname=kmiproxy01 --enable-arp-acl --disable-select
--disable-poll --enable-epoll --enable-large-cache-files
--disable-ident-lookups --enable-stacktraces --with-large-files
--enable-
As of 2004, the COSS storage module was "experimental" and "not intended
for everyday use". This, according to "Squid: The Definitive Guide"
(O'Reilly).
With the default UFS module enabled, we constantly run into issues when
squid reaches its maximum storage limit. When I say issues, I mean
Amos Jeffries <[EMAIL PROTECTED]> wrote on 01/10/2008 07:53:44 AM:
> [EMAIL PROTECTED] wrote:
> > I have been asked to continue proxying connections out to the
Internet,
> > but to discontinue caching web traffic.
> >
> > After reading the FAQ and the config guide (2.6STABLE12) I found that:
Manoj_Rajkarnikar <[EMAIL PROTECTED]> wrote on 01/10/2008 12:13:15 AM:
>
>
>
> On Wed, 9 Jan 2008, [EMAIL PROTECTED] wrote:
>
> > I have been asked to continue proxying connections out to the
Internet,
> > but to discontinue caching web traffic.
> >
> > After reading the FAQ and the config gu
I have been asked to continue proxying connections out to the Internet,
but to discontinue caching web traffic.
After reading the FAQ and the config guide (2.6STABLE12) I found that:
'cache_dir null' Is the approach. It's failing.
The error is: Daemon: FATAL: Bungled squid.conf line 19: cache
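For what it's worth, the null store type has to be compiled in and, if I
remember the 2.6 syntax right, still takes a dummy directory argument;
leaving that argument off is an easy way to bungle the line. A sketch
(the path is just a placeholder):

```
# squid.conf -- proxy without caching to disk (squid 2.6)
# requires a build with "null" in --enable-storeio
cache_dir null /tmp
```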
"Please don't top post"? I'm not sure what you mean.
Chris Robertson <[EMAIL PROTECTED]> wrote on 03/15/2007 06:22:35 PM:
> [EMAIL PROTECTED] wrote:
> > access.log stores the time/date stamp as: nnn.nnn where 'n' is a
digit
> > between 0 and 9.
> >
> > I'd like to read timestamps in human-re
access.log stores the time/date stamp as: nnn.nnn where 'n' is a digit
between 0 and 9.
I'd like to read timestamps in human-readable form. :-)
Like I said, there was a simple perl command to convert it. I just don't
know where to find it.
Henrik Nordstrom <[EMAIL PROTECTED]> wrote on 03
I know I've had to ask this before, but I went to the FAQ and searched for
UTC and couldn't find what I'm looking for.
Someone, quite a while back, sent me a utc.pl script to convert standard
input from UTC to GMT.
Can someone point me to that script? Google was frustrating because UTC
was fo
What is your cache.log showing?
And your access.log, particularly entries related to sites that "don't
return a response".
pierre <[EMAIL PROTECTED]> wrote on 10/05/2006 11:08:10 AM:
> Hello,
>
> I m a newbie with squid.
> I just installed it on a Freebsd station with 2 interfaces (one on
> in
I have a list of IP addresses from which I want to allow access to a
specific number of internet addresses.
Can someone help get me started with this?
Thanks,
Tim Rainier
Did you try blocking: ".playboy.com" ?
"Dave Mullen" <[EMAIL PROTECTED]> wrote on 06/09/2006 04:09:11 PM:
> Fellow Users,
>
> I have squid running with a blacklist, but I seem to have found an issue
with
> my config. The blacklist lists a domain, but it's not blocking any
subdomains
> of that
We've a situation at our facility where specific clients sit in a static IP
address block. These clients are considered "restricted" and I need a way
to get these clients to access a set of websites that I've defined.
There's probably 20 or 30 sites.
Can I get some recommendations on how to do th
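One plain-squid way to start (the subnet, acl names, and file path are all
examples to adapt):

```
# squid.conf sketch: restricted clients may only reach listed sites
acl restricted src 10.1.2.0/255.255.255.0
acl allowed_sites dstdomain "/etc/squid/allowed-sites.txt"
http_access allow restricted allowed_sites
http_access deny restricted
```

with /etc/squid/allowed-sites.txt holding one domain per line, e.g.
".example.com".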
It's simply telling you that the peer squid box was not compiled to
support digest mode, but this squid box was and you have digest mode
enabled for it.
If you really need digest mode, recompile the peer squid box to support
digest mode. :-)
Tim Rainier
news <[EMAIL PROTECTED]> wrote on 05
Nor will it. Those IM applications are designed to work around firewalls
and blocking mechanisms. They'll even use port 80 to communicate, if they
have to.
If you really want to block IMs (it's debatable whether doing so is truly
worth the effort), you need to use an Intrusion Detection System
The screen blanking issue could be related to hitting the SysReq key.
Out of curiosity, how do the following log files look:
/var/log/messages
cache.log (located in var/logs/ under the root of your squid directory)
/etc/crontab
ls -la /etc/cron.weekly
Tim
"Neil A. Hillard" <[EMAIL PROTECTED]>
Yes, it does. It won't always find a cached version though.
In either case, it still ends up direct.
"Joost de Heer" <[EMAIL PROTECTED]>
04/18/2006 02:42 PM
Please respond to
[EMAIL PROTECTED]
To
[EMAIL PROTECTED]
cc
squid-users@squid-cache.org
Subject
RE: [squid-users] proxy.pac
[EMAI
Not true at all. The web browser tries to access the configuration
script. If it doesn't get to it, the request is submitted directly.
We wouldn't have been able to use the functionality otherwise.
"Jason Gauthier" <[EMAIL PROTECTED]> wrote on 04/18/2006 12:45:29 PM:
> >
> > Yes, the truncat
Yes, the truncating problem was simple to work around. Just copy
proxy.pac to proxy.pa.
I take back my earlier comment that autodiscovery is not supported in other
browsers.
I stand by my recommendation, however, to use the configuration script, as
opposed to autodiscovery.
Merton Campbell Croc
I think it's important to note that WPAD (Proxy Autodiscovery) is a hosed
implementation in Internet Explorer.
You'll notice that few other browsers even have the functionality.
WPAD, ("Automatically Detect Settings" check box in IE) was established to
either set a DNS entry for wpad.domain.com
Blacklists are not restricted to domains (at least SquidGuard's isn't).
Obviously that would be ineffective.
SquidGuard's regex matching works great, for one. And its URL blocking
is especially effective at blocking sites that continue to register new
domains to evade bans (They're not all
If you use the canned lists from SquidGuard, you're good to go.
3rd party blacklists have a tendency to be illegitimate. I found one
person that had geocities.com in the blacklist.
I strongly disagree with that entry.
However, this does not diminish the effectiveness of redirectors. They
work
Squid can do those things with a redirector. Like DansGuardian or
SquidGuard.
Check these out: http://www.squid-cache.org/related-software.html
Tim Rainier
S t i n g r a y <[EMAIL PROTECTED]>
01/23/2006 09:36 AM
To
squid
cc
Subject
[squid-users] Can this be done ?
Hello all
i am
Okay, once again,
I mistyped the -z. I'm not using -z, I'm using -k reconfigure.
Matus UHLAR - fantomas <[EMAIL PROTECTED]> wrote on 01/19/2006 04:17:31
AM:
> > > I realize this isn't normal. That's why I asked the question. Are
you
> > > using SquidGuard too?
>
> On 18.01 20:21, Mark El
[EMAIL PROTECTED] wrote on 01/18/2006 01:42:06 PM:
> Mark Elsen <[EMAIL PROTECTED]> wrote on 01/18/2006 11:53:33 AM:
>
> > > Squid Version: squid/2.5.STABLE12
> > >
> > > I've configured a proxy script that my clients point to. It reads
as
> > > follows:
> > >
> > > function FindProxyForURL(ur
Mark Elsen <[EMAIL PROTECTED]> wrote on 01/18/2006 11:53:33 AM:
> > Squid Version: squid/2.5.STABLE12
> >
> > I've configured a proxy script that my clients point to. It reads as
> > follows:
> >
> > function FindProxyForURL(url, host)
> > {
> > if (isPlainHostName(host) || isInNet(host
Squid Version: squid/2.5.STABLE12
I've configured a proxy script that my clients point to. It reads as
follows:
function FindProxyForURL(url, host)
{
if (isPlainHostName(host) || isInNet(host, "172.24.0.0",
"255.255.0.0")
|| isInNet(host, "192.168.
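For reference, a complete function of that shape might look like the
following; the second subnet and the proxy host/port are guesses, since
the post is cut off, and isPlainHostName/isInNet are helpers the browser
supplies to PAC scripts:

```javascript
function FindProxyForURL(url, host)
{
    // Plain host names and internal subnets bypass the proxy.
    if (isPlainHostName(host)
        || isInNet(host, "172.24.0.0", "255.255.0.0")
        || isInNet(host, "192.168.0.0", "255.255.0.0"))
        return "DIRECT";
    // Everything else goes through squid, falling back to direct.
    return "PROXY proxy.example.com:8000; DIRECT";
}
```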
I'm surprised squid even recovered from trying to do an acl for every
blacklist entry.
Use squidguard, it is very simple to use.
www.squidguard.org
Tim
Christoph Haas <[EMAIL PROTECTED]>
01/17/2006 02:28 PM
To
squid-users@squid-cache.org
cc
Subject
Re: [squid-users] blacklist
On Tues
Look into SquidGuard or DansGuardian and:
http://www.squid-cache.org/related-software.html
Tim
David Lynum <[EMAIL PROTECTED]> wrote on 01/10/2006 01:13:45 PM:
> Dear List,
>
> I've created ACL's in squid to keep my users from going to certain
> websites during certain parts of the day. The
We use a configuration script (proxy.pac), which IE and firefox clients
access via the "autoconfiguration script" setting in both browsers.
Our router is configured to deny access to the internet unless the
requests are coming from the proxy server.
Windows also has an internal proxy function c
> > hello gurus,
> >
> > Is it possible to stop spyware entering in to the
> > network with help of squid.
> >
>
> Basically not because SQUID , only deals with the http transport
> layer. You can use , a virus scanning box ; in as a parent for
> your SQUID sever.
>
> Use adequate anti virus prot
I receive the following email from squid semi-frequently:
From: squid
To: [EMAIL PROTECTED]
Subject: The Squid Cache (version 3.0-PRE3-20050510) died.
You've encountered a fatal error in the Squid Cache version
3.0-PRE3-20050510.
If a core file was created (possibly in the swap directory),
please
"Caceres" <[EMAIL PROTECTED]> wrote on 12/02/2005 09:09:08 AM:
> Hi,
> Squid work or dosen't work in IPv6?
It works.
> Mark you already test squid HTTP prxy in IPv6 enviorments??
>
> Regards,
> Paulo Ferreira
>
> ./Caceres
> -
> [EMAIL PROTECTED]
>
> > Mark Elsen <[EMAIL P
Mark Elsen <[EMAIL PROTECTED]> wrote on 11/30/2005 01:14:43 PM:
> > Hi, I have a question for you.
> >
> > Squid supports HTTP and FTP proxying over IPv6?
>
> No.
>
No? Squid 2.5, in the least, supports http over IPv6.
Not sure on FTP.
> >
> > I'm searching a proxy Server to perform HTTP an
Squid should not be getting in way of these applications, unless they
require some sort of http transaction in order for them to work.
If the latter is the case, you should be able to configure them to access
the web via http through a proxy server.
Are you using your proxy transparently?
Tim R
I know it looks at the FROM header and maybe this note didn't need to go
to the whole list.
My point was that every time (s)he asks a question and someone wants to
answer it, they'll get this
hideous response back, unless they've answered it before.
Tim Rainier
H <[EMAIL PROTECTED]>
11/29/
Can I suggest that if you plan to take part in email-based discussions
that you not use a hideous mechanism like
this to manage your email?
Using a service like UOL Antispam for personal use is fine.
Using it on public email lists is not. That service you're using requires the
following in order for us t
The CPU doesn't really play all that much of a role in the performance of
Squid. (Obviously faster CPUs are nice, but really not important with
squid)
Disk I/O and Memory are much more important than the speed of the cpu.
Disk size and memory size are contingent on each other.
Unfortunately, i
The "disk space is over limit" error is not saying the disk is full. The
cache has reached the limit that's been set in the squid.conf file.
It could be causing squid to die, but how likely is it that this would be
the cause, if squid dies 6 minutes after every hour?
My suggestion is to check a
> I am not sure if I am using both my hardware resources and my squid.conf
> properly, especially with regards to: cache_dir ufs /usr/squidcache 8192
16
> 256
In terms of cache_dir, it looks fine. (assuming you're not using veritas
volume manager on the partition from which you're running yo
How ironic that you sent that message from an MSN account. :-)
Basically, good luck.
I would block the standard mail ports, then use a content filter to block
the html-based email sites.
But you'd have to do them manually for each and every site. It's not
practical and it's a lot of work.
An
> I have a brand new Gentoo Linux install set up with the following:
>
> Arno's Firewall 1.8.4d is firewalling my internet connection and
> forwarding all outgoing port 80 traffic through a transparent proxy
> setup.
Cool. Is it doing the same for outgoing port 443?
If not, that's why secure w
The error message, or a copy of cache.log would be a good start.
Second, you appear to be trying to accel an http server. Are you doing
this on purpose?
This is NOT proxying as you see it. This is used to speed up web servers
and should not be used for ordinary proxying.
This applies to all your http_accel entries.
> SMTP is allowed through your squid program itself, not the squid server.
This is not correct. Although it might be possible to pass email through
squid, squid does not natively
allow smtp proxying. Squid proxies and caches http traffic and nothing
more. Unfortunately, due to variations of ho
Uhm, yeah. Why aren't you trying to prevent this activity?
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"D & E Radel" <[EMAIL PROTECTED]>
10/25/2005 05:00 PM
To
<[EMAIL PROTECTED]>,
cc
Subject
Re: [squid-users] Spam mail through Squid server
If that really is the c
> > kalproxy:/var/log/squid # free -m
> >              total  used  free  shared  buffers  cached
> > Mem:          1007   995    12       0        4      33
> > -/+ buffers/cache:   957    50
> > Swap:         1027    18  1008
> I call this "running low
> On a side-note. Your 4x33 are set up as RAID or LVM?
> neither one is a good idea.
> http://www.squid-cache.org/Doc/FAQ/FAQ-3.html#ss3.11
Indeed. I was making sure he wasn't raiding his squid cache. :-)
> if your computes has enough of memory left for metadata cache (inodes
and
>directories
Oh. You're running 4 separate caches?
Yeah, I couldn't see why anyone would want to RAID squid cache. :-)
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"Rodrigo A B Freire" <[EMAIL PROTECTED]>
10/12/2005 01:24 PM
To
<[EMAIL PROTECTED]>
cc
Subject
Re: [squid-users] Which t
Oh yeah. I definitely see the advantages.
The fact is, we're small enough that it hasn't sorely affected us much at
all. My access log for squid grows to about 4-10 GB in a week.
I made it adamantly clear that I would only retain 1 week's worth of access
logging information.
When it comes down
Very cool! Thanx!
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"Chris Robertson" <[EMAIL PROTECTED]> wrote on 10/11/2005 06:09:53 PM:
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday, October 11, 2005 1:20 PM
> > To: squid
My guess is that, yes, you're filling the "/var" partition when you rotate
those logs.
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"Lucia Di Occhi" <[EMAIL PROTECTED]> wrote on 10/12/2005 09:12:17 AM:
> Here is the output from df -h
>
> FilesystemSize Used Avai
First off, there's no possible way my cache would "fill" the '/'
partition. There's a cache size directive in squid that's designed to
limit the amount of disk space usage.
Not to mention the fact that I have a utility script that runs every 15
minutes, which pages me if partitions are >= to 90
What is it about browsing the web that's not fast enough?
It could simply be that authentication routines are slowing it down.
Part of the whole reason behind caching data is to prevent having to
download popular sites/images/files/etc more than once.
For example, if 20 people request the current
Sorry. That's `df -h` as opposed to `du -h`.
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
[EMAIL PROTECTED]
10/11/2005 03:38 PM
To
squid-users@squid-cache.org
cc
Subject
Re: [squid-users] Crashed squid 2.5.STABLE11
First, and foremost, I would hesitate rotating the
I realize that and agree. My situation was screwy because of the server
I'm running squid on.
It has several internal partitions that are used for bios/post, which
prevented me from setting up partitions the way I wanted to.
Not to mention the fact that this was really just a test squid box that I
h
First, and foremost, I would hesitate to rotate the store log. Henrik, and
probably several others, can verify that notion.
Second, do a `du -h` and email the output back.
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"Lucia Di Occhi" <[EMAIL PROTECTED]>
10/11/2005 02:29 PM
What if the squid cache is stored on the "/" partition?
Wouldn't that be a hideous mistake to set "/" to 'noatime' ?
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
Henrik Nordstrom <[EMAIL PROTECTED]> wrote on 10/11/2005 10:07:21 AM:
> On Tue, 11 Oct 2005 [EMAIL PROTECTED] wrote
Does this file exist? -> /var/log/squid/store.log
Does the user running squid have permission to write to it?
Basically, do an ls -lah /var/log/squid
and paste the output into the reply email.
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"Lucia Di Occhi" <[EMAIL PROTECTED]>
This is more of a filesystem question than it is an operating
system/distro question.
Based on my research, the benchmarks on the web claim ReiserFS to provide
up to 15-20% faster results.
I've not had any time to do any benchmarking. My cache is currently
running on an ext3 partition running
Not much, no.
Seems to me I found a couple compilation errors when I first tried to
install it.
It was no big deal or anything.
If you run into trouble, you can contact me off-list since it's probably
beyond the scope of this list.
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED
Squid's url_regex is a hideously slow way of managing blackholed
urls/sites/domains.
I'm not necessarily blaming the program itself; the fact is, regular
expressions can be quite computationally expensive.
SquidGuard, on the other hand, is VERY fast and works quite well.
Lots of folks around here swear by D
That isn't transparent at all, actually. Set the environment variable
http_proxy to the ip address (or name) and port of your
squid machine, so wget requests are proxied, which is a requirement for
your testing purposes.
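For instance (the proxy name and port are examples):

```shell
# Tell wget about the proxy via the standard environment variable,
# then fetch something so the request shows up in squid's access.log.
export http_proxy="http://kmiproxy01:3128"
wget -O /dev/null http://www.squid-cache.org/
```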
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"Fabiano
3.0-PRE3-20050510
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
Mark Elsen <[EMAIL PROTECTED]> wrote on 10/10/2005 12:47:44 PM:
> > I'm getting the following in my messages log, quite frequently:
> >
> > Oct 10 07:46:31 kalproxy (squid): Squid has attempted to read data
from
Pardon the standard "is it plugged in?" question, but
Does wget know there's a proxy server it needs to go through?
Unless you're running the proxy via port 80 (or it's transparent), wget
does not appear to be going through a proxy, which would
make your test useless.
If your proxy is not se
I'm getting the following in my messages log, quite frequently:
Oct 10 07:46:31 kalproxy (squid): Squid has attempted to read data from
memory that is not present. This is an indication of of (pre-3.0) code
that hasn't been updated to deal with sparse objects in memory. Squid
should coredump.al
That's contingent on the audio player they're using, and the way you've
set up squid.
Is squid set up transparently?
Is the audio player(s) being used supportive of proxies?
If so, is the proxy parameter set?
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"david brown" <[EMAI
I would assume you'd need to do something similar to:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT
--to-port 3128
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
Sushil Deore <[EMAIL PROTECTED]>
10/04/2005 01:06 PM
To
Henrik Nordstrom <[EMAIL PROTEC
Can't get to the site at all, or it takes forever to load/is timing out?
If the latter is the case, consider the fact that the site is 22 hops away
(at least from me).
If it's the former, what is the error message? Connection Timed Out ?
Might need to boost the timeout setting because it liter
I'd be interested in seeing your squid.conf as well.
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"Chris Robertson" <[EMAIL PROTECTED]>
09/27/2005 04:11 PM
To
cc
Subject
RE: [squid-users] problem about squid exhaust all memory
> -Original Message-
> From: dj
DansGuardian has WAY more flexibility.
SquidGuard is WAY faster, in my opinion.
What's your priority in terms of filtering needs?
Tim Rainier
Information Services, Kalsec, INC
"Piszcz, Justin" <[EMAIL PROTECTED]>
09/26/2005 01:33 PM
To
"Odhiambo Washington" <[EMAIL PROTECTED]>,
cc
Subject
Seems to me you need to change:
tcp_outgoing_address 192.168.29.254 network_local
To:
tcp_outgoing_address network_local
Or am I not understanding your question? :-)
Tim Rainier
Information Services, Kalsec, INC
Fabio Silva <[EMAIL PROTECTED]>
09/23/2005 05:46 PM
Please respond to
Fabio Si
The only WCCP-specific compile-time option I'm aware of is to disable
WCCP.
If you want to use WCCP through a transparent proxy, I would assume that
you need to specifically compile squid using the two noted compile-options.
It sounds like, however, some of the linux distributions out there are
set
Heh
This actually sounds a lot like a retransmit issue.
From a shell (as root) try the following command:
while true
do
    netstat -s | grep retrans
    sleep 3
    clear
done
This will report the network retransmits occurring on the network
interface(s).
If you see this num
Jorge,
Squid requires specific compilation parameters if you plan to run the
cache as transparent:
--enable-ipf-transparent
or
--enable-pf-transparent
Respectively...
Did you use either of these?
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"Jorge A. Rodriguez" <[EMAIL
How are you attempting to start squid?
Tim Rainier
Information Services, Kalsec, INC
Daniel Navarro <[EMAIL PROTECTED]>
09/16/2005 11:29 AM
To
Squid Cache
cc
Subject
[squid-users] What means squid: [60G???
Hi all fellows,
My squid is not starting at bootime, and yes is
chkconfi
Personally, I'd use a proxy configuration script that exempts internal
requests from being proxied.
Then set your clients up to use the script.
Note that I'm not suggesting the use of WPAD. IE and firefox/mozilla, for
example, have an option in their network settings to
use an automatic proxy co
Added memory_replacement_policy heap LFUDA
Things appear to be up and running just fine now.
Just for good measure, am I missing anything else?
Tim Rainier
[EMAIL PROTECTED]
09/01/2005 12:43 PM
To
squid-users@squid-cache.org
cc
Subject
Re: [squid-users] Replacement/Removal Policy Type
Once I got the configure option sorted, I added the following line to
squid.conf:
cache_replacement_policy heap LFUDA
Is this correct?
If so, I'm missing something because squid continually page faults and bogs
the machine down. (average of 100 processes waiting on the CPU).
What gives?
Tim Ra
I take that back; I was mistakenly using quotes.
Your suggestion is working, thank you.
Tim Rainier
Mark Elsen <[EMAIL PROTECTED]>
09/01/2005 09:27 AM
To
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] Replacement/Removal Policy Type
On 9/1
Tried that already. It fails with a "no policy named 'heap' is available".
Tim Rainier
Mark Elsen <[EMAIL PROTECTED]>
09/01/2005 09:27 AM
To
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
cc
squid-users@squid-cache.org
Subject
Re: [squid-users] Replacement/Removal Policy Type
On 9/1/05, [EMAI
If I want to use the "heap LFUDA" replacement/removal policy, what needs
to go into the "list of modules" section of the --enable-removal-policies=
parameter for configuring/compiling?
I went into src/repl to look for policy names. Tried a few names by
guessing and am unable to come up with any
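If I remember the 2.5 source tree right, the module names under src/repl
are "lru" and "heap" (the heap module provides the LFUDA, GDSF, and LRU
heap variants), so the configure option would look like:

```
./configure --enable-removal-policies=heap,lru
```

after which "cache_replacement_policy heap LFUDA" in squid.conf should be
accepted.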
While we're out there talking about cache size limits, I need a
refresher
What is the general rule in terms of setting a cache size limit?
I, obviously, want the cachable space to be as large as possible, but seem
to experience a huge decrease in performance and increase in crashes as I
inc
I'd be interested in seeing your squid.conf file.
I too, expect your cache is dirty. Again, attaching an strace session to
squid might illustrate exactly why it's dying.
Also, how does the "-C" parameter affect squid?
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"John R. Va
The squid cache has reached its size limit and is being rebuilt to
compensate...
You could load squid in debug mode. There are a few debug mode options
available for you...
Just use squid -? to get a list of options.
Alternatively, you can attach an strace session to squid to see exactly
what
Yes:
-F    Don't serve any requests until store is rebuilt.
My guess is that squid is trying to rebuild the store, but is too busy
servicing requests.
If that doesn't work, you can manually clear the cache, the store file and
have squid rebuild the cache hierarchy (squid -z).
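Roughly, the manual route looks like this (the cache path is an example;
use whatever your cache_dir line points at):

```
squid -k shutdown              # stop the running squid cleanly
rm -rf /services/proxy/cache/* # wipe the cache_dir contents
squid -z                       # recreate the swap directory hierarchy
squid                          # start squid again
```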
Tim Rainier
In
This is the third "crash". No core files.
1.) Is there some specific explanation for why there isn't a core file?
2.) The cache/store logs give absolutely nothing that explains these
fatal errors at all. Is there something else I could look at?
Maybe attach an strace session to the running p
Actually, ./configure --help is quite sufficient at displaying
compile-time options and their descriptions.
I would start there.
Tim Rainier
Abdock <[EMAIL PROTECTED]>
08/17/2005 01:09 PM
To
squid-users@squid-cache.org
cc
Subject
[squid-users] new to squid
Dear All,
I need to set
All it ever reported was that the store was 1.5% rebuilt and then it would
show it starting back up.
Yes, that's correct, it never even reported in the log that it was quitting
and re-starting.
Tim Rainier
"Chris Robertson" <[EMAIL PROTECTED]>
08/05/2005 03:29 PM
To
cc
Subject
RE: [squid-
Squid's cache limit is set to 4GB.
When the cache fills up and squid attempts to rebuild, it dies and reloads
itself continually, failing to rebuild the cache.
My squid.conf is below:
squid.conf ---
cache_effective_user nobody
log_fqdn on
http_port 8000
i
First, and foremost, you'll need to use the --enable-ipf-transparent
config option when you run the configure.
Second, you'll want to search the Squid FAQ. There are some useful tips
out there about how to set that up.
Tim Rainier
Rodrigo Gesswein <[EMAIL PROTECTED]>
08/04/2005 02:00 PM
Pl
This issue has been discussed numerous times on this list.
For an archive search, try:
http://www.google.com/search?q=site:squid-cache.org+%2B%22Windows+Update%22&hl=en&lr=&start=30&sa=N
Tim Rainier
"Matt Ashfield" <[EMAIL PROTECTED]>
08/04/2005 11:36 AM
Please respond to
<[EMAIL PROTECTED]>
Why not use Log Rotations?
Or is this not a *nix box?
Tim Rainier
"Carlos Eduardo Gomes Marins" <[EMAIL PROTECTED]>
08/03/2005 04:50 PM
To
cc
Subject
[squid-users] 407 Error
Hi all,
Due to a large number of users (5000) and lack of disk space for
logging, I'm trying to find out ho
Completely the opposite for us.
At the time of testing, XP was the only machine that seemed to work
consistently.
We only had a couple XP machines at that time.
Tim Rainier
Merton Campbell Crockett <[EMAIL PROTECTED]>
08/01/2005 10:28 PM
To
Rodrigo A B Freire <[EMAIL PROTECTED]>
cc
squid-us
I should've clarified that.
We used the DNS records for AutoDiscovery.
I'm not sure if that matters, but we didn't use DHCP.
Tim Rainier
Information Services, Kalsec, INC
[EMAIL PROTECTED]
"Rodrigo A B Freire" <[EMAIL PROTECTED]>
08/01/2005 08:33 PM
To
cc
Subject
Re: [squid-users] proxy.p
Yes, only when using WPAD.
Although some of the proxy.pa requests did make it to the webserver, the
majority of those requests required me to
actually sniff the machines manually to find them.
I wonder if they resolved the issue in a relatively recent service pack
or something similar.
I kn
What's your config look like?
Tim
Joe Acquisto <[EMAIL PROTECTED]>
08/01/2005 03:21 PM
To
squid-users@squid-cache.org
cc
Subject
[squid-users] acl issues
Still chasing getting PC restrictions to work.
I just don't get it. I have acl's defined, and I can see it checking
them, in the
I'd reply to the question sent to the list, but I deleted it already.
There's a bug in IE that truncates the last character of the
autoconfiguration file.
The problem is the packet which requests that file sometimes gets
fragmented, not always.
This essentially causes IE to request two files: