When Squid receives a signal for reconfiguration,
it restarts all ufdbGuard processes, and it seems that the
newly started ufdbGuard processes rebuild the database.
I am the (biased) author of ufdbGuard.
ufdbGuard is faster, has more features, and also
does not have the problem that is described
below.
Hi Lee,
I am the author of ufdbGuard.
ufdbGuard is based upon squidGuard 1.2.x and is heavily
modified, is a lot faster and has new features that
squidGuard does not have.
I suggest that you try it. It is free.
-Marcus
Lee Higginbotham wrote:
Good afternoon,
We currently have squid 3.0
Hi Sean,
You cannot have 2 or more ACLs matching the same source.
The first ACL for source 'client' is matched for a PC with
IP address range 10.0.0.0 - 10.0.255.255 and then
the 'pass rule' is used to make a decision on whether
to block or not.
The second ACL for 'client' is never used.
The sol
Hi Hims,
I am the author of ufdbGuard which is based on squidGuard.
ufdbGuard is free software which does 5 URL lookups/sec
on a recent CPU and has no problems with large databases.
-Marcus
hims92 wrote:
hello,
I performed the tests (to block sites using squidguard) with fewer
domains
my 2 cents:
someone needs to explain how to set a breakpoint
because when the assertion fails, the program exits
(see previous emails: Program exited with code 01)
The question is where to set the breakpoint
but probably Amos knows where to set it.
Marcus
Silamael wrote:
What are the values for the parameters cache_swap_low and cache_swap_high ?
For a large cache it is recommended to have them close to each other. E.g.
cache_swap_low 90
cache_swap_high 91
You can also add
refresh_pattern (cgi-bin|\?) 0 0% 0
since dynamic pages should not be cached.
Luis Daniel Lucio Quiroz wrote:
On Wednesday, 30 September 2009 at 11:14:43, Marcus Kool wrote:
What are the values for the parameters cache_swap_low and cache_swap_high ?
For a large cache it is recommended to have them close to each other. E.g.
cache_swap_low 90
cache_swap_high 91
You can
in case it is not clear: the 'aufs' option for cache_dir is much faster
than the 'ufs' which you are using now.
Marcus
George Herbert wrote:
Multiple hard disks, and spreading out Squid's logs and cache dirs
onto separate disks, helps a lot.
The big prod squid environment I was running for a w
Matt,
Setting read_timeout to 1min and connect_timeout to 20sec should do the trick.
And I recommend looking for users who download large files or
watch CNN video news all day long.
Marcus
Matthew Young wrote:
Hello all
I have a group of proxy users who are not technical at all, and it is
v
Everybody is entitled to their own opinion and I respect that.
I agree that a company should have an internet usage policy and
communicate it clearly to all staff.
Nevertheless, there are many persons who simply do not obey such a
policy, and tracking those persons consumes too much time from
There are over 75000 proxy sites and every day new ones appear.
There are numerous Yahoo groups, Google groups and mailing lists
who distribute new proxy sites every day.
Sure, a network admin can make it a full daytime job to
run the race against the clock; block used proxy sites and block
tomor
Henrik Nordstrom wrote:
On Wed 2009-11-04 at 09:59 -0200, Marcus Kool wrote:
A URL filter is definitely a good option and bound to be a success.
Sorry if you got the impression that I think URL filters are a bad idea.
I do not. Just that implementing URL filters alone without also having a
Ultrasurf can be blocked by ufdbGuard, a free URL rewriter for Squid.
ufdbGuard uses various techniques to block Ultrasurf:
- verifying the HTTPS connections by opening a new HTTPS connection
and checking if the other side speaks SSL+HTTP
- blocking HTTPS to sites without a FQDN in the URL
- block
Robert Collins wrote:
On Mon, 2009-11-23 at 21:40 -0500, Linda Messerschmidt wrote:
Maybe. We would like to diagnose this problem and fix it properly,
but if
it's too much hassle you can go that way.
It would definitely be my preference to diagnose and fix the problem
and I can live with a fai
Linda started this thread with huge performance problems
when a Squid process with a size of 12 GB forks 15 times.
Linda emailed me that she is doing a test with
vm.pmap.pg_ps_enabled set to 1 (the kernel will
transparently transform 4K pages into superpages)
which gives a big relief for TLB management
an
ve requests from Squid.
option 3 is most likely very simple but it is unknown how much it helps.
option 4 is simple, but depending on the functionality
of the rewriter, it is or is not acceptable. You need to experiment
to see if it helps.
Marcus
Linda Messerschmidt wrote:
On Tue, Nov 24, 200
Linda Messerschmidt wrote:
On Wed, Nov 25, 2009 at 7:43 AM, Marcus Kool
wrote:
The result of the test with vm.pmap.pg_ps_enabled set to 1
is ... different than what I expected.
The values of vm.pmap.pde.p_failures and vm.pmap.pde.demotions
indicate that the page daemon has problems creating
Stripes need to be larger than the average object size to have
concurrent access to more than one object at the same time.
The *average* object size is 13 KB so to be on the safe side
I would use a stripe size of 32K or more.
The optimal size also depends on the file system type that you use.
M
simplest rescue until
the Squid developers come with a solution.
Marcus
Linda Messerschmidt wrote:
On Wed, Nov 25, 2009 at 11:18 AM, Marcus Kool
wrote:
The FreeBSD list may have an explanation why there are
superpage demotions before we expect them (when there are no forks
and no big demands for
John Doe wrote:
From: Matus UHLAR - fantomas
On 08.12.09 02:41, John Doe wrote:
Yes but, as long as squid does not handle disk crashes gracefully, I am
stuck with RAID...
what kind of RAID? for mirrors, you don't need stripe size. Stripes aren't
safer than single disks. RAID5 is slow, unles
It depends on the number of disks that you use for the cache on disk.
As a rule of thumb: 10 I/Os per disk is fine, so 10 threads per disk.
Only if you use very high performance disk arrays should you
increase the number of threads per (logical) disk.
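The rule of thumb above can be sketched as a quick calculation (a rough heuristic restating the advice in this post; the function name is mine, not an official formula):

```python
# Rough heuristic from the post above: about 10 concurrent I/Os per
# physical cache disk, hence roughly 10 aufs threads per disk.
# (Illustrative sketch only; "suggested_aio_threads" is a made-up name.)
def suggested_aio_threads(cache_disks: int, ios_per_disk: int = 10) -> int:
    """Estimate an async-I/O thread count for a number of cache disks."""
    return cache_disks * ios_per_disk

print(suggested_aio_threads(3))  # 3 dedicated cache disks -> 30 threads
```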
Marcus
J. Webster wrote:
Would this dramatic
Hi Ismail,
I would add a redirect statement to the int_net acl rule.
observation: blocking porn without blocking proxies is the same as blocking
nothing.
You might want to try ufdbGuard: it is faster than squidguard, and has
additional features for enforcing Google SafeSearch and verifying
HTTP
Ismail,
ufdbGuard is free.
It can be used with a free URL database and
with a commercial database.
-Marcus
İsmail ÖZATAY wrote:
Marcus Kool wrote:
Hi Ismail,
I would add a redirect statement to the int_net acl rule.
observation: blocking porn without blocking proxies is the same as
Hi Martin,
Squid is a little awkward:
the URL returned by squidguard must have the same protocol as the original URL.
So for a URL with HTTPS protocol, squidguard must return a URL that uses the
HTTPS protocol.
This is really not nice but the workaround is to use a 302 redirection:
redirect
Or use a commercial URL filter from URLfilterDB.
Marcus
Amos Jeffries wrote:
Johnson, S wrote:
Anyone have recommendations for a URL filtering list through squid?
Yes. Don't
Or if you do, use a well maintained one, such as SURBL.
Amos
ufdbGuard can block Skype.
ufdbGuard is a free URL redirector which works with Squid.
Blocking Skype is based on SSL connection verification
and since Skype uses port 443 but has no SSL handshake,
the connection is blocked when the option
enforce-https-official-certificate is set ON.
Note that
Ricardo,
You cannot do it with a transparent proxy.
If you want Squid to handle https traffic, you must
use Squid in a non-transparent setup.
-Marcus
Ricardo Augusto de Souza wrote:
I am still not able to block https sites.
I tested all you suggested here.
I am using transparent proxy. I am re
I am the author of ufdbGuard which is based on squidGuard.
ufdbGuard is free and can be used with both free and commercial databases.
-Marcus
a bv wrote:
Hi,
What are the popular / commonly used open source (and maybe also
other free) URL/content filtering solutions/software? And
The story about Squid and HTTP 1.1 is long...
To get your LiveUpdate working ASAP you might want to
fiddle with the firewall rules and NOT redirect
port 80 traffic of Symantec servers to Squid, but
simply let the traffic pass.
Nathan Eady wrote:
Okay, we've got port 80 traffic going transpar
The ACL blocks URLs that end with .com,
i.e. it blocks a URL like www.example.com while it does not block
www.example.com/index.html
If you change the patterns to include a slash you are fine:
the slash prevents bare domains ending in .com from being matched.
e.g.
.*\.com$ becomes .*\..*/.*\.com$
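The effect of adding the slash can be checked with a few lines of Python (a sketch only; Squid's url_regex uses POSIX regular expressions, but these particular patterns behave the same under Python's re module):

```python
import re

# Original pattern: blocks any URL that ends in ".com", including bare domains.
old_pattern = re.compile(r".*\.com$")
# Fixed pattern: requires a slash, so only paths ending in ".com" are blocked.
new_pattern = re.compile(r".*\..*/.*\.com$")

print(bool(old_pattern.match("www.example.com")))            # True: bare domain blocked
print(bool(new_pattern.match("www.example.com")))            # False: bare domain passes
print(bool(new_pattern.match("www.example.org/setup.com")))  # True: .com file still blocked
```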
of technologies want
to evaluate technical professionals based on their own lack of knowledge
--- On Sat, 3/28/09, Marcus Kool wrote:
From: Marcus Kool
Subject: Re: [squid-users] .com extension blocking causing blocking of
redirecting
Try ufdbGuard. It has the script that you asked for and
built-in enforcement of Google SafeSearch
HTTPS tunnel detection
enforcement of safer HTTPS traffic
Marcus
Amos Jeffries wrote:
Thys de Beer wrote:
HI All,
I am terrible with cgi scripts, in fact null and void ... where do i get
a scr
Landy,
If you are desperate for bandwidth I suggest to block
ads (e.g. a.rad.msn.com) and 'user behaviour analysis'
(e.g. scorecardresearch.com).
Furthermore, you may consider blocking mp3 files.
Depending on what type of users you have, this can save
a lot of bandwidth.
Marcus
Landy Landy wr
Landy Landy wrote:
If you are desperate for bandwidth I suggest to block
ads (e.g. a.rad.msn.com) and 'user behaviour analysis'
(e.g. scorecardresearch.com).
Furthermore, you may consider blocking mp3 files.
Depending on what type of users you have, this can save
a lot of bandwidth.
Blockin
acl blockanalysis01 dstdomain .scorecardresearch.com .google-analytics.com
acl blockads01 dstdomain .rad.msn.com ads1.msn.com ads2.msn.com ads3.msn.com ads4.msn.com
acl blockads02 dstdomain .adserver.yahoo.com pagead2.googlesyndication.com
http_access deny blockanalysis01
http_access deny blockads01
http_a
Kinkie wrote:
On Thu, Feb 25, 2010 at 5:19 PM, Denys Fedorysychenko
wrote:
On Thursday 25 February 2010 13:42:52 Amos Jeffries wrote:
My opinion of RAID behind Squid is very poor. Avoid if at all possible.
HW RAID is claimed to be workable though, particularly as the price
range and quality
Michel,
Proxies are the URL filter circumventors, so if you want
to use a URL filter, you should always block proxies.
Henrik stated in a separate response that some browsers have
problems with HTTP 302 redirect responses. I have no access
to all types of web browsers, and Microsoft Internet Ex
mic...@casa.co.cu wrote:
Marcus Kool wrote:
Michel,
Proxies are the URL filter circumventors, so if you want
to use a URL filter, you should always block proxies.
Henrik stated in a separate response that some browsers have
problems with HTTP 302 redirect responses. I have no access
Jaap,
URLfilterDB has over 95,000 proxy servers in its commercial URL database.
Each day there are many new ones.
If you are serious about blocking access to them you need a
good URL filter.
I represent URLfilterDB but with some googling you will find
lots of others.
Best regards,
Marcus Kool
Or use an alternative: ufdbGuard.
ufdbGuard is a URL filter for Squid that has a much easier
configuration file than the Squid ACLs and additional
configuration files.
ufdbGuard is also multithreaded and very fast.
And a tip: if you are really serious about blocking
anything, you should also blo
I use squid
Squid Cache: Version 3.0.STABLE20
configure options: '--prefix=/local/squid' '--with-default-user=squid'
'--with-filedescriptors=2400' '--enable-icap-client' '--enable-storeio=aufs,ufs,null'
'--with-pthreads' '--enable-async-io=8' '--enable-removal-policies=lru'
'--enable-default-e
Henrik Nordström wrote:
On Mon 2010-03-29 at 13:58 -0300, Marcus Kool wrote:
0.33 epoll_wait(6, {{EPOLLIN, {u32=23, u64=8800387989527}}}, 2400,
10) = 1
0.32 gettimeofday({1269878848, 223083}, NULL) = 0
0.31 read(27, 0xffd3de98, 256) = -1 EAGAIN (Resource
Henrik Nordström wrote:
On Fri 2010-04-02 at 15:41 -0300, Marcus Kool wrote:
strange indeed, but this is strace output with which I am not very familiar.
Does strace print the whole array that it uses as argument to
epoll_wait, or just the first element? (and the 2nd argument
Martin,
Valgrind is a memory leak detection tool.
You need some developer skills to run it.
If you have a test environment with low load you may want
to give it a try.
- download the squid sources
- run configure with CFLAGS="-g -O2"
- run squid with valgrind
- wait
- kill squid with a TERM signal
Ricardo,
ufdbGuard is a URL redirector for Squid.
Its main purpose is URL filtering and it is also capable
of filtering Skype the way that you want.
Skype uses direct communication (blocked by your firewall),
HTTP [proxy] (blocked by Squid since Skype does not obey HTTP protocol)
and HTTPS [prox
and
found out that both need more memory over time but 3.0 eventually does not
grow.
3.1 continues to grow until CPU rises to nearly 100%; then the memory
consumption
seems to stop.
Has someone an idea where the problem could be?
Martin
Marcus Kool wrote on 17.06.2010 16:15:09:
Marti
yes.
1) the index is in memory and needs 10-20 MB of memory for each GB on disk
2) the housekeeping of the index costs more CPU cycles for a larger cache
3) the housekeeping of the cached objects on disk costs time and grows when the cache is
larger. Can be minimised by having cache_swap_l
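Item 1 above translates into a simple estimate (a sketch of the stated rule of thumb; the helper name and the 100 GB example are mine):

```python
# 10-20 MB of index memory per GB of cache_dir, per the rule of thumb above.
def index_memory_mb(cache_dir_gb: int, mb_per_gb: int) -> int:
    """Estimate Squid's in-memory index size for a given cache_dir size."""
    return cache_dir_gb * mb_per_gb

# A 100 GB cache_dir needs very roughly 1-2 GB of RAM for the index alone:
print(index_memory_mb(100, 10), "-", index_memory_mb(100, 20), "MB")
```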
If you want to block HTTPS for Google you need to block it for all domains
including google.co.uk, google.com.br, google.co.nz google.com.au and
130 more.
Henrik Nordström wrote:
On Thu 2010-05-27 at 15:35 -0400, Dave Burkholder wrote:
Is there some way to specify via a Squid ACL that reques
Isaac Witmer wrote:
On Wed, Jul 21, 2010 at 4:57 PM, Marcus Kool
wrote:
yes.
1) the index is in memory and needs 10-20 MB of memory for each GB on disk
I was under the impression (from the O'Reilly Squid manual) that recent
versions do not use up extra RAM with bigger caches.
But maybe
Francesco,
Here is a biased answer: check out http://www.urlfilterdb.com
Marcus @ URLfilterDB
Francesco Collini wrote:
Hello,
actually we use urlblacklist.com, we are registered users for providers.
It seems the Blacklist is not well maintained: updates are often
missing many censored sites,
Nyamul Hassan wrote:
Hi,
I would build with the following in mind:
1. Better to have a separate disk for the cache stores.
2. Have a COSS store for objects less than 256k. And let AUFS handle
larger objects.
3. Don't have more than 75% of your disk allocated.
4. Only one AUFS store per disk. B
Ralf Hildebrandt wrote:
* Marcus Kool :
Nyamul Hassan wrote:
Hi,
I would build with the following in mind:
1. Better to have a separate disk for the cache stores.
2. Have a COSS store for objects less than 256k. And let AUFS handle
larger objects.
3. Don't have more than 75% of your
Heinz Diehl wrote:
On 08.08.2010, Marcus Kool wrote:
vm.swappiness=20
vm.vfs_cache_pressure=50
Do you have some numbers that actually show a significant improvement?
No. I have experience. It seems that Amos has the same.
I think at least swappiness should better be 100 here, to free as
Heinz Diehl wrote:
On 09.08.2010, Marcus Kool wrote:
I think at least swappiness should better be 100 here, to free as much as
possible memory. Unused applications hanging around for a long
time can conserve quite a lot of pagecache which otherwise could be used
actively.
Do you have any
Jose Ildefonso Camargo Tolosa wrote:
Hi!
On Tue, Aug 24, 2010 at 12:59 AM, Hamza Sani Abubakar Usman
wrote:
Hi,
Can you please tell me that How much amount of ram will required if we use
100GB partition for squid caching.
I don't remember. A quick google search gave me this:
http://www.
Amos Jeffries wrote:
On 28/09/10 12:03, Rich Rauenzahn wrote:
Hi,
Our squid servers consistently go over their configured disk
limits. I've rm'ed the cache directories and started over several
times... yet they slowly grow to over their set limit and fill up the
filesystem.
These are
The old setting for cache_swap_high was 95.
A background process monitors the cache usage and
purges old objects. If you retrieve new large files
faster than the background process purges old ones,
you are in trouble.
Marcus
Rich Rauenzahn wrote:
[resending, I accidentally left off the list a
The code example that you sent earlier shows it clearly:
there is an overflow bug.
It is extremely easy to fix, too.
Marcus
Rich Rauenzahn wrote:
On Mon, Oct 4, 2010 at 2:56 AM, Matus UHLAR - fantomas
wrote:
On 29.09.10 17:42, Rich Rauenzahn wrote:
This code strikes me as incorrect... Basic
Read carefully the code and its output that were sent on
09/29/2010 09:42 PM.
There is an overflow error.
Matus UHLAR - fantomas wrote:
On 05.10.10 09:14, Matus UHLAR - fantomas wrote:
well "the same applies" here means (or at least it should) that you must
make your program capable, by using in
There are over 10 proxy sites and you need a blacklist
if you do not want to end up googling all day.
There is also software for VPNs and SSH tunnels that you will
never block with a blacklist. You need a professional
URL filter.
Marcus
John Dakos wrote:
Kromonos thank you for your mess
Mike Rambo wrote:
Tim Bates wrote:
On 5/10/2010 9:44 PM, John Dakos wrote:
Kromonos thank you for your message.
But I know this way with dstdom. But the problem is... the web has
hundreds of bypass proxy sites... this is no way for administrators. I
spend a lot of time searching on goo
Gerson "fserve" Barreiros wrote:
Can i have your 90k url database?
Sorry, like I said in my previous posting:
solutions that block 99% are all paid.
Blocking UltraSurf is easy, btw.
One thing that I learned in all that time as a sysadmin is that it is
painful to a user when the site is not bloc
short.cut...@yahoo.com.cn wrote:
--- On Thu, 7/10/10, Marcus Kool wrote:
From: Marcus Kool
Subject: Re: [squid-users] How to Block ByPass proxy Sites..
To: "Gerson "fserve" Barreiros"
Cc: squid-users@squid-cache.org
Date: Thursday, 7 October, 2010, 10:02
I am the author of ufdbGuard, a free URL filter for Squid.
You may want to check it out: ufdbGuard is multithreaded and supports
POSIX regular expressions.
If you do not want to use ufdbGuard, here is a tip:
ufdbGuard composes large REs from a set of "simple" REs:
largeRE = (RE1)|(RE2)|...|(REn)
whi
DNS lookups are done by the resolver.
Options on Linux can be set in /etc/resolv.conf (see also man resolv.conf).
The default timeout is only 5 seconds and any program, including Squid,
that does a nameserver query should get an answer (including an error)
in 5 seconds.
In my case I have 3 nameser
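For illustration (addresses and values are examples, not a recommendation), these resolver options go in /etc/resolv.conf:

```
# /etc/resolv.conf -- example values only
nameserver 192.0.2.1
nameserver 192.0.2.2
nameserver 192.0.2.3
options timeout:2 attempts:2 rotate
```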
It is technically impossible to hide your WAN IP.
There are low level OS calls to retrieve the address of
"the other party".
Marcus
Tony wrote:
I was told that this is all I need to get this to work. I'm using
the latest version of squid 3.1.9 My browser proxy setting is set
to localhost 3128
Optimum Wireless Services wrote:
On Thu, 2010-12-16 at 21:21 +1300, Amos Jeffries wrote:
On 16/12/10 15:03, Optimum Wireless Services wrote:
Hello.
I don't know if this is the right place to ask about this issue; if it is
not, please accept my apologies.
I have a small WISP in my town and I would l
It is technically feasible to share one or more fiber-attached
disks between multiple hosts when used with a disk array (a lot
more expensive and a lot faster than a single host-attached disk).
The more difficult part is to keep this shared disk synchronised
between hosts and to make sure that all
Amos Jeffries wrote:
On 24/01/11 23:09, Michael Hendrie wrote:
On 24/01/2011, at 8:17 PM, Saiful Alam wrote:
OK I have kept your suggestion in my mind, but right now I'm not in
a position to buy two HDD's. May be I can afford to buy 15 days
later. For the time being, my prime problem is th
Michelle,
Most likely you have Squid 2.6 on your system and now also
installed 3.2 in a different location.
What is the output of
/usr/local/squid/sbin/squid -v
Marcus
Michelle Dawson wrote:
Hi Guys,
I have just compiled the Squid 3.2.0.4 from source. But now that it is
compiled it the h
Leonardo,
I suggest looking at ufdbGuard. It is a free URL filter with
additional security features and SafeSearch enforcement for
many search engines.
Marcus
Leonardo wrote:
Dear all,
I have a working install of Squid 3.1.7 with Squirm 1.0-BetaB, which
provides URL rewriting. The Squid pr
You can use ufdbGuard. It is a URL filter for Squid.
ufdbGuard accepts URLs (domain and path), domains and expressions.
Marcus
Zartash . wrote:
Dear All,We are blocking urls using url_regex feature (urls are stored in a
file), but we are unable to block urls having special characters (like
ufdbGuard is a URL filter for Squid that does exactly what Zartash needs.
It transforms codes like %xx to their respective characters and does
URL matching based on the normalised/translated URLs.
It also supports regular expressions, Google Safesearch enforcement and more.
Marcus
Amos Jeffries
There seems to be a misconception about what sslbump can and cannot do.
sslbump can only decrypt SSL connections.
sslbump cannot decrypt all other types of traffic that use the
HTTPS port and CONNECT method.
So, for example, it cannot decrypt Skype traffic and files
containing a virus can still e
Zartash,
can you upload the files
cache.log
ufdbguardd.log
ufdbGuard.conf
to http://upload.urlfilterdb.com ?
In case that the files are small you can send them directly to me.
Marcus
Zartash . wrote:
Thanks, I have installed ufdbGuard and defined it in squid but it doesn't
seem to redirect a
I heard that development of DansGuardian stopped, so I suggest investigating
other solutions. You could reduce Squid+DG+Squid to Squid+ufdbGuard.
ufdbGuard is a free URL filter and works with various URL database providers.
Marcus
bwright wrote:
Any other ideas?
I know there have to be
ufdbGuard is a free URL filter that can block Ultrasurf.
You need to use the option enforce-https-with-hostname.
ufdbGuard can be used with your own whitelist/blacklist,
a free URL database, and a commercial URL database.
Marcus
Amos Jeffries wrote:
On Wed, 9 Mar 2011 12:12:53 -0800, Luis Veana
Osmany,
look in access.log.
It should say what is happening:
I expect this:
... TCP_MISS/301 GET http://kaspersky
... TCP_MISS/200 GET ftp://dnl-kaspersky.quimefa.cu:2122/Updates
and does the client use Squid for the ftp protocol ??
And the RE matches too many strings.
I recommend to r
rsky.quimefa.cu:2122/Updates/index/u0607g.xml.klz
I've changed the script many times so that I can get what I want but I
had no success. Can you please help me?
On Sun, 2011-03-13 at 21:27 -0300, Marcus Kool wrote:
Osmany,
look in access.log.
It should say what is happening:
I expect thi
Dejan,
Squid is known to be CPU bound under heavy load and the
quad core running at 1.6 GHz is not the fastest.
A 3.2 GHz dual core will give you double speed.
The config parameter "minimum_object_size 10 KB"
prevents objects smaller than 10 KB from being written to disk.
I am curious to know
If your users do not mind, you can block ads and user tracking
sites of which many produce 1x1 gifs.
Most ads and tracking codes are not cacheable and may consume a lot.
This all depends on which sites your users visit of course.
Marcus
Amos Jeffries wrote:
On 31/03/11 01:38, Ed W wrote:
Hi,
The cache_mem parameter is 10 MB so the cached objects in memory are 10 MB.
The cache_dir is 10 GB so the cached objects on disk are 10 GB.
Most likely squid is slow because of the I/O.
If you have 16 GB of memory and a 64-bit OS and 64-bit Squid you can set
cache_mem to 4 GB to have a lot more o
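As a sketch of that suggestion (the path and sizes are assumptions for illustration, not tuned values):

```
# squid.conf fragment -- example values only
cache_mem 4 GB
maximum_object_size_in_memory 512 KB
cache_dir aufs /var/spool/squid 10240 16 256
```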
Helmut,
It is not easy to detect Skype.
When PCs of end users are blocked by the firewall,
Skype will use the Squid proxy to go to the internet.
Squid only sees a CONNECT on the HTTPS port 443 and does
not know what goes through.
You will see a :443 in the access.log file.
ufdbGuard is a URL filter
Jorge,
I understand that you want to give users maximum 5 minutes access to facebook.
There are various problems with implementing this requirement but
one issue is that neither Squid nor any other software
has a way to determine if a user visits facebook.com or visits
an other website that has a facebo
When a TCP connection is established, TCP SYN packets are exchanged.
Blocking SYN packets is the same as blocking all TCP traffic.
Andreas Braathen wrote:
I tried it, but it did not change anything. Squid still sends SYN packets to
establish state with destination.
Any other suggestions?
e
In addition to what Amos already answered:
Yes, a URL rewriter like squidGuard can block HTTPS sites.
But the URL that the URL rewriter receives is only the domain
name which makes managing lists of URLs a little more complicated.
ufdbGuard is a fork of squidGuard and actively maintained by me
s
Amos Jeffries wrote:
On 28/05/11 00:46, Marc Nil wrote:
Hello,
I am currently facing some troubles will using Squids
feature to manage bandwidth (delay_pools, delay_access, ...)
I would like to apply a 50kbytes/s limitation to each
users and a global 3Mbytes/s limitation.
There is a authe
http://www.squid-cache.org/Versions/v3/3.1/ is not yet updated and the latest
version is still 3.1.12.3
ftp://ftp.squid-cache.org/pub/archive/3.1/ is OK.
Marcus
Amos Jeffries wrote:
The Squid HTTP Proxy team is very pleased to announce the
availability of the Squid-3.1.13 release!
This re
I noticed the same and opened bug 3261.
Marcus
Daniel Beschorner wrote:
http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-10323.patch
seems to break my not-IPV6-enabled system on start:
comm_open: socket failure: (97) Address family not supported by protocol
FATAL: Could not cr
It is bug 3261: http://bugs.squid-cache.org/show_bug.cgi?id=3261
Amos Jeffries wrote:
On 03/07/11 05:03, Filip wrote:
Hi,
Unable to make the squid work with the DNS, it keeps showing the error
"Unable to determine the IP adress of the host...", and here are the
cache.log message errors I get:
I live in Brazil and sometimes watch BBC videos using Squid without issues.
Can you give a link to an example URL which causes problems ?
Marcus
Karl Pielorz wrote:
Hi,
We've tried running a number of Squid versions (from 2.7.9 through to
3.2.0.9) but they all seem to suffer when streaming
e has encountered and found a solution to this issue.
BMatz
-Original Message-
From: Marcus Kool [mailto:marcus.k...@urlfilterdb.com]
Sent: Friday, July 08, 2011 6:28 AM
To: Karl Pielorz
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Streaming video content (e.g. BBC news to
b.scorecardresearch.com/b
There are many sites that use tracking URLs/gifs and they can slow
down end user experience.
Marcus
Karl Pielorz wrote:
--On 08 July 2011 12:47 -0300 Marcus Kool
wrote:
Well, I still would like to know the URL because I like to observe
which set of URLs this eventually
The message indicates that the number of membufs should be increased
because there are insufficient membufs to use for caching
objects. The reason for having 'insufficient membufs'
is explained below.
Given the fact that the average object size is 13 KB, the given
configuration effectively puts a very lar
Look at ufdbGuard, a free replacement for squidGuard, which includes
a mini HTTP server, ufdbhttpd, that is only used for the redirects of ufdbGuard.
ufdbhttpd has no config file, no problems with installation.
Marcus
Bruce Bauman wrote:
I am running a web browser, squid, and squidguard all on
Alexus,
Many tried and failed. It is not possible to filter accurately
based on a list of words.
You need a professional filter solution.
ufdbGuard is a free URL filter for Squid.
It works with both free URL databases and a commercial database.
ufdbGuard produces feedback about the quality of
ufdbGuard version 1.28 has been released on January 19, 2012.
ufdbGuard is a URL filter for Squid with the following features:
- filter web access based on rules for users, times, website category
- works with free and commercial URL databases
- can enforce SafeSearch for all major search engines
The ICAP protocol does not stream in the sense that it forwards piece by piece.
The ICAP protocol only supports a preview which for Squid has a maximum of 64
KB.
So for a large file with preview mode enabled, Squid can send a configurable
amount (between 1 byte and 64 KB) of the first part of the content to
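In squid.conf the preview behaviour is controlled with the ICAP directives below (the 64 KB value is the maximum mentioned above; treat the exact size as an example):

```
# squid.conf fragment -- example values only
icap_enable on
icap_preview_enable on
icap_preview_size 65536
```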
ufdbGuard is a free URL filter for Squid which has the
time-related ACL feature to block sites only during business hours.
The Reference Manual of ufdbGuard explains the technical details.
If you have a small set of sites that you want to block, you
can make your own URL table and use ufdbGuard f
Muhammad,
have you looked at ufdbGuard?
It is a free URL filter with time-based ACLs, ACLs for groups,
user-defined sets of URLs, support for free and commercial URL databases.
The only major item on your wishlist that ufdbGuard does not do
is the delay pool, but that is something you can do wit