5. It does improve performance; however, the two preceding TCP_MISS/302 hits
for every archive URL hit still contribute significantly to the overall
response delay.
Thanks again,
Andrei.
___
squid-users mailing list
squid-users@lists.squid-cache.org
So there are no current squid configuration options
or existing squid plugins to cache 302 responses without an Expires
header;
instead I must write some brand-new code, correct?
Andrei
Hello.
1. This question was asked before but has not yet been resolved:
http://www.squid-cache.org/mail-archive/squid-users/200701/.html
2. Use case:
the following URL goes through a double redirect, both times without an
"Expires:" header,
which results in repeated TCP_MISS/302 entries in access.log.
On 25/09/2019 15:29, Alex Rousskov wrote:
On 9/25/19 7:12 AM, Alessandro Andrei wrote:
My access_log file is flooded with messages that I do not want to see.
Specifically:
1) CONNECT vortex-win.data.microsoft.com
2) TCP_DENIED/407
So I created two ACLs to exclude them from logging and applied them to my
access log:
acl AuthRequest http_status 407
acl excludefromlog dstdomain
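Filled out, and assuming the stock Debian log path, the pieces could look like
this (http_status is a reply ACL, which is fine here because the log entry is
written after the reply is known):

```
acl AuthRequest http_status 407
acl excludefromlog dstdomain vortex-win.data.microsoft.com
# ACLs on the access_log line are ANDed; the entry is logged only when
# all of them match, so negate the two conditions we want to suppress
access_log daemon:/var/log/squid/access.log squid !AuthRequest !excludefromlog
```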
It's regarding active fingerprinting and mitigating attacks, not just its
passive use. (Sorry for the double send.)
On Oct 30, 2017 21:41, "Alex Rousskov" <rouss...@measurement-factory.com>
wrote:
> On 10/30/2017 12:15 PM, Andrei wrote:
You do realize that there's nothing "weird" about p0f, right? Perhaps you
should have a read over:
http://lcamtuf.coredump.cx/p0f3/
https://blog.cloudflare.com/introducing-the-p0f-bpf-compiler/
On Mon, Oct 30, 2017 at 11:22 AM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:
Ok. I got it fixed after reading the FAQ and changing the values to:
request_header_max_size 15824 KB
request_body_max_size 15824 KB
reply_header_max_size 15824 KB
reply_body_max_size 15824 KB
Is there any way to set these max_sizes to unlimited?
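For the two body limits, Squid treats a size of 0 as "no limit"; the header
limits always take a finite value, so the practical answer there is to keep
them at a generous size rather than chase "unlimited". A sketch (verify
against your release's squid.conf.documented):

```
# 0 disables the body-size checks entirely
request_body_max_size 0
reply_body_max_size 0
# the header limits have no "unlimited" setting; the 64 KB default is
# already far beyond any legitimate request or reply header
request_header_max_size 64 KB
reply_header_max_size 64 KB
```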
On Fri, Jul 29, 2011 at 12:06 PM, Andrei
If proxy info is entered manually in the browser, caching works OK. If
LAN clients are sent transparently to the proxy, Google Chrome shows an
error message:
Error 324 The server closed the connection without sending any data.
Mozilla Firefox displays a blank page.
Strangely enough I don't see
These are my Squid stats. I have about a 23% cache hit ratio.
We have 300 users. This is a school environment where most students
access the same site at the same time for their classroom activity.
Is there anything I should add or change to make the caching better,
or is 23% to be expected?
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
Is this the default?
I think that's custom but I'm not sure. Would you recommend changing it and why?
cache_dir aufs /var/spool/squid3 7000 16 256
is better. Can you increase the cache
Is it possible to see caching server throughput rate, response times
under workload, cache hit ratio, number of concurrent connections to
caching server, and how much cached content was delivered to LAN
clients and how much bypassed (or was not cached) by Squid?
Is there a quick command or utility that would show me how much of the
content is fetched and how much is cached by the proxy? I have Cache
Manager and squidview installed.
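Most of those numbers are already exposed through the cache manager you have
installed; a few pages worth trying (squidclient ships with Squid):

```
squidclient mgr:info         # hit ratios, median service times, traffic totals
squidclient mgr:utilization  # request and byte hit ratios over recent windows
squidclient mgr:5min         # throughput and response times, last 5 minutes
```

Comparing the client-side and server-side HTTP kbyte counters in mgr:info
gives a rough byte hit ratio, i.e. how much content was served from cache
versus fetched from the origin servers.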
When I enable Squid on the network, some sites like Yahoo.com do not
render/display properly in the client browsers. Yahoo.com, for example,
looks like it's missing some of its CSS files. This only happens when
I enable the Squid (3.1) proxy; if clients access these sites directly
everything is
Hi, did you take a look at access.log to see if some requests are blocked?
Stephane, I see nothing blocked in the access.log but then I might be
missing out something. The log is huge and I don't know what to look
for...
I don't see anything blocked. I wonder if refresh_pattern -i
(/cgi-bin/|\?) 0 0% 0
has anything to do with it...
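That pattern is one of the stock defaults rather than the culprit; it only
disables *heuristic* caching of dynamic URLs (query strings and /cgi-bin/
paths), it does not block them. The full default set, for comparison:

```
refresh_pattern ^ftp:             1440  20%  10080
refresh_pattern ^gopher:          1440   0%   1440
refresh_pattern -i (/cgi-bin/|\?)    0   0%      0
refresh_pattern .                    0  20%   4320
```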
Thank you so much! I'm not sure if I understood everything, but here
is what I have so far.
1) 1GB of RAM in this machine (P4, 40GB IDE, 1GB RAM).
2) Running Squid 3.1.3 now :-)
3) Not sure what you meant with AUFS. Does this need to be changed?
cache_dir ufs /var/spool/squid3 7000 16 256
4)
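Regarding point 3: aufs uses the same on-disk UFS layout, only the I/O module
changes, so the switch is one keyword on the existing line (assuming your
Squid build includes the aufs store module, as the Debian packages normally
do):

```
# aufs services disk I/O from a thread pool, so cache reads/writes no
# longer block Squid's main loop; directory, size and L1/L2 stay the same
cache_dir aufs /var/spool/squid3 7000 16 256
```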
I'm a newbie. To get Squid started, all I was able to do was create the
config below. This works, but it feels like it could be a little
faster. I have about 300 users.
Are there any other options that you would recommend adding to this
config file? This is my config file for Squid 3.0 on Debian, P4,
Thanks, guys. This is a small network I guess. I'll leave it with one NIC.
Who knows what 300 kids with laptops will be doing.
I see Hit Ratio mentioned quite a bit. Can somebody give me some good
performance-tuning links/tips for Squid? I have a default install on
Debian. I could probably fine-tune
I have a Squid box that caches for about 300 users. This is my first
Squid installation. Some sites take longer to fetch in the browser,
but once opened the sites load fairly quickly. For example, if I type
bbc.com it would take about 3-4 seconds of waiting and staring at the
blank browser page
Thanks, Amos. I don't understand the MTU thing. What should I do about MTU?
On Mon, Aug 30, 2010 at 5:24 PM, Amos Jeffries squ...@treenet.co.nz wrote:
On Mon, 30 Aug 2010 08:51:34 -0700, Andrei funactivit...@gmail.com
wrote:
I have a Squid box that caches for about 300 users. This is my
This is a general Squid question. If you have experience with medium
sized networks (300+ users) and Squid, this question is for you.
I'm setting up a transparent Squid box for 300 users. All requests
from the router are sent to the Squid box. Squid box has one NIC,
eth0. This box receives
On Sat, Aug 28, 2010 at 5:12 PM, Amos Jeffries squ...@treenet.co.nz wrote:
Leonardo Rodrigues wrote:
Em 28/08/2010 12:29, Andrei escreveu:
I'm setting up a transparent Squid box for 300 users. All requests
from the router are sent to the Squid box. Squid box has one NIC,
eth0. This box
Henrik Nordstrom wrote:
sön 2008-03-30 klockan 17:17 +0300 skrev Andrei-Florian Staicu:
Hello list,
Could you tell me if I can have different url_rewrite_programs for
different acls?
No, but you can make the url rewrite program take different actions
based on pretty much anything
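A minimal sketch of that idea, assuming the old plain-URL helper protocol
(one request per line: URL client-ip/fqdn ident method) and two hypothetical
redirector targets standing in for whatever redir1 and redir2 actually do:

```python
#!/usr/bin/env python3
# Combined rewriter sketch: branch on the client IP that Squid passes
# as the second field of each request line. The redir*.example.net
# targets are placeholders, not real services.
import sys


def rewrite(url, client_ip):
    """Return the rewritten URL for this client, or the URL unchanged."""
    if client_ip.startswith("192.168.1."):
        return "http://redir1.example.net/?u=" + url
    if client_ip.startswith("192.168.2."):
        return "http://redir2.example.net/?u=" + url
    return url


def main():
    # Squid feeds one request per line: "URL client-ip/fqdn ident method"
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        client_ip = parts[1].split("/")[0] if len(parts) > 1 else ""
        sys.stdout.write(rewrite(parts[0], client_ip) + "\n")
        sys.stdout.flush()  # reply immediately; Squid waits per request


if __name__ == "__main__":
    main()
```

Hook it up with a single `url_rewrite_program /usr/local/bin/rewrite-by-net.py`
and let `url_rewrite_access` cover both networks; the branching that two
separate programs would have done happens inside the helper instead.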
Hello list,
Could you tell me if I can have different url_rewrite_programs for
different acls?
Something like
acl net1 src 192.168.1.0/24
url_rewrite_access allow net1
url_rewrite_program /usr/bin/redir1
acl net2 src 192.168.2.0/24
url_rewrite_access allow net2
url_rewrite_program
and after entering the credentials, internet access is allowed. My problem
is: I would like internet access to be allowed only to users who log in to
the domain; if a user logs in locally, the internet must not work!!
Can someone help me!!
Tkss
Andrei
the internet works! I would like it so that if the user logs in locally,
the internet does not work; only a domain login should allow internet access.
Someone can help me !
tks
Andrei
--
Andrei Antonelli
Analista de Sistemas - HCFMRP-USP
Tel: +55 16 3602-2928
Hospital das Clínicas de Ribeirão Preto - USP
CIA - Centro
and password prompt appears, and if the user enters the domain user and
password, the internet works! I would like it so that if the user logs in at
the local machine and tries to access the internet, it does not work; only
users logged in to the domain should get internet access.
Someone can help me !
tks
Andrei
It's possible by using redirectors. You will find some info in the
Squid FAQ.
Kovacs Andrei
General Comtrust Petrosani
Adam Aube wrote:
Andrei Kovacs wrote:
I would like to know if it is possible to limit the number of
connections to a URI (not a particular one but in general) because I
want to limit the usage of Download Accelerators which make 5-10
connections to a file.
Take a look at the maxconn acl
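A hedged sketch of that suggestion: maxconn counts concurrent connections per
client IP, not per URI, so it caps download accelerators at the cost of also
limiting unusually busy clients:

```
# Match a client that already holds more than 4 open connections to Squid
acl manyconn maxconn 4
http_access deny manyconn
```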
Hi.
I would like to know if it is possible to limit the number of
connections to a URI (not a particular one but in general) because I
want to limit the usage of Download Accelerators which make 5-10
connections to a file.
Thanks.