Reid wrote:
Could you tell me the command to make squid listen only on 3128? When I run
tcpdump -n -i any
port 3128 (or any other port) it gives me an error - something about
promiscuous mode. Thanks
Have you tried to make squid listen only on port 3128?
Maybe you could start
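A minimal squid.conf sketch for the listening-port question (Squid 2.6 syntax; the bind address is an illustrative assumption):

```
# Listen only on port 3128 on all interfaces:
http_port 3128
# Or bind to a single interface (example address):
# http_port 192.168.1.1:3128
```

Note that the tcpdump complaint about promiscuous mode is usually a privilege problem (tcpdump generally needs to run as root to open an interface), not a Squid issue.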
Henrik Nordstrom wrote:
On Fri, 2007-06-29 at 16:10 +0200, Emilio Casbas wrote:
With requests to this url (webcluster.cti.unav.es/alumno/ga) the
access.log is showing:
internal.proxy.unav.es - - [29/Jun/2007:15:53:33 +0200] GET
On Mon, 2007-07-02 at 08:31 +0200, Emilio Casbas wrote:
What does your http_port line look like?
http_port 80 accel defaultsite=www.unav.es vhost
As the URLs processed by Squid will be the URLs requested by the client,
your rewrite rules (if any is needed, most often not needed at all) need
to be
On Sun, 2007-07-01 at 23:44 +0700, Ferdian Alfianto wrote:
I have Fedora Core 6 with Squid 2.6-Stable12 installed. I have two
Internet connections, on two Ethernet cards (eth0 and eth1).
I want all traffic for HTML files (including xml, css, php, etc.)
redirected through eth0, and other files (such
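One way to sketch this in squid.conf is tcp_outgoing_address with a URL-path ACL; the addresses and the regex below are assumptions, and the OS routing table must then steer each source address out the right interface:

```
# Match "HTML-like" files by extension (illustrative regex)
acl html_files urlpath_regex -i \.(html?|xml|css|php)$
# Send matching traffic out via eth0's address, the rest via eth1's
tcp_outgoing_address 192.168.0.10 html_files
tcp_outgoing_address 192.168.1.10
```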
On Mon, 2007-07-02 at 09:32 +0800, squid squid wrote:
The SNMP error occurs when cacheClientHttpRequests and cacheCurrentLRUExpiration
are turned on.
cacheClientHttpRequests is one of the per-client counters in the
CacheClient array. You can't plot this with MRTG.
Suspect you are looking for
On Sun, 2007-07-01 at 16:40 +0400, Mohamed Navas V wrote:
Henrik,
What exactly do you want to happen when the webserver sends this
redirect? Do you want that URL to be sent to the browser, or do you want
the browser to use another URL?
Do you want Squid to listen on that URL, or do
Henrik,
All the redirections are happening in the webserver itself, and this
is happening right now. We have suggested putting the proxy in place for
transparency and security. I had tried this with Squid 2.5 a long time
back but couldn't get it working. We have Squid 2.6 now and want to test
with this setup.
So the web
Henrik Nordstrom wrote:
On Mon, 2007-07-02 at 08:31 +0200, Emilio Casbas wrote:
What does your http_port line look like?
http_port 80 accel defaultsite=www.unav.es vhost
As the URLs processed by Squid will be the URLs requested by the client,
your rewrite rules (if any is needed, most often not
On Fri, Jun 29, 2007, Jegadeesh wrote:
#wbinfo -t
checking the trust secret via RPC calls failed
error code was NT_STATUS_CANT_ACCESS_DOMAIN_INFO (0xc0da)
Could not check secret
That's your first port of call. Ignore Squid completely until you
get wbinfo -t returning success.
I'd check
On Wed, 27 Jun 2007 22:46:12 +0200
Henrik Nordstrom [EMAIL PROTECTED] wrote:
And the connection gets closed here.. not good.
What do the requests/replies look like when using Squid-2.5?
Do things return to how they were if you back out the following
patch?
Hi
Is it possible to use Squid to limit access to a URL for a list of IPs,
without using SquidGuard?
I want IPs 192.168.10.2 and 192.168.10.128 to be able to see only one web site.
Thanks for your help
On Mon, 2007-07-02 at 09:10 +0200, Emilio Casbas wrote:
The same rewrite rules are working with squid-2.5 but not with squid-2.6.
Is there any workaround to solve this?
Then had the accelerator mode configured differently, probably
httpd_accel_host internal.server.name
and NOT
Hello All,
I have a list of urls to be blocked in a text file and the size of it is
4MB. Squid has been configured to use this to match the requested urls
and configuration is given below.
acl blocked url_regex '/usr/local/squid/blocked.txt'
http_access deny blocked
This is working
Henrik Nordstrom wrote:
On Mon, 2007-07-02 at 09:10 +0200, Emilio Casbas wrote:
The same rewrite rules are working with squid-2.5 but not with squid-2.6.
Is there any workaround to solve this?
Then had the accelerator mode configured differently, probably
httpd_accel_host
Noc Phibee wrote:
Hi
Is it possible to use Squid to limit access to a URL for a list of IPs,
without using SquidGuard?
I want IPs 192.168.10.2 and 192.168.10.128 to be able to see only one web site.
Hi Noc Phibee,
You can use something like the following ACLs:
acl all src 0.0.0.0/0.0.0.0
acl
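A complete hedged sketch of the ACLs that reply appears to be building (the ACL names and the example site are assumptions):

```
acl all src 0.0.0.0/0.0.0.0
acl restricted_ips src 192.168.10.2 192.168.10.128
acl allowed_site dstdomain .example.com
# Let the two IPs reach only the one site, deny them everything else
http_access allow restricted_ips allowed_site
http_access deny restricted_ips
```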
OK, here's what you sent, modified for what I want to do. If I put in the
names, how will it know my host IP? I'm confused by how this all mates
up.
http_port 8080 urlgroup=1 vhost
acl port8080 myport 8080
cache_peer web ip parent 8080 0 originserver no-query
cache_peer_access web ip allow port8080
Sekar wrote:
Hello All,
I have a list of urls to be blocked in a text file and the size of it is
4MB. Squid has been configured to use this to match the requested urls
and configuration is given below.
Hi Sekar,
A text file of size 4 MB! It must contain at least 30 thousand entries.
acl
I have it set up as reverse proxy.
In test I hit it with eight load clients but can't get the CPU
on the squid or carp boxes to go above 30%, and I can only pull down
about 500 10k files per second.
This is in a configuration that only uses memory on the squids, so it's
not the disk
I'm using httperf to test Squid; the latency goes up, plateaus at 3000 ms
(3 seconds) at about 500 connections/second, then goes exponential.
Seems kind of strange.
Config is below:
#http_port 8090 vhost
http_port 8090
#enables or disable ICP
icp_port 0
#icp_port 3130
#htcp_port 4827
#
#
#
#for
You should try AUFS as storage, use epoll if you're using Linux or
/dev/poll if using another OS, and look at the iostat and vmstat
statistics to check whether you're killing your disks. Are there any
errors logged in cache.log?
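An AUFS cache_dir line as suggested might look like this (the path and sizes are illustrative):

```
# 10240 MB cache, 16 first-level and 256 second-level directories
cache_dir aufs /var/spool/squid 10240 16 256
```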
Regards, Pablo
On 7/2/07, Mark Vickers [EMAIL PROTECTED] wrote:
I
Normally, with aufs, diskd and ufs, the cache_swap_log will be on the disk
that the files are stored on. Since with COSS this must be specified, is there
a way to get it to also divide up their logs?
Is there a way to do something like this when using coss?
(I would rather not have to use
On Mon, 2007-07-02 at 11:08 +0400, Mohamed Navas V wrote:
Henrik,
All the redirections are happening in the webserver itself, and this
is happening right now. We have suggested putting the proxy in place for
transparency and security. I had tried this with Squid 2.5 a long time
back but couldn't get it working. We
On Mon, 2007-07-02 at 17:52 +0200, Emilio Casbas wrote:
No, our squid-2.5 accelerator configuration is:
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
Are you sure about the httpd_accel_host virtual thing? Not commonly
used, and
On Mon, 2007-07-02 at 13:34 +0200, Joerg Schuetter wrote:
Any known issues with reverting this patch? I want to be sure
before putting this modified version into production here.
It will make the client which triggered this patch fall back to Basic
when it could have used NTLM... that's all.
On Mon, 2007-07-02 at 23:56 +0545, Tek Bahadur Limbu wrote:
Using dstdomain in place of url_regex if applicable will probably reduce
your overall CPU usage.
and memory usage.. regex is quite memory hungry..
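If the entries in the 4 MB file can be expressed as domain names rather than regexes, a dstdomain file ACL is a hedged alternative (same path as in the original post; one domain per line in the file):

```
# dstdomain lookups use an indexed structure instead of scanning
# every regex, so both CPU and memory use drop considerably.
acl blocked dstdomain "/usr/local/squid/blocked.txt"
http_access deny blocked
```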
Regards
Henrik
On Mon, 2007-07-02 at 12:37 -0500, Jason Hitt wrote:
OK, here's what you sent, modified for what I want to do. If I put in the
names, how will it know my host IP? I'm confused by how this all mates
up.
The internal name for a cache_peer is by default the same as the
specified hostname/ip. To allow
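Henrik's point about internal peer names can be illustrated with the name= option (the addresses and names here are assumptions):

```
# Two origin servers on the same IP need distinct internal names
cache_peer 10.0.0.1 parent 8080 0 originserver no-query name=web1
cache_peer 10.0.0.1 parent 8081 0 originserver no-query name=web2
# The name, not the IP, is what cache_peer_access then refers to
cache_peer_access web1 allow port8080
```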
On Mon, 2007-07-02 at 16:54 -0400, Mark Vickers wrote:
I have it set up as reverse proxy.
In test I hit it with eight load clients but can't but can't get the CPU
on the squid or carp boxes to go above 30%, and I can only pull down
about 500 10k files per second.
On what kind of server?
On Mon, 2007-07-02 at 17:25 -0400, Mark Vickers wrote:
I'm using httperf to test Squid; the latency goes up, plateaus at 3000 ms
(3 seconds) at about 500 connections/second, then goes exponential.
How many filedescriptors is your Squid using?
Regards
Henrik
On Mon, 2007-07-02 at 15:21 -0700, Nicole wrote:
Normally, with Aufs and Diskd and ufs, the cache_swap_log will be in the disk
that the files are stored in. Since with Coss, this must be specified, is
there
a way to get it to also divide up their logs?
Is there a way to do something like
The box is a dual 2-core (4 cores total) AMD 275.
So I should probably start 4 squids.
When I try to start the second one I get:
squid -f /etc/squid/squid2.conf
2007/07/02 16:08:03| Squid is already running! Process ID 448
Thank you very much for your help!!!
wfc-zit-new-001:~# cat
On Mon, 2007-07-02 at 19:09 -0400, Mark Vickers wrote:
The box is a dual 2-core (4 cores total) AMD 275.
So I should probably start 4 squids.
When I try to start the second one I get:
squid -f /etc/squid/squid2.conf
2007/07/02 16:08:03| Squid is already running! Process ID 448
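For a second instance to start, its config generally needs its own PID file, ports, logs and cache area; a sketch of what squid2.conf would change (all paths and ports here are assumptions):

```
# squid2.conf - settings that must differ from the first instance
pid_filename /var/run/squid2.pid
http_port 8091
access_log /var/log/squid2/access.log
cache_log /var/log/squid2/cache.log
cache_dir ufs /var/spool/squid2 100 16 256
```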
Reid wrote:
Thank you for the link.
On that page it says "You can not limit a single HTTP request's
connection speed. You can limit individual hosts to some bandwidth
rate." Does this mean that as long as each user is coming from a
different hostname (e.g. a different ISP) or a different
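For context, the limit that page describes is keyed by client IP address via delay pools, not by hostname or ISP; a class-2 pool sketch (the numbers are illustrative):

```
delay_pools 1
delay_class 1 2
# aggregate bucket unlimited (-1/-1); each client IP
# capped at about 32 KB/s with a 64 KB burst
delay_parameters 1 -1/-1 32000/64000
delay_access 1 allow all
```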
Henrik Nordstrom wrote:
On Mon, 2007-07-02 at 12:37 -0500, Jason Hitt wrote:
OK, here's what you sent, modified for what I want to do. If I put in the
names, how will it know my host IP? I'm confused by how this all mates
up.
The internal name for a cache_peer is by default the same as
Hi,
Thank you for the advice. The error on CacheClient was gone after changing
to the parameter that you advised.
May I know what the difference is between cacheServerRequests and
cacheProtoClientHttpRequests?
Kindly also advise which parameter will show the total
Henrik Nordstrom wrote in the last message:
On Mon, 2007-07-02 at 16:54 -0400, Mark Vickers wrote:
I have it set up as reverse proxy.
In test I hit it with eight load clients but can't get the CPU
on the squid or carp boxes to go above 30%, and I can only pull down
about 500 10k