Re: [squid-users] Squid3 and lots of FIN_WAIT1

2009-11-18 Thread David B.
Amos Jeffries wrote:
 [snip] 
 Perhaps my load is too high and I need to tune the kernel via sysctl,
 but I can't figure out what to do. For now, I've tried several things
 and I can't solve this issue.
 

 You may want to check:
  * persistent connections is turned on (squid.conf)
   
 On by default, I think, but I will force this to on to test.
  * system socket timeouts
   
 System socket, or Squid-related like persistent_request_timeout?

 System socket. I'm guessing the FIN_WAIT* is created once Squid sends
 its close signal to the system.
Hmm, I need to find which kernel parameter to tune.
I was thinking of net.ipv4.tcp_fin_timeout, but this seems to only apply
to FIN_WAIT2.

I'll have a look at net.ipv4.tcp_max_orphans; perhaps this can help me. :)

I'll try to keep the list posted, in case this is helpful for someone else.
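
For reference, roughly what I plan to check first (a sketch only; the
sysctl names are standard Linux ones, and the value shown is just an
example, not a recommendation):

  # count sockets stuck in FIN_WAIT1
  netstat -tan | grep -c FIN_WAIT1

  # inspect the current kernel settings
  sysctl net.ipv4.tcp_fin_timeout
  sysctl net.ipv4.tcp_max_orphans

  # raise the orphan limit temporarily, e.g.:
  sysctl -w net.ipv4.tcp_max_orphans=65536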

David.



Re: [squid-users] help on always_redirect

2009-11-18 Thread Amos Jeffries
sqlcamel wrote:
 Amos Jeffries:
 sqlcamel wrote:
 Hello,

 in squid.conf:

 #  TAG: always_direct
 #   Usage: always_direct allow|deny [!]aclname ...
 #
 #   Here you can use ACL elements to specify requests which should
 #   ALWAYS be forwarded by Squid to the origin servers without using
 #   any peers.  For example, to always directly forward requests for
 #   local servers ignoring any parents or siblings you may have use
 #   something like:
 #
 #   acl local-servers dstdomain my.domain.net
 #   always_direct allow local-servers


 So, what are origin servers by definition in Squid?
 The origin server of a website. The machine referenced in DNS with A or
 AAAA records when the domain name is looked up.

 In reverse-proxy mode, does it mean the peers which have an originserver
  option?
 No. In reverse-proxy mode it means: force non-reverse-proxy DNS
 handling and ignore all cache_peer settings in squid.conf.

 
 Thanks a lot Amos.
 so if I set this directive in squid.conf for reverse proxy:
 
 never_direct allow all
 
 does it mean squid will ignore all DNS handling and pass only the
 traffic to the configured peers?
 
 

It's the opposite of always_direct. It has no effect on reverse-proxy behaviour.

Amos
-- 
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.14


[squid-users] cache-peer and hosts file

2009-11-18 Thread NublaII Lists
Hi there

I have a couple of quick questions to which I have seen many examples,
but I have never been able to figure out the whole picture.

In a simple setup like this:

1 squid machine
2 www servers

website: www.example.com
external ip: 1.2.3.4

squid machine:
name: squid.example.com
ip: 10.0.0.1

www1 machine:
name: www1.example.com
ip: 10.0.0.2

www2 machine:
name: www2.example.com
ip: 10.0.0.3

Here is the part of the squid.conf that applies here

# Basic parameters
visible_hostname www.example.com
# This line indicates the server we will be proxying for
http_port 80 accel defaultsite=www.example.com
# And the IP Address for it
cache_peer 10.0.0.2 parent 80 0 no-query originserver round-robin
cache_peer 10.0.0.3 parent 80 0 no-query originserver round-robin

So, questions...

- is the squid.conf syntax correct?
- what should I have on the /etc/hosts file on the squid machine?
Right now this is what I have:

127.0.0.1    localhost
10.0.0.1     squid.example.com
10.0.0.2     www.example.com
10.0.0.3     www.example.com


[squid-users] questions on squid cache

2009-11-18 Thread Melanie Pfefer
hi

I have in squid.conf

cache_dir ufs /var/squid/var/cache 100 16 256


I would like to know:
1. if the squid cache is stored on disk or RAM
2. if I can reach a point where cache is full
3. How can I remove cache older than 1 week (same logic as log rotation)

thanks in advance







Re: [squid-users] questions on squid cache

2009-11-18 Thread Jefferson Diego

Em 18-11-2009 12:16, Melanie Pfefer escreveu:

hi

I have in squid.conf

cache_dir ufs /var/squid/var/cache 100 16 256


I would like to know:
1. if the squid cache is stored on disk or RAM
2. if I can reach a point where cache is full
3. How can I remove cache older than 1 week (same logic as log rotation)

thanks in advance







   
1. Both. This line is about the cache stored on disk, but Squid also
has a cache in memory, which you can set with a line like: cache_mem 128 MB

2. I did not understand... what do you mean?
3. Your old cache is removed by the refresh pattern.
Look at this:
refresh_pattern -i \.jpg$ 10080 90% 10080
It means that every file ending with jpg will stay in the cache for 1 week
(10080 minutes).
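
For reference, the general form of the directive is (a sketch; the
regex and times below are only examples):

  # refresh_pattern [-i] regex  min  percent  max    (min and max in minutes)
  refresh_pattern -i \.gif$  1440  50%  10080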



(Sorry for my English... I'm Brazilian...)


Re: [squid-users] Re: ubuntu apt-get update 404

2009-11-18 Thread Matthew Morgan

Amos Jeffries wrote:

Matthew Morgan wrote:

Amos Jeffries wrote:

Matthew Morgan wrote:

snip
Ok, it seems to happen in stages.  The first time I run apt-get 
update after switching to 3.x, it's hit or miss.  Sometimes it's 
perfect, sometimes I get errors.  After that, I get errors in two 
stages.  Here's what happens:



Either:

apt-get update #1  -  no errors
apt-get update #2  -  invalid header, and sometimes 404 errors
apt-get update #3 and above  - 404 errors only

or:

apt-get update #1  -  invalid header, and sometimes 404 errors
apt-get update #2 and above  - 404 errors only

The dump files I have uploaded match the second set of 
circumstances.  server1.dump and client1.dump are from the first 
apt-get update after switching, and I got an invalid header error + 
404 errors.  server2.dump and client2.dump came from the second 
apt-get update attempt, and only 404 errors were returned.


I hope this helps!  Let me know if you need anything else.  Just a 
reminder, on my setup I only have 1 squid server with 1 cache 
directory.  For comparison, my server is Ubuntu 9.04 running kernel 
2.6.28-16-server.  I am not using TPROXY.


Here are the files (I tried to attach them, but mailer-daemon 
kicked the email)


http://lithagen.dyndns.org/server1.dump
http://lithagen.dyndns.org/client1.dump
http://lithagen.dyndns.org/server2.dump
http://lithagen.dyndns.org/client2.dump


Well, good news and sad news.

Both traces show the same problems.

The 404 is actually being generated by the us.archive.ubuntu.com
server itself. There is something broken at the mirror or in apt's
local sources.list URLs.
So does squid 3.x have a different user agent string or something?  


No.

Everything works fine with the exact same sources.list when using 
squid 2.7, so there shouldn't be anything wrong with the file.  
us.archive.ubuntu.com must be treating squid 3.x different somehow, 
right?


It does seem to be. Why is the big question.


Amos
Should I send you a capture of my working 2.7 installation so you can 
compare what headers and such are being sent from an otherwise identical 
setup?
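
In case it helps, this is roughly how I grab the captures (a sketch; the
interface name and proxy port are assumptions from my setup):

  # client side: traffic between apt and the proxy
  tcpdump -i eth0 -s 0 -w client.dump port 3128
  # server side: traffic between the proxy and the mirror
  tcpdump -i eth0 -s 0 -w server.dump port 80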




Re: [squid-users] help on always_redirect

2009-11-18 Thread sqlcamel


--- On Wednesday, 18 November 2009, Amos Jeffries squ...@treenet.co.nz wrote:

 From: Amos Jeffries squ...@treenet.co.nz
 Subject: Re: [squid-users] help on always_redirect
 To: 
 CC: squid-users@squid-cache.org
 Date: Wednesday, 18 November 2009, 10:12 AM
 sqlcamel wrote:

  
  never_direct allow all
  
  does it mean squid will ignore all DNS handling and
 pass only the
  traffic to the configured peers?
  
  
 
 It's the opposite of always_direct. Has no effect on
 reverse-proxy behavour.
 

Hi Amos,

please see this statement:

#
#   By combining nonhierarchical_direct off and prefer_direct on you
#   can set up Squid to use a parent as a backup path if going direct
#   fails.

So, do nonhierarchical_direct and prefer_direct have any effect on reverse-proxy
behavior?

Thanks again.





Re: [squid-users] questions on squid cache

2009-11-18 Thread Melanie Pfefer
thanks

In my case I have


#Suggested default:
refresh_pattern ^ftp:       1440    20%     10080
refresh_pattern ^gopher:    1440    0%      1440
refresh_pattern .           0       20%     4320


the last line means all types of objects will be cached for 1.2 days?

thx

--- On Wed, 18/11/09, Jefferson Diego jeffersondie...@hotmail.com wrote:

 From: Jefferson Diego jeffersondie...@hotmail.com
 Subject: Re: [squid-users] questions on squid cache
 To: squid-users@squid-cache.org
 Date: Wednesday, 18 November, 2009, 16:56
 Em 18-11-2009 12:16, Melanie Pfefer
 escreveu:
  hi
 
  I have in squid.conf
 
  cache_dir ufs /var/squid/var/cache 100 16 256
 
 
  I would like to know:
  1. if the squid cache is stored on disk or RAM
  2. if I can reach a point where cache is full
  3. How can I remove cache older than 1 week (same
 logic as log rotation)
 
  thanks in advance
 
 
 
 
 
 
 
     
 1. Both. This line is about the cache stored on disk, but
 the squid also 
 has a cache on memory, that you can set in a line like
 cache_mem 128 MB
 2. I did not understand... what?
 3. Your old cache is removed by the refresh pattern.
 Look this:
 refresh_pattern -i \.jpg$ 10080 90% 10080
 It means that every file ending with jpg will stay on the
 cache 1 week 
 (10080 minutes).
 
 
 (Sorry by me english... I'm brazilian...)
 





Re: [squid-users] questions on squid cache

2009-11-18 Thread Brian Mearns
Sorry, replied incorrectly. Message below.

-- Forwarded message --
From: Brian Mearns mearn...@gmail.com
Date: Wed, Nov 18, 2009 at 10:23 AM
Subject: Re: [squid-users] questions on squid cache
To: Jefferson Diego jeffersondie...@hotmail.com


On Wed, Nov 18, 2009 at 9:56 AM, Jefferson Diego
jeffersondie...@hotmail.com wrote:
 Em 18-11-2009 12:16, Melanie Pfefer escreveu:

 hi

 I have in squid.conf

 cache_dir ufs /var/squid/var/cache 100 16 256


 I would like to know:
 1. if the squid cache is stored on disk or RAM
 2. if I can reach a point where cache is full
 3. How can I remove cache older than 1 week (same logic as log rotation)

 thanks in advance









 1. Both. This line is about the cache stored on disk, but the squid also has
 a cache on memory, that you can set in a line like cache_mem 128 MB
 2. I did not understand... what?
 3. Your old cache is removed by the refresh pattern.
 Look this:
 refresh_pattern -i \.jpg$ 10080 90% 10080
 It means that every file ending with jpg will stay on the cache 1 week
 (10080 minutes).


 (Sorry by me english... I'm brazilian...)


2. I believe squid will simply discard entries according to some
heuristic once it has reached capacity. That's the general idea behind
any cache: remove items that are least likely to be needed again in
order to make room for items that are needed now.
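
I haven't verified these myself, but the relevant squid.conf knobs look
roughly like this (a sketch; the values are the usual defaults, not
recommendations, and the heap policies need Squid built with
--enable-removal-policies=heap):

  # start evicting around 90% of the cache_dir size, evict harder above 95%
  cache_swap_low  90
  cache_swap_high 95
  # which objects get evicted first
  cache_replacement_policy heap LFUDA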

-Brian

--
Feel free to contact me using PGP Encryption:
Key Id: 0x3AA70848
Available from: http://keys.gnupg.net



-- 
Feel free to contact me using PGP Encryption:
Key Id: 0x3AA70848
Available from: http://keys.gnupg.net


[squid-users] Configuration problems attempting to cache Google Earth/dynamic content

2009-11-18 Thread Jeremy LeBeau
I am trying to set up a server that is running SUSE SLES 11 as a Squid
Proxy to help cache Google Earth content in a low-bandwidth
environment.  I have tried following the steps in this article:
http://wiki.squid-cache.org/Features/StoreUrlRewrite?action=recall&rev=7
but I am not having any luck with getting it to work.  In fact, when I
try those steps, Squid will automatically stop about 15 seconds after
start.  The system is running version 2.7 Stable, as installed by
YAST.

Anyone who could offer some help or a configuration file that would
work with this?


RE: [squid-users] Configuration problems attempting to cache Google Earth/dynamic content

2009-11-18 Thread Mike Marchywka





 Date: Wed, 18 Nov 2009 12:02:40 -0600
 From:
 To: squid-users@squid-cache.org
 Subject: [squid-users] Configuration problems attempting to cache Google 
 Earth/dynamic content

 I am trying to set up a server that is running SUSE SLES 11 as a Squid
 Proxy to help cache Google Earth content in a low-bandwidth
 environment. I have tried following the steps in this article:
 http://wiki.squid-cache.org/Features/StoreUrlRewrite?action=recall&rev=7
 but I am not having any luck with getting it to work. In fact, when I
 try those steps, Squid will automatically stop about 15 seconds after
 start. The system is running version 2.7 Stable, as installed by
 YAST.

Why does it stop? There should be some logs to check, and if you invoke
it from the command line in the foreground you can get quick feedback.
Do you want it to cache in contradiction to the server response headers?
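
Something like this usually gives a quick answer (a sketch; the
cache.log path may differ on SLES):

  # check that the config file parses cleanly
  squid -k parse
  # run in the foreground with debug output on the terminal
  squid -N -d1
  # or watch the log while it starts and dies
  tail -f /var/log/squid/cache.log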


 Anyone who could offer some help or a configuration file that would
 work with this?
  

Re: [squid-users] squid proxy - multiple outgoing IP addresses

2009-11-18 Thread Cameron Knowlton
This DOES NOT work:

http_port 24.69.160.243:3128 name=A
http_port 24.69.177.112:3128 name=B

acl fromA myportname A
tcp_outgoing_address 24.69.160.243 fromA
tcp_outgoing_address 24.69.160.243 !all

acl fromB myportname B
tcp_outgoing_address 24.69.177.112 fromB
tcp_outgoing_address 24.69.177.112 !all


24.69.160.243 sets up just fine, as that's the primary address of the machine. 
However, the 2nd IP on the machine (24.69.177.112) doesn't work.

I'm losing my mind, why is this so challenging to pull off?! I'm running Squid 
Version 3.0.STABLE16 (stock on OS X Server 10.5.8), not sure if this is the 
problem:

Squid Cache: Version 3.0.STABLE16
configure options:  '--prefix=/usr/local/squid' '--enable-delay-pools'


Please, someone, help! I'm about to lose a **very large** client because of 
this inability. Thank you in advance.

Cameron Knowlton


At 12:07 AM +1300 09/11/17, Amos Jeffries wrote:
Cameron Knowlton wrote:
To clarify, I already have the application coded to round robin through a 
provided list of IP:port combinations, I simply need to get Squid to run on 
both local IPs.

Supplying multiple http_port directives to Squid doesn't seem to do the trick:

http_port 24.69.1.2:%PORT%
http_port 24.69.1.3:%PORT%

I only seem to get Squid to run on 24.69.1.2.  :(

Some additional configuration is required:

 * an ACL for each receiving port, to match only traffic arriving at that
 port.
 * tcp_outgoing_address using those ACLs to explicitly set the Squid outbound
 IP on traffic arriving at a given port.

For example:

  http_port 1.2.3.4:3128 name=A
  http_port 1.2.3.5:3128 name=B

  acl fromA myportname A
  tcp_outgoing_address 1.2.3.4 fromA
  tcp_outgoing_address 1.2.3.4 !all

  acl fromB myportname B
  tcp_outgoing_address 1.2.3.5 fromB
  tcp_outgoing_address 1.2.3.5 !all


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.14


-- 
Cameron Knowlton
iGods Internet Marketing
camer...@igods.com
P: 250.382.0226
http://www.knowledgevine.net


Re: [squid-users] squid proxy - multiple outgoing IP addresses

2009-11-18 Thread Cameron Knowlton
Eureka! I finally found the problem; it was a different setting within Squid 
(SquidMan, actually)... I've posted my squid.conf below in its entirety in 
hopes that it might help others.

The line that was messing me up was SquidMan's dynamic allowed client list:

%ALLOWEDHOSTS%

... which is in itself benign. However, I failed to include the 2nd internal IP 
address within SquidMan's Clients configuration, which prevented it from 
showing up in the AllowedHosts line.

Totally sweet, thank you for your patience, Amos et al!

Cameron Knowlton


At 10:09 AM -0800 09/11/18, Cameron Knowlton wrote:
This DOES NOT work:

http_port 24.69.160.243:3128 name=A
http_port 24.69.177.112:3128 name=B

acl fromA myportname A
tcp_outgoing_address 24.69.160.243 fromA
tcp_outgoing_address 24.69.160.243 !all

acl fromB myportname B
tcp_outgoing_address 24.69.177.112 fromB
tcp_outgoing_address 24.69.177.112 !all


24.69.160.243 sets up just fine, as that's the primary address of the machine. 
However, the 2nd IP on the machine (24.69.177.112) doesn't work.

I'm losing my mind, why is this so challenging to pull off?! I'm running Squid 
Version 3.0.STABLE16 (stock on OS X Server 10.5.8), not sure if this is the 
problem:

Squid Cache: Version 3.0.STABLE16
configure options:  '--prefix=/usr/local/squid' '--enable-delay-pools'


Please, someone, help! I'm about to lose a **very large** client because of 
this inability. Thank you in advance.

Cameron Knowlton


At 12:07 AM +1300 09/11/17, Amos Jeffries wrote:
Cameron Knowlton wrote:
To clarify, I already have the application coded to round robin through a 
provided list of IP:port combinations, I simply need to get Squid to run on 
both local IPs.

Supplying multiple http_port directives to Squid doesn't seem to do the trick:

http_port 24.69.1.2:%PORT%
http_port 24.69.1.3:%PORT%

I only seem to get Squid to run on 24.69.1.2.  :(

Some additional configuration is required:

 * an ACL for each receiving port, to match only traffic arriving at that
 port.
 * tcp_outgoing_address using those ACLs to explicitly set the Squid outbound
 IP on traffic arriving at a given port.

For example:

  http_port 1.2.3.4:3128 name=A
  http_port 1.2.3.5:3128 name=B

  acl fromA myportname A
  tcp_outgoing_address 1.2.3.4 fromA
  tcp_outgoing_address 1.2.3.4 !all

  acl fromB myportname B
  tcp_outgoing_address 1.2.3.5 fromB
  tcp_outgoing_address 1.2.3.5 !all


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.14


-- 
Cameron Knowlton
iGods Internet Marketing
camer...@igods.com
P: 250.382.0226
http://www.knowledgevine.net


RE: [squid-users] storeurl_rewriter and URL mismatch log entries

2009-11-18 Thread Kathleen M Kelly
This issue I'm seeing, where the original fetch url somehow gets into
the cache even though storeurl_rewriter is running and should be
normalizing it and caching only the normalized url...this seems to be
directly related to me also getting TCP_SWAPFAIL_MISS errors in my
access log.  

A request comes in, gets a TCP_MISS.  I thought that it then caches the
normalized url.  But then the next request for this same url gets a
TCP_SWAPFAIL_MISS.  After that, it gets TCP_HITs.  The TCP_SWAPFAIL_MISS
must be causing it to cache the original url.

Does this added info mean anything to anyone?  I'm trying to research
what causes the TCP_SWAPFAIL_MISS now and see if I can find anything.

Thanks so much!


-Original Message-
From: Kathleen M Kelly [mailto:kmke...@yahoo-inc.com] 
Sent: Tuesday, November 17, 2009 5:14 PM
To: squid-users@squid-cache.org
Subject: [squid-users] storeurl_rewriter and URL mismatch log entries

Hello,

I have a squid application where a storeurl_rewriter program is needed
to normalize incoming fetch urls into the same url.  I am running
squid-2.7_9, and have a storeurl_rewriter program that gets launched at
squid startup.  In my config file, I set storeurl_rewrite_children to
100, and only allow http requests to come through (using the
storeurl_access allow proto HTTP setting, so any
cache_object://localhost requests will be ignored).

The problem I am having is I keep getting cache.log entries that look
like storeClientReadHeader: URL mismatch, comparing the incoming fetch
url with my normalized url.  I am getting thousands of these errors an
hour.  I was sure to clear out the squid cache as I launched my new
storeurl_rewriter program, so I don't understand how the cache has any
of the original fetch urls in it at all to be causing this URL mismatch.
I was getting warnings about not enough storeurl_rewriter programs
running, which was when I bumped it up to 100.  I was also having an
issue where logs showed that when the cache_object://localhost requests
came through, my program shut down, which was why I added the access
setting.  I don't see either of these warnings in the logs anymore, yet
still see tons of URL mismatch errors.

Does anyone have any ideas on this?  

Just to clarify a bit more...suppose a fetch url looks like
http://www.fetchme.com/123456, but it should then go through my
storeurl_rewriter program where it will be turned into
http://www.normalized.com/123456, and this is what should be used to
fetch and then cache.  My assumption is that every fetch request goes
through storeurl_rewriter, so how is it that I would be seeing so many
log entries like storeClientReadHeader: URL mismatch
{http://www.normalized.com/123456} != {http://www.fetchme.com/123456}?
Is there some case where a fetch request does not go through
storeurl_rewriter?  If it was busy, I think I would be seeing a log
warning saying so, which I am not since I bumped children to 100.
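
In squid.conf terms, the setup described above is roughly this (a sketch;
the helper path is a placeholder, not my real one):

  storeurl_rewrite_program /usr/local/bin/normalize_url
  storeurl_rewrite_children 100
  acl http_proto proto HTTP
  storeurl_access allow http_proto
  storeurl_access deny all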


Thanks so much,

Kathleen 


Re: [squid-users] WCCP-EVNT:S00: Here_I_Am packet from IP_OF_PROXY w/bad rcv_id 00000000

2009-11-18 Thread paulvay

I'm still having this problem.  I have moved this over to a 6500 sup720 with
the squid proxy on a directly connected vlan so it can use L2 forwarding and
still no luck.  Also tried squid 3.0STABLE10-1, same exact error
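
For anyone wanting to see the WCCP handshake on the wire from the squid
box, something like this works (the interface name is just an example):

  # WCCPv2 Here_I_Am / I_See_You control messages use UDP port 2048
  tcpdump -n -i eth0 udp port 2048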

paulvay wrote:
 
 I'm trying to setup squid to work with WCCP.  
 Cisco box is a 4948 : cat4500-entservicesk9-mz.122-50.SG2.bin
 Squid is running on CentOS 5.3 
 Squid Cache: Version 2.6.STABLE21
 
 configure options:  '--build=x86_64-redhat-linux-gnu'
 '--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu'
 '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
 '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
 '--includedir=/usr/include' '--libdir=/usr/lib64'
 '--libexecdir=/usr/libexec' '--sharedstatedir=/usr/com'
 '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--exec_prefix=/usr'
 '--bindir=/usr/sbin' '--libexecdir=/usr/lib64/squid'
 '--localstatedir=/var' '--datadir=/usr/share' '--sysconfdir=/etc/squid'
 '--enable-epoll' '--enable-snmp' '--enable-removal-policies=heap,lru'
 '--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl'
 '--with-openssl=/usr/kerberos' '--enable-delay-pools'
 '--enable-linux-netfilter' '--with-pthreads'
 '--enable-ntlm-auth-helpers=SMB,fakeauth'
 '--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group'
 '--enable-auth=basic,digest,ntlm' '--enable-digest-auth-helpers=password'
 '--with-winbind-auth-challenge' '--enable-useragent-log'
 '--enable-referer-log' '--disable-dependency-tracking'
 '--enable-cachemgr-hostname=localhost' '--enable-underscores'
 '--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL'
 '--enable-cache-digests' '--enable-ident-lookups'
 '--enable-follow-x-forwarded-for' '--enable-wccpv2' '--enable-fd-config'
 '--with-maxfd=16384' 'build_alias=x86_64-redhat-linux-gnu'
 'host_alias=x86_64-redhat-linux-gnu'
 'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-D_FORTIFY_SOURCE=2 -fPIE
 -Os -g -pipe -fsigned-char' 'LDFLAGS=-pie'
 
 Squid.conf:
 http_port 8080 transparent 
 wccp2_router switch loopback
 wccp2_version 4 
 wccp2_forwarding_method 1 
 wccp2_return_method 1 
 wccp2_service standard 0 
 wccp2_address 0.0.0.0 
 
 modprobe ip_gre 
 ip tunnel add wccp0 mode gre remote 10.103.7.41 local 10.138.232.90 dev
 eth0 
 ip addr add 10.138.232.90/32 dev wccp0 
 ip link set wccp0 up 
 
 echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter
 
 iptables -t nat -A PREROUTING -p tcp -i wccp0 -j REDIRECT --to-ports 8080 
 iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
 --to-ports 8080 
 
 
 Doing a 'debug ip wccp events' on the Cisco box, I get: WCCP-EVNT:S00:
 Here_I_Am packet from IP_OF_PROXY w/bad rcv_id
 
 Please let me know what other info I can provide.
 
 Thanks,
 Paul
 

-- 
View this message in context: 
http://old.nabble.com/WCCP-EVNT%3AS00%3A-Here_I_Am-packet-from-%3CIP_OF_PROXY%3E-w-bad-rcv_id--tp26321526p26413811.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] squid proxy - multiple outgoing IP addresses

2009-11-18 Thread Cameron Knowlton
Looks like I spoke too soon... when tracing my nice new proxy on 24.69.177.112, 
it seems that it's actually going through the primary IP after all:

curl --proxy 24.69.177.112:3128 --trace - www.ipaddressworld.com

Your computer's IP address is:
24.69.160.243


Anyway, I changed to the previous method I was using, and it seems to report 
correctly:

acl ip1 myip 24.69.160.243
acl ip2 myip 24.69.177.112
tcp_outgoing_address 24.69.160.243 ip1
tcp_outgoing_address 24.69.177.112 ip2


curl --proxy 24.69.177.112:3128 --trace - www.ipaddressworld.com

Your computer's IP address is:
24.69.177.112


Much better. Now, if I could only convince my ISP to give me more than one 
proxy. Ack! Another barrier!

thanks again, Amos.

Cameron Knowlton


At 10:30 AM -0800 09/11/18, Cameron Knowlton wrote:
Eureka! Finally found the problem, it was with a different setting within 
Squid (SquidMan, actually)... I've posted my squid.conf below in its entirety 
in hopes that it might help others.

and, now the promised and updated squid.conf (I'm so excited, I forgot to 
include it):

# --
# WARNING - do not edit this template unless you know what you are doing
# --

cache_peer %PARENTPROXY% parent %PARENTPORT% 7 no-query no-digest 
no-netdb-exchange default
cache_dir ufs %CACHEDIR% %CACHESIZE% 16 256
maximum_object_size %MAXOBJECTSIZE%
http_port %PORT%
visible_hostname %VISIBLEHOSTNAME%

# http_port 24.69.160.243:3128 name=A
# http_port 24.69.177.112:3128 name=B

# acl fromA myportname A
# tcp_outgoing_address 24.69.160.243 fromA
# tcp_outgoing_address 24.69.160.243 !all

# acl fromB myportname B
# tcp_outgoing_address 24.69.177.112 fromB
# tcp_outgoing_address 24.69.177.112 !all

acl ip1 myip 24.69.160.243
acl ip2 myip 24.69.177.112
tcp_outgoing_address 24.69.160.243 ip1
tcp_outgoing_address 24.69.177.112 ip2

cache_access_log %ACCESSLOG%
cache_log %CACHELOG%
cache_store_log %STORELOG%
pid_filename %PIDFILE%

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin
no_cache deny QUERY

# access control lists
%ALLOWEDHOSTS%
%DIRECTHOSTS%
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563 8443
acl Safe_ports port 80 81 21 443 563 70 210 1025-65535 280 488 591 777
acl CONNECT method CONNECT

# only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# deny requests to unknown ports
http_access deny !Safe_ports

# deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports

# client access
http_access allow localhost
%HTTPACCESSALLOWED%
http_access deny all

# direct access (bypassing parent proxy)
%ALWAYSDIRECT%
always_direct deny all


[squid-users] Squid - impact of TLS/SSL vulnerability?

2009-11-18 Thread The Psycho Chicken
Hi,

Has anyone looked at the impact of the recent TLS/SSL vulnerability
(http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-3555) on Squid? If
you're using Squid as an HTTPS reverse proxy then it has SSL exposed to the
Internet.

I haven't noticed anything in the mailing lists.

Cheers,

Paul



Re: [squid-users] problem: remote site times out, provider blames squid proxy

2009-11-18 Thread Henrik Nordstrom
On Mon 2009-11-16 at 10:21 +1100, Howard Cock wrote:

 The problem we have is that this site often fails to load via our
 squid proxies, clicking on links on the front page – specifically
 different “answers” – one can wait a long time for a response. The
 site does load fine if going direct. After much too-ing and fro-ing
 the tech support at RightNow maintain that our proxy server is
 displaying aberrant behavior. This is puzzling as we are not a small
 university and our 35,000 users of the web don’t have this problem
 with any site except the RightNow site. Our proxy systems handle
 millions of requests just fine.

Usually it is this:

http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize

also documented at

http://wiki.squid-cache.org/SquidFaq/SystemWeirdnesses#Some_sites_load_extremely_slowly_or_not_at_all

Regards
Henrik



[squid-users] squid 3.1.014 crashing

2009-11-18 Thread Landy Landy
Hello.

I would like to know what's wrong with squid 3.1.0.14. I have squid installed 
with eCAP enabled and it is crashing very often, even though it restarts itself. 
I contacted the libecap author and he claims there's a bug in this version of 
squid. I don't know if that is really true, but I wanted to share it with the 
list. Here's what cache.log shows:

(squid)(death+0x4b)[0x818b8cb]
[0xb7f67420]
/usr/local/lib/ecap_adapter_gzip.so(_ZN7Adapter7Xaction17noteVbContentDoneEb+0x61)[0xb7ab0411]
(squid)(_ZN10Adaptation4Ecap10XactionRep23noteBodyProductionEndedE8RefCountI8BodyPipeE+0x61)[0x81fdd61]
(squid)(_ZN12UnaryMemFunTI12BodyConsumer8RefCountI8BodyPipeEE6doDialEv+0x48)[0x80f1208]
(squid)(_ZN9JobDialer4dialER9AsyncCall+0x51)[0x8199661]
(squid)(_ZN10AsyncCallTI18BodyConsumerDialerE4fireEv+0x18)[0x80f1138]
(squid)(_ZN9AsyncCall4makeEv+0x17b)[0x819888b]
(squid)(_ZN14AsyncCallQueue8fireNextEv+0xf9)[0x819b4a9]
(squid)(_ZN14AsyncCallQueue4fireEv+0x28)[0x819b5a8]
(squid)(_ZN9EventLoop13dispatchCallsEv+0x13)[0x81096c3]
(squid)(_ZN9EventLoop7runOnceEv+0xf6)[0x81098b6]
(squid)(_ZN9EventLoop3runEv+0x28)[0x8109988]
(squid)(_Z9SquidMainiPPc+0x4b5)[0x8151e75]
(squid)(main+0x27)[0x81522f7]
/lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xc8)[0xb7caeea8]
(squid)(__gxx_personality_v0+0x155)[0x80bd081]
FATAL: Received Segment Violation...dying.
2009/11/18 17:20:19| storeDirWriteCleanLogs: Starting...
2009/11/18 17:20:19| WARNING: Closing open FD   16
2009/11/18 17:20:19| 65536 entries written so far.
2009/11/18 17:20:19|131072 entries written so far.
2009/11/18 17:20:19|196608 entries written so far.
2009/11/18 17:20:19|   Finished.  Wrote 202205 entries.
2009/11/18 17:20:19|   Took 0.09 seconds (2268958.01 entries/sec).
CPU Usage: 2.776 seconds = 2.028 user + 0.748 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:4320 KB
Ordinary blocks: 4160 KB 52 blks
Small blocks:   0 KB  1 blks
Holding blocks: 20700 KB 86 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 159 KB
Total in use:   24860 KB 575%
Total free:   159 KB 4%
2009/11/18 17:20:22| Starting Squid Cache version 3.1.0.14 for 
i686-pc-linux-gnu...
2009/11/18 17:20:22| Process ID 6
2009/11/18 17:20:22| With 1024 file descriptors available
2009/11/18 17:20:22| Initializing IP Cache...
2009/11/18 17:20:22| DNS Socket created at [::], FD 7
2009/11/18 17:20:22| Adding nameserver 196.3.81.5 from squid.conf
2009/11/18 17:20:22| Adding nameserver 200.88.127.22 from squid.conf
2009/11/18 17:20:22| Adding nameserver 196.3.81.132 from squid.conf
2009/11/18 17:20:22| Unlinkd pipe opened on FD 12
2009/11/18 17:20:22| Swap maxSize 1024 + 524288 KB, estimated 224256 objects
2009/11/18 17:20:22| Target number of buckets: 11212
2009/11/18 17:20:22| Using 16384 Store buckets
2009/11/18 17:20:22| Max Mem  size: 524288 KB
2009/11/18 17:20:22| Max Swap size: 1024 KB
2009/11/18 17:20:22| Version 1 of swap file with LFS support detected...
2009/11/18 17:20:22| Rebuilding storage in /var/log/squid3.1/cache (CLEAN)
2009/11/18 17:20:22| Using Round Robin store dir selection
2009/11/18 17:20:22| Current Directory is /home/landysaccount
2009/11/18 17:20:22| Loaded Icons.
2009/11/18 17:20:22| Accepting  intercepted HTTP connections at 
172.16.0.1:3128, FD 16.
2009/11/18 17:20:22| HTCP Disabled.
2009/11/18 17:20:22| loading Squid module from 
'/usr/local/lib/ecap_adapter_gzip.so'
2009/11/18 17:20:22| Squid modules loaded: 1
2009/11/18 17:20:22| Adaptation support is on
2009/11/18 17:20:22| Ready to serve requests.
2009/11/18 17:20:22| Store rebuilding is 2.03% complete
2009/11/18 17:20:24| Done reading /var/log/squid3.1/cache swaplog (202205 
entries)
2009/11/18 17:20:24| Finished rebuilding storage from disk.
2009/11/18 17:20:24|202205 Entries scanned
2009/11/18 17:20:24| 0 Invalid entries.
2009/11/18 17:20:24| 0 With invalid flags.
2009/11/18 17:20:24|202205 Objects loaded.
2009/11/18 17:20:24| 0 Objects expired.
2009/11/18 17:20:24| 0 Objects cancelled.
2009/11/18 17:20:24| 0 Duplicate URLs purged.
2009/11/18 17:20:24| 0 Swapfile clashes avoided.
2009/11/18 17:20:24|   Took 1.72 seconds (117266.86 objects/sec).
2009/11/18 17:20:24| Beginning Validation Procedure
2009/11/18 17:20:24|   262144 Entries Validated so far.
2009/11/18 17:20:24|   Completed Validation Procedure
2009/11/18 17:20:24|   Validated 404423 Entries
2009/11/18 17:20:24|   store_swap_size = 5977592
2009/11/18 17:20:24| storeLateRelease: released 0 objects
(squid)(death+0x4b)[0x818b8cb]
[0xb7f33420]
/usr/local/lib/ecap_adapter_gzip.so(_ZN7Adapter7Xaction17noteVbContentDoneEb+0x61)[0xb7a7c411]
(squid)(_ZN10Adaptation4Ecap10XactionRep23noteBodyProductionEndedE8RefCountI8BodyPipeE+0x61)[0x81fdd61]

Re: [squid-users] Squid - impact of TLS/SSL vulnerability?

2009-11-18 Thread Kinkie
On Wed, Nov 18, 2009 at 10:25 PM, The Psycho Chicken
psychochic...@restlesschickens.com wrote:
 Hi,

 Has anyone looked at the impact of the recent TLS/SSL vulnerability
 (http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2009-3555) on Squid? If
 you're using Squid as a HTTPS reverse proxy then it has SSL exposed to the
 Internet.

 I haven't noticed anything in the mailing lists.

Squid is as vulnerable as any other product based on SSL.
Unfortunately there's not much we developers can do. The burden falls
on the (open)ssl library implementors, and all we can do is wait.
Some OS vendors have already started shipping an updated ssl library
which somehow plugs the hole. After that (dynamic) library has been
installed on the host OS, Squid (after a restart at most) is
immediately protected from the flaw.
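
If you want to check what your Squid was built against and what library
is installed on the system (output varies by build and OS):

  # prints the version and configure options, including any --with-openssl path
  squid -v
  # the SSL library version currently installed on the system
  openssl version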


-- 
/kinkie


[squid-users] Question on HAPROXY

2009-11-18 Thread Landy Landy
Hello.

I was reading something about haproxy and couldn't quite understand whether it 
can be used alongside squid. Can it be used to save bandwidth and balance two 
or more DSL lines together with squid?




  


[squid-users] $35 Squid setup help

2009-11-18 Thread squidby

I'd like to set up squid to be a basic proxy for http and https, to limit access
to specific IPs, and to use separate outgoing IPs based on port.


Running under windows :(

$35 by paypal if you can provide the squid.conf and step by step for any ssl
requirements. 

Message me your email address, and I'll respond on a first come, first served basis.
-- 
View this message in context: 
http://old.nabble.com/%2435-Squid-setup-help-tp26418963p26418963.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] $35 Squid setup help

2009-11-18 Thread Brian Mearns
On Wed, Nov 18, 2009 at 8:36 PM, squidby sq...@tainable.com wrote:

 I'd to set up squid to be basic proxy for http and https, to limit access to
 specific IPs, and use separate outgoing IPs based on port.


 Running under windows :(

 $35 by paypal if you can provide the squid.conf and step by step for any ssl
 requirements.

 message me your email address, and I'll respond on first come first serve
 --
 View this message in context: 
 http://old.nabble.com/%2435-Squid-setup-help-tp26418963p26418963.html
 Sent from the Squid - Users mailing list archive at Nabble.com.


Not that I begrudge anyone here their $35, but that's totally unusual
on this list. This is a volunteer/community-based help forum: people
are here to share their knowledge in order to improve the user base.
If you want to offer money for their help, that's obviously your
choice, but it's definitely unusual.

Cheers,
-Brian

-- 
Feel free to contact me using PGP Encryption:
Key Id: 0x3AA70848
Available from: http://keys.gnupg.net


[squid-users] Gzip Supporting

2009-11-18 Thread yaoxing zhang

Hello everyone,
I'm using squid 3.0 stable 16 as an accelerator for my IIS 7.0 server.
I find that squid does not enable gzip compression, which
increases internet traffic a lot. I can't find any option with which
I can enable gzip. Can anyone help me?

I attached the request and response headers below:

request header:
Host: *.*.*.*
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.1.5) Gecko/20091105 Fedora/3.5.5-1.fc11 Firefox/3.5.5 GTB5
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: zh-cn,zh;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: GB2312,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: ***
If-Modified-Since: Thu, 19 Nov 2009 03:41:39 GMT
Cache-Control: max-age=0

response from squid:
Cache-Control: public, max-age=1
Content-Type: text/html; charset=utf-8
Expires: Thu, 19 Nov 2009 03:43:39 GMT
Last-Modified: Thu, 19 Nov 2009 03:41:39 GMT
Server: Microsoft-IIS/7.0
X-AspNet-Version: 2.0.50727
Date: Thu, 19 Nov 2009 03:43:37 GMT
Content-Length: 249662
X-Cache: HIT from www.dealextreme.com
X-Cache-Lookup: HIT from www.dealextreme.com:80
Via: 1.0 www.dealextreme.com (squid/3.0.STABLE16)
Age: 9

response header from IIS7 if requested IIS server directly:
Cache-Control: public, max-age=0
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Expires: Thu, 19 Nov 2009 03:37:38 GMT
Last-Modified: Thu, 19 Nov 2009 03:35:38 GMT
Vary: Accept-Encoding
Server: Microsoft-IIS/7.0
X-AspNet-Version: 2.0.50727
Date: Thu, 19 Nov 2009 03:37:38 GMT
Content-Length: 29707
--
Regards,
YX


Re: [squid-users] Gzip Supporting

2009-11-18 Thread sqlcamel

yaoxing zhang:

Hello everyone,
I'm using squid 3.0 stable 16 as an accelerator for my IIS 7.0 server.
I find that squid does not enable gzip compression, which
increases internet traffic a lot. I can't find any option with which
I can enable gzip. Can anyone help me?


AFAIK, only Squid-3.1 with ecap support can enable the external gzip module.
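
Something along these lines in squid.conf for 3.1 (a sketch only; the
module path matches the one mentioned elsewhere on this list, and the
service URI is a placeholder you must replace with the URI published by
whichever gzip adapter you install):

  loadable_modules /usr/local/lib/ecap_adapter_gzip.so
  ecap_enable on
  # name        vectoring-point   bypass  service URI (adapter-specific)
  ecap_service gzip_service respmod_precache 0 ecap://example.com/gzip
  adaptation_access gzip_service allow all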


--
Yahoo/Skype: sqlcamel