[squid-users] Re: squidclient -m PURGE problem

2008-06-11 Thread Seonkyu Park

I'm sorry, I left out the -m option in my earlier message. Where I wrote:

 'squidclient  -p 80 -h 1.1.1.1 http://a.b.c/x.jpg'

it should have been:

 'squidclient  -p 80 -m PURGE -h 1.1.1.1 http://a.b.c/x.jpg'



 Hello Everyone.
 
 I've been using Squid as a reverse proxy (web server accelerator),
 and I am trying to build a purge system with the squidclient program,
 so that clients can get fresh objects from Squid.
 
 'squidclient  -p 80 -h 1.1.1.1 http://a.b.c/x.jpg'
 
 I got the result: cache object (x.jpg) successfully deleted.
 
 And then  
 
 'wget http://a.b.c/x.jpg'
 
 I expected Squid to fetch a fresh x.jpg from the origin server,
 but Squid returned a 503 error.
 So I deleted the whole cache directory, rebuilt the file system, and
 started Squid again. Same result.
 
 What I want: after I purge an object, Squid should fetch a fresh copy
 from the origin server.
 
 How can I make this work?
 
 Thanks.
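(For reference: Squid refuses the PURGE method unless an ACL explicitly allows it. A minimal squid.conf sketch, assuming purges are only sent from the proxy host itself:)

```
acl PURGE method PURGE
acl localhost src 127.0.0.1
http_access allow PURGE localhost
http_access deny PURGE
```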
 
 


[squid-users] TCP_SWAPFAIL_MISS

2008-06-11 Thread Stephan Viljoen

Hi There,

I upgraded squid-2.6.STABLE19 to squid-2.7.STABLE2 and noticed some weird 
messages in my access log which seem to occur quite frequently. I was 
wondering what TCP_SWAPFAIL_MISS means, and whether it's anything to be 
concerned about.


Thanks in advance,
-steph

log snippet

1213141045.679    679 10.0.18.114 TCP_SWAPFAIL_MISS/200 1130 GET 
http://media.ign.com/ign/images/gamedetails_moreinfo_bg.gif - 
DIRECT/72.247.238.227 image/gif
1213141045.745  48931 10.0.10.130 TCP_SWAPFAIL_MISS/200 248656 GET 
http://i273.photobucket.com/albums/jj207/ybf08/June%2008/shaggy2.jpg - 
DIRECT/209.17.73.10 image/jpeg
1213141045.847   3611 10.0.21.54 TCP_SWAPFAIL_MISS/200 27613 GET 
http://www.fileratings.com/DapWelcome/img/Premium_buy.gif - 
DIRECT/212.143.22.36 image/gif
1213141046.052    672 10.0.12.102 TCP_SWAPFAIL_MISS/304 489 GET 
http://www.spiegel.de/img/0,1020,688491,00.jpg - DIRECT/195.71.11.67 
image/jpeg
1213141046.094   5609 10.0.4.194 TCP_SWAPFAIL_MISS/304 418 GET 
http://cdn.stardoll.com/i/icon/ad.gif - DIRECT/209.9.8.79 -
1213141046.239    568 10.0.14.198 TCP_SWAPFAIL_MISS/304 372 GET 
http://m.2mdn.net/879366/DartRichMedia_1_03.js - DIRECT/209.62.187.43 
application/x-javascript
1213141046.342   2179 10.0.18.114 TCP_SWAPFAIL_MISS/200 1934 GET 
http://media.ign.com/ign/images/latestmedia_bg.gif - DIRECT/72.247.238.186 
image/gif
1213141046.423   4317 10.0.14.206 TCP_SWAPFAIL_MISS/200 6091 GET 
http://www.nseindia.com/nifty_new.png - DIRECT/210.210.25.111 image/png


//--log snippet
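(To gauge how frequent these are relative to ordinary hits and misses, the result codes can be tallied from the log. A small sketch in Python, assuming the native Squid log format shown above:)

```python
# Tally the result codes (field 4 of a native Squid access.log line,
# e.g. "TCP_SWAPFAIL_MISS/200") across all log lines.
from collections import Counter

def tally_result_codes(lines):
    codes = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) >= 4:
            codes[fields[3].split("/")[0]] += 1
    return codes

# Usage:
# with open("/var/log/squid/access.log") as f:
#     print(tally_result_codes(f).most_common())
```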


--
This message has been scanned for viruses and
dangerous content by the BBI SMTP filter, and is
believed to be clean.



[squid-users] How to have my cache act just like a user-agent?

2008-06-11 Thread Timothy Madden
Hello

My local squid, after a few successful requests through the parent
cache, starts accessing the URLs directly, although I cannot connect
to the internet directly from my workplace.
I can only get to the internet through the parent cache. I think the
parent cache somehow tells my squid to access the URLs directly.

How can I have my squid behave just like a user-agent, so the
parent cache never knows it is being accessed by a local cache?


Thank you,
Timothy Madden


Re: [squid-users] How to have my cache act just like a user-agent?

2008-06-11 Thread Amos Jeffries

Timothy Madden wrote:

Hello

My local squid, after a few successful requests through the parent
cache, starts accessing the URLs directly, although I cannot connect
to the internet directly from my workplace.
I can only get to the internet through the parent cache. I think the
parent cache somehow tells my squid to access the URLs directly.

How can I have my squid behave just like a user-agent, so the
parent cache never knows it is being accessed by a local cache?


cache_peer ip-of-parent parent port-of-parent 0 no-query default name=up
cache_peer_access up allow all
never_direct allow all


Amos
--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


[squid-users] Problems Using squid 2.6 as a transparent web cache

2008-06-11 Thread Donoso Gabilondo, Daniel
Hello,
I have a Linux application that uses HTTP resources (videos,
images, ...). These resources are on another machine running an HTTP
server (under Windows).

The Linux application always downloads the resources. I installed and
configured Squid on the Linux machine to cache these resources, but the
application still always downloads them from the HTTP server. I don't
know how to resolve the problem. I need some help, please.

The linux ip address is: 192.168.240.23 and the windows with http server
ip is: 192.168.233.158

This is my squid.conf file content:

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
icp_access allow all
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
coredump_dir /var/spool/squid
cache_dir ufs /var/spool/squid 700 32 512
http_port 3128 transparent
icp_port 0
cache_peer localhost.home.nl parent 8080 0 default
acl HOME dstdomain .home.nl
always_direct allow all
never_direct allow all


I executed these commands:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to
192.168.240.23:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
--to-port 3128
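(As an aside, same-host interception guides usually install a single REDIRECT rule rather than both a DNAT and a REDIRECT rule for the same traffic, and exclude Squid's own outgoing fetches so they are not looped back into the proxy. A sketch; the interface name and the 'squid' user are assumptions:)

```
# Redirect web traffic arriving on eth0 to Squid on this host.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
# Do not intercept Squid's own outbound requests (assumes Squid runs as user 'squid').
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT
```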


The cache.log content is this:

2008/06/11 11:30:52| Starting Squid Cache version 2.6.STABLE19 for
i386-redhat-linux-gnu...
2008/06/11 11:30:52| Process ID 8617
2008/06/11 11:30:52| With 1024 file descriptors available
2008/06/11 11:30:52| Using epoll for the IO loop
2008/06/11 11:30:52| ipcacheAddEntryFromHosts: Bad IP address 'tele1'
2008/06/11 11:30:52| ipcacheAddEntryFromHosts: Bad IP address 'svc1'
2008/06/11 11:30:52| DNS Socket created at 0.0.0.0, port 42897, FD 6
2008/06/11 11:30:52| Adding nameserver 192.168.202.11 from
/etc/resolv.conf
2008/06/11 11:30:52| Adding nameserver 192.168.202.13 from
/etc/resolv.conf
2008/06/11 11:30:52| User-Agent logging is disabled.
2008/06/11 11:30:52| Referer logging is disabled.
2008/06/11 11:30:52| Unlinkd pipe opened on FD 11
2008/06/11 11:30:52| Swap maxSize 716800 KB, estimated 55138 objects
2008/06/11 11:30:52| Target number of buckets: 2756
2008/06/11 11:30:52| Using 8192 Store buckets
2008/06/11 11:30:52| Max Mem  size: 8192 KB
2008/06/11 11:30:52| Max Swap size: 716800 KB
2008/06/11 11:30:52| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2008/06/11 11:30:52| Rebuilding storage in /var/spool/squid (CLEAN)
2008/06/11 11:30:52| Using Least Load store dir selection
2008/06/11 11:30:52| Set Current Directory to /var/spool/squid
2008/06/11 11:30:52| Loaded Icons.
2008/06/11 11:30:53| Accepting transparently proxied HTTP connections at
0.0.0.0, port 3128, FD 13.
2008/06/11 11:30:53| WCCP Disabled.
2008/06/11 11:30:53| Ready to serve requests.
2008/06/11 11:30:53| Configuring Parent localhost.home.nl/8080/0
2008/06/11 11:30:53| Done reading /var/spool/squid swaplog (0 entries)
2008/06/11 11:30:53| Finished rebuilding storage from disk.
2008/06/11 11:30:53| 0 Entries scanned
2008/06/11 11:30:53| 0 Invalid entries.
2008/06/11 11:30:53| 0 With invalid flags.
2008/06/11 11:30:53| 0 Objects loaded.
2008/06/11 11:30:53| 0 Objects expired.
2008/06/11 11:30:53| 0 Objects cancelled.
2008/06/11 11:30:53| 0 Duplicate URLs purged.
2008/06/11 11:30:53| 0 Swapfile clashes avoided.
2008/06/11 11:30:53|   Took 0.3 seconds (   0.0 objects/sec).
2008/06/11 11:30:53| Beginning Validation Procedure
2008/06/11 11:30:53|   Completed Validation Procedure
2008/06/11 11:30:53|   Validated 0 Entries
2008/06/11 11:30:53|   store_swap_size = 0k
2008/06/11 11:30:53| storeLateRelease: released 0 objects






Re: [squid-users] How does weighted-round-robin work?

2008-06-11 Thread Amos Jeffries

Roy M. wrote:

According to 3.0 manual:


=
weighted-round-robin
  to define a set of parents which should be used in a round-robin
  fashion, with the frequency of each parent being based on the
  round-trip time. Closer parents are used more often.
=


Currently I have 3 Apache web servers, A, B, C,
where B has Dual CPU and more memory, they are under the same private network.
So I assign B with more weight.


#A
parent 80 0 no-query originserver weighted-round-robin login=PASS weight=1

#B
parent 80 0 no-query originserver weighted-round-robin login=PASS weight=2

#C
parent 80 0 no-query originserver weighted-round-robin login=PASS weight=1


However, from the access logs of these 3 web servers, I found that
the MISS requests to #B are only around 130% higher than to A and C.

Is this normal, or have I misunderstood the weighted-round-robin settings?


A small misreading, perhaps.  weighted-round-robin works the same as 
round-robin, only it is based on the peer RTT (the network delay to 
reach the peer).


In vanilla round-robin, the counter gets 1 added per request, which 
evenly balances requests across the peers, regardless of network trouble 
or anything else.


In weighted round-robin, the counter gets RTT/weight added, which 
balances things more in favour of close peers. The weight= option can 
give an extra boost to preferred peers, or provide manual balancing when 
the expected RTT (in ms) to a peer is large. The division is never 
allowed to produce a non-integer or a value under 1.
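(A toy illustration of the counter arithmetic described above - a sketch, not Squid's actual code: each request goes to the peer with the lowest counter, and that counter then grows by max(1, RTT/weight), so a larger weight slows the counter's growth and attracts more requests.)

```python
# Toy model of weighted-round-robin peer selection: the peer with the
# lowest counter wins each request, and its counter then grows by
# max(1, rtt // weight).  Not Squid's real implementation.

def simulate(peers, requests):
    """peers: {name: (rtt_ms, weight)}; returns picks per peer."""
    counters = {name: 0 for name in peers}
    picks = {name: 0 for name in peers}
    for _ in range(requests):
        name = min(counters, key=counters.get)  # lowest counter wins
        rtt, weight = peers[name]
        counters[name] += max(1, rtt // weight)
        picks[name] += 1
    return picks

# Equal RTTs, but B has weight 2: B's counter grows half as fast,
# so B serves roughly twice as many requests as A or C.
print(simulate({"A": (10, 1), "B": (10, 2), "C": (10, 1)}, 40))
```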


Amos
--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


[squid-users] Forwarding NTLM to BasicAuthentication

2008-06-11 Thread a.s.d

Hi all.

I have the following problem to solve; please help me.

I have a UTM solution which works as a proxy (AV+CF+IP) and is bound to
an LDAP server. Unfortunately this solution does not support LDAP/NTLM
authentication, only basic authentication.

My idea is to build an additional proxy (Squid) which can take
authentication data from the Windows client via NTLM and forward it to
the UTM.

Is this possible? If not, maybe you have a better suggestion.

Thanks in advance.

-- 
View this message in context: 
http://www.nabble.com/Forwarding-NTLM-to-BasicAuthentication-tp17773575p17773575.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] reverse proxy accelerator, cache persistence

2008-06-11 Thread Sylvain Viart

Hi,

I would like my squid proxy to keep serving its cached copy of a 
document, when the origin server is unreachable.


What directive should I look for?

Regards,
Sylvain.


Re: [squid-users] help on performances

2008-06-11 Thread Indunil Jayasooriya
 Need some help on how to improve the performance of squid proxy.

 My problem is that when I access any site directly it is fast, but when
 I use the proxy it is slow.

Please try the command below and see its output:

squidclient mgr:info



-- 
Thank you
Indunil Jayasooriya


[squid-users] wccp transparent ??? (like tproxy)

2008-06-11 Thread Alexandre Correa
Hello..

Is WCCP (v2) transparent like TPROXY? With WCCP running, does the
destination web server see the IP of the client or of the proxy? (The
client has a routable/valid IP address.)


Regards!

-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] debug_options reference

2008-06-11 Thread Henrik Nordstrom
On tis, 2008-06-10 at 14:37 +0200, Anton Melser wrote:

 Yeah, I get that, but there's nothing like a few pertinent examples to
 help... and for people who use Squid rarely, or who just want to solve
 a problem and are told Squid is the best, it can be far more difficult
 than it needs to be. It very probably is the best, but the docs are
 very much oriented toward full-time admins, IMHO...

Yes, we seriously need more people looking at and updating the
documentation. If you can, you are more than welcome to help write
good documentation on the items you found missing. The wiki is open for
all to contribute.

http://wiki.squid-cache.org/

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] squid-cache.org

2008-06-11 Thread Monah Baki
Out of curiosity: the download section says version 2.7 Latest
Release Stable 1, but when you click on the 2.7 link it says Stable 2.
Which is it?



Thanks




BSD Networking, Microsoft Notworking





[squid-users] Re: squid-cache.org

2008-06-11 Thread Monah Baki

Forget it :)



On Jun 11, 2008, at 6:28 AM, Monah Baki wrote:

Out of curiosity: the download section says version 2.7 Latest
Release Stable 1, but when you click on the 2.7 link it says Stable 2.
Which is it?



Thanks




BSD Networking, Microsoft Notworking





BSD Networking, Microsoft Notworking





Re: [squid-users] cache_mem or let the kernel handle it?

2008-06-11 Thread Henrik Nordstrom
On tis, 2008-06-10 at 10:18 +0200, Anton Melser wrote:

 When going through mod_cache before finally coming back to Squid, they
 talk about the fact that it can actually be better to use a disk cache
 than a memory cache, the reason being that the kernel caches files, and
 does so very well...

For Squid it's a complex equation, but if your site consists mostly of
small objects (at most a few hundred KB) and is of reasonably limited
total size, then raising cache_mem is a benefit.
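(For illustration, raising the memory cache in squid.conf might look like the following; the values are examples, not recommendations:)

```
cache_mem 512 MB
maximum_object_size_in_memory 256 KB
```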

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid3 - reason to migrate

2008-06-11 Thread Henrik Nordstrom
On tis, 2008-06-10 at 09:42 +0200, [EMAIL PROTECTED] wrote:

 It's strange that nobody is testing it; it's a very good fs.

Well, it's been mostly Adrian working on COSS in the last few years,
and he does not like working on Squid-3, so the COSS version in Squid-3
has been left behind. Additionally, COSS is not (yet) a priority item
for the users who have contracted developers working on Squid-3.

Regards
Henrik




signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid3 - reason to migrate

2008-06-11 Thread Adrian Chadd
On Wed, Jun 11, 2008, Henrik Nordstrom wrote:
 On tis, 2008-06-10 at 09:42 +0200, [EMAIL PROTECTED] wrote:
 
  It's strange that nobody is testing it; it's a very good fs.
 
 Well, it's been mostly Adrian working on COSS in the last few years,
 and he does not like working on Squid-3, so the COSS version in Squid-3
 has been left behind. Additionally, COSS is not (yet) a priority item
 for the users who have contracted developers working on Squid-3.

And there's no real reason for me to work on it until the last few
things are sorted out - more sensible layouts, more sensible rebuilding.


Adrian


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] BitTorrent

2008-06-11 Thread Henrik Nordstrom
On tis, 2008-06-10 at 09:08 -0400, Tuc at T-B-O-H.NET wrote:
 Hi,
 
   There seemed to have been some discussion back in October
 about BitTorrent. I just noticed that I had :
 
 2008/06/10 03:02:39| parseHttpRequest: Requestheader contains NULL characters
 2008/06/10 03:02:39| parseHttpRequest: Unsupported method '^SBitTorrent'
 2008/06/10 03:02:39| clientReadRequest: FD 27 (192.168.3.15:64673) Invalid 
 Request

Are you doing transparent interception of port 80?

Some torrents run on port 80 without using HTTP to bypass stupid port
based firewalls...

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] How to improve integratin of LDAP authentication

2008-06-11 Thread Jevos, Peter
Hi, 

I'd like to ask you one question.
I have ldap authentication against AD that works perfectly.
My config is:
auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -R -b
dc=x, dc=x -D cn=x,ou=x,ou=x,dc=x,dc=x,dc=x -w x -f
sAMAccountName=%s -h 10.0.0.1 -p 3268

When I run it, a login window appears for the user to enter login
credentials. That's fine and it works.
My question is: is it possible to hand over these credentials from the
MS Windows login ( like domainname\user )?
The reason is to avoid the interruption of the login window.

Is it actually possible?

Thx

pet




Re: [squid-users] How does weighted-round-robin work?

2008-06-11 Thread Roy M.
Hi,

On Wed, Jun 11, 2008 at 5:10 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Roy M. wrote:


 In vanilla round-robin, the counter gets 1 added per request, which evenly
 balances requests across the peers, regardless of network trouble or
 anything else.

 In weighted round-robin, the counter gets RTT/weight added, which balances
 things more in favour of close peers. The weight= option can give an extra
 boost to preferred peers, or provide manual balancing when the expected
 RTT (in ms) to a peer is large. The division is never allowed to produce
 a non-integer or a value under 1.


Thanks.

Since all the web servers are on a private Gigabit network, RTT delay
and bandwidth are not important.

However, they are CPU-intensive application servers, and that is why I
run Squid as a reverse proxy.

So I would like the one with more CPU power (which I can configure
manually) to take a larger share of the requests. What would be the
best setup for this?


Howard


Re: [squid-users] Load balancer Squid mystery

2008-06-11 Thread Henrik Nordstrom
On tis, 2008-06-10 at 13:10 -0700, Hitech Luddite wrote:
 Ever now and then, a request in a session which resides on app server
 A is sent by the load balancer to app server B. This happens in spite
 of the load balancer cookie (and the ASP.NET session cookie, for what
 it's worth) being the same as before. We have seen the behavior with
 both a Cisco and a F5 load balancer.

Probably persistent connections are making the load balancer screw up.
My guess is that the load balancer only looks at the first request on a
connection and then assumes every following request will be from the
same user. When using a proxy, the only guarantee is that the request
is from the same proxy... Connections in HTTP are hop-by-hop, not
end-to-end, and the same connection from a proxy will be reused for
requests from different users of that proxy.

Try server_persistent_connections off in squid.conf, or disable
persistent connections / keep-alive on the web server.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid 3 as reverse-proxy with SSL

2008-06-11 Thread Henrik Nordstrom
On tis, 2008-06-10 at 22:14 +0200, Maik Fuss wrote:

 the certs are from an ISP who says they are mod_ssl (Apache) certs,
 so... is the reason for this the wrong cert type?

What does the first line of the cert look like?

Does the user Squid runs as have permission to read the cert?

Which Squid version?

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] FW: How to improve integratin of LDAP authentication

2008-06-11 Thread Jevos, Peter

Hi, 

I'd like to ask you one question.
I have ldap authentication against AD that works perfectly.
My config is:
auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -R -b
dc=x, dc=x -D cn=x,ou=x,ou=x,dc=x,dc=x,dc=x -w x -f
sAMAccountName=%s -h 10.0.0.1 -p 3268

When I run it, a login window appears for the user to enter login
credentials. That's fine and it works.
My question is: is it possible to hand over these credentials from the
MS Windows login automatically ( like domainname\user )?
The reason is to avoid the interruption of the login window. So Squid
would probably have to somehow dig these credentials out of the system.

Is it actually possible?

Thx

pet




Re: [squid-users] Moving from windows to linux Port issues and OWA

2008-06-11 Thread Henrik Nordstrom
On tis, 2008-06-10 at 23:04 +0100, Dan Holliday wrote:
 Problem 1
 
 I currently run Outlook Web Access, although I plan on moving over to
 RoundCube in the future. When I visit the OWA site webmail.test.co.uk it
 looks as if it's going to work - it starts to load the frames, but then
 falls over. When I look at the source I see the line
 
 BASE href=http://webmail.test.co.uk:81/exchange/webmaster/

Yes. Exchange OWA needs the external port to be the same as the internal
port. The only exception is if using an SSL frontend then 443 may be
used externally even if 80 is used internally.

 Somehow the port number 81 has got into the code and been sent back to
 the client. This is doing my head in; I can't figure out how to get
 round it.

You can't. But you can make Squid listen on port 81 as well, or change
Exchange to use port 80.


 Problem 2
 
 I run Logitech SqueezeCenter on my Windows PC. This runs on port 9000,
 as it has its own webserver, and on the Ubuntu PC I run Webmin, which
 runs on port 1 as it also has its own webserver. What I want to do
 is configure Squid so that if I browse to squeezebox.test.co.uk Squid
 goes to 192.168.0.1:9000; if I browse to webmin.test.co.uk Squid goes to
 192.168.0.2:1000; if I browse to windows.test.co.uk Squid goes to
 192.168.0.1:81; and if I browse to ubuntu.test.co.uk Squid goes to
 192.168.0.2:81 - all while the client PC keeps using port 80.


That's multiple cache_peer lines (one per web server port) and matching
cache_peer_access lines telling Squid what may be sent to each.

But beware of the port problem mentioned above, which depending on the
server may also be seen in the host component...

Remapping URLs to other URLs in the proxy is very likely to cause
problems. The best is if the requested URL can be forwarded as-is to the
selected server.

Note: You can select which server to forward to based on anything in the
URL, and I really mean anything.  Doing it based on host name is the
simplest, but anything works.
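(A sketch of the peer-per-port layout described above, using the hostnames, addresses, and ports from the question; the name= labels are illustrative:)

```
cache_peer 192.168.0.1 parent 9000 0 no-query originserver name=squeeze
cache_peer 192.168.0.2 parent 1000 0 no-query originserver name=webmin

acl to_squeeze dstdomain squeezebox.test.co.uk
acl to_webmin  dstdomain webmin.test.co.uk

cache_peer_access squeeze allow to_squeeze
cache_peer_access squeeze deny all
cache_peer_access webmin  allow to_webmin
cache_peer_access webmin  deny all
```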

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] not redirect some ips to proxy via wccp

2008-06-11 Thread Alexandre Correa
Hello !!

I'm playing with WCCP v2 and it's working fine, but I need WCCP to not
redirect some IP blocks to the proxy (ignore them) and let them go
direct.

The proxy runs on FreeBSD, and the proxy server isn't the gateway; the
gateway is another server.


Thank you.

Regards!
-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] FW: How to improve integratin of LDAP authentication

2008-06-11 Thread Luis Claudio Botelho - Chefe de Tecnologia e Redes

Hi Peter

We have this configuration here in my job.

My workstations don't ask for a login and password because they are 
integrated into the domain.


Only the workstations that don't belong to the domain ask for a 
user/password.


The question is: is your workstation connected to the domain? Have you 
configured Samba on your Linux server?


Regards!

Luis Claudio Botelho
Brazil

- Original Message - 
From: Jevos, Peter [EMAIL PROTECTED]

To: squid-users@squid-cache.org
Sent: Wednesday, June 11, 2008 8:39 AM
Subject: [squid-users] FW: How to improve integratin of LDAP authentication



Hi,

I'd like to ask you one question.
I have ldap authentication against AD that works perfectly.
My config is:
auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -R -b
dc=x, dc=x -D cn=x,ou=x,ou=x,dc=x,dc=x,dc=x -w x -f
sAMAccountName=%s -h 10.0.0.1 -p 3268

When I run it login window apperas to insert login credentials. And
that's fine and it works.
My question is: Is it possible to hand over this credentials from MS
Windows login credentials automatically ( like domainname\user ) ?
The reason is to avoid the interuption with login window. So probably
squid should be somehow dig out this credentials from the system

Is it actually possible ?

Thx

pet






RE: [squid-users] FW: How to improve integratin of LDAP authentication

2008-06-11 Thread Jevos, Peter

 -Original Message-
 From: Luis Claudio Botelho - Chefe de Tecnologia e Redes
 [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, June 11, 2008 2:20 PM
 To: Jevos, Peter; squid-users@squid-cache.org
 Subject: Re: [squid-users] FW: How to improve integratin of LDAP
 authentication
 
 Hi Peter
 
 We have this configuration here in my job.
 
 My workstations don't ask for a login and password because they are
 integrated into the domain.
 
 Only the workstations that don't belong to the domain ask for a
 user/password.
 
 The question is: is your workstation connected to the domain? Have you
 configured Samba on your Linux server?
 
 Regards!
 
 Luis Claudio Botelho
 Brazil
 

Thanks for your answer, Luis.
Of course our stations are connected to the domain.
I'm not using Samba yet ( but it's possible ).
But all I'd like to know is a brief outline of how it works ( or a
brief howto ).

Thx

pet

 
 
 Hi,
 
 I'd like to ask you one question.
 I have ldap authentication against AD that works perfectly.
 My config is:
 auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -R
-b
 dc=x, dc=x -D cn=x,ou=x,ou=x,dc=x,dc=x,dc=x -w x -f
 sAMAccountName=%s -h 10.0.0.1 -p 3268
 
 When I run it login window apperas to insert login credentials. And
 that's fine and it works.
 My question is: Is it possible to hand over this credentials from MS
 Windows login credentials automatically ( like domainname\user ) ?
 The reason is to avoid the interuption with login window. So probably
 squid should be somehow dig out this credentials from the system
 
 Is it actually possible ?
 
 Thx
 
 pet
 
 
 



Re: [squid-users] Problems Using squid 2.6 as a transparent web cache

2008-06-11 Thread Adrian Chadd

Firstly, the Squid defaults don't allow for very large files to be cached.
maximum_object_size defaults to 4 megabytes.
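(A sketch of raising that cap in squid.conf; the value is illustrative:)

```
# Allow objects up to 100 MB to be cached (the default is 4 MB).
maximum_object_size 100 MB
```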

Secondly, maybe the application and/or http server are not handling caching
logic correctly. Look at the request and response headers.



Adrian

On Wed, Jun 11, 2008, Donoso Gabilondo, Daniel wrote:
 Hello,
 I have a Linux application that uses HTTP resources (videos,
 images, ...). These resources are on another machine running an HTTP
 server (under Windows).
 
 The Linux application always downloads the resources. I installed and
 configured Squid on the Linux machine to cache these resources, but the
 application still always downloads them from the HTTP server. I don't
 know how to resolve the problem. I need some help, please.
 
 The linux ip address is: 192.168.240.23 and the windows with http server
 ip is: 192.168.233.158
 
 This is my squid.conf file content:
 
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access deny all
 icp_access allow all
 hierarchy_stoplist cgi-bin ?
 access_log /var/log/squid/access.log squid
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern .               0       20%     4320
 acl apache rep_header Server ^Apache
 broken_vary_encoding allow apache
 coredump_dir /var/spool/squid
 cache_dir ufs /var/spool/squid 700 32 512
 http_port 3128 transparent
 icp_port 0
 cache_peer localhost.home.nl parent 8080 0 default
 acl HOME dstdomain .home.nl
 always_direct allow all
 never_direct allow all
 
 
 I executed these commands:
 
 iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to
 192.168.240.23:3128
 iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
 --to-port 3128
 
 
 The cache.log content is this:
 
 2008/06/11 11:30:52| Starting Squid Cache version 2.6.STABLE19 for
 i386-redhat-linux-gnu...
 2008/06/11 11:30:52| Process ID 8617
 2008/06/11 11:30:52| With 1024 file descriptors available
 2008/06/11 11:30:52| Using epoll for the IO loop
 2008/06/11 11:30:52| ipcacheAddEntryFromHosts: Bad IP address 'tele1'
 2008/06/11 11:30:52| ipcacheAddEntryFromHosts: Bad IP address 'svc1'
 2008/06/11 11:30:52| DNS Socket created at 0.0.0.0, port 42897, FD 6
 2008/06/11 11:30:52| Adding nameserver 192.168.202.11 from
 /etc/resolv.conf
 2008/06/11 11:30:52| Adding nameserver 192.168.202.13 from
 /etc/resolv.conf
 2008/06/11 11:30:52| User-Agent logging is disabled.
 2008/06/11 11:30:52| Referer logging is disabled.
 2008/06/11 11:30:52| Unlinkd pipe opened on FD 11
 2008/06/11 11:30:52| Swap maxSize 716800 KB, estimated 55138 objects
 2008/06/11 11:30:52| Target number of buckets: 2756
 2008/06/11 11:30:52| Using 8192 Store buckets
 2008/06/11 11:30:52| Max Mem  size: 8192 KB
 2008/06/11 11:30:52| Max Swap size: 716800 KB
 2008/06/11 11:30:52| Local cache digest enabled; rebuild/rewrite every
 3600/3600 sec
 2008/06/11 11:30:52| Rebuilding storage in /var/spool/squid (CLEAN)
 2008/06/11 11:30:52| Using Least Load store dir selection
 2008/06/11 11:30:52| Set Current Directory to /var/spool/squid
 2008/06/11 11:30:52| Loaded Icons.
 2008/06/11 11:30:53| Accepting transparently proxied HTTP connections at
 0.0.0.0, port 3128, FD 13.
 2008/06/11 11:30:53| WCCP Disabled.
 2008/06/11 11:30:53| Ready to serve requests.
 2008/06/11 11:30:53| Configuring Parent localhost.home.nl/8080/0
 2008/06/11 11:30:53| Done reading /var/spool/squid swaplog (0 entries)
 2008/06/11 11:30:53| Finished rebuilding storage from disk.
 2008/06/11 11:30:53| 0 Entries scanned
 2008/06/11 11:30:53| 0 Invalid entries.
 2008/06/11 11:30:53| 0 With invalid flags.
 2008/06/11 11:30:53| 0 Objects loaded.
 2008/06/11 11:30:53| 0 Objects expired.
 2008/06/11 11:30:53| 0 Objects cancelled.
 2008/06/11 11:30:53| 0 Duplicate URLs purged.
 2008/06/11 11:30:53| 0 Swapfile clashes avoided.
 2008/06/11 11:30:53|   Took 0.3 seconds (   0.0 objects/sec).
 2008/06/11 11:30:53| Beginning Validation Procedure
 2008/06/11 11:30:53|   Completed Validation Procedure
 2008/06/11 11:30:53|   Validated 0 Entries
 2008/06/11 11:30:53|   

Re: [squid-users] wccp transparent ??? (like tproxy)

2008-06-11 Thread Adrian Chadd
WCCPv2 is just getting traffic to the box. You then use something like TPROXY
to do what you want with it.

TPROXY + WCCPv2 work just fine together.



adrian

On Wed, Jun 11, 2008, Alexandre Correa wrote:
 Hello..
 
 Is WCCP (v2) transparent like TPROXY? With WCCP running, does the
 destination web server see the IP of the client or of the proxy?
 (The client has a routable/valid IP address.)
 
 
 Regards!
 
 -- 
 
 Sds.
 Alexandre J. Correa
 Onda Internet / OPinguim.net
 http://www.ondainternet.com.br
 http://www.opinguim.net

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Re: squid-cache.org

2008-06-11 Thread Amos Jeffries

Monah Baki wrote:

Forget it :)



Nah. Let's fix it... Done.
Thanks.

Amos




On Jun 11, 2008, at 6:28 AM, Monah Baki wrote:

Out of curiosity: the download section says version 2.7 Latest 
Release Stable 1, but when you click on the 2.7 link it says Stable 2. 
Which is it?



Thanks




BSD Networking, Microsoft Notworking





BSD Networking, Microsoft Notworking






--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


Re: [squid-users] FW: How to improve integratin of LDAP authentication

2008-06-11 Thread Luis Claudio Botelho - Chefe de Tecnologia e Redes

Hi Peter again!

I have two scenarios here: machines connected to the domain, and the 
personal notebooks (from students and teachers - I work at a university).


The students gain access through wireless - but they have to authenticate. 
On the other hand, our machines don't need to authenticate to access the 
Internet - the logon credential is accepted by Squid. It's totally 
transparent to the user. All accesses are registered in the Squid logs - 
date/time/username/site...
The only way we found to do this was integrating the Linux server with 
Samba. We have 1,500 workstations, and this is the only way to register user 
access.


Hope it helps

Regards!

Luis - Brazil



- Original Message - 
From: Jevos, Peter [EMAIL PROTECTED]
To: Luis Claudio Botelho - Chefe de Tecnologia e Redes 
[EMAIL PROTECTED]; squid-users@squid-cache.org

Sent: Wednesday, June 11, 2008 9:23 AM
Subject: RE: [squid-users] FW: How to improve integratin of LDAP 
authentication





-Original Message-
From: Luis Claudio Botelho - Chefe de Tecnologia e Redes
[mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 11, 2008 2:20 PM
To: Jevos, Peter; squid-users@squid-cache.org
Subject: Re: [squid-users] FW: How to improve integratin of LDAP
authentication

Hi Peter

We have this configuration here in my job.

My workstations don't ask for a login and password because they are
integrated into the domain.

Only workstations that don't belong to the domain ask for a
username/password.

The question is: is your workstation connected to the domain? Have you
configured SAMBA in your Linux Server?

Regards!

Luis Claudio Botelho
Brazil



Thanks for your answer, Luis.
Of course our stations are connected to the domain.
I'm not using Samba yet (but it's possible).
But all I'd like to know is a brief outline of how it works (or a brief
howto).

Thx

pet




Hi,

I'd like to ask you one question.
I have ldap authentication against AD that works perfectly.
My config is:
auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -R

-b

dc=x, dc=x -D cn=x,ou=x,ou=x,dc=x,dc=x,dc=x -w x -f
sAMAccountName=%s -h 10.0.0.1 -p 3268

When I run it, a login window appears asking for login credentials. And
that's fine and it works.
My question is: is it possible to hand over these credentials from the MS
Windows login automatically (like domainname\user)?
The reason is to avoid the interruption of the login window. So probably
Squid should somehow dig these credentials out of the system.

Is it actually possible ?

Thx

pet









[squid-users] adding a parameter to a URL

2008-06-11 Thread Shaine

Dear Friends,

I have a big problem with adding a parameter to a URL which passes via Squid.
For that I am going to use a url_rewrite program. I have spent a lot of time
on Squid URL rewriting, but with no success.

Could you please tell me, to get to my point, what are the minimum
requirements to be satisfied?

url_rewrite_program
url_rewrite_children
url_rewrite_concurrency
url_rewrite_host_header on|off 
url_rewrite_access allow|deny acl ...

What should a URL rewrite program look like? Can somebody give me a simple
example?

Many Thanks
Shaine.
-- 
View this message in context: 
http://www.nabble.com/adding-a-parameter-to-a-URL-tp17776816p17776816.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Web Usage Statistics by Client IP

2008-06-11 Thread Richard Chapman

Hi

I am new to Squid - but found it very easy to get going. I am running 
Squid 2.6 on CentOS 5.1 Linux, and it works brilliantly.


I was hoping to be able to track down the Bandwidth Usage Stats for 
individual client machines - to try to find out where all our bandwidth 
is going. I have found the Cache Manager Statistics Reports - but 
haven't found one with this info broken down by Client.
Is it there somewhere in one of the reports - or do I need some 
additional reporting tool?


Thanks for the help.

Richard.





Re: [squid-users] Problems Using squid 2.6 as a transparent web cache

2008-06-11 Thread Amos Jeffries

Donoso Gabilondo, Daniel wrote:

Hello,
I have an application on Linux that uses HTTP resources (videos,
images...). These resources are on another machine with an HTTP server
running (under Windows).

The Linux application always downloads the resources. I installed and
configured Squid on the Linux machine to cache these resources, but the
Linux application always downloads them from the HTTP server. I don't
know how I can resolve the problem. I need some help, please.


I suspect you are trying to do some sort of web mashup involving Squid?
I've found the best way to do those is to have Squid as the public 
domain gateway and do the app-linking/routing in the Squid config.


Anyway on to your various problems



The linux ip address is: 192.168.240.23 and the windows with http server
ip is: 192.168.233.158

This is my squid.conf file content:

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all


So none of the clients are allowed to make requests?
I'd expect to see a control saying the intercepted network has access 
through.

 acl localnet src 192.168.0.0/16
 http_access deny !localnet

and drop the deny all down a bit


icp_access allow all


allow all with no port configured? looks like you can kill this.


hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:   1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .   0   20% 4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
coredump_dir /var/spool/squid
cache_dir ufs /var/spool/squid 700 32 512
http_port 3128 transparent
icp_port 0



cache_peer  localhost.home.nl parent 8080 0 default
acl HOME dstdomain .home.nl



always_direct allow all
never_direct allow all


Those lines contradict each other 'everything MUST go direct + nothing 
EVER allowed direct'.


You want just:
  never_direct allow HOME
  never_direct deny all
  cache_peer_access localhost.home.nl allow HOME
  cache_peer_access localhost.home.nl deny all
  http_access allow HOME

 .. the deny I mentioned dropping down goes about here. AFTER the peer 
access config.





I executed these commands:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to
192.168.240.23:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
--to-port 3128


Okay so far. What about intercepting the requests clients make directly 
to your web app?
 Since the app knows it's running on port 8080, it will tell the clients 
that in its URLs; and since the 'clients' do not know about Squid, they 
will not ask for those objects over port 80.





The cache.log content is this:

2008/06/11 11:30:52| Starting Squid Cache version 2.6.STABLE19 for
i386-redhat-linux-gnu...
2008/06/11 11:30:52| Process ID 8617
2008/06/11 11:30:52| With 1024 file descriptors available
2008/06/11 11:30:52| Using epoll for the IO loop
2008/06/11 11:30:52| ipcacheAddEntryFromHosts: Bad IP address 'tele1'
2008/06/11 11:30:52| ipcacheAddEntryFromHosts: Bad IP address 'svc1'


Your hosts file has corrupt content.

Apart from all that, squid looks to be running fine.


Amos
--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


[squid-users] adding Header values

2008-06-11 Thread Shaine

Hi 

For each request that passes through Squid, I want to add another HTTP
header. How can I do that? Please tell me what configuration has to be
done in Squid, and do I have to write any external program?

Thank you
Shaine
-- 
View this message in context: 
http://www.nabble.com/adding-Header-values-tp1443p1443.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Setting up squid for web application testing

2008-06-11 Thread Tom Evans

On Wed, 2008-06-11 at 13:15 +1200, Amos Jeffries wrote:
 
 You need a cache_peer for each unique source (the app server and the 
 upstream proxy.)
 Chris has already pointed you at cache_peer_access. That with a few 
 dstdomain ACL can route the app requests to the app peer and the rest at 
 the parent proxy.
 
 Using a cache_peer for the app server drops any need for special DNS or 
 hosts file config. Everything happens at one place inside the squid.conf.
 
 
 Amos

Thanks Amos + Chris!

I didn't like putting in fake DNS entries anyway, so this way is much
cleaner. For the archives, I have an 'edge' squid proxy, which can access
the internet, and an internal Apache reverse proxy serving versions of
our public websites for testing. I added an internal squid proxy, which
sends requests for the 'testing' versions of the websites to the
internal Apache reverse proxy, and requests for other sites onto the
edge squid proxy. 

For the archive, the configuration is surprisingly simple:

  acl tested_sites dstdomain www.foo.com
  acl tested_sites dstdomain svc.foo.com
  (etc)

  cache_peer edge-proxy.internal parent 3128 0 proxy-only default
  cache_peer apache-reverse-proxy.internal parent 80 0 

  cache_peer_access edge-proxy.internal deny tested_sites
  cache_peer_access apache-reverse-proxy.internal deny !tested_sites

Thanks again

Tom


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] BitTorrent

2008-06-11 Thread Tuc at T-B-O-H.NET
 On tis, 2008-06-10 at 09:08 -0400, Tuc at T-B-O-H.NET wrote:
  Hi,
 
  There seemed to have been some discussion back in October
  about BitTorrent. I just noticed that I had :
 
  2008/06/10 03:02:39| parseHttpRequest: Requestheader contains NULL characters
  2008/06/10 03:02:39| parseHttpRequest: Unsupported method '^SBitTorrent'
  2008/06/10 03:02:39| clientReadRequest: FD 27 (192.168.3.15:64673) Invalid Request
 
 Are you doing transparent interception of port 80?
 
 Some torrents run on port 80 without using HTTP to bypass stupid port
 based firewalls...
 
 Regards
 Henrik
 
WCCP2 and transparent, yes sir.

The site actually has no locally installed firewalls. The site
is double NAT'd though (And NAT'd/tunneled for Squid)

Tuc


[squid-users] Squid_kerb_auth problem after long login times.

2008-06-11 Thread Plant, Dean
Testing squid-2.6.STABLE20 on CentOS 5 with WinXP clients that are part
of an AD domain.

I have been testing the Kerberos authentication and have noticed that
after a few days I can no longer use the proxy. My Kerberos tickets are
valid on the proxy and on the client and I can access windows network
resources normally. If I log in to a different machine I can use the proxy
so all seems well with the proxy configuration. If I logout of the
affected machine and then login again proxy access is restored.

I have tested this with a few other users who have been logged in for
over a week with the same results. All were denied access until logging
out and in again.

Time is correct on all machines.

Any ideas for the best way to debug the Kerberos handshake?

Thanks in advance.

Dean.



[squid-users] Squid auto restart?

2008-06-11 Thread howard chen
Hi

I've been playing around with Squid for a few days, and it seems the
server occasionally restarts automatically.

E.g.

Squid Object Cache: Version 3.HEAD-20080530
Start Time: Wed, 11 Jun 2008 14:20:56 GMT
Current Time:   Wed, 11 Jun 2008 14:59:18 GMT


Definitely nobody restarted the server at 14:20:56 GMT.

Or it is normal?

Thanks.


Re: [squid-users] squidclient -m PURGE problem...

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 13:47 +0900, Seonkyu Park wrote:
 'squidclient  -p 80 -h 1.1.1.1 http://a.b.c/x.jpg'
 
 I got the result, cache object(x.jpg) successfully deleted.
 
 And then  
 
 'wget http://a.b.c/x.jpg'
 
 I think that Squid will get fresh x.jpg  from origin server.
 But Squid return  503 error.

Squid never returns 503 on its own for HTTP requests. Sounds like your
web server failed. (Squid uses 504.)

This is most likely not related to the purge, except that the cache may
have been hiding the server problem for some time and you didn't notice
until now when the object was removed from the cache.

Regards
Henrik




Re: [squid-users] How does weighted-round-robin work?

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 19:27 +0800, Roy M. wrote:

 So I hope the one with more CPU power (it can be configured by me
 manually) can share more requests; what would be the best setup
 for this?

round-robin weight=X

assigns requests in proportion to the weight assigned to each server.

a weight=1, b weight=2, c weight=1, sends 25% (1/(1+2+1)) of the
requests to a, 50% to b and 25% to c.
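
Henrik's arithmetic above maps directly onto cache_peer lines. A minimal
sketch, assuming three hypothetical parent proxies a, b and c on port 3128:

```
cache_peer a.example.com parent 3128 0 round-robin weight=1
cache_peer b.example.com parent 3128 0 round-robin weight=2
cache_peer c.example.com parent 3128 0 round-robin weight=1
```

With these weights, b should receive roughly twice as many requests as
either a or c.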

Regards
Henrik




Re: [squid-users] TCP_SWAPFAIL_MISS

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 08:23 +0200, Stephan Viljoen wrote:
 Hi There,
 
 I upgraded my squid-2.6.stable19 to squid-2.7.stable2 and noticed some weird 
 messages in my access log file which seems to occur quite frequently and was 
 wondering what it means or whether it's anything to be concerned about?

Anything in cache.log?

Regards
Henrik




Re: [squid-users] Forwarding NTLM to BasicAuthentication

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 02:29 -0700, a.s.d wrote:

 My idea is to build an additional proxy (Squid) which can take authorization
 data from the Windows client via NTLM and forward it to the UTM.

Squid can do this, but only by sending a fake password. It does not have
access to the user's actual password when using NTLM (only GINA on the
client workstation has knowledge of the user's actual password when
using NTLM; not even the domain controller knows).

See the login= cache_peer option for the available choices on how to
forward the authenticated user name to peer proxies.

Regards
Henrik




Re: [squid-users] reverse proxy accelerator, cache persistence

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 11:50 +0200, Sylvain Viart wrote:

 I would like my squid proxy to keep serving its cached copy of a 
 document, when the origin server is unreachable.

It normally does this by default, provided the document is allowed to be
cached..

Regrds
Henrik




Re: [squid-users] How to improve integratin of LDAP authentication

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 13:26 +0200, Jevos, Peter wrote:
 When I run it, a login window appears asking for login credentials. And
 that's fine and it works.
 My question is: is it possible to hand over these credentials from the MS
 Windows login (like domainname\user)?
 The reason is to avoid the interruption of the login window

Yes, by using NTLM authentication.
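
For reference, a minimal squid.conf sketch using Samba's ntlm_auth helper
(the helper path is an assumption - check where your distribution installs
it, and the Squid box must be joined to the domain with winbindd running):

```
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
```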

Regards
Henrik





Re: [squid-users] not redirect some ips to proxy via wccp

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 09:04 -0300, Alexandre Correa wrote:

 I'm playing with WCCP v2 and it's working fine, but I need WCCP
 to not redirect some IP blocks to the proxy (ignore them) and permit
 them to go direct.

This is done by acl lists in the router.

Depending on your setup it might also be possible to adjust the
firewall rules on your Squid server to allow direct forwarding of the
traffic in question.

 The proxy runs on FreeBSD, and the proxy server isn't the gateway; the
 gateway is another server!

When using WCCP the boundaries are a bit diffuse, as the router delegates
traffic to the WCCP members...

Regards
Henrik




Re: [squid-users] adding a parameter to a URL

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 05:51 -0700, Shaine wrote:

 I have a big problem with adding a parameter to a URL which passes via Squid.
 For that I am going to use a url_rewrite program. I have spent a lot of time
 on Squid URL rewriting, but with no success.
 
 Could you please tell me, to get to my point, what are the minimum
 requirements to be satisfied?

url_rewrite_program is the only required directive.

It's recommended to also use url_rewrite_concurrency, but you must then
adjust your helper to understand the concurrent version of the helper
protocol (a request channel number is inserted in front of helper
requests/responses).

If you don't use url_rewrite_concurrency then you most likely need to
tune url_rewrite_children a bit depending on your load, but the default
of 5 is more than sufficient for testing.
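
To illustrate the concurrent protocol Henrik describes, here is a minimal
helper sketch (shown in Python as an illustration; the appended
"source=proxy" parameter is a purely hypothetical rewrite). Each input
line starts with the channel number, followed by the URL and the other
request details; the reply echoes the channel number with the rewritten URL:

```python
import sys

def rewrite(line):
    # With url_rewrite_concurrency, each request line begins with a
    # channel ID; the URL is the second field.
    parts = line.split()
    channel, url = parts[0], parts[1]
    # Hypothetical rewrite: append a parameter to every URL.
    sep = '&' if '?' in url else '?'
    return "%s %s%ssource=proxy" % (channel, url, sep)

if __name__ == '__main__':
    for line in sys.stdin:
        sys.stdout.write(rewrite(line.strip()) + '\n')
        sys.stdout.flush()  # Squid expects unbuffered replies

```

This is a sketch, not a drop-in redirector; check it against your Squid
version's helper protocol before use.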

Regards
Henrik




Re: [squid-users] Web Usage Statistics by Client IP

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 21:08 +0800, Richard Chapman wrote:

 I was hoping to be able to track down the Bandwidth Usage Stats for 
 individual client machines - to try to find out where all our bandwidth 
 is going. I have found the Cache Manager Statistics Reports - but 
 haven't found one with this info broken down by Client.
 Is it there somewhere in one of the report - or do I need some 
 additional reporting tool?

I think this is only available via SNMP as part of the cacheClientTable
in the Squid MIB.

snmpwalk -Cc -Os -c public -m $PWD/mib.txt -v 1 localhost:3401 cacheClientTable

Unfortunately there is no information on how long any given client has
been using the proxy so you have to collect two samples and compare the
difference.

Regards
Henrik




Re: [squid-users] reverse proxy accelerator, cache persistence

2008-06-11 Thread Sylvain Viart

Hi Henrik,

Henrik Nordstrom a écrit :

On ons, 2008-06-11 at 11:50 +0200, Sylvain Viart wrote:
  
I would like my squid proxy to keep serving its cached copy of a 
document, when the origin server is unreachable.



It normally does this by default, provided the document is allowed to be
cached..
  

Oh, interesting...

So I've seen the problem elsewhere, thanks.

I'm going to do some more tests.
Maybe the request was for a missing document when the origin server was down.

Regards,
Sylvain.



Re: [squid-users] adding a parameter to a URL

2008-06-11 Thread Sylvain Viart

Hi Shaine,

Shaine a écrit :

I have a big problem with adding a parameter to a URL which passes via Squid.
For that I am going to use a url_rewrite program. I have spent a lot of time
on Squid URL rewriting, but with no success.


Could you please tell me, to get to my point, what are the minimum
requirements to be satisfied?
  
I haven't tested rewriting the query-string part of the URL, but it is 
available to the redirector (url_rewrite_program).


Here's sample input for the rewrite_program

0 
http://www.somedomain.com/thumb/100/3/b/2/7/3b279a6eab3d0a983d9tre.somedomain.com/messenger/messPing.php 
12.34.56.78/- - POST -
0 
http://subdom.somedomain.com/thumb/55/3/c/3/6/3c36046ed06c78b2b65627f660be6220.jpg 
12.34.56.78/- - GET -
0 
http://www.somedomain.com/thumb/100/3/6/8/4/3684949288972604fafdb167ffc214d5.jpg 
12.34.56.78/- - GET -
0 
http://www.somedomain.com/thumb/100/7/a/4/1/7a4113fd5fba8ec93fa6bf82a6c993be.jpg 
12.34.56.78/- - GET -
0 
http://www..somedomain.com/thumb/100/4/3/d/f/43df2ca304f508557294d3a835a6fd29.jpg 
12.34.56.78/- - GET -


The digit in the first position is only present when 
url_rewrite_concurrency is used; see


The thread : url_rewrite_concurrency singlethreaded redirector performance?

http://www.mail-archive.com/squid-users@squid-cache.org/msg49897.html


url_rewrite_program
url_rewrite_children
url_rewrite_concurrency
url_rewrite_host_header on|off 
url_rewrite_access allow|deny acl ...
  


I use :
url_rewrite_program /etc/squid/redirector.pl
url_rewrite_children 100
url_rewrite_concurrency 50
url_rewrite_host_header off


which means :

100 processes spawned (busy proxy)
url_rewrite_concurrency 50 means Squid can pass up to 50 URLs to the 
program, each tagged with a channel number


url_rewrite_host_header off means that the redirector rewrites the URL, but 
Squid keeps the original Host header, which is useful in accelerator 
(surrogate) mode. See the doc to be sure.

What should a URL rewrite program look like? Can somebody give me a simple
example?

Simple perl program:

# no buffered output, auto flush
$|=1;

while (<STDIN>)
{
  s#http://something/#http://somethingelse/#;
  print;
}


A bit of a quick answer - hope that helps.


Regards,
Sylvain.







RE: [squid-users] How to not cache a site?

2008-06-11 Thread Jerome Yanga
It took me some time to find this email until a friend pointed it out.
You brought up the point that we have been trying to solve by not
caching the site.

Here is the full story.  There is a site behind our reverse proxy that
keeps on getting funky due to missing icons and some pages that do not
follow the formatting.  Looking into this issue, we realized that even
if we configured our HTTP headers not to cache this site, I still find
instances of that site in the cache during purging.  This is what
started this post.

We have compiled our Apache to use mod_auth_session to assist in
security of the site.

What you have provided below seems to have pointed us to resolving this
issue.  

The redirection just has a Cache-Control: max-age=0, which allows the

cache to store the response, and just requires that it be revalidated 
(which is done as evidenced by the TCP_REFRESH_HIT in the Squid log).

It seems that when the responses get stored, the authentication gets
funky and in effect some objects referenced in the page cannot be
accessed.  We will modify our mod_auth_session code to avoid this
condition.  If neither the recoding of mod_auth_session nor adding the
cache deny directives works, I will update this post.

Thank you so much, Chris.

Regards,
Jerome



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 10, 2008 1:12 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] How to not cache a site?

Jerome Yanga wrote:
 Resending as I had received a failure notice message.

 I do not think that the refresh_pattern is even setup as they are all
 commented out.

 # grep refresh_pattern /etc/squid/squid.conf
 # refresh_pattern regex min percent max
 #refresh_pattern -i \.js$   0   0%  1
 #refresh_pattern -i \.css$  0   10% 30
 #refresh_pattern .  0   20% 4320

 Attached is a zipped http header log captured using Live HTTP Headers.

 Regards,
 Jerome
   

Sample squid log entry from the zip file (without cookies) for
reference:


TCP_REFRESH_HIT:FIRST_UP_PARENT 10.11.12.13 10.10.10.10 - - 
[06/Jun/2008:21:42:52 +] GET 
http://site_address.com/help/chr_ind_on.gif HTTP/1.1 302 830 
http://site_address.com/help/whskin_tbars.htm; Mozilla/5.0 (Windows; 
U; Windows NT 5.1; en-US; rv:1.8.1.14) Gecko/20080404 Firefox/2.0.0.14


There were no associated HTTP headers for this object 
(http://site_address.com/help/chr_ind_on.gif)*, but here is another 
request that also resulted in a 302 (Moved Temporarily):


GET /help/chr_back.gif HTTP/1.1
Host: site_address.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.14)

Gecko/20080404 Firefox/2.0.0.14
Accept: image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://site_address.com/help/whskin_tbars.htm
Cookie: [removed]

HTTP/1.x 302 Moved Temporarily
Date: Thu, 05 Jun 2008 23:40:54 GMT
Location: 
http://site_address.com/gateway/index.cfm?fa=loginreturnURL=http%3A%2F%
2Fsite_address%2Ecom%2Fhelp%2FFchr%5Fback%2Egif
Cache-Control: max-age=0
Expires: Thu, 05 Jun 2008 23:40:54 GMT
Content-Length: 422
Content-Type: text/html; charset=iso-8859-1
Connection: keep-alive


The redirection just has a Cache-Control: max-age=0, which allows the 
cache to store the response, and just requires that it be revalidated 
(which is done as evidenced by the TCP_REFRESH_HIT in the Squid log).

So, I'm still not seeing anything being cached against the server's 
request.  Try tailing the access log and grepping for " 200 " and "HIT"** 
(note the spaces on either end of the 200).  That should show any 
objects (as opposed to redirects or errors) that are served from cache.

Chris

* The other URL (http://site_address.com/help/whskin_tbars.htm) is the 
referrer.
** tail -f /cache/logs/access.log | egrep "10.10.10.10.* 200 .*HIT"






RE: [squid-users] Inbound and Outbound proxy on same machine

2008-06-11 Thread Michael St. Laurent
 Michael St. Laurent wrote:
  I'm trying to run an Inbound and Outbound proxy on the same machine.
  The inbound is to serve OWA pages and I'm following the 
 instructions in
  the Wiki
  
 (http://wiki.squid-cache.org/ConfigExamples/SquidAndOutlookWebAccess).
  If I try to start a separate process for the Inbound it 
 complains that
  Squid is already running.  If I try to merge the Inbound 
 config into the
  same file as the Outbound config then the Outbound proxy 
 stops working
  (browser gets error:  Forwarding Denied - This cache will 
 not forward
  your request because it is trying to enforce a sibling 
 relationship.)
 
 All current versions of Squid permit multiple http_port 
 entries, and can 
 be configured as multi-mode proxies.
 
 All you need to do is place the OWA config at the top of squid.conf, 
 drop the final http_access deny all from the OWA demo 
 settings, then 
 follow them up with a second section for the general outbound proxy 
 settings.
 

I moved the config lines to the top (after a few comments but before any
other configuration lines) but my outbound connections still get the
Forwarding Denied error when I have the lines uncommented.  I'm using
Squid 2.6.STABLE6 on a CentOS-5 system.

Here are the config lines which I'm adding (note that I'm leaving the
deny lines commented out):

# acceleration mode for inbound proxy

https_port pub-ip:443 cert=/etc/pki/tls/certs/squid-new.pem
defaultsite=owa-FQDN

cache_peer owa-ip parent 80 0 no-query originserver login=PASS
front-end-https=on name=owa-FQDN

acl OWA dstdomain owa-FQDN
cache_peer_access owa-FQDN allow OWA
never_direct allow OWA

# lock down access to only query the OWA server!
http_access allow OWA
#http_access deny all
miss_access allow OWA
#miss_access deny all


[squid-users] Searching squid logs for pornographic sites

2008-06-11 Thread Steven Engebretson
I am looking for a tool that will scan the access.log file for pornographic 
sites, and will report the specifics back.  We do not block access to any 
Internet sites, but need to monitor for objectionable content.

What I am doing now is just grepping for some keywords and dumping the output 
into a file.  I am then manually going through about 60,000 lines of log file 
following my grep.  99% of these are false positives.  Any help would be appreciated.
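
One way to cut down the false positives is to match whole words only. A
sketch, assuming Squid's native log format (client IP in field 3, URL in
field 7); the keywords are placeholders to replace with your own list:

```shell
#!/bin/sh
# Print "client-IP URL" for access.log lines that contain one of the
# keywords as a whole word (-w), case-insensitively (-i). Whole-word
# matching stops substrings inside unrelated words from triggering hits.
scan_access_log() {
    grep -i -w -E 'keyword1|keyword2' "$1" | awk '{print $3, $7}'
}
```

Piping the output through sort | uniq -c | sort -rn ranks clients and URLs
by hit count, which helps triage the remaining matches.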

Thank you all.


-Steven E.



Re: [squid-users] adding a parameter to a URL

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 18:02 +0200, Sylvain Viart wrote:

 url_rewrite_program /etc/squid/redirector.pl
 url_rewrite_children 100
 url_rewrite_concurrency 50
 url_rewrite_host_header off
 
 
 which means :
 
 100 process spawned (busy proxy)
 url_rewrite_concurrency 50, means squid can pass up to 50 URL to the 
 program using a counter

You would actually be better off with url_rewrite_children 1 and a much
higher concurrency level.

 url_rewrite_host_header off, means that redirector rewrites the URL, but 
 squid keep the original URL, useful in accelerator mode (surrogate), See 
 the doc, to be sure.

It's better to not rewrite the host component. Use cache_peer_access to
select the right backend server instead.
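
A sketch of that cache_peer_access approach (peer names and domains are
hypothetical):

```
cache_peer backend1.internal parent 80 0 originserver name=be1
cache_peer backend2.internal parent 80 0 originserver name=be2

acl site1 dstdomain www.example.com
acl site2 dstdomain static.example.com

cache_peer_access be1 allow site1
cache_peer_access be1 deny all
cache_peer_access be2 allow site2
cache_peer_access be2 deny all
```

Each request is routed to the backend whose ACL matches, and no URL
rewriting is needed.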

This is a directive likely to be removed in future releases as it's not
really needed, and can cause some often unexpected problems (but natural
once you look at it..)

Regards
Henrik




Re: [squid-users] Setting up squid for web application testing

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 14:52 +0100, Tom Evans wrote:
 For the archive, the configuration is surprisingly simple:
 
   acl tested_sites dstdomain www.foo.com
   acl tested_sites dstdomain svc.foo.com
   (etc)
 
   cache_peer edge-proxy.internal parent 3128 0 proxy-only default
   cache_peer apache-reverse-proxy.internal parent 80 0 
 
   cache_peer_access edge-proxy.internal deny tested_sites
   cache_peer_access apache-reverse-proxy.internal deny !tested_sites

You also need

never_direct allow all

but I guess you already have that...

REgards
Henrik




Re: [squid-users] Squid auto restart?

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 23:02 +0800, howard chen wrote:
 Hi
 
 I've been playing around with Squid for a few days, and it seems the
 server occasionally restarts automatically.

Probably it hits some bug. What is said in cache.log?

 Squid Object Cache: Version 3.HEAD-20080530

That's a development version. Fine for testing, but for production
please use 3.0.STABLE..

Regards
Henrik




Re: [squid-users] BitTorrent

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 09:56 -0400, Tuc at T-B-O-H.NET wrote:
 WCCP2 and transparent, yes sir.

Then this problem is more or less unavoidable. Caused by the
interception sending port 80 to an HTTP proxy. Triggered by people
abusing port 80 for non-HTTP traffic.

Regards
Henrik




RE: [squid-users] Inbound and Outbound proxy on same machine

2008-06-11 Thread Henrik Nordstrom
On ons, 2008-06-11 at 10:18 -0700, Michael St. Laurent wrote:

 https_port pub-ip:443 cert=/etc/pki/tls/certs/squid-new.pem
 defaultsite=owa-FQDN
 
 cache_peer owa-ip parent 80 0 no-query originserver login=PASS
 front-end-https=on name=owa-FQDN
 
 acl OWA dstdomain owa-FQDN
 cache_peer_access owa-FQDN allow OWA
 never_direct allow OWA

How is OWA defined? It should be

acl OWA dstdomain owa-FQDN

Regards
Henrik




Re: [squid-users] Re: squid_kerb_auth on mac os x

2008-06-11 Thread Henrik Nordstrom
On mån, 2008-06-09 at 10:02 -0700, Alex Morken wrote:

 I now believe the issue has to do with squid configuration.  I have  
 not been able to get any indication that it is even trying kerberos -  
 it is just using the basic auth method.  I am going to strip down my  
 squid config to the basics and see what I can get going on.

What do the headers returned by Squid say?

Enable log_mime_hdrs and they will get logged in access.log.

Regards
Henrik




[squid-users] Re: Squid_kerb_auth problem after long login times.

2008-06-11 Thread Markus Moeller
Can you use kerbtray on the client (it is available as part of the support 
tools or resource kit tools)? I suspect that your ticket has expired. The ticket 
will usually be renewed when you lock/unlock your screen or access a share. 
XP should also renew it when IE accesses a web server or proxy with Negotiate 
(although I have heard of some issues here).


Can you try locking and unlocking the screen instead of logging out and in?

Markus

BTW What does the squid logfile say when you use  squid_kerb_auth -d -i  ... 
?


Plant, Dean [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]

Testing squid-2.6.STABLE20 on CentOS 5 with WinXP clients that are part
of an AD domain.

I have been testing the Kerberos authentication and have noticed that
after a few days I can no longer use the proxy. My Kerberos tickets are
valid on the proxy and on the client and I can access windows network
resources normally. If I login to different machine I can use the proxy
so all seems well with the proxy configuration. If I logout of the
affected machine and then login again proxy access is restored.

I have tested this with a few other users who have been logged in for
over a week with the same results. All were denied access until logging
out and in again.

Time is correct on all machines.

Any ideas for the best way to debug the Kerberos handshake?

Thanks in advance.

Dean.





Re: [squid-users] FW: How to improve integratin of LDAP authentication

2008-06-11 Thread Chris Robertson

Jevos, Peter wrote:


Thanks for your answer Luis
Of course our stations are connected into the domain.
I'm not using samba yet ( but it's possible )
But all I'd like to know is a brief principle of how it works ( or a brief
howto )
  


http://wiki.squid-cache.org/ConfigExamples/WindowsAuthenticationNTLM


Thx

pet
  


Chris


Re: [squid-users] Web Usage Statistics by Client IP

2008-06-11 Thread Chris Robertson

Henrik Nordstrom wrote:

On ons, 2008-06-11 at 21:08 +0800, Richard Chapman wrote:

  
I was hoping to be able to track down the Bandwidth Usage Stats for 
individual client machines - to try to find out where all our bandwidth 
is going. I have found the Cache Manager Statistics Reports - but 
haven't found one with this info broken down by Client.
Is it there somewhere in one of the reports - or do I need some 
additional reporting tool?



I think this is only available via SNMP as part of the cacheClientTable
in the Squid MIB.

snmpwalk -Cc -Os -c public -m $PWD/mib.txt -v 1 localhost:3401 cacheClientTable

Unfortunately there is no information on how long any given client has
been using the proxy so you have to collect two samples and compare the
difference.
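For what it's worth, the two-sample comparison could be sketched like this; the input format (one "client-ip kilobytes" line per client, pre-extracted from the snmpwalk output) and the file names are assumptions, not something from the thread:

```shell
# client_delta OLD NEW -- subtract an earlier per-client sample from a later
# one.  Each file is assumed (hypothetically) to hold lines of
# "<client-ip> <kilobytes>", e.g. extracted from two snmpwalks of the
# cacheClientTable taken some interval apart.
client_delta() {
    awk '
        NR == FNR { base[$1] = $2; next }          # first file: baseline
        { printf "%s %d\n", $1, $2 - base[$1] }    # second file: delta
    ' "$1" "$2"
}
```

Clients absent from the first sample simply show their full counter (the baseline defaults to 0); counter wraps and restarts are not handled.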

Regards
Henrik
  


Alternatively, there are log parsing programs 
(http://www.squid-cache.org/Scripts/).  Popular ones on the list are 
Sarg and Calamaris.


Chris


Re: [squid-users] Searching squid logs for pornographic sites

2008-06-11 Thread Peter Albrecht
Hi Steven,

 I am looking for a tool that will scan the access.log file for 
 pornographic sites, and will report the specifics back.  We do not block 
 access to any Internet sites, but need to monitor for objectionable 
 content.   
 
 What I am doing now is just greping for some key words, and dumping the 
 output into a file.  I am manually going through about 60,000 lines of 
 log file, following my grep.  99% of these are false.  Any help would be 
 appreciated.   

I'm not sure if I got you right: Are you trying to identify unwanted sites 
from your access.log? Maybe the blacklists from SquidGuard can be of any 
help:

http://www.squidguard.org/blacklists.html

If you compare the sites from your access logfile with the blacklists, you 
should be able to figure out which are unwanted sites. You would need to 
write a script comparing the files and reporting any hits.
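A minimal sketch of such a comparison, assuming a one-domain-per-line blacklist file and the native access.log format (field 7 is the URL); the file names are placeholders:

```shell
# report_hits ACCESS_LOG BLACKLIST
# Extract the host part of each requested URL from a native-format Squid
# access.log (the URL is field 7) and print the lines whose host exactly
# matches a domain from a one-domain-per-line blacklist file.
report_hits() {
    awk -v listfile="$2" '
        BEGIN {
            # load the blacklist domains into an array
            while ((getline d < listfile) > 0) bad[d] = 1
        }
        {
            host = $7
            sub(/^[a-z]+:\/\//, "", host)   # strip the scheme, if any
            sub(/[\/:].*/, "", host)        # strip path and port
            if (host in bad) print $0
        }
    ' "$1"
}
```

This only matches hosts exactly; catching subdomains of blacklisted domains would need an extra suffix check.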

Regards,

Peter

-- 
Peter Albrecht  Tel: +49-(0)-89-287793-83
Open Source School GmbH Fax: +49-(0)-89-287555-63 
Amalienstraße 45 RG
80799 München   http://www.opensourceschool.de

HRB 172645 - Amtsgericht München
Geschäftsführer: Peter Albrecht, Dr. Markus Wirtz



Re: [squid-users] adding Header values

2008-06-11 Thread Chris Robertson

Shaine wrote:
Hi 


For each request that passes squid , I wanted to add another HTTP HEADER
value as a HEADER. How can i do that ? please tell me what are the
configurations has to be done in squid in and , do i have to write any
external program ?

Thank you
Shaine
  


I think you'll have to edit the source to do this.  Currently there is 
header_access (broken out to request_header_access and 
reply_header_access in Squid 3) and header_replace (which allows you to 
replace the value of a header denied with a header_access line), but 
nothing along the lines of header_add.
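For reference, a sketch of what the two existing directives can do together; the replacement value here is purely hypothetical, and note that this substitutes a header rather than adding a new one:

```
# squid.conf sketch (hypothetical value): strip the client's User-Agent
# and send a fixed one instead
header_access User-Agent deny all
header_replace User-Agent MyProxy/1.0
```

In Squid 3 the first line would be request_header_access instead.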


Chris


Re: [squid-users] Squid 3 as reverse-proxy with SSL [solved]

2008-06-11 Thread Maik Fuss

hi guys, the problem is solved!

it was an https_port ... in another config file without the cert/key params!

so.. if you use https_port and don't set a cert param, all the other certs don't 
work...

thx 4 help :)

Henrik Nordstrom schrieb:

On tis, 2008-06-10 at 22:14 +0200, Maik Fuss wrote:


the cert's are from a ISP who says that's a modssl (apache) cert, so...
is the reason for this the wrong cert-type?


What does the first line of the cert look like?

Does the user Squid is running as have permission to read the cert?

Which Squid version?

Regards
Henrik


RE: [squid-users] Inbound and Outbound proxy on same machine

2008-06-11 Thread Michael St. Laurent
  https_port pub-ip:443 cert=/etc/pki/tls/certs/squid-new.pem
  defaultsite=owa-FQDN
  
  cache_peer owa-ip parent 80 0 no-query originserver login=PASS
  front-end-https=on name=owa-FQDN
  
  acl OWA dstdomain owa-FQDN
  cache_peer_access owa-FQDN allow OWA
  never_direct allow OWA
 
 How is OWA defined? It should be
 
 acl OWA dstdomain owa-FQDN

The line reads

acl OWA dstdomain owa.hartwellcorp.com


Re: [squid-users] Searching squid logs for pornographic sites

2008-06-11 Thread julian julian

I suggest using a log analyzer like Webalizer or Sarg; these are a bit more 
complete for user behaviour analysis.

Julián

--- On Wed, 6/11/08, Steven Engebretson [EMAIL PROTECTED] wrote:

 From: Steven Engebretson [EMAIL PROTECTED]
 Subject: [squid-users] Searching squid logs for pornographic sites
 To: squid-users@squid-cache.org
 Date: Wednesday, June 11, 2008, 11:32 AM
 I am looking for a tool that will scan the access.log file
 for pornographic sites, and will report the specifics back.
  We do not block access to any Internet sites, but need to
 monitor for objectionable content.
 
 What I am doing now is just greping for some key words, and
 dumping the output into a file.  I am manually going through
 about 60,000 lines of log file, following my grep.  99% of
 these are false.  Any help would be appreciated.
 
 Thank you all.
 
 
 -Steven E.





Re: [squid-users] Searching squid logs for pornographic sites

2008-06-11 Thread Rob Asher
Here's something similar to what you're already doing except comparing to a 
file of badwords to look for in the URL's and then emailing you the results.

#!/bin/sh
# filter.sh
#
cd /path/to/filterscript
cat /var/log/squid/access.log | grep -if /path/to/filterscript/badwords > hits.out

/path/to/filterscript/wordfilter.gawk hits.out

cat /path/to/filterscript/word-report | /bin/mail -s "URL Filter Report" [EMAIL PROTECTED]

rm hits.out


#!/bin/gawk -f
# wordfilter.gawk

BEGIN {
print "URL Filter Report:" > "/path/to/filterscript/word-report"
print "--" >> "/path/to/filterscript/word-report"
sp = " - "
}

{
print strftime("%m-%d-%Y %H:%M:%S", $1), sp, $8 >> "/path/to/filterscript/word-report"
print $7 >> "/path/to/filterscript/word-report"
print "" >> "/path/to/filterscript/word-report"
}



You may need to adjust the columns printed in the awk script.  They're set for 
username instead of IPs.  Also, you'll need to make a 
/path/to/filterscript/badwords file with the words/regex you want to search 
for, one per line.  Someone with better regex skills could probably eliminate 
a lot of false hits with specific patterns in the badwords file.  I'm using 
this in addition to squidGuard and blacklists to catch URLs that were missed, 
so the output isn't nearly as large as what you're getting.

Rob



-
Rob Asher
Network Systems Technician
Paragould School District
(870)236-7744 Ext. 169


 Steven Engebretson [EMAIL PROTECTED] 6/11/2008 1:32 PM 
I am looking for a tool that will scan the access.log file for pornographic 
sites, and will report the specifics back.  We do not block access to any 
Internet sites, but need to monitor for objectionable content.

What I am doing now is just greping for some key words, and dumping the output 
into a file.  I am manually going through about 60,000 lines of log file, 
following my grep.  99% of these are false.  Any help would be appreciated.

Thank you all.


-Steven E.


-- 

This message has been scanned for viruses and
dangerous content by the Paragould School District
MailScanner, and is believed to be clean.






Re: [squid-users] Searching squid logs for pornographic sites

2008-06-11 Thread Jason

Look at these:

http://www.meadvillelibrary.org/os/osfiltering-ala/smutscript/
http://www.meadvillelibrary.org/os/osfiltering-ala/
http://meadvillelibrary.org/os/filtering/filtermaintenance.html

She wrote a script that searches logs for keywords and emails it to her.

Jason


Steven Engebretson wrote:

I am looking for a tool that will scan the access.log file for pornographic 
sites, and will report the specifics back.  We do not block access to any 
Internet sites, but need to monitor for objectionable content.

What I am doing now is just greping for some key words, and dumping the output 
into a file.  I am manually going through about 60,000 lines of log file, 
following my grep.  99% of these are false.  Any help would be appreciated.

Thank you all.


-Steven E.


--- AV  Spam Filtering by M+Guardian - Risk Free Email (TM) ---



  


Re: [squid-users] not redirect some ips to proxy via wccp

2008-06-11 Thread Alexandre Correa
:)

i forgot that cisco access-lists are parsed top-down.. i have to add the
dst hosts first and after them the ips to redirect to the proxy :)


thanks !!!

regards !

On Wed, Jun 11, 2008 at 12:22 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On ons, 2008-06-11 at 09:04 -0300, Alexandre Correa wrote:

 I'm playing with wccp v2 and it's working fine.. but i need wccp to
 not redirect some ip-blocks to the proxy (ignore them) and permit them to go
 direct 

 This is done by acl lists in the router.

 Depending on your setup it might also be possible by adjusting the
 firewall rules on your Squid server to allow direct forwarding of the
 traffic in question.

 proxy runs on freebsd.. and the proxy server isn't the gateway .. the gateway
 is another server !!

 When using WCCP the boundaries are a bit diffuse, as the router delegates
 traffic to the WCCP members...

 Regards
 Henrik




-- 

Sds.
Alexandre J. Correa
Onda Internet / OPinguim.net
http://www.ondainternet.com.br
http://www.opinguim.net


Re: [squid-users] BitTorrent

2008-06-11 Thread Tuc at T-B-O-H.NET
 On ons, 2008-06-11 at 09:56 -0400, Tuc at T-B-O-H.NET wrote:
  WCCP2 and transparent, yes sir.
 
 Then this problem is more or less unavoidable. Caused by the
 interception sending port 80 to an HTTP proxy. Triggered by people
 abusing port 80 for non-HTTP traffic.
 
 Regards
 Henrik
 
No worries. Just was trying to see if there had been any
other discussion/action/suggestions/etc. I'm having more issues
with fragmentation and swarming from the clients right now than I
am with Squid/WCCPv2 proxying it. :) 

Thanks, Tuc


[squid-users] Issues with Squid and authenticated sites

2008-06-11 Thread Henrique Machado
Good evening,

First time in the list, and I'm having a terrible issue with my squid.

Had 2.5STABLE12 running with no auth and recently upgraded to
3.0STABLE6 with auth against Windows DC.

The problem is: every time I try to access a website that asks
for a user and a password (some FTP sites and even some websites), I
don't receive the INPUT USERNAME AND PASSWORD box.
When I had no authentication method running in Squid, I'd get an error
message when trying to authenticate. Squid sent the FTP password
command and received the reply 'User anonymous cannot log in'
(this one is for FTP sites).
All around the world I have searched for an answer, and I always
received the same one: place the username and password in the URL.
OK, fine, that works, partially, because the FTP always opens as
read-only (and also the idea of having users' passwords in our log
files is against our security policy).
The same goes for the websites that require authentication (this
situation happens mostly when it's an authentication method from
Apache or IIS): no authentication box.

After the upgrade, when accessing sites/FTP that require
authentication, I keep getting an authentication box, but from my
proxy, not from the website/FTP.

Unfortunately Google couldn't help me out with this situation, and
since I had luck consulting the netfilter list for some issues
with iptables, I hope I'll get the same result coming to the squid list.

I thank everyone in advance for the attention.

Henrique


Re: [squid-users] Squid auto restart?

2008-06-11 Thread howard chen
Hi

On Thu, Jun 12, 2008 at 2:39 AM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On ons, 2008-06-11 at 23:02 +0800, howard chen wrote:
 Hi

 Just playing around with squid for a few days, and it seems the server
 occasionally restarts automatically

 Probably it hits some bug. What is said in cache.log?


It said:

..
..
2008/06/12 08:40:25| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:39:38| Detected REVIVED Parent: 192.168.11.31
2008/06/12 08:40:25| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:25| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:31| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:31| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:33| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:33| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:35| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:35| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:35| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:36| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:48| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:48| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:48| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:49| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:49| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:49| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:52| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:52| TCP connection to 192.168.11.31/80 failed
2008/06/12 08:40:52.006| WARNING: Closing client 201.114.23.131:3459
connection due to lifetime timeout
2008/06/12 08:40:52.006| assertion failed: comm.cc:339:
!fd_table[fd].flags.closing
2008/06/12 08:40:55| Starting Squid Cache version 3.HEAD-20080530 for
x86_64-unknown-linux-gnu...
2008/06/12 08:40:55| Process ID 3007




Howard.


Re: [squid-users] BitTorrent

2008-06-11 Thread Amos Jeffries
 On ons, 2008-06-11 at 09:56 -0400, Tuc at T-B-O-H.NET wrote:
  WCCP2 and transparent, yes sir.

 Then this problem is more or less unavoidable. Caused by the
 interception sending port 80 to an HTTP proxy. Triggered by people
 abusing port 80 for non-HTTP traffic.

 Regards
 Henrik

   No worries. Just was trying to see if there had been any
 other discussion/action/suggestions/etc. I'm having more issues
 with fragmentation and swarming from the clients right now than I
 am with Squid/WCCPv2 proxying it. :)

Ah, I got around most of the port-80 abuse issue by informing clients of
the proxy for this type of thing, allowing them to configure their apps to
use it as a gateway gets around a few issues.

(Great side-effect: Some P2P apps using a proxy send their internal IPs to
the server, so remote leeches can't figure out where they are, and other
local peers get a direct connection across.)

The real solution to all your P2P woes though is to implement IPv6. A
growing number of apps will use it anyway and tunnel around. The native
addressing scheme also allows P2P to more efficiently identify peers and
cause much less use of upstream bandwidth.

Amos




Re: [squid-users] Squid auto restart?

2008-06-11 Thread Amos Jeffries
 Hi

 On Thu, Jun 12, 2008 at 2:39 AM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
 On ons, 2008-06-11 at 23:02 +0800, howard chen wrote:
 Hi

 Just playing around with squid for a few days, and it seems the server
 occasionally restarts automatically

 Probably it hits some bug. What is said in cache.log?


 It said:

 ..
 ..
snip
 2008/06/12 08:40:52| TCP connection to 192.168.11.31/80 failed
 2008/06/12 08:40:52.006| WARNING: Closing client 201.114.23.131:3459
 connection due to lifetime timeout
 2008/06/12 08:40:52.006| assertion failed: comm.cc:339:
 !fd_table[fd].flags.closing
 2008/06/12 08:40:55| Starting Squid Cache version 3.HEAD-20080530 for
 x86_64-unknown-linux-gnu...
 2008/06/12 08:40:55| Process ID 3007


Looks a little bit like http://www.squid-cache.org/bugs/show_bug.cgi?id=2253
but is happening in a different piece of code.

Are you able to do two things:
 - repeat this bug with a current daily snapshot?
 - debug and produce a stack trace when those asserts happen?

Amos




Re: [squid-users] Re: squid-cache.org

2008-06-11 Thread Mark Nottingham
Might also be good to update the configuration guide link from 2.6 to  
2.7 in the left-hand column of the front page.


Cheers,


On 11/06/2008, at 10:34 PM, Amos Jeffries wrote:


Monah Baki wrote:

Forget it :)


Nah. Lets fix it... Done.
Thanks.

Amos


On Jun 11, 2008, at 6:28 AM, Monah Baki wrote:
Out of curiosity at the download section it says version 2.7  
Latest Release Stable 1, but when you click on the 2.7 link it  
says stable2, which is it?



Thanks




BSD Networking, Microsoft Notworking




BSD Networking, Microsoft Notworking



--
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Re: squid-cache.org

2008-06-11 Thread Amos Jeffries
 Might also be good to update the configuration guide link from 2.6 to
 2.7 in the left-hand column of the front page.


Yes, thanks. Done.

Amos

 Cheers,


 On 11/06/2008, at 10:34 PM, Amos Jeffries wrote:

 Monah Baki wrote:
 Forget it :)

 Nah. Lets fix it... Done.
 Thanks.

 Amos

 On Jun 11, 2008, at 6:28 AM, Monah Baki wrote:
 Out of curiosity at the download section it says version 2.7
 Latest Release Stable 1, but when you click on the 2.7 link it
 says stable2, which is it?


 Thanks




 BSD Networking, Microsoft Notworking



 BSD Networking, Microsoft Notworking


 --
 Please use Squid 2.7.STABLE1 or 3.0.STABLE6

 --
 Mark Nottingham   [EMAIL PROTECTED]







Re: [squid-users] wccp transparent ??? (like tproxy)

2008-06-11 Thread Visolve

Hello Alexandre,

WCCP (v2) is used to load-balance the traffic across squid servers. If you 
configure TProxy together with WCCP (v2), the www server will see the original 
client IP address.


Thanks,
-Visolve Squid Team
www.visolve.com/squid/


Alexandre Correa wrote:

Hello..

is wccp (v2) transparent like tproxy ? with wccp running, does the destination
www server see the ip of the client or of the proxy ? (the client has a
routable/valid ip address)


reagards !

  




Re: [squid-users] Web Usage Statistics by Client IP

2008-06-11 Thread Indunil Jayasooriya
Hi Richard,

Pls try sarg.

here is HOW to .

http://www.squid-cache.org/mail-archive/squid-users/200805/0172.html


On Wed, Jun 11, 2008 at 6:38 PM, Richard Chapman
[EMAIL PROTECTED] wrote:
 Hi

 I am new to Squid - but found it very easy to get going. I am running Squid
 2.6 on CentOS 5.1 Linux, and it works brilliantly.

 I was hoping to be able to track down the Bandwidth Usage Stats for
 individual client machines - to try to find out where all our bandwidth is
 going. I have found the Cache Manager Statistics Reports - but haven't found
 one with this info broken down by Client.
 Is it there somewhere in one of the reports - or do I need some additional
 reporting tool?

 Thanks for the help.

 Richard.







-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] Squid auto restart?

2008-06-11 Thread howard chen
Hi

On Thu, Jun 12, 2008 at 11:05 AM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Are you able to do two things:
  - repeat this bug with a current daily snapshot?
  - debug and produce a stack trace when those asserts happen?

 Amos



Is this bug (2253) patched? I am already using HEAD (5.30).

I can't see it in the change logs, i.e.
http://www.squid-cache.org/Versions/v3/HEAD/changesets/

So far this version is quite stable and running in a fairly high-traffic
environment, but it sometimes restarts itself a few times per day
(hopefully restarting, not dying...)


Anyway, I will try to upgrade and see.

Thanks.

Howard


[squid-users] cached MS updates !

2008-06-11 Thread pokeman

hi there 
With reference to the following article 
http://www.nabble.com/Re%3A-YouTube-and-other-streaming-media-%28caching%29-p17738020.html
i am going to cache windowsupdate objects. here are the changes to the store
script, squid.conf and the output log 
Squid.conf
acl store_rewrite_list url_regex ^http://(.*?)/windowsupdate\?

refresh_pattern windowsupdate.com/.*\.(cab|exe|dll) 10080 90% 99
ignore-no-cache override-expire ignore-private
refresh_pattern download.microsoft.com/.*\.(cab|exe|dll) 10080 90% 99
ignore-no-cache override-expire ignore-private
refresh_pattern au.download.windowsupdate.com/.*\.(cab|exe|psf) 10080 90%
99 ignore-no-cache override-expire ignore-private
refresh_pattern ^http://sjl-v[0-9]+\.sjl\.youtube\.com 10080 90% 99
ignore-no-cache override-expire ignore-private
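Assuming this is Squid 2.7, the store script below would presumably also need to be wired in; a sketch (the script path is a placeholder):

```
# squid.conf sketch: hook up the store-URL rewriter (Squid 2.7)
storeurl_rewrite_program /etc/squid/store_url_rewrite.pl
storeurl_access allow store_rewrite_list
storeurl_access deny all
```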

#Store  script

#!/usr/bin/perl
$|=1;
while (<>) {
  @X = split;
  $url = $X[0];
  $url =~
[EMAIL PROTECTED]://(.*?)/get_video\?(.*)video_id=(.*?)[EMAIL 
PROTECTED]://videos.youtube.INTERNAL/ID=$3@;
  $url =~
[EMAIL PROTECTED]://(.*?)/get_video\?(.*)video_id=(.*?)[EMAIL 
PROTECTED]://videos.youtube.INTERNAL/ID=$3@;
  $url =~
[EMAIL PROTECTED]://(.*?)/videodownload\?(.*)docid=(.*?)[EMAIL 
PROTECTED]://videos.google.INTERNAL/ID=$3@;
  $url =~
[EMAIL PROTECTED]://(.*?)/videodownload\?(.*)docid=(.*?)[EMAIL 
PROTECTED]://videos.google.INTERNAL/ID=$3@;
  $url =~
[EMAIL PROTECTED]://(.*?)/update\?(.*)video_id=(.*?)[EMAIL 
PROTECTED]://au.download.windowsupdate.com.INTERNAL/ID=$3@;
  $url =~
[EMAIL PROTECTED]://(.*?)/update\?(.*)video_id=(.*?)[EMAIL 
PROTECTED]://au.download.windowsupdate.com.INTERNAL/ID=$3@;
  print "$url\n";
}

### output cache log 
1213248096.431606 192.168.0.5 TCP_MISS/206 15348 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/199.93.42.124 application/octet-stream
1213248098.070905 192.168.0.5 TCP_MISS/206 36487 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/199.93.42.124 multipart/byteranges
1213248099.996   1535 192.168.0.5 TCP_MISS/206 40838 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/8.12.137.30 multipart/byteranges
1213248101.372   1216 192.168.0.5 TCP_MISS/206 14687 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/209.84.7.123 application/octet-stream
1213248101.749202 192.168.0.5 TCP_MISS/200 375 HEAD
http://download.windowsupdate.com/v7/windowsupdate/redir/wuredir.cab?0806120622
- DIRECT/79.140.80.33 application/octet-stream
1213248102.091606 192.168.0.5 TCP_MISS/206 12929 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/209.84.7.123 application/octet-stream
1213248103.755962 192.168.0.5 TCP_MISS/206 33762 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/209.84.7.123 multipart/byteranges
1213248104.624578 192.168.0.5 TCP_MISS/200 375 HEAD
http://www.update.microsoft.com/v7/windowsupdate/selfupdate/wuident.cab?0806120623
- DIRECT/65.55.13.158 application/octet-stream
1213248104.831100 192.168.0.5 TCP_MISS/200 376 HEAD
http://download.windowsupdate.com/v7/windowsupdate/a/selfupdate/WSUS3/x86/Other/wsus3setup.cab?0806120623
- DIRECT/79.140.80.33 application/octet-stream
1213248105.266431 192.168.0.5 TCP_MISS/200 25737 GET
http://download.windowsupdate.com/v7/windowsupdate/a/selfupdate/WSUS3/x86/Other/wsus3setup.cab?0806120623
- DIRECT/79.140.80.33 application/octet-stream
1213248105.638   1529 192.168.0.5 TCP_MISS/206 36542 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/199.93.42.124 application/octet-stream
1213248106.175409 192.168.0.5 TCP_MISS/206 13595 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/199.93.42.124 multipart/byteranges
1213248106.595102 192.168.0.5 TCP_MISS/200 375 HEAD
http://download.windowsupdate.com/v7/windowsupdate/redir/wuredir.cab?0806120623
- DIRECT/79.140.80.33 application/octet-stream
1213248108.882   1832 192.168.0.5 TCP_MISS/206 45373 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/8.12.137.30 application/octet-stream
1213248109.603608 192.168.0.5 TCP_MISS/206 16132 GET
http://au.download.windowsupdate.com/msdownload/update/v5/psf/windowsxp-kb902400-x86-enu_a7c593892442e90b74d93abf0524a52f00998cea.psf
- DIRECT/8.12.137.30 application/octet-stream