Re: [squid-users] Problem with Facebook browsing

2010-07-22 Thread balkrishna
Dear Mr. Alex,

Thank you for your help.
It worked for me.

Thank you once again.

Regards,

Bal Krishna


> Look for a message from Amos just a couple of hours ago...
>
> Alex




Re: [squid-users] Wccp using L2

2010-07-22 Thread senthilkumaar2021

Thank you very much.

Just by changing the assignment method, forwarding method, and return method
in squid.conf, L2 + mask worked perfectly, with no tunnel.


Regards
senthil


Amos Jeffries wrote:

senthilkumaar2021 wrote:

Hi,

We are running Squid TProxy with WCCP. For WCCP we had used GRE forwarding
with HASH assignment, and Squid has been running fine, but the load on the
router is high, so we plan to use L2 redirect with mask assignment.
We have established a GRE tunnel with the router identifier. To change from
GRE to L2 assignment, is it enough to change the assignment values in
squid.conf, or do any changes need to be made on the router side as well?

Is a GRE tunnel also needed for L2?


GRE and L2 are the tunnel transport protocols. So, to take a semi-wild
guess... no. A GRE tunnel should not be required to transport via L2
redirect.
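
For reference, the squid.conf side of that change looks roughly like this (a
sketch, untested; the keyword syntax is from Squid 3.1, older Squids take
numeric values where 1=GRE/hash and 2=L2/mask, and the router address is a
placeholder):

# WCCPv2 with L2 forwarding/return and mask assignment
wccp2_router 192.0.2.1
wccp2_forwarding_method l2
wccp2_return_method l2
wccp2_assignment_method mask
# standard service 0 = port 80; TProxy setups usually define a dynamic pair
wccp2_service standard 0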


Amos




[squid-users] User Authentication to parent proxy question

2010-07-22 Thread Markus Moeller

Hi,

If I have a parent proxy which requires NTLM or Kerberos user
authentication, can I use login=PASS to do that?


e.g.

cache_peer parent.foo.net parent 3128 3130 proxy-only default login=PASS


Is it also possible for the child not to authenticate the user itself, but
just hand the authentication through to the parent?
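I.e. something along these lines, if the connection-auth peer option applies
here (that option is only my guess, untested):

cache_peer parent.foo.net parent 3128 3130 proxy-only default login=PASS connection-auth=on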


Thank you
Markus 





Re: [squid-users] Problem with Facebook browsing

2010-07-22 Thread Alex Crow

Look for a message from Amos just a couple of hours ago...

Alex





[squid-users] Problem with Facebook browsing

2010-07-22 Thread balkrishna
Dear All,

I have been facing a strange problem with Facebook since yesterday.
Facebook had been working fine with my Squid for the last year, but for the
last two days I have been experiencing strange problems.
Sometimes a page loads properly and sometimes not; most of the time only a
blank page is seen.
But when I browse bypassing the proxy, the problem seems solved.
I haven't changed any of my configuration; it is the same as it was before,
when things worked.
Other sites browse without problems through the same proxies.

There are no specific errors or other log entries that help with
troubleshooting.
Does anyone have any idea what happened?

Regards,

Bal Krishna



[squid-users] Support for detecting "if-modified" using SHA digest or similar?

2010-07-22 Thread Ed W
Hi, I am plotting a hierarchical cache with a proxy at the client end
of a slow, expensive satellite internet connection, and another on the
fast, cheap internet side (the goal is to optimise traffic passing through
the slow link). I would specifically like to address the issue that
many (smaller, dynamic) sites do not properly support if-modified type
headers and always send the same content each time.


I think the only way this can be solved is if the client-end cache
notices it has a cached version of a resource and adds its own
"if-modified-sha" header stating which content it has. The upstream
proxy may then still need to fetch the object again, but if it finds
the content actually is the same, it commutes the response to a 304.
(Something like a dynamic, proxy-generated ETag, really.)
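
To make that concrete, the exchange I am imagining looks roughly like this
(the header names are invented; nothing like them exists in Squid today):

GET http://example.com/page HTTP/1.1      <- client-side proxy to upstream
If-Modified-SHA: sha256:9f86d081...

HTTP/1.1 304 Not Modified                 <- upstream re-fetched, digest matched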


Someone may tell me this is already in an RFC? If so, great. If not,
could someone advise how difficult this feature might be to add to Squid
3.1? Bonus marks if it doesn't break streaming resources...


Any other ways to achieve the same effect?

Thanks

Ed W


[squid-users] Problem with cache_peer

2010-07-22 Thread Gemmy
Hi~
I have a Squid cluster with two cache_peers using round-robin. Today I
wanted to add a third nginx, but its load average increases quickly,
and netstat shows more than 5000 connections in TIME_WAIT!
I ran "squidclient mgr:server_list" on the Squid servers. The "OPEN
CONNS" count to each old nginx is only in the single digits, but around
three hundred to the new nginx!
I also collected and counted access.log entries; the number of "PARENT/old"
entries is nearly the same as "PARENT/new"...
The cache_peer configuration is as follows:

cache_peer 10.2.3.4 parent 80 0 no-query round-robin originserver name=d1
cache_peer 10.2.3.5 parent 80 0 no-query round-robin originserver name=d2
cache_peer 10.2.3.6 parent 80 0 no-query round-robin originserver name=d3
cache_peer_domain d1 .test.com
cache_peer_domain d2 .test.com
cache_peer_domain d3 .test.com

And the nginx servers' configurations are exactly the same.

Is there anything wrong?



Re: [squid-users] URGENT -- Suddenly Cant open Facebook

2010-07-22 Thread Amos Jeffries

Jorge Perez wrote:

Hello, suddenly today we can't open Facebook, and we need it urgently for
work.

There is no DNS issue; all I get is a blank page and nothing happens. Before,
everything was OK...

Any ideas??


Lucky day, I was about to post the answers :)  We have had an unusually 
high number of people on IRC live help with the same problem in the last 
few hours.


Squid-2.7 can be made to work by adding "server_http11 on" to squid.conf.
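
Concretely, that is a one-line squid.conf addition:

# Squid 2.7: talk HTTP/1.1 to origin servers so a usable header set comes back
server_http11 on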

Squid-3.1 is not affected.

Other versions have no good fix yet. Perhaps route requests through one of
the unaffected versions, or allow clients to go direct to Facebook without
the proxy.


Why?
  Facebook seem to have changed something on their servers very, very 
recently. They are right now violating HTTP in several ways.


The bad violation, resulting in blank pages, is that some of the mandatory
HTTP headers, including Date:, are missing from their replies to HTTP/1.0
clients.


The other violation is that they respond with different HTTP versions and
header sets depending on whether HTTP/1.0 or HTTP/1.1 is used to query them.
When queried with an HTTP/1.1 request, the right headers, or at least a
minimally usable set, are sent out.


Amos




[squid-users] URGENT -- Suddenly Cant open Facebook

2010-07-22 Thread Jorge Perez
Hello, suddenly today we can't open Facebook, and we need it urgently for
work.

There is no DNS issue; all I get is a blank page and nothing happens. Before,
everything was OK...

Any ideas??

Here is access.log

1279813884.035144 192.168.169.238 TCP_MISS/200 1704 GET 
http://static.ak.fbcdn.net/rsrc.php/zANMV/hash/9hba0udp.css - 
DIRECT/65.216.161.59 text/css
1279813885.265   2175 192.168.169.238 TCP_MISS/200 793 GET 
http://www.facebook.com/? - DIRECT/66.220.147.11 text/html
1279813887.957   5110 192.168.169.238 TCP_MISS/404 11091 GET 
http://www.facebook.com/t - DIRECT/66.220.147.11 text/html
1279813888.020   1558 192.168.169.238 TCP_MISS/200 453 GET 
http://www.facebook.com/? - DIRECT/66.220.147.11 text/html
1279813893.897   9622 192.168.169.238 TCP_MISS/200 688 GET 
http://search.twitter.com/search.json? - DIRECT/128.242.245.43 application/json

iptables proxy rules:

echo "Aplicando reglas iptables..."
iptables -t nat -F
iptables -t nat -X
iptables -t nat -Z
iptables -F
iptables -X
iptables -Z
##
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -t nat -P PREROUTING ACCEPT
iptables -t nat -P POSTROUTING ACCEPT
##
iptables -t nat -A POSTROUTING -s 192.168.169.0/24 -o eth2 -j MASQUERADE
iptables -t nat -A PREROUTING -s 192.168.169.0/24 -d ! 192.168.169.0/24 -p tcp 
--dport 80 -j REDIRECT --to-port 3128
##
iptables -A FORWARD -s 192.168.169.0/24 -i eth2 -p tcp --dport 993 -j ACCEPT
iptables -A FORWARD -s 192.168.169.0/24 -i eth2 -p tcp --dport 110 -j ACCEPT
iptables -A FORWARD -s 192.168.169.0/24 -i eth2 -p tcp --dport 465 -j ACCEPT
iptables -A FORWARD -s 192.168.169.0/24 -i eth2 -p tcp --dport 25 -j ACCEPT
iptables -A FORWARD -s 192.168.169.0/24 -i eth2 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -s 192.168.169.0/24 -i eth2 -p tcp --dport 443 -j ACCEPT
iptables -A FORWARD -s 192.168.169.0/24 -i eth2 -p tcp --dport 53 -j ACCEPT
iptables -A FORWARD -s 192.168.169.0/24 -i eth2 -p udp --dport 53 -j ACCEPT
iptables -A FORWARD -s 192.168.2.0/24 -i eth2 -p tcp --dport 1863 -j ACCEPT
##
echo 1 > /proc/sys/net/ipv4/ip_forward





squid.conf

http_port 192.168.169.3:3128 transparent
cache_dir ufs /usr/local/squid/var/cache 250 16 256
cache_effective_user squid
cache_effective_group squid
access_log /usr/local/squid/var/logs/access.log squid

acl localnet src 192.168.169.0/255.255.255.0
acl localhost src 127.0.0.1/255.255.255.255
acl all src 0.0.0.0/0.0.0.0
###
acl SSL_ports port 443 563
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
##### SITIOS BLOKEADOS (blocked sites) #####
acl restobb src 192.168.169.1-192.168.169.129
acl sucky_urls dstdomain .facebook.com .twitter.com .doubleclick.com 
.fotolog.com .warez-bb.org .fotolog.cl .chilewarez.org .rapidshare.com 
.megaupload.com .rapidshare.de .medi$
deny_info http://www.trabajoweb.cl/error.html sucky_urls
http_access deny restobb sucky_urls
##### NO DESCARGAS (no downloads) #####
acl resto src 192.168.169.1-192.168.169.29/32
acl descargas_negadas urlpath_regex -i 
\.(exe|vqf|gz|zip|r[ap][rwm]|avi|mpe?g?3?|qt|ra?m|iso|wav|mov|torrent)(\?.*)?$
deny_info http://www.trabajoweb.cl/error.html descargas_negadas
http_access deny resto descargas_negadas
##### SITIOS PROYECTOS (project sites) #####
acl restobb2 src 192.168.169.130-192.168.169.149
acl sucky_urls2 dstdomain .doubleclick.com .warez-bb.org .fotolog.cl 
.chilewarez.org .rapidshare.com .megaupload.com .rapidshare.de .mediafire.com 
.depositfiles.com .taringa.co$
deny_info http://www.trabajoweb.cl/error.html sucky_urls2
http_access deny restobb2 sucky_urls2

##### SITIOS ESTUDIO (study sites) #####
acl restobb3 src 192.168.169.190-192.168.169.219
acl sucky_urls3 dstdomain .doubleclick.com .warez-bb.org .fotolog.cl 
.chilewarez.org .rapidshare.com .megaupload.com .rapidshare.de .mediafire.com 
.depositfiles.com .taringa.co$
deny_info http://www.trabajoweb.cl/error.html sucky_urls2
http_access deny restobb3 sucky_urls2


http_access allow localnet
http_access allow localhost
http_access deny all
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
##
http_reply_access allow localnet
http_reply_access deny all
acl FTP proto FTP
always_direct allow FTP
#
# REGLAS DESCARGAS (download rules)
acl normales src 192.168.169.30-192.168.169.129/32
acl tecnicos src 192.168.169.130-192.168.169.149/32
acl administrador src 192.168.169.150-192.168.169.189/32
acl estudio src 192.168.169.190-192.168

[squid-users] squid-3.1.5 core files

2010-07-22 Thread Zeller, Jan (ID)
Dear squid-list,

I tried to follow the guidelines at
http://wiki.squid-cache.org/SquidFaq/BugReporting because I am getting lots
of core files.

- squid version : 3.1.5
- OS : Debian GNU/Linux 5.0.5 (lenny) amd64

# ls -lth core 
-rw--- 1 proxy proxy 2.1G 2010-07-22 16:59 core

# file core
core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 
'(squid) -sY -f /etc/squid3/squid-3.1.5.conf'

# gdb /opt/squid-3.1.5/sbin/squid core
GNU gdb 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu"...

warning: Can't read pathname for load map: Input/output error.
Reading symbols from /lib/librt.so.1...done.
Loaded symbols for /lib/librt.so.1
Reading symbols from /lib/libpthread.so.0...done.
Loaded symbols for /lib/libpthread.so.0
Reading symbols from /lib/libcrypt.so.1...done.
Loaded symbols for /lib/libcrypt.so.1
Reading symbols from /usr/lib/libssl.so.0.9.8...done.
Loaded symbols for /usr/lib/libssl.so.0.9.8
Reading symbols from /usr/lib/libcrypto.so.0.9.8...done.
Loaded symbols for /usr/lib/libcrypto.so.0.9.8
Reading symbols from /lib/libnsl.so.1...done.
Loaded symbols for /lib/libnsl.so.1
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /usr/lib/libstdc++.so.6...done.
Loaded symbols for /usr/lib/libstdc++.so.6
Reading symbols from /lib/libm.so.6...done.
Loaded symbols for /lib/libm.so.6
Reading symbols from /lib/libgcc_s.so.1...done.
Loaded symbols for /lib/libgcc_s.so.1
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/ld-linux-x86-64.so.2...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /usr/lib/libz.so.1...done.
Loaded symbols for /usr/lib/libz.so.1
Reading symbols from /lib/libnss_compat.so.2...done.
Loaded symbols for /lib/libnss_compat.so.2
Reading symbols from /lib/libnss_nis.so.2...done.
Loaded symbols for /lib/libnss_nis.so.2
Reading symbols from /lib/libnss_files.so.2...done.
Loaded symbols for /lib/libnss_files.so.2
Core was generated by `(squid) -sY -f /etc/squid3/squid-3.1.5.conf'.
Program terminated with signal 6, Aborted.
[New process 1433]
#0  0x7f61234e6ed5 in raise () from /lib/libc.so.6
(gdb) bt
#0  0x7f61234e6ed5 in raise () from /lib/libc.so.6
#1  0x7f61234e83f3 in abort () from /lib/libc.so.6
#2  0x7f6123d6a294 in __gnu_cxx::__verbose_terminate_handler ()
   from /usr/lib/libstdc++.so.6
#3  0x7f6123d68696 in ?? () from /usr/lib/libstdc++.so.6
#4  0x7f6123d686c3 in std::terminate () from /usr/lib/libstdc++.so.6
#5  0x7f6123d68f6f in __cxa_pure_virtual () from /usr/lib/libstdc++.so.6
#6  0x0055a1e0 in JobDialer (this=0x7fffcc60, aJob=0x599) at 
AsyncJob.cc:172
#7  0x005b142d in Adaptation::Initiator::announceInitiatorAbort 
(this=0xc8e9450, 
x=@0xc8e9480) at ../../src/base/AsyncJobCalls.h:33
#8  0x005b1ad1 in Adaptation::Iterator::noteInitiatorAborted 
(this=0xc8e9438)
at Iterator.cc:104
#9  0x0055a7db in JobDialer::dial (this=0x6d7cbb20, call=@0x6d7cbaf0)
at AsyncJob.cc:215
#10 0x00559bdb in AsyncCall::make (this=0x6d7cbaf0) at AsyncCall.cc:34
#11 0x0055c270 in AsyncCallQueue::fireNext (this=0xad3000) at 
AsyncCallQueue.cc:53
#12 0x0055c418 in AsyncCallQueue::fire (this=0xad3000) at 
AsyncCallQueue.cc:39
#13 0x004ca81c in EventLoop::runOnce (this=0x7fffce20) at 
EventLoop.cc:130
#14 0x004ca8f8 in EventLoop::run (this=0x7fffce20) at 
EventLoop.cc:94
#15 0x00513eac in SquidMain (argc=4, argv=0x7fffcfc8) at 
main.cc:1397
#16 0x00514436 in main (argc=1433, argv=0x599) at main.cc:1159
(gdb) quit


What could be the cause, and what kind of additional information should I
provide for better analysis?
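
For reference, the steps I followed from the FAQ were roughly these (a
sketch; the two gdb commands at the end give more detail than the plain
bt above):

ulimit -c unlimited                    # allow core dumps before starting squid
gdb /opt/squid-3.1.5/sbin/squid core   # load the dump with the matching binary
(gdb) bt full                          # backtrace with local variables
(gdb) thread apply all bt              # backtraces for every thread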


thanks,

---

Jan




Re: [squid-users] Ideal partition size for good hit ratio ?

2010-07-22 Thread Matus UHLAR - fantomas
On 09.07.10 11:14, sameer khan wrote:
> Thanks for the reply, Matus. I am not trying to cache large objects in COSS;
> its max size limit is set to 10. What I want to achieve is the best hit ratio.

request hit ratio or byte hit ratio? Those two are different.

> I am guessing that if the cache size is too small it will
> have an effect on the hit ratio.

Of course. But if your COSS cache_dir is too big, it may hold small
objects for a long time, leaving no space for (a)ufs/diskd to hold
big objects. Since big objects tend to take more space, you may find
that you have a low byte-hit ratio even with a high request hit ratio.

> As there are many small objects, I want to cache them in
> COSS and the rest in aufs. So if I use more than one COSS directory
> on the same drive, is that also not recommended (as with aufs), or is it
> OK if I do that?

There's no use for that. And as I said, you apparently need less space for
COSS, not more.
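
For example, a split along these lines (a sketch for Squid 2.7; the paths
and size boundaries are only illustrative, untested):

# small objects (here <= 64 KB) go to COSS, everything bigger to aufs
cache_dir coss /cache/coss/stripe 1024 max-size=65536 block-size=512
cache_dir aufs /cache/aufs 20480 16 256 min-size=65537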

> Does anyone know the upper limit of Squid's requests per second? My Squid
> box works perfectly at ~350 requests per second, but it slows down at ~600
> requests per second. Any suggestion will be much appreciated.

It mostly depends on the hardware; you can tune the software to reach
whatever the hardware and software allow.

> I am using a browser; I don't think I can limit it to 72-character lines,
> so I'm doing it manually.

It's apparently the webmail (Hotmail) issue... well, Microsoft is very
"good" at ignoring good practices.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Emacs is a complicated operating system without good text editor.


Re: [squid-users] Question about SquidGuard and blocking pages

2010-07-22 Thread Silamael
On 07/22/2010 03:24 PM, Beavis wrote:
> looks like a config is missing. in my setup i have prepared the
> internal access-denied page and put a fqdn on use an internal dns zone
> you have to resolv it. squid does pretty good on filtering, and it
> includes filter via IP. try to have the page with url resolved to a
> zone entry you have, and try it again. if not you can always whitelist
> the url.

Hi Beavis,

Sorry, I don't get it. As far as I understand the code, Squid seems to
throw away the original URL and create a completely new request for the
URL returned by SquidGuard.

What do you mean by all this DNS stuff? Why would I need an internal
DNS zone? That sounds pretty complicated for just blocking some pages
based on the SquidGuard blacklists...

-- Matthias


Re: [squid-users] Re: fakeauth_auth for logging on Ubuntu builds

2010-07-22 Thread Amos Jeffries

rscho wrote:
[...]
auth_param basic program /usr/bin/perl /etc/squid3/no_check.pl    # a perl authenticator
auth_param basic children 5
auth_param basic realm XYZ
[...]
Is this correct? It asks for credentials 3 times and, whether they are
correct or not, eventually fails. Using auth_param ntlm doesn't work at all.

fakeauth_auth is an NTLM protocol auth helper, which is why I replied
with the no_check.pl one. They do exactly the same things, which are not
the same as Basic protocol auth.


Since it does not work when configured with NTLM, there is something else
going on. Check that persistent connections are all turned on in Squid.
If it's not that, then something in the browser's retrieval may be going wrong.
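
As a sketch (untested; 3.x directive names), what I mean looks like:

# NTLM-protocol helpers must be declared as ntlm, not basic
auth_param ntlm program /usr/bin/perl /etc/squid3/no_check.pl
auth_param ntlm children 5
auth_param ntlm keep_alive on
# the NTLM handshake needs persistent connections at both ends
client_persistent_connections on
server_persistent_connections on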


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.5


[squid-users] Re: fakeauth_auth for logging on Ubuntu builds

2010-07-22 Thread rscho

Thanks Amos,

Yes, you're correct, we are using the version built by Acme; I thought
the two were the same.

I think you're correct about my misconception as to the way fakeauth works
as well. You say the silent part comes from the browser being able to fetch
NTLM credentials from the OS? In our case both IE and Mozilla browsers can
retrieve this information when their proxy is set to the existing Windows
Squid (no popup appears), but when the proxy is set to the Ubuntu Squid a
popup always appears, regardless of which Squid authenticator we're using.
When you say "It only becomes a popup when the browser sends invalid
credentials and gets challenged to supply valid ones", it suggests that the
authenticators we're using initially receive invalid credentials but then
approve them, because after the popup appears and the user supplies
credentials (even rubbish ones) it "authenticates" them and allows browsing.

I don't understand why the initial request from the browser to the proxy
fails, but after refreshing the page a popup appears, values are entered,
and browsing is permitted. Do you have any thoughts on this?

Thanks for your perl link, although it doesn't seem to work where the other
two do. I'm using it like this:

auth_param basic program /usr/bin/perl /etc/squid3/no_check.pl    # a perl authenticator
#auth_param basic program /etc/squid3/GetUserID                   # a 'C' authenticator
#auth_param basic program /usr/bin/php /etc/squid3/PHP_Check.php  # a php authenticator
auth_param basic children 5
auth_param basic realm XYZ
.
.
Is this correct? It asks for credentials 3 times and, whether they are
correct or not, eventually fails. Using auth_param ntlm doesn't work at all.

Thanks again, and any tips or workarounds are much appreciated!






Re: [squid-users] Question about SquidGuard and blocking pages

2010-07-22 Thread Beavis
It looks like some config is missing. In my setup I prepared an
internal access-denied page and gave it an FQDN in an internal DNS zone
so it can be resolved. Squid does pretty well at filtering, and that
includes filtering via IP. Try hosting the error page at a URL that
resolves to a zone entry you control, and try it again. If not, you can
always whitelist the URL.
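
For example, something along these lines in squidGuard.conf (untested; the
302: prefix, understood by recent Squids, makes Squid send the browser a
real redirect instead of quietly rewriting the request; %u carries the
originally requested URL, and the block-page host is a placeholder):

acl {
    default {
        pass !blacklist all
        redirect 302:http://blockpage.example.local/error.html?url=%u
    }
}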


hope that helps.

-Beavis




-- 
()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments


[squid-users] Question about SquidGuard and blocking pages

2010-07-22 Thread Silamael
Hello!

We're using SquidGuard to block certain URLs. Now, the problem is
that SquidGuard redirects to some internal://.../error-access-denied
URL, but on that page the internal URL is shown as the blocked URL
instead of the original URL.
Is this a configuration problem, or did I stumble over some Squid bug here?

Thanks!

-- Matthias


RE: [squid-users] Delay Pool Configuration Confirmation.

2010-07-22 Thread GIGO .

Right Amos, I think what I want is the class 2 pool, so I will configure it
as you suggest. Will it encompass the authenticated users as well?
regards,
Bilal




Re: [squid-users] Delay Pool Configuration Confirmation.

2010-07-22 Thread Amos Jeffries

GIGO . wrote:
[...]
acl wh time MTWHF 09:00-21:00
delay_pools 1
delay_class 1 1
delay_parameters 1 128000/128000
delay_access 1 allow downloads wh


Class 1 is an aggregate limit, meaning that config caps the whole network
at 125KB/s combined. Divide that by the number of users on the network
using the proxy at any one time.


If you want each user to have 128KB/s but no more, use a class 2 pool,
with parameters of -1/-1 131072/131072 (no aggregate limit, 128KB/s
individual caps).
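
Putting that together with your time ACL, a sketch of the whole thing
(untested; 131072 bytes/sec = 128KB/sec):

acl wh time MTWHF 09:00-21:00
delay_pools 1
delay_class 1 2
# -1/-1 = no aggregate cap; 131072/131072 = 128KB/s per client IP
delay_parameters 1 -1/-1 131072/131072
delay_access 1 allow wh
delay_access 1 deny all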


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.5


[squid-users] Delay Pool Configuration Confirmation.

2010-07-22 Thread GIGO .

Dear all,
 
 
I am using Squid 2.7.STABLE9. I want to restrict downloads for everyone,
both authenticated and IP-based clients, to 128KB/s during the day, with
full capacity at night. I have done the following configuration, however it
doesn't seem to work for me. Can you confirm whether it is correct?

 
 
I am using squid_kerb_ldap and squid_kerb_auth, and 50% of users are based
on this; the other 50% are IP-based, 10.x.x.x (/24).
#Definition of working hours---
acl wh time MTWHF 09:00-21:00
#--Delay Pools Settings---
delay_pools 1
delay_class 1 1
delay_parameters 1 128000/128000
delay_access 1 allow downloads wh 