[squid-users] Multiple-instances load-balancing

2010-11-11 Thread Artemis BRAJA

Hello,

I deployed a Fedora 13 x86_64 environment as a proxy server using Squid 
3.0.STABLE25

I'm interested in running Squid as multiple instances.
I successfully started two Squid instances with two different config files, 
listening on ports 3128 and 3129 respectively.

Squid is not configured as a transparent proxy.
I also successfully executed the shell script mentioned here; the only thing 
I changed was the destination port, from 80 to 3130.
This port (3130) is the one configured as the proxy port on the clients.

The problem is that I'm unable to open any web page.
It seems that no packets are passing through the chains; requests never 
reach Squid.
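For reference, one quick way to confirm whether any packets are hitting those 
chains is to watch the mangle-table packet counters while a client makes a 
request (the chain names below match the rules shown further down):

# zero the counters, make a client request, then inspect them
iptables -t mangle -Z
iptables -t mangle -L PREROUTING -v -n
iptables -t mangle -L extrachain -v -n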

Actually my /etc/sysconfig/iptables looks like this:
# Generated by iptables-save v1.4.7 on Wed Nov 10 11:48:23 2010
*mangle
:PREROUTING ACCEPT [13:1014]
:INPUT ACCEPT [135:10622]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [76:10400]
:POSTROUTING ACCEPT [76:10400]
:DIVERT - [0:0]
:extrachain - [0:0]
-A PREROUTING -p tcp -m socket -j DIVERT
-A PREROUTING -p tcp -m tcp --dport 3130 -m conntrack --ctstate NEW -j extrachain
-A PREROUTING -i eth0 -p tcp -m tcp --dport 3130 -m connmark --mark 0x0 -j TPROXY --on-port 3128 --on-ip 0.0.0.0 --tproxy-mark 0x1/0x1
-A PREROUTING -i eth0 -p tcp -m tcp --dport 3130 -m connmark --mark 0x1 -j TPROXY --on-port 3129 --on-ip 0.0.0.0 --tproxy-mark 0x1/0x1

-A DIVERT -j MARK --set-xmark 0x1/0x
-A DIVERT -j ACCEPT
-A extrachain -m statistic --mode nth --every 2 -j CONNMARK --set-xmark 0x0/0x
-A extrachain -m statistic --mode nth --every 2 --packet 1 -j CONNMARK --set-xmark 0x1/0x

COMMIT
# Completed on Wed Nov 10 11:48:23 2010
# Generated by iptables-save v1.4.7 on Wed Nov 10 11:48:23 2010
*filter
:INPUT ACCEPT [8435:541409]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5439:343824]
-A INPUT -j LOG --log-level 7
COMMIT
# Completed on Wed Nov 10 11:48:23 2010
Any help will be appreciated!

Regards
Artemis
--
Artemis Braja | System Administrator
T +355 4 4400123 |  F +355 4 225 11 33
M +355 67 40 40 202 |www.primo.al 
Rr. Donika Kastrioti #4|  Tirana, Albania



Re: [squid-users] howto forward to squid proxy

2010-11-11 Thread Arturas Kurlavicius
Thanks for the reply.

On Fri, Nov 12, 2010 at 9:15 AM, Amos Jeffries  wrote:
> On 12/11/10 19:54, Arturas Kurlavicius wrote:
>>
>> Hello
>> First I want to say sorry for my bad English :(
>> Here is my situation.
>>
>> I work in a huge government network. My network uses a proxy to access
>> the internet. A shame that the proxy is not transparent... so I must type
>> the settings into every PC every time. That's annoying, so I want to
>> change the situation.
>>
>> I made a fairly basic gateway PC (Debian) with a single network adapter,
>> messed a bit with iptables, and the simple gateway is working.
>> Network config:
>> [CODE]
>> auto lo
>> iface lo inet loopback
>>
>> # The primary network interface
>> auto eth0
>> allow-hotplug eth0
>> #iface eth0 inet dhcp
>> iface eth0 inet static
>> address 10.0.8.226
>> netmask 255.255.255.0
>> gateway 10.0.8.1
>> [/CODE]
>> iptables Config:
>> [CODE]
>> ###Flush iptables configurations
>> iptables -F
>> iptables -X
>> iptables -t nat -F
>> iptables -t nat -X
>> iptables -t mangle -F
>> iptables -t mangle -X
>> iptables -P INPUT ACCEPT
>> iptables -P FORWARD ACCEPT
>> iptables -P OUTPUT ACCEPT
>>
>> ###Enable IP forwarding
>> echo 1>  /proc/sys/net/ipv4/ip_forward
>>
>> ###Enable ip masquerading
>> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
>> [/CODE]
>> And it all seems to be working... I can browse sites (with proxy settings).
>>
>> Now I want to use that gateway to make the proxy transparent, so I'm
>> trying to forward port 80 to the proxy.
>> Config:
>> [CODE]
>> ###Flush iptables configurations
>> iptables -F
>> iptables -X
>> iptables -t nat -F
>> iptables -t nat -X
>> iptables -t mangle -F
>> iptables -t mangle -X
>> iptables -P INPUT ACCEPT
>> iptables -P FORWARD ACCEPT
>> iptables -P OUTPUT ACCEPT
>>
>> ###Enable IP forwarding
>> echo 1>  /proc/sys/net/ipv4/ip_forward
>>
>> ###Enable ip masquerading
>> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
>>
>> ###Trying to forward port 80
>> iptables -A FORWARD -j ACCEPT
>> iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT
>> --to-destination 212.59.6.119:80
>
> Use an uncommon, randomly picked --to-destination port here to avoid the
> security problems introduced by NAT. It is only used between this firewall
> and Squid, so it can be firewalled in the "mangle" table to prevent external
> machines from sending traffic directly there.
>
>> [/CODE]
>>
>> Well, it seems forwarding is working... but when I try to get a page (for
>> example www.inuxforums.org)... I get this error from the proxy:
>> [CODE]
>> ERROR
>> The requested URL could not be retrieved
>>
>> 
>> While trying to retrieve the URL: /
>>
>> The following error was encountered:
>>
>> •Invalid URL
>> Some aspect of the requested URL is incorrect. Possible problems:
>>
>> •Missing or incorrect access protocol (should be `http://'' or similar)
>> •Missing hostname
>> •Illegal double-escape in the URL-Path
>> •Illegal character in hostname; underscores are not allowed
>>
>> 
>> Generated Thu, 11 Nov 2010 11:02:48 GMT by duke.cust.lt
>> (squid/3.0.STABLE25)
>> [/CODE]
>> But if I put the proxy setting in the browser (212.59.6.119:80), everything
>> works fine again.
>>
>> So I would like advice on what I'm doing wrong.
>
> You need to create an http_port for the NAT traffic to enter Squid. It
> needs IP:port details identical to the firewall --to-destination.
>  In 3.0 and older Squid that port takes the flag "transparent", which tells
> Squid how to find and replace the missing hostname.
>

So... you're saying I need to change the Squid configuration... Well, that is
not possible for me; I'm only a user.
Is there another way to make the proxy transparent if I can't change the
Squid config?
Only WPAD/PAC?

>>
>> P.S. Automatic proxy settings are not possible. And I have a lot of
>> notebooks... so they would need to change settings every time... and that's
>> bad :(
>
> Do you mean transparent configuration, aka WPAD/PAC? That would really be the
> best way. NAT interception adds some annoying security problems and
> restrictions.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.9
>  Beta testers wanted for 3.2.0.3
>


Re: [squid-users] howto forward to squid proxy

2010-11-11 Thread Amos Jeffries

On 12/11/10 19:54, Arturas Kurlavicius wrote:

Hello
First I want to say sorry for my bad English :(
Here is my situation.

I work in a huge government network. My network uses a proxy to access
the internet. A shame that the proxy is not transparent... so I must type
the settings into every PC every time. That's annoying, so I want to
change the situation.

I made a fairly basic gateway PC (Debian) with a single network adapter,
messed a bit with iptables, and the simple gateway is working.
Network config:
[CODE]
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
allow-hotplug eth0
#iface eth0 inet dhcp
iface eth0 inet static
address 10.0.8.226
netmask 255.255.255.0
gateway 10.0.8.1
[/CODE]
iptables Config:
[CODE]
###Flush iptables configurations
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

###Enable IP forwarding
echo 1>  /proc/sys/net/ipv4/ip_forward

###Enable ip masquerading
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
[/CODE]
And it all seems to be working... I can browse sites (with proxy settings).

Now I want to use that gateway to make the proxy transparent, so I'm trying
to forward port 80 to the proxy.
Config:
[CODE]
###Flush iptables configurations
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

###Enable IP forwarding
echo 1>  /proc/sys/net/ipv4/ip_forward

###Enable ip masquerading
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

###Trying to forward port 80
iptables -A FORWARD -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT
--to-destination 212.59.6.119:80


Use an uncommon, randomly picked --to-destination port here to avoid the 
security problems introduced by NAT. It is only used between this 
firewall and Squid, so it can be firewalled in the "mangle" table to 
prevent external machines from sending traffic directly there.



[/CODE]

Well, it seems forwarding is working... but when I try to get a page (for example
www.inuxforums.org)... I get this error from the proxy:
[CODE]
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: /

The following error was encountered:

•Invalid URL
Some aspect of the requested URL is incorrect. Possible problems:

•Missing or incorrect access protocol (should be `http://'' or similar)
•Missing hostname
•Illegal double-escape in the URL-Path
•Illegal character in hostname; underscores are not allowed

Generated Thu, 11 Nov 2010 11:02:48 GMT by duke.cust.lt (squid/3.0.STABLE25)
[/CODE]
But if I put the proxy setting in the browser (212.59.6.119:80), everything works fine again.

So I would like advice on what I'm doing wrong.


You need to create an http_port for the NAT traffic to enter Squid. It 
needs IP:port details identical to the firewall --to-destination.
 In 3.0 and older Squid that port takes the flag "transparent", which tells 
Squid how to find and replace the missing hostname.
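A minimal sketch of how those two pieces could line up, following the advice 
above. Port 3129 is only an arbitrary example of an "uncommon" port and the 
IP is the proxy address already mentioned in this thread, so treat the exact 
values as placeholders:

# on the gateway: DNAT intercepted port-80 traffic to an uncommon port on the proxy
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 212.59.6.119:3129

# in squid.conf on the proxy (Squid 3.0 and older): a port matching the
# --to-destination, flagged "transparent" so Squid can rebuild the missing URL
http_port 212.59.6.119:3129 transparent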




P.S. Automatic proxy settings are not possible. And I have a lot of
notebooks... so they would need to change settings every time... and that's
bad :(


Do you mean transparent configuration, aka WPAD/PAC? That would really be 
the best way. NAT interception adds some annoying security problems and 
restrictions.
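For comparison, the WPAD/PAC route needs no firewall tricks at all. A minimal 
PAC file would look roughly like the sketch below (the proxy address is taken 
from this thread; the file would be served via WPAD or set as the browsers' 
automatic configuration URL):

// proxy.pac -- hand every request to the existing proxy, falling back to direct
function FindProxyForURL(url, host) {
    return "PROXY 212.59.6.119:80; DIRECT";
}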


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] howto forward to squid proxy

2010-11-11 Thread Arturas Kurlavicius
Hello
First I want to say sorry for my bad English :(
Here is my situation.

I work in a huge government network. My network uses a proxy to access
the internet. A shame that the proxy is not transparent... so I must type
the settings into every PC every time. That's annoying, so I want to
change the situation.

I made a fairly basic gateway PC (Debian) with a single network adapter,
messed a bit with iptables, and the simple gateway is working.
Network config:
[CODE]
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
allow-hotplug eth0
#iface eth0 inet dhcp
iface eth0 inet static
address 10.0.8.226
netmask 255.255.255.0
gateway 10.0.8.1
[/CODE]
iptables Config:
[CODE]
###Flush iptables configurations
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

###Enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

###Enable ip masquerading
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
[/CODE]
And it all seems to be working... I can browse sites (with proxy settings).

Now I want to use that gateway to make the proxy transparent, so I'm trying
to forward port 80 to the proxy.
Config:
[CODE]
###Flush iptables configurations
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

###Enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward

###Enable ip masquerading
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

###Trying to forward port 80
iptables -A FORWARD -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT
--to-destination 212.59.6.119:80
[/CODE]

Well, it seems forwarding is working... but when I try to get a page (for example
www.inuxforums.org)... I get this error from the proxy:
[CODE]
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: /

The following error was encountered:

•Invalid URL
Some aspect of the requested URL is incorrect. Possible problems:

•Missing or incorrect access protocol (should be `http://'' or similar)
•Missing hostname
•Illegal double-escape in the URL-Path
•Illegal character in hostname; underscores are not allowed

Generated Thu, 11 Nov 2010 11:02:48 GMT by duke.cust.lt (squid/3.0.STABLE25)
[/CODE]
But if I put the proxy setting in the browser (212.59.6.119:80), everything works fine again.

So I would like advice on what I'm doing wrong.

P.S. Automatic proxy settings are not possible. And I have a lot of
notebooks... so they would need to change settings every time... and that's
bad :(

Please help.


Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Amos Jeffries

On 12/11/10 19:20, Landy Landy wrote:



--- On Fri, 11/12/10, Amos Jeffries wrote:

From: Amos Jeffries
Subject: Re: [squid-users] Problems with hotmail and facebook
To: squid-users@squid-cache.org
Date: Friday, November 12, 2010, 12:05 AM
On 12/11/10 15:44, Landy Landy wrote:

Amos.

Thanks for your quick reply.

I haven't tried a newer version yet. The problem started two days ago and
I've been using that version for over a year now and it worked well.

--- On Thu, 11/11/10, Amos Jeffries wrote:

From: Amos Jeffries
On 12/11/10 15:11, Landy Landy wrote:

Hello.

Our network is experiencing problems loading or accessing facebook and
hotmail inbox and others when I use squid. I am using:

I use google's public dns and our local isp provider's.

I tried to login to my hotmail account and got this:

Squid Cache: Version 3.0.STABLE24
configure options: '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid'
'--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
'--enable-default-err-language=Spanish' '--enable-linux-netfilter'
'--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
'--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
'--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--with-maxfd=32768'

When I try accessing these pages without having to pass through squid
everything works fine.

Does anyone have an idea of what can be causing this?

Could you give any details about what the problems actually are, please?

Noticed that hotmail sometimes just hangs after providing the username and
password. People started calling today and are driving me crazy.

Also today I noticed this (Response not valid) when replying to a thread on
dslreports.org:

While trying to process the request:

POST /speak/wisp?enc=L2ZvcnVtL3dpc3A%3D;really HTTP/1.1
Host: www.dslreports.com
Connection: keep-alive
Referer: http://www.dslreports.com/speak/wisp?enc=L2ZvcnVtL3dpc3A%3D
Content-Length: 2580
Cache-Control: max-age=0
Origin: http://www.dslreports.com
Pragma: no-cache
Content-Type: multipart/form-data; boundary=WebKitFormBoundarynmpKtsJY1cReuJwT
Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: __utmz=260971928.1285198857.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __utma=260971928.458537925.1285198857.1285422374.1285465733.3; dsl=6402462798:1587616; bbruid=1587616

The following problem has occurred:

Invalid Response.

The HTTP response message received from the contacted server could not be
understood or was malformed. Please contact the operator of the web site.
Perhaps your cache administrator can give you more details about the exact
nature of the problem, if needed.

Your cache administrator is optimumwirel...@hotmail.com.

Things are not as they used to be. I checked the cache.log file and can't
find anything there. What do you recommend me to do?

The POST is requesting "sdch" (aka binary diff encoding) responses. If you or
any other proxy along that supply path are doing anything with ICAP besides
straight AV scanning, that could be corrupting the diffs.

The problem is in the response to that POST. The newer 3.1.9 logs what the
problem is at debug level 1 ("debug_options ALL,1"), including the URL for
tracking.

With that info you can drill down into the processing area or a tcpdump log
and find out what the response actually is.


Amos,
As I mentioned earlier, I noticed the problem happens on websites running AJAX 
code. I don't know if it makes sense, but hotmail, yahoo, and facebook all use 
AJAX... and by the way, 3.1.9 hasn't really changed anything.


I know. The mechanism used to make the requests does not affect the 
debugging process. They are all just HTTP traffic by the time they reach 
Squid.
 Since 3.1 has not fixed it, I suggest you take advantage of its 
slightly better debug output to help track down the problem. Then, if 
speed is an issue, revert to 3.0 and apply whatever workaround you find.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] ACLs Implementation help

2010-11-11 Thread Amos Jeffries

FWIW: this is all covered in details in the wiki:
  http://wiki.squid-cache.org/Features/Authentication

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Getting Groups from windows server authentication

2010-11-11 Thread Amos Jeffries

On 12/11/10 18:49, viswanathan wrote:

Hi all

I am using Squid 2.7 STABLE7 and it authenticates against Windows Active
Directory using Samba's smb_auth.
The authentication works fine for all users.
We want to filter websites on the basis of Organizational Unit (groups) in
Active Directory.
Is it possible to fetch the group name along with the user name using
smb_auth?


No. The auth tests only provide true/false about whether the username + 
password blob is a valid combination. For group testing you need an 
external_acl_type helper that uses the username to look up the group.


http://wiki.squid-cache.org/ConfigExamples/Authenticate/NtlmWithGroups
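As a rough sketch of that approach (the helper path and the group name are 
assumptions for illustration; the wiki page above carries the full worked 
example):

# squid.conf: ask a winbind group helper whether the authenticated user
# belongs to the named Active Directory group
external_acl_type ad_group ttl=300 %LOGIN /usr/local/squid/libexec/wbinfo_group.pl
acl InternetUsers external ad_group InternetAccess
http_access allow InternetUsers
http_access deny all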

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Landy Landy


--- On Fri, 11/12/10, Amos Jeffries  wrote:

> From: Amos Jeffries 
> Subject: Re: [squid-users] Problems with hotmail and facebook
> To: squid-users@squid-cache.org
> Date: Friday, November 12, 2010, 12:05 AM
> On 12/11/10 15:44, Landy Landy
> wrote:
> > Amos.
> >
> > Thanks for your quick reply.
> >
> > I haven't tried a newer version yet. The problem
> started two days ago and I've been using that version for
> over a year now and it worked well.
> >
> > --- On Thu, 11/11/10, Amos Jeffries wrote:
> >
> >> From: Amos Jeffries
> >> On 12/11/10 15:11, Landy Landy
> >> wrote:
> >>> Hello.
> >>>
> >>> Our network is experiencing problems loading
> or
> >> accessing facebook and hotmail inbox and others
> when I use
> >> squid. I am using:
> >>>
> >>> I use google's public dns and our local isp
> >> provider's.
> >>>
> >>> I tried to login to my hotmail account and got
> this:
> >>>
> >>> Squid Cache: Version 3.0.STABLE24
> >>> configure options: 
> '--prefix=/usr/local/squid'
> >> '--sysconfdir=/etc/squid' '--enable-delay-pools'
> >> '--enable-kill-parent-hack' '--disable-htcp'
> >> '--enable-default-err-language=Spanish'
> >> '--enable-linux-netfilter'
> '--disable-ident-lookups'
> >> '--localstatedir=/var/log/squid3.1'
> '--enable-stacktraces'
> >> '--with-default-user=proxy' '--with-large-files'
> >> '--enable-icap-client' '--enable-async-io'
> >> '--enable-storeio=aufs'
> '--enable-removal-policies=heap,lru'
> >> '--with-maxfd=32768'
> >>>
> >>> When I try accessing these pages without
> having to
> >> pass through squid everything works fine.
> >>>
> >>> Does anyone have an idea of what can be causing
> this?
> >>
> >> Could you give any details about what the problems
> actually
> >> are please?
> >
> > Noticed that hotmail sometimes just hangs after
> providing the username and password. People started calling
> today and are driving me crazy.
> >
> > Also today I noticed this (Response not valid) when
> replying to a thread on dslreports.org:
> >
> > 
> >
> > While trying to process the request:
> >
> > POST /speak/wisp?enc=L2ZvcnVtL3dpc3A%3D;really
> HTTP/1.1
> > Host: www.dslreports.com
> > Connection: keep-alive
> > Referer: http://www.dslreports.com/speak/wisp?enc=L2ZvcnVtL3dpc3A%3D
> > Content-Length: 2580
> > Cache-Control: max-age=0
> > Origin: http://www.dslreports.com
> > Pragma: no-cache
> > Content-Type: multipart/form-data;
> boundary=WebKitFormBoundarynmpKtsJY1cReuJwT
> > Accept:
> application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
> > User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US)
> AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44
> Safari/534.7
> > Accept-Encoding: gzip,deflate,sdch
> > Accept-Language: en-US,en;q=0.8
> > Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
> > Cookie:
> __utmz=260971928.1285198857.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none);
> __utma=260971928.458537925.1285198857.1285422374.1285465733.3;
> dsl=6402462798:1587616; bbruid=1587616
> >
> > The following problem has occurred:
> >
> > Invalid Response.
> >
> > The HTTP response message received from the contacted
> server could not be understood or was malformed. Please
> contact the operator of the web site. Perhaps your cache
> administrator can give you more details about the exact
> nature of the problem, if needed.
> >
> > Your cache administrator is optimumwirel...@hotmail.com.
> >
> > 
> >
> > Things are not as they used to be. I checked the
> cache.log file and can't find anything there. What do you
> recommend me to do?
> >
> 
> The POST is requesting "sdch" (aka binary diff encoding)
> responses. If 
> you or any other proxy along that supply path are doing
> anything with 
> ICAP besides straight AV scanning that could be corrupting
> the diffs.
> 
> The problem is in the response to that POST. The newer
> 3.1.9 logs what 
> the problem is at debug level 1 ("debug_options ALL,1")
> including the 
> URL for tracking.
> 
> With that info you can drill down into the processing area
> or a tcpdump 
> log and find out what the response actually is.
> 
Amos,
As I mentioned earlier, I noticed the problem happens on websites running AJAX 
code. I don't know if it makes sense, but hotmail, yahoo, and facebook all use 
AJAX... and by the way, 3.1.9 hasn't really changed anything.

Thanks.





Re: [squid-users] ACLs Implementation help

2010-11-11 Thread Amos Jeffries

On 12/11/10 18:18, Edmonds Namasenda wrote:

Amos, thank you for the responses always.

On Thu, Nov 11, 2010 at 6:56 PM, Amos Jeffries  wrote:


On 12/11/10 04:08, Edmonds Namasenda wrote:



I believe I am a better squid administrator than when I joined. Throw me a bone!



Switch "users" with "browsers" and you have it right. There is a whole layer of 
software between squid and the people at the screen.

The browser is supposed to remember these things once the person has entered 
them. Or as in the case of Kerberos, to locate the credentials without 
bothering the person at all.


If you are seeing a browser repeatedly asking for a login then there is a problem 
with the browser. Browsers can occasionally be hit by something they do not like 
coming back from Squid. When that happens, some network forensics are needed to 
figure out what's going on.


I know Firefox asks whether to keep authentication details. I am not
sure about MS IE.
Assuming they are using Firefox and the log-in details are kept by the
browser, are you implying there will not be continuous log-in prompts
with each accessed page?
That is, the browser sends the authentication details to squid and squid
allows them access accordingly.



Yes, exactly. When usable credentials are known the browser keeps them 
until they stop working or the window is closed. The "remember" question just 
makes the credentials persist across window closures. This is the same 
for all browsers.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] Getting Groups from windows server authentication

2010-11-11 Thread viswanathan

Hi all

I am using Squid 2.7 STABLE7 and it authenticates against Windows Active
Directory using Samba's smb_auth.
The authentication works fine for all users.
We want to filter websites on the basis of Organizational Unit (groups) in
Active Directory.
Is it possible to fetch the group name along with the user name using
smb_auth?

Thanks
Viswa




Re: [squid-users] ACLs Implementation help

2010-11-11 Thread Edmonds Namasenda
Amos, thank you for the responses always.

On Thu, Nov 11, 2010 at 6:56 PM, Amos Jeffries  wrote:
>
> On 12/11/10 04:08, Edmonds Namasenda wrote:

>> I believe I am a better squid administrator than when I joined. Throw me a 
>> bone!
>>
>
> Switch "users" with "browsers" and you have it right. There is a whole layer 
> of software between squid and the people at the screen.
>
> The browser is supposed to remember these things once the person has entered 
> them. Or as in the case of Kerberos, to locate the credentials without 
> bothering the person at all.
>
>
> If you are seeing a browser repeatedly asking for a login then there is a 
> problem with the browser. Browsers can occasionally be hit by something they do 
> not like coming back from Squid. When that happens, some network forensics are 
> needed to figure out what's going on.
>
I know Firefox asks whether to keep authentication details. I am not
sure about MS IE.
Assuming they are using Firefox and the log-in details are kept by the
browser, are you implying there will not be continuous log-in prompts
with each accessed page?
That is, the browser sends the authentication details to squid and squid
allows them access accordingly.



--
Thank you and kind regards,

I.P.N Edmonds

Cel:    +256 70 227 3374
       +256 71 227 3374

Y! / MSN: zibiced | GMail: namasenda | Skype: edsend


Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Landy Landy

--- On Thu, 11/11/10, Amos Jeffries  wrote:

> From: Amos Jeffries 
> Subject: Re: [squid-users] Problems with hotmail and facebook
> To: squid-users@squid-cache.org
> Date: Thursday, November 11, 2010, 11:51 PM
> On 12/11/10 17:30, Landy Landy
> wrote:
> >
> > --- On Thu, 11/11/10, Amos Jeffries 
> wrote:
> >
> >> From: Amos Jeffries
> >> Subject: Re: [squid-users] Problems with hotmail
> and facebook
> >> To: squid-users@squid-cache.org
> >> Date: Thursday, November 11, 2010, 11:16 PM
> >> On 12/11/10 16:22, Landy Landy
> >> wrote:
> >>>
> >>> --- On Thu, 11/11/10, Amos Jeffries 
> wrote:
> >>>
>  From: Amos Jeffries
>  On 12/11/10 15:11, Landy Landy
>  wrote:
> > Hello.
> >
> > Our network is experiencing problems
> loading
> >> or
>  accessing facebook and hotmail inbox and
> others
> >> when I use
>  squid. I am using:
> >
> > I use google's public dns and our
> local isp
>  provider's.
> >
> > I tried to login to my hotmail account
> and got
> >> this:
> >
> > Squid Cache: Version 3.0.STABLE24
> > configure options:
> >> '--prefix=/usr/local/squid'
>  '--sysconfdir=/etc/squid'
> '--enable-delay-pools'
>  '--enable-kill-parent-hack'
> '--disable-htcp'
>  '--enable-default-err-language=Spanish'
>  '--enable-linux-netfilter'
> >> '--disable-ident-lookups'
>  '--localstatedir=/var/log/squid3.1'
> >> '--enable-stacktraces'
>  '--with-default-user=proxy'
> '--with-large-files'
>  '--enable-icap-client'
> '--enable-async-io'
>  '--enable-storeio=aufs'
> >> '--enable-removal-policies=heap,lru'
>  '--with-maxfd=32768'
> >
> > When I try accessing these pages
> without
> >> having to
>  pass through squid everything works fine.
> >
> > Does anyone have an idea of what can be
> causing
> >> this?
> 
>  Could you give any details about what the
> problems
> >> actually
>  are please?
> 
>  Have you tried a more recent squid
> release?
> >>>
> >>> Just installed version 3.1.9 and noticed this
> in the
> >> cache.log file:
> >>>
> >>>
> >>> 2010/11/11 23:19:45| IpIntercept.cc(137)
> >> NetfilterInterception:  NF
> getsockopt(SO_ORIGINAL_DST)
> >> failed on FD 117: (2) No such file or directory
> >> 
> >>>
> >>> What does that mean?
> >>>
> >>
> >> It means you have a NAT failure receiving those
> requests.
> >>
> >> Possibly that you are sending traffic directly to
> a NAT
> >> http_port from
> >> browsers configured to know about the proxy.
> >>
> > But, I'm running squid in transparent mode.
> 
> You are running Squid in NAT interception mode. That is
> what the old 
> "transparent" flag used to mean. These messages are
> generated when the 
> NAT system tables contain no information about the
> connected client machine.
>   It's not terribly critical, but shows that your
> proxy is open to a 
> couple of security problems from those machines.
> 
> Unlikely to be related to your connection problems.
> 
> 
> PS: if the client calls started just after you moved to
> 3.1.9 you may 
> have hit http://bugs.squid-cache.org/show_bug.cgi?id=3099 as
> well.
> Even if so I think that is not the problem you saw with 3.0
> though.

Well, the problems started prior to upgrading to 3.1.9. I just upgraded it a couple 
of hours ago. Now things are getting sluggish. As I mentioned earlier, I 
can't even use yahoo now. I don't know what's going on. I've restarted my 
modem, my gateway with squid, and my machine to see if that fixes the problem, but 
it hasn't, and I have no idea what can be causing it. All I know is that if I bypass 
squid things work normally. Even to reply to this email I had to bypass squid, 
because I wasn't able to send it through squid.





Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Amos Jeffries

On 12/11/10 15:44, Landy Landy wrote:

Amos.

Thanks for your quick reply.

I haven't tried a newer version yet. The problem started two days ago and I've 
been using that version for over a year now and it worked well.

--- On Thu, 11/11/10, Amos Jeffries wrote:


From: Amos Jeffries
On 12/11/10 15:11, Landy Landy
wrote:

Hello.

Our network is experiencing problems loading or
accessing facebook and hotmail inbox and others when I use
squid. I am using:


I use google's public dns and our local isp
provider's.


I tried to login to my hotmail account and got this:

Squid Cache: Version 3.0.STABLE24
configure options:  '--prefix=/usr/local/squid'

'--sysconfdir=/etc/squid' '--enable-delay-pools'
'--enable-kill-parent-hack' '--disable-htcp'
'--enable-default-err-language=Spanish'
'--enable-linux-netfilter' '--disable-ident-lookups'
'--localstatedir=/var/log/squid3.1' '--enable-stacktraces'
'--with-default-user=proxy' '--with-large-files'
'--enable-icap-client' '--enable-async-io'
'--enable-storeio=aufs' '--enable-removal-policies=heap,lru'
'--with-maxfd=32768'


When I try accessing these pages without having to
pass through squid everything works fine.

Does anyone have an idea of what can be causing this?


Could you give any details about what the problems actually
are please?


Noticed that hotmail sometimes just hangs after providing the username and 
password. People started calling today and are driving me crazy.

Also today I noticed this (Response not valid) when replying to a thread on 
dslreports.org:



While trying to process the request:

POST /speak/wisp?enc=L2ZvcnVtL3dpc3A%3D;really HTTP/1.1
Host: www.dslreports.com
Connection: keep-alive
Referer: http://www.dslreports.com/speak/wisp?enc=L2ZvcnVtL3dpc3A%3D
Content-Length: 2580
Cache-Control: max-age=0
Origin: http://www.dslreports.com
Pragma: no-cache
Content-Type: multipart/form-data; 
boundary=WebKitFormBoundarynmpKtsJY1cReuJwT
Accept: 
application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.7 (KHTML, 
like Gecko) Chrome/7.0.517.44 Safari/534.7
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: 
__utmz=260971928.1285198857.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); 
__utma=260971928.458537925.1285198857.1285422374.1285465733.3; 
dsl=6402462798:1587616; bbruid=1587616

The following problem has occurred:

Invalid Response.

The HTTP response message received from the contacted server could not be 
understood or was malformed. Please contact the operator of the web site. 
Perhaps your cache administrator can give you more details about the exact 
nature of the problem, if needed.

Your cache administrator is optimumwirel...@hotmail.com.



Things are not as they used to be. I checked the cache.log file and can't find 
anything there. What do you recommend me to do?



The POST is requesting "sdch" (aka binary diff encoding) responses. If 
you or any other proxy along that supply path are doing anything with 
ICAP besides straight AV scanning that could be corrupting the diffs.


The problem is in the response to that POST. The newer 3.1.9 logs what 
the problem is at debug level 1 ("debug_options ALL,1") including the 
URL for tracking.


With that info you can drill down into the processing area or a tcpdump 
log and find out what the response actually is.
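As a concrete illustration of that suggestion (the interface name and capture 
filter are assumptions; the filter simply narrows the capture to the site from 
the error above):

# squid.conf: level-1 debugging everywhere so the invalid-response reason is logged
debug_options ALL,1

# capture the upstream exchange for inspection, assuming eth0 faces the internet
tcpdump -i eth0 -s 0 -w dslreports.pcap host www.dslreports.com and tcp port 80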



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Amos Jeffries

On 12/11/10 17:30, Landy Landy wrote:


--- On Thu, 11/11/10, Amos Jeffries wrote:

From: Amos Jeffries
Subject: Re: [squid-users] Problems with hotmail and facebook
To: squid-users@squid-cache.org
Date: Thursday, November 11, 2010, 11:16 PM
On 12/11/10 16:22, Landy Landy wrote:

--- On Thu, 11/11/10, Amos Jeffries wrote:

From: Amos Jeffries
On 12/11/10 15:11, Landy Landy wrote:

Hello.

Our network is experiencing problems loading or accessing facebook and
hotmail inbox and others when I use squid. I am using:

I use google's public dns and our local isp provider's.

I tried to login to my hotmail account and got this:

Squid Cache: Version 3.0.STABLE24
configure options: '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid'
'--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
'--enable-default-err-language=Spanish' '--enable-linux-netfilter'
'--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
'--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
'--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--with-maxfd=32768'

When I try accessing these pages without having to pass through squid
everything works fine.

Does anyone have an idea of what can be causing this?

Could you give any details about what the problems actually are, please?

Have you tried a more recent squid release?

Just installed version 3.1.9 and noticed this in the cache.log file:

2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 117: (2) No such file or directory

What does that mean?

It means you have a NAT failure receiving those requests.

Possibly that you are sending traffic directly to a NAT http_port from
browsers configured to know about the proxy.

But, I'm running squid in transparent mode.


You are running Squid in NAT interception mode. That is what the old 
"transparent" flag used to mean. These messages are generated when the 
NAT system tables contain no information about the connected client machine.
 It's not terribly critical, but shows that your proxy is open to a 
couple of security problems from those machines.
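If browser-configured traffic really is landing on the interception port, the 
usual arrangement is to keep the two kinds of traffic on separate ports, 
roughly like the sketch below (the port numbers are illustrative only):

# squid.conf: one port for browsers that are configured with the proxy,
# and a separate port that only the firewall REDIRECT/DNAT rule points at
http_port 3128
http_port 3129 transparent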


Unlikely to be related to your connection problems.


PS: if the client calls started just after you moved to 3.1.9 you may 
have hit http://bugs.squid-cache.org/show_bug.cgi?id=3099 as well.

Even if so I think that is not the problem you saw with 3.0 though.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Landy Landy

--- On Thu, 11/11/10, Amos Jeffries  wrote:

> From: Amos Jeffries 
> Subject: Re: [squid-users] Problems with hotmail and facebook
> To: squid-users@squid-cache.org
> Date: Thursday, November 11, 2010, 11:16 PM
> On 12/11/10 16:22, Landy Landy
> wrote:
> >
> > --- On Thu, 11/11/10, Amos Jeffries  wrote:
> >
> >> From: Amos Jeffries
> >> On 12/11/10 15:11, Landy Landy
> >> wrote:
> >>> Hello.
> >>>
> >>> Our network is experiencing problems loading
> or
> >> accessing facebook and hotmail inbox and others
> when I use
> >> squid. I am using:
> >>>
> >>> I use google's public dns and our local isp
> >> provider's.
> >>>
> >>> I tried to login to my hotmail account and got
> this:
> >>>
> >>> Squid Cache: Version 3.0.STABLE24
> >>> configure options: 
> '--prefix=/usr/local/squid'
> >> '--sysconfdir=/etc/squid' '--enable-delay-pools'
> >> '--enable-kill-parent-hack' '--disable-htcp'
> >> '--enable-default-err-language=Spanish'
> >> '--enable-linux-netfilter'
> '--disable-ident-lookups'
> >> '--localstatedir=/var/log/squid3.1'
> '--enable-stacktraces'
> >> '--with-default-user=proxy' '--with-large-files'
> >> '--enable-icap-client' '--enable-async-io'
> >> '--enable-storeio=aufs'
> '--enable-removal-policies=heap,lru'
> >> '--with-maxfd=32768'
> >>>
> >>> When I try accessing these pages without
> having to
> >> pass through squid everything works fine.
> >>>
> >>> Does anyone have an idea of what can be causing
> this?
> >>
> >> Could you give any details about what the problems
> actually
> >> are please?
> >>
> >> Have you tried a more recent squid release?
> >
> > Just installed version 3.1.9 and noticed this in the
> cache.log file:
> >
> >
> > 2010/11/11 23:19:45| IpIntercept.cc(137)
> NetfilterInterception:  NF getsockopt(SO_ORIGINAL_DST)
> failed on FD 117: (2) No such file or directory
> 
> >
> > What does that mean?
> >
> 
> It means you have a NAT failure receiving those requests.
> 
> Possibly that you are sending traffic directly to a NAT
> http_port from 
> browsers configured to know about the proxy.
> 
But, I'm running squid in transparent mode.





Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Amos Jeffries

On 12/11/10 16:22, Landy Landy wrote:


--- On Thu, 11/11/10, Amos Jeffries  wrote:


From: Amos Jeffries
On 12/11/10 15:11, Landy Landy
wrote:

Hello.

Our network is experiencing problems loading or
accessing facebook and hotmail inbox and others when I use
squid. I am using:


I use google's public dns and our local isp
provider's.


I tried to login to my hotmail account and got this:

Squid Cache: Version 3.0.STABLE24
configure options:  '--prefix=/usr/local/squid'

'--sysconfdir=/etc/squid' '--enable-delay-pools'
'--enable-kill-parent-hack' '--disable-htcp'
'--enable-default-err-language=Spanish'
'--enable-linux-netfilter' '--disable-ident-lookups'
'--localstatedir=/var/log/squid3.1' '--enable-stacktraces'
'--with-default-user=proxy' '--with-large-files'
'--enable-icap-client' '--enable-async-io'
'--enable-storeio=aufs' '--enable-removal-policies=heap,lru'
'--with-maxfd=32768'


When I try accessing these pages without having to
pass through squid everything works fine.

Does anyone have an idea of what can be causing this?


Could you give any details about what the problems actually
are please?

Have you tried a more recent squid release?


Just installed version 3.1.9 and noticed this in the cache.log file:


2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 117: (2) No such file or directory




What does that mean?



It means you have a NAT failure receiving those requests.

Possibly that you are sending traffic directly to a NAT http_port from 
browsers configured to know about the proxy.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Landy Landy
Ok. Now, when I'm trying to log on to Yahoo via mail.yahoo.com I'm getting this:


The following error was encountered while trying to retrieve the URL: 
http://us.mc625.mail.yahoo.com/mc/welcome?

Zero Sized Reply

Squid did not receive any data for this request.

Your cache administrator is optimumwirel...@hotmail.com.

This is with the newer version 3.1.9.

Is my cache messed up? Is my configuration messed up?


  


Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Landy Landy

--- On Thu, 11/11/10, Amos Jeffries  wrote:

> From: Amos Jeffries 
> Subject: Re: [squid-users] Problems with hotmail and facebook
> To: squid-users@squid-cache.org
> Date: Thursday, November 11, 2010, 9:26 PM
> On 12/11/10 15:11, Landy Landy
> wrote:
> > Hello.
> >
> > Our network is experiencing problems loading or
> accessing facebook and hotmail inbox and others when I use
> squid. I am using:
> >
> > I use google's public dns and our local isp
> provider's.
> >
> > I tried to login to my hotmail account and got this:
> >
> > Squid Cache: Version 3.0.STABLE24
> > configure options:  '--prefix=/usr/local/squid'
> '--sysconfdir=/etc/squid' '--enable-delay-pools'
> '--enable-kill-parent-hack' '--disable-htcp'
> '--enable-default-err-language=Spanish'
> '--enable-linux-netfilter' '--disable-ident-lookups'
> '--localstatedir=/var/log/squid3.1' '--enable-stacktraces'
> '--with-default-user=proxy' '--with-large-files'
> '--enable-icap-client' '--enable-async-io'
> '--enable-storeio=aufs' '--enable-removal-policies=heap,lru'
> '--with-maxfd=32768'
> >
> > When I try accessing these pages without having to
> pass through squid everything works fine.
> >
> > Does anyone have an idea of what can be causing this?
> 
> Could you give any details about what the problems actually
> are please?
> 
> Have you tried a more recent squid release?

Just installed version 3.1.9 and noticed this in the cache.log file:


2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 117: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 120: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 121: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 123: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 124: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 126: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 127: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 129: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 132: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 134: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 137: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 139: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 141: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 143: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 145: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 150: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 151: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 152: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 155: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 158: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 159: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 162: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 164: (2) No such file or directory
2010/11/11 23:19:45| IpIntercept.cc(137) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on FD 168: (2) No such file or directory


What does that mean?





Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Landy Landy
Amos.

Thanks for your quick reply.

I haven't tried a newer version yet. The problem started two days ago and I've 
been using that version for over a year now and it worked well.


--- On Thu, 11/11/10, Amos Jeffries  wrote:

> From: Amos Jeffries 
> Subject: Re: [squid-users] Problems with hotmail and facebook
> To: squid-users@squid-cache.org
> Date: Thursday, November 11, 2010, 9:26 PM
> On 12/11/10 15:11, Landy Landy
> wrote:
> > Hello.
> >
> > Our network is experiencing problems loading or
> accessing facebook and hotmail inbox and others when I use
> squid. I am using:
> >
> > I use google's public dns and our local isp
> provider's.
> >
> > I tried to login to my hotmail account and got this:
> >
> > Squid Cache: Version 3.0.STABLE24
> > configure options:  '--prefix=/usr/local/squid'
> '--sysconfdir=/etc/squid' '--enable-delay-pools'
> '--enable-kill-parent-hack' '--disable-htcp'
> '--enable-default-err-language=Spanish'
> '--enable-linux-netfilter' '--disable-ident-lookups'
> '--localstatedir=/var/log/squid3.1' '--enable-stacktraces'
> '--with-default-user=proxy' '--with-large-files'
> '--enable-icap-client' '--enable-async-io'
> '--enable-storeio=aufs' '--enable-removal-policies=heap,lru'
> '--with-maxfd=32768'
> >
> > When I try accessing these pages without having to
> pass through squid everything works fine.
> >
> > Does anyone have an idea of what can be causing this?
> 
> Could you give any details about what the problems actually
> are please?

Noticed that hotmail sometimes just hangs after providing the username and 
password. People started calling today and are driving me crazy.

Also today I noticed this (Response not valid) when replying to a thread on 
dslreports.org:



While trying to process the request:

POST /speak/wisp?enc=L2ZvcnVtL3dpc3A%3D;really HTTP/1.1
Host: www.dslreports.com
Connection: keep-alive
Referer: http://www.dslreports.com/speak/wisp?enc=L2ZvcnVtL3dpc3A%3D
Content-Length: 2580
Cache-Control: max-age=0
Origin: http://www.dslreports.com
Pragma: no-cache
Content-Type: multipart/form-data; 
boundary=WebKitFormBoundarynmpKtsJY1cReuJwT
Accept: 
application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.7 (KHTML, 
like Gecko) Chrome/7.0.517.44 Safari/534.7
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: 
__utmz=260971928.1285198857.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); 
__utma=260971928.458537925.1285198857.1285422374.1285465733.3; 
dsl=6402462798:1587616; bbruid=1587616

The following problem has occurred:

Invalid Response.

The HTTP response message received from the contacted server could not be 
understood or was malformed. Please contact the operator of the web site. 
Perhaps your cache administrator can give you more details about the exact 
nature of the problem, if needed.

Your cache administrator is optimumwirel...@hotmail.com.



Things are not as they used to be. I checked the cache.log file and can't find 
anything there. What do you recommend me to do?

Even now, when I was trying to reply to this email, Yahoo just hung. I don't 
know if it is having problems with the AJAX code run on these pages. I had to use 
another internet connection, without squid, in order to send this message.

Again, thanks for taking the time to help.





Re: [squid-users] Problems with hotmail and facebook

2010-11-11 Thread Amos Jeffries

On 12/11/10 15:11, Landy Landy wrote:

Hello.

Our network is experiencing problems loading or accessing facebook and hotmail 
inbox and others when I use squid. I am using:

I use google's public dns and our local isp provider's.

I tried to login to my hotmail account and got this:

Squid Cache: Version 3.0.STABLE24
configure options:  '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid' 
'--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp' 
'--enable-default-err-language=Spanish' '--enable-linux-netfilter' 
'--disable-ident-lookups' '--localstatedir=/var/log/squid3.1' 
'--enable-stacktraces' '--with-default-user=proxy' '--with-large-files' 
'--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs' 
'--enable-removal-policies=heap,lru' '--with-maxfd=32768'

When I try accessing these pages without having to pass through squid 
everything works fine.

Does anyone have an idea of what can be causing this?


Could you give any details about what the problems actually are please?

Have you tried a more recent squid release?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] Problems with hotmail and facebook

2010-11-11 Thread Landy Landy
Hello.

Our network is experiencing problems loading or accessing facebook and hotmail 
inbox and others when I use squid. I am using:

I use google's public dns and our local isp provider's.

I tried to login to my hotmail account and got this:

Squid Cache: Version 3.0.STABLE24
configure options:  '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid' 
'--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp' 
'--enable-default-err-language=Spanish' '--enable-linux-netfilter' 
'--disable-ident-lookups' '--localstatedir=/var/log/squid3.1' 
'--enable-stacktraces' '--with-default-user=proxy' '--with-large-files' 
'--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs' 
'--enable-removal-policies=heap,lru' '--with-maxfd=32768'

When I try accessing these pages without having to pass through squid 
everything works fine. 

Does anyone have an idea of what can be causing this?

Thanks in advance for your help.


  


Re: [squid-users] Any tips on analysing squid crashes?

2010-11-11 Thread Amos Jeffries

On 12/11/10 04:02, Declan White wrote:

I have a squid that falls from the sky once a week. It is running with -C to 
provide core dumps; however, the core dumps contain only:

Solaris 2.9 pstack output:
core 'core.squid' of 14292: (squid) -YC -f /etc/squid3.conf
  7dba8ddc _kill (7e10dac8, 7e10cc38, 1127bc, 10604c, 
2, 7e10dac8) + 8
  7e0064d4 __1cH__CimplRdefault_terminate6F_v_ (7e10dac8, 
7e10cc38, 190298, 10604c, 104e40, 7e0064d0) + 4
  7e0062b4 __1cH__CimplMex_terminate6F_v_ (7e10de40, 0, 0, 
7e10de40, 7e10c978, 1) + 24
  7e007078 _ex_rethrow_body (7e10de40, 1, 0, 105984, 
7e4c0608, 100272ba8) + 88
  7e006fc8 __1cG__CrunKex_rethrow6F_v_ (100514d50, 1000c9640, 
100514d50, 7e10de40, 7e4c8528, 1) + 40
  00010014d0b4 $XDGBaHss9zzMWWh.__1cNSquidMainSafe6Fippc_i_ (4, 
7c18, 7c40, 0, 7e10dec8, 100514d50) + 94
  0001000714dc _start (0, 0, 0, 0, 0, 0) + 17c

This appears to be saying that a 'catch' handler in SquidMainSafe was
triggered, yet nothing appeared in the errorlog, and nothing in the stack
hints as to why it died.


One of those hex values should be the exception details. I'm not sure 
how the "core" utility displays them, though.


SquidMain also dumps to stderr any details about the original exception 
it can. That may be in your syslog at the time of death.
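One way to make sure that stderr output is captured is to run the proxy in the 
foreground with debugging sent to stderr and redirected to a file; the log path 
below is an assumption, so adapt it to however the daemon is normally started:

# -N keeps squid in the foreground, -d 1 sends level-1 debug output to stderr
squid -N -d 1 -f /etc/squid3.conf 2>> /var/log/squid_stderr.log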




Also does anyone have any rough estimate of CPU usage of squid3 vs squid2?
It looks like squid3 takes more CPU than squid2.

Declan


Please send the exact squid release version and details of this to squid-dev.
What were the last few cache.log entries before it restarted or stopped completely?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Squid compression in reverse mode

2010-11-11 Thread Amos Jeffries

On 12/11/10 10:39, Sébastien WENSKE wrote:

Hi All,

Below, is what I setup today:

browser <--- HTTPS ---> reverse proxy (squid 3.1.9) <--- HTTP ---> OWA 2010

All works fine, but I want to be able to compress data "on the fly" (text,
images...) between squid and the browsers (internet clients):

browser <--- HTTPS ---> [compression] reverse proxy (squid 3.1.9) <--- HTTP ---> OWA 2010


Has anyone already gotten this to work in this specific scenario?


There is an eCAP module available for compression.
http://wiki.squid-cache.org/Features/eCAP

People's results vary: for some it works, for some compression is much slower, 
and for others the adapter does not work at all. Please give feedback to the 
author so any bugs can be fixed.
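For reference, the squid.conf side of such a module looks roughly like the 
sketch below. The module path and the service URI are assumptions that depend 
entirely on the adapter you build, and the exact ecap_service argument order 
differs between 3.1 and 3.2, so check squid.conf.documented for your release:

# load the compiled eCAP adapter and attach it to responses before caching
loadable_modules /usr/local/lib/ecap_adapter_gzip.so
ecap_enable on
ecap_service gzip_service respmod_precache 1 ecap://example.com/ecap/services/gzip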



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] ACL problem, can not get never_direct to work.

2010-11-11 Thread Amos Jeffries

On 12/11/10 06:04, Dean Weimer wrote:

I think I am going nuts, because I can't see what I am doing wrong here. I am 
trying to send a group of domains through a parent proxy because the proxy 
forwarding them doesn't have direct access to the websites.  These ACL lines are 
before any others in the configuration, but the domains are still trying to go 
direct.

# The Parent Configuration
cache_peer 10.50.20.6 parent 8080 8181 name=PROXY3 no-query no-digest

#The ACL lines
acl InternalDNS dstdomain "/usr/local/squid/etc/internal.dns.acl"

## Put this in once to verify the above ACL was actually working for the 
domains
## http_access deny InternalDNS
## With above uncommented, I got access denied as expected

## Here is where I am doing something wrong, that I cannot figure out
never_direct allow InternalDNS
always_direct allow !InternalDNS
cache_peer_access PROXY3 allow InternalDNS
cache_peer_access PROXY3 deny all


That looks right from the child's perspective. Using always_direct as well as a 
cache_peer_access deny is a bit of overkill, but not too bad.


Use "debug_options 44,3" and see what the peer selection is doing.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Squid 3.2.0.2 installation - SUSE Enterprise 11.2

2010-11-11 Thread Amos Jeffries

On 12/11/10 05:50, viswanathan wrote:

Hi all,

While compiling Squid 3.2.0.2 on SUSE Enterprise, the following error occurs:



 * Please try 3.2.0.3 now.

 * Please send beta release issue to squid-dev.

 * The below looks like OpenSSL pieces are missing. 3.2.0.3 should pick 
this up and display a better error.



make[3]: Entering directory `/usr/local/visolve/squid-3.2.0.2/src/base'
/bin/sh ../../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H
-I../.. -I../../include -I../../src -I../../include -Wall
-Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT
-m64 -g -O2 -c -o AsyncCall.lo AsyncCall.cc
libtool: compile: g++ -DHAVE_CONFIG_H -I../.. -I../../include
-I../../src -I../../include -Wall -Wpointer-arith -Wwrite-strings
-Wcomments -Werror -pipe -D_REENTRANT -m64 -g -O2 -c AsyncCall.cc -fPIC
-DPIC -o .libs/AsyncCall.o
In file included from ../../src/squid.h:158,
from AsyncCall.cc:5:
../../src/ssl_support.h:58: error: expected constructor, destructor, or
type conversion before ‘*’ token
../../src/ssl_support.h:61: error: expected constructor, destructor, or
type conversion before ‘*’ token
../../src/ssl_support.h:74: error: ‘SSL’ was not declared in this scope
../../src/ssl_support.h:74: error: ‘ssl’ was not declared in this scope
../../src/ssl_support.h:77: error: typedef ‘SSLGETATTRIBUTE’ is
initialized (use __typeof__ instead)
../../src/ssl_support.h:77: error: ‘SSL’ was not declared in this scope



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] Squid compression in reverse mode

2010-11-11 Thread Sébastien WENSKE
Hi All,

Below, is what I setup today:

browser <--- HTTPS ---> reverse proxy (squid 3.1.9) <--- HTTP ---> OWA 2010

All works fine, but I want to be able to compress data "on the fly" (text,
images...) between squid and the browsers (internet clients):

browser <--- HTTPS ---> [compression] reverse proxy (squid 3.1.9) <--- HTTP ---> OWA 2010


Has someone already got this working in this specific scenario?

Many thanks,

Sebastien WENSKE




[squid-users] ACL problem, can not get never_direct to work.

2010-11-11 Thread Dean Weimer
I think I am going nuts, because I can't see what I am doing wrong here. I am
trying to send a group of domains through a parent proxy, because the proxy
forwarding them doesn't have direct access to the websites. These ACL lines are
before any others in the configuration, but the domains are still trying to go
direct.

# The Parent Configuration
cache_peer 10.50.20.6 parent 8080 8181 name=PROXY3 no-query no-digest

#The ACL lines
acl InternalDNS dstdomain "/usr/local/squid/etc/internal.dns.acl"

## Put this in once to verify the above ACL was actually working for the domains
## http_access deny InternalDNS
## With above uncommented, I got access denied as expected

## Here is where I am doing something wrong, that I cannot figure out
never_direct allow InternalDNS
always_direct allow !InternalDNS
cache_peer_access PROXY3 allow InternalDNS
cache_peer_access PROXY3 deny all


All sites in the ACL still attempt to go direct instead of forwarding to the 
parent

Running squid -k parse shows no errors.

Running squid -k reconfigure, the output in cache.log shows the parent was
configured:
2010/11/11 16:43:04| Configuring Parent 10.50.20.6/8080/8181
2010/11/11 16:43:04| Loaded Icons.
2010/11/11 16:43:04| Ready to serve requests.

No errors are present after this in the cache.log, but the access.log still 
shows the sites going direct:
1289494760.992   5408 10.100.10.9 TCP_MISS/000 0 GET http://www.orscheln.com/ - 
DIRECT/www.orscheln.com -

When I had the http_access deny line in place to verify the domains were
correctly being matched by the ACL:
1289493703.745  0 10.100.10.9 TCP_DENIED/403 2540 GET 
http://www.orscheln.com/ - NONE/- text/html

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


[squid-users] Squid 3.2.0.2 installation - SUSE Enterprise 11.2

2010-11-11 Thread viswanathan

hi all

while compiling squid 3.2.0.2 on SUSE Enterprise, the following error occurs:

make[3]: Entering directory `/usr/local/visolve/squid-3.2.0.2/src/base'
/bin/sh ../../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H 
-I../.. -I../../include -I../../src -I../../include -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT 
-m64 -g -O2 -c -o AsyncCall.lo AsyncCall.cc
libtool: compile: g++ -DHAVE_CONFIG_H -I../.. -I../../include 
-I../../src -I../../include -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Werror -pipe -D_REENTRANT -m64 -g -O2 -c AsyncCall.cc -fPIC 
-DPIC -o .libs/AsyncCall.o

In file included from ../../src/squid.h:158,
from AsyncCall.cc:5:
../../src/ssl_support.h:58: error: expected constructor, destructor, or 
type conversion before ‘*’ token
../../src/ssl_support.h:61: error: expected constructor, destructor, or 
type conversion before ‘*’ token

../../src/ssl_support.h:74: error: ‘SSL’ was not declared in this scope
../../src/ssl_support.h:74: error: ‘ssl’ was not declared in this scope
../../src/ssl_support.h:77: error: typedef ‘SSLGETATTRIBUTE’ is 
initialized (use __typeof__ instead)

../../src/ssl_support.h:77: error: ‘SSL’ was not declared in this scope
../../src/ssl_support.h:77: error: expected primary-expression before 
‘,’ token
../../src/ssl_support.h:77: error: expected primary-expression before 
‘const’

../../src/ssl_support.h:80: error: ‘SSLGETATTRIBUTE’ does not name a type
../../src/ssl_support.h:83: error: ‘SSLGETATTRIBUTE’ does not name a type
../../src/ssl_support.h:86: error: ‘SSL’ was not declared in this scope
../../src/ssl_support.h:86: error: ‘ssl’ was not declared in this scope
../../src/ssl_support.h:89: error: ‘SSL’ was not declared in this scope
../../src/ssl_support.h:89: error: ‘ssl’ was not declared in this scope
In file included from ../../src/squid.h:172,
from AsyncCall.cc:5:
../../src/structs.h:607: error: ISO C++ forbids declaration of ‘SSL_CTX’ 
with no type

../../src/structs.h:607: error: expected ‘;’ before ‘*’ token
../../src/structs.h:953: error: ISO C++ forbids declaration of ‘SSL_CTX’ 
with no type

../../src/structs.h:953: error: expected ‘;’ before ‘*’ token
../../src/structs.h:954: error: ISO C++ forbids declaration of 
‘SSL_SESSION’ with no type

../../src/structs.h:954: error: expected ‘;’ before ‘*’ token
make[3]: *** [AsyncCall.lo] Error 1
make[3]: Leaving directory `/usr/local/visolve/squid-3.2.0.2/src/base'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/local/visolve/squid-3.2.0.2/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/local/visolve/squid-3.2.0.2/src'
make: *** [all-recursive] Error 1

thanks for the help


-Viswa


Re: [squid-users] ACLs Implementation help

2010-11-11 Thread Amos Jeffries

On 12/11/10 04:08, Edmonds Namasenda wrote:

Thank you all.

On Thu, Nov 11, 2010 at 4:19 PM, Amos Jeffries  wrote:

On 12/11/10 01:22, Edmonds Namasenda wrote:


No continuous authentication required with every URL accessed or
re-directions once the first log-in is accepted.


Understood. That is not possible.

HTTP is by design stateless. Each single TCP connection being able to be
used identically by both a single end-user browser or a middleware proxy
serving multiple users. Even if you believe your end-users are all browsers
you will likely be wrong at some point.


That means every URL accessed will ask the user for a password. Then password
authentication by Squid is not advisable for corporate end users... it is an
inconvenience.


Amos


I believe I am a better squid administrator than when I joined. Throw me a bone!



Switch "users" with "browsers" and you have it right. There is a whole 
layer of software between squid and the people at the screen.


The browser is supposed to remember these things once the person has 
entered them. Or as in the case of Kerberos, to locate the credentials 
without bothering the person at all.



If you are seeing a browser repeatedly asking for a login then there is a
problem with the browser. Browsers can occasionally be tripped up by something
they do not like coming back from Squid. When that happens, some network
forensics are needed to figure out what's going on.
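
For example, a packet capture taken on the proxy port can be inspected for the
Proxy-Authenticate / Proxy-Authorization exchange (the interface and port here
are assumptions):

tcpdump -n -i eth0 -s 0 -w auth-loop.pcap 'tcp port 3128'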


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] smb_auth

2010-11-11 Thread Senthilkumar

Hi Team,

I am using Squid 2.7.STABLE7 and it is authenticating against Active Directory
using smb_auth.

AD has two organisational units: sales and workers.
I need to allow unfiltered access to sales and filtered access to workers.
Is this possible with smb_auth, and how can it be done?

Thanks
Senthil





Re: [squid-users] ACLs Implementation help

2010-11-11 Thread Edmonds Namasenda
Thank you all.

On Thu, Nov 11, 2010 at 4:19 PM, Amos Jeffries  wrote:
> On 12/11/10 01:22, Edmonds Namasenda wrote:
>>
>> No continuous authentication required with every URL accessed or
>> re-directions once the first log-in is accepted.
>
> Understood. That is not possible.
>
> HTTP is by design stateless. Each single TCP connection being able to be
> used identically by both a single end-user browser or a middleware proxy
> serving multiple users. Even if you believe your end-users are all browsers
> you will likely be wrong at some point.
>
That means every URL accessed will ask the user for a password. Then password
authentication by Squid is not advisable for corporate end users... it is an
inconvenience.
>
> Amos
>
I believe I am a better squid administrator than when I joined. Throw me a bone!


-- 
Thank you and kind regards,

I.P.N Edmonds

Cel:    +256 70 227 3374
       +256 71 227 3374

Y! / MSN: zibiced | GMail: namasenda | Skype: edsend


[squid-users] Any tips on analysing squid crashes?

2010-11-11 Thread Declan White
I have a squid that falls from the sky once a week. It is running with -C to
provide coredumps; however, the coredumps contain only:

Solaris 2.9 pstack output:
core 'core.squid' of 14292: (squid) -YC -f /etc/squid3.conf
 7dba8ddc _kill (7e10dac8, 7e10cc38, 1127bc, 10604c, 2, 
7e10dac8) + 8
 7e0064d4 __1cH__CimplRdefault_terminate6F_v_ (7e10dac8, 
7e10cc38, 190298, 10604c, 104e40, 7e0064d0) + 4
 7e0062b4 __1cH__CimplMex_terminate6F_v_ (7e10de40, 0, 0, 
7e10de40, 7e10c978, 1) + 24
 7e007078 _ex_rethrow_body (7e10de40, 1, 0, 105984, 
7e4c0608, 100272ba8) + 88
 7e006fc8 __1cG__CrunKex_rethrow6F_v_ (100514d50, 1000c9640, 100514d50, 
7e10de40, 7e4c8528, 1) + 40
 00010014d0b4 $XDGBaHss9zzMWWh.__1cNSquidMainSafe6Fippc_i_ (4, 
7c18, 7c40, 0, 7e10dec8, 100514d50) + 94
 0001000714dc _start (0, 0, 0, 0, 0, 0) + 17c

This appears to be saying that a 'catch' handler in SquidMainSafe was
triggered, yet nothing appeared in the error log, and nothing in the stack
hints as to why it died.
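
If the binary was built with debug symbols, a debugger may give a fuller trace
than pstack; for example (the binary path is an assumption):

gdb /usr/local/squid/sbin/squid core.squid
(gdb) bt full

# or with the Solaris modular debugger:
mdb /usr/local/squid/sbin/squid core.squid
> ::stack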

Also, does anyone have a rough estimate of the CPU usage of squid3 vs squid2?
It looks like squid3 takes more CPU than squid2.

Declan


Re: [squid-users] ACLs Implementation help

2010-11-11 Thread Amos Jeffries

On 12/11/10 01:22, Edmonds Namasenda wrote:

Yeah, I guess I am getting there.
Please look in-line...



How do I enforce password authentication ONLY ONCE for users to


What do you mean by "ONLY ONCE"? A user can be authenticated or not; there is
no multiple about it.

No continuous authentication required with every URL accessed or
re-directions once the first log-in is accepted.


Understood. That is not possible.

HTTP is by design stateless. Each single TCP connection being able to be 
used identically by both a single end-user browser or a middleware proxy 
serving multiple users. Even if you believe your end-users are all 
browsers you will likely be wrong at some point.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] ACLs Implementation help

2010-11-11 Thread Edmonds Namasenda
Yeah, I guess I am getting there.
Please look in-line...

>>
>> How do I enforce password authentication ONLY ONCE for users to
>
> What do you mean by "ONLY ONCE"? A user can be authenticated or not; there is
> no multiple about it.
No continuous authentication required with every URL accessed or
re-directions once the first log-in is accepted.
>
>> internet access using file "passt"?
>> http_access allow passt net_ed  ?!
>
> With the above Squid will pull the auth details sent by the browser out of
> the request. If there are none it will skip the access line.
>
> You place the ACL of type proxy_auth (in this case "passt") last on the line
> to make Squid request credentials from the browser.

acl passt proxy_auth REQUIRED # Last ACL line; passt = ncsa authentication file
?!

http_access allow passt net_ed # Last http_access line; net_ed = my network
?!
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.9
>  Beta testers wanted for 3.2.0.3
>



-- 
Thank you and kind regards,

I.P.N Edmonds


Re: [squid-users] forwarding specific request to other peer

2010-11-11 Thread Matus UHLAR - fantomas
On 09.11.10 17:04, balkris...@subisu.net.np wrote:
> There is a special requirement in my network: some specific websites need to
> be forwarded to a specific cache peer. I want YouTube requests arriving at one
> proxy (say "cache1") to be forwarded to another proxy (say "cache2"), and
> cache2 fetches the content and gives it back to cache1. (I need a solution
> without content-aware switching in between.)

I think a proper ACL and cache_peer_access should do that.
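
For example, on cache1 something like this should do it (cache2's hostname and
ports are assumptions):

acl youtube dstdomain .youtube.com
cache_peer cache2.example.net parent 3128 0 no-query name=cache2
cache_peer_access cache2 allow youtube
cache_peer_access cache2 deny all
never_direct allow youtube

The never_direct line stops cache1 from going direct for those sites.
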
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Warning (Slovak): I do NOT wish to receive any advertising mail at this address.
Depression is merely anger without enthusiasm. 


Re: [squid-users] ACLs Implementation help

2010-11-11 Thread Amos Jeffries

yay! :)

On 11/11/10 23:39, Edmonds Namasenda wrote:

Much appreciated for the previous help.
Some more clarification on the in-line requests below.
On Wed, Nov 10, 2010 at 2:38 PM, Amos Jeffries  wrote:


On 09/11/10 20:25, Edmonds Namasenda wrote:


Dear all.
Using openSuse 11.2 and Squid 3.0 Stable 18

Besides commenting out anything to do with 'localnet', below is all that
I added or edited on squid.conf

# Authentication Program
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd

# Start ACLs (bottom of ACL section defaults)
acl passt proxy_auth REQUIRED        # Authentication file to be used "passt"
acl net_ed src 10.100.10.0/24 192.168.7.0/24 10.208.6.0/24        # My networks
acl dove src 10.100.10.248-10.100.10.255        # Unrestricted Internet access I.P range
acl whrs1 time MTWHF 9:00-12:59        # Morning work shift
acl whrs2 time MTWHF 13:00-16:59        # Afternoon work shift


meant to be ...
acl whrs2 time MTWHF 14:00-16:59


acl nowww dstdomain "/etc/squid/noWWW"        # Inaccessible URLs file path
acl nodwnld urlpath_regex "/etc/squid/noDWNLD"        # Unavailable downloads file path

# End ACLs

# Start http_access Edits (top of http_access section defaults)
http_access allow dove        # Internet access without authentication, denied URLs or download restrictions
http_access deny nowww whrs1 whrs2        # Deny URLs during work shifts


Um, this means the rule matches only when the clock says it is simultaneously
both morning AND afternoon...

... to deny with an OR, combine the time periods into one ACL name or split the
http_access into two lines.


http_access deny nowww whrs1
http_access deny nodwnld whrs1
http_access deny nowww whrs2
http_access deny nodwnld whrs2
... works great so far as tested.
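
For completeness, the OR-combined alternative Amos mentions would look
something like this (a sketch reusing the values from this thread, with the
corrected 14:00 afternoon start):

acl workhours time MTWHF 09:00-12:59
acl workhours time MTWHF 14:00-16:59
http_access deny nowww workhours
http_access deny nodwnld workhours

Multiple acl lines sharing one name are OR-ed together, so either shift matches
the single workhours ACL.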


Amos


How do I enforce password authentication ONLY ONCE for users to


What do you mean by "ONLY ONCE"? A user can be authenticated or not;
there is no multiple about it.



internet access using file "passt"?
http_access allow passt net_ed  ?!


With the above Squid will pull the auth details sent by the browser out 
of the request. If there are none it will skip the access line.


You place the ACL of type proxy_auth (in this case "passt") last on the
line to make Squid request credentials from the browser.
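
In config terms that ordering would be something like this (a sketch reusing
the ACL names and networks quoted earlier in the thread):

acl net_ed src 10.100.10.0/24 192.168.7.0/24 10.208.6.0/24
acl passt proxy_auth REQUIRED
http_access allow net_ed passt

With the src ACL first, requests from outside net_ed never trigger an
authentication challenge from this line.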


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] ACLs Implementation help

2010-11-11 Thread Edmonds Namasenda
Much appreciated for the previous help.
Some more clarification on the in-line requests below.
On Wed, Nov 10, 2010 at 2:38 PM, Amos Jeffries  wrote:
>
> On 09/11/10 20:25, Edmonds Namasenda wrote:
>>
>> Dear all.
>> Using openSuse 11.2 and Squid 3.0 Stable 18
>>
>> Besides commenting out anything to do with 'localnet', below is all that
>> I added or edited on squid.conf
>>
>> # Authentication Program
>> auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd
>>
>> # Start ACLs (bottom of ACL section defaults)
>> acl passt proxy_auth REQUIRED        # Authentication file to be used
>> "passt"
>> acl net_ed src 10.100.10.0/24  192.168.7.0/24
>>  10.208.6.0/24         # My
>> networks
>> acl dove src 10.100.10.248-10.100.10.255        # Unrestricted Internet
>> access I.P range
>> acl whrs1 time MTWHF 9:00-12:59        # Morning work shift
>> acl whrs2 time MTWHF 13:00-16:59        # Afternoon work shift

meant to be ...
acl whrs2 time MTWHF 14:00-16:59

>> acl nowww dstdomain "/etc/squid/noWWW"        # Inaccessible URLs file path
>> acl nodwnld urlpath_regex "/etc/squid/noDWNLD"        # Unavailable
>> downloads file path
>>
>> # End ACLs
>>
>> # Start http_access Edits (top of http_access section defaults)
>> http_access allow dove        # Internet access without authentication,
>> denied URLs or download restrictions
>> http_access deny nowww whrs1 whrs2        # Deny URLs during work shifts
>
> Um, this means that when the clock says simultaneously that it is both 
> morning AND afternoon...
>
> ... to deny with an OR combine the time periods into one ACL name or split 
> the http_access into two lines.

http_access deny nowww whrs1
http_access deny nodwnld whrs1
http_access deny nowww whrs2
http_access deny nodwnld whrs2
... works great so far as tested.

> Amos

How do I enforce password authentication ONLY ONCE for users to
internet access using file "passt"?
http_access allow passt net_ed  ?!


--
Thank you and kind regards,

I.P.N Edmonds

Cel:    +256 70 227 3374
       +256 71 227 3374

Y! / MSN: zibiced | GMail: namasenda | Skype: edsend


Re: [squid-users] Re: Access control problem

2010-11-11 Thread Amos Jeffries

On 10/11/10 05:15, mrmmm wrote:



Amos Jeffries-2 wrote:


Your initial message said "among other stuff I have". The conclusion
then has to be that somewhere in that other stuff is http_access rules
which bypass the ones you mentioned here.

Amos
--



By "other stuff" i mean specific deny entries.


*All* of them? Not one single "http_access allow" somewhere up top?


Most of them are of type "URL
Regexp" and have just one entry per rule, and they work fine. It seems that
the problem is that for some reason when I have multiple entries per rule (a
file with a list of sites) it is not denying them properly. However, it does
appear that Squid reads them because I put a couple of duplicates in the
file (on purpose) and when squid loads I get the message:

WARNING: '.resize.yandex.net' is a subdomain of 'resize.yandex.net'
WARNING: because of this 'resize.yandex.net' is ignored to keep splay tree
searching predictable
WARNING: You should probably remove '.resize.yandex.net' from the ACL named
'denybadsites'



Oooh. You were talking about a regex ACL not working, but then provided an
example of a dstdomain error. Mixing or crossing the two pattern types could
be the source of your failure.
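
Keeping the two pattern types in separate ACLs and separate files is the usual
layout, roughly like this (file paths are assumptions; only plain domains go
in the dstdomain file, only regex patterns in the url_regex file):

acl denybadsites dstdomain "/etc/squid/denybadsites.domains"
acl denybadurls url_regex -i "/etc/squid/denybadurls.regex"
http_access deny denybadsites
http_access deny denybadurls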


What is your *full* config please? along with the output of "squid -v"

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Re: Multiple NICs

2010-11-11 Thread Amos Jeffries

On 10/11/10 01:39, Nick Cairncross wrote:


Hi!

I wouldn't think you need multiple network cards to use squid, unless
your internet connection is at or above 1 Gb/s. If your ISP provides you
less, I would think a regular gigabit NIC would do the job.
Your hard drives probably won't be fast enough to cache data on multiple
NICs anyway.

We have over 1000 clients, and in the previous setup we used we had only one
1 Gb network interface on our squid. It was sitting in the DMZ, and the
connections went through it.
It was fine. We had no connection problems.

Tibby

Feladó: Nick Cairncross

Hi list,

I'm looking at building a couple more 3.1.8 servers on RHEL 5.5 x86. The
servers are nicely high-powered and have multiple Gb NICs (4 in total). My
previous proxy server (Bluecoat) had two NICs. I understand that one was
used to listen to requests and send to our upstream accelerator, and one
was used if the equivalent 'send direct' was used, i.e. bypass the
accelerator. Can the list offer any thoughts or recommendations about the
best way to utilise the NICs for best performance? Can I achieve the same
outbound as above? Should I even bother trying to do this? User base
would be about 700 users; I'm not caching. Simple ACLs but with two
authentication helpers (depending on browser).

Cheers
Nick


Thanks Tibby for your input - sounds sensible. The net connection is fast and
wide, so Gb should be OK.

In that case, another question for the list: seeing as I'm not doing ANY
caching at all and am just proxying traffic, are there any recommendations for
squid.conf settings that might optimise my users' experience (other than
caching..).


Why such a thing against caching?
That is the #1 speed gain (about 3-4 orders of magnitude faster to fetch 
something from RAM cache than the network).



I have fast ACLs where possible in place and my squid.conf is
as below. I'm looking for any tips on maximising memory, processes etc


memory maximization is almost all about caching.

That is besides the cache_mem used for in-transit objects (your "cache deny all"
causes them to be dropped when completed, even if they need fetching again
immediately).


You could also possibly check and tune the DNS ipcache/fqdncache sizes 
for more entries, and bump the auth cache size up enough to hold all 
your user credentials.
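
A sketch of that kind of tuning (the values are illustrative only, not
recommendations):

cache_mem 1024 MB        # memory for hot and in-transit objects
ipcache_size 4096        # DNS name-to-IP cache entries
fqdncache_size 4096      # reverse (IP-to-name) cache entries
authenticate_ttl 1 hour  # how long validated credentials are kept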



from within the squid.conf so that the end user has as quick an experience
as possible - Are there any other tags I should look at using? Server spec
is a single cpu Xeon X5660 @ 2.8, 6gb 1333 ram, 250 gb R1



To start with, TMF (The Measurement Factory) are looking into a few
things right now with regard to the speed of 3.1. There are likely to be
some extra speed patches in 3.1.10 next month.


The first one is the default buffer size in src/MemBuf.h (currently
2*1024, which could be upped to 64*1024 for bigger network reads).



===
http_port 8080

auth_param negotiate program /usr/lib/squid/squid_kerb_auth -r
auth_param negotiate children 80
auth_param negotiate keep_alive on

auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60
auth_param ntlm keep_alive on

auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic children 20
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

cache_peer [upstream] parent 8080 0 no-query proxy-only no-digest default

cache_mgr [blanked]
cachemgr_passwd [blanked] all
client_persistent_connections on
#server_persistent_connections on
persistent_connection_after_error on

access_log /var/log/squid/access.log squid
cache_store_log none squid
cache_log /var/log/squid/cache.log squid

## Delay Pool Definitions ###

# Total number of delay pools
delay_pools 1

## ACCESS CONTROL LISTS ##

## USER-AGENT (Browser-type) ACLs
acl Java_jvm browser "/etc/squid/ACL/USERAGENTS/USER-AGENTS_JAVA.txt"
acl iTunes browser "/etc/squid/ACL/USERAGENTS/USER-AGENTS_APPLE.txt"
acl MSNMessenger browser "/etc/squid/ACL/USERAGENTS/USER-AGENTS_MSN.txt"


## USER AUTHENTICATION ACLs
acl AuthenticatedUsers proxy_auth REQUIRED

## URL DESTINATION ACLs
acl URL_ALLOWDstDomains dstdomain
"/etc/squid/ACL/URL/URL_ALLOWDstDomains.txt"

## IP ACLS ##
acl CNP_SERVERIP src 172.16.10.176
acl CNP_SERVERIP src 172.16.100.50
acl CNP_CLIENTIP src "/etc/squid/ACL/IPADDRESSES/IP_CLIENTIP.txt"

## Windows Update ACLS
acl WSUS_IP src 172.16.10.127

# LAN IP ACLs
acl CNP_172SUBNETS src 172.16.0.0/16
acl CNP_SERVERSUBNETS src 172.16.10.0/24
acl CNP_SERVERSUBNETS src 172.16.100.0/24

# Blocks CONNECT method to IP addresses (Blocks Skype amongst other things)
acl StopDirectIP url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+


dstdom_regex would do this against an often much shorter string (just the
domain) being duplicated and sent out for regex scanning.
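
The dstdom_regex form of that ACL would be something like this (a sketch; it
is matched against only the host part of the URL rather than the full URL):

acl StopDirectIP dstdom_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$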




# MSN Messenger Allow IP ACL
acl IP_MSNMessenger src "/etc/squid/ACL/IPADDRESSES/IP_MSNMESSENGER.txt"

# SEND DIRECT ACLs
acl SENDDIRECT_DstDomains dstdomain
"/etc/squid/ACL/SENDDI

Re: [squid-users] Squid capabilities: several questions

2010-11-11 Thread Amos Jeffries

On 10/11/10 05:03, Jordi Espasa Clofent wrote:

Hi,

I have to design and implement a proxy in a complex production
environment; I used Squid some time ago (3 years), so I'm thinking of
using it again. First of all, I need to know:

Let's suppose that the squid box has 3 NICs: 2 external, connected to
two Internet DSLs, and 1 internal, which receives the proxy clients.

// Can I choose to use one or another external NIC (different DSL)
according to Squid rules about protocols? I mean, for example:

- all the clients who use http, DSL_1
- all the clients who use ftp, DSL_2

// Can I even choose to use one or another external NIC (different DSL)
according to Squid rules about user auth? I mean, for example:

- users A,B and C use DSL_1
- users D,E and F, use DSL_2


Routing of packets is the business of the OS not Squid.

You can use Squid ACLs to determine the outgoing IP address, TOS value 
or (on Linux) Netfilter MARK sent by Squid.


Additional configuration of the operating system has to be done to use 
those details to actually route the traffic out the appropriate NIC.
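
On the Squid side, a sketch of that (the addresses are placeholders for the
two DSL-facing NICs):

acl ftp_traffic proto FTP
tcp_outgoing_address 192.0.2.10 ftp_traffic
tcp_outgoing_address 198.51.100.10 !ftp_traffic

tcp_outgoing_tos (or the Netfilter-mark equivalent mentioned above) tags
traffic the same way; the operating system's policy routing then decides
which DSL it actually leaves on.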




// In direct relation to the previous question: can Squid validate
users against Win$$ Active Directory?


Yes. Squid bundles several auth helpers for various AD interface methods 
and auth protocols. There are third-party helpers as well from Samba.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: Fwd: Re: [squid-users] Re: Bandwidth split?

2010-11-11 Thread Amos Jeffries

On 11/11/10 20:52, J Webster wrote:

To start off simply and just get the limit working, can I use this:
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 125000/125000
delay_access 1 allow all

That should limit all connections to 1 Mbps.
I have seen varying lines for the last one, ranging from allow all to deny
all, and webmin doesn't even put in that line at all.
After that, I would like to add in the regexes one by one if it starts
limiting the server.
Will the above just limit by IP connection?


Yes. That will make separate buckets for each individual client IP, with 
no grouping.
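
If a single shared cap for all clients were wanted instead, a class 1
(aggregate) pool would look like this, reusing the 1 Mbps figure from above:

delay_pools 1
delay_class 1 1
delay_parameters 1 125000/125000
delay_access 1 allow all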



So, I don't need to bother cross-checking the access of the ncsa_users?
Only ncsa_users have access to the server anyway.


Yes, these delay_pools limit how much data gets read for sending to the 
client. So only clients who have proxy access can be delayed.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3