Re: [squid-users] Re: Want to create SQUID mesh, but force certain URLs to be retrieved by only one Proxy

2009-04-08 Thread Pandu E Poluan

Without allow-miss, I get the error:

*Valid document was not found in the cache and only-if-cached directive 
was specified.*


Strangely, doing the same on ProxyC causes an Access Denied error...

Rgds

[p]

Amos Jeffries wrote:

Pandu E Poluan wrote:

Okay, some experimentations I made:

I added the following lines on ProxyB:

# lines from Amos' tip
acl fastsites dstdomain .need-fast-inet.com
acl fastsites dstdomain .another-need-fast-inet.com
never_direct allow fastsites

Changes on ProxyA:

# lines from Amos' tip
acl fastsites dstdomain .need-fast-inet.com
acl fastsites dstdomain .another-need-fast-inet.com
# also from Amos' tip
miss_access allow fastsites
miss_access deny siblings
miss_access allow all
# and this one from Amos' tip
always_direct allow fastsites

My browser can't access .need-fast-inet.com

I further changed the following lines on ProxyB:

# added weight=2 allow-miss
cache_peer   ProxyA   sibling   3128   4827   htcp weight=2 allow-miss
# added the following line
neighbor_type_domain ProxyA parent .need-fast-inet.com .another-need-fast-inet.com


Now, I can access .need-fast-inet.com through ProxyB.

But, isn't that allow-miss dangerous?

Any comments?



It's dangerous to use it widely. And particularly on both ends of the 
peering link (ie DONT place it in proxyA config for proxyB/C).


It's safe to do on a one-way link. The miss_access controls you have 
in place at each of your Squids perform explicitly the same actions, so 
AFAIK you should not hit any of the loop cases that may occur.


Test without the 'allow-miss' option though.  I believe the setting 
neighbor_type_domain disables it more specifically for the objectX 
requests via the change to parent link.


Amos



Rgds.


[p]


Pandu E Poluan wrote:

Hmmm... strange...

Now, instead of accessing the site objectX, ProxyB and ProxyC users 
can't access the site at all...


But no SQUID error page shows up... the browser simply times out... 
Accessing URLs other than objectX still works...


objectX is accessible via ProxyA, though.

The changes I made currently:

On ProxyA:

acl objectX dstdomain ...
miss_access allow objectX
always_direct allow objectX

On ProxyB/C:

acl objectX dstdomain ...
never_direct allow objectX

I'll experiment with the settings... maybe also miss_access allow 
objectX on ProxyB and ProxyC?



Rgds.



Pandu E Poluan wrote:

Aha! Thanks a lot, Amos  :-)

I have been suspicious all along that the solution uses miss_access 
and never_direct ... but never saw an example anywhere.


Again, much thanks!

** rushes to his proxies to configure them **


Rgds.


[p]


Amos Jeffries wrote:

Pandu E Poluan wrote:
The URL is allowed to be accessed by everyone, ProxyA-users, and 
ProxyB/C-users alike.


I just want the URL to be retrieved by ProxyA, because accessing 
that certain URL through ProxyB/C is too damn slow (pardon the 
language).



Rgds.



Okay. Thought it might be something like that, just wanted to be 
sure before fuzzing the issue.


You will need to create an ACL just for this URL (and others you 
want to do the same to).

 acl objectX ...


proxyA needs to allow peers past the miss_access block.

proxyA:
 miss_access allow objectX
 miss_access deny siblings
 miss_access allow all


siblings must never go direct to the object (always use their 
parent peer)


proxyB/proxyC:
  never_direct allow objectX

Amos



Amos Jeffries wrote:

Pandu E Poluan wrote:

Anyone care to comment on my email?

And another question: Is it possible to use miss_access with a 
dstdomain acl?



Rgds.


Pandu E Poluan wrote:

Hi,

I want to know if there is a way to force a URL to be retrieved 
by only a certain proxy, while ensuring that meshing works.


Here's the scenario:

I have a ProxyA ==> connects to the Internet via a fast connection, 
InetFast.
This proxy is used by a group of users that really need a fast 
connection.


I have other proxies, ProxyB & ProxyC ==> connect to the Internet 
via a slower connection, InetSlow.

These proxies are used by the rest of the staff.

I configured them all as siblings, with miss_access blocking 
MISS requests between them, e.g.


# Configuration snippet of ProxyA
cache_peer ProxyB sibling 3128 4827 htcp
cache_peer ProxyC sibling 3128 4827 htcp
acl siblings src ProxyB
acl siblings src ProxyC
miss_access deny siblings
miss_access allow all

ProxyB & ProxyC both have similar configs.

( The aim is to 'assist' the other staffers using InetSlow, so that 
whatever has been retrieved by the InetFast users is made 
available to the rest of the staff )


Now, let's say there's this URL http://www.need-fast-inet.com/ 
that I want to be retrieved exclusively by ProxyA.


How would I configure the peering relationships?


If you can state the problem and the desired setup clearly in 
single-sentence steps you have usually described the individual 
config settings needed.


Is the URL allowed to be fetched by the slow users through 
proxyB into proxyA and then the Internet?





Amos











--
*Pandu E Poluan*

Re: [squid-users] Re: Want to create SQUID mesh, but force certain URLs to be retrieved by only one Proxy

2009-04-08 Thread Amos Jeffries

Pandu E Poluan wrote:

Without allow-miss, I get the error:

*Valid document was not found in the cache and only-if-cached directive 
was specified.*


Okay, best use it then. Should be safe enough in your setup.

Amos




Re: [squid-users] Re: Want to create SQUID mesh, but force certain URLs to be retrieved by only one Proxy

2009-04-08 Thread Pandu E Poluan

Ah... I found out the problem (as to the Access Denied error)...

Somehow I've forgotten to include ProxyC in an http_access statement in 
ProxyA...


All is well now...

Thanks for your kind assistance, Amos! :-)

Rgds

[p]

Amos Jeffries wrote:

Okay, best use it then. Should be safe enough in your setup.

Amos
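
Pulling the thread together, the working setup is roughly the following (a sketch assembled from the posts above -- the hostnames, ports, and the weight=2 value are the thread's own; treat it as illustrative rather than canonical):

# ProxyA (fast link): let siblings MISS through it, but only for the fast sites
acl fastsites dstdomain .need-fast-inet.com
acl fastsites dstdomain .another-need-fast-inet.com
acl siblings src ProxyB
acl siblings src ProxyC
# the missing piece found at the end of the thread: ProxyB *and* ProxyC
# must also be permitted by an http_access rule on ProxyA
http_access allow siblings
miss_access allow fastsites
miss_access deny siblings
miss_access allow all
always_direct allow fastsites

# ProxyB / ProxyC (slow link): treat ProxyA as a parent for the fast sites
acl fastsites dstdomain .need-fast-inet.com
acl fastsites dstdomain .another-need-fast-inet.com
never_direct allow fastsites
cache_peer ProxyA sibling 3128 4827 htcp weight=2 allow-miss
neighbor_type_domain ProxyA parent .need-fast-inet.com .another-need-fast-inet.com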

[squid-users] FreeBSD - Squid 2.7 - Transparent

2009-04-08 Thread Vivek

Hi All,



I am trying to use squid 2.7 on a FreeBSD machine. But there is no 
--enable-ipfw-transparent option available to configure squid in 
transparent mode. How can we enable transparent mode when configuring 
squid?




Regards

Vivek






[squid-users] About --enable-removal-policies='heap lru'

2009-04-08 Thread Pandu E Poluan

--enable-removal-policies='heap lru'

Does that mean only the heap LRU method is supported, or lru plus all 
three heap xxx methods?


Thanks.

[p]

--
*Pandu E Poluan*
*Panin Sekuritas*
IT Manager / Operations & Audit
Phone :  +62-21-515-3055 ext 135
Fax :    +62-21-515-3061
Mobile : +62-856-8400-426
e-mail : pandu_pol...@paninsekuritas.co.id
Y!M :    hands0me_irc
MSN :    si-gant...@live.com
GTalk :  pandu.ca...@gmail.com


[squid-users] CONNECT method support(for https) using squid3.1.0.6 + tproxy4

2009-04-08 Thread Mikio Kishi
Hi, all

Now, I evaluate the squid3.1.0.6 + tproxy4 environment like the
following network.

             (1)                     (2)

              |                       |
   +------+   |     +------------+    |    +---------+
   |WWW   +---+     |   squid    |    +----+  WWW    |
   |Client|.2 |   .1|  + tproxy  |.1  |  .2|  Server |
   +------+   +-----+ (tcp/8080) +----+    |(tcp/443)|
              |     +------------+    |    |(tcp/80) |
              |                       |    +---------+
        192.168.0.0/24          10.0.0.0/24

  (1) 192.168.0.2 --> 192.168.0.1:8080
  (2) 192.168.0.2 --> 10.0.0.2:80

HTTP communication is completely OK!
But in the HTTPS (CONNECT method) case

  (1) 192.168.0.2 --> 192.168.0.1:8080
  (2) 192.168.0.2 --> 10.0.0.2:443

the following error occurred.

 commBind: Cannot bind socket FD 12 to 192.168.0.2: (99) Cannot
   assign requested address

I think that tunnelStart() in tunnel.cc doesn't support COMM_TRANSPARENT:

 tunnelStart(ClientHttpRequest * http, int64_t * size_ptr, int* status_ptr)
 {
     ... snip ...
     sock = comm_openex(SOCK_STREAM,
                        IPPROTO_TCP,
                        temp,
                        COMM_NONBLOCKING,  // need COMM_TRANSPARENT
                        getOutgoingTOS(request),
                        url);
     ... snip ...

What do you think ?

--
Sincerely,
Mikio Kishi


[squid-users] Complex Reverse Proxy setup

2009-04-08 Thread schwermie

At our company we have a complex setup.

external URL             internal URL
www.example.com/        -> www.example.com
www.example.com/subdir  -> www.example2.com
www.webserver.com       -> www.example.com/webserver
www.server.com:1800     -> www.server.com:1800

Could someone help me? I don't know how to create a config for this
scenario.

Thanks...
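
For the first mapping above, the usual squid.conf reverse-proxy building blocks look roughly like this (a sketch only; the peer name and the internal hostname are placeholders, not from the post):

http_port 80 accel vhost
cache_peer internal-www.example.com parent 80 0 no-query originserver name=example_peer
acl site_example dstdomain www.example.com
cache_peer_access example_peer allow site_example
cache_peer_access example_peer deny all

The rows that move content between URL paths (/subdir, /webserver) would additionally need a url_rewrite_program helper, since cache_peer alone does not rewrite paths.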



Re: [squid-users] FreeBSD - Squid 2.7 - Transparent

2009-04-08 Thread Leslie Jensen


 HI All,



 I am trying to use squid 2.7 in FreeBSD machine. But there is no option
 available  --enable-ipfw-transparent  for configure the squid in
 transparent mode. How can we enable transparent mode when configuring
 squid?.



 Regards

 Vivek



Before you compile, do make config!
/Leslie





Re: [squid-users] FreeBSD - Squid 2.7 - Transparent

2009-04-08 Thread Vivek

My question is simple.

Based on the instructions given by 
http://wiki.squid-cache.org/ConfigExamples/Intercept/FreeBsdIpfw we 
should use the --enable-ipfw-transparent option when configuring 
squid.

But the above option is not available in squid 2.7. Is there any 
alternative for that?




Regards

Vivek



-----Original Message-----
From: Leslie Jensen les...@eskk.nu
To: Vivek vivek...@aol.in
Cc: squid-users@squid-cache.org; hen...@henriknordstrom.net; squ...@treenet.co.nz
Sent: Wed, 8 Apr 2009 5:01 pm
Subject: Re: [squid-users] FreeBSD - Squid 2.7 - Transparent

 I am trying to use squid 2.7 on a FreeBSD machine. But there is no
 --enable-ipfw-transparent option available to configure squid in
 transparent mode. How can we enable transparent mode when configuring
 squid?

Before you compile, do make config!
/Leslie



RE: [squid-users] Custom error page based on IP.

2009-04-08 Thread Palmer J.D.F.
Sorry for the somewhat large delay in replying to you, I have been on
longish-term sick leave.
However I've just returned and have sussed this out.

Firstly I added the following rules to squid.conf.

acl swan src 123.45.0.0/16           # The campus subnet, which was already defined in squid.conf
.
deny_info ERR_EXTERNAL_IP not swan   # if client's source IP is not in swan subnet then instantiate error page
acl www dst 123.45.67.89             # campus www server holding the instruction page
http_access allow www !swan          # allows access to web server from IPs that are outside of swan subnet
http_access deny !swan               # deny src IPs outside the swan subnet

Then created a custom error file (ERR_EXTERNAL_IP) which contains a
redirect to the page on the campus webserver.

If you don't allow the access to the campus web server, you get a
recursive deny and all gets a bit messy.

Simples!

Cheers,
Jezz.


 -Original Message-
 From: John Doe [mailto:jd...@yahoo.com]
 Sent: 13 February 2009 09:58
 To: Palmer J.D.F.
 Subject: Re: [squid-users] Custom error page based on IP.
 
 
 From: Palmer J.D.F. j.d.f.pal...@swansea.ac.uk
  Is it possible to have a custom error page that is displayed only
 when a
  client machine tries to connect to our squid caches from outside our
  subnet?
 
  We have a lot of users & visitors that use their machines on site,
 but
  also off site on other networks; occasionally these users try to
 proxy
  via our cache from off site networks outside our subnet; we have
acls
 in
  place that prevent remote proxying, but as it is they just get an
 Access
  Denied error.
  If possible I'd like to replace this error with an explanation and
  instructions on how to re-configure their browser.
 
  As far as I can tell the same Access Denied error
(ERR_ACCESS_DENIED)
 is
  displayed for a multitude of reasons, hence not viable to just edit
 the
  existing error; is it possible to have a different error just for
 this
  scenario?
 
 Maybe you could use url rewrites to forward them to a specific web
page
 that would explain why they cannot use the proxy from outside...
 
 JD
 
 
 



[squid-users] ident auth problem with squid 3.1.0.6

2009-04-08 Thread michael.kastin...@spar.at
Hi!

Currently we are testing the new squid version 3.1.0.6. Generally the squid is 
working fine, but we have a problem with authenticating users with ident.

cut of squid.conf:

http_port 3128
ident_lookup_access allow all
acl CONNECT method CONNECT
acl all src all
acl permit_user ident "/usr/local/config-squid/etc/permit_user1"

http_access allow CONNECT
http_access allow manager localhost
http_access allow manager cachemanager
http_access deny manager
http_access allow messenger
http_access allow permit_user
http_access deny all

http_reply_access allow all

icp_access allow all

But on every request, when squid tries to connect to the user's ident port, 
there is the message

commBind: Cannot bind socket FD 12 to 172.31.19.100:3128: (98) Address already 
in use 

in cache.log and the user gets access denied.

lsof is showing, that no other processes are using this port.

# lsof -i tcp:3128
COMMAND   PID USER   FD   TYPE   DEVICE SIZE NODE NAME
squid   28992 suqid   55u  IPv4 31976219   TCP *:squid (LISTEN)
#

Strace shows that something is wrong with opening the socket on the 
right port:

9.1.4.9 (client ip)
172.31.19.100 (server ip)

accept(55, {sa_family=AF_INET, sin_port=htons(24395), 
sin_addr=inet_addr("9.1.4.9")}, [16]) = 10
getsockname(10, {sa_family=AF_INET, sin_port=htons(3128), 
sin_addr=inet_addr("172.31.19.100")}, [16]) = 0
...

bind(12, {sa_family=AF_INET, sin_port=htons(3128), 
sin_addr=inet_addr("172.31.19.100")}, 16) = -1 EADDRINUSE (Address already in 
use)

Why does squid bind its ident connection to port 3128? 
The same config is working fine with squid 2.7 without any troubles.

Is this a known issue with squid 3.1.0.6 ? does anyone have the same issue?

Thanks for help!

br
Mike
---
SPAR Österreichische Warenhandels-AG
Hauptzentrale
A - 5015 Salzburg, Europastrasse 3
FN 34170 a
 
Tel: +43 662 4470 24245
Mobile: +43 664 8159150
E-Mail: michael.kastin...@spar.at
Internet: http://www.spar.at
 
---


[squid-users] Squid Host header rewriting

2009-04-08 Thread Juha Luoma
Hi,

Squid rewrites the host header as follows:

   GET http://194.137.237.63/uutiset/ HTTP/1.1\r\n
   Host: www.hs.fi\r\n

   -->

   GET /uutiset/ HTTP/1.0\r\n
   Host: 194.137.237.63\r\n

Why is that? How to pass on the original Host header in this case?

Thanks,

 - Juha


[squid-users] About --enable-removal-policies='heap lru'

2009-04-08 Thread Mehmet ÇELiK


You can try --enable-removal-policies=heap,lru. 
That way you will have all of them supported.


add to squid.conf
cache_replacement_policy heap GDSF
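
With both families compiled in, any of the following is then valid (this list comes from the standard squid.conf documentation, not from the original mail):

cache_replacement_policy lru
cache_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_replacement_policy heap LRU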

Regards..

--
Mehmet CELIK 

Date: Wed, 8 Apr 2009 17:22:41 +0700
From: pandu_pol...@paninsekuritas.co.id
To: squid-users@squid-cache.org
Subject: [squid-users] About --enable-removal-policies='heap lru'

--enable-removal-policies='heap lru'

Does that mean only heap LRU method supported, or lru and all three 
heap xxx methods?


Thanks.

[p]




Re: [squid-users] Strange problem accessing http://Bloomberg.com

2009-04-08 Thread Jason Taylor

Hi Amos,

I resolved the issue with the following line in my proxy.pac file:
  if (dnsDomainIs(host, "'wbetest2.bloomberg.com")) { return "PROXY proxy:3128"; }


I used the page at 
http://jcurnow.home.comcast.net/~jcurnow/WritingEffectivePACFiles.html 
(mentioned in the proxy.pac entry in Wikipedia) to add sufficient 
alerting to my PAC file to precisely walk through the Bloomberg page and 
see the host values that the PAC file was seeing.


Also, I found a contact within my organization that has several contacts 
at Bloomberg and my issue description and fix will make their way to the 
right people to take care of the source of the problem.


Now if only Microsoft could take care of their javascript parser...  
Firefox does not experience this issue, even when using the exact same 
PAC file.


Thanks very much for your help.

Cheers,

/Jason

Amos Jeffries wrote:

So I think the client's proxy.pac script might be having trouble
digesting the malformed URL below:



1239113823.055  0 xxx.yyy.zzz.aaa TCP_DENIED/400 1614 GET
http://'wbetest2.bloomberg.com/jscommon/0/s_code.js' - NONE/- text/html
  

The single quote is making the proxy.pac freeze which in turn makes the
browser window freeze.
So at least now I know this is a problem at Bloomberg's end.
However, in the mean time, I need to make this site work for my users
since brokers are not known for their patience and understanding.

I know this isn't the ideal forum for this, but does anyone have an idea
how I can let the proxy.pac properly parse a URL with a quoted string in
it?



Hmmm:

 ...
  if ( strstr($url, "'") ) return "DIRECT";

should do the trick.


Of course I would never suggest passing them to "PROXY 127.0.0.1:80" ;)


Amos

  




[squid-users] Can Squid do what Blue Coat BCAAA does with transparent silent NTLM auth

2009-04-08 Thread Elvar


Hello,

For several years now I've used Squid with Winbind to silently 
authenticate users to Active Directory which has worked wonderfully. The 
one thing I've always had to do though is configure the user proxy 
settings to manually point to the proxy in order for it to silently 
authenticate. If I try a transparent proxy configuration without 
specifying manual proxy settings in the browser the silent 
authentication does not work.


According to some friends who use a product called Blue Coat SG there is 
an agent called BCAAA which allows you to have the proxy configured in a 
transparent manner, not specify manual settings in the users browsers, 
and still get silent NTLM authentication. Is there any way to do this in 
Squid that I'm just not aware of?



Kind regards,
Elvar




Re: [squid-users] acl dstdomains does not block!

2009-04-08 Thread Leslie Jensen


Amos Jeffries wrote:


Um, the config you showed simplifies down to:

 allow localhost access anywhere.
 deny anything else. Period.

I think you want:

#
# If we want to block certain sites.
#
# acl blockedsites dstdomain .aftonbladet.se
 acl blockedsites dstdomain .squid-cache.org
# acl blockedsites dstdomain "/usr/local/etc/squid/dstdomain"
#
# Show message when blocked
# deny_info ERR_ACCESS_DENIED blockedsites
#
 http_access deny blockedsites

# allow local network to other sites.

  http_access allow localhost
  http_access allow localnet

#
# And deny all other access to this proxy
#
 http_access deny all


Amos


Thank you guys.

I'm now up and running thanks to your advice :-)

/Leslie




Re: [squid-users] Getting error msgs when trying to start squid

2009-04-08 Thread Henrique M.


Amos Jeffries-2 wrote:
 
 'error messages' in web terminology means something completely different 
 which can be 'kept'.
 
 I assume you mean where does it send the startup error output? That is 
 usually sent to syslog by Debian/Ubuntu during the init process, and then 
 once squid is going, to /var/logs/squid3/cache.log
 
 Amos
 -- 
 Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
Current Beta Squid 3.1.0.6
 

Thanks for the help so far Amos, squid 2.7 is working now. I couldn't
get squid3 to work, so I reinstalled squid 2.7 and it worked right away;
don't know why it didn't work before. This version seems to be Ubuntu's
default, but it is old, and even though it is running it won't recognize the
httpd_accel command lines, so I had to comment them out like you guys said.

I also would like to ask for help with squid configuration. I have an ADSL
modem that is also a DHCP server (IP is 192.168.2.1) and an Ubuntu Linux server
that will be the proxy server (IP 192.168.2.5). In order to get the proxy
working, will I have to move the DHCP server to the Ubuntu server
instead? How should I set up squid.conf to get the proxy working?

Thanks again



[squid-users] squid authentication and redirection

2009-04-08 Thread Rudy Gevaert
Dear Squid users,

I was wondering if the following can be accomplished in squid:

Say, a user starts using the proxy
1 he is not logged in, so he gets redirected to a webpage over https
2 the webpage authenticates him, and sets a cookie in his browser
3 he is then redirected to the original url he was surfing to
4 squid checks if the cookie is valid and authenticates the user
5 the user can surf till he closes his browser 

In step 4 we never go to the authentication webpage unless the cookie is
not valid.  

In the background we would then run a script that parses the log file
and updates a database. So the next time a user logs in we can deny him
access.

The current solutions I have found have the following problems:
- they use basic authentication, so the password is sent in clear text over
  the wire
- they redirect all requests to a redirect url


Can it be done with squid?

Thanks in advance,
-- 
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
Rudy Gevaert  rudy.geva...@ugent.be  tel:+32 9 264 4734
Directie ICT, afd. Infrastructuur  Direction ICT, Infrastructure dept.
Groep Systemen Systems group
Universiteit Gent  Ghent University
Krijgslaan 281, gebouw S9, 9000 Gent, Belgie   www.UGent.be
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 


[squid-users] SSL on Squid Reverse Proxy

2009-04-08 Thread Frank Hoang

Using latest stable Squid 2.7.STABLE6
Using Squid as a reverse proxy.

Got a setup of Squid --> web server --> java
site works fine in normal HTTP port 80.

Need to enable SSL for the site also, so I added:

https_port x.x.x.x:443 cert=/site_name.com.cert key=/site_name.com.key vhost


and
cache_peer 10.x.x.x parent 443 0 no-query no-digest default
where cache peer is the INT VIP of the webcluster.

Squid SSL seems to work according to the logs and browser check:
2009/04/08 23:19:07| Accepting accelerated HTTP connections at x.x.x.x, port 80, FD 20.
2009/04/08 23:19:07| Accepting HTTPS connections at x.x.x.x, port 443, FD 21.


Problem is I get Error 400 Bad Request when trying to access the site 
via HTTPS through squid.


Pointing your hosts file to 10.x.x.x and checking with the browser over 
HTTPS works.


I think my conf is missing some proper options.
When adding cache_peer options like ssl, there is no change.


Any help would be great,

Thanks


[squid-users] Re: FreeBSD - Squid 2.7 - Transparent

2009-04-08 Thread Henrik Nordstrom
On Wed, 2009-04-08 at 05:15 -0400, Vivek wrote:

 I am trying to use squid 2.7 on a FreeBSD machine. But there is no 
 --enable-ipfw-transparent option available to configure squid in 
 transparent mode. How can we enable transparent mode when configuring 
 squid?

As far as I can understand none is needed in Squid-2, as IPFW returns
the original destination address in getsockname().

Additionally the lookup in Squid-3 is probably somewhat broken for the
same reason, comparing getsockname() to getsockname()...

Regards
Henrik
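
In other words, on Squid-2 the interception setup reduces to something like the following (a sketch based on the FreeBsdIpfw wiki page cited earlier; the rule number, network, and port are illustrative, not from this mail):

# squid.conf (Squid 2.7): no special ./configure option required
http_port 3128 transparent

# plus an ipfw forward rule at the OS level, for example:
#   ipfw add 100 fwd 127.0.0.1,3128 tcp from 192.168.0.0/24 to any 80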



Re: [squid-users] Squid 3.1.0.6, zph, shorewall, and tc on debian 5.0 (lenny)

2009-04-08 Thread Jason

Jason wrote:

Amos,

Thanks for answering.

Amos Jeffries wrote:

Jason wrote:

Everyone,

   I have compiled squid 3.1.6 from source on amd64 Debian 5.0 with


NP: please use the correct version numbering: 3.1.0.6.
there will probably be a 3.1.6 at some point in the future and 
hopefully this problem will not apply to those users, best not to add 
confusion.

My mistake.  This is for 3.1.0.6.  My apologies to the squid community.


zph options enabled.  I don't peer with any other caches, so all 
peering

stuff is disabled in my build.  I did not compile a kernel with the zph
patches, because, as I understand, that is only necessary if I want to
preserve zph marks between caches.  Plus, there is no zph patch for
the kernel version I am running.


Right.



With shorewall redirect rules, squid is operating as a transparent
intercepting proxy just fine.  I do not use tproxy - this is a NAT 
setup.


I can not get the zph functions to work.

Here are my config options:

squid.conf
...
qos_flows local-hit=0x30
...

shorewall tcstart:
#root htb
tc qdisc add dev eth1 root handle 1: htb default 1

#default htb
tc class add dev eth1 parent 1: classid 1:1 htb rate 64kbps /
ceil 64kbps

#squid htb
tc class add dev eth1 parent 1: classid 1:7 htb rate 1Mbit

tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match /
ip protocol 0x6 0xff match ip tos 0x30 0xff flowid 1:7

#I tried this for squid too
#tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match /
ip protocol 0x6 0xff match u32 0x880430 0x at 20 flowid 1:7

The shorewall tcrules are all commented out right now, so it is not 
applying

any filtering.

I have about one week to finish off this server for production...  
Help?



Jason Wallace



So what are the packet traces showing you about events?

Also, it's much easier for most of us to read the real firewall rules. 
What does iptables -L && iptables -t nat -L show happening?


Amos


iptables -L && iptables -t nat -L yields the following. I will try to 
packet trace this afternoon.
I have researched what a packet trace could mean.  Do you want to see 
what wireshark says on a client computer when I try to retrieve 
something that should come from the cache?




iptables -L && iptables -t nat -L
Chain INPUT (policy DROP)
target prot opt source   destination
eth0_inall  --  anywhere anywhere
eth1_inall  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere
ACCEPT all  --  anywhere anywherestate 
RELATED,ESTABLISHED

Drop   all  --  anywhere anywhere
LOGall  --  anywhere anywhereLOG level 
warning prefix `Shorewall:INPUT:DROP:'

DROP   all  --  anywhere anywhere

Chain FORWARD (policy DROP)
target prot opt source   destination
eth0_fwd   all  --  anywhere anywhere
eth1_fwd   all  --  anywhere anywhere
ACCEPT all  --  anywhere anywherestate 
RELATED,ESTABLISHED

Drop   all  --  anywhere anywhere
LOGall  --  anywhere anywhereLOG level 
warning prefix `Shorewall:FORWARD:DROP:'

DROP   all  --  anywhere anywhere

Chain OUTPUT (policy DROP)
target prot opt source   destination
eth0_out   all  --  anywhere anywhere
eth1_out   all  --  anywhere anywhere
ACCEPT all  --  anywhere anywhere
ACCEPT all  --  anywhere anywherestate 
RELATED,ESTABLISHED

ACCEPT all  --  anywhere anywhere

Chain Drop (7 references)
target prot opt source   destination
reject tcp  --  anywhere anywheretcp dpt:auth
dropBcast  all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhereicmp 
fragmentation-needed
ACCEPT icmp --  anywhere anywhereicmp 
time-exceeded

dropInvalid  all  --  anywhere anywhere
DROP   udp  --  anywhere anywheremultiport 
dports loc-srv,microsoft-ds
DROP   udp  --  anywhere anywhereudp 
dpts:netbios-ns:netbios-ssn
DROP   udp  --  anywhere anywhereudp 
spt:netbios-ns dpts:1024:65535
DROP   tcp  --  anywhere anywheremultiport 
dports loc-srv,netbios-ssn,microsoft-ds

DROP   udp  --  anywhere anywhereudp dpt:1900
dropNotSyn  tcp  --  anywhere anywhere
DROP   udp  --  anywhere anywhereudp 
spt:domain


Chain Reject (0 references)
target prot opt source   destination
reject tcp  --  anywhere anywheretcp dpt:auth
dropBcast  all  --  anywhere anywhere
ACCEPT icmp --  anywhere anywhereicmp 
fragmentation-needed
ACCEPT icmp --  anywhere   

Re: [squid-users] CONNECT method support(for https) using squid3.1.0.6 + tproxy4

2009-04-08 Thread Amos Jeffries
 Hi, all

 Now, I evaluate the squid3.1.0.6 + tproxy4 environment like the
 following network.

              (1)                     (2)

               |                       |
    +------+   |     +------------+    |    +---------+
    |WWW   +---+     |   squid    |    +----+  WWW    |
    |Client|.2 |   .1|  + tproxy  |.1  |  .2|  Server |
    +------+   +-----+ (tcp/8080) +----+    |(tcp/443)|
               |     +------------+    |    |(tcp/80) |
               |                       |    +---------+
         192.168.0.0/24          10.0.0.0/24

   (1) 192.168.0.2 --> 192.168.0.1:8080
   (2) 192.168.0.2 --> 10.0.0.2:80

 HTTP communication is completely OK!
 But in the HTTPS (CONNECT method) case

   (1) 192.168.0.2 --> 192.168.0.1:8080
   (2) 192.168.0.2 --> 10.0.0.2:443
 
 the following error occurred.

 commBind: Cannot bind socket FD 12 to 192.168.0.2: (99) Cannot
   assign requested address

 I think that tunnelStart() in tunnel.cc doesn't support COMM_TRANSPARENT:

 tunnelStart(ClientHttpRequest * http, int64_t * size_ptr, int* status_ptr)
 {
     ... snip ...
     sock = comm_openex(SOCK_STREAM,
                        IPPROTO_TCP,
                        temp,
                        COMM_NONBLOCKING,  // need COMM_TRANSPARENT
                        getOutgoingTOS(request),
                        url);
     ... snip ...

 What do you think ?

HTTPS encrypted traffic cannot be intercepted.

Amos




RE: [squid-users] Custom error page based on IP.

2009-04-08 Thread Amos Jeffries
 Sorry for the somewhat large delay in replying to you, I have been on
 longish term sick.
 However I've just returned and have sussed this out.

 Firstly I added the following rules to squid.conf.

 acl swan src 123.45.0.0/16   # The campus subnet, which was
 already defined in squid.conf
 .
 deny_info ERR_EXTERNAL_IP not swan   # if client's source IP is not in
 swan subnet then instantiate error page
 acl www dst 123.45.67.89 # campus www server holding the
 instruction page
 http_access allow www !swan  # allows access to web server from
 IP's that are outside of swan subnet
 http_access deny !swan   # deny src IP's outside the swan
 subnet.

 Then created a custom error file (ERR_EXTERNAL_IP) which contains a
 redirect to the page on the campus webserver.

 If you don't allow the access to the campus web server, you get a
 recursive deny and all gets a bit messy.


NP: the line above deny_info ERR_EXTERNAL_IP not swan
 should be configured as:
  deny_info ERR_EXTERNAL_IP swan

Unless the ERR_EXTERNAL_IP page is generating the redirect using various
of the Squid % error page codes, it can be replaced further with:
  deny_info http://internal.server/errorpage.html swan


Amos
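
Putting Jezz's rules and Amos's correction together gives roughly the following (a sketch; the help-page URL is a placeholder, not from the thread):

acl swan src 123.45.0.0/16
acl www dst 123.45.67.89
deny_info http://123.45.67.89/proxy-help.html swan
http_access allow www !swan
http_access deny !swan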





Re: [squid-users] squid authentication and redirection

2009-04-08 Thread Amos Jeffries
 Dear Squid users,

 I was wondering if the following can be accomplished in squid:

 Say, a user starts using the proxy
 1 he is not logged, so he gets redirected to a webpage over https
 2 the webpage authenticates him, and sets a cookie in his browser
 3 he is then redirected to the original url he was surfing to
 4 squid checks if the cookie is valid and authenticates the user
 5 the user can surf till he closes his browser

 In step 4 we never go to the authentication webpage unless the cookie is
 not valid.

 In the back ground  we would then run a script that parses the log file
 and updates a database.  So the next time a user logs in we can deny him
 access.

 The current solutions I have found have the following problems:
 - they use basic authentication, so password is sent in clear text of
   the wire
 - they redirect all requests to a redirect url


 Can it be done with squid?

Yes. But it's very complicated.

Since you are calculating your database of 'not okay' users based on IPs
you can drop the whole cookie thing and simply create an external_acl_type
helper that checks the current database records directly for each request.

Using an external helper lets you do:
 .. define external helper and ACL 'LoggedIn'

 deny_info https://example.com/login_page LoggedIn
 http_access deny !LoggedIn
 http_access allow LoggedIn

Amos
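
A minimal sketch of that wiring, assuming a hypothetical helper script (the helper path, its lookup logic, and the ttl are illustrative, not from this thread):

# helper receives one client IP per line and answers OK/ERR
# after checking the session database
external_acl_type session_check ttl=60 %SRC /usr/local/bin/check_session
acl LoggedIn external session_check
deny_info https://example.com/login_page LoggedIn
http_access deny !LoggedIn
http_access allow LoggedIn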


 Thanks in advance,





Re: [squid-users] Getting error msgs when trying to start squid

2009-04-08 Thread Amos Jeffries


 Amos Jeffries-2 wrote:

 'error messages' in web terminology means something completely different
 which can be 'kept'.

 I assume you mean where does it send the startup error output? That is
 usually sent to syslog by Debian/Ubuntu during the init process, and then
 once squid is going, to /var/logs/squid3/cache.log

 Amos
 --
 Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
Current Beta Squid 3.1.0.6


 Thanks for the help so far Amos, squid 2.7 is working now. I couldn't
 get squid3 to work, so I reinstalled squid 2.7 and it worked right away;
 don't know why it didn't work before. This version seems to be Ubuntu's
 default, but it is old, and even though it is running it won't recognize the
 httpd_accel command lines, so I had to comment them out like you guys said.


httpd_accel has been obsolete for more than 3 years now.
Where did you get that config? I know it does not come with the packaged
squid/squid3 on any current Ubuntu.

 I also would like to ask for help with squid configuration. I have an ADSL
 modem that is also a DHCP server (IP is 192.168.2.1) and an Ubuntu Linux
 server
 that will be the proxy server (IP 192.168.2.5). In order to get the proxy
 working, will I have to move the DHCP server to the Ubuntu server
 instead? How should I set up squid.conf to get the proxy working?

Considering that you have, on apparently brand new installs, encountered two
sets of issues with long obsolete config options, I'm going to have to say:
please post your Ubuntu version, squid version, and whole squid.conf
(minus the comment '#' lines) and let's get it cleaned up before you do
anything else.

Amos
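
As a starting point while that cleanup happens, a proxy for the LAN described needs little more than the following (192.168.2.0/24 and the server IP are from the post; everything else is stock squid.conf):

http_port 3128
acl localnet src 192.168.2.0/24
http_access allow localnet
http_access deny all

Browsers would then be pointed at 192.168.2.5 port 3128; for a plain (non-interception) proxy like this, the DHCP server can stay on the modem.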




Re: [squid-users] SSL on Squid Reverse Proxy

2009-04-08 Thread Amos Jeffries
 Using latest stable Squid 2.7.STABLE6 as a reverse proxy.

 https_port x.x.x.x:443 cert=/site_name.com.cert key=/site_name.com.key vhost
 cache_peer 10.x.x.x parent 443 0 no-query no-digest default

 Problem is I get Error 400 Bad Request when trying to access the site
 via HTTPS through squid.

 I think my conf is missing some proper options.
 When adding cache_peer options like ssl, there is no change.


cache_peer ... sslflags=DONT_VERIFY_PEER


Amos
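
Combined with the original cache_peer line, that would look something like this (a sketch; the ssl option makes Squid contact the peer over TLS, and DONT_VERIFY_PEER skips certificate validation against the internal VIP):

cache_peer 10.x.x.x parent 443 0 no-query no-digest default ssl sslflags=DONT_VERIFY_PEER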




Re: [squid-users] CONNECT method support(for https) using squid3.1.0.6 + tproxy4

2009-04-08 Thread Mikio Kishi
Hi, Amos

HTTPS encrypted traffic cannot be intercepted.

Yes, I know that. But in this case it is not transparent.

             (1)                     (2)

              |                       |
   +------+   |     +------------+    |    +---------+
   |WWW   +---+     |   squid    |    +----+  WWW    |
   |Client|.2 |   .1|  + tproxy  |.1  |  .2|  Server |
   +------+   +-----+ (tcp/8080) +----+    |(tcp/443)|
              |     +------------+    |    |(tcp/80) |
              |                       |    +---------+
        192.168.0.0/24          10.0.0.0/24

  (1) 192.168.0.2 --> 192.168.0.1:8080
                                  ^
  (2) 192.168.0.2 --> 10.0.0.2:443
                               ^^^

The only thing I'd like to do is source address spoofing using tproxy.

Does that make sense?

Sincerely,

--
Mikio Kishi


On Thu, Apr 9, 2009 at 10:52 AM, Amos Jeffries squ...@treenet.co.nz wrote:

 [setup description quoted above]

 HTTPS encrypted traffic cannot be intercepted.

 Amos





[squid-users] Squid 2.7.STABLE6 - peerDigestFetchAbort peer 192.168.0.1 Bad Request

2009-04-08 Thread louis gonzales
I need help understanding what the following cache.log information
means. Please.

2009/04/09 00:35:08| The request GET
http://unified1.abstract.net:80/tc/fms/513901874/mygroup/FSC_unified1_Administrator
is ALLOWED, because it matched 'FMS'
2009/04/09 00:35:08| peerSourceHashSelectParent: Calculating hash for
192.168.0.1
2009/04/09 00:35:08| The reply for GET
http://unified1.abstract.net/tc/fms/513901874/mygroup/FSC_unified1_Administrator
is ALLOWED, because it matched 'all'
2009/04/09 00:35:08| clientReadRequest: FD 12: no data to process
((10035) WSAEWOULDBLOCK, Resource temporarily unavailable.)
2009/04/09 00:36:59| peerDigestRequest:
http://192.168.0.1/squid-internal-periodic/store_digest key:
53E0FFBD42B9AFDF6F0027179BE8F121
2009/04/09 00:36:59| peerDigestFetchAbort: peer 192.168.0.1, reason: Bad Request
2009/04/09 00:36:59| temporary disabling (Bad Request) digest from 192.168.0.1
2009/04/09 00:36:59| fwdAbort:
http://192.168.0.1/squid-internal-periodic/store_digest
2009/04/09 00:46:59| peerDigestRequest:
http://192.168.0.1/squid-internal-periodic/store_digest key:
53E0FFBD42B9AFDF6F0027179BE8F121
2009/04/09 00:46:59| peerDigestFetchAbort: peer 192.168.0.1, reason: Bad Request
2009/04/09 00:46:59| temporary disabling (Bad Request) digest from 192.168.0.1
2009/04/09 00:46:59| fwdAbort:
http://192.168.0.1/squid-internal-periodic/store_digest


-- 
Louis Gonzales
BSCS EMU 2003
HP Certified Professional
louis.gonza...@linuxlouis.net