[squid-users] Strange problem with 503 errors.

2012-04-05 Thread Michael D. Setzer II
My college recently switched from ISP-provided IP ranges (202.128.71.x, 
202.128.72.x, 202.128.73.x and 202.128.79.x) to its own 
203.215.52.0/22 range.

The switch seemed to go fine, but we have found a few sites that are 
giving us 503 errors. 

I have gotten around the problem by using rinetd to redirect a port 
to the squid server on my home machine for these sites, and one 
of our ISPs has given us access to their proxy server, which we are 
now using, but ideally I would like to work out what is causing the 
problem.
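
(For reference, the rinetd workaround is a single forwarding rule in 
rinetd.conf; the addresses below are placeholders, not the real ones:)

# bindaddress  bindport  connectaddress   connectport
0.0.0.0        3128      198.51.100.10    3128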

I have squid servers on campus, and was wondering whether there is 
a way to have them, in the event of not being able to connect to a 
site directly, automatically retry going through the ISP's proxy?
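
(A minimal squid.conf sketch of that idea, assuming the ISP proxy is 
reachable as proxy.isp.example on port 8080; with this combination 
squid tries direct first and falls back to the parent when going 
direct fails:)

cache_peer proxy.isp.example 8080 0 no-query default
prefer_direct on
nonhierarchical_direct off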

Traceroutes to the sites that don't work stop one hop short of where 
they should end. The latest problem site is strange.

Going to tinyurl.com sometimes returns the IP address 
64.62.243.89, which works fine. But other times it returns 
64.62.243.91, which doesn't work from the college. Both work fine from 
home. Running Wireshark, the .89 address sends and receives 
pings and responses, but for .91 only the outgoing pings show up, 
with no replies. 

nmap shows no open ports on the .91 address from the college, but from 
home it does show them. Perhaps someone has a way of figuring out what 
the issue is. Our IT guys say they are not blocking these IPs, and 
even they have found sites that don't work for them. 
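
(A quick way to compare the two addresses from the squid box itself; 
curl's --resolve option pins the hostname to a specific IP so the same 
request can be tried against each one. These commands are a suggestion, 
not something from the original post:)

host tinyurl.com
traceroute 64.62.243.89
traceroute 64.62.243.91
curl -sI --resolve tinyurl.com:80:64.62.243.89 http://tinyurl.com/
curl -sI --resolve tinyurl.com:80:64.62.243.91 http://tinyurl.com/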

Thanks.

+--+
  Michael D. Setzer II -  Computer Science Instructor  
  Guam Community College  Computer Center  
  mailto:mi...@kuentos.guam.net
  mailto:msetze...@gmail.com
  http://www.guam.net/home/mikes
  Guam - Where America's Day Begins
  G4L Disk Imaging Project maintainer 
  http://sourceforge.net/projects/g4l/
+--+

http://setiathome.berkeley.edu (Original)
Number of Seti Units Returned:  19,471
Processing time:  32 years, 290 days, 12 hours, 58 minutes
(Total Hours: 287,489)

BOINC@HOME CREDITS
SETI    12029945.909740   |   EINSTEIN  7623371.809852
ROSETTA  4388616.446766   |   ABC 12124980.377137



[squid-users] Add parameter to all Request URLs

2012-04-05 Thread Simon Laredos

Hello,
I would like someone to offer help on a problem I have.
What I need is for Squid to add a parameter to the end of all requests made through the 
proxy.
E.g., when visiting the page "http://www.google.com/", append the parameter 
"?something" so it becomes "http://www.google.com/?something", 
and have this apply to all sites, except personal sites.
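
(One way to do this kind of thing is a url_rewrite_program helper; the 
sketch below assumes the classic one-line-in/one-line-out helper protocol, 
and the script path and the "personal sites" domain are made up purely for 
illustration:)

#!/bin/bash
# /usr/local/bin/append_param.sh - minimal sketch of a Squid URL rewriter
while read url rest; do
    case "$url" in
        *\?*)                    echo "$url" ;;              # already has a query string
        *://personal.example/*)  echo "$url" ;;              # hypothetical "personal sites" exemption
        *)                       echo "${url}?something" ;;  # append the parameter
    esac
done

and in squid.conf:

url_rewrite_program /usr/local/bin/append_param.sh
url_rewrite_children 5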

Greetings and thanks for attention.      
  

Re: [squid-users] limiting connections

2012-04-05 Thread H
Carlos Manuel Trepeu Pupo wrote:
> On Thu, Apr 5, 2012 at 10:32 AM, H  wrote:
>> Carlos Manuel Trepeu Pupo wrote:
> what is your purpose? solve bandwidth problems? Connection rate?
> Congestion? I believe that limiting to *one* download is not your real
> intention, because the browser could still open hundreds of regular
> pages and your download limit is nuked and was for nothing ...
>
> what is your operating system?
>
>>> I intend to solve bandwidth problems. For the people who use download
>>> managers or accelerators, just limit them to 1 connection. I also
>>> tried to solve it with delay_pool; the traffic that I deliver to the
>>> client was just as I configured, but with accelerators the upload
>>> saturates the channel.
>>>
>>
>>
>> since you did not say what OS you're running I can give you only some
>> direction; any or most Unix firewalls can solve this easily. If you use
>> Linux you may like pf, with FBSD you should go with ipfw; the latter
>> is probably easier to understand, but for both you will find zillions of
>> examples on the net, look for short setups
> 
> Sorry, I forgot!! Squid is on Debian 6.0, 32-bit. My firewall is
> Kerio, but on Windows, and I'm not so glad to use it!!!
> 
>>
>> first you "divide" your bandwidth between your users
> 
> First I searched for dynamic bandwidth management with Squid, but squid
> does not do this, and then after much searching I only found ISA Server
> with a third-party plugin, but I prefer Linux.
> 
>>
>> if you use TPROXY you can divide/limit the bandwidth on the outside
>> interface in order to limit only access to the link, but if squid has the
>> object in cache it might go out as fast as it can
>>
>> you still can manage the bandwidth pool with delay parameters if you wish
> 
> I tried with delay_pool, but delay_pool only manages the download
> rate, not the upload, and I need both. The last time I tried
> delay_pool the "download accelerator" downloaded at the speed that
> I specified, but the proxy consumed the whole channel with the download,
> something that I never understood.
> 
>>
>>
>> I guess you meant download accelerator, not manager. You can then limit
>> the connection rate within the bandwidth for each user and each
>> protocol; for DL-accelerators you should pay attention to UDP packets as
>> well. You did not say how many users and how much bandwidth you have, but
>> limit the TCP connections to 25 and UDP to 40 to begin with, then test
>> until you come to something that suits your needs
> 
> I have 128 kbps, and I have no idea about the UDP packets !!! That's
> new to me !! Any documentation that I can read ???
> 


none of what we are talking about has anything to do with squid

bandwidth control, connection limiting etc. you should handle with the firewall

let squid do what it does well, cache and proxy

you could consider a different setup: a Unix box with a firewall on your
internet connection acting as your gateway, and squid as a TPROXY or
transparent proxy if you need NAT, all on the same box

if you use Linux you should look at the pf firewall, if you use FreeBSD you
should use the ipfw firewall, and read the specific documentation. If this
is all new for you, you might find it easier to use FreeBSD since all the
setups are straightforward; Linux, and also pf, is a little bit more
complicated.
As an example, setting up NAT on ipfw can be done with three lines of code;
I believe pf needs at least 6 to work
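
(roughly what those three lines look like with ipfw's in-kernel NAT; the
interface name em0, the LAN range and the rule numbers are assumptions,
not from the thread:)

ipfw nat 1 config if em0 same_ports
ipfw add 100 nat 1 ip from 192.168.1.0/24 to any out via em0
ipfw add 110 nat 1 ip from any to any in via em0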

but before you dig deeper you might think about a new design of your
concept of Internet access


>>
>> you could still check which DL-accelerators your people are using and then
>> limit or block only those P2P ports, which used to be very effective
> 
> Even if I do not permit "CONNECT", can the users still use P2P ports ??
> 

I do not understand this question, do you mean squid's CONNECT keyword? If
so, it has nothing to do with this ...

all I was talking about is on firewall layer, before squid

DL-accelerators tend to fire lots of UDP packets to find a peer; these
packets can saturate small links easily if you do not limit them

you limit the max UDP connections as well as the max TCP connections, which
helps you get "reasonable" speed even with small bandwidth, as far as
128 kbit/s can be reasonable
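
(a minimal ipfw sketch of that kind of limiting, plus a dummynet pipe for a
128 kbit/s link; the LAN range, interface name and rule numbers are
assumptions for illustration only:)

# shape the whole 128 kbit/s link (assumes net.inet.ip.fw.one_pass=0 so
# packets continue to the rules below after the pipe)
ipfw pipe 1 config bw 128Kbit/s
ipfw add 100 pipe 1 ip from any to any via em0
# cap per-client connection counts with stateful/dynamic rules
ipfw add 200 allow tcp from 192.168.1.0/24 to any setup limit src-addr 25
ipfw add 210 allow udp from 192.168.1.0/24 to any limit src-addr 40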

you can run a simple squid setup
and you run a simple firewall setup

both on one machine


> Thanks for this, I can clear up many questions about squid that I have !!!
> 

you are welcome



-- 
H
+55 11 4249.





[squid-users] NTLM, non-domain machines and keep-alive

2012-04-05 Thread Harry Mills

Hi,

I have been trying to iron out a few issues we are having with NTLM 
authentication on our network for machines which are not domain members:


Windows 2008R2 AD domain
RHEL 6.1
squid-3.1.10-1
samba-3.5.6-86
Internet Explorer 7,8

We are in the process of moving to Kerberos authentication, and the test 
squid we have running is working well, however, when presented with the 
negotiate option for auth, IE will choose NTLM rather than basic when it 
is not a member of the domain.


I have reduced the config for squid down to just offering NTLM 
authentication to help me debug an issue with pop up boxes. I have also 
written a wrapper around the ntlm_auth binary to strace the calls being 
made when it is being executed.
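
(The wrapper mentioned above is roughly along these lines; the paths are
illustrative, not taken from the original post:)

#!/bin/sh
# /usr/local/bin/ntlm_auth_wrapper - trace the calls Squid makes to the real helper
exec strace -f -tt -o /tmp/ntlm_auth.trace /usr/bin/ntlm_auth "$@"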


NTLM authentication works without issue for domain members, however IE 
(and Chrome) will both popup an authentication required box three times 
before accepting the DOMAIN\Username and password.


Checking the wrapper around ntlm_auth, the process is only called by 
squid after the last of the three authentication prompts is submitted by 
the browser. Squid issues the expected two 407s to the browser which 
appears to cause the browser to pop up the authentication window each 
time, and on the third submission authentication succeeds.


The odd thing is, if I turn off keep-alive for ntlm in the squid.conf 
then I still see the 407s being issued by squid, but I only get a single 
authentication pop up from the browser, which when submitted with the 
correct credentials is immediately accepted and authentication succeeds.


I am clearly missing something, because the documentation states quite 
clearly that NTLM _requires_ keep-alive sockets, as it is a 
connection-oriented mechanism, so perhaps my turning off keep-alive causes 
a basic-auth fallback within ntlm_auth?


Is there a reason that IE presents 3 authentication boxes before 
accepting credentials from a non-domain machine? If there is a reason, 
is there a solution?


One thought I have had is that the majority of non-domain members will 
be on a specific VLAN, and therefore have a specific IP subnet. Is it 
possible to offer a different range of authentication options to the 
clients based on a subnet acl, e.g. Kerb/NTLM for machines on 
domain-member VLANs and just basic for guests (non-domain members)?


Regards,

Harry


Re: [squid-users] limiting connections

2012-04-05 Thread Carlos Manuel Trepeu Pupo
On Thu, Apr 5, 2012 at 10:32 AM, H  wrote:
> Carlos Manuel Trepeu Pupo wrote:
>>> > what is your purpose? solve bandwidth problems? Connection rate?
>>> > Congestion? I believe that limiting to *one* download is not your real
>>> > intention, because the browser could still open hundreds of regular
>>> > pages and your download limit is nuked and was for nothing ...
>>> >
>>> > what is your operating system?
>>> >
>> I intend to solve bandwidth problems. For the people who use download
>> managers or accelerators, just limit them to 1 connection. I also
>> tried to solve it with delay_pool; the traffic that I deliver to the
>> client was just as I configured, but with accelerators the upload
>> saturates the channel.
>>
>
>
> since you did not say what OS you're running I can give you only some
> direction; any or most Unix firewalls can solve this easily. If you use
> Linux you may like pf, with FBSD you should go with ipfw; the latter
> is probably easier to understand, but for both you will find zillions of
> examples on the net, look for short setups

Sorry, I forgot!! Squid is on Debian 6.0, 32-bit. My firewall is
Kerio, but on Windows, and I'm not so glad to use it!!!

>
> first you "divide" your bandwidth between your users

First I searched for dynamic bandwidth management with Squid, but squid
does not do this, and then after much searching I only found ISA Server
with a third-party plugin, but I prefer Linux.

>
> if you use TPROXY you can divide/limit the bandwidth on the outside
> interface in order to limit only access to the link, but if squid has the
> object in cache it might go out as fast as it can
>
> you still can manage the bandwidth pool with delay parameters if you wish

I tried with delay_pool, but delay_pool only manages the download
rate, not the upload, and I need both. The last time I tried
delay_pool the "download accelerator" downloaded at the speed that
I specified, but the proxy consumed the whole channel with the download,
something that I never understood.
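
(for reference, a delay_pools setup of the kind described is roughly the
following -- a class-2 pool capping each client's download rate; as noted
in the thread, this shapes only the traffic squid sends to the client, not
the upload side; the numbers and LAN range are illustrative:)

acl lan src 192.168.1.0/24
delay_pools 1
delay_class 1 2
# aggregate 16 KB/s, and 4 KB/s per client IP (values are bytes/second)
delay_parameters 1 16000/16000 4000/4000
delay_access 1 allow lan
delay_access 1 deny all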

>
>
> I guess you meant download accelerator, not manager. You can then limit
> the connection rate within the bandwidth for each user and each
> protocol; for DL-accelerators you should pay attention to UDP packets as
> well. You did not say how many users and how much bandwidth you have, but
> limit the TCP connections to 25 and UDP to 40 to begin with, then test
> until you come to something that suits your needs

I have 128 kbps, and I have no idea about the UDP packets !!! That's
new to me !! Any documentation that I can read ???

>
> you could still check which DL-accelerators your people are using and then
> limit or block only those P2P ports, which used to be very effective

Even if I do not permit "CONNECT", can the users still use P2P ports ??

Thanks for this, I can clear up many questions about squid that I have !!!

>
>
>
>
> --
> H
> +55 11 4249.
>


Re: [squid-users] does a match on an ACL stop or continue?

2012-04-05 Thread Greg Whynott

On 05/04/2012 2:09 AM, Jasper Van Der Westhuizen wrote:

Hi Greg

As far as I know it stops when it hits a matching rule. Rules are "AND'd" or "OR'd" 
together.
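
(for reference, squid evaluates http_access lines top-down and stops at the
first line whose ACLs all match; ACLs on the same line are AND'ed, separate
lines act like an OR -- a minimal illustration with made-up ACL names:)

acl office src 10.1.0.0/16
acl worktime time MTWHF 08:00-18:00
http_access allow office worktime   # matches only if office AND worktime
http_access deny all                # reached only if the line above did not match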



thanks Jasper!
have a great weekend,
greg



Re: [squid-users] Serious problem with read_timeout

2012-04-05 Thread Jean-Philippe Menil

Le 04/04/2012 09:00, Jean-Philippe Menil a écrit :

On 03/04/2012 23:53, Amos Jeffries wrote:

On 04.04.2012 02:46, Jean-Philippe Menil wrote:

Le 03/04/2012 11:06, Jean-Philippe Menil a écrit :

Hi,

I am encountering a serious outage with squid 3.HEAD-20120307-r12077.
Every time I download some test files, the download stops after 15 minutes.
If I lower read_timeout to 1 minute, the download stops after 1
minute.

Is it a known issue, or must I increase read_timeout to an excessively
long timeout?

special configuration is as follow:

workers 4
cpu_affinity_map process_numbers=1,2,3,4 cores=6,7,8,9

Regards.


Has nobody ever observed this phenomenon?


Not many production networks (squid-users people) use 3.HEAD (alpha) 
code.

The developers and alpha/beta testers hang out in squid-dev ;)

And no, you are the first to mention this particular behaviour.

Amos


Hi,

Yes, I know, but I think it is present in 3.2 too (I will test this 
afternoon to confirm).
I think I can reproduce it only when downloading a file from an https 
site, does that help?


Regards.


Hi,

So I have done tests with squid 3.2.0.14.
And it appears that I can reproduce the problem only with https sites; why, I 
don't know yet.


For testing, I set a lower value for read_timeout (I don't want to wait 15 
minutes between each test),

and I download an iso file from an https site:
https://nzdis.org/projects/projects/perfnet/repository/revisions/4/raw/vendor/Vyatta/Vyatta/vyatta-livecd-vc5.0.2.iso

Every time, the download stops at the read_timeout value.

Any ideas?

Regards.


--
Jean-Philippe Menil - Pôle réseau Service IRTS
DSI Université de Nantes
jean-philippe.me...@univ-nantes.fr
Tel : 02.53.48.49.27 - Fax : 02.53.48.49.09



Re: [squid-users] limiting connections

2012-04-05 Thread H
Carlos Manuel Trepeu Pupo wrote:
>> > what is your purpose? solve bandwidth problems? Connection rate?
>> > Congestion? I believe that limiting to *one* download is not your real
>> > intention, because the browser could still open hundreds of regular
>> > pages and your download limit is nuked and was for nothing ...
>> >
>> > what is your operating system?
>> >
> I intend to solve bandwidth problems. For the people who use download
> managers or accelerators, just limit them to 1 connection. I also
> tried to solve it with delay_pool; the traffic that I deliver to the
> client was just as I configured, but with accelerators the upload
> saturates the channel.
> 


since you did not say what OS you're running I can give you only some
direction; any or most Unix firewalls can solve this easily. If you use
Linux you may like pf, with FBSD you should go with ipfw; the latter
is probably easier to understand, but for both you will find zillions of
examples on the net, look for short setups

first you "divide" your bandwidth between your users

if you use TPROXY you can divide/limit the bandwidth on the outside
interface in order to limit only access to the link, but if squid has the
object in cache it might go out as fast as it can

you still can manage the bandwidth pool with delay parameters if you wish


I guess you meant download accelerator, not manager. You can then limit
the connection rate within the bandwidth for each user and each
protocol; for DL-accelerators you should pay attention to UDP packets as
well. You did not say how many users and how much bandwidth you have, but
limit the TCP connections to 25 and UDP to 40 to begin with, then test
until you come to something that suits your needs

you could still check which DL-accelerators your people are using and then
limit or block only those P2P ports, which used to be very effective




-- 
H
+55 11 4249.





Re: Fwd: [squid-users] Squid and FTP

2012-04-05 Thread Eliezer Croitoru

On 05/04/2012 16:21, Colin Coe wrote:

On Thu, Apr 5, 2012 at 8:32 PM, Eliezer Croitoru  wrote:

On 05/04/2012 14:51, Colin Coe wrote:




OK, I did
export ftp_proxy=http://benpxy1p:3128
wget ftp://ftp2.bom.gov.au/anon/gen/fwo
--2012-04-05 19:43:38--  ftp://ftp2.bom.gov.au/anon/gen/fwo
Resolving benpxy1p... 172.22.106.10
Connecting to benpxy1p|172.22.106.10|:3128... connected.
Proxy request sent, awaiting response... ^C

An entry appeared in access.log only after I hit ^C.

Changing ftp_proxy to ftp://benpxy1p:3128 did not change anything.

CC


well, if an access_log entry appears it means that the client is contacting
the squid server.
did you notice that the size of this list/dir is about 1.8 MB?
take something simple such as:
ftp://ftp.freebsd.org/pub
it should be about 2.9 KB.
then, if it doesn't go through within 10 secs, try without the upstream proxies.
maybe something is set up wrong on the cache_peer.
there are options to debug with a lot of output from squid that can simplify
the problem.
but i would go to minimum settings and build up.
use only one proxy and without a name.
just use the ip for the cache_peer acls.
you can use the debug sections:
http://wiki.squid-cache.org/KnowledgeBase/DebugSections
to make more use of it.
use them like this:
debug_options ALL,1 section,verbosity_level
debug_options ALL,1 9,6

there are a couple of sections that will provide you with more network layer
info that will help you find the source of the problem.

to see the log, tail the cache.log file.

well, i gave you kind of the worst case scenario i could think of.
if you need more help i'm here.

Regards,
Eliezer



As a test I pointed the client at the corporate proxy.

# export ftp_proxy=http://172.22.0.7:221
# wget ftp://ftp2.bom.gov.au/anon/gen/fwo/IDY02128.dat
--2012-04-05 20:43:53--  ftp://ftp2.bom.gov.au/anon/gen/fwo/IDY02128.dat
Connecting to 172.22.0.7:221... connected.
Proxy request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: “IDY02128.dat”

[
 <=>
] 232 --.-K/s   in 2m 0s

2012-04-05 20:45:52 (1.94 B/s) - “IDY02128.dat” saved [232]

It took a while but it definitely works.  I added the debug lines to
the squid.conf (and restarted).  When pointing the client at the squid
server (for doing the FTP), there were no additional lines logged in
either cache.log or access.log.

Again, doing a tcpdump on the squid server shows the client _is_
connecting to the squid server.

CC


as i was saying... it's not about whether it's connecting to the squid server 
but what happens from squid to the world.

try disabling the cache_peer settings on squid...
try using squid as a regular proxy without going to the parent bluecoat 
and see how it works.
just to see if you have any problem with squid settings that is not 
related to the cache_peer settings.


as you know, i and many more people are using squid for ftp and it works 
with no problem.


i can't point exactly to the point of failure in your setup, but one 
thing i do know..

i am using 3 cache peers and it works excellently for me.
just for you i will put up a setup to see how my basic settings for squid 
work with a parent proxy. (it will take some time )


most likely, if at any point you see an access log entry, it means that 
you have not configured something right on your squid.


try the next:
in hosts file add the entry:
172.22.0.7  ftp_proxy
172.22.0.7  http_proxy

then in squid.conf add:
cache_peer ftp_proxy parent 221 0 no-query no-digest proxy-only
cache_peer_access ftp_proxy allow ftp_ports
cache_peer_access ftp_proxy deny all

cache_peer http_proxy parent 8200 0 no-query no-digest proxy-only
cache_peer_access http_proxy deny ftp
cache_peer_access http_proxy allow all

#remove the :
#always_direct allow Dev
#always_direct allow Prod

#and add only:
never_direct allow all


Regards,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: Fwd: [squid-users] Squid and FTP

2012-04-05 Thread Colin Coe
On Thu, Apr 5, 2012 at 8:32 PM, Eliezer Croitoru  wrote:
> On 05/04/2012 14:51, Colin Coe wrote:
> 
>
>
>> OK, I did
>> export ftp_proxy=http://benpxy1p:3128
>> wget ftp://ftp2.bom.gov.au/anon/gen/fwo
>> --2012-04-05 19:43:38--  ftp://ftp2.bom.gov.au/anon/gen/fwo
>> Resolving benpxy1p... 172.22.106.10
>> Connecting to benpxy1p|172.22.106.10|:3128... connected.
>> Proxy request sent, awaiting response... ^C
>>
>> An entry appeared in access.log only after I hit ^C.
>>
>> Changing ftp_proxy to ftp://benpxy1p:3128 did not change anything.
>>
>> CC
>>
> well, if an access_log entry appears it means that the client is contacting
> the squid server.
> did you notice that the size of this list/dir is about 1.8 MB?
> take something simple such as:
> ftp://ftp.freebsd.org/pub
> it should be about 2.9 KB.
> then, if it doesn't go through within 10 secs, try without the upstream proxies.
> maybe something is set up wrong on the cache_peer.
> there are options to debug with a lot of output from squid that can simplify
> the problem.
> but i would go to minimum settings and build up.
> use only one proxy and without a name.
> just use the ip for the cache_peer acls.
> you can use the debug sections:
> http://wiki.squid-cache.org/KnowledgeBase/DebugSections
> to make more use of it.
> use them like this:
> debug_options ALL,1 section,verbosity_level
> debug_options ALL,1 9,6
>
> there are a couple of sections that will provide you with more network layer
> info that will help you find the source of the problem.
>
> to see the log, tail the cache.log file.
>
> well, i gave you kind of the worst case scenario i could think of.
> if you need more help i'm here.
>
> Regards,
> Eliezer
>

As a test I pointed the client at the corporate proxy.

# export ftp_proxy=http://172.22.0.7:221
# wget ftp://ftp2.bom.gov.au/anon/gen/fwo/IDY02128.dat
--2012-04-05 20:43:53--  ftp://ftp2.bom.gov.au/anon/gen/fwo/IDY02128.dat
Connecting to 172.22.0.7:221... connected.
Proxy request sent, awaiting response... 200 No headers, assuming HTTP/0.9
Length: unspecified
Saving to: “IDY02128.dat”

   [
<=>
] 232 --.-K/s   in 2m 0s

2012-04-05 20:45:52 (1.94 B/s) - “IDY02128.dat” saved [232]

It took a while but it definitely works.  I added the debug lines to
the squid.conf (and restarted).  When pointing the client at the squid
server (for doing the FTP), there were no additional lines logged in
either cache.log or access.log.

Again, doing a tcpdump on the squid server shows the client _is_
connecting to the squid server.

CC

-- 
RHCE#805007969328369


Re: [squid-users] limiting connections

2012-04-05 Thread Carlos Manuel Trepeu Pupo
On Thu, Apr 5, 2012 at 7:01 AM, H  wrote:
> Carlos Manuel Trepeu Pupo wrote:
>> On Tue, Apr 3, 2012 at 6:35 PM, H  wrote:
>>> Eliezer Croitoru wrote:
 On 03/04/2012 18:30, Carlos Manuel Trepeu Pupo wrote:
> On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffries
> wrote:
>> On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:
>>>
>>> Thanks a lot !! That's what I was missing, everything works
>>> fine now. So I can use this script since it already works.
>>>
>>> Now, I need to know if there is any way to consult the active requests
>>> in squid that works faster than squidclient 
>>>
>>
>> ACL types are pretty easy to add to the Squid code. I'm happy to
>> throw an
>> ACL patch your way for a few $$.
>>
>> Which comes back to my earlier, still unanswered question about why
>> you want
>> to do this very, very strange thing?
>>
>> Amos
>>
>
>
> OK !! Here is the complicated and strange explanation:
>
> Where I work we have 128 Kbps for the use of almost 80 PCs, and a few of
> them use download accelerators and saturate the channel. I began to
> use the ACL maxconn but I still have a few problems. 60 of the clients
> are behind an ISA server that I don't administer, so I can't limit
> the maxconn for them like the others. Now with this ACL, everyone can
> download, but with only one connection. That's the strange main idea.
 what do you mean by only one connection?
 if it's under one isa server then all of them share the same external IP.

>>>
>>> Hi
>>>
>>> I am following this thread with mixed feelings of weirdness and
>>> admiration ...
>>>
>>> there are always two ways to reach a far point, it's left around or
>>> right around the world, depending on your position one of the ways is
>>> always the longer one. I can understand that some without hurry and
>>> money issues chose the longer one, perhaps also because of more chance
>>> for adventurous happenings, unknown and the unexpected
>>>
>>> so now I explained in a similarly long way what I do not understand: why
>>> would you make such complicated, out-of-scope code, slow, certainly
>>> dangerous ... if at least it were perl, but bash calling external
>>> programs and grepping, wow ... when you can solve it with a line of code ?
>>>
>>> this task would fit pf or ipfw much better, would be more elegant and
>>> zillions times faster and secure, not speaking about time investment,
>>> how much time you need to write 5/6 keywords of code?
>>>
>>> or is it for demonstration purpose, showing it as an alternative
>>> possibility?
>>>
>>
>> It's great to read this. I only know BASH shell, but if you tell me that
>> I can make this safer and faster... In a previous post I talked about
>> this!! I asked that someone tell me if there is a better way to do that, I'm
>> new !! Please, guide me if you can
>>
>
>
> who knows ...
>
> what is your purpose? solve bandwidth problems? Connection rate?
> Congestion? I believe that limiting to *one* download is not your real
> intention, because the browser could still open hundreds of regular
> pages and your download limit is nuked and was for nothing ...
>
> what is your operating system?
>

I intend to solve bandwidth problems. For the people who use download
managers or accelerators, just limit them to 1 connection. I also
tried to solve it with delay_pool; the traffic that I deliver to the
client was just as I configured, but with accelerators the upload
saturates the channel.

>
>
> --
> H
> +55 11 4249.
>


Re: Fwd: [squid-users] Squid and FTP

2012-04-05 Thread Eliezer Croitoru

On 05/04/2012 14:51, Colin Coe wrote:



OK, I did
export ftp_proxy=http://benpxy1p:3128
wget ftp://ftp2.bom.gov.au/anon/gen/fwo
--2012-04-05 19:43:38--  ftp://ftp2.bom.gov.au/anon/gen/fwo
Resolving benpxy1p... 172.22.106.10
Connecting to benpxy1p|172.22.106.10|:3128... connected.
Proxy request sent, awaiting response... ^C

An entry appeared in access.log only after I hit ^C.

Changing ftp_proxy to ftp://benpxy1p:3128 did not change anything.

CC

well, if an access_log entry appears it means that the client is 
contacting the squid server.

did you notice that the size of this list/dir is about 1.8 MB?
take something simple such as:
ftp://ftp.freebsd.org/pub
it should be about 2.9 KB.
then, if it doesn't go through within 10 secs, try without the upstream proxies.
maybe something is set up wrong on the cache_peer.
there are options to debug with a lot of output from squid that can 
simplify the problem.

but i would go to minimum settings and build up.
use only one proxy and without a name.
just use the ip for the cache_peer acls.
you can use the debug sections:
http://wiki.squid-cache.org/KnowledgeBase/DebugSections
to make more use of it.
use them like this:
debug_options ALL,1 section,verbosity_level
debug_options ALL,1 9,6

there are a couple of sections that will provide you with more network 
layer info that will help you find the source of the problem.


to see the log, tail the cache.log file.

well, i gave you kind of the worst case scenario i could think of.
if you need more help i'm here.

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: Fwd: [squid-users] Squid and FTP

2012-04-05 Thread Colin Coe
On Thu, Apr 5, 2012 at 6:50 PM, Eliezer Croitoru  wrote:
> On 05/04/2012 12:14, Colin Coe wrote:
>>
>> Oops, and send to list.
>>
>> On Thu, Apr 5, 2012 at 4:26 PM, Eliezer Croitoru
>>  wrote:
>>>
>>> On 05/04/2012 10:25, Colin Coe wrote:


 On Wed, Apr 4, 2012 at 7:40 PM, Amos Jeffries
  wrote:
>
>
> On 4/04/2012 6:01 p.m., Eliezer Croitoru wrote:
>>
>>
>>
>> On 04/04/2012 08:12, Colin Coe wrote:
>>>
>>>
>>>
>>> Hi all
>>>
>>> I'm trying to get our squid proxy server to allow clients to do
>>> outbound FTP.  The problem is that our corporate proxy uses tcp/8200
>>> for http/https traffic and port 221 for FTP traffic.
>>>
>>> Tailing the squid logs I see that squid is attempting to send all FTP
>>> requests direct instead of going through the corporate proxy.
>>>
>>> Any ideas how I'd configure squid to use the corp proxy for FTP
>>> instead of going direct?
>>>
>>> Thanks
>>>
>>> CC
>>>
>> if you have parent proxy you should use the never_direct acl.
>>
>>
>>
>> acl ftp_ports port 21
>
>
>
>
> Make that "20 21" (note the space between)
>
>
> Amos



 Hi all

 I've made changes based on these suggestions but it still doesn't
 work.  My squid.conf looks like:
 ---
 cache_peer 172.22.0.7 parent 8200 0 default no-query no-netdb-exchange
 proxy-only no-digest no-delay name=other
 cache_peer 172.22.0.7 parent 221 0 default no-query no-netdb-exchange
 proxy-only no-digest no-delay  name=ftp

 cache_dir ufs /var/cache/squid 4900 16 256

 http_port 3128

 hierarchy_stoplist cgi-bin ?

 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
 refresh_pattern .               0       20%     4320

 acl manager proto cache_object
 acl localhost src 127.0.0.1/32 ::1
 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

 acl localnet src 10.0.0.0/8     # RFC 1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
 acl localnet src fc00::/7       # RFC 4193 local private network range
 acl localnet src fe80::/10      # RFC 4291 link-local (directly
 plugged) machines

 acl ftp_ports port 21 20

 acl SSL_ports port 443 21 20
 acl Safe_ports port 80          # http
 acl Safe_ports port 21          # ftp
 acl Safe_ports port 443         # https
 acl Safe_ports port 70          # gopher
 acl Safe_ports port 210         # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280         # http-mgmt
 acl Safe_ports port 488         # gss-http
 acl Safe_ports port 591         # filemaker
 acl Safe_ports port 777         # multiling http
 acl CONNECT method CONNECT

 cache_peer_access ftp allow ftp_ports
 cache_peer_access ftp deny all
 never_direct allow ftp_ports
 cache_peer_access other deny ftp_ports

 acl Prod dst 172.22.106.0/23
 acl Prod dst 172.22.176.0/23
 acl Dev dst 172.22.102.0/23

 acl BOM dstdomain .bom.gov.au
 cache deny BOM

 always_direct allow Dev
 always_direct allow Prod
 never_direct allow all

 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access allow localnet
 http_access deny all
 ---

 On the proxy server, when I do a 'tcpdump host client and port 3128' I
 get nothing more than
 ---
 15:22:19.515518 IP 172.22.106.23.48052>    172.22.106.10.3128: Flags
 [S], seq 2995762959, win 5840, options [mss 1460,sackOK,TS val
 1681190449 ecr 0,nop,wscale 7], length 0
 15:22:19.515567 IP 172.22.106.10.3128>    172.22.106.23.48052: Flags
 [S.], seq 1966725410, ack 2995762960, win 14480, options [mss
 1460,sackOK,TS val 699366121 ecr 1681190449], length 0
 15:22:19.515740 IP 172.22.106.23.48052>    172.22.106.10.3128: Flags
 [.], ack 1, win 5840, options [nop,nop,TS val 1681190449 ecr
 699366121], length 0
 15:23:49.606087 IP 172.22.106.23.48052>    172.22.106.10.3128: Flags
 [F.], seq 1, ack 1, win 5840, options [nop,nop,TS val 1681280540 ecr
 699366121], length 0
 15:23:49.606163 IP 172.22.106.10.3128>    172.22.106.23.48052: Flags
 [.], ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
 1681280540], length 0
 15:23:49.606337 IP 172.22.106.10.3128>    172.22.106.23.48052: Flags
 [F.], seq 1, ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
 1681280540], length 0
 15:23:49.606465 IP 172.22.106.23.48052> 

Re: [squid-users] limiting connections

2012-04-05 Thread H
Carlos Manuel Trepeu Pupo wrote:
> On Tue, Apr 3, 2012 at 6:35 PM, H  wrote:
>> Eliezer Croitoru wrote:
>>> On 03/04/2012 18:30, Carlos Manuel Trepeu Pupo wrote:
 On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffries
 wrote:
> On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:
>>
>> Thanks a lot !! That's what I was missing, everything works
>> fine now. So I can use this script since it already works.
>>
>> Now, I need to know if there is any way to consult the active requests
>> in squid that works faster than squidclient 
>>
>
> ACL types are pretty easy to add to the Squid code. I'm happy to
> throw an
> ACL patch your way for a few $$.
>
> Which comes back to my earlier, still unanswered question about why
> you want
> to do this very, very strange thing?
>
> Amos
>


 OK !! Here is the complicated and strange explanation:

 Where I work we have 128 Kbps for the use of almost 80 PCs, and a few of
 them use download accelerators and saturate the channel. I began to
 use the ACL maxconn but I still have a few problems. 60 of the clients
 are behind an ISA server that I don't administer, so I can't limit
 the maxconn for them like the others. Now with this ACL, everyone can
 download, but with only one connection. That's the strange main idea.
>>> what do you mean by only one connection?
>>> if it's under one isa server then all of them share the same external IP.
>>>
>>
>> Hi
>>
>> I am following this thread with mixed feelings of weirdness and
>> admiration ...
>>
>> there are always two ways to reach a far point, it's left around or
>> right around the world, depending on your position one of the ways is
>> always the longer one. I can understand that some without hurry and
>> money issues chose the longer one, perhaps also because of more chance
>> for adventurous happenings, unknown and the unexpected
>>
>> so now I explained in a similarly long way what I do not understand: why
>> would you make such complicated, out-of-scope code, slow, certainly
>> dangerous ... if at least it were perl, but bash calling external
>> programs and grepping, wow ... when you can solve it with a line of code ?
>>
>> this task would fit pf or ipfw much better, would be more elegant and
>> zillions times faster and secure, not speaking about time investment,
>> how much time you need to write 5/6 keywords of code?
>>
>> or is it for demonstration purpose, showing it as an alternative
>> possibility?
>>
> 
> It's great to read this. I only know BASH shell, but if you tell me that
> I can make this safer and faster... In a previous post I talked about
> this!! I asked that someone tell me if there is a better way to do that, I'm
> new !! Please, guide me if you can
> 


who knows ...

what is your purpose? solve bandwidth problems? Connection rate?
Congestion? I believe that limiting to *one* download is not your real
intention, because the browser could still open hundreds of regular
pages and your download limit is nuked and was for nothing ...

what is your operating system?



-- 
H
+55 11 4249.





Re: Fwd: [squid-users] Squid and FTP

2012-04-05 Thread Eliezer Croitoru

On 05/04/2012 12:14, Colin Coe wrote:

Oops, and send to list.

On Thu, Apr 5, 2012 at 4:26 PM, Eliezer Croitoru  wrote:

On 05/04/2012 10:25, Colin Coe wrote:


On Wed, Apr 4, 2012 at 7:40 PM, Amos Jeffries
  wrote:


On 4/04/2012 6:01 p.m., Eliezer Croitoru wrote:



On 04/04/2012 08:12, Colin Coe wrote:



Hi all

I'm trying to get our squid proxy server to allow clients to do
outbound FTP.  The problem is that our corporate proxy uses tcp/8200
for http/https traffic and port 221 for FTP traffic.

Tailing the squid logs I see that squid is attempting to send all FTP
requests direct instead of going through the corporate proxy.

Any ideas how I'd configure squid to use the corp proxy for FTP
instead of going direct?

Thanks

CC


if you have parent proxy you should use the never_direct acl.



acl ftp_ports port 21




Make that "20 21" (note the space between)


Amos



Hi all

I've made changes based on these suggestions but it still doesn't
work.  My squid.conf looks like:
---
cache_peer 172.22.0.7 parent 8200 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay name=other
cache_peer 172.22.0.7 parent 221 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay  name=ftp

cache_dir ufs /var/cache/squid 4900 16 256

http_port 3128

hierarchy_stoplist cgi-bin ?

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl ftp_ports port 21 20

acl SSL_ports port 443 21 20
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

cache_peer_access ftp allow ftp_ports
cache_peer_access ftp deny all
never_direct allow ftp_ports
cache_peer_access other deny ftp_ports

acl Prod dst 172.22.106.0/23
acl Prod dst 172.22.176.0/23
acl Dev dst 172.22.102.0/23

acl BOM dstdomain .bom.gov.au
cache deny BOM

always_direct allow Dev
always_direct allow Prod
never_direct allow all

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all
---

On the proxy server, when I do a 'tcpdump host client and port 3128' I
get nothing more than
---
15:22:19.515518 IP 172.22.106.23.48052>172.22.106.10.3128: Flags
[S], seq 2995762959, win 5840, options [mss 1460,sackOK,TS val
1681190449 ecr 0,nop,wscale 7], length 0
15:22:19.515567 IP 172.22.106.10.3128>172.22.106.23.48052: Flags
[S.], seq 1966725410, ack 2995762960, win 14480, options [mss
1460,sackOK,TS val 699366121 ecr 1681190449], length 0
15:22:19.515740 IP 172.22.106.23.48052>172.22.106.10.3128: Flags
[.], ack 1, win 5840, options [nop,nop,TS val 1681190449 ecr
699366121], length 0
15:23:49.606087 IP 172.22.106.23.48052>172.22.106.10.3128: Flags
[F.], seq 1, ack 1, win 5840, options [nop,nop,TS val 1681280540 ecr
699366121], length 0
15:23:49.606163 IP 172.22.106.10.3128>172.22.106.23.48052: Flags
[.], ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606337 IP 172.22.106.10.3128>172.22.106.23.48052: Flags
[F.], seq 1, ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606465 IP 172.22.106.23.48052>172.22.106.10.3128: Flags
[.], ack 2, win 5840, options [nop,nop,TS val 1681280540 ecr
699456212], length 0
---

Nothing goes into the access.log file from this connection either.


so what is your problem now?
that nothing goes into the access log?
let's go two steps back.
i didn't verify, but you do have:


acl Prod dst 172.22.106.0/23
acl Prod dst 172.22.176.0/23
acl Dev dst 172.22.102.0/23

always_direct allow Dev
always_direct allow Prod

and if you don't get anything in the access log it probably means that the
clients are not connecting to the server.
how are you directing the ftp clients to the squid proxy server?
you do know that squid does not intercept the ftp protocol by itself?
there was some kind of ftp interception tool, as far as i remember.

so just a sec, state your goals again and what you have done so far.

[squid-users] in squid using ssl_bump+tranparent options causes redirect-loop error at browser

2012-04-05 Thread Ahmed Talha Khan
I am trying to set up squid as a transparent proxy with the ssl-bump
options enabled, as outlined in the article. However, there is a problem
with all pages that have ssl/tls connections, e.g.
mail.yahoo.com, login.facebook.com. The browser gives me the error
“This webpage has a redirect loop”. Why is that happening? I was able
to bypass the problem by removing the “transparent” keyword from the
http_port config. But I want to know why this is happening.

Here is the relevant portion of my squid.conf file

always_direct allow all
ssl_bump allow all

http_port 192.168.8.105:3128 ssl-bump
cert=/home/talha/squid/www.sample.com.pem
key=/home/talha/squid/www.sample.com.pem

https_port 192.168.8.105:3129 ssl-bump
cert=/home/talha/squid/www.sample.com.pem
key=/home/talha/squid/www.sample.com.pem

If I put the transparent keyword in it, I get the problem. Can
anybody help me with this?

--
Regards,
-Ahmed Talha Khan




Fwd: [squid-users] Squid and FTP

2012-04-05 Thread Colin Coe
Oops, and send to list.

On Thu, Apr 5, 2012 at 4:26 PM, Eliezer Croitoru  wrote:
> On 05/04/2012 10:25, Colin Coe wrote:
>>
>> On Wed, Apr 4, 2012 at 7:40 PM, Amos Jeffries
>>  wrote:
>>>
>>> On 4/04/2012 6:01 p.m., Eliezer Croitoru wrote:


 On 04/04/2012 08:12, Colin Coe wrote:
>
>
> Hi all
>
> I'm trying to get our squid proxy server to allow clients to do
> outbound FTP.  The problem is that our corporate proxy uses tcp/8200
> for http/https traffic and port 221 for FTP traffic.
>
> Tailing the squid logs I see that squid is attempting to send all FTP
> requests direct instead of going through the corporate proxy.
>
> Any ideas how I'd configure squid to use the corp proxy for FTP
> instead of going direct?
>
> Thanks
>
> CC
>
 if you have parent proxy you should use the never_direct acl.



 acl ftp_ports port 21
>>>
>>>
>>>
>>> Make that "20 21" (note the space between)
>>>
>>>
>>> Amos
>>
>>
>> Hi all
>>
>> I've made changes based on these suggestions but it still doesn't
>> work.  My squid.conf looks like:
>> ---
>> cache_peer 172.22.0.7 parent 8200 0 default no-query no-netdb-exchange
>> proxy-only no-digest no-delay name=other
>> cache_peer 172.22.0.7 parent 221 0 default no-query no-netdb-exchange
>> proxy-only no-digest no-delay  name=ftp
>>
>> cache_dir ufs /var/cache/squid 4900 16 256
>>
>> http_port 3128
>>
>> hierarchy_stoplist cgi-bin ?
>>
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
>> refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
>> refresh_pattern .               0       20%     4320
>>
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32 ::1
>> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
>>
>> acl localnet src 10.0.0.0/8     # RFC 1918 possible internal network
>> acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
>> acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
>> acl localnet src fc00::/7       # RFC 4193 local private network range
>> acl localnet src fe80::/10      # RFC 4291 link-local (directly
>> plugged) machines
>>
>> acl ftp_ports port 21 20
>>
>> acl SSL_ports port 443 21 20
>> acl Safe_ports port 80          # http
>> acl Safe_ports port 21          # ftp
>> acl Safe_ports port 443         # https
>> acl Safe_ports port 70          # gopher
>> acl Safe_ports port 210         # wais
>> acl Safe_ports port 1025-65535  # unregistered ports
>> acl Safe_ports port 280         # http-mgmt
>> acl Safe_ports port 488         # gss-http
>> acl Safe_ports port 591         # filemaker
>> acl Safe_ports port 777         # multiling http
>> acl CONNECT method CONNECT
>>
>> cache_peer_access ftp allow ftp_ports
>> cache_peer_access ftp deny all
>> never_direct allow ftp_ports
>> cache_peer_access other deny ftp_ports
>>
>> acl Prod dst 172.22.106.0/23
>> acl Prod dst 172.22.176.0/23
>> acl Dev dst 172.22.102.0/23
>>
>> acl BOM dstdomain .bom.gov.au
>> cache deny BOM
>>
>> always_direct allow Dev
>> always_direct allow Prod
>> never_direct allow all
>>
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow localhost
>> http_access allow localnet
>> http_access deny all
>> ---
>>
>> On the proxy server, when I do a 'tcpdump host client and port 3128' I
>> get nothing more than
>> ---
>> 15:22:19.515518 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
>> [S], seq 2995762959, win 5840, options [mss 1460,sackOK,TS val
>> 1681190449 ecr 0,nop,wscale 7], length 0
>> 15:22:19.515567 IP 172.22.106.10.3128>  172.22.106.23.48052: Flags
>> [S.], seq 1966725410, ack 2995762960, win 14480, options [mss
>> 1460,sackOK,TS val 699366121 ecr 1681190449], length 0
>> 15:22:19.515740 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
>> [.], ack 1, win 5840, options [nop,nop,TS val 1681190449 ecr
>> 699366121], length 0
>> 15:23:49.606087 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
>> [F.], seq 1, ack 1, win 5840, options [nop,nop,TS val 1681280540 ecr
>> 699366121], length 0
>> 15:23:49.606163 IP 172.22.106.10.3128>  172.22.106.23.48052: Flags
>> [.], ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
>> 1681280540], length 0
>> 15:23:49.606337 IP 172.22.106.10.3128>  172.22.106.23.48052: Flags
>> [F.], seq 1, ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
>> 1681280540], length 0
>> 15:23:49.606465 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
>> [.], ack 2, win 5840, options [nop,nop,TS val 1681280540 ecr
>> 699456212], length 0
>> ---
>>
>> Nothing goes into the access.log file from this connection either.
>>
> so what is your problem now?
> that nothing goes into the access log?
> let's go two steps back.
> i didn't verify, but you do have:
>
>
> acl Prod dst 172.22.106.0/23
> acl Prod dst 172.22.176.0/23
> acl Dev dst 172.22.102.0/23
>
> 

RE: [squid-users] ntlm and kerberos

2012-04-05 Thread Anders.Larsson
OK, I did the migration yesterday from ntlm to kerberos :) it went very smoothly..

One other thing: is there a way to set up logging for kerberos so I can see failed 
auth against AD? 
And what do you recommend for children? I have 15 now.
We have 4000 users in the domain.
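
(A hedged sketch of the kind of thing that can help here: the negotiate/kerberos 
helper accepts a -d flag that sends its debug output, including failed 
authentications, to cache.log, and the children count is set on the same 
auth_param scheme. The helper path, principal and numbers below are 
assumptions, not taken from this setup:)

auth_param negotiate program /usr/lib/squid/squid_kerb_auth -d -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 30
auth_param negotiate keep_alive on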

The main reason that I moved from ntlm was that we had some issues with sites 
that had to be excluded from auth.. because of java.. and that some users got problems 
with the auth popup login in their IE.. they just needed to type user and password 
and then it worked..

But now we still have the issue with the popup for some users.. around 30 users.. 
very strange behavior.



 * Systemadmin Unix/Linux/Vmware
 * Tieto
 * Kyrkgatan 60
 * 831 34 ÖSTERSUND
 * Växel:+46 (0)10 481 98 00
 * Fax:  +46 (0)10 481 98 10
 * Tel:  +46 (0)10 481 02 20
 * Mobil:+46 (0)70 656 42 64
 * Mail: anders.lars...@tieto.com
 **
  
   Debian is they way to salvation 
  
  ---  How Hard Can It Be ---


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: den 3 april 2012 13:17
To: squid-users@squid-cache.org
Subject: Re: [squid-users] ntlm and kerberos

On 3/04/2012 7:26 p.m., Anders.Larsson wrote:
> Hi!
>
> I'm currently using ntlm to auth to AD, and I have a test server that is using 
> Kerberos..
> Now I want to change the prod machine to use Kerberos too.. is there a way to 
> have both auth directives in the conf?

Yes. Simply put them both in.
http://wiki.squid-cache.org/Features/Authentication#Can_I_use_different_authentication_mechanisms_together.3F
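
(Roughly, that just means listing both schemes in squid.conf; the helper 
paths and principal shown here are typical defaults and may differ on your 
system:)

auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 30
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30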

>
> I want to take it in steps, so I have to create an acl for src ip/hosts..
> But how do I point out which auth to use, so it uses the acl for Kerberos..??
> Possible?

Not possible unfortunately. The client's software decides.

Amos


Re: [squid-users] Squid and FTP

2012-04-05 Thread Eliezer Croitoru

On 05/04/2012 10:25, Colin Coe wrote:

On Wed, Apr 4, 2012 at 7:40 PM, Amos Jeffries  wrote:

On 4/04/2012 6:01 p.m., Eliezer Croitoru wrote:


On 04/04/2012 08:12, Colin Coe wrote:


Hi all

I'm trying to get our squid proxy server to allow clients to do
outbound FTP.  The problem is that our corporate proxy uses tcp/8200
for http/https traffic and port 221 for FTP traffic.

Tailing the squid logs I see that squid is attempting to send all FTP
requests direct instead of going through the corporate proxy.

Any ideas how I'd configure squid to use the corp proxy for FTP
instead of going direct?

Thanks

CC


if you have parent proxy you should use the never_direct acl.



acl ftp_ports port 21



Make that "20 21" (note the space between)


Amos


Hi all

I've made changes based on these suggestions but it still doesn't
work.  My squid.conf looks like:
---
cache_peer 172.22.0.7 parent 8200 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay name=other
cache_peer 172.22.0.7 parent 221 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay  name=ftp

cache_dir ufs /var/cache/squid 4900 16 256

http_port 3128

hierarchy_stoplist cgi-bin ?

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl ftp_ports port 21 20

acl SSL_ports port 443 21 20
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

cache_peer_access ftp allow ftp_ports
cache_peer_access ftp deny all
never_direct allow ftp_ports
cache_peer_access other deny ftp_ports

acl Prod dst 172.22.106.0/23
acl Prod dst 172.22.176.0/23
acl Dev dst 172.22.102.0/23

acl BOM dstdomain .bom.gov.au
cache deny BOM

always_direct allow Dev
always_direct allow Prod
never_direct allow all

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all
---

On the proxy server, when I do a 'tcpdump host client and port 3128' I
get nothing more than
---
15:22:19.515518 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
[S], seq 2995762959, win 5840, options [mss 1460,sackOK,TS val
1681190449 ecr 0,nop,wscale 7], length 0
15:22:19.515567 IP 172.22.106.10.3128>  172.22.106.23.48052: Flags
[S.], seq 1966725410, ack 2995762960, win 14480, options [mss
1460,sackOK,TS val 699366121 ecr 1681190449], length 0
15:22:19.515740 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
[.], ack 1, win 5840, options [nop,nop,TS val 1681190449 ecr
699366121], length 0
15:23:49.606087 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
[F.], seq 1, ack 1, win 5840, options [nop,nop,TS val 1681280540 ecr
699366121], length 0
15:23:49.606163 IP 172.22.106.10.3128>  172.22.106.23.48052: Flags
[.], ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606337 IP 172.22.106.10.3128>  172.22.106.23.48052: Flags
[F.], seq 1, ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606465 IP 172.22.106.23.48052>  172.22.106.10.3128: Flags
[.], ack 2, win 5840, options [nop,nop,TS val 1681280540 ecr
699456212], length 0
---

Nothing goes into the access.log file from this connection either.


so what is your problem now?
that nothing goes into the access log?
let's go two steps back.
i didn't verify, but you do have:

acl Prod dst 172.22.106.0/23
acl Prod dst 172.22.176.0/23
acl Dev dst 172.22.102.0/23

always_direct allow Dev
always_direct allow Prod

and if you don't get anything in the access log it probably means that 
the clients are not connecting to the server.

how are you directing the ftp clients to the squid proxy server?
you do know that squid does not intercept the ftp protocol by itself?
there was some kind of ftp interception tool, as far as i remember.

so just a sec, state your goals again and what you have done so far.

Regards,
Eliezer

Any ideas?

CC




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eli

RE: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-04-05 Thread Clem
Hi Guido,

Thanks for this link, but I've already read it and already set that
parameter (EXPR), and there is no change. I made more tests yesterday:

.. WinXP -> squid -> exchange 2007

With the LAN manager settings (secpol.msc) AND with the msstd option checked in
the Outlook HTTP proxy settings:

. LM and NTLM only: working
. NTLM only: working
. NTLMv2 only: working

.. Windows7 -> squid -> exchange 2007

With the LAN manager settings (secpol.msc) AND with the msstd option checked in
the Outlook HTTP proxy settings:

. LM and NTLM only: NOT working
. NTLM only: NOT working
. NTLMv2 only: NOT working

With the LAN manager settings (secpol.msc) AND with the msstd option NOT checked
in the Outlook HTTP proxy settings:

. LM and NTLM only: working
. NTLM only: NOT working
. NTLMv2 only: NOT working

Without squid, i.e. Outlook connected directly to Exchange via Outlook
Anywhere, it works with any settings, for XP and 7.

I'm so confused ... Why does it work with any settings on XP, while Windows 7
works only with 2 settings on?
What is the thing that makes the difference between these two OSes?

Regards,

Clem


-Original Message-
From: Guido Serassio [mailto:guido.seras...@acmeconsulting.it] 
Sent: Wednesday, 4 April 2012 19:32
To: Clem; squid-users@squid-cache.org
Subject: R: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6
exchange2007 with ntlm

Hi Clem,

Try reading this:
http://blogs.technet.com/b/exchange/archive/2008/09/29/3406352.aspx

Regards

Guido Serassio
Acme Consulting S.r.l.
Microsoft Silver Certified Partner
VMware Professional Partner
Via Lucia Savarino, 110098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135   Fax. : +39.011.9781115
Email: guido.seras...@acmeconsulting.it
WWW: http://www.acmeconsulting.it


> -Original Message-
> From: Clem [mailto:clemf...@free.fr]
> Sent: Monday, 2 April 2012 15.34
> To: squid-users@squid-cache.org
> Subject: RE: [squid-users] https analyze, squid rpc proxy to rpc proxy 
> ii6
> exchange2007 with ntlm
> 
> Re,
> 
> I've found the option that generates the issue only with Windows 7: in the 
> Outlook proxy HTTP settings window, this gets checked automatically: 
> connect only to proxy servers whose certificate uses this principal
> (common) name:
> msstd:externalfqdn
> 
> When I uncheck this option, my Outlook (2007/2010) can connect through 
> squid with ntlm to my Exchange via Outlook Anywhere. If it's checked 
> I get a: server is unavailable.
> In Windows XP it works whether it is checked or not.
> 
> By the way, after the connection to Exchange succeeds on W7, that option 
> rechecks itself automatically ...
> 
> The point is, why? Maybe Windows 7 is more paranoid about certificates??
> 
> Have you an idea ?
> 
> Regards
> 
> Clem
> 
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Tuesday, 27 
> March 2012 23:27 To: squid-users@squid-cache.org Subject: RE: 
> [squid-users] https analyze, squid rpc proxy to rpc proxy ii6
> exchange2007 with ntlm
> 
> On 27.03.2012 21:31, Clem wrote:
> > Hi Amos,
> >
> > Administrateur is the french AD name for Administrator :)
> >
> 
> Yes. I'm just wondering if it is correct for what your IIS is checking 
> against.
> 
> Amos



Re: [squid-users] Squid and FTP

2012-04-05 Thread Colin Coe
On Wed, Apr 4, 2012 at 7:40 PM, Amos Jeffries  wrote:
> On 4/04/2012 6:01 p.m., Eliezer Croitoru wrote:
>>
>> On 04/04/2012 08:12, Colin Coe wrote:
>>>
>>> Hi all
>>>
>>> I'm trying to get our squid proxy server to allow clients to do
>>> outbound FTP.  The problem is that our corporate proxy uses tcp/8200
>>> for http/https traffic and port 221 for FTP traffic.
>>>
>>> Tailing the squid logs I see that squid is attempting to send all FTP
>>> requests direct instead of going through the corporate proxy.
>>>
>>> Any ideas how I'd configure squid to use the corp proxy for FTP
>>> instead of going direct?
>>>
>>> Thanks
>>>
>>> CC
>>>
>> if you have parent proxy you should use the never_direct acl.
>>
>>
>>
>> acl ftp_ports port 21
>
>
> Make that "20 21" (note the space between)
>
>
> Amos

Hi all

I've made changes based on these suggestions but it still doesn't
work.  My squid.conf looks like:
---
cache_peer 172.22.0.7 parent 8200 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay name=other
cache_peer 172.22.0.7 parent 221 0 default no-query no-netdb-exchange
proxy-only no-digest no-delay  name=ftp

cache_dir ufs /var/cache/squid 4900 16 256

http_port 3128

hierarchy_stoplist cgi-bin ?

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines

acl ftp_ports port 21 20

acl SSL_ports port 443 21 20
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

cache_peer_access ftp allow ftp_ports
cache_peer_access ftp deny all
never_direct allow ftp_ports
cache_peer_access other deny ftp_ports

acl Prod dst 172.22.106.0/23
acl Prod dst 172.22.176.0/23
acl Dev dst 172.22.102.0/23

acl BOM dstdomain .bom.gov.au
cache deny BOM

always_direct allow Dev
always_direct allow Prod
never_direct allow all

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all
---

On the proxy server, when I do a 'tcpdump host client and port 3128' I
get nothing more than
---
15:22:19.515518 IP 172.22.106.23.48052 > 172.22.106.10.3128: Flags
[S], seq 2995762959, win 5840, options [mss 1460,sackOK,TS val
1681190449 ecr 0,nop,wscale 7], length 0
15:22:19.515567 IP 172.22.106.10.3128 > 172.22.106.23.48052: Flags
[S.], seq 1966725410, ack 2995762960, win 14480, options [mss
1460,sackOK,TS val 699366121 ecr 1681190449], length 0
15:22:19.515740 IP 172.22.106.23.48052 > 172.22.106.10.3128: Flags
[.], ack 1, win 5840, options [nop,nop,TS val 1681190449 ecr
699366121], length 0
15:23:49.606087 IP 172.22.106.23.48052 > 172.22.106.10.3128: Flags
[F.], seq 1, ack 1, win 5840, options [nop,nop,TS val 1681280540 ecr
699366121], length 0
15:23:49.606163 IP 172.22.106.10.3128 > 172.22.106.23.48052: Flags
[.], ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606337 IP 172.22.106.10.3128 > 172.22.106.23.48052: Flags
[F.], seq 1, ack 2, win 14480, options [nop,nop,TS val 699456212 ecr
1681280540], length 0
15:23:49.606465 IP 172.22.106.23.48052 > 172.22.106.10.3128: Flags
[.], ack 2, win 5840, options [nop,nop,TS val 1681280540 ecr
699456212], length 0
---

Nothing goes into the access.log file from this connection either.

Any ideas?

CC

-- 
RHCE#805007969328369