[squid-users] Login Popups on Windows XP with squid_kerb_auth and external acl

2012-03-12 Thread Игорь Потапов
Hi.
squid is 3.1.19 on FreeBSD 8.2 with MIT Kerberos. squid_kerb_auth is in use as
the only auth scheme. We have an external acl to check authorization in a MySQL
db. On machines running XP SP2 with IE8 (Windows Integrated Auth enabled),
authentication windows sometimes pop up. I think this happens when a request is
denied by the external auth script. If I hit Cancel, the page continues loading.
On Windows 7 I see no such behavior.
Config is here: http://pastebin.com/QyCiha8Q
Here is the external auth script: http://pastebin.com/LiAmniSz
I think IE8 on XP sometimes doesn't send Authorization and asks for it, or
falls back to NTLM. I've made some workarounds to disable the login windows,
but on XP they still appear.
Can I force IE8 on XP to use only Negotiate/Kerberos?
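One way to check what the browser is actually sending is to decode the token in the (Proxy-)Authorization: Negotiate header from a packet capture or debug log: SPNEGO/Kerberos blobs are DER structures whose first byte is 0x60 (long tokens base64-encode to strings starting with "YII"), while NTLM fallback messages carry the ASCII signature "NTLMSSP\0" (base64 "TlRMTVNT"). A small sketch, not part of squid_kerb_auth:

```python
import base64

def auth_scheme(token_b64: str) -> str:
    """Classify the blob from an 'Authorization: Negotiate <token>' header.

    SPNEGO/Kerberos blobs are DER 'application 0' structures (first byte
    0x60); NTLMSSP messages always begin with the ASCII signature
    'NTLMSSP\\0', even when sent under the Negotiate scheme.
    """
    raw = base64.b64decode(token_b64)
    if raw.startswith(b"NTLMSSP\x00"):
        return "NTLM"
    if raw[:1] == b"\x60":
        return "SPNEGO/Kerberos"
    return "unknown"
```

If the tokens sent around the popups turn out to be NTLM, the fix is usually on the XP client side (browser/policy settings controlling NTLM fallback) rather than in squid.conf.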



[squid-users] Push Patch by Jon Kay

2012-03-12 Thread anita.sivakumar
Hi,

Is there anyone who has used this patch 
http://devel.squid-cache.org/stale_projects.html#push?
Is this a completed patch, or does being listed under "stale projects" mean it
is still under development?

This patch causes a lot of compilation issues when integrated with the source
code (3.16), as it was built against 2.5.
If anybody has used it, can you please give some insight into how the code is
compiled and executed?

Thanks.

Anita



Re: [squid-users] RE: Fwd: Fwd: SSLBUMP Issue with SSL websites

2012-03-12 Thread Amos Jeffries

On 13/03/2012 1:05 a.m., Amit Pasari-Xsinfoways wrote:

Dear Amos,
Thank you very much for your time and advice. I have now run into a
second problem:
I have upgraded my squid version to squid3-3.1.18-1.el4.pp; now
https://facebook.com and https://linkedin.com open fine.
But when I open mail.yahoo.com or gmail.com it gives me an error in my
browser:

The page isn't redirecting properly
  Firefox has detected that the server is redirecting the
request for this address in a way that will never complete.

And in access.log I get something like this:
1331553718.561    208 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553718.700    130 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553718.844    134 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553718.994    137 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553719.138    132 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553719.275    129 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html


The browser is stuck in a forwarding loop as instructed by those web
servers. Squid would not seem to be relevant there.




Second, can I get the rpm for squid 3.2 for CentOS 4.x?


I'm not aware of any for 3.2 yet outside of the Fedora ones. The good question
will be whether 3.2 builds on a CentOS as old as 4.x; we test it on 5.x.


Amos


Re: [squid-users] requests per second

2012-03-12 Thread Amos Jeffries

On 13/03/2012 3:21 p.m., Michael Hendrie wrote:

On 11/03/2012, at 10:21 PM, Amos Jeffries wrote:


On 9/03/2012 4:52 a.m., Student University wrote:

Hi ,
This is Liley ,,,

can anyone tell me what
requests per second can squid3 serves ,
especially if we run it on the top of a hardware with OCZ RevoDrive 3
X2 (200,000 Random Write 4K IOPS)

Thanks in advance .

These are some performance stats from network admins who have been willing to
donate the info publicly:
http://wiki.squid-cache.org/KnowledgeBase/Benchmarks

How do we post results on the above wiki page?


You can post the details here and I'll cut-n-paste to the page.

Or, to edit the wiki, sign up for a wiki account and let the admins (kinkie
or myself) know which username to grant write privileges to. Sorry that
it's not open by default, but we have had spam problems.



Amos


Re: [squid-users] maxconn bug ?

2012-03-12 Thread Amos Jeffries

On 13.03.2012 06:03, FredB wrote:

Hi all,

maxconn doesn't seem to work with the latest squid, 3.2.0.16

I'm trying

acl userslimit src 192.168.0.0/16
acl 3conn maxconn 3
http_access deny 3conn userslimit
client_db on

grep 192.168.80.194 /var/log/squid/access.log | grep 2012:17:48:43 | 
wc -l

10

And no ban
Maybe I misconfigured something ?


Duration and overlap of those connections matter. If they were all
serviced in less than 100ms and closed, it is possible they all took
place one after another sequentially, with no more than 1 open at a time.


maxconn allows up to 3 *simultaneous* connections. Opening three then 
closing one before opening a fourth is permitted. Only opening four at 
once is not permitted.
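The point about overlap can be made mechanical: given (open, close) times for each logged request, maxconn only cares about the peak number simultaneously open. A rough sketch (illustration only, not how Squid's client_db actually counts):

```python
def peak_concurrency(intervals):
    """Peak number of simultaneously open connections.

    Sorting (time, +/-1) events puts a close at time t before an open
    at time t, matching the rule above: closing one connection before
    opening the next does not raise the count.
    """
    events = sorted([(start, 1) for start, end in intervals] +
                    [(end, -1) for start, end in intervals])
    current = peak = 0
    for _, step in events:
        current += step
        peak = max(peak, current)
    return peak

# ten sequential 100ms requests within one second: never more than 1 open,
# so they would all pass "maxconn 3" despite 10 log lines per second
print(peak_concurrency([(i * 100, i * 100 + 100) for i in range(10)]))  # 1
```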




I have another question, about deny pages: when I block by
maxconn/port/acl dst/etc., my users get the same DENY page without
distinction. How can I customize the result (one page for dstdomain,
one page for maxconn, one page for ldap ident, etc.)?


You use deny_info to attach a custom output to the last ACL on the
line. That output gets presented every time the ACL is last on a deny
line.

http://www.squid-cache.org/Doc/config/deny_info/
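For the maxconn case above that could look like the following sketch, with hypothetical template names (each ERR_* file would need to exist in your error-page directory):

```
# bind a custom page to each deciding ACL, then make sure that ACL
# is the last one on its http_access deny line
deny_info ERR_TOO_MANY_CONN 3conn
deny_info ERR_BLOCKED_DOMAIN baddomains

http_access deny userslimit 3conn    # shows ERR_TOO_MANY_CONN
http_access deny baddomains          # shows ERR_BLOCKED_DOMAIN
```

Here "3conn" and "userslimit" are the ACL names from the config quoted above; "baddomains" and the ERR_* names are placeholders.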

Amos


Re: [squid-users] Help with a tcp_miss/200 issue

2012-03-12 Thread Amos Jeffries

On 13.03.2012 03:13, James Ashton wrote:

Any thoughts guys?

This has me baffled.  I am digging through list archives, but nothing
relevant so far.
I figure it has to be a response header issue.  I just don't see it.



Could be. You will need to know the headers being sent into Squid
("squid1.kelbymediagroup.com") from the origin server, though. I suspect it
may be missing a Date: header or something like that, making the original
non-cacheable. Squid does a lot of fixing-up of details like that on its
output to ensure the output is friendlier to downstream clients.




Using Squid 3.1.8


Or it could be some bug in that particular version. Have you tried the more
current .19 release?



Config seems okay.


#
visible_hostname squid2.kelbymediagroup.com
#
refresh_pattern (phpmyadmin|process|register|login|contact|signup|admin|gateway|ajax|account|cart|checkout|members) 0 10% 0
refresh_pattern (blog|feed) 300 20% 4320
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 40320 75% 86400
refresh_pattern -i \.(iso|avi|wav|mp3|mpeg|swf|flv|x-flv)$ 1440 40% 40320

refresh_pattern -i \.mp4$   1440   90% 43200
refresh_pattern -i \.(css|js)$ 300 40% 7200
refresh_pattern -i \.(html|htm)$ 300 40% 7200
refresh_pattern (/cgi-bin/|\?) 300 20% 4320
refresh_pattern . 300 40% 40320
#




Amos




- Original Message -
From: "James Ashton"

Hello all,
 I am trying to improve caching/acceleration on a series of WordPress
sites.

Almost all objects are being cached at this point other than the page
HTML itself.
All I am getting there is TCP_MISS/200 log lines.

The request is a GET for the URL  http://planetphotoshop.com

At the moment my response header is:

Cache-Control: max-age=60
Cneonction: close
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 15339
Content-Type: text/html; charset=UTF-8
Date: Fri, 09 Mar 2012 13:58:01 GMT
Server: Apache/2.2.15 (CentOS)
Vary: Accept-Encoding
Via: 1.0 squid1.kelbymediagroup.com (squid)
X-Cache: MISS from squid1.kelbymediagroup.com
X-Cache-Lookup: MISS from squid1.kelbymediagroup.com:80
X-Pingback: http://planetphotoshop.com/xmlrpc.php


I don't see anything preventing caching.

Any thoughts or ideas?

Thank you in advance for the help.

James




Re: [squid-users] requests per second

2012-03-12 Thread Amos Jeffries

On 13.03.2012 02:37, guest01 wrote:

Hi,

We are using Squid as forward-proxy for about 10-20k clients with
about 1200RPS.

In our setup, we are using 4 physical servers (HP ProLiant DL380 
G6/G7

with 16CPU, 32GB RAM) with RHEL5.8 64Bit as OS with a dedicated
hardware loadbalancer. At the moment, the average server load is
approx 0.6.

We are currently using:
Squid Cache: Version 3.1.16
configure options:  '--enable-ssl' '--enable-icap-client'
'--sysconfdir=/etc/squid' '--enable-async-io' '--enable-snmp'
'--enable-poll' '--with-maxfd=32768' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--enable-epoll'
'--disable-ident-lookups' '--enable-truncate'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-default-user=squid' '--prefix=/opt/squid' 
'--enable-auth=basic

digest ntlm negotiate'
'-enable-negotiate-auth-helpers=squid_kerb_auth'
'--with-aufs-threads=32' '--enable-linux-netfilter'
'--enable-external-acl-helpers' --with-squid=/home/squid/squid-3.1.16
--enable-ltdl-convenience

IMHO, it is really important which features you are planning to use.
For example, we are using authentication (kerberos, ntlm, ldap) and
ICAP content adaptation. Without those, our RPS rate would be much
higher. Because of the lack of SMP support in 3.1, we are using 4
instances per server. At the beginning, the setup used to be much
simpler! ;-)



Is that 1200 RPS per-instance? per-server?  or for the combined setup?

Amos
PS. intending to add the details to the benchmarks page.


Re: [squid-users] requests per second

2012-03-12 Thread Amos Jeffries

On 13.03.2012 00:44, Student University wrote:

Hi David 

You achieve 2K with what version of squid ,,,
do you have any special configuration tweaks ,,,

also what if i use SSD [200,000 Random Write 4K IOPS]

Best Regards ,,,
Liley


On Mon, Mar 12, 2012 at 9:59 AM, David B. wrote:

Hi Jenny,

Reverse proxy or not ?
We're using squid as a reverse proxy and only that.



The key point of that statement being "reverse proxy".

Squid speeds are highly variable relative to traffic behaviour.
Reverse-proxies have their own style of traffic pattern, involving a single
or limited number of websites being "reversed"/accelerated, which allows
very high speeds and bandwidth efficiency (HIT ratio) to be
achieved. Other modes of operation face much more variable traffic
patterns and much lower benefits.


Jenny and David face opposite extremes of traffic. David can get high
speeds, over twice what our lab tests show. Jenny faces traffic types
which are not cacheable even in the high-speed short-term RAM cache,
making for speed limits dominated by TCP/IP overheads, network overheads,
and processing overheads. Even the 1K req/sec lab test speeds are well
out of reach on that type of traffic.



"Student University": starting to get the idea of why nobody can simply 
point you at a how-to?


*why* and *what for* strongly determine the speeds you can achieve. You 
need to provide us with quite a few details about your setup to get 
useful help.


The simplest answer to your question is: "To get 5K on a Squid being a
regular proxy involves re-writing every website on the Internet to use
HTTP/1.1 correctly and cache-friendly." Are you up for doing that?


Amos



We can achieve about 2K RPS with our boxes and this server isn't
overloaded...
In fact, it's common hardware: a single dual-core Xeon, some RAM and a
poor RAID 1 disk array without BBU.

I think 5K RPS is possible. :)

David.

Dears ,

how we can achieve 5000 RPS through squid 

Thanks in advance
Liley



In your dreams.

Jenny






Re: [squid-users] requests per second

2012-03-12 Thread Michael Hendrie

On 11/03/2012, at 10:21 PM, Amos Jeffries wrote:

> On 9/03/2012 4:52 a.m., Student University wrote:
>> Hi ,
>> This is Liley ,,,
>> 
>> can anyone tell me what
>> requests per second can squid3 serves ,
>> especially if we run it on the top of a hardware with OCZ RevoDrive 3
>> X2 (200,000 Random Write 4K IOPS)
>> 
>> Thanks in advance .
> 
> These are some performance stats from network admins who have been willing to 
> donate the info publicly:
> http://wiki.squid-cache.org/KnowledgeBase/Benchmarks

How do we post results on the above wiki page?

> 
> As for the OCZ question, Squid has been known to burn through SSDs a lot 
> faster than manufacturer claims of their lifetime. Squid traffic is 
> mostly-write with >50Mbps write peak rates where SSD are manufactured for 
> mostly-read I/O patterns. I've recently been told of one ISP reaching around 
> 100Mbps writes on average with no trouble at all.
> 
> The OCZ is rated well above that, so is unlikely to be a visible bottleneck. 
> You are more likely to be throttled by the speed Squid can parse new 
> requests. Which is CPU bound.
> 
> Amos
> 



Re: [squid-users] requests per second

2012-03-12 Thread Michael Hendrie

On 13/03/2012, at 12:07 AM, guest01 wrote:

> Hi,
> 
> We are using Squid as forward-proxy for about 10-20k clients with
> about 1200RPS.

> 
> IMHO, it is really important which features you are planning to use.
> For example, we are using authentication (kerberos, ntlm, ldap) and
> ICAP content adaptation. Without those, our RPS rate would be much
> higher. Because of the lack of SMP support in 3.1, we are using 4
> instances per server. At the beginning, the setup used to be much
> simpler! ;-)
> 

Also, understanding your traffic throughput (mbps) and cache-hit ratio, and not
just requests/second, is a big factor in scoping required hardware.  When
benchmarking with Web Polygraph, you can see the difference throughput and
cache-hit ratio make to overall server performance.

As an example, one particular server I benchmarked in a forward proxy 
configuration performed as follows:

1200 requests-per-second @ ~350mbps
or 
2700 requests-per-second @ ~200mbps

That was with Polygraph configured to achieve around 15% byte hit ratio.

Changing the byte-hit ratio of the test up to around 40% resulted in a huge
increase in request-rate throughput, due to a lot more content being satisfied
from the high-speed disk array.  A 40% byte-hit ratio wasn't realistic for the
traffic pattern the server was going to see, so that was an unrealistic test result.
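The two operating points quoted above also imply quite different mean response sizes, which is worth back-computing when scoping hardware (rough arithmetic only):

```python
def mean_object_kb(mbps, rps):
    """Average response size (KB) implied by a throughput/request-rate pair."""
    return mbps * 1e6 / rps / 8 / 1024

print(round(mean_object_kb(350, 1200)))  # ~36 KB per response
print(round(mean_object_kb(200, 2700)))  # ~9 KB per response
```

Same box, but each benchmark point exercises a different traffic mix, which is exactly why a single RPS number is a poor sizing guide.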

Who knows what the results would have looked like if I had added auth, a few
ACLs, different refresh patterns, etc.

I think it is very difficult for anyone to answer (other than as a guide)
whether using hardware component X will achieve result Y unless they're using
the exact same hardware (not just one component), the same configuration and
the same traffic patterns.


> hth,
> Peter
> 
> On Mon, Mar 12, 2012 at 1:47 PM, David B.  wrote:
>> Hi,
>> 
>> It's only a reverse proxy cache, not a proxy. This is different.
>> We use squid only for images.
>> 
>> Squid : 3.1.x
>> OS : debian 64 bits
>> 
>>> On 12/03/2012 12:44, Student University wrote:
>>> Hi David 
>>> 
>>> You achieve 2K with what version of squid ,,,
>>> do you have any special configuration tweaks ,,,
>>> 
>>> also what if i use SSD [200,000 Random Write 4K IOPS]
>>> 
>>> Best Regards ,,,
>>> Liley
>>> 



[squid-users] maxconn bug ?

2012-03-12 Thread FredB
Hi all,

maxconn doesn't seem to work with the latest squid, 3.2.0.16

I'm trying 

acl userslimit src 192.168.0.0/16
acl 3conn maxconn 3
http_access deny 3conn userslimit 
client_db on 

grep 192.168.80.194 /var/log/squid/access.log | grep 2012:17:48:43 | wc -l 
10 

And no ban 
Maybe I misconfigured something ? 

I have another question, about deny pages: when I block by
maxconn/port/acl dst/etc., my users get the same DENY page without distinction.
How can I customize the result (one page for dstdomain, one page for maxconn,
one page for ldap ident, etc.)?

Thanks 




[squid-users] How to replace message body

2012-03-12 Thread Ladislav Jech
Hello,

we use Squid as a reverse proxy, used to connect to an internal app
server from internet applications.
Let "internal.domain.local" be the internal name of the server and the default
front-end configured at the application-server level on the internal network.
Let "external.domain.local" be the external name of the server, which is
defined in public DNS and used to access the internal server
from the public internet.

Our internal application server runs Web services; each of these
services includes WSDL and XSD schemas. These schemas use the
internal server name (in this example, internal.domain.local), as
follows:
The WSDL includes an address location:
<soap:address location="http://internal.domain.local:80/MyService"/>
and the WSDL also imports other schemas:
<xsd:import namespace="http://internal.domain.local/MyService/Schema"
schemaLocation="http://internal.domain.local:80/MyService/Schema"/>

But when I access the service from the internet, I would like to change all
URLs in the response body of the app server from internal.domain.local to
external.domain.local, so that everything is consistent from the point of view
of the external user of the web services.

Is squid able to do that? Is there any interface where I can get the full
message body and apply some script to it (bash, perl, java, whatever)?

Thank you very much for any suggestions.

Best regards,

Ladislav Jech
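Whatever hook ends up carrying it (an ICAP/eCAP service or an external script), the rewrite being asked for here is essentially a textual substitution over the response body. A hypothetical helper showing just that transformation, using the hostnames from the message:

```python
def externalize(body: bytes) -> bytes:
    """Rewrite the internal hostname everywhere it appears in a WSDL/XSD
    response body, including imported schemaLocation URLs.
    Illustration only; wiring this into Squid is a separate question."""
    return body.replace(b"internal.domain.local", b"external.domain.local")

wsdl = b'<soap:address location="http://internal.domain.local:80/MyService"/>'
print(externalize(wsdl).decode())
```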


Re: [squid-users] Help with a tcp_miss/200 issue

2012-03-12 Thread James Ashton
Any thoughts guys?

This has me baffled.  I am digging through list archives, but nothing relevant 
so far. 
I figure it has to be a response header issue.  I just don't see it.

Using Squid 3.1.8
Config is:



http_port 80 accel vhost

# Production Servers
cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query no-digest originserver 
login=PASS name=myAccellb round-robin
#
# user
cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query no-digest originserver 
login=PASS name=puserlb round-robin
#
# Training
cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query no-digest originserver 
login=PASS name=ktrain5 round-robin
#
# Ad Server
cache_peer xxx.xxx.xxx.xxx parent 80 0 no-query no-digest originserver 
login=PASS name=adserver1 round-robin
#
acl PURGE method PURGE
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
#acl all src 0.0.0.0/0.0.0.0
#

acl our_sites dstdomain origin.xxx.com
acl our_sites dstdomain streamorigin.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain images.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain media.xxx.com
acl our_sites dstdomain origin-media.xxx.com
acl our_sites dstdomain www.media.xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain portfolio.xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain .com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain members.xxx.com
acl our_sites dstdomain skportfolio.xxx.com
acl our_sites dstdomain xxx.xxx.com
acl our_sites dstdomain mattk.xxx.com
acl our_sites dstdomain planet.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain forum.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain secure.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain xxx.com
acl our_sites dstdomain www.xxx.com
acl our_sites dstdomain secure.xxx.com
acl our_sites dstdomain reviews.xxx.com
acl our_sites dstdomain emails.xxx.com
acl our_sites dstdomain facebook.xxx.com

acl adserver dstdomain cache.ads.xxx.com

acl ktrain dstdomain xxx.com
acl ktrain dstdomain www.xxx.com
acl ktrain dstdomain secure.xxx.com

acl puser dstdomain www.xxx.com
acl puser dstdomain secure.xxx.com
acl puser dstdomain cache.xxx.com
acl puser dstdomain xxx.com

#
http_access allow our_sites
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
#http_access deny all
#
# Main Pool
cache_peer_access myAccellb allow our_sites
cache_peer_access myAccellb deny all
#
#
cache_peer_access ktrain5 allow ktrain
cache_peer_access ktrain5 deny all
#
#
cache_peer_access puserlb allow puser
cache_peer_access puserlb deny all
#
#
cache_peer_access adserver1 allow adserver
cache_peer_access adserver1 deny all
#
#
visible_hostname squid2.kelbymediagroup.com
#
refresh_pattern (phpmyadmin|process|register|login|contact|signup|admin|gateway|ajax|account|cart|checkout|members) 0 10% 0
refresh_pattern (blog|feed) 300 20% 4320
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 40320 75% 86400
refresh_pattern -i \.(iso|avi|wav|mp3|mpeg|swf|flv|x-flv)$ 1440 40% 40320
refresh_pattern -i \.mp4$   1440   90% 43200
refresh_pattern -i \.(css|js)$ 300 40% 7200
refresh_pattern -i \.(html|htm)$ 300 40% 7200
refresh_pattern (/cgi-bin/|\?) 300 20% 4320
refresh_pattern . 300 40% 40320
#
negative_ttl 0 seconds
cache_effective_user squid
cache_mem 4000 MB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
maximum_object_size_in_memory 1 MB  # 28 KB
maximum_object_size 1000 MB
cache_dir aufs /caches/cache1 3 64 256
debug_options ALL,1
cache_store_log none
pipeline_prefetch on
#
#
shutdown_lifetime 1 second
httpd_suppress_version_string on
access_log /var/log/squ

Re: [squid-users] requests per second

2012-03-12 Thread guest01
Hi,

We are using Squid as forward-proxy for about 10-20k clients with
about 1200RPS.

In our setup, we are using 4 physical servers (HP ProLiant DL380 G6/G7
with 16CPU, 32GB RAM) with RHEL5.8 64Bit as OS with a dedicated
hardware loadbalancer. At the moment, the average server load is
approx 0.6.

We are currently using:
Squid Cache: Version 3.1.16
configure options:  '--enable-ssl' '--enable-icap-client'
'--sysconfdir=/etc/squid' '--enable-async-io' '--enable-snmp'
'--enable-poll' '--with-maxfd=32768' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--enable-epoll'
'--disable-ident-lookups' '--enable-truncate'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-default-user=squid' '--prefix=/opt/squid' '--enable-auth=basic
digest ntlm negotiate'
'-enable-negotiate-auth-helpers=squid_kerb_auth'
'--with-aufs-threads=32' '--enable-linux-netfilter'
'--enable-external-acl-helpers' --with-squid=/home/squid/squid-3.1.16
--enable-ltdl-convenience

IMHO, it is really important which features you are planning to use.
For example, we are using authentication (kerberos, ntlm, ldap) and
ICAP content adaptation. Without those, our RPS rate would be much
higher. Because of the lack of SMP support in 3.1, we are using 4
instances per server. At the beginning, the setup used to be much
simpler! ;-)

hth,
Peter

On Mon, Mar 12, 2012 at 1:47 PM, David B.  wrote:
> Hi,
>
> It's only a reverse proxy cache, not a proxy. This is different.
> We use squid only for images.
>
> Squid : 3.1.x
> OS : debian 64 bits
>
> On 12/03/2012 12:44, Student University wrote:
>> Hi David 
>>
>> You achieve 2K with what version of squid ,,,
>> do you have any special configuration tweaks ,,,
>>
>> also what if i use SSD [200,000 Random Write 4K IOPS]
>>
>> Best Regards ,,,
>> Liley
>>


Re: [squid-users] requests per second

2012-03-12 Thread David B.
Hi,

It's only a reverse proxy cache, not a proxy. This is different.
We use squid only for images.

Squid : 3.1.x
OS : debian 64 bits

On 12/03/2012 12:44, Student University wrote:
> Hi David 
>
> You achieve 2K with what version of squid ,,,
> do you have any special configuration tweaks ,,,
>
> also what if i use SSD [200,000 Random Write 4K IOPS]
>
> Best Regards ,,,
> Liley
>


TR: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-03-12 Thread Clem
Progressing in my NTLM/RPC-over-HTTPS research:

The only reverse proxy that can forward NTLM authentication for Outlook
Anywhere with NTLM auth is ISA, and this article describes what
parameters you must set to make it work:
http://blogs.pointbridge.com/Blogs/enger_erik/Pages/Post.aspx?_ID=17

The main parameters are:

. accept all users
and
. no delegation, but client may authenticate directly

So the proxy acts "directly" and sends credentials as if it were the client.

I think squid has to act exactly like ISA to make NTLM auth work; I don't know
if that's possible, as ISA is a Windows proxy server and surely more
comfortable with compatibility.

Regards

Clem

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Thursday, 8 March 2012 14:29
To: Clem
Subject: Re: TR: [squid-users] https analyze, squid rpc proxy to rpc proxy
ii6 exchange2007 with ntlm

On 9/03/2012 2:08 a.m., Clem wrote:
> Ok Amos, so we go back to the same issues; as I told you, I have tested all I
> could with the latest 3.2 beta versions before.
>
> So I'm going back to the type-1 ntlm message issue (see my last messages
> with this subject)
>
> And my last question was :
>
>> I think the link SQUID ->   IIS6 RPC PROXY is represented by the
>> cache_peer line in my squid.conf, and I don't know if the
>> client_persistent_connections and server_persistent_connections
>> parameters affect cache_peer too?

It does.


Amos



[squid-users] RE: Fwd: Fwd: SSLBUMP Issue with SSL websites

2012-03-12 Thread Amit Pasari-Xsinfoways

Dear Amos,
Thank you very much for your time and advice. I have now run into a
second problem:
I have upgraded my squid version to squid3-3.1.18-1.el4.pp; now
https://facebook.com and https://linkedin.com open fine.
But when I open mail.yahoo.com or gmail.com it gives me an error in my
browser:

The page isn't redirecting properly
  Firefox has detected that the server is redirecting the
request for this address in a way that will never complete.

And in access.log I get something like this:
1331553718.561    208 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553718.700    130 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553718.844    134 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553718.994    137 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553719.138    132 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html
1331553719.275    129 192.168.1.102 TCP_MISS/302 751 GET http://login.yahoo.com/config/login_verify2? - DIRECT/202.86.7.110 text/html


Second, can I get the rpm for squid 3.2 for CentOS 4.x?

Amit



RE: [squid-users] Please help debugging proxy problem with GET /

2012-03-12 Thread Федотов А.А.
Thanks, Amos!

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, March 12, 2012 3:29 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Please help debugging proxy problem with GET /

On 12/03/2012 7:32 p.m., Федотов А.А. wrote:
> Hi folks,
> We get the following problem with our Squid configuration. Short GET or
> POST requests do not pass the proxy; the proxy returns a customized
> "invalid request" error message
>
> GET / HTTP/1.0\r\n
> Host: yandex.ru\r\n
> Proxy-Authorization: Basic ***\r\n
> \r\n
>
> The following works:
>
>
> GET http://yandex.ru/ HTTP/1.0\r\n
> Host: yandex.ru\r\n
> Proxy-Authorization: Basic ***\r\n
> \r\n
>
> Where to start digging? Thank you in advance!

With the difference between client->proxy and client->server HTTP formats...

"http://yandex.ru/" is an absolute-URL, for requests handled by HTTP proxies.

 "/" is a URL-path, for requests handled by origin servers and their
surrogates (reverse-proxies). Is that URL-path part of an http://,
https://, ftp://, gopher://, or some other URI format? Only the origin server
or its dedicated surrogate (reverse-proxy) can know that.

Amos


Re: [squid-users] requests per second

2012-03-12 Thread Student University
Hi David 

You achieve 2K with what version of squid ,,,
do you have any special configuration tweaks ,,,

also what if i use SSD [200,000 Random Write 4K IOPS]

Best Regards ,,,
Liley


On Mon, Mar 12, 2012 at 9:59 AM, David B. wrote:
> Hi Jenny,
>
> Reverse proxy or not ?
> We're using squid as a reverse proxy and only that.
> We can achieve about 2K RPS with our boxes and this server isn't
> overloaded...
> In fact, it's common hardware: a single dual-core Xeon, some RAM and a
> poor RAID 1 disk array without BBU.
>
> I think 5K RPS is possible. :)
>
> David.
>>> Dears ,
>>>
>>> how we can achieve 5000 RPS through squid 
>>>
>>> Thanks in advance
>>> Liley
>>
>>
>> In your dreams.
>>
>> Jenny
>


Re: [squid-users] squid 3.1 - endless loop IIS webserver

2012-03-12 Thread Amos Jeffries

On 12/03/2012 6:53 p.m., kadvar wrote:

Hi,

I have searched for other posts with the same problem, but the workarounds
that worked for them didn't work for me. I am trying to configure a squid
reverse proxy with SSL support. I have squid on 192.168.124.41, with apache
on 127.0.0.1 on the same box. I also have two other webservers (1 apache, 1
IIS). Squid is configured to direct any requests for asp pages to IIS and
the rest to the apache machine.

I have also configured squid to use https; the programmer has set up a 302
redirect on the IIS machine so that visiting http://example.com/Login.aspx
redirects to https://example.com/Login.aspx. Squid redirects fine, but after
that gives me "The page isn't redirecting properly". Running wget shows
that squid is going into an endless loop. I have reproduced squid.conf and
the wget output below.

$wget --no-check http://192.168.124.41/Login.aspx
--2012-03-12 11:06:53--  http://192.168.124.41/Login.aspx
Connecting to 192.168.124.41:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://example.com/Login.aspx [following]
--2012-03-12 11:06:53--  https://example.com/Login.aspx
Resolving example.com... 192.168.124.41
Connecting to example.com|192.168.124.41|:443... connected.
WARNING: cannot verify example.com’s certificate, issued by
“/C=IN/ST=AP/L=Default City/O=Default Company
Ltd/CN=example.com/emailAddress=ad...@example.com”:
   Unable to locally verify the issuer’s authority.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://example.com/memberplanet/Login.aspx [following]

and so on..


The problem is that Squid is sending HTTPS traffic to an HTTP port on 
IIS. Requests to origin servers do not include anything specifically 
saying HTTP or HTTPS; the server tells that from the port it is receiving 
the request on.


There is a trick you can add to your squid.conf to split traffic between 
two ports on the IIS peer:




##
squid.conf
#
http_port 192.168.124.41:80 accel defaultsite=example.com

https_port 192.168.124.41:443 accel
cert=/usr/newrprgate/CertAuth/testcert.cert
key=/usr/newrprgate/CertAuth/testkey.pem defaultsite=example.com

acl rx_aspx urlpath_regex -i \.asp[x]*


acl HTTPS proto HTTPS


cache_peer 192.168.124.169 parent 80 0 no-query no-digest originserver
name=aspserver

cache_peer_access aspserver deny HTTPS


cache_peer_access aspserver allow rx_aspx
cache_peer_access aspserver deny all


cache_peer 192.168.124.169 parent 443 0 no-query no-digest originserver 
name=aspserverSSL

cache_peer_access aspserverSSL allow HTTPS rx_aspx
cache_peer_access aspserverSSL deny all




cache_peer 127.0.0.1 parent 80 0 no-query originserver name=wb1
cache_peer_access wb1 deny rx_aspx

acl origin_servers dstdomain .example.com
http_access allow origin_servers
http_access deny all
###

I'd appreciate it if someone could give me some clues as to what I'm doing
wrong.



That should fix the looping.

Amos


Re: [squid-users] Please help debugging proxy problem with GET /

2012-03-12 Thread Amos Jeffries

On 12/03/2012 7:32 p.m., Федотов А.А. wrote:

Hi folks,
We have the following problem with our Squid configuration: short GET or POST requests do not 
pass through the proxy; the proxy returns a customized "invalid request" error message:

GET / HTTP/1.0\r\n
Host: yandex.ru\r\n
Proxy-Authorization: Basic ***\r\n
\r\n

The following works:


GET http://yandex.ru/ HTTP/1.0\r\n
Host: yandex.ru\r\n
Proxy-Authorization: Basic ***\r\n
\r\n

Where to start digging? Thank you in advance!


With the difference between client->proxy and client->server HTTP formats...

"http://yandex.ru/" is an absolute-URL, used in requests handled by HTTP 
proxies.


"/" is a URL-path, used in requests handled by origin servers and their 
surrogates (reverse proxies). By itself it is ambiguous: is it part of an 
http://, https://, ftp://, gopher://, or some other URI? Only the origin 
server or its dedicated surrogate (reverse proxy) can know that.
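
To make the structural difference concrete (a sketch, not from the original thread; the host and path are just the examples discussed above):

```python
# Sketch: the same GET request in origin-server form vs. forward-proxy form.

def origin_form(host, path):
    # What a client sends to an origin server or reverse proxy:
    # the request-line carries only the URL-path.
    return "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)

def proxy_form(host, path):
    # What a client sends to a forward HTTP proxy:
    # the request-line carries the absolute URL, scheme included.
    return "GET http://%s%s HTTP/1.0\r\nHost: %s\r\n\r\n" % (host, path, host)

print(origin_form("yandex.ru", "/").splitlines()[0])  # GET / HTTP/1.0
print(proxy_form("yandex.ru", "/").splitlines()[0])   # GET http://yandex.ru/ HTTP/1.0
```

Only the second form tells the intermediary which scheme and host to contact, which is why a plain "GET /" confuses a forward proxy.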


Amos


Re: [squid-users] external acl code examples

2012-03-12 Thread Amos Jeffries

On 12/03/2012 10:38 p.m., E.S. Rosenberg wrote:

So one thing that is not really clear to me: is the external acl script
running constantly and getting "sent" arguments on its stdin, or is


Exactly so.


the script/program being called every time with the arguments you
define for it?


Amos


[squid-users] Re: Reverse Proxy, OWA RPCoHTTPS and NTLM

2012-03-12 Thread frred
Abdessamad BARAKAT  barakat.fr> writes:

> 
> Hi,
> 
> I try to setup squid as ssl reverse proxy for publishing OWA services 
> (webmail, rpc/http and activesync), now the publish is made by a ISA 
> server and I want to replace this ISA Server.
> 
[..]

Hi, 

Did you resolve your issue? I have the same error.

Regards.




Re: [squid-users] external acl code examples

2012-03-12 Thread E.S. Rosenberg
So one thing that is not really clear to me: is the external acl script
running constantly and getting "sent" arguments on its stdin, or is
the script/program being called every time with the arguments you
define for it?
Thanks,
Eli

2012/2/29 Amos Jeffries :
> On 29.02.2012 01:51, Erwann Pencreach wrote:
>>
>> Hi,
>>
>> I don't really understand the trick with the Id, but I'll have a look
>> at it
>
>
> It's concurrency support, allowing Squid to schedule more than one lookup
> at a time for the helper. You then add concurrency=N, with some value N
> greater than 1, for the number of requests Squid may queue.
>
>
>>
>> I wrote this script because I wasn't able to get authentication
>> information from the distant client or the distant Samba PDC (all the
>> tricks I have found are for a configuration where Squid is on the same
>> host as the PDC). The password doesn't matter, but the username is
>> mandatory. When I have the username, I have some LDAP checks to do, and
>> some whitelists and blacklists to check.
>
>
> Something seems wrong there.
>
> For Squid lookup helpers to validate credentials, the only requirement is
> that the backend accept validation requests from them. In the PDC case there
> may be some security around which servers are allowed to look up user
> credentials; you need to ensure the Squid box (IP? security token?) is in
> that accepted set. It sounds to me like the default security at the PDC is
> to accept localhost connections, but not external servers.
>
> Some of the Squid lookup helpers do need certain tools from Samba to be
> installed (ntlm_auth, winbind or smbclient) in order to run. But those
> tools are not the PDC, only other types of lookup helper.
>
>
> Amos
>


[squid-users] Facebook connection timeout error

2012-03-12 Thread Zaikin Alexander
Since March 7, 2012 our users have been unable to connect to http://facebook.com 
through squid 3.1.16 in non-transparent mode. After the recent troubles with the 
facebook servers the page does not load at all, and squid then generates a 
connection timeout error. Some other pages, like http://ru-ru.facebook.com or 
http://de-de.facebook.com/, work fine.

I am able to connect to http://facebook.com without our proxy, or if I use an 
external anonymizer which allows users to connect to restricted sites. I have 
already tried deleting all restrictions for facebook from my proxy and turning 
the forwarded_for option off, but it had no effect. I am also able to connect 
to facebook directly from the proxy PC via the w3m browser.

Any ideas for what else could be done?


Re: [squid-users] requests per second

2012-03-12 Thread David B.
Hi Jenny,

Reverse proxy or not?
We're using squid as a reverse proxy, and only that.
We can achieve about 2K RPS with our boxes, and the server isn't
overloaded...
In fact, it's common hardware: a single dual-core Xeon, some RAM and a
poor RAID 1 disk array without a BBU.

I think 5K RPS is possible. :)

David.
>> Dears,
>>
>> How can we achieve 5000 RPS through squid?
>>
>> Thanks in advance
>> Liley
>  
>  
> In your dreams.
>  
> Jenny