Re: [squid-users] dynamic content "pattern_refresh".

2012-05-23 Thread Eliezer Croitoru

you can try this tool:
http://redbot.org/
to check what the sites' cacheability options are.
Maybe some objects there need some cache enforcement rules in the
refresh_pattern specified for them.
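A quick command-line alternative to redbot is to fetch just the response headers and look at the caching directives yourself. A sketch, assuming curl is available (the URL is one of the sites from the original post):

```shell
# -s silent, -I HEAD request: show only the caching-related response headers
curl -sI http://www.frontera.info/ | grep -iE 'cache-control|expires|vary|last-modified'
```

A `Cache-Control: private` or `no-store` line here is usually why squid logs a MISS.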


Eliezer

On 23/05/2012 02:54, Beto Moreno wrote:

  I have been working on the settings:

  refresh_pattern.

  The docs say this is better for newer websites that use dynamic
content, and a friend here on the list explained the difference to me.

  My test was simple:

  use 2 browsers: firefox/iexplore.
  Run the test twice for each site.

  first run
  firefox site1, site2,site3,site4
  iexplore site1, site2,site3,site4

  run ccleaner, repeat the test.

  run srg to get my squid-cache performance reports, and free-sa.

  There were 3 settings I tried, running the same test for each.

  NOTE: every time I started a new setting, I deleted my cache, cleaned my logs
and started from 0.

  setting 1 default settings
  acl QUERY urlpath_regex cgi-bin \?
  cache deny QUERY

  setting 2  new way:
  disable the old way:

  #acl QUERY urlpath_regex cgi-bin \?
  #cache deny QUERY
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320

   setting 3:

refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern . 0 20% 4320
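For intuition about what those three numbers do: min, percent, and max (min and max in minutes) control the heuristic freshness lifetime when a response carries no explicit expiry. The sketch below is a simplified model of squid's heuristic, not its exact algorithm:

```python
def freshness_lifetime(min_minutes, percent, max_minutes, lm_age_minutes):
    """Rough model: the heuristic lifetime is `percent` of the object's age
    at fetch time (now - Last-Modified), clamped between min and max."""
    heuristic = lm_age_minutes * percent / 100.0
    return max(min_minutes, min(max_minutes, heuristic))

# The catch-all 'refresh_pattern . 0 20% 4320' on a 1-day-old object:
print(freshness_lifetime(0, 20, 4320, 1440))    # 288.0 minutes fresh
# The image rule '10080 90% 43200' guarantees at least a week:
print(freshness_lifetime(10080, 90, 43200, 60))  # 10080
```

This is why '0 0% 0' makes cgi-bin/query URLs always stale, while a large min forces caching even of objects with a very recent Last-Modified.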

  Then after I finished my tests I started reviewing my logs to compare.
The sites I used were:

yahoo.com
osnews.com
frontera.info (local newspaper)
noticiasmvs.com
centos.org

  I didn't interact with the sites, just went to the first page, let it finish
loading, and continued with the next one.

Once I checked my reports I didn't see much difference; I found just
1 log entry that the old way didn't "cache", check:

settings 2/3 have this:

1337667655.898  0 192.168.50.100 TCP_MEM_HIT/200 21280 GET
http://www.frontera.info/WebResource.axd? - NONE/-
application/x-javascript

setting 1 got TCP_MISS.

Example from part of my logs:

1337667655.596 43 192.168.50.100 TCP_MISS/302 603 GET
http://frontera.info/ - DIRECT/216.240.181.163 text/html
1337667655.748 54 192.168.50.100 TCP_MISS/200 1454 GET
http://www.frontera.info/HojasEstilos/Horoscopos.css -
DIRECT/216.240.181.163 text/css
1337667655.749 52 192.168.50.100 TCP_MISS/200 1740 GET
http://www.frontera.info/Includes/Controles/LosEconomicos.css -
DIRECT/216.240.181.163 text/css
1337667655.749 49 192.168.50.100 TCP_MISS/200 1557 GET
http://www.frontera.info/Includes/Controles/ReporteroCiudadano.css -
DIRECT/216.240.181.163 text/css
1337667655.754 54 192.168.50.100 TCP_MISS/200 1697 GET
http://www.frontera.info/Includes/Controles/Elementos.css -
DIRECT/216.240.181.163 text/css
1337667655.780 24 192.168.50.100 TCP_MISS/200 1406 GET
http://www.frontera.info/Includes/Controles/Finanzas.css -
DIRECT/216.240.181.163 text/css
1337667655.817    124 192.168.50.100 TCP_MISS/200 21639 GET
http://www.frontera.info/HojasEstilos/Estilos2009.css -
DIRECT/216.240.181.163 text/css
1337667655.898  0 192.168.50.100 TCP_MEM_HIT/200 21280 GET
http://www.frontera.info/WebResource.axd? - NONE/-
application/x-javascript
1337667655.903 20 192.168.50.100 TCP_MISS/200 1356 GET
http://www.frontera.info/Interactivos/lib/jquery.jcarousel.css -
DIRECT/216.240.181.163 text/css
1337667655.907    308 192.168.50.100 TCP_MISS/200 116552 GET
http://www.frontera.info/Home.aspx - DIRECT/216.240.181.163 text/html
1337667655.935 23 192.168.50.100 TCP_MISS/200 3934 GET
http://www.frontera.info/Interactivos/skins/fotos/skin.css -
DIRECT/216.240.181.163 text/css
1337667655.966 27 192.168.50.100 TCP_MISS/200 3995 GET
http://www.frontera.info/Interactivos/skins/elementos/skin.css -
DIRECT/216.240.181.163 text/css
1337667655.971 23 192.168.50.100 TCP_MISS/200 4260 GET
http://www.frontera.info/HojasEstilos/ui.tabs.css -
DIRECT/216.240.181.163 text/css
1337667655.972 24 192.168.50.100 TCP_MISS/200 4953 GET
http://www.frontera.info/HojasEstilos/thickbox.css -
DIRECT/216.240.181.163 text/css
1337667655.993 21 192.168.50.100 TCP_MISS/200 4380 GET
http://www.frontera.info/js/finanzas.js - DIRECT/216.240.181.163
application/x-javascript
1337667655.997 47 192.168.50.100 TCP_MISS/200 9341 GET
http://www.frontera.info/Interactivos/lib/jquery.jcarousel.pack.js -
DIRECT/216.240.181.163 application/x-javascript
1337667656.023 25 192.168.50.100 TCP_MISS/200 4239 GET
http://www.frontera.info/videos/external_script.js -
DIRECT/216.240.181.163 application/x-javascript

All 3 settings gave the same TCP_MISS.

I was thinking that maybe I would get more TCP_HIT/TCP_MEM_HIT, but no.
noticiasmvs.com got a lot of HITs, but the same with all 3 settings.

Does this site disable caching? Is there a way to find out?
What could cause me to still get a lot of MISSes?
Were my settings wrong?
Was my test not done the best way?
How can I see if these new settings make a difference?

Any input will be appreciated, thanks for your time!!!

I'm using

Re: [squid-users] can't access cachemgr

2012-05-23 Thread Jeff MacDonald
Hi,

I can't put the access rules above the acl definitions, if that was what you 
meant. But in case that isn't what you meant, I did re-order them a bit and this 
is what I have now. Still no access.

FYI, I'm trying to access it using the cache manager CGI, which runs on the same 
server.

root@proxy:~# !gre
grep -e ^acl -e ^http_acc /etc/squid3/squid.conf
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl westhants proxy_auth REQUIRED
acl westhants-network src 192.168.11.0/24
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow westhants
http_access allow localhost
http_access allow westhants-network
http_access allow manager localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
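Note that the proxy_auth rule (`allow westhants`) still sits above the manager rule in this listing. Since an allow line whose ACL is proxy_auth makes squid send the 407 challenge before later lines are reached, an ordering along these lines usually lets the (unauthenticated) cachemgr CGI through. This is a sketch of the ordering, not a tested config:

```
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow westhants
http_access allow westhants-network
http_access deny all
```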

--
Jeff MacDonald
j...@terida.com
902 880 7375

On 2012-05-02, at 12:28 PM, Eliezer Croitoru wrote:

> On 02/05/2012 17:37, Jeff MacDonald wrote:
>> Hi,
>> 
>> I've seen this similar issue for a lot of people around the web, and have 
>> tried my best to debug my access rules.
>> 
>> The error message I get is :
>> 
>> 1335968823.335  8 127.0.0.1 TCP_DENIED/407 2201 GET 
>> cache_object://localhost/ j...@bignose.ca NONE/- text/html
>> 
>> I'm pretty sure I'm missing something miniscule, but need help finding it.
>> 
>> Here are my access rules in my squid.conf
> 
> try to move the access rules of the manager to the top and move down the auth 
> access rule
> 
> http_access allow manager localhost
> http_access allow manager example
> http_access allow westhants
> 
> by the way how are you trying to access  the cache_object?
> using squidclient ?
> i'm using the basic config files on opensuse 12.1 with squid 3.1.16 and it 
> seems to work like that.
> sample :
> squidclient  cache_object://localhost/client_list
> 
> Eliezer
> 
>> 
>> root@proxy:/etc/squid3# grep -e ^acl -e ^http_acc /etc/squid3/squid.conf
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32
>> acl example src 192.168.11.16/32
>> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
>> acl westhants proxy_auth REQUIRED
>> http_access allow westhants
>> http_access allow manager localhost
>> http_access allow manager example
>> http_access deny all
>> acl westhants-network src 192.168.11.0/24
>> acl SSL_ports port 443
>> acl Safe_ports port 80  # http
>> acl Safe_ports port 21  # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70  # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535  # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow localhost
>> http_access allow westhants-network
>> http_access deny all
>> 
>> Thanks!
>> 
>> --
>> Jeff MacDonald
>> j...@terida.com
>> 902 880 7375
>> 
> 
> 
> -- 
> Eliezer Croitoru
> https://www1.ngtech.co.il
> IT consulting for Nonprofit organizations
> eliezer  ngtech.co.il



[squid-users] computerName logged for sAMAccountName

2012-05-23 Thread Diersen, Dustyn [DAS]
I have squid running with SquidGuard using Active Directory for LDAP 
authentication. The problem I am seeing is the use of the AD attribute 
sAMAccountName for both userName and computerName. I thought I had a fix by 
adding sAMAccountType to my following squid_ldap_auth helper, but I am still 
seeing numerous computerNames rather than userNames being logged. The REAL 
problem is ACL matching, as I never know what I will be receiving from my users 
and do not wish to include computerName in my userlists.  I have tested adding 
a couple of computerNames to the userlist which resolves blocked access 
messages for users with specialized access requirements.

Here is my current LDAP helper string:
auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -R -b 
"dc=base,dc=domain,dc=in,dc=our,dc=AD" -s sub -D "BASE\\user" -W 
"/squidGuard/filename" -f 
"(&(&(objectCategory=person)(sAMAccountName=%s)(sAMAccountType=805306368)))" -u 
sAMAccountName -P -v3 -Hldap://domain.com
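For what it's worth, sAMAccountType=805306368 should already match only normal user objects (machine accounts are 805306369), and computer sAMAccountNames always end with '$'. If computer names are still being logged, a filter that explicitly excludes trailing-'$' names may help; this is an untested sketch, and it also collapses the redundant doubled (&(& from the string above:

```
(&(objectCategory=person)(objectClass=user)(sAMAccountName=%s)(!(sAMAccountName=*$))(sAMAccountType=805306368))
```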

I have been searching for a solution to this problem for more than a week, but 
have been unable to find one that works in my environment.

-Dustyn


[squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-23 Thread Ruiyuan Jiang
Hi, when I tried to test accessing the MS Exchange server, Outlook just kept 
prompting for the user name and password, without luck. Here is the message from 
squid's access.log from the test:

1337803935.354  6 207.46.14.62 TCP_MISS/200 294 RPC_IN_DATA 
https://webmail.juicycouture.com/Rpc/RpcProxy.dll - PINNED/exchangeServer 
application/rpc
1337803937.876  6 207.46.14.62 TCP_MISS/401 666 RPC_IN_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/exchangeServer text/html
1337803937.965 11 207.46.14.62 TCP_MISS/401 389 RPC_IN_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/exchangeServer text/html
1337803938.144  6 207.46.14.62 TCP_MISS/401 666 RPC_OUT_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/exchangeServer text/html
1337803938.229  6 207.46.14.62 TCP_MISS/401 389 RPC_OUT_DATA 
https://webmail.juicycouture.com/rpc/rpcproxy.dll? - 
FIRST_UP_PARENT/exchangeServer text/html


Here is my squid.conf for the test:

https_port 156.146.2.196:443 accel 
cert=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.crt 
key=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.key 
cafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt 
defaultsite=webmail.juicycouture.com

cache_peer internal_ex_serv parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=exchangeServer

acl EXCH dstdomain .juicycouture.com

cache_peer_access exchangeServer allow EXCH
cache_peer_access exchangeServer deny all
never_direct allow EXCH

http_access allow EXCH
http_access deny all
miss_access allow EXCH
miss_access deny all


Where did I go wrong? I also tried a different squid.conf (basically removed all 
the ACLs) but got the same messages in access.log:

https_port 156.146.2.196:443 accel 
cert=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.crt 
key=/opt/squid-3.1.19/ssl.crt/webmail_juicycouture_com.key 
cafile=/opt/apache2.2.21/conf/ssl.crt/DigiCertCA.crt 
defaultsite=webmail.juicycouture.com

cache_peer internal_ex_serv parent 443 0 no-query originserver login=PASS ssl 
sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=exchangeServer

cache_peer_access exchangeServer allow all

http_access allow all
miss_access allow all

Thanks.

Ryan Jiang



This message (including any attachments) is intended
solely for the specific individual(s) or entity(ies) named
above, and may contain legally privileged and
confidential information. If you are not the intended 
recipient, please notify the sender immediately by 
replying to this message and then delete it.
Any disclosure, copying, or distribution of this message,
or the taking of any action based on it, by other than the
intended recipient, is strictly prohibited.



[squid-users] squid slow response time

2012-05-23 Thread Ali Esf
Hello list, and hello dear Amos.
Thanks for your help.
Some of my problems with squid are solved, but some are not.

I compared squid on CentOS Linux 5.8 with CCProxy on Microsoft Windows Server 
2003, and found that CCProxy is faster than squid on a machine with the same 
specification and supports more users.

I captured screenshots of CCProxy and of squid.

http://up98.org/upload/server1/02/j/bpufq054uyf1qeamraj.jpg

The above picture shows CCProxy on Windows. As you can see, it supports 64 users 
and 1264 connections, and even more.

http://up98.org/upload/server1/02/j/kqlr5fcr2fvk1jafqva4.jpg

The above picture shows (via the netstat command) port 9090, which is configured 
as squid's HTTP proxy port.
It shows there are 574 connections through port 9090 and squid.

http://up98.org/upload/server1/02/j/hprnte4gldvsylb19xf.jpg

The above picture shows the number of users on port 9090, which is 37 users.


When the number of users increases, squid's response time becomes so slow 
that it sometimes takes 11 - 15 seconds to load the Google web page.
But I tested that the speed of downloading files through squid is great; the 
problem is loading pages once users get to around 40.

In CCProxy, even with 64 users and more, the speed of loading pages is 
great. It is as if there were no proxy at all.


The machines' specifications are the same:
ram = 1 GB
port = 1 Gbps 
cpu = Intel(R) Xeon(R) CPU           E5620  @ 2.40GHz, 2 cores
os = CentOS Linux 5.8
hard disk space = 30 GB

We use squid just as a proxy and not for caching, and we need authentication just 
by user name and password through a MySQL database.
here is the configuration::



cache deny all
#
# Recommended minimum configuration:
#
auth_param basic program /usr/local/squid/libexec/squid_db_auth --user 
squid_user --password user_password --plaintext --persist
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl user_pass_auth proxy_auth REQUIRED



# replace 10.0.0.1 with your webserver IP




#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports
#http_access deny CONNECT !SSL_ports


# Deny CONNECT to other than secure SSL ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow user_pass_auth
http_access deny all

access_log none

cache_store_log none

cache_log /dev/null

dns_nameservers 4.2.2.4 8.8.8.8

# And finally deny all other access to this proxy

# Squid normally listens to port 3128
http_port 9090

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /usr/local/squid/var/cache 100 16 256

# Leave coredumps in the first cache dir
#coredump_dir /usr/local/squid/var/cache

visible_hostname www.amirvpn.in

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:    1440    20%    10080
refresh_pattern ^gopher:    1440    0%    1440
refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
refresh_pattern .    0    20%    4320
cache_effective_user squid
cache_effective_group squid

cache_mem 800 MB
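With every request authenticated against MySQL, a common bottleneck at ~40 concurrent users is the basic-auth helper pool rather than squid itself. Two standard directives worth trying; the values here are guesses to experiment with, not recommendations:

```
# More concurrent helper processes so requests don't queue on DB lookups
auth_param basic children 20
# Cache verified credentials so each user hits the database rarely
auth_param basic credentialsttl 1 hour
```

Also note that cache_mem 800 MB on a 1 GB machine leaves very little memory for squid's other structures and the OS, which can push the box into swap under load.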


regards ali



Re: [squid-users] dynamic content "pattern_refresh".

2012-05-23 Thread Beto Moreno
Hi, thanks for your info.

I have tried that tool; I just need to understand the output:

frontera.info says:

General
The server's clock is 3 min 58 sec behind.
Content Negotiation
The resource doesn't send Vary consistently.
The server's clock is 3 min 58 sec behind.
Content negotiation for gzip compression is supported, saving 42%.
The server's clock is 3 min 58 sec behind.
Caching
This response only allows a private cache to store it.
This response allows a cache to assign its own freshness lifetime.

Now the embedded objects:

Problems
The server's clock is 3 min 58 sec behind.
This response allows a cache to assign its own freshness lifetime.
The resource doesn't send Vary consistently.
The Content-Disposition header doesn't have a 'filename' parameter.
Cache-Control: public is rarely necessary.
The If-Modified-Since response is missing required headers.

Now yahoo.com

General
The server's clock is correct.
Caching

This response only allows a private cache to store it.
This response allows a cache to assign its own freshness lifetime.

Now: noticiasmvs.com

General
The server's clock is correct.
The Content-Length header is correct.

Content Negotiation
Content negotiation for gzip compression is supported, saving 22%.

Caching
This response allows all caches to store it.
This response allows a cache to assign its own freshness lifetime.

What I understand is that yahoo/frontera won't let squid store some of
their data, and noticiasmvs is open for squid, right?

I would very much appreciate it if someone could explain this output a
little more; I want to go deeper with squid. What can we do in this
situation (private cache)?

Thanks!!!
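Roughly, yes: `Cache-Control: private` tells shared caches like squid not to store the response, which fits frontera/yahoo mostly logging MISS while noticiasmvs ("allows all caches") can HIT. If your squid version supports them, refresh_pattern has violation options that override this; a hedged example (this deliberately breaks HTTP semantics and risks serving one user's private content to another, so use with great care):

```
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 ignore-private override-expire
```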

On Wed, May 23, 2012 at 2:57 AM, Eliezer Croitoru  wrote:
> you can try this tool:
> http://redbot.org/
> to check what the sites' cacheability options are.
> Maybe some objects there need some cache enforcement rules in the
> refresh_pattern specified for them.
>
> Eliezer
>

Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-23 Thread Clem

Hello Ruiyuan,

Which auth have you set in your Outlook Anywhere settings? Squid works 
fine with Basic but has big troubles with NTLM.


regards

Clem
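If NTLM must stay enabled on Outlook Anywhere, the usual squid-side knob is connection-based auth passthrough on a pinned connection. An untested sketch, assuming squid 3.1's `connection-auth` option on both the listening port and the peer (add it to the existing lines; cert=.../key=... stand for the paths already in the config):

```
https_port 156.146.2.196:443 accel connection-auth=on cert=... key=... defaultsite=webmail.juicycouture.com
cache_peer internal_ex_serv parent 443 0 no-query originserver login=PASS connection-auth=on ssl sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=exchangeServer
```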

On 23/05/2012 22:38, Ruiyuan Jiang wrote:
