[squid-users] Cache-Control is ignored upon 202 (accept) response (re-post)

2014-02-16 Thread Boaz Citrin
Hello,

I want to invalidate the cache when my server returns 202 to a PUT
request; however, it seems that Squid ignores the Cache-Control header
when the return code is 202.

I tried setting must-revalidate, and even returning no Cache-Control
header at all, but none of these updates the cache, so a subsequent GET
request results in a HIT where I expect a MISS.
Note that the Vary header is not present in the response.

I am using 2.7.STABLE8 on Windows. No patch, just the installer from
http://sourceforge.net/projects/squidwindowsmsi/

I am also NOT using the Store ID/URL features, and probably not SMP
either, as I am not familiar with it.

I wonder if there is any way to tweak the config to force caching of a
202 response.

Thanks,

Boaz


Re: [squid-users] Cache-Control is ignored upon 202 (accept) response (re-post)

2014-02-16 Thread Amos Jeffries
On 16/02/2014 10:41 p.m., Boaz Citrin wrote:
> Hello,
> 
> I want to invalidate the cache when my server returns 202 to a PUT
> request; however, it seems that Squid ignores the Cache-Control header
> when the return code is 202.

From the latest specification
http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-4.3.4

"
   Responses to the PUT method are not cacheable.  If a successful PUT
   request passes through a cache that has one or more stored responses
   for the effective request URI, those stored responses will be
   invalidated (see Section 4.4 of [Part6]).
"

> 
> I tried setting must-revalidate, and even returning no Cache-Control
> header at all, but none of these updates the cache, so a subsequent GET
> request results in a HIT where I expect a MISS.
> Note that the Vary header is not present in the response.
> 
> I am using 2.7.STABLE8 on Windows. No patch, just the installer from
> http://sourceforge.net/projects/squidwindowsmsi/


Your version of Squid is known to be only ~60% compliant with the
HTTP/1.1 specification and it seems this is possibly one of the missing
bits. Please consider an upgrade (sadly that means using a non-Windows
server for Squid).

Amos



Re: [squid-users] Cache-Control is ignored upon 202 (accept) response (re-post)

2014-02-16 Thread Boaz Citrin
Sorry, I confused PUT with DELETE. Actually, it is the DELETE request
that doesn't invalidate the cache on a 202 response.
This doesn't conform to the spec:
"If a DELETE
   request passes through a cache that has one or more stored responses
   for the effective request URI, those stored responses will be
   invalidated"

Any idea?

On Sun, Feb 16, 2014 at 12:42 PM, Amos Jeffries  wrote:
> On 16/02/2014 10:41 p.m., Boaz Citrin wrote:
>> Hello,
>>
>> I want to invalidate the cache when my server returns 202 to a PUT
>> request; however, it seems that Squid ignores the Cache-Control header
>> when the return code is 202.
>
> From the latest specification
> http://tools.ietf.org/html/draft-ietf-httpbis-p2-semantics-26#section-4.3.4
>
> "
>Responses to the PUT method are not cacheable.  If a successful PUT
>request passes through a cache that has one or more stored responses
>for the effective request URI, those stored responses will be
>invalidated (see Section 4.4 of [Part6]).
> "
>
>>
>> I tried setting must-revalidate, and even returning no Cache-Control
>> header at all, but none of these updates the cache, so a subsequent GET
>> request results in a HIT where I expect a MISS.
>> Note that the Vary header is not present in the response.
>>
>> I am using 2.7.STABLE8 on Windows. No patch, just the installer from
>> http://sourceforge.net/projects/squidwindowsmsi/
>
>
> Your version of Squid is known to be only ~60% compliant with the
> HTTP/1.1 specification and it seems this is possibly one of the missing
> bits. Please consider an upgrade (sadly that means using a non-Windows
> server for Squid).
>
> Amos
>


[squid-users] Worker I/O push queue overflow: ipcIo7.30506r9

2014-02-16 Thread Dr.x
Hi all,
I have implemented aufs together with rock, and I got the error below in
the rock cache.log.

What does it mean?

I have done the following:

I have 5 hard disks with aufs dirs and
2 hard disks with rock dirs.

My config is as below:

workers 8
#dns_v4_first on
cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9 cores=2,4,6,8,10,12,14,16,18
#
if ${process_number} = 4
include /etc/squid/aufs1.conf
endif
###
if ${process_number} = 2
include /etc/squid/aufs2.conf
endif

if ${process_number} = 6
include /etc/squid/aufs3.conf
endif
#
if ${process_number} = 7
include /etc/squid/aufs4.conf
endif
#
if ${process_number} = 8
include /etc/squid/aufs5.conf
endif
===

Each aufsN.conf has an aufs cache_dir in it.
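
For reference, each of those per-worker includes would normally hold just
that worker's own cache_dir. A minimal sketch of what a hypothetical
aufs1.conf might contain (the path and sizes here are illustrative, not
taken from this thread):

```
# /etc/squid/aufs1.conf -- hypothetical example
# cache_dir aufs <path> <size-MB> <L1-dirs> <L2-dirs>
cache_dir aufs /cache1 100000 16 256
```

Because each cache_dir is wrapped in an `if ${process_number}` test, only
one worker ever opens it, which avoids multiple workers sharing a
non-SMP-aware aufs store.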

But after all of that I still have low bandwidth savings.
Are these errors harmful?

Worker I/O push queue overflow: ipcIo7.30506r9


regards



-
Dr.x
--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Worker-I-O-push-queue-overflow-ipcIo7-30506r9-tp4664857.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Worker I/O push queue overflow: ipcIo7.30506r9

2014-02-16 Thread Eliezer Croitoru

On 02/16/2014 05:05 PM, Dr.x wrote:

cpu_affinity_map process_numbers=1,2,3,4,5,6,7,8,9
cores=2,4,6,8,10,12,14,16,18

Just wondering, what would that even do?
If one of the CPUs/cores is idle there is not much you can do.
It should be idle so as not to consume power; it is not as if it will
consume power when it is not working.

Eliezer


Re: Fwd: [squid-users] Performance tuning of SMP + Large rock

2014-02-16 Thread Rajiv Desai
On Sat, Feb 15, 2014 at 10:12 AM, Dr.x  wrote:
> @Rajiv Desai
>
> have you found an increase in bandwidth savings when you used large rock?

Yes. Large rock works pretty well with multiple SMP workers (in my
limited experience over the past 5 days).
I get an 85% hit rate for a previously read (and thereby cached) dataset.
I will be looking into why there are 15% misses, because theoretically
the hit rate should be 100% for this test.

> if so,
> how much difference did you find?

Difference as compared to what? I believe rock store is the only
SMP-aware cache, as stated in the documentation.
I did try aufs initially, but the hit rate was very poor, as expected
with multiple workers.
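
For context, the kind of setup being described here — several workers
sharing one large-rock store — can be sketched in squid.conf like this
(worker count, path and sizes are illustrative only):

```
workers 4
# One rock store shared by all workers. Raising max-size past rock's
# traditional 32 KB object limit is what is usually meant by "large rock".
cache_dir rock /mnt/squid-cache 256000 max-size=4194304
```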

>
> regards
>
>
>
> -
> Dr.x


[squid-users] Seemingly incorrect behavior: squid cache getting filled up on PUT requests

2014-02-16 Thread Rajiv Desai
I am using Squid Cache:
Version 3.HEAD-20140127-r13248

My cache dir is configured to use rock (Large rock with SMP):
cache_dir rock /mnt/squid-cache 256000 max-size=4194304

My refresh pattern is permissive to cache all objects:
refresh_pattern . 129600 100% 129600 ignore-auth

I uploaded 30 GB of data via squid cache with PUT requests.
From the storedir stats (squidclient mgr:storedir) it seems like each PUT
is occupying 1 slot in the rock cache.

Is this a known bug? PUT requests should not increase cache usage, right?


Stats:

by kid9 {

Store Directory Statistics:

Store Entries  : 53

Maximum Swap Size  : 209715200 KB

Current Store Swap Size: 8960416.00 KB

Current Capacity   : 4.27% used, 95.73% free


Store Directory #0 (rock): /mnt/squid-cache

FS Block Size 1024 Bytes


Maximum Size: 209715200 KB

Current Size: 8960416.00 KB 4.27%

Maximum entries:  13107199

Current entries:560025 4.27%

Used slots: 560025 4.27%

Pending operations: 1 out of 0

Flags:

} by kid9
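
A quick arithmetic check (not from the original mail) shows these numbers
are self-consistent if one assumes rock's default 16 KB slot size — the
~8.9 GB "Current Size" is almost exactly what 560025 occupied slots would
account for:

```shell
# Cross-check the mgr:storedir numbers, assuming 16 KB rock slots.
SLOT_KB=16          # assumed default rock slot size
MAX_KB=209715200    # "Maximum Swap Size" reported above
USED_SLOTS=560025   # "Used slots" reported above

# The map appears to reserve one slot, hence the "- 1".
echo $(( MAX_KB / SLOT_KB - 1 ))   # 13107199 -- matches "Maximum entries"
echo $(( USED_SLOTS * SLOT_KB ))   # 8960400 KB, within one slot of "Current Size"
```

So the store really does hold roughly one slot per uploaded object,
consistent with the report that each PUT occupied a slot.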


Thanks,
Rajiv


Re: [squid-users] Seemingly incorrect behavior: squid cache getting filled up on PUT requests

2014-02-16 Thread Amos Jeffries
On 17/02/2014 11:41 a.m., Rajiv Desai wrote:
> I am using Squid Cache:
> Version 3.HEAD-20140127-r13248
> 
> My cache dir is configured to use rock (Large rock with SMP):
> cache_dir rock /mnt/squid-cache 256000 max-size=4194304
> 
> My refresh pattern is permissive to cache all objects:
> refresh_pattern . 129600 100% 129600 ignore-auth
> 
> I uploaded 30 GB of data via squid cache with PUT requests.
> From storedir stats(squidclient mgr:storedir) it seems like each PUT
> is occupying 1 slot in rock cache.
> 
> Is this a known bug? PUT requests should not increase cache usage right?
> 
> 
> Stats:
> 
> by kid9 {
> 
> Store Directory Statistics:
> 
> Store Entries  : 53
> 


How many objects were in that 30 GB of PUT requests?

That 53 looks more like the icons loaded by Squid for use in error pages
and ftp:// directory listings.

Amos



Re: [squid-users] Seemingly incorrect behavior: squid cache getting filled up on PUT requests

2014-02-16 Thread Rajiv Desai
On Sun, Feb 16, 2014 at 3:39 PM, Amos Jeffries  wrote:
> On 17/02/2014 11:41 a.m., Rajiv Desai wrote:
>> I am using Squid Cache:
>> Version 3.HEAD-20140127-r13248
>>
>> My cache dir is configured to use rock (Large rock with SMP):
>> cache_dir rock /mnt/squid-cache 256000 max-size=4194304
>>
>> My refresh pattern is permissive to cache all objects:
>> refresh_pattern . 129600 100% 129600 ignore-auth
>>
>> I uploaded 30 GB of data via squid cache with PUT requests.
>> From storedir stats(squidclient mgr:storedir) it seems like each PUT
>> is occupying 1 slot in rock cache.
>>
>> Is this a known bug? PUT requests should not increase cache usage right?
>>
>>
>> Stats:
>>
>> by kid9 {
>>
>> Store Directory Statistics:
>>
>> Store Entries  : 53
>>
>
>
> How many objects were in that 30 GB of PUT requests?
>
> That 53 looks more like the icons loaded by Squid for use in error pages
> and ftp:// directory listings.
>

572557 objects were uploaded with PUT requests.
I was looking at current size and used slots to interpret current
cache occupancy. Perhaps I am interpreting these incorrectly?

Current Size: 8960416.00 KB 4.27%
Current entries:560025 4.27%
Used slots: 560025 4.27%

> Amos
>



[squid-users] squid 3.4.3 on Solaris Sparc

2014-02-16 Thread Monah Baki
uname -a
SunOS proxy 5.11 11.1 sun4v sparc SUNW,SPARC-Enterprise-T5220

Here are the steps before it fails

./configure --prefix=/usr/local/squid --enable-async-io
--enable-cache-digests --enable-underscores --enable-pthreads
--enable-storeio=ufs,aufs --enable-removal-policies=lru,
heap

make

c -I../../../include   -I/usr/include/gssapi -I/usr/include/kerberosv5
   -I/usr/include/gssapi -I/usr/include/kerberosv5 -Wall
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
-D_REENTRANT -pthreads -g -O2 -std=c++0x -MT ext_session_acl.o -MD -MP
-MF .deps/ext_session_acl.Tpo -c -o ext_session_acl.o
ext_session_acl.cc
mv -f .deps/ext_session_acl.Tpo .deps/ext_session_acl.Po
/bin/sh ../../../libtool --tag=CXX--mode=link g++ -Wall
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
-D_REENTRANT -pthreads -g -O2 -std=c++0x   -g -o ext_session_acl
ext_session_acl.o ../../../compat/libcompat-squid.la
libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments
-Wshadow -Werror -pipe -D_REENTRANT -pthreads -g -O2 -std=c++0x -g -o
ext_session_acl ext_session_acl.o
../../../compat/.libs/libcompat-squid.a -pthreads
Undefined   first referenced
 symbol in file
db_create   ext_session_acl.o
db_env_create   ext_session_acl.o
ld: fatal: symbol referencing errors. No output written to ext_session_acl
collect2: ld returned 1 exit status
*** Error code 1
make: Fatal error: Command failed for target `ext_session_acl'
Current working directory /home/mbaki/squid-3.4.3/helpers/external_acl/session
*** Error code 1
The following command caused the error:
fail= failcom='exit 1'; \
for f in x $MAKEFLAGS; do \
  case $f in \
*=* | --[!k]*);; \
*k*) failcom='fail=yes';; \
  esac; \
done; \
dot_seen=no; \
target=`echo all-recursive | sed s/-recursive//`; \
list='LDAP_group SQL_session eDirectory_userip file_userip
kerberos_ldap_group session unix_group wbinfo_group'; for subdir in
$list; do \
  echo "Making $target in $subdir"; \
  if test "$subdir" = "."; then \
dot_seen=yes; \
local_target="$target-am"; \
  else \
local_target="$target"; \
  fi; \
  (CDPATH="${ZSH_VERSION+.}:" && cd $subdir && make  $local_target) \
  || eval $failcom; \
done; \
if test "$dot_seen" = "no"; then \
  make  "$target-am" || exit 1; \
fi; test -z "$fail"
make: Fatal error: Command failed for target `all-recursive'
Current working directory /home/mbaki/squid-3.4.3/helpers/external_acl
*** Error code 1
The following command caused the error:
fail= failcom='exit 1'; \
for f in x $MAKEFLAGS; do \
  case $f in \
*=* | --[!k]*);; \
*k*) failcom='fail=yes';; \
  esac; \
done; \
dot_seen=no; \
target=`echo all-recursive | sed s/-recursive//`; \
list='basic_auth digest_auth external_acl log_daemon  negotiate_auth
url_rewrite storeid_rewrite ntlm_auth  '; for subdir in $list; do \
  echo "Making $target in $subdir"; \
  if test "$subdir" = "."; then \
dot_seen=yes; \
local_target="$target-am"; \
  else \
local_target="$target"; \
  fi; \
  (CDPATH="${ZSH_VERSION+.}:" && cd $subdir && make  $local_target) \
  || eval $failcom; \
done; \
if test "$dot_seen" = "no"; then \
  make  "$target-am" || exit 1; \
fi; test -z "$fail"
make: Fatal error: Command failed for target `all-recursive'
Current working directory /home/mbaki/squid-3.4.3/helpers
*** Error code 1
The following command caused the error:
fail= failcom='exit 1'; \
for f in x $MAKEFLAGS; do \
  case $f in \
*=* | --[!k]*);; \
*k*) failcom='fail=yes';; \
  esac; \
done; \
dot_seen=no; \
target=`echo all-recursive | sed s/-recursive//`; \
list='compat lib snmplib libltdl scripts icons  errors doc helpers src
tools test-suite'; for subdir in $list; do \
  echo "Making $target in $subdir"; \
  if test "$subdir" = "."; then \
dot_seen=yes; \
local_target="$target-am"; \
  else \
local_target="$target"; \
  fi; \
  (CDPATH="${ZSH_VERSION+.}:" && cd $subdir && make  $local_target) \
  || eval $failcom; \
done; \
if test "$dot_seen" = "no"; then \
  make  "$target-am" || exit 1; \
fi; test -z "$fail"
make: Fatal error: Command failed for target `all-recursive'


Re: [squid-users] squid 3.4.3 on Solaris Sparc

2014-02-16 Thread Francesco Chemolli

On 17 Feb 2014, at 01:15, Monah Baki  wrote:

> uname -a
> SunOS proxy 5.11 11.1 sun4v sparc SUNW,SPARC-Enterprise-T5220
> 
> Here are the steps before it fails
> 
> ./configure --prefix=/usr/local/squid --enable-async-io
> --enable-cache-digests --enable-underscores --enable-pthreads
> --enable-storeio=ufs,aufs --enable-removal-policies=lru,
> heap
> 
> make
> 
> c -I../../../include   -I/usr/include/gssapi -I/usr/include/kerberosv5
>   -I/usr/include/gssapi -I/usr/include/kerberosv5 -Wall
> -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
> -D_REENTRANT -pthreads -g -O2 -std=c++0x -MT ext_session_acl.o -MD -MP
> -MF .deps/ext_session_acl.Tpo -c -o ext_session_acl.o
> ext_session_acl.cc
> mv -f .deps/ext_session_acl.Tpo .deps/ext_session_acl.Po
> /bin/sh ../../../libtool --tag=CXX--mode=link g++ -Wall
> -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror -pipe
> -D_REENTRANT -pthreads -g -O2 -std=c++0x   -g -o ext_session_acl
> ext_session_acl.o ../../../compat/libcompat-squid.la
> libtool: link: g++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments
> -Wshadow -Werror -pipe -D_REENTRANT -pthreads -g -O2 -std=c++0x -g -o
> ext_session_acl ext_session_acl.o
> ../../../compat/.libs/libcompat-squid.a -pthreads
> Undefined   first referenced
> symbol in file
> db_create   ext_session_acl.o
> db_env_create   ext_session_acl.o

The build system is unable to find the Berkeley DB library files (though
for some reason it can find the headers).
Please check that libdb.a or libdb.so is available in the paths searched
for libraries by your build system.
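
To illustrate the suggestion, something along these lines can confirm
whether the library is visible and, if not, point the linker at it (the
paths below are placeholders, not known locations on this system):

```
# Look for the Berkeley DB shared library in the usual places:
ls /usr/lib/libdb.so* /usr/lib/sparcv9/libdb.so* 2>/dev/null

# If it lives elsewhere, pass its directory to the linker at configure time:
LDFLAGS="-L/path/to/db/lib -R/path/to/db/lib" ./configure ...
```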

Kinkie

Re: [squid-users] Seemingly incorrect behavior: squid cache getting filled up on PUT requests

2014-02-16 Thread Rajiv Desai
What is the authoritative source of cache statistics? The slots
occupied due to PUT requests (as suggested by the mgr:storedir stats)
are quite concerning.
Is there some additional config that needs to be added to ensure that
PUTs are simply bypassed for caching purposes?

NOTE: FWIW, I have verified that subsequent GETs for the same objects
after the PUTs do get a cache MISS.
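
If the goal is to guarantee that PUT traffic can never occupy cache
space, the usual knob is a method ACL combined with the cache access
list — a sketch, assuming a reasonably current squid.conf syntax:

```
acl PUTs method PUT
cache deny PUTs
```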

On Sun, Feb 16, 2014 at 3:45 PM, Rajiv Desai  wrote:
> On Sun, Feb 16, 2014 at 3:39 PM, Amos Jeffries  wrote:
>> On 17/02/2014 11:41 a.m., Rajiv Desai wrote:
>>> I am using Squid Cache:
>>> Version 3.HEAD-20140127-r13248
>>>
>>> My cache dir is configured to use rock (Large rock with SMP):
>>> cache_dir rock /mnt/squid-cache 256000 max-size=4194304
>>>
>>> My refresh pattern is permissive to cache all objects:
>>> refresh_pattern . 129600 100% 129600 ignore-auth
>>>
>>> I uploaded 30 GB of data via squid cache with PUT requests.
>>> From storedir stats(squidclient mgr:storedir) it seems like each PUT
>>> is occupying 1 slot in rock cache.
>>>
>>> Is this a known bug? PUT requests should not increase cache usage right?
>>>
>>>
>>> Stats:
>>>
>>> by kid9 {
>>>
>>> Store Directory Statistics:
>>>
>>> Store Entries  : 53
>>>
>>
>>
>> How many objects were in that 30 GB of PUT requests?
>>
>> That 53 looks more like the icons loaded by Squid for use in error pages
>> and ftp:// directory listings.
>>
>
> 572557 objects were uploaded with PUT requests.
> I was looking at current size and used slots to interpret current
> cache occupancy. Perhaps I am interpreting these incorrectly?
>
> Current Size: 8960416.00 KB 4.27%
> Current entries:560025 4.27%
> Used slots: 560025 4.27%
>
>> Amos
>>


[squid-users] Re: squid3 block all 443 ports request

2014-02-16 Thread khadmin
Hi Amos,

Thank you for the response. Actually, I am working with IPv4 on my
network architecture.
All the clients are connected to a Windows Server 2012 DC that manages
DNS, DHCP and AD.
The proxy server is not under the domain controller and has a static IP
address.
Anyway, I will try to run MTU Path and will give you feedback.
Otherwise, would you advise me to install another version of the Squid
proxy?

Regards,
Khalil


