Re: [squid-users] Strange reject of users (basic auth)

2005-02-22 Thread Janno de Wit
Hi Henrik,

 What I do is a squid -k reconfigure on the box(es) giving problems...
 and then everything is okay.

 Sounds like the basic auth helper you are using had a hiccup..
 Is there anything in cache.log which may hint to why?

No, the strange thing is I have no strange output. Is there a specific
debug level + verbosity at which input/output to the auth helper can be
logged?

Regards, Janno.

-- 
Janno de Wit
DNA Services B.V.




[squid-users] Forward Requests to different Ports:

2005-02-22 Thread Markus Atteneder
Is it possible to configure squid to forward requests coming from specific
hosts to the same parent as other requests, but to a different port? The
reason is to bypass a webwasher on the parent server for these hosts in
order to allow denied sites.



Re: [squid-users] Problem with unparseable HTTP header field

2005-02-22 Thread Ralf Hildebrandt
* James Gray [EMAIL PROTECTED]:

 I called their IT people yesterday and gave them the details to which
 they were rather receptive.  Indeed, they had $CLUE.  They called me
 back in the afternoon to advise they had installed a patch at their end
 and wanted me to test it.  Working again - balance is restored and the
 good guys (us) win :)

What were they running?

-- 
Ralf Hildebrandt (i.A. des IT-Zentrum)  [EMAIL PROTECTED]
Charite - Universitätsmedizin BerlinTel.  +49 (0)30-450 570-155
Gemeinsame Einrichtung von FU- und HU-BerlinFax.  +49 (0)30-450 570-962
IT-Zentrum Standort CBF send no mail to [EMAIL PROTECTED]


Re: [squid-users] Forward Requests to different Ports:

2005-02-22 Thread Michael Pophal
Why don't you create different webwasher profiles for different user
groups?

Regards michael

On Tue, 2005-02-22 at 09:46, Markus Atteneder wrote:
 Is it possible to configure squid to forward requests coming from specific
 hosts to the same parent as other requests, but to a different port? The
 reason is to bypass a webwasher on the parent server for these hosts in
 order to allow denied sites.
-- 
Mit freundlichen Grüssen / With kind regards

Michael Pophal
--
Topic Manager
Internet Access Services & Solutions
--
Siemens AG, ITO AS 4
Telefon: +49(0)9131/7-25150
Fax: +49(0)9131/7-43344
Email:   [EMAIL PROTECTED]
--



RE: [squid-users] Cache_peer problems

2005-02-22 Thread DONDANA ALBERTO
mark,

we have the same problem because we're trying to migrate from VirusWall
to IWSS

configuration (squid 2.5):

cache_peer antivirus1 parent 3128 3128 proxy-only no-query
connect-timeout=2 default
cache_peer antivirus2 parent 3128 3128 proxy-only no-query default
never_direct allow all

antivirus1 is an IWSS 2.0, antivirus2 is an old VirusWall

when I try to download an infected file, IWSS replies with an HTTP
403 (Error); squid does not seem to handle the enclosed message and starts
contacting the second cache peer, which instead replies with an HTTP 200 (OK)
with an informative error message enclosed (and the virus will be
blocked)

we also made another try: we replaced antivirus2 with antivirus3 (IWSS
again)
downloading the infected file, we discovered that squid contacts
antivirus1 twice (receiving the 403 error twice), then switches to
antivirus3, again receiving the 403 error twice, and only the last time
does it correctly pass the error page on to the client
this increases the network traffic of both antivirus servers

contacting TrendMicro, they replied that the big difference between VW and
IWSS is the reply message in the case of a 'virused' page

we'd like to keep two cache peers for fault tolerance

bye


Alberto


On Wed, 2005-02-16 at 07:46, Elsen Marc wrote:
  
  
  We are using squid in conjunction with trend micro's IWSS.
  
  The documentation outlines how to do this, clients contact IWSS and 
  IWSS uses squid as an upstream proxy server.  For reporting reasons,
  we want to do it the other way around; the IWSS reports are too general for us. 
  Authentication is done via NTLM.
  
  IWSS is running on 8080 and squid on 3128, same box.
  IWSS is not an ICP proxy and thus the squid doco led me to 
  the following
  cache_peer statement:
  cache_peer 127.0.0.1 parent 8080 7 no-query default
  
  Without the no-query and default statements I end up with 
  TIMEOUT_DIRECT
  warnings.
  
  Now all this works ok, except when IWSS detects a virus, in which case
  squid ignores the 403 returned and goes direct instead of displaying
  the error message:
  
  1108522791.283 59 172.16.8.59 TCP_MISS/200 886 GET
  http://www.trendmicro.com/global/en/images/topnav/tn-partners-over.gif
  aclark DEFAULT_PARENT/127.0.0.1 image/gif
  1108522791.287 57 172.16.8.59 TCP_MISS/200 754 GET
  http://www.trendmicro.com/global/en/images/topnav/tn-about-over.gif
  aclark DEFAULT_PARENT/127.0.0.1 image/gif
  1108522825.301141 172.16.8.59 TCP_MISS/200 391 GET
  http://www.trendmicro.com/ftp/products/eicar-file/eicar.com aclark
  DIRECT/61.9.129.152 application/octet-stream
  
  I know it is getting a 403 from the IWSS as a packet trace has this in
  its data segment:
  
  HTTP/1.1 403 OK
  Connection: close
  Content-Type: text/html; charset=UTF-8
  Cache-Control: no-cache
  Date: Wed, 16 Feb 2005 01:49:15 GMT

  <html><head><title>IWSS Security Event</title></head>
  <body><script>if( typeof( window.innerWidth ) == 'number' ) {if
  (window.innerWidth < 10 || window.innerHeight < 10)
  {self.resizeTo(700,600);}}else if (document.body &&
  (document.body.clientWidth < 10 || document.body.clientHeight < 10))
  {self.resizeTo(700, 600);}</script><h1><h1>IWSS Security Event
  (pthalo.ngv.vic.gov.au)</h1></h1>
  Access to this URL is currently restricted due to a blocking
  rule.<BR><BR>URL:
  <B>http://www.trendmicro.com/ftp/products/eicar-file/eicar.com</B><BR>
  Rule: Block URLs of type <B>Virus infected temporary block</B><P>If you
  feel you have reached this message in error, please contact your network
  administrator.
  </body></html>
  
  Is this the appropriate method for what we need out of our
  caching/virus system?
  
 
 
 You may try:

   never_direct allow all

 in squid.conf, to prevent squid from attempting to go direct.
 
 M.



[squid-users] Re: Improving squid-performance

2005-02-22 Thread Ow Mun Heng
On Tue, 2005-02-22 at 16:10, Stefan Neufeind wrote:
 Ow Mun Heng wrote:
  On Tue, 2005-02-22 at 08:13, Stefan Neufeind wrote:

  Did you ever determine what the bottleneck was in the 1st place?
 
 I'm not too sure how to adequately check that. The system is running
 with 99.XXX% for squid from time to time, all memory is used, most cache
 is not - that's what I expected. In top, from time to time, the
 system part of cpu usage rises to 25% or so. So I guess it might have
 to do with hard-disk access, together with network I/O maybe. That's why
 I hope diskd can help (read below).

Okay.. when you say "most cache is not [used]" I have no idea what that
means. Please explain.

Also, note that for any partition which you are using solely for
squid's cache, you should not be devoting the _entire_ partition to
squid, rather only ~80%, as stated in the cache_dir directive in
squid.conf.

This is to permit squid to breathe, so to speak. (I forget what the
_actual_ technical term is.)


If all memory is used, can I ask how much memory you are giving to
squid? There's a rule of thumb for the amount of memory per cache: it's
roughly ~10MB of RAM for every 1GB of disk cache. (IIRC)
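As a rough sketch (path, filesystem and sizes below are examples only): on a
10 GB partition, ~80% gives a cache_dir of about 8 GB, which by the
~10MB-per-GB rule needs on the order of 80 MB of RAM for the index, on top
of cache_mem:

cache_dir aufs /var/spool/squid 8000 16 256   # 8000 MB on a 10 GB partition
cache_mem 128 MB                              # hot objects; index RAM comes on top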


 So I thought how performance can be improved. Okay, increasing
 mem (currently 1GB) might help. But what other options are available?
  
  memory is one option but I doubt it will help unless you know why it's
  using 100% CPU. I'm certain you're not running any sort of
  antivirus/filtering app right?
 
 For sure not running any sort of this :-) Let me explain: It's an
 external webserving cluster with two machines behind it. People
 accessing the webservers only access certain sites that are placed on
 those servers. No big files (maybe up to 50 or 100kb), but many. Caching
 helps the webservers a lot - I see that from hit-stats.

That's called a reverse proxy and I know what it's supposed to do. :-D


  cache? Can you determine if the load is caused by internal or external
  users?
 
 Just those two servers.

Okay.. so no internal users 


 
 On the Fedora mailing list I found your message:
 https://www.redhat.com/archives/fedora-list/2004-November/msg04242.html
 Were there any replies to this (which I didn't notice)? Did you find
 any good howtos? What steps did you take?

Ah... Was searching the wrong list :-|
 High Performance Squid - Howto
 * From: Ow Mun Heng Ow Mun Heng wdc com
 Does anyone here have any pointers on installing a high-performance
 Squid?

Nope... No one replied :-[

  fine w/ 4096 descriptors.
 
 I did recompile squid with a higher number of descriptors. But after
 reading several posts about this topic I wonder if I need to raise some
 limit on the system itself as well? However, squid does not yet complain ...

You will know because you will see it in the system logs. If squid doesn't
complain and your site's not very busy, then it's fine.
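For reference, one way to raise the limit when building 2.5 from source
(the values are examples, the exact procedure varies by distribution, and
whether your version's configure supports --with-maxfd is worth verifying):

ulimit -HSn 8192                # raise the shell/kernel limit first
./configure --with-maxfd=8192   # then build squid with a matching maximum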

 What does this diskd do? I didn't find much information about it. Only:
 http://www.squid-cache.org/Doc/FAQ/FAQ-22.html
  
  Diskd is just another cache filesystem like aufs. I can't tell you more
  than that. But diskd is supposed to function the best on *BSD systems.
  on Linux, aufs is the better choice. (AFAIK)
 
 Ouch. Too bad. I hoped to see a performance improvement by using diskd.
 Hmm - maybe I'll try it out

If you do.. let us know the results then.



  - but I hoped it might be beneficial. On
 squid-cache.org it's written as though it should be an improvement in general
 - however I've read that some kind of message queues need to be
 supported by the kernel, so maybe that's why it's not the standard
 storage interface. Are the necessary functions already available on an
 FC3 2.6 kernel, do you know?

Look here.. Not sure if it helps, but I found the link in one post on the
squid list:

http://www.perl.org/tpc/2002/sessions/wessels_duane.ppt
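For reference, a diskd cache_dir takes the same layout parameters as aufs
plus two queue-tuning options; a sketch (the path, size and Q1/Q2 values
are examples, Q1/Q2 being the diskd queue limits):

cache_dir diskd /var/spool/squid 8000 16 256 Q1=64 Q2=72
# diskd requires SysV message queues and shared memory in the kernel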



  Raw Partitions? I don't now about that. Maybe someone on the list would
  know??
 
 Haven't read anything in the archives. Was just a crazy idea :-)


Hmm.. can't help you on that. But if you do find something, let the open
source world know

 
 Thank you for the quick reply,

Cheers. I scratch your hairy back, you scratch mine :-)

  Stefan

--
Ow Mun Heng
Gentoo/Linux on DELL D600 1.4Ghz 
98% Microsoft(tm) Free!! 
Neuromancer 17:40:57 up 8:28, 5 users, 
load average: 0.28, 0.32, 0.25 



[squid-users] Limiting bandwidth

2005-02-22 Thread Daniel Herrero Martínez
Hi there,
I wonder if there is any way to limit the number of connections to a 
destination (based on the destination address). I know it is possible to 
establish a limit based on the source address with MAXCONN; is there 
any similar tool based on the destination address?

Thanks in advance
--
Daniel Herrero Martínez

Universidad de Navarra
C T I - Área de sistemas
Tel: 948 425600 Ext.2092
http://www.unav.es/cti


Re: [squid-users] Strange reject of users (basic auth)

2005-02-22 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, Janno de Wit wrote:
No, the strange thing is I have no strange output. Is there a specific
debug level + verbosity at which input/output to the auth helper can be
logged?
debug_options ALL,1 29,9
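(A commented version of that line; that section 29 is the authenticator
code is an assumption worth checking against doc/debug-sections.txt in
your source tree:)

# squid.conf - everything at level 1, authenticator at maximum verbosity
debug_options ALL,1 29,9
# apply with: squid -k reconfigure, then watch cache.log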
Regards
Henrik


Re: [squid-users] :Direct connection without DNS lookup

2005-02-22 Thread Henrik Nordstrom
On Tue, 22 Feb 2005 [EMAIL PROTECTED] wrote:
Is there a method where you can tell squid to connect directly to a 
specific web site by providing its IP address, without DNS lookups?
cache_peer
or /etc/hosts
or requesting the site by IP.
Regards
Henrik


Re: [squid-users] Files 2Gb on FTP sites via Squid

2005-02-22 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, davep wrote:
Right. I looked at ftp.c and there are several occurrences of
int size;
ftp.c is actually the least of the problematic areas but you get the 
picture.

Regards
Henrik


Re: [squid-users] Forward Requests to different Ports:

2005-02-22 Thread Henrik Nordstrom

On Tue, 22 Feb 2005, Markus Atteneder wrote:
Is it possible to configure squid to forward requests coming from specific
hosts to the same parent as other requests but to a different port?
Yes, all you need is to use different names for the parent, all resolving 
to the IP of the parent.

Regards
Henrik
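A sketch of what that can look like (the names, network and ports below are
made up; both parent names would resolve to the same IP, e.g. via /etc/hosts):

# /etc/hosts:  192.0.2.10  parent-std parent-alt
acl bypass src 10.1.2.0/24
cache_peer parent-std parent 8080 0 no-query default
cache_peer parent-alt parent 3128 0 no-query
cache_peer_access parent-alt allow bypass
cache_peer_access parent-std deny bypass
never_direct allow all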


Re: [squid-users] squid with ldirectord

2005-02-22 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, SUKHWINDER PAL wrote:
As we know, in the case of LVS-DR the real server serves the
client directly, not through the LVS. So how will the
ldirectord on the LVS detect the failure of the squid servers?
You need to tell ldirectord a suitable URL to use for testing the Squid 
servers. You can either use a proxied URL or request an internal URL such 
as one of the icons 
http://squid.server/squid-internal-static/icons/anthony-unknown.gif

An HTTP proxy is not that different from a normal HTTP web service. Both 
speak HTTP.

Regards
Henrik


[squid-users] File download blocking

2005-02-22 Thread Ian Morgan
Hello,

You're all probably tired of this subject but I'm having a problem with the
following config:

acl europe src x.x.x.x/x.x.x.x
acl germany src x.x.x.x/x.x.x.x

acl blockfiles url_regex /etc/squid/denyfiles.txt

http_access deny blockfiles germany
http_access deny blockfiles europe

The contents of the denyfiles.txt looks like this:

\.exe$
\.zip$
\.mpg$
\.mpeg

The problem is that none of the files I want to block are actually being 
blocked; they can all still be downloaded. 

Anyone got any ideas?

Many thanks,

IM





Re: [squid-users] Limiting bandwidth

2005-02-22 Thread Ow Mun Heng
On Tue, 2005-02-22 at 18:32, Daniel Herrero Martínez wrote:
 Hi there,
 I wonder if there is any way to limit the number of connections to a 
 destination (based on the destination address). I know it is possible to 
 establish a limit based on the source address with MAXCONN; is there 
 any similar tool based on the destination address?

I only know about delay pools, but I'm not sure if that's what's wanted
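If the goal is bandwidth to particular destinations rather than a connection
count, a delay-pool sketch along these lines might do (the domain and rates
are made up; delay pools cap bytes per second, they do not limit the number
of connections):

acl slowdst dstdomain .example.com
delay_pools 1
delay_class 1 1                  # class 1 = one aggregate bucket
delay_parameters 1 16000/16000   # restore/max: ~16 KB/s for matching traffic
delay_access 1 allow slowdst
delay_access 1 deny all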



[squid-users] Thomas Werner is out of the office.

2005-02-22 Thread Thomas Werner
I will be out of the office starting Mon 21.02.05 and will not return until
Mon 28.02.05.

During that time, I will have limited or no access to my email inbox. For
emergencies, I ask you kindly to contact Mark Grace at +49 (0)30 21231-1080
or [EMAIL PROTECTED]

Thank you and kind regards,
Thomas Werner

Re: [squid-users] Two squid instances based on file types? Is it good?

2005-02-22 Thread Marco Crucianelli
On Mon, 2005-02-21 at 11:33 +0100, Henrik Nordstrom wrote:
 On Mon, 21 Feb 2005, Marco Crucianelli wrote:
 
  I was thinking about using an ACL url_regex to direct avi, mp3, iso etc. from
  the small_object (front-end) squid to the big_object (back-end) squid
  together with the directive cache_peer_access... do you think I can do it
  this way?
 
 Yes, but you may want to also use never_direct or the same.

Why shall I use never_direct? Maybe to force the frontend squid (the one
caching web stuff) to never ask the origin servers for multimedia files,
but to redirect these calls to the backend squid, the one caching
multimedia stuff?!? Maybe I have a wrong idea of how squid works! I'll
better explain myself: I thought that the flow of requests in squid was:

1) WD (WebDoc) squid receives a request for a MM (MultiMedia) file; it
doesn't have it, but it knows it has a cache_peer parent, MM squid.
2) It uses ICP to ask MM squid (its parent) if it has the requested file.
3) MM squid answers that it doesn't have it but is asking its parent.
4) MM squid gets the requested file from its parent, caches it and tells
WD squid via ICP that it now has the file.
5) WD squid now knows that MM squid has the file, so it takes it from
MM squid, does not cache it itself (as I plan to use the proxy-only
option when defining the peer) and then sends the requested file back to
the client.

Is that correct? Is there anything wrong? I mean, maybe I should use
never_direct to be sure that the WD squid asks its parent, MM
squid?!

 
  But, what about staleness? Can I set up the refresh time in squid...with
  which directive?!?!
 
 refresh_pattern
 
 Regards
 Henrik

Yes, refresh_pattern...right! :P
Using regex in refresh_pattern, could I match on the cache_dir (I don't
think so) or could I match only on the file extension, for example?!
I mean: I would like to have different refresh patterns based on
cache_dir or, at least, based on file extension, so I could keep big
multimedia files longer...


Many many thanks...once again! ;)

Marco


[squid-users] Re: File download blocking

2005-02-22 Thread zottmann
Hi !! 

I think it is better to use a rep_mime_type acl, because this way you have 
better control over what is being downloaded than using file extensions. 

Regards, 
Carlos. 
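Two things worth checking in the original config, plus a sketch of the
rep_mime_type approach (the MIME types below are examples): in squid.conf a
pattern file must be given in double quotes or the path itself is treated
as the regex, and url_regex is case-sensitive unless given -i:

acl blockfiles url_regex -i "/etc/squid/denyfiles.txt"

acl blockmime rep_mime_type -i ^application/zip ^application/octet-stream
http_reply_access deny blockmime germany
http_reply_access deny blockmime europe

Note that reply ACLs such as rep_mime_type are checked in http_reply_access,
not http_access.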



Re: [squid-users] Two squid instances based on file types? Is it good?

2005-02-22 Thread Marco Crucianelli
On Mon, 2005-02-21 at 08:03 -0300, H Matik wrote:
 On Monday 21 February 2005 07:24, Marco Crucianelli wrote:
  On Fri, 2005-02-18 at 16:10 -0200, H Matik wrote:
   On Friday 18 February 2005 08:34, Marco Crucianelli wrote:
On Thu, 2005-02-17 at 12:52 -0600, Kevin wrote:
 What mechanism are you using to set expire times?
   
 
  Do you mean using max_object_size=512K for the small_object squid?
 
 yes
 
 
  I was thinking about using an ACL url_regex to direct avi, mp3, iso etc. from
  the small_object (front-end) squid to the big_object (back-end) squid
  together with the directive cache_peer_access... do you think I can do it
  this way?
 
 
 this should work, you add other extensions as you need 
 
 acl bf urlpath_regex \.mpg
 acl bf urlpath_regex \.avi
 acl bf urlpath_regex \.wmv
 
 never_direct allow bf
 always_direct deny bf

As I wrote to Henrik, I should use never_direct to be sure that the
front-end squid asks the back-end squid, right?!

Suppose the backend squid has its own cache_peer parent, and suppose I
want that, whenever the frontend squid asks the backend squid for a
multimedia file, if the backend squid doesn't have it, it asks its
own parent; when it gets the file back, it sends it back to the
frontend squid... do you think this could be possible with such a
configuration?!

 
 
 
  But, what about staleness? Can I set up the refresh time in squid...with
  which directive?!?!
 
 you can use refresh_pattern
 
 

Do you know if I can match on the cache_dir using refresh_pattern!?!?

 Hans
 
 
 
   Once again, many thanks
 

Thank you so much Hans! ;)

Marco


Re: [squid-users] Forward Requests to different Ports:

2005-02-22 Thread nikolay . nenchev
Hi to all,
I'm right now trying to integrate a webwasher 1000 CSM appliance with squid 
with ICAP support.
I need user/group authentication against an NT domain, and because the 
appliance (which runs CGLinux, an rpm-based system) doesn't support 
NTLM, I have installed samba and am doing NTLM authentication against the NT 
domain through squid. And here come my questions:
picture:


Webwasher            Squid                  NT domain
(ICAP server)        (samba - ICAP client)
---------    ICAP    ---------    NTLM     ---------
|  WW   |-----------|  squid  |-----------|        |
|       |           |         |           |        |
---------            ---------             ---------
                         |
                         |
                         |
                user - clients (browsers)


I have the user authenticated at the proxy, but after that, how do I pass 
some kind of credential to the ICAP server (webwasher) and make a policy for 
every different group?
Regards,
Nikolay Nenchev









Re: [squid-users] Squid, virtual IP and Layer 7 switching...any idea?

2005-02-22 Thread Marco Crucianelli
On Mon, 2005-02-21 at 18:58 +0100, Henrik Nordstrom wrote:
 On Mon, 21 Feb 2005, Marco Crucianelli wrote:
 
  I mean: what I was thinking of was a Layer 7 solution using virtual IP 
  address, just to let the two squids answer the clients without passing 
  back through the Layer 7 machine! In such a case I do need virtual IP 
  and there should surely be some things to modify in squid.conf
 
 No, there is nothing to modify in squid.conf when you use a virtual IP. 
 Squid configuration is 100% the same as when using NAT.
 
 The difference is in your OS IP configuration only. Not Squid.

Well, I'm surely not that good at squid configuration, but thinking about
a Layer 7 switching solution using virtual IP, to let squid answer
client requests directly I should use a TCP handoff. In such a case,
squid needs to use the virtual IP address to answer clients (binding
the squid instance to the virtual IP in squid.conf) while, to speak with its
cache_peer, it needs to use its real IP address (using something like
udp_incoming_address and udp_outgoing_address in squid.conf). Whereas, when
not using a virtual IP solution but NAT only, I need neither to bind
squid to the virtual IP nor to change udp_incoming_address and
udp_outgoing_address.

Am I wrong?!?

 
 Regards
 Henrik

Thanks!

Marco


[squid-users] Authentication Window popping up randomly

2005-02-22 Thread zottmann
Hi! 

We are facing a weird problem here with ntlm authentication. After we 
upgraded our Linux boxes to Fedora Core 3, sometimes the user is prompted 
with the authentication window from squid. 

Looking at the winbindd.log I have found the following error message: 

[2005/02/21 12:20:44, 0] rpc_client/cli_pipe.c:cli_nt_session_open(1451) 
  cli_nt_session_open: cli_nt_create failed on pipe \NETLOGON to machine 
SERVER_NAME.  Error was NT_STATUS_PIPE_NOT_AVAILABLE 
[2005/02/21 12:20:44, 0] rpc_client/cli_pipe.c:cli_nt_setup_netsec(1622) 
  Could not initialise \PIPE\NETLOGON 

What could be going wrong? 

Thanks in Advance, 
Carlos. 


RE: [squid-users] Authentication Window popping up randomly

2005-02-22 Thread Elsen Marc

 
 
 Hi! 
 
 We are facing a weird problem here with ntlm authentication. After we 
 upgraded our Linux boxes to Fedora Core 3, sometimes the user is prompted 
 with the authentication window from squid. 
 
 Looking at the winbindd.log I have found the following error message: 
 
 [2005/02/21 12:20:44, 0] rpc_client/cli_pipe.c:cli_nt_session_open(1451) 
   cli_nt_session_open: cli_nt_create failed on pipe \NETLOGON to machine 
 SERVER_NAME.  Error was NT_STATUS_PIPE_NOT_AVAILABLE 
 [2005/02/21 12:20:44, 0] rpc_client/cli_pipe.c:cli_nt_setup_netsec(1622) 
   Could not initialise \PIPE\NETLOGON
 
 The latter seems to be a SAMBA error.
 In
  
  http://lists.samba.org/archive/samba/2004-May/085560.html

 someone suggested that downgrading SAMBA solved this problem...

 M.


[squid-users] Two conflicting content-length headers

2005-02-22 Thread Diego Dasso
Well, after watching the cache.log I get this for the Invalid Response:
2005/02/21 17:24:05| ctx: exit level  0
2005/02/21 17:24:05| urlParse: Illegal character in hostname  
'night_wolf.weblogger.terra.com.br'
2005/02/21 17:24:18| ctx: enter level  0:  
'http://www.iiisci.org/cisci2005/Reviewers/download.asp?aux1=C170JB'
2005/02/21 17:24:18| WARNING: found two conflicting content-length headers

So I recompiled squid 2.5.8 to accept underscores, and now I get only:
2005/02/21 19:26:25| ctx: enter level  0:  
'http://www.iiisci.org/cisci2005/Reviewers/download.asp?aux1=C170JB'
2005/02/21 19:26:25| WARNING: found two conflicting content-length headers

Now, with version 2.5.4, all works fine. I assume the new version is  
stricter with the headers (good for me), but is there a way to work  
around this issue?

diego
--
"Youth grows old, immaturity is outgrown, ignorance can be educated and  
drunkenness wears off; but stupidity is forever"  
Aristophanes




Fwd: Re: [squid-users] Two squid instances based on file types? Is it good?

2005-02-22 Thread H Matik

On Tuesday 22 February 2005 09:13, you wrote:
  this should work, you add other extensions as you need
 
  acl bf urlpath_regex \.mpg
  acl bf urlpath_regex \.avi
  acl bf urlpath_regex \.wmv
 
  never_direct allow bf
  always_direct deny bf

 As I wrote to Henrik, I should use never_direct to be sure that the
 front end squid asks to the back end squid right?!

you may even add

always_direct allow !bf

in order not to query it for other file types


if you do not use always|never_direct the thing will not work correctly

I am not sure if talking about back|front-end caches is good here; let's stay
with large_object and small_object cache

we suppose here that the small_object cache is the one exposed to and used by
your users

if you do not use always|never_direct in the small_object cache you probably
never get a hit, and the small_object cache will pull the large objects
without ever storing them, since you limit it with max_object_size

that means you need to force calling the large_object cache, which then should
store the object, and you get a hit on the next call

 Suppose the backend squid has its own cache_peer parent, and suppose I
 want that, whenever the frontend squid asks the backend squid for a
 multimedia file, if the backend squid doesn't have it, it asks its
 own parent; when it gets the file back, it sends it back to the
 frontend squid... do you think this could be possible with such a
 configuration?!

this is pretty confusing


Hans

   But, what about staleness? Can I set up the refresh time in
   squid...with which directive?!?!
 
  you can use refresh_pattern

 Do you know if I can match on the cache_dir using refresh_pattern!?!?

  Hans
 
   Once again, many thanks

 Thank you so much Hans! ;)

 Marco

--
___
Infomatik
(18)8112.7007
http://info.matik.com.br
Mensagens não assinadas com GPG não são minhas.
Messages without GPG signature are not from me.
___



RE: [squid-users] Two conflicting content-length headers

2005-02-22 Thread Elsen Marc

 
 
 Well, after watching the cache.log I get this for the Invalid Response:
 
 2005/02/21 17:24:05| ctx: exit level  0
 2005/02/21 17:24:05| urlParse: Illegal character in hostname  
 'night_wolf.weblogger.terra.com.br'
 2005/02/21 17:24:18| ctx: enter level  0:  
 'http://www.iiisci.org/cisci2005/Reviewers/download.asp?aux1=C170JB'
 2005/02/21 17:24:18| WARNING: found two conflicting content-length headers
 
 So I recompiled squid 2.5.8 to accept underscores, and now I get only:
 
 2005/02/21 19:26:25| ctx: enter level  0:  
 'http://www.iiisci.org/cisci2005/Reviewers/download.asp?aux1=C170JB'
 2005/02/21 19:26:25| WARNING: found two conflicting content-length headers
 
 Now, with version 2.5.4, all works fine. I assume the new version is
 stricter with the headers (good for me), but is there a way to work
 around this issue?
 

  This can be seen confirmed with:

http://web-sniffer.net/?url=http%3A%2F%2Fwww.iiisci.org%2Fcisci2005%2FReviewers%2Fdownload.asp%3Faux1%3DC170JB&submit=Submit&http=1.1&gzip=yes&type=GET&ua=Mozilla%2F5.0+%28Windows%3B+U%3B+Windows+NT+5.0%3B+en-US%3B+rv%3A1.7.5%29+Gecko%2F20041217+Web-Sniffer%2F1.0.20


   - Notify the webmaster of this broken webserver.
   - It seems 2.5.STABLE9(-RC1) will have the option to relax
 the http parser (more, and/or again).

   M.


Re: [squid-users] Two conflicting content-length headers

2005-02-22 Thread Goetz von Escher
Hi Diego
This sounds exactly like the problem I reported on December 10:
 http://www.squid-cache.org/mail-archive/squid-users/200412/0318.html
Hard-working Henrik Nordstrom indicated that it was fixed starting
with squid-2.5.STABLE7-20041211 or later.
Regards
Goetz



RE: [squid-users] Invalid Response

2005-02-22 Thread Jacobi Michael CRPH
Thank you! I just put in STABLE9-RC1-20050222 and it resolved the users' issue.

Mike Jacobi

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
Sent: Sunday, February 20, 2005 22:06
To: Jacobi Michael CRPH
Cc: Henrik Nordstrom; Chris Robertson; 'Johan Henæs';
Squid Users
Subject: RE: [squid-users] Invalid Response


On Sun, 20 Feb 2005, Jacobi Michael CRPH wrote:

 Is this in the daily autogenerated version of STABLE8?

Yes, it is in 2.5.STABLE9-RC1 and later.

Regards
Henrik


[squid-users] www.europroperty.com

2005-02-22 Thread Damian-Grint Philip
We have recently upgraded to 2.5Stable7-20050113.
The above URL is causing Squid to return Invalid Request, but there is
no problem going direct.
 
This is what happens going through squid:
 
GET http://www.europroperty.com/ HTTP/1.0.
Accept: */*.
Accept-Language: en-gb.
Cookie: WEBTRENDS_ID=80.169.166.244-2279377056.29694189.
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Q312461).
Host: www.europroperty.com.
Proxy-Connection: Keep-Alive.
.
HTTP/1.0 502 Bad Gateway.
Server: squid/2.5.STABLE7-20050113.
Mime-Version: 1.0.
Date: Tue, 22 Feb 2005 14:53:52 GMT.
Content-Type: text/html.
Content-Length: 1475.
Expires: Tue, 22 Feb 2005 14:53:52 GMT.
X-Squid-Error: ERR_INVALID_REQ 0.
X-Cache: MISS from miloscz.collierscre.co.uk.
X-Cache-Lookup: MISS from miloscz.collierscre.co.uk:3128.
Proxy-Connection: keep-alive.

This is what happens going direct
 
GET / HTTP/1.1.
Accept: */*.
Accept-Language: en-gb.
Accept-Encoding: gzip, deflate.
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Q312461).
Host: www.europroperty.com.
Connection: Keep-Alive.
Cookie: WEBTRENDS_ID=80.169.166.244-2279377056.29694189.
.
HTTP/1.1 200 OK.
Server: Microsoft-IIS/5.0.
Date: Tue, 22 Feb 2005 14:51:28 GMT.
Content Location: http://www.europroperty.com.
X-Powered-By: ASP.NET.
Content-Length: 7278.
Content-Type: text/html.
Expires: Tue, 22 Feb 2005 14:50:29 GMT.
Set-Cookie: ASPSESSIONIDCAQSCQDT=MLPDPKNAOAHMCMLPCNOMJDPN; path=/.
Cache-control: private.

Has anyone come across this before and is there a fix?
 
Regards
 
Phil DG





[squid-users] Understanding the stats.

2005-02-22 Thread Ray Charles


Hi,


Looking at some stats from Cache Manager, I originally
thought there were just hits or misses; is a Near Hit
still a miss? I am hoping for a deeper explanation of
Near Hits. Also, can you tell me: doesn't
this look like a poorly performing squid server? It's
for approx 30 hours.

Connection information for squid:
    Number of clients accessing cache:              5384
    Number of HTTP requests received:               7124911
    Number of ICP messages received:                0
    Number of ICP messages sent:                    0
    Number of queued ICP replies:                   0
    Request failure ratio:                          0.00
    Average HTTP requests per minute since start:   3641.7
    Average ICP messages per minute since start:    0.0
    Select loop called: 26095903 times, 4.498 ms avg
Cache information for squid:
    Request Hit Ratios:         5min: 99.1%, 60min: 98.5%
    Byte Hit Ratios:            5min: 99.0%, 60min: 98.8%
    Request Memory Hit Ratios:  5min: 78.4%, 60min: 91.2%
    Request Disk Hit Ratios:    5min: 4.5%, 60min: 4.1%
    Storage Swap size:          802820 KB
    Storage Mem size:           90464 KB
    Mean Object Size:           609.12 KB
    Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
    HTTP Requests (All):       0.00678   0.00379
    Cache Misses:              0.03241   0.04776
    Cache Hits:                0.00463   0.00379
    Near Hits:               169.11253 169.11253
    Not-Modified Replies:      0.0       0.0
    DNS Lookups:               0.0       0.05078
    ICP Queries:               0.0       0.0
Resource usage for squid:
    UP Time:                    117388.930 seconds
    CPU Time:                   54245.630 seconds
    CPU Usage:                  46.21%
    CPU Usage, 5 minute avg:    42.93%
    CPU Usage, 60 minute avg:   38.88%
    Process Data Segment Size via sbrk(): 179290 KB
    Maximum Resident Size:      0 KB
    Page faults with physical i/o: 52592






__ 
Do you Yahoo!? 
Yahoo! Mail - 250MB free storage. Do more. Manage less. 
http://info.mail.yahoo.com/mail_250


[squid-users] squid + gmail

2005-02-22 Thread UnIData
Hi,

I configured squid and all ok, but can't access gmail.com and
google.com from clients. What is the problem and the solution, please?

Thanks !!


[squid-users] gmail + squid- ERROR

2005-02-22 Thread UnIData
Hi,

I configured squid and all ok, but can't access gmail.com and
google.com from clients. What is the problem and the solution, please?

Thanks !!


[squid-users] squid + gmail ???

2005-02-22 Thread UnIData
Hi,

I configured squid and all ok, but can't access gmail.com and
google.com from clients. What is the problem and the solution, please?

Thanks !!


Re: [squid-users] gmail + squid- ERROR

2005-02-22 Thread Rafhael Almeida
Hi, do you have Internet Explorer 5.5 or less? If yes, check the browser 
requirements of Gmail!

At 04:53 PM 2/22/2005, you wrote:
Hi,
I configured squid and all ok, but can't access gmail.com and
google.com from clients. What is the problem and the solution, please?
Thanks !!




[squid-users] Re: squid-3.0 on Fedora Core 1 3

2005-02-22 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, Sushil Deore wrote:
hi Henrik,
Thanks for your quick response and hope to get through this asap.
tried -NCd1 option also... :(
[EMAIL PROTECTED] logs]# /usr/local/squid/sbin/squid -DNYCd3

2005/02/22 16:48:49| /usr/local/squid/var/cache: (2) No such file or
directory
FATAL:  Failed to verify one of the swap directories, Check cache.log
   for details.  Run 'squid -z' to create swap directories
   if needed, or if running Squid for the first time.
Aborted
It's not able to create the swap dir actually, and if I try
[EMAIL PROTECTED] logs]# /usr/local/squid/sbin/squid -z
2005/02/22 16:49:00| Memory pools are 'off'; limit: 2.00 MB
Aborted
Does your cache_effective_user have write permission to 
/usr/local/squid/var/?

Try
  mkdir /usr/local/squid/var/cache
  chown your_cache_effective_user /usr/local/squid/var/cache
  /usr/local/squid/sbin/squid -z -d3
  /usr/local/squid/sbin/squid -DNYCd3
Regards
Henrik


Re: [squid-users] Two squid instances based on file types? Is it good?

2005-02-22 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, Marco Crucianelli wrote:
Why shall I use never_direct? Maybe to force the frontend squid (the one
caching web stuff) to never ask the origin servers for multimedia files,
but to redirect these calls to the backend squid, the one caching
multimedia stuff?!?
Yes..
Maybe I have a wrong idea of how squid works!
Many are confused about the relations between cache_peer, never_direct and 
always_direct etc.

cache_peer defines possible paths where Squid MAY forward the request.
cache_peer_access/cache_peer_domain limits when these paths may be 
considered.

always_direct forces Squid to ignore all peers and always go direct for 
the request.

never_direct (when always_direct is not in effect) tells Squid that it may 
not go direct.

when neither always_direct nor never_direct is in effect (the default 
situation) Squid is free to choose whatever path it sees most fit for the 
request, and will do this based on a number of criteria:

  - type of request
  - hierarchy_stoplist
  - prefer_direct on/off
  - ICP status of the possible peers
  - TCP status of the possible peers
  - netdb information
  - etc..
With the goal of finding a reasonable balance between global cache hit 
ratio and request latency.

Normally it selects
  1. The best ICP peer or Digest HIT peer.
  2. Direct
  3. Some parent (default, round-robin etc..)
If prefer_direct is off, then 2 and 3 switch places.
With never_direct the picture looks somewhat different:
  1. The best ICP peer or Digest HIT peer.
  2. Some parent (default, round-robin etc..)
  3. All parents.
With always_direct the picture becomes simply:
  1. Direct
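Applied to the two-instance setup discussed in this thread, a minimal sketch
for the front-end instance might look like this (the peer name, ports and
extensions are examples only):

acl mm urlpath_regex -i \.(avi|mp3|iso)$
cache_peer mmcache parent 3128 3130 proxy-only
cache_peer_access mmcache allow mm
cache_peer_access mmcache deny all
never_direct allow mm        # MM files must go through the parent
always_direct allow !mm      # everything else goes direct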
Regards
Henrik


Re: [squid-users] Squid, virtual IP and Layer 7 switching...any idea?

2005-02-22 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, Marco Crucianelli wrote:
Well, I'm surely not that good at squid configuration, but thinking about
a Layer 7 switching solution using virtual IP, to let squid answer
client requests directly I should use a TCP handoff.
Yes...
In such a case,
squid needs to use the virtual IP address to answer clients (binding
the squid instance to the virtual IP in squid.conf) while, to speak with its
cache_peer, it needs to use its real IP address (using something like
udp_incoming_address and udp_outgoing_address in squid.conf).
You don't need to bind Squid to the virtual IP. You may if you only want 
Squid to answer on the virtual IP and not the real IPs, but it is not 
required.

Whereas, when not using a virtual IP solution but NAT only, I need neither 
to bind squid to the virtual IP nor to change udp_incoming_address and 
udp_outgoing_address.
You do not need to if you use a virtual IP either.
All the gory details of the virtual IP are handled by the OS, and even 
there it isn't that special (just a secondary IP on the same 
server). Only if the servers are on the same network segment on which the L7 
switch publishes the virtual IP is some small amount of care needed at the 
OS level, to make sure the servers do not respond to ARP on the virtual 
IP. Only the L7 switch should respond to ARP for the virtual IP. If the 
servers are on a separate network behind the L7 switch then the ARP problem 
is not an issue and can be ignored.

Regards
Henrik


[squid-users] SquidNT - basic auth?

2005-02-22 Thread James Gray
Gah - one of our support people in $REMOTE_OFFICE has decided to lead a little 
rebellion and installed squid on a Win2k Server box (the standard here is to 
run Squid on RHEL).  After a little massaging I've managed to fix most of the 
differences between squid.conf on Linux and SquidNT - seeing as the support 
people couldn't figure out why *nix line endings in config files confused 
SquidNT.

However, I can't seem to get basic authentication to work.  We need basic auth 
for some legacy apps and other stuff (like Java) which don't support NTLM 
auth.

I know this is probably a simple oversight on my part, but I'm happy to be 
handed a URL and told to figure it out.  Of course if someone can 
cut-and-paste the basic auth from squid.conf for SquidNT that's even 
better :)  TIA

Cheers,

James


RE: [squid-users] Cache_peer problems

2005-02-22 Thread Henrik Nordstrom

On Tue, 22 Feb 2005, DONDANA ALBERTO wrote:
contacting TrendMicro, they replied that the big difference between VW and
IWSS is the reply message in the case of a 'virused' page
we'd like to keep two cache peers for fault tolerance
Change fwdReforwardableStatus() in forward.c to ignore HTTP_FORBIDDEN 
(just delete the line).

Regards
Henrik


[squid-users] Re: SquidNT - basic auth?

2005-02-22 Thread James Gray
On Wed, 23 Feb 2005 09:20 am, James Gray wrote:
 However, I can't seem to get basic authentication to work.  We need basic
 auth for some legacy apps and other stuff (like Java) which don't support
 NTLM auth.

Sorry to waste people's time:
http://www1.fr.squid-cache.org/mail-archive/squid-users/200308/0277.html

Then scroll down.  Thanks to Guido Serassio who wrote the reply in that URL.

Cheers,

James


Re: [squid-users] Two conflicting content-length headers

2005-02-22 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, Diego Dasso wrote:
2005/02/21 19:26:25| ctx: enter level  0: 
'http://www.iiisci.org/cisci2005/Reviewers/download.asp?aux1=C170JB'
2005/02/21 19:26:25| WARNING: found two conflicting content-length headers

Now, with version 2.5.4, all works fine. I assume the new version is 
stricter with the headers (good for me), but is there a way to work around 
this issue?
The only way to fix this is to have the malfunctioning web server fixed.
The server is seriously broken, and it is impossible for Squid to deduce 
how to correctly parse the reply in a sane manner.

Adding workarounds to deal with such seriously broken servers will only 
cause other problems down the line, and is obviously not something we plan 
to do.

This problem is very different from the general invalid response 
problem discussed recently for which a workaround has been added.

Regards
Henrik


Re: [squid-users] squid + gmail ???

2005-02-22 Thread James Gray
On Wed, 23 Feb 2005 08:55 am, UnIData wrote:
 I configured squid and all ok, but can't access gmail.com and
 google.com from clients. What is the problem and the solution, please?

Is there a reason you posted this question 3 times in 5 minutes?

What error (exactly)?  Is it being generated by squid or your browser?  What 
browser?  Have you checked squid's access.log to see if the client is 
actually using the squid server?  What ACLs have you defined - is there a 
deny rule that's not behaving as you expected?

More info = better chance someone can help you :)

Cheers,

James


Re: [squid-users] www.europroperty.com

2005-02-22 Thread Henrik Nordstrom

On Tue, 22 Feb 2005, Damian-Grint Philip wrote:
We have recently upgraded to 2.5Stable7-20050113.
You should upgrade. This is an interim snapshot release, not a production 
release.

The above URL is causing Squid to return Invalid Request, but there is
no problem going direct.
See cache.log for more detail.
Upgrading will give a more accurate error message to the client, or may 
even allow the malformed response to be forwarded if you are lucky 
(depending on the severity of the web server malfunction).

Regards
Henrik


Re: [squid-users] Understanding the stats.

2005-02-22 Thread Henrik Nordstrom
On Tue, 22 Feb 2005, Ray Charles wrote:
Looking at some stats from Cache Manager, I originally
thought there were just hits or misses; is a Near Hit
still a miss?
A near hit is a hit on a neighbour cache.
Cache information for squid:
   Request Hit Ratios: 5min: 99.1%, 60min: 98.5%
   Byte Hit Ratios:5min: 99.0%, 60min: 98.8%
   Request Memory Hit Ratios:  5min: 78.4%, 60min: 91.2%
   Request Disk Hit Ratios:5min: 4.5%, 60min: 4.1%
Extremely high hit ratios in general.
Is this a reverse proxy?
Median Service Times (seconds)  5 min60 min:
   HTTP Requests (All):   0.00678  0.00379
   Cache Misses:  0.03241  0.04776
   Cache Hits:0.00463  0.00379
   Near Hits:169.11253 169.11253
Your communication to the peers seems rather poor..
Regards
Henrik


Re: [squid-users] :Direct connection without DNS lookup

2005-02-22 Thread chanaka
Hi Henrik,

I already have cache_peer running for this box, with default
set to a master squid box.
Yes, I tried an /etc/hosts entry for this specific intranet site
and also checked /etc/nsswitch.conf.
Still, squid is going to DNS when resolving.
Is there any specific entry to ask squid to use /etc/hosts
when resolving?

Chanaka


 On Tue, 22 Feb 2005 [EMAIL PROTECTED] wrote:

 Is there a method where you can tell squid to connect directly to a
 specific web site by providing its IP address, without DNS lookups?

 cache_peer

 or /etc/hosts

 or requesting the site by IP.

 Regards
 Henrik





Re: [squid-users] :Direct connection without DNS lookup

2005-02-22 Thread H Matik
On Tuesday 22 February 2005 21:08, [EMAIL PROTECTED] wrote:
 Hi Henrik,

 I already have cache_peer running for this box, with default
 set to a master squid box.
 Yes, I tried an /etc/hosts entry for this specific intranet site
 and also checked /etc/nsswitch.conf.
 Still, squid is going to DNS when resolving.
 Is there any specific entry to ask squid to use /etc/hosts
 when resolving?

 Chanaka

Hi
is this a NAT problem or what is your concern here?
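One thing worth checking, assuming a Squid version that has the directive:
Squid's internal DNS reads host entries from the file named by hosts_file,
so make sure it points at the right place and reconfigure:

hosts_file /etc/hosts
# then: squid -k reconfigure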

Hans




  On Tue, 22 Feb 2005 [EMAIL PROTECTED] wrote:
  Is there a method where you can tell squid to connect directly to a
  specific web site by providing its IP address, without DNS lookups?
 
  cache_peer
 
  or /etc/hosts
 
  or requesting the site by IP.
 
  Regards
  Henrik

-- 
___
Infomatik
(18)8112.7007
http://info.matik.com.br
Mensagens não assinadas com GPG não são minhas.
Messages without GPG signature are not from me.
___




Re: [squid-users] www.europroperty.com

2005-02-22 Thread Reuben Farrelly
Hi,
At 05:59 a.m. 23/02/2005, you wrote:
We have recently upgraded to 2.5Stable7-20050113.
The above URL is causing Squid to return Invalid Request, but there is
no problem going direct.
This is what happens going through squid:
GET http://www.europroperty.com/ HTTP/1.0.
Accept: */*.
Accept-Language: en-gb.
Cookie: WEBTRENDS_ID=80.169.166.244-2279377056.29694189.
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Q312461).
Host: www.europroperty.com.
Proxy-Connection: Keep-Alive.
.
HTTP/1.0 502 Bad Gateway.
Server: squid/2.5.STABLE7-20050113.
Mime-Version: 1.0.
Date: Tue, 22 Feb 2005 14:53:52 GMT.
Content-Type: text/html.
Content-Length: 1475.
Expires: Tue, 22 Feb 2005 14:53:52 GMT.
X-Squid-Error: ERR_INVALID_REQ 0.
X-Cache: MISS from miloscz.collierscre.co.uk.
X-Cache-Lookup: MISS from miloscz.collierscre.co.uk:3128.
Proxy-Connection: keep-alive.
This is what happens going direct
GET / HTTP/1.1.
Accept: */*.
Accept-Language: en-gb.
Accept-Encoding: gzip, deflate.
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Q312461).
Host: www.europroperty.com.
Connection: Keep-Alive.
Cookie: WEBTRENDS_ID=80.169.166.244-2279377056.29694189.
.
HTTP/1.1 200 OK.
Server: Microsoft-IIS/5.0.
Date: Tue, 22 Feb 2005 14:51:28 GMT.
Content Location: http://www.europroperty.com.
^^^
X-Powered-By: ASP.NET.
Content-Length: 7278.
Content-Type: text/html.
Expires: Tue, 22 Feb 2005 14:50:29 GMT.
Set-Cookie: ASPSESSIONIDCAQSCQDT=MLPDPKNAOAHMCMLPCNOMJDPN; path=/.
Cache-control: private.
Has anyone come across this before and is there a fix?
Yes. Check your cache.log; the reason why this is rejected is clearly 
logged (and it is broken - hint: see above).

Have a read through one of the archives listed at 
http://www.squid-cache.org/mailing-lists.html ; the answer to this question 
(as well as a way to work around the brokenness) has been posted about 100 
times already this week...

reuben


[squid-users] change my subscription

2005-02-22 Thread Xavier Callejas
Hi.

I would like to change my subscribed email on this list from 
[EMAIL PROTECTED] to [EMAIL PROTECTED]

thx.

-- 
Xavier Callejas

E-Mail + MSN: xcallejas at ibcinc.com.sv
ICQ: 6224
--
Open your Mind, use Open Source.


Re: [squid-users] squid + gmail ???

2005-02-22 Thread Navneet Choudhary
Check whether you have opened the HTTPS [443] port or not.

Try browsing any secure site (one that requires secure communication, HTTPS),
i.e. any banking or e-commerce site.


I don't think gmail.com and google.com are both being blocked by any ACL
(unless you are blocking anything and everything starting with the letter 'g'
using a regular expression).

The gmail.com problem may be arising due to the browser not being supported
by gmail.

Please use IE6, Netscape 4+, Firefox 1.0, etc.


But the most important thing:

More info = better chance someone can help you :)


