[squid-users] squidclient help

2010-02-18 Thread Vivek

Hi All,

I am trying to get the URLs of objects cached on disk via 
squidclient.


#squidclient mgr:vm_objects

This retrieves the list of objects in the memory cache. Each entry 
contains the URL ( GET http://127.0.0.1:3181/id=02591000260870/image.png ):

---
KEY 3BAE20D702DCFA4225D988B1F151EA92
 GET http://127.0.0.1:3181/id=02591000260870/image.png
 STORE_OK  IN_MEMORY SWAPOUT_NONE PING_DONE
 CACHABLE,DISPATCHED,VALIDATED
 LV:1266548360 LU:1266548360 LM:-1 EX:1266893960
 0 locks, 0 clients, 1 refs
 Swap Dir -1, File 0X
 inmem_lo: 0
 inmem_hi: 16553
 swapout: 0 bytes queued
---

#squidclient mgr:objects

This retrieves the list of all cached objects (including those on disk), 
but the entries do not contain the URL:

---
KEY 14A08323AC805484B4161AFCC0228C02
 STORE_OK  NOT_IN_MEMORY SWAPOUT_DONE PING_DONE
 CACHABLE,DISPATCHED,VALIDATED
 LV:1266548026 LU:1266548232 LM:-1 EX:1266893626
 0 locks, 0 clients, 2 refs
 Swap Dir 0, File 0X004471
---

How do we get the URLs of disk-cached objects, using squidclient or any 
other method?
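In the meantime, the URLs of in-memory objects can at least be scraped from the mgr:vm_objects output. A minimal sketch, assuming the format shown above (the URL is the second field on the indented method lines); this does not cover disk-only objects, since mgr:objects omits the URL:

```shell
#!/bin/sh
# Scrape request URLs out of "squidclient mgr:vm_objects" output.
# Real use would be: squidclient mgr:vm_objects | awk '...'
# A sample of the output format is embedded here for illustration.
sample='KEY 3BAE20D702DCFA4225D988B1F151EA92
 GET http://127.0.0.1:3181/id=02591000260870/image.png
 STORE_OK  IN_MEMORY SWAPOUT_NONE PING_DONE'

# Print field 2 of lines whose first field is an HTTP method.
printf '%s\n' "$sample" | awk '$1 ~ /^(GET|POST|HEAD|PUT|CONNECT)$/ { print $2 }'
# prints: http://127.0.0.1:3181/id=02591000260870/image.png
```

For objects already swapped to disk, one avenue worth checking is cache_store_log: store.log entries record the URL of each object written to the disk cache, so the disk contents can be reconstructed from it if that log is enabled.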



Thanks,
Vivek


Re: [squid-users] peer selection with weight=N

2010-02-18 Thread H

>>
>> Does weight=N influence round-robin selection algorithm?
>
> Yes.
>
>> But first of all, does weight have the same definition for the ICP
>> and HTTP (no-query) protocols?
>
> Yes, but what is being weighted differs slightly so proportions differ
> somewhat for the same weight in different peering protocols.
>


Thanks, so far so good, but look what I get here:

I have a transparent squid in front of the client network and three parents
connected to different upstream providers. All of them are connected locally
over Gigabit Ethernet. There is no delay worth mentioning between them, even
under peak load, which is about 50-60 Mbit/s through the transparent squid
box. I should add that the transparent proxy has always_direct deny and
never_direct allow in it.

The thing is, all three as parents with no-query round-robin get equal load
as expected; but giving one (any) of them weight=2 makes no difference, it
still gets the same load.

So I tried this:

cache_peer parent_IP parent tport uport no-query [weight=2]
cache_peer parent_IP parent tport uport no-query round-robin
cache_peer parent_IP parent tport uport no-query round-robin

With or without weight, the first gets all the load; the other two are
practically never requested, no I/O traffic. To complete the picture, I
tried every combination without round-robin.

When I disconnect the first parent from its upstream link, navigation fails;
when I shut squid down on it, it rolls over to the second and third and does
round-robin as expected. So I added monitorurl to the first and the failover
works, BUT it never comes back to query the first.


Long story short:

Coming back to query a parent after a temporary uplink failure only works if
round-robin is not present.

Round-robin with any weight for any of the parents does not divide the load
in any way and may disable failure detection.

I noticed this in December and cannot say if or when the problem started,
because I had no upstream link failure before. So 2.7.STABLE7 had this
problem already.
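For what it's worth, weighted balancing can also be had without round-robin: Squid 2.6 and later support CARP hashing across parents, which both spreads load and fails over when a parent dies. A hedged sketch (the IPs and ports are placeholders; my recollection is that weight=N adjusts the CARP load factor, which is worth verifying against the 2.7 cache_peer documentation):

```
# squid.conf sketch: CARP parent selection instead of round-robin
cache_peer 192.0.2.1 parent 3128 0 no-query carp weight=2
cache_peer 192.0.2.2 parent 3128 0 no-query carp
cache_peer 192.0.2.3 parent 3128 0 no-query carp
```

One design note: CARP hashes by URL, so the same URL always goes to the same (live) parent, unlike round-robin's per-request rotation.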



As a last try, I configured:

cache_peer parent_IP parent tport uport [no-query] [monitor-options]
cache_peer parent_IP sibling tport uport [no-query] [round-robin] [allow-miss]
cache_peer parent_IP sibling tport uport [no-query] [round-robin] [allow-miss]

which also works as long as everything is online. Whatever options are set
among those marked with [], as soon as the first parent or its uplink fails,
the siblings deny access completely. Of course miss_access peer allow is set
properly, but they do not serve misses, with either no-query or ICP. It
seems sibling operation does not work at all for misses.
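For the record, a sibling can only hand out misses if the box being asked permits them; a minimal sketch of what the queried box's squid.conf would need (the addresses and the localnet ACL are placeholders):

```
# Allow the two peer proxies to fetch misses through this box,
# while still serving ordinary clients' misses.
acl peer_siblings src 192.0.2.11 192.0.2.12
miss_access allow peer_siblings
miss_access allow localnet   # placeholder ACL for regular clients
```

Note that once any miss_access rule is present, everything not matched is denied, so the client ACL must be listed too.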


I have additional cache_peer_access ACLs and other rules which might
influence peer selection, but I disabled them all for the described tests.
The only two relevant lines on the transparent proxy are always_direct deny
and never_direct allow; disabling these, the transparent proxy goes direct
and client navigation does not fail, which is good but not what I want
(direct).

So I guess I am doing something very stupid, or there is something wrong
with the round-robin option, right?


H
(17)8111.3300


Re: [squid-users] Websites not loading correctly

2010-02-18 Thread Amos Jeffries

Alex Marsal wrote:
Sorry Amos, I'm actually running 3.0.STABLE20 (the one coming with 
openSUSE).


Thanks

Alex

Amos Jeffries wrote:


Alex Marsal wrote:

Hello,

I've noticed that some websites don't load correctly. For example this one:

http://global.dymo.com/esES/Products/default.html

If I try to go to this website with squid 3.0 it just loads a blank page,
like the layout only. But if I try it without squid the website is
displayed correctly.

Any help please?



The site has a broken Vary: header. You need to contact the website 
admin about fixing it.


http://redbot.org/?uri=http%3A%2F%2Fglobal.dymo.com%2FesES%2FProducts%2Fdefault.html+


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] captive portal with squid transparent

2010-02-18 Thread Landy Landy


I currently use NoCat along side squid with no problem.



--- On Wed, 2/17/10, Henrik Nordström  wrote:

> From: Henrik Nordström 
> Subject: Re: [squid-users] captive portal with squid transparent
> To: "Mister Raven" 
> Cc: squid-users@squid-cache.org
> Date: Wednesday, February 17, 2010, 7:02 PM
> Wed 2010-02-17 at 14:26 -0800, Mister Raven wrote:
> > Has anyone had any success setting up a captive portal
> on squid 3.1.x
> > in transparent proxy mode, and running dansguardian
> 2.2?
> > 
> > So far I have not found a solution.
> 
> What's the problem?
> 
> If you want to combine transparent & authentication
> then you will need
> to implement some kind of IP based authentication, partly
> outside Squid.
> 
> Regards
> Henrik
> 
> 





Re: [squid-users] no source

2010-02-18 Thread Amos Jeffries

Luis Daniel Lucio Quiroz wrote:

On Wednesday 17 February 2010 at 19:21:57, Amos Jeffries wrote:

On Wed, 17 Feb 2010 19:01:38 -0600, Luis Daniel Lucio Quiroz wrote:

2010/02/17 18:50:49| Failed to select source for 'http://www.google.fr'
2010/02/17 18:50:49|   always_direct = 0
2010/02/17 18:50:49|never_direct = 1
2010/02/17 18:50:49|timedout = 0

Using squid 3.0.20 I'm having this. Is it a known issue, already fixed in
.24? I couldn't update because it is in production.

The request is not allowed to go from Squid directly to the Internet.

Also, none of the configured cache_peer entries are allowed to service it,
or if one is it is currently down.

Amos

Thanks, I fixed it by changing my never_direct into an always_direct statement.


Why was that necessary? Simply removing the never_direct blocker would 
be enough in normal Squid setups.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] A little help..

2010-02-18 Thread Amos Jeffries

Nir Fishler wrote:

Hello there,

I'm currently using Squid version 2.6 build 24 installed on CentOS
v5.4 and I'd like to know how I can allow a specific user to access
specific URLs.

Thanks for your help.


http://wiki.squid-cache.org/SquidFaq

http://wiki.squid-cache.org/SquidFaq/SquidAcl

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] Tiered Squid proxy issue (Microsoft JET Database Engine error '80040e57'

2010-02-18 Thread Brett Lymn
On Thu, Feb 18, 2010 at 12:14:21PM -0600, Ryan McCain wrote:
> Thanks for the response.  According to Websense, they only support 2.5x. :(
> 

The Websense redirectors don't understand the quoting of
non-alphanumeric characters in the username that was introduced by
squid after 2.5.  To use a later version of squid you need to rewrite
the username to convert the quoted characters back to their ASCII
form. Your site may be different, but we just rewrite the %5c back to \
and it all works fine.  You need to chain a small script in front of
the websense redirector that performs the rewrite.
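That chaining can be as small as a one-line filter. A sketch under the assumption that only %5C (the quoted backslash in DOMAIN\user) needs unquoting; the redirector path and the demo input are placeholders:

```shell
#!/bin/sh
# websense_wrap.sh: undo squid's %5C quoting of the backslash in
# DOMAIN\user before each request line reaches the Websense redirector.
unquote() {
    # In a real helper chain, use GNU sed -u to keep the pipe unbuffered.
    sed 's/%5[Cc]/\\/g'
}
# Real use (path is a placeholder): unquote | /path/to/websense_redirector
# Demo on a sample request line:
printf 'http://example.com/ 10.0.0.1/- DOMAIN%%5Cuser GET\n' | unquote
# prints: http://example.com/ 10.0.0.1/- DOMAIN\user GET
```

Squid would then be pointed at the wrapper (e.g. redirect_program /usr/local/bin/websense_wrap.sh) instead of the Websense binary directly.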

> Also, I will contact the site owner now and ask about this issue.
>

I have this issue going to another site too - the interesting thing is
if I use our border squid server then I can download a pdf from a
particular site BUT if I use a squid server inside our network that
parents off the border squid server then I get a similar error to what
Ryan sees.  I tried talking to the site admin but he does not seem to
know what is going wrong.
 
We are using squid 2.7.STABLE6

-- 
Brett Lymn
"Warning:
The information contained in this email and any attached files is
confidential to BAE Systems Australia. If you are not the intended
recipient, any use, disclosure or copying of this email or any
attachments is expressly prohibited.  If you have received this email
in error, please notify us immediately. VIRUS: Every care has been
taken to ensure this email and its attachments are virus free,
however, any loss or damage incurred in using this email is not the
sender's responsibility.  It is your responsibility to ensure virus
checks are completed before installing any data sent in this email to
your computer."




[squid-users] A little help..

2010-02-18 Thread Nir Fishler
Hello there,

I'm corrently using Squid version 2.6 build 24 installed on CentOS
v5.4 and i'd like to know how can I allow specific user to access
specific URLs??

Thanks for your help.


Nir.

--
(( niro ))



-- 
(( niro ))


[squid-users] squid redirector.

2010-02-18 Thread Alessandro Baggi
Hi there. I'm using OpenBSD 4.6 with squid, squidclamav and squidGuard, 
and I have a problem with squidGuard. After several hours of work, the 
squidGuard processes become zombies. To avoid this problem I'm trying 
to create my own redirector. It is a simple redirector: it reads from 
stdin, checks that the URL is not blacklisted, and then writes to stdout 
(a few lines of code). But when I concatenate squidclamav with my 
redirector, everything works except that squidclamav does not perform any 
scan on files (tested with eicar.com). Squid + squidclamav works, and so 
does squid + my redirector on its own.
Another point: I've tried to get the same behaviour with multiple 
redirectors via "wrapzap", and they work fine; my redirector redirects 
blacklisted URLs, and squidclamav performs a scan.


Can anyone help me with this strange behaviour, please?

Maybe the problem is the request's format? My redirector gets a request 
from squid in this form:


http://www.google.it/ 192.168.1.3/- - GET - myip=192.168.1.2 myport=3128

and then I write this request back on stdout. If I write it back without 
myip and myport, squid does not perform any request. Is this normal 
behaviour?
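For comparison, a pass-through redirector skeleton that answers with the (possibly rewritten) URL per the helper protocol; a sketch assuming the one-request-per-line format shown above, with a purely illustrative blocklist pattern and block page:

```shell
#!/bin/sh
# Minimal squid redirector sketch: squid writes one request per line
# ("URL ip/fqdn ident method ..."); the helper answers with the URL,
# rewritten only if it matches the (illustrative) blocklist pattern.
redirect() {
    while read -r url rest; do
        case "$url" in
            *blacklisted.example*) echo "http://blocked.example/denied.html" ;;
            *)                     echo "$url" ;;   # pass through unchanged
        esac
    done
}
# Demo with the request format quoted in the post above:
printf 'http://www.google.it/ 192.168.1.3/- - GET - myip=192.168.1.2 myport=3128\n' | redirect
# prints: http://www.google.it/
```

Answering with just the URL (or a blank line, meaning "no change") is the usual helper reply; unbuffered output matters when squid keeps the helper running.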



Thanks in advance


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 11:49 -0800, Tory M Blue wrote:

> Okay I've found some issues that I had not seen before,
> 
> Feb 18 18:37:06 kvm0 kernel: nf_conntrack: table full, dropping packet.

And this is exactly what I wanted you to look out for...

> I would like to kick the netfilter team and fedora team in the shins.
> The issue was my squid boxes are virtual and the errors were being
> logged on the domain box (not domain as in MS). So now I'm trying to
> go through the system and remove all this garbage. This server does
> not need to track the connections and/or log them. There does not seem
> to be a simple way to disable it, just a lot of sysctl options and I'm
> unclear if these will do it entirely.

There is no sysctl to block conntrack.

What you need is to either

a) Make sure conntrack is not loaded in the kernel.

b) If conntrack needs to be loaded then make sure to add suitable
NOTRACK rules in iptables to avoid tracking any flows that do not need
to be tracked.
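Option (b) typically lives in the raw table, which is evaluated before conntrack; a sketch in iptables-restore format, assuming the proxy traffic is on port 3128 (adjust ports and add match criteria for the real traffic mix):

```
# iptables-restore fragment: skip connection tracking for proxy flows
*raw
-A PREROUTING -p tcp --dport 3128 -j NOTRACK
-A OUTPUT -p tcp --sport 3128 -j NOTRACK
COMMIT
```

Flows hitting NOTRACK never enter the conntrack table, so they cannot contribute to the "table full, dropping packet" condition.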

Regards
Henrik



RE: [squid-users] Tiered Squid proxy issue (Microsoft JET Database Engine error '80040e57'

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 13:39 -0600, Ryan McCain wrote:
> According to this: 
> http://www.squid-cache.org/Versions/v2/2.6/cfgman/header_access.html
> 
> 2.6 uses "header_access"
> 
> Hoping it was the same for 2.5x, I entered this into squid.conf
> 
> header_access X-Forwarded-For deny all
> 
> ...and it worked like a charm.  How did you know it was the X-Forwarded-For 
> header?

Because it's about the only header added by proxies like Squid that a
forum would like to stuff into its database.

Regards
Henrik



Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-18 Thread Tory M Blue
On Thu, Feb 18, 2010 at 12:27 AM, Henrik Nordstrom
 wrote:
> Wed 2010-02-17 at 21:40 -0800, Tory M Blue wrote:
>
>> And sorry, "sleeping" was just my way of saying the box shows no load,
>> almost no I/O (4-5) when I'm hitting it hard. I do not see this issue
>> with fewer threads, it's only when I turn up the juice. But with
>> turning up the connections per second I would expect to see some type
>> of load, and I see none.
>
> Anything in /var/log/messages?
>
> The above problem description is almost an exact match for Linux
> iptables connectiontracking table limit being hit.
>
> Regards
> Henrik

Thanks Henrik, nothing in /var/log/messages or even dmesg.

And iptables?

Not running. No rules in place, service shut down.

That's not the culprit; with fewer than 12 children at the beginning of my run:

2010/02/17 10:29:51|   Completed Validation Procedure
2010/02/17 10:29:51|   Validated 948594 Entries
2010/02/17 10:29:51|   store_swap_size = 3794376k
2010/02/17 10:29:51| storeLateRelease: released 0 objects


2010/02/18 09:53:08| squidaio_queue_request: WARNING - Queue congestion
2010/02/18 09:53:12| squidaio_queue_request: WARNING - Queue congestion
2010/02/18 09:53:17| squidaio_queue_request: WARNING - Queue congestion

I even dropped my thread count, and as soon as my load test starts (with
maybe 10 children launched), I get the error:

2010/02/18 09:56:18| squidaio_queue_request: WARNING - Queue congestion
2010/02/18 09:56:28| squidaio_queue_request: WARNING - Queue congestion


Okay, I've found some issues that I had not seen before:

Feb 18 18:37:06 kvm0 kernel: nf_conntrack: table full, dropping packet.

I would like to kick the netfilter team and fedora team in the shins.
The issue was my squid boxes are virtual and the errors were being
logged on the domain box (not domain as in MS). So now I'm trying to
go through the system and remove all this garbage. This server does
not need to track the connections and/or log them. There does not seem
to be a simple way to disable it, just a lot of sysctl options and I'm
unclear if these will do it entirely.

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.all.arp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_filter=0
net.ipv4.conf.lo.rp_filter=0
net.ipv4.conf.lo.arp_filter=0
net.ipv4.conf.eth0.rp_filter=0
net.ipv4.conf.eth0.arp_filter=0
net.ipv4.conf.eth1.rp_filter=0
net.ipv4.conf.eth1.arp_filter=0
net.ipv4.conf.br0.rp_filter=0
net.ipv4.conf.br0.arp_filter=0
net.ipv4.conf.br1.rp_filter=0
net.ipv4.conf.br1.arp_filter=0
net.ipv4.conf.vnet0.rp_filter=0
net.ipv4.conf.vnet0.arp_filter=0
net.ipv4.conf.vnet1.rp_filter=0
net.ipv4.conf.vnet1.arp_filter=0

But I'll be quiet here for a bit, until I need assistance from the
squid community. I'm still seeing the queue congestion, but if it
actually doubles the threshold each time, I may get to a good place,
or it may be okay to ignore the messages. Obviously the queue congestion
was not causing the 500s; the dropping of packets by netfilter was.

Thanks

Tory


RE: [squid-users] Tiered Squid proxy issue (Microsoft JET Database Engine error '80040e57'

2010-02-18 Thread Ryan McCain
According to this: 
http://www.squid-cache.org/Versions/v2/2.6/cfgman/header_access.html

2.6 uses "header_access"

Hoping it was the same for 2.5x, I entered this into squid.conf

header_access X-Forwarded-For deny all

...and it worked like a charm.  How did you know it was the X-Forwarded-For 
header?



-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Thursday, February 18, 2010 12:35 PM
To: Ryan McCain
Cc: 'squid-users@squid-cache.org'
Subject: RE: [squid-users] Tiered Squid proxy issue (Microsoft JET Database 
Engine error '80040e57'

Thu 2010-02-18 at 12:21 -0600, Ryan McCain wrote:
> BTW, Websense does support Squid 2.6.  Would upgrading from 2.5 to 2.6 
> possibly help?

Most likely not, as the error seems to be on the web server and not Squid.

What you can try is to filter out the X-Forwarded-For header to see if that 
makes any difference.

request_header_access X-Forwarded-For deny all

[not sure the above syntax works in 2.5; maybe it's header_access, or maybe 
even older directives. My memory of 2.5 and even 2.6 has faded.]

Regards
Henrik



Re: [squid-users] regarding caching and replication

2010-02-18 Thread senthil

Thank you very much.

Hi Henrik,

We need the data cached in squid1 to also be cached in squid2.

Is there any possibility?

Note: Squid2 is started if the request rate increases beyond a certain rate.

Regards
senthil



Henrik Nordström wrote:

Thu 2010-02-18 at 19:28 +0530, senthil wrote:

  
Is it possible to make a cached object in squid1 (e.g. a.gif) appear in the 
squid2 cache as a.gif using squidclient?
 e.g. squidclient -h ipofparentsquid(172.16.1.15) -m GET 
http://www.example.com/a.gif



No, that makes squidclient fetch the URL from ipofparentsquid.

For what you describe you need a peering relation.
Regards
Henrik


  




RE: [squid-users] Tiered Squid proxy issue (Microsoft JET Database Engine error '80040e57'

2010-02-18 Thread Ryan McCain
I tried upgrading to 2.6 on one of the nodes of a POP cluster and it didn't 
work.  To further troubleshoot this, I'm going to attempt to upgrade to 2.7 on 
one of the POP cluster nodes. 


Ryan McCain
Northrop Grumman Corporation
Email: ryan.mcc...@la.gov
Phone: 225.505.3832

Registered Linux User #364609

-Original Message-
From: Ryan McCain [mailto:ryan.mcc...@la.gov] 
Sent: Thursday, February 18, 2010 12:21 PM
To: 'Henrik Nordström'
Cc: 'squid-users@squid-cache.org'
Subject: RE: [squid-users] Tiered Squid proxy issue (Microsoft JET Database 
Engine error '80040e57'

BTW, Websense does support Squid 2.6.  Would upgrading from 2.5 to 2.6 possibly 
help? 


Ryan McCain
Northrop Grumman Corporation
Email: ryan.mcc...@la.gov
Phone: 225.505.3832

Registered Linux User #364609

-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net]
Sent: Thursday, February 18, 2010 4:37 AM
To: Ryan McCain
Cc: 'squid-users@squid-cache.org'
Subject: Re: [squid-users] Tiered Squid proxy issue (Microsoft JET Database 
Engine error '80040e57'

Thu 2010-02-11 at 11:22 -0600, Ryan McCain wrote:

> We are using Squid 2.5 on SLES for compatibility reasons with a redirector we 
> use at the POP level, Websense.  Websense doesn't support 2.7 or 3.x.

Any external helpers (URL rewriters, auth etc.) which work with 2.5 also work 
in later releases.

> Anyways, if you go to http://www.garymallon.com --> COURSES --> DISCUSSION 
> BOARD then login with:
> User: student
> Pw: ssw
> 
> I get the following error:
> 
> Microsoft JET Database Engine error '80040e57' 
> 
> The field is too small to accept the amount of data you attempted to add. Try 
> inserting or pasting less data. 
> 
> /mrengmal/gm/forum/inc_func_common.asp, line 585

This is difficult to answer without knowing the web server application.

Have you tried talking to the maintainers of that forum?

A guess is that they are storing the X-Forwarded-For header in their database 
for tracking purposes and have assigned too small a field for storing it.

Regards
Henrik



Re: [squid-users] header windows live messenger

2010-02-18 Thread Luis Daniel Lucio Quiroz
On Thursday 18 February 2010 at 06:00:32, David C. Heitmann wrote:
> Login.live.com
> .contacts.msn.com 
> .storage.msn.com 


Try these.


[squid-users] Re: help please header

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 19:25 +0100, David C. Heitmann wrote:
> hi gurus,
> 
> i need the header for windows live messenger to login
> 
> when i delete all deny all - i can connect!

Why are you messing around with headers like this? There is a high risk of
breaking things when denying headers without knowing exactly what each
header does.

Regards
Henrik



RE: [squid-users] Tiered Squid proxy issue (Microsoft JET Database Engine error '80040e57'

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 12:21 -0600, Ryan McCain wrote:
> BTW, Websense does support Squid 2.6.  Would upgrading from 2.5 to 2.6 
> possibly help?

Most likely not, as the error seems to be on the web server and not
Squid.

What you can try is to filter out the X-Forwarded-For header to see if
that makes any difference.

request_header_access X-Forwarded-For deny all

[not sure the above syntax works in 2.5; maybe it's header_access, or
maybe even older directives. My memory of 2.5 and even 2.6 has faded.]

Regards
Henrik



[squid-users] help please header

2010-02-18 Thread David C. Heitmann

hi gurus,

I use squid 3.1.0.16 on Debian 5.

I need to know which headers Windows Live Messenger 2009 needs to log in.

When I delete the "All deny all" line, I can connect!


reply_header_access Allow allow all
reply_header_access Authorization allow all
reply_header_access WWW-Authenticate allow all
reply_header_access Proxy-Authorization allow all
reply_header_access Proxy-Authenticate allow all
reply_header_access Cache-Control allow all
reply_header_access Content-Encoding allow all
reply_header_access Content-Length allow all
reply_header_access Content-Type allow all
reply_header_access Date allow all
reply_header_access Expires allow all
reply_header_access If-Modified-Since allow all
reply_header_access Last-Modified allow all
reply_header_access Location allow all
reply_header_access Pragma allow all
reply_header_access Accept allow all
reply_header_access Accept-Charset allow all
reply_header_access Accept-Encoding allow all
reply_header_access Accept-Language allow all
reply_header_access Content-Language allow all
reply_header_access Mime-Version allow all
reply_header_access Retry-After allow all
reply_header_access Title allow all
reply_header_access Connection allow all
reply_header_access Proxy-Connection allow all
reply_header_access Host allow all
reply_header_access Via allow all
reply_header_access X-Forwarded-For allow all
reply_header_access User-Agent allow all
reply_header_access Referer allow all
reply_header_access Cookie allow all
reply_header_access Set-Cookie allow all
reply_header_access From allow all
reply_header_access Server allow all
reply_header_access Link allow all
reply_header_access Accept-Ranges allow all
reply_header_access If-Modified-Since allow all
reply_header_access If-None-Match allow all
reply_header_access If-Range allow all
reply_header_access Max-Forwards allow all
reply_header_access Range allow all
reply_header_access Upgrade allow all
reply_header_access Age allow all
reply_header_access Content-Language allow all
reply_header_access Content-Location allow all
reply_header_access Content-Disposition allow all
reply_header_access Content-MD5 allow all
reply_header_access Content-Range allow all
reply_header_access ETag allow all
reply_header_access Refresh allow all
reply_header_access Retry-After allow all
reply_header_access Trailer allow all
reply_header_access Transfer-Encoding allow all
reply_header_access Vary allow all
reply_header_access Warning allow all

#reply_header_access All deny all


thanks, dave






[squid-users] Re: help

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 12:12 +0100, David C. Heitmann wrote:
> can i hide the remote addr or the remote dns addr ?
> or is it not possible?

What do you mean?

Regards
Henrik



RE: [squid-users] Tiered Squid proxy issue (Microsoft JET Database Engine error '80040e57'

2010-02-18 Thread Ryan McCain
BTW, Websense does support Squid 2.6.  Would upgrading from 2.5 to 2.6 possibly 
help? 


Ryan McCain
Northrop Grumman Corporation
Email: ryan.mcc...@la.gov
Phone: 225.505.3832

Registered Linux User #364609

-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Thursday, February 18, 2010 4:37 AM
To: Ryan McCain
Cc: 'squid-users@squid-cache.org'
Subject: Re: [squid-users] Tiered Squid proxy issue (Microsoft JET Database 
Engine error '80040e57'

Thu 2010-02-11 at 11:22 -0600, Ryan McCain wrote:

> We are using Squid 2.5 on SLES for compatibility reasons with a redirector we 
> use at the POP level, Websense.  Websense doesn't support 2.7 or 3.x.

Any external helpers (URL rewriters, auth etc.) which work with 2.5 also work 
in later releases.

> Anyways, if you go to http://www.garymallon.com --> COURSES --> DISCUSSION 
> BOARD then login with:
> User: student
> Pw: ssw
> 
> I get the following error:
> 
> Microsoft JET Database Engine error '80040e57' 
> 
> The field is too small to accept the amount of data you attempted to add. Try 
> inserting or pasting less data. 
> 
> /mrengmal/gm/forum/inc_func_common.asp, line 585

This is difficult to answer without knowing the web server application.

Have you tried talking to the maintainers of that forum?

A guess is that they are storing the X-Forwarded-For header in their database 
for tracking purposes and have assigned too small a field for storing it.

Regards
Henrik



Re: [squid-users] regarding caching and replication

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 19:28 +0530, senthil wrote:

> Is it possible to make a cached object in squid1 (e.g. a.gif) appear in
> the squid2 cache as a.gif using squidclient?
>  e.g. squidclient -h ipofparentsquid(172.16.1.15) -m GET
> http://www.example.com/a.gif

No, that makes squidclient fetch the URL from ipofparentsquid.

For what you describe you need a peering relation.
Regards
Henrik
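Concretely, such a peering relation could be declared on squid2 roughly like this; a sketch assuming squid1 is 172.16.1.15 on the default HTTP/ICP ports, and deliberately omitting proxy-only so that squid2 keeps its own copy of whatever it fetches from squid1 (which is what the replication goal needs):

```
# squid2.conf sketch: ask squid1 for misses and store the replies locally
cache_peer 172.16.1.15 sibling 3128 3130
# squid1 must in turn permit ICP queries and misses from squid2
# (icp_access / miss_access on the squid1 side).
```

This warms squid2's cache on demand rather than copying the whole store up front; only objects actually requested through squid2 get replicated.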



RE: [squid-users] Tiered Squid proxy issue (Microsoft JET Database Engine error '80040e57'

2010-02-18 Thread Ryan McCain
Thanks for the response.  According to Websense, they only support 2.5.x. :(

Also, I will contact the site owner now and ask about this issue.

Here are the headers using the Firefox Live Headers plugin. I'm not sure what 
it means, but I have the POST setting set to ACCURATE.

This is when it doesn't work (going through a POP proxy v2.5 then up through 
our top level proxy v2.7):
---

http://site463.mysite4now.net/mrengmal/gm/forum/login.asp

POST http://site463.mysite4now.net/mrengmal/gm/forum/login.asp HTTP/1.1
Host: site463.mysite4now.net
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.11) 
Gecko/2009060200 SUSE/3.0.11-0.1.1 Firefox/3.0.11
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Referer: 
http://site463.mysite4now.net/mrengmal/gm/forum/login.asp?target=default.asp
Cookie: Snitz00User=; ASPSESSIONIDQCASABDC=KHNDCKLCEEENIEDJCADGOLAD; 
ASPSESSIONIDQCBTBBCC=FDFFFLLCCDKFBDLJOOPELEMD; 
ASPSESSIONIDSABQCACC=GKKNJOLCAMCFDIELLEDGIHLB; 
ASPSESSIONIDQABSCCAC=PMAPIPLCFECHCPDDNEBCPDAH; 
ASPSESSIONIDSCBSCDBC=JKIDMAMCKODFGGNAMIOKAHDK
Content-Type: application/x-www-form-urlencoded
Content-Length: 101
target=default.asp&Name=student&submit1.x=33&submit1.y=5&submit1=Login&Password=ssw&SavePassWord=true
HTTP/1.0 500 Internal Server Error
Date: Thu, 18 Feb 2010 18:08:15 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Length: 5339
Content-Type: text/html
Set-Cookie: 
Snitz00User=Pword=80fef94b3945d30bd0adf30fa9915276881765b719c14f7ffa29f1128f11c1f9&Name=student;
 expires=Sat, 20-Mar-2010 20:08:14 GMT; path=/
Cache-Control: private
X-Cache: MISS from dss-cs99lv03-a, MISS from proxy-mon.dss.la.gov
Via: 1.1 dss-cs99lv03-a:8080 (squid/2.7.STABLE6)
X-Cache-Lookup: MISS from proxy-mon.dss.la.gov:8080
Proxy-Connection: close
--

-

This is when it does work going directly through a top level proxy (v2.7)
---
http://site463.mysite4now.net/mrengmal/gm/forum/login.asp

POST http://site463.mysite4now.net/mrengmal/gm/forum/login.asp HTTP/1.1
Host: site463.mysite4now.net
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.11) 
Gecko/2009060200 SUSE/3.0.11-0.1.1 Firefox/3.0.11
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Referer:
http://site463.mysite4now.net/mrengmal/gm/forum/login.asp?target=default.asp
Cookie: Snitz00User=; ASPSESSIONIDQCASABDC=KHNDCKLCEEENIEDJCADGOLAD;
ASPSESSIONIDQCBTBBCC=FDFFFLLCCDKFBDLJOOPELEMD;
ASPSESSIONIDSABQCACC=GKKNJOLCAMCFDIELLEDGIHLB;
ASPSESSIONIDQABSCCAC=PMAPIPLCFECHCPDDNEBCPDAH;
ASPSESSIONIDSCBSCDBC=JKIDMAMCKODFGGNAMIOKAHDK
Content-Type: application/x-www-form-urlencoded
Content-Length: 101
target=default.asp&Name=student&submit1.x=20&submit1.y=8&submit1=Login&Password=ssw&SavePassWord=true
HTTP/1.0 200 OK
Date: Thu, 18 Feb 2010 18:10:48 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Length: 6646
Content-Type: text/html
Set-Cookie:
Snitz00User=Pword=80fef94b3945d30bd0adf30fa9915276881765b719c14f7ffa29f1128f11c1f9&Name=student;
expires=Sat, 20-Mar-2010 20:10:46 GMT; path=/
Cache-Control: private
X-Cache: MISS from dss-cs99lv02-a
Via: 1.1 dss-cs99lv02-a:8080 (squid/2.7.STABLE6)
Connection: keep-alive
Proxy-Connection: keep-alive
--
http://site463.mysite4now.net/mrengmal/gm/forum/default.asp

GET http://site463.mysite4now.net/mrengmal/gm/forum/default.asp HTTP/1.1
Host: site463.mysite4now.net
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.11) 
Gecko/2009060200 SUSE/3.0.11-0.1.1 Firefox/3.0.11
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Cookie:
Snitz00User=Pword=80fef94b3945d30bd0adf30fa9915276881765b719c14f7ffa29f1128f11c1f9&Name=student;
ASPSESSIONIDQCASABDC=KHNDCKLCEEENIEDJCADGOLAD;
ASPSESSIONIDQCBTBBCC=FDFFFLLCCDKFBDLJOOPELEMD;
ASPSESSIONIDSABQCACC=GKKNJOLCAMCFDIELLEDGIHLB;
ASPSESSIONIDQABSCCAC=PMAPIPLCFECHCPDDNEBCPDAH;
ASPSESSIONIDSCBSCDBC=JKIDMAMCKODFGGNAMIOKAHDK

HTTP/1.0 200 OK
Date: Thu, 18 Feb 2010 18:10:50 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Content-Length: 17878
Content-Type: text/html
Cache-Control: private
X-Cache: MISS from dss-cs99lv02-a
Via: 1.1 dss-cs99lv02-a:8080 (squid/2.7.STABLE6)
Connection: keep-alive
Proxy-Connection: keep-alive
--


Does anything stick out?


Ryan McCain
Northrop Grumman Corporation
Email: ryan.mcc...@la.gov
Phone: 225.505.3832

Registered Linux User #364609

-Original Messa

Re: [squid-users] clientParseRequestMethod: Unsupported method in request '×^?^L<92>ª¤Ô'

2010-02-18 Thread Henrik Nordström
On Fri 2010-02-19 at 02:50 +1300, Amos Jeffries wrote:

> ShoutCAST media streams? Support for that was added in 3.1, which is 
> either build-it-yourself or available from the Debian experimental 
> repositories.

Fedora 12 also ships 3.1.

Regards
Henrik



Re: [squid-users] Active Directory Single Sign-on

2010-02-18 Thread Henrik Nordström
On Thu 2010-02-18 at 12:16 +0100, Khaled Blah wrote:
> Thanks for your reply, Henrik!
> 
> With "it" I think you mean Proxy Authentication, right? Sorry, if that's a 
> trivial question for you. I just would like to clarify this.

Yes.

Regards
Henrik



Re: [squid-users] peer selection with weight=N

2010-02-18 Thread Henrik Nordström
On Thu 2010-02-18 at 08:42 -0200, H wrote:

> For squid-2.7-STABLE8
> 
> Does weight=N influence the round-robin selection algorithm?

Yes.

> But first of all, does weight have the same definition for the ICP and HTTP
> (no-query) protocols?

Yes, but what is being weighted differs slightly so proportions differ
somewhat for the same weight in different peering protocols.
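Henrik's description can be modeled with a toy weighted round-robin selector. This is a sketch only — Squid's real implementation biases an internal round-robin counter by weight rather than expanding slots, and the peer names here are made up:

```python
from itertools import cycle

def weighted_round_robin(peers):
    # Expand each (name, weight) pair into `weight` slots, then cycle
    # through them: a peer with weight=2 is picked twice as often as
    # one with weight=1.
    slots = [name for name, weight in peers for _ in range(weight)]
    return cycle(slots)

selector = weighted_round_robin([("parent1", 2), ("parent2", 1), ("parent3", 1)])
picks = [next(selector) for _ in range(8)]
# parent1 gets half the requests; parent2 and parent3 a quarter each
```

The proportions are the point; the exact pick order differs between this model and Squid's counter-based scheme.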

Regards
Henrik



Re: [squid-users] Diagnosing Objects That Are Not Cached (squid/3.0.STABLE8)

2010-02-18 Thread Henrik Nordström
On Wed 2010-02-17 at 13:23 -0500, Norbert Hoeller wrote:
> I enabled level 6 logging for section 22 and 65.  I then explicitly
> retrieved
> 'http://www.facebook.com/images/loaders/indicator_blue_small.gif' and
> found the following entries in cache.log.  Does this confirm that the
> problem is the invalid 'Expires' header value?  If so, is there a way
> around this issue other than trying to get Facebook to adhere to
> standards?

Seems to be a regression error in Squid-3.

Squid-2 behaves better...

Regards
Henrik



[squid-users] Livemeeting (audio and video) and squid

2010-02-18 Thread P. H.
Hi all,

Does anybody have experience with Live Meeting and squid? We have the problem
that we can do Live Meeting conferences with (or through) squid, but
audio and video are not possible, even though the Microsoft docs state
that when you have a proxy configured:

"Q. My company uses a Web proxy server. Can I still participate in a
Live Meeting webcast?
A. Yes. Live Meeting works with most firewalls and Web proxy servers.

"

Anybody else run into this problem or has successfully done audio or
video meetings through squid with livemeeting?

Thanks and regards,

 Philipp


[squid-users] header help

2010-02-18 Thread David C. Heitmann

Which header is important for Windows Live Messenger?

With all headers denied I can't log in to MSN 2009!

Please help.
Regards, dave


Re: [squid-users] Websites not loading correctly

2010-02-18 Thread Alex Marsal

Sorry Amos, I'm actually running 3.0.STABLE20 (the one that comes with openSUSE).

Thanks

Alex

Amos Jeffries wrote:


Alex Marsal wrote:

Hello,

I've noticed that some websites don't load correctly. For example this one:

http://global.dymo.com/esES/Products/default.html

If I try to go to this website with squid 3.0 it just loads a blank
page, like the layout. But if I try it without squid the website is
displayed correctly.

Any help please?



There are now 24 quite different releases of 3.0, and countless  
patched variations of each of those.


Which exact one are you talking about?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16




DISCLAIMER: The e-mail message and all attachments transmitted with it are 
intended solely for the use of the addressee and may contain legally privileged 
and confidential information. If the reader of this message is not the intended 
recipient, or an employee or agent responsible for delivering this message to 
the intended recipient, you are hereby notified that any dissemination, 
distribution, copying, or other use of this message or its attachments is 
strictly prohibited. If you have received this message in error, please notify 
the sender immediately by replying to this message and please delete it from 
your computer. Any use or retransmission without proper authorisation is 
prohibited. You are cautioned that any communication over the Internet is not 
secure and may be intercepted by third parties. Please consider your 
environmental responsibility before printing this e-mail.


Re: [squid-users] NTLM Authentication and Connection Pinning problem

2010-02-18 Thread Amos Jeffries

Jeff Foster wrote:

RFC 2616 states that "304 Not Modified" responses don't have a body.

To quote from http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

"If the client has performed a conditional GET request and access is
 allowed, but the document has not been modified, the server SHOULD
 respond with this status code. The 304 response MUST NOT contain a
 message-body, and thus is always terminated by the first empty line after
 the header fields."


Please re-read what I said in the third sentence under "Let's put it this 
way".




In addition the responses include a "Connection: keep-alive" header so
the socket shouldn't be closed and should be ready for re-use after the
http header is received.


Please re-read all of what I said under "Let's put it this way".

If you still don't understand answer this:

 Telling the remote end to keep a link alive, then closing the link 
yourself has what effect?

  That is exactly what the server is doing.




These link state that NTLM authentication is TCP connection based
and authentication is not required after the first HTTP request.

http://www.innovation.ch/personal/ronald/ntlm.html
http://curl.haxx.se/rfc/ntlm.html#ntlmHttpAuthentication


That's what I described as "weird". The server is not obeying that. It 
challenged for every new object requested within link #7 in your trace.


Amos



Jeff F>

On Wed, Feb 17, 2010 at 4:30 PM, Amos Jeffries  wrote:

On Wed, 17 Feb 2010 08:59:27 -0600, Jeff Foster  wrote:

I'm not sure which TCP stream you are referencing in your reply.
If you are looking at client port 1917; I agree with your response.


There are 7 distinct TCP streams/connections in that trace.
The first 6 display the exact same behavior or persist long enough to do
the login and object fetch, but the object requires closure.

The 7th does the weird 304 and re-auth required by the server. It also
persists during the entire auth setup sequence, but we never see the end of
the sequence so can't tell if the object kills it as well or not.

The port does not matter here. Both the server and client and Squid are
all working exactly correctly according to the data they transfer.

Let's put it this way:
 If the objects being requested were sent by the server with correct
Content-Length headers. The _entire_ page load (all 7 of those connections)
would happen through a single TCP link.
 This can be seen working in the 304 response (which has a known 0-byte
length) and its followup request re-using the same ports.

 The reality is that the server is sending out objects that range from 0
bytes to infinite, without telling Squid how long they are. There is
exactly one way for it to tell Squid the end of any given object and the
start of the next. To completely close the TCP link.
 There is exactly one way for Squid to pass that information on. To close
the Squid->client TCP link.

Cut off the head the body dies. Cut off the body the head dies. Either way
you look at it the client->squid->server pinned linkage dies.


NP: I think what you were expecting to see was the client->squid link dies
and 'untie' the squid->server link back to re-use as a normal persistent
connection. Then a new client->squid link to tie it up again later. While
that would normally be the case, these are dying due to the unknown-length
objects, and the server link is the first to go down.
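The framing argument above is essentially RFC 2616's message-length rules. A minimal sketch of the decision a receiver has to make (the dict-based interface is invented for illustration; the header names are standard):

```python
def body_delimiter(headers, status=200, method="GET"):
    # How does a receiver know where the response body ends?
    # RFC 2616 section 4.4, simplified.
    if method == "HEAD" or 100 <= status < 200 or status in (204, 304):
        return "none"      # defined to never have a body (known 0-byte length)
    h = {k.lower(): v.lower() for k, v in headers.items()}
    if "chunked" in h.get("transfer-encoding", ""):
        return "chunked"   # self-delimiting; connection stays reusable
    if "content-length" in h:
        return "length"    # read exactly N bytes; connection stays reusable
    return "close"         # only the TCP close marks the end of the body,
                           # so the connection cannot persist

# The 304 in the trace is reusable; a length-less 200 is not:
body_delimiter({}, status=304)               # "none"
body_delimiter({"Content-Length": "17878"})  # "length"
body_delimiter({})                           # "close"
```

This is why the server's length-less objects force Squid to tear down the pinned link, whatever Connection: header either side sends.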



The problem as I see it is the TCP stream for the client port 1919.
It is using port 37159 on the squid server to the upstream. Then
in packet 210 the upstream request switches to port 37161.

The trace was run from the initial client request to long after the
Internet Explorer authentication dialog was displayed.

All of this happens magically in the background away from the user's view,
thus it's called 'transparent' authentication. Access to pages is
'transparent' from the user's view.


Amos


On Wed, Feb 17, 2010 at 1:50 AM, Amos Jeffries  wrote:

Jeff Foster wrote:
Hi Jeff,

Looking at the 3.1 capture I see everything working perfectly as it
should
be.

 The connection is held open as expected of persistent connections
 through
the entire auth sequence and beyond. It finishes with an actual page
result
starting to come back from the final auth credentials.

 The thing to notice at this point is that the object being fetched has
 no
Content-Length: header and so the connection MUST end with closure to
terminate the file. This will prevent it ever being re-used as you
expected.
 NP: all the object replies this server produces seem to have this type
 of
content preventing connection re-use.

 At the end it is your client machine which sends a RST packet and

aborts

the download and closes the connections before the object is complete,
its
visibly a partial page in the trace.


 The only odd thing I can see so far is the followup from the
http://simon/Styles/forms.css request. Server replies with a 304
redirection
(keep alive allows connection re-use :). As expected The client sends

the

Re: [squid-users] regarding caching and replication

2010-02-18 Thread senthil

Hi Amos,

Thank you very much

>>By "encrypted" do you mean "binary GIF format" ?
>> as in the format expected to be received when asking for a .gif object?

yes..

Is it possible to make a cached object in squid1 (e.g. a.gif) appear in 
squid2's cache as a.gif using squidclient, e.g.:
squidclient -h ipofparentsquid(172.16.1.15) -m GET http://www.example.com/a.gif



Regards
senthil

Amos Jeffries wrote:

senthil wrote:

Henrik Nordström wrote:

On Thu 2010-02-18 at 14:56 +0530, senthil wrote:

 
We can copy the contents of cache directory of Squid1 to Squid2 but 
the problem here is that the copied data has to be indexed by squid2.



I would set squid2 as a sibling peer to Squid1 using cache digests,
allowing it to fetch content from Squid1 to populate its cache as
needed.

Regards
Henrik



  

Hello Hen,

Thanks for the reply

In the scenario we are using Squid as reverse proxy

Using cache digests is good. Is it possible to generate requests from 
squid2 for the cached objects listed in the cache digest?

i.e., squid2 getting cached objects from squid1 with the help of the 
cache digest information in squid2


configure squid1 as a cache_peer of squid2. Index exchange happens 
automatically.




By using  " squidclient  -h ipofparentsquid(172.16.1.15)   -m GET 
http://www.xxx.com/dd.gif "  we are able to see the details of cached 
object in encrypted format


By "encrypted" do you mean "binary GIF format" ?
 as in the format expected to be received when asking for a .gif object?



By using the squidclient command is there is any possibility to get 
cached objects to the squid2 from squid1 using cache digest information.


No. Not manually.

Amos




Re: [squid-users] clientParseRequestMethod: Unsupported method in request '×^?^L<92>ª¤Ô'

2010-02-18 Thread Amos Jeffries

Johann Spies wrote:

I recently build two proxy servers with squid3.0
(3.0.STABLE8-3+lenny3) replacing old ones which used version 2.5.

We are experiencing a few problems and would appreciate some enlightenment
concerning these issues:

1. Users complaining that sometimes incomplete files are received when
they download and interruptions when they watch streaming contents.


ShoutCAST media streams? Support for that was added in 3.1, which is 
either build-it-yourself or available from the Debian experimental 
repositories.




2. From time to time the store gets corrupted and then squid keep on
   rebuilding the cache and resetting itself.


Aye, "STABLE8" was not very stable as it turned out. I recommend using 
the release in backports.org. If you go to 3.1 for the stream problem 
that should cover this issue as well.




3. Whether this is related, I don't know but I have seen messages on
   the internet linking the reset of the cache and this type of error:
   
   clientParseRequestMethod: Unsupported method in request '^»eí¹®m*¬hoÇ'ÒøÊÖ'




It's not an error. Someone is pushing garbage into Squid. Squid will log 
them (the IP is usually a line or two below that message in older Squid) 
then disconnect them.




4. I see also the following in the logs which I do not understand and
   feel a bit uncomfortable about:

access.log:1266393906.689  0 127.0.0.1 TCP_MISS/200 905 GET 
cache_object://localhost/server_list - NONE/- text/plain
access.log:1266393910.402  0 127.0.0.1 TCP_MISS/200 960 GET 
cache_object://localhost/storedir - NONE/- text/plain
access.log:1266393919.070  0 127.0.0.1 TCP_MISS/200 905 GET 
cache_object://localhost/server_list - NONE/- text/plain


cache.log:2010/02/17 10:05:19| CACHEMGR: @127.0.0.1 requesting 
'server_list'
cache.log:2010/02/17 10:05:19| CACHEMGR: @127.0.0.1 requesting 
'counters'
cache.log:2010/02/17 10:05:19| CACHEMGR: @127.0.0.1 requesting 
'counters'



Someone is using the manager interface to your Squid.
They are doing so from localhost, so I assume it's you. Perhaps using 
squidclient or cachemgr.cgi.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] Websites not loading correctly

2010-02-18 Thread Amos Jeffries

Alex Marsal wrote:

Hello,

I've noticed that some websites don't load correctly. For example this 
one:

http://global.dymo.com/esES/Products/default.html

If I try to go to this website with squid 3.0 it just loads a blank page, 
like the layout. But if I try it without squid the website is displayed 
correctly.

Any help please?



There are now 24 quite different releases of 3.0, and countless patched 
variations of each of those.


Which exact one are you talking about?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] regarding caching and replication

2010-02-18 Thread Amos Jeffries

senthil wrote:

Henrik Nordström wrote:

On Thu 2010-02-18 at 14:56 +0530, senthil wrote:

 
We can copy the contents of cache directory of Squid1 to Squid2 but 
the problem here is that the copied data has to be indexed by squid2.



I would set squid2 as a sibling peer to Squid1 using cache digests,
allowing it to fetch content from Squid1 to populate its cache as
needed.

Regards
Henrik



  

Hello Hen,

Thanks for the reply

In the scenario we are using Squid as reverse proxy

Using cache digests is good. Is it possible to generate requests from 
squid2 for the cached objects listed in the cache digest?

i.e., squid2 getting cached objects from squid1 with the help of the 
cache digest information in squid2


configure squid1 as a cache_peer of squid2. Index exchange happens 
automatically.




By using  " squidclient  -h ipofparentsquid(172.16.1.15)   -m GET 
http://www.xxx.com/dd.gif "  we are able to see the details of cached 
object in encrypted format


By "encrypted" do you mean "binary GIF format" ?
 as in the format expected to be received when asking for a .gif object?



By using the squidclient command is there is any possibility to get 
cached objects to the squid2 from squid1 using cache digest information.


No. Not manually.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] proxy.pac ipv6-addresses

2010-02-18 Thread Amos Jeffries

Henrik Nordström wrote:

On Thu 2010-02-18 at 10:32 +0100, tsl...@agilolfinger.de wrote:


How does the "return PROXY" statement in a .pac file have to be crafted
to return an IPv6 address?


Not sure there is a standard syntax for that. Use of DNS name is
recommended.




Also depending on your browser there may be some alterations needed.

From updating pattern matches ...
http://support.mozilla.com/tiki-view_forum_thread.php?locale=tr&comments_parentId=259063&forumId=1

... to writing a completely new PAC lookup function
http://blogs.msdn.com/wndp/archive/2006/07/18/IPV6-WPAD-for-WinHttp-and-WinInet.aspx


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


Re: [squid-users] BYPASSED acl allowedurls url_regex "/etc/squid/url.txt" , help?

2010-02-18 Thread Amos Jeffries

Andres Salazar wrote:

Hello Amos,

# /usr/local/sbin/squid -v
Squid Cache: Version 2.7.STABLE6

I am including the ACLs and the http_access rules:

acl msn_mime req_mime_type -i ^application/x-msn-messenger$
acl msn_gw url_regex -i gateway.dll
acl flash_mime rep_mime_type ^application/x-shockwave-flash$
acl flash_mime_allowurl dstdomain .flashstudio.com .flashtutorials.com
89.15.79.50
acl allowedurls dstdomain "/etc/squid/url.txt"
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl localnet src x.x.x.x.x.
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl Safe_ports port 
acl SSL_ports port 
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all msn_mime
http_access deny all msn_gw


"all" has no meaning at the beginning of a set of combined rules.

It might have meaning at the finishing end of the line, but in this case 
not either.



http_reply_access deny flash_mime !flash_mime_allowurl
http_access allow localnet allowedurls
http_access allow localnet SSL_ports


There you go. Unlimited access to all SSL ports for localnet.

That line appears to be doing nothing but opening the HTTPS requests to 
the not-allowed domains.
Allowed domains (both HTTP and HTTPS) are already allowed by "allow 
localnet allowedurls"



http_access deny all

I am sending the url.txt through email.



That file had a problem too; it's a wonder it worked at all. I commented 
likewise in reply to that email.
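Putting Amos's observations together, a tightened rule order might look like the sketch below. This is untested and reuses the ACL names from the poster's config; it drops the meaningless leading `all`, and drops the blanket `allow localnet SSL_ports` line that was opening HTTPS to not-allowed domains:

```
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny msn_mime
http_access deny msn_gw
http_access allow localnet allowedurls
http_access deny all
```

Allowed domains, both HTTP and HTTPS, are still covered by the `allow localnet allowedurls` line; everything else falls through to the final deny.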



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE24
  Current Beta Squid 3.1.0.16


[squid-users] regarding caching and replication

2010-02-18 Thread senthil

Henrik Nordström wrote:

On Thu 2010-02-18 at 14:56 +0530, senthil wrote:

  
We can copy the contents of cache directory of Squid1 to Squid2 but the 
problem here is that the copied data has to be indexed by squid2.



I would set squid2 as a sibling peer to Squid1 using cache digests,
allowing it to fetch content from Squid1 to populate its cache as
needed.

Regards
Henrik



  

Hello Hen,

Thanks for the reply

In the scenario we are using Squid as reverse proxy

Using cache digests is good. Is it possible to generate requests from 
squid2 for the cached objects listed in the cache digest?

i.e., squid2 getting cached objects from squid1 with the help of the 
cache digest information in squid2


By using  " squidclient  -h ipofparentsquid(172.16.1.15)   -m GET 
http://www.xxx.com/dd.gif "  we are able to see the details of cached 
object in encrypted format


By using the squidclient command is there is any possibility to get 
cached objects to the squid2 from squid1 using cache digest information.


regards
senthil




[squid-users] header windows live messenger

2010-02-18 Thread David C. Heitmann

hello experts,

When I write:
reply_header_access User-Agent deny all
request_header_access User-Agent deny all
(squid 3.1.0.16)

I can't log in to Windows Live Messenger 2009.

When I delete this rule, the login succeeds.

I think I have to permit MSN to send the User-Agent header to log in to 
MSN 2009. For GMX I have:

acl user_agent_request dstdomain .gmx.net
request_header_access User-Agent allow user_agent_request

acl user_agent_reply dstdomain .gmx.net
reply_header_access User-Agent allow user_agent_reply

---
With the addresses in this file, I can't log in! But which address is 
needed for the login?


acl user_agent_msn dstdomain "/squid/user_agent"
request_header_access User-Agent allow user_agent_msn
---
www.sqm.microsoft.com
rad.msn.com
db2.t.msn.com
msn.com
ssw.msn.com
live.ivwbox.de
view.atdmt.com
impde.tradedoubler.com
public.bay.livefilestore.com
Login.live.com
.contacts.msn.com 
.storage.msn.com 
c.msn.com 
.messenger.msn.com
g.msn.com 
crl.microsoft.com 
messenger.hotmail.com:1863
gateway.messenger.hotmail.com 
config.messenger.msn.com
ows.messenger.msn.com 
rsi.hotmail.com 
sqm.microsoft.com 
.edge.messenger.live.com 
relay.data.edge.messenger.live.com 
rad.msn.com 
appdirectory.messenger.msn.com 
images.messenger.msn.com 
spaces.live.com

relay.voice.messenger.msn.com
65.54.52.254
65.54.52.62


Please help me.
Regards, david


Re: [squid-users] NTLM Authentication and Connection Pinning problem

2010-02-18 Thread Jeff Foster
RFC 2616 states that "304 Not Modified" responses don't have a body.

To quote from http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

"If the client has performed a conditional GET request and access is
 allowed, but the document has not been modified, the server SHOULD
 respond with this status code. The 304 response MUST NOT contain a
 message-body, and thus is always terminated by the first empty line after
 the header fields."
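The rule quoted above generalizes: per RFC 2616 a handful of responses are defined to never carry a message body. A small sketch of that check:

```python
def has_body(status, method="GET"):
    # RFC 2616 section 4.3: responses to HEAD requests, and all 1xx,
    # 204 and 304 responses, MUST NOT include a message-body.
    if method == "HEAD":
        return False
    return not (100 <= status < 200 or status in (204, 304))

has_body(304)           # False: terminated by the blank line after the headers
has_body(200)           # True
has_body(200, "HEAD")   # False
```

That known zero-byte length is exactly why the 304 in the trace could be followed by another request on the same connection.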

In addition the responses include a "Connection: keep-alive" header so
the socket shouldn't be closed and should be ready for re-use after the
http header is received.

These link state that NTLM authentication is TCP connection based
and authentication is not required after the first HTTP request.

http://www.innovation.ch/personal/ronald/ntlm.html
http://curl.haxx.se/rfc/ntlm.html#ntlmHttpAuthentication

Jeff F>

On Wed, Feb 17, 2010 at 4:30 PM, Amos Jeffries  wrote:
> On Wed, 17 Feb 2010 08:59:27 -0600, Jeff Foster  wrote:
>> I'm not sure which TCP stream you are referencing in your reply.
>> If you are looking at client port 1917; I agree with your response.
>>
>
> There are 7 distinct TCP streams/connections in that trace.
> The first 6 display the exact same behavior or persist long enough to do
> the login and object fetch, but the object requires closure.
>
> The 7th does the weird 304 and re-auth required by the server. It also
> persists during the entire auth setup sequence, but we never see the end of
> the sequence so can't tell if the object kills it as well or not.
>
> The port does not matter here. Both the server and client and Squid are
> all working exactly correctly according to the data they transfer.
>
> Let's put it this way:
>  If the objects being requested were sent by the server with correct
> Content-Length headers. The _entire_ page load (all 7 of those connections)
> would happen through a single TCP link.
>  This can be seen working in the 304 response (which has a known 0-byte
> length) and its followup request re-using the same ports.
>
>  The reality is that the server is sending out objects that range from 0
> bytes to infinite, without telling Squid how long they are. There is
> exactly one way for it to tell Squid the end of any given object and the
> start of the next. To completely close the TCP link.
>  There is exactly one way for Squid to pass that information on. To close
> the Squid->client TCP link.
>
> Cut off the head the body dies. Cut off the body the head dies. Either way
> you look at it the client->squid->server pinned linkage dies.
>
>
> NP: I think what you were expecting to see was the client->squid link dies
> and 'untie' the squid->server link back to re-use as a normal persistent
> connection. Then a new client->squid link to tie it up again later. While
> that would normally be the case, these are dying due to the unknown-length
> objects, and the server link is the first to go down.
>
>
>> The problem as I see it is the TCP stream for the client port 1919.
>> It is using port 37159 on the squid server to the upstream. Then
>> in packet 210 the upstream request switches to port 37161.
>>
>> The trace was run from the initial client request to long after the
>> Internet Explorer authentication dialog was displayed.
>
> All of this happens magically in the background away from the user's view,
> thus it's called 'transparent' authentication. Access to pages is
> 'transparent' from the user's view.
>
>
> Amos
>
>> On Wed, Feb 17, 2010 at 1:50 AM, Amos Jeffries  wrote:
>>> Jeff Foster wrote:

>>>
>>> Hi Jeff,
>>>
>>> Looking at the 3.1 capture I see everything working perfectly as it
>>> should
>>> be.
>>>
>>>  The connection is held open as expected of persistent connections
>>>  through
>>> the entire auth sequence and beyond. It finishes with an actual page
>>> result
>>> starting to come back from the final auth credentials.
>>>
>>>  The thing to notice at this point is that the object being fetched has
>>>  no
>>> Content-Length: header and so the connection MUST end with closure to
>>> terminate the file. This will prevent it ever being re-used as you
>>> expected.
>>>  NP: all the object replies this server produces seem to have this type
>>>  of
>>> content preventing connection re-use.
>>>
>>>  At the end it is your client machine which sends a RST packet and
> aborts
>>> the download and closes the connections before the object is complete,
>>> its
>>> visibly a partial page in the trace.
>>>
>>>
>>>  The only odd thing I can see so far is the followup from the
>>> http://simon/Styles/forms.css request. Server replies with a 304
>>> redirection
>>> (keep alive allows connection re-use :). As expected The client sends
> the
>>> auth credentials to the new request URL through teh existing
> connection.
>>> But
>>> then the server replies with a brand new auth challenge as if it had
>>> never
>>> seen the client before.
>>>  The trace does not continue long enough to follow that, but I would
> hope
>>> the 

Re: [squid-users] Active Directory Single Sign-on

2010-02-18 Thread Khaled Blah
Thanks for your reply, Henrik!

With "it" I think you mean Proxy Authentication, right? Sorry, if that's a 
trivial question for you. I just would like to clarify this.


Regards,

Khaled

 Original Message 
> Date: Thu, 18 Feb 2010 11:38:11 +0100
> From: "Henrik Nordström" 
> To: Khaled Blah 
> CC: squid-users@squid-cache.org
> Subject: Re: [squid-users] Active Directory Single Sign-on

On Thu 2010-02-18 at 10:30 +0100, Khaled Blah wrote:
> 
> > "This mechanism is not used for HTTP authentication to HTTP proxies."
> > 
> > Does that mean HTTP proxy authentication or the actual HTTP
> > authentication. I am wondering whether that means that Squid cannot use
> > SPNEGO based proxy authentication or that a client cannot HTTP
> authenticate to a target through a proxy. I found the RFC to be ambiguous
> concerning this.
> 
> Squid can handle it since negotiate support was added to Squid.
> 
> Firefox can handle it.
> 
> Late versions of MSIE can also handle it, but at the time Microsoft
> wrote that document MSIE could not handle it.
> 
> Regards
> Henrik


[squid-users] Websites not loading correctly

2010-02-18 Thread Alex Marsal

Hello,

I've noticed that some websites don't load correctly. For example this one:

http://global.dymo.com/esES/Products/default.html

If I try to go to this website with squid 3.0 it just loads a blank 
page, like the layout. But if I try it without squid the website is 
displayed correctly.


Any help please?



[squid-users] peer selection with weight=N

2010-02-18 Thread H

Hi

For squid-2.7-STABLE8

Does weight=N influence the round-robin selection algorithm?

But first of all, does weight have the same definition for the ICP and HTTP
(no-query) protocols?

Thanks


H
(17)8111.3300


Re: [squid-users] Regarding Caching of Objects and replication

2010-02-18 Thread Henrik Nordström
On Thu 2010-02-18 at 14:56 +0530, senthil wrote:

> We can copy the contents of cache directory of Squid1 to Squid2 but the 
> problem here is that the copied data has to be indexed by squid2.

I would set squid2 up as a sibling peer of squid1 using cache digests,
allowing it to fetch content from squid1 to populate its cache as
needed.
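A minimal sketch of what that could look like in squid2's squid.conf (addresses taken from the thread; assumes both Squids were built with --enable-cache-digests, and the usual 3128 HTTP port):

```
# squid1 as a sibling; ICP disabled, since cache digests announce its content
cache_peer 172.16.1.35 sibling 3128 0 no-query
```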

Regards
Henrik



Re: [squid-users] proxy.pac ipv6-addresses

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 10:32 +0100, tsl...@agilolfinger.de wrote:

> How does the "return PROXY" statement in a .pac file have to be crafted to
> return an IPv6 address?

Not sure there is a standard syntax for that. Use of a DNS name is
recommended.
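A sketch of what that looks like in practice (proxy6.example.com is a placeholder; publish an AAAA record for it so IPv6-capable clients resolve it to the proxy's IPv6 address):

```javascript
// Minimal PAC file returning the proxy by DNS name instead of a literal
// IPv6 address; the client's resolver decides between A and AAAA records.
function FindProxyForURL(url, host) {
  return "PROXY proxy6.example.com:3128";
}

// Quick check outside a browser:
console.log(FindProxyForURL("http://www.example.com/", "www.example.com"));
```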

Regards
Henrik



Re: [squid-users] Active Directory Single Sign-on

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 10:30 +0100, Khaled Blah wrote:

> "This mechanism is not used for HTTP authentication to HTTP proxies."
> 
> Does that mean HTTP proxy authentication or actual HTTP
> authentication? I am wondering whether it means that Squid cannot use
> SPNEGO-based proxy authentication, or that a client cannot HTTP
> authenticate to a target through a proxy. I found the RFC to be ambiguous
> concerning this.

Squid can handle it since negotiate support was added to Squid.

Firefox can handle it.

Late versions of MSIE can also handle it, but at the time Microsoft
wrote that document MSIE could not handle it.
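For reference, a hedged sketch of enabling Negotiate/SPNEGO proxy authentication in squid.conf. The helper name and path vary by Squid version and distribution (e.g. squid_kerb_auth in older releases, negotiate_kerberos_auth later), and the service principal is a placeholder:

```
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl kerb_auth proxy_auth REQUIRED
http_access allow kerb_auth
```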

Regards
Henrik



Re: [squid-users] Tiered Squid proxy issue (Microsoft JET Database Engine error '80040e57'

2010-02-18 Thread Henrik Nordström
Thu 2010-02-11 at 11:22 -0600, Ryan McCain wrote:

> We are using Squid 2.5 on SLES for compatibility reasons with a redirector we 
> use at the POP level, Websense. Websense doesn't support 2.7 or 3.x.

Any external helpers (url rewriters, auth etc.) which work with 2.5 also
work in later releases.

> Anyways, if you go to http://www.garymallon.com --> COURSES --> DISCUSSION 
> BOARD then login with:
> User: student
> Pw: ssw
> 
> I get the following error:
> 
> Microsoft JET Database Engine error '80040e57' 
> 
> The field is too small to accept the amount of data you attempted to add. Try 
> inserting or pasting less data. 
> 
> /mrengmal/gm/forum/inc_func_common.asp, line 585 

This is difficult to answer without knowing the web server application.

Have you tried talking to the maintainers of that forum?

A guess is that they are storing the X-Forwarded-For header in their
database for tracking purposes and have assigned too small a field for
storing it.
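If the X-Forwarded-For guess turns out to be right, one workaround on the Squid side is to stop appending client addresses to the header (Squid 2.x syntax; note the remote application then only sees the proxy's address):

```
# squid.conf: do not add client IPs to X-Forwarded-For
forwarded_for off
```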

Regards
Henrik



Re: [squid-users] ACL by ms windows hostname not IP

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 11:09 +0100, Nikolas Kuimcidis wrote:

> Currently we have stopped using static IP addresses and we obtain our IPs 
> from a DHCP server.
> So I would like to set up the ACL rules to filter by 
> windows-computer-name and not by IP

If your DHCP is configured to automatically update DNS then the
srcdomain acl can match what is registered in DNS.
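A sketch of what that looks like, assuming the proxy can resolve the clients' PTR records (i.e. DHCP keeps reverse DNS up to date; the domain is the one from the thread):

```
# Match clients whose reverse-DNS name falls under the Windows domain
acl winhosts srcdomain .domain.company.org
http_access allow winhosts
```

Note that srcdomain matches on the client's reverse-DNS name, so per-host matching only works if every lease is registered in DNS.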

> ACL test srcdomain computername.domain.company.org
> 
> Any idea what I am doing wrong?

Did you also use this in http_access?

Regards
Henrik




Re: [squid-users] two connections - specific users ? problem....

2010-02-18 Thread Henrik Nordström
Thu 2010-02-18 at 11:21 +0100, David C. Heitmann wrote:

> > You can do this with the help of proper source policy based routing
> > configured on the server to enable the server to properly participate on
> > both ISP links, combined with tcp_outgoing_address to select what
> > requests uses which link.
> >
> > Regards
> > Henrik
> >
> >
> >   
> yes it is for two isp connections,
> but the first is for all and the second for only one person.
> how can I manage this in squid?

See above.

The first part (source policy based routing) is OS or router
configuration depending on your network.

The second part (tcp_outgoing_address) is Squid.
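Put together, a hypothetical sketch of the Squid half (the addresses are placeholders for the local interface on each ISP link and for the special client):

```
# one client goes out via ISP2's address, everyone else via ISP1's
acl special_user src 192.168.1.50
tcp_outgoing_address 198.51.100.10 special_user
tcp_outgoing_address 192.0.2.10
```

The OS routing policy then has to route traffic sourced from each address out the matching link.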

Regards
Henrik



RE: [squid-users] Cache manager analysis

2010-02-18 Thread J. Webster

Does this look reasonable?

auth_param basic realm P*r ProxyServer
auth_param basic credentialsttl 2 hours
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd
authenticate_cache_garbage_interval 1 hour
authenticate_ip_ttl 2 hours
#acl all src 0.0.0.0/0.0.0.0
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1
acl cacheadmin src 88.xxx.xxx.xxx 127.0.0.1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 1863 # MSN messenger
acl ncsa_users proxy_auth REQUIRED
acl maxuser max_user_ip -s 2
acl CONNECT method CONNECT
http_access allow manager cacheadmin
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost
http_access deny manager
http_access allow ncsa_users
http_access deny maxuser
http_access deny all
icp_access allow all
http_port 8080
http_port 88.xxx.xxx.xxx:80
hierarchy_stoplist cgi-bin ?
cache_mem 256MB
maximum_object_size_in_memory 50 KB
cache_replacement_policy heap LFUDA
cache_dir aufs /var/spool/squid 4 16 256
maximum_object_size 50 MB
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
buffered_logs on
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?)  0 0% 0
refresh_pattern .   0   20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
half_closed_clients off
cache_mgr ***'***.com
cachemgr_passwd  all
visible_hostname P*r ProxyServer
log_icp_queries off
dns_nameservers 208.67.222.222 208.67.220.220
hosts_file /etc/hosts
memory_pools off
forwarded_for off
client_db off
coredump_dir /var/spool/squid


> From: webster_j...@hotmail.com
> To: squ...@treenet.co.nz; squid-users@squid-cache.org
> Date: Sat, 13 Feb 2010 16:35:29 +
> Subject: RE: [squid-users] Cache manager analysis
>
>
> Thanks.
> A few questions on this:
> (a) when you said this "all src all", is that meant to be "acl all src all"?
> (b) Hint 2: if possible, define an ACL for the network ranges where you accept 
> logins. Use it like so
>   The logins are accepted from IP addresses that I never know, it is an 
> external proxy server for geo location so not sure I can do this? logins will 
> only ever be directed to the 88.xxx.xxx.xxx server though?
> (c) cache_mem 100 MB
> Bump this up as high as you can go without risking memory swapping.
> Objects served from RAM are 100x faster than objects not.
> Where can I view if memory swapping is happening?
> (D) maximum_object_size 50 MB
> Bump this up too. Holding full ISO CDs and windows service packs can
> boost performance when one is used from the cache. 40GB of disk can
> store a few.
> If I increase this, will the server ever try to store streamed video? I 
> had an efficiency problem with the original configuration that came with 
> squid, which meant that streamed video was buffering constantly. Not sure 
> what caused it but with the current config it does not do that.
> If I increase the cache_mem and max object size do I also need to increase 
> this?
> maximum_object_size_in_memory 50 KB
> (E)
> cache_swap_low 90
> cache_swap_high 95
> access_log /var/log/squid/access.log squid
> cache_log /var/log/squid/cache.log
> buffered_logs on
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
>
> Drop the QUERY bits above. It's more than halving the things your Squid can 
> store.
> Remove the acl and the cache deny?
> At present, does this stop the cache from storing anything with a ?, ie 
> dynamic pages?
> What if the same request is made for a dynamic page, will it retrieve it from 
> the cache (old page) rather than fetch the new dynamic content?
>
> current conf redone below:
> 
> auth_param basic realm Proxy server
> auth_param basic credentialsttl 2 hours
> auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid_passwd
> authenticate_cache_garbage_interval 1 hour
> authenticate_ip_ttl 2 hours
> #acl all src 0.0.0.0/0.0.0.0
> acl src all
> acl manager proto cache_object
> acl localhost src 127.0.0.1
> acl cacheadmin src 88.xxx.xxx.xxx
> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl S

[squid-users] ACL by ms windows hostname not IP

2010-02-18 Thread Nikolas Kuimcidis

Hello There,

We use squid to deny full internet access to our users.
We have a win 2000 domain and the computers had static IP addresses 
assigned, so ACLs by IP were a piece of cake to set up.

So I could easily set who does and who doesn't have access.

Currently we have stopped using static IP addresses and we obtain our IPs 
from a DHCP server.
So I would like to set up the ACL rules to filter by 
windows-computer-name and not by IP.


First thing I did was add the Windows domain server's DNS to the 
resolv.conf file on the Debian server which runs squid.
With this step I can ping our Windows machines (ping 
computername.domain.company.cz) from the Debian box.

So I thought I could add an ACL (below) but that doesn't work:

ACL test srcdomain computername.domain.company.org

Any idea what I am doing wrong?

Could you please point me in the right direction, and excuse what may be a lame question.

Thank you in advance
Nikolas


[squid-users] Re: R: Error 503 using HTTPS connection

2010-02-18 Thread Henrik Nordstrom
Thu 2010-02-18 at 09:43 +0100, Edgardo Ghibaudo wrote:
> If I disable the proxy SQUID (using IE6 or Firefox 3.5.7 with an SSL 
> certificate) the connection is very fast.
> Using the proxy the connection is VERY slow ... and the log file reports 
> error 503
> In the configuration file I don't have any deny for the address 195.7.17.254 
> and for the port 443.
> There is NO problem with other SSL sites.

It's not a Squid issue, it's a networking issue.

Recommended reading:

   Squid FAQ, Linux, Can't connect to some sites through Squid
   


   Squid FAQ, Linux, Some sites load extremely slowly or not at all
   


   Squid FAQ, Troubleshooting, Why do I sometimes get Zero sized reply
   


   Squid Knowledge Base, Identifying and working around sites with broken TCP 
Window Scaling
   

Regards
Henrik




[squid-users] proxy.pac ipv6-addresses

2010-02-18 Thread tslbai
Hello list,

sorry for posting my question here on the list. I know that this is not a
squid issue, but I hope to get a hint here.

How does the "return PROXY" statement in a .pac file have to be crafted to
return an IPv6 address?

I tried
{ return "PROXY 2001:DB8:484:2FB::1:8080" }
{ return "PROXY [2001:DB8:484:2FB::1]:8080" }
but neither works.

Any help is appreciated.

Regards, Florian



[squid-users] Active Directory Single Sign-on

2010-02-18 Thread Khaled Blah
Hello to the list,

I have searched for answers regarding this but did not find any. My
question concerns RFC 4559. There it says:

"This mechanism is not used for HTTP authentication to HTTP proxies."

Does that mean HTTP proxy authentication or actual HTTP
authentication? I am wondering whether it means that Squid cannot use
SPNEGO-based proxy authentication, or that a client cannot HTTP
authenticate to a target through a proxy. I found the RFC to be ambiguous
concerning this.

I'd be glad if you could enlighten me concerning this question.

Thanks a lot!

-- 
Khaled Blah
khaled.b...@gmx.de



signature.asc
Description: OpenPGP digital signature


[squid-users] Regarding Caching of Objects and replication

2010-02-18 Thread senthil

Hi All,

The scenario here is that a single Squid is running in the network, i.e. 
squid1 (172.16.1.35).


When the request rate increases beyond 800 requests per second, a second 
squid is started, i.e. squid2 (172.16.1.36).


Both Squids have a caching capacity of 400 GB.

We want to replicate the cache contents of the initially started squid1 to 
the newly started squid2.


We can copy the contents of the cache directory of squid1 to squid2, but the 
problem here is that the copied data has to be indexed by squid2.


This consumes a lot of time, so is there any possibility to replicate the 
contents of squid1 with the help of squidclient?


We can get the list of objects cached in squid1 with the help of 
squidclient; is there any way to have squid2 cache those objects from 
squid1 rather than copying them?

Kindly help me

Thanks in Advance

Regards
senthil




Re: [squid-users] problem

2010-02-18 Thread Henrik Nordstrom
Thu 2010-02-11 at 10:39 +0100, David C. Heitmann wrote:
> how can I connect through the proxy with MSN Live Messenger 2009?

What does access.log say?

Regards
Henrik



Re: [squid-users] Re: SSLBump, help to configure for 3.1.0.16

2010-02-18 Thread Henrik Nordstrom
Wed 2010-02-17 at 22:40 -0700, Alex Rousskov wrote:
> On 02/16/2010 12:54 PM, Andres Salazar wrote:
> > Hello,
> > 
> > I am still having issues with SSLBump... apparently I am now getting
> > this error when I visit an https site with my browser explicitly
> > configured to use the https_port.
> > 
> > 2010/02/16 14:31:14| clientNegotiateSSL: Error negotiating SSL
> > connection on FD 8: error:1407609B:SSL
> > routines:SSL23_GET_CLIENT_HELLO:https proxy request (1/-1)

This error is seen if a browser is configured to use a Squid https_port
as HTTP proxy port for secure (SSL/TLS) connections. To be exact it's
from the OpenSSL library where the library barfs at receiving an HTTP
CONNECT request where an SSL/TLS handshake was expected.

For explicit proxy configuration the browser must be configured to use a
Squid http_port.
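In other words, something along these lines (a sketch with placeholder ports and certificate path; exact options vary across 3.1.x releases):

```
# browsers explicitly configured to use the proxy must point at http_port:
http_port 3128 ssl-bump cert=/etc/squid/proxy.pem
# https_port is only for transparently intercepted SSL connections:
https_port 3129 cert=/etc/squid/proxy.pem
```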

Regards
Henrik



Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-18 Thread Henrik Nordstrom
Wed 2010-02-17 at 21:40 -0800, Tory M Blue wrote:

> And sorry "sleeping" was just my way of citing the box shows no load,
> almost no IO 4-5 when I'm hitting it hard. I do not see this issue
> with lesser threads, it's only when I turn up the juice. But with
> turning up the connections per second I would expect to see some type
> of load and I see none.

Anything in /var/log/messages?

The above problem description is almost an exact match for the Linux
iptables connection-tracking table limit being hit.
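A quick way to check for that (sysctl names vary by kernel version, so the live commands are shown as comments and the log scan is demonstrated on a sample line, since real output is machine-specific):

```shell
# On the proxy/router you would inspect the limits, e.g.:
#   sysctl net.netfilter.nf_conntrack_max net.netfilter.nf_conntrack_count
# When the table overflows, the kernel logs a telltale message; scan for it:
sample='Feb 18 10:00:01 gw kernel: nf_conntrack: table full, dropping packet.'
printf '%s\n' "$sample" | grep -c 'table full, dropping packet'
```

If the count is non-zero in /var/log/messages, raising nf_conntrack_max (or reducing timeouts) is the usual fix.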

Regards
Henrik



Re: [squid-users] Re: SSLBump, help to configure for 3.1.0.16

2010-02-18 Thread Matus UHLAR - fantomas
> On 02/16/2010 12:54 PM, Andres Salazar wrote:
> > I am still having issues with SSLBump... apparently I am now getting
> > this error when I visit an https site with my browser explicitly
> > configured to use the https_port.
> > 
> > 2010/02/16 14:31:14| clientNegotiateSSL: Error negotiating SSL
> > connection on FD 8: error:1407609B:SSL
> > routines:SSL23_GET_CLIENT_HELLO:https proxy request (1/-1)

On 17.02.10 22:40, Alex Rousskov wrote:
> IIRC, SSL bumping at http_port is for dealing with HTTP CONNECT
> requests sent by the browser directly to the proxy while https_port is
> for bumping transparently intercepted SSL sessions that the browser
> tries to establish with the origin server. Your "browser explicitly
> configured to use the https_port" description does not fit either of
> these use cases.

I think it's more a case of browsers not supporting proxying via https.
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Honk if you love peace and quiet. 


Re: [squid-users] SSLBump, help to configure for 3.1.0.16

2010-02-18 Thread Matus UHLAR - fantomas
> On Tue, Feb 16, 2010 at 7:17 AM, Matus UHLAR - fantomas
>  wrote:
> > Are you aware of all security concerns when intercepting HTTPS connections?
> >
> > ...I just wonder when will first proactive admin (or someone from his 
> > managers) sent
> > to prison because of breaking into users connections.

On 16.02.10 09:40, K K wrote:
> Laws vary by country.  At least in the US, SSL-Intercepting admins are
> much more likely to face civil liability than any sort of criminal
> charge.  So no prison, just bankruptcy.

It highly depends on what the admin will do with the data, and whether and
what data will leak out.

> With the requirement to load a public key on the machine being
> intercepted, generally this is only deployed in situations where the
> owner of the proxy also already "owns" the user machine.

I would still like to warn all admins about the security breach involved in
using sslbump, and the legal and ethical risks of doing so.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
I wonder how much deeper the ocean would be without sponges.