Thanks Joseph, I'll give it a try if this behavior of squid3 stable1
begins to affect the users' navigation.
David JP
Joseph Piché wrote:
As internet navigation is not affected (users don't even notice a glitch)
I'll probably wait for ubuntu to catch up with squid developers. I'll let
you know
I have done the following:
1. created a key on the KDC with the following command:
ktpass -princ HTTP/[EMAIL PROTECTED] -pass password -mapuser squidtest -out c:\temp\squidtest.HTTP.keytab
2. Setup the /etc/krb5.conf for our domain and realm.
3. I then copied the key to the linux box, set the pe
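Before pointing Squid at the copied keytab, it can be sanity-checked on the Linux box; a minimal sketch (the keytab path and the principal name are assumptions, based on the ktpass command above):

```shell
# list the principals stored in the copied keytab
klist -kt /etc/squid/squidtest.HTTP.keytab

# try to obtain a ticket with it (requires a working krb5.conf and a reachable KDC)
kinit -kt /etc/squid/squidtest.HTTP.keytab HTTP/proxy.example.com
klist
```

If kinit succeeds here, keytab and realm configuration are good and any remaining problem is on the Squid side.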
Hi, all,
Recently, we used Squid redirectors to solve an application problem.
Our redirectors are checking incoming requests against a database
table to see if this IP has already accessed Squid -- redirect only if
the IP is not in the database.
We now have the concern that it may cause problems when applyi
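The redirector described above could be sketched roughly like this (a simplified illustration, not the poster's actual code: the landing URL is a placeholder and an in-memory set stands in for the database table):

```python
import sys

# Hypothetical stand-in for the database table described above:
# the set of client IPs that have already been redirected once.
seen_ips = set()

def redirect(url, client_ip, landing="http://landing.example.com/welcome"):
    """Return the URL Squid should use: first-time IPs go to a landing
    page, every later request from that IP passes through unchanged."""
    if client_ip in seen_ips:
        return url           # already seen: no rewrite
    seen_ips.add(client_ip)  # record the visit (a real setup writes to the DB)
    return landing

def main():
    # Classic redirector protocol: Squid writes one request per line,
    # "URL client_ip/fqdn ident method", and reads one URL (or blank
    # line) back per request.
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        url = parts[0]
        client_ip = parts[1].split("/")[0] if len(parts) > 1 else ""
        sys.stdout.write(redirect(url, client_ip) + "\n")
        sys.stdout.flush()

# When deployed under Squid, call main() so requests stream over stdin.
```

The flush after every answer matters: Squid blocks the request until the helper replies.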
Hi Brie,
I have just gone through a similar scenario myself. So I might be able to help.
> I'm a longtime Squid user and I want to edit the error messages that
> Squid provides. I've located the files (ERR_ACCESS_DENIED,
> ERR_CACHE_ACCESS_DENIED, etc) and modified them but they are not
> display
On Fri, 2008-07-11 at 17:18 -0400, Mark Stoughton wrote:
> Hey there!
>
> I'm having some troubles with the storeurl_rewrite functionality in Squid.
>
> Basic problem:
> I am successfully able to run the external rewrite program, and a valid value
> is returned to Squid (as evidenced in cache.l
I used to have the following config in squid.conf:
auth_param ntlm program /usr/local/squid/ntlm/libexec/fakeauth_auth /usr/local/squid/ntlm/etc/passwd
This would allow me to enter any creds and "authenticate" with the
squid proxy.
Using 3.0.HEAD, I see the following when a client attempt
Hey there!
I'm having some troubles with the storeurl_rewrite functionality in Squid.
Basic problem:
I am successfully able to run the external rewrite program, and a valid value
is returned to Squid (as evidenced in cache.log), but then Squid just hangs,
and never does any cache lookups or fe
On Fri, 2008-07-11 at 15:30 -0400, Brie Gordon wrote:
> Hello!
>
> I'm a longtime Squid user and I want to edit the error messages that
> Squid provides. I've located the files (ERR_ACCESS_DENIED,
> ERR_CACHE_ACCESS_DENIED, etc) and modified them but they are not
> displayed as I thought.
Common
On Fri, 2008-07-11 at 14:32 -0400, Dean Durant wrote:
> While using Firefox, I get: ERROR
> The requested URL could not be retrieved
>
> While trying to retrieve the URL:
> http://www.recycleelectronics.com/quick_quote.html
>
>
> The following error was encountered:
> Connection to 69.80.
Hi, all,
Recently, we successfully used Squid redirectors to solve an
application problem. Our redirectors are checking incoming requests
against a database table to see if this IP has already accessed
Squid -- redirect only if the IP is not in the database.
We now have the concern that it may cause problems
I read about deny_info and I understand now that none of my images
were being displayed (as should be expected).
The next step is to configure Apache to serve these pages? I am not
sure how to proceed here.
Thank you.
Regards,
Brie Gordon
http://granite.sru.edu/~bag6849/index.html
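For reference, the deny_info arrangement being discussed might look like this in squid.conf (the ACL name and the Apache-hosted URL are assumptions, not the poster's actual values):

```conf
acl blocked_sites dstdomain .example-blocked.com
http_access deny blocked_sites
# Instead of Squid's built-in ERR_ACCESS_DENIED template, redirect the
# client to a page served by Apache. Pages served this way can reference
# images normally, which Squid-served error templates cannot.
deny_info http://www.example.com/blocked.html blocked_sites
```

The deny_info URL is handed out whenever the named ACL is the one that caused the denial.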
Hello!
I'm a longtime Squid user and I want to edit the error messages that
Squid provides. I've located the files (ERR_ACCESS_DENIED,
ERR_CACHE_ACCESS_DENIED, etc) and modified them but they are not
displayed as I thought. For example, Squid appends a line about itself
and the cache manager and d
Henrik Nordstrom wrote:
> If you send the following items in private email I will try look into
> it.
>
> - Your entire squid.conf
> - access.log with "log_mime_hdrs on" showing two consecutive squidclient
> requests for the same url.
>
> Regards
> H
Heinrich Harrer wrote:
On Tue, Jul 8, 2008 at 3:36 PM, Chris Robertson <[EMAIL PROTECTED]> wrote:
[cut]
Just use delay pools, and set the initial bucket size to the max object size
you don't want to limit. This will have the added benefit of preventing
someone from circumventing your reply_b
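That suggestion could be sketched like this in squid.conf (the rates and bucket size are placeholders; the initial bucket level is set separately as a percentage of the maximum):

```conf
delay_pools 1
delay_class 1 1
delay_access 1 allow all
# 65536/10485760: refill at 64 KB/s, bucket maximum 10 MB.
# Objects smaller than the bucket complete at full speed; anything
# larger drops to the restore rate once the bucket drains.
delay_parameters 1 65536/10485760
delay_initial_bucket_level 100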
While using Firefox, I get: ERROR
The requested URL could not be retrieved
While trying to retrieve the URL:
http://www.recycleelectronics.com/quick_quote.html
The following error was encountered:
Connection to 69.80.208.156 Failed
The system returned:
(110) Connection timed
the easiest is to limit that to the interface facing your clients.
Another option is to add ACCEPT rules before that, accepting any traffic
you do not want to intercept, leaving those packets as-is..
On Fri, 2008-07-11 at 13:39 -0400, Brodsky, Jared S. wrote:
> My iptables are configured like thi
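Concretely, the two options above might look like this (the interface name and the server address are assumptions for illustration):

```shell
# Option 1: only intercept traffic arriving on the LAN-facing interface,
# so connections from the outside world to the public servers are untouched.
/sbin/iptables -t tproxy -A PREROUTING -i eth1 -p tcp --dport 80 -j TPROXY --on-port 81

# Option 2: exempt the public servers with ACCEPT rules inserted before
# the TPROXY rule; matching packets pass through unmodified.
/sbin/iptables -t tproxy -I PREROUTING -d 192.0.2.10 -p tcp --dport 80 -j ACCEPT
```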
On Sat, 2008-07-12 at 03:26 +1200, Amos Jeffries wrote:
> I think you got it right when you noticed the storeExpireNow. It's just
> a matter of tracking down which of the several pathways to it is
> happening. I just hope somebody with more store experience can jump in
> and help soon.
If
My iptables are configured like this.
/sbin/iptables -t tproxy -A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 81
I had a feeling I needed to address something with my iptables, however
I was not 100% sure how to configure that.
Jared
-Original Message-
From: Henrik Nordstrom [
On Fri, 2008-07-11 at 12:50 -0400, Dean Durant wrote:
> Hello, I can see in my cache.log that sites are being received by squid,
> but all I get in the browser is: The requested URL could not be retrieved
What does the full error message say?
If using MSIE then disable "Show friendly error message
On Fri, 2008-07-11 at 10:54 -0400, Brodsky, Jared S. wrote:
> I just rolled out my Squid box last night w/ Transparent proxying on my
> network and everything is working great. However I have a few servers
> (webmail, bug tracking) that need to be accessible to the outside world,
> however every t
Henrik Nordstrom wrote:
On Fri, 2008-07-11 at 07:49 -0700, John Doe wrote:
I don't use allow-miss.
But I do have:
header_access Cache-Control deny all
header_replace Cache-Control max-age=864000
I will try without it...
That explains the loops in sibling setup.
But it does not e
On Fri, 2008-07-11 at 07:49 -0700, John Doe wrote:
> I don't use allow-miss.
> But I do have:
> header_access Cache-Control deny all
> header_replace Cache-Control max-age=864000
> I will try without it...
That explains the loops in sibling setup.
But it does not explain why your cache_peer
Hello, I can see in my cache.log that sites are being received by squid,
but all I get in the browser is: The requested URL could not be retrieved
Where can I start troubleshooting? Does anyone have any ideas?
Thanks,
Dean Durant
Hi Folks,
Can anyone tell me: if I am using COSS for my cache, will the --with-large-files
option allow me to set the max-size to 1048576 (1GB)?
I have just realised the max object size was 1GB but coss was only allowing 1MB
files to be stored in the cache, causing me to only get around 1% disk hit
rat
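For what it's worth, max-size on a cache_dir line is given in bytes, so 1048576 is 1 MB, not 1 GB; 1 GB would be 1073741824. A sketch (path and sizes are placeholders), with the caveat that COSS itself caps the largest storable object at roughly its stripe/membuf size, which is the likely source of the observed 1 MB ceiling regardless of max-size:

```conf
# max-size is in bytes: 1073741824 = 1 GB
cache_dir coss /cache/coss 10000 max-size=1073741824 block-size=8192
```

Very large objects are generally better directed at an aufs/ufs cache_dir than at COSS.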
Angelo Höngens wrote:
Looks like the object is expiring, because I see the storeExpireNow
command. But I don't get exactly why it's expiring.. Well, I see
something in the log, but I do not understand what it means:
2008/07/11 08:41:48| FRESH: age 3600 < min 216000
2008/07/11 08:41:48| Staleness = -1
I just rolled out my Squid box last night w/ Transparent proxying on my
network and everything is working great. However I have a few servers
(webmail, bug tracking) that need to be accessible to the outside world,
however every time someone attempts to access it, they get the Squid
access denied
> > Works fine for a while... until the digests are exchanged.
> > As expected, my logs are full of forwarding loops detected.
>
> Hmm... you SHOULD NOT see loops unless you are using the allow-miss
> cache_peer option, as by default Squid adds a "Cache-Control:
> only-if-cached" control to reques
> > still using my 4 siblings in proxy-only.
> > Works fine for a while... until the digests are exchanged.
> > As expected, my logs are full of forwarding loops detected.
> >
> > The problem is, since the siblings are in 'proxy-only', they do not cache
> > the
> looped objects and constantly as
On Fri, 2008-07-11 at 12:58 +0200, Raphael Maseko wrote:
> From what I understand, if you use WCCP, you do not exactly re-write the
> packet destination address but you use your router to give your packet a
> 'coat' or encapsulation without modifying the packet inside the
> encapsulation.
Yes.
On Fri, 2008-07-11 at 03:57 -0700, John Doe wrote:
> Works fine for a while... until the digests are exchanged.
> As expected, my logs are full of forwarding loops detected.
Hmm... you SHOULD NOT see loops unless you are using the allow-miss
cache_peer option, as by default Squid adds a "Cache-Con
Siu-kin Lam wrote:
Dear all
Any experience using Squid for caching in an ISP environment?
thanks
SK
I'm sure there are much larger ISPs out there that have been using it much
longer; just passing along our info.
We're a small ISP serving around 10k dialup, DSL, cable modem and MAN subs
vi
Joseph Piché wrote:
Oh for pete's sake. Never, never, never give permanent root privileges like
that to Squid. It undermines the whole idea of security on that box.
Make sure the default user of squid is assigned, with a proper service group
and that group or user has access to the resources squ
> Oh for pete's sake. Never, never, never give permanent root privileges like
> that to Squid. It undermines the whole idea of security on that box.
>
> Make sure the default user of squid is assigned, with a proper service group
> and that group or user has access to the resources squid needs to r
I am using 3.0.STABLE7, and if there were any errors or warnings in the logs
I would certainly send them for your reference. But unfortunately there were
no such messages; the only thing was that the url_redirector function didn't
work. Only after I restarted Squid did it get back to work.
That's why I thought,
Hi!
On Thursday 10 July 2008, Joseph Piché wrote:
> >> I have a setup with Squid 3.0 stable 7 and DansGuardian 2.9.9.4. I
> >> have been trying to set up authentication using ntlm_auth connecting
> >> to Active Directory. Everything works fine except I get prompted for a
> >> username and password
Looks like the object is expiring, because I see the storeExpireNow
command. But I don't get exactly why it's expiring.. Well, I see
something in the log, but I do not understand what it means:
2008/07/11 08:41:48| FRESH: age 3600 < min 216000
2008/07/11 08:41:48| Staleness = -1
2008/07/11 08:4
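Those log lines come from Squid's refresh check; very roughly (a simplified sketch, not Squid's full refreshStaleness() logic): an object younger than the refresh_pattern minimum is FRESH, one older than the maximum is STALE, and in between the age is compared against the lm-factor fraction of the time since the object was last modified.

```python
def is_fresh(age, min_age, max_age, lm_age=None, lm_factor=0.2):
    """Simplified freshness check in the spirit of Squid's refresh rules.

    age     -- object age in seconds
    min_age -- refresh_pattern minimum (seconds)
    max_age -- refresh_pattern maximum (seconds)
    lm_age  -- seconds between the object's Last-Modified and its arrival
    """
    if age < min_age:        # matches the log line "FRESH: age 3600 < min 216000"
        return True
    if age > max_age:
        return False
    if lm_age is not None:   # heuristic: fresh while age < lm_factor * lm_age
        return age < lm_factor * lm_age
    return False
```

In the log above, 3600 < 216000, so the object is reported FRESH by the min rule before the staleness heuristic is even consulted.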
Shaine wrote:
But I am confused about this option. Can you please direct me with an example?
Shaine
acl yes dstdomain .site1.com
acl no dstdomain .site2.com
url_rewrite_access allow yes
url_rewrite_access deny no
Amos Jeffries-2 wrote:
Shaine wrote:
Hi Henrik,
First of all i would like to thank y
Shaine wrote:
Hi
A load of IM requests goes through the Squid proxy, and most IM
functionality is handled by a url_redirector program. The redirector program
functions properly. But after some time, say after 8 hours or so, the url
redirector program stops working.
In my squid.conf
url_rewrite
John Doe wrote:
Hi,
still using my 4 siblings in proxy-only.
Works fine for a while... until the digests are exchanged.
As expected, my logs are full of forwarding loops detected.
2008/07/11 10:48:32| WARNING: Forwarding loop detected for:
Client: 192.168.17.11 http_port: 192.168.17.11:8000
Angelo Hongens wrote:
Amos Jeffries wrote:
Angelo Hongens wrote:
Sorry, sent this mail directly to Hendrik.. Here it is to the list. I'm
still pulling my hair out :(
Henrik Nordstrom wrote:
On Wed, 2008-07-09 at 14:32 +0200, Angelo Höngens wrote:
Is there any way I can force caching if the
Hi Peter,
From what I understand, if you use WCCP, you do not exactly re-write the
packet destination address but you use your router to give your packet a
'coat' or encapsulation without modifying the packet inside the
encapsulation.
The machine to which the encapsulated packet is directed rem
Hi,
still using my 4 siblings in proxy-only.
Works fine for a while... until the digests are exchanged.
As expected, my logs are full of forwarding loops detected.
2008/07/11 10:48:32| WARNING: Forwarding loop detected for:
Client: 192.168.17.11 http_port: 192.168.17.11:8000
GET http://192.
Hi
The ACL for url_redirector is done, and it is now also functioning properly.
But I want to ask something.
For instance, www.google.com I have not defined in the url_redirector ACL.
When we access Google, can't we forward the request directly to the web
without going through the url_redirect program?
Thank you
Shaine.
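To the question above: url_rewrite_access controls which requests are handed to the redirector at all, so domains denied there bypass the helper completely. A sketch (the ACL name is an assumption):

```conf
acl no_rewrite dstdomain .google.com
# Requests matching no_rewrite skip the redirector entirely and go
# straight to the web; everything else still passes through it.
url_rewrite_access deny no_rewrite
url_rewrite_access allow all
```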
I have many Squid servers. In the past, they ran behind a layer-7 switch.
After switching to a layer-4 switch, the servers became unstable.
Does anyone run Squid servers with a layer-4 switch in a large environment?
Thanks
--- On Fri, 7/11/08, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> From:
> > I already have 'log_fqdn off'.
> > What do I miss to prevent all dns things?
>
> As you did, plus nsswitch settings to only use files, not dns.
Hum, other applications need dns...
So I recompiled with internal dns and will just keep 'log_fqdn off'.
> > Any GET or HEAD request (even with no-c
I tried to observe the responses for 2 different requests.
This one, for www.google.com, appears like this:
---reading head part is HTTP/1.0 200 OK
Cache-Control: private, max-age=0
Date: Fri, 11 Jul 2008 06:40:53 GMT
Expires: -1
Content-Type: text/html; charset=UTF-8
Set-Cookie:
PREF
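As an illustration of why that Google response is hard for a proxy to cache: a rough sketch of how a shared cache might read those headers (only a tiny subset of the real HTTP caching rules, not Squid's actual parser):

```python
def shared_cache_ttl(headers):
    """Return the seconds a shared cache may keep a response (0 = don't cache).

    Considers only the Cache-Control private/no-store/max-age directives."""
    cc = headers.get("Cache-Control", "")
    directives = [d.strip() for d in cc.lower().split(",") if d.strip()]
    if "private" in directives or "no-store" in directives:
        return 0  # a shared cache must not store the response
    for d in directives:
        if d.startswith("max-age="):
            try:
                return max(0, int(d.split("=", 1)[1]))
            except ValueError:
                return 0
    return 0

# The Google response above sends "Cache-Control: private, max-age=0"
# (plus "Expires: -1"), so a shared cache gets a TTL of 0.
```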
Hi,
I have a question:
Squid 2.7stable2 as reverse/application proxy.
Is it possible to use mswin_check_ad_group in an environment with more than one DC?
Let's put it better: the environment is multidomain; the DCs are in the
same forest but in different domains. Squid runs under the domain "Test".
The chec