* Ralf Hildebrandt [EMAIL PROTECTED]:
Try also disabling TCP timestamps. If it still doesn't work, then most
likely they have blacklisted your proxy ip..
To no avail. Doesn't work.
I sent mail there. They have no contact for technical issues, and
WHOIS data is protected by a privacy protection service.
OK, really at a loss now. I got rid of this problem by refining a few
things, but it is still not working, and there's no real evidence of why
not. Although maybe
== log.smbd ==
[2008/06/26 21:28:35, 3] printing/printing.c:start_background_queue(1397)
start_background_queue: Starting background LPQ
Yes, but only winbindd_privileged
Henrik Nordstrom-5 wrote:
On mån, 2008-06-23 at 15:31 -0700, afstcklnd wrote:
Hi,
OK, I have built a new Squid 2.7.STABLE2 version and it's up and running.
wbinfo reports authentication OK, but I get the following when the users
try and
Hi,
I have set range_offset_limit to 100 MB and maximum_object_size to
100 MB, then made an HTTP request to fetch a file (9 MB) using PHP.
the code:
test.php (client):
<?php
$fp = fsockopen("file.test.com", 8080, $errno, $errstr, 30);
if (!$fp) {
    echo "$errstr ($errno)\n";
} else {
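For reference, here is a minimal sketch (in Python, for brevity) of the kind of proxied GET such a test script sends; file.test.com is the hostname from the snippet, while /big.iso and the byte range are hypothetical:

```python
def build_range_request(host, path, first_byte, last_byte, proxy=True):
    """Build a raw HTTP/1.1 GET with a Range header.

    When talking to a proxy, the request line carries the absolute URL
    rather than just the path.
    """
    target = ("http://%s%s" % (host, path)) if proxy else path
    lines = [
        "GET %s HTTP/1.1" % target,
        "Host: %s" % host,
        "Range: bytes=%d-%d" % (first_byte, last_byte),
        "Connection: close",
        "",  # blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines)
```

Writing that string to a socket connected to the proxy port (8080 above) reproduces the test; whether Squid then prefetches the whole object is what range_offset_limit is supposed to control.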
Indeed, my single server has 3 IP aliases.
Apaches:
127.0.0.1:8081
127.0.0.1:8082
127.0.0.1:8083
Squids:
192.168.17.11:80
192.168.17.12:80
192.168.17.13:80
I added udp_incoming_address 192.168.17.[11|12|13] respectively in each
squid's conf.
I don't need to change udp_outgoing_address, do I?
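A sketch of how one of the three squid.conf files might look (only the addresses come from the post; the cache_peer line is an assumption about how the local Apaches are wired in, and udp_outgoing_address is deliberately left at its default, as Henrik advises later in the thread):

```
# squid1.conf -- squid2/squid3 are analogous with .12/.13 and 8082/8083
http_port 192.168.17.11:80
udp_incoming_address 192.168.17.11
tcp_outgoing_address 192.168.17.11

# hypothetical: local Apache instance as origin-server parent
cache_peer 127.0.0.1 parent 8081 0 no-query originserver
```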
* Matus UHLAR - fantomas [EMAIL PROTECTED]:
I sent mail there. They have no contact for technical issues, and
WHOIS data is protected by a privacy protection service. Complete
idiocy.
I wonder, shouldn't they be listed in the rfc-ignorant and openwhois.org
lists? (They are not.)
Maybe :)
I notice that when I set quick_abort_pct to -1 KB, the file is cached, BUT
range_offset_limit does not work: when the requested range exceeds
range_offset_limit, the file is still prefetched and cached.
I am very confused.
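For what it's worth, the two directives control different things; a sketch of the relevant squid.conf fragment, with values from the post and comments paraphrasing the squid.conf documentation:

```
# Converts a Range request into a full-object fetch when the requested
# range starts within this many bytes of the beginning of the object,
# so the whole object can be cached:
range_offset_limit 100 MB

# The quick_abort_* directives decide whether an already-started transfer
# is finished after the client aborts. Note that quick_abort_min (not
# quick_abort_pct, which takes a percentage) is the one that takes a KB
# value; -1 KB means "always finish the transfer":
quick_abort_min -1 KB
```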
2008/6/27, WestWind [EMAIL PROTECTED]:
Hi,
I have set range_offset_limit to 100 MB, and
Hi Friends,
Basically, we can bypass the proxy for local addresses via web browsers;
this is a built-in function of certain web browsers.
Similarly, can we bypass some web requests that are locally hosted (in the
same network) via squid-cache? I think we can do that with some ACL
method,
Hello,
I'm trying to force Urchin to understand Squid combined log files. I
created custom logformat that should match typical Apache combined log
perfectly:
logformat combined %>a %ui %un [%tl] "%rm %rp HTTP/%rv" %Hs %<st
"%{Referer}>h" "%{User-Agent}>h"
At the moment I'm trying to make AWStats read
__
Squid Proxy Cache Security Update Advisory SQUID-2008:1
__
Advisory ID: SQUID-2008:1
Date: June 22, 2008
Summary:
The Squid HTTP Proxy team is pleased to announce the
availability of the Squid-3.0.STABLE7 release!
This release adds many documentation updates, and fixes several bugs found
in the previous release.
One of these bugs was a regression issue from 2.6 which allowed Remote
Denial of Service attacks
Also, even though I have the following:
2008/06/27 10:39:23| Local cache digest enabled; rebuild/rewrite every
600/3600
The digests don't seem to be rebuilt every 10 minutes (or is it silent?).
And they seem to exchange digests after the ICP_QUERY...
In fact, after 1 hour, I got:
squid1:
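The 600/3600 intervals in that log line correspond to the digest tuning directives; a sketch, with directive names from squid.conf and values matching the log line:

```
# rebuild the local cache digest in memory every 10 minutes (600 s)
digest_rebuild_period 10 minutes
# write the digest out to the store every hour (3600 s)
digest_rewrite_period 1 hour
```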
On 27.06.08 03:48, Shaine wrote:
Basically, we can bypass the proxy for local addresses via web browsers;
this is a built-in function of certain web browsers.
Similarly, can we bypass some web requests that are locally hosted (in the
same network) via squid-cache?
No, because your
http://www.squid-cache.org/bugs/show_bug.cgi?id=2365
didn't get applied to 3.0.STABLE7? I checked the last 3.0 snapshot
(20080627) and it's not even in the daily snapshot.
--
Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br
Is there any reason why bug #2365
http://www.squid-cache.org/bugs/show_bug.cgi?id=2365
didn't get applied to 3.0.STABLE7? I checked the last 3.0 snapshot
(20080627) and it's not even in the daily snapshot.
The patch came in 48 hours after I'd rolled and published the release.
Formal announcement is only
Unfortunately the output of emulate_httpd_log does not provide enough
information (it lacks the user agent).
This is why I'm struggling to get logformat working.
It seems that the timestamp output is different with emulate_httpd_log. How
do I force squid to log the time in this way without emulating httpd logs?
snip
However, this function is also built into some (many) browsers.
Or you can use a proxy autoconfig (PAC) script served up by an Apache
webserver!
It just needs JavaScript enabled on the clients; it solved all my issues
with 2 internal networks, many customer networks, and a plethora of
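A PAC script is just a JavaScript function named FindProxyForURL. A minimal sketch (proxy.example.com and the 192.168. test are hypothetical; real PAC files usually use the isInNet()/dnsDomainIs() helpers, but plain string checks keep this testable outside a browser):

```javascript
function FindProxyForURL(url, host) {
  // Go direct for localhost, local 192.168.* addresses,
  // and unqualified intranet hostnames (no dot in the name).
  if (host === "localhost" ||
      host.indexOf("192.168.") === 0 ||
      host.indexOf(".") === -1) {
    return "DIRECT";
  }
  // Everything else goes through the cache.
  return "PROXY proxy.example.com:8080";
}
```

Serve it from Apache with the MIME type application/x-ns-proxy-autoconfig and point the browsers' automatic proxy configuration URL at it.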
OK, I'll reply to myself :]
The documentation in squid.conf says:
tl      Local time. Optional strftime format argument, default
        %d/%b/%Y:%H:%M:%S %z
It seems that this is not true. The default strftime argument is:
%d/%b/%Y:%H:%M:%S
A quick fix is just to modify the default combined log format
definition:
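A sketch of such a modified definition, assuming the stock combined format from squid.conf and simply spelling out the strftime argument (including %z) that the documentation claims is the default; this works only if your Squid's %tl honours the braced strftime argument:

```
logformat combined %>a %ui %un [%{%d/%b/%Y:%H:%M:%S %z}tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
```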
For instance, if squid runs on port 8080, when a specific URL comes into
squid via port 8080, can't we redirect it to the web server that the URL is
looking for, before squid processes it on port 8080?
Can't we find a solution, from squid itself, to let a proxy request bypass
it?
Angierfw
My bad...
I did not realize that the ICP_QUERY URL should have been the Apache IPs
and not the Squids' IPs...
I was accessing my squids directly instead of using them as transparent
proxies. Once set up as proxies in my browser, I get:
ICP_QUERY http://127.0.0.1/img/apache_header.gif
which will
On fre, 2008-06-27 at 02:38 -0700, afstcklnd wrote:
OK, really at a loss now. I got rid of this problem by refining a few
things, but it is still not working, and there's no real evidence of why
not. Although maybe
== log.smbd ==
[2008/06/26 21:28:35, 3]
I am going through a similar nightmare in our environment. We
currently use NTLM auth, and since we have over 6000 Internet users
this isn't very efficient. I can't get Kerberos to work. I used the
./squid_kerb_auth_test program to generate the blob, and it is over
5000 characters long. The
Brian,
the read buffer in squid_kerb_auth is 6400 bytes, which I think should be
increased to 8192, the value used in squid for writing. The ticket is
usually only that big for users who are members of hundreds of Windows
groups; I have never before seen one over 4k.
Can you try to increase
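As a back-of-the-envelope check on those sizes: the negotiate blob is base64, which emits 4 output characters for every 3 raw bytes, so a ticket does not need to reach 6400 raw bytes to overflow a 6400-byte read buffer. A quick sketch:

```python
import base64
import math


def b64_len(raw_bytes):
    """Characters needed to base64-encode raw_bytes bytes (4 per 3, padded up)."""
    return 4 * math.ceil(raw_bytes / 3)


# A raw token of only 4800 bytes already encodes to a 6400-character blob,
# filling a 6400-byte buffer before the "Negotiate " prefix is even counted.
```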
On lör, 2008-06-28 at 01:02 +1200, Amos Jeffries wrote:
Maybe you do. Definitely tcp_outgoing_address might be a good idea too.
It helps sanity.
But only udp_incoming_address, not udp_outgoing_address. See the
description of udp_outgoing_address for why..
On fre, 2008-06-27 at 03:25 -0700, John Doe wrote:
squid1:
1214557685.718 2 192.168.17.11 TCP_MISS/200 2329 GET
http://192.168.17.11/ - FIRST_UP_PARENT/127.0.0.1 text/html
###
Browse squid2...
On fre, 2008-06-27 at 08:06 -0700, John Doe wrote:
I still have the GET internal://pc-03/squid-internal-periodic/store_digest
problem though..
What problem? It's cache digest exchanges between the Squids..
Regards
Henrik
Hi!
When I try to start up squid, I get this error:
WARNING: digestauthenticator #31 (FD 38) exited
WARNING: digestauthenticator #30 (FD 37) exited
WARNING: digestauthenticator #29 (FD 36) exited
WARNING: digestauthenticator #28 (FD 35) exited
WARNING:
On fre, 2008-06-27 at 16:44 -0430, Edward Ortega wrote:
WARNING: digestauthenticator #31 (FD 38) exited
WARNING: digestauthenticator #30 (FD 37) exited
WARNING: digestauthenticator #29 (FD 36) exited
WARNING: digestauthenticator #28 (FD 35) exited
On fre, 2008-06-27 at 18:44 +0800, WestWind wrote:
I notice that when I set quick_abort_pct to -1 KB, the file is cached, BUT
range_offset_limit does not work: when the requested range exceeds
range_offset_limit, the file is still prefetched and cached.
Is it? The two don't have much to do with each other... in
Shaine wrote:
For instance, if squid runs on port 8080, when a specific URL comes into
squid via port 8080, can't we redirect it to the web server that the URL is
looking for, before squid processes it on port 8080?
Not without receiving it. Hmm, here is a little scenario...
Given two security-sealed