Hi,
I was recently trying to build a Squid box using squid-3.0.PRE6,
but the process stops during the make stage with these error messages:
Making all in src
if g++ -DHAVE_CONFIG_H -I. -I. -I.. -I../include -Werror -Wall
-Wpointer-arith -Wwrite-strings -Wcomments -g -O2 -MT Trie.o -MD -MP
These two helpers can be used together. The difference (I think) between
using wbinfo_group.pl and ldap_group is that the first matches group names
obtained via the wbinfo program (Samba, indirectly), while ldap_group is
based on an LDAP lookup. Then, as Henrik says, you can mix them freely, but
you must take care
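For illustration, both helpers are wired up the same way in squid.conf; a minimal sketch (helper paths, the base DN, the search filter, and the group name are assumptions for the example, not taken from this thread):

```
# Windows/Samba group lookup via winbind
external_acl_type nt_group %LOGIN /usr/local/squid/libexec/wbinfo_group.pl
acl in_ntgroup external nt_group Proxy-Users

# LDAP group lookup (example base DN and filter)
external_acl_type ldap_group %LOGIN /usr/local/squid/libexec/squid_ldap_group \
    -b "dc=example,dc=com" -f "(&(cn=%g)(memberUid=%u))" -h ldap.example.com
acl in_ldapgroup external ldap_group Proxy-Users

http_access allow in_ntgroup
http_access allow in_ldapgroup
```

Each acl line hands the login name plus a group name to its helper; mixing the two is fine as long as both helpers see consistent login names.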
I am coming back with this issue again since it is still present.
This problem is real and easy to reproduce, and it destroys the complete
cache_dir content. The Squid version is 2.6.STABLE14, and it certainly
occurs with all 2.6 versions I have tested so far. This problem is not as
easy to trigger with 2.5
On Wed, 8 Aug 2007 07:12:37 -0300 (BRT)
Michel Santos [EMAIL PROTECTED] wrote:
I am coming back with this issue again since it is still present.
This problem is real and easy to reproduce, and it destroys the complete
cache_dir content. The squid
Michel Santos wrote in the last message:
Since you tell me that *nobody* has this problem, which I certainly cannot
believe ;), it seems you guys are using Linux or Windows. Might this
be related to FreeBSD's softupdates on the file system, which Squid cannot
handle? Should I disable it
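(For reference, softupdates can be toggled per filesystem with tunefs while the filesystem is unmounted; the mount point and device below are examples, not taken from this thread.)

```
# unmount the cache filesystem, disable soft updates, remount
umount /cache
tunefs -n disable /dev/ad0s1e
mount /cache
```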
Hi Everybody,
We have a Squid 2.6.STABLE13 running on an
OpenBSD box along with packet filtering (earlier we used to run it on
FedoraCore4). The machine is a P4 3.4 GHz with 1 GB RAM and running
a cache of 30 GB. The external link speed is 4 Mbps. We use the DNS
server of
I think you should setup -CURRENT FreeBSD boxes to test gjournal[1].
Maybe gjournal can help you out, but you'll only know if you test it on
your own.
gjournal will probably be in the next FreeBSD engineering release,
7.0-RELEASE[2].
Cheers,
m0f0x
[1] http://wiki.freebsd.org/gjournal
Preetish wrote:
Hi Everybody,
We have a Squid 2.6.STABLE13 running on an
OpenBSD box along with packet filtering (earlier we used to run it on
FedoraCore4). The machine is a P4 3.4 GHz with 1 GB RAM and running
a cache of 30 GB. The external link speed is 4 Mbps. We use
m0f0x wrote in the last message:
I think you should setup -CURRENT FreeBSD boxes to test gjournal[1].
Maybe gjournal can help you out, but you'll only know if you test it on
your own.
gjournal will probably be in the next FreeBSD engineering release,
7.0-RELEASE[2].
yep I know, zfs
On Wed, 2007-08-08 at 13:49 +0700, zen wrote:
I was recently trying to build a Squid box using squid-3.0.PRE6,
but the process stops during the make stage with these error messages:
Making all in src
if g++ -DHAVE_CONFIG_H -I. -I. -I.. -I../include -Werror -Wall
-Wpointer-arith
G'day,
My next question!
What are people using as refresh_patterns for normal ISP forward
caching? I'd like to put up a wiki page with a list of useful
refresh patterns, especially if you've managed to enable caching
of content such as streaming http media/flv, google earth, etc.
Basically,
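As a baseline for such a list, the stock squid.conf defaults look like this (a starting sketch, not ISP-tuned; values are min, percent, and max in minutes):

```
refresh_pattern ^ftp:       1440    20%     10080
refresh_pattern ^gopher:    1440     0%      1440
refresh_pattern .              0    20%      4320
```

ISP-tuned lists usually add patterns for image, media, and archive extensions on top of these.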
Hi Tek Bahadur Limbu
Your 4 Mbps connection link seems really, really slow. Maybe as you say,
your ISP could be creating this problem for you in the first place.
I know it must sound funny, but how do we find out whether the link
is actually giving us 4 Mbps? The traceroute for google.com shows
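One crude sanity check is to time a large download and convert bytes per second into megabits; an illustrative sketch (the URL, file size, and timing are made up for the arithmetic):

```
$ wget -O /dev/null http://mirror.example.com/10MB.bin
# 10 MB in 20 s  ->  (10 * 8) Mbit / 20 s = 4 Mbit/s
```

Run it at a quiet time, against a nearby mirror, or the result measures the mirror and the congestion rather than the link.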
Hi Preetish,
From your previous email, I suspect your squid CPU usage is that high
due to url_regex acls (perhaps the acl files contain too many regexes
to evaluate). Try commenting them out, or simply making those files
empty.
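Where the regex files are really just domain lists, the usual cheap alternative is dstdomain, which uses indexed lookups instead of running every request URL through each regex; a sketch (file paths are examples):

```
# expensive: every request URL is scanned by each regex
#acl blocked url_regex -i "/etc/squid/blocked.regex"

# cheap: exact/suffix domain matching
acl blocked dstdomain "/etc/squid/blocked.domains"
http_access deny blocked
```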
HTH,
--
Gonzalo A. Arana
On 8/8/07, Preetish [EMAIL PROTECTED] wrote:
Hi Tek Bahadur Limbu
Your 4 Mbps connection link seems really, really slow. Maybe as you say,
your ISP could be creating this problem for you in the first place.
I know it must sound funny, but how do we find out whether the link
is actually
Hi,
I am trying to use squid as a proxy.
I think I have Squid up and running,
but I keep getting 403s (from my client).
1186598664.052 49 127.0.0.1 TCP_DENIED/403 1442 GET
http://127.0.0.1:8080/CacheLoadServlet? - NONE/- text/html
$ ./squidclient http://www.google.com
HTTP/1.0 403
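A 403 on every request usually means the client's source address never matches an allow rule before the final deny; a minimal sketch of the relevant squid.conf section (the addresses are examples, adjust to your network):

```
acl localhost src 127.0.0.1/32
acl localnet  src 192.168.0.0/16
http_access allow localhost
http_access allow localnet
http_access deny all
```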
I've worked with Henrik on this some already, and we figured out some DNS
issues and fixed connections that were never completing, but I'm at a loss
as to what would cause this:
(output of the squidclient mgr:active_requests for this one request. )
Connection: 0xa1f2cc8
FD 52, read 2832,
Hi,
On Wed, Aug 08, Scott B. Anderson wrote:
The squid server is the lan router and the client default gateway so
any network issues would show up when proxy is off. I'm at a loss.
This is 2.6STABLE_13 on Fedora core 5 kernel 2.6.17-1.2174_FC5. This
became a problem only after switching from
How will going through squid prevent the users from connecting to an
outside proxy in order to avoid being blocked?
Please clarify.
Thank you for responding.
-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 07, 2007 8:18 PM
To: Thomas Raef
Cc:
Hi Everybody,
From your previous email, I suspect your squid CPU usage is that high
due to url_regex acls (perhaps the acl files contain too many regexes
to evaluate). Try commenting them out, or simply making those files
empty.
Gr8 :D. After doing this the CPU utilization has come down
How will going through squid prevent the users from connecting to an
outside proxy in order to avoid being blocked?
Most normal web anonymous-proxy connections are standard HTTP requests;
if these are redirected to the local proxy, they can be processed by its ACLs.
There's no magic bullet for everything.
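For the common cases, the stock CONNECT rules already stop tunnels to arbitrary ports, and known anonymizer sites can be denied by domain list; a sketch (the file path is an example):

```
acl SSL_ports port 443
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports

acl anonproxies dstdomain "/etc/squid/anonproxies.txt"
http_access deny anonproxies
```

Encrypted or non-standard tunnels past these rules are what the "no magic bullet" caveat refers to.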
Hi,
I am trying to use squid as a proxy.
I think I have Squid up and running,
but I keep getting 403s (from my client).
1186598664.052 49 127.0.0.1 TCP_DENIED/403 1442 GET
http://127.0.0.1:8080/CacheLoadServlet? - NONE/- text/html
Please post a copy of your squid.conf (without
Hi,
I have a servlet which returns the content of a video file,
like this:
http://127.0.0.1:8080/videoServlet?id=1
http://127.0.0.1:8080/videoServlet?id=2
I hit that servlet through Squid as the proxy.
I do this to check whether Squid caches the video file:
./squidclient -H Cache-Control:
On 8/8/07, Amos Jeffries [EMAIL PROTECTED] wrote:
Hi,
I have a servlet which returns the content of a video file,
like this:
http://127.0.0.1:8080/videoServlet?id=1
http://127.0.0.1:8080/videoServlet?id=2
I hit that servlet through Squid as the proxy.
I do this to query if squid does
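One thing worth checking for this case: the stock Squid 2.6 squid.conf refuses to cache URLs containing a query string via the lines below, so responses from a videoServlet?id=N URL are never cached until these are removed or relaxed:

```
# default squid.conf: dynamic URLs are not cached
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
```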
On Thu, Aug 09, 2007, Amos Jeffries wrote:
Gr8 :D. After doing this the CPU utilization has come down to around
3% to 4%. Even my number of clients accessing the cache reached 1567
in no time and was still increasing, but like Tek Bahadur said I ran
out of file descriptors :(. So I'll
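Running out of file descriptors is usually addressed by raising the process limit before starting Squid, or by rebuilding with a larger compiled-in maximum; a sketch (the values are examples):

```
# raise the process limit in the startup script, then start squid
ulimit -HSn 8192
squid -D

# or rebuild with a larger compiled-in ceiling
./configure --with-maxfd=8192 ...
```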
Alex Rousskov wrote:
Please post the output of
awk '/checking.for.mallopt/,/result:/' config.log
here the result:
configure:42213: checking for mallopt
configure:42270: gcc -o conftest -Wall -g -O2 -g -pthread conftest.c
-lpthread -lm >&5
/var/tmp//ccQv9b17.o(.text+0x14): In function
Alex Rousskov wrote:
Exactly. And you are using --enable-icap-support (see your ./configure
line above :-)
This is not the reason for the make failure though (I am working on
that), but you may not get ICAP support once you make your Squid...
Alex.
dang it... my mistake...
btw have
On Thu, 2007-08-09 at 09:48 +0700, zen wrote:
awk '/checking.for.mallopt/,/result:/' config.log
here the result:
configure:42213: checking for mallopt
configure:42270: gcc -o conftest -Wall -g -O2 -g -pthread conftest.c
-lpthread -lm >&5
/var/tmp//ccQv9b17.o(.text+0x14): In function
On Thu, 2007-08-09 at 10:55 +0700, zen wrote:
btw have you tried the c-icap support integrated into squid?
If you mean ICAP client support in Squid, then yes, I am using and
working on that feature in Squid3. It works well in my environment and
increasingly better in others.
C-icap is the name of
Alex Rousskov wrote:
Hm... Something does not add up here. ./configure found no mallopt and
should have undefined the HAVE_MALLOPT #define in include/autoconf.h
Do you have the following lines in that file (search for HAVE_MALLOPT)?
/* Define to 1 if you have the `mallopt' function. */
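For reference, in an autoconf-generated header that comment is followed by either a define or a commented-out undef, so the two possible forms to look for are:

```
/* Define to 1 if you have the `mallopt' function. */
#define HAVE_MALLOPT 1

/* or, when configure did not find it: */
/* #undef HAVE_MALLOPT */
```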
Interesting, the --require-membership-of option; I hadn't noticed it at all.
This parameter is useful for checking the global presence of a user
in the domain, but not for matching particular rules for specific
ACLs, such as http_access allow streaming_media STREAM_AD_GROUP
together with the
Yes, wbinfo_group is better for per-ACL group matching, but it isn't
required in my environment.
I haven't found a good free HTTP debugger, although there are a few with
14-day trials. A quick Google search should give you a few options.
I currently just use the three I listed previously, as that