Linda W wrote:
With no processes attaching to squid -- no activity -- no open
network connections -- only squid listening for connections --
why is squid waking up doing a busy-wait so often?
It's the most active process -- even when it is supposedly doing nothing.
I'm running it on SuSE 10.3, squid-beta-3.0-3
Andrew Struiksma wrote:
I have set up a reverse proxy which prompts for a password if the client is not
on our LAN. I am not sure as to the proper setting of auth_param basic
children. I set it to 2 since we will have around 75 users hitting the site
from our LAN but probably fewer than 10 simu
simon benedict wrote:
Dear All,
I have the following setup which has been working fine for a long time:
Redhat 9
squid-2.4.STABLE7-4
I also have Shoreline Firewall on the same squid server.
Now I would appreciate it if someone could advise and help me with:
1) Real-time virus scanning for your proxy server, including scanning of HTTP traffic
I have been using
acl website dstdomain "/etc/website"
to block some websites,
but how do I make an exception for some PCs (clients)?
e.g.:
192.168.1.100-200 <-- my clients' IPs
I want only 192.168.1.100 and 192.168.1.110 to have no blocked
sites (free access),
and I want 192.168.1.101 - 192.168.1.109
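One common way to express that kind of exception in squid.conf is to match the unrestricted clients before the block rule. A minimal sketch, assuming the addresses above; the ACL name "unrestricted" is made up for illustration:

```
# Clients that should bypass the website block (illustrative ACL name)
acl unrestricted src 192.168.1.100 192.168.1.110
acl website dstdomain "/etc/website"

# Order matters: allow the exempt clients first, then apply the block
http_access allow unrestricted
http_access deny website
```

Because squid evaluates http_access rules top to bottom, the two exempt hosts never reach the deny line, while 192.168.1.101-109 fall through to it.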
With no processes attaching to squid -- no activity -- no open
network connections -- only squid listening for connections --
why is squid waking up doing a busy-wait so often?
It's the most active process -- even when it is supposedly doing nothing.
I'm running it on SuSE 10.3, squid-beta-3.0-3
> Hi all:
>
> I have a Squid running on 192.168.1.1 listening on 3128 TCP port. Users
> from 192.168.1.0/24 can browse the Internet without problems thanks to a
> REDIRECT rule in my shorewall config.
>
> But users from different networks (192.168.2.0/24, 192.168.3.0/24,
> etc.) can't browse the I
>
> Yes, but using data=writeback is not tuning, it is risking data. Using that
> on a squid cache dir may require cleaning the cache_dir after each crash;
> otherwise you risk serving invalid data
> --
What does the "data=writeback" option really do?
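For context: data=writeback is an ext3 journalling mode that journals metadata only, not file data, which is why a crash can leave files containing stale blocks. It is chosen per filesystem at mount time, e.g. in /etc/fstab (device and mount point here are hypothetical):

```
# ext3 journalling modes, set at mount time:
#   data=journal   - journal both data and metadata (slowest, safest)
#   data=ordered   - default; data is written out before metadata commits
#   data=writeback - journal metadata only; after a crash file contents
#                    may be stale, hence the cache_dir warning above
/dev/sdb1  /var/spool/squid  ext3  noatime,data=writeback  0 2
```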
Thanks!
--
Rafael Gomes
IT Consultant
Fedora Ambassador
LPI
Thanks for the answers,
Your information about this question helped me to really understand it.
I will write a post about this on my blog.
Thanks!
On Mon, Oct 6, 2008 at 10:45 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> Henrik Nordstrom wrote:
>>
>> On mån, 2008-10-06 at 11:08 +0200, Matus UHLAR -
On mån, 2008-10-06 at 19:07 +0200, Itzcak Pechtalt wrote:
> On Mon, Oct 6, 2008 at 1:05 PM, Henrik Nordstrom
> <[EMAIL PROTECTED]> wrote:
>
> > But it is important you keep the number of objects per cache_dir well
> > below 2^24. Preferably not more than 2^23.
>
> Is there any way to limit number
gms5002 wrote:
Hello,
SNIP
In a nutshell I need something that can handle 'Take all
traffic from xxx.xxx.xxx.xxx, encrypt it, and send it to
yyy.yyy.yyy.yyy:.' Thanks so much for your help!
Stunnel (http://www.stunnel.org/) is a far better choice. Squid is an
HTTP proxy, and not
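A minimal stunnel client-side configuration for that scenario might look like this (the addresses are the placeholders from the question, and the port numbers are illustrative assumptions):

```
; stunnel in client mode: accept plain TCP locally,
; forward it encrypted to the remote endpoint
client = yes

[forward]
; take all traffic arriving on this local address/port...
accept = xxx.xxx.xxx.xxx:8080
; ...encrypt it, and send it on to the remote service
connect = yyy.yyy.yyy.yyy:443
```

The remote side runs a matching stunnel (or any TLS-capable service) that decrypts and passes the traffic to the real destination.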
On Mon, Oct 6, 2008 at 6:07 PM, Christian Tzolov
<[EMAIL PROTECTED]> wrote:
> Hello all,
>
> Is it possible to configure several Squid servers to share a single,
> common cache directory?
No.
What is possible is to have multiple squids, each with its own cache,
coordinating with each other using ICP
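A sketch of that arrangement in squid.conf: each squid keeps its own cache_dir and asks the others as siblings over ICP before fetching from the origin (hostname, ports, and sizes here are illustrative):

```
# On squid-a: own on-disk cache, plus an ICP sibling
cache_dir ufs /var/spool/squid 10000 16 256

# Ask squid-b (HTTP port 3128, ICP port 3130) for objects on a miss.
# proxy-only: serve objects fetched from the sibling without storing
# a second copy locally, so the caches stay complementary.
cache_peer squid-b.example.com sibling 3128 3130 proxy-only
```

The mirror-image lines go on squid-b, pointing back at squid-a.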
I have set up a reverse proxy which prompts for a password if the client is not
on our LAN. I am not sure as to the proper setting of auth_param basic
children. I set it to 2 since we will have around 75 users hitting the site
from our LAN but probably fewer than 10 simultaneous users from the ou
On Mon, Oct 06, 2008 at 01:07:49PM -0700, Gordon Mohr wrote:
> I can't find mention of this '-I' option elsewhere. (It's not in my
> 2.6.STABLE14-based man page.)
>
> Is there a writeup on this option anywhere?
>
> Did it only appear in later versions?
Right, sorry, it appeared in 2.7:
http
>
> In my case all of the data being sent out was small enough and
> repetitive enough to be in the Linux filesystem cache. That's where I
> found the best throughput. I think the typical size of the data items
> were about 8-30MBytes. It was a regular Linux ext3 filesystem. The
> machine happ
On Mon, Oct 6, 2008 at 4:08 AM, RM <[EMAIL PROTECTED]> wrote:
> On Mon, Oct 6, 2008 at 1:45 AM, Pieter De Wit <[EMAIL PROTECTED]> wrote:
>> Hi JL,
>>
>> Does your server use DNS in its logging? Perhaps it's reverse DNS?
>>
>> If he downloads a big file, does the speed pick up?
>>
>> Cheers,
>>
- Original message
From: Amos Jeffries <[EMAIL PROTECTED]>
To: NBBR <[EMAIL PROTECTED]>
Cc: squid-users@squid-cache.org
Sent: Monday, October 6, 2008 10:50:51
Subject: Re: [squid-users] BUG 740
NBBR wrote:
> I'm having problems with squid (3.0) sending the content-type to my p
I can't find mention of this '-I' option elsewhere. (It's not in my
2.6.STABLE14-based man page.)
Is there a writeup on this option anywhere?
Did it only appear in later versions?
Is there a long-name for the option that would be easier to search for?
I would be interested in seeing your scri
Marcin,
In my case all of the data being sent out was small enough and
repetitive enough to be in the Linux filesystem cache. That's where I
found the best throughput. I think the typical size of the data items
were about 8-30MBytes. It was a regular Linux ext3 filesystem. The
machine happens
Dave Dykstra ([EMAIL PROTECTED]) wrote:
> Meanwhile the '-I' option to squid makes it possible to run multiple
> squids serving the same port on the same machine, so you can make use of
> more CPUs. I've got scripts surrounding squid startups to take
> advantage of that. Let me know if you'
Dear All,
I have the following setup which has been working fine for a long time:
Redhat 9
squid-2.4.STABLE7-4
I also have Shoreline Firewall on the same squid server.
Now I would appreciate it if someone could advise and help me with:
1) Real-time virus scanning for your proxy server, includes scanning of HTTP
traffic
On Mon, Sep 29, 2008 at 4:19 AM, Gordon Mohr <[EMAIL PROTECTED]> wrote:
> Using 2.6.14-1ubuntu2 in a reverse/accelerator setup.
>
> URLs I hope to be cached aren't, even after adjusting passed headers.
>
> For example, I request a URL with Firefox and get the expected MISS. Then
> request the same URL w
On Mon, Oct 6, 2008 at 1:05 PM, Henrik Nordstrom
<[EMAIL PROTECTED]> wrote:
> But it is important you keep the number of objects per cache_dir well
> below 2^24. Preferably not more than 2^23.
Is there any way to limit number of objects in cache_dir ?
Thanks
Itzcak
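There is no direct object-count directive, but the count can be bounded indirectly: objects per dir is roughly cache_dir size divided by mean object size, so shrinking the dir or skipping tiny objects keeps the total down. A sketch with illustrative numbers:

```
# ~64 GB split across two dirs. With a ~16 KB mean object size,
# each dir holds about 32 GB / 16 KB = 2^21 objects,
# comfortably below the 2^24 ceiling mentioned above.
cache_dir ufs /cache1 32768 64 256
cache_dir ufs /cache2 32768 64 256

# Optionally refuse to cache very small objects, raising the
# mean object size and lowering the object count further.
minimum_object_size 4 KB
```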
Meanwhile the '-I' option to squid makes it possible to run multiple
squids serving the same port on the same machine, so you can make use of
more CPUs. I've got scripts surrounding squid startups to take
advantage of that. Let me know if you're interested in having them.
Currently I run a couple
Hello all,
Is it possible to configure several Squid servers to share a single,
common cache directory?
Cheers,
Chris
Hello all,
Running Squid 2.6.STABLE21, and get the lovely "Internet Explorer cannot open
the Internet site - Operation aborted" error when connecting to
www.aircanada.com
Get this error when connecting through Squid in IE6 and IE7 but not with IE8
(beta). Firefox is not affected by this of
Hi all:
I have a Squid running on 192.168.1.1 listening on 3128 TCP port. Users
from 192.168.1.0/24 can browse the Internet without problems thanks to a
REDIRECT rule in my shorewall config.
But users from different networks (192.168.2.0/24, 192.168.3.0/24,
etc.) can't browse the Internet. Those
On Monday 06 October 2008 14:12:40 Amos Jeffries wrote:
> Simon Waters wrote:
> >
> > Would you expect Squid to cache the first 3MB if the HTTP 1.1 request
> > stopped early?
>
> Not separately from the rest of the file. You currently still need the
> quick_abort and related settings tuned to always
I have two installations on ESX 3.5 Update 2 currently in testing, one running
on Solaris and the other on Ubuntu, both the 3.0 branch. They are running with
no disk cache however, and pointing at parent proxies. I was concerned about
how our iSCSI SAN would handle the cache as it is recommend
NBBR wrote:
I'm having problems with squid (3.0) sending the content-type to my perl script
using external ACLs. Is this problem the one described in BUG 740?
If it is, will it be resolved in squid 3.0?
3.0 is already restricted to fixes for serious bugs only. You can patch your
own copy though if you want:
http:/
Henrik Nordstrom wrote:
On mån, 2008-10-06 at 11:08 +0200, Matus UHLAR - fantomas wrote:
On 05.10.08 12:31, Rafael Gomes wrote:
I have two SCSI disks. I can set a unique cache_dir and make a RAID 0,
so I will improve the writes, or I can set two cache_dirs, one per disk.
What is better?
Are Ther
Francois Goudal wrote:
Hi,
I'm trying to make a setup with several squid proxies:
All my clients are making their requests to the main proxy, I will call
it proxy_1 here.
Then I have 2 other proxies: proxy_2 and proxy_3 that are never queried
directly by the clients, they are supposed to
Simon Waters wrote:
On Monday 06 October 2008 11:55:41 Amos Jeffries wrote:
Simon Waters wrote:
Seeing issues with Googlebots retrying on large PDF files.
Apache logs a 200 for the HTTP 1.0 requests.
Squid logs an HTTP 1.1 request that looks to have stopped early (3MB out
of 13MB).
This patt
Hi list,
I've installed squid 2.7 on a FreeBSD 7 machine. I need to configure it
for a network with about 400 client machines.
The server has 4 GB RAM and a 60 GB partition for the cache, and only the pf
firewall is configured and running on this server.
I already have a
I'm using squid 2.7 stable 4.
Here's the line I use:
cache_peer comp parent 3128 3130 default login=PASS
Then I run squid using this command:
/usr/local/squid/sbin/squid -N -d 1 -D &
But it doesn't seem to work.
Eric NGUYEN DANG LUAN
-Original message-
From: Henrik Nordstrom [mailto:[
*
This message has been scanned by IMSS NIT-Silchar
Dear ALL SQUID Users,
I have a problem/ query to ask you all. I have a proxy server for student
folk in my network.
It was going fine until the students reported to me that they were able
to log
I'm having problems with squid (3.0) sending the content-type to my perl script
using external ACLs. Is this problem the one described in BUG 740?
If it is, will it be resolved in squid 3.0?
I'm trying to make a MIME-TYPE validation using external ACLs.
Andre Fernando A. Oliveira
Are there any concerns/problems using Squid on VMware ESX server 3.5? We
got about 500 Users, so there shouldn't be that much load on that
machine. Maybe someone tested that and could just report how it works.
Regards
Jens
On mån, 2008-10-06 at 11:08 +0200, Matus UHLAR - fantomas wrote:
> On 05.10.08 12:31, Rafael Gomes wrote:
> > I have two SCSI disks. I can set a unique cache_dir and make a RAID 0,
> > so I will improve the writes, or I can set two cache_dirs, one per disk.
> >
> > What is better?
> >
> > Are There
On Monday 06 October 2008 11:55:41 Amos Jeffries wrote:
> Simon Waters wrote:
> > Seeing issues with Googlebots retrying on large PDF files.
> >
> > Apache logs a 200 for the HTTP 1.0 requests.
> >
> > Squid logs an HTTP 1.1 request that looks to have stopped early (3MB out
> > of 13MB).
> >
> > Th
On mån, 2008-10-06 at 08:49 +0200, Francois Cami wrote:
> I would not run an ext3 filesystem with data=writeback . noatime and
> nodiratime provide a welcome boost by eliminating unneeded writes,
> however writeback is not {powerfailure, system crash}-safe. If you
> value your time (especially the
On Mon, Oct 6, 2008 at 1:45 AM, Pieter De Wit <[EMAIL PROTECTED]> wrote:
> Hi JL,
>
> Does your server use DNS in its logging? Perhaps it's reverse DNS?
>
> If he downloads a big file, does the speed pick up?
>
> Cheers,
>
> Pieter
>
> JL wrote:
>>
>> I have a server setup which provides an ano
Hi,
I'm trying to make a setup with several squid proxies:
All my clients are making their requests to the main proxy, I will call
it proxy_1 here.
Then I have 2 other proxies: proxy_2 and proxy_3 that are never queried
directly by the clients, they are supposed to be used as cache_peer by
On sön, 2008-10-05 at 16:38 +0200, Itzcak Pechtalt wrote:
> When Squid reaches several millions of objects per cache dir, it starts
> to be a very heavy CPU consumer, because every insertion and deletion of
> an object takes a long time.
Mine don't.
> On my Squid, 80-100GB had the CPU consumption effect.
That's a
lmenaria wrote:
Hello everyone,
I have installed SQUID 2.7.4 and it's working fine. Now I need to change the
timezone from GMT to local, because my application is not working with GMT
time. So how can I set local time in the SQUID configuration?
Please let me know.
The default squid log format uses
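The point being made above is that squid's default (native) access.log format records time as a UTC-based UNIX epoch value, so the usual approach is to convert when reading the log rather than changing squid. A minimal sketch, assuming the default native format with the epoch timestamp as the first field (the sample log line is invented):

```python
import datetime

def squid_ts_to_local(epoch_field: str) -> str:
    """Convert an access.log epoch field like '1223280000.123'
    to a local-time string for display."""
    ts = float(epoch_field)
    return datetime.datetime.fromtimestamp(ts).strftime("%d/%b/%Y:%H:%M:%S")

# Example: first field of a native-format access.log line (invented sample)
line = "1223280000.123    120 192.168.1.10 TCP_MISS/200 ..."
print(squid_ts_to_local(line.split()[0]))
```

The same conversion is what log analyzers typically do internally; the output format string is just one common choice.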
On sön, 2008-10-05 at 09:04 +0200, Sommariva Graziano wrote:
> Is it possible to monitor Delay Pools with MRTG?
There is no SNMP MIB definition for the delay pools counters.
But in theory it's possible to collect the data using cachemgr and feed
it into mrtg or rrdtool.. but it's probably about
Simon Waters wrote:
Seeing issues with Googlebots retrying on large PDF files.
Apache logs a 200 for the HTTP 1.0 requests.
Squid logs an HTTP 1.1 request that looks to have stopped early (3MB out of
13MB).
This pattern is repeated with slight variation in the amount of data served to
the G
Rafael Gomes wrote:
Well, any algorithm could help in this case, correct?
If I put in another disk, must I choose the same size, correct?
No, you don't have to. At worst the large one gets more content into it and
faster turnover, with the small one used as a backup when the large one fills.
Amos
On Sat, Oct 4, 2008
On mån, 2008-10-06 at 10:59 +0200, NGUYEN DANG LUAN, Eric wrote:
> I've tried almost all options for cache_peer but it doesn't seem to work. Is
> it a squid bug?
Did you try login=PASS using squid-2.7?
Regards
Henrik
Seeing issues with Googlebots retrying on large PDF files.
Apache logs a 200 for the HTTP 1.0 requests.
Squid logs an HTTP 1.1 request that looks to have stopped early (3MB out of
13MB).
This pattern is repeated with slight variation in the amount of data served to
the Googlebots, and after ab
On 05.10.08 12:31, Rafael Gomes wrote:
> I have two SCSI disks. I can set a unique cache_dir and make a RAID 0,
> so I will improve the writes, or I can set two cache_dirs, one per disk.
>
> What is better?
>
> Are there any documents with information about this, like comparisons and
> other things like that?
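For reference, the two-cache_dir alternative being weighed here is simply two entries in squid.conf, one per physical disk; squid then balances objects across them itself, with no RAID layer involved (paths and sizes are illustrative):

```
# One cache_dir per spindle. Squid spreads objects between them, and
# losing one disk only loses that part of the cache -- unlike RAID 0,
# where a single disk failure invalidates the whole array.
cache_dir ufs /disk1/squid 30000 16 256
cache_dir ufs /disk2/squid 30000 16 256
```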
> On Mon, Oct 6, 2008 at 8:49 AM, Francois Cami <[EMAIL PROTECTED]> wrote:
> > I would not run an ext3 filesystem with data=writeback . noatime and
> > nodiratime provide a welcome boost by eliminating unneeded writes,
> > however writeback is not {powerfailure, system crash}-safe. If you
> > value
I've tried almost all options for cache_peer but it doesn't seem to work. Is it
a squid bug?
Eric NGUYEN DANG LUAN
-Original message-
From: NGUYEN DANG LUAN, Eric [mailto:[EMAIL PROTECTED]
Sent: Monday, October 6, 2008 09:29
To: Henrik Nordstrom
Cc: squid-users@squid-cache.org
Subject
On 03.10.08 07:40, fbaiao wrote:
> My squid is blocking Trillian and I'm not finding the reason.
Squid is an HTTP proxy. Unless you really have a reason, don't use it for
Trillian and other protocols. Connect Trillian directly to the destination
ports.
--
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http
Hi JL,
Does your server use DNS in its logging? Perhaps it's reverse DNS?
If he downloads a big file, does the speed pick up?
Cheers,
Pieter
JL wrote:
I have a server setup which provides an anonymous proxy service to
individuals across the world. I have one specific user that is
experie
I have a server setup which provides an anonymous proxy service to
individuals across the world. I have one specific user that is
experiencing very slow speeds. Other users performing the very same
activities do not experience the slow speeds, myself included. I asked
the slow user to do traceroute
I went through the documentation. I need help installing the
auth_proxy module. I did not install squid from Synaptic Manager, I
did it manually, so the helpers directory is missing on my system and
I am not able to find the squid authenticators.
Is there any way I can get these?
On Mon, Oct
>> When a user is connected directly to webwasher it works. He is authenticated
>> correctly (I can see that thanks to the logs).
>> But once I implement a Squid cache server, it doesn't work. My user can't be
>> authenticated.
>Have you told Squid to trust the webwasher proxy with proxy login credent
On Mon, Oct 6, 2008 at 8:49 AM, Francois Cami <[EMAIL PROTECTED]> wrote:
> On Mon, Oct 6, 2008 at 8:28 AM, Kinkie <[EMAIL PROTECTED]> wrote:
>> On Mon, Oct 6, 2008 at 5:01 AM, Rafael Gomes <[EMAIL PROTECTED]> wrote:
>>> With ReiserFS or xfs we must set this options too?
>>>
>>> options : noatime, n
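The noatime/nodiratime advice discussed above is not ext3-specific; they are generic mount options that apply equally to ReiserFS and XFS. A hedged /etc/fstab sketch (device and mount point are hypothetical):

```
# Suppress access-time updates on the cache filesystem; the same
# options work for ext3, reiserfs, and xfs mount lines alike.
/dev/sdc1  /var/spool/squid  xfs  noatime,nodiratime  0 2
```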