Re: [squid-users] Squid 3.1 Release Date
Francois Cami wrote: > On Tue, Mar 3, 2009 at 8:32 AM, Silamael wrote: >> Is there any date when Squid 3.1 will be officially released? >> Thanks in advance! > > http://wiki.squid-cache.org/ReleaseProcess#head-eea0e990c0003af12917552175691a5120980cdd > Thanks for the reply, but this doesn't answer my question. I know that Squid 3.1 is already released as X.Y.0.z versions. I just wanted to know if there is any planned date. If you say "most likely in April", that's already enough. I just need an approximate date for some internal planning. -- Matthias
[squid-users] Squid 3.1 Release Date
Hello there! Is there any date when Squid 3.1 will be officially released? Thanks in advance! -- Matthias
Re: [squid-users] Upgrade from 2.6 to 3.0
Hi Drew, If you are satisfied with the performance of squid 2.6 and can survive with it, I would suggest not upgrading to 3.0 and staying with squid 2.6. What matters is: is your current squid satisfying your needs? It doesn't matter which version you are using. ~~ Sameer Shinde. M:- +91 98204 61580 Millions saw the apple fall, but Newton was the one who asked why. On Tue, Mar 3, 2009 at 3:13 AM, Amos Jeffries wrote: >> >> I've been using 2.6 for about a year or so. >> >> Should I be looking at upgrading to 3.0. >> >> Has anyone else upgraded from 2.6 to 3.0 and what problems, if any, have >> you run into? >> _
Re: [squid-users] squid particular ACL authentication
> Hi,
>
> I was wondering if I can create two different or more ACLs; one for a
> network range that can bypass squid authentication, while the other ACL
> must be authenticated. Would that be possible?

You don't exactly authenticate an ACL. But skipping that...

acl A ...
acl B ...
acl C proxy_auth REQUIRED

# permit A without being authenticated
http_access allow A

# permit B only if they are authenticated
http_access allow B C

Amos

> I have squid3 on Ubuntu 8.10. I've been reading some on-line
> documents about squid ACL but can't really figure out how to achieve
> that.
>
> any help would be much appreciated.
>
> --
> Best Regards,
> Dooda
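A concrete sketch of the pattern Amos outlines above (the network ranges and ACL names here are assumptions, not from the thread):

```
# Hypothetical ranges standing in for A and B.
acl lan_noauth src 192.168.10.0/24   # may use the proxy without logging in
acl lan_auth   src 192.168.20.0/24   # must present credentials
acl authed proxy_auth REQUIRED

http_access allow lan_noauth         # no auth challenge for this range
http_access allow lan_auth authed    # this range is challenged first
http_access deny all
```

Order matters: the no-auth rule must come before any rule that references the proxy_auth ACL, otherwise those clients will still be challenged.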
[squid-users] squid particular ACL authentication
Hi, I was wondering if I can create two different or more ACLs; one for a network range that can bypass squid authentication, while the other ACL must be authenticated. Would that be possible? I have squid3 on Ubuntu 8.10. I've been reading some on-line documents about squid ACL but can't really figure out how to achieve that. any help would be much appreciated. -- Best Regards, Dooda
Re: [squid-users] Squid and NTLM Authentication
> We are using squid-2.5.STABLE12-18.12

Please use a newer, more capable Squid. There have been MANY improvements since 2.5 was made obsolete. Amos
Re: [squid-users] Problems enabling cache_replacement_policy
> Hi!
>
> We have a reverse squid proxy running fine. But we need to put the
> cache_replacement_policy in mode GDSF. For this, we have this in the
> config file:
>
> cache_replacement_policy heap GDSF
> memory_replacement_policy heap GDSF
> high_memory_warning 2048 MB
>
> Independently of the convenience of using GDSF with
> cache_replacement_policy and memory_replacement_policy, when we run
> the command:
>
> $ squidclient -p 80 mgr:storedir
>
> HTTP/1.0 200 OK
> Server: squid
> Mime-Version: 1.0
> Date: Mon, 02 Mar 2009 16:09:50 GMT
> Content-Type: text/plain
> Expires: Mon, 02 Mar 2009 16:09:50 GMT
> Last-Modified: Mon, 02 Mar 2009 16:09:50 GMT
> X-Cache: MISS from vlex.eu
> X-Cache-Lookup: MISS from vlex.eu:80
> Via: 1.0 vlex.eu (squid)
> Connection: close
>
> Store Directory Statistics:
> Store Entries          : 6504719
> Maximum Swap Size      : 41943040 KB
> Current Store Swap Size: 39530824 KB
> Current Capacity       : 94% used, 6% free
>
> Store Directory #0 (aufs): /var/cache/squid
> FS Block Size 4096 Bytes
> First level subdirectories: 16
> Second level subdirectories: 64
> Maximum Size: 41943040 KB
> Current Size: 39530824 KB
> Percent Used: 94.25%
> Filemap bits in use: 6503017 of 8388608 (78%)
> Filesystem Space in use: 39971720/138299856 KB (29%)
> Filesystem Inodes in use: 6584740/8781824 (75%)
> Flags: SELECTED
> Removal policy: lru
> LRU reference age: 1.88 days
>
> As you can see, the Removal policy is marked as "lru", not GDSF. Why?
> We are a bit confused about this.

Many things in squid.conf are order-specific. The cache_replacement_policy setting can be used multiple times, and it only affects the cache_dir lines defined below it.

Amos
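The fix Amos implies is purely a matter of ordering. A minimal sketch (path and sizes here are assumptions, not the poster's real values):

```
# cache_replacement_policy only affects cache_dir lines that FOLLOW it.
cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF
cache_dir aufs /var/cache/squid 40960 16 64
```

With the policy line above the cache_dir line, mgr:storedir should then report "Removal policy: heap GDSF" for that directory.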
Re: [squid-users] Problem with Reverse Proxy and multiple domains
> I'm currently running Squid 2.6 stable 22 as a caching server.
>
> It is acting as a front-end for a bunch of servers answering for
> www.123456.com and 123456.com. Without any problems.
>
> I have updated the apache configuration for handling web traffic for
> www.abcdef.com, abcdef.com, www.987zyx.com and 987zyx.com.
>
> If I hit the web servers with the various domains, I get the desired
> web site without any problems.
>
> The problem I'm running into with Squid is that no matter what domain
> I enter, squid is treating all the traffic as www.123456.com.
>
> So if I enter www.987zyx.com via squid, I go to the www.123456.com web
> site instead.
>
> Here is a copy of the squid configuration I'm using. What am I doing
> wrong?

You are using the broken and obsolete squid-2.5 method of 'acceleration'. I've placed inline alterations to update this to 2.6 requirements...

> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl CONNECT method CONNECT
>
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
> coredump_dir /var/cache/squid
> http_port 80 accel vport

http_port 80 accel vhost

> cache_peer 192.168.2.10 parent 80 0 no-query originserver round-robin login=PASS
> cache_peer 192.168.2.11 parent 80 0 no-query originserver round-robin login=PASS
> cache_peer 192.168.2.12 parent 80 0 no-query originserver round-robin login=PASS

KILL this:
> acl webserver dst 192.168.2.10 192.168.2.11 192.168.2.12

acl 123456 dstdomain .123456.com
(if you want to be VERY tricky: acl 123456 dst 192.168.2.10 )
cache_peer_access 192.168.2.10 allow 123456
cache_peer_access 192.168.2.10 deny all
http_access allow 123456

... repeat as appropriate for each webserver. Including _separate_ ACLs for each one.
Followed with:

http_access deny all
never_direct allow all

Kill all the below http_*:

> http_access allow webserver
> http_access allow all
> miss_access allow webserver
> miss_access allow all
> http_access deny all
>
> icp_access deny all
>
> acl loadbalancer1 src 192.168.3.125
> acl loadbalancer2 src 192.168.3.126
> follow_x_forwarded_for allow loadbalancer1
> follow_x_forwarded_for allow loadbalancer2
> follow_x_forwarded_for allow all
> acl_uses_indirect_client on
> delay_pool_uses_indirect_client on
> log_uses_indirect_client on
>
> logformat combined %{Host}>h %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %h" "%{User-Agent}>h" %Ss:%Sh
> access_log /var/log/squid/access.log combined
> collapsed_forwarding on
> vary_ignore_expire on
>
> cache_effective_user squid
> cache_store_log none
> client_db off
> cache_mem 512 MB
> cache_dir ufs /var/cache/squid 3000 10 10

Amos
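Pulling Amos's scattered corrections together, a per-domain version might look like the following sketch. It is not his exact config: the name= labels are an assumption (handy so cache_peer_access can refer to each peer by name), and round-robin is dropped because each domain now routes to exactly one origin:

```
http_port 80 accel vhost

cache_peer 192.168.2.10 parent 80 0 no-query originserver login=PASS name=peer1
cache_peer 192.168.2.11 parent 80 0 no-query originserver login=PASS name=peer2

acl 123456 dstdomain .123456.com
acl abcdef dstdomain .abcdef.com

# Route each domain only to its own origin server.
cache_peer_access peer1 allow 123456
cache_peer_access peer1 deny all
cache_peer_access peer2 allow abcdef
cache_peer_access peer2 deny all

http_access allow 123456
http_access allow abcdef
http_access deny all
never_direct allow all
```

The key change from the broken config is that requests are matched by dstdomain, not by destination IP, so the Host header actually selects the backend.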
Re: [squid-users] CONNECT in accelerator mode
> Hello!
>
> I'm in a bit of a deadlock, so all my hopes are with you.
> The short version: I want to use squid as both an accelerator and as a
> forward proxy which can handle CONNECT requests.
>
> From what I've read over the net, these 2 cases are mutually
> exclusive, but I decided to ask anyway, since maybe there's a
> workaround or alternative method.

The net is mostly wrong. Since 2.6 Squid has been perfectly capable of running in multiple modes at once on multiple ports. The config you are looking for is documented at: http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

Note the little informative note at the top of the squid configuration text relevant for forward-proxy + accelerator setups.

> Basically, I configure squid to listen to TCP/80 and use apache on
> TCP/81 as an origin server, but I also want it to handle CONNECT
> requests to a port range on localhost (1-10020).
>
> In case you were wondering, here's my squid.conf file:
> http://pastebin.com/fcfcd6a6
> Whenever I try to connect through the proxy via CONNECT, I get the
> infamous [parseHttpRequest: CONNECT not valid in accelerator mode]
> error in the logs.

Aha, you cannot do CONNECT through an accelerated port AFAIK. You can make squid listen on a regular non-accelerated port (usually 3128) for all the forward-proxy requests.

> I tried (as it can be seen) to enable the proxy both-ways (with the
> allow-direct and always_direct keywords), but it works as far as
> CONNECT. There, it stops working as I expect it to.

Those two settings apply only to how a request is sent out of squid. Not to the types available in any mode.

> If anyone has any thoughts on how I may overcome this problem, please do.
>
> Thank you!

Regarding a few of your posted config settings:

cache_access_log == access_log. Turning one to 'none' and then defining a log to send to makes cache_access_log irrelevant.

no_cache is deprecated. Replace with 'cache deny' instead of 'no_cache deny'.
"acl all src all" - pretty much describes itself. And 'all' replaces toRest in your usage. http://pastebin.com/m60cf0432 Amos
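The two-port layout Amos recommends can be sketched like this (the defaultsite value and the localhost origin are assumptions based on the thread's description of apache on TCP/81):

```
# Accelerator port: vhost/defaultsite traffic forwarded to apache.
http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 81 0 no-query originserver

# Plain forward-proxy port: CONNECT requests are valid here.
http_port 3128
```

Browsers that need CONNECT are then pointed at port 3128, while port 80 stays dedicated to accelerated traffic.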
[squid-users] Re: Problems enabling cache_replacement_policy
On Mon, 2 Mar 2009 17:12:50 +0100 Manuel Trujillo wrote: > cache_replacement_policy heap GDSF Did you place this line above the cache_dir line?
Re: [squid-users] Upgrade from 2.6 to 3.0
> > I've been using 2.6 for about a year or so.
> >
> > Should I be looking at upgrading to 3.0.
> >
> > Has anyone else upgraded from 2.6 to 3.0 and what problems, if any, have
> > you run into?

Problems we know people are hitting:
* some altered configure switches/options
* altered or missing squid.conf settings
  --> Do please read the release notes section 8 before upgrading.
* stricter processing of squid.conf (WARNINGS about issues previously unmentioned).
* missing features. Some were not ported Squid-2 => Squid-3 for the 3.0 release.
  --> Do please check that the stuff you are currently using/needing is still supported in 3.0 before the upgrade.

HTH
Amos
Re: [squid-users] Performance comparison between 2.7 an 3.0
> Hi!
>
> Is there any cache performance comparison between the two stable
> versions?
>
> I mean, a comparison when both run on the same hardware doing the same
> thing. Just throughput matters... At this moment I don't care about
> CPU and I/O usage.
>
> Thanks a lot in advance
> Lucas Brasilino

If there is, we have yet to hear about it. Anyone?

Amos
[squid-users] Performance comparison between 2.7 an 3.0
Hi! Is there any cache performance comparison between the two stable versions? I mean, a comparison when both run on the same hardware doing the same thing. Just throughput matters... At this moment I don't care about CPU and I/O usage. Thanks a lot in advance Lucas Brasilino
[squid-users] CONNECT in accelerator mode
Hello! I'm in a bit of a deadlock, so all my hopes are with you. The short version: I want to use squid as both an accelerator and as a forward proxy which can handle CONNECT requests. From what I've read over the net, these 2 cases are mutually exclusive, but I decided to ask anyway, since maybe there's a workaround or alternative method. Basically, I configure squid to listen to TCP/80 and use apache on TCP/81 as an origin server, but I also want it to handle CONNECT requests to a port range on localhost (1-10020). In case you were wondering, here's my squid.conf file: http://pastebin.com/fcfcd6a6 Whenever I try to connect through the proxy via CONNECT, I get the infamous [parseHttpRequest: CONNECT not valid in accelerator mode] error in the logs. I tried (as it can be seen) to enable the proxy both-ways (with the allow-direct and always_direct keywords), but it works as far as CONNECT. There, it stops working as I expect it to. If anyone has any thoughts on how I may overcome this problem, please do. Thank you!
[squid-users] Upgrade from 2.6 to 3.0
I've been using 2.6 for about a year or so. Should I be looking at upgrading to 3.0? Has anyone else upgraded from 2.6 to 3.0 and what problems, if any, have you run into?
[squid-users] Problem with Reverse Proxy and multiple domains
I'm currently running Squid 2.6 stable 22 as a caching server. It is acting as a front-end for a bunch of servers answering for www.123456.com and 123456.com. Without any problems.

I have updated the apache configuration for handling web traffic for www.abcdef.com, abcdef.com, www.987zyx.com and 987zyx.com. If I hit the web servers with the various domains, I get the desired web site without any problems.

The problem I'm running into with Squid is that no matter what domain I enter, squid is treating all the traffic as www.123456.com. So if I enter www.987zyx.com via squid, I go to the www.123456.com web site instead.

Here is a copy of the squid configuration I'm using. What am I doing wrong?

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl CONNECT method CONNECT

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
coredump_dir /var/cache/squid
http_port 80 accel vport
cache_peer 192.168.2.10 parent 80 0 no-query originserver round-robin login=PASS
cache_peer 192.168.2.11 parent 80 0 no-query originserver round-robin login=PASS
cache_peer 192.168.2.12 parent 80 0 no-query originserver round-robin login=PASS
acl webserver dst 192.168.2.10 192.168.2.11 192.168.2.12
http_access allow webserver
http_access allow all
miss_access allow webserver
miss_access allow all
http_access deny all

icp_access deny all

acl loadbalancer1 src 192.168.3.125
acl loadbalancer2 src 192.168.3.126
follow_x_forwarded_for allow loadbalancer1
follow_x_forwarded_for allow loadbalancer2
follow_x_forwarded_for allow all
acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on

logformat combined %{Host}>h %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/access.log combined
collapsed_forwarding on
vary_ignore_expire on
cache_effective_user squid
cache_store_log none
client_db off
cache_mem 512 MB
cache_dir ufs /var/cache/squid 3000 10 10
[squid-users] Re: Problems enabling cache_replacement_policy
On Mon, Mar 2, 2009 at 17:12, Manuel Trujillo wrote: > Hi! > > We have a reverse squid proxy running fine. But we need to put the > cache_replacement_policy in mode GDFS. For this, we have this in > config file: > > cache_replacement_policy heap GDSF > memory_replacement_policy heap GDSF > high_memory_warning 2048 MB Sorry... We are using squid3-3.0.STABLE10-2.11 (binary package) from openSUSE 11.1 (i586 GNU/Linux). Thank you. -- Manuel Trujillo Albarral Director de Sistemas Informáticos VLEX NETWORKS S.L. Telf: 93-272.26.85 ext. 157 GoogleTalk mtruji...@vlex.com http://www.vlex.com
[squid-users] Squid and NTLM Authentication
We are using squid-2.5.STABLE12-18.12 When trying to access this web page from home it prompts me for NTLM Authentication credentials: http://www.allianceenterprises.com/AWAREInfo --- When I do it from work (behind Squid), I get this error: You are not authorized to view this page You do not have permission to view this directory or page due to the access control list (ACL) that is configured for this resource on the Web server. Please try the following: * Contact the Web site administrator if you believe you should be able to view this directory or page. * Click the Refresh button to try again with different credentials. HTTP Error 401.3 - Unauthorized: Access is denied due to an ACL set on the requested resource. Internet Information Services (IIS) Technical Information (for support personnel) * Go to Microsoft Product Support Services and perform a title search for the words HTTP and 401. * Open IIS Help, which is accessible in IIS Manager (inetmgr), and search for topics titled About Security, Access Control, and About Custom Error Messages. I'm thinking that Squid doesn't know how to handle NTLM authentication? Any ideas?
[squid-users] Problems enabling cache_replacement_policy
Hi! We have a reverse squid proxy running fine. But we need to put the cache_replacement_policy in mode GDSF. For this, we have this in the config file:

cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF
high_memory_warning 2048 MB

Independently of the convenience of using GDSF with cache_replacement_policy and memory_replacement_policy, when we run the command:

$ squidclient -p 80 mgr:storedir

HTTP/1.0 200 OK
Server: squid
Mime-Version: 1.0
Date: Mon, 02 Mar 2009 16:09:50 GMT
Content-Type: text/plain
Expires: Mon, 02 Mar 2009 16:09:50 GMT
Last-Modified: Mon, 02 Mar 2009 16:09:50 GMT
X-Cache: MISS from vlex.eu
X-Cache-Lookup: MISS from vlex.eu:80
Via: 1.0 vlex.eu (squid)
Connection: close

Store Directory Statistics:
Store Entries          : 6504719
Maximum Swap Size      : 41943040 KB
Current Store Swap Size: 39530824 KB
Current Capacity       : 94% used, 6% free

Store Directory #0 (aufs): /var/cache/squid
FS Block Size 4096 Bytes
First level subdirectories: 16
Second level subdirectories: 64
Maximum Size: 41943040 KB
Current Size: 39530824 KB
Percent Used: 94.25%
Filemap bits in use: 6503017 of 8388608 (78%)
Filesystem Space in use: 39971720/138299856 KB (29%)
Filesystem Inodes in use: 6584740/8781824 (75%)
Flags: SELECTED
Removal policy: lru
LRU reference age: 1.88 days

As you can see, the Removal policy is marked as "lru", not GDSF. Why? We are a bit confused about this. Thank you very much. -- Manuel Trujillo
Re: [squid-users] Re: Best cache_dir with 280GB disk
On 02.03.09 15:24, Shekhar Gupta wrote:
> Thanks, I will do that. However, is 128x256 the right combination for
> 280 GB of space in a single cache_dir?

I'd rather increase that to 256 256, although it depends on the number of objects, which depends on the minimum and maximum object size configured on your system. The average object size is usually around 13 KB. -- Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. Fucking windows! Bring Bill Gates! (Southpark the movie)
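As a sanity check on those L1/L2 numbers, a rough back-of-the-envelope calculation (the ~13 KB average object size is the estimate from the thread; Python is used only for the arithmetic):

```python
# Rough sizing estimate for a 280 GB cache_dir, assuming the ~13 KB
# average object size quoted in this thread.
cache_kb = 280 * 1024 * 1024          # 280 GB expressed in KB
avg_object_kb = 13
objects = cache_kb // avg_object_kb   # expected object count: ~22.6 million

l1, l2 = 256, 256                     # first/second level subdirectories
leaf_dirs = l1 * l2                   # 65536 leaf directories
files_per_dir = objects // leaf_dirs  # ~344 objects per leaf directory

print(objects, leaf_dirs, files_per_dir)
```

With the 128x256 layout that roughly doubles to ~690 files per leaf directory, which still works but leaves less headroom, hence the 256 256 suggestion.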
RE: [squid-users] only TCP_MISS/200 in log files
Hi Amos, I have just built 3.1.0.5 (3.1.0.6 fails with a make error); it works fine with the same configuration. Ciao, Thx, Sébastien. -Original message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Monday, 2 March 2009 01:42 To: Sébastien WENSKE Cc: squid-users@squid-cache.org Subject: Re: [squid-users] only TCP_MISS/200 in log files > Hi All, > > I have noticed that there are only TCP_MISS/200 entries in my squid (3.1.0.0) log > files A little surprising, but please use the latest code when testing beta releases. We have very many bug and stability fixes since 3.1 was in alpha release. Amos
Re: [squid-users] What is the reason & how to solve this
I think I don't understand you. You say that for traffic from the squid server to the Internet, the source IP is always 192.168.1.1? This is normal, because the server only has one route to the Internet and this route specifies only one source IP. The destination IP of the connection to squid is not relevant in this case, because the outgoing traffic is generated inside the squid server and has no relation to the client's connection to squid. On Mon, Mar 2, 2009 at 11:15 AM, Shekhar Gupta wrote: > I got this point however the source IP should be the one which is > specified on the interface , then why all the packets are taking the > previous IP as source IP ... > > > On Mon, Mar 2, 2009 at 3:29 PM, David Rodríguez Fernández > wrote: >> If you put two nic with 2 IP from the same network the kernel can't >> know how route. If you need two ip from the same network, you can add >> the second IP as an alias on the same nic. >> >> On Mon, Mar 2, 2009 at 10:52 AM, Shekhar Gupta >> wrote: >>> I didn't get this , why can't ? >>> >>> On Mon, Mar 2, 2009 at 2:27 PM, David Rodríguez Fernández >>> wrote: You can't have two IP from the same network on different nic. On Mon, Mar 2, 2009 at 9:11 AM, Shekhar Gupta wrote: > > I am running 2 squid instance on 2 Diffrent IP address on a Single server > . > > NIC1 - 192.168.1.1:8080 installed to /squid1/ > NIC2 - 192.168.1.2:8080 installed to /squid2/ > > When i am specifying 192.168.1.1:8080 it takes this IP and go to > Internet whcih is Fine > When i am specifying 192.168.1.2:8080 it still takes (192.168.1.1) and > go to internet ?? what is wrong why is this happening . >>> >> >
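On the squid side, one approach (not spelled out in the thread, so treat this as a sketch) is to pin each instance's outgoing address in its own squid.conf with tcp_outgoing_address:

```
# In the second instance's squid.conf (under /squid2/), assuming it
# should originate its Internet traffic from its own address:
http_port 192.168.1.2:8080
tcp_outgoing_address 192.168.1.2
```

Without this, squid lets the kernel pick the source address for outgoing connections, which is why both instances appear to come from 192.168.1.1.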
Re: [squid-users] What is the reason & how to solve this
I get this point; however, the source IP should be the one specified on the interface, so why are all the packets taking the previous IP as the source IP? On Mon, Mar 2, 2009 at 3:29 PM, David Rodríguez Fernández wrote: > If you put two nic with 2 IP from the same network the kernel can't > know how route. If you need two ip from the same network, you can add > the second IP as an alias on the same nic. > > On Mon, Mar 2, 2009 at 10:52 AM, Shekhar Gupta > wrote: >> I didn't get this , why can't ? >> >> On Mon, Mar 2, 2009 at 2:27 PM, David Rodríguez Fernández >> wrote: >>> You can't have two IP from the same network on different nic. >>> >>> On Mon, Mar 2, 2009 at 9:11 AM, Shekhar Gupta >>> wrote: I am running 2 squid instance on 2 Diffrent IP address on a Single server . NIC1 - 192.168.1.1:8080 installed to /squid1/ NIC2 - 192.168.1.2:8080 installed to /squid2/ When i am specifying 192.168.1.1:8080 it takes this IP and go to Internet whcih is Fine When i am specifying 192.168.1.2:8080 it still takes (192.168.1.1) and go to internet ?? what is wrong why is this happening . >>> >> >
Re: [squid-users] What is the reason & how to solve this
If you put two NICs with two IPs from the same network, the kernel can't know how to route. If you need two IPs from the same network, you can add the second IP as an alias on the same NIC. On Mon, Mar 2, 2009 at 10:52 AM, Shekhar Gupta wrote: > I didn't get this , why can't ? > > On Mon, Mar 2, 2009 at 2:27 PM, David Rodríguez Fernández > wrote: >> You can't have two IP from the same network on different nic. >> >> On Mon, Mar 2, 2009 at 9:11 AM, Shekhar Gupta >> wrote: >>> >>> I am running 2 squid instance on 2 Diffrent IP address on a Single server . >>> >>> NIC1 - 192.168.1.1:8080 installed to /squid1/ >>> NIC2 - 192.168.1.2:8080 installed to /squid2/ >>> >>> When i am specifying 192.168.1.1:8080 it takes this IP and go to >>> Internet whcih is Fine >>> When i am specifying 192.168.1.2:8080 it still takes (192.168.1.1) and >>> go to internet ?? what is wrong why is this happening . >> >
Re: [squid-users] Is it possible to authenticate users against Novell eDirectory >= 8?
Joel Rosental R. wrote: Hi, I would like to know if it is possible to authenticate squid against Novell eDirectory 8.8.4? From what I've read in the docs I found, there is a kind of incompatibility with versions >= 8 because of some issue with how new versions store client IP addresses. I believe 2.7 and 3.x have an auth helper for eDirectory. eDirectory 8 specifically, well, um... since nobody has replied yet you will have to try the helper and report back. Looks like none of the regular group here knows at this point. Amos -- Please be using Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13 Current Beta Squid 3.1.0.5
Re: [squid-users] Re: Best cache_dir with 280GB disk
Thanks, I will do that. However, is 128x256 the right combination for 280 GB of space in a single cache_dir? On Mon, Mar 2, 2009 at 2:08 PM, Jan-Frode Myklebust wrote: > On 2009-02-28, Shekhar Gupta wrote: >> >> Can any one let me know what will be the best configuration for squid >> cache_dir that can be defined with 280GB . I am using the following >> however i think squid genius can work this more effectively >> >> cache_dir ufs /squid/var/cachestore1 5000 128 256 >> cache_dir ufs /squid/var/cachestore2 5000 128 256 >> cache_dir ufs /squid/var/cachestore3 5000 128 256 >> cache_dir ufs /squid/var/cachestore4 5000 128 256 > > I'm curious why you would want to split it up in multiple > small cache_dir's, instead of giving it all to squid as one > cache dir ? > > And I think you should use "aufs" (or diskd?) instead of "ufs", > to avoid blocking squid on disk access. > > > -jf > >
Re: [squid-users] What is the reason & how to solve this
I didn't get this; why not? On Mon, Mar 2, 2009 at 2:27 PM, David Rodríguez Fernández wrote: > You can't have two IP from the same network on different nic. > > On Mon, Mar 2, 2009 at 9:11 AM, Shekhar Gupta > wrote: >> >> I am running 2 squid instance on 2 Diffrent IP address on a Single server . >> >> NIC1 - 192.168.1.1:8080 installed to /squid1/ >> NIC2 - 192.168.1.2:8080 installed to /squid2/ >> >> When i am specifying 192.168.1.1:8080 it takes this IP and go to >> Internet whcih is Fine >> When i am specifying 192.168.1.2:8080 it still takes (192.168.1.1) and >> go to internet ?? what is wrong why is this happening . >
Re: [squid-users] acl macf1 arp mac-address
░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote: If I want to use an ACL rule using a MAC address, what should I do? I use/install squid using apt-get install (Ubuntu) (latest - and always do apt-get update/upgrade). Can it run both? (IP list ACL and MAC list ACL) 1) consider *very* carefully why you want to use MAC. 2) learn about ARP and MAC and how they operate within a network. Particularly about how and why the MAC _changes_ during a packet's transfer. 3) go back to (1) 4) make sure --enable-arp-acl is configured when squid is built. It's off by default to make people do step (2) before they cause themselves major misery. And I imagine a lot of distros leave it off for its lack of real-network usefulness. Amos -- Please be using Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13 Current Beta Squid 3.1.0.5
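If, after the caveats above, a MAC-based ACL is still wanted, the shape is roughly this (the MAC address is a placeholder, and squid must have been built with --enable-arp-acl):

```
# Only works for clients on the same local subnet as the proxy,
# since the MAC seen is that of the last hop, not the origin host.
acl macf1 arp 00:11:22:33:44:55
http_access allow macf1
http_access deny all
```

MAC and IP ACLs can be combined freely on the same http_access lines, so running "both" is not a problem.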
Re: [squid-users] Squid NTLM + Windows Vista update
Sébastien WENSKE wrote: Thanks Amos, it was very helpful. Now I need to fix an issue with dansguardian; when I go through it, I notice this in the squid log: 01/Mar/2009:16:43:48.329 73520 10.0.0.11 TCP_MISS/200 ->0<- CONNECT update.microsoft.com:443 - DIRECT/65.55.13.126 - and I get a windows update 80072EE2 error... But with squid only, it works fine. 01/Mar/2009:16:42:08.667 117784 10.0.0.11 TCP_MISS/200 ->7780<- CONNECT update.microsoft.com:443 - DIRECT/65.55.184.93 - Welcome. IIRC allowing localhost (aka Dansguardian) access to the particular CONNECT worked for someone. Amos Thanks, Sébastien -Original message- From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Saturday, 28 February 2009 23:51 To: Sébastien WENSKE Cc: squid-users@squid-cache.org Subject: Re: [squid-users] Squid NTLM + Windows Vista update Sébastien WENSKE wrote: Hi All, I have some trouble getting updates with Windows Vista when I use squid with NTLM. 28/Feb/2009:19:04:39.534 2 10.0.0.11 TCP_DENIED/407 452 HEAD http://download.windowsupdate.com/v8/windowsupdate/redir/muv3wuredir.cab? - NONE/- text/html Is it possible to allow a specific url/domain without the authentication process? Many thanks, Sébastien WENSKE. http://wiki.squid-cache.org/SquidFaq/WindowsUpdate Amos -- Please be using Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13 Current Beta Squid 3.1.0.5
Re: [squid-users] What is the reason & how to solve this
You can't have two IPs from the same network on different NICs. On Mon, Mar 2, 2009 at 9:11 AM, Shekhar Gupta wrote: > > I am running 2 squid instance on 2 Diffrent IP address on a Single server . > > NIC1 - 192.168.1.1:8080 installed to /squid1/ > NIC2 - 192.168.1.2:8080 installed to /squid2/ > > When i am specifying 192.168.1.1:8080 it takes this IP and go to > Internet whcih is Fine > When i am specifying 192.168.1.2:8080 it still takes (192.168.1.1) and > go to internet ?? what is wrong why is this happening .
[squid-users] Re: Best cache_dir with 280GB disk
On 2009-02-28, Shekhar Gupta wrote: > > Can any one let me know what will be the best configuration for squid > cache_dir that can be defined with 280GB . I am using the following > however i think squid genius can work this more effectively > > cache_dir ufs /squid/var/cachestore1 5000 128 256 > cache_dir ufs /squid/var/cachestore2 5000 128 256 > cache_dir ufs /squid/var/cachestore3 5000 128 256 > cache_dir ufs /squid/var/cachestore4 5000 128 256 I'm curious why you would want to split it up in multiple small cache_dir's, instead of giving it all to squid as one cache dir ? And I think you should use "aufs" (or diskd?) instead of "ufs", to avoid blocking squid on disk access. -jf
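Jan-Frode's two suggestions combined would look something like this sketch (the path and the 270000 MB size are assumptions, sized to leave headroom on a 280 GB disk):

```
# One large aufs cache_dir instead of four small ufs ones;
# aufs uses threaded I/O so disk access does not block squid.
cache_dir aufs /squid/var/cachestore 270000 128 256
```

The size argument is in megabytes, so four 5000 MB directories as posted only use about 20 GB of the available 280 GB.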
[squid-users] What is the reason & how to solve this
I am running 2 squid instances on 2 different IP addresses on a single server.

NIC1 - 192.168.1.1:8080 installed to /squid1/
NIC2 - 192.168.1.2:8080 installed to /squid2/

When I specify 192.168.1.1:8080 it takes this IP to go to the Internet, which is fine. When I specify 192.168.1.2:8080 it still takes 192.168.1.1 to go to the Internet?? What is wrong; why is this happening?
Re: [squid-users] Streaming is killing Squid cache
We've been running Windows Update via Squid, and I must say it works like a charm. While I do agree that having a Win2k3 or Win2k8 with AD could simplify admin, as most clients are running Windows, in our scenario, that is out of the question, as we're an ISP, and there is no way we can / want to force users to be users of our AD. Regards HASSAN - Original Message - From: "Gregori Parker" To: "Amos Jeffries" Cc: "Brett Glass" ; Sent: Monday, March 02, 2009 13:13 Subject: RE: [squid-users] Streaming is killing Squid cache I missed the part where he mentioned that this is a poor ISP with no control over their clients, so you'll have to pardon my fatal presumptuousness. Hint: I'm rolling my eyes It may seem marvelous, but there actually are a handful of places that run Windows...even on servers. In that sort of environment, you're likely to find AD, in which case WSUS + GPO are both simple, sensible and _zero_ cost solutions for this problem. From: Amos Jeffries [mailto:squ...@treenet.co.nz] Sent: Sun 3/1/2009 5:15 PM To: Gregori Parker Cc: Brett Glass; Amos Jeffries; squid-users@squid-cache.org Subject: RE: [squid-users] Streaming is killing Squid cache Better yet, implement a wsus server, let it bypass caching and gpo your users to update from that. That way you can get away from having ms updates dictate caching options that result in problems with streaming. You are of course making a few very fatal assumptions: 1) that every service provider with this issue can afford to run a dedicated Windows server machine for this purpose. 2) that they want to. (I for one marvel that people are still willing to run MS windows on ANY server.) 3) that they have Enterprise level of control over where their clients machines get WU from. Hint: Tier 0-3 ISP have _zero_ control over client machine settings. 
Amos From: Brett Glass [mailto:squid-us...@brettglass.com] Sent: Sun 3/1/2009 8:02 AM To: Amos Jeffries; Brett Glass Cc: squid-users@squid-cache.org Subject: Re: [squid-users] Streaming is killing Squid cache At 09:47 PM 2/28/2009, Amos Jeffries wrote: Leaving min at -1, and max at something large (10-50MB?) Should abort the streams when they reach the max value, You'll have to set the max to something reasonably higher than the WU cab size. Service Packs may cause issues since they are >100MB each, but are infrequent enough to use a spider and cause caching if need be. We've actually seen Microsoft updates as big as 800 MB. Of course, this is a good argument for turning this setting into something that's controlled by an ACL, so one could say, "Cache everything from Microsoft, but not from these streaming providers." Hmm, thinking about this some more... Maybe your fix is to "cache deny X" where X is an ACL defining the streaming sources. The abort logic apparently only holds links open if they are considered cacheable (due to headers and non-denial in Squid). Or perhaps you are hitting the one rare case where "half_closed_clients on" is needed for now to make the abort kick in. Amos
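The "cache deny X" idea Amos floats would look something like this (the domain is a placeholder for whatever streaming sources are actually involved):

```
# Hypothetical streaming sources: responses matching this ACL are
# never cached, so squid's abort logic can drop the server link as
# soon as the client goes away instead of fetching the whole stream.
acl streamsrc dstdomain .streams.example.com
cache deny streamsrc
```

This keeps the large maximum_object_size needed for Microsoft updates while exempting the streams from caching entirely.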