Re: [squid-users] reverse proxy for ssl sites

2009-12-29 Thread Matus UHLAR - fantomas
 On 26-12-2009 5:06, Guido Marino Lorenzutti wrote:
 Hi people!
 I'm using squid to reverse proxy a lot of sites, and have been for quite a
 few years. The thing is that I have several sites that I need to give SSL
 support, and I can't find a way to tell squid to act the same way
 it acts for the non-SSL connections.

 This is my setup for the non-SSL connections. It doesn't work if I just
 tell squid to also listen on port 443. Any links
 that can help?

 Angelo Höngens a.hong...@netmatch.nl wrote:
 Here's an example squid config on my blog for a squid that listens on ssl:

 http://blog.hongens.nl/guides/protect-owa-using-a-reverse-proxy/
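
[For reference, a minimal sketch of the kind of HTTPS accelerator setup the
guide above describes, assuming squid was built with --enable-ssl; the
certificate paths and origin host are placeholders:]

https_port 443 accel cert=/etc/squid/example.pem key=/etc/squid/example.key defaultsite=www.example.com vhost
cache_peer 127.0.0.1 parent 80 0 no-query originserver name=origin
acl oursites dstdomain www.example.com
cache_peer_access origin allow oursites
http_access allow oursites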

On 28.12.09 10:12, Guido Marino Lorenzutti wrote:
 This was helpful. Now I'm facing a new problem: I use Debian and the
 package doesn't have SSL support (yuck!). But this I can solve by
 myself.

Well, it seems that linking squid (GPL) with OpenSSL is problematic...

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=251988

Support for GnuTLS would help here. Or relicensing squid ;-)
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
He who laughs last thinks slowest. 


Re: [squid-users] reverse proxy for ssl sites

2009-12-29 Thread Guido Marino Lorenzutti

Matus UHLAR - fantomas uh...@fantomas.sk wrote:

 Well, it seems that linking squid (GPL) with OpenSSL is problematic...

 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=251988

 Support for GnuTLS would help here. Or relicensing squid ;-)



Yeah... I don't know. I guess I will install haproxy or varnish for the
SSL sites.





RE: [squid-users] Streaming Media from ABC.com CBS.com etc...

2009-12-29 Thread Mike Marchywka



 From:
 To: squid-users@squid-cache.org
 Date: Mon, 28 Dec 2009 21:47:37 -0600
 Subject: RE: [squid-users] Streaming Media from ABC.com CBS.com etc...

 I used the term "Full Episodes" just as a way to explain the links that are 
 on the various network websites.

 I'm not an expert obviously, but the issue seems to be something I'm not 
 configuring correctly in squid. If I bypass squid and directly connect to the 
 sites everything works fine. I trimmed down my squid.conf as much as I knew 
 how to eliminate any configuration errors that I could think of. Does anyone 
 who is using Squid as their network proxy have the ability to view any of the 
 videos on any of the major network sites?

Well, we have done some limited media testing on a mobile app, though I don't
recall trying full episodes. But my point is that that doesn't mean much,
because the details matter.



 I originally thought the problem was related to how the sites require the use 
 of their own individual players to view the videos. They do this to prevent 
 people from using addons to download the videos directly. So I tried to 
 stream some Netflix content, since they also use a proprietary video player. 
 With Netflix I didn't have any issues. I've tried various versions of squid, 
 assuming that the problem was perhaps related to my build, but the issue 
 seems universal.


Not all non-browsers are the same. You really need to look at the links in
each case, pretend to be the various user agents, and see if the server is
sending you something squid can handle. You may be able to use netstat while
another player is loading, or tcpdump or something, to see what it is doing.
However, from what I have seen, sometimes the pages contain rtsp links,
sometimes http. Probably the shorter clips are http and the longer ones rtsp,
but you need to at least look at your page source; that should be easy from
any browser. It is well worth your time to get something like cygwin and
learn how to use the tools, not just hunt through menus and icons. All these
people keep changing their sites, and even if you get something up today it
is unlikely to be stable forever.
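
[A minimal sketch of that kind of inspection, as a shell session; the
interface name, proxy port, and media ports are assumptions:]

# see what connections the player opens while a video loads -
# anything not going to the proxy port is bypassing squid
netstat -tn | grep -v ':3128'

# capture the player's traffic for later inspection (interface name assumed;
# 554 = rtsp, 1935 = rtmp)
tcpdump -i eth0 -s 0 -w player.pcap 'port 80 or port 554 or port 1935'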






 I've gone through my squid logs (access.log, cache.log, store.log) and no 
 errors show. In fact, with the configuration below, the access.log isn't even 
 used, because everything is a direct connection.

 Kevin


 Hello everyone,

 I'm sure this is an oversight on my part, but for the life of me I cannot 
 get "Full Episodes" to play from any of the major network sites. I can 
 stream media from everywhere else (Netflix, YouTube, shoutcast, etc...). In 
 an effort to troubleshoot this I have set up a bare-minimum install of Squid 
 3.0.STABLE18 and configured a bare-bones squid.conf.

 (This is the complete squid.conf used for testing only)
 http_port 3128
 cache_effective_user squid
 cache_effective_group squid
 acl localhost src 127.0.0.1/255.255.255.255
 acl localnet src 192.168.0.0/16
 acl HTTP proto HTTP
 always_direct allow HTTP
 acl CONNECT method CONNECT
 http_access allow localnet
 coredump_dir /var/spool/squid

 With this configuration I can get as far as watching the commercial ad 
 portion on any of the sites, but the actual episodes never play (on any of 
 the network sites). Again, I'm sure this is something simple, but I've been 
 searching for an answer for going on a couple weeks now, and am finally 
 breaking down and asking for help.

 I'm using a Windows machine to access the Squid Box and I'm using IE. Any 
 help would be appreciated. I'd be more than happy to read through any 
 FAQ/Guide/etc.. that pertains to this issue, but I have had no luck finding 
 anything pertaining to this problem.

 Well, it would help to get something like linux or cygwin, where you stand a 
 chance of getting useful information.
 I use cygwin's wget for stuff like this. Hit the url that works and the one 
 that doesn't, and try to phrase your
 question in terms of something that comes up at the IETF - they probably don't 
 know anything about streaming
 media on CBS or ABC. In the past, I've noted that some places react to the 
 user agent and can respond with html links
 that point to either 3gp files or an rtsp stream. Find out where your links 
 actually point and see how the server responds
 when you try to hit it directly. I gather if you are calling the media a 
 full episode you may not have looked at the underlying
 links or response headers from the server.
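
[A sketch of the wget comparison being suggested, assuming a media URL pulled
out of the page source; the URLs and user-agent strings are illustrative:]

# pull the page source and list any media-looking links
wget -q -O - 'http://www.example.com/full-episode-page' | grep -Eo '(rtsp|http)://[^"]*'

# hit a candidate link as two different clients and compare response headers
wget -S --spider --user-agent='Mozilla/5.0 (Windows NT 5.1)' 'http://media.example.com/clip'
wget -S --spider --user-agent='NSPlayer/10.0' 'http://media.example.com/clip'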





 Thanks,

 Kevin



[squid-users] help with external_acl_type for php auth

2009-12-29 Thread John Peterson


Still having problems using the external_acl_type command. Can someone point me 
in the right direction? I have some example code that was working with the 
regular auth_param basic, but I would like to use external_acl_type because it 
can call the program when needed; however, I'm not having any luck applying 
the code. Thanks for your help.


https_port 442 defaultsite=www.tucows.com accel vhost cert=/squid-cert5/regobie2-c.crt key=/squid-cert5/squid_key.pem

logfile_rotate 8

# Both cache pools go to the same server, but we want to control how people
# access the site via the acl lists. On port 443 they need a CAC, on 442 they
# can log in via the sql server.
# cache for server test.com

visible_hostname proxy
#auth_param basic program /usr/bin/php /usr/local/squid/libexec/squid_php_auth.php
#auth_param basic children 40
#auth_param basic realm proxy_auth
#auth_param basic credentialsttl 2 hours
external_acl_type MyAclHelper %LOGIN /usr/bin/php /usr/local/squid/libexec/squid_php_auth.php
acl proxyauth external MyAclHelper
#acl proxyauth proxy_auth REQUIRED

acl noport2 myport 443
#acl Auth proxy_auth REQUIRED
acl noport myport 442
# This acl is just assigning an acl name to the test.com location. We will use
# this acl name in the http_access section. We can also combine acl lists.

cache_peer www.tucows.com parent 80 0 no-query originserver login=PASS name=www.tucows.com
acl site3 dstdomain www.tucows.com
cache_peer_access www.tucows.com allow site3
#http_access allow site3
http_access allow site3 proxyauth
#http_access allow site3 Auth

#acl all src 0.0.0.0/0.0.0.0

http_access deny all
debug_options ALL,1 32,2
cache_effective_user squid
cache_effective_group squid
cache_access_log /usr/local/squid/var/logs/access.log
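
[For reference, a minimal sketch of the protocol an external_acl_type helper
must speak - one line of expanded format tokens (here %LOGIN) on stdin per
lookup, one OK/ERR verdict per line on stdout. It is sketched as a shell
script for brevity; the real squid_php_auth.php is not shown in the thread,
and the user check is a placeholder:]

#!/bin/sh
# read one lookup per line; the single token is the %LOGIN username
while read user; do
    # placeholder test - replace with a real lookup against the SQL user table
    if [ "$user" = "testuser" ]; then
        echo "OK"
    else
        echo "ERR"
    fi
done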


  


[squid-users] secure connection client -- squid

2009-12-29 Thread Eduardo Maia

Hello,

I use Squid 3 (squid-3.0-14.2mdv2009.1) with authentication on a 
Mandriva 2009. I need to secure the connection between the client (IE, 
Firefox...) and Squid. By default the password is Base64-encoded, which 
is effectively plain text. I need it encrypted.
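
[A quick illustration of the point - Basic credentials are only encoded, not
encrypted, so anyone on the wire can reverse them; the username/password pair
is made up:]

$ echo -n 'alice:secret' | base64
YWxpY2U6c2VjcmV0
$ echo 'YWxpY2U6c2VjcmV0' | base64 -d
alice:secret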



Can anyone help, or point me to any manuals/tutorials?

Thanks,
 Eduardo



RE: [squid-users] secure connection client -- squid

2009-12-29 Thread Diego
Hi,
You could find what you need with external_acl_type:
Start an SSL-enabled Apache on your Squid box.
Configure an external_acl_type for user authentication.
Request user/password in an HTTPS form. Develop a simple CGI 
to feed the DB used by external_acl_type with valid users.

Other options could be HTTP Digest or NTLM authentication.
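
[A minimal sketch of the Digest option, which keeps the password itself off
the wire; the helper path and password file location are assumptions that
vary by distribution:]

auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/digest_passwd
auth_param digest children 5
auth_param digest realm proxy
acl authusers proxy_auth REQUIRED
http_access allow authusers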

Salu2
Diego Amadey




[squid-users] Check your disk space?!

2009-12-29 Thread Heinz Diehl
Hi,

I'm getting this in cache.log:

[...]
2009/12/29 15:38:51| Store rebuilding is  2.7% complete
2009/12/29 15:38:51| diskHandleWrite: FD 13: disk write error: (28) No
space left on device
FATAL: Write failure -- check your disk space and cache.log
Squid Cache (Version 2.7.STABLE7): Terminated abnormally.
(squid)[0x7f0ce985a7e6]
(squid)(fatal+0x25)[0x7f0ce985acb5]
(squid)[0x7f0ce97f9fb7]
(squid)[0x7f0ce986526c]
(squid)[0x7f0ce98665e6]
(squid)(eventRun+0x144)[0x7f0ce97fe264]
(squid)(main+0x7a7)[0x7f0ce9829b57]
/lib64/libc.so.6(__libc_start_main+0xe6)[0x7f0ce7d9a586]
(squid)[0x7f0ce97c6c39]
CPU Usage: 0.228 seconds = 0.066 user + 0.162 sys
Maximum Resident Size: 50928 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:7804 KB
Ordinary blocks: 7729 KB  1 blks
Small blocks:   0 KB  0 blks
Holding blocks:  3880 KB  4 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  74 KB
Total in use:   11609 KB 99%
Total free:74 KB 1%

The hard disk has a capacity of 30 GB, and cache_dir is set to 20 GB.
This should be plenty of space.

 cache_dir aufs /var/cache/squid/cacheA 2 64 256

Does anybody know what the cause of this might be, and what I can do to
prevent squid from dying?

Thanks,
Heinz.



[squid-users] Re: Check your disk space?!

2009-12-29 Thread Heinz Diehl
On 29.12.2009, Heinz Diehl wrote: 

[...]

Forgot to mention:
the whole cache took around 10 GB of space on the hard disk when the error
occurred, so the hard disk cannot have been full.




Re: [squid-users] Re: Check your disk space?!

2009-12-29 Thread Jorge Armando Medina

Heinz Diehl wrote:
 Forgot to mention:
 the whole cache took around 10 GB of space on the hard disk when the error
 occurred, so the hard disk cannot have been full.

I got a similar error about no space left, and at that moment disc usage 
was 13 GB of a 30 GB cache disk. The problem was that the OS had mounted 
the partition read-only because of a bad filesystem. I repaired the 
filesystem, recreated the cache, and everything is OK now.



--
Jorge Armando Medina
Computación Gráfica de México
Web: http://www.e-compugraf.com
Tel: 55 51 40 72, Ext: 124
Email: jmed...@e-compugraf.com
GPG Key: 1024D/28E40632 2007-07-26
GPG Fingerprint: 59E2 0C7C F128 B550 B3A6  D3AF C574 8422 28E4 0632



RE: [squid-users] Re: Check your disk space?!

2009-12-29 Thread Dean Weimer
Check what your operating system reports on the disk volume; perhaps something 
else is being written to that disk.  I even made the mistake once of taking a 
snapshot of my cache volume for temporary backup purposes during testing and 
forgot to delete it.  Needless to say, once that server went live it ran out of 
disk space in a hurry, and it took me a little while to figure out where all 
that disk space went.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co




[squid-users] Re: Re: Check your disk space?!

2009-12-29 Thread Heinz Diehl
On 29.12.2009, Jorge Armando Medina wrote: 

 I got a similar error about no space left and at that moment disc
 usage was 13GB from 30GB cache disk. The problem was the OS mounted
 read only the partition because bad filesystem, I repaired the
 filesystem re create the cache and everything it is ok.

I think I finally found the cause: the filesystem is out of blocks.
Unfortunately, the btrfs formatter doesn't obey the --sectorsize option and
always defaults to 4096. That's f*cking annoying; btrfs has worked really
well for me over the last months and is fingerlickin' fast on my SSD. For
now, I have changed to XFS with a 512-byte block size, restored the
cache_dir, and squid is up and running again.

If this turns out not to be the solution, I'll report back here.
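
[A sketch of that recovery sequence, assuming the cache partition is /dev/sdb1
and the cache path from earlier in the thread; device and mount point are
placeholders:]

mkfs.xfs -f -b size=512 /dev/sdb1      # recreate the filesystem with 512-byte blocks
mount /dev/sdb1 /var/cache/squid
squid -z                               # let squid recreate the cache_dir structure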

Thanks to all who cared,
Heinz.




[squid-users] Issue with Digest and Number of Objects

2009-12-29 Thread Dusten Splan
Hi All,
  So I'm having an issue where squid will not cache more than 3843117
objects.  Also on this same box we are seeing the traffic dip every
time it rebuilds the digest file.


Sample of log file.

2009/12/29 04:15:21.745| storeDigestRebuildStart: rebuild #276
2009/12/29 04:15:21.745| storeDigestCalcCap: have: 38414447, want
38414447 entries; limits: [83180325, 103975384]
2009/12/29 04:15:21.745| storeDigestResize: 81437985 -> 83180325;
change: 1742340 (2%)
2009/12/29 04:15:21.745| storeDigestResize: small change, will not resize.
2009/12/29 04:15:21.761| storeDigestRebuildStep: buckets: 8388608
entries to check: 16777220
2009/12/29 04:15:36.125| storeDigestRebuildStep: buckets: 8388608
entries to check: 16777220
2009/12/29 04:15:50| Detected DEAD Sibling: mypeer01
2009/12/29 04:15:50| Detected DEAD Sibling: mypeer02
2009/12/29 04:15:50| Detected DEAD Sibling: mypeer03
2009/12/29 04:15:50.710| storeDigestRebuildStep: buckets: 8388608
entries to check: 16777220
2009/12/29 04:15:54.956| storeDigestRebuildFinish: done.
2009/12/29 04:15:55.010| storeDigestRewrite: start rewrite #276
2009/12/29 04:15:55.010| storeDigestRewrite: url:
http://example.com/squid-internal-periodic/store_digest key: D412EC3C5799ADE5A3CF229E9BD27A16
2009/12/29 04:15:55.040| storeDigestRewrite: entry expires on -1 (-1262078156)
2009/12/29 04:15:55.065| storeDigestSwapOutStep: size: 50898741
offset: 0 chunk: 4096 bytes
2009/12/29 04:15:55.079| storeDigestSwapOutStep: size: 50898741
offset: 4096 chunk: 4096 bytes
2009/12/29 04:15:55.088| storeDigestSwapOutStep: size: 50898741
offset: 8192 chunk: 4096 bytes
...
2009/12/29 04:15:55.372| storeDigestSwapOutStep: size: 50898741
offset: 50888704 chunk: 4096 bytes
2009/12/29 04:15:55.372| storeDigestSwapOutStep: size: 50898741
offset: 50892800 chunk: 4096 bytes
2009/12/29 04:15:55.372| storeDigestSwapOutStep: size: 50898741
offset: 50896896 chunk: 1845 bytes
2009/12/29 04:15:55.372| storeDigestRewriteFinish: digest expires at
-1 (-1262078156)
2009/12/29 04:16:02| Detected REVIVED Sibling: mypeer01
2009/12/29 04:16:02| Detected REVIVED Sibling: mypeer03
2009/12/29 04:16:02| Detected REVIVED Sibling: mypeer02


Then start again an hour later.

2009/12/29 05:15:54.958| storeDigestRebuildStart: rebuild #277


Thanks
  Dusten


Re: [squid-users] Re: Re: Check your disk space?!

2009-12-29 Thread Matus UHLAR - fantomas

On 29.12.09 17:46, Heinz Diehl wrote:
 I think I finally found the cause: the filesystem is out of blocks.
 Unfortunately, the btrfs formatter doesn't obey the --sectorsize option and
 always defaults to 4096. That's f*cking annoying; btrfs has worked really
 well for me over the last months and is fingerlickin' fast on my SSD. For
 now, I have changed to XFS with a 512-byte block size, restored the
 cache_dir, and squid is up and running again.

If you are using 2.7, configure a 'coss' type cache_dir and put all small
files (e.g. under 128MB) there. It should be faster and save a lot of disk
space (and inode table entries too).
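
[A sketch of such a split, using squid 2.7 coss syntax; the paths and size
threshold are assumptions, and note that a coss cache_dir is a single stripe
file rather than a directory tree:]

# small objects (example threshold: up to 128 KB) go to the coss stripe
cache_dir coss /var/cache/squid/coss-stripe 1000 max-size=131072 block-size=512
# everything larger stays on the regular aufs cache_dir
cache_dir aufs /var/cache/squid/cacheA 19000 64 256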

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
My mind is like a steel trap - rusty and illegal in 37 states. 


Re: [squid-users] any work arounds for bug 2176

2009-12-29 Thread Brett Lymn
On Thu, Dec 17, 2009 at 10:10:12PM +1300, Amos Jeffries wrote:
 
 Which is off. Now I'm confused.
 

I know the feeling well :)

 
 I am a C coder and may have some time to do some debugging on this
 between Christmas and New Year, so, Amos, if you have any thoughts or
 hints as to where to go looking I can certainly have a stab at it.
 
 
 Thank you. Any help at all would be great.
 
 I *think* the relevant code is in src/client_side_reply.cc, but what to 
 look for is where I'm currently stuck. The keep_alive values resolved 
 things for you, Brett, but not for Bill.
 

src/client_side.c?  I think you are referring to a squid 3 file there,
at a guess, since it is C++.

 
 The variable nature of the threshold looks like some timing between 
 actions triggering the bug vs the rate at which Squid is sucking the 
 request in.
 

I have done some traces and it looks like the entire file is not being
sent to the remote server... something happens between squid and the
remote server that stops the sending short.  The client sends the
entire file to squid.
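
[A sketch of the kind of trace that shows this; the interface and origin
hostname are assumptions. Comparing the byte counts captured on each leg
shows whether squid or the server cuts the upload short:]

# client -> squid leg (squid listening on 3128)
tcpdump -i eth0 -s 0 -w client-side.pcap port 3128
# squid -> origin leg
tcpdump -i eth0 -s 0 -w server-side.pcap host origin.example.com and port 80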

 AFAIK popups only occur when the client gets sent two re-auth 
 challenges. Which in the un-patched Squid was caused by the first 
 half-authenticated link being closed by Squid before auth could 
 complete. Then the second link being challenged for more auth would 
 cause a popup.
 

Yes, that is what we were seeing unpatched.

 I think the next step is to find out what the difference between your 
 two setups is exactly:
  * squid.conf


Here is a lightly edited squid.conf - just removed some acls that
should not affect the upload:

http_port 3128
cache_mem  32 MB
maximum_object_size 16000 KB
cache_dir aufs /cache 15000 16 256
cache_dir aufs /cache2 15000 16 256
cache_access_log /cache/logs/access.log
cache_log /cache/logs/cache.log
cache_store_log none
pid_filename /var/run/squid.pid
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
request_header_max_size 1 KB
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 80 443 444 563
acl Safe_ports port 80 81 82 21 443 444 563 70 1025-65535
acl CONNECT method CONNECT
acl threat  dstdomain   /opt/local/squid/etc/block_list.txt
acl Safe_ports port 86
acl Safe_ports port 554
icp_access deny all
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny threat
http_access allow all
http_access deny all
http_reply_access allow all
icp_access allow all
miss_access allow all
always_direct allow all
never_direct deny all
snmp_access deny all
coredump_dir /cache/logs

  * headers between Squid and the POSTing app.
  * headers between Squid and the web server.
 

I have these and will send them off list.  As I mentioned before it
seems like the entire file is not being sent to the server for some
reason.  I don't understand enough to tell if this is because the
server is terminating the connection early or squid is sending
something it does not like.

 
 If, as you say, the patch solved the issue but you saw the same thing 
 earlier, then I suspect it's probably a squid.conf detail being overlooked.
 

If I understand Bill correctly I think we are both getting the same
thing.  I have not tested smaller files again since the patch but the
response to large file uploads is consistent with what I am seeing.

-- 
Brett Lymn




[squid-users] Questions in Squid source code

2009-12-29 Thread Manjusha Maddala
Hi all, 

I'm working with Squid-2.6 and am currently stuck on a bunch of questions.
I would appreciate a word from the Squid experts.

1. What is SwapDir? Is that the in-memory representation of the disk
cache? What does the in-memory representation of the disk cache look
like - does it follow the same format as the swap.state file?

2. What is StoreEntry? 

3. In squid/src/structs.h,

what do each of the entries in the below structure symbolize?   

struct _cacheSwap {
SwapDir *swapDirs;
int n_allocated;
int n_configured;
} cacheSwap;

when/where do they get initialized?

4. Each time squid -k rotate is done, I notice a new swap.state file
gets added along with a 0 byte swap.state.last-clean file. How is the
new swap.state file built? Is the in-memory hashtable/map dumped into
this file during rotate or is it built by crawling all the directories
in the disk cache and fetching the meta data of each file? 

5. Once the swap.state file is built, it keeps growing until the next
periodic squid rotate is kicked off. What are these new entries that get
appended to swap.state? I'm guessing each time a new webpage gets
cached, 
5.1) the in-memory table gets updated with the meta data for the new URI
5.2) one entry is made in store.log with a SWAPOUT tag
5.3) one entry is made in swap.state with the meta data for the new URI

Somewhere in between the two squid rotate jobs, the cache replacement
thread comes in and evicts the least recently used pages. The memory
hashtable gets updated accordingly, *but* the swap.state file doesn't.
Hence, over time the swap.state file grows and needs to be synced up with
the memory table. 

Did I get it right?

6. Is there any utility to read the swap.state file?

7. The swap.state file is maintained for loading the in-memory hashtable at
squid startup. When else is this file used?

8. A high-level pseudo code for the request processing algorithm as I
understand:

- Squid receives a GET request for URL
- Computes a hash for the URL and uses it as a key to pull the record
from its internal memory representation of the meta-data of all files on
the disk cache
- If a matching record is found, the refresh_pattern rules are applied
to determine if the content is fresh or stale and a TCP_HIT or
TCP_REFRESH_HIT/TCP_REFRESH_MISS get logged respectively
- If no record is found, it's a TCP_MISS

Have I missed something?


Thanks.



Re: [squid-users] Issue with Digest and Number of Objects

2009-12-29 Thread Amos Jeffries
On Tue, 29 Dec 2009 14:11:44 -0500, Dusten Splan dsp...@myyearbook.com
wrote:
 Hi All,
   So I'm having an issue where squid will not cache more than 3843117
 objects.

NP: there is a 2^24 (= 16,777,216) object limit per cache_dir entry.

  Also on this same box we are seeing the traffic dip every
 time it rebuilds the digest file.

Some delay is expected while the 50MB digest is generated.

I think the worst effect is probably that the peers are disappearing. This
will most likely increase the network lag time for each request.

 


[squid-users] content filter

2009-12-29 Thread Jeff Peng
Hello,

Is there a plugin for squid which can implement content filtering?
For example, if the webpage includes the keyword sex, the plugin
would remove it or replace it with ***.

Thanks.