Re: [squid-users] Re: Squid not caching. Please help.

2013-05-12 Thread PARAM KRISH
Hi, Thanks for your reply.

Here are my comments:

squid.conf was sent earlier, before you replied; please go through it and
comment on whether it looks fine or has any issues.

Showing us the config would help answer that one.
Both squid.conf and a clear description of how the F5s are forwarding
traffic to Squid and how the squid box is receiving that traffic would
be a good start.

Our domain wikid-dev.domain.com points to an F5 VIP, which upon receiving
requests does two things: 1) HTTP-to-HTTPS redirection when a request
arrives as plain HTTP, 2) SSL offloading, after which it forwards the
requests to the Squid server on port 80. I sent the Squid config earlier;
it is meant to forward requests to two cache_peers on their port 443 and
to cache the pages as well. Our F5 and Squid setup is simple and
straightforward; we have not done anything complex. Additionally, the F5
uses a OneConnect HTTP profile with X-Forwarded-For enabled, and I have
kept sourcehash on the cache_peer lines in squid.conf so that requests
from the same TCP connection go to a single cache_peer, avoiding the
session problems that blind round-robin can cause. I will touch on this a
little later, as I still need to get the client IP passed through from
the F5, which is not currently happening. Right now, though, I am worried
about the caching, which needs to be addressed first.

Why not? 443 and 80 are just ports. Both are just transferring TCP packets. 
When routing them there is no difference.
Does it make a difference whether the F5 forwards requests to 443 or to
80 on Squid? I tried 443 earlier and it never worked, but if the caching
problem is due to that, I will take your advice on it.

-PK

On Sun, May 12, 2013 at 12:34 PM, paramkrish mkpa...@gmail.com wrote:
 Additional info:

 Squid Cache: Version 3.2.3
 configure options:  '--prefix=/usr/local/squid' '--enable-ssl'
 '--enable-removal-policies=lru heap' '--enable-ltdl-convenience'

 squid.conf :

 cache_mgr engh...@domain.com
 cache_effective_user squid
 cache_effective_group squid

 visible_hostname wikid-dev.domain.com

 http_port 80 accel defaultsite=wikid-dev.domain.com vhost

 cache_peer 10.56.102.92 parent 443 0 login=PASS name=engwikiweb1dev
 sourcehash ssl sslflags=DONT_VERIFY_PEER
 cache_peer 10.56.102.93 parent 443 0 login=PASS name=engwikiweb2dev
 sourcehash ssl sslflags=DONT_VERIFY_PEER


 acl wikid-dev dstdomain wikid-dev.domain.com
 cache_peer_access engwikiweb1dev allow wikid-dev
 #cache_peer_access engwikiweb2dev allow wikid-dev
 cache_peer_access engwikiweb1dev allow all
 #cache_peer_access engwikiweb2dev allow all
 never_direct allow wikid-dev

 acl LUCENE url_regex lucene
 cache deny LUCENE

 http_access allow wikid-dev
 http_access allow all

 cache_replacement_policy heap LRU
 memory_replacement_policy heap LRU

 cache_dir ufs /wikiweb-dev/squid-cache/var/engwikilb1/cache 2048 16 64

 cache_swap_low 90
 cache_swap_high 95

 cache_mem 1024 MB
 maximum_object_size_in_memory 4096 KB
 maximum_object_size 32768 KB

 logfile_rotate  3
 forwarded_for truncate

 cache_store_log /wikiweb-dev/squid-cache/var/engwikilb1/logs/cache_store.log
 logformat squid %ts %06tr %a %Ss/%03Hs %st %rm %ru %un %Sh/%A %mt

 refresh_pattern -i \.css\?.*$ 1440 50% 2880 reload-into-ims
 refresh_pattern -i \.js\?.*$  1440 50% 2880 reload-into-ims
 refresh_pattern -i \.css$     1440 50% 2880 reload-into-ims
 refresh_pattern -i \.js$      1440 50% 2880 reload-into-ims
 refresh_pattern (/skins/)     1440 50% 2880 reload-into-ims
 refresh_pattern -i gen=css    1440 50% 2880 reload-into-ims
 refresh_pattern -i gen=js     1440 50% 2880 reload-into-ims
 refresh_pattern .             0    0%   0


 I would like to know if anything in my config prevents it from caching
 the pages, as I see only TCP_MISS entries in my access.log.

 -PK



 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-not-caching-Please-help-tp4659918p4659924.html
 Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: Squid not caching. Please help.

2013-05-12 Thread PARAM KRISH
Thanks Amos. We are already in production and can't really consider
upgrading without knowing the impact and benefits, for now.

On the other hand, I picked these TCP_HIT entries from my production box,
which gets a lot of use; there seem to be many of them:

1368107093 00 10.56.9.27 TCP_HIT/200 738 GET
http://wikid.domain.com/wikid/skins/common/images/Arr_.png -
HIER_NONE/- image/png
1368110324 01 10.56.9.27 TCP_HIT/200 5143 GET
http://wikid.domain.com/wikid/skins/common/images/spinner.gif -
HIER_NONE/- image/gif
1368110509 01 10.56.9.27 TCP_HIT/200 2169 GET
http://wikid.domain.com/wikid/skins/monobook/IE60Fixes.css? -
HIER_NONE/- text/css
1368110510 01 10.56.9.27 TCP_HIT/200 557 GET
http://wikid.domain.com/wikid/skins/monobook/external.png -
HIER_NONE/- image/png
1368113664 01 10.56.9.27 TCP_HIT/200 897 GET
http://wikid.eng.domain.com/wikid/skins/common/images/button_image.png
- HIER_NONE/- image/png
1368113664 00 10.56.9.27 TCP_HIT/200 639 GET
http://wikid.eng.domain.com/wikid/skins/common/images/button_hr.png -
HIER_NONE/- image/png
1368115000 00 10.56.9.27 TCP_HIT/200 475 GET
http://wikid.domain.com/lucene-wikid/skins/monobook/FF2Fixes.css -
HIER_NONE/- text/css
1368117520 01 10.56.9.27 TCP_HIT/200 1311 GET
http://wikid.domain.com/lucene-wikid/skins/monobook/lock_icon.gif? -
HIER_NONE/- image/gif
1368123877 01 10.56.9.27 TCP_HIT/200 4099 GET
http://wikid/wikid/skins/common/images/poweredby_mediawiki_88x31.png -
HIER_NONE/- image/png
1368130149 00 10.56.9.27 TCP_HIT/200 3998 GET
http://wikid.eng.domain.com/wikid/skins/common/images/poweredby_mediawiki_88x31.png
- HIER_NONE/- image/png

Basically, out of more than 8 lakh (800,000) lines in the production
access.log, here are the counts for each type of request it has
received:

TCP_CLIENT_REFRESH_MISS/200 :69
TCP_CLIENT_REFRESH_MISS_ABORTED/200 :1
TCP_HIT/200 :173
TCP_HIT_ABORTED/000 :45
TCP_HIT_ABORTED/200 :1
TCP_IMS_HIT/304 :9126
TCP_MEM_HIT/200 :16172
TCP_MEM_HIT_ABORTED/200 :42
TCP_MISS/200 :282417
TCP_MISS/206 :200
TCP_MISS/301 :3110
TCP_MISS/302 :383503
TCP_MISS/304 :52580
TCP_MISS/400 :10
TCP_MISS/403 :14
TCP_MISS/404 :42825
TCP_MISS/405 :221
TCP_MISS/500 :2
TCP_MISS/502 :3
TCP_MISS/503 :19
TCP_MISS_ABORTED/000 :5559
TCP_MISS_ABORTED/200 :2882
TCP_MISS_ABORTED/206 :1
TCP_MISS_ABORTED/302 :2
TCP_MISS_ABORTED/404 :2
TCP_REFRESH_FAIL/200 :1
TCP_REFRESH_MODIFIED/200 :158
TCP_REFRESH_MODIFIED/206 :13
TCP_REFRESH_MODIFIED/404 :2
TCP_REFRESH_UNMODIFIED/200 :31737
TCP_REFRESH_UNMODIFIED/206 :144
TCP_REFRESH_UNMODIFIED/304 :10355
TCP_REFRESH_UNMODIFIED_ABORTED/200 :14
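
To double-check counts like these, field 4 of this access.log format
(the %Ss/%03Hs code from the logformat line) can be tallied with a short
awk pipeline. This is only a sketch: the three log lines in the
here-document are samples copied from this thread, and the real log
would be fed in via redirection instead.

```shell
# Tally the result-code/status field (field 4, e.g. TCP_MISS/200).
# For a real run, replace the here-document with:  < /path/to/access.log
awk '{ tally[$4]++ } END { for (k in tally) print k, tally[k] }' <<'EOF' | sort
1368334462 000179 10.56.101.197 TCP_MISS/200 4621 GET http://wikid-dev.eng.domain.com/ - SOURCEHASH_PARENT/engwikiweb1dev text/html
1368334492 000242 10.56.101.197 TCP_MISS/200 4621 GET http://wikid-dev.eng.domain.com/ - SOURCEHASH_PARENT/engwikiweb1dev text/html
1368107093 000000 10.56.9.27 TCP_HIT/200 738 GET http://wikid.domain.com/wikid/skins/common/images/Arr_.png - HIER_NONE/- image/png
EOF
```

For the sample lines this prints "TCP_HIT/200 1" and "TCP_MISS/200 2".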

Do you see anything fishy in here?

Here is what the squid.conf of the production box looks like; it runs
the same Squid version.


cache_mgr engh...@domain.com
cache_effective_user squid
cache_effective_group squid

visible_hostname wikid.domain.com


http_port 80 accel defaultsite=wikid.domain.com vhost

cache_peer 10.56.9.38 parent 443 0 originserver login=PASS
name=engwikiweb1 sourcehash ssl sslflags=DONT_VERIFY_PEER
cache_peer 10.56.9.39 parent 443 0 login=PASS name=engwikiweb2
sourcehash ssl sslflags=DONT_VERIFY_PEER


acl wikid dstdomain wikid.domain.com
cache_peer_access engwikiweb1 allow wikid
cache_peer_access engwikiweb2 allow wikid
cache_peer_access engwikiweb1 allow all
cache_peer_access engwikiweb2 allow all
never_direct allow wikid

http_access allow wikid
http_access allow all

cache_replacement_policy heap LRU
memory_replacement_policy heap LRU

cache_dir ufs /wikiweb/squid-cache/var/cache/engwikilb1 2048 16 64

cache_swap_low 90
cache_swap_high 95

cache_mem 1024 MB
maximum_object_size_in_memory 4096 KB
maximum_object_size 32768 KB

logfile_rotate  3
forwarded_for on

cache_store_log none
logformat squid %ts %06tr %a %Ss/%03Hs %st %rm %ru %un %Sh/%A %mt

refresh_pattern -i \.css\?.*$ 1440 50% 2880 reload-into-ims
refresh_pattern -i \.js\?.*$  1440 50% 2880 reload-into-ims
refresh_pattern -i \.css$     1440 50% 2880 reload-into-ims
refresh_pattern -i \.js$      1440 50% 2880 reload-into-ims
refresh_pattern (/skins/)     1440 50% 2880 reload-into-ims
refresh_pattern -i gen=css    1440 50% 2880 reload-into-ims
refresh_pattern -i gen=js    1440 50% 2880 reload-into-ims
refresh_pattern .             0    0%   0
===


Do you still see something wrong with this Squid version, or anything in
its config file that we can tweak?

-PK

On Mon, May 13, 2013 at 6:10 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 12/05/2013 7:04 p.m., paramkrish wrote:

 Additional info:

 Squid Cache: Version 3.2.3
 configure options:  '--prefix=/usr/local/squid' '--enable-ssl'
 '--enable-removal-policies=lru heap' '--enable-ltdl-convenience'

 squid.conf :

 cache_mgr engh...@domain.com
 cache_effective_user squid
 cache_effective_group squid

 visible_hostname wikid-dev.domain.com

 http_port 80 accel defaultsite=wikid-dev.domain.com vhost

 cache_peer 10.56.102.92 parent 443 0 login=PASS 

[squid-users] Squid not caching. Please help.

2013-05-11 Thread PARAM KRISH
Hello

I have Squid running on port 80 with the cache_peers on 443.

I see only TCP_MISS/200 or TCP_MISS_ABORTED/200 entries, so I doubt
whether Squid is really caching any of the pages.

1368334462 000179 10.56.101.197 TCP_MISS/200 4621 GET
http://wikid-dev.eng.domain.com/ - SOURCEHASH_PARENT/engwikiweb1dev
text/html
1368334492 000242 10.56.101.197 TCP_MISS/200 4621 GET
http://wikid-dev.eng.domain.com/ - SOURCEHASH_PARENT/engwikiweb1dev
text/html


Analysing squid-internal-mgr shows something like the following, which
gives me the feeling that it caches only .png files:


KEY A4C05546F054C963F60E83AB7F691CE7
STORE_OK  IN_MEMORY SWAPOUT_NONE PING_NONE
SPECIAL,CACHABLE,VALIDATED
LV:1368333867 LU:1368333867 LM:1365163353 EX:-1
0 locks, 0 clients, 0 refs
Swap Dir -1, File 0X
GET http://wikid-dev.domain.com/squid-internal-static/icons/silk/film_key.png
inmem_lo: 0
inmem_hi: 1053
swapout: 0 bytes queued

KEY 2D01672F31BE2A63BF2563FC37C432C8
STORE_OK  NOT_IN_MEMORY SWAPOUT_DONE PING_NONE
REVALIDATE,CACHABLE,DISPATCHED,VALIDATED
LV:1368265065 LU:1368265065 LM:1368194144 EX:1370857065
0 locks, 0 clients, 1 refs
Swap Dir 0, File 0X09

KEY 09846EEBF4E045567A5FE301F0F24C6C
STORE_OK  IN_MEMORY SWAPOUT_NONE PING_NONE
SPECIAL,CACHABLE,VALIDATED
LV:1368333867 LU:1368333867 LM:1365163353 EX:-1
0 locks, 0 clients, 0 refs
Swap Dir -1, File 0X
GET http://wikid-dev.domain.com/squid-internal-static/icons/silk/picture.png
inmem_lo: 0
inmem_hi: 824
swapout: 0 bytes queued
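
To check the hunch that only the icons are being cached, the object list
from the cache manager can be grouped by file extension. This is only a
sketch: the two GET lines below are copied from the report above, and
against a live Squid the input would come from `squidclient mgr:objects`
instead (squidclient ships with Squid; the report's exact formatting can
vary by version).

```shell
# Group cached-object URLs (the "GET <url>" report lines) by extension.
# Live input would come from:  squidclient -h localhost -p 80 mgr:objects
grep '^GET ' <<'EOF' | awk -F. '{ ext[tolower($NF)]++ } END { for (e in ext) print e, ext[e] }'
GET http://wikid-dev.domain.com/squid-internal-static/icons/silk/film_key.png
GET http://wikid-dev.domain.com/squid-internal-static/icons/silk/picture.png
EOF
```

For the sample input this prints "png 2". If almost everything a live
run prints is png/gif/css, that would confirm the HTML pages themselves
are not being stored.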

The cache_store_log shows:
1368334342.699 RELEASE -1  FCD06E1B1AB3B6BEF0BC8AD5D4F04D9C
200 1368334342 1368194177 1368334342 text/html -1/-505 GET
http://wikid-dev.eng.domain.com/
1368334372.653 RELEASE -1  2D355F1E8F50FB487C85077838F16C4F
200 1368334372 1368194177 1368334372 text/html -1/-505 GET
http://wikid-dev.eng.domain.com/
1368334402.562 RELEASE -1  4BA7AD9FCACE409B5505597981969600
200 1368334402 1368194177 1368334402 text/html -1/-505 GET
http://wikid-dev.eng.domain.com/
1368334432.646 RELEASE -1  81F96406F0A22CE4AE629ABFBBEA512A
200 1368334432 1368194177 1368334432 text/html -1/-505 GET
http://wikid-dev.eng.domain.com/
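
Those repeated RELEASE entries for the front page suggest the reply
itself carries headers that forbid caching. One way to spot the usual
culprits in a captured reply header is sketched below; note the header
block in the here-document is made up for illustration (MediaWiki
typically sends Cache-Control: private plus a session cookie to clients
holding a session, which Squid will not store).

```shell
# Count reply-header fields that commonly stop Squid from storing a response.
# Real headers could be captured with:  curl -sI http://wikid-dev.eng.domain.com/
grep -icE '^(cache-control:.*(private|no-cache|no-store)|set-cookie:|pragma: *no-cache)' <<'EOF'
HTTP/1.1 200 OK
Date: Sun, 12 May 2013 06:14:22 GMT
Cache-Control: private, must-revalidate, max-age=0
Set-Cookie: wikidb_session=0123456789abcdef; path=/; HttpOnly
Content-Type: text/html; charset=UTF-8
EOF
```

A non-zero count (here 2, from the Cache-Control and Set-Cookie lines)
is a strong hint that the origin is telling Squid not to store the page.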


Am I missing something in my config, or is everything OK?

Can someone clarify further, please?

First of all, I am running F5s (load balancers) in front of Squid, so I
can't have Squid listen on 443 for the F5s to forward requests to. That
is why the F5 at 443 forwards requests to Squid at 80, which then goes
to two Apaches at 443. Functionally this works fine, but when I check
the logs I get the feeling that the cache is not being used in Squid.
Any comments?

-PK


Re: [squid-users] Re: Squid round-robin to 2 Apache's

2013-02-12 Thread PARAM KRISH
Hi, just to update: it worked fine. The problem was with the redirect
login CGI, which had permission issues on the other server and hence did
not fail over earlier. It now listens only on 443 and works beautifully.
Thanks, Amos, for your help.

On Wed, Feb 6, 2013 at 10:16 AM, paramkrish mkpa...@gmail.com wrote:
 Dear Squid Users:

 Do you see any gross difference in my setup? What I'm trying is
 something very basic, in my opinion: just two Apaches running on 8080
 behind Squid, plus an http-to-https redirection. While everything works
 great, I am concerned that Squid does not detect a failed cache_peer
 parent and fail the request over to the other node. What could possibly
 be missing in the configs, or is this some sort of bug when Squid is
 made to work with 443/SSL?

 Please guide me, as I am completely stalled.
 Thanks a lot for the wonderful work you have been doing.

 PK



 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-round-robin-to-2-Apache-s-tp4658362p4658394.html
 Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid round-robin to 2 Apache's

2013-02-04 Thread PARAM KRISH
Hi Amos, thanks so much for helping out. Here are my replies:

On Mon, Feb 4, 2013 at 1:39 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 4/02/2013 8:22 p.m., PARAM KRISH wrote:

 Hello Squid Users

 I am in need of some help. I believe this must have been done by many
 people already, but I could not find the right document to understand
 it well.

 Setup:

 Squid v3.2.3 on a RHEL 6 64-bit VM (say server1)
 Apache 2.2.15 on RHEL 6 64-bit VMs (running on server1 and server2).

 Requirement:

 1. All http traffic should automatically redirect to https without a
 change in the URL (except the http -> https scheme)
 2. I have two URLs, A.B.C.D and A.C.D, both pointing to the server1 IP;
 both must work well.
 3. When the Apache on server1 dies, all traffic should continue to work
 via the Apache on server2. All traffic must go through Squid for
 reverse proxying/caching.

 What have I done so far?

 1. Apache vhosts (8080 & 443): the 8080 vhosts do a RewriteRule to https
 2. Squid listens on 80 and 3123 (for SSL), with these entries in its
 config.


 Why 3123 (er actually 3128)? having different ports on the public-facing
 proxy to what the backends are using is a common cause of trouble because
 virtual-ports in HTTP are a tricky concept. Speaking of which...

 Also 3128 is a well-known proxy port. Using it for things other than normal
 forward proxy traffic is another common cause of trouble. Usually from the
 viral/worm side of the Internet.

3128 is what I am using; sorry, 3123 was a typo. So what port do you
recommend if not this one? If I make Squid listen on the same port as
Apache, then every time I restart Apache or Squid, one of them complains
that the port is already in use.

 https_port 3128 accel cert=/usr/local/apache2/conf/folder/certnew.cer
 key=/usr/local/apache2/conf/folder/a.b.c.d.key defaultsite=a.b.c.d
 vhost


 This basic style SSL config does not support virtual hosting. You require
 the certificate generator options to create certificates for each virtual
 hosted FQDN on demand.
 YHBW: This is not a very commonly used configuration (yet) and we have had
 little feedback as to how well (or not) it works.

 NP: if you only have the one domain FQDN being hosted, OR multiple domains
 all as aliases of one main domain, ... drop the vhost option.

Our servers are for internal use and we use only self-signed
certificates. The domains are local and are correctly set up in DNS.

 http_port 80 accel defaultsite=a.b.c.d vhost

 cache_peer server1.b.c.d parent 8080 0 originserver round-robin
 cache_peer server2.b.c.d parent 8080 0 originserver round-robin

 What I understand is: requests coming to port 80 (Squid) are forwarded
 to either of the Apaches on 8080, where they get rewritten to 443 and
 processed.


 Erm, all requests arriving in both port 80 and port 3128 to your Squid are
 sent to the Apaches port 8080 as plain-HTTP traffic. There is no distinction
 between HTTP and HTTPS in a reverse-proxy. HTTPS is just a wire-level
 TLS/SSL wrapping around regular HTTP after all. Once the reverse-proxy
 https_port terminates the TLS/SSL it is gone.

 I think what you are actually seeing is that the Apache are handling the
 traffic and have been configured to ensure all public URLs are
 https://a.b.c.d:3128/, yes?

There are no public URLs here; all links are internal and available only
to internal employees.
My requirement is that all http requests to those two URLs (a.b.c.d and
a.c.d) get redirected (or rewritten) to https automatically.
I was thinking that having Squid do reverse proxying on 443 would let it
serve from cache the next time we get a request, since this requirement
is for MediaWiki. Are you saying this won't work well, or is useless?

 To both simplify and speed this all up:
  * use 443 as Squid listening https_port.
  * make the Apache listen on port 443 on their own IPs (can be internal
 private IPs).
  * configure the Apaches to know that a.b.c.d is their public FQDN.
  * remove any port re-writing config in the Apaches

I am not sure I understand that clearly.
If the Apaches do no rewriting and listen only on 443, how (and by what)
does the http-to-https redirect happen in that setup?
Can you cover that part too, please?
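
For what it's worth, one approach I have seen suggested on this list is
to let Squid itself issue the redirect, so the Apaches never see plain
http at all. This is only a sketch from memory, not a tested config; the
port name is made up, and the deny_info %-macros used here (%H for the
host, %R for the URL path) should be checked against
squid.conf.documented for 3.2 before relying on them:

```
# Plain-http arrivals on the named accel port get bounced to https
http_port 80 accel name=plainhttp defaultsite=a.b.c.d vhost

acl from_http myportname plainhttp
deny_info 301:https://%H%R from_http
http_access deny from_http
```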

Also, when I tried Squid listening on 443 with cache_peers of
server1:443 and server2:443, I got something like this when apache1 went
down:

Bad Request

Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
Hint: https://server2.b.c.d/

The hinted link works fine when I click it, but I obviously want this to
be handled automatically rather than letting the user see this happen
when the server1 Apache is dead.

Thanks..
PK



 This works well when both Apache's are running fine. When i bring down
 Apache1, It breaks. Apache2 up/down does not make a difference as i
 noticed though they are effectively used when both

[squid-users] Squid round-robin to 2 Apache's

2013-02-03 Thread PARAM KRISH
Hello Squid Users

I am in need of some help. I believe this must have been done by many
people already, but I could not find the right document to understand it
well.

Setup:

Squid v3.2.3 on a RHEL 6 64-bit VM (say server1)
Apache 2.2.15 on RHEL 6 64-bit VMs (running on server1 and server2).

Requirement:

1. All http traffic should automatically redirect to https without a
change in the URL (except the http -> https scheme)
2. I have two URLs, A.B.C.D and A.C.D, both pointing to the server1 IP;
both must work well.
3. When the Apache on server1 dies, all traffic should continue to work
via the Apache on server2. All traffic must go through Squid for reverse
proxying/caching.

What have I done so far?

1. Apache vhosts (8080 & 443): the 8080 vhosts do a RewriteRule to https
2. Squid listens on 80 and 3123 (for SSL), with these entries in its
config.

https_port 3128 accel cert=/usr/local/apache2/conf/folder/certnew.cer
key=/usr/local/apache2/conf/folder/a.b.c.d.key defaultsite=a.b.c.d
vhost

http_port 80 accel defaultsite=a.b.c.d vhost

cache_peer server1.b.c.d parent 8080 0 originserver round-robin
cache_peer server2.b.c.d parent 8080 0 originserver round-robin

What I understand is: requests coming to port 80 (Squid) are forwarded
to either of the Apaches on 8080, where they get rewritten to 443 and
processed.

This works well when both Apaches are running fine. When I bring down
Apache1, it breaks; bringing Apache2 up or down makes no difference.
Both are effectively used when both are running, as I can see from their
logs. So they load-balance well in the normal state, but Squid can't
take a dead Apache1 out of its critical path.

Can you guys help me understand how to set this up? I'm not sure whether
the http-to-https redirect should be done by Apache, by Squid, or by a
redirector.pl within Squid. Which works best/fastest?
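
If the redirect ends up on the Apache side, it doesn't need mod_rewrite
at all: a minimal redirect-only vhost on port 80 is enough. A sketch
under the naming from this thread (the vhost itself is hypothetical, not
from anyone's posted config):

```apache
# Hypothetical redirect-only vhost: everything arriving on :80 goes to https.
<VirtualHost *:80>
    ServerName a.b.c.d
    # mod_alias Redirect prefix-matches "/" and appends the rest of the
    # request path, so the URL is preserved apart from the scheme.
    Redirect permanent / https://a.b.c.d/
</VirtualHost>
```

The a.c.d name would get a matching vhost of its own, so the hostname
the client asked for is preserved across the redirect.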

Any help or reference doc/config would be much appreciated. I don't need
people to reach server1 or server2 directly on their 8080/443 ports; all
traffic must be routed through the domain URLs a.b.c.d or a.c.d ONLY.

Thanks a lot.