Re: [squid-users] https questions
On lör, 2008-06-07 at 09:58 +0800, Ken W. wrote: 2008/6/7 Henrik Nordstrom [EMAIL PROTECTED]: But you are quite likely to run into issues with the server sending out http:// URLs in its responses unless the server has support for running behind an SSL frontend. See for example the front-end-https cache_peer option. Thanks Henrik. Under my setting, can squid work correctly for this flow? clients --https-- squid --http-- webserver webserver --http-- squid --https-- clients Again, yes, provided your web server application has support for being used in this manner.
Re: [squid-users] https questions
2008/6/7 Henrik Nordstrom [EMAIL PROTECTED]: Thanks Henrik. Under my setting, can squid work correctly for this flow? clients --https-- squid --http-- webserver webserver --http-- squid --https-- clients Again, yes, provided your web server application has support for being used in this manner. Thanks Henrik. We use a standard apache-2.0.59 installation for our web servers. We don't have an https port open on Apache. Can Squid still work as the https frontend in this case? Thanks again.
[squid-users] How to bypass banned sites
My ISP banned most sites. So, I make them accessible using YourFreedom, but it generates a lot of traffic. As more users join, the traffic grows and everything gets very slow. When the connection becomes slow, YourFreedom does not work. Then I tried to use Ultra Surf, but it does not work when the connection is slow, or sometimes at all. I googled that ssh+putty can offer this also. So, I tried, but I cannot ... I think anyone of you can help me. Any comments are welcomed. Mr. Crack007
Re: [squid-users] How to bypass banned sites
Mr Crack wrote: My ISP banned most sites. Any comments are welcomed Change ISP or move from China .. -- Atenciosamente / Sincerely, Leonardo Rodrigues Solutti Tecnologia http://www.solutti.com.br Minha armadilha de SPAM, NÃO mandem email [EMAIL PROTECTED] My SPAMTRAP, do not email it
Re: [squid-users] RE : [squid-users] performances ... again
On 06.06.08 18:14, GARDAIS Ionel wrote: I will try the host != some.url.com part. For the isInNet() trick, the problem is that it induces a DNS resolution call for every request to compare with the IP/mask parameters. Don't you have a local resolver? Do you think that your browser doesn't cache the lookup itself? -- Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. I feel like I'm diagonally parked in a parallel universe.
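To make the isInNet() cost concrete, here is a minimal PAC sketch (the proxy address, excluded host, and subnet are hypothetical) that memoizes dnsResolve() so each host is looked up at most once per browser session. dnsResolve() and isInNet() are the standard PAC built-ins, assumed to be provided by the browser:

```javascript
// PAC sketch: cache DNS results so isInNet() does not trigger a
// fresh resolver call for every single request.
var dnsCache = {};

function cachedResolve(host) {
  if (!(host in dnsCache)) {
    dnsCache[host] = dnsResolve(host); // PAC built-in, browser-provided
  }
  return dnsCache[host];
}

function FindProxyForURL(url, host) {
  // Go direct for the excluded host and for addresses inside the LAN;
  // everything else goes through the (hypothetical) proxy.
  if (host === "some.url.com") return "DIRECT";
  if (isInNet(cachedResolve(host), "10.0.0.0", "255.0.0.0")) return "DIRECT";
  return "PROXY proxy.example.com:3128";
}
```

Whether this helps in practice depends on whether the browser and the local resolver already cache lookups, as the reply above points out.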
Re: [squid-users] Squid Performance, VMware vs Physical Machine
On 06.06.08 18:35, Brodsky, Jared S. wrote: Getting ready to roll out a squid server in my organization after doing about a month of testing on it on a virtual machine in VMware server. Is running squid in a virtual environment recommended, or is having a dedicated box a safer way to go? I'll have about 30 users that hit YouTube and other streaming media sites throughout the day and I am hoping to cache a lot of it since many watch the same ones more than once. I do, however, have a box set aside that I can use which is a P4 3 GHz w/ 1GB RAM and was going to drop in two 10,000 RPM drives for the cache. I know that the squid wiki says JBOD is preferable, however is RAID 0 a bad way to go? When you lose one disk in RAID0 (or a system-level JBOD), you'll lose the whole cache. If it happens with multiple cache directories (Squid's use of JBOD), you'll only lose the part on that disk... -- Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. You have the right to remain silent. Anything you say will be misquoted, then used against you.
Re: [squid-users] https questions
Hello members, My squid's config for https looks as below: http_port 80 accel vhost https_port 443 accel vhost cert=/usr/local/squid/etc/ssl/server.cert key=/usr/local/squid/etc/ssl/server.key cache_peer 12.34.56.78 parent 80 0 no-query front-end-https=auto originserver name=origin_1 acl service_1 dstdomain .abc.com cache_peer_access origin_1 allow service_1 When I access squid at https://www.abc.com I get no success, and cache.log shows: 2008/06/07 14:37:02| httpsAccept: Error allocating handle: error:0906A068:PEM routines:PEM_do_header:bad password read 2008/06/07 14:37:02| httpsAccept: Error allocating handle: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib 2008/06/07 14:37:02| httpsAccept: Error allocating handle: error:140BA0C3:SSL routines:SSL_new:null ssl ctx This is the info for my squid: Squid Cache: Version 3.0.STABLE6 configure options: '--prefix=/usr/local/squid3.0' '--disable-carp' '--enable-async-io=128' '--enable-removal-policies=heap lru' '--disable-wccp' '--disable-wccpv2' '--enable-kill-parent-hack' '--disable-snmp' '--disable-htcp' '--disable-poll' '--disable-select' '--disable-ident-lookups' '--with-aio' '--with-large-files' '--with-filedescriptors=51200' '--enable-ssl' I'm running it under RedHat Linux AS5. Please help, thanks. --Ken
Re: [squid-users] Squid Performance, VMware vs Physical Machine
Matus UHLAR - fantomas wrote: On 06.06.08 18:35, Brodsky, Jared S. wrote: Getting ready to roll out a squid server in my organization after doing about a month of testing on it on a virtual machine in VMware server. Is running squid in a virtual environment recommended, or is having a dedicated box a safer way to go? I'll have about 30 users that hit YouTube and other streaming media sites throughout the day and I am hoping to cache a lot of it since many watch the same ones more than once. I do, however, have a box set aside that I can use which is a P4 3 GHz w/ 1GB RAM and was going to drop in two 10,000 RPM drives for the cache. I know that the squid wiki says JBOD is preferable, however is RAID 0 a bad way to go? When you lose one disk in RAID0 (or a system-level JBOD), you'll lose the whole cache. If it happens with multiple cache directories (Squid's use of JBOD), you'll only lose the part on that disk... At which point you better hope that: - none of the top two levels of the cache directory have been affected - none of the content files have been truncated/broken/corrupted or you lose varying amounts of cache stability anyway and have to erase and rebuild, or at best risk bad content being given out. Which in my experience usually happens on the largest, longest-lived files. Most types of RAID are of no net benefit with current Squid (but okay if you have no choice AND it's hardware-driven). Amos -- Please use Squid 2.7.STABLE1 or 3.0.STABLE6
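A sketch of the per-disk cache_dir layout being recommended over RAID 0 (the mount points and sizes here are hypothetical, not from the original posts): each disk gets its own cache_dir, so losing one disk only costs that directory's objects.

```
# squid.conf sketch: one cache_dir per physical 10,000 RPM disk
# format: cache_dir aufs <path> <MB> <L1 dirs> <L2 dirs>
cache_dir aufs /cache1 30000 16 256
cache_dir aufs /cache2 30000 16 256
```

On a disk failure, only the affected cache_dir needs to be erased and rebuilt, which is the failure mode Amos describes above.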
[squid-users] Yet another Invalid URL question
Am trying to set up a separate box (with two NICs) as a firewall/filter using the following configuration. When Squid was running without Dansguardian, and I connected a laptop to the second NIC and pointed the laptop to Squid, everything worked fine. With the iptables set and Dansguardian running (and the laptop configured just normally), however, when I enter anything into the laptop's browser, I get an error message, from Squid, saying the URL is invalid, and the URL that it says it is trying to use is the URL I typed, without the domain info. Thus, if I ask for www.google.com/ it shows / instead. If I try something like search.lycos.com/?query=test&x=0&y=0 it shows /?query=test&x=0&y=0 instead. I have seen some chatter about this type of thing on Squid's mail list, but, again, the Squid-only operation did not encounter this problem, so it could be something to do with Dansguardian, but I'm asking here, as well as in the Dansguardian group, in case anyone here could be of assistance (and because nobody has responded, at all, in the Dansguardian group). The configuration I am trying to use is listed below, but here are a couple of notes, before I just paste it all there. First, the main set of instructions I was trying to use for this were from: http://www.spencerstirling.com/computergeek/dansguardian.html However, it appears to be a bit older, and mentions Squid options that no longer appear to be valid (e.g. httpd_accel_host virtual). It also says to use http_port 127.0.0.1:3128 but the discussion of the invalid URL problem in the Squid mail list suggests http_port 127.0.0.1:3128 transparent instead. Thus, I'm sure that the mishmash of settings I am using is, somehow, the cause of the problem, but I lack the networking experience to tell just WHERE the problem occurs.
Here, then, is the rest of the configuration information: Linux: Debian 4.0 r3 i386 iptables: 1.3.6 Squid: 3.0.PRES Dansguardian: 2.8.0.6-antivirus-6.4.4.1-2 (Squid and Dansguardian installed via Synaptic Package Manager) Since the configuration files are large enough that their full text would cause the list server to truncate this message, I have posted them to MediaFire. Here they are: firewall.sh: http://www.mediafire.com/?zvb2gjj99d9 dansguardian.conf: http://www.mediafire.com/?gucyirxttdb squid.conf: http://www.mediafire.com/?22txgzjdcez Thanks!
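For reference, a minimal sketch of the interception setup being described, under the assumption that DansGuardian listens on port 8080 and forwards to Squid on 3128 (the port numbers, interface name, and chain here are hypothetical; the real values live in the linked firewall.sh and squid.conf). The relative-URL symptom described above is typically what Squid reports when intercepted traffic reaches a port that lacks the transparent flag, because an intercepted browser sends only the path, not the full URL:

```
# squid.conf: accept intercepted traffic on the loopback, transparently
http_port 127.0.0.1:3128 transparent

# firewall.sh: redirect LAN web traffic into DansGuardian
# (eth1 = the second NIC the laptop is plugged into -- hypothetical name)
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 8080
```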
Re: [squid-users] Transparent proxy with MSN
2008/6/7 Amos Jeffries [EMAIL PROTECTED]: Sergio Belkin wrote: 2008/6/5 Amos Jeffries [EMAIL PROTECTED]: Sergio Belkin wrote: Hi, I'd like to know if it's possible to allow MSN usage through a transparent proxy. Possible. But not always easy. It depends highly on the type of network you have set up (a level of NAT between the client and squid kills it fairly well). The schema is as follows: A user connects with his notebook via an Access Point which has OpenWRT installed. OpenWRT has DNAT rules: iptables -t nat -A prerouting_rule -i br0 -p tcp --dport 80 -j DNAT --to-destination $SQUID_IP:8080 iptables -t nat -A prerouting_rule -i br0 -p tcp --dport 1863 -j DNAT --to-destination $SQUID_IP:8080 That NAT happening on the AP would break squid transparency. The AP needs to do policy-routing to pass only the port-80 packets to the squid box. http://wiki.squid-cache.org/ConfigExamples/LinuxPolicyRouteWebTraffic The NAT part appears to be right, but the Squid box should be the one doing it. So But why is web browsing working fine? There is something about authentication too with MSN, where can I read about it? Full TPROXY may be needed for that one. (I've tried the last one and even redirecting 1050, but I'm not sure if that's right) Users can browse the web with no problems using the transparent proxy (except SSL sites of course) but they fail to use MSN. MSN is _supposed_ to have automatic failovers to port 80 that use HTTP. But that depends on what other paths it can find through your network first. Amos -- Please use Squid 2.7.STABLE1 or 3.0.STABLE6 -- -- Open Kairos http://www.openkairos.com Watch More TV http://sebelk.blogspot.com Sergio Belkin -
[squid-users] Blocking web sites
Hi, why is an additional tool like SquidGuard or DansGuardian necessary for squid to block web sites? Is there an issue with performance? Thanks in advance
[squid-users] Cacheboy-1.1 release, testers wanted!
Hi everyone, I've just released Cacheboy-1.1, which is essentially an almost-current snapshot of Squid-2.HEAD with a whole lot of code reorganisation and a couple of minor new features. It certainly seems stable enough in local testing and limited third-party testing. I'll see if I can get permission from those who are testing it to drop names, but it has seen some reasonably busy production loads and, as I said, it seems stable enough. I'd like to get this stuff tested more thoroughly in production environments before I begin trying to roll these changes back into Squid-2.HEAD. My work to date is mostly code reorganisation in preparation for larger-scale changes. I've done about as much code shuffling as I can do in this first pass without beginning much more intrusive code changes to fix various silly choices made in the past. I'd like this code to be tested out first as widely as possible before I begin my next set of slightly more intrusive changes. This should mostly be a drop-in replacement for those running Squid-2 under UNIX. I haven't yet done any compatibility work to fix it to compile outside of Linux, FreeBSD and Solaris 10. No, you probably won't notice any functionality or performance differences between Squid-2.7 and Cacheboy. Well, unless you're running Solaris - there's an implementation of event ports for the network IO. ./configure --help has some more information about that. Performance and feature work will come (much) later; too much reorganisation needs to be done first to set the scene for said changes. The 1.0 and 1.1 tarballs can be fetched from: http://code.google.com/p/cacheboy/downloads/ The wiki has some basic information about what's going on: http://code.google.com/p/cacheboy/wiki/ Finally, if you're at all interested in the why behind the what, take a look at the blog: http://cacheboy.blogspot.com/ The wider the testing I get, the quicker this stuff can be made stable and rolled into the next Squid-2 release. Thanks!
Adrian -- - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support - - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
Re: [squid-users] Blocking web sites
Carlos Alberto Bernat Orozco wrote: Hi, why is an additional tool like SquidGuard or DansGuardian necessary for squid to block web sites? Is there an issue with performance? You're completely mistaken. DansGuardian and SquidGuard are OPTIONAL tools; they are not REQUIRED. Squid implements several Access List (ACL) types which can be used to apply SEVERAL different and possibly complex browsing policies. It may not be easy to implement very complex policies, but it's completely possible without using any other optional tool like SquidGuard, DansGuardian or others. Those tools, though, can reduce cpu usage on big and complex policies, and make implementing those complex policies easier. But ... they are NOT required as you're thinking. If you understand how squid ACLs work, and you can state your policy in a logical way, then you can translate it to squid ACLs and implement it. -- Atenciosamente / Sincerely, Leonardo Rodrigues, Solutti Tecnologia http://www.solutti.com.br
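As a minimal sketch of the built-in ACL approach described above (the domain names and file path are made-up examples, not from the thread):

```
# squid.conf: block a list of sites with plain squid ACLs, no external tool
acl blocked_sites dstdomain .badsite.example .ads.example
# or keep the list in a file, one domain per line:
# acl blocked_sites dstdomain "/etc/squid/blocked_domains.txt"
http_access deny blocked_sites
```

Order matters: the deny line must appear before any broader http_access allow rule, since squid evaluates http_access rules top to bottom.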
Re: [squid-users] Blocking web sites
Thanks for your message. I was totally mistaken. Thanks!
[squid-users] strange squid problem
Dear Sir, I'm using Mandriva Spring 2007.1 (squid 2.6.STABLE7). I can surf the internet, use messenger, even watch a music video, but can't download a complete program. I've tried to download AVG anti-virus free edition 8.0 (45.57MB); I only receive 25.0MB of the file. I'm downloading on a Windows XP Pro system. If I try to download the above-mentioned program (AVG anti-virus free edition 8.0) on my linux box, not using squid, the full 45.57MB is received. How can I resolve this problem? Please help. Kind Regards Desmond
Re: [squid-users] https questions
On lör, 2008-06-07 at 18:29 +0800, Ken W. wrote: 2008/06/07 14:37:02| httpsAccept: Error allocating handle: error:0906A068:PEM routines:PEM_do_header:bad password read Your SSL key is encrypted and you have not given the passphrase to Squid, so it cannot set up SSL properly. Decrypt the SSL key and it should work better: openssl rsa -in key.pem -out unencrypted_key.pem Regards Henrik
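A self-contained sketch of the fix (the filenames here are throwaway stand-ins; in the original config the input would be /usr/local/squid/etc/ssl/server.key). It creates a passphrase-protected key just so the decryption step and its effect are visible end to end:

```shell
# Create a throwaway passphrase-protected RSA key (stand-in for server.key)
openssl genrsa -aes128 -passout pass:secret -out enc_key.pem 2048

# Strip the passphrase so Squid can load the key without prompting
openssl rsa -in enc_key.pem -passin pass:secret -out unencrypted_key.pem

# The decrypted key no longer carries an ENCRYPTED marker in its PEM header
if ! grep -q ENCRYPTED unencrypted_key.pem; then echo "key is unencrypted"; fi
```

Point the https_port key= option at the decrypted file (and keep its permissions tight, since it is no longer passphrase-protected).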
Re: [squid-users] strange squid problem
DD Dods wrote: I'm using Mandriva Spring 2007.1 (squid 2.6.STABLE7). I can surf the internet, use messenger, even watch a music video, but can't download a complete program. I've tried to download AVG anti-virus free edition 8.0 (45.57MB); I only receive 25.0MB of the file. I'm downloading on a Windows XP Pro system. If I try to download the above-mentioned program on my linux box, not using squid, the full 45.57MB is received. How can I resolve this problem? Is avg 8.0 just one example of a download problem, or are you having problems downloading ONLY avg 8.0? Are you having problems with ALL big downloads, or just with the avg 8.0 45.57MB file? -- Atenciosamente / Sincerely, Leonardo Rodrigues, Solutti Tecnologia http://www.solutti.com.br
Re: [squid-users] High tcp_hit times
Hmm, that's weird because I don't have any ACL that would require DNS. See my original post w/ the squid config. mike At 10:46 PM 6/6/2008, Henrik Nordstrom wrote: On fre, 2008-06-06 at 15:30 -0700, leongmzlist wrote: Does squid still use DNS for reverse proxy requests? All my requests go to http://cache-int/, but cache-int is not in /etc/hosts nor in DNS. I have one origin server defined and it is used as the default, so shouldn't squid just go to the backend w/o DNS lookups? Also depends on whether you have any acls relying on DNS, such as a dst acl. Regards Henrik
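To illustrate the distinction Henrik is drawing (the addresses here are hypothetical, not from the poster's config): a dst acl matches on the destination IP, so Squid must first resolve the requested hostname, while a dstdomain acl matches the hostname string and needs no lookup at all.

```
# squid.conf sketch
acl backend_ip dst 192.0.2.10          # forces a DNS lookup of each request's host
acl backend_name dstdomain cache-int   # plain string match, no DNS involved
```

With an unresolvable hostname like cache-int, any dst-style acl in the config would stall each request on a failing lookup, which is one possible source of the high hit times in this thread.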
[squid-users] Re: Cacheboy-1.1 release, testers wanted!
Is squid being renamed? Or what is the relation of cacheboy to the squid project? Or was it the 2.7 branch that was renamed to cacheboy (is there one for girls?... not sure I want a boy caching my web content -- but that's another matter... :-)) I'm also a bit perturbed that 3.0, which has been around forever, is still in Beta, while a lot of work continues to go on in the 2.x line. I mean, usually most work goes toward the new generation, with maybe 1 person handling bug fixes for 2.x... and usually most feature work goes into 3.x... I'm not trying to direct or anything, I don't understand all the politics or what's going on... Is cacheboy going to become a 3rd squid proxy server? Or why was it split off? If it's really designed to be a separate fork going off in a different direction, I don't suppose it's very high traffic at this point, but it seems like it's yet another distractor from moving ahead and getting all the feature and performance work needed into 3.x. I mean -- the linux kernel is a lot bigger and has a greater diversity of needs than squid would likely ever have, yet they, remarkably, have managed to stay mostly cohesive, but maybe no one with 'squid' has Linus's charismatic charm? (?!?) But certainly a lesson to be taken from linux, no matter what examples there are of it not working for some developers (and there have been examples -- nothing is perfect): the bulk of the work is focused, and there don't seem to be any forks of note -- meaning ones that weren't intended as testing/development playgrounds with the work being remerged later; those sorta faded away. So I guess, how did cacheboy come to be and why is it here (which may become obvious if I know the connection to the 'main' squid project...)? :-?
[squid-users] Re: performances ... again
GARDAIS Ionel wrote: Beside the fact that the hit rate is low, response times are way too long for users (cache-miss median service times are around 200ms and cache-hits are around 3ms) Have you tried access without a squid-proxy -- but maybe using 'socks' (if you need to bridge a firewall), or direct? I've done performance analysis on my squid installation a few times -- because I wasn't happy with performance... but when squid isn't there and I go through socks, or I set up a machine on the net directly -- I don't get any faster throughput or response time ... at most, we're talking differences down in the noise level ... One by one, I eliminated or upgraded parts that I could -- my internal net is 1 Gigabit. I tried upgrading to a firewall product that supported a 100Mbit interface (the old one was 10Mbit half-duplex)... but it's a matter of my DSL not being much faster than yours -- ~2.5-3Mb. But simple 'ping' times to most servers are 100ms or more. Within my ISP's network they are as low as 50-55ms, but going out, and back down someone else's network, usually adds at least another 50, often nearer 100, so ping times alone to many outside destinations are 100-150ms, closer to 150 on heavier sites. So if a tiny ping packet -- handled by the OS -- takes that long, of course adding web-server application time on top of that is going to add 'something'. So while some outside servers can serve 1-segment packets in the 150-160ms range, most are higher ... A TCP_REFRESH_UNMODIFIED might take around 140-150ms... But most misses are 400, 600 or more, easily. My hits are fast... some coming in at less than 1ms (0 is stated), but if you have to go to a webserver -- forget it... 300-1000ms is typical -- *without* a cache (in my case, anyway). The only way to speed up web pages is to make sure your clients are issuing multiple requests in parallel -- easily configured in Firefox, and there are KB articles (and pay-utils that will set the values for you, of course) for MS products.
Since I take it that pipelined requests don't work in squid(?) and compression doesn't work (not that compression would help latency -- only throughput), the best thing is to make sure your clients can issue at least 8 requests in parallel at a time -- through your squid proxy -- and that your squid proxy can handle that many connections per client without becoming a bottleneck. My proxy usage is often just me -- so no problem, but even when I have housemates and guests active, squid never seems to be a bottleneck.
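As an illustration of the parallel-request tuning suggested above, a user.js sketch for Firefox of that era (the values are illustrative, not the author's; verify the pref names in about:config before relying on them):

```
// user.js: raise per-proxy/per-server parallelism so requests overlap the latency
user_pref("network.http.max-persistent-connections-per-proxy", 16);
user_pref("network.http.max-persistent-connections-per-server", 8);
```

The per-proxy pref is the one that matters when everything is funneled through squid; the per-server default is low enough in browsers of this period that raising it can noticeably overlap round-trip latencies.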
Re: [squid-users] Re: Cacheboy-1.1 release, testers wanted!
On Sat, Jun 07, 2008 at 01:32:31PM -0700, Linda W wrote: So I guess, how did cacheboy come to be and why is it here (which may become obvious if I know the connection to the 'main' squid project...)? :-? This thread may be of some interest to you: http://marc.info/?t=12108954741&r=1&w=2 Also, I believe the Cacheboy website has some info as well. From a user perspective I wish all hands would work on one branch (2.7 or 3.x), but I understand that's not how things always work. Ray
[squid-users] Re: squid_kerb_auth on mac os x
Find below a small test program to create a token. Run a kinit as a user and then ./squid_kerb_auth_test proxy_fqdn. It creates a token like: ./squid_kerb_auth_test opensuse.suse.home Token: YIIB/gYJKoZIhvcSAQICAQBuggHtMIIB6aADAgEFoQMCAQ6iBwMFAACjggEWYYIBEjCCAQ6gAwIBBaELGwlTVVNFLkhPTUWiJTAjoAMCAQOhHDAaGwRIVFRQGxJvcGVuc3VzZS5zdXNlLmhvbWWjgdIwgc+gAwIBF6EDAgEDooHCBIG/3ZmN10yosQbc3IkfBaq/pW6LiWMyDFmxec6M13jhnBU36eKJL1cIsqp3EArME/dVR3Y0FC7QSguW4mNJrtr44vGQD8NdYGHqUxFWH7uIkLE9YnAQnuimj/pefsI7s4EKCo+cqlecVIx2aXtVuubicH1e+CSB+QlH7ZIWpAoCfaLFkxLl6OoZ42ixxou0e+aBCyZQ+1n3PH1Xts7MuFz+6OTQh+IhBWbQbLY54oKnCivjptbsLZH5D0uKS31i01ukgbkwgbagAwIBF6KBrgSBq9OLL0umYzCethf/CUEcQ6+7xobZYVsyIJtsV9IwAFAscVVO4hbMW3jKbM8BYLts72QCShJPTgBlAaoWwCy/YpZezNwPnYDm2lYDjfPZ2/r23326SmXKtPbNT1VFc+yPwAMrYPCxJr92Cxg2OI4z1qQWcCdRR6c5tidX3SSH4rX+YJHEAVKD/mMsFXmO18iT08B/pG4HQ8BcGs3UvQh4hXwOrnSBeR4xonljtQ== Then set the keytab with export KRB5_KTNAME=FILE:/etc/squid/squid.keytab and run ./squid_kerb_auth -d -i -s HTTP/proxy_fqdn and enter the token starting with YR as follows (in one line) ./squid_kerb_auth -d -i -s HTTP/[EMAIL PROTECTED] YR YIIB/gYJKoZIhvcSAQICAQBuggHtMIIB6aADAgEFoQMCAQ6iBwMFAACjggEWYYIBEjCCAQ6gAwIBBaELGwlTVVNFLkhPTUWiJTAjoAMCAQOhHDAaGwRIVFRQGxJvcGVuc3VzZS5zdXNlLmhvbWWjgdIwgc+gAwIBF6EDAgEDooHCBIG/3ZmN10yosQbc3IkfBaq/pW6LiWMyDFmxec6M13jhnBU36eKJL1cIsqp3EArME/dVR3Y0FC7QSguW4mNJrtr44vGQD8NdYGHqUxFWH7uIkLE9YnAQnuimj/pefsI7s4EKCo+cqlecVIx2aXtVuubicH1e+CSB+QlH7ZIWpAoCfaLFkxLl6OoZ42ixxou0e+aBCyZQ+1n3PH1Xts7MuFz+6OTQh+IhBWbQbLY54oKnCivjptbsLZH5D0uKS31i01ukgbkwgbagAwIBF6KBrgSBq7SAvkLhcONUUF5s01suOu2vdgwD2vxbYsT0DLgOYbH2w+dF9doOVk1D6rRTvjQmVN/SnS/SLXAwUIW776vYIhlzTGBQLioCypYRjmpGgq73A7//wC1b7/NXV5Ml6czAegeVHT0S01Y43kGtPihW1sO7fmKmn8Rak8qjKq6QNdQLnjK3wAnzf9KOnG6Hf0QlW/hQPSCelPN4EI7qyrDjMjVUKkiiLPnG1xxKtA== 2008/06/07 22:52:11| squid_kerb_auth: Got 'YR 
YIIB/gYJKoZIhvcSAQICAQBuggHtMIIB6aADAgEFoQMCAQ6iBwMFAACjggEWYYIBEjCCAQ6gAwIBBaELGwlTVVNFLkhPTUWiJTAjoAMCAQOhHDAaGwRIVFRQGxJvcGVuc3VzZS5zdXNlLmhvbWWjgdIwgc+gAwIBF6EDAgEDooHCBIG/3ZmN10yosQbc3IkfBaq/pW6LiWMyDFmxec6M13jhnBU36eKJL1cIsqp3EArME/dVR3Y0FC7QSguW4mNJrtr44vGQD8NdYGHqUxFWH7uIkLE9YnAQnuimj/pefsI7s4EKCo+cqlecVIx2aXtVuubicH1e+CSB+QlH7ZIWpAoCfaLFkxLl6OoZ42ixxou0e+aBCyZQ+1n3PH1Xts7MuFz+6OTQh+IhBWbQbLY54oKnCivjptbsLZH5D0uKS31i01ukgbkwgbagAwIBF6KBrgSBq7SAvkLhcONUUF5s01suOu2vdgwD2vxbYsT0DLgOYbH2w+dF9doOVk1D6rRTvjQmVN/SnS/SLXAwUIW776vYIhlzTGBQLioCypYRjmpGgq73A7//wC1b7/NXV5Ml6czAegeVHT0S01Y43kGtPihW1sO7fmKmn8Rak8qjKq6QNdQLnjK3wAnzf9KOnG6Hf0QlW/hQPSCelPN4EI7qyrDjMjVUKkiiLPnG1xxKtA==' from squid (length: 691). 2008/06/07 22:52:12| squid_kerb_auth: parseNegTokenInit failed with rc=109 2008/06/07 22:52:12| squid_kerb_auth: Token is possibly a GSSAPI token AF AA== [EMAIL PROTECTED] 2008/06/07 22:52:12| squid_kerb_auth: AF AA== [EMAIL PROTECTED] 2008/06/07 22:52:12| squid_kerb_auth: User [EMAIL PROTECTED] authenticated Regards Markus Compile gcc -o squid_kerb_auth_test squid_kerb_auth_test.c -lgssapi_krb5 -lkrb5 /* * - * * Author: Markus Moeller (markus_moeller at compuserve.com) * * Copyright (C) 2007 Markus Moeller. All rights reserved. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 
 * -----------------------------------------------------------------------------
 */

/*
 * Hosted at http://sourceforge.net/projects/squidkerbauth
 */

#ifndef HEIMDAL
#include <profile.h>
#endif
#include <krb5.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <time.h>
#include <sys/time.h>

#ifdef HEIMDAL
#include <gssapi.h>
#define gss_nt_service_name GSS_C_NT_HOSTBASED_SERVICE
#else
#include <gssapi/gssapi.h>
#ifndef SOLARIS_11
#include <gssapi/gssapi_generic.h>
#else
#define gss_nt_service_name GSS_C_NT_HOSTBASED_SERVICE
#endif
#endif

static const char *LogTime(void);
int check_gss_err(OM_uint32 major_status, OM_uint32 minor_status, const char *function);

#define PROGRAM "squid_kerb_auth_test"

static const char *LogTime()
{
    struct tm *tm;
    struct timeval now;
    static time_t last_t = 0;
    static char buf[128];

    gettimeofday(&now, NULL);
    if (now.tv_sec != last_t) {
        tm = localtime(&now.tv_sec);
        strftime(buf, 127, "%Y/%m/%d %H:%M:%S", tm);
        last_t = now.tv_sec;
    }
    return buf;
}
[squid-users] Re: squid_kerb_auth on mac os x
Alex, what Kerberos version do you use? I think Mac uses MIT 1.4.1, which does not support SPNEGO as far as I recall. So in your compile don't use -DHAVE_SPNEGO. Markus Alex Morken [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Hello, This is the first time I have posted on this list, so hello to everyone. I have been trying to get squid_kerb_auth to work on Mac OS X 10.4.11 and I cannot seem to figure out the reason it fails. Here are the options I had set for the configure part of squid: Squid Cache: Version 2.7.STABLE2 configure options: '--enable-auth=basic negotiate' '--enable-basic-auth-helpers=LDAP' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-esternal-acl-helpers=ldap_group' '--prefix=/usr/local/squid-2.7' Everything compiles nicely and produces no errors. I set up and tested my kerberos configuration per below: Set up a local keytab for squid - HTTP/[EMAIL PROTECTED] Tested it by issuing the following command and it worked correctly: `kinit -k -t /etc/squid/squid.keytab HTTP/[EMAIL PROTECTED]` Set and exported KRB5_KTNAME pointing to the local keytab. I wrote a bash script that does this and I have also tried to set the environment variable in the current shell and run it from there. Both work as expected. I added authentication to squid.conf: auth_param negotiate program /usr/libexec/squid_kerb_auth -d -s HTTP/[EMAIL PROTECTED] I then started squid and it looks like everything is starting correctly. But it is still not dealing with kerberos correctly. I downloaded and compiled squid_kerb_auth by hand as I had found someone else on this list that was running into a problem similar to mine. I recompiled squid_kerb_auth with a few different options as mentioned in the thread. They are listed below.
Compiled by hand: gcc -o squid_kerb_auth -DHAVE_SPNEGO -D__LITTLE_ENDIAN__ -Ispnegohelp squid_kerb_auth.c base64.c spnegohelp/derparse.c spnegohelp/spnego.c spnegohelp/spnegohelp.c spnegohelp/spnegoparse.c -lgssapi_krb5 -lkrb5 -lcom_err root# ./squid_kerb_auth -d 2008/06/03 13:37:59| squid_kerb_auth: Starting version 1.0.1 [EMAIL PROTECTED] 2008/06/03 13:38:01| squid_kerb_auth: Got 'username' from squid (length: 15). 2008/06/03 13:38:01| squid_kerb_auth: gss_accept_sec_context() failed: A token was invalid. Token header is malformed or corrupt BH gss_accept_sec_context() failed: A token was invalid. Token header is malformed or corrupt Results from just using ./configure and no options specified: host:/tmp/kerb/squid_kerb_auth root# ./squid_kerb_auth -d -s HTTP/[EMAIL PROTECTED] 2008/06/03 13:47:38| squid_kerb_auth: Starting version 1.0.1 [EMAIL PROTECTED] 2008/06/03 13:47:39| squid_kerb_auth: Got '[EMAIL PROTECTED]' from squid (length: 15). 2008/06/03 13:47:39| squid_kerb_auth: parseNegTokenInit failed with rc=108 2008/06/03 13:47:39| squid_kerb_auth: Token is possibly a GSSAPI token 2008/06/03 13:47:39| squid_kerb_auth: gss_accept_sec_context() failed: A token was invalid. Token header is malformed or corrupt BH gss_accept_sec_context() failed: A token was invalid. Token header is malformed or corrupt I have also tried all combinations of -DHAVE_SPNEGO, -D__LITTLE_ENDIAN__ and -D__BIG_ENDIAN__. All have failed in similar ways. So the obvious questions are: what am I doing wrong? Am I using squid_kerb_auth correctly from the command line (can I use it at all that way)? Is there anywhere I can look for more verbose logs from squid? I have been running squid with -d 9 -N options and it doesn't error to the logs or to the screen in any sort of verbose way (the way I would expect it to work). Any help would be much appreciated and I would be happy to provide any information you request! Thank you, Alex Morken
Re: [squid-users] Re: Cacheboy-1.1 release, testers wanted!
On lör, 2008-06-07 at 13:32 -0700, Linda W wrote: Is squid being renamed? No. Or what is the relation of cacheboy to the squid project? CacheBoy is a fork of the Squid project, where Adrian tries out some new ideas of what the future should look like. I'm also a bit perturbed that 3.0, which has been around forever, is still in Beta, Squid-3.0 was released as the current STABLE release on May 20, 2008 and is no longer in testing. while a lot of work continues to go on in the 2.x line. Adrian still works on 2.x, and third-party contributions are still accepted (if in reasonable shape). I also continue maintaining the 2.x tree, fixing important bugs. For some time the two versions will coexist, but the intention is that 3.x should fill the role over time. There are some features and performance still missing from 3.x before it can completely take over from 2.x, but also a lot of unique functionality not found in 2.x. I mean usually most work goes toward the new generation, with maybe 1 person handling bug fixes for 2.x... and usually most feature work goes into 3.x... It's the case for Squid as well. Is cacheboy going to become a 3rd squid proxy server? Or why was it split off? Because Adrian is not happy with the direction Squid-3 is taking, and also because he is more comfortable with C than C++, with large parts of Squid-3 being somewhat alien to him where Squid-2 is very well known to him. Regards Henrik
Re: [squid-users] Re: Cacheboy-1.1 release, testers wanted!
On sön, 2008-06-08 at 00:11 +0200, Henrik Nordstrom wrote: On lör, 2008-06-07 at 13:32 -0700, Linda W wrote: Is squid being renamed? No. Or what is the relation of cacheboy to the squid project? CacheBoy is a fork of the Squid project, where Adrian tries out some new ideas of what the future should look like. I'm also a bit perturbed that 3.0, which has been around forever, is still in Beta, Squid-3.0 was released as the current STABLE release on May 20, 2008 and is no longer in testing. Sorry, a silly copy-paste error crept in there. Dec 13, 2007 is the correct date for 3.0.STABLE1. May 20 was the most recent 3.0.STABLE6 bugfix release... while a lot of work continues to go on in the 2.x line. Adrian still works on 2.x, and third-party contributions are still accepted (if in reasonable shape). I also continue maintaining the 2.x tree, fixing important bugs. For some time the two versions will coexist, but the intention is that 3.x should fill the role over time. There are some features and performance still missing from 3.x before it can completely take over from 2.x, but also a lot of unique functionality not found in 2.x. I mean usually most work goes toward the new generation, with maybe 1 person handling bug fixes for 2.x... and usually most feature work goes into 3.x... It's the case for Squid as well. Is cacheboy going to become a 3rd squid proxy server? Or why was it split off? Because Adrian is not happy with the direction Squid-3 is taking, and also because he is more comfortable with C than C++, with large parts of Squid-3 being somewhat alien to him where Squid-2 is very well known to him. Regards Henrik
Re: [squid-users] Yet another Invalid URL question
Escuela Episcopal Bilingüe Santísima Trinidad wrote: I am trying to set up a separate box (with two NICs) as a firewall/filter using the following configuration. When Squid was running without DansGuardian, and I connected a laptop to the second NIC and pointed the laptop to Squid, everything worked fine. With the iptables rules set and DansGuardian running (and the laptop configured normally), however, when I enter anything into the laptop's browser I get an error message from Squid saying the URL is invalid, and the URL it says it is trying to use is the one I typed, without the domain info. Thus, if I ask for www.google.com/ it shows / instead. If I try something like search.lycos.com/?query=test&x=0&y=0 it shows /?query=test&x=0&y=0 instead. I have seen some chatter about this type of thing on Squid's mailing list, but, again, the Squid-only operation did not encounter this problem, so it could be something to do with DansGuardian; I'm asking here as well as in the DansGuardian group, in case anyone here can be of assistance (and because nobody has responded at all in the DansGuardian group). The configuration I am trying to use is listed below, but here are a couple of notes before I paste it all. First, the main set of instructions I was trying to follow was from: http://www.spencerstirling.com/computergeek/dansguardian.html However, it appears to be a bit old, and mentions Squid options that no longer appear to be valid (e.g. httpd_accel_host virtual). It also says to use http_port 127.0.0.1:3128 but the discussion of the invalid URL problem on the Squid mailing list suggests http_port 127.0.0.1:3128 transparent instead. Thus, I'm sure that the mishmash of settings I am using is somehow the cause of the problem, but I lack the networking experience to tell just WHERE the problem occurs.
Here, then, is the rest of the configuration information: Linux: Debian 4.0 r3 i386; iptables: 1.3.6; Squid: 3.0.PRE5; Dansguardian: 2.8.0.6-antivirus-6.4.4.1-2 (Squid and DansGuardian installed via Synaptic Package Manager). Please upgrade your Squid; we have been in the STABLE cycle for several months now. The current production release is available from the Debian 'unstable' repositories. Since the configuration files are large enough that their full text would cause the list server to truncate this message, I have posted them to MediaFire. Here they are: firewall.sh: http://www.mediafire.com/?zvb2gjj99d9 dansguardian.conf: http://www.mediafire.com/?gucyirxttdb squid.conf: http://www.mediafire.com/?22txgzjdcez Thanks! Amos -- Please use Squid 2.7.STABLE1 or 3.0.STABLE6
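For what it's worth, the interception side of the advice above boils down to a one-line squid.conf change; a sketch using the Squid 3.0 syntax discussed in the thread (port and loopback address are the thread's own values):

```
# Listen on loopback only; DansGuardian forwards requests here.
# The 'transparent' flag makes Squid rebuild the full URL from the
# Host header, which is what cures the "/?query=..." invalid-URL error.
http_port 127.0.0.1:3128 transparent
```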
Re: [squid-users] Transparent proxy with MSN
Sergio Belkin wrote: 2008/6/7 Amos Jeffries [EMAIL PROTECTED]: Sergio Belkin wrote: 2008/6/5 Amos Jeffries [EMAIL PROTECTED]: Sergio Belkin wrote: Hi, I'd like to know if it's possible to allow MSN usage through a transparent proxy. Possible. But not always easy. It depends highly on the type of network you have set up (a level of NAT between the client and squid kills it fairly well). The scheme is as follows: a user connects with his notebook via an Access Point which has OpenWRT installed. OpenWRT has DNAT rules: iptables -t nat -A prerouting_rule -i br0 -p tcp --dport 80 -j DNAT --to-destination $SQUID_IP:8080 iptables -t nat -A prerouting_rule -i br0 -p tcp --dport 1863 -j DNAT --to-destination $SQUID_IP:8080 That NAT happening on the AP would break squid transparency. The AP needs to do policy-routing to pass only the port-80 packets to the squid box. http://wiki.squid-cache.org/ConfigExamples/LinuxPolicyRouteWebTraffic The NAT part appears to be right, but the Squid box should be the one doing it. But why is web browsing working fine? Web browsing will work as long as your packets are reaching Squid. What will be going wrong there is that your squid will be logging and doing ACL security checks on the wrong IPs for clients. There is something about authentication too with MSN; where can I read about it? I don't know. I found a mention in Google, but it was not very helpful. Full TPROXY may be needed for that one. (I've tried the last one and even redirecting 1050, but I'm not sure if that's right.) Users can browse the web with no problems using the transparent proxy (except SSL sites of course) but they fail to use MSN. MSN is _supposed_ to have automatic failovers to port 80 that use HTTP. But that depends on what other paths it can find through your network first. Amos -- Please use Squid 2.7.STABLE1 or 3.0.STABLE6
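The policy-routing alternative Amos mentions (detailed at the wiki link above) would, roughly, replace the DNAT rules on the AP with mark-and-route rules, so packets reach the Squid box with their original destination address intact; a hedged sketch, with $SQUID_IP standing in for the proxy's address as in the original rules and table 100 chosen arbitrarily:

```
# On the OpenWRT AP: mark port-80 traffic instead of rewriting it
iptables -t mangle -A PREROUTING -i br0 -p tcp --dport 80 -j MARK --set-mark 1
ip rule add fwmark 1 table 100
ip route add default via $SQUID_IP table 100

# On the Squid box itself: redirect locally, where Squid can still
# recover the original destination from the local NAT table
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```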
Re: [squid-users] Re: Cacheboy-1.1 release, testers wanted!
It's reasonably simple, and was documented in the past. Cacheboy is a fork of Squid-2.HEAD, with the intent to implement all the various things which I believed we should have done to Squid-2 before we forked it off and started along the Squid-3 path. Some of these medium-term goals may conflict with Squid-3 goals, and I didn't want to implement everything inside Squid-2 until the rest of the developers were happy. These include HTTP/1.1, IPv6 support and some rudimentary content manipulation. I began studying Squid performance issues a number of years ago, when Squid-3 was still not stable. I chose Squid-2 because it's what people were using. I'd like to think that my Squid-2 focus has benefited and is benefiting users - I've tried to keep the leap between versions (at least due to my changes) reasonably minimal, but this hasn't always been the case. Squid-3 is now stable, but there is still upcoming work which would make my performance-related stuff more painful to implement, and this would result in longer periods of instability and release times. Coupled with the fact that Squid-3 still doesn't run anywhere near the speed that Squid-2.7 does (and Squid-2.HEAD / Cacheboy will just make that worse, sorry!), I think that Squid-2 is a perfectly good vehicle to flesh out performance and flexibility work - people run it everywhere, it's used in very busy environments, and people seem happy to trial a version which is very close to what they're already running. As I've said before, it's my hope that I can roll the changes I make in Cacheboy (and eventually Squid-2.HEAD) into a better future version of Squid. But I need to know -what- to change before I can really discuss topics like parallelism, performance, storage and features; and I don't think that anyone actively working on Squid right now really knows what the path forward should be. I got fed up waiting :) If you noticed, I've aimed to make Cacheboy stable -now-. This is only a few weeks after Squid-2.7 was released.
Assuming Squid-2.HEAD doesn't get any further changes which introduce regressions, my aim is to get Cacheboy stable and in production right now so I can push out Squid-2.8, get it into production environments and begin the next set of changes. I don't want the release cycle to take years like we have in the past. Adrian

On Sat, Jun 07, 2008, Linda W wrote: Is squid being renamed? Or what is the relation of cacheboy to the squid project? Or was it the 2.7 branch that was renamed to cacheboy (is there one for girls? ... not sure I want a boy caching my web content -- but that's another matter... :-)) I'm also a bit perturbed that 3.0, which has been around forever, is still in Beta, while a lot of work continues to go on in the 2.x line. I mean usually most work goes toward the new generation, with maybe 1 person handling bug fixes for 2.x... and usually most feature work goes into 3.x... I'm not trying to direct or anything, I don't understand all the politics or what's going on... Is cacheboy going to become a 3rd squid proxy server? Or why was it split off? If it's really designed to be a separate fork going off in a different direction, I don't suppose it's very high traffic at this point, but it seems like it's yet another distraction from moving ahead and getting all the features and performance work needed into 3.x. I mean -- the Linux kernel is a lot bigger and has a greater diversity of needs than squid would likely ever have, yet they, remarkably, have managed to stay mostly cohesive; but maybe no one with 'squid' has Linus's charismatic charm? (?!?) But certainly a lesson to be taken from Linux, no matter what examples there are of it not working for some developers (and there have been examples -- nothing is perfect): the bulk of the work is focused, and there don't seem to be any forks of note - meaning ones that weren't intended as testing/development playgrounds with the work being remerged later - that haven't sort of faded away.
So I guess, how did cacheboy come to be and why is it here (which may become obvious if I know the connection to the 'main' squid project...)? :-? -- - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support - - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
[squid-users] How to control download bandwidth
I set up Squid at many cyber cafes; the performance is normal, but the following problem occurs. If one of the users downloads large files with a download manager, e.g. IDM, Free Download Manager, etc., the whole Internet cafe becomes very slow. I have set download limit policies, but that is not suitable all the time; there are many situations where users need to download more than the limit. I could solve this problem if I could control download bandwidth, so that they cannot take the whole Internet bandwidth when they download with IDM, FDM, Gozilla... Can anyone help me with an example? Mr. Crack 007
Re: [squid-users] How to control download bandwidth
Mr Crack wrote: I set up Squid at many cyber cafes; the performance is normal, but the following problem occurs. If one of the users downloads large files with a download manager, e.g. IDM, Free Download Manager, etc., the whole Internet cafe becomes very slow. I have set download limit policies, but that is not suitable all the time; there are many situations where users need to download more than the limit. I could solve this problem if I could control download bandwidth, so that they cannot take the whole Internet bandwidth when they download with IDM, FDM, Gozilla... Can anyone help me with an example?

delay_pools - these limit download speeds. They can be set such that up to a limit the speed is unrestricted, then for larger downloads the speed gets dropped. Usually used with the maxconn ACL to limit people using too many separate connections. Gozilla for example opens 3+ HTTP requests for different parts of the same object... Amos -- Please use Squid 2.7.STABLE1 or 3.0.STABLE6
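A sketch of the delay_pools arrangement described above, in squid.conf syntax; the subnet and byte figures are illustrative assumptions, not recommendations:

```
acl lan src 192.168.0.0/24        # example client network

delay_pools 1
delay_class 1 2                   # class 2: one aggregate + per-host buckets
# delay_parameters <pool> <aggregate> <per-host>, each as restore/max
# in bytes/sec and bytes. -1/-1 leaves the aggregate uncapped; each
# host gets a 512 kB burst at full speed, then refills at 16 kB/s,
# so small objects come in fast while large downloads are throttled.
delay_parameters 1 -1/-1 16000/512000
delay_access 1 allow lan
delay_access 1 deny all
```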
[squid-users] use squid for bandwidth limiting
Hi, I have 4 server that used for web hosting. we have an internet connection that shared between these servers without any management . we want to manage and limit bandwidth of some of our server. I want to add a new server just for bandwidth management. what is the best bandwidth management tools in linux ? is squid useful for this purpose? can squid limit the incoming and outgoing traffic transparency? Please assist me. Best Regard Mahdi
Re: [squid-users] How to control download bandwidth
--- Amos Jeffries [EMAIL PROTECTED] wrote: Usually used with the maxconn ACL to limit people using too many separate connections. Gozilla for example opens 3+ HTTP requests for different parts of the same object... Amos -- Please use Squid 2.7.STABLE1 or 3.0.STABLE6 AFAIK, maxconn only limits based on source IP connections, right? It can't be used with other types of ACL. Perhaps it would be a nice feature if it could be combined with, let's say, some regex ACL of multimedia types. That would make it easier to manage partial-content downloads. Just my 2 cents, cmiiw. -Agung-
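For what it's worth, maxconn does count connections per source IP only, but on an http_access line it can already be AND-ed with other ACL types, so the denial fires only for requests matching both; a sketch (the regex is a hypothetical example):

```
acl media urlpath_regex -i \.(iso|zip|avi|mp3)$   # illustrative pattern
acl manyconn maxconn 2

# Deny a media request only when that client already holds more than
# two connections; the connection count itself is still per-IP.
http_access deny media manyconn
```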