Re: [squid-users] block https? (again)
You can block the SSL hosts via iptables, and use squidGuard or DansGuardian to filter URLs. Squid with a big blacklist causes the server to lose performance.

On 4/29/07, Adrian Chadd [EMAIL PROTECTED] wrote: On Sat, Apr 28, 2007, Chuck Kollars wrote: I know this has already been asked, and I know Henrik said no dice. But I still don't understand why, so I'm going to ask the same dumb question one more time:

The important thing here is that Squid is an open source project: if someone comes up with patches which implement functionality, they can be included in the base distribution. I've had a couple of interested parties in developing a few features of late - and a few people have also implemented stuff, much to our happiness - but for the most part there's lots of demand for stuff and not a lot of people who seem to do it.

So! How about this. I'll put up a wishlist on the Wiki. It's at http://wiki.squid-cache.org/WishList . It's very, very incomplete. If you have something you'd like to see implemented in Squid, want to donate to Squid to get one of these projects done, or would like to participate in developing something but don't quite know how, then now's your time to step up and say something to me.

To the people who have emailed me in the past about certain stuff (logfile helpers, COSS work, SMP/threaded support, pre-fetch and satellite link optimisations): I'd appreciate it if you'd get back to me with what you'd like to see, and I'll get it added to this page. I'll also see if I can get this particular page unlocked from requiring login to edit, so you can put in your own projects.

Adrian

--
Sds. Alexandre J. Correa Onda Internet / OPinguim.net http://www.ondainternet.com.br http://www.opinguim.net
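The iptables approach suggested above could be sketched roughly like this. This is a hedged illustration only: the address 192.0.2.10 is a placeholder from the documentation range, the chain assumes the Squid box is also the gateway, and HTTPS is assumed to be plain TCP port 443.

```
# Reject forwarded HTTPS to one known open-proxy host (placeholder IP)
iptables -A FORWARD -p tcp -d 192.0.2.10 --dport 443 -j REJECT

# Or, more drastically, drop all forwarded HTTPS so clients must use
# the proxy's CONNECT handling (where Squid ACLs can then apply)
iptables -A FORWARD -p tcp --dport 443 -j DROP
```

The trade-off is that iptables matches on IP addresses, not domain names, so a name-based blacklist still needs resolving into addresses before it can be enforced this way.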
Re: [squid-users] block https? (again)
On Sun, Apr 29, 2007, Alexandre Correa wrote: You can block the SSL hosts via iptables, and use squidGuard or DansGuardian to filter URLs. Squid with a big blacklist causes the server to lose performance.

Would someone like to figure out why Squid's ACLs have such horrible performance when handling large site lists?

Adrian
RE: [squid-users] Squid + Policy-Based Routing + LoadBalancing/Clustering???
Sat 2007-04-28 at 22:10 -0500, Fiero, Paul wrote: Ack, that isn't the answer I was looking for. We do have a load balancer that we could use but, unfortunately, it means traffic would go from the router, through the firewall, through the load balancer, to Squid, back through the load balancer, back through the firewall, then out to the internet, and then it would return through that path.

Why? The load balancer path is only for traffic from clients to Squid; how Squid then fetches the content is irrelevant.

Regards Henrik

signature.asc Description: This is a digitally signed message part
Re: [squid-users] block https? (again)
Sun 2007-04-29 at 18:46 +0800, Adrian Chadd wrote: Would someone like to figure out why Squid's ACLs have such horrible performance when handling large site lists?

Are they? Most if not all reports I have seen claiming this have been using url_regex when they should have been using dstdomain.

Regards Henrik

signature.asc Description: This is a digitally signed message part
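Henrik's distinction can be illustrated with a squid.conf sketch (the file paths here are hypothetical). A url_regex ACL tests every request URL against each regular expression in the list in turn, so cost grows linearly with list size; dstdomain matches only the request's host name against an indexed domain tree, which stays fast even for very large lists.

```
# Slow with big lists: every request URL is scanned by each regex
# acl blocked url_regex -i "/etc/squid/blacklist.regex"

# Scales well: host names are looked up by domain suffix in a tree
acl blocked dstdomain "/etc/squid/blacklist.domains"
http_access deny blocked
```

The blacklist.domains file would contain one domain per line (e.g. `.example.com` to match the domain and all its subdomains).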
Re: [squid-users] block https? (again)
Sat 2007-04-28 at 20:35 -0700, Chuck Kollars wrote: I want to block a whole bunch of https: proxies. I don't need to find them or to understand them - just block them. I already have a list of them (thanks to urlblacklist.com and DansGuardian).

Then block them. Provided the traffic is sent via Squid to begin with. What is a no-dice is to have Squid deny traffic which is not even sent via Squid, i.e. if you run a transparent interception setup without having the browsers configured to use the proxy.

acl proxy dstdomain file_blacklist_of_proxies.txt
http_access deny proxy

This needs to go before where you allow traffic.

2) Is the problem that the size of the blacklist might be very large (~10,000) and performance suffers so much this is unworkable?

~10,000 entries is quite fine for dstdomain.

Help me understand.

Help me understand in what context I said this was not possible.

Regards Henrik

signature.asc Description: This is a digitally signed message part
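Henrik's shorthand above, written out as it would actually appear in squid.conf. The absolute path is an assumption (a file-based ACL list needs a quoted path), and the placement before the first allow rule is the point he stresses:

```
# One proxy domain per line; a leading dot also matches subdomains
acl proxy dstdomain "/etc/squid/blacklist_of_proxies.txt"

# Deny must come before any http_access allow lines
http_access deny proxy

# ... existing allow rules follow ...
# http_access allow localnet
```

Because dstdomain matches the host part of the URL, this also covers CONNECT requests to those hosts on port 443, as long as the browsers are configured to send HTTPS through the proxy.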
[squid-users] SquidNT 2.6_12 cache_effective_user --- not working
Hi all, I hope for some help. I'm currently using SquidNT 2.5_9 and now I want to upgrade to the newest version. Everything is set up so far, but I still have the problem of an unexpected termination of Squid. cache.log says that the user set in the cache_effective_user tag is not allowed to write in the folder e:/squid26_12/var/logs. I don't know why, because I granted the group Everyone full access to it, and Squid can write the cache.log file. What did I do wrong? Andreas
Re: [squid-users] Re: users are so angry {NCSA authentication} ask for password with every new page
Sun 2007-04-29 at 21:10 +0300, phpdevster wrote: I am using Squid as an internet proxy, and the problem in detail is: when I open an IE page and want to open another page while the first page is still open, it asks for a password for the second page. This is IE 6. Is there a cookie or something I can use to fix the problem?

What does your http_port line look like? Is the browser configured to use the proxy, or are you doing transparent interception of port 80?

Regards Henrik

signature.asc Description: This is a digitally signed message part
Re: [squid-users] SquidNT 2.6_12 cache_effective_user --- not working
Hi,

At 20.10 29/04/2007, Andreas Woll wrote: Hi all, I hope for some help. I'm currently using SquidNT2.5_9 and now I wanted to upgrade to the newest version. All things are set so far, but I still got the problem of an unexpected termination of Squid. It says in cache.log that the user set in tag cache_effective_user is not allowed to write in folder e:/squid26_12/var/logs. I don't know why, because I granted the group Everyone full access to it and it can write the cache.log file.

Are you using Cygwin? In the other native builds of Squid (MinGW or Visual Studio) the cache_effective_user option is meaningless. You must set the Windows service account to change the account Squid runs under.

Regards Guido

- Guido Serassio Acme Consulting S.r.l. - Microsoft Certified Partner Via Lucia Savarino, 1 10098 - Rivoli (TO) - ITALY Tel. : +39.011.9530135 Fax. : +39.011.9781115 Email: [EMAIL PROTECTED] WWW: http://www.acmeconsulting.it/
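On the native builds Guido describes, the running account is a property of the Windows service, not of squid.conf. A minimal command-line sketch, assuming the service was registered under the name "squid" and a local account ".\squid" exists (both names are assumptions; adjust to the actual installation):

```
rem Change the account the Squid service logs on as
rem (service name "squid" and account ".\squid" are assumptions)
sc config squid obj= .\squid password= YourPasswordHere

rem Grant that account write access to the log directory
cacls e:\squid26_12\var\logs /E /G squid:F

net start squid
```

The same change can be made interactively via the Log On tab of the service's properties in the Services management console.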
Re: [squid-users] Session helper question
Sat 2007-04-21 at 16:49 -0400, Tuc at T-B-O-H.NET wrote: I don't see it. I looked at the man page from 3.0.PRE5 and 2.6.STABLE12.

I'll guess that you should either use -t or -a.

You may use -t if you like, but there is a default value if you don't. You should not use -a; see the text after the -a option for a description of how the helper works when you don't use -a. Without this flag the helper automatically starts the session after the first request.

But you're contradicting yourself. The email that started this all contained the following: "The above can be accomplished with the help of the session acl helper in its active mode, combined with an internal web server for serving the splash page and redirecting the user back to the requested URL when clicking on Connect." SO, that's what I want to do. I want to use the session acl helper in -a or active mode, and I'll put up an internal Apache server to put out our splash page. When the user is authenticated and done, I'll redirect them to the requested URL. So first you tell me I need active, then you tell me I don't. I don't want automatic session starting; I want to decide that when I'm satisfied, THEN the user will have a session started. Your first reply to me told me to use it in the active mode, so that would be -a.

The example was removed between 2.6.STABLE12 and 3.0.PRE5. What I tried to tell you was to read the man page text after the -a option, explaining how the helper operates when not using -a (i.e. what is the default mode of operation).

I meant in a much earlier email, not the immediately preceding one.

You may use the active mode if you like, but it's somewhat more complicated to use. It's meant for situations where the user must actively accept a terms-of-use page or similar before they are allowed to browse.

Which is exactly what I want. And all I'm looking for is WHAT is used to determine that the session is there. Is it a cookie? A pop-up? A MAC address? An IP address?

And the example has not been removed. It's 3.0.PRE5 which is not yet updated.

Ok, thanks. So looking at the example, it talks about an argument LOGIN, and I don't understand what's part of that argument. It also talks about sessions, but what constitutes a session? Is it from the same IP, from the same browser, etc.?

Arguments are sent via the acl directive. A session identifier is whatever you send to the helper. It could be any of the above, as per your external_acl_type definition.

What is the way in the example? I don't see it passing anything, it seems, except %LOGIN. What's %LOGIN comprised of? Is it possible to give an example of how the flow goes, what the browser and Squid do back and forth?

There is no flow in that sense. It's just a definition of what identifies a session in external_acl_type, and then the helper monitoring the activity of the session and timing out the session when idle (or alternatively explicit login/logout actions when using the -a option).

Regards Henrik

There is a flow...
1) User attempts to access a.b.c.d
2) Squid sends it to the acl helper
3) ACL helper matches BLAH against BLEH
4) If there is a match, the page can be retrieved
4b) If not, then the user is directed to another page
5) When BLOOP is done, a FROIBLE is stored and

So I'm trying to find out what the ACL is matching to see if the user was seen or not...

Thanks, Tuc
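Henrik's point that "a session identifier is whatever you send to the helper" can be made concrete with a squid.conf sketch of the default (passive) mode, keying sessions on the client IP via %SRC. The helper path, timeout values, and splash URL below are assumptions, not taken from the thread:

```
# Each distinct %SRC (client IP) is one session; the helper starts it
# on first sight and expires it after 7200 idle seconds (-t)
external_acl_type session ttl=300 negative_ttl=0 children=1 %SRC /usr/local/squid/libexec/squid_session -t 7200

acl session external session

# Clients without an active session are denied...
http_access deny !session

# ...and redirected to the splash page instead of an error page
deny_info http://internal.example.com/splash.html session
```

Swapping %SRC for another format token (e.g. %LOGIN for the authenticated user name) changes what the helper treats as "a session" without any change to the helper itself.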
RE: [squid-users] Squid + Policy-Based Routing +LoadBalancing/Clustering???
Aaa, I see your point. I wasn't thinking before I spoke. To bypass the normal route to the outside world would be in violation of our security policy and would set a precedent that I don't think our CIO is ready to defend.

Paul Fiero, RHCE Information Security Analyst Communications and Technology Management Office City of Austin (512) 974-3559

=== The information contained in this ELECTRONIC MAIL transmission is confidential. It may also be a privileged work product or proprietary information. This information is intended for the exclusive use of the addressee(s). If you are not the intended recipient, you are hereby notified that any use, disclosure, dissemination, distribution [other than to the addressee(s)], copying or taking of any action because of this information is strictly prohibited. ===

-Original Message- From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]] Sent: Sunday, April 29, 2007 12:26 PM To: Fiero, Paul Cc: squid-users@squid-cache.org Subject: RE: [squid-users] Squid + Policy-Based Routing + LoadBalancing/Clustering???

Sat 2007-04-28 at 22:10 -0500, Fiero, Paul wrote: Ack, that isn't the answer I was looking for. We do have a load balancer that we could use but, unfortunately, it means traffic would go from the router, through the firewall, through the load balancer, to Squid, back through the load balancer, back through the firewall, then out to the internet, and then it would return through that path.

Why? The load balancer path is only for traffic from clients to Squid; how Squid then fetches the content is irrelevant.

Regards Henrik
Re: [squid-users] SquidNT 2.6_12 cache_effective_user --- not working
Hi Guido,

I use the version provided on your website. I think this is not a Cygwin build; at least I didn't find the cygwin.dll file. I already tried to alter the logon account for the squid service and to set the same account in squid.conf and in the ACLs of the folders, but without success. Here is the cache.log:

2007/04/29 02:37:03| Starting Squid Cache version 2.6.STABLE12 for i686-pc-winnt...
2007/04/29 02:37:03| Running as Squid Windows System Service on Windows 2000
2007/04/29 02:37:03| Service command line is:
2007/04/29 02:37:03| Process ID 824
2007/04/29 02:37:03| With 2048 file descriptors available
2007/04/29 02:37:03| With 2048 CRT stdio descriptors available
2007/04/29 02:37:03| Windows sockets initialized
2007/04/29 02:37:03| Using select for the IO loop
2007/04/29 02:37:03| Performing DNS Tests...
2007/04/29 02:37:03| Successful DNS name lookup tests...
2007/04/29 02:37:03| DNS Socket created at 0.0.0.0, port 2045, FD 5
2007/04/29 02:37:03| Adding nameserver 212.42.245.7 from Registry
2007/04/29 02:37:03| Adding nameserver 212.42.246.212 from Registry
2007/04/29 02:37:03| Adding nameserver 172.58.98.5 from Registry
2007/04/29 02:37:03| Adding nameserver 172.58.98.8 from Registry
2007/04/29 02:37:03| User-Agent logging is disabled.
2007/04/29 02:37:03| Referer logging is disabled.
2007/04/29 02:37:03| Unlinkd pipe opened on FD 8
2007/04/29 02:37:03| Swap maxSize 1024 KB, estimated 787692 objects
2007/04/29 02:37:03| Target number of buckets: 39384
2007/04/29 02:37:03| Using 65536 Store buckets
2007/04/29 02:37:03| Max Mem size: 8192 KB
2007/04/29 02:37:03| Max Swap size: 1024 KB
2007/04/29 02:37:03| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
FATAL: Cannot open 'e:/squid26_12/var/logs' for writing. The parent directory must be writeable by the user '.\squid', which is the cache_effective_user set in squid.conf.
Squid Cache (Version 2.6.STABLE12): Terminated abnormally.
At 14:32 29.04.2007, Guido Serassio wrote: Are you using Cygwin? In the other native builds of Squid (MinGW or Visual Studio) the cache_effective_user option is meaningless. You must set the Windows service account to change the account Squid runs under. Regards Guido
Re: [squid-users] block https? (again)
On Sun, Apr 29, 2007, Henrik Nordstrom wrote: Sun 2007-04-29 at 18:46 +0800, Adrian Chadd wrote: Would someone like to figure out why Squid's ACLs have such horrible performance when handling large site lists?

Are they? Most if not all reports I have seen claiming this have been using url_regex when they should have been using dstdomain.

That's what I thought, but it'd be nice to have it confirmed and some actual numbers posted.

adrian
RE: [squid-users] Squid + Policy-Based Routing +LoadBalancing/Clustering???
Aaa, I see your point. I wasn't thinking before I spoke. To bypass the normal route to the outside world would be in violation of our security policy and would set a precedent that I don't think our CIO is ready to defend.

That sounds ... to me as a security consultant ... like you have a very troubling security setup there. The load balancer _outside_ the FW, inaccessible to Squid directly?? You should be considering the load balancer, Squid, and any other servers as valuable company resources to protect from both the internet and some clients. FW outbound and inbound, but not between them (unless you're _very_ paranoid and have a FW on each machine ... which is a story for later...).

But that is all besides Henrik's point. Which was: Squid should be able to go out via the FW directly for fetching, not through a load balancer which may easily circle the loop back to Squid again, and again. Thus the paths should look like this:

User -> FW/Router -> Balancer -> Squid -> FW -> Internet

and

Internet -> FW -> Squid -> FW/Router -> User

FW and Router should be considered fast, like a switch: something that can be traversed easily more than once, but only as an invisible hop to elsewhere. There is no need for Squid to go through the balancer twice. The Squid-to-internet part _cannot_ be balanced at your end, by the nature of the protocols. Doing so merely doubles the traffic going through your hardware. Not exactly something you want to do under any circumstances.

Amos

PS. Oh, and PLEASE do not claim confidentiality on writings which are published for the entire world to see in perpetuity.

-Original Message- From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]] Sent: Sunday, April 29, 2007 12:26 PM To: Fiero, Paul Cc: squid-users@squid-cache.org Subject: RE: [squid-users] Squid + Policy-Based Routing +LoadBalancing/Clustering???

Sat 2007-04-28 at 22:10 -0500, Fiero, Paul wrote: Ack, that isn't the answer I was looking for. We do have a load balancer that we could use but, unfortunately, it means traffic would go from the router, through the firewall, through the load balancer, to Squid, back through the load balancer, back through the firewall, then out to the internet, and then it would return through that path.

Why? The load balancer path is only for traffic from clients to Squid; how Squid then fetches the content is irrelevant.

Regards Henrik
[squid-users] Make install error
Hi, I encountered an error at make install, as follows:

then mv -f .deps/store_dir.Tpo .deps/store_dir.Po; else rm -f .deps/store_dir.Tpo; exit 1; fi
store_dir.c: In function `storeDirGetBlkSize':
store_dir.c:526: error: storage size of `sfs' isn't known
store_dir.c:527: warning: implicit declaration of function `statfs'
store_dir.c:526: warning: unused variable `sfs'
store_dir.c: In function `storeDirGetUFSStats':
store_dir.c:565: error: storage size of `sfs' isn't known
store_dir.c:565: warning: unused variable `sfs'
*** Error code 1
make: Fatal error: Command failed for target `store_dir.o'
Current working directory /export/home/squid-2.6.STABLE12/src
*** Error code 1
make: Fatal error: Command failed for target `install-recursive'
Current working directory /export/home/squid-2.6.STABLE12/src
*** Error code 1
make: Fatal error: Command failed for target `install'
Current working directory /export/home/squid-2.6.STABLE12/src
*** Error code 1
make: Fatal error: Command failed for target `install-recursive'

Kindly advise what could have been wrong with my compilation or make install. Thank you.
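The "storage size of `sfs' isn't known" errors mean `struct statfs` was never declared, so the headers configure detected do not match what this platform (the Makefile output looks like Solaris make) actually provides. A hedged first step, assuming the usual autoconf build layout, is to rebuild from a clean tree so the header checks are redone, and then inspect what configure found:

```
# Start from a pristine tree so cached configure results are discarded
make distclean
./configure

# See which statfs/statvfs headers and functions configure detected
grep -i -e statfs -e statvfs config.log

make && make install
```

If config.log shows statfs being assumed but no usable header found, the mismatch between detection and the platform's statfs/statvfs headers is the thing to report or patch.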