Re: [squid-users] Why is squid caching local intranet domains??

2012-06-06 Thread Eliezer Croitoru

the squid is a gateway...
so if you access the local network you are not getting the data\web 
through the squid box..

makes sense.

Eliezer

On 06/06/2012 19:10, bnichols wrote:

Well the only issue I really have is that any host that is MANUALLY
configured for the squid gets cache hits on the hosts in the
localdomain, which really isn't a problem, considering none of my hosts
are manually configured, and it's all done via forwarding on the router.

So in essence, squid is doing what I want it to do, caching all
traffic, and letting the local hosts go directly to local webservers on
the intranet.

  I was just surprised and bewildered by the lack of log file generation
when trying to access a local webserver. I would have expected to see
logs with DIRECT in them rather than no logs altogether.


Of course I get log entries just
fine when accessing normal web sites, and squid functions normally.

On Wed, 06 Jun 2012 18:51:02 +0300
Eliezer Croitoru <elie...@ngtech.co.il> wrote:


SNIP

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] Youtube\dynamic content caching with squid 3.2 |DONE|

2012-06-08 Thread Eliezer Croitoru
 hope it 
will be simplified now.


we know that the proxy servers do not reveal the request modification 
that is done by the icap server, and this specific icap server 
(GreasySpoon) has very powerful capabilities: external and 
custom libs, classes and programming languages.


we will create a database with a couple of fields for temporary data, and 
if we want we can also build some statistics tables in the db.


the purpose of the database is to store the destination url and a matching 
key; it will be managed by the key and not the url, because the url is 
dynamic..


we will do a double request manipulation on each request:
one on the intercept\forward proxy and the second on the 
cache_peer\second instance proxy.


the flow is like that:

request from client -> proxy1

proxy1 ---> ICAP server
proxy1 acl on the real domain to reqmod on ICAP


icap server (extracts the object data from the url and pairs it in the 
db with the url, then rewrites the request to a spoofed domain 
with the key in the uri) ---> proxy1

example:

http://dfn.dl.sourceforge.net/project/npp-compare/1.5.6/compare-1.5.6-unicode.zip

becomes: 
http://dl.df.squid.internal//project/npp-compare/1.5.6/compare-1.5.6-unicode.zip


and it is paired in the db as the id with the original url and a timestamp.



proxy1 -- requests the spoofed object as a client ---> proxy2

proxy1 has acls to peer the squid.internal dstdomain to proxy2


proxy2 ---> ICAP

proxy2 has acls that allow only spoofed domains .squid.internal to 
reqmod the ICAP server (to prevent an endless loop).



ICAP server ---> proxy2
the icap server rewrites the paired url instead of the key.
this is because we want to fetch the real object recursively into proxy1 
cache.


in this state proxy1 thinks it's fetching the spoofed key, aka:
http://dl.df.squid.internal//project/npp-compare/1.5.6/compare-1.5.6-unicode.zip

but proxy2 is feeding him:
http://dfn.dl.sourceforge.net/project/npp-compare/1.5.6/compare-1.5.6-unicode.zip

proxy2 ---> proxy1 ---> client

so this specific state is logically like that:
the client thinks he fetches the real file.
proxy1 fetches a spoofed file\url from proxy2.
proxy2 fetches the real file\url from the real server to proxy1.

but the next time a client tries to get one of these objects:
http://dfn.dl.sourceforge.net/project/npp-compare/1.5.6/compare-1.5.6-unicode.zip

http://X.dl.sourceforge.net/project/npp-compare/1.5.6/compare-1.5.6-unicode.zip

http://yyy.dl.sourceforge.net/project/npp-compare/1.5.6/compare-1.5.6-unicode.zip

if proxy1 has the spoofed object:
http://dl.df.squid.internal//project/npp-compare/1.5.6/compare-1.5.6-unicode.zip

in cache, he will serve it from there; otherwise it will be fetched from 
the internet using proxy2.
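
as a rough squid.conf sketch of the peering glue (service names, ports and 
the example acl are placeholders, not the exact production config):

##proxy1
icap_enable on
icap_service yt_req reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
acl real_domains dstdomain .dl.sourceforge.net
adaptation_access yt_req allow real_domains

acl spoofed dstdomain .squid.internal
cache_peer 127.0.0.1 parent 3129 0 no-query no-digest name=proxy2
cache_peer_access proxy2 allow spoofed
cache_peer_access proxy2 deny all
never_direct allow spoofed

##proxy2 - reqmod only on the spoofed domains, to prevent the endless loop
icap_enable on
icap_service yt_back reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
acl spoofed dstdomain .squid.internal
adaptation_access yt_back allow spoofed
adaptation_access yt_back deny all
##end config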



*this is the main concept*


i have a working setup for:
youtube
ytimg
imdb mp4\flv
sourceforge
some of facebook content
bliptv
vimeo
dailymotion
metacafe
av updates.
Filehippo
linux distros repos.  (need to make a change in the db\key 
structure\match rules)




if you have ideas for more features i will be happy to try them.

(there is an access.log sample with some nice data below)

Regards,
Eliezer






--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il
1339139485.017   3384 192.168.10.100 TCP_MISS/200 1782237 GET http://youtube.squid.internal/id=a5ad7854db1d4983&itag=34&range=13-1781759 - FIRSTUP_PARENT/127.0.0.1 video/x-flv
1339139488.829   3720 192.168.10.100 TCP_MISS/200 1782250 GET http://youtube.squid.internal/id=a5ad7854db1d4983&itag=34&range=1781760-3563519 - FIRSTUP_PARENT/127.0.0.1 video/x-flv
1339139492.106   3199 192.168.10.100 TCP_MISS/200 1782250 GET http://youtube.squid.internal/id=a5ad7854db1d4983&itag=34&range=3563520-5345279 - FIRSTUP_PARENT/127.0.0.1 video/x-flv
1339139496.554   4343 192.168.10.100 TCP_MISS/200 1782250 GET http://youtube.squid.internal/id=a5ad7854db1d4983&itag=34&range=5345280-7127039 - FIRSTUP_PARENT/127.0.0.1 video/x-flv
1339139507.309   4015 192.168.10.100 TCP_MISS/200 1782250 GET http://youtube.squid.internal/id=a5ad7854db1d4983&itag=34&range=7127040-8908799 - FIRSTUP_PARENT/127.0.0.1 video/x-flv
1339139521.914   4620 192.168.10.100 TCP_MISS/200 1782250 GET http://youtube.squid.internal/id=a5ad7854db1d4983&itag=34&range=8908800-10690559 - FIRSTUP_PARENT/127.0.0.1 video/x-flv
1339139533.937   3444 192.168.10.100 TCP_MISS/200 1782250 GET http://youtube.squid.internal/id=a5ad7854db1d4983&itag=34&range=10690560-12472319 - FIRSTUP_PARENT/127.0.0.1 video/x-flv
1339139556.221   4127 192.168.10.100 TCP_MISS/200 1782250 GET http://youtube.squid.internal/id=a5ad7854db1d4983&itag=34&range=12472320-14254079 - FIRSTUP_PARENT/127.0.0.1 video/x-flv
1339139571.108   3614 192.168.10.100 TCP_MISS/200 1782250 GET http://youtube.squid.internal

Re: [squid-users] Youtube\dynamic content caching with squid 3.2 |DONE|

2012-06-09 Thread Eliezer Croitoru

POSTED the code on my GITHUB
https://github.com/elico/squid-helpers
https://github.com/elico/squid-helpers/tree/master/squid_helpers/youtubetwist


The ICAP server works even faster than a store_url_rewrite helper and gives 
a lot of benefits.



Regards,
Eliezer


On 08/06/2012 15:05, Eliezer Croitoru wrote:
SNIP


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] error in opening google.com in firefox

2012-06-10 Thread Eliezer Croitoru

a proxy is used as a server by a client that is aware of it.
this is what is called a forward proxy.
you define the proxy address and port in the browser and then use it.
to work with that you define in squid.conf the line:
http_port 3128
or any other port.
if you want to intercept the clients' connections so the proxy will 
always work on their traffic you must use another argument:

http_port 3128 intercept

some more info about it you can find here:
http://www.squid-cache.org/Doc/config/http_port/

are you using linux?
if so did you configure any iptables rules for squid to work?
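
for example, a minimal iptables sketch (assuming clients come in on eth0 
and squid listens on 3128 in intercept mode):

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128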

Eliezer

On 10/06/2012 16:52, Muhammad Yousuf Khan wrote:

sorry, i know only very basic squid,
would you please explain it a bit more.
i am using squid in proxy mode (not transparent)
squid port is 3128. i didn't change it to 80 or 8080

moreover, this error shows up on firefox but it works normally
with explorer, and it is happening only on my PC; for all other clients
it's working fine.

furthermore, squid was working fine for 6 months; this error has
occurred very recently.

On Sat, Jun 9, 2012 at 10:42 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

On 8/06/2012 3:32 a.m., Muhammad Yousuf Khan wrote:


i am receiving this error via firfox but working good in explorer.

this is what i am getting in access.log.

TCP_DENIED/400 2022 GET NONE:// - NONE/- text/html



and this is the exact error that i am getting on firefox.

ERROR
The requested URL could not be retrieved

Invalid Request error was encountered while trying to process the request:

 GET / HTTP/1.1
 Host: www.google.com.pk
 X-VMN-URL:
http://partner37.mydomainadvisor.com/search.php?pr=blekkoid=blekkotb_031_tbv=1_0_1_34ent=antiphishing_dnq=www.google.com.pk



This looks like you are sending port 80 traffic to a forward proxy listening
port. Client-proxy traffic in HTTP has a very different syntax to
client-server traffic. The way Squid identifies how to handle the traffic
is by the traffic mode flag you set on the http_port.

Amos



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] i'm getting a lot of output in my cache.log about status_code acl

2012-06-10 Thread Eliezer Croitoru
2012/06/11 03:07:58| ACL::checklistMatches WARNING: 'OK' ACL is used but 
there is no HTTP reply -- not matching.
2012/06/11 03:07:58| ACL::checklistMatches WARNING: 'REDIRECT' ACL is 
used but there is no HTTP reply -- not matching.



in squid.conf
acl REDIRECT http_status 302
acl OK  http_status 200-206


i'm not sure if i'm right,
but i want to deny caching of 302 code responses.
i understand that the http_status acl is a fast one, but
i'm trying to use it in the cache directive:


cache deny REDIRECT
cache allow all


any comments will be gladly welcome.

Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] i'm getting a lot of output in my cache.log about status_code acl

2012-06-10 Thread Eliezer Croitoru

On 11/06/2012 03:40, Amos Jeffries wrote:

On 11.06.2012 12:12, Eliezer Croitoru wrote:

2012/06/11 03:07:58| ACL::checklistMatches WARNING: 'OK' ACL is used
but there is no HTTP reply -- not matching.
2012/06/11 03:07:58| ACL::checklistMatches WARNING: 'REDIRECT' ACL is
used but there is no HTTP reply -- not matching.


in squid.conf
acl REDIRECT http_status 302
acl OK http_status 200-206


i'm not sure if i'm right,
but i want to deny caching of 302 code responses.
i understand that the http_status acl is a fast one, but
i'm trying to use it in the cache directive:


cache deny REDIRECT
cache allow all



cache access is performed to determine whether a stored response may
be looked up. There is no response and status pre-known at that point.

Amos


ok that i know...
but why is it logging it? isn't it common sense that if you don't have an 
answer you are not supposed to even consider it?




Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] ACL to determine forward or intercept request?

2012-06-11 Thread Eliezer Croitoru

On 12/06/2012 00:14, Guy Helmer wrote:

Is there a way to write an ACL that can determine whether a request has been 
made using Squid as a forward proxy, or if the request has been intercepted?

Guy


you can use the myportname acl for that.
then you can deny any direct access to the intercept port so only 
intercepted traffic will get to squid on this port...
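
a small squid.conf sketch (port numbers and names are just placeholders):

http_port 3128 name=explicitport
http_port 3129 intercept name=interceptport
acl from_intercept myportname interceptport
acl from_forward myportname explicitport
# http_access rules can now treat the two kinds of traffic differently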

this is about it.

Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] https traffic via cache peer with SSL termination enabled on downstream proxy

2012-06-11 Thread Eliezer Croitoru
you can use two cache_peers for the same host, then name them differently 
with a name=, and use a CONNECT method acl to allow access to the 
ssl encrypted upstream connection.
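
a rough squid.conf sketch of the idea (host, ports and names are 
placeholders; the ssl peer option requires squid built with ssl support):

acl CONNECT method CONNECT
cache_peer upstream.example.com parent 3128 0 no-query name=plain_up
cache_peer upstream.example.com parent 3443 0 no-query ssl name=ssl_up
cache_peer_access ssl_up allow CONNECT
cache_peer_access ssl_up deny all
cache_peer_access plain_up deny CONNECT
cache_peer_access plain_up allow all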


Eliezer

On 11/06/2012 16:00, nipun_mlist Assam wrote:

Hi All,

I have a configuration as given below:

client --> downstream-proxy --> upstream-proxy --> cloud

downstream proxy is always squid, while upstream proxy is either squid
or bluecoat.
When SSL termination is enabled on the downstream proxy, I noticed traffic
between the downstream and upstream proxy is not encrypted. That results
in failures when the upstream proxy is bluecoat. It returns a 400 Bad
request error.
The root cause is bluecoat always wants https traffic to be encrypted.
For example, if the data below (a plain text request to
https://accounts.google.com) is sent to bluecoat, bluecoat will return
a 400 Bad request error, but squid will happily get the response and
send it back to the client program.

GET 
https://accounts.google.com/ServiceLogin?service=mail&passive=true&rm=false&continue=http://mail.google.com/mail/&scc=1&ltmpl=default&ltmplcache=2
HTTP/1.1
Accept: image/jpeg, application/x-ms-application, image/gif,
application/xaml+xml, image/pjpeg, application/x-ms-xbap,
application/vnd.ms-excel, application/vnd.ms-powerpoint,
application/msword, */*
Accept-Language: en-IN
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1;
Trident/4.0; GTB7.3; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729;
.NET CLR 3.0.30729; Media Center PC 6.0)
Accept-Encoding: gzip, deflate
Host: accounts.google.com
Via: 1.1 taarusg (squid/3.1.11)
X-Forwarded-For: 192.168.119.8
Cache-Control: max-age=259200
Connection: keep-alive



On the other hand if I disable SSL termination on the downstream
proxy, everything works just fine.
My requirement is that http traffic between the upstream and downstream proxy
should always be non-encrypted, while in the case of HTTPS, traffic
between the downstream and upstream proxy should never be non-encrypted.
How can I configure the downstream squid to always use HTTP CONNECT
for HTTPS even when SSL termination is enabled on the downstream
proxy?
Any help is greatly appreciated.

Regards,
Nipun Talukdar
Bangalore
India



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] i'm getting a lot of output in my cache.log about status_code acl

2012-06-11 Thread Eliezer Croitoru

On 11/06/2012 08:56, Amos Jeffries wrote:



You have your debug options set to display important messages, not just
critical ones.

It is important to be aware your cache deny REDIRECT is not working as
you designed.

Amos
.. do you have any suggestion on how to make it not cache a 302 
code response?


Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] i'm getting a lot of output in my cache.log about status_code acl

2012-06-11 Thread Eliezer Croitoru

On 12/06/2012 04:21, Amos Jeffries wrote:

On 12.06.2012 11:19, Eliezer Croitoru wrote:

On 11/06/2012 08:56, Amos Jeffries wrote:



You have your debug options set to display important messages, not just
critical ones.

It is important to be aware your cache deny REDIRECT is not working as
you designed.

Amos

.. do you have any suggestion on how to make it not cache a
302 code response?



Not with the existing design. processReply() in http.cc needs to be
altered to run some access directive similar to cache in order to cope
with responses.

Is this for your system with ICAP service to cache YouTube? You can add
no-store control to the 3xx response with RESPMOD pre-cache.

Amos


well it is indeed for this service.
it works so great, and i am so proud of it, because it is the first squid-3.x 
based cache object management of dynamic content such as youtube.


the idea was somewhere in my head and i started implementing it a long 
time ago, but for a more complicated setup than youtube.
now i had the muse and took the time to actually test it all and make 
sure that it works great for every implementation i have ever put my 
hands on until now.
i also added some code to collect live data using icap, and for youtube 
alone over the last 3 days i got 600+ hits from a sum of 1900+ requests 
on one server on a 10 Mbit DSL line.
it's about 1.7 MB * 600 = about 9GB out of a 31GB sum that was served 
from cache, for dynamic content only.
it's the first time that i can get statistics from squid's access.log with 
just about one line of commands.
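
for reference, a hypothetical one-liner of that sort (field 4 of the 
default access.log format holds the TCP_HIT/TCP_MISS result code):

awk '$4 ~ /HIT/ {hits++} {total++} END {printf "%d hits out of %d requests\n", hits, total}' access.log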


the no-store idea has been done already.
thanks for the idea.

i am working on a better way to manage the objects in the cache itself, 
alongside the lru\heap algorithms.
by that i mean using statistics on objects to shorten or extend their 
life span in the cache using htcp.


i think that it's a much more efficient way to manage a cache, and not to 
get to a point where you are storing thousands of files in a web server 
directory and serving them to the client.


by the way, there were some stories that trying to cancel the 
range requests of youtube won't work.
well, it seems like it works fine, but the player has a burst\throttle 
mechanism that limits the download to a specific speed to avoid 
bandwidth abuse.
so it's better to cache the range requests so the clients will get a 
better speed from the player..


Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] i'm getting a lot of output in my cache.log about status_code acl

2012-06-11 Thread Eliezer Croitoru

SNIP


Cool. Are you able to write this up into the YouTube wiki page?


Amos

i will need to get a test page first to get a feel for the wiki because i 
have never written one.

also my user for some reason wasn't activated.
i didn't get the mail (my server is sitting on a dynip address so it might 
be that, but it seems odd).


the thing is that it can also be done with a url rewriter, almost the 
same as with icap.

the only problem is that you can't do the icap trick on the response.
but as i'm thinking about it, it can be done with reply_header_replace

the idea that can be implemented with url_rewrite and two squid 
instances is:

first url rewriter pseudo:
strips the dynamic data into memory
composes an internal url with the data.
writes the internal url and original url to a memory database.
sends squid the new url..
squid has acls for the internal domain to get it using the cache peer...


second url rewriter pseudo:
strips the dynamic content data\uses the url as the id
checks in the memory db if it exists, and if so grabs the original url and
sends it to squid. (a rough code sketch follows the diagram below)

client - asks yt
   |
   |
squid1   asks squid2 for yt.squid.internal (the client doesn't know about it)
   |
   |(   mysql memory db for coordination of the acrobatics ;)  )
   |
squid2  gets the original url from yt (squid1 doesn't know about it)
   |here on squid2 i can manipulate the headers rewriting
  \|/

   YT
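
a rough ruby sketch of the first rewriter pseudo (the sourceforge pattern, 
the internal domain and the in-process Hash are illustrative stand-ins; the 
real pairing would live in the shared mysql memory db):

#!/usr/bin/env ruby
# url_rewrite helper: reads "URL ip/fqdn ident method" lines from squid,
# answers with a rewritten url, or an empty line for "no change".
STDOUT.sync = true
db = {}

while line = STDIN.gets
  url = line.split(' ').first
  if url =~ %r{\Ahttp://[^/]+\.dl\.sourceforge\.net(/.+)\z}
    key = "http://dl.df.squid.internal#{$1}"
    db[key] = url   # pair the internal key with the original url
    puts key        # hand squid the internal url
  else
    puts ''         # leave the request untouched
  end
end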

well, it's not store_url_rewrite but acrobatics, so i'm kind of proud 
of it.


i will try the header rewriting on squid to make the whole thing more 
complete and even work without icap.


Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] New open-source ICAP server mod for url rewriting and headers manipulation.

2012-06-13 Thread Eliezer Croitoru
as i was working with ICAP i saw that the GreasySpoon ICAP server consumes a 
lot of memory, and under load takes a lot of cpu for unknown reasons, so 
i was looking for an alternative and didn't find one, but i found a basic 
icap server that i modified to be more modular and also to work with 
instances\forks.


the main goal of this specific modification is to make it simple to use 
for url_rewriting.


tests that were done until now for performance were on:
client---squid\gw---server
1Gbit lan speed between all
client spec - intel atom D410, 2GB ram, opensuse
squid spec - intel atom D510, 2GB ram, Gentoo + squid 3.1.19 + ruby 1.9.3_p125
server spec - 4GB, core i3, opensuse 64 bit, nginx serving a simple html
it works


with the apache benchmark tool:
ab -c 1000 -n 4000 http://otherdomain_to_rewrite/

it served all requests, at about 800+ reqs per sec.

download at: https://github.com/elico/squid-helpers/tree/master/echelon-mod

looking for testers to make sure that the server is good.

notes: the forks aren't built that well, so in a case of termination by a 
runtime error exception only one fork goes down and you must kill all 
the others manually to restart the server.


the logs produce a huge amount of output in a production environment, so it's 
recommended not to use them at all if you don't need them.



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] getting the full list of objects in the cache _dir

2012-06-14 Thread Eliezer Croitoru
i'm looking for the best way to get the full list of currently cached 
objects in a cache_dir.
i can use the purge software to look up objects and purge them, but i 
would rather know, somehow, the specific list of objects in the cache, 
live.
i can use the purge tool to extract all the data at once and parse it 
into a memory db or analyze it on the fly.


maybe some external log daemon for the cache_store logs can do the trick?
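
(for the record, the cachemgr interface can also dump the index; something 
like: squidclient -h 127.0.0.1 mgr:objects)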

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] New open-source ICAP server mod for url rewriting and headers manipulation.

2012-06-14 Thread Eliezer Croitoru

well, first, i'm not such a huge programmer.
writing the code in ruby is very simple, and e-cap might well be faster; 
it could be a good idea to write some code for it.

if you have any example code i will be happy to hear about it.

java is not as bad as it seems to some people, and GreasySpoon offers 
so much:

simple interface + web interface
simple methods
simple logs
simple way to work with threads
embedding external classes and libs.

after all that the memory consumption is not that bad for a heavy load 
environment.


but still, my simple ruby server has a nice interface for the user, which 
i have modularized (not finished yet).

it consumes almost no ram and cpu.
writing a module for my icap server is just about writing the full class 
in plain ruby, including it, and running a simple matcher case to apply the 
class method operation.


if you ask me about caching post responses, i would say that in 
most cases they are not supposed to be cached; POST is a mechanism to send 
the server forms and data.


about url_rewriting: there is already a url_rewrite interface for squid, 
so it seems like a pointless operation to write a new one at all.


the main reason i wrote it was because i needed an independent service 
software and platform to work with for a high load proxy.
i can use one (or two redundant) icap servers as a rewriting platform for 
a whole proxy cluster, but with its current cpu and ram usage i can 
put it on the same server instead of on another machine.


i think that for any server, sustaining a stress of above 900 requests 
per second (with the usage of one fork, when there is an option for more 
with just a settings file, on an intel atom cpu) is an achievement.
the only reason i haven't tested it for more than 1024 requests per second 
of stress is that i haven't had the time to tweak the machine's file 
descriptors to more than the soft limit of 1024.
to make squid run without any cache.log errors i had to shut down the 
access.log writing.


if anyone has more ideas for stress tests, or a spare test environment 
to spare some testing time for the software, i will be happy.


Eliezer

On 14/06/2012 22:44, johan firdianto wrote:

why don't you play with ecap? it should be faster than icap.
greasySpoon is based on java, so i'm not surprised it consumes much memory.
with i/e-cap you could also cache post requests by using the respmod vector.


SNIP


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] getting the full list of objects in the cache _dir

2012-06-14 Thread Eliezer Croitoru

On 15/06/2012 04:59, Amos Jeffries wrote:

On 15/06/2012 7:31 a.m., Eliezer Croitoru wrote:

i'm looking for the best way to get the full list of currently cached
objects in a cache_dir.
i can use the purge software to look up objects and purge them, but i
would rather know somehow the specific list of objects in
the cache, live.
i can use the purge tool to extract all the data at once and parse
it into a memory db or analyze it on the fly.

maybe some external log daemon for the cache_store logs can do the trick?


The cachemgr objects report lists everything in the cache index. We
have not yet updated it to break things down by cache_dir though.

Amos

well, the cache_dir specifics are not important to me.
i'm having a problem while using it with squidclient.
it gets stuck after a couple of objects.

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Big issue on all Squid that we have (2.5 and 3.0) on a web site with IE8/IE9

2012-06-21 Thread Eliezer Croitoru

you can try to clean any cached object for this domain\page.
then deny caching...
try to load the page.
it could be because of an object that was cached but meant for FF instead 
of IE.


Eliezer
On 6/21/2012 8:15 AM, Noc Phibee Telecom wrote:

Hi

thanks for your answer, but i don't think that it's a problem of the web
designer.
This site works with IE8/IE9 when we don't use the squid proxy.

It's only when Squid is used by IE that it doesn't work.

Best regards
Jerome


Le 21/06/2012 05:23, Helmut Hullen a écrit :

Hallo, Noc,

You wrote on 21.06.12:



We have a big issue with our squid proxy. We browse this website
(http://www.laroutedulait.fr) through squid 3.0. We get a blue
background and nothing else (using IE8 & 9).

Seems to be no squid problem but a problem made by the web designer.

The site tries to use a 'class=ie8' or 'class=ie9' when it finds
such a browser.

On my system it shows less information under Internet Explorer than
under Firefox.


No errors in the log. Any idea of the problem?

Just ask the maker of the site (but the contact doesn't work ...).

Best regards!
Helmut







--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] Big issue on all Squid that we have (2.5 and 3.0) on a web site with IE8/IE9

2012-06-21 Thread Eliezer Croitoru
well, i have checked it with IE (you can hit the F12 key to show the 
developer thingy).

there is a js that tries to fetch the jpg files as an xml file.
so it's something with an ie fault, or the js.
the file is a jpg and not an xml, which might be causing the problem.

Eliezer

On 6/21/2012 12:03 PM, Jose-Marcio Martins da Cruz wrote:


Trying to see this page...

With seamonkey (not IE), both in direct (without proxy) or passing
through a squid 3.1...

I can just see a blue and empty page... the same as you.

So it doesn't seem to me that this is a squid problem.

Eliezer Croitoru wrote:

you can try to clean any cached object for this domain\page.
then deny caching...
try to load the page.
it could be because of an object that was cached but meant for FF
instead of IE.

Eliezer
On 6/21/2012 8:15 AM, Noc Phibee Telecom wrote:

Hi

thanks for your answer, but i don't think that it's a problem of the web
designer.
This site works with IE8/IE9 when we don't use the squid proxy.

It's only when Squid is used by IE that it doesn't work.

Best regards
Jerome


On 21/06/2012 05:23, Helmut Hullen wrote:

Hello, Noc,

You wrote on 21.06.12:



We have a big issue with our squid proxy. We browse this website
(http://www.laroutedulait.fr) through squid 3.0. We get a blue
background and nothing else (using IE8 & 9).

Seems to be no squid problem but a problem made by the web designer.

The site tries to use a 'class=ie8' or 'class=ie9' when it finds
such a browser.

On my system it shows less information under Internet Explorer than
under Firefox.


No errors in the log. Any idea of the problem?

Just ask the maker of the site (but the contact doesn't work ...).

Best regards!
Helmut













--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




[squid-users] How can i prioritize Icap services?

2012-06-25 Thread Eliezer Croitoru
using my icap server i implemented a basic url filtering mechanism with 
postgresql\mysql\sqlite.

i want to know how icap services are prioritized in squid.
regular acls are first-hit... allow... deny..
if i want to put the url filtering service above all other icap services 
in squid, how would i do that?


i have 5 icap services:

##config
icap_service service_req reqmod_precache bypass=0 
icap://127.0.0.1:1344/reqmod

adaptation_access service_req deny someacl
adaptation_access service_req allow otheracl


icap_service service_filter reqmod_precache bypass=0 
icap://127.0.0.2:1344/reqmod?smpfilter

adaptation_access service_filter allow all
#end config

i want all urls to be checked by the filtering system, and if the url 
is rewritten i don't want any other icap service to match it.


so what is squid's logic about icap?

Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il



Re: [squid-users] Is Squid Multi-Tenant?

2012-06-26 Thread Eliezer Croitoru

On 6/26/2012 8:48 AM, Deepak Panigrahy wrote:

My requirement is to address different configuration files to a single
squid server, through which I can use multiple configuration with
separate rules/filters/users

~ DP


it's better to first understand the acls of squid.
you can get as much from one squid instance as you need.
you might want to use some external acl software to manage 
everything, but still there are only a few things you will need more than 
one instance for.


if you have more information on your setup and needs we can try to assist 
you in achieving it.


Regards,
Eliezer

On Fri, Jun 22, 2012 at 12:30 AM, Robert Collins
<robe...@squid-cache.org> wrote:

On Fri, Jun 22, 2012 at 5:18 AM, Deepak Panigrahy
<deepak.ii...@gmail.com> wrote:

I am a newbie to Squid and was wondering if Squid is multi-tenant? If
yes, how can we achieve multi-tenancy in Squid?


This depends almost entirely on what you mean. Can you describe what
multi-tenant means to you?

-Rob



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] Cannot run '/usr/lib/squid3/squid_session' process

2012-06-26 Thread Eliezer Croitoru
i don't know this external acl, but i would try to first run it from the 
command line with all the parameters.

/usr/lib/squid3/squid_session -t 60 -b /usr/share/squid3/session.db

you should get a feeling for whether it runs or not.
it's ubuntu, so try sudo and also the su - proxy command...
so you get a feel for root permissions and for the proxy user that 
runs the command for squid.
also make sure that the whole directory tree of /usr/lib/squid3/ has 
permissions that allow the proxy user access.

maybe the specific file has the right permissions but the parent dirs don't.

Regards,
Eliezer

On 6/26/2012 1:23 PM, Stefanie Clormann wrote:

Hi,

I am running an Ubuntu Linux Server 12.04 - 64 bit - Kernel
3.2.0-24-generic.
and the squid3 package (3.1.19-1ubuntu3).

I wanted to try the following:
# Test 2
external_acl_type splash_page ttl=60 concurrency=200 %SRC
/usr/lib/squid3/squid_session -t 60 -b /usr/share/squid3/session.db
acl existing_users external splash_page
deny_info splash.html existing_users
http_access deny !existing_users

and I get this error:
2012/06/26 12:00:52| helperOpenServers: Starting 5/5 'squid_session'
processes
2012/06/26 12:01:55| WARNING: Cannot run '/usr/lib/squid3/squid_session'
process.
2012/06/26 12:02:58| WARNING: Cannot run '/usr/lib/squid3/squid_session'
process.
2012/06/26 12:04:01| WARNING: Cannot run '/usr/lib/squid3/squid_session'
process.
2012/06/26 12:05:04| WARNING: Cannot run '/usr/lib/squid3/squid_session'
process.

Output of:
ls -la /usr/lib/squid3/squid_session:
-rwxr-xr-x 1 root root 10200 Jun 21 11:53 /usr/lib/squid3/squid_session
ls -la /usr/share/squid3/session.db
-rw-r--r-- 1 proxy proxy 0 Mai 16 13:32 /usr/share/squid3/session.db


I also tried a compiled source version (squid-3.1.20 ) - but it gives me
the same error.

What could be the problem?
Stefanie



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] Can't play youtube while logged in

2012-06-26 Thread Eliezer Croitoru
 4291 link-local (directly plugged) machines

tcp_outgoing_address 192.168.254.2
udp_outgoing_address 192.168.254.2

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on localhost is a local user
http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
#include /etc/squid/acl.extended.conf

acl direct dstdomain .

always_direct allow direct



always_direct allow all is better. However, all this does is prevent Squid
sending the request through a cache_peer and forces Squid to pass it to the
DNS (DIRECT) web server for the domain.
You have no cache_peer configured, so it has no use in your config.






# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 192.168.254.2:3080 intercept

# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?
strip_query_terms off

# Uncomment and adjust the following to add a disk cache directory.
cache_dir aufs /mnt/data/squid 9000 16 256
cache_mem 256 MB
maximum_object_size_in_memory 128 KB

# Leave coredumps in the first cache dir
coredump_dir /mnt/data/squid

# WCCP Router IP
wccp2_router 192.168.254.1

# forwarding 1=gre 2=l2
wccp2_forwarding_method 1

# GRE return method gre|l2
wccp2_return_method 1

# Assignment method hash|mask
wccp2_assignment_method hash

# standard web cache, no auth
wccp2_service dynamic 52
wccp2_service_info 52 protocol=tcp priority=240 ports=80

maximum_object_size 700 MB
minimum_object_size 4 KB

half_closed_clients off
quick_abort_min 0 KB
quick_abort_max 0 KB
vary_ignore_expire on
reload_into_ims on
log_fqdn off
memory_pools off
cache_swap_low 98
cache_swap_high 99
max_filedescriptors 65536
fqdncache_size 16384
retry_on_error on
offline_mode off
pipeline_prefetch on

# Add any of your own refresh_pattern entries above these.
#include /etc/squid/refresh.extended.conf
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

refresh_pattern .               0 20% 4320







--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] transparent (intercepting?) without wccp, options?

2012-07-01 Thread Eliezer Croitoru

hey there Ezequiel,
the Cisco RV042 is a nice product but..
100 users on this device might not be the problem.
i think that the main problem is the wan connections themselves.
if it's a cable line with 6 and 3 Mbps, the bandwidth is the problem and 
not the routing.

100 users means that each user gets about 9 Kbps if it is divided equally.
in the case that most of your bandwidth usage is http, squid can help 
you.
i would first make a basic analysis of the network traffic and make sure 
what is consuming the speed.
instead of doing some tricks and replacing the RV042, i would start with a 
linux bridge between the switch and the RV042.


you can use this box to analyze the network traffic with just 2 nics.
also you can block p2p using the ipp2p iptables module and use squid+tproxy 
to serve cached content.


i have used this setup with ubuntu before and it made the difference!
today ubuntu 12.04 LTS will give you everything you need.
if you want you can add snmp and other tools for graphing and other stuff..


with squid as a bridge you do not need to bother yourself with the wan 
settings\load balancing, or with setting the linux box up as dhcp or routing.
what i would recommend for you in this kind of setup is to make the 
squid box a dns server (cache and forward dns).
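
with bind, for example, a minimal cache-and-forward setup is just the 
options block (the forwarder addresses here are placeholders):

options {
        directory "/var/cache/bind";
        recursion yes;
        allow-query { localnets; localhost; };
        forwarders { 8.8.8.8; 8.8.4.4; };
};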


using this setup you can test settings very easily on part of the 
clients or on a test computer.


for network usage analysis you can use ntop; it also gives p2p and other 
protocol detection.


so the setup i propose is not from your list:

5)
wan1---+--------+   +--------------+
       |  RV042 |---| squid\bridge |--switch-+--[lan clients]
wan2---+--------+   +--------------+

- RV042 = LB and wan gateway.
- squid = bridge + NTOP + p2p block\throttling + http cache


things you should consider about pfsense and ClearOS:
- they do have a nice web interface but lack updated software.
- they take up more of your machine than you need.
- they leave you in the big cloud of what the h### happened when i 
clicked apply???


about accessing the squid in this setup: the box is behind nat so it's ok, 
and if you ever decide that you want the squid box to take over the 
RV042's LB and dhcp, you can just use iptables to block access to the squid 
port, or bind squid only to the local net port, and of course the basic way 
of acls to allow only local users access.


about content filtering:
i prefer to use squidguard and not dansguardian.
there is always the option of using some icap server such as qlproxy.

about cache:
i have composed a nice method to cache youtube and some other dynamic 
content video sites using icap and squid.
(now working on embedding filtering in my icap server based on public 
blacklists.)


it's a nice project you have there.

i will be happy to talk with you about it.

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] yahoo mail problem with tproxy (squid 3.1.19, kernel 3.2.21)

2012-07-02 Thread Eliezer Croitoru

On 6/28/2012 11:18 AM, Ming-Ching Tiew wrote:


I have set up a bridge according to instruction here :-

http://wiki.squid-cache.org/Features/Tproxy4

with squid 3.1.19 and kernel 3.2.21.

The configuration is working with most of the sites, except for 
yahoo mail. It is extremely slow with yahoo mail; I can hardly log in 
and out of yahoo mail. However, when the same computer is switched to nat 
REDIRECT using squid intercept, it is working OK, i.e. it is fast enough.

Anyone observed the same issue ?


does it work slowly for all clients or just windows 7? other clients?
i have seen a problem when applying tproxy on a router with a win7 client.
for some unknown reason, using standard routing and intercept everything 
worked fine, but when i switched to tproxy all http access from this win7 
machine was slow as hell until i restarted the machine.

then everything worked fine.
at the same time i had a linux client on the setup that worked without 
any problem.


if you are having the same symptom i think it's a windows problem.

Regards,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] Re: transparent (intercepting?) without wccp, options?

2012-07-03 Thread Eliezer Croitoru

On 7/3/2012 5:05 AM, Ezequiel Birman wrote:

Eliezer == Eliezer Croitoru <elie...@ngtech.co.il> writes:


  hey there Ezequiel, the Cisco RV042 is a nice product but..  100
  users on this device might not be the problem.  i think that the
  main problem is the wan connections them-self.  if it's a cable
  line with 6 and 3 Mbps bandwidth is the problem and not routing.
  100 users means that each user gets about 9 Kbps if will be
  divided equally.  in the case that most of your bandwidth usage is
  http the squid can help you.  i would first make a basic analysis
  of the network traffic and make sure what is consuming the speed.
  instead of doing some tricks and replacing the RV02 i would start
  with linux bridge between the switch and the RV042.

I think you are right, and since upload speeds are even slower that must
be the culprit.

  you can use this box to analyze the network traffic and with just
  2 nics.  also you can block p2p using ipp2p iptables module and
  use squid+trpoxy to serv cache content.

  i have used this setup with ubuntu before and it made the effect!.
  today ubuntu 12.04 LTS will give you everything you need.  if you
  want you can add snmp and other tools for graphing and other
  stuff..


  with squid as bridge you do not need to bother yourself with the
  wan settings\load balancing and setting the linux box as dhcp or
  routing stuff.  what i would recommend for you in this kind of
  setup is to make the squid box as dns server(cache and forward
  dns).

 From what I gather, squid is capable of caching DNS right? or will I
need bind too?
you also need bind, because the clients will query the server and not 
squid.. squid has an internal dns cache.


  using this setup you can test settings very easily on part of the
  clients or test computer.

  for network usage analysis you can use ntop, it also gives p2p and
  other protocols detection.

I am trying it right now, nice!

  so the setup i propose is not from your list:

  5)
  wan1---+--------+   +--------------+
         |  RV042 |---| squid\bridge |--switch-+--[lan clients]
  wan2---+--------+   +--------------+

  - RV042 = LB and wan gateway.
  - squid = bridge + NTOP + p2p block\throttling + http cache

Thanks, I am giving it a try.

I'll start by following

http://wiki.squid-cache.org/ConfigExamples/Intercept/DebianWithRedirectorAndReporting

this is a good way to start, but it won't be a transparent proxy; it 
will be a nat proxy. still, it can be good for your needs, as you have 
nat in the RV042 anyway.



which seems similar to what i am trying to achive. If I am mistaken,
please let me know.

and also most of
http://wiki.squid-cache.org/Features/Tproxy4
tproxy will give you the benefit of some graphing tools with a more 
accurate vision of your clients' requests.



update me

Regards,
Eliezer


  things you should consider about pfsense and ClearOS: - they do
  have nice web interface but lack updated software.  - they take up
  from your machine more then you need.  - they leave you in the big
  cloud of what to h### happen when i did apply???

  about accessing the squid in this setup the box is behind nat so
  it's ok and if you will every decide that you want the squid to
  take over the RV042 LB and dhcp you can just use iptables to block
  access to squid port or bind squid only to local net port and
  of-course the basic way of acls to allow only local users access.

  about content filtering: i prefer to use squidguard and not
  danshguardian.  there always the option of using some icap server
  such as qlprpxy.

  about cache: i have composed a nice method to cache youtube and
  some other dynamic content video sites using icap and squid.  (now
  working on embedding filtering in my icap server based on public
  blacklists.)

May be I'll try that after basic http :)

  it's a nice project you have there.

  i will be happy to talk with you about it.

  Regards, Eliezer

  -- Eliezer Croitoru https://www1.ngtech.co.il IT consulting for
  Nonprofit organizations eliezer at ngtech.co.il


Thanks for sharing your insights.




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] acl to allow sites on SQL or LDAP

2012-07-04 Thread Eliezer Croitoru

On 7/4/2012 5:37 PM, Marcio Merlone wrote:

Hi all,

I am administering 3 squid 3.0.STABLE19-1ubuntu0.2 proxies on 3
different sites, and managed to read group membership on LDAP using
external_acl_type and squid_ldap_group without a problem. The last bit I
need to make this a dream proxy cluster is also store the allowed sites
on LDAP (preferably).

I searched the net for something like this, but all I get is about user
auth, nothing regarding allowed sites list. Can someone help me find the
way for that, if any?

Thanks in advance and best regards.


Hey there Marcio,

squid loads the acls\rules at startup or on reconfigure.
therefore, using regular squid rules, you can't use a DB such as LDAP, mysql 
or any other DB. (there are other open options)
i wouldn't recommend you to use LDAP as a DB for this kind of operation 
because it's pretty slow for it.


the other options are: URL_REWRITE,ICAP,EXTERNAL_ACL.

i wrote a nice ICAP server that was meant to do url manipulation, but it 
seems that it can do much more.
it uses MYSQL as a temp DB to store and retrieve specific data on urls for 
the cache, so it's MYSQL\PG\SQLITE\LDAP ready.


i am working now on an effective way to add a filtering mechanism into it.
i have a basic model that works.
this model should be the same for filtering or as ACLS; you will just 
need to change the destination page to any page you want, like porn is 
not available right now, please try again later at home, or other nice 
pages you like.


if you are willing to do the testing with me and build some skeleton 
for it to fit sysadmins, i will be more than happy to work on it.
the basic domain match is pretty simple to implement and it's kind of 
done already.


the next thing to be done is the dstdomain .example.dom wildcard.
for regex acls i might use some other technique: load them from the 
DB into memory, and only when the DB changes update the regex in memory.


regex is a very slow acl and basically should be used wisely.

talk with me

Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] Re: transparent (intercepting?) without wccp, options?

2012-07-04 Thread Eliezer Croitoru

On 7/5/2012 4:48 AM, Ezequiel Birman wrote:

Amos == Amos Jeffries <squ...@treenet.co.nz> writes:


  On 04.07.2012 15:54, Ezequiel Birman wrote:
Eliezer == Eliezer Croitoru <elie...@ngtech.co.il> writes:
 
  snip

  
http://wiki.squid-cache.org/ConfigExamples/Intercept/DebianWithRedirectorAndReporting
this is a good way to start but it wont be a transparent
  proxy but  a nat proxy but it can be good for your needs as
  anyway you have  nat in the RV042.
 
  Are you sure? The only mention to nat in is in order to redirect
  port 80 to 3128 on squid box. This is the intro:
  ...

  Yes. There are 4 protocol layers involved.  ebtables - rules stops
  it being a bridge transparent relay/proxy and makes it routed
  traffic.  iptables - rules use NAT (interception proxy) instead of
  TPROXY (transparent proxy).  squid - config file uses
  URL-rewriters to prevent Squid being a HTTP protocol transparent
  proxy (HTTP definition of transparent proxy is the Squid
  default behaviour).

  There is a lot of people confused by the meaning of the word
  transparent. With good reason, it has been used out of context
  so much.

Where should I start then? Could yo point me to some doc, tutorial or
config example to implement what Eliezer suggested? I mean beside the
books which I didn't buy yet.

Regards



don't worry!
i don't know anyone that masters linux and got it all from books he didn't 
buy :)


it's pretty simple to implement as long as you understand the concepts.
you will just need to practice and see how all of it actually fits 
together like a puzzle.


start with a bridge interface and the bridge tools.
it depends on what linux distro you are using.
debian is a nice and simple one.
you need to install the bridge tools + ebtables and configure the bridge 
interface over two ethernet interfaces.

the next step is to add the bridge interface ip address and default route.
all the above can be done in the /etc/...somewhere
this link: 
http://wiki.debian.org/BridgeNetworkConnections#Libvirt_and_bridging

can help you a bit.

on debian it will work just like that.. config.. apply settings.. 
connect one cable.. connect the second cable... done.
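
for reference, a minimal /etc/network/interfaces sketch (interface names 
and addresses are placeholders):

auto br0
iface br0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0 eth1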


after that you can install\compile squid 3.1.
i will be here to help if you need something.

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] acl to allow sites on SQL or LDAP

2012-07-05 Thread Eliezer Croitoru

On 7/5/2012 3:10 PM, Marcio Merlone wrote:

On 04-07-2012 22:19, Eliezer Croitoru wrote:

SNIP

the other options are: URL_REWRITE,ICAP,EXTERNAL_ACL.

Didn't know about ICAP. Sounds the way to go.


SNIP


if you are willing to do the testings with me and built some skeleton
for it to fit sysadmins i will be more then happy to work on it.

Right now my needs are really basic, just a plain group+sites list
match. But the needs may grow as features become available. :)

well, squid and ICAP do have icap_client_username_header 
X-Client-Username,
which allows the ICAP server to identify the user and, based on that, the group,
but i will need to do some coding to fetch the user's filtering group.
i don't know, but if an ldap user is in more than one group it will need 
some more coding and database structure plans.
so if you or anyone reading this has some idea on how to implement the 
database\table structure to fit multiple groups, i'm reading.



i do have one idea, but it was meant for filtering and not for group acls:
use filtering levels\weights (numbered) like:
#csv format: domain, weight
.porndomain.com, 100
.facebook.com, 20
.google.com, 10
.newssite.com, 40
#end of csv
i don't have specific sites in mind, but rather categories that are 
allowed or denied.
using numbers can benefit the lookup speed in mysql as a base index for 
the acl match.


if you have lists of sites to allow or deny for a group it will give me 
some grounds to think of options.




the basic domain match is pretty simple to implement and it's kind
of done already.

That' it for now.



ok, i have implemented the basic, fastest dstdomain acl match method i was 
thinking of, so we can use either an exact match or a domain wildcard.
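
the matching logic is roughly like this (a stripped-down ruby sketch; in 
the real server the acl rows live in mysql and the lookup is one IN query 
instead of a Set):

require 'set'

# illustrative acl table
ACL = Set.new(['.example.com', 'www.exact-site.net'])

def dstdomain_match?(host)
  return true if ACL.include?(host)              # exact entry
  labels = host.split('.')
  until labels.empty?                            # ".example.com" wildcards
    return true if ACL.include?('.' + labels.join('.'))
    labels.shift
  end
  false
end

dstdomain_match?('video.example.com')  # => true, via .example.com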



the next thing to be done is the dstdomain .example.dom joker.
about regex acls i will might use some other technique to load it from
DB into memory and only when the DB changed to update the regex into
memory.

regex is a very slow acl and basically should be used wisely.

Does your project has a home-page? I'll be glad to test and help.


i'm using github to host the stable code:
https://github.com/elico/echelon
i didn't release any code yet regarding the filtering mechanism because 
it's not polished and is messy with notes in it.

i wrote it in ruby.
my TODO list for the project is:
polish the basic mysql\pgsql\mssql\sqlite\ldap simple interface for 
usage in the server for queries.

polish my cache module.
polish the dstdomain matcher.
ADDED now: write user-related code to match a simple mysql userdb.
write some user code related to ldap users and groups.


i will be glad if you are able to write a class with a couple of specific 
methods to find a user\group (match) in ldap.


i think i will write some basic html file on the project.

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] acl to allow sites on SQL or LDAP

2012-07-05 Thread Eliezer Croitoru

On 7/4/2012 5:37 PM, Marcio Merlone wrote:

Hi all,

I am administering 3 squid 3.0.STABLE19-1ubuntu0.2 proxies on 3
different sites, and managed to read group membership on LDAP using
external_acl_type and squid_ldap_group without a problem. The last bit I
need to make this a dream proxy cluster is also store the allowed sites
on LDAP (preferably).

I searched the net for something like this, but all I get is about user
auth, nothing regarding allowed sites list. Can someone help me find the
way for that, if any?

Thanks in advance and best regards.


i added some new features and changed some methods in the server:
https://github.com/elico/echelon

added a method to redirect using a 302 redirection in case you don't want to 
just rewrite the url.
also added a cache module as a preparation to move from my greasyspoon to 
Echelon-only mode.


added matching of squid-like dstdomain acls using a mysql db as storage for 
the dstdomains.


#can block spyware\porn\proxy\others
added matching of squidguard blacklist domain acls using a mysql db as 
storage for the list of domains.


i will post some more info later on how to use it etc.

if you have only a couple of groups, in the meanwhile we can use icap request 
urls and an ldap external_acl to match the group and the access to a specific 
namespace, and for each one of the groups maintain a separate block acl table.



Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] Re: transparent (intercepting?) without wccp, options?

2012-07-06 Thread Eliezer Croitoru

On 7/6/2012 5:16 AM, Ezequiel Birman wrote:

Eliezer == Eliezer Croitoru <elie...@ngtech.co.il> writes:


  On 7/5/2012 4:48 AM, Ezequiel Birman wrote:

SNIP

  dont worry!  i dont know anyone that masters linux and got it all
  from books he didn't buy :)

XD I meant, of course, the two squid books, Begginer's Guide and
advanced. Anyway, if i get this right maybe I'll write a tutorial
myself.

it's nice to read these books but most of what you need to know is just 
out there waiting for you to read.

SNIP

I am using CentOS 6.2, for no better reason that I use fedora at home.

Bridge is working, then installed squid via yum.

squid-3.1.10-1.el6_2.4.x86_64


for centos 6.X you can use a fedora 16 rpm of the latest squid version.
3.1.10 is pretty old..
this:
http://rpm.pbone.net/index.php3?stat=26dist=73size=1869309name=squid-3.1.19-1.fc15.x86_64.rpm

will give you some other rpm files for 3.1.19

SNIP

What now? If I understand correctly, I need to set up rules in ebtables
to drop (deviate) http packets. Then set iptables in order to redirect
to port 3129 (tproxy), and that's it? Am I right?

In http://wiki.squid-cache.org/Features/Tproxy4#Routing_configuration I
see rules applied to eth0, should i rewrite br0 in place of eth0?

Should i copy the setup from
http://wiki.squid-cache.org/Features/Tproxy4#iptables_Configuration
without changes?

Already did: setsebool -P squid_connect_any=1 squid_use_tproxy=1. By
the way, i think setsebool variables don't accept yes as a value.

Anything else?

Thanks for your time

i would start with selinux off, because you probably won't need it for 
this system and it will make sure that squid runs; after that you can 
try to use a full selinux setup.


the next steps are:
setup squid for tproxy at port 3129 on all interfaces, but the 3128 only 
on loopback if you don't want clients to access it directly.

#squid.conf
http_port 127.0.0.1:3128
http_port 3129 tproxy
#end
you also need to allow access for the lan clients in the acls.
set the cache dir size etc..

load iptables modules + iptables rules.
load ebtables rules

the rp_filter thing should be set for the real interfaces.
i will give you my tproxy script.

#!/bin/sh
CLIENT_IFACE=eth1
INET_IFACE=eth0

ebtables -t broute -F
ebtables -t broute -A BROUTING -i $CLIENT_IFACE -p ipv4 --ip-proto tcp \
  --ip-dport 80 -j redirect --redirect-target DROP
ebtables -t broute -A BROUTING -i $INET_IFACE -p ipv4 --ip-proto tcp \
  --ip-sport 80 -j redirect --redirect-target DROP


cd /proc/sys/net/bridge/

for i in *
 do
   echo 0 > $i
 done
unset i


#i like to load the iptables modules by myself:
modprobe ip_tables
modprobe xt_tcpudp
modprobe nf_tproxy_core
modprobe xt_MARK
modprobe xt_TPROXY
modprobe xt_socket
modprobe nf_conntrack_ipv4
sysctl net.netfilter.nf_conntrack_acct=1

for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do
 echo 2 > $i
done

#add routes
ip route flush table 100
ip rule del fwmark 1 lookup 100
ip rule add fwmark 1 lookup 100
ip -f inet route add local default dev lo table 100

echo flushing any exiting rules
iptables -t mangle -F
iptables -t mangle -X DIVERT

echo creating rules
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -s 192.168.1.0/24-j 
TPROXY --tproxy-mark 0x1/0x1 --on-port 3129


#i use conntrack to flush the old sessions so all the new ones will be 
redirected to squid.

conntrack -F
#i have used a router so i needed to flush the routes cache
ip -s route flush cache
#end

Eliezer



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] Re: transparent (intercepting?) without wccp, options?

2012-07-06 Thread Eliezer Croitoru

On 7/7/2012 4:52 AM, Amos Jeffries wrote:
snip


#i use conntrack to flush the old sessions so all the new ones will be
redirected to squid.
conntrack -F


This needs to be noted as quite dangerous. It will force all existing
connections into the NEW state and pass them through Squid
*immediately*, which will result in Squid rejecting all the invalid
half-completed HTTP transactions.
  New connections will go through TPROXY and get conntrack records
associated with them anyway, without need of a flush.
  Idle HTTP connections are the exception here. The next packet Squid
sees is valid HTTP, so they are not rejected.


thanks for the note.
indeed you are right, and i have another script that i have used to flush 
only sessions matching specific criteria; but this one was really meant 
only as an init\startup script, so no harm should be done there unless 
the admin is really into reconfiguring the server every couple of 
minutes. a sketch of the selective flush is below.
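
a hedged sketch of the selective flush, assuming conntrack-tools syntax 
(double-check the option names against your conntrack version):

#!/bin/sh
#delete only the tracked sessions whose original destination port is 80,
#leaving every other connection's state alone
conntrack -D -p tcp --orig-port-dst 80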



#i have used a router so i needed to flush the routes cache
ip -s route flush cache
#end

Eliezer






--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




[squid-users] Question about ICAP.

2012-07-06 Thread Eliezer Croitoru

i already asked once before.
i want to prioritize ICAP services in a specific order, like:
basic filtering, then depth filtering


so i use: adaptation_service_chain ??
as far as i understand it will make the checks in order.
and what if one of the ICAP services has a deny acl for the url?
let's say

#request: http://www.google.com/
acl google dstdomain .google.com
adaptation_service_chain filterChain urlFilter logger1 logger2
adaptation_access urlFilter allow all
adaptation_access logger1 deny google
adaptation_access logger1 allow all
adaptation_access logger2 allow all

?

another question is:
ICAP respmod_precache from squid, as i have seen, will send the request url.
and i'm not sure (i don't remember the ICAP rfc) if RESPMOD is supposed 
to send the request url as part of the protocol.
can i send back in the RESPMOD the request url, and by that change the 
url that the content will be stored at?

in the Encapsulated header i am getting the request and the response.
in a REQMOD i am responding with:
Encapsulated: req-hdr=0, null-body=some_byte_number
maybe i can respond to a RESPMOD with a header like:
Encapsulated: req-hdr=0, res-hdr=583, res-body=850

and change the request header.. then on squid the cached url will be 
the one from the ICAP server response, and then there is no longer any 
need for store_url_rewrite at all, because ICAP can replace url_rewrite 
and store_url_rewrite..


Thanks,
Eliezer

this is a basic ICAP RESPMOD session sample to work with..:
RESPMOD icap://127.0.0.1:1344/respmod ICAP/1.0
Host: 127.0.0.1:1344
Date: Sat, 07 Jul 2012 03:42:40 GMT
Encapsulated: req-hdr=0, res-hdr=583, res-body=850
Preview: 0
Allow: 204

GET http://www.google.com/ HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20100101 Firefox/13.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: 
PREF=ID=3f1a1925884751e3:U=726510ee09cc5141:FF=0:LD=en:TM=1317970640:LM=1339099594:GM=1:S=VsYoiMXWKW5OPJaK; 
NID=60=hSChbnc5ZvZSldPGywiQG9OkjCYFC9NNLXWHWg84bCsDeD1g7mvD3uN0nObnb17DnuWLeAi5nsmgHvqlbSvV_9qJuHUqbT0j9q1ydyCodwqCvxLrF-yd69ZzBHf5xxZyPyBn_0KkOxbbYH2YAGyJuEU; 
OGPERM=W6%3D0.0.4.60


HTTP/1.1 302 Moved Temporarily
Location: http://www.google.co.il/
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Date: Sat, 07 Jul 2012 03:43:44 GMT
Server: gws
Content-Length: 221
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN

0

ICAP/1.0 100 continue

dd
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.co.il/">here</A>.
</BODY></HTML>

0

ICAP/1.0 200 OK
ISTag: GreasySpoon-1.0.8-01
Host: 127.0.0.1:1344
Encapsulated: res-hdr=0, res-body=295
Connection: close

HTTP/1.1 302 Moved Temporarily
Location: http://www.google.co.il/
Cache-Control: no-cache, no-store, must-revalidate
Content-Type: text/html; charset=UTF-8
Date: Sat, 07 Jul 2012 03:43:44 GMT
Server: gws
Content-Length: 221
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN

dd
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.co.il/">here</A>.
</BODY></HTML>

0








--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il



Re: [squid-users] Question about ICAP.

2012-07-07 Thread Eliezer Croitoru

On 7/7/2012 7:52 AM, Amos Jeffries wrote:
SNIP


another question is:
ICAP respmod_precache from squid has i have seen will send the request
url.
and i'm not sure (i dont remember ICAP rfc) if the RESPMOD suppose to
send the request url as part of the protocol.
can i send in the RESPMOD the request url and by that change the url
that the content will be stored at?


I don't believe so. The RESPMOD is pre-cache, but to Squid that only
means the cache location has been determined, opened and awaiting data.
These services are modifying what will fill into that.  You need a
REQMOD service to modify any request details. Squid REQMOD has carefully
been made pre-cache as well so that the caching properties can be modified.

But you don't need store_url to de-duplicate when ICAP can replace the
whole video with a redirect to the previously stored video's URL.

which i have a problem doing, because i then need to have some 
monitoring over the cached object, which is not possible now.



in the Encapsulated header i am getting the request and the response.
in a REQMOD i am responding with:
Encapsulated: req-hdr=0, null-body=some_byte_number
maybe i can response to a RESPMOD with header like:
Encapsulated: req-hdr=0, res-hdr=583, res-body=850



To modify the response sometimes the request details are required. That
does not mean you can modify the request details. At that stage they are
read-only.

well yes, i took the time to read a bit, and it seems that the ICAP 
design does not allow changing the request on the response, but does 
allow giving a full response on a reqmod.



and to change the request header.. then on squid the cached url will
be the one from the ICAP server response and then there is no longer
need for store_url_rewrite at all because ICAP can replace
url_rewrite and store_url_rewrite..


If you want to look into updating ICAP to pass back an altered key for
Squid we can look into that as a store_url replacment. Store URL
de-duplication was an experimental feature which never really stabilized
properly for the YT use-case it was supposed to handle (that nasty
redirect patch and the recent multi-encoding issues).

There are other possibilities as well, Digest: and alternative-URI Link:
location features of HTTP need better support in Squid and could be used
to replace store_url features.

Digest: with a cache indexed on object digest hashes allows client some
control over when de-duplication is performed. By requesting a hash
match when URL MISSes  (or not).

The Link: support would be particularly useful in replacing store_url.
It allows responses to register multiple duplicating sets of URL. With
one reply Squid could index the alternatives and HIT on multiple other
URLs in future requests. This one would be controllable with RESPMOD,
Link being a response header.


and my question: what is supported by squid these days?
the Link header is a very nice idea.
it can be an elegant solution, because it gives ICAP the advantage of 
inspecting all the headers and therefore will make it better than the 
old store_url and also other stdin\stdout helpers.

is there any status on the Link header?

snip

0

ICAP/1.0 200 OK
ISTag: GreasySpoon-1.0.8-01
Host: 127.0.0.1:1344
Encapsulated: res-hdr=0, res-body=295
Connection: close

HTTP/1.1 302 Moved Temporarily
Location: http://www.google.co.il/
Cache-Control: no-cache, no-store, must-revalidate


Strange modification.

no-store overrides no-cache and must-revalidate, both of which only
operate on stored content sometime in the future. It is a useless waste of
bytes placing all three in one cache-control header.

Squid being a shared proxy, private and no-store are handled the
same. So all this does is prevent the browser caching the 304 on the
user's machine.


well this was just an example which was used for testing some time ago.




Amos



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] Re: transparent (intercepting?) without wccp, options?

2012-07-09 Thread Eliezer Croitoru

On 7/9/2012 7:00 AM, Ming-Ching Tiew wrote:





- Original Message -


for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do
echo 2 > $i
done


Really strange. I have never been able to get tproxy to work unless I switch the 
rp_filter to 0.

When rp_filter is 2, I could sniff the traffic, but somehow squid is not able 
to see it.


i do know that it's different on Ubuntu and other distros, for unknown reasons.

if it works with 0 then let it be...
i think that debian comes with 0 as the default.

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il






Re: [squid-users] Uploads not working behind squid proxy

2012-07-11 Thread Eliezer Croitoru

On 7/12/2012 1:21 AM, Crawford, Ben wrote:

As requested, a more detailed squid.conf:
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl localnet src 10.161.128.0/20
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
cache_peer 10.55.240.250 parent 3128 3130 no-query default login=PASS
http_access allow manager localhost
http_access allow localnet
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
http_port 10.161.128.11:3128 intercept
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Packages(.gz)*)$  0   20%     2880
refresh_pattern .               0       20%     4320

Ben

well, the answer is in your other mail + the squid.conf:
Without the cache_peer I can not get to any sites at all.  All
internet (well, http and https) traffic on our network must go through
the parent proxy, either directly or through a local child proxy.

the proxy tries to connect to the upstream origin server directly to get 
access, because you don't have an explicit never_direct allow all acl 
defined. so POST and other requests that require direct access will then 
be served by accessing the origin server.

you must be explicit with cache_peer acl.
replace:
##
cache_peer 10.55.240.250 parent 3128 3130 no-query default login=PASS
##
with
##
cache_peer 10.55.240.250 parent 3128 3130 no-query default login=PASS name=upstream

cache_peer_access upstream allow all
never_direct allow all
##

this will allow and will force all traffic through the upstream proxy 
server.


Good luck,
Eliezer

SNIP
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




[squid-users] i'm having a little performance trouble with squid + ICAP server.

2012-07-12 Thread Eliezer Croitoru
i am using squid 3.1.19 for testing, and in the next couple of days i 
will use squid 3.2 because it has a couple of new icap options.


i have added some caching to my icap server, and the stress tests show a 
throughput of about 4000 requests per second.
i raised the linux FD limit to 65535 for squid and my server to support 
the stress testing.
it handles connections to both mysql and redis; the redis connection is 
persistent and the mysql one is not.

mysql requests are done only if redis doesn't have the needed data.

so when i'm testing squid using apache benchmark, after about 2000 
finished connections squid stops passing icap requests, as if it were a 
failed ICAP service, until the recovery time (30 secs).
i have set the bypass on the icap service to on to allow the stress test 
to continue; in squid.conf that looks roughly like the sketch below.
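
a hedged sketch (the service name and ICAP url are illustrative, not my 
real config):

#squid.conf
icap_service service_filter reqmod_precache bypass=on icap://127.0.0.1:1344/reqmod
adaptation_access service_filter allow all
#end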


so squid continues to serve the requests but stops making icap queries.

any direction on what to look for?

Thanks,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il



Re: [squid-users] Using ACLs with ICAP/SquidClamAV

2012-07-12 Thread Eliezer Croitoru

On 7/12/2012 11:17 PM, Peter H. Lemieux wrote:

This is my first posting.  Please be gentle!

I've run Squid in many arrangements but only recently have I been using
the ICAP client to invoke SquidClamAV.  I've browsed the wiki and
searched on Google, but I can't seem to figure out how I might use ACLs
to control when a request gets passed to the ICAP server.

We have a Windows server that wants to download an update file from
windowsupdate.com.  That file triggers the known ClamAV false positive
W32.Virut.Gen.D-159.  I'd like to write an ACL so that objects requested
from this machine's IP address are not passed to the ICAP server but
sent directly to the requesting machine.

I've written lots of ACLs in the past to exempt hosts, URL regexes, and
the like, but I can't seem to figure out how to do this with an ICAP
request. I've looked at the documentation for configuration file
directives like adaptation_access, icap_service, and the like, but I
can't seem to find anything that tells me how to use ACLs with those.
Can anyone point me to some documentation I might read, or suggest some
methods to use ACLs with ICAP?

Thanks!


Peter


use the logic of acls:

##start
#instead of 192.168.0.1 use the machine ip
acl my_machine src 192.168.0.1

icap_service service_av reqmod_precache bypass=0 icap://clamavserver:1344/reqmod

adaptation_access service_av deny my_machine
adaptation_access service_av allow all
##end
That is all

Best Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




[squid-users] squid 3.2.0.16+ not honoring hierarchy proxy settings on intercept and tproxy mode

2012-07-12 Thread Eliezer Croitoru

i have filed a bug: http://bugs.squid-cache.org/show_bug.cgi?id=3589

and attached the draft on the bug from a month ago.

working with a hierarchy of proxies on squid 3.1.19 was fine, but on 
3.2.0.16+ i'm having some problems.


compilation options:
www1 ~ #  /opt/squid3119/sbin/squid -v
Squid Cache: Version 3.1.19
configure options:  '--prefix=/opt/squid3119' 
'--disable-maintainer-mode' '--disable-dependency-tracking' 
'--disable-silent-rules' '--enable-inline' '--enable-async-io=8' 
'--enable-storeio=ufs,aufs' '--enable-removal-policies=lru,heap' 
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores' 
'--enable-icap-client' '--enable-follow-x-forwarded-for' 
'--enable-digest-auth-helpers=ldap,password' '--enable-arp-acl' 
'--enable-esi' '--disable-translation' 
'--with-logdir=/opt/squid3119/var/log' 
'--with-pidfile=/var/run/squid3119.pid' '--with-filedescriptors=65536' 
'--with-large-files' '--with-default-user=proxy' 
'--enable-linux-netfilter' '--enable-ltdl-convenience' '--enable-snmp' 
--with-squid=/opt/src/squid-3.1.19

www1 ~ #  /opt/squid3217/sbin/squid -v
Squid Cache: Version 3.2.0.17
configure options:  '--prefix=/opt/squid3217' 
'--with-default-user=proxy' '--enable-linux-netfilter' 
'--with-filedescriptors=65536' '--enable-underscores' 
'--enable-storeio=ufs,aufs' '--enable-delay-pools' '--enable-esi' 
'--enable-icap-client' '--enable-ssl' '--enable-forw-via-db' 
'--enable-cache-digests' '--enable-follow-x-forwarded-for' 
'--enable-ssl-crtd' '--enable-auth' '--disable-translation' 
'--disable-auto-locale' '--with-large-files'


Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] i'm having a little performance trouble with squid + ICAP server.

2012-07-12 Thread Eliezer Croitoru

On 7/13/2012 4:16 AM, Ming-Ching Tiew wrote:

Sorry I am offering no help but I am interested to know how do you set
up a stress test environment.
I supposed it's an automatic script based stress tests ?

Rgds.




well, it's pretty simple.
my setup is like this:
gw\dns\dhcp\cache\icap = server (intel atom d510, 2gb ram, 500GB sata HD)
windows 7 = client (core i3, 4gb ram..)
linux = client (intel atom d410, 2gb ram, 160gb)

the network is 1Gbit.
wan = 5Mbit

i have a vm on the core i3 with nginx that serves static pages.

to test the icap server i wrote a ruby script and changed the linux 
system's ulimit to 65535.
the test was to send a specific icap request that involves a filtering 
query, get at least one line back, and then close the connection, 
because if i got any of the data back the processing by the icap server 
was done.
i measured the timestamp before starting the connection and after it, 
then calculated the time between them, and reported the time only if 
it's more than 0.1 secs long; less than that is far more than sufficient.


i wrote two scripts, one with ruby forks and the other with threads; a 
sketch of the threaded one is below.
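
a minimal sketch of the threaded variant, under stated assumptions (the 
host, port and the ICAP request itself are illustrative placeholders, 
not my real test payload):

##start
#!/usr/bin/ruby
require 'socket'

host = "127.0.0.1"
port = 1344
#an illustrative ICAP request; the real test sent one that triggers a filtering query
request = "OPTIONS icap://#{host}:#{port}/reqmod ICAP/1.0\r\nHost: #{host}\r\n\r\n"

threads = []
1000.times do
  threads << Thread.new do
    started = Time.now
    sock = TCPSocket.new(host, port)
    sock.write(request)
    sock.gets                      #one line back means the server finished its processing
    sock.close
    elapsed = Time.now - started
    puts elapsed if elapsed > 0.1  #report only the responses slower than 0.1 secs
  end
end
threads.each { |t| t.join }
##end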

i looped over sets of 1000 to 4000 requests for between 30 to 60 secs.

the load then builds up, and the connection tracking shows 25000+ 
connections in time_wait.
so the open connections limit was about 4000, with a time_wait backlog 
of about 25000+ (my time_wait is 15 sec).

i ran those tests for hours and it worked great.

this is about direct ICAP access.
then i tested from the linux box with Apache benchmark to the squid 
proxy with the -X option... not intercepted but a forward proxy. and it 
seems like after about 1000 requests squid won't do icap queries (i have 
a live log on stdout from the icap server)


If you want some more data i will be happy to give you some.

Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




[squid-users] Posted in the wiki my nice caching method using a coordinator\ICAP

2012-07-12 Thread Eliezer Croitoru
for those who don't want to mess with ICAP i wrote a ruby coordinator for 
the url_rewrite interface.
the only problem is the logs, which will show one thing while the url 
that actually gets into the cache is the rewritten one.

it can be verified using an ICP\HTCP client.
i wrote an ICP client to verify cached objects' status from the command 
line; it can be found at:

http://www1.ngtech.co.il/icp_client.rb.txt

the draft of the article at: 
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator

Contents

Caching Dynamic Content using a Coordinator
Problem Outline
What is Dynamic Content
File De-Duplication\Duplication
Marks of dynamic content in URL
?
CGI-BIN
HTTP and caching
HTTP headers
HTTP 206\partial content
Dynamic-Content|Bandwidth Consumers
Specific Cache Cases analysis
Microsoft Updates Caching
Youtube video\img
CDN\DNS load balancing
Facebook
Caching Dynamic Content|De-duplicated content
Old methods
Store URL Rewrite
Web-server and URL Rewrite
NGINX as a Cache Peer
Summary of the ICAP solution
Implementing ICAP solution
Alternative To ICAP server Using url_rewrite


Best Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il



Re: [squid-users] WCCP2+tproxy and Cisco LNS

2012-07-14 Thread Eliezer Croitoru

On 7/13/2012 2:33 PM, Wayne Lee wrote:

Hello List

My first post here but have been using squid for a while.

Trying to implement a transparent proxy for some of our DSL users.
I've setup a test LNS on a Cisco 2821, the connections come in via the
standard PPPoA and are sent via L2TP from the provider. Standard stuff
which works.  WCCPv2 is setup and working OK, I can see the packets
arriving on the box. The trouble I'm having is that the packets are
arriving on the squid box but don't seem to be diverted into squid
daemon.

Details

LNS = Cisco 2821, (C2800NM-SPSERVICESK9-M), Version 12.4(3b). LNS is
acting as a router on a stick (1 active interface)

(IP's changed to protect the guilty. NAT is not used in this network)

LNS IP = 172.16.254.253 /30
LNS GW = 172.16.254.254 /30
DSL user IP = 10.10.254.254 /32


SNIP

if you could be more accurate about the cable setup and logic, and not 
just IPs, it would help in understanding things.



Packet traces

traffic from dsl connection directed via wccp to squid

root@squid:~# !tcpdump
tcpdump -niwccp0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on wccp0, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
12:19:54.287278 IP 10.10.254.254.46360 > 80.239.148.170.80: Flags [S], seq 975284290, win 13600, options [mss 1360,sackOK,TS val 2009935 ecr 0,nop,wscale 4], length 0
12:19:54.445694 IP 10.10.254.254.46361 > 80.239.148.170.80: Flags [S], seq 1791319806, win 13600, options [mss 1360,sackOK,TS val 2009975 ecr 0,nop,wscale 4], length 0
12:19:55.285531 IP 10.10.254.254.46360 > 80.239.148.170.80: Flags [S], seq 975284290, win 13600, options [mss 1360,sackOK,TS val 2010185 ecr 0,nop,wscale 4], length 0
12:19:55.445826 IP 10.10.254.254.46361 > 80.239.148.170.80: Flags [S], seq 1791319806, win 13600, options [mss 1360,sackOK,TS val 2010225 ecr 0,nop,wscale 4], length 0

the problem is that the traffic that comes back from the internet is 
supposed to get into the proxy machine, but it's going to the client, 
which is not listening on the same socket.

wccp + tproxy don't play well together!!!
if you run tcpdump on the client machine you will see packets of 
sessions that started on the squid box arriving at it.

you don't need to be stuck on this for 3 days.
just buy a 1Gbit Ethernet card and put a small bridge between the cisco 
and the next hop.





I have followed several guides on the wiki, tried different distro's,
DNAT without Tproxy and now with Tproxy. Any pointers on where I'm
going wrong will be helpful as I've been at this for 3 days now. If I
set this up in a normal network with LAN, WAN and squid being the
gateway device it works in non-transparent and transparent modes. This
feels like an issue with the DSL connections being rejected by squid or
iptables but I'm at a loss to explain where or how.

When tested using the DNAT method the packets were routed via the
squid box although still bypassed the squid daemon, the packets would
return from the webserver but were then dropped. Using the Tproxy
method shows the packets never getting to squid and not leaving the
box to the webserver.

Do I require multiple interfaces on the squid box and maybe use
ebtables or is what I'm trying to achieve possible on 1 interface ?


it depends.
you can always do something with vlans and stuff to make one interface 
act like two.
with tproxy, the traffic that comes from the proxy is the same as the 
one that comes from the client:

10.10.254.254 comes in and 10.10.254.254 comes out.
so the only options you have are:
use some routing technique such as a route-map with next-hop, as 
sketched below.
you can set up the cisco to send traffic to the squid box using one ip 
that squid will use as the gw for the clients' network,

and a second ip to access the net and be reached from the net.
this way squid will be a router on the way.
another option is the bridge thing with two network cards.
you can play with vlans and bridge two vlans, but it's pretty nasty to do so.
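
a hedged sketch of the route-map option in cisco terms (the subnet, 
next-hop ip and interface name are illustrative):

##cisco config
conf t
ip access-list extended to_proxy
 permit tcp 10.10.0.0 0.0.255.255 any eq www
route-map proxy-redirect permit 10
 match ip address to_proxy
 set ip next-hop 172.16.254.2
interface FastEthernet0/0
 ip policy route-map proxy-redirect
end
##cisco config end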

Regards,
Eliezer



Thanks for reading


Wayne




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] WCCP2+tproxy and Cisco LNS

2012-07-14 Thread Eliezer Croitoru
 list of the clients' subnets, you can put together on the 
tproxy machine two different static routing paths:
one as the default GW (vlan2) with a high metric, and, with lower 
metrics, static routes to the clients' subnets (to vlan1).


if you want to work specifically with cisco you can use the CDP protocol 
as a replacement to WCCP web-cache status.

there is a nice tool for linux:
http://code.google.com/p/ladvd/
it takes about a minute to compile and will give you the option to use a 
route-map based on set ip next-hop verify-availability.


well, all this is based on a basic understanding of the cisco 2800 
series capabilities.


in your scenario i think it's better to use squid as a ROUTER with tproxy 
and not as a BRIDGE.



Regards,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




[squid-users] I have stumbled into a very curious problem\issue with icap request passed from squid.

2012-07-15 Thread Eliezer Croitoru

squid 3.1.20
i am not sure about it, but when squid passes requests to icap it sends 
the requests encapsulated in icap format.
if squid gets a CONNECT request, the request is represented as CONNECT 
domain.com:443 something

if it's a GET|POST|PUT, the request is declared as GET http://url/uri HTTP/1.1

but when i get a request that came from another proxy and was 
intercepted... i am getting:

http://proxy-ip:port/ http://url/uri HTTP/1.1
when it's supposed to be:
GET http://url/uri HTTP/1.1

so my topology is:
squid 3.1.10 --> gw+squid3.1.20+icap --> internet.
on the squid 3.1.10 i am running the command:
curl -X http://127.0.0.1:3128/ http://www.domain.com/
and the squid 3.1.20 sends the ICAP encapsulated request as
http://127.0.0.1:3128 http://url/uri HTTP/1.1
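
a side note on that command line, as a hedged explanation (standard curl 
semantics, worth double-checking against your curl version): lowercase 
-x sets the proxy, while uppercase -X overrides the request method 
string, so the invocation above literally sets the method to the proxy url:

#-x / --proxy: send the request through the proxy (the usual intent)
curl -x http://127.0.0.1:3128/ http://www.domain.com/
#-X / --request: replace the HTTP method itself; here the method becomes
#the literal string "http://127.0.0.1:3128/"
curl -X http://127.0.0.1:3128/ http://www.domain.com/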


i hope something can be done about it.

Thanks,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il



Re: [squid-users] NTLM auth to remote server fails through squid

2012-07-16 Thread Eliezer Croitoru

On 7/16/2012 7:05 PM, Peter Olsson wrote:

We're trying to connect to a remote server that
requires authentication. This works fine when
we place the browser client on the Internet, but
when we place the browser client behind squid the
authentication popup just returns without accepting
the login.

can you please be more specific about the topology?
it's kind of foggy to me.
if you can put up some IPs for the devices and the network relationships 
it will be very helpful.

if you can attach squid.conf it will be more efficient.

SNIP

What could be the reason for this auth failure?
What debug values should I use?

NB: This is not about authenticating to the proxy server,
we allow proxy connections from inside without authentication.
The question is about authenticating to an external server
that is out of our control.

please describe in more detail the positions of the client, the proxy, 
and the server.

Eliezer



Thanks!




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] tproxy configuration

2012-07-16 Thread Eliezer Croitoru

On 7/16/2012 1:47 PM, nipun_mlist Assam wrote:

Hi,

Is there anyway to use squid tproxy feature without configuring the
squid box as a router ?

Bridge.

Is it possible to use  WCCP + tproxy combination to achieve the same?


there is an example at:
http://code.google.com/p/lusca-cache/wiki/ExampleTproxy4Linux
that the lusca guy did.
if you ask me, with all the features that cisco devices have, WCCP is 
nice, but i prefer a more explicit way than auto setup.


i have tried using this example for about 4 hours to make WCCP somehow 
work, but it seems like either i did something wrong or it's not 
possible.


what do you want to achieve? everything that can be achieved using WCCP 
can be achieved in other ways with tproxy.


Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] I have stumbled into a very curious problem\issue with icap request passed from squid.

2012-07-16 Thread Eliezer Croitoru

On 7/16/2012 3:36 AM, Eliezer Croitoru wrote:
so i sat on it a bit more:
on the 3.1.10 proxy, in the logs, i am getting this while using curl and 
the proxy:
1342399298.935    680 127.0.0.1 TCP_MISS/200 9590 http://127.0.0.1:3128 http://www.xxx.com/ - DIRECT/67.23.112.226 text/html

and this using wget:
1342463622.152   1585 127.0.0.1 TCP_MISS/200 165744 GET http://www.xxx.com/ - DIRECT/67.23.112.226 text/html


why would the method be "http://127.0.0.1:3128" on the proxy?
how can it be?

Thanks,
Eliezer

some logs from squid 3.1.20 that intercepts the request:
icap log level 9:
2012/07/16 21:24:01.982| AsyncJob constructed, this=0x998cf10 
type=ConnStateData [job240089]
2012/07/16 21:24:02.120| AsyncJob constructed, this=0xab4638 
type=ClientHttpRequest [job240090]
2012/07/16 21:24:02.121| HttpRequest.cc(55) HttpRequest: constructed, 
this=0x3376130 id=5
2012/07/16 21:24:02.121| AsyncJob constructed, this=0x54fb728 
type=AccessCheck [job240091]
2012/07/16 21:24:02.121| AccessCheck.cc(50) AccessCheck: AccessCheck 
constructed for REQMOD PRECACHE
2012/07/16 21:24:02.121| The AsyncCall AsyncJob::start constructed, 
this=0xc35dd0 [call8112457]
2012/07/16 21:24:02.121| AsyncJob.cc(19) will call AsyncJob::start() 
[call8112457]

2012/07/16 21:24:02.121| entering AsyncJob::start()
2012/07/16 21:24:02.121| AsyncCall.cc(32) make: make call 
AsyncJob::start [call8112457]

2012/07/16 21:24:02.121| AccessCheck status in: [ job240091]
2012/07/16 21:24:02.121| AccessCheck.cc(75) check: start checking
2012/07/16 21:24:02.122| AccessCheck.cc(197) isCandidate: checking 
candidacy of 6, group service_filter
2012/07/16 21:24:02.122| ServiceGroups.cc(134) findService: 
service_filter checks service at 0
2012/07/16 21:24:02.122| ServiceGroups.cc(171) findService: 
service_filter has no matching services
2012/07/16 21:24:02.122| AccessCheck.cc(207) isCandidate: service_filter 
ignores

2012/07/16 21:24:02.122| AccessCheck.cc(97) checkCandidates: has 0 rules
2012/07/16 21:24:02.122| AccessCheck.cc(112) checkCandidates: NO 
candidates left

2012/07/16 21:24:02.122| AccessCheck.cc(164) callBack: NULL
2012/07/16 21:24:02.122| client_side_request.cc(669) 
adaptationAclCheckDone: 0x50b24d8 adaptationAclCheckDone called

2012/07/16 21:24:02.123| AccessCheck will stop, reason: done
2012/07/16 21:24:02.123| AsyncJob::start() ends job [Stopped, 
reason:done job240091]
2012/07/16 21:24:02.123| AsyncJob destructed, this=0x54fb728 
type=AccessCheck [job240091]
2012/07/16 21:24:02.123| AsyncJob.cc(139) callEnd: AsyncJob::start() 
ended 0x54fb728

2012/07/16 21:24:02.123| leaving AsyncJob::start()
2012/07/16 21:24:02.284| AsyncJob constructed, this=0x165dbeb8 
type=HttpStateData [job240092]
2012/07/16 21:24:02.470| AsyncJob constructed, this=0x54fb728 
type=AccessCheck [job240093]
2012/07/16 21:24:02.470| AccessCheck.cc(50) AccessCheck: AccessCheck 
constructed for RESPMOD PRECACHE
2012/07/16 21:24:02.470| The AsyncCall AsyncJob::start constructed, 
this=0xc67500 [call8112468]
2012/07/16 21:24:02.470| AsyncJob.cc(19) will call AsyncJob::start() 
[call8112468]

2012/07/16 21:24:02.470| entering AsyncJob::start()
2012/07/16 21:24:02.470| AsyncCall.cc(32) make: make call 
AsyncJob::start [call8112468]

2012/07/16 21:24:02.470| AccessCheck status in: [ job240093]
2012/07/16 21:24:02.470| AccessCheck.cc(75) check: start checking
2012/07/16 21:24:02.470| AccessCheck.cc(197) isCandidate: checking 
candidacy of 6, group service_filter
2012/07/16 21:24:02.470| ServiceGroups.cc(126) findService: 
service_filter serves another location
2012/07/16 21:24:02.470| AccessCheck.cc(207) isCandidate: service_filter 
ignores

2012/07/16 21:24:02.470| AccessCheck.cc(97) checkCandidates: has 0 rules
2012/07/16 21:24:02.470| AccessCheck.cc(112) checkCandidates: NO 
candidates left

2012/07/16 21:24:02.470| AccessCheck.cc(164) callBack: NULL
2012/07/16 21:24:02.471| AccessCheck will stop, reason: done
2012/07/16 21:24:02.471| AsyncJob::start() ends job [Stopped, 
reason:done job240093]
2012/07/16 21:24:02.471| AsyncJob destructed, this=0x54fb728 
type=AccessCheck [job240093]
2012/07/16 21:24:02.471| AsyncJob.cc(139) callEnd: AsyncJob::start() 
ended 0x54fb728

2012/07/16 21:24:02.471| leaving AsyncJob::start()
2012/07/16 21:24:02.792| ConnStateData will NOT delete in-call job, 
reason: ConnStateData::connStateClosed
2012/07/16 21:24:02.792| ConnStateData::connStateClosed(FD 37, 
data=0x998cd58) ends job [Stopped, reason:ConnStateData::connStateClosed 
job240089]
2012/07/16 21:24:02.792| The AsyncCall Initiate::noteInitiatorAborted 
constructed, this=0xc67500 [call8112496]
2012/07/16 21:24:02.792| Initiator.cc(28) will call 
Initiate::noteInitiatorAborted() [call8112496]
2012/07/16 21:24:02.792| AsyncJob destructed, this=0xab4638 
type=ClientHttpRequest [job240090]
2012/07/16 21:24:02.792| AsyncJob destructed, this=0x998cf10 
type=ConnStateData [job240089]
2012/07/16 21:24:02.792| AsyncJob.cc(139) callEnd: 
ConnStateData::connStateClosed(FD 37, data=0x998cd58

Re: [squid-users] tproxy configuration

2012-07-16 Thread Eliezer Croitoru

On 7/16/2012 1:47 PM, nipun_mlist Assam wrote:

Hi,

Is there anyway to use squid tproxy feature without configuring the
squid box as a router ?
Is it possible to use  WCCP + tproxy combination to achieve the same?

well, after digging, i have found it quite unclear how the squid wiki 
examples explain WCCP and TPROXY.


i have found the way to make it all work together.
this site shows perfectly how to set it all up together:
http://bloggik.net/index.php/articles/networks/18-cisco/38-squid-tproxy-wccp

then i got the source squid-users posts at:
http://www.squid-cache.org/mail-archive/squid-users/200906/0602.html
http://www.mail-archive.com/squid-users@squid-cache.org/msg65056.html

i will write it all up in the wiki, in Plain English, explaining all the 
things you need to take into account when implementing it and what can 
go wrong with it.


Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] My PreDefined DownLoad

2012-07-17 Thread Eliezer Croitoru

On 7/17/2012 1:59 PM, Vishal Agarwal wrote:

Dear All,

Is there any way to replace the download location for some client PC via
squid.

Like if somebody is downloading say .torrent file from any location; he
should end up with my predefined .torrent file located in my localhost web
server location torrent file.


Thanks/regards,
Vishal Agarwal



try not to hijack other people's threads, to keep the list in order, please.

it can be done, but it's preferable to redirect them to some html page 
that says something about the restrictions of the network.

you can use the url_rewrite_program.
i wrote some url_rewriter in these examples:
http://wiki.squid-cache.org/ConfigExamples/PhpRedirectors
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/Coordinator#Store_URL_Rewrite

mine uses a regex to find a match.
if it matches, it sends a rewritten url back to squid.
you can use it like this:
##start
#!/usr/bin/ruby
def main
  while request = gets
    request = request.split
    if request[0]
      case request[1]
        when /^http:\/\/.*\.torrent$/
          puts request[0] + " 302:http://server_ip_or_domain/302_torrent_forbidden.html"
        else
          puts request[0] + ""
      end
    else
      puts ""
    end
  end
end
STDOUT.sync = true
main
##end
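
to wire it into squid.conf, a hedged sketch (the helper path and child 
count are illustrative; the concurrency setting matters because the 
script expects a channel-ID as the first token):

#squid.conf
url_rewrite_program /usr/local/bin/torrent_rewriter.rb
url_rewrite_children 5
url_rewrite_concurrency 1
#end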


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] tproxy configuration

2012-07-17 Thread Eliezer Croitoru

On 7/17/2012 2:07 PM, nipun_mlist Assam wrote:

Eliezer,

Thanks for the links. The diagram in the first link is good but I
don't know how to read that language.

Also, squid has a bug regarding its tproxy feature, it never spoofs
the client IP.  I made a small fix for that issue, but that was one
year back and I lost the code with the fix.

Regards,
Nipun Talukdar
Bangalore
India



SNIP
there is no problem with squid and tproxy that won't spoof clients' ip 
if set up correctly.
i will add it later to the squid wiki.

diagram of the network:
http://www1.ngtech.co.il/squid/wccp2.png

setup script:
##start wccp2.sh
#!/usr/bin/bash

echo Loading modules..
modprobe -a nf_tproxy_core xt_TPROXY xt_socket xt_mark ip_gre gre

LOCALIP=10.80.2.2
CISCODIRIP=10.80.2.1
#you must connect the gre tunnel to the cisco router IP identifier.
CISCOIPID=192.168.10.127

echo changing routing and reverse path stuff..
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/ip_forward

echo creating tunnel...
iptunnel add wccp0 mode gre remote $CISCOIPID local $LOCALIP dev eth1
ifconfig wccp0 127.0.1.1/32 up

echo creating routing table for tproxy...
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

echo creating iptables tproxy rules...
iptables -A INPUT  -i lo -j ACCEPT
iptables -A INPUT  -p icmp -m icmp --icmp-type any -j ACCEPT
iptables -A FORWARD -i lo -j ACCEPT
iptables -A INPUT  -s $CISCODIRIP -p udp -m udp --dport 2048 -j ACCEPT
iptables -A INPUT -i wccp0 -j ACCEPT
iptables -A INPUT -p gre -j ACCEPT

iptables -t mangle -F
iptables -t mangle -A PREROUTING -d $LOCALIP -j ACCEPT
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
 --tproxy-mark 0x1/0x1 --on-port 3129

##end

##start add to squid.conf
wccp2_router 10.80.2.1
wccp_version 2
wccp2_rebuild_wait on
wccp2_forwarding_method gre
wccp2_return_method gre
wccp2_service standard 0
wccp2_service dynamic 80
wccp2_service dynamic 90
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80

##end

##cisco config
conf t
ip access-list extended wccp
 permit ip 10.80.3.0 0.0.0.255 any
ip access-list extended wccp_to_inside
 permit ip any 10.80.3.0 0.0.0.255
exit
ip wccp 80 redirect-list wccp
ip wccp 90 redirect-list wccp_to_inside
!gw interface
interface FastEthernet0/0.1
 encapsulation dot1Q 1 native
 ip address 192.168.10.127 255.255.255.0
 ip wccp 80 redirect out
 ip wccp 90 redirect in
exit
!proxy interface
interface FastEthernet0/0.100
 encapsulation dot1Q 100
 ip address 10.80.2.1 255.255.255.0
 ip wccp redirect exclude in
exit
!clients interface
interface FastEthernet0/0.200
 encapsulation dot1Q 200
 ip address 10.80.3.1 255.255.255.0
exit
!route to internet gw
ip route 0.0.0.0 0.0.0.0 192.168.10.201
end
##cisco config end


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] sslproxy_cafile and CA certificats

2012-07-17 Thread Eliezer Croitoru

On 7/17/2012 2:31 PM, Abdessamad BARAKAT wrote:

Hi,

I use squid 3.2 with SSL intercept and I want to check SSL website
certificates, so I need all the well-known CA certificates like an actual
browser has.

Anyone know where I can find the whole CA set (verisign, thawte, ..) in one
file for the directive sslproxy_cafile?

Thanks for any info

there is a package in almost any linux distribution, usually called 
ca-certificates, and you can find more info on that here:

http://packages.debian.org/sid/ca-certificates
you can extract them all from this package; see the sketch below for 
wiring it into squid.conf.
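
a hedged sketch, assuming a Debian-style layout where the ca-certificates 
package installs one combined bundle file:

#squid.conf
#point squid at the distribution's combined CA bundle
#(on Debian/Ubuntu the ca-certificates package provides this file)
sslproxy_cafile /etc/ssl/certs/ca-certificates.crt
#end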

Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il




Re: [squid-users] i'm having a little performance trouble with squid + ICAP server.

2012-07-18 Thread Eliezer Croitoru

On 7/13/2012 9:51 AM, bnichols wrote:

On Thu, 12 Jul 2012 18:44:26 -0700 (PDT)
Ming-Ching Tiew mct...@yahoo.com wrote:


Sorry I am offering no help but I am interested to know how do you
set up a stress test environment. I supposed it's an automatic script
based stress tests ?


  well, you could wget -r entire websites and then loop the script to
  repeat itself, and run that on several machines that are going through
  your squid. it's sort of primitive but it would give you some idea.


or use a normal cache proxy tester that reads a list of urls from a file.
you can take an access.log file of squid and use:
cat access.log | gawk '{print $7}' > /tmp/urls_list.txt


but i do advise you not to run a stress test on other people's resources 
without their approval.
it can cost other people money, while using some dummy VMs with nginx on 
them can do the trick for your basic needs.
if you do a stress test, you'd better plan it, and by doing so know the 
results and consequences.


Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] yahoo mail problem with tproxy (squid 3.1.19, kernel 3.2.21)

2012-07-18 Thread Eliezer Croitoru

On 7/18/2012 11:35 AM, Felix Leimbach wrote:

Hi,

On 07/18/2012 04:28 AM, Ming-Ching Tiew wrote:

When logging out from yahoo mail, it's very slow and eventually there
is an error.


I'm not sure whether this is your problem - but I too had similar
problems with 3.1.19.
Upgrading to 3.1.20 solved the problem - turned out bug 3466 (adaptation
stuck on last single-byte body piece) was the culprit.
Try giving 3.1.20 a shot.

HTH
Felix

by the screenshot he is using 3.1.20.
well, i do not get this problem with either squid 3.1.16-20 or 3.2.0.8-17,
so it can be a network issue (another proxy in the way\routing etc.) or 
a development libs dependency.

from his logs before:
2012/07/01 20:10:16.992| WARNING: HTTP: Invalid Response: No object data 
received for http://mail.yahoo.com/ AKA mail.yahoo.com/

2012/07/01 20:10:16.994| fwdServerClosed: FD 10 http://mail.yahoo.com/
if he is getting the problem i would like to make an effort to reproduce it.

so more data is needed:
OS = linux
32 \ 64 bit = ?
what Distribution ?
uname -a output ?
what are the configure options for squid ? (squid -v output)
if a package has been used, which? (download source).
tproxy as router?
do you intercept ssl?


any data will give more info on the problem.

tcpdump -i any 'port 80' -n
output while the problem occurs will be very good.

iptables-save
ip route
ip rule


some more data will be helpful, instead of just throwing the problem 
into the air with only the log line declaring it.


as for http://mail.yahoo.com/
this is a 302 "HTTP/1.0 302 Moved Temporarily" reply, so it might be 
something with the size of the reply.


try to run
curl  -v  http://mail.yahoo.com/
to see if you get any output while not using squid.

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] yahoo mail problem with tproxy (squid 3.1.19, kernel 3.2.21)

2012-07-19 Thread Eliezer Croitoru

you don't need to recompile the kernel.
the basic kernel of fedora 15 is ok.
i have some issues with fedora, but most of them are due to SELINUX or 
basic misconfiguration.


you don't need the DVD for that.
just use the netinst iso and install a minimal server.
there is a nice RPM of squid for fedora 15 that you can use.

hope you will get it all done well.
(try also the 3.2 RPM)

Eliezer

On 7/19/2012 9:21 AM, Ming-Ching Tiew wrote:



I will setup a new machine and report back. It's will be fedora 15, i386

because that's the latest DVD I have. Need be I will recompile a newer kernel.



SNIP

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] tproxy configuration

2012-07-19 Thread Eliezer Croitoru

On 7/17/2012 6:01 PM, Wayne Lee wrote:

 SNIP 
Many thanks Eliezer.

I still have the same issue in that once the packets arrive on the
squid box they are not actually diverted into the squid daemon and
thus fail.

I have managed to find a working solution and that is to not use wccp
and just built a proper gre tunnel between the squid and cisco router,
the DNAT/Redirect methods then work as expected.


Thanks again


Wayne

if the packets are not diverted into squid there is something wrong with 
your setup.
if you post your squid config, routes, and iptables, i might be able to 
help you.


for me squid works with either tproxy\dnat\redirect + wccp or with basic 
routing rules.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] tproxy configuration

2012-07-19 Thread Eliezer Croitoru

On 7/20/2012 12:35 AM, Wayne Lee wrote:



Sent from my iPad

On 19 Jul 2012, at 19:58, Eliezer Croitoru elie...@ngtech.co.il wrote:


On 7/17/2012 6:01 PM, Wayne Lee wrote:

 SNIP 

if the packets are not diverted into squid there is something wrong with your 
setup.
if you post your squid config, routes, and iptables, i might be able to 
help you.

for me squid works with either tproxy\dnat\redirect + wccp or with basic 
routing rules.

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Hello

I followed your guide on this post although I swapped the wccp redirect 
statements around

You had

  ip wccp 80 redirect out
  ip wccp 90 redirect in

I changed it to
ip wccp 90 redirect in
ip wccp 80 redirect out

No traffic was being redirected down the wccp until I changed it. Everything 
else was as you posted. Packets were not being diverted or tproxy'ed into squid, 
which has been my issue all along. I'm happy to set it up that way again and 
provide whatever debug output is required, as I would prefer the wccp for failover 
purposes. Any/all help and guidance is appreciated.


Regards

Wayne

well, the order doesn't matter, because it's evaluated based on the IN 
and OUT status.
so in any case, whether you put 80 before or after 90, an IN check won't 
be applied on OUT.

it's a one-way check.

anyway i'm happy it works good for you.
i wrote a wiki page about how to set it up with a very nice diagram of 
the topology at:

http://wiki.squid-cache.org/ConfigExamples/UbuntuTproxy4Wccp2

i was thinking about people that run a web cache with a linux router and 
not a Cisco device.
they do not have this kind of a solution, so i was thinking of writing 
some scripts and a small pair of daemons,

one for the linux router and the other for the cache servers.
the one on the router will manage packet marking in the iptables 
PREROUTING table, with maybe some additional dynamic tables,


and the other, on the squid box, will identify that it is still there 
and running.


based on the wccp methods it's pretty simple to implement.
wccp is a binary protocol, while i was thinking to implement it based 
on text + a basic encryption option.


i already wrote a nice pair of helpers that check if a cache peer is 
running and well.
so it's only a matter of signaling the current status from the cache to 
the router every specific predefined interval and making sure that the 
settings are intact.


this guy wrote POTATO:
https://github.com/wisq/potato

with a web interface and stuff for load balancing a couple of dsl lines.
the idea is kind of the same and i think i can make it useful.

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Non-browser applications using NTLM+Squid?

2012-07-19 Thread Eliezer Croitoru

On 7/19/2012 11:29 PM, Baird, Josh wrote:

Hi,

I'm wondering what others are doing about non-browser applications (Anti-virus 
software that fetches updates, instant messengers over HTTP, etc) that sit 
behind a Squid proxy that requires NTLM authentication?  These applications, in 
my experience, use Windows' proxy settings to proxy their outbound traffic, but 
can't speak NTLM, so the application is prevented from proxying any traffic.

Would a Kerberos integrated Squid be a possible solution to this problem?

Thanks,

Josh


very simple.. just allow them all before the authentication acls such as in:

acl updates dstdomain .windowsupdates.microsoft.com .antivirusupdates.org
acl updates1 dst 192.168.0.1/32

http_access allow localnet updates
http_access allow localnet updates1
http_access allow localnet ntlm_auth_helper
http_access deny all


Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] negative ACL

2012-07-19 Thread Eliezer Croitoru

On 7/19/2012 10:47 PM, Rick Chisholm wrote:

I have an NTLM auth proxy, but a number of apps do not seem to be smart
enough to pass credentials and this generates numerous squid
authentication pop-ups for users. I'm trying to eliminate this.

I was thinking of creating a browser ACL with entries the will cover the
browsers in use on the network and then try to use a NOT operator like

http_access allow !known_browsers

before the auth required setting.

thoughts?



this is a very, very bad exploit vector, so i wouldn't ever consider it.
it means that every user that changes the browser id (firefox -> 
about:config -> change variable -> done)

can use your proxy.
if you do such a thing, at the very least use something like the sketch below:
http_access allow localnet !known_browsers
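
a hedged sketch of the matching acl (the User-Agent regexes and subnet 
are illustrative; tune them to the browsers actually in use on your network):

#squid.conf
acl localnet src 192.168.0.0/16
acl known_browsers browser -i ^Mozilla ^Opera
http_access allow localnet !known_browsers
#end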

i would suggest analyzing these apps.
most of the time they use specific domains that you can allow without 
any ntlm auth.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] yahoo mail problem with tproxy (squid 3.1.19, kernel 3.2.21)

2012-07-20 Thread Eliezer Croitoru
 of the bridge, or more precisely, do transfer the packet to the 
next levels of the bridge.


after all this, a good way to understand bridges is:
http://ebtables.sourceforge.net/br_fw_ia/br_fw_ia.html
it has great diagrams and packet flows.

so the rules you actually need are:
ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-protocol tcp \
 --ip-destination-port 80 -j redirect --redirect-target DROP
iptables -t nat -A PREROUTING -m physdev --physdev-in eth0 -p tcp \
 --dport 80 -j REDIRECT --to-port 3129


that is, if you are willing to be explicit.
but actually the iptables rule alone will be sufficient to do the drill 
in intercept mode.
the bridge rule in this case is just to make it a routing issue sooner 
rather than a bridge issue.


just as a note, this diagram:
http://ebtables.sourceforge.net/br_fw_ia/PacketFlow.png
from left to right shows only two points in ebtables that can make your 
clients get into squid:

link-layer BROUTING (blue) or link-layer NAT (green) PREROUTING.



# Default Fedora DVD installation has rules which must be deleted
iptables -D INPUT   -j REJECT --reject-with icmp-host-prohibited
iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited
echo 1 > /proc/sys/net/ipv4/ip_forward



first try to change the settings to make them somehow genuine.
the next step is to understand that yahoo mail is mostly https traffic, 
which will not pass through squid.
if you still get problems then it can be because of another network 
issue that is related to the way you implemented your bridge in the 
topology.


how are this machine's cables set up? switch..router..client etc..
in my setup fedora 15 works ok.

it seems like another issue than squid itself.
if nat\redirect works fine, in most cases it's a matter of mac-level 
filtering, since a bridge forwards packets without mangling the src mac 
address, while the tproxy socket *DOES* use the mac address of the src.


it seems like the wiki is indeed missing some explanation on bridge and 
tproxy.


Regards,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid + Cisco 4500 + WCCP2

2012-07-20 Thread Eliezer Croitoru

On 7/20/2012 1:53 PM, Ioannis Pliatsikas wrote:

I'm trying to setup a transparent proxy with squid using wccpv2 and a
4507 (ios v15.1) Cisco switch.

Tried using out of the box rpm package, 3.1.20 on Opensuse 12.1 with no
luck. My cache.log kept filling with "Unknown capability type in WCCPv2
Packet" messages.

Compiled from source the same version with --enable-wccpv2 option but i
keep getting the same errors.

Cisco can see the proxy because i get



SNIP


No tunnel defined anywhere cause i assume it's not necessary on l2
redirection
Any ideas?

other than the error, is it redirecting the traffic?
i have tested wccp2 on a router with gre, but not on a switch with L2 yet.
on the cisco you also need to apply extended acls based on the www port 
to match the specific traffic you want to redirect into squid.

if you don't do that, the web-cache won't redirect anything.

Regards,
Eliezer




Thank you in advance
John



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Optimising squid cache for USB flash drives

2012-07-20 Thread Eliezer Croitoru

On 7/20/2012 5:17 PM, haggismn wrote:

Hi all,
I am a bit of a noob with squid, as well as USB flash storage, so please
bear with me.

I have squid up and running on a USB equipped dd-wrt router. I have plugged
in a 8gb USB flash drive, of which 4gb are allocated for the squid cache.
Currently I am using mostly default settings, ie cache_dir ufs
/mnt/sda1/cache 4096 16 256


Nice.

I am aware that by using a USB flash drive with squid, the lifespan of the
drive will be greatly decreased, due to the limited number of write cycles
each block on the disk has. I was therefore wondering if it is possible to
set up the caching so that it reduces the number of writes made onto the
disk. I have been looking at options like minimum_object_size, which I have
set to 8 KB, thus reducing the number of small files written. Will this help
in any way? Are there other measure I can take which might help? I have been
looking at using COSS storage, with a low max-stripe-waste, with the
intention that this might reduce write frequency. As far as I can tell, this
will write to the disk in 1MB chunks. Might this help by any chance? Are
there any other measures that might help, for example formatting the flash
drive in a certain way (although likely limited to FAT32).

don't think about the lifespan of the drive, because if it dies it dies.
the basic idea of cache is: write to disk whatever you can, excluding 
the ram cache.

USB flash drives are not that fast compared to many HDs, but they can 
still be faster than the link you have.
the basic thing is to disable logging, which you don't really need to 
store on most wrt devices.

the minimum_object_size is not important in this case (my opinion).
COSS is not being used anymore; there is the rock store instead.
anything is better than fat32 in your case of a linux OS.
the reiser FS is meant for lots and lots of small files.
ext2/3/4 and reiser fs have a noatime option that can reduce some drive 
access, but it carries a risk of corrupting the FS (see the sketch below).
in your case of a 4GB cache it is really not supposed to be a big deal 
if you lose it, unless you have more data on the drive.
i don't remember exactly, but the size of the cache dir is supposed to 
be in proportion to your ram size, and for a DD-wrt device that doesn't 
have much ram, a 4gb cache dir might not be a good idea.
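
a hedged sketch of the noatime mount (device, mount point and fs type 
are illustrative; match them to your dd-wrt setup):

#!/bin/sh
#mount the flash drive without access-time updates to save write cycles
mount -o noatime /dev/sda1 /mnt/sda1
#squid.conf keeps pointing at the same cache_dir on that mount:
#cache_dir ufs /mnt/sda1/cache 4096 16 256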


Regards,
Eliezer



Any information would be greatly appreciated.

Thanks in advance



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Optimising-squid-cache-for-USB-flash-drives-tp4655892.html
Sent from the Squid - Users mailing list archive at Nabble.com.




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] yahoo mail problem with tproxy (squid 3.1.19, kernel 3.2.21)

2012-07-20 Thread Eliezer Croitoru

On 7/21/2012 6:01 AM, Ming-Ching Tiew wrote:



- Original Message -

From: Eliezer Croitoru elie...@ngtech.co.il

so what you just need for ebtables is two rules:
all packets the are destined to the web om port 80.. route them into the 
machine... later will be intercepted by tproxy  so:
ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-protocol tcp \ 
--ip-destination-port 80 -j redirect --redirect-target DROP



and every packet that comes from the internet from port 80 (a web server) should
always get to the proxy, as it's an answer to a squid request, whether tproxy or
intercept.
the only difference with intercept mode is that:
the packet that comes back from the internet has the proxy as its destination, so in any
case the bridge will send it to the proxy.



so to intercept web answers to the proxy you need the rules:
ebtables -t broute -A BROUTING -i eth1 -p ipv4  --ip-protocol tcp \
--ip-source-port 80 -j redirect --redirect-target DROP

and that is it for the bridge.


Your rules are essentially the same as mine and I don't see how they are
different;
maybe I just missed the point.


The reason you see more rules than needed is that I want to make the
connection symmetric, so that it does not matter which ethX is the upstream
and which is the downstream, i.e. whichever port you plug into will still
work.

And I have specifically confirmed that the other two additional rules have no 
traffic.

They are indeed not supposed to break your setup, but it's not supposed to
be symmetric with tproxy.
The idea of the bridge is that you have a clients side and an external
side, and you use both.


If you made it this way on purpose, that's another story.
I would say that the result can reveal some really nasty issue you are
having at the network level, and ebtables + the switch are the basic things to check.

I would try to dump the TCP sessions on the interfaces using:
tcpdump -i any -s0 -n -w test.pcap port 80
(the -w option must come before the capture filter, and -X is not needed
when writing a pcap file)

I will be happy to look into the packets to see if there is a clue in
them about the zero reply.


To make sure it's not a squid issue, try installing the RPM of squid 3.2:
http://pkgs.org/fedora-16/fedora-i386/squid-3.2.0.12-1.fc16.i686.rpm.html

I have tested it on Fedora 15-16, with the same result: it works
both on 3.1.X and 3.2.X.


You can also try toggling STP on/off on the bridge, in case packets are
getting lost somewhere due to STP filtering.
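
For example, assuming the bridge is named br0 (check your actual bridge
name):

brctl stp br0 off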


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid AD login problem

2012-07-24 Thread Eliezer Croitoru

On 7/24/2012 4:13 PM, Nicole Hähnel wrote:

Hi,

We have recently been using Squid 3.1.20 on SLES11 SP1 to control the
web access in our Microsoft AD network.
There are some internal Microsoft-based websites, like SharePoint for
instance.
Without squid we can open these websites without re-authenticating in
the browser.
With squid (WPAD file) we get a login box, but in spite of the right
credentials we can't log in.
All computers are authenticated to the AD, so squid has to pass through
the Kerberos ticket.

Are there any hints on that?

Thanks!

Kind regards,
Nicole


What is the content of the WPAD script?
Does the access to SharePoint and the other internal servers go through
the squid server at all?
Do you see anything logged in the access.log file when you try to
access the SharePoint page?


Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] I need a help!

2012-07-27 Thread Eliezer Croitoru

On 7/27/2012 6:16 PM, Mihamina Rakotomandimby wrote:

On 07/27/2012 01:06 PM, Helmut Hullen wrote:

Long time ago squid had an option offline_mode which could be set to
on or off.


:-)

I remember a joke about a user asking the sysadmin to store the internet
somewhere on the LAN in case of a possible cut off...



Well, I have tried and am still trying ;)
It seems like my squid doesn't have enough hands... so I used:
##diff
 refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
 refresh_pattern .   0   20% 4320
..
 refresh_pattern .   99   9% 99 
override-expire override-lastmod ignore-reload ignore-no-store 
ignore-no-cache ignore-private reload-into-ims

##end

but it seems like I'm having some problems with my clients..


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] I need a help!

2012-07-27 Thread Eliezer Croitoru

On 07/27/2012 06:53 PM, Mihamina Rakotomandimby wrote:

On 07/27/2012 06:50 PM, Eliezer Croitoru wrote:

99   9% 99 override-expire override-lastmod
ignore-reload ignore-no-store ignore-no-cache ignore-private


You love ignoring... ;-)

I ignore so much that I'm thinking of ignoring some things and taking 
the next flight to England.

Or the area..


Re: [squid-users] tproxy can't connect to target url after url rewrite program to different host

2012-07-27 Thread Eliezer Croitoru

On 07/28/2012 02:55 AM, Ming-Ching Tiew wrote:


Tested this with Squid Version 3.1.20-20120710-r10457,

After a simple url_rewrite_program rewrite of the url to
a different host, the communication will not succeed
( using linux bridge with ebtables/iptables for this tproxy

communication ).

The nat intercept mode could succeed.

Only for the url?
For me it works fine.


Re: [squid-users] tproxy can't connect to target url after url rewrite program to different host

2012-07-28 Thread Eliezer Croitoru

On 7/28/2012 11:54 PM, Ming-Ching Tiew wrote:


From: Eliezer Croitoru elie...@ngtech.co.il
To: squid-users@squid-cache.org
Cc:
Sent: Saturday, July 28, 2012 10:53 AM
Subject: Re: [squid-users] tproxy can't connect to target url after url rewrite 
program to different host

On 07/28/2012 02:55 AM, Ming-Ching Tiew wrote:


Tested this with Squid Version 3.1.20-20120710-r10457,

After a simple url_rewrite_program rewrite of the url to
a different host, the communication will not succeed
( using linux bridge with ebtables/iptables for this tproxy

communication ).

The nat intercept mode could succeed.

Only for the url?
For me it works fine.


Further testing revealed that if the re-written url is at port 80,
then it works. If the url is changed to a different port, then
it will timeout. Eg


http://dfsdffsa:8080/fsdafsdf

Looks like there is a restriction here, because when squid
opens a connection faking the client (tproxy), the reply, since it is
not from port 80, is not coming back to squid.


Now that you remind me:
I have seen this kind of problem!!!
It was nasty on squid 3.1.
You can see in the iptables connection tracking that squid is opening the 
socket, but it sends the first SYN and won't get the SYN/ACK back from the 
destination.


There are two different situations: bridge and routing.
On a bridge it's pretty obvious:
you must tell the bridge to divert (broute) the incoming traffic from source 
port 8080, otherwise it will be bridged straight to the client and won't get back 
to squid.
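
A sketch of such a rule, mirroring the port-80 rules above (assuming, as
before, that eth1 is the internet-facing bridge port):

ebtables -t broute -A BROUTING -i eth1 -p ipv4 --ip-protocol tcp \
--ip-source-port 8080 -j redirect --redirect-target DROP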


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] tproxy can't connect to target url after url rewrite program to different host

2012-07-28 Thread Eliezer Croitoru

On 7/29/2012 2:21 AM, Ming-Ching Tiew wrote:


From: Eliezer Croitoru elie...@ngtech.co.il
To: squid-users@squid-cache.org


Now that you remind me:
I have seen this kind of problem!!!
It was nasty on squid 3.1.
You can see in the iptables connection tracking that squid is opening the
socket, but it sends the first SYN and won't get the SYN/ACK back from the
destination.

There are two different situations: bridge and routing.
On a bridge it's pretty obvious:
you must tell the bridge to divert (broute) the incoming traffic from source
port 8080, otherwise it will be bridged straight to the client and won't get back
to squid.




If it is an external web server, the ebtables rule will probably fix it.

But in my case, on the squid machine, I have a web server, and
the url rewrite redirects the traffic to this web server. And I don't seem
to be able to get a reply back into squid. What is blocking the reply?

This is a known problem with tproxy and servers that host a web server on 
the same machine.
It seems that when squid is in tproxy mode, it opens a socket to the origin 
server, and since that server is on the same machine, the machine's main 
routing table sends the reply straight to the client machine without 
intercepting/redirecting it into the loopback/squid.



The simple solution for that is to use a cache_peer on port 8080 and ACLs 
to route the rewritten requests through that cache_peer; remember to 
add the no-tproxy option to the cache_peer line.
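
A minimal sketch of that idea (the peer address and the "rewritten" ACL
are illustrative; the ACL should match whatever your rewriter produces):

##start
cache_peer 127.0.0.1 parent 8080 0 no-query no-tproxy name=localweb
acl rewritten dstdomain .rewritten.example
cache_peer_access localweb allow rewritten
never_direct allow rewritten
##end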


If there were an option/acl like "no-tproxy allow acl_name" it would give 
some nice flexibility, but since it's not needed by most users I don't 
think it will be implemented.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] How to trick splay trees?

2012-07-31 Thread Eliezer Croitoru

On 7/31/2012 12:50 PM, Jannis Kafkoulas wrote:

Thanks for the quick answer!

Now I see that I didn't express myself precisely enough :-(

"to also go via cache_peer par-alt" wasn't meant as an alternative (either/or) 
but as well as the domain .fa-intracomp.net :-)

in other words, abc.intracomp.com should be directed only to par-alt.

...

thanks


So it's pretty simple..
Since ACL evaluation goes by first hit, there is nothing to trick in the 
splay trees; just use more explicit ACLs, with a deny rule first.


##start
acl alt dstdomain .fa-intracomp.net
acl std dstdomain .intracomp.com
acl alt-2 dstdom_regex -i abc.intracomp.com

cache_peer 192.10.10.22 parent 3128 0 no-query login=PASS proxy-only no-digest name=par-std
cache_peer 192.10.10.22 parent 80 0 no-query login=PASS proxy-only no-digest name=par-alt

#first use an explicit deny for the abc...
# so first this domain will not pass using this proxy
# then allow the other proxy.
# and it's recommended to separate the acls for the two proxies.
cache_peer_access  par-std deny alt-2
cache_peer_access  par-alt  allow alt-2
cache_peer_access  par-alt  allow alt
cache_peer_access  par-std  allow std
##end

I would put it in my squid.conf in another order, to make it easier for 
the human eye/mind to follow the algorithm that squid uses for ACLs.


##start

#acls part, with notes about the purpose of each acl if needed.
acl alt dstdomain .fa-intracomp.net
acl std dstdomain .intracomp.com
acl alt-2 dstdom_regex -i abc.intracomp.com


#cache peers part:

#cache peer 1
cache_peer 192.10.10.22 parent 3128 0 no-query login=PASS proxy-only no-digest name=par-std


#cache peer 1 acls
cache_peer_access  par-std deny alt-2
cache_peer_access  par-std  allow std
#


#cache peer 2
cache_peer 192.10.10.22 parent 80 0 no-query login=PASS proxy-only no-digest name=par-alt


#cache peer 2 acls
cache_peer_access  par-alt  allow alt-2
cache_peer_access  par-alt  allow alt

##end

This way you know explicitly which peer will match first; you have the 
ACLs ordered per cache_peer, and therefore you can see better how squid 
will approach the cache_peers.


Regards,
Eliezer




--- On Mon 30/7/12, Amos Jeffries squ...@treenet.co.nz wrote:


From: Amos Jeffries squ...@treenet.co.nz
Subject: Re: [squid-users] How to trick splay trees?
To: squid-users@squid-cache.org
Date: Monday, 30 July 2012 15:25
On 31/07/2012 1:25 a.m., Jannis
Kafkoulas wrote:

Hi,

(I use squid 2.7. STABLE9 on RedHat EL 5.6)

Following problem:

I have following dstdomains defined
going to par-std and par-alt  cache_peers

respectively:


acl alt dstdomain .fa-intracomp.net
acl std dstdomain .intracomp.com

Now I'd like abc.intracomp.com to also go via cache_peer par-alt.


Following two tries didn't work:

# acl alt-2 dstdom_regex -i abc.intracomp.com
# acl alt dstdomain abc.intracomp.com


The dstdomain one is faster. Both are correct for your
requested policy.
The key word you stated being "also"...



The requests were sent to the par-std cache_peer

cache_peer 192.10.10.22 parent 3128 0 no-query login=PASS proxy-only no-digest name=par-std

cache_peer 192.10.10.22 parent 80 0 no-query login=PASS proxy-only no-digest name=par-alt


cache_peer_access  par-alt  allow alt-2
cache_peer_access  par-alt  allow alt
cache_peer_access  par-std  allow std


Is there a way for that to work at all?


Unless given some specific selection algorithm (digest, ICP, hashes,
CARP, round-robin etc.) Squid lists peers in configuration order when
attempting to pass traffic.

As I said above, the key word in your policy statements is "also" -
with both peers *available* for use, Squid will pick the first one
that works.
With par-std being listed first, your logs will show it being used until
such time as it becomes unresponsive or overloaded. Then par-alt will
pick up the slack for that one domain.

I think you are looking at the logs, seeing only par-std, and thinking
it's not working when actually it is. You can test by changing the order
of the cache_peer definitions in your config and seeing the preferred
peer switch to par-alt when the new ACL is added.

NOTE: you cannot send a request via *both* using TCP unicast links, just one.

Amos




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Re: I need a help!

2012-07-31 Thread Eliezer Croitoru

On 7/31/2012 3:09 PM, RW wrote:

On Fri, 27 Jul 2012 18:16:14 +0300
Mihamina Rakotomandimby wrote:


On 07/27/2012 01:06 PM, Helmut Hullen wrote:

Long time ago squid had an option offline_mode which could be
set to on or off.


It still does, at least it's in the documented conf file for 3.1.


I remember a joke about a user asking the sysadmin to store the
internet somewhere on the LAN in case of a possible cut off...


Many years ago there was a company that distributed free dos software
by mail order that offered a cut-down version of the internet on floppy
disks.


and I assume they retired once the Internet no longer fit on a floppy?

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Re: Re: Re: Re: Re: Re: squid_ldap_group against nested groups/Ous

2012-08-03 Thread Eliezer Croitoru

On 8/2/2012 9:24 AM, Eugene M. Zheganin wrote:

Hi.

On 01.08.2012 23:02, Markus Moeller wrote:

Hi Eugene,

  Are all 12 groups for the same control ?  If  so you can  use -g
Group1:Group2:Group3:.


No, I map them to different acls, and then those acls are used to
restrict various levels of the access.

Like:

(it was)
external_acl_type ldap_group [...]

acl ad-internet-users  external ldap_group
/usr/local/etc/squid/ad-internet-users.acl
acl ad-privileged external ldap_group
/usr/local/etc/squid/ad-privileged-users.acl
acl ad-icq-only external ldap_group /usr/local/etc/squid/ad-icq-only.acl
acl ad-no-icq external ldap_group /usr/local/etc/squid/ad-no-icq.acl

http_access allow ad-internet-users something
http_access deny ad-internet-users something1
http_access allow ad-privileges something1

and so on.

Eugene.

How long is the list?
And what is the proxy load / requests per second?
The cache on the external_acl helper can be very effective and will absorb 
most of the load if the TTL is well tuned.
I don't really know of an AD environment where these kinds of groups 
change in less than a day, so just extend the ldap helper TTL to far more 
than 60 secs; the first hit on an ACL may still be slow, but the 
following ones will be much faster.
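
A sketch of the TTL tuning (the values and helper path are illustrative;
keep your existing helper command line):

external_acl_type ldap_group ttl=3600 negative_ttl=900 children=10 %LOGIN /usr/lib/squid/squid_ldap_group [your existing helper arguments]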


Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid memory usage

2012-08-03 Thread Eliezer Croitoru

On 8/3/2012 3:16 PM, Hugo Deprez wrote:

Dear community,

I am running squid3 on Linux Debian squeeze.(3.1.6).

I suddenly encountered high memory usage on my virtual machine and don't
really know why.
The Cacti memory graph shows a memory jump from 1.5 GB
to 4 GB, and then the server started to swap.

For information, the virtual machine has 4 GB of RAM.

Here are the settings from squid.conf:

cache_dir ufs /var/spool/squid3 100 16 256
cache_mem 100 MB

hierarchy_stoplist cgi-bin ?
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320


My squid3 process is using 81% of my RAM, so around 3.2 GB of memory.

proxy25889  0.6 81.1 3937744 3299616 ? SAug02   9:34
(squid) -YC -f /etc/squid3/squid.conf

I currently have around 50 users on it.


I did have a look at the FAQ
(http://wiki.squid-cache.org/SquidFaq/SquidMemory#how-much-ram), but I
didn't find any tips for my situation in it.


Have you got any idea? How can I troubleshoot this?

Thanks!


is this a replicated VM by any chance?


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid 3.2.0.19 beta is available

2012-08-08 Thread Eliezer Croitoru

On 8/7/2012 10:59 AM, Amos Jeffries wrote:

Important changes to note in this release:

* As you should know CVE-2009-0801 security vulnerability protection was
added in 3.2 series.

Earlier betas attempted to protect peer caches as well as themselves, by
blocking relay of untrusted requests until we could implement a safe relay.

Due to time constraints this extra layer of peer protection
has been REMOVED from 3.2 default builds.

Interception cache proxies are themselves well protected against the
vulnerability, but can indirectly poison any cache hierarchy they are
integrated with. The -DSTRICT_HOST_VERIFY compile-time flag can be
defined in CXXFLAGS to re-enable this peer protection if desired. Its
use is encouraged, but will result in problems for some popular
configurations. ie ISP interception proxy gatewaying through a cache
array, matrix of interception proxies as siblings.

Use of the client destination IP (ORIGINAL_DST) is still preferred for
untrusted requests, so if your proxy is backed by a firewall denial
please ensure that the rules are REJECT rules rather than DROP for best
performance. never_direct does not affect this routing preference as it
does for DIRECT traffic.

I want to verify, because I'm a bit confused:
can an intercepted request be forwarded to a cache_peer in any way?

Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid memory usage

2012-08-08 Thread Eliezer Croitoru

On 8/8/2012 6:49 PM, Hugo Deprez wrote:

Hello,

since I changed the configuration,
memory usage has been growing slowly.

Now squid is using 17% of 4GB

Eliezer, I am not sure to understand. But I am using two VM,
active/passive setup with a corosync VIP.

I remembered someone here talking about a VM that was replicated and 
caused squid to leak memory,

so I asked whether it's a cloned VM, to verify that.
It still seems odd to me that a cloned VM would cause such a thing, so I 
will just say it seems like a bogus alarm raised on a false assumption.


Just curious about this cluster setup you have there:
can you give some more details about it? (email me directly)
I am working on a cluster setup of squid with tproxy, balanced at the routing level.

Thanks,
Eliezer

I will consider upgrading one member of the cluster to 3.1.20 (squeeze
packages). Do you think this will solve the issue?

Regards,




On 8 August 2012 05:30, Amos Jeffries squ...@treenet.co.nz wrote:

On 08.08.2012 02:35, Simon Roscic wrote:

SNIP



I think this is probably:
  http://bugs.squid-cache.org/show_bug.cgi?id=3605

Can you start with the cachemgr memory usage report and confirm whether the
same FwdServer excessive memory usage is seen?


Amos




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] squid 3.2 intercept and upstream proxy not working

2012-08-08 Thread Eliezer Croitoru

On 8/9/2012 2:16 AM, Amos Jeffries wrote:


Releases 3.2.0.14-3.2.0.18 have a standing block preventing requests
with conflicting destination IP and destination domain name being passed
to peers.

Release 3.2.0.19 loosens that block to allow it, but only if the clients
original destination IP (ORIGINAL_DST) is non-contactable by the proxy.

BUT, ... checking your config file there is a bigger problem, and a
relatively large amount of useless ACL checks ...

and let's say I want to loosen it a bit more?

Thanks,
Eliezer


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] squid 3.2 intercept and upstream proxy not working

2012-08-10 Thread Eliezer Croitoru

On 8/9/2012 4:47 AM, Amos Jeffries wrote:

On 09.08.2012 12:32, Eliezer Croitoru wrote:

On 8/9/2012 2:16 AM, Amos Jeffries wrote:


Releases 3.2.0.14-3.2.0.18 have a standing block preventing requests
with conflicting destination IP and destination domain name being passed
to peers.

Release 3.2.0.19 loosens that block to allow it, but only if the clients
original destination IP (ORIGINAL_DST) is non-contactable by the proxy.

BUT, ... checking your config file there is a bigger problem, and a
relatively large amount of useless ACL checks ...

and let's say I want to loosen it a bit more?


How much more?
  to relay known dangerous traffic to peers as if it were safe?
  or just to obey never_direct?

Flag it as safe... because it is a local peer, which is safe.
I am talking only about HTTP traffic, not HTTPS.

Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] squidguard not blocking

2012-08-10 Thread Eliezer Croitoru

On 8/10/2012 9:34 PM, J Webster wrote:

squidguard correctly blocks when I run from the command line:
[root squidguard]# echo http://www.porn.com/ - - GET | squidGuard -c
/etc/squid/squidguard.conf -d

SNIP



Does the url rewriter need to be further up the squid.conf?
It is right at the end of the conf file at the moment:
url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidguard.conf
That placement should be enough; adjust the number of url_rewrite child 
processes according to the load.

The squid user must have access to the squidGuard binary and its DB.
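
A sketch of what that looks like in squid.conf (the child count of 10 is
illustrative):

url_rewrite_program /usr/bin/squidGuard -c /etc/squid/squidguard.conf
url_rewrite_children 10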

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] squid 3.2 intercept and upstream proxy not working

2012-08-10 Thread Eliezer Croitoru

On 8/10/2012 2:32 PM, Amos Jeffries wrote:

On 10/08/2012 10:54 p.m., Eliezer Croitoru wrote:

On 8/9/2012 4:47 AM, Amos Jeffries wrote:

On 09.08.2012 12:32, Eliezer Croitoru wrote:

On 8/9/2012 2:16 AM, Amos Jeffries wrote:


Releases 3.2.0.14-3.2.0.18 have a standing block preventing requests
with conflicting destination IP and destination domain name being
passed
to peers.

Release 3.2.0.19 loosens that block to allow it, but only if the
clients
original destination IP (ORIGINAL_DST) is non-contactable by the
proxy.

BUT, ... checking your config file there is a bigger problem, and a
relatively large amount of useless ACL checks ...

and let's say I want to loosen it a bit more?


How much more?
  to relay known dangerous traffic to peers as if it were safe?
  or just to obey never_direct?

Flag it as safe... because it is a local peer, which is safe.
I am talking only about HTTP traffic, not HTTPS.


Please try 3.2.0.19 with this extra patch:
http://ww.squid-cache.org/Versions/v3/3.2/changesets/squid-3.2-11644.patch
the link should be: 
http://www.squid-cache.org/Versions/v3/3.2/changesets/squid-3.2-11644.patch


and it works like a charm. :)

Now I noticed that url_rewrite_concurrency was changed too, which is nice.

Maybe an option can be added to the 3.2 build for some safety modes on 
cache_peer? Or maybe a flag that marks a cache_peer as safe?


Thanks,
Eliezer


It removes the preference bias for ORIGINAL_DST over peers.

Amos



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Put all port 80, 443 http https rtmp connections from openvpn through squid?

2012-08-11 Thread Eliezer Croitoru

On 8/11/2012 2:57 PM, J Webster wrote:

But once the tunnel reaches the OpenVPN server, you can direct port 80
and 443 traffic from it via the proxy server can't you?
Once it gets to the OpenVPN server (where you would also have the proxy
server), isn't it decrypted?
Lots of companies have VPN tunnels and then route web traffic through a
proxy so it must be possible somehow.

On 11/08/12 13:54, Alex Crow wrote:

On 11/08/12 08:20, J Webster wrote:

Is there a way to push all openvpn connections using http ports
through a transparent squid and how?
Also, can I log which openvpn certificate/client is accessing which
pages in this way?
I assume I would have to use an alternative port or use firewall
rules to only allow squid connections from the network 10.8.x.x

Squid is an HTTP proxy, so no.

You can't really proxy OpenVPN as it's end-to-end encrypted with SSL.
If you issued the certs from your CA it might be possible to MITM it
but that may be illegal in many jurisdictions.

Alex




Of course you can.
It's a matter of basic iptables rules: since OpenVPN uses a tunX 
interface, you can intercept all traffic from the tunX interface into 
the proxy.
But you can't force the clients to use the VPN as the gateway to the 
whole world, only for the VPN connection.
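
A sketch of such an interception rule (assuming tun0 is the OpenVPN
interface and squid listens on port 3128 in intercept mode on the same
box):

iptables -t nat -A PREROUTING -i tun0 -p tcp --dport 80 \
-j REDIRECT --to-ports 3128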


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Put all port 80, 443 http https rtmp connections from openvpn through squid?

2012-08-11 Thread Eliezer Croitoru

On 8/11/2012 6:15 PM, J Webster wrote:

But once the tunnel reaches the OpenVPN server, you can direct port 80

Yes, as the machine is a router.
SNIP

Of course you can.
It's a matter of basic iptables rules: since OpenVPN uses a tunX
interface, you can intercept all traffic from the tunX interface into
the proxy.
But you can't force the clients to use the VPN as the gateway to the
whole world, only for the VPN connection.

Regards,
Eliezer



So, I simply forward ports 80 and 443 on network 10.8.0.0 to a transparent
squid proxy?

Yes.
But for 443/SSL you will need ssl-bump, which is a bit complicated.


How can I record in the squid logs which OpenVPN client certificate is
using the proxy?

You can't... unless you build some external acl helper that does it for 
you, using special OpenVPN APIs/logs and the client IP.
If you want to know which client/certificate is being used, you will 
need to build a cross-log analysis of the squid and OpenVPN logs, i.e. a 
reverse IP-to-certificate mapping.



Also, how do I do this for rtmp connections because port 80 and 443 will
have to go via the proxy but rtmp will have to bypass it somehow?

squid is an HTTP proxy, not rtmp.
rtmp uses ports other than 80/443 and cannot be used through squid (you 
can if it's TCP and you allow CONNECT and unsafe ports, which is not 
safe... and will make the VPN connection vulnerable and maybe useless).


If you have a solid reason to do so, it can be a nice project to try.

A simpler way is to assign a dedicated IP to each certificate/client.
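
A sketch of the dedicated-IP idea using OpenVPN's client-config-dir (the
paths and addresses are illustrative; the file name must match the
client certificate's CN):

##start
# in the OpenVPN server config:
client-config-dir /etc/openvpn/ccd

# in /etc/openvpn/ccd/<client-CN>:
ifconfig-push 10.8.0.10 10.8.0.9
##end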

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] squidguard spyware log

2012-08-11 Thread Eliezer Croitoru

On 8/11/2012 8:06 PM, J Webster wrote:

I see some logs of spyware sites being blocked by squidguard.
I presume these are sites that have cross domain xml or javascript or
other things built in.
Will squidguard block the whole page even if there is one script in it
that might be spyware?

2012-08-11 17:10:31 [3630] Request(default/spyware/-)
http://won.images.streamray.com/images/streamray/won/jpg/m/6/milf36_150.jpg
93.23.197.116/- user GET $
2012-08-11 17:10:36 [3630] Request(default/spyware/-)
http://graphics.streamray.com/crossdomain.xml 93.23.197.116/- user GET
REDIRECT

Is there a way to provide a page saying this site has malware and has
been blocked rather than just the default block page?
ie 2 different blocking html pages?

You should look at the squidGuard ACLs, and you can ask on the squidGuard 
mailing list: squidgu...@shalla.de

or see:
http://www.squidguard.org/Doc/examples.html

Regards,
Eliezer



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Put all port 80, 443 http https rtmp connections from openvpn through squid?

2012-08-11 Thread Eliezer Croitoru

On 8/11/2012 8:23 PM, J Webster wrote:

squid is an HTTP proxy, not rtmp.

rtmp uses ports other than 80/443 and cannot be used through squid (you
can if it's TCP and you allow CONNECT and unsafe ports, which is not
safe... and will make the VPN connection vulnerable and maybe useless).

If you have a solid reason to do so, it can be a nice project to try.

A simpler way is to assign a dedicated IP to each certificate/client.

Regards,
Eliezer


The reason I asked about rtmp is that on many sites you access the
video via the web browser, but it is sent back via rtmp.
So, this is not possible through squid at all?
However, it is possible in a direct connection. So, can you allow 80,443
to go through squid but accept the return directly if it is rtmp?
Probably not.

rtmp can be used through squid, with a big BUT:
since rtmp is a TCP protocol, you must allow CONNECT and the destination 
ports through the proxy.

But it's not such a safe and good idea to do so.
Since the squid box is a router in your case, and you will intercept only 
ports 80/443, rtmp will not have any trouble if you use NAT for 
outgoing connections, since rtmp works on ports other than 80 and 443.




So, assign a static IP to a certificate and then have squid log by IP
address, then have a program match up the ip at the time with the client
name?

Exactly.
squid always logs by IP and can add a username, so if you have static 
IPs you can always match a client IP to a specific user.
If you want to be more sophisticated, you can use reverse DNS to map the 
static IPs to user IDs, so any log-analysis software such as calamaris 
can show you the user ID.


Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Put all port 80, 443 http https rtmp connections from openvpn through squid?

2012-08-12 Thread Eliezer Croitoru

On 8/12/2012 11:26 AM, J Webster wrote:

rtmp can be used through squid, with a big BUT:

since rtmp is a TCP protocol, you must allow CONNECT and the destination
ports through the proxy.
But it's not such a safe and good idea to do so.
Since the squid box is a router in your case, and you will intercept only
ports 80/443, rtmp will not have any trouble if you use NAT for
outgoing connections, since rtmp works on ports other than 80 and 443.

But the routing will be different somehow won't it?
For example, let's assume youtube uses rtmp.
A user connects via VPN, navigates to www.youtube.com, on the VPN server
the 80 request is directed through squid, the video server returns the
80 request and a rtmp request but the rtmp cannot go through squid so
where does it return, just another port on the VPN server? As long as I
leave those rtmp ports open then all is okay?
What if there are 50 clients all using rtmp at the same time; how would
the routing within the 10.8.x.x network happen with squid involved?

This is not related in any way to squid, but to plain routing.
In order to understand how it works, just know that the MAGIC of NAT exists!
You can read about it here:
http://en.wikipedia.org/wiki/Network_address_translation

It is a nice bit of networking magic.. ;)

YouTube doesn't use rtmp, for starters.
If you want to see a site that uses rtmp, try some of the IMDB trailers.
Crunchyroll is a site whose videos work only over rtmp: 
http://www.crunchyroll.com/ .


The only routing difference when using squid as an intercept proxy is 
in the outgoing traffic, but on most squid boxes/routers the routing 
table used for NAT is the same one used for local software such 
as squid.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Linux build, libcap, and enable-linux-netfilter

2012-08-14 Thread Eliezer Croitoru

On 8/14/2012 8:53 PM, David Hembree wrote:

configure: WARNING: Linux Transparent Proxy support WILL NOT be enabled
configure: WARNING: Reduced support to Interception Proxy
configure: WARNING: Missing needed capabilities (libcap or libcap2)
for TPROXY v2
configure: WARNING: Linux Transparent Proxy support WILL NOT be enabled

Do you need TPROXY or INTERCEPT?
Intercept will work just fine.
TPROXY is another story...

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Linux build, libcap, and enable-linux-netfilter

2012-08-14 Thread Eliezer Croitoru

On 8/14/2012 10:06 PM, David Hembree wrote:

Hey Eliezer, yes we need tproxy working. This proxy is to act as a
man in the middle for some users who won't know it's there (i.e.
clients don't need to be configured) to endpoints off our network,
i.e. public sites on the internet http and https that we allow


For that you won't need tproxy, just intercept.
For HTTPS it's more complicated: you will need ssl-bump and to publish a 
CA certificate to the client computers.


I have used Red Hat little, but CentOS and Fedora quite a bit, and if the 
lib is missing you can either compile it manually or find the right 
package, because on every Red Hat-flavored Linux I have used with squid 
you can get a package for libcap and libcap2.


If you name the specific Red Hat version, it will be much simpler to find 
a package.
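
For example, on a CentOS/Fedora-style system (the package names are the
ones I know from there; your Red Hat repo may differ):

yum install libcap libcap-devel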


Regards,
Eliezer


Thanks,
David

On Tue, Aug 14, 2012 at 3:01 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:

On 8/14/2012 8:53 PM, David Hembree wrote:


configure: WARNING: Linux Transparent Proxy support WILL NOT be enabled
configure: WARNING: Reduced support to Interception Proxy
configure: WARNING: Missing needed capabilities (libcap or libcap2)
for TPROXY v2
configure: WARNING: Linux Transparent Proxy support WILL NOT be enabled


do you need TPROXY or INTERCEPT?
intercpet will work just fine.
TPROXY is another story...

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid 3.2.1 is available

2012-08-15 Thread Eliezer Croitoru

http://www.squid-cache.org/Versions/v3/3.2/
ftp://ftp.squid-cache.org/pub/squid/
ftp://ftp.squid-cache.org/pub/archive/3.2/

or the mirrors. For a list of mirror sites see

http://www.squid-cache.org/Download/http-mirrors.html
http://www.squid-cache.org/Download/mirrors.html

If you encounter any issues with this release please file a bug report.
http://bugs.squid-cache.org/

For some unknown reason I am unable to access the HTTP site, and the 
local mirror in my country is not updated yet:

http://www1.il.squid-cache.org/Versions/
3.2.0.19 is the latest there.

The FTP works fine.

Eliezer


Amos Jeffries




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid 3.2.1 is available

2012-08-15 Thread Eliezer Croitoru

Downloaded the source from JP mirror and compiled.
Works like a charm with http interception and http cache_peer.

On 8/15/2012 2:29 PM, Amos Jeffries wrote:

  * CVE-2009-0801 : NAT interception vulnerability to malicious clients.
About this bug: I tried to read about it out of curiosity, but I didn't 
understand the actual vulnerability.

in the bugzilla it states:
##start
Due to Squid not reusing the original destination address on intercepted
requests it's possible (even trivial) for flash or java applets to 
bypass the

same-origin policy in the browser when Squid intercepts HTTP requests.

The cause to this is that such applets are allowed to perform their own HTTP
stack, in which case the same-origin policy of the browser sandbox only
verifies that the applet tries to contact the same IP as from where it was
loaded at the IP level. Squid then uses the Host header to determine which
server to forward the request to which may be different from the 
connected IP.


Applies to all Squid releases.
##end

Well, this is the basic expected behavior of a proxy: to verify the 
destination host on NAT interception.


Even if the destination IP is not the same as the connected one, it 
still validates the same host/domain, so what is the problem?


Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] upgrade but leave earlier version running?

2012-08-17 Thread Eliezer Croitoru

On 8/17/2012 8:20 PM, J Webster wrote:

Is there a way to install the new version of squid, leave 2.6 running,
and then swap them over once I am sure everything in version 3 is
running OK on the server?
I don't believe CentOS 5.8 has anything in the repos above 2.6, so is
there a way I can use yum without installing from source and compiling?

You can try using the Fedora 15-17 RPMs, but there is a chance you will 
run into some problems.


If you ask me, I recommend compiling from source anyway, to fit your 
specific needs.


If you are up for the task, you can create an RPM yourself on another, 
similar CentOS 5.8 machine and install that RPM, or compile there and 
copy the binary files over.


If you really want to make sure that everything works on the new server, 
you should set up a test machine first, make sure everything works 
there, and then proceed with an upgrade procedure that includes backups 
along the way and a rollback plan, to be safe.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Squid 3.2.1 is available

2012-08-18 Thread Eliezer Croitoru

SNIP


The browser is 100% unaware of the proxies existence and the page being
fetched from a different server than its TCP connection was sent to.
All the IP level security the browser uses to check same-origin is
bypassed silently. All the DNSSEC, IP-based firewall rules, etc which
the LAN administrator may have setup for that client to make use of are
also bypassed silently unless replicated in proxy config.
  I'm not sure which of the two is more serious, but leaning slightly
towards the firewall bypasses being worse nowadays, since browsers have
improved their checking a bit too along the same lines as the squid checks.

It is possible for a website JS (ie advert) to fetch a malicious page
using a benign TCP connection to a safe IP address and a Host: with
malicious server name. The result corrupts the browser cache with a
phishing-style page and gives open access to any private details
(credentials, cookies, local browser state) to the malicious website
server.

The only real solution is to avoid using an interception or transparent
proxy completely (or use it only to bounce clients to a how to
configure your browser page as per the ZeroConf wiki example). But the
3.2 changes raise the difficulty for attackers and go a long way towards
avoiding collateral damage to the rest of the LAN clients from such
attacks.

Amos


Thanks Amos,

I wasn't sure that I had got it right, but it seems my logic was right 
after all.


But anyone who does use a firewall + intercept proxy will most likely 
manage the proxy ACLs to match the local security policy, just as on 
the firewall.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Transparent Proxy

2012-08-19 Thread Eliezer Croitoru

On 8/19/2012 10:00 PM, Roman Gelfand wrote:

My goal is to make squid a transparent proxy. I see several options;
not sure which one I should be using. I am looking for a standard
transparent proxy server.


--enable-ipfw-transparent or --enable-ipf-transparent or --enable-pf-transparent

Thanks in advance


What OS? What kernel version?

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Transparent Proxy

2012-08-19 Thread Eliezer Croitoru

On 8/19/2012 10:18 PM, Roman Gelfand wrote:

debian/2.6.26-2-686

Thanks for your help


Then ipfw/ipf/pf are not your concern;
you need --enable-linux-netfilter.
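
A sketch of the relevant build flag and a matching interception rule
(assuming squid will listen on port 3128 and eth0 faces the clients; add
your other configure options):

./configure --enable-linux-netfilter [other options]
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
-j REDIRECT --to-ports 3128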

Regards,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] I want to verify why squid wont cache a specific object.

2012-08-19 Thread Eliezer Croitoru
I have been checking out IMDB videos, and it seems the original 
requests cannot be cached, for an unknown reason.


the store log shows:
1345335832.436 RELEASE -1  B174D16A30640884673D882B55B3594C  200 
1334135877 1302715098-1 video/mp4 16586249/16586249 GET 
http://video-http.media-imdb.com/MV5BMTUwMDMzOTQ4MF5BMTFeQW1wNF5BbWU3MDY0MzUzOTQ@.mp4?Expires=1345368308Signature=Q2dXbWeH4jZXddLAPgOxKrrgbzbuBGZSha5OU4muOH68UFaJ-MlxMUbqbocmzzsSpEJA23aTW46tDlm18RSGzuSiIQgi4tf3lchNMV2hSdDaHqGHrIHVmGWnGCW7VWFfOcYihdM3MqU9EpjO4qzrfoi95cKRXBc~SFCQR4gC8jM_Key-Pair-Id=APKAILW5I44IHKUN2DYAhint=flv


and it means that the file cannot be cached by squid.
It's a bit weird, because I did some tracking on the connection and the 
requests, but I don't seem to get the full picture of why it will not be 
cached.


The video will be cached if I make a regular request from a browser, 
wget or other means, but not from the original player.


The request is a simple GET with a full-size response; as stated in the 
headers, it fits all the criteria of a cacheable object.

this is the response headers:
##start
HTTP/1.1 200 OK
x-amz-id-2: blabla/AIBT0KwY/mC1f2mnZx0crlLcLUcdYqxt7
x-amz-request-id: E903CD06E96673AC
Date: Wed, 11 Apr 2012 09:17:57 GMT
Last-Modified: Wed, 13 Apr 2011 17:18:18 GMT
ETag: c9164ada101ce1baad52740ec34b2027
Accept-Ranges: bytes
Content-Type: video/mp4
Content-Length: 16586249
Server: AmazonS3
Age: 48065
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: blabla_SZfHBxHjWxZvaw==
X-Cache: MISS from proxy1
X-Cache: MISS from proxy2
Via: 1.0 dd9f0ef9e10a3156b506f6324ccd2e2a.cloudfront.net (CloudFront), 
1.1 proxy2 (squid/3.2.1), 1.1 proxy1 (squid/3.2.1)

Connection: keep-alive
##end

I don't know which debug sections to even look at.
Is there a section that shows the reason for an object to be of the 
RELEASE type?
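
For reference, something like this in squid.conf should raise the
verbosity of the HTTP (section 11) and storage (section 20) code paths;
the levels are illustrative:

debug_options ALL,1 11,2 20,3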


Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] I want to verify why squid wont cache a specific object.

2012-08-19 Thread Eliezer Croitoru

On 8/20/2012 1:38 AM, Amos Jeffries wrote:


That field is the file number/name where the object is being stored. Since
this is an erase operation, it is always the magic F value.

It is not 1-to-1 related to the object being cacheable. It just means
the object *currently* stored needs removing. For non-cacheable objects the
RELEASE immediately follows storage; for cacheable objects being
replaced, the erase of the old content immediately follows storage of the new
copies.

OK



Since the caching changes between UAs, I would assume the player is
sending some form of no-cache or no-store control in the request
headers. Set debug section 11,2 if you can and find the player's request
headers.

I have reviewed the headers before, using wireshark/tcpdump, but couldn't 
find anything in them that should change how the object is treated.


anyway the headers from the debug section are:
##start
2012/08/20 02:00:43.682 kid1| HTTP Client local=54.240.162.191:80 
remote=192.168.10.100:60900 FD 32 flags=33

2012/08/20 02:00:43.682 kid1| HTTP Client REQUEST:
-
GET 
/MV5BMTUwMDMzOTQ4MF5BMTFeQW1wNF5BbWU3MDY0MzUzOTQ@.mp4?Expires=1345449755Signature=igqZatNciNUeCPcPTDUIBC2oX4BN7A1Go5U5h6BHeUo2z0qXyKl~1LQo1MHb8KnVmSob1GSlNs3LAbVuTSCQxSV3FfBBPH~~09CIoIFfZE7lDkBzXcjYBMC757-1OLP1eHHx5TmPNv00dBuoMCN90xlu~uifWzgsYbFNSaQans8_Key-Pair-Id=APKAILW5I44IHKUN2DYAhint=flv 
HTTP/1.1

Host: video-http.media-imdb.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20100101 
Firefox/13.0

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Referer: http://www.imdb.com/images/js/jwplayer/5.9/player.swf
##end

this is issued by the browser:
##start
2012/08/20 02:15:25.088 kid1| HTTP Client local=54.240.162.174:80 
remote=192.168.10.100:64245 FD 24 flags=33

2012/08/20 02:15:25.088 kid1| HTTP Client REQUEST:
-
GET 
/MV5BMTUwMDMzOTQ4MF5BMTFeQW1wNF5BbWU3MDY0MzUzOTQ@.mp4?Expires=1345450432Signature=PCnHMyiuodLmW-r1toSrSa7Gs~bJa7Io05AOBksCMCT5HNI2MYYPWtdyHM5W~5N4RtUSaY9SVzU4OlsSpSzGPZG0BD5nvP3RXIv03PqHMUQQo-lzASOC6TY1R3ARrVwgEm5mn3BrRQ4Ce6dwC7x9eGH~XgFNrKqcJCFxmbTwxQ8_Key-Pair-Id=APKAILW5I44IHKUN2DYAhint=flv 
HTTP/1.1

Host: video-http.media-imdb.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20100101 
Firefox/13.0

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
If-Modified-Since: Wed, 13 Apr 2011 17:18:18 GMT
If-None-Match: c9164ada101ce1baad52740ec34b2027
##end

and this is the response:
##start
-
HTTP/1.1 304 Not Modified
Date: Sun, 19 Aug 2012 23:17:19 GMT
ETag: c9164ada101ce1baad52740ec34b2027
Last-Modified: Wed, 13 Apr 2011 17:18:18 GMT
Age: 47447
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: LnmQ_CfVeSreDlbL_vz0gFVPGkRuy5O6ajlxmfsoBjdAUkcn6ff3zw==
X-Cache: MISS from proxy
Via: 1.0 add6ceb4822d467e68d27b0dbaa26dae.cloudfront.net (CloudFront), 
1.1 proxy (squid/3.2.1)

Connection: keep-alive
##end

A request from IE 9:
##start
2012/08/20 02:21:12.594 kid1| HTTP Client local=54.240.162.120:80 
remote=192.168.10.100:65326 FD 22 flags=33

2012/08/20 02:21:12.595 kid1| HTTP Client REQUEST:
-
GET 
/MV5BMTUwMDMzOTQ4MF5BMTFeQW1wNF5BbWU3MDY0MzUzOTQ@.mp4?Expires=1345450432Signature=PCnHMyiuodLmW-r1toSrSa7Gs~bJa7Io05AOBksCMCT5HNI2MYYPWtdyHM5W~5N4RtUSaY9SVzU4OlsSpSzGPZG0BD5nvP3RXIv03PqHMUQQo-lzASOC6TY1R3ARrVwgEm5mn3BrRQ4Ce6dwC7x9eGH~XgFNrKqcJCFxmbTwxQ8_Key-Pair-Id=APKAILW5I44IHKUN2DYAhint=flv 
HTTP/1.1

Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; 
Trident/5.0; BOIE9;ENUSMSE)

Accept-Encoding: gzip, deflate
Host: video-http.media-imdb.com
Connection: Keep-Alive
##end

response:
##start
-
HTTP/1.1 200 OK
x-amz-id-2: 1zQ2Ga1OC41u3SJGOlnpOgxww47S6D/AIBT0KwY/mC1f2mnZx0crlLcLUcdYqxt7
x-amz-request-id: E903CD06E96673AC
Date: Wed, 11 Apr 2012 09:17:57 GMT
Last-Modified: Wed, 13 Apr 2011 17:18:18 GMT
ETag: c9164ada101ce1baad52740ec34b2027
Accept-Ranges: bytes
Content-Type: video/mp4
Content-Length: 16586249
Server: AmazonS3
Age: 47796
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: Gdqmd-d1ue_C6EcUXB9HqGsbDP27b9nPQ7JvD1Ph60it6W9Bs6PTdg==
X-Cache: MISS from proxy
Via: 1.0 b99a3a1517181a9079e46ed00e41ddfb.cloudfront.net (CloudFront), 
1.1 proxy (squid/3.2.1)

Connection: keep-alive
##end

My naked and armed eye have both yet to see anything that should affect it.
I tried to see if I could get some data using REDbot, but it shows 
almost nothing.

The log for the request can be seen here:
http://redbot.org/?id=ZN0UhW

It seems the server clock is behind by about 10 weeks, but the basic 
settings should still allow more than 60 minutes of caching.


Thanks,
Eliezer

--
Eliezer Croitoru
https://proxy
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] I want to verify why squid wont cache a specific object.

2012-08-19 Thread Eliezer Croitoru

On 8/20/2012 2:37 AM, Eliezer Croitoru wrote:

On 8/20/2012 1:38 AM, Amos Jeffries wrote:


That field is the file number/name where the object is being stored. Since
this is an erase operation, it is always the magic F value.

It is not 1-to-1 related to the object being cacheable. It just means
the object *currently* stored needs removing. For non-cacheable objects the
RELEASE immediately follows storage; for cacheable objects being
replaced, the erase of the old content immediately follows storage of the new
copies.

OK

SNIP
Just a bit more interesting data:
there is a difference between intercepted requests (NAT and TPROXY) and 
regular proxy HTTP requests.


With a regular proxy everything works fine and the file is always cached.
(I use two squids, both with a URL rewriter, which produces a 
store_url_rewrite-like effect on the cache.)
It always works for youtube on the same setup, so I don't really know what 
the cause can be.


It narrows the bug down to a very small area, which is:
3.2.1 TPROXY/INTERCEPT + cache_peer + specific requests

vs

3.2.1 regular proxy + cache_peer + specific requests

There is a difference between the requests made via regular proxy and the 
intercepted ones in that the URL has a @ sign in it, but that is not 
supposed to change a thing.


I will file a bug later, but first I want to verify more about it.

I also tested squid 3.1.20 with the exact same setup, using 
intercept/tproxy/forward proxy, and in 3.1 it all works just fine.


So it's a bug in squid 3.2.1.


Thanks,
Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] I want to verify why squid wont cache a specific object.

2012-08-19 Thread Eliezer Croitoru

On 8/20/2012 4:03 AM, Amos Jeffries wrote:


If you wish. It is a minor regression for the use-cases where traffic is
being fetched from sources other than ORIGINAL_DST. That content should
still be cacheable as it was before. It is done this way for now so that
swapping ORIGINAL_DST in for DIRECT at selection time will be safe, and
that selection point is far too late to create cache entries when it
turns out to be safely cacheable despite the verify failing.



OK, I got it, in a way.
I still don't know exactly what's under the hood, so I will leave it for 
tomorrow for a second pass.




However, I noted that your system is a two-layer proxy and both layers
are MISS'ing. For the Host verification possibility only the gateway
intercepting cache would be forced to MISS by those flags. The second
layer is completely unaware of the intercept or Host error and treats it
as normal forward-proxy traffic, caching and all. I would expect this
situation to appear as MISS on your frontend proxy and HIT on your
backend proxy before reaching the cloud proxy.
Well, the thing is... the URLs are dynamic, and the second layer exists to 
make the object cacheable on the first.

The first layer (intercepting) actually fetches an internal URL such as:
http://youtube.squid.internal/static_naming

and the second layer fetches the real one, which is dynamic.
So the second one is not meant to cache at all, and since it's a dynamic 
URL I cannot use the second proxy's cache for that purpose. (Sucks)


I have used 3.1 for quite a long time and was happy with the results, so 
it's not really a big issue to keep using it for now.
But I would be very happy for squid not to lose a basic capability, 
especially caching, which is the sole purpose of squid.


I could use a small helper I have used before to schedule object fetching 
into squid, but this setup was supposed to prevent all of that and to 
make sure there is one proxy instance that does all the caching, with 
nothing fetched into the cache manually.


Thanks,
Eliezer



Amos


--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] I want to verify why squid wont cache a specific object.

2012-08-19 Thread Eliezer Croitoru

On 8/20/2012 3:45 AM, Amos Jeffries wrote:

Two possibilities:

1) Did you bump up the maximum_object_size? default being 4MB and these
objects are ~16MB.

my limit is 200MB


2) They are dynamic objects without any Cache-Control or Expires header
to explicitly state the cacheability age. That means they fall under the
risky category which is set to a 0 maximum age by the default '?'
refresh pattern.

I have a refresh_pattern for each of the domains separately.
The pattern gave about 7 days of caching for an object, until I upgraded 
from 3.1 to 3.2.1.


These objects aren't really risky, because they are static.
And it's funny that you can see in the Amazon CloudFront CDN headers 
that the object is being cached there already...
for the first 10 requests the Amazon CDN showed a cache miss, and then 
hits one after the other.


Thanks,
Eliezer


Amos



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] squid on redhat6 crash

2012-08-20 Thread Eliezer Croitoru

On 8/20/2012 4:01 PM, Julie Xu wrote:

Hi

We used RedHat 6's squid 3.1, and the system crashes when the load 
increases.


Do you mean load as in req/s? Memory? CPU? Disk I/O?
How many requests per second?



Aug 20 14:30:01 proxy-server squid[2450]: Squid Parent: child process 6591 
exited due to signal 6 with status 0
Aug 20 14:30:04 proxy-server squid[2450]: Squid Parent: child process 7091 
started
Aug 20 14:46:40 proxy-server abrt[22338]: Saved core dump of pid 7091 
(/usr/sbin/squid) to /var/spool/abrt/ccpp-2012-08-20-14:46:36-7091 (668131328 
bytes)

Briefly looking at the dump directory, the only thing I found is that the 
directory is the cache directory; there is a coredump file, and I tried to 
use crash to read it, without success.

Could anyone advise me what is possible reason for the crash, and the solution?

Squid 3.1: self-compiled or from RPM?
Using any special helpers? How many child processes (for the helpers)?
squid -v output?
squid.conf?
Transparent proxy?
ulimit -Sa output?
ulimit -Ha output?


The more data you provide, the better we can pinpoint the reason for the 
problem.


Regards,
Eliezer


Thanks in advance

Julie




--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] squid on redhat6 crash

2012-08-21 Thread Eliezer Croitoru

On 8/21/2012 2:33 AM, Julie Xu wrote:

More information on cache.log

2012/08/20 14:46:35| WARNING: swapfile header inconsistent with available data
FATAL: Received Segment Violation...dying.
2012/08/20 14:46:35| storeDirWriteCleanLogs: Starting...
2012/08/20 14:46:35| WARNING: Closing open FD   36
2012/08/20 14:46:35| 65536 entries written so far.
2012/08/20 14:46:35|131072 entries written so far.
2012/08/20 14:46:35|196608 entries written so far.


I believe the log speaks for itself...
If I remember right, there was a problem with the swapfile being corrupted, 
and I found these:

http://bugs.squid-cache.org/show_bug.cgi?id=3035
http://bugs.squid-cache.org/show_bug.cgi?id=3404

And since you are running an old version, squid 3.1.10 (current is 
3.1.20), I recommend you upgrade.
There aren't RPMs for RedHat, but there are for Fedora 15-17 that you can 
try.

I have used them on CentOS 6+ and they seem to work perfectly.

You mentioned that the proxy is not being used as intercept; in this 
case the better and recommended option is the latest stable 
version, 3.2.1.


About the limits:
 Max open files    1024    4096    files

squid is compiled with a higher limit, so you should raise the hard 
limit to at least squid's own limit, which is 16384.


you mean load by req\s?memory?cpu?disk i\o?


The load I mean is when the squid clients increase; I found it only 
crashes between 9am-9pm, and it did not crash at night.


how many requests per sec?


Now, I can not check, it is out of service now.

Well, since proxy load is measured in requests per second, you should 
check that figure before marking load as the reason.



squid 3.1 self compiled or from rpm?


It is from the system RPM.


You can use the newer Fedora RPMs, since they are compatible with RedHat.

using any special helpers?how many child process(for helpers)?


squid_ldap_auth, 10 child processes, nothing else.

I would recommend a higher limit, such as 20 (for a high-load 
environment).
Another approach is to use concurrency, which in my tests showed better 
performance than more child processes with my url_rewrite helper.
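
A sketch of raising the helper count in squid.conf (keep your existing
squid_ldap_auth arguments; the path is illustrative):

auth_param basic program /usr/lib64/squid/squid_ldap_auth [your existing arguments]
auth_param basic children 20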




squid -v output?

]# squid -v
Squid Cache: Version 3.1.10

SNIP
Very old..
(a release came out almost once a month, and the current one is 3.1.20, 
so yours is at least 9 months old)



transparent proxy?

no

Then I recommend upgrading to 3.2.1.

I have just seen that there is no RPM for 3.2.1 yet, so I am rebuilding 
one based on squid-3.2.0.16-1.fc17.src.rpm:


I changed the sources and signature,
removed the old patches, and built it as-is without any change-log updates.

If the RPM turns out fine, I will also update the change-log.



many regards

Julie


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] Error on RPM build of squid 3.2.1 on Fedora 17

2012-08-21 Thread Eliezer Croitoru

I am trying to build an RPM for fedora 17.
I took the squid.spec and other files from:
squid-3.2.0.16-1.fc17.src.rpm
at
http://koji.fedoraproject.org/koji/buildinfo?buildID=305827

the basic cxx and cflags are:
RPM_OPT_FLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4  -m64 -mtune=generic


and the compilation fails in the middle (a regular compilation works)
at:
##start
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\" 
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\" 
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\" -I.. -I../include -I../lib 
-I../src -I../include -I../src -I/usr/include/libxml2 
-I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Werror -pipe -D_REENTRANT -O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fpie -c -o tools.o tools.cc
g++ -DHAVE_CONFIG_H -DDEFAULT_CONFIG_FILE=\"/etc/squid/squid.conf\" 
-DDEFAULT_SQUID_DATA_DIR=\"/usr/share/squid\" 
-DDEFAULT_SQUID_CONFIG_DIR=\"/etc/squid\" -I.. -I../include -I../lib 
-I../src -I../include -I../src -I/usr/include/libxml2 
-I/usr/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings 
-Wcomments -Werror -pipe -D_REENTRANT -O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fpie -c -o tunnel.o 
tunnel.cc

tools.cc: In function 'void no_suid()':
tools.cc:785:14: error: ignoring return value of 'int setuid(__uid_t)', 
declared with attribute warn_unused_result [-Werror=unused-result]

tools.cc: In function 'void enter_suid()':
tools.cc:760:39: error: ignoring return value of 'int setresuid(__uid_t, 
__uid_t, __uid_t)', declared with attribute warn_unused_result 
[-Werror=unused-result]

cc1plus: all warnings being treated as errors
make[3]: *** [tools.o] Error 1
##end

I changed the flags to:
CFLAGS=-O2 -pipe -m64 -mtune=generic
CXXFLAGS=${CFLAGS}
LDFLAGS=-pie

and it seems to work fine.
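
if I read the glibc headers right, the warn_unused_result attribute on 
setuid() is only active when _FORTIFY_SOURCE is set, and squid's own 
-Werror (from strict error checking) then promotes the warning into an 
error. an alternative sketch that keeps the distro hardening flags, 
assuming this configure tree accepts --disable-strict-error-checking:

# RPM_OPT_FLAGS as exported by rpmbuild; don't promote warnings to errors
CFLAGS="$RPM_OPT_FLAGS" CXXFLAGS="$RPM_OPT_FLAGS" \
./configure --disable-strict-error-checking
make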

the question is:
what is causing the problem?
is it squid code or unneeded flags?

Thanks,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] squid on redhat6 crash.. rpm for squid 3.2.1

2012-08-21 Thread Eliezer Croitoru

On 8/21/2012 2:33 AM, Julie Xu wrote:
SNIP
I created rpms for fedora 17 x86_64 at:
http://www1.ngtech.co.il/rpm/

this is squid:
http://www1.ngtech.co.il/rpm/squid-3.2.1-1.fc17.x86_64.rpm

this is the startup script:
http://www1.ngtech.co.il/rpm/squid-sysvinit-3.2.1-1.fc17.x86_64.rpm

I compiled it with the same configure options as in the older rpms of 
fedora 17.


if you upgrade a system, remember to back up all old settings.
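
a sketch of the backup-then-upgrade (config path assumed):

cp -a /etc/squid /etc/squid.backup-$(date +%F)
rpm -Uvh http://www1.ngtech.co.il/rpm/squid-3.2.1-1.fc17.x86_64.rpm \
  http://www1.ngtech.co.il/rpm/squid-sysvinit-3.2.1-1.fc17.x86_64.rpm
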
Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


Re: [squid-users] Compile error Squid 3.2.1

2012-08-27 Thread Eliezer Croitoru

On 08/27/2012 09:25 PM, Jeff Gerard wrote:

I am looking for help in determining why my compile will not work.

I have been running into the following errors during compile:

SNIP
I am running this on Fedora Core 6 (have been running 3.1.20 with no issue) and 
gcc version is 4.1.2

Fedora Core 6? you mean the 2006 version?


Configure options:
'--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/sbin' 
'--libexecdir=/usr/lib/squid' '--localstatedir=/var' '--datadir=/usr/share' 
'--sysconfdir=/etc/squid' '--enable-removal-policies=heap,lru' 
'--enable-storeio=aufs,diskd,ufs' '--enable-ssl' '--with-openssl=/usr/kerberos' 
'--enable-delay-pools' '--enable-linux-netfilter' 
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group' 
'--enable-useragent-log' '--enable-referer-log' '--disable-dependency-tracking' 
'--enable-cachemgr-hostname=localhost' '--enable-cache-digests' 
'--with-large-files' '--disable-wccpv2' '--disable-wccp' 'CFLAGS=-march=i486' 
'LDFLAGS=-pie -lpcreposix -lpcre'
i would start with a more basic configure with netfilter enabled and 
see if there is anything else going on.
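
something like this as a minimal starting point (taking only a few of your 
options above), adding the rest back one at a time:

./configure --prefix=/usr --sysconfdir=/etc/squid \
  --localstatedir=/var --enable-linux-netfilter
make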


The only references to that error I have seen deal with adding the 
-march=i486 CFLAGS option, which I have tried, to no avail.

Any help would be greatly appreciated.

Thanks in advance...

Jeff

