Re: [squid-users] squid performance
Hi Daniel, proxy efficiency: it compares the time for fetching objects from the cache with the time for fetching them from the internet. It shows you how fast your cache can deliver objects. Of course, this value should be > 0, otherwise you have a bottleneck. The higher the efficiency, the better your proxy performs. 'bandwidth savings [%]' = 'Bandwidth savings [byte]' divided by 'Total Bandwidth [byte]'. It shows you what percentage of the bytes sent to clients were served from the cache. Hope this helps. Regards Michael On Sun, 2005-01-23 at 02:42, Daniel Navarro wrote: > what is the squid performance parameter that shows me > how efficient it is? > what is the squid parameter that shows me how much > bandwidth I have saved? > > I refer to calamaris reports. > Yours, Daniel Navarro > Maracay, Venezuela > www.csaragua.com/ecodiver > > _ > Do You Yahoo!? > Information from the United States and Latin America, on Yahoo! News. > Visit us at http://noticias.espanol.yahoo.com -- Mit freundlichen Grüssen / With kind regards Michael Pophal -- Topic Manager Internet Access Services & Solutions -- Siemens AG, ITO A&S 4 Telefon: +49(0)9131/7-25150 Fax: +49(0)9131/7-43344 Email: [EMAIL PROTECTED] --
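Michael's percentage formula can be sanity-checked with a quick calculation (the byte counts below are made up purely for illustration, not taken from any real report):

```python
# Hypothetical totals in the style of a calamaris report (illustrative only).
total_bytes = 10_000_000_000   # total bytes delivered to clients
cached_bytes = 3_200_000_000   # bytes served from the cache, i.e. bandwidth saved

# 'bandwidth savings [%]' = 'Bandwidth savings [byte]' / 'Total Bandwidth [byte]'
savings_pct = 100.0 * cached_bytes / total_bytes
print(f"bandwidth savings: {savings_pct:.1f}%")  # -> bandwidth savings: 32.0%
```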
Re: [squid-users] digging from cache
But if I use the command #tail -f /var/log/squid/cache.log I see my currently running cache.log. So is there a command to catch a line and write it to another text file whenever someone uses the proxy, recording any line where they use an @any mail service? So in the final result I can find all the email addresses of all clients whenever they use their email. I only want the email name, NOT the password. Hope you understand. Your complaints and suggestions are always welcome. Thank you & best regards, Shiraz Gul Khan (03002061179) Onezero Inc. _ It's fast, it's easy and it's free. Get MSN Messenger today! http://www.msn.co.uk/messenger
Re: [squid-users] SSL Reverse Proxy to Exchange 2003 OWA - SQUID just shutsdown by itself.
> 1. No idea. Can be anything from a bug in Squid to a configuration error.
I am using the same configuration that I used with Squid-3-PRE3. Mail and mailboxes opened perfectly OK, except that the Squid process kept stopping after getting 16-17 error messages: "clientNegotiateSSL: Error negotiating SSL connection on FD 12: error::lib(0):func(0):reason(0) (5/0)"
> Anything in cache.log?
>>> The cache.log entries I had sent yesterday.
> What does access.log say?
>>> The access.log entries I had sent yesterday.
> And what URL did you request in your browser?
>>> The default site - mail.xyz.om
Any suggestion? Thanks & regards, Rakesh Jha
- Original Message - From: "Henrik Nordstrom" <[EMAIL PROTECTED]> To: "Rakesh Kumar" <[EMAIL PROTECTED]> Cc: "Henrik Nordstrom" <[EMAIL PROTECTED]>; "Squid Users" Sent: Thursday, January 20, 2005 01:20 AM Subject: Re: [squid-users] SSL Reverse Proxy to Exchange 2003 OWA - SQUID just shutsdown by itself.
> On Sun, 16 Jan 2005, Rakesh Kumar wrote:
> > I have installed now Squid-3.0-PRE3-20050111. Now squid porcess is seems to
> > be stable as I have not restarted for last 4-5 days but facing an other
> > problem, now opening a box or a mail takes very long time (may be 10
> > minutes). We keep on seeing 'Loading' on the screen.
> > What is the problem
> No idea. Can be anything from a bug in Squid to a configuration error.
> Anything in cache.log?
> What does access.log say?
> And what URL did you request in your browser?
> Regards
> Henrik
## Attention: This e-mail message is privileged and confidential. If you are not the intended recipient please delete the message and notify the sender. Any views or opinions presented are solely those of the author. ##
[squid-users] Local cache skipped; always goes to parent
It appears that my local cache is completely bypassed in favor of a parent proxy. I need to have my Squid (v2.5S7 + patches) call a parent proxy for authentication. After making what seemed to be reasonable additions to my squid.conf file, I find that I can in fact run through the parent proxy, but that the local cache is not used at all. These are the lines added to my config: cache_peer proxy.domain.tld parent 1080 7 no-query login=myname:mypass acl allsrc src 0.0.0.0/0.0.0.0 never_direct allow allsrc Basically I want the parent cache to be called for any objects that can't be satisfied from the local cache, and that the objects gotten from the parent be cached locally. The non-use of my local cache is so complete, though, that I suspect I am bass-ackwards. I think I must be telling Squid that I want the parent queried for every request, though that isn't my reading of the "cache_peer" doc. I should note that my local Squid config worked great until the need for parent authentication prompted me to add the above lines to my config. Advice, please?
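Restating the poster's three added lines with a caveat that may explain the behaviour (hostnames and credentials are the placeholders from the post, not real values):

```
# Sketch of the described setup -- not a verified fix:
cache_peer proxy.domain.tld parent 1080 7 no-query login=myname:mypass
acl allsrc src 0.0.0.0/0.0.0.0
never_direct allow allsrc
# Note: never_direct only forbids Squid from going DIRECT; by itself it
# should not disable local caching of replies fetched via the parent.
# A "proxy-only" option on the cache_peer line, however, WOULD prevent
# replies from that peer being stored locally -- make sure it is absent.
```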
RE: [squid-users] squid performance
Maybe I don't know how to ask the question correctly. I already have webalizer, SARG, Squid Logbuchauswertung and calamaris. What are the important parameters to measure, and what do they mean? What tells me how many pages or files are taken from the cache instead of the internet? What tells me how much bandwidth is being saved? Regards, Daniel Navarro Maracay, Venezuela www.csaragua.com/ecodiver --- Elsen Marc <[EMAIL PROTECTED]> wrote: > > > > > > what is the squid performance parameter that shows > me > > how efficient it is? > > Define efficient. > > > what is the squid parameter that shows me how much > > bandwidth I have saved? > > > > http://www.squid-cache.org/Scripts/ > > M.
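Besides the log analysers, the running Squid itself reports both numbers through its cache-manager interface. A quick sketch (squidclient ships with Squid; add host/port options if the proxy is not on localhost:3128, and cachemgr credentials if you have set any):

```shell
# The "Request Hit Ratios" line answers "how many requests came from cache";
# the "Byte Hit Ratios" line answers "how much bandwidth was saved".
squidclient mgr:info | grep -i 'ratio'
```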
Re: [squid-users] How to improve speed to access foreign sites/ Squid Access cache mechanism
As far as I know, some pages prevent caching programs from caching them, so for those you can't improve speed. If you reload a page, your squid cache also reloads that page; you only get the cache benefit when one client loads a page first and the others load it later. Regards, Daniel Navarro Maracay, Venezuela www.csaragua.com/ecodiver --- Seewo Chen <[EMAIL PROTECTED]> wrote:
> Hi, everybody. At first I must tell you my location, which is in China.
> I recently installed squid 2.5 STABLE7 on Redhat 9.0, but I have met some troubles:
> The cache effect is very visible when I surf internal sites or foreign sites - the response time is quick through the squid proxy server - but it is very slow when I access oracle.com or hp.com, even if I access oracle.com two or three times in a row. The following is the relevant test data; I got the site response speed from http://www.linkwan.com/gb/broadmeter/speed/responsespeedtest.htm
> Remark: response time (sec) is the time to load the home page of the relevant site, from when I commit the request until the status bar displays Done.
>
> Web site       Try   Site response speed (sec)   Response time (sec)
> oracle.com     1st   3.57                        68
>                2nd   9.52                        30
>                3rd   6.21                        30
> dell.com       1st   0.79                        5
>                2nd   0.45                        3
>                3rd   0.41                        3
> www.sohu.com   1st   1.65                        5
>                2nd   0.15                        2
>                3rd   0.17                        2
>
> Now, any idea about the squid access cache mechanism? What happens before squid delivers a page to the client's IE browser? Does squid check the cache first, then deliver the page to IE, and then check the original web site?
>
> Thanks, Seewo
RE: [squid-users] diskd
--- Elsen Marc <[EMAIL PROTECTED]> wrote: > > > > How can I implement and test diskd? > > > > I am using the squid integrated into Fedora Core 3. > > > > - Use the appropriate configure option to select > this store method > - Select this method when configuring cache_dirs in > squid.conf. > Check squid.conf.default for explanations and > comments. > > M. > Thanks for the reply. I am using the squid originally installed with my Fedora Core 3, so I don't know whether diskd is active. I am interested in it since I read the squid FAQ recommending it. Regards, Daniel Navarro Maracay, Venezuela www.csaragua.com/ecodiver
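Marc's two steps, spelled out as a sketch (the path, sizes and queue limits below are illustrative; a distribution package that was not compiled with diskd support cannot use it, which you can check with squid -v):

```
# Build-time: enable the diskd store module when compiling Squid, e.g.
#   ./configure --enable-storeio=diskd,ufs
#
# Run-time: select diskd for a cache_dir in squid.conf.
# Q1/Q2 are the diskd queue thresholds; see squid.conf.default.
cache_dir diskd /var/spool/squid 1000 16 256 Q1=64 Q2=72
```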
Re: [squid-users] Enforcing Refresh patterns
Hi, At 03:42 a.m. 25/01/2005, Alexander Shopov wrote:

Hi guys, After reading the FAQ, searching on Google, reading viSofts manual and the Squid Documentation Project, after extensive experimenting and then packet tracing with Ethereal, I still cannot get the result I want with Squid. I want to *force* a particular refresh pattern on some objects (*.gif, *.js) from some servers. I want all gifs from some servers to be refreshed no earlier than 12 minutes after they went into the cache, *regardless* of the settings of the web server and the commands of the client. I tried the following settings:

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i .*\.gif$ 12 100% 12 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern . 0 20% 4320

But then, whenever the user client generates a request for a gif object, squid first checks whether the object is stale by generating a request to the server. I do not want it to do so for at least 12 minutes; I want squid to return the object immediately. Can anyone give me advice?

What version of squid are you using? Can you post the full section out of your access.log for a request where this happens, with log_mime_hdrs on? (Just post one logged request.)

Reuben
RE: [squid-users] Problem with FTP upload through squid : truncat ed files
> -Original Message- > From: Henri Walazo [mailto:[EMAIL PROTECTED] > Sent: Monday, January 24, 2005 2:15 AM > To: Elsen Marc > Cc: squid-users@squid-cache.org > Subject: Re: [squid-users] Problem with FTP upload through squid : > truncated files > > > Thanks, it works with Mozilla 1.7.5 > > However, is it possible to connect to a ftp site through mozilla > without typing the user and password in plain text in the url ? > ftp://[EMAIL PROTECTED] You will be prompted for the password, and it will not show up in the browser bar or on the webpage. Chris
[squid-users] SquidClamAV Redirector
All, Did someone here try SquidClamAV Redirector: http://www.jackal-net.at/tiki-read_article.php?articleId=1 Please share your experience and suggestion. Thx & Rgds, Awie
[squid-users] CFP: 10th International Workshop on Web Caching and Content Distribution
10th International Workshop on Web Caching and Content Distribution (WCW) Support of IEEE pending Sophia Antipolis, French Riviera, France 12 September - 14 September, 2005 http://2005.iwcw.org/ Overview: The International Workshop on Web Caching and Content Distribution (WCW) serves as the premier meeting for researchers and practitioners to exchange results and visions on all aspects of content distribution and delivery. Innovations in content delivery systems continue to have a strong impact on the Internet, resulting in a surge of interest in both content delivery applications and the web/network infrastructure that supports novel content delivery applications. Starting from basic caching, research in content distribution has broadened its scope to cover practically all areas related to the intersection of content and networking, including such areas as peer-to-peer, data grid computing, utility and edge computing, application networking, wireless content delivery, pervasive networking and content computing. Building on the success of the previous WCW meetings, WCW10 plans to form a strong technical program that covers the newest and most interesting areas relating to content delivery services as they move through the Internet. Call for Papers: The workshop solicits technical papers related to Internet content delivery, caching and replication, and content services networking.
Particular areas of interest include, but are not limited to:
- Content delivery architectures
- P2P file sharing, storage, and content delivery
- Caching and content distribution for mobile wireless systems
- Web caching and replication (protocols and architectures)
- Edge services and dynamic content caching
- Multimedia content distribution
- Overlay networks for content delivery
- Content placement and request routing
- Empirical studies of deployed content delivery systems
- Security in content distribution systems
- Wide-area upload and content gathering
- Novel applications and paradigms for caching and content distribution

General Chair: Ernst W. Biersack, Institut Eurecom
Program Chairs: Ernst W. Biersack, Institut Eurecom; Pablo Rodriguez, Microsoft Research
Local Organization Chair: Guillaume Urvoy-Keller, Institut Eurecom
Cyber Chair: Pietro Michiardi, Institut Eurecom

Program Committee:
Mostafa Ammar, Georgia Institute of Technology
Azer Bestavros, Boston University
Bobby Bhattacharjee, University of Maryland
Paul Francis, Cornell University
Markus Hofmann, Bell-Labs
Magnus Karlsson, HP Labs
Anne-Marie Kermarrec, INRIA
Laurent Mathy, University of Lancaster
Guillaume Pierre, Vrije Universiteit Amsterdam
Thomas Plagemann, Oslo University
Lili Qiu, Microsoft Research
Reza Rejaie, University of Oregon
Torsten Suel, Polytechnic University
Mary Vernon, University of Wisconsin-Madison
Geoff Voelker, University of California, San Diego
Craig Wills, Worcester Polytechnic Institute
Alec Wolman, Microsoft Research
Hui Zhang, Carnegie Mellon University

Guidelines: Technical papers and synopses are welcome. Technical papers describe previously unpublished research results or empirical evaluations of current systems. Synopses are summaries of interesting new problems or approaches, or of standards or development efforts in progress. Technical papers are limited to 5000 words; synopses are limited to 3000 words.
We require authors to first submit a 150-word abstract to ease the process of reviewer assignment. The Program Committee will judge submitted papers on relevance, significance, originality, clarity, and technical merit. Do not submit product marketing material or material that is previously published or under review elsewhere. Accepted papers will be either published in the LNCS Series of Springer-Verlag or as IEEE Proceedings. At least one author of each accepted paper must attend the workshop to present their work. Please submit technical papers and synopses in PDF format through the submission form on the conference Website. Proposals for Panels: WCW panels bring together researchers from industry and academia. These panels are an important element of WCW. This year we plan on having a panel that discusses "Ten years of Content Distribution". Please send other panel proposals in plain text by e-mail to the Program Chairs ([EMAIL PROTECTED]). Important Dates: 03/08/2005: Deadline for abstract submissions 03/15/2005: Deadline for paper submissions 05/31/2005: Acceptance notification 06/28/2005: Camera-ready papers due
[squid-users] Enforcing Refresh patterns
Hi guys, After reading the FAQ, searching on Google, reading viSofts manual and the Squid Documentation Project, after extensive experimenting and then packet tracing with Ethereal, I still cannot get the result I want with Squid. I want to *force* a particular refresh pattern on some objects (*.gif, *.js) from some servers. I want all gifs from some servers to be refreshed no earlier than 12 minutes after they went into the cache, *regardless* of the settings of the web server and the commands of the client. I tried the following settings:

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i .*\.gif$ 12 100% 12 override-expire override-lastmod reload-into-ims ignore-reload
refresh_pattern . 0 20% 4320

But then, whenever the user client generates a request for a gif object, squid first checks whether the object is stale by generating a request to the server. I do not want it to do so for at least 12 minutes; I want squid to return the object immediately. Can anyone give me advice? Best regards: al_shopov
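For reference, the poster's gif rule annotated with what each option overrides (refresh_pattern ages are in minutes, and Squid uses the first matching pattern, so this line must come before the catch-all "." rule, as it already does above):

```
refresh_pattern -i .*\.gif$ 12 100% 12 override-expire override-lastmod reload-into-ims ignore-reload
# override-expire  - ignore the server's Expires header
# override-lastmod - ignore Last-Modified when computing freshness
# reload-into-ims  - turn a client "no-cache" reload into If-Modified-Since
# ignore-reload    - ignore client "no-cache"/reload headers entirely
# Note: reload-into-ims and ignore-reload overlap; they are alternative
# treatments of the same client header, and ignore-reload alone is usually
# the one wanted when the goal is to never revalidate within the min age.
```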
[squid-users] Printing problem
Since I migrated from Windows to a Linux gateway with squid, sometimes clients can't print, and I have to reboot the clients to fix it. Why? How do I solve it? What is the network printing port? Regards, Daniel Navarro Maracay, Venezuela www.csaragua.com/ecodiver
Re: [squid-users] SSL Reverse Proxy to Exchange 2003 OWA - SQUID just shutsdown by itself.
- Original Message - From: "Henrik Nordstrom" <[EMAIL PROTECTED]> To: "Rakesh Kumar" <[EMAIL PROTECTED]> Cc: "Henrik Nordstrom" <[EMAIL PROTECTED]>; "Squid Users" Sent: Thursday, January 20, 2005 01:20 AM Subject: Re: [squid-users] SSL Reverse Proxy to Exchange 2003 OWA - SQUID just shutsdown by itself.
> On Sun, 16 Jan 2005, Rakesh Kumar wrote:
> > I have installed now Squid-3.0-PRE3-20050111. Now squid porcess is seems to
> > be stable as I have not restarted for last 4-5 days but facing an other
> > problem, now opening a box or a mail takes very long time (may be 10
> > minutes). We keep on seeing 'Loading' on the screen.
> > What is the problem
> No idea. Can be anything from a bug in Squid to a configuration error.
> Anything in cache.log?
>>> Nothing in cache.log.
> What does access.log say?
>>> I am attaching access.log.
> And what URL did you request in your browser?
>>> https://mail.xyz.com
> Regards
> Henrik

Sometimes mailboxes open quickly and at other times they will not open even after 10 minutes. Once, a mailbox opened only after a 15-minute wait.

ACCESS.LOG entries ---

*When the inbox opened successfully and a mail's content was displayed:*
1106564515.881 31 168.187.x.y TCP_MISS/401 391 GET http://mail.xyz.com/ - FIRST_UP_PARENT/mail.xyz.com text/html
1106564527.505 36 168.187.x.y TCP_MISS/200 1535 GET http://mail.xyz.com/ - FIRST_UP_PARENT/mail.xyz.com text/html
1106564530.637 2015 168.187.x.y TCP_MISS/200 24495 GET http://mail.xyz.com/rakesh/? - FIRST_UP_PARENT/mail.xyz.com text/html
1106564534.751 4529 168.187.x.y TCP_MISS/200 20264 GET http://mail.xyz.com/rakesh/Inbox/? - FIRST_UP_PARENT/mail.xyz.com text/html
1106564539.483 11 168.187.x.y TCP_MISS/200 11990 GET http://mail.xyz.com/exchweb/6.5.7226.0/controls/tf_Messages.xsl - FIRST_UP_PARENT/mail.xyz.com text/xml
1106564543.217 880 168.187.x.y TCP_MISS/207 13590 SEARCH http://mail.xyz.com/rakesh/Inbox/ - FIRST_UP_PARENT/mail.xyz.com text/xml
1106564546.507 24 168.187.x.y TCP_MISS/200 430 SUBSCRIBE http://mail.xyz.com/rakesh/Calendar - FIRST_UP_PARENT/mail.xyz.com -
1106564546.642 15 168.187.x.y TCP_MISS/200 427 SUBSCRIBE http://mail.xyz.com/rakesh/Tasks - FIRST_UP_PARENT/mail.xyz.com -
1106564547.567 605 168.187.x.y TCP_MISS/207 424 SEARCH http://mail.xyz.com/rakesh/Calendar - FIRST_UP_PARENT/mail.xyz.com text/xml
1106564547.783 338 168.187.x.y TCP_MISS/207 424 SEARCH http://mail.xyz.com/rakesh/Tasks - FIRST_UP_PARENT/mail.xyz.com text/xml
1106564547.973 34 168.187.x.y TCP_MISS/200 8512 GET http://mail.xyz.com/rakesh/Inbox/RE:-116.EML? - FIRST_UP_PARENT/mail.xyz.com text/html
1106564597.865 6 168.187.x.y TCP_MISS/200 475 SUBSCRIBE http://mail.xyz.com/rakesh/Inbox - FIRST_UP_PARENT/mail.xyz.com -

*Mailboxes ipool & IT-Sec did not open; 'Loading' kept displaying on screen:*
1106564635.216 83 168.187.x.y TCP_MISS/207 809 PROPFIND http://mail.xyz.com/rakesh/ipool/ - FIRST_UP_PARENT/mail.xyz.com text/xml
1106564635.452 1039 168.187.x.y TCP_MISS/200 19856 GET http://mail.xyz.com/rakesh/ipool/? - FIRST_UP_PARENT/mail.xyz.com text/html
1106564637.535 81 168.187.x.y TCP_MISS/207 795 BPROPPATCH http://mail.xyz.com/rakesh/ - FIRST_UP_PARENT/mail.xyz.com text/xml
1106564719.196 6 168.187.x.y TCP_MISS/207 567 POLL http://mail.xyz.com/rakesh/Inbox - FIRST_UP_PARENT/mail.xyz.com text/xml
1106564793.252 24 168.187.x.y TCP_MISS/207 812 PROPFIND http://mail.xyz.com/rakesh/IT-Sec/ - FIRST_UP_PARENT/mail.xyz.com text/xml
1106564840.064 7 168.187.x.y TCP_MISS/207 567 POLL http://mail.xyz.com/rakesh/Inbox - FIRST_UP_PARENT/mail.xyz.com text/xml
1106564959.217 5 168.187.x.y TCP_MISS/207 567 POLL http://mail.xyz.com/rakesh/Inbox - FIRST_UP_PARENT/mail.xyz.com text/xml
1106565079.203 4 168.187.x.y TCP_MISS/207 567 POLL http://mail.xyz.com/rakesh/Inbox - FIRST_UP_PARENT/mail.xyz.com text/xml

*After the above, mailbox nokia opened successfully and mail was displayed:*
1106565101.240 7 168.187.x.y TCP_MISS/207 816 PROPFIND http://mail.xyz.com/rakesh/nokia/ - FIRST_UP_PARENT/mail.xyz.com text/xml
1106565102.725 1680 168.187.x.y TCP_MISS/200 19865 GET http://mail.xyz.com/rakesh/nokia/? - FIRST_UP_PARENT/mail.xyz.com text/html
1106565105.846 575 168.187.x.y TCP_MISS/200 11057 GET http://mail.xyz.com/exchweb/6.5.7226.0/controls/tf_TwoLine.xsl - FIRST_UP_PARENT/mail.xyz.com text/xml
1106565108.307 348 168.187.x.y TCP_MISS/207 6168 SEARCH http://mail.xyz.com/rakesh/nokia/ - FIRST_UP_PARENT/mail.xyz.com text/xml
1106565110.620 20 168.187.x.y TCP_MISS/200 6326 GET http://mail.xyz.com/rakesh/nokia/[UserCenter]%20Your%20Password-3.EML? - FIRST_UP_PARENT/mail.xyz.com text/html

*Again, this mailbox PRG did not open even after 10 minutes:*
1106566087.981 32 168.187.198.212 TCP_MISS/207 819 PROPFIND http://mail.xyz.com/rakesh/PRG/ - FIRST_UP_PARENT/mail.xyz.com text/xml
1106566089.370 2255 168.187.198.212
Re: [squid-users] Problem with FTP upload through squid : truncated files
Thanks, it works with Mozilla 1.7.5. However, is it possible to connect to an ftp site through Mozilla without typing the user and password in plain text in the URL? Also, I notice in access.log that if I use Mozilla to download/upload from/to an ftp site, the methods used are GET/PUT, whereas it is the CONNECT method when using an ftp client such as filezilla or smartftp. So I don't know if this is really a client-side problem, a squid problem with CONNECT, a bad configuration, or anything else. So my problem is half resolved: now I can upload files through ftp (with Mozilla and the PUT method, thank you for this solution), but I still don't know why I can't upload with an ftp client (with the CONNECT method). (This is no longer necessary, thanks to you, but I like to know the solution for every problem I'm faced with.) If someone has an idea on this last point, it would relieve me greatly. Thanks, Henri On Mon, 24 Jan 2005 11:14:07 +0100, Elsen Marc <[EMAIL PROTECTED]> wrote: > > > > > > > Here are some access.log excerpts when I try different operations : > > > > > > First I download from ftp.redhat.com the file abiword-1.0.4-2.i386.rpm > > (4.98 MB) (in binary mode) > > You may also want to have a go with the latest Mozilla (1.7.5) which has > (renewed) support for ftp uploads. > If that works, you may for instance have a bug/problem at the client-app > side. > M.
RE: [squid-users] Problem with FTP upload through squid : truncated files
> > > Here are some access.log excerpts when I try different operations : > > > First I download from ftp.redhat.com the file abiword-1.0.4-2.i386.rpm > (4.98 MB) (in binary mode) > You may also want to have a go with the latest Mozilla (1.7.5) which has (renewed) support for ftp uploads. If that works, you may for instance have a bug/problem at the client-app side. M.
Re: [squid-users] Problem with FTP upload through squid : truncated files
Here are some access.log excerpts when I try different operations.

First I download from ftp.redhat.com the file abiword-1.0.4-2.i386.rpm (4.98 MB) (in binary mode). I get this line in access.log:
1106558551.810 14298 192.168.1.3 TCP_MISS/200 5232437 CONNECT ftp.redhat.com:14954 - DIRECT/209.132.176.30 - [Host: ftp.redhat.com:14954\r\n] []
and the download is correct. It works perfectly for every ftp download.

Then I try to upload a 79 kB html file (in ascii mode) to my personal ftp account, first with smartftp. I get a 31.5 kB file on my ftp, with the following lines in access.log:
1106557992.094 43 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:36467 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:36467\r\n] []
1106557992.154 15 192.168.1.3 TCP_MISS/200 250 CONNECT ftpperso.free.fr:54180 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:54180\r\n] []
If I retry, I get a 30.1 kB file:
1106558060.515 43 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:29418 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:29418\r\n] []
1106558060.571 18 192.168.1.3 TCP_MISS/200 250 CONNECT ftpperso.free.fr:63365 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:63365\r\n] []
Then a 42.6 kB file:
1106558090.866 50 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:5221 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:5221\r\n] []
1106558090.919 15 192.168.1.3 TCP_MISS/200 250 CONNECT ftpperso.free.fr:27894 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:27894\r\n] []
and a 31.5 kB file:
1106558128.399 46 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:10667 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:10667\r\n] []
1106558128.456 16 192.168.1.3 TCP_MISS/200 250 CONNECT ftpperso.free.fr:47718 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:47718\r\n] []
and a 53.6 kB file:
1106558165.607 51 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:1246 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:1246\r\n] []
1106558165.667 18 192.168.1.3 TCP_MISS/200 250 CONNECT ftpperso.free.fr:40582 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:40582\r\n] []
and a 50.9 kB file:
1106558198.307 49 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:2186 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:2186\r\n] []
1106558198.361 16 192.168.1.3 TCP_MISS/200 250 CONNECT ftpperso.free.fr:62871 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:62871\r\n] []
and a 28.7 kB file:
1106558236.013 45 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:49806 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:49806\r\n] []
1106558236.128 75 192.168.1.3 TCP_MISS/200 250 CONNECT ftpperso.free.fr:53424 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:53424\r\n] []
and so on...

If I use filezilla instead of smartftp I get a 13.8 kB file:
1106558918.549 23 192.168.1.3 TCP_MISS/200 39 CONNECT 212.27.40.252:45724 - DIRECT/212.27.40.252 - [Host: 212.27.40.252:45724\r\n] []
1106558918.629 10 192.168.1.3 TCP_MISS/200 250 CONNECT 212.27.40.252:5846 - DIRECT/212.27.40.252 - [Host: 212.27.40.252:5846\r\n] []
then again a 13.8 kB file:
1106559209.340 18 192.168.1.3 TCP_MISS/200 39 CONNECT 212.27.40.252:49877 - DIRECT/212.27.40.252 - [Host: 212.27.40.252:49877\r\n] []
1106559209.427 10 192.168.1.3 TCP_MISS/200 250 CONNECT 212.27.40.252:16455 - DIRECT/212.27.40.252 - [Host: 212.27.40.252:16455\r\n] []
and so on.

Then I try to upload a 439 kB tgz binary file (in binary mode). At first with smartftp I get a 424 kB file:
1106559593.377 208 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:41129 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:41129\r\n] []
1106559593.489 19 192.168.1.3 TCP_MISS/200 313 CONNECT ftpperso.free.fr:50343 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:50343\r\n] []
then a 408 kB file:
1106559690.549 210 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:55439 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:55439\r\n] []
1106559690.660 16 192.168.1.3 TCP_MISS/200 313 CONNECT ftpperso.free.fr:18901 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:18901\r\n] []
and a 382 kB file:
1106559781.616 210 192.168.1.3 TCP_MISS/200 39 CONNECT ftpperso.free.fr:64826 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:64826\r\n] []
1106559781.715 24 192.168.1.3 TCP_MISS/200 313 CONNECT ftpperso.free.fr:3442 - DIRECT/212.27.40.252 - [Host: ftpperso.free.fr:3442\r\n] []
If I try with filezilla, I get a 379 kB file:
1106559871.045 351 192.168.1.3 TCP_MISS/200 39 CONNECT 212.27.40.252:60138 - DIRECT/212.27.40.252 - [Host: 212.27.40.252:60138\r\n] []
1106559871.192 12 192.168.1.3 TCP_MISS/200 313 CONNECT 212.27.40.252:35117 - DIRECT/212.27.40.252 - [Host: 212.27.40.252:35117\r\n] []
then a 414 kB file:
1106559926.912 12 192.168.1.3 TCP_MISS/200 313 CONNECT 212.27.40.252:53944 - DIRECT/212.27.40.252 - [Host: 212.27.40.252:53944\r\n] []
1106559929.764 172 192.168.1.3 TCP_MISS/200 39 CONNECT 212.27.
[squid-users] NTLM Authentication and Streaming players
Hi guys, I'm using Squid 2.5 STABLE3 on RHEL ES 3.0 with NTLM authentication. I have problems with several Windows Media Player clients connecting to streaming servers, such as radios or other services, which ask users to authenticate manually (with a pop-up). How can I avoid this? I've tried to insert some ACLs but they don't work. I see that many links point to a file like this: Any ideas? Thank you Riccardo
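One commonly suggested workaround (not a verified fix; ACL names here are illustrative) is to exempt media-player requests from proxy authentication, since Windows Media Player often cannot complete the NTLM challenge and falls back to prompting the user:

```
# "browser" ACLs match the User-Agent header with a regex; NSPlayer is
# the User-Agent Windows Media Player typically sends (assumption --
# check your access.log with log_mime_hdrs on to confirm).
acl mediaplayer browser -i windows-media-player NSPlayer
acl ntlmusers proxy_auth REQUIRED
http_access allow mediaplayer
http_access allow ntlmusers
http_access deny all
```

Note the ordering: the mediaplayer rule must come before the proxy_auth rule, otherwise the authentication challenge is still issued first.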
Re: [squid-users] squid + squirm
> On Wed, 19 Jan 2005, [iso-8859-2] Max Černý wrote: > >But they have discovered that if they don't put www.google.com, but > >http://66.102.11.99 - the webpage will display. > > > >Is there any chance to block requests which do not have a DNS name, but > >only an IP address? On 23.01 00:24, Henrik Nordstrom wrote: > Not easily. This would require one to build a database of all the IP > addresses of the sites you want to block, as most have not registered the > reverse lookup. > > In cases like google it's even worse, as this requires building a database > of all the google search servers around the globe. A standard DNS lookup > of www.google.com only gives you some of them.. Maybe a redirector that denies all requests for IP-address URLs would do it. However, there are still legitimate sites that direct people to IP-address-based URLs. -- Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. Micro$oft random number generator: 0, 0, 0, 4.33e+67, 0, 0, 0...
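Short of a redirector, the same blunt policy can be expressed directly in squid.conf (ACL name illustrative; as noted above, this also breaks legitimate sites served only by IP):

```
# Deny any URL whose host part is a bare IPv4 address.
acl ip_host url_regex -i ^[a-z]+://[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
http_access deny ip_host
```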
Re: [squid-users] squid - dns server
> On Sat, 2005-01-22 at 15:38 -0600, Daniel Navarro wrote: > > is squid a dns server by itself? On 22.01 22:56, Kinkie wrote: > No. But you can tell squid to use DNS servers different than those you > specify system-wide. However, having a local caching DNS resolver might improve performance, especially when the other DNS servers are behind a slow link (which is often the very reason a proxy is set up in the first place). I usually recommend having a local DNS server even for links that do not need an http proxy. -- Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. 10 GOTO 10 : REM (C) Bill Gates 1998, All Rights Reserved!
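Kinkie's point about overriding the system-wide resolvers is a one-line squid.conf change (the address below assumes a caching resolver running on the proxy box itself):

```
# Make Squid query the local caching resolver instead of /etc/resolv.conf.
dns_nameservers 127.0.0.1
```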
Re: [squid-users] log filtering
On 22.01 06:29, [EMAIL PROTECTED] wrote: > I've got squid running and am asked to produce a > report showing requests by the USER not the WEBPAGE. Forcing proxy authentication could help with this, but in that case you must not use a transparent proxy. > i.e. only the URLS entered or clicked by the user, > filtering out all the extra info about the supporting > files that the webpage requested (GIFS, popups, > banners, etc). Note that from HTTP's point of view, only objects exist; there is no difference between images, frames, popups, etc. -- Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/ Warning: I wish NOT to receive e-mail advertising to this address. Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu. "To Boot or not to Boot, that's the question." [WD1270 Caviar]
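Once proxy authentication is in place, a rough per-user breakdown can be pulled straight from the log (a sketch assuming the native access.log format, where the authenticated username is the 8th whitespace-separated field; your log path may differ):

```shell
# Count requests per authenticated user, busiest first.
awk '{print $8}' /var/log/squid/access.log | sort | uniq -c | sort -rn
```

This counts every object request; collapsing it to "pages only" still runs into the objects-are-objects problem described above.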