Re: [squid-users] Requests Per Second

2010-01-25 Thread BarneyC

Is the 800 RPS on a 100Mb/s network? And the 5000 RPS? That was my question. I'm
not really concerned with what other people get on their systems, as every
system will be different. I want to know how many RPS I can expect to
encounter on a busy 100Mb/s residential network.
-- 
View this message in context: 
http://n4.nabble.com/Requests-Per-Second-tp1288921p1289133.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Delay pools

2010-01-25 Thread Sakhi Louw
Hi,

Does anyone know a good site with detailed information on Squid
delay pools?

-- 
Sakhi Louw



Re: [squid-users] Delay pools

2010-01-25 Thread Amos Jeffries

Sakhi Louw wrote:

Hi,

Does anyone know a good site with detailed information on Squid
delay pools?



http://wiki.squid-cache.org/Features/DelayPools
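Beyond the wiki page, a minimal class-2 pool shows the moving parts; the ACL, network, and rates below are purely illustrative (values are bytes/second, written restore/max, aggregate first and then per-host):

```
# Cap the whole LAN at ~125 kB/s overall and each client at 32 kB/s.
acl lan src 192.168.0.0/24
delay_pools 1
delay_class 1 2
delay_parameters 1 125000/125000 32000/32000
delay_access 1 allow lan
delay_access 1 deny all
```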

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE21
  Current Beta Squid 3.1.0.15


[squid-users] Questions regarding COSS setup

2010-01-25 Thread Markus Meyer
Hi all,

I want to use COSS, but I'm not sure I understand all the options
correctly. This is what I want:

Although our average object size is about 27 kB, most of the files are much
smaller. This is the distribution of about 11 million files from one of our
Squids:
 1kB: 19585 0,1%
 2kB: 583402 5,2%
 4kB: 4897854 44,3%
 8kB: 1049084 9,4%
 10kB: 145059 1,3%
 20kB: 351615 3,1%
 30kB: 182077 1,6%
 40kB: 325084 2,9%
 50kB: 597911 5,4%
 60kB: 807344 7,3%
 70kB: 742066 6,7%
 80kB: 527370 4,7%
 90kB: 333543 3%
 100kB: 206064 1,8%
 >100kB: 280574 2,5%

I want Squid to do as little I/O as possible, so I thought I'd set
maximum_object_size_in_memory to 4 kB and max-size for COSS to 3 x
8 kB = 24 kB. The rest goes into AUFS.

So my configuration lines would look something like this:

cache_dir coss /web/cache/1/coss/ 20480 max-size=24576
cache_dir coss /web/cache/2/coss/ 20480 max-size=24576
[...]
cache_dir aufs /web/cache/1/aufs/ 81920 290 256
cache_dir aufs /web/cache/2/aufs/ 81920 290 256
[...]

Help is needed with the following things:

- The --with-coss-membuf-size compile-time option defaults to 1 MB. Does
it make sense to change this value?

- How big should I make the COSS files? I thought about 20 GB on four
disks for COSS and 60 GB on the same disks for AUFS.

- What does block-size mean, and what values should I use? I can't
get my head around the docs in the Squid wiki.

- All other options for COSS seem to be for specific cases which don't
apply here (at least that's what I think), so leaving them at the default
values would be my choice.


Thanks for any help,

Markus


[squid-users] Bridge FreeBSD, PF and transparent squid

2010-01-25 Thread ozan ucar

Hello,
I want to run Squid transparently on a bridged FreeBSD system.
My network diagram and my Squid and PF configuration are here:

http://www.cehturkiye.com/bridge_pf_and_transparent_squid-_eng.jpg

Using Firefox, traffic passes through PF in bridged mode; Squid logs
TCP_DENIED but does not block the web site. Why?


Using Chrome with explicit proxy settings for Squid (192.168.5.11, port 80),
Squid logs TCP_DENIED and blocks the web site.

I have tried different PF rules and had Squid listen on other interfaces
(vr0, fxp0, bridge0), but the results did not change.


How can I solve this problem?
Can you suggest any documentation?
Thank you.
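For reference, a common interception layout for this kind of bridged setup looks like the sketch below. The interface name, client network, and port are guesses based on the diagram linked above, and where the rdr rule must attach (the bridge interface vs. a member interface) varies between FreeBSD releases:

```
# /etc/pf.conf -- redirect client port-80 traffic into Squid
rdr pass on bridge0 inet proto tcp from 192.168.5.0/24 to any port 80 -> 127.0.0.1 port 3128

# squid.conf -- accept the intercepted connections
http_port 3128 transparent
```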



Re: [squid-users] Bridge FreeBSD, PF and transparent squid

2010-01-25 Thread Ismail OZATAY

Hello Ozan,

Can you send us your squid.conf ?

Regards.

ismail


Hello,
I want to run Squid transparently on a bridged FreeBSD system.
My network diagram and my Squid and PF configuration are here:

http://www.cehturkiye.com/bridge_pf_and_transparent_squid-_eng.jpg

Using Firefox, traffic passes through PF in bridged mode; Squid logs
TCP_DENIED but does not block the web site. Why?


Using Chrome with explicit proxy settings for Squid (192.168.5.11, port 80),
Squid logs TCP_DENIED and blocks the web site.

I have tried different PF rules and had Squid listen on other interfaces
(vr0, fxp0, bridge0), but the results did not change.


How can I solve this problem?
Can you suggest any documentation?
Thank you.






Re: [squid-users] Bridge FreeBSD, PF and transparent squid

2010-01-25 Thread ozan ucar

Hello Ismail,
My squid.conf is http://www.cehturkiye.com/squid.conf

Thanks.
Ismail OZATAY wrote:

Hello Ozan,

Can you send us your squid.conf ?

Regards.

ismail


Hello,
I want to run Squid transparently on a bridged FreeBSD system.
My network diagram and my Squid and PF configuration are here:

http://www.cehturkiye.com/bridge_pf_and_transparent_squid-_eng.jpg

Using Firefox, traffic passes through PF in bridged mode; Squid
logs TCP_DENIED but does not block the web site. Why?


Using Chrome with explicit proxy settings for Squid (192.168.5.11, port 80),
Squid logs TCP_DENIED and blocks the web site.

I have tried different PF rules and had Squid listen on other interfaces
(vr0, fxp0, bridge0), but the results did not change.


How can I solve this problem?
Can you suggest any documentation?
Thank you.


Re: [squid-users] Questions regarding COSS setup

2010-01-25 Thread Amos Jeffries

Markus Meyer wrote:

Hi all,

I want to use COSS, but I'm not sure I understand all the options
correctly. This is what I want:

Although our average object size is about 27 kB, most of the files are much
smaller. This is the distribution of about 11 million files from one of our
Squids:
 1kB: 19585 0,1%
 2kB: 583402 5,2%
 4kB: 4897854 44,3%
 8kB: 1049084 9,4%
 10kB: 145059 1,3%
 20kB: 351615 3,1%
 30kB: 182077 1,6%
 40kB: 325084 2,9%
 50kB: 597911 5,4%
 60kB: 807344 7,3%
 70kB: 742066 6,7%
 80kB: 527370 4,7%
 90kB: 333543 3%
 100kB: 206064 1,8%
 >100kB: 280574 2,5%

I want Squid to do as little I/O as possible, so I thought I'd set
maximum_object_size_in_memory to 4 kB and max-size for COSS to 3 x
8 kB = 24 kB. The rest goes into AUFS.


Looking at that, 8 KB will catch 20% more than 4 KB would.



So my configuration lines would look something like this:

cache_dir coss /web/cache/1/coss/ 20480 max-size=24576
cache_dir coss /web/cache/2/coss/ 20480 max-size=24576
[...]
cache_dir aufs /web/cache/1/aufs/ 81920 290 256
cache_dir aufs /web/cache/2/aufs/ 81920 290 256


Add min-size=24576 to the AUFS dirs to prevent them from grabbing the
small files intended for COSS.
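Putting the two suggestions together, each disk's pair of cache_dir lines would then look something like:

```
cache_dir coss /web/cache/1/coss/ 20480 max-size=24576
cache_dir aufs /web/cache/1/aufs/ 81920 290 256 min-size=24576
```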



[...]

Help is needed with the following things:

- The --with-coss-membuf-size compile-time option defaults to 1 MB. Does
it make sense to change this value?


AFAIK, no, but you may want to test that.



- How big should I make the COSS files? I thought about 20 GB on four
disks for COSS and 60 GB on the same disks for AUFS.


I'm not sure about this.

More total size means more slices being swapped in and out to load rarer 
things. Larger slice size reduces that, but increases loading time.


COSS are limited by the maximum 2^24 objects per cache_dir, like all 
stores so far. So that times your average file size should give you 
something like 64 GB per dir as a theoretical absolute max store size.




- What does block-size mean, and what values should I use? I can't
get my head around the docs in the Squid wiki.


A block is equivalent to an inode, as I understand it. Each file is stored
in 1 to N blocks.  A block of 512 bytes storing a 12-byte file will waste
500 bytes, as will two blocks storing a 524-byte object.


To reach your 20GB directory size you will need block-size=2048.

Going by your object distribution I'd say that's probably workable,
though 1 KB (dir size 16 GB) would have less wastage.
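Those directory-size figures follow directly from the block addressing; a quick sanity check, assuming (as the reply above describes) a hard limit of 2^24 addressable blocks per COSS dir:

```python
# Max COSS cache_dir size for a given block-size, assuming a hard
# limit of 2**24 addressable blocks per directory.
for block_size in (512, 1024, 2048):
    max_dir_gib = 2**24 * block_size // 1024**3
    print(f"block-size={block_size}: max dir size {max_dir_gib} GiB")
```

which gives 8, 16, and 32 GiB respectively, matching the 16 GB/2048-byte numbers above.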



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE21
  Current Beta Squid 3.1.0.15


[squid-users] Poor mans static CDN

2010-01-25 Thread beac...@tiscali.co.uk
Hi,

Is there any way to configure the following scenario? I run a
WordPress blog and wish to offload the static content to Squid by
rewriting the hostnames in the templates. We have plenty of bandwidth on
tap, but the Apache resources on our VM appear to be pointlessly tied up
when busy serving the larger static content (images, crazy js/css) to
slow clients.

So, if the blog runs on blog.tld and is also the origin server, I'd
like to serve the static content from cache.blog.tld and have it pull
the content from the origin blog.tld site.

This is what I have setup so far:

http_port external-IP2:80 vhost defaultsite=cache.blog.tld
cache_peer blog.tld  parent 80 0 no-query originserver login=PASS 

What appears to be happening is that the Host: header in the request 
to the origin is cache.blog.tld rather than blog.tld.

Since I may have to do this for a lot of sites, I don't really want to
fix this with Apache's ServerAlias workaround, as I intend to use a
WordPress filter plugin to rewrite the static object URLs to use the cache
on the busy sites.

TIA!
B.





[squid-users] error access to http://www.tyco-fsbp.com/cgi-bin/webscripts/nph-home.pl

2010-01-25 Thread Eduardo Maia

Hello,

When accessing http://www.tyco-fsbp.com/cgi-bin/webscripts/nph-home.pl
with Squid 3.0.STABLE1 or 3.1.0.15 I get the error below. If I
access it directly, without the proxy, the website opens.

What could be the problem?


ERROR
The requested URL could not be retrieved

While trying to process the request:
GET /cgi-bin/webscripts/nph-home.pl HTTP/1.0
User-Agent: Opera/9.64 (X11; Linux i686; U; en) Presto/2.1.1
Host: www.tyco-fsbp.com
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, 
image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1

Accept-Language: en-US,en;q=0.9
Accept-Charset: iso-8859-1, utf-8, utf-16, *;q=0.1
Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0
Referer: http://www.tyco-fsbp.com/
Pragma: no-cache
Cache-Control: no-cache
Proxy-Connection: Keep-Alive


The following error was encountered:
Invalid Response

The HTTP Response message received from the contacted server could not 
be understood or was otherwise malformed. Please contact the site 
operator. Your cache administrator may be able to provide you with more 
details about the exact nature of the problem if needed.


Your cache administrator is webmaster.




thanks,

Eduardo



[squid-users] Cache_mem ?? -- help

2010-01-25 Thread Ariel
Hello list, I have a server with 3 GB of RAM and two 50 GB partitions
for cache space. What value should I assign to cache_mem?


Re: [squid-users] refresh pattern

2010-01-25 Thread Ernesto Nataloni

Hi Amos,
I have installed Squid 3.0.STABLE19 and made the change to my
aspx page as you told me, but nothing changed.
It seems that you can't give 1 as the min-age or max-age parameter in
the refresh_pattern directive.

Thanks so much for your help.
Kind regards

_
Ernesto Nataloni

 Teleborsa S.p.A.
Agenzia stampa economica fondata nel 1961
 via di Trasone, 52 - 00199 Roma - Italy
 http://www.teleborsa.it

Mail:   ernesto.natal...@teleborsa.it
Phone:  (+39).06.86502220
Mobile: (+39).345.4306212
Fax:(+39).06.86502800


On 23/01/2010 at 0.25, Amos Jeffries wrote:



On 22/01/2010 at 0.38, Amos Jeffries wrote:

Ernesto Nataloni wrote:

Hi,
I have found that if I set this refresh pattern with a 1 minute minimum
time


refresh_pattern .aspx 1 100% 1 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth


Squid doesn't cache the .aspx page.


Sometimes. Or anything else with aspx in the URL.
For example, http://aspx.example.com will be affected too.

To use file types in refresh pattern you need to use a regex:

  \.(aspx)(\?.*)?$

With aspx replaced by aspx|asp to treat the two file types .aspx and
.asp the same.
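With that regex, the one-minute rule from earlier in the thread would become something like (flags carried over unchanged):

```
refresh_pattern -i \.(aspx|asp)(\?.*)?$ 1 100% 1 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
```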




If I put the minimum time at 2 minutes:

refresh_pattern .aspx 2 100% 2 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth


Squid works fine and caches the .aspx page.

Whether this is true for every type of object (.htm, .jpg, .png) I don't
know; I didn't try with other objects.
I want to cache .aspx objects for only 1 minute and refresh them after
1 minute.

So I don't know if my instruction is correct.
Can you help me?


Please first answer why are you doing this?

Far, far better not to cache at all than to so completely override the
expert webmaster's knowledge of their own site requirements.

Ernesto Nataloni wrote:



I work for a financial data service provider.
I need to cache my data for 1 minute because I have a lot of web
accesses that fetch very small pieces of data from an IIS web server
(which I hope to change to Apache as soon as possible).
This is real-time data, so if I can cache it for only 1 minute I can
help my servers serve other applications more efficiently.

I'll try your suggestion and I'll feed back the results.


Okay. It sounds like you are authoritative for that website.

What you seem to be attempting is to replicate the behavior of:
  Cache-Control: max-age=0, s-maxage=60, max-stale=60
which permits intermediate caches to store a copy for up to 60 seconds from
original generation time and makes end clients always request fresh
copies.


BTW, What version of Squid?

Amos


[squid-users] Issue with XML requests

2010-01-25 Thread Ali Jawad
Hi
We are developing an application that sends XML requests to our
web server. We have a non-caching Squid server on our local network;
when the Squid server is in use we don't get the result back from the
server, but when we don't use the Squid server we get the result, although
no content filtering rules are in place. If the request is made
through a browser we get the answer.

This is the SQUID log for a browser
126735.732   1748 127.0.0.1 TCP_MISS/200 623 GET
http://xyz.com/balance2.php? - DIRECT/87.236.144.25 text/xml
This is the SQUID log for our application
126752.166  60004 127.0.0.1 TCP_MISS/000 0 POST
http://xyz.com/balance2.php - DIRECT/87.236.144.25 -


As for the server itself

This is the log when passing through SQUID with application
sourceIP - - [25/Jan/2010:17:17:44 +] POST /balance2.php HTTP/1.0 200 35
This is the log when NOT passing through SQUID with application
sourceIP - - [25/Jan/2010:17:18:55 +] POST /balance2.php HTTP/1.1 200 82

Can anyone please point me in the right direction ?

Regards


[squid-users] Re: Issue with XML requests

2010-01-25 Thread Ali Jawad
Without SQUID

The packet is


POST /balance2.php HTTP/1.1.
Host: xyz
content-type: application/x-www-form-urlencoded.
Connection: Keep-Alive.
content-length: 36.
.
username=sourceedge2&password=123456


With SQUID the request is:


POST /balance2.php HTTP/1.0.
Host: xyz.com.
Content-Type: application/x-www-form-urlencoded.
Content-Length: 36.
Via: 1.1 y.net:3128 (squid/2.6.STABLE5).
X-Forwarded-For: 127.0.0.1.
Cache-Control: max-age=259200.
Connection: keep-alive.

As you can see the argument line is missing and the server returns with:

HTTP/1.1 200 OK.
Date: Mon, 25 Jan 2010 18:19:38 GMT.
Server: Apache/2.2.3 (CentOS).
X-Powered-By: PHP/5.1.6.
Content-Length: 35.
Connection: close.
Content-Type: text/html; charset=UTF-8.
.
Error passing variables (AD err 01)


On Mon, Jan 25, 2010 at 6:30 PM, Ali Jawad alijaw...@gmail.com wrote:
 Hi
 We are developing an application that sends XML requests to our
 web server. We have a non-caching Squid server on our local network;
 when the Squid server is in use we don't get the result back from the
 server, but when we don't use the Squid server we get the result, although
 no content filtering rules are in place. If the request is made
 through a browser we get the answer.

 This is the SQUID log for a browser
 126735.732   1748 127.0.0.1 TCP_MISS/200 623 GET
 http://xyz.com/balance2.php? - DIRECT/87.236.144.25 text/xml
 This is the SQUID log for our application
 126752.166  60004 127.0.0.1 TCP_MISS/000 0 POST
 http://xyz.com/balance2.php - DIRECT/87.236.144.25 -


 As for the server itself

 This is the log when passing through SQUID with application
 sourceIP - - [25/Jan/2010:17:17:44 +] POST /balance2.php HTTP/1.0 200 35
 This is the log when NOT passing through SQUID with application
 sourceIP - - [25/Jan/2010:17:18:55 +] POST /balance2.php HTTP/1.1 200 82

 Can anyone please point me in the right direction ?

 Regards



[squid-users] Automatic Configuration

2010-01-25 Thread Jay Kolomeysky
I wanted to know if there is a way to implement a Squid server in your
environment without having to modify any browser settings.  Every article
I've read says that even if you use DNS/DHCP you still have to point the
browser to a configuration file; the only advantage is that if you move
the file or the server changes, you don't have to change the setting on
all the browsers.

We have over 6,000 people in our environment and I can't change all of 
their settings.  I'd like for the integration to be seamless.

Please let me know if this is possible and if so then how.  Thanks in 
advance.

__
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email 
__


[squid-users] Setting up Parent Proxy ?

2010-01-25 Thread Roland Roland

Hi all,

I have a Cisco-based tunnel set up between my two branches.
I've set up another Squid at the remote branch, and I'm trying to use it as
a parent proxy JUST for specific destinations.



for example:

Requests to 192.168.75.1/24 should go through the parent proxy, and
everything else should go directly to the internet.

How can I set that?

I know I can set the parent with cache_peer, but I don't know how to
direct just specific destinations to that proxy...




thanks for your help :)


[squid-users] Help with extension_methods

2010-01-25 Thread Dean Weimer
I found some errors in my cache.log file this afternoon. I have tracked them
down to a development machine, and I know they occurred while the developer
working on the machine was doing a build-out of Plone. The build did succeed
in the end, so I am not sure this is a huge concern, but I would rather not
have the errors in the future if it can be fixed.

There were several entries like this in the access.log:
1264442419.041  0 10.20.147.34 NONE/400 1806 NONE 
error:unsupported-request-method - NONE/- text/html

That corresponded to entries like this in the cache.log:
2010/01/25 12:03:35| clientParseRequestMethod: Unsupported method attempted by 
10.20.147.34: This is not a bug. see squid.conf extension_methods
2010/01/25 12:03:35| clientParseRequestMethod: Unsupported method in request 
'_z___'

I checked on extension_methods, but I am a little confused as to what to
enter for the method. To solve this issue, would I just use the
following configuration line?
extension_methods _z___

If anyone could point me in the right direction to find some resources on this 
issue it would be greatly appreciated.  I tried searching but didn't find any 
information on _z___ on the web.  I am currently running squid3.0.STABLE21.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co



RE: [squid-users] Automatic Configuration

2010-01-25 Thread John Lauro
Two options...

Some browsers will support Web Proxy Autodiscovery Protocol, which can find
the configuration file.  See:
http://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol

The other option is to setup squid as a transparent proxy.
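For the WPAD route, the file the browser fetches (conventionally http://wpad.<your-domain>/wpad.dat) is an ordinary PAC script; a minimal sketch, with the proxy hostname as a placeholder:

```
function FindProxyForURL(url, host) {
    // Send everything via Squid; fall back to going direct if it is down.
    return "PROXY squid.example.com:3128; DIRECT";
}
```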



 -Original Message-
 From: Jay Kolomeysky [mailto:jkolomey...@sgu.edu]
 Sent: Monday, January 25, 2010 12:25 PM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Automatic Configuration
 
 I wanted to know if there is a way to implement a Squid server in your
 environment without having to modify any browser settings.  Every article
 I've read says that even if you use DNS/DHCP you still have to point the
 browser to a configuration file; the only advantage is that if you move
 the file or the server changes, you don't have to change the setting on
 all the browsers.
 
 We have over 6,000 people in our environment and I can't change all of
 their settings.  I'd like for the integration to be seamless.
 
 Please let me know if this is possible and if so then how.  Thanks in
 advance.
 



Re: [squid-users] error access to http://www.tyco-fsbp.com/cgi-bin/webscripts/nph-home.pl

2010-01-25 Thread Marcello Romani

Eduardo Maia wrote:

Hello,

When accessing http://www.tyco-fsbp.com/cgi-bin/webscripts/nph-home.pl
with Squid 3.0.STABLE1 or 3.1.0.15 I get the error below. If I
access it directly, without the proxy, the website opens.

What could be the problem?


ERROR
The requested URL could not be retrieved

While trying to process the request:
GET /cgi-bin/webscripts/nph-home.pl HTTP/1.0
User-Agent: Opera/9.64 (X11; Linux i686; U; en) Presto/2.1.1
Host: www.tyco-fsbp.com
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, 
image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1

Accept-Language: en-US,en;q=0.9
Accept-Charset: iso-8859-1, utf-8, utf-16, *;q=0.1
Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0
Referer: http://www.tyco-fsbp.com/
Pragma: no-cache
Cache-Control: no-cache
Proxy-Connection: Keep-Alive


The following error was encountered:
Invalid Response

The HTTP Response message received from the contacted server could not 
be understood or was otherwise malformed. Please contact the site 
operator. Your cache administrator may be able to provide you with more 
details about the exact nature of the problem if needed.


Your cache administrator is webmaster.




thanks,

Eduardo



Hello,
the problem is that your script is not generating correct HTTP headers.
Web browsers are written to render as many sites as possible, so they are
very forgiving when they find errors in the HTTP stream.

Squid is instead much stricter in this regard.

Try wget -S against your site and against another site which loads well
through the HTTP proxy.

Here's an example:

marce...@marcello-laptop:~$ wget -S www.cineca.it

--2010-01-25 22:25:19--  http://www.cineca.it/
Resolving www.cineca.it... 130.186.1.46
Connecting to www.cineca.it|130.186.1.46|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.1 200 OK
  Date: Mon, 25 Jan 2010 21:25:19 GMT
  Server: Apache/1.3.36 (Unix) PHP/4.4.2 mod_auth_ianus_sso/1.15
DAV/1.0.3 mod_ssl/2.8.27 OpenSSL/0.9.8b mod_jk/1.2.15
  Last-Modified: Fri, 22 Jan 2010 21:58:52 GMT
  ETag: b52a-622a-4b5a1f9c
  Accept-Ranges: bytes
  Content-Length: 25130
  Connection: close
  Content-Type: text/html; charset=ISO-8859-1
Length: 25130 (25K) [text/html]
Saving to: index.html

100%[===========================] 25,130      54.2K/s   in 0.5s

2010-01-25 22:25:19 (54.2 KB/s) - index.html saved [25130/25130]


marce...@marcello-laptop:~$ wget -S http://www.tyco-fsbp.com/cgi-bin/webscripts/nph-home.pl

--2010-01-25 22:25:27--  http://www.tyco-fsbp.com/cgi-bin/webscripts/nph-home.pl
Resolving www.tyco-fsbp.com... 213.203.193.92
Connecting to www.tyco-fsbp.com|213.203.193.92|:80... connected.
HTTP request sent, awaiting response...
Length: unspecified
Saving to: nph-home.pl

[                  <=>        ] 17,810      33.1K/s   in 0.5s

2010-01-25 22:25:29 (33.1 KB/s) - nph-home.pl saved [17810]


As you can see, the cineca.it server prints out several headers before
the actual page.

Your script is just putting out the HTML page.

From the URL it seems it's a Perl script, so let me direct you to the
CGI.pm man page:

http://search.cpan.org/dist/CGI.pm/lib/CGI.pm
where you have this example:

   #!/usr/local/bin/perl -w
   use CGI;                             # load CGI routines
   $q = CGI->new;                       # create new CGI object
   print $q->header,                    # create the HTTP header
         $q->start_html('hello world'), # start the HTML
         $q->h1('hello world'),         # level 1 header
         $q->end_html;                  # end the HTML

Note http headers _before_ the actual html.


HTH

Marcello


Re: [squid-users] Cache_mem ?? -- help

2010-01-25 Thread Luis Daniel Lucio Quiroz
On Monday, 25 January 2010 at 08:40:14, Ariel wrote:
 Hello list, I have a server with 3 GB of RAM and two 50 GB partitions
 for cache space. What value should I assign to cache_mem?

First you need to know how much RAM and disk you will assign for Squid's
use; then you can do the maths.

Each object in Squid uses 64 or 72 bytes (on 32-bit or 64-bit architectures)
of index, and the index is stored in memory, so do your maths using the
mean object size.
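As a rough worked example for the original question (two 50 GB partitions; the 72-byte figure is from the reply above, while the 27 kB mean object size is just an assumed value for illustration):

```python
# Estimate in-memory index size for 100 GB of disk cache, assuming a
# 64-bit build (72 bytes of index per object) and an assumed mean
# object size of 27 kB.
disk_cache = 2 * 50 * 1024**3      # two 50 GB partitions
mean_object = 27 * 1024            # assumed mean object size
objects = disk_cache // mean_object
index_ram_mb = objects * 72 // 1024**2
print(objects, index_ram_mb)       # ~3.9M objects, ~270 MB of index RAM
```

so on this box roughly 270 MB of the 3 GB goes to the index before cache_mem is even considered.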

LD


Re: [squid-users] Cache_mem ?? -- help

2010-01-25 Thread Marcello Romani

Ariel wrote:

Hello list, I have a server with 3 GB of RAM and two 50 GB partitions
for cache space. What value should I assign to cache_mem?


http://wiki.squid-cache.org/SquidFaq/SquidMemory

HTH

Marcello


Re: [squid-users] Help with extension_methods

2010-01-25 Thread Mark Nottingham
_z___ isn't a request method on anybody's planet :)

It's more than likely that the client is either trying to talk a protocol other 
than HTTP to you, or its request message delimitation (usually, Content-Length) 
is messed up. 



On 26/01/2010, at 6:02 AM, Dean Weimer wrote:

 I found some errors in my cache.log file this afternoon. I have tracked
 them down to a development machine, and I know they occurred while the
 developer working on the machine was doing a build-out of Plone. The build
 did succeed in the end, so I am not sure this is a huge concern, but I
 would rather not have the errors in the future if it can be fixed.
 
 There were several entries like this in the access.log:
 1264442419.041  0 10.20.147.34 NONE/400 1806 NONE 
 error:unsupported-request-method - NONE/- text/html
 
 That corresponded to entries like this in the cache.log:
 2010/01/25 12:03:35| clientParseRequestMethod: Unsupported method attempted 
 by 10.20.147.34: This is not a bug. see squid.conf extension_methods
 2010/01/25 12:03:35| clientParseRequestMethod: Unsupported method in request 
 '_z___'
 
 I checked on the extension_methods, but am a little confused as to what to 
 enter for the method?  To possibly solve this issue, would I just use the 
 following configuration line?
 extension_methods _z___
 
 If anyone could point me in the right direction to find some resources on 
 this issue it would be greatly appreciated.  I tried searching but didn't 
 find any information on _z___ on the web.  I am currently running 
 squid3.0.STABLE21.
 
 Thanks,
  Dean Weimer
  Network Administrator
  Orscheln Management Co
 

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Bypass proxy for one user

2010-01-25 Thread Chris Robertson

Dayo Adewunmi wrote:

Hi

On our LAN, you can't access the internet without having the proxy
settings in your browser. I've got one user coming in whose laptop is
locked down, and unfortunately his browser is set to use no proxy.

How do I configure squid to let him access the internet directly?


You can't.  Squid is not preventing him from accessing the internet; it 
is the facility by which access to the Web is allowed.  Web access is 
being blocked using some other utility, and that utility must be 
identified and reconfigured.



Preferably without him hogging all the bandwidth, which wouldn't be an
issue if I could put the proxy settings into his browser, as Squid is
configured to use delay pools.

Thanks

Dayo


Chris



Re: [squid-users] Requests Per Second

2010-01-25 Thread Chris Robertson

BarneyC wrote:

Is the 800 RPS on a 100Mb/s network? And the 5000 RPS? That was my question. I'm
not really concerned with what other people get on their systems, as every
system will be different. I want to know how many RPS I can expect to
encounter on a busy 100Mb/s residential network.
  


I have a fairly special case, in that the majority of my customers are 
on the far end of a satellite link (each client also has a Squid server 
on premise), but perhaps you will find the combined numbers from my 
central servers useful as a data point to extrapolate from.


Client side traffic: 58 Mbit/sec
Internet side traffic: 44 Mbit/sec
Req/sec: 215
Number of unique clients: 150 (remember, this is the number of child 
Squid servers.  Real client numbers are north of 10,000).


Each server runs Squid 2.7STABLE6 with the following hardware.
2 x Xeon 3110
8GB RAM
3 x 45 GB COSS
3 x 150 GB aufs

My servers are not remotely taxed.  Squid's CPU usage hovers around 10% 
and the load average (5 or 15 minute) rarely rises above 2.


Chris



Re: [squid-users] Poor mans static CDN

2010-01-25 Thread Chris Robertson

beac...@tiscali.co.uk wrote:

Hi,

Is there any way to configure the following scenario? I run a
WordPress blog and wish to offload the static content to Squid by
rewriting the hostnames in the templates. We have plenty of bandwidth on
tap, but the Apache resources on our VM appear to be pointlessly tied up
when busy serving the larger static content (images, crazy js/css) to
slow clients.


So, if the blog runs on blog.tld and is also the origin server, I'd
like to serve the static content from cache.blog.tld and have it pull
the content from the origin blog.tld site.
  


If I understand correctly, you want clients to request static content 
from cache.blog.tld, but you want cache.blog.tld to request non-cached 
static content from blog.tld.  A url_rewrite_program (and/or some 
counseling) is in order:


http://www.squid-cache.org/Doc/config/url_rewrite_program/


This is what I have setup so far:

http_port external-IP2:80 vhost defaultsite=cache.blog.tld
cache_peer blog.tld  parent 80 0 no-query originserver login=PASS 
  


I'd also suggest changing your defaultsite to blog.tld, as that is 
what Squid will send as the Host header if one is lacking in the 
original request.


What appears to be happening is that the Host: header in the request 
to the origin is cache.blog.tld rather than blog.tld.


Since I may have to do this for a lot of sites, I don't really want to
fix this with Apache's ServerAlias workaround, as I intend to use a
WordPress filter plugin to rewrite the static object URLs to use the cache
on the busy sites.


A simple redirect program that strips a leading cache. from the 
requested domain name will give you the flexibility to do this.
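A minimal sketch of such a helper (using the Squid 2.x url_rewrite_program protocol, where the helper reads one request per line with the URL first and prints the rewritten URL; the cache. naming convention is the one proposed in this thread):

```python
#!/usr/bin/env python
import sys

def rewrite(url):
    # Strip one leading "cache." from the host, so a request for
    # http://cache.blog.tld/x is fetched from http://blog.tld/x.
    return url.replace("://cache.", "://", 1)

if __name__ == "__main__":
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        # fields[0] is the URL; the client/ident/method fields that
        # Squid appends are ignored by this simple helper.
        sys.stdout.write(rewrite(fields[0]) + "\n")
        sys.stdout.flush()
```

Hook it in with url_rewrite_program pointing at this script, restricted by ACL to the cache.* sites if needed.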


 


TIA!
B.
  


Chris



Re: [squid-users] Setting up Parent Proxy ?

2010-01-25 Thread Chris Robertson

Roland Roland wrote:

Hi all,

I have a Cisco-based tunnel set up between my two branches.
I've set up another Squid at the remote branch, and I'm trying to use it
as a parent proxy JUST for specific destinations.



for example:

Requests to 192.168.75.1/24 should go through the parent proxy, and
everything else should go directly to the internet.

How can I set that?


http://en.wikipedia.org/wiki/Proxy_auto-config



 I know I can set the parent with cache_peer, but I don't know how to
direct just specific destinations to that proxy...




thanks for your help :)
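On the Squid side, the usual pattern is cache_peer plus cache_peer_access; a sketch, with the peer hostname, port, and network as placeholders for your setup:

```
acl remote_net dst 192.168.75.0/24
cache_peer remote-branch.example.com parent 3128 0 no-query
cache_peer_access remote-branch.example.com allow remote_net
cache_peer_access remote-branch.example.com deny all
never_direct allow remote_net
always_direct allow !remote_net
```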


Chris



[squid-users] How to configure Squid to proxy a web site with external links to itself?

2010-01-25 Thread fulan Peng
Hi, gurus!

Some web sites use external links to refer to internal pages. For example,
for a page anotherpage.html at the root directory, a relative link like
/anotherpage.html would normally be fine, but the site uses
http://thiswebsite.com/anotherpage.html instead. The browser has no
problem, but Squid gets lost: it treats http://thiswebsite.com as an
external web site and gives up. How can we get Squid to work with these
web sites?
Thanks a lot!

Fulan Peng


Re: [squid-users] Re: Issue with XML requests

2010-01-25 Thread Amos Jeffries

Ali Jawad wrote:

Without SQUID

The packet is


POST /balance2.php HTTP/1.1.
Host: xyz
content-type: application/x-www-form-urlencoded.
Connection: Keep-Alive.
content-length: 36.
.
username=sourceedge2&password=123456



I assume you are consistent in using '.' as a newline marker.

The Host: header is a bit broken; if that's not a typo, it will cause
Squid to reject the POST.




With SQUID the request is:


POST /balance2.php HTTP/1.0.
Host: xyz.com.
Content-Type: application/x-www-form-urlencoded.
Content-Length: 36.
Via: 1.1 y.net:3128 (squid/2.6.STABLE5).
X-Forwarded-For: 127.0.0.1.
Cache-Control: max-age=259200.
Connection: keep-alive.

As you can see the argument line is missing and the server returns with:

HTTP/1.1 200 OK.
Date: Mon, 25 Jan 2010 18:19:38 GMT.
Server: Apache/2.2.3 (CentOS).
X-Powered-By: PHP/5.1.6.
Content-Length: 35.
Connection: close.
Content-Type: text/html; charset=UTF-8.
.
Error passing variables (AD err 01)


On Mon, Jan 25, 2010 at 6:30 PM, Ali Jawad alijaw...@gmail.com wrote:

Hi
We are developing an application that sends XML requests to our
webserver. We have a non-caching Squid server on our local network;
when the Squid server is in use we don't get the result back from the
server. When we don't use the Squid server we get the result, although
no content filtering rules are in place. If the request is made
through a browser we get the answer.

This is the SQUID log for a browser
126735.732   1748 127.0.0.1 TCP_MISS/200 623 GET
http://xyz.com/balance2.php? - DIRECT/87.236.144.25 text/xml
This is the SQUID log for our application
126752.166  60004 127.0.0.1 TCP_MISS/000 0 POST
http://xyz.com/balance2.php - DIRECT/87.236.144.25 -


As for the server itself

This is the log when passing through SQUID with application
sourceIP - - [25/Jan/2010:17:17:44 +] POST /balance2.php HTTP/1.0 200 35
This is the log when NOT passing through SQUID with application
sourceIP - - [25/Jan/2010:17:18:55 +] POST /balance2.php HTTP/1.1 200 82

Can anyone please point me in the right direction ?

Regards




TCP_MISS/000 means something broke before Squid received any reply 
information to pull the status code from.


I'd raise the debug level to "debug_options 11,5 55,5 58,5 73,6 74,6" 
and see what information is available about the problem.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE21
  Current Beta Squid 3.1.0.15


Re: [squid-users] How to configure Squid to proxy a web site with external links to itself?

2010-01-25 Thread Amos Jeffries

fulan Peng wrote:

Hi, gurus!

Some web sites use external links to refer to internal pages. For
example, for a page anotherpage.html at the root directory,
/anotherpage.html would usually be fine. But the site uses
http://thiswebsite.com/anotherpage.html instead. The browser has no
problem, but Squid gets lost: Squid thought http://thiswebsite.com is
an external web site and quit and disappeared. How can we get Squid to
work with these web sites?
Thanks a lot!

Fulan Peng


You must be doing something mighty strange for that to matter.

Perhaps you are using a URL-rewriter to present a completely different 
domain name to the public?

If so this is the price. You have two choices:
 1) Stop using the re-writer and present the same domain(s) to the public.
 2) put up with it and try to remove the absolute links from all web 
content at the point they are generated.


Or did you mean Squid crashes and stops responding to anything for a 
while by "quit and disappeared"?



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE21
  Current Beta Squid 3.1.0.15


Re: [squid-users] Setting up Parent Proxy ?

2010-01-25 Thread Amos Jeffries

Chris Robertson wrote:

Roland Roland wrote:

Hi all,

 I have a Cisco-based tunnel set up between my two branches.
I've set up another Squid at the remote branch and I'm trying to use it 
as a parent proxy JUST for specific requests.



for example:

going to 192.168.75.1/24 should go through the parent proxy..

and everything else should go directly to the internet...


how can i set that?


http://en.wikipedia.org/wiki/Proxy_auto-config



Or maybe:
 http://www.squid-cache.org/Doc/config/cache_peer_access

... depending on which of the software involved he wants to be the child.
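For the cache_peer route, the relevant squid.conf lines might look like
the sketch below. The parent's address and port are assumptions; the
subnet is taken from the question, written as a proper network address:

```
# Hypothetical parent proxy at the remote branch end of the tunnel.
cache_peer 10.0.0.1 parent 3128 0 no-query

# Send only the remote branch's subnet through the parent.
acl remote_branch dst 192.168.75.0/24
cache_peer_access 10.0.0.1 allow remote_branch
cache_peer_access 10.0.0.1 deny all

# Everything else bypasses the parent entirely.
always_direct allow !remote_branch
never_direct allow remote_branch
```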

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE21
  Current Beta Squid 3.1.0.15


[squid-users] Squid performance issues

2010-01-25 Thread Felipe W Damasio
 Hi all,

 Sorry for the long email.

 I'm using squid on a 300Mbps ISP with about 10,000 users.

 I have an 8-core I7 Intel processor-machine, with 8GB of RAM and 500GB
of HD for the cache (a dedicated SATA HD with xfs). Using aufs as
storeio.

 I'm caching mostly multimedia files (youtube and such).

 Squid usually eats around 50-70% of one core.

 But always around midnight (when a lot of users browse the internet),
my squid becomes very slow... I mean, a page that usually takes 0.04s
to load takes 23 seconds to load.

 My best guess is that the volume of traffic is making squid slow.

 I'm using a 2.6.29.6 vanilla kernel with tproxy enabled for squid.
And I'm using these /proc configurations:

echo 0 > /proc/sys/net/ipv4/tcp_ecn
echo 1 > /proc/sys/net/ipv4/tcp_low_latency
echo 10 > /proc/sys/net/core/netdev_max_backlog
echo 409600 > /proc/sys/net/ipv4/tcp_max_syn_backlog
echo 7 > /proc/sys/net/ipv4/tcp_fin_timeout
echo 15 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 3 > /proc/sys/net/ipv4/tcp_keepalive_probes
echo 65536 > /proc/sys/vm/min_free_kbytes
echo 262144 1024000 4194304 > /proc/sys/net/ipv4/tcp_rmem
echo 262144 1024000 4194304 > /proc/sys/net/ipv4/tcp_wmem
echo 1024000 > /proc/sys/net/core/rmem_max
echo 1024000 > /proc/sys/net/core/wmem_max
echo 512000 > /proc/sys/net/core/rmem_default
echo 512000 > /proc/sys/net/core/wmem_default
echo 524288 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
echo 3 > /proc/sys/net/ipv4/tcp_synack_retries
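
The same tuning can be made persistent across reboots via
/etc/sysctl.conf (a sketch; the keys mirror the /proc paths above and
are loaded with "sysctl -p"):

```
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_low_latency = 1
net.core.netdev_max_backlog = 10
net.ipv4.tcp_max_syn_backlog = 409600
net.ipv4.tcp_fin_timeout = 7
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 3
vm.min_free_kbytes = 65536
net.ipv4.tcp_rmem = 262144 1024000 4194304
net.ipv4.tcp_wmem = 262144 1024000 4194304
net.core.rmem_max = 1024000
net.core.wmem_max = 1024000
net.core.rmem_default = 512000
net.core.wmem_default = 512000
net.ipv4.netfilter.ip_conntrack_max = 524288
net.ipv4.tcp_synack_retries = 3
```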

 The machine is in bridge-mode.

 I wrote a little script that prints:

 - The date;
 - The result of "/usr/bin/time squidclient http://www.amazon.com";
 - The number of ESTABLISHED connections (through netstat -an);
 - The number of TIME_WAIT connections;
 - The total number of netstat connections;
 - The route cache (ip route list cache);
 - The number of clients currently connected in squid (through mgr:info);
 - The number of free memory in MB (free -m);
 - The % used of the squid-running core;
 - The average time to respond to a request (from mgr:info) - 5 minutes avg;
 - The average number of HTTP requests / sec (5 minutes avg) - from mgr:info as well.
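
As an illustration only (not the author's actual script), counters like
these could be scraped from cache-manager output with a small parser;
the label patterns below are assumptions modeled on typical Squid 2.x
manager output and may need adjusting for other versions:

```python
import re
from typing import Dict

def parse_counters(text: str, patterns: Dict[str, str]) -> Dict[str, float]:
    """Extract named numeric counters from Squid cache-manager output."""
    found = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text)
        if match:
            found[name] = float(match.group(1))
    return found

# Hypothetical label patterns; the exact labels vary between versions.
PATTERNS = {
    "clients": r"Number of clients accessing cache:\s+([\d.]+)",
    "http_req_per_sec": r"client_http\.requests\s*=\s*([\d.]+)/sec",
}

# Sample lines shaped like squidclient mgr:info / mgr:5min output.
sample = (
    "Number of clients accessing cache:\t4677\n"
    "client_http.requests = 476.871718/sec\n"
)
print(parse_counters(sample, PATTERNS))
```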

 On any other hour, I have something like:

2010-01-25 18:48:19 ; 0.04 ; 19383 ; 9902 ; 29865 ; 96972 ; 4677 ; 131
; 59 ; 0.24524 ; 476.871718
2010-01-25 18:53:29 ; 0.04 ; 18865 ; 8593 ; 30123 ; 179570 ; 4679 ;
148 ; 62 ; 0.22004 ; 504.424207
2010-01-25 18:58:38 ; 0.04 ; 18377 ; 9056 ; 29283 ; 99038 ; 4680 ; 174
; 61 ; 0.22004 ; 466.659336
2010-01-25 19:03:49 ; 0.04 ; 18877 ; 9133 ; 28327 ; 181196 ; 4673 ;
171 ; 57 ; 0.24524 ; 483.558436

 So, it takes around 0.04s to get http://www.amazon.com.

2010-01-24 23:46:50 ; 2.53 ; 22723 ; 9861 ; 35012 ; 64752 ; 4306 ;
166; 70 ; 0.22004 ; 566.364274
2010-01-24 23:52:04 ; 3.74 ; 21173 ; 10256 ; 33242 ; 167594 ; 4309 ;
169 ; 68 ; 0.20843 ; 537.758601
2010-01-24 23:57:20 ; 0.08 ; 18691 ; 9050 ; 29590 ; 65496 ; 4312 ; 138
; 71 ; 0.20843 ; 525.119006
2010-01-25 00:02:29 ; 15.54 ; 18016 ; 8209 ; 29035 ; 149248 ; 4318 ;
160 ; 82 ; 0.25890 ; 491.615241

 As I said, it goes from 0.04 to 15.54s(!) to get a single html file.
Horrible. After 12:30, everything goes back to normal.

 From those variables, I can't seem to find any indication of what can
be causing this appalling slowdown. The number of squid users doesn't
go up that much; I just see that the avg time squid reports for
answering a request goes from 0.20s to 0.25s, and the number of http
requests/sec actually goes down from 566 to 491... which is kind of odd
to me. And the number of users using squid stays at around 4300.

 I talked to Mr. Dave Dykstra, and he thought it could be I/O delay
issues. So I tried:

cache_dir null /tmp
cache_access_log none
cache_store_log none

  But no luck; at midnight tonight things went wild again:

2010-01-25 23:57:03 ; 0.04 ; 24112 ; 11330 ; 37240 ; 74456 ; 3516 ;
160 ; 58 ; 0.25890 ; 581.047037
2010-01-26 00:02:15 ; 10.82 ; 25638 ; 11695 ; 38537 ; 177198 ; 3533 ;
149 ; 78 ; 0.27332 ; 570.312936
2010-01-26 00:07:38 ; 42.64 ; 23818 ; 11563 ; 38097 ; 88902 ; 3556 ;
171 ; 70 ; 0.30459 ; 585.880418

  From 0.04 to 42 seconds to load the main html page of amazon.com. (!)

  Do you have any idea or any other data I can collect to try and
track down this?

  I'm using squid-2.7.stable7, but I'm willing to try squid-3.0 or
squid-3.1 if you guys think it could help.

  I'm using 2 gigabit Marvell Ethernet boards with sky2 driver. Don't
know if it's relevant, though.

  If you guys need any more info to try and help me figure this out, please ask.

  I'm willing to test, code or do pretty much anything to make squid
perform better in my environment. Please let me know how I can help you
help me. :-)

  Thanks!

Felipe Damasio


RE: [squid-users] Squid performance issues

2010-01-25 Thread John Lauro
What does the following give:
uname -a

While it's being slow, run the following to get some stats:

vmstat 1 11 ;# Will run for 11 seconds
iostat -dx 11   ;# Will run for 11 seconds, install sysstat if not found


My first guess is memory swapping, but could be I/O.  The above should help
narrow it down.

 -----Original Message-----
 From: Felipe W Damasio [mailto:felip...@gmail.com]
 Sent: Monday, January 25, 2010 9:37 PM
 To: squid-users@squid-cache.org
 Subject: [squid-users] Squid performance issues
 
 [...]

Re: [squid-users] Squid performance issues

2010-01-25 Thread Felipe W Damasio
  Hi Mr. John,

2010/1/26 John Lauro john.la...@covenanteyes.com:
 What does the following give:
 uname -a

uname -a:

Linux squid 2.6.29.6 #4 SMP Thu Jan 14 21:00:42 BRST 2010 x86_64
Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz GenuineIntel GNU/Linux

 While it's being slow, run the following to get some stats:

 vmstat 1 11     ;# Will run for 11 seconds
 iostat -dx 11   ;# Will run for 11 seconds, install sysstat if not found

  I'll run these tonight.

 My first guess is memory swapping, but could be I/O.  The above should help
 narrow it down.

  I thought that, but actually both top and free -m tell me the same thing:

             total       used       free     shared    buffers     cached
Mem:          7979       5076       2903          0          0       4144
-/+ buffers/cache:        931       7047
Swap:         3812          0       3811

  Swap isn't even touched...even when slow.

  But if you think vmstat and iostat can help, I'll run them no problem.

  Thanks,

Felipe Damasio