Re: [squid-users] uninterruptible proxy

2010-01-03 Thread Alex Braunegg
Hi,

Consider using VMware ESX/ESXi 4.0 and FT.

I have successfully used, tested and demonstrated VMware ESX/ESXi 4.0
and VMware FT performing web scanning and proxy services. In the
demonstration I simply pulled the power plug on, or "rebooted", one of
the ESX/ESXi servers. The time to switch to the secondary system was in
the millisecond range - many times, when simply pinging the devices, the
switchover was not detectable.

In each test all sessions remained active, including downloads of
data/files such as ISO images and streaming video.
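
A minimal sketch of the ping check mentioned above (assuming Linux
iputils ping; the address is a placeholder for your Squid VM):

  # -D prefixes each reply with a timestamp, so a failover gap shows
  # up as a jump between consecutive replies
  ping -D -i 0.2 <squid-vm-address>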

Each server running Squid was CentOS based, running the latest kernel,
with VMware Tools installed. At the time I was using Squid 3.0 (the
exact version I can't recall, but it should not matter).

Let me know if you need any further information.

Best Regards,

Alex

On Sat, Jan 2, 2010 at 5:22 PM, Muhammad Sharfuddin wrote:
> People in my organization do online bidding, and if browsing is
> interrupted/disconnected for any reason (e.g. hardware failure) and
> restored within 50 or 60 seconds, their session terminates/disconnects
> and they lose the bid.
>
> We need uninterruptible web/proxy services - a highly available proxy
> infrastructure.
>
> What are the possible solutions?
>
> Are the following options viable?
> 1. Clustering Squid (I think the session will break)
> 2. Clustering two virtual machines running Squid, so that if one host
> machine goes down/crashes, the other host machine continues running
> Squid (I think in this case the session won't break, because the other
> host machine will run the same Squid)
>
> Please suggest/recommend.
>
> Regards
> --ms


Re: [squid-users] RE: coredump files location on Solaris 8

2010-01-14 Thread Alex Braunegg
Hi,

I am no Solaris expert; however, what about changing Solaris's
system-wide crash/core location setting to per-process core locations?

http://www.c0t0d0s0.org/archives/4388-Less-known-Solaris-features-About-crashes-and-cores-Part-3-Controlling-the-behaviour-of-the-dump-facilities.html

http://docsun.cites.uiuc.edu/sun_docs/C/solaris_9/SUNWaadm/SYSADV2/p95.html
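
A minimal sketch of the idea (flag behaviour should be verified against
the Solaris 8 coreadm man page):

  # Show the current global / per-process core dump configuration
  coreadm
  # Enable per-process cores, written as "core" in each process's
  # current working directory, so Squid's coredump_dir can take effect
  coreadm -e process
  coreadm -i core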

Best Regards,

Alex

On Fri, Jan 15, 2010 at 4:31 PM,   wrote:
>
>> Hello all,
>>
>> Can somebody tell me how to control where Squid stores core dump
>> files? I have already tried using coredump_dir and/or starting Squid
>> from a dedicated directory, but neither seems to work on Solaris 8:
>> dump files are always put in /var/core.
>>
>
> Has nobody run into this problem in the past?
>
> many thanks
> Vincent


RE: [squid-users] Slow transfer speed over ADSL internet connection

2010-04-22 Thread Alex Braunegg
For your Cisco 837 DSL config, try some of the following:

interface Dialer0
 ip mtu 1492

If you are bridging the 4 ports on the back into a VLAN, also try the
following:

interface BVI1
 ip tcp adjust-mss 1452
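
For reference, the arithmetic behind those two values (assuming standard
PPPoE encapsulation, which adds an 8-byte header):

  1500 (Ethernet MTU) - 8 (PPPoE header)    = 1492 (ip mtu)
  1492 - 20 (IP header) - 20 (TCP header)   = 1452 (ip tcp adjust-mss)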

Whilst I have ADSL2+ here, I get nowhere near ADSL2+ speeds thanks to
the copper pair's distance from the exchange and its quality:

sh dsl interface | i Speed
Speed (kbps): 0 3634 0   819

Using speedtest.net via Chrome/W7 through Squid 3.1.1 on CentOS 5.4 x86_64
running on a VM (can provide source RPM if needed):

Download:   1.96Mb/s
Upload: 0.54Mb/s

Via Chrome/W7 without Squid:

Download:   3.12Mb/s
Upload: 0.68Mb/s

From another Linux system:

export http_proxy=http://x.x.x.x:3128
wget http://cache-203.39.198.135.files.bigpond.com/software/network/browsers/FirefoxPortable_3.6.3_English.paf.exe

--11:36:40--  http://cache-203.39.198.135.files.bigpond.com/software/network/browsers/FirefoxPortable_3.6.3_English.paf.exe
           => `FirefoxPortable_3.6.3_English.paf.exe'
Connecting to 192.168.0.5:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 9,616,696 (9.2M) [application/octet-stream]

100%[======================================>] 9,616,696    375.53K/s    ETA 00:00

11:37:07 (362.46 KB/s) - `FirefoxPortable_3.6.3_English.paf.exe' saved [9616696/9616696]

Squid Log:

1271986638.443  26802 X.X.X.X TCP_MISS/200 9617229 GET http://cache-203.39.198.135.files.bigpond.com/software/network/browsers/FirefoxPortable_3.6.3_English.paf.exe - DIRECT/203.39.198.135 application/octet-stream

Not bad given my connection - quite poor performance for what it costs
me per month, but that's another story.

Also check Cisco's site for ADSL firmware updates for your router to
improve DSLAM compatibility - it may help.

Hope all of the above helps.

Alex


-Original Message-
From: francis aubut [mailto:fugitif...@gmail.com] 
Sent: Thursday, 22 April 2010 8:38 AM
To: squid-users@squid-cache.org
Subject: Fwd: [squid-users] Slow transfer speed over ADSL internet
connection

-- Forwarded message --
From: francis aubut 
Date: 2010/4/21
Subject: Re: [squid-users] Slow transfer speed over ADSL internet connection
To: Amos Jeffries 
Cc: squid-users@squid-cache.org


What I can add is that when IE is not connected to the proxy, it runs at
2.5 Mbps, and when I connect to the proxy it goes down to 500 kbps.

At home the speed is the same, 10 Mbps, on both tests.

I'll check the DNS. Could the Cisco 837 router limit speed somehow?

Thanks,

Francis.

2010/4/20 Amos Jeffries:
> On Tue, 20 Apr 2010 11:49:05 -0400, francis aubut wrote:
>> Hi, I configured Squid, first on Ubuntu Server and then on CentOS 5;
>> the problem is the same. I get very slow speed on a network connected
>> via an ADSL internet connection, but when I bring the computer home it
>> works well (there I have a cable modem connection). What could be
>> wrong?
>>
>> Francis.
>
> Your experiments as described pretty conclusively confirm that the
> problem is one of:
>  a) a difference in network lag (it's conceivable that your ADSL is
> simply slower than Cable; I know mine is, by a whole order of magnitude
> or two), or
>
>  b) site-specific configuration somewhere in your setup, resulting in
> the box going a long way to get things, i.e. a DNS server from the
> cable connection still being used when on ADSL, etc.
>
> Amos
>
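
Regarding point (b), a quick check from the Squid box (a minimal sketch;
www.squid-cache.org stands in for any hostname):

  # Which resolvers is this box actually using?
  cat /etc/resolv.conf
  # dig reports "Query time" for each lookup - compare it on the ADSL
  # and cable connections
  dig www.squid-cache.org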



RE: [squid-users] squid mirror

2010-07-21 Thread Alex Braunegg
Hi,

There are a number of ways to tackle this problem; however, I have
solved it by using VMware and VMware Fault Tolerance.

VMware FT is a technology based on vLockstep that provides continuous
availability in the event of physical ESX server failure by creating a
shadow virtual machine which is kept in step with the primary machine.
What this means for Squid is that you run a single instance of Squid on
your ESX(i) cluster in FT-protected mode. Should the physical host fail,
the shadow copy of the Squid instance takes over with near-instantaneous
effect. No connections are interrupted and no traffic is lost, and the
end users are none the wiser, as all connections persist through the
outage.

The limitation of VMware FT, however, is that it is constrained to one
vCPU - but for Squid I don't think this is a massive issue. Solving the
problem of lockstep for two single-CPU virtual machines is much simpler
than for multi-CPU virtual machines. The underlying vLockstep technology
uses record/replay to record the non-deterministic input of one VM and
replay it on the second VM over a specifically configured logging
network. While this works well for a single CPU, in vSMP environments
shared memory access by the CPUs is not deterministic - hence recording
only the inputs from one virtual machine and replaying them on another
does not guarantee a deterministic replay that exactly replicates the
original environment. This makes FT for vSMP distinctly harder and a
very different problem to solve. It is unknown at this point when vSMP
support will be available for VMware FT - even with the release of
VMware ESX(i) 4.1 there is no change in this arena.

If you are using VMware within your environment, have a look at VMware
FT. I have videos demonstrating FT in operation while handling web
content, highlighting the persistence of connections during failover.

Best Regards,

Alex

-Original Message-
From: viswa [mailto:waytovi...@gmail.com] 
Sent: Wednesday, 21 July 2010 4:48 PM
To: squid-users@squid-cache.org
Subject: [squid-users] squid mirror

Hi all,

I am using two Squid servers in an HA cluster. One Squid server always
responds to clients; the other is a backup. If the first Squid goes
down, the other Squid server responds to clients. My problem is
persistent connections: is it possible to continue a client's persistent
connection after the first Squid fails? Is it possible to mirror Squid
connections to another Squid?

Thanks





[squid-users] squid 3 rpm build process breaks at libTrie

2007-11-15 Thread Alex Braunegg
Hi All,

I am attempting to create a Squid 3.0 RPM for Fedora (x86_64) using RC1
or the daily snapshot. Currently the rpmbuild process fails when
configuring lib/libTrie:

==

=== configuring in lib/libTrie
(/usr/src/redhat/BUILD/squid-3.0.RC1/lib/libTrie)
configure: running /bin/sh ./configure '--prefix=/usr/local/squid'
'--enable-snmp' '--with-pthreads' '--with-large-files' '--with-maxfd=16384'
'--enable-icap-client' 'CFLAGS=-fPIE -Os -g -pipe -fsigned-char'
'LDFLAGS=-pie' --cache-file=/dev/null --srcdir=.
checking for g++... g++
checking for C++ compiler default output file name... 
configure: error: C++ compiler cannot create executables
See `config.log' for more details.
configure: error: ./configure failed for lib/libTrie
error: Bad exit status from /var/tmp/rpm-tmp.89076 (%build)

==

If I use the same ./configure options without trying to build an RPM,
./configure works fine, as do 'make all' and 'make install'. I suspect
it has something to do with the build environment and paths being
created during the rpmbuild process, but I don't know how to resolve it
or where to begin.
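
One way to dig further (a sketch only; the path and flags are taken from
the rpmbuild output above): re-run the failing sub-configure by hand and
inspect config.log for the exact compiler invocation that fails. One
guess is that CFLAGS carries -fPIE while CXXFLAGS does not, which can
break a C++ test link when LDFLAGS contains -pie.

  cd /usr/src/redhat/BUILD/squid-3.0.RC1/lib/libTrie
  CFLAGS='-fPIE -Os -g -pipe -fsigned-char' LDFLAGS='-pie' \
      ./configure --prefix=/usr/local/squid
  # The failing test and its compiler command line are recorded here:
  grep -B2 -A10 'cannot create executables' config.log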

I can provide the config.log files for the general config process and the
libTrie config process if needed.

Best Regards,

Alex



Re: [squid-users] Problems to access specific site

2009-01-07 Thread Alex Braunegg
No issues accessing the site through Squid 3.0.STABLE10 on CentOS 5.2
or via IWSVA 3.1.
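
For comparison, the check is easy to reproduce from a shell (the proxy
address is a placeholder):

  # Fetch just the response headers through the proxy
  curl -x http://<your-proxy>:3128 -I http://www.dispo.co.at/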

On Thu, Jan 8, 2009 at 2:42 AM, David Walcher  wrote:
>
> Hello and Happy new Year to all!
>
> Following problem:
> If I want to access http://www.dispo.co.at/ via Squid proxy (version
> 2.6.5-6etch2) I get an HTTP 500 Internal Server Error. If I access this
> page without Squid it works fine.
>
> Here is a debug log.
> http://paste-it.net/public/j56a24e/
>
> Can someone please give me a hint, and/or try with their Squid config
> whether they can access this page?
>
> I would really appreciate it!
>
> Thank you!
>
> Regards,
> David
>
>


Re: [squid-users] Fw: Web browser behind squid proxy hangs on one site only

2009-01-19 Thread Alex Braunegg
Looking at my Firefox session, it appears Firefox hangs while
performing this: Read ajax.googleapis.com

Perhaps understanding what it is trying to access will lead to an answer
as to why Squid is causing issues with the site.
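
One way to isolate it (a sketch; the proxy address is a placeholder) is
to request that host directly through Squid and watch where it stalls:

  # -v shows the full request/response exchange; a hang here points at
  # the proxy <-> ajax.googleapis.com leg rather than page rendering
  curl -v -x http://<your-proxy>:3128 http://ajax.googleapis.com/ -o /dev/null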

On Tue, Jan 20, 2009 at 1:26 PM,   wrote:
> Greetings,
>
> Access to www.dairyaustralia.com.au hangs indefinitely when accessed by IE
> and Firefox behind a squid proxy server (V2.7.STABLE4).
> Access to the site works fine when the proxy is bypassed for this site.
>
> I did a Wireshark trace and checked the HTTP headers, but they don't
> reveal anything.
> Disabling the cache for that site only did not help.
>
> Does anyone have any ideas, and also a methodology for troubleshooting
> such issues?
>
> Many thanks in advance
> Tony
>


Re: [squid-users] Use Squid as browser hijack deterrent (so far not working)

2009-02-02 Thread Alex Braunegg
Have you run the System Information Collector (SIC) tool
(http://www.trendmicro.com/download/sic.asp) to gather information about
the malware/trojan and submitted it to Trend Micro for analysis?

On Tue, Feb 3, 2009 at 8:02 AM,   wrote:
> Hello Squid users all, I have a bad situation partially resolved: the
> past few days I have been blind-sided by a trojan-based browser
> hijacking. A script from Trend Micro has allowed me to navigate the net
> without being redirected to a porn site or similar. Notwithstanding, I
> can see from running Wireshark the culprit that Trend Micro has not yet
> found the signature for. I am running a Linux router/gateway, heavily
> firewalled (iptables), but after the attack I installed Squid. I
> created two system files with ACLs to match: bad_src_ip and
> bad_url_regex. On the Linux box, ps shows that Squid is running, but
> the logs show no activity at all, neither access nor error. Moreover, I
> can still ping and tracert to the URLs and IPs I think I am blocking.
> Do I need to be a master of cache proxies to run Squid? An excerpt of
> my squid.conf is included below in case anyone has any ideas. I looked
> at redirection (3128) solutions such as Shallalist and other
> blacklists, but I would rather just create my own ACLs that work.
> Thanks in advance, and please advise, David.
>
> ***
> ACL list
> ***
> #Recommended minimum configuration:
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl SSL_ports port 8443
> acl Safe_ports port 80 # http
> acl Safe_ports port 21 # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70 # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports_unreg port 1025-65535 # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl Safe_ports port 8080 # Tomcat 8080
> acl Safe_ports port 8082 # Tomcat proxy redirect
> acl Safe_ports port 8009 # Tomcat ajp port
> acl CONNECT method CONNECT
> acl webmin port 1
> acl usermin port 2
> acl LAN myip 192.168.1.1-192.168.1.254
> acl Network_DNS srcdomain www.demon.net www.menandmice.com www.network-tools.com
> acl davidbrownhosts dstdomain www.davidwbrown.name www.deanbrown.name www.karlbrown.name
> acl tomcat urlpath_regex pebble
> acl our_networks src 192.168.1.0/24
> ***
> Proxy restriction list
> ***
> acl bad_src_ip src "/usr/local/etc/squid/bad_src_ip_list"
> acl bad_url_regex url_regex -i "/usr/local/etc/squid/bad_url_regex_list"
> #acl iana_named_ports port "/usr/local/etc/squid/iana_named_ports_list"
> http_access deny manager
> http_access deny !Safe_ports_unreg
> http_access deny CONNECT !SSL_ports
> http_access deny to_localhost
> http_access allow our_networks
>
> # And finally deny all other access to this proxy
> http_access allow localhost
> http_access deny bad_url_regex
> http_access deny bad_src_ip
> http_access deny all
>
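
Worth noting for the excerpt above: Squid evaluates http_access rules
top-down and the first match wins, so the "allow our_networks" line is
hit before the bad_url_regex / bad_src_ip denies ever apply to LAN
clients. A minimal reordering sketch (ACL names as defined above):

  # Deny the blacklists before any broad allow, so LAN traffic is
  # actually checked against them (first matching rule wins)
  http_access deny bad_url_regex
  http_access deny bad_src_ip
  http_access allow our_networks
  http_access allow localhost
  http_access deny all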