Re: [squid-users] Recommended Multi-CPU Configuration

2015-06-17 Thread Michael Pelletier
Which one would be good for capacity/load? I have a very, very large
environment: 220,000 users on an 8 Gb/s link to the Internet. I am running a
load balancer, ipvsadm (Direct Routing), with 20 proxies behind it. I am
interested in handling load.
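For readers unfamiliar with the setup described: a Direct Routing virtual
service of this kind is typically built along these lines (a sketch; the VIP,
real-server addresses, and scheduler are assumptions, not from this thread):

  # virtual service on the VIP, weighted least-connection scheduling
  ipvsadm -A -t 198.51.100.1:3128 -s wlc
  # two of the twenty real proxies, attached in Direct Routing (-g) mode
  ipvsadm -a -t 198.51.100.1:3128 -r 10.0.0.11:3128 -g
  ipvsadm -a -t 198.51.100.1:3128 -r 10.0.0.12:3128 -g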

Michael

On Wed, Jun 17, 2015 at 9:31 PM, Amos Jeffries  wrote:

> On 18/06/2015 8:53 a.m., Michael Pelletier wrote:
> > Hello,
> >
> > I am looking to add some more power to Squid. I have seen two different
> > types of configurations to do this:
> >
> > 1. Adding a workers directive equal to the number of CPUs, then adding a
> > special wrapper around the AUFS disk cache so that each worker can
> > only access its own cache. Yes, I know rock is multi-CPU capable.
> >
> > 2. Using the split configuration from the Squid web page. This involves a
> > front end and multiple backend Squid instances on the same server.
> > http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem
> >
> > My question is, which one is recommended? What are the pros and cons of
> > each?
> >
>
> Both and neither. #1 improves bandwidth savings. #2 improves raw speed.
> Pick your poison.
>
> These are example configurations only. For real high performance, multiple
> machines in a mix of the two setups are even better.
>
> Amos
>




Re: [squid-users] problem with some ssl services

2015-06-17 Thread Amos Jeffries
On 17/06/2015 6:52 p.m., Jason Haar wrote:
> On 15/06/15 11:58, Amos Jeffries wrote:
>> Ensure that you are using the very latest Squid version to avoid
>> problems with unsupported TLS mechanisms. The latest Squid will also
>> automatically splice if it's determined that the TLS connection cannot be
>> bumped.
> Is that supposed to be in 3.5.5? I just noticed a problem with bumping
> that came down to the web server requiring client cert validation, and
> squid-3.5.5 failed to splice - so it failed going through bump (as you'd
> expect).
> 
> I guess I'm asking whether this new "SSL determination" includes detecting
> client certs, because that would be a good one to detect if possible?

It would seem so. AFAIK we are only detecting resumed sessions and
incompatible cipher sets at present. You may want to contact Christos
about the client certs.

FYI: the "ssl_bump peek all" config I have been advising may not always
be the best. It seems there is some use for the "stare" option during
stage-2 bumping instead of peek. But I'm not sure yet myself on when it's
best to do that over peek. You might want to try it.
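For anyone wanting to experiment, a minimal sketch of staring at step 2 in
squid.conf (the ACL names here are assumptions):

  acl step1 at_step SslBump1
  acl step2 at_step SslBump2
  ssl_bump peek step1    # read the client SNI without committing
  ssl_bump stare step2   # inspect the server hello while keeping bumping possible
  ssl_bump bump all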

Amos


Re: [squid-users] split horizon dns proxy

2015-06-17 Thread Amos Jeffries
On 18/06/2015 9:23 a.m., Jeff Scarborough wrote:
> I am currently using Squid 3.1 as packaged in RHEL 6.  I have this
> line in my config:
>   http_port 80 intercept
> 
> I have a split-horizon DNS.  This means if you look up any address for my
> domain from the Internet you get the address of the Squid proxy server.
> However, if you look up the same name from my proxy server you get an
> internal RFC 1918 IP address for the specific name.
> 
> Using Squid 3.1 this works great.  A user tries to connect to a URL and by
> DNS resolution is sent to the proxy server; the proxy server then does a
> DNS lookup of the name in the URL, gets the actual address, and sends the
> request to the correct place.

Sigh. Don't do that.

> 
> When I try to upgrade to anything beyond 3.2 this breaks.  I am finding
> references that, as of Squid 3.2, intercept requires NAT. Reference from
> an email post in 2013:
> 
> In Squid since 3.2, if
> the original TCP details are not found in the NAT records, some
> restrictions are placed on what happens with the request and response.
> 
> 
> My question is, is there any way back to the old behavior?

No.

>  What are the restrictions mentioned?

CVE-2009-0801
 A remote attacker is able to inject arbitrary content for arbitrary URLs
into the proxy cache. The corrupted proxy then delivers that content to
any client fetching those URLs, with full assurance that it is the
original content.

The restrictions placed on Squid are:

a) NAT errors are no longer silently ignored.

b) The IP address of the server being contacted by the client is used as
the origin instead of a DNS lookup result.
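For completeness: that NAT record lookup only works when the NAT/REDIRECT
happens on the Squid box itself. A typical on-box rule looks like this
sketch (the interface and ports are assumptions):

  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
           -j REDIRECT --to-ports 3128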

> 
> You may ask why I am not using accel mode, as this is quite obviously a
> reverse proxy.

Indeed.

>  The reason is I could not get accel to work with the RTSP
> server we are using.  I suspect it is because the Content-Length returned
> by the RTSP server is invalid: it is unknown, since the content is
> streaming video and the length is not known until a user stops the playback.

RTSP != HTTP. Squid-3.1 and older corrupt the RTSP traffic
messages as they travel through the proxy by injecting mandatory HTTP
header values. The RTSP software may be ignoring that or coping with it
somehow.

Squid-3.2 and later will take such unknown-length content and
Transfer-Encode it, which will screw with RTSP in a different way.

RFC 2326 section 4.4: "Note that RTSP does not (at present) support the
HTTP/1.1 "chunked" transfer coding (see [H3.6]) and requires the presence
of the Content-Length header field."


We used to see this with the ICY protocol (which abuses port 80), where
strange popping sounds would be injected into the radio stream by a proxy.
That was actually the chunked encoding headers every few KB, counting and
checksumming the payload data.
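For illustration, chunked transfer coding interleaves hexadecimal chunk-size
lines and CRLF pairs with the payload; decoded as raw audio, those framing
bytes are the periodic "pops":

  1000\r\n
  <4096 bytes of payload>\r\n
  1000\r\n
  <4096 bytes of payload>\r\n
  0\r\n
  \r\n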

> 
> When I configure the proxy using accel I can get normal text pages back as
> expected, but the video fails with TCP_MISS_ABORTED. This happens on all
> versions of Squid.

If intercept works but accel doesn't in 3.1 and older, I suspect that has
more to do with Squid listening on port 80. RTSP uses port 554, and
reverse proxies preserve the port information by default.

> 
> The reason I am trying to upgrade Squid is to be able to do all of this
> using HTTPS.
> 

Now that's just jumping from the hotplate into the fire. The security
protections added in 3.2 for plain-text messages are a tame subset of
the restrictions on TLS connections.


There is no hope but to convert this to an actual reverse proxy or an
actual intercept proxy, and either way to stop RTSP going through it.
Squid natively supports HTTP/1.x, HTTPS, ICY/SHOUTcast, and FTP; all
other protocols must be transferred via HTTP CONNECT tunnels.
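For reference, a minimal sketch of the reverse-proxy (accel) form of that
conversion; the site name and origin address here are assumptions, not taken
from this thread:

  # squid.conf: reverse proxy for one published site
  http_port 80 accel defaultsite=www.example.com
  cache_peer 192.0.2.10 parent 80 0 no-query originserver name=origin
  acl our_sites dstdomain www.example.com
  http_access allow our_sites
  cache_peer_access origin allow our_sites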


PS. If you want to sponsor RTSP support being added to Squid, I/we are
open to it.

Amos



Re: [squid-users] Recommended Multi-CPU Configuration

2015-06-17 Thread Amos Jeffries
On 18/06/2015 8:53 a.m., Michael Pelletier wrote:
> Hello,
> 
> I am looking to add some more power to Squid. I have seen two different
> types of configurations to do this:
> 
> 1. Adding a workers directive equal to the number of CPUs, then adding a
> special wrapper around the AUFS disk cache so that each worker can
> only access its own cache. Yes, I know rock is multi-CPU capable.
> 
> 2. Using the split configuration from the Squid web page. This involves a
> front end and multiple backend Squid instances on the same server.
> http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem
> 
> My question is, which one is recommended? What are the pros and cons of
> each?
> 

Both and neither. #1 improves bandwidth savings. #2 improves raw speed.
Pick your poison.

These are example configurations only. For real high performance, multiple
machines in a mix of the two setups are even better.
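For concreteness, a minimal sketch of option #1 (SMP workers plus a shared
rock cache); the worker count, path, and sizes are assumptions:

  workers 4
  # a rock cache_dir is shared safely between all workers
  cache_dir rock /var/spool/squid/rock 4096
  # per-worker AUFS caches instead rely on the ${process_number} macro so
  # each worker only touches its own directory, e.g.:
  # cache_dir aufs /var/spool/squid/aufs-${process_number} 4096 16 256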

Amos



Re: [squid-users] squid 3.1 with https traffic and delay pools is flooding network with hundreds of thousands 65-70 bytes packets (and killing the routers, anyway)

2015-06-17 Thread Amos Jeffries
On 17/06/2015 10:11 p.m., Horváth Szabolcs wrote:
> Hello!
> 
> We're having serious problems with a squid proxy server. 
> 
> The good news is the problem can be reproduced at any time in our production 
> squid system.
> 
> Environment:
> - CentOS release 6.5 (Final) with Linux kernel 2.6.32-431.29.2.el6.x86_64
> - squid-3.1.10-22.el6_5.x86_64 (a bit old, CentOS ships this version)
> 
> Problem description:
> - if we have a few Mbytes/sec of https traffic AND
> - delay_classes are in place AND
> - delay pools are full (I mean the available bandwidth for the customer is
> used)
> 
> -> then squid is trickling https traffic down to the clients in 65-70 byte
> packets.
> 
> Our WAN routers are not designed to handle thousands of 65-70 byte packets
> per second, and therefore we have some network stability issues.
> 
> I tracked down the following:
> - if delay_pools are commented out (clients can go at full speed as they
> like) -> the problem disappears; https traffic flows with ~1500 byte packets
> - if we use only http traffic, there is no problem: http traffic flows with
> ~1500 byte packets even if the delay pools are full
> 
> Our test URL is www.opengroup.org/infosrv/DCE/dce122.tar.gz, which is
> available over both the http and https protocols.
> 
> Resources can be found at http://support.iqsys.hu/logs/
> 
> 1. squid.conf -> squid configuration file
> 2. http-delaypool.pcap: 
>   - wget -c http://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
>   - delay pools are active
>   - http flows with 1500 byte packets
> 3. http-nodelaypool.pcap: 
>   - wget -c http://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
>   - delay pools are INACTIVE
>   - http flows with 1500 byte packets
> 4. https-delaypool.pcap:
>   - wget -c https://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
>   - delay pools are active
>   - http flows with 69 byte packets -> this is extremely bad
> 5. https-nodelaypool.pcap:
>   - wget -c https://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
>   - delay pools are INACTIVE
>   - http flows with 1500 byte packets
> 
> My question is: is it a known bug?

Sounds like http://bugs.squid-cache.org/show_bug.cgi?id=2907,
which was fixed in Squid-3.5.3.

See comment #16 in the bug report for a 3.1 workaround patch. Though if
your production server has high performance requirements, the sleep(1)
workaround is not ideal.

Amos


[squid-users] split horizon dns proxy

2015-06-17 Thread Jeff Scarborough
I am currently using Squid 3.1 as packaged in RHEL 6.  I have this
line in my config:
  http_port 80 intercept

I have a split-horizon DNS.  This means if you look up any address for my
domain from the Internet you get the address of the Squid proxy server.
However, if you look up the same name from my proxy server you get an
internal RFC 1918 IP address for the specific name.

Using Squid 3.1 this works great.  A user tries to connect to a URL and by
DNS resolution is sent to the proxy server; the proxy server then does a
DNS lookup of the name in the URL, gets the actual address, and sends the
request to the correct place.

When I try to upgrade to anything beyond 3.2 this breaks.  I am finding
references that, as of Squid 3.2, intercept requires NAT. Reference from
an email post in 2013:

In Squid since 3.2, if
the original TCP details are not found in the NAT records, some
restrictions are placed on what happens with the request and response.


My question is, is there any way back to the old behavior?  What are the
restrictions mentioned?

You may ask why I am not using accel mode, as this is quite obviously a
reverse proxy.  The reason is I could not get accel to work with the RTSP
server we are using.  I suspect it is because the Content-Length returned by
the RTSP server is invalid: it is unknown, since the content is streaming
video and the length is not known until a user stops the playback.

When I configure the proxy using accel I can get normal text pages back as
expected, but the video fails with TCP_MISS_ABORTED. This happens on all
versions of Squid.

The reason I am trying to upgrade Squid is to be able to do all of this
using HTTPS.

Jeff Scarborough


[squid-users] Recommended Multi-CPU Configuration

2015-06-17 Thread Michael Pelletier
Hello,

I am looking to add some more power to Squid. I have seen two different
types of configurations to do this:

1. Adding a workers directive equal to the number of CPUs, then adding a
special wrapper around the AUFS disk cache so that each worker can
only access its own cache. Yes, I know rock is multi-CPU capable.

2. Using the split configuration from the Squid web page. This involves a
front end and multiple backend Squid instances on the same server.
http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem

My question is, which one is recommended? What are the pros and cons of
each?

Thanks in advance,
Michael




Re: [squid-users] bypass proxy

2015-06-17 Thread brendan kearney
Look into the pacparser project on GitHub.  It allows you to evaluate a PAC
file and test its logic.
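For example, pacparser ships a pactester utility that can exercise the file
quoted below (a usage sketch; the PAC file name is an assumption):

  pactester -p wpad.dat -u http://www.rediff.com/
  pactester -p wpad.dat -u https://www.google.com/ -h www.google.com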
[snip]


Re: [squid-users] squid 3.1 with https traffic and delay pools is flooding network with hundreds of thousands 65-70 bytes packets (and killing the routers, anyway)

2015-06-17 Thread Horváth Szabolcs
Hello again!

Sorry for the typos, case #4 and case #5 are https tests, not http:

4. https-delaypool.pcap:
- wget -c https://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
- delay pools are active
- HTTPS flows with 69 byte packets -> this is extremely bad
5. https-nodelaypool.pcap:
- wget -c https://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
- delay pools are INACTIVE
- HTTPS flows with 1500 byte packets

Szabolcs

-Original Message-
From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Horváth Szabolcs
Sent: Wednesday, June 17, 2015 12:11 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] squid 3.1 with https traffic and delay pools is flooding 
network with hundreds of thousands 65-70 bytes packets (and killing the 
routers, anyway)

[snip]


[squid-users] squid 3.1 with https traffic and delay pools is flooding network with hundreds of thousands 65-70 bytes packets (and killing the routers, anyway)

2015-06-17 Thread Horváth Szabolcs
Hello!

We're having serious problems with a squid proxy server. 

The good news is the problem can be reproduced at any time in our production 
squid system.

Environment:
- CentOS release 6.5 (Final) with Linux kernel 2.6.32-431.29.2.el6.x86_64
- squid-3.1.10-22.el6_5.x86_64 (a bit old, CentOS ships this version)

Problem description:
- if we have a few Mbytes/sec of https traffic AND
- delay_classes are in place AND
- delay pools are full (I mean the available bandwidth for the customer is
used)

-> then squid is trickling https traffic down to the clients in 65-70 byte
packets.

Our WAN routers are not designed to handle thousands of 65-70 byte packets per
second, and therefore we have some network stability issues.

I tracked down the following:
- if delay_pools are commented out (clients can go at full speed as they
like) -> the problem disappears; https traffic flows with ~1500 byte packets
- if we use only http traffic, there is no problem: http traffic flows with
~1500 byte packets even if the delay pools are full
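For readers following along, a minimal class-2 delay-pool setup of the kind
described looks like this sketch (the real squid.conf is in the linked
resources; the numbers here are assumptions):

  delay_pools 1
  delay_class 1 2
  # no aggregate cap; roughly 64 KB/s sustained per client IP
  delay_parameters 1 -1/-1 64000/64000
  delay_access 1 allow all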

Our test URL is www.opengroup.org/infosrv/DCE/dce122.tar.gz, which is available
over both the http and https protocols.

Resources can be found at http://support.iqsys.hu/logs/

1. squid.conf -> squid configuration file
2. http-delaypool.pcap: 
- wget -c http://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
- delay pools are active
- http flows with 1500 byte packets
3. http-nodelaypool.pcap: 
- wget -c http://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
- delay pools are INACTIVE
- http flows with 1500 byte packets
4. https-delaypool.pcap:
- wget -c https://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
- delay pools are active
- http flows with 69 byte packets -> this is extremely bad
5. https-nodelaypool.pcap:
- wget -c https://www.opengroup.org/infosrv/DCE/dce122.tar.gz, 
- delay pools are INACTIVE
- http flows with 1500 byte packets

My question is: is it a known bug?

If yes, which version(s) contain the fix? (I read through the changelog several
times, but I can't find the exact bug.)
- if the 3.1 branch fixes this, the upgrade is easy
- upgrading major versions (3.4, 3.5) is not so trivial due to the
complex environment (to tell you the truth, installing a new squid server and
migrating to it would be much easier than an in-place upgrade)

If not, how can I track down the issue?
- as far as I understand squid configuration, it's not too complex
- although ICAP is enabled (squidclamav is used), it's not the root of
the problem -> when ICAP is commented out, the problem remains

Any ideas are appreciated. 

Thanks for reading this.

Best regards,
  Szabolcs Horvath



[squid-users] bypass proxy

2015-06-17 Thread yashvinder hooda
Hi All,

I have 2 issues

First one: How can I bypass the proxy for an IP on the LAN?


Second one:
I am running Squid on OpenWrt and I want to allow some websites to bypass
the proxy and go direct.
For that I am using WPAD with a PAC file, but the problem is that for some
websites it works and for some it doesn't.

Here is my PAC file



function FindProxyForURL(url, host)
{
    // The first test decides whether the URI should by-pass the proxy.
    // Proxy by-pass list
    if (
        // ignore RFC 1918 internal addresses
        isInNet(host, "10.0.0.0", "255.0.0.0") ||
        isInNet(host, "172.16.0.0", "255.240.0.0") ||
        isInNet(host, "192.168.0.0", "255.255.0.0") ||

        // if the URL is a plain hostname like http://server, by-pass
        isPlainHostName(host) ||

        // localhost!!
        localHostOrDomainIs(host, "127.0.0.1") ||

        // by-pass internal URLs
        dnsDomainIs(host, ".flipkart.com") ||
        dnsDomainIs(host, ".apple.com") ||
        dnsDomainIs(host, ".linuxbite.com") ||
        dnsDomainIs(host, ".rediff.com") ||

        // by-pass FTP
        //shExpMatch(url, "ftp:*")
        url.substring(0, 4) == "ftp:"
    ) {
        // if true, tell the browser to go direct
        return "DIRECT";
    }

    // not on the by-pass list, so proxy the request
    return "PROXY 192.168.1.1:3128";
    //return "DIRECT";
}



To be precise, it works for apple.com but doesn't work for the rest of the
websites.
Please enlighten me.

-- 
Regards,
Yashvinder


Re: [squid-users] Squid 3.5.5 fails to build for Solaris

2015-06-17 Thread Yuri Voinov

I use these configuration parameters to build a 64-bit 3.5.x Squid on Solaris:

'--prefix=/usr/local/squid' '--enable-translation' 
'--enable-external-acl-helpers=none' '--enable-ecap' 
'--enable-ipf-transparent' '--enable-storeio=diskd' 
'--enable-removal-policies=lru,heap' '--disable-wccp' 
'--enable-http-violations' '--enable-follow-x-forwarded-for' 
'--enable-arp-acl' '--enable-htcp' '--enable-cache-digests' '--with-dl' 
'--enable-auth-negotiate=none' '--disable-auth-digest' 
'--disable-auth-ntlm' '--disable-url-rewrite-helpers' 
'--enable-storeid-rewrite-helpers=file' 
'--enable-log-daemon-helpers=file' '--enable-ssl-crtd' 
'--with-openssl=/opt/csw' '--enable-zph-qos' '--disable-snmp' 
'--with-build-environment=POSIX_V6_LP64_OFF64' 'CFLAGS=-O3 -m64 
-mtune=core2 -pipe -Wno-write-strings' 'CXXFLAGS=-O3 -m64 -mtune=core2 
-pipe -Wno-write-strings' 'LIBOPENSSL_CFLAGS=-I/opt/csw/include/openssl' 
'CPPFLAGS=-I/opt/csw/include' 'PKG_CONFIG_PATH=/usr/local/lib/pkgconfig' 
--enable-build-info="Intercept/WCCPv2/OpenSSL/CRTD/DISKD/ECAP/64/GCC 
Production"


This is good enough to build working 64-bit executables.

Note: I use the dual 32/64-bit libraries from the OpenCSW repository and
specified the crle library paths:


root @ cthulhu / # crle

Configuration file [version 4]: /var/ld/ld.config
  Platform: 32-bit LSB 80386
  Default Library Path (ELF): /lib:/usr/lib:/usr/local/lib:/opt/csw/lib:/usr/sfw/lib
  Trusted Directories (ELF):  /lib/secure:/usr/lib/secure  (system default)


Command line:
  crle -c /var/ld/ld.config -l /lib:/usr/lib:/usr/local/lib:/opt/csw/lib:/usr/sfw/lib


root @ cthulhu / # crle -64

Configuration file [version 4]: /var/ld/64/ld.config
  Platform: 64-bit LSB AMD64
  Default Library Path (ELF): /lib/64:/usr/lib/64:/opt/csw/lib/64:/usr/sfw/lib/64
  Trusted Directories (ELF):  /lib/secure/64:/usr/lib/secure/64  (system default)


Command line:
  crle -64 -c /var/ld/64/ld.config -l /lib/64:/usr/lib/64:/opt/csw/lib/64:/usr/sfw/lib/64


Hope this helps.
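
One quick way to confirm a mixed 32/64-bit link like the one in the log below
(a sketch; the archive path and member name are taken from that log):

  # extract the offending member and check its ELF class
  ar x ../../../lib/.libs/libmiscencoding.a md5.o
  file md5.o
  # a -m64 build should report something like: ELF 64-bit LSB relocatable AMD64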

17.06.15 4:47, Stacy Yeh wrote:

Hi All,

I am attempting to update from Squid 3.1.23 to the latest version,
3.5.5, for Solaris and am running into the following build error. From
my understanding (correct me if I'm wrong), the issue is that libtool
is linking against the 32-bit version of the libraries, although
64-bit versions also exist.

[snip]
libtool: link: /usr/gcc/4.8/bin/g++ -m64 -Wall -Wpointer-arith 
-Wwrite-strings -Wcomments -Wshadow -Werror -pipe -D_REENTRANT 
-pthreads -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -O2 
-march=native -std=c++11 -m64 -o basic_ncsa_auth basic_ncsa_auth.o 
crypt_md5.o -m64  ../../../lib/.libs/libmisccontainers.a 
../../../lib/.libs/libmiscencoding.a 
../../../compat/.libs/libcompat-squid.a -lcrypt -lmd5 -lm -lresolv 
-pthreads
ld: warning: file ../../../lib/.libs/libmiscencoding.a(md5.o): wrong 
ELF class: ELFCLASS32

Undefined   first referenced
 symbol in file
rfc1738_unescapebasic_ncsa_auth.o
SquidMD5Update  crypt_md5.o
SquidMD5Initcrypt_md5.o
SquidMD5Final   crypt_md5.o
ld: fatal: symbol referencing errors
collect2: error: ld returned 1 exit status
make[4]: *** [basic_ncsa_auth] Error 1
make[4]: Leaving directory 
`/builds/skyeh/squid-19581055-s12/components/squid/build/amd64/helpers/basic_auth/NCSA'

make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory 
`/builds/skyeh/squid-19581055-s12/components/squid/build/amd64/helpers/basic_auth'

make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory 
`/builds/skyeh/squid-19581055-s12/components/squid/build/amd64/helpers'

make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory 
`/builds/skyeh/squid-19581055-s12/components/squid/build/amd64'
gmake: *** 
[/builds/skyeh/squid-19581055-s12/components/squid/build/amd64/.built] 
Error 2



Any suggestions on how to fix this? For what it's worth, here are the 
configure options I am using:


CONFIGURE_OPTIONS += --enable-arp-acl
CONFIGURE_OPTIONS += --enable-auth-basic='DB,NCSA,LDAP,PAM,getpwnam,MSNT-multi-domain,POP3,SMB,SASL'
CONFIGURE_OPTIONS += --enable-cache-digests
CONFIGURE_OPTIONS += --enable-carp
CONFIGURE_OPTIONS += --enable-coss-aio-ops
CONFIGURE_OPTIONS += --enable-delay-pools
CONFIGURE_OPTIONS += --enable-auth-digest='LDAP'
CONFIGURE_OPTIONS += --enable-external-acl-helpers='file_userip,unix_group,LDAP_group,wbinfo_group'
CONFIGURE_OPTIONS += --enable-follow-x-forwarded-for
CONFIGURE_OPTIONS += --enable-forward-log
CONFIGURE_OPTIONS += --enable-forw-via-db
CONFIGURE_OPTIONS += --enable-htcp
CONFIGURE_OPTIONS += --enable-icmp
CONFIGURE_OPTIONS += --enable-large-cache-files
CONFIGURE_OPTIONS += --enable-multicast-miss
CONFIGURE_OPTIONS += --enable-auth-negotiate='kerberos'
CONFIGURE_OPTIONS += --enable-auth-ntlm='smb_lm,fake'
CONFIGURE_OPTIONS += --enable-ntlm-fail-open
CONFIGURE_OPTIONS += --enable-removal-policies='heap,l