Re: squid-icap crash...

2005-04-07 Thread Mateus Gröess
I will provide a better bug report. I believe I found the problem with
generation of coredumps.


On Apr 6, 2005 6:07 PM, Tsantilas Christos
[EMAIL PROTECTED] wrote:
 This mail was posted to the c-icap mailing list.
 I believe it is related to ICAP's request modification operation.
 It does not contain enough info, but sometimes it is useful
 just to know about the problems.
 -
 Christos
 
 Mateus Gröess wrote:
 
 Hi, Christos
 
Today I was looking in the cache.log of Squid ICAP and found
 another message that was followed by a Squid restart. ...
 Unfortunately there were no messages between the error and the normal
 Squid startup.
 
 2005/04/04 14:07:00| storeLateRelease: released 0 objects
 2005/04/04 17:12:31| assertion failed: client_side.c:3268: 
 cbdataValid(conn)
 2005/04/04 17:12:42| Starting Squid Cache version 2.5.STABLE9-CVS for
 i386-slackware-linux-gnu...
 
 
 
 



Re: squid-icap crash...

2005-04-07 Thread Tsantilas Christos
Mateus
Send me the backtrace.
On Apr 5, 2005 10:05 AM, Mateus Gröess wrote:
 

2005/04/04 14:07:00| storeLateRelease: released 0 objects
2005/04/04 17:12:31| assertion failed: client_side.c:3268: cbdataValid(conn)
2005/04/04 17:12:42| Starting Squid Cache version 2.5.STABLE9-CVS for
i386-slackware-linux-gnu...
   


Here is the stack trace, which I think shows the same problem as the assertion above:
2005/04/07 11:22:36| storeLateRelease: released 0 objects
2005/04/07 11:23:38| assertion failed: client_side.c:3268: cbdataValid(conn)
Program received signal SIGABRT, Aborted.
0x402299f1 in __kill () from /lib/libc.so.6
(gdb) backtrace
#0  0x402299f1 in __kill () from /lib/libc.so.6
#1  0x402296d4 in raise (sig=6) at ../sysdeps/posix/raise.c:27
#2  0x4022ae31 in abort () at ../sysdeps/generic/abort.c:88
#3  0x80736c7 in xassert (msg=0x80e15ed "cbdataValid(conn)",
file=0x80defdc "client_side.c", line=3268)
   at debug.c:270
#4  0x806cfbe in clientReadBody (request=0x8502f38, buf=0x84c56d8 "", size=8192,
   callback=0x809cad4 <icapReqModBodyHandler>, cbdata=0x85390d0) at
client_side.c:3268
#5  0x809cacc in icapReqModSendBodyChunk (fd=26, bufnotused=0x0,
size=627, errflag=0, data=0x85390d0)
   at icap_reqmod.c:758
#6  0x806ee9a in CommWriteStateCallbackAndFree (fd=26, code=0) at comm.c:99
#7  0x8071380 in commHandleWrite (fd=26, data=0x84f5830) at comm.c:929
#8  0x8072656 in comm_poll (msec=268) at comm_select.c:459
#9  0x80a7aae in main (argc=2, argv=0xbb34) at main.c:748
#10 0x4021a2eb in __libc_start_main (main=0x80a7664 <main>, argc=2,
ubp_av=0xbb34,
   init=0x804a6a4 <_init>, fini=0x80d7a1c <_fini>,
rtld_fini=0x4000c130 <_dl_fini>, stack_end=0xbb2c)
   at ../sysdeps/generic/libc-start.c:129
(gdb) quit
The program is running.  Exit anyway? (y or n) y
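For readers not familiar with Squid internals: the failed assertion checks Squid's cbdata mechanism, which marks callback data invalid when its owner frees it so that a still-pending callback does not touch freed memory. A minimal sketch of that general pattern (illustrative names only; Squid's real implementation is in C, in cbdata.c):

```python
# Sketch of the callback-data validity pattern behind cbdataValid().
# Hypothetical class and function names, for illustration only.

class CallbackData:
    """Wrapper that remembers whether its owner has freed it."""
    def __init__(self, payload):
        self.payload = payload
        self.valid = True   # cleared by free(); locks may still be held
        self.locks = 0

    def lock(self):
        self.locks += 1

    def unlock(self):
        self.locks -= 1

    def free(self):
        # The owner is done; pending callbacks must no longer fire.
        self.valid = False

def fire_callback(cb, data):
    # Equivalent of Squid's assert(cbdataValid(conn)) before using conn.
    assert data.valid, "callback data was freed before the callback ran"
    cb(data.payload)

conn = CallbackData({"fd": 26})
conn.lock()    # a pending I/O callback holds a reference
conn.free()    # connection torn down early (e.g. client abort)
try:
    fire_callback(lambda c: None, conn)
except AssertionError as e:
    print("assertion failed:", e)
```

In the backtrace above, the ICAP body handler appears to run after the client connection's cbdata was invalidated, which is exactly the situation this check is there to catch.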
 




Re: Digest authentication with LDAP backend

2005-04-07 Thread Guilherme Buonfiglio de Castro Monteiro
Hi Henrik,
Ruy Oliveira helped me debug the program. The return value was OK, but there 
were some Perl errors in my code. After some cleanup (by Ruy) it finally 
worked, and it really doesn't need the encode_base64.
As I am not an experienced Perl programmer, and running it from the command 
line gave me apparently the right results, it was driving me crazy... :-)
I will now make some improvements to it and release the first version.
By the way, returning only the ha1 is fine.

Best Regards,
Guilherme Monteiro
Henrik Nordstrom escreveu:

On Thu, 17 Mar 2005, Guilherme Buonfiglio de Castro Monteiro wrote:
Hi,
I'm developing a perl digest authentication program that uses LDAP as 
backend.
It's near completion but I'm needing help with HHA1 return to Squid.
First I will explain what I'm doing:
1) I'm creating a new Ldap ObjectClass that has uid/digestInfo/ha1
2) digestInfo is join(":",$username,$realm)
  ha1 is md5_hex( join(":",$username,$realm,$password));
3) So for username:realm:password I have
  digestInfo=username:realm
  ha1=66999343281b2624585fd58cc9d36dfc
4) My program should receive a line containing username:realm and 
reply with the appropriate H(A1) value base64 encoded or ERR if the 
user (or his H(A1) hash) does not exist (this was extracted from 
squid.conf for auth_param digest).
Actually it's receiving it. :-)
5) Then I issue an ldapsearch ("digestInfo=".$digestInfo) and read the 
attribute ha1
6) Then I return $hha1 = encode_base64($ha1); I know that I'm 
missing the point at this moment!!!

You need to print the result.
I know ha1 is correct. I've already compared it with results from the apache 
htdigest program. But what Squid wants is not encode_base64($ha1).

Squid wants the exact same format as Apache htdigest creates in the hash 
column.
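In other words, the helper should print the 32-character hex MD5 digest itself, exactly as htdigest stores it, not a base64 encoding of it. A small sketch of the computation (in Python here rather than Perl, purely for illustration):

```python
import hashlib

def h_a1(username, realm, password):
    """H(A1) as htdigest stores it: hex MD5 of 'user:realm:password'."""
    data = ":".join([username, realm, password]).encode()
    return hashlib.md5(data).hexdigest()

# Helper protocol: read a "username:realm" line, look up the user,
# then answer with the hex hash (what htdigest would store), or ERR
# if the user is unknown.
ha1 = h_a1("username", "realm", "password")
print(ha1)   # 32 lowercase hex characters; this is what Squid expects
```

This is the same value Perl's md5_hex(join(":",$username,$realm,$password)) produces, so the LDAP-stored ha1 attribute can be printed to Squid as-is.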

The digest_pwauth helper is a good reference for how your helper 
should operate. By using it as a reference you can easily verify that 
your helper is working correctly, as both should return the exact same 
output given the same user data (login, realm, password input where 
appropriate).

Regards
Henrik



originserver plus carp configuration?

2005-04-07 Thread Joe Cooper
Hey Henrik and all,
I've got a reverse proxy running Squid with the following cache_peer 
configuration:

cache_peer 192.168.1.47 parent 80 7 originserver no-query carp
cache_peer 192.168.1.48 parent 80 7 originserver no-query carp
cache_peer_domain 192.168.1.47 .domain.com
cache_peer_domain 192.168.1.48 .domain.com
The cache_peer_domain settings are there because we have 6 back-end 
servers serving 3 domains--two servers for each domain.

The whole configuration is working, except for load balancing.  Without 
carp I always get FIRST_UP_PARENT/192.168.1.47.  With carp I always 
get CARP/192.168.1.48, no matter what IP I'm coming from (and I tried a 
half dozen client IPs to be sure I wasn't just coincidentally always 
hashing to the same destination).

What am I doing wrong?
Thanks!


Re: originserver plus carp configuration?

2005-04-07 Thread Henrik Nordstrom
On Thu, 7 Apr 2005, Joe Cooper wrote:
The whole configuration is working, except for load balancing.  Without 
carp I always get FIRST_UP_PARENT/192.168.1.47.  With carp I always get 
CARP/192.168.1.48, no matter what IP I'm coming from (and I tried a half 
dozen client IPs to be sure I wasn't just coincidentally always hashing to 
the same destination).
CARP balances based on a hash of the destination URL, not client.
You can get quite detailed tracing of the CARP hashing by enabling debug 
section 39,9, combined with the cachemgr carp section.
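In squid.conf, that tracing could be enabled with something like the following (directive as in Squid 2.5; verify against your build's squid.conf documentation):

```
# Keep general logging at level 1, but trace the CARP peer
# selection code (debug section 39) at maximum verbosity.
debug_options ALL,1 39,9
```

The per-request hash decisions then appear in cache.log, and the cachemgr carp page can be fetched with e.g. squidclient mgr:carp.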

Regards
Henrik


Re: originserver plus carp configuration?

2005-04-07 Thread Joe Cooper
Thanks for the rapid response, Henrik.
Henrik Nordstrom wrote:
On Thu, 7 Apr 2005, Joe Cooper wrote:
The whole configuration is working, except for load balancing.  
Without carp I always get FIRST_UP_PARENT/192.168.1.47.  With carp 
I always get CARP/192.168.1.48, no matter what IP I'm coming from (and 
I tried a half dozen client IPs to be sure I wasn't just 
coincidentally always hashing to the same destination).

CARP balances based on a hash of the destination URL, not client.
Hmmm...that raises a different question: How does one address the issue 
of maintaining client stickiness?

You can get quite detailed tracing of the CARP hashing by enabling debug 
section 39,9, combined with the cachemgr carp section.
Excellent.  Thanks for the tip.


Re: originserver plus carp configuration?

2005-04-07 Thread Henrik Nordstrom
On Thu, 7 Apr 2005, Joe Cooper wrote:
CARP balances based on a hash of the destination URL, not client.
Hmmm...that raises a different question: How does one address the issue of 
maintaining client stickiness?
It doesn't. CARP is designed for routing requests to a cloud/array of 
parent proxy cache servers with minimal duplication of cache content.

doc/rfc/draft-vinod-carp-v1-03.txt
Its positive properties are:
  - Deterministic static forwarding path. The same URL always gets the 
same forwarding path while the configuration is the same.

  - Minimal cache disruption on changes. If a member server is 
added/removed from the array, only a portion of the cache in proportion to 
the size/power of the added/removed server is affected by the change.

  - No peering traffic. Thanks to the deterministic static forwarding 
path.

Its negative properties are:
  - Static forwarding path. Cannot easily adjust to dynamic changes in 
weights / server capacity.

  - Forwarding is based on a hash of the complete URL. No client 
or even destination persistence.
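The deterministic URL-to-parent mapping can be illustrated with a simplified rendezvous-style hash. This shows the idea behind CARP, not the exact hash function or load-factor weighting from the draft:

```python
import hashlib

def carp_pick(url, members):
    """Pick a parent for a URL: highest combined member+URL hash wins.
    Simplified illustration only; the real CARP draft specifies its
    own hash function and weights members by load factor."""
    def score(member):
        combined = (member + "|" + url).encode()
        return int(hashlib.md5(combined).hexdigest(), 16)
    return max(members, key=score)

parents = ["192.168.1.47", "192.168.1.48"]
# The same URL always maps to the same parent, regardless of client...
a = carp_pick("http://www.domain.com/page1", parents)
# ...while different URLs spread across the array.
b = carp_pick("http://www.domain.com/page2", parents)
```

This also explains the behaviour Joe observed: testing from many client IPs against the same URL will always hit the same parent, because the client address never enters the hash.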

Regards
Henrik