Re: [VOTE] 2.0.2 release

2004-10-05 Thread Eric Johnson
Am I mistaken, or have the recent issues been dealt with?
-Eric.
Michael Becke wrote:
Looks like 2.0.2 has been cancelled for the moment. I'll call for a 
vote again after we fix the recently discovered issues.

Mike
On Sep 29, 2004, at 11:05 AM, Michael Becke wrote:
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Performance

2004-09-27 Thread Eric Johnson
And I've finally gotten test results back from the appropriate people here.
In our test lab, between HttpClient 2.0.1 and the nightly, we found a 
difference of about 4ms per request.  As this was a live-test 
environment, with all of our application infrastructure around HttpClient, 
the absolute numbers are probably mostly irrelevant to HttpClient, but the 
measurable improvement was entirely due to the HttpClient changes.

We have some other statistics, but I worry that those are misleading for 
now, so I'm not mentioning those.  Hopefully, I'll be able to pass along 
some concrete data at some point.

For our purposes, the build otherwise looks stable.
-Eric.
Oleg Kalnichevski wrote:
Folks,
Could you please grab the latest 2.0 nightly build and see if it runs
stable enough for production purposes? When we have a couple of reports
confirming adequate stability, we'll call for the 2.0.2 release.
Oleg
On Fri, 2004-09-03 at 00:00, Eric Johnson wrote:
 

My read on Odi's statistics is that the patch has a pretty consistent 
1ms impact on every request.  This corresponds pretty well with my 
understanding of the theoretical improvements behind the patch.  To the 
extent that HttpClient's performance is affected, header parsing will be 
faster, while reading the body of the response will be roughly the same, 
presumably because the client of HttpClient buffers large reads.

On a 1GHz machine, this patch means one million processor cycles that 
can be put to better use for *each* request.  That's more than 
benchmark optimization, I think.
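For anyone curious, the gist of the change (a sketch only; the real patch
lives in HttpConnection, and the names here are illustrative) is reading
the status line and headers through a buffered stream instead of one byte
at a time off the socket:

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public class BufferedHeaderRead {
    // Turning many one-byte socket reads into a few bulk reads is
    // where the roughly 1ms-per-request saving comes from.
    static InputStream openBuffered(Socket socket) throws IOException {
        return new BufferedInputStream(socket.getInputStream(), 2048);
    }
}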

-Eric.
Oleg Kalnichevski wrote:
   

Eric,
This patch makes a difference for only relatively small payloads when
the response content is about the size of the status line + headers. In
most (real life) cases the performance gain is virtually negligible.
This is more about benchmark optimization than anything else. 

Yet, I see no problem with another point release.
Oleg
On Thu, 2004-09-02 at 19:06, Eric Johnson wrote:
 

I don't know whether this would be a premature time to call for a new 
release, but the prospect of significantly better performance out of 
HttpClient has some people in my company very interested.

What are the chances of a 2.0.2 release with this fix in it?  (I'm 
willing to build from the source, but others in my company like the idea 
of an official build perhaps more than they need to.)

-Eric.
  

   

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: unable to find line starting with HTTP

2004-09-09 Thread Eric Johnson
Juan,
The one technique that I'm aware of for guaranteeing that the server 
only processes the request once is to put some sort of transaction ID 
into the request, and have the server reject duplicate requests 
(hopefully with an appropriate error message).

In short, it is an application-level problem, not something that 
HttpClient can help you with at the transport level.  HttpClient can 
improve in ways that make such tricks necessary less often, but it 
cannot eliminate the need for them.  Hopefully, in your case, you can 
change the server code to support such a technique.
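Something like the following is what I have in mind (an untested sketch;
the URL, the "txn-id" parameter, and the ID scheme are all made up, and
the server-side duplicate check is assumed):

import java.util.Random;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.PostMethod;

public class OneShotPost {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        PostMethod post = new PostMethod("http://bank.example.com/charge");
        // Hypothetical parameter: the server must be taught to reject
        // a txn-id it has already processed.
        String txnId = System.currentTimeMillis() + "-"
            + new Random().nextInt(Integer.MAX_VALUE);
        post.addParameter("txn-id", txnId);
        post.addParameter("amount", "10.00");
        try {
            client.executeMethod(post);
        } finally {
            post.releaseConnection();
        }
    }
}

If the request fails in a way that leaves delivery uncertain, you retry
with the *same* txn-id, and the server either processes it for the first
time or rejects it as a duplicate.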

-Eric.
Juan Pedro López Sáez wrote:
Hello, 

In case it is a server-side issue, I would like to be sure that the server
hasn't processed my request, because I'm posting non-idempotent requests.
If I retry the request but the first one was really processed, I could be
in trouble.
I guess there is not a simple solution for this.
I include an excerpt of my httpclient log, in case you can get further
information.
Thank you very much.
Juan Pedro Lopez 


30 ago 2004 14:00:14,223 DEBUG [Thread-610] httpclient.wire - 
This is the post data I send
30 ago 2004 14:00:14,223 DEBUG [Thread-610]
org.apache.commons.httpclient.methods.EntityEnclosingMethod  - Request
body sent
30 ago 2004 14:00:14,223 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpConnection - enter
HttpConnection.flushRequestOutputStream()
30 ago 2004 14:00:14,223 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpMethodBase - enter
HttpMethodBase.readResponse(HttpState, HttpConnection)
30 ago 2004 14:00:14,223 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpMethodBase - enter
HttpMethodBase.readStatusLine(HttpState, HttpConnection)
30 ago 2004 14:00:14,223 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpConnection - enter
HttpConnection.readLine()
30 ago 2004 14:00:14,223 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpParser - enter HttpParser.readLine()
30 ago 2004 14:00:14,223 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpParser - enter
HttpParser.readRawLine()
30 ago 2004 14:00:14,288 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpMethodBase - Closing the connection.
30 ago 2004 14:00:14,288 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpConnection - enter
HttpConnection.close()
30 ago 2004 14:00:14,288 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpConnection - enter
HttpConnection.closeSocketAndStreams()
30 ago 2004 14:00:14,288 INFO  [Thread-610]
org.apache.commons.httpclient.HttpMethodBase - Recoverable exception
caught when processing request
30 ago 2004 14:00:14,469 WARN  [Thread-610]
org.apache.commons.httpclient.HttpMethodBase - Recoverable exception
caught but MethodRetryHandler.retryMethod() returned false, rethrowing
exception
30 ago 2004 14:00:14,469 DEBUG [Thread-610]
org.apache.commons.httpclient.HttpConnection - enter
HttpConnection.releaseConnection()
30 ago 2004 14:00:14,469 DEBUG [Thread-610]
org.apache.commons.httpclient.MultiThreadedHttpConnectionManager - enter
HttpConnectionManager.releaseConnection(HttpConnection)
 

Juan,
Most likely it is a server-side issue. Take a look at this post; it
should explain the cause of the problem and help you find a fix for it:

http://marc.theaimsgroup.com/?l=httpclient-commons-dev&m=109344163805313&w=2
Oleg
On Thu, 2004-09-09 at 10:24, Juan Pedro López Sáez wrote:
   

Hello all.
From time to time I'm getting the following exception in my application:
org.apache.commons.httpclient.HttpRecoverableException:
org.apache.commons.httpclient.HttpRecoverableException: Error in parsing
the status line from the response: unable to find line starting with HTTP
   at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1965)
   at org.apache.commons.httpclient.HttpMethodBase.processRequest(HttpMethodBase.java:2659)
   at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1093)
   at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:675)
   at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:558)
How can I manage it in my application? Can I find out what's going
wrong? Is it a server side issue?
Thank you very much.
Juan Pedro Lopez

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 


Re: Performance

2004-09-02 Thread Eric Johnson
I don't know whether this would be a premature time to call for a new 
release, but the prospect of significantly better performance out of 
HttpClient has some people in my company very interested.

What are the chances of a 2.0.2 release with this fix in it?  (I'm 
willing to build from the source, but others in my company like the idea 
of an official build perhaps more than they need to.)

-Eric.
Andre-John Mas wrote:
Will you make a patch for the 2.x branch as well? The project I work
on currently uses the 2.0.1 implementation and we would rather avoid
having to change APIs to take advantage of this.
regards
Andre
-Original Message-
From: Oleg Kalnichevski [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 01, 2004 2:11 PM
To: Commons HttpClient Project
Subject: Re: Performance

Makes sense. I'll start working on a patch. 

Oleg
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: DO NOT REPLY [Bug 21329] - Add InputStream buffering.

2004-09-02 Thread Eric Johnson
I thought about this as well this morning, and couldn't find any 
flaws in the patch.

-Eric.
Oleg Kalnichevski wrote:
Mike,
I have also been thinking about the repercussions for the reliability of the
stale connection check. I tend to conclude that with the existing
architecture (no HTTP pipelining support + the response 'garbage' check)
there is virtually no chance of reading past the response body.
HttpClient always makes sure that

(1) a new request is executed over the same connection only after
the previous response has been consumed in its entirety, and
(2) the connection is dropped whenever it detects illegal content past
the declared response body.

Am I missing something?
Oleg
On Thu, 2004-09-02 at 14:23, Michael Becke wrote:
 

Hi Roland,
Yes, that was definitely part of the discussion, and this seems to be a 
pretty good solution for that.  Now that I've slept on it, I think 
this also came up when we were originally discussing isStale() and how to 
determine if a connection is still valid.  Does buffering at this level 
cause problems here?  My gut reaction is that it could, but 
probably won't under most conditions.  My feeling is that there 
shouldn't be anything buffered between requests.  Does anyone have a 
good real-world test case for this?

Mike
   

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Performance

2004-09-02 Thread Eric Johnson
My read on Odi's statistics is that the patch has a pretty consistent 
1ms impact on every request.  This corresponds pretty well with my 
understanding of the theoretical improvements behind the patch.  To the 
extent that HttpClient's performance is affected, header parsing will be 
faster, while reading the body of the response will be roughly the same, 
presumably because the client of HttpClient buffers large reads.

On a 1GHz machine, this patch means one million processor cycles that 
can be put to better use for *each* request.  That's more than 
benchmark optimization, I think.

-Eric.
Oleg Kalnichevski wrote:
Eric,
This patch makes a difference for only relatively small payloads when
the response content is about the size of the status line + headers. In
most (real life) cases the performance gain is virtually negligible.
This is more about benchmark optimization than anything else. 

Yet, I see no problem with another point release.
Oleg
On Thu, 2004-09-02 at 19:06, Eric Johnson wrote:
 

I don't know whether this would be a premature time to call for a new 
release, but the prospect of significantly better performance out of 
HttpClient has some people in my company very interested.

What are the chances of a 2.0.2 release with this fix in it?  (I'm 
willing to build from the source, but others in my company like the idea 
of an official build perhaps more than they need to.)

-Eric.
   

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: DO NOT REPLY [Bug 30514] New: - HttpClient deadlocks under multithreaded access

2004-08-06 Thread Eric Johnson
[EMAIL PROTECTED] wrote:
[snip]
   /**
    * Keep hitting a URL, with no sleep time.
    */
   class ClientThread implements Runnable
   {
       protected Thread m_thread;
       protected boolean m_running = true;

       public ClientThread()
       {
       }

       public void run()
       {
           m_thread = Thread.currentThread();
           try
           {
               while (m_running)
               {
                   PostMethod httppost = new PostMethod(m_serverURL.getPath());
                   httppost.setRequestBody("some random stuff that doesn't matter");

                   try
                   {
                       // print out something so we know when there are active
                       // threads
                       System.out.print(".");
                       m_client.executeMethod(httppost);
                   }
                   catch (Throwable t)
                   {
                       // here you should add:
                       httppost.releaseConnection();
                   }
               }
           }
           catch (Throwable t)
           {
               t.printStackTrace(System.out);
               m_running = false;
           }
       }
   }
}
 

If you don't call releaseConnection(), HttpClient doesn't know whether 
you're done reading the response to the request, so it doesn't free up 
the connection for re-use.  The MultiThreadedHttpConnectionManager then 
eventually appears to deadlock, when it is really just waiting for you to 
call releaseConnection().
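Here is a minimal self-contained sketch of the corrected loop (the URL
and request body are placeholders):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
import org.apache.commons.httpclient.methods.PostMethod;

public class ReleasingClientThread implements Runnable {
    // One client shared by all threads, backed by the pooling manager.
    private static final HttpClient CLIENT =
        new HttpClient(new MultiThreadedHttpConnectionManager());

    private volatile boolean running = true;

    public void run() {
        while (running) {
            PostMethod post = new PostMethod("http://localhost:8080/test");
            post.setRequestBody("some random stuff that doesn't matter");
            try {
                CLIENT.executeMethod(post);
                post.getResponseBodyAsString(); // consume the response
            } catch (Throwable t) {
                t.printStackTrace(System.out);
            } finally {
                // The crucial call: without it the pool never gets the
                // connection back, and other threads block forever.
                post.releaseConnection();
            }
        }
    }
}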

-Eric.
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Having some problems with expect 100 continue

2004-07-15 Thread Eric Johnson
Jennifer Ward wrote:
On Jul 15, 2004, at 1:09 AM, Kalnichevski, Oleg wrote:
(3) What web server you are targeting?

We are using Apache Tomcat with Slide for WebDAV support.
Aha.
At least in Tomcat 4.0, the container silently processes the Expect: 100-continue 
header, so the web application never sees that part of the request.  In the 
servlet environment, a web app must enable container-managed security 
before the client even has a chance of the expect-100-continue 
handshake working as desired, and even then, I don't think that the 
connectors necessarily support it (correctly).

Absent container-managed security, Tomcat (or any other servlet 
container, for that matter) will be forced to pass the request along to 
the web application - at which point the web application may decide that 
the invoker is not authorized and force the client to send an 
additional request.  It sounds like you might be seeing that behavior.

If you control the server, then you can look at the choice of Tomcat 
version and supported connector, and look into enabling container-managed 
security for your Slide-based webapp.  You might need to send emails to 
the corresponding mailing lists to get to the bottom of it.
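If the extra round trip itself is the problem, one client-side option is
to turn the expect header off.  Treat this sketch as an assumption on my
part: if I recall the 2.0 API correctly, entity enclosing methods have a
setUseExpectHeader() switch, but double-check the javadoc for your version
(URL and body are placeholders):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.PutMethod;

public class NoExpectContinue {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        PutMethod put = new PutMethod("http://localhost:8080/slide/files/test.txt");
        // Assumed 2.0-era switch for the "Expect: 100-continue" handshake.
        put.setUseExpectHeader(false);
        put.setRequestBody("hello");
        try {
            client.executeMethod(put);
        } finally {
            put.releaseConnection();
        }
    }
}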

-Eric.
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Bytes written over the 'wire' are dropped on the floor?

2004-07-09 Thread Eric Johnson
Make sure you are using the MultiThreadedHttpConnectionManager, and that you 
call releaseConnection() after each request.  It strikes me that you could 
be getting into a situation where the server thinks it is doing HTTP 
pipelining, which HttpClient doesn't actually support, particularly if 
you are not releasing the connection properly.
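For completeness, this is the kind of setup I mean (the limits shown are
arbitrary examples):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;

public class PooledClientFactory {
    static HttpClient newPooledClient() {
        MultiThreadedHttpConnectionManager manager =
            new MultiThreadedHttpConnectionManager();
        manager.setMaxConnectionsPerHost(10); // tune for your load
        manager.setMaxTotalConnections(20);
        return new HttpClient(manager);
    }
}

Every request executed through the returned client must then end with a
releaseConnection() call, typically in a finally block.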

Presumably, if you can stop the code in the debugger, you can tell 
exactly which line in HttpClient is blocked.  You don't seem to mention 
that below.  That could be a valuable hint.

As with many HttpClient support issues, if you provide a trace log in a 
subsequent email, that might quickly reveal the problem.  You might also 
try a 1.4.2-vintage JVM, to see if you get different behavior.

-Eric.
David Tonhofer, m-plify S.A. wrote:
Hello,
I have spent the past few hours tracking down a problem that seems to
occur if you push bytes too quickly over a socket. As it happened with
HttpClient, I thought I might ask here. Maybe somebody has already heard
about it and can tell me whether there is a simple trick I don't know
about.

First, this happens under W2K, with Sun JVM 1.4.1. Haven't tried it on
Linux yet (if anyone is interested, let me know). The HttpClient I use is
the 2.0 version. The HTTP server is a simple homegrown Java
socket-handling framework: it basically just reads the bytes from the
InputStream that it obtains from the socket.

Problem:

If I issue HttpClient POST requests really quickly (in this case, inside
a tight loop), then the first two requests are received OK. On the third
request, the data written over the 'wire' (note that the network is not
really involved; client and server are on the same machine) seems to be
dropped on the floor, i.e. the server receives the HTTP header and the
HTTP header endline, and I can see HttpClient log that it wrote the
request body, but the request body is never received on the server side,
even if the server waits a whole minute. I have tried to use InputStream
and BufferedInputStream, but to no avail.

The fix:

What fixed the problem was the introduction of a little delay in
org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpState state, HttpConnection conn),
just before the 'flush' of the body. This (line 2322 in HttpMethodBase):

...
writeRequestBody(state, conn);
// make sure the entire request body has been sent
conn.flushRequestOutputStream();

is 'augmented' with this:

...
writeRequestBody(state, conn);
try {
    Thread.sleep(20);
}
catch (Exception exe) {
}
conn.flushRequestOutputStream();
...

I think I have already encountered this problem with Java 1.2 a few
years ago, also on W2K (indeed, I have found a 500ms sleep in some old
code I have been keeping around). Does anyone know if this is a common
phenomenon?

Best regards and thanks in advance for any clue,
-- David Tonhofer



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: SSL and server using self-signed certificate

2004-07-07 Thread Eric Johnson
Andre,
At a quick glance, there is one problem I've experienced that the SSL 
guide doesn't seem to cover.  Presumably, once you've created your 
self-signed certificate, you added it to your JVM's cacerts file using 
keytool?  I've found that a self-signed certificate may not work unless 
you pass the -trustcacerts option when doing the import.  Not sure why 
that is, and your experience may vary based on the JRE version you're 
using.
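For reference, the import command looks something like this (the alias
and file name are made up; 'changeit' is the usual default cacerts
password):

keytool -import -trustcacerts -alias myselfsigned \
    -file server.crt \
    -keystore $JAVA_HOME/jre/lib/security/cacerts \
    -storepass changeit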

-Eric.
Andre-John Mas wrote:
Hi,
I have set up a Tomcat 4.1 server to use SSL, with the help of a self-signed
certificate, i.e. with no trusted third party certifying it. I now try getting
my client, which uses 'commons-httpclient-2.0-rc2', to connect. When I do,
I get the following exception:

 sun.security.validator.ValidatorException: No trusted certificate found

Is there a way to get self-signed certificates to be automatically trusted?
I must admit I am a newbie when it comes to SSL, so any help would be very much
appreciated.
regards
Andre
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Invalid RSA modulus size

2004-06-15 Thread Eric Johnson
Tim,
Make sure you imported the CA certificate with the -trustcacerts option. 
If you do everything else correctly and leave out this step, you'll see 
the problem you reported.  I've tripped over that mistake once or twice. 
That's just a shot in the dark as to what might be your problem, though.

-Eric.
Tim Wild wrote:
Thanks Michael. I have the CA cert and the chained CA certs in my
java_home/jre/lib/security/cacerts file. That CA issued the server
cert too. It all works fine when I use Mozilla.

I'm pretty sure it's a problem with certificate chaining, as it works
when I use my own test CA, which doesn't have an intermediate CA.

I use a custom socket factory that works perfectly with my own test CA
too, which I must get around to posting some time, once I work out the
IP issues.

Any more thoughts or suggestions?
Thanks
Tim
- Original Message -
From: Michael Becke [EMAIL PROTECTED]
Date: Tuesday, June 15, 2004 2:58 pm
Subject: Re: Invalid RSA modulus size
 

Hi Tim,
This generally means that the server's cert is signed by an untrusted
CA.  You can get around this in a couple of ways.

- Import the server's cert into the keystore you are using.
- Implement an SSL socket factory that is not so picky about who signed
  the cert.  This is not recommended for production use but can be
  useful for testing.  Take a look at the EasySSLProtocolSocketFactory
  described in http://jakarta.apache.org/commons/httpclient/sslguide.html
  for an example.
- Sign your server cert with a CA that is trusted by JSSE.  Please
  take a look at the JSSE docs for info about which CAs are trusted.

Mike
On Jun 14, 2004, at 10:19 PM, Tim Wild wrote:
   

Thanks for that Oleg. Using JDK 1.5.0b2 does indeed get past the
invalid modulus size error. I've got another error message now:

javax.net.ssl.SSLHandshakeException:
sun.security.validator.ValidatorException: No trusted certificate found.

My Apache server has a certificate from a certification authority
called Digital Identity, in New Zealand. They have a root certificate
authority, then two sub-CAs (perhaps called chained CAs). My server
certificate and client certificate are chained under one of these
sub-CAs. When I use Mozilla it all works perfectly: it requests the
certificate, the browser presents it, and I can see the page I
requested.

When I try the same thing using Java I get the error message above. I
have a keystore with just my client certificate in it (nothing else),
the same client certificate that works in Mozilla. I know it's finding
the certificate because I'm having Java print out the alias of the
certificate it's using. The CA certs are in the cacerts file of the
JDK 1.5 I'm using.

Does anyone have any idea why I'm getting this error? Any thoughts or
ideas about how to go forward or things to investigate would be
welcome.

Thanks
Tim
Oleg Kalnichevski wrote:
 

Tim,
This is believed to be a limitation of all Sun's JCE/JSSE
implementations up to Java version 1.5. You can try testing your
application with Java 1.5-b2 to see if the problem has indeed been
fixed. Alternatively consider using IBM Java 1.4 or 3rd party 
   

JCE/JSSE implementations which _may_ not exhibit the same limitation
   

HTH
Oleg
On Sat, 2004-06-12 at 05:36, Tim Wild wrote:
   

Hi,

I'm using HttpClient to connect to an Apache server that requires
certificates. When I use client and server certificates from my own
CA with 1024-bit keys, it works perfectly. When I get a commercial
certificate with a longer key (4096 bits), I get the following error
(full message below) when I connect to Apache:

javax.net.ssl.SSLProtocolException: java.io.IOException: subject
key, Unknown key spec: Invalid RSA modulus size.

Google produced one result, which talked about a maximum key size
using the JCE of 2048 bits with the JDK 1.4.2 default policy files.
Another site suggested getting the unrestricted policy files, so I
got and installed them, but it doesn't seem to make any difference
at all.

Does anyone have any thoughts or suggestions? Half-formed thoughts or
ideas are welcome, as they might give me a lead that I can follow
myself.

Thanks
Tim Wild
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Re: HttpClient Consultant Needed Immediately

2004-06-09 Thread Eric Johnson
Lukas,
I certainly cannot spend the time to help you beyond this email, but 
for the sake of correct knowledge in the HttpClient mail logs...

Lukas Bradley wrote:
The response we have received from their technician is as follows:
Okay, this is making some sense now. We are not logging your requests
because you are not reaching us. Your software  is bailing out ahead of time
because you are using Java. Java has static lists included of valid
certificate authorities. Because we only issue certificates for personal
security reasons, we are not a valid certificate authority in Java's eyes.
This causes Java to have a fatal error at the handshake:
 

This simply isn't a valid criticism of Java. You are free to add 
additional certificate authorities to Java's cacerts file (found in 
jre/lib/security/). You can update the file using the keytool command-line 
tool. If you have a small-scale deployment, this is a perfectly 
good way to configure support for SSL (we've used this approach for 
in-house testing, for example). Note that Sun's SSL support has bugs, 
particularly in certain releases. If your problem lies there, you can 
play around with that to try to get it working, or you can use an 
alternative SSL provider (Entrust and IBM come to mind, but don't 
consider that an endorsement of either, or a criticism of any that exist 
that I've not mentioned).

 main, SEND TLSv1 ALERT:  fatal, description = certificate_unknown
There are two ways around this: 
1) Don't use java. Generate a new CSR, I will immediately sign it and send
it back to you. Use command line scripting or other programming languages to
communicate with our servers ( perl, curl, bash, etc...)
2) Write your own extended SSL verification classes. We have generated an
example of how do go about this which you will find attached. Feel free to
use any parts of the code to aid in the incorporation into your system. 
 

Also check out various posts from the email archives.
If there is anything else I can do, please let me know.
In answer to your questions, it appears as if we are never reaching them.
I will find out the answer to your second question.
However, your third question is very perplexing.  Are there SSL modules that
do NOT work with JSSE?  Wouldn't that be an open standard?
Lukas
 

-Eric.
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Project team page (was RE: Release notes update. Please review)

2004-05-03 Thread Eric Johnson
Oleg,

Or how about simply listing years in which contributors actually 
contributed patches?  And if you wanted to be more specific, you could 
put a count for each year.  Of course, I'm not much of a contributor 
these days, except to come up with zany ideas on the mailing list..., so 
treat the suggestion as you wish.

-Eric.

Kalnichevski, Oleg wrote:

On a related note, Oleg, are you also working on updating the list of 
contributors?  If not, I will take a look at it.
   

Mike,

I found this to be quite a touchy issue. I have been thinking about whether there's an acceptable way to inject a little more structure into the standard Maven-generated contributors page. In particular, I'd like to be able to mark inactive contributors as, well, inactive ones and retired committers as retired ones, to give a little more prominence to the active ones. So far I have been unable to come up with any idea of how this could be accomplished without taking a risk of upsetting some folks and provoking some tensions on the mailing list.

I have been thinking about some sort of 'HttpClient history' page which would contain a chronological account of important events, including credits for notable personal contributions, leaving the project team page to reflect _current_ _active_ participation in the project. However, for the Lord's sake, in no way do I want to provoke trouble between HttpClient contributors and committers and thus cause more harm than good with my innovation. In my thoughts I was gradually gravitating towards not messing around with the 'project team' page and simply appending new entries to the existing list when you caught me with your question ;-)

If there are no better ideas how to deal with the issue, I'll add the new entrants to the existing list and get it over with

Oleg

-Original Message-
From: Michael Becke [mailto:[EMAIL PROTECTED]
Sent: Monday, May 03, 2004 2:41
To: Commons HttpClient Project
Subject: Re: Release notes update. Please review
Wow, has it been a year already?  It seems hard to believe.

Oleg, nice work on compiling the list of changes.  This is no small 
task.

On a related note, Oleg, are you also working on updating the list of 
contributors?  If not, I will take a look at it.

Thanks,

Mike

On May 2, 2004, at 6:08 PM, Oleg Kalnichevski wrote:

 

Folks,
I just updated the release notes doc to include personal contributions
made to HttpClient CVS HEAD since 2.0 was branched out (approximately a
year ago):

http://cvs.apache.org/viewcvs.cgi/jakarta-commons/httpclient/release_notes.txt?rev=1.19&view=markup

If you have contributed code or design ideas during that period, please
review the release notes to make sure your contribution is mentioned.
If certain contributions are not mentioned, it does not necessarily mean
that they do not deserve mentioning. Going through several hundred
commit messages is tedious, not great fun, and very prone to mistakes. I
may well have overlooked quite a few things.

It is important that we keep this document up to date, as one day it may
become the only (easily accessible) vehicle for communicating individual
contributions. @author tags may have to be removed, if the Jakarta PMC
so decides.

So, no need to be shy. If there's anything omitted, please do let me
know.
Oleg

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: [PATCH] SSL guide amendments (patch against 2.0 branch)

2004-04-15 Thread Eric Johnson
Oleg,

Yes, of course.  Sorry, I should have done that the first time.

Attached (I hope).

-Eric.

Kalnichevski, Oleg wrote:

Hi Eric,
Many thanks for taking the time to correct my writing. All corrections make
sense to me. (BTW, no need to be over-diplomatic. I am perfectly aware that
my English has its limits, especially if I just type away. Just correct it.
There'll be no questions asked.)

Do you still have the current CVS snapshot at your disposal? If yes, could
you please recreate the patch with all those corrections, if that's not too
much of a hassle?

Oleg

-Original Message-
From: Eric Johnson [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 15, 2004 15:18
To: Commons HttpClient Project
Cc: Daniel C. Amadei
Subject: Re: [PATCH] SSL guide amendments (patch against 2.0 branch)
Oleg,

A few suggested edits follow.  I'm not a great editor myself (I frequently
miss bevies of typos when my spouse asks me to review her writing), but
since nobody else responded, I figured I would.
Hopefully, my edits make sense.

Oleg Kalnichevski wrote:

 

? sslguide2.patch
Index: sslguide.xml
===================================================================
RCS file: /home/cvspublic/jakarta-commons/httpclient/xdocs/sslguide.xml,v
retrieving revision 1.2.2.1
diff -u -r1.2.2.1 sslguide.xml
--- sslguide.xml    21 Aug 2003 16:07:31 -0000    1.2.2.1
+++ sslguide.xml    15 Apr 2004 15:18:40 -0000
@@ -240,6 +240,43 @@
 </p>
 </li>
 
+<li>
+<p>
+  <strong>JSSE prior to Java 1.4 incorrectly reports socket timeout.</strong>
+</p>
+<p>
+  Prior to Java 1.4, in Sun's JSSE implementation, a read operation that has timed out incorrectly
+  reports an end-of-stream condition instead of throwing java.io.InterruptedIOException as expected.
+  HttpClient responds to this condition by assuming that the connection was dropped and throws a recoverable
+  HTTP exception: "Error in parsing the status line from the response: unable to find line starting with HTTP".
+  It should instead report "java.io.InterruptedIOException: Read timed out".
+  If you see the "unable to find line..." message when working with an older version of the JDK and JSSE,
+  it can be caused by the timeout waiting for data and not by a problem with the connection.
+</p>
+<p>
+  <strong>Work-around:</strong> One possible solution is to increase the timeout value, as the server is
+  taking too long to start sending the response. Alternatively, you may choose to upgrade to Java 1.4 or
+  above, which does not exhibit this problem.
+</p>
+<p>
+  The problem was discovered and reported by Daniel C. Amadei.
+</p>
+</li>
+
+<li>
+<p>
+  <strong>HttpClient does not work with the IBM JSSE shipped with the IBM WebSphere Application Server.</strong>
+</p>
+<p>
+  Several releases of the IBM JSSE exhibit a bug that causes HttpClient to fail while detecting the size
+  of the socket send buffer (the java.net.Socket.getSendBufferSize method throws "java.net.SocketException:
+  Socket closed").
+</p>
+<p>
+  <strong>Solution:</strong> Make sure that you have all the latest fix packs applied. HttpClient users
+  have reported that IBM WebSphere Application Server versions 4.0.6, 5.0.2.2, and 5.1 do not exhibit
+  the problem.
+</p>
+</li>
   </ol>
 
 </section>
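In API terms, the first work-around above amounts to something like this
(a sketch; the values are arbitrary, and these are the 2.0-era setters):

import org.apache.commons.httpclient.HttpClient;

public class TimeoutSettings {
    static HttpClient newClient() {
        HttpClient client = new HttpClient();
        client.setTimeout(30000);           // socket read timeout, in ms
        client.setConnectionTimeout(10000); // connect timeout, in ms
        return client;
    }
}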

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Re: EasySSLProtocolSocketFactory for Secure and Proxied.

2004-03-25 Thread Eric Johnson
The initial connection to the proxy server is actually done over an 
unencrypted channel.

Subsequent communications are encrypted only after the connection to the 
proxy has been established.

This isn't a restriction; rather, I believe it follows from how proxying
of secure connections is specified.
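For what it's worth, the proxy itself is still configured on the client,
and the tunnelling happens underneath; a sketch (host names and port are
made up):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class ProxiedSsl {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        // The client first talks plain HTTP to the proxy to set up the
        // tunnel; only the bytes to the target host are then encrypted.
        client.getHostConfiguration().setProxy("proxy.example.com", 3128);
        GetMethod get = new GetMethod("https://secure.example.com/");
        try {
            client.executeMethod(get);
            System.out.println(get.getStatusLine());
        } finally {
            get.releaseConnection();
        }
    }
}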

-Eric.

John Melody wrote:

Hi, 

I have just been trying to get HttpClient to accept a
self-signed certificate on a secure, proxied connection. I
followed the SSL guide directions but it did not seem to be
picking up my new factory.

When I checked in the source code I noticed the following code. 

HttpConnection.java

    // use the protocol's socket factory unless this is a secure
    // proxied connection
    final ProtocolSocketFactory socketFactory =
        (isSecure() && isProxied()
            ? new DefaultProtocolSocketFactory()
            : protocolInUse.getSocketFactory());

 

Why is this restriction in place? 

regards, 
John.  

John Melody 
SyberNet Ltd. 
Galway Business Park, 
Dangan, 
Galway. 
Tel. No. +353 91 514400 
Fax. NO. +353 91 514409 
Mobile - 087-2345847 




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: [VOTE] suspend use of @author tags

2004-03-17 Thread Eric Johnson
Well, hmmm - I'm perfectly happy to have my name removed, or sign a CLA.

My quick scan of the files revealed what I suspect is a much longer list 
of authors that probably haven't signed a CLA than just those two that 
you indicate.

Rewriting the code is a drastic and expensive solution!  I would think 
that we only go there in case of clear infringement, not just because 
there might be some.  My understanding (courtesy of Groklaw) is that with 
normal copyright infringement there must be an opportunity to remedy 
the infringement before any damages kick in.  At that point, I think, 
would be the time to consider replacing code.  At least, that is what I 
understand of US law (of course, IANAL).

-Eric.

Jeff Dever wrote:

+1

Additionally, we should seek to contact those currently in @author 
tags that do not have a CLA on file, and ask permission that they be 
removed or to encourage them to sign a CLA.  I'll do this.

BTW: If we can't get either of these two things from a contributor 
(Sean and Sun-Gu at this point) then we probably should rewrite 
any code that can be attributed to them.

-jsd

Ortwin Glück wrote:

+1

Michael Becke wrote:

Given the current ambiguity regarding @author tags I propose that 
we  suspend their use for contributors without a CLA on file.  This 
is  meant to be a temporary solution until something official is 
endorsed  by the ASF board.

Mike

 
--------------------------------------------------------------
 Vote:  Suspend use of @author tags
 [ ] +1 I am in favor of the proposal
 [ ] -1 I am against this proposal (must include a reason).
--------------------------------------------------------------

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: @author tags

2004-03-16 Thread Eric Johnson
Roland Weber wrote:

Hello Eric,

I was thinking about some kind of metrics, too.
Not as advanced as yours, of course :-) But then
I felt that a ranking is not the best approach. It
may lure people to use tricks just to improve
their ranking.
Too true.  My perspective on this matter is colored by the fact that 
everyone on this mailing list is very open and complimentary to each 
other, so I have a hard time seeing that happen here.  I certainly don't 
want to do anything that would change that environment.  As with any 
useful metric, it would require refinement over time, to prevent 
spoofing (I hope this isn't ever necessary), and to adjust for the 
relative value of contributions (size of patch, for example).  The point 
of the recognition, I think, is to provide a compliment and 
encouragement to any and all that contribute, not necessarily to 
perfectly correlate with some abstract notion of the value of 
contributions.  If anything, my suggestion was intended to be more 
inclusive than what we do now.

So perhaps as a refinement, then, take something like the ranking I 
suggested earlier, compute the order and then divide into three groups - 
high, medium, and low involvement (or four, with the bottom fourth not 
actually recognized officially?).  This would prevent people from 
competing to be first in the ranking, as people would just be 
recognized by which group they fell into.

There should be something that indicates the
kind and volume of contributions, sure. Like
that many mails, that many bug reports, and
so on. But instead of trying to compute a ranking
from it, I would prefer a randomized order, with
the kind and volume of contributions listed for
each person. Maybe with some hall of fame
into which the major contributors can be voted.
Somehow I feel that the social issues should not
be tackled with a purely technical solution.
 

After watching my spouse grade her students' papers, I think in 
the end there is always a necessary fudge factor involved in anything 
that effectively looks like grading.  That fudge factor might push 
someone either up or down.  For example, someone might come in late in a 
beta cycle with a key patch, and do so quickly, promptly, and 
correctly.  Someone would have to apply judgement for an 
appropriate recategorization, perhaps the person doing the release?

cheers,
 Roland
 

-Eric.



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: @author tags

2004-03-15 Thread Eric Johnson
At the risk of adding fuel to an unproductive discussion, I thought I'd 
throw in my comments:

Legal:

   * IANAL; however, it strikes me that there is at least some small
     legal exposure in the @author tags.  As a contributor of sorts,
     but not an official committer, there are certain documents that
     I/my company need not sign with respect to my contributions to
     the ASF.  The @author tag, unfortunately, adds some ambiguity back
     into the equation, insofar as I *could* appear to be a significant
     contributor even though the same level of paperwork may not be
     associated with my contributions.
   * Based on what I've read, it would appear that certain unnamed
     three-letter companies are creating allegations based on the most
     superficial of analyses of code.  Maybe this is the ASF's way of
     protecting the innocent from spurious subpoenas?  I'll grant that
     it is a very narrow margin of defense, nothing more, although one
     that apparently would defeat said unnamed three-letter companies.
Social:

   * Some people contribute merely by monitoring the mailing list and
 perhaps testing, sending in a wire log that helps to find a bug. 
 Do we want to recognize those people as well?
   * Some contributions have been in the form of one-line patches
 that are not in unidiff format, and do not have an associated
 Bugzilla entry.  Do we recognize them?
   * Since the @author tag is certainly at the moment somewhat
 arbitrary in its actual recognition, its continued use may
 currently discourage contribution to the extent that people feel
 like the community is short-changing their contribution.

Having noted some of the social issues, I do have to say that this 
mailing list has been very friendly and welcoming, and my compliments to 
everyone for keeping it that way.

While not an entirely accurate measure, I have an urge to suggest a 
mathematical and statistical recognition metric, combining:

   * # of emails written to developer list
   * # of patches submitted
   * # of responses to bugzilla issues, wherein said person is not the
 reporter of the particular issue.
   * # of bugzilla issues reported, wherein reporting does not result
 in an INVALID categorization
   * negative points for each INVALID Bugzilla report (people wasting
 time and energy on behalf of the group)
   * Other contributions?
My gut instinct is that some of these contributions should be weighted 
more than others, but seeing as this is a quagmire, I'm not sure I'd 
want to suggest what that weighting would be - at least not yet.  The 
resulting number could be used to generate a ranking, and possibly a 
weighting of each contributor.

With each release, the tally should be accumulated for some time period 
prior to that release (6 months?), and those people should be recognized 
in the release notes, and perhaps also on the web site.

Such a metric would at least be an improvement over what we have now.  
It would at least recognize people who do nothing more than track down 
bugs.  It would also give us some visibility into the size and 
involvement of the HttpClient community.

Darts welcome!

-Eric.

Michael Becke wrote:

The ASF has recently recommended that we discontinue use of @author 
tags.  When first starting out I always enjoyed seeing my name in 
lights, though I do agree with the ASF's opinion on this matter.  If 
we come to a consensus to remove @authors I suggest that we remove 
them from all existing code, as well as leave them out of new 
additions.   Any comments?

Mike

Begin forwarded message:  ASF Board Summary for February 18, 2004

snip

  - author tags are officially discouraged. these create difficulties in
establishing the proper ownership and the protection of our
committers. there are other social issues dealing with collaborative
development, but the Board is concerned about the legal 
ramifications
around the use of author tags

  - it is quite acceptable and encouraged to recognize developers' 
efforts
in a CHANGES file, or some other descriptive file which is 
associated
with the overall PMC or release rather than individual files.
snip

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: streaming request body

2004-02-24 Thread Eric Johnson
John Keyes wrote:

For (a), Oleg's response is correct. You might easily be confused, in 
the sense that HttpClient's API inverts the control. It is not that 
you write to an OutputStream to send your data; it is that you 
provide HttpClient with an InputStream, and it reads that stream 
and sends the data. HttpClient is designed to accommodate your 
concern, and if your configuration is correct (as per the examples), 
it will not buffer the entire contents of your InputStream, but 
rather read it and send it in small chunks. As another post points 
out, you may still have to buffer what you're sending to *disk*, but 
not to memory.


So you think buffering all requests to disk to support streaming is an 
acceptable solution? If I am dealing with XXX,000 of requests that 
sure as hell would suck with all the disk I/O going on. Does this not 
suggest that there is a problem with the architecture?
Many on the mailing list are aware of architectural limitations in the 
2.0 design of HttpClient. This was a conscious compromise that we made 
many months ago to live within certain constraints, with the key 
trade-off being a final version of the 2.0 implementation sooner. This 
was apparently a good choice for you too, in that you've started using it 
actively!

This very issue you raise is on the list of possible tasks to address 
for the 3.0 release, as per someone else's post (see:
http://nagoya.apache.org/eyebrowse/[EMAIL PROTECTED]msgNo=6015
). See bug http://nagoya.apache.org/bugzilla/show_bug.cgi?id=26070 
referred to in that post.

You should read the discussion there, as it also describes an 
implementation approach to get around the specific limitation. If you'd 
like to second the request to get the change in for 3.0, or provide 
additional work-arounds, or add to the discussion, you might look there. 
A patch to address the issue would be welcome, I'm sure.


As for (b), this is again under your control via 
HttpMethod.getResponseBodyAsStream(). As with (a), you can also 
invoke HttpClient such that it does cache the entire contents 
(HttpMethod.getResponseBodyAsString() ).

In both cases, it is possible to get the behavior that you desire.


No, it is not. Again, think of XXX,000 of requests. 
I have thought of many requests. I still maintain it is possible. Your 
argument may be that it requires more coding on your part for it to work 
well, or that it requires massive disk caching, which could dramatically 
affect performance. I don't disagree.



Connection pooling is only part of the concern. HttpClient supports 
HTTP 1.1 persistent connections. It doesn't expose the underlying 
socket's InputStream and OutputStream. If it did, it could not ensure 
that persistent connections work properly.


I still don't see the problem. The OutputStream and InputStream can be 
wrapped so there is no loss of control. Why do you think control would 
be lost?
We're saying the same thing here. I'm saying they're not exposed, and 
you're saying they could be wrapped, thus hiding them. Since they are 
already hidden, your issue would seem to be a problem with *how* they 
are exposed (or not). Again, comments and feedback or a patch for bug 
26070 would be welcome.

-Eric.



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: streaming request body

2004-02-23 Thread Eric Johnson
John,

Two separate questions:
a) sending a large post/put request, without buffering it in memory.
b) reading a large response to a request.
For (a), Oleg's response is correct. You might easily be confused, in 
the sense that HttpClient's API inverts the control. It is not that you 
write to an OutputStream to send your data; it is that you provide 
HttpClient with an InputStream, and it reads that stream and sends the 
data. HttpClient is designed to accommodate your concern, and if your 
configuration is correct (as per the examples), it will not buffer the 
entire contents of your InputStream, but rather read it and send it in 
small chunks. As another post points out, you may still have to buffer 
what you're sending to *disk*, but not to memory.

As for (b), this is again under your control via 
HttpMethod.getResponseBodyAsStream(). As with (a), you can also invoke 
HttpClient such that it does cache the entire contents 
(HttpMethod.getResponseBodyAsString() ).

In both cases, it is possible to get the behavior that you desire.
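To make (a) and (b) concrete, here is roughly what a chunked, unbuffered
round trip looks like (an untested sketch; the URL and file name are
placeholders - see the ChunkEncodedPost example in CVS for the real
thing):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.PostMethod;

public class StreamedRoundTrip {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        PostMethod post = new PostMethod("http://localhost:8080/upload");
        // (a) hand HttpClient an InputStream; with chunked encoding it
        // never needs the whole body in memory to compute a length.
        post.setRequestBody(new BufferedInputStream(new FileInputStream("big.bin")));
        post.setRequestContentLength(PostMethod.CONTENT_LENGTH_CHUNKED);
        try {
            client.executeMethod(post);
            // (b) read the response incrementally instead of calling
            // getResponseBodyAsString(), which buffers everything.
            InputStream in = post.getResponseBodyAsStream();
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) {
                // process each chunk as it arrives
            }
        } finally {
            post.releaseConnection();
        }
    }
}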

Connection pooling is only part of the concern. HttpClient supports HTTP 
1.1 persistent connections. It doesn't expose the underlying socket's 
InputStream and OutputStream. If it did, it could not ensure that 
persistent connections work properly.

-Eric.

John Keyes wrote:

Guys,

A colleague pointed out to me that this does not in fact resolve the 
situation. The solutions pointed out allow me to read the attachment 
as a stream. The contents are still held in memory prior to writing it 
on the wire. To fully support this you would need access to the 
OutputStream.

If we could pass a HttpClient to the HttpMethod then we could get 
access to the output stream via the getRequestOutputStream method.

I don't understand the connection pooling argument. I thought it 
should be a user preference whether to have connection pooling.

Any ideas on this?
-John K
On 23 Feb 2004, at 13:02, Kalnichevski, Oleg wrote:

John,

HttpClient's entity enclosing methods (POST, PUT) do support content 
streaming when (1) the content length does not need to be 
automatically calculated or (2) chunk-encoding is used

Please refer to the following sample applications for details:

Unbuffered post:

http://cvs.apache.org/viewcvs.cgi/*checkout*/jakarta-commons/httpclient/src/examples/UnbufferedPost.java?content-type=text%2Fplain&rev=1.2.2.1

Chunk-encoded post:

http://cvs.apache.org/viewcvs.cgi/*checkout*/jakarta-commons/httpclient/src/examples/ChunkEncodedPost.java?content-type=text%2Fplain&rev=1.4.2.1

Hope this helps

Oleg

-Original Message-
From: John Keyes [mailto:[EMAIL PROTECTED]
Sent: Monday, February 23, 2004 13:54
To: [EMAIL PROTECTED]
Subject: streaming request body
Hi,

I notice you have separated out the functions of the connection and the
content creation. So the code must be something like:

HttpClient client = new HttpClient( url );
HttpMethod method = new GetMethod();
method.setRequestHeader( ... );
method.setRequestBody( ... );
client.execute( method );

If I want to send a large attachment and I don't want it all to be in
memory then I can't do it. The issue is that you have to write your
data to the HttpMethod. The HttpMethod doesn't know where to write
this data until you call execute and pass the client, which has the
connection to write to. So there isn't really a way around this, because
of the separation of the connection from the HttpMethod.

So my question is: is there a way to stream the request body rather
than having to store the request in memory prior to writing it on the
wire?
Thanks,
-John K
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

Re: HttpClient 2.0 final release in February?

2004-02-13 Thread Eric Johnson
The time seems right.  I see no reason to wait!

-Eric.

Kalnichevski, Oleg wrote:

Folks,
I feel it is time we cut the final release. I am convinced we should get the long
overdue HttpClient 2.0 release out and fully concentrate on getting HttpClient 3.0
ready for the first alpha release. Simultaneously we should pursue the promotion to
the Jakarta level, which will require quite a bit of effort. The time is right, IMO.

Does anyone see any reason to wait with the final release?

Oleg

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: DO NOT REPLY [Bug 26382] - Update license terms

2004-02-04 Thread Eric Johnson
Knowing the speed with which corporate legal departments can move, I 
would hope for keeping the old license for the 2.0 release.

I think it would be unfair and surprising for clients to discover that they 
have to do an unexpected legal review just for the sake of 
eliminating a few bugs.  In the case of my company, that would probably 
keep us using 2.0rc3 for a few extra releases.

Then again, I'll have to check with legal...

On the other hand, given the small bandwidth for developers here, 
perhaps it is just better to make the change, rather than messing around 
with the PMC.

-Eric.

[EMAIL PROTECTED] wrote:

DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG 
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
http://nagoya.apache.org/bugzilla/show_bug.cgi?id=26382.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND 
INSERTED IN THE BUG DATABASE.

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=26382

Update license terms





--- Additional Comments From [EMAIL PROTECTED]  2004-02-03 03:00 ---
I'm wondering if we should change licenses mid-release.  I do not know if
2.0 causes any problems for users, but it seems like a pretty big change
for this release.  Perhaps we can wait until after 2.0 is released.  How
does everyone feel about this?

Mike

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: I'm a big loser

2004-01-26 Thread Eric Johnson
Dave,

*You* have to generate the wire log.

See this link http://jakarta.apache.org/commons/httpclient/logging.html 
that Oleg pointed you to.

-Eric.

D Alvarado wrote:

Again, here is my noviceness coming out, but where would I find this
wirelog of the HTTP session?  I am running Apache Web Server 1.27
with WebLogic 5.1, sp 12, if that's useful.
 Begin Original Message 

From: Oleg Kalnichevski [EMAIL PROTECTED]
Sent: Sat, 24 Jan 2004 11:42:12 +0100
To: Commons HttpClient Project
[EMAIL PROTECTED]
Subject: Re: RE: I'm a big loser
Dave,

Realm is a set of documents/URLs protected by the same authentication
scheme and backed by the same user registry. You may leave the realm
parameter null if you do not know what your authentication realm name
is. A null realm basically means 'any realm'. In very security-cautious
applications you should probably avoid sending your credentials to just
any realm, but if you trust the target host, the realm does not really
matter too much, as long as you do not have to authenticate against
multiple realms.

I am afraid I can't be of any further help unless I get to see the wire
log of the HTTP session in question. Feel free to strip or obfuscate all
the information you deem sensitive: host names, usernames, passwords,
upload file content, etc. I am only interested in the request/response
headers.

Oleg
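In code, the null-realm advice looks roughly like this (an untested
sketch; host, user name, and password are placeholders, and the exact
setCredentials overload available depends on your 2.0 build):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.methods.GetMethod;

public class NullRealmAuth {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        // A null realm means: use these credentials for whatever realm
        // the host challenges us with.
        client.getState().setCredentials(null,
            new UsernamePasswordCredentials("user", "secret"));
        GetMethod get = new GetMethod("http://host.example.com/protected/");
        try {
            client.executeMethod(get);
        } finally {
            get.releaseConnection();
        }
    }
}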
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Connection Reset Error

2004-01-26 Thread Eric Johnson
There may be nothing you can do.  The underlying OS may simply choose to 
close an idle connection after a certain amount of time.  Seems odd to 
me, but possible.  An HTTP proxy server, for example, is free to close a 
connection after a certain amount of time with no activity.

You might try a telnet, FTP, or some such request from the command line, 
and see if it also gets reset after 15 minutes of no activity on the HP 
box.

I don't suppose it is possible for you to connect your application 
differently?  For example, send in your requests, and have the server 
respond immediately with an acknowledgement that the request was 
received.  On another connection, you repeatedly reconnect with a 
request that waits 30-60s at most on the server to see if the activities 
have finished.  Once an activity succeeds, the server returns some 
indication on this connection of which activity finished.  Then you can 
call back from your client application to get the results.

A little trickier than what you're doing, but not dramatically so.  
Perhaps you already thought of it, and were just wondering if there was 
a way to avoid the work?  It would have the benefit that it could 
actually work consistently and reliably through proxy servers and routers 
that might otherwise drop an idle connection.
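
To make the idea concrete, here is a rough, untested sketch of the 
client side; the /submit and /poll servlets, the job ID convention, and 
the "done" answer are all invented for illustration:

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class BatchPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        // submit the batch job; the servlet acknowledges immediately
        // and returns a job ID instead of blocking for the result
        GetMethod submit = new GetMethod("http://server.example.com/submit");
        client.executeMethod(submit);
        String jobId = submit.getResponseBodyAsString().trim();
        submit.releaseConnection();

        // each poll is a short-lived request, so no connection ever
        // sits idle long enough for the OS or a proxy to reset it
        while (true) {
            GetMethod poll = new GetMethod(
                    "http://server.example.com/poll?id=" + jobId);
            client.executeMethod(poll);
            String answer = poll.getResponseBodyAsString().trim();
            poll.releaseConnection();
            if ("done".equals(answer)) {
                break;
            }
            Thread.sleep(30 * 1000); // wait 30s between polls
        }
    }
}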

-Eric Johnson.

David Webb wrote:

I have written a program that uses HttpClient to call servlets that do batch 
jobs and wait for their return...usually no more than 15 minutes.  I have the 
Server timeout on the Web/App Server that the servlets reside on set to 1 hour 
or 3600 seconds.  I have tested this in 2 environments using HttpClient to call 
the Servlets that are in the same environment.

1) Windows 2K / JDK1.4.X - Works Fine, calls servlet, receives return code 8-9 
minutes later and exits without error

2) HP-UX / JDK1.4.X - Runs for about 15 minutes then throws the following 
exception:

Exception thrown:
java.net.SocketException: Connection reset
   at java.net.SocketInputStream.read(SocketInputStream.java:168)
   at java.net.SocketInputStream.read(SocketInputStream.java:182)
   at org.apache.commons.httpclient.HttpConnection$WrappedInputStream.read
(HttpConnection.java:1377)
   at java.io.FilterInputStream.read(FilterInputStream.java:66)
   at java.io.PushbackInputStream.read(PushbackInputStream.java:120)
   at org.apache.commons.httpclient.HttpParser.readRawLine
(HttpParser.java:109)
   at org.apache.commons.httpclient.HttpParser.readLine
(HttpParser.java:135)
   at org.apache.commons.httpclient.HttpConnection.readLine
(HttpConnection.java:1037)
   at org.apache.commons.httpclient.HttpMethodBase.readStatusLine
(HttpMethodBase.java:1842)
   at org.apache.commons.httpclient.HttpMethodBase.readResponse
(HttpMethodBase.java:1611)
   at org.apache.commons.httpclient.HttpMethodBase.execute
(HttpMethodBase.java:997)
   at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry
(HttpMethodDirector.java:316)
   at org.apache.commons.httpclient.HttpMethodDirector.executeMethod
(HttpMethodDirector.java:172)
   at org.apache.commons.httpclient.HttpClient.executeMethod
(HttpClient.java:468)
   at org.apache.commons.httpclient.HttpClient.executeMethod
(HttpClient.java:355)
   at com.bac.amg.acs.HTTPBatch.execute(HTTPBatch.java:157)
   at com.bac.amg.acs.HTTPBatch.main(HTTPBatch.java:75)
Is there anything I can do in HttpClient to prevent this from happening?

Thanks.

--
Sincerely,
David Webb
Vice-President
Hurff-Webb, Inc.
http://www.hurff-webb.com
(904) 861-2366
(904) 534-8294 Mobile




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Recoverable error question

2004-01-26 Thread Eric Johnson
Oleg Kalnichevski wrote:

Hi Tim,

See my comments in-line below
 

[snip]

My challenge is that the bank processes each GET request, even if it has 
the same parameters as a previous request (yes, I know that GETs should 
be idempotent but I don't have a choice).  I can't charge people twice. 
 I want to retry the request, though, if I know that the bank hasn't 
received it (DNS failure, connection refused, connection timeout), but I 
CAN'T retry it if they MAY have received the request (e.g. response read 
timeout, 500 error, etc.).

   

[snip]

The problem you are having should better be addressed at the application
level, not at the transport level. 
 

Oleg is precisely correct here.  The *only* way you can guarantee that 
you aren't duplicating requests is to address the problem at a higher 
level.  Perhaps put a transaction id or message id on each GET 
request, and the server will detect duplicate requests.
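
Something along these lines on the client side (untested; the "txnId" 
parameter name is invented, and the server has to agree to process a 
given ID at most once):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class OnceOnlyGet {
    public static void main(String[] args) throws Exception {
        // build a unique ID; java.util.UUID does not exist yet on 1.4,
        // so improvise with the clock and a random number
        String txnId = Long.toHexString(System.currentTimeMillis())
                + "-" + Long.toHexString((long) (Math.random() * Long.MAX_VALUE));
        GetMethod get = new GetMethod(
                "http://bank.example.com/charge?amount=100&txnId=" + txnId);
        HttpClient client = new HttpClient();
        int status = client.executeMethod(get);
        // on any doubt (timeout, 5xx), it is now safe to retry with the
        // SAME txnId: the server will reject the duplicate
        System.out.println("Status: " + status);
        get.releaseConnection();
    }
}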

Failing that, you would need to make sure that the two machines were 
connected to the same network hub, so that communications failures of 
any sort help guarantee that *both* machines stop communicating with the 
outside world, and thus the server would recognize the problem as well.  
And that is still not a guarantee.

You could let us know which bank is in question, so the rest of us know 
not to leave our money with them, since they seem to have missed an 
important lesson in CS about ACID transactions.  For that matter, their 
investors probably want to know.  OK, don't tell us, since it probably 
violates some agreement you signed with them, but I sure would love to 
know

There are so many better solutions that come to mind for this kind of 
functionality, like JMS based solutions, which offer support 
functionality like guaranteed delivery.  Of course, my company sells 
such a product, so I should stop now before I cross the line into 
advertisement.

-Eric.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: JIRA vs Bugzilla issue

2004-01-21 Thread Eric Johnson
Well, I'm not sure how I would recommend going on this decision.  So 
here is my attempt at providing a slightly biased (in favor of Bugzilla) 
view of the facts.

I looked at nagoya.apache.org, and checked out both the Scarab and Jelly 
installations running there.

Random observations:

   * Bugzilla is designed for a flat product listing.  Currently
 Apache Commons tools are listed as a component of Apache Commons,
 rather than a top-level project like Commons-HttpClient.  Were
 this changed instead, all of the complaints about not being able
 to establish coherent milestones and versions would go away.  As
 it is, it seems unfair to compare to JIRA with respect to this
 issue, because migrating to JIRA will apparently make HttpClient a
  top level project - thus comparing apples to oranges.
   * Apache appears to be running Bugzilla 2.14.2.  Bugzilla is up to
 2.16.4 for their stable build, and 2.17.6 on their testing
 branch.  We use 2.16.4 at my office and I have no complaints with
 it.  I know that there are some nice but subtle improvements with
 the newer release(s).
   * JIRA appears to be missing a nice feature of (the newer) Bugzilla,
 namely that when examining a bug from a list of bugs, you can
 click Next and Previous to see other bugs, rather than having to
 go back to the list view.  In Mozilla, this actually enables an
 extra toolbar with next and previous buttons.
   * JIRA has a significantly cleaner look and feel, most definitely.
   * JIRA appears to have links to specific responses to issues/bugs. 
 Bugzilla doesn't have this - you can only link to the bug as a
 whole, so far as I know.
    * Scarab doesn't let an unregistered user browse the reports.  This
  pretty much shoots it down for use in an open source project, for
 me.  I wonder if that is just the way that Apache has it configured.
   * Scarab appears to be much stricter about its access controls.  I'm
 not sure whether the extra refinement just gets in the way.
   * As far as the notification emails that JIRA sends out versus the
 ones that Bugzilla sends, I like the ones that Bugzilla sends
 better.  Far more compact (again a configuration issue?)

My suggestion would be to also investigate the possibility of HttpClient 
being promoted (in Bugzilla only) to a project rather than a component 
of commons, and also see about having the Bugzilla version updated.

-Eric.

Kalnichevski, Oleg wrote:

Is there an automatic way to move the current issues over to JIRA? The open
bugs are important, but the closed ones also contain a wealth of
information.
   

I do not have all the details, but JIRA is believed to provide some sort of an automated migration path for existing Bugzilla installations. Anyways, if ALL existing bug reports cannot be retained, in my opinion, that would completely defeat the whole migration idea. 

I'll double-check the possibility of having existing reports migrated with the infrastructure folks, before the final decision is made.

I'll keep you posted.

Oleg

-Original Message-
From: Rezaei, Mohammad A. [mailto:[EMAIL PROTECTED]
Sent: Wednesday, January 21, 2004 15:24
To: 'Commons HttpClient Project'
Subject: RE: JIRA vs Bugzilla issue
Is there an automatic way to move the current issues over to JIRA? The open
bugs are important, but the closed ones also contain a wealth of
information.
Thanks
Moh
-Original Message-
From: Oleg Kalnichevski [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, January 21, 2004 6:41 AM
To: Jakarta Commons HttpClient mailing list
Subject: Re: JIRA vs Bugzilla issue

Folks,
What say you, do we migrate HttpClient issue tracking to JIRA or do we stay
with Bugzilla? Please let me know your opinion.
Oleg

On Tue, 2004-01-13 at 20:29, Oleg Kalnichevski wrote:
 

Shall I apply? Any strong opinions to not migrate to JIRA?

Oleg

On Tue, 2004-01-13 at 20:01, Michael Becke wrote:
   

Yes, I've been following that discussion as well.  I'm definitely
interested in making the switch to JIRA.  Bugzilla has served us pretty 
well, but I find it somewhat unwieldy at times.

Mike

On Jan 13, 2004, at 11:44 AM, Kalnichevski, Oleg wrote:

 

There's currently a rather animated discussion 'JIRA vs Bugzilla'
going on the commons-dev mailing list. Personally I do not have a 
strong opinion on this issue. There's one thing, though, that makes me 
bring it up here: we are facing the need to massively restructure 
Bugzilla content related to HttpClient due to the change of the next 
release version from 2.1 to 3.0. (Funny enough, the way versioning is 
handled in Bugzilla is being one of the most frequently mentioned 
motivators for migration to JIRA). My point here, if we ever wanted to
   

 

migrate to JIRA, now would be the right moment.

Let me know what you think (let us not turn it into a religious 
war
currently being waged on the commons-dev, though)

Oleg


Re: PostMethod.setFollowRedirects(true); - do nothing

2004-01-20 Thread Eric Johnson
Jean-Remi,

You should re-read Oleg's previous response.  Redirect on a POST request 
is *not supported*, no matter how many times you set the flag.

You will have to follow the redirect yourself, as Oleg suggests 
(http://jakarta.apache.org/commons/httpclient/redirects.html).

This stems from fundamental architectural limitations of the current 
HttpClient 2.0 design.
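
In case it helps, a bare-bones sketch of following it yourself 
(untested; cookies already in the client's state get re-sent 
automatically on the second request):

import org.apache.commons.httpclient.Header;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpStatus;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.methods.PostMethod;

public class PostRedirect {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        PostMethod post = new PostMethod("http://www.example.com/etape2h.cfm");
        int status = client.executeMethod(post);
        String redirectLocation = null;
        if (status == HttpStatus.SC_MOVED_TEMPORARILY
                || status == HttpStatus.SC_MOVED_PERMANENTLY
                || status == HttpStatus.SC_SEE_OTHER) {
            Header location = post.getResponseHeader("location");
            if (location != null) {
                redirectLocation = location.getValue();
            }
        }
        post.releaseConnection();
        if (redirectLocation != null) {
            // strictly only 303 turns a POST into a GET, but in practice
            // browsers issue a GET for 302 as well, so we do the same
            GetMethod get = new GetMethod(redirectLocation);
            client.executeMethod(get);
            System.out.println(get.getStatusLine());
            get.releaseConnection();
        }
    }
}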

-Eric.

JEAN REMI LECQ wrote:

Somebody help !

I follow all directive in
http://jakarta.apache.org/commons/httpclient/redirects.html...
But my PostMethod can still not follow redirection !! :-(

My code :
*
Cookie cookie1 = new Cookie("www.XXX.com", "CFID", cookies[0].getValue(),
"/", null, false);
Cookie cookie2 = new Cookie("www.XXX.com", "CFTOKEN", cookies[1].getValue(),
"/", null, false);
parent.getHttpClient().getState().addCookie(cookie1);
parent.getHttpClient().getState().addCookie(cookie2);
methodePOST = new
PostMethod("http://www.XXX.com/XXX/etape2h.cfm?ref=123456");
methodePOST.setFollowRedirects(true);
methodePOST.setHttp11(false);
System.out.print(methodePOST.getURI() + " " + methodePOST.getPath());
parent.getHttpClient().executeMethod(methodePOST);
**
The error message :
***
20 janv. 2004 19:58:15 org.apache.commons.httpclient.HttpMethodBase
processRedirectResponse
INFO: Redirect requested but followRedirects is disabled
***
here the headers of the redirection :
*
POST http://www.XXX.com/XXX/etape2h.cfm?ref=123456 HTTP/1.0   --- URL
redirection
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
application/x-shockwave-flash, application/vnd.ms-powerpoint,
application/vnd.ms-excel, application/msword, */*
Referer:
http://www.XXX.com/mirror/listehotels.cfm?Vilarr=MRS&DatDep=210104&DatArr=2
30104&Adultes=2&ENFANTS=0&BEBES=0&ref=123456
Accept-Language: fr
Content-Type: application/x-www-form-urlencoded
Proxy-Connection: Keep-Alive
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)
Host: www.XXX.com
Content-Length: 223
Pragma: no-cache
Cookie: CFID=41696517; CFTOKEN=19746368
*


I need some ideas...

Jean-Rémi



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: upload large files- Filepart

2004-01-06 Thread Eric Johnson
This problem seems like it is the perfect candidate for the 
ExpectContinueMethod.setUseExpectHeader() function.  Isn't this exactly 
the scenario for which this header was intended?
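
For example (untested, host and file names invented), something like 
this ought to make the client wait for the server's "100 Continue", so 
that an authentication challenge arrives before any of the large body 
has been streamed:

import java.io.File;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.methods.MultipartPostMethod;

public class UploadWithExpect {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        client.getState().setCredentials(null, "upload.example.com",
                new UsernamePasswordCredentials("user", "secret"));
        MultipartPostMethod post =
                new MultipartPostMethod("http://upload.example.com/upload");
        post.addParameter("file", new File("big-file.bin"));
        // ask for "100 Continue" before the body goes on the wire;
        // note this only works against HTTP/1.1 servers
        post.setUseExpectHeader(true);
        int status = client.executeMethod(post);
        System.out.println("Status: " + status);
        post.releaseConnection();
    }
}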

-Eric

Oleg Kalnichevski wrote:

Siddhartha,

I believe the solution to this problem is trivial. All it takes is
checking for availability of a response from the target server prior to
sending each consecutive chunk of request body. A premature response
from the target server detected before the request body has been
transmitted in its entirety most likely signifies a problem (such as
authentication failure), which should cause the request to be aborted
and the connection force-closed once the response is read.
I'll happily provide a fix for this problem, but currently there are
more pressing issues that must be addressed first. Besides, it is
already too late to incorporate the fix into 2.0 release, so it will
have to wait until next release (2.1). You are welcome to work on a
patch, if you feel like to, or you can wait until the problem makes it
to the top of our priority list (which may take a while) to be fixed in
its due time
Cheers

Oleg

On Sat, 2004-01-03 at 21:34, Sid Subr wrote:
 

from looking at the filepart code seems that this part
would be creating a problem which makes the code not
recoverable from the server closing the connection
when authentication fails...
Filepart.java for httpclient
sendData(){

create a new byte array of size 4K

while there is stuff to be read from the file, send it
out to the outputstream
finally close the stream

}

I know the while loop is the one that chokes when the
connection is closed as the  httpclient has not yet
finished writing the whole file (the release
connection is also not called, which might help in the
retry) and the IOException from that write is sent all
the way up and since it is not an
HttpRecoverableException the whole thing does not even
go to the point of trying to send it out the next time
with credentials.. how do you propose to change this?
The only way I see is to send part of the file to the
server and when the challenge comes and the connection
is closed start sending the file in parts and hope it
will not get challenged.. otherwise we might be stuck
in the sending (a max of three times specified in the
MethodRetryHandler) ..
any input would be helpful..

Sid

__
Do you Yahoo!?
Protect your identity with Yahoo! Mail AddressGuard
http://antispam.yahoo.com/whatsnewfree
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
   



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Logging problem

2003-12-31 Thread Eric Johnson
Chris,

Looks suspiciously like a ClassLoader problem, and not directly related 
to HttpClient.  I suggest looking on the net for compatibility issues 
with JRun/ColdFusion MX and commons-logging.

Also see the logging instructions for HttpClient here: 
http://jakarta.apache.org/commons/httpclient/logging.html, although if 
you're using Log4J, you'll have to adopt the instructions as 
appropriate.  I suggest searching the HttpClient email archives 
(http://nagoya.apache.org/eyebrowse/SummarizeList?listId=128) for Log4J 
to see how other people have solved this.

-Eric Johnson

Lomvardias Christopher wrote:

When attempting to run the example source code at the bottom of
http://jakarta.apache.org/commons/httpclient/tutorial.html within
JRun/ColdFusion MX, I get the following error:
[1]org.apache.commons.logging.LogConfigurationException:
java.lang.ClassCastException
at
org.apache.commons.logging.LogFactory.newFactory(LogFactory.java:558)
at
org.apache.commons.logging.LogFactory.getFactory(LogFactory.java:345)
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:390)
at
org.apache.commons.httpclient.HttpClient.clinit(HttpClient.java:100)
I should note that JRun/ColdFusion MX uses Log4J, so I assume there is some
interaction going on here that is causing the problem.
I'd welcome any suggestions.

Thanks,

Chris



 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: [HttpClient] Refactor get/setRequestHeader(Content-Type,..)

2003-12-29 Thread Eric Johnson
Ugh - cross-posting.  Seems like this question was meant for 
httpclient-dev, so I've included that

I think the answer will come back from the regulars on the httpclient 
dev list that the long term intent is to split the request, and 
response parts of the HttpMethod interface into distinct pieces.  As 
part of those new interfaces, your suggestions certainly make sense.  
Unfortunately, adding to the existing HttpMethod interface *could* break 
existing code that doesn't inherit from HttpMethodBase (although as a 
practical manner, I'm not sure that anyone out there could realistically 
implement HttpMethod without extending HttpMethodBase, part of its 
architectural flaw).

Defining the constants in some place makes sense, and perhaps those are 
good utility functions?  I think there might be some other subtleties 
going on here, but I've not really considered this part of the code before.
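
For what it's worth, the interim workaround is easy enough to layer on 
top of the existing interface; a sketch of the sort of utility Gary 
describes (nothing here is HttpClient API, just a wrapper around it):

import org.apache.commons.httpclient.Header;
import org.apache.commons.httpclient.HttpMethod;

public final class HeaderUtils {
    public static final String CONTENT_TYPE = "Content-Type";

    private HeaderUtils() {
    }

    // set the Content-Type request header on any method
    public static void setContentType(HttpMethod method, String contentType) {
        method.setRequestHeader(CONTENT_TYPE, contentType);
    }

    // read back the Content-Type request header, or null if unset
    public static String getContentType(HttpMethod method) {
        Header header = method.getRequestHeader(CONTENT_TYPE);
        return header == null ? null : header.getValue();
    }
}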

I think various HttpClient committers are on vacation until Jan., so I 
wouldn't expect a more complete response before then.

-Eric.

Gary Gregory wrote:

Hello HttpClient,

For our code which uses HttpClient, I find myself defining constants and
methods in our code for things like Content-Type header handling. I am
surprised not to find such a constant in HttpClient in a public place. (It
is defined in multipart.Part as protected).
I also see a lot of getRequestHeader(Content-Type) and
setRequestHeader(Content-Type, ...).
This is seem like a good opportunity for a refactoring, in this case to
getContentType() and setContentType(String).
Is there any reasons these methods do not exist? 

I am a committer on [lang] and [codec] (also dabbling in [vsf]) and can help
out directly or via patches if the group is interested in moving in this
direction.
Thanks for reading,
Gary
 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Connect to Apache WEBDAV and send back XML

2003-12-16 Thread Eric Johnson
I too use the HttpClient library to connect to a WebDAV server.

I suspect what you want to do is get the webdavlib.jar file from the 
Jakarta Slide project.  The library in the Slide project includes 
support for all of the WebDAV & DeltaV extensions to HTTP.  I've worked 
on both libraries to help ensure that they interoperate.  One note, 
though - the Slide library expects to work with HttpClient 2.0rc2, not 
the latest trunk release.

If I had to guess, you're attempting to get the contents of a folder, 
which requires the PROPFIND WebDAV request.  Check out the 
PropFindMethod class in that library.
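
From memory (so treat the exact signatures with suspicion), a depth-1 
PROPFIND against a collection looks roughly like this; the URL is 
invented, and the response comes back as a 207 multistatus XML document 
describing each member:

import org.apache.commons.httpclient.HttpClient;
import org.apache.webdav.lib.methods.DepthSupport;
import org.apache.webdav.lib.methods.PropFindMethod;

public class FolderListing {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        // depth 1 = the collection itself plus its immediate members
        PropFindMethod propFind = new PropFindMethod(
                "http://localhost:8080/webdav/folder/", DepthSupport.DEPTH_1);
        client.executeMethod(propFind);
        System.out.println(propFind.getResponseBodyAsString());
        propFind.releaseConnection();
    }
}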

-Eric Johnson.

J H wrote:

Hi.  I found the HTTP-commons library and I was absolutely ecstatic!  
I would appreciate some help though...I'm not sure how to coax it to 
send data other than html...I've tried to set the request header to 
method.setRequestHeader(Content-Type,content=\text/xml\); and it 
still sends back the response in html format...

What I'm trying to do is create a java applet (client-side) to access 
the Tomcat Webdav server.  I have successfully connected to the WEBDav 
server with Digest authentication, but it returns the results in HTML 
which are rather cumbersome to parse.  I would really appreciate ANY 
ideas.

--Jeff

_
Browse styles for all ages, from the latest looks to cozy weekend wear 
at MSN Shopping.  And check out the beauty products! 
http://shopping.msn.com

-
To unsubscribe, e-mail: 
[EMAIL PROTECTED]
For additional commands, e-mail: 
[EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Make HttpClient pick Basic Auth over NTLM?

2003-12-04 Thread Eric Johnson
I'm not sure that HttpClient should do anything different.

According to section 4.6 of RFC 2617, "A user agent MUST choose to use 
the strongest auth-scheme it understands and request credentials from 
the user based upon that challenge."

Since Basic is pretty darn weak, I'd say NTLM wins out every time.  Is 
this a point on which HttpClient should have an option to override the 
RFC mandated behavior?  As somewhat of a fanatic about security, my take 
is that you should be forced to do the right thing, and if you really 
want to, the source is there for you to modify.
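
As an aside, the error quoted at the bottom of your message is telling 
you that NTLM needs NTCredentials (which carry a host and domain) rather 
than plain UsernamePasswordCredentials.  Roughly (host and domain values 
invented):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.NTCredentials;
import org.apache.commons.httpclient.methods.GetMethod;

public class NtlmProxy {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        // user name, password, this machine's name, and the NT domain
        client.getState().setProxyCredentials(null,
                new NTCredentials("user", "passwd", "clienthost", "DOMAIN"));
        client.getHostConfiguration().setProxy("10.0.0.2", 80);
        GetMethod get = new GetMethod("http://www.example.com/");
        client.executeMethod(get);
        get.releaseConnection();
    }
}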

-Eric.

anon permutation wrote:

Hi,

I am using a proxy server that supports both NTLM and Basic 
Authentications.  How do I make HttpClient use Basic Auth. instead of 
NTLM?  I am using 2.0-rc2.  Following is my code:

 

HttpClient client = new HttpClient();
HttpMethod method = null;
client.getState().setProxyCredentials(null, new 
UsernamePasswordCredentials("user", "passwd"));

HostConfiguration hc = client.getHostConfiguration();
hc.setProxy("10.0.0.2", 80);
method = new GetMethod(url);
client.executeMethod(method);
byte[] responseBody = method.getResponseBody();
- 

I am getting this error:
Credentials cannot be used for NTLM authentication
Thanks.

_
Browse styles for all ages, from the latest looks to cozy weekend wear 
at MSN Shopping.  And check out the beauty products! 
http://shopping.msn.com

-
To unsubscribe, e-mail: 
[EMAIL PROTECTED]
For additional commands, e-mail: 
[EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: DO NOT REPLY [Bug 24352] - NLTM Proxy and basic host authorization

2003-12-01 Thread Eric Johnson
Oleg,

You're discarding the possibility that HttpClient is approaching 
perfection, and doesn't need much in the way of email commentary, 
because it works so well.

Ah, er, sorry, I just had to day-dream for a moment there.

-Eric.

Oleg Kalnichevski wrote:

What's up, folks? I have never seen HttpClient mailing list so quiet for
so long.
The last week was REALLY rough. I had some really miserable time at
work. But with my project (the one that helps pay my bills) finally back
on track, I can finally turn my attention to HttpClient development. As
of tomorrow patches should start trickling in again.
Oleg

On Mon, 2003-11-24 at 20:34, Kalnichevski, Oleg wrote:

 

I agree. I'll try to come up with another try within a few days (most likely tomorrow)

Oleg

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
   



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: DO NOT REPLY [Bug 24352] - NLTM Proxy and basic host authorization

2003-11-17 Thread Eric Johnson
My take is slightly different (and I wish I had time to implement it)

Start by virtualizing the access to the connection, and then, rather 
than having multiple servers, just have different implementations of a 
virtualized socket interface, for example. Then see to writing test 
cases that look something like this:

# This marks what the server is supposed to receive, note that this is not
# literally what is received, because headers might be sent in a 
different order
# for example.
GET /foo HTTP/1.1
@Host: http://localhost:8080
@Content-Length: 30
@End-Headers
# Note that on content lines, the CRLF (or just LF) should be
# discarded. Instead, CRLF pairs should be explicitly encoded, perhaps
# with %CRLF%? Content should (must?) allow substitutions, for example
# multi-part boundaries. Perhaps do substitution with something like
# %BOUNDARY%
@Content:
Content goes here
# the following would wait for three seconds before sending more
# content...
@Wait: 3000
@Content:
Yet more content here...
HTTP/1.1
# Note, here since the test case knows the response it is supposed to
# send, it can (by and large) simply send it.
@Content:
.

and so on

I spend a lot of time working with XML, so I thought about doing some 
sort of test-framework like the above using XML instead, which would get 
rid of some of the bizarre syntax that I suggest above, but I'm not sure 
whether that makes sense in the context of HttpClient.

My idea would be to take cases where we want to talk to actual servers, 
and replace them with test cases like the above, wherein we could 
mimic (or exactly duplicate) the odd behavior of various servers.

Hopefully this gives someone else an idea

-Eric.

Ortwin Gluck wrote:

[EMAIL PROTECTED] wrote:

Oleg,

I agree, our lack of auth/proxy tests is a continuous source of 
problems. One of our goals for 2.1 should be an effective method for 
testing all of the various combinations of proxy, authentication and 
SSL. Ideally it would be best to make this setup as simple as 
possible. Do you have any thoughts about how we can best accomplish 
this?

Mike


The various authentication methods should be tested against servlets 
in the Test-Webapp. As to proxies, we must implement a couple of tiny 
local servers running on different ports. Like:

TCP 81: Proxy
TCP 82: SSL Proxy
Those servers should be started and stopped by the test fixtures 
(setup / teardown). The servers must be configurable as to which 
authentication method they use. This will also ensure quality of the 
various authentication methods, as currently their test cases are 
somewhat minimalistic. I'd love to hack up some code for the server 
side this week.

Odi



-
To unsubscribe, e-mail: 
[EMAIL PROTECTED]
For additional commands, e-mail: 
[EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: HttpMethodBase.releaseConnection() finished download

2003-11-14 Thread Eric Johnson
Sven,

I've not tried this, but if you send a "Connection: close" header on the 
request, the server should echo that on the response. If that header is 
on the response from the server, then the releaseConnection() function 
will (or at least it should) immediately close the connection, rather 
than consuming the remainder of the body of the response.
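
Untested, but the request side would look something like this:

import java.io.InputStream;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class PartialDownload {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        GetMethod get = new GetMethod("http://www.example.com/big-file.zip");
        // give up keep-alive for this one request; the server should
        // answer with "Connection: close" as well, and then
        // releaseConnection() can close the socket instead of draining
        // the rest of the body
        get.setRequestHeader("Connection", "close");
        client.executeMethod(get);
        InputStream in = get.getResponseBodyAsStream();
        byte[] buffer = new byte[4096];
        int n = in.read(buffer); // read only as much as we actually want
        System.out.println("Read " + n + " bytes, abandoning the rest");
        get.releaseConnection();
    }
}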

Hope that helps.

-Eric.

Sven Kohler wrote:

Hi,

it seems that releaseConnection finishes the http-download until it is 
complete. I don't want that. I'm looking for a way to close the 
HttpConnection if the download wasn't completed yet.
I'm aware that one cannot abort a Http-Transfer without closing the 
connection and therefore losing it for keep-alive etc.

There doesn't seem to be a way of closing the HttpConnection by using 
the HttpMethodBase object. What should I try next?

Thx
Sven
-
To unsubscribe, e-mail: 
[EMAIL PROTECTED]
For additional commands, e-mail: 
[EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: DO NOT REPLY [Bug 24309] - MultiThreadedHttpConnectionManager daemon Thread never GC'd

2003-11-12 Thread Eric Johnson
Odi,

This is a curiosity, I agree.

I take this documentation from System.gc() at face value:
When control returns from the method call, the Java Virtual Machine 
*has* made a best effort to reclaim space from all discarded objects. 
(emphasis added) - In other words, it is blocking, although best 
effort is not exactly well defined  I've not really looked on the 
web, though, to see whether the actual implementation varies.

In my original patch, I found that the sleep was necessary in order to 
let the thread that was not being garbage collected actually finish 
processing; otherwise it would not be collected by the gc() call.  Since 
Mike's new approach no longer uses multiple threads, the sleep prior to 
the gc() call is unnecessary now.

-Eric.

Ortwin Glück wrote:

Mike,

in the test case I would rather introduce a Thread.sleep AFTER the 
System.gc() call as well to give the GC time to run. GC happens 
asynchronously. The System.gc() call is not blocking!

Odi

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=24309


--- Additional Comments From [EMAIL PROTECTED]  2003-11-12 
00:27 ---
Any more thoughts on this one, or should I apply?

Mike



-
To unsubscribe, e-mail: 
[EMAIL PROTECTED]
For additional commands, e-mail: 
[EMAIL PROTECTED]




-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Problem with Basic Authentification and non ASCII characters

2003-11-12 Thread Eric Johnson
This would appear to be a character encoding issue.

In BasicScheme.authenticate, it currently does this:

return "Basic " + HttpConstants.getAsciiString(
   Base64.encode(HttpConstants.getBytes(buffer.toString())));
I suspect it should be doing something like this:

return "Basic " + HttpConstants.getAsciiString(
   Base64.encode(buffer.toString().getBytes("UTF-8")));
RFC 2617 appears to be mum on the issue.

Anyone else have a better clue?

-Eric.

P.S. I found this email which might be a useful place to start, but I 
couldn't figure out the answer from a quick read of it or its 
surrounding emails on the topic.

http://lists.w3.org/Archives/Public/ietf-http-wg/2003AprJun/0015.html

[EMAIL PROTECTED] wrote:

Hi,

today an administrator reported a password related problem within one of
our applications to me. I tracked the problem down to the user having used
the German Umlaute äöü in his password.
Our application tried to log in to another web site using a get method from
HTTPClient 2.0 rc2 setting basic authentication, but authentication
failed because of the non ASCII characters.
We used the password "ä-ö-ü" for testing and it turned out that HTTPClient
translates this to "ZGg6Py0/LT8=". Internet Explorer and Mozilla translate
this to "ZGg65C32Lfw=". Using
org.apache.commons.httpclient.util.Base64.decode with the wrong string
results in "?-?-?" where the second string results in the correct "ä-ö-ü",
so encode and decode are not symmetric.
Using the code below (I found it some time ago on the internet) to translate
the password into the base64 version results in the correct string.
For me the question is if a password with non ASCII characters is not
allowed at all (in the HTTPClient documentation I could not find a hint in
this direction, or I have missed it), but even if not, browsers seem to
support it, so the used Base64-encoding class seems to be buggy and should
be fixed in 2.0, before it is completely replaced for 2.1.
Any thoughts or hints are welcome.

Regards,
Olaf




public class Base64
{
  static String BaseTable[] = {
    "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P",
    "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z", "a", "b", "c", "d", "e", "f",
    "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v",
    "w", "x", "y", "z", "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
  };

  public static String encode(String text) {
    int n = text.length();
    if (n < 1)
      return text; // no bytes to encode!?!
    StringBuffer output = new StringBuffer(n);
    // read the entire string into the byte array
    byte bytes[] = text.getBytes();
    byte buf[] = new byte[4]; // array of base64 characters

    int n3byt = n / 3; // how many 3-byte groups?
    int nrest = n % 3; // the remaining bytes from the grouping
    int k = n3byt * 3; // we are doing 3 bytes at a time
    int linelength = 0; // current linelength
    int i = 0; // index
    // do the 3-byte groups ...
    while (i < k) {
      buf[0] = (byte) ((bytes[i] & 0xFC) >> 2);
      buf[1] = (byte) (((bytes[i] & 0x03) << 4) |
                       ((bytes[i + 1] & 0xF0) >> 4));
      buf[2] = (byte) (((bytes[i + 1] & 0x0F) << 2) |
                       ((bytes[i + 2] & 0xC0) >> 6));
      buf[3] = (byte) (bytes[i + 2] & 0x3F);
      output.append(BaseTable[buf[0]]).append(BaseTable[buf[1]]).append(
          BaseTable[buf[2]]).append(BaseTable[buf[3]]);
      if ((linelength += 4) >= 76) {
        output.append("\r\n");
        linelength = 0;
      }
      i += 3;
    }
    // deal with the padding ...
    if (nrest == 2) {
      // 2 bytes left
      buf[0] = (byte) ((bytes[k] & 0xFC) >> 2);
      buf[1] = (byte) (((bytes[k] & 0x03) << 4) |
                       ((bytes[k + 1] & 0xF0) >> 4));
      buf[2] = (byte) ((bytes[k + 1] & 0x0F) << 2);
    }
    else if (nrest == 1) {
      // 1 byte left
      buf[0] = (byte) ((bytes[k] & 0xFC) >> 2);
      buf[1] = (byte) ((bytes[k] & 0x03) << 4);
    }
    if (nrest > 0) {
      // send the padding
      if ((linelength += 4) >= 76)
        output.append("\r\n");
      output.append(BaseTable[buf[0]]).append(BaseTable[buf[1]]);
      // Thanks to R. Claerman for the bug fix here!
      if (nrest == 2) {
        output.append(BaseTable[buf[2]]);
      }
      else {
        output.append("=");
      }
      output.append("=");
    }
    return output.toString();
  }
}
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: DO NOT REPLY [Bug 24560] - HttpClient loops endlessly while trying to retrieve status line

2003-11-11 Thread Eric Johnson
Oleg,

No apologies needed.  As far as I'm concerned, your opinions with 
respect to HttpClient carry far more weight - I seem to contribute a 
patch about once every three months (and Mike just did a much better 
rewrite of my latest patch), but otherwise just stir up trouble on the 
mailing list!

One more wrench.  It strikes me that if HttpClient does support HTTP 
pipelining in the future, then an "available" check will mislead you.  
Of course there might be data available, as that would be precisely the 
point.  Only when looking at the data can you determine whether or not 
it is valid to have that data already in the queue  Granted, it is 
perhaps early to worry about pipelining!

It seems like we have a spectrum of possibilities to deal with here:

Malicious servers -- bad servers with immediate extra data -- bad 
servers with delayed extra data -- other bad servers (bad cookies et. 
al.) -- mostly compliant servers -- 100% compliant servers (yeah, right!).

It seems like we need to stop the malicious servers from derailing 
HttpClient.  As for the two types of bad servers, I'm not sure I see the 
point of catching the bad behavior on some requests immediately, when 
we're guaranteed to catch the problem on a subsequent request 100% of 
the time.  I'm guessing that the minor performance hit of reissuing a 
request when operating in strict mode will be swamped by the fact that 
in either case you have to open a new connection?

I guess that's part of why I ask whether there are specific badly 
behaved servers for which enforcing a strict mode with an "available" 
check will actually prove useful.  It could be that for every bad 
server out there, they almost all fall into the "delayed extra data" 
category, rather than the "immediate" category.  I think my concern 
comes down to this: How much benefit does HttpClient get out of 
enforcing the rule in two places, one with the "available" check, and 
one with the scan at the beginning of reading the next response? Is the 
answer 5% of bad servers, or 75% of bad servers?  And if you set a 
strict mode, can HttpClient even operate with those servers anyway?

Absent the real-world data, I'd stick with only trying to deal with the 
bad behavior at the beginning of the next response, rather than at the 
end of the previous one.

-Eric.

Kalnichevski, Oleg wrote:

Eric,
Just to clarify things: there's *no way* we touch the 2.0 branch for any other reason but fixing a *serious* bug. 

As far as I am concerned the suggested patch is an enhancement, and not a bug. If any of my comments were interpreted otherwise, I offer my apologies. It was not meant that way. 

Anyways, if I understand Christian right, the idea is to drop those connections that are known for sure to be screwy, instead of reusing them and getting into a trouble later when processing subsequent responses. The logic in readStatusLine will be enhanced to optionally terminate the infinite loop. That is it.

I hope that addresses your concerns somewhat.

Oleg

-Original Message-
From: Eric Johnson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 11, 2003 15:35
To: Commons HttpClient Project
Subject: Re: DO NOT REPLY [Bug 24560] - HttpClient loops endlessly while
trying to retrieve status line
Christian Kohlschütter wrote:

 

I perfectly agree - I do not see a bullet-proof solution either.

I should correct my assumption:

Can we assume that reusing the HTTP connection is unreliable/should be 
avoided if there are more bytes *INSTANTLY* available than specified with 
Content-Length

Instantly means without waiting/blocking, so at least for this situation, a 
simple workaround would be feasible.

I think that the currently used SocketInputStream's available() method _does_ 
return values  0.

   

Unfortunately, I think that depends.  I seem to recall we had 
difficulties with this function in the past, particularly related to 
different JVM versions, and also with different implementations of 
secure sockets.  Granted, some of those implementations were/are buggy, 
but we have to live with them, I think.  Before we commit such a change 
to the 2.0 release branch, we'd have to run it through tests across 
numerous JVMs on numerous platforms with numerous JCE libraries.  We 
also run the risk that the available function could misbehave not only 
by giving an incorrect response, but also by blocking for a short period 
of time (1ms?), which would be disastrous for performance.

I think the "instantly available" criterion is misleading, too.  There's 
absolutely no reason to prevent you from hitting a pathological case 
where the packet boundary splits right where the extra data is sent, 
thus leading the instantly available check to return false, even 
though the data would be read on the subsequent response.  In fact, such 
behavior could be entirely dependent on the misbehaving server.  The 
case that I've encountered stemmed from a server that tossed in an extra 
CRLF after

Re: HTTPClient RC2 - Response content length is not known

2003-11-07 Thread Eric Johnson
Brad,

No worries about which list you subscribe to.  Many on this list are 
happy to answer questions such as yours.

HttpClient uses commons-logging for its configuration.  If you've 
configued commons-logging properly, the message can be made to go away 
as you indicated.  Since you already generated a wire log, presumably 
you've seen the page for configuring logging.  My suggestion would be to 
look at those instructions again, except change the logging level 
(debug, info, warn, error).  Where you see this:
System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.commons.httpclient", 
"debug");

do your equivalent of this instead (I say equivalent on the off chance 
that you're using Log4J):
System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.commons.httpclient", 
"error");
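
Or, if your equivalent really is Log4J, the matching entries in a 
log4j.properties would be something like this (I'm guessing at your 
configuration):

log4j.logger.org.apache.commons.httpclient=ERROR
log4j.logger.httpclient.wire=ERROR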

It strikes me personally that the warning in this particular context 
is probably excessive, and it should be logged as an info or debug 
message instead, but only in this particular case.  If you look at the 
wire log you'll notice that the server does not respond with a 
Content-Length header.  Rather, the length of the response is dictated 
by when the server closes the connection to the client.  HttpClient is 
telling the truth with this warning that you see, but in this particular 
context, the server explicitly indicates that it will be closing the 
connection.

Let us know if you think the logging change above is not sufficient to 
your needs.

-Eric.

Brad Clarke wrote:

I have a piece of code that hits my LinkSys router to get the IP address and
e-mail it if it has changed.
When using HTTPClient 2.0 RC1, it ran fine.  When I upgraded to RC2, I get
the message
WARNING: Response content length is not known

I've isolated the two calls that result in the warning.  They are:

   int status = client.executeMethod( get );
and
   get.getResponseBodyAsString();
Is there anything I can do to make this warning go away?

Here is the main procedure, to give you a better idea of what I'm doing:

   String strHTML = new String();
   String strIP = new String();
   int statusCode = 0;
   System.out.println("* LSRouter start *");

   HttpClient client = new HttpClient();

   client.getState().setCredentials("Linksys BEFSR41/BEFSR11/BEFSRU31",
       "192.168.1.1", new UsernamePasswordCredentials("user", "pass"));
   GetMethod get = new GetMethod("http://192.168.1.1/Status.htm");
   get.setDoAuthentication( true );
   //client.setStrictMode(true);  --- no effect when enabled
   //get.setStrictMode(true);  --- no effect when enabled
   // execute the GET
   int status = client.executeMethod( get );
   // print the status and response
   statusCode = get.getStatusCode();
   System.out.println("Status = " + statusCode);
   strHTML = StripHTML(get.getResponseBodyAsString());
   strIP = locateIP(strHTML);
   System.out.println("\n" + strIP);

And the wire output:

2003/11/07 00:00:12:582 EST [DEBUG] HttpClient - -Java version: 1.4.2_02
2003/11/07 00:00:12:592 EST [DEBUG] HttpClient - -Java vendor: Sun
Microsystems Inc.
2003/11/07 00:00:12:592 EST [DEBUG] HttpClient - -Java class path:
D:\j2sdk1.4.2_02\jre\lib\rt.jar;D:\j2sdk1.4.2_02\lib\tools.jar;D:\Apache_Too
ls\commons-httpclient-2.0-rc2\commons-httpclient-2.0-rc2.jar;D:\Apache_Tools
\commons-httpclient-2.0-rc1\commons-httpclient-2.0-rc1.jar;D:\Apache_Tools\c
ommons-logging-1.0.3\commons-logging.jar;H:\Development\Java\LSRouter\classe
s
2003/11/07 00:00:12:592 EST [DEBUG] HttpClient - -Operating system name:
Windows 2000
2003/11/07 00:00:12:592 EST [DEBUG] HttpClient - -Operating system
architecture: x86
2003/11/07 00:00:12:592 EST [DEBUG] HttpClient - -Operating system version:
5.0
2003/11/07 00:00:13:844 EST [DEBUG] HttpClient - -SUN 1.42: SUN (DSA
key/parameter generation; DSA signing; SHA-1, MD5 digests; SecureRandom;
X.509 certificates; JKS keystore; PKIX CertPathValidator; PKIX
CertPathBuilder; LDAP, Collection CertStores)
2003/11/07 00:00:13:844 EST [DEBUG] HttpClient - -SunJSSE 1.42: Sun JSSE
provider(implements RSA Signatures, PKCS12, SunX509 key/trust factories,
SSLv3, TLSv1)
2003/11/07 00:00:13:844 EST [DEBUG] HttpClient - -SunRsaSign 1.42: SUN's
provider for RSA signatures
2003/11/07 00:00:13:854 EST [DEBUG] HttpClient - -SunJCE 1.42: SunJCE
Provider (implements DES, Triple DES, AES, Blowfish, PBE, Diffie-Hellman,
HMAC-MD5, HMAC-SHA1)
2003/11/07 00:00:13:884 EST [DEBUG] HttpClient - -SunJGSS 1.0: Sun (Kerberos
v5)
2003/11/07 00:00:14:555 EST [DEBUG]
HttpConnection - -HttpConnection.setSoTimeout(0)
2003/11/07 00:00:14:785 EST [DEBUG] HttpMethodBase - -Execute loop try 1
2003/11/07 00:00:14:795 EST [DEBUG] wire - - GET /Status.htm
HTTP/1.1[\r][\n]
2003/11/07 00:00:14:826 EST [DEBUG] HttpMethodBase - -Adding Host request
header
2003/11/07 00:00:14:996 EST [DEBUG] wire - - User-Agent: Jakarta

Re: HTTPClient RC2 - Response content length is not known

2003-11-07 Thread Eric Johnson
Oleg,

Dang, you're good!  You complete fixes before others can even guess at them!

-Eric.

Oleg Kalnichevski wrote:

It strikes me personally that the warning in this particular context 
is probably excessive, and it should be logged as an info or debug 
message instead, but only in this particular case.  If you look at the 
wire log you'll notice that the server does not respond with a 
Content-Length header.  Rather, the length of the response is dictated 
by when the server closes the connection to the client.  HttpClient is 
telling the truth with this warning that you see, but in this particular 
context, the server explicitly indicates that it will be closing the 
connection.
   

Eric, you are completely right. I realised that a few days ago and
already applied a fix for the problem. The most recent 2.0 code snapshot
will not display the warning when content length cannot be determined
but 'connection: close' directive is given.
Oleg

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Cookie test case failures with Tomcat 4.1.29

2003-11-04 Thread Eric Johnson
Oleg,

I hate bugs like this!  I suppose if it is working for you, there's hope 
it can work for me.

I'm working against a completely stock 4.1.29 install on Linux, using 
Sun's JDK 1.4.2.  When I say stock 4.1.29 build, I expanded the file 
after download, dropped httpclienttest folder into webapps, then started 
up Tomcat with a ./catalina.sh run.  Then I ran the tests as reported.

-Eric.

Kalnichevski, Oleg wrote:

Eric,
Strangely enough, I installed Tomcat 4.1.29 yesterday and had no failing test cases whatsoever. I reran the test cases with the latest code from CVS HEAD and 2.0 branch against Tomcat 4.1.29 after having read your message. Again, no failing test cases. 

Did you keep your old server.xml file?

Oleg

-Original Message-
From: Eric Johnson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 04, 2003 17:11
To: HttpClient
Subject: Cookie test case failures with Tomcat 4.1.29
It would seem that the latest Tomcat (4.1.29) has engaged in a subtle 
change in behavior with respect to cookies.  When I ran it this morning, 
nine of the cookie related test cases failed.  Last week, I was running 
with Tomcat 4.1.27, and everything worked fine.

Since I had it readily available, I fell back to Tomcat 4.1.18 (I 
deleted 4.1.27, unfortunately, and Apache is no longer hosting it) and 
ran the tests again, and got all of the tests to pass with no errors.

Upon inspection, the failures would seem to be due to the test servlet 
returning:

<tt>simplecookie=value</tt>

instead of:

<tt>simplecookie=value</tt>

Which is right - our test cases, or the new behavior?

-Eric.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Cookie test case failures with Tomcat 4.1.29

2003-11-04 Thread Eric Johnson
Oleg,

I'm glad you sent along the file!  It's funny - other than that one 
file, I'd say you exactly duplicated the environment I'm using.  That 
one file is key, though.

It would appear that your version of server.xml trumps the default 
connector choice.  The default server.xml reads:

   <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
              port="8080" minProcessors="5" maxProcessors="75"
              enableLookups="true" redirectPort="8443"
              acceptCount="100" debug="0" connectionTimeout="20000"
              useURIValidationHack="false" disableUploadTimeout="true" />
and yours reads:

   <Connector className="org.apache.catalina.connector.http.HttpConnector"
              port="8080" minProcessors="5" maxProcessors="75"
              enableLookups="true" redirectPort="8443"
              acceptCount="10" debug="0" connectionTimeout="60000"/>
As far as the docs on the Tomcat site go, this:
http://jakarta.apache.org/tomcat/tomcat-4.1-doc/config/connectors.html
indicates that the HttpConnector is deprecated, and the 
CoyoteConnector is the one to use.

I replaced the Connector entry in your server.xml (which worked for 
me, and generated no test failures, by the way), with the one from the 
Tomcat original, and reproduced the problem.

By the way, when I last looked at the older connector, you most 
definitely didn't want to use it with Tomcat 4.1.18.  It turns out that 
any GET request that didn't include a Content-Length line would be 
closed by the connector, rather than assuming that the GET request had 
no content and reusing the connection persistently.  I got really horrible 
behavior where each request from my web browser meant a new connection 
to the server.  So much for HTTP/1.1.  The newer CoyoteConnector 
doesn't show this behavior.

Of course, this brings us full circle to the original problem - which 
one is right?

-Eric.

Oleg Kalnichevski wrote:

Eric,
I just installed Tomcat 4.1.29 on my home PC running Redhat 9 and Sun
JDK 1.4.2. I can't reproduce the problem. All tests pass. The only thing
I did differently was tweaking tomcat's server.xml (attached below) to
disable stuff that I do not need. I's unlikely that it should have any
bearing on the problem, but who knows.
Oleg

On Tue, 2003-11-04 at 20:53, Eric Johnson wrote:
 

Oleg,

I hate bugs like this!  I suppose if it is working for you, there's hope 
it can work for me.

I'm working against a completely stock 4.1.29 install on Linux, using 
Sun's JDK 1.4.2.  When I say stock 4.1.29 build, I expanded the file 
after download, dropped httpclienttest folder into webapps, then started 
up Tomcat with a ./catalina.sh run.  Then I ran the tests as reported.

-Eric.

Kalnichevski, Oleg wrote:

   

Eric,
Strangely enough, I installed Tomcat 4.1.29 yesterday and had no failing test cases whatsoever. I reran the test cases with the latest code from CVS HEAD and 2.0 branch against Tomcat 4.1.29 after having read your message. Again, no failing test cases. 

Did you keep your old server.xml file?

Oleg

-Original Message-
From: Eric Johnson [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 04, 2003 17:11
To: HttpClient
Subject: Cookie test case failures with Tomcat 4.1.29
It would seem that the latest Tomcat (4.1.29) has engaged in a subtle 
change in behavior with respect to cookies.  When I ran it this morning, 
nine of the cookie related test cases failed.  Last week, I was running 
with Tomcat 4.1.27, and everything worked fine.

Since I had it readily available, I fell back to Tomcat 4.1.18 (I 
deleted 4.1.27, unfortunately, and Apache is no longer hosting it) and 
ran the tests again, and got all of the tests to pass with no errors.

Upon inspection, the failures would seem to be due to the test servlet 
returning:

<tt>simplecookie=value</tt>

instead of:

<tt>simplecookie=value</tt>

Which is right - our test cases, or the new behavior?

-Eric.

 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: [codec] My own little base64 implementation, as an {Input,Output}Stream.

2003-11-03 Thread Eric Johnson
Konstantin,

Konstantin Priblouda wrote:

--- Alexander Hvostov [EMAIL PROTECTED] wrote:
 

A little while ago, I wrote a base64 {en,de}coder in
Java. Huzzah. Since I've 
found out that you guys wrote one too, I thought I
should submit mine. It's 
implemented as an OutputStream and InputStream pair,
though, so it's a bit 
different from yours, which is why I thought to send
mine. It's public 
domain, so do whatever you want with it. The two
source files are attached.
   

That's really cool. I was playing with the same
thoughts - I have
to import / export binary content in XML, so it's
really interesting.
BTW, does somebody know XML parser, which works like
SAX,
but provides an input stream of characters instead of a
callback with a StringBuffer? Allocating big
strings/arrays  is somehow uncool when you are about
to save binary result into BLOB...

 

By which I think you're referring to org.xml.sax.ContentHandler? If the 
contents of an element includes a large blob of continuous text, the 
parsers are free to deliver this to your application in little chunks 
(and in fact do break it into chunks) so as to save on the maximum 
amount of memory the parser might be forced to allocate. Theoretically, 
it is then *your* problem to figure out whether or not it is a good idea 
to bundle all of the data up into a giant String or StringBuffer before 
you process it.

Nobody said the SAX API makes life easy. An interesting point, at least 
in the context of this discussion thread, is that you'd then need to 
pass the *characters* off to the base64 decoder, but in chunks of a size 
that the parser chooses, which unfortunately might not line up with the 
four-character chunks that a base64 decoder wants
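
A sketch of the buffering that implies; the decodeQuantum() call is a 
placeholder for whatever decoder you plug in:

import org.xml.sax.helpers.DefaultHandler;

public class Base64ElementHandler extends DefaultHandler {
    private final char[] quantum = new char[4];
    private int quantumLen = 0;

    public void characters(char[] ch, int start, int length) {
        // characters() hands us chunks of arbitrary length, so carry
        // the 0-3 leftover characters over to the next call and only
        // decode complete 4-character base64 groups
        int end = start + length;
        for (int i = start; i < end; i++) {
            quantum[quantumLen++] = ch[i];
            if (quantumLen == 4) {
                decodeQuantum(quantum);
                quantumLen = 0;
            }
        }
    }

    private void decodeQuantum(char[] four) {
        // placeholder: feed the four characters to your base64 decoder
    }
}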

Hope that helps.

-Eric.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: Getting 502 Bad Gateway on connection

2003-10-31 Thread Eric Johnson
David,

Others will undoubtedly suggest this, but if you can, upgrade to the 
latest HttpClient 2.0 rc2. Numerous bug fixes for proxy support have 
been made since the release you mention. See if the newer release 
resolves the problem. If it doesn't, you might follow the instructions 
for troubleshooting on the web site, and let us know.

-Eric Johnson

Karr, David wrote:

I'm apparently using version 2.0-alpha3-dev of HTTPClient.

I doubt this is a problem with HTTPClient, but I'm just trying to
understand my issue a little better.
I have a servlet running in WebLogic on my desktop, and I'm trying to
connect to an external URL through an authenticated proxy.  My response
from the connection gets a 502 Bad Gateway error.  I have another
servlet with very similar code (and using the same HTTPClient version),
but connecting to a different URL, running on a production box, which
works fine.
Is this just a local problem with our proxy/firewall configuration, or
something that I could be doing wrong in my code?
-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: [Vote] Release Prime

2003-10-15 Thread Eric Johnson
Jandalf,

I guess fighting the Balrog aged you a bit?

Given the collection of dedicated committers, I would hate for a non 
binding +1 to be taken as a slight against anyone else, for it most 
surely is not!  Caveats aside, Michael seems like an excellent choice 
from where I sit (+1).

-Eric.

Jandalf wrote:

Hello everyone,

I had been acting as the release prime for HttpClient, but 
unfortunately must resign.  My job has changed, and my new life path 
does not leave me free time for much these days.

I would like to stay on as the mailing list moderator (it's amazing how 
much junk mail needs to be filtered from the list!).

I'd like to nominate Michael Becke as the new release prime.  He has 
been consistent and insightful in the development of HttpClient.

Jandalf.

-
To unsubscribe, e-mail: 
[EMAIL PROTECTED]
For additional commands, e-mail: 
[EMAIL PROTECTED]





-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Re: [primitives] Package layout strategy

2003-10-13 Thread Eric Johnson
I've got almost no stake in this (except in having the work move forward 
so that I might take advantage of it in the future), but as far as the 
maps are concerned, I think having each of the possible pairwise 
solutions is overkill.

What is to be served by a map of short -> Object, that isn't also 
served by int -> Object? Yes, there might be a *slight* improvement in 
memory use, but little else, so far as I can tell, with the possible 
exception of primitive type safety.

The converse question applies as well. What gain is there of int -> 
short? Why not just use int -> int? The case of int -> boolean is 
really just a degenerate case of int -> Object, where Object is a 
Boolean.TRUE or Boolean.FALSE, isn't it?

Is there any value whatsoever to boolean -> X?

For primitive maps, I see the interesting key types as int, long, 
double, and destination types as int, long, double, and Object. If you 
really argue the point, you might further boil this down to just long 
and double, but that really could consume lots more memory, at least 
where short keys are concerned.

All of which might argue for defering the creation of these map classes 
until needed.

-Eric Johnson

__matthewHawthorne wrote:

The problem is: how to properly package the massive amount of 
primitive collection classes. I see this as a valid problem. Leaving 
well enough alone is a possibility, another is to discuss if there are 
better options. That's what is taking place here.

Waiting until there is a real-world use-case for primitive Maps is an 
option. However, since many of the other collection types have been 
covered, I think that basic Map implementations are a necessity for a 
release. Now, the amount of Maps could be many, in which case it may 
be wise not to get too deep into ordering and other algorithms and 
types. Having real world users of these classes would be nice, but 
given the nature of the classes, I don't think it would have much 
effect on the outcome; it all seems pretty monotonous. That's why a 
lot of the code can be generated.

I don't think that every class in commons had a use case before it was 
created. When thinking about possible additions, I'm sure that a lot 
of brainstorming occurs. This may have both good and bad effects. But 
as long as the code is documented well, and has test cases, I don't 
see this as a big deal.



Rodney Waldhoff wrote:

On Mon, 13 Oct 2003, __matthewHawthorne wrote:


I believe that there will be a lot of code generation involved, Stephen
checked in some Velocity templates a few weeks ago.


Rather than generating the 64 pairwise primitive-to-primitive maps, their
associated interfaces, base classes, adapters, decorators (immutable,
synchronized) and variations (ordered/unordered, hash/tree, etc.), why not
wait until we have an actual, real-world application that calls for them?


So the battle has become:

o.a.c.primitives.boolean
o.a.c.primitives.byte
o.a.c.primitives.short
o.a.c.primitives.int
o.a.c.primitives.long
o.a.c.primitives.float
o.a.c.primitives.double
vs.

o.a.c.primitives.collection
o.a.c.primitives.list
o.a.c.primitives.iterator
o.a.c.primitives.map
Any other opinions?



Yes, leave well enough alone. Again, what problem are we trying to 
solve?









Re: Proposal: Configurable HTTP Response length limit

2003-10-10 Thread Eric Johnson
I would think that if your application has any reason to believe that 
the response will be unbounded, then you should use getResponseBodyAsStream.

I suppose we could add functions that took a limit parameter for the 
functions getResponseBody() and getResponseBodyAsString().  Something like:

byte[] getResponseBody(int maxBytes);

and deprecate the old ones.  In the absence of such support in 
HttpClient, presumably you can write such a function for yourself?
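
For instance, a minimal sketch of such a helper, written against the 
existing getResponseBodyAsStream() (the static helper and its limit 
handling are mine, not part of the HttpClient API):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.httpclient.HttpMethod;

public class BoundedBody {
    // Reads at most maxBytes of the response body; fails fast if the
    // server sends more than the caller is willing to buffer.
    public static byte[] getResponseBody(HttpMethod method, int maxBytes)
            throws IOException {
        InputStream in = method.getResponseBodyAsStream();
        if (in == null) {
            return null;
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) != -1) {
            if (out.size() + read > maxBytes) {
                throw new IOException(
                    "Response body exceeded " + maxBytes + " bytes");
            }
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }
}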

I agree with other posts that this is not an issue to solve at the 
stream level.

-Eric.

[EMAIL PROTECTED] wrote:

On Friday, 10 October 2003 13:33, Ortwin Glück wrote:
 

Chris,

Thanks for posting. However I really don't see why this should be a
responsibility of HttpClient. The user can always discard the rest of a
response if he wants to. In my eyes you are solving the problem at too
low a level. The only problem that arises is with endless streams, since
HttpClient always tries to consume the input stream until the end. The
only thing you need is a way to forcibly end a connection.
Odi
   

Odi,

thanks for your answer.

Indeed, endless streams (or streams returning more bytes than the JVM's free 
memory size) are the big problem of the current HttpClient.

HttpMethodBase's getResponseBody() has no hard limit and will cause the 
application to crash with an OutOfMemoryError when you are reading from a 
stream which is too long.

Then, you should probably mark getResponseBody and getResponseBodyAsString as 
deprecated because they will not terminate normally in this case.

Christian



 





Re: Proposal: Configurable HTTP Response length limit

2003-10-10 Thread Eric Johnson
At a lower level, the potential failure points are undoubtedly there.  
Unless you could point to a real-world server that causes them, or a 
security bug that stems from them, any such fixes would be academic in 
nature.

Can you cause an out of memory failure while using 
getResponseBodyAsStream() to process a response from a *conformant* HTTP 
server?

Sure, I could construct a chunked response that indicated a chunk size 
on the order of 16^(2^30) - a chunk-size line of roughly a billion hex 
digits - which would consume all available memory, but then the question 
would be how to detect that and fail the connection sooner, due to a 
non-conformant server.  It would not be a question of merely prematurely 
terminating the chunk.
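
To make that concrete, the degenerate response would look something like 
this on the wire (an illustration, not captured from any real server):

HTTP/1.1 200 OK
Transfer-Encoding: chunked

ffffffff...ffffffff        <- a chunk-size line of ~2^30 hex digits

A client has to read the whole chunk-size line before it can reject it, 
which is where a cap on line length would have to kick in.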

I don't see how ChunkedInputStream.exhaustInputStream() can actually 
consume all available memory.  Rather I see that if the server never 
stops sending data, it will simply never exit.  The latter condition 
suggests an abuse of the HTTP protocol, and an alternative streaming 
protocol would be more appropriate.

-Eric.

[EMAIL PROTECTED] wrote:

Eric,

adding a maxBytes parameter to getResponseBody seems to be a reasonable idea to 
limit the actual number of bytes returned in the response body.

Nevertheless, you would still have to fix some other flaws lying around at 
lower levels that attempt to read until EOF or newline,  such as 
ChunkedInputStream.close() and exhaustInputStream() - or 
HttpParser.readRawline().

I can easily provide test cases which will cause HttpClient to eat up all 
available memory throwing an OutOfMemoryError because of reading and reading 
from a never ending HTTP Response.

I would regard this behaviour as a bug.

The patch is intended as an emergency brake if methods at higher levels try 
to read until EOF. It is up to the user whether to set this hard limit or 
not.

Christian



 





Re: Proposal: Configurable HTTP Response length limit

2003-10-10 Thread Eric Johnson
Christian,

I would add to my previous two suggestions about using HTTP 1.0 level 
support and the Connection:close header, an additional suggestion.  
Provide your own connection manager that does not attempt to re-use 
connections in a persistent manner, or at least, lets you control 
whether to do so.

I think you might be able to solve at least some of your issues that way.

-Eric.

Christian Kohlschuetter wrote:

On Friday, 10 October 2003 23:40, Oleg Kalnichevski wrote:
 

Chris,
Please see my comments in-line
   

I think it is less the patch itself than a standpoint problem 
(standards-compliant vs. robust).

To end this exasperating thread: everybody is invited (but not forced) to use 
the provided patch, especially as long as there is no other solution.

I will keep using my patch since you have not shown any workarounds for 
handling endless streams or illegal HTTP headers, which is a knock-out 
criterion for the unpatched version.



 





Re: Why are cookies deleted?

2003-10-06 Thread Eric Johnson
Ernst,

You might check out the troubleshooting guide.  See about turning on 
wire logging, for example.  Then let us know what you find in the log.

The server can, of course, decide that it wants to expire your 
cookies.  The server doesn't need to return them with each response.  
Cookies can be qualified by time, path and server, so perhaps your 
second request is for a different context as far as the cookie state 
is concerned?  I note that the output you provide below does not print 
out the details of the cookies found in the original request, and that 
the two requests are on different paths.

A shorter sample of the problem would undoubtedly help out, as it would 
let us try exactly the same code you're trying.
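
In the meantime, a stripped-down sketch of what I'd expect to work - one 
HttpClient with a single HttpState shared across both requests (URLs 
shortened from your log; untested against SourceForge):

import org.apache.commons.httpclient.Cookie;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpState;
import org.apache.commons.httpclient.cookie.CookiePolicy;
import org.apache.commons.httpclient.methods.GetMethod;

public class CookieCheck {
    public static void main(String[] args) throws Exception {
        HttpState state = new HttpState();
        state.setCookiePolicy(CookiePolicy.COMPATIBILITY);
        HttpClient client = new HttpClient();
        client.setState(state);   // one state, reused for both requests

        GetMethod first =
            new GetMethod("https://sourceforge.net/account/login.php");
        client.executeMethod(first);
        first.releaseConnection();
        dump(state.getCookies()); // cookies set by the first response

        GetMethod second =
            new GetMethod("https://sourceforge.net/project/admin/newrelease.php");
        client.executeMethod(second);
        second.releaseConnection();
        dump(state.getCookies()); // do they survive the second request?
    }

    private static void dump(Cookie[] cookies) {
        System.out.println("Cookie count: " + cookies.length);
        for (int i = 0; i < cookies.length; i++) {
            System.out.println("  " + cookies[i].toExternalForm());
        }
    }
}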

-Eric.

Ernst de Haan wrote:

But that is the constructor which is called only once. So my code is 
correct, right? If not, please elaborate on what I should do differently.

Ernst

On Monday, 6 October 2003 11:59, Kalnichevski, Oleg wrote:
 

Ernst,

Cookies are not deleted by HttpClient unless they are expired. I am
pretty sure about it.
In your particular case there's a bug in the following piece of code:

  public AddReleaseTask() {
 HttpState httpState = new HttpState();
 httpState.setCookiePolicy(CookiePolicy.COMPATIBILITY);
 _httpClient = new HttpClient();
 _httpClient.setConnectionTimeout(7000); // 7 seconds
 _httpClient.setTimeout(5000);   // 5 seconds
 _httpClient.setState(httpState);
  }
There's a new instance of HttpState created every time the method is
executed. As a result the old one gets garbage collected along with all
the cookies it contains. Just keep the original HttpState instance to
stop your cookies from disappearing
HTH

Oleg

-Original Message-
From: Ernst de Haan [mailto:[EMAIL PROTECTED]
Sent: Monday, October 06, 2003 11:44
To: Commons HttpClient Project
Subject: Why are cookies deleted?
Hi,

Why are cookies deleted from the state registered with my HttpClient
object?
I do a request (GetMethod) that returns 2 cookies. Then I do another
request (using another GetMethod) and with that I lose both cookies.
Should I recycle the original method or should I release the connection
or should I do something else?
I'm pretty sure the site does not delete the cookies self, although I'm
not 100% sure. How can I determine this?
Source code:
http://people.freebsd.org/~znerd/AddReleaseTask.java
Output log:

[sfaddrelease] Using keystore file
src/certificates/sourceforge.net.cert. [sfaddrelease] Logging in to
SourceForge site as znerd.
[sfaddrelease] Executing request https://sourceforge.net/account/
login.php?return_to=form_loginname=znerdform_pw=Secret1persistent_logi
n=1login=Login +With+SSL.
[sfaddrelease] Received 2 cookies.
[sfaddrelease] Received cookie: session_ser=4mwuT3NmTwAcip%
2BNYbMb3kufdYs1ecnResrJ4qvW64J3DO1UjOB9najRyGZHsvly%2F7%
2FApd7J6HNaZzO47tBkuaT0juKf20pqVZSSAZh2eho%
3D-9b1d4e8f9591972e74e19fee00ea1f7a
[sfaddrelease] Received cookie: persist_session=Vd18PV2KlUs%3D
[sfaddrelease] Logged in to SourceForge site as znerd.
[sfaddrelease] Creating release 0.127-dev for group 71598, package
71219. [sfaddrelease] Current cookie count is 2
[sfaddrelease] Executing request https://sourceforge.net/project/admin/
newrelease.php?group_id=71598package_id=71219release_name=0.127-devsub
mit=Create +This+Release.
[sfaddrelease] Received status line: HTTP/1.1 200 OK
[sfaddrelease] Current cookie count is 0
[sfaddrelease] Created release 0.127-dev for group 71598, package
71219.
--
Ernst
   





Re: Wire logging problem...

2003-10-06 Thread Eric Johnson
David,

You might try setting the logging level for the Logger called 
httpclient.wire.

In other words, add this:

log4j.category.httpclient.wire=

with the debug level of your choice.  If you look carefully at the 
logging.html page, you can dig this tidbit out from this line:
System.setProperty("org.apache.commons.logging.simplelog.log.httpclient.wire", 
"debug");

although perhaps that is not as obvious as it should be.
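
For example, a log4j.properties fragment along these lines (assuming the 
stock log4j category syntax) keeps the general HttpClient chatter quiet 
while showing the wire traffic:

# quiet down the general HttpClient logging...
log4j.category.org.apache.commons.httpclient=INFO
# ...but dump everything that goes over the wire
log4j.category.httpclient.wire=DEBUG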

-Eric.

David Brady wrote:

I use Log4j for our current project - no problems with it.

I've added to our log4j.properties file:

log4j.category.org.apache=INFO

This suppresses all of the httpclient debug messages (great!), but it 
doesn't impact the wire debug messages.

Am I missing something?

I've checked the logging documentation @
http://jakarta.apache.org/commons/httpclient/logging.html - and played 
with setting some system properties.  Nothing helped.  Once I get to 
the point of throwing random configuration properties around, I 
figure it is best to ask for some help...

Any help appreciated.  Thanks much.









Re: connection leak in HttpMethodBase when HttpRecoverableException is re-thrown

2003-09-24 Thread Eric Johnson
Danch,

Note the bugs:
http://issues.apache.org/bugzilla/show_bug.cgi?id=23137
http://issues.apache.org/bugzilla/show_bug.cgi?id=22841
If you can, pull the latest code from CVS, or grab a nightly build, and 
try it again, as the bug may very well be fixed already.

Let us know how it goes.

- Eric

danch wrote:

Hello,
I'm using HttpClient (2.0 rc1) under Axis (using their 
CommonsHttpSender class) and we've run into some problems with a web 
application locking up (deadlock).

A little research and a test application revealed that our 
application arrived at this condition after receiving a couple of 
exceptions from HttpMethodBase.processRequest. If processRequest 
catches an HttpRecoverableException, it attempts to determine whether 
a retry should be attempted or not, and if not (which is our case), it 
rethrows the exception, which is propagated on up the stack:

} catch (HttpRecoverableException httpre) {
    if (LOG.isDebugEnabled()) {
        LOG.debug("Closing the connection.");
    }
    connection.close();
    LOG.info("Recoverable exception caught when processing request");
    // update the recoverable exception count.
    recoverableExceptionCount++;
    // test if this method should be retried
    if (!getMethodRetryHandler().retryMethod(
            this,
            connection,
            httpre,
            execCount,
            requestSent)
    ) {
        LOG.warn("Recoverable exception caught but "
            + "MethodRetryHandler.retryMethod() returned false, "
            + "rethrowing exception");
        throw httpre;
    }
}

The caller of processRequest (HttpMethodBase.execute) does not handle 
the exception, but has the following finally block:

} finally {
inExecute = false;
// If the response has been fully processed, return the connection
// to the pool. Use this flag, rather than other tests (like
// responseStream == null), as subclasses, might reset the stream,
// for example, reading the entire response into a file and then
// setting the file as the stream.
if (doneWithConnection) {
ensureConnectionRelease();
}
}
Note that the retry case is handled internally to 
HttpMethodBase.execute. I believe that the doneWithConnection flag 
should be set to true before rethrowing the HttpRecoverableException 
in the above catch block, so that the connection will be released 
properly. This change seems to have solved the problem we were having. 
Here's the diff -u output:

--- HttpMethodBase.java 2003-09-24 13:10:10.249937500 -0500
+++ HttpMethodBase.mine 2003-09-23 16:51:31.394929000 -0500
@@ -2644,6 +2644,7 @@
     "Recoverable exception caught but MethodRetryHandler.retryMethod() "
     + "returned false, rethrowing exception"
     );
+    doneWithConnection = true;
     throw httpre;
   }
 }
thanks,
danch







Re: Performance Issue

2003-07-29 Thread Eric Johnson
Oleg,

Thanks for doing the research!

If my math is correct, this means:
2.0b2 - average of 10.32ms per request (0.086 min * 60s / 500 requests)
2.0rc1 without isStale - average of 8.16ms per request (0.068 min * 60s / 500 requests)
This would seem to correlate exactly with the code - we know that the 
penalty of calling isStale() should be approximately one millisecond, 
since isStale() calls setSoTimeout(1), as it cannot set it to zero.  
Oddly, this doesn't correspond to the 100ms vs 300ms discrepancy 
reported in the original post of this thread.  I wonder if you're 
correct about the logging overhead problem.

Maybe it is just me, but I can live with a 1ms penalty that dramatically 
increases the reliability of the re-used connections.  Based on your 
research, I think we should keep the isStale() check.  What do others think?

You might consider committing your performance test as something under 
the contrib package so that we could look at running it with each 
release, and thus keep track of the library's performance over time.

-Eric.

Kalnichevski, Oleg wrote:

OK. I did a stupid thing that flawed all my measurements. I have basically ended up measuring the speed of console output ;-). Dumb Russian I am. 

Todd, are you sure you have not fallen into the same trap? BETA2 is simply more verbose than ALPHA3 ;-)

Here are the revised numbers (with wire log disabled, 10 threads, 50 requests per thread):

2.0a3:  500 requests in 0.225 min
2.0b2:  500 requests in 0.086 min
2.0rc1 (with 'stale' connection check removed): 500 requests in 0.068 min
This is closer to what I expected to see. I have always suspected that the request buffering should have made beta-versions in fact faster, not slower, compared to alpha ones. The 'isStale' connection check does slow things down a bit.

Is anyone getting different results?

Oleg



 





Re: ssl question

2003-07-29 Thread Eric Johnson
Quent,

You might also read Bruce Schneier's book called Applied Cryptography, 
(and his other books, too).  Every security system has its inescapable 
flaws.  HTTPS/SSL/TLS, for example, depends on the certificates not 
being compromised while they're still valid, and on the computational 
complexity involved in deciphering for the chosen symmetric key 
encryption algorithm.  I think the default with JSSE is to use DESede 
for the symmetric encryption, which security researchers have cracked 
for an individual message in under 48hrs with highly distributed 
processing (tens of thousands of computers cooperating).  In other 
words, HTTPS is good for keeping messages from criminals (they have 
better and easier ways to get your credit card numbers!), but capable 
governments can decode the messages.

Of course, this is off-topic, so you should look for more information 
elsewhere as Odi suggested.

-Eric.

Querent wrote:

Dear Odi,

I am using jsse for the ssl implementation.
I still want to use HttpClient in my program.
Assuming that the server and client certificates are both valid and they're 
communicating with each other, are they communicating over a secure line? 
(i.e. no one can get or decrypt the data?)
Do you have any reference or links to read to strengthen the communication between client and server?

Thanks in advance

Quent

Ortwin Glück [EMAIL PROTECTED] wrote:
Dear Querent,
SSL is not implemented by HttpClient but is provided by an external 
company such as Sun. HttpClient uses the SSL implementation that you 
chose. How secure the SSL connections are is dependent on the algorithm 
used. To be sure you should disable known weak algorithms in your SSL 
implementation. Furthermore you can check the server certificate and 
supply a client certificate. For ultra-sensitive data (like banking 
applications) it is certainly not sufficient to have just the code you 
posted.

HTH

Odi

Querent wrote:

 

Dear all,

I'd like to know how secure ssl in HttpClient is. I set up the
host configuration using:

HttpClient client = new HttpClient();
client.setStrictMode(true);
client.getHostConfiguration().setHost(LOGON_SITE, LOGON_PORT, "https");

where LOGON_SITE and LOGON_PORT are the address of the https site. I am
able to do either GetMethod or PostMethod.
Is my setup enough for HttpClient such that my program communicates
using a secure connection? Is HttpClient reliable on ssl?
Thank a lot.

quent



   

 





Re: FW: Commons-HttpClient conflict with WebDAVClient

2003-07-28 Thread Eric Johnson
Oleg,

It doesn't matter too much to me either way.  I've sent an email to the 
slide-dev group suggesting that they switch to the 2.0 branch, but no 
action has been taken yet.

Slide is an interesting test case, if only because it is representative 
of how other clients are using HttpClient, at least that is my guess.

Mind you, I'm not saying that we should hold back HttpClient just for 
Slide!  I agree with you 100% that we should fix critical design flaws.  
On the other hand, the webdav extensions that Slide has use HttpClient 
in a way that I think most on this list would consider standard.  To 
allow the Slide extensions to continue to work in a drop-in manner 
with both the 2.0 and 2.1 branch would be beneficial to all, and not 
just the Slide project.

Mike raised the version numbering issue a while back.  A very simple, 
clear-cut way for us to look at it is to say that if Slide DAV 
extensions continue to work with both 2.0 and 2.1, then I think we can 
get away with calling the changes that are in progress part of a 2.1 
release.  Otherwise, we should be honest with the clients of HttpClient 
and call it 3.0.

That's just my perspective.

Kalnichevski, Oleg wrote:

Eric,

Of course, the patch can be rolled back. Alternatively we can leave the getResponseContentLength() method as is, and introduce an additional method that serves a similar function but returns long, not int.
 

I don't believe we should just roll back the patch either.  It is a 
design flaw that needs to be fixed.  I suggested getResponseLength() as 
the new alternative that returns a long.

But the whole point is that I really can't understand why Slide folks cannot just use stable 2.0 branch. At the end of the day back in February we decided to release 2.0 with the sub-optimal API primarily in order to keep Slide folks happy (even though we were still formally in alpha phase). And now what? Is history about to repeat itself?
 

I don't think that history will repeat itself, because the 2.0 library 
is a much more stable, reliable, and robust solution that can be built 
on top of, in contrast to the way that many emails to this list have 
suggested the 1.0 release was unstable, unreliable, slow, and error 
prone.  Slide has a fallback position now that it didn't have before.

Is there any particular reason for Slide to use CVS HEAD?  
 

No.

-Eric.





Re: Performance Issue

2003-07-28 Thread Eric Johnson
Kalnichevski, Oleg wrote:

I would just make the 'stale' connection check optional and off by default. Making the check pluggable would require too much of a change for what is essentially a release candidate. That should be better done as a part of the 3.0 redesign IMO.

As I recall, the isStale function solved a problem that arose when the 
server closed its write channel, but not its read channel.  
HttpClient would then send a request, and only when it went to read the 
response would it fail.

Some possible alternatives:

   * Only do the isStale check if the connection has been sitting in
     the pool for a configurable amount of time (see the sketch after
     this list).  I'm guessing we could choose a value here between 5
     and 30 seconds without any significant change in behavior, that is
     to say, connections won't go stale in less than 20-30s.
   * Perhaps the isStale check is unnecessary for methods that can
     simply repeat themselves, for example, GET, HEAD, OPTIONS, TRACE.
     For those methods, we could allow a retry to fix the problem.
     For methods such as POST and PUT, however, the isStale is probably
     an essential check.
   * Is this a confirmed problem across all VMs and all OSes?  Is this
     a confirmed problem if not invoking localhost?  If it affects one
     platform, could we punt on the issue?  Which is the specific line
     in isStale() that causes the performance degradation?  Is there
     any way to speed up that one line?
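
A sketch of the first alternative - the wrapper class, field, and 
threshold below are mine; nothing like this exists in HttpConnection 
today:

import org.apache.commons.httpclient.HttpConnection;

public class IdleAwareConnection {
    private static final long STALE_CHECK_IDLE_MILLIS = 5000;
    private final HttpConnection connection;
    private long lastUsed = System.currentTimeMillis();

    public IdleAwareConnection(HttpConnection connection) {
        this.connection = connection;
    }

    // Only bother with the (expensive) isStale probe if the connection
    // has been idle long enough that it might plausibly have gone stale.
    public boolean needsStaleCheck() {
        return System.currentTimeMillis() - lastUsed > STALE_CHECK_IDLE_MILLIS;
    }

    // Call after each successful request/response exchange.
    public void markUsed() {
        lastUsed = System.currentTimeMillis();
    }

    public HttpConnection getConnection() {
        return connection;
    }
}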

-Eric.







Re: Commons-HttpClient conflict with WebDAVClient

2003-07-26 Thread Eric Johnson
Daniel,

Sorry I didn't respond sooner, but I was on vacation. I'm hoping you've 
gotten an answer on the Slide mailing list, but I've not had a chance to 
check.

I've been doing some work to keep the latest code in Slide up-to-date 
with the latest of HttpClient 2.0.

You should have success there. There is a bunch of interest in the Slide 
webdavclient, using the latest of HttpClient 2.0 branch, so it should be 
relatively easy to work with.

There is currently a patch outstanding which should help matters 
somewhat, as well. The Slide folks tend to be a lot less responsive than 
the excellent group here at HttpClient, so it may not be as smooth as 
you'd like.

-Eric.

Daniel Joshua wrote:

Thanks.

 

You will have a lot more luck asking these questions on the slide
mailing list since they've actually looked at the code in the past
year. :)
   

I have joined and posted on the Slide list... so far the list is quiet; I guess
it is a low traffic mailing list.
Regards,
Daniel
-Original Message-
From: Adrian Sutton [mailto:[EMAIL PROTECTED]
Sent: Thursday, 24 July, 2003 4:47 PM
To: Commons HttpClient Project
Subject: Re: Commons-HttpClient conflict with WebDAVClient


On Thursday, July 24, 2003, at 06:02  PM, Daniel Joshua wrote:

 

Yes actually, but it's a bit of a nasty hack that was done before
slide
was actually updated to HttpClient 2.0 (there were a lot of API
changes
between 1.0 and 2.0).
 

before slide was actually updated?
so now the latest WebDAVClient has which version of HttpClient ?
eg. HttpClient 2.0 beta 1 or 2 ?
   

I have no idea, I do know that it was updated in the past few months
and they were planning to keep it in sync with HttpClient releases.
I'd suggest you download the slide source code, rip out the HttpClient
classes then compile your own jar against whatever version you like
(beta 2 would be my recommendation).  You may need to tweak a few
things in Slide, but most likely it will just work.
You will have a lot more luck asking these questions on the slide
mailing list since they've actually looked at the code in the past
year. :)
 

Regards,
Daniel
   

Regards,

Adrian Sutton.

--
Intencha tomorrow's technology today
Ph: 38478913 0422236329
Suite 8/29 Oatland Crescent
Holland Park West 4121
Australia QLD
www.intencha.com


 







Re: [VOTE] Add commons-codec as an HttpClient dependency

2003-07-16 Thread Eric Johnson
Kalnichevski, Oleg wrote:

Right, but the problem is those folks who use CVS snapshots while insisting on complete (maximum) API compatibility with 2.0 branch. They have not been quite receptive to 'but it was part of our plan for 2.1' kind of arguments up to now. 

Of course, I can put up the same 'Evil Comrade' act as always, but I have a feeling that some of them did not quite appreciate my sarcasm. 

Oleg

 

One possible solution would be to build a version of HttpClient that 
unpacks the commons-codec jar and combines it with HttpClient.  People who 
need the "one jar does it all" could use that one.  We could even be 
clever and pull out only those class files we need, thus satisfying 
Adrian's desire as well.  Granted, there would then be two JAR files, 
but we could clearly indicate that the combination one would go away by 3.0.

Just an idea.

-Eric.





Re: Occassional long download times

2003-07-08 Thread Eric Johnson
See:

http://archives.apache.org/eyebrowse/[EMAIL PROTECTED]msgId=754530

You'll have to configure Log4J, which I do not know well, but it should 
be straightforward to do in conjunction with the httpclient logging guide.

-Eric.

Michael Mattox wrote:

I've tried to turn the wire logging on but can't get it to work.  I pasted
the code to set the system properties but that doesn't work.  Is it possible
to use the wire log with Log4J?
Thanks
Michael
 

-Original Message-
From: Eric Johnson [mailto:[EMAIL PROTECTED]
Sent: Monday, July 07, 2003 3:23 PM
To: Commons HttpClient Project
Subject: Re: Occassional long download times
Michael,

You might try turning on the wire and/or trace logging (which sounds
like it might generate a lot of data), but it would also tell you
exactly where the delay occurs.
Knowing where the culprit occurs would provide additional detail that
might clearly identify whether the problem lies with HttpClient or the
network.
-Eric.
   







 





Re: Occassional long download times

2003-07-07 Thread Eric Johnson
Michael,

You might try turning on the wire and/or trace logging (which sounds 
like it might generate a lot of data), but it would also tell you 
exactly where the delay occurs.

Knowing where the culprit occurs would provide additional detail that 
might clearly identify whether the problem lies with HttpClient or the 
network.

-Eric.

Michael Mattox wrote:

I'm experiencing something weird and I just want to see if anyone else has
experienced it, and if it may be something I'm doing.  Basically my
application is monitoring 700+ websites every 5 minutes and timing the time
it takes to connect and download.  The main goal is to verify the site is
working, so I don't need exact precision on the times.  Here's some of my
code to time the download:
method = new GetMethod(uri.toString());
method.setFollowRedirects(true);
method.setHttp11(false);
DefaultMethodRetryHandler retry = new DefaultMethodRetryHandler();
retry.setRequestSentRetryEnabled(true);
retry.setRetryCount(3);
method.setMethodRetryHandler(retry);
start = System.currentTimeMillis();
method.execute(state, connection);
msi.setDuration(System.currentTimeMillis() - start);
What I see is that normally I get download times of under 150ms and then
occasionally (4-5 times a day) I see a download time of 3000ms or more.  It happens
to the majority of the websites, so I do not believe it's a particular site.
So it must be either my application, or the network.  My application uses a
thread pool and always has multiple threads running (typically 8 at a time
on a 4 CPU machine that's also running tomcat and Postgres), and I've seen
that at exactly the same time a website has a 3000ms download time several
others have normal 150ms times.  So this seems to rule out the network.  I
set my threads to be all MAX_PRIORITY to minimize the interruptions.  Are
there any other explanations?  Any ideas what I can do about it?  My current
thought is to put in some code to say if the download time is more than 10x
the previous time then repeat the download to make sure.  This way our
customers wouldn't see the huge spike in the numbers, but at the same time
if that spike really should be there I don't want to cover it up.
Thanks,
Michael




 





Re: [Proposal] exception handling revised

2003-07-07 Thread Eric Johnson
Oleg,

Thanks for digging into this.

Oleg Kalnichevski wrote:

One of the major shortcomings of the existing architecture is unclear
and convoluted exception handling framework. 

Here's the list of my personal gripes with it

- there is no clear-cut distinction between protocol exceptions (that
are in most cases fatal) and transport exception (that are in most cases
recoverable). As a result possible recovery strategy is not clear (at
least to me)
- Why on earth does HttpException have to extend URIException? That's
just lunacy on a massive scale.
- HttpClient#executeMethod and HttpMethodBase#execute declare but never
throw IOException
 

I'd add my gripe that I simply don't know what exceptions to expect to 
get!  It occurs to me that it might be a good idea to enumerate the 
possible failure points for which we want to establish a contract and 
stick to those expectations.

The scenarios I can think of, as suggested by some of our existing 
exception classes, and what I can recall of cases I've worried about.

   * Authentication failure - AuthenticationException
   * Authentication protocol failure - MalformedChallengeException
   * Bad URI - URIException
   * Cookie protocol failure - MalformedCookieException
   * Date protocol failure - DateException
   * Server not responding to initial attempt at communication -
     IOException?
   * Server not found (DNS lookup failure?) - IOException?
   * Protocol failure - currently triggered by a failure to find an
     HTTP/1.0 or HTTP/1.1 status response line - call this
     HttpProtocolFailure?
   * Just about any communications/IO failure during send - perhaps we
     call this HttpSendFailure - but wrap the underlying IOException?
   * Just about any communications/IO failure during response - perhaps
     we call this HttpReceiveFailure - but wrap the underlying
     IOException?
   * Too many redirects - perhaps HttpExcessiveRedirectException?
There are some exceptions that currently occur internally that shouldn't 
be exposed to clients, such as attempts to write to a recycled 
connection that can fail since the connection is stale.  Those failures 
clearly should generate retries with a fresh connection as we do now, 
and are really an artifact of how Java sockets work, not something that 
clients of HttpClient care about.

With the scenarios above, I only see two exceptions that might possibly 
need to wrap other exceptions, so I lean towards the simpler approach 
that Oleg outlined.
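
To illustrate what approach (1) would mean for callers - a sketch only; 
the retry comment marks a policy the caller would choose, not anything 
the library does for you:

import java.io.IOException;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.methods.GetMethod;

public class RecoverySketch {
    public static void fetch(HttpClient client, String url) throws IOException {
        GetMethod method = new GetMethod(url);
        try {
            client.executeMethod(method);
        } catch (HttpException fatal) {
            // protocol-level failure: in most cases fatal, so give up
            throw fatal;
        } catch (IOException transport) {
            // transport-level failure: in most cases recoverable; a retry
            // with a fresh connection would be reasonable here
            throw transport;
        } finally {
            method.releaseConnection();
        }
    }
}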

-Eric.

I personally see two ways of fixing things

1) Back to the roots 
-
This approach is basically about going back to a very simple, but clear
framework that existed before but got messed up on the way toward
beta-1. 

org.apache.commons.httpclient.HttpException (Root protocol exception)
 |
 +-- org.apache.commons.httpclient.cookie.MalformedCookieException
 |
 +-- org.apache.commons.httpclient.auth.AuthenticationException
 |
 +-- ...
java.io.IOException (Root transport exception; 
 |   all i/o exceptions considered recoverable. Period)
 |
 +-- java.io.InterruptedIOException (timeout)
 |
 +-- ...

Pros:
- simplicity
- no need to 'warp' or 'chain' exceptions. No need for Commons-lang
Cons:
- Some i/o exceptions MIGHT be unrecoverable, but at the moment I can't
think of a single one
- It may not be apparent to everyone that a request that has caused an
IOException can be retried
2) Go elaborate
-
org.apache.commons.lang.exception.NestableException (or equivalent)
 |
 +-- org.apache.commons.httpclient.HttpException (Root exception)
   |
   +-- ...httpclient.HttpProtocolException (Root protocol exception)
   |  |
   |  +-- ...httpclient.cookie.MalformedCookieException
   |  |
   |  +-- ...httpclient.auth.AuthenticationException
   |  |
   |  +-- ...
   |
   +-- ...httpclient.HttpTransportException 
  |   (should 'wrap' java.io.IOException)
  |
  +-- ...httpclient.RecoverableHttpException
  |  |
  |  +-- ...httpclient.TimeoutHttpException
  | |
  | +-- ...httpclient.ConnectTimeoutHttpException
  | |
  | +-- ...httpclient.IOTimeoutHttpException
  |
  +-- ...httpclient.InterruptedHttpException

Pros:
- flexibility
- clarity
Cons:
- complexity
- most likely requires an external dependency 

In my opinion we MUST get exception handling right before we do anything
else. Exception handling is a foundation of any flexible architecture. 

I personally can live with either of these two approaches. If you see
other alternatives, please share your ideas
Cheers

Oleg



 





Re: [VOTE] Re: 2.0 release

2003-06-26 Thread Eric Johnson
Adrian,

+1

OK, my vote is non-binding, but with no test cases, and no code in 
HttpClient that uses the functions, we SHOULD deprecate them.

Even if we decide later, as Sung-Gu suggested, that we might need to 
resurrect them, we should still deprecate them!  Deprecating them is a 
flag for the users of the library that the functions may not behave as 
expected, which is almost certainly true based on the lack of test 
cases, confusion on this list, confusing documentation, and absence of 
any uses within HttpClient.

-Eric.

Adrian Sutton wrote:

All,
Personally, I believe that this issue has gone on far too long and so 
I  would like to propose a vote:

I move the motion that the following methods from 
org.apache.commons.httpclient.util.URIUtil be deprecated for the 2.0 
release and removed in a future release:

toDocumentCharset(String)
toDocumentCharset(String, String)
toProtocolCharset(String)
toProtocolCharset(String, String)
toUsingCharset(String, String, String)
Please cast your votes:

+1 - The methods should be deprecated
0 - Active Abstain (no response being a passive abstain)
-1 - The methods should not be deprecated (veto).  Vetoes must 
contain an explanation of why the veto is appropriate.

Under Jakarta's voting guidelines  
(http://jakarta.apache.org/site/decisions.html) product changes (such  
as this) are subject to lazy consensus, however in this case I would  
like to achieve consensus on the issue and as such the vote will be  
considered passed if there are 3 binding +1 votes and no binding 
vetoes, or the proposal will be turned down if there are any -1 votes.

I would encourage non-committers to submit non-binding votes as well,  
particularly if you can see a use for the methods in question.

Here's my +1.

Regards,

Adrian Sutton.

On Thursday, June 26, 2003, at 06:25  PM, Kalnichevski, Oleg wrote:





Re: 2.0 release - looking to the future

2003-06-26 Thread Eric Johnson
Mike,

Thanks for starting this discussion.  I've been contemplating this one 
for a while.

My development approach follows along the lines of revolution through 
evolution.  Which means, with respect to HttpClient, that on the one 
hand I don't want to encourage too many fundamental changes for the 
existing APIs, except perhaps introducing a modicum of additional 
flexibility, while at the same time building a completely revolutionary 
framework on top of and underneath the existing structure.  Having said 
that, I see 2.1 as the first step in the evolutionary path, while 
building the framework that makes 3.0 possible.

Evolutionary - 2.1 release:
- 16729 - Cross host and port redirects - this bug has the most votes - 
although the projects we have don't need this, I think the flexibility 
it implies is good.
- 10792 - Plugin authentication modules - I'm not sure what this means 
exactly, but it sounds like it adds flexibility, and I'm thinking that 
authentication could be handled in such a way that callbacks to a 
client were obvious and transparent
- My current personal peeve - a better test framework than 
SimpleHttpConnectionManager, one that allows us to more closely mimic real 
HttpConnection behavior, thus enabling more tests without actually 
requiring a real connection.  Based on missing test cases, I think we 
desperately need this, especially for people like me who are not in a 
position to test NTLM authentication, or proxies, without at least 
considerable difficulty, if at all.  Wouldn't it be great, for example, 
if we could test proxy support without actually having to have a proxy 
server hanging around (that Squid proxy comes to mind...).  That would 
mean that we could even test both proxied and non-proxied actions 
without running a separate set of tests under a new configuration.
- Try decoupling classes - JDepends reports a few cycles that might be 
worth breaking if we can.
- A better configuration mechanism.  I'm thinking of the 
javax.xml.parsers.SAXParserFactory interface, where you call setProperty on 
the factory.  I'm thinking that we currently have a variety of hidden 
properties, which we could unify with a single exposed Properties object 
that the user could configure to their preferences.  And we could 
probably define some sort of look-up for a httpclient configuration on 
the classpath, so that clients could simply add one file to their 
classpath and have their HttpClient communications configured.  For 
examples of hidden properties and not so hidden properties, consider 
the following list:

   * the default connection manager
   * the timeout
   * the connection timeout
   * the connection factory timeout
   * strict mode - what is this used for, by the way?
   * follow redirects
   * protocol factories
   * cookie policy
   * - default headers on a request - nice to have
I'm sure there are others.  Making all of these defaults configurable 
with a "deployment descriptor" (otherwise known as a property file in the 
classpath) would be a boon to clients.
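
A sketch of what that lookup could feel like - the file name and property 
keys here are invented for illustration; no such mechanism exists yet:

import java.io.InputStream;
import java.util.Properties;
import org.apache.commons.httpclient.HttpClient;

public class ConfiguredClient {
    public static HttpClient create() throws Exception {
        Properties props = new Properties();
        InputStream in = ConfiguredClient.class
                .getResourceAsStream("/httpclient.properties");
        if (in != null) {
            props.load(in);
            in.close();
        }
        HttpClient client = new HttpClient();
        // apply a couple of the defaults listed above, if present
        // (0 means "no timeout" for both settings)
        client.setTimeout(Integer.parseInt(
                props.getProperty("httpclient.timeout", "0")));
        client.setConnectionTimeout(Integer.parseInt(
                props.getProperty("httpclient.connection.timeout", "0")));
        return client;
    }
}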

I might just recommend stopping there for a 2.1 release - with the idea 
that we release early and often.  This would, of course, mean another 
list for the 2.2 release.  Undoubtedly others have a different set of 
issues that might be more appropriate for 2.1, but whatever that list 
is, I would suggest that it be *short*.

For a 3.0 release, I lean towards a radical redesign built on top of the 
current code to start.
The radical redesign would be built around a framework of interfaces.  
The general idea would be that we expose only one or two key 
implementations of the interfaces, with the implementations of 
everything else being hidden behind a factory facade.

The following is a for example to get the idea across, not any attempt 
at real interfaces...

IHttpClient
- IHttpMethod newMethod(String verb, String url); /* not clear to me 
whether there should be a separate newGetMethod, newPostMethod, 
newPutMethod...) - there are advantages both ways */
- void setProperties(Properties defaultSettings);
- void setProperty(String propName, String propValue);
- void addDefaultHeader(String header, String value);
- String removeDefaultHeader(String header);

IHttpMethod
- int execute() throws ; /* note that in the interface construct, 
the interface keeps track of its http client, so execute can be called 
directly on the method */
- IHttpRequest getRequest();
- IHttpResponse getResponse();
- void setQueryString(NameValuePairs[] pairs);

IHttpRequest
- void addHeader(String header, String value);
IHttpResponse
- String getHeader(String header);
- IStatusLine getStatusLine();
and so on.

Note that I don't particularly like that I stuck the "I" in front of 
existing class names just to make them interfaces, but I did it to get 
the point across and avoid confusion with existing classes.

A cool part of this approach is that we can start writing the 
interfaces, and the implementations of the interfaces right away (we 

Re: [SURVEY] Commons-URI or not?

2003-06-25 Thread Eric Johnson
It would seem that most if not all of the responses from the HttpClient 
crew responded only to the HttpClient list, and not to commons dev.  So 
I'm not sure that all that might need/want to see the entirely negative 
feedback have seen it.  I don't subscribe to commons-dev, so if this 
doesn't get through, would someone mind reposting it there?

I too am against a separate URI commons package, at least for the moment.

In any event, who else depends on a URI class?  At the moment, there are 
three obvious sources for URI type functionality that I am aware of: 
HttpClient, Slide, and JRE 1.4.  Slide, rather than using what is in 
HttpClient, is using its own, even though Slide includes HttpClient in 
its build dependencies.  Without anyone even sharing the code in the 
first place, it doesn't seem like a good candidate for a separate project.

One of the negatives that others have mentioned on the HttpClient list 
is the growing dependency problem within the Apache projects, 
particularly with the myriad of dependencies on commons projects, and 
among the commons projects themselves.  Perhaps what we need to do is 
start clumping some of the commons projects together, as well as having 
the stand-alone pieces we have now.  A first cut at combining some of 
the commons projects into one giant JAR might include:

   * beanutil
   * cli
   * codec
   * collections
   * dbcp
   * fileupload
   * httpclient
   * lang
   * logging
   * net
   * pool
My criteria for the above list were three-fold - base requirement of JRE 
1.2 or later, that the project should have an official blessed release, 
and that it shouldn't depend on any outside libraries - like an XML 
parser - at runtime.  And I'll admit that I'm fudging on HttpClient (and 
file upload) a little, in that I don't think anyone following HttpClient 
would want to include the 1.0 release, but I'm guessing that by the time 
such a project is agreed upon and pulled together, HttpClient 2.0 will 
be final.  At least, here's hoping.

Anyway, to the extent that a separate URI package would make sense, if 
we had a model such as the above, where most people used the one giant 
JAR instead of the individual ones, the creation of a separate commons 
URI project would be largely one of focus and interest, rather than an 
additional dependency quagmire.

-Eric Johnson

Sung-Gu wrote:

Hi all,

I suggest that jakarta-commons provide flexible URI implementations
as a package.
Various applications using the URI concept come up in the internet world, and
they need common mechanisms and algorithms for URI.
For example, all internet programs will need fundamental functionalities of
URI, like extensible parsing, a manipulation container for URL references,
URN and URC, an escape codec mechanism, charset transformation functionality,
URI transformation from real world identities or URN, or other
transformations related to DNS or telephony...   If it were prepared
commonly in Jakarta, we could save development effort.   So I suggest a new
commons-uri package.
FYI, currently commons-httpclient is using it.

Any comments?
Or any +1, -1?
Sung-Gu

P.S.: If the requirement is very weak, I want  to put the new package into
commons-sandbox even for a long while in my opinion...
 







Re: Test Coverage

2003-06-23 Thread Eric Johnson
Adrian,

Good of you to bring up the topic...

Adrian Sutton wrote:

Howdy all,
We now have a license for Clover to analyze our test cases, and I am now 
just starting to work through adding test cases to improve our code 
coverage.
coverage may actually be a bad thing.  I've gotten AuthChallengeParser 
to 100% coverage now so let me use it as an example: 
I think, like all code metrics, that coverage is a useful metric, but 
one that can be just as misleading as any other code metric - for 
example, lines of comments, or lines of code.  It is just as easy to 
have a false sense of security due to metrics such as this one.  If you 
consider a simple function like:

void doSomething(...) {
if (a) { ... }
if (b) { ... }
if (c) { ... }
...
}
Your coverage tool will tell you you've exercised 100% of the code 
when you've only covered three of the eight possible ways through the 
routine, and that is in a function without any looping.  In any case, 
the above function may either need detailed test cases that track for 
all eight possibilities, or only three, depending on how the routine 
works, and the contract it exposes, and how critical the code is.

So I would agree with your premise that 100% coverage could be a bad 
thing, particularly if it lulls you into a false sense of security.  
Also, given scarce resources, we should focus our time on testing those 
areas that most need it, rather than simply improving the statistics.  
On the other hand, now that we have the statistic, it should never go 
_down_.



There are four test cases that I consider pedantic and 1 of those that 
I really don't like.  The pedantic ones are:
[snip]

Now, I don't mind what happens with any of these decisions to be 
honest as none affect the actual behaviour of HttpClient - they are 
very much edge cases.  I would however like to set up a policy on the 
types of test cases I should create (do we want to avoid testing 
things like the pedantic things above) as well as the best way to keep 
track of questionable or overly pedantic test cases.  Currently I'm 
just adding a "// pedantic" comment above any test case that seems pedantic, 
and a TODO comment over anything that I think may require a change to the 
code but isn't clearly a bug.

I figure from time to time I can provide a list of issues that need to 
be considered as I work my way through the codebase.
I struggled with a project with reams of test code on getters and 
setters, yet the value of those get/set functions and the tests that 
went with them wasn't properly evaluated until late in the project.  My 
attitude would be to avoid adding such test cases, to the extent that 
doing so effectively enlarges the contract of the API.  On the other 
hand, where such tests already exists, I don't see much point in 
removing them until they are actually in the way, either wrong or 
testing deprecated functionality.

Personally, I'm hoping to achieve 100% test coverage firstly because 
I've discovered how dependent I am on having good test cases while 
working on HttpClient (most people don't have the detailed level of 
knowledge that Mike and Oleg do and thus aren't aware that a change 
will break some other section of code - NTLM is a regular victim of 
this).  Also, aiming for 100% coverage makes a very clear-cut decision 
on when the job is done, which makes life easier as well and makes it 
much more noticeable when new test cases need to be added.

Any thoughts?
If there is one thing I'd like to see with the testing code of 
HttpClient, it is a better way to test without having to actually have a 
connection to either local or remote host or proxy server.  I'd 
enhance SimpleConnection and SimpleConnectionManager so that we could 
simulate more cases without actually having to resort to connecting to 
actual servers.  Putting in simulations of proxy handling, simulations 
of dropped connections, NTLM, and so on, would be very cool.  And I'd 
like a way to do it so that you simply write a configuration file that 
specifies what should be received and what the response should be.  
Unfortunately, I've not had the time to tackle the problem, and this 
week I'm spending my scarce open source time on the client side Slide 
DAV library, to improve its compatibility with HttpClient.

Maybe someone else can run with the idea?

-Eric.





Re: Deadlock problem with MultiThreadedHttpConnectionManager

2003-06-18 Thread Eric Johnson
Ortwin,

It is an odd problem.  Not quite a dead-lock in the traditional sense.  
One thread is waiting on the other, and the other is waiting for the 
garbage collector.  It just so happens that the garbage collector will 
never kick in, because the first thread happens to be the AWT Event 
thread, so the user cannot interact with the application any further, 
thus no objects get allocated, and there is no reason to garbage 
collect. To oversimplify, Thread A depends on Thread B depends on Thread 
C depends (indirectly) on Thread A.

Ortwin Glück wrote:

It's not clear to me how this can be a dead lock.

Eric Johnson wrote:

MultiThreadedHttpConnectionManager.doGetConnection() - line 302, 
which reads connectionPool.wait(timeToWait),


This is a limited wait, not infinite. Oleg, do we ensure that 
timeToWait > 0 somehow?
You seem to be echoing one of my recommendations - the code should 
disallow passing zero here.  It doesn't currently, but I think it 
should, since the timeout value it is using is the one associated 
internally with socket connection timeouts, which if not set essentially 
defaults to the lower bound of what all the intervening routers allow, 
whereas this particular wait has no limit besides what we pass.


and the other thread at

MultiThreadedHttpConnectionManager.ReferenceQueuedThread.run(), line 
665, which reads Reference ref = referenceQueue.remove();.


Both lines of code are in a block synchronized on the same object 
(connectionPool). So how can what you observe happen? Two different 
threads both in a synchronized block on the same object?








Re: Deadlock problem with MultiThreadedHttpConnectionManager

2003-06-18 Thread Eric Johnson
Mike,

I think you're roughly agreeing with what I would conclude, but I wasn't 
sure and I wanted to get other's feedback.

Michael Becke wrote:

Doing something time-consuming from the event thread is a little 
questionable I think.  It's an easy way to keep a user from using the 
UI but it causes problems like this.  I would suggest executing the 
HttpClient call from another thread and popping up a modal dialog.  
This way the UI is still responsive and you can add a cancel button to 
stop the HttpClient method if you like.
Unfortunately, I am not directly responsible for the design and 
implementation of the product, and we're focusing resources 
elsewhere, so dramatic changes like the ones you suggest are not 
possible.  I agree with your point, though, and would change it if I could!


This leads me to ask two questions:
Should we add a call to System.gc() at line 302 of 
MultiThreadedHttpConnectionManager?


The support for detecting GCed connections is a last resort.  In 
general it should never be relied on and was mostly just added as a 
cool feature.

Doing explicit GCs from within the HttpClient code is definitely a 
hack.  It can be quite time consuming and there is no guarantee that it 
will have any effect.  You could certainly add a GC in your code but I 
think it is not something we want to include in HttpClient.
Yeah, I was leaning that way too, in that the gc() call is a hack.


Should we ever invoke the connectionPool.wait() with a zero value, 
or should this always time out?  I think this would be better if it 
always timed out, as it is possible, as my scenario shows, to get 
into states where the garbage collector never runs, the connections 
are never freed, and the application grinds to a halt.


Having a zero value is quite valid I think.  There are some cases when 
you want to wait until a connection is available regardless of how 
long that takes.  Though 0 is the default value it can certainly be 
set in your application.  Having the thread timeout doesn't really 
solve your problem though.  It just lets you know you have a problem.
Letting me know I had a problem would be better than what the 
application does now!  In my case, it would solve the problem to the 
extent that the program would not appear to be frozen.

The price of using the MultiThreadedHttpConnectionManager is that 
connections must be released manually.  It trades off the benefit of 
connection reuse/management for the burden of connection release.

I think the only real solution in this case is to ensure that 
connections are released manually.

Also, I am wondering if all of this is happening in the UI thread why 
are you using the MultiThreadedHttpConnectionManager?  It is really 
for sharing/reusing connections among various threads.
Of course, there are a variety of reasons for using 
MultiThreadedHttpConnectionManager, if only because the other option is, 
well, too simple.  We do have other threads, they just don't happen to 
get kicked off in the problematic scenario.

Thanks for the feedback.

-Eric.





Re: Questions related to the use of HttpClient classes

2003-06-06 Thread Eric Johnson
Om,

Responses follow

Om Narayan wrote:

Please validate my understanding of how HttpClient and the other classes
work together.
1. When I do httpclient = new HttpClient, the object is created by default
with the SimpleHttpConnectionManager.
2. I create post = new PostMethod(url) and call
httpclient.executeMethod(post). At this point httpclient takes the url in
the post object and using the SimpleHttpConnectionManager, creates a
HttpConnection object.  This connection is retained by the
SimpleHttpConnectionManager and (possibly?) reused.
The connection will be reused if you invoke another request that uses 
the Simple connection manager.

3. After connection is established, data is posted, and the result
retrieved.
4. I do post.releaseConnection().  Does the connection get closed at this
point? Is the HttpConnection object still around? What do I need to do if  I
had wanted to have the connection stay alive (because it takes time to
re-establish the connection)?
"Releasing" a connection should not be confused with "closing" the 
connection.  Releasing ensures that it can be safely reused by a 
subsequent request.  For the most part, releasing a connection is 
entirely unnecessary with the SimpleHttpConnectionManager, as it only 
has one connection, and it will release it internally before handing 
it out for a subsequent request.  Closing a connection only happens when 
necessary - as in a "Connection: close" header, an HTTP 1.0 response, or 
any of the other various standard reasons for forcing a connection 
closed.  As a general rule of thumb, HttpClient only closes the 
underlying socket connection when absolutely necessary (unless you are 
using HTTPS and a proxy server, as I recall).

5. Is post.recycle() related to connection in any way? What is the purpose
of this? Is PostMethod ctor an expensive operation?
 

I think that previous discussion on the recycle operation has 
indicated to me that this is a largely spurious function.  It might be 
useful, but I suspect it is mostly premature optimization.  The code in 
our application simply makes a new Method object for each new 
request.  Of course, we don't make millions of requests, which others 
might be, and they might report differently.

6. SimpleHttpConnectionManager doc says "This manager makes no attempt to
provide exclusive access to the contained HttpConnection."  Does this mean
that calling the httpclient.execute() method in this case is not thread-safe?
 

As soon as you do an execute on the next method using the 
SimpleHttpConnectionManager, the previous response data becomes 
invalid.  In this sense, it is not even single thread safe.  For 
example, code written like this: execute A, execute B, read A, read B 
will fail on the attempt to read A.  Used across multiple threads, even 
worse things can happen.  I recommend using 
MultiThreadedHttpConnectionManager.
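
A minimal sketch of the safe ordering with the single shared connection, 
assuming an HttpClient instance named client (the URLs are placeholders):

    PostMethod a = new PostMethod("http://example.com/a");
    client.executeMethod(a);
    String bodyA = a.getResponseBodyAsString(); // read A fully...
    a.releaseConnection();                      // ...and release it

    PostMethod b = new PostMethod("http://example.com/b");
    client.executeMethod(b); // only now is it safe to start B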

7. If I wanted to use connection pooling should I be using
MultiThreadedHttpConnectionManager instead?  Or do I need to implement my
own pooling?
It should be sufficient to use the MultiThreadedHttpConnectionManager, 
which actually serves multiple tasks, one of them being pooling.

I am sure I have other questions, but this will do for now. :)
Thanks.
Om.

Good luck.

-Eric Johnson



Re: Host Header and IP Addresses

2003-04-01 Thread Eric Johnson
Adrian,

Weird.  I just went and looked at the code again to refresh my memory.  
Based on the code, and what I recall of the previous discussion, we had 
agreed on sending the IP address for the host when none other was 
specified.

The only way that I can see that an empty host would be sent would be 
if the HttpConnection.getHost() function returned an empty string, or if 
the Host header was explicitly set to something blank.

From HttpMethodBase.addHostRequestHeader():

    // Per 19.6.1.1 of RFC 2616, it is legal for HTTP/1.0 based
    // applications to send the Host request-header.
    // TODO: Add the ability to disable the sending of this header for
    //       HTTP/1.0 requests.
    String host = conn.getHost();
    int port = conn.getPort();
    if (getRequestHeader("host") != null) {
        LOG.debug(
            "Request to add Host header ignored: header already added");
        return;
    }
    // Note: RFC 2616 uses the term "internet host name" for what goes on the
    // host line.  It would seem to imply that host should be blank if the
    // host is a number instead of a name.  Based on the behavior of web
    // browsers, and the fact that RFC 2616 never defines the phrase "internet
    // host name", and the bad behavior of HttpClient that follows if we
    // send blank, I interpret this as a small misstatement in the RFC, where
    // they meant to say "internet host".  So IP numbers get sent as "host"
    // entries too. -- Eric Johnson 12/13/2002
    if (LOG.isDebugEnabled()) {
        LOG.debug("Adding Host request header");
    }

    // appends the port only if not using the default port for the protocol
    if (conn.getProtocol().getDefaultPort() != port) {
        host += (":" + port);
    }

    setRequestHeader("Host", host);

Maybe there is a different problem, with HttpConnection.getHost() returning 
an empty host instead of a valid one?

- Eric

Adrian Sutton wrote:

Hi guys,
We've just had an interesting support issue come through related to 
HttpClient.  It seems that in a particular configuration 
Microsoft-IIS/5.0 can't handle receiving an empty Host header.  Since 
HttpClient sends this when an IP address is used, our client is having 
problems.

The fix for us is simple, we'll just remove the IP address check from 
the old version of HttpClient we're using (or possibly just tell the 
client the problem is with their server), however I thought I'd point 
out the problem as there were discussions a while back on this 
behavior and as I recall it was a grey area of the HTTP/1.1 spec so 
this info might be useful.  The server correctly handles having no 
Host header or a Host header with an IP address, just not a blank 
one.

I don't know that I'd propose anything change in HttpClient at this 
stage but thought I'd mention the problem here for the record.

Some sample outputs from my telnet tests:

[EMAIL PROTECTED] videovision]$ telnet 164.116.4.65 80
Trying 164.116.4.65...
Connected to web_sql.esd113.k12.wa.us.
Escape character is '^]'.
GET /esd_cms_filemgr/images/logos/google.gif HTTP/1.1
Host:
HTTP/1.1 500 Server Error
Server: Microsoft-IIS/5.0
Date: Tue, 01 Apr 2003 22:57:15 GMT
Content-Type: text/html
Content-Length: 102
<html><head><title>Error</title></head><body>The system cannot find 
the file specified. </body></html>Connection closed by foreign host.
[EMAIL PROTECTED] videovision]$ telnet 164.116.4.65 80
Trying 164.116.4.65...
Connected to web_sql.esd113.k12.wa.us.
Escape character is '^]'.
GET /esd_cms_filemgr/images/logos/google.gif HTTP/1.1
Host: 164.116.4.65

HTTP/1.1 200 OK
Server: Microsoft-IIS/5.0
Date: Tue, 01 Apr 2003 22:57:45 GMT
Content-Type: image/gif
Accept-Ranges: bytes
Last-Modified: Tue, 25 Mar 2003 07:37:56 GMT
ETag: 344c673a1f2c21:a58
Content-Length: 2048
GIF89aP <snip>

Regards,

Adrian Sutton.






Re: doubt about retry

2003-03-27 Thread Eric Johnson
Sergio,

As best I can tell, your stated requirement is one that needs to be 
handled at the server.  Consider:

Your client application -> HttpClient -> JRE -> Client OS -> HTTP 
-> Server OS -> HttpServer -> Server application,
and then, for the response:
Your client application <- HttpClient <- JRE <- Client OS <- HTTP 
<- Server OS <- HttpServer <- Server application.

So far as you know, any one of these layers could consume your 
request, or the response, and fail to deliver the "response processed" 
message back to your client.  At least, if you're really being paranoid, 
you need to worry about this.

If you want to ensure that the server application processes a request at 
most once, then put a unique number into each request, and the server 
should check that the number does not match any earlier request.  Of 
course, the server would need to reject any request without a unique number.
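
A sketch of the client side of that idea - the header name here is 
invented for illustration, and the server would have to enforce the 
uniqueness check:

    PostMethod post = new PostMethod("http://example.com/order");
    // a unique token the server can use to reject duplicate requests
    post.setRequestHeader("X-Request-Id",
        java.util.UUID.randomUUID().toString());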

It might be possible to approximate your need by a change to 
HttpClient.  Right now, it doesn't guarantee when a recoverable 
exception gets thrown.  It could, for example, be thrown before writing 
the body of any request - in which case you would have your iron-clad 
guarantee that server did not get the request.  That could be a 
sufficient level of protection for what you need.  Is it?

-Eric.

Sergio Berna wrote:

Hello,

I have a small question regarding the HttpException that usually happens
when the connection has been closed prior to re-using it.
In your schema you advise retrying the connection, since a new request is
likely to work. The problem I'm facing is whether I can be absolutely sure
that the first request did not reach the server.
For example, imagine I'm performing some sort of request to a remote machine
through HTTP; this request must be unique and cannot be duplicated. Can I be
fully sure that the first request did not reach the server if I get the
recoverable error?
Thanks.

Sergio.

 





Re: CRLF and Connection: close

2003-03-21 Thread Eric Johnson
Mike,

I like the idea of a cap on the total number of connections as a 
configurable default.

Perhaps HttpClient doesn't need to implicitly perform connection 
recycling on idle connections, but how about adding an explicit method 
on MultiThreadedConnectionManager that clients can call - something to 
the effect of closeConnectionsOlderThan(...).  Then clients can invoke 
the cleanup in the way that they desire.  I cringe at the idea of 
adding a thread just to implicitly perform the close.  I suppose, as an 
alternative, we could simply look for idle connections whenever a connection 
gets returned to the pool, but that might be too blunt an instrument for 
Carl.

One last thought that occurred to me - what if we added the ability to 
observe the MultiThreadedConnectionManager, a 
ConnectionPoolObserver?  Then clients could use that as an opportunity 
to configure the behavior.  Several possible functions we could put on that:
- newConnectionCreated(connectionmanager, server)
- connectionReused(connectionmanager, server, numIdle, numInUse)
- connectionRecycled(connectionmanager, server, numIdle, numInUse)

Then clients of the manager could control the behavior without having 
to write a completely new implementation.  A client could, for example, 
decide that one particular server was allowed to have many connections, 
but restrict all other servers to a maximum of two.
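
To make that concrete, the observer might look something like this - 
purely hypothetical, nothing like it exists in HttpClient today:

    public interface ConnectionPoolObserver {
        void newConnectionCreated(HttpConnectionManager manager,
            String server);
        void connectionReused(HttpConnectionManager manager,
            String server, int numIdle, int numInUse);
        void connectionRecycled(HttpConnectionManager manager,
            String server, int numIdle, int numInUse);
    }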

-Eric.

Michael Becke wrote:

The second thing has to do with how Keep-alive connections behave. 
This is a
multi-threaded app, using MultiThreadedHttpConnectionManager. It 
works great,
however I don't get much benefit of the shared Connections, because 
I'm not
connecting to the same site more than once, generally. That's OK, the 
problem
I run into is that after running for not very long, I suddenly start 
getting
everything timing out. It's hard to really pinpoint the timing, given all the 
activity, and no thread identifiers in the log messages, but I think 
what is
activity, and no thread identifiers in the log messages, but I think 
what is
happening is that the system is simply running out of file handles or
system-level connections. A quick netstat -n shows a whole bunch of 
open,
TIME_WAIT, and other connections. It seems that the Connection 
Manager is
keeping them around for re-use, following HTTP/1.1. One fix was to send 
"Connection: close" as a request header, which really fixed things up, 
but now
I am running into sites that are not responding, and not timing out. 
The log
traces into ReadRawLine() and just sits there. I am still tracking 
this down,
I just wonder if anyone else has seen this also?


The problem, as others have been hinting at, is that connections never 
get destroyed.  By default, a max of two connections are created per 
HostConfiguration.  If you are never connecting to the same host more 
than once this is a bit of a problem.  The 
MultiThreadedHttpConnectionManager was designed to reuse connections 
for a small number of hosts.  So as you have guessed the connection 
manager will continue to create connections to new hosts and will 
never reclaim the old ones.  We have a couple of options for fixing 
this, here are a few:

- create a cap on the max number of connections that can be created.  
once reached unused connections will be reclaimed
- implement some kind of connection cache/timeout.  only connections 
that have been used in the last X milliseconds will be kept
- implement a new type of connection manager that better fits this 
kind of process.  in particular it would focus less on connections per 
host, but more on total concurrent connections.  in general we could 
introduce a whole new set of connection managers that are optimized 
for different use scenarios

These are just a few ideas.  What do others think?

Mike






Re: First org.apache.commons.httpclient.contrib package component?

2003-03-13 Thread Eric Johnson
Oleg,

It looks like a fine submission to me.  I think your package name 
suggestion is a good one.  Perhaps a different class name, though.  I'm 
thinking HttpMethodCloner.

-Eric.

Oleg Kalnichevski wrote:

Folks
How about making this utility class the first contribution to our yet
non-existent org.apache.commons.httpclient.contrib package?
Ideas, suggestions, objections?
Cheers
Oleg
On Thu, 2003-03-13 at 14:47, [EMAIL PROTECTED] wrote:
 

Here's the code:

package at.vtg.httpclient;
// or whatever you want ;-)

import org.apache.commons.httpclient.Header;
import org.apache.commons.httpclient.HostConfiguration;
import org.apache.commons.httpclient.HttpMethod;
import org.apache.commons.httpclient.HttpMethodBase;
import org.apache.commons.httpclient.methods.EntityEnclosingMethod;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * In this class are only methods to copy a HttpMethod: PUT, GET, POST,
 * DELETE, TRACE, ...
 *
 * @author Thomas Mathis
 * @version $Revision: 1.4 $
 */
public class HttpMethodUtil {

    private static Log log = LogFactory.getLog(HttpMethodUtil.class);

    private static void copyEntityEnclosingMethod(EntityEnclosingMethod m,
            EntityEnclosingMethod copy)
        throws java.io.IOException
    {
        log.debug("copy EntityEnclosingMethod");
        copy.setRequestBody(m.getRequestBodyAsString());
        copy.setUseExpectHeader(m.getUseExpectHeader());
    }

    private static void copyHttpMethodBase(HttpMethodBase m,
            HttpMethodBase copy) {
        log.debug("copy HttpMethodBase");
        if (m.getHostConfiguration() != null) {
            copy.setHostConfiguration(new HostConfiguration(
                m.getHostConfiguration()));
        }
        copy.setHttp11(m.isHttp11());
        copy.setStrictMode(m.isStrictMode());
    }

    /**
     * Clones a HttpMethod. <br>
     * <b>Attention:</b> You have to clone a method before it has been
     * executed, because the URI can change if followRedirects is set to
     * true.
     *
     * @param m the HttpMethod to clone
     *
     * @return the cloned HttpMethod, null if the HttpMethod could not be
     *         instantiated
     *
     * @throws java.io.IOException if the request body couldn't be read
     */
    public static HttpMethod clone(HttpMethod m) throws java.io.IOException {
        log.debug("clone HttpMethod");
        HttpMethod copy = null;

        // copy the HttpMethod
        try {
            copy = (HttpMethod) m.getClass().newInstance();
        } catch (InstantiationException iEx) {
        } catch (IllegalAccessException iaEx) {
        }
        if (copy == null) {
            return null;
        }
        copy.setDoAuthentication(m.getDoAuthentication());
        copy.setFollowRedirects(m.getFollowRedirects());
        copy.setPath(m.getPath());
        copy.setQueryString(m.getQueryString());

        // clone the headers
        Header[] h = m.getRequestHeaders();
        int size = (h == null) ? 0 : h.length;
        for (int i = 0; i < size; i++) {
            copy.setRequestHeader(new Header(h[i].getName(),
                h[i].getValue()));
        }
        copy.setStrictMode(m.isStrictMode());

        if (m instanceof HttpMethodBase) {
            copyHttpMethodBase((HttpMethodBase) m, (HttpMethodBase) copy);
        }
        if (m instanceof EntityEnclosingMethod) {
            copyEntityEnclosingMethod((EntityEnclosingMethod) m,
                (EntityEnclosingMethod) copy);
        }
        return copy;
    }
}








Re: How to send byte-array data in a multipart post?

2003-02-28 Thread Eric Johnson
James,

The way I built my mental model, a multipart/form-data post consists 
of two distinct kinds of parts: file parts and string parts.  The 
fundamental distinction is one of binary versus text data, although 
the distinction is, as always, somewhat arbitrary.  I think the 
nomenclature comes from staring at too many web browsers, where the 
<input type="file" /> tag might lead one to call that kind of part a 
"file" part, rather than something like a "binary" or "raw bytes" part, 
and <input type="text"> or <input type="hidden">, which both send 
strings, end up getting called a StringPart.

For the raw part (aka file), you need to indicate a data source.  It 
isn't quite sufficient to pass an InputStream, as sometimes HttpClient 
needs to retry sending your request, in which case it may need to 
restart the reading of your original data.
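
Putting those two pieces together, a minimal sketch of an in-memory 
upload (the part names, file name, and URL are placeholders, and client 
is an existing HttpClient instance):

    byte[] data = "already in memory".getBytes();
    MultipartPostMethod post =
        new MultipartPostMethod("http://example.com/upload");
    post.addPart(new StringPart("comment", "sent from memory"));
    // the PartSource lets HttpClient re-read the bytes on a retry
    post.addPart(new FilePart("upload",
        new ByteArrayPartSource("report.bin", data)));
    client.executeMethod(post);
    post.releaseConnection();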

Hope that helps to clarify.

-Eric.

Couball, James wrote:

This seems non-intuitive... or do I just not understand the
reason/responsibility for FilePart and StringPart?  The documentation
doesn't shed any light here.
Could someone help me to understand?

Sincerely,
James.
-Original Message-
From: Michael Becke [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 28, 2003 4:37 AM
To: Commons HttpClient Project
Subject: Re: How to send byte-array data in a multipart post?

You want to use the FilePart.  It accepts a PartSource in the 
constructor, which ByteArrayPartSource implements.

Mike

On Friday, February 28, 2003, at 07:04 AM, 
[EMAIL PROTECTED] wrote:

 

I'd like to send data already in memory as a byte array in a multipart 
post.
In the package org.apache.commons.httpclient.methods.multipart I saw the
class "ByteArrayPartSource" but no corresponding "ByteArrayPart" class to
use in MultipartPostMethod.addPart(Part part). Is there a special reason or
problem why this class is not provided in the core?

Thank you for your help,
Olaf





 





Re: Problem with SSL Certificate

2003-02-19 Thread Eric Johnson
Hi,

I brought this up on the commons dev thread and forgot
to post the idea here.

You'll need to write your own implementation of the
SecureProtocolSocketFactory to replace the
SSLProtocolSocketFactory implementation.  Add a
socketFactory argument to the constructor of this
class and use the socket factory instead of the calls
to SSLSocketFactory.getDefault() used in
SSLProtocolSocketFactory.

I think this idea ought to replace
SSLProtocolSocketFactory FWIW.  I just hadn't had time
to send it in or type up the code for it yet.
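
For the record, a sketch of the idea - the class name is mine, and this 
is untested:

    import java.io.IOException;
    import java.net.InetAddress;
    import java.net.Socket;
    import javax.net.ssl.SSLSocketFactory;
    import org.apache.commons.httpclient.protocol.SecureProtocolSocketFactory;

    public class CustomSSLProtocolSocketFactory
            implements SecureProtocolSocketFactory {

        private final SSLSocketFactory factory;

        // pass in a factory obtained from your own SSLContext
        public CustomSSLProtocolSocketFactory(SSLSocketFactory factory) {
            this.factory = factory;
        }

        public Socket createSocket(String host, int port)
                throws IOException {
            return factory.createSocket(host, port);
        }

        public Socket createSocket(String host, int port,
                InetAddress clientHost, int clientPort) throws IOException {
            return factory.createSocket(host, port, clientHost, clientPort);
        }

        public Socket createSocket(Socket socket, String host, int port,
                boolean autoClose) throws IOException {
            return factory.createSocket(socket, host, port, autoClose);
        }
    }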

Eric Johnson (not the one that regularly contributes,
but one that might like to in the near future.)

:)

--- Carlos_Cortés_del_Valle_de_la_Lastra
[EMAIL PROTECTED] wrote:
 I have a problem using the httpclient classes. I need
 to connect through the https protocol with PostMethod. But
 when I execute the method, an Exception occurs
 because it doesn't find the certificate I've created
 for this:
 
 public class RobotImpl{
 
 //Init server...
 public void iniciar() throws ExceptionGlobal{
 try{
 URL u = new
 URL("https://localhost:8443/Jsp2.jsp");
 org.apache.commons.httpclient.URI
 uri=new org.apache.commons.httpclient.URI(u);
 HttpClient client = new
 HttpClient();
 HostConfiguration hc = new
 HostConfiguration();
 hc.setHost(uri);
 client.setHostConfiguration(hc);
 client.setTimeout(3);
 
 
 PostMethod post = new
 PostMethod("https://localhost:8443/Jsp2.jsp");
 
 int iResultCode =
 client.executeMethod(post);
 }
 catch
 ...
 }//end code
 
 
 
 Exception Message
 
 javax.net.ssl.SSLHandshakeException:
 java.security.cert.CertificateException: Couldn't
 find trusted certificate
  at

com.sun.net.ssl.internal.ssl.BaseSSLSocketImpl.a(DashoA6275)
  at

com.sun.net.ssl.internal.ssl.SSLSocketImpl.a(DashoA6275)
  at

com.sun.net.ssl.internal.ssl.SSLSocketImpl.a(DashoA6275)
  at

com.sun.net.ssl.internal.ssl.SunJSSE_az.a(DashoA6275)
  at

com.sun.net.ssl.internal.ssl.SunJSSE_az.a(DashoA6275)
  at

com.sun.net.ssl.internal.ssl.SunJSSE_ax.a(DashoA6275)...
 
 
 
 I tried to create a trust manager that does not
 validate certificate chains, but it doesn't work...
 this is the code:
 
 TrustManager[] trustAllCerts = new TrustManager[]{
 new X509TrustManager() {
 public java.security.cert.X509Certificate[]
 getAcceptedIssuers() {
 return null;
 }
 public void checkClientTrusted(
 java.security.cert.X509Certificate[]
 certs, String authType) {
 }
 public void checkServerTrusted(
 java.security.cert.X509Certificate[]
 certs, String authType) {
 }
 }
 };
 // Install the all-trusting trust manager
 try {
 SSLContext sc = SSLContext.getInstance("SSL");
 sc.init(null, trustAllCerts, new
 java.security.SecureRandom());


HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
 } catch (Exception e) {
 }
 
 
 thanks for advance,
 Carlos


__
Do you Yahoo!?
Yahoo! Shopping - Send Flowers for Valentine's Day
http://shopping.yahoo.com





Re: [PATCH] SSLProtocolSocketFactory

2003-02-19 Thread Eric Johnson
Great!

I also noticed that Sun's HttpsURLConnection class
allows you to specify a HostNameVerifier.  I'm not
really sure how this works but it might be worth
thinking about including in httpclient.

Eric

--- Michael Becke [EMAIL PROTECTED] wrote:
 Attached is a patch that adds an 
 SSLProtocolSocketFactory(SSLSocketFactory)
 constructor.  This is just a 
 convenience constructor so someone does not have to
 re-implement this 
 class to use a custom SSLSocketFactory.
 
 Mike
 
 


__
Do you Yahoo!?
Yahoo! Shopping - Send Flowers for Valentine's Day
http://shopping.yahoo.com





Re: Servlet communication (was Re : Bug 13463)

2003-02-18 Thread Eric Johnson
Aurelien,

Aurelien Pernoud wrote:


I'm making two distant servlets interact using httpclient. I get the stream
and strip what is unnecessary, rewrite some tags and location hrefs - well,
I'm making changes to the HTML returned.

Imagine now the two servlets are on the same server; it works fine (I make
calls to localhost), but I know it could be much faster and more reliable
to make the servlets interact directly. So I took a look at the ways to do
the things I do (get the stream from another servlet, work with it, and
write into the stream of another servlet) and the only thing I found is:

- the include() method on ServletRequestDispatcher, and forward()
 

You could use the include function, but create your own implementation 
of the HttpServletRequest and HttpServletResponse interfaces.  As for 
"more reliable", you might find it more reliable to decouple the two 
servlets by using a URL, rather than trying to tightly couple them via a 
RequestDispatcher.  In other words, keep your existing solution.  In 
that way, if you decide later that "more reliable" means "better 
performance", you could put the two servlets on two different machines, 
and get twice the performance.
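
A rough sketch of the wrapper idea, using the servlet API's standard 
response wrapper (Servlet 2.3) to capture what the included servlet 
writes - the class name is mine:

    import java.io.PrintWriter;
    import java.io.StringWriter;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpServletResponseWrapper;

    public class CapturingResponse extends HttpServletResponseWrapper {
        private final StringWriter buffer = new StringWriter();

        public CapturingResponse(HttpServletResponse response) {
            super(response);
        }

        public PrintWriter getWriter() {
            // the included servlet writes into our buffer,
            // not the real response
            return new PrintWriter(buffer);
        }

        public String getCaptured() {
            return buffer.toString();
        }
    }

You would pass an instance of this to RequestDispatcher.include(), rewrite 
the captured HTML, and then write the result to the real response.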

But I can't simply include the response, I have to work with it before
including it in the response. Any way to do it without breaking the working
of the distant servlet?

Sorry for asking here, but I don't know where to post this kind of
question... Sun forums?
 

You might try the Tomcat lists.


Thx for any pointers,
Aurelien


-Eric Johnson



 







httpclient client certificate authentication

2003-02-13 Thread Eric Johnson
Hello,

I was looking into using the httpclient in an
application that requires client certificate
authentication.  According to the JSSE documentation,
the mechanism for doing this is to get your
SSLSocketFactory from an SSLContext, which allows you
to specify the KeyManager and TrustManager.  I plan on
writing my own implementation of the
SecureProtocolSocketFactory that takes a SSLContext as
an argument for a constructor and has a
SSLSocketFactory field.  I just thought I'd pass this
along as it might be a nice addition to
SSLProtocolSocketFactory.  I have no need of it, but
if anyone has an idea of how to specify a particular
HostNameVerifier similar to SUN's method in
HttpsUrlConnection, that might also be a nice
addition.

BTW, if anyone has a strong opinion on whether I
should even consider HttpClient for my application,
please feel free to share it with me.  I realize it's
only at alpha2, but my options have pretty well been
narrowed down to rolling my own feature-starved
httpclient, mucking about with the sun.net... client,
or using the commons-httpclient.

Regards,

Eric

__
Do you Yahoo!?
Yahoo! Shopping - Send Flowers for Valentine's Day
http://shopping.yahoo.com





Re: File Upload

2003-02-13 Thread Eric Johnson
Daniel,

While others have provided you with alternative suggestions to pursue, I 
had one additional thought.  If all you are interested in sending is the 
file name, you should be able to extract that from the HTTP PUT request 
itself - unless of course you want the file name to be 
different from the actual file name that you use in the PUT request.

I could imagine, for example, that you want to send a file name like 
"c:\program files\my application\myfilename.txt", and you cannot put 
that as the path in a PUT request (although you can URL encode it, 
Tomcat and other servlet containers will reject it).  If all you are 
doing, however, is getting the name "myfilename.txt", then using a PUT 
request like "/myservletcontext/myfilename.txt" ought to be sufficient. 
I hope you realize that by allowing the remote client to specify an 
arbitrary location on the local hard-drive to save a file, you have 
introduced a huge security risk.  By doing so, you enable a remote client 
to overwrite any file locally, and take over your system.  In fact, if 
you turn on security in your servlet container (Tomcat?), it should 
start preventing you from creating the files that you're creating, 
unless you are putting the files in the temporary folder (work) 
explicitly allowed by the Servlet API.  See the servlet API for more 
details.

Daniel Walsh wrote:

I'm trying to use HttpClient's PutMethod to transfer a file from my
client application to the associated Servlet.  There isn't actually a UI
for this application, in the end I want to automate the process.  I'm
not sure if there is a better way of doing a file upload such as this,
but what I've been trying to do uses a couple of requests to the Servlet
to implement the entire file transfer:
 

[snip]


    ServletInputStream in = req.getInputStream();
    byte[] reqContent;
    int contentLength = 0;

    while (in.read() != -1)
        ++contentLength;

    reqContent = new byte[contentLength];

    in.read(reqContent);

The above code would appear to be the reason for your blank file on 
the server, in any case.  An InputStream is a one-shot thing.  You get 
to read it once.  Your second attempt to read above 
(in.read(reqContent);) will not actually read anything.
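
A sketch of one way to do it correctly - buffer the bytes while making a 
single pass over the stream:

    ServletInputStream in = req.getInputStream();
    java.io.ByteArrayOutputStream buffer =
        new java.io.ByteArrayOutputStream();
    byte[] chunk = new byte[4096];
    int n;
    while ((n = in.read(chunk)) != -1) {
        buffer.write(chunk, 0, n); // consume the stream exactly once
    }
    byte[] reqContent = buffer.toByteArray();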

-Eric Johnson






Re: Moving to 2.0a2

2003-02-07 Thread Eric Johnson
Aurelien,


Are you sure I have to?  It seems that at the end of HttpMethodBase's
executeMethod (line 957), there's a call to release the connection used.
 

HttpClient changes quickly, so line 957 doesn't seem to match any more. 
If I assume correctly that you are referring to the function 
ensureConnectionRelease, the "if (doneWithConnection)" clause surrounding 
the release call is crucial, in that more often than not, the 
doneWithConnection flag is _false_.  The connection will only be released 
at this point if the response has been fully parsed.  It is likely that 
this will only happen on certain types of requests, like the HEAD method, 
or if you override the respective HttpMethod in some interesting way.

You have numerous choices, all of which should recycle the connection 
appropriately.  I think the following list misses some options, but 
should give you an idea - all functions are on HttpMethod.

   * call getResponseBodyAsStream(), and eventually call close on the
 returned stream, which has the effect of recycling the connection.
   * getResponseBody() - calls getResponseBodyAsStream() for you, and
 also calls close on the stream.
   * getResponseBodyAsString() - a convenience wrapper around
 getResponseBody().
   * releaseConnection() - just calls close directly on the equivalent
 of the result from getResponseBodyAsStream().

Hopefully this clarifies, rather than confuses!
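
A minimal sketch of the first option, with a placeholder URL and an 
existing HttpClient instance named client:

    GetMethod get = new GetMethod("http://example.com/");
    client.executeMethod(get);
    InputStream body = get.getResponseBodyAsStream();
    try {
        // ... consume the body ...
    } finally {
        if (body != null) {
            body.close(); // closing the stream recycles the connection
        }
    }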

-Eric Johnson.





Re: Running out of connections

2003-02-03 Thread Eric Johnson
There is one further alternative to Jeffrey's suggestion.  You can call 
executeMethod, then get the response stream, then call close() on 
that.  The close on the response stream will trigger the right 
sequence of events.

-Eric.

Simon Roberts wrote:

I guess the problem is really mine.  I was somewhat expecting the connection
to be released after it gets a "Connection: close".

    /**
     * A test that illustrates the problem with connections not being
     * recovered from a "Connection: close" response.
     */
    public void testConnectionPool()
        throws IOException, HttpException
    {
        final MultiThreadedHttpConnectionManager manager =
            new MultiThreadedHttpConnectionManager();

        HttpClient httpClient = new HttpClient(manager);
        httpClient.getHostConfiguration().setHost("www.slashdot.org", 80,
            "http");
        // wait up to 2 seconds when getting a HttpConnection
        httpClient.setHttpConnectionFactoryTimeout(2000);
        for (int i = 0; i < 30; i++) {
            HttpMethod method =
                new GetMethod("http://www.slashdot.org/notfound");
            int res = httpClient.executeMethod(method);
            // System.gc();
            // method.releaseConnection();
        }
    }

Uncommenting either of the last two lines makes the problem go away...



- Original Message -
From: Michael Becke [EMAIL PROTECTED]
To: Commons HttpClient Project [EMAIL PROTECTED]
Sent: Sunday, February 02, 2003 6:18 AM
Subject: Re: Running out of connections


 

Hello Simon,

Sorry to be replying so late.  Connections are released when:

1) the response is fully read
2) the connection is manually released via
HttpMethod.releaseConnection() or HttpConnection.releaseConnection()
3) the garbage collector runs and reclaims any connections that are no
longer being used

The most reliable way is to manually release the connection after use.
This goes for successful or unsuccessful requests.  Can you send a
sample of the code you are using that causes this problem?

Mike

On Wednesday, January 29, 2003, at 09:04 PM, Simon Roberts wrote:

   

Gidday,

With the current CVS version, I seem to be having a problem where I
run out of connections to a server.  It happens if I do a bunch of
HTTP operations that fail (404, as it happens) and the reply includes a
"Connection: close".  If no garbage collection happens then the
connections are not freed!

Shouldn't we expire them if we're running out of connections?

Cheers, Simon
 











Re: problem with recycling methods - use case

2003-02-03 Thread Eric Johnson
Michael Becke wrote:

[snip]



I've been looking into this a little more and I'm actually not sure if 
AutoCloseInputStream should close the stream or not.  I vaguely 
remember when this was first written and the various interactions are 
quite complex.  In most cases the AutoCloseInputStream is not wrapping 
the actual socket stream.  Usually there is another stream in the 
middle, either a ContentLengthInputStream or ChunkedInputStream.  Both 
of which do not close the socket stream.  The only case a socket input 
stream will be closed is when there is no chunking or content length.  
For this case it is difficult to determine when the response content 
is complete and therefore when it can be reused.  In this case it 
might actually be reasonable to close the socket stream and force a 
reconnect.  What does everyone think?

I spent quite a while crawling over this part of the code, and it seems 
to me that if you don't have a content length or chunked encoding on the 
response, the RFC indicates that the only way for the client to detect 
the end of the content on the response is to actually close the socket. 
In the case where the AutoCloseInputStream wraps the raw socket, the 
raw socket should be closed, because the server is going to close it anyway.

I went over this code carefully to try to ensure that calling close on 
the getResponseBodyAsStream() result would _always_ be safe.  As a 
result, there is a fairly intricate dance between the stream wrapper, 
the method, and the connection manager, so that they all stay in sync 
with each other, and the appropriate amount of data from a persistent 
connection will be read, rather than attempting to scan to find the 
next occurrence of "HTTP/1.1" in the byte stream.

-Eric.





Re: using httpclient without a HttpClient object (was Redirects?)

2003-02-03 Thread Eric Johnson
Jandalf,

In contemplating your post, I had several thoughts:

   * We should not remove functions from the APIs unless they are
     already deprecated.  To do otherwise will cause people to abandon
     HttpClient (again!) as unstable.  Rather, we should maintain
     deprecated functionality, with a good idea as to when it will be
     removed.
   * Instead of removing the execute() method, we might deprecate it
     and add a sibling function called executeWithoutRetry() - OK,
     that is a bad name, but you hopefully get the idea.
   * I believe strongly in exposing interfaces, rather than instances,
     something HttpClient could do more of.  If you take that approach,
     though, you can make the interfaces public, but the
     implementations package access, thus discouraging certain uses
     without actually preventing them.  In other words, if a client can
     figure out how to correctly implement the HttpConnectionIntf
     interface, they are welcome to do so (at their own risk), and call
     HttpMethod.execute() directly.
   * I couldn't decide whether the "redirect" functionality is
     something that should be pushed down or pulled up.  Is it
     something that HttpMethodBase delegates to another class to do for
     it, or is HttpMethodBase considered "dumb", and not responsible
     for retries, but instead relies on its caller to do that for it?  This
     would speak to the need to deprecate the execute method at all.

I agree that it certainly isn't too late to add this change, but am 
strongly in favor of designing in such a way as to maintain compatibility.

-Eric.

Jeffrey Dever wrote:

Is there anyone out there that has code that actually calls the 
HttpMethod.execute()?  Anything that looks like this:

HttpState state = new HttpState();
HttpConnection = new HttpConnection(host, port, secure);
HttpMethod method = new GetMethod(path);
int status = method.execute(state, connection);

As opposed to this:
HttpClient client = new HttpClient();
HttpMethod method = new 
GetMethod("http://jakarta.apache.org/commons/httpclient/");
int status = client.executeMethod(method);

Anyone that is using the httpclient package without ever instantiating 
a HttpClient object, speak now or forever hold your peace.  If we 
want to do redirects right and simplify the monolithic HttpMethod, then 
we are talking about the possibility of removing HttpConnection and 
HttpMethod.execute() from the public interface, and your code will break.

If nobody actually uses HttpClient like this, and have compelling 
reasons for it, then I don't think this is too late to add this 
important functionality.


Oleg Kalnichevski wrote:

Jandalf,
I believe it's not just about redirects. All the retrial stuff as well
as (most likely) buffering should not be part of HttpMethodBase. It
would require quite a bit of change. I am all for it, but that's would
spell quite a bit of change in just beginning to stabilize HttpClient's
Middle Earth. What's your call?

Oleg

On Mon, 2003-02-03 at 21:17, Jeffrey Dever wrote:
 

Right, we should go back to the HttpClient to get another 
HttpConnection.  Perhaps the entire redirect mechanism should be 
pushed up to the HttpClient class.   I never liked the idea of a 
user holding onto a HttpState, HttpMethod and HttpConnection and 
calling the execute() method itself.  This use is what forces the 
HttpMethodBase to be so large.

I don't see this as being a huge job.  At some point it has to be 
done. The question is whether it is 2.0 or 2.1 content.



Ortwin Glück wrote:

  

Alan Marcinkowski wrote:



I found HttpMethodBase:checkValidRedirect was not honoring cross 
server redirects. Isn't this a common type of redirect? Is there 
a reason it's not supported? [...] unless it's an architectural 
issue [...]



Alan,

unfortunately that is an architectural issue currently. Each 
HttpClient is bound to a specific host and a method can not change 
this since a method has no knowledge about its calling HttpClient 
instance (if any). Moreover the code responsible for handling the 
response is contained inside the methods. But most of it should 
actually be moved to the HttpClient in the future. Sorry for this 
limitation.

Odi











 







Not giving ourselves enough credit on the home page

2003-01-30 Thread Eric Johnson
Based on the recent URI discussion, and some other points, it strikes me 
that we could take a little more credit for the work that has gone into 
HttpClient.

On the HttpClient home page 
(http://jakarta.apache.org/commons/httpclient/index.html) four RFCs are 
listed.

Given all the discussion about URIs being thrown around, I think it 
might be reasonable to add RFC 2396 - for URI compliance.  Then there is 
RFC 1867, for multipart/form-data POST requests (I think I got the right 
number there).  Are there RFCs corresponding to our cookie compliance? 
Any other RFCs we can claim credit for conforming to?

With the recent Protocol changes, I think we've made it relatively 
straightforward for clients of HttpClient to plug in their own secure 
sockets implementations, making it easier to use third party, non-Sun 
solutions.

Someone posted recently that HttpClient appears to be faster than the 
corresponding Sun solution.

Any other up-sides that people can think of?  To push adoption of 
HttpClient, I think we want to get as much up on this page as we can. 
Not to mention, the next time my boss comes and asks me exactly why 
I've been sinking time into HttpClient, I can point to this page, and 
ask, "what's not to like?"

Just a thought.

-Eric.


