Hi, everybody. Sorry for my poor English and for my ignorance too.

        We've built an application that uses UTF-8 as its default encoding (it
runs on an English-locale Linux box, where the default Java encoding is
UTF-8). A few days ago, I added a new servlet filter to our application
(to change the URL jsessionid encoding behavior). This filter
(URLSessionEncodingFilter) was placed before another filter
(SetRequestEncodingFilter) that performs:

if (request.getCharacterEncoding() == null) {
   request.setCharacterEncoding(this.defaultEncoding);
}

        Today I found a bug in our application: except for multipart forms,
all non-English characters (like á and ç) sent in the HttpServletRequest
were getting garbled.
        The only cause I can think of is a request.getParameter() call inside
URLSessionEncodingFilter. Because request.getCharacterEncoding() was
still null at that point, and the request data had to be read (for
parameter parsing), "ISO-8859-1" was taken as the default (I'm just
guessing).
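Your guess matches the symptoms. The following standalone sketch (not your filter code, just an illustration) shows what happens when a UTF-8 request body is decoded with the servlet default of ISO-8859-1 because getParameter() was called before setCharacterEncoding():

```java
import java.nio.charset.StandardCharsets;

// Illustration of the mojibake: a UTF-8 encoded "á" decoded as ISO-8859-1.
public class MojibakeDemo {
    public static void main(String[] args) {
        String original = "á";                                        // U+00E1
        byte[] utf8Bytes = original.getBytes(StandardCharsets.UTF_8); // 0xC3 0xA1
        // The container, seeing no charset on the request, falls back to
        // ISO-8859-1 when it parses the parameters:
        String garbled = new String(utf8Bytes, StandardCharsets.ISO_8859_1);
        System.out.println(garbled); // prints "Ã¡"
    }
}
```

Once the parameters have been parsed with the wrong charset, a later setCharacterEncoding() call has no effect on them, which is why filter order matters.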

http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/servlet/ServletRequest.html#setCharacterEncoding(java.lang.String)


        Well, I've switched the servlet filter execution order and everything
is working again. I'm wondering if there is a better way to do this. Is
there?
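For what it's worth, reordering is the standard fix: the Servlet specification guarantees that filters run in the order their filter-mapping elements appear in web.xml, so mapping the encoding filter first ensures the encoding is set before anything else reads the parameters. A sketch of the relevant fragment (filter names taken from your description):

```xml
<!-- Filters execute in the order of their <filter-mapping> elements. -->
<!-- Map the encoding filter first, so setCharacterEncoding() runs    -->
<!-- before any other filter (or the servlet) calls getParameter().   -->
<filter-mapping>
    <filter-name>SetRequestEncodingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
<filter-mapping>
    <filter-name>URLSessionEncodingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```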
        We've added "<meta http-equiv="Content-Type"
content='text/html;charset=UTF-8'>" to all our pages. I was hoping that
this way web browsers would make a better guess and send the request
data with a UTF-8 charset (I really don't know how this part of the
HTTP specification works).
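The meta tag does help: browsers normally encode submitted form data in the page's charset. The catch is that most browsers do not include a charset attribute in the request's Content-Type header, so getCharacterEncoding() still returns null on the server and your filter's default kicks in. This small demonstration (plain JDK, no servlet code) shows how the page charset changes the bytes the browser actually sends in a form submission:

```java
import java.net.URLEncoder;

// How the page charset affects the percent-encoded form data the
// browser puts on the wire for the same character.
public class FormEncodingDemo {
    public static void main(String[] args) throws Exception {
        // Page served as UTF-8: "á" is sent as two bytes.
        System.out.println(URLEncoder.encode("á", "UTF-8"));      // %C3%A1
        // Page served as ISO-8859-1: the same "á" is one byte.
        System.out.println(URLEncoder.encode("á", "ISO-8859-1")); // %E1
    }
}
```

So the meta tag makes the browser send UTF-8 bytes, but the server still has to be told (via setCharacterEncoding, before any getParameter call) which charset to decode them with.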


        Any suggestions or ideas ?

        Thanks in advance !



---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
