d2i_X509 segmentation violation

2008-01-22 Thread Md Lazreg
 Hi,

I have the following code:

---
unsigned char SERVER_certificate[1406]={
0x30,0x82,0x05,0x7A,0x30,0x82,0x03,0x62,0x02,0x01,0x01,0x30,0x0D,0x06,0x09,0x2A,
:
:
0xb4, 0x78, 0xc6, 0x5a, 0x2d, 0x4c, 0xf9, 0xde, 0x7a
};

   const unsigned char * p = SERVER_certificate;

   X509 * server_cert = d2i_X509(NULL,&p,sizeof(SERVER_certificate));
---

It works on all platforms except on a machine configured as follows:

cat /etc/issue
Red Hat Enterprise Linux AS release 4 (Nahant Update 2)
Kernel \r on an \m
uname -a
Linux bromden 2.6.9-22.EL #1 SMP Mon Sep 19 17:54:55 EDT 2005 ia64 ia64 ia64
GNU/Linux

In such a configuration it crashes in the d2i_X509 function with a
segmentation violation!
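
For reference, one quick sanity check (a sketch only, not a diagnosis - a
header/library version mismatch is just one possible cause of such crashes)
is to confirm that the headers the code was built against and the library
loaded at run time agree:

---
#include <stdio.h>
#include <openssl/crypto.h>   /* OPENSSL_VERSION_NUMBER, SSLeay() */

/* Compare the version recorded in the headers at compile time with the
 * version of the libcrypto actually loaded at run time. */
static int openssl_versions_match(void)
{
    if (SSLeay() != OPENSSL_VERSION_NUMBER) {
        fprintf(stderr, "header/library mismatch: built 0x%lx, running 0x%lx\n",
                (unsigned long)OPENSSL_VERSION_NUMBER, SSLeay());
        return 0;
    }
    return 1;
}
---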


The same code works on
uname -a
Linux unagi 2.6.5-7.97-default #1 SMP Fri Jul 2 14:21:59 UTC 2004 ia64 ia64
ia64 GNU/Linux
cat /etc/issue
Welcome to SUSE LINUX Enterprise Server 9 (ia64) - Kernel \r (\l).


Any ideas, please, as to why d2i_X509 does not work on Red Hat 4 ia64?

Thanks


Re: How to build with zlib support

2008-01-22 Thread Sisyphus


- Original Message - 
From: <[EMAIL PROTECTED]>

.
.

You may just use:
 $ ./Configure zlib --with-zlib-lib=/path --with-zlib-include=/path mingw



It still can't find zlib.

I eventually found that the following works:
./config no-shared zlib -I/usr/local/include -L/usr/local/lib -lz --prefix=/usr/local/depot/static

(One or more of those arguments may be unnecessary.)

I should have tried that earlier - and probably would have done so if I had
been able to find the "-Ixxx" option mentioned in the documentation.

Thanks Marek.

Cheers,
Rob



RE: pem.h type inconsistencies in 0.9.8g

2008-01-22 Thread Dave Thompson
(Sorry, this got stuck and didn't go out as I thought.)

> From: [EMAIL PROTECTED] On Behalf Of Victor Duchovni
> Sent: Wednesday, 16 January, 2008 19:30

> On Wed, Jan 16, 2008 at 05:33:13PM -0600,
> [EMAIL PROTECTED] wrote:
>
> > So this is from 0.9.8g's pem.h:
> >
> > #define   PEM_read_SSL_SESSION(fp,x,cb,u) (SSL_SESSION
> *)PEM_ASN1_read( \
> > (char *(*)())d2i_SSL_SESSION,PEM_STRING_SSL_SESSION,fp,(char **)x,cb,u)

> > void *  PEM_ASN1_read(d2i_of_void *d2i, const char *name, FILE
> *fp, void **x,
> > pem_password_cb *cb, void *u);
> >
> > Anyone notice a problem here?
> >
> > That first argument to PEM_ASN1_read will never, ever be the right type.
> >
> > char * != void *
> >
>
> No actual problem, in ANSI C pointers can be freely converted between
> (type *) and (void *) and back.
>
Actual pointers can. Well, data pointers; Standard C (for 18 years now
ISO as well as ANSI) does not require this for function pointers, although
other standards like POSIX/XPG do, and commonly it also works.

But function-returning-T and function-returning-U are NOT required
to be compatible, and as a result calling the former through a pointer
having the latter type isn't guaranteed. I have worked on a system
where e.g. int* f1() and char* f2() are quite different, and using
( char*(*)() ) f1 will produce all kinds of crashes and corruption
(because int* and char* have different representations, and using
the bits that are actually an int* as if they were a char* is wrong).

For char* versus void* in particular, this is much less likely,
since they are required by Standard C to have the same representation;
according to a footnote this is 'intended' to allow substitution
in several places including here. Formally footnotes are nonnormative
and an implementor can break this by using a different calling convention
and be conforming, but I can't imagine why one would want to.

Standard C does allow you to cast any function pointer type to any
other funcptr type _and back to the/a correct type_, and _then_
using it to call is guaranteed to work. But not while it is 'wrong'.
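
A small illustration of that round-trip rule (hypothetical types, not
OpenSSL code):

---
#include <stdio.h>

static char *make_greeting(void) { return "hello"; }

typedef char *(*char_fn)(void);
typedef int  *(*int_fn)(void);

int main(void)
{
    char_fn original = make_greeting;

    /* Converting to an unrelated function pointer type is allowed... */
    int_fn detour = (int_fn)original;

    /* ...but the only guaranteed-safe use is to convert it back to the
     * correct type before calling through it. */
    char_fn restored = (char_fn)detour;
    printf("%s\n", restored());

    /* Calling through 'detour' directly would not be guaranteed to work. */
    return 0;
}
---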

Similarly, casting a pointer-to-pointer-type like (T**)& Uptr
and using it isn't guaranteed to work, and will in fact fail on the
platform I mentioned above for some combinations of T and U.
In particular it will fail for void* or char* versus any struct*,
and all the d2i's I've had occasion to look at are for struct types.
(This one is in the FAQL for comp.{lang,std}.c showing people have
in fact encountered it in the past, although in the few years I
have been reading them consistently it has not come up 'for real'.)
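
The pointer-to-pointer pattern in question looks roughly like this
(illustrative only); it happens to work where all data pointers share one
representation, but nothing in the standard promises that:

---
#include <stdio.h>

struct widget { int id; };

/* A d2i-style function that writes its result through a struct pointer. */
static void fill(struct widget **out, struct widget *w) { *out = w; }

int main(void)
{
    struct widget w = { 42 };
    void *vp = NULL;

    /* Casting &vp to (struct widget **) compiles, but it is only safe
     * where void * and struct widget * have the same representation. */
    fill((struct widget **)&vp, &w);

    printf("%d\n", ((struct widget *)vp)->id);
    return 0;
}
---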

To absolutely-standard fix this we would have to make all the d2i
routines, and all the 'somedata' pointers, use the same actual type,
such as void*. Besides being a lot of work for no practical benefit,
this gives up too much otherwise useful and helpful type information.

A theoretical compromise might be to use an opaque struct type;
Standard C does require that all pointers _to structs_ have the
same representation (and similarly unions, not relevant here,
but NOT enums); as above the same representation doesn't actually
normatively require substitutability, but in practice it will work.
This is what C++ does; pointers (and references) to class types are
polymorphic within a (defined but unbounded) hierarchy, but not e.g. int* .
This is still a lot of work though, and not needed on mainstream
architectures where _all_ pointer types are implemented the same.
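
A sketch of that opaque-struct compromise (purely illustrative, not a
proposal against the actual OpenSSL headers):

---
/* An opaque struct type; callers only ever hold pointers to it. */
typedef struct d2i_item_st D2I_ITEM;

/* Every d2i-style callback then shares one signature in terms of the
 * opaque type, instead of being cast through char *(*)(). */
typedef D2I_ITEM *(*d2i_fn)(D2I_ITEM **out, const unsigned char **in, long len);
---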





distributing ssl/crypto libs with a distributed, multi-platform app

2008-01-22 Thread Bobby Krupczak
Hi!

I'm writing a distributed application that supports several platforms
(e.g. linux, solaris, solarisx86, and win32).

Because each of these different platforms has varying degrees of
native support for openssl and platforms often have different versions
installed, I would like to include libssl.so and libcrypto.so with my
application so as to ensure each installation is using a consistent
version.

However, I'm running into difficulties because my distribution uses a
single flat directory with binaries named appname-platform - for example,
appname-linux, appname-win32, appname-solaris, etc.  Consequently, I cannot
just drop compiled versions of libssl.so into the same directory without
renaming them to libssl-linux.so, and so on.

Because the shared libraries have an soname in them, renaming the
library file causes the application to fail to load because the loader
is still looking for libssl.so and not libssl-linux.so.

Is there a way to rename the soname in the openssl build?  I've been
trying to read through the SSL/crypto makefiles but my attempts to set
SHLIB_SUFFIX do not seem to be working.

Lastly, has anyone faced this problem before and solved it?  I hate to
re-invent the wheel.

Static linking does not seem appropriate, as I can't get my app to
statically link against libssl/libcrypto (I get lots of undefined
symbol errors), and static linking tends not to work well across
platforms that have different versions of the underlying libraries.

Thanks for any tips, suggestions, or answers.

Bobby


Re: How to build with zlib support

2008-01-22 Thread Marek . Marcola
Hello,
> >> How do I tell ./config where zlib.h is located ?
> > With Configure you may add something like:
> >  --with-zlib-lib=/path
> >  --with-zlib-include=/path
> 
> I find that when I run ./config the operating system 
> "i686-whatever-mingw" is detected. And that seems to work quite well in 
> general.
> 
> If I run ./Configure I usually get a message requesting that I specify 
> the OS/compiler. I'm also given a list of options, but I don't see 
> "i686-whatever-mingw" mentioned anywhere in that list. Consequently I've 
> been avoiding ./Configure, preferring instead to run ./config.
> 
> Is there anything to be gained by running Configure instead of config ?
> 
> I tried:
> ./config no-shared zlib --with-zlib-include=/usr/local/include 
> --with-zlib-lib=/usr/local/lib --prefix/usr/local/depot/static
> 
> I also tried:
> ./config no-shared zlib --with-zlib-include=/c/_32/msys//local/include 
> --with-zlib-lib=/c/_32/msys/local/lib --prefix/usr/local/depot/static
> 
> zlib.h is in C:/_32/msys/local/include (and the msys shell regards that 
> location as /usr/local/include) so either incantation should work. However, 
> I always end up with the error "zlib.h: No such file or directory" - 
> followed by a number of syntax errors arising from the inability to find 
> zlib.h.
> 
> When I look at the actual gcc command that is being run I don't see an -I 
> switch that includes the relevant location for zlib.h, so I guess it's no 
> surprise that zlib.h can't be found.
> 
> I've also tried "CPPFLAGS=-I/usr/local/include" - which usually works for me 
> (wrt other libraries), but no joy in this instance.
> 
> I suspect that if I were to place zlib.h in my MinGW/include folder and 
> libz.a in MinGW/lib folder, then it would work. But I would prefer (if 
> possible) to be able to build without doing that.
You may just use:
  $ ./Configure zlib --with-zlib-lib=/path --with-zlib-include=/path mingw

Best regards,
--
Marek Marcola <[EMAIL PROTECTED]>



Re: FIPS on Linux

2008-01-22 Thread Steve Marquess

Jacob Barrett wrote:


...

2) Does fipsld have to be used, or could I, within the spirit of the security
policy, make my own fipsld of sorts that compiles fips_premain.c with gcc
and links with g++?


The Security Policy does not require that the fipsld utility provided 
with the distribution be used as-is, only that the integrity of 
fipscanister.o be verified at application link time with respect to 
fipscanister.o.sha1.  So yes, you can perform that double link in 
another equivalent fashion.


-Steve M.

--
Steve Marquess
Open Source Software Institute
[EMAIL PROTECTED]



Re: How to build with zlib support

2008-01-22 Thread Sisyphus


- Original Message - 
From: <[EMAIL PROTECTED]>

.
.

How do I tell ./config where zlib.h is located ?

With Configure you may add something like:
 --with-zlib-lib=/path
 --with-zlib-include=/path


I find that when I run ./config the operating system 
"i686-whatever-mingw" is detected. And that seems to work quite well in 
general.


If I run ./Configure I usually get a message requesting that I specify 
the OS/compiler. I'm also given a list of options, but I don't see 
"i686-whatever-mingw" mentioned anywhere in that list. Consequently I've 
been avoiding ./Configure, preferring instead to run ./config.


Is there anything to be gained by running Configure instead of config ?

I tried:
./config no-shared 
zlib --with-zlib-include=/usr/local/include --with-zlib-lib=/usr/local/lib --prefix/usr/local/depot/static


I also tried:
./config no-shared 
zlib --with-zlib-include=/c/_32/msys//local/include --with-zlib-lib=/c/_32/msys/local/lib 
--prefix/usr/local/depot/static


zlib.h is in C:/_32/msys/local/include (and the msys shell regards that 
location as /usr/local/include) so either incantation should work. However, 
I always end up with the error "zlib.h:  No such file or directory" - 
followed by a number of syntax errors arising from the inability to find 
zlib.h.


When I look at the actual gcc command that is being run I don't see an -I 
switch that includes the relevant location for zlib.h so I guess it's no 
surprise that zlib.h can't be found.


I've also tried "CPPFLAGS=-I/usr/local/include" - which usually works for me 
(wrt other libraries), but no joy in this instance.


I suspect that if I were to place zlib.h in my MinGW/include folder and 
libz.a in MinGW/lib folder, then it would work. But I would prefer (if 
possible) to be able to build without doing that.


Cheers,
Rob 




remove

2008-01-22 Thread karthik kumar



Remove

2008-01-22 Thread Qadeer Baig
Remove


FIPS on Linux

2008-01-22 Thread Jacob Barrett
Linking the FIPS capable libraries to our code is proving to be a real pain
in the butt. The problem stems from the fact that long before I arrived it
was decided that everything is to be linked statically, so that means that
fipsld is needed. To compound things, our code is C++ and compiled using g++,
but fips_premain.c has that wonderful little char array initialization
bug that g++ complains about. As a result I have had to run fipsld with gcc
and include all the C++ libraries to successfully compile and link the FIPS
libraries to our code. So far I have been successful only on my Mac using
"-lstdc++ -shared-libgcc" as gcc flags. On Linux I just can't seem to figure
out what libraries I am missing or if something else is at play.
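
(The kind of initialization that trips g++ is, roughly, a string literal
that exactly fills a char array - valid C, rejected by C++. An illustrative
example only, not the actual fips_premain.c source:)

---
/* Valid C: the literal exactly fills the array and the terminating NUL is
 * dropped.  C++ rejects this, which is essentially the complaint g++ makes. */
static const char fingerprint[4] = "ABCD";
---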

So...

1) Has anyone else had experience needing to use gcc to link FIPS into C++
code, been successful, and gained insight into the issue?

2) Does fipsld have to be used, or could I, within the spirit of the security
policy, make my own fipsld of sorts that compiles fips_premain.c with gcc
and links with g++?

3) Am I better off compiling the FIPS capable libraries as shared and
re-working our code to work with those?

Here is a snippet of the errors I get. I have tried combinations of -lstdc++
-lm -lc -shared-libgcc. None of these fix the problem.

xxx.cpp:138: undefined reference to `__dynamic_cast'
xxx.cpp:172: undefined reference to `__dynamic_cast'
xxx.cpp:1282: undefined reference to `std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string()'
xxx.cpp:1286: undefined reference to `std::basic_string<char, std::char_traits<char>, std::allocator<char> >::operator+=(char const*)'
xxx.cpp:763: undefined reference to `operator delete(void*)'

It goes on for miles like this.

Thanks in advance,
Jake



RE: 'make test' error - "I am unable to access the ./demoCA/newcerts directory"

2008-01-22 Thread C K KIRAN-KNTX36
Hi,
Check if the directory openssl-0.9.8g/apps/demoCA exists.
It's trying to run the demo certification authority test, I guess. It looks
like it's not able to find the directory path or something like that. First of
all, do you want to run that test case?
Regards,
Kiran
 



From: [EMAIL PROTECTED] on behalf of Sisyphus
Sent: Tue 22-Jan-2008 5:23 PM
To: openssl-users@openssl.org
Subject: Re: 'make test' error - "I am unable to access the ./demoCA/newcerts 
directory"




- Original Message -
From: "C K KIRAN-KNTX36" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, January 22, 2008 7:27 PM
Subject: RE: 'make test' error - "I am unable to access the
./demoCA/newcerts directory"


Try just touching those directories. Hopefully that should fix your problem.
The problem, I guess, is that your make is running
some test cases to generate some kind of certificates and it's failing.
Regards,
Kiran
---

I ran:
-
[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g
$ touch test

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g
$ cd test

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g/test
$ touch demoCA

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g/test
$ cd demoCA

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g/test/demoCA
$ touch newcerts

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g/test/demoCA
$ cd ../..

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g
-

I'm not all that familiar with 'touch' - but I guess that's what you meant
for me to do.

However, when I re-run 'make test' I get the same error.

Just prior to the error quoted in the subject line of this thread I get:

--
Loading 'screen' into random state -./demoCA/newcerts: Invalid argument
 done
--

Could it be that the real cause of the problem is that "Invalid argument" ?

Cheers,
Rob




Re: 'make test' error - "I am unable to access the ./demoCA/newcerts directory"

2008-01-22 Thread Sisyphus


- Original Message - 
From: "C K KIRAN-KNTX36" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, January 22, 2008 7:27 PM
Subject: RE: 'make test' error - "I am unable to access the 
./demoCA/newcerts directory"



Try just touching those directories. Hopefully that should fix your problem.
The problem, I guess, is that your make is running
some test cases to generate some kind of certificates and it's failing.
Regards,
Kiran
---

I ran:
-
[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g
$ touch test

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g
$ cd test

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g/test
$ touch demoCA

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g/test
$ cd demoCA

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g/test/demoCA
$ touch newcerts

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g/test/demoCA
$ cd ../..

[EMAIL PROTECTED] /c/_32/comp/openssl-0.9.8g
-

I'm not all that familiar with 'touch' - but I guess that's what you meant 
for me to do.


However, when I re-run 'make test' I get the same error.

Just prior to the error quoted in the subject line of this thread I get:

--
Loading 'screen' into random state -./demoCA/newcerts: Invalid argument
done
--

Could it be that the real cause of the problem is that "Invalid argument" ?

Cheers,
Rob 




RE: RSA_verify problem

2008-01-22 Thread Chris Brown
Hi 

Thank you for your response and my apologies for not replying sooner - I was
drafted from one project to another and have only just returned to this one.
I am still having some trouble with this, however.

I have attempted to ensure that both the modulus and signature are 128 bytes
long, but I still cannot get this to work correctly. Below is a sample of XML
showing my KeyInfo. The Modulus is 172 characters long, which I believe is
correct:

<KeyInfo>
 <KeyValue>
  <RSAKeyValue>
   <Modulus>
1RjaCKAG09orRlqo9U4SCt1ozqKhYNjzQR5Jn622GelJOmSpIYPN5sXQ1urfYvuIBkFwm/H0gBDY94TxagtZwIpm/57dGq3
k6OJADZpnaRFwuPE8+82Q/qMK8ZxrFhGJhWPBnq/Y3LlTKeon9yurOKle3J0FsOx1ePE3ojkv+WU=
   </Modulus>
   <Exponent>AQAB</Exponent>
  </RSAKeyValue>
 </KeyValue>
</KeyInfo>

The SignatureValue itself is also 172 characters long.

I am still confused about the exact sequence of steps I need to take once I
have Base64 decoded the response into the raw XML such as that above. I am
not certain I am extracting and preparing my modulus correctly before
passing it to RSA_verify, or indeed extracting the SignatureValue properly.
For example, should I be Base64 decoding any of these values first?
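
For comparison, the usual call sequence looks something like the sketch
below (illustrative only, using the 0.9.8-era non-opaque RSA struct; it
assumes the Modulus, Exponent and SignatureValue have already been Base64
decoded into raw bytes, and all names are made up):

---
#include <openssl/rsa.h>
#include <openssl/bn.h>
#include <openssl/sha.h>
#include <openssl/objects.h>

/* mod/mod_len, e/e_len: decoded Modulus and Exponent bytes.
 * sig/sig_len: decoded SignatureValue bytes.
 * data/data_len: the exact (canonicalized) bytes that were signed. */
int verify_sig(const unsigned char *mod, int mod_len,
               const unsigned char *e, int e_len,
               unsigned char *sig, unsigned int sig_len,
               const unsigned char *data, size_t data_len)
{
    unsigned char digest[SHA_DIGEST_LENGTH];
    RSA *rsa = RSA_new();
    int ok = 0;

    if (rsa == NULL)
        return 0;

    /* Build the public key from the raw modulus and exponent. */
    rsa->n = BN_bin2bn(mod, mod_len, NULL);
    rsa->e = BN_bin2bn(e, e_len, NULL);

    /* RSA_verify expects the digest of the signed data, not the data itself. */
    SHA1(data, data_len, digest);

    if (rsa->n != NULL && rsa->e != NULL)
        ok = RSA_verify(NID_sha1, digest, sizeof(digest), sig, sig_len, rsa);

    RSA_free(rsa);
    return ok;
}
---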

Any further help anyone can offer would really be appreciated.

Many Thanks 

Chris Brown



RE: 'make test' error - "I am unable to access the ./demoCA/newcerts directory"

2008-01-22 Thread C K KIRAN-KNTX36
Try just touching those directories. Hopefully that should fix your problem.
The problem, I guess, is that your make is running
some test cases to generate some kind of certificates and it's failing.
Regards,
Kiran
 



From: [EMAIL PROTECTED] on behalf of Sisyphus
Sent: Sun 20-Jan-2008 1:40 PM
To: openssl-users@openssl.org
Subject: 'make test' error - "I am unable to access the ./demoCA/newcerts 
directory"



Hi,
I'm building openssl-0.9.8g on Windows Vista in the msys shell using (the
mingw port of) gcc-3.4.5.

I've successfully run './config no-shared' and 'make', but 'make test'
throws up the following:

-
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-
Country Name (2 letter code) [AU]:AU
Organization Name (eg, company) []:Dodgy Brothers
Common Name (eg, YOUR name) []:Dodgy CA
Using configuration from CAss.cnf
Loading 'screen' into random state -./demoCA/newcerts: Invalid argument
 done
I am unable to access the ./demoCA/newcerts directory
make[1]: *** [test_ca] Error 1

The ./demoCA/newcerts directory exists (but is empty).

On Windows XP, the error does not occur and 'make test' runs to its
conclusion - so it looks like an issue that's specific to Vista, unless
there's something crucial I've got installed on the XP box that's missing
on the Vista box.

Any advice on how to proceed with this ?

Cheers,
Rob




Issues with OpenSSL implementation on Windows NT service application

2008-01-22 Thread Parag Jhavery
Hi Friends,

 

I am facing a trivial problem with the OpenSSL implementation on the Windows
platform.

To avoid compiling the source code, I am using the OpenSSL installation for
the Windows platform available from shininglightpro.com
(http://www.shininglightpro.com/download/Win32OpenSSL-0_9_8g.exe) (Win32
OpenSSL v0.9.8g).

I have created a sample server application which initializes the OpenSSL
library using the API SSL_library_init().

Also, I am using the following APIs to load the error strings:
SSL_load_error_strings, ERR_load_BIO_strings, ERR_load_SSL_strings and
OpenSSL_add_all_algorithms.

Once this is done, I am opening the certificate file and private key file
using the APIs SSL_CTX_use_certificate_file and SSL_CTX_use_PrivateKey_file.
The certificate file and private key file are PEM (base64 encoded).

In addition to this, I have added the following .lib files in the project
setting of VC++ 6.0 (libeay32MD.lib libeay32MDd.lib libeay32MT.lib
libeay32MTd.lib ssleay32MD.lib ssleay32MDd.lib ssleay32MT.lib
ssleay32MTd.lib)

This is working absolutely fine with the sample server application.

The same code - i.e. opening the certificate files - does not work when I try
to execute it from a Windows NT service. (Ours is an application running as a
service, and that service in turn acts as a TCP server which opens SSL ports
for communication with clients.)

The same set of APIs and certificates (SSL_CTX_use_certificate_file and
SSL_CTX_use_PrivateKey_file) does not work from within a Windows NT service
application.

 

Also, when I try to get the latest SSL error numbers using the
ERR_print_errors_fp API, the entire application crashes.
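
(For what it's worth, when a process has no usable stderr - as is often the
case for a service - the error queue can be read into a buffer instead of
calling ERR_print_errors_fp. A sketch only; the function and buffer names
are made up:)

---
#include <string.h>
#include <openssl/err.h>

/* Drain the OpenSSL error queue into a caller-supplied buffer, e.g. for
 * the Windows event log, instead of writing to a FILE *. */
void log_openssl_errors(char *buf, size_t buf_len)
{
    unsigned long e;
    char line[256];

    buf[0] = '\0';
    while ((e = ERR_get_error()) != 0) {
        ERR_error_string_n(e, line, sizeof(line));
        if (strlen(buf) + strlen(line) + 2 <= buf_len) {
            strcat(buf, line);
            strcat(buf, "\n");
        }
    }
}
---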

 

I am looking out for any architectural issues with OpenSSL when working with
Windows NT service applications. If anyone has tried implementing OpenSSL in
a Windows NT service, any help in this regard is highly appreciated.

 

Thanks,

Parag

 





RE: About certificate sha1 thumbprint

2008-01-22 Thread Hou, LiangX
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Victor Duchovni
Sent: 17 January 2008 11:30
To: openssl-users@openssl.org
Subject: Re: About certificate sha1 thumbprint

On Thu, Jan 17, 2008 at 10:14:28AM +0800, Hou, LiangX wrote:

> No. I try to convert binary digest to hexadecimal strings outside and compare 
> it with what is generated by the command-line tool. And I find they are 
> different. The strange thing is that the thumbprint generated by my 
> X509_digest begins with zero. That may be something wrong. Is it?
> 

>What's wrong with zero? The raw digest is a set of pseudo-random bytes;
>if none of the bytes or nibbles were ever zero, that would be strong
>evidence that the hash is flawed.

>You have not posted the relevant code, and your problem descriptions
>are vague. If you want help you need to post clear problem descriptions
>and a complete example consisting of a cert.pem file and code with a
>working Makefile that computes the "wrong" digest for the certificate
>(different from what is reported by "openssl x509 -sha1 -fingerprint
>-noout -in cert.pem").

-- 
Viktor.

Viktor, I found my problem. It was the wrong conversion from the raw thumbprint
data to the hexadecimal string in my code that caused it. Now it works. There
is no need to post the code and take up more of your attention. I really
appreciate your help.
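
(For reference, a minimal version of such a conversion - illustrative only,
assuming the 0.9.8 X509_digest API - looks like this:)

---
#include <stdio.h>
#include <openssl/x509.h>
#include <openssl/evp.h>

/* Compute the SHA-1 fingerprint of 'cert' and print it as colon-separated
 * hex, the same form "openssl x509 -sha1 -fingerprint -noout" prints. */
int print_sha1_thumbprint(X509 *cert)
{
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int md_len, i;

    if (!X509_digest(cert, EVP_sha1(), md, &md_len))
        return 0;

    for (i = 0; i < md_len; i++)
        printf("%02X%s", md[i], i + 1 < md_len ? ":" : "\n");
    return 1;
}
---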
Liang