Hi!

While doing my round of compiler-warning cleanup I came across
USE_64BIT_INTEGER in bitvec.h. I couldn't find any other place where it
is used, and there was an issue when it is actually enabled; I submitted
a patch for it in PR#3530071.

The thing is that USE_64BIT_INTEGER has to be defined for systems which
have a 64-bit unsigned int. My amd64 Linux machine with gcc has a 32-bit
unsigned int, and I guess that is the case for most systems, since it
seems nobody has run into issues with it.

Now the question is whether USE_64BIT_INTEGER should be dropped. I don't
see it documented or referenced anywhere. Does anyone know where it
comes from? Is the user expected to set it at compile time, or does the
compiler define it automatically? On which systems is it actually used,
if any?

As I already mentioned in the comment on PR#3530071, there are two
options - either compute the bit size of unsigned int in a preprocessor
macro or at runtime. Well, there is also a third option as always, which
is to leave it as is. With my current patch at PR#3530071, we are at the
third option.

Computing it in a preprocessor macro would look something like this
(note that ^ is XOR in C, so the check has to compare against the
maximum value directly):
#if UINT_MAX == 0xFFFFFFFF
// 32-bit unsigned int
#elif UINT_MAX == 0xFFFFFFFFFFFFFFFF
// 64-bit unsigned int
#else
#error Unknown unsigned int size
#endif

This works for C++, but it doesn't work for the SWIG bindings, since
UINT_MAX is defined in <climits> and SWIG is not aware of it (it doesn't
recurse into includes). I couldn't find a way to make it aware of it
either, and picking one branch at random doesn't seem like a good idea.

So that leaves us with the runtime option. It would be possible to
determine the size of unsigned int at runtime and construct the
appropriate variables for it (instead of using the preprocessor to
choose which one to use). This would have the added benefit that it
could work with any unsigned int size, not just 32 or 64 bits. But I'm
not sure what performance impact it might have, if any (bitvec is
designed for speed, so that is important).

Well, there is also the possibility of using bitset from the STL, which
would largely eliminate the need to determine the bit size of unsigned
int. I have no idea how it would fare in comparison with the current
implementation, or why it was not used in the first place. Going
forward, dynamic_bitset from Boost could also be considered, and
uint32_t (C++11) could solve the issue as well. Note that this
basically forces us to choose the bit size of the unit of bitvec.

Any comments?


Reinis


_______________________________________________
OpenBabel-Devel mailing list
OpenBabel-Devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/openbabel-devel
