Re: Sourceware mitigating and preventing the next xz-backdoor

2024-04-24 Thread Boris Kolpackov
Martin Uecker  writes:

> Do we really still need complex build systems such as autoconf? Are
> there still so many different configurations with subtle differences
> that every single feature needs to be tested individually by running
> code at build time?

We have taken an alternative approach in build2. Specifically, instead
of dynamic compilation/linking tests (which can fail for all kinds of
reasons besides the absence of the feature), we use static expected
values based on platform/compiler macro checks. For example, if we are
compiling with glibc and the version is 2.38 or later, then we know
the strl*() function family is available:

https://github.com/build2/libbuild2-autoconf
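
To illustrate the idea (a hand-written sketch, not the actual
libbuild2-autoconf check syntax), the dynamic test is replaced with
something along these lines:

/* Assume the presence of strlcpy() from known platform facts instead
   of compiling and running a test program. */
#include <string.h> /* Pulls in the libc feature-test macros. */

#if defined(__GLIBC__) && \
    (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 38))
#  define HAVE_STRLCPY 1
#elif defined(__FreeBSD__) || defined(__APPLE__)
#  define HAVE_STRLCPY 1 /* The strl*() family originated on the BSDs. */
#endif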

We currently have ~200 checks and have built quite a bit of software
using this approach (including Qt classic libraries and all their
dependencies) on the mainstream platforms (Linux, Windows, Mac OS,
FreeBSD).


Re: GCC 12.1 Release Candidate available from gcc.gnu.org

2022-05-02 Thread Boris Kolpackov
Jakub Jelinek  writes:

> The first release candidate for GCC 12.1 is available [...]

There is an unfixed bogus warning that is a regression in 12
and that I think will have a pretty wide effect (any code
that assigns/appends a 1-char string literal to std::string):

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105329

For example, in my relatively small codebase I had about 20
instances of this warning. Seeing that it's enabled as part
of -Wall (not just -Wextra), I believe there will be a lot
of grumpy users.
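
For reference, a minimal reproduction is presumably along these lines
(see the bug reports for the exact details):

#include <string>

void f (std::string& s)
{
  s = "x"; // GCC 12 reportedly emits the bogus warning here with
           // -Wall and optimization enabled (same for s += "x").
}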

There is a proposed workaround in this (duplicate) bug that
looks pretty simple:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104336

Perhaps it makes sense to consider it?


Re: [modules] Preprocessing requires compiled header unit modules

2022-04-25 Thread Boris Kolpackov
Ben Boeckel  writes:

> If we need to know and have dependencies prepared before we can figure
> out the dependencies for a TU, modules are unsolvable (without an active
> build executor). If C++ implementations are really going to require
> that, then [...] the following tools are all unsuitable for C++ with
> header units without major overhauls (alphabetical):
> 
>   - autoconf/automake
>   - cmake
>   - gn
>   - gyp
>   - make (not GNU make, though even that requires some active
> involvement via the socket communications)
>   - meson
>   - ninja

A couple of points:

1. First, this only applies to header units, not named modules.

2. I am not sure what you mean by "active build executor" (but it
   does sound ominous, I will grant you that ;-)).

3. I agree some build systems may require "major overhauls" to
   support header units via the module mapper. I would like this
   not to be the case, but so far nobody has implemented an
   alternative (that I am aware of) that is correct and scalable
   and I personally have doubts such a thing is achievable.


> > Even if we manage to do this, there are some implications I
> > am not sure we will like: the isolated macros will contain
> > inclusion guards, which means we will keep re-scanning the
> > same files potentially many, many times. Here is an example,
> > assume each header-unitN.hpp includes or imports <functional>:
> 
> Note that scanning each module TU only happens once. Header units might
> just get *read* in the course of scanning other units.
> 
> And headers are read multiple times already over the lifetime of the
> build, so we're not making things worse here.

I am not sure I follow. Say we have 10 TUs, each of which includes
or imports 10 headers, each of which includes <functional>. If we
use include, then when scanning each of these 10 TUs we have to scan
<functional> once (since all the subsequent includes are suppressed
by include guards). So that is a total of 10x1=10 scans of
<functional> for the entire build.

Now if instead of include we use import (which, during the scan, is
treated as include with macro isolation), we are looking at 10 scans
of <functional> for each TU (because the include guards are ignored).
So that is a total of 10x10=100 scans of <functional> for the build.

What am I missing?


Re: [modules] Preprocessing requires compiled header unit modules

2022-04-25 Thread Boris Kolpackov
Iain Sandoe  writes:

> The standard has the concept of an “importable header” which is
> implementation-defined.

But it must at least include all the C++ library headers:

https://eel.is/c++draft/headers#4


> We could choose that only headers that are self-contained (i.e. unaffected
> by external defines) are “importable” (thus the remaining headers would
> not be eligible for include-translation). That would mean that we could
> rely on processing any import by processing the header it is created from?
> Perhaps that is too great a restriction and we need to be more clever.

It will also be hard to determine whether a header (or any header
that it includes) satisfies this condition. You would probably want
it to be "meaningfully self-contained" (since pretty much every
header is not self-contained with regard to its include guard),
which I think will be hard to automate.
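
For example, even a trivial guarded header is, strictly speaking, not
self-contained:

// file: foo.hpp
//
#ifndef FOO_HPP
#define FOO_HPP

void f ();

#endif

If the importer happens to define FOO_HPP before the import, the
header expands to nothing.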


Re: [modules] Preprocessing requires compiled header unit modules

2022-04-22 Thread Boris Kolpackov
Ben Boeckel  writes:

> On Thu, Apr 21, 2022 at 06:05:52 +0200, Boris Kolpackov wrote:
>
> > I don't think it is. A header unit (unlike a named module) may export
> > macros which could affect further dependencies. Consider:
> > 
> > import "header-unit.hpp"; // May or may not export macro FOO.
> > 
> > #ifdef FOO
> > import "header-unit2.hpp";
> > #endif
> 
> I agree that the header needs to be *found*, but scanning cannot require
> a pre-existing BMI for that header.

Well, if scanning cannot require a pre-existing BMI but a pre-existing
BMI is required to get accurate dependency information, then something
has to give.

You hint at a potential solution in your subsequent email:

> Can't it just read the header as if it wasn't imported? AFAIU, that's
> what GCC did in Jan 2019. I understand that CPP state is probably not
> easy, but something to consider.

The problem with this approach is that a header import and a header
include have subtly different semantics around macros. In particular,
the header import does not "see" macros defined by the importer while
the header include does. Here is an example:

// file: header-unit.hpp
//
#ifdef BAR
#define FOO
#endif

// file: importer.cpp
//
#define BAR
import "header-unit.hpp";// Should not "see" BAR.
//#include "header-unit.hpp" // Should "see" BAR.

#ifdef FOO
import "header-unit2.hpp";
#endif

In this example, if you treat import of header-unit.hpp as
include, you will get incorrect dependency information.

So to make this work correctly we will need to re-create the
macro isolation semantics of import for include.
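
To sketch what this would entail (a toy model with made-up names, not
GCC code):

#include <map>
#include <string>

using macro_table = std::map<std::string, std::string>;

// Process the header's directives starting from a clean macro state
// (so it does not "see" the importer's BAR) and return the macros it
// itself defines (e.g., FOO).
macro_table scan_header_unit (const char* /*header*/)
{
  macro_table isolated;
  // ... run the preprocessor directives against 'isolated' ...
  return isolated;
}

// On 'import "header-unit.hpp";' merge only the exported macros back
// into the importer's state, where they can affect subsequent
// conditional imports.
void scan_import (macro_table& importer, const char* header)
{
  macro_table exported = scan_header_unit (header);
  importer.insert (exported.begin (), exported.end ());
}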

Even if we manage to do this, there are some implications I
am not sure we will like: the isolated macros will contain
inclusion guards, which means we will keep re-scanning the
same files potentially many, many times. Here is an example,
assume each header-unitN.hpp includes or imports <functional>:

// file: importer.cpp
//
import <functional>;       // Defines the _GLIBCXX_FUNCTIONAL include guard.

import "header-unit1.hpp"; // Ignores _GLIBCXX_FUNCTIONAL, re-scans <functional>.
import "header-unit2.hpp"; // Ditto.
import "header-unit3.hpp"; // Ditto.
import "header-unit4.hpp"; // Ditto.


Re: [modules] Preprocessing requires compiled header unit modules

2022-04-20 Thread Boris Kolpackov
Ben Boeckel  writes:

> However, for header unit modules, it runs into a problem that imported
> header units are required to be compiled and available in the mapper
> while scanning for dependencies.
> 
> Example code:
> 
> ```c++ # use-header.cpp
> module;
> 
> import "header-unit.hpp";
> 
> int main(int argc, char* argv[]) {
> return good;
> }
> ```
>
> There used to be no need to do this back prior to the modules landing in
> `master`, but I can see this being an oversight in the meantime.

I don't think it is. A header unit (unlike a named module) may export
macros which could affect further dependencies. Consider:

import "header-unit.hpp"; // May or may not export macro FOO.

#ifdef FOO
import "header-unit2.hpp"
#endif


Re: [RFC] Adding Python as a possible language and it's usage

2018-07-18 Thread Boris Kolpackov
Paul Koning  writes:

> > On Jul 18, 2018, at 11:13 AM, Boris Kolpackov wrote:
> >
> > I wonder what will be the expected way to obtain a suitable version of
> > Python if one is not available on the build machine? With awk I can
> > build it from source pretty much anywhere. Is building newer versions
> > of Python on older targets a similarly straightforward process (somehow
> > I doubt it)? What about Windows?
> 
> It's the same sort of thing: untar the sources, configure, make, make
> install.

Will this also install all the Python packages one might plausibly want
to use in GCC?


Re: [RFC] Adding Python as a possible language and it's usage

2018-07-18 Thread Boris Kolpackov
On Tue, 2018-07-17 at 14:49 +0200, Martin Liška wrote:

> My question is simple: can we start using a scripting language like
> Python and replace the usage of the AWK scripts?

I wonder what will be the expected way to obtain a suitable version of
Python if one is not available on the build machine? With awk I can
build it from source pretty much anywhere. Is building newer versions
of Python on older targets a similarly straightforward process (somehow
I doubt it)? What about Windows?


Re: [PATCH] Write dependency information (-M*) even if there are errors

2017-08-12 Thread Boris Kolpackov
Segher Boessenkool  writes:

> Patches should go to gcc-patches.

OK, I will keep that in mind for the future (seeing that we already
have a discussion going here, it probably doesn't make sense to move
this patch).

 
> Two spaces after a full stop (all three times).

Fixed, new revision included.


Thanks,
Boris
Index: gcc/c-family/ChangeLog
===
--- gcc/c-family/ChangeLog	(revision 250514)
+++ gcc/c-family/ChangeLog	(working copy)
@@ -1,3 +1,8 @@
+2017-08-06  Boris Kolpackov 
+
+	* c-opts.c (c_common_finish): Write dependency information even if
+	there are errors.
+
 2017-07-14  David Malcolm  
 
 	* c-common.c (try_to_locate_new_include_insertion_point): New
Index: gcc/c-family/c-opts.c
===
--- gcc/c-family/c-opts.c	(revision 250514)
+++ gcc/c-family/c-opts.c	(working copy)
@@ -1152,8 +1157,11 @@
 {
   FILE *deps_stream = NULL;
 
-  /* Don't write the deps file if there are errors.  */
-  if (cpp_opts->deps.style != DEPS_NONE && !seen_error ())
+  /* Note that we write the dependencies even if there are errors.  This is
+     useful for handling outdated generated headers that now trigger errors
+     (for example, with #error) which would be resolved by re-generating
+     them.  In a sense, this complements -MG.  */
+  if (cpp_opts->deps.style != DEPS_NONE)
     {
       /* If -M or -MM was seen without -MF, default output to the
 	 output stream.  */


Re: [PATCH] Write dependency information (-M*) even if there are errors

2017-08-12 Thread Boris Kolpackov
Joseph Myers  writes:

> I suppose a question for the present proposal would be making sure any 
> dependencies generated in this case do not include dependencies on files 
> that don't exist (so #include "some-misspelling.h" doesn't create any sort 
> of dependency on such a header).

Good point. I've tested this and I believe everything is in order:
unless -MG is specified, a non-existent header is treated as a fatal
error so we don't even get to writing the dependency info. And if -MG
is specified, then there is no error and we get the missing header in
the dependency output, as requested.
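
A quick way to observe both behaviors (assuming test.cxx includes the
non-existent some-misspelling.h):

g++ -M test.cxx     # fatal error; no dependency output is written
g++ -M -MG test.cxx # prints: test.o: test.cxx some-misspelling.h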

Boris


[PATCH] Write dependency information (-M*) even if there are errors

2017-08-06 Thread Boris Kolpackov
Hi,

Currently GCC does not write extracted header dependency information
if there are errors. However, this can be useful when dealing with
outdated generated headers that trigger errors which would have been
resolved if we could update them. A concrete example in our case is a
version check with #error.
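
To make the scenario concrete, imagine a generated header along these
lines (a hypothetical example):

// file: version.hxx (generated)
//
#if LIBFOO_VERSION != 42
#  error incompatible libfoo version, please re-generate this header
#endif

Previously the #error stopped the compilation before the dependency
information was written, so the build system never learned that
re-generating version.hxx would fix the problem.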

The included (trivial) patch changes this behavior. Note also that
this is how Clang already behaves. I've tested the patch in build2
and everything works well (i.e., no invalid dependency output in the
face of various preprocessor errors such as #error, stray #else, etc).

While I don't foresee any backwards-compatibility issues with such
an unconditional change (after all, the compiler still exits with
an error status), if there are concerns, I could re-do it via an
option (e.g., -ME, analogous to -MG).

P.S. I have the paperwork necessary to contribute on file with FSF.

Thanks,
Boris
Index: gcc/c-family/ChangeLog
===
--- gcc/c-family/ChangeLog	(revision 250514)
+++ gcc/c-family/ChangeLog	(working copy)
@@ -1,3 +1,8 @@
+2017-08-06  Boris Kolpackov 
+
+	* c-opts.c (c_common_finish): Write dependency information even if
+	there are errors.
+
 2017-07-14  David Malcolm  
 
 	* c-common.c (try_to_locate_new_include_insertion_point): New
Index: gcc/c-family/c-opts.c
===
--- gcc/c-family/c-opts.c	(revision 250514)
+++ gcc/c-family/c-opts.c	(working copy)
@@ -1152,8 +1152,11 @@
 {
   FILE *deps_stream = NULL;
 
-  /* Don't write the deps file if there are errors.  */
-  if (cpp_opts->deps.style != DEPS_NONE && !seen_error ())
+  /* Note that we write the dependencies even if there are errors. This is
+     useful for handling outdated generated headers that now trigger errors
+     (for example, with #error) that would be resolved by re-generating
+     them. In a sense this complements -MG. */
+  if (cpp_opts->deps.style != DEPS_NONE)
     {
       /* If -M or -MM was seen without -MF, default output to the
 	 output stream.  */


Separate preprocess and compile: some performance numbers

2017-05-18 Thread Boris Kolpackov
Hi,

I have implemented the separate preprocess and compile setup in build2.
For GCC it is using -fdirectives-only (thanks to everyone's suggestions
in the earlier thread). I've also done some benchmarking:

https://build2.org/article/preprocess-compile-performance.xhtml

TL;DR for GCC:

Surprisingly, a separate preprocessor run is about 1% faster (probably
because of the temporal locality of filesystem access). Overall, a
preprocessor run costs about 5% of a non-optimized C++ build.

Boris


[PATCH] Recognize '-' as special -MF argument (write to stdout)

2017-05-15 Thread Boris Kolpackov
Hi,

Sometimes it is useful to generate pre-processed output to a file and
the dependency information to stdout for further analysis/processing.
For example:

g++ -E -MD -fdirectives-only -o test.ii test.cxx

This will write the dependency information to test.d (as per the
documentation). While changing this behavior is probably unwise, one
traditional (e.g., supported by -o) way to handle this is to recognize
the special '-' file name as an instruction to write to stdout:

g++ -E -MD -fdirectives-only -o test.ii -MF - test.cxx

Currently this will create a file named '-'. The included patch changes
this behavior to write to stdout.

Note also that Clang has supported this from at least version 3.5.

The patch should apply cleanly to trunk. I would also like to see it
backported to previous versions, if possible. If this requires any
additional work, I am willing to do it.

Thanks,
Boris
Index: gcc/ChangeLog
===
--- gcc/ChangeLog	(revision 247825)
+++ gcc/ChangeLog	(working copy)
@@ -1,3 +1,6 @@
+
+	* doc/cppopts.texi: Document '-' special value to -MF.
+
 2017-05-09  Marek Polacek  
 
 	* doc/invoke.texi: Fix typo.
Index: gcc/c-family/ChangeLog
===
--- gcc/c-family/ChangeLog	(revision 247825)
+++ gcc/c-family/ChangeLog	(working copy)
@@ -1,3 +1,6 @@
+
+	* c-opts.c (c_common_finish): Handle '-' special value to -MF.
+
 2017-05-09  Marek Polacek  
 
 	PR c/80525
Index: gcc/c-family/c-opts.c
===
--- gcc/c-family/c-opts.c	(revision 247825)
+++ gcc/c-family/c-opts.c	(working copy)
@@ -1164,6 +1164,8 @@
 	 output stream.  */
       if (!deps_file)
 	deps_stream = out_stream;
+      else if (deps_file[0] == '-' && deps_file[1] == '\0')
+	deps_stream = stdout;
       else
 	{
 	  deps_stream = fopen (deps_file, deps_append ? "a": "w");
@@ -1177,7 +1179,7 @@
  with cpp_destroy ().  */
   cpp_finish (parse_in, deps_stream);
 
-  if (deps_stream && deps_stream != out_stream
+  if (deps_stream && deps_stream != out_stream && deps_stream != stdout
       && (ferror (deps_stream) || fclose (deps_stream)))
     fatal_error (input_location, "closing dependency file %s: %m", deps_file);
 
Index: gcc/doc/cppopts.texi
===
--- gcc/doc/cppopts.texi	(revision 247825)
+++ gcc/doc/cppopts.texi	(working copy)
@@ -125,6 +125,8 @@
 When used with the driver options @option{-MD} or @option{-MMD},
 @option{-MF} overrides the default dependency output file.
 
+If @var{file} is @file{-}, then the dependencies are written to @file{stdout}.
+
 @item -MG
 @opindex MG
 In conjunction with an option such as @option{-M} requesting


Re: Separate preprocess and compile: hack or feature?

2017-05-11 Thread Boris Kolpackov
Hi Nathan,

Nathan Sidwell  writes:

> How c++ modules fit into a build system is currently an open question.
> Richard Smith & I have talked about it, but with no firm conclusion.
> However, I think that breaking out the preprocessor is not the right
> answer.

Handling modules is one of the least important motivations for my desire
to "break out" the preprocessor. Distributed compilation is probably the
main one. But I am quickly realizing that this is probably not going to
be reliable enough.

But for completeness, let me describe how all the pieces would have fitted
together. I think it is quite elegant, even if I do say so myself ;-).

Let's say the build system realizes (for example, based on filesystem
mtimes) that hello.o may be out-of-date. This is what it does:

1. In a single pass preprocess hello.cxx and extract header dependencies
   (-E -MD options); see the command sketch after this list.

   Note that if you do -M you are essentially running the preprocessor,
   so we might as well save the result.

2. If any of the extracted headers are auto-generated and are missing or
   out-of-date, regenerate them.

3. Pre-parse the preprocessed output and extract module dependency
   information. If any are missing/out-of-date, compile them.

   What we need to detect here are (1) module imports and (2) module
   implementation units. All these things are "top level" so they
   won't be hard to recognize.
   
4. If nothing above has indicated that hello.o is indeed out of date,
   hash the preprocessed output to detect and ignore comment-only changes.

5. If we indeed need to update hello.o, pass the preprocessed output
   either to the compiler or ship it to a remote host for compilation.
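
In terms of concrete commands, steps 1 and 5 would look roughly like
this (a sketch; see the -fdirectives-only discussion in the other
subthread):

# Step 1: preprocess and extract header dependencies in one pass.
g++ -E -MD -o hello.ii hello.cxx

# Step 5: compile the saved preprocessed output, locally or remotely.
g++ -fpreprocessed -c -o hello.o hello.ii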

Boris


Re: Separate preprocess and compile: hack or feature?

2017-05-11 Thread Boris Kolpackov
Hi Jakub,

Jakub Jelinek  writes:

> Especially in recent GCC versions the amount of differences for warnings and
> errors keeps dramatically increasing, with separate preprocessing simply
> too much information is lost (macro contexts, lint style comments, exact
> locations, system header issues, ...).
> 
> So it is far better to not use -E, but instead -E -fdirectives-only
> for the preprocessing step, which will get you also single file with all the
> includes in it, but macros, comments etc. are still in there.

Uh, thanks, that's very helpful.

One use case for removing comments is detecting and ignoring comments-only
changes (via a checksum). But that's relatively minor. We could probably
re-process the result ourselves and strip the comments for that (will have
to recognize comments for the C++ module import detection anyway).

Also, preserving comments helps with some VC issues (e.g., a/**/b).


> Tried to explain that to the ccache people, but they aren't listening.

I am, on the other hand, all ears ;-).

Thanks,
Boris


Separate preprocess and compile: hack or feature?

2017-05-11 Thread Boris Kolpackov
Hi,

In the build system I am working on we are looking at always performing
the preprocessing and then C/C++ compilation as two separate gcc/g++
invocations. The main reason is support for distributed compilation but
see here[1] for other reasons.

I realize that tools like ccache/distcc have been relying on this for
a while (though see the 'direct' mode in ccache and 'pump' in distcc).
However, some compilers apparently do not support this (for example,
VC; see the above link for details).

So I wonder, in the context of GCC, if this is just a hack that happens
to work "for now" or if this is a feature that is expected to continue
to work?

Also, has anyone seen/heard of any real-world issues with compiling
preprocessed source code?

[1] 
https://www.reddit.com/r/cpp/comments/6abi99/rfc_issues_with_separate_preprocess_and_compile/


Thanks,
Boris


[ANN] build2 - C++ build toolchain

2016-02-03 Thread Boris Kolpackov
Hi,

build2 is an open source, cross-platform toolchain for building and
packaging C++ code. It includes a build system, package manager, and
repository web interface. We've also started cppget.org, a public
repository of open source C++ packages.

This is the first alpha release and currently it is more of a technology
preview rather than anything that is ready for production. We have tested
this version on various Linuxes, Mac OS, and FreeBSD. There is no Windows
support yet (but cross-compilation is supported).

The project's page is:

https://build2.org

For those who need to see examples right away, here is the introduction:

https://build2.org/build2-toolchain/doc/build2-toolchain-intro.xhtml

Enjoy,
Boris



[ANN] ODB C++ ORM 2.4.0 Released

2015-02-11 Thread Boris Kolpackov
Hi,

I am pleased to announce the release of ODB 2.4.0.

ODB is an open source object-relational mapping (ORM) system for C++. It
allows you to persist C++ objects to a relational database without having
to deal with tables, columns, or SQL and without manually writing any of
the mapping code. ODB is implemented as a GCC plugin and uses GCC's
frontend for C++ parsing.

Major new features in this release:

 * The plugin implementation is GCC 5-ready.

 * Support for bulk operations in Oracle and SQL Server. Bulk operations
   can be used to persist, update, or erase a range of objects using a
   single database statement execution, which often translates to
   significantly better performance.

 * Ability to join and load one or more complete objects instead of, or
   in addition to, a subset of their data members with a single SELECT
   statement execution (object loading views).

 * Support for specifying object and table join types in views (LEFT,
   RIGHT, FULL, INNER, or CROSS).

 * Support for calling MySQL and SQL Server stored procedures.

 * Support for defining persistent objects as instantiations of C++ class
   templates.

A more detailed discussion of these features can be found in the following
blog post:

http://www.codesynthesis.com/~boris/blog/2015/02/11/odb-2-4-0-released/

For the complete list of new features in this version see the official
release announcement:

http://codesynthesis.com/pipermail/odb-announcements/2015/41.html

ODB is written in portable C++ (both C++98/03 and C++11 are supported) and
you should be able to use it with any modern C++ compiler. In particular, we
have tested this release on GNU/Linux (x86/x86-64/ARM), Windows (x86/x86-64),
Mac OS X (x86/x86_64), and Solaris (x86/x86-64/SPARC) with GNU g++ 4.2.x-5.x,
MS Visual C++ 2005, 2008, 2010, 2012, and 2013, Sun Studio 12u2, and Clang 3.x.

The currently supported database systems are MySQL, SQLite, PostgreSQL,
Oracle, and SQL Server. ODB also provides optional profiles for Boost and
Qt, which allow you to seamlessly use value types, containers, and smart
pointers from these libraries in your persistent classes.

More information, documentation, source code, and pre-compiled binaries are
available from:

http://www.codesynthesis.com/products/odb/

Enjoy,
Boris



[ANN] CppCon 2014 program available

2014-07-02 Thread Boris Kolpackov
The CppCon 2014 Program is now available with talk titles, abstracts,
and speakers:

http://cppcon.org/conference-program/

The program contains over 100 one-hour sessions by over 70 speakers
including plenary sessions by Scott Meyers and Herb Sutter, as well
as the keynotes by C++ creator Bjarne Stroustrup on Keeping Simple
Things Simple and Mark Maimone on using C++ on Mars: Incorporating
C++ into Mars Rover Flight Software.

We have also extended the Early Bird deadline to July 9 so you have
a week to study the program and still get the Early Bird rate.

Hope to see you in Bellevue!

Boris



[ANN] CppCon 2014 Call for Submissions

2014-03-25 Thread Boris Kolpackov
Hi,

CppCon is the annual, week-long face-to-face gathering for the entire
C++ community. The conference is organized by the C++ community for the
community and so we invite you to present.

Have you learned something interesting about C++, maybe a new technique
possible in C++11? Or perhaps you have implemented something cool related
to C++, maybe a new C++ library? If so, consider sharing it with other C++
enthusiasts by giving a talk at CppCon 2014. Submissions deadline is May
15 with decisions sent by June 13. For topic ideas, possible formats, and
submission instructions, see the Submissions page:

http://cppcon.org/submissions/

Hope to hear you speak!

Boris



[ANN] Registration for CppCon 2014 is Open

2014-03-18 Thread Boris Kolpackov
CppCon, The C++ Conference
Opening Keynote by Bjarne Stroustrup
September 7–12, 2014
Bellevue, Washington, USA

Registration is now open for CppCon 2014 to be held September 7–12, 2014
at the Meydenbauer Center in Bellevue, Washington, USA. This year the
conference starts with the keynote by Bjarne Stroustrup titled "Make
Simple Tasks Simple!"

CppCon is the annual, week-long face-to-face gathering for the entire
C++ community. The conference is organized by the C++ community for
the community. You will enjoy inspirational talks and a friendly
atmosphere designed to help attendees learn from each other, meet
interesting people, and generally have a stimulating experience.
Taking place this year in the beautiful Seattle neighborhood and
including multiple diverse tracks, the conference will appeal to
anyone from C++ novices to experts.

What you can expect at CppCon:

 * Invited talks and panels: CppCon keynote by Bjarne Stroustrup will
   start off a week full of insight from some of the world’s leading
   experts in C++. Still have questions? Ask them at one of CppCon's
   panels featuring those at the cutting edge of the language.

 * Presentations by the C++ community: What do embedded systems, game
   development, high frequency trading, and particle accelerators have
   in common? C++, of course! Expect talks from a broad range of domains
   focused on practical C++ techniques, libraries, and tools.

 * Lightning talks: Get informed at a fast pace during special sessions
   of short, informal talks. Never presented at a conference before?
   This is your chance to share your thoughts on a C++-related topic
   in an informal setting.

 * Evening events and "unconference" time: Relax, socialize, or start
   an impromptu coding session.

CppCon’s goal is to encourage the best use of C++ while preserving the
diversity of viewpoints and experiences, but other than that it is
non-partisan and has no agenda. The conference is a project of the
Standard C++ Foundation, a not-for-profit organization whose purpose
is to support the C++ software developer community and promote the
understanding and use of modern, standard C++ on all compilers and
platforms.

For more information about the conference and to register, visit:

http://cppcon.org



[ANN] Registration for C++Now 2014 is Open

2014-01-07 Thread Boris Kolpackov
Hi,

Registration is now open for the eighth annual C++Now conference
(formerly BoostCon) which will be held in Aspen, Colorado, USA, May
12th to 17th, 2014.

C++Now is a general C++ conference for C++ experts and enthusiasts.
It is not specific to any library/framework or compiler vendor and
has three tracks with presentations ranging from hands-on, practical
tutorials to advanced C++ design and development techniques. In
particular, one of the tracks is dedicated exclusively to tutorials.

Last year the conference sold out pretty quickly and we expect it to
happen again this year. As a result, we encourage anyone interested
in attending to register early. Additionally, early bird hotel
reservations end January 10th.

For more information on registering, visit:

http://cppnow.org/2014/01/2014-registration-is-open/


For early bird hotel reservations, visit:

http://cppnow.org/location/lodging/


For general information about the conference, visit:

http://cppnow.org/about/

Boris



[ANN] C++Now 2014: 5 Days to Submissions Deadline

2013-12-03 Thread Boris Kolpackov
Hi,

Only 5 days left before the submissions deadline for C++Now 2014!

C++Now is a general C++ conference for C++ experts and enthusiasts.
It is not specific to any library/framework or compiler vendor and
has three tracks with presentations ranging from hands-on, practical
tutorials to advanced C++ design and development techniques. For more
information about C++Now, see the conference's website:

http://cppnow.org/about/

Have you learned something interesting about C++ (e.g., a new technique
possible in C++11)? Or maybe you have implemented something cool related
to C++ (e.g., a C++ library)? If so, consider sharing it with other C++
enthusiasts by giving a talk at C++Now 2014. For more information on
possible topics, formats, etc., see the call for submissions:

http://cppnow.org/2013/10/21/2014-call-for-submissions/

Boris



[ANN] ODB C++ ORM 2.3.0 released

2013-10-30 Thread Boris Kolpackov
I am pleased to announce the release of ODB 2.3.0.

ODB is an open source object-relational mapping (ORM) system for C++. It
allows you to persist C++ objects to a relational database without having
to deal with tables, columns, or SQL and without manually writing any of
the mapping code. ODB is implemented as a GCC plugin.

Major new features in this release:

  * Support for database schema evolution, including automatic schema
migration, immediate and gradual data migration, as well as soft
object model changes (ability to work with multiple schema versions
using the same C++ classes).

For a quick showcase of this functionality see the Changing Persistent
Classes section in the Hello World Example chapter:

http://www.codesynthesis.com/products/odb/doc/manual.xhtml#2.9

  * Support for object sections which provide the ability to split data
members of a persistent C++ class into independently loaded/updated
groups.

  * Support for automatic mapping of C++11 enum classes.

The database schema evolution support mentioned above was only possible
because of the compiler (GCC plugin)-based architecture of ODB. The ODB
compiler tracks database schema changes that result from C++ class
changes and automatically generates the necessary schema migration
statements.

A more detailed discussion of these features can be found in the following
blog post:

http://www.codesynthesis.com/~boris/blog/2013/10/30/odb-2-3-0-released/

For the complete list of new features in this version see the official
release announcement:

http://www.codesynthesis.com/pipermail/odb-announcements/2013/37.html

ODB is written in portable C++ (both C++98/03 and C++11 are supported) and
you should be able to use it with any modern C++ compiler. In particular, we
have tested this release on GNU/Linux (x86/x86-64/ARM), Windows (x86/x86-64),
Mac OS X (x86), and Solaris (x86/x86-64/SPARC) with GNU g++ 4.2.x-4.8.x,
MS Visual C++ 2005, 2008, 2010, and 2012, Sun Studio 12u2, and Clang 3.x.

The currently supported database systems are MySQL, SQLite, PostgreSQL,
Oracle, and SQL Server. ODB also provides optional profiles for Boost and
Qt, which allow you to seamlessly use value types, containers, and smart
pointers from these libraries in your persistent classes.

More information, documentation, source code, and pre-compiled binaries are
available from:

http://www.codesynthesis.com/products/odb/

Enjoy,
Boris



[ANN] ODB C++ ORM 2.2.0 released

2013-02-13 Thread Boris Kolpackov
Hi,

I am pleased to announce the release of ODB 2.2.0.

ODB is an open source object-relational mapping (ORM) system for C++. It
allows you to persist C++ objects to a relational database without having
to deal with tables, columns, or SQL and without manually writing any of
the mapping code.

ODB is implemented as a GCC plugin and reuses the GCC compiler frontend
for C++ parsing. ODB supports all releases of GCC with plugin support
(4.5-4.7) as well as 4.8 snapshots. With a few small modifications we've
also managed to link the plugin statically on Windows, so ODB also works
on Windows.

Major new features in this release:

  * Ability to use multiple database systems (for example, MySQL, SQLite,
etc.) from the same application. It comes in the 'static' and 'dynamic'
flavors with the latter allowing the application to dynamically load
the database support code for individual database systems if and when
necessary.

  * Support for prepared queries which are a thin wrapper around the
underlying database system's prepared statements functionality.
Prepared queries provide a way to perform potentially expensive
query preparation tasks only once and then execute the query
multiple times.

  * Support for change-tracking containers which minimize the number of
database operations necessary to synchronize the container state with
the database. This release comes with change-tracking equivalents for
std::vector and QList.

  * Support for custom sessions. This mechanism can be used to provide
additional functionality, such as automatic change tracking, delayed
database operations, auto change flushing, or object eviction.

  * Support for automatically-derived SQL name transformations. You can
now add prefixes/suffixes to table, column, index, and sequence names,
convert them to upper/lower case, or do custom regex transformations.

  * Automatic mapping of char[N] to database VARCHAR(N-1) (or similar).

This release also adds support for Qt5 in addition to Qt4 and comes with
a guide on using ODB with mobile and embedded systems (Raspberry Pi is
used as a sample ARM target).

A more detailed discussion of these features can be found in the following
blog post:

http://www.codesynthesis.com/~boris/blog/2013/02/13/odb-2-2-0-released/

For the complete list of new features in this version see the official
release announcement:

http://www.codesynthesis.com/pipermail/odb-announcements/2013/25.html

ODB is written in portable C++ and you should be able to use it with any
modern C++ compiler. In particular, we have tested this release on GNU/Linux
(x86/x86-64/ARM), Windows (x86/x86-64), Mac OS X (x86), and Solaris
(x86/x86-64/SPARC) with GNU g++ 4.2.x-4.8.x, MS Visual C++ 2008, 2010, and
2012, Sun Studio 12u2, and Clang 3.2.

The currently supported database systems are MySQL, SQLite, PostgreSQL,
Oracle, and SQL Server. ODB also provides profiles for Boost and Qt, which
allow you to seamlessly use value types, containers, and smart pointers
from these libraries in your persistent classes.

More information, documentation, source code, and pre-compiled binaries are
available from:

http://www.codesynthesis.com/products/odb/

Enjoy,
Boris



[ANN] C++Now 2013 submission deadline extended to January 5th

2012-12-18 Thread Boris Kolpackov
Just a quick note that the proposals deadline for the C++Now 2013
conference has been extended to January 5th:

http://cppnow.org/2013-call-for-submissions/

C++Now is the largest general C++ conference, that is, it is not specific
to any library/framework or compiler vendor. C++Now has three tracks with
presentations ranging from hands-on, practical tutorials to advanced C++
design and development techniques. Like last year, expect a large number
of talks to focus on C++11 with this year bringing more practical,
experience-based knowledge on using the new language features.

Giving a talk at C++Now is a great way to share with others something cool
that you have learned or built. Plus, the registration fee is waived for one
speaker of every standard presentation while shorter sessions are prorated.



Registration for C++Now 2013 is now open

2012-12-11 Thread Boris Kolpackov
The seventh annual C++Now Conference (formerly BoostCon) will be held at the
Aspen Center for Physics in Aspen, Colorado, May 12th to 17th, 2013.

"We are thrilled to announce the second annual C++Now conference,
the whole-language edition of BoostCon covering all the coolest topics
in C++," said Dave Abrahams, Conference Co-Chair. "In 2012, we broadened the
conference scope by adding a third track and offering more C++11 coverage
than any other event, and the community responded with an unprecedented
number of registrations. In 2013, we are going to build on that success
with foundational sessions integrating what we've all learned about using
C++11 during the past year, while continuing the exploration of cutting-edge
topics that BoostCon attendees have come to expect."

Early Bird Savings Deadlines!

Early Bird conference registration, which ends April 14th, 2013, costs
$599. After that date, the registration fee is $699. Register now at our
registration page.

Early Bird hotel registration ends December 31st, 2012 and saves $20 per
night. Please reserve your room using the Aspen Meadows online reservation
system.

Speakers

If you are interested in presenting, we are currently accepting
proposals. The registration fee is waived for one speaker of every
standard session presentation. Shorter sessions are prorated.

Student/Volunteers

Registration fees will be waived for a limited number of individuals who
wish to attend as volunteers. Volunteer work consists of helping to run the
conference and will not prevent volunteers from attending sessions. This is
the first year this opportunity is being offered. If you are interested in
applying to attend as a volunteer, please contact us.

Sponsors

For a copy of the conference Sponsorship Prospectus contact
sponsors...@cppnow.org.

C++Now is presented by Boost in cooperation with ACM.

Permanent link for this announcement is:

http://cppnow.org/2013-registration-announcement/



[ANN] ODB C++ ORM 2.0.0 released

2012-05-02 Thread Boris Kolpackov
I am pleased to announce the release of ODB 2.0.0.

ODB is an open source object-relational mapping (ORM) system for C++. It
allows you to persist C++ objects to a relational database without having
to deal with tables, columns, or SQL and without manually writing any of
the mapping code.

ODB is implemented as a GCC plugin and this release adds support for GCC
4.7 series in addition to GCC 4.6 and 4.5.

Other major new features in this release:

  * Support for C++11 which adds integration with the new C++11 standard
library components, including smart pointers and containers. Now you
can use std::unique_ptr and std::shared_ptr as object pointers (their
lazy versions are also provided). For containers, support was added
for std::array, std::forward_list, and the unordered containers.

  * Support for polymorphism which allows you to persist, load, update,
erase, and query objects of derived classes using their base class
interfaces. Persistent class hierarchies are mapped to the relational
database model using the table-per-difference mapping.

  * Support for composite object ids which are translated to composite
primary keys in the relational database.

  * Support for the NULL semantics for composite values.

A more detailed discussion of these features can be found in the
following blog post:

http://www.codesynthesis.com/~boris/blog/2012/05/02/odb-2-0-0-released/

For the complete list of new features in this version see the official
release announcement:

http://www.codesynthesis.com/pipermail/odb-announcements/2012/13.html

ODB is written in portable C++ and you should be able to use it with any
modern C++ compiler. In particular, we have tested this release on GNU/Linux
(x86/x86-64), Windows (x86/x86-64), Mac OS X, and Solaris (x86/x86-64/SPARC)
with GNU g++ 4.2.x-4.7.x, MS Visual C++ 2008 and 2010, Sun Studio 12, and
Clang 3.0.

The currently supported database systems are MySQL, SQLite, PostgreSQL,
Oracle, and SQL Server. ODB also provides profiles for Boost and Qt, which
allow you to seamlessly use value types, containers, and smart pointers
from these libraries in your persistent classes.

More information, documentation, source code, and pre-compiled binaries are
available from:

http://www.codesynthesis.com/products/odb/

Enjoy,
Boris



Re: Traversing typedef hierarchy in C/C++ tree

2011-04-29 Thread Boris Kolpackov
Hi Dodji,

Dodji Seketeli  writes:

> Boris Kolpackov  wrote:
>
> > template <typename T>
> > struct wrap
> > {
> >   typedef T w_s;
> > };
> >
> > typedef wrap<my_s_t>::w_s w_s_t;
> >
> > Now if I traverse from w_s_t using DECL_ORIGINAL_TYPE I get:
> >
> > w_s_t->w_s->s
> >
> > Instead of:
> >
> > w_s_t->w_s->my_s_t->s_t->s
>
> Ah.  Indeed.  We strip typedefs from template arguments because G++
> keeps only one instance of each template specialization.  So it chooses
> to keep the "canonical" one.  In other words, wrap<my_s_t> and wrap<s>
> ultimately representing the same specialization, G++ only constructs one
> of them.  And it chooses to construct wrap<s> because 's' is the
> canonical type here and not "my_s_t".  If it did choose to keep "my_s_t",
> error messages would refer to wrap<my_s_t> even for cases where it
> really is wrap<s> that has actually been written by the user.  That
> would be confusing.

I see. I guess this is also the reason why we get verbose error messages
like:

error: ‘foo’ is not a member of ‘std::vector<s, std::allocator<s> >’

Instead of:

error: ‘foo’ is not a member of ‘std::vector<my_s_t>’

Do you know if there are any plans (or desire) to improve this? Creating
a separate tree_node for each instantiation would probably be too wasteful
so maybe we could keep only "distinct instantiations", i.e., those that
were created using distinct type nodes as template arguments. We could use
a hash based on the template node pointer plus all the argument type node
pointers to detect duplicates.
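
In code, the key could be computed along these lines (a sketch with
made-up names, not actual GCC internals):

#include <cstddef>
#include <functional>
#include <vector>

// Key for a "distinct instantiation": the address of the template
// node combined with the addresses of the argument type nodes.
std::size_t
instantiation_key (const void* tmpl, const std::vector<const void*>& args)
{
  std::size_t h = std::hash<const void*> () (tmpl);
  for (const void* a: args)
    h ^= std::hash<const void*> () (a) + 0x9e3779b9 + (h << 6) + (h >> 2);
  return h;
}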

What do you think?

Boris



Re: Traversing typedef hierarchy in C/C++ tree

2011-04-25 Thread Boris Kolpackov
Hi Dodji,

Dodji Seketeli  writes:

> Boris Kolpackov  wrote:
>
> > struct s {};
> >
> > typedef s s_t;
> > typedef s_t my_s_t;
> >
> > my_s_t x;
> >
>
> In G++, let's say that the tree node representing my_s_t is t.  Then,
> DECL_ORIGINAL_TYPE (TYPE_NAME (t)) points to the tree node of s_t.  You
> can walk the relationship "t is a typedef of foo" like that.

Yes, that's exactly what I was looking for. Thanks for the pointer!
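
For reference, the traversal then looks roughly like this (a fragment
against the GCC internal API, error handling omitted):

/* Walk the typedef chain from type 't' towards the canonical type.  */
while (t != NULL_TREE)
  {
    tree name = TYPE_NAME (t);
    if (name == NULL_TREE
        || TREE_CODE (name) != TYPE_DECL
        || DECL_ORIGINAL_TYPE (name) == NULL_TREE)
      break;
    t = DECL_ORIGINAL_TYPE (name); /* One step: alias -> aliased.  */
  }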

While it works well for the above case, a template argument in the
path seems to break things. For example:

template <typename T>
struct wrap
{
  typedef T w_s;
};

typedef wrap<my_s_t>::w_s w_s_t;

Now if I traverse from w_s_t using DECL_ORIGINAL_TYPE I get:

w_s_t->w_s->s

Instead of:

w_s_t->w_s->my_s_t->s_t->s

Do you know if there is a way to get this information?

Thanks,
Boris



Re: Traversing typedef hierarchy in C/C++ tree

2011-04-22 Thread Boris Kolpackov
Hi Ian,

Ian Lance Taylor  writes:

> Unfortunately we have discovered over time that all the memory usage
> matters.  A rule of thumb for gcc is that compilation speed is roughly
> proportional to the amount of memory used.

I think fundamentally the requirements of those who use GCC as just a
compiler and those who use it to do other things (via plugins) will be
often at odds. The "compiler users" will always strive to keep as little
syntactic information as possible to maximize performance. While the
"plugin users" will want as much context as possible.

A more general approach which could satisfy both camps would be to allow
the "plugin users" to maintain the extra information if they need to. For
example, I would be perfectly happy to build and use the typedef hierarchy
outside of the AST. And all that I would need for this is a plugin event,
similar to PLUGIN_FINISH_TYPE, that would give me base type, new type,
and the decl nodes. In the parser source code the overhead would be an
additional if-statement for each typedef:

if (finish_typedef_callbacks_count != 0)
  /* Call registered callbacks. */

The nice thing about this approach is that it can be applied equally well
to a lot of things without having to fit them into the existing tree.
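
And on the plugin side, consuming such an event would mirror how the
existing events are handled (a sketch: PLUGIN_FINISH_TYPEDEF is the
hypothetical event proposed above, while the registration API is the
existing one):

#include "gcc-plugin.h"
#include "tree.h"

int plugin_is_GPL_compatible;

static void
finish_typedef (void *gcc_data, void *user_data)
{
  tree decl = (tree) gcc_data; /* TYPE_DECL of the new alias.  */
  /* Record the TREE_TYPE (decl) -> DECL_ORIGINAL_TYPE (decl) edge in
     our own, outside-of-the-AST typedef hierarchy.  */
  (void) decl;
  (void) user_data;
}

int
plugin_init (struct plugin_name_args *info, struct plugin_gcc_version *version)
{
  (void) version;
  /* PLUGIN_FINISH_TYPEDEF does not exist (yet); this is the shape the
     registration would take.  */
  register_callback (info->base_name, PLUGIN_FINISH_TYPEDEF,
                     &finish_typedef, NULL);
  return 0;
}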

What do you think?

Thanks,
Boris



Re: Traversing typedef hierarchy in C/C++ tree

2011-04-21 Thread Boris Kolpackov
Hi Ian,

Ian Lance Taylor  writes:

> Boris Kolpackov  writes:
>
> > Yes, that's what I suspect. Which is unfortunate since GCC already creates
> > all the nodes. All that is left is to establish a link between two types.
> > While this is not necessary for C/C++ compilation, it could be useful for
> > other tools that can now be built with the GCC plugin architecture.
>
> If you can make that work without using any extra memory and without
> making it more expensive to handle typedefs, I expect that the patch
> would be acceptable.

I took a look at it and did some pondering. It doesn't seem it will be
easy to meet both of the above requirements. We have to keep the
main_variant member in struct tree_type because getting to the main
variant is a very frequent operation. I don't think replacing it
with a loop that goes up the tree from a leaf node to the root is
an option.

We also have to keep the linked list of all the variants (the
next_variant member) because there are a couple of places in the
code that need to visit every variant.

I also looked into reusing some of the existing members that
are the same for all variants. In this case we could store the
real value in the main variant and use the member in all other
variants to organize the tree. binfo was a good candidate but then
I discovered that there is a competition for member reuse ;-) and
binfo is already taken.

I was also wondering whether adding an extra member would be a big
deal, memory usage-wise. This member is only going to be added to the
TYPE nodes (struct tree_type). I may be wrong, but I would expect
that there aren't that many such nodes in a typical translation
unit. Much fewer than, say, nodes that are used to build function
bodies. Let's say we have 10,000 such nodes. Then on a 64-bit
box we will use an extra ~80KB.

What do you think?

Thanks,
Boris



Re: Traversing typedef hierarchy in C/C++ tree

2011-04-20 Thread Boris Kolpackov
Hi Ian,

Ian Lance Taylor  writes:

> As far as I know this is not possible.  A typedef name is just an alias
> for the underlying type.  When you typedef T as TNAME, where T is itself
> a typedef, GCC records that TNAME is a name for the underlying type of
> T. It does not record an equivalence of T and TNAME.  The C/C++
> language do not require GCC to keep track of this information, and it's
> simpler for the frontend to just maintain a list.

Yes, that's what I suspect. Which is unfortunate since GCC already creates
all the nodes. All that is left is to establish a link between two types.
While this is not necessary for C/C++ compilation, it could be useful for
other tools that can now be built with the GCC plugin architecture.

Thanks,
Boris



Re: Traversing typedef hierarchy in C/C++ tree

2011-04-20 Thread Boris Kolpackov
Hi Jonathan,

Jonathan Wakely  writes:

> I don't know if GCC keeps the information you want, but according to
> the language rules there is no hierarchy. There's a type, and zero or
> more alternative names for it.  The example above makes my_s_t a
> synonym for s, not s_t.

Right. "Hierarchy" was probably a poor choice of a term for this.
I didn't mean  hierarchy in the language sense but in the AST sense.
GCC already creates a separate *_TYPE node for each typedef alias.
And you can get from any such node to the "primary node", or root
of the tree, using the TYPE_MAIN_VARIANT() macro. What I want is
to get the parent node, not the root node.


> Consider this valid code:
>
> typedef int foo;
> typedef int bar;
> typedef foo bar;
> typedef bar foo;
>
> What do you expect to see here?

Any sensible (e.g., ignoring all re-declarations) tree would work for
me. I don't particularly care if my code doesn't produce the desired
result for pathological cases like the above.

> You want to track size_t, what if someone uses __typeof__(sizeof(1)),
> does that count?

I am fine with it not counting.


> What about std::size_t?

This one is actually covered. In the GCC AST the std::size_t node is
the same as ::size_t (i.e., GCC does not create a new *_TYPE node
for using-declarations).


> That could be defined as a synonym for __SIZE_TYPE__ or decltype(sizeof(1))
> so is not in a sequence of typedef declarations that includes size_t.

If it were defined as one of these, I could then check for both ::size_t
and ::std::size_t.

Thanks,
Boris



Traversing typedef hierarchy in C/C++ tree

2011-04-20 Thread Boris Kolpackov
Hi,

I am trying to figure out how to get a typedef hierarchy from the C/C++
tree in GCC. Consider the following declarations:

struct s {};

typedef s s_t;
typedef s_t my_s_t;

my_s_t x;

Given the 'x' VAR_DECL I can get this traversal using TYPE_MAIN_VARIANT():

x -> my_s_t -> s;

What I am trying to achieve is this:

x -> my_s_t -> s_t -> s

I looked at TYPE_NEXT_VARIANT(), but this is a list while what I need
is a tree (hierarchy).

Some background on why I need this: I would like to determine if a
member of a class is size_t so that I can always map it to a 64-bit
integer in another system (RDBMS). In other words:

struct s
{
  unsigned int i; // -> 32-bit int
  size_t s;       // -> 64-bit int
};

Even though size_t might be typedef'ed as unsigned int. In the above
example I can do it. However, adding a level of indirection causes
problems:

typedef size_t my_size;

struct s
{
  my_size s; // TYPE_MAIN_VARIANT(my_size) == unsigned int
};

Any ideas will be much appreciated.

Boris



Report: using GCC plugin to implement ORM for C++

2010-09-30 Thread Boris Kolpackov
Hi,

We have just released a C++ object-relational mapping (ORM) system,
called ODB, that uses the new GCC plugin architecture. I thought I
would report back to the GCC community on how the plugin part worked
out.

In a nutshell, the ODB compiler parses a C++ header with class
declarations (and some custom #pragmas that control the mapping)
and generates C++ code that performs the conversion between these
classes and their database representation. I have included some more
information on ODB at the end of this email for those interested.

What worked well:

  - Access to pretty much all of the GCC functions, macros, and 
internal data structures.

  - Ability to register custom pragmas and attributes.
  
  - The tree contains information about typedef aliases. Given the
following code:

class c {};
typedef c c_t;
c_t x;

One can discover that 'x' was declared as 'c_t', not just that its
type is 'c'. This is very useful when performing the source-to-source
translation.

What didn't work so well:

  - The plugin header inclusion is a mess. You have to include the right
set of headers in the right order to get things to compile. Plus, the
GCC headers poison some declarations, so, for example, you cannot use
std::abort. Maybe there is a good reason for this.

  - If the plugin needs to generate something other than assembly (C++
in our case), then the code that suppresses the opening of the 
assembly file is quite hackish.

  - There is no callback point after the tree has been constructed and
before any other transformations have been performed. We use the
first gate callback (PLUGIN_OVERRIDE_GATE) and do all our work
there. There is also no well-defined way to stop the compilation
process at this point. We simply use exit().

  - Working with the GCC tree is not for the faint of heart. Generating
code from it is particularly hard. In fact, we have created our own
C++ classes (called "semantics graph") to represent the translation
unit being compiled. It is not as complete as the GCC's tree but it
is a lot easier to traverse.


We have built and tested the ODB plugin with GCC 4.5.1 on the following
platforms:

GNU/Linux

 Predictably, everything works out of the box.

Solaris

 Had to add OBJDUMP=gobjdump (or /usr/sfw/bin/gobjdump) when configuring
 GCC. Otherwise, the -rdynamic test will fail. Tested both x86 and SPARC.

Mac OS X

 Had to apply a backported patch for PR 43715.

Windows

 Well, this one was fun. There is no dlopen/dlsym so no plugin support.
 What we did was this: we linked the ODB plugin statically into the
 cc1plus binary. This required a small patch to the GCC plugin loading
 code. The patch is small instead of being large thanks to the way the
 plugin support is implemented in GCC. Even if plugin support is not
 enabled, most of the code (callback points, etc.) is still there. It's
 just that there is no way to load a plugin. This way it was fairly
 straightforward to add another method of "loading" a plugin. Kudos to
 whoever designed this. I can share the patch/build instructions with
 anyone interested.

Overall, I think, the GCC plugin architecture is a great addition. The
amount of flexibility it affords you is quite amazing.

Some more information on ODB:

ODB is an open-source, compiler-based object-relational mapping (ORM)
system for C++. It allows you to persist C++ objects to a relational
database without having to deal with tables, columns, or SQL and
without manually writing any mapping code. For example:

  #pragma db object
  class person
  {
...

  private:
friend class odb::access;
person ();

#pragma db id auto
unsigned long id_;

string first_;
string last_;
unsigned short age_;
  };

ODB is not a framework. It does not dictate how you should write your
application. Rather, it is designed to fit into your style and 
architecture by only handling C++ object persistence and not 
interfering with any other functionality. As you can see, existing
classes can be made persistent with only a few modifications.

Given the above class, we can perform various database operations with
its objects:

  person john ("John", "Doe", 31);
  person jane ("Jane", "Doe", 29);

  transaction t (db.begin ());

  db.persist (john);
  db.persist (jane);

  result<person> r (db.query<person> (query::last == "Doe" && query::age < 30));
  copy (r.begin (), r.end (), ostream_iterator<person> (cout, "\n"));

  jane.age (jane.age () + 1);
  db.update (jane);

  t.commit ();

The ODB compiler uses the GCC compiler frontend for C++ parsing and is
implemented using the new GCC plugin architecture. While ODB uses GCC
internally, its output is standard C++ which means that you can use
any C++ compiler to build your application.

ODB is written in portable C++ and you should be able to use it with
any modern C++ compiler. In particular, we have tested this release
on GNU/Linux (x86/x86-64), Windows (x86/x86-64), Mac OS X, and Solaris.