On Thu, 08 Sep 2011 17:34:50 -0400, Timon Gehr <[email protected]> wrote:

On 09/08/2011 10:33 PM, Jonathan M Davis wrote:
On Thursday, September 08, 2011 15:04:56 Andrei Alexandrescu wrote:
On 9/8/11 2:02 PM, Jonathan M Davis wrote:
I think that it makes perfect sense to use enums for flags. What I don't
think makes sense is making the type of the variable which holds the
flags to be that enum type unless _every_ possible combination of flags
has its own flag so that &ing or |ing enums always results in a valid
enum.

This ain't going to work because it would require the human user to
write by hand a combinatorial number of symbols.

A lightweight fixed-size set with named members is a worthy abstraction
for the standard library.
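
(For illustration only: a minimal sketch of such a fixed-size named flag set. FlagSet, its members, and the uint base type are made up for the example, not an actual Phobos API.)

// Sketch: a tiny set keyed by an enum of bit values; assumes the enum's
// values fit in a uint.
struct FlagSet(E) if (is(E == enum))
{
    private uint bits;

    // |-ing or &-ing in a flag yields another FlagSet, never a bogus E value.
    FlagSet opBinary(string op : "|")(E flag) const
    {
        FlagSet r;
        r.bits = bits | flag;
        return r;
    }

    FlagSet opBinary(string op : "&")(E flag) const
    {
        FlagSet r;
        r.bits = bits & flag;
        return r;
    }

    bool has(E flag) const { return (bits & flag) != 0; }
}

// Usage:
// enum Flag : uint { a = 1, b = 2, c = 4, d = 8 }
// auto fs = FlagSet!Flag() | Flag.a | Flag.b;
// assert(fs.has(Flag.a) && !fs.has(Flag.c));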

I agree. I'm not arguing that the user _should_ create such a combination of flags. That would be horrible. I'm just arguing that having a set of flags defined with an enum, e.g.

enum Flag { a = 1, b = 2, c = 4, d = 8 };

and then having Flag.a | Flag.b or Flag.a & Flag.b result in a value of type Flag is not a good idea, because the result isn't a valid Flag. It should result in whatever the base type is (int in this case), and functions which take such flags &ed or |ed should take them using the base type, not the enum type.
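
(A quick sketch of what that looks like in practice; openFile and its parameter are hypothetical names, and whether the compiler itself types Flag.a | Flag.b as Flag or as an integer is beside the point here.)

enum Flag : uint { a = 1, b = 2, c = 4, d = 8 }

// Take the combined flags as the base type: Flag.a | Flag.b (== 3) is a
// perfectly good uint, but it is not any named member of Flag.
void openFile(string path, uint flags)
{
    if (flags & Flag.a) { /* ... */ }
}

// openFile("foo.txt", Flag.a | Flag.b);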

- Jonathan M Davis

+1.

I could go either way on this. On one hand, it's nice to say "this is a bitfield, and the compiler will force you to use my enumeration constants to build it", and on the other hand, anyone who passes in integers (especially something non-hex or non-binary like 12345) is asking for code-review rejection ;)

In the stdio overhaul, I did use an enumeration argument that included a single bit which could be or'd in. The contract still verified that the enum value was valid, so it could just as easily have been a uint (or maybe it was a ubyte?). I don't suppose the type checking is all that critical.
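
(Not the actual stdio code, just a sketch of that kind of in-contract check, with hypothetical names.)

enum Mode : ubyte { read = 1, write = 2, append = 4 }

// The parameter is a plain ubyte; the contract rejects any bits that
// don't correspond to a known Mode member.
void reopen(ubyte mode)
in
{
    assert((mode & ~(Mode.read | Mode.write | Mode.append)) == 0,
           "invalid mode bits");
}
body
{
    // ...
}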

-Steve
