Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread monarch_dodra

On Wednesday, 1 August 2012 at 07:24:09 UTC, Era Scarecrow wrote:
On Tuesday, 31 July 2012 at 20:41:55 UTC, Dmitry Olshansky 
wrote:


Great to see things moving. Could you please do a separate 
pull for bitfields? It should get merged more easily, and it 
seems like a small but important bugfix.


https://github.com/rtcvb32/phobos/commit/620ba57cc0a860245a2bf03f7b7f5d6a1bb58312

 I've committed the next update to my bitfields branch. All 
unittests pass for me.


I had an (implementation) question for you:
Does the implementation actually require knowing what the size of 
the padding is?


eg:
struct A
{
    int a;
    mixin(bitfields!(
        uint,  "x", 2,
        int,   "y", 3,
        ulong, "",  3 // <- This line right there
    ));
}

Is that highlighted line really mandatory?
I'm fine with having it optional, in case I'd want to have, say, 
a 59-bit padding, but can't the implementation figure it out on 
its own?
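
For reference, a minimal sketch of the rule under discussion, as 
I understand std.bitmanip's current behavior (the struct name is 
made up): an instantiation whose widths don't sum to exactly 8, 
16, 32 or 64 is rejected at compile time.

import std.bitmanip;

struct B
{
    // 2 + 3 = 5 bits total: fails to compile, since the
    // widths must sum to exactly 8, 16, 32 or 64.
    mixin(bitfields!(
        uint, "x", 2,
        int,  "y", 3));
}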


Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Era Scarecrow

On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote:
I had an (implementation) question for you: Does the 
implementation actually require knowing what the size of the 
padding is?


eg:
struct A
{
    int a;
    mixin(bitfields!(
        uint,  "x", 2,
        int,   "y", 3,
        ulong, "",  3 // <- This line right there
    ));
}

Is that highlighted line really mandatory?
I'm fine with having it optional, in case I'd want to have, 
say, a 59-bit padding, but can't the implementation figure it 
out on its own?


 The original code has it set that way; why? Perhaps so you are 
aware of, and actually have in place, where all the bits are 
assigned (even if you aren't using them). It'd be horrible if you 
accidentally used 33 bits and it extended to 64 without telling 
you (wouldn't it?).


 However, having it fill the size in and ignore the last x bits 
wouldn't be too hard to do; I've been wondering if I should 
remove the requirement.
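
 For what it's worth, inferring the fill wouldn't take much. A 
hypothetical helper (not part of std.bitmanip) could round the 
declared bit count up to the next storage size at compile time:

// Hypothetical helper, not in std.bitmanip: round a declared
// bit count up to the nearest supported storage size.
size_t paddedSize(size_t bits)
{
    if (bits <= 8)  return 8;
    if (bits <= 16) return 16;
    if (bits <= 32) return 32;
    if (bits <= 64) return 64;
    assert(0, "bitfields cannot span more than 64 bits");
}

static assert(paddedSize(5)  == 8);  // runs at compile time (CTFE)
static assert(paddedSize(37) == 64);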


Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread monarch_dodra

On Thursday, 2 August 2012 at 09:14:15 UTC, Era Scarecrow wrote:

On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote:
I had an (implementation) question for you: Does the 
implementation actually require knowing what the size of the 
padding is?


eg:
struct A
{
    int a;
    mixin(bitfields!(
        uint,  "x", 2,
        int,   "y", 3,
        ulong, "",  3 // <- This line right there
    ));
}

Is that highlighted line really mandatory?
I'm fine with having it optional, in case I'd want to have, 
say, a 59-bit padding, but can't the implementation figure it 
out on its own?


 The original code has it set that way; why? Perhaps so you are 
aware of, and actually have in place, where all the bits are 
assigned (even if you aren't using them). It'd be horrible if 
you accidentally used 33 bits and it extended to 64 without 
telling you (wouldn't it?).


 However, having it fill the size in and ignore the last x bits 
wouldn't be too hard to do; I've been wondering if I should 
remove the requirement.


Well, I was just trying to figure out the rationale, the most 
obvious one to me being that it is much easier on the 
implementation.


One of the *big* reasons I'm against having a hand-chosen 
padding is that the implementation *should* be able to find out 
what the most efficient padding is on the current machine (could 
be 32 on some, could be 64 on others).


That said, something that could fix the above problem could be:
* Bitfields are automatically padded if the final field is not a 
padding field.
  * Padding size is implementation-chosen.
* If the final field is a padding field, then the total size 
must be 8/16/32/64.

EG:
//Case 1
bitfields!(
    bool, "x", 1,
    uint, "",  3, //Interfield padding
    bool, "y", 1
)
//Fine, implementation-chosen bitfield size

//Case 2
bitfields!(
    bool,  "x", 1,
    uint,  "",  3, //Interfield padding
    bool,  "y", 1,
    ulong, "",  59, //Pad to 64
)
//Fine, imposed 64 bit

//Case 3
bitfields!(
    bool,  "x", 1,
    uint,  "",  3, //Interfield padding
    bool,  "y", 1,
    ulong, "",  32, //Pad to 37
)
//ERROR: Padding requests the bitfield to be 37 bits long

But I'd say that's another development anyways, if we ever decide 
to go this way.


Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Andrei Alexandrescu

On 8/2/12 5:14 AM, Era Scarecrow wrote:

On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote:

I had an (implementation) question for you: Does the implementation
actually require knowing what the size of the padding is?

eg:
struct A
{
    int a;
    mixin(bitfields!(
        uint, "x", 2,
        int, "y", 3,
        ulong, "", 3 // <- This line right there
    ));
}

Is that highlighted line really mandatory?
I'm fine with having it optional, in case I'd want to have, say, a 59-bit
padding, but can't the implementation figure it out on its own?


The original code has it set that way; why? Perhaps so you are aware
of, and actually have in place, where all the bits are assigned (even
if you aren't using them). It'd be horrible if you accidentally used 33
bits and it extended to 64 without telling you (wouldn't it?).


Yes, that's the intent. The user must define exactly how an entire 
ubyte/ushort/uint/ulong is filled, otherwise ambiguities and bugs are 
soon to arrive.



However, having it fill the size in and ignore the last x bits wouldn't
be too hard to do; I've been wondering if I should remove the requirement.


Please don't. The effort on the programmer side is virtually nil, and 
keeps things in check. In no case would the use of bitfields() be so 
intensive that the bloat of one line gets any significance.



Andrei


Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Andrei Alexandrescu

On 8/2/12 5:26 AM, monarch_dodra wrote:

One of the *big* reasons I'm against having a hand chosen padding, is
that the implementation *should* be able to find out what the most
efficient padding is on the current machine (could be 32 on some, could
be 64 on some)


In my neck of the woods they call that non-portability.

If your code is dependent on the machine's characteristics, you use 
version() and whatnot.
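
A sketch of what that might look like; D_LP64 here is just a 
stand-in for whatever machine characteristic matters, and the 
numbers and names are illustrative:

// Illustrative only: pick the padding (and storage type)
// per platform via version(), as suggested above.
version (D_LP64)
{
    enum padBits = 59;  // pad the 5 used bits out to 64
    alias PadT = ulong;
}
else
{
    enum padBits = 27;  // pad the 5 used bits out to 32
    alias PadT = uint;
}
// mixin(bitfields!(uint, "x", 2, int, "y", 3, PadT, "", padBits));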



Andrei


Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Era Scarecrow
On Thursday, 2 August 2012 at 12:35:20 UTC, Andrei Alexandrescu 
wrote:


Please don't. The effort on the programmer side is virtually 
nil, and keeps things in check. In no case would the use of 
bitfields() be so intensive that the bloat of one line gets any 
significance.


 If you're using a template or something to fill in the sizes, 
then having to calculate the remainder could be an annoyance; but 
those cases would be small in number.


 I'll agree, and it's best to leave it as it is.

 BTW, wasn't there a new/reserved type of cent/ucent (128-bit)?


Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread monarch_dodra
On Thursday, 2 August 2012 at 12:38:10 UTC, Andrei Alexandrescu 
wrote:

On 8/2/12 5:26 AM, monarch_dodra wrote:
One of the *big* reasons I'm against having a hand-chosen 
padding is that the implementation *should* be able to find out 
what the most efficient padding is on the current machine (could 
be 32 on some, could be 64 on others)


In my neck of the woods they call that non-portability.

If your code is dependent on the machine's characteristics, you 
use version() and whatnot.


Well, isn't that the entire point: Making your code NOT dependent 
on the machine's characteristics?


By forcing the developer to choose the bitfield size (32 or 64), 
you ARE forcing him to make a choice dependent on the machine's 
characteristics. The developer just knows how he wants to pack 
his bits, not how he wants to pad them. Why should the developer 
be burdened with figuring out what the optimal size of his 
bitfield should be?


By leaving the field blank, *that* guarantees portability.



Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Andrei Alexandrescu

On 8/2/12 9:48 AM, monarch_dodra wrote:

On Thursday, 2 August 2012 at 12:38:10 UTC, Andrei Alexandrescu wrote:

On 8/2/12 5:26 AM, monarch_dodra wrote:

One of the *big* reasons I'm against having a hand-chosen padding is
that the implementation *should* be able to find out what the most
efficient padding is on the current machine (could be 32 on some, could
be 64 on others)


In my neck of the woods they call that non-portability.

If your code is dependent on the machine's characteristics, you use
version() and whatnot.


Well, isn't that the entire point: Making your code NOT dependent on the
machine's characteristics?

By forcing the developer to choose the bitfield size (32 or 64), you ARE
forcing him to make a choice dependent on the machine's characteristics.


I think that's backwards.

Andrei


Re: Why must bitfields sum to a multiple of a byte?

2012-08-01 Thread Era Scarecrow

On Tuesday, 31 July 2012 at 20:41:55 UTC, Dmitry Olshansky wrote:

Great to see things moving. Could you please do a separate pull 
for bitfields? It should get merged more easily, and it seems 
like a small but important bugfix.


https://github.com/rtcvb32/phobos/commit/620ba57cc0a860245a2bf03f7b7f5d6a1bb58312

 I've committed the next update to my bitfields branch. All 
unittests pass for me.


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread monarch_dodra

On Tuesday, 31 July 2012 at 00:44:16 UTC, Andrej Mitrovic wrote:

On 7/31/12, Era Scarecrow rtcv...@yahoo.com wrote:
  It assumes the largest type we can currently use which is 
ulong


Ah yes, it makes sense now. Thanks for the brain cereal. :p


I saw your bug report:
http://d.puremagic.com/issues/show_bug.cgi?id=8474

The bug is only when the field is EXACTLY 32 bits, BTW. bitfields 
works quite nicely with 33 or whatever. More details in the report.


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Andrej Mitrovic
On 7/31/12, monarch_dodra monarchdo...@gmail.com wrote:
 The bug is only when the field is EXACTLY 32 bits, BTW. bitfields
 works quite nicely with 33 or whatever. More details in the report.

Yeah 32 or 64 bits, thanks for changing the title.


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow

On Tuesday, 31 July 2012 at 15:25:55 UTC, Andrej Mitrovic wrote:

On 7/31/12, monarch_dodra monarchdo...@gmail.com wrote:
The bug is only when the field is EXACTLY 32 bits, BTW. 
bitfields works quite nicely with 33 or whatever. More details 
in the report.


Yeah 32 or 64 bits, thanks for changing the title.


 I wonder, is it really a bug? If you are going to have it fill a 
whole size it would fit anyways, why even put it in as a 
bitfield? You could just declare it separately.


 I get the feeling it's not so much a bug as a design feature. If 
you really needed a full size not aligned with whole bytes (or 
padded appropriately) then I could understand, but still...



 And I'm the one that changed the title name. Suddenly I'm 
reminded of South Park (the movie) and the buttfor.


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Andrej Mitrovic
On 7/31/12, Era Scarecrow rtcv...@yahoo.com wrote:
   I wonder, is it really a bug? If you are going to have it fill a
 whole size it would fit anyways, why even put it in as a
 bitfield? You could just declare it separately.

I don't really know, I'm looking at this from the point of view of
wrapping C++. I haven't used bitfields myself in my own code.


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow

On Tuesday, 31 July 2012 at 16:48:37 UTC, Andrej Mitrovic wrote:

On 7/31/12, Era Scarecrow rtcv...@yahoo.com wrote:
I wonder, is it really a bug? If you are going to have it fill 
a whole size it would fit anyways, why even put it in as a 
bitfield? You could just declare it separately.


I don't really know, I'm looking at this from the point of 
view of wrapping C++. I haven't used bitfields myself in my 
own code.


 I'd say it's not a bug: since C/C++ is free to reorder the 
fields, you'd need to tinker with it anyways; HOWEVER, if you 
still need to be able to have it, then who's to stop you from 
doing it?


 I think more likely a flag/version or some indicator that you 
didn't make a mistake, such as making them deprecated so it 
complains to you. Kinda like how you can't make assignments in if 
statements or do useless compares; it's an error and helps 
prevent issues that are quite obviously mistakes.
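
 A quick illustration of the if-assignment rule being referred 
to (the exact DMD wording may differ):

void f()
{
    int a, b = 1;
    // if (a = b) {}     // Error: assignment cannot be used as a condition
    if ((a = b) != 0) {} // fine: the intent is explicit
}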


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread monarch_dodra

On Tuesday, 31 July 2012 at 16:16:00 UTC, Era Scarecrow wrote:

On Tuesday, 31 July 2012 at 15:25:55 UTC, Andrej Mitrovic wrote:

On 7/31/12, monarch_dodra monarchdo...@gmail.com wrote:
The bug is only when the field is EXACTLY 32 bits, BTW. 
bitfields works quite nicely with 33 or whatever. More details 
in the report.


Yeah 32 or 64 bits, thanks for changing the title.


 I wonder, is it really a bug? If you are going to have it fill 
a whole size it would fit anyways, why even put it in as a 
bitfield? You could just declare it separately.


 I get the feeling it's not so much a bug as a design feature. 
If you really needed a full size not aligned with whole bytes 
(or padded appropriately) then I could understand, but still...



 And I'm the one that changed the title name. Suddenly I'm 
reminded of south park (movie) and the buttfor.


No, it's a bug. There is no reason for it to fail (and it 
certainly isn't a feature).


Maybe the user wants to pack a uint, ushort, ubyte, ubyte 
together in a struct, but doesn't want the rest of that struct's 
members 1-aligned?
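
A sketch of that first case; this is exactly the sort of 
instantiation the 32-bit bug currently breaks (names are 
illustrative):

import std.bitmanip;

struct Packed
{
    // 32 + 16 + 8 + 8 = 64 bits in one backing field, without
    // forcing 1-byte alignment on the rest of the struct.
    mixin(bitfields!(
        uint,   "a", 32,
        ushort, "b", 16,
        ubyte,  "c", 8,
        ubyte,  "d", 8));

    double z; // keeps its natural alignment
}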


Maybe the user needs a 32 bit ulong? This way the ulong only 
takes 32 bits, but can still be implicitly passed to functions 
expecting ulongs.
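
And a sketch of the second case, assuming the 32-bit limitation 
were lifted (the struct, field, and function names are made up):

import std.bitmanip;

struct S
{
    // A ulong-typed field stored in only 32 bits.
    mixin(bitfields!(
        ulong, "n",    32,
        uint,  "rest", 32));
}

ulong twice(ulong v) { return v + v; }
// s.n reads back as a ulong, so twice(s.n) needs no cast.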


Maybe the user generated the mixin using another template? For 
example, testing integers from 10 bits wide to 40 bits wide? 
Should he write a special case for 32?


...

Now I'm not saying it is a big bug or anything, but it is something 
that should be supported... Or at least explicitly asserted until 
fixed...


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow

On Tuesday, 31 July 2012 at 16:59:11 UTC, monarch_dodra wrote:
No, it's a bug. There is no reason for it to fail (and it 
certainly isn't a feature).


 If I made two fields in a 64-bit bitfield, each a 32-bit int, I'd 
like it to complain; if it's calculated from something else then 
finding the problem may be a little more difficult. But that's 
how my mind works: give you the tools you need to do whatever you 
want, including shooting yourself in the foot (although you need 
to work harder to do that than in C++).


Maybe the user wants to pack a uint, ushort, ubyte, ubyte 
together in a struct, but doesn't want the rest of that 
struct's members 1-aligned?


 That would be the one reason that makes sense; course, can't you 
align sections at 1 byte and then use the default afterwards? 
Course, now that I think about it, some systems don't have 
options for byte alignment and require memory accesses at 
4/8-byte alignments. That explains the need to support it.


Maybe the user needs a 32 bit ulong? This way the ulong only 
takes 32 bits, but can still be implicitly passed to functions 
expecting ulongs.


 I would think the bug only showed itself if you did an int at 
32 bits and a ulong at 64 bits, not a ulong at 32 bits.


Maybe the user generated the mixin using another template? For 
example, testing integers from 10 bits wide to 40 bits wide? 
Should he write a special case for 32?


 Ummm... yeah, I think I see what you're saying; and you're 
probably right, a special case would be less easily documented 
(and more of a pain) when it shouldn't need a special case.


Now I'm not saying it is a big bug or anything, but it is 
something that should be supported... Or at least explicitly 
asserted until fixed...




Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Timon Gehr

On 07/31/2012 06:57 PM, Era Scarecrow wrote:

On Tuesday, 31 July 2012 at 16:48:37 UTC, Andrej Mitrovic wrote:

On 7/31/12, Era Scarecrow rtcv...@yahoo.com wrote:

I wonder, is it really a bug? If you are going to have it fill a
whole size it would fit anyways, why even put it in as a bitfield?
You could just declare it separately.


I don't really know, I'm looking at this from the point of view of
wrapping C++. I haven't used bitfields myself in my own code.


I'd say it's not a bug: since C/C++ is free to reorder the fields,
you'd need to tinker with it anyways; HOWEVER, if you still need to
be able to have it, then who's to stop you from doing it?

I think more likely a flag/version or some indicator that you didn't
make a mistake, such as making them deprecated so it complains to you.
Kinda like how you can't make assignments in if statements or do useless
compares; it's an error and helps prevent issues that are quite
obviously mistakes.


This is obviously a mistake in the bitfield implementation. What else
could be concluded from the error message:

std\bitmanip.d(76): Error: shift by 32 is outside the range 0..31

Requesting a 32 bit or 64 bit member on the other hand is not a
mistake, and it is not useless, therefore the analogy breaks down.

(Also IMO, the once-in-a-year wtf that is caused by accidentally
assigning in an if condition does not justify special-casing assignment
expressions inside if conditions. Also, what is a useless compare?)


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow

On Tuesday, 31 July 2012 at 17:17:43 UTC, Timon Gehr wrote:

(Also IMO, the once-in-a-year wtf that is caused by 
accidentally assigning in an if condition does not justify 
special-casing assignment expressions inside if conditions. 
Also, what is a useless compare?)


 I've noticed in my experience, DMD gives you an error if you 
write a statement that has no effect; i.e.:


 1 + 2;  // statement has no effect
 a == b; // ditto


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread monarch_dodra

On Tuesday, 31 July 2012 at 17:17:25 UTC, Era Scarecrow wrote:

On Tuesday, 31 July 2012 at 16:59:11 UTC, monarch_dodra wrote:
Maybe the user needs a 32 bit ulong? This way the ulong only 
takes 32 bits, but can still be implicitly passed to functions 
expecting ulongs.


 I would think the bug only showed itself if you did an int at 
32 bits and a ulong at 64 bits, not a ulong at 32 bits.


No, the bug shows itself if the first field is 32 bits, 
regardless of its type (ulong included).


I would add though that requesting a field in bits that is bigger 
than the type of the field should not work (IMO). EG:


struct A
{
    mixin(bitfields!(
        ushort, "a", 24,
        uint,   "",  8
    ));
}

I don't see how that could make sense...
But it *is* legal in C and C++...
But it does generate warnings...

I think it should static assert in D.
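
Something like the following guard (hypothetical, not the actual 
std.bitmanip source) would catch it:

// Hypothetical check: a field may not be wider than its type.
template fieldFits(T, size_t bits)
{
    static assert(bits <= T.sizeof * 8,
        "a " ~ T.stringof ~ " field cannot span that many bits");
    enum fieldFits = true;
}

static assert(fieldFits!(ushort, 16));    // ok
// static assert(fieldFits!(ushort, 24)); // would fail to compile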

On Tuesday, 31 July 2012 at 17:21:00 UTC, Era Scarecrow wrote:

On Tuesday, 31 July 2012 at 17:17:43 UTC, Timon Gehr wrote:

(Also IMO, the once-in-a-year wtf that is caused by 
accidentally assigning in an if condition does not justify 
special-casing assignment expressions inside if conditions. 
Also, what is a useless compare?)


 I've noticed in my experience, DMD gives you an error if you 
write a statement that has no effect; i.e.:


 1 + 2;  // statement has no effect
 a == b; // ditto


That's part of the standard: statements that have no effect are 
illegal. This is a good thing, IMO. I've seen MANY C++ bugs that 
could have been prevented by that.


Regarding the assignment in if: I think it is a good thing. D 
sides with safety. If you *really* want to test an assignment, 
you can always use the comma operator:



import std.stdio;

void main()
{
    int a = 0, b = 5;
    while (a = --b, a) // YES, I *DO* want to assign!
    {
        write(a);
    }
}

I'm sure the compiler will optimize away what it needs.


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow

On Tuesday, 31 July 2012 at 17:34:33 UTC, monarch_dodra wrote:
No, the bug shows itself if the first field is 32 bits, 
regardless of its type (ulong included).


I would add though that requesting a field in bits that is 
bigger than the type of the field should not work (IMO). EG:


struct A
{
    mixin(bitfields!(
        ushort, "a", 24,
        uint,   "",  8
    ));
}

I don't see how that could make sense...
But it *is* legal in C and C++...
But it does generate warnings...


 Maybe so the ushort has extra padding for expansion at some later 
date, when they change it to uint? Could put an assert in, but if 
it doesn't break code...



I think it should static assert in D.


 Glancing over the issue, the [0..31] is a compiler error based 
on bit shifting (not bitfields itself); if the storage type is 
ulong then it shouldn't matter whether the first field is 32 bits 
or not. Unless... Nah, couldn't be... I'll look it over later to 
be sure.


That's part of the standard: statements that have no effect are 
illegal. This is a good thing, IMO. I've seen MANY C++ bugs 
that could have been prevented by that.


Regarding the assignment in if: I think it is a good thing. D 
sides with safety. If you *really* want to test an assignment, 
you can always use the comma operator.




Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow

On Tuesday, 31 July 2012 at 17:17:43 UTC, Timon Gehr wrote:

This is obviously a mistake in the bitfield implementation. 
What else could be concluded from the error message:


std\bitmanip.d(76): Error: shift by 32 is outside the range 
0..31


Requesting a 32 bit or 64 bit member on the other hand is not a 
mistake, and it is not useless, therefore the analogy breaks 
down.


 Well, curiously it was easier to fix than I thought (a line for a 
static if, and a modification of the masking)... Were there any 
other bugs that come to mind? Anything of consequence?


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Ali Çehreli

On 07/31/2012 09:15 AM, Era Scarecrow wrote:
 On Tuesday, 31 July 2012 at 15:25:55 UTC, Andrej Mitrovic wrote:
 On 7/31/12, monarch_dodra monarchdo...@gmail.com wrote:
 The bug is only when the field is EXACTLY 32 bits, BTW. bitfields
 works quite nicely with 33 or whatever. More details in the report.

 Yeah 32 or 64 bits, thanks for changing the title.

 I wonder, is it really a bug? If you are going to have it fill a whole
 size it would fit anyways, why even put it in as a bitfield? You could
 just declare it separately.

It can happen in templated code where the width of the first field may 
be a template parameter. I wouldn't want to 'static if (width == 32)'.
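
Something along these lines, for instance (illustrative only, 
for widths between 1 and 63):

import std.bitmanip;

// The field width is a template parameter, so 32 must not
// need special handling.
struct Packet(size_t width) if (width > 0 && width < 64)
{
    mixin(bitfields!(
        ulong, "payload", width,
        ulong, "",        64 - width));
}

alias P32 = Packet!32; // the exact case the bug broke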


But thanks for fixing the bug already! :)

Ali



Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Dmitry Olshansky

On 31-Jul-12 22:21, Era Scarecrow wrote:

On Tuesday, 31 July 2012 at 17:17:43 UTC, Timon Gehr wrote:


This is obviously a mistake in the bitfield implementation. What else
could be concluded from the error message:

std\bitmanip.d(76): Error: shift by 32 is outside the range 0..31

Requesting a 32 bit or 64 bit member on the other hand is not a
mistake, and it is not useless, therefore the analogy breaks down.


  Well, curiously it was easier to fix than I thought (a line for a
static if, and a modification of the masking)... Were there any other
bugs that come to mind? Anything of consequence?


Great to see things moving. Could you please do a separate pull for 
bitfields? It should get merged more easily, and it seems like a small 
but important bugfix.


--
Dmitry Olshansky


Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow

On Tuesday, 31 July 2012 at 20:41:55 UTC, Dmitry Olshansky wrote:

On 31-Jul-12 22:21, Era Scarecrow wrote:
Well, curiously it was easier to fix than I thought (a line for 
a static if, and a modification of the masking)... Were there 
any other bugs that come to mind? Anything of consequence?


Great to see things moving. Could you please do a separate pull 
for bitfields? It should get merged more easily, and it seems 
like a small but important bugfix.


 Guess this means I'll be working on BitArray a bit later and 
working instead on the bitfields code. What fun... :) I thought I 
had bitfields separate already, but it's kinda thrown both sets 
of changes in together. Once I figure it out I'll get them 
separated and finish work on the bitfields.


Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Era Scarecrow

On Monday, 30 July 2012 at 17:43:28 UTC, Ali Çehreli wrote:

On 07/30/2012 10:15 AM, Andrej Mitrovic wrote:

 import std.bitmanip;
 struct Foo
 {
     mixin(bitfields!(
         uint, "bits1", 32,
     ));
 }

 D:\DMD\dmd2\windows\bin\..\..\src\phobos\std\bitmanip.d(76): Error:
 shift by 32 is outside the range 0..31

 Should I file this?

Yes, it's a bug.


 And likely one I'll be working on fairly soon. I've been 
concentrating on the BitArray, but I'll get more of the bitfields 
very very soon.


Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Andrej Mitrovic
On 7/31/12, Era Scarecrow rtcv...@yahoo.com wrote:
   And likely one I'll be working on fairly soon. I've been
 concentrating on the BitArray, but I'll get more of the bitfields
 very very soon.


Cool. What about the max limit of 64 bits per bitfield instantiation? I
don't suppose this is common in C++, but I wouldn't know...


Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Era Scarecrow

On Monday, 30 July 2012 at 23:41:39 UTC, Andrej Mitrovic wrote:

On 7/31/12, Era Scarecrow rtcv...@yahoo.com wrote:
And likely one I'll be working on fairly soon. I've been 
concentrating on the BitArray, but I'll get more of the 
bitfields very very soon.


Cool. What about the max limit of 64 bits per bitfield 
instantiation? I don't suppose this is common in C++, but I 
wouldn't know...


 The limitation is based on what kinds of types you can use. If 
you want a field 1000 bits long, I don't see why not; but 
obviously you can't treat it like an int or bool (no cheap way 
to make a BigInt here :P). And packing, say, a struct would be 
slow, since it has to copy each bit individually or, if it's 
byte-aligned, by bytes.


 It assumes the largest type we can currently use, which is ulong; 
if cent ever gets properly added, then 128 bits will become 
available. If they want to go higher, then likely a type 'dime' or 
'nickel' or bicent (256 bits? :P) can be used once the template is 
modified. Maybe tricent can be 384 and quadcent can be 512... 
Mmmm :) Or it could just be large256/ularge256.


 As for the 'why': all of the bitfields work by using bit-shifting 
and low-level binary operators to do the job. So, 2 for 4 ints 
would be


int value;
int a() @property {
    return value & 0x7;
}
void a(int v) @property {
    value &= ~0x7;
    value |= v & 0x7;
}

int b() @property {
    return (value >> 4) & 0x7;
}
void a(int v) @property {
    value &= ~(0x7 << 4);
    value |= (v & 0x7) << 4;
}

 That may be flawed but it gives you a basic idea.


Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Era Scarecrow

On Tuesday, 31 July 2012 at 00:00:24 UTC, Era Scarecrow wrote:
Corrections:
 So, 2 variables using 4 bit ints would be


void a(int v) @property {
    value &= ~(0x7 << 4);
    value |= (v & 0x7) << 4;
}


the second setter should be
 void b(int v) @property {



Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Andrej Mitrovic
On 7/31/12, Era Scarecrow rtcv...@yahoo.com wrote:
   It assumes the largest type we can currently use which is ulong

Ah yes, it makes sense now. Thanks for the brain cereal. :p