Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Era Scarecrow
On Monday, 30 July 2012 at 17:43:28 UTC, Ali Çehreli wrote:
On 07/30/2012 10:15 AM, Andrej Mitrovic wrote:
> import std.bitmanip;
> struct Foo
> {
>     mixin(bitfields!(
>         uint, "bits1", 32,
>     ));
> }
>
> D:\DMD\dmd2\windows\bin\..\..\src\phobos\std\bitmanip.d(76): Error:
> shift
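
[The snippet above is the failing case: a single field of exactly 32 bits makes the generated shift overflow, and the full error, quoted later in the thread, is "shift by 32 is outside the range 0..31". Below is a minimal sketch of the workaround suggested further down (declare the full-width member outside the mixin); the field names flagA/flagB are illustrative, not from the original post.]

    import std.bitmanip;

    struct Foo
    {
        uint bits1;               // full 32-bit member declared directly
        mixin(bitfields!(         // remaining narrow fields still packed
            bool, "flagA", 1,
            bool, "flagB", 1,
            uint, "",      6));   // unnamed field pads the total to 8 bits
    }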

Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Andrej Mitrovic
On 7/31/12, Era Scarecrow wrote: > And likely one I'll be working on fairly soon. I've been > concentrating on the BitArray, but I'll get more of the bitfields > very very soon. > Cool. What about the max limit of 64bits per bitfield instantiation? I don't suppose this is common in C++ but I wo

Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Era Scarecrow
On Monday, 30 July 2012 at 23:41:39 UTC, Andrej Mitrovic wrote: On 7/31/12, Era Scarecrow wrote: And likely one I'll be working on fairly soon. I've been concentrating on the BitArray, but I'll get more of the bitfields very very soon. Cool. What about the Max limit of 64bits per bitfield i

Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Era Scarecrow
On Tuesday, 31 July 2012 at 00:00:24 UTC, Era Scarecrow wrote: Corrections: So, 2 variables using 4 bit ints would be

    void a(int v) @property {
        value &= ~(0x7 << 4);
        value |= (v & 0x7) << 4;
    }

the second setter should be

    void b(int v) @property {
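
[For readers following the masking discussion, here is a hand-written sketch of the kind of accessors the mixin generates for two 4-bit fields sharing one byte. The member name value, the masks, and the casts are illustrative, not the actual generated code.]

    struct Pair
    {
        private ubyte value;

        // field "a": bits 0..3
        uint a() const @property { return value & 0xF; }
        void a(uint v) @property
        {
            value = cast(ubyte)((value & ~0xF) | (v & 0xF));
        }

        // field "b": bits 4..7
        uint b() const @property { return (value >> 4) & 0xF; }
        void b(uint v) @property
        {
            value = cast(ubyte)((value & ~(0xF << 4)) | ((v & 0xF) << 4));
        }
    }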

Re: Why must bitfields sum to a multiple of a byte?

2012-07-30 Thread Andrej Mitrovic
On 7/31/12, Era Scarecrow wrote: > It assumes the largest type we can currently use which is ulong Ah yes, it makes sense now. Thanks for the brain cereal. :p
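
[In other words, the packed fields live in a single integral member, and since ulong is the widest integral D offers, one bitfields!() instantiation tops out at 64 bits in total. A hedged sketch; the field names and widths are illustrative.]

    import std.bitmanip;

    struct Wide
    {
        mixin(bitfields!(
            ulong, "lo", 40,
            ulong, "hi", 24));   // 40 + 24 == 64: the whole mixin fits one ulong
        // Anything beyond 64 bits needs a second, separate mixin.
    }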

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread monarch_dodra
On Tuesday, 31 July 2012 at 00:44:16 UTC, Andrej Mitrovic wrote: On 7/31/12, Era Scarecrow wrote: It assumes the largest type we can currently use which is ulong Ah yes, it makes sense now. Thanks for the brain cereal. :p I saw your bug report: http://d.puremagic.com/issues/show_bug.cgi?i

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Andrej Mitrovic
On 7/31/12, monarch_dodra wrote: > The bug is only when the field is EXACTLY 32 bits BTW. bitfields > works quite nice with 33 or whatever. More details in the report. Yeah 32 or 64 bits, thanks for changing the title.

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow
On Tuesday, 31 July 2012 at 15:25:55 UTC, Andrej Mitrovic wrote: On 7/31/12, monarch_dodra wrote: The bug is only when the field is EXACTLY 32 bits BTW. bitfields works quite nice with 33 or whatever. More details in the report. Yeah 32 or 64 bits, thanks for changing the title. I wonder,

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Andrej Mitrovic
On 7/31/12, Era Scarecrow wrote: > I wonder, is it really a bug? If you are going to have it fill a > whole size it would fit anyways, why even put it in as a > bitfield? You could just declare it separately. I don't really know, I'm looking at this from a point of wrapping C++. I haven't used

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow
On Tuesday, 31 July 2012 at 16:48:37 UTC, Andrej Mitrovic wrote: On 7/31/12, Era Scarecrow wrote: I wonder, is it really a bug? If you are going to have it fill a whole size it would fit anyways, why even put it in as a bitfield? You could just declare it separately. I don't really know, I'm

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread monarch_dodra
On Tuesday, 31 July 2012 at 16:16:00 UTC, Era Scarecrow wrote: On Tuesday, 31 July 2012 at 15:25:55 UTC, Andrej Mitrovic wrote: On 7/31/12, monarch_dodra wrote: The bug is only when the field is EXACTLY 32 bits BTW. bitfields works quite nice with 33 or whatever. More details in the report.

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow
On Tuesday, 31 July 2012 at 16:59:11 UTC, monarch_dodra wrote: No, it's a bug. There is no reason for it to fail (and it certainly isn't a feature). If I made two fields in a 64bit bitfield, each 32bits int's I'd like it to complain; If it's calculated from something else then finding the pr

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Timon Gehr
On 07/31/2012 06:57 PM, Era Scarecrow wrote: On Tuesday, 31 July 2012 at 16:48:37 UTC, Andrej Mitrovic wrote: On 7/31/12, Era Scarecrow wrote: I wonder, is it really a bug? If you are going to have it fill a whole size it would fit anyways, why even put it in as a bitfield? You could just decl

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow
On Tuesday, 31 July 2012 at 17:17:43 UTC, Timon Gehr wrote: (Also IMO, the once-in-a-year wtf that is caused by accidentally assigning in an if condition does not justify special casing assignment expressions inside if conditions. Also, what is a useless compare?) I've noticed in my experi

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread monarch_dodra
On Tuesday, 31 July 2012 at 17:17:25 UTC, Era Scarecrow wrote: On Tuesday, 31 July 2012 at 16:59:11 UTC, monarch_dodra wrote: Maybe the user needs a 32 bit ulong? This way the ulong only takes 32 bits, but can still be implicitly passed to functions expecting ulongs. I would think the bug on
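
[A sketch of the "32-bit ulong" monarch_dodra describes. Note that this is exactly the 32-bit-width case that triggers the bug under discussion, so it only works once the fix is in; the names Packed, counter, consume are illustrative.]

    import std.bitmanip;

    struct Packed
    {
        mixin(bitfields!(
            ulong, "counter", 32,    // stored in 32 bits, read back as ulong
            uint,  "",        32));  // unnamed padding up to 64 bits
    }

    void consume(ulong x) {}

    void demo()
    {
        Packed p;
        p.counter = 123;
        consume(p.counter);   // no cast needed: the getter already returns ulong
    }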

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow
On Tuesday, 31 July 2012 at 17:34:33 UTC, monarch_dodra wrote: No, the bug shows itself if the first field is 32 bits, regardless of (ulong included). I would add though that requesting a field in bits that is bigger than the type of the field should not work (IMO). EG: struct A {
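
[To make the truncated example concrete, a hedged illustration of the distinction being drawn: a field's width has to fit in its declared type, and the case the post argues should be rejected is asking for more bits than the type can hold. The concrete widths below are illustrative.]

    import std.bitmanip;

    struct A
    {
        mixin(bitfields!(
            ubyte, "x",  6,     // 6 bits fits comfortably in an 8-bit type
            uint,  "",  26));   // padding to a 32-bit total
        // The questionable case would be something like  ubyte, "x", 12
        // (a 12-bit field on an 8-bit type), which the post argues the
        // template should refuse.
    }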

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow
On Tuesday, 31 July 2012 at 17:17:43 UTC, Timon Gehr wrote: This is obviously a mistake in the bitfield implementation. What else could be concluded from the error message: std\bitmanip.d(76): Error: shift by 32 is outside the range 0..31 Requesting a 32 bit or 64 bit member on the other ha
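
[The arithmetic behind that error message: the generated accessors build their mask with a shift, and shifting a 32-bit value by its full width is rejected by the compiler, so the mask has to be built in a wider type or special-cased. A rough sketch of the distinction, not the actual Phobos patch; Era's later message describes the real fix as a static if plus a change to the masking.]

    // enum uint badMask = (1u << 32) - 1;           // Error: shift by 32 is outside the range 0..31
    enum uint fullMask = cast(uint)((1uL << 32) - 1); // shift in ulong first, then narrow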

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Ali Çehreli
On 07/31/2012 09:15 AM, Era Scarecrow wrote: > On Tuesday, 31 July 2012 at 15:25:55 UTC, Andrej Mitrovic wrote: >> On 7/31/12, monarch_dodra wrote: >>> The bug is only when the field is EXACTLY 32 bits BTW. bitfields >>> works quite nice with 33 or whatever. More details in the report. >> >> Yeah

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Dmitry Olshansky
On 31-Jul-12 22:21, Era Scarecrow wrote: On Tuesday, 31 July 2012 at 17:17:43 UTC, Timon Gehr wrote: This is obviously a mistake in the bitfield implementation. What else could be concluded from the error message: std\bitmanip.d(76): Error: shift by 32 is outside the range 0..31 Requesting a

Re: Why must bitfields sum to a multiple of a byte?

2012-07-31 Thread Era Scarecrow
On Tuesday, 31 July 2012 at 20:41:55 UTC, Dmitry Olshansky wrote: On 31-Jul-12 22:21, Era Scarecrow wrote: Well curiously it was easier to fix than I thought (a line for a static if, and a modification of the masking)... Was there any other bugs that come to mind? Anything of consequence? Gre

Re: Why must bitfields sum to a multiple of a byte?

2012-08-01 Thread Era Scarecrow
On Tuesday, 31 July 2012 at 20:41:55 UTC, Dmitry Olshansky wrote: Great to see things moving. Could you please do a separate pull for bitfields it should get merged easier and it seems like a small but important bugfix. https://github.com/rtcvb32/phobos/commit/620ba57cc0a860245a2bf03f7b7f5d6a

Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread monarch_dodra
On Wednesday, 1 August 2012 at 07:24:09 UTC, Era Scarecrow wrote: On Tuesday, 31 July 2012 at 20:41:55 UTC, Dmitry Olshansky wrote: Great to see things moving. Could you please do a separate pull for bitfields it should get merged easier and it seems like a small but important bugfix. https

Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Era Scarecrow
On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote: I had an (implementation) question for you: Does the implementation actually require knowing what the size of the padding is? eg:

    struct A {
        int a;
        mixin(bitfields!(
            uint, "x", 2,
            int,  "y", 3,
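
[A hedged completion of the snippet being asked about, padded so the widths sum to a full storage unit; the trailing unnamed field is the hand-written padding whose necessity is the question here.]

    import std.bitmanip;

    struct A
    {
        int a;
        mixin(bitfields!(
            uint, "x", 2,
            int,  "y", 3,
            uint, "",  3));   // explicit padding bringing 2 + 3 + 3 up to 8 bits
    }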

Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread monarch_dodra
On Thursday, 2 August 2012 at 09:14:15 UTC, Era Scarecrow wrote: On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote: I had an (implementation) question for you: Does the implementation actually require knowing what the size of the padding is? eg: struct A { int a; mixin(bit

Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Andrei Alexandrescu
On 8/2/12 5:14 AM, Era Scarecrow wrote: On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote: I had an (implementation) question for you: Does the implementation actually require knowing what the size of the padding is? eg: struct A { int a; mixin(bitfields!( uint, "x", 2, int, "y",

Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Andrei Alexandrescu
On 8/2/12 5:26 AM, monarch_dodra wrote: One of the *big* reasons I'm against having a hand chosen padding, is that the implementation *should* be able to find out what the most efficient padding is on the current machine (could be 32 on some, could be 64 on some) In my neck of the woods they ca

Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Era Scarecrow
On Thursday, 2 August 2012 at 12:35:20 UTC, Andrei Alexandrescu wrote: Please don't. The effort on the programmer side is virtually nil, and keeps things in check. In no case would the use of bitfields() be so intensive that the bloat of one line gets any significance. > If you're using a t

Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread monarch_dodra
On Thursday, 2 August 2012 at 12:38:10 UTC, Andrei Alexandrescu wrote: On 8/2/12 5:26 AM, monarch_dodra wrote: One of the *big* reasons I'm against having a hand chosen padding, is that the implementation *should* be able to find out what the most efficient padding is on the current machine (co

Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Andrei Alexandrescu
On 8/2/12 9:48 AM, monarch_dodra wrote: On Thursday, 2 August 2012 at 12:38:10 UTC, Andrei Alexandrescu wrote: On 8/2/12 5:26 AM, monarch_dodra wrote: One of the *big* reasons I'm against having a hand chosen padding, is that the implementation *should* be able to find out what the most efficie

Re: Why must bitfields sum to a multiple of a byte?

2012-08-02 Thread Era Scarecrow
On Thursday, 2 August 2012 at 14:52:58 UTC, Andrei Alexandrescu wrote: On 8/2/12 9:48 AM, monarch_dodra wrote: By forcing the developer to chose the bitfield size (32 or 64), you ARE forcing him to make a choice dependent on the machine's characteristics. I think that's backwards. I think