On 3/21/2011 11:58 AM, dsimcha wrote:
== Quote from dsimcha (dsim...@yahoo.com)'s article
== Quote from Michel Fortin (michel.for...@michelf.com)'s article
On second thought, no, but for practical, not theoretical reasons:
One, you can't introspect whether a foreach loop is using a ref or a
value parameter.  This is an issue with how opApply works.
Indeed a problem. Either we fix the compiler to support that, or we
change the syntax to something like this:
        taskPool.apply(range, (ref int value) {
                ...
        });
Or we leave things as they are.
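
As an illustration (just a sketch, with a made-up Wrapper container): both loop
forms below lower to a call to the same opApply overload, whose delegate
parameter is ref either way, so there's nothing for the container to introspect.

        struct Wrapper
        {
            int[] data;

            // The delegate parameter is ref no matter how the caller wrote
            // the foreach, so opApply can't tell the two forms apart.
            int opApply(int delegate(ref int) dg)
            {
                foreach (ref x; data)
                    if (auto r = dg(x))
                        return r;
                return 0;
            }
        }

        void main()
        {
            auto w = Wrapper([1, 2, 3]);
            foreach (x; w)     {}  // value syntax
            foreach (ref x; w) {}  // ref syntax -- same opApply call either way
        }
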
Two, AFAIK there's no way to get the native word size.
Right. That's a problem too... you could probably alleviate this by
doing a runtime check with some fancy instructions to get the native
word size, but I'd expect that to be rather convoluted.
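
The closest portable approximation I know of is the compile-time pointer width,
which isn't really the same thing -- a 32-bit binary on a 64-bit CPU still sees 4:

        // Rough approximation only: size_t reflects the pointer width the
        // program was compiled for, not the processor's true native word size.
        enum approxWordSize = size_t.sizeof;
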
I'd like to check if I understand that well. For instance this code:
        int[100] values;
        foreach (i, ref value; parallel(values))
                value = i;
would normally run fine on a 32-bit processor, but it'd create a
low-level race on a 64-bit processor (even a 64-bit processor running
a 32-bit program in 32-bit compatibility mode). And even that is a
generalization, some 32-bit processors out there *might* have 64-bit
native words. So the code above isn't portable. Is that right?
Which makes me think... we need to document those pitfalls somewhere.
Perhaps std.parallelism's documentation should link to a related page
about what you can and what you can't do safely. People who read that
"all the safeties are off" in std.parallelism aren't going to
understand what you're talking about unless you explain the pitfalls
with actual examples (like the one above).
This problem is **much** less severe than you are suggesting.  x86 can address
single bytes, so it's not a problem even if you're iterating over bytes on a
64-bit machine.  CPUs like Alpha (which no D compiler even exists for) can't
natively address individual bytes.  Therefore, writing to a byte would be
implemented much like writing to a bit is on x86:  You'd read the full word in,
change one byte, write the full word back.  I'm not sure exactly how it would be
implemented at the compiler level, or whether you'd even be allowed to have a
reference to a byte in such an implementation.  This is why I consider this more a
theoretical problem than a serious practical issue.
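
To make that concrete, here's a sketch (the array and loop are purely
illustrative): on x86 each assignment below is a single byte store, so
neighbouring iterations can't disturb each other; on hardware without byte
stores, the compiler would have to read the containing word, change one byte,
and write the whole word back, and two threads doing that to adjacent bytes
could race.

        import std.parallelism;

        void main()
        {
            ubyte[100] flags;
            // Each iteration writes one byte; whether that's a single store
            // or a word-sized read-modify-write depends on the hardware.
            foreach (i, ref flag; parallel(flags[]))
                flag = 1;
        }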

Actually, just remembered that word tearing is also an issue with unaligned memory
access.  I guess I could just include a warning that says not to do this with
unaligned memory.
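
Something like this sketch (buffer and offset purely illustrative) is what the
warning would cover: an int placed at an odd address can straddle a natural word
boundary, and on some hardware the store becomes two separate writes, so a
concurrent reader could see a half-updated value.

        void main()
        {
            ubyte[64] buffer;
            // Deliberately misaligned: this int may cross a word boundary,
            // so the store below might not happen as a single atomic write.
            int* misaligned = cast(int*) (buffer.ptr + 1);
            *misaligned = 0x01020304;
        }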

s/is/may be/.  I'm trying to read up on word tearing and post questions here and on
StackOverflow.  Good documentation about it seems ridiculously hard to come by.  About
the only solid pieces of information I've found are that x86 definitely **can** do
byte-granularity stores (which doesn't mean it always does) and that Java guarantees
the absence of word tearing (which is only useful if you're writing in Java).  The
general gut feeling people here and on StackOverflow seem to have is that it's not an
issue, but there doesn't seem to be much evidence backing this up.

Maybe I'm just being paranoid and this is a complete non-issue except on the most obscure/ancient hardware and/or most stupidly implemented compilers, which is why few people even think of it.
