The compiler's reordering generally DOES NOT depend on the hardware. 
Optimizations that result in reordering generally occur well before 
instruction selection, and will happen in the same way for different 
hardware architectures. E.g. on x86, PowerPC, and ARM alike, HotSpot, gcc, 
and clang will all frequently reorder two stores, two loads, and any pair 
of loads and stores, as long as there is nothing that explicitly prevents 
them from doing so. So forget about any sort of "hardware model" when 
thinking about the order you can expect.
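
For example, here is a minimal Java sketch (the class and field names are 
made up for illustration) of two plain stores with no dependency between 
them; under the JMM the JIT is free to reorder them, and a racing reader on 
another thread may observe them in either order:

// Hypothetical example: two independent plain stores that the compiler
// (and the hardware) may legally reorder.
class Publisher {
    int data;
    boolean ready;              // plain field: no ordering guarantee
    // volatile boolean ready;  // declaring it volatile would prevent the
                                // store to 'ready' from moving before 'data'

    void publish() {
        data = 42;
        ready = true;           // may become visible before 'data = 42'
    }
}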

The simple rule is "assume nothing". If a reordering is not specifically 
prohibited, assume it will happen. If you assume otherwise, you are likely 
to be unpleasantly surprised.

As for "stupid" reorderings, stupidity is in the eye of the beholder. 
"Surprising" sequence-breaking reordering that you may not see an immediate 
or obvious reason for may be beneficial in many ways.

E.g. take the following simple loop:

int[] a, b, c;
...
for (int i = 0; i < a.length; i++) {
    a[i] = b[i] + c[i];
}

You can certainly expect (due to causality) loads from b[i] to occur before 
stores to a[i]. But is it reasonable to expect loads of b[i+1] to happen 
AFTER stores to a[i]? After all, that's the order of operations in the 
program, right? Would the JVM be "stupid" to reorder things such that some 
loads from b[i+1] occur before some stores to a[i]?

A simple optimization which most compilers will hopefully apply to the 
above loop is to use vector operations (SSE, AVX, etc.) on processors 
capable of them, coupled with loop unrolling. E.g. in practice, the bulk of 
the loop will execute on 8 slots at a time on modern AVX2 x86 CPUs, and 
multiple such 8-slot operations could be in flight at the same time (due 
both to the compiler unrolling the loop and to the processor aggressively 
doing OOOE even when the compiler doesn't unroll anything). The loads from 
one such operation are absolutely allowed to [and even likely to] occur 
before stores that appear earlier in the instruction stream (yes, even on 
x86, but also because the compiler may jumble them any way it wants in the 
unrolling). There is nothing "stupid" about that, and we should all hope 
that both the compiler and the hardware will feel free to jumble that order 
to get the best speed for this loop... 
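
To make that concrete, here is a hand-unrolled version of the loop above 
(purely illustrative; it is not what the JIT actually emits, but it shows 
the effect): all of a 4-iteration block's loads are issued before any of 
that block's stores, so loads of b[i+1]..b[i+3] happen before the store 
to a[i]:

// Illustrative hand-unrolling (not actual JIT output): the loads for four
// iterations are performed before any of the four stores.
for (int i = 0; i + 3 < a.length; i += 4) {
    int b0 = b[i],     c0 = c[i];
    int b1 = b[i + 1], c1 = c[i + 1];
    int b2 = b[i + 2], c2 = c[i + 2];
    int b3 = b[i + 3], c3 = c[i + 3];
    a[i]     = b0 + c0;   // this store happens after the b[i+1..i+3] loads
    a[i + 1] = b1 + c1;
    a[i + 2] = b2 + c2;
    a[i + 3] = b3 + c3;
}
// (a remainder loop would handle the last a.length % 4 elements)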

Even without vectorizing or loop unrolling by the compiler, the CPU is free 
to reorder things in many ways. E.g. if one operation misses in the cache 
and the next (in logical "i" sequence order) hits in the cache, there is no 
reason for order to be maintained between the earlier store and the 
subsequent load. Now imagine that happening in a processor that can juggle 
72 in-flight loads and 42 in-flight stores at the same time (e.g. a Haswell 
core), and you will quickly realize that any expectation of an order or 
sequence of memory accesses that is not explicitly required should be left 
at the door.
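
When a particular order IS required (e.g. when publishing data to another 
thread), it has to be requested explicitly. A minimal sketch, assuming 
Java 9+ and VarHandle fences (the fields here are made up for illustration):

import java.lang.invoke.VarHandle;

// Hypothetical example of explicitly requiring an order: the storestore
// fence keeps the two plain stores from being reordered with each other,
// by either the JIT or the hardware.
class OrderedPublisher {
    int data;
    boolean ready;

    void publish() {
        data = 42;
        VarHandle.storeStoreFence();  // stores before the fence will not be
                                      // reordered with stores after it
        ready = true;
    }
}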


On Monday, January 16, 2017 at 4:49:39 PM UTC-5, Francesco Nigro wrote:
>
> This is indeed what I was expecting... while other archs (PowerPC, tons 
> of ARMs and the legendary DEC Alpha) are allowed to be pretty creative in 
> matters of reordering... And that's the core of my question: how much can 
> a developer rely on the compiler (or the underlying HW) respecting the 
> memory accesses he has put into the code, without using any fences? Is 
> the answer really "it depends on the compiler/architecture"? Or are there 
> common high-level patterns respected by most compilers/architectures?
>
> On Mon, Jan 16, 2017 at 22:14, Vitaly Davidovich <vit...@gmail.com> wrote:
>
>> Depends on which hardware.  For instance, x86/64 is very specific about 
>> what memory operations can be reordered (for cacheable operations), and two 
>> stores aren't reordered.  The only reordering is stores followed by loads, 
>> where the load can appear to reorder with the preceding store.
>>
>> On Mon, Jan 16, 2017 at 4:02 PM Dave Cheney <da...@cheney.net> wrote:
>>
>>> Doesn't hardware already reorder memory writes along 64 byte boundaries? 
>>> They're called cache lines. 
>>>
>>>
>>> Dave
>>>
>>>
>>>
>>> On Tue, 17 Jan 2017, 05:35 Tavian Barnes <tavia...@gmail.com> wrote:
>>>
>>>> On Monday, 16 January 2017 12:38:01 UTC-5, Francesco Nigro wrote:
>>>>>
>>>>> I'm missing something for sure, because if it were true, any 
>>>>> (single-threaded) "protocol" that relies on the order of writes/loads 
>>>>> against (non-mapped) ByteBuffers to be fast (i.e. sequential writes 
>>>>> rock :P) risks not seeing that order respected, unless it uses 
>>>>> patterns that force the compiler to block the re-ordering of such 
>>>>> instructions (Sci-Fi hypothesis).
>>>>>
>>>>
>>>> I don't think you're missing anything.  The JVM would be stupid to 
>>>> reorder your sequential writes into random writes, but it's perfectly 
>>>> within its rights to do so for a single-threaded program according to 
>>>> the JMM, as long as it respects data dependencies (AFAIK).  Of course, 
>>>> that would be a huge quality-of-implementation issue, but that's an 
>>>> entirely separate class from correctness issues.
>>>>  
>>>>
>>>>> with great regards,
>>>>> Francesco