There are many ways in which the desired order may be achieved, and you need 
to examine all code that interacts with the state elements to reason about 
how (and if) ordering is enforced. Removing the "synchronized" alone (in 
some seemingly working code) is "always a bad idea". But that doesn't make 
it "required". The question is: what scheme would you replace it with...

In your specific example above, there is actually no ordering question, 
because your writeTask() operation doesn't actually observe the state 
changed by connection.configureBlocking(false);. It comes close, but the 
fact that nothing is done with the return value of isBlocking() means that 
you have nothing to ask an ordering question about. [The entire call to 
isBlocking() is dead code, and will be legitimately eliminated by JIT 
compilers after inlining.] However, if you change the example slightly such 
that writeTask() propagated the value of isBlocking() somewhere (e.g. to 
some static volatile boolean), we'd have a question to deal with. 
So let's assume you did that...
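
For concreteness, a minimal sketch of such a modified writeTask() could look 
like this (the lastObservedBlocking field name is made up for illustration):

static volatile boolean lastObservedBlocking;

void writeTask(SocketChannel s) {
  // The observed blocking state now escapes to another memory location, so
  // the isBlocking() call is no longer dead code, and there is a real
  // ordering question to ask about it.
  lastObservedBlocking = s.isBlocking();
}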

In the specific case of the SocketChannel implementation and the example 
above, the modification of the SocketChannel-internal blocking state in 
connection.configureBlocking(false); is guaranteed to be visible to the 
potential observation of the same state by the writeTask() operation run by 
some executor thread, because *both* configureBlocking() and isBlocking() 
use a synchronized block around the access to the "blocking" field (e.g. at 
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/nio/channels/spi/AbstractSelectableChannel.java#AbstractSelectableChannel.isBlocking%28%29).
 
Without the use of synchronized in isBlocking(), the use of synchronized in 
configureBlocking() wouldn't make a difference.
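
For reference, isBlocking() in that class looks roughly like this 
(paraphrased from the OpenJDK source linked above):

public final boolean isBlocking() {
  // The read of "blocking" is guarded by the same regLock monitor that
  // configureBlocking() writes it under; that shared monitor is what
  // establishes the ordering between the two threads.
  synchronized (regLock) {
    return blocking;
  }
}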

There are many other ways that this ordering guarantee could be achieved. 
E.g. (for this specific sequence of T1: blocking = false; enqueue the 
writeTask() operation; and T2: start the writeTask() operation; writeTask() 
returns the value of blocking) making the SocketChannel-internal blocking 
field volatile would also ensure that the writeTask() operation above would 
return blocking = false. And many other means of ordering these are possible.
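
In sketch form, such a hypothetical volatile-based variant of the channel 
internals (not the actual JDK code) might look like:

private volatile boolean blocking;

public final SelectableChannel configureBlocking(boolean block)
    throws IOException {
  // Volatile write: together with the volatile read in isBlocking(), this
  // orders the write with any later read of the field by another thread.
  blocking = block;
  return this;
}

public final boolean isBlocking() {
  return blocking; // volatile read; no lock needed for the ordering
}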

As to your question about the possibility of "skipping" some write 
operations: write operations are never actually "skipped" (they will 
eventually happen). But in situations where a write is followed by a 
subsequent over-writing write to the same field, the code can legitimately 
act as if it "ran fast enough that no one was able to observe the 
intermediate state", and simply execute the last write. The CPU can do this. 
The compiler can do this. And a thread running fast enough can do this. It 
is important to understand that this is true even when synchronization and 
other ordering operations exist. E.g. the following sequence:

synchronized(regLock) {
  blocking = false;
}
synchronized(regLock) {
  blocking = true;
}
synchronized(regLock) {
  blocking = false;
}

Can be legitimately executed as:

synchronized(regLock) {
  blocking = false;
}

And the sequence:

volatile int foo;
...
foo = 1;
foo = 2;
foo = 3;

can (and will) be legitimately executed as:

foo = 3;

Ordering questions only come into play if you put other memory-interacting 
things in the middle, between those writes. Then questions about whether or 
not those other things can be re-ordered with the writes come up. Sometimes 
the rules prevent such re-ordering (forcing the actual intermediate writes 
to be executed), and sometimes the rules allow re-ordering (allowing, e.g., 
writes in loops to be pushed to happen only once when the loop completes). 
In general, in Java, unless some of the "other thing" players are volatiles, 
atomics, or synchronized blocks, any reordering is allowed as long as it 
does not change the eventual meaning of the computations in the sequence.
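
A common example where the rules do prevent such re-ordering is publication 
through a volatile field sitting "in the middle" (the field names below are 
made up for illustration):

class Publisher {
  int data;               // plain field
  volatile boolean ready; // the volatile "other thing" in the middle

  void writer() {
    data = 42;            // plain write
    ready = true;         // volatile write: the plain write above cannot be
                          // re-ordered to after it
  }

  void reader() {
    if (ready) {          // volatile read
      int d = data;       // guaranteed to observe 42 here
    }
  }
}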


On Saturday, March 10, 2018 at 8:13:53 AM UTC-8, John Hening wrote:
>
> ok, reordering is not a good idea to consider here. But please note that 
> if configureBlocking wasn't synchronized, then the statement:
>
> blocking = block
>
> could be "skipped" at the compilation level, because the JMM doesn't 
> guarantee that every write to memory will be "committed" to main memory. 
> A synchronized method / a memory barrier would solve that problem. What 
> do you think?
>
>
> On Friday, March 9, 2018 at 11:20:37 PM UTC+1, John Hening wrote:
>>
>>
>>     executor = Executors.newFixedThreadPool(16);
>>     while(true) {
>>         SocketChannel connection = serverSocketChannel.accept();
>>         connection.configureBlocking(false);
>>         executor.execute(() -> writeTask(connection)); 
>>     }
>>     void writeTask(SocketChannel s){
>>         s.isBlocking();
>>     }
>>
>>     public final SelectableChannel configureBlocking(boolean block) 
>> throws IOException
>>     {
>>         synchronized (regLock) {
>>             ...
>>             blocking = block;
>>         }
>>         return this;
>>     }
>>
>>
>>
>> We see the following situation: the main thread is setting 
>> connection.configureBlocking(false), and another thread (launched by the 
>> executor) is reading that. So it looks like a data race.
>>
>> My question is:
>>
>> 1. Here configureBlocking is synchronized, so it behaves as a memory 
>> barrier. It means that the code is OK - even if reading/writing the 
>> blocking field is not synchronized, reading/writing a boolean is atomic.
>>
>> 2. What if configureBlocking wasn't synchronized? What about such a 
>> situation? I think it would be necessary to emit a memory barrier, because 
>> it is theoretically possible that setting the blocking field could be 
>> reordered.
>>
>> Am I right?
>>
>
