Github user zsxwing commented on the pull request:

    https://github.com/apache/spark/pull/6852#issuecomment-112722654
  
    > There are certainly some memory barriers in between the writes and reads 
    > without this.
    
    For these tests, there is no memory barrier because the checks run 
    immediately after `ssc.start()`.
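    
    To make the hazard concrete, here is a minimal sketch of the pattern (with 
    hypothetical names, not this PR's actual test code): one thread mutates an 
    unsynchronized map while the main thread reads it right away, and nothing 
    establishes a happens-before edge between the two.
    
    ```scala
    import scala.collection.mutable
    
    object VisibilityRaceSketch {
      // Shared, unsynchronized state: no lock, no volatile field, no happens-before edge.
      private val results = mutable.HashMap[String, Int]()
    
      def main(args: Array[String]): Unit = {
        // Stands in for work kicked off on another thread (e.g. after `ssc.start()`).
        val writer = new Thread(new Runnable {
          override def run(): Unit = { results("count") = 1 }
        })
        writer.start()
    
        // Reading immediately on the main thread: nothing guarantees this observes
        // the write above, and touching the HashMap while it is being mutated is
        // itself unsafe.
        println(results.get("count"))
    
        writer.join()
      }
    }
    ```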
    
    And one potential issue is that writing and reading a `java.util.HashMap` 
    concurrently may cause an infinite loop: 
    http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6423457
    
    > If a HashMap is used in a concurrent setting with insufficient 
    > synchronization, it is possible for the data structure to get corrupted in 
    > such a way that infinite loops appear in the data structure and thus get() 
    > could loop forever.
    
    Of course, I'm not sure if `scala.collection.mutable.HashMap` has a similar 
    issue.
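    
    If the map really does need to be shared across threads, a minimal sketch of 
    the usual fixes (a suggestion on my side, not something this PR already does) 
    is to either route every access through the same lock or switch to a 
    concurrent map:
    
    ```scala
    import java.util.concurrent.ConcurrentHashMap
    
    import scala.collection.mutable
    
    object SafeSharedMapSketch {
      // Option 1: keep scala.collection.mutable.HashMap, but route every access
      // through the same lock so readers and writers never overlap.
      private val lockedMap = mutable.HashMap[String, Int]()
    
      def putLocked(key: String, value: Int): Unit =
        lockedMap.synchronized { lockedMap(key) = value }
    
      def getLocked(key: String): Option[Int] =
        lockedMap.synchronized { lockedMap.get(key) }
    
      // Option 2: use a ConcurrentHashMap, which supports simultaneous readers
      // and writers and is not subject to JDK-6423457.
      private val concurrentMap = new ConcurrentHashMap[String, Integer]()
    
      def putConcurrent(key: String, value: Int): Unit =
        concurrentMap.put(key, value)
    
      def getConcurrent(key: String): Option[Int] =
        Option(concurrentMap.get(key)).map(_.intValue())
    
      def main(args: Array[String]): Unit = {
        putLocked("a", 1)
        putConcurrent("b", 2)
        println((getLocked("a"), getConcurrent("b")))  // (Some(1),Some(2))
      }
    }
    ```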

