Github user steveloughran commented on the issue:

    https://github.com/apache/spark/pull/15648
  
    LGTM, as the javadocs say *If true check only for Active NNs status, else 
check first NN's status*. But I don't know enough about HDFS HA to be sure.
    
    It'll check the first NN; if that is on standby *and stale reads are not 
allowed*, it'll log at error (HDFS-3477 proposes downgrading that) and throw an 
exception with the URL 
[https://s.apache.org/sbnn-error](https://s.apache.org/sbnn-error). If someone 
sets `dfs.ha.allow.stale.reads` then they get the standby's (possibly stale) 
safe mode state; there's nothing that can be done there.
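    
    For reference, a minimal sketch of the kind of probe under discussion (the 
helper name is mine, not the PR's actual code): it calls 
`DistributedFileSystem.setSafeMode` with `SAFEMODE_GET` and `isChecked = true`, 
so only the active NN's status is consulted.
    
    ```scala
    import org.apache.hadoop.fs.FileSystem
    import org.apache.hadoop.hdfs.DistributedFileSystem
    import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction
    
    // Illustrative helper: true if the active NameNode is in safe mode.
    // With isChecked = false, the first NN answers instead, which may be a
    // standby unless dfs.ha.allow.stale.reads is set.
    def isInSafeMode(fs: FileSystem): Boolean = fs match {
      case dfs: DistributedFileSystem =>
        dfs.setSafeMode(SafeModeAction.SAFEMODE_GET, true)
      case _ => false // non-HDFS filesystems have no safe mode concept
    }
    ```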
    
    Where my knowledge of HDFS HA fails is what happens then: does the RPC 
client try another NN, or does it just fail? Maybe @liuml07 could assist there.
    
    The method went in with Hadoop 2.0.3-alpha in 
[HDFS-3507](https://issues.apache.org/jira/browse/HDFS-3507), so it is available 
across the whole of the Hadoop 2.x line. The enum used did change in 2015 with 
HDFS-4015, which added `SAFEMODE_FORCE_EXIT`; that action must be avoided. 
Luckily, the history server isn't trying to exit safe mode.
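    
    To make the compatibility point concrete, a sketch of the enum values as I 
understand them (the version annotations are mine, not from the javadocs):
    
    ```scala
    import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction
    
    // Only SAFEMODE_GET is needed for a read-only probe, and it has been
    // present since 2.0.3-alpha (HDFS-3507), so the 2015 addition is moot here.
    val probe = SafeModeAction.SAFEMODE_GET
    // SafeModeAction.SAFEMODE_ENTER      -- ask the NN to enter safe mode
    // SafeModeAction.SAFEMODE_LEAVE      -- ask the NN to leave safe mode
    // SafeModeAction.SAFEMODE_FORCE_EXIT -- added by HDFS-4015; avoid
    ```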

