Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8218#discussion_r37155642
  
    --- Diff: network/shuffle/src/test/java/org/apache/spark/network/sasl/SaslIntegrationSuite.java ---
    @@ -160,6 +164,111 @@ public void testNoSaslServer() {
         }
       }
     
    +  /**
    +   * This test is not actually testing SASL behavior, but testing that the shuffle service
    +   * performs correct authorization checks based on the SASL authentication data.
    +   */
    +  @Test
    +  public void testAppIsolation() throws Exception {
    +    // Start a new server with the correct RPC handler to serve block data.
    +    ExternalShuffleBlockResolver blockResolver = mock(ExternalShuffleBlockResolver.class);
    +    ExternalShuffleBlockHandler blockHandler = new ExternalShuffleBlockHandler(
    +      new OneForOneStreamManager(), blockResolver);
    +    TransportServerBootstrap bootstrap = new SaslServerBootstrap(conf, secretKeyHolder);
    +    TransportContext blockServerContext = new TransportContext(conf, blockHandler);
    +    TransportServer blockServer = blockServerContext.createServer(Arrays.asList(bootstrap));
    +
    +    TransportClient client1 = null;
    +    TransportClient client2 = null;
    +    TransportClientFactory clientFactory2 = null;
    +    try {
    +      // Create a client, and make a request to fetch blocks from a different app.
    +      clientFactory = blockServerContext.createClientFactory(
    +        Lists.<TransportClientBootstrap>newArrayList(
    +          new SaslClientBootstrap(conf, "app-1", secretKeyHolder)));
    +      client1 = clientFactory.createClient(TestUtils.getLocalHost(),
    +        blockServer.getPort());
    +
    +      final AtomicBoolean result = new AtomicBoolean(false);
    +
    +      BlockFetchingListener listener = new BlockFetchingListener() {
    +        @Override
    +        public synchronized void onBlockFetchSuccess(String blockId, ManagedBuffer data) {
    +          notifyAll();
    +        }
    +
    +        @Override
    +        public synchronized void onBlockFetchFailure(String blockId, Throwable exception) {
    +          result.set(exception.getMessage().contains(SecurityException.class.getName()));
    +          notifyAll();
    +        }
    +      };
    +
    +      String[] blockIds = new String[] { "shuffle_2_3_4", "shuffle_6_7_8" 
};
    +      OneForOneBlockFetcher fetcher = new OneForOneBlockFetcher(client1, 
"app-2", "0",
    +        blockIds, listener);
    +      synchronized (listener) {
    +        fetcher.start();
    +        listener.wait();
    +      }
    +      assertTrue("Should have failed to fetch blocks from non-authorized 
app.", result.get());
    +
    +      // Register an executor so that the next steps work.
    +      ExecutorShuffleInfo executorInfo = new ExecutorShuffleInfo(
    +        new String[] { System.getProperty("java.io.tmpdir") }, 1,
    +        "org.apache.spark.shuffle.sort.SortShuffleManager");
    +      RegisterExecutor regmsg = new RegisterExecutor("app-1", "0", 
executorInfo);
    +      client1.sendRpcSync(regmsg.toByteArray(), 10000);
    +
    +      // Make a successful request to fetch blocks, which creates a new stream. But do not actually
    +      // fetch any blocks, to keep the stream open.
    +      result.set(false);
    --- End diff ---
    
    I think you can delete this; it's not checked between here and the next `result.set(false)`.
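    
    For illustration, here is a minimal standalone sketch (hypothetical class name, not code
    from this PR) of the pattern being flagged: a reset of the flag that nothing reads before
    the next reset is a dead store and can be dropped without changing the test's behavior.
    
        import java.util.concurrent.atomic.AtomicBoolean;
    
        public class DeadResetSketch {
          public static void main(String[] args) {
            AtomicBoolean result = new AtomicBoolean(false);
    
            result.set(true);    // value already consumed by the earlier assertTrue(...)
    
            result.set(false);   // flagged line: nothing calls result.get() before...
            // ... intermediate steps that never read `result` ...
            result.set(false);   // ...this later reset, so the line above is redundant
    
            System.out.println(result.get());  // prints "false" with or without the flagged line
          }
        }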

