[ https://issues.apache.org/jira/browse/HDFS-9924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15330715#comment-15330715 ]

Xiaobing Zhou edited comment on HDFS-9924 at 6/14/16 10:01 PM:
---------------------------------------------------------------

bq. In my experience, a bunch of rigging (threads) for polling, rather than notification, is required when all you have is a Future to work with.

This is not true; there is no need for threaded polling. You can look at TestDiskBalancerCommand#testConcurrentAsyncRename for a simple usage pattern, e.g.:
{code}
    // issue all renames asynchronously; when the async call limit is hit,
    // drain the finished calls and retry (no polling threads are involved)
    Map<Integer, Future<Void>> retFutures =
        new HashMap<Integer, Future<Void>>();
    int start = 0;
    int end = 0;
    for (int i = 0; i < NUM_TESTS; i++) {
      for (;;) {
        try {
          Future<Void> retFuture =
              adfs.rename(srcs[i], dsts[i], Rename.OVERWRITE);
          retFutures.put(i, retFuture);
          break;
        } catch (AsyncCallLimitExceededException e) {
          /**
           * reached limit of async calls, fetch results of finished async calls
           * to let follow-on calls go
           */
          start = end;
          end = i;
          waitForReturnValues(retFutures, start, end);
        }
      }
    }
    // wait for the remaining outstanding calls
    waitForReturnValues(retFutures, end, NUM_TESTS);

  void waitForReturnValues(final Map<Integer, Future<Void>> retFutures,
      final int start, final int end)
      throws InterruptedException, ExecutionException {
    for (int i = start; i < end; i++) {
      retFutures.get(i).get();
    }
  }
{code}
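To make the pattern concrete outside of the HDFS test, below is a minimal, self-contained sketch of the same issue-then-drain approach using only java.util.concurrent. It is illustrative only: a plain ExecutorService stands in for the async API, and the names (FutureDrainSketch, NUM_TASKS, CALL_LIMIT) are invented for this sketch. The point is the same as above: futures are collected as the calls are issued and drained with ordered get() calls, so no extra polling threads are needed.
{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDrainSketch {
  // illustrative constants, not part of any HDFS API
  private static final int NUM_TASKS = 100;
  private static final int CALL_LIMIT = 10;

  public static void main(String[] args)
      throws InterruptedException, ExecutionException {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    Map<Integer, Future<Void>> futures = new HashMap<Integer, Future<Void>>();
    int start = 0;
    int end = 0;
    for (int i = 0; i < NUM_TASKS; i++) {
      if (i - end >= CALL_LIMIT) {
        // too many outstanding calls: wait for the earlier ones before
        // issuing more, mirroring the AsyncCallLimitExceededException branch
        start = end;
        end = i;
        waitForReturnValues(futures, start, end);
      }
      final int id = i;
      futures.put(i, pool.submit(new Callable<Void>() {
        @Override
        public Void call() {
          // stand-in for an asynchronous rename
          System.out.println("task " + id + " done");
          return null;
        }
      }));
    }
    // wait for the remaining outstanding calls
    waitForReturnValues(futures, end, NUM_TASKS);
    pool.shutdown();
  }

  static void waitForReturnValues(Map<Integer, Future<Void>> futures,
      int start, int end) throws InterruptedException, ExecutionException {
    for (int i = start; i < end; i++) {
      futures.get(i).get();
    }
  }
}
{code}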


> [umbrella] Asynchronous HDFS Access
> -----------------------------------
>
>                 Key: HDFS-9924
>                 URL: https://issues.apache.org/jira/browse/HDFS-9924
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: fs
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Xiaobing Zhou
>         Attachments: AsyncHdfs20160510.pdf
>
>
> This is an umbrella JIRA for supporting Asynchronous HDFS Access.
> Currently, all the API methods are blocking calls -- the caller is blocked 
> until the method returns.  It is very slow if a client makes a large number 
> of independent calls in a single thread since each call has to wait until the 
> previous call is finished.  It is inefficient if a client needs to create a 
> large number of threads to invoke the calls.
> We propose adding a new API to support asynchronous calls, i.e. the caller is 
> not blocked.  The methods in the new API immediately return a Java Future 
> object.  The return value can be obtained by the usual Future.get() method.


