[jclouds-java-7-pull-requests
#1165](https://jclouds.ci.cloudbees.com/job/jclouds-java-7-pull-requests/1165/)
SUCCESS
This pull request looks good
---
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/214#issuecomment-38656006
}
- private boolean parentIsFolder(final ListContainerOptions options, final StorageMetadata md) {
-    return options.getDir() != null && md.getName().indexOf('/') == -1;
+ private void waitForCompletion(final Semaphore semaphore,
+       final Set<ListenableFuture<Void>>
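For context, a helper with that shape could be a bounded wait on the semaphore. The sketch below is an assumption rather than the patch itself: it presumes a maxTime field (in milliseconds) and that each completed delete releases one permit.

```java
// Minimal sketch: wait up to maxTime for any outstanding delete to
// release a permit; if none does, cancel everything still in flight.
private void waitForCompletion(final Semaphore semaphore,
      final Set<ListenableFuture<Void>> outstandingFutures)
      throws InterruptedException, TimeoutException {
   if (!semaphore.tryAcquire(maxTime, TimeUnit.MILLISECONDS)) {
      for (ListenableFuture<Void> future : outstandingFutures) {
         future.cancel(true);
      }
      throw new TimeoutException("no delete completed within " + maxTime + " ms");
   }
}
```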
I committed to master with some Checkstyle, Javadoc, and indentation fixes.
Let's address some of the TODOs and test more before backporting to 1.7.x.
Thank you for your contribution, @shrinandj! The previous implementation has
troubled us for some years now.
---
Reply to this email directly or view it on GitHub:
Closed #214.
---
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/214
+ final String fullPath = parentIsFolder(options, md) ? options.getDir()
+       + '/' + md.getName() : md.getName();
+
+ // Attempt to acquire a semaphore within the time limit. At least
+ // one outstanding future should complete within this period for
+ // a permit to become available.
+ switch (md.getType()) {
+    case BLOB:
+       blobDelFuture = executorService.submit(new Callable<Void>() {
+          @Override
+          public Void call() {
+             blobStore.removeBlob(containerName, fullPath);
+             return null;
+          }
+       });
+ // If a future to delete a blob/directory actually got created above,
+ // keep a reference of that in the outstandingFutures list. This is
+ // useful in case of a timeout exception. All outstanding futures can
+ // then be cancelled.
+ if (blobDelFuture != null) {
+    outstandingFutures.add(blobDelFuture);
+ }
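The bookkeeping that pairs with this could be a completion callback that drops the tracking reference and returns the semaphore permit. The wiring below, using Guava's Futures.addCallback with a FutureCallback, is a sketch of that idea, not necessarily what the patch does; it assumes blobDelFuture is final so the anonymous class can capture it.

```java
// Hypothetical completion wiring: when the delete settles (success or
// failure), stop tracking the future -- which also lets it be GC'd --
// and release a permit so another request can be issued.
Futures.addCallback(blobDelFuture, new FutureCallback<Void>() {
   @Override
   public void onSuccess(Void result) {
      done();
   }

   @Override
   public void onFailure(Throwable t) {
      done();
   }

   private void done() {
      outstandingFutures.remove(blobDelFuture);
      semaphore.release();
   }
});
```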
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.inject.Inject;
/**
* Deletes all keys in the container
- *
+ *
* @author Adrian Cole
Done
---
Reply to this email directly or view it on GitHub:
+ * a timeout. Also, when the reference is removed from this list and when
+ * the executorService removes the reference that it has maintained, the
+ * future will be marked for GC since there should be no other references
+ * to it. This is important because
+ listing = blobStore.list(containerName, options);
+ } catch (ContainerNotFoundException ce) {
+ return listing;
+ }
+
+ // recurse on subdirectories
+ if (options.isRecursive()) {
+ for (StorageMetadata md : listing) {
+       String
+ String marker = listing.getNextMarker();
+ if (marker != null) {
+    logger.debug("%s with marker %s", message, marker);
+    options = options.afterMarker(marker);
+    listing = getListing(containerName, options, semaphore,
+
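Pieced together, the listing helper appears to page through results by following the marker. Below is a sketch of that pattern; the signature is simplified here (the real method also threads the semaphore and the futures set through):

```java
// Assumed shape of the paging logic: list one page, process it, then
// continue from the marker if the listing was truncated.
private PageSet<? extends StorageMetadata> getListing(
      String containerName, ListContainerOptions options) {
   PageSet<? extends StorageMetadata> listing = null;
   try {
      listing = blobStore.list(containerName, options);
   } catch (ContainerNotFoundException ce) {
      return listing; // container already gone: nothing left to delete
   }
   // ... schedule deletes / recurse into subdirectories here ...
   String marker = listing.getNextMarker();
   if (marker != null) {
      // A non-null marker means more pages remain.
      options = options.afterMarker(marker);
      return getListing(containerName, options);
   }
   return listing;
}
```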
Looks like a network glitch on cloudbees.

```
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Could not clone git://github.com/jclouds/jclouds.git
...
stderr: fatal: unable to connect to github.com:
github.com[0: 192.30.252.129]: errno=Connection timed out
```
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.inject.Inject;
/**
* Deletes all keys in the container
- *
+ *
* @author Adrian Cole
Add your name too, since this is a large body of work.
Can you open a JIRA issue and reference it in the commit message? This will
allow us to communicate this improvement in the release notes.
---
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/214#issuecomment-38321551
+ * acquired in 'maxTime', a TimeoutException is thrown. Any outstanding
+ * futures at that time are cancelled.
+ */
+ final Semaphore semaphore = new Semaphore(numOutStandingRequests);
+ /*
+ * When a future is created, a reference for that is added to the
+ * a timeout. Also, when the reference is removed from this list and when
+ * the executorService removes the reference that it has maintained, the
+ * future will be marked for GC since there should be no other references
+ * to it. This is important because
@shrinandj This commit represents a big improvement and I apologize for my
delayed comments. Can you address some of these and add TODOs for the rest so
we can commit this as soon as possible? Specifically we must address the O(n)
behavior and I do not understand some of the synchronization.
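If the O(n) behavior refers to scanning a list to remove completed futures, one plain-JDK fix is a concurrent set, which makes the completion callback's remove O(1) and keeps iteration safe during cancellation; a sketch:

```java
// Set view over a ConcurrentHashMap: O(1) add/remove for the
// completion callback, safe concurrent iteration for the timeout path.
Set<ListenableFuture<Void>> outstandingFutures =
      Collections.newSetFromMap(
            new ConcurrentHashMap<ListenableFuture<Void>, Boolean>());
```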
+ listing = blobStore.list(containerName, options);
+ } catch (ContainerNotFoundException ce) {
+ return listing;
+ }
+
+ // recurse on subdirectories
+ if (options.isRecursive()) {
+ for (StorageMetadata md : listing) {
+String
but haven't been able to reproduce the problem.
OK, let's kick the PR builders again, then. I also suspect it's transient.
Thanks, @shrinandj!
---
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/214#issuecomment-36737307
Reopened #214.
---
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/214
[jclouds » jclouds
#889](https://buildhive.cloudbees.com/job/jclouds/job/jclouds/889/) SUCCESS
This pull request looks good
[(what's this?)](https://www.cloudbees.com/what-is-buildhive)
---
Reply to this email directly or view it on GitHub:
[jclouds-pull-requests
#639](https://jclouds.ci.cloudbees.com/job/jclouds-pull-requests/639/) SUCCESS
This pull request looks good
---
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/214#issuecomment-36741644
[jclouds-java-7-pull-requests
#1109](https://jclouds.ci.cloudbees.com/job/jclouds-java-7-pull-requests/1109/)
SUCCESS
This pull request looks good
---
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/214#issuecomment-36742089
I also suspect it's transient.
Bingo ;-)
---
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/214#issuecomment-36742345
jclouds » jclouds #882 UNSTABLE
This looks like a real
[failure](https://buildhive.cloudbees.com/job/jclouds/job/jclouds/org.apache.jclouds$jclouds-blobstore/882/testReport/junit/org.jclouds.blobstore.strategy.internal/DeleteAllKeysInListTest/testExecuteInDirectory/)?
Could you have a look?
I am trying to reproduce this locally. I have run 50 iterations of this test
so far, but haven't been able to reproduce the problem.
---
Reply to this email directly or view it on GitHub:
https://github.com/jclouds/jclouds/pull/214#issuecomment-36681788
I tried varying the number of threads from 10 to 50 and deleting a container
with at least 5K blobs. Things worked fine. I'll rebase this patch on top of
the current ToT and send it out.
---
Reply to this email directly or view it on GitHub:
Sorry for ignoring this. By default jclouds uses an unlimited number of user
threads, set in ```BaseApiMetadata.defaultProperties```. I believe this will
cause problems for users of Atmos blobstores. We can work around this by
setting a lower number in
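For reference, capping that pool looks something like the following when building the context; the provider id and the cap of 64 are illustrative, with ```Constants.PROPERTY_USER_THREADS``` being the actual knob.

```java
// Override the unlimited default from BaseApiMetadata.defaultProperties
// with a bounded user-thread pool (64 is an arbitrary example value).
Properties overrides = new Properties();
overrides.setProperty(Constants.PROPERTY_USER_THREADS, "64");
BlobStoreContext context = ContextBuilder.newBuilder("atmos")
      .credentials("identity", "credential") // placeholder credentials
      .overrides(overrides)
      .buildView(BlobStoreContext.class);
```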