SigV4 for S3 compatible Object Store

2017-07-20 Thread Archana C
Hi,
    jClouds versions up to 2.0.1 support Signature V2 for S3-compatible object 
storage. Is there a plan to support Signature V4 for S3-compatible object 
storage?
Regards
Archana
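
A minimal sketch of how a context for an S3-compatible store is typically built
with the generic "s3" API, which still signs requests with Signature V2 as of
2.0.1; the endpoint and credentials below are placeholders:

import java.util.Properties;

import org.jclouds.ContextBuilder;
import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.BlobStoreContext;

public class S3CompatibleContext {
    public static void main(String[] args) {
        Properties overrides = new Properties();
        // Placeholders for an S3-compatible object store; the generic "s3" api
        // signs with Signature V2 as of jclouds 2.0.1.
        BlobStoreContext context = ContextBuilder.newBuilder("s3")
                .endpoint("http://objectstore.example.com:9000")
                .credentials("accessKey", "secretKey")
                .overrides(overrides)
                .buildView(BlobStoreContext.class);
        BlobStore blobStore = context.getBlobStore();
        System.out.println(blobStore.list());
        context.close();
    }
}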

bulk-delete objects

2017-05-17 Thread Archana C
Hi 
I am trying to perform a bulk-delete operation on a container.
The request looks something like this: {method=DELETE, 
endpoint=http://x.x.x.x:8091/v1/AUTH_62cdd842bcf44023b987196add34951e?bulk-delete,
 headers={Accept=[application/json], 
X-Auth-Token=[5c94641e6a4146f1a9857d6892206bbb]}, payload=[content=true, 
contentMetadata=[cacheControl=null, contentDisposition=null, 
contentEncoding=null, contentLanguage=null, contentLength=41, contentMD5=null, 
contentType=text/plain, expires=null], written=false, isSensitive=false]}
http://x.x.x.x:8091/v1/AUTH_62cdd842bcf44023b987196add34951e?bulk-delete
For non-MPU uploads, bulk-delete works fine. But in the case of bulk-delete on 
multipart-uploaded files, only the manifest file gets removed; the segments 
remain.
Is there something missing?
Regards
Archana
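
A hedged workaround sketch using the portable BlobStore API: list the container,
collect the segment objects stored under the "<name>/" prefix (the layout seen in
the segment names elsewhere in this archive), and remove them together with the
manifest. removeBlobs and the recursive listing option are assumed to be
available in jclouds 2.0:

import java.util.ArrayList;
import java.util.List;

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.StorageMetadata;
import org.jclouds.blobstore.options.ListContainerOptions;

public class DeleteManifestAndSegments {
    /**
     * Removes an SLO/DLO manifest together with the segment objects stored
     * under the "name/" prefix in the same container. Only the first page of
     * the listing is inspected in this sketch.
     */
    static void deleteWithSegments(BlobStore blobStore, String container, String name) {
        List<String> toDelete = new ArrayList<String>();
        toDelete.add(name); // the manifest itself
        for (StorageMetadata candidate : blobStore.list(container,
                ListContainerOptions.Builder.recursive())) {
            if (candidate.getName().startsWith(name + "/")) {
                toDelete.add(candidate.getName());
            }
        }
        blobStore.removeBlobs(container, toDelete);
    }
}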

Swift Bulk delete

2017-04-25 Thread Archana C
Hi 
We are trying to perform a bulk delete operation, and we are stuck with the 
following error:
[[type=BLOB, id=null, name=jclouds1a73d, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jclouds1a73d,
 userMetadata={}], [type=BLOB, id=null, name=jclouds29b8b, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jclouds29b8b,
 userMetadata={}], [type=BLOB, id=null, name=jclouds3a37e, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jclouds3a37e,
 userMetadata={}], [type=BLOB, id=null, name=jclouds42bf6, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jclouds42bf6,
 userMetadata={}], [type=BLOB, id=null, name=jclouds76c52, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jclouds76c52,
 userMetadata={}], [type=BLOB, id=null, name=jclouds8c858, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jclouds8c858,
 userMetadata={}], [type=BLOB, id=null, name=jcloudsc6244, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jcloudsc6244,
 userMetadata={}], [type=BLOB, id=null, name=jcloudse4a76, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jcloudse4a76,
 userMetadata={}], [type=BLOB, id=null, name=jcloudsf94dd, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jcloudsf94dd,
 userMetadata={}], [type=BLOB, id=null, name=jcloudsfbcb2, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/jcloudsfbcb2,
 userMetadata={}], [type=BLOB, id=null, name=obj0, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj0,
 userMetadata={}], [type=BLOB, id=null, name=obj1, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj1,
 userMetadata={}], [type=BLOB, id=null, name=obj10, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj10,
 userMetadata={}], [type=BLOB, id=null, name=obj11, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj11,
 userMetadata={}], [type=BLOB, id=null, name=obj12, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj12,
 userMetadata={}], [type=BLOB, id=null, name=obj13, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj13,
 userMetadata={}], [type=BLOB, id=null, name=obj14, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj14,
 userMetadata={}], [type=BLOB, id=null, name=obj15, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj15,
 userMetadata={}], [type=BLOB, id=null, name=obj16, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj16,
 userMetadata={}], [type=BLOB, id=null, name=obj17, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj17,
 userMetadata={}], [type=BLOB, id=null, name=obj18, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj18,
 userMetadata={}], [type=BLOB, id=null, name=obj19, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj19,
 userMetadata={}], [type=BLOB, id=null, name=obj2, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj2,
 userMetadata={}], [type=BLOB, id=null, name=obj20, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj20,
 userMetadata={}], [type=BLOB, id=null, name=obj21, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj21,
 userMetadata={}], [type=BLOB, id=null, name=obj22, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj22,
 userMetadata={}], [type=BLOB, id=null, name=obj23, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj23,
 userMetadata={}], [type=BLOB, id=null, name=obj24, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj24,
 userMetadata={}], [type=BLOB, id=null, name=obj25, location=null, 
uri=http://x.xx.xx.xx:8091/v1/AUTH_f9d7c1cf6500469b80f0906f5f9b1791/containerExistDLO1/obj25,
 userMetadata={}], [type=BLOB, id=null, name=obj26, location=null, 
uri=http://x.xx.xx.x

Re: jClouds 2.0

2017-04-07 Thread Archana C
Ok. We will work on this.
As per our observation, ETag computation for SLO and DLO differs:
For DLO, the ETag is the MD5 of the manifest file.
For SLO, the ETag is the MD5 of the concatenated segment ETags.
Is our understanding correct?
Regards
Archana
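
Under that understanding (the SLO manifest ETag being the MD5 of the concatenated
segment ETags), a small helper can recompute the expected value on the client
side; this mirrors the Swift documentation rather than any jclouds API:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

public class SloEtag {
    /** MD5 of the concatenated segment ETags, hex-encoded. */
    static String expectedManifestEtag(List<String> segmentEtags) throws NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        for (String etag : segmentEtags) {
            md5.update(etag.getBytes(StandardCharsets.US_ASCII));
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}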

On Friday, 7 April 2017, 1:00, Andrew Gaul  wrote:
 

 jclouds is a community project so DLO support will be added when someone
contributes it, likely you.  I would look at how other libraries bridge
this gap and submit a pull request[1].  Alternatively you could migrate
your DLO to SLO with some external tool.

[1] https://cwiki.apache.org/confluence/display/JCLOUDS/How+to+Contribute

On Thu, Apr 06, 2017 at 04:03:04PM +, Archana C wrote:
> 
> Hi 
> 1. Is there any timeline planned for DLO support ?
> 2. How do we achieve backward comparability with 1.9 migrated objects (DLO) 
> etag with 2.o recall (SLO) etag ?
> 
> RegardsArchana
> 

-- 
Andrew Gaul
http://gaul.org/


   

jClouds 2.0

2017-04-06 Thread Archana C

Hi 
1. Is there any timeline planned for DLO support?
2. How do we achieve backward compatibility between the ETags of objects migrated 
from 1.9 (DLO) and the ETags computed by 2.0 (SLO)?

Regards
Archana



Re: Swift Multipart upload SLO or DLO

2017-04-05 Thread Archana C
Hi,

1. Do you enable "SLO" in Swift as a required filter (swift/proxy/server.py)?
  required_filters = [
    {'name': 'catch_errors'},
    {'name': 'gatekeeper',
 'after_fn': lambda pipe: (['catch_errors']
   if pipe.startswith('catch_errors')
   else [])},
    {'name': 'dlo', 'after_fn': lambda _junk: [
    'staticweb', 'tempauth', 'keystoneauth',
    'catch_errors', 'gatekeeper', 'proxy_logging']}]
Should this be 'slo' for static large object upload? Or is there anything else to 
be done to treat the request as SLO?

2. I noticed that "DLO" is enabled as a default filter (confirmed from the Swift 
logs), hence adding the "X-Object-Manifest" header was the only thing that helped 
get the correct Content-Length.
3. I compared jclouds 1.9.1 CommonSwiftClient.java:putObjectManifest with 
jclouds 2.0 StaticLargeObject.java:replaceManifest and figured out that 
"X-Object-Manifest" is set as required in jclouds 1.9.1 but has been 
omitted in 2.0.
Please share the sample test case that you mentioned.
Regards
Archana


On Tuesday, 4 April 2017, 23:39, Andrew Gaul  wrote:
 

 jclouds supports static large objects with Swift.  We could add support
for dynamic objects but these have a number of caveats and differ from
other providers.

On Tue, Apr 04, 2017 at 04:28:56PM +, Archana C wrote:
> Hi 
> 
> Does jclouds 2.0.0 supports swift Static Large Object Upload (SLO) or Dynamic 
> Large Object Upload(DLO) ?
> 
> As per our observation,
> 1. It looks like jClouds does SLO and not DLO.
> 2. SLO requires no headers whereas DLO requires X-Object-Manifest as header 
> while manifest upload as mentioned in [1].
> 
> [1] https://docs.openstack.org/user-guide/cli-swift-large-object-creation.html
> RegardsArchana
> 

-- 
Andrew Gaul
http://gaul.org/


   

   

Swift Multipart upload SLO or DLO

2017-04-04 Thread Archana C
Hi 

Does jclouds 2.0.0 support Swift Static Large Object upload (SLO) or Dynamic 
Large Object upload (DLO)?

As per our observation:
1. It looks like jClouds does SLO and not DLO.
2. SLO requires no special headers, whereas DLO requires the X-Object-Manifest 
header on the manifest upload, as mentioned in [1].

[1] https://docs.openstack.org/user-guide/cli-swift-large-object-creation.html
Regards
Archana
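
To illustrate the difference described in point 2, a rough sketch of the raw DLO
manifest request from [1]: a zero-byte PUT whose X-Object-Manifest header points
at the container/prefix holding the segments. All values below are placeholders,
and this uses plain HttpURLConnection rather than any jclouds API:

import java.net.HttpURLConnection;
import java.net.URL;

public class DloManifestPut {
    public static void main(String[] args) throws Exception {
        // Placeholders: storage URL, auth token, and the segment prefix.
        String manifestUrl = "http://x.x.x.x:8091/v1/AUTH_tenant/container/bigobject";
        HttpURLConnection conn = (HttpURLConnection) new URL(manifestUrl).openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("X-Auth-Token", "token");
        // DLO: the manifest body is empty; the header names "<container>/<prefix>"
        // where the segment objects were uploaded.
        conn.setRequestProperty("X-Object-Manifest", "container_segments/bigobject/");
        conn.setDoOutput(true);
        conn.setFixedLengthStreamingMode(0);
        conn.getOutputStream().close();
        System.out.println("HTTP " + conn.getResponseCode());
    }
}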



Swift Multipart Manifest Upload

2017-03-28 Thread Archana C

The Content-Length of the manifest file for a multipart upload in Swift does not 
reflect the size of the entire blob; instead it is the content length of the 
manifest file itself. As per the javadoc [1], the Content-Length of the manifest 
must be the size of the blob.
As per my observation, the content length of the manifest is not the content 
length of the blob.
Also, from the comments in the reference below, it is clear that the content 
length will be set to the object size once the PUT operation is complete.
Can you point us to the code where the computation of the content length 
happens once the PUT is completed?

[1] 
https://github.com/jclouds/jclouds/blob/master/apis/openstack-swift/src/main/java/org/jclouds/openstack/swift/v1/binders/BindManifestToJsonPayload.java

Regards
Archana
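
A minimal check sketch: once the multipart PUT completes, reading the object's
metadata back through the portable API should report the total object size rather
than the manifest's own length (assuming the provider resolves the manifest, as
the comments in [1] describe):

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.BlobMetadata;

public class CheckManifestLength {
    /** Content-Length jclouds reports for the uploaded object; for an SLO this
     *  should be the combined size of all segments. */
    static Long reportedLength(BlobStore blobStore, String container, String name) {
        BlobMetadata metadata = blobStore.blobMetadata(container, name);
        return metadata.getContentMetadata().getContentLength();
    }
}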


Re: S3 China Beijing

2017-03-18 Thread Archana C
Have raised a JIRA for the same:
[JCLOUDS-1258] S3 China Beijing Region Support - ASF JIRA
  

Regards
Archana
 

On Saturday, 18 March 2017, 3:57, Ignasi Barrera  wrote:
 

 Yes, we do. Could you kindly file a JIRA [1] so we keep track of this?

According to the Wikipedia page [2], the ISO code is CN-11.


I.

[1] https://issues.apache.org/jira/browse/JCLOUDS
[2] https://en.wikipedia.org/wiki/ISO_3166-2:CN

On 17 March 2017 at 11:57, Archana C  wrote:
> Does jClouds planning to support China Beijing region which supports sig V4?
>
> What will be the ISO3166_CODE for Beijing region ?
>
> Regards
> Archana
>
>
>


   

S3 China Beijing

2017-03-17 Thread Archana C
Is jClouds planning to support the China (Beijing) region, which supports SigV4?
What will be the ISO3166_CODE for the Beijing region?
Regards
Archana
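
A sketch of how such a location could be constructed, following the pattern of
the eu-central-1 example later in this archive; "cn-north-1" as the region id and
the ISO 3166-2 code CN-11 from the reply above are assumptions, not values
confirmed by jclouds:

import org.jclouds.domain.Location;
import org.jclouds.domain.LocationBuilder;
import org.jclouds.domain.LocationScope;

import com.google.common.collect.ImmutableSet;

public class BeijingLocation {
    static Location beijing() {
        // "cn-north-1" is AWS's identifier for the Beijing region; CN-11 is the
        // ISO 3166-2 code suggested in the reply above.
        return new LocationBuilder()
                .scope(LocationScope.REGION)
                .id("cn-north-1")
                .description("AWS China (Beijing)")
                .iso3166Codes(ImmutableSet.of("CN-11"))
                .build();
    }
}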





Re: Not able to build master 2.1.0-SNAPSHOT

2017-03-01 Thread Archana C
ok. Thank you :)
Regards
Archana
 

On Wednesday, 1 March 2017, 15:53, Ignasi Barrera  wrote:
 

 Not related in the end, but definitely worth removing the sonatype
repos from our poms

On 28 February 2017 at 18:11, Andrew Phillips  wrote:
>> The Maven repository in the logs is not the Apache snapshots repo. Do
>> you have any specific repo configuration in your environment?
>
>
> @Archana: Have you added the Apache snapshots repo to your local
> configuration, as described in
>
> http://jclouds.apache.org/start/install/
>
> ?
>
> @Ignasi: This may be related to the repos configured in the POMs:
>
> https://github.com/jclouds/jclouds/search?utf8=%E2%9C%93&q=oss.sonatype.org
>
> Could be worth a follow-up PR...
>
> Regards
>
> ap


   

Re: Not able to build master 2.1.0-SNAPSHOT

2017-03-01 Thread Archana C
Hi 

    We don't have any specific repo configuration; mvn clean install with -U 
resolved the issue.
Regards
Archana

On Tuesday, 28 February 2017, 13:36, Ignasi Barrera  wrote:
 

 The Maven repository in the logs is not the Apache snapshots repo. Do you have 
any specific repo configuration in your environment?

Also worth trying the build with the "-U" flag.
I.
On Feb 28, 2017 6:00 AM, "Archana C"  wrote:

Hi 
    
    Master branch 2.1.0-SNAPSHOT build fails saying 

[ERROR] Failed to execute goal on project jclouds-slf4j: Could not resolve 
dependencies for project 
org.apache.jclouds.driver:jclouds-slf4j:bundle:2.1.0-SNAPSHOT: Failure to find 
org.apache.jclouds:jclouds-core:jar:tests:2.1.0-SNAPSHOT in 
https://oss.sonatype.org/content/repositories/snapshots was cached in the local 
repository, resolution will not be reattempted until the update interval of 
jclouds-sona-snapshots-nexus has elapsed or updates are forced.
Is there any alternative to proceed further?


Regards
Archana




   

Not able to build master 2.1.0-SNAPSHOT

2017-02-27 Thread Archana C
Hi 
    
    Master branch 2.1.0-SNAPSHOT build fails saying 

[ERROR] Failed to execute goal on project jclouds-slf4j: Could not resolve 
dependencies for project 
org.apache.jclouds.driver:jclouds-slf4j:bundle:2.1.0-SNAPSHOT: Failure to find 
org.apache.jclouds:jclouds-core:jar:tests:2.1.0-SNAPSHOT in 
https://oss.sonatype.org/content/repositories/snapshots was cached in the local 
repository, resolution will not be reattempted until the update interval of 
jclouds-sona-snapshots-nexus has elapsed or updates are forced.
Is there any alternative to proceed further?


Regards
Archana



Re: Extending RegionScopedSwiftBlobStore

2017-02-14 Thread Archana C
Adding .add(SwiftBlobStoreContextModule.class) as mentioned in [1] always 
returns RegionScopedSwiftBlobStore, even though we have SwiftBlobStore 
(SwiftBlobStore extends RegionScopedSwiftBlobStore).
Is there a way to remove SwiftBlobStoreContextModule from the default modules 
and add a custom module instead?
We are facing a lot of issues in extending RegionScopedSwiftBlobStore.

[1] 
https://github.com/jclouds/jclouds/blob/ac2f746e64821878f157ba4b1c12675286ccc8e1/apis/openstack-swift/src/main/java/org/jclouds/openstack/swift/v1/SwiftApiMetadata.java

Regards
Archana
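
One possible direction, sketched under the assumption that ContextBuilder honors
an ApiMetadata whose default modules have been rebuilt: copy the default module
set from SwiftApiMetadata, swap the blob store context module class for a custom
one, and build the context from the modified metadata. MyBlobStoreContextModule
is hypothetical and would have to reproduce the bindings of the module it
replaces:

import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Set;

import org.jclouds.ContextBuilder;
import org.jclouds.openstack.swift.v1.SwiftApiMetadata;
import org.jclouds.openstack.swift.v1.blobstore.RegionScopedBlobStoreContext;

import com.google.inject.AbstractModule;
import com.google.inject.Module;

public class CustomSwiftContext {
    /** Hypothetical replacement: it must re-create the bindings from
     *  SwiftBlobStoreContextModule, substituting the custom blob store. */
    public static class MyBlobStoreContextModule extends AbstractModule {
        @Override
        protected void configure() {
            // bind(RegionScopedSwiftBlobStore.class).to(SwiftBlobStore.class); etc.
        }
    }

    public static void main(String[] args) {
        SwiftApiMetadata metadata = new SwiftApiMetadata();
        Set<Class<? extends Module>> modules =
                new LinkedHashSet<Class<? extends Module>>(metadata.getDefaultModules());
        // Drop the stock blob store context module by name, then add the custom one.
        for (Iterator<Class<? extends Module>> it = modules.iterator(); it.hasNext();) {
            if ("SwiftBlobStoreContextModule".equals(it.next().getSimpleName())) {
                it.remove();
            }
        }
        modules.add(MyBlobStoreContextModule.class);

        RegionScopedBlobStoreContext context = ContextBuilder
                .newBuilder(metadata.toBuilder().defaultModules(modules).build())
                .endpoint("http://x.xx.xx.xx:5000/v2.0")
                .credentials("tenant:user", "password")
                .buildView(RegionScopedBlobStoreContext.class);
        System.out.println(context.getConfiguredRegions());
    }
}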
 


On Tuesday, 14 February 2017, 7:09, Archana C  
wrote:
 

Making the binding a singleton made my class visible instead of 
RegionScopedSwiftBlobStore.
Again, the problem that comes up is this:

1) No implementation for java.lang.String annotated with 
@com.google.inject.assistedinject.Assisted(value=) was bound.
  while locating java.lang.String annotated with 
@com.google.inject.assistedinject.Assisted(value=)
    for parameter 4 at jclouds20.SwiftBlobStore.<init>(SwiftBlobStore.java:43)
  at jclouds20.Module1.configure(Module1.java:11)


public class App 
{
    private static final String CONTAINER_NAME = "arctestMP";
    private static final String OBJECT_NAME = "arc";
    static byte[][] etagMap = null;

    public static void main( String[] args ) throws IOException 
    {
         Iterable<Module> modules = ImmutableSet.<Module>builder()
                 .add(new Module1())
                 .build();
        Properties overrides = new Properties();
        RegionScopedBlobStoreContext context = 
ContextBuilder.newBuilder("openstack-swift")
                .endpoint("http://x.xx.xx.xx:5000/v2.0";)
                .credentials("xxx:xxx", "xxx")
                .overrides(overrides)
                .modules(modules)
                .buildView(RegionScopedBlobStoreContext.class);
        BlobStore blobStore = context.getBlobStore("mcstore");

        blobStore.createContainerInLocation(null, CONTAINER_NAME);
        Path path = Paths.get("/home/archupsg03/test_dir/sample2");
        File f = new File("/home/archupsg03/Downloads/sample2");
        byte []byteArray =  Files.readAllBytes(path);
        Payload payload = newByteSourcePayload(wrap(byteArray));
        PutOptions opt = new PutOptions();
        opt.multipart();

        ExecutorService customExecutor = Executors.newFixedThreadPool(1);
        ListeningExecutorService listeningExecutor = 
MoreExecutors.listeningDecorator(customExecutor);
        opt.setCustomExecutor(listeningExecutor);
        
        Blob blob = blobStore.blobBuilder(OBJECT_NAME)
                .payload(payload).contentLength(f.length())
                .build();
        String etag =  blobStore.putBlob(CONTAINER_NAME, blob, opt);
        System.out.println(etag);}}
public class Module1 extends AbstractModule{
    @Override
    protected void configure() {
        
bind(RegionScopedSwiftBlobStore.class).to(SwiftBlobStore.class).in(Scopes.SINGLETON);
    }
}
public class SwiftBlobStore extends RegionScopedSwiftBlobStore {

    @Inject
    protected SwiftBlobStore(Injector baseGraph, BlobStoreContext context, 
SwiftApi api,
            @Memoized Supplier<Set<? extends Location>> locations, @Assisted 
String regionId, PayloadSlicer slicer,
            @Named(PROPERTY_USER_THREADS) ListeningExecutorService 
userExecutor) {
        super(baseGraph, context, api, locations, regionId, slicer, 
userExecutor);
        
        // TODO Auto-generated constructor stub
    }}
The call to the super constructor is throwing the error. I kindly request you to 
look into the above issue.


Regards
Archana
 

On Monday, 13 February 2017, 22:18, Archana C  
wrote:
 

 Here is the complete source code of a very simple usage of the APIs:

public class App 
{
    private static final String CONTAINER_NAME = "arctestMP1";
    private static final String OBJECT_NAME = "arc";


    public static void main( String[] args ) throws IOException 
    {
        Iterable<Module> modules = ImmutableSet.<Module>of(
                new Module1());
        Properties overrides = new Properties();
        BlobStoreContext context = ContextBuilder.newBuilder("openstack-swift")
                .endpoint("http://x.xx.xx.xx:5000/v2.0";)
                .credentials("xxx:xxx", "xxx")
                .overrides(overrides)
                .modules(modules)
                .buildView(BlobStoreContext.class);
        BlobStore blobStore = context.getBlobStore();
        blobStore.createContainerInLocation(null, CONTAINER_NAME);
        Path path = Paths.get("/home/archu/test_dir/test2");
        File f = new File("/home/archu/Downloads/test2");
        byte []byteArray =  Files.readAllBytes(path);
        Payload payload = newByteSourcePayload(wrap(byteArray));
        PutOptions opt = new PutOptions();
        opt.multipart();
        Blob blob = blobSt

Re: Extending RegionScopedSwiftBlobStore

2017-02-13 Thread Archana C
I have added my modules like this 

 modules = ImmutableSet.<Module>builder()
    .addAll(modules)
    .add(new SwiftContextModule())
    .build();
 blobStoreContext = ContextBuilder.newBuilder(provider)
    .overrides(properties)
    .modules(modules)
    .buildView(BlobStoreContext.class);
Since we are working on one or more providers, our context is generic

And SwiftContextModule looks somewhat  like this
public class SwiftContextModule  extends AbstractModule {
    
    @Override
       protected void configure() {
      bind(ConsistencyModel.class).toInstance(ConsistencyModel.EVENTUAL);
      bind(BlobStoreContext.class).to(RegionScopedBlobStoreContext.class);
      install(new FactoryModuleBuilder().build(Factory.class));
      bind(RegionScopedSwiftBlobStore.class).to(SwiftBlobStore.class);
       }

       interface Factory {
      SwiftBlobStore create(String in);
       }

       @Provides
       final Function<String, RegionScopedSwiftBlobStore> 
blobStore(FactoryFunction in) {
      return in;
       }

       static class FactoryFunction extends ForwardingObject implements 
Function<String, RegionScopedSwiftBlobStore> {
      @Inject
      Factory delegate;

      @Override
      protected Factory delegate() {
     return delegate;
      }

      @Override
      public RegionScopedSwiftBlobStore apply(String in) {
     return delegate.create(in);
      }
       }
}

 

On Monday, 13 February 2017, 15:20, Ignasi Barrera  wrote:
 

 Could you share the code you use to get the blob store context? IIRC
when using the "region scoped" one you need to specify the region.
Something like:

RegionScopedBlobStoreContext ctx =
contextBuilder.buildView(RegionScopedBlobStoreContext.class); // with
your modules
BlobStore blobStore = ctx.getBlobStore("regionId");

What region ids are returned if you call: ctx.getConfiguredRegions() ?


On 13 February 2017 at 10:33, Archana C  wrote:
> Hi
>
> We are trying to extend RegionScopedSwiftblobStore for our use case somewhat
> like
>
> public class SwiftBlobStore extends RegionScopedSwiftBlobStore{
>    // Our Implementation
> }
>
> Compilation is successful, binding is not happening properly at the run time
>
> Error:
>
> 1) No implementation for java.lang.String annotated with
> @com.google.inject.assistedinject.Assisted(value=) was bound.
>  while locating java.lang.String annotated with
> @com.google.inject.assistedinject.Assisted(value=)
>    for parameter 4 at
> com.modules.SwiftBlobStore.(SwiftBlobStore.java:129)
>
> @Assisted is causing issue here. Is there anything that we need modify
> internally to avoid this error ?
>
> Regards
> Archana


   

Extending RegionScopedSwiftBlobStore

2017-02-13 Thread Archana C
Hi 

We are trying to extend RegionScopedSwiftBlobStore for our use case, somewhat 
like:

public class SwiftBlobStore extends RegionScopedSwiftBlobStore {
    // Our Implementation
}
Compilation is successful, but the binding is not happening properly at run time.
Error:
1) No implementation for java.lang.String annotated with 
@com.google.inject.assistedinject.Assisted(value=) was bound.
  while locating java.lang.String annotated with 
@com.google.inject.assistedinject.Assisted(value=)
    for parameter 4 at 
com.modules.SwiftBlobStore.<init>(SwiftBlobStore.java:129)
  
@Assisted is causing the issue here. Is there anything that we need to modify 
internally to avoid this error?
Regards
Archana
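
The "@Assisted String" error usually means the subclass is being constructed by
Guice directly instead of through the assisted-inject factory that supplies the
region id. A hedged sketch, modeled on the SwiftContextModule shown earlier in
this archive, that keeps the factory in play by telling it to produce the
subclass (com.modules.SwiftBlobStore is the custom class from the message above):

import org.jclouds.openstack.swift.v1.blobstore.RegionScopedSwiftBlobStore;

import com.google.inject.AbstractModule;
import com.google.inject.assistedinject.FactoryModuleBuilder;
import com.modules.SwiftBlobStore;

public class SwiftBlobStoreOverrideModule extends AbstractModule {
    /** Factory through which the region id reaches the @Assisted String parameter. */
    interface Factory {
        RegionScopedSwiftBlobStore create(String regionId);
    }

    @Override
    protected void configure() {
        // Assisted-inject builds the subclass: the factory supplies the region
        // id, Guice supplies the remaining constructor parameters.
        install(new FactoryModuleBuilder()
                .implement(RegionScopedSwiftBlobStore.class, SwiftBlobStore.class)
                .build(Factory.class));
    }
}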


MultiPart Upload

2017-02-08 Thread Archana C
Hi 

Is there a way to substitute the attributes of MultipartUploadSlicingAlgorithm 
(minimum part size, maximum part size, and number of parts) in jClouds 2.0?


Regards
Archana


Re: jClouds 2.0 MultiPart Upload

2017-02-03 Thread Archana C
Does it mean that if we do not specify an ExecutorService, the upload happens 
sequentially?

Regards
Archana
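
A minimal sketch following the setCustomExecutor pattern used elsewhere in this
archive, showing how a caller-supplied pool controls how many parts are in
flight at once (poolSize and the method name putMultipart are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.jclouds.blobstore.BlobStore;
import org.jclouds.blobstore.domain.Blob;
import org.jclouds.blobstore.options.PutOptions;

import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

public class ParallelMultipartPut {
    /** Uploads the blob's parts through a caller-supplied thread pool. */
    static String putMultipart(BlobStore blobStore, String container, Blob blob, int poolSize) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        ListeningExecutorService executor = MoreExecutors.listeningDecorator(pool);
        try {
            PutOptions options = new PutOptions();
            options.multipart();
            options.setCustomExecutor(executor);
            return blobStore.putBlob(container, blob, options);
        } finally {
            pool.shutdown();
        }
    }
}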

 

On Saturday, 4 February 2017, 10:34, Andrew Gaul  wrote:
 

 PutOptions *takes* an ExecutorService which allows multiple threads to
concurrently upload multiple parts.

On Sat, Feb 04, 2017 at 03:34:30AM +, Archana C wrote:
> Hi 
> 
> I think the question was not clear. Parallel upload of multiple file is fine 
> and that can be achieved by using executorservice.
> The question here is, does multipartUpload i.e uploading of each part is 
> happening in parallel ?
> Does sequential upload of part deprecated ?
> RegardsArchana
> 
>  
> 
>    On Saturday, 4 February 2017, 1:30, Andrew Gaul  wrote:
>  
> 
>  We rewrote multi-part uploads in jclouds 2.0.  You should pass an
> ExecutorService and via PutOptions in your call to BlobStore.putBlob.
> 
> On Fri, Feb 03, 2017 at 01:11:15PM +, Archana C wrote:
> > Hi 
> > 
> > Is SequentialMultiPartUpload deprecated in jClouds2.0. Is all the multipart 
> > uploads are parallel now ?
> > RegardsArchana 
> > 
> >    On Friday, 3 February 2017, 18:39, Archana C  
> >wrote:
> >  
> > 
> >  Thanks it helped
> > RegardsArchana 
> > 
> >    On Friday, 3 February 2017, 12:06, Ignasi Barrera  
> >wrote:
> >  
> > 
> >  It looks like the OOM exception is thrown when writing the wire logs. When 
> >using the blob store apis you might see binary data in the logs, as the 
> >"jclouds.wire" logger logs the response/request payloads which might be huge 
> >for some blobs and can cause this kind of exceptions.
> > Could you try disabling the wire logs? (I recommend doing this for 
> > production environments).
> > Perhaps for your use case the "jclouds.headers" are enough; that will log 
> > all request/reponse path and headers but skip the bodies.
> > More on this here:https://issues.apache.org/jira/browse/JCLOUDS-1187
> > https://issues.apache.org/jira/browse/JCLOUDS-932
> > 
> > 
> > HTH!
> > I.
> > On Feb 3, 2017 06:22, "Archana C"  wrote:
> > 
> > Hi 
> > 
> > I have written a sample code for multipart upload using jClouds-2.0
> >     Properties overrides = new Properties();
> >         BlobStoreContext context = ContextBuilder.newBuilder(" 
> > openstack-swift")
> >                 .endpoint("http://x.xxx.xx.xx: 5000/v2.0")
> >                 .credentials("xx:xx", "xx")
> >                 .overrides(overrides)
> >                 .modules(modules)
> >                 .buildView(BlobStoreContext. class);
> >         BlobStore blobStore = context.getBlobStore();
> >         blobStore. createContainerInLocation( null, CONTAINER_NAME);
> >         Path path = Paths.get("test2");
> >         File f = new File("test2");
> >         byte []byteArray =  Files.readAllBytes(path);
> >         Payload payload = newByteSourcePayload(wrap( byteArray));
> >         PutOptions opt = new PutOptions();
> >         opt.multipart();
> >         Blob blob = blobStore.blobBuilder(OBJECT_ NAME)
> >                 .payload(payload). contentLength(f.length())
> >                 .build();
> >         String etag =  blobStore.putBlob(CONTAINER_ NAME, blob, opt);
> > test2 is the file I am trying to upload which is of size 36MB and I am 
> > getting the following exception
> > 10:21:52.355 [main] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice - 
> > Sending request 1344471693: PUT http://x.x.x.x:8091/v1/AUTH_ 
> > 0909ac10e7024847b1a9fe9787c7de 8f/arctestMP HTTP/1.1
> > 10:21:52.356 [main] DEBUG jclouds.headers - >> PUT 
> > http://x.x.x.x:8091/v1/AUTH_ 0909ac10e7024847b1a9fe9787c7de 8f/arctestMP 
> > HTTP/1.1
> > 10:21:52.356 [main] DEBUG jclouds.headers - >> Accept: application/json
> > 10:21:52.357 [main] DEBUG jclouds.headers - >> X-Auth-Token: 
> > fd72b74db90c46cabcca3f317d5a09 d4
> > 10:21:53.129 [main] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice - 
> > Receiving response 1344471693: HTTP/1.1 201 Created
> > 10:21:53.129 [main] DEBUG jclouds.headers - << HTTP/1.1 201 Created
> > 10:21:53.129 [main] DEBUG jclouds.headers - << Date: Fri, 03 Feb 2017 
> > 04:51:53 GMT
> > 10:21:53.129 [main] DEBUG jclouds.headers - << X-Trans-Id: 
> > tx83ba6249347c43c99bb41- 0058940c68
> > 10:21:53.129 [main] DEBUG jclouds.headers - << Connection: keep-alive
> > 10:21:53.129 [main] DEBUG jclouds.headers - << Content-Type: text/html; 

Re: jClouds 2.0 MultiPart Upload

2017-02-03 Thread Archana C
Hi,
I think the question was not clear. Parallel upload of multiple files is fine, 
and that can be achieved by using an ExecutorService.
The question here is: in a multipart upload, is the upload of each part 
happening in parallel?
Is sequential upload of parts deprecated?
Regards
Archana

 

On Saturday, 4 February 2017, 1:30, Andrew Gaul  wrote:
 

 We rewrote multi-part uploads in jclouds 2.0. You should pass an
ExecutorService via PutOptions in your call to BlobStore.putBlob.

On Fri, Feb 03, 2017 at 01:11:15PM +, Archana C wrote:
> Hi 
> 
> Is SequentialMultiPartUpload deprecated in jClouds2.0. Is all the multipart 
> uploads are parallel now ?
> RegardsArchana 
> 
>    On Friday, 3 February 2017, 18:39, Archana C  
>wrote:
>  
> 
>  Thanks it helped
> RegardsArchana 
> 
>    On Friday, 3 February 2017, 12:06, Ignasi Barrera  wrote:
>  
> 
>  It looks like the OOM exception is thrown when writing the wire logs. When 
>using the blob store apis you might see binary data in the logs, as the 
>"jclouds.wire" logger logs the response/request payloads which might be huge 
>for some blobs and can cause this kind of exceptions.
> Could you try disabling the wire logs? (I recommend doing this for production 
> environments).
> Perhaps for your use case the "jclouds.headers" are enough; that will log all 
> request/reponse path and headers but skip the bodies.
> More on this here:https://issues.apache.org/jira/browse/JCLOUDS-1187
> https://issues.apache.org/jira/browse/JCLOUDS-932
> 
> 
> HTH!
> I.
> On Feb 3, 2017 06:22, "Archana C"  wrote:
> 
> Hi 
> 
> I have written a sample code for multipart upload using jClouds-2.0
>     Properties overrides = new Properties();
>         BlobStoreContext context = ContextBuilder.newBuilder(" 
> openstack-swift")
>                 .endpoint("http://x.xxx.xx.xx: 5000/v2.0")
>                 .credentials("xx:xx", "xx")
>                 .overrides(overrides)
>                 .modules(modules)
>                 .buildView(BlobStoreContext. class);
>         BlobStore blobStore = context.getBlobStore();
>         blobStore. createContainerInLocation( null, CONTAINER_NAME);
>         Path path = Paths.get("test2");
>         File f = new File("test2");
>         byte []byteArray =  Files.readAllBytes(path);
>         Payload payload = newByteSourcePayload(wrap( byteArray));
>         PutOptions opt = new PutOptions();
>         opt.multipart();
>         Blob blob = blobStore.blobBuilder(OBJECT_ NAME)
>                 .payload(payload). contentLength(f.length())
>                 .build();
>         String etag =  blobStore.putBlob(CONTAINER_ NAME, blob, opt);
> test2 is the file I am trying to upload which is of size 36MB and I am 
> getting the following exception
> 10:21:52.355 [main] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice - 
> Sending request 1344471693: PUT http://x.x.x.x:8091/v1/AUTH_ 
> 0909ac10e7024847b1a9fe9787c7de 8f/arctestMP HTTP/1.1
> 10:21:52.356 [main] DEBUG jclouds.headers - >> PUT 
> http://x.x.x.x:8091/v1/AUTH_ 0909ac10e7024847b1a9fe9787c7de 8f/arctestMP 
> HTTP/1.1
> 10:21:52.356 [main] DEBUG jclouds.headers - >> Accept: application/json
> 10:21:52.357 [main] DEBUG jclouds.headers - >> X-Auth-Token: 
> fd72b74db90c46cabcca3f317d5a09 d4
> 10:21:53.129 [main] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice - 
> Receiving response 1344471693: HTTP/1.1 201 Created
> 10:21:53.129 [main] DEBUG jclouds.headers - << HTTP/1.1 201 Created
> 10:21:53.129 [main] DEBUG jclouds.headers - << Date: Fri, 03 Feb 2017 
> 04:51:53 GMT
> 10:21:53.129 [main] DEBUG jclouds.headers - << X-Trans-Id: 
> tx83ba6249347c43c99bb41- 0058940c68
> 10:21:53.129 [main] DEBUG jclouds.headers - << Connection: keep-alive
> 10:21:53.129 [main] DEBUG jclouds.headers - << Content-Type: text/html; 
> charset=UTF-8
> 10:21:53.129 [main] DEBUG jclouds.headers - << Content-Length: 0    
> ---> Container Creation Successful
> 10:21:53.373 [user thread 1] DEBUG o.j.rest.internal. InvokeHttpMethod - >> 
> invoking object:put
> 10:21:53.373 [user thread 0] DEBUG o.j.rest.internal. InvokeHttpMethod - >> 
> invoking object:put
> 10:21:53.374 [user thread 1] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ 
> ice - Sending request 823625484: PUT http://x.x.x.x:8091/v1/AUTH_ 
> 0909ac10e7024847b1a9fe9787c7de 8f/arctestMP/arc/slo/ 
> 1486097513.327000/0/33554432/ 0001 HTTP/1.1
> 10:21:53.376 [user thread 0] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ 
> ice - Sending request -1220101806: PUT http://x.x.x.x:8091/v1/AUTH_ 
>

Re: jClouds 2.0 MultiPart Upload

2017-02-03 Thread Archana C
Hi 

Is sequential multipart upload deprecated in jClouds 2.0? Are all multipart 
uploads parallel now?
Regards
Archana

On Friday, 3 February 2017, 18:39, Archana C  wrote:
 

 Thanks it helped
RegardsArchana 

On Friday, 3 February 2017, 12:06, Ignasi Barrera  wrote:
 

 It looks like the OOM exception is thrown when writing the wire logs. When 
using the blob store apis you might see binary data in the logs, as the 
"jclouds.wire" logger logs the response/request payloads which might be huge 
for some blobs and can cause this kind of exceptions.
Could you try disabling the wire logs? (I recommend doing this for production 
environments).
Perhaps for your use case the "jclouds.headers" are enough; that will log all 
request/response paths and headers but skip the bodies.
More on this here:https://issues.apache.org/jira/browse/JCLOUDS-1187
https://issues.apache.org/jira/browse/JCLOUDS-932


HTH!
I.
On Feb 3, 2017 06:22, "Archana C"  wrote:

Hi 

I have written a sample code for multipart upload using jClouds-2.0
    Properties overrides = new Properties();
        BlobStoreContext context = ContextBuilder.newBuilder(" openstack-swift")
                .endpoint("http://x.xxx.xx.xx: 5000/v2.0")
                .credentials("xx:xx", "xx")
                .overrides(overrides)
                .modules(modules)
                .buildView(BlobStoreContext. class);
        BlobStore blobStore = context.getBlobStore();
        blobStore. createContainerInLocation( null, CONTAINER_NAME);
        Path path = Paths.get("test2");
        File f = new File("test2");
        byte []byteArray =  Files.readAllBytes(path);
        Payload payload = newByteSourcePayload(wrap( byteArray));
        PutOptions opt = new PutOptions();
        opt.multipart();
        Blob blob = blobStore.blobBuilder(OBJECT_ NAME)
                .payload(payload). contentLength(f.length())
                .build();
        String etag =  blobStore.putBlob(CONTAINER_ NAME, blob, opt);
test2 is the file I am trying to upload which is of size 36MB and I am getting 
the following exception
10:21:52.355 [main] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice - Sending 
request 1344471693: PUT http://x.x.x.x:8091/v1/AUTH_ 
0909ac10e7024847b1a9fe9787c7de 8f/arctestMP HTTP/1.1
10:21:52.356 [main] DEBUG jclouds.headers - >> PUT http://x.x.x.x:8091/v1/AUTH_ 
0909ac10e7024847b1a9fe9787c7de 8f/arctestMP HTTP/1.1
10:21:52.356 [main] DEBUG jclouds.headers - >> Accept: application/json
10:21:52.357 [main] DEBUG jclouds.headers - >> X-Auth-Token: 
fd72b74db90c46cabcca3f317d5a09 d4
10:21:53.129 [main] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice - 
Receiving response 1344471693: HTTP/1.1 201 Created
10:21:53.129 [main] DEBUG jclouds.headers - << HTTP/1.1 201 Created
10:21:53.129 [main] DEBUG jclouds.headers - << Date: Fri, 03 Feb 2017 04:51:53 
GMT
10:21:53.129 [main] DEBUG jclouds.headers - << X-Trans-Id: 
tx83ba6249347c43c99bb41- 0058940c68
10:21:53.129 [main] DEBUG jclouds.headers - << Connection: keep-alive
10:21:53.129 [main] DEBUG jclouds.headers - << Content-Type: text/html; 
charset=UTF-8
10:21:53.129 [main] DEBUG jclouds.headers - << Content-Length: 0    
---> Container Creation Successful
10:21:53.373 [user thread 1] DEBUG o.j.rest.internal. InvokeHttpMethod - >> 
invoking object:put
10:21:53.373 [user thread 0] DEBUG o.j.rest.internal. InvokeHttpMethod - >> 
invoking object:put
10:21:53.374 [user thread 1] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice 
- Sending request 823625484: PUT http://x.x.x.x:8091/v1/AUTH_ 
0909ac10e7024847b1a9fe9787c7de 8f/arctestMP/arc/slo/ 
1486097513.327000/0/33554432/ 0001 HTTP/1.1
10:21:53.376 [user thread 0] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice 
- Sending request -1220101806: PUT http://x.x.x.x:8091/v1/AUTH_ 
0909ac10e7024847b1a9fe9787c7de 8f/arctestMP/arc/slo/ 
1486097513.327000/0/33554432/  HTTP/1.1
10:21:53.396 [user thread 1] DEBUG org.jclouds.http.internal. HttpWire - over 
limit 3145728/262144: wrote temp file
10:21:53.552 [user thread 0] DEBUG org.jclouds.http.internal. HttpWire - over 
limit 33554432/262144: wrote temp fileException in thread "main" 
com.google.common.util. concurrent.ExecutionError: java.lang.OutOfMemoryError: 
Java heap space
    at com.google.common.util. concurrent.Futures. 
wrapAndThrowUnchecked(Futures. java:1380)
    at com.google.common.util. concurrent.Futures. getUnchecked(Futures.java: 
1373)
    at org.jclouds.openstack.swift. v1.blobstore. RegionScopedSwiftBlobStore. 
putMultipartBlob( RegionScopedSwiftBlobStore. java:650)
    at org.jclouds.openstack.swift. v1.blobstore. RegionScopedSwiftBlobStore. 
putMultipartBlob( RegionScopedSwiftBlobStore. java:628)
    at org.jclouds.openstack.swift. v1.blobstore. RegionScopedSwiftBlobStore. 
putBlob( RegionScopedSw

Re: jClouds 2.0 MultiPart Upload

2017-02-03 Thread Archana C
Thanks, it helped.
Regards
Archana

On Friday, 3 February 2017, 12:06, Ignasi Barrera  wrote:
 

 It looks like the OOM exception is thrown when writing the wire logs. When 
using the blob store apis you might see binary data in the logs, as the 
"jclouds.wire" logger logs the response/request payloads which might be huge 
for some blobs and can cause this kind of exceptions.
Could you try disabling the wire logs? (I recommend doing this for production 
environments).
Perhaps for your use case the "jclouds.headers" are enough; that will log all 
request/response paths and headers but skip the bodies.
More on this here:https://issues.apache.org/jira/browse/JCLOUDS-1187
https://issues.apache.org/jira/browse/JCLOUDS-932


HTH!
I.
On Feb 3, 2017 06:22, "Archana C"  wrote:

Hi 

I have written a sample code for multipart upload using jClouds-2.0
    Properties overrides = new Properties();
        BlobStoreContext context = ContextBuilder.newBuilder(" openstack-swift")
                .endpoint("http://x.xxx.xx.xx: 5000/v2.0")
                .credentials("xx:xx", "xx")
                .overrides(overrides)
                .modules(modules)
                .buildView(BlobStoreContext. class);
        BlobStore blobStore = context.getBlobStore();
        blobStore. createContainerInLocation( null, CONTAINER_NAME);
        Path path = Paths.get("test2");
        File f = new File("test2");
        byte []byteArray =  Files.readAllBytes(path);
        Payload payload = newByteSourcePayload(wrap( byteArray));
        PutOptions opt = new PutOptions();
        opt.multipart();
        Blob blob = blobStore.blobBuilder(OBJECT_ NAME)
                .payload(payload). contentLength(f.length())
                .build();
        String etag =  blobStore.putBlob(CONTAINER_ NAME, blob, opt);
test2 is the file I am trying to upload which is of size 36MB and I am getting 
the following exception
10:21:52.355 [main] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice - Sending 
request 1344471693: PUT http://x.x.x.x:8091/v1/AUTH_ 
0909ac10e7024847b1a9fe9787c7de 8f/arctestMP HTTP/1.1
10:21:52.356 [main] DEBUG jclouds.headers - >> PUT http://x.x.x.x:8091/v1/AUTH_ 
0909ac10e7024847b1a9fe9787c7de 8f/arctestMP HTTP/1.1
10:21:52.356 [main] DEBUG jclouds.headers - >> Accept: application/json
10:21:52.357 [main] DEBUG jclouds.headers - >> X-Auth-Token: 
fd72b74db90c46cabcca3f317d5a09 d4
10:21:53.129 [main] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice - 
Receiving response 1344471693: HTTP/1.1 201 Created
10:21:53.129 [main] DEBUG jclouds.headers - << HTTP/1.1 201 Created
10:21:53.129 [main] DEBUG jclouds.headers - << Date: Fri, 03 Feb 2017 04:51:53 
GMT
10:21:53.129 [main] DEBUG jclouds.headers - << X-Trans-Id: 
tx83ba6249347c43c99bb41- 0058940c68
10:21:53.129 [main] DEBUG jclouds.headers - << Connection: keep-alive
10:21:53.129 [main] DEBUG jclouds.headers - << Content-Type: text/html; 
charset=UTF-8
10:21:53.129 [main] DEBUG jclouds.headers - << Content-Length: 0    
---> Container Creation Successful
10:21:53.373 [user thread 1] DEBUG o.j.rest.internal. InvokeHttpMethod - >> 
invoking object:put
10:21:53.373 [user thread 0] DEBUG o.j.rest.internal. InvokeHttpMethod - >> 
invoking object:put
10:21:53.374 [user thread 1] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice 
- Sending request 823625484: PUT http://x.x.x.x:8091/v1/AUTH_ 
0909ac10e7024847b1a9fe9787c7de 8f/arctestMP/arc/slo/ 
1486097513.327000/0/33554432/ 0001 HTTP/1.1
10:21:53.376 [user thread 0] DEBUG o.j.h.i. JavaUrlHttpCommandExecutorServ ice 
- Sending request -1220101806: PUT http://x.x.x.x:8091/v1/AUTH_ 
0909ac10e7024847b1a9fe9787c7de 8f/arctestMP/arc/slo/ 
1486097513.327000/0/33554432/  HTTP/1.1
10:21:53.396 [user thread 1] DEBUG org.jclouds.http.internal. HttpWire - over 
limit 3145728/262144: wrote temp file
10:21:53.552 [user thread 0] DEBUG org.jclouds.http.internal. HttpWire - over 
limit 33554432/262144: wrote temp fileException in thread "main" 
com.google.common.util. concurrent.ExecutionError: java.lang.OutOfMemoryError: 
Java heap space
    at com.google.common.util. concurrent.Futures. 
wrapAndThrowUnchecked(Futures. java:1380)
    at com.google.common.util. concurrent.Futures. getUnchecked(Futures.java: 
1373)
    at org.jclouds.openstack.swift. v1.blobstore. RegionScopedSwiftBlobStore. 
putMultipartBlob( RegionScopedSwiftBlobStore. java:650)
    at org.jclouds.openstack.swift. v1.blobstore. RegionScopedSwiftBlobStore. 
putMultipartBlob( RegionScopedSwiftBlobStore. java:628)
    at org.jclouds.openstack.swift. v1.blobstore. RegionScopedSwiftBlobStore. 
putBlob( RegionScopedSwiftBlobStore. java:274)
    at jclouds20.App.main(App.java: 83)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at java.lang.StringBuilder. ensureCapacityImpl( StringBuilder.

jClouds 2.0 MultiPart Upload

2017-02-02 Thread Archana C
Hi 

I have written a sample code for multipart upload using jClouds-2.0
    Properties overrides = new Properties();
        BlobStoreContext context = ContextBuilder.newBuilder("openstack-swift")
                .endpoint("http://x.xxx.xx.xx:5000/v2.0";)
                .credentials("xx:xx", "xx")
                .overrides(overrides)
                .modules(modules)
                .buildView(BlobStoreContext.class);
        BlobStore blobStore = context.getBlobStore();
        blobStore.createContainerInLocation(null, CONTAINER_NAME);
        Path path = Paths.get("test2");
        File f = new File("test2");
        byte []byteArray =  Files.readAllBytes(path);
        Payload payload = newByteSourcePayload(wrap(byteArray));
        PutOptions opt = new PutOptions();
        opt.multipart();
        Blob blob = blobStore.blobBuilder(OBJECT_NAME)
                .payload(payload).contentLength(f.length())
                .build();
        String etag =  blobStore.putBlob(CONTAINER_NAME, blob, opt);
test2 is the file I am trying to upload which is of size 36MB and I am getting 
the following exception
10:21:52.355 [main] DEBUG o.j.h.i.JavaUrlHttpCommandExecutorService - Sending 
request 1344471693: PUT 
http://x.x.x.x:8091/v1/AUTH_0909ac10e7024847b1a9fe9787c7de8f/arctestMP HTTP/1.1
10:21:52.356 [main] DEBUG jclouds.headers - >> PUT 
http://x.x.x.x:8091/v1/AUTH_0909ac10e7024847b1a9fe9787c7de8f/arctestMP HTTP/1.1
10:21:52.356 [main] DEBUG jclouds.headers - >> Accept: application/json
10:21:52.357 [main] DEBUG jclouds.headers - >> X-Auth-Token: 
fd72b74db90c46cabcca3f317d5a09d4
10:21:53.129 [main] DEBUG o.j.h.i.JavaUrlHttpCommandExecutorService - Receiving 
response 1344471693: HTTP/1.1 201 Created
10:21:53.129 [main] DEBUG jclouds.headers - << HTTP/1.1 201 Created
10:21:53.129 [main] DEBUG jclouds.headers - << Date: Fri, 03 Feb 2017 04:51:53 
GMT
10:21:53.129 [main] DEBUG jclouds.headers - << X-Trans-Id: 
tx83ba6249347c43c99bb41-0058940c68
10:21:53.129 [main] DEBUG jclouds.headers - << Connection: keep-alive
10:21:53.129 [main] DEBUG jclouds.headers - << Content-Type: text/html; 
charset=UTF-8
10:21:53.129 [main] DEBUG jclouds.headers - << Content-Length: 0    
---> Container Creation Successful
10:21:53.373 [user thread 1] DEBUG o.j.rest.internal.InvokeHttpMethod - >> 
invoking object:put
10:21:53.373 [user thread 0] DEBUG o.j.rest.internal.InvokeHttpMethod - >> 
invoking object:put
10:21:53.374 [user thread 1] DEBUG o.j.h.i.JavaUrlHttpCommandExecutorService - 
Sending request 823625484: PUT 
http://x.x.x.x:8091/v1/AUTH_0909ac10e7024847b1a9fe9787c7de8f/arctestMP/arc/slo/1486097513.327000/0/33554432/0001
 HTTP/1.1
10:21:53.376 [user thread 0] DEBUG o.j.h.i.JavaUrlHttpCommandExecutorService - 
Sending request -1220101806: PUT 
http://x.x.x.x:8091/v1/AUTH_0909ac10e7024847b1a9fe9787c7de8f/arctestMP/arc/slo/1486097513.327000/0/33554432/
 HTTP/1.1
10:21:53.396 [user thread 1] DEBUG org.jclouds.http.internal.HttpWire - over 
limit 3145728/262144: wrote temp file
10:21:53.552 [user thread 0] DEBUG org.jclouds.http.internal.HttpWire - over 
limit 33554432/262144: wrote temp fileException in thread "main" 
com.google.common.util.concurrent.ExecutionError: java.lang.OutOfMemoryError: 
Java heap space
    at 
com.google.common.util.concurrent.Futures.wrapAndThrowUnchecked(Futures.java:1380)
    at com.google.common.util.concurrent.Futures.getUnchecked(Futures.java:1373)
    at 
org.jclouds.openstack.swift.v1.blobstore.RegionScopedSwiftBlobStore.putMultipartBlob(RegionScopedSwiftBlobStore.java:650)
    at 
org.jclouds.openstack.swift.v1.blobstore.RegionScopedSwiftBlobStore.putMultipartBlob(RegionScopedSwiftBlobStore.java:628)
    at 
org.jclouds.openstack.swift.v1.blobstore.RegionScopedSwiftBlobStore.putBlob(RegionScopedSwiftBlobStore.java:274)
    at jclouds20.App.main(App.java:83)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at java.lang.StringBuilder.ensureCapacityImpl(StringBuilder.java:342)
    at java.lang.StringBuilder.append(StringBuilder.java:208)
    at org.jclouds.logging.internal.Wire.wire(Wire.java:68)
    at org.jclouds.logging.internal.Wire.copy(Wire.java:99)
    at org.jclouds.logging.internal.Wire.output(Wire.java:176)
    at org.jclouds.logging.internal.Wire.output(Wire.java:143)
    at org.jclouds.http.HttpUtils.wirePayloadIfEnabled(HttpUtils.java:296)
    at 
org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:97)
    at 
org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90)
    at 
org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73)
    at 
org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44)
    at 
org.jclouds.reflect.FunctionalReflection$FunctionalInvocationHandler.handleInvocation(FunctionalReflection.java:117)
    at 
com.google.common.reflect.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:87)
    at com.sun.proxy.$Proxy68.

Re: Accessing blob-store by private-key only

2016-10-25 Thread Archana C
Hi 

The provider is S3 compatible. How do we do HTTPS basic access authentication 
rather than credentials?
How do we disable HTTP basic authentication and use only certificates for 
authentication?
Regards
Archana

On Wednesday, 26 October 2016, 10:18, Andrew Gaul  wrote:
 

 Can you provide more details on your use case, e.g., which provider?
All providers use an identity and credential.  Long ago someone asked
about HTTP basic access authentication which we do not support but
should be easy to add.

On Tue, Oct 25, 2016 at 07:09:52AM +, Archana C wrote:
> Hi 
> 
>     Is there any way to authenticate blob store using private key alone, 
> instead of passing credentials(identity, key) ?    Does jclouds support that 
> kind of authentication ?
> 
> RegardsArchana

-- 
Andrew Gaul
http://gaul.org/


   

Accessing blob-store by private-key only

2016-10-25 Thread Archana C
Hi 

    Is there any way to authenticate to a blob store using a private key alone, 
instead of passing credentials (identity, key)? Does jclouds support that 
kind of authentication?

Regards
Archana


Re: requested location eu-central-1, which is not in the configured locations

2016-03-08 Thread Archana C
 From the Amazon documentation, we understand that the region eu-central-1 
(Frankfurt) supports only AWS signature version 4, and from 
https://issues.apache.org/jira/browse/JCLOUDS-480 we understand that jclouds 
already has support for the version 4 signature.
What we are trying to figure out is *if* we need to do anything from the client 
side to change the authorization signature. We were hoping the BlobStore APIs 
would work as they used to without changing the client code; however, we see that 
the requests sent are using the V2 signature, and the server is rejecting them.
2016-03-08 14:57:57,044 DEBUG [jclouds.wire] [main] >> "Test[\n]"
2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> PUT https://testcontainer3.s3-eu-central-1.amazonaws.com/file1 HTTP/1.1
2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Expect: 100-continue
2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Host: testcontainer3.s3-eu-central-1.amazonaws.com
2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Date: Tue, 08 Mar 2016 09:27:50 GMT
2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Authorization: AWS AKIAISCW6DRRITWR6IWQ:6AndVHQV2w75OXQDq/9sWt37KN0=
2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Content-Type: application/unknown
2016-03-08 14:57:57,045 DEBUG [jclouds.headers] [main] >> Content-Length: 5
org.jclouds.http.HttpResponseException: Server rejected operation connecting to 
PUT https://testcontainer3.s3-eu-central-1.amazonaws.com/file1 HTTP/1.1
    at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:118)
    at org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90)
    at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73)
    at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44)
Any help on how to use the AWS version 4 signature for the blobstore APIs will 
be much appreciated. 
NB: With the last commit mentioned in the above reply, we do not see the region 
not supported exception anymore. 
Thanks!
Regards
Archana
 

On Tuesday, 8 March 2016, 10:54, Andrew Gaul  wrote:
 

 Please test again with the latest master which includes a fix:

https://git-wip-us.apache.org/repos/asf?p=jclouds.git;a=commit;h=c18371a7

On Mon, Mar 07, 2016 at 12:29:59PM +, Archana C wrote:
> public class App 
> {
>     public static void main( String[] args ) throws IOException
>     {
>         // TODO Auto-generated method stub
>                 // TODO Auto-generated method stub
>                 String containername = "archanatrial12";
>                 String objectname = "object1";
>                 String tempFile = 
> "/home/archana/Eclipse/trialV42/src/main/java/trialV41/trialV42/result.txt";
>                 //int length;
>                 
>                 // s3.amazonaws.com   s3.eu-central-1.amazonaws.com   
> s3-external-1.amazonaws.com
>                         
>             BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
>                         .credentials("XXX", "YYY")
>                         .buildView(BlobStoreContext.class);
>             
>                 // Access the BlobStore
>                 BlobStore blobStore = context.getBlobStore();
>                 //Location loc = "us-east-1";
>                 Location loc = new 
> LocationBuilder().scope(LocationScope.REGION)
>     .id("eu-central-1")
>     .description("region")
>     .build();
> 
>                 // Create a Container
>                 blobStore.createContainerInLocation(loc, containername);
> 
>                 // Create a Blob
>                 File input = new 
> File("/home/archana/Eclipse/jclouds1/src/main/java/jclouds1/sample.txt");
>                 long length = input.length();
>                 // Add a Blob
>                 Blob blob = 
> blobStore.blobBuilder(objectname).payload(Files.asByteSource(input)).contentLength(length)
>                 .contentDisposition(objectname).build();
> 
>                 // Upload the Blob
>                 String eTag = blobStore.putBlob(containername, blob);
>                 System.out.println(eTag);}}
> Error  : requested location eu-central-1, which is not in the configured 
> locations
> Solution to rectify the issue required
> 
> RegardsArchana

-- 
Andrew Gaul
http://gaul.org/


  

Re: requested location eu-central-1, which is not in the configured locations

2016-03-07 Thread Archana C
jclouds version 2.0.0-SNAPSHOT
Regards
Arc
 
On Mon, Mar 7, 2016 at 7:04 PM, Ignasi Barrera wrote:
That region is supported since jclouds 1.9.0. Which version are you using?

On 7 March 2016 at 13:29, Archana C  wrote:
> public class App
> {
>    public static void main( String[] args ) throws IOException
>    {
>        // TODO Auto-generated method stub
>                // TODO Auto-generated method stub
>                String containername = "archanatrial12";
>                String objectname = "object1";
>                String tempFile =
> "/home/archana/Eclipse/trialV42/src/main/java/trialV41/trialV42/result.txt";
>                //int length;
>
>                // s3.amazonaws.com  s3.eu-central-1.amazonaws.com
> s3-external-1.amazonaws.com
>
>            BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
>                        .credentials("XXX", "YYY")
>                        .buildView(BlobStoreContext.class);
>
>                // Access the BlobStore
>                BlobStore blobStore = context.getBlobStore();
>                //Location loc = "us-east-1";
>                Location loc = new
> LocationBuilder().scope(LocationScope.REGION)
>                        .id("eu-central-1")
>                        .description("region")
>                        .build();
>
>                // Create a Container
>                blobStore.createContainerInLocation(loc, containername);
>
>                // Create a Blob
>                File input = new
> File("/home/archana/Eclipse/jclouds1/src/main/java/jclouds1/sample.txt");
>                long length = input.length();
>                // Add a Blob
>                Blob blob =
> blobStore.blobBuilder(objectname).payload(Files.asByteSource(input)).contentLength(length)
>                        .contentDisposition(objectname).build();
>
>                // Upload the Blob
>                String eTag = blobStore.putBlob(containername, blob);
>                System.out.println(eTag);
> }
> }
>
> Error  : requested location eu-central-1, which is not in the configured
> locations
>
> Solution to rectify the issue required
>
> Regards
> Archana
  


requested location eu-central-1, which is not in the configured locations

2016-03-07 Thread Archana C
public class App 
{
    public static void main( String[] args ) throws IOException
    {
        // TODO Auto-generated method stub
                // TODO Auto-generated method stub
                String containername = "archanatrial12";
                String objectname = "object1";
                String tempFile = 
"/home/archana/Eclipse/trialV42/src/main/java/trialV41/trialV42/result.txt";
                //int length;
                
                // s3.amazonaws.com   s3.eu-central-1.amazonaws.com   
s3-external-1.amazonaws.com
                        
            BlobStoreContext context = ContextBuilder.newBuilder("aws-s3")
                        .credentials("XXX", "YYY")
                        .buildView(BlobStoreContext.class);
            
                // Access the BlobStore
                BlobStore blobStore = context.getBlobStore();
                //Location loc = "us-east-1";
                Location loc = new LocationBuilder().scope(LocationScope.REGION)
    .id("eu-central-1")
    .description("region")
    .build();

                // Create a Container
                blobStore.createContainerInLocation(loc, containername);

                // Create a Blob
                File input = new 
File("/home/archana/Eclipse/jclouds1/src/main/java/jclouds1/sample.txt");
                long length = input.length();
                // Add a Blob
                Blob blob = 
blobStore.blobBuilder(objectname).payload(Files.asByteSource(input)).contentLength(length)
                .contentDisposition(objectname).build();

                // Upload the Blob
                String eTag = blobStore.putBlob(containername, blob);
                System.out.println(eTag);
    }
}
Error: requested location eu-central-1, which is not in the configured 
locations.
A solution to rectify this issue is required.

Regards
Archana