Hello Dimuthu,

We had just started looking into Azure and GCS. Since Azure is done, we will 
take up and explore GCS.

Thank you for the update.

Thank you
Aravind Ramalingam

> On Apr 16, 2020, at 00:30, DImuthu Upeksha <[email protected]> wrote:
> 
> 
> Aravind,
> 
> I'm not sure whether you have made any progress on the Azure transport yet. I 
> got a chance to look into that [6]. Let me know if you are working on GCS or 
> any other transport so that I can plan ahead. Next, I will be focusing on the 
> Box transport.
> 
> [6] 
> https://github.com/apache/airavata-mft/commit/013ed494eb958990d0a6f90186a53103e1237bcd
> 
> Thanks
> Dimuthu
> 
>> On Mon, Apr 6, 2020 at 5:19 PM Aravind Ramalingam <[email protected]> wrote:
>> Hi  Dimuthu,
>> 
>> Thank you for the update. We will look into it and get an idea of how the 
>> system works.
>> We were hoping to try an implementation for GCS; we will also look into 
>> Azure.
>> 
>> Thank you
>> Aravind Ramalingam
>> 
>>> On Mon, Apr 6, 2020 at 4:44 PM DImuthu Upeksha <[email protected]> 
>>> wrote:
>>> Aravind,
>>> 
>>> Here [2] is the complete commit for the S3 transport implementation, but 
>>> don't be confused by the amount of changes, as it includes both the 
>>> transport implementation and the service backend implementations. If you 
>>> need to implement a new transport, you need to implement a Receiver, a 
>>> Sender, and a MetadataCollector like this [3]. Then you need to add support 
>>> for that resource type to the Resource service and the Secret service 
>>> [4] [5]. You can do the same for Azure. A sample SCP -> S3 transfer request 
>>> looks like the one below. Hope that helps.
>>> 
>>> // Resource IDs and credential tokens registered in the Resource and Secret services
>>> String sourceId = "remote-ssh-resource";
>>> String sourceToken = "local-ssh-cred";
>>> String sourceType = "SCP";
>>> String destId = "s3-file";
>>> String destToken = "s3-cred";
>>> String destType = "S3";
>>> 
>>> // Build the transfer request: copy the SCP source resource to the S3 destination
>>> TransferApiRequest request = TransferApiRequest.newBuilder()
>>>         .setSourceId(sourceId)
>>>         .setSourceToken(sourceToken)
>>>         .setSourceType(sourceType)
>>>         .setDestinationId(destId)
>>>         .setDestinationToken(destToken)
>>>         .setDestinationType(destType)
>>>         .setAffinityTransfer(false).build();
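>>> 
>>> If it helps, the request is then submitted through the MFT API's gRPC stub. 
>>> The sketch below is only from memory, so treat the stub class, method, and 
>>> port as assumptions and check the api module in your checkout for the exact 
>>> names:
>>> 
>>> // Sketch only: the stub/response class names and the API port are
>>> // assumptions, not verified against the current code base.
>>> import io.grpc.ManagedChannel;
>>> import io.grpc.ManagedChannelBuilder;
>>> 
>>> ManagedChannel channel = ManagedChannelBuilder
>>>         .forAddress("localhost", 7004) // assumed MFT API service port
>>>         .usePlaintext()
>>>         .build();
>>> 
>>> MFTApiServiceGrpc.MFTApiServiceBlockingStub client =
>>>         MFTApiServiceGrpc.newBlockingStub(channel);
>>> TransferApiResponse response = client.submitTransfer(request);
>>> System.out.println("Submitted transfer " + response.getTransferId());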
>>> 
>>> [2] 
>>> https://github.com/apache/airavata-mft/commit/62fae3d0ab2921fa8bf0bea7970e233f842e6948
>>> [3] 
>>> https://github.com/apache/airavata-mft/tree/master/transport/s3-transport/src/main/java/org/apache/airavata/mft/transport/s3
>>> [4] 
>>> https://github.com/apache/airavata-mft/blob/master/services/resource-service/stub/src/main/proto/ResourceService.proto#L90
>>> [5] 
>>> https://github.com/apache/airavata-mft/blob/master/services/secret-service/stub/src/main/proto/SecretService.proto#L45
>>> 
>>> Thanks
>>> Dimuthu
>>> 
>>> 
>>>> On Sun, Apr 5, 2020 at 12:10 AM DImuthu Upeksha 
>>>> <[email protected]> wrote:
>>>> There is a working S3 transport in my local copy. Will commit it once I 
>>>> test it out properly. You can follow the same pattern for any cloud 
>>>> provider which has clients with streaming IO; a rough sketch of what that 
>>>> looks like for GCS is below. Streaming among different transfer protocols 
>>>> inside an Agent has been discussed in the last part of this [1] document. 
>>>> Try to get the conceptual idea from that and reverse engineer the SCP 
>>>> transport. 
>>>> 
>>>> [1] 
>>>> https://docs.google.com/document/d/1zrO4Z1dn7ENhm1RBdVCw-dDpWiebaZEWy66ceTWoOlo
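>>>> 
>>>> For GCS specifically, the google-cloud-storage Java client exposes 
>>>> channel-based streaming reads and writes, which is the property that makes 
>>>> this pattern work. The sketch below only shows the cloud-client side 
>>>> (bucket, object names and file paths are placeholders, and it does not 
>>>> show MFT's own transport interfaces):
>>>> 
>>>> import com.google.cloud.storage.Blob;
>>>> import com.google.cloud.storage.BlobId;
>>>> import com.google.cloud.storage.BlobInfo;
>>>> import com.google.cloud.storage.Storage;
>>>> import com.google.cloud.storage.StorageOptions;
>>>> 
>>>> import java.io.FileInputStream;
>>>> import java.io.FileOutputStream;
>>>> import java.io.InputStream;
>>>> import java.io.OutputStream;
>>>> import java.nio.channels.Channels;
>>>> 
>>>> Storage storage = StorageOptions.getDefaultInstance().getService();
>>>> 
>>>> // Streaming download: read the object through a channel instead of
>>>> // staging the whole file first.
>>>> Blob blob = storage.get(BlobId.of("my-bucket", "outputs/result.dat"));
>>>> try (InputStream in = Channels.newInputStream(blob.reader());
>>>>      OutputStream out = new FileOutputStream("/tmp/result.dat")) {
>>>>     in.transferTo(out); // in a transport, 'out' would be the other protocol's stream
>>>> }
>>>> 
>>>> // Streaming upload: write into a channel fed by another protocol's read stream.
>>>> BlobInfo target = BlobInfo.newBuilder(BlobId.of("my-bucket", "outputs/copy.dat")).build();
>>>> try (OutputStream out = Channels.newOutputStream(storage.writer(target));
>>>>      InputStream in = new FileInputStream("/tmp/result.dat")) {
>>>>     in.transferTo(out);
>>>> }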
>>>> 
>>>> Dimuthu
>>>> 
>>>>> On Sat, Apr 4, 2020 at 9:22 PM Aravind Ramalingam <[email protected]> 
>>>>> wrote:
>>>>> Hello, 
>>>>> 
>>>>> We were looking at the existing code in the project. We could find 
>>>>> implementations only for local copy and SCP.
>>>>> We were confused about how to go about supporting an external provider 
>>>>> like S3 or Azure, since it would require integrating with their 
>>>>> respective clients. 
>>>>> 
>>>>> Thank you
>>>>> Aravind Ramalingam
>>>>> 
>>>>> > On Apr 4, 2020, at 21:15, Suresh Marru <[email protected]> wrote:
>>>>> > 
>>>>> > Hi Aravind,
>>>>> > 
>>>>> > I have to catch up with the code, but you may want to look at the S3 
>>>>> > implementation and extend it to Azure, GCP or other cloud services like 
>>>>> > Box, Dropbox and so on. 
>>>>> > 
>>>>> > There could be many use cases; here is an idea:
>>>>> > 
>>>>> > * Run a compute job on a supercomputer with SCP access and push the 
>>>>> > outputs to cloud storage. 
>>>>> > 
>>>>> > Suresh
>>>>> > 
>>>>> >> On Apr 4, 2020, at 8:09 PM, Aravind Ramalingam <[email protected]> 
>>>>> >> wrote:
>>>>> >> 
>>>>> >> Hello,
>>>>> >> 
>>>>> >> We set up the MFT project on a local system and tested out an SCP 
>>>>> >> transfer between JetStream VMs; we were wondering how the support can 
>>>>> >> be extended to AWS/GCS.
>>>>> >> 
>>>>> >> As per our understanding, the current implementation has support for 
>>>>> >> two protocols, i.e. local-transport and scp-transport. Would we have to 
>>>>> >> modify/add to the code base to extend support for AWS/GCS clients? 
>>>>> >> 
>>>>> >> Could you please provide suggestions for this use case? 
>>>>> >> 
>>>>> >> Thank you
>>>>> >> Aravind Ramalingam
>>>>> > 
