On 30/04/2017 03:11, lee wrote:
> "Poison BL." <poiso...@gmail.com> writes:
> 
>> On Sat, Apr 29, 2017 at 3:24 PM, lee <l...@yagibdah.de> wrote:
>>
>>> Mick <michaelkintz...@gmail.com> writes:
>>>
>>>> On Tuesday 25 Apr 2017 16:45:37 Alan McKinnon wrote:
>>>>> On 25/04/2017 16:29, lee wrote:
>>>>>> Hi,
>>>>>>
>>>>>> since the usage of FTP seems to be declining, what is a replacement
>>>>>> which is at least as good as FTP?
>>>>>>
>>>>>> I'm aware that there's webdav, but that's very awkward to use and
>>>>>> missing features.
>>>>>
>>>>> Why not stick with ftp?
>>>>> Or, put another way, why do you feel you need to use something else?
>>>>>
>>>>> There's always dropbox
>>>>
>>>>
>>>> Invariably all web hosting ISPs offer ftp(s) for file upload/download.
>>>> If you pay a bit more you should be able to get ssh/scp/sftp too.
>>>> Indeed, many ISPs throw in scp/sftp access as part of their basic
>>>> package.
>>>>
>>>> Webdav(s) offers the same basic upload/download functionality, so I am
>>>> not sure what you find awkward about it, although I'd rather use lftp
>>>> instead of cadaver any day. ;-)
>>>>
>>>> As Alan mentioned, with JavaScript'ed web pages these days there are many
>>>> webapp'ed ISP offerings like Dropbox and friends.
>>>>
>>>> What is the use case you have in mind?
>>>
>>> transferring large amounts of data and automating the processing of at
>>> least some of it, without involving a 3rd party
>>>
>>> "Large amounts" can be "small" like 100MB --- or over 50k files in 12GB,
>>> or even more.  The mirror feature of lftp is extremely useful for such
>>> things.
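
As a minimal sketch of the kind of scripted, recursive fetch that lftp's
mirror command performs, something along these lines with Python's standard
ftplib would do; the host, credentials and paths are placeholders, and lftp
itself adds resume and parallel transfers that this sketch does not:

#!/usr/bin/env python3
# Recursively download a remote FTP tree, roughly what
# "lftp -e 'mirror /outgoing ./incoming; quit'" does.
# Host, credentials and paths are placeholders.

import os
from ftplib import FTP, error_perm

HOST, USER, PASSWD = "ftp.example.com", "user", "secret"   # placeholders
REMOTE_ROOT, LOCAL_ROOT = "/outgoing", "./incoming"        # placeholders

def mirror(ftp, remote_dir, local_dir):
    """Copy remote_dir and everything below it into local_dir."""
    os.makedirs(local_dir, exist_ok=True)
    ftp.cwd(remote_dir)
    # Note: some servers return full paths from NLST; adjust if needed.
    for name in ftp.nlst():
        if name in (".", ".."):
            continue
        child_remote = remote_dir.rstrip("/") + "/" + name
        child_local = os.path.join(local_dir, name)
        try:
            ftp.cwd(child_remote)          # only succeeds for directories
            is_dir = True
        except error_perm:
            is_dir = False
        ftp.cwd(remote_dir)
        if is_dir:
            mirror(ftp, child_remote, child_local)
            ftp.cwd(remote_dir)
        else:
            with open(child_local, "wb") as out:
                ftp.retrbinary("RETR " + name, out.write)

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWD)
    mirror(ftp, REMOTE_ROOT, LOCAL_ROOT)

Run it from cron or by hand whenever a new delivery is expected.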
>>>
>>> I wouldn't ever want to have to mess around with web pages to figure out
>>> how to do this.  Ftp is plain and simple.  So you see why I'm explicitly
>>> asking for a replacement which is at least as good as ftp.
>>>
>>>
>>> --
>>> "Didn't work" is an error.
>>>
>>>
>> Half-petabyte datasets aren't really something I'd personally *ever* trust
>> ftp with in the first place.
> 
> Why not?  (12GB are nowhere close to half a petabyte ...)
> 
>> That said, it depends entirely on the network
>> you're working with. Are you pushing this data in/out of the network your
>> machines live in, or are you working primarily internally? If internal,
>> what are the network-side capabilities you have? Since you're likely already
>> using something on the order of CEPH or Gluster to back the datasets where
>> they sit, just working with it all across the network from that storage
>> would be my first instinct.
> 
> The data would come in from suppliers.  There isn't really anything
> going on at the moment but fetching data once a month, which can be 100MB
> or 12GB or more.  That's because people don't use ftp ...

I have the opposite experience.
I have the devil's own time trying to convince people to NOT use ftp for
anything and everything under the sun that even remotely resembles
getting data from A to B... (especially things that are best done over a
message bus)

I'm still not sure I understand why you're asking. What you
describe looks like the ideal case for ftp:

- supplier pushes a file or files somewhere
- you fetch those files later at a suitable time

It looks like a classic producer/consumer scenario, and ftp or any of
its webby clones like dropbox is really still the best tool overall.
Plus it has the added benefit that no user needs extra software - all
OSes have ftp clients, even if it's just a browser.
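
To make the fetch side concrete, here is a minimal sketch using Python's
standard ftplib; the host, credentials and directory names are placeholders,
and a real job would add error handling and logging:

#!/usr/bin/env python3
# Consumer side of the scenario above: run from cron, pick up whatever
# the supplier has pushed since the last run.  Host, login and
# directory names are placeholders.

import os
from ftplib import FTP

HOST, USER, PASSWD = "ftp.example.com", "customer", "secret"  # placeholders
DROP_DIR = "/incoming"          # where the supplier uploads
LOCAL_DIR = "/srv/feeds"        # where we want the files locally
SEEN_FILE = "/srv/feeds/.seen"  # names fetched on earlier runs

seen = set()
if os.path.exists(SEEN_FILE):
    with open(SEEN_FILE) as f:
        seen = set(f.read().split())

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWD)
    ftp.cwd(DROP_DIR)
    for name in ftp.nlst():
        if name in (".", "..") or name in seen:
            continue
        with open(os.path.join(LOCAL_DIR, name), "wb") as out:
            ftp.retrbinary("RETR " + name, out.write)
        seen.add(name)

with open(SEEN_FILE, "w") as f:
    f.write("\n".join(sorted(seen)))

Cron it at whatever interval suits the deliveries and hand LOCAL_DIR to
whatever does the processing; the same skeleton works over ftps by swapping
FTP for ftplib.FTP_TLS, or over sftp with a different client library.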

-- 
Alan McKinnon
alan.mckin...@gmail.com

