On Sun, 14 May 2017 02:59:41 +0100, lee <l...@yagibdah.de> wrote:

> Kai Krakow <hurikha...@gmail.com> writes:
> 
> > On Sat, 29 Apr 2017 20:38:24 +0100, lee <l...@yagibdah.de> wrote:
> >  
> >> Kai Krakow <hurikha...@gmail.com> writes:
> >>   
>  [...]  
>  [...]  
>  [...]  
> >> 
> >> Yes, I'm using it mostly for backups/copies.
> >> 
> >> The problem is that ftp is ideal for the purpose, yet users find it
> >> too difficult to use, and nobody uses it.  So there must be
> >> something else as good or better which is easier to use and which
> >> people do use.  
> >
> > Well, I don't see how FTP is declining, except that it is
> > unencrypted. You can still use FTP with TLS handshaking; most sites
> > should support it these days, but almost none enforces correct
> > certificates because it is usually set up incorrectly on the server
> > side (by giving you ftp.yourdomain.tld as the hostname instead of
> > ftp.hostingprovider.tld, which the TLS cert has been issued for).
> > That makes it rather pointless to use. On Linux, lftp is one of the
> > few FTP clients supporting TLS out of the box by default, plus it
> > enforces correct certificates.  
> 
> These certificates are a very stupid thing.  They are utterly
> complicated, you have to self-sign them, which produces warnings, and
> they require the host name to be embedded in them, as if the host
> weren't known by several different names.

Use Let's Encrypt then; as far as I know, you can add any number of
host names to a single certificate. But you need a temporary web
server to prove ownership of the server/hostname before the
certificate is signed.
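
For example, with certbot's standalone mode (the host names here are
made up, and I'm assuming port 80 on the machine is reachable from
outside):

    # certbot starts its own temporary web server on port 80 to answer
    # the ACME challenge, then writes the signed certificate to
    # /etc/letsencrypt/live/<first domain>/
    certbot certonly --standalone \
        -d ftp.example.com -d www.example.com -d example.com

One certificate then covers all the listed names, so the hostname
mismatch problem described above goes away.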

> > But I found FTP to be extra slow on small files, which is why I
> > suggested using rsync instead. That means, where you could use
> > sftp (ssh+ftp), you can usually also use ssh+rsync, which is
> > faster.  
> 
> That requires shell access.
> 
> What do you consider "small files"?  I haven't observed a slowdown
> like that, but I haven't been looking for it, either.

Transfer 10000 smallish files (like web assets or PHP files) to a
server with FTP, then try rsync. You should see a very big difference
in the time needed. That's due to FTP's connection overhead: every
single file needs its own data connection set up and torn down, while
rsync streams everything over one connection.
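
As a rough sketch (host, user, and paths are made up):

    # Upload with lftp's mirror command: one data connection per file
    lftp -u user -e "mirror -R ./site /var/www/site; quit" ftp.example.com

    # Upload with rsync over ssh: a single connection for everything
    rsync -az ./site/ user@example.com:/var/www/site/

On thousands of small files, the rsync run typically finishes in a
fraction of the time.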

> > There's also the mirror command in lftp, which can be pretty fast,
> > too, on incremental updates but still much slower than rsync.
> >  
> >> I don't see how they would transfer files without ftp when ftp is
> >> the ideal solution.  
> >
> > You simply don't. FTP is still there and used. If you see something
> > like "sftp" (ssh+ftp, not ftp+ssl, which I would refer to as ftps),
> > this is usually only ftp wrapped into ssh for security reasons. It's
> > just using ftp through a tunnel, but at its core it's the ftp
> > protocol. In the end, it's not much different from scp, as ftp is
> > really just a special shell with some special commands to set up a
> > file transfer channel that's not prone to interacting with terminal
> > escape sequences in whatever way those may be implemented,
> > something that e.g. rzsz needs to work around.
> >
> > In the early BBS days, when you couldn't establish a second
> > transfer channel the way FTP does using TCP, you had to send
> > special escape sequences to put the terminal into file transfer
> > mode, and then send the file. Back then, you used rzsz from the
> > remote shell to initiate a file transfer. This is closer to the way
> > scp implements a file transfer behind the scenes.  
> 
> IIRC, I used xmodem or something like that back then, and rzsz never
> worked.

Yes, or xmodem... ;-)

> > FTP also added some nice features like site-to-site transfers where
> > the data endpoints both are on remote sites, and your local site
> > only is the control channel. This directly transfers data from one
> > remote site to another without going through your local connection
> > (which may be slow due to the dial-up nature of most customer
> > internet connections).  
> 
> Interesting, I didn't know that.  How do you do that?

You need a client that supports this. I remember LeechFTP for Windows
supported it back then. The client needs to log in to both FTP servers
and then pass the right PORT commands between them, so that the data
connection is established directly between the two.
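
Roughly, the control-channel dialogue looks like this (addresses made
up; this is the classic "FXP" trick):

    client -> server A: PASV
    server A -> client: 227 Entering Passive Mode (192,0,2,10,19,137)
    client -> server B: PORT 192,0,2,10,19,137
    server B -> client: 200 PORT command successful
    client -> server A: STOR file.bin   (A waits on its passive port)
    client -> server B: RETR file.bin   (B connects to A and sends)

The file then flows directly from server B to server A; only the short
control commands pass through the client's slow link.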

That feature is also the reason why FTP looks so overly complicated
and works so badly with firewalls. When FTP was designed, there was a
real need to transfer files directly between servers, as your own
connection was usually a slow modem link below 2400 baud, or some
other slow connection. Or even one that wouldn't transfer binary data
at all...

> > Also, FTP is able to stream multiple files in a single connection
> > for transferring many small files, by using tar as the transport
> > format, thus reducing the overhead of establishing a new
> > connection per file. However, I know of only a few clients that
> > support that, and even fewer servers that would work with it.
> >
> > FTP can be pretty powerful, as you see. It's just a victim of poor
> > implementation in most FTP clients, which makes it feel like it has
> > mostly declined. If wrapped into a more secure tunnel (TLS, ssh),
> > FTP is still a very good choice for transferring files, though not
> > the most efficient. Depending on your use case, you are much better
> > off using more efficient protocols like rsync.  
> 
> So there isn't a better solution than ftp.  That's good to know
> because I can say there isn't a better solution, and if people don't
> want to use it, they can send emails or DVDs.

It depends... It's a simple, well-supported protocol, easy to implement
on both the server and the client side. It's probably not the most
efficient one, but it works. And that's what counts.

Other, more modern protocols may work much better, have a richer
feature set, and be easy to use on the client side, too. But due to
the richer feature set, bigger attack surface, etc., they are usually
much more complicated to implement correctly on the server side. Look
at HTTPS with HTTP/1.1: it supports all sorts of things... encryption,
range transfers, resume, uploads, downloads, authentication (many
different implementations), you can even transfer checksums to see if
the files match... You need to implement all of this in the server to
be compliant, even if the client doesn't care. And it needs constant
patching with security updates because it is a big piece of software.
A simple FTP server, by contrast, is usually already secure by pure
age; most of its security holes were found and fixed long ago.
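
The resume feature alone already means the server has to implement
byte ranges correctly; on the client side it's a one-liner (the URL is
made up):

    # -C - tells curl to check the partial file on disk and send a
    # matching Range header; the server must answer with
    # 206 Partial Content for the resume to work.
    curl -C - -O https://example.com/big.iso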

So your conclusion that "there isn't a better solution than ftp"
doesn't hold on its own. It may be the simplest solution for your use
case. But it's definitely not the best solution for transferring files
if you look at security, safety, or efficiency.


-- 
Regards,
Kai

Replies to list-only preferred.

