https://bugs.kde.org/show_bug.cgi?id=178678

Pedro V <voidpointertonull+bugskde...@gmail.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |voidpointertonull+bugskdeor
                   |                            |g...@gmail.com
             Status|REPORTED                    |CONFIRMED
     Ever confirmed|0                           |1

--- Comment #103 from Pedro V <voidpointertonull+bugskde...@gmail.com> ---
(In reply to Germano Massullo from comment #101)
> I think it's the client system triggering too much I/O on the server because
> it tries to retrieve as much data as possible from the remote folders. This
> is not happening when using Krusader

Krusader is still not immune to such issues, as others mentioned earlier; it
just fetches significantly less information than Dolphin, which with the
default configuration even descends into subdirectories, so the amount of I/O
can get really excessive.

(In reply to Harald Sitter from comment #70)
> Alas, can't reproduce.

The key factor, which was mentioned here already, is high latency; that's a
significant problem elsewhere in KDE too, mostly because:
- A whole lot of I/O operations are done one by one, and with high latency that
becomes really obvious. A simple example is watching the SFTP KIO slave deal
with a directory containing symbolic links over a high-latency connection: an
strace on sshd shows the stat(x) calls being issued rather slowly, each paying
the full latency penalty (see the timing sketch right after this list).
- Apparently there's no progressive file listing, and retrieving the
information blocks the GUI.
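
To put a rough, back-of-the-envelope number on the serial round trips: at a
50 ms round trip time, 30k strictly sequential stat calls alone already add up
to about 30000 * 0.05 s = 25 minutes. A minimal timing sketch in Python (my own
illustration, not KIO code; point it at any directory on a slow sshfs/NFS
mount) shows how the per-entry calls dominate the bulk listing:

    import os, sys, time

    path = sys.argv[1] if len(sys.argv) > 1 else "."

    t0 = time.monotonic()
    names = os.listdir(path)              # one bulk directory read
    t1 = time.monotonic()
    for name in names:                    # one round trip per entry,
        try:                              # similar to what the listing does
            os.lstat(os.path.join(path, name))
        except OSError:
            pass                          # entry may have vanished meanwhile
    t2 = time.monotonic()

    print(f"entries: {len(names)}  listdir: {t1 - t0:.2f}s  "
          f"per-entry lstat: {t2 - t1:.2f}s")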

Theoretically this doesn't even need networking to reproduce; it's just easier
that way, as the network adds more latency and I suspect that no helpful I/O
scheduler can get in the way of producing it.

Currently I can experience this with high latency caused not by the network but
by accessing an HDD over NFS while it's under heavy load, and not just from the
test host, which definitely makes it worse; hammering it from just one host
already makes the experience bad.
Do note that caching definitely gets in the way of reproducing the issue, so
I'll address that.
Given the previously mentioned conditions, I'm looking at a directory with
30k+ files where new files are slowly being created (a rough generator sketch
follows the list below). I didn't measure the first listing attempt, but that's
likely not the best case anyway, so let's assume a hot cache, which gives the
following experiences:
- `ls -la`: <1 s, reasonably fast
- Krusader: <2 s, still pretty decent, although the files at the top don't
change, and starting to scroll makes the UI unresponsive. One large scroll with
the mouse and it's just gone for some time, although still only for seconds.
- Dolphin: ? s. At one point it starts showing the files, but due to the
occasional creation of new files it never becomes usable, although it does show
changes occasionally.
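
For reference, here is a rough Python sketch of how such a directory could be
set up (the mount point is just a placeholder, not my actual setup; adjust it
to whatever slow storage is available):

    import os, time

    # Placeholder path: any directory on the loaded HDD/NFS mount.
    target = "/mnt/slow/testdir"
    os.makedirs(target, exist_ok=True)

    # Pre-create a large directory...
    for i in range(30000):
        open(os.path.join(target, f"file_{i:06d}"), "w").close()

    # ...then keep slowly adding files so the listing keeps changing
    # while Dolphin/Krusader are looking at it.
    i = 30000
    while True:
        open(os.path.join(target, f"file_{i:06d}"), "w").close()
        i += 1
        time.sleep(1)                     # roughly one new file per second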

There should be quite a few ways of reproducing this even with, let's say, a
local HDD being scrubbed or, worse, defragmented while testing.
The tricky part is that without file changes various caching strategies and
even the I/O scheduler are likely to get in the way (see the cache-dropping
note after this list), but with file changes other bugs may be at play too:
- At least with Krusader, the tracking of directory contents tends to fall
apart after heavy I/O and stays broken until reboot. This most commonly affects
NFS mounts for me, but it has happened multiple times after handling
directories with a ton of files. What I tend to notice is that not all deleted
files disappear from the list. Not sure how related it is to this issue, but
mentioning it as it may matter.
- Quite rare, but just recently I had gam_server pegging a core, with Krusader
staying unresponsive until gam_server got killed. I'm not really familiar with
Gamin; I'm not even sure whether it's actually needed or I'd be better off
removing it, as it's apparently optional, but reading around, it seems to be a
troublemaker for others too, which could mess with testing.
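
Regarding the caching that gets in the way of reproducing: on Linux the page
cache, dentries and inodes can be dropped between runs so that each listing
really has to hit the slow device again. A small sketch of that (needs root;
this is the generic kernel knob, nothing KIO specific, and it doesn't cover
client-side NFS attribute caching):

    import subprocess

    # Flush dirty pages first, then drop page cache, dentries and inodes.
    subprocess.run(["sync"], check=True)
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")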

-- 
You are receiving this mail because:
You are watching all bug changes.
