The hub-side filter requester has been improved according to
https://adc.sourceforge.io/ADC-EXT.html#_implementation_notes
The fix does not solve the problem entirely, but it reduces the number of edge
cases where no Bloom filter update is requested.
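Roughly, the improved hub-side behaviour can be pictured with the sketch
below (names like LastInfo and shouldRequestFilter are invented for
illustration; this is not the actual ADCH++ Bloom plugin code): the hub
remembers the last SS/SF pair each client reported and asks for a fresh
filter whenever either value changes.

#include <cstdint>
#include <map>
#include <string>

// Hypothetical sketch, not the actual ADCH++ Bloom plugin code:
// remember the last share numbers each client reported and request a
// fresh filter whenever either of them changes.
struct LastInfo {
    int64_t shareSize = -1;   // SS field of the last INF seen
    int64_t shareFiles = -1;  // SF field of the last INF seen
};

static std::map<std::string, LastInfo> lastSeen; // keyed by client SID

// Called for every INF the hub receives; returns true when a new
// Bloom filter should be requested from that client.
bool shouldRequestFilter(const std::string& sid, int64_t ss, int64_t sf) {
    LastInfo& prev = lastSeen[sid];
    const bool changed = (prev.shareSize != ss) || (prev.shareFiles != sf);
    prev.shareSize = ss;
    prev.shareFiles = sf;
    return changed;
}

The remaining edge case is exactly the one described below: when neither SS
nor SF differs from the previously reported values, no request is made.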
Regarding the client side:
Some clients, AirDC++ for example, do not report the updated share numbers
immediately after a manual share refresh at all; they do it only with the
next minutely info sending event. With these clients, gigabytes of files
updated in a share refresh can go entirely unnoticed by the hub's filter
requester.
From now on, if the size of at least one file changes during a share update,
the hub-side fix already works. It is still not 100%, though; think of mass
fixed-size metadata updates, such as MP3 ID3v1 tags.
So, as per the implementation notes above, fixes in the clients' code are
also required.
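On the client side, one possible shape of such a fix (a sketch only, with
invented names like onRefreshHashingDone; this is not the actual DC++ or
AirDC++ code) would be to push the updated numbers to the hubs as soon as
the hashing triggered by a manual refresh has finished, instead of waiting
for the minutely timer:

#include <cstdint>
#include <functional>

// Hypothetical client-side sketch, not the actual DC++/AirDC++ code:
// push the new share numbers to every connected hub as soon as the
// hashing started by a manual refresh has finished, instead of waiting
// for the minutely info timer.
struct ShareStats {
    int64_t size;   // total shared bytes (SS)
    int64_t files;  // total shared files (SF)
};

// Assumed callback that queues an INF with the given numbers on all
// connected hubs' sockets.
using SendInfoFn = std::function<void(const ShareStats&)>;

void onRefreshHashingDone(const ShareStats& current, ShareStats& lastSent,
                          const SendInfoFn& sendInfo) {
    // Send only if something actually changed; if both totals are
    // identical (the fixed-size metadata case above), the hub cannot
    // detect the update from the INF alone anyway.
    if (current.size != lastSent.size || current.files != lastSent.files) {
        sendInfo(current);
        lastSent = current;
    }
}

Even with this, a refresh that leaves both totals unchanged (the metadata
case above) still goes unnoticed by the hub.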
** Project changed: dcplusplus => adchpp
** Also affects: dcplusplus
Importance: Undecided
Status: New
** Changed in: adchpp
Status: New => Fix Committed
** Also affects: airdcpp
Importance: Undecided
Status: New
** Changed in: dcplusplus
Status: New => Confirmed
** Changed in: dcplusplus
Importance: Undecided => Medium
--
You received this bug notification because you are a member of
Dcplusplus-team, which is subscribed to DC++.
https://bugs.launchpad.net/bugs/2110291
Title:
One time small updates in the share may not trigger a Bloom filter
update request which makes such updated files unsearchable by TTH for
other hub users
Status in ADCH++:
Fix Committed
Status in AirDC++:
New
Status in DC++:
Confirmed
Bug description:
There is a possible scenario where other users logged into the same ADCH++
hub with Bloom filter support may not receive search results (by TTH) for
one or more updated files after a manual share refresh in DC++, until the
user refreshes the share once more or reconnects to the hub.
The problem is consistently reproducible after one or a few files get
updated and the share is refreshed, provided the overall size of the changed
files is relatively small.
To reproduce it, update already shared file(s) with different content, or
perform a similar number of file removals and additions to the share, then
manually refresh the share.
The cause of the issue is that sending INFs, like any other command, is not
instantaneous.
The function that compiles the INF command is placed into the async task
queue of all connected hubs' sockets, to be run when feasible.
If, for example, you update one small file and refresh the share, normally
that would result in sending SF = lastSF - 1 with the infoupdate() right after
the refresh.
Then, the hashing thread's TTHDone event handler updates the total number of
files after the file with the updated content has been hashed.
This change is then sent with the next scheduled infoupdate() (typically
minutely).
But... if the small updated file has already been hashed by the time the
hub's respective infoupdate() runs, then SF is already back at its previous
value (lastSF - 1 + 1 = lastSF) when the INF is compiled. The reported value
is correct, but since the hub never sees it change, the Bloom plugin won't
be signaled to request a filter update.
OTOH if the hasher's queue is empty before the share refresh, it will indeed
start working almost instantaneously, so if the total size of the updated
file(s) is small enough, it often wins the race, it seems.
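The race can be pictured with a toy example (invented names, not the actual
DC++ code); the value the queued infoupdate() reads depends entirely on
whether the hasher has finished by then:

#include <cstdint>
#include <iostream>

// Toy illustration with invented names, not the actual DC++ code:
// the outcome depends on whether the hasher finishes before the queued
// infoupdate() task gets to run.
int main() {
    int64_t lastSF = 1000;       // files reported in the previous INF
    int64_t currentSF = lastSF;

    currentSF -= 1;              // share refresh drops the replaced file
    // infoupdate() is queued here; it will read currentSF when it runs.

    const bool hasherWinsRace = true; // small file => hashed almost instantly
    if (hasherWinsRace)
        currentSF += 1;          // TTHDone re-adds the file before the task runs

    // The queued task finally runs and compiles the INF:
    if (currentSF != lastSF)
        std::cout << "SF changed -> hub requests a new Bloom filter\n";
    else
        std::cout << "SF unchanged -> no request; file unsearchable by TTH\n";
}

When the hasher wins, the value sent to the hub is identical to the
previously reported one, so from the hub's point of view nothing changed.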
The largest total updated file size to reproduce this depends on your
hardware.
It is higher with faster CPUs and storage, and also depends on how busy the
hub/socket is at the time.
On a system with a 100 MB/s HDD read speed and an i5-6600 CPU, the threshold
is about 15 MiB (roughly 0.15 s of hashing at that read speed). It could
easily be 10 times larger on modern hardware.
To manage notifications about this bug go to:
https://bugs.launchpad.net/adchpp/+bug/2110291/+subscriptions