I guess I should reply to this...

On Monday 12 July 2010 22:09:12 3BUIb3S50i wrote:
> o...@lkxpu0~cdv6dh0idyw4mbwkusgn~h~bs3qqvxyoxsay wrote :
> > The Seeker wrote:
> >
> >> On 7/11/2010 6:21 AM, o...@lkxpu0~cdv6dh0idyw4mbwkusgn~h~bs3qqvxyoxsay
> wrote:
> >>> joh...@6kzjmqcftzffej0wthb29r63t5jkjg2xy5hzsvitg1a wrote:
> >>>
> >>>> Matthew Toseland<t...@amphibian.dyndns.org>
> >>>>
> >>>> IMHO we should attempt to fix, or at least realistically work around,
> the two
> >>>> big known security issues for 0.8.0, and get a paper published at the
> same time
> >>>> as the release. These are:
> >>>> 1. The Pitch Black attack. Oskar has a good idea how to fix it but has
> not yet
> >>>> simulated a fix. This blocks publishing a paper, and it also prevents
> use of
> >>>> darknet anywhere where there may be internal attackers. As I understand
> it
> >>>> implementation should not be particularly difficult - the main work
> needed here
> >>>> is to implement it in a simple simulator and tweak it until it works,
> right?
> >>>> 2. The mobile attacker source tracing attack. What this means is an
> attacker
> >>>> knows what is to be inserted (or requested), and he is initially
> distant from
> >>>> the inserter. He recognises the blocks, and uses the keys' locations
> (and path
> >>>> folding, and possibly announcement) to move towards the originator,
> gaining more
> >>>> and more of the stream as he moves closer. This is primarily a problem
> on
> >>>> opennet, but it is also feasible on darknet - it's just massively more
> >>>> expensive. It can be worked around for inserts by:
> >>>> i) Inserting with a random splitfile key. THIS IS IMPLEMENTED AS OF
> 1255,
> >>>> provided you insert to SSK@, AND
> >>>> ii) Providing an easy to use selective reinsert mechanism, AND
> >>>> iii) Putting a timestamp on the inserts on any small reinsert, and only
> routing
> >>>> to nodes that were connected prior to that timestamp.
> >>>> IMHO the second and third items are relatively easy.
> >>>>
> >>>> At the same time, we can substantially improve data persistence (1255
> already
> >>>> does that for big files, but the insert tweaks that are going to be
> tested real
> >>>> soon now would probably gain us a lot more), ship Freetalk, WoT and
> FlogHelper
> >>>> for improved end-user functionality, a fixed wininstaller, lots of bug
> fixes and
> >>>> minor usability tweaks, and everything else we've done since 0.7.5.
> >>>>
> >>>> And having a paper published at the same time would surely help with
> publicity
> >>>> amongst certain kinds of folk.
> >>>
> >>> *lol*
> >>> Is this the same Toad who managed to break all nodes since 1250+?
> >>> Must have been fun for latest users, he will have to publish a lot of
> >>> papers to attract more users than are currently leaving.

As I understand it:
- We had a few relatively simple problems around 1250.
- A few builds later we introduced even segment splitting. This was disruptive 
in that it changed the CHKs resulting from inserts, and it did not introduce 
proper back compatibility code.
- I therefore attempted to make all the planned metadata changes at once in 
1255. That meant a great many changes landing together, including some bad 
bugs; however, it did also include much improved back-compatibility support.
- I did try to test thoroughly but it is difficult when there are very few 
testers.
- Anyhow, each build since 1255 fixes a bunch of bugs, most but not all of 
which were introduced in 1255.
> >>>
> >>> New, promised features are worthless if the node is broken and resets
> your
> >>> datastore or up- and downloads.

Fortunately we haven't had a datastore resetting bug for a *long* time. 
Arguably the salted hash store is incapable of such a problem short of 
corruption of the metadata files...

As regards resetting uploads and downloads:
- 1255 included a small change that might have resulted in corruption being 
detected where previously it went unnoticed.
- 1255 changed the internal data structures quite a bit, and this combined with 
a long-standing defragmentation bug to cause catastrophe for some people who 
had defrag-on-startup enabled.
- Fundamentally, *all* databases regularly corrupt themselves when exposed to 
the real world: End users with finite disk space, power cuts, unclean 
shutdowns, overclocked or overheating CPUs, and so on. Hence we need 
auto-backups.
- One of the reasons that we have not yet released 0.8.0 is that there is not 
yet any auto-backup for the downloads database. It has been planned for some 
time, sorry I haven't got around to it.
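For the curious, an auto-backup of a single-file database is conceptually 
simple. The sketch below is purely illustrative (it assumes the downloads 
database is one file such as node.db4o, and all the names are mine, not the 
planned implementation); it copies to a temporary name and renames, so a 
half-written backup is never mistaken for a good one:

```python
import os
import shutil
import time

def backup_database(db_path, keep=3):
    """Copy the database file to a timestamped backup and prune old copies.

    Copying to a temporary name first and then renaming means a crash
    mid-copy leaves only a .tmp file, never a plausible-looking backup.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    tmp = db_path + ".backup.tmp"
    dest = "%s.backup.%s" % (db_path, stamp)
    shutil.copy2(db_path, tmp)
    os.replace(tmp, dest)  # atomic rename on POSIX

    # Prune all but the newest `keep` backups.
    directory = os.path.dirname(db_path) or "."
    prefix = os.path.basename(db_path) + ".backup."
    backups = sorted(p for p in os.listdir(directory) if p.startswith(prefix))
    for old in backups[:-keep]:
        os.remove(os.path.join(directory, old))
    return dest
```

A real implementation would also have to make sure the database is in a 
consistent state before copying, which is most of the actual work.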
> >>>
> >>> What is he smoking to call this *improved persistence*?

The biggest change in 1255 was a couple of changes to FEC encoding, to split 
segments more evenly and to use extra cross-segment redundancy for files of 
80MB+. All simulation work (yes we actually simulated this in advance rather 
than just doing things at random, thanks to evanbd) shows that this should 
dramatically increase the retrievability of larger files.
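To illustrate the even segment splitting part: rather than filling each 
segment to the limit and leaving a small, poorly protected final segment, the 
blocks are divided as evenly as possible. A minimal sketch (the 128-block 
segment limit is real; everything else here is made up for illustration):

```python
import math

def split_evenly(n_blocks, max_per_segment=128):
    """Split n_blocks into segments of near-equal size, instead of
    filling each segment to max_per_segment and leaving a tiny
    remainder segment with almost no FEC protection."""
    n_segments = math.ceil(n_blocks / max_per_segment)
    base, extra = divmod(n_blocks, n_segments)
    # The first `extra` segments get one extra block each.
    return [base + 1] * extra + [base] * (n_segments - extra)

# Old-style splitting of 130 blocks gives [128, 2]: the 2-block segment
# is fragile. Even splitting gives [65, 65].
```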

There are also good grounds to think that other changes we are testing in 1255+ 
will eventually lead to significant improvements in block-level persistence. 
But we have to try them to find out (this is stuff that can be turned on per 
insert).
> >>
> >> Thanks for all your hard work testing pre-release builds, it's thanks to
> the
> >> input of people like you during testing that we get the quality of
> product that
> >> we do.
> >
> > If you tried being sarcastic you failed.

Yeah I should leave Freenet alone and go get a proper job. It's perfect 
already, why change it? In fact it was perfect in the 0.4 era before I turned 
up - the datastore bug was just an annoyance ... (Background: My first act as a 
paid coder for FPI was to fix the datastore bug, which basically reset 
everyone's datastore roughly once a month)
> >
> > We all are running pre-release builds, no matter what Toad calls them and
> > no matter whether he declares a build to be 0.80.

It was inevitable that there would be some disruption from doing all the 
metadata changes at once. Given all the grumbling over the relatively small 
change to even segment splitting around 1251, I figured it would be better for 
there to be only one change to what the CHKs look like. I could have been much 
more cautious and introduced the changes one at a time, but then we'd have had 
10 back-compatibility modes instead of 4.

Testing could have been better of course - but it's hard to test exhaustively 
when there are so few testers. Especially if they all use insecure, known 
broken or otherwise unacceptable modes of communication.
> >
> > Is there any developer reading and writing here?
> > Or maybe in Frost?
> > Can you explain to me why Frost was secure enough for Toad on 0.5 but not
> > on 0.7?
> > If he is panicked by the bots (which bots btw.), shouldn't it at least be
> > possible to announce a build in a keyed board if he still rejects to
> > communicate?

Frost is known broken. We don't bundle it. I don't use it. I don't use any 
freenet-related code that I haven't personally reviewed and I cannot justify 
reviewing either Frost (because it is fundamentally broken and cannot be 
shipped) or FMS (because it is fundamentally not bundleable and Freetalk will 
be bundled eventually). I haven't had a working Freetalk either for the last 
couple of weeks for various reasons but plan to fix that soon.
> >
> > Do you think adding hashes to metadata was an improvement?

Yes. It is better to serve nothing at all than to serve broken data. This is 
especially true of executables, but it is true in general. If there is 
sufficient demand I could provide an option to turn off hash validation, but 
IMHO it is better to fix the underlying issues, whatever they are. Without the 
last-ditch verification it is much harder to detect that there is a problem in 
the first place.
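The last-ditch verification amounts to something like the following sketch 
(the function and parameter names are mine, and the real metadata format 
differs; this just shows the principle):

```python
import hashlib

def verify_download(data, expected_sha256_hex):
    """Last-ditch check: refuse to serve data whose hash does not match
    the hash recorded in the metadata at insert time."""
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256_hex:
        # Better to fail loudly than to hand the user corrupt data.
        raise IOError("hash mismatch: refusing to serve corrupt data")
    return data
```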
> >
> > As expected the real bug hasn't been fixed, files still get corrupted when
> > being inserted, did I write already that this seems to be a bug in FEC
> data
> > (some downloaders are affected, some not, the older the file the lower the
> > chance to get it uncorrupted)?

Do you think this is a problem with uploading or downloading? Are there some 
downloaders who never get the bug and some who frequently get it, for the same 
files? If your node gets files corrupted, is the same file always corrupted? Is 
it always corrupted in the same way? If you create a new, independent tester 
node and fetch it through that node, is it corrupted on the new node?

Anyway, we got relayed reports from FMS of data corruption, so I added a bunch 
of checks that verify the blocks decoded via FEC against the keys listed in 
the splitfile, and I fixed a major bug that caused data corruption on large 
files inserted after 1255.

As regards old files, the last block of a really old splitfile used to be 
padded by the downloader rather than the uploader, and due to various bugs 
there were at least 3 different padding schemes in use. For quite some time 
now we have padded the last block on insert and truncated it after download, 
which avoids such problems. The low-level double-checking code mentioned above 
showed up a problem where we would occasionally pad and use the last, 
too-short block in FEC decoding and so get a bunch of invalid blocks out; this 
is also fixed in 1259.

*IF* there is a problem at upload time, which I doubt, we should still get the 
correct behaviour as of 1259+: if the check blocks are bad but the data blocks 
are not, decoding from the known data blocks and the bad check blocks will 
produce bogus data blocks; we will recognise this, log errors mostly starting 
"INVALID KEY FROM", and fatally fail all the involved blocks. We will then try 
to fetch the remaining data blocks, so the download will either fail, or stall 
if the blocks are not all available.
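The decode-time check behaves roughly like this sketch (real CHKs are not 
plain SHA-256 hashes and these names are made up; the point is that every 
block's key is already listed in the splitfile metadata, so decoded blocks can 
be verified independently):

```python
import hashlib

def check_decoded_blocks(decoded_blocks, expected_keys):
    """After FEC decoding, verify each recovered block against the key
    listed in the splitfile metadata; return indices of bogus blocks."""
    bad = []
    for i, (block, key) in enumerate(zip(decoded_blocks, expected_keys)):
        if hashlib.sha256(block).hexdigest() != key:
            # Corresponds to the "INVALID KEY FROM" errors: the decode
            # produced data that matches no block that was inserted.
            bad.append(i)
    return bad
```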

That's what's happened recently on FEC-related data corruption anyhow. If you 
have more data corruption problems, please TELL US!
> >
> > True, the downloading node now detects this corruption and throws away all
> > blocks, just saying hash mismatch.
> > With previous builds one was able to repair corrupted files, read about
> > Quickpar.

It really should not be necessary to layer additional software for redundancy 
and reliability. Freenet should provide both by default. Data corruption is 
unacceptable, period.

> > Now your node says "sorry, this file is toad, I can't pass it on to you".
> >
> > Great improvement, isn't it?
> > Can you please ask him to remove hash check as long as he doesn't fix the
> > real bug?

IMHO I probably have fixed the real bug in 1259. However if you still get 
problems let me know. I would very much like to fix any such bug!

> > We know ourselves when a file is corrupted.
> > We don't need the node to detect it when downloading the file, we need a
> > node not corrupting the file when it is being inserted.

I doubt very much that the problem is at upload time.
> >
> > I could continue my rant with p0's comment about NNTP not being of primary
> > interest for Freetalk, Webinterface to be sufficient.

I thought NNTP had been fixed? There are other people than p0s working 
(occasionally) on Freetalk...
> >
> > Over and out.


_______________________________________________
chat mailing list
chat@freenetproject.org
Archived: http://news.gmane.org/gmane.network.freenet.general