Hi, I'll just start by boosting your egos and saying that Plucker is
great, and it's wonderful that you have put so much time into making
it what it is today.
> Currently, when you click on a link that wasn't retrieved, you have
> the option to copy the URL to the Memo database, and you can if you
> wish add text after the URL. This feature could be left exactly as is;
> no changes to the viewer would be necessary. Instead, there could be
> an additional program that the user runs manually which would read
> the Memo database that has been hotsynced to the PC, would pull
> out all the Memo records that were saved from the Plucker viewer,
> create a temporary HTML file from them. The new program would then
> call the parser to fetch all links in that HTML file.

How about extending the parser to understand the concept of "add
missing links to document"? Then the missing links would not become a
new document, but would instead appear in the original doc. This would
need some kind of functionality in the parser which (the next time the
document is fetched) would fetch a link if it exists in a "missing
links" array, regardless of maxdepth, stayonhost and other
constraints. Links which cannot be mapped to a source document could
be fetched into a "Missing links" document. The original source
document would not have to be known; we just fetch the "missing link"
if we stumble upon it in a page we fetch.

Problems, as I see them, are at least:

- How do we know if the missing link has been fetched? We might be
  doing several parser runs. Maybe some way of telling the parser
  "fetch all missing links not yet fetched" into a separate document?
- Right now we don't fetch at sync time (and probably don't want to).
  So the round trip would be sync - fetch - sync, if I understand it
  correctly?
- Somebody might always/sometimes want the missing links in a new
  document.
- Problems when the source document changes?
- This is just an idea, not a complete solution.

> To specify maxdepths, file names and other options, a section with a
> certain name could be added to the usual Plucker config files by the
> user.
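To make the idea concrete, here is a minimal sketch of what the fetch
decision could look like. Everything here (the names MISSING_LINKS and
should_fetch, the crude host check) is hypothetical, not the actual
Plucker parser code:

```python
# Sketch only: MISSING_LINKS would be populated from the URLs the user
# copied out of the viewer; none of these names exist in Plucker today.

MISSING_LINKS = {
    "http://other-host.example/article.html",
}

def should_fetch(url, depth, maxdepth, home_host, stayonhost):
    """Decide whether the gatherer should retrieve this URL."""
    # A URL the user flagged as missing is always fetched, even when
    # it would normally be skipped by maxdepth or stayonhost.
    if url in MISSING_LINKS:
        return True
    if depth > maxdepth:
        return False
    # Crude host test for illustration; a real implementation would
    # parse the URL properly.
    if stayonhost and not url.startswith("http://" + home_host):
        return False
    return True
```

The point is just that the missing-links check comes first, so it
overrides every other constraint; answering the "has it been fetched
already?" question would mean removing a URL from the set once it has
been retrieved.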
> The new program would refer to that section when calling the parser.
> Also (possibly as a future enhancement) the viewer could be modified
> to add a little pull-down "maxdepth" selection list to the "External
> Link" screen: before you click on the "Copy URL" button, you select
> the maxdepth for that particular link from the list, and it's saved
> in the Memo pad entry in some format that the new program can read
> and understand.

Sounds reasonable. A field for naming the link would also be great; it
could default to the body of the <A> tag. And maybe a checkbox for
selecting whether we want to copy the link as HTML
(<a href="xxx">yyy</a>) or as plain text? Right now the missing
link/external link form quite verbosely informs the user about why the
document is missing, so there would be room for new fields. Then
again, if all the bells and whistles are added, new users might be
confused (there probably is a reason for the long text in the form?).

> > Then there's the --no-urlinfo complex. If it's used, you lose the
> > ability to retrieve those "out of bounds" urls.

But that is a deliberate(?) choice made by the user who created the
document?

> > Alternately, we pull the database from the Palm, run a gather on
> > the desktop, comparing against what is in the database we just
> > pulled from the Palm, and then integrate those "missing" records.
> > However, would you just want to append those records? Or remove
> > the ones already read, and then replace them with the "out of
> > bounds" records you checked?

If we merge the missing links into the original source document, then
we keep the missing link for as long as we have the document. This
again conflicts with fetching the unseen links into a "missing links"
document. I would like a nice GUI for managing the links (on the
handheld and on the desktop) ;-). Managing the memos which Plucker
currently creates is not so easy, but far better than nothing.
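For the "new program" that turns saved memos into a temporary HTML
file, something like the following could work. The memo format assumed
here (URL on the first line, optional "maxdepth=" and "name=" lines
after it) is entirely made up for illustration; the viewer does not
write memos in this format today, and the per-link maxdepth attribute
would be something the parser has yet to learn:

```python
# Sketch only: assumes a hypothetical memo format with the URL on the
# first line and optional "maxdepth=N" / "name=..." lines after it.

def memos_to_html(memos):
    """Turn saved memo records into a temporary HTML page whose
    links the parser can then be asked to fetch."""
    lines = ["<html><body>"]
    for memo in memos:
        fields = memo.strip().splitlines()
        url = fields[0]
        maxdepth, name = "1", url  # defaults when no extra fields
        for field in fields[1:]:
            if field.startswith("maxdepth="):
                maxdepth = field[len("maxdepth="):]
            elif field.startswith("name="):
                name = field[len("name="):]
        # The maxdepth attribute on <a> is invented; the parser would
        # have to be taught to honour it per link.
        lines.append('<a href="%s" maxdepth="%s">%s</a>'
                     % (url, maxdepth, name))
    lines.append("</body></html>")
    return "\n".join(lines)
```

This also shows where the proposed "name the link" field would go: it
becomes the body of the generated <a> tag, defaulting to the URL when
the user supplied nothing.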
In my structured world the missing links should go into a separate
database (I believe this was debated a while back on this list?).

What is the concept on non-Windows platforms for backup databases? On
Windows the HotSync manager makes a copy of the database on the
handheld. This copy could be checked, under some conditions, for which
pages are included in a document. Doesn't the Plucker parser also keep
a copy of the databases it creates? That database could be checked
before it is replaced by a new one.

<std-disclaimer>
Yes, I know Plucker is Open Source, that nobody gets paid for
implementing new features (or maintaining it), that the source is in
CVS, and that if I want something I can (or might be able to)
implement it myself.
</std-disclaimer>

-Jonte