> I like having one site per file kind of better than having one large
> plucker file. I suspect there are things I could do about this with
> plucker as well.
Each of us probably does it differently, so we'll all have different
answers. I personally either point directly to the URL I want, using
plucker-build -H -f and so on, or I use separate sitename.html files in
the ~/.plucker directory, such as home.slashdot.html, and point to them
with -H as well. I know there is a smattering of other ways to get
content: Mike has an interesting one using PHP and a MySQL database, IIRC,
and Robert O'Connor uses the GUI client he whipped up. There's really
nothing stopping you from using your own foo.plucker files on a per-site
basis and pointing to them.
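As a minimal sketch of that per-site approach (the file name mirrors the home.slashdot.html example above; the link targets and paths are just illustrations, so adjust to taste):

```shell
# Create a small per-site home file under ~/.plucker.
PLUCKER_DIR="${PLUCKER_DIR:-$HOME/.plucker}"
mkdir -p "$PLUCKER_DIR"

cat > "$PLUCKER_DIR/home.slashdot.html" <<'EOF'
<html>
<body>
<!-- One link per site you want spidered. -->
<a href="http://slashdot.org/">Slashdot</a>
</body>
</html>
EOF

# Then point the distiller at that file instead of a remote URL
# (run manually; requires Plucker's plucker-build on your PATH):
# plucker-build -H "file:$PLUCKER_DIR/home.slashdot.html" -f slashdot
```

One such file per site keeps each spider run independent, which is the "one site per file" behavior the original poster was after.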
One thing I do find contrasting between Plucker and
SiteScooper, and it's really the largest differentiator, is that SiteScooper
is clearly a transcoding parser: it slices and dices the content in whatever
way you wish, removing sections, slicing off menus, headers, whatever.
Plucker does not do that, other than using ~/.plucker/exclusionlist.txt
entries to block links you don't want, which is not the same thing.
For sites that do not already have a Palm-sized layout, a mix of the two
would be ideal, and we really should start capturing these setups and
merging them together in some sort of system.
I'd love to get some user-submitted home.html examples of the sites
you actively spider, so I can add them to the server itself and let
other users benefit from that knowledge and content. Hint, hint.
/d