Hello weya,

  I am still fighting this myself, but here is what I got from the
  hacker...

  use stayon and staybelow when building your Plucker page instead of
  using a direct URL.

  You can find more info on these by entering the terms in the help
  file. I am still having issues with getting plucker to understand
  when I am using them, but they do help.

  The standard spidering limits page does not have any limits other
  than the server you are on (stayon). If you try to spider a website
  that has lots of links to pages on the same server (like GeoCities,
  or a number of ebook sites), Plucker tries to get all of them.
  --staybelow keeps you below the directory level that you start at, so
  you only get the pages that part of the site has.
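  I have not tested this from the command line myself (I use the GUI),
  but from what I can tell from the help file, the command-line builder
  would look something like this. The exact flag spellings are my guess
  from the docs, and www.example.com/ebooks/ is just a made-up starting
  URL, so check "plucker-build --help" on your install before trusting
  any of it:

```shell
# Rough sketch, untested: limit the spider so it does not try to
# fetch the entire internet.
plucker-build \
  --doc-name="MySite" \
  --maxdepth=2 \
  --stayonhost \
  --staybelow \
  http://www.example.com/ebooks/index.html
# --maxdepth=2  : follow links at most 2 levels from the start page
# --stayonhost  : never leave the server the start page is on
# --staybelow   : never go above the start page's directory
```

  The GUI should expose the same limits somewhere in the spidering
  settings, just under different names.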

  Does that help? (from a newbie, I might add)

Wednesday, April 27, 2005, 9:18:07 AM, you wrote:

w> Yes, I'm using WinXP and Plucker's desktop version to define
w> everything. I thought I had spidering limits set but ....

w> So what should they be set at so Plucker isn't updating the entire
w> internet, lol?

w>  Debbie

w> On 4/27/05, Bill Johnson <[EMAIL PROTECTED]> wrote:
w>  Hello weya,

w> It sounds like you are not setting your spidering limits. Have you
w> looked at those settings?

w> I am assuming you are another windows user (like me) who is using
w> the GUI.

-- 
Best regards,
 Bill                            mailto:[EMAIL PROTECTED]

_______________________________________________
plucker-list mailing list
[email protected]
http://lists.rubberchicken.org/mailman/listinfo/plucker-list
