On Wed, 2011-01-19 at 11:29 -0500, Mike Rathburn wrote:
> Nowhere in this thread did you ever mention you worked with the tools
> provided by Google - the obvious first place to look for an answer.  

Nobody asked me, nor about my experience with or knowledge of
robots.txt files. I can't state everything I know every time I post. My
posts are long enough as is. :)

But if you're at all familiar with my company, it's a software
development firm, with a fair amount of the work being web based. Kinda
easy to put two and two together. In fact, for most of what I do the
platform is irrelevant. So, given my knowledge of Linux, which I do not
have to run, you can start to get an idea of my knowledge and
experience in the other areas where I actually make my living.

At this time I make very little, if any, money off anything specific to
Linux. Linux services are something I do offer, and I have done work
for clients over the years. It's an area I am looking to grow and
expand, but it will never be my core business, as stated on my website.
It's just a side offering, like hosting, etc.

> As Kyle so eloquently pointed out in another thread, "Assumption is
> the mother of all f-ups.  Please verify. :)"

Well, per Kyle's post and your comment, you could have phrased your
post as a question: "Are you familiar with Google's Webmaster Tools,
and are you using that resource for JaxLug.org?" You would have gotten
a simple yes. Sorry, but the post assumed I was not aware of Google's
Webmaster Tools and/or not already using them, instead of just asking.

Now, using Google's Webmaster Tools or not really doesn't make that
much difference to the problem at hand. It just let me know when Google
was crawling. I could have reduced the crawl rate, but that would not
really resolve the problem.

More than likely the problem is on MediaWiki's end, but it is also
partly Google's. Google should be aware of MediaWiki, since it's
extremely popular, much less Wikipedia using it. After seeing what
Google does to the LUG's wiki, I feel for Wikipedia, given the
bandwidth Google must consume. ;)

Also, back to Kyle's comment: why do you think I am harping so much on
the robots.txt file? People are tossing out way too many assumptions
regarding robots.txt files, without much if any verification. Not to
mention very few contributions toward helping create a custom
robots.txt file for the JaxLUG wiki, one that allows Google's bots to
access certain areas, etc.

Thus there is still more work to be done on verification, resolution,
and a permanent solution.

-- 
William L. Thomson Jr.
Obsidian-Studios, Inc.
http://www.obsidian-studios.com


---------------------------------------------------------------------
Archive      http://marc.info/?l=jaxlug-list&r=1&w=2
RSS Feed     http://www.mail-archive.com/[email protected]/maillist.xml
Unsubscribe  [email protected]
