Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress?

2014-01-07 Thread markw
I use Drupal. It is easy to start and there is a lot you can do.


 Thanks for reading this.

 I'm a member of the Big-8 Board, which decides what Usenet groups are
 created and deleted.  We have both technical and non-technical members,
 and we've been using MediaWiki for the board's website
 (http://www.big-8.org/) until now, but we have to move the site to a new
 server which doesn't offer it.

 So, the question is: "What's the best compromise between ease of use,
 learning curve, and maintainability if we have to choose among Joomla,
 Drupal, or WordPress?"

 The new site has 300 GB of disk and unlimited data transfers, but I
 don't have shell access, just an ftp upload account.

 I appreciate your help!

 Bill

 --
 Bill Horne
 William Warren Consulting
 339-364-8487




___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress?

2014-01-07 Thread Daniel Barrett
On January 6, 2014, Bill Horne wrote:
...we've been using MediaWiki for the board's website 
(http://www.big-8.org/) until now, but we have to move the site to a new 
server which doesn't offer it.
So, the question is: "What's the best compromise between ease of use,
learning curve, and maintainability if we have to choose among Joomla,
Drupal, or WordPress?"

Have you considered not switching platforms? I would think the cost of
moving all your mediawiki content to a new platform and retraining all
your users would far exceed the price of a managed VPS on
www.linode.com or www.hostdime.com, where you can install mediawiki
yourself and keep doing what you're doing.

You didn't say how many users you have, but I run mediawiki on a cheap
shared VM at www.hostdime.com (about $5/month) just fine.

--
Dan Barrett
dbarr...@blazemonger.com


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress?

2014-01-07 Thread Richard Pieri

Daniel Barrett wrote:

Have you considered not switching platforms? I would think the cost of
moving all your mediawiki content to a new platform and retraining all
your users would far exceed the price of a managed VPS on


MediaWiki, and Wikis in general, have a spate of problems that make them 
not terribly useful for document management.


I've gotten some experience with a few actual document management 
systems since the last time this came up. That's document management, 
not content management.


The first is called DocDB. I wouldn't wish this on anyone. It's awful, 
but the scientific community loves it so that's what I'm running.


I looked at a few others and my top choice is LetoDMS. It's easy to 
install (aptitude install letodms), simple to configure, is agnostic to 
file types, does versioning, doesn't use any unique or custom markup.


--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress?

2014-01-07 Thread John Abreau
FYI, I did a Google search for LetoDMS, and I found another one called SeedDMS
that states:

 "SeedDMS is the continuation of LetoDMS because it has lost its main
 developer."



On Jan 7, 2014, at 1:03 PM, Richard Pieri richard.pi...@gmail.com wrote:

 Daniel Barrett wrote:
 Have you considered not switching platforms? I would think the cost of
 moving all your mediawiki content to a new platform and retraining all
 your users would far exceed the price of a managed VPS on
 
 MediaWiki, and Wikis in general, have a spate of problems that make them not 
 terribly useful for document management.
 
 I've gotten some experience with a few actual document management systems 
 since the last time this came up. That's document management, not content 
 management.
 
 The first is called DocDB. I wouldn't wish this on anyone. It's awful, but 
 the scientific community loves it so that's what I'm running.
 
 I looked at a few others and my top choice is LetoDMS. It's easy to install 
 (aptitude install letodms), simple to configure, is agnostic to file types, 
 does versioning, doesn't use any unique or custom markup.
 
 -- 
 Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress?

2014-01-07 Thread Bill Horne

On 1/7/2014 12:24 PM, Daniel Barrett wrote:

On January 6, 2014, Bill Horne wrote:

...we've been using MediaWiki for the board's website
(http://www.big-8.org/) until now, but we have to move the site to a new
server which doesn't offer it.
So, the question is: "What's the best compromise between ease of use,
learning curve, and maintainability if we have to choose among Joomla,
Drupal, or WordPress?"

Have you considered not switching platforms? I would think the cost of
moving all your mediawiki content to a new platform and retraining all
your users would far exceed the price of a managed VPS on
www.linode.com or www.hostdime.com, where you can install mediawiki
yourself and keep doing what you're doing.

You didn't say how many users you have, but I run mediawiki on a cheap
shared VM at www.hostdime.com (about $5/month) just fine.


Thanks for the suggestion: it's always important to ask "why change?", but that
question was answered by my ISP's terms of service. The site has been stable for a while,
but right now it's sharing the 12 GB of space on my virtual machine at prgmr.com, and I
need to lighten the disk load, so I'm jumping at the chance to find it a new home.

The new site has 300 GB of space and unlimited bandwidth, so it's a keeper on
that basis alone: it's already paid for, which is a big plus in a volunteer
organization, and it has professional support available should something happen
that I or the other members can't fix.
Alas, it offers the three options I mentioned, but /not/ MediaWiki.

There are several utilities available to convert MediaWiki content to WordPress,
so that's a possibility, or (although it would mean a lot of work) the board
could set up a static HTML site and forgo a CMS altogether.

Bill

--
Bill Horne
William Warren Consulting
339-364-8487

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress?

2014-01-07 Thread Tom Metro
Bill Horne wrote:
 ...we've been using MediaWiki for the board's website
 ...but we have to move the site to a new
 server which doesn't offer it.
 
 So, the question is: "What's the best compromise between ease of use,
 learning curve, and maintainability if we have to choose among
 Joomla, Drupal, or WordPress?"

I can't answer the latter question, as I have limited experience with
maintainability for Joomla and WordPress. (Though I hear that although
WordPress is less capable, the reason for its rise in popularity is its
greater ease of use and maintainability.)

I will, however, in the tradition of answering the question you didn't
ask, suggest the idea of using Wikispaces, as we do for BLU (and
boston.pm.org). It's free or cheap; it's hosted, so there's no maintenance;
it's still a wiki, so the model is familiar to your users; and in my
opinion it has a better UI and markup language than MediaWiki (though
in some ways it is less powerful).

A CMS tends to be a better bet if your priority is site design
(appearance), while a wiki is better if you are more concerned with
doing collaborative document editing.

It should be possible to write a markup converter to go from MediaWiki
to Wikispaces. One may even already exist. As a plan B, you can
highlight formatted text in your MediaWiki site and paste it into the
Wikispaces rich text editor, preserving the formatting. (You'll still
need to fix up the internal links.)

 -Tom

-- 
Tom Metro
The Perl Shop, Newton, MA, USA
Predictable On-demand Perl Consulting.
http://www.theperlshop.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress? (Solved)

2014-01-07 Thread Bill Horne

On 1/6/2014 11:30 PM, Bill Horne wrote:

Thanks for reading this.

I'm a member of the Big-8 Board, which decides what Usenet groups are 
created and deleted.  We have both technical and non-technical 
members, and we've been using MediaWiki for the board's website 
(http://www.big-8.org/) until now, but we have to move the site to a 
new server which doesn't offer it. 


Thanks to all for your help: I've just gotten off the phone, and the 
decision has been made to go in a different direction. We have a 
volunteer who wants to learn native HTML, and so we'll be setting up a 
static site without a CMS.


I appreciate your time and advice.

Bill

--
Bill Horne
William Warren Consulting http://www.william-warren.com/
339-364-8487

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


[Discuss] What's the best site-crawler utility?

2014-01-07 Thread Bill Horne
I need to copy the contents of a wiki into static pages, so please 
recommend a good web-crawler that can download an existing site into 
static content pages. It needs to run on Debian 6.0.


Bill

--
Bill Horne
339-364-8487

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress? (Solved)

2014-01-07 Thread Kent Borg

On 01/07/2014 06:46 PM, Bill Horne wrote:
Thanks to all for your help: I've just gotten off the phone, and the 
decision has been made to go in a different direction. We have a 
volunteer who wants to learn native HTML, and so we'll be setting up 
a static site without a CMS.


More secure than using fancier stuff.

I know when I once learned a little about PHP, I was shocked to learn
that by just following one's nose, tons of dangerous things could happen.
I forget, but I think all variables default to being public to the
internet unless the programmer remembers to mark them otherwise.  Or
something scary like that.


-kb

___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] What's the best site-crawler utility?

2014-01-07 Thread Richard Pieri

Bill Horne wrote:

I need to copy the contents of a wiki into static pages, so please
recommend a good web-crawler that can download an existing site into
static content pages. It needs to run on Debian 6.0.


Remember that I wrote how wikis have a spate of problems? This is the
biggest one. There's no way to dump a MediaWiki site in a human-readable
form. There just isn't.


The best option is usually to use the dumpBackup.php script to dump the
database as XML and then parse that somehow. This requires shell access
on the server. This will get everything, including markup; there's no way
to exclude it.
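
For anyone who does have shell access, a rough sketch of that route (the
install path is an assumption; adjust it for the actual server):

  cd /var/www/mediawiki                 # assumed MediaWiki install location
  php maintenance/dumpBackup.php --current > big8-pages.xml
  # --current dumps only the latest revision of each page; --full dumps all revisions.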


Method number two is to use the Special:Export page, if it hasn't been 
disabled, to export each page in the wiki. It can do multiple pages at 
once but each page must be specified in the export. This is essentially 
the same as dumpBackup.php except that it's page by page instead of the 
whole database.
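
A rough sketch of the page-by-page route (the index.php path and the page
title are just examples):

  # Special:Export/<PageTitle> returns the page's current wikitext wrapped in export XML
  curl -o Main_Page.xml "http://www.big-8.org/index.php?title=Special:Export/Main_Page"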


--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] What's the best site-crawler utility?

2014-01-07 Thread Matthew Gillen
On 1/7/2014 6:49 PM, Bill Horne wrote:
 I need to copy the contents of a wiki into static pages, so please
 recommend a good web-crawler that can download an existing site into
 static content pages. It needs to run on Debian 6.0.

  wget -k -m -np http://mysite

is what I used to use.  -k converts links to point to the local copy of
the page, -m turns on options for recursive mirroring, and -np ensures
that only URLs below the initial one will be downloaded.  (The
recursive option by itself is pretty dangerous, since most sites have a
banner or something that points to a top-level page, which then pulls in
the whole rest of the site.)

HTH,
Matt
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] What's the best site-crawler utility?

2014-01-07 Thread Matthew Gillen
On 1/7/2014 7:28 PM, Matthew Gillen wrote:
 On 1/7/2014 6:49 PM, Bill Horne wrote:
 I need to copy the contents of a wiki into static pages, so please
 recommend a good web-crawler that can download an existing site into
 static content pages. It needs to run on Debian 6.0.
 
   wget -k -m -np http://mysite
 
  is what I used to use.  -k converts links to point to the local copy of
  the page, -m turns on options for recursive mirroring, and -np ensures
  that only URLs below the initial one will be downloaded.  (The
  recursive option by itself is pretty dangerous, since most sites have a
  banner or something that points to a top-level page, which then pulls in
  the whole rest of the site.)

Now that I've read more of the other thread you posted before asking this
question, depending on your intentions you might actually want to skip
'-k'.  I used -k because I was taking a wiki offline and didn't want to
figure out how to get TWiki set up in two years when I needed to look up
something in the old wiki.  So I wanted a raw HTML version for archival
purposes that was suitable for browsing using just a local filesystem
with a browser.  '-k' is awesome for that.

However, it may or may not produce what you want if you want to actually
replace the old site, with the intention of accessing it through a web
server.

Matt
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] What's the best site-crawler utility?

2014-01-07 Thread Richard Pieri

Matthew Gillen wrote:

   wget -k -m -np http://mysite


I've tried this. It's messy at best. Wiki pages aren't static HTML. 
They're dynamically generated and they come with all sorts of style 
sheets and embedded scripts. Yes, you can get the text but it'll be text 
as rendered by a wiki. It takes a lot of work to turn it into something 
usable.


--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] What's the best site-crawler utility?

2014-01-07 Thread Richard Pieri

Daniel Barrett wrote:

For instance, you can write a simple script to hit Special:AllPages
(which links to every article on the wiki), and dump each page to HTML
with curl or wget. (Special:AllPages displays only N links at a time,


Yes, but that's not human-readable. It's a dynamically generated
jambalaya of HTML, JavaScript, PHP, CSS, and Ghu only knows what else.


Converting to PDF is even less useful.

--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] What's the best site-crawler utility?

2014-01-07 Thread Tom Metro
Matthew Gillen wrote:
   wget -k -m -np http://mysite

I create an emergency backup static version of dynamic sites using:

wget -q -N -r -l inf -p -k --adjust-extension http://mysite

The option -m  is equivalent to -r -N -l inf --no-remove-listing, but
I didn't want --no-remove-listing (I don't recall why), so I specified
the individual options, and added:

  -p
  --page-requisites
This option causes Wget to download all the files that are necessary
to properly display a given HTML page.  This includes such things as
inlined images, sounds, and referenced stylesheets.

  --adjust-extension
If a file of type application/xhtml+xml or text/html is downloaded
and the URL does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this
option will cause the suffix .html to be appended to the local
filename. This is useful, for instance, when you're mirroring a
remote site that uses .asp pages, but you want the mirrored pages to
be viewable on your stock Apache server.  Another good use for this
is when you're downloading CGI-generated materials.  A URL like
http://site.com/article.cgi?25 will be saved as article.cgi?25.html.


 '-k' ... may or may not produce what you want if you want to actually
 replace the old site, with the intention of accessing it through a web
 server.

Works for me. I've republished sites captured with the above through a
server and found them usable.

But generally speaking, not all dynamic sites can successfully be
crawled without customizing the crawler. And as Rich points out, if your
objective is not just to end up with what appears to be a mirrored site,
but actual clean HTML suitable for hand-editing, then you've still got
lots of work ahead of you.

 -Tom

-- 
Tom Metro
The Perl Shop, Newton, MA, USA
Predictable On-demand Perl Consulting.
http://www.theperlshop.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] What's the best site-crawler utility?

2014-01-07 Thread Greg Rundlett (freephile)
Hi Bill,

The GPL-licensed HTTrack Website Copier works well (http://www.httrack.com/).
 I have not tried it on a MediaWiki site, but it's pretty adept at copying
websites, including dynamically generated ones.

They say: "It allows you to download a World Wide Web site from the
Internet to a local directory, building recursively all directories,
getting HTML, images, and other files from the server to your computer.
HTTrack arranges the original site's relative link-structure. Simply open a
page of the mirrored website in your browser, and you can browse the site
from link to link, as if you were viewing it online. HTTrack can also
update an existing mirrored site, and resume interrupted downloads. HTTrack
is fully configurable, and has an integrated help system."

WinHTTrack is the Windows 2000/XP/Vista/Seven release of HTTrack, and
WebHTTrack the Linux/Unix/BSD release which works in your browser. There is
also a command-line version 'httrack'.

HTTrack is actually similar in its results to the "wget -k -m -np
http://mysite" that Matt mentions, but it may be easier in general to use and
offers a GUI to drive the options that you want.

Using the MediaWiki API to export pages is another option if you have
specific needs that cannot be addressed by a mirror operation (e.g., your
wiki has namespaced content that you want to treat differently). If you
end up exporting via Special:Export or the API, then you will be faced
with the task of converting your XML to HTML. I have some notes about wiki
format conversions at https://freephile.org/wiki/index.php/Format_conversion
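
If the goal is static HTML rather than wikitext, the API's parse action can
also return each page's rendered HTML directly. A rough sketch (the api.php
path and the page title are assumptions; adjust for the actual wiki):

  # Returns XML whose <text> element contains the rendered HTML for the page body
  curl "http://www.big-8.org/api.php?action=parse&page=Main_Page&format=xml" > Main_Page-parse.xml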

There's also pandoc.  If you need to convert files from one markup format into
another, pandoc is your Swiss-army knife.
http://johnmacfarlane.net/pandoc/
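
For example, something like this (the file names are made up; pandoc has had a
MediaWiki reader for a while, but check your version):

  # Convert a page's MediaWiki markup to a standalone HTML file
  pandoc -f mediawiki -t html -s About.wiki -o About.html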

~ Greg

Greg Rundlett


On Tue, Jan 7, 2014 at 6:49 PM, Bill Horne b...@horne.net wrote:

 I need to copy the contents of a wiki into static pages, so please
 recommend a good web-crawler that can download an existing site into static
 content pages. It needs to run on Debian 6.0.

 Bill

 --
 Bill Horne
 339-364-8487


___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress? (Solved)

2014-01-07 Thread jc
Bill Horne wrote:
| On 1/6/2014 11:30 PM, Bill Horne wrote:
|  Thanks for reading this.
| 
|  I'm a member of the Big-8 Board, which decides what Usenet groups are
|  created and deleted.  We have both technical and non-technical
|  members, and we've been using MediaWiki for the board's website
|  (http://www.big-8.org/) until now, but we have to move the site to a
|  new server which doesn't offer it.
|
| Thanks to all for your help: I've just gotten off the phone, and the
| decision has been made to go in a different direction. We have a
| volunteer who wants to learn native HTML, and so we'll be setting up a
| static site without a CMS.
|
| I appreciate your time and advice.
| Bill

Heh.  For some reason, I'm reminded of that classic cartoon showing
all the ways that various experts designed and built their
interpretation of what the customer wanted, which was a tire hanging
on a rope from a tree branch.

I had a similar case recently.  I've helped a few nonprofits build web
sites, and several have started off looking into Drupal, Joomla, etc.
After a month or so of this, with nothing working, I combined a few
scripts that I'd collected or written anew with a few of their designs
for the pages they wanted, and in a week or two they were happy with
the results.

But the fun part came after that, when we discussed what they really
needed, and why my stuff was still too complex.  Finally, I persuaded a
few of the orgs' members to try my idea that they learn a bit of HTML.
Of course, they'd looked at HTML manuals and run terrified from the
incomprehensible technical gobbledy-gook that they saw.  HTML is this
horrible stuff that mere mortals don't stand a chance of understanding,
right?

But I persuaded them to try a few experiments.  I start them with a
few plain-text docs that look like the pages they want, and show them
that these work when put on the web, but cause problems on various
screens.  Smart phones are nice for this demo.  Then I show them the
effect of wrapping them in a simple <html><body> ... </body></html>
wrapper, and adding <p> tags between paragraphs.  "Hey, that's really
simple; why didn't anyone tell us that?"  Then I show them a few more
tags: <b>, <i>, and then the all-important <a href=...> tag.  And
they're off and running, building some of the pages they want.  I keep
emphasizing that they should just learn it one tag at a time.

The result has been that the orgs' web sites are now run by a few of
their members who have learned just enough HTML to do the job.  I
have to teach them a bit about debugging a page, of course.  And some
of them have even started to learn basic CSS.  Their sites are often
rather impressive to interested visitors.  I attribute this to the
fact that they're mainly concerned with getting their information
online, and view HTML as a tool to make it readable on visitors'
screens, whatever size they might be.

This won't work for every org, of course.  Some of them actually need
WordPress or Drupal or whatever.  But a fundamental problem is that
people often don't know what they need, and are prone to being taken
in by people who want to sell them the ultimate solution to all the
world's Web problems.  So maybe what we need is a reliable way to
determine when static pages with simple markup are sufficient, and
when we need a high-powered Solution to complex marketing problems.
But I don't know how to translate people's amorphous desires into
requirement specs.  I suspect nobody does.


--
--
   _'
   O
 :#/  John Chambers
   +   j...@trillian.mit.edu
  /#\  jc1...@gmail.com
  | |
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] What's the best site-crawler utility?

2014-01-07 Thread Greg Rundlett (freephile)
Also, I just discovered a MediaWiki extension written by Tim Starling that
may suit your needs.  As the name implies, it's for dumping a wiki to HTML.

http://www.mediawiki.org/wiki/Extension:DumpHTML
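
If that fits, the usual pattern is to run the extension's maintenance script
on the server; a rough sketch from memory (the script path and the -d
destination flag should be double-checked against the extension's docs):

  # Dump the wiki's pages as static HTML into /tmp/big8-static
  php extensions/DumpHTML/dumpHTML.php -d /tmp/big8-static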

As for processing the XML produced by export or MediaWiki dump tools,
here is some info on that XML schema:
http://meta.wikimedia.org/wiki/Help:Export#Export_format

And here are some of the tools you can use to process MediaWiki XML:
http://wikipapers.referata.com/wiki/List_of_data_processing_tools


Greg Rundlett
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] Small website, non-technical users: Joomla, Drupal, or WordPress? (Solved)

2014-01-07 Thread Eric Chadbourne
Hi Kent,

What do you mean by variables being public to the internet?  Nobody
can directly access them, from what I understand.  Sanitize input and
output and you should be fine, no?

Thanks.

On Tue, Jan 7, 2014 at 6:55 PM, Kent Borg kentb...@borg.org wrote:
 On 01/07/2014 06:46 PM, Bill Horne wrote:

 Thanks to all for your help: I've just gotten off the phone, and the
 decision has been made to go in a different direction. We have a volunteer
 who wants to learn native HTML, and so we'll be setting up a static site
 without a CMS.


 More secure than using fancier stuff.

 I know when I once learned a little about PHP, I was shocked to learn that by
 just following one's nose, tons of dangerous things could happen.  I forget,
 but I think all variables default to being public to the internet unless the
 programmer remembers to mark them otherwise.  Or something scary like that.

 -kb




-- 
Eric Chadbourne
617.249.3377
http://theMnemeProject.org/
http://WebnerSolutions.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss


Re: [Discuss] What's the best site-crawler utility?

2014-01-07 Thread Eric Chadbourne
Plus one for HTTrack.  I used it a couple of months ago to convert a
terrible, hacked Joomla site to HTML.  It was a pain to use at first,
like having to use Firefox, but it worked as advertised.

Hope that helps.

On Tue, Jan 7, 2014 at 10:34 PM, Greg Rundlett (freephile)
g...@freephile.com wrote:
 Hi Bill,

 The GPL-licensed HTTrack Website Copier works well (http://www.httrack.com/).
  I have not tried it on a MediaWiki site, but it's pretty adept at copying
 websites, including dynamically generated ones.




-- 
Eric Chadbourne
617.249.3377
http://theMnemeProject.org/
http://WebnerSolutions.com/
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss