On 15 January 2011 05:00, Stephan Beal <sgb...@googlemail.com> wrote:

>
> If there is an outcry of support for exporting certain fossil data as JSON,
> i would ask the group to propose JSON schemas for the relevant areas. (i'm
> thinking: timeline, maybe wiki pages... what else?).
>

In terms of schemas, my starting point would be to look at the ones that
already exist, in order to make it easier for people with code linking to
them to switch to Fossil. Taking the minimalist approach already applied to
the wiki would be the way to go; no need to support every feature. If
nothing else, we'll be borrowing from actual use cases discovered in other
communities.

Overall you could look at the Trac XmlRpcPlugin
(http://trac-hacks.org/wiki/XmlRpcPlugin#APIUsage), which exposes most of
Trac's functions via XML-RPC. It's been around a good few years and should
give a good idea of use cases for Fossil RPC (it's the interface I've used
before). Naturally you would want JSON input/output rather than XML.

With regard to the JSON schema, I think it would be worth using a
simplified version of the MediaWiki JSON output format
(http://www.mediawiki.org/wiki/API:Data_formats#Output). That would make it
easy to import documentation for projects, since there is a lot of good
starting material on Wikipedia regarding programming. It would be very
useful, therefore, to be able to take an output like:

   - http://en.wikipedia.org/w/api.php?action=query&prop=revisions&rvprop=content&format=json&titles=Representational_State_Transfer

This means: fetch (*action=query*) the content (*rvprop=content*) of the
most recent revision of the Representational_State_Transfer page on
Wikipedia, in JSON format (*format=json*).
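
To make that concrete, here is a minimal, untested sketch of the fetch in
Python (standard library only; the pages-keyed-by-id shape in the comment
is what this query returns, and Wikipedia's servers expect a descriptive
User-Agent on API requests):

    # Sketch: fetch the latest revision of a Wikipedia page as JSON.
    import json
    import urllib.parse
    import urllib.request

    API = "http://en.wikipedia.org/w/api.php"

    def fetch_page_content(title):
        params = urllib.parse.urlencode({
            "action": "query",
            "prop": "revisions",
            "rvprop": "content",
            "format": "json",
            "titles": title,
        })
        req = urllib.request.Request(
            API + "?" + params,
            headers={"User-Agent": "fossil-wiki-import-sketch/0.1"},
        )
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        # Shape: {"query": {"pages": {"<id>": {"revisions": [{"*": "wikitext"}]}}}}
        page = next(iter(data["query"]["pages"].values()))
        return page["revisions"][0]["*"]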

Being able to grab this content and insert it into a Fossil wiki page
using an existing library, simply by changing the endpoint for the web
service, would be very useful. It also provides useful test data to code
against. How about starting with this in terms of a roadmap: implement a
simplified subset of the parameters, and ignore returned data fields that
are not relevant. A sketch of the round trip follows below.
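
To be clear, the /json/wiki/save endpoint and payload shape below are
invented placeholders on my part; the real schema is exactly what this
thread would need to define (fetch_page_content is the function from the
sketch above):

    # Hypothetical sketch: post fetched wikitext to a Fossil wiki page.
    # Endpoint name and payload shape are placeholders, not a real API.
    import json
    import urllib.request

    FOSSIL = "http://localhost:8080/json/wiki/save"  # hypothetical

    def save_fossil_wiki(name, content):
        body = json.dumps({"payload": {"name": name, "content": content}})
        req = urllib.request.Request(
            FOSSIL,
            data=body.encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # e.g., reusing fetch_page_content() from the previous sketch:
    save_fossil_wiki("RESTNotes",
                     fetch_page_content("Representational_State_Transfer"))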


NB: with the Wikipedia content, it would be good practice to respect the
license terms and automatically import the page's history log, along with
a link back to the original article. This can be scripted, but it may be
worth considering in the schema, so that the right JSON export from
Wikipedia can be imported automatically with attribution and backlinks to
the Wikipedia article (see the sketch after the quote below).

   - see http://meta.wikimedia.org/wiki/Help:Transwiki

> On the remote talk page, copy and paste the original page's history log
> under a new heading (see Talk:Wikistress
> <http://meta.wikimedia.org/wiki/Talk:Wikistress> for an example). This is
> to adhere to the requirements of Wikimedia's license
> <http://meta.wikimedia.org/wiki/GFDL>, which requires that a record be
> made of all authors. If there are only a few authors, you can note them
> in your creation edit summary
> <http://meta.wikimedia.org/wiki/Help:Edit_summary> instead.
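
The author list is available from the same API (rvprop=user|timestamp and
rvlimit are real MediaWiki parameters; continuation handling for histories
longer than rvlimit is elided), so the attribution step could be sketched
as:

    # Sketch: collect contributor names so the import can carry attribution
    # and a backlink, per the license requirement quoted above.
    import json
    import urllib.parse
    import urllib.request

    API = "http://en.wikipedia.org/w/api.php"

    def fetch_authors(title, limit=500):
        params = urllib.parse.urlencode({
            "action": "query",
            "prop": "revisions",
            "rvprop": "user|timestamp",
            "rvlimit": str(limit),  # continuation handling elided
            "format": "json",
            "titles": title,
        })
        req = urllib.request.Request(
            API + "?" + params,
            headers={"User-Agent": "fossil-wiki-import-sketch/0.1"},
        )
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        page = next(iter(data["query"]["pages"].values()))
        return sorted({rev["user"] for rev in page["revisions"]})

    def attribution_footer(title, authors):
        url = "http://en.wikipedia.org/wiki/" + title
        return ("\n----\nImported from %s; contributors: %s"
                % (url, ", ".join(authors)))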


I'm happy to be a beta tester on this, as I am working on the
import/export code for a number of projects and have experimented with
doing this for Trac.