(CC to the sphinx-dev list, in case the Sphinx devs want to start collecting 3rd-party extensions in a single place.)
Hi all,

I'd like to push forward the discussion we had about extracting Sphinx
extensions from Numpy's refguide and publishing them in some central
location, for reuse. I think the idea was to find some SVN space, e.g. on
Google Code, and put the stuff there under a BSDish license. But I think
Stéfan also asked about putting them in a separate part of Sphinx's
repository. Was there a conclusion yet?

It would also be nice to see e.g. Matplotlib's extensions hit some more
public repository, given that we might have a use case at least for
something derived from the plot:: directive in Numpy.

Anyway, what we would have to offer in numpy-refguide [1] is:

* numpydoc.py (requires docscrape.py and docscrape_sphinx.py)

  Mangles docstrings from the Numpy docstring format into something Sphinx
  can handle. Useful for all auto*:: directives.

* mathml.py

  Implements math:: and :math: that output MathML for HTML pages. Handles
  a limited set of Latex constructs. I'm not sure how necessary this one is
  now that Sphinx has its own math extensions; it might be nice to have,
  but it would probably also need some refactoring of sphinx.ext.mathbase
  before inclusion.

* autosummary.py

  Generates tables of function signatures and short summaries, extracted
  from docstrings, and optionally generates toctree entries. An example of
  the output is here: [2]

  This was needed to make the numpy refguide more readable; typically our
  function docstrings are *long*, and having more than one per page made
  the documentation rather overwhelming. So I split the pages so that there
  is a separate reference HTML page per function, and in the middle of
  prose I put autosummary tables in place of full function descriptions.

* autosummary_generate.py

  Script that reads autosummary entries from the documentation and
  generates corresponding files, each containing a single autodoc
  directive. Repulsive, but it works.

* phantom_import.py

  Extension that makes autodoc directives extract docstrings from an XML
  file, as output by our online documentation editor. [3] Useful for
  avoiding recompiling the module.

* traitsdoc.py (by Robert Kern; from Chaco)

  Extracts documentation from comments above Traits attributes. This one
  actually looks a bit like issue #7 in Sphinx. The code could probably be
  modified so that it extracts comments above all attributes, not just
  Traited ones. (Attached is a version modified to work with the current
  numpydoc.py.)

So, I'd be ready to push the extensions listed above somewhere. If we want
to open a new Google Code project for this, I can do that, too. These
extensions are currently sparsely documented (there is some usage info at
the top of each file), but I can set aside some time to write proper
documentation. (A minimal conf.py sketch of how these extensions hook into
a Sphinx build follows below, after the quoted thread.)

.. [1] http://bazaar.launchpad.net/%7Epauli-virtanen/scipy/numpy-refguide/files/173?file_id=ext-20080720180401-jr1g7rtazobrcufr-1
.. [2] http://www.elisanet.fi/ptvirtan/tmp/numpy-refguide/reference/routines.random.xhtml
.. [3] http://sd-2116.dedibox.fr/pydocweb/ ; http://code.google.com/p/pydocweb

--
Pauli Virtanen


On Thu, 2008-09-04 at 02:42 -0500, Robert Kern wrote:
> On Thu, Sep 4, 2008 at 02:25, Pauli Virtanen <[EMAIL PROTECTED]> wrote:
>
> >> I'm going to drop this into our Chaco Sphinx docs tomorrow. We're
> >> using this in Chaco as a pilot project until we beat on the system a
> >> little. We're going to use this for all of our projects eventually, so
> >> it would be nice if there were a central location for this code.
> >> It might be worth making a real package out of all of the
> >> numpy/matplotlib/enthought Sphinx extensions. I'm sure my comment
> >> extraction code is useful for many other projects, too.
> >
> > Yep, there's now some danger of fragmentation, so maybe we should just
> > pull the ext/ directory out of the refguide into a different branch. I
> > wonder if bzr supports externals...
>
> I don't think it does. I think this is the page talking about plans
> for such a feature:
>
>     http://bazaar-vcs.org/NestedTreeSupport
>
> Regardless of where most of the development takes place, it would be
> really useful to have an SVN mirror of the trunk and any releases so
> it can be used as an svn:externals in SVN-using projects. That's how
> we were going to manage using these extensions in all of our projects.
>
> > Should we make a Launchpad project for it? (I note that we seem to be
> > misusing the Launchpad project concept a bit now; all the stuff is
> > under the Scipy project.)
>
> It might be useful to have a Google Code project as the "main" SVN
> repository for the reasons given above. We can always have a Launchpad
> mirror for Bazaar-using developers.
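For anyone wanting to try these out before they land in a central
repository, here is a minimal conf.py sketch of how the extensions can be
wired into a Sphinx project. It assumes the extension modules sit in an
ext/ directory next to conf.py (as in the refguide checkout mentioned
above); the directory name and the particular selection of modules are
illustrative assumptions, not something prescribed by the refguide::

    # conf.py -- minimal sketch; the ext/ path and the module list below
    # are assumptions for illustration, mirroring the refguide layout.
    import os
    import sys

    # Make the bundled extension modules importable by name.
    sys.path.insert(0, os.path.abspath('ext'))

    # Sphinx imports each entry in `extensions` as a module and calls its
    # setup(app) function.
    extensions = ['numpydoc', 'autosummary', 'phantom_import']

A prose page then requests a summary table with an autosummary:: directive
listing the objects to include; autosummary_generate.py is run separately
to create the per-function stub pages that such a table links to.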
""" ========= traitsdoc ========= Sphinx extension that handles docstrings in the Numpy standard format, [1] and support Traits [2]. This extension can be used as a replacement for ``numpydoc`` when support for Traits is required. .. [1] http://projects.scipy.org/scipy/numpy/wiki/CodingStyleGuidelines#docstring-standard .. [2] http://code.enthought.com/projects/traits/ """ import inspect import os import pydoc import docscrape import docscrape_sphinx from docscrape_sphinx import SphinxClassDoc, SphinxFunctionDoc, SphinxDocString import numpydoc import comment_eater class SphinxTraitsDoc(SphinxClassDoc): def __init__(self, cls, modulename='', func_doc=SphinxFunctionDoc): if not inspect.isclass(cls): raise ValueError("Initialise using a class. Got %r" % cls) self._cls = cls if modulename and not modulename.endswith('.'): modulename += '.' self._mod = modulename self._name = cls.__name__ self._func_doc = func_doc docstring = pydoc.getdoc(cls) docstring = docstring.split('\n') # De-indent paragraph try: indent = min(len(s) - len(s.lstrip()) for s in docstring if s.strip()) except ValueError: indent = 0 for n,line in enumerate(docstring): docstring[n] = docstring[n][indent:] self._doc = docscrape.Reader(docstring) self._parsed_data = { 'Signature': '', 'Summary': '', 'Description': [], 'Extended Summary': [], 'Parameters': [], 'Returns': [], 'Raises': [], 'Warns': [], 'Other Parameters': [], 'Traits': [], 'Methods': [], 'See Also': [], 'Notes': [], 'References': '', 'Example': '', 'Examples': '', 'index': {} } self._parse() def _str_summary(self): return self['Summary'] + [''] def _str_extended_summary(self): return self['Description'] + self['Extended Summary'] + [''] def __str__(self, indent=0, func_role="func"): out = [] out += self._str_signature() out += self._str_index() + [''] out += self._str_summary() out += self._str_extended_summary() for param_list in ('Parameters', 'Traits', 'Methods', 'Returns','Raises'): out += self._str_param_list(param_list) out += self._str_see_also("obj") out += self._str_section('Notes') out += self._str_references() out += self._str_section('Example') out += self._str_section('Examples') out = self._str_indent(out,indent) return '\n'.join(out) def looks_like_issubclass(obj, classname): """ Return True if the object has a class or superclass with the given class name. Ignores old-style classes. """ t = obj if t.__name__ == classname: return True for klass in t.__mro__: if klass.__name__ == classname: return True return False def get_doc_object(obj, what=None): if what is None: if inspect.isclass(obj): what = 'class' elif inspect.ismodule(obj): what = 'module' elif callable(obj): what = 'function' else: what = 'object' if what == 'class': doc = SphinxTraitsDoc(obj, '', func_doc=SphinxFunctionDoc) if looks_like_issubclass(obj, 'HasTraits'): for name, trait, comment in comment_eater.get_class_traits(obj): # Exclude private traits. if not name.startswith('_'): doc['Traits'].append((name, trait, comment.splitlines())) return doc elif what in ('function', 'method'): return SphinxFunctionDoc(obj, '') else: return SphinxDocString(pydoc.getdoc(obj)) def setup(app): # init numpydoc numpydoc.setup(app, get_doc_object)