Sphinx-1.3.6/EXAMPLES
Projects using Sphinx
=====================
This is an (incomplete) alphabetic list of projects that use Sphinx or
are experimenting with using it for their documentation. If you would like to
be included, please mail to `the Google group
`_.
I've grouped the list into sections to make it easier to find
interesting examples.
Documentation using the default theme
-------------------------------------
* APSW: http://apidoc.apsw.googlecode.com/hg/index.html
* ASE: https://wiki.fysik.dtu.dk/ase/
* Calibre: http://manual.calibre-ebook.com/
* CodePy: http://documen.tician.de/codepy/
* Cython: http://docs.cython.org/
* Cormoran: http://cormoran.nhopkg.org/docs/
* Director: http://pythonhosted.org/director/
* Dirigible: http://www.projectdirigible.com/documentation/
* F2py: http://f2py.sourceforge.net/docs/
* GeoDjango: https://docs.djangoproject.com/en/dev/ref/contrib/gis/
* Genomedata:
http://noble.gs.washington.edu/proj/genomedata/doc/1.2.2/genomedata.html
* gevent: http://www.gevent.org/
* Google Wave API:
http://wave-robot-python-client.googlecode.com/svn/trunk/pydocs/index.html
* GSL Shell: http://www.nongnu.org/gsl-shell/
* Heapkeeper: http://heapkeeper.org/
* Hands-on Python Tutorial:
http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/
* Hedge: http://documen.tician.de/hedge/
* Leo: http://leoeditor.com/
* Lino: http://lino.saffre-rumma.net/
* MeshPy: http://documen.tician.de/meshpy/
* mpmath: http://mpmath.googlecode.com/svn/trunk/doc/build/index.html
* OpenEXR: http://excamera.com/articles/26/doc/index.html
* OpenGDA: http://www.opengda.org/gdadoc/html/
* openWNS: http://docs.openwns.org/
* Paste: http://pythonpaste.org/script/
* Paver: http://paver.github.io/paver/
* Pioneers and Prominent Men of Utah: http://pioneers.rstebbing.com/
* PyCantonese: http://pycantonese.github.io/
* Pyccuracy: https://github.com/heynemann/pyccuracy/wiki/
* PyCuda: http://documen.tician.de/pycuda/
* Pyevolve: http://pyevolve.sourceforge.net/
* Pylo: http://documen.tician.de/pylo/
* PyMQI: http://pythonhosted.org/pymqi/
* PyPubSub: http://pubsub.sourceforge.net/
* pySPACE: http://pyspace.github.io/pyspace/
* Python: https://docs.python.org/
* python-apt: http://apt.alioth.debian.org/python-apt-doc/
* PyUblas: http://documen.tician.de/pyublas/
* Quex: http://quex.sourceforge.net/doc/html/main.html
* Scapy: http://www.secdev.org/projects/scapy/doc/
* Segway: http://noble.gs.washington.edu/proj/segway/doc/1.1.0/segway.html
* SimPy: http://simpy.readthedocs.org/en/latest/
* SymPy: http://docs.sympy.org/
* WTForms: http://wtforms.simplecodes.com/docs/
* z3c: http://www.ibiblio.org/paulcarduner/z3ctutorial/
Documentation using a customized version of the default theme
-------------------------------------------------------------
* Advanced Generic Widgets:
http://xoomer.virgilio.it/infinity77/AGW_Docs/index.html
* Bazaar: http://doc.bazaar.canonical.com/en/
* CakePHP: http://book.cakephp.org/2.0/en/index.html
* Chaco: http://docs.enthought.com/chaco/
* Chef: https://docs.chef.io/index.html
* Djagios: http://djagios.org/
* EZ-Draw: http://pageperso.lif.univ-mrs.fr/~edouard.thiel/ez-draw/doc/en/html/ez-manual.html
* GetFEM++: http://home.gna.org/getfem/
* Google or-tools:
https://or-tools.googlecode.com/svn/trunk/documentation/user_manual/index.html
* GPAW: https://wiki.fysik.dtu.dk/gpaw/
* Grok: http://grok.zope.org/doc/current/
* IFM: http://fluffybunny.memebot.com/ifm-docs/index.html
* Kaa: http://api.freevo.org/kaa-base/
* LEPL: http://www.acooke.org/lepl/
* Mayavi: http://docs.enthought.com/mayavi/mayavi/
* NICOS: http://trac.frm2.tum.de/nicos/doc/nicos-master/index.html
* NOC: http://redmine.nocproject.org/projects/noc
* NumPy: http://docs.scipy.org/doc/numpy/reference/
* OpenCV: http://docs.opencv.org/
* Peach^3: http://peach3.nl/doc/latest/userdoc/
* Sage: http://sagemath.org/doc/
* SciPy: http://docs.scipy.org/doc/scipy/reference/
* simuPOP: http://simupop.sourceforge.net/manual_release/build/userGuide.html
* Sprox: http://sprox.org/
* TurboGears: http://turbogears.org/2.0/docs/
* Varnish: https://www.varnish-cache.org/docs/
* Zentyal: http://doc.zentyal.org/
* Zope: http://docs.zope.org/zope2/index.html
* zc.async: http://pythonhosted.org/zc.async/1.5.0/
Documentation using the sphinxdoc theme
---------------------------------------
* Fityk: http://fityk.nieto.pl/
* MapServer: http://mapserver.org/
* Matplotlib: http://matplotlib.org/
* Music21: http://web.mit.edu/music21/doc/index.html
* NetworkX: http://networkx.github.io/
* Pweave: http://mpastell.com/pweave/
* Pyre: http://docs.danse.us/pyre/sphinx/
* Pysparse: http://pysparse.sourceforge.net/
* PyTango:
http://www.esrf.eu/computing/cs/tango/tango_doc/kernel_doc/pytango/latest/index.html
* Python Wild Magic: http://vmlaker.github.io/pythonwildmagic/
* Reteisi: http://www.reteisi.org/contents.html
* Sqlkit: http://sqlkit.argolinux.org/
* Tau: http://www.tango-controls.org/static/tau/latest/doc/html/index.html
* Turbulenz: http://docs.turbulenz.com/
* WebFaction: http://docs.webfaction.com/
Documentation using another builtin theme
-----------------------------------------
* C/C++ Development with Eclipse: http://eclipsebook.in/ (agogo)
* Jinja: http://jinja.pocoo.org/ (scrolls)
* jsFiddle: http://doc.jsfiddle.net/ (nature)
* libLAS: http://www.liblas.org/ (nature)
* MPipe: http://vmlaker.github.io/mpipe/ (sphinx13)
* pip: https://pip.pypa.io/en/latest/ (sphinx_rtd_theme)
* Pyramid web framework:
http://docs.pylonsproject.org/projects/pyramid/en/latest/ (pyramid)
* Programmieren mit PyGTK und Glade (German):
http://www.florian-diesch.de/doc/python-und-glade/online/ (agogo)
* Satchmo: http://docs.satchmoproject.com/en/latest/ (sphinx_rtd_theme)
* Setuptools: http://pythonhosted.org/setuptools/ (nature)
* Spring Python: http://docs.spring.io/spring-python/1.2.x/sphinx/html/ (nature)
* sqlparse: http://python-sqlparse.googlecode.com/svn/docs/api/index.html
(agogo)
* Sylli: http://sylli.sourceforge.net/ (nature)
* Tuleap Open ALM: https://tuleap.net/doc/en/ (nature)
* Valence: http://docs.valence.desire2learn.com/ (haiku)
Documentation using a custom theme/integrated in a site
-------------------------------------------------------
* Blender: http://www.blender.org/documentation/250PythonDoc/
* Blinker: http://discorporate.us/projects/Blinker/docs/
* Ceph: http://ceph.com/docs/master/
* Classy: http://www.pocoo.org/projects/classy/
* DEAP: http://deap.gel.ulaval.ca/doc/0.8/index.html
* Django: http://docs.djangoproject.com/
* Elemental: http://libelemental.org/documentation/dev/index.html
* Enterprise Toolkit for Acrobat products:
http://www.adobe.com/devnet-docs/acrobatetk/
* e-cidadania: http://e-cidadania.readthedocs.org/en/latest/
* Flask: http://flask.pocoo.org/docs/
* Flask-OpenID: http://pythonhosted.org/Flask-OpenID/
* Gameduino: http://excamera.com/sphinx/gameduino/
* GeoServer: http://docs.geoserver.org/
* Glashammer: http://glashammer.org/
* Istihza (Turkish Python documentation project):
http://www.istihza.com/py2/icindekiler_python.html
* Lasso: http://lassoguide.com/
* Manage documentation such as source code (fr): http://redaction-technique.org/
* MathJax: http://docs.mathjax.org/en/latest/
* MirrorBrain: http://mirrorbrain.org/docs/
* MyHDL: http://docs.myhdl.org/en/latest/
* nose: http://nose.readthedocs.org/en/latest/
* NoTex: https://notex.ch/overview/
* ObjectListView: http://objectlistview.sourceforge.net/python/
* Open ERP: http://doc.openerp.com/
* OpenCV: http://docs.opencv.org/
* Open Dylan: http://opendylan.org/documentation/ and also provides a
`dylan domain `__
* OpenLayers: http://docs.openlayers.org/
* PyEphem: http://rhodesmill.org/pyephem/
* German Plone user manual: http://www.hasecke.com/plone-benutzerhandbuch/
* PSI4: http://sirius.chem.vt.edu/psi4manual/latest/index.html
* Pylons: http://docs.pylonsproject.org/projects/pylons-webframework/en/latest/
* PyMOTW: http://pymotw.com/2/
* python-aspectlib: http://python-aspectlib.readthedocs.org/en/latest/
(`sphinx-py3doc-enhanced-theme`_)
* QGIS: http://qgis.org/en/docs/index.html
* qooxdoo: http://manual.qooxdoo.org/current/
* Roundup: http://www.roundup-tracker.org/
* Selenium: http://docs.seleniumhq.org/docs/
* Self: http://selflanguage.org/
* Substance D: http://docs.pylonsproject.org/projects/substanced/en/latest/
* Tablib: http://tablib.org/
* SQLAlchemy: http://www.sqlalchemy.org/docs/
* tinyTiM: http://tinytim.sourceforge.net/docs/2.0/
* Ubuntu packaging guide: http://packaging.ubuntu.com/html/
* Werkzeug: http://werkzeug.pocoo.org/docs/
* WFront: http://discorporate.us/projects/WFront/
.. _sphinx-py3doc-enhanced-theme: https://pypi.python.org/pypi/sphinx_py3doc_enhanced_theme
Homepages and other non-documentation sites
-------------------------------------------
* Applied Mathematics at the Stellenbosch University: http://dip.sun.ac.za/
* A personal page: http://www.dehlia.in/
* Benoit Boissinot: http://bboissin.appspot.com/
* lunarsite: http://lunaryorn.de/
* Red Hot Chili Python: http://redhotchilipython.com/
* The Wine Cellar Book: http://www.thewinecellarbook.com/doc/en/
* UC Berkeley Advanced Control Systems course:
http://www.me.berkeley.edu/ME233/sp14/
* VOR: http://www.vor-cycling.be/
Books produced using Sphinx
---------------------------
* "The ``repoze.bfg`` Web Application Framework":
http://www.amazon.com/repoze-bfg-Web-Application-Framework-Version/dp/0615345379
* A Theoretical Physics Reference book: http://theoretical-physics.net/
* "Simple and Steady Way of Learning for Software Engineering" (in Japanese):
http://www.amazon.co.jp/dp/477414259X/
* "Expert Python Programming":
http://www.packtpub.com/expert-python-programming/book
* "Expert Python Programming" (Japanese translation):
http://www.amazon.co.jp/dp/4048686291/
* "Pomodoro Technique Illustrated" (Japanese translation):
http://www.amazon.co.jp/dp/4048689525/
* "Python Professional Programming" (in Japanese):
http://www.amazon.co.jp/dp/4798032948/
* "Die Wahrheit des Sehens. Der DEKALOG von Krzysztof Kieślowski":
http://www.hasecke.eu/Dekalog/
* The "Varnish Book":
https://www.varnish-software.com/static/book/
* "Learning Sphinx" (in Japanese):
http://www.oreilly.co.jp/books/9784873116488/
* "LassoGuide":
http://www.lassosoft.com/Lasso-Documentation
* "Software-Dokumentation mit Sphinx": http://www.amazon.de/dp/1497448689/
Theses using Sphinx
-------------------
* "A Web-Based System for Comparative Analysis of OpenStreetMap Data
by the Use of CouchDB":
https://www.yumpu.com/et/document/view/11722645/masterthesis-markusmayr-0542042
Sphinx-1.3.6/LICENSE
License for Sphinx
==================
Copyright (c) 2007-2016 by the Sphinx team (see AUTHORS file).
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Licenses for incorporated software
==================================
The pgen2 package, included in this distribution under the name
sphinx.pycode.pgen2, is available in the Python 2.6 distribution under
the PSF license agreement for Python:
----------------------------------------------------------------------
Copyright © 2001-2008 Python Software Foundation; All Rights Reserved.
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing
and otherwise using Python 2.6 software in source or binary form
and its associated documentation.
2. Subject to the terms and conditions of this License Agreement, PSF
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display
publicly, prepare derivative works, distribute, and otherwise use
Python 2.6 alone or in any derivative version, provided, however,
that PSF's License Agreement and PSF's notice of copyright, i.e.,
"Copyright © 2001-2008 Python Software Foundation; All Rights
Reserved" are retained in Python 2.6 alone or in any derivative
version prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 2.6 or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary
of the changes made to Python 2.6.
4. PSF is making Python 2.6 available to Licensee on an "AS IS" basis.
PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY
WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY
REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY
PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 2.6 WILL NOT INFRINGE
ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
2.6 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON
2.6, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY
THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF
and Licensee. This License Agreement does not grant permission to
use PSF trademarks or trade name in a trademark sense to endorse or
promote products or services of Licensee, or any third party.
8. By copying, installing or otherwise using Python 2.6, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
----------------------------------------------------------------------
The smartypants module, included in this distribution as sphinx.util.smartypants,
is available under the following license:
----------------------------------------------------------------------
SmartyPants_ license::
Copyright (c) 2003 John Gruber
(http://daringfireball.net/)
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
* Neither the name "SmartyPants" nor the names of its
contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
This software is provided by the copyright holders and
contributors "as is" and any express or implied warranties,
including, but not limited to, the implied warranties of
merchantability and fitness for a particular purpose are
disclaimed. In no event shall the copyright owner or contributors
be liable for any direct, indirect, incidental, special,
exemplary, or consequential damages (including, but not limited
to, procurement of substitute goods or services; loss of use,
data, or profits; or business interruption) however caused and on
any theory of liability, whether in contract, strict liability, or
tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of
such damage.
smartypants.py license::
smartypants.py is a derivative work of SmartyPants.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
This software is provided by the copyright holders and
contributors "as is" and any express or implied warranties,
including, but not limited to, the implied warranties of
merchantability and fitness for a particular purpose are
disclaimed. In no event shall the copyright owner or contributors
be liable for any direct, indirect, incidental, special,
exemplary, or consequential damages (including, but not limited
to, procurement of substitute goods or services; loss of use,
data, or profits; or business interruption) however caused and on
any theory of liability, whether in contract, strict liability, or
tort (including negligence or otherwise) arising in any way out of
the use of this software, even if advised of the possibility of
such damage.
----------------------------------------------------------------------
The ElementTree package, included in this distribution in
test/etree13, is available under the following license:
----------------------------------------------------------------------
The ElementTree toolkit is
Copyright (c) 1999-2007 by Fredrik Lundh
By obtaining, using, and/or copying this software and/or its
associated documentation, you agree that you have read, understood,
and will comply with the following terms and conditions:
Permission to use, copy, modify, and distribute this software and its
associated documentation for any purpose and without fee is hereby
granted, provided that the above copyright notice appears in all
copies, and that both that copyright notice and this permission notice
appear in supporting documentation, and that the name of Secret Labs
AB or the author not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.
SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT- ABILITY
AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
----------------------------------------------------------------------
The included JQuery JavaScript library is available under the MIT
license:
----------------------------------------------------------------------
Copyright (c) 2008 John Resig, http://jquery.com/
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------
The included Underscore JavaScript library is available under the MIT
license:
----------------------------------------------------------------------
Copyright (c) 2009 Jeremy Ashkenas, DocumentCloud
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
-------------------------------------------------------------------------------
The included implementation of NumpyDocstring._parse_numpydoc_see_also_section
was derived from code under the following license:
-------------------------------------------------------------------------------
Copyright (C) 2008 Stefan van der Walt , Pauli Virtanen
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
-------------------------------------------------------------------------------
Sphinx-1.3.6/README.rst
=================
README for Sphinx
=================
This is the Sphinx documentation generator, see http://sphinx-doc.org/.
Installing
==========
Install the stable version from PyPI::
pip install -U sphinx
Install the beta version from PyPI::
pip install -U --pre sphinx
Install the newest development version from the stable branch::
pip install git+https://github.com/sphinx-doc/sphinx@stable
Install the newest development version from the master branch::
pip install git+https://github.com/sphinx-doc/sphinx
Install from a cloned source checkout::
pip install .
Install from a cloned source checkout in editable mode::
pip install -e .
Reading the docs
================
After installing::
cd doc
make html
Then, direct your browser to ``_build/html/index.html``.
Or read them online at http://sphinx-doc.org/.
Testing
=======
To run the tests with the interpreter available as ``python``, use::
make test
If you want to use a different interpreter, e.g. ``python3``, use::
PYTHON=python3 make test
Continuous testing runs on travis:
.. image:: https://travis-ci.org/sphinx-doc/sphinx.svg?branch=master
:target: https://travis-ci.org/sphinx-doc/sphinx
Contributing
============
#. Check for open issues or open a fresh issue to start a discussion around a
feature idea or a bug.
#. If you feel uncomfortable or uncertain about an issue or your changes, feel
free to email sphinx-dev@googlegroups.com.
#. Fork the repository on GitHub https://github.com/sphinx-doc/sphinx
to start making your changes to the **master** branch for the next major
version, or the **stable** branch for the next minor version.
#. Write a test which shows that the bug was fixed or that the feature works
as expected. Use ``make test`` to run the test suite.
#. Send a pull request and bug the maintainer until it gets merged and
published. Make sure to add yourself to AUTHORS
and the change to
CHANGES.
Sphinx-1.3.6/babel.cfg
# How to set up this file
# http://babel.edgewall.org/wiki/Documentation/setup.html
# this file description:
# http://babel.edgewall.org/wiki/Documentation/messages.html#extraction-method-mapping-and-configuration
# Extraction from Python source files
[python: **.py]
encoding = utf-8
# Extraction from Jinja2 HTML templates
[jinja2: **/themes/**.html]
encoding = utf-8
ignore_tags = script,style
include_attrs = alt title summary
# Extraction from Jinja2 XML templates
[jinja2: **/themes/**.xml]
# Extraction from JavaScript files
[javascript: **.js]
[javascript: **.js_t]
Sphinx-1.3.6/setup.cfg
[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0
[aliases]
release = egg_info -RDb ''
upload = upload --sign --identity=36580288
[extract_messages]
mapping_file = babel.cfg
output_file = sphinx/locale/sphinx.pot
keywords = _ l_ lazy_gettext
[update_catalog]
input_file = sphinx/locale/sphinx.pot
domain = sphinx
output_dir = sphinx/locale/
[compile_catalog]
domain = sphinx
directory = sphinx/locale/
[wheel]
universal = 1
[flake8]
max-line-length = 95
ignore = E113,E116,E221,E226,E241,E251
exclude = ez_setup.py,utils/*,tests/*,build/*,sphinx/search/*,sphinx/pycode/pgen2/*
Sphinx-1.3.6/sphinx-autogen.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Sphinx - Python documentation toolchain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import sys
if __name__ == '__main__':
from sphinx.ext.autosummary.generate import main
sys.exit(main(sys.argv))
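# Typical command-line invocation (an illustrative sketch; the paths are
# examples, mirroring the Makefile rule in sphinx/ext/autosummary/generate.py):
#
#     sphinx-autogen -o source/generated source/*.rst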
Sphinx-1.3.6/sphinx/ext/autosummary/__init__.py
# -*- coding: utf-8 -*-
"""
sphinx.ext.autosummary
~~~~~~~~~~~~~~~~~~~~~~
Sphinx extension that adds an autosummary:: directive, which can be
used to generate function/method/attribute/etc. summary lists, similar
to those output e.g. by Epydoc and other API doc generation tools.
An :autolink: role is also provided.
autosummary directive
---------------------
The autosummary directive has the form::
.. autosummary::
:nosignatures:
:toctree: generated/
module.function_1
module.function_2
...
and it generates an output table (containing signatures, optionally)
======================== =============================================
module.function_1(args) Summary line from the docstring of function_1
module.function_2(args) Summary line from the docstring
...
======================== =============================================
If the :toctree: option is specified, files matching the function names
are inserted to the toctree with the given prefix:
generated/module.function_1
generated/module.function_2
...
Note: The file names contain the module:: or currentmodule:: prefixes.
.. seealso:: autosummary_generate.py
autolink role
-------------
The autolink role functions as ``:obj:`` when the name referred can be
resolved to a Python object, and otherwise it becomes simple emphasis.
This can be used as the default role to make links 'smart'.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import os
import re
import sys
import inspect
import posixpath
from types import ModuleType
from six import text_type
from docutils.parsers.rst import directives
from docutils.statemachine import ViewList
from docutils import nodes
import sphinx
from sphinx import addnodes
from sphinx.util.compat import Directive
from sphinx.pycode import ModuleAnalyzer, PycodeError
from sphinx.ext.autodoc import Options
# -- autosummary_toc node ------------------------------------------------------
class autosummary_toc(nodes.comment):
pass
def process_autosummary_toc(app, doctree):
"""Insert items described in autosummary:: to the TOC tree, but do
not generate the toctree:: list.
"""
env = app.builder.env
crawled = {}
def crawl_toc(node, depth=1):
crawled[node] = True
for j, subnode in enumerate(node):
try:
if (isinstance(subnode, autosummary_toc) and
isinstance(subnode[0], addnodes.toctree)):
env.note_toctree(env.docname, subnode[0])
continue
except IndexError:
continue
if not isinstance(subnode, nodes.section):
continue
if subnode not in crawled:
crawl_toc(subnode, depth+1)
crawl_toc(doctree)
def autosummary_toc_visit_html(self, node):
"""Hide autosummary toctree list in HTML output."""
raise nodes.SkipNode
def autosummary_noop(self, node):
pass
# -- autosummary_table node ----------------------------------------------------
class autosummary_table(nodes.comment):
pass
def autosummary_table_visit_html(self, node):
"""Make the first column of the table non-breaking."""
try:
tbody = node[0][0][-1]
for row in tbody:
col1_entry = row[0]
par = col1_entry[0]
for j, subnode in enumerate(list(par)):
if isinstance(subnode, nodes.Text):
new_text = text_type(subnode.astext())
new_text = new_text.replace(u" ", u"\u00a0")
par[j] = nodes.Text(new_text)
except IndexError:
pass
# -- autodoc integration -------------------------------------------------------
class FakeDirective:
env = {}
genopt = Options()
def get_documenter(obj, parent):
"""Get an autodoc.Documenter class suitable for documenting the given
object.
*obj* is the Python object to be documented, and *parent* is another
Python object (e.g. a module or a class) to which *obj* belongs.
"""
from sphinx.ext.autodoc import AutoDirective, DataDocumenter, \
ModuleDocumenter
if inspect.ismodule(obj):
# ModuleDocumenter.can_document_member always returns False
return ModuleDocumenter
# Construct a fake documenter for *parent*
if parent is not None:
parent_doc_cls = get_documenter(parent, None)
else:
parent_doc_cls = ModuleDocumenter
if hasattr(parent, '__name__'):
parent_doc = parent_doc_cls(FakeDirective(), parent.__name__)
else:
parent_doc = parent_doc_cls(FakeDirective(), "")
# Get the correct documenter class for *obj*
classes = [cls for cls in AutoDirective._registry.values()
if cls.can_document_member(obj, '', False, parent_doc)]
if classes:
classes.sort(key=lambda cls: cls.priority)
return classes[-1]
else:
return DataDocumenter
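# Illustrative sketch (not part of the original module): module objects
# short-circuit to ModuleDocumenter; for anything else the best match from
# AutoDirective._registry wins by priority.
#
#     >>> import os
#     >>> from sphinx.ext.autodoc import ModuleDocumenter
#     >>> get_documenter(os.path, None) is ModuleDocumenter
#     True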
# -- .. autosummary:: ----------------------------------------------------------
class Autosummary(Directive):
"""
Pretty table containing short signatures and summaries of functions etc.
autosummary can also optionally generate a hidden toctree:: node.
"""
required_arguments = 0
optional_arguments = 0
final_argument_whitespace = False
has_content = True
option_spec = {
'toctree': directives.unchanged,
'nosignatures': directives.flag,
'template': directives.unchanged,
}
def warn(self, msg):
self.warnings.append(self.state.document.reporter.warning(
msg, line=self.lineno))
def run(self):
self.env = env = self.state.document.settings.env
self.genopt = Options()
self.warnings = []
self.result = ViewList()
names = [x.strip().split()[0] for x in self.content
if x.strip() and re.search(r'^[~a-zA-Z_]', x.strip()[0])]
items = self.get_items(names)
nodes = self.get_table(items)
if 'toctree' in self.options:
dirname = posixpath.dirname(env.docname)
tree_prefix = self.options['toctree'].strip()
docnames = []
for name, sig, summary, real_name in items:
docname = posixpath.join(tree_prefix, real_name)
docname = posixpath.normpath(posixpath.join(dirname, docname))
if docname not in env.found_docs:
self.warn('toctree references unknown document %r'
% docname)
docnames.append(docname)
tocnode = addnodes.toctree()
tocnode['includefiles'] = docnames
tocnode['entries'] = [(None, docn) for docn in docnames]
tocnode['maxdepth'] = -1
tocnode['glob'] = None
tocnode = autosummary_toc('', '', tocnode)
nodes.append(tocnode)
return self.warnings + nodes
def get_items(self, names):
"""Try to import the given names, and return a list of
``[(name, signature, summary_string, real_name), ...]``.
"""
env = self.state.document.settings.env
prefixes = get_import_prefixes_from_env(env)
items = []
max_item_chars = 50
for name in names:
display_name = name
if name.startswith('~'):
name = name[1:]
display_name = name.split('.')[-1]
try:
real_name, obj, parent, modname = import_by_name(name, prefixes=prefixes)
except ImportError:
self.warn('failed to import %s' % name)
items.append((name, '', '', name))
continue
self.result = ViewList() # initialize for each documenter
full_name = real_name
if not isinstance(obj, ModuleType):
# give explicitly separated module name, so that members
# of inner classes can be documented
full_name = modname + '::' + full_name[len(modname)+1:]
# NB. using full_name here is important, since Documenters
# handle module prefixes slightly differently
documenter = get_documenter(obj, parent)(self, full_name)
if not documenter.parse_name():
self.warn('failed to parse name %s' % real_name)
items.append((display_name, '', '', real_name))
continue
if not documenter.import_object():
self.warn('failed to import object %s' % real_name)
items.append((display_name, '', '', real_name))
continue
if documenter.options.members and not documenter.check_module():
continue
# try to also get a source code analyzer for attribute docs
try:
documenter.analyzer = ModuleAnalyzer.for_module(
documenter.get_real_modname())
# parse right now, to get PycodeErrors on parsing (results will
# be cached anyway)
documenter.analyzer.find_attr_docs()
except PycodeError as err:
documenter.env.app.debug(
'[autodoc] module analyzer failed: %s', err)
# no source file -- e.g. for builtin and C modules
documenter.analyzer = None
# -- Grab the signature
sig = documenter.format_signature()
if not sig:
sig = ''
else:
max_chars = max(10, max_item_chars - len(display_name))
sig = mangle_signature(sig, max_chars=max_chars)
sig = sig.replace('*', r'\*')
# -- Grab the summary
documenter.add_content(None)
doc = list(documenter.process_doc([self.result.data]))
while doc and not doc[0].strip():
doc.pop(0)
# If there's a blank line, then we can assume the first sentence /
# paragraph has ended, so anything after shouldn't be part of the
# summary
for i, piece in enumerate(doc):
if not piece.strip():
doc = doc[:i]
break
# Try to find the "first sentence", which may span multiple lines
m = re.search(r"^([A-Z].*?\.)(?:\s|$)", " ".join(doc).strip())
if m:
summary = m.group(1).strip()
elif doc:
summary = doc[0].strip()
else:
summary = ''
items.append((display_name, sig, summary, real_name))
return items
def get_table(self, items):
"""Generate a proper list of table nodes for autosummary:: directive.
*items* is a list produced by :meth:`get_items`.
"""
table_spec = addnodes.tabular_col_spec()
table_spec['spec'] = 'll'
table = autosummary_table('')
real_table = nodes.table('', classes=['longtable'])
table.append(real_table)
group = nodes.tgroup('', cols=2)
real_table.append(group)
group.append(nodes.colspec('', colwidth=10))
group.append(nodes.colspec('', colwidth=90))
body = nodes.tbody('')
group.append(body)
def append_row(*column_texts):
row = nodes.row('')
for text in column_texts:
node = nodes.paragraph('')
vl = ViewList()
vl.append(text, '')
self.state.nested_parse(vl, 0, node)
try:
if isinstance(node[0], nodes.paragraph):
node = node[0]
except IndexError:
pass
row.append(nodes.entry('', node))
body.append(row)
for name, sig, summary, real_name in items:
qualifier = 'obj'
if 'nosignatures' not in self.options:
col1 = ':%s:`%s <%s>`\ %s' % (qualifier, name, real_name, sig)
else:
col1 = ':%s:`%s <%s>`' % (qualifier, name, real_name)
col2 = summary
append_row(col1, col2)
return [table_spec, table]
def mangle_signature(sig, max_chars=30):
"""Reformat a function signature to a more compact form."""
s = re.sub(r"^\((.*)\)$", r"\1", sig).strip()
# Strip strings (which can contain things that confuse the code below)
s = re.sub(r"\\\\", "", s)
s = re.sub(r"\\'", "", s)
s = re.sub(r"'[^']*'", "", s)
# Parse the signature to arguments + options
args = []
opts = []
opt_re = re.compile(r"^(.*, |)([a-zA-Z0-9_*]+)=")
while s:
m = opt_re.search(s)
if not m:
# The rest are arguments
args = s.split(', ')
break
opts.insert(0, m.group(2))
s = m.group(1)[:-2]
# Produce a more compact signature
sig = limited_join(", ", args, max_chars=max_chars-2)
if opts:
if not sig:
sig = "[%s]" % limited_join(", ", opts, max_chars=max_chars-4)
elif len(sig) < max_chars - 4 - 2 - 3:
sig += "[, %s]" % limited_join(", ", opts,
max_chars=max_chars-len(sig)-4-2)
return u"(%s)" % sig
def limited_join(sep, items, max_chars=30, overflow_marker="..."):
"""Join a number of strings to one, limiting the length to *max_chars*.
If the string overflows this limit, replace the last fitting item by
*overflow_marker*.
Returns: joined_string
"""
full_str = sep.join(items)
if len(full_str) < max_chars:
return full_str
n_chars = 0
n_items = 0
for j, item in enumerate(items):
n_chars += len(item) + len(sep)
if n_chars < max_chars - len(overflow_marker):
n_items += 1
else:
break
return sep.join(list(items[:n_items]) + [overflow_marker])
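# Illustrative sketch (not part of the original module):
#
#     >>> limited_join(", ", ["alpha", "beta", "gamma", "delta"], max_chars=12)
#     'alpha, ...'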
# -- Importing items -----------------------------------------------------------
def get_import_prefixes_from_env(env):
"""
Obtain current Python import prefixes (for `import_by_name`)
from ``document.env``
"""
prefixes = [None]
currmodule = env.ref_context.get('py:module')
if currmodule:
prefixes.insert(0, currmodule)
currclass = env.ref_context.get('py:class')
if currclass:
if currmodule:
prefixes.insert(0, currmodule + "." + currclass)
else:
prefixes.insert(0, currclass)
return prefixes
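# Illustrative note (not part of the original module): with ``py:module`` set
# to "mymod" and ``py:class`` set to "MyClass" in the ref context, the
# resulting search order is ["mymod.MyClass", "mymod", None].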
def import_by_name(name, prefixes=[None]):
"""Import a Python object that has the given *name*, under one of the
*prefixes*. The first name that succeeds is used.
"""
tried = []
for prefix in prefixes:
try:
if prefix:
prefixed_name = '.'.join([prefix, name])
else:
prefixed_name = name
obj, parent, modname = _import_by_name(prefixed_name)
return prefixed_name, obj, parent, modname
except ImportError:
tried.append(prefixed_name)
raise ImportError('no module named %s' % ' or '.join(tried))
def _import_by_name(name):
"""Import a Python object given its full name."""
try:
name_parts = name.split('.')
# first try to interpret `name` as MODNAME.OBJ
modname = '.'.join(name_parts[:-1])
if modname:
try:
__import__(modname)
mod = sys.modules[modname]
return getattr(mod, name_parts[-1]), mod, modname
except (ImportError, IndexError, AttributeError):
pass
# ... then as MODNAME, MODNAME.OBJ1, MODNAME.OBJ1.OBJ2, ...
last_j = 0
modname = None
for j in reversed(range(1, len(name_parts)+1)):
last_j = j
modname = '.'.join(name_parts[:j])
try:
__import__(modname)
except ImportError:
continue
if modname in sys.modules:
break
if last_j < len(name_parts):
parent = None
obj = sys.modules[modname]
for obj_name in name_parts[last_j:]:
parent = obj
obj = getattr(obj, obj_name)
return obj, parent, modname
else:
return sys.modules[modname], None, modname
except (ValueError, ImportError, AttributeError, KeyError) as e:
raise ImportError(*e.args)
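# Illustrative sketch (not part of the original module): resolution walks from
# the longest importable module path down to the shortest.
#
#     >>> name, obj, parent, modname = import_by_name('os.path.join')
#     >>> name, modname
#     ('os.path.join', 'os.path')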
# -- :autolink: (smart default role) -------------------------------------------
def autolink_role(typ, rawtext, etext, lineno, inliner,
options={}, content=[]):
"""Smart linking role.
Expands to ':obj:`text`' if `text` is an object that can be imported;
otherwise expands to '*text*'.
"""
env = inliner.document.settings.env
r = env.get_domain('py').role('obj')(
'obj', rawtext, etext, lineno, inliner, options, content)
pnode = r[0][0]
prefixes = get_import_prefixes_from_env(env)
try:
name, obj, parent, modname = import_by_name(pnode['reftarget'], prefixes)
except ImportError:
content = pnode[0]
r[0][0] = nodes.emphasis(rawtext, content[0].astext(),
classes=content['classes'])
return r
def process_generate_options(app):
genfiles = app.config.autosummary_generate
if genfiles and not hasattr(genfiles, '__len__'):
env = app.builder.env
genfiles = [env.doc2path(x, base=None) for x in env.found_docs
if os.path.isfile(env.doc2path(x))]
if not genfiles:
return
from sphinx.ext.autosummary.generate import generate_autosummary_docs
ext = app.config.source_suffix[0]
genfiles = [genfile + (not genfile.endswith(ext) and ext or '')
for genfile in genfiles]
generate_autosummary_docs(genfiles, builder=app.builder,
warn=app.warn, info=app.info, suffix=ext,
base_path=app.srcdir)
def setup(app):
# I need autodoc
app.setup_extension('sphinx.ext.autodoc')
app.add_node(autosummary_toc,
html=(autosummary_toc_visit_html, autosummary_noop),
latex=(autosummary_noop, autosummary_noop),
text=(autosummary_noop, autosummary_noop),
man=(autosummary_noop, autosummary_noop),
texinfo=(autosummary_noop, autosummary_noop))
app.add_node(autosummary_table,
html=(autosummary_table_visit_html, autosummary_noop),
latex=(autosummary_noop, autosummary_noop),
text=(autosummary_noop, autosummary_noop),
man=(autosummary_noop, autosummary_noop),
texinfo=(autosummary_noop, autosummary_noop))
app.add_directive('autosummary', Autosummary)
app.add_role('autolink', autolink_role)
app.connect('doctree-read', process_autosummary_toc)
app.connect('builder-inited', process_generate_options)
app.add_config_value('autosummary_generate', [], True)
return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
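# Typical activation in a project's conf.py (an illustrative sketch, not part
# of this module; the values shown are examples):
#
#     extensions = ['sphinx.ext.autosummary']
#     autosummary_generate = True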
Sphinx-1.3.6/sphinx/ext/autosummary/templates/autosummary/module.rst
{{ fullname }}
{{ underline }}
.. automodule:: {{ fullname }}
{% block functions %}
{% if functions %}
.. rubric:: Functions
.. autosummary::
{% for item in functions %}
{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block classes %}
{% if classes %}
.. rubric:: Classes
.. autosummary::
{% for item in classes %}
{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block exceptions %}
{% if exceptions %}
.. rubric:: Exceptions
.. autosummary::
{% for item in exceptions %}
{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
Sphinx-1.3.6/sphinx/ext/autosummary/templates/autosummary/class.rst
{{ fullname }}
{{ underline }}
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block methods %}
.. automethod:: __init__
{% if methods %}
.. rubric:: Methods
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block attributes %}
{% if attributes %}
.. rubric:: Attributes
.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
Sphinx-1.3.6/sphinx/ext/autosummary/templates/autosummary/base.rst
{{ fullname }}
{{ underline }}
.. currentmodule:: {{ module }}
.. auto{{ objtype }}:: {{ objname }}
Sphinx-1.3.6/sphinx/ext/autosummary/generate.py
# -*- coding: utf-8 -*-
"""
sphinx.ext.autosummary.generate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Usable as a library or script to generate automatic RST source files for
items referred to in autosummary:: directives.
Each generated RST file contains a single auto*:: directive which
extracts the docstring of the referred item.
Example Makefile rule::
generate:
sphinx-autogen -o source/generated source/*.rst
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from __future__ import print_function
import os
import re
import sys
import pydoc
import optparse
import codecs
from jinja2 import FileSystemLoader, TemplateNotFound
from jinja2.sandbox import SandboxedEnvironment
from sphinx import package_dir
from sphinx.ext.autosummary import import_by_name, get_documenter
from sphinx.jinja2glue import BuiltinTemplateLoader
from sphinx.util.osutil import ensuredir
from sphinx.util.inspect import safe_getattr
# Add documenters to AutoDirective registry
from sphinx.ext.autodoc import add_documenter, \
ModuleDocumenter, ClassDocumenter, ExceptionDocumenter, DataDocumenter, \
FunctionDocumenter, MethodDocumenter, AttributeDocumenter, \
InstanceAttributeDocumenter
add_documenter(ModuleDocumenter)
add_documenter(ClassDocumenter)
add_documenter(ExceptionDocumenter)
add_documenter(DataDocumenter)
add_documenter(FunctionDocumenter)
add_documenter(MethodDocumenter)
add_documenter(AttributeDocumenter)
add_documenter(InstanceAttributeDocumenter)
def main(argv=sys.argv):
usage = """%prog [OPTIONS] SOURCEFILE ..."""
p = optparse.OptionParser(usage.strip())
p.add_option("-o", "--output-dir", action="store", type="string",
dest="output_dir", default=None,
help="Directory to place all output in")
p.add_option("-s", "--suffix", action="store", type="string",
dest="suffix", default="rst",
help="Default suffix for files (default: %default)")
p.add_option("-t", "--templates", action="store", type="string",
dest="templates", default=None,
help="Custom template directory (default: %default)")
options, args = p.parse_args(argv[1:])
if len(args) < 1:
p.error('no input files given')
generate_autosummary_docs(args, options.output_dir,
"." + options.suffix,
template_dir=options.templates)
def _simple_info(msg):
print(msg)
def _simple_warn(msg):
print('WARNING: ' + msg, file=sys.stderr)
# -- Generating output ---------------------------------------------------------
def generate_autosummary_docs(sources, output_dir=None, suffix='.rst',
warn=_simple_warn, info=_simple_info,
base_path=None, builder=None, template_dir=None):
showed_sources = list(sorted(sources))
if len(showed_sources) > 20:
showed_sources = showed_sources[:10] + ['...'] + showed_sources[-10:]
info('[autosummary] generating autosummary for: %s' %
', '.join(showed_sources))
if output_dir:
info('[autosummary] writing to %s' % output_dir)
if base_path is not None:
sources = [os.path.join(base_path, filename) for filename in sources]
# create our own templating environment
template_dirs = [os.path.join(package_dir, 'ext',
'autosummary', 'templates')]
if builder is not None:
# allow the user to override the templates
template_loader = BuiltinTemplateLoader()
template_loader.init(builder, dirs=template_dirs)
else:
if template_dir:
template_dirs.insert(0, template_dir)
template_loader = FileSystemLoader(template_dirs)
template_env = SandboxedEnvironment(loader=template_loader)
# read
items = find_autosummary_in_files(sources)
# keep track of new files
new_files = []
# write
for name, path, template_name in sorted(set(items), key=str):
if path is None:
# The corresponding autosummary:: directive did not have
# a :toctree: option
continue
path = output_dir or os.path.abspath(path)
ensuredir(path)
try:
name, obj, parent, mod_name = import_by_name(name)
except ImportError as e:
warn('[autosummary] failed to import %r: %s' % (name, e))
continue
fn = os.path.join(path, name + suffix)
# skip it if it exists
if os.path.isfile(fn):
continue
new_files.append(fn)
with open(fn, 'w') as f:
doc = get_documenter(obj, parent)
if template_name is not None:
template = template_env.get_template(template_name)
else:
try:
template = template_env.get_template('autosummary/%s.rst'
% doc.objtype)
except TemplateNotFound:
template = template_env.get_template('autosummary/base.rst')
def get_members(obj, typ, include_public=[]):
items = []
for name in dir(obj):
try:
documenter = get_documenter(safe_getattr(obj, name),
obj)
except AttributeError:
continue
if documenter.objtype == typ:
items.append(name)
public = [x for x in items
if x in include_public or not x.startswith('_')]
return public, items
ns = {}
if doc.objtype == 'module':
ns['members'] = dir(obj)
ns['functions'], ns['all_functions'] = \
get_members(obj, 'function')
ns['classes'], ns['all_classes'] = \
get_members(obj, 'class')
ns['exceptions'], ns['all_exceptions'] = \
get_members(obj, 'exception')
elif doc.objtype == 'class':
ns['members'] = dir(obj)
ns['methods'], ns['all_methods'] = \
get_members(obj, 'method', ['__init__'])
ns['attributes'], ns['all_attributes'] = \
get_members(obj, 'attribute')
parts = name.split('.')
if doc.objtype in ('method', 'attribute'):
mod_name = '.'.join(parts[:-2])
cls_name = parts[-2]
obj_name = '.'.join(parts[-2:])
ns['class'] = cls_name
else:
mod_name, obj_name = '.'.join(parts[:-1]), parts[-1]
ns['fullname'] = name
ns['module'] = mod_name
ns['objname'] = obj_name
ns['name'] = parts[-1]
ns['objtype'] = doc.objtype
ns['underline'] = len(name) * '='
rendered = template.render(**ns)
f.write(rendered)
# descend recursively to new files
if new_files:
generate_autosummary_docs(new_files, output_dir=output_dir,
suffix=suffix, warn=warn, info=info,
base_path=base_path, builder=builder,
template_dir=template_dir)
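# Illustrative library-style invocation (a sketch, not part of the original
# module; the paths are hypothetical):
#
#     from sphinx.ext.autosummary.generate import generate_autosummary_docs
#     generate_autosummary_docs(['doc/index.rst'], output_dir='doc/generated',
#                               suffix='.rst')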
# -- Finding documented entries in files ---------------------------------------
def find_autosummary_in_files(filenames):
"""Find out what items are documented in source/*.rst.
See `find_autosummary_in_lines`.
"""
documented = []
for filename in filenames:
with codecs.open(filename, 'r', encoding='utf-8',
errors='ignore') as f:
lines = f.read().splitlines()
documented.extend(find_autosummary_in_lines(lines,
filename=filename))
return documented
def find_autosummary_in_docstring(name, module=None, filename=None):
"""Find out what items are documented in the given object's docstring.
See `find_autosummary_in_lines`.
"""
try:
real_name, obj, parent, modname = import_by_name(name)
lines = pydoc.getdoc(obj).splitlines()
return find_autosummary_in_lines(lines, module=name, filename=filename)
except AttributeError:
pass
except ImportError as e:
print("Failed to import '%s': %s" % (name, e))
except SystemExit as e:
print("Failed to import '%s'; the module executes module level "
"statement and it might call sys.exit()." % name)
return []
def find_autosummary_in_lines(lines, module=None, filename=None):
"""Find out what items appear in autosummary:: directives in the
given lines.
Returns a list of (name, toctree, template) where *name* is a name
of an object and *toctree* the :toctree: path of the corresponding
autosummary directive (relative to the root of the file name), and
*template* the value of the :template: option. *toctree* and
*template* are ``None`` if the directive does not have the
corresponding options set.
"""
autosummary_re = re.compile(r'^(\s*)\.\.\s+autosummary::\s*')
automodule_re = re.compile(
r'^\s*\.\.\s+automodule::\s*([A-Za-z0-9_.]+)\s*$')
module_re = re.compile(
r'^\s*\.\.\s+(current)?module::\s*([a-zA-Z0-9_.]+)\s*$')
autosummary_item_re = re.compile(r'^\s+(~?[_a-zA-Z][a-zA-Z0-9_.]*)\s*.*?')
toctree_arg_re = re.compile(r'^\s+:toctree:\s*(.*?)\s*$')
template_arg_re = re.compile(r'^\s+:template:\s*(.*?)\s*$')
documented = []
toctree = None
template = None
current_module = module
in_autosummary = False
base_indent = ""
for line in lines:
if in_autosummary:
m = toctree_arg_re.match(line)
if m:
toctree = m.group(1)
if filename:
toctree = os.path.join(os.path.dirname(filename),
toctree)
continue
m = template_arg_re.match(line)
if m:
template = m.group(1).strip()
continue
if line.strip().startswith(':'):
continue # skip options
m = autosummary_item_re.match(line)
if m:
name = m.group(1).strip()
if name.startswith('~'):
name = name[1:]
if current_module and \
not name.startswith(current_module + '.'):
name = "%s.%s" % (current_module, name)
documented.append((name, toctree, template))
continue
if not line.strip() or line.startswith(base_indent + " "):
continue
in_autosummary = False
m = autosummary_re.match(line)
if m:
in_autosummary = True
base_indent = m.group(1)
toctree = None
template = None
continue
m = automodule_re.search(line)
if m:
current_module = m.group(1).strip()
# recurse into the automodule docstring
documented.extend(find_autosummary_in_docstring(
current_module, filename=filename))
continue
m = module_re.match(line)
if m:
current_module = m.group(2)
continue
return documented
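# Illustrative sketch (not part of the original module; the object name is
# hypothetical):
#
#     >>> find_autosummary_in_lines([
#     ...     ".. autosummary::",
#     ...     "   :toctree: generated/",
#     ...     "",
#     ...     "   mymodule.myfunction",
#     ... ])
#     [('mymodule.myfunction', 'generated/', None)]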
if __name__ == '__main__':
main()
Sphinx-1.3.6/sphinx/ext/__init__.py
# -*- coding: utf-8 -*-
"""
sphinx.ext
~~~~~~~~~~
Contains Sphinx features not activated by default.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
Sphinx-1.3.6/sphinx/ext/doctest.py
# -*- coding: utf-8 -*-
"""
sphinx.ext.doctest
~~~~~~~~~~~~~~~~~~
Mimic doctest by automatically executing code snippets and checking
their results.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from __future__ import absolute_import
import re
import sys
import time
import codecs
from os import path
import doctest
from six import itervalues, StringIO, binary_type, text_type, PY2
from docutils import nodes
from docutils.parsers.rst import directives
import sphinx
from sphinx.builders import Builder
from sphinx.util import force_decode
from sphinx.util.nodes import set_source_info
from sphinx.util.compat import Directive
from sphinx.util.console import bold
from sphinx.util.osutil import fs_encoding
blankline_re = re.compile(r'^\s*<BLANKLINE>', re.MULTILINE)
doctestopt_re = re.compile(r'#\s*doctest:.+$', re.MULTILINE)
if PY2:
def doctest_encode(text, encoding):
if isinstance(text, text_type):
text = text.encode(encoding)
if text.startswith(codecs.BOM_UTF8):
text = text[len(codecs.BOM_UTF8):]
return text
else:
def doctest_encode(text, encoding):
return text
# set up the necessary directives
class TestDirective(Directive):
"""
Base class for doctest-related directives.
"""
has_content = True
required_arguments = 0
optional_arguments = 1
final_argument_whitespace = True
def run(self):
# use ordinary docutils nodes for test code: they get special attributes
# so that our builder recognizes them, and the other builders are happy.
code = '\n'.join(self.content)
test = None
if self.name == 'doctest':
if '<BLANKLINE>' in code:
# convert <BLANKLINE>s to ordinary blank lines for presentation
test = code
code = blankline_re.sub('', code)
if doctestopt_re.search(code):
if not test:
test = code
code = doctestopt_re.sub('', code)
nodetype = nodes.literal_block
if self.name in ('testsetup', 'testcleanup') or 'hide' in self.options:
nodetype = nodes.comment
if self.arguments:
groups = [x.strip() for x in self.arguments[0].split(',')]
else:
groups = ['default']
node = nodetype(code, code, testnodetype=self.name, groups=groups)
set_source_info(self, node)
if test is not None:
# only save if it differs from code
node['test'] = test
if self.name == 'testoutput':
# don't try to highlight output
node['language'] = 'none'
node['options'] = {}
if self.name in ('doctest', 'testoutput') and 'options' in self.options:
# parse doctest-like output comparison flags
option_strings = self.options['options'].replace(',', ' ').split()
for option in option_strings:
if (option[0] not in '+-' or option[1:] not in
doctest.OPTIONFLAGS_BY_NAME):
# XXX warn?
continue
flag = doctest.OPTIONFLAGS_BY_NAME[option[1:]]
node['options'][flag] = (option[0] == '+')
return [node]
class TestsetupDirective(TestDirective):
option_spec = {}
class TestcleanupDirective(TestDirective):
option_spec = {}
class DoctestDirective(TestDirective):
option_spec = {
'hide': directives.flag,
'options': directives.unchanged,
}
class TestcodeDirective(TestDirective):
option_spec = {
'hide': directives.flag,
}
class TestoutputDirective(TestDirective):
option_spec = {
'hide': directives.flag,
'options': directives.unchanged,
}
parser = doctest.DocTestParser()
# helper classes
class TestGroup(object):
def __init__(self, name):
self.name = name
self.setup = []
self.tests = []
self.cleanup = []
def add_code(self, code, prepend=False):
if code.type == 'testsetup':
if prepend:
self.setup.insert(0, code)
else:
self.setup.append(code)
elif code.type == 'testcleanup':
self.cleanup.append(code)
elif code.type == 'doctest':
self.tests.append([code])
elif code.type == 'testcode':
self.tests.append([code, None])
elif code.type == 'testoutput':
if self.tests and len(self.tests[-1]) == 2:
self.tests[-1][1] = code
else:
raise RuntimeError('invalid TestCode type')
def __repr__(self):
return 'TestGroup(name=%r, setup=%r, cleanup=%r, tests=%r)' % (
self.name, self.setup, self.cleanup, self.tests)
class TestCode(object):
def __init__(self, code, type, lineno, options=None):
self.code = code
self.type = type
self.lineno = lineno
self.options = options or {}
def __repr__(self):
return 'TestCode(%r, %r, %r, options=%r)' % (
self.code, self.type, self.lineno, self.options)
class SphinxDocTestRunner(doctest.DocTestRunner):
def summarize(self, out, verbose=None):
string_io = StringIO()
old_stdout = sys.stdout
sys.stdout = string_io
try:
res = doctest.DocTestRunner.summarize(self, verbose)
finally:
sys.stdout = old_stdout
out(string_io.getvalue())
return res
def _DocTestRunner__patched_linecache_getlines(self, filename,
module_globals=None):
# this is overridden from DocTestRunner adding the try-except below
m = self._DocTestRunner__LINECACHE_FILENAME_RE.match(filename)
if m and m.group('name') == self.test.name:
try:
example = self.test.examples[int(m.group('examplenum'))]
# because we compile multiple doctest blocks with the same name
# (viz. the group name) this might, for outer stack frames in a
# traceback, get the wrong test which might not have enough examples
except IndexError:
pass
else:
return example.source.splitlines(True)
return self.save_linecache_getlines(filename, module_globals)
# the new builder -- use sphinx-build.py -b doctest to run
class DocTestBuilder(Builder):
"""
Runs test snippets in the documentation.
"""
name = 'doctest'
def init(self):
# default options
self.opt = doctest.DONT_ACCEPT_TRUE_FOR_1 | doctest.ELLIPSIS | \
doctest.IGNORE_EXCEPTION_DETAIL
# HACK HACK HACK
# doctest compiles its snippets with type 'single'. That is nice
# for doctest examples but unusable for multi-statement code such
# as setup code -- to be able to use doctest error reporting with
# that code nevertheless, we monkey-patch the "compile" it uses.
doctest.compile = self.compile
sys.path[0:0] = self.config.doctest_path
self.type = 'single'
self.total_failures = 0
self.total_tries = 0
self.setup_failures = 0
self.setup_tries = 0
self.cleanup_failures = 0
self.cleanup_tries = 0
date = time.strftime('%Y-%m-%d %H:%M:%S')
self.outfile = codecs.open(path.join(self.outdir, 'output.txt'),
'w', encoding='utf-8')
self.outfile.write('''\
Results of doctest builder run on %s
==================================%s
''' % (date, '='*len(date)))
def _out(self, text):
self.info(text, nonl=True)
self.outfile.write(text)
def _warn_out(self, text):
if self.app.quiet or self.app.warningiserror:
self.warn(text)
else:
self.info(text, nonl=True)
if isinstance(text, binary_type):
text = force_decode(text, None)
self.outfile.write(text)
def get_target_uri(self, docname, typ=None):
return ''
def get_outdated_docs(self):
return self.env.found_docs
def finish(self):
# write executive summary
def s(v):
return v != 1 and 's' or ''
repl = (self.total_tries, s(self.total_tries),
self.total_failures, s(self.total_failures),
self.setup_failures, s(self.setup_failures),
self.cleanup_failures, s(self.cleanup_failures))
self._out('''
Doctest summary
===============
%5d test%s
%5d failure%s in tests
%5d failure%s in setup code
%5d failure%s in cleanup code
''' % repl)
self.outfile.close()
if self.total_failures or self.setup_failures or self.cleanup_failures:
self.app.statuscode = 1
def write(self, build_docnames, updated_docnames, method='update'):
if build_docnames is None:
build_docnames = sorted(self.env.all_docs)
self.info(bold('running tests...'))
for docname in build_docnames:
# no need to resolve the doctree
doctree = self.env.get_doctree(docname)
self.test_doc(docname, doctree)
def test_doc(self, docname, doctree):
groups = {}
add_to_all_groups = []
self.setup_runner = SphinxDocTestRunner(verbose=False,
optionflags=self.opt)
self.test_runner = SphinxDocTestRunner(verbose=False,
optionflags=self.opt)
self.cleanup_runner = SphinxDocTestRunner(verbose=False,
optionflags=self.opt)
self.test_runner._fakeout = self.setup_runner._fakeout
self.cleanup_runner._fakeout = self.setup_runner._fakeout
if self.config.doctest_test_doctest_blocks:
def condition(node):
return (isinstance(node, (nodes.literal_block, nodes.comment)) and
'testnodetype' in node) or \
isinstance(node, nodes.doctest_block)
else:
def condition(node):
return isinstance(node, (nodes.literal_block, nodes.comment)) \
and 'testnodetype' in node
for node in doctree.traverse(condition):
source = 'test' in node and node['test'] or node.astext()
if not source:
self.warn('no code/output in %s block at %s:%s' %
(node.get('testnodetype', 'doctest'),
self.env.doc2path(docname), node.line))
code = TestCode(source, type=node.get('testnodetype', 'doctest'),
lineno=node.line, options=node.get('options'))
node_groups = node.get('groups', ['default'])
if '*' in node_groups:
add_to_all_groups.append(code)
continue
for groupname in node_groups:
if groupname not in groups:
groups[groupname] = TestGroup(groupname)
groups[groupname].add_code(code)
for code in add_to_all_groups:
for group in itervalues(groups):
group.add_code(code)
if self.config.doctest_global_setup:
code = TestCode(self.config.doctest_global_setup,
'testsetup', lineno=0)
for group in itervalues(groups):
group.add_code(code, prepend=True)
if self.config.doctest_global_cleanup:
code = TestCode(self.config.doctest_global_cleanup,
'testcleanup', lineno=0)
for group in itervalues(groups):
group.add_code(code)
if not groups:
return
self._out('\nDocument: %s\n----------%s\n' %
(docname, '-'*len(docname)))
for group in itervalues(groups):
self.test_group(group, self.env.doc2path(docname, base=None))
# Separately count results from setup code
res_f, res_t = self.setup_runner.summarize(self._out, verbose=False)
self.setup_failures += res_f
self.setup_tries += res_t
if self.test_runner.tries:
res_f, res_t = self.test_runner.summarize(self._out, verbose=True)
self.total_failures += res_f
self.total_tries += res_t
if self.cleanup_runner.tries:
res_f, res_t = self.cleanup_runner.summarize(self._out,
verbose=True)
self.cleanup_failures += res_f
self.cleanup_tries += res_t
def compile(self, code, name, type, flags, dont_inherit):
return compile(code, name, self.type, flags, dont_inherit)
def test_group(self, group, filename):
if PY2:
filename_str = filename.encode(fs_encoding)
else:
filename_str = filename
ns = {}
def run_setup_cleanup(runner, testcodes, what):
examples = []
for testcode in testcodes:
examples.append(doctest.Example(
doctest_encode(testcode.code, self.env.config.source_encoding), '',
lineno=testcode.lineno))
if not examples:
return True
# simulate a doctest with the code
sim_doctest = doctest.DocTest(examples, {},
'%s (%s code)' % (group.name, what),
filename_str, 0, None)
sim_doctest.globs = ns
old_f = runner.failures
self.type = 'exec' # the snippet may contain multiple statements
runner.run(sim_doctest, out=self._warn_out, clear_globs=False)
if runner.failures > old_f:
return False
return True
# run the setup code
if not run_setup_cleanup(self.setup_runner, group.setup, 'setup'):
# if setup failed, don't run the group
return
# run the tests
for code in group.tests:
if len(code) == 1:
# ordinary doctests (code/output interleaved)
try:
test = parser.get_doctest(
doctest_encode(code[0].code, self.env.config.source_encoding), {},
group.name, filename_str, code[0].lineno)
except Exception:
self.warn('ignoring invalid doctest code: %r' %
code[0].code,
'%s:%s' % (filename, code[0].lineno))
continue
if not test.examples:
continue
for example in test.examples:
# apply directive's comparison options
new_opt = code[0].options.copy()
new_opt.update(example.options)
example.options = new_opt
self.type = 'single' # as for ordinary doctests
else:
# testcode and output separate
output = code[1] and code[1].code or ''
options = code[1] and code[1].options or {}
# disable processing as it is not needed
options[doctest.DONT_ACCEPT_BLANKLINE] = True
# find out if we're testing an exception
m = parser._EXCEPTION_RE.match(output)
if m:
exc_msg = m.group('msg')
else:
exc_msg = None
example = doctest.Example(
doctest_encode(code[0].code, self.env.config.source_encoding), output,
exc_msg=exc_msg,
lineno=code[0].lineno,
options=options)
test = doctest.DocTest([example], {}, group.name,
filename_str, code[0].lineno, None)
self.type = 'exec' # multiple statements again
# DocTest.__init__ copies the globs namespace, which we don't want
test.globs = ns
# also don't clear the globs namespace after running the doctest
self.test_runner.run(test, out=self._warn_out, clear_globs=False)
# run the cleanup
run_setup_cleanup(self.cleanup_runner, group.cleanup, 'cleanup')
def setup(app):
app.add_directive('testsetup', TestsetupDirective)
app.add_directive('testcleanup', TestcleanupDirective)
app.add_directive('doctest', DoctestDirective)
app.add_directive('testcode', TestcodeDirective)
app.add_directive('testoutput', TestoutputDirective)
app.add_builder(DocTestBuilder)
# this config value adds to sys.path
app.add_config_value('doctest_path', [], False)
app.add_config_value('doctest_test_doctest_blocks', 'default', False)
app.add_config_value('doctest_global_setup', '', False)
app.add_config_value('doctest_global_cleanup', '', False)
return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
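# A usage sketch, not part of the extension itself: with 'sphinx.ext.doctest'
# listed in the extensions value of conf.py, a document can mix the directives
# registered above. The module, group name and values below are illustrative
# assumptions only:
#
#   .. testsetup:: group1
#
#      import math
#
#   .. testcode:: group1
#
#      print(math.factorial(5))
#
#   .. testoutput:: group1
#      :options: +NORMALIZE_WHITESPACE
#
#      120
#
# Running ``sphinx-build -b doctest sourcedir outdir`` then executes the
# snippets and writes the summary shown in finish() to outdir/output.txt.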
Sphinx-1.3.6/sphinx/ext/linkcode.py 0000664 0001750 0001750 00000004271 12660074372 020410 0 ustar vagrant vagrant 0000000 0000000 # -*- coding: utf-8 -*-
"""
sphinx.ext.linkcode
~~~~~~~~~~~~~~~~~~~
Add external links to module code in Python object descriptions.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from docutils import nodes
import sphinx
from sphinx import addnodes
from sphinx.locale import _
from sphinx.errors import SphinxError
class LinkcodeError(SphinxError):
category = "linkcode error"
def doctree_read(app, doctree):
env = app.builder.env
resolve_target = getattr(env.config, 'linkcode_resolve', None)
if not callable(env.config.linkcode_resolve):
raise LinkcodeError(
"Function `linkcode_resolve` is not given in conf.py")
domain_keys = dict(
py=['module', 'fullname'],
c=['names'],
cpp=['names'],
js=['object', 'fullname'],
)
for objnode in doctree.traverse(addnodes.desc):
domain = objnode.get('domain')
uris = set()
for signode in objnode:
if not isinstance(signode, addnodes.desc_signature):
continue
# Convert signode to a specified format
info = {}
for key in domain_keys.get(domain, []):
value = signode.get(key)
if not value:
value = ''
info[key] = value
if not info:
continue
# Call user code to resolve the link
uri = resolve_target(domain, info)
if not uri:
# no source
continue
if uri in uris or not uri:
# only one link per name, please
continue
uris.add(uri)
onlynode = addnodes.only(expr='html')
onlynode += nodes.reference('', '', internal=False, refuri=uri)
onlynode[0] += nodes.inline('', _('[source]'),
classes=['viewcode-link'])
signode += onlynode
def setup(app):
app.connect('doctree-read', doctree_read)
app.add_config_value('linkcode_resolve', None, '')
return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
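# A hedged sketch of the user-side configuration this extension expects; it
# belongs in conf.py, and the URL scheme below is a made-up placeholder.
# linkcode_resolve receives the (domain, info) pairs built above and must
# return a URL string, or None to skip the link:
#
#   extensions = ['sphinx.ext.linkcode']
#
#   def linkcode_resolve(domain, info):
#       if domain != 'py' or not info.get('module'):
#           return None
#       filename = info['module'].replace('.', '/')
#       return 'https://example.org/src/%s.py' % filename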
Sphinx-1.3.6/sphinx/ext/jsmath.py 0000664 0001750 0001750 00000004057 12660074372 020110 0 ustar vagrant vagrant 0000000 0000000 # -*- coding: utf-8 -*-
"""
sphinx.ext.jsmath
~~~~~~~~~~~~~~~~~
Set up everything for use of JSMath to display math in HTML
via JavaScript.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from docutils import nodes
import sphinx
from sphinx.application import ExtensionError
from sphinx.ext.mathbase import setup_math as mathbase_setup
def html_visit_math(self, node):
self.body.append(self.starttag(node, 'span', '', CLASS='math'))
self.body.append(self.encode(node['latex']) + '</span>')
raise nodes.SkipNode
def html_visit_displaymath(self, node):
if node['nowrap']:
self.body.append(self.starttag(node, 'div', CLASS='math'))
self.body.append(self.encode(node['latex']))
self.body.append('</div>')
raise nodes.SkipNode
for i, part in enumerate(node['latex'].split('\n\n')):
part = self.encode(part)
if i == 0:
# necessary to e.g. set the id property correctly
if node['number']:
self.body.append('<span class="eqno">(%s)</span>' %
node['number'])
self.body.append(self.starttag(node, 'div', CLASS='math'))
else:
# but only once!
self.body.append('<div class="math">')
if '&' in part or '\\\\' in part:
self.body.append('\\begin{split}' + part + '\\end{split}')
else:
self.body.append(part)
self.body.append('</div>\n')
raise nodes.SkipNode
def builder_inited(app):
if not app.config.jsmath_path:
raise ExtensionError('jsmath_path config value must be set for the '
'jsmath extension to work')
app.add_javascript(app.config.jsmath_path)
def setup(app):
mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
app.add_config_value('jsmath_path', '', False)
app.connect('builder-inited', builder_inited)
return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
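# Illustrative conf.py settings for this extension (a sketch; the exact path
# to a local jsMath installation is an assumption):
#
#   extensions = ['sphinx.ext.jsmath']
#   jsmath_path = 'jsMath/easy/load.js'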
Sphinx-1.3.6/sphinx/ext/pngmath.py 0000664 0001750 0001750 00000021135 12665045070 020252 0 ustar vagrant vagrant 0000000 0000000 # -*- coding: utf-8 -*-
"""
sphinx.ext.pngmath
~~~~~~~~~~~~~~~~~~
Render math in HTML via dvipng.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
import codecs
import shutil
import tempfile
import posixpath
from os import path
from subprocess import Popen, PIPE
from hashlib import sha1
from six import text_type
from docutils import nodes
import sphinx
from sphinx.errors import SphinxError
from sphinx.util.png import read_png_depth, write_png_depth
from sphinx.util.osutil import ensuredir, ENOENT, cd
from sphinx.util.pycompat import sys_encoding
from sphinx.ext.mathbase import setup_math as mathbase_setup, wrap_displaymath
class MathExtError(SphinxError):
category = 'Math extension error'
def __init__(self, msg, stderr=None, stdout=None):
if stderr:
msg += '\n[stderr]\n' + stderr.decode(sys_encoding, 'replace')
if stdout:
msg += '\n[stdout]\n' + stdout.decode(sys_encoding, 'replace')
SphinxError.__init__(self, msg)
DOC_HEAD = r'''
\documentclass[12pt]{article}
\usepackage[utf8x]{inputenc}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{bm}
\pagestyle{empty}
'''
DOC_BODY = r'''
\begin{document}
%s
\end{document}
'''
DOC_BODY_PREVIEW = r'''
\usepackage[active]{preview}
\begin{document}
\begin{preview}
%s
\end{preview}
\end{document}
'''
depth_re = re.compile(br'\[\d+ depth=(-?\d+)\]')
def render_math(self, math):
"""Render the LaTeX math expression *math* using latex and dvipng.
Return the filename relative to the built document and the "depth",
that is, the distance between image bottom and baseline in pixels, if the
option to use preview_latex is switched on.
Error handling may seem strange, but follows a pattern: if LaTeX or
dvipng aren't available, only a warning is generated (since that enables
people on machines without these programs to at least build the rest
of the docs successfully). If the programs are present, however, they
must not fail, since a failure then indicates a problem in the math source.
"""
use_preview = self.builder.config.pngmath_use_preview
latex = DOC_HEAD + self.builder.config.pngmath_latex_preamble
latex += (use_preview and DOC_BODY_PREVIEW or DOC_BODY) % math
shasum = "%s.png" % sha1(latex.encode('utf-8')).hexdigest()
relfn = posixpath.join(self.builder.imgpath, 'math', shasum)
outfn = path.join(self.builder.outdir, self.builder.imagedir, 'math', shasum)
if path.isfile(outfn):
depth = read_png_depth(outfn)
return relfn, depth
# if latex or dvipng has failed once, don't bother to try again
if hasattr(self.builder, '_mathpng_warned_latex') or \
hasattr(self.builder, '_mathpng_warned_dvipng'):
return None, None
# use only one tempdir per build -- the use of a directory is cleaner
# than using temporary files, since we can clean up everything at once
# just removing the whole directory (see cleanup_tempdir)
if not hasattr(self.builder, '_mathpng_tempdir'):
tempdir = self.builder._mathpng_tempdir = tempfile.mkdtemp()
else:
tempdir = self.builder._mathpng_tempdir
tf = codecs.open(path.join(tempdir, 'math.tex'), 'w', 'utf-8')
tf.write(latex)
tf.close()
# build latex command; old versions of latex don't have the
# --output-directory option, so we have to manually chdir to the
# temp dir to run it.
ltx_args = [self.builder.config.pngmath_latex, '--interaction=nonstopmode']
# add custom args from the config file
ltx_args.extend(self.builder.config.pngmath_latex_args)
ltx_args.append('math.tex')
with cd(tempdir):
try:
p = Popen(ltx_args, stdout=PIPE, stderr=PIPE)
except OSError as err:
if err.errno != ENOENT: # No such file or directory
raise
self.builder.warn('LaTeX command %r cannot be run (needed for math '
'display), check the pngmath_latex setting' %
self.builder.config.pngmath_latex)
self.builder._mathpng_warned_latex = True
return None, None
stdout, stderr = p.communicate()
if p.returncode != 0:
raise MathExtError('latex exited with error', stderr, stdout)
ensuredir(path.dirname(outfn))
# use some standard dvipng arguments
dvipng_args = [self.builder.config.pngmath_dvipng]
dvipng_args += ['-o', outfn, '-T', 'tight', '-z9']
# add custom ones from config value
dvipng_args.extend(self.builder.config.pngmath_dvipng_args)
if use_preview:
dvipng_args.append('--depth')
# last, the input file name
dvipng_args.append(path.join(tempdir, 'math.dvi'))
try:
p = Popen(dvipng_args, stdout=PIPE, stderr=PIPE)
except OSError as err:
if err.errno != ENOENT: # No such file or directory
raise
self.builder.warn('dvipng command %r cannot be run (needed for math '
'display), check the pngmath_dvipng setting' %
self.builder.config.pngmath_dvipng)
self.builder._mathpng_warned_dvipng = True
return None, None
stdout, stderr = p.communicate()
if p.returncode != 0:
raise MathExtError('dvipng exited with error', stderr, stdout)
depth = None
if use_preview:
for line in stdout.splitlines():
m = depth_re.match(line)
if m:
depth = int(m.group(1))
write_png_depth(outfn, depth)
break
return relfn, depth
def cleanup_tempdir(app, exc):
if exc:
return
if not hasattr(app.builder, '_mathpng_tempdir'):
return
try:
shutil.rmtree(app.builder._mathpng_tempdir)
except Exception:
pass
def get_tooltip(self, node):
if self.builder.config.pngmath_add_tooltips:
return ' alt="%s"' % self.encode(node['latex']).strip()
return ''
def html_visit_math(self, node):
try:
fname, depth = render_math(self, '$'+node['latex']+'$')
except MathExtError as exc:
msg = text_type(exc)
sm = nodes.system_message(msg, type='WARNING', level=2,
backrefs=[], source=node['latex'])
sm.walkabout(self)
self.builder.warn('display latex %r: ' % node['latex'] + msg)
raise nodes.SkipNode
if fname is None:
# something failed -- use text-only as a bad substitute
self.body.append('<span class="math">%s</span>' %
self.encode(node['latex']).strip())
else:
c = ('<img class="math" src="%s"' % fname) + get_tooltip(self, node)
if depth is not None:
    c += ' style="vertical-align: %dpx"' % (-depth)
self.body.append(c + '/>')
raise nodes.SkipNode
def html_visit_displaymath(self, node):
if node['nowrap']:
latex = node['latex']
else:
latex = wrap_displaymath(node['latex'], None)
try:
fname, depth = render_math(self, latex)
except MathExtError as exc:
sm = nodes.system_message(str(exc), type='WARNING', level=2,
backrefs=[], source=node['latex'])
sm.walkabout(self)
self.builder.warn('inline latex %r: ' % node['latex'] + str(exc))
raise nodes.SkipNode
self.body.append(self.starttag(node, 'div', CLASS='math'))
self.body.append('<p>')
if node['number']:
self.body.append('<span class="eqno">(%s)</span>' % node['number'])
if fname is None:
# something failed -- use text-only as a bad substitute
self.body.append('<span class="math">%s</span></p>\n</div>' %
self.encode(node['latex']).strip())
else:
self.body.append(('<img src="%s"' % fname) + get_tooltip(self, node) +
'/></p>\n</div>')
raise nodes.SkipNode
def setup(app):
mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
app.add_config_value('pngmath_dvipng', 'dvipng', 'html')
app.add_config_value('pngmath_latex', 'latex', 'html')
app.add_config_value('pngmath_use_preview', False, 'html')
app.add_config_value('pngmath_dvipng_args',
['-gamma', '1.5', '-D', '110', '-bg', 'Transparent'],
'html')
app.add_config_value('pngmath_latex_args', [], 'html')
app.add_config_value('pngmath_latex_preamble', '', 'html')
app.add_config_value('pngmath_add_tooltips', True, 'html')
app.connect('build-finished', cleanup_tempdir)
return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
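# A conf.py sketch for this extension; the preamble value is only an example
# of what pngmath_latex_preamble accepts, and both executables must be
# runnable from the build environment:
#
#   extensions = ['sphinx.ext.pngmath']
#   pngmath_latex = 'latex'
#   pngmath_dvipng = 'dvipng'
#   pngmath_use_preview = True   # store image depths for baseline alignment
#   pngmath_latex_preamble = r'\usepackage{eulervm}'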
Sphinx-1.3.6/sphinx/ext/mathjax.py 0000664 0001750 0001750 00000005633 12660074372 020257 0 ustar vagrant vagrant 0000000 0000000 # -*- coding: utf-8 -*-
"""
sphinx.ext.mathjax
~~~~~~~~~~~~~~~~~~
Allow `MathJax `_ to be used to display math in
Sphinx's HTML writer -- requires the MathJax JavaScript library on your
webserver/computer.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from docutils import nodes
import sphinx
from sphinx.application import ExtensionError
from sphinx.ext.mathbase import setup_math as mathbase_setup
def html_visit_math(self, node):
self.body.append(self.starttag(node, 'span', '', CLASS='math'))
self.body.append(self.builder.config.mathjax_inline[0] +
self.encode(node['latex']) +
self.builder.config.mathjax_inline[1] + '</span>')
raise nodes.SkipNode
def html_visit_displaymath(self, node):
self.body.append(self.starttag(node, 'div', CLASS='math'))
if node['nowrap']:
self.body.append(self.builder.config.mathjax_display[0] +
self.encode(node['latex']) +
self.builder.config.mathjax_display[1])
self.body.append('</div>')
raise nodes.SkipNode
parts = [prt for prt in node['latex'].split('\n\n') if prt.strip()]
for i, part in enumerate(parts):
part = self.encode(part)
if i == 0:
# necessary to e.g. set the id property correctly
if node['number']:
self.body.append('<span class="eqno">(%s)</span>' %
node['number'])
if '&' in part or '\\\\' in part:
self.body.append(self.builder.config.mathjax_display[0] +
'\\begin{split}' + part + '\\end{split}' +
self.builder.config.mathjax_display[1])
else:
self.body.append(self.builder.config.mathjax_display[0] + part +
self.builder.config.mathjax_display[1])
self.body.append('</div>\n')
raise nodes.SkipNode
def builder_inited(app):
if not app.config.mathjax_path:
raise ExtensionError('mathjax_path config value must be set for the '
'mathjax extension to work')
app.add_javascript(app.config.mathjax_path)
def setup(app):
mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
# more information for mathjax secure url is here:
# http://docs.mathjax.org/en/latest/start.html#secure-access-to-the-cdn
app.add_config_value('mathjax_path',
'https://cdn.mathjax.org/mathjax/latest/MathJax.js?'
'config=TeX-AMS-MML_HTMLorMML', False)
app.add_config_value('mathjax_inline', [r'\(', r'\)'], 'html')
app.add_config_value('mathjax_display', [r'\[', r'\]'], 'html')
app.connect('builder-inited', builder_inited)
return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
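# Illustrative conf.py overrides (a sketch; the defaults registered above
# already work when the CDN is reachable, and the self-hosted path shown
# here is an assumption):
#
#   extensions = ['sphinx.ext.mathjax']
#   mathjax_path = '_static/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML'
#   mathjax_display = [r'$$', r'$$']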
Sphinx-1.3.6/sphinx/ext/napoleon/ 0000775 0001750 0001750 00000000000 12665046155 020060 5 ustar vagrant vagrant 0000000 0000000 Sphinx-1.3.6/sphinx/ext/napoleon/__init__.py 0000664 0001750 0001750 00000033114 12665045070 022166 0 ustar vagrant vagrant 0000000 0000000 # -*- coding: utf-8 -*-
"""
sphinx.ext.napoleon
~~~~~~~~~~~~~~~~~~~
Support for NumPy and Google style docstrings.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import sys
from six import PY2, iteritems
import sphinx
from sphinx.ext.napoleon.docstring import GoogleDocstring, NumpyDocstring
class Config(object):
"""Sphinx napoleon extension settings in `conf.py`.
Listed below are all the settings used by napoleon and their default
values. These settings can be changed in the Sphinx `conf.py` file. Make
sure that both "sphinx.ext.autodoc" and "sphinx.ext.napoleon" are
enabled in `conf.py`::
# conf.py
# Add any Sphinx extension module names here, as strings
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon']
# Napoleon settings
napoleon_google_docstring = True
napoleon_numpy_docstring = True
napoleon_include_private_with_doc = False
napoleon_include_special_with_doc = True
napoleon_use_admonition_for_examples = False
napoleon_use_admonition_for_notes = False
napoleon_use_admonition_for_references = False
napoleon_use_ivar = False
napoleon_use_param = True
napoleon_use_rtype = True
.. _Google style:
http://google-styleguide.googlecode.com/svn/trunk/pyguide.html
.. _NumPy style:
https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
Attributes
----------
napoleon_google_docstring : bool, defaults to True
True to parse `Google style`_ docstrings. False to disable support
for Google style docstrings.
napoleon_numpy_docstring : bool, defaults to True
True to parse `NumPy style`_ docstrings. False to disable support
for NumPy style docstrings.
napoleon_include_private_with_doc : bool, defaults to False
True to include private members (like ``_membername``) with docstrings
in the documentation. False to fall back to Sphinx's default behavior.
**If True**::
def _included(self):
\"\"\"
This will be included in the docs because it has a docstring
\"\"\"
pass
def _skipped(self):
# This will NOT be included in the docs
pass
napoleon_include_special_with_doc : bool, defaults to True
True to include special members (like ``__membername__``) with
docstrings in the documentation. False to fall back to Sphinx's
default behavior.
**If True**::
def __str__(self):
\"\"\"
This will be included in the docs because it has a docstring
\"\"\"
return unicode(self).encode('utf-8')
def __unicode__(self):
# This will NOT be included in the docs
return unicode(self.__class__.__name__)
napoleon_use_admonition_for_examples : bool, defaults to False
True to use the ``.. admonition::`` directive for the **Example** and
**Examples** sections. False to use the ``.. rubric::`` directive
instead. One may look better than the other depending on what HTML
theme is used.
This `NumPy style`_ snippet will be converted as follows::
Example
-------
This is just a quick example
**If True**::
.. admonition:: Example
This is just a quick example
**If False**::
.. rubric:: Example
This is just a quick example
napoleon_use_admonition_for_notes : bool, defaults to False
True to use the ``.. admonition::`` directive for **Notes** sections.
False to use the ``.. rubric::`` directive instead.
Note
----
The singular **Note** section will always be converted to a
``.. note::`` directive.
See Also
--------
:attr:`napoleon_use_admonition_for_examples`
napoleon_use_admonition_for_references : bool, defaults to False
True to use the ``.. admonition::`` directive for **References**
sections. False to use the ``.. rubric::`` directive instead.
See Also
--------
:attr:`napoleon_use_admonition_for_examples`
napoleon_use_ivar : bool, defaults to False
True to use the ``:ivar:`` role for instance variables. False to use
the ``.. attribute::`` directive instead.
This `NumPy style`_ snippet will be converted as follows::
Attributes
----------
attr1 : int
Description of `attr1`
**If True**::
:ivar attr1: Description of `attr1`
:vartype attr1: int
**If False**::
.. attribute:: attr1
*int*
Description of `attr1`
napoleon_use_param : bool, defaults to True
True to use a ``:param:`` role for each function parameter. False to
use a single ``:parameters:`` role for all the parameters.
This `NumPy style`_ snippet will be converted as follows::
Parameters
----------
arg1 : str
Description of `arg1`
arg2 : int, optional
Description of `arg2`, defaults to 0
**If True**::
:param arg1: Description of `arg1`
:type arg1: str
:param arg2: Description of `arg2`, defaults to 0
:type arg2: int, optional
**If False**::
:parameters: * **arg1** (*str*) --
Description of `arg1`
* **arg2** (*int, optional*) --
Description of `arg2`, defaults to 0
napoleon_use_rtype : bool, defaults to True
True to use the ``:rtype:`` role for the return type. False to output
the return type inline with the description.
This `NumPy style`_ snippet will be converted as follows::
Returns
-------
bool
True if successful, False otherwise
**If True**::
:returns: True if successful, False otherwise
:rtype: bool
**If False**::
:returns: *bool* -- True if successful, False otherwise
"""
_config_values = {
'napoleon_google_docstring': (True, 'env'),
'napoleon_numpy_docstring': (True, 'env'),
'napoleon_include_private_with_doc': (False, 'env'),
'napoleon_include_special_with_doc': (True, 'env'),
'napoleon_use_admonition_for_examples': (False, 'env'),
'napoleon_use_admonition_for_notes': (False, 'env'),
'napoleon_use_admonition_for_references': (False, 'env'),
'napoleon_use_ivar': (False, 'env'),
'napoleon_use_param': (True, 'env'),
'napoleon_use_rtype': (True, 'env'),
}
def __init__(self, **settings):
for name, (default, rebuild) in iteritems(self._config_values):
setattr(self, name, default)
for name, value in iteritems(settings):
setattr(self, name, value)
def setup(app):
"""Sphinx extension setup function.
When the extension is loaded, Sphinx imports this module and executes
the ``setup()`` function, which in turn notifies Sphinx of everything
the extension offers.
Parameters
----------
app : sphinx.application.Sphinx
Application object representing the Sphinx process
See Also
--------
The Sphinx documentation on `Extensions`_, the `Extension Tutorial`_, and
the `Extension API`_.
.. _Extensions: http://sphinx-doc.org/extensions.html
.. _Extension Tutorial: http://sphinx-doc.org/ext/tutorial.html
.. _Extension API: http://sphinx-doc.org/ext/appapi.html
"""
from sphinx.application import Sphinx
if not isinstance(app, Sphinx):
return # probably called by tests
app.connect('autodoc-process-docstring', _process_docstring)
app.connect('autodoc-skip-member', _skip_member)
for name, (default, rebuild) in iteritems(Config._config_values):
app.add_config_value(name, default, rebuild)
return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
def _process_docstring(app, what, name, obj, options, lines):
"""Process the docstring for a given python object.
Called when autodoc has read and processed a docstring. `lines` is a list
of docstring lines that `_process_docstring` modifies in place to change
what Sphinx outputs.
The following settings in conf.py control what styles of docstrings will
be parsed:
* ``napoleon_google_docstring`` -- parse Google style docstrings
* ``napoleon_numpy_docstring`` -- parse NumPy style docstrings
Parameters
----------
app : sphinx.application.Sphinx
Application object representing the Sphinx process.
what : str
A string specifying the type of the object to which the docstring
belongs. Valid values: "module", "class", "exception", "function",
"method", "attribute".
name : str
The fully qualified name of the object.
obj : module, class, exception, function, method, or attribute
The object to which the docstring belongs.
options : sphinx.ext.autodoc.Options
The options given to the directive: an object with attributes
inherited_members, undoc_members, show_inheritance and noindex that
are True if the flag option of same name was given to the auto
directive.
lines : list of str
The lines of the docstring, see above.
.. note:: `lines` is modified *in place*
"""
result_lines = lines
if app.config.napoleon_numpy_docstring:
docstring = NumpyDocstring(result_lines, app.config, app, what, name,
obj, options)
result_lines = docstring.lines()
if app.config.napoleon_google_docstring:
docstring = GoogleDocstring(result_lines, app.config, app, what, name,
obj, options)
result_lines = docstring.lines()
lines[:] = result_lines[:]
def _skip_member(app, what, name, obj, skip, options):
"""Determine if private and special class members are included in docs.
The following settings in conf.py determine if private and special class
members are included in the generated documentation:
* ``napoleon_include_private_with_doc`` --
include private members if they have docstrings
* ``napoleon_include_special_with_doc`` --
include special members if they have docstrings
Parameters
----------
app : sphinx.application.Sphinx
Application object representing the Sphinx process
what : str
A string specifying the type of the object to which the member
belongs. Valid values: "module", "class", "exception", "function",
"method", "attribute".
name : str
The name of the member.
obj : module, class, exception, function, method, or attribute.
For example, if the member is the __init__ method of class A, then
`obj` will be `A.__init__`.
skip : bool
A boolean indicating if autodoc will skip this member if `_skip_member`
does not override the decision
options : sphinx.ext.autodoc.Options
The options given to the directive: an object with attributes
inherited_members, undoc_members, show_inheritance and noindex that
are True if the flag option of same name was given to the auto
directive.
Returns
-------
bool
True if the member should be skipped during creation of the docs,
False if it should be included in the docs.
"""
has_doc = getattr(obj, '__doc__', False)
is_member = (what == 'class' or what == 'exception' or what == 'module')
if name != '__weakref__' and name != '__init__' and has_doc and is_member:
cls_is_owner = False
if what == 'class' or what == 'exception':
if PY2:
cls = getattr(obj, 'im_class', getattr(obj, '__objclass__',
None))
cls_is_owner = (cls and hasattr(cls, name) and
name in cls.__dict__)
elif sys.version_info >= (3, 3):
qualname = getattr(obj, '__qualname__', '')
cls_path, _, _ = qualname.rpartition('.')
if cls_path:
try:
if '.' in cls_path:
import importlib
import functools
mod = importlib.import_module(obj.__module__)
mod_path = cls_path.split('.')
cls = functools.reduce(getattr, mod_path, mod)
else:
cls = obj.__globals__[cls_path]
except:
cls_is_owner = False
else:
cls_is_owner = (cls and hasattr(cls, name) and
name in cls.__dict__)
else:
cls_is_owner = False
else:
cls_is_owner = True
if what == 'module' or cls_is_owner:
is_special = name.startswith('__') and name.endswith('__')
is_private = not is_special and name.startswith('_')
inc_special = app.config.napoleon_include_special_with_doc
inc_private = app.config.napoleon_include_private_with_doc
if (is_special and inc_special) or (is_private and inc_private):
return False
return skip
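# A minimal sketch of the transformation _process_docstring performs,
# runnable outside of Sphinx; the docstring content is made up:
#
#   from sphinx.ext.napoleon import Config
#   from sphinx.ext.napoleon.docstring import GoogleDocstring
#
#   config = Config(napoleon_use_param=True, napoleon_use_rtype=True)
#   lines = ['Args:',
#            '    x (int): value to square',
#            '',
#            'Returns:',
#            '    int: the square of x']
#   print('\n'.join(GoogleDocstring(lines, config).lines()))
#   # emits ":param x: ..." / ":type x: int" / ":returns: ..." / ":rtype: int"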
Sphinx-1.3.6/sphinx/ext/napoleon/iterators.py 0000664 0001750 0001750 00000016517 12660074372 022455 0 ustar vagrant vagrant 0000000 0000000 # -*- coding: utf-8 -*-
"""
sphinx.ext.napoleon.iterators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A collection of helpful iterators.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import collections
class peek_iter(object):
"""An iterator object that supports peeking ahead.
Parameters
----------
o : iterable or callable
`o` is interpreted very differently depending on the presence of
`sentinel`.
If `sentinel` is not given, then `o` must be a collection object
which supports either the iteration protocol or the sequence protocol.
If `sentinel` is given, then `o` must be a callable object.
sentinel : any value, optional
If given, the iterator will call `o` with no arguments for each
call to its `next` method; if the value returned is equal to
`sentinel`, :exc:`StopIteration` will be raised, otherwise the
value will be returned.
See Also
--------
`peek_iter` can operate as a drop in replacement for the built-in
`iter `_ function.
Attributes
----------
sentinel
The value used to indicate the iterator is exhausted. If `sentinel`
was not given when the `peek_iter` was instantiated, then it will
be set to a new object instance: ``object()``.
"""
def __init__(self, *args):
"""__init__(o, sentinel=None)"""
self._iterable = iter(*args)
self._cache = collections.deque()
if len(args) == 2:
self.sentinel = args[1]
else:
self.sentinel = object()
def __iter__(self):
return self
def __next__(self, n=None):
# note: prevent 2to3 from transforming self.next() into next(self), which
# would cause an infinite loop!
return getattr(self, 'next')(n)
def _fillcache(self, n):
"""Cache `n` items. If `n` is 0 or None, then 1 item is cached."""
if not n:
n = 1
try:
while len(self._cache) < n:
self._cache.append(next(self._iterable))
except StopIteration:
while len(self._cache) < n:
self._cache.append(self.sentinel)
def has_next(self):
"""Determine if iterator is exhausted.
Returns
-------
bool
True if iterator has more items, False otherwise.
Note
----
Will never raise :exc:`StopIteration`.
"""
return self.peek() != self.sentinel
def next(self, n=None):
"""Get the next item or `n` items of the iterator.
Parameters
----------
n : int or None
The number of items to retrieve. Defaults to None.
Returns
-------
item or list of items
The next item or `n` items of the iterator. If `n` is None, the
item itself is returned. If `n` is an int, the items will be
returned in a list. If `n` is 0, an empty list is returned.
Raises
------
StopIteration
Raised if the iterator is exhausted, even if `n` is 0.
"""
self._fillcache(n)
if not n:
if self._cache[0] == self.sentinel:
raise StopIteration
if n is None:
result = self._cache.popleft()
else:
result = []
else:
if self._cache[n - 1] == self.sentinel:
raise StopIteration
result = [self._cache.popleft() for i in range(n)]
return result
def peek(self, n=None):
"""Preview the next item or `n` items of the iterator.
The iterator is not advanced when peek is called.
Returns
-------
item or list of items
The next item or `n` items of the iterator. If `n` is None, the
item itself is returned. If `n` is an int, the items will be
returned in a list. If `n` is 0, an empty list is returned.
If the iterator is exhausted, `peek_iter.sentinel` is returned,
or placed as the last item in the returned list.
Note
----
Will never raise :exc:`StopIteration`.
"""
self._fillcache(n)
if n is None:
result = self._cache[0]
else:
result = [self._cache[i] for i in range(n)]
return result
class modify_iter(peek_iter):
"""An iterator object that supports modifying items as they are returned.
Parameters
----------
o : iterable or callable
`o` is interpreted very differently depending on the presence of
`sentinel`.
If `sentinel` is not given, then `o` must be a collection object
which supports either the iteration protocol or the sequence protocol.
If `sentinel` is given, then `o` must be a callable object.
sentinel : any value, optional
If given, the iterator will call `o` with no arguments for each
call to its `next` method; if the value returned is equal to
`sentinel`, :exc:`StopIteration` will be raised, otherwise the
value will be returned.
modifier : callable, optional
The function that will be used to modify each item returned by the
iterator. `modifier` should take a single argument and return a
single value. Defaults to ``lambda x: x``.
If `sentinel` is not given, `modifier` must be passed as a keyword
argument.
Attributes
----------
modifier : callable
`modifier` is called with each item in `o` as it is iterated. The
return value of `modifier` is returned in lieu of the item.
Values returned by `peek` as well as `next` are affected by
`modifier`. However, `modify_iter.sentinel` is never passed through
`modifier`; it will always be returned from `peek` unmodified.
Example
-------
>>> a = [" A list ",
... " of strings ",
... " with ",
... " extra ",
... " whitespace. "]
>>> modifier = lambda s: s.strip().replace('with', 'without')
>>> for s in modify_iter(a, modifier=modifier):
... print('"%s"' % s)
"A list"
"of strings"
"without"
"extra"
"whitespace."
"""
def __init__(self, *args, **kwargs):
"""__init__(o, sentinel=None, modifier=lambda x: x)"""
if 'modifier' in kwargs:
self.modifier = kwargs['modifier']
elif len(args) > 2:
self.modifier = args[2]
args = args[:2]
else:
self.modifier = lambda x: x
if not callable(self.modifier):
raise TypeError('modify_iter(o, modifier): '
'modifier must be callable')
super(modify_iter, self).__init__(*args)
def _fillcache(self, n):
"""Cache `n` modified items. If `n` is 0 or None, 1 item is cached.
Each item returned by the iterator is passed through the
`modify_iter.modifier` function before being cached.
"""
if not n:
n = 1
try:
while len(self._cache) < n:
self._cache.append(self.modifier(next(self._iterable)))
except StopIteration:
while len(self._cache) < n:
self._cache.append(self.sentinel)
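# A quick sketch of peek_iter in use (values are arbitrary): peek() looks
# ahead without consuming, next() consumes, and has_next() checks against
# the sentinel instead of raising StopIteration:
#
#   >>> from sphinx.ext.napoleon.iterators import peek_iter
#   >>> it = peek_iter(['a', 'b', 'c'])
#   >>> it.peek(), it.next()
#   ('a', 'a')
#   >>> it.next(2)
#   ['b', 'c']
#   >>> it.has_next()
#   False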
Sphinx-1.3.6/sphinx/ext/napoleon/docstring.py 0000664 0001750 0001750 00000075141 12665045070 022431 0 ustar vagrant vagrant 0000000 0000000 # -*- coding: utf-8 -*-
"""
sphinx.ext.napoleon.docstring
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Classes for docstring parsing and formatting.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import collections
import inspect
import re
from six import string_types
from six.moves import range
from sphinx.ext.napoleon.iterators import modify_iter
from sphinx.util.pycompat import UnicodeMixin
_directive_regex = re.compile(r'\.\. \S+::')
_google_section_regex = re.compile(r'^(\s|\w)+:\s*$')
_google_typed_arg_regex = re.compile(r'\s*(.+?)\s*\(\s*(.+?)\s*\)')
_numpy_section_regex = re.compile(r'^[=\-`:\'"~^_*+#<>]{2,}\s*$')
_xref_regex = re.compile(r'(:\w+:\S+:`.+?`|:\S+:`.+?`|`.+?`)')
class GoogleDocstring(UnicodeMixin):
"""Parse Google style docstrings.
Convert Google style docstrings to reStructuredText.
Parameters
----------
docstring : str or list of str
The docstring to parse, given either as a string or split into
individual lines.
config : sphinx.ext.napoleon.Config or sphinx.config.Config, optional
The configuration settings to use. If not given, defaults to the
config object on `app`; or if `app` is not given, defaults to
a new `sphinx.ext.napoleon.Config` object.
See Also
--------
:class:`sphinx.ext.napoleon.Config`
Other Parameters
----------------
app : sphinx.application.Sphinx, optional
Application object representing the Sphinx process.
what : str, optional
A string specifying the type of the object to which the docstring
belongs. Valid values: "module", "class", "exception", "function",
"method", "attribute".
name : str, optional
The fully qualified name of the object.
obj : module, class, exception, function, method, or attribute
The object to which the docstring belongs.
options : sphinx.ext.autodoc.Options, optional
The options given to the directive: an object with attributes
inherited_members, undoc_members, show_inheritance and noindex that
are True if the flag option of same name was given to the auto
directive.
Example
-------
>>> from sphinx.ext.napoleon import Config
>>> config = Config(napoleon_use_param=True, napoleon_use_rtype=True)
>>> docstring = '''One line summary.
...
... Extended description.
...
... Args:
... arg1(int): Description of `arg1`
... arg2(str): Description of `arg2`
... Returns:
... str: Description of return value.
... '''
>>> print(GoogleDocstring(docstring, config))
One line summary.
Extended description.
:param arg1: Description of `arg1`
:type arg1: int
:param arg2: Description of `arg2`
:type arg2: str
:returns: Description of return value.
:rtype: str
"""
def __init__(self, docstring, config=None, app=None, what='', name='',
obj=None, options=None):
self._config = config
self._app = app
if not self._config:
from sphinx.ext.napoleon import Config
self._config = self._app and self._app.config or Config()
if not what:
if inspect.isclass(obj):
what = 'class'
elif inspect.ismodule(obj):
what = 'module'
elif isinstance(obj, collections.Callable):
what = 'function'
else:
what = 'object'
self._what = what
self._name = name
self._obj = obj
self._opt = options
if isinstance(docstring, string_types):
docstring = docstring.splitlines()
self._lines = docstring
self._line_iter = modify_iter(docstring, modifier=lambda s: s.rstrip())
self._parsed_lines = []
self._is_in_section = False
self._section_indent = 0
if not hasattr(self, '_directive_sections'):
self._directive_sections = []
if not hasattr(self, '_sections'):
self._sections = {
'args': self._parse_parameters_section,
'arguments': self._parse_parameters_section,
'attributes': self._parse_attributes_section,
'example': self._parse_examples_section,
'examples': self._parse_examples_section,
'keyword args': self._parse_keyword_arguments_section,
'keyword arguments': self._parse_keyword_arguments_section,
'methods': self._parse_methods_section,
'note': self._parse_note_section,
'notes': self._parse_notes_section,
'other parameters': self._parse_other_parameters_section,
'parameters': self._parse_parameters_section,
'return': self._parse_returns_section,
'returns': self._parse_returns_section,
'raises': self._parse_raises_section,
'references': self._parse_references_section,
'see also': self._parse_see_also_section,
'warning': self._parse_warning_section,
'warnings': self._parse_warning_section,
'warns': self._parse_warns_section,
'yield': self._parse_yields_section,
'yields': self._parse_yields_section,
}
self._parse()
def __unicode__(self):
"""Return the parsed docstring in reStructuredText format.
Returns
-------
unicode
Unicode version of the docstring.
"""
return u'\n'.join(self.lines())
def lines(self):
"""Return the parsed lines of the docstring in reStructuredText format.
Returns
-------
list of str
The lines of the docstring in a list.
"""
return self._parsed_lines
def _consume_indented_block(self, indent=1):
lines = []
line = self._line_iter.peek()
while(not self._is_section_break() and
(not line or self._is_indented(line, indent))):
lines.append(next(self._line_iter))
line = self._line_iter.peek()
return lines
def _consume_contiguous(self):
lines = []
while (self._line_iter.has_next() and
self._line_iter.peek() and
not self._is_section_header()):
lines.append(next(self._line_iter))
return lines
def _consume_empty(self):
lines = []
line = self._line_iter.peek()
while self._line_iter.has_next() and not line:
lines.append(next(self._line_iter))
line = self._line_iter.peek()
return lines
def _consume_field(self, parse_type=True, prefer_type=False):
line = next(self._line_iter)
before, colon, after = self._partition_field_on_colon(line)
_name, _type, _desc = before, '', after
if parse_type:
match = _google_typed_arg_regex.match(before)
if match:
_name = match.group(1)
_type = match.group(2)
if _name[:2] == '**':
_name = r'\*\*'+_name[2:]
elif _name[:1] == '*':
_name = r'\*'+_name[1:]
if prefer_type and not _type:
_type, _name = _name, _type
indent = self._get_indent(line) + 1
_desc = [_desc] + self._dedent(self._consume_indented_block(indent))
_desc = self.__class__(_desc, self._config).lines()
return _name, _type, _desc
def _consume_fields(self, parse_type=True, prefer_type=False):
self._consume_empty()
fields = []
while not self._is_section_break():
_name, _type, _desc = self._consume_field(parse_type, prefer_type)
if _name or _type or _desc:
fields.append((_name, _type, _desc,))
return fields
def _consume_returns_section(self):
lines = self._dedent(self._consume_to_next_section())
if lines:
before, colon, after = self._partition_field_on_colon(lines[0])
_name, _type, _desc = '', '', lines
if colon:
if after:
_desc = [after] + lines[1:]
else:
_desc = lines[1:]
match = _google_typed_arg_regex.match(before)
if match:
_name = match.group(1)
_type = match.group(2)
else:
_type = before
_desc = self.__class__(_desc, self._config).lines()
return [(_name, _type, _desc,)]
else:
return []
def _consume_usage_section(self):
lines = self._dedent(self._consume_to_next_section())
return lines
def _consume_section_header(self):
section = next(self._line_iter)
stripped_section = section.strip(':')
if stripped_section.lower() in self._sections:
section = stripped_section
return section
def _consume_to_next_section(self):
self._consume_empty()
lines = []
while not self._is_section_break():
lines.append(next(self._line_iter))
return lines + self._consume_empty()
def _dedent(self, lines, full=False):
if full:
return [line.lstrip() for line in lines]
else:
min_indent = self._get_min_indent(lines)
return [line[min_indent:] for line in lines]
def _format_admonition(self, admonition, lines):
lines = self._strip_empty(lines)
if len(lines) == 1:
return ['.. %s:: %s' % (admonition, lines[0].strip()), '']
elif lines:
lines = self._indent(self._dedent(lines), 3)
return ['.. %s::' % admonition, ''] + lines + ['']
else:
return ['.. %s::' % admonition, '']
def _format_block(self, prefix, lines, padding=None):
if lines:
if padding is None:
padding = ' ' * len(prefix)
result_lines = []
for i, line in enumerate(lines):
if i == 0:
result_lines.append((prefix + line).rstrip())
elif line:
result_lines.append(padding + line)
else:
result_lines.append('')
return result_lines
else:
return [prefix]
def _format_field(self, _name, _type, _desc):
separator = any([s for s in _desc]) and ' --' or ''
if _name:
if _type:
if '`' in _type:
field = ['**%s** (%s)%s' % (_name, _type, separator)]
else:
field = ['**%s** (*%s*)%s' % (_name, _type, separator)]
else:
field = ['**%s**%s' % (_name, separator)]
elif _type:
if '`' in _type:
field = ['%s%s' % (_type, separator)]
else:
field = ['*%s*%s' % (_type, separator)]
else:
field = []
return field + _desc
def _format_fields(self, field_type, fields):
field_type = ':%s:' % field_type.strip()
padding = ' ' * len(field_type)
multi = len(fields) > 1
lines = []
for _name, _type, _desc in fields:
field = self._format_field(_name, _type, _desc)
if multi:
if lines:
lines.extend(self._format_block(padding + ' * ', field))
else:
lines.extend(self._format_block(field_type + ' * ', field))
else:
lines.extend(self._format_block(field_type + ' ', field))
return lines
def _get_current_indent(self, peek_ahead=0):
line = self._line_iter.peek(peek_ahead + 1)[peek_ahead]
while line != self._line_iter.sentinel:
if line:
return self._get_indent(line)
peek_ahead += 1
line = self._line_iter.peek(peek_ahead + 1)[peek_ahead]
return 0
def _get_indent(self, line):
for i, s in enumerate(line):
if not s.isspace():
return i
return len(line)
def _get_min_indent(self, lines):
min_indent = None
for line in lines:
if line:
indent = self._get_indent(line)
if min_indent is None:
min_indent = indent
elif indent < min_indent:
min_indent = indent
return min_indent or 0
def _indent(self, lines, n=4):
return [(' ' * n) + line for line in lines]
def _is_indented(self, line, indent=1):
for i, s in enumerate(line):
if i >= indent:
return True
elif not s.isspace():
return False
return False
def _is_section_header(self):
section = self._line_iter.peek().lower()
match = _google_section_regex.match(section)
if match and section.strip(':') in self._sections:
header_indent = self._get_indent(section)
section_indent = self._get_current_indent(peek_ahead=1)
return section_indent > header_indent
elif self._directive_sections:
if _directive_regex.match(section):
for directive_section in self._directive_sections:
if section.startswith(directive_section):
return True
return False
def _is_section_break(self):
line = self._line_iter.peek()
return (not self._line_iter.has_next() or
self._is_section_header() or
(self._is_in_section and
line and
not self._is_indented(line, self._section_indent)))
def _parse(self):
self._parsed_lines = self._consume_empty()
while self._line_iter.has_next():
if self._is_section_header():
try:
section = self._consume_section_header()
self._is_in_section = True
self._section_indent = self._get_current_indent()
if _directive_regex.match(section):
lines = [section] + self._consume_to_next_section()
else:
lines = self._sections[section.lower()](section)
finally:
self._is_in_section = False
self._section_indent = 0
else:
if not self._parsed_lines:
lines = self._consume_contiguous() + self._consume_empty()
else:
lines = self._consume_to_next_section()
self._parsed_lines.extend(lines)
def _parse_attributes_section(self, section):
lines = []
for _name, _type, _desc in self._consume_fields():
if self._config.napoleon_use_ivar:
field = ':ivar %s: ' % _name
lines.extend(self._format_block(field, _desc))
if _type:
lines.append(':vartype %s: %s' % (_name, _type))
else:
lines.append('.. attribute:: ' + _name)
if _type:
lines.append('')
if '`' in _type:
lines.append(' %s' % _type)
else:
lines.append(' *%s*' % _type)
if _desc:
lines.extend([''] + self._indent(_desc, 3))
lines.append('')
if self._config.napoleon_use_ivar:
lines.append('')
return lines
def _parse_examples_section(self, section):
use_admonition = self._config.napoleon_use_admonition_for_examples
return self._parse_generic_section(section, use_admonition)
def _parse_usage_section(self, section):
header = ['.. rubric:: Usage:', '']
block = ['.. code-block:: python', '']
lines = self._consume_usage_section()
lines = self._indent(lines, 3)
return header + block + lines + ['']
def _parse_generic_section(self, section, use_admonition):
lines = self._strip_empty(self._consume_to_next_section())
lines = self._dedent(lines)
if use_admonition:
header = '.. admonition:: %s' % section
lines = self._indent(lines, 3)
else:
header = '.. rubric:: %s' % section
if lines:
return [header, ''] + lines + ['']
else:
return [header, '']
def _parse_keyword_arguments_section(self, section):
return self._format_fields('Keyword Arguments', self._consume_fields())
def _parse_methods_section(self, section):
lines = []
for _name, _, _desc in self._consume_fields(parse_type=False):
lines.append('.. method:: %s' % _name)
if _desc:
lines.extend([''] + self._indent(_desc, 3))
lines.append('')
return lines
def _parse_note_section(self, section):
lines = self._consume_to_next_section()
return self._format_admonition('note', lines)
def _parse_notes_section(self, section):
use_admonition = self._config.napoleon_use_admonition_for_notes
return self._parse_generic_section('Notes', use_admonition)
def _parse_other_parameters_section(self, section):
return self._format_fields('Other Parameters', self._consume_fields())
def _parse_parameters_section(self, section):
fields = self._consume_fields()
if self._config.napoleon_use_param:
lines = []
for _name, _type, _desc in fields:
field = ':param %s: ' % _name
lines.extend(self._format_block(field, _desc))
if _type:
lines.append(':type %s: %s' % (_name, _type))
return lines + ['']
else:
return self._format_fields('Parameters', fields)
def _parse_raises_section(self, section):
fields = self._consume_fields(parse_type=False, prefer_type=True)
field_type = ':raises:'
padding = ' ' * len(field_type)
multi = len(fields) > 1
lines = []
for _, _type, _desc in fields:
has_desc = any(_desc)
sep = (_desc and _desc[0]) and ' -- ' or ''
if _type:
has_refs = '`' in _type or ':' in _type
has_space = any(c in ' \t\n\v\f ' for c in _type)
if not has_refs and not has_space:
_type = ':exc:`%s`%s' % (_type, sep)
elif has_desc and has_space:
_type = '*%s*%s' % (_type, sep)
else:
_type = '%s%s' % (_type, sep)
if has_desc:
field = [_type] + _desc
else:
field = [_type]
else:
field = _desc
if multi:
if lines:
lines.extend(self._format_block(padding + ' * ', field))
else:
lines.extend(self._format_block(field_type + ' * ', field))
else:
lines.extend(self._format_block(field_type + ' ', field))
if lines and lines[-1]:
lines.append('')
return lines
def _parse_references_section(self, section):
use_admonition = self._config.napoleon_use_admonition_for_references
return self._parse_generic_section('References', use_admonition)
def _parse_returns_section(self, section):
fields = self._consume_returns_section()
multi = len(fields) > 1
if multi:
use_rtype = False
else:
use_rtype = self._config.napoleon_use_rtype
lines = []
for _name, _type, _desc in fields:
if use_rtype:
field = self._format_field(_name, '', _desc)
else:
field = self._format_field(_name, _type, _desc)
if multi:
if lines:
lines.extend(self._format_block(' * ', field))
else:
lines.extend(self._format_block(':returns: * ', field))
else:
lines.extend(self._format_block(':returns: ', field))
if _type and use_rtype:
lines.append(':rtype: %s' % _type)
lines.append('')
return lines
def _parse_see_also_section(self, section):
lines = self._consume_to_next_section()
return self._format_admonition('seealso', lines)
def _parse_warning_section(self, section):
lines = self._consume_to_next_section()
return self._format_admonition('warning', lines)
def _parse_warns_section(self, section):
return self._format_fields('Warns', self._consume_fields())
def _parse_yields_section(self, section):
fields = self._consume_returns_section()
return self._format_fields('Yields', fields)
def _partition_field_on_colon(self, line):
before_colon = []
after_colon = []
colon = ''
found_colon = False
for i, source in enumerate(_xref_regex.split(line)):
if found_colon:
after_colon.append(source)
else:
if (i % 2) == 0 and ":" in source:
found_colon = True
before, colon, after = source.partition(":")
before_colon.append(before)
after_colon.append(after)
else:
before_colon.append(source)
return ("".join(before_colon).strip(),
colon,
"".join(after_colon).strip())
def _strip_empty(self, lines):
if lines:
start = -1
for i, line in enumerate(lines):
if line:
start = i
break
if start == -1:
lines = []
end = -1
for i in reversed(range(len(lines))):
line = lines[i]
if line:
end = i
break
if start > 0 or end + 1 < len(lines):
lines = lines[start:end + 1]
return lines
class NumpyDocstring(GoogleDocstring):
"""Parse NumPy style docstrings.
Convert NumPy style docstrings to reStructuredText.
Parameters
----------
docstring : str or list of str
The docstring to parse, given either as a string or split into
individual lines.
config : sphinx.ext.napoleon.Config or sphinx.config.Config, optional
The configuration settings to use. If not given, defaults to the
config object on `app`; or if `app` is not given, defaults to a new
`sphinx.ext.napoleon.Config` object.
See Also
--------
:class:`sphinx.ext.napoleon.Config`
Other Parameters
----------------
app : sphinx.application.Sphinx, optional
Application object representing the Sphinx process.
what : str, optional
A string specifying the type of the object to which the docstring
belongs. Valid values: "module", "class", "exception", "function",
"method", "attribute".
name : str, optional
The fully qualified name of the object.
obj : module, class, exception, function, method, or attribute
The object to which the docstring belongs.
options : sphinx.ext.autodoc.Options, optional
The options given to the directive: an object with attributes
inherited_members, undoc_members, show_inheritance and noindex that
are True if the flag option of the same name was given to the auto
directive.
Example
-------
>>> from sphinx.ext.napoleon import Config
>>> config = Config(napoleon_use_param=True, napoleon_use_rtype=True)
>>> docstring = '''One line summary.
...
... Extended description.
...
... Parameters
... ----------
... arg1 : int
... Description of `arg1`
... arg2 : str
... Description of `arg2`
... Returns
... -------
... str
... Description of return value.
... '''
>>> print(NumpyDocstring(docstring, config))
One line summary.
Extended description.
:param arg1: Description of `arg1`
:type arg1: int
:param arg2: Description of `arg2`
:type arg2: str
:returns: Description of return value.
:rtype: str
Methods
-------
__str__()
Return the parsed docstring in reStructuredText format.
Returns
-------
str
UTF-8 encoded version of the docstring.
__unicode__()
Return the parsed docstring in reStructuredText format.
Returns
-------
unicode
Unicode version of the docstring.
lines()
Return the parsed lines of the docstring in reStructuredText format.
Returns
-------
list of str
The lines of the docstring in a list.
"""
def __init__(self, docstring, config=None, app=None, what='', name='',
obj=None, options=None):
self._directive_sections = ['.. index::']
super(NumpyDocstring, self).__init__(docstring, config, app, what,
name, obj, options)
def _consume_field(self, parse_type=True, prefer_type=False):
line = next(self._line_iter)
if parse_type:
_name, _, _type = self._partition_field_on_colon(line)
else:
_name, _type = line, ''
_name, _type = _name.strip(), _type.strip()
if prefer_type and not _type:
_type, _name = _name, _type
indent = self._get_indent(line)
_desc = self._dedent(self._consume_indented_block(indent + 1))
_desc = self.__class__(_desc, self._config).lines()
return _name, _type, _desc
def _consume_returns_section(self):
return self._consume_fields(prefer_type=True)
def _consume_section_header(self):
section = next(self._line_iter)
if not _directive_regex.match(section):
# Consume the header underline
next(self._line_iter)
return section
def _is_section_break(self):
line1, line2 = self._line_iter.peek(2)
return (not self._line_iter.has_next() or
self._is_section_header() or
['', ''] == [line1, line2] or
(self._is_in_section and
line1 and
not self._is_indented(line1, self._section_indent)))
def _is_section_header(self):
section, underline = self._line_iter.peek(2)
section = section.lower()
if section in self._sections and isinstance(underline, string_types):
return bool(_numpy_section_regex.match(underline))
elif self._directive_sections:
if _directive_regex.match(section):
for directive_section in self._directive_sections:
if section.startswith(directive_section):
return True
return False
_name_rgx = re.compile(r"^\s*(:(?P<role>\w+):`(?P<name>[a-zA-Z0-9_.-]+)`|"
r" (?P<name2>[a-zA-Z0-9_.-]+))\s*", re.X)
def _parse_see_also_section(self, section):
lines = self._consume_to_next_section()
try:
return self._parse_numpydoc_see_also_section(lines)
except ValueError:
return self._format_admonition('seealso', lines)
def _parse_numpydoc_see_also_section(self, content):
"""
Derived from the NumpyDoc implementation of _parse_see_also.
See Also
--------
func_name : Descriptive text
continued text
another_func_name : Descriptive text
func_name1, func_name2, :meth:`func_name`, func_name3
"""
items = []
def parse_item_name(text):
"""Match ':role:`name`' or 'name'"""
m = self._name_rgx.match(text)
if m:
g = m.groups()
if g[1] is None:
return g[3], None
else:
return g[2], g[1]
raise ValueError("%s is not a item name" % text)
def push_item(name, rest):
if not name:
return
name, role = parse_item_name(name)
items.append((name, list(rest), role))
del rest[:]
current_func = None
rest = []
for line in content:
if not line.strip():
continue
m = self._name_rgx.match(line)
if m and line[m.end():].strip().startswith(':'):
push_item(current_func, rest)
current_func, line = line[:m.end()], line[m.end():]
rest = [line.split(':', 1)[1].strip()]
if not rest[0]:
rest = []
elif not line.startswith(' '):
push_item(current_func, rest)
current_func = None
if ',' in line:
for func in line.split(','):
if func.strip():
push_item(func, [])
elif line.strip():
current_func = line
elif current_func is not None:
rest.append(line.strip())
push_item(current_func, rest)
if not items:
return []
roles = {
'method': 'meth',
'meth': 'meth',
'function': 'func',
'func': 'func',
'class': 'class',
'exception': 'exc',
'exc': 'exc',
'object': 'obj',
'obj': 'obj',
'module': 'mod',
'mod': 'mod',
'data': 'data',
'constant': 'const',
'const': 'const',
'attribute': 'attr',
'attr': 'attr'
}
if self._what is None:
func_role = 'obj'
else:
func_role = roles.get(self._what, '')
lines = []
last_had_desc = True
for func, desc, role in items:
if role:
link = ':%s:`%s`' % (role, func)
elif func_role:
link = ':%s:`%s`' % (func_role, func)
else:
link = "`%s`_" % func
if desc or last_had_desc:
lines += ['']
lines += [link]
else:
lines[-1] += ", %s" % link
if desc:
lines += self._indent([' '.join(desc)])
last_had_desc = True
else:
last_had_desc = False
lines += ['']
return self._format_admonition('seealso', lines)
Sphinx-1.3.6/sphinx/ext/intersphinx.py
# -*- coding: utf-8 -*-
"""
sphinx.ext.intersphinx
~~~~~~~~~~~~~~~~~~~~~~
Insert links to objects documented in remote Sphinx documentation.
This works as follows:
* Each Sphinx HTML build creates a file named "objects.inv" that contains a
mapping from object names to URIs relative to the HTML set's root.
* Projects using the Intersphinx extension can specify links to such mapping
files in the `intersphinx_mapping` config value. The mapping will then be
used to resolve otherwise missing references to objects into links to the
other documentation.
* By default, the mapping file is assumed to be at the same location as the
rest of the documentation; however, the location of the mapping file can
also be specified individually, e.g. if the docs should be buildable
without Internet access.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import time
import zlib
import codecs
import posixpath
from os import path
import re
from six import iteritems
from six.moves.urllib import request
from docutils import nodes
from docutils.utils import relative_path
import sphinx
from sphinx.locale import _
from sphinx.builders.html import INVENTORY_FILENAME
handlers = [request.ProxyHandler(), request.HTTPRedirectHandler(),
request.HTTPHandler()]
try:
handlers.append(request.HTTPSHandler)
except AttributeError:
pass
request.install_opener(request.build_opener(*handlers))
UTF8StreamReader = codecs.lookup('utf-8')[2]
def read_inventory_v1(f, uri, join):
f = UTF8StreamReader(f)
invdata = {}
line = next(f)
projname = line.rstrip()[11:]
line = next(f)
version = line.rstrip()[11:]
for line in f:
name, type, location = line.rstrip().split(None, 2)
location = join(uri, location)
# version 1 did not add anchors to the location
if type == 'mod':
type = 'py:module'
location += '#module-' + name
else:
type = 'py:' + type
location += '#' + name
invdata.setdefault(type, {})[name] = (projname, version, location, '-')
return invdata
def read_inventory_v2(f, uri, join, bufsize=16*1024):
invdata = {}
line = f.readline()
projname = line.rstrip()[11:].decode('utf-8')
line = f.readline()
version = line.rstrip()[11:].decode('utf-8')
line = f.readline().decode('utf-8')
if 'zlib' not in line:
raise ValueError
def read_chunks():
decompressor = zlib.decompressobj()
for chunk in iter(lambda: f.read(bufsize), b''):
yield decompressor.decompress(chunk)
yield decompressor.flush()
def split_lines(iter):
buf = b''
for chunk in iter:
buf += chunk
lineend = buf.find(b'\n')
while lineend != -1:
yield buf[:lineend].decode('utf-8')
buf = buf[lineend+1:]
lineend = buf.find(b'\n')
assert not buf
for line in split_lines(read_chunks()):
# be careful to handle names with embedded spaces correctly
m = re.match(r'(?x)(.+?)\s+(\S*:\S*)\s+(\S+)\s+(\S+)\s+(.*)',
line.rstrip())
if not m:
continue
name, type, prio, location, dispname = m.groups()
if type == 'py:module' and type in invdata and \
name in invdata[type]: # due to a bug in 1.1 and below,
# two inventory entries are created
# for Python modules, and the first
# one is correct
continue
if location.endswith(u'$'):
location = location[:-1] + name
location = join(uri, location)
invdata.setdefault(type, {})[name] = (projname, version,
location, dispname)
return invdata
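# For orientation, the v2 format consumed above starts with plain-text header
# lines whose "# Project: " / "# Version: " prefixes are exactly 11 characters
# wide (hence the [11:] slices), for example:
#
#     # Sphinx inventory version 2
#     # Project: Foo
#     # Version: 1.0
#     # The remainder of this file is compressed using zlib.
#
# followed by zlib-compressed records, one object per line, matching the
# regular expression above, e.g. (values hypothetical):
#
#     foo.Bar py:class 1 api/foo.html#$ -
#
# where a trailing '$' in the location is shorthand for the object name and a
# '-' display name means no distinct display name was recorded.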
def fetch_inventory(app, uri, inv):
"""Fetch, parse and return an intersphinx inventory file."""
# both *uri* (base URI of the links to generate) and *inv* (actual
# location of the inventory file) can be local or remote URIs
localuri = uri.find('://') == -1
join = localuri and path.join or posixpath.join
try:
if inv.find('://') != -1:
f = request.urlopen(inv)
else:
f = open(path.join(app.srcdir, inv), 'rb')
except Exception as err:
app.warn('intersphinx inventory %r not fetchable due to '
'%s: %s' % (inv, err.__class__, err))
return
try:
line = f.readline().rstrip().decode('utf-8')
try:
if line == '# Sphinx inventory version 1':
invdata = read_inventory_v1(f, uri, join)
elif line == '# Sphinx inventory version 2':
invdata = read_inventory_v2(f, uri, join)
else:
raise ValueError
f.close()
except ValueError:
f.close()
raise ValueError('unknown or unsupported inventory version')
except Exception as err:
app.warn('intersphinx inventory %r not readable due to '
'%s: %s' % (inv, err.__class__.__name__, err))
else:
return invdata
def load_mappings(app):
"""Load all intersphinx mappings into the environment."""
now = int(time.time())
cache_time = now - app.config.intersphinx_cache_limit * 86400
env = app.builder.env
if not hasattr(env, 'intersphinx_cache'):
env.intersphinx_cache = {}
env.intersphinx_inventory = {}
env.intersphinx_named_inventory = {}
cache = env.intersphinx_cache
update = False
for key, value in iteritems(app.config.intersphinx_mapping):
if isinstance(value, tuple):
# new format
name, (uri, inv) = key, value
if not name.isalnum():
app.warn('intersphinx identifier %r is not alphanumeric' % name)
else:
# old format, no name
name, uri, inv = None, key, value
# we can safely assume that the uri<->inv mapping is not changed
# during partial rebuilds since a changed intersphinx_mapping
# setting will cause a full environment reread
if not isinstance(inv, tuple):
invs = (inv, )
else:
invs = inv
for inv in invs:
if not inv:
inv = posixpath.join(uri, INVENTORY_FILENAME)
# decide whether the inventory must be read: always read local
# files; remote ones only if the cache time is expired
if '://' not in inv or uri not in cache \
or cache[uri][1] < cache_time:
app.info('loading intersphinx inventory from %s...' % inv)
invdata = fetch_inventory(app, uri, inv)
if invdata:
cache[uri] = (name, now, invdata)
update = True
break
if update:
env.intersphinx_inventory = {}
env.intersphinx_named_inventory = {}
# Duplicate values in different inventories will shadow each
# other; which one will override which can vary between builds
# since they are specified using an unordered dict. To make
# it more consistent, we sort the named inventories and then
# add the unnamed inventories last. This means that the
# unnamed inventories will shadow the named ones but the named
# ones can still be accessed when the name is specified.
cached_vals = list(cache.values())
named_vals = sorted(v for v in cached_vals if v[0])
unnamed_vals = [v for v in cached_vals if not v[0]]
for name, _x, invdata in named_vals + unnamed_vals:
if name:
env.intersphinx_named_inventory[name] = invdata
for type, objects in iteritems(invdata):
env.intersphinx_inventory.setdefault(
type, {}).update(objects)
def missing_reference(app, env, node, contnode):
"""Attempt to resolve a missing reference via intersphinx references."""
target = node['reftarget']
if node['reftype'] == 'any':
# we search anything!
objtypes = ['%s:%s' % (domain.name, objtype)
for domain in env.domains.values()
for objtype in domain.object_types]
domain = None
else:
domain = node.get('refdomain')
if not domain:
# only objects in domains are in the inventory
return
objtypes = env.domains[domain].objtypes_for_role(node['reftype'])
if not objtypes:
return
objtypes = ['%s:%s' % (domain, objtype) for objtype in objtypes]
to_try = [(env.intersphinx_inventory, target)]
in_set = None
if ':' in target:
# first part may be the foreign doc set name
setname, newtarget = target.split(':', 1)
if setname in env.intersphinx_named_inventory:
in_set = setname
to_try.append((env.intersphinx_named_inventory[setname], newtarget))
for inventory, target in to_try:
for objtype in objtypes:
if objtype not in inventory or target not in inventory[objtype]:
continue
proj, version, uri, dispname = inventory[objtype][target]
if '://' not in uri and node.get('refdoc'):
# get correct path in case of subdirectories
uri = path.join(relative_path(node['refdoc'], env.srcdir), uri)
newnode = nodes.reference('', '', internal=False, refuri=uri,
reftitle=_('(in %s v%s)') % (proj, version))
if node.get('refexplicit'):
# use whatever title was given
newnode.append(contnode)
elif dispname == '-' or \
(domain == 'std' and node['reftype'] == 'keyword'):
# use whatever title was given, but strip prefix
title = contnode.astext()
if in_set and title.startswith(in_set+':'):
newnode.append(contnode.__class__(title[len(in_set)+1:],
title[len(in_set)+1:]))
else:
newnode.append(contnode)
else:
# else use the given display name (used for :ref:)
newnode.append(contnode.__class__(dispname, dispname))
return newnode
# at least get rid of the ':' in the target if no explicit title given
if in_set is not None and not node.get('refexplicit', True):
if len(contnode) and isinstance(contnode[0], nodes.Text):
contnode[0] = nodes.Text(newtarget, contnode[0].rawsource)
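# In reST source, a named mapping lets authors disambiguate a target by
# prefixing it with the set name, which the code above splits off and looks up
# in intersphinx_named_inventory, e.g. (mapping name hypothetical):
#
#     :py:class:`python:zipfile.ZipFile`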
def setup(app):
app.add_config_value('intersphinx_mapping', {}, True)
app.add_config_value('intersphinx_cache_limit', 5, False)
app.connect('missing-reference', missing_reference)
app.connect('builder-inited', load_mappings)
return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
Sphinx-1.3.6/sphinx/ext/autodoc.py
# -*- coding: utf-8 -*-
"""
sphinx.ext.autodoc
~~~~~~~~~~~~~~~~~~
Automatically insert docstrings for functions, classes or whole modules into
the doctree, thus avoiding duplication between docstrings and documentation
for those who like elaborate docstrings.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
import sys
import inspect
import traceback
from types import FunctionType, BuiltinFunctionType, MethodType
from six import PY2, iterkeys, iteritems, itervalues, text_type, class_types, \
string_types
from docutils import nodes
from docutils.utils import assemble_option_dict
from docutils.statemachine import ViewList
import sphinx
from sphinx.util import rpartition, force_decode
from sphinx.locale import _
from sphinx.pycode import ModuleAnalyzer, PycodeError
from sphinx.application import ExtensionError
from sphinx.util.nodes import nested_parse_with_titles
from sphinx.util.compat import Directive
from sphinx.util.inspect import getargspec, isdescriptor, safe_getmembers, \
safe_getattr, object_description, is_builtin_class_method
from sphinx.util.docstrings import prepare_docstring
#: extended signature RE: with explicit module name separated by ::
py_ext_sig_re = re.compile(
r'''^ ([\w.]+::)? # explicit module name
([\w.]+\.)? # module and/or class name(s)
(\w+) \s* # thing name
(?: \((.*)\) # optional: arguments
(?:\s* -> \s* (.*))? # return annotation
)? $ # and nothing more
''', re.VERBOSE)
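# Examples of strings accepted by py_ext_sig_re (names hypothetical):
#
#     io.open
#     zipfile.ZipFile.open(name, mode='r') -> file object
#     mymod::MyClass.method(arg)
#
# The captured groups (explicit module, class path, name, arguments, return
# annotation) are consumed by parse_name() and _find_signature() below.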
class DefDict(dict):
"""A dict that returns a default on nonexisting keys."""
def __init__(self, default):
dict.__init__(self)
self.default = default
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
return self.default
def __bool__(self):
# docutils check "if option_spec"
return True
__nonzero__ = __bool__ # for python2 compatibility
def identity(x):
return x
class Options(dict):
"""A dict/attribute hybrid that returns None on nonexisting keys."""
def __getattr__(self, name):
try:
return self[name.replace('_', '-')]
except KeyError:
return None
class _MockModule(object):
"""Used by autodoc_mock_imports."""
def __init__(self, *args, **kwargs):
pass
def __call__(self, *args, **kwargs):
return _MockModule()
@classmethod
def __getattr__(cls, name):
if name in ('__file__', '__path__'):
return '/dev/null'
elif name[0] == name[0].upper():
# Not very good, we assume Uppercase names are classes...
mocktype = type(name, (), {})
mocktype.__module__ = __name__
return mocktype
else:
return _MockModule()
def mock_import(modname):
if '.' in modname:
pkg, _n, mods = modname.rpartition('.')
mock_import(pkg)
mod = _MockModule()
sys.modules[modname] = mod
return mod
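# mock_import() is driven by the autodoc_mock_imports config value, so a
# conf.py for docs that cannot import heavy optional dependencies could
# contain, for example (module names hypothetical):
#
#     autodoc_mock_imports = ['bigcdep', 'bigcdep.linalg']
#
# Each listed dotted name (and its parent packages) is replaced with a
# _MockModule instance in sys.modules before the documented module is imported.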
ALL = object()
INSTANCEATTR = object()
def members_option(arg):
"""Used to convert the :members: option to auto directives."""
if arg is None:
return ALL
return [x.strip() for x in arg.split(',')]
def members_set_option(arg):
"""Used to convert the :members: option to auto directives."""
if arg is None:
return ALL
return set(x.strip() for x in arg.split(','))
SUPPRESS = object()
def annotation_option(arg):
if arg is None:
# suppress showing the representation of the object
return SUPPRESS
else:
return arg
def bool_option(arg):
"""Used to convert flag options to auto directives. (Instead of
directives.flag(), which returns None).
"""
return True
class AutodocReporter(object):
"""
A reporter replacement that assigns the correct source name
and line number to a system message, as recorded in a ViewList.
"""
def __init__(self, viewlist, reporter):
self.viewlist = viewlist
self.reporter = reporter
def __getattr__(self, name):
return getattr(self.reporter, name)
def system_message(self, level, message, *children, **kwargs):
if 'line' in kwargs and 'source' not in kwargs:
try:
source, line = self.viewlist.items[kwargs['line']]
except IndexError:
pass
else:
kwargs['source'] = source
kwargs['line'] = line
return self.reporter.system_message(level, message,
*children, **kwargs)
def debug(self, *args, **kwargs):
if self.reporter.debug_flag:
return self.system_message(0, *args, **kwargs)
def info(self, *args, **kwargs):
return self.system_message(1, *args, **kwargs)
def warning(self, *args, **kwargs):
return self.system_message(2, *args, **kwargs)
def error(self, *args, **kwargs):
return self.system_message(3, *args, **kwargs)
def severe(self, *args, **kwargs):
return self.system_message(4, *args, **kwargs)
# Some useful event listener factories for autodoc-process-docstring.
def cut_lines(pre, post=0, what=None):
"""Return a listener that removes the first *pre* and last *post*
lines of every docstring. If *what* is a sequence of strings,
only docstrings of a type in *what* will be processed.
Use like this (e.g. in the ``setup()`` function of :file:`conf.py`)::
from sphinx.ext.autodoc import cut_lines
app.connect('autodoc-process-docstring', cut_lines(4, what=['module']))
This can (and should) be used in place of :confval:`automodule_skip_lines`.
"""
def process(app, what_, name, obj, options, lines):
if what and what_ not in what:
return
del lines[:pre]
if post:
# remove one trailing blank line.
if lines and not lines[-1]:
lines.pop(-1)
del lines[-post:]
# make sure there is a blank line at the end
if lines and lines[-1]:
lines.append('')
return process
def between(marker, what=None, keepempty=False, exclude=False):
"""Return a listener that either keeps, or if *exclude* is True excludes,
lines between lines that match the *marker* regular expression. If no line
matches, the resulting docstring would be empty, so no change will be made
unless *keepempty* is true.
If *what* is a sequence of strings, only docstrings of a type in *what* will
be processed.
"""
marker_re = re.compile(marker)
def process(app, what_, name, obj, options, lines):
if what and what_ not in what:
return
deleted = 0
delete = not exclude
orig_lines = lines[:]
for i, line in enumerate(orig_lines):
if delete:
lines.pop(i - deleted)
deleted += 1
if marker_re.match(line):
delete = not delete
if delete:
lines.pop(i - deleted)
deleted += 1
if not lines and not keepempty:
lines[:] = orig_lines
# make sure there is a blank line at the end
if lines and lines[-1]:
lines.append('')
return process
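# Like cut_lines(), between() is meant to be connected from conf.py's setup()
# function; a hypothetical marker that strips an internal block from every
# docstring could look like:
#
#     from sphinx.ext.autodoc import between
#
#     def setup(app):
#         app.connect('autodoc-process-docstring',
#                     between(r'^\s*---8<---\s*$', exclude=True))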
def formatargspec(*argspec):
return inspect.formatargspec(*argspec,
formatvalue=lambda x: '=' + object_description(x))
class Documenter(object):
"""
A Documenter knows how to autodocument a single object type. When
registered with the AutoDirective, it will be used to document objects
of that type when needed by autodoc.
Its *objtype* attribute selects what auto directive it is assigned to
(the directive name is 'auto' + objtype), and what directive it generates
by default, though that can be overridden by an attribute called
*directivetype*.
A Documenter has an *option_spec* that works like a docutils directive's;
in fact, it will be used to parse an auto directive's options that matches
the documenter.
"""
#: name by which the directive is called (auto...) and the default
#: generated directive name
objtype = 'object'
#: indentation by which to indent the directive content
content_indent = u' '
#: priority if multiple documenters return True from can_document_member
priority = 0
#: order if autodoc_member_order is set to 'groupwise'
member_order = 0
#: true if the generated content may contain titles
titles_allowed = False
option_spec = {'noindex': bool_option}
@staticmethod
def get_attr(obj, name, *defargs):
"""getattr() override for types such as Zope interfaces."""
for typ, func in iteritems(AutoDirective._special_attrgetters):
if isinstance(obj, typ):
return func(obj, name, *defargs)
return safe_getattr(obj, name, *defargs)
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
"""Called to see if a member can be documented by this documenter."""
raise NotImplementedError('must be implemented in subclasses')
def __init__(self, directive, name, indent=u''):
self.directive = directive
self.env = directive.env
self.options = directive.genopt
self.name = name
self.indent = indent
# the module and object path within the module, and the fully
# qualified name (all set after resolve_name succeeds)
self.modname = None
self.module = None
self.objpath = None
self.fullname = None
# extra signature items (arguments and return annotation,
# also set after resolve_name succeeds)
self.args = None
self.retann = None
# the object to document (set after import_object succeeds)
self.object = None
self.object_name = None
# the parent/owner of the object to document
self.parent = None
# the module analyzer to get at attribute docs, or None
self.analyzer = None
def add_line(self, line, source, *lineno):
"""Append one line of generated reST to the output."""
self.directive.result.append(self.indent + line, source, *lineno)
def resolve_name(self, modname, parents, path, base):
"""Resolve the module and name of the object to document given by the
arguments and the current module/class.
Must return a pair of the module name and a chain of attributes; for
example, it would return ``('zipfile', ['ZipFile', 'open'])`` for the
``zipfile.ZipFile.open`` method.
"""
raise NotImplementedError('must be implemented in subclasses')
def parse_name(self):
"""Determine what module to import and what attribute to document.
Returns True and sets *self.modname*, *self.objpath*, *self.fullname*,
*self.args* and *self.retann* if parsing and resolving was successful.
"""
# first, parse the definition -- auto directives for classes and
# functions can contain a signature which is then used instead of
# an autogenerated one
try:
explicit_modname, path, base, args, retann = \
py_ext_sig_re.match(self.name).groups()
except AttributeError:
self.directive.warn('invalid signature for auto%s (%r)' %
(self.objtype, self.name))
return False
# support explicit module and class name separation via ::
if explicit_modname is not None:
modname = explicit_modname[:-2]
parents = path and path.rstrip('.').split('.') or []
else:
modname = None
parents = []
self.modname, self.objpath = \
self.resolve_name(modname, parents, path, base)
if not self.modname:
return False
self.args = args
self.retann = retann
self.fullname = (self.modname or '') + \
(self.objpath and '.' + '.'.join(self.objpath) or '')
return True
def import_object(self):
"""Import the object given by *self.modname* and *self.objpath* and set
it as *self.object*.
Returns True if successful, False if an error occurred.
"""
dbg = self.env.app.debug
if self.objpath:
dbg('[autodoc] from %s import %s',
self.modname, '.'.join(self.objpath))
try:
dbg('[autodoc] import %s', self.modname)
for modname in self.env.config.autodoc_mock_imports:
dbg('[autodoc] adding a mock module %s!', self.modname)
mock_import(modname)
__import__(self.modname)
parent = None
obj = self.module = sys.modules[self.modname]
dbg('[autodoc] => %r', obj)
for part in self.objpath:
parent = obj
dbg('[autodoc] getattr(_, %r)', part)
obj = self.get_attr(obj, part)
dbg('[autodoc] => %r', obj)
self.object_name = part
self.parent = parent
self.object = obj
return True
# this used to only catch SyntaxError, ImportError and AttributeError,
# but importing modules with side effects can raise all kinds of errors
except (Exception, SystemExit) as e:
if self.objpath:
errmsg = 'autodoc: failed to import %s %r from module %r' % \
(self.objtype, '.'.join(self.objpath), self.modname)
else:
errmsg = 'autodoc: failed to import %s %r' % \
(self.objtype, self.fullname)
if isinstance(e, SystemExit):
errmsg += ('; the module executes module-level statements ' +
'and it might call sys.exit().')
else:
errmsg += '; the following exception was raised:\n%s' % \
traceback.format_exc()
if PY2:
errmsg = errmsg.decode('utf-8')
dbg(errmsg)
self.directive.warn(errmsg)
self.env.note_reread()
return False
def get_real_modname(self):
"""Get the real module name of an object to document.
It can differ from the name of the module through which the object was
imported.
"""
return self.get_attr(self.object, '__module__', None) or self.modname
def check_module(self):
"""Check if *self.object* is really defined in the module given by
*self.modname*.
"""
if self.options.imported_members:
return True
modname = self.get_attr(self.object, '__module__', None)
if modname and modname != self.modname:
return False
return True
def format_args(self):
"""Format the argument signature of *self.object*.
Should return None if the object does not have a signature.
"""
return None
def format_name(self):
"""Format the name of *self.object*.
This normally should be something that can be parsed by the generated
directive, but doesn't need to be (Sphinx will display it unparsed
then).
"""
# normally the name doesn't contain the module (except for module
# directives of course)
return '.'.join(self.objpath) or self.modname
def format_signature(self):
"""Format the signature (arguments and return annotation) of the object.
Let the user process it via the ``autodoc-process-signature`` event.
"""
if self.args is not None:
# signature given explicitly
args = "(%s)" % self.args
else:
# try to introspect the signature
try:
args = self.format_args()
except Exception as err:
self.directive.warn('error while formatting arguments for '
'%s: %s' % (self.fullname, err))
args = None
retann = self.retann
result = self.env.app.emit_firstresult(
'autodoc-process-signature', self.objtype, self.fullname,
self.object, self.options, args, retann)
if result:
args, retann = result
if args is not None:
return args + (retann and (' -> %s' % retann) or '')
else:
return ''
def add_directive_header(self, sig):
"""Add the directive header and options to the generated content."""
domain = getattr(self, 'domain', 'py')
directive = getattr(self, 'directivetype', self.objtype)
name = self.format_name()
sourcename = self.get_sourcename()
self.add_line(u'.. %s:%s:: %s%s' % (domain, directive, name, sig),
sourcename)
if self.options.noindex:
self.add_line(u' :noindex:', sourcename)
if self.objpath:
# Be explicit about the module, this is necessary since .. class::
# etc. don't support a prepended module name
self.add_line(u' :module: %s' % self.modname, sourcename)
def get_doc(self, encoding=None, ignore=1):
"""Decode and return lines of the docstring(s) for the object."""
docstring = self.get_attr(self.object, '__doc__', None)
# make sure we have Unicode docstrings, then sanitize and split
# into lines
if isinstance(docstring, text_type):
return [prepare_docstring(docstring, ignore)]
elif isinstance(docstring, str): # this will not trigger on Py3
return [prepare_docstring(force_decode(docstring, encoding),
ignore)]
# ... else it is something strange, let's ignore it
return []
def process_doc(self, docstrings):
"""Let the user process the docstrings before adding them."""
for docstringlines in docstrings:
if self.env.app:
# let extensions preprocess docstrings
self.env.app.emit('autodoc-process-docstring',
self.objtype, self.fullname, self.object,
self.options, docstringlines)
for line in docstringlines:
yield line
def get_sourcename(self):
if self.analyzer:
# prevent encoding errors when the file name is non-ASCII
if not isinstance(self.analyzer.srcname, text_type):
filename = text_type(self.analyzer.srcname,
sys.getfilesystemencoding(), 'replace')
else:
filename = self.analyzer.srcname
return u'%s:docstring of %s' % (filename, self.fullname)
return u'docstring of %s' % self.fullname
def add_content(self, more_content, no_docstring=False):
"""Add content from docstrings, attribute documentation and user."""
# set sourcename and add content from attribute documentation
sourcename = self.get_sourcename()
if self.analyzer:
attr_docs = self.analyzer.find_attr_docs()
if self.objpath:
key = ('.'.join(self.objpath[:-1]), self.objpath[-1])
if key in attr_docs:
no_docstring = True
docstrings = [attr_docs[key]]
for i, line in enumerate(self.process_doc(docstrings)):
self.add_line(line, sourcename, i)
# add content from docstrings
if not no_docstring:
encoding = self.analyzer and self.analyzer.encoding
docstrings = self.get_doc(encoding)
if not docstrings:
# append at least a dummy docstring, so that the event
# autodoc-process-docstring is fired and can add some
# content if desired
docstrings.append([])
for i, line in enumerate(self.process_doc(docstrings)):
self.add_line(line, sourcename, i)
# add additional content (e.g. from document), if present
if more_content:
for line, src in zip(more_content.data, more_content.items):
self.add_line(line, src[0], src[1])
def get_object_members(self, want_all):
"""Return `(members_check_module, members)` where `members` is a
list of `(membername, member)` pairs of the members of *self.object*.
If *want_all* is True, return all members. Else, only return those
members given by *self.options.members* (which may also be none).
"""
analyzed_member_names = set()
if self.analyzer:
attr_docs = self.analyzer.find_attr_docs()
namespace = '.'.join(self.objpath)
for item in iteritems(attr_docs):
if item[0][0] == namespace:
analyzed_member_names.add(item[0][1])
if not want_all:
if not self.options.members:
return False, []
# specific members given
members = []
for mname in self.options.members:
try:
members.append((mname, self.get_attr(self.object, mname)))
except AttributeError:
if mname not in analyzed_member_names:
self.directive.warn('missing attribute %s in object %s'
% (mname, self.fullname))
elif self.options.inherited_members:
# safe_getmembers() uses dir() which pulls in members from all
# base classes
members = safe_getmembers(self.object, attr_getter=self.get_attr)
else:
# __dict__ contains only the members directly defined in
# the class (but get them via getattr anyway, to e.g. get
# unbound method objects instead of function objects);
# using list(iterkeys()) because apparently there are objects for which
# __dict__ changes while getting attributes
try:
obj_dict = self.get_attr(self.object, '__dict__')
except AttributeError:
members = []
else:
members = [(mname, self.get_attr(self.object, mname, None))
for mname in list(iterkeys(obj_dict))]
membernames = set(m[0] for m in members)
# add instance attributes from the analyzer
for aname in analyzed_member_names:
if aname not in membernames and \
(want_all or aname in self.options.members):
members.append((aname, INSTANCEATTR))
return False, sorted(members)
def filter_members(self, members, want_all):
"""Filter the given member list.
Members are skipped if
- they are private (except if given explicitly or the private-members
option is set)
- they are special methods (except if given explicitly or the
special-members option is set)
- they are undocumented (except if the undoc-members option is set)
The user can override the skipping decision by connecting to the
``autodoc-skip-member`` event.
"""
ret = []
# search for members in source code too
namespace = '.'.join(self.objpath) # will be empty for modules
if self.analyzer:
attr_docs = self.analyzer.find_attr_docs()
else:
attr_docs = {}
# process members and determine which to skip
for (membername, member) in members:
# if isattr is True, the member is documented as an attribute
isattr = False
doc = self.get_attr(member, '__doc__', None)
# if the member __doc__ is the same as self's __doc__, it's just
# inherited and therefore not the member's doc
cls = self.get_attr(member, '__class__', None)
if cls:
cls_doc = self.get_attr(cls, '__doc__', None)
if cls_doc == doc:
doc = None
has_doc = bool(doc)
keep = False
if want_all and membername.startswith('__') and \
membername.endswith('__') and len(membername) > 4:
# special __methods__
if self.options.special_members is ALL and \
membername != '__doc__':
keep = has_doc or self.options.undoc_members
elif self.options.special_members and \
self.options.special_members is not ALL and \
membername in self.options.special_members:
keep = has_doc or self.options.undoc_members
elif want_all and membername.startswith('_'):
# ignore members whose name starts with _ by default
keep = self.options.private_members and \
(has_doc or self.options.undoc_members)
elif (namespace, membername) in attr_docs:
# keep documented attributes
keep = True
isattr = True
else:
# ignore undocumented members if :undoc-members: is not given
keep = has_doc or self.options.undoc_members
# give the user a chance to decide whether this member
# should be skipped
if self.env.app:
# let extensions preprocess docstrings
skip_user = self.env.app.emit_firstresult(
'autodoc-skip-member', self.objtype, membername, member,
not keep, self.options)
if skip_user is not None:
keep = not skip_user
if keep:
ret.append((membername, member, isattr))
return ret
def document_members(self, all_members=False):
"""Generate reST for member documentation.
If *all_members* is True, do all members, else those given by
*self.options.members*.
"""
# set current namespace for finding members
self.env.temp_data['autodoc:module'] = self.modname
if self.objpath:
self.env.temp_data['autodoc:class'] = self.objpath[0]
want_all = all_members or self.options.inherited_members or \
self.options.members is ALL
# find out which members are documentable
members_check_module, members = self.get_object_members(want_all)
# remove members given by exclude-members
if self.options.exclude_members:
members = [(membername, member) for (membername, member) in members
if membername not in self.options.exclude_members]
# document non-skipped members
memberdocumenters = []
for (mname, member, isattr) in self.filter_members(members, want_all):
classes = [cls for cls in itervalues(AutoDirective._registry)
if cls.can_document_member(member, mname, isattr, self)]
if not classes:
# don't know how to document this member
continue
# prefer the documenter with the highest priority
classes.sort(key=lambda cls: cls.priority)
# give explicitly separated module name, so that members
# of inner classes can be documented
full_mname = self.modname + '::' + \
'.'.join(self.objpath + [mname])
documenter = classes[-1](self.directive, full_mname, self.indent)
memberdocumenters.append((documenter, isattr))
member_order = self.options.member_order or \
self.env.config.autodoc_member_order
if member_order == 'groupwise':
# sort by group; relies on stable sort to keep items in the
# same group sorted alphabetically
memberdocumenters.sort(key=lambda e: e[0].member_order)
elif member_order == 'bysource' and self.analyzer:
# sort by source order, by virtue of the module analyzer
tagorder = self.analyzer.tagorder
def keyfunc(entry):
fullname = entry[0].name.split('::')[1]
return tagorder.get(fullname, len(tagorder))
memberdocumenters.sort(key=keyfunc)
for documenter, isattr in memberdocumenters:
documenter.generate(
all_members=True, real_modname=self.real_modname,
check_module=members_check_module and not isattr)
# reset current objects
self.env.temp_data['autodoc:module'] = None
self.env.temp_data['autodoc:class'] = None
def generate(self, more_content=None, real_modname=None,
check_module=False, all_members=False):
"""Generate reST for the object given by *self.name*, and possibly for
its members.
If *more_content* is given, include that content. If *real_modname* is
given, use that module name to find attribute docs. If *check_module* is
True, only generate if the object is defined in the module name it is
imported from. If *all_members* is True, document all members.
"""
if not self.parse_name():
# need a module to import
self.directive.warn(
'don\'t know which module to import for autodocumenting '
'%r (try placing a "module" or "currentmodule" directive '
'in the document, or giving an explicit module name)'
% self.name)
return
# now, import the module and get object to document
if not self.import_object():
return
# If there is no real module defined, figure out which to use.
# The real module is used in the module analyzer to look up the module
# where the attribute documentation would actually be found in.
# This is used for situations where you have a module that collects the
# functions and classes of internal submodules.
self.real_modname = real_modname or self.get_real_modname()
# try to also get a source code analyzer for attribute docs
try:
self.analyzer = ModuleAnalyzer.for_module(self.real_modname)
# parse right now, to get PycodeErrors on parsing (results will
# be cached anyway)
self.analyzer.find_attr_docs()
except PycodeError as err:
self.env.app.debug('[autodoc] module analyzer failed: %s', err)
# no source file -- e.g. for builtin and C modules
self.analyzer = None
# at least add the module.__file__ as a dependency
if hasattr(self.module, '__file__') and self.module.__file__:
self.directive.filename_set.add(self.module.__file__)
else:
self.directive.filename_set.add(self.analyzer.srcname)
# check __module__ of object (for members not given explicitly)
if check_module:
if not self.check_module():
return
sourcename = self.get_sourcename()
# make sure that the result starts with an empty line. This is
# necessary for some situations where another directive preprocesses
# reST and no starting newline is present
self.add_line(u'', sourcename)
# format the object's signature, if any
sig = self.format_signature()
# generate the directive header and options, if applicable
self.add_directive_header(sig)
self.add_line(u'', sourcename)
# e.g. the module directive doesn't have content
self.indent += self.content_indent
# add all content (from docstrings, attribute docs etc.)
self.add_content(more_content)
# document members, if possible
self.document_members(all_members)
class ModuleDocumenter(Documenter):
"""
Specialized Documenter subclass for modules.
"""
objtype = 'module'
content_indent = u''
titles_allowed = True
option_spec = {
'members': members_option, 'undoc-members': bool_option,
'noindex': bool_option, 'inherited-members': bool_option,
'show-inheritance': bool_option, 'synopsis': identity,
'platform': identity, 'deprecated': bool_option,
'member-order': identity, 'exclude-members': members_set_option,
'private-members': bool_option, 'special-members': members_option,
'imported-members': bool_option,
}
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
# don't document submodules automatically
return False
def resolve_name(self, modname, parents, path, base):
if modname is not None:
self.directive.warn('"::" in automodule name doesn\'t make sense')
return (path or '') + base, []
def parse_name(self):
ret = Documenter.parse_name(self)
if self.args or self.retann:
self.directive.warn('signature arguments or return annotation '
'given for automodule %s' % self.fullname)
return ret
def add_directive_header(self, sig):
Documenter.add_directive_header(self, sig)
sourcename = self.get_sourcename()
# add some module-specific options
if self.options.synopsis:
self.add_line(
u' :synopsis: ' + self.options.synopsis, sourcename)
if self.options.platform:
self.add_line(
u' :platform: ' + self.options.platform, sourcename)
if self.options.deprecated:
self.add_line(u' :deprecated:', sourcename)
def get_object_members(self, want_all):
if want_all:
if not hasattr(self.object, '__all__'):
# for implicit module members, check __module__ to avoid
# documenting imported objects
return True, safe_getmembers(self.object)
else:
memberlist = self.object.__all__
# Sometimes __all__ is broken...
if not isinstance(memberlist, (list, tuple)) or not \
all(isinstance(entry, string_types) for entry in memberlist):
self.directive.warn(
'__all__ should be a list of strings, not %r '
'(in module %s) -- ignoring __all__' %
(memberlist, self.fullname))
# fall back to all members
return True, safe_getmembers(self.object)
else:
memberlist = self.options.members or []
ret = []
for mname in memberlist:
try:
ret.append((mname, safe_getattr(self.object, mname)))
except AttributeError:
self.directive.warn(
'missing attribute mentioned in :members: or __all__: '
'module %s, attribute %s' % (
safe_getattr(self.object, '__name__', '???'), mname))
return False, ret
class ModuleLevelDocumenter(Documenter):
"""
Specialized Documenter subclass for objects on module level (functions,
classes, data/constants).
"""
def resolve_name(self, modname, parents, path, base):
if modname is None:
if path:
modname = path.rstrip('.')
else:
# if documenting a toplevel object without explicit module,
# it can be contained in another auto directive ...
modname = self.env.temp_data.get('autodoc:module')
# ... or in the scope of a module directive
if not modname:
modname = self.env.ref_context.get('py:module')
# ... else, it stays None, which means invalid
return modname, parents + [base]
class ClassLevelDocumenter(Documenter):
"""
Specialized Documenter subclass for objects on class level (methods,
attributes).
"""
def resolve_name(self, modname, parents, path, base):
if modname is None:
if path:
mod_cls = path.rstrip('.')
else:
mod_cls = None
# if documenting a class-level object without path,
# there must be a current class, either from a parent
# auto directive ...
mod_cls = self.env.temp_data.get('autodoc:class')
# ... or from a class directive
if mod_cls is None:
mod_cls = self.env.ref_context.get('py:class')
# ... if still None, there's no way to know
if mod_cls is None:
return None, []
modname, cls = rpartition(mod_cls, '.')
parents = [cls]
# if the module name is still missing, get it like above
if not modname:
modname = self.env.temp_data.get('autodoc:module')
if not modname:
modname = self.env.ref_context.get('py:module')
# ... else, it stays None, which means invalid
return modname, parents + [base]
class DocstringSignatureMixin(object):
"""
Mixin for FunctionDocumenter and MethodDocumenter to provide the
feature of reading the signature from the docstring.
"""
def _find_signature(self, encoding=None):
docstrings = self.get_doc(encoding)
self._new_docstrings = docstrings[:]
result = None
for i, doclines in enumerate(docstrings):
# no lines in docstring, no match
if not doclines:
continue
# match first line of docstring against signature RE
match = py_ext_sig_re.match(doclines[0])
if not match:
continue
exmod, path, base, args, retann = match.groups()
# the base name must match ours
valid_names = [self.objpath[-1]]
if isinstance(self, ClassDocumenter):
valid_names.append('__init__')
if hasattr(self.object, '__mro__'):
valid_names.extend(cls.__name__ for cls in self.object.__mro__)
if base not in valid_names:
continue
# re-prepare docstring to ignore more leading indentation
self._new_docstrings[i] = prepare_docstring('\n'.join(doclines[1:]))
result = args, retann
# don't look any further
break
return result
def get_doc(self, encoding=None, ignore=1):
lines = getattr(self, '_new_docstrings', None)
if lines is not None:
return lines
return Documenter.get_doc(self, encoding, ignore)
def format_signature(self):
if self.args is None and self.env.config.autodoc_docstring_signature:
# only act if a signature is not explicitly given already, and if
# the feature is enabled
result = self._find_signature()
if result is not None:
self.args, self.retann = result
return Documenter.format_signature(self)
class DocstringStripSignatureMixin(DocstringSignatureMixin):
"""
Mixin for AttributeDocumenter to provide the
feature of stripping any function signature from the docstring.
"""
def format_signature(self):
if self.args is None and self.env.config.autodoc_docstring_signature:
# only act if a signature is not explicitly given already, and if
# the feature is enabled
result = self._find_signature()
if result is not None:
# Discarding _args is the only difference from
# DocstringSignatureMixin.format_signature.
# Documenter.format_signature uses self.args to format the signature.
_args, self.retann = result
return Documenter.format_signature(self)
class FunctionDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter):
"""
Specialized Documenter subclass for functions.
"""
objtype = 'function'
member_order = 30
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
return isinstance(member, (FunctionType, BuiltinFunctionType))
def format_args(self):
if inspect.isbuiltin(self.object) or \
inspect.ismethoddescriptor(self.object):
# cannot introspect arguments of a C function or method
return None
try:
argspec = getargspec(self.object)
except TypeError:
if (is_builtin_class_method(self.object, '__new__') and
is_builtin_class_method(self.object, '__init__')):
raise TypeError('%r is a builtin class' % self.object)
# if a class should be documented as function (yay duck
# typing) we try to use the constructor signature as function
# signature without the first argument.
try:
argspec = getargspec(self.object.__new__)
except TypeError:
argspec = getargspec(self.object.__init__)
if argspec[0]:
del argspec[0][0]
args = formatargspec(*argspec)
# escape backslashes for reST
args = args.replace('\\', '\\\\')
return args
def document_members(self, all_members=False):
pass
class ClassDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter):
"""
Specialized Documenter subclass for classes.
"""
objtype = 'class'
member_order = 20
option_spec = {
'members': members_option, 'undoc-members': bool_option,
'noindex': bool_option, 'inherited-members': bool_option,
'show-inheritance': bool_option, 'member-order': identity,
'exclude-members': members_set_option,
'private-members': bool_option, 'special-members': members_option,
}
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
return isinstance(member, class_types)
def import_object(self):
ret = ModuleLevelDocumenter.import_object(self)
# if the class is documented under another name, document it
# as data/attribute
if ret:
if hasattr(self.object, '__name__'):
self.doc_as_attr = (self.objpath[-1] != self.object.__name__)
else:
self.doc_as_attr = True
return ret
def format_args(self):
# for classes, the relevant signature is the __init__ method's
initmeth = self.get_attr(self.object, '__init__', None)
# classes without __init__ method, default __init__ or
# __init__ written in C?
if initmeth is None or \
is_builtin_class_method(self.object, '__init__') or \
not(inspect.ismethod(initmeth) or inspect.isfunction(initmeth)):
return None
try:
argspec = getargspec(initmeth)
except TypeError:
# still not possible: happens e.g. for old-style classes
# with __init__ in C
return None
if argspec[0] and argspec[0][0] in ('cls', 'self'):
del argspec[0][0]
return formatargspec(*argspec)
def format_signature(self):
if self.doc_as_attr:
return ''
return DocstringSignatureMixin.format_signature(self)
def add_directive_header(self, sig):
if self.doc_as_attr:
self.directivetype = 'attribute'
Documenter.add_directive_header(self, sig)
# add inheritance info, if wanted
if not self.doc_as_attr and self.options.show_inheritance:
sourcename = self.get_sourcename()
self.add_line(u'', sourcename)
if hasattr(self.object, '__bases__') and len(self.object.__bases__):
bases = [b.__module__ in ('__builtin__', 'builtins') and
u':class:`%s`' % b.__name__ or
u':class:`%s.%s`' % (b.__module__, b.__name__)
for b in self.object.__bases__]
self.add_line(_(u' Bases: %s') % ', '.join(bases),
sourcename)
def get_doc(self, encoding=None, ignore=1):
lines = getattr(self, '_new_docstrings', None)
if lines is not None:
return lines
content = self.env.config.autoclass_content
docstrings = []
attrdocstring = self.get_attr(self.object, '__doc__', None)
if attrdocstring:
docstrings.append(attrdocstring)
# for classes, what the "docstring" is can be controlled via a
# config value; the default is only the class docstring
if content in ('both', 'init'):
initdocstring = self.get_attr(
self.get_attr(self.object, '__init__', None), '__doc__')
# for new-style classes, no __init__ means default __init__
if (initdocstring is not None and
(initdocstring == object.__init__.__doc__ or # for pypy
initdocstring.strip() == object.__init__.__doc__)): # for !pypy
initdocstring = None
if initdocstring:
if content == 'init':
docstrings = [initdocstring]
else:
docstrings.append(initdocstring)
doc = []
for docstring in docstrings:
if isinstance(docstring, text_type):
doc.append(prepare_docstring(docstring, ignore))
elif isinstance(docstring, str): # this will not trigger on Py3
doc.append(prepare_docstring(force_decode(docstring, encoding),
ignore))
return doc
def add_content(self, more_content, no_docstring=False):
if self.doc_as_attr:
classname = safe_getattr(self.object, '__name__', None)
if classname:
content = ViewList(
[_('alias of :class:`%s`') % classname], source='')
ModuleLevelDocumenter.add_content(self, content,
no_docstring=True)
else:
ModuleLevelDocumenter.add_content(self, more_content)
def document_members(self, all_members=False):
if self.doc_as_attr:
return
ModuleLevelDocumenter.document_members(self, all_members)
class ExceptionDocumenter(ClassDocumenter):
"""
Specialized ClassDocumenter subclass for exceptions.
"""
objtype = 'exception'
member_order = 10
# needs a higher priority than ClassDocumenter
priority = 10
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
return isinstance(member, class_types) and \
issubclass(member, BaseException)
class DataDocumenter(ModuleLevelDocumenter):
"""
Specialized Documenter subclass for data items.
"""
objtype = 'data'
member_order = 40
priority = -10
option_spec = dict(ModuleLevelDocumenter.option_spec)
option_spec["annotation"] = annotation_option
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
return isinstance(parent, ModuleDocumenter) and isattr
def add_directive_header(self, sig):
ModuleLevelDocumenter.add_directive_header(self, sig)
sourcename = self.get_sourcename()
if not self.options.annotation:
try:
objrepr = object_description(self.object)
except ValueError:
pass
else:
self.add_line(u' :annotation: = ' + objrepr, sourcename)
elif self.options.annotation is SUPPRESS:
pass
else:
self.add_line(u' :annotation: %s' % self.options.annotation,
sourcename)
def document_members(self, all_members=False):
pass
class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter):
"""
Specialized Documenter subclass for methods (normal, static and class).
"""
objtype = 'method'
member_order = 50
priority = 1 # must be more than FunctionDocumenter
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
return inspect.isroutine(member) and \
not isinstance(parent, ModuleDocumenter)
def import_object(self):
ret = ClassLevelDocumenter.import_object(self)
if not ret:
return ret
# to distinguish classmethod/staticmethod
obj = self.parent.__dict__.get(self.object_name)
if isinstance(obj, classmethod):
self.directivetype = 'classmethod'
# document class and static members before ordinary ones
self.member_order = self.member_order - 1
elif isinstance(obj, staticmethod):
self.directivetype = 'staticmethod'
# document class and static members before ordinary ones
self.member_order = self.member_order - 1
else:
self.directivetype = 'method'
return ret
def format_args(self):
if inspect.isbuiltin(self.object) or \
inspect.ismethoddescriptor(self.object):
# can never get arguments of a C function or method
return None
argspec = getargspec(self.object)
if argspec[0] and argspec[0][0] in ('cls', 'self'):
del argspec[0][0]
args = formatargspec(*argspec)
# escape backslashes for reST
args = args.replace('\\', '\\\\')
return args
def document_members(self, all_members=False):
pass
class AttributeDocumenter(DocstringStripSignatureMixin, ClassLevelDocumenter):
"""
Specialized Documenter subclass for attributes.
"""
objtype = 'attribute'
member_order = 60
option_spec = dict(ModuleLevelDocumenter.option_spec)
option_spec["annotation"] = annotation_option
# must be higher than the MethodDocumenter, else it will recognize
# some non-data descriptors as methods
priority = 10
method_types = (FunctionType, BuiltinFunctionType, MethodType)
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
isdatadesc = isdescriptor(member) and not \
isinstance(member, cls.method_types) and not \
type(member).__name__ in ("type", "method_descriptor",
"instancemethod")
return isdatadesc or (not isinstance(parent, ModuleDocumenter) and
not inspect.isroutine(member) and
not isinstance(member, class_types))
def document_members(self, all_members=False):
pass
def import_object(self):
ret = ClassLevelDocumenter.import_object(self)
if isdescriptor(self.object) and \
not isinstance(self.object, self.method_types):
self._datadescriptor = True
else:
# if it's not a data descriptor
self._datadescriptor = False
return ret
def get_real_modname(self):
return self.get_attr(self.parent or self.object, '__module__', None) \
or self.modname
def add_directive_header(self, sig):
ClassLevelDocumenter.add_directive_header(self, sig)
sourcename = self.get_sourcename()
if not self.options.annotation:
if not self._datadescriptor:
try:
objrepr = object_description(self.object)
except ValueError:
pass
else:
self.add_line(u' :annotation: = ' + objrepr, sourcename)
elif self.options.annotation is SUPPRESS:
pass
else:
self.add_line(u' :annotation: %s' % self.options.annotation,
sourcename)
def add_content(self, more_content, no_docstring=False):
if not self._datadescriptor:
# if it's not a data descriptor, its docstring is very probably the
# wrong thing to display
no_docstring = True
ClassLevelDocumenter.add_content(self, more_content, no_docstring)
class InstanceAttributeDocumenter(AttributeDocumenter):
"""
Specialized Documenter subclass for attributes that cannot be imported
because they are instance attributes (e.g. assigned in __init__).
"""
objtype = 'instanceattribute'
directivetype = 'attribute'
member_order = 60
# must be higher than AttributeDocumenter
priority = 11
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
"""This documents only INSTANCEATTR members."""
return isattr and (member is INSTANCEATTR)
def import_object(self):
"""Never import anything."""
# disguise as an attribute
self.objtype = 'attribute'
self._datadescriptor = False
return True
def add_content(self, more_content, no_docstring=False):
"""Never try to get a docstring from the object."""
AttributeDocumenter.add_content(self, more_content, no_docstring=True)
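# Illustrative usage: an attribute that only exists after __init__ runs can
# still be documented through a ``#:`` comment on the assignment, which the
# source analyzer reports as an INSTANCEATTR member (sketch, class name is
# hypothetical):
#
#     class Connection(object):
#         def __init__(self):
#             #: seconds to wait before giving up
#             self.timeout = 30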
class AutoDirective(Directive):
"""
The AutoDirective class is used for all autodoc directives. It dispatches
most of the work to one of the Documenters, which it selects through its
*_registry* dictionary.
The *_special_attrgetters* attribute is used to customize ``getattr()``
calls that the Documenters make; its entries are of the form ``type:
getattr_function``.
Note: when importing an object, all items along the import chain are
accessed using the descendant's *_special_attrgetters*; this dictionary
should therefore also contain the functions needed to access attributes
of the parent objects.
"""
# a registry of objtype -> documenter class
_registry = {}
# a registry of type -> getattr function
_special_attrgetters = {}
# flags that can be given in autodoc_default_flags
_default_flags = set([
'members', 'undoc-members', 'inherited-members', 'show-inheritance',
'private-members', 'special-members',
])
# standard docutils directive settings
has_content = True
required_arguments = 1
optional_arguments = 0
final_argument_whitespace = True
# allow any options to be passed; the options are parsed further
# by the selected Documenter
option_spec = DefDict(identity)
def warn(self, msg):
self.warnings.append(self.reporter.warning(msg, line=self.lineno))
def run(self):
self.filename_set = set() # a set of dependent filenames
self.reporter = self.state.document.reporter
self.env = self.state.document.settings.env
self.warnings = []
self.result = ViewList()
try:
source, lineno = self.reporter.get_source_and_line(self.lineno)
except AttributeError:
source = lineno = None
self.env.app.debug('[autodoc] %s:%s: input:\n%s',
source, lineno, self.block_text)
# find out what documenter to call
objtype = self.name[4:]
doc_class = self._registry[objtype]
# add default flags
for flag in self._default_flags:
if flag not in doc_class.option_spec:
continue
negated = self.options.pop('no-' + flag, 'not given') is None
if flag in self.env.config.autodoc_default_flags and \
not negated:
self.options[flag] = None
# process the options with the selected documenter's option_spec
try:
self.genopt = Options(assemble_option_dict(
self.options.items(), doc_class.option_spec))
except (KeyError, ValueError, TypeError) as err:
# an option is either unknown or has a wrong type
msg = self.reporter.error('An option to %s is either unknown or '
'has an invalid value: %s' % (self.name, err),
line=self.lineno)
return [msg]
# generate the output
documenter = doc_class(self, self.arguments[0])
documenter.generate(more_content=self.content)
if not self.result:
return self.warnings
self.env.app.debug2('[autodoc] output:\n%s', '\n'.join(self.result))
# record all filenames as dependencies -- this will at least
# partially make automatic invalidation possible
for fn in self.filename_set:
self.state.document.settings.record_dependencies.add(fn)
# use a custom reporter that correctly assigns lines to source
# filename/description and lineno
old_reporter = self.state.memo.reporter
self.state.memo.reporter = AutodocReporter(self.result,
self.state.memo.reporter)
if documenter.titles_allowed:
node = nodes.section()
# necessary so that the child nodes get the right source/line set
node.document = self.state.document
nested_parse_with_titles(self.state, self.result, node)
else:
node = nodes.paragraph()
node.document = self.state.document
self.state.nested_parse(self.result, 0, node)
self.state.memo.reporter = old_reporter
return self.warnings + node.children
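# Rough sketch of what run() parses for ``.. autofunction:: mymodule.add``
# (the exact generated reST depends on the object; names are hypothetical):
#
#     .. py:function:: add(a, b)
#        :module: mymodule
#
#        Return the sum of *a* and *b*.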
def add_documenter(cls):
"""Register a new Documenter."""
if not issubclass(cls, Documenter):
raise ExtensionError('autodoc documenter %r must be a subclass '
'of Documenter' % cls)
# actually, it should be possible to override Documenters
# if cls.objtype in AutoDirective._registry:
# raise ExtensionError('autodoc documenter for %r is already '
# 'registered' % cls.objtype)
AutoDirective._registry[cls.objtype] = cls
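# Illustrative usage: a third-party extension would normally register its own
# documenter (and, if needed, a custom attribute getter) from its setup()
# function via the application object (names below are hypothetical):
#
#     def setup(app):
#         app.add_autodocumenter(MyDocumenter)
#         app.add_autodoc_attrgetter(MyProxyType, my_getattr)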
def setup(app):
app.add_autodocumenter(ModuleDocumenter)
app.add_autodocumenter(ClassDocumenter)
app.add_autodocumenter(ExceptionDocumenter)
app.add_autodocumenter(DataDocumenter)
app.add_autodocumenter(FunctionDocumenter)
app.add_autodocumenter(MethodDocumenter)
app.add_autodocumenter(AttributeDocumenter)
app.add_autodocumenter(InstanceAttributeDocumenter)
app.add_config_value('autoclass_content', 'class', True)
app.add_config_value('autodoc_member_order', 'alphabetic', True)
app.add_config_value('autodoc_default_flags', [], True)
app.add_config_value('autodoc_docstring_signature', True, True)
app.add_config_value('autodoc_mock_imports', [], True)
app.add_event('autodoc-process-docstring')
app.add_event('autodoc-process-signature')
app.add_event('autodoc-skip-member')
return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
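# Illustrative conf.py settings for the configuration values registered above
# (values shown are just one possible choice):
#
#     extensions = ['sphinx.ext.autodoc']
#     autoclass_content = 'both'
#     autodoc_member_order = 'bysource'
#     autodoc_default_flags = ['members', 'undoc-members']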
class testcls:
"""test doc string"""
def __getattr__(self, x):
return x
def __setattr__(self, x, y):
"""Attr setter."""
Sphinx-1.3.6/sphinx/ext/graphviz.py 0000664 0001750 0001750 00000025276 12665045070 020460 0 ustar vagrant vagrant 0000000 0000000 # -*- coding: utf-8 -*-
"""
sphinx.ext.graphviz
~~~~~~~~~~~~~~~~~~~
Allow graphviz-formatted graphs to be included in Sphinx-generated
documents inline.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
import codecs
import posixpath
from os import path
from subprocess import Popen, PIPE
from hashlib import sha1
from six import text_type
from docutils import nodes
from docutils.parsers.rst import directives
from docutils.statemachine import ViewList
import sphinx
from sphinx.errors import SphinxError
from sphinx.locale import _
from sphinx.util.osutil import ensuredir, ENOENT, EPIPE, EINVAL
from sphinx.util.compat import Directive
mapname_re = re.compile(r'